linux-kselftest.vger.kernel.org archive mirror
* [RFC v3 00/19] kunit: introduce KUnit, the Linux kernel unit testing framework
From: brendanhiggins @ 2018-11-28 19:36 UTC


This patch set proposes KUnit, a lightweight unit testing and mocking
framework for the Linux kernel.

Unlike Autotest and kselftest, KUnit is a true unit testing framework;
it does not require installing the kernel on a test machine or in a VM
and does not require tests to be written in userspace running on a host
kernel. Additionally, KUnit is fast: from invocation to completion it can
run several dozen tests in under a second. Currently, KUnit's own entire
test suite runs in under a second from the initial invocation (build time
excluded).

KUnit is heavily inspired by JUnit, Python's unittest.mock, and
Googletest/Googlemock for C++. KUnit provides facilities for defining
unit test cases, grouping related test cases into test suites, providing
common infrastructure for running tests, mocking, spying, and much more.
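To give a flavor of the API, here is a minimal sketch of a test (the
add() function under test is a hypothetical example, and KUNIT_EXPECT_EQ()
is introduced later in this series):

	static void add_test_basic(struct kunit *test)
	{
		KUNIT_EXPECT_EQ(test, 2, add(1, 1));
		KUNIT_EXPECT_EQ(test, 0, add(-1, 1));
	}

	static struct kunit_case add_test_cases[] = {
		KUNIT_CASE(add_test_basic),
		{},
	};

	static struct kunit_module add_test_module = {
		.name = "add-test",
		.test_cases = add_test_cases,
	};
	module_test(add_test_module);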

## What's so special about unit testing?

A unit test is supposed to test a single unit of code in isolation,
hence the name. There should be no dependencies outside the control of
the test; this means no external dependencies, which makes tests orders
of magnitude faster. Likewise, since there are no external dependencies,
there are no hoops to jump through to run the tests. Additionally, this
makes unit tests deterministic: a failing unit test always indicates a
problem. Finally, because unit tests necessarily have finer granularity,
they can easily exercise all code paths, solving the classic problem of
error handling code being difficult to exercise.

## Is KUnit trying to replace other testing frameworks for the kernel?

No. Most existing tests for the Linux kernel are end-to-end tests, which
have their place. A well-tested system has lots of unit tests, a
reasonable number of integration tests, and some end-to-end tests. KUnit
is only trying to fill the unit testing niche, which is currently
unaddressed.

## More information on KUnit

There is a bunch of documentation near the end of this patch set that
describes how to use KUnit and best practices for writing unit tests.
For convenience I am hosting the compiled docs here:
https://google.github.io/kunit-docs/third_party/kernel/docs/
I have additionally applied these patches to a branch:
https://kunit.googlesource.com/linux/+/kunit/rfc/4.19/v3
The repo may be cloned with:
git clone https://kunit.googlesource.com/linux
This patchset is on the kunit/rfc/4.19/v3 branch.
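To build and run the tests, something like the following should work (a
rough sketch; since KUnit currently depends on UML per the Kconfig in this
series, the kernel builds as a normal userspace binary):

	make ARCH=um defconfig   # then enable CONFIG_KUNIT and any test configs
	make ARCH=um -j$(nproc)
	./linux                  # test results are printed to the kernel log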

## Changes Since Last Version

 - Changed namespace prefix from `test_*` to `kunit_*` as requested by
   Shuah.
 - Started converting/cleaning up the device tree unittest to use KUnit.
 - Started adding KUnit expectations with custom messages.

-- 
2.20.0.rc0.387.gc7a69e6b6c-goog

* [RFC v3 01/19] kunit: test: add KUnit test runner core
From: brendanhiggins @ 2018-11-28 19:36 UTC


Add core facilities for defining unit tests. This provides a common way
to define test cases: functions that execute the code under test and
determine whether it behaves as expected. It also provides a way to group
related test cases into test suites (here called test_modules).

For now, just define test cases and how to execute them; setting
expectations on the code under test is added later in the series.
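To illustrate the module (fixture) concept added here, a rough sketch
(the example_* names are hypothetical):

	static int fixture_value;

	static int example_init(struct kunit *test)
	{
		/* Runs before every test case; stash fixture data in priv. */
		fixture_value = 42;
		test->priv = &fixture_value;
		return 0; /* non-zero marks the test case as failed */
	}

	static void example_exit(struct kunit *test)
	{
		/* Runs after every test case. */
	}

	static struct kunit_module example_module = {
		.name = "example",
		.init = example_init,
		.exit = example_exit,
		.test_cases = example_test_cases,
	};
	module_test(example_module);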

Signed-off-by: Brendan Higgins <brendanhiggins at google.com>
---
 include/kunit/test.h | 165 ++++++++++++++++++++++++++++++++++++++++++
 kunit/Kconfig        |  17 +++++
 kunit/Makefile       |   1 +
 kunit/test.c         | 168 +++++++++++++++++++++++++++++++++++++++++++
 4 files changed, 351 insertions(+)
 create mode 100644 include/kunit/test.h
 create mode 100644 kunit/Kconfig
 create mode 100644 kunit/Makefile
 create mode 100644 kunit/test.c

diff --git a/include/kunit/test.h b/include/kunit/test.h
new file mode 100644
index 0000000000000..ffe66bb355d63
--- /dev/null
+++ b/include/kunit/test.h
@@ -0,0 +1,165 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Base unit test (KUnit) API.
+ *
+ * Copyright (C) 2018, Google LLC.
+ * Author: Brendan Higgins <brendanhiggins at google.com>
+ */
+
+#ifndef _KUNIT_TEST_H
+#define _KUNIT_TEST_H
+
+#include <linux/types.h>
+#include <linux/slab.h>
+
+struct kunit;
+
+/**
+ * struct kunit_case - represents an individual test case.
+ * @run_case: the function representing the actual test case.
+ * @name: the name of the test case.
+ *
+ * A test case is a function with the signature ``void (*)(struct kunit *)``
+ * that makes expectations (see KUNIT_EXPECT_TRUE()) about the code under test.
+ * Each test case is associated with a &struct kunit_module; it runs after the
+ * module's init function and is followed by the module's exit function.
+ *
+ * A test case should be static and should only be created with the KUNIT_CASE()
+ * macro; additionally, every array of test cases should be terminated with an
+ * empty test case.
+ *
+ * Example:
+ *
+ * .. code-block:: c
+ *
+ *	static void add_test_basic(struct kunit *test)
+ *	{
+ *		KUNIT_EXPECT_EQ(test, 1, add(1, 0));
+ *		KUNIT_EXPECT_EQ(test, 2, add(1, 1));
+ *		KUNIT_EXPECT_EQ(test, 0, add(-1, 1));
+ *		KUNIT_EXPECT_EQ(test, INT_MAX, add(0, INT_MAX));
+ *		KUNIT_EXPECT_EQ(test, -1, add(INT_MAX, INT_MIN));
+ *	}
+ *
+ *	static struct kunit_case example_test_cases[] = {
+ *		KUNIT_CASE(add_test_basic),
+ *		{},
+ *	};
+ *
+ */
+struct kunit_case {
+	void (*run_case)(struct kunit *test);
+	const char name[256];
+
+	/* private: internal use only. */
+	bool success;
+};
+
+/**
+ * KUNIT_CASE - A helper for creating a &struct kunit_case
+ * @test_name: a reference to a test case function.
+ *
+ * Takes a symbol for a function representing a test case and creates a
+ * &struct kunit_case object from it. See the documentation for
+ * &struct kunit_case for an example on how to use it.
+ */
+#define KUNIT_CASE(test_name) { .run_case = test_name, .name = #test_name }
+
+/**
+ * struct kunit_module - describes a related collection of &struct kunit_case s.
+ * @name: the name of the test. Purely informational.
+ * @init: called before every test case.
+ * @exit: called after every test case.
+ * @test_cases: a null terminated array of test cases.
+ *
+ * A kunit_module is a collection of related &struct kunit_case s, such that
+ * @init is called before every test case and @exit is called after every test
+ * case, similar to the notion of a *test fixture* or a *test class* in other
+ * unit testing frameworks like JUnit or Googletest.
+ *
+ * Every &struct kunit_case must be associated with a kunit_module for KUnit to
+ * run it.
+ */
+struct kunit_module {
+	const char name[256];
+	int (*init)(struct kunit *test);
+	void (*exit)(struct kunit *test);
+	struct kunit_case *test_cases;
+};
+
+/**
+ * struct kunit - represents a running instance of a test.
+ * @priv: for user to store arbitrary data. Commonly used to pass data created
+ * in the init function (see &struct kunit_module).
+ *
+ * Used to store information about the current context under which the test is
+ * running. Most of this data is private and should only be accessed indirectly
+ * via public functions; the one exception is @priv which can be used by the
+ * test writer to store arbitrary data.
+ */
+struct kunit {
+	void *priv;
+
+	/* private: internal use only. */
+	const char *name; /* Read only after initialization! */
+	spinlock_t lock; /* Guards all mutable test state. */
+	bool success; /* Protected by lock. */
+	void (*vprintk)(const struct kunit *test,
+			const char *level,
+			struct va_format *vaf);
+};
+
+int kunit_init_test(struct kunit *test, const char *name);
+
+int kunit_run_tests(struct kunit_module *module);
+
+/**
+ * module_test() - used to register a &struct kunit_module with KUnit.
+ * @module: a statically allocated &struct kunit_module.
+ *
+ * Registers @module with the test framework. See &struct kunit_module for more
+ * information.
+ */
+#define module_test(module) \
+		static int module_kunit_init##module(void) \
+		{ \
+			return kunit_run_tests(&module); \
+		} \
+		late_initcall(module_kunit_init##module)
+
+void __printf(3, 4) kunit_printk(const char *level,
+				 const struct kunit *test,
+				 const char *fmt, ...);
+
+/**
+ * kunit_info() - Prints an INFO level message associated with the current test.
+ * @test: The test context object.
+ * @fmt: A printk() style format string.
+ *
+ * Prints an info level message associated with the test module being run. Takes
+ * a variable number of format parameters just like printk().
+ */
+#define kunit_info(test, fmt, ...) \
+		kunit_printk(KERN_INFO, test, fmt, ##__VA_ARGS__)
+
+/**
+ * kunit_warn() - Prints a WARN level message associated with the current test.
+ * @test: The test context object.
+ * @fmt: A printk() style format string.
+ *
+ * See kunit_info().
+ */
+#define kunit_warn(test, fmt, ...) \
+		kunit_printk(KERN_WARNING, test, fmt, ##__VA_ARGS__)
+
+/**
+ * kunit_err() - Prints an ERROR level message associated with the current test.
+ * @test: The test context object.
+ * @fmt: A printk() style format string.
+ *
+ * See kunit_info().
+ */
+#define kunit_err(test, fmt, ...) \
+		kunit_printk(KERN_ERR, test, fmt, ##__VA_ARGS__)
+
+#endif /* _KUNIT_TEST_H */
diff --git a/kunit/Kconfig b/kunit/Kconfig
new file mode 100644
index 0000000000000..49b44c4f6630a
--- /dev/null
+++ b/kunit/Kconfig
@@ -0,0 +1,17 @@
+#
+# KUnit base configuration
+#
+
+menu "KUnit support"
+
+config KUNIT
+	bool "Enable support for unit tests (KUnit)"
+	depends on UML
+	help
+	  Enables support for kernel unit tests (KUnit), a lightweight unit
+	  testing and mocking framework for the Linux kernel. These tests are
+	  able to be run locally on a developer's workstation without a VM or
+	  special hardware. For more information, please see
+	  Documentation/kunit/
+
+endmenu
diff --git a/kunit/Makefile b/kunit/Makefile
new file mode 100644
index 0000000000000..5efdc4dea2c08
--- /dev/null
+++ b/kunit/Makefile
@@ -0,0 +1 @@
+obj-$(CONFIG_KUNIT) +=			test.o
diff --git a/kunit/test.c b/kunit/test.c
new file mode 100644
index 0000000000000..26d3d6d260e6c
--- /dev/null
+++ b/kunit/test.c
@@ -0,0 +1,168 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Base unit test (KUnit) API.
+ *
+ * Copyright (C) 2018, Google LLC.
+ * Author: Brendan Higgins <brendanhiggins at google.com>
+ */
+
+#include <linux/sched.h>
+#include <linux/sched/debug.h>
+#include <os.h>
+#include <kunit/test.h>
+
+static bool kunit_get_success(struct kunit *test)
+{
+	unsigned long flags;
+	bool success;
+
+	spin_lock_irqsave(&test->lock, flags);
+	success = test->success;
+	spin_unlock_irqrestore(&test->lock, flags);
+
+	return success;
+}
+
+static void kunit_set_success(struct kunit *test, bool success)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&test->lock, flags);
+	test->success = success;
+	spin_unlock_irqrestore(&test->lock, flags);
+}
+
+static int kunit_vprintk_emit(const struct kunit *test,
+			      int level,
+			      const char *fmt,
+			      va_list args)
+{
+	return vprintk_emit(0, level, NULL, 0, fmt, args);
+}
+
+static int kunit_printk_emit(const struct kunit *test,
+			     int level,
+			     const char *fmt, ...)
+{
+	va_list args;
+	int ret;
+
+	va_start(args, fmt);
+	ret = kunit_vprintk_emit(test, level, fmt, args);
+	va_end(args);
+
+	return ret;
+}
+
+static void kunit_vprintk(const struct kunit *test,
+			  const char *level,
+			  struct va_format *vaf)
+{
+	kunit_printk_emit(test,
+			  level[1] - '0',
+			  "kunit %s: %pV", test->name, vaf);
+}
+
+int kunit_init_test(struct kunit *test, const char *name)
+{
+	spin_lock_init(&test->lock);
+	test->name = name;
+	test->vprintk = kunit_vprintk;
+
+	return 0;
+}
+
+/*
+ * Initializes and runs test case. Does not clean up or do post validations.
+ */
+static void kunit_run_case_internal(struct kunit *test,
+				    struct kunit_module *module,
+				    struct kunit_case *test_case)
+{
+	int ret;
+
+	if (module->init) {
+		ret = module->init(test);
+		if (ret) {
+			kunit_err(test, "failed to initialize: %d", ret);
+			kunit_set_success(test, false);
+			return;
+		}
+	}
+
+	test_case->run_case(test);
+}
+
+/*
+ * Performs post validations and cleanup after a test case was run.
+ * XXX: Should ONLY BE CALLED AFTER kunit_run_case_internal!
+ */
+static void kunit_run_case_cleanup(struct kunit *test,
+				   struct kunit_module *module,
+				   struct kunit_case *test_case)
+{
+	if (module->exit)
+		module->exit(test);
+}
+
+/*
+ * Performs all logic to run a test case.
+ */
+static bool kunit_run_case(struct kunit *test,
+			   struct kunit_module *module,
+			   struct kunit_case *test_case)
+{
+	kunit_set_success(test, true);
+
+	kunit_run_case_internal(test, module, test_case);
+	kunit_run_case_cleanup(test, module, test_case);
+
+	return kunit_get_success(test);
+}
+
+int kunit_run_tests(struct kunit_module *module)
+{
+	bool all_passed = true, success;
+	struct kunit_case *test_case;
+	struct kunit test;
+	int ret;
+
+	ret = kunit_init_test(&test, module->name);
+	if (ret)
+		return ret;
+
+	for (test_case = module->test_cases; test_case->run_case; test_case++) {
+		success = kunit_run_case(&test, module, test_case);
+		if (!success)
+			all_passed = false;
+
+		kunit_info(&test,
+			  "%s %s",
+			  test_case->name,
+			  success ? "passed" : "failed");
+	}
+
+	if (all_passed)
+		kunit_info(&test, "all tests passed");
+	else
+		kunit_info(&test, "one or more tests failed");
+
+	return 0;
+}
+
+void kunit_printk(const char *level,
+		  const struct kunit *test,
+		  const char *fmt, ...)
+{
+	struct va_format vaf;
+	va_list args;
+
+	va_start(args, fmt);
+
+	vaf.fmt = fmt;
+	vaf.va = &args;
+
+	test->vprintk(test, level, &vaf);
+
+	va_end(args);
+}
-- 
2.20.0.rc0.387.gc7a69e6b6c-goog
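For reference, the output of kunit_run_tests() for a module named
"example" would look roughly like the following in the kernel log (format
derived from kunit_vprintk() and kunit_run_tests() above; the case names
are hypothetical):

	kunit example: example_test_foo passed
	kunit example: example_test_bar failed
	kunit example: one or more tests failed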

* [RFC v3 02/19] kunit: test: add test resource management API
From: brendanhiggins @ 2018-11-28 19:36 UTC


Create a common API for test managed resources like memory and test
objects. A test will often want to set up infrastructure to be used in
test cases; this could be anything from simply allocating some memory to
setting up an entire driver stack. This patch defines facilities for
creating "test resources", which are managed by the test infrastructure
and automatically cleaned up at the conclusion of the test.
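For example, a test case can allocate memory that is freed automatically
when the case completes, with no explicit cleanup (a minimal sketch;
KUNIT_EXPECT_TRUE() is introduced later in the series):

	static void example_test(struct kunit *test)
	{
		char *buf = kunit_kmalloc(test, 16, GFP_KERNEL);

		KUNIT_EXPECT_TRUE(test, buf != NULL);
		/* No kfree() needed: kunit_cleanup() frees buf at case end. */
	}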

Signed-off-by: Brendan Higgins <brendanhiggins at google.com>
---
 include/kunit/test.h | 109 +++++++++++++++++++++++++++++++++++++++++++
 kunit/test.c         |  95 +++++++++++++++++++++++++++++++++++++
 2 files changed, 204 insertions(+)

diff --git a/include/kunit/test.h b/include/kunit/test.h
index ffe66bb355d63..583840e24ffda 100644
--- a/include/kunit/test.h
+++ b/include/kunit/test.h
@@ -12,6 +12,69 @@
 #include <linux/types.h>
 #include <linux/slab.h>
 
+struct kunit_resource;
+
+typedef int (*kunit_resource_init_t)(struct kunit_resource *, void *);
+typedef void (*kunit_resource_free_t)(struct kunit_resource *);
+
+/**
+ * struct kunit_resource - represents a *test managed resource*
+ * @allocation: for the user to store arbitrary data.
+ * @free: a user supplied function to free the resource. Populated by
+ * kunit_alloc_resource().
+ *
+ * Represents a *test managed resource*, a resource which will automatically be
+ * cleaned up at the end of a test case.
+ *
+ * Example:
+ *
+ * .. code-block:: c
+ *
+ *	struct kunit_kmalloc_params {
+ *		size_t size;
+ *		gfp_t gfp;
+ *	};
+ *
+ *	static int kunit_kmalloc_init(struct kunit_resource *res, void *context)
+ *	{
+ *		struct kunit_kmalloc_params *params = context;
+ *		res->allocation = kmalloc(params->size, params->gfp);
+ *
+ *		if (!res->allocation)
+ *			return -ENOMEM;
+ *
+ *		return 0;
+ *	}
+ *
+ *	static void kunit_kmalloc_free(struct kunit_resource *res)
+ *	{
+ *		kfree(res->allocation);
+ *	}
+ *
+ *	void *kunit_kmalloc(struct kunit *test, size_t size, gfp_t gfp)
+ *	{
+ *		struct kunit_kmalloc_params params;
+ *		struct kunit_resource *res;
+ *
+ *		params.size = size;
+ *		params.gfp = gfp;
+ *
+ *		res = kunit_alloc_resource(test, kunit_kmalloc_init,
+ *			kunit_kmalloc_free, &params);
+ *		if (res)
+ *			return res->allocation;
+ *		else
+ *			return NULL;
+ *	}
+ */
+struct kunit_resource {
+	void *allocation;
+	kunit_resource_free_t free;
+
+	/* private: internal use only. */
+	struct list_head node;
+};
+
 struct kunit;
 
 /**
@@ -104,6 +167,7 @@ struct kunit {
 	const char *name; /* Read only after initialization! */
 	spinlock_t lock; /* Guards all mutable test state. */
 	bool success; /* Protected by lock. */
+	struct list_head resources; /* Protected by lock. */
 	void (*vprintk)(const struct kunit *test,
 			const char *level,
 			struct va_format *vaf);
@@ -127,6 +191,51 @@ int kunit_run_tests(struct kunit_module *module);
 		} \
 		late_initcall(module_kunit_init##module)
 
+/**
+ * kunit_alloc_resource() - Allocates a *test managed resource*.
+ * @test: The test context object.
+ * @init: a user supplied function to initialize the resource.
+ * @free: a user supplied function to free the resource.
+ * @context: for the user to pass in arbitrary data.
+ *
+ * Allocates a *test managed resource*, a resource which will automatically be
+ * cleaned up at the end of a test case. See &struct kunit_resource for an
+ * example.
+ */
+struct kunit_resource *kunit_alloc_resource(struct kunit *test,
+					    kunit_resource_init_t init,
+					    kunit_resource_free_t free,
+					    void *context);
+
+void kunit_free_resource(struct kunit *test, struct kunit_resource *res);
+
+/**
+ * kunit_kmalloc() - Like kmalloc() except the allocation is *test managed*.
+ * @test: The test context object.
+ * @size: The size in bytes of the desired memory.
+ * @gfp: flags passed to underlying kmalloc().
+ *
+ * Just like `kmalloc(...)`, except the allocation is managed by the test case
+ * and is automatically cleaned up after the test case concludes. See &struct
+ * kunit_resource for more information.
+ */
+void *kunit_kmalloc(struct kunit *test, size_t size, gfp_t gfp);
+
+/**
+ * kunit_kzalloc() - Just like kunit_kmalloc(), but zeroes the allocation.
+ * @test: The test context object.
+ * @size: The size in bytes of the desired memory.
+ * @gfp: flags passed to underlying kmalloc().
+ *
+ * See kzalloc() and kunit_kmalloc() for more information.
+ */
+static inline void *kunit_kzalloc(struct kunit *test, size_t size, gfp_t gfp)
+{
+	return kunit_kmalloc(test, size, gfp | __GFP_ZERO);
+}
+
+void kunit_cleanup(struct kunit *test);
+
 void __printf(3, 4) kunit_printk(const char *level,
 				 const struct kunit *test,
 				 const char *fmt, ...);
diff --git a/kunit/test.c b/kunit/test.c
index 26d3d6d260e6c..fb1a786e4c94f 100644
--- a/kunit/test.c
+++ b/kunit/test.c
@@ -66,6 +66,7 @@ static void kunit_vprintk(const struct kunit *test,
 int kunit_init_test(struct kunit *test, const char *name)
 {
 	spin_lock_init(&test->lock);
+	INIT_LIST_HEAD(&test->resources);
 	test->name = name;
 	test->vprintk = kunit_vprintk;
 
@@ -93,6 +94,11 @@ static void kunit_run_case_internal(struct kunit *test,
 	test_case->run_case(test);
 }
 
+static void kunit_case_internal_cleanup(struct kunit *test)
+{
+	kunit_cleanup(test);
+}
+
 /*
  * Performs post validations and cleanup after a test case was run.
  * XXX: Should ONLY BE CALLED AFTER kunit_run_case_internal!
@@ -103,6 +109,8 @@ static void kunit_run_case_cleanup(struct kunit *test,
 {
 	if (module->exit)
 		module->exit(test);
+
+	kunit_case_internal_cleanup(test);
 }
 
 /*
@@ -150,6 +158,93 @@ int kunit_run_tests(struct kunit_module *module)
 	return 0;
 }
 
+struct kunit_resource *kunit_alloc_resource(struct kunit *test,
+					    kunit_resource_init_t init,
+					    kunit_resource_free_t free,
+					    void *context)
+{
+	struct kunit_resource *res;
+	unsigned long flags;
+
+	res = kzalloc(sizeof(*res), GFP_KERNEL);
+	if (!res)
+		return NULL;
+
+	if (init(res, context)) {
+		kfree(res); /* don't leak res if init fails */
+		return NULL;
+	}
+
+	res->free = free;
+	spin_lock_irqsave(&test->lock, flags);
+	list_add_tail(&res->node, &test->resources);
+	spin_unlock_irqrestore(&test->lock, flags);
+
+	return res;
+}
+
+void kunit_free_resource(struct kunit *test, struct kunit_resource *res)
+{
+	res->free(res);
+	list_del(&res->node);
+	kfree(res);
+}
+
+struct kunit_kmalloc_params {
+	size_t size;
+	gfp_t gfp;
+};
+
+static int kunit_kmalloc_init(struct kunit_resource *res, void *context)
+{
+	struct kunit_kmalloc_params *params = context;
+
+	res->allocation = kmalloc(params->size, params->gfp);
+	if (!res->allocation)
+		return -ENOMEM;
+
+	return 0;
+}
+
+static void kunit_kmalloc_free(struct kunit_resource *res)
+{
+	kfree(res->allocation);
+}
+
+void *kunit_kmalloc(struct kunit *test, size_t size, gfp_t gfp)
+{
+	struct kunit_kmalloc_params params;
+	struct kunit_resource *res;
+
+	params.size = size;
+	params.gfp = gfp;
+
+	res = kunit_alloc_resource(test,
+				   kunit_kmalloc_init,
+				   kunit_kmalloc_free,
+				   &params);
+
+	if (res)
+		return res->allocation;
+	else
+		return NULL;
+}
+
+void kunit_cleanup(struct kunit *test)
+{
+	struct kunit_resource *resource, *resource_safe;
+	unsigned long flags;
+
+	spin_lock_irqsave(&test->lock, flags);
+	list_for_each_entry_safe(resource,
+				 resource_safe,
+				 &test->resources,
+				 node) {
+		kunit_free_resource(test, resource);
+	}
+	spin_unlock_irqrestore(&test->lock, flags);
+}
+
 void kunit_printk(const char *level,
 		  const struct kunit *test,
 		  const char *fmt, ...)
-- 
2.20.0.rc0.387.gc7a69e6b6c-goog

* [RFC v3 03/19] kunit: test: add string_stream a std::stream like string builder
From: brendanhiggins @ 2018-11-28 19:36 UTC


A number of test features need to do fairly complicated string printing,
where it may not be possible to rely on a single preallocated string with
format parameters.

So provide a library for constructing the string as you go, similar to
C++'s std::string.
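A rough usage sketch based on the interface below (note that the caller
is responsible for kfree()ing the string returned by get_string()):

	struct string_stream *stream = new_string_stream();
	char *str;

	stream->add(stream, "Expected 2, but got %d", 3);
	stream->add(stream, "; foo was %s", "bar");
	str = stream->get_string(stream); /* concatenates all fragments */
	/* ... use str ... */
	kfree(str);
	string_stream_put(stream); /* drop the ref; frees the stream */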

Signed-off-by: Brendan Higgins <brendanhiggins at google.com>
---
 include/kunit/string-stream.h |  44 ++++++++++
 kunit/Makefile                |   3 +-
 kunit/string-stream.c         | 149 ++++++++++++++++++++++++++++++++++
 3 files changed, 195 insertions(+), 1 deletion(-)
 create mode 100644 include/kunit/string-stream.h
 create mode 100644 kunit/string-stream.c

diff --git a/include/kunit/string-stream.h b/include/kunit/string-stream.h
new file mode 100644
index 0000000000000..933ed5740cf07
--- /dev/null
+++ b/include/kunit/string-stream.h
@@ -0,0 +1,44 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * C++ stream style string builder used in KUnit for building messages.
+ *
+ * Copyright (C) 2018, Google LLC.
+ * Author: Brendan Higgins <brendanhiggins at google.com>
+ */
+
+#ifndef _KUNIT_STRING_STREAM_H
+#define _KUNIT_STRING_STREAM_H
+
+#include <linux/types.h>
+#include <linux/spinlock.h>
+#include <linux/kref.h>
+#include <stdarg.h>
+
+struct string_stream_fragment {
+	struct list_head node;
+	char *fragment;
+};
+
+struct string_stream {
+	size_t length;
+	struct list_head fragments;
+
+	/* length and fragments are protected by this lock */
+	spinlock_t lock;
+	struct kref refcount;
+	int (*add)(struct string_stream *this, const char *fmt, ...);
+	int (*vadd)(struct string_stream *this, const char *fmt, va_list args);
+	char *(*get_string)(struct string_stream *this);
+	void (*clear)(struct string_stream *this);
+	bool (*is_empty)(struct string_stream *this);
+};
+
+struct string_stream *new_string_stream(void);
+
+void destroy_string_stream(struct string_stream *stream);
+
+void string_stream_get(struct string_stream *stream);
+
+int string_stream_put(struct string_stream *stream);
+
+#endif /* _KUNIT_STRING_STREAM_H */
diff --git a/kunit/Makefile b/kunit/Makefile
index 5efdc4dea2c08..275b565a0e81f 100644
--- a/kunit/Makefile
+++ b/kunit/Makefile
@@ -1 +1,2 @@
-obj-$(CONFIG_KUNIT) +=			test.o
+obj-$(CONFIG_KUNIT) +=			test.o \
+					string-stream.o
diff --git a/kunit/string-stream.c b/kunit/string-stream.c
new file mode 100644
index 0000000000000..1e7efa630cc35
--- /dev/null
+++ b/kunit/string-stream.c
@@ -0,0 +1,149 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * C++ stream style string builder used in KUnit for building messages.
+ *
+ * Copyright (C) 2018, Google LLC.
+ * Author: Brendan Higgins <brendanhiggins at google.com>
+ */
+
+#include <linux/list.h>
+#include <linux/slab.h>
+#include <kunit/string-stream.h>
+
+static int string_stream_vadd(struct string_stream *this,
+			       const char *fmt,
+			       va_list args)
+{
+	struct string_stream_fragment *fragment;
+	int len;
+	va_list args_for_counting;
+	unsigned long flags;
+
+	/* Make a copy because `vsnprintf` could change it */
+	va_copy(args_for_counting, args);
+
+	/* Need space for null byte. */
+	len = vsnprintf(NULL, 0, fmt, args_for_counting) + 1;
+
+	va_end(args_for_counting);
+
+	fragment = kmalloc(sizeof(*fragment), GFP_KERNEL);
+	if (!fragment)
+		return -ENOMEM;
+
+	fragment->fragment = kmalloc(len, GFP_KERNEL);
+	if (!fragment->fragment) {
+		kfree(fragment);
+		return -ENOMEM;
+	}
+
+	len = vsnprintf(fragment->fragment, len, fmt, args);
+	spin_lock_irqsave(&this->lock, flags);
+	this->length += len;
+	list_add_tail(&fragment->node, &this->fragments);
+	spin_unlock_irqrestore(&this->lock, flags);
+	return 0;
+}
+
+static int string_stream_add(struct string_stream *this, const char *fmt, ...)
+{
+	va_list args;
+	int result;
+
+	va_start(args, fmt);
+	result = string_stream_vadd(this, fmt, args);
+	va_end(args);
+	return result;
+}
+
+static void string_stream_clear(struct string_stream *this)
+{
+	struct string_stream_fragment *fragment, *fragment_safe;
+	unsigned long flags;
+
+	spin_lock_irqsave(&this->lock, flags);
+	list_for_each_entry_safe(fragment,
+				 fragment_safe,
+				 &this->fragments,
+				 node) {
+		list_del(&fragment->node);
+		kfree(fragment->fragment);
+		kfree(fragment);
+	}
+	this->length = 0;
+	spin_unlock_irqrestore(&this->lock, flags);
+}
+
+static char *string_stream_get_string(struct string_stream *this)
+{
+	struct string_stream_fragment *fragment;
+	size_t buf_len = this->length + 1; /* +1 for null byte. */
+	char *buf;
+	unsigned long flags;
+
+	buf = kzalloc(buf_len, GFP_KERNEL);
+	if (!buf)
+		return NULL;
+
+	spin_lock_irqsave(&this->lock, flags);
+	list_for_each_entry(fragment, &this->fragments, node)
+		strlcat(buf, fragment->fragment, buf_len);
+	spin_unlock_irqrestore(&this->lock, flags);
+
+	return buf;
+}
+
+static bool string_stream_is_empty(struct string_stream *this)
+{
+	bool is_empty;
+	unsigned long flags;
+
+	spin_lock_irqsave(&this->lock, flags);
+	is_empty = list_empty(&this->fragments);
+	spin_unlock_irqrestore(&this->lock, flags);
+
+	return is_empty;
+}
+
+void destroy_string_stream(struct string_stream *stream)
+{
+	stream->clear(stream);
+	kfree(stream);
+}
+
+static void string_stream_destroy(struct kref *kref)
+{
+	struct string_stream *stream = container_of(kref,
+						    struct string_stream,
+						    refcount);
+	destroy_string_stream(stream);
+}
+
+struct string_stream *new_string_stream(void)
+{
+	struct string_stream *stream = kzalloc(sizeof(*stream), GFP_KERNEL);
+
+	if (!stream)
+		return NULL;
+
+	INIT_LIST_HEAD(&stream->fragments);
+	spin_lock_init(&stream->lock);
+	kref_init(&stream->refcount);
+	stream->add = string_stream_add;
+	stream->vadd = string_stream_vadd;
+	stream->get_string = string_stream_get_string;
+	stream->clear = string_stream_clear;
+	stream->is_empty = string_stream_is_empty;
+	return stream;
+}
+
+void string_stream_get(struct string_stream *stream)
+{
+	kref_get(&stream->refcount);
+}
+
+int string_stream_put(struct string_stream *stream)
+{
+	return kref_put(&stream->refcount, &string_stream_destroy);
+}
+
-- 
2.20.0.rc0.387.gc7a69e6b6c-goog

^ permalink raw reply related	[flat|nested] 232+ messages in thread

* [RFC v3 03/19] kunit: test: add string_stream a std::stream like string builder
  2018-11-28 19:36 ` [RFC v3 03/19] kunit: test: add string_stream a std::stream like string builder brendanhiggins
@ 2018-11-28 19:36   ` Brendan Higgins
  2018-11-30  3:29   ` mcgrof
  1 sibling, 0 replies; 232+ messages in thread
From: Brendan Higgins @ 2018-11-28 19:36 UTC (permalink / raw)


A number of test features need to do pretty complicated string printing
where it may not be possible to rely on a single preallocated string
with parameters.

So provide a library for constructing the string as you go similar to
C++'s std::string.

Signed-off-by: Brendan Higgins <brendanhiggins at google.com>
---
 include/kunit/string-stream.h |  44 ++++++++++
 kunit/Makefile                |   3 +-
 kunit/string-stream.c         | 149 ++++++++++++++++++++++++++++++++++
 3 files changed, 195 insertions(+), 1 deletion(-)
 create mode 100644 include/kunit/string-stream.h
 create mode 100644 kunit/string-stream.c

diff --git a/include/kunit/string-stream.h b/include/kunit/string-stream.h
new file mode 100644
index 0000000000000..933ed5740cf07
--- /dev/null
+++ b/include/kunit/string-stream.h
@@ -0,0 +1,44 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * C++ stream style string builder used in KUnit for building messages.
+ *
+ * Copyright (C) 2018, Google LLC.
+ * Author: Brendan Higgins <brendanhiggins at google.com>
+ */
+
+#ifndef _KUNIT_STRING_STREAM_H
+#define _KUNIT_STRING_STREAM_H
+
+#include <linux/types.h>
+#include <linux/spinlock.h>
+#include <linux/kref.h>
+#include <stdarg.h>
+
+struct string_stream_fragment {
+	struct list_head node;
+	char *fragment;
+};
+
+struct string_stream {
+	size_t length;
+	struct list_head fragments;
+
+	/* length and fragments are protected by this lock */
+	spinlock_t lock;
+	struct kref refcount;
+	int (*add)(struct string_stream *this, const char *fmt, ...);
+	int (*vadd)(struct string_stream *this, const char *fmt, va_list args);
+	char *(*get_string)(struct string_stream *this);
+	void (*clear)(struct string_stream *this);
+	bool (*is_empty)(struct string_stream *this);
+};
+
+struct string_stream *new_string_stream(void);
+
+void destroy_string_stream(struct string_stream *stream);
+
+void string_stream_get(struct string_stream *stream);
+
+int string_stream_put(struct string_stream *stream);
+
+#endif /* _KUNIT_STRING_STREAM_H */
diff --git a/kunit/Makefile b/kunit/Makefile
index 5efdc4dea2c08..275b565a0e81f 100644
--- a/kunit/Makefile
+++ b/kunit/Makefile
@@ -1 +1,2 @@
-obj-$(CONFIG_KUNIT) +=			test.o
+obj-$(CONFIG_KUNIT) +=			test.o \
+					string-stream.o
diff --git a/kunit/string-stream.c b/kunit/string-stream.c
new file mode 100644
index 0000000000000..1e7efa630cc35
--- /dev/null
+++ b/kunit/string-stream.c
@@ -0,0 +1,149 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * C++ stream style string builder used in KUnit for building messages.
+ *
+ * Copyright (C) 2018, Google LLC.
+ * Author: Brendan Higgins <brendanhiggins at google.com>
+ */
+
+#include <linux/list.h>
+#include <linux/slab.h>
+#include <kunit/string-stream.h>
+
+static int string_stream_vadd(struct string_stream *this,
+			       const char *fmt,
+			       va_list args)
+{
+	struct string_stream_fragment *fragment;
+	int len;
+	va_list args_for_counting;
+	unsigned long flags;
+
+	/* Make a copy because `vsnprintf` could change it */
+	va_copy(args_for_counting, args);
+
+	/* Need space for null byte. */
+	len = vsnprintf(NULL, 0, fmt, args_for_counting) + 1;
+
+	va_end(args_for_counting);
+
+	fragment = kmalloc(sizeof(*fragment), GFP_KERNEL);
+	if (!fragment)
+		return -ENOMEM;
+
+	fragment->fragment = kmalloc(len, GFP_KERNEL);
+	if (!fragment->fragment) {
+		kfree(fragment);
+		return -ENOMEM;
+	}
+
+	len = vsnprintf(fragment->fragment, len, fmt, args);
+	spin_lock_irqsave(&this->lock, flags);
+	this->length += len;
+	list_add_tail(&fragment->node, &this->fragments);
+	spin_unlock_irqrestore(&this->lock, flags);
+	return 0;
+}
+
+static int string_stream_add(struct string_stream *this, const char *fmt, ...)
+{
+	va_list args;
+	int result;
+
+	va_start(args, fmt);
+	result = string_stream_vadd(this, fmt, args);
+	va_end(args);
+	return result;
+}
+
+static void string_stream_clear(struct string_stream *this)
+{
+	struct string_stream_fragment *fragment, *fragment_safe;
+	unsigned long flags;
+
+	spin_lock_irqsave(&this->lock, flags);
+	list_for_each_entry_safe(fragment,
+				 fragment_safe,
+				 &this->fragments,
+				 node) {
+		list_del(&fragment->node);
+		kfree(fragment->fragment);
+		kfree(fragment);
+	}
+	this->length = 0;
+	spin_unlock_irqrestore(&this->lock, flags);
+}
+
+static char *string_stream_get_string(struct string_stream *this)
+{
+	struct string_stream_fragment *fragment;
+	size_t buf_len = this->length + 1; /* +1 for null byte. */
+	char *buf;
+	unsigned long flags;
+
+	buf = kzalloc(buf_len, GFP_KERNEL);
+	if (!buf)
+		return NULL;
+
+	spin_lock_irqsave(&this->lock, flags);
+	list_for_each_entry(fragment, &this->fragments, node)
+		strlcat(buf, fragment->fragment, buf_len);
+	spin_unlock_irqrestore(&this->lock, flags);
+
+	return buf;
+}
+
+static bool string_stream_is_empty(struct string_stream *this)
+{
+	bool is_empty;
+	unsigned long flags;
+
+	spin_lock_irqsave(&this->lock, flags);
+	is_empty = list_empty(&this->fragments);
+	spin_unlock_irqrestore(&this->lock, flags);
+
+	return is_empty;
+}
+
+void destroy_string_stream(struct string_stream *stream)
+{
+	stream->clear(stream);
+	kfree(stream);
+}
+
+static void string_stream_destroy(struct kref *kref)
+{
+	struct string_stream *stream = container_of(kref,
+						    struct string_stream,
+						    refcount);
+	destroy_string_stream(stream);
+}
+
+struct string_stream *new_string_stream(void)
+{
+	struct string_stream *stream = kzalloc(sizeof(*stream), GFP_KERNEL);
+
+	if (!stream)
+		return NULL;
+
+	INIT_LIST_HEAD(&stream->fragments);
+	spin_lock_init(&stream->lock);
+	kref_init(&stream->refcount);
+	stream->add = string_stream_add;
+	stream->vadd = string_stream_vadd;
+	stream->get_string = string_stream_get_string;
+	stream->clear = string_stream_clear;
+	stream->is_empty = string_stream_is_empty;
+	return stream;
+}
+
+void string_stream_get(struct string_stream *stream)
+{
+	kref_get(&stream->refcount);
+}
+
+int string_stream_put(struct string_stream *stream)
+{
+	return kref_put(&stream->refcount, &string_stream_destroy);
+}
+
-- 
2.20.0.rc0.387.gc7a69e6b6c-goog

^ permalink raw reply related	[flat|nested] 232+ messages in thread
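
To make the new API concrete, here is a minimal usage sketch. It is
illustrative only and not part of the patch: the demo_string_stream()
function and its pr_info() output are hypothetical, but every string_stream
call matches the declarations in include/kunit/string-stream.h above.

#include <linux/printk.h>
#include <linux/slab.h>
#include <kunit/string-stream.h>

/* Build a message up in fragments, then emit it once. */
static void demo_string_stream(void)
{
        struct string_stream *stream = new_string_stream();
        char *msg;

        if (!stream)
                return;

        stream->add(stream, "expected %d, ", 1);
        stream->add(stream, "but got %d\n", 2);

        /* get_string() returns a freshly allocated copy; the caller frees it. */
        msg = stream->get_string(stream);
        if (msg) {
                pr_info("%s", msg);
                kfree(msg);
        }

        /* Drop the kref_init() reference; this clears and frees the stream. */
        string_stream_put(stream);
}

Note that get_string() only concatenates the fragments into a new buffer;
the stream itself is left intact and can keep growing until clear() is
called or the last reference is put.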

* [RFC v3 04/19] kunit: test: add test_stream a std::stream like logger
  2018-11-28 19:36 [RFC v3 00/19] kunit: introduce KUnit, the Linux kernel unit testing framework brendanhiggins
                   ` (3 preceding siblings ...)
  2018-11-28 19:36 ` [RFC v3 03/19] kunit: test: add string_stream a std::stream like string builder brendanhiggins
@ 2018-11-28 19:36 ` brendanhiggins
  2018-11-28 19:36   ` Brendan Higgins
  2018-11-28 19:36 ` [RFC v3 05/19] kunit: test: add the concept of expectations brendanhiggins
                   ` (16 subsequent siblings)
  21 siblings, 1 reply; 232+ messages in thread
From: brendanhiggins @ 2018-11-28 19:36 UTC (permalink / raw)


A lot of the expectation and assertion infrastructure prints out fairly
complicated test failure messages, so add a C++ stream style log library
for logging test results.

Signed-off-by: Brendan Higgins <brendanhiggins at google.com>
---
 include/kunit/kunit-stream.h |  50 ++++++++++++
 include/kunit/test.h         |   2 +
 kunit/Makefile               |   3 +-
 kunit/kunit-stream.c         | 153 +++++++++++++++++++++++++++++++++++
 kunit/test.c                 |   8 ++
 5 files changed, 215 insertions(+), 1 deletion(-)
 create mode 100644 include/kunit/kunit-stream.h
 create mode 100644 kunit/kunit-stream.c

diff --git a/include/kunit/kunit-stream.h b/include/kunit/kunit-stream.h
new file mode 100644
index 0000000000000..3b3119450be3f
--- /dev/null
+++ b/include/kunit/kunit-stream.h
@@ -0,0 +1,50 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * C++ stream style string formatter and printer used in KUnit for outputting
+ * KUnit messages.
+ *
+ * Copyright (C) 2018, Google LLC.
+ * Author: Brendan Higgins <brendanhiggins at google.com>
+ */
+
+#ifndef _KUNIT_KUNIT_STREAM_H
+#define _KUNIT_KUNIT_STREAM_H
+
+#include <linux/types.h>
+#include <kunit/string-stream.h>
+
+struct kunit;
+
+/**
+ * struct kunit_stream - a std::stream style string builder.
+ * @set_level: sets the level that this string should be printed at.
+ * @add: adds the formatted input to the internal buffer.
+ * @append: adds the contents of other to this.
+ * @commit: prints out the internal buffer to the user.
+ * @clear: clears the internal buffer.
+ *
+ * A std::stream style string builder. Allows messages to be built up and
+ * printed all at once.
+ */
+struct kunit_stream {
+	void (*set_level)(struct kunit_stream *this, const char *level);
+	void (*add)(struct kunit_stream *this, const char *fmt, ...);
+	void (*append)(struct kunit_stream *this, struct kunit_stream *other);
+	void (*commit)(struct kunit_stream *this);
+	void (*clear)(struct kunit_stream *this);
+	/* private: internal use only. */
+	struct kunit *test;
+	spinlock_t lock; /* Guards level. */
+	const char *level;
+	struct string_stream *internal_stream;
+};
+
+/**
+ * kunit_new_stream() - constructs a new &struct kunit_stream.
+ * @test: The test context object.
+ *
+ * Constructs a new test managed &struct kunit_stream.
+ */
+struct kunit_stream *kunit_new_stream(struct kunit *test);
+
+#endif /* _KUNIT_KUNIT_STREAM_H */
diff --git a/include/kunit/test.h b/include/kunit/test.h
index 583840e24ffda..ea424095e4fb4 100644
--- a/include/kunit/test.h
+++ b/include/kunit/test.h
@@ -11,6 +11,7 @@
 
 #include <linux/types.h>
 #include <linux/slab.h>
+#include <kunit/kunit-stream.h>
 
 struct kunit_resource;
 
@@ -171,6 +172,7 @@ struct kunit {
 	void (*vprintk)(const struct kunit *test,
 			const char *level,
 			struct va_format *vaf);
+	void (*fail)(struct kunit *test, struct kunit_stream *stream);
 };
 
 int kunit_init_test(struct kunit *test, const char *name);
diff --git a/kunit/Makefile b/kunit/Makefile
index 275b565a0e81f..6ddc622ee6b1c 100644
--- a/kunit/Makefile
+++ b/kunit/Makefile
@@ -1,2 +1,3 @@
 obj-$(CONFIG_KUNIT) +=			test.o \
-					string-stream.o
+					string-stream.o \
+					kunit-stream.o
diff --git a/kunit/kunit-stream.c b/kunit/kunit-stream.c
new file mode 100644
index 0000000000000..70f5182245e0b
--- /dev/null
+++ b/kunit/kunit-stream.c
@@ -0,0 +1,153 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * C++ stream style string formatter and printer used in KUnit for outputting
+ * KUnit messages.
+ *
+ * Copyright (C) 2018, Google LLC.
+ * Author: Brendan Higgins <brendanhiggins at google.com>
+ */
+
+#include <kunit/test.h>
+#include <kunit/kunit-stream.h>
+#include <kunit/string-stream.h>
+
+static const char *kunit_stream_get_level(struct kunit_stream *this)
+{
+	unsigned long flags;
+	const char *level;
+
+	spin_lock_irqsave(&this->lock, flags);
+	level = this->level;
+	spin_unlock_irqrestore(&this->lock, flags);
+
+	return level;
+}
+
+static void kunit_stream_set_level(struct kunit_stream *this, const char *level)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&this->lock, flags);
+	this->level = level;
+	spin_unlock_irqrestore(&this->lock, flags);
+}
+
+static void kunit_stream_add(struct kunit_stream *this, const char *fmt, ...)
+{
+	va_list args;
+	struct string_stream *stream = this->internal_stream;
+
+	va_start(args, fmt);
+	if (stream->vadd(stream, fmt, args) < 0)
+		kunit_err(this->test, "Failed to allocate fragment: %s", fmt);
+
+	va_end(args);
+}
+
+static void kunit_stream_append(struct kunit_stream *this,
+				struct kunit_stream *other)
+{
+	struct string_stream *other_stream = other->internal_stream;
+	const char *other_content;
+
+	other_content = other_stream->get_string(other_stream);
+
+	if (!other_content) {
+		kunit_err(this->test,
+			  "Failed to get string from second argument for appending.");
+		return;
+	}
+
+	this->add(this, other_content);
+}
+
+static void kunit_stream_clear(struct kunit_stream *this)
+{
+	this->internal_stream->clear(this->internal_stream);
+}
+
+static void kunit_stream_commit(struct kunit_stream *this)
+{
+	struct string_stream *stream = this->internal_stream;
+	struct string_stream_fragment *fragment;
+	const char *level;
+	char *buf;
+
+	level = kunit_stream_get_level(this);
+	if (!level) {
+		kunit_err(this->test,
+			  "Stream was committed without a specified log level.");
+		level = KERN_ERR;
+		this->set_level(this, level);
+	}
+
+	buf = stream->get_string(stream);
+	if (!buf) {
+		kunit_err(this->test,
+			 "Could not allocate buffer, dumping stream:");
+		list_for_each_entry(fragment, &stream->fragments, node) {
+			kunit_err(this->test, fragment->fragment);
+		}
+		goto cleanup;
+	}
+
+	kunit_printk(level, this->test, buf);
+	kfree(buf);
+
+cleanup:
+	this->clear(this);
+}
+
+static int kunit_stream_init(struct kunit_resource *res, void *context)
+{
+	struct kunit *test = context;
+	struct kunit_stream *stream;
+
+	stream = kzalloc(sizeof(*stream), GFP_KERNEL);
+	if (!stream)
+		return -ENOMEM;
+	res->allocation = stream;
+	stream->test = test;
+	spin_lock_init(&stream->lock);
+	stream->internal_stream = new_string_stream();
+
+	if (!stream->internal_stream)
+		return -ENOMEM;
+
+	stream->set_level = kunit_stream_set_level;
+	stream->add = kunit_stream_add;
+	stream->append = kunit_stream_append;
+	stream->commit = kunit_stream_commit;
+	stream->clear = kunit_stream_clear;
+
+	return 0;
+}
+
+static void kunit_stream_free(struct kunit_resource *res)
+{
+	struct kunit_stream *stream = res->allocation;
+
+	if (!stream->internal_stream->is_empty(stream->internal_stream)) {
+		kunit_err(stream->test,
+			 "End of test case reached with uncommitted stream entries.");
+		stream->commit(stream);
+	}
+
+	destroy_string_stream(stream->internal_stream);
+	kfree(stream);
+}
+
+struct kunit_stream *kunit_new_stream(struct kunit *test)
+{
+	struct kunit_resource *res;
+
+	res = kunit_alloc_resource(test,
+				   kunit_stream_init,
+				   kunit_stream_free,
+				   test);
+
+	if (res)
+		return res->allocation;
+	else
+		return NULL;
+}
diff --git a/kunit/test.c b/kunit/test.c
index fb1a786e4c94f..abeb939dc7fa2 100644
--- a/kunit/test.c
+++ b/kunit/test.c
@@ -63,12 +63,20 @@ static void kunit_vprintk(const struct kunit *test,
 			  "kunit %s: %pV", test->name, vaf);
 }
 
+static void kunit_fail(struct kunit *test, struct kunit_stream *stream)
+{
+	kunit_set_success(test, false);
+	stream->set_level(stream, KERN_ERR);
+	stream->commit(stream);
+}
+
 int kunit_init_test(struct kunit *test, const char *name)
 {
 	spin_lock_init(&test->lock);
 	INIT_LIST_HEAD(&test->resources);
 	test->name = name;
 	test->vprintk = kunit_vprintk;
+	test->fail = kunit_fail;
 
 	return 0;
 }
-- 
2.20.0.rc0.387.gc7a69e6b6c-goog

^ permalink raw reply related	[flat|nested] 232+ messages in thread
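
For orientation, a short sketch of how this logger is meant to be driven
from inside a test case. This is an assumption-laden illustration, not code
from the patch: demo_kunit_stream() is hypothetical, but the calls follow
struct kunit_stream as defined above.

#include <linux/kernel.h>
#include <kunit/test.h>
#include <kunit/kunit-stream.h>

/* Build up a multi-part message and print it atomically at one log level. */
static void demo_kunit_stream(struct kunit *test)
{
        struct kunit_stream *stream = kunit_new_stream(test);

        if (!stream)
                return;

        stream->set_level(stream, KERN_INFO);
        stream->add(stream, "checked %d inputs, ", 4);
        stream->add(stream, "%d passed\n", 4);

        /* commit() prints the whole buffer at the set level, then clears it. */
        stream->commit(stream);
}

Because kunit_new_stream() allocates the stream as a test-managed resource,
there is no explicit free here: kunit_stream_free() runs when the test case
ends and warns about any entries that were never committed.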

* [RFC v3 05/19] kunit: test: add the concept of expectations
  2018-11-28 19:36 [RFC v3 00/19] kunit: introduce KUnit, the Linux kernel unit testing framework brendanhiggins
                   ` (4 preceding siblings ...)
  2018-11-28 19:36 ` [RFC v3 04/19] kunit: test: add test_stream a std::stream like logger brendanhiggins
@ 2018-11-28 19:36 ` brendanhiggins
  2018-11-28 19:36   ` Brendan Higgins
  2018-11-28 19:36 ` [RFC v3 06/19] arch: um: enable running kunit from User Mode Linux brendanhiggins
                   ` (15 subsequent siblings)
  21 siblings, 1 reply; 232+ messages in thread
From: brendanhiggins @ 2018-11-28 19:36 UTC (permalink / raw)


Add support for expectations, which allow properties to be specified and
then verified in tests.

Signed-off-by: Brendan Higgins <brendanhiggins at google.com>
---
 include/kunit/test.h | 379 +++++++++++++++++++++++++++++++++++++++++++
 kunit/test.c         |  34 ++++
 2 files changed, 413 insertions(+)

diff --git a/include/kunit/test.h b/include/kunit/test.h
index ea424095e4fb4..098a9dceef9ea 100644
--- a/include/kunit/test.h
+++ b/include/kunit/test.h
@@ -273,4 +273,383 @@ void __printf(3, 4) kunit_printk(const char *level,
 #define kunit_err(test, fmt, ...) \
 		kunit_printk(KERN_ERR, test, fmt, ##__VA_ARGS__)
 
+static inline struct kunit_stream *kunit_expect_start(struct kunit *test,
+						      const char *file,
+						      const char *line)
+{
+	struct kunit_stream *stream = kunit_new_stream(test);
+
+	stream->add(stream, "EXPECTATION FAILED at %s:%s\n\t", file, line);
+
+	return stream;
+}
+
+static inline void kunit_expect_end(struct kunit *test,
+				    bool success,
+				    struct kunit_stream *stream)
+{
+	if (!success)
+		test->fail(test, stream);
+	else
+		stream->clear(stream);
+}
+
+#define KUNIT_EXPECT_START(test) \
+		kunit_expect_start(test, __FILE__, __stringify(__LINE__))
+
+#define KUNIT_EXPECT_END(test, success, stream) \
+		kunit_expect_end(test, success, stream)
+
+#define KUNIT_EXPECT_MSG(test, success, message, fmt, ...) do {		       \
+	struct kunit_stream *__stream = KUNIT_EXPECT_START(test);	       \
+									       \
+	__stream->add(__stream, message);				       \
+	__stream->add(__stream, fmt, ##__VA_ARGS__);			       \
+	KUNIT_EXPECT_END(test, success, __stream);			       \
+} while (0)
+
+#define KUNIT_EXPECT(test, success, message) do {			       \
+	struct kunit_stream *__stream = KUNIT_EXPECT_START(test);	       \
+									       \
+	__stream->add(__stream, message);				       \
+	KUNIT_EXPECT_END(test, success, __stream);			       \
+} while (0)
+
+/**
+ * KUNIT_SUCCEED() - A no-op expectation. Only exists for code clarity.
+ * @test: The test context object.
+ *
+ * The opposite of KUNIT_FAIL(), it is an expectation that cannot fail. In other
+ * words, it does nothing and only exists for code clarity. See
+ * KUNIT_EXPECT_TRUE() for more information.
+ */
+#define KUNIT_SUCCEED(test) do {} while (0)
+
+/**
+ * KUNIT_FAIL() - Always causes a test to fail when evaluated.
+ * @test: The test context object.
+ * @fmt: an informational message to be printed when the assertion is made.
+ * @...: string format arguments.
+ *
+ * The opposite of KUNIT_SUCCEED(), it is an expectation that always fails. In
+ * other words, it always results in a failed expectation, and consequently
+ * always causes the test case to fail when evaluated. See KUNIT_EXPECT_TRUE()
+ * for more information.
+ */
+#define KUNIT_FAIL(test, fmt, ...) \
+		KUNIT_EXPECT_MSG(test, false, "", fmt, ##__VA_ARGS__)
+
+/**
+ * KUNIT_EXPECT_TRUE() - Causes a test failure when the expression is not true.
+ * @test: The test context object.
+ * @condition: an arbitrary boolean expression. The test fails when this does
+ * not evaluate to true.
+ *
+ * This and expectations of the form `KUNIT_EXPECT_*` will cause the test case
+ * to fail when the specified condition is not met; however, it will not prevent
+ * the test case from continuing to run; this is otherwise known as an
+ * *expectation failure*.
+ */
+#define KUNIT_EXPECT_TRUE(test, condition)				       \
+		KUNIT_EXPECT(test, (condition),				       \
+		       "Expected " #condition " is true, but is false.")
+
+#define KUNIT_EXPECT_TRUE_MSG(test, condition, fmt, ...)		       \
+		KUNIT_EXPECT_MSG(test, (condition),			       \
+				"Expected " #condition " is true, but is false.\n",\
+				fmt, ##__VA_ARGS__)
+
+/**
+ * KUNIT_EXPECT_FALSE() - Causes a test failure when the expression is not false.
+ * @test: The test context object.
+ * @condition: an arbitrary boolean expression. The test fails when this does
+ * not evaluate to false.
+ *
+ * Sets an expectation that @condition evaluates to false. See
+ * KUNIT_EXPECT_TRUE() for more information.
+ */
+#define KUNIT_EXPECT_FALSE(test, condition)				       \
+		KUNIT_EXPECT(test, !(condition),			       \
+		       "Expected " #condition " is false, but is true.")
+
+#define KUNIT_EXPECT_FALSE_MSG(test, condition, fmt, ...)		       \
+		KUNIT_EXPECT_MSG(test, !(condition),			       \
+				"Expected " #condition " is false, but is true.\n",\
+				fmt, ##__VA_ARGS__)
+
+void kunit_expect_binary_msg(struct kunit *test,
+			    long long left, const char *left_name,
+			    long long right, const char *right_name,
+			    bool compare_result,
+			    const char *compare_name,
+			    const char *file,
+			    const char *line,
+			    const char *fmt, ...);
+
+static inline void kunit_expect_binary(struct kunit *test,
+				       long long left, const char *left_name,
+				       long long right, const char *right_name,
+				       bool compare_result,
+				       const char *compare_name,
+				       const char *file,
+				       const char *line)
+{
+	struct kunit_stream *stream = kunit_expect_start(test, file, line);
+
+	stream->add(stream,
+		    "Expected %s %s %s, but\n",
+		    left_name, compare_name, right_name);
+	stream->add(stream, "\t\t%s == %lld\n", left_name, left);
+	stream->add(stream, "\t\t%s == %lld", right_name, right);
+
+	kunit_expect_end(test, compare_result, stream);
+}
+
+/*
+ * A factory macro for defining the expectations for the basic comparisons
+ * defined for the built in types.
+ *
+ * Unfortunately, there is no common type that all types can be promoted to for
+ * which all the binary operators behave the same way as for the actual types
+ * (for example, there is no type that long long and unsigned long long can
+ * both be cast to where the comparison result is preserved for all values). So
+ * the best we can do is do the comparison in the original types and then coerce
+ * everything to long long for printing; this way, the comparison behaves
+ * correctly and the printed out value usually makes sense without
+ * interpretation, but can always be interpreted to figure out the actual
+ * value.
+ */
+#define KUNIT_EXPECT_BINARY(test, left, condition, right) do {		       \
+	typeof(left) __left = (left);					       \
+	typeof(right) __right = (right);				       \
+	kunit_expect_binary(test,					       \
+			   (long long) __left, #left,			       \
+			   (long long) __right, #right,			       \
+			   __left condition __right, #condition,	       \
+			   __FILE__, __stringify(__LINE__));		       \
+} while (0)
+
+#define KUNIT_EXPECT_BINARY_MSG(test, left, condition, right, fmt, ...) do {   \
+	typeof(left) __left = (left);					       \
+	typeof(right) __right = (right);				       \
+	kunit_expect_binary_msg(test,					       \
+			   (long long) __left, #left,			       \
+			   (long long) __right, #right,			       \
+			   __left condition __right, #condition,	       \
+			   __FILE__, __stringify(__LINE__),		       \
+			   fmt, ##__VA_ARGS__);				       \
+} while (0)
+
+/**
+ * KUNIT_EXPECT_EQ() - Sets an expectation that @left and @right are equal.
+ * @test: The test context object.
+ * @left: an arbitrary expression that evaluates to a primitive C type.
+ * @right: an arbitrary expression that evaluates to a primitive C type.
+ *
+ * Sets an expectation that the values that @left and @right evaluate to are
+ * equal. This is semantically equivalent to
+ * KUNIT_EXPECT_TRUE(@test, (@left) == (@right)). See KUNIT_EXPECT_TRUE() for
+ * more information.
+ */
+#define KUNIT_EXPECT_EQ(test, left, right) \
+		KUNIT_EXPECT_BINARY(test, left, ==, right)
+
+#define KUNIT_EXPECT_EQ_MSG(test, left, right, fmt, ...)		       \
+		KUNIT_EXPECT_BINARY_MSG(test,				       \
+					left,				       \
+					==,				       \
+					right,				       \
+					fmt,				       \
+					##__VA_ARGS__)
+
+/**
+ * KUNIT_EXPECT_NE() - An expectation that @left and @right are not equal.
+ * @test: The test context object.
+ * @left: an arbitrary expression that evaluates to a primitive C type.
+ * @right: an arbitrary expression that evaluates to a primitive C type.
+ *
+ * Sets an expectation that the values that @left and @right evaluate to are not
+ * equal. This is semantically equivalent to
+ * KUNIT_EXPECT_TRUE(@test, (@left) != (@right)). See KUNIT_EXPECT_TRUE() for
+ * more information.
+ */
+#define KUNIT_EXPECT_NE(test, left, right) \
+		KUNIT_EXPECT_BINARY(test, left, !=, right)
+
+#define KUNIT_EXPECT_NE_MSG(test, left, right, fmt, ...)		       \
+		KUNIT_EXPECT_BINARY_MSG(test,				       \
+					left,				       \
+					!=,				       \
+					right,				       \
+					fmt,				       \
+					##__VA_ARGS__)
+
+/**
+ * KUNIT_EXPECT_LT() - An expectation that @left is less than @right.
+ * @test: The test context object.
+ * @left: an arbitrary expression that evaluates to a primitive C type.
+ * @right: an arbitrary expression that evaluates to a primitive C type.
+ *
+ * Sets an expectation that the value that @left evaluates to is less than the
+ * value that @right evaluates to. This is semantically equivalent to
+ * KUNIT_EXPECT_TRUE(@test, (@left) < (@right)). See KUNIT_EXPECT_TRUE() for
+ * more information.
+ */
+#define KUNIT_EXPECT_LT(test, left, right) \
+		KUNIT_EXPECT_BINARY(test, left, <, right)
+
+#define KUNIT_EXPECT_LT_MSG(test, left, right, fmt, ...)		       \
+		KUNIT_EXPECT_BINARY_MSG(test,				       \
+					left,				       \
+					<,				       \
+					right,				       \
+					fmt,				       \
+					##__VA_ARGS__)
+
+/**
+ * KUNIT_EXPECT_LE() - Expects that @left is less than or equal to @right.
+ * @test: The test context object.
+ * @left: an arbitrary expression that evaluates to a primitive C type.
+ * @right: an arbitrary expression that evaluates to a primitive C type.
+ *
+ * Sets an expectation that the value that @left evaluates to is less than or
+ * equal to the value that @right evaluates to. Semantically this is equivalent
+ * to KUNIT_EXPECT_TRUE(@test, (@left) <= (@right)). See KUNIT_EXPECT_TRUE() for
+ * more information.
+ */
+#define KUNIT_EXPECT_LE(test, left, right) \
+		KUNIT_EXPECT_BINARY(test, left, <=, right)
+
+#define KUNIT_EXPECT_LE_MSG(test, left, right, fmt, ...)		       \
+		KUNIT_EXPECT_BINARY_MSG(test,				       \
+					left,				       \
+					<=,				       \
+					right,				       \
+					fmt,				       \
+					##__VA_ARGS__)
+
+/**
+ * KUNIT_EXPECT_GT() - An expectation that @left is greater than @right.
+ * @test: The test context object.
+ * @left: an arbitrary expression that evaluates to a primitive C type.
+ * @right: an arbitrary expression that evaluates to a primitive C type.
+ *
+ * Sets an expectation that the value that @left evaluates to is greater than
+ * the value that @right evaluates to. This is semantically equivalent to
+ * KUNIT_EXPECT_TRUE(@test, (@left) > (@right)). See KUNIT_EXPECT_TRUE() for
+ * more information.
+ */
+#define KUNIT_EXPECT_GT(test, left, right) \
+		KUNIT_EXPECT_BINARY(test, left, >, right)
+
+#define KUNIT_EXPECT_GT_MSG(test, left, right, fmt, ...)		       \
+		KUNIT_EXPECT_BINARY_MSG(test,				       \
+					left,				       \
+					>,				       \
+					right,				       \
+					fmt,				       \
+					##__VA_ARGS__)
+
+/**
+ * KUNIT_EXPECT_GE() - Expects that @left is greater than or equal to @right.
+ * @test: The test context object.
+ * @left: an arbitrary expression that evaluates to a primitive C type.
+ * @right: an arbitrary expression that evaluates to a primitive C type.
+ *
+ * Sets an expectation that the value that @left evaluates to is greater than
+ * or equal to the value that @right evaluates to. Semantically this is
+ * equivalent to KUNIT_EXPECT_TRUE(@test, (@left) >= (@right)). See
+ * KUNIT_EXPECT_TRUE() for more information.
+ */
+#define KUNIT_EXPECT_GE(test, left, right) \
+		KUNIT_EXPECT_BINARY(test, left, >=, right)
+
+#define KUNIT_EXPECT_GE_MSG(test, left, right, fmt, ...)		       \
+		KUNIT_EXPECT_BINARY_MSG(test,				       \
+					left,				       \
+					>=,				       \
+					right,				       \
+					fmt,				       \
+					##__VA_ARGS__)
+
+/**
+ * KUNIT_EXPECT_STREQ() - Expects that strings @left and @right are equal.
+ * @test: The test context object.
+ * @left: an arbitrary expression that evaluates to a null terminated string.
+ * @right: an arbitrary expression that evaluates to a null terminated string.
+ *
+ * Sets an expectation that the values that @left and @right evaluate to are
+ * equal. This is semantically equivalent to
+ * KUNIT_EXPECT_TRUE(@test, !strcmp((@left), (@right))). See KUNIT_EXPECT_TRUE()
+ * for more information.
+ */
+#define KUNIT_EXPECT_STREQ(test, left, right) do {			       \
+	struct kunit_stream *__stream = KUNIT_EXPECT_START(test);	       \
+	typeof(left) __left = (left);					       \
+	typeof(right) __right = (right);				       \
+									       \
+	__stream->add(__stream, "Expected " #left " == " #right ", but\n");    \
+	__stream->add(__stream, "\t\t%s == %s\n", #left, __left);	       \
+	__stream->add(__stream, "\t\t%s == %s\n", #right, __right);	       \
+									       \
+	KUNIT_EXPECT_END(test, !strcmp(left, right), __stream);		       \
+} while (0)
+
+#define KUNIT_EXPECT_STREQ_MSG(test, left, right, fmt, ...) do {	       \
+	struct kunit_stream *__stream = KUNIT_EXPECT_START(test);	       \
+	typeof(left) __left = (left);					       \
+	typeof(right) __right = (right);				       \
+									       \
+	__stream->add(__stream, "Expected " #left " == " #right ", but\n");    \
+	__stream->add(__stream, "\t\t%s == %s\n", #left, __left);	       \
+	__stream->add(__stream, "\t\t%s == %s\n", #right, __right);	       \
+	__stream->add(__stream, fmt, ##__VA_ARGS__);			       \
+									       \
+	KUNIT_EXPECT_END(test, !strcmp(left, right), __stream);		       \
+} while (0)
+
+/**
+ * KUNIT_EXPECT_NOT_ERR_OR_NULL() - Expects that @ptr is not null and not err.
+ * @test: The test context object.
+ * @ptr: an arbitrary pointer.
+ *
+ * Sets an expectation that the value that @ptr evaluates to is not null and not
+ * an errno stored in a pointer. This is semantically equivalent to
+ * KUNIT_EXPECT_TRUE(@test, !IS_ERR_OR_NULL(@ptr)). See KUNIT_EXPECT_TRUE() for
+ * more information.
+ */
+#define KUNIT_EXPECT_NOT_ERR_OR_NULL(test, ptr) do {			       \
+	struct kunit_stream *__stream = KUNIT_EXPECT_START(test);	       \
+	typeof(ptr) __ptr = (ptr);					       \
+									       \
+	if (!__ptr)							       \
+		__stream->add(__stream,					       \
+			      "Expected " #ptr " is not null, but is.");       \
+	if (IS_ERR(__ptr))						       \
+		__stream->add(__stream,					       \
+			      "Expected " #ptr " is not error, but is: %ld",   \
+			      PTR_ERR(__ptr));				       \
+									       \
+	KUNIT_EXPECT_END(test, !IS_ERR_OR_NULL(__ptr), __stream);	       \
+} while (0)
+
+#define KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(test, ptr, fmt, ...) do {	       \
+	struct kunit_stream *__stream = KUNIT_EXPECT_START(test);	       \
+	typeof(ptr) __ptr = (ptr);					       \
+									       \
+	if (!__ptr) {							       \
+		__stream->add(__stream,					       \
+			      "Expected " #ptr " is not null, but is.");       \
+		__stream->add(__stream, fmt, ##__VA_ARGS__);		       \
+	}								       \
+	if (IS_ERR(__ptr)) {						       \
+		__stream->add(__stream,					       \
+			      "Expected " #ptr " is not error, but is: %ld",   \
+			      PTR_ERR(__ptr));				       \
+									       \
+		__stream->add(__stream, fmt, ##__VA_ARGS__);		       \
+	}								       \
+	KUNIT_EXPECT_END(test, !IS_ERR_OR_NULL(__ptr), __stream);	       \
+} while (0)
+
 #endif /* _KUNIT_TEST_H */
diff --git a/kunit/test.c b/kunit/test.c
index abeb939dc7fa2..0fe6571f23d41 100644
--- a/kunit/test.c
+++ b/kunit/test.c
@@ -269,3 +269,37 @@ void kunit_printk(const char *level,
 
 	va_end(args);
 }
+
+void kunit_expect_binary_msg(struct kunit *test,
+			     long long left, const char *left_name,
+			     long long right, const char *right_name,
+			     bool compare_result,
+			     const char *compare_name,
+			     const char *file,
+			     const char *line,
+			     const char *fmt, ...)
+{
+	struct kunit_stream *stream = kunit_expect_start(test, file, line);
+	struct va_format vaf;
+	va_list args;
+
+	stream->add(stream,
+		    "Expected %s %s %s, but\n",
+		    left_name, compare_name, right_name);
+	stream->add(stream, "\t\t%s == %lld\n", left_name, left);
+	stream->add(stream, "\t\t%s == %lld", right_name, right);
+
+	if (fmt) {
+		va_start(args, fmt);
+
+		vaf.fmt = fmt;
+		vaf.va = &args;
+
+		stream->add(stream, "\n%pV", &vaf);
+
+		va_end(args);
+	}
+
+	kunit_expect_end(test, compare_result, stream);
+}
+
-- 
2.20.0.rc0.387.gc7a69e6b6c-goog

^ permalink raw reply related	[flat|nested] 232+ messages in thread
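
Pulling these macros together, here is what a call site looks like. The test
function below is a hypothetical illustration; the macros themselves are
exactly the ones defined in the patch above.

#include <kunit/test.h>

/* Expectation failures are recorded but the test case keeps running. */
static void demo_expectations(struct kunit *test)
{
        int ret = 2;

        KUNIT_EXPECT_EQ(test, ret, 2);
        KUNIT_EXPECT_NE(test, ret, 0);
        KUNIT_EXPECT_EQ_MSG(test, ret, 2, "ret should still be 2, was %d", ret);

        /*
         * Comparisons happen in the original operand types and are only
         * coerced to long long for printing, so this behaves exactly like
         * the plain C expression -1 < 1.
         */
        KUNIT_EXPECT_LT(test, -1, 1);
}

On failure, each macro builds its message on a kunit_stream and hands it to
test->fail(), which marks the test failed and commits the message at
KERN_ERR; on success the stream is simply cleared.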

* [RFC v3 06/19] arch: um: enable running kunit from User Mode Linux
  2018-11-28 19:36 [RFC v3 00/19] kunit: introduce KUnit, the Linux kernel unit testing framework brendanhiggins
                   ` (5 preceding siblings ...)
  2018-11-28 19:36 ` [RFC v3 05/19] kunit: test: add the concept of expectations brendanhiggins
@ 2018-11-28 19:36 ` brendanhiggins
  2018-11-28 19:36   ` Brendan Higgins
                     ` (2 more replies)
  2018-11-28 19:36 ` [RFC v3 07/19] kunit: test: add initial tests brendanhiggins
                   ` (14 subsequent siblings)
  21 siblings, 3 replies; 232+ messages in thread
From: brendanhiggins @ 2018-11-28 19:36 UTC (permalink / raw)


Make the minimum number of changes outside of the KUnit directories needed
for KUnit to build and run under UML.

Signed-off-by: Brendan Higgins <brendanhiggins at google.com>
---
 Kconfig  | 2 ++
 Makefile | 2 +-
 2 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/Kconfig b/Kconfig
index 48a80beab6853..10428501edb78 100644
--- a/Kconfig
+++ b/Kconfig
@@ -30,3 +30,5 @@ source "crypto/Kconfig"
 source "lib/Kconfig"
 
 source "lib/Kconfig.debug"
+
+source "kunit/Kconfig"
diff --git a/Makefile b/Makefile
index 69fa5c0310d83..197f01cbddf03 100644
--- a/Makefile
+++ b/Makefile
@@ -966,7 +966,7 @@ endif
 
 
 ifeq ($(KBUILD_EXTMOD),)
-core-y		+= kernel/ certs/ mm/ fs/ ipc/ security/ crypto/ block/
+core-y		+= kernel/ certs/ mm/ fs/ ipc/ security/ crypto/ block/ kunit/
 
 vmlinux-dirs	:= $(patsubst %/,%,$(filter %/, $(init-y) $(init-m) \
 		     $(core-y) $(core-m) $(drivers-y) $(drivers-m) \
-- 
2.20.0.rc0.387.gc7a69e6b6c-goog

^ permalink raw reply related	[flat|nested] 232+ messages in thread

* [RFC v3 07/19] kunit: test: add initial tests
  2018-11-28 19:36 [RFC v3 00/19] kunit: introduce KUnit, the Linux kernel unit testing framework brendanhiggins
                   ` (6 preceding siblings ...)
  2018-11-28 19:36 ` [RFC v3 06/19] arch: um: enable running kunit from User Mode Linux brendanhiggins
@ 2018-11-28 19:36 ` brendanhiggins
  2018-11-28 19:36   ` Brendan Higgins
  2018-11-30  3:40   ` mcgrof
  2018-11-28 19:36 ` [RFC v3 08/19] arch: um: add shim to trap to allow installing a fault catcher for tests brendanhiggins
                   ` (13 subsequent siblings)
  21 siblings, 2 replies; 232+ messages in thread
From: brendanhiggins @ 2018-11-28 19:36 UTC (permalink / raw)


Add a test for string_stream along with a simpler example test.
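
To actually run these tests, the new options have to be enabled along
with KUnit itself, e.g. in .config:

    CONFIG_KUNIT=y
    CONFIG_KUNIT_TEST=y
    CONFIG_KUNIT_EXAMPLE_TEST=y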

Signed-off-by: Brendan Higgins <brendanhiggins@google.com>
---
 kunit/Kconfig              | 12 ++++++
 kunit/Makefile             |  4 ++
 kunit/example-test.c       | 88 ++++++++++++++++++++++++++++++++++++++
 kunit/string-stream-test.c | 61 ++++++++++++++++++++++++++
 4 files changed, 165 insertions(+)
 create mode 100644 kunit/example-test.c
 create mode 100644 kunit/string-stream-test.c

diff --git a/kunit/Kconfig b/kunit/Kconfig
index 49b44c4f6630a..c3dc7bca83f9d 100644
--- a/kunit/Kconfig
+++ b/kunit/Kconfig
@@ -14,4 +14,16 @@ config KUNIT
 	  special hardware. For more information, please see
 	  Documentation/kunit/
 
+config KUNIT_TEST
+	bool "KUnit test for KUnit"
+	depends on KUNIT
+	help
+	  Enables KUnit tests that test KUnit itself.
+
+config KUNIT_EXAMPLE_TEST
+	bool "Example test for KUnit"
+	depends on KUNIT
+	help
+	  Enables an example KUnit test that demonstrates KUnit features.
+
 endmenu
diff --git a/kunit/Makefile b/kunit/Makefile
index 6ddc622ee6b1c..60a9ea6cb4697 100644
--- a/kunit/Makefile
+++ b/kunit/Makefile
@@ -1,3 +1,7 @@
 obj-$(CONFIG_KUNIT) +=			test.o \
 					string-stream.o \
 					kunit-stream.o
+
+obj-$(CONFIG_KUNIT_TEST) +=		string-stream-test.o
+
+obj-$(CONFIG_KUNIT_EXAMPLE_TEST) +=	example-test.o
diff --git a/kunit/example-test.c b/kunit/example-test.c
new file mode 100644
index 0000000000000..4197cc217d96f
--- /dev/null
+++ b/kunit/example-test.c
@@ -0,0 +1,88 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Example KUnit test to show how to use KUnit.
+ *
+ * Copyright (C) 2018, Google LLC.
+ * Author: Brendan Higgins <brendanhiggins@google.com>
+ */
+
+#include <kunit/test.h>
+
+/*
+ * This is the most fundamental element of KUnit, the test case. A test case
+ * makes a set of EXPECTATIONs and ASSERTIONs about the behavior of some code;
+ * if any expectations or assertions are not met, the test fails; otherwise,
+ * the test passes.
+ *
+ * In KUnit, a test case is just a function with the signature
+ * `void (*)(struct kunit *)`. `struct kunit` is a context object that stores
+ * information about the current test.
+ */
+static void example_simple_test(struct kunit *test)
+{
+	/*
+	 * This is an EXPECTATION; it is how KUnit tests things. When you want
+	 * to test a piece of code, you set some expectations about what the
+	 * code should do. KUnit then runs the test and verifies that the code's
+	 * behavior matched what was expected.
+	 */
+	KUNIT_EXPECT_EQ(test, 1 + 1, 2);
+}
+
+/*
+ * This is run once before each test case, see the comment on
+ * example_test_module for more information.
+ */
+static int example_test_init(struct kunit *test)
+{
+	kunit_info(test, "initializing");
+
+	return 0;
+}
+
+/*
+ * Here we make a list of all the test cases we want to add to the test module
+ * below.
+ */
+static struct kunit_case example_test_cases[] = {
+	/*
+	 * This is a helper to create a test case object from a test case
+	 * function; exactly how it works is not important for using KUnit;
+	 * just know that this is how you associate test cases with a test
+	 * module.
+	 */
+	KUNIT_CASE(example_simple_test),
+	{},
+};
+
+/*
+ * This defines a suite or grouping of tests.
+ *
+ * Test cases are defined as belonging to the suite by adding them to
+ * `kunit_cases`.
+ *
+ * Often it is desirable to run some function which will set up things which
+ * will be used by every test; this is accomplished with an `init` function
+ * which runs before each test case is invoked. Similarly, an `exit` function
+ * may be specified which runs after every test case and can be used for
+ * cleanup. For clarity, running tests in a test module would behave as follows:
+ *
+ * module.init(test);
+ * module.test_case[0](test);
+ * module.exit(test);
+ * module.init(test);
+ * module.test_case[1](test);
+ * module.exit(test);
+ * ...;
+ */
+static struct kunit_module example_test_module = {
+	.name = "example",
+	.init = example_test_init,
+	.test_cases = example_test_cases,
+};
+
+/*
+ * This registers the above test module telling KUnit that this is a suite of
+ * tests that need to be run.
+ */
+module_test(example_test_module);
diff --git a/kunit/string-stream-test.c b/kunit/string-stream-test.c
new file mode 100644
index 0000000000000..ec2675593c364
--- /dev/null
+++ b/kunit/string-stream-test.c
@@ -0,0 +1,61 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * KUnit test for struct string_stream.
+ *
+ * Copyright (C) 2018, Google LLC.
+ * Author: Brendan Higgins <brendanhiggins@google.com>
+ */
+
+#include <linux/slab.h>
+#include <kunit/test.h>
+#include <kunit/string-stream.h>
+
+static void string_stream_test_get_string(struct kunit *test)
+{
+	struct string_stream *stream = new_string_stream();
+	char *output;
+
+	stream->add(stream, "Foo");
+	stream->add(stream, " %s", "bar");
+
+	output = stream->get_string(stream);
+	KUNIT_EXPECT_STREQ(test, output, "Foo bar");
+	kfree(output);
+	destroy_string_stream(stream);
+}
+
+static void string_stream_test_add_and_clear(struct kunit *test)
+{
+	struct string_stream *stream = new_string_stream();
+	char *output;
+	int i;
+
+	for (i = 0; i < 10; i++)
+		stream->add(stream, "A");
+
+	output = stream->get_string(stream);
+	KUNIT_EXPECT_STREQ(test, output, "AAAAAAAAAA");
+	KUNIT_EXPECT_EQ(test, stream->length, 10);
+	KUNIT_EXPECT_FALSE(test, stream->is_empty(stream));
+	kfree(output);
+
+	stream->clear(stream);
+
+	output = stream->get_string(stream);
+	KUNIT_EXPECT_STREQ(test, output, "");
+	KUNIT_EXPECT_TRUE(test, stream->is_empty(stream));
+	destroy_string_stream(stream);
+}
+
+static struct kunit_case string_stream_test_cases[] = {
+	KUNIT_CASE(string_stream_test_get_string),
+	KUNIT_CASE(string_stream_test_add_and_clear),
+	{}
+};
+
+static struct kunit_module string_stream_test_module = {
+	.name = "string-stream-test",
+	.test_cases = string_stream_test_cases
+};
+module_test(string_stream_test_module);
+
-- 
2.20.0.rc0.387.gc7a69e6b6c-goog

^ permalink raw reply related	[flat|nested] 232+ messages in thread

* [RFC v3 08/19] arch: um: add shim to trap to allow installing a fault catcher for tests
  2018-11-28 19:36 [RFC v3 00/19] kunit: introduce KUnit, the Linux kernel unit testing framework brendanhiggins
                   ` (7 preceding siblings ...)
  2018-11-28 19:36 ` [RFC v3 07/19] kunit: test: add initial tests brendanhiggins
@ 2018-11-28 19:36 ` brendanhiggins
  2018-11-28 19:36   ` Brendan Higgins
                     ` (2 more replies)
  2018-11-28 19:36 ` [RFC v3 09/19] kunit: test: add the concept of assertions brendanhiggins
                   ` (12 subsequent siblings)
  21 siblings, 3 replies; 232+ messages in thread
From: brendanhiggins @ 2018-11-28 19:36 UTC (permalink / raw)


Add context to the current thread that allows a test to specify that the
normal fault checks should be skipped so that an installed fault catcher
runs immediately.
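
The intended consumer is the KUnit test runner introduced later in this
series; roughly, the usage pattern looks as follows (a sketch for
illustration only, where run_test_case() and handle_crash() are
hypothetical stand-ins):

    jmp_buf fault_catcher;
    int faulted;

    current->thread.is_running_test = true;
    current->thread.fault_catcher = &fault_catcher;

    faulted = UML_SETJMP(&fault_catcher);
    if (faulted == 0) {
            /*
             * May segfault; if it does, the trap handler longjmps
             * back to the UML_SETJMP above instead of doing the
             * normal in-kernel fault handling.
             */
            run_test_case();
    } else {
            /* Reached via segv_run_catcher(). */
            handle_crash();
    }

    current->thread.fault_catcher = NULL;
    current->thread.is_running_test = false;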

Signed-off-by: Brendan Higgins <brendanhiggins@google.com>
---
 arch/um/include/asm/processor-generic.h |  4 +++-
 arch/um/kernel/trap.c                   | 15 +++++++++++----
 2 files changed, 14 insertions(+), 5 deletions(-)

diff --git a/arch/um/include/asm/processor-generic.h b/arch/um/include/asm/processor-generic.h
index b58b746d3f2ca..d566cd416ff02 100644
--- a/arch/um/include/asm/processor-generic.h
+++ b/arch/um/include/asm/processor-generic.h
@@ -27,6 +27,7 @@ struct thread_struct {
 	struct task_struct *prev_sched;
 	struct arch_thread arch;
 	jmp_buf switch_buf;
+	bool is_running_test;
 	struct {
 		int op;
 		union {
@@ -51,7 +52,8 @@ struct thread_struct {
 	.fault_addr		= NULL, \
 	.prev_sched		= NULL, \
 	.arch			= INIT_ARCH_THREAD, \
-	.request		= { 0 } \
+	.request		= { 0 }, \
+	.is_running_test	= false, \
 }
 
 static inline void release_thread(struct task_struct *task)
diff --git a/arch/um/kernel/trap.c b/arch/um/kernel/trap.c
index cced829460427..bf90e678b3d71 100644
--- a/arch/um/kernel/trap.c
+++ b/arch/um/kernel/trap.c
@@ -201,6 +201,12 @@ void segv_handler(int sig, struct siginfo *unused_si, struct uml_pt_regs *regs)
 	segv(*fi, UPT_IP(regs), UPT_IS_USER(regs), regs);
 }
 
+static void segv_run_catcher(jmp_buf *catcher, void *fault_addr)
+{
+	current->thread.fault_addr = fault_addr;
+	UML_LONGJMP(catcher, 1);
+}
+
 /*
  * We give a *copy* of the faultinfo in the regs to segv.
  * This must be done, since nesting SEGVs could overwrite
@@ -219,7 +225,10 @@ unsigned long segv(struct faultinfo fi, unsigned long ip, int is_user,
 	if (!is_user && regs)
 		current->thread.segv_regs = container_of(regs, struct pt_regs, regs);
 
-	if (!is_user && (address >= start_vm) && (address < end_vm)) {
+	catcher = current->thread.fault_catcher;
+	if (catcher && current->thread.is_running_test)
+		segv_run_catcher(catcher, (void *) address);
+	else if (!is_user && (address >= start_vm) && (address < end_vm)) {
 		flush_tlb_kernel_vm();
 		goto out;
 	}
@@ -246,12 +255,10 @@ unsigned long segv(struct faultinfo fi, unsigned long ip, int is_user,
 		address = 0;
 	}
 
-	catcher = current->thread.fault_catcher;
 	if (!err)
 		goto out;
 	else if (catcher != NULL) {
-		current->thread.fault_addr = (void *) address;
-		UML_LONGJMP(catcher, 1);
+		segv_run_catcher(catcher, (void *) address);
 	}
 	else if (current->thread.fault_addr != NULL)
 		panic("fault_addr set but no fault catcher");
-- 
2.20.0.rc0.387.gc7a69e6b6c-goog

^ permalink raw reply related	[flat|nested] 232+ messages in thread

* [RFC v3 09/19] kunit: test: add the concept of assertions
  2018-11-28 19:36 [RFC v3 00/19] kunit: introduce KUnit, the Linux kernel unit testing framework brendanhiggins
                   ` (8 preceding siblings ...)
  2018-11-28 19:36 ` [RFC v3 08/19] arch: um: add shim to trap to allow installing a fault catcher for tests brendanhiggins
@ 2018-11-28 19:36 ` brendanhiggins
  2018-11-28 19:36   ` Brendan Higgins
  2018-11-28 19:36 ` [RFC v3 10/19] kunit: test: add test managed resource tests brendanhiggins
                   ` (11 subsequent siblings)
  21 siblings, 1 reply; 232+ messages in thread
From: brendanhiggins @ 2018-11-28 19:36 UTC (permalink / raw)


Add support for assertions, which are like expectations except that the
test case terminates if the assertion is not satisfied.
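
For example (a hypothetical test case for illustration):

    static void example_assert_test(struct kunit *test)
    {
            struct string_stream *stream = new_string_stream();

            /*
             * An assertion: if stream is NULL or an ERR_PTR, the test
             * case fails and terminates here, immediately...
             */
            KUNIT_ASSERT_NOT_ERR_OR_NULL(test, stream);

            /*
             * ...whereas a failed expectation only records the failure
             * and lets the test case keep running.
             */
            KUNIT_EXPECT_TRUE(test, stream->is_empty(stream));
    }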

Signed-off-by: Brendan Higgins <brendanhiggins@google.com>
---
 include/kunit/test.h       | 388 ++++++++++++++++++++++++++++++++++++-
 kunit/Makefile             |   3 +-
 kunit/string-stream-test.c |  12 +-
 kunit/test-test.c          |  37 ++++
 kunit/test.c               | 164 +++++++++++++++-
 5 files changed, 586 insertions(+), 18 deletions(-)
 create mode 100644 kunit/test-test.c

diff --git a/include/kunit/test.h b/include/kunit/test.h
index 098a9dceef9ea..7be11dba0b14e 100644
--- a/include/kunit/test.h
+++ b/include/kunit/test.h
@@ -84,9 +84,10 @@ struct kunit;
  * @name: the name of the test case.
  *
  * A test case is a function with the signature, ``void (*)(struct kunit *)``
- * that makes expectations (see KUNIT_EXPECT_TRUE()) about code under test. Each
- * test case is associated with a &struct kunit_module and will be run after the
- * module's init function and followed by the module's exit function.
+ * that makes expectations and assertions (see KUNIT_EXPECT_TRUE() and
+ * KUNIT_ASSERT_TRUE()) about code under test. Each test case is associated with
+ * a &struct kunit_module and will be run after the module's init function and
+ * followed by the module's exit function.
  *
  * A test case should be static and should only be created with the KUNIT_CASE()
  * macro; additionally, every array of test cases should be terminated with an
@@ -168,11 +169,14 @@ struct kunit {
 	const char *name; /* Read only after initialization! */
 	spinlock_t lock; /* Guards all mutable test state. */
 	bool success; /* Protected by lock. */
+	bool death_test; /* Protected by lock. */
 	struct list_head resources; /* Protected by lock. */
+	void (*set_death_test)(struct kunit *test, bool death_test);
 	void (*vprintk)(const struct kunit *test,
 			const char *level,
 			struct va_format *vaf);
 	void (*fail)(struct kunit *test, struct kunit_stream *stream);
+	void (*abort)(struct kunit *test);
 };
 
 int kunit_init_test(struct kunit *test, const char *name);
@@ -652,4 +656,382 @@ static inline void kunit_expect_binary(struct kunit *test,
 	KUNIT_EXPECT_END(test, !IS_ERR_OR_NULL(__ptr), __stream);	       \
 } while (0)
 
+static inline struct kunit_stream *kunit_assert_start(struct kunit *test,
+						    const char *file,
+						    const char *line)
+{
+	struct kunit_stream *stream = kunit_new_stream(test);
+
+	stream->add(stream, "ASSERTION FAILED at %s:%s\n\t", file, line);
+
+	return stream;
+}
+
+static inline void kunit_assert_end(struct kunit *test,
+				   bool success,
+				   struct kunit_stream *stream)
+{
+	if (!success) {
+		test->fail(test, stream);
+		test->abort(test);
+	} else {
+		stream->clear(stream);
+	}
+}
+
+#define KUNIT_ASSERT_START(test) \
+		kunit_assert_start(test, __FILE__, __stringify(__LINE__))
+
+#define KUNIT_ASSERT_END(test, success, stream) \
+		kunit_assert_end(test, success, stream)
+
+#define KUNIT_ASSERT(test, success, message) do {			       \
+	struct kunit_stream *__stream = KUNIT_ASSERT_START(test);	       \
+									       \
+	__stream->add(__stream, message);				       \
+	KUNIT_ASSERT_END(test, success, __stream);			       \
+} while (0)
+
+#define KUNIT_ASSERT_MSG(test, success, message, fmt, ...) do {		       \
+	struct kunit_stream *__stream = KUNIT_ASSERT_START(test);	       \
+									       \
+	__stream->add(__stream, message);				       \
+	__stream->add(__stream, fmt, ##__VA_ARGS__);			       \
+	KUNIT_ASSERT_END(test, success, __stream);			       \
+} while (0)
+
+#define KUNIT_ASSERT_FAILURE(test, fmt, ...) \
+		KUNIT_ASSERT_MSG(test, false, "", fmt, ##__VA_ARGS__)
+
+/**
+ * KUNIT_ASSERT_TRUE() - Sets an assertion that @condition is true.
+ * @test: The test context object.
+ * @condition: an arbitrary boolean expression. The test fails and aborts when
+ * this does not evaluate to true.
+ *
+ * This and assertions of the form `KUNIT_ASSERT_*` will cause the test case to
+ * fail *and immediately abort* when the specified condition is not met. Unlike
+ * an expectation failure, it will prevent the test case from continuing to run;
+ * this is otherwise known as an *assertion failure*.
+ */
+#define KUNIT_ASSERT_TRUE(test, condition)				       \
+		KUNIT_ASSERT(test, (condition),				       \
+		       "Asserted " #condition " is true, but is false.")
+
+#define KUNIT_ASSERT_TRUE_MSG(test, condition, fmt, ...)		       \
+		KUNIT_ASSERT_MSG(test, (condition),			       \
+				"Asserted " #condition " is true, but is false.\n",\
+				fmt, ##__VA_ARGS__)
+
+/**
+ * KUNIT_ASSERT_FALSE() - Sets an assertion that @condition is false.
+ * @test: The test context object.
+ * @condition: an arbitrary boolean expression.
+ *
+ * Sets an assertion that the value that @condition evaluates to is false. This
+ * is the same as KUNIT_EXPECT_FALSE(), except it causes an assertion failure
+ * (see KUNIT_ASSERT_TRUE()) when the assertion is not met.
+ */
+#define KUNIT_ASSERT_FALSE(test, condition)				       \
+		KUNIT_ASSERT(test, !(condition),			       \
+		       "Asserted " #condition " is false, but is true.")
+
+#define KUNIT_ASSERT_FALSE_MSG(test, condition, fmt, ...)		       \
+		KUNIT_ASSERT_MSG(test, !(condition),			       \
+				"Asserted " #condition " is false, but is true.\n",\
+				fmt, ##__VA_ARGS__)
+
+void kunit_assert_binary_msg(struct kunit *test,
+			    long long left, const char *left_name,
+			    long long right, const char *right_name,
+			    bool compare_result,
+			    const char *compare_name,
+			    const char *file,
+			    const char *line,
+			    const char *fmt, ...);
+
+static inline void kunit_assert_binary(struct kunit *test,
+				      long long left, const char *left_name,
+				      long long right, const char *right_name,
+				      bool compare_result,
+				      const char *compare_name,
+				      const char *file,
+				      const char *line)
+{
+	kunit_assert_binary_msg(test,
+			       left, left_name,
+			       right, right_name,
+			       compare_result,
+			       compare_name,
+			       file,
+			       line,
+			       NULL);
+}
+
+/*
+ * A factory macro for defining the expectations for the basic comparisons
+ * defined for the built in types.
+ *
+ * Unfortunately, there is no common type that all types can be promoted to for
+ * which all the binary operators behave the same way as for the actual types
+ * (for example, there is no type that long long and unsigned long long can
+ * both be cast to where the comparison result is preserved for all values). So
+ * the best we can do is do the comparison in the original types and then coerce
+ * everything to long long for printing; this way, the comparison behaves
+ * correctly and the printed out value usually makes sense without
+ * interpretation, but can always be interpreted to figure out the actual
+ * value.
+ */
+#define KUNIT_ASSERT_BINARY(test, left, condition, right) do {		       \
+	typeof(left) __left = (left);					       \
+	typeof(right) __right = (right);				       \
+	kunit_assert_binary(test,					       \
+			   (long long) __left, #left,			       \
+			   (long long) __right, #right,			       \
+			   __left condition __right, #condition,	       \
+			   __FILE__, __stringify(__LINE__));		       \
+} while (0)
+
+#define KUNIT_ASSERT_BINARY_MSG(test, left, condition, right, fmt, ...) do {   \
+	typeof(left) __left = (left);					       \
+	typeof(right) __right = (right);				       \
+	kunit_assert_binary_msg(test,					       \
+			       (long long) __left, #left,		       \
+			       (long long) __right, #right,		       \
+			       __left condition __right, #condition,	       \
+			       __FILE__, __stringify(__LINE__),		       \
+			       fmt, ##__VA_ARGS__);			       \
+} while (0)
+
+/**
+ * KUNIT_ASSERT_EQ() - Sets an assertion that @left and @right are equal.
+ * @test: The test context object.
+ * @left: an arbitrary expression that evaluates to a primitive C type.
+ * @right: an arbitrary expression that evaluates to a primitive C type.
+ *
+ * Sets an assertion that the values that @left and @right evaluate to are
+ * equal. This is the same as KUNIT_EXPECT_EQ(), except it causes an assertion
+ * failure (see KUNIT_ASSERT_TRUE()) when the assertion is not met.
+ */
+#define KUNIT_ASSERT_EQ(test, left, right) \
+		KUNIT_ASSERT_BINARY(test, left, ==, right)
+
+#define KUNIT_ASSERT_EQ_MSG(test, left, right, fmt, ...)		       \
+		KUNIT_ASSERT_BINARY_MSG(test,				       \
+					left,				       \
+					==,				       \
+					right,				       \
+					fmt,				       \
+					##__VA_ARGS__)
+
+/**
+ * KUNIT_ASSERT_NE() - An assertion that @left and @right are not equal.
+ * @test: The test context object.
+ * @left: an arbitrary expression that evaluates to a primitive C type.
+ * @right: an arbitrary expression that evaluates to a primitive C type.
+ *
+ * Sets an assertion that the values that @left and @right evaluate to are not
+ * equal. This is the same as KUNIT_EXPECT_NE(), except it causes an assertion
+ * failure (see KUNIT_ASSERT_TRUE()) when the assertion is not met.
+ */
+#define KUNIT_ASSERT_NE(test, left, right) \
+		KUNIT_ASSERT_BINARY(test, left, !=, right)
+
+#define KUNIT_ASSERT_NE_MSG(test, left, right, fmt, ...)		       \
+		KUNIT_ASSERT_BINARY_MSG(test,				       \
+					left,				       \
+					!=,				       \
+					right,				       \
+					fmt,				       \
+					##__VA_ARGS__)
+
+/**
+ * KUNIT_ASSERT_LT() - An assertion that @left is less than @right.
+ * @test: The test context object.
+ * @left: an arbitrary expression that evaluates to a primitive C type.
+ * @right: an arbitrary expression that evaluates to a primitive C type.
+ *
+ * Sets an assertion that the value that @left evaluates to is less than the
+ * value that @right evaluates to. This is the same as KUNIT_EXPECT_LT(), except
+ * it causes an assertion failure (see KUNIT_ASSERT_TRUE()) when the assertion
+ * is not met.
+ */
+#define KUNIT_ASSERT_LT(test, left, right) \
+		KUNIT_ASSERT_BINARY(test, left, <, right)
+
+#define KUNIT_ASSERT_LT_MSG(test, left, right, fmt, ...)		       \
+		KUNIT_ASSERT_BINARY_MSG(test,				       \
+					left,				       \
+					<,				       \
+					right,				       \
+					fmt,				       \
+					##__VA_ARGS__)
+/**
+ * KUNIT_ASSERT_LE() - An assertion that @left is less than or equal to @right.
+ * @test: The test context object.
+ * @left: an arbitrary expression that evaluates to a primitive C type.
+ * @right: an arbitrary expression that evaluates to a primitive C type.
+ *
+ * Sets an assertion that the value that @left evaluates to is less than or
+ * equal to the value that @right evaluates to. This is the same as
+ * KUNIT_EXPECT_LE(), except it causes an assertion failure (see
+ * KUNIT_ASSERT_TRUE()) when the assertion is not met.
+ */
+#define KUNIT_ASSERT_LE(test, left, right) \
+		KUNIT_ASSERT_BINARY(test, left, <=, right)
+
+#define KUNIT_ASSERT_LE_MSG(test, left, right, fmt, ...)		       \
+		KUNIT_ASSERT_BINARY_MSG(test,				       \
+					left,				       \
+					<=,				       \
+					right,				       \
+					fmt,				       \
+					##__VA_ARGS__)
+/**
+ * KUNIT_ASSERT_GT() - An assertion that @left is greater than @right.
+ * @test: The test context object.
+ * @left: an arbitrary expression that evaluates to a primitive C type.
+ * @right: an arbitrary expression that evaluates to a primitive C type.
+ *
+ * Sets an assertion that the value that @left evaluates to is greater than the
+ * value that @right evaluates to. This is the same as KUNIT_EXPECT_GT(), except
+ * it causes an assertion failure (see KUNIT_ASSERT_TRUE()) when the assertion
+ * is not met.
+ */
+#define KUNIT_ASSERT_GT(test, left, right) \
+		KUNIT_ASSERT_BINARY(test, left, >, right)
+
+#define KUNIT_ASSERT_GT_MSG(test, left, right, fmt, ...)		       \
+		KUNIT_ASSERT_BINARY_MSG(test,				       \
+					left,				       \
+					>,				       \
+					right,				       \
+					fmt,				       \
+					##__VA_ARGS__)
+
+/**
+ * KUNIT_ASSERT_GE() - Assertion that @left is greater than or equal to @right.
+ * @test: The test context object.
+ * @left: an arbitrary expression that evaluates to a primitive C type.
+ * @right: an arbitrary expression that evaluates to a primitive C type.
+ *
+ * Sets an assertion that the value that @left evaluates to is greater than or
+ * equal to the value that @right evaluates to. This is the same as
+ * KUNIT_EXPECT_GE(), except
+ * it causes an assertion failure (see KUNIT_ASSERT_TRUE()) when the assertion
+ * is not met.
+ */
+#define KUNIT_ASSERT_GE(test, left, right) \
+		KUNIT_ASSERT_BINARY(test, left, >=, right)
+
+#define KUNIT_ASSERT_GE_MSG(test, left, right, fmt, ...)		       \
+		KUNIT_ASSERT_BINARY_MSG(test,				       \
+					left,				       \
+					>=,				       \
+					right,				       \
+					fmt,				       \
+					##__VA_ARGS__)
+
+/**
+ * KUNIT_ASSERT_STREQ() - An assertion that strings @left and @right are equal.
+ * @test: The test context object.
+ * @left: an arbitrary expression that evaluates to a null terminated string.
+ * @right: an arbitrary expression that evaluates to a null terminated string.
+ *
+ * Sets an assertion that the values that @left and @right evaluate to are
+ * equal. This is the same as KUNIT_EXPECT_STREQ(), except it causes an
+ * assertion failure (see KUNIT_ASSERT_TRUE()) when the assertion is not met.
+ */
+#define KUNIT_ASSERT_STREQ(test, left, right) do {			       \
+	struct kunit_stream *__stream = KUNIT_ASSERT_START(test);	       \
+	typeof(left) __left = (left);					       \
+	typeof(right) __right = (right);				       \
+									       \
+	__stream->add(__stream, "Asserted " #left " == " #right ", but\n");    \
+	__stream->add(__stream, "\t\t%s == %s\n", #left, __left);	       \
+	__stream->add(__stream, "\t\t%s == %s\n", #right, __right);	       \
+									       \
+	KUNIT_ASSERT_END(test, !strcmp(__left, __right), __stream);	       \
+} while (0)
+
+#define KUNIT_ASSERT_STREQ_MSG(test, left, right, fmt, ...) do {	       \
+	struct kunit_stream *__stream = KUNIT_ASSERT_START(test);	       \
+	typeof(left) __left = (left);					       \
+	typeof(right) __right = (right);				       \
+									       \
+	__stream->add(__stream, "Asserted " #left " == " #right ", but\n");    \
+	__stream->add(__stream, "\t\t%s == %s\n", #left, __left);	       \
+	__stream->add(__stream, "\t\t%s == %s\n", #right, __right);	       \
+	__stream->add(__stream, fmt, ##__VA_ARGS__);			       \
+									       \
+	KUNIT_ASSERT_END(test, !strcmp(__left, __right), __stream);	       \
+} while (0)
+
+/**
+ * KUNIT_ASSERT_NOT_ERR_OR_NULL() - Assertion that @ptr is not null and not err.
+ * @test: The test context object.
+ * @ptr: an arbitrary pointer.
+ *
+ * Sets an assertion that the value that @ptr evaluates to is not null and not
+ * an errno stored in a pointer. This is the same as
+ * KUNIT_EXPECT_NOT_ERR_OR_NULL(), except it causes an assertion failure (see
+ * KUNIT_ASSERT_TRUE()) when the assertion is not met.
+ */
+#define KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr) do {			       \
+	struct kunit_stream *__stream = KUNIT_ASSERT_START(test);	       \
+	typeof(ptr) __ptr = (ptr);					       \
+									       \
+	if (!__ptr)							       \
+		__stream->add(__stream,					       \
+			      "Asserted " #ptr " is not null, but is.");       \
+	if (IS_ERR(__ptr))						       \
+		__stream->add(__stream,					       \
+			      "Asserted " #ptr " is not error, but is: %ld",   \
+			      PTR_ERR(__ptr));				       \
+									       \
+	KUNIT_ASSERT_END(test, !IS_ERR_OR_NULL(__ptr), __stream);	       \
+} while (0)
+
+#define KUNIT_ASSERT_NOT_ERR_OR_NULL_MSG(test, ptr, fmt, ...) do {	       \
+	struct kunit_stream *__stream = KUNIT_ASSERT_START(test);	       \
+	typeof(ptr) __ptr = (ptr);					       \
+									       \
+	if (!__ptr) {							       \
+		__stream->add(__stream,					       \
+			      "Asserted " #ptr " is not null, but is.");       \
+		__stream->add(__stream, fmt, ##__VA_ARGS__);		       \
+	}								       \
+	if (IS_ERR(__ptr)) {						       \
+		__stream->add(__stream,					       \
+			      "Asserted " #ptr " is not error, but is: %ld",   \
+			      PTR_ERR(__ptr));				       \
+									       \
+		__stream->add(__stream, fmt, ##__VA_ARGS__);		       \
+	}								       \
+	KUNIT_ASSERT_END(test, !IS_ERR_OR_NULL(__ptr), __stream);	       \
+} while (0)
+
+/**
+ * KUNIT_ASSERT_SIGSEGV() - An assertion that @expr will cause a segfault.
+ * @test: The test context object.
+ * @expr: an arbitrary block of code.
+ *
+ * Sets an assertion that @expr, when evaluated, will cause a segfault.
+ * Currently this assertion is only really useful for testing the KUnit
+ * framework, as a segmentation fault in normal kernel code is always incorrect.
+ * However, the plan is to replace this assertion with an arbitrary death
+ * assertion similar to
+ * https://github.com/google/googletest/blob/master/googletest/docs/advanced.md#death-tests
+ * which will probably be massaged to make sense in the context of the kernel
+ * (maybe assert that a panic occurred, or that BUG() was called).
+ *
+ * NOTE: no code after this assertion will ever be executed.
+ */
+#define KUNIT_ASSERT_SIGSEGV(test, expr) do {				       \
+	test->set_death_test(test, true);				       \
+	expr;								       \
+	test->set_death_test(test, false);				       \
+	KUNIT_ASSERT_FAILURE(test,					       \
+			    "Asserted that " #expr " would cause death, but did not.");\
+} while (0)
+
 #endif /* _KUNIT_TEST_H */
diff --git a/kunit/Makefile b/kunit/Makefile
index 60a9ea6cb4697..e4c300f67479a 100644
--- a/kunit/Makefile
+++ b/kunit/Makefile
@@ -2,6 +2,7 @@ obj-$(CONFIG_KUNIT) +=			test.o \
 					string-stream.o \
 					kunit-stream.o
 
-obj-$(CONFIG_KUNIT_TEST) +=		string-stream-test.o
+obj-$(CONFIG_KUNIT_TEST) +=		test-test.o \
+					string-stream-test.o
 
 obj-$(CONFIG_KUNIT_EXAMPLE_TEST) +=	example-test.o
diff --git a/kunit/string-stream-test.c b/kunit/string-stream-test.c
index ec2675593c364..c5346a6c932ce 100644
--- a/kunit/string-stream-test.c
+++ b/kunit/string-stream-test.c
@@ -19,7 +19,7 @@ static void string_stream_test_get_string(struct kunit *test)
 	stream->add(stream, " %s", "bar");
 
 	output = stream->get_string(stream);
-	KUNIT_EXPECT_STREQ(test, output, "Foo bar");
+	KUNIT_ASSERT_STREQ(test, output, "Foo bar");
 	kfree(output);
 	destroy_string_stream(stream);
 }
@@ -34,16 +34,16 @@ static void string_stream_test_add_and_clear(struct kunit *test)
 		stream->add(stream, "A");
 
 	output = stream->get_string(stream);
-	KUNIT_EXPECT_STREQ(test, output, "AAAAAAAAAA");
-	KUNIT_EXPECT_EQ(test, stream->length, 10);
-	KUNIT_EXPECT_FALSE(test, stream->is_empty(stream));
+	KUNIT_ASSERT_STREQ(test, output, "AAAAAAAAAA");
+	KUNIT_ASSERT_EQ(test, stream->length, 10);
+	KUNIT_ASSERT_FALSE(test, stream->is_empty(stream));
 	kfree(output);
 
 	stream->clear(stream);
 
 	output = stream->get_string(stream);
-	KUNIT_EXPECT_STREQ(test, output, "");
-	KUNIT_EXPECT_TRUE(test, stream->is_empty(stream));
+	KUNIT_ASSERT_STREQ(test, output, "");
+	KUNIT_ASSERT_TRUE(test, stream->is_empty(stream));
 	destroy_string_stream(stream);
 }
 
diff --git a/kunit/test-test.c b/kunit/test-test.c
new file mode 100644
index 0000000000000..88b3bcf9c4e00
--- /dev/null
+++ b/kunit/test-test.c
@@ -0,0 +1,37 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * KUnit test for core test infrastructure.
+ *
+ * Copyright (C) 2018, Google LLC.
+ * Author: Brendan Higgins <brendanhiggins@google.com>
+ */
+#include <kunit/test.h>
+
+static void test_test_catches_segfault(struct kunit *test)
+{
+	void (*invalid_func)(void) = (void (*)(void)) SIZE_MAX;
+
+	KUNIT_ASSERT_SIGSEGV(test, invalid_func());
+}
+
+static int test_test_init(struct kunit *test)
+{
+	return 0;
+}
+
+static void test_test_exit(struct kunit *test)
+{
+}
+
+static struct kunit_case test_test_cases[] = {
+	KUNIT_CASE(test_test_catches_segfault),
+	{},
+};
+
+static struct kunit_module test_test_module = {
+	.name = "test-test",
+	.init = test_test_init,
+	.exit = test_test_exit,
+	.test_cases = test_test_cases,
+};
+module_test(test_test_module);
diff --git a/kunit/test.c b/kunit/test.c
index 0fe6571f23d41..db3b0ea0f5888 100644
--- a/kunit/test.c
+++ b/kunit/test.c
@@ -32,6 +32,27 @@ static void kunit_set_success(struct kunit *test, bool success)
 	spin_unlock_irqrestore(&test->lock, flags);
 }
 
+static bool kunit_get_death_test(struct kunit *test)
+{
+	unsigned long flags;
+	bool death_test;
+
+	spin_lock_irqsave(&test->lock, flags);
+	death_test = test->death_test;
+	spin_unlock_irqrestore(&test->lock, flags);
+
+	return death_test;
+}
+
+static void kunit_set_death_test(struct kunit *test, bool death_test)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&test->lock, flags);
+	test->death_test = death_test;
+	spin_unlock_irqrestore(&test->lock, flags);
+}
+
 static int kunit_vprintk_emit(const struct kunit *test,
 			      int level,
 			      const char *fmt,
@@ -70,13 +91,34 @@ static void kunit_fail(struct kunit *test, struct kunit_stream *stream)
 	stream->commit(stream);
 }
 
+static void __noreturn kunit_abort(struct kunit *test)
+{
+	kunit_set_death_test(test, true);
+	if (current->thread.fault_catcher && current->thread.is_running_test)
+		UML_LONGJMP(current->thread.fault_catcher, 1);
+
+	/*
+	 * Attempted to abort from an improperly initialized test context.
+	 */
+	kunit_err(test,
+		 "Attempted to abort from an improperly initialized test context!");
+	if (!current->thread.fault_catcher)
+		kunit_err(test, "No fault_catcher present!");
+	if (!current->thread.is_running_test)
+		kunit_err(test, "is_running_test not set!");
+	show_stack(NULL, NULL);
+	BUG();
+}
+
 int kunit_init_test(struct kunit *test, const char *name)
 {
 	spin_lock_init(&test->lock);
 	INIT_LIST_HEAD(&test->resources);
 	test->name = name;
+	test->set_death_test = kunit_set_death_test;
 	test->vprintk = kunit_vprintk;
 	test->fail = kunit_fail;
+	test->abort = kunit_abort;
 
 	return 0;
 }
@@ -122,16 +164,89 @@ static void kunit_run_case_cleanup(struct kunit *test,
 }
 
 /*
- * Performs all logic to run a test case.
+ * Handles an unexpected crash in a test case.
  */
-static bool kunit_run_case(struct kunit *test,
-			   struct kunit_module *module,
-			   struct kunit_case *test_case)
+static void kunit_handle_test_crash(struct kunit *test,
+				   struct kunit_module *module,
+				   struct kunit_case *test_case)
 {
-	kunit_set_success(test, true);
+	kunit_err(test, "%s crashed", test_case->name);
+	/*
+	 * TODO(brendanhiggins@google.com): This prints the stack trace up
+	 * through this frame, not up to the frame that caused the crash.
+	 */
+	show_stack(NULL, NULL);
 
-	kunit_run_case_internal(test, module, test_case);
-	kunit_run_case_cleanup(test, module, test_case);
+	kunit_case_internal_cleanup(test);
+}
+
+/*
+ * Performs all logic to run a test case. It also catches most errors that
+ * occur in a test case and reports them as failures.
+ *
+ * XXX: THIS DOES NOT FOLLOW NORMAL CONTROL FLOW. READ CAREFULLY!!!
+ */
+static bool kunit_run_case_catch_errors(struct kunit *test,
+				       struct kunit_module *module,
+				       struct kunit_case *test_case)
+{
+	jmp_buf fault_catcher;
+	int faulted;
+
+	kunit_set_success(test, true);
+	kunit_set_death_test(test, false);
+
+	/*
+	 * Tell the trap subsystem that we want to catch any segfaults that
+	 * occur.
+	 */
+	current->thread.is_running_test = true;
+	current->thread.fault_catcher = &fault_catcher;
+
+	/*
+	 * ENTER HANDLER: If a failure occurs, we enter here.
+	 */
+	faulted = UML_SETJMP(&fault_catcher);
+	if (faulted == 0) {
+		/*
+		 * NORMAL CASE: we have not run kunit_run_case_internal yet.
+		 *
+		 * kunit_run_case_internal may encounter a fatal error; if it
+		 * does, we will jump to ENTER_HANDLER above instead of
+		 * continuing normal control flow.
+		 */
+		kunit_run_case_internal(test, module, test_case);
+		/*
+		 * This line may never be reached.
+		 */
+		kunit_run_case_cleanup(test, module, test_case);
+	} else if (kunit_get_death_test(test)) {
+		/*
+		 * EXPECTED DEATH: kunit_run_case_internal encountered
+		 * anticipated fatal error. Everything should be in a safe
+		 * state.
+		 */
+		kunit_run_case_cleanup(test, module, test_case);
+	} else {
+		/*
+		 * UNEXPECTED DEATH: kunit_run_case_internal encountered an
+		 * unanticipated fatal error. We have no idea what the state of
+		 * the test case is in.
+		 */
+		kunit_handle_test_crash(test, module, test_case);
+		kunit_set_success(test, false);
+	}
+	/*
+	 * EXIT HANDLER: test case has been run and all possible errors have
+	 * been handled.
+	 */
+
+	/*
+	 * Tell the trap subsystem that we no longer want to catch any
+	 * segfaults.
+	 */
+	current->thread.fault_catcher = NULL;
+	current->thread.is_running_test = false;
 
 	return kunit_get_success(test);
 }
@@ -148,7 +263,7 @@ int kunit_run_tests(struct kunit_module *module)
 		return ret;
 
 	for (test_case = module->test_cases; test_case->run_case; test_case++) {
-		success = kunit_run_case(&test, module, test_case);
+		success = kunit_run_case_catch_errors(&test, module, test_case);
 		if (!success)
 			all_passed = false;
 
@@ -303,3 +418,36 @@ void kunit_expect_binary_msg(struct kunit *test,
 	kunit_expect_end(test, compare_result, stream);
 }
 
+void kunit_assert_binary_msg(struct kunit *test,
+			     long long left, const char *left_name,
+			     long long right, const char *right_name,
+			     bool compare_result,
+			     const char *compare_name,
+			     const char *file,
+			     const char *line,
+			     const char *fmt, ...)
+{
+	struct kunit_stream *stream = kunit_assert_start(test, file, line);
+	struct va_format vaf;
+	va_list args;
+
+	stream->add(stream,
+		    "Asserted %s %s %s, but\n",
+		    left_name, compare_name, right_name);
+	stream->add(stream, "\t\t%s == %lld\n", left_name, left);
+	stream->add(stream, "\t\t%s == %lld", right_name, right);
+
+	if (fmt) {
+		va_start(args, fmt);
+
+		vaf.fmt = fmt;
+		vaf.va = &args;
+
+		stream->add(stream, "\n%pV", &vaf);
+
+		va_end(args);
+	}
+
+	kunit_assert_end(test, compare_result, stream);
+}
+
-- 
2.20.0.rc0.387.gc7a69e6b6c-goog

^ permalink raw reply related	[flat|nested] 232+ messages in thread

* [RFC v3 09/19] kunit: test: add the concept of assertions
  2018-11-28 19:36 ` [RFC v3 09/19] kunit: test: add the concept of assertions brendanhiggins
@ 2018-11-28 19:36   ` Brendan Higgins
  0 siblings, 0 replies; 232+ messages in thread
From: Brendan Higgins @ 2018-11-28 19:36 UTC (permalink / raw)


Add support for assertions which are like expectations except the test
terminates if the assertion is not satisfied.

Signed-off-by: Brendan Higgins <brendanhiggins at google.com>
---
 include/kunit/test.h       | 388 ++++++++++++++++++++++++++++++++++++-
 kunit/Makefile             |   3 +-
 kunit/string-stream-test.c |  12 +-
 kunit/test-test.c          |  37 ++++
 kunit/test.c               | 164 +++++++++++++++-
 5 files changed, 586 insertions(+), 18 deletions(-)
 create mode 100644 kunit/test-test.c

diff --git a/include/kunit/test.h b/include/kunit/test.h
index 098a9dceef9ea..7be11dba0b14e 100644
--- a/include/kunit/test.h
+++ b/include/kunit/test.h
@@ -84,9 +84,10 @@ struct kunit;
  * @name: the name of the test case.
  *
  * A test case is a function with the signature, ``void (*)(struct kunit *)``
- * that makes expectations (see KUNIT_EXPECT_TRUE()) about code under test. Each
- * test case is associated with a &struct kunit_module and will be run after the
- * module's init function and followed by the module's exit function.
+ * that makes expectations and assertions (see KUNIT_EXPECT_TRUE() and
+ * KUNIT_ASSERT_TRUE()) about code under test. Each test case is associated with
+ * a &struct kunit_module and will be run after the module's init function and
+ * followed by the module's exit function.
  *
  * A test case should be static and should only be created with the KUNIT_CASE()
  * macro; additionally, every array of test cases should be terminated with an
@@ -168,11 +169,14 @@ struct kunit {
 	const char *name; /* Read only after initialization! */
 	spinlock_t lock; /* Gaurds all mutable test state. */
 	bool success; /* Protected by lock. */
+	bool death_test; /* Protected by lock. */
 	struct list_head resources; /* Protected by lock. */
+	void (*set_death_test)(struct kunit *test, bool death_test);
 	void (*vprintk)(const struct kunit *test,
 			const char *level,
 			struct va_format *vaf);
 	void (*fail)(struct kunit *test, struct kunit_stream *stream);
+	void (*abort)(struct kunit *test);
 };
 
 int kunit_init_test(struct kunit *test, const char *name);
@@ -652,4 +656,382 @@ static inline void kunit_expect_binary(struct kunit *test,
 	KUNIT_EXPECT_END(test, !IS_ERR_OR_NULL(__ptr), __stream);	       \
 } while (0)
 
+static inline struct kunit_stream *kunit_assert_start(struct kunit *test,
+						    const char *file,
+						    const char *line)
+{
+	struct kunit_stream *stream = kunit_new_stream(test);
+
+	stream->add(stream, "ASSERTION FAILED at %s:%s\n\t", file, line);
+
+	return stream;
+}
+
+static inline void kunit_assert_end(struct kunit *test,
+				   bool success,
+				   struct kunit_stream *stream)
+{
+	if (!success) {
+		test->fail(test, stream);
+		test->abort(test);
+	} else {
+		stream->clear(stream);
+	}
+}
+
+#define KUNIT_ASSERT_START(test) \
+		kunit_assert_start(test, __FILE__, __stringify(__LINE__))
+
+#define KUNIT_ASSERT_END(test, success, stream) \
+		kunit_assert_end(test, success, stream)
+
+#define KUNIT_ASSERT(test, success, message) do {			       \
+	struct kunit_stream *__stream = KUNIT_ASSERT_START(test);	       \
+									       \
+	__stream->add(__stream, message);				       \
+	KUNIT_ASSERT_END(test, success, __stream);			       \
+} while (0)
+
+#define KUNIT_ASSERT_MSG(test, success, message, fmt, ...) do {		       \
+	struct kunit_stream *__stream = KUNIT_ASSERT_START(test);	       \
+									       \
+	__stream->add(__stream, message);				       \
+	__stream->add(__stream, fmt, ##__VA_ARGS__);			       \
+	KUNIT_ASSERT_END(test, success, __stream);			       \
+} while (0)
+
+#define KUNIT_ASSERT_FAILURE(test, fmt, ...) \
+		KUNIT_ASSERT_MSG(test, false, "", fmt, ##__VA_ARGS__)
+
+/**
+ * KUNIT_ASSERT_TRUE() - Sets an assertion that @condition is true.
+ * @test: The test context object.
+ * @condition: an arbitrary boolean expression. The test fails and aborts when
+ * this does not evaluate to true.
+ *
+ * This and assertions of the form `KUNIT_ASSERT_*` will cause the test case to
+ * fail *and immediately abort* when the specified condition is not met. Unlike
+ * an expectation failure, it will prevent the test case from continuing to run;
+ * this is otherwise known as an *assertion failure*.
+ */
+#define KUNIT_ASSERT_TRUE(test, condition)				       \
+		KUNIT_ASSERT(test, (condition),				       \
+		       "Asserted " #condition " is true, but is false.")
+
+#define KUNIT_ASSERT_TRUE_MSG(test, condition, fmt, ...)		       \
+		KUNIT_ASSERT_MSG(test, (condition),			       \
+				"Asserted " #condition " is true, but is false.\n",\
+				fmt, ##__VA_ARGS__)
+
+/**
+ * KUNIT_ASSERT_FALSE() - Sets an assertion that @condition is false.
+ * @test: The test context object.
+ * @condition: an arbitrary boolean expression.
+ *
+ * Sets an assertion that the value that @condition evaluates to is false. This
+ * is the same as KUNIT_EXPECT_FALSE(), except it causes an assertion failure
+ * (see KUNIT_ASSERT_TRUE()) when the assertion is not met.
+ */
+#define KUNIT_ASSERT_FALSE(test, condition)				       \
+		KUNIT_ASSERT(test, !(condition),			       \
+		       "Asserted " #condition " is false, but is true.")
+
+#define KUNIT_ASSERT_FALSE_MSG(test, condition, fmt, ...)		       \
+		KUNIT_ASSERT_MSG(test, !(condition),			       \
+				"Asserted " #condition " is false, but is true.\n",\
+				fmt, ##__VA_ARGS__)
+
+void kunit_assert_binary_msg(struct kunit *test,
+			    long long left, const char *left_name,
+			    long long right, const char *right_name,
+			    bool compare_result,
+			    const char *compare_name,
+			    const char *file,
+			    const char *line,
+			    const char *fmt, ...);
+
+static inline void kunit_assert_binary(struct kunit *test,
+				      long long left, const char *left_name,
+				      long long right, const char *right_name,
+				      bool compare_result,
+				      const char *compare_name,
+				      const char *file,
+				      const char *line)
+{
+	kunit_assert_binary_msg(test,
+			       left, left_name,
+			       right, right_name,
+			       compare_result,
+			       compare_name,
+			       file,
+			       line,
+			       NULL);
+}
+
+/*
+ * A factory macro for defining the expectations for the basic comparisons
+ * defined for the built in types.
+ *
+ * Unfortunately, there is no common type that all types can be promoted to for
+ * which all the binary operators behave the same way as for the actual types
+ * (for example, there is no type that long long and unsigned long long can
+ * both be cast to where the comparison result is preserved for all values). So
+ * the best we can do is do the comparison in the original types and then coerce
+ * everything to long long for printing; this way, the comparison behaves
+ * correctly and the printed out value usually makes sense without
+ * interpretation, but can always be interpretted to figure out the actual
+ * value.
+ */
+#define KUNIT_ASSERT_BINARY(test, left, condition, right) do {		       \
+	typeof(left) __left = (left);					       \
+	typeof(right) __right = (right);				       \
+	kunit_assert_binary(test,					       \
+			   (long long) __left, #left,			       \
+			   (long long) __right, #right,			       \
+			   __left condition __right, #condition,	       \
+			   __FILE__, __stringify(__LINE__));		       \
+} while (0)
+
+#define KUNIT_ASSERT_BINARY_MSG(test, left, condition, right, fmt, ...) do {   \
+	typeof(left) __left = (left);					       \
+	typeof(right) __right = (right);				       \
+	kunit_assert_binary_msg(test,					       \
+			       (long long) __left, #left,		       \
+			       (long long) __right, #right,		       \
+			       __left condition __right, #condition,	       \
+			       __FILE__, __stringify(__LINE__),		       \
+			       fmt, ##__VA_ARGS__);			       \
+} while (0)
+
+/**
+ * KUNIT_ASSERT_EQ() - Sets an assertion that @left and @right are equal.
+ * @test: The test context object.
+ * @left: an arbitrary expression that evaluates to a primitive C type.
+ * @right: an arbitrary expression that evaluates to a primitive C type.
+ *
+ * Sets an assertion that the values that @left and @right evaluate to are
+ * equal. This is the same as KUNIT_EXPECT_EQ(), except it causes an assertion
+ * failure (see KUNIT_ASSERT_TRUE()) when the assertion is not met.
+ */
+#define KUNIT_ASSERT_EQ(test, left, right) \
+		KUNIT_ASSERT_BINARY(test, left, ==, right)
+
+#define KUNIT_ASSERT_EQ_MSG(test, left, right, fmt, ...)		       \
+		KUNIT_ASSERT_BINARY_MSG(test,				       \
+					left,				       \
+					==,				       \
+					right,				       \
+					fmt,				       \
+					##__VA_ARGS__)
+
+/**
+ * KUNIT_ASSERT_NE() - An assertion that @left and @right are not equal.
+ * @test: The test context object.
+ * @left: an arbitrary expression that evaluates to a primitive C type.
+ * @right: an arbitrary expression that evaluates to a primitive C type.
+ *
+ * Sets an assertion that the values that @left and @right evaluate to are not
+ * equal. This is the same as KUNIT_EXPECT_NE(), except it causes an assertion
+ * failure (see KUNIT_ASSERT_TRUE()) when the assertion is not met.
+ */
+#define KUNIT_ASSERT_NE(test, left, right) \
+		KUNIT_ASSERT_BINARY(test, left, !=, right)
+
+#define KUNIT_ASSERT_NE_MSG(test, left, right, fmt, ...)		       \
+		KUNIT_ASSERT_BINARY_MSG(test,				       \
+					left,				       \
+					!=,				       \
+					right,				       \
+					fmt,				       \
+					##__VA_ARGS__)
+
+/**
+ * KUNIT_ASSERT_LT() - An assertion that @left is less than @right.
+ * @test: The test context object.
+ * @left: an arbitrary expression that evaluates to a primitive C type.
+ * @right: an arbitrary expression that evaluates to a primitive C type.
+ *
+ * Sets an assertion that the value that @left evaluates to is less than the
+ * value that @right evaluates to. This is the same as KUNIT_EXPECT_LT(), except
+ * it causes an assertion failure (see KUNIT_ASSERT_TRUE()) when the assertion
+ * is not met.
+ */
+#define KUNIT_ASSERT_LT(test, left, right) \
+		KUNIT_ASSERT_BINARY(test, left, <, right)
+
+#define KUNIT_ASSERT_LT_MSG(test, left, right, fmt, ...)		       \
+		KUNIT_ASSERT_BINARY_MSG(test,				       \
+					left,				       \
+					<,				       \
+					right,				       \
+					fmt,				       \
+					##__VA_ARGS__)
+/**
+ * KUNIT_ASSERT_LE() - An assertion that @left is less than or equal to @right.
+ * @test: The test context object.
+ * @left: an arbitrary expression that evaluates to a primitive C type.
+ * @right: an arbitrary expression that evaluates to a primitive C type.
+ *
+ * Sets an assertion that the value that @left evaluates to is less than or
+ * equal to the value that @right evaluates to. This is the same as
+ * KUNIT_EXPECT_LE(), except it causes an assertion failure (see
+ * KUNIT_ASSERT_TRUE()) when the assertion is not met.
+ */
+#define KUNIT_ASSERT_LE(test, left, right) \
+		KUNIT_ASSERT_BINARY(test, left, <=, right)
+
+#define KUNIT_ASSERT_LE_MSG(test, left, right, fmt, ...)		       \
+		KUNIT_ASSERT_BINARY_MSG(test,				       \
+					left,				       \
+					<=,				       \
+					right,				       \
+					fmt,				       \
+					##__VA_ARGS__)
+/**
+ * KUNIT_ASSERT_GT() - An assertion that @left is greater than @right.
+ * @test: The test context object.
+ * @left: an arbitrary expression that evaluates to a primitive C type.
+ * @right: an arbitrary expression that evaluates to a primitive C type.
+ *
+ * Sets an assertion that the value that @left evaluates to is greater than the
+ * value that @right evaluates to. This is the same as KUNIT_EXPECT_GT(), except
+ * it causes an assertion failure (see KUNIT_ASSERT_TRUE()) when the assertion
+ * is not met.
+ */
+#define KUNIT_ASSERT_GT(test, left, right) \
+		KUNIT_ASSERT_BINARY(test, left, >, right)
+
+#define KUNIT_ASSERT_GT_MSG(test, left, right, fmt, ...)		       \
+		KUNIT_ASSERT_BINARY_MSG(test,				       \
+					left,				       \
+					>,				       \
+					right,				       \
+					fmt,				       \
+					##__VA_ARGS__)
+
+/**
+ * KUNIT_ASSERT_GE() - Assertion that @left is greater than or equal to @right.
+ * @test: The test context object.
+ * @left: an arbitrary expression that evaluates to a primitive C type.
+ * @right: an arbitrary expression that evaluates to a primitive C type.
+ *
+ * Sets an assertion that the value that @left evaluates to is greater than
+ * or equal to the value that @right evaluates to. This is the same as
+ * KUNIT_EXPECT_GE(), except
+ * it causes an assertion failure (see KUNIT_ASSERT_TRUE()) when the assertion
+ * is not met.
+ */
+#define KUNIT_ASSERT_GE(test, left, right) \
+		KUNIT_ASSERT_BINARY(test, left, >=, right)
+
+#define KUNIT_ASSERT_GE_MSG(test, left, right, fmt, ...)		       \
+		KUNIT_ASSERT_BINARY_MSG(test,				       \
+					left,				       \
+					>=,				       \
+					right,				       \
+					fmt,				       \
+					##__VA_ARGS__)
+
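+/*
+ * Example of the binary assertion family (hypothetical test code, shown
+ * only to illustrate the API; compute_checksum() is an assumed helper and
+ * not part of this patch):
+ *
+ *	int csum = compute_checksum(buf, len);
+ *
+ *	KUNIT_ASSERT_GE(test, csum, 0);
+ *	KUNIT_ASSERT_NE_MSG(test, csum, 0xdead,
+ *			    "poisoned checksum: %d", csum);
+ */
+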
+/**
+ * KUNIT_ASSERT_STREQ() - An assertion that strings @left and @right are equal.
+ * @test: The test context object.
+ * @left: an arbitrary expression that evaluates to a null terminated string.
+ * @right: an arbitrary expression that evaluates to a null terminated string.
+ *
+ * Sets an assertion that the values that @left and @right evaluate to are
+ * equal. This is the same as KUNIT_EXPECT_STREQ(), except it causes an
+ * assertion failure (see KUNIT_ASSERT_TRUE()) when the assertion is not met.
+ */
+#define KUNIT_ASSERT_STREQ(test, left, right) do {			       \
+	struct kunit_stream *__stream = KUNIT_ASSERT_START(test);	       \
+	typeof(left) __left = (left);					       \
+	typeof(right) __right = (right);				       \
+									       \
+	__stream->add(__stream, "Asserted " #left " == " #right ", but\n");    \
+	__stream->add(__stream, "\t\t%s == %s\n", #left, __left);	       \
+	__stream->add(__stream, "\t\t%s == %s\n", #right, __right);	       \
+									       \
+	KUNIT_ASSERT_END(test, !strcmp(__left, __right), __stream);	       \
+} while (0)
+
+#define KUNIT_ASSERT_STREQ_MSG(test, left, right, fmt, ...) do {	       \
+	struct kunit_stream *__stream = KUNIT_ASSERT_START(test);	       \
+	typeof(left) __left = (left);					       \
+	typeof(right) __right = (right);				       \
+									       \
+	__stream->add(__stream, "Asserted " #left " == " #right ", but\n");    \
+	__stream->add(__stream, "\t\t%s == %s\n", #left, __left);	       \
+	__stream->add(__stream, "\t\t%s == %s\n", #right, __right);	       \
+	__stream->add(__stream, fmt, ##__VA_ARGS__);			       \
+									       \
+	KUNIT_ASSERT_END(test, !strcmp(__left, __right), __stream);	       \
+} while (0)
+
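+/*
+ * Example of string assertion usage (hypothetical; get_greeting() is an
+ * assumed helper, not part of this patch):
+ *
+ *	KUNIT_ASSERT_STREQ(test, get_greeting(), "hello");
+ */
+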
+/**
+ * KUNIT_ASSERT_NOT_ERR_OR_NULL() - Assertion that @ptr is not null and not err.
+ * @test: The test context object.
+ * @ptr: an arbitrary pointer.
+ *
+ * Sets an assertion that the value that @ptr evaluates to is not null and not
+ * an errno stored in a pointer. This is the same as
+ * KUNIT_EXPECT_NOT_ERR_OR_NULL(), except it causes an assertion failure (see
+ * KUNIT_ASSERT_TRUE()) when the assertion is not met.
+ */
+#define KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr) do {			       \
+	struct kunit_stream *__stream = KUNIT_ASSERT_START(test);	       \
+	typeof(ptr) __ptr = (ptr);					       \
+									       \
+	if (!__ptr)							       \
+		__stream->add(__stream,					       \
+			      "Asserted " #ptr " is not null, but is.");       \
+	if (IS_ERR(__ptr))						       \
+		__stream->add(__stream,					       \
+			      "Asserted " #ptr " is not error, but is: %ld",   \
+			      PTR_ERR(__ptr));				       \
+									       \
+	KUNIT_ASSERT_END(test, !IS_ERR_OR_NULL(__ptr), __stream);	       \
+} while (0)
+
+#define KUNIT_ASSERT_NOT_ERR_OR_NULL_MSG(test, ptr, fmt, ...) do {	       \
+	struct kunit_stream *__stream = KUNIT_ASSERT_START(test);	       \
+	typeof(ptr) __ptr = (ptr);					       \
+									       \
+	if (!__ptr) {							       \
+		__stream->add(__stream,					       \
+			      "Asserted " #ptr " is not null, but is.");       \
+		__stream->add(__stream, fmt, ##__VA_ARGS__);		       \
+	}								       \
+	if (IS_ERR(__ptr)) {						       \
+		__stream->add(__stream,					       \
+			      "Asserted " #ptr " is not error, but is: %ld",   \
+			      PTR_ERR(__ptr));				       \
+									       \
+		__stream->add(__stream, fmt, ##__VA_ARGS__);		       \
+	}								       \
+	KUNIT_ASSERT_END(test, !IS_ERR_OR_NULL(__ptr), __stream);	       \
+} while (0)
+
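+/*
+ * Example (sketch): guard a pointer returned by an allocator before it is
+ * used; alloc_foo() is an assumed helper that returns NULL or an ERR_PTR
+ * on failure:
+ *
+ *	struct foo *foo_ptr = alloc_foo(test);
+ *
+ *	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, foo_ptr);
+ */
+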
+/**
+ * KUNIT_ASSERT_SIGSEGV() - An assertion that @expr will cause a segfault.
+ * @test: The test context object.
+ * @expr: an arbitrary block of code.
+ *
+ * Sets an assertion that @expr, when evaluated, will cause a segfault.
+ * Currently this assertion is only really useful for testing the KUnit
+ * framework, as a segmentation fault in normal kernel code is always incorrect.
+ * However, the plan is to replace this assertion with an arbitrary death
+ * assertion similar to
+ * https://github.com/google/googletest/blob/master/googletest/docs/advanced.md#death-tests
+ * which will probably be massaged to make sense in the context of the kernel
+ * (maybe assert that a panic occurred, or that BUG() was called).
+ *
+ * NOTE: no code after this assertion will ever be executed.
+ */
+#define KUNIT_ASSERT_SIGSEGV(test, expr) do {				       \
+	test->set_death_test(test, true);				       \
+	expr;								       \
+	test->set_death_test(test, false);				       \
+	KUNIT_ASSERT_FAILURE(test,					       \
+			    "Asserted that " #expr " would cause death, but it did not.");\
+} while (0)
+
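+/*
+ * Example usage of KUNIT_ASSERT_SIGSEGV, taken from the KUnit self test
+ * added later in this patch:
+ *
+ *	void (*invalid_func)(void) = (void (*)(void)) SIZE_MAX;
+ *
+ *	KUNIT_ASSERT_SIGSEGV(test, invalid_func());
+ */
+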
 #endif /* _KUNIT_TEST_H */
diff --git a/kunit/Makefile b/kunit/Makefile
index 60a9ea6cb4697..e4c300f67479a 100644
--- a/kunit/Makefile
+++ b/kunit/Makefile
@@ -2,6 +2,7 @@ obj-$(CONFIG_KUNIT) +=			test.o \
 					string-stream.o \
 					kunit-stream.o
 
-obj-$(CONFIG_KUNIT_TEST) +=		string-stream-test.o
+obj-$(CONFIG_KUNIT_TEST) +=		test-test.o \
+					string-stream-test.o
 
 obj-$(CONFIG_KUNIT_EXAMPLE_TEST) +=	example-test.o
diff --git a/kunit/string-stream-test.c b/kunit/string-stream-test.c
index ec2675593c364..c5346a6c932ce 100644
--- a/kunit/string-stream-test.c
+++ b/kunit/string-stream-test.c
@@ -19,7 +19,7 @@ static void string_stream_test_get_string(struct kunit *test)
 	stream->add(stream, " %s", "bar");
 
 	output = stream->get_string(stream);
-	KUNIT_EXPECT_STREQ(test, output, "Foo bar");
+	KUNIT_ASSERT_STREQ(test, output, "Foo bar");
 	kfree(output);
 	destroy_string_stream(stream);
 }
@@ -34,16 +34,16 @@ static void string_stream_test_add_and_clear(struct kunit *test)
 		stream->add(stream, "A");
 
 	output = stream->get_string(stream);
-	KUNIT_EXPECT_STREQ(test, output, "AAAAAAAAAA");
-	KUNIT_EXPECT_EQ(test, stream->length, 10);
-	KUNIT_EXPECT_FALSE(test, stream->is_empty(stream));
+	KUNIT_ASSERT_STREQ(test, output, "AAAAAAAAAA");
+	KUNIT_ASSERT_EQ(test, stream->length, 10);
+	KUNIT_ASSERT_FALSE(test, stream->is_empty(stream));
 	kfree(output);
 
 	stream->clear(stream);
 
 	output = stream->get_string(stream);
-	KUNIT_EXPECT_STREQ(test, output, "");
-	KUNIT_EXPECT_TRUE(test, stream->is_empty(stream));
+	KUNIT_ASSERT_STREQ(test, output, "");
+	KUNIT_ASSERT_TRUE(test, stream->is_empty(stream));
 	destroy_string_stream(stream);
 }
 
diff --git a/kunit/test-test.c b/kunit/test-test.c
new file mode 100644
index 0000000000000..88b3bcf9c4e00
--- /dev/null
+++ b/kunit/test-test.c
@@ -0,0 +1,37 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * KUnit test for core test infrastructure.
+ *
+ * Copyright (C) 2018, Google LLC.
+ * Author: Brendan Higgins <brendanhiggins at google.com>
+ */
+#include <kunit/test.h>
+
+static void test_test_catches_segfault(struct kunit *test)
+{
+	void (*invalid_func)(void) = (void (*)(void)) SIZE_MAX;
+
+	KUNIT_ASSERT_SIGSEGV(test, invalid_func());
+}
+
+static int test_test_init(struct kunit *test)
+{
+	return 0;
+}
+
+static void test_test_exit(struct kunit *test)
+{
+}
+
+static struct kunit_case test_test_cases[] = {
+	KUNIT_CASE(test_test_catches_segfault),
+	{},
+};
+
+static struct kunit_module test_test_module = {
+	.name = "test-test",
+	.init = test_test_init,
+	.exit = test_test_exit,
+	.test_cases = test_test_cases,
+};
+module_test(test_test_module);
diff --git a/kunit/test.c b/kunit/test.c
index 0fe6571f23d41..db3b0ea0f5888 100644
--- a/kunit/test.c
+++ b/kunit/test.c
@@ -32,6 +32,27 @@ static void kunit_set_success(struct kunit *test, bool success)
 	spin_unlock_irqrestore(&test->lock, flags);
 }
 
+static bool kunit_get_death_test(struct kunit *test)
+{
+	unsigned long flags;
+	bool death_test;
+
+	spin_lock_irqsave(&test->lock, flags);
+	death_test = test->death_test;
+	spin_unlock_irqrestore(&test->lock, flags);
+
+	return death_test;
+}
+
+static void kunit_set_death_test(struct kunit *test, bool death_test)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&test->lock, flags);
+	test->death_test = death_test;
+	spin_unlock_irqrestore(&test->lock, flags);
+}
+
 static int kunit_vprintk_emit(const struct kunit *test,
 			      int level,
 			      const char *fmt,
@@ -70,13 +91,34 @@ static void kunit_fail(struct kunit *test, struct kunit_stream *stream)
 	stream->commit(stream);
 }
 
+static void __noreturn kunit_abort(struct kunit *test)
+{
+	kunit_set_death_test(test, true);
+	if (current->thread.fault_catcher && current->thread.is_running_test)
+		UML_LONGJMP(current->thread.fault_catcher, 1);
+
+	/*
+	 * Attempted to abort from an improperly initialized test context.
+	 */
+	kunit_err(test,
+		 "Attempted to abort from an improperly initialized test context!");
+	if (!current->thread.fault_catcher)
+		kunit_err(test, "No fault_catcher present!");
+	if (!current->thread.is_running_test)
+		kunit_err(test, "is_running_test not set!");
+	show_stack(NULL, NULL);
+	BUG();
+}
+
 int kunit_init_test(struct kunit *test, const char *name)
 {
 	spin_lock_init(&test->lock);
 	INIT_LIST_HEAD(&test->resources);
 	test->name = name;
+	test->set_death_test = kunit_set_death_test;
 	test->vprintk = kunit_vprintk;
 	test->fail = kunit_fail;
+	test->abort = kunit_abort;
 
 	return 0;
 }
@@ -122,16 +164,89 @@ static void kunit_run_case_cleanup(struct kunit *test,
 }
 
 /*
- * Performs all logic to run a test case.
+ * Handles an unexpected crash in a test case.
  */
-static bool kunit_run_case(struct kunit *test,
-			   struct kunit_module *module,
-			   struct kunit_case *test_case)
+static void kunit_handle_test_crash(struct kunit *test,
+				   struct kunit_module *module,
+				   struct kunit_case *test_case)
 {
-	kunit_set_success(test, true);
+	kunit_err(test, "%s crashed", test_case->name);
+	/*
+	 * TODO(brendanhiggins at google.com): This prints the stack trace up
+	 * through this frame, not up to the frame that caused the crash.
+	 */
+	show_stack(NULL, NULL);
 
-	kunit_run_case_internal(test, module, test_case);
-	kunit_run_case_cleanup(test, module, test_case);
+	kunit_case_internal_cleanup(test);
+}
+
+/*
+ * Performs all logic to run a test case. It also catches most errors that
+ * occur in a test case and reports them as failures.
+ *
+ * XXX: THIS DOES NOT FOLLOW NORMAL CONTROL FLOW. READ CAREFULLY!!!
+ */
+static bool kunit_run_case_catch_errors(struct kunit *test,
+				       struct kunit_module *module,
+				       struct kunit_case *test_case)
+{
+	jmp_buf fault_catcher;
+	int faulted;
+
+	kunit_set_success(test, true);
+	kunit_set_death_test(test, false);
+
+	/*
+	 * Tell the trap subsystem that we want to catch any segfaults that
+	 * occur.
+	 */
+	current->thread.is_running_test = true;
+	current->thread.fault_catcher = &fault_catcher;
+
+	/*
+	 * ENTER HANDLER: If a failure occurs, we enter here.
+	 */
+	faulted = UML_SETJMP(&fault_catcher);
+	if (faulted == 0) {
+		/*
+		 * NORMAL CASE: we have not run kunit_run_case_internal yet.
+		 *
+		 * kunit_run_case_internal may encounter a fatal error; if it
+		 * does, we will jump to ENTER HANDLER above instead of
+		 * continuing normal control flow.
+		 */
+		kunit_run_case_internal(test, module, test_case);
+		/*
+		 * This line is not reached if kunit_run_case_internal hit a
+		 * fatal error and jumped to the handler above.
+		 */
+		kunit_run_case_cleanup(test, module, test_case);
+	} else if (kunit_get_death_test(test)) {
+		/*
+		 * EXPECTED DEATH: kunit_run_case_internal encountered
+		 * anticipated fatal error. Everything should be in a safe
+		 * state.
+		 */
+		kunit_run_case_cleanup(test, module, test_case);
+	} else {
+		/*
+		 * UNEXPECTED DEATH: kunit_run_case_internal encountered an
+		 * unanticipated fatal error. We have no idea what the state of
+		 * the test case is in.
+		 */
+		kunit_handle_test_crash(test, module, test_case);
+		kunit_set_success(test, false);
+	}
+	/*
+	 * EXIT HANDLER: test case has been run and all possible errors have
+	 * been handled.
+	 */
+
+	/*
+	 * Tell the trap subsystem that we no longer want to catch any
+	 * segfaults.
+	 */
+	current->thread.fault_catcher = NULL;
+	current->thread.is_running_test = false;
 
 	return kunit_get_success(test);
 }
@@ -148,7 +263,7 @@ int kunit_run_tests(struct kunit_module *module)
 		return ret;
 
 	for (test_case = module->test_cases; test_case->run_case; test_case++) {
-		success = kunit_run_case(&test, module, test_case);
+		success = kunit_run_case_catch_errors(&test, module, test_case);
 		if (!success)
 			all_passed = false;
 
@@ -303,3 +418,36 @@ void kunit_expect_binary_msg(struct kunit *test,
 	kunit_expect_end(test, compare_result, stream);
 }
 
+void kunit_assert_binary_msg(struct kunit *test,
+			     long long left, const char *left_name,
+			     long long right, const char *right_name,
+			     bool compare_result,
+			     const char *compare_name,
+			     const char *file,
+			     const char *line,
+			     const char *fmt, ...)
+{
+	struct kunit_stream *stream = kunit_assert_start(test, file, line);
+	struct va_format vaf;
+	va_list args;
+
+	stream->add(stream,
+		    "Asserted %s %s %s, but\n",
+		    left_name, compare_name, right_name);
+	stream->add(stream, "\t\t%s == %lld\n", left_name, left);
+	stream->add(stream, "\t\t%s == %lld", right_name, right);
+
+	if (fmt) {
+		va_start(args, fmt);
+
+		vaf.fmt = fmt;
+		vaf.va = &args;
+
+		stream->add(stream, "\n%pV", &vaf);
+
+		va_end(args);
+	}
+
+	kunit_assert_end(test, compare_result, stream);
+}
+
-- 
2.20.0.rc0.387.gc7a69e6b6c-goog

^ permalink raw reply related	[flat|nested] 232+ messages in thread

* [RFC v3 10/19] kunit: test: add test managed resource tests
  2018-11-28 19:36 [RFC v3 00/19] kunit: introduce KUnit, the Linux kernel unit testing framework brendanhiggins
                   ` (9 preceding siblings ...)
  2018-11-28 19:36 ` [RFC v3 09/19] kunit: test: add the concept of assertions brendanhiggins
@ 2018-11-28 19:36 ` brendanhiggins
  2018-11-28 19:36   ` Brendan Higgins
  2018-11-28 19:36 ` [RFC v3 11/19] kunit: add Python libraries for handling KUnit config and kernel brendanhiggins
                   ` (10 subsequent siblings)
  21 siblings, 1 reply; 232+ messages in thread
From: brendanhiggins @ 2018-11-28 19:36 UTC (permalink / raw)


Tests how test cases interact with test managed resources over their
lifetime.

Signed-off-by: Avinash Kondareddy <avikr at google.com>
Signed-off-by: Brendan Higgins <brendanhiggins at google.com>
---
 kunit/test-test.c | 121 +++++++++++++++++++++++++++++++++++++++++-----
 1 file changed, 110 insertions(+), 11 deletions(-)

diff --git a/kunit/test-test.c b/kunit/test-test.c
index 88b3bcf9c4e00..36fd95c90a26a 100644
--- a/kunit/test-test.c
+++ b/kunit/test-test.c
@@ -7,31 +7,130 @@
  */
 #include <kunit/test.h>
 
-static void test_test_catches_segfault(struct kunit *test)
+static void kunit_test_catches_segfault(struct kunit *test)
 {
 	void (*invalid_func)(void) = (void (*)(void)) SIZE_MAX;
 
 	KUNIT_ASSERT_SIGSEGV(test, invalid_func());
 }
 
-static int test_test_init(struct kunit *test)
+/*
+ * Context for testing test managed resources.
+ * is_resource_initialized stands in for the state of an arbitrary resource.
+ */
+struct kunit_test_context {
+	struct kunit test;
+	bool is_resource_initialized;
+};
+
+static int fake_resource_init(struct kunit_resource *res, void *context)
 {
+	struct kunit_test_context *ctx = context;
+
+	res->allocation = &ctx->is_resource_initialized;
+	ctx->is_resource_initialized = true;
 	return 0;
 }
 
-static void test_test_exit(struct kunit *test)
+static void fake_resource_free(struct kunit_resource *res)
+{
+	bool *is_resource_initialized = res->allocation;
+
+	*is_resource_initialized = false;
+}
+
+static void kunit_test_init_resources(struct kunit *test)
+{
+	struct kunit_test_context *ctx = test->priv;
+
+	kunit_init_test(&ctx->test, "testing_test_init_test");
+
+	KUNIT_EXPECT_TRUE(test, list_empty(&ctx->test.resources));
+}
+
+static void kunit_test_alloc_resource(struct kunit *test)
+{
+	struct kunit_test_context *ctx = test->priv;
+	struct kunit_resource *res;
+	kunit_resource_free_t free = fake_resource_free;
+
+	res = kunit_alloc_resource(&ctx->test,
+				   fake_resource_init,
+				   fake_resource_free,
+				   ctx);
+
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, res);
+	KUNIT_EXPECT_EQ(test, &ctx->is_resource_initialized, res->allocation);
+	KUNIT_EXPECT_TRUE(test, list_is_last(&res->node, &ctx->test.resources));
+	KUNIT_EXPECT_EQ(test, free, res->free);
+}
+
+static void kunit_test_free_resource(struct kunit *test)
 {
+	struct kunit_test_context *ctx = test->priv;
+	struct kunit_resource *res = kunit_alloc_resource(&ctx->test,
+							  fake_resource_init,
+							  fake_resource_free,
+							  ctx);
+
+	kunit_free_resource(&ctx->test, res);
+
+	KUNIT_EXPECT_EQ(test, false, ctx->is_resource_initialized);
+	KUNIT_EXPECT_TRUE(test, list_empty(&ctx->test.resources));
+}
+
+static void kunit_test_cleanup_resources(struct kunit *test)
+{
+	int i;
+	const int num_res = 5;
+	struct kunit_test_context *ctx = test->priv;
+	struct kunit_resource *resources[num_res];
+
+	for (i = 0; i < num_res; i++) {
+		resources[i] = kunit_alloc_resource(&ctx->test,
+						    fake_resource_init,
+						    fake_resource_free,
+						    ctx);
+	}
+
+	kunit_cleanup(&ctx->test);
+
+	KUNIT_EXPECT_TRUE(test, list_empty(&ctx->test.resources));
+}
+
+static int kunit_test_init(struct kunit *test)
+{
+	struct kunit_test_context *ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
+
+	if (!ctx)
+		return -ENOMEM;
+	test->priv = ctx;
+
+	kunit_init_test(&ctx->test, "test_test_context");
+	return 0;
+}
+
+static void kunit_test_exit(struct kunit *test)
+{
+	struct kunit_test_context *ctx = test->priv;
+
+	kunit_cleanup(&ctx->test);
+	kfree(ctx);
 }
 
-static struct kunit_case test_test_cases[] = {
-	KUNIT_CASE(test_test_catches_segfault),
+static struct kunit_case kunit_test_cases[] = {
+	KUNIT_CASE(kunit_test_catches_segfault),
+	KUNIT_CASE(kunit_test_init_resources),
+	KUNIT_CASE(kunit_test_alloc_resource),
+	KUNIT_CASE(kunit_test_free_resource),
+	KUNIT_CASE(kunit_test_cleanup_resources),
 	{},
 };
 
-static struct kunit_module test_test_module = {
-	.name = "test-test",
-	.init = test_test_init,
-	.exit = test_test_exit,
-	.test_cases = test_test_cases,
+static struct kunit_module kunit_test_module = {
+	.name = "kunit-test",
+	.init = kunit_test_init,
+	.exit = kunit_test_exit,
+	.test_cases = kunit_test_cases,
 };
-module_test(test_test_module);
+module_test(kunit_test_module);
-- 
2.20.0.rc0.387.gc7a69e6b6c-goog

^ permalink raw reply related	[flat|nested] 232+ messages in thread

* [RFC v3 11/19] kunit: add Python libraries for handling KUnit config and kernel
  2018-11-28 19:36 [RFC v3 00/19] kunit: introduce KUnit, the Linux kernel unit testing framework brendanhiggins
                   ` (10 preceding siblings ...)
  2018-11-28 19:36 ` [RFC v3 10/19] kunit: test: add test managed resource tests brendanhiggins
@ 2018-11-28 19:36 ` brendanhiggins
  2018-11-28 19:36   ` Brendan Higgins
                     ` (2 more replies)
  2018-11-28 19:36 ` [RFC v3 12/19] kunit: add KUnit wrapper script and simple output parser brendanhiggins
                   ` (9 subsequent siblings)
  21 siblings, 3 replies; 232+ messages in thread
From: brendanhiggins @ 2018-11-28 19:36 UTC (permalink / raw)


The ultimate goal is to create minimal isolated test binaries; in the
meantime we are using UML to provide the infrastructure to run tests, so
define an abstract way to configure and run tests that allows us to
change the context in which tests are built without affecting the user.
This also makes features such as pretty, dynamic error reporting easier
to implement.

kunit_config.py:
  - parse .config and Kconfig files.

kunit_kernel.py: provides helper functions to:
  - configure the kernel using kunitconfig.
  - build the kernel with the appropriate configuration.
  - provide a function to invoke the kernel and stream the output back
    (see the usage sketch below).
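
For example, the expected flow from a wrapper (a sketch of the intended
usage; the actual wrapper script lands in a later patch) looks roughly
like this:

	import kunit_kernel

	linux = kunit_kernel.LinuxSourceTree()  # reads ./kunitconfig
	if linux.build_reconfig() and linux.build_um_kernel(jobs=8):
		for line in linux.run_kernel():
			print(line)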

Signed-off-by: Felix Guo <felixguoxiuping at gmail.com>
Signed-off-by: Brendan Higgins <brendanhiggins at google.com>
---
 tools/testing/kunit/.gitignore      |   3 +
 tools/testing/kunit/kunit_config.py |  60 +++++++++++++
 tools/testing/kunit/kunit_kernel.py | 126 ++++++++++++++++++++++++++++
 3 files changed, 189 insertions(+)
 create mode 100644 tools/testing/kunit/.gitignore
 create mode 100644 tools/testing/kunit/kunit_config.py
 create mode 100644 tools/testing/kunit/kunit_kernel.py

diff --git a/tools/testing/kunit/.gitignore b/tools/testing/kunit/.gitignore
new file mode 100644
index 0000000000000..c791ff59a37a9
--- /dev/null
+++ b/tools/testing/kunit/.gitignore
@@ -0,0 +1,3 @@
+# Byte-compiled / optimized / DLL files
+__pycache__/
+*.py[cod]
\ No newline at end of file
diff --git a/tools/testing/kunit/kunit_config.py b/tools/testing/kunit/kunit_config.py
new file mode 100644
index 0000000000000..183bd5e758762
--- /dev/null
+++ b/tools/testing/kunit/kunit_config.py
@@ -0,0 +1,60 @@
+# SPDX-License-Identifier: GPL-2.0
+
+import collections
+import re
+
+CONFIG_IS_NOT_SET_PATTERN = r'^# CONFIG_\w+ is not set$'
+CONFIG_PATTERN = r'^CONFIG_\w+=\S+$'
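+# The two patterns above are the only accepted line forms (illustrative
+# examples; blank lines and other '#' comments are skipped, anything else
+# raises KconfigParseError):
+#   CONFIG_KUNIT=y
+#   # CONFIG_MMU is not set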
+
+KconfigEntryBase = collections.namedtuple('KconfigEntry', ['raw_entry'])
+
+
+class KconfigEntry(KconfigEntryBase):
+
+	def __str__(self) -> str:
+		return self.raw_entry
+
+
+class KconfigParseError(Exception):
+	"""Error parsing Kconfig defconfig or .config."""
+
+
+class Kconfig(object):
+	"""Represents defconfig or .config specified using the Kconfig language."""
+
+	def __init__(self):
+		self._entries = []
+
+	def entries(self):
+		return set(self._entries)
+
+	def add_entry(self, entry: KconfigEntry) -> None:
+		self._entries.append(entry)
+
+	def is_subset_of(self, other: "Kconfig") -> bool:
+		return self.entries().issubset(other.entries())
+
+	def write_to_file(self, path: str) -> None:
+		with open(path, 'w') as f:
+			for entry in self.entries():
+				f.write(str(entry) + '\n')
+
+	def parse_from_string(self, blob: str) -> None:
+		"""Parses a string containing KconfigEntry lines and populates this Kconfig."""
+		self._entries = []
+		is_not_set_matcher = re.compile(CONFIG_IS_NOT_SET_PATTERN)
+		config_matcher = re.compile(CONFIG_PATTERN)
+		for line in blob.split('\n'):
+			line = line.strip()
+			if not line:
+				continue
+			elif config_matcher.match(line) or is_not_set_matcher.match(line):
+				self._entries.append(KconfigEntry(line))
+			elif line[0] == '#':
+				continue
+			else:
+				raise KconfigParseError('Failed to parse: ' + line)
+
+	def read_from_file(self, path: str) -> None:
+		with open(path, 'r') as f:
+			self.parse_from_string(f.read())
diff --git a/tools/testing/kunit/kunit_kernel.py b/tools/testing/kunit/kunit_kernel.py
new file mode 100644
index 0000000000000..bba7ea7ca1869
--- /dev/null
+++ b/tools/testing/kunit/kunit_kernel.py
@@ -0,0 +1,126 @@
+# SPDX-License-Identifier: GPL-2.0
+
+import logging
+import subprocess
+import os
+
+import kunit_config
+
+KCONFIG_PATH = '.config'
+
+class ConfigError(Exception):
+	"""Represents an error trying to configure the Linux kernel."""
+
+
+class BuildError(Exception):
+	"""Represents an error trying to build the Linux kernel."""
+
+
+class LinuxSourceTreeOperations(object):
+	"""An abstraction over command line operations performed on a source tree."""
+
+	def make_mrproper(self):
+		try:
+			subprocess.check_output(['make', 'mrproper'])
+		except OSError as e:
+			raise ConfigError('Could not call make command: ' + str(e))
+		except subprocess.CalledProcessError as e:
+			raise ConfigError(e.output)
+
+	def make_olddefconfig(self):
+		try:
+			subprocess.check_output(['make', 'ARCH=um', 'olddefconfig'])
+		except OSError as e:
+			raise ConfigError('Could not call make command: ' + str(e))
+		except subprocess.CalledProcessError as e:
+			raise ConfigError(e.output)
+
+	def make(self, jobs):
+		try:
+			subprocess.check_output([
+					'make',
+					'ARCH=um',
+					'--jobs=' + str(jobs)])
+		except OSError as e:
+			raise BuildError('Could not call make: ' + str(e))
+		except subprocess.CalledProcessError as e:
+			raise BuildError(e.output)
+
+	def linux_bin(self, params, timeout):
+		"""Runs the Linux UML binary. Must be named 'linux'."""
+		process = subprocess.Popen(
+			['./linux'] + params,
+			stdin=subprocess.PIPE,
+			stdout=subprocess.PIPE,
+			stderr=subprocess.PIPE)
+		process.wait(timeout=timeout)
+		return process
+
+
+class LinuxSourceTree(object):
+	"""Represents a Linux kernel source tree with KUnit tests."""
+
+	def __init__(self):
+		self._kconfig = kunit_config.Kconfig()
+		self._kconfig.read_from_file('kunitconfig')
+		self._ops = LinuxSourceTreeOperations()
+
+	def clean(self):
+		try:
+			self._ops.make_mrproper()
+		except ConfigError as e:
+			logging.error(e)
+			return False
+		return True
+
+	def build_config(self):
+		self._kconfig.write_to_file(KCONFIG_PATH)
+		try:
+			self._ops.make_olddefconfig()
+		except ConfigError as e:
+			logging.error(e)
+			return False
+		validated_kconfig = kunit_config.Kconfig()
+		validated_kconfig.read_from_file(KCONFIG_PATH)
+		if not self._kconfig.is_subset_of(validated_kconfig):
+			logging.error('Provided Kconfig is not contained in validated .config!')
+			return False
+		return True
+
+	def build_reconfig(self):
+		"""Creates a new .config if the kunitconfig is not a subset of the existing one."""
+		if os.path.exists(KCONFIG_PATH):
+			existing_kconfig = kunit_config.Kconfig()
+			existing_kconfig.read_from_file(KCONFIG_PATH)
+			if not self._kconfig.is_subset_of(existing_kconfig):
+				print('Regenerating .config ...')
+				os.remove(KCONFIG_PATH)
+				return self.build_config()
+			else:
+				return True
+		else:
+			print('Generating .config ...')
+			return self.build_config()
+
+	def build_um_kernel(self, jobs):
+		try:
+			self._ops.make_olddefconfig()
+			self._ops.make(jobs)
+		except (ConfigError, BuildError) as e:
+			logging.error(e)
+			return False
+		used_kconfig = kunit_config.Kconfig()
+		used_kconfig.read_from_file(KCONFIG_PATH)
+		if not self._kconfig.is_subset_of(used_kconfig):
+			logging.error('Provided Kconfig is not contained in final config!')
+			return False
+		return True
+
+	def run_kernel(self, args=[]):
+		timeout = None
+		args.extend(['mem=256M'])
+		process = self._ops.linux_bin(args, timeout)
+		with open('test.log', 'w') as f:
+			for line in process.stdout:
+				f.write(line.rstrip().decode('ascii') + '\n')
+				yield line.rstrip().decode('ascii')
-- 
2.20.0.rc0.387.gc7a69e6b6c-goog

^ permalink raw reply related	[flat|nested] 232+ messages in thread

* [RFC v3 12/19] kunit: add KUnit wrapper script and simple output parser
  2018-11-28 19:36 [RFC v3 00/19] kunit: introduce KUnit, the Linux kernel unit testing framework brendanhiggins
                   ` (11 preceding siblings ...)
  2018-11-28 19:36 ` [RFC v3 11/19] kunit: add Python libraries for handling KUnit config and kernel brendanhiggins
@ 2018-11-28 19:36 ` brendanhiggins
  2018-11-28 19:36   ` Brendan Higgins
  2018-11-28 19:36 ` [RFC v3 13/19] kunit: improve output from python wrapper brendanhiggins
                   ` (8 subsequent siblings)
  21 siblings, 1 reply; 232+ messages in thread
From: brendanhiggins @ 2018-11-28 19:36 UTC (permalink / raw)


The KUnit wrapper script interfaces with the two modules
(kunit_config.py and kunit_kernel.py) and provides a command line
interface for running KUnit tests. This interface allows the caller to
specify options like test timeouts. The script handles configuring,
building and running the kernel and tests.

The output parser (kunit_parser.py) simply strips out all the output
from the kernel that is emitted as part of its initialization
sequence. This ensures that only the output from KUnit is displayed
on the screen.

A full version of the output is written to test.log, or can be seen by
passing --raw_output to the wrapper script.
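
Example invocation (illustrative; run from the root of a kernel tree
that contains a kunitconfig):

	$ ./tools/testing/kunit/kunit.py --timeout=60
	Building KUnit Kernel ...
	Starting KUnit Kernel ...

followed by only the KUnit lines from the kernel log.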

Signed-off-by: Felix Guo <felixguoxiuping at gmail.com>
Signed-off-by: Brendan Higgins <brendanhiggins at google.com>
---
 tools/testing/kunit/kunit.py        | 40 +++++++++++++++++++++++++++++
 tools/testing/kunit/kunit_kernel.py |  3 +--
 tools/testing/kunit/kunit_parser.py | 24 +++++++++++++++++
 3 files changed, 65 insertions(+), 2 deletions(-)
 create mode 100755 tools/testing/kunit/kunit.py
 create mode 100644 tools/testing/kunit/kunit_parser.py

diff --git a/tools/testing/kunit/kunit.py b/tools/testing/kunit/kunit.py
new file mode 100755
index 0000000000000..1356be404996b
--- /dev/null
+++ b/tools/testing/kunit/kunit.py
@@ -0,0 +1,40 @@
+#!/usr/bin/python3
+# SPDX-License-Identifier: GPL-2.0
+
+# A thin wrapper on top of the KUnit Kernel
+
+import argparse
+import sys
+import os
+
+import kunit_config
+import kunit_kernel
+import kunit_parser
+
+parser = argparse.ArgumentParser(description='Runs KUnit tests.')
+
+parser.add_argument('--raw_output', help='don\'t format output from kernel',
+		    action='store_true')
+
+parser.add_argument('--timeout', help='maximum number of seconds to allow for '
+		    'all tests to run. This does not include time taken to '
+		    'build the tests.', type=int, default=300,
+		    metavar='timeout')
+
+cli_args = parser.parse_args()
+linux = kunit_kernel.LinuxSourceTree()
+
+success = linux.build_reconfig()
+if not success:
+	quit()
+
+print('Building KUnit Kernel ...')
+success = linux.build_um_kernel()
+if not success:
+	quit()
+
+print('Starting KUnit Kernel ...')
+if cli_args.raw_output:
+	kunit_parser.raw_output(linux.run_kernel(timeout=cli_args.timeout))
+else:
+	kunit_parser.parse_run_tests(linux.run_kernel(timeout=cli_args.timeout))
diff --git a/tools/testing/kunit/kunit_kernel.py b/tools/testing/kunit/kunit_kernel.py
index bba7ea7ca1869..623f25b16f6c8 100644
--- a/tools/testing/kunit/kunit_kernel.py
+++ b/tools/testing/kunit/kunit_kernel.py
@@ -116,8 +116,7 @@ class LinuxSourceTree(object):
 			return False
 		return True
 
-	def run_kernel(self, args=[]):
-		timeout = None
+	def run_kernel(self, args=[], timeout=None):
 		args.extend(['mem=256M'])
 		process = self._ops.linux_bin(args, timeout)
 		with open('test.log', 'w') as f:
diff --git a/tools/testing/kunit/kunit_parser.py b/tools/testing/kunit/kunit_parser.py
new file mode 100644
index 0000000000000..1dff3adb73bd3
--- /dev/null
+++ b/tools/testing/kunit/kunit_parser.py
@@ -0,0 +1,24 @@
+# SPDX-License-Identifier: GPL-2.0
+
+import re
+
+kunit_start_re = re.compile('console .* enabled')
+kunit_end_re = re.compile('List of all partitions:')
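+# In practice these two patterns bracket the KUnit output in the UML boot
+# log: the "console ... enabled" line is printed early in boot, before any
+# test output, and "List of all partitions:" is printed once the kernel
+# fails to mount a root filesystem, after all tests have run.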
+
+def isolate_kunit_output(kernel_output):
+	started = False
+	for line in kernel_output:
+		if kunit_start_re.match(line):
+			started = True
+		elif kunit_end_re.match(line):
+			break
+		elif started:
+			yield line
+
+def raw_output(kernel_output):
+	for line in kernel_output:
+		print(line)
+
+def parse_run_tests(kernel_output):
+	for output in isolate_kunit_output(kernel_output):
+		print(output)
-- 
2.20.0.rc0.387.gc7a69e6b6c-goog

^ permalink raw reply related	[flat|nested] 232+ messages in thread

* [RFC v3 13/19] kunit: improve output from python wrapper
  2018-11-28 19:36 [RFC v3 00/19] kunit: introduce KUnit, the Linux kernel unit testing framework brendanhiggins
                   ` (12 preceding siblings ...)
  2018-11-28 19:36 ` [RFC v3 12/19] kunit: add KUnit wrapper script and simple output parser brendanhiggins
@ 2018-11-28 19:36 ` brendanhiggins
  2018-11-28 19:36   ` Brendan Higgins
  2018-11-28 19:36 ` [RFC v3 14/19] Documentation: kunit: add documentation for KUnit brendanhiggins
                   ` (7 subsequent siblings)
  21 siblings, 1 reply; 232+ messages in thread
From: brendanhiggins @ 2018-11-28 19:36 UTC (permalink / raw)


- add colors to displayed output
- add timing and summary (see the sample output below)
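
With these changes a run now ends with a summary along these lines
(sample output for illustration only; the timings and test name are
made up):

	[11:31:12] [PASSED] kunit-test:kunit_test_alloc_resource
	[11:31:12] Testing complete. 5 tests run. 0 failed. 0 crashed.
	[11:31:12] Elapsed time: 7.123s total, 0.001s configuring, 6.897s
	building, 0.225s running.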

Signed-off-by: Felix Guo <felixguoxiuping at gmail.com>
Signed-off-by: Brendan Higgins <brendanhiggins at google.com>
---
 tools/testing/kunit/kunit.py        | 27 ++++++++-
 tools/testing/kunit/kunit_parser.py | 93 ++++++++++++++++++++++++++++-
 2 files changed, 115 insertions(+), 5 deletions(-)

diff --git a/tools/testing/kunit/kunit.py b/tools/testing/kunit/kunit.py
index 1356be404996b..0b8e8c20a746e 100755
--- a/tools/testing/kunit/kunit.py
+++ b/tools/testing/kunit/kunit.py
@@ -6,6 +6,7 @@
 import argparse
 import sys
 import os
+import time
 
 import kunit_config
 import kunit_kernel
@@ -21,20 +22,40 @@ parser.add_argument('--timeout', help='maximum number of seconds to allow for '
 		    'build the tests.', type=int, default=300,
 		    metavar='timeout')
 
+parser.add_argument('--jobs',
+		    help='As in the make command, "Specifies the number of '
+		    'jobs (commands) to run simultaneously."',
+		    type=int, default=8, metavar='jobs')
+
 cli_args = parser.parse_args()
 linux = kunit_kernel.LinuxSourceTree()
 
+config_start = time.time()
 success = linux.build_reconfig()
+config_end = time.time()
 if not success:
 	quit()
 
-print('Building KUnit Kernel ...')
-success = linux.build_um_kernel()
+kunit_parser.print_with_timestamp('Building KUnit Kernel ...')
+
+build_start = time.time()
+success = linux.build_um_kernel(jobs=cli_args.jobs)
+build_end = time.time()
 if not success:
 	quit()
 
-print('Starting KUnit Kernel ...')
+kunit_parser.print_with_timestamp('Starting KUnit Kernel ...')
+test_start = time.time()
+
 if cli_args.raw_output:
 	kunit_parser.raw_output(linux.run_kernel(timeout=cli_args.timeout))
 else:
 	kunit_parser.parse_run_tests(linux.run_kernel(timeout=cli_args.timeout))
+
+test_end = time.time()
+
+kunit_parser.print_with_timestamp((
+	"Elapsed time: %.3fs total, %.3fs configuring, %.3fs " +
+	"building, %.3fs running.\n") % (test_end - config_start,
+	config_end - config_start, build_end - build_start,
+	test_end - test_start))
diff --git a/tools/testing/kunit/kunit_parser.py b/tools/testing/kunit/kunit_parser.py
index 1dff3adb73bd3..d9051e407d5a7 100644
--- a/tools/testing/kunit/kunit_parser.py
+++ b/tools/testing/kunit/kunit_parser.py
@@ -1,6 +1,7 @@
 # SPDX-License-Identifier: GPL-2.0
 
 import re
+from datetime import datetime
 
 kunit_start_re = re.compile('console .* enabled')
 kunit_end_re = re.compile('List of all partitions:')
@@ -19,6 +20,94 @@ def raw_output(kernel_output):
 	for line in kernel_output:
 		print(line)
 
+DIVIDER = "=" * 30
+
+RESET = '\033[0;0m'
+
+def red(text):
+	return '\033[1;31m' + text + RESET
+
+def yellow(text):
+	return '\033[1;33m' + text + RESET
+
+def green(text):
+	return '\033[1;32m' + text + RESET
+
+def print_with_timestamp(message):
+	print('[%s] %s' % (datetime.now().strftime('%H:%M:%S'), message))
+
+def print_log(log):
+	for m in log:
+		print_with_timestamp(m)
+
 def parse_run_tests(kernel_output):
-	for output in isolate_kunit_output(kernel_output):
-		print(output)
+	test_case_output = re.compile('^kunit .*?: (.*)$')
+
+	test_module_success = re.compile('^kunit .*: all tests passed')
+	test_module_fail = re.compile('^kunit .*: one or more tests failed')
+
+	test_case_success = re.compile('^kunit (.*): (.*) passed')
+	test_case_fail = re.compile('^kunit (.*): (.*) failed')
+	test_case_crash = re.compile('^kunit (.*): (.*) crashed')
+
+	total_tests = set()
+	failed_tests = set()
+	crashed_tests = set()
+
+	def get_test_name(match):
+		return match.group(1) + ":" + match.group(2)
+
+	current_case_log = []
+	def end_one_test(match, log):
+		log.clear()
+		total_tests.add(get_test_name(match))
+
+	print_with_timestamp(DIVIDER)
+	for line in isolate_kunit_output(kernel_output):
+		# Ignore module output:
+		if (test_module_success.match(line) or
+		    test_module_fail.match(line)):
+			print_with_timestamp(DIVIDER)
+			continue
+
+		match = re.match(test_case_success, line)
+		if match:
+			print_with_timestamp(green("[PASSED] ") +
+					     get_test_name(match))
+			end_one_test(match, current_case_log)
+			continue
+
+		match = re.match(test_case_fail, line)
+		# Crashed tests will report as both failed and crashed. We only
+		# want to show and count it once.
+		if match and get_test_name(match) not in crashed_tests:
+			failed_tests.add(get_test_name(match))
+			print_with_timestamp(red("[FAILED] " +
+						 get_test_name(match)))
+			print_log(map(yellow, current_case_log))
+			print_with_timestamp("")
+			end_one_test(match, current_case_log)
+			continue
+
+		match = re.match(test_case_crash, line)
+		if match:
+			crashed_tests.add(get_test_name(match))
+			print_with_timestamp(yellow("[CRASH] " +
+						    get_test_name(match)))
+			print_log(current_case_log)
+			print_with_timestamp("")
+			end_one_test(match, current_case_log)
+			continue
+
+		# Strip off the `kunit module-name:` prefix
+		match = re.match(test_case_output, line)
+		if match:
+			current_case_log.append(match.group(1))
+		else:
+			current_case_log.append(line)
+
+	fmt = green if (len(failed_tests) + len(crashed_tests) == 0) else red
+	print_with_timestamp(
+		fmt("Testing complete. %d tests run. %d failed. %d crashed." %
+		    (len(total_tests), len(failed_tests), len(crashed_tests))))
+
-- 
2.20.0.rc0.387.gc7a69e6b6c-goog

^ permalink raw reply related	[flat|nested] 232+ messages in thread

* [RFC v3 13/19] kunit: improve output from python wrapper
  2018-11-28 19:36 ` [RFC v3 13/19] kunit: improve output from python wrapper brendanhiggins
@ 2018-11-28 19:36   ` Brendan Higgins
  0 siblings, 0 replies; 232+ messages in thread
From: Brendan Higgins @ 2018-11-28 19:36 UTC (permalink / raw)


- add colors to displayed output
- add timing and summary

Signed-off-by: Felix Guo <felixguoxiuping at gmail.com>
Signed-off-by: Brendan Higgins <brendanhiggins at google.com>
---
 tools/testing/kunit/kunit.py        | 27 ++++++++-
 tools/testing/kunit/kunit_parser.py | 93 ++++++++++++++++++++++++++++-
 2 files changed, 115 insertions(+), 5 deletions(-)

diff --git a/tools/testing/kunit/kunit.py b/tools/testing/kunit/kunit.py
index 1356be404996b..0b8e8c20a746e 100755
--- a/tools/testing/kunit/kunit.py
+++ b/tools/testing/kunit/kunit.py
@@ -6,6 +6,7 @@
 import argparse
 import sys
 import os
+import time
 
 import kunit_config
 import kunit_kernel
@@ -21,20 +22,40 @@ parser.add_argument('--timeout', help='maximum number of seconds to allow for '
 		    'build the tests.', type=int, default=300,
 		    metavar='timeout')
 
+parser.add_argument('--jobs',
+		    help='As in the make command, "Specifies  the number of '
+		    'jobs (commands) to run simultaneously."',
+		    type=int, default=8, metavar='jobs')
+
 cli_args = parser.parse_args()
 linux = kunit_kernel.LinuxSourceTree()
 
+config_start = time.time()
 success = linux.build_reconfig()
+config_end = time.time()
 if not success:
 	quit()
 
-print('Building KUnit Kernel ...')
-success = linux.build_um_kernel()
+kunit_parser.print_with_timestamp('Building KUnit Kernel ...')
+
+build_start = time.time()
+success = linux.build_um_kernel(jobs=cli_args.jobs)
+build_end = time.time()
 if not success:
 	quit()
 
-print('Starting KUnit Kernel ...')
+kunit_parser.print_with_timestamp('Starting KUnit Kernel ...')
+test_start = time.time()
+
 if cli_args.raw_output:
 	kunit_parser.raw_output(linux.run_kernel(timeout=cli_args.timeout))
 else:
 	kunit_parser.parse_run_tests(linux.run_kernel(timeout=cli_args.timeout))
+
+test_end = time.time()
+
+kunit_parser.print_with_timestamp((
+	"Elapsed time: %.3fs total, %.3fs configuring, %.3fs " +
+	"building, %.3fs running.\n") % (test_end - config_start,
+	config_end - config_start, build_end - build_start,
+	test_end - test_start))
diff --git a/tools/testing/kunit/kunit_parser.py b/tools/testing/kunit/kunit_parser.py
index 1dff3adb73bd3..d9051e407d5a7 100644
--- a/tools/testing/kunit/kunit_parser.py
+++ b/tools/testing/kunit/kunit_parser.py
@@ -1,6 +1,7 @@
 # SPDX-License-Identifier: GPL-2.0
 
 import re
+from datetime import datetime
 
 kunit_start_re = re.compile('console .* enabled')
 kunit_end_re = re.compile('List of all partitions:')
@@ -19,6 +20,94 @@ def raw_output(kernel_output):
 	for line in kernel_output:
 		print(line)
 
+DIVIDER = "=" * 30
+
+RESET = '\033[0;0m'
+
+def red(text):
+	return '\033[1;31m' + text + RESET
+
+def yellow(text):
+	return '\033[1;33m' + text + RESET
+
+def green(text):
+	return '\033[1;32m' + text + RESET
+
+def print_with_timestamp(message):
+	print('[%s] %s' % (datetime.now().strftime('%H:%M:%S'), message))
+
+def print_log(log):
+	for m in log:
+		print_with_timestamp(m)
+
 def parse_run_tests(kernel_output):
-	for output in isolate_kunit_output(kernel_output):
-		print(output)
+	test_case_output = re.compile('^kunit .*?: (.*)$')
+
+	test_module_success = re.compile('^kunit .*: all tests passed')
+	test_module_fail = re.compile('^kunit .*: one or more tests failed')
+
+	test_case_success = re.compile('^kunit (.*): (.*) passed')
+	test_case_fail = re.compile('^kunit (.*): (.*) failed')
+	test_case_crash = re.compile('^kunit (.*): (.*) crashed')
+
+	total_tests = set()
+	failed_tests = set()
+	crashed_tests = set()
+
+	def get_test_name(match):
+		return match.group(1) + ":" + match.group(2)
+
+	current_case_log = []
+	def end_one_test(match, log):
+		log.clear()
+		total_tests.add(get_test_name(match))
+
+	print_with_timestamp(DIVIDER)
+	for line in isolate_kunit_output(kernel_output):
+		# Ignore module output:
+		if (test_module_success.match(line) or
+		    test_module_fail.match(line)):
+			print_with_timestamp(DIVIDER)
+			continue
+
+		match = re.match(test_case_success, line)
+		if match:
+			print_with_timestamp(green("[PASSED] ") +
+					     get_test_name(match))
+			end_one_test(match, current_case_log)
+			continue
+
+		match = re.match(test_case_fail, line)
+		# Crashed tests will report as both failed and crashed. We only
+		# want to show and count each test once.
+		if match and get_test_name(match) not in crashed_tests:
+			failed_tests.add(get_test_name(match))
+			print_with_timestamp(red("[FAILED] " +
+						 get_test_name(match)))
+			print_log(map(yellow, current_case_log))
+			print_with_timestamp("")
+			end_one_test(match, current_case_log)
+			continue
+
+		match = re.match(test_case_crash, line)
+		if match:
+			crashed_tests.add(get_test_name(match))
+			print_with_timestamp(yellow("[CRASH] " +
+						    get_test_name(match)))
+			print_log(current_case_log)
+			print_with_timestamp("")
+			end_one_test(match, current_case_log)
+			continue
+
+		# Strip off the `kunit module-name:` prefix
+		match = re.match(test_case_output, line)
+		if match:
+			current_case_log.append(match.group(1))
+		else:
+			current_case_log.append(line)
+
+	fmt = green if (len(failed_tests) + len(crashed_tests) == 0) else red
+	print_with_timestamp(
+		fmt("Testing complete. %d tests run. %d failed. %d crashed." %
+		    (len(total_tests), len(failed_tests), len(crashed_tests))))
+
-- 
2.20.0.rc0.387.gc7a69e6b6c-goog

^ permalink raw reply related	[flat|nested] 232+ messages in thread

* [RFC v3 14/19] Documentation: kunit: add documentation for KUnit
  2018-11-28 19:36 [RFC v3 00/19] kunit: introduce KUnit, the Linux kernel unit testing framework brendanhiggins
                   ` (13 preceding siblings ...)
  2018-11-28 19:36 ` [RFC v3 13/19] kunit: improve output from python wrapper brendanhiggins
@ 2018-11-28 19:36 ` brendanhiggins
  2018-11-28 19:36   ` Brendan Higgins
  2018-11-29 13:56   ` kieran.bingham
  2018-11-28 19:36 ` [RFC v3 15/19] MAINTAINERS: add entry for KUnit the unit testing framework brendanhiggins
                   ` (6 subsequent siblings)
  21 siblings, 2 replies; 232+ messages in thread
From: brendanhiggins @ 2018-11-28 19:36 UTC (permalink / raw)


Add documentation for KUnit, the Linux kernel unit testing framework.
- Add intro and usage guide for KUnit
- Add API reference

Signed-off-by: Felix Guo <felixguoxiuping at gmail.com>
Signed-off-by: Brendan Higgins <brendanhiggins at google.com>
---
 Documentation/index.rst           |   1 +
 Documentation/kunit/api/index.rst |  16 ++
 Documentation/kunit/api/test.rst  |  15 +
 Documentation/kunit/faq.rst       |  46 +++
 Documentation/kunit/index.rst     |  80 ++++++
 Documentation/kunit/start.rst     | 180 ++++++++++++
 Documentation/kunit/usage.rst     | 447 ++++++++++++++++++++++++++++++
 7 files changed, 785 insertions(+)
 create mode 100644 Documentation/kunit/api/index.rst
 create mode 100644 Documentation/kunit/api/test.rst
 create mode 100644 Documentation/kunit/faq.rst
 create mode 100644 Documentation/kunit/index.rst
 create mode 100644 Documentation/kunit/start.rst
 create mode 100644 Documentation/kunit/usage.rst

diff --git a/Documentation/index.rst b/Documentation/index.rst
index 5db7e87c7cb1d..275ef4db79f61 100644
--- a/Documentation/index.rst
+++ b/Documentation/index.rst
@@ -68,6 +68,7 @@ merged much easier.
    kernel-hacking/index
    trace/index
    maintainer/index
+   kunit/index
 
 Kernel API documentation
 ------------------------
diff --git a/Documentation/kunit/api/index.rst b/Documentation/kunit/api/index.rst
new file mode 100644
index 0000000000000..c31c530088153
--- /dev/null
+++ b/Documentation/kunit/api/index.rst
@@ -0,0 +1,16 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+=============
+API Reference
+=============
+.. toctree::
+
+	test
+
+This section documents the KUnit kernel testing API. It is divided into the
+following sections:
+
+================================= ==============================================
+:doc:`test`                       documents all of the standard testing API
+                                  excluding mocking or mocking related features.
+================================= ==============================================
diff --git a/Documentation/kunit/api/test.rst b/Documentation/kunit/api/test.rst
new file mode 100644
index 0000000000000..7c926014f047c
--- /dev/null
+++ b/Documentation/kunit/api/test.rst
@@ -0,0 +1,15 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+========
+Test API
+========
+
+This file documents all of the standard testing API excluding mocking or mocking
+related features.
+
+.. kernel-doc:: include/kunit/test.h
+   :internal:
+
+.. kernel-doc:: include/kunit/kunit-stream.h
+   :internal:
+
diff --git a/Documentation/kunit/faq.rst b/Documentation/kunit/faq.rst
new file mode 100644
index 0000000000000..cb8e4fb2257a0
--- /dev/null
+++ b/Documentation/kunit/faq.rst
@@ -0,0 +1,46 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+=========================================
+Frequently Asked Questions
+=========================================
+
+How is this different from Autotest, kselftest, etc.?
+======================================================
+KUnit is a unit testing framework. Autotest, kselftest (and some others) are
+not.
+
+A `unit test <https://martinfowler.com/bliki/UnitTest.html>`_ is supposed to
+test a single unit of code in isolation, hence the name. A unit test should be
+the finest granularity of testing and as such should allow all possible code
+paths to be tested in the code under test; this is only possible if the code
+under test is very small and does not have any external dependencies outside of
+the test's control like hardware.
+
+There are no testing frameworks currently available for the kernel that do not
+require installing the kernel on a test machine or in a VM, and all require
+tests to be written in userspace and run on the kernel under test; this is true
+for Autotest, kselftest, and some others, disqualifying any of them from being
+considered unit testing frameworks.
+
+What is the difference between a unit test and these other kinds of tests?
+==========================================================================
+Most existing tests for the Linux kernel would be categorized as an integration
+test, or an end-to-end test.
+
+- A unit test is supposed to test a single unit of code in isolation, hence the
+  name. A unit test should be the finest granularity of testing and as such
+  should allow all possible code paths to be tested in the code under test; this
+  is only possible if the code under test is very small and does not have any
+  external dependencies outside of the test's control like hardware.
+- An integration test tests the interaction between a minimal set of components,
+  usually just two or three. For example, someone might write an integration
+  test to test the interaction between a driver and a piece of hardware, or to
+  test the interaction between the userspace libraries the kernel provides and
+  the kernel itself; however, one of these tests would probably not test the
+  entire kernel along with hardware interactions and interactions with the
+  userspace.
+- An end-to-end test usually tests the entire system from the perspective of the
+  code under test. For example, someone might write an end-to-end test for the
+  kernel by installing a production configuration of the kernel on production
+  hardware with a production userspace and then trying to exercise some behavior
+  that depends on interactions between the hardware, the kernel, and userspace.
diff --git a/Documentation/kunit/index.rst b/Documentation/kunit/index.rst
new file mode 100644
index 0000000000000..c6710211b647f
--- /dev/null
+++ b/Documentation/kunit/index.rst
@@ -0,0 +1,80 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+=========================================
+KUnit - Unit Testing for the Linux Kernel
+=========================================
+
+.. toctree::
+	:maxdepth: 2
+
+	start
+	usage
+	api/index
+	faq
+
+What is KUnit?
+==============
+
+KUnit is a lightweight unit testing and mocking framework for the Linux kernel.
+These tests can be run locally on a developer's workstation without a VM
+or special hardware.
+
+KUnit is heavily inspired by JUnit, Python's unittest.mock, and
+Googletest/Googlemock for C++. KUnit provides facilities for defining unit test
+cases, grouping related test cases into test suites, providing common
+infrastructure for running tests, and much more.
+
+Get started now: :doc:`start`
+
+Why KUnit?
+==========
+
+A unit test is supposed to test a single unit of code in isolation, hence the
+name. A unit test should be the finest granularity of testing and as such should
+allow all possible code paths to be tested in the code under test; this is only
+possible if the code under test is very small and does not have any external
+dependencies outside of the test's control like hardware.
+
+Outside of KUnit, no testing framework currently available for the kernel
+avoids installing the kernel on a test machine or in a VM, and all of them
+require tests to be written in userspace and run on the kernel under test;
+this is true for Autotest and kselftest, disqualifying either from being
+considered a unit testing framework.
+
+KUnit addresses the problem of being able to run tests without needing a
+virtual machine or actual hardware by using User Mode Linux (UML). UML is a
+Linux architecture, like ARM or x86; however, unlike other architectures it
+compiles the kernel to a standalone program that can be run like any other
+program directly inside of a host operating system. To be clear, UML does not
+require any virtualization support; it is just a regular program.
+
+KUnit is fast. Excluding build time, from invocation to completion KUnit can run
+several dozen tests in only 10 to 20 seconds; this might not sound like a big
+deal to some people, but having such fast and easy to run tests fundamentally
+changes the way you go about testing and even writing code in the first place.
+Linus himself said in his `git talk at Google
+<https://gist.github.com/lorn/1272686/revisions#diff-53c65572127855f1b003db4064a94573R874>`_:
+
+	"... a lot of people seem to think that performance is about doing the
+	same thing, just doing it faster, and that is not true. That is not what
+	performance is all about. If you can do something really fast, really
+	well, people will start using it differently."
+
+In this context Linus was talking about branching and merging,
+but this point also applies to testing. If your tests are slow, unreliable, or
+difficult to write, and require a special setup or special hardware to run,
+then you wait a lot longer to write tests, and you wait a lot longer to run
+tests; this means that tests are likely to break, unlikely to test a lot of
+things, and are unlikely to be rerun once they pass. If your tests are really
+fast, you run them all the time, every time you make a change, and every time
+someone sends you some code. Why trust that someone ran all their tests
+correctly on every change when you can just run them yourself in less time than
+it takes to read their test log?
+
+How do I use it?
+===================
+
+*   :doc:`start` - for new users of KUnit
+*   :doc:`usage` - for a more detailed explanation of KUnit features
+*   :doc:`api/index` - for the list of KUnit APIs used for testing
+
diff --git a/Documentation/kunit/start.rst b/Documentation/kunit/start.rst
new file mode 100644
index 0000000000000..5cdba5091905e
--- /dev/null
+++ b/Documentation/kunit/start.rst
@@ -0,0 +1,180 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+===============
+Getting Started
+===============
+
+Installing dependencies
+=======================
+KUnit has the same dependencies as the Linux kernel. As long as you can build
+the kernel, you can run KUnit.
+
+KUnit Wrapper
+=============
+Included with KUnit is a simple Python wrapper that makes KUnit output easier
+to use and read. It handles building and running the kernel, as well as
+formatting the output.
+
+The wrapper can be run with:
+
+.. code-block:: bash
+
+   ./tools/testing/kunit/kunit.py
+
+Creating a kunitconfig
+======================
+The Python script is a thin wrapper around Kbuild; as such, it needs to be
+configured with a ``kunitconfig`` file. This file essentially contains the
+regular kernel config, with the specific test targets added as well.
+
+.. code-block:: bash
+
+	git clone -b master https://kunit.googlesource.com/kunitconfig $PATH_TO_KUNITCONFIG_REPO
+	cd $PATH_TO_LINUX_REPO
+	ln -s $PATH_TO_KUNIT_CONFIG_REPO/kunitconfig kunitconfig
+
+You may want to add ``kunitconfig`` to your local ``.gitignore``.
+
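+For reference, a minimal ``kunitconfig`` would contain at least the option that
+enables KUnit itself (an assumption based on the ``depends on ... KUNIT`` lines
+used later in this guide):
+
+.. code-block:: none
+
+	CONFIG_KUNIT=y
+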
+Verifying KUnit Works
+-------------------------
+
+To make sure that everything is set up correctly, simply invoke the Python
+wrapper from your kernel repo:
+
+.. code-block:: bash
+
+	./tools/testing/kunit/kunit.py
+
+.. note::
+   You may want to run ``make mrproper`` first.
+
+If everything worked correctly, you should see the following:
+
+.. code-block:: none
+
+	Generating .config ...
+	Building KUnit Kernel ...
+	Starting KUnit Kernel ...
+
+followed by a list of tests that are run. All of them should be passing.
+
+.. note::
+   Because it is building a lot of sources for the first time, the ``Building
+   KUnit Kernel`` step may take a while.
+
+Writing your first test
+==========================
+
+In your kernel repo let's add some code that we can test. Create a file
+``drivers/misc/example.h`` with the contents:
+
+.. code-block:: c
+
+	int misc_example_add(int left, int right);
+
+create a file ``drivers/misc/example.c``:
+
+.. code-block:: c
+
+	#include <linux/errno.h>
+
+	#include "example.h"
+
+	int misc_example_add(int left, int right)
+	{
+		return left + right;
+	}
+
+Now add the following lines to ``drivers/misc/Kconfig``:
+
+.. code-block:: kconfig
+
+	config MISC_EXAMPLE
+		bool "My example"
+
+and the following lines to ``drivers/misc/Makefile``:
+
+.. code-block:: make
+
+	obj-$(CONFIG_MISC_EXAMPLE) += example.o
+
+Now we are ready to write the test. The test will be in
+``drivers/misc/example-test.c``:
+
+.. code-block:: c
+
+	#include <kunit/test.h>
+	#include "example.h"
+
+	/* Define the test cases. */
+
+	static void misc_example_add_test_basic(struct kunit *test)
+	{
+		KUNIT_EXPECT_EQ(test, 1, misc_example_add(1, 0));
+		KUNIT_EXPECT_EQ(test, 2, misc_example_add(1, 1));
+		KUNIT_EXPECT_EQ(test, 0, misc_example_add(-1, 1));
+		KUNIT_EXPECT_EQ(test, INT_MAX, misc_example_add(0, INT_MAX));
+		KUNIT_EXPECT_EQ(test, -1, misc_example_add(INT_MAX, INT_MIN));
+	}
+
+	static void misc_example_test_failure(struct kunit *test)
+	{
+		KUNIT_FAIL(test, "This test never passes.");
+	}
+
+	static struct kunit_case misc_example_test_cases[] = {
+		KUNIT_CASE(misc_example_add_test_basic),
+		KUNIT_CASE(misc_example_test_failure),
+		{},
+	};
+
+	static struct kunit_module misc_example_test_module = {
+		.name = "misc-example",
+		.test_cases = misc_example_test_cases,
+	};
+	module_test(misc_example_test_module);
+
+Now add the following to ``drivers/misc/Kconfig``:
+
+.. code-block:: kconfig
+
+	config MISC_EXAMPLE_TEST
+		bool "Test for my example"
+		depends on MISC_EXAMPLE && KUNIT
+
+and the following to ``drivers/misc/Makefile``:
+
+.. code-block:: make
+
+	obj-$(CONFIG_MISC_EXAMPLE_TEST) += example-test.o
+
+Now add it to your ``kunitconfig``:
+
+.. code-block:: none
+
+	CONFIG_MISC_EXAMPLE=y
+	CONFIG_MISC_EXAMPLE_TEST=y
+
+Now you can run the test:
+
+.. code-block:: bash
+
+	./tools/testing/kunit/kunit.py
+
+You should see the following failure:
+
+.. code-block:: none
+
+	...
+	[16:08:57] [PASSED] misc-example:misc_example_add_test_basic
+	[16:08:57] [FAILED] misc-example:misc_example_test_failure
+	[16:08:57] EXPECTATION FAILED at drivers/misc/example-test.c:17
+	[16:08:57] 	This test never passes.
+	...
+
+Congrats! You just wrote your first KUnit test!
+
+Next Steps
+=============
+*   Check out the :doc:`usage` page for a more
+    in-depth explanation of KUnit.
diff --git a/Documentation/kunit/usage.rst b/Documentation/kunit/usage.rst
new file mode 100644
index 0000000000000..96ef7f9a1add4
--- /dev/null
+++ b/Documentation/kunit/usage.rst
@@ -0,0 +1,447 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+=============
+Using KUnit
+=============
+
+The purpose of this document is to describe what KUnit is, how it works, how it
+is intended to be used, and all the concepts and terminology that are needed to
+understand it. This guide assumes a working knowledge of the Linux kernel and
+some basic knowledge of testing.
+
+For a high level introduction to KUnit, including setting up KUnit for your
+project, see :doc:`start`.
+
+Organization of this document
+=================================
+
+This document is organized into two main sections: Testing and Isolating
+Behavior. The first covers what a unit test is and how to use KUnit to write
+them. The second covers how to use KUnit to isolate code and make it possible
+to unit test code that was otherwise un-unit-testable.
+
+Testing
+==========
+
+What is KUnit?
+------------------
+
+"K" is short for "kernel" so "KUnit" is the "(Linux) Kernel Unit Testing
+Framework." KUnit is intended first and foremost for writing unit tests; it is
+general enough that it can be used to write integration tests; however, this is
+a secondary goal. KUnit has no ambition of being the only testing framework for
+the kernel; for example, it does not intend to be an end-to-end testing
+framework.
+
+What is Unit Testing?
+-------------------------
+
+A `unit test <https://martinfowler.com/bliki/UnitTest.html>`_ is a test that
+tests code at the smallest possible scope, a *unit* of code. In the C
+programming language that's a function.
+
+Unit tests should be written for all the publicly exposed functions in a
+compilation unit; that is, all the functions that are exported as part of a
+*class* (defined below) and all functions which are **not** static.
+
+Writing Tests
+-------------
+
+Test Cases
+~~~~~~~~~~
+
+The fundamental unit in KUnit is the test case. A test case is a function with
+the signature ``void (*)(struct kunit *test)``. It calls a function to be tested
+and then sets *expectations* for what should happen. For example:
+
+.. code-block:: c
+
+	void example_test_success(struct kunit *test)
+	{
+	}
+
+	void example_test_failure(struct kunit *test)
+	{
+		KUNIT_FAIL(test, "This test never passes.");
+	}
+
+In the above example ``example_test_success`` always passes because it does
+nothing; no expectations are set, so all expectations pass. On the other hand
+``example_test_failure`` always fails because it calls ``KUNIT_FAIL``, which is
+a special expectation that logs a message and causes the test case to fail.
+
+Expectations
+~~~~~~~~~~~~
+An *expectation* is a way to specify that you expect a piece of code to do
+something in a test. An expectation is called like a function. A test is made
+by setting expectations about the behavior of a piece of code under test; when
+one or more of the expectations fail, the test case fails and information about
+the failure is logged. For example:
+
+.. code-block:: c
+
+	void add_test_basic(struct kunit *test)
+	{
+		KUNIT_EXPECT_EQ(test, 1, add(1, 0));
+		KUNIT_EXPECT_EQ(test, 2, add(1, 1));
+	}
+
+In the above example ``add_test_basic`` sets a number of expectations about the
+behavior of a function called ``add``; the first parameter is always of type
+``struct kunit *``, which contains information about the current test context;
+the second parameter, in this case, is what the value is expected to be; the
+last value is what the value actually is. If ``add`` passes all of these
+expectations, the test case, ``add_test_basic``, will pass; if any one of these
+expectations fails, the test case will fail.
+
+It is important to understand that a test case *fails* when any expectation is
+violated; however, the test will continue running, potentially trying other
+expectations until the test case ends or is otherwise terminated. This is as
+opposed to *assertions* which are discussed later.
+
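+As a minimal sketch of this behavior (using the hypothetical ``add`` function
+from above), the following test case logs a failure for its first expectation
+but still evaluates the second one:
+
+.. code-block:: c
+
+	void add_test_continues_after_failure(struct kunit *test)
+	{
+		/* This fails and marks the test case as failed... */
+		KUNIT_EXPECT_EQ(test, 3, add(1, 1));
+
+		/* ...but this expectation is still checked, and passes. */
+		KUNIT_EXPECT_EQ(test, 2, add(1, 1));
+	}
+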
+To learn about more expectations supported by KUnit, see :doc:`api/test`.
+
+.. note::
+   A single test case should be pretty short, pretty easy to understand, and
+   focused on a single behavior.
+
+For example, if we wanted to properly test the add function above, we would
+create additional test cases which would each test a different property that an
+add function should have, like this:
+
+.. code-block:: c
+
+	void add_test_basic(struct kunit *test)
+	{
+		KUNIT_EXPECT_EQ(test, 1, add(1, 0));
+		KUNIT_EXPECT_EQ(test, 2, add(1, 1));
+	}
+
+	void add_test_negative(struct kunit *test)
+	{
+		KUNIT_EXPECT_EQ(test, 0, add(-1, 1));
+	}
+
+	void add_test_max(struct kunit *test)
+	{
+		KUNIT_EXPECT_EQ(test, INT_MAX, add(0, INT_MAX));
+		KUNIT_EXPECT_EQ(test, -1, add(INT_MAX, INT_MIN));
+	}
+
+	void add_test_overflow(struct kunit *test)
+	{
+		KUNIT_EXPECT_EQ(test, INT_MIN, add(INT_MAX, 1));
+	}
+
+Notice how it is immediately obvious what properties we are testing
+for.
+
+Assertions
+~~~~~~~~~~
+
+KUnit also has the concept of an *assertion*. An assertion is just like an
+expectation except the assertion immediately terminates the test case if it is
+not satisfied.
+
+For example:
+
+.. code-block:: c
+
+	static void mock_test_do_expect_default_return(struct kunit *test)
+	{
+		struct mock_test_context *ctx = test->priv;
+		struct mock *mock = ctx->mock;
+		int param0 = 5, param1 = -5;
+		const char *two_param_types[] = {"int", "int"};
+		const void *two_params[] = {&param0, &param1};
+		const void *ret;
+
+		ret = mock->do_expect(mock,
+				      "test_printk", test_printk,
+				      two_param_types, two_params,
+				      ARRAY_SIZE(two_params));
+		KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ret);
+		KUNIT_EXPECT_EQ(test, -4, *((int *) ret));
+	}
+
+In this example, the method under test should return a pointer to a value, so
+if the pointer returned by the method is null or an errno, we don't want to
+bother continuing the test since the following expectation could crash the test
+case. ``KUNIT_ASSERT_NOT_ERR_OR_NULL(...)`` allows us to bail out of the test
+case if the appropriate conditions have not been satisfied to complete the test.
+
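+A shorter, hypothetical sketch of the same pattern (using ``kunit_kzalloc``,
+which appears later in this document) might look like:
+
+.. code-block:: c
+
+	static void example_test_allocation(struct kunit *test)
+	{
+		char *buffer = kunit_kzalloc(test, 16, GFP_KERNEL);
+
+		/* Bail out of the test case now if the allocation failed... */
+		KUNIT_ASSERT_NOT_ERR_OR_NULL(test, buffer);
+
+		/* ...so that this expectation cannot dereference a bad pointer. */
+		KUNIT_EXPECT_EQ(test, buffer[0], 0);
+	}
+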
+Modules / Test Suites
+~~~~~~~~~~~~~~~~~~~~~
+
+Now obviously one unit test isn't very helpful; the power comes from having
+many test cases covering all of your behaviors. Consequently it is common to
+have many *similar* tests; in order to reduce duplication in these closely
+related tests most unit testing frameworks provide the concept of a *test
+suite*, which in KUnit we call a *test module*: simply a collection of test
+cases for a unit of code with a set up function that gets invoked before
+every test case and a tear down function that gets invoked after every
+test case completes.
+
+Example:
+
+.. code-block:: c
+
+	static struct kunit_case example_test_cases[] = {
+		KUNIT_CASE(example_test_foo),
+		KUNIT_CASE(example_test_bar),
+		KUNIT_CASE(example_test_baz),
+		{},
+	};
+
+	static struct kunit_module example_test_module = {
+		.name = "example",
+		.init = example_test_init,
+		.exit = example_test_exit,
+		.test_cases = example_test_cases,
+	};
+	module_test(example_test_module);
+
+In the above example the test suite, ``example_test_module``, would run the test
+cases ``example_test_foo``, ``example_test_bar``, and ``example_test_baz``, each
+would have ``example_test_init`` called immediately before it and would have
+``example_test_exit`` called immediately after it.
+``module_test(example_test_module)`` registers the test suite with the KUnit
+test framework.
+
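+The example does not show ``example_test_init`` or ``example_test_exit``, so
+here is a hypothetical sketch of what they might look like; the
+``struct example_ctx`` context type is assumed purely for illustration:
+
+.. code-block:: c
+
+	struct example_ctx {
+		int initial_value;
+	};
+
+	static int example_test_init(struct kunit *test)
+	{
+		struct example_ctx *ctx;
+
+		/* Runs before every test case: set up a fresh context. */
+		ctx = kunit_kzalloc(test, sizeof(*ctx), GFP_KERNEL);
+		if (!ctx)
+			return -ENOMEM;
+
+		ctx->initial_value = 42;
+		test->priv = ctx;
+
+		return 0;
+	}
+
+	static void example_test_exit(struct kunit *test)
+	{
+		/*
+		 * Runs after every test case; as in the EEPROM example later
+		 * in this document, kunit_kzalloc allocations do not need to
+		 * be freed by hand here.
+		 */
+	}
+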
+.. note::
+   A test case will only be run if it is associated with a test suite.
+
+For more information on these types of things, see :doc:`api/test`.
+
+Isolating Behavior
+==================
+
+The most important aspect of unit testing that other forms of testing do not
+provide is the ability to limit the amount of code under test to a single unit.
+In practice, this is only possible by being able to control what code gets run
+when the unit under test calls a function; this is usually accomplished
+through some sort of indirection where a function is exposed as part of an API
+such that the definition of that function can be changed without affecting the
+rest of the code base. In the kernel this primarily comes from two constructs:
+classes, which are structs that contain function pointers provided by the
+implementer, and architecture-specific functions, which have definitions
+selected at compile time.
+
+Classes
+-------
+
+Classes are not a construct that is built into the C programming language;
+however, they are an easily derived concept. Accordingly, pretty much every
+project that does not use a standardized object oriented library (like GNOME's
+GObject) has its own slightly different way of doing object oriented
+programming; the Linux kernel is no exception.
+
+The central concept in kernel object oriented programming is the class. In the
+kernel, a *class* is a struct that contains function pointers. This creates a
+contract between *implementers* and *users* since it forces them to use the
+same function signature without having to call the function directly. In order
+for it to truly be a class, the function pointers must specify that a pointer
+to the class, known as a *class handle*, be one of the parameters; this makes
+it possible for the member functions (also known as *methods*) to have access
+to member variables (more commonly known as *fields*) allowing the same
+implementation to have multiple *instances*.
+
+Typically a class can be *overridden* by *child classes* by embedding the
+*parent class* in the child class. Then when a method provided by the child
+class is called, the child implementation knows that the pointer passed to it is
+of a parent contained within the child; because of this, the child can compute
+the pointer to itself because the pointer to the parent is always a fixed offset
+from the pointer to the child; this offset is the offset of the parent contained
+in the child struct. For example:
+
+.. code-block:: c
+
+	struct shape {
+		int (*area)(struct shape *this);
+	};
+
+	struct rectangle {
+		struct shape parent;
+		int length;
+		int width;
+	};
+
+	int rectangle_area(struct shape *this)
+	{
+		struct rectangle *self = container_of(this, struct rectangle, parent);
+
+		return self->length * self->width;
+	}
+
+	void rectangle_new(struct rectangle *self, int length, int width)
+	{
+		self->parent.area = rectangle_area;
+		self->length = length;
+		self->width = width;
+	}
+
+In this example (as in most kernel code) the operation of computing the pointer
+to the child from the pointer to the parent is done by ``container_of``.
+
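+As a brief hypothetical usage sketch, a caller could construct a ``rectangle``
+and invoke the method through the class handle, which dispatches to
+``rectangle_area``:
+
+.. code-block:: c
+
+	static int compute_rectangle_area(void)
+	{
+		struct rectangle r;
+
+		rectangle_new(&r, 3, 4);
+
+		/* Calls rectangle_area() through the function pointer. */
+		return r.parent.area(&r.parent);	/* returns 12 */
+	}
+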
+Faking Classes
+~~~~~~~~~~~~~~
+
+In order to unit test a piece of code that calls a method in a class, the
+behavior of the method must be controllable, otherwise the test ceases to be a
+unit test and becomes an integration test.
+
+A fake just provides an implementation of a piece of code that is different than
+what runs in a production instance, but behaves identically from the standpoint
+of the callers; this is usually done to replace a dependency that is hard to
+deal with, or is slow.
+
+A good example for this might be implementing a fake EEPROM that just stores the
+"contents" in an internal buffer. For example, let's assume we have a class that
+represents an EEPROM:
+
+.. code-block:: c
+
+	struct eeprom {
+		ssize_t (*read)(struct eeprom *this, size_t offset, char *buffer, size_t count);
+		ssize_t (*write)(struct eeprom *this, size_t offset, const char *buffer, size_t count);
+	};
+
+And we want to test some code that buffers writes to the EEPROM:
+
+.. code-block:: c
+
+	struct eeprom_buffer {
+		ssize_t (*write)(struct eeprom_buffer *this, const char *buffer, size_t count);
+		int (*flush)(struct eeprom_buffer *this);
+		size_t flush_count; /* Flushes when buffer exceeds flush_count. */
+	};
+
+	struct eeprom_buffer *new_eeprom_buffer(struct eeprom *eeprom);
+	void destroy_eeprom_buffer(struct eeprom_buffer *eeprom_buffer);
+
+We can easily test this code by *faking out* the underlying EEPROM:
+
+.. code-block:: c
+
+	struct fake_eeprom {
+		struct eeprom parent;
+		char contents[FAKE_EEPROM_CONTENTS_SIZE];
+	};
+
+	ssize_t fake_eeprom_read(struct eeprom *parent, size_t offset, char *buffer, size_t count)
+	{
+		struct fake_eeprom *this = container_of(parent, struct fake_eeprom, parent);
+
+		count = min(count, FAKE_EEPROM_CONTENTS_SIZE - offset);
+		memcpy(buffer, this->contents + offset, count);
+
+		return count;
+	}
+
+	ssize_t fake_eeprom_write(struct eeprom *parent, size_t offset, const char *buffer, size_t count)
+	{
+		struct fake_eeprom *this = container_of(parent, struct fake_eeprom, parent);
+
+		count = min(count, FAKE_EEPROM_CONTENTS_SIZE - offset);
+		memcpy(this->contents + offset, buffer, count);
+
+		return count;
+	}
+
+	void fake_eeprom_init(struct fake_eeprom *this)
+	{
+		this->parent.read = fake_eeprom_read;
+		this->parent.write = fake_eeprom_write;
+		memset(this->contents, 0, FAKE_EEPROM_CONTENTS_SIZE);
+	}
+
+We can now use it to test ``struct eeprom_buffer``:
+
+.. code-block:: c
+
+	struct eeprom_buffer_test {
+		struct fake_eeprom *fake_eeprom;
+		struct eeprom_buffer *eeprom_buffer;
+	};
+
+	static void eeprom_buffer_test_does_not_write_until_flush(struct kunit *test)
+	{
+		struct eeprom_buffer_test *ctx = test->priv;
+		struct eeprom_buffer *eeprom_buffer = ctx->eeprom_buffer;
+		struct fake_eeprom *fake_eeprom = ctx->fake_eeprom;
+		char buffer[] = {0xff};
+
+		eeprom_buffer->flush_count = SIZE_MAX;
+
+		eeprom_buffer->write(eeprom_buffer, buffer, 1);
+		KUNIT_EXPECT_EQ(test, fake_eeprom->contents[0], 0);
+
+		eeprom_buffer->write(eeprom_buffer, buffer, 1);
+		KUNIT_EXPECT_EQ(test, fake_eeprom->contents[1], 0);
+
+		eeprom_buffer->flush(eeprom_buffer);
+		KUNIT_EXPECT_EQ(test, fake_eeprom->contents[0], 0xff);
+		KUNIT_EXPECT_EQ(test, fake_eeprom->contents[1], 0xff);
+	}
+
+	static void eeprom_buffer_test_flushes_after_flush_count_met(struct kunit *test)
+	{
+		struct eeprom_buffer_test *ctx = test->priv;
+		struct eeprom_buffer *eeprom_buffer = ctx->eeprom_buffer;
+		struct fake_eeprom *fake_eeprom = ctx->fake_eeprom;
+		char buffer[] = {0xff};
+
+		eeprom_buffer->flush_count = 2;
+
+		eeprom_buffer->write(eeprom_buffer, buffer, 1);
+		KUNIT_EXPECT_EQ(test, fake_eeprom->contents[0], 0);
+
+		eeprom_buffer->write(eeprom_buffer, buffer, 1);
+		KUNIT_EXPECT_EQ(test, fake_eeprom->contents[0], 0xff);
+		KUNIT_EXPECT_EQ(test, fake_eeprom->contents[1], 0xff);
+	}
+
+	static void eeprom_buffer_test_flushes_increments_of_flush_count(struct kunit *test)
+	{
+		struct eeprom_buffer_test *ctx = test->priv;
+		struct eeprom_buffer *eeprom_buffer = ctx->eeprom_buffer;
+		struct fake_eeprom *fake_eeprom = ctx->fake_eeprom;
+		char buffer[] = {0xff, 0xff};
+
+		eeprom_buffer->flush_count = 2;
+
+		eeprom_buffer->write(eeprom_buffer, buffer, 1);
+		KUNIT_EXPECT_EQ(test, fake_eeprom->contents[0], 0);
+
+		eeprom_buffer->write(eeprom_buffer, buffer, 2);
+		KUNIT_EXPECT_EQ(test, fake_eeprom->contents[0], 0xff);
+		KUNIT_EXPECT_EQ(test, fake_eeprom->contents[1], 0xff);
+		/* Should have only flushed the first two bytes. */
+		KUNIT_EXPECT_EQ(test, fake_eeprom->contents[2], 0);
+	}
+
+	static int eeprom_buffer_test_init(struct kunit *test)
+	{
+		struct eeprom_buffer_test *ctx;
+
+		ctx = kunit_kzalloc(test, sizeof(*ctx), GFP_KERNEL);
+		KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx);
+
+		ctx->fake_eeprom = kunit_kzalloc(test, sizeof(*ctx->fake_eeprom), GFP_KERNEL);
+		KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx->fake_eeprom);
+
+		ctx->eeprom_buffer = new_eeprom_buffer(&ctx->fake_eeprom->parent);
+		KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx->eeprom_buffer);
+
+		test->priv = ctx;
+
+		return 0;
+	}
+
+	static void eeprom_buffer_test_exit(struct kunit *test)
+	{
+		struct eeprom_buffer_test *ctx = test->priv;
+
+		destroy_eeprom_buffer(ctx->eeprom_buffer);
+	}
+
-- 
2.20.0.rc0.387.gc7a69e6b6c-goog

^ permalink raw reply related	[flat|nested] 232+ messages in thread

* [RFC v3 14/19] Documentation: kunit: add documentation for KUnit
  2018-11-28 19:36 ` [RFC v3 14/19] Documentation: kunit: add documentation for KUnit brendanhiggins
@ 2018-11-28 19:36   ` Brendan Higgins
  2018-11-29 13:56   ` kieran.bingham
  1 sibling, 0 replies; 232+ messages in thread
From: Brendan Higgins @ 2018-11-28 19:36 UTC (permalink / raw)


Add documentation for KUnit, the Linux kernel unit testing framework.
- Add intro and usage guide for KUnit
- Add API reference

Signed-off-by: Felix Guo <felixguoxiuping at gmail.com>
Signed-off-by: Brendan Higgins <brendanhiggins at google.com>
---
 Documentation/index.rst           |   1 +
 Documentation/kunit/api/index.rst |  16 ++
 Documentation/kunit/api/test.rst  |  15 +
 Documentation/kunit/faq.rst       |  46 +++
 Documentation/kunit/index.rst     |  80 ++++++
 Documentation/kunit/start.rst     | 180 ++++++++++++
 Documentation/kunit/usage.rst     | 447 ++++++++++++++++++++++++++++++
 7 files changed, 785 insertions(+)
 create mode 100644 Documentation/kunit/api/index.rst
 create mode 100644 Documentation/kunit/api/test.rst
 create mode 100644 Documentation/kunit/faq.rst
 create mode 100644 Documentation/kunit/index.rst
 create mode 100644 Documentation/kunit/start.rst
 create mode 100644 Documentation/kunit/usage.rst

diff --git a/Documentation/index.rst b/Documentation/index.rst
index 5db7e87c7cb1d..275ef4db79f61 100644
--- a/Documentation/index.rst
+++ b/Documentation/index.rst
@@ -68,6 +68,7 @@ merged much easier.
    kernel-hacking/index
    trace/index
    maintainer/index
+   kunit/index
 
 Kernel API documentation
 ------------------------
diff --git a/Documentation/kunit/api/index.rst b/Documentation/kunit/api/index.rst
new file mode 100644
index 0000000000000..c31c530088153
--- /dev/null
+++ b/Documentation/kunit/api/index.rst
@@ -0,0 +1,16 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+=============
+API Reference
+=============
+.. toctree::
+
+	test
+
+This section documents the KUnit kernel testing API. It is divided into the
+following sections:
+
+================================= ==============================================
+:doc:`test`                       documents all of the standard testing API
+                                  excluding mocking or mocking related features.
+================================= ==============================================
diff --git a/Documentation/kunit/api/test.rst b/Documentation/kunit/api/test.rst
new file mode 100644
index 0000000000000..7c926014f047c
--- /dev/null
+++ b/Documentation/kunit/api/test.rst
@@ -0,0 +1,15 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+========
+Test API
+========
+
+This file documents all of the standard testing API excluding mocking or mocking
+related features.
+
+.. kernel-doc:: include/kunit/test.h
+   :internal:
+
+.. kernel-doc:: include/kunit/kunit-stream.h
+   :internal:
+
diff --git a/Documentation/kunit/faq.rst b/Documentation/kunit/faq.rst
new file mode 100644
index 0000000000000..cb8e4fb2257a0
--- /dev/null
+++ b/Documentation/kunit/faq.rst
@@ -0,0 +1,46 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+=========================================
+Frequently Asked Questions
+=========================================
+
+How is this different from Autotest, kselftest, etc.?
+======================================================
+KUnit is a unit testing framework. Autotest, kselftest (and some others) are
+not.
+
+A `unit test <https://martinfowler.com/bliki/UnitTest.html>`_ is supposed to
+test a single unit of code in isolation, hence the name. A unit test should be
+the finest granularity of testing and as such should allow all possible code
+paths to be tested in the code under test; this is only possible if the code
+under test is very small and does not have any external dependencies outside of
+the test's control like hardware.
+
+There are no testing frameworks currently available for the kernel that do not
+require installing the kernel on a test machine or in a VM, and all require
+tests to be written in userspace and run on the kernel under test; this is true
+for Autotest, kselftest, and some others, disqualifying any of them from being
+considered unit testing frameworks.
+
+What is the difference between a unit test and these other kinds of tests?
+==========================================================================
+Most existing tests for the Linux kernel would be categorized as an integration
+test, or an end-to-end test.
+
+- A unit test is supposed to test a single unit of code in isolation, hence the
+  name. A unit test should be the finest granularity of testing and as such
+  should allow all possible code paths to be tested in the code under test; this
+  is only possible if the code under test is very small and does not have any
+  external dependencies outside of the test's control like hardware.
+- An integration test tests the interaction between a minimal set of components,
+  usually just two or three. For example, someone might write an integration
+  test to test the interaction between a driver and a piece of hardware, or to
+  test the interaction between the userspace libraries the kernel provides and
+  the kernel itself; however, one of these tests would probably not test the
+  entire kernel along with hardware interactions and interactions with the
+  userspace.
+- An end-to-end test usually tests the entire system from the perspective of the
+  code under test. For example, someone might write an end-to-end test for the
+  kernel by installing a production configuration of the kernel on production
+  hardware with a production userspace and then trying to exercise some behavior
+  that depends on interactions between the hardware, the kernel, and userspace.
diff --git a/Documentation/kunit/index.rst b/Documentation/kunit/index.rst
new file mode 100644
index 0000000000000..c6710211b647f
--- /dev/null
+++ b/Documentation/kunit/index.rst
@@ -0,0 +1,80 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+=========================================
+KUnit - Unit Testing for the Linux Kernel
+=========================================
+
+.. toctree::
+	:maxdepth: 2
+
+	start
+	usage
+	api/index
+	faq
+
+What is KUnit?
+==============
+
+KUnit is a lightweight unit testing and mocking framework for the Linux kernel.
+These tests can be run locally on a developer's workstation without a VM
+or special hardware.
+
+KUnit is heavily inspired by JUnit, Python's unittest.mock, and
+Googletest/Googlemock for C++. KUnit provides facilities for defining unit test
+cases, grouping related test cases into test suites, providing common
+infrastructure for running tests, and much more.
+
+Get started now: :doc:`start`
+
+Why KUnit?
+==========
+
+A unit test is supposed to test a single unit of code in isolation, hence the
+name. A unit test should be the finest granularity of testing and as such should
+allow all possible code paths to be tested in the code under test; this is only
+possible if the code under test is very small and does not have any external
+dependencies outside of the test's control like hardware.
+
+Outside of KUnit, no testing framework currently available for the kernel
+avoids installing the kernel on a test machine or in a VM, and all of them
+require tests to be written in userspace and run on the kernel under test;
+this is true for Autotest and kselftest, disqualifying either from being
+considered a unit testing framework.
+
+KUnit addresses the problem of being able to run tests without needing a
+virtual machine or actual hardware by using User Mode Linux (UML). UML is a
+Linux architecture, like ARM or x86; however, unlike other architectures it
+compiles the kernel to a standalone program that can be run like any other
+program directly inside of a host operating system. To be clear, UML does not
+require any virtualization support; it is just a regular program.
+
+KUnit is fast. Excluding build time, from invocation to completion KUnit can run
+several dozen tests in only 10 to 20 seconds; this might not sound like a big
+deal to some people, but having such fast and easy to run tests fundamentally
+changes the way you go about testing and even writing code in the first place.
+Linus himself said in his `git talk at Google
+<https://gist.github.com/lorn/1272686/revisions#diff-53c65572127855f1b003db4064a94573R874>`_:
+
+	"... a lot of people seem to think that performance is about doing the
+	same thing, just doing it faster, and that is not true. That is not what
+	performance is all about. If you can do something really fast, really
+	well, people will start using it differently."
+
+In this context Linus was talking about branching and merging,
+but this point also applies to testing. If your tests are slow, unreliable, or
+difficult to write, and require a special setup or special hardware to run,
+then you wait a lot longer to write tests, and you wait a lot longer to run
+tests; this means that tests are likely to break, unlikely to test a lot of
+things, and are unlikely to be rerun once they pass. If your tests are really
+fast, you run them all the time, every time you make a change, and every time
+someone sends you some code. Why trust that someone ran all their tests
+correctly on every change when you can just run them yourself in less time than
+it takes to read their test log?
+
+How do I use it?
+===================
+
+*   :doc:`start` - for new users of KUnit
+*   :doc:`usage` - for a more detailed explanation of KUnit features
+*   :doc:`api/index` - for the list of KUnit APIs used for testing
+
diff --git a/Documentation/kunit/start.rst b/Documentation/kunit/start.rst
new file mode 100644
index 0000000000000..5cdba5091905e
--- /dev/null
+++ b/Documentation/kunit/start.rst
@@ -0,0 +1,180 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+===============
+Getting Started
+===============
+
+Installing dependencies
+=======================
+KUnit has the same dependencies as the Linux kernel. As long as you can build
+the kernel, you can run KUnit.
+
+KUnit Wrapper
+=============
+Included with KUnit is a simple Python wrapper that makes KUnit output easier
+to use and read. It handles building and running the kernel, as well as
+formatting the output.
+
+The wrapper can be run with:
+
+.. code-block:: bash
+
+   ./tools/testing/kunit/kunit.py
+
+Creating a kunitconfig
+======================
+The Python script is a thin wrapper around Kbuild; as such, it needs to be
+configured with a ``kunitconfig`` file. This file essentially contains the
+regular kernel config, with the specific test targets added as well.
+
+.. code-block:: bash
+
+	git clone -b master https://kunit.googlesource.com/kunitconfig $PATH_TO_KUNITCONFIG_REPO
+	cd $PATH_TO_LINUX_REPO
+	ln -s $PATH_TO_KUNIT_CONFIG_REPO/kunitconfig kunitconfig
+
+You may want to add ``kunitconfig`` to your local ``.gitignore``.
+
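+For reference, a minimal ``kunitconfig`` would contain at least the option that
+enables KUnit itself (an assumption based on the ``depends on ... KUNIT`` lines
+used later in this guide):
+
+.. code-block:: none
+
+	CONFIG_KUNIT=y
+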
+Verifying KUnit Works
+-------------------------
+
+To make sure that everything is set up correctly, simply invoke the Python
+wrapper from your kernel repo:
+
+.. code-block:: bash
+
+	./tools/testing/kunit/kunit.py
+
+.. note::
+   You may want to run ``make mrproper`` first.
+
+If everything worked correctly, you should see the following:
+
+.. code-block:: none
+
+	Generating .config ...
+	Building KUnit Kernel ...
+	Starting KUnit Kernel ...
+
+followed by a list of tests that are run. All of them should be passing.
+
+.. note::
+   Because it is building a lot of sources for the first time, the ``Building
+   KUnit Kernel`` step may take a while.
+
+Writing your first test
+==========================
+
+In your kernel repo let's add some code that we can test. Create a file
+``drivers/misc/example.h`` with the contents:
+
+.. code-block:: c
+
+	int misc_example_add(int left, int right);
+
+create a file ``drivers/misc/example.c``:
+
+.. code-block:: c
+
+	#include <linux/errno.h>
+
+	#include "example.h"
+
+	int misc_example_add(int left, int right)
+	{
+		return left + right;
+	}
+
+Now add the following lines to ``drivers/misc/Kconfig``:
+
+.. code-block:: kconfig
+
+	config MISC_EXAMPLE
+		bool "My example"
+
+and the following lines to ``drivers/misc/Makefile``:
+
+.. code-block:: make
+
+	obj-$(CONFIG_MISC_EXAMPLE) += example.o
+
+Now we are ready to write the test. The test will be in
+``drivers/misc/example-test.c``:
+
+.. code-block:: c
+
+	#include <kunit/test.h>
+	#include "example.h"
+
+	/* Define the test cases. */
+
+	static void misc_example_add_test_basic(struct kunit *test)
+	{
+		KUNIT_EXPECT_EQ(test, 1, misc_example_add(1, 0));
+		KUNIT_EXPECT_EQ(test, 2, misc_example_add(1, 1));
+		KUNIT_EXPECT_EQ(test, 0, misc_example_add(-1, 1));
+		KUNIT_EXPECT_EQ(test, INT_MAX, misc_example_add(0, INT_MAX));
+		KUNIT_EXPECT_EQ(test, -1, misc_example_add(INT_MAX, INT_MIN));
+	}
+
+	static void misc_example_test_failure(struct kunit *test)
+	{
+		KUNIT_FAIL(test, "This test never passes.");
+	}
+
+	static struct kunit_case misc_example_test_cases[] = {
+		KUNIT_CASE(misc_example_add_test_basic),
+		KUNIT_CASE(misc_example_test_failure),
+		{},
+	};
+
+	static struct kunit_module misc_example_test_module = {
+		.name = "misc-example",
+		.test_cases = misc_example_test_cases,
+	};
+	module_test(misc_example_test_module);
+
+Now add the following to ``drivers/misc/Kconfig``:
+
+.. code-block:: kconfig
+
+	config MISC_EXAMPLE_TEST
+		bool "Test for my example"
+		depends on MISC_EXAMPLE && KUNIT
+
+and the following to ``drivers/misc/Makefile``:
+
+.. code-block:: make
+
+	obj-$(CONFIG_MISC_EXAMPLE_TEST) += example-test.o
+
+Now add it to your ``kunitconfig``:
+
+.. code-block:: none
+
+	CONFIG_MISC_EXAMPLE=y
+	CONFIG_MISC_EXAMPLE_TEST=y
+
+Now you can run the test:
+
+.. code-block:: bash
+
+	./tools/testing/kunit/kunit.py
+
+You should see the following failure:
+
+.. code-block:: none
+
+	...
+	[16:08:57] [PASSED] misc-example:misc_example_add_test_basic
+	[16:08:57] [FAILED] misc-example:misc_example_test_failure
+	[16:08:57] EXPECTATION FAILED at drivers/misc/example-test.c:17
+	[16:08:57] 	This test never passes.
+	...
+
+Congrats! You just wrote your first KUnit test!
+
+Next Steps
+=============
+*   Check out the :doc:`usage` page for a more
+    in-depth explanation of KUnit.
diff --git a/Documentation/kunit/usage.rst b/Documentation/kunit/usage.rst
new file mode 100644
index 0000000000000..96ef7f9a1add4
--- /dev/null
+++ b/Documentation/kunit/usage.rst
@@ -0,0 +1,447 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+=============
+Using KUnit
+=============
+
+The purpose of this document is to describe what KUnit is, how it works, how it
+is intended to be used, and all the concepts and terminology that are needed to
+understand it. This guide assumes a working knowledge of the Linux kernel and
+some basic knowledge of testing.
+
+For a high level introduction to KUnit, including setting up KUnit for your
+project, see :doc:`start`.
+
+Organization of this document
+=================================
+
+This document is organized into two main sections: Testing and Isolating
+Behavior. The first covers what a unit test is and how to use KUnit to write
+them. The second covers how to use KUnit to isolate code and make it possible
+to unit test code that was otherwise un-unit-testable.
+
+Testing
+==========
+
+What is KUnit?
+------------------
+
+"K" is short for "kernel" so "KUnit" is the "(Linux) Kernel Unit Testing
+Framework." KUnit is intended first and foremost for writing unit tests; it is
+general enough that it can be used to write integration tests; however, this is
+a secondary goal. KUnit has no ambition of being the only testing framework for
+the kernel; for example, it does not intend to be an end-to-end testing
+framework.
+
+What is Unit Testing?
+-------------------------
+
+A `unit test <https://martinfowler.com/bliki/UnitTest.html>`_ is a test that
+tests code at the smallest possible scope, a *unit* of code. In the C
+programming language that's a function.
+
+Unit tests should be written for all the publicly exposed functions in a
+compilation unit; that is, all the functions that are exported as part of a
+*class* (defined below) and all functions which are **not** static.
+
+Writing Tests
+-------------
+
+Test Cases
+~~~~~~~~~~
+
+The fundamental unit in KUnit is the test case. A test case is a function with
+the signature ``void (*)(struct kunit *test)``. It calls a function to be tested
+and then sets *expectations* for what should happen. For example:
+
+.. code-block:: c
+
+	void example_test_success(struct kunit *test)
+	{
+	}
+
+	void example_test_failure(struct kunit *test)
+	{
+		KUNIT_FAIL(test, "This test never passes.");
+	}
+
+In the above example ``example_test_success`` always passes because it does
+nothing; no expectations are set, so all expectations pass. On the other hand
+``example_test_failure`` always fails because it calls ``KUNIT_FAIL``, which is
+a special expectation that logs a message and causes the test case to fail.
+
+Expectations
+~~~~~~~~~~~~
+An *expectation* is a way to specify that you expect a piece of code to do
+something in a test. An expectation is called like a function. A test is made
+by setting expectations about the behavior of a piece of code under test; when
+one or more of the expectations fail, the test case fails and information about
+the failure is logged. For example:
+
+.. code-block:: c
+
+	void add_test_basic(struct kunit *test)
+	{
+		KUNIT_EXPECT_EQ(test, 1, add(1, 0));
+		KUNIT_EXPECT_EQ(test, 2, add(1, 1));
+	}
+
+In the above example ``add_test_basic`` sets a number of expectations about the
+behavior of a function called ``add``; the first parameter is always of type
+``struct kunit *``, which contains information about the current test context;
+the second parameter, in this case, is what the value is expected to be; the
+last value is what the value actually is. If ``add`` passes all of these
+expectations, the test case, ``add_test_basic``, will pass; if any one of these
+expectations fails, the test case will fail.
+
+It is important to understand that a test case *fails* when any expectation is
+violated; however, the test will continue running, potentially trying other
+expectations until the test case ends or is otherwise terminated. This is as
+opposed to *assertions* which are discussed later.
+
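+As a minimal sketch of this behavior (using the hypothetical ``add`` function
+from above), the following test case logs a failure for its first expectation
+but still evaluates the second one:
+
+.. code-block:: c
+
+	void add_test_continues_after_failure(struct kunit *test)
+	{
+		/* This fails and marks the test case as failed... */
+		KUNIT_EXPECT_EQ(test, 3, add(1, 1));
+
+		/* ...but this expectation is still checked, and passes. */
+		KUNIT_EXPECT_EQ(test, 2, add(1, 1));
+	}
+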
+To learn about more expectations supported by KUnit, see :doc:`api/test`.
+
+.. note::
+   A single test case should be pretty short, pretty easy to understand, and
+   focused on a single behavior.
+
+For example, if we wanted to properly test the add function above, we would
+create additional test cases which would each test a different property that an
+add function should have, like this:
+
+.. code-block:: c
+
+	void add_test_basic(struct kunit *test)
+	{
+		KUNIT_EXPECT_EQ(test, 1, add(1, 0));
+		KUNIT_EXPECT_EQ(test, 2, add(1, 1));
+	}
+
+	void add_test_negative(struct kunit *test)
+	{
+		KUNIT_EXPECT_EQ(test, 0, add(-1, 1));
+	}
+
+	void add_test_max(struct kunit *test)
+	{
+		KUNIT_EXPECT_EQ(test, INT_MAX, add(0, INT_MAX));
+		KUNIT_EXPECT_EQ(test, -1, add(INT_MAX, INT_MIN));
+	}
+
+	void add_test_overflow(struct kunit *test)
+	{
+		KUNIT_EXPECT_EQ(test, INT_MIN, add(INT_MAX, 1));
+	}
+
+Notice how it is immediately obvious what properties we are testing
+for.
+
+Assertions
+~~~~~~~~~~
+
+KUnit also has the concept of an *assertion*. An assertion is just like an
+expectation except the assertion immediately terminates the test case if it is
+not satisfied.
+
+For example:
+
+.. code-block:: c
+
+	static void mock_test_do_expect_default_return(struct kunit *test)
+	{
+		struct mock_test_context *ctx = test->priv;
+		struct mock *mock = ctx->mock;
+		int param0 = 5, param1 = -5;
+		const char *two_param_types[] = {"int", "int"};
+		const void *two_params[] = {&param0, &param1};
+		const void *ret;
+
+		ret = mock->do_expect(mock,
+				      "test_printk", test_printk,
+				      two_param_types, two_params,
+				      ARRAY_SIZE(two_params));
+		KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ret);
+		KUNIT_EXPECT_EQ(test, -4, *((int *) ret));
+	}
+
+In this example, the method under test should return a pointer to a value, so
+if the pointer returned by the method is null or an errno, we don't want to
+bother continuing the test since the following expectation could crash the test
+case. ``KUNIT_ASSERT_NOT_ERR_OR_NULL(...)`` allows us to bail out of the test
+case if the appropriate conditions have not been satisfied to complete the test.
+
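+A shorter, hypothetical sketch of the same pattern (using ``kunit_kzalloc``,
+which appears later in this document) might look like:
+
+.. code-block:: c
+
+	static void example_test_allocation(struct kunit *test)
+	{
+		char *buffer = kunit_kzalloc(test, 16, GFP_KERNEL);
+
+		/* Bail out of the test case now if the allocation failed... */
+		KUNIT_ASSERT_NOT_ERR_OR_NULL(test, buffer);
+
+		/* ...so that this expectation cannot dereference a bad pointer. */
+		KUNIT_EXPECT_EQ(test, buffer[0], 0);
+	}
+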
+Modules / Test Suites
+~~~~~~~~~~~~~~~~~~~~~
+
+Now obviously one unit test isn't very helpful; the power comes from having
+many test cases covering all of your behaviors. Consequently it is common to
+have many *similar* tests; in order to reduce duplication in these closely
+related tests most unit testing frameworks provide the concept of a *test
+suite*, which in KUnit we call a *test module*: simply a collection of test
+cases for a unit of code with a set up function that gets invoked before
+every test case and a tear down function that gets invoked after every
+test case completes.
+
+Example:
+
+.. code-block:: c
+
+	static struct kunit_case example_test_cases[] = {
+		KUNIT_CASE(example_test_foo),
+		KUNIT_CASE(example_test_bar),
+		KUNIT_CASE(example_test_baz),
+		{},
+	};
+
+	static struct kunit_module example_test_module = {
+		.name = "example",
+		.init = example_test_init,
+		.exit = example_test_exit,
+		.test_cases = example_test_cases,
+	};
+	module_test(example_test_module);
+
+In the above example the test suite, ``example_test_module``, would run the test
+cases ``example_test_foo``, ``example_test_bar``, and ``example_test_baz``, each
+would have ``example_test_init`` called immediately before it and would have
+``example_test_exit`` called immediately after it.
+``module_test(example_test_module)`` registers the test suite with the KUnit
+test framework.
+
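+For reference, ``example_test_init`` and ``example_test_exit`` might look like
+the following minimal sketch; only the signatures are dictated by KUnit, the
+bodies here are placeholders:
+
+.. code-block:: c
+
+	static int example_test_init(struct kunit *test)
+	{
+		/* Runs immediately before every test case in the suite. */
+		kunit_info(test, "initializing\n");
+
+		return 0; /* return 0 on success */
+	}
+
+	static void example_test_exit(struct kunit *test)
+	{
+		/* Runs immediately after every test case completes. */
+		kunit_info(test, "cleaning up\n");
+	}
+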
+.. note::
+   A test case will only be run if it is associated with a test suite.
+
+For more information on KUnit test modules and test cases, see :doc:`api/test`.
+
+Isolating Behavior
+==================
+
+The most important aspect of unit testing that other forms of testing do not
+provide is the ability to limit the amount of code under test to a single
+unit. In practice, this is only possible if you can control what code gets run
+when the unit under test calls a function. This is usually accomplished
+through some sort of indirection: a function is exposed as part of an API so
+that its definition can be changed without affecting the rest of the code
+base. In the kernel this indirection primarily comes from two constructs:
+classes, which are structs that contain function pointers provided by the
+implementer, and architecture-specific functions, which have definitions
+selected at compile time.
+
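+Architecture-specific functions are the simpler of the two: a header declares
+the function once, and the build system selects which definition gets compiled
+in. A hypothetical illustration (none of these names come from real kernel
+code):
+
+.. code-block:: c
+
+	/* cycle_counter.h: one prototype, definition chosen at build time. */
+	u64 read_cycle_counter(void);
+
+	/* arch/foo/cycle_counter.c: definition used on real hardware. */
+	u64 read_cycle_counter(void)
+	{
+		return read_foo_counter_register();
+	}
+
+	/* arch/um/cycle_counter.c: definition suitable for testing. */
+	static u64 fake_cycle_count;
+
+	u64 read_cycle_counter(void)
+	{
+		return fake_cycle_count++;
+	}
+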
+Classes
+-------
+
+Classes are not a construct built into the C programming language; however,
+they are an easily derived concept. Accordingly, pretty much every project
+that does not use a standardized object oriented library (like GNOME's
+GObject) has its own slightly different way of doing object oriented
+programming; the Linux kernel is no exception.
+
+The central concept in kernel object oriented programming is the class. In the
+kernel, a *class* is a struct that contains function pointers. This creates a
+contract between *implementers* and *users* since it forces them to use the
+same function signature without having to call the function directly. In order
+for it to truly be a class, the function pointers must specify that a pointer
+to the class, known as a *class handle*, be one of the parameters; this makes
+it possible for the member functions (also known as *methods*) to have access
+to member variables (more commonly known as *fields*) allowing the same
+implementation to have multiple *instances*.
+
+Typically a class can be *overridden* by *child classes* by embedding the
+*parent class* in the child class. When a method provided by the child class
+is called, the child implementation knows that the pointer passed to it points
+to a parent contained within the child. Because of this, the child can compute
+a pointer to itself: the parent is always at a fixed offset within the child
+struct, namely the offset of the embedded parent member. For example:
+
+.. code-block:: c
+
+	struct shape {
+		int (*area)(struct shape *this);
+	};
+
+	struct rectangle {
+		struct shape parent;
+		int length;
+		int width;
+	};
+
+	int rectangle_area(struct shape *this)
+	{
+		struct rectangle *self = container_of(this, struct rectangle, parent);
+
+		return self->length * self->width;
+	}
+
+	void rectangle_new(struct rectangle *self, int length, int width)
+	{
+		self->parent.area = rectangle_area;
+		self->length = length;
+		self->width = width;
+	}
+
+In this example (as in most kernel code) the operation of computing the pointer
+to the child from the pointer to the parent is done by ``container_of``.
+
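+Since users only ever call through the class handle, the same call site works
+for any implementation. A minimal usage sketch, reusing ``struct rectangle``
+from above (``report_area`` and ``example`` are made up for illustration):
+
+.. code-block:: c
+
+	int report_area(struct shape *shape)
+	{
+		/* Dispatches to rectangle_area() without knowing the concrete
+		 * type behind the class handle.
+		 */
+		return shape->area(shape);
+	}
+
+	void example(void)
+	{
+		struct rectangle rect;
+
+		rectangle_new(&rect, 3, 4);
+		report_area(&rect.parent); /* returns 12 */
+	}
+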
+Faking Classes
+~~~~~~~~~~~~~~
+
+In order to unit test a piece of code that calls a method in a class, the
+behavior of the method must be controllable, otherwise the test ceases to be a
+unit test and becomes an integration test.
+
+A fake just provides an implementation of a piece of code that is different
+from what runs in a production instance, but behaves identically from the
+standpoint of the callers; this is usually done to replace a dependency that
+is hard to deal with, or is slow.
+
+A good example of this might be implementing a fake EEPROM that just stores the
+"contents" in an internal buffer. For example, let's assume we have a class that
+represents an EEPROM:
+
+.. code-block:: c
+
+	struct eeprom {
+		ssize_t (*read)(struct eeprom *this, size_t offset, char *buffer, size_t count);
+		ssize_t (*write)(struct eeprom *this, size_t offset, const char *buffer, size_t count);
+	};
+
+And we want to test some code that buffers writes to the EEPROM:
+
+.. code-block:: c
+
+	struct eeprom_buffer {
+		ssize_t (*write)(struct eeprom_buffer *this, const char *buffer, size_t count);
+		int (*flush)(struct eeprom_buffer *this);
+		size_t flush_count; /* Flushes when buffer exceeds flush_count. */
+	};
+
+	struct eeprom_buffer *new_eeprom_buffer(struct eeprom *eeprom);
+	void destroy_eeprom_buffer(struct eeprom_buffer *eeprom_buffer);
+
+We can easily test this code by *faking out* the underlying EEPROM:
+
+.. code-block:: c
+
+	struct fake_eeprom {
+		struct eeprom parent;
+		char contents[FAKE_EEPROM_CONTENTS_SIZE];
+	};
+
+	ssize_t fake_eeprom_read(struct eeprom *parent, size_t offset, char *buffer, size_t count)
+	{
+		struct fake_eeprom *this = container_of(parent, struct fake_eeprom, parent);
+
+		count = min(count, FAKE_EEPROM_CONTENTS_SIZE - offset);
+		memcpy(buffer, this->contents + offset, count);
+
+		return count;
+	}
+
+	ssize_t fake_eeprom_write(struct eeprom *parent, size_t offset, const char *buffer, size_t count)
+	{
+		struct fake_eeprom *this = container_of(parent, struct fake_eeprom, parent);
+
+		count = min(count, FAKE_EEPROM_CONTENTS_SIZE - offset);
+		memcpy(this->contents + offset, buffer, count);
+
+		return count;
+	}
+
+	void fake_eeprom_init(struct fake_eeprom *this)
+	{
+		this->parent.read = fake_eeprom_read;
+		this->parent.write = fake_eeprom_write;
+		memset(this->contents, 0, FAKE_EEPROM_CONTENTS_SIZE);
+	}
+
+We can now use it to test ``struct eeprom_buffer``:
+
+.. code-block:: c
+
+	struct eeprom_buffer_test {
+		struct fake_eeprom *fake_eeprom;
+		struct eeprom_buffer *eeprom_buffer;
+	};
+
+	static void eeprom_buffer_test_does_not_write_until_flush(struct kunit *test)
+	{
+		struct eeprom_buffer_test *ctx = test->priv;
+		struct eeprom_buffer *eeprom_buffer = ctx->eeprom_buffer;
+		struct fake_eeprom *fake_eeprom = ctx->fake_eeprom;
+		char buffer[] = {0xff};
+
+		eeprom_buffer->flush_count = SIZE_MAX;
+
+		eeprom_buffer->write(eeprom_buffer, buffer, 1);
+		KUNIT_EXPECT_EQ(test, fake_eeprom->contents[0], 0);
+
+		eeprom_buffer->write(eeprom_buffer, buffer, 1);
+		KUNIT_EXPECT_EQ(test, fake_eeprom->contents[1], 0);
+
+		eeprom_buffer->flush(eeprom_buffer);
+		KUNIT_EXPECT_EQ(test, fake_eeprom->contents[0], 0xff);
+		KUNIT_EXPECT_EQ(test, fake_eeprom->contents[1], 0xff);
+	}
+
+	static void eeprom_buffer_test_flushes_after_flush_count_met(struct kunit *test)
+	{
+		struct eeprom_buffer_test *ctx = test->priv;
+		struct eeprom_buffer *eeprom_buffer = ctx->eeprom_buffer;
+		struct fake_eeprom *fake_eeprom = ctx->fake_eeprom;
+		char buffer[] = {0xff};
+
+		eeprom_buffer->flush_count = 2;
+
+		eeprom_buffer->write(eeprom_buffer, buffer, 1);
+		KUNIT_EXPECT_EQ(test, fake_eeprom->contents[0], 0);
+
+		eeprom_buffer->write(eeprom_buffer, buffer, 1);
+		KUNIT_EXPECT_EQ(test, fake_eeprom->contents[0], 0xff);
+		KUNIT_EXPECT_EQ(test, fake_eeprom->contents[1], 0xff);
+	}
+
+	static void eeprom_buffer_test_flushes_increments_of_flush_count(struct kunit *test)
+	{
+		struct eeprom_buffer_test *ctx = test->priv;
+		struct eeprom_buffer *eeprom_buffer = ctx->eeprom_buffer;
+		struct fake_eeprom *fake_eeprom = ctx->fake_eeprom;
+		char buffer[] = {0xff, 0xff};
+
+		eeprom_buffer->flush_count = 2;
+
+		eeprom_buffer->write(eeprom_buffer, buffer, 1);
+		KUNIT_EXPECT_EQ(test, fake_eeprom->contents[0], 0);
+
+		eeprom_buffer->write(eeprom_buffer, buffer, 2);
+		KUNIT_EXPECT_EQ(test, fake_eeprom->contents[0], 0xff);
+		KUNIT_EXPECT_EQ(test, fake_eeprom->contents[1], 0xff);
+		/* Should have only flushed the first two bytes. */
+		KUNIT_EXPECT_EQ(test, fake_eeprom->contents[2], 0);
+	}
+
+	static int eeprom_buffer_test_init(struct kunit *test)
+	{
+		struct eeprom_buffer_test *ctx;
+
+		ctx = kunit_kzalloc(test, sizeof(*ctx), GFP_KERNEL);
+		KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx);
+
+		ctx->fake_eeprom = kunit_kzalloc(test, sizeof(*ctx->fake_eeprom), GFP_KERNEL);
+		KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx->fake_eeprom);
+
+		ctx->eeprom_buffer = new_eeprom_buffer(&ctx->fake_eeprom->parent);
+		KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx->eeprom_buffer);
+
+		test->priv = ctx;
+
+		return 0;
+	}
+
+	static void eeprom_buffer_test_exit(struct kunit *test)
+	{
+		struct eeprom_buffer_test *ctx = test->priv;
+
+		destroy_eeprom_buffer(ctx->eeprom_buffer);
+	}
+
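+Finally, the test cases above are tied to the ``init``/``exit`` functions and
+registered with KUnit using the same pattern shown earlier (a sketch following
+that pattern):
+
+.. code-block:: c
+
+	static struct kunit_case eeprom_buffer_test_cases[] = {
+		KUNIT_CASE(eeprom_buffer_test_does_not_write_until_flush),
+		KUNIT_CASE(eeprom_buffer_test_flushes_after_flush_count_met),
+		KUNIT_CASE(eeprom_buffer_test_flushes_increments_of_flush_count),
+		{},
+	};
+
+	static struct kunit_module eeprom_buffer_test_module = {
+		.name = "eeprom_buffer",
+		.init = eeprom_buffer_test_init,
+		.exit = eeprom_buffer_test_exit,
+		.test_cases = eeprom_buffer_test_cases,
+	};
+	module_test(eeprom_buffer_test_module);
+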
-- 
2.20.0.rc0.387.gc7a69e6b6c-goog

^ permalink raw reply related	[flat|nested] 232+ messages in thread

* [RFC v3 15/19] MAINTAINERS: add entry for KUnit the unit testing framework
  2018-11-28 19:36 [RFC v3 00/19] kunit: introduce KUnit, the Linux kernel unit testing framework brendanhiggins
                   ` (14 preceding siblings ...)
  2018-11-28 19:36 ` [RFC v3 14/19] Documentation: kunit: add documentation for KUnit brendanhiggins
@ 2018-11-28 19:36 ` brendanhiggins
  2018-11-28 19:36   ` Brendan Higgins
  2018-11-28 19:36 ` [RFC v3 16/19] arch: um: make UML unflatten device tree when testing brendanhiggins
                   ` (5 subsequent siblings)
  21 siblings, 1 reply; 232+ messages in thread
From: brendanhiggins @ 2018-11-28 19:36 UTC (permalink / raw)


Add myself as maintainer of KUnit, the Linux kernel's unit testing
framework.

Signed-off-by: Brendan Higgins <brendanhiggins at google.com>
---
 MAINTAINERS | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index b2f710eee67a7..8c9b56dbc9645 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -7988,6 +7988,16 @@ S:	Maintained
 F:	tools/testing/selftests/
 F:	Documentation/dev-tools/kselftest*
 
+KERNEL UNIT TESTING FRAMEWORK (KUnit)
+M:	Brendan Higgins <brendanhiggins at google.com>
+L:	kunit-dev at googlegroups.com
+W:	https://google.github.io/kunit-docs/third_party/kernel/docs/
+S:	Maintained
+F:	Documentation/kunit/
+F:	include/kunit/
+F:	kunit/
+F:	tools/testing/kunit/
+
 KERNEL USERMODE HELPER
 M:	"Luis R. Rodriguez" <mcgrof at kernel.org>
 L:	linux-kernel at vger.kernel.org
-- 
2.20.0.rc0.387.gc7a69e6b6c-goog

^ permalink raw reply related	[flat|nested] 232+ messages in thread

* [RFC v3 16/19] arch: um: make UML unflatten device tree when testing
  2018-11-28 19:36 [RFC v3 00/19] kunit: introduce KUnit, the Linux kernel unit testing framework brendanhiggins
                   ` (15 preceding siblings ...)
  2018-11-28 19:36 ` [RFC v3 15/19] MAINTAINERS: add entry for KUnit the unit testing framework brendanhiggins
@ 2018-11-28 19:36 ` brendanhiggins
  2018-11-28 19:36   ` Brendan Higgins
                     ` (2 more replies)
  2018-11-28 19:36 ` [RFC v3 17/19] of: unittest: migrate tests to run on KUnit brendanhiggins
                   ` (4 subsequent siblings)
  21 siblings, 3 replies; 232+ messages in thread
From: brendanhiggins @ 2018-11-28 19:36 UTC (permalink / raw)


Make UML unflatten any present device trees when running KUnit tests.

Signed-off-by: Brendan Higgins <brendanhiggins at google.com>
---
 arch/um/kernel/um_arch.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/arch/um/kernel/um_arch.c b/arch/um/kernel/um_arch.c
index a818ccef30ca2..bd58ae3bf4148 100644
--- a/arch/um/kernel/um_arch.c
+++ b/arch/um/kernel/um_arch.c
@@ -13,6 +13,7 @@
 #include <linux/sched.h>
 #include <linux/sched/task.h>
 #include <linux/kmsg_dump.h>
+#include <linux/of_fdt.h>
 
 #include <asm/pgtable.h>
 #include <asm/processor.h>
@@ -347,6 +348,9 @@ void __init setup_arch(char **cmdline_p)
 	read_initrd();
 
 	paging_init();
+#if IS_ENABLED(CONFIG_OF_UNITTEST)
+	unflatten_device_tree();
+#endif
 	strlcpy(boot_command_line, command_line, COMMAND_LINE_SIZE);
 	*cmdline_p = command_line;
 	setup_hostinfo(host_info, sizeof host_info);
-- 
2.20.0.rc0.387.gc7a69e6b6c-goog

^ permalink raw reply related	[flat|nested] 232+ messages in thread

* [RFC v3 17/19] of: unittest: migrate tests to run on KUnit
  2018-11-28 19:36 [RFC v3 00/19] kunit: introduce KUnit, the Linux kernel unit testing framework brendanhiggins
                   ` (16 preceding siblings ...)
  2018-11-28 19:36 ` [RFC v3 16/19] arch: um: make UML unflatten device tree when testing brendanhiggins
@ 2018-11-28 19:36 ` brendanhiggins
  2018-11-28 19:36   ` Brendan Higgins
                     ` (2 more replies)
  2018-11-28 19:36 ` [RFC v3 18/19] of: unittest: split out a couple of test cases from unittest brendanhiggins
                   ` (3 subsequent siblings)
  21 siblings, 3 replies; 232+ messages in thread
From: brendanhiggins @ 2018-11-28 19:36 UTC (permalink / raw)


Migrate the tests, without any cleanup or modification of test logic in any
way, to run under KUnit using the KUnit expectation and assertion API.

Signed-off-by: Brendan Higgins <brendanhiggins at google.com>
---
 drivers/of/Kconfig    |    1 +
 drivers/of/unittest.c | 1405 ++++++++++++++++++++++-------------------
 2 files changed, 752 insertions(+), 654 deletions(-)

diff --git a/drivers/of/Kconfig b/drivers/of/Kconfig
index ad3fcad4d75b8..f309399deac20 100644
--- a/drivers/of/Kconfig
+++ b/drivers/of/Kconfig
@@ -15,6 +15,7 @@ if OF
 config OF_UNITTEST
 	bool "Device Tree runtime unit tests"
 	depends on !SPARC
+	depends on KUNIT
 	select IRQ_DOMAIN
 	select OF_EARLY_FLATTREE
 	select OF_RESOLVE
diff --git a/drivers/of/unittest.c b/drivers/of/unittest.c
index 41b49716ac75f..a5ef44730ffdb 100644
--- a/drivers/of/unittest.c
+++ b/drivers/of/unittest.c
@@ -26,186 +26,187 @@
 
 #include <linux/bitops.h>
 
+#include <kunit/test.h>
+
 #include "of_private.h"
 
-static struct unittest_results {
-	int passed;
-	int failed;
-} unittest_results;
-
-#define unittest(result, fmt, ...) ({ \
-	bool failed = !(result); \
-	if (failed) { \
-		unittest_results.failed++; \
-		pr_err("FAIL %s():%i " fmt, __func__, __LINE__, ##__VA_ARGS__); \
-	} else { \
-		unittest_results.passed++; \
-		pr_debug("pass %s():%i\n", __func__, __LINE__); \
-	} \
-	failed; \
-})
-
-static void __init of_unittest_find_node_by_name(void)
+static void of_unittest_find_node_by_name(struct kunit *test)
 {
 	struct device_node *np;
 	const char *options, *name;
 
 	np = of_find_node_by_path("/testcase-data");
 	name = kasprintf(GFP_KERNEL, "%pOF", np);
-	unittest(np && !strcmp("/testcase-data", name),
-		"find /testcase-data failed\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+	KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
+			       "find /testcase-data failed\n");
 	of_node_put(np);
 	kfree(name);
 
 	/* Test if trailing '/' works */
-	np = of_find_node_by_path("/testcase-data/");
-	unittest(!np, "trailing '/' on /testcase-data/ should fail\n");
+	KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("/testcase-data/"), NULL,
+			    "trailing '/' on /testcase-data/ should fail\n");
 
 	np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
 	name = kasprintf(GFP_KERNEL, "%pOF", np);
-	unittest(np && !strcmp("/testcase-data/phandle-tests/consumer-a", name),
-		"find /testcase-data/phandle-tests/consumer-a failed\n");
+	KUNIT_EXPECT_STREQ_MSG(test,
+			       "/testcase-data/phandle-tests/consumer-a", name,
+			       "find /testcase-data/phandle-tests/consumer-a failed\n");
 	of_node_put(np);
 	kfree(name);
 
 	np = of_find_node_by_path("testcase-alias");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
 	name = kasprintf(GFP_KERNEL, "%pOF", np);
-	unittest(np && !strcmp("/testcase-data", name),
-		"find testcase-alias failed\n");
+	KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
+			       "find testcase-alias failed\n");
 	of_node_put(np);
 	kfree(name);
 
 	/* Test if trailing '/' works on aliases */
-	np = of_find_node_by_path("testcase-alias/");
-	unittest(!np, "trailing '/' on testcase-alias/ should fail\n");
+	KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("testcase-alias/"), NULL,
+			    "trailing '/' on testcase-alias/ should fail\n");
 
 	np = of_find_node_by_path("testcase-alias/phandle-tests/consumer-a");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
 	name = kasprintf(GFP_KERNEL, "%pOF", np);
-	unittest(np && !strcmp("/testcase-data/phandle-tests/consumer-a", name),
-		"find testcase-alias/phandle-tests/consumer-a failed\n");
+	KUNIT_EXPECT_STREQ_MSG(test,
+			       "/testcase-data/phandle-tests/consumer-a", name,
+			       "find testcase-alias/phandle-tests/consumer-a failed\n");
 	of_node_put(np);
 	kfree(name);
 
-	np = of_find_node_by_path("/testcase-data/missing-path");
-	unittest(!np, "non-existent path returned node %pOF\n", np);
+	KUNIT_EXPECT_EQ_MSG(test,
+			    of_find_node_by_path("/testcase-data/missing-path"),
+			    NULL,
+			    "non-existent path returned node %pOF\n", np);
 	of_node_put(np);
 
-	np = of_find_node_by_path("missing-alias");
-	unittest(!np, "non-existent alias returned node %pOF\n", np);
+	KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("missing-alias"), NULL,
+			    "non-existent alias returned node %pOF\n", np);
 	of_node_put(np);
 
-	np = of_find_node_by_path("testcase-alias/missing-path");
-	unittest(!np, "non-existent alias with relative path returned node %pOF\n", np);
+	KUNIT_EXPECT_EQ_MSG(test,
+			    of_find_node_by_path("testcase-alias/missing-path"),
+			    NULL,
+			    "non-existent alias with relative path returned node %pOF\n",
+			    np);
 	of_node_put(np);
 
 	np = of_find_node_opts_by_path("/testcase-data:testoption", &options);
-	unittest(np && !strcmp("testoption", options),
-		 "option path test failed\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+	KUNIT_EXPECT_STREQ_MSG(test, "testoption", options,
+			       "option path test failed\n");
 	of_node_put(np);
 
 	np = of_find_node_opts_by_path("/testcase-data:test/option", &options);
-	unittest(np && !strcmp("test/option", options),
-		 "option path test, subcase #1 failed\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+	KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
+			       "option path test, subcase #1 failed\n");
 	of_node_put(np);
 
 	np = of_find_node_opts_by_path("/testcase-data/testcase-device1:test/option", &options);
-	unittest(np && !strcmp("test/option", options),
-		 "option path test, subcase #2 failed\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+	KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
+			       "option path test, subcase #2 failed\n");
 	of_node_put(np);
 
 	np = of_find_node_opts_by_path("/testcase-data:testoption", NULL);
-	unittest(np, "NULL option path test failed\n");
+	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(test, np,
+					 "NULL option path test failed\n");
 	of_node_put(np);
 
 	np = of_find_node_opts_by_path("testcase-alias:testaliasoption",
 				       &options);
-	unittest(np && !strcmp("testaliasoption", options),
-		 "option alias path test failed\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+	KUNIT_EXPECT_STREQ_MSG(test, "testaliasoption", options,
+			       "option alias path test failed\n");
 	of_node_put(np);
 
 	np = of_find_node_opts_by_path("testcase-alias:test/alias/option",
 				       &options);
-	unittest(np && !strcmp("test/alias/option", options),
-		 "option alias path test, subcase #1 failed\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+	KUNIT_EXPECT_STREQ_MSG(test, "test/alias/option", options,
+			       "option alias path test, subcase #1 failed\n");
 	of_node_put(np);
 
 	np = of_find_node_opts_by_path("testcase-alias:testaliasoption", NULL);
-	unittest(np, "NULL option alias path test failed\n");
+	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(test, np,
+					 "NULL option alias path test failed\n");
 	of_node_put(np);
 
 	options = "testoption";
 	np = of_find_node_opts_by_path("testcase-alias", &options);
-	unittest(np && !options, "option clearing test failed\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+	KUNIT_EXPECT_EQ_MSG(test, options, NULL,
+			    "option clearing test failed\n");
 	of_node_put(np);
 
 	options = "testoption";
 	np = of_find_node_opts_by_path("/", &options);
-	unittest(np && !options, "option clearing root node test failed\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+	KUNIT_EXPECT_EQ_MSG(test, options, NULL,
+			    "option clearing root node test failed\n");
 	of_node_put(np);
 }
 
-static void __init of_unittest_dynamic(void)
+static void of_unittest_dynamic(struct kunit *test)
 {
 	struct device_node *np;
 	struct property *prop;
 
 	np = of_find_node_by_path("/testcase-data");
-	if (!np) {
-		pr_err("missing testcase data\n");
-		return;
-	}
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
 
 	/* Array of 4 properties for the purpose of testing */
 	prop = kcalloc(4, sizeof(*prop), GFP_KERNEL);
-	if (!prop) {
-		unittest(0, "kzalloc() failed\n");
-		return;
-	}
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, prop);
 
 	/* Add a new property - should pass*/
 	prop->name = "new-property";
 	prop->value = "new-property-data";
 	prop->length = strlen(prop->value) + 1;
-	unittest(of_add_property(np, prop) == 0, "Adding a new property failed\n");
+	KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop), 0,
+			    "Adding a new property failed\n");
 
 	/* Try to add an existing property - should fail */
 	prop++;
 	prop->name = "new-property";
 	prop->value = "new-property-data-should-fail";
 	prop->length = strlen(prop->value) + 1;
-	unittest(of_add_property(np, prop) != 0,
-		 "Adding an existing property should have failed\n");
+	KUNIT_EXPECT_NE_MSG(test, of_add_property(np, prop), 0,
+			    "Adding an existing property should have failed\n");
 
 	/* Try to modify an existing property - should pass */
 	prop->value = "modify-property-data-should-pass";
 	prop->length = strlen(prop->value) + 1;
-	unittest(of_update_property(np, prop) == 0,
-		 "Updating an existing property should have passed\n");
+	KUNIT_EXPECT_EQ_MSG(test, of_update_property(np, prop), 0,
+			    "Updating an existing property should have passed\n");
 
 	/* Try to modify non-existent property - should pass*/
 	prop++;
 	prop->name = "modify-property";
 	prop->value = "modify-missing-property-data-should-pass";
 	prop->length = strlen(prop->value) + 1;
-	unittest(of_update_property(np, prop) == 0,
-		 "Updating a missing property should have passed\n");
+	KUNIT_EXPECT_EQ_MSG(test, of_update_property(np, prop), 0,
+			    "Updating a missing property should have passed\n");
 
 	/* Remove property - should pass */
-	unittest(of_remove_property(np, prop) == 0,
-		 "Removing a property should have passed\n");
+	KUNIT_EXPECT_EQ_MSG(test, of_remove_property(np, prop), 0,
+			    "Removing a property should have passed\n");
 
 	/* Adding very large property - should pass */
 	prop++;
 	prop->name = "large-property-PAGE_SIZEx8";
 	prop->length = PAGE_SIZE * 8;
 	prop->value = kzalloc(prop->length, GFP_KERNEL);
-	unittest(prop->value != NULL, "Unable to allocate large buffer\n");
-	if (prop->value)
-		unittest(of_add_property(np, prop) == 0,
-			 "Adding a large property should have passed\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, prop->value);
+	KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop), 0,
+			    "Adding a large property should have passed\n");
 }
 
-static int __init of_unittest_check_node_linkage(struct device_node *np)
+static int of_unittest_check_node_linkage(struct device_node *np)
 {
 	struct device_node *child;
 	int count = 0, rc;
@@ -230,27 +231,29 @@ static int __init of_unittest_check_node_linkage(struct device_node *np)
 	return rc;
 }
 
-static void __init of_unittest_check_tree_linkage(void)
+static void of_unittest_check_tree_linkage(struct kunit *test)
 {
 	struct device_node *np;
 	int allnode_count = 0, child_count;
 
-	if (!of_root)
-		return;
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, of_root);
 
 	for_each_of_allnodes(np)
 		allnode_count++;
 	child_count = of_unittest_check_node_linkage(of_root);
 
-	unittest(child_count > 0, "Device node data structure is corrupted\n");
-	unittest(child_count == allnode_count,
-		 "allnodes list size (%i) doesn't match sibling lists size (%i)\n",
-		 allnode_count, child_count);
+	KUNIT_EXPECT_GT_MSG(test, child_count, 0,
+			    "Device node data structure is corrupted\n");
+	KUNIT_EXPECT_EQ_MSG(test, child_count, allnode_count,
+			    "allnodes list size (%i) doesn't match sibling lists size (%i)\n",
+			    allnode_count, child_count);
 	pr_debug("allnodes list size (%i); sibling lists size (%i)\n", allnode_count, child_count);
 }
 
-static void __init of_unittest_printf_one(struct device_node *np, const char *fmt,
-					  const char *expected)
+static void of_unittest_printf_one(struct kunit *test,
+				   struct device_node *np,
+				   const char *fmt,
+				   const char *expected)
 {
 	unsigned char *buf;
 	int buf_size;
@@ -266,9 +269,12 @@ static void __init of_unittest_printf_one(struct device_node *np, const char *fm
 	size = snprintf(buf, buf_size - 2, fmt, np);
 
 	/* use strcmp() instead of strncmp() here to be absolutely sure strings match */
-	unittest((strcmp(buf, expected) == 0) && (buf[size+1] == 0xff),
-		"sprintf failed; fmt='%s' expected='%s' rslt='%s'\n",
-		fmt, expected, buf);
+	KUNIT_EXPECT_STREQ_MSG(test, buf, expected,
+			       "sprintf failed; fmt='%s' expected='%s' rslt='%s'\n",
+			       fmt, expected, buf);
+	KUNIT_EXPECT_EQ_MSG(test, buf[size+1], 0xff,
+			    "sprintf failed; fmt='%s' expected='%s' rslt='%s'\n",
+			    fmt, expected, buf);
 
 	/* Make sure length limits work */
 	size++;
@@ -276,40 +282,43 @@ static void __init of_unittest_printf_one(struct device_node *np, const char *fm
 		/* Clear the buffer, and make sure it works correctly still */
 		memset(buf, 0xff, buf_size);
 		snprintf(buf, size+1, fmt, np);
-		unittest(strncmp(buf, expected, size) == 0 && (buf[size+1] == 0xff),
-			"snprintf failed; size=%i fmt='%s' expected='%s' rslt='%s'\n",
-			size, fmt, expected, buf);
+		KUNIT_EXPECT_STREQ_MSG(test, buf, expected,
+				       "snprintf failed; size=%i fmt='%s' expected='%s' rslt='%s'\n",
+				       size, fmt, expected, buf);
+		KUNIT_EXPECT_EQ_MSG(test, buf[size+1], 0xff,
+				    "snprintf failed; size=%i fmt='%s' expected='%s' rslt='%s'\n",
+				    size, fmt, expected, buf);
 	}
 	kfree(buf);
 }
 
-static void __init of_unittest_printf(void)
+static void of_unittest_printf(struct kunit *test)
 {
 	struct device_node *np;
 	const char *full_name = "/testcase-data/platform-tests/test-device@1/dev@100";
 	char phandle_str[16] = "";
 
 	np = of_find_node_by_path(full_name);
-	if (!np) {
-		unittest(np, "testcase data missing\n");
-		return;
-	}
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
 
 	num_to_str(phandle_str, sizeof(phandle_str), np->phandle, 0);
 
-	of_unittest_printf_one(np, "%pOF",  full_name);
-	of_unittest_printf_one(np, "%pOFf", full_name);
-	of_unittest_printf_one(np, "%pOFp", phandle_str);
-	of_unittest_printf_one(np, "%pOFP", "dev@100");
-	of_unittest_printf_one(np, "ABC %pOFP ABC", "ABC dev@100 ABC");
-	of_unittest_printf_one(np, "%10pOFP", "   dev@100");
-	of_unittest_printf_one(np, "%-10pOFP", "dev@100   ");
-	of_unittest_printf_one(of_root, "%pOFP", "/");
-	of_unittest_printf_one(np, "%pOFF", "----");
-	of_unittest_printf_one(np, "%pOFPF", "dev@100:----");
-	of_unittest_printf_one(np, "%pOFPFPc", "dev@100:----:dev@100:test-sub-device");
-	of_unittest_printf_one(np, "%pOFc", "test-sub-device");
-	of_unittest_printf_one(np, "%pOFC",
+	of_unittest_printf_one(test, np, "%pOF",  full_name);
+	of_unittest_printf_one(test, np, "%pOFf", full_name);
+	of_unittest_printf_one(test, np, "%pOFp", phandle_str);
+	of_unittest_printf_one(test, np, "%pOFP", "dev@100");
+	of_unittest_printf_one(test, np, "ABC %pOFP ABC", "ABC dev@100 ABC");
+	of_unittest_printf_one(test, np, "%10pOFP", "   dev@100");
+	of_unittest_printf_one(test, np, "%-10pOFP", "dev@100   ");
+	of_unittest_printf_one(test, of_root, "%pOFP", "/");
+	of_unittest_printf_one(test, np, "%pOFF", "----");
+	of_unittest_printf_one(test, np, "%pOFPF", "dev@100:----");
+	of_unittest_printf_one(test,
+			       np,
+			       "%pOFPFPc",
+			       "dev@100:----:dev@100:test-sub-device");
+	of_unittest_printf_one(test, np, "%pOFc", "test-sub-device");
+	of_unittest_printf_one(test, np, "%pOFC",
 			"\"test-sub-device\",\"test-compat2\",\"test-compat3\"");
 }
 
@@ -319,7 +328,7 @@ struct node_hash {
 };
 
 static DEFINE_HASHTABLE(phandle_ht, 8);
-static void __init of_unittest_check_phandles(void)
+static void of_unittest_check_phandles(struct kunit *test)
 {
 	struct device_node *np;
 	struct node_hash *nh;
@@ -331,24 +340,25 @@ static void __init of_unittest_check_phandles(void)
 			continue;
 
 		hash_for_each_possible(phandle_ht, nh, node, np->phandle) {
+			KUNIT_EXPECT_NE_MSG(test, nh->np->phandle, np->phandle,
+					    "Duplicate phandle! %i used by %pOF and %pOF\n",
+					    np->phandle, nh->np, np);
 			if (nh->np->phandle == np->phandle) {
-				pr_info("Duplicate phandle! %i used by %pOF and %pOF\n",
-					np->phandle, nh->np, np);
 				dup_count++;
 				break;
 			}
 		}
 
 		nh = kzalloc(sizeof(*nh), GFP_KERNEL);
-		if (WARN_ON(!nh))
-			return;
+		KUNIT_ASSERT_NOT_ERR_OR_NULL(test, nh);
 
 		nh->np = np;
 		hash_add(phandle_ht, &nh->node, np->phandle);
 		phandle_count++;
 	}
-	unittest(dup_count == 0, "Found %i duplicates in %i phandles\n",
-		 dup_count, phandle_count);
+	KUNIT_EXPECT_EQ_MSG(test, dup_count, 0,
+			    "Found %i duplicates in %i phandles\n",
+			    dup_count, phandle_count);
 
 	/* Clean up */
 	hash_for_each_safe(phandle_ht, i, tmp, nh, node) {
@@ -357,20 +367,22 @@ static void __init of_unittest_check_phandles(void)
 	}
 }
 
-static void __init of_unittest_parse_phandle_with_args(void)
+static void of_unittest_parse_phandle_with_args(struct kunit *test)
 {
 	struct device_node *np;
 	struct of_phandle_args args;
-	int i, rc;
+	int i, rc = 0;
 
 	np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
-	if (!np) {
-		pr_err("missing testcase data\n");
-		return;
-	}
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
 
-	rc = of_count_phandle_with_args(np, "phandle-list", "#phandle-cells");
-	unittest(rc == 7, "of_count_phandle_with_args() returned %i, expected 7\n", rc);
+	KUNIT_EXPECT_EQ_MSG(test,
+			    of_count_phandle_with_args(np,
+						       "phandle-list",
+						       "#phandle-cells"),
+			    7,
+			    "of_count_phandle_with_args() did not return 7\n");
 
 	for (i = 0; i < 8; i++) {
 		bool passed = true;
@@ -423,81 +435,90 @@ static void __init of_unittest_parse_phandle_with_args(void)
 			passed = false;
 		}
 
-		unittest(passed, "index %i - data error on node %pOF rc=%i\n",
-			 i, args.np, rc);
+		KUNIT_EXPECT_TRUE_MSG(test, passed,
+				      "index %i - data error on node %pOF rc=%i\n",
+				      i, args.np, rc);
 	}
 
 	/* Check for missing list property */
-	rc = of_parse_phandle_with_args(np, "phandle-list-missing",
-					"#phandle-cells", 0, &args);
-	unittest(rc == -ENOENT, "expected:%i got:%i\n", -ENOENT, rc);
-	rc = of_count_phandle_with_args(np, "phandle-list-missing",
-					"#phandle-cells");
-	unittest(rc == -ENOENT, "expected:%i got:%i\n", -ENOENT, rc);
+	KUNIT_EXPECT_EQ(test,
+			of_parse_phandle_with_args(np,
+						   "phandle-list-missing",
+						   "#phandle-cells",
+						   0, &args),
+			-ENOENT);
+	KUNIT_EXPECT_EQ(test,
+			of_count_phandle_with_args(np,
+						   "phandle-list-missing",
+						   "#phandle-cells"),
+			-ENOENT);
 
 	/* Check for missing cells property */
-	rc = of_parse_phandle_with_args(np, "phandle-list",
-					"#phandle-cells-missing", 0, &args);
-	unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
-	rc = of_count_phandle_with_args(np, "phandle-list",
-					"#phandle-cells-missing");
-	unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
+	KUNIT_EXPECT_EQ(test,
+			of_parse_phandle_with_args(np,
+						   "phandle-list",
+						   "#phandle-cells-missing",
+						   0, &args),
+			-EINVAL);
+	KUNIT_EXPECT_EQ(test,
+			of_count_phandle_with_args(np,
+						   "phandle-list",
+						   "#phandle-cells-missing"),
+			-EINVAL);
 
 	/* Check for bad phandle in list */
-	rc = of_parse_phandle_with_args(np, "phandle-list-bad-phandle",
-					"#phandle-cells", 0, &args);
-	unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
-	rc = of_count_phandle_with_args(np, "phandle-list-bad-phandle",
-					"#phandle-cells");
-	unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
+	KUNIT_EXPECT_EQ(test,
+			of_parse_phandle_with_args(np,
+						   "phandle-list-bad-phandle",
+						   "#phandle-cells",
+						   0, &args),
+			-EINVAL);
+	KUNIT_EXPECT_EQ(test,
+			of_count_phandle_with_args(np,
+						   "phandle-list-bad-phandle",
+						   "#phandle-cells"),
+			-EINVAL);
 
 	/* Check for incorrectly formed argument list */
-	rc = of_parse_phandle_with_args(np, "phandle-list-bad-args",
-					"#phandle-cells", 1, &args);
-	unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
-	rc = of_count_phandle_with_args(np, "phandle-list-bad-args",
-					"#phandle-cells");
-	unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
+	KUNIT_EXPECT_EQ(test,
+			of_parse_phandle_with_args(np,
+						   "phandle-list-bad-args",
+						   "#phandle-cells",
+						   1, &args),
+			-EINVAL);
+	KUNIT_EXPECT_EQ(test,
+			of_count_phandle_with_args(np,
+						   "phandle-list-bad-args",
+						   "#phandle-cells"),
+			-EINVAL);
 }
 
-static void __init of_unittest_parse_phandle_with_args_map(void)
+static void of_unittest_parse_phandle_with_args_map(struct kunit *test)
 {
 	struct device_node *np, *p0, *p1, *p2, *p3;
 	struct of_phandle_args args;
 	int i, rc;
 
 	np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-b");
-	if (!np) {
-		pr_err("missing testcase data\n");
-		return;
-	}
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
 
 	p0 = of_find_node_by_path("/testcase-data/phandle-tests/provider0");
-	if (!p0) {
-		pr_err("missing testcase data\n");
-		return;
-	}
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, p0);
 
 	p1 = of_find_node_by_path("/testcase-data/phandle-tests/provider1");
-	if (!p1) {
-		pr_err("missing testcase data\n");
-		return;
-	}
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, p1);
 
 	p2 = of_find_node_by_path("/testcase-data/phandle-tests/provider2");
-	if (!p2) {
-		pr_err("missing testcase data\n");
-		return;
-	}
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, p2);
 
 	p3 = of_find_node_by_path("/testcase-data/phandle-tests/provider3");
-	if (!p3) {
-		pr_err("missing testcase data\n");
-		return;
-	}
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, p3);
 
-	rc = of_count_phandle_with_args(np, "phandle-list", "#phandle-cells");
-	unittest(rc == 7, "of_count_phandle_with_args() returned %i, expected 7\n", rc);
+	KUNIT_EXPECT_EQ(test,
+		       of_count_phandle_with_args(np,
+						  "phandle-list",
+						  "#phandle-cells"),
+		       7);
 
 	for (i = 0; i < 8; i++) {
 		bool passed = true;
@@ -554,117 +575,214 @@ static void __init of_unittest_parse_phandle_with_args_map(void)
 			passed = false;
 		}
 
-		unittest(passed, "index %i - data error on node %s rc=%i\n",
-			 i, args.np->full_name, rc);
+		KUNIT_EXPECT_TRUE_MSG(test, passed,
+				      "index %i - data error on node %s rc=%i\n",
+				      i, args.np->full_name, rc);
 	}
 
 	/* Check for missing list property */
-	rc = of_parse_phandle_with_args_map(np, "phandle-list-missing",
-					    "phandle", 0, &args);
-	unittest(rc == -ENOENT, "expected:%i got:%i\n", -ENOENT, rc);
+	KUNIT_EXPECT_EQ(test,
+			of_parse_phandle_with_args_map(np,
+						       "phandle-list-missing",
+						       "phandle",
+						       0, &args),
+			-ENOENT);
 
 	/* Check for missing cells,map,mask property */
-	rc = of_parse_phandle_with_args_map(np, "phandle-list",
-					    "phandle-missing", 0, &args);
-	unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
+	KUNIT_EXPECT_EQ(test,
+			of_parse_phandle_with_args_map(np,
+						       "phandle-list",
+						       "phandle-missing",
+						       0, &args),
+			-EINVAL);
 
 	/* Check for bad phandle in list */
-	rc = of_parse_phandle_with_args_map(np, "phandle-list-bad-phandle",
-					    "phandle", 0, &args);
-	unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
+	KUNIT_EXPECT_EQ(test,
+			of_parse_phandle_with_args_map(np,
+						       "phandle-list-bad-phandle",
+						       "phandle",
+						       0, &args),
+			-EINVAL);
 
 	/* Check for incorrectly formed argument list */
-	rc = of_parse_phandle_with_args_map(np, "phandle-list-bad-args",
-					    "phandle", 1, &args);
-	unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
+	KUNIT_EXPECT_EQ(test,
+			of_parse_phandle_with_args_map(np,
+						       "phandle-list-bad-args",
+						       "phandle",
+						       1, &args),
+			-EINVAL);
 }
 
-static void __init of_unittest_property_string(void)
+static void of_unittest_property_string(struct kunit *test)
 {
 	const char *strings[4];
 	struct device_node *np;
 	int rc;
 
 	np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
-	if (!np) {
-		pr_err("No testcase data in device tree\n");
-		return;
-	}
-
-	rc = of_property_match_string(np, "phandle-list-names", "first");
-	unittest(rc == 0, "first expected:0 got:%i\n", rc);
-	rc = of_property_match_string(np, "phandle-list-names", "second");
-	unittest(rc == 1, "second expected:1 got:%i\n", rc);
-	rc = of_property_match_string(np, "phandle-list-names", "third");
-	unittest(rc == 2, "third expected:2 got:%i\n", rc);
-	rc = of_property_match_string(np, "phandle-list-names", "fourth");
-	unittest(rc == -ENODATA, "unmatched string; rc=%i\n", rc);
-	rc = of_property_match_string(np, "missing-property", "blah");
-	unittest(rc == -EINVAL, "missing property; rc=%i\n", rc);
-	rc = of_property_match_string(np, "empty-property", "blah");
-	unittest(rc == -ENODATA, "empty property; rc=%i\n", rc);
-	rc = of_property_match_string(np, "unterminated-string", "blah");
-	unittest(rc == -EILSEQ, "unterminated string; rc=%i\n", rc);
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+
+	KUNIT_EXPECT_EQ(test,
+			of_property_match_string(np,
+						 "phandle-list-names",
+						 "first"),
+			0);
+	KUNIT_EXPECT_EQ(test,
+			of_property_match_string(np,
+						 "phandle-list-names",
+						 "second"),
+			1);
+	KUNIT_EXPECT_EQ(test,
+			of_property_match_string(np,
+						 "phandle-list-names",
+						 "third"),
+			2);
+	KUNIT_EXPECT_EQ_MSG(test,
+			    of_property_match_string(np,
+						     "phandle-list-names",
+						     "fourth"),
+			    -ENODATA,
+			    "unmatched string");
+	KUNIT_EXPECT_EQ_MSG(test,
+			    of_property_match_string(np,
+						     "missing-property",
+						     "blah"),
+			    -EINVAL,
+			    "missing property");
+	KUNIT_EXPECT_EQ_MSG(test,
+			    of_property_match_string(np,
+						     "empty-property",
+						     "blah"),
+			    -ENODATA,
+			    "empty property");
+	KUNIT_EXPECT_EQ_MSG(test,
+			    of_property_match_string(np,
+						     "unterminated-string",
+						     "blah"),
+			    -EILSEQ,
+			    "unterminated string");
 
 	/* of_property_count_strings() tests */
-	rc = of_property_count_strings(np, "string-property");
-	unittest(rc == 1, "Incorrect string count; rc=%i\n", rc);
-	rc = of_property_count_strings(np, "phandle-list-names");
-	unittest(rc == 3, "Incorrect string count; rc=%i\n", rc);
-	rc = of_property_count_strings(np, "unterminated-string");
-	unittest(rc == -EILSEQ, "unterminated string; rc=%i\n", rc);
-	rc = of_property_count_strings(np, "unterminated-string-list");
-	unittest(rc == -EILSEQ, "unterminated string array; rc=%i\n", rc);
+	KUNIT_EXPECT_EQ(test,
+			of_property_count_strings(np, "string-property"),
+			1);
+	KUNIT_EXPECT_EQ(test,
+			of_property_count_strings(np, "phandle-list-names"),
+			3);
+	KUNIT_EXPECT_EQ_MSG(test,
+			    of_property_count_strings(np,
+						      "unterminated-string"),
+			    -EILSEQ,
+			    "unterminated string");
+	KUNIT_EXPECT_EQ_MSG(test,
+			    of_property_count_strings(
+					    np,
+					    "unterminated-string-list"),
+			    -EILSEQ,
+			    "unterminated string array");
 
 	/* of_property_read_string_index() tests */
 	rc = of_property_read_string_index(np, "string-property", 0, strings);
-	unittest(rc == 0 && !strcmp(strings[0], "foobar"), "of_property_read_string_index() failure; rc=%i\n", rc);
+	KUNIT_ASSERT_EQ(test, rc, 0);
+	KUNIT_EXPECT_STREQ(test, strings[0], "foobar");
 	strings[0] = NULL;
 	rc = of_property_read_string_index(np, "string-property", 1, strings);
-	unittest(rc == -ENODATA && strings[0] == NULL, "of_property_read_string_index() failure; rc=%i\n", rc);
-	rc = of_property_read_string_index(np, "phandle-list-names", 0, strings);
-	unittest(rc == 0 && !strcmp(strings[0], "first"), "of_property_read_string_index() failure; rc=%i\n", rc);
-	rc = of_property_read_string_index(np, "phandle-list-names", 1, strings);
-	unittest(rc == 0 && !strcmp(strings[0], "second"), "of_property_read_string_index() failure; rc=%i\n", rc);
-	rc = of_property_read_string_index(np, "phandle-list-names", 2, strings);
-	unittest(rc == 0 && !strcmp(strings[0], "third"), "of_property_read_string_index() failure; rc=%i\n", rc);
+	KUNIT_EXPECT_EQ(test, rc, -ENODATA);
+	KUNIT_EXPECT_EQ(test, strings[0], NULL);
+	rc = of_property_read_string_index(np,
+					   "phandle-list-names",
+					   0,
+					   strings);
+	KUNIT_ASSERT_EQ(test, rc, 0);
+	KUNIT_EXPECT_STREQ(test, strings[0], "first");
+	rc = of_property_read_string_index(np,
+					   "phandle-list-names",
+					   1,
+					   strings);
+	KUNIT_ASSERT_EQ(test, rc, 0);
+	KUNIT_EXPECT_STREQ(test, strings[0], "second");
+	rc = of_property_read_string_index(np,
+					   "phandle-list-names",
+					   2,
+					   strings);
+	KUNIT_ASSERT_EQ(test, rc, 0);
+	KUNIT_EXPECT_STREQ(test, strings[0], "third");
 	strings[0] = NULL;
-	rc = of_property_read_string_index(np, "phandle-list-names", 3, strings);
-	unittest(rc == -ENODATA && strings[0] == NULL, "of_property_read_string_index() failure; rc=%i\n", rc);
+	rc = of_property_read_string_index(np,
+					   "phandle-list-names", 3, strings);
+	KUNIT_EXPECT_EQ(test, rc, -ENODATA);
+	KUNIT_EXPECT_EQ(test, strings[0], NULL);
 	strings[0] = NULL;
-	rc = of_property_read_string_index(np, "unterminated-string", 0, strings);
-	unittest(rc == -EILSEQ && strings[0] == NULL, "of_property_read_string_index() failure; rc=%i\n", rc);
-	rc = of_property_read_string_index(np, "unterminated-string-list", 0, strings);
-	unittest(rc == 0 && !strcmp(strings[0], "first"), "of_property_read_string_index() failure; rc=%i\n", rc);
+	rc = of_property_read_string_index(np,
+					   "unterminated-string",
+					   0,
+					   strings);
+	KUNIT_EXPECT_EQ(test, rc, -EILSEQ);
+	KUNIT_EXPECT_EQ(test, strings[0], NULL);
+	rc = of_property_read_string_index(np,
+					   "unterminated-string-list",
+					   0,
+					   strings);
+	KUNIT_ASSERT_EQ(test, rc, 0);
+	KUNIT_EXPECT_STREQ(test, strings[0], "first");
 	strings[0] = NULL;
-	rc = of_property_read_string_index(np, "unterminated-string-list", 2, strings); /* should fail */
-	unittest(rc == -EILSEQ && strings[0] == NULL, "of_property_read_string_index() failure; rc=%i\n", rc);
-	strings[1] = NULL;
+	rc = of_property_read_string_index(np,
+					   "unterminated-string-list",
+					   2,
+					   strings); /* should fail */
+	KUNIT_EXPECT_EQ(test, rc, -EILSEQ);
+	KUNIT_EXPECT_EQ(test, strings[0], NULL);
 
 	/* of_property_read_string_array() tests */
-	rc = of_property_read_string_array(np, "string-property", strings, 4);
-	unittest(rc == 1, "Incorrect string count; rc=%i\n", rc);
-	rc = of_property_read_string_array(np, "phandle-list-names", strings, 4);
-	unittest(rc == 3, "Incorrect string count; rc=%i\n", rc);
-	rc = of_property_read_string_array(np, "unterminated-string", strings, 4);
-	unittest(rc == -EILSEQ, "unterminated string; rc=%i\n", rc);
+	strings[1] = NULL;
+	KUNIT_EXPECT_EQ(test,
+			of_property_read_string_array(np,
+						      "string-property",
+						      strings, 4),
+			1);
+	KUNIT_EXPECT_EQ(test,
+			of_property_read_string_array(np,
+						      "phandle-list-names",
+						      strings, 4),
+			3);
+	KUNIT_EXPECT_EQ_MSG(test,
+			    of_property_read_string_array(np,
+							  "unterminated-string",
+							  strings, 4),
+			    -EILSEQ,
+			    "unterminated string");
 	/* -- An incorrectly formed string should cause a failure */
-	rc = of_property_read_string_array(np, "unterminated-string-list", strings, 4);
-	unittest(rc == -EILSEQ, "unterminated string array; rc=%i\n", rc);
+	KUNIT_EXPECT_EQ_MSG(test,
+			    of_property_read_string_array(
+					    np,
+					    "unterminated-string-list",
+					    strings,
+					    4),
+			    -EILSEQ,
+			    "unterminated string array");
 	/* -- parsing the correctly formed strings should still work: */
 	strings[2] = NULL;
-	rc = of_property_read_string_array(np, "unterminated-string-list", strings, 2);
-	unittest(rc == 2 && strings[2] == NULL, "of_property_read_string_array() failure; rc=%i\n", rc);
+	rc = of_property_read_string_array(np,
+					   "unterminated-string-list",
+					   strings,
+					   2);
+	KUNIT_EXPECT_EQ(test, rc, 2);
+	KUNIT_EXPECT_EQ(test, strings[2], NULL);
 	strings[1] = NULL;
-	rc = of_property_read_string_array(np, "phandle-list-names", strings, 1);
-	unittest(rc == 1 && strings[1] == NULL, "Overwrote end of string array; rc=%i, str='%s'\n", rc, strings[1]);
+	rc = of_property_read_string_array(np,
+					   "phandle-list-names",
+					   strings,
+					   1);
+	KUNIT_ASSERT_EQ(test, rc, 1);
+	KUNIT_EXPECT_EQ_MSG(test, strings[1], NULL,
+			    "Overwrote end of string array");
 }
 
 #define propcmp(p1, p2) (((p1)->length == (p2)->length) && \
 			(p1)->value && (p2)->value && \
 			!memcmp((p1)->value, (p2)->value, (p1)->length) && \
 			!strcmp((p1)->name, (p2)->name))
-static void __init of_unittest_property_copy(void)
+static void of_unittest_property_copy(struct kunit *test)
 {
 #ifdef CONFIG_OF_DYNAMIC
 	struct property p1 = { .name = "p1", .length = 0, .value = "" };
@@ -672,20 +790,24 @@ static void __init of_unittest_property_copy(void)
 	struct property *new;
 
 	new = __of_prop_dup(&p1, GFP_KERNEL);
-	unittest(new && propcmp(&p1, new), "empty property didn't copy correctly\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, new);
+	KUNIT_EXPECT_TRUE_MSG(test, propcmp(&p1, new),
+			      "empty property didn't copy correctly");
 	kfree(new->value);
 	kfree(new->name);
 	kfree(new);
 
 	new = __of_prop_dup(&p2, GFP_KERNEL);
-	unittest(new && propcmp(&p2, new), "non-empty property didn't copy correctly\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, new);
+	KUNIT_EXPECT_TRUE_MSG(test, propcmp(&p2, new),
+			      "non-empty property didn't copy correctly");
 	kfree(new->value);
 	kfree(new->name);
 	kfree(new);
 #endif
 }
 
-static void __init of_unittest_changeset(void)
+static void of_unittest_changeset(struct kunit *test)
 {
 #ifdef CONFIG_OF_DYNAMIC
 	struct property *ppadd, padd = { .name = "prop-add", .length = 1, .value = "" };
@@ -698,32 +820,32 @@ static void __init of_unittest_changeset(void)
 	struct of_changeset chgset;
 
 	n1 = __of_node_dup(NULL, "n1");
-	unittest(n1, "testcase setup failure\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, n1);
 
 	n2 = __of_node_dup(NULL, "n2");
-	unittest(n2, "testcase setup failure\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, n2);
 
 	n21 = __of_node_dup(NULL, "n21");
-	unittest(n21, "testcase setup failure %p\n", n21);
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, n21);
 
 	nchangeset = of_find_node_by_path("/testcase-data/changeset");
 	nremove = of_get_child_by_name(nchangeset, "node-remove");
-	unittest(nremove, "testcase setup failure\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, nremove);
 
 	ppadd = __of_prop_dup(&padd, GFP_KERNEL);
-	unittest(ppadd, "testcase setup failure\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ppadd);
 
 	ppname_n1  = __of_prop_dup(&pname_n1, GFP_KERNEL);
-	unittest(ppname_n1, "testcase setup failure\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ppname_n1);
 
 	ppname_n2  = __of_prop_dup(&pname_n2, GFP_KERNEL);
-	unittest(ppname_n2, "testcase setup failure\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ppname_n2);
 
 	ppname_n21 = __of_prop_dup(&pname_n21, GFP_KERNEL);
-	unittest(ppname_n21, "testcase setup failure\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ppname_n21);
 
 	ppupdate = __of_prop_dup(&pupdate, GFP_KERNEL);
-	unittest(ppupdate, "testcase setup failure\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ppupdate);
 
 	parent = nchangeset;
 	n1->parent = parent;
@@ -731,54 +853,82 @@ static void __init of_unittest_changeset(void)
 	n21->parent = n2;
 
 	ppremove = of_find_property(parent, "prop-remove", NULL);
-	unittest(ppremove, "failed to find removal prop");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ppremove);
 
 	of_changeset_init(&chgset);
 
-	unittest(!of_changeset_attach_node(&chgset, n1), "fail attach n1\n");
-	unittest(!of_changeset_add_property(&chgset, n1, ppname_n1), "fail add prop name\n");
-
-	unittest(!of_changeset_attach_node(&chgset, n2), "fail attach n2\n");
-	unittest(!of_changeset_add_property(&chgset, n2, ppname_n2), "fail add prop name\n");
-
-	unittest(!of_changeset_detach_node(&chgset, nremove), "fail remove node\n");
-	unittest(!of_changeset_add_property(&chgset, n21, ppname_n21), "fail add prop name\n");
-
-	unittest(!of_changeset_attach_node(&chgset, n21), "fail attach n21\n");
-
-	unittest(!of_changeset_add_property(&chgset, parent, ppadd), "fail add prop prop-add\n");
-	unittest(!of_changeset_update_property(&chgset, parent, ppupdate), "fail update prop\n");
-	unittest(!of_changeset_remove_property(&chgset, parent, ppremove), "fail remove prop\n");
-
-	unittest(!of_changeset_apply(&chgset), "apply failed\n");
+	KUNIT_EXPECT_FALSE_MSG(test, of_changeset_attach_node(&chgset, n1),
+			       "fail attach n1\n");
+	KUNIT_EXPECT_FALSE_MSG(test,
+			       of_changeset_add_property(&chgset,
+							 n1,
+							 ppname_n1),
+			       "fail add prop name\n");
+
+	KUNIT_EXPECT_FALSE_MSG(test, of_changeset_attach_node(&chgset, n2),
+			       "fail attach n2\n");
+	KUNIT_EXPECT_FALSE_MSG(test,
+			       of_changeset_add_property(&chgset,
+							 n2,
+							 ppname_n2),
+			       "fail add prop name\n");
+
+	KUNIT_EXPECT_FALSE_MSG(test, of_changeset_detach_node(&chgset, nremove),
+			       "fail remove node\n");
+	KUNIT_EXPECT_FALSE_MSG(test,
+			       of_changeset_add_property(&chgset,
+							 n21,
+							 ppname_n21),
+			       "fail add prop name\n");
+
+	KUNIT_EXPECT_FALSE_MSG(test, of_changeset_attach_node(&chgset, n21),
+			       "fail attach n21\n");
+
+	KUNIT_EXPECT_FALSE_MSG(test,
+			       of_changeset_add_property(&chgset,
+							 parent,
+							 ppadd),
+			       "fail add prop prop-add\n");
+	KUNIT_EXPECT_FALSE_MSG(test,
+			       of_changeset_update_property(&chgset,
+							    parent,
+							    ppupdate),
+			       "fail update prop\n");
+	KUNIT_EXPECT_FALSE_MSG(test,
+			       of_changeset_remove_property(&chgset,
+							    parent,
+							    ppremove),
+			       "fail remove prop\n");
+
+	KUNIT_EXPECT_FALSE_MSG(test, of_changeset_apply(&chgset),
+			       "apply failed\n");
 
 	of_node_put(nchangeset);
 
 	/* Make sure node names are constructed correctly */
-	unittest((np = of_find_node_by_path("/testcase-data/changeset/n2/n21")),
-		 "'%pOF' not added\n", n21);
+	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(
+			test,
+			np = of_find_node_by_path(
+					"/testcase-data/changeset/n2/n21"),
+			"'%pOF' not added\n", n21);
 	of_node_put(np);
 
-	unittest(!of_changeset_revert(&chgset), "revert failed\n");
+	KUNIT_EXPECT_FALSE(test, of_changeset_revert(&chgset));
 
 	of_changeset_destroy(&chgset);
 #endif
 }
 
-static void __init of_unittest_parse_interrupts(void)
+static void of_unittest_parse_interrupts(struct kunit *test)
 {
 	struct device_node *np;
 	struct of_phandle_args args;
 	int i, rc;
 
-	if (of_irq_workarounds & OF_IMAP_OLDWORLD_MAC)
-		return;
+	KUNIT_ASSERT_FALSE(test, of_irq_workarounds & OF_IMAP_OLDWORLD_MAC);
 
 	np = of_find_node_by_path("/testcase-data/interrupts/interrupts0");
-	if (!np) {
-		pr_err("missing testcase data\n");
-		return;
-	}
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
 
 	for (i = 0; i < 4; i++) {
 		bool passed = true;
@@ -790,16 +940,14 @@ static void __init of_unittest_parse_interrupts(void)
 		passed &= (args.args_count == 1);
 		passed &= (args.args[0] == (i + 1));
 
-		unittest(passed, "index %i - data error on node %pOF rc=%i\n",
-			 i, args.np, rc);
+		KUNIT_EXPECT_TRUE_MSG(test, passed,
+				      "index %i - data error on node %pOF rc=%i\n",
+				      i, args.np, rc);
 	}
 	of_node_put(np);
 
 	np = of_find_node_by_path("/testcase-data/interrupts/interrupts1");
-	if (!np) {
-		pr_err("missing testcase data\n");
-		return;
-	}
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
 
 	for (i = 0; i < 4; i++) {
 		bool passed = true;
@@ -836,26 +984,23 @@ static void __init of_unittest_parse_interrupts(void)
 		default:
 			passed = false;
 		}
-		unittest(passed, "index %i - data error on node %pOF rc=%i\n",
-			 i, args.np, rc);
+		KUNIT_EXPECT_TRUE_MSG(test, passed,
+				     "index %i - data error on node %pOF rc=%i\n",
+				     i, args.np, rc);
 	}
 	of_node_put(np);
 }
 
-static void __init of_unittest_parse_interrupts_extended(void)
+static void of_unittest_parse_interrupts_extended(struct kunit *test)
 {
 	struct device_node *np;
 	struct of_phandle_args args;
 	int i, rc;
 
-	if (of_irq_workarounds & OF_IMAP_OLDWORLD_MAC)
-		return;
+	KUNIT_ASSERT_FALSE(test, of_irq_workarounds & OF_IMAP_OLDWORLD_MAC);
 
 	np = of_find_node_by_path("/testcase-data/interrupts/interrupts-extended0");
-	if (!np) {
-		pr_err("missing testcase data\n");
-		return;
-	}
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
 
 	for (i = 0; i < 7; i++) {
 		bool passed = true;
@@ -909,8 +1054,9 @@ static void __init of_unittest_parse_interrupts_extended(void)
 			passed = false;
 		}
 
-		unittest(passed, "index %i - data error on node %pOF rc=%i\n",
-			 i, args.np, rc);
+		KUNIT_EXPECT_TRUE_MSG(test, passed,
+				      "index %i - data error on node %pOF rc=%i\n",
+				      i, args.np, rc);
 	}
 	of_node_put(np);
 }
@@ -950,7 +1096,7 @@ static struct {
 	{ .path = "/testcase-data/match-node/name9", .data = "K", },
 };
 
-static void __init of_unittest_match_node(void)
+static void of_unittest_match_node(struct kunit *test)
 {
 	struct device_node *np;
 	const struct of_device_id *match;
@@ -958,26 +1104,19 @@ static void __init of_unittest_match_node(void)
 
 	for (i = 0; i < ARRAY_SIZE(match_node_tests); i++) {
 		np = of_find_node_by_path(match_node_tests[i].path);
-		if (!np) {
-			unittest(0, "missing testcase node %s\n",
-				match_node_tests[i].path);
-			continue;
-		}
+		KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
 
 		match = of_match_node(match_node_table, np);
-		if (!match) {
-			unittest(0, "%s didn't match anything\n",
-				match_node_tests[i].path);
-			continue;
-		}
+		KUNIT_ASSERT_NOT_ERR_OR_NULL_MSG(test, match,
+						 "%s didn't match anything",
+						 match_node_tests[i].path);
 
-		if (strcmp(match->data, match_node_tests[i].data) != 0) {
-			unittest(0, "%s got wrong match. expected %s, got %s\n",
-				match_node_tests[i].path, match_node_tests[i].data,
-				(const char *)match->data);
-			continue;
-		}
-		unittest(1, "passed");
+		KUNIT_EXPECT_STREQ_MSG(test,
+				       match->data, match_node_tests[i].data,
+				       "%s got wrong match. expected %s, got %s\n",
+				       match_node_tests[i].path,
+				       match_node_tests[i].data,
+				       (const char *)match->data);
 	}
 }
 
@@ -989,9 +1128,9 @@ static struct resource test_bus_res = {
 static const struct platform_device_info test_bus_info = {
 	.name = "unittest-bus",
 };
-static void __init of_unittest_platform_populate(void)
+static void of_unittest_platform_populate(struct kunit *test)
 {
-	int irq, rc;
+	int irq;
 	struct device_node *np, *child, *grandchild;
 	struct platform_device *pdev, *test_bus;
 	const struct of_device_id match[] = {
@@ -1005,32 +1144,27 @@ static void __init of_unittest_platform_populate(void)
 	/* Test that a missing irq domain returns -EPROBE_DEFER */
 	np = of_find_node_by_path("/testcase-data/testcase-device1");
 	pdev = of_find_device_by_node(np);
-	unittest(pdev, "device 1 creation failed\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, pdev);
 
 	if (!(of_irq_workarounds & OF_IMAP_OLDWORLD_MAC)) {
 		irq = platform_get_irq(pdev, 0);
-		unittest(irq == -EPROBE_DEFER,
-			 "device deferred probe failed - %d\n", irq);
+		KUNIT_ASSERT_EQ(test, irq, -EPROBE_DEFER);
 
 		/* Test that a parsing failure does not return -EPROBE_DEFER */
 		np = of_find_node_by_path("/testcase-data/testcase-device2");
 		pdev = of_find_device_by_node(np);
-		unittest(pdev, "device 2 creation failed\n");
+		KUNIT_ASSERT_NOT_ERR_OR_NULL(test, pdev);
 		irq = platform_get_irq(pdev, 0);
-		unittest(irq < 0 && irq != -EPROBE_DEFER,
-			 "device parsing error failed - %d\n", irq);
+		KUNIT_ASSERT_TRUE_MSG(test, irq < 0 && irq != -EPROBE_DEFER,
+				      "device parsing error failed - %d\n",
+				      irq);
 	}
 
 	np = of_find_node_by_path("/testcase-data/platform-tests");
-	unittest(np, "No testcase data in device tree\n");
-	if (!np)
-		return;
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
 
 	test_bus = platform_device_register_full(&test_bus_info);
-	rc = PTR_ERR_OR_ZERO(test_bus);
-	unittest(!rc, "testbus registration failed; rc=%i\n", rc);
-	if (rc)
-		return;
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, test_bus);
 	test_bus->dev.of_node = np;
 
 	/*
@@ -1045,17 +1179,21 @@ static void __init of_unittest_platform_populate(void)
 	of_platform_populate(np, match, NULL, &test_bus->dev);
 	for_each_child_of_node(np, child) {
 		for_each_child_of_node(child, grandchild)
-			unittest(of_find_device_by_node(grandchild),
-				 "Could not create device for node '%s'\n",
-				 grandchild->name);
+			KUNIT_EXPECT_TRUE_MSG(test,
+					      of_find_device_by_node(
+							      grandchild),
+					      "Could not create device for node '%s'\n",
+					      grandchild->name);
 	}
 
 	of_platform_depopulate(&test_bus->dev);
 	for_each_child_of_node(np, child) {
 		for_each_child_of_node(child, grandchild)
-			unittest(!of_find_device_by_node(grandchild),
-				 "device didn't get destroyed '%s'\n",
-				 grandchild->name);
+			KUNIT_EXPECT_FALSE_MSG(test,
+					       of_find_device_by_node(
+							       grandchild),
+					       "device didn't get destroyed '%s'\n",
+					       grandchild->name);
 	}
 
 	platform_device_unregister(test_bus);
@@ -1129,7 +1267,7 @@ static int attach_node_and_children(struct device_node *np)
  *	unittest_data_add - Reads, copies data from
  *	linked tree and attaches it to the live tree
  */
-static int __init unittest_data_add(void)
+static int unittest_data_add(void)
 {
 	void *unittest_data;
 	struct device_node *unittest_data_node, *np;
@@ -1200,7 +1338,7 @@ static int __init unittest_data_add(void)
 }
 
 #ifdef CONFIG_OF_OVERLAY
-static int __init overlay_data_apply(const char *overlay_name, int *overlay_id);
+static int overlay_data_apply(const char *overlay_name, int *overlay_id);
 
 static int unittest_probe(struct platform_device *pdev)
 {
@@ -1429,173 +1567,157 @@ static void of_unittest_destroy_tracked_overlays(void)
 	} while (defers > 0);
 }
 
-static int __init of_unittest_apply_overlay(int overlay_nr, int unittest_nr,
-		int *overlay_id)
+static int of_unittest_apply_overlay(struct kunit *test,
+				     int overlay_nr,
+				     int unittest_nr,
+				     int *overlay_id)
 {
 	const char *overlay_name;
 
 	overlay_name = overlay_name_from_nr(overlay_nr);
 
-	if (!overlay_data_apply(overlay_name, overlay_id)) {
-		unittest(0, "could not apply overlay \"%s\"\n",
-				overlay_name);
-		return -EFAULT;
-	}
+	KUNIT_ASSERT_TRUE_MSG(test,
+			      overlay_data_apply(overlay_name, overlay_id),
+			      "could not apply overlay \"%s\"\n",
+			      overlay_name);
 	of_unittest_track_overlay(*overlay_id);
 
 	return 0;
 }
 
 /* apply an overlay while checking before and after states */
-static int __init of_unittest_apply_overlay_check(int overlay_nr,
+static int of_unittest_apply_overlay_check(struct kunit *test, int overlay_nr,
 		int unittest_nr, int before, int after,
 		enum overlay_type ovtype)
 {
-	int ret, ovcs_id;
+	int ovcs_id;
 
 	/* unittest device must not be in before state */
-	if (of_unittest_device_exists(unittest_nr, ovtype) != before) {
-		unittest(0, "%s with device @\"%s\" %s\n",
-				overlay_name_from_nr(overlay_nr),
-				unittest_path(unittest_nr, ovtype),
-				!before ? "enabled" : "disabled");
-		return -EINVAL;
-	}
+	KUNIT_ASSERT_EQ_MSG(test,
+			    of_unittest_device_exists(unittest_nr, ovtype),
+			    before,
+			    "%s with device @\"%s\" %s\n",
+			    overlay_name_from_nr(overlay_nr),
+			    unittest_path(unittest_nr, ovtype),
+			    !before ? "enabled" : "disabled");
 
 	ovcs_id = 0;
-	ret = of_unittest_apply_overlay(overlay_nr, unittest_nr, &ovcs_id);
-	if (ret != 0) {
-		/* of_unittest_apply_overlay already called unittest() */
-		return ret;
-	}
+	KUNIT_EXPECT_EQ(test,
+			of_unittest_apply_overlay(test,
+						  overlay_nr,
+						  unittest_nr,
+						  &ovcs_id),
+			0);
 
 	/* unittest device must be to set to after state */
-	if (of_unittest_device_exists(unittest_nr, ovtype) != after) {
-		unittest(0, "%s failed to create @\"%s\" %s\n",
-				overlay_name_from_nr(overlay_nr),
-				unittest_path(unittest_nr, ovtype),
-				!after ? "enabled" : "disabled");
-		return -EINVAL;
-	}
+	KUNIT_ASSERT_EQ_MSG(test,
+			    of_unittest_device_exists(unittest_nr, ovtype),
+			    after,
+			    "%s failed to create @\"%s\" %s\n",
+			    overlay_name_from_nr(overlay_nr),
+			    unittest_path(unittest_nr, ovtype),
+			    !after ? "enabled" : "disabled");
 
 	return 0;
 }
 
 /* apply an overlay and then revert it while checking before, after states */
-static int __init of_unittest_apply_revert_overlay_check(int overlay_nr,
+static int of_unittest_apply_revert_overlay_check(
+		struct kunit *test, int overlay_nr,
 		int unittest_nr, int before, int after,
 		enum overlay_type ovtype)
 {
-	int ret, ovcs_id;
+	int ovcs_id;
 
 	/* unittest device must be in before state */
-	if (of_unittest_device_exists(unittest_nr, ovtype) != before) {
-		unittest(0, "%s with device @\"%s\" %s\n",
-				overlay_name_from_nr(overlay_nr),
-				unittest_path(unittest_nr, ovtype),
-				!before ? "enabled" : "disabled");
-		return -EINVAL;
-	}
+	KUNIT_ASSERT_EQ_MSG(test,
+			    of_unittest_device_exists(unittest_nr, ovtype),
+			    before,
+			    "%s with device @\"%s\" %s\n",
+			    overlay_name_from_nr(overlay_nr),
+			    unittest_path(unittest_nr, ovtype),
+			    !before ? "enabled" : "disabled");
 
 	/* apply the overlay */
 	ovcs_id = 0;
-	ret = of_unittest_apply_overlay(overlay_nr, unittest_nr, &ovcs_id);
-	if (ret != 0) {
-		/* of_unittest_apply_overlay already called unittest() */
-		return ret;
-	}
+	KUNIT_ASSERT_EQ(test,
+			of_unittest_apply_overlay(test,
+						  overlay_nr,
+						  unittest_nr,
+						  &ovcs_id),
+			0);
 
 	/* unittest device must be in after state */
-	if (of_unittest_device_exists(unittest_nr, ovtype) != after) {
-		unittest(0, "%s failed to create @\"%s\" %s\n",
-				overlay_name_from_nr(overlay_nr),
-				unittest_path(unittest_nr, ovtype),
-				!after ? "enabled" : "disabled");
-		return -EINVAL;
-	}
-
-	ret = of_overlay_remove(&ovcs_id);
-	if (ret != 0) {
-		unittest(0, "%s failed to be destroyed @\"%s\"\n",
-				overlay_name_from_nr(overlay_nr),
-				unittest_path(unittest_nr, ovtype));
-		return ret;
-	}
+	KUNIT_ASSERT_EQ_MSG(test,
+			    of_unittest_device_exists(unittest_nr, ovtype),
+			    after,
+			    "%s failed to create @\"%s\" %s\n",
+			    overlay_name_from_nr(overlay_nr),
+			    unittest_path(unittest_nr, ovtype),
+			    !after ? "enabled" : "disabled");
+
+	KUNIT_ASSERT_EQ_MSG(test, of_overlay_remove(&ovcs_id), 0,
+			    "%s failed to be destroyed @\"%s\"\n",
+			    overlay_name_from_nr(overlay_nr),
+			    unittest_path(unittest_nr, ovtype));
 
 	/* unittest device must be again in before state */
-	if (of_unittest_device_exists(unittest_nr, PDEV_OVERLAY) != before) {
-		unittest(0, "%s with device @\"%s\" %s\n",
-				overlay_name_from_nr(overlay_nr),
-				unittest_path(unittest_nr, ovtype),
-				!before ? "enabled" : "disabled");
-		return -EINVAL;
-	}
+	KUNIT_ASSERT_EQ_MSG(test,
+			    of_unittest_device_exists(unittest_nr,
+						      PDEV_OVERLAY),
+			    before,
+			    "%s with device @\"%s\" %s\n",
+			    overlay_name_from_nr(overlay_nr),
+			    unittest_path(unittest_nr, ovtype),
+			    !before ? "enabled" : "disabled");
 
 	return 0;
 }
 
 /* test activation of device */
-static void __init of_unittest_overlay_0(void)
+static void of_unittest_overlay_0(struct kunit *test)
 {
 	/* device should enable */
-	if (of_unittest_apply_overlay_check(0, 0, 0, 1, PDEV_OVERLAY))
-		return;
-
-	unittest(1, "overlay test %d passed\n", 0);
+	of_unittest_apply_overlay_check(test, 0, 0, 0, 1, PDEV_OVERLAY);
 }
 
 /* test deactivation of device */
-static void __init of_unittest_overlay_1(void)
+static void of_unittest_overlay_1(struct kunit *test)
 {
 	/* device should disable */
-	if (of_unittest_apply_overlay_check(1, 1, 1, 0, PDEV_OVERLAY))
-		return;
-
-	unittest(1, "overlay test %d passed\n", 1);
+	of_unittest_apply_overlay_check(test, 1, 1, 1, 0, PDEV_OVERLAY);
 }
 
 /* test activation of device */
-static void __init of_unittest_overlay_2(void)
+static void of_unittest_overlay_2(struct kunit *test)
 {
 	/* device should enable */
-	if (of_unittest_apply_overlay_check(2, 2, 0, 1, PDEV_OVERLAY))
-		return;
-
-	unittest(1, "overlay test %d passed\n", 2);
+	of_unittest_apply_overlay_check(test, 2, 2, 0, 1, PDEV_OVERLAY);
 }
 
 /* test deactivation of device */
-static void __init of_unittest_overlay_3(void)
+static void of_unittest_overlay_3(struct kunit *test)
 {
 	/* device should disable */
-	if (of_unittest_apply_overlay_check(3, 3, 1, 0, PDEV_OVERLAY))
-		return;
-
-	unittest(1, "overlay test %d passed\n", 3);
+	of_unittest_apply_overlay_check(test, 3, 3, 1, 0, PDEV_OVERLAY);
 }
 
 /* test activation of a full device node */
-static void __init of_unittest_overlay_4(void)
+static void of_unittest_overlay_4(struct kunit *test)
 {
 	/* device should disable */
-	if (of_unittest_apply_overlay_check(4, 4, 0, 1, PDEV_OVERLAY))
-		return;
-
-	unittest(1, "overlay test %d passed\n", 4);
+	of_unittest_apply_overlay_check(test, 4, 4, 0, 1, PDEV_OVERLAY);
 }
 
 /* test overlay apply/revert sequence */
-static void __init of_unittest_overlay_5(void)
+static void of_unittest_overlay_5(struct kunit *test)
 {
 	/* device should disable */
-	if (of_unittest_apply_revert_overlay_check(5, 5, 0, 1, PDEV_OVERLAY))
-		return;
-
-	unittest(1, "overlay test %d passed\n", 5);
+	of_unittest_apply_revert_overlay_check(test, 5, 5, 0, 1, PDEV_OVERLAY);
 }
 
 /* test overlay application in sequence */
-static void __init of_unittest_overlay_6(void)
+static void of_unittest_overlay_6(struct kunit *test)
 {
 	int i, ov_id[2], ovcs_id;
 	int overlay_nr = 6, unittest_nr = 6;
@@ -1604,74 +1726,69 @@ static void __init of_unittest_overlay_6(void)
 
 	/* unittest device must be in before state */
 	for (i = 0; i < 2; i++) {
-		if (of_unittest_device_exists(unittest_nr + i, PDEV_OVERLAY)
-				!= before) {
-			unittest(0, "%s with device @\"%s\" %s\n",
-					overlay_name_from_nr(overlay_nr + i),
-					unittest_path(unittest_nr + i,
-						PDEV_OVERLAY),
-					!before ? "enabled" : "disabled");
-			return;
-		}
+		KUNIT_ASSERT_EQ_MSG(test,
+				    of_unittest_device_exists(unittest_nr + i,
+							      PDEV_OVERLAY),
+				    before,
+				    "%s with device @\"%s\" %s\n",
+				    overlay_name_from_nr(overlay_nr + i),
+				    unittest_path(unittest_nr + i,
+						  PDEV_OVERLAY),
+				    !before ? "enabled" : "disabled");
 	}
 
 	/* apply the overlays */
 	for (i = 0; i < 2; i++) {
-
 		overlay_name = overlay_name_from_nr(overlay_nr + i);
 
-		if (!overlay_data_apply(overlay_name, &ovcs_id)) {
-			unittest(0, "could not apply overlay \"%s\"\n",
-					overlay_name);
-			return;
-		}
+		KUNIT_ASSERT_TRUE_MSG(test,
+				      overlay_data_apply(overlay_name,
+							 &ovcs_id),
+				      "could not apply overlay \"%s\"\n",
+				      overlay_name);
 		ov_id[i] = ovcs_id;
 		of_unittest_track_overlay(ov_id[i]);
 	}
 
 	for (i = 0; i < 2; i++) {
 		/* unittest device must be in after state */
-		if (of_unittest_device_exists(unittest_nr + i, PDEV_OVERLAY)
-				!= after) {
-			unittest(0, "overlay @\"%s\" failed @\"%s\" %s\n",
-					overlay_name_from_nr(overlay_nr + i),
-					unittest_path(unittest_nr + i,
-						PDEV_OVERLAY),
-					!after ? "enabled" : "disabled");
-			return;
-		}
+		KUNIT_ASSERT_EQ_MSG(test,
+				    of_unittest_device_exists(unittest_nr + i,
+							      PDEV_OVERLAY),
+				    after,
+				    "overlay @\"%s\" failed @\"%s\" %s\n",
+				    overlay_name_from_nr(overlay_nr + i),
+				    unittest_path(unittest_nr + i,
+						  PDEV_OVERLAY),
+				    !after ? "enabled" : "disabled");
 	}
 
 	for (i = 1; i >= 0; i--) {
 		ovcs_id = ov_id[i];
-		if (of_overlay_remove(&ovcs_id)) {
-			unittest(0, "%s failed destroy @\"%s\"\n",
-					overlay_name_from_nr(overlay_nr + i),
-					unittest_path(unittest_nr + i,
-						PDEV_OVERLAY));
-			return;
-		}
+		KUNIT_ASSERT_FALSE_MSG(test, of_overlay_remove(&ovcs_id),
+				       "%s failed destroy @\"%s\"\n",
+				       overlay_name_from_nr(overlay_nr + i),
+				       unittest_path(unittest_nr + i,
+						     PDEV_OVERLAY));
 		of_unittest_untrack_overlay(ov_id[i]);
 	}
 
 	for (i = 0; i < 2; i++) {
 		/* unittest device must be again in before state */
-		if (of_unittest_device_exists(unittest_nr + i, PDEV_OVERLAY)
-				!= before) {
-			unittest(0, "%s with device @\"%s\" %s\n",
-					overlay_name_from_nr(overlay_nr + i),
-					unittest_path(unittest_nr + i,
-						PDEV_OVERLAY),
-					!before ? "enabled" : "disabled");
-			return;
-		}
+		KUNIT_ASSERT_EQ_MSG(test,
+				    of_unittest_device_exists(unittest_nr + i,
+							      PDEV_OVERLAY),
+				    before,
+				    "%s with device @\"%s\" %s\n",
+				    overlay_name_from_nr(overlay_nr + i),
+				    unittest_path(unittest_nr + i,
+						  PDEV_OVERLAY),
+				    !before ? "enabled" : "disabled");
 	}
-
-	unittest(1, "overlay test %d passed\n", 6);
 }
 
 /* test overlay application in sequence */
-static void __init of_unittest_overlay_8(void)
+static void of_unittest_overlay_8(struct kunit *test)
 {
 	int i, ov_id[2], ovcs_id;
 	int overlay_nr = 8, unittest_nr = 8;
@@ -1681,76 +1798,73 @@ static void __init of_unittest_overlay_8(void)
 
 	/* apply the overlays */
 	for (i = 0; i < 2; i++) {
-
 		overlay_name = overlay_name_from_nr(overlay_nr + i);
 
-		if (!overlay_data_apply(overlay_name, &ovcs_id)) {
-			unittest(0, "could not apply overlay \"%s\"\n",
-					overlay_name);
-			return;
-		}
+		KUNIT_ASSERT_TRUE_MSG(test,
+				      overlay_data_apply(overlay_name,
+							 &ovcs_id),
+				      "could not apply overlay \"%s\"\n",
+				      overlay_name);
 		ov_id[i] = ovcs_id;
 		of_unittest_track_overlay(ov_id[i]);
 	}
 
 	/* now try to remove first overlay (it should fail) */
 	ovcs_id = ov_id[0];
-	if (!of_overlay_remove(&ovcs_id)) {
-		unittest(0, "%s was destroyed @\"%s\"\n",
-				overlay_name_from_nr(overlay_nr + 0),
-				unittest_path(unittest_nr,
-					PDEV_OVERLAY));
-		return;
-	}
+	KUNIT_ASSERT_TRUE_MSG(test, of_overlay_remove(&ovcs_id),
+			      "%s was destroyed @\"%s\"\n",
+			      overlay_name_from_nr(overlay_nr + 0),
+			      unittest_path(unittest_nr,
+					    PDEV_OVERLAY));
 
 	/* removing them in order should work */
 	for (i = 1; i >= 0; i--) {
 		ovcs_id = ov_id[i];
-		if (of_overlay_remove(&ovcs_id)) {
-			unittest(0, "%s not destroyed @\"%s\"\n",
-					overlay_name_from_nr(overlay_nr + i),
-					unittest_path(unittest_nr,
-						PDEV_OVERLAY));
-			return;
-		}
+		KUNIT_ASSERT_FALSE_MSG(test, of_overlay_remove(&ovcs_id),
+				       "%s not destroyed @\"%s\"\n",
+				       overlay_name_from_nr(overlay_nr + i),
+				       unittest_path(unittest_nr,
+						     PDEV_OVERLAY));
 		of_unittest_untrack_overlay(ov_id[i]);
 	}
-
-	unittest(1, "overlay test %d passed\n", 8);
 }
 
 /* test insertion of a bus with parent devices */
-static void __init of_unittest_overlay_10(void)
+static void of_unittest_overlay_10(struct kunit *test)
 {
-	int ret;
 	char *child_path;
 
 	/* device should disable */
-	ret = of_unittest_apply_overlay_check(10, 10, 0, 1, PDEV_OVERLAY);
-	if (unittest(ret == 0,
-			"overlay test %d failed; overlay application\n", 10))
-		return;
+	KUNIT_ASSERT_EQ_MSG(test,
+			    of_unittest_apply_overlay_check(test,
+							    10,
+							    10,
+							    0,
+							    1,
+							    PDEV_OVERLAY),
+			    0,
+			    "overlay test %d failed; overlay application\n",
+			    10);
 
 	child_path = kasprintf(GFP_KERNEL, "%s/test-unittest101",
 			unittest_path(10, PDEV_OVERLAY));
-	if (unittest(child_path, "overlay test %d failed; kasprintf\n", 10))
-		return;
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, child_path);
 
-	ret = of_path_device_type_exists(child_path, PDEV_OVERLAY);
+	KUNIT_EXPECT_TRUE_MSG(test,
+			      of_path_device_type_exists(child_path,
+							 PDEV_OVERLAY),
+			      "overlay test %d failed; no child device\n", 10);
 	kfree(child_path);
-
-	unittest(ret, "overlay test %d failed; no child device\n", 10);
 }
 
 /* test insertion of a bus with parent devices (and revert) */
-static void __init of_unittest_overlay_11(void)
+static void of_unittest_overlay_11(struct kunit *test)
 {
-	int ret;
-
 	/* device should disable */
-	ret = of_unittest_apply_revert_overlay_check(11, 11, 0, 1,
-			PDEV_OVERLAY);
-	unittest(ret == 0, "overlay test %d failed; overlay apply\n", 11);
+	KUNIT_EXPECT_FALSE(test,
+			  of_unittest_apply_revert_overlay_check(test,
+								 11, 11, 0, 1,
+								 PDEV_OVERLAY));
 }
 
 #if IS_BUILTIN(CONFIG_I2C) && IS_ENABLED(CONFIG_OF_OVERLAY)
@@ -1972,25 +2086,23 @@ static struct i2c_driver unittest_i2c_mux_driver = {
 
 #endif
 
-static int of_unittest_overlay_i2c_init(void)
+static int of_unittest_overlay_i2c_init(struct kunit *test)
 {
-	int ret;
+	KUNIT_ASSERT_EQ_MSG(test,
+			    i2c_add_driver(&unittest_i2c_dev_driver),
+			    0,
+			    "could not register unittest i2c device driver\n");
 
-	ret = i2c_add_driver(&unittest_i2c_dev_driver);
-	if (unittest(ret == 0,
-			"could not register unittest i2c device driver\n"))
-		return ret;
-
-	ret = platform_driver_register(&unittest_i2c_bus_driver);
-	if (unittest(ret == 0,
-			"could not register unittest i2c bus driver\n"))
-		return ret;
+	KUNIT_ASSERT_EQ_MSG(test,
+			    platform_driver_register(&unittest_i2c_bus_driver),
+			    0,
+			    "could not register unittest i2c bus driver\n");
 
 #if IS_BUILTIN(CONFIG_I2C_MUX)
-	ret = i2c_add_driver(&unittest_i2c_mux_driver);
-	if (unittest(ret == 0,
-			"could not register unittest i2c mux driver\n"))
-		return ret;
+	KUNIT_ASSERT_EQ_MSG(test,
+			    i2c_add_driver(&unittest_i2c_mux_driver),
+			    0,
+			    "could not register unittest i2c mux driver\n");
 #endif
 
 	return 0;
@@ -2005,101 +2117,89 @@ static void of_unittest_overlay_i2c_cleanup(void)
 	i2c_del_driver(&unittest_i2c_dev_driver);
 }
 
-static void __init of_unittest_overlay_i2c_12(void)
+static void of_unittest_overlay_i2c_12(struct kunit *test)
 {
 	/* device should enable */
-	if (of_unittest_apply_overlay_check(12, 12, 0, 1, I2C_OVERLAY))
-		return;
-
-	unittest(1, "overlay test %d passed\n", 12);
+	of_unittest_apply_overlay_check(test, 12, 12, 0, 1, I2C_OVERLAY);
 }
 
 /* test deactivation of device */
-static void __init of_unittest_overlay_i2c_13(void)
+static void of_unittest_overlay_i2c_13(struct kunit *test)
 {
 	/* device should disable */
-	if (of_unittest_apply_overlay_check(13, 13, 1, 0, I2C_OVERLAY))
-		return;
-
-	unittest(1, "overlay test %d passed\n", 13);
+	of_unittest_apply_overlay_check(test, 13, 13, 1, 0, I2C_OVERLAY);
 }
 
 /* just check for i2c mux existence */
-static void of_unittest_overlay_i2c_14(void)
+static void of_unittest_overlay_i2c_14(struct kunit *test)
 {
+	KUNIT_SUCCEED(test);
 }
 
-static void __init of_unittest_overlay_i2c_15(void)
+static void of_unittest_overlay_i2c_15(struct kunit *test)
 {
 	/* device should enable */
-	if (of_unittest_apply_overlay_check(15, 15, 0, 1, I2C_OVERLAY))
-		return;
-
-	unittest(1, "overlay test %d passed\n", 15);
+	of_unittest_apply_overlay_check(test, 15, 15, 0, 1, I2C_OVERLAY);
 }
 
 #else
 
-static inline void of_unittest_overlay_i2c_14(void) { }
-static inline void of_unittest_overlay_i2c_15(void) { }
+static inline void of_unittest_overlay_i2c_14(struct kunit *test) { }
+static inline void of_unittest_overlay_i2c_15(struct kunit *test) { }
 
 #endif
 
-static void __init of_unittest_overlay(void)
+static void of_unittest_overlay(struct kunit *test)
 {
 	struct device_node *bus_np = NULL;
 
-	if (platform_driver_register(&unittest_driver)) {
-		unittest(0, "could not register unittest driver\n");
-		goto out;
-	}
+	KUNIT_ASSERT_FALSE_MSG(test,
+			       platform_driver_register(&unittest_driver),
+			       "could not register unittest driver\n");
 
 	bus_np = of_find_node_by_path(bus_path);
-	if (bus_np == NULL) {
-		unittest(0, "could not find bus_path \"%s\"\n", bus_path);
-		goto out;
-	}
+	KUNIT_ASSERT_NOT_ERR_OR_NULL_MSG(test, bus_np,
+					 "could not find bus_path \"%s\"\n",
+					 bus_path);
 
-	if (of_platform_default_populate(bus_np, NULL, NULL)) {
-		unittest(0, "could not populate bus @ \"%s\"\n", bus_path);
-		goto out;
-	}
+	KUNIT_ASSERT_FALSE_MSG(test,
+			       of_platform_default_populate(bus_np, NULL, NULL),
+			       "could not populate bus @ \"%s\"\n", bus_path);
 
-	if (!of_unittest_device_exists(100, PDEV_OVERLAY)) {
-		unittest(0, "could not find unittest0 @ \"%s\"\n",
-				unittest_path(100, PDEV_OVERLAY));
-		goto out;
-	}
+	KUNIT_ASSERT_TRUE_MSG(test,
+			      of_unittest_device_exists(100, PDEV_OVERLAY),
+			      "could not find unittest0 @ \"%s\"\n",
+			      unittest_path(100, PDEV_OVERLAY));
 
-	if (of_unittest_device_exists(101, PDEV_OVERLAY)) {
-		unittest(0, "unittest1 @ \"%s\" should not exist\n",
-				unittest_path(101, PDEV_OVERLAY));
-		goto out;
-	}
-
-	unittest(1, "basic infrastructure of overlays passed");
+	KUNIT_ASSERT_FALSE_MSG(test,
+			       of_unittest_device_exists(101, PDEV_OVERLAY),
+			       "unittest1 @ \"%s\" should not exist\n",
+			       unittest_path(101, PDEV_OVERLAY));
 
 	/* tests in sequence */
-	of_unittest_overlay_0();
-	of_unittest_overlay_1();
-	of_unittest_overlay_2();
-	of_unittest_overlay_3();
-	of_unittest_overlay_4();
-	of_unittest_overlay_5();
-	of_unittest_overlay_6();
-	of_unittest_overlay_8();
-
-	of_unittest_overlay_10();
-	of_unittest_overlay_11();
+	of_unittest_overlay_0(test);
+	of_unittest_overlay_1(test);
+	of_unittest_overlay_2(test);
+	of_unittest_overlay_3(test);
+	of_unittest_overlay_4(test);
+	of_unittest_overlay_5(test);
+	of_unittest_overlay_6(test);
+	of_unittest_overlay_8(test);
+
+	of_unittest_overlay_10(test);
+	of_unittest_overlay_11(test);
 
 #if IS_BUILTIN(CONFIG_I2C)
-	if (unittest(of_unittest_overlay_i2c_init() == 0, "i2c init failed\n"))
-		goto out;
+	KUNIT_ASSERT_EQ_MSG(test,
+			    of_unittest_overlay_i2c_init(test),
+			    0,
+			    "i2c init failed\n");
 
-	of_unittest_overlay_i2c_12();
-	of_unittest_overlay_i2c_13();
-	of_unittest_overlay_i2c_14();
-	of_unittest_overlay_i2c_15();
+	of_unittest_overlay_i2c_12(test);
+	of_unittest_overlay_i2c_13(test);
+	of_unittest_overlay_i2c_14(test);
+	of_unittest_overlay_i2c_15(test);
 
 	of_unittest_overlay_i2c_cleanup();
 #endif
@@ -2111,7 +2211,7 @@ static void __init of_unittest_overlay(void)
 }
 
 #else
-static inline void __init of_unittest_overlay(void) { }
+static inline void of_unittest_overlay(struct kunit *test) { }
 #endif
 
 #ifdef CONFIG_OF_OVERLAY
@@ -2254,7 +2354,7 @@ void __init unittest_unflatten_overlay_base(void)
  *
  * Return 0 on unexpected error.
  */
-static int __init overlay_data_apply(const char *overlay_name, int *overlay_id)
+static int overlay_data_apply(const char *overlay_name, int *overlay_id)
 {
 	struct overlay_info *info;
 	int found = 0;
@@ -2301,19 +2401,17 @@ static int __init overlay_data_apply(const char *overlay_name, int *overlay_id)
  * The first part of the function is _not_ normal overlay usage; it is
  * finishing splicing the base overlay device tree into the live tree.
  */
-static __init void of_unittest_overlay_high_level(void)
+static void of_unittest_overlay_high_level(struct kunit *test)
 {
 	struct device_node *last_sibling;
 	struct device_node *np;
 	struct device_node *of_symbols;
-	struct device_node *overlay_base_symbols;
+	struct device_node *overlay_base_symbols = NULL;
 	struct device_node **pprev;
 	struct property *prop;
 
-	if (!overlay_base_root) {
-		unittest(0, "overlay_base_root not initialized\n");
-		return;
-	}
+	KUNIT_ASSERT_TRUE_MSG(test, overlay_base_root,
+			      "overlay_base_root not initialized\n");
 
 	/*
 	 * Could not fixup phandles in unittest_unflatten_overlay_base()
@@ -2358,11 +2456,10 @@ static __init void of_unittest_overlay_high_level(void)
 	}
 
 	for (np = overlay_base_root->child; np; np = np->sibling) {
-		if (of_get_child_by_name(of_root, np->name)) {
-			unittest(0, "illegal node name in overlay_base %s",
-				np->name);
-			return;
-		}
+		KUNIT_ASSERT_FALSE_MSG(test,
+				      of_get_child_by_name(of_root, np->name),
+				      "illegal node name in overlay_base %s",
+				      np->name);
 	}
 
 	/*
@@ -2395,21 +2492,24 @@ static __init void of_unittest_overlay_high_level(void)
 
 			new_prop = __of_prop_dup(prop, GFP_KERNEL);
 			if (!new_prop) {
-				unittest(0, "__of_prop_dup() of '%s' from overlay_base node __symbols__",
-					 prop->name);
+				KUNIT_FAIL(test,
+					   "__of_prop_dup() of '%s' from overlay_base node __symbols__",
+					   prop->name);
 				goto err_unlock;
 			}
 			if (__of_add_property(of_symbols, new_prop)) {
 				/* "name" auto-generated by unflatten */
 				if (!strcmp(new_prop->name, "name"))
 					continue;
-				unittest(0, "duplicate property '%s' in overlay_base node __symbols__",
-					 prop->name);
+				KUNIT_FAIL(test,
+					   "duplicate property '%s' in overlay_base node __symbols__",
+					   prop->name);
 				goto err_unlock;
 			}
 			if (__of_add_property_sysfs(of_symbols, new_prop)) {
-				unittest(0, "unable to add property '%s' in overlay_base node __symbols__ to sysfs",
-					 prop->name);
+				KUNIT_FAIL(test,
+					   "unable to add property '%s' in overlay_base node __symbols__ to sysfs",
+					   prop->name);
 				goto err_unlock;
 			}
 		}
@@ -2420,14 +2520,16 @@ static __init void of_unittest_overlay_high_level(void)
 
 	/* now do the normal overlay usage test */
 
-	unittest(overlay_data_apply("overlay", NULL),
-		 "Adding overlay 'overlay' failed\n");
+	KUNIT_EXPECT_TRUE_MSG(test, overlay_data_apply("overlay", NULL),
+			      "Adding overlay 'overlay' failed\n");
 
-	unittest(overlay_data_apply("overlay_bad_phandle", NULL),
-		 "Adding overlay 'overlay_bad_phandle' failed\n");
+	KUNIT_EXPECT_TRUE_MSG(test,
+			      overlay_data_apply("overlay_bad_phandle", NULL),
+			      "Adding overlay 'overlay_bad_phandle' failed\n");
 
-	unittest(overlay_data_apply("overlay_bad_symbol", NULL),
-		 "Adding overlay 'overlay_bad_symbol' failed\n");
+	KUNIT_EXPECT_TRUE_MSG(test,
+			      overlay_data_apply("overlay_bad_symbol", NULL),
+			      "Adding overlay 'overlay_bad_symbol' failed\n");
 
 	return;
 
@@ -2437,54 +2539,49 @@ static __init void of_unittest_overlay_high_level(void)
 
 #else
 
-static inline __init void of_unittest_overlay_high_level(void) {}
+static inline void of_unittest_overlay_high_level(struct kunit *test) {}
 
 #endif
 
-static int __init of_unittest(void)
+static int of_test_init(struct kunit *test)
 {
-	struct device_node *np;
-	int res;
-
 	/* adding data for unittest */
-	res = unittest_data_add();
-	if (res)
-		return res;
+	KUNIT_ASSERT_EQ(test, 0, unittest_data_add());
+
 	if (!of_aliases)
 		of_aliases = of_find_node_by_path("/aliases");
 
 	np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
-	if (!np) {
-		pr_info("No testcase data in device tree; not running tests\n");
-		return 0;
-	}
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
 	of_node_put(np);
 
-	pr_info("start of unittest - you will see error messages\n");
-	of_unittest_check_tree_linkage();
-	of_unittest_check_phandles();
-	of_unittest_find_node_by_name();
-	of_unittest_dynamic();
-	of_unittest_parse_phandle_with_args();
-	of_unittest_parse_phandle_with_args_map();
-	of_unittest_printf();
-	of_unittest_property_string();
-	of_unittest_property_copy();
-	of_unittest_changeset();
-	of_unittest_parse_interrupts();
-	of_unittest_parse_interrupts_extended();
-	of_unittest_match_node();
-	of_unittest_platform_populate();
-	of_unittest_overlay();
+	return 0;
+}
 
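+/*
+ * of_test_init() is run before each test case below; the cases run in
+ * the order they are listed in this table.
+ */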
+static struct kunit_case of_test_cases[] = {
+	KUNIT_CASE(of_unittest_check_tree_linkage),
+	KUNIT_CASE(of_unittest_check_phandles),
+	KUNIT_CASE(of_unittest_find_node_by_name),
+	KUNIT_CASE(of_unittest_dynamic),
+	KUNIT_CASE(of_unittest_parse_phandle_with_args),
+	KUNIT_CASE(of_unittest_parse_phandle_with_args_map),
+	KUNIT_CASE(of_unittest_printf),
+	KUNIT_CASE(of_unittest_property_string),
+	KUNIT_CASE(of_unittest_property_copy),
+	KUNIT_CASE(of_unittest_changeset),
+	KUNIT_CASE(of_unittest_parse_interrupts),
+	KUNIT_CASE(of_unittest_parse_interrupts_extended),
+	KUNIT_CASE(of_unittest_match_node),
+	KUNIT_CASE(of_unittest_platform_populate),
+	KUNIT_CASE(of_unittest_overlay),
 	/* Double check linkage after removing testcase data */
-	of_unittest_check_tree_linkage();
-
-	of_unittest_overlay_high_level();
-
-	pr_info("end of unittest - %i passed, %i failed\n",
-		unittest_results.passed, unittest_results.failed);
+	KUNIT_CASE(of_unittest_check_tree_linkage),
+	KUNIT_CASE(of_unittest_overlay_high_level),
+	{},
+};
 
-	return 0;
-}
-late_initcall(of_unittest);
+static struct kunit_module of_test_module = {
+	.name = "of-test",
+	.init = of_test_init,
+	.test_cases = of_test_cases,
+};
+module_test(of_test_module);
-- 
2.20.0.rc0.387.gc7a69e6b6c-goog

^ permalink raw reply related	[flat|nested] 232+ messages in thread

* [RFC v3 17/19] of: unittest: migrate tests to run on KUnit
  2018-11-28 19:36 ` [RFC v3 17/19] of: unittest: migrate tests to run on KUnit brendanhiggins
@ 2018-11-28 19:36   ` Brendan Higgins
       [not found]   ` <CAL_Jsq+09Kx7yMBC_Jw45QGmk6U_fp4N6HOZDwYrM4tWw+_dOA@mail.gmail.com>
  2018-12-04 10:56   ` frowand.list
  2 siblings, 0 replies; 232+ messages in thread
From: Brendan Higgins @ 2018-11-28 19:36 UTC (permalink / raw)


Migrate the tests, without any cleanup or modification of test logic,
to run under KUnit using the KUnit expectation and assertion API.
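
To illustrate the mechanical pattern (a representative hunk from the
diff below): a plain unittest() check becomes a KUNIT_EXPECT_*() call,
and checks that previously bailed out of a test early become
KUNIT_ASSERT_*() calls, since a failed assertion aborts the test case
while a failed expectation is recorded and the test continues:

	/* before */
	rc = of_property_match_string(np, "phandle-list-names", "first");
	unittest(rc == 0, "first expected:0 got:%i\n", rc);

	/* after */
	KUNIT_EXPECT_EQ(test,
			of_property_match_string(np,
						 "phandle-list-names",
						 "first"),
			0);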

Signed-off-by: Brendan Higgins <brendanhiggins@google.com>
---
 drivers/of/Kconfig    |    1 +
 drivers/of/unittest.c | 1405 ++++++++++++++++++++++-------------------
 2 files changed, 752 insertions(+), 654 deletions(-)

diff --git a/drivers/of/Kconfig b/drivers/of/Kconfig
index ad3fcad4d75b8..f309399deac20 100644
--- a/drivers/of/Kconfig
+++ b/drivers/of/Kconfig
@@ -15,6 +15,7 @@ if OF
 config OF_UNITTEST
 	bool "Device Tree runtime unit tests"
 	depends on !SPARC
+	depends on KUNIT
 	select IRQ_DOMAIN
 	select OF_EARLY_FLATTREE
 	select OF_RESOLVE
diff --git a/drivers/of/unittest.c b/drivers/of/unittest.c
index 41b49716ac75f..a5ef44730ffdb 100644
--- a/drivers/of/unittest.c
+++ b/drivers/of/unittest.c
@@ -26,186 +26,187 @@
 
 #include <linux/bitops.h>
 
+#include <kunit/test.h>
+
 #include "of_private.h"
 
-static struct unittest_results {
-	int passed;
-	int failed;
-} unittest_results;
-
-#define unittest(result, fmt, ...) ({ \
-	bool failed = !(result); \
-	if (failed) { \
-		unittest_results.failed++; \
-		pr_err("FAIL %s():%i " fmt, __func__, __LINE__, ##__VA_ARGS__); \
-	} else { \
-		unittest_results.passed++; \
-		pr_debug("pass %s():%i\n", __func__, __LINE__); \
-	} \
-	failed; \
-})
-
-static void __init of_unittest_find_node_by_name(void)
+static void of_unittest_find_node_by_name(struct kunit *test)
 {
 	struct device_node *np;
 	const char *options, *name;
 
 	np = of_find_node_by_path("/testcase-data");
 	name = kasprintf(GFP_KERNEL, "%pOF", np);
-	unittest(np && !strcmp("/testcase-data", name),
-		"find /testcase-data failed\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+	KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
+			       "find /testcase-data failed\n");
 	of_node_put(np);
 	kfree(name);
 
 	/* Test if trailing '/' works */
-	np = of_find_node_by_path("/testcase-data/");
-	unittest(!np, "trailing '/' on /testcase-data/ should fail\n");
+	KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("/testcase-data/"), NULL,
+			    "trailing '/' on /testcase-data/ should fail\n");
 
 	np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
 	name = kasprintf(GFP_KERNEL, "%pOF", np);
-	unittest(np && !strcmp("/testcase-data/phandle-tests/consumer-a", name),
-		"find /testcase-data/phandle-tests/consumer-a failed\n");
+	KUNIT_EXPECT_STREQ_MSG(test,
+			       "/testcase-data/phandle-tests/consumer-a", name,
+			       "find /testcase-data/phandle-tests/consumer-a failed\n");
 	of_node_put(np);
 	kfree(name);
 
 	np = of_find_node_by_path("testcase-alias");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
 	name = kasprintf(GFP_KERNEL, "%pOF", np);
-	unittest(np && !strcmp("/testcase-data", name),
-		"find testcase-alias failed\n");
+	KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
+			       "find testcase-alias failed\n");
 	of_node_put(np);
 	kfree(name);
 
 	/* Test if trailing '/' works on aliases */
-	np = of_find_node_by_path("testcase-alias/");
-	unittest(!np, "trailing '/' on testcase-alias/ should fail\n");
+	KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("testcase-alias/"), NULL,
+			    "trailing '/' on testcase-alias/ should fail\n");
 
 	np = of_find_node_by_path("testcase-alias/phandle-tests/consumer-a");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
 	name = kasprintf(GFP_KERNEL, "%pOF", np);
-	unittest(np && !strcmp("/testcase-data/phandle-tests/consumer-a", name),
-		"find testcase-alias/phandle-tests/consumer-a failed\n");
+	KUNIT_EXPECT_STREQ_MSG(test,
+			       "/testcase-data/phandle-tests/consumer-a", name,
+			       "find testcase-alias/phandle-tests/consumer-a failed\n");
 	of_node_put(np);
 	kfree(name);
 
-	np = of_find_node_by_path("/testcase-data/missing-path");
-	unittest(!np, "non-existent path returned node %pOF\n", np);
+	KUNIT_EXPECT_EQ_MSG(test,
+			    of_find_node_by_path("/testcase-data/missing-path"),
+			    NULL,
+			    "non-existent path returned node %pOF\n", np);
 	of_node_put(np);
 
-	np = of_find_node_by_path("missing-alias");
-	unittest(!np, "non-existent alias returned node %pOF\n", np);
+	KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("missing-alias"), NULL,
+			    "non-existent alias returned node %pOF\n", np);
 	of_node_put(np);
 
-	np = of_find_node_by_path("testcase-alias/missing-path");
-	unittest(!np, "non-existent alias with relative path returned node %pOF\n", np);
+	KUNIT_EXPECT_EQ_MSG(test,
+			    of_find_node_by_path("testcase-alias/missing-path"),
+			    NULL,
+			    "non-existent alias with relative path returned node %pOF\n",
+			    np);
 	of_node_put(np);
 
 	np = of_find_node_opts_by_path("/testcase-data:testoption", &options);
-	unittest(np && !strcmp("testoption", options),
-		 "option path test failed\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+	KUNIT_EXPECT_STREQ_MSG(test, "testoption", options,
+			       "option path test failed\n");
 	of_node_put(np);
 
 	np = of_find_node_opts_by_path("/testcase-data:test/option", &options);
-	unittest(np && !strcmp("test/option", options),
-		 "option path test, subcase #1 failed\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+	KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
+			       "option path test, subcase #1 failed\n");
 	of_node_put(np);
 
 	np = of_find_node_opts_by_path("/testcase-data/testcase-device1:test/option", &options);
-	unittest(np && !strcmp("test/option", options),
-		 "option path test, subcase #2 failed\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+	KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
+			       "option path test, subcase #2 failed\n");
 	of_node_put(np);
 
 	np = of_find_node_opts_by_path("/testcase-data:testoption", NULL);
-	unittest(np, "NULL option path test failed\n");
+	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(test, np,
+					 "NULL option path test failed\n");
 	of_node_put(np);
 
 	np = of_find_node_opts_by_path("testcase-alias:testaliasoption",
 				       &options);
-	unittest(np && !strcmp("testaliasoption", options),
-		 "option alias path test failed\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+	KUNIT_EXPECT_STREQ_MSG(test, "testaliasoption", options,
+			       "option alias path test failed\n");
 	of_node_put(np);
 
 	np = of_find_node_opts_by_path("testcase-alias:test/alias/option",
 				       &options);
-	unittest(np && !strcmp("test/alias/option", options),
-		 "option alias path test, subcase #1 failed\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+	KUNIT_EXPECT_STREQ_MSG(test, "test/alias/option", options,
+			       "option alias path test, subcase #1 failed\n");
 	of_node_put(np);
 
 	np = of_find_node_opts_by_path("testcase-alias:testaliasoption", NULL);
-	unittest(np, "NULL option alias path test failed\n");
+	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(test, np,
+					 "NULL option alias path test failed\n");
 	of_node_put(np);
 
 	options = "testoption";
 	np = of_find_node_opts_by_path("testcase-alias", &options);
-	unittest(np && !options, "option clearing test failed\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+	KUNIT_EXPECT_EQ_MSG(test, options, NULL,
+			    "option clearing test failed\n");
 	of_node_put(np);
 
 	options = "testoption";
 	np = of_find_node_opts_by_path("/", &options);
-	unittest(np && !options, "option clearing root node test failed\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+	KUNIT_EXPECT_EQ_MSG(test, options, NULL,
+			    "option clearing root node test failed\n");
 	of_node_put(np);
 }
 
-static void __init of_unittest_dynamic(void)
+static void of_unittest_dynamic(struct kunit *test)
 {
 	struct device_node *np;
 	struct property *prop;
 
 	np = of_find_node_by_path("/testcase-data");
-	if (!np) {
-		pr_err("missing testcase data\n");
-		return;
-	}
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
 
 	/* Array of 4 properties for the purpose of testing */
 	prop = kcalloc(4, sizeof(*prop), GFP_KERNEL);
-	if (!prop) {
-		unittest(0, "kzalloc() failed\n");
-		return;
-	}
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, prop);
 
 	/* Add a new property - should pass*/
 	prop->name = "new-property";
 	prop->value = "new-property-data";
 	prop->length = strlen(prop->value) + 1;
-	unittest(of_add_property(np, prop) == 0, "Adding a new property failed\n");
+	KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop), 0,
+			    "Adding a new property failed\n");
 
 	/* Try to add an existing property - should fail */
 	prop++;
 	prop->name = "new-property";
 	prop->value = "new-property-data-should-fail";
 	prop->length = strlen(prop->value) + 1;
-	unittest(of_add_property(np, prop) != 0,
-		 "Adding an existing property should have failed\n");
+	KUNIT_EXPECT_NE_MSG(test, of_add_property(np, prop), 0,
+			    "Adding an existing property should have failed\n");
 
 	/* Try to modify an existing property - should pass */
 	prop->value = "modify-property-data-should-pass";
 	prop->length = strlen(prop->value) + 1;
-	unittest(of_update_property(np, prop) == 0,
-		 "Updating an existing property should have passed\n");
+	KUNIT_EXPECT_EQ_MSG(test, of_update_property(np, prop), 0,
+			    "Updating an existing property should have passed\n");
 
 	/* Try to modify non-existent property - should pass*/
 	prop++;
 	prop->name = "modify-property";
 	prop->value = "modify-missing-property-data-should-pass";
 	prop->length = strlen(prop->value) + 1;
-	unittest(of_update_property(np, prop) == 0,
-		 "Updating a missing property should have passed\n");
+	KUNIT_EXPECT_EQ_MSG(test, of_update_property(np, prop), 0,
+			    "Updating a missing property should have passed\n");
 
 	/* Remove property - should pass */
-	unittest(of_remove_property(np, prop) == 0,
-		 "Removing a property should have passed\n");
+	KUNIT_EXPECT_EQ_MSG(test, of_remove_property(np, prop), 0,
+			    "Removing a property should have passed\n");
 
 	/* Adding very large property - should pass */
 	prop++;
 	prop->name = "large-property-PAGE_SIZEx8";
 	prop->length = PAGE_SIZE * 8;
 	prop->value = kzalloc(prop->length, GFP_KERNEL);
-	unittest(prop->value != NULL, "Unable to allocate large buffer\n");
-	if (prop->value)
-		unittest(of_add_property(np, prop) == 0,
-			 "Adding a large property should have passed\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, prop->value);
+	KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop), 0,
+			    "Adding a large property should have passed\n");
 }
 
-static int __init of_unittest_check_node_linkage(struct device_node *np)
+static int of_unittest_check_node_linkage(struct device_node *np)
 {
 	struct device_node *child;
 	int count = 0, rc;
@@ -230,27 +231,29 @@ static int __init of_unittest_check_node_linkage(struct device_node *np)
 	return rc;
 }
 
-static void __init of_unittest_check_tree_linkage(void)
+static void of_unittest_check_tree_linkage(struct kunit *test)
 {
 	struct device_node *np;
 	int allnode_count = 0, child_count;
 
-	if (!of_root)
-		return;
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, of_root);
 
 	for_each_of_allnodes(np)
 		allnode_count++;
 	child_count = of_unittest_check_node_linkage(of_root);
 
-	unittest(child_count > 0, "Device node data structure is corrupted\n");
-	unittest(child_count == allnode_count,
-		 "allnodes list size (%i) doesn't match sibling lists size (%i)\n",
-		 allnode_count, child_count);
+	KUNIT_EXPECT_GT_MSG(test, child_count, 0,
+			    "Device node data structure is corrupted\n");
+	KUNIT_EXPECT_EQ_MSG(test, child_count, allnode_count,
+			    "allnodes list size (%i) doesn't match sibling lists size (%i)\n",
+			    allnode_count, child_count);
 	pr_debug("allnodes list size (%i); sibling lists size (%i)\n", allnode_count, child_count);
 }
 
-static void __init of_unittest_printf_one(struct device_node *np, const char *fmt,
-					  const char *expected)
+static void of_unittest_printf_one(struct kunit *test,
+				   struct device_node *np,
+				   const char *fmt,
+				   const char *expected)
 {
 	unsigned char *buf;
 	int buf_size;
@@ -266,9 +269,12 @@ static void __init of_unittest_printf_one(struct device_node *np, const char *fm
 	size = snprintf(buf, buf_size - 2, fmt, np);
 
 	/* use strcmp() instead of strncmp() here to be absolutely sure strings match */
-	unittest((strcmp(buf, expected) == 0) && (buf[size+1] == 0xff),
-		"sprintf failed; fmt='%s' expected='%s' rslt='%s'\n",
-		fmt, expected, buf);
+	KUNIT_EXPECT_STREQ_MSG(test, buf, expected,
+			       "sprintf failed; fmt='%s' expected='%s' rslt='%s'\n",
+			       fmt, expected, buf);
+	KUNIT_EXPECT_EQ_MSG(test, buf[size+1], 0xff,
+			    "sprintf failed; fmt='%s' expected='%s' rslt='%s'\n",
+			    fmt, expected, buf);
 
 	/* Make sure length limits work */
 	size++;
@@ -276,40 +282,43 @@ static void __init of_unittest_printf_one(struct device_node *np, const char *fm
 		/* Clear the buffer, and make sure it works correctly still */
 		memset(buf, 0xff, buf_size);
 		snprintf(buf, size+1, fmt, np);
-		unittest(strncmp(buf, expected, size) == 0 && (buf[size+1] == 0xff),
-			"snprintf failed; size=%i fmt='%s' expected='%s' rslt='%s'\n",
-			size, fmt, expected, buf);
+		KUNIT_EXPECT_STREQ_MSG(test, buf, expected,
+				       "snprintf failed; size=%i fmt='%s' expected='%s' rslt='%s'\n",
+				       size, fmt, expected, buf);
+		KUNIT_EXPECT_EQ_MSG(test, buf[size+1], 0xff,
+				    "snprintf failed; size=%i fmt='%s' expected='%s' rslt='%s'\n",
+				    size, fmt, expected, buf);
 	}
 	kfree(buf);
 }
 
-static void __init of_unittest_printf(void)
+static void of_unittest_printf(struct kunit *test)
 {
 	struct device_node *np;
 	const char *full_name = "/testcase-data/platform-tests/test-device@1/dev@100";
 	char phandle_str[16] = "";
 
 	np = of_find_node_by_path(full_name);
-	if (!np) {
-		unittest(np, "testcase data missing\n");
-		return;
-	}
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
 
 	num_to_str(phandle_str, sizeof(phandle_str), np->phandle, 0);
 
-	of_unittest_printf_one(np, "%pOF",  full_name);
-	of_unittest_printf_one(np, "%pOFf", full_name);
-	of_unittest_printf_one(np, "%pOFp", phandle_str);
-	of_unittest_printf_one(np, "%pOFP", "dev at 100");
-	of_unittest_printf_one(np, "ABC %pOFP ABC", "ABC dev at 100 ABC");
-	of_unittest_printf_one(np, "%10pOFP", "   dev at 100");
-	of_unittest_printf_one(np, "%-10pOFP", "dev at 100   ");
-	of_unittest_printf_one(of_root, "%pOFP", "/");
-	of_unittest_printf_one(np, "%pOFF", "----");
-	of_unittest_printf_one(np, "%pOFPF", "dev at 100:----");
-	of_unittest_printf_one(np, "%pOFPFPc", "dev at 100:----:dev at 100:test-sub-device");
-	of_unittest_printf_one(np, "%pOFc", "test-sub-device");
-	of_unittest_printf_one(np, "%pOFC",
+	of_unittest_printf_one(test, np, "%pOF",  full_name);
+	of_unittest_printf_one(test, np, "%pOFf", full_name);
+	of_unittest_printf_one(test, np, "%pOFp", phandle_str);
+	of_unittest_printf_one(test, np, "%pOFP", "dev at 100");
+	of_unittest_printf_one(test, np, "ABC %pOFP ABC", "ABC dev at 100 ABC");
+	of_unittest_printf_one(test, np, "%10pOFP", "   dev at 100");
+	of_unittest_printf_one(test, np, "%-10pOFP", "dev at 100   ");
+	of_unittest_printf_one(test, of_root, "%pOFP", "/");
+	of_unittest_printf_one(test, np, "%pOFF", "----");
+	of_unittest_printf_one(test, np, "%pOFPF", "dev at 100:----");
+	of_unittest_printf_one(test,
+			       np,
+			       "%pOFPFPc",
+			       "dev at 100:----:dev at 100:test-sub-device");
+	of_unittest_printf_one(test, np, "%pOFc", "test-sub-device");
+	of_unittest_printf_one(test, np, "%pOFC",
 			"\"test-sub-device\",\"test-compat2\",\"test-compat3\"");
 }
 
@@ -319,7 +328,7 @@ struct node_hash {
 };
 
 static DEFINE_HASHTABLE(phandle_ht, 8);
-static void __init of_unittest_check_phandles(void)
+static void of_unittest_check_phandles(struct kunit *test)
 {
 	struct device_node *np;
 	struct node_hash *nh;
@@ -331,24 +340,25 @@ static void __init of_unittest_check_phandles(void)
 			continue;
 
 		hash_for_each_possible(phandle_ht, nh, node, np->phandle) {
+			KUNIT_EXPECT_NE_MSG(test, nh->np->phandle, np->phandle,
+					    "Duplicate phandle! %i used by %pOF and %pOF\n",
+					    np->phandle, nh->np, np);
 			if (nh->np->phandle == np->phandle) {
-				pr_info("Duplicate phandle! %i used by %pOF and %pOF\n",
-					np->phandle, nh->np, np);
 				dup_count++;
 				break;
 			}
 		}
 
 		nh = kzalloc(sizeof(*nh), GFP_KERNEL);
-		if (WARN_ON(!nh))
-			return;
+		KUNIT_ASSERT_NOT_ERR_OR_NULL(test, nh);
 
 		nh->np = np;
 		hash_add(phandle_ht, &nh->node, np->phandle);
 		phandle_count++;
 	}
-	unittest(dup_count == 0, "Found %i duplicates in %i phandles\n",
-		 dup_count, phandle_count);
+	KUNIT_EXPECT_EQ_MSG(test, dup_count, 0,
+			    "Found %i duplicates in %i phandles\n",
+			    dup_count, phandle_count);
 
 	/* Clean up */
 	hash_for_each_safe(phandle_ht, i, tmp, nh, node) {
@@ -357,20 +367,22 @@ static void __init of_unittest_check_phandles(void)
 	}
 }
 
-static void __init of_unittest_parse_phandle_with_args(void)
+static void of_unittest_parse_phandle_with_args(struct kunit *test)
 {
 	struct device_node *np;
 	struct of_phandle_args args;
 	int i, rc;
 
 	np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
-	if (!np) {
-		pr_err("missing testcase data\n");
-		return;
-	}
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
 
 	rc = of_count_phandle_with_args(np, "phandle-list", "#phandle-cells");
-	unittest(rc == 7, "of_count_phandle_with_args() returned %i, expected 7\n", rc);
+	KUNIT_EXPECT_EQ_MSG(test, rc, 7,
+			    "of_count_phandle_with_args() returned %i, expected 7\n",
+			    rc);
 
 	for (i = 0; i < 8; i++) {
 		bool passed = true;
@@ -423,81 +435,90 @@ static void __init of_unittest_parse_phandle_with_args(void)
 			passed = false;
 		}
 
-		unittest(passed, "index %i - data error on node %pOF rc=%i\n",
-			 i, args.np, rc);
+		KUNIT_EXPECT_TRUE_MSG(test, passed,
+				      "index %i - data error on node %pOF rc=%i\n",
+				      i, args.np, rc);
 	}
 
 	/* Check for missing list property */
-	rc = of_parse_phandle_with_args(np, "phandle-list-missing",
-					"#phandle-cells", 0, &args);
-	unittest(rc == -ENOENT, "expected:%i got:%i\n", -ENOENT, rc);
-	rc = of_count_phandle_with_args(np, "phandle-list-missing",
-					"#phandle-cells");
-	unittest(rc == -ENOENT, "expected:%i got:%i\n", -ENOENT, rc);
+	KUNIT_EXPECT_EQ(test,
+			of_parse_phandle_with_args(np,
+						   "phandle-list-missing",
+						   "#phandle-cells",
+						   0, &args),
+			-ENOENT);
+	KUNIT_EXPECT_EQ(test,
+			of_count_phandle_with_args(np,
+						   "phandle-list-missing",
+						   "#phandle-cells"),
+			-ENOENT);
 
 	/* Check for missing cells property */
-	rc = of_parse_phandle_with_args(np, "phandle-list",
-					"#phandle-cells-missing", 0, &args);
-	unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
-	rc = of_count_phandle_with_args(np, "phandle-list",
-					"#phandle-cells-missing");
-	unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
+	KUNIT_EXPECT_EQ(test,
+			of_parse_phandle_with_args(np,
+						   "phandle-list",
+						   "#phandle-cells-missing",
+						   0, &args),
+			-EINVAL);
+	KUNIT_EXPECT_EQ(test,
+			of_count_phandle_with_args(np,
+						   "phandle-list",
+						   "#phandle-cells-missing"),
+			-EINVAL);
 
 	/* Check for bad phandle in list */
-	rc = of_parse_phandle_with_args(np, "phandle-list-bad-phandle",
-					"#phandle-cells", 0, &args);
-	unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
-	rc = of_count_phandle_with_args(np, "phandle-list-bad-phandle",
-					"#phandle-cells");
-	unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
+	KUNIT_EXPECT_EQ(test,
+			of_parse_phandle_with_args(np,
+						   "phandle-list-bad-phandle",
+						   "#phandle-cells",
+						   0, &args),
+			-EINVAL);
+	KUNIT_EXPECT_EQ(test,
+			of_count_phandle_with_args(np,
+						   "phandle-list-bad-phandle",
+						   "#phandle-cells"),
+			-EINVAL);
 
 	/* Check for incorrectly formed argument list */
-	rc = of_parse_phandle_with_args(np, "phandle-list-bad-args",
-					"#phandle-cells", 1, &args);
-	unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
-	rc = of_count_phandle_with_args(np, "phandle-list-bad-args",
-					"#phandle-cells");
-	unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
+	KUNIT_EXPECT_EQ(test,
+			of_parse_phandle_with_args(np,
+						   "phandle-list-bad-args",
+						   "#phandle-cells",
+						   1, &args),
+			-EINVAL);
+	KUNIT_EXPECT_EQ(test,
+			of_count_phandle_with_args(np,
+						   "phandle-list-bad-args",
+						   "#phandle-cells"),
+			-EINVAL);
 }
 
-static void __init of_unittest_parse_phandle_with_args_map(void)
+static void of_unittest_parse_phandle_with_args_map(struct kunit *test)
 {
 	struct device_node *np, *p0, *p1, *p2, *p3;
 	struct of_phandle_args args;
 	int i, rc;
 
 	np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-b");
-	if (!np) {
-		pr_err("missing testcase data\n");
-		return;
-	}
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
 
 	p0 = of_find_node_by_path("/testcase-data/phandle-tests/provider0");
-	if (!p0) {
-		pr_err("missing testcase data\n");
-		return;
-	}
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, p0);
 
 	p1 = of_find_node_by_path("/testcase-data/phandle-tests/provider1");
-	if (!p1) {
-		pr_err("missing testcase data\n");
-		return;
-	}
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, p1);
 
 	p2 = of_find_node_by_path("/testcase-data/phandle-tests/provider2");
-	if (!p2) {
-		pr_err("missing testcase data\n");
-		return;
-	}
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, p2);
 
 	p3 = of_find_node_by_path("/testcase-data/phandle-tests/provider3");
-	if (!p3) {
-		pr_err("missing testcase data\n");
-		return;
-	}
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, p3);
 
-	rc = of_count_phandle_with_args(np, "phandle-list", "#phandle-cells");
-	unittest(rc == 7, "of_count_phandle_with_args() returned %i, expected 7\n", rc);
+	KUNIT_EXPECT_EQ(test,
+		       of_count_phandle_with_args(np,
+						  "phandle-list",
+						  "#phandle-cells"),
+		       7);
 
 	for (i = 0; i < 8; i++) {
 		bool passed = true;
@@ -554,117 +575,214 @@ static void __init of_unittest_parse_phandle_with_args_map(void)
 			passed = false;
 		}
 
-		unittest(passed, "index %i - data error on node %s rc=%i\n",
-			 i, args.np->full_name, rc);
+		KUNIT_EXPECT_TRUE_MSG(test, passed,
+				      "index %i - data error on node %s rc=%i\n",
+				      i, args.np->full_name, rc);
 	}
 
 	/* Check for missing list property */
-	rc = of_parse_phandle_with_args_map(np, "phandle-list-missing",
-					    "phandle", 0, &args);
-	unittest(rc == -ENOENT, "expected:%i got:%i\n", -ENOENT, rc);
+	KUNIT_EXPECT_EQ(test,
+			of_parse_phandle_with_args_map(np,
+						       "phandle-list-missing",
+						       "phandle",
+						       0, &args),
+			-ENOENT);
 
 	/* Check for missing cells,map,mask property */
-	rc = of_parse_phandle_with_args_map(np, "phandle-list",
-					    "phandle-missing", 0, &args);
-	unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
+	KUNIT_EXPECT_EQ(test,
+			of_parse_phandle_with_args_map(np,
+						       "phandle-list",
+						       "phandle-missing",
+						       0, &args),
+			-EINVAL);
 
 	/* Check for bad phandle in list */
-	rc = of_parse_phandle_with_args_map(np, "phandle-list-bad-phandle",
-					    "phandle", 0, &args);
-	unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
+	KUNIT_EXPECT_EQ(test,
+			of_parse_phandle_with_args_map(np,
+						       "phandle-list-bad-phandle",
+						       "phandle",
+						       0, &args),
+			-EINVAL);
 
 	/* Check for incorrectly formed argument list */
-	rc = of_parse_phandle_with_args_map(np, "phandle-list-bad-args",
-					    "phandle", 1, &args);
-	unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
+	KUNIT_EXPECT_EQ(test,
+			of_parse_phandle_with_args_map(np,
+						       "phandle-list-bad-args",
+						       "phandle",
+						       1, &args),
+			-EINVAL);
 }
 
-static void __init of_unittest_property_string(void)
+static void of_unittest_property_string(struct kunit *test)
 {
 	const char *strings[4];
 	struct device_node *np;
 	int rc;
 
 	np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
-	if (!np) {
-		pr_err("No testcase data in device tree\n");
-		return;
-	}
-
-	rc = of_property_match_string(np, "phandle-list-names", "first");
-	unittest(rc == 0, "first expected:0 got:%i\n", rc);
-	rc = of_property_match_string(np, "phandle-list-names", "second");
-	unittest(rc == 1, "second expected:1 got:%i\n", rc);
-	rc = of_property_match_string(np, "phandle-list-names", "third");
-	unittest(rc == 2, "third expected:2 got:%i\n", rc);
-	rc = of_property_match_string(np, "phandle-list-names", "fourth");
-	unittest(rc == -ENODATA, "unmatched string; rc=%i\n", rc);
-	rc = of_property_match_string(np, "missing-property", "blah");
-	unittest(rc == -EINVAL, "missing property; rc=%i\n", rc);
-	rc = of_property_match_string(np, "empty-property", "blah");
-	unittest(rc == -ENODATA, "empty property; rc=%i\n", rc);
-	rc = of_property_match_string(np, "unterminated-string", "blah");
-	unittest(rc == -EILSEQ, "unterminated string; rc=%i\n", rc);
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+
+	KUNIT_EXPECT_EQ(test,
+			of_property_match_string(np,
+						 "phandle-list-names",
+						 "first"),
+			0);
+	KUNIT_EXPECT_EQ(test,
+			of_property_match_string(np,
+						 "phandle-list-names",
+						 "second"),
+			1);
+	KUNIT_EXPECT_EQ(test,
+			of_property_match_string(np,
+						 "phandle-list-names",
+						 "third"),
+			2);
+	KUNIT_EXPECT_EQ_MSG(test,
+			    of_property_match_string(np,
+						     "phandle-list-names",
+						     "fourth"),
+			    -ENODATA,
+			    "unmatched string");
+	KUNIT_EXPECT_EQ_MSG(test,
+			    of_property_match_string(np,
+						     "missing-property",
+						     "blah"),
+			    -EINVAL,
+			    "missing property");
+	KUNIT_EXPECT_EQ_MSG(test,
+			    of_property_match_string(np,
+						     "empty-property",
+						     "blah"),
+			    -ENODATA,
+			    "empty property");
+	KUNIT_EXPECT_EQ_MSG(test,
+			    of_property_match_string(np,
+						     "unterminated-string",
+						     "blah"),
+			    -EILSEQ,
+			    "unterminated string");
 
 	/* of_property_count_strings() tests */
-	rc = of_property_count_strings(np, "string-property");
-	unittest(rc == 1, "Incorrect string count; rc=%i\n", rc);
-	rc = of_property_count_strings(np, "phandle-list-names");
-	unittest(rc == 3, "Incorrect string count; rc=%i\n", rc);
-	rc = of_property_count_strings(np, "unterminated-string");
-	unittest(rc == -EILSEQ, "unterminated string; rc=%i\n", rc);
-	rc = of_property_count_strings(np, "unterminated-string-list");
-	unittest(rc == -EILSEQ, "unterminated string array; rc=%i\n", rc);
+	KUNIT_EXPECT_EQ(test,
+			of_property_count_strings(np, "string-property"),
+			1);
+	KUNIT_EXPECT_EQ(test,
+			of_property_count_strings(np, "phandle-list-names"),
+			3);
+	KUNIT_EXPECT_EQ_MSG(test,
+			    of_property_count_strings(np,
+						      "unterminated-string"),
+			    -EILSEQ,
+			    "unterminated string");
+	KUNIT_EXPECT_EQ_MSG(test,
+			    of_property_count_strings(
+					    np,
+					    "unterminated-string-list"),
+			    -EILSEQ,
+			    "unterminated string array");
 
 	/* of_property_read_string_index() tests */
 	rc = of_property_read_string_index(np, "string-property", 0, strings);
-	unittest(rc == 0 && !strcmp(strings[0], "foobar"), "of_property_read_string_index() failure; rc=%i\n", rc);
+	KUNIT_ASSERT_EQ(test, rc, 0);
+	KUNIT_EXPECT_STREQ(test, strings[0], "foobar");
 	strings[0] = NULL;
 	rc = of_property_read_string_index(np, "string-property", 1, strings);
-	unittest(rc == -ENODATA && strings[0] == NULL, "of_property_read_string_index() failure; rc=%i\n", rc);
-	rc = of_property_read_string_index(np, "phandle-list-names", 0, strings);
-	unittest(rc == 0 && !strcmp(strings[0], "first"), "of_property_read_string_index() failure; rc=%i\n", rc);
-	rc = of_property_read_string_index(np, "phandle-list-names", 1, strings);
-	unittest(rc == 0 && !strcmp(strings[0], "second"), "of_property_read_string_index() failure; rc=%i\n", rc);
-	rc = of_property_read_string_index(np, "phandle-list-names", 2, strings);
-	unittest(rc == 0 && !strcmp(strings[0], "third"), "of_property_read_string_index() failure; rc=%i\n", rc);
+	KUNIT_EXPECT_EQ(test, rc, -ENODATA);
+	KUNIT_EXPECT_EQ(test, strings[0], NULL);
+	rc = of_property_read_string_index(np,
+					   "phandle-list-names",
+					   0,
+					   strings);
+	KUNIT_ASSERT_EQ(test, rc, 0);
+	KUNIT_EXPECT_STREQ(test, strings[0], "first");
+	rc = of_property_read_string_index(np,
+					   "phandle-list-names",
+					   1,
+					   strings);
+	KUNIT_ASSERT_EQ(test, rc, 0);
+	KUNIT_EXPECT_STREQ(test, strings[0], "second");
+	rc = of_property_read_string_index(np,
+					   "phandle-list-names",
+					   2,
+					   strings);
+	KUNIT_ASSERT_EQ(test, rc, 0);
+	KUNIT_EXPECT_STREQ(test, strings[0], "third");
 	strings[0] = NULL;
-	rc = of_property_read_string_index(np, "phandle-list-names", 3, strings);
-	unittest(rc == -ENODATA && strings[0] == NULL, "of_property_read_string_index() failure; rc=%i\n", rc);
+	rc = of_property_read_string_index(np,
+					   "phandle-list-names", 3, strings);
+	KUNIT_EXPECT_EQ(test, rc, -ENODATA);
+	KUNIT_EXPECT_EQ(test, strings[0], NULL);
 	strings[0] = NULL;
-	rc = of_property_read_string_index(np, "unterminated-string", 0, strings);
-	unittest(rc == -EILSEQ && strings[0] == NULL, "of_property_read_string_index() failure; rc=%i\n", rc);
-	rc = of_property_read_string_index(np, "unterminated-string-list", 0, strings);
-	unittest(rc == 0 && !strcmp(strings[0], "first"), "of_property_read_string_index() failure; rc=%i\n", rc);
+	rc = of_property_read_string_index(np,
+					   "unterminated-string",
+					   0,
+					   strings);
+	KUNIT_EXPECT_EQ(test, rc, -EILSEQ);
+	KUNIT_EXPECT_EQ(test, strings[0], NULL);
+	rc = of_property_read_string_index(np,
+					   "unterminated-string-list",
+					   0,
+					   strings);
+	KUNIT_ASSERT_EQ(test, rc, 0);
+	KUNIT_EXPECT_STREQ(test, strings[0], "first");
 	strings[0] = NULL;
-	rc = of_property_read_string_index(np, "unterminated-string-list", 2, strings); /* should fail */
-	unittest(rc == -EILSEQ && strings[0] == NULL, "of_property_read_string_index() failure; rc=%i\n", rc);
-	strings[1] = NULL;
+	rc = of_property_read_string_index(np,
+					   "unterminated-string-list",
+					   2,
+					   strings); /* should fail */
+	KUNIT_EXPECT_EQ(test, rc, -EILSEQ);
+	KUNIT_EXPECT_EQ(test, strings[0], NULL);
 
 	/* of_property_read_string_array() tests */
-	rc = of_property_read_string_array(np, "string-property", strings, 4);
-	unittest(rc == 1, "Incorrect string count; rc=%i\n", rc);
-	rc = of_property_read_string_array(np, "phandle-list-names", strings, 4);
-	unittest(rc == 3, "Incorrect string count; rc=%i\n", rc);
-	rc = of_property_read_string_array(np, "unterminated-string", strings, 4);
-	unittest(rc == -EILSEQ, "unterminated string; rc=%i\n", rc);
+	strings[1] = NULL;
+	KUNIT_EXPECT_EQ(test,
+			of_property_read_string_array(np,
+						      "string-property",
+						      strings, 4),
+			1);
+	KUNIT_EXPECT_EQ(test,
+			of_property_read_string_array(np,
+						      "phandle-list-names",
+						      strings, 4),
+			3);
+	KUNIT_EXPECT_EQ_MSG(test,
+			    of_property_read_string_array(np,
+							  "unterminated-string",
+							  strings, 4),
+			    -EILSEQ,
+			    "unterminated string");
 	/* -- An incorrectly formed string should cause a failure */
-	rc = of_property_read_string_array(np, "unterminated-string-list", strings, 4);
-	unittest(rc == -EILSEQ, "unterminated string array; rc=%i\n", rc);
+	KUNIT_EXPECT_EQ_MSG(test,
+			    of_property_read_string_array(
+					    np,
+					    "unterminated-string-list",
+					    strings,
+					    4),
+			    -EILSEQ,
+			    "unterminated string array");
 	/* -- parsing the correctly formed strings should still work: */
 	strings[2] = NULL;
-	rc = of_property_read_string_array(np, "unterminated-string-list", strings, 2);
-	unittest(rc == 2 && strings[2] == NULL, "of_property_read_string_array() failure; rc=%i\n", rc);
+	rc = of_property_read_string_array(np,
+					   "unterminated-string-list",
+					   strings,
+					   2);
+	KUNIT_EXPECT_EQ(test, rc, 2);
+	KUNIT_EXPECT_EQ(test, strings[2], NULL);
 	strings[1] = NULL;
-	rc = of_property_read_string_array(np, "phandle-list-names", strings, 1);
-	unittest(rc == 1 && strings[1] == NULL, "Overwrote end of string array; rc=%i, str='%s'\n", rc, strings[1]);
+	rc = of_property_read_string_array(np,
+					   "phandle-list-names",
+					   strings,
+					   1);
+	KUNIT_ASSERT_EQ(test, rc, 1);
+	KUNIT_EXPECT_EQ_MSG(test, strings[1], NULL,
+			    "Overwrote end of string array");
 }
 
 #define propcmp(p1, p2) (((p1)->length == (p2)->length) && \
 			(p1)->value && (p2)->value && \
 			!memcmp((p1)->value, (p2)->value, (p1)->length) && \
 			!strcmp((p1)->name, (p2)->name))
-static void __init of_unittest_property_copy(void)
+static void of_unittest_property_copy(struct kunit *test)
 {
 #ifdef CONFIG_OF_DYNAMIC
 	struct property p1 = { .name = "p1", .length = 0, .value = "" };
@@ -672,20 +790,24 @@ static void __init of_unittest_property_copy(void)
 	struct property *new;
 
 	new = __of_prop_dup(&p1, GFP_KERNEL);
-	unittest(new && propcmp(&p1, new), "empty property didn't copy correctly\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, new);
+	KUNIT_EXPECT_TRUE_MSG(test, propcmp(&p1, new),
+			      "empty property didn't copy correctly");
 	kfree(new->value);
 	kfree(new->name);
 	kfree(new);
 
 	new = __of_prop_dup(&p2, GFP_KERNEL);
-	unittest(new && propcmp(&p2, new), "non-empty property didn't copy correctly\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, new);
+	KUNIT_EXPECT_TRUE_MSG(test, propcmp(&p2, new),
+			      "non-empty property didn't copy correctly");
 	kfree(new->value);
 	kfree(new->name);
 	kfree(new);
 #endif
 }
 
-static void __init of_unittest_changeset(void)
+static void of_unittest_changeset(struct kunit *test)
 {
 #ifdef CONFIG_OF_DYNAMIC
 	struct property *ppadd, padd = { .name = "prop-add", .length = 1, .value = "" };
@@ -698,32 +820,32 @@ static void __init of_unittest_changeset(void)
 	struct of_changeset chgset;
 
 	n1 = __of_node_dup(NULL, "n1");
-	unittest(n1, "testcase setup failure\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, n1);
 
 	n2 = __of_node_dup(NULL, "n2");
-	unittest(n2, "testcase setup failure\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, n2);
 
 	n21 = __of_node_dup(NULL, "n21");
-	unittest(n21, "testcase setup failure %p\n", n21);
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, n21);
 
 	nchangeset = of_find_node_by_path("/testcase-data/changeset");
 	nremove = of_get_child_by_name(nchangeset, "node-remove");
-	unittest(nremove, "testcase setup failure\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, nremove);
 
 	ppadd = __of_prop_dup(&padd, GFP_KERNEL);
-	unittest(ppadd, "testcase setup failure\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ppadd);
 
 	ppname_n1  = __of_prop_dup(&pname_n1, GFP_KERNEL);
-	unittest(ppname_n1, "testcase setup failure\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ppname_n1);
 
 	ppname_n2  = __of_prop_dup(&pname_n2, GFP_KERNEL);
-	unittest(ppname_n2, "testcase setup failure\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ppname_n2);
 
 	ppname_n21 = __of_prop_dup(&pname_n21, GFP_KERNEL);
-	unittest(ppname_n21, "testcase setup failure\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ppname_n21);
 
 	ppupdate = __of_prop_dup(&pupdate, GFP_KERNEL);
-	unittest(ppupdate, "testcase setup failure\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ppupdate);
 
 	parent = nchangeset;
 	n1->parent = parent;
@@ -731,54 +853,82 @@ static void __init of_unittest_changeset(void)
 	n21->parent = n2;
 
 	ppremove = of_find_property(parent, "prop-remove", NULL);
-	unittest(ppremove, "failed to find removal prop");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ppremove);
 
 	of_changeset_init(&chgset);
 
-	unittest(!of_changeset_attach_node(&chgset, n1), "fail attach n1\n");
-	unittest(!of_changeset_add_property(&chgset, n1, ppname_n1), "fail add prop name\n");
-
-	unittest(!of_changeset_attach_node(&chgset, n2), "fail attach n2\n");
-	unittest(!of_changeset_add_property(&chgset, n2, ppname_n2), "fail add prop name\n");
-
-	unittest(!of_changeset_detach_node(&chgset, nremove), "fail remove node\n");
-	unittest(!of_changeset_add_property(&chgset, n21, ppname_n21), "fail add prop name\n");
-
-	unittest(!of_changeset_attach_node(&chgset, n21), "fail attach n21\n");
-
-	unittest(!of_changeset_add_property(&chgset, parent, ppadd), "fail add prop prop-add\n");
-	unittest(!of_changeset_update_property(&chgset, parent, ppupdate), "fail update prop\n");
-	unittest(!of_changeset_remove_property(&chgset, parent, ppremove), "fail remove prop\n");
-
-	unittest(!of_changeset_apply(&chgset), "apply failed\n");
+	KUNIT_EXPECT_FALSE_MSG(test, of_changeset_attach_node(&chgset, n1),
+			       "fail attach n1\n");
+	KUNIT_EXPECT_FALSE_MSG(test,
+			       of_changeset_add_property(&chgset,
+							 n1,
+							 ppname_n1),
+			       "fail add prop name\n");
+
+	KUNIT_EXPECT_FALSE_MSG(test, of_changeset_attach_node(&chgset, n2),
+			       "fail attach n2\n");
+	KUNIT_EXPECT_FALSE_MSG(test,
+			       of_changeset_add_property(&chgset,
+							 n2,
+							 ppname_n2),
+			       "fail add prop name\n");
+
+	KUNIT_EXPECT_FALSE_MSG(test, of_changeset_detach_node(&chgset, nremove),
+			       "fail remove node\n");
+	KUNIT_EXPECT_FALSE_MSG(test,
+			       of_changeset_add_property(&chgset,
+							 n21,
+							 ppname_n21),
+			       "fail add prop name\n");
+
+	KUNIT_EXPECT_FALSE_MSG(test, of_changeset_attach_node(&chgset, n21),
+			       "fail attach n21\n");
+
+	KUNIT_EXPECT_FALSE_MSG(test,
+			       of_changeset_add_property(&chgset,
+							 parent,
+							 ppadd),
+			       "fail add prop prop-add\n");
+	KUNIT_EXPECT_FALSE_MSG(test,
+			       of_changeset_update_property(&chgset,
+							    parent,
+							    ppupdate),
+			       "fail update prop\n");
+	KUNIT_EXPECT_FALSE_MSG(test,
+			       of_changeset_remove_property(&chgset,
+							    parent,
+							    ppremove),
+			       "fail remove prop\n");
+
+	KUNIT_EXPECT_FALSE_MSG(test, of_changeset_apply(&chgset),
+			       "apply failed\n");
 
 	of_node_put(nchangeset);
 
 	/* Make sure node names are constructed correctly */
-	unittest((np = of_find_node_by_path("/testcase-data/changeset/n2/n21")),
-		 "'%pOF' not added\n", n21);
+	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(
+			test,
+			np = of_find_node_by_path(
+					"/testcase-data/changeset/n2/n21"),
+			"'%pOF' not added\n", n21);
 	of_node_put(np);
 
-	unittest(!of_changeset_revert(&chgset), "revert failed\n");
+	KUNIT_EXPECT_FALSE(test, of_changeset_revert(&chgset));
 
 	of_changeset_destroy(&chgset);
 #endif
 }
 
-static void __init of_unittest_parse_interrupts(void)
+static void of_unittest_parse_interrupts(struct kunit *test)
 {
 	struct device_node *np;
 	struct of_phandle_args args;
 	int i, rc;
 
-	if (of_irq_workarounds & OF_IMAP_OLDWORLD_MAC)
-		return;
+	KUNIT_ASSERT_FALSE(test, of_irq_workarounds & OF_IMAP_OLDWORLD_MAC);
 
 	np = of_find_node_by_path("/testcase-data/interrupts/interrupts0");
-	if (!np) {
-		pr_err("missing testcase data\n");
-		return;
-	}
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
 
 	for (i = 0; i < 4; i++) {
 		bool passed = true;
@@ -790,16 +940,14 @@ static void __init of_unittest_parse_interrupts(void)
 		passed &= (args.args_count == 1);
 		passed &= (args.args[0] == (i + 1));
 
-		unittest(passed, "index %i - data error on node %pOF rc=%i\n",
-			 i, args.np, rc);
+		KUNIT_EXPECT_TRUE_MSG(test, passed,
+				      "index %i - data error on node %pOF rc=%i\n",
+				      i, args.np, rc);
 	}
 	of_node_put(np);
 
 	np = of_find_node_by_path("/testcase-data/interrupts/interrupts1");
-	if (!np) {
-		pr_err("missing testcase data\n");
-		return;
-	}
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
 
 	for (i = 0; i < 4; i++) {
 		bool passed = true;
@@ -836,26 +984,23 @@ static void __init of_unittest_parse_interrupts(void)
 		default:
 			passed = false;
 		}
-		unittest(passed, "index %i - data error on node %pOF rc=%i\n",
-			 i, args.np, rc);
+		KUNIT_EXPECT_TRUE_MSG(test, passed,
+				     "index %i - data error on node %pOF rc=%i\n",
+				     i, args.np, rc);
 	}
 	of_node_put(np);
 }
 
-static void __init of_unittest_parse_interrupts_extended(void)
+static void of_unittest_parse_interrupts_extended(struct kunit *test)
 {
 	struct device_node *np;
 	struct of_phandle_args args;
 	int i, rc;
 
-	if (of_irq_workarounds & OF_IMAP_OLDWORLD_MAC)
-		return;
+	KUNIT_ASSERT_FALSE(test, of_irq_workarounds & OF_IMAP_OLDWORLD_MAC);
 
 	np = of_find_node_by_path("/testcase-data/interrupts/interrupts-extended0");
-	if (!np) {
-		pr_err("missing testcase data\n");
-		return;
-	}
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
 
 	for (i = 0; i < 7; i++) {
 		bool passed = true;
@@ -909,8 +1054,9 @@ static void __init of_unittest_parse_interrupts_extended(void)
 			passed = false;
 		}
 
-		unittest(passed, "index %i - data error on node %pOF rc=%i\n",
-			 i, args.np, rc);
+		KUNIT_EXPECT_TRUE_MSG(test, passed,
+				      "index %i - data error on node %pOF rc=%i\n",
+				      i, args.np, rc);
 	}
 	of_node_put(np);
 }
@@ -950,7 +1096,7 @@ static struct {
 	{ .path = "/testcase-data/match-node/name9", .data = "K", },
 };
 
-static void __init of_unittest_match_node(void)
+static void of_unittest_match_node(struct kunit *test)
 {
 	struct device_node *np;
 	const struct of_device_id *match;
@@ -958,26 +1104,19 @@ static void __init of_unittest_match_node(void)
 
 	for (i = 0; i < ARRAY_SIZE(match_node_tests); i++) {
 		np = of_find_node_by_path(match_node_tests[i].path);
-		if (!np) {
-			unittest(0, "missing testcase node %s\n",
-				match_node_tests[i].path);
-			continue;
-		}
+		KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
 
 		match = of_match_node(match_node_table, np);
-		if (!match) {
-			unittest(0, "%s didn't match anything\n",
-				match_node_tests[i].path);
-			continue;
-		}
+		KUNIT_ASSERT_NOT_ERR_OR_NULL_MSG(test, match,
+						 "%s didn't match anything",
+						 match_node_tests[i].path);
 
-		if (strcmp(match->data, match_node_tests[i].data) != 0) {
-			unittest(0, "%s got wrong match. expected %s, got %s\n",
-				match_node_tests[i].path, match_node_tests[i].data,
-				(const char *)match->data);
-			continue;
-		}
-		unittest(1, "passed");
+		KUNIT_EXPECT_STREQ_MSG(test,
+				       match->data, match_node_tests[i].data,
+				       "%s got wrong match. expected %s, got %s\n",
+				       match_node_tests[i].path,
+				       match_node_tests[i].data,
+				       (const char *)match->data);
 	}
 }
 
@@ -989,9 +1128,9 @@ static struct resource test_bus_res = {
 static const struct platform_device_info test_bus_info = {
 	.name = "unittest-bus",
 };
-static void __init of_unittest_platform_populate(void)
+static void of_unittest_platform_populate(struct kunit *test)
 {
-	int irq, rc;
+	int irq;
 	struct device_node *np, *child, *grandchild;
 	struct platform_device *pdev, *test_bus;
 	const struct of_device_id match[] = {
@@ -1005,32 +1144,27 @@ static void __init of_unittest_platform_populate(void)
 	/* Test that a missing irq domain returns -EPROBE_DEFER */
 	np = of_find_node_by_path("/testcase-data/testcase-device1");
 	pdev = of_find_device_by_node(np);
-	unittest(pdev, "device 1 creation failed\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, pdev);
 
 	if (!(of_irq_workarounds & OF_IMAP_OLDWORLD_MAC)) {
 		irq = platform_get_irq(pdev, 0);
-		unittest(irq == -EPROBE_DEFER,
-			 "device deferred probe failed - %d\n", irq);
+		KUNIT_ASSERT_EQ(test, irq, -EPROBE_DEFER);
 
 		/* Test that a parsing failure does not return -EPROBE_DEFER */
 		np = of_find_node_by_path("/testcase-data/testcase-device2");
 		pdev = of_find_device_by_node(np);
-		unittest(pdev, "device 2 creation failed\n");
+		KUNIT_ASSERT_NOT_ERR_OR_NULL(test, pdev);
 		irq = platform_get_irq(pdev, 0);
-		unittest(irq < 0 && irq != -EPROBE_DEFER,
-			 "device parsing error failed - %d\n", irq);
+		KUNIT_ASSERT_TRUE_MSG(test, irq < 0 && irq != -EPROBE_DEFER,
+				      "device parsing error failed - %d\n",
+				      irq);
 	}
 
 	np = of_find_node_by_path("/testcase-data/platform-tests");
-	unittest(np, "No testcase data in device tree\n");
-	if (!np)
-		return;
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
 
 	test_bus = platform_device_register_full(&test_bus_info);
-	rc = PTR_ERR_OR_ZERO(test_bus);
-	unittest(!rc, "testbus registration failed; rc=%i\n", rc);
-	if (rc)
-		return;
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, test_bus);
 	test_bus->dev.of_node = np;
 
 	/*
@@ -1045,17 +1179,21 @@ static void __init of_unittest_platform_populate(void)
 	of_platform_populate(np, match, NULL, &test_bus->dev);
 	for_each_child_of_node(np, child) {
 		for_each_child_of_node(child, grandchild)
-			unittest(of_find_device_by_node(grandchild),
-				 "Could not create device for node '%s'\n",
-				 grandchild->name);
+			KUNIT_EXPECT_TRUE_MSG(test,
+					      of_find_device_by_node(
+							      grandchild),
+					      "Could not create device for node '%s'\n",
+					      grandchild->name);
 	}
 
 	of_platform_depopulate(&test_bus->dev);
 	for_each_child_of_node(np, child) {
 		for_each_child_of_node(child, grandchild)
-			unittest(!of_find_device_by_node(grandchild),
-				 "device didn't get destroyed '%s'\n",
-				 grandchild->name);
+			KUNIT_EXPECT_FALSE_MSG(test,
+					       of_find_device_by_node(
+							       grandchild),
+					       "device didn't get destroyed '%s'\n",
+					       grandchild->name);
 	}
 
 	platform_device_unregister(test_bus);
@@ -1129,7 +1267,7 @@ static int attach_node_and_children(struct device_node *np)
  *	unittest_data_add - Reads, copies data from
  *	linked tree and attaches it to the live tree
  */
-static int __init unittest_data_add(void)
+static int unittest_data_add(void)
 {
 	void *unittest_data;
 	struct device_node *unittest_data_node, *np;
@@ -1200,7 +1338,7 @@ static int __init unittest_data_add(void)
 }
 
 #ifdef CONFIG_OF_OVERLAY
-static int __init overlay_data_apply(const char *overlay_name, int *overlay_id);
+static int overlay_data_apply(const char *overlay_name, int *overlay_id);
 
 static int unittest_probe(struct platform_device *pdev)
 {
@@ -1429,173 +1567,157 @@ static void of_unittest_destroy_tracked_overlays(void)
 	} while (defers > 0);
 }
 
-static int __init of_unittest_apply_overlay(int overlay_nr, int unittest_nr,
-		int *overlay_id)
+static int of_unittest_apply_overlay(struct kunit *test,
+				     int overlay_nr,
+				     int unittest_nr,
+				     int *overlay_id)
 {
 	const char *overlay_name;
 
 	overlay_name = overlay_name_from_nr(overlay_nr);
 
-	if (!overlay_data_apply(overlay_name, overlay_id)) {
-		unittest(0, "could not apply overlay \"%s\"\n",
-				overlay_name);
-		return -EFAULT;
-	}
+	KUNIT_ASSERT_TRUE_MSG(test,
+			      overlay_data_apply(overlay_name, overlay_id),
+			      "could not apply overlay \"%s\"\n",
+			      overlay_name);
 	of_unittest_track_overlay(*overlay_id);
 
 	return 0;
 }
 
 /* apply an overlay while checking before and after states */
-static int __init of_unittest_apply_overlay_check(int overlay_nr,
+static int of_unittest_apply_overlay_check(struct kunit *test, int overlay_nr,
 		int unittest_nr, int before, int after,
 		enum overlay_type ovtype)
 {
-	int ret, ovcs_id;
+	int ovcs_id;
 
 	/* unittest device must be in before state */
-	if (of_unittest_device_exists(unittest_nr, ovtype) != before) {
-		unittest(0, "%s with device @\"%s\" %s\n",
-				overlay_name_from_nr(overlay_nr),
-				unittest_path(unittest_nr, ovtype),
-				!before ? "enabled" : "disabled");
-		return -EINVAL;
-	}
+	KUNIT_ASSERT_EQ_MSG(test,
+			    of_unittest_device_exists(unittest_nr, ovtype),
+			    before,
+			    "%s with device @\"%s\" %s\n",
+			    overlay_name_from_nr(overlay_nr),
+			    unittest_path(unittest_nr, ovtype),
+			    !before ? "enabled" : "disabled");
 
 	ovcs_id = 0;
-	ret = of_unittest_apply_overlay(overlay_nr, unittest_nr, &ovcs_id);
-	if (ret != 0) {
-		/* of_unittest_apply_overlay already called unittest() */
-		return ret;
-	}
+	KUNIT_EXPECT_EQ(test,
+			of_unittest_apply_overlay(test,
+						  overlay_nr,
+						  unittest_nr,
+						  &ovcs_id),
+			0);
 
 	/* unittest device must be set to the after state */
-	if (of_unittest_device_exists(unittest_nr, ovtype) != after) {
-		unittest(0, "%s failed to create @\"%s\" %s\n",
-				overlay_name_from_nr(overlay_nr),
-				unittest_path(unittest_nr, ovtype),
-				!after ? "enabled" : "disabled");
-		return -EINVAL;
-	}
+	KUNIT_ASSERT_EQ_MSG(test,
+			    of_unittest_device_exists(unittest_nr, ovtype),
+			    after,
+			    "%s failed to create @\"%s\" %s\n",
+			    overlay_name_from_nr(overlay_nr),
+			    unittest_path(unittest_nr, ovtype),
+			    !after ? "enabled" : "disabled");
 
 	return 0;
 }
 
 /* apply an overlay and then revert it while checking before, after states */
-static int __init of_unittest_apply_revert_overlay_check(int overlay_nr,
+static int of_unittest_apply_revert_overlay_check(
+		struct kunit *test, int overlay_nr,
 		int unittest_nr, int before, int after,
 		enum overlay_type ovtype)
 {
-	int ret, ovcs_id;
+	int ovcs_id;
 
 	/* unittest device must be in before state */
-	if (of_unittest_device_exists(unittest_nr, ovtype) != before) {
-		unittest(0, "%s with device @\"%s\" %s\n",
-				overlay_name_from_nr(overlay_nr),
-				unittest_path(unittest_nr, ovtype),
-				!before ? "enabled" : "disabled");
-		return -EINVAL;
-	}
+	KUNIT_ASSERT_EQ_MSG(test,
+			    of_unittest_device_exists(unittest_nr, ovtype),
+			    before,
+			    "%s with device @\"%s\" %s\n",
+			    overlay_name_from_nr(overlay_nr),
+			    unittest_path(unittest_nr, ovtype),
+			    !before ? "enabled" : "disabled");
 
 	/* apply the overlay */
 	ovcs_id = 0;
-	ret = of_unittest_apply_overlay(overlay_nr, unittest_nr, &ovcs_id);
-	if (ret != 0) {
-		/* of_unittest_apply_overlay already called unittest() */
-		return ret;
-	}
+	KUNIT_ASSERT_EQ(test,
+			of_unittest_apply_overlay(test,
+						  overlay_nr,
+						  unittest_nr,
+						  &ovcs_id),
+			0);
 
 	/* unittest device must be in after state */
-	if (of_unittest_device_exists(unittest_nr, ovtype) != after) {
-		unittest(0, "%s failed to create @\"%s\" %s\n",
-				overlay_name_from_nr(overlay_nr),
-				unittest_path(unittest_nr, ovtype),
-				!after ? "enabled" : "disabled");
-		return -EINVAL;
-	}
-
-	ret = of_overlay_remove(&ovcs_id);
-	if (ret != 0) {
-		unittest(0, "%s failed to be destroyed @\"%s\"\n",
-				overlay_name_from_nr(overlay_nr),
-				unittest_path(unittest_nr, ovtype));
-		return ret;
-	}
+	KUNIT_ASSERT_EQ_MSG(test,
+			    of_unittest_device_exists(unittest_nr, ovtype),
+			    after,
+			    "%s failed to create @\"%s\" %s\n",
+			    overlay_name_from_nr(overlay_nr),
+			    unittest_path(unittest_nr, ovtype),
+			    !after ? "enabled" : "disabled");
+
+	KUNIT_ASSERT_EQ_MSG(test, of_overlay_remove(&ovcs_id), 0,
+			    "%s failed to be destroyed @\"%s\"\n",
+			    overlay_name_from_nr(overlay_nr),
+			    unittest_path(unittest_nr, ovtype));
 
 	/* unittest device must be again in before state */
-	if (of_unittest_device_exists(unittest_nr, PDEV_OVERLAY) != before) {
-		unittest(0, "%s with device @\"%s\" %s\n",
-				overlay_name_from_nr(overlay_nr),
-				unittest_path(unittest_nr, ovtype),
-				!before ? "enabled" : "disabled");
-		return -EINVAL;
-	}
+	KUNIT_ASSERT_EQ_MSG(test,
+			    of_unittest_device_exists(unittest_nr,
+						      PDEV_OVERLAY),
+			    before,
+			    "%s with device @\"%s\" %s\n",
+			    overlay_name_from_nr(overlay_nr),
+			    unittest_path(unittest_nr, ovtype),
+			    !before ? "enabled" : "disabled");
 
 	return 0;
 }
 
 /* test activation of device */
-static void __init of_unittest_overlay_0(void)
+static void of_unittest_overlay_0(struct kunit *test)
 {
 	/* device should enable */
-	if (of_unittest_apply_overlay_check(0, 0, 0, 1, PDEV_OVERLAY))
-		return;
-
-	unittest(1, "overlay test %d passed\n", 0);
+	of_unittest_apply_overlay_check(test, 0, 0, 0, 1, PDEV_OVERLAY);
 }
 
 /* test deactivation of device */
-static void __init of_unittest_overlay_1(void)
+static void of_unittest_overlay_1(struct kunit *test)
 {
 	/* device should disable */
-	if (of_unittest_apply_overlay_check(1, 1, 1, 0, PDEV_OVERLAY))
-		return;
-
-	unittest(1, "overlay test %d passed\n", 1);
+	of_unittest_apply_overlay_check(test, 1, 1, 1, 0, PDEV_OVERLAY);
 }
 
 /* test activation of device */
-static void __init of_unittest_overlay_2(void)
+static void of_unittest_overlay_2(struct kunit *test)
 {
 	/* device should enable */
-	if (of_unittest_apply_overlay_check(2, 2, 0, 1, PDEV_OVERLAY))
-		return;
-
-	unittest(1, "overlay test %d passed\n", 2);
+	of_unittest_apply_overlay_check(test, 2, 2, 0, 1, PDEV_OVERLAY);
 }
 
 /* test deactivation of device */
-static void __init of_unittest_overlay_3(void)
+static void of_unittest_overlay_3(struct kunit *test)
 {
 	/* device should disable */
-	if (of_unittest_apply_overlay_check(3, 3, 1, 0, PDEV_OVERLAY))
-		return;
-
-	unittest(1, "overlay test %d passed\n", 3);
+	of_unittest_apply_overlay_check(test, 3, 3, 1, 0, PDEV_OVERLAY);
 }
 
 /* test activation of a full device node */
-static void __init of_unittest_overlay_4(void)
+static void of_unittest_overlay_4(struct kunit *test)
 {
 	/* device should disable */
-	if (of_unittest_apply_overlay_check(4, 4, 0, 1, PDEV_OVERLAY))
-		return;
-
-	unittest(1, "overlay test %d passed\n", 4);
+	of_unittest_apply_overlay_check(test, 4, 4, 0, 1, PDEV_OVERLAY);
 }
 
 /* test overlay apply/revert sequence */
-static void __init of_unittest_overlay_5(void)
+static void of_unittest_overlay_5(struct kunit *test)
 {
 	/* device should disable */
-	if (of_unittest_apply_revert_overlay_check(5, 5, 0, 1, PDEV_OVERLAY))
-		return;
-
-	unittest(1, "overlay test %d passed\n", 5);
+	of_unittest_apply_revert_overlay_check(test, 5, 5, 0, 1, PDEV_OVERLAY);
 }
 
 /* test overlay application in sequence */
-static void __init of_unittest_overlay_6(void)
+static void of_unittest_overlay_6(struct kunit *test)
 {
 	int i, ov_id[2], ovcs_id;
 	int overlay_nr = 6, unittest_nr = 6;
@@ -1604,74 +1726,69 @@ static void __init of_unittest_overlay_6(void)
 
 	/* unittest device must be in before state */
 	for (i = 0; i < 2; i++) {
-		if (of_unittest_device_exists(unittest_nr + i, PDEV_OVERLAY)
-				!= before) {
-			unittest(0, "%s with device @\"%s\" %s\n",
-					overlay_name_from_nr(overlay_nr + i),
-					unittest_path(unittest_nr + i,
-						PDEV_OVERLAY),
-					!before ? "enabled" : "disabled");
-			return;
-		}
+		KUNIT_ASSERT_EQ_MSG(test,
+				    of_unittest_device_exists(unittest_nr + i,
+							      PDEV_OVERLAY),
+				    before,
+				    "%s with device @\"%s\" %s\n",
+				    overlay_name_from_nr(overlay_nr + i),
+				    unittest_path(unittest_nr + i,
+						  PDEV_OVERLAY),
+				    !before ? "enabled" : "disabled");
 	}
 
 	/* apply the overlays */
 	for (i = 0; i < 2; i++) {
-
 		overlay_name = overlay_name_from_nr(overlay_nr + i);
 
-		if (!overlay_data_apply(overlay_name, &ovcs_id)) {
-			unittest(0, "could not apply overlay \"%s\"\n",
-					overlay_name);
-			return;
-		}
+		KUNIT_ASSERT_TRUE_MSG(test,
+				      overlay_data_apply(overlay_name,
+							 &ovcs_id),
+				      "could not apply overlay \"%s\"\n",
+				      overlay_name);
 		ov_id[i] = ovcs_id;
 		of_unittest_track_overlay(ov_id[i]);
 	}
 
 	for (i = 0; i < 2; i++) {
 		/* unittest device must be in after state */
-		if (of_unittest_device_exists(unittest_nr + i, PDEV_OVERLAY)
-				!= after) {
-			unittest(0, "overlay @\"%s\" failed @\"%s\" %s\n",
-					overlay_name_from_nr(overlay_nr + i),
-					unittest_path(unittest_nr + i,
-						PDEV_OVERLAY),
-					!after ? "enabled" : "disabled");
-			return;
-		}
+		KUNIT_ASSERT_EQ_MSG(test,
+				    of_unittest_device_exists(unittest_nr + i,
+							      PDEV_OVERLAY),
+				    after,
+				    "overlay @\"%s\" failed @\"%s\" %s\n",
+				    overlay_name_from_nr(overlay_nr + i),
+				    unittest_path(unittest_nr + i,
+						  PDEV_OVERLAY),
+				    !after ? "enabled" : "disabled");
 	}
 
 	for (i = 1; i >= 0; i--) {
 		ovcs_id = ov_id[i];
-		if (of_overlay_remove(&ovcs_id)) {
-			unittest(0, "%s failed destroy @\"%s\"\n",
-					overlay_name_from_nr(overlay_nr + i),
-					unittest_path(unittest_nr + i,
-						PDEV_OVERLAY));
-			return;
-		}
+		KUNIT_ASSERT_FALSE_MSG(test, of_overlay_remove(&ovcs_id),
+				       "%s failed destroy @\"%s\"\n",
+				       overlay_name_from_nr(overlay_nr + i),
+				       unittest_path(unittest_nr + i,
+						     PDEV_OVERLAY));
 		of_unittest_untrack_overlay(ov_id[i]);
 	}
 
 	for (i = 0; i < 2; i++) {
 		/* unittest device must be again in before state */
-		if (of_unittest_device_exists(unittest_nr + i, PDEV_OVERLAY)
-				!= before) {
-			unittest(0, "%s with device @\"%s\" %s\n",
-					overlay_name_from_nr(overlay_nr + i),
-					unittest_path(unittest_nr + i,
-						PDEV_OVERLAY),
-					!before ? "enabled" : "disabled");
-			return;
-		}
+		KUNIT_ASSERT_EQ_MSG(test,
+				    of_unittest_device_exists(unittest_nr + i,
+							      PDEV_OVERLAY),
+				    before,
+				    "%s with device @\"%s\" %s\n",
+				    overlay_name_from_nr(overlay_nr + i),
+				    unittest_path(unittest_nr + i,
+						  PDEV_OVERLAY),
+				    !before ? "enabled" : "disabled");
 	}
-
-	unittest(1, "overlay test %d passed\n", 6);
 }
 
 /* test overlay application in sequence */
-static void __init of_unittest_overlay_8(void)
+static void of_unittest_overlay_8(struct kunit *test)
 {
 	int i, ov_id[2], ovcs_id;
 	int overlay_nr = 8, unittest_nr = 8;
@@ -1681,76 +1798,73 @@ static void __init of_unittest_overlay_8(void)
 
 	/* apply the overlays */
 	for (i = 0; i < 2; i++) {
-
 		overlay_name = overlay_name_from_nr(overlay_nr + i);
 
-		if (!overlay_data_apply(overlay_name, &ovcs_id)) {
-			unittest(0, "could not apply overlay \"%s\"\n",
-					overlay_name);
-			return;
-		}
+		KUNIT_ASSERT_TRUE_MSG(test,
+				      overlay_data_apply(overlay_name,
+							 &ovcs_id),
+				      "could not apply overlay \"%s\"\n",
+				      overlay_name);
 		ov_id[i] = ovcs_id;
 		of_unittest_track_overlay(ov_id[i]);
 	}
 
 	/* now try to remove first overlay (it should fail) */
 	ovcs_id = ov_id[0];
-	if (!of_overlay_remove(&ovcs_id)) {
-		unittest(0, "%s was destroyed @\"%s\"\n",
-				overlay_name_from_nr(overlay_nr + 0),
-				unittest_path(unittest_nr,
-					PDEV_OVERLAY));
-		return;
-	}
+	KUNIT_ASSERT_TRUE_MSG(test, of_overlay_remove(&ovcs_id),
+			      "%s was destroyed @\"%s\"\n",
+			      overlay_name_from_nr(overlay_nr + 0),
+			      unittest_path(unittest_nr,
+					    PDEV_OVERLAY));
 
 	/* removing them in order should work */
 	for (i = 1; i >= 0; i--) {
 		ovcs_id = ov_id[i];
-		if (of_overlay_remove(&ovcs_id)) {
-			unittest(0, "%s not destroyed @\"%s\"\n",
-					overlay_name_from_nr(overlay_nr + i),
-					unittest_path(unittest_nr,
-						PDEV_OVERLAY));
-			return;
-		}
+		KUNIT_ASSERT_FALSE_MSG(test, of_overlay_remove(&ovcs_id),
+				       "%s not destroyed @\"%s\"\n",
+				       overlay_name_from_nr(overlay_nr + i),
+				       unittest_path(unittest_nr,
+						     PDEV_OVERLAY));
 		of_unittest_untrack_overlay(ov_id[i]);
 	}
-
-	unittest(1, "overlay test %d passed\n", 8);
 }
 
 /* test insertion of a bus with parent devices */
-static void __init of_unittest_overlay_10(void)
+static void of_unittest_overlay_10(struct kunit *test)
 {
-	int ret;
 	char *child_path;
 
 	/* device should disable */
-	ret = of_unittest_apply_overlay_check(10, 10, 0, 1, PDEV_OVERLAY);
-	if (unittest(ret == 0,
-			"overlay test %d failed; overlay application\n", 10))
-		return;
+	KUNIT_ASSERT_EQ_MSG(test,
+			    of_unittest_apply_overlay_check(test,
+							    10,
+							    10,
+							    0,
+							    1,
+							    PDEV_OVERLAY),
+			    0,
+			    "overlay test %d failed; overlay application\n",
+			    10);
 
 	child_path = kasprintf(GFP_KERNEL, "%s/test-unittest101",
 			unittest_path(10, PDEV_OVERLAY));
-	if (unittest(child_path, "overlay test %d failed; kasprintf\n", 10))
-		return;
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, child_path);
 
-	ret = of_path_device_type_exists(child_path, PDEV_OVERLAY);
+	KUNIT_EXPECT_TRUE_MSG(test,
+			      of_path_device_type_exists(child_path,
+							 PDEV_OVERLAY),
+			      "overlay test %d failed; no child device\n", 10);
 	kfree(child_path);
-
-	unittest(ret, "overlay test %d failed; no child device\n", 10);
 }
 
 /* test insertion of a bus with parent devices (and revert) */
-static void __init of_unittest_overlay_11(void)
+static void of_unittest_overlay_11(struct kunit *test)
 {
-	int ret;
-
 	/* device should disable */
-	ret = of_unittest_apply_revert_overlay_check(11, 11, 0, 1,
-			PDEV_OVERLAY);
-	unittest(ret == 0, "overlay test %d failed; overlay apply\n", 11);
+	KUNIT_EXPECT_FALSE(test,
+			   of_unittest_apply_revert_overlay_check(test,
+								  11, 11, 0, 1,
+								  PDEV_OVERLAY));
 }
 
 #if IS_BUILTIN(CONFIG_I2C) && IS_ENABLED(CONFIG_OF_OVERLAY)
@@ -1972,25 +2086,23 @@ static struct i2c_driver unittest_i2c_mux_driver = {
 
 #endif
 
-static int of_unittest_overlay_i2c_init(void)
+static int of_unittest_overlay_i2c_init(struct kunit *test)
 {
-	int ret;
+	KUNIT_ASSERT_EQ_MSG(test,
+			    i2c_add_driver(&unittest_i2c_dev_driver),
+			    0,
+			    "could not register unittest i2c device driver\n");
 
-	ret = i2c_add_driver(&unittest_i2c_dev_driver);
-	if (unittest(ret == 0,
-			"could not register unittest i2c device driver\n"))
-		return ret;
-
-	ret = platform_driver_register(&unittest_i2c_bus_driver);
-	if (unittest(ret == 0,
-			"could not register unittest i2c bus driver\n"))
-		return ret;
+	KUNIT_ASSERT_EQ_MSG(test,
+			    platform_driver_register(&unittest_i2c_bus_driver),
+			    0,
+			    "could not register unittest i2c bus driver\n");
 
 #if IS_BUILTIN(CONFIG_I2C_MUX)
-	ret = i2c_add_driver(&unittest_i2c_mux_driver);
-	if (unittest(ret == 0,
-			"could not register unittest i2c mux driver\n"))
-		return ret;
+	KUNIT_ASSERT_EQ_MSG(test,
+			    i2c_add_driver(&unittest_i2c_mux_driver),
+			    0,
+			    "could not register unittest i2c mux driver\n");
 #endif
 
 	return 0;
@@ -2005,101 +2117,89 @@ static void of_unittest_overlay_i2c_cleanup(void)
 	i2c_del_driver(&unittest_i2c_dev_driver);
 }
 
-static void __init of_unittest_overlay_i2c_12(void)
+static void of_unittest_overlay_i2c_12(struct kunit *test)
 {
 	/* device should enable */
-	if (of_unittest_apply_overlay_check(12, 12, 0, 1, I2C_OVERLAY))
-		return;
-
-	unittest(1, "overlay test %d passed\n", 12);
+	of_unittest_apply_overlay_check(test, 12, 12, 0, 1, I2C_OVERLAY);
 }
 
 /* test deactivation of device */
-static void __init of_unittest_overlay_i2c_13(void)
+static void of_unittest_overlay_i2c_13(struct kunit *test)
 {
 	/* device should disable */
-	if (of_unittest_apply_overlay_check(13, 13, 1, 0, I2C_OVERLAY))
-		return;
-
-	unittest(1, "overlay test %d passed\n", 13);
+	of_unittest_apply_overlay_check(test, 13, 13, 1, 0, I2C_OVERLAY);
 }
 
 /* just check for i2c mux existence */
-static void of_unittest_overlay_i2c_14(void)
+static void of_unittest_overlay_i2c_14(struct kunit *test)
 {
+	KUNIT_SUCCEED(test);
 }
 
-static void __init of_unittest_overlay_i2c_15(void)
+static void of_unittest_overlay_i2c_15(struct kunit *test)
 {
 	/* device should enable */
-	if (of_unittest_apply_overlay_check(15, 15, 0, 1, I2C_OVERLAY))
-		return;
-
-	unittest(1, "overlay test %d passed\n", 15);
+	of_unittest_apply_overlay_check(test, 15, 15, 0, 1, I2C_OVERLAY);
 }
 
 #else
 
-static inline void of_unittest_overlay_i2c_14(void) { }
-static inline void of_unittest_overlay_i2c_15(void) { }
+static inline void of_unittest_overlay_i2c_14(struct kunit *test) { }
+static inline void of_unittest_overlay_i2c_15(struct kunit *test) { }
 
 #endif
 
-static void __init of_unittest_overlay(void)
+static void of_unittest_overlay(struct kunit *test)
 {
 	struct device_node *bus_np = NULL;
 
-	if (platform_driver_register(&unittest_driver)) {
-		unittest(0, "could not register unittest driver\n");
-		goto out;
-	}
+	KUNIT_ASSERT_FALSE_MSG(test,
+			       platform_driver_register(&unittest_driver),
+			       "could not register unittest driver\n");
 
 	bus_np = of_find_node_by_path(bus_path);
-	if (bus_np == NULL) {
-		unittest(0, "could not find bus_path \"%s\"\n", bus_path);
-		goto out;
-	}
+	KUNIT_ASSERT_NOT_ERR_OR_NULL_MSG(test, bus_np,
+					 "could not find bus_path \"%s\"\n",
+					 bus_path);
 
-	if (of_platform_default_populate(bus_np, NULL, NULL)) {
-		unittest(0, "could not populate bus @ \"%s\"\n", bus_path);
-		goto out;
-	}
+	KUNIT_ASSERT_FALSE_MSG(test,
+			       of_platform_default_populate(bus_np, NULL, NULL),
+			       "could not populate bus @ \"%s\"\n", bus_path);
 
-	if (!of_unittest_device_exists(100, PDEV_OVERLAY)) {
-		unittest(0, "could not find unittest0 @ \"%s\"\n",
-				unittest_path(100, PDEV_OVERLAY));
-		goto out;
-	}
+	KUNIT_ASSERT_TRUE_MSG(test,
+			      of_unittest_device_exists(100, PDEV_OVERLAY),
+			      "could not find unittest0 @ \"%s\"\n",
+			      unittest_path(100, PDEV_OVERLAY));
 
-	if (of_unittest_device_exists(101, PDEV_OVERLAY)) {
-		unittest(0, "unittest1 @ \"%s\" should not exist\n",
-				unittest_path(101, PDEV_OVERLAY));
-		goto out;
-	}
-
-	unittest(1, "basic infrastructure of overlays passed");
+	KUNIT_ASSERT_FALSE_MSG(test,
+			       of_unittest_device_exists(101, PDEV_OVERLAY),
+			       "unittest1 @ \"%s\" should not exist\n",
+			       unittest_path(101, PDEV_OVERLAY));
 
 	/* tests in sequence */
-	of_unittest_overlay_0();
-	of_unittest_overlay_1();
-	of_unittest_overlay_2();
-	of_unittest_overlay_3();
-	of_unittest_overlay_4();
-	of_unittest_overlay_5();
-	of_unittest_overlay_6();
-	of_unittest_overlay_8();
-
-	of_unittest_overlay_10();
-	of_unittest_overlay_11();
+	of_unittest_overlay_0(test);
+	of_unittest_overlay_1(test);
+	of_unittest_overlay_2(test);
+	of_unittest_overlay_3(test);
+	of_unittest_overlay_4(test);
+	of_unittest_overlay_5(test);
+	of_unittest_overlay_6(test);
+	of_unittest_overlay_8(test);
+
+	of_unittest_overlay_10(test);
+	of_unittest_overlay_11(test);
 
 #if IS_BUILTIN(CONFIG_I2C)
-	if (unittest(of_unittest_overlay_i2c_init() == 0, "i2c init failed\n"))
-		goto out;
+	KUNIT_ASSERT_EQ_MSG(test,
+			    of_unittest_overlay_i2c_init(test),
+			    0,
+			    "i2c init failed\n");
 
-	of_unittest_overlay_i2c_12();
-	of_unittest_overlay_i2c_13();
-	of_unittest_overlay_i2c_14();
-	of_unittest_overlay_i2c_15();
+	of_unittest_overlay_i2c_12(test);
+	of_unittest_overlay_i2c_13(test);
+	of_unittest_overlay_i2c_14(test);
+	of_unittest_overlay_i2c_15(test);
 
 	of_unittest_overlay_i2c_cleanup();
 #endif
@@ -2111,7 +2211,7 @@ static void __init of_unittest_overlay(void)
 }
 
 #else
-static inline void __init of_unittest_overlay(void) { }
+static inline void of_unittest_overlay(struct kunit *test) { }
 #endif
 
 #ifdef CONFIG_OF_OVERLAY
@@ -2254,7 +2354,7 @@ void __init unittest_unflatten_overlay_base(void)
  *
  * Return 0 on unexpected error.
  */
-static int __init overlay_data_apply(const char *overlay_name, int *overlay_id)
+static int overlay_data_apply(const char *overlay_name, int *overlay_id)
 {
 	struct overlay_info *info;
 	int found = 0;
@@ -2301,19 +2401,17 @@ static int __init overlay_data_apply(const char *overlay_name, int *overlay_id)
  * The first part of the function is _not_ normal overlay usage; it is
  * finishing splicing the base overlay device tree into the live tree.
  */
-static __init void of_unittest_overlay_high_level(void)
+static void of_unittest_overlay_high_level(struct kunit *test)
 {
 	struct device_node *last_sibling;
 	struct device_node *np;
 	struct device_node *of_symbols;
-	struct device_node *overlay_base_symbols;
+	struct device_node *overlay_base_symbols = NULL;
 	struct device_node **pprev;
 	struct property *prop;
 
-	if (!overlay_base_root) {
-		unittest(0, "overlay_base_root not initialized\n");
-		return;
-	}
+	KUNIT_ASSERT_TRUE_MSG(test, overlay_base_root,
+			      "overlay_base_root not initialized\n");
 
 	/*
 	 * Could not fixup phandles in unittest_unflatten_overlay_base()
@@ -2358,11 +2456,10 @@ static __init void of_unittest_overlay_high_level(void)
 	}
 
 	for (np = overlay_base_root->child; np; np = np->sibling) {
-		if (of_get_child_by_name(of_root, np->name)) {
-			unittest(0, "illegal node name in overlay_base %s",
-				np->name);
-			return;
-		}
+		KUNIT_ASSERT_FALSE_MSG(test,
+				      of_get_child_by_name(of_root, np->name),
+				      "illegal node name in overlay_base %s",
+				      np->name);
 	}
 
 	/*
@@ -2395,21 +2492,24 @@ static __init void of_unittest_overlay_high_level(void)
 
 			new_prop = __of_prop_dup(prop, GFP_KERNEL);
 			if (!new_prop) {
-				unittest(0, "__of_prop_dup() of '%s' from overlay_base node __symbols__",
-					 prop->name);
+				KUNIT_FAIL(test,
+					   "__of_prop_dup() of '%s' from overlay_base node __symbols__",
+					   prop->name);
 				goto err_unlock;
 			}
 			if (__of_add_property(of_symbols, new_prop)) {
 				/* "name" auto-generated by unflatten */
 				if (!strcmp(new_prop->name, "name"))
 					continue;
-				unittest(0, "duplicate property '%s' in overlay_base node __symbols__",
-					 prop->name);
+				KUNIT_FAIL(test,
+					   "duplicate property '%s' in overlay_base node __symbols__",
+					   prop->name);
 				goto err_unlock;
 			}
 			if (__of_add_property_sysfs(of_symbols, new_prop)) {
-				unittest(0, "unable to add property '%s' in overlay_base node __symbols__ to sysfs",
-					 prop->name);
+				KUNIT_FAIL(test,
+					   "unable to add property '%s' in overlay_base node __symbols__ to sysfs",
+					   prop->name);
 				goto err_unlock;
 			}
 		}
@@ -2420,14 +2520,16 @@ static __init void of_unittest_overlay_high_level(void)
 
 	/* now do the normal overlay usage test */
 
-	unittest(overlay_data_apply("overlay", NULL),
-		 "Adding overlay 'overlay' failed\n");
+	KUNIT_EXPECT_TRUE_MSG(test, overlay_data_apply("overlay", NULL),
+			      "Adding overlay 'overlay' failed\n");
 
-	unittest(overlay_data_apply("overlay_bad_phandle", NULL),
-		 "Adding overlay 'overlay_bad_phandle' failed\n");
+	KUNIT_EXPECT_TRUE_MSG(test,
+			      overlay_data_apply("overlay_bad_phandle", NULL),
+			      "Adding overlay 'overlay_bad_phandle' failed\n");
 
-	unittest(overlay_data_apply("overlay_bad_symbol", NULL),
-		 "Adding overlay 'overlay_bad_symbol' failed\n");
+	KUNIT_EXPECT_TRUE_MSG(test,
+			      overlay_data_apply("overlay_bad_symbol", NULL),
+			      "Adding overlay 'overlay_bad_symbol' failed\n");
 
 	return;
 
@@ -2437,54 +2539,49 @@ static __init void of_unittest_overlay_high_level(void)
 
 #else
 
-static inline __init void of_unittest_overlay_high_level(void) {}
+static inline void of_unittest_overlay_high_level(struct kunit *test) {}
 
 #endif
 
-static int __init of_unittest(void)
+static int of_test_init(struct kunit *test)
 {
 	struct device_node *np;
-	int res;
 
 	/* adding data for unittest */
-	res = unittest_data_add();
-	if (res)
-		return res;
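+	/*
+	 * NOTE: a kunit_module's ->init is run before every test case in
+	 * the module, so unlike the old late_initcall this re-adds the
+	 * testcase data for each case.
+	 */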
+	KUNIT_ASSERT_EQ(test, 0, unittest_data_add());
+
 	if (!of_aliases)
 		of_aliases = of_find_node_by_path("/aliases");
 
 	np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
-	if (!np) {
-		pr_info("No testcase data in device tree; not running tests\n");
-		return 0;
-	}
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
 	of_node_put(np);
 
-	pr_info("start of unittest - you will see error messages\n");
-	of_unittest_check_tree_linkage();
-	of_unittest_check_phandles();
-	of_unittest_find_node_by_name();
-	of_unittest_dynamic();
-	of_unittest_parse_phandle_with_args();
-	of_unittest_parse_phandle_with_args_map();
-	of_unittest_printf();
-	of_unittest_property_string();
-	of_unittest_property_copy();
-	of_unittest_changeset();
-	of_unittest_parse_interrupts();
-	of_unittest_parse_interrupts_extended();
-	of_unittest_match_node();
-	of_unittest_platform_populate();
-	of_unittest_overlay();
+	return 0;
+}
 
+static struct kunit_case of_test_cases[] = {
+	KUNIT_CASE(of_unittest_check_tree_linkage),
+	KUNIT_CASE(of_unittest_check_phandles),
+	KUNIT_CASE(of_unittest_find_node_by_name),
+	KUNIT_CASE(of_unittest_dynamic),
+	KUNIT_CASE(of_unittest_parse_phandle_with_args),
+	KUNIT_CASE(of_unittest_parse_phandle_with_args_map),
+	KUNIT_CASE(of_unittest_printf),
+	KUNIT_CASE(of_unittest_property_string),
+	KUNIT_CASE(of_unittest_property_copy),
+	KUNIT_CASE(of_unittest_changeset),
+	KUNIT_CASE(of_unittest_parse_interrupts),
+	KUNIT_CASE(of_unittest_parse_interrupts_extended),
+	KUNIT_CASE(of_unittest_match_node),
+	KUNIT_CASE(of_unittest_platform_populate),
+	KUNIT_CASE(of_unittest_overlay),
 	/* Double check linkage after removing testcase data */
-	of_unittest_check_tree_linkage();
-
-	of_unittest_overlay_high_level();
-
-	pr_info("end of unittest - %i passed, %i failed\n",
-		unittest_results.passed, unittest_results.failed);
+	KUNIT_CASE(of_unittest_check_tree_linkage),
+	KUNIT_CASE(of_unittest_overlay_high_level),
+	{},
+};
 
-	return 0;
-}
-late_initcall(of_unittest);
+static struct kunit_module of_test_module = {
+	.name = "of-test",
+	.init = of_test_init,
+	.test_cases = of_test_cases,
+};
+module_test(of_test_module);
-- 
2.20.0.rc0.387.gc7a69e6b6c-goog

^ permalink raw reply related	[flat|nested] 232+ messages in thread

* [RFC v3 18/19] of: unittest: split out a couple of test cases from unittest
  2018-11-28 19:36 [RFC v3 00/19] kunit: introduce KUnit, the Linux kernel unit testing framework brendanhiggins
                   ` (17 preceding siblings ...)
  2018-11-28 19:36 ` [RFC v3 17/19] of: unittest: migrate tests to run on KUnit brendanhiggins
@ 2018-11-28 19:36 ` brendanhiggins
  2018-11-28 19:36   ` Brendan Higgins
  2018-12-04 10:58   ` frowand.list
  2018-11-28 19:36 ` [RFC v3 19/19] of: unittest: split up some super large test cases brendanhiggins
                   ` (2 subsequent siblings)
  21 siblings, 2 replies; 232+ messages in thread
From: brendanhiggins @ 2018-11-28 19:36 UTC (permalink / raw)


Split out a couple of test cases that exercise features in base.c from the
unittest.c monolith. The intention is that we will eventually split out
all test cases and group them together based on what portion of device
tree they test.
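
The shape each split-out file takes is the same. As a rough sketch (the
overlay file and all names below are hypothetical; this patch only adds
base-test.c), a future drivers/of/overlay-test.c would register its own
suite the way base-test.c does:

	/* Hypothetical sketch only; of_overlay_test_init is illustrative. */
	static struct kunit_case of_overlay_test_cases[] = {
		KUNIT_CASE(of_unittest_overlay_0),
		{},
	};

	static struct kunit_module of_overlay_test_module = {
		.name = "of-overlay-test",
		.init = of_overlay_test_init, /* per-file fixture */
		.test_cases = of_overlay_test_cases,
	};
	module_test(of_overlay_test_module);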

Signed-off-by: Brendan Higgins <brendanhiggins@google.com>
---
 drivers/of/Makefile      |   2 +-
 drivers/of/base-test.c   | 214 ++++++++++++++++++++++++++
 drivers/of/test-common.c | 149 ++++++++++++++++++
 drivers/of/test-common.h |  16 ++
 drivers/of/unittest.c    | 316 +--------------------------------------
 5 files changed, 381 insertions(+), 316 deletions(-)
 create mode 100644 drivers/of/base-test.c
 create mode 100644 drivers/of/test-common.c
 create mode 100644 drivers/of/test-common.h

diff --git a/drivers/of/Makefile b/drivers/of/Makefile
index 663a4af0cccd5..4a4bd527d586c 100644
--- a/drivers/of/Makefile
+++ b/drivers/of/Makefile
@@ -8,7 +8,7 @@ obj-$(CONFIG_OF_PROMTREE) += pdt.o
 obj-$(CONFIG_OF_ADDRESS)  += address.o
 obj-$(CONFIG_OF_IRQ)    += irq.o
 obj-$(CONFIG_OF_NET)	+= of_net.o
-obj-$(CONFIG_OF_UNITTEST) += unittest.o
+obj-$(CONFIG_OF_UNITTEST) += unittest.o base-test.o test-common.o
 obj-$(CONFIG_OF_MDIO)	+= of_mdio.o
 obj-$(CONFIG_OF_RESERVED_MEM) += of_reserved_mem.o
 obj-$(CONFIG_OF_RESOLVE)  += resolver.o
diff --git a/drivers/of/base-test.c b/drivers/of/base-test.c
new file mode 100644
index 0000000000000..5731787a3fca8
--- /dev/null
+++ b/drivers/of/base-test.c
@@ -0,0 +1,214 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Unit tests for functions defined in base.c.
+ */
+#include <linux/of.h>
+
+#include <kunit/test.h>
+
+#include "test-common.h"
+
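+/*
+ * Conventions used below: KUNIT_ASSERT_* aborts the test case on failure,
+ * so it is reserved for lookups and setup the rest of a case cannot run
+ * without; KUNIT_EXPECT_* records a failure and lets the case continue.
+ */
+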
+static void of_unittest_find_node_by_name(struct kunit *test)
+{
+	struct device_node *np;
+	const char *options, *name;
+
+	np = of_find_node_by_path("/testcase-data");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+	name = kasprintf(GFP_KERNEL, "%pOF", np);
+	KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
+			       "find /testcase-data failed\n");
+	of_node_put(np);
+	kfree(name);
+
+	/* Test if trailing '/' works */
+	KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("/testcase-data/"), NULL,
+			    "trailing '/' on /testcase-data/ should fail\n");
+
+	np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+	name = kasprintf(GFP_KERNEL, "%pOF", np);
+	KUNIT_EXPECT_STREQ_MSG(test,
+			       "/testcase-data/phandle-tests/consumer-a", name,
+			       "find /testcase-data/phandle-tests/consumer-a failed\n");
+	of_node_put(np);
+	kfree(name);
+
+	np = of_find_node_by_path("testcase-alias");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+	name = kasprintf(GFP_KERNEL, "%pOF", np);
+	KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
+			       "find testcase-alias failed\n");
+	of_node_put(np);
+	kfree(name);
+
+	/* Test if trailing '/' works on aliases */
+	KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("testcase-alias/"), NULL,
+			    "trailing '/' on testcase-alias/ should fail\n");
+
+	np = of_find_node_by_path("testcase-alias/phandle-tests/consumer-a");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+	name = kasprintf(GFP_KERNEL, "%pOF", np);
+	KUNIT_EXPECT_STREQ_MSG(test,
+			       "/testcase-data/phandle-tests/consumer-a", name,
+			       "find testcase-alias/phandle-tests/consumer-a failed\n");
+	of_node_put(np);
+	kfree(name);
+
+	np = of_find_node_by_path("/testcase-data/missing-path");
+	KUNIT_EXPECT_EQ_MSG(test, np, NULL,
+			    "non-existent path returned node %pOF\n", np);
+	of_node_put(np);
+
+	np = of_find_node_by_path("missing-alias");
+	KUNIT_EXPECT_EQ_MSG(test, np, NULL,
+			    "non-existent alias returned node %pOF\n", np);
+	of_node_put(np);
+
+	np = of_find_node_by_path("testcase-alias/missing-path");
+	KUNIT_EXPECT_EQ_MSG(test, np, NULL,
+			    "non-existent alias with relative path returned node %pOF\n",
+			    np);
+	of_node_put(np);
+
+	np = of_find_node_opts_by_path("/testcase-data:testoption", &options);
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+	KUNIT_EXPECT_STREQ_MSG(test, "testoption", options,
+			       "option path test failed\n");
+	of_node_put(np);
+
+	np = of_find_node_opts_by_path("/testcase-data:test/option", &options);
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+	KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
+			       "option path test, subcase #1 failed\n");
+	of_node_put(np);
+
+	np = of_find_node_opts_by_path(
+			"/testcase-data/testcase-device1:test/option",
+			&options);
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+	KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
+			       "option path test, subcase #2 failed\n");
+	of_node_put(np);
+
+	np = of_find_node_opts_by_path("/testcase-data:testoption", NULL);
+	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(test, np,
+					 "NULL option path test failed\n");
+	of_node_put(np);
+
+	np = of_find_node_opts_by_path("testcase-alias:testaliasoption",
+				       &options);
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+	KUNIT_EXPECT_STREQ_MSG(test, "testaliasoption", options,
+			       "option alias path test failed\n");
+	of_node_put(np);
+
+	np = of_find_node_opts_by_path("testcase-alias:test/alias/option",
+				       &options);
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+	KUNIT_EXPECT_STREQ_MSG(test, "test/alias/option", options,
+			       "option alias path test, subcase #1 failed\n");
+	of_node_put(np);
+
+	np = of_find_node_opts_by_path("testcase-alias:testaliasoption", NULL);
+	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(test, np,
+					 "NULL option alias path test failed\n");
+	of_node_put(np);
+
+	options = "testoption";
+	np = of_find_node_opts_by_path("testcase-alias", &options);
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+	KUNIT_EXPECT_EQ_MSG(test, options, NULL,
+			    "option clearing test failed\n");
+	of_node_put(np);
+
+	options = "testoption";
+	np = of_find_node_opts_by_path("/", &options);
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+	KUNIT_EXPECT_EQ_MSG(test, options, NULL,
+			    "option clearing root node test failed\n");
+	of_node_put(np);
+}
+
+static void of_unittest_dynamic(struct kunit *test)
+{
+	struct device_node *np;
+	struct property *prop;
+
+	np = of_find_node_by_path("/testcase-data");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+
+	/* Array of 4 properties for the purpose of testing */
+	prop = kcalloc(4, sizeof(*prop), GFP_KERNEL);
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, prop);
+
+	/* Add a new property - should pass*/
+	prop->name = "new-property";
+	prop->value = "new-property-data";
+	prop->length = strlen(prop->value) + 1;
+	KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop), 0,
+			    "Adding a new property failed\n");
+
+	/* Try to add an existing property - should fail */
+	prop++;
+	prop->name = "new-property";
+	prop->value = "new-property-data-should-fail";
+	prop->length = strlen(prop->value) + 1;
+	KUNIT_EXPECT_NE_MSG(test, of_add_property(np, prop), 0,
+			    "Adding an existing property should have failed\n");
+
+	/* Try to modify an existing property - should pass */
+	prop->value = "modify-property-data-should-pass";
+	prop->length = strlen(prop->value) + 1;
+	KUNIT_EXPECT_EQ_MSG(test, of_update_property(np, prop), 0,
+			    "Updating an existing property should have passed\n");
+
+	/* Try to modify non-existent property - should pass*/
+	prop++;
+	prop->name = "modify-property";
+	prop->value = "modify-missing-property-data-should-pass";
+	prop->length = strlen(prop->value) + 1;
+	KUNIT_EXPECT_EQ_MSG(test, of_update_property(np, prop), 0,
+			    "Updating a missing property should have passed\n");
+
+	/* Remove property - should pass */
+	KUNIT_EXPECT_EQ_MSG(test, of_remove_property(np, prop), 0,
+			    "Removing a property should have passed\n");
+
+	/* Adding very large property - should pass */
+	prop++;
+	prop->name = "large-property-PAGE_SIZEx8";
+	prop->length = PAGE_SIZE * 8;
+	prop->value = kzalloc(prop->length, GFP_KERNEL);
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, prop->value);
+	KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop), 0,
+			    "Adding a large property should have passed\n");
+}
+
+static int of_test_init(struct kunit *test)
+{
+	/* adding data for unittest */
+	KUNIT_ASSERT_EQ(test, 0, unittest_data_add());
+
+	if (!of_aliases)
+		of_aliases = of_find_node_by_path("/aliases");
+
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, of_find_node_by_path(
+			"/testcase-data/phandle-tests/consumer-a"));
+
+	return 0;
+}
+
+static struct kunit_case of_test_cases[] = {
+	KUNIT_CASE(of_unittest_find_node_by_name),
+	KUNIT_CASE(of_unittest_dynamic),
+	{},
+};
+
+static struct kunit_module of_test_module = {
+	.name = "of-base-test",
+	.init = of_test_init,
+	.test_cases = of_test_cases,
+};
+module_test(of_test_module);
diff --git a/drivers/of/test-common.c b/drivers/of/test-common.c
new file mode 100644
index 0000000000000..0b2319fde3b3e
--- /dev/null
+++ b/drivers/of/test-common.c
@@ -0,0 +1,149 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Common code to be used by unit tests.
+ */
+#include "test-common.h"
+
+#include <linux/of_fdt.h>
+#include <linux/slab.h>
+
+#include "of_private.h"
+
+/**
+ *	update_node_properties - adds the properties
+ *	of np into dup node (present in live tree) and
+ *	updates parent of children of np to dup.
+ *
+ *	@np:	node already present in live tree
+ *	@dup:	node present in live tree to be updated
+ */
+static void update_node_properties(struct device_node *np,
+					struct device_node *dup)
+{
+	struct property *prop;
+	struct device_node *child;
+
+	for_each_property_of_node(np, prop)
+		of_add_property(dup, prop);
+
+	for_each_child_of_node(np, child)
+		child->parent = dup;
+}
+
+/**
+ *	attach_node_and_children - attaches nodes
+ *	and its children to live tree
+ *
+ *	@np:	Node to attach to live tree
+ */
+static int attach_node_and_children(struct device_node *np)
+{
+	struct device_node *next, *dup, *child;
+	unsigned long flags;
+	const char *full_name;
+
+	full_name = kasprintf(GFP_KERNEL, "%pOF", np);
+	dup = of_find_node_by_path(full_name);
+	kfree(full_name);
+	if (dup) {
+		update_node_properties(np, dup);
+		return 0;
+	}
+
+	child = np->child;
+	np->child = NULL;
+
+	mutex_lock(&of_mutex);
+	raw_spin_lock_irqsave(&devtree_lock, flags);
+	np->sibling = np->parent->child;
+	np->parent->child = np;
+	of_node_clear_flag(np, OF_DETACHED);
+	raw_spin_unlock_irqrestore(&devtree_lock, flags);
+
+	__of_attach_node_sysfs(np);
+	mutex_unlock(&of_mutex);
+
+	while (child) {
+		next = child->sibling;
+		attach_node_and_children(child);
+		child = next;
+	}
+
+	return 0;
+}
+
+/**
+ *	unittest_data_add - Reads, copies data from
+ *	linked tree and attaches it to the live tree
+ */
+int unittest_data_add(void)
+{
+	void *unittest_data;
+	struct device_node *unittest_data_node, *np;
+	/*
+	 * __dtb_testcases_begin[] and __dtb_testcases_end[] are magically
+	 * created by cmd_dt_S_dtb in scripts/Makefile.lib
+	 */
+	extern uint8_t __dtb_testcases_begin[];
+	extern uint8_t __dtb_testcases_end[];
+	const int size = __dtb_testcases_end - __dtb_testcases_begin;
+	int rc;
+
+	if (!size) {
+		pr_warn("%s: No testcase data to attach; not running tests\n",
+			__func__);
+		return -ENODATA;
+	}
+
+	/* creating copy */
+	unittest_data = kmemdup(__dtb_testcases_begin, size, GFP_KERNEL);
+
+	if (!unittest_data) {
+		pr_warn("%s: Failed to allocate memory for unittest_data; "
+			"not running tests\n", __func__);
+		return -ENOMEM;
+	}
+	of_fdt_unflatten_tree(unittest_data, NULL, &unittest_data_node);
+	if (!unittest_data_node) {
+		pr_warn("%s: No tree to attach; not running tests\n", __func__);
+		return -ENODATA;
+	}
+
+	/*
+	 * This lock normally encloses of_resolve_phandles()
+	 */
+	of_overlay_mutex_lock();
+
+	rc = of_resolve_phandles(unittest_data_node);
+	if (rc) {
+		pr_err("%s: Failed to resolve phandles (rc=%i)\n",
+		       __func__, rc);
+		of_overlay_mutex_unlock();
+		return -EINVAL;
+	}
+
+	if (!of_root) {
+		of_root = unittest_data_node;
+		for_each_of_allnodes(np)
+			__of_attach_node_sysfs(np);
+		of_aliases = of_find_node_by_path("/aliases");
+		of_chosen = of_find_node_by_path("/chosen");
+		of_overlay_mutex_unlock();
+		return 0;
+	}
+
+	/* attach the sub-tree to live tree */
+	np = unittest_data_node->child;
+	while (np) {
+		struct device_node *next = np->sibling;
+
+		np->parent = of_root;
+		attach_node_and_children(np);
+		np = next;
+	}
+
+	of_overlay_mutex_unlock();
+
+	return 0;
+}
+
diff --git a/drivers/of/test-common.h b/drivers/of/test-common.h
new file mode 100644
index 0000000000000..a35484406bbf1
--- /dev/null
+++ b/drivers/of/test-common.h
@@ -0,0 +1,16 @@
+/* SPDX-License-Identifier: GPL-2.0+ */
+/*
+ * Common code to be used by unit tests.
+ */
+#ifndef _LINUX_OF_TEST_COMMON_H
+#define _LINUX_OF_TEST_COMMON_H
+
+#include <linux/of.h>
+
+/**
+ *	unittest_data_add - Reads, copies data from
+ *	linked tree and attaches it to the live tree
+ */
+int unittest_data_add(void);
+
+#endif /* _LINUX_OF_TEST_COMMON_H */
diff --git a/drivers/of/unittest.c b/drivers/of/unittest.c
index a5ef44730ffdb..b8c220d330f03 100644
--- a/drivers/of/unittest.c
+++ b/drivers/of/unittest.c
@@ -29,182 +29,7 @@
 #include <kunit/test.h>
 
 #include "of_private.h"
-
-static void of_unittest_find_node_by_name(struct kunit *test)
-{
-	struct device_node *np;
-	const char *options, *name;
-
-	np = of_find_node_by_path("/testcase-data");
-	name = kasprintf(GFP_KERNEL, "%pOF", np);
-	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
-	KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
-			       "find /testcase-data failed\n");
-	of_node_put(np);
-	kfree(name);
-
-	/* Test if trailing '/' works */
-	KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("/testcase-data/"), NULL,
-			    "trailing '/' on /testcase-data/ should fail\n");
-
-	np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
-	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
-	name = kasprintf(GFP_KERNEL, "%pOF", np);
-	KUNIT_EXPECT_STREQ_MSG(test,
-			       "/testcase-data/phandle-tests/consumer-a", name,
-			       "find /testcase-data/phandle-tests/consumer-a failed\n");
-	of_node_put(np);
-	kfree(name);
-
-	np = of_find_node_by_path("testcase-alias");
-	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
-	name = kasprintf(GFP_KERNEL, "%pOF", np);
-	KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
-			       "find testcase-alias failed\n");
-	of_node_put(np);
-	kfree(name);
-
-	/* Test if trailing '/' works on aliases */
-	KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("testcase-alias/"), NULL,
-			    "trailing '/' on testcase-alias/ should fail\n");
-
-	np = of_find_node_by_path("testcase-alias/phandle-tests/consumer-a");
-	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
-	name = kasprintf(GFP_KERNEL, "%pOF", np);
-	KUNIT_EXPECT_STREQ_MSG(test,
-			       "/testcase-data/phandle-tests/consumer-a", name,
-			       "find testcase-alias/phandle-tests/consumer-a failed\n");
-	of_node_put(np);
-	kfree(name);
-
-	KUNIT_EXPECT_EQ_MSG(test,
-			    of_find_node_by_path("/testcase-data/missing-path"),
-			    NULL,
-			    "non-existent path returned node %pOF\n", np);
-	of_node_put(np);
-
-	KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("missing-alias"), NULL,
-			    "non-existent alias returned node %pOF\n", np);
-	of_node_put(np);
-
-	KUNIT_EXPECT_EQ_MSG(test,
-			    of_find_node_by_path("testcase-alias/missing-path"),
-			    NULL,
-			    "non-existent alias with relative path returned node %pOF\n",
-			    np);
-	of_node_put(np);
-
-	np = of_find_node_opts_by_path("/testcase-data:testoption", &options);
-	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
-	KUNIT_EXPECT_STREQ_MSG(test, "testoption", options,
-			       "option path test failed\n");
-	of_node_put(np);
-
-	np = of_find_node_opts_by_path("/testcase-data:test/option", &options);
-	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
-	KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
-			       "option path test, subcase #1 failed\n");
-	of_node_put(np);
-
-	np = of_find_node_opts_by_path("/testcase-data/testcase-device1:test/option", &options);
-	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
-	KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
-			       "option path test, subcase #2 failed\n");
-	of_node_put(np);
-
-	np = of_find_node_opts_by_path("/testcase-data:testoption", NULL);
-	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(test, np,
-					 "NULL option path test failed\n");
-	of_node_put(np);
-
-	np = of_find_node_opts_by_path("testcase-alias:testaliasoption",
-				       &options);
-	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
-	KUNIT_EXPECT_STREQ_MSG(test, "testaliasoption", options,
-			       "option alias path test failed\n");
-	of_node_put(np);
-
-	np = of_find_node_opts_by_path("testcase-alias:test/alias/option",
-				       &options);
-	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
-	KUNIT_EXPECT_STREQ_MSG(test, "test/alias/option", options,
-			       "option alias path test, subcase #1 failed\n");
-	of_node_put(np);
-
-	np = of_find_node_opts_by_path("testcase-alias:testaliasoption", NULL);
-	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(test, np,
-					 "NULL option alias path test failed\n");
-	of_node_put(np);
-
-	options = "testoption";
-	np = of_find_node_opts_by_path("testcase-alias", &options);
-	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
-	KUNIT_EXPECT_EQ_MSG(test, options, NULL,
-			    "option clearing test failed\n");
-	of_node_put(np);
-
-	options = "testoption";
-	np = of_find_node_opts_by_path("/", &options);
-	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
-	KUNIT_EXPECT_EQ_MSG(test, options, NULL,
-			    "option clearing root node test failed\n");
-	of_node_put(np);
-}
-
-static void of_unittest_dynamic(struct kunit *test)
-{
-	struct device_node *np;
-	struct property *prop;
-
-	np = of_find_node_by_path("/testcase-data");
-	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
-
-	/* Array of 4 properties for the purpose of testing */
-	prop = kcalloc(4, sizeof(*prop), GFP_KERNEL);
-	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, prop);
-
-	/* Add a new property - should pass*/
-	prop->name = "new-property";
-	prop->value = "new-property-data";
-	prop->length = strlen(prop->value) + 1;
-	KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop), 0,
-			    "Adding a new property failed\n");
-
-	/* Try to add an existing property - should fail */
-	prop++;
-	prop->name = "new-property";
-	prop->value = "new-property-data-should-fail";
-	prop->length = strlen(prop->value) + 1;
-	KUNIT_EXPECT_NE_MSG(test, of_add_property(np, prop), 0,
-			    "Adding an existing property should have failed\n");
-
-	/* Try to modify an existing property - should pass */
-	prop->value = "modify-property-data-should-pass";
-	prop->length = strlen(prop->value) + 1;
-	KUNIT_EXPECT_EQ_MSG(test, of_update_property(np, prop), 0,
-			    "Updating an existing property should have passed\n");
-
-	/* Try to modify non-existent property - should pass*/
-	prop++;
-	prop->name = "modify-property";
-	prop->value = "modify-missing-property-data-should-pass";
-	prop->length = strlen(prop->value) + 1;
-	KUNIT_EXPECT_EQ_MSG(test, of_update_property(np, prop), 0,
-			    "Updating a missing property should have passed\n");
-
-	/* Remove property - should pass */
-	KUNIT_EXPECT_EQ_MSG(test, of_remove_property(np, prop), 0,
-			    "Removing a property should have passed\n");
-
-	/* Adding very large property - should pass */
-	prop++;
-	prop->name = "large-property-PAGE_SIZEx8";
-	prop->length = PAGE_SIZE * 8;
-	prop->value = kzalloc(prop->length, GFP_KERNEL);
-	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, prop->value);
-	KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop), 0,
-			    "Adding a large property should have passed\n");
-}
+#include "test-common.h"
 
 static int of_unittest_check_node_linkage(struct device_node *np)
 {
@@ -1200,143 +1025,6 @@ static void of_unittest_platform_populate(struct kunit *test)
 	of_node_put(np);
 }
 
-/**
- *	update_node_properties - adds the properties
- *	of np into dup node (present in live tree) and
- *	updates parent of children of np to dup.
- *
- *	@np:	node already present in live tree
- *	@dup:	node present in live tree to be updated
- */
-static void update_node_properties(struct device_node *np,
-					struct device_node *dup)
-{
-	struct property *prop;
-	struct device_node *child;
-
-	for_each_property_of_node(np, prop)
-		of_add_property(dup, prop);
-
-	for_each_child_of_node(np, child)
-		child->parent = dup;
-}
-
-/**
- *	attach_node_and_children - attaches nodes
- *	and its children to live tree
- *
- *	@np:	Node to attach to live tree
- */
-static int attach_node_and_children(struct device_node *np)
-{
-	struct device_node *next, *dup, *child;
-	unsigned long flags;
-	const char *full_name;
-
-	full_name = kasprintf(GFP_KERNEL, "%pOF", np);
-	dup = of_find_node_by_path(full_name);
-	kfree(full_name);
-	if (dup) {
-		update_node_properties(np, dup);
-		return 0;
-	}
-
-	child = np->child;
-	np->child = NULL;
-
-	mutex_lock(&of_mutex);
-	raw_spin_lock_irqsave(&devtree_lock, flags);
-	np->sibling = np->parent->child;
-	np->parent->child = np;
-	of_node_clear_flag(np, OF_DETACHED);
-	raw_spin_unlock_irqrestore(&devtree_lock, flags);
-
-	__of_attach_node_sysfs(np);
-	mutex_unlock(&of_mutex);
-
-	while (child) {
-		next = child->sibling;
-		attach_node_and_children(child);
-		child = next;
-	}
-
-	return 0;
-}
-
-/**
- *	unittest_data_add - Reads, copies data from
- *	linked tree and attaches it to the live tree
- */
-static int unittest_data_add(void)
-{
-	void *unittest_data;
-	struct device_node *unittest_data_node, *np;
-	/*
-	 * __dtb_testcases_begin[] and __dtb_testcases_end[] are magically
-	 * created by cmd_dt_S_dtb in scripts/Makefile.lib
-	 */
-	extern uint8_t __dtb_testcases_begin[];
-	extern uint8_t __dtb_testcases_end[];
-	const int size = __dtb_testcases_end - __dtb_testcases_begin;
-	int rc;
-
-	if (!size) {
-		pr_warn("%s: No testcase data to attach; not running tests\n",
-			__func__);
-		return -ENODATA;
-	}
-
-	/* creating copy */
-	unittest_data = kmemdup(__dtb_testcases_begin, size, GFP_KERNEL);
-
-	if (!unittest_data) {
-		pr_warn("%s: Failed to allocate memory for unittest_data; "
-			"not running tests\n", __func__);
-		return -ENOMEM;
-	}
-	of_fdt_unflatten_tree(unittest_data, NULL, &unittest_data_node);
-	if (!unittest_data_node) {
-		pr_warn("%s: No tree to attach; not running tests\n", __func__);
-		return -ENODATA;
-	}
-
-	/*
-	 * This lock normally encloses of_resolve_phandles()
-	 */
-	of_overlay_mutex_lock();
-
-	rc = of_resolve_phandles(unittest_data_node);
-	if (rc) {
-		pr_err("%s: Failed to resolve phandles (rc=%i)\n", __func__, rc);
-		of_overlay_mutex_unlock();
-		return -EINVAL;
-	}
-
-	if (!of_root) {
-		of_root = unittest_data_node;
-		for_each_of_allnodes(np)
-			__of_attach_node_sysfs(np);
-		of_aliases = of_find_node_by_path("/aliases");
-		of_chosen = of_find_node_by_path("/chosen");
-		of_overlay_mutex_unlock();
-		return 0;
-	}
-
-	/* attach the sub-tree to live tree */
-	np = unittest_data_node->child;
-	while (np) {
-		struct device_node *next = np->sibling;
-
-		np->parent = of_root;
-		attach_node_and_children(np);
-		np = next;
-	}
-
-	of_overlay_mutex_unlock();
-
-	return 0;
-}
-
 #ifdef CONFIG_OF_OVERLAY
 static int overlay_data_apply(const char *overlay_name, int *overlay_id);
 
@@ -2560,8 +2248,6 @@ static int of_test_init(struct kunit *test)
 static struct kunit_case of_test_cases[] = {
 	KUNIT_CASE(of_unittest_check_tree_linkage),
 	KUNIT_CASE(of_unittest_check_phandles),
-	KUNIT_CASE(of_unittest_find_node_by_name),
-	KUNIT_CASE(of_unittest_dynamic),
 	KUNIT_CASE(of_unittest_parse_phandle_with_args),
 	KUNIT_CASE(of_unittest_parse_phandle_with_args_map),
 	KUNIT_CASE(of_unittest_printf),
-- 
2.20.0.rc0.387.gc7a69e6b6c-goog

^ permalink raw reply related	[flat|nested] 232+ messages in thread
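
For readers following the series: the registration boilerplate that base-test.c adopts above reduces to a small, fixed shape. Here is a minimal sketch against the KUnit API introduced earlier in this series (the "example" names are illustrative, not part of the patch):

#include <kunit/test.h>

/* A test case is any function taking a struct kunit pointer. */
static void example_simple_case(struct kunit *test)
{
	/* Expectations log a failure and continue; assertions abort the case. */
	KUNIT_EXPECT_EQ(test, 1 + 1, 2);
}

/* Cases are collected in a {}-terminated table... */
static struct kunit_case example_test_cases[] = {
	KUNIT_CASE(example_simple_case),
	{},
};

/* ...and the table is bound to a named module and registered. */
static struct kunit_module example_test_module = {
	.name = "example-test",
	.test_cases = example_test_cases,
};
module_test(example_test_module);

Every split-out test file in this patch and the next is an instance of this shape, plus an optional .init hook for shared setup such as unittest_data_add().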

* [RFC v3 19/19] of: unittest: split up some super large test cases
  2018-11-28 19:36 [RFC v3 00/19] kunit: introduce KUnit, the Linux kernel unit testing framework brendanhiggins
                   ` (18 preceding siblings ...)
  2018-11-28 19:36 ` [RFC v3 18/19] of: unittest: split out a couple of test cases from unittest brendanhiggins
@ 2018-11-28 19:36 ` brendanhiggins
  2018-11-28 19:36   ` Brendan Higgins
  2018-12-04 10:52 ` [RFC v3 00/19] kunit: introduce KUnit, the Linux kernel unit testing framework frowand.list
  2018-12-04 11:40 ` frowand.list
  21 siblings, 1 reply; 232+ messages in thread
From: brendanhiggins @ 2018-11-28 19:36 UTC (permalink / raw)


Split up the oversized test cases of_unittest_find_node_by_name and
of_unittest_dynamic into properly sized, well-defined test cases.

Signed-off-by: Brendan Higgins <brendanhiggins@google.com>
---
 drivers/of/base-test.c | 315 ++++++++++++++++++++++++++++++++++-------
 1 file changed, 260 insertions(+), 55 deletions(-)
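
The mechanics of the split are uniform across the hunks below: each scenario that previously ran in sequence inside one large function becomes a stand-alone case, so a failure in one lookup no longer masks the rest. Condensed to a single scenario (an illustrative sketch distilled from the real hunks; not additional patch content):

/* Before: one function chained many unrelated scenarios. */
static void of_unittest_find_node_by_name(struct kunit *test)
{
	/* ... dozens of lookups and expectations, back to back ... */
}

/* After: each scenario stands alone and is registered individually. */
static void of_test_find_node_by_name_trailing_slash(struct kunit *test)
{
	/* Test if trailing '/' works */
	KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("/testcase-data/"), NULL,
			    "trailing '/' on /testcase-data/ should fail\n");
}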

diff --git a/drivers/of/base-test.c b/drivers/of/base-test.c
index 5731787a3fca8..46c3dd9ce6628 100644
--- a/drivers/of/base-test.c
+++ b/drivers/of/base-test.c
@@ -8,10 +8,10 @@
 
 #include "test-common.h"
 
-static void of_unittest_find_node_by_name(struct kunit *test)
+static void of_test_find_node_by_name_basic(struct kunit *test)
 {
 	struct device_node *np;
-	const char *options, *name;
+	const char *name;
 
 	np = of_find_node_by_path("/testcase-data");
 	name = kasprintf(GFP_KERNEL, "%pOF", np);
@@ -20,11 +20,21 @@ static void of_unittest_find_node_by_name(struct kunit *test)
 			       "find /testcase-data failed\n");
 	of_node_put(np);
 	kfree(name);
+}
 
+static void of_test_find_node_by_name_trailing_slash(struct kunit *test)
+{
 	/* Test if trailing '/' works */
 	KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("/testcase-data/"), NULL,
 			    "trailing '/' on /testcase-data/ should fail\n");
 
+}
+
+static void of_test_find_node_by_name_multiple_components(struct kunit *test)
+{
+	struct device_node *np;
+	const char *name;
+
 	np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
 	name = kasprintf(GFP_KERNEL, "%pOF", np);
@@ -33,6 +43,12 @@ static void of_unittest_find_node_by_name(struct kunit *test)
 			       "find /testcase-data/phandle-tests/consumer-a failed\n");
 	of_node_put(np);
 	kfree(name);
+}
+
+static void of_test_find_node_by_name_with_alias(struct kunit *test)
+{
+	struct device_node *np;
+	const char *name;
 
 	np = of_find_node_by_path("testcase-alias");
 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
@@ -41,10 +57,23 @@ static void of_unittest_find_node_by_name(struct kunit *test)
 			       "find testcase-alias failed\n");
 	of_node_put(np);
 	kfree(name);
+}
 
+static void of_test_find_node_by_name_with_alias_and_slash(struct kunit *test)
+{
 	/* Test if trailing '/' works on aliases */
 	KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("testcase-alias/"), NULL,
-			    "trailing '/' on testcase-alias/ should fail\n");
+			   "trailing '/' on testcase-alias/ should fail\n");
+}
+
+/*
+ * TODO(brendanhiggins@google.com): This looks like a duplicate of
+ * of_test_find_node_by_name_multiple_components
+ */
+static void of_test_find_node_by_name_multiple_components_2(struct kunit *test)
+{
+	struct device_node *np;
+	const char *name;
 
 	np = of_find_node_by_path("testcase-alias/phandle-tests/consumer-a");
 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
@@ -54,29 +83,60 @@ static void of_unittest_find_node_by_name(struct kunit *test)
 			       "find testcase-alias/phandle-tests/consumer-a failed\n");
 	of_node_put(np);
 	kfree(name);
+}
+
+static void of_test_find_node_by_name_missing_path(struct kunit *test)
+{
+	struct device_node *np;
 
 	KUNIT_EXPECT_EQ_MSG(test,
-			    of_find_node_by_path("/testcase-data/missing-path"),
+			    np = of_find_node_by_path(
+					    "/testcase-data/missing-path"),
 			    NULL,
-			    "non-existent path returned node %pOF\n", np);
+			   "non-existent path returned node %pOF\n", np);
 	of_node_put(np);
+}
 
-	KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("missing-alias"), NULL,
-			    "non-existent alias returned node %pOF\n", np);
+static void of_test_find_node_by_name_missing_alias(struct kunit *test)
+{
+	struct device_node *np;
+
+	KUNIT_EXPECT_EQ_MSG(test,
+			    np = of_find_node_by_path("missing-alias"), NULL,
+			   "non-existent alias returned node %pOF\n", np);
 	of_node_put(np);
+}
+
+static void of_test_find_node_by_name_missing_alias_with_relative_path(
+		struct kunit *test)
+{
+	struct device_node *np;
 
 	KUNIT_EXPECT_EQ_MSG(test,
-			    of_find_node_by_path("testcase-alias/missing-path"),
+			    np = of_find_node_by_path(
+					    "testcase-alias/missing-path"),
 			    NULL,
-			    "non-existent alias with relative path returned node %pOF\n",
-			    np);
+			   "non-existent alias with relative path returned node %pOF\n",
+			   np);
 	of_node_put(np);
+}
+
+static void of_test_find_node_by_name_with_option(struct kunit *test)
+{
+	struct device_node *np;
+	const char *options;
 
 	np = of_find_node_opts_by_path("/testcase-data:testoption", &options);
 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
 	KUNIT_EXPECT_STREQ_MSG(test, "testoption", options,
 			       "option path test failed\n");
 	of_node_put(np);
+}
+
+static void of_test_find_node_by_name_with_option_and_slash(struct kunit *test)
+{
+	struct device_node *np;
+	const char *options;
 
 	np = of_find_node_opts_by_path("/testcase-data:test/option", &options);
 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
@@ -91,11 +151,22 @@ static void of_unittest_find_node_by_name(struct kunit *test)
 	KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
 			       "option path test, subcase #2 failed\n");
 	of_node_put(np);
+}
+
+static void of_test_find_node_by_name_with_null_option(struct kunit *test)
+{
+	struct device_node *np;
 
 	np = of_find_node_opts_by_path("/testcase-data:testoption", NULL);
 	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(test, np,
 					 "NULL option path test failed\n");
 	of_node_put(np);
+}
+
+static void of_test_find_node_by_name_with_option_alias(struct kunit *test)
+{
+	struct device_node *np;
+	const char *options;
 
 	np = of_find_node_opts_by_path("testcase-alias:testaliasoption",
 				       &options);
@@ -103,6 +174,13 @@ static void of_unittest_find_node_by_name(struct kunit *test)
 	KUNIT_EXPECT_STREQ_MSG(test, "testaliasoption", options,
 			       "option alias path test failed\n");
 	of_node_put(np);
+}
+
+static void of_test_find_node_by_name_with_option_alias_and_slash(
+		struct kunit *test)
+{
+	struct device_node *np;
+	const char *options;
 
 	np = of_find_node_opts_by_path("testcase-alias:test/alias/option",
 				       &options);
@@ -110,11 +188,22 @@ static void of_unittest_find_node_by_name(struct kunit *test)
 	KUNIT_EXPECT_STREQ_MSG(test, "test/alias/option", options,
 			       "option alias path test, subcase #1 failed\n");
 	of_node_put(np);
+}
+
+static void of_test_find_node_by_name_with_null_option_alias(struct kunit *test)
+{
+	struct device_node *np;
 
 	np = of_find_node_opts_by_path("testcase-alias:testaliasoption", NULL);
-	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(test, np,
-					 "NULL option alias path test failed\n");
+	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(
+			test, np, "NULL option alias path test failed\n");
 	of_node_put(np);
+}
+
+static void of_test_find_node_by_name_option_clearing(struct kunit *test)
+{
+	struct device_node *np;
+	const char *options;
 
 	options = "testoption";
 	np = of_find_node_opts_by_path("testcase-alias", &options);
@@ -122,6 +211,12 @@ static void of_unittest_find_node_by_name(struct kunit *test)
 	KUNIT_EXPECT_EQ_MSG(test, options, NULL,
 			    "option clearing test failed\n");
 	of_node_put(np);
+}
+
+static void of_test_find_node_by_name_option_clearing_root(struct kunit *test)
+{
+	struct device_node *np;
+	const char *options;
 
 	options = "testoption";
 	np = of_find_node_opts_by_path("/", &options);
@@ -131,64 +226,147 @@ static void of_unittest_find_node_by_name(struct kunit *test)
 	of_node_put(np);
 }
 
-static void of_unittest_dynamic(struct kunit *test)
+static int of_test_find_node_by_name_init(struct kunit *test)
 {
+	/* adding data for unittest */
+	KUNIT_ASSERT_EQ(test, 0, unittest_data_add());
+
+	if (!of_aliases)
+		of_aliases = of_find_node_by_path("/aliases");
+
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, of_find_node_by_path(
+			"/testcase-data/phandle-tests/consumer-a"));
+
+	return 0;
+}
+
+static struct kunit_case of_test_find_node_by_name_cases[] = {
+	KUNIT_CASE(of_test_find_node_by_name_basic),
+	KUNIT_CASE(of_test_find_node_by_name_trailing_slash),
+	KUNIT_CASE(of_test_find_node_by_name_multiple_components),
+	KUNIT_CASE(of_test_find_node_by_name_with_alias),
+	KUNIT_CASE(of_test_find_node_by_name_with_alias_and_slash),
+	KUNIT_CASE(of_test_find_node_by_name_multiple_components_2),
+	KUNIT_CASE(of_test_find_node_by_name_missing_path),
+	KUNIT_CASE(of_test_find_node_by_name_missing_alias),
+	KUNIT_CASE(of_test_find_node_by_name_missing_alias_with_relative_path),
+	KUNIT_CASE(of_test_find_node_by_name_with_option),
+	KUNIT_CASE(of_test_find_node_by_name_with_option_and_slash),
+	KUNIT_CASE(of_test_find_node_by_name_with_null_option),
+	KUNIT_CASE(of_test_find_node_by_name_with_option_alias),
+	KUNIT_CASE(of_test_find_node_by_name_with_option_alias_and_slash),
+	KUNIT_CASE(of_test_find_node_by_name_with_null_option_alias),
+	KUNIT_CASE(of_test_find_node_by_name_option_clearing),
+	KUNIT_CASE(of_test_find_node_by_name_option_clearing_root),
+	{},
+};
+
+static struct kunit_module of_test_find_node_by_name_module = {
+	.name = "of-test-find-node-by-name",
+	.init = of_test_find_node_by_name_init,
+	.test_cases = of_test_find_node_by_name_cases,
+};
+module_test(of_test_find_node_by_name_module);
+
+struct of_test_dynamic_context {
 	struct device_node *np;
-	struct property *prop;
+	struct property *prop0;
+	struct property *prop1;
+};
 
-	np = of_find_node_by_path("/testcase-data");
-	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+static void of_test_dynamic_basic(struct kunit *test)
+{
+	struct of_test_dynamic_context *ctx = test->priv;
+	struct device_node *np = ctx->np;
+	struct property *prop0 = ctx->prop0;
 
-	/* Array of 4 properties for the purpose of testing */
-	prop = kcalloc(4, sizeof(*prop), GFP_KERNEL);
-	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, prop);
+	/* Add a new property - should pass*/
+	prop0->name = "new-property";
+	prop0->value = "new-property-data";
+	prop0->length = strlen(prop0->value) + 1;
+	KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop0), 0,
+			    "Adding a new property failed\n");
+
+	/* Test that we can remove a property */
+	KUNIT_EXPECT_EQ(test, of_remove_property(np, prop0), 0);
+}
+
+static void of_test_dynamic_add_existing_property(struct kunit *test)
+{
+	struct of_test_dynamic_context *ctx = test->priv;
+	struct device_node *np = ctx->np;
+	struct property *prop0 = ctx->prop0, *prop1 = ctx->prop1;
 
 	/* Add a new property - should pass*/
-	prop->name = "new-property";
-	prop->value = "new-property-data";
-	prop->length = strlen(prop->value) + 1;
-	KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop), 0,
+	prop0->name = "new-property";
+	prop0->value = "new-property-data";
+	prop0->length = strlen(prop0->value) + 1;
+	KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop0), 0,
 			    "Adding a new property failed\n");
 
 	/* Try to add an existing property - should fail */
-	prop++;
-	prop->name = "new-property";
-	prop->value = "new-property-data-should-fail";
-	prop->length = strlen(prop->value) + 1;
-	KUNIT_EXPECT_NE_MSG(test, of_add_property(np, prop), 0,
+	prop1->name = "new-property";
+	prop1->value = "new-property-data-should-fail";
+	prop1->length = strlen(prop1->value) + 1;
+	KUNIT_EXPECT_NE_MSG(test, of_add_property(np, prop1), 0,
 			    "Adding an existing property should have failed\n");
+}
+
+static void of_test_dynamic_modify_existing_property(struct kunit *test)
+{
+	struct of_test_dynamic_context *ctx = test->priv;
+	struct device_node *np = ctx->np;
+	struct property *prop0 = ctx->prop0, *prop1 = ctx->prop1;
+
+	/* Add a new property - should pass*/
+	prop0->name = "new-property";
+	prop0->value = "new-property-data";
+	prop0->length = strlen(prop0->value) + 1;
+	KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop0), 0,
+			    "Adding a new property failed\n");
 
 	/* Try to modify an existing property - should pass */
-	prop->value = "modify-property-data-should-pass";
-	prop->length = strlen(prop->value) + 1;
-	KUNIT_EXPECT_EQ_MSG(test, of_update_property(np, prop), 0,
+	prop1->name = "new-property";
+	prop1->value = "modify-property-data-should-pass";
+	prop1->length = strlen(prop1->value) + 1;
+	KUNIT_EXPECT_EQ_MSG(test, of_update_property(np, prop1), 0,
 			    "Updating an existing property should have passed\n");
+}
+
+static void of_test_dynamic_modify_non_existent_property(struct kunit *test)
+{
+	struct of_test_dynamic_context *ctx = test->priv;
+	struct device_node *np = ctx->np;
+	struct property *prop0 = ctx->prop0;
 
 	/* Try to modify non-existent property - should pass*/
-	prop++;
-	prop->name = "modify-property";
-	prop->value = "modify-missing-property-data-should-pass";
-	prop->length = strlen(prop->value) + 1;
-	KUNIT_EXPECT_EQ_MSG(test, of_update_property(np, prop), 0,
+	prop0->name = "modify-property";
+	prop0->value = "modify-missing-property-data-should-pass";
+	prop0->length = strlen(prop0->value) + 1;
+	KUNIT_EXPECT_EQ_MSG(test, of_update_property(np, prop0), 0,
 			    "Updating a missing property should have passed\n");
+}
 
-	/* Remove property - should pass */
-	KUNIT_EXPECT_EQ_MSG(test, of_remove_property(np, prop), 0,
-			    "Removing a property should have passed\n");
+static void of_test_dynamic_large_property(struct kunit *test)
+{
+	struct of_test_dynamic_context *ctx = test->priv;
+	struct device_node *np = ctx->np;
+	struct property *prop0 = ctx->prop0;
 
 	/* Adding very large property - should pass */
-	prop++;
-	prop->name = "large-property-PAGE_SIZEx8";
-	prop->length = PAGE_SIZE * 8;
-	prop->value = kzalloc(prop->length, GFP_KERNEL);
-	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, prop->value);
-	KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop), 0,
+	prop0->name = "large-property-PAGE_SIZEx8";
+	prop0->length = PAGE_SIZE * 8;
+	prop0->value = kunit_kzalloc(test, prop0->length, GFP_KERNEL);
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, prop0->value);
+
+	KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop0), 0,
 			    "Adding a large property should have passed\n");
 }
 
-static int of_test_init(struct kunit *test)
+static int of_test_dynamic_init(struct kunit *test)
 {
-	/* adding data for unittest */
+	struct of_test_dynamic_context *ctx;
+
 	KUNIT_ASSERT_EQ(test, 0, unittest_data_add());
 
 	if (!of_aliases)
@@ -197,18 +375,45 @@ static int of_test_init(struct kunit *test)
 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, of_find_node_by_path(
 			"/testcase-data/phandle-tests/consumer-a"));
 
+	ctx = kunit_kzalloc(test, sizeof(*ctx), GFP_KERNEL);
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx);
+	test->priv = ctx;
+
+	ctx->np = of_find_node_by_path("/testcase-data");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx->np);
+
+	ctx->prop0 = kunit_kzalloc(test, sizeof(*ctx->prop0), GFP_KERNEL);
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx->prop0);
+
+	ctx->prop1 = kunit_kzalloc(test, sizeof(*ctx->prop1), GFP_KERNEL);
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx->prop1);
+
 	return 0;
 }
 
-static struct kunit_case of_test_cases[] = {
-	KUNIT_CASE(of_unittest_find_node_by_name),
-	KUNIT_CASE(of_unittest_dynamic),
+static void of_test_dynamic_exit(struct kunit *test)
+{
+	struct of_test_dynamic_context *ctx = test->priv;
+	struct device_node *np = ctx->np;
+
+	of_remove_property(np, ctx->prop0);
+	of_remove_property(np, ctx->prop1);
+	of_node_put(np);
+}
+
+static struct kunit_case of_test_dynamic_cases[] = {
+	KUNIT_CASE(of_test_dynamic_basic),
+	KUNIT_CASE(of_test_dynamic_add_existing_property),
+	KUNIT_CASE(of_test_dynamic_modify_existing_property),
+	KUNIT_CASE(of_test_dynamic_modify_non_existent_property),
+	KUNIT_CASE(of_test_dynamic_large_property),
 	{},
 };
 
-static struct kunit_module of_test_module = {
-	.name = "of-base-test",
-	.init = of_test_init,
-	.test_cases = of_test_cases,
+static struct kunit_module of_test_dynamic_module = {
+	.name = "of-dynamic-test",
+	.init = of_test_dynamic_init,
+	.exit = of_test_dynamic_exit,
+	.test_cases = of_test_dynamic_cases,
 };
-module_test(of_test_module);
+module_test(of_test_dynamic_module);
-- 
2.20.0.rc0.387.gc7a69e6b6c-goog

^ permalink raw reply related	[flat|nested] 232+ messages in thread
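
The of_test_dynamic conversion above also demonstrates the idiomatic way to share per-case state in KUnit: build a context in the module's .init hook, publish it through test->priv, and undo any side effects in .exit. Reduced to a skeleton (a sketch under the same API assumptions as the earlier example; the context struct and its values are invented for illustration):

struct example_context {
	int value;
};

static int example_test_init(struct kunit *test)
{
	struct example_context *ctx;

	/* kunit_kzalloc() ties the allocation's lifetime to the test,
	 * which is why the patch replaces the bare kzalloc() calls.
	 */
	ctx = kunit_kzalloc(test, sizeof(*ctx), GFP_KERNEL);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx);

	ctx->value = 42;
	test->priv = ctx;	/* handed to every case in the module */

	return 0;
}

static void example_test_exit(struct kunit *test)
{
	/* Undo side effects here; test-managed allocations need no kfree(). */
}

static void example_context_case(struct kunit *test)
{
	struct example_context *ctx = test->priv;

	KUNIT_EXPECT_EQ(test, ctx->value, 42);
}

static struct kunit_case example_context_cases[] = {
	KUNIT_CASE(example_context_case),
	{},
};

static struct kunit_module example_context_module = {
	.name = "example-context-test",
	.init = example_test_init,
	.exit = example_test_exit,
	.test_cases = example_context_cases,
};
module_test(example_context_module);

Splitting state out this way is what lets each of the five of_test_dynamic cases run, and fail, independently, where the old monolith had to thread one prop pointer through every step.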

* [RFC v3 16/19] arch: um: make UML unflatten device tree when testing
  2018-11-28 19:36 ` [RFC v3 16/19] arch: um: make UML unflatten device tree when testing brendanhiggins
  2018-11-28 19:36   ` Brendan Higgins
@ 2018-11-28 21:16   ` robh
  2018-11-28 21:16     ` Rob Herring
  2018-12-04  0:00     ` brendanhiggins
  2018-11-30  3:46   ` mcgrof
  2 siblings, 2 replies; 232+ messages in thread
From: robh @ 2018-11-28 21:16 UTC (permalink / raw)


On Wed, Nov 28, 2018 at 1:38 PM Brendan Higgins
<brendanhiggins at google.com> wrote:
>
> Make UML unflatten any present device trees when running KUnit tests.
>
> Signed-off-by: Brendan Higgins <brendanhiggins at google.com>
> ---
>  arch/um/kernel/um_arch.c | 4 ++++
>  1 file changed, 4 insertions(+)
>
> diff --git a/arch/um/kernel/um_arch.c b/arch/um/kernel/um_arch.c
> index a818ccef30ca2..bd58ae3bf4148 100644
> --- a/arch/um/kernel/um_arch.c
> +++ b/arch/um/kernel/um_arch.c
> @@ -13,6 +13,7 @@
>  #include <linux/sched.h>
>  #include <linux/sched/task.h>
>  #include <linux/kmsg_dump.h>
> +#include <linux/of_fdt.h>
>
>  #include <asm/pgtable.h>
>  #include <asm/processor.h>
> @@ -347,6 +348,9 @@ void __init setup_arch(char **cmdline_p)
>         read_initrd();
>
>         paging_init();
> +#if IS_ENABLED(CONFIG_OF_UNITTEST)
> +       unflatten_device_tree();
> +#endif

Kind of strange to have this in the arch code. I'd rather have this in
the unittest code if possible. Can we have an initcall, conditional on
CONFIG_UM, in the unittest code do this? Side note: use a C if with
IS_ENABLED() whenever possible instead of a pre-processor #if.
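
A minimal sketch of what that could look like, assuming the unittest
code is the right home for it (the initcall name and level below are
illustrative, not taken from any posted patch):

	#include <linux/init.h>
	#include <linux/of_fdt.h>

	static int __init of_unittest_unflatten_uml(void)
	{
		/*
		 * UML does not unflatten a device tree during early boot,
		 * so do it here before the OF unittests run.
		 */
		if (IS_ENABLED(CONFIG_UM))
			unflatten_device_tree();
		return 0;
	}
	early_initcall(of_unittest_unflatten_uml);

That would keep arch/um/kernel/um_arch.c untouched and keep the
device-tree-specific setup next to the tests that need it.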

I'll take a fix separately as it was on my todo to fix. I've got the
unit tests running in a gitlab CI job now[1].

Rob

[1] https://gitlab.com/robherring/linux-dt-unittest/pipelines

^ permalink raw reply	[flat|nested] 232+ messages in thread

* [RFC v3 06/19] arch: um: enable running kunit from User Mode Linux
  2018-11-28 19:36 ` [RFC v3 06/19] arch: um: enable running kunit from User Mode Linux brendanhiggins
  2018-11-28 19:36   ` Brendan Higgins
@ 2018-11-28 21:26   ` robh
  2018-11-28 21:26     ` Rob Herring
  2018-11-30  3:37     ` mcgrof
  2018-11-30  3:30   ` mcgrof
  2 siblings, 2 replies; 232+ messages in thread
From: robh @ 2018-11-28 21:26 UTC (permalink / raw)


On Wed, Nov 28, 2018 at 1:37 PM Brendan Higgins
<brendanhiggins at google.com> wrote:
>
> Make minimum number of changes outside of the KUnit directories for
> KUnit to build and run using UML.

There's nothing in this patch limiting this to UML. Only patch 1 does
that, and I would remove that depends. I'd guess most folks will want
to run under something other than UML. DRM, for instance (though the
virtual KMS stuff may work in UML?).

Plus you want to make sure this all builds with allmodconfig for x86
(or ARM) because those get the most (and quickest) compile coverage.

Rob

^ permalink raw reply	[flat|nested] 232+ messages in thread

* [RFC v3 11/19] kunit: add Python libraries for handing KUnit config and kernel
  2018-11-28 19:36 ` [RFC v3 11/19] kunit: add Python libraries for handing KUnit config and kernel brendanhiggins
  2018-11-28 19:36   ` Brendan Higgins
@ 2018-11-29 13:54   ` kieran.bingham
  2018-11-29 13:54     ` Kieran Bingham
  2018-12-03 23:48     ` brendanhiggins
  2018-11-30  3:44   ` mcgrof
  2 siblings, 2 replies; 232+ messages in thread
From: kieran.bingham @ 2018-11-29 13:54 UTC (permalink / raw)


Hi Brendan,

Thanks again for this series!

On 28/11/2018 19:36, Brendan Higgins wrote:
> The ultimate goal is to create minimal isolated test binaries; in the
> meantime we are using UML to provide the infrastructure to run tests, so
> define an abstract way to configure and run tests that allow us to
> change the context in which tests are built without affecting the user.
> This also makes pretty and dynamic error reporting, and a lot of other
> nice features easier.


I wonder if we could somehow generate a shared library object
'libkernel' or 'libumlinux' from a UM-configured set of headers and
objects so that we could create binary targets directly?


> kunit_config.py:
>   - parse .config and Kconfig files.
> 
> kunit_kernel.py: provides helper functions to:
>   - configure the kernel using kunitconfig.
>   - build the kernel with the appropriate configuration.
>   - provide function to invoke the kernel and stream the output back.
> 
> Signed-off-by: Felix Guo <felixguoxiuping at gmail.com>
> Signed-off-by: Brendan Higgins <brendanhiggins at google.com>
> ---
>  tools/testing/kunit/.gitignore      |   3 +
>  tools/testing/kunit/kunit_config.py |  60 +++++++++++++
>  tools/testing/kunit/kunit_kernel.py | 126 ++++++++++++++++++++++++++++
>  3 files changed, 189 insertions(+)
>  create mode 100644 tools/testing/kunit/.gitignore
>  create mode 100644 tools/testing/kunit/kunit_config.py
>  create mode 100644 tools/testing/kunit/kunit_kernel.py
> 
> diff --git a/tools/testing/kunit/.gitignore b/tools/testing/kunit/.gitignore
> new file mode 100644
> index 0000000000000..c791ff59a37a9
> --- /dev/null
> +++ b/tools/testing/kunit/.gitignore
> @@ -0,0 +1,3 @@
> +# Byte-compiled / optimized / DLL files
> +__pycache__/
> +*.py[cod]
> \ No newline at end of file
> diff --git a/tools/testing/kunit/kunit_config.py b/tools/testing/kunit/kunit_config.py
> new file mode 100644
> index 0000000000000..183bd5e758762
> --- /dev/null
> +++ b/tools/testing/kunit/kunit_config.py
> @@ -0,0 +1,60 @@
> +# SPDX-License-Identifier: GPL-2.0
> +
> +import collections
> +import re
> +
> +CONFIG_IS_NOT_SET_PATTERN = r'^# CONFIG_\w+ is not set$'
> +CONFIG_PATTERN = r'^CONFIG_\w+=\S+$'
> +
> +KconfigEntryBase = collections.namedtuple('KconfigEntry', ['raw_entry'])
> +
> +
> +class KconfigEntry(KconfigEntryBase):
> +
> +	def __str__(self) -> str:
> +		return self.raw_entry
> +
> +
> +class KconfigParseError(Exception):
> +	"""Error parsing Kconfig defconfig or .config."""
> +
> +
> +class Kconfig(object):
> +	"""Represents defconfig or .config specified using the Kconfig language."""
> +
> +	def __init__(self):
> +		self._entries = []
> +
> +	def entries(self):
> +		return set(self._entries)
> +
> +	def add_entry(self, entry: KconfigEntry) -> None:
> +		self._entries.append(entry)
> +
> +	def is_subset_of(self, other: "Kconfig") -> bool:
> +		return self.entries().issubset(other.entries())
> +
> +	def write_to_file(self, path: str) -> None:
> +		with open(path, 'w') as f:
> +			for entry in self.entries():
> +				f.write(str(entry) + '\n')
> +
> +	def parse_from_string(self, blob: str) -> None:
> +		"""Parses a string containing KconfigEntrys and populates this Kconfig."""
> +		self._entries = []
> +		is_not_set_matcher = re.compile(CONFIG_IS_NOT_SET_PATTERN)
> +		config_matcher = re.compile(CONFIG_PATTERN)
> +		for line in blob.split('\n'):
> +			line = line.strip()
> +			if not line:
> +				continue
> +			elif config_matcher.match(line) or is_not_set_matcher.match(line):
> +				self._entries.append(KconfigEntry(line))
> +			elif line[0] == '#':
> +				continue
> +			else:
> +				raise KconfigParseError('Failed to parse: ' + line)
> +
> +	def read_from_file(self, path: str) -> None:
> +		with open(path, 'r') as f:
> +			self.parse_from_string(f.read())
> diff --git a/tools/testing/kunit/kunit_kernel.py b/tools/testing/kunit/kunit_kernel.py
> new file mode 100644
> index 0000000000000..bba7ea7ca1869
> --- /dev/null
> +++ b/tools/testing/kunit/kunit_kernel.py
> @@ -0,0 +1,126 @@
> +# SPDX-License-Identifier: GPL-2.0
> +
> +import logging
> +import subprocess
> +import os
> +
> +import kunit_config
> +
> +KCONFIG_PATH = '.config'
> +
> +class ConfigError(Exception):
> +	"""Represents an error trying to configure the Linux kernel."""
> +
> +
> +class BuildError(Exception):
> +	"""Represents an error trying to build the Linux kernel."""
> +
> +
> +class LinuxSourceTreeOperations(object):
> +	"""An abstraction over command line operations performed on a source tree."""
> +
> +	def make_mrproper(self):
> +		try:
> +			subprocess.check_output(['make', 'mrproper'])
> +		except OSError as e:
> +			raise ConfigError('Could not call make command: ' + str(e))
> +		except subprocess.CalledProcessError as e:
> +			raise ConfigError(e.output)
> +
> +	def make_olddefconfig(self):
> +		try:
> +			subprocess.check_output(['make', 'ARCH=um', 'olddefconfig'])
> +		except OSError as e:
> +			raise ConfigError('Could not call make command: ' + str(e))
> +		except subprocess.CalledProcessError as e:
> +			raise ConfigError(e.output)
> +
> +	def make(self, jobs):
> +		try:
> +			subprocess.check_output([
> +					'make',
> +					'ARCH=um',
> +					'--jobs=' + str(jobs)])

Perhaps as a future extension:

It would be nice if we could set an O= here to keep the source tree
pristine.

In fact I might even suggest that this should always be set so that the
unit testing could live alongside an existing kernel build, e.g.:

 O ?= $KBUILD_SRC/
 O := $(O)/kunittest/$(ARCH)/build


> +		except OSError as e:
> +			raise BuildError('Could not execute make: ' + str(e))
> +		except subprocess.CalledProcessError as e:
> +			raise BuildError(e.output)
> +
> +	def linux_bin(self, params, timeout):
> +		"""Runs the Linux UML binary. Must be named 'linux'."""
> +		process = subprocess.Popen(
> +			['./linux'] + params,
> +			stdin=subprocess.PIPE,
> +			stdout=subprocess.PIPE,
> +			stderr=subprocess.PIPE)
> +		process.wait(timeout=timeout)
> +		return process
> +
> +
> +class LinuxSourceTree(object):
> +	"""Represents a Linux kernel source tree with KUnit tests."""
> +
> +	def __init__(self):
> +		self._kconfig = kunit_config.Kconfig()
> +		self._kconfig.read_from_file('kunitconfig')
> +		self._ops = LinuxSourceTreeOperations()
> +
> +	def clean(self):
> +		try:
> +			self._ops.make_mrproper()
> +		except ConfigError as e:
> +			logging.error(e)
> +			return False
> +		return True
> +
> +	def build_config(self):
> +		self._kconfig.write_to_file(KCONFIG_PATH)
> +		try:
> +			self._ops.make_olddefconfig()
> +		except ConfigError as e:
> +			logging.error(e)
> +			return False
> +		validated_kconfig = kunit_config.Kconfig()
> +		validated_kconfig.read_from_file(KCONFIG_PATH)
> +		if not self._kconfig.is_subset_of(validated_kconfig):
> +			logging.error('Provided Kconfig is not contained in validated .config!')
> +			return False
> +		return True
> +
> +	def build_reconfig(self):
> +		"""Creates a new .config if it is not a subset of the kunitconfig."""
> +		if os.path.exists(KCONFIG_PATH):
> +			existing_kconfig = kunit_config.Kconfig()
> +			existing_kconfig.read_from_file(KCONFIG_PATH)
> +			if not self._kconfig.is_subset_of(existing_kconfig):
> +				print('Regenerating .config ...')
> +				os.remove(KCONFIG_PATH)
> +				return self.build_config()
> +			else:
> +				return True
> +		else:
> +			print('Generating .config ...')
> +			return self.build_config()
> +
> +	def build_um_kernel(self, jobs):
> +		try:
> +			self._ops.make_olddefconfig()
> +			self._ops.make(jobs)
> +		except (ConfigError, BuildError) as e:
> +			logging.error(e)
> +			return False
> +		used_kconfig = kunit_config.Kconfig()
> +		used_kconfig.read_from_file(KCONFIG_PATH)
> +		if not self._kconfig.is_subset_of(used_kconfig):
> +			logging.error('Provided Kconfig is not contained in final config!')
> +			return False
> +		return True
> +
> +	def run_kernel(self, args=None):
> +		args = list(args) if args else []
> +		timeout = None
> +		args.extend(['mem=256M'])
> +		process = self._ops.linux_bin(args, timeout)
> +		with open('test.log', 'w') as f:
> +			for line in process.stdout:
> +				f.write(line.rstrip().decode('ascii') + '\n')
> +				yield line.rstrip().decode('ascii')
> 

-- 
Regards
--
Kieran

^ permalink raw reply	[flat|nested] 232+ messages in thread

* [RFC v3 14/19] Documentation: kunit: add documentation for KUnit
  2018-11-28 19:36 ` [RFC v3 14/19] Documentation: kunit: add documentation for KUnit brendanhiggins
  2018-11-28 19:36   ` Brendan Higgins
@ 2018-11-29 13:56   ` kieran.bingham
  2018-11-29 13:56     ` Kieran Bingham
  2018-11-30  3:45     ` mcgrof
  1 sibling, 2 replies; 232+ messages in thread
From: kieran.bingham @ 2018-11-29 13:56 UTC (permalink / raw)


Hi Brendan,

Please excuse the top posting, but I'm replying here as I'm following
the section "Creating a kunitconfig" in Documentation/kunit/start.rst.

Could the three-line kunitconfig file live under, say,
	 arch/um/configs/kunit_defconfig?

So that it's always provided? And it could even be extended with tests
which people would expect to be run by default (say, in distributions)?

--
Kieran




On 28/11/2018 19:36, Brendan Higgins wrote:
> Add documentation for KUnit, the Linux kernel unit testing framework.
> - Add intro and usage guide for KUnit
> - Add API reference
> 
> Signed-off-by: Felix Guo <felixguoxiuping at gmail.com>
> Signed-off-by: Brendan Higgins <brendanhiggins at google.com>
> ---
>  Documentation/index.rst           |   1 +
>  Documentation/kunit/api/index.rst |  16 ++
>  Documentation/kunit/api/test.rst  |  15 +
>  Documentation/kunit/faq.rst       |  46 +++
>  Documentation/kunit/index.rst     |  80 ++++++
>  Documentation/kunit/start.rst     | 180 ++++++++++++
>  Documentation/kunit/usage.rst     | 447 ++++++++++++++++++++++++++++++
>  7 files changed, 785 insertions(+)
>  create mode 100644 Documentation/kunit/api/index.rst
>  create mode 100644 Documentation/kunit/api/test.rst
>  create mode 100644 Documentation/kunit/faq.rst
>  create mode 100644 Documentation/kunit/index.rst
>  create mode 100644 Documentation/kunit/start.rst
>  create mode 100644 Documentation/kunit/usage.rst
> 
> diff --git a/Documentation/index.rst b/Documentation/index.rst
> index 5db7e87c7cb1d..275ef4db79f61 100644
> --- a/Documentation/index.rst
> +++ b/Documentation/index.rst
> @@ -68,6 +68,7 @@ merged much easier.
>     kernel-hacking/index
>     trace/index
>     maintainer/index
> +   kunit/index
>  
>  Kernel API documentation
>  ------------------------
> diff --git a/Documentation/kunit/api/index.rst b/Documentation/kunit/api/index.rst
> new file mode 100644
> index 0000000000000..c31c530088153
> --- /dev/null
> +++ b/Documentation/kunit/api/index.rst
> @@ -0,0 +1,16 @@
> +.. SPDX-License-Identifier: GPL-2.0
> +
> +=============
> +API Reference
> +=============
> +.. toctree::
> +
> +	test
> +
> +This section documents the KUnit kernel testing API. It is divided into the
> +following sections:
> +
> +================================= ==============================================
> +:doc:`test`                       documents all of the standard testing API
> +                                  excluding mocking or mocking related features.
> +================================= ==============================================
> diff --git a/Documentation/kunit/api/test.rst b/Documentation/kunit/api/test.rst
> new file mode 100644
> index 0000000000000..7c926014f047c
> --- /dev/null
> +++ b/Documentation/kunit/api/test.rst
> @@ -0,0 +1,15 @@
> +.. SPDX-License-Identifier: GPL-2.0
> +
> +========
> +Test API
> +========
> +
> +This file documents all of the standard testing API excluding mocking or mocking
> +related features.
> +
> +.. kernel-doc:: include/kunit/test.h
> +   :internal:
> +
> +.. kernel-doc:: include/kunit/kunit-stream.h
> +   :internal:
> +
> diff --git a/Documentation/kunit/faq.rst b/Documentation/kunit/faq.rst
> new file mode 100644
> index 0000000000000..cb8e4fb2257a0
> --- /dev/null
> +++ b/Documentation/kunit/faq.rst
> @@ -0,0 +1,46 @@
> +.. SPDX-License-Identifier: GPL-2.0
> +
> +=========================================
> +Frequently Asked Questions
> +=========================================
> +
> +How is this different from Autotest, kselftest, etc?
> +====================================================
> +KUnit is a unit testing framework. Autotest, kselftest (and some others) are
> +not.
> +
> +A `unit test <https://martinfowler.com/bliki/UnitTest.html>`_ is supposed to
> +test a single unit of code in isolation, hence the name. A unit test should be
> +the finest granularity of testing and as such should allow all possible code
> +paths to be tested in the code under test; this is only possible if the code
> +under test is very small and does not have any external dependencies outside of
> +the test's control like hardware.
> +
> +There are no testing frameworks currently available for the kernel that do not
> +require installing the kernel on a test machine or in a VM and all require
> +tests to be written in userspace and run on the kernel under test; this is true
> +for Autotest, kselftest, and some others, disqualifying any of them from being
> +considered unit testing frameworks.
> +
> +What is the difference between a unit test and these other kinds of tests?
> +==========================================================================
> +Most existing tests for the Linux kernel would be categorized as an integration
> +test, or an end-to-end test.
> +
> +- A unit test is supposed to test a single unit of code in isolation, hence the
> +  name. A unit test should be the finest granularity of testing and as such
> +  should allow all possible code paths to be tested in the code under test; this
> +  is only possible if the code under test is very small and does not have any
> +  external dependencies outside of the test's control like hardware.
> +- An integration test tests the interaction between a minimal set of components,
> +  usually just two or three. For example, someone might write an integration
> +  test to test the interaction between a driver and a piece of hardware, or to
> +  test the interaction between the userspace libraries the kernel provides and
> +  the kernel itself; however, one of these tests would probably not test the
> +  entire kernel along with hardware interactions and interactions with the
> +  userspace.
> +- An end-to-end test usually tests the entire system from the perspective of the
> +  code under test. For example, someone might write an end-to-end test for the
> +  kernel by installing a production configuration of the kernel on production
> +  hardware with a production userspace and then trying to exercise some behavior
> +  that depends on interactions between the hardware, the kernel, and userspace.
> diff --git a/Documentation/kunit/index.rst b/Documentation/kunit/index.rst
> new file mode 100644
> index 0000000000000..c6710211b647f
> --- /dev/null
> +++ b/Documentation/kunit/index.rst
> @@ -0,0 +1,80 @@
> +.. SPDX-License-Identifier: GPL-2.0
> +
> +=========================================
> +KUnit - Unit Testing for the Linux Kernel
> +=========================================
> +
> +.. toctree::
> +	:maxdepth: 2
> +
> +	start
> +	usage
> +	api/index
> +	faq
> +
> +What is KUnit?
> +==============
> +
> +KUnit is a lightweight unit testing and mocking framework for the Linux kernel.
> +These tests are able to be run locally on a developer's workstation without a VM
> +or special hardware.
> +
> +KUnit is heavily inspired by JUnit, Python's unittest.mock, and
> +Googletest/Googlemock for C++. KUnit provides facilities for defining unit test
> +cases, grouping related test cases into test suites, providing common
> +infrastructure for running tests, and much more.
> +
> +Get started now: :doc:`start`
> +
> +Why KUnit?
> +==========
> +
> +A unit test is supposed to test a single unit of code in isolation, hence the
> +name. A unit test should be the finest granularity of testing and as such should
> +allow all possible code paths to be tested in the code under test; this is only
> +possible if the code under test is very small and does not have any external
> +dependencies outside of the test's control like hardware.
> +
> +Outside of KUnit, there are no testing frameworks currently
> +available for the kernel that do not require installing the kernel on a test
> +machine or in a VM and all require tests to be written in userspace running on
> +the kernel; this is true for Autotest and kselftest, disqualifying
> +any of them from being considered unit testing frameworks.
> +
> +KUnit addresses the problem of being able to run tests without needing a virtual
> +machine or actual hardware with User Mode Linux. User Mode Linux is a Linux
> +architecture, like ARM or x86; however, unlike other architectures it compiles
> +to a standalone program that can be run like any other program directly inside
> +of a host operating system; to be clear, it does not require any virtualization
> +support; it is just a regular program.
> +
> +KUnit is fast. Excluding build time, from invocation to completion KUnit can run
> +several dozen tests in only 10 to 20 seconds; this might not sound like a big
> +deal to some people, but having such fast and easy to run tests fundamentally
> +changes the way you go about testing and even writing code in the first place.
> +Linus himself said in his `git talk at Google
> +<https://gist.github.com/lorn/1272686/revisions#diff-53c65572127855f1b003db4064a94573R874>`_:
> +
> +	"... a lot of people seem to think that performance is about doing the
> +	same thing, just doing it faster, and that is not true. That is not what
> +	performance is all about. If you can do something really fast, really
> +	well, people will start using it differently."
> +
> +In this context Linus was talking about branching and merging,
> +but this point also applies to testing. If your tests are slow, unreliable, are
> +difficult to write, and require a special setup or special hardware to run,
> +then you wait a lot longer to write tests, and you wait a lot longer to run
> +tests; this means that tests are likely to break, unlikely to test a lot of
> +things, and are unlikely to be rerun once they pass. If your tests are really
> +fast, you run them all the time, every time you make a change, and every time
> +someone sends you some code. Why trust that someone ran all their tests
> +correctly on every change when you can just run them yourself in less time than
> +it takes to read his / her test log?
> +
> +How do I use it?
> +===================
> +
> +*   :doc:`start` - for new users of KUnit
> +*   :doc:`usage` - for a more detailed explanation of KUnit features
> +*   :doc:`api/index` - for the list of KUnit APIs used for testing
> +
> diff --git a/Documentation/kunit/start.rst b/Documentation/kunit/start.rst
> new file mode 100644
> index 0000000000000..5cdba5091905e
> --- /dev/null
> +++ b/Documentation/kunit/start.rst
> @@ -0,0 +1,180 @@
> +.. SPDX-License-Identifier: GPL-2.0
> +
> +===============
> +Getting Started
> +===============
> +
> +Installing dependencies
> +=======================
> +KUnit has the same dependencies as the Linux kernel. As long as you can build
> +the kernel, you can run KUnit.
> +
> +KUnit Wrapper
> +=============
> +Included with KUnit is a simple Python wrapper that helps format the output to
> +easily use and read KUnit output. It handles building and running the kernel, as
> +well as formatting the output.
> +
> +The wrapper can be run with:
> +
> +.. code-block:: bash
> +
> +   ./tools/testing/kunit/kunit.py
> +
> +Creating a kunitconfig
> +======================
> +The Python script is a thin wrapper around Kbuild; as such, it needs to be
> +configured with a ``kunitconfig`` file. This file essentially contains the
> +regular kernel config, with the specific test targets as well.
> +
> +.. code-block:: bash
> +
> +	git clone -b master https://kunit.googlesource.com/kunitconfig $PATH_TO_KUNITCONFIG_REPO
> +	cd $PATH_TO_LINUX_REPO
> +	ln -s $PATH_TO_KUNITCONFIG_REPO/kunitconfig kunitconfig
> +
> +You may want to add kunitconfig to your local gitignore.
> +
> +Verifying KUnit Works
> +-------------------------
> +
> +To make sure that everything is set up correctly, simply invoke the Python
> +wrapper from your kernel repo:
> +
> +.. code-block:: bash
> +
> +	./tools/testing/kunit/kunit.py
> +
> +.. note::
> +   You may want to run ``make mrproper`` first.
> +
> +If everything worked correctly, you should see the following:
> +
> +.. code-block:: bash
> +
> +	Generating .config ...
> +	Building KUnit Kernel ...
> +	Starting KUnit Kernel ...
> +
> +followed by a list of tests that are run. All of them should be passing.
> +
> +.. note::
> +   Because it is building a lot of sources for the first time, the ``Building
> +   KUnit Kernel`` step may take a while.
> +
> +Writing your first test
> +==========================
> +
> +In your kernel repo let's add some code that we can test. Create a file
> +``drivers/misc/example.h`` with the contents:
> +
> +.. code-block:: c
> +
> +	int misc_example_add(int left, int right);
> +
> +create a file ``drivers/misc/example.c``:
> +
> +.. code-block:: c
> +
> +	#include <linux/errno.h>
> +
> +	#include "example.h"
> +
> +	int misc_example_add(int left, int right)
> +	{
> +		return left + right;
> +	}
> +
> +Now add the following lines to ``drivers/misc/Kconfig``:
> +
> +.. code-block:: kconfig
> +
> +	config MISC_EXAMPLE
> +		bool "My example"
> +
> +and the following lines to ``drivers/misc/Makefile``:
> +
> +.. code-block:: make
> +
> +	obj-$(CONFIG_MISC_EXAMPLE) += example.o
> +
> +Now we are ready to write the test. The test will be in
> +``drivers/misc/example-test.c``:
> +
> +.. code-block:: c
> +
> +	#include <kunit/test.h>
> +	#include "example.h"
> +
> +	/* Define the test cases. */
> +
> +	static void misc_example_add_test_basic(struct kunit *test)
> +	{
> +		KUNIT_EXPECT_EQ(test, 1, misc_example_add(1, 0));
> +		KUNIT_EXPECT_EQ(test, 2, misc_example_add(1, 1));
> +		KUNIT_EXPECT_EQ(test, 0, misc_example_add(-1, 1));
> +		KUNIT_EXPECT_EQ(test, INT_MAX, misc_example_add(0, INT_MAX));
> +		KUNIT_EXPECT_EQ(test, -1, misc_example_add(INT_MAX, INT_MIN));
> +	}
> +
> +	static void misc_example_test_failure(struct kunit *test)
> +	{
> +		KUNIT_FAIL(test, "This test never passes.");
> +	}
> +
> +	static struct kunit_case misc_example_test_cases[] = {
> +		KUNIT_CASE(misc_example_add_test_basic),
> +		KUNIT_CASE(misc_example_test_failure),
> +		{},
> +	};
> +
> +	static struct kunit_module misc_example_test_module = {
> +		.name = "misc-example",
> +		.test_cases = misc_example_test_cases,
> +	};
> +	module_test(misc_example_test_module);
> +
> +Now add the following to ``drivers/misc/Kconfig``:
> +
> +.. code-block:: kconfig
> +
> +	config MISC_EXAMPLE_TEST
> +		bool "Test for my example"
> +		depends on MISC_EXAMPLE && KUNIT
> +
> +and the following to ``drivers/misc/Makefile``:
> +
> +.. code-block:: make
> +
> +	obj-$(CONFIG_MISC_EXAMPLE_TEST) += example-test.o
> +
> +Now add it to your ``kunitconfig``:
> +
> +.. code-block:: none
> +
> +	CONFIG_MISC_EXAMPLE=y
> +	CONFIG_MISC_EXAMPLE_TEST=y
> +
> +Now you can run the test:
> +
> +.. code-block:: bash
> +
> +	./tools/testing/kunit/kunit.py
> +
> +You should see the following failure:
> +
> +.. code-block:: none
> +
> +	...
> +	[16:08:57] [PASSED] misc-example:misc_example_add_test_basic
> +	[16:08:57] [FAILED] misc-example:misc_example_test_failure
> +	[16:08:57] EXPECTATION FAILED at drivers/misc/example-test.c:17
> +	[16:08:57] 	This test never passes.
> +	...
> +
> +Congrats! You just wrote your first KUnit test!
> +
> +Next Steps
> +=============
> +*   Check out the :doc:`usage` page for a more
> +    in-depth explanation of KUnit.
> diff --git a/Documentation/kunit/usage.rst b/Documentation/kunit/usage.rst
> new file mode 100644
> index 0000000000000..96ef7f9a1add4
> --- /dev/null
> +++ b/Documentation/kunit/usage.rst
> @@ -0,0 +1,447 @@
> +.. SPDX-License-Identifier: GPL-2.0
> +
> +=============
> +Using KUnit
> +=============
> +
> +The purpose of this document is to describe what KUnit is, how it works, how it
> +is intended to be used, and all the concepts and terminology that are needed to
> +understand it. This guide assumes a working knowledge of the Linux kernel and
> +some basic knowledge of testing.
> +
> +For a high level introduction to KUnit, including setting up KUnit for your
> +project, see :doc:`start`.
> +
> +Organization of this document
> +=================================
> +
> +This document is organized into two main sections: Testing and Isolating
> +Behavior. The first covers what a unit test is and how to use KUnit to write
> +them. The second covers how to use KUnit to isolate code and make it possible
> +to unit test code that was otherwise un-unit-testable.
> +
> +Testing
> +==========
> +
> +What is KUnit?
> +------------------
> +
> +"K" is short for "kernel" so "KUnit" is the "(Linux) Kernel Unit Testing
> +Framework." KUnit is intended first and foremost for writing unit tests; it is
> +general enough that it can be used to write integration tests; however, this is
> +a secondary goal. KUnit has no ambition of being the only testing framework for
> +the kernel; for example, it does not intend to be an end-to-end testing
> +framework.
> +
> +What is Unit Testing?
> +-------------------------
> +
> +A `unit test <https://martinfowler.com/bliki/UnitTest.html>`_ is a test that
> +tests code at the smallest possible scope, a *unit* of code. In the C
> +programming language that's a function.
> +
> +Unit tests should be written for all the publicly exposed functions in a
> +compilation unit; that is, all the functions that are exported as part of a
> +*class* (defined below) and all functions which are **not** static.
> +
> +Writing Tests
> +-------------
> +
> +Test Cases
> +~~~~~~~~~~
> +
> +The fundamental unit in KUnit is the test case. A test case is a function with
> +the signature ``void (*)(struct kunit *test)``. It calls a function to be tested
> +and then sets *expectations* for what should happen. For example:
> +
> +.. code-block:: c
> +
> +	void example_test_success(struct kunit *test)
> +	{
> +	}
> +
> +	void example_test_failure(struct kunit *test)
> +	{
> +		KUNIT_FAIL(test, "This test never passes.");
> +	}
> +
> +In the above example ``example_test_success`` always passes because it does
> +nothing; no expectations are set, so all expectations pass. On the other hand
> +``example_test_failure`` always fails because it calls ``KUNIT_FAIL``, which is
> +a special expectation that logs a message and causes the test case to fail.
> +
> +Expectations
> +~~~~~~~~~~~~
> +An *expectation* is a way to specify that you expect a piece of code to do
> +something in a test. An expectation is called like a function. A test is made
> +by setting expectations about the behavior of a piece of code under test; when
> +one or more of the expectations fail, the test case fails and information about
> +the failure is logged. For example:
> +
> +.. code-block:: c
> +
> +	void add_test_basic(struct kunit *test)
> +	{
> +		KUNIT_EXPECT_EQ(test, 1, add(1, 0));
> +		KUNIT_EXPECT_EQ(test, 2, add(1, 1));
> +	}
> +
> +In the above example ``add_test_basic`` makes a number of expectations about
> +the behavior of a function called ``add``; the first parameter is always of
> +type ``struct kunit *``, which contains information about the current test
> +context; the second parameter, in this case, is what the value is expected to
> +be; the last value is what the value actually is. If ``add`` passes all of
> +these expectations, the test case ``add_test_basic`` will pass; if any one of
> +these expectations fails, the test case will fail.
> +
> +It is important to understand that a test case *fails* when any expectation is
> +violated; however, the test continues running, potentially checking other
> +expectations, until the test case ends or is otherwise terminated. This is in
> +contrast to *assertions*, which are discussed later.
> +
> +To learn about more expectations supported by KUnit, see :doc:`api/test`.
> +
> +.. note::
> +   A single test case should be short, easy to understand, and focused on a
> +   single behavior.
> +
> +For example, if we wanted to properly test the add function above, we would
> +create additional test cases which would each test a different property that
> +an add function should have, like this:
> +
> +.. code-block:: c
> +
> +	void add_test_basic(struct kunit *test)
> +	{
> +		KUNIT_EXPECT_EQ(test, 1, add(1, 0));
> +		KUNIT_EXPECT_EQ(test, 2, add(1, 1));
> +	}
> +
> +	void add_test_negative(struct kunit *test)
> +	{
> +		KUNIT_EXPECT_EQ(test, 0, add(-1, 1));
> +	}
> +
> +	void add_test_max(struct kunit *test)
> +	{
> +		KUNIT_EXPECT_EQ(test, INT_MAX, add(0, INT_MAX));
> +		KUNIT_EXPECT_EQ(test, -1, add(INT_MAX, INT_MIN));
> +	}
> +
> +	void add_test_overflow(struct kunit *test)
> +	{
> +		KUNIT_EXPECT_EQ(test, INT_MIN, add(INT_MAX, 1));
> +	}
> +
> +Notice how it is immediately obvious which properties we are testing for.
> +
> +Assertions
> +~~~~~~~~~~
> +
> +KUnit also has the concept of an *assertion*. An assertion is just like an
> +expectation except the assertion immediately terminates the test case if it is
> +not satisfied.
> +
> +For example:
> +
> +.. code-block:: c
> +
> +	static void mock_test_do_expect_default_return(struct kunit *test)
> +	{
> +		struct mock_test_context *ctx = test->priv;
> +		struct mock *mock = ctx->mock;
> +		int param0 = 5, param1 = -5;
> +		const char *two_param_types[] = {"int", "int"};
> +		const void *two_params[] = {&param0, &param1};
> +		const void *ret;
> +
> +		ret = mock->do_expect(mock,
> +				      "test_printk", test_printk,
> +				      two_param_types, two_params,
> +				      ARRAY_SIZE(two_params));
> +		KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ret);
> +		KUNIT_EXPECT_EQ(test, -4, *((int *) ret));
> +	}
> +
> +In this example, the method under test should return a pointer to a value, so
> +if the pointer returned by the method is null or an errno, we don't want to
> +bother continuing the test since the following expectation could crash the test
> +case. ``KUNIT_ASSERT_NOT_ERR_OR_NULL(...)`` allows us to bail out of the test
> +case if the appropriate conditions have not been satisfied to complete the
> +test.
> +
> +Modules / Test Suites
> +~~~~~~~~~~~~~~~~~~~~~
> +
> +Now obviously one unit test isn't very helpful; the power comes from having
> +many test cases covering all of a unit's behaviors. Consequently it is common
> +to have many *similar* tests; in order to reduce duplication in these closely
> +related tests, most unit testing frameworks provide the concept of a *test
> +suite*, which in KUnit is called a *test module*: a collection of test cases
> +for a unit of code, with a set up function that gets invoked before every test
> +case and a tear down function that gets invoked after every test case
> +completes.
> +
> +Example:
> +
> +.. code-block:: c
> +
> +	static struct kunit_case example_test_cases[] = {
> +		KUNIT_CASE(example_test_foo),
> +		KUNIT_CASE(example_test_bar),
> +		KUNIT_CASE(example_test_baz),
> +		{},
> +	};
> +
> +	static struct kunit_module example_test_module = {
> +		.name = "example",
> +		.init = example_test_init,
> +		.exit = example_test_exit,
> +		.test_cases = example_test_cases,
> +	};
> +	module_test(example_test_module);
> +
> +In the above example the test suite, ``example_test_module``, would run the test
> +cases ``example_test_foo``, ``example_test_bar``, and ``example_test_baz``, each
> +would have ``example_test_init`` called immediately before it and would have
> +``example_test_exit`` called immediately after it.
> +``module_test(example_test_module)`` registers the test suite with the KUnit
> +test framework.
> +
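> +The ``init`` and ``exit`` functions named above are ordinary functions. As a
> +purely illustrative sketch (these definitions are not part of the patch, and
> +``struct example_ctx`` is a hypothetical context type), they might look like:
> +
> +.. code-block:: c
> +
> +	struct example_ctx {
> +		int some_state;
> +	};
> +
> +	static int example_test_init(struct kunit *test)
> +	{
> +		/* Runs before every test case; sets up shared state. */
> +		test->priv = kunit_kzalloc(test, sizeof(struct example_ctx),
> +					   GFP_KERNEL);
> +		if (!test->priv)
> +			return -ENOMEM; /* A non-zero return fails the case. */
> +
> +		return 0;
> +	}
> +
> +	static void example_test_exit(struct kunit *test)
> +	{
> +		/*
> +		 * Runs after every test case; memory from kunit_kzalloc is
> +		 * cleaned up by KUnit itself, so there is nothing to do here.
> +		 */
> +	}
> +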
> +.. note::
> +   A test case will only be run if it is associated with a test suite.
> +
> +For more information on these types of things, see :doc:`api/test`.
> +
> +Isolating Behavior
> +==================
> +
> +The most important aspect of unit testing that other forms of testing do not
> +provide is the ability to limit the amount of code under test to a single unit.
> +In practice, this is only possible by being able to control what code gets run
> +when the unit under test calls a function; this is usually accomplished through
> +some sort of indirection, where a function is exposed as part of an API such
> +that the definition of that function can be changed without affecting the rest
> +of the code base. In the kernel this primarily comes from two constructs:
> +classes, which are structs that contain function pointers provided by the
> +implementer, and architecture-specific functions, which have definitions
> +selected at compile time.
> +
> +Classes
> +-------
> +
> +Classes are not a construct built into the C programming language; however,
> +the concept is easily derived. Accordingly, pretty much every project that
> +does not use a standardized object oriented library (like GNOME's GObject)
> +has its own slightly different way of doing object oriented programming; the
> +Linux kernel is no exception.
> +
> +The central concept in kernel object oriented programming is the class. In the
> +kernel, a *class* is a struct that contains function pointers. This creates a
> +contract between *implementers* and *users* since it forces them to use the
> +same function signature without having to call the function directly. In order
> +for it to truly be a class, the function pointers must specify that a pointer
> +to the class, known as a *class handle*, be one of the parameters; this makes
> +it possible for the member functions (also known as *methods*) to have access
> +to member variables (more commonly known as *fields*) allowing the same
> +implementation to have multiple *instances*.
> +
> +Typically a class can be *overridden* by *child classes* by embedding the
> +*parent class* in the child class. Then, when a method provided by the child
> +class is called, the child implementation knows that the pointer passed to it
> +points to a parent embedded within the child; from this, the child can compute
> +the pointer to itself, since the parent is always at a fixed offset within the
> +child struct. For example:
> +
> +.. code-block:: c
> +
> +	struct shape {
> +		int (*area)(struct shape *this);
> +	};
> +
> +	struct rectangle {
> +		struct shape parent;
> +		int length;
> +		int width;
> +	};
> +
> +	int rectangle_area(struct shape *this)
> +	{
> +		struct rectangle *self = container_of(this, struct rectangle, parent);
> +
> +		return self->length * self->width;
> +	}
> +
> +	void rectangle_new(struct rectangle *self, int length, int width)
> +	{
> +		self->parent.area = rectangle_area;
> +		self->length = length;
> +		self->width = width;
> +	}
> +
> +In this example (as in most kernel code) the operation of computing the pointer
> +to the child from the pointer to the parent is done by ``container_of``.
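> +
> +For reference, ``container_of`` behaves roughly like the following simplified
> +sketch (the kernel's real definition adds type checking):
> +
> +.. code-block:: c
> +
> +	#define container_of(ptr, type, member) \
> +		((type *)((char *)(ptr) - offsetof(type, member)))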
> +
> +Faking Classes
> +~~~~~~~~~~~~~~
> +
> +In order to unit test a piece of code that calls a method in a class, the
> +behavior of the method must be controllable, otherwise the test ceases to be a
> +unit test and becomes an integration test.
> +
> +A fake just provides an implementation of a piece of code that is different
> +from what runs in a production instance, but behaves identically from the
> +standpoint of the callers; this is usually done to replace a dependency that is
> +hard to deal with, or is slow.
> +
> +A good example for this might be implementing a fake EEPROM that just stores the
> +"contents" in an internal buffer. For example, let's assume we have a class that
> +represents an EEPROM:
> +
> +.. code-block:: c
> +
> +	struct eeprom {
> +		ssize_t (*read)(struct eeprom *this, size_t offset, char *buffer, size_t count);
> +		ssize_t (*write)(struct eeprom *this, size_t offset, const char *buffer, size_t count);
> +	};
> +
> +And we want to test some code that buffers writes to the EEPROM:
> +
> +.. code-block:: c
> +
> +	struct eeprom_buffer {
> +		ssize_t (*write)(struct eeprom_buffer *this, const char *buffer, size_t count);
> +		int (*flush)(struct eeprom_buffer *this);
> +		size_t flush_count; /* Flushes when buffer exceeds flush_count. */
> +	};
> +
> +	struct eeprom_buffer *new_eeprom_buffer(struct eeprom *eeprom);
> +	void destroy_eeprom_buffer(struct eeprom_buffer *eeprom_buffer);
> +
> +We can easily test this code by *faking out* the underlying EEPROM:
> +
> +.. code-block:: c
> +
> +	struct fake_eeprom {
> +		struct eeprom parent;
> +		char contents[FAKE_EEPROM_CONTENTS_SIZE];
> +	};
> +
> +	ssize_t fake_eeprom_read(struct eeprom *parent, size_t offset, char *buffer, size_t count)
> +	{
> +		struct fake_eeprom *this = container_of(parent, struct fake_eeprom, parent);
> +
> +		count = min(count, FAKE_EEPROM_CONTENTS_SIZE - offset);
> +		memcpy(buffer, this->contents + offset, count);
> +
> +		return count;
> +	}
> +
> +	ssize_t fake_eeprom_write(struct eeprom *parent, size_t offset, const char *buffer, size_t count)
> +	{
> +		struct fake_eeprom *this = container_of(parent, struct fake_eeprom, parent);
> +
> +		count = min(count, FAKE_EEPROM_CONTENTS_SIZE - offset);
> +		memcpy(this->contents + offset, buffer, count);
> +
> +		return count;
> +	}
> +
> +	void fake_eeprom_init(struct fake_eeprom *this)
> +	{
> +		this->parent.read = fake_eeprom_read;
> +		this->parent.write = fake_eeprom_write;
> +		memset(this->contents, 0, FAKE_EEPROM_CONTENTS_SIZE);
> +	}
> +
> +We can now use it to test ``struct eeprom_buffer``:
> +
> +.. code-block:: c
> +
> +	struct eeprom_buffer_test {
> +		struct fake_eeprom *fake_eeprom;
> +		struct eeprom_buffer *eeprom_buffer;
> +	};
> +
> +	static void eeprom_buffer_test_does_not_write_until_flush(struct kunit *test)
> +	{
> +		struct eeprom_buffer_test *ctx = test->priv;
> +		struct eeprom_buffer *eeprom_buffer = ctx->eeprom_buffer;
> +		struct fake_eeprom *fake_eeprom = ctx->fake_eeprom;
> +		char buffer[] = {0xff};
> +
> +		eeprom_buffer->flush_count = SIZE_MAX;
> +
> +		eeprom_buffer->write(eeprom_buffer, buffer, 1);
> +		KUNIT_EXPECT_EQ(test, fake_eeprom->contents[0], 0);
> +
> +		eeprom_buffer->write(eeprom_buffer, buffer, 1);
> +		KUNIT_EXPECT_EQ(test, fake_eeprom->contents[1], 0);
> +
> +		eeprom_buffer->flush(eeprom_buffer);
> +		KUNIT_EXPECT_EQ(test, fake_eeprom->contents[0], 0xff);
> +		KUNIT_EXPECT_EQ(test, fake_eeprom->contents[1], 0xff);
> +	}
> +
> +	static void eeprom_buffer_test_flushes_after_flush_count_met(struct kunit *test)
> +	{
> +		struct eeprom_buffer_test *ctx = test->priv;
> +		struct eeprom_buffer *eeprom_buffer = ctx->eeprom_buffer;
> +		struct fake_eeprom *fake_eeprom = ctx->fake_eeprom;
> +		char buffer[] = {0xff};
> +
> +		eeprom_buffer->flush_count = 2;
> +
> +		eeprom_buffer->write(eeprom_buffer, buffer, 1);
> +		KUNIT_EXPECT_EQ(test, fake_eeprom->contents[0], 0);
> +
> +		eeprom_buffer->write(eeprom_buffer, buffer, 1);
> +		KUNIT_EXPECT_EQ(test, fake_eeprom->contents[0], 0xff);
> +		KUNIT_EXPECT_EQ(test, fake_eeprom->contents[1], 0xff);
> +	}
> +
> +	static void eeprom_buffer_test_flushes_increments_of_flush_count(struct kunit *test)
> +	{
> +		struct eeprom_buffer_test *ctx = test->priv;
> +		struct eeprom_buffer *eeprom_buffer = ctx->eeprom_buffer;
> +		struct fake_eeprom *fake_eeprom = ctx->fake_eeprom;
> +		char buffer[] = {0xff, 0xff};
> +
> +		eeprom_buffer->flush_count = 2;
> +
> +		eeprom_buffer->write(eeprom_buffer, buffer, 1);
> +		KUNIT_EXPECT_EQ(test, fake_eeprom->contents[0], 0);
> +
> +		eeprom_buffer->write(eeprom_buffer, buffer, 2);
> +		KUNIT_EXPECT_EQ(test, fake_eeprom->contents[0], 0xff);
> +		KUNIT_EXPECT_EQ(test, fake_eeprom->contents[1], 0xff);
> +		/* Should have only flushed the first two bytes. */
> +		KUNIT_EXPECT_EQ(test, fake_eeprom->contents[2], 0);
> +	}
> +
> +	static int eeprom_buffer_test_init(struct kunit *test)
> +	{
> +		struct eeprom_buffer_test *ctx;
> +
> +		ctx = kunit_kzalloc(test, sizeof(*ctx), GFP_KERNEL);
> +		KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx);
> +
> +		ctx->fake_eeprom = kunit_kzalloc(test, sizeof(*ctx->fake_eeprom), GFP_KERNEL);
> +		KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx->fake_eeprom);
> +
> +		ctx->eeprom_buffer = new_eeprom_buffer(&ctx->fake_eeprom->parent);
> +		KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx->eeprom_buffer);
> +
> +		test->priv = ctx;
> +
> +		return 0;
> +	}
> +
> +	static void eeprom_buffer_test_exit(struct kunit *test)
> +	{
> +		struct eeprom_buffer_test *ctx = test->priv;
> +
> +		destroy_eeprom_buffer(ctx->eeprom_buffer);
> +	}
> +
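> +Finally, for illustration, a registration tying these together might follow
> +the same pattern as the earlier example (a sketch using the names above):
> +
> +.. code-block:: c
> +
> +	static struct kunit_case eeprom_buffer_test_cases[] = {
> +		KUNIT_CASE(eeprom_buffer_test_does_not_write_until_flush),
> +		KUNIT_CASE(eeprom_buffer_test_flushes_after_flush_count_met),
> +		KUNIT_CASE(eeprom_buffer_test_flushes_increments_of_flush_count),
> +		{},
> +	};
> +
> +	static struct kunit_module eeprom_buffer_test_module = {
> +		.name = "eeprom-buffer",
> +		.init = eeprom_buffer_test_init,
> +		.exit = eeprom_buffer_test_exit,
> +		.test_cases = eeprom_buffer_test_cases,
> +	};
> +	module_test(eeprom_buffer_test_module);
> +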
> 

-- 
Regards
--
Kieran

^ permalink raw reply	[flat|nested] 232+ messages in thread

* [RFC v3 17/19] of: unittest: migrate tests to run on KUnit
       [not found]   ` <CAL_Jsq+09Kx7yMBC_Jw45QGmk6U_fp4N6HOZDwYrM4tWw+_dOA@mail.gmail.com>
@ 2018-11-30  0:39     ` rdunlap
  2018-11-30  0:39       ` Randy Dunlap
  2018-12-04  0:13       ` brendanhiggins
  2018-12-04  0:08     ` brendanhiggins
  2019-02-13  1:44     ` brendanhiggins
  2 siblings, 2 replies; 232+ messages in thread
From: rdunlap @ 2018-11-30  0:39 UTC (permalink / raw)


On 11/28/18 12:56 PM, Rob Herring wrote:
>> diff --git a/drivers/of/Kconfig b/drivers/of/Kconfig
>> index ad3fcad4d75b8..f309399deac20 100644
>> --- a/drivers/of/Kconfig
>> +++ b/drivers/of/Kconfig
>> @@ -15,6 +15,7 @@ if OF
>>  config OF_UNITTEST
>>         bool "Device Tree runtime unit tests"
>>         depends on !SPARC
>> +       depends on KUNIT
> Unless KUNIT has depends, better to be a select here.

That's just style or taste.  I would prefer to use depends
instead of select, but that's also just my preference.

-- 
~Randy

^ permalink raw reply	[flat|nested] 232+ messages in thread

* [RFC v3 01/19] kunit: test: add KUnit test runner core
  2018-11-28 19:36 ` [RFC v3 01/19] kunit: test: add KUnit test runner core brendanhiggins
  2018-11-28 19:36   ` Brendan Higgins
@ 2018-11-30  3:14   ` mcgrof
  2018-11-30  3:14     ` Luis Chamberlain
                       ` (2 more replies)
  2018-11-30  3:28   ` mcgrof
  2018-12-01  3:02   ` mcgrof
  3 siblings, 3 replies; 232+ messages in thread
From: mcgrof @ 2018-11-30  3:14 UTC (permalink / raw)


On Wed, Nov 28, 2018 at 11:36:18AM -0800, Brendan Higgins wrote:
> +#define module_test(module) \
> +		static int module_kunit_init##module(void) \
> +		{ \
> +			return kunit_run_tests(&module); \
> +		} \
> +		late_initcall(module_kunit_init##module)

Herein lies an assumption that late_initcall() suffices. I'm inclined
to believe we need a new initcall level here to ensure we *do* run
after all the respective kernel init calls. Otherwise we're left at the
whims of link order for kunit. For instance, if a kunit test relies on
frameworks which are also late_initcall(), we'd have complete
incompatibility with anything linked *after* kunit.
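
For illustration, a dedicated level would mean something along these
lines in include/linux/init.h (a sketch only; initcall id 8 does not
exist today, so a matching INIT_CALLS_LEVEL(8) entry would also be
needed in include/asm-generic/vmlinux.lds.h):

	#define kunit_initcall(fn)	__define_initcall(fn, 8)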

> diff --git a/kunit/Kconfig b/kunit/Kconfig
> new file mode 100644
> index 0000000000000..49b44c4f6630a
> --- /dev/null
> +++ b/kunit/Kconfig
> @@ -0,0 +1,17 @@
> +#
> +# KUnit base configuration
> +#
> +
> +menu "KUnit support"
> +
> +config KUNIT
> +	bool "Enable support for unit tests (KUnit)"
> +	depends on UML

Consider using:

if UML
   ...
endif

That allows the depends to be done once.
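
E.g. the whole menu would then look something like this (sketch):

	menu "KUnit support"

	if UML

	config KUNIT
		bool "Enable support for unit tests (KUnit)"

	endif

	endmenu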

> +	help
> +	  Enables support for kernel unit tests (KUnit), a lightweight unit
> +	  testing and mocking framework for the Linux kernel. These tests are
> +	  able to be run locally on a developer's workstation without a VM or
> +	  special hardware.


Some mention of UML may be good here?

> For more information, please see
> +	  Documentation/kunit/
> +
> +endmenu

I'm a bit conflicted here. This currently depends on UML, and yet you
noted on RFC v2 that your intention is to liberate kunit from UML and
ideally allow unit tests to depend only on userspace. I've addressed
tests using both selftest kernel drivers and kernel APIs re-written in
userspace to test there. I think we may need to live with both.

Then for the UML stuff, I think if we *really* accept that UML will
always be a viable option we should probably consider now throwing these
things under drivers/platform/uml/. This follows the pattern of arch
specific drivers. Whether or not we end up with a complete userspace
component independent of UML may imply having a shared component
somewhere else.

Likewise, I realize the goal is to *avoid* using a virtual machine for
these tests, but would it in any way make sense to extend kunit support
to other architectures, to allow easier-to-write tests there as well?

  Luis

^ permalink raw reply	[flat|nested] 232+ messages in thread

* [RFC v3 01/19] kunit: test: add KUnit test runner core
  2018-11-28 19:36 ` [RFC v3 01/19] kunit: test: add KUnit test runner core brendanhiggins
  2018-11-28 19:36   ` Brendan Higgins
  2018-11-30  3:14   ` mcgrof
@ 2018-11-30  3:28   ` mcgrof
  2018-11-30  3:28     ` Luis Chamberlain
  2018-12-01  2:08     ` brendanhiggins
  2018-12-01  3:02   ` mcgrof
  3 siblings, 2 replies; 232+ messages in thread
From: mcgrof @ 2018-11-30  3:28 UTC (permalink / raw)


> +static void kunit_run_case_internal(struct kunit *test,
> +				    struct kunit_module *module,
> +				    struct kunit_case *test_case)
> +{
> +	int ret;
> +
> +	if (module->init) {
> +		ret = module->init(test);
> +		if (ret) {
> +			kunit_err(test, "failed to initialize: %d", ret);
> +			kunit_set_success(test, false);
> +			return;
> +		}
> +	}
> +
> +	test_case->run_case(test);
> +}

<-- snip -->

> +static bool kunit_run_case(struct kunit *test,
> +			   struct kunit_module *module,
> +			   struct kunit_case *test_case)
> +{
> +	kunit_set_success(test, true);
> +
> +	kunit_run_case_internal(test, module, test_case);
> +	kunit_run_case_cleanup(test, module, test_case);
> +
> +	return kunit_get_success(test);
> +}

So we are running the module->init() for each test case... is that
correct? Shouldn't the init run once? Also, typically init calls are
pegged with __init so we can free them later. You seem to have skipped
the init annotations. Why?

  Luis

^ permalink raw reply	[flat|nested] 232+ messages in thread

* [RFC v3 03/19] kunit: test: add string_stream a std::stream like string builder
  2018-11-28 19:36 ` [RFC v3 03/19] kunit: test: add string_stream a std::stream like string builder brendanhiggins
  2018-11-28 19:36   ` Brendan Higgins
@ 2018-11-30  3:29   ` mcgrof
  2018-11-30  3:29     ` Luis Chamberlain
                       ` (2 more replies)
  1 sibling, 3 replies; 232+ messages in thread
From: mcgrof @ 2018-11-30  3:29 UTC (permalink / raw)


On Wed, Nov 28, 2018 at 11:36:20AM -0800, Brendan Higgins wrote:
> A number of test features need to do pretty complicated string printing
> where it may not be possible to rely on a single preallocated string
> with parameters.
> 
> So provide a library for constructing the string as you go similar to
> C++'s std::string.

Hrm, what's the potential for such a thing eventually being
generically useful for the printk folks, I wonder? Petr?

  Luis

> 
> Signed-off-by: Brendan Higgins <brendanhiggins at google.com>
> ---
>  include/kunit/string-stream.h |  44 ++++++++++
>  kunit/Makefile                |   3 +-
>  kunit/string-stream.c         | 149 ++++++++++++++++++++++++++++++++++
>  3 files changed, 195 insertions(+), 1 deletion(-)
>  create mode 100644 include/kunit/string-stream.h
>  create mode 100644 kunit/string-stream.c
> 
> diff --git a/include/kunit/string-stream.h b/include/kunit/string-stream.h
> new file mode 100644
> index 0000000000000..933ed5740cf07
> --- /dev/null
> +++ b/include/kunit/string-stream.h
> @@ -0,0 +1,44 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * C++ stream style string builder used in KUnit for building messages.
> + *
> + * Copyright (C) 2018, Google LLC.
> + * Author: Brendan Higgins <brendanhiggins at google.com>
> + */
> +
> +#ifndef _KUNIT_STRING_STREAM_H
> +#define _KUNIT_STRING_STREAM_H
> +
> +#include <linux/types.h>
> +#include <linux/spinlock.h>
> +#include <linux/kref.h>
> +#include <stdarg.h>
> +
> +struct string_stream_fragment {
> +	struct list_head node;
> +	char *fragment;
> +};
> +
> +struct string_stream {
> +	size_t length;
> +	struct list_head fragments;
> +
> +	/* length and fragments are protected by this lock */
> +	spinlock_t lock;
> +	struct kref refcount;
> +	int (*add)(struct string_stream *this, const char *fmt, ...);
> +	int (*vadd)(struct string_stream *this, const char *fmt, va_list args);
> +	char *(*get_string)(struct string_stream *this);
> +	void (*clear)(struct string_stream *this);
> +	bool (*is_empty)(struct string_stream *this);
> +};
> +
> +struct string_stream *new_string_stream(void);
> +
> +void destroy_string_stream(struct string_stream *stream);
> +
> +void string_stream_get(struct string_stream *stream);
> +
> +int string_stream_put(struct string_stream *stream);
> +
> +#endif /* _KUNIT_STRING_STREAM_H */
> diff --git a/kunit/Makefile b/kunit/Makefile
> index 5efdc4dea2c08..275b565a0e81f 100644
> --- a/kunit/Makefile
> +++ b/kunit/Makefile
> @@ -1 +1,2 @@
> -obj-$(CONFIG_KUNIT) +=			test.o
> +obj-$(CONFIG_KUNIT) +=			test.o \
> +					string-stream.o
> diff --git a/kunit/string-stream.c b/kunit/string-stream.c
> new file mode 100644
> index 0000000000000..1e7efa630cc35
> --- /dev/null
> +++ b/kunit/string-stream.c
> @@ -0,0 +1,149 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * C++ stream style string builder used in KUnit for building messages.
> + *
> + * Copyright (C) 2018, Google LLC.
> + * Author: Brendan Higgins <brendanhiggins at google.com>
> + */
> +
> +#include <linux/list.h>
> +#include <linux/slab.h>
> +#include <kunit/string-stream.h>
> +
> +static int string_stream_vadd(struct string_stream *this,
> +			       const char *fmt,
> +			       va_list args)
> +{
> +	struct string_stream_fragment *fragment;
> +	int len;
> +	va_list args_for_counting;
> +	unsigned long flags;
> +
> +	/* Make a copy because `vsnprintf` could change it */
> +	va_copy(args_for_counting, args);
> +
> +	/* Need space for null byte. */
> +	len = vsnprintf(NULL, 0, fmt, args_for_counting) + 1;
> +
> +	va_end(args_for_counting);
> +
> +	fragment = kmalloc(sizeof(*fragment), GFP_KERNEL);
> +	if (!fragment)
> +		return -ENOMEM;
> +
> +	fragment->fragment = kmalloc(len, GFP_KERNEL);
> +	if (!fragment->fragment) {
> +		kfree(fragment);
> +		return -ENOMEM;
> +	}
> +
> +	len = vsnprintf(fragment->fragment, len, fmt, args);
> +	spin_lock_irqsave(&this->lock, flags);
> +	this->length += len;
> +	list_add_tail(&fragment->node, &this->fragments);
> +	spin_unlock_irqrestore(&this->lock, flags);
> +	return 0;
> +}
> +
> +static int string_stream_add(struct string_stream *this, const char *fmt, ...)
> +{
> +	va_list args;
> +	int result;
> +
> +	va_start(args, fmt);
> +	result = string_stream_vadd(this, fmt, args);
> +	va_end(args);
> +	return result;
> +}
> +
> +static void string_stream_clear(struct string_stream *this)
> +{
> +	struct string_stream_fragment *fragment, *fragment_safe;
> +	unsigned long flags;
> +
> +	spin_lock_irqsave(&this->lock, flags);
> +	list_for_each_entry_safe(fragment,
> +				 fragment_safe,
> +				 &this->fragments,
> +				 node) {
> +		list_del(&fragment->node);
> +		kfree(fragment->fragment);
> +		kfree(fragment);
> +	}
> +	this->length = 0;
> +	spin_unlock_irqrestore(&this->lock, flags);
> +}
> +
> +static char *string_stream_get_string(struct string_stream *this)
> +{
> +	struct string_stream_fragment *fragment;
> +	size_t buf_len = this->length + 1; /* +1 for null byte. */
> +	char *buf;
> +	unsigned long flags;
> +
> +	buf = kzalloc(buf_len, GFP_KERNEL);
> +	if (!buf)
> +		return NULL;
> +
> +	spin_lock_irqsave(&this->lock, flags);
> +	list_for_each_entry(fragment, &this->fragments, node)
> +		strlcat(buf, fragment->fragment, buf_len);
> +	spin_unlock_irqrestore(&this->lock, flags);
> +
> +	return buf;
> +}
> +
> +static bool string_stream_is_empty(struct string_stream *this)
> +{
> +	bool is_empty;
> +	unsigned long flags;
> +
> +	spin_lock_irqsave(&this->lock, flags);
> +	is_empty = list_empty(&this->fragments);
> +	spin_unlock_irqrestore(&this->lock, flags);
> +
> +	return is_empty;
> +}
> +
> +void destroy_string_stream(struct string_stream *stream)
> +{
> +	stream->clear(stream);
> +	kfree(stream);
> +}
> +
> +static void string_stream_destroy(struct kref *kref)
> +{
> +	struct string_stream *stream = container_of(kref,
> +						    struct string_stream,
> +						    refcount);
> +	destroy_string_stream(stream);
> +}
> +
> +struct string_stream *new_string_stream(void)
> +{
> +	struct string_stream *stream = kzalloc(sizeof(*stream), GFP_KERNEL);
> +
> +	if (!stream)
> +		return NULL;
> +
> +	INIT_LIST_HEAD(&stream->fragments);
> +	spin_lock_init(&stream->lock);
> +	kref_init(&stream->refcount);
> +	stream->add = string_stream_add;
> +	stream->vadd = string_stream_vadd;
> +	stream->get_string = string_stream_get_string;
> +	stream->clear = string_stream_clear;
> +	stream->is_empty = string_stream_is_empty;
> +	return stream;
> +}
> +
> +void string_stream_get(struct string_stream *stream)
> +{
> +	kref_get(&stream->refcount);
> +}
> +
> +int string_stream_put(struct string_stream *stream)
> +{
> +	return kref_put(&stream->refcount, &string_stream_destroy);
> +}
> +
> -- 
> 2.20.0.rc0.387.gc7a69e6b6c-goog
> 

^ permalink raw reply	[flat|nested] 232+ messages in thread

* [RFC v3 06/19] arch: um: enable running kunit from User Mode Linux
  2018-11-28 19:36 ` [RFC v3 06/19] arch: um: enable running kunit from User Mode Linux brendanhiggins
  2018-11-28 19:36   ` Brendan Higgins
  2018-11-28 21:26   ` robh
@ 2018-11-30  3:30   ` mcgrof
  2018-11-30  3:30     ` Luis Chamberlain
  2 siblings, 1 reply; 232+ messages in thread
From: mcgrof @ 2018-11-30  3:30 UTC (permalink / raw)


On Wed, Nov 28, 2018 at 11:36:23AM -0800, Brendan Higgins wrote:
> Make minimum number of changes outside of the KUnit directories for
> KUnit to build and run using UML.
> 
> Signed-off-by: Brendan Higgins <brendanhiggins at google.com>
> ---
>  Kconfig  | 2 ++
>  Makefile | 2 +-
>  2 files changed, 3 insertions(+), 1 deletion(-)
> 
> diff --git a/Kconfig b/Kconfig
> index 48a80beab6853..10428501edb78 100644
> --- a/Kconfig
> +++ b/Kconfig
> @@ -30,3 +30,5 @@ source "crypto/Kconfig"
>  source "lib/Kconfig"
>  
>  source "lib/Kconfig.debug"
> +
> +source "kunit/Kconfig"

Since this is all UML, why not source it from arch/um/Kconfig instead?
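Something like this, as a sketch (exact placement in the UML Kconfig
files is a guess):

	# arch/um/Kconfig (or Kconfig.um)
	source "kunit/Kconfig"

That way the option never even shows up on other architectures.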

  Luis

^ permalink raw reply	[flat|nested] 232+ messages in thread

* [RFC v3 08/19] arch: um: add shim to trap to allow installing a fault catcher for tests
  2018-11-28 19:36 ` [RFC v3 08/19] arch: um: add shim to trap to allow installing a fault catcher for tests brendanhiggins
  2018-11-28 19:36   ` Brendan Higgins
@ 2018-11-30  3:34   ` mcgrof
  2018-11-30  3:34     ` Luis Chamberlain
  2018-12-03 23:34     ` brendanhiggins
  2018-11-30  3:41   ` mcgrof
  2 siblings, 2 replies; 232+ messages in thread
From: mcgrof @ 2018-11-30  3:34 UTC (permalink / raw)


On Wed, Nov 28, 2018 at 11:36:25AM -0800, Brendan Higgins wrote:
> diff --git a/arch/um/kernel/trap.c b/arch/um/kernel/trap.c
> index cced829460427..bf90e678b3d71 100644
> --- a/arch/um/kernel/trap.c
> +++ b/arch/um/kernel/trap.c
> @@ -201,6 +201,12 @@ void segv_handler(int sig, struct siginfo *unused_si, struct uml_pt_regs *regs)
>  	segv(*fi, UPT_IP(regs), UPT_IS_USER(regs), regs);
>  }
>  
> +static void segv_run_catcher(jmp_buf *catcher, void *fault_addr)
> +{
> +	current->thread.fault_addr = fault_addr;
> +	UML_LONGJMP(catcher, 1);
> +}
> +
>  /*
>   * We give a *copy* of the faultinfo in the regs to segv.
>   * This must be done, since nesting SEGVs could overwrite
> @@ -219,7 +225,10 @@ unsigned long segv(struct faultinfo fi, unsigned long ip, int is_user,
>  	if (!is_user && regs)
>  		current->thread.segv_regs = container_of(regs, struct pt_regs, regs);
>  
> -	if (!is_user && (address >= start_vm) && (address < end_vm)) {
> +	catcher = current->thread.fault_catcher;

This and..

> +	if (catcher && current->thread.is_running_test)
> +		segv_run_catcher(catcher, (void *) address);
> +	else if (!is_user && (address >= start_vm) && (address < end_vm)) {
>  		flush_tlb_kernel_vm();
>  		goto out;
>  	}

*not this*

> @@ -246,12 +255,10 @@ unsigned long segv(struct faultinfo fi, unsigned long ip, int is_user,
>  		address = 0;
>  	}
>  
> -	catcher = current->thread.fault_catcher;
>  	if (!err)
>  		goto out;
>  	else if (catcher != NULL) {
> -		current->thread.fault_addr = (void *) address;
> -		UML_LONGJMP(catcher, 1);
> +		segv_run_catcher(catcher, (void *) address);
>  	}
>  	else if (current->thread.fault_addr != NULL)
>  		panic("fault_addr set but no fault catcher");

But this seems like one atomic change which should be submitted
separately; it's just a helper. I think it would make the actual
change needed easier to review, i.e., your remaining changes would
be smaller and clearer about what you need.

  Luis

^ permalink raw reply	[flat|nested] 232+ messages in thread

* [RFC v3 06/19] arch: um: enable running kunit from User Mode Linux
  2018-11-28 21:26   ` robh
  2018-11-28 21:26     ` Rob Herring
@ 2018-11-30  3:37     ` mcgrof
  2018-11-30  3:37       ` Luis Chamberlain
  2018-11-30 14:05       ` robh
  1 sibling, 2 replies; 232+ messages in thread
From: mcgrof @ 2018-11-30  3:37 UTC (permalink / raw)


On Wed, Nov 28, 2018 at 03:26:03PM -0600, Rob Herring wrote:
> On Wed, Nov 28, 2018 at 1:37 PM Brendan Higgins
> <brendanhiggins at google.com> wrote:
> >
> > Make minimum number of changes outside of the KUnit directories for
> > KUnit to build and run using UML.
> 
> There's nothing in this patch limiting this to UML. 

Not that one, but the abort/segv thing is, eventually.
To support other architectures we'd need to make a wrapper for that
hack which Brendan added, then allow each arch to implement
its own call, and add an asm-generic helper.

Are you volunteering to add the x86 hook?
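Roughly what I have in mind, purely as a sketch -- none of these names
exist today:

	/* include/asm-generic/kunit-fault.h (hypothetical) */
	#ifndef _ASM_GENERIC_KUNIT_FAULT_H
	#define _ASM_GENERIC_KUNIT_FAULT_H

	struct kunit;

	/*
	 * Arches that can trap a fault raised while a test is running
	 * and jump back into the test runner override this; the generic
	 * fallback reports "unsupported" so such tests can be skipped.
	 */
	static inline bool kunit_arch_install_fault_catcher(struct kunit *test)
	{
		return false;
	}

	#endif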

  Luis

^ permalink raw reply	[flat|nested] 232+ messages in thread

* [RFC v3 07/19] kunit: test: add initial tests
  2018-11-28 19:36 ` [RFC v3 07/19] kunit: test: add initial tests brendanhiggins
  2018-11-28 19:36   ` Brendan Higgins
@ 2018-11-30  3:40   ` mcgrof
  2018-11-30  3:40     ` Luis Chamberlain
  2018-12-03 23:26     ` brendanhiggins
  1 sibling, 2 replies; 232+ messages in thread
From: mcgrof @ 2018-11-30  3:40 UTC (permalink / raw)


On Wed, Nov 28, 2018 at 11:36:24AM -0800, Brendan Higgins wrote:
> Add a test for string stream along with a simpler example.
> 
> Signed-off-by: Brendan Higgins <brendanhiggins at google.com>
> ---
>  kunit/Kconfig              | 12 ++++++
>  kunit/Makefile             |  4 ++
>  kunit/example-test.c       | 88 ++++++++++++++++++++++++++++++++++++++

BTW, if you need another more concrete but very simple example, I think
it may be possible to port tools/testing/selftests/sysctl/sysctl.sh +
lib/test_sysctl.c into a kunit test. Correct me if I'm wrong.

I think that would show the differences clearly between selftests and
kunit as well.
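As a sketch of the shape I'd expect, reusing the kunit_module /
kunit_case structs quoted earlier in this thread (the header path,
the .name fields, and KUNIT_EXPECT_EQ() are my assumptions about
this series):

	#include <kunit/test.h>

	static void sysctl_test_dointvec(struct kunit *test)
	{
		int val = 42;

		/*
		 * Here you would drive the proc_dointvec() paths that
		 * lib/test_sysctl.c exercises today and check results.
		 */
		KUNIT_EXPECT_EQ(test, 42, val);
	}

	static struct kunit_case sysctl_test_cases[] = {
		{ .run_case = sysctl_test_dointvec, .name = "dointvec" },
		{},
	};

	static struct kunit_module sysctl_test_module = {
		.name = "sysctl-test",
		.test_cases = sysctl_test_cases,
	};
	module_test(sysctl_test_module);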

  Luis

^ permalink raw reply	[flat|nested] 232+ messages in thread

* [RFC v3 08/19] arch: um: add shim to trap to allow installing a fault catcher for tests
  2018-11-28 19:36 ` [RFC v3 08/19] arch: um: add shim to trap to allow installing a fault catcher for tests brendanhiggins
  2018-11-28 19:36   ` Brendan Higgins
  2018-11-30  3:34   ` mcgrof
@ 2018-11-30  3:41   ` mcgrof
  2018-11-30  3:41     ` Luis Chamberlain
  2018-12-03 23:37     ` brendanhiggins
  2 siblings, 2 replies; 232+ messages in thread
From: mcgrof @ 2018-11-30  3:41 UTC (permalink / raw)


On Wed, Nov 28, 2018 at 11:36:25AM -0800, Brendan Higgins wrote:
> +static void segv_run_catcher(jmp_buf *catcher, void *fault_addr)
> +{
> +	current->thread.fault_addr = fault_addr;
> +	UML_LONGJMP(catcher, 1);
> +}

Some documentation about what exactly this does would be appreciated,
with the goal that it may be useful to others wanting to consider
support for other archs -- if that actually ends up being desirable.

  Luis

^ permalink raw reply	[flat|nested] 232+ messages in thread

* [RFC v3 11/19] kunit: add Python libraries for handing KUnit config and kernel
  2018-11-28 19:36 ` [RFC v3 11/19] kunit: add Python libraries for handing KUnit config and kernel brendanhiggins
  2018-11-28 19:36   ` Brendan Higgins
  2018-11-29 13:54   ` kieran.bingham
@ 2018-11-30  3:44   ` mcgrof
  2018-11-30  3:44     ` Luis Chamberlain
  2018-12-03 23:50     ` brendanhiggins
  2 siblings, 2 replies; 232+ messages in thread
From: mcgrof @ 2018-11-30  3:44 UTC (permalink / raw)


On Wed, Nov 28, 2018 at 11:36:28AM -0800, Brendan Higgins wrote:
> The ultimate goal is to create minimal isolated test binaries; in the
> meantime we are using UML to provide the infrastructure to run tests, so
> define an abstract way to configure and run tests that allow us to
> change the context in which tests are built without affecting the user.
> This also makes pretty and dynamic error reporting, and a lot of other
> nice features easier.
> 
> kunit_config.py:
>   - parse .config and Kconfig files.
>
> 
> kunit_kernel.py: provides helper functions to:
>   - configure the kernel using kunitconfig.

We get the tools to run the config stuff, build, etc., but not a top
level 'make kunitconfig' or whatever. We have things like 'make
kvmconfig' and 'make xenconfig'; I think it would be reasonable to
add something similar for this.
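Following the kvmconfig pattern in scripts/kconfig/Makefile, I'd guess
something like this sketch (assuming a kunit.config fragment is added
somewhere merge_config.sh can find it, e.g. kernel/configs/):

	PHONY += kunitconfig
	kunitconfig: kunit.config
		@:

with the existing %.config rule doing the merge_config.sh plus
olddefconfig dance.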

  Luis

^ permalink raw reply	[flat|nested] 232+ messages in thread

* [RFC v3 14/19] Documentation: kunit: add documentation for KUnit
  2018-11-29 13:56   ` kieran.bingham
  2018-11-29 13:56     ` Kieran Bingham
@ 2018-11-30  3:45     ` mcgrof
  2018-11-30  3:45       ` Luis Chamberlain
  2018-12-03 23:53       ` brendanhiggins
  1 sibling, 2 replies; 232+ messages in thread
From: mcgrof @ 2018-11-30  3:45 UTC (permalink / raw)


On Thu, Nov 29, 2018 at 01:56:37PM +0000, Kieran Bingham wrote:
> Hi Brendan,
> 
> Please excuse the top posting, but I'm replying here as I'm following
> the section "Creating a kunitconfig" in Documentation/kunit/start.rst.
> 
> Could the three line kunitconfig file live under say
> 	 arch/um/configs/kunit_defconfig?
> 
> So that it's always provided? And could even be extended with tests
> which people would expect to be run by default? (say in distributions)

Indeed, and then a top level 'make kunitconfig' could use it as well.
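The fragment itself would presumably be tiny -- guessing from the
getting-started doc, something like:

	CONFIG_KUNIT=y
	CONFIG_KUNIT_TEST=y
	CONFIG_KUNIT_EXAMPLE_TEST=y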

  Luis

^ permalink raw reply	[flat|nested] 232+ messages in thread

* [RFC v3 16/19] arch: um: make UML unflatten device tree when testing
  2018-11-28 19:36 ` [RFC v3 16/19] arch: um: make UML unflatten device tree when testing brendanhiggins
  2018-11-28 19:36   ` Brendan Higgins
  2018-11-28 21:16   ` robh
@ 2018-11-30  3:46   ` mcgrof
  2018-11-30  3:46     ` Luis Chamberlain
  2018-12-04  0:02     ` brendanhiggins
  2 siblings, 2 replies; 232+ messages in thread
From: mcgrof @ 2018-11-30  3:46 UTC (permalink / raw)


On Wed, Nov 28, 2018 at 11:36:33AM -0800, Brendan Higgins wrote:
> Make UML unflatten any present device trees when running KUnit tests.
> 
> Signed-off-by: Brendan Higgins <brendanhiggins at google.com>
> ---
>  arch/um/kernel/um_arch.c | 4 ++++
>  1 file changed, 4 insertions(+)
> 
> diff --git a/arch/um/kernel/um_arch.c b/arch/um/kernel/um_arch.c
> index a818ccef30ca2..bd58ae3bf4148 100644
> --- a/arch/um/kernel/um_arch.c
> +++ b/arch/um/kernel/um_arch.c
> @@ -13,6 +13,7 @@
>  #include <linux/sched.h>
>  #include <linux/sched/task.h>
>  #include <linux/kmsg_dump.h>
> +#include <linux/of_fdt.h>
>  
>  #include <asm/pgtable.h>
>  #include <asm/processor.h>
> @@ -347,6 +348,9 @@ void __init setup_arch(char **cmdline_p)
>  	read_initrd();
>  
>  	paging_init();
> +#if IS_ENABLED(CONFIG_OF_UNITTEST)
> +	unflatten_device_tree();
> +#endif

*Why?*

  Luis

^ permalink raw reply	[flat|nested] 232+ messages in thread

* [RFC v3 06/19] arch: um: enable running kunit from User Mode Linux
  2018-11-30  3:37     ` mcgrof
  2018-11-30  3:37       ` Luis Chamberlain
@ 2018-11-30 14:05       ` robh
  2018-11-30 14:05         ` Rob Herring
  2018-11-30 18:22         ` mcgrof
  1 sibling, 2 replies; 232+ messages in thread
From: robh @ 2018-11-30 14:05 UTC (permalink / raw)


On Thu, Nov 29, 2018 at 9:37 PM Luis Chamberlain <mcgrof at kernel.org> wrote:
>
> On Wed, Nov 28, 2018 at 03:26:03PM -0600, Rob Herring wrote:
> > On Wed, Nov 28, 2018 at 1:37 PM Brendan Higgins
> > <brendanhiggins at google.com> wrote:
> > >
> > > Make minimum number of changes outside of the KUnit directories for
> > > KUnit to build and run using UML.
> >
> > There's nothing in this patch limiting this to UML.
>
> Not that one, but the abort/segv thing is, eventually.
> To support other architectures we'd need to make a wrapper for that
> hack which Brendan added, then allow each arch to implement
> its own call, and add an asm-generic helper.

I've not looked into why this is needed, but can't you make the abort
support optional and arches can select it when they support it. At
least before, the DT unittests didn't need this to run and shouldn't
depend on it after converting to kunit.
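i.e., something like this sketch (the symbol name is made up):

	# kunit/Kconfig
	config KUNIT_HAVE_FAULT_CATCHER
		bool
		help
		  Selected by architectures that can trap a fault raised
		  while a KUnit test is running and recover from it.

with arch/um doing "select KUNIT_HAVE_FAULT_CATCHER" and the
fault-catching expectations compiled out everywhere else.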

Rob

^ permalink raw reply	[flat|nested] 232+ messages in thread

* [RFC v3 06/19] arch: um: enable running kunit from User Mode Linux
  2018-11-30 14:05       ` robh
  2018-11-30 14:05         ` Rob Herring
@ 2018-11-30 18:22         ` mcgrof
  2018-11-30 18:22           ` Luis Chamberlain
  2018-12-03 23:22           ` brendanhiggins
  1 sibling, 2 replies; 232+ messages in thread
From: mcgrof @ 2018-11-30 18:22 UTC (permalink / raw)


On Fri, Nov 30, 2018 at 08:05:34AM -0600, Rob Herring wrote:
> On Thu, Nov 29, 2018 at 9:37 PM Luis Chamberlain <mcgrof at kernel.org> wrote:
> >
> > On Wed, Nov 28, 2018 at 03:26:03PM -0600, Rob Herring wrote:
> > > On Wed, Nov 28, 2018 at 1:37 PM Brendan Higgins
> > > <brendanhiggins at google.com> wrote:
> > > >
> > > > Make minimum number of changes outside of the KUnit directories for
> > > > KUnit to build and run using UML.
> > >
> > > There's nothing in this patch limiting this to UML.
> >
> > Not that one, but the abort/segv thing is, eventually.
> > To support other architectures we'd need to make a wrapper for that
> > hack which Brendan added, then allow each arch to implement
> > its own call, and add an asm-generic helper.
> 
> I've not looked into why this is needed, but can't you make the abort
> support optional and arches can select it when they support it.

It's why I have asked for it to be properly documented. The patches in
no way illustrate *why* such a thing is done. And if we are going to
potentially have other archs do something similar, best to make it
explicit.

> At
> least before, the DT unittests didn't need this to run and shouldn't
> depend on it after converting to kunit.

  Luis

^ permalink raw reply	[flat|nested] 232+ messages in thread

* [RFC v3 01/19] kunit: test: add KUnit test runner core
  2018-11-30  3:14   ` mcgrof
  2018-11-30  3:14     ` Luis Chamberlain
@ 2018-12-01  1:51     ` brendanhiggins
  2018-12-01  1:51       ` Brendan Higgins
  2018-12-01  2:57       ` mcgrof
  2018-12-05 13:15     ` anton.ivanov
  2 siblings, 2 replies; 232+ messages in thread
From: brendanhiggins @ 2018-12-01  1:51 UTC (permalink / raw)


On Thu, Nov 29, 2018 at 7:14 PM Luis Chamberlain <mcgrof at kernel.org> wrote:
>
> On Wed, Nov 28, 2018 at 11:36:18AM -0800, Brendan Higgins wrote:
> > +#define module_test(module) \
> > +             static int module_kunit_init##module(void) \
> > +             { \
> > +                     return kunit_run_tests(&module); \
> > +             } \
> > +             late_initcall(module_kunit_init##module)
>
> Herein lies an assumption that late_initcall() suffices. I'm inclined
> to believe we need a new initcall level here so as to ensure we *do*
> run after all the respective kernel init calls. Otherwise we're left
> at the whims of link order for kunit. For instance, if a kunit test
> relies on frameworks which are also late_initcall() we'd have complete
> incompatibility with anything linked *after* kunit.

Yep, I have some patches that address this, but I thought this is
sufficient for the initial patchset (I figured that's the type of
thing that people will have opinions about so best to get it out of
the critical path). Do you want me to add those in the next revision?
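For reference, the cheapest interim form would be to switch the macro
to late_initcall_sync(), which at least orders after every plain
late_initcall(); a proper fix probably wants a dedicated level in
init.h plus the linker script. Sketch:

	#define module_test(module) \
			static int module_kunit_init##module(void) \
			{ \
				return kunit_run_tests(&module); \
			} \
			late_initcall_sync(module_kunit_init##module)

though that still races with other late_initcall_sync() users.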

>
> > diff --git a/kunit/Kconfig b/kunit/Kconfig
> > new file mode 100644
> > index 0000000000000..49b44c4f6630a
> > --- /dev/null
> > +++ b/kunit/Kconfig
> > @@ -0,0 +1,17 @@
> > +#
> > +# KUnit base configuration
> > +#
> > +
> > +menu "KUnit support"
> > +
> > +config KUNIT
> > +     bool "Enable support for unit tests (KUnit)"
> > +     depends on UML
>
> Consider using:
>
> if UML
>    ...
> endif
>
> That allows the depends to be done once.

If you want to eliminate depends, wouldn't it be best to have KUNIT
depend on whatever it needs, and then do `if KUNIT` below that? That
seems cleaner over the long term. Anyway, Kees actually asked me to
change it to the way it is now; I really don't care either way.
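i.e., the alternative shape, as a sketch (KUNIT_TEST just as an
example symbol):

	config KUNIT
		bool "Enable support for unit tests (KUnit)"
		depends on UML

	if KUNIT

	config KUNIT_TEST
		bool "KUnit test for KUnit"

	endif # KUNIT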

>
> > +     help
> > +       Enables support for kernel unit tests (KUnit), a lightweight unit
> > +       testing and mocking framework for the Linux kernel. These tests are
> > +       able to be run locally on a developer's workstation without a VM or
> > +       special hardware.
>
>
> Some mention of UML may be good here?

Good point.
>
> > For more information, please see
> > +       Documentation/kunit/
> > +
> > +endmenu
>
> I'm a bit conflicted here. This currently depends on UML, yet you
> noted on RFC v2 that your intention is to liberate kunit from UML and
> ideally allow unit tests to depend only on userspace. I've addressed
> tests using both selftest kernel drivers and also by re-writing kernel
> APIs in userspace to test them there. I think we may need to live with
> both.

I am not entirely opposed. The greater the isolation we can achieve,
and the fewer the dependencies and barriers to setting up test
fixtures, the better. I think the best way to do that in most cases is
allowing
minimal test binaries to be built that have the absolute minimum
amount of code necessary to test the desired property. That being
said, integration tests are a thing and drawing a line between them
and unit tests is not always possible, so supporting other
architectures might be necessary.

>
> Then for the UML stuff, I think if we *really* accept that UML will
> always be a viable option, we should probably consider throwing these
> things under drivers/platform/uml/ now. This follows the pattern of
> arch-specific drivers. Whether or not we end up with a complete
> userspace component independent of UML may mean having a shared
> component somewhere else.

Fair enough. What specifically are you suggesting should go in
`drivers/platform/uml`? Just the bits that are completely tied to UML
or a concrete architecture?

>
> Likewise, I realize the goal is to *avoid* using a virtual machine for
> these tests, but would it in any way make sense to extend kunit support
> to other architectures as well, to allow easier-to-write tests there
> too?

You are not the first person to ask for this.

For the vast majority of tests, I think we can (and consequently
should) make them run without any external dependencies. Doing so
makes it such that someone can run a test without knowing anything
about it, which allows you to do a lot of things. For one, I, as a
developer, don't have to hunt down somebody's QEMU patches, or
whatever. But it also means I, as someone maintaining part of the
kernel, can make nice test runners and build things like presubmit
servers on top of them.

Nevertheless, I accept that there are things which are just easier to
do with hardware or a VM (for integration tests it is necessary).
Still, I think we need to make sure the vast majority of unit tests do
not depend on real hardware or a VM.

^ permalink raw reply	[flat|nested] 232+ messages in thread

* [RFC v3 01/19] kunit: test: add KUnit test runner core
  2018-11-30  3:28   ` mcgrof
  2018-11-30  3:28     ` Luis Chamberlain
@ 2018-12-01  2:08     ` brendanhiggins
  2018-12-01  2:08       ` Brendan Higgins
  2018-12-01  3:10       ` mcgrof
  1 sibling, 2 replies; 232+ messages in thread
From: brendanhiggins @ 2018-12-01  2:08 UTC (permalink / raw)


On Thu, Nov 29, 2018 at 7:28 PM Luis Chamberlain <mcgrof at kernel.org> wrote:
>
> > +static void kunit_run_case_internal(struct kunit *test,
> > +                                 struct kunit_module *module,
> > +                                 struct kunit_case *test_case)
> > +{
> > +     int ret;
> > +
> > +     if (module->init) {
> > +             ret = module->init(test);
> > +             if (ret) {
> > +                     kunit_err(test, "failed to initialize: %d", ret);
> > +                     kunit_set_success(test, false);
> > +                     return;
> > +             }
> > +     }
> > +
> > +     test_case->run_case(test);
> > +}
>
> <-- snip -->
>
> > +static bool kunit_run_case(struct kunit *test,
> > +                        struct kunit_module *module,
> > +                        struct kunit_case *test_case)
> > +{
> > +     kunit_set_success(test, true);
> > +
> > +     kunit_run_case_internal(test, module, test_case);
> > +     kunit_run_case_cleanup(test, module, test_case);
> > +
> > +     return kunit_get_success(test);
> > +}
>
> So we are running the module->init() for each test case... is that
> correct? Shouldn't the init run once? Also, typically init calls are

Yep, it's correct. `module->init()` should run once before every test
case, the reason being that the kunit_module serves as a test fixture
in which each test case should run completely independently of the
others. init and exit are supposed to run code common to all test
cases, since it is so common for every test case in a fixture to need
the same dependencies.

Maybe it is confusing that I call it kunit_module? Maybe I should call
it kunit_fixture or something?

> pegged with __init so we can free them later. You seem to have skipped the
> init annotations. Why?

Like I said above, these aren't normal init functions. A
kunit_module->init() function should run once before each test case
and thus should reside in the same linker section as any other KUnit
test code.
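To make the lifecycle concrete, per entry in module->test_cases the
runner effectively does the following (sketch; error handling elided,
and I'm assuming exit mirrors init the way the cleanup path suggests):

	if (module->init)
		module->init(test);	/* fresh fixture for this case */
	test_case->run_case(test);
	if (module->exit)
		module->exit(test);	/* torn back down after the case */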

Cheers

^ permalink raw reply	[flat|nested] 232+ messages in thread

* [RFC v3 03/19] kunit: test: add string_stream a std::stream like string builder
  2018-11-30  3:29   ` mcgrof
  2018-11-30  3:29     ` Luis Chamberlain
@ 2018-12-01  2:14     ` brendanhiggins
  2018-12-01  2:14       ` Brendan Higgins
  2018-12-01  3:12       ` mcgrof
  2018-12-03 10:55     ` pmladek
  2 siblings, 2 replies; 232+ messages in thread
From: brendanhiggins @ 2018-12-01  2:14 UTC (permalink / raw)


On Thu, Nov 29, 2018 at 7:29 PM Luis Chamberlain <mcgrof at kernel.org> wrote:
>
> On Wed, Nov 28, 2018 at 11:36:20AM -0800, Brendan Higgins wrote:
> > A number of test features need to do pretty complicated string printing
> > where it may not be possible to rely on a single preallocated string
> > with parameters.
> >
> > So provide a library for constructing the string as you go similar to
> > C++'s std::string.
>
> Hrm, what's the potential for such a thing eventually being
> generically useful for the printk folks, I wonder? Petr?

Are you saying you think this is applicable to other things? Or are
you saying that you are afraid somebody might try to use this
elsewhere?

If the former, then it doesn't belong here. If the latter, it
explicitly depends on KUnit, so it is only available when running
tests.
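
(For reference, the usage shape is incremental construction, roughly as
in the sketch below. The function names here are illustrative; they are
not necessarily the exact identifiers in the patch:)

    struct string_stream *stream = new_string_stream();

    /* Build the message piece by piece; no single preallocated
     * buffer has to be sized up front.
     */
    string_stream_add(stream, "Expected %s == %s, but\n", left_name, right_name);
    string_stream_add(stream, "\t%s == %lld\n", left_name, left_value);

    kunit_err(test, "%s", string_stream_get_string(stream));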

Cheers

^ permalink raw reply	[flat|nested] 232+ messages in thread


* [RFC v3 01/19] kunit: test: add KUnit test runner core
  2018-12-01  1:51     ` brendanhiggins
  2018-12-01  1:51       ` Brendan Higgins
@ 2018-12-01  2:57       ` mcgrof
  2018-12-01  2:57         ` Luis Chamberlain
  1 sibling, 1 reply; 232+ messages in thread
From: mcgrof @ 2018-12-01  2:57 UTC (permalink / raw)


On Fri, Nov 30, 2018 at 05:51:11PM -0800, Brendan Higgins wrote:
> On Thu, Nov 29, 2018 at 7:14 PM Luis Chamberlain <mcgrof@kernel.org> wrote:
> >
> > On Wed, Nov 28, 2018 at 11:36:18AM -0800, Brendan Higgins wrote:
> > > +#define module_test(module) \
> > > +             static int module_kunit_init##module(void) \
> > > +             { \
> > > +                     return kunit_run_tests(&module); \
> > > +             } \
> > > +             late_initcall(module_kunit_init##module)
> >
> > Herein lies an assumption that suffices for now. I'm inclined to
> > believe we need a new initcall level here to ensure we *do* run after
> > all the respective kernel's init calls. Otherwise we're left at the
> > whims of link order for kunit. For instance, if a kunit test relies on
> > frameworks which are also late_initcall() we'd have complete
> > incompatibility with anything linked *after* kunit.
> 
> Yep, I have some patches that address this, but I thought this was
> sufficient for the initial patchset (I figured that's the type of
> thing that people will have opinions about, so best to get it out of
> the critical path). Do you want me to add those in the next revision?
> 
> >
> > > diff --git a/kunit/Kconfig b/kunit/Kconfig
> > > new file mode 100644
> > > index 0000000000000..49b44c4f6630a
> > > --- /dev/null
> > > +++ b/kunit/Kconfig
> > > @@ -0,0 +1,17 @@
> > > +#
> > > +# KUnit base configuration
> > > +#
> > > +
> > > +menu "KUnit support"
> > > +
> > > +config KUNIT
> > > +     bool "Enable support for unit tests (KUnit)"
> > > +     depends on UML
> >
> > Consider using:
> >
> > if UML
> >    ...
> > endif
> >
> > That allows the depends to be done once.
> 
> If you want to eliminate depends, wouldn't it be best to have KUNIT
> depend on whatever it needs, and then do `if KUNIT` below that? That
> seems cleaner over the long term. Anyway, Kees actually asked me to
> change it to the way it is now; I really don't care either way.

Yes, that works better. The idea is just to avoid having to write the
depends on over and over again.
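
A sketch of that shape, for concreteness (the KUNIT_* test options are
illustrative, not necessarily the exact ones in this series):

    config KUNIT
            bool "Enable support for unit tests (KUnit)"
            depends on UML

    if KUNIT

    config KUNIT_TEST
            bool "KUnit test for KUnit"

    config KUNIT_EXAMPLE_TEST
            bool "Example test for KUnit"

    endif # KUNIT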

> > I'm a bit conflicted here. This currently depends on UML, yet you
> > noted on RFC v2 that your intention is to liberate kunit from UML and
> > ideally allow unit tests to depend only on userspace. I've addressed
> > tests both using selftest kernel drivers and by re-writing kernel
> > APIs in userspace to test them there. I think we may need to live
> > with both.
> 
> I am not entirely opposed. The greater the isolation we can achieve,
> and the fewer the dependencies and barriers to setting up test
> fixtures, the better. I think the best way to do that in most cases
> is allowing minimal test binaries to be built that have the absolute
> minimum amount of code necessary to test the desired property. That
> being said, integration tests are a thing, and drawing a line between
> them and unit tests is not always possible, so supporting other
> architectures might be necessary.

Then let's pave the way for it to be done easily.

> > Then for the UML stuff, I think if we *really* accept that UML will
> > always be a viable option, we should probably consider throwing these
> > things under drivers/platform/uml/ now. This follows the pattern of
> > arch-specific drivers. Depending on whether we end up with a complete
> > userspace component independent of UML, we may need a shared
> > component somewhere else.
> 
> Fair enough. What specifically are you suggesting should go in
> `drivers/platform/uml`? Just the bits that are completely tied to UML
> or a concrete architecture?

The bits that are UML-specific. As I see it, with the above intention
clarified, kunit is a framework for all architectures, with UML
supported first. The code doesn't currently reflect this.

> > Likewise, I realize the goal is to *avoid* using a virtual machine
> > for these tests, but would it in any way make sense to extend kunit
> > support to other architectures, to allow easier-to-write tests there
> > as well?
> 
> You are not the first person to ask for this.
> 
> For the vast majority of tests, I think we can (and consequently
> should) make them run without any external dependencies. Doing so
> makes it such that someone can run a test without knowing anything
> about it, which allows you to do a lot of things. For one, I, as a
> developer, don't have to hunt down somebody's QEMU patches, or
> whatever. But it also means I, as someone maintaining part of the
> kernel, can make nice test runners and build things like presubmit
> servers on top of them.
> 
> Nevertheless, I accept that there are things which are just easier to
> do with hardware or a VM (for integration tests it is necessary).
> Still, I think we need to make sure the vast majority of unit tests do
> not depend on real hardware or a VM.

When possible, sure.

  Luis

^ permalink raw reply	[flat|nested] 232+ messages in thread


* [RFC v3 01/19] kunit: test: add KUnit test runner core
  2018-11-28 19:36 ` [RFC v3 01/19] kunit: test: add KUnit test runner core brendanhiggins
                     ` (2 preceding siblings ...)
  2018-11-30  3:28   ` mcgrof
@ 2018-12-01  3:02   ` mcgrof
  2018-12-01  3:02     ` Luis Chamberlain
  3 siblings, 1 reply; 232+ messages in thread
From: mcgrof @ 2018-12-01  3:02 UTC (permalink / raw)


On Wed, Nov 28, 2018 at 11:36:18AM -0800, Brendan Higgins wrote:
> +int kunit_run_tests(struct kunit_module *module)
> +{
> +	bool all_passed = true, success;
> +	struct kunit_case *test_case;
> +	struct kunit test;
> +	int ret;
> +
> +	ret = kunit_init_test(&test, module->name);
> +	if (ret)
> +		return ret;
> +
> +	for (test_case = module->test_cases; test_case->run_case; test_case++) {
> +		success = kunit_run_case(&test, module, test_case);

We are running test cases serially; why not address testing
asynchronously? That way tests could also be parallelized when
possible, decreasing test time even further.

Would that mess up the printing/log stuff somehow?

  Luis

^ permalink raw reply	[flat|nested] 232+ messages in thread


* [RFC v3 01/19] kunit: test: add KUnit test runner core
  2018-12-01  2:08     ` brendanhiggins
  2018-12-01  2:08       ` Brendan Higgins
@ 2018-12-01  3:10       ` mcgrof
  2018-12-01  3:10         ` Luis Chamberlain
  2018-12-03 22:47         ` brendanhiggins
  1 sibling, 2 replies; 232+ messages in thread
From: mcgrof @ 2018-12-01  3:10 UTC (permalink / raw)


On Fri, Nov 30, 2018 at 06:08:36PM -0800, Brendan Higgins wrote:
> On Thu, Nov 29, 2018 at 7:28 PM Luis Chamberlain <mcgrof@kernel.org> wrote:
> >
> > > +static void kunit_run_case_internal(struct kunit *test,
> > > +                                 struct kunit_module *module,
> > > +                                 struct kunit_case *test_case)
> > > +{
> > > +     int ret;
> > > +
> > > +     if (module->init) {
> > > +             ret = module->init(test);
> > > +             if (ret) {
> > > +                     kunit_err(test, "failed to initialize: %d", ret);
> > > +                     kunit_set_success(test, false);
> > > +                     return;
> > > +             }
> > > +     }
> > > +
> > > +     test_case->run_case(test);
> > > +}
> >
> > <-- snip -->
> >
> > > +static bool kunit_run_case(struct kunit *test,
> > > +                        struct kunit_module *module,
> > > +                        struct kunit_case *test_case)
> > > +{
> > > +     kunit_set_success(test, true);
> > > +
> > > +     kunit_run_case_internal(test, module, test_case);
> > > +     kunit_run_case_cleanup(test, module, test_case);
> > > +
> > > +     return kunit_get_success(test);
> > > +}
> >
> > So we are running the module->init() for each test case... is that
> > correct? Shouldn't the init run once? Also, typically init calls are
> 
> Yep, it's correct. `module->init()` should run once before every test
> case; the kunit_module serves as a test fixture in which each test
> case runs completely independently of every other.

Shouldn't the init be test_case specific as well? Right now we just
pass the struct kunit, but not the struct kunit_case. I thought that
the struct kunit_case was where we'd customize each specific test case
as we see fit. If not, how would we do, say, a different type of
initialization for a different type of test (for the same unit)?

> init and exit exist to run code common to all test cases, since it
> is so common for every test case in a suite to share the same
> dependencies.

Sure, the things in common make sense; however, don't the
differentiating aspects seem important on init as well? Or should the
author be doing all case-specific initialization in run_case() instead?

  Luis

^ permalink raw reply	[flat|nested] 232+ messages in thread


* [RFC v3 03/19] kunit: test: add string_stream a std::stream like string builder
  2018-12-01  2:14     ` brendanhiggins
  2018-12-01  2:14       ` Brendan Higgins
@ 2018-12-01  3:12       ` mcgrof
  2018-12-01  3:12         ` Luis Chamberlain
  1 sibling, 1 reply; 232+ messages in thread
From: mcgrof @ 2018-12-01  3:12 UTC (permalink / raw)


On Fri, Nov 30, 2018 at 06:14:17PM -0800, Brendan Higgins wrote:
> On Thu, Nov 29, 2018 at 7:29 PM Luis Chamberlain <mcgrof@kernel.org> wrote:
> >
> > On Wed, Nov 28, 2018 at 11:36:20AM -0800, Brendan Higgins wrote:
> > > A number of test features need to do pretty complicated string printing
> > > where it may not be possible to rely on a single preallocated string
> > > with parameters.
> > >
> > > So provide a library for constructing the string as you go similar to
> > > C++'s std::string.
> >
> > Hrm, what's the potential for such thing actually being eventually
> > generically useful for printk folks, I wonder? Petr?
> 
> Are you saying you think this is applicable to other things?

Yes.

> If the former, then it doesn't belong here.

  Luis

^ permalink raw reply	[flat|nested] 232+ messages in thread


* [RFC v3 03/19] kunit: test: add string_stream a std::stream like string builder
  2018-11-30  3:29   ` mcgrof
  2018-11-30  3:29     ` Luis Chamberlain
  2018-12-01  2:14     ` brendanhiggins
@ 2018-12-03 10:55     ` pmladek
  2018-12-03 10:55       ` Petr Mladek
  2018-12-04  0:35       ` brendanhiggins
  2 siblings, 2 replies; 232+ messages in thread
From: pmladek @ 2018-12-03 10:55 UTC (permalink / raw)


On Thu 2018-11-29 19:29:24, Luis Chamberlain wrote:
> On Wed, Nov 28, 2018 at 11:36:20AM -0800, Brendan Higgins wrote:
> > A number of test features need to do pretty complicated string printing
> > where it may not be possible to rely on a single preallocated string
> > with parameters.
> > 
> > So provide a library for constructing the string as you go similar to
> > C++'s std::string.
> 
> Hrm, what's the potential for such thing actually being eventually
> generically useful for printk folks, I wonder? Petr?

printk() is a bit tricky:

   + It should work in any context. Any additional lock adds a risk of
     deadlock. The NMI and scheduler contexts are especially
     problematic. There are also problems with code that is called
     from console drivers and itself calls printk() under a lock.

   + It should also work when the system is out of memory. Atomic
     context is especially problematic because we cannot wait for
     memory reclaim or swap.

   + We also do our best to get the message out on the console. This
     is important when the system is about to die. Any extra buffering
     layer might add delay and prevent the message from being seen.

From this point of view, this API is not generally usable for printk().

Now, the question is how many of the above constraints also apply to
unit testing. At the least, you might need to be careful when
allocating memory in atomic context.

BTW: there are more existing printk APIs. I admit that they are not
easily reusable in unit testing:

   + printk() is old, crappy code, complicated by all the corner
     cases and consoles.

   + include/linux/seq_buf.h is simple buffering. It is used primarily
     for sysfs output. It might be usable if you add support for
     loglevels and use a big enough buffer; I guess you would need to
     flush the buffer regularly anyway. (A minimal usage sketch follows
     after this list.)

   + trace_printk() uses lockless per-CPU buffers. It currently does
     not support loglevels, but it might be a pretty interesting choice
     as well.
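
   A minimal seq_buf usage sketch, just to show the shape of that API
   (ordinary kernel code, not something from this patchset):

       #include <linux/seq_buf.h>

       unsigned char buf[128];
       struct seq_buf s;

       seq_buf_init(&s, buf, sizeof(buf));
       seq_buf_printf(&s, "expected %d, ", want);
       seq_buf_printf(&s, "got %d\n", got);
       /* buf now holds the accumulated, length-bounded string. */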


I do not say that you have to use one of the existing APIs. But you
might consider them if you encounter any problems and maintaining
your variant gets complicated.

Best Regards,
Petr

^ permalink raw reply	[flat|nested] 232+ messages in thread


* [RFC v3 01/19] kunit: test: add KUnit test runner core
  2018-12-01  3:10       ` mcgrof
  2018-12-01  3:10         ` Luis Chamberlain
@ 2018-12-03 22:47         ` brendanhiggins
  2018-12-03 22:47           ` Brendan Higgins
  1 sibling, 1 reply; 232+ messages in thread
From: brendanhiggins @ 2018-12-03 22:47 UTC (permalink / raw)


On Fri, Nov 30, 2018 at 7:10 PM Luis Chamberlain <mcgrof@kernel.org> wrote:
>
> On Fri, Nov 30, 2018 at 06:08:36PM -0800, Brendan Higgins wrote:
> > On Thu, Nov 29, 2018 at 7:28 PM Luis Chamberlain <mcgrof@kernel.org> wrote:
> > >
> > > > +static void kunit_run_case_internal(struct kunit *test,
> > > > +                                 struct kunit_module *module,
> > > > +                                 struct kunit_case *test_case)
> > > > +{
> > > > +     int ret;
> > > > +
> > > > +     if (module->init) {
> > > > +             ret = module->init(test);
> > > > +             if (ret) {
> > > > +                     kunit_err(test, "failed to initialize: %d", ret);
> > > > +                     kunit_set_success(test, false);
> > > > +                     return;
> > > > +             }
> > > > +     }
> > > > +
> > > > +     test_case->run_case(test);
> > > > +}
> > >
> > > <-- snip -->
> > >
> > > > +static bool kunit_run_case(struct kunit *test,
> > > > +                        struct kunit_module *module,
> > > > +                        struct kunit_case *test_case)
> > > > +{
> > > > +     kunit_set_success(test, true);
> > > > +
> > > > +     kunit_run_case_internal(test, module, test_case);
> > > > +     kunit_run_case_cleanup(test, module, test_case);
> > > > +
> > > > +     return kunit_get_success(test);
> > > > +}
> > >
> > > So we are running the module->init() for each test case... is that
> > > correct? Shouldn't the init run once? Also, typically init calls are
> >
> > Yep, it's correct. `module->init()` should run once before every test
> > case; the kunit_module serves as a test fixture in which each test
> > case runs completely independently of every other.
>
> Shouldn't the init be test_case specific as well? Right now we just
> pass the struct kunit, but not the struct kunit_case. I thought that
> the struct kunit_case was where we'd customize each specific test case
> as we see fit. If not, how would we do, say, a different type of
> initialization for a different type of test (for the same unit)?

Maybe there should be other init functions, but specifying an init
function per case is not typical. In most unit testing frameworks
there is some sort of optional per-test-case init function that sets
up the fixture common to all cases; it is also fairly common to have
an init function that runs once at the very beginning of the entire
test suite (like what you thought I was doing); however, the latter is
not used nearly as often as the former, and even then is usually used
in conjunction with it.

Nevertheless, I don't think I have ever seen a unit test framework
provide a way to make init functions specific to each case. I don't
see any good reason not to do it, other than that the lack of examples
in the wild suggests it would not get much usage.

In general, limited initialization specific to a test case is fine in
the test case itself; and if you have really complicated
initialization that warrants a separate init function but isn't shared
between cases, you should probably put the test in a separate test
suite with a separate test fixture. I am sure there will be edge cases
that don't fit, but there is no technical reason why you cannot just
do the initialization in the test case itself in those cases.
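
For a concrete (purely illustrative) example of case-specific setup
living in the case itself -- the foo_* names and test->priv are made
up, and KUNIT_EXPECT_EQ is assumed from this series' kunit_* naming:

    static void foo_write_test(struct kunit *test)
    {
            struct foo_ctx *ctx = test->priv;  /* from the shared init */

            /* Case-specific setup lives right here in the case... */
            foo_set_mode(ctx, FOO_MODE_WRITE);

            /* ...followed by the actual assertion. */
            KUNIT_EXPECT_EQ(test, 0, foo_write(ctx, 42));
    }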

>
> > init and exit exist to run code common to all test cases, since it
> > is so common for every test case in a suite to share the same
> > dependencies.
>
> Sure, the things in common make sense; however, don't the
> differentiating aspects seem important on init as well? Or should the
> author be doing all case-specific initialization in run_case() instead?
>

Usually limited initialization specific to a test case will just go in
that test case.

Cheers

^ permalink raw reply	[flat|nested] 232+ messages in thread


* [RFC v3 06/19] arch: um: enable running kunit from User Mode Linux
  2018-11-30 18:22         ` mcgrof
  2018-11-30 18:22           ` Luis Chamberlain
@ 2018-12-03 23:22           ` brendanhiggins
  2018-12-03 23:22             ` Brendan Higgins
  1 sibling, 1 reply; 232+ messages in thread
From: brendanhiggins @ 2018-12-03 23:22 UTC (permalink / raw)


On Fri, Nov 30, 2018 at 10:22 AM Luis Chamberlain <mcgrof@kernel.org> wrote:
>
> On Fri, Nov 30, 2018 at 08:05:34AM -0600, Rob Herring wrote:
> > On Thu, Nov 29, 2018 at 9:37 PM Luis Chamberlain <mcgrof@kernel.org> wrote:
> > >
> > > On Wed, Nov 28, 2018 at 03:26:03PM -0600, Rob Herring wrote:
> > > > On Wed, Nov 28, 2018 at 1:37 PM Brendan Higgins
> > > > <brendanhiggins@google.com> wrote:
> > > > >
> > > > > Make minimum number of changes outside of the KUnit directories for
> > > > > KUnit to build and run using UML.
> > > >
> > > > There's nothing in this patch limiting this to UML.
> > >
> > > Not that one, but the abort/segv thing is, eventually.
> > > To support other architectures we'd need to make a wrapper for
> > > that hack which Brendan added, then allow each OS to implement
> > > its own call, and add an asm-generic helper.

I think Rob is referring to the description for this patch. This patch
previously did what you suggested, Luis (sourcing the KUnit Kconfig
from arch/um/), but Kees asked me to change it to how it is now (which
probably makes sense if we are saying KUnit is not intended to be tied
to a particular architecture, no?), and I forgot to update the commit
description, sorry.

> >
> > I've not looked into why this is needed, but can't you make the abort
> > support optional and arches can select it when they support it.
>
> It's why I have asked for it to be properly documented. The patches
> in no way illustrate *why* such a thing is done. And if we are going
> to potentially have other archs do something similar, best to make it
> explicit.

Yeah, I should document it better. I should also probably not include
any UML-specific header files in kunit/test.h; doing so seems like
asking to get more tightly coupled if I am not careful about exactly
what things I depend on.

I think Luis is right: I need to add a wrapper around the features
needed for the hack to support abort() and then write a UML-specific
implementation.

For the asm-generic case, we could probably just have abort() call
BUG(); with that, KUnit should work on most architectures, albeit with
pretty reduced functionality.
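
Roughly like this (an illustrative sketch, not code from the series;
the kunit_abort name is made up):

    /* asm-generic fallback: no longjmp-based recovery, just crash. */
    static inline void kunit_abort(void)
    {
            BUG();  /* reduced functionality: the run dies with the case */
    }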

>
> > At
> > least before, the DT unittests didn't need this to run and shouldn't
> > depend on it after converting to kunit.

Fair enough.

^ permalink raw reply	[flat|nested] 232+ messages in thread


* [RFC v3 07/19] kunit: test: add initial tests
  2018-11-30  3:40   ` mcgrof
  2018-11-30  3:40     ` Luis Chamberlain
@ 2018-12-03 23:26     ` brendanhiggins
  2018-12-03 23:26       ` Brendan Higgins
  2018-12-03 23:43       ` mcgrof
  1 sibling, 2 replies; 232+ messages in thread
From: brendanhiggins @ 2018-12-03 23:26 UTC (permalink / raw)


On Thu, Nov 29, 2018 at 7:40 PM Luis Chamberlain <mcgrof@kernel.org> wrote:
>
> On Wed, Nov 28, 2018 at 11:36:24AM -0800, Brendan Higgins wrote:
> > Add a test for string stream along with a simpler example.
> >
> > Signed-off-by: Brendan Higgins <brendanhiggins@google.com>
> > ---
> >  kunit/Kconfig              | 12 ++++++
> >  kunit/Makefile             |  4 ++
> >  kunit/example-test.c       | 88 ++++++++++++++++++++++++++++++++++++++
>
> BTW if you need another more concrete but very simple example I think it
> may be possible to port tools/testing/selftests/sysctl/sysctl.sh +
> lib/test_sysctl.c into a kunit test. Correct me if I'm wrong.

I think that is pretty doable. I don't know that I want to shoot for
that on the next revision. But I can definitely do it in a later
revision, or a later patchset, unless you would strongly prefer it
now, that is.

>
> I think that would show the differences clearly between selftests and
> kunit as well.

True. Maybe a good thing to shoot for once the DT tests are in order?

^ permalink raw reply	[flat|nested] 232+ messages in thread


* [RFC v3 08/19] arch: um: add shim to trap to allow installing a fault catcher for tests
  2018-11-30  3:34   ` mcgrof
  2018-11-30  3:34     ` Luis Chamberlain
@ 2018-12-03 23:34     ` brendanhiggins
  2018-12-03 23:34       ` Brendan Higgins
  2018-12-03 23:46       ` mcgrof
  1 sibling, 2 replies; 232+ messages in thread
From: brendanhiggins @ 2018-12-03 23:34 UTC (permalink / raw)


On Thu, Nov 29, 2018 at 7:34 PM Luis Chamberlain <mcgrof@kernel.org> wrote:
>
> On Wed, Nov 28, 2018 at 11:36:25AM -0800, Brendan Higgins wrote:
> > diff --git a/arch/um/kernel/trap.c b/arch/um/kernel/trap.c
> > index cced829460427..bf90e678b3d71 100644
> > --- a/arch/um/kernel/trap.c
> > +++ b/arch/um/kernel/trap.c
> > @@ -201,6 +201,12 @@ void segv_handler(int sig, struct siginfo *unused_si, struct uml_pt_regs *regs)
> >       segv(*fi, UPT_IP(regs), UPT_IS_USER(regs), regs);
> >  }
> >
> > +static void segv_run_catcher(jmp_buf *catcher, void *fault_addr)
> > +{
> > +     current->thread.fault_addr = fault_addr;
> > +     UML_LONGJMP(catcher, 1);
> > +}
> > +
> >  /*
> >   * We give a *copy* of the faultinfo in the regs to segv.
> >   * This must be done, since nesting SEGVs could overwrite
> > @@ -219,7 +225,10 @@ unsigned long segv(struct faultinfo fi, unsigned long ip, int is_user,
> >       if (!is_user && regs)
> >               current->thread.segv_regs = container_of(regs, struct pt_regs, regs);
> >
> > -     if (!is_user && (address >= start_vm) && (address < end_vm)) {
> > +     catcher = current->thread.fault_catcher;
>
> This and..
>
> > +     if (catcher && current->thread.is_running_test)
> > +             segv_run_catcher(catcher, (void *) address);
> > +     else if (!is_user && (address >= start_vm) && (address < end_vm)) {
> >               flush_tlb_kernel_vm();
> >               goto out;
> >       }
>
> *not this*

I don't understand. Are you saying the previous block of code is good
and this one is bad?

>
> > @@ -246,12 +255,10 @@ unsigned long segv(struct faultinfo fi, unsigned long ip, int is_user,
> >               address = 0;
> >       }
> >
> > -     catcher = current->thread.fault_catcher;
> >       if (!err)
> >               goto out;
> >       else if (catcher != NULL) {
> > -             current->thread.fault_addr = (void *) address;
> > -             UML_LONGJMP(catcher, 1);
> > +             segv_run_catcher(catcher, (void *) address);
> >       }
> >       else if (current->thread.fault_addr != NULL)
> >               panic("fault_addr set but no fault catcher");
>
> But this seems like one atomic change which should be submitted
> separately; it's just a helper. I think it would make the actual
> change needed easier to review, i.e., your needed changes would be
> smaller and clearer.

Are you suggesting that I pull out the bits needed to implement abort
in the next patch and squash it into this one?

^ permalink raw reply	[flat|nested] 232+ messages in thread


* [RFC v3 08/19] arch: um: add shim to trap to allow installing a fault catcher for tests
  2018-11-30  3:41   ` mcgrof
  2018-11-30  3:41     ` Luis Chamberlain
@ 2018-12-03 23:37     ` brendanhiggins
  2018-12-03 23:37       ` Brendan Higgins
  1 sibling, 1 reply; 232+ messages in thread
From: brendanhiggins @ 2018-12-03 23:37 UTC (permalink / raw)


On Thu, Nov 29, 2018 at 7:41 PM Luis Chamberlain <mcgrof@kernel.org> wrote:
>
> On Wed, Nov 28, 2018 at 11:36:25AM -0800, Brendan Higgins wrote:
> > +static void segv_run_catcher(jmp_buf *catcher, void *fault_addr)
> > +{
> > +     current->thread.fault_addr = fault_addr;
> > +     UML_LONGJMP(catcher, 1);
> > +}
>
> Some documentation about what this does exactly would be appreciated,
> with the goal that it may be useful to others wanting to consider
> support for other archs -- if that actually ends up being desirable.

Yeah, I should. Should this go in the wrapper around the abort() hack,
or do you think I should write some documentation on how KUnit works
under Documentation/?
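
For instance, something like the following kernel-doc on the helper
itself (the wording is mine, just to illustrate the kind of
documentation meant; the function body is quoted from the patch):

    /**
     * segv_run_catcher() - divert a fault to a registered catcher
     * @catcher:    jmp_buf installed in current->thread.fault_catcher
     *              before running code that may fault
     * @fault_addr: the address whose access faulted
     *
     * Saves the faulting address and longjmps back to the catcher so
     * the caller (e.g. the KUnit runner) can handle the fault instead
     * of letting the kernel die.
     */
    static void segv_run_catcher(jmp_buf *catcher, void *fault_addr)
    {
            current->thread.fault_addr = fault_addr;
            UML_LONGJMP(catcher, 1);
    }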

^ permalink raw reply	[flat|nested] 232+ messages in thread


* [RFC v3 07/19] kunit: test: add initial tests
  2018-12-03 23:26     ` brendanhiggins
  2018-12-03 23:26       ` Brendan Higgins
@ 2018-12-03 23:43       ` mcgrof
  2018-12-03 23:43         ` Luis Chamberlain
  1 sibling, 1 reply; 232+ messages in thread
From: mcgrof @ 2018-12-03 23:43 UTC (permalink / raw)


On Mon, Dec 03, 2018 at 03:26:26PM -0800, Brendan Higgins wrote:
> On Thu, Nov 29, 2018 at 7:40 PM Luis Chamberlain <mcgrof@kernel.org> wrote:
> >
> > On Wed, Nov 28, 2018 at 11:36:24AM -0800, Brendan Higgins wrote:
> > > Add a test for string stream along with a simpler example.
> > >
> > > Signed-off-by: Brendan Higgins <brendanhiggins@google.com>
> > > ---
> > >  kunit/Kconfig              | 12 ++++++
> > >  kunit/Makefile             |  4 ++
> > >  kunit/example-test.c       | 88 ++++++++++++++++++++++++++++++++++++++
> >
> > BTW if you need another more concrete but very simple example I think it
> > may be possible to port tools/testing/selftests/sysctl/sysctl.sh +
> > lib/test_sysctl.c into a kunit test. Correct me if I'm wrong.
> 
> I think that is pretty doable. I don't know that I want to shoot for
> that on the next revision. But I can definitely do it in a later
> revision, or a later patchset, unless you would strongly prefer it
> now, that is.

No rush on my end, just figured I'd mention a simple candidate in case
you needed another one to evaluate.

  Luis

^ permalink raw reply	[flat|nested] 232+ messages in thread


* [RFC v3 08/19] arch: um: add shim to trap to allow installing a fault catcher for tests
  2018-12-03 23:34     ` brendanhiggins
  2018-12-03 23:34       ` Brendan Higgins
@ 2018-12-03 23:46       ` mcgrof
  2018-12-03 23:46         ` Luis Chamberlain
  2018-12-04  0:44         ` brendanhiggins
  1 sibling, 2 replies; 232+ messages in thread
From: mcgrof @ 2018-12-03 23:46 UTC (permalink / raw)


On Mon, Dec 03, 2018 at 03:34:57PM -0800, Brendan Higgins wrote:
> On Thu, Nov 29, 2018 at 7:34 PM Luis Chamberlain <mcgrof@kernel.org> wrote:
> >
> > On Wed, Nov 28, 2018 at 11:36:25AM -0800, Brendan Higgins wrote:
> > > diff --git a/arch/um/kernel/trap.c b/arch/um/kernel/trap.c
> > > index cced829460427..bf90e678b3d71 100644
> > > --- a/arch/um/kernel/trap.c
> > > +++ b/arch/um/kernel/trap.c
> > > @@ -201,6 +201,12 @@ void segv_handler(int sig, struct siginfo *unused_si, struct uml_pt_regs *regs)
> > >       segv(*fi, UPT_IP(regs), UPT_IS_USER(regs), regs);
> > >  }
> > >
> > > +static void segv_run_catcher(jmp_buf *catcher, void *fault_addr)
> > > +{
> > > +     current->thread.fault_addr = fault_addr;
> > > +     UML_LONGJMP(catcher, 1);
> > > +}
> > > +
> > >  /*
> > >   * We give a *copy* of the faultinfo in the regs to segv.
> > >   * This must be done, since nesting SEGVs could overwrite
> > > @@ -219,7 +225,10 @@ unsigned long segv(struct faultinfo fi, unsigned long ip, int is_user,
> > >       if (!is_user && regs)
> > >               current->thread.segv_regs = container_of(regs, struct pt_regs, regs);
> > >
> > > -     if (!is_user && (address >= start_vm) && (address < end_vm)) {
> > > +     catcher = current->thread.fault_catcher;
> >
> > This and..
> >
> > > +     if (catcher && current->thread.is_running_test)
> > > +             segv_run_catcher(catcher, (void *) address);
> > > +     else if (!is_user && (address >= start_vm) && (address < end_vm)) {
> > >               flush_tlb_kernel_vm();
> > >               goto out;
> > >       }
> >
> > *not this*
> 
> I don't understand. Are you saying the previous block of code is good
> and this one is bad?

No, I was saying that the above block of code is a functional change,
but I was also pointing out other areas which are not, and which could
be folded into a separate atomic patch with no functional changes.

> > > @@ -246,12 +255,10 @@ unsigned long segv(struct faultinfo fi, unsigned long ip, int is_user,
> > >               address = 0;
> > >       }
> > >
> > > -     catcher = current->thread.fault_catcher;
> > >       if (!err)
> > >               goto out;
> > >       else if (catcher != NULL) {
> > > -             current->thread.fault_addr = (void *) address;
> > > -             UML_LONGJMP(catcher, 1);
> > > +             segv_run_catcher(catcher, (void *) address);
> > >       }
> > >       else if (current->thread.fault_addr != NULL)
> > >               panic("fault_addr set but no fault catcher");
> >
> > But with this seems one atomic change which should be submitted
> > separately, its just a helper. Think it would make the actual
> > change needed easier to review, ie, your needed changes would
> > be smaller and clearer for what you need.
> 
> Are you suggesting that I pull out the bits needed to implement abort
> in the next patch and squash it into this one?

No, I'm suggesting you can probably split this patch in 2, one which
wraps things with no functional changes, and another which adds your
changes.
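Concretely, the no-functional-change half could introduce the helper
and switch only the existing catcher path over to it, along these lines
(a sketch assembled from the hunks above):

    +static void segv_run_catcher(jmp_buf *catcher, void *fault_addr)
    +{
    +	current->thread.fault_addr = fault_addr;
    +	UML_LONGJMP(catcher, 1);
    +}
    ...
     	catcher = current->thread.fault_catcher;
     	if (!err)
     		goto out;
     	else if (catcher != NULL) {
    -		current->thread.fault_addr = (void *) address;
    -		UML_LONGJMP(catcher, 1);
    +		segv_run_catcher(catcher, (void *) address);
     	}

The second patch would then hoist the catcher load to the top of segv()
and add the is_running_test early dispatch.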

  Luis

^ permalink raw reply	[flat|nested] 232+ messages in thread

* [RFC v3 11/19] kunit: add Python libraries for handling KUnit config and kernel
  2018-11-29 13:54   ` kieran.bingham
  2018-11-29 13:54     ` Kieran Bingham
@ 2018-12-03 23:48     ` brendanhiggins
  2018-12-03 23:48       ` Brendan Higgins
  2018-12-04 20:47       ` mcgrof
  1 sibling, 2 replies; 232+ messages in thread
From: brendanhiggins @ 2018-12-03 23:48 UTC (permalink / raw)


On Thu, Nov 29, 2018 at 5:54 AM Kieran Bingham
<kieran.bingham at ideasonboard.com> wrote:
>
> Hi Brendan,
>
> Thanks again for this series!
>
> On 28/11/2018 19:36, Brendan Higgins wrote:
> > The ultimate goal is to create minimal isolated test binaries; in the
> > meantime we are using UML to provide the infrastructure to run tests, so
> > define an abstract way to configure and run tests that allow us to
> > change the context in which tests are built without affecting the user.
> > This also makes pretty and dynamic error reporting, and a lot of other
> > nice features easier.
>
>
> I wonder if we could somehow generate a shared library object
> 'libkernel' or 'libumlinux' from a UM configured set of headers and
> objects so that we could create binary targets directly ?

That's an interesting idea. I think it would be difficult to figure
out, a priori, exactly where to draw the line between what goes in
there and what needs to be built specifically for a test. Of course,
that leads into the biggest problem in general: needing to know what I
need to build to test the thing that I want to test.

Nevertheless, I could definitely imagine that being useful in a lot of cases.

> > diff --git a/tools/testing/kunit/kunit_kernel.py b/tools/testing/kunit/kunit_kernel.py
> > new file mode 100644
> > index 0000000000000..bba7ea7ca1869
> > --- /dev/null
> > +++ b/tools/testing/kunit/kunit_kernel.py
...
> > +     def make(self, jobs):
> > +             try:
> > +                     subprocess.check_output([
> > +                                     'make',
> > +                                     'ARCH=um',
> > +                                     '--jobs=' + str(jobs)])
>
> Perhaps as a future extension:
>
> It would be nice if we could set an O= here to keep the source tree
> pristine.
>
> In fact I might even suggest that this should always be set so that the
> unittesting could live along side an existing kernel build? :
>
>  O ?= $KBUILD_SRC/
>  O := $(O)/kunittest/$(ARCH)/build

I agree with that. It would be pretty annoying to run a unit test and
have it mess up your .config and force you to rebuild everything else.
(I have actually done this to myself a couple of times...)
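A sketch of what the O= suggestion might look like in
kunit_kernel.py's make(); the build_dir parameter is hypothetical and
error handling is elided:

    def make(self, jobs, build_dir=None):
            command = ['make', 'ARCH=um', '--jobs=' + str(jobs)]
            if build_dir:
                    # Keep the source tree pristine: objects and the
                    # generated .config land under build_dir instead.
                    command.append('O=' + build_dir)
            subprocess.check_output(command)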

Cheers

^ permalink raw reply	[flat|nested] 232+ messages in thread

* [RFC v3 11/19] kunit: add Python libraries for handling KUnit config and kernel
  2018-11-30  3:44   ` mcgrof
  2018-11-30  3:44     ` Luis Chamberlain
@ 2018-12-03 23:50     ` brendanhiggins
  2018-12-03 23:50       ` Brendan Higgins
  2018-12-04 20:48       ` mcgrof
  1 sibling, 2 replies; 232+ messages in thread
From: brendanhiggins @ 2018-12-03 23:50 UTC (permalink / raw)


On Thu, Nov 29, 2018 at 7:44 PM Luis Chamberlain <mcgrof at kernel.org> wrote:
>
> On Wed, Nov 28, 2018 at 11:36:28AM -0800, Brendan Higgins wrote:
> > The ultimate goal is to create minimal isolated test binaries; in the
> > meantime we are using UML to provide the infrastructure to run tests, so
> > define an abstract way to configure and run tests that allow us to
> > change the context in which tests are built without affecting the user.
> > This also makes pretty and dynamic error reporting, and a lot of other
> > nice features easier.
> >
> > kunit_config.py:
> >   - parse .config and Kconfig files.
> >
> >
> > kunit_kernel.py: provides helper functions to:
> >   - configure the kernel using kunitconfig.
>
> We get the tools to run the config stuff, build, etc, but not a top
> level 'make kunitconfig' or whatever. We have things like 'make
> kvmconfig' and 'make xenconfig', I think it would be reasonable to
> add similar for this.

Are you just asking for a defconfig for KUnit, or are you asking for a
way to run KUnit from make?
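If it is the latter, a 'make kunitconfig' target could follow the
kvmconfig/xenconfig pattern, where the target names a config fragment
that the generic %.config rule merges into the current .config with
scripts/kconfig/merge_config.sh. A sketch, with a hypothetical target
name and fragment path:

    # scripts/kconfig/Makefile (sketch)
    PHONY += kunitconfig
    kunitconfig: kunit.config
            @:

    # kernel/configs/kunit.config (or arch/um/configs/) would then
    # carry the CONFIG_KUNIT=y fragment to be merged.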

^ permalink raw reply	[flat|nested] 232+ messages in thread

* [RFC v3 14/19] Documentation: kunit: add documentation for KUnit
  2018-11-30  3:45     ` mcgrof
  2018-11-30  3:45       ` Luis Chamberlain
@ 2018-12-03 23:53       ` brendanhiggins
  2018-12-03 23:53         ` Brendan Higgins
  2018-12-06 12:16         ` kieran.bingham
  1 sibling, 2 replies; 232+ messages in thread
From: brendanhiggins @ 2018-12-03 23:53 UTC (permalink / raw)


On Thu, Nov 29, 2018 at 7:45 PM Luis Chamberlain <mcgrof at kernel.org> wrote:
>
> On Thu, Nov 29, 2018 at 01:56:37PM +0000, Kieran Bingham wrote:
> > Hi Brendan,
> >
> > Please excuse the top posting, but I'm replying here as I'm following
> > the section "Creating a kunitconfig" in Documentation/kunit/start.rst.
> >
> > Could the three line kunitconfig file live under say
> >        arch/um/configs/kunit_defconfig?
> >
> > So that it's always provided? And could even be extended with tests
> > which people would expect to be run by default? (say in distributions)
>
> Indeed, and then a top level 'make kunitconfig' could use it as well.

Yep, I totally agree.
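Assuming the Kconfig symbols introduced in this series, the fragment
under discussion really is tiny; arch/um/configs/kunit_defconfig would
be roughly:

    CONFIG_KUNIT=y
    CONFIG_KUNIT_TEST=y
    CONFIG_KUNIT_EXAMPLE_TEST=y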

^ permalink raw reply	[flat|nested] 232+ messages in thread

* [RFC v3 16/19] arch: um: make UML unflatten device tree when testing
  2018-11-28 21:16   ` robh
  2018-11-28 21:16     ` Rob Herring
@ 2018-12-04  0:00     ` brendanhiggins
  2018-12-04  0:00       ` Brendan Higgins
  1 sibling, 1 reply; 232+ messages in thread
From: brendanhiggins @ 2018-12-04  0:00 UTC (permalink / raw)


On Wed, Nov 28, 2018 at 1:16 PM Rob Herring <robh at kernel.org> wrote:
>
> On Wed, Nov 28, 2018 at 1:38 PM Brendan Higgins
> <brendanhiggins at google.com> wrote:
> > diff --git a/arch/um/kernel/um_arch.c b/arch/um/kernel/um_arch.c
> > index a818ccef30ca2..bd58ae3bf4148 100644
> > --- a/arch/um/kernel/um_arch.c
> > +++ b/arch/um/kernel/um_arch.c
> > +#if IS_ENABLED(CONFIG_OF_UNITTEST)
> > +       unflatten_device_tree();
> > +#endif
>
> Kind of strange to have this in the arch code. I'd rather have this in
> the unittest code if possible. Can we have an initcall conditional on
> CONFIG_UM in the unittest do this? Side note, use a C if with
> IS_ENABLED() whenever possible instead of pre-processor #if.

Yeah, that makes more sense. I will send a separate patch.

>
> I'll take a fix separately as it was on my todo to fix. I've got the
> unit tests running in a gitlab CI job now[1].
>
> Rob
>
> [1] https://gitlab.com/robherring/linux-dt-unittest/pipelines

^ permalink raw reply	[flat|nested] 232+ messages in thread

* [RFC v3 16/19] arch: um: make UML unflatten device tree when testing
  2018-11-30  3:46   ` mcgrof
  2018-11-30  3:46     ` Luis Chamberlain
@ 2018-12-04  0:02     ` brendanhiggins
  2018-12-04  0:02       ` Brendan Higgins
  1 sibling, 1 reply; 232+ messages in thread
From: brendanhiggins @ 2018-12-04  0:02 UTC (permalink / raw)


On Thu, Nov 29, 2018 at 7:46 PM Luis Chamberlain <mcgrof at kernel.org> wrote:
>
> On Wed, Nov 28, 2018 at 11:36:33AM -0800, Brendan Higgins wrote:
> > Make UML unflatten any present device trees when running KUnit tests.
> >
> > Signed-off-by: Brendan Higgins <brendanhiggins at google.com>
> > ---
> >  arch/um/kernel/um_arch.c | 4 ++++
> >  1 file changed, 4 insertions(+)
> >
> > diff --git a/arch/um/kernel/um_arch.c b/arch/um/kernel/um_arch.c
> > index a818ccef30ca2..bd58ae3bf4148 100644
> > --- a/arch/um/kernel/um_arch.c
> > +++ b/arch/um/kernel/um_arch.c
> > @@ -13,6 +13,7 @@
> >  #include <linux/sched.h>
> >  #include <linux/sched/task.h>
> >  #include <linux/kmsg_dump.h>
> > +#include <linux/of_fdt.h>
> >
> >  #include <asm/pgtable.h>
> >  #include <asm/processor.h>
> > @@ -347,6 +348,9 @@ void __init setup_arch(char **cmdline_p)
> >       read_initrd();
> >
> >       paging_init();
> > +#if IS_ENABLED(CONFIG_OF_UNITTEST)
> > +     unflatten_device_tree();
> > +#endif
>
> *Why?*

Whoops, I didn't realize how bad that looked. In any case, doing what
Rob suggested as a separate patch should clear this up.
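What Rob suggested might look roughly like the following in the
unittest code; the function name and initcall level here are
hypothetical:

    #include <linux/init.h>
    #include <linux/of_fdt.h>

    static int __init of_unittest_unflatten(void)
    {
            /* UML does not unflatten a device tree during setup_arch(),
             * so do it here, before the unit tests run. */
            if (IS_ENABLED(CONFIG_UM))
                    unflatten_device_tree();
            return 0;
    }
    core_initcall(of_unittest_unflatten);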

^ permalink raw reply	[flat|nested] 232+ messages in thread

* [RFC v3 17/19] of: unittest: migrate tests to run on KUnit
       [not found]   ` <CAL_Jsq+09Kx7yMBC_Jw45QGmk6U_fp4N6HOZDwYrM4tWw+_dOA@mail.gmail.com>
  2018-11-30  0:39     ` rdunlap
@ 2018-12-04  0:08     ` brendanhiggins
  2018-12-04  0:08       ` Brendan Higgins
  2019-02-13  1:44     ` brendanhiggins
  2 siblings, 1 reply; 232+ messages in thread
From: brendanhiggins @ 2018-12-04  0:08 UTC (permalink / raw)


On Wed, Nov 28, 2018 at 12:56 PM Rob Herring <robh at kernel.org> wrote:
>
> On Wed, Nov 28, 2018 at 1:38 PM Brendan Higgins
> <brendanhiggins at google.com> wrote:
> >
> > Migrate tests without any cleanup, or modifying test logic in any way, to
> > run under KUnit using the KUnit expectation and assertion API.
>
> Nice! You beat me to it. This is probably going to conflict with what
> is in the DT tree for 4.21. Also, please Cc the DT list for
> drivers/of/ changes.

Oh, I thought you were asking me to do it :-) In any case, I am happy to.

Oh yeah, sorry about not CC'ing the list.

Cheers

^ permalink raw reply	[flat|nested] 232+ messages in thread

* [RFC v3 17/19] of: unittest: migrate tests to run on KUnit
  2018-11-30  0:39     ` rdunlap
  2018-11-30  0:39       ` Randy Dunlap
@ 2018-12-04  0:13       ` brendanhiggins
  2018-12-04  0:13         ` Brendan Higgins
  2018-12-04 13:40         ` robh
  1 sibling, 2 replies; 232+ messages in thread
From: brendanhiggins @ 2018-12-04  0:13 UTC (permalink / raw)


On Thu, Nov 29, 2018 at 4:40 PM Randy Dunlap <rdunlap at infradead.org> wrote:
>
> On 11/28/18 12:56 PM, Rob Herring wrote:
> >> diff --git a/drivers/of/Kconfig b/drivers/of/Kconfig
> >> index ad3fcad4d75b8..f309399deac20 100644
> >> --- a/drivers/of/Kconfig
> >> +++ b/drivers/of/Kconfig
> >> @@ -15,6 +15,7 @@ if OF
> >>  config OF_UNITTEST
> >>         bool "Device Tree runtime unit tests"
> >>         depends on !SPARC
> >> +       depends on KUNIT
> > Unless KUNIT has depends, better to be a select here.
>
> That's just style or taste.  I would prefer to use depends
> instead of select, but that's also just my preference.

I prefer depends too, but Rob is the maintainer here.
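For anyone skimming, the semantic difference between the two options
is (sketch):

    config OF_UNITTEST
            bool "Device Tree runtime unit tests"
            depends on !SPARC
            depends on KUNIT    # hidden until the user enables KUNIT

versus

    config OF_UNITTEST
            bool "Device Tree runtime unit tests"
            depends on !SPARC
            select KUNIT        # enabling the tests force-enables KUNIT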

^ permalink raw reply	[flat|nested] 232+ messages in thread

* [RFC v3 03/19] kunit: test: add string_stream a std::stream like string builder
  2018-12-03 10:55     ` pmladek
  2018-12-03 10:55       ` Petr Mladek
@ 2018-12-04  0:35       ` brendanhiggins
  2018-12-04  0:35         ` Brendan Higgins
  1 sibling, 1 reply; 232+ messages in thread
From: brendanhiggins @ 2018-12-04  0:35 UTC (permalink / raw)


On Mon, Dec 3, 2018 at 2:55 AM Petr Mladek <pmladek at suse.com> wrote:
>
> On Thu 2018-11-29 19:29:24, Luis Chamberlain wrote:
> > On Wed, Nov 28, 2018 at 11:36:20AM -0800, Brendan Higgins wrote:
> > > A number of test features need to do pretty complicated string printing
> > > where it may not be possible to rely on a single preallocated string
> > > with parameters.
> > >
> > > So provide a library for constructing the string as you go similar to
> > > C++'s std::string.
> >
> > Hrm, what's the potential for such a thing actually being eventually
> > generically useful for printk folks, I wonder? Petr?
>
> printk() is a bit tricky:
>
>    + It should work in any context. Any additional lock adds risk of a
>      deadlock. Especially the NMI and scheduler contexts are problematic.
>      There are problems with any other code that might be called
>      from console drivers and calls printk() under a lock.
>
>    + It should work also when the system is out of memory. Especially
>      atomic context is problematic because we could not wait for
>      memory reclaim or swap.
>
> >    + We also do our best to get the message out on the
> >      console. It is important when the system is about to die.
> >      Any extra buffering layer might cause delay and prevent the
> >      message from being seen.
>
> From this point of view, this API is not generally usable with printk().

Yeah, that makes sense. I wouldn't really expect this to work well in
those cases.

> Now, the question is how many of the above also apply to unit testing.
> At least, you might need to be careful when allocating memory in
> atomic context.

True, but this is only supposed to be used for constructing
expectation failure messages which should only happen from a
non-atomic context.

>
> BTW: There are more existing printk APIs. Well, I admit that they are
> not easily reusable in unit testing:
>
>    + printk() is old, crappy code, complicated with all the
>      corner cases and consoles.
>
>    + include/linux/seq_buf.h is simple buffering. It is used primarily
>      for sysfs output. It might be usable if you add support for
>      loglevels and use a big enough buffer. I guess that you should
>      flush the buffer regularly anyway.
>
>    + trace_printk() uses lockless per-CPU buffers. It currently does not
>      support loglevels. But it might be pretty interesting choice as well.
>
>
> I do not say that you have to use one of the existing API. But you
> might consider them if you encouter any problems and maintaining
> your variant gets complicated.

Alright, I will take a look.

Thanks!
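Of those, seq_buf is probably the easiest to evaluate; a minimal
sketch of building a failure message with it, where the function and
operand names are hypothetical:

    #include <linux/seq_buf.h>

    static void format_failure(char *buf, size_t len,
                               const char *left, const char *right)
    {
            struct seq_buf s;

            /* Write into a caller-provided, fixed-size buffer; seq_buf
             * tracks the write position and will not overrun len. */
            seq_buf_init(&s, buf, len);
            seq_buf_printf(&s, "EXPECTATION FAILED: %s == %s\n", left, right);
    }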

^ permalink raw reply	[flat|nested] 232+ messages in thread

* [RFC v3 08/19] arch: um: add shim to trap to allow installing a fault catcher for tests
  2018-12-03 23:46       ` mcgrof
  2018-12-03 23:46         ` Luis Chamberlain
@ 2018-12-04  0:44         ` brendanhiggins
  2018-12-04  0:44           ` Brendan Higgins
  1 sibling, 1 reply; 232+ messages in thread
From: brendanhiggins @ 2018-12-04  0:44 UTC (permalink / raw)


On Mon, Dec 3, 2018 at 3:46 PM Luis Chamberlain <mcgrof at kernel.org> wrote:
>
> On Mon, Dec 03, 2018 at 03:34:57PM -0800, Brendan Higgins wrote:
> > On Thu, Nov 29, 2018 at 7:34 PM Luis Chamberlain <mcgrof at kernel.org> wrote:
> > >
> > > On Wed, Nov 28, 2018 at 11:36:25AM -0800, Brendan Higgins wrote:
> > > > diff --git a/arch/um/kernel/trap.c b/arch/um/kernel/trap.c
> > > > index cced829460427..bf90e678b3d71 100644
> > > > --- a/arch/um/kernel/trap.c
> > > > +++ b/arch/um/kernel/trap.c
> > > > @@ -201,6 +201,12 @@ void segv_handler(int sig, struct siginfo *unused_si, struct uml_pt_regs *regs)
> > > >       segv(*fi, UPT_IP(regs), UPT_IS_USER(regs), regs);
> > > >  }
> > > >
> > > > +static void segv_run_catcher(jmp_buf *catcher, void *fault_addr)
> > > > +{
> > > > +     current->thread.fault_addr = fault_addr;
> > > > +     UML_LONGJMP(catcher, 1);
> > > > +}
> > > > +
> > > >  /*
> > > >   * We give a *copy* of the faultinfo in the regs to segv.
> > > >   * This must be done, since nesting SEGVs could overwrite
> > > > @@ -219,7 +225,10 @@ unsigned long segv(struct faultinfo fi, unsigned long ip, int is_user,
> > > >       if (!is_user && regs)
> > > >               current->thread.segv_regs = container_of(regs, struct pt_regs, regs);
> > > >
> > > > -     if (!is_user && (address >= start_vm) && (address < end_vm)) {
> > > > +     catcher = current->thread.fault_catcher;
> > >
> > > This and..
> > >
> > > > +     if (catcher && current->thread.is_running_test)
> > > > +             segv_run_catcher(catcher, (void *) address);
> > > > +     else if (!is_user && (address >= start_vm) && (address < end_vm)) {
> > > >               flush_tlb_kernel_vm();
> > > >               goto out;
> > > >       }
> > >
> > > *not this*
> >
> > I don't understand. Are you saying the previous block of code is good
> > and this one is bad?
>
> No, I was saying that the above block of code is a functional change,
> but I was also pointing out other areas which were not and could be
> folded into a separate atomic patch where no functionality changes.
>
> > > > @@ -246,12 +255,10 @@ unsigned long segv(struct faultinfo fi, unsigned long ip, int is_user,
> > > >               address = 0;
> > > >       }
> > > >
> > > > -     catcher = current->thread.fault_catcher;
> > > >       if (!err)
> > > >               goto out;
> > > >       else if (catcher != NULL) {
> > > > -             current->thread.fault_addr = (void *) address;
> > > > -             UML_LONGJMP(catcher, 1);
> > > > +             segv_run_catcher(catcher, (void *) address);
> > > >       }
> > > >       else if (current->thread.fault_addr != NULL)
> > > >               panic("fault_addr set but no fault catcher");
> > >
> > > But with this seems one atomic change which should be submitted
> > > separately, its just a helper. Think it would make the actual
> > > change needed easier to review, ie, your needed changes would
> > > be smaller and clearer for what you need.
> >
> > Are you suggesting that I pull out the bits needed to implement abort
> > in the next patch and squash it into this one?
>
> No, I'm suggesting you can probably split this patch in 2, one which
> wraps things with no functional changes, and another which adds your
> changes.
>

That makes sense.

Thanks for the clarification!

^ permalink raw reply	[flat|nested] 232+ messages in thread

* [RFC v3 00/19] kunit: introduce KUnit, the Linux kernel unit testing framework
  2018-11-28 19:36 [RFC v3 00/19] kunit: introduce KUnit, the Linux kernel unit testing framework brendanhiggins
                   ` (19 preceding siblings ...)
  2018-11-28 19:36 ` [RFC v3 19/19] of: unittest: split up some super large test cases brendanhiggins
@ 2018-12-04 10:52 ` frowand.list
  2018-12-04 10:52   ` Frank Rowand
  2018-12-04 11:40 ` frowand.list
  21 siblings, 1 reply; 232+ messages in thread
From: frowand.list @ 2018-12-04 10:52 UTC (permalink / raw)


On 11/28/18 11:36 AM, Brendan Higgins wrote:
> This patch set proposes KUnit, a lightweight unit testing and mocking
> framework for the Linux kernel.
> 
> Unlike Autotest and kselftest, KUnit is a true unit testing framework;
> it does not require installing the kernel on a test machine or in a VM
> and does not require tests to be written in userspace running on a host
> kernel. Additionally, KUnit is fast: From invocation to completion KUnit
> can run several dozen tests in under a second. Currently, the entire
> KUnit test suite for KUnit runs in under a second from the initial
> invocation (build time excluded).
> 
> KUnit is heavily inspired by JUnit, Python's unittest.mock, and
> Googletest/Googlemock for C++. KUnit provides facilities for defining
> unit test cases, grouping related test cases into test suites, providing
> common infrastructure for running tests, mocking, spying, and much more.
> 
> ## What's so special about unit testing?
> 


> A unit test is supposed to test a single unit of code in isolation,
> hence the name. There should be no dependencies outside the control of
> the test; this means no external dependencies, which makes tests orders
> of magnitudes faster. Likewise, since there are no external dependencies,
> there are no hoops to jump through to run the tests. Additionally, this

This question might be a misunderstanding of the intent of some of the
terminology in the above paragraph, so this is mostly a request for
clarification.

With my pre-conception of what unit tests are, I read "test a single unit
of code" to mean a relatively narrow piece of a subsystem.  So if I
understand correctly, taking examples from patch 17 "of: unittest:
migrate tests to run on KUnit", each function call like
KUNIT_ASSERT_NOT_ERR_OR_NULL(), KUNIT_EXPECT_STREQ_MSG(), and
KUNIT_EXPECT_EQ_MSG() are each a separate unit test, and thus the
paragraph says that each of these function calls should have no
dependencies outside the test.  Do I understand that correctly?

< snip >

-Frank

^ permalink raw reply	[flat|nested] 232+ messages in thread

* [RFC v3 17/19] of: unittest: migrate tests to run on KUnit
  2018-11-28 19:36 ` [RFC v3 17/19] of: unittest: migrate tests to run on KUnit brendanhiggins
  2018-11-28 19:36   ` Brendan Higgins
       [not found]   ` <CAL_Jsq+09Kx7yMBC_Jw45QGmk6U_fp4N6HOZDwYrM4tWw+_dOA@mail.gmail.com>
@ 2018-12-04 10:56   ` frowand.list
  2018-12-04 10:56     ` Frank Rowand
  2 siblings, 1 reply; 232+ messages in thread
From: frowand.list @ 2018-12-04 10:56 UTC (permalink / raw)


On 11/28/18 11:36 AM, Brendan Higgins wrote:
> Migrate tests without any cleanup, or modifying test logic in any way, to
> run under KUnit using the KUnit expectation and assertion API.
> 
> Signed-off-by: Brendan Higgins <brendanhiggins at google.com>
> ---
>  drivers/of/Kconfig    |    1 +
>  drivers/of/unittest.c | 1405 ++++++++++++++++++++++-------------------
>  2 files changed, 752 insertions(+), 654 deletions(-)

< snip >

I am travelling and will not have an opportunity to properly review this
patch, patch 18, or patch 19 until next week.

-Frank

^ permalink raw reply	[flat|nested] 232+ messages in thread

* [RFC v3 18/19] of: unittest: split out a couple of test cases from unittest
  2018-11-28 19:36 ` [RFC v3 18/19] of: unittest: split out a couple of test cases from unittest brendanhiggins
  2018-11-28 19:36   ` Brendan Higgins
@ 2018-12-04 10:58   ` frowand.list
  2018-12-04 10:58     ` Frank Rowand
  2018-12-05 23:54     ` brendanhiggins
  1 sibling, 2 replies; 232+ messages in thread
From: frowand.list @ 2018-12-04 10:58 UTC (permalink / raw)


Hi Brendan,

On 11/28/18 11:36 AM, Brendan Higgins wrote:
> Split out a couple of test cases that test features in base.c from the
> unittest.c monolith. The intention is that we will eventually split out
> all test cases and group them together based on what portion of device
> tree they test.

Why does splitting this file apart improve the implementation?


> 
> Signed-off-by: Brendan Higgins <brendanhiggins at google.com>
> ---
>  drivers/of/Makefile      |   2 +-
>  drivers/of/base-test.c   | 214 ++++++++++++++++++++++++++
>  drivers/of/test-common.c | 149 ++++++++++++++++++
>  drivers/of/test-common.h |  16 ++
>  drivers/of/unittest.c    | 316 +--------------------------------------
>  5 files changed, 381 insertions(+), 316 deletions(-)
>  create mode 100644 drivers/of/base-test.c
>  create mode 100644 drivers/of/test-common.c
>  create mode 100644 drivers/of/test-common.h
>
< snip >

-Frank

^ permalink raw reply	[flat|nested] 232+ messages in thread

* [RFC v3 00/19] kunit: introduce KUnit, the Linux kernel unit testing framework
  2018-11-28 19:36 [RFC v3 00/19] kunit: introduce KUnit, the Linux kernel unit testing framework brendanhiggins
                   ` (20 preceding siblings ...)
  2018-12-04 10:52 ` [RFC v3 00/19] kunit: introduce KUnit, the Linux kernel unit testing framework frowand.list
@ 2018-12-04 11:40 ` frowand.list
  2018-12-04 11:40   ` Frank Rowand
  2018-12-04 13:49   ` robh
  21 siblings, 2 replies; 232+ messages in thread
From: frowand.list @ 2018-12-04 11:40 UTC (permalink / raw)


Hi Brendan, Rob,

On 11/28/18 11:36 AM, Brendan Higgins wrote:
> This patch set proposes KUnit, a lightweight unit testing and mocking
> framework for the Linux kernel.
> 
> Unlike Autotest and kselftest, KUnit is a true unit testing framework;
> it does not require installing the kernel on a test machine or in a VM
> and does not require tests to be written in userspace running on a host
> kernel. Additionally, KUnit is fast: From invocation to completion KUnit
> can run several dozen tests in under a second. Currently, the entire
> KUnit test suite for KUnit runs in under a second from the initial
> invocation (build time excluded).
> 
> KUnit is heavily inspired by JUnit, Python's unittest.mock, and
> Googletest/Googlemock for C++. KUnit provides facilities for defining
> unit test cases, grouping related test cases into test suites, providing
> common infrastructure for running tests, mocking, spying, and much more.
> 
> ## What's so special about unit testing?
> 
> A unit test is supposed to test a single unit of code in isolation,
> hence the name. There should be no dependencies outside the control of
> the test; this means no external dependencies, which makes tests orders
> of magnitudes faster. Likewise, since there are no external dependencies,
> there are no hoops to jump through to run the tests. Additionally, this
> makes unit tests deterministic: a failing unit test always indicates a
> problem. Finally, because unit tests necessarily have finer granularity,
> they are able to test all code paths easily solving the classic problem
> of difficulty in exercising error handling code.
> 
> ## Is KUnit trying to replace other testing frameworks for the kernel?
> 
> No. Most existing tests for the Linux kernel are end-to-end tests, which
> have their place. A well tested system has lots of unit tests, a
> reasonable number of integration tests, and some end-to-end tests. KUnit
> is just trying to address the unit test space which is currently not
> being addressed.
> 
> ## More information on KUnit
> 
> There is a bunch of documentation near the end of this patch set that
> describes how to use KUnit and best practices for writing unit tests.
> For convenience I am hosting the compiled docs here:
> https://google.github.io/kunit-docs/third_party/kernel/docs/
> Additionally for convenience, I have applied these patches to a branch:
> https://kunit.googlesource.com/linux/+/kunit/rfc/4.19/v3
> The repo may be cloned with:
> git clone https://kunit.googlesource.com/linux
> This patchset is on the kunit/rfc/4.19/v3 branch.
> 
> ## Changes Since Last Version
> 
>  - Changed namespace prefix from `test_*` to `kunit_*` as requested by
>    Shuah.


>  - Started converting/cleaning up the device tree unittest to use KUnit.
>  - Started adding KUnit expectations with custom messages.
> 

Sorry I missed your reply to me in the v1 patch thread.  I've been
traveling a lot the last few weeks.  I'm starting to read messages
that occurred late in the v1 patch thread and the v2 patch thread,
so I'm just coming up to speed on this.

My comments below are motivated by adding the devicetree unittest to
this version of the patch series.

Pulling a comment from way back in the v1 patch thread:

On 10/17/18 3:22 PM, Brendan Higgins wrote:
> On Wed, Oct 17, 2018 at 10:49 AM <Tim.Bird at sony.com> wrote:

< snip >

> The test and the code under test are linked together in the same
> binary and are compiled under Kbuild. Right now I am linking
> everything into a UML kernel, but I would ultimately like to make
> tests compile into completely independent test binaries. So each test
> file would get compiled into its own test binary and would link
> against only the code needed to run the test, but we are a bit of a
> ways off from that.

I have never used UML, so you should expect naive questions from me,
exhibiting my lack of understanding.

Does this mean that I have to build a UML architecture kernel to run
the KUnit tests?

*** Rob, if the answer is yes, then it seems like for my workflow,
which is to build for real ARM hardware, my work is doubled (or
worse), because for every patch/commit that I apply, I not only have
to build the ARM kernel and boot on the real hardware to test, I also
have to build the UML kernel and boot in UML.  If that is correct
then I see this as a major problem for me.

Brendan, in the above quote you said that in the future you would
like to make the "tests compile into completely independent test
binaries".  I am assuming those are intended to run as standalone
user space programs instead of inside UML.  Is that correct?  If
so, how will KUnit tests be able to test code that uses locking
mechanisms that require instructions that are not available to
user space execution?  (I _think_ that such instructions may be
present, depending on which locking mechanism, but I might be
mistaken.)

Another possible concern that I have for removing the devicetree
unit tests from my normal kernel build process is that I think
that the ability to use sparse to analyze the source in the
unit tests is removed.  Please correct me if I misunderstand
that.

Another issue is that the devicetree unit tests will no longer
be cross compiled with my ARM compiler, so I lose a small
amount of testing for compiler related issues.

Overall, I'm still trying to learn enough to determine whether
the gains from moving to KUnit outweigh the losses.

-Frank

^ permalink raw reply	[flat|nested] 232+ messages in thread

* [RFC v3 00/19] kunit: introduce KUnit, the Linux kernel unit testing framework
  2018-12-04 11:40 ` frowand.list
@ 2018-12-04 11:40   ` Frank Rowand
  2018-12-04 13:49   ` robh
  1 sibling, 0 replies; 232+ messages in thread
From: Frank Rowand @ 2018-12-04 11:40 UTC (permalink / raw)


Hi Brendan, Rob,

On 11/28/18 11:36 AM, Brendan Higgins wrote:
> This patch set proposes KUnit, a lightweight unit testing and mocking
> framework for the Linux kernel.
> 
> Unlike Autotest and kselftest, KUnit is a true unit testing framework;
> it does not require installing the kernel on a test machine or in a VM
> and does not require tests to be written in userspace running on a host
> kernel. Additionally, KUnit is fast: From invocation to completion KUnit
> can run several dozen tests in under a second. Currently, the entire
> KUnit test suite for KUnit runs in under a second from the initial
> invocation (build time excluded).
> 
> KUnit is heavily inspired by JUnit, Python's unittest.mock, and
> Googletest/Googlemock for C++. KUnit provides facilities for defining
> unit test cases, grouping related test cases into test suites, providing
> common infrastructure for running tests, mocking, spying, and much more.
> 
> ## What's so special about unit testing?
> 
> A unit test is supposed to test a single unit of code in isolation,
> hence the name. There should be no dependencies outside the control of
> the test; this means no external dependencies, which makes tests orders
> of magnitudes faster. Likewise, since there are no external dependencies,
> there are no hoops to jump through to run the tests. Additionally, this
> makes unit tests deterministic: a failing unit test always indicates a
> problem. Finally, because unit tests necessarily have finer granularity,
> they are able to test all code paths easily solving the classic problem
> of difficulty in exercising error handling code.
> 
> ## Is KUnit trying to replace other testing frameworks for the kernel?
> 
> No. Most existing tests for the Linux kernel are end-to-end tests, which
> have their place. A well tested system has lots of unit tests, a
> reasonable number of integration tests, and some end-to-end tests. KUnit
> is just trying to address the unit test space which is currently not
> being addressed.
> 
> ## More information on KUnit
> 
> There is a bunch of documentation near the end of this patch set that
> describes how to use KUnit and best practices for writing unit tests.
> For convenience I am hosting the compiled docs here:
> https://google.github.io/kunit-docs/third_party/kernel/docs/
> Additionally for convenience, I have applied these patches to a branch:
> https://kunit.googlesource.com/linux/+/kunit/rfc/4.19/v3
> The repo may be cloned with:
> git clone https://kunit.googlesource.com/linux
> This patchset is on the kunit/rfc/4.19/v3 branch.
> 
> ## Changes Since Last Version
> 
>  - Changed namespace prefix from `test_*` to `kunit_*` as requested by
>    Shuah.


>  - Started converting/cleaning up the device tree unittest to use KUnit.
>  - Started adding KUnit expectations with custom messages.
> 

Sorry I missed your reply to me in the v1 patch thread.  I've been
traveling a lot the last few weeks.  I'm starting to read messages
that occurred late in the v1 patch thread and the v2 patch thread,
so I'm just coming up to speed on this.

My comments below are motivated by adding the devicetree unittest to
this version of the patch series.

Pulling a comment from way back in the v1 patch thread:

On 10/17/18 3:22 PM, Brendan Higgins wrote:
> On Wed, Oct 17, 2018@10:49 AM <Tim.Bird@sony.com> wrote:

< snip >

> The test and the code under test are linked together in the same
> binary and are compiled under Kbuild. Right now I am linking
> everything into a UML kernel, but I would ultimately like to make
> tests compile into completely independent test binaries. So each test
> file would get compiled into its own test binary and would link
> against only the code needed to run the test, but we are a bit of a
> ways off from that.

I have never used UML, so you should expect naive questions from me,
exhibiting my lack of understanding.

Does this mean that I have to build a UML architecture kernel to run
the KUnit tests?

*** Rob, if the answer is yes, then it seems like for my workflow,
which is to build for real ARM hardware, my work is doubled (or
worse), because for every patch/commit that I apply, I not only have
to build the ARM kernel and boot on the real hardware to test, I also
have to build the UML kernel and boot in UML.  If that is correct
then I see this as a major problem for me.

Brendan, in the above quote you said that in the future you would
like to make the "tests compile into completely independent test
binaries".  I am assuming those are intended to run as standalone
user space programs instead of inside UML.  Is that correct?  If
so, how will KUnit tests be able to test code that uses locking
mechanisms that require instructions that are not available to
user space execution?  (I _think_ that such instructions may be
present, depending on which locking mechanism, but I might be
mistaken.)

Another possible concern that I have for removing the devicetree
unit tests from my normal kernel build process is that I think
that the ability to use sparse to analyze the source in the
unit tests is removed.  Please correct me if I misunderstand
that.

Another issue is that the devicetree unit tests will no longer
be cross compiled with my ARM compiler, so I lose a small
amount of testing for compiler related issues.

Overall, I'm still trying to learn enough to determine whether
the gains from moving to KUnit outweigh the losses.

-Frank

^ permalink raw reply	[flat|nested] 232+ messages in thread

* [RFC v3 17/19] of: unittest: migrate tests to run on KUnit
  2018-12-04  0:13       ` brendanhiggins
  2018-12-04  0:13         ` Brendan Higgins
@ 2018-12-04 13:40         ` robh
  2018-12-04 13:40           ` Rob Herring
  2018-12-05 23:42           ` brendanhiggins
  1 sibling, 2 replies; 232+ messages in thread
From: robh @ 2018-12-04 13:40 UTC (permalink / raw)


On Mon, Dec 3, 2018 at 6:14 PM Brendan Higgins
<brendanhiggins at google.com> wrote:
>
> On Thu, Nov 29, 2018 at 4:40 PM Randy Dunlap <rdunlap at infradead.org> wrote:
> >
> > On 11/28/18 12:56 PM, Rob Herring wrote:
> > >> diff --git a/drivers/of/Kconfig b/drivers/of/Kconfig
> > >> index ad3fcad4d75b8..f309399deac20 100644
> > >> --- a/drivers/of/Kconfig
> > >> +++ b/drivers/of/Kconfig
> > >> @@ -15,6 +15,7 @@ if OF
> > >>  config OF_UNITTEST
> > >>         bool "Device Tree runtime unit tests"
> > >>         depends on !SPARC
> > >> +       depends on KUNIT
> > > Unless KUNIT has depends, better to be a select here.
> >
> > That's just style or taste.  I would prefer to use depends
> > instead of select, but that's also just my preference.
>
> I prefer depends too, but Rob is the maintainer here.

Well, we should be consistent, not follow the whims of each maintainer.

Rob
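
For reference, a minimal sketch of the two Kconfig styles under
discussion, based on the OF_UNITTEST entry quoted above (illustrative
only, not the actual patch):

  # 'depends on' style: OF_UNITTEST stays hidden until KUNIT is enabled.
  config OF_UNITTEST
          bool "Device Tree runtime unit tests"
          depends on !SPARC
          depends on KUNIT

  # 'select' style: enabling OF_UNITTEST forces KUNIT on; only safe as
  # long as KUNIT itself has no unmet dependencies.
  config OF_UNITTEST
          bool "Device Tree runtime unit tests"
          depends on !SPARC
          select KUNIT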

^ permalink raw reply	[flat|nested] 232+ messages in thread

* [RFC v3 00/19] kunit: introduce KUnit, the Linux kernel unit testing framework
  2018-12-04 11:40 ` frowand.list
  2018-12-04 11:40   ` Frank Rowand
@ 2018-12-04 13:49   ` robh
  2018-12-04 13:49     ` Rob Herring
  2018-12-05 23:10     ` brendanhiggins
  1 sibling, 2 replies; 232+ messages in thread
From: robh @ 2018-12-04 13:49 UTC (permalink / raw)


On Tue, Dec 4, 2018 at 5:40 AM Frank Rowand <frowand.list at gmail.com> wrote:
>
> Hi Brendan, Rob,
>
> On 11/28/18 11:36 AM, Brendan Higgins wrote:
> > This patch set proposes KUnit, a lightweight unit testing and mocking
> > framework for the Linux kernel.
> >
> > Unlike Autotest and kselftest, KUnit is a true unit testing framework;
> > it does not require installing the kernel on a test machine or in a VM
> > and does not require tests to be written in userspace running on a host
> > kernel. Additionally, KUnit is fast: From invocation to completion KUnit
> > can run several dozen tests in under a second. Currently, the entire
> > KUnit test suite for KUnit runs in under a second from the initial
> > invocation (build time excluded).
> >
> > KUnit is heavily inspired by JUnit, Python's unittest.mock, and
> > Googletest/Googlemock for C++. KUnit provides facilities for defining
> > unit test cases, grouping related test cases into test suites, providing
> > common infrastructure for running tests, mocking, spying, and much more.
> >
> > ## What's so special about unit testing?
> >
> > A unit test is supposed to test a single unit of code in isolation,
> > hence the name. There should be no dependencies outside the control of
> > the test; this means no external dependencies, which makes tests orders
> > of magnitudes faster. Likewise, since there are no external dependencies,
> > there are no hoops to jump through to run the tests. Additionally, this
> > makes unit tests deterministic: a failing unit test always indicates a
> > problem. Finally, because unit tests necessarily have finer granularity,
> > they are able to test all code paths easily solving the classic problem
> > of difficulty in exercising error handling code.
> >
> > ## Is KUnit trying to replace other testing frameworks for the kernel?
> >
> > No. Most existing tests for the Linux kernel are end-to-end tests, which
> > have their place. A well tested system has lots of unit tests, a
> > reasonable number of integration tests, and some end-to-end tests. KUnit
> > is just trying to address the unit test space which is currently not
> > being addressed.
> >
> > ## More information on KUnit
> >
> > There is a bunch of documentation near the end of this patch set that
> > describes how to use KUnit and best practices for writing unit tests.
> > For convenience I am hosting the compiled docs here:
> > https://google.github.io/kunit-docs/third_party/kernel/docs/
> > Additionally for convenience, I have applied these patches to a branch:
> > https://kunit.googlesource.com/linux/+/kunit/rfc/4.19/v3
> > The repo may be cloned with:
> > git clone https://kunit.googlesource.com/linux
> > This patchset is on the kunit/rfc/4.19/v3 branch.
> >
> > ## Changes Since Last Version
> >
> >  - Changed namespace prefix from `test_*` to `kunit_*` as requested by
> >    Shuah.
>
>
> >  - Started converting/cleaning up the device tree unittest to use KUnit.
> >  - Started adding KUnit expectations with custom messages.
> >
>
> Sorry I missed your reply to me in the v1 patch thread.  I've been
> traveling a lot the last few weeks.  I'm starting to read messages
> that occurred late in the v1 patch thread and the v2 patch thread,
> so I'm just coming up to speed on this.
>
> My comments below are motivated by adding the devicetree unittest to
> this version of the patch series.
>
> Pulling a comment from way back in the v1 patch thread:
>
> On 10/17/18 3:22 PM, Brendan Higgins wrote:
> > On Wed, Oct 17, 2018 at 10:49 AM <Tim.Bird at sony.com> wrote:
>
> < snip >
>
> > The test and the code under test are linked together in the same
> > binary and are compiled under Kbuild. Right now I am linking
> > everything into a UML kernel, but I would ultimately like to make
> > tests compile into completely independent test binaries. So each test
> > file would get compiled into its own test binary and would link
> > against only the code needed to run the test, but we are a bit of a
> > ways off from that.
>
> I have never used UML, so you should expect naive questions from me,
> exhibiting my lack of understanding.
>
> Does this mean that I have to build a UML architecture kernel to run
> the KUnit tests?

In this version of the patch series, yes.

> *** Rob, if the answer is yes, then it seems like for my workflow,
> which is to build for real ARM hardware, my work is doubled (or
> worse), because for every patch/commit that I apply, I not only have
> to build the ARM kernel and boot on the real hardware to test, I also
> have to build the UML kernel and boot in UML.  If that is correct
> then I see this as a major problem for me.

I've already raised this issue elsewhere in the series. Restricting
the DT tests to UML is a non-starter.

> Brendan, in the above quote you said that in the future you would
> like to make the "tests compile into completely independent test
> binaries".  I am assuming those are intended to run as standalone
> user space programs instead of inside UML.  Is that correct?  If
> so, how will KUnit tests be able to test code that uses locking
> mechanisms that require instructions that are not available to
> user space execution?  (I _think_ that such instructions may be
> present, depending on which locking mechanism, but I might be
> mistaken.)

I think he means as kernel modules, since kunit is for testing internal
kernel interfaces; kselftest is for userspace-level tests.

If this were true about locking, then UML itself would not be viable.

> Another possible concern that I have for removing the devicetree
> unit tests from my normal kernel build process is that I think
> that the ability to use sparse to analyze the source in the
> unit tests is removed.  Please correct me if I misunderstand
> that.
>
> Another issue is that the devicetree unit tests will no longer
> be cross compiled with my ARM compiler, so I lose a small
> amount of testing for compiler related issues.

0-day does that for you. :)

> Overall, I'm still trying to learn enough to determine whether
> the gains from moving to KUnit outweigh the losses.
>
> -Frank

^ permalink raw reply	[flat|nested] 232+ messages in thread

* [RFC v3 11/19] kunit: add Python libraries for handing KUnit config and kernel
  2018-12-03 23:48     ` brendanhiggins
  2018-12-03 23:48       ` Brendan Higgins
@ 2018-12-04 20:47       ` mcgrof
  2018-12-04 20:47         ` Luis Chamberlain
  2018-12-06 12:32         ` kieran.bingham
  1 sibling, 2 replies; 232+ messages in thread
From: mcgrof @ 2018-12-04 20:47 UTC (permalink / raw)


On Mon, Dec 03, 2018 at 03:48:15PM -0800, Brendan Higgins wrote:
> On Thu, Nov 29, 2018 at 5:54 AM Kieran Bingham
> <kieran.bingham at ideasonboard.com> wrote:
> >
> > Hi Brendan,
> >
> > Thanks again for this series!
> >
> > On 28/11/2018 19:36, Brendan Higgins wrote:
> > > The ultimate goal is to create minimal isolated test binaries; in the
> > > meantime we are using UML to provide the infrastructure to run tests, so
> > > define an abstract way to configure and run tests that allow us to
> > > change the context in which tests are built without affecting the user.
> > > This also makes pretty and dynamic error reporting, and a lot of other
> > > nice features easier.
> >
> >
> > I wonder if we could somehow generate a shared library object
> > 'libkernel' or 'libumlinux' from a UM configured set of headers and
> > objects so that we could create binary targets directly ?
> 
> That's an interesting idea. I think it would be difficult to figure
> out exactly where to draw the line of what goes in there and what
> needs to be built specific to a test a priori. Of course, that leads
> into the biggest problem in general, needed to know what I need to
> build to test the thing that I want to test.
> 
> Nevertheless, I could definitely imagine that being useful in a lot of cases.

Whether or not we can abstract away the kernel into such a mechanism
with uml libraries is a good question worth exploring.

Developers working upstream do modify their kernels a lot, so we'd have
to update such libraries quite a bit, but I think that's fine too. The
*real* value of the above suggestion, I think, would be for enterprise /
mobile distros or stable kernel maintainers who have a static kernel
they need to support for a relatively *long time*, say a 10 year time
frame. Running unit tests without qemu, with uml and libraries for the
respective kernels, seems really worthwhile.

The overhead for running a unit test on said targets, *ideally*, would
just be to reboot into the system with such libraries available; a
unit test would just look for the library matching uname -r and mimic
that kernel, much the same way enterprise distributions today rely on
having debugging symbols available to run against crash / gdb. Having
debug modules / kernels for crash requires such effort already, so this
would just be an extra layer of prospective tests.

All ideaware for now, but the roadmap seems to be paving itself.

  Luis
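
One possible shape of that workflow, purely as a sketch (every path,
file, and library name below is hypothetical):

  $ # link a unit test against a per-kernel 'libumlinux' picked by uname -r
  $ gcc -o fs_test fs_test.c -L/usr/lib/umlinux/$(uname -r) -lumlinux
  $ ./fs_test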

^ permalink raw reply	[flat|nested] 232+ messages in thread

* [RFC v3 11/19] kunit: add Python libraries for handing KUnit config and kernel
  2018-12-03 23:50     ` brendanhiggins
  2018-12-03 23:50       ` Brendan Higgins
@ 2018-12-04 20:48       ` mcgrof
  2018-12-04 20:48         ` Luis Chamberlain
  1 sibling, 1 reply; 232+ messages in thread
From: mcgrof @ 2018-12-04 20:48 UTC (permalink / raw)


On Mon, Dec 03, 2018 at 03:50:48PM -0800, Brendan Higgins wrote:
> On Thu, Nov 29, 2018 at 7:44 PM Luis Chamberlain <mcgrof at kernel.org> wrote:
> >
> > On Wed, Nov 28, 2018 at 11:36:28AM -0800, Brendan Higgins wrote:
> > > The ultimate goal is to create minimal isolated test binaries; in the
> > > meantime we are using UML to provide the infrastructure to run tests, so
> > > define an abstract way to configure and run tests that allow us to
> > > change the context in which tests are built without affecting the user.
> > > This also makes pretty and dynamic error reporting, and a lot of other
> > > nice features easier.
> > >
> > > kunit_config.py:
> > >   - parse .config and Kconfig files.
> > >
> > >
> > > kunit_kernel.py: provides helper functions to:
> > >   - configure the kernel using kunitconfig.
> >
> > We get the tools to run the config stuff, build, etc, but not a top
> > level 'make kunitconfig' or whatever. We have things like 'make
> > kvmconfig' and 'make xenconfig', I think it would be reasonable to
> > add similar for this.
> 
> Are you just asking for a defconfig for KUnit, or are you asking for a
> way to run KUnit from make?

At least the first. The latter seems intrusive as a top-level Makefile
thing.

  Luis
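
For comparison, 'make kvmconfig' applies kernel/configs/kvm_guest.config
through scripts/kconfig/merge_config.sh, so a 'make kunitconfig' target
could presumably do the same with a KUnit fragment (the fragment path
below is an assumption, not part of this series):

  $ make ARCH=um defconfig
  $ scripts/kconfig/merge_config.sh -m .config kernel/configs/kunit.config
  $ make ARCH=um olddefconfig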

^ permalink raw reply	[flat|nested] 232+ messages in thread

* [RFC v3 01/19] kunit: test: add KUnit test runner core
  2018-11-30  3:14   ` mcgrof
  2018-11-30  3:14     ` Luis Chamberlain
  2018-12-01  1:51     ` brendanhiggins
@ 2018-12-05 13:15     ` anton.ivanov
  2018-12-05 13:15       ` Anton Ivanov
  2018-12-05 14:45       ` arnd
  2 siblings, 2 replies; 232+ messages in thread
From: anton.ivanov @ 2018-12-05 13:15 UTC (permalink / raw)


On 30/11/2018 03:14, Luis Chamberlain wrote:
> On Wed, Nov 28, 2018 at 11:36:18AM -0800, Brendan Higgins wrote:
>> +#define module_test(module) \
>> +		static int module_kunit_init##module(void) \
>> +		{ \
>> +			return kunit_run_tests(&module); \
>> +		} \
>> +		late_initcall(module_kunit_init##module)
> Herein lies an assumption that suffices. I'm inclined to believe we
> need a new initcall level here to ensure we *do* run after all the
> respective kernel's init calls. Otherwise we're left at the whims of
> link order for kunit. For instance, if a kunit test relies on frameworks
> which are also late_initcall() we'd have complete incompatibility with
> anything linked *after* kunit.
>
>> diff --git a/kunit/Kconfig b/kunit/Kconfig
>> new file mode 100644
>> index 0000000000000..49b44c4f6630a
>> --- /dev/null
>> +++ b/kunit/Kconfig
>> @@ -0,0 +1,17 @@
>> +#
>> +# KUnit base configuration
>> +#
>> +
>> +menu "KUnit support"
>> +
>> +config KUNIT
>> +	bool "Enable support for unit tests (KUnit)"
>> +	depends on UML
> Consider using:
>
> if UML
>     ...
> endif
>
> That allows the depends to be done once.
>
>> +	help
>> +	  Enables support for kernel unit tests (KUnit), a lightweight unit
>> +	  testing and mocking framework for the Linux kernel. These tests are
>> +	  able to be run locally on a developer's workstation without a VM or
>> +	  special hardware.
>
> Some mention of UML may be good here?
>
>> For more information, please see
>> +	  Documentation/kunit/
>> +
>> +endmenu
> I'm a bit conflicted here. This currently depends on UML, yet you
> noted on RFC v2 that your intention is to liberate kunit from UML and
> ideally allow unit tests to depend only on userspace. I've addressed
> tests using both selftest kernel drivers and also re-written kernel
> APIs to userspace to test there. I think we may need to live with both.
>
> Then for the UML stuff, I think if we *really* accept that UML will
> always be a viable option we should probably consider now throwing these
> things under drivers/platform/uml/. This follows the pattern of arch
> specific drivers. Whether or not we end up with a complete userspace

UML platform drivers predate that and are under arch/um/drivers/

We should either keep to the current convention or consider relocating the 
existing ones - having things spread in different places around the tree 
is not good in the long run (UML already has a few of those under the 
x86 tree, let's not increase the number).

> component independent of UML may imply having a shared component
> somewhere else.
>
> Likewise, I realize the goal is to *avoid* using a virtual machine for
> these tests, but would it in any way make sense to share kunit to be
> supported for other architectures to allow easier-to-write tests as
> well?
>
>    Luis
>
> _______________________________________________
> linux-um mailing list
> linux-um at lists.infradead.org
> http://lists.infradead.org/mailman/listinfo/linux-um
>

-- 
Anton R. Ivanov
Cambridgegreys Limited. Registered in England. Company Number 10273661
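
A small C sketch of the link-order hazard Luis describes above
(hypothetical code, not from the series): initcalls at the same level
run in the order they are linked, so a test registered with
late_initcall() can run before a framework it depends on.

  #include <linux/init.h>

  /* some framework a unit test might rely on */
  static int framework_init(void)
  {
          return 0;
  }
  late_initcall(framework_init);

  /* runs before framework_init() if this object is linked first */
  static int example_kunit_init(void)
  {
          return 0;
  }
  late_initcall(example_kunit_init);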

^ permalink raw reply	[flat|nested] 232+ messages in thread

* [RFC v3 01/19] kunit: test: add KUnit test runner core
  2018-12-05 13:15     ` anton.ivanov
  2018-12-05 13:15       ` Anton Ivanov
@ 2018-12-05 14:45       ` arnd
  2018-12-05 14:45         ` Arnd Bergmann
  2018-12-05 14:49         ` anton.ivanov
  1 sibling, 2 replies; 232+ messages in thread
From: arnd @ 2018-12-05 14:45 UTC (permalink / raw)


On Wed, Dec 5, 2018 at 2:42 PM Anton Ivanov
<anton.ivanov at cambridgegreys.com> wrote:
> On 30/11/2018 03:14, Luis Chamberlain wrote:
> > On Wed, Nov 28, 2018 at 11:36:18AM -0800, Brendan Higgins wrote:
> > Then for the UML stuff, I think if we *really* accept that UML will
> > always be a viable option we should probably consider now throwing these
> > things under drivers/platform/uml/. This follows the pattern of arch
> > specific drivers. Whether or not we end up with a complete userspace
>
> UML platform drivers predate that and are under arch/um/drivers/
>
> We should either keep to current convention or consider relocating the
> existing ones - having things spread in different places around the tree
> is not good in the long run (UML already has a few of those under the
> x86 tree, let's not increase the number).

I don't mind the current location much, but if we move drivers, we should
move them into the appropriate subsystems based on what they do, rather
than having a new place with a mix of things.

E.g. the tty drivers should all be in drivers/tty/ and the network drivers in
drivers/net. To paraphrase what you said above: having tty drivers spread in
different places around the tree is not good in the long run. We have long
ago moved from organizing drivers by bus interface to organizing drivers
by class, uml and drivers/platform are just exceptions to this rule.

          Arnd

^ permalink raw reply	[flat|nested] 232+ messages in thread

* [RFC v3 01/19] kunit: test: add KUnit test runner core
  2018-12-05 14:45       ` arnd
  2018-12-05 14:45         ` Arnd Bergmann
@ 2018-12-05 14:49         ` anton.ivanov
  2018-12-05 14:49           ` Anton Ivanov
  1 sibling, 1 reply; 232+ messages in thread
From: anton.ivanov @ 2018-12-05 14:49 UTC (permalink / raw)


On 05/12/2018 14:45, Arnd Bergmann wrote:
> On Wed, Dec 5, 2018 at 2:42 PM Anton Ivanov
> <anton.ivanov at cambridgegreys.com> wrote:
>> On 30/11/2018 03:14, Luis Chamberlain wrote:
>>> On Wed, Nov 28, 2018 at 11:36:18AM -0800, Brendan Higgins wrote:
>>> Then for the UML stuff, I think if we *really* accept that UML will
>>> always be a viable option we should probably consider now throwing these
>>> things under drivers/platform/uml/. This follows the pattern of arch
>>> specific drivers. Whether or not we end up with a complete userspace
>> UML platform drivers predate that and are under arch/um/drivers/
>>
>> We should either keep to current convention or consider relocating the
>> existing ones - having things spread in different places around the tree
>> is not good in the long run (UML already has a few of those under the
>> x86 tree, let's not increase the number).
> I don't mind the current location much, but if we move drivers, we should
> move the into the appropriate subsystems based on what they do, rather
> than having a new place with a mix of things.
>
> E.g. the tty drivers should all be in drivers/tty/ and the network drivers in
> drivers/net. To paraphrase what you said above: having tty drivers spread in
> different places around the tree is not good in the long run. We have long
> ago moved from organizing drivers by bus interface to organizing drivers
> by class, uml and drivers/platform are just exceptions to this rule.

There are some issues with that because uml drivers have bits of what is 
effectively the host side of the hypervisor as a part of them. IMHO, having 
that in drivers/X is not very appropriate. So at least the *_user.c and 
*_user.h bits have to go (or stay) somewhere else.

Brgds,

-- 
Anton R. Ivanov
Cambridgegreys Limited. Registered in England. Company Number 10273661

^ permalink raw reply	[flat|nested] 232+ messages in thread

* [RFC v3 00/19] kunit: introduce KUnit, the Linux kernel unit testing framework
  2018-12-04 13:49   ` robh
  2018-12-04 13:49     ` Rob Herring
@ 2018-12-05 23:10     ` brendanhiggins
  2018-12-05 23:10       ` Brendan Higgins
  2019-03-22  0:27       ` frowand.list
  1 sibling, 2 replies; 232+ messages in thread
From: brendanhiggins @ 2018-12-05 23:10 UTC (permalink / raw)


On Tue, Dec 4, 2018 at 5:49 AM Rob Herring <robh at kernel.org> wrote:
>
> On Tue, Dec 4, 2018 at 5:40 AM Frank Rowand <frowand.list at gmail.com> wrote:
> >
> > Hi Brendan, Rob,
> >
> > Pulling a comment from way back in the v1 patch thread:
> >
> > On 10/17/18 3:22 PM, Brendan Higgins wrote:
> > > On Wed, Oct 17, 2018 at 10:49 AM <Tim.Bird at sony.com> wrote:
> >
> > < snip >
> >
> > > The test and the code under test are linked together in the same
> > > binary and are compiled under Kbuild. Right now I am linking
> > > everything into a UML kernel, but I would ultimately like to make
> > > tests compile into completely independent test binaries. So each test
> > > file would get compiled into its own test binary and would link
> > > against only the code needed to run the test, but we are a bit of a
> > > ways off from that.
> >
> > I have never used UML, so you should expect naive questions from me,
> > exhibiting my lack of understanding.
> >
> > Does this mean that I have to build a UML architecture kernel to run
> > the KUnit tests?
>
> In this version of the patch series, yes.
>
> > *** Rob, if the answer is yes, then it seems like for my workflow,
> > which is to build for real ARM hardware, my work is doubled (or
> > worse), because for every patch/commit that I apply, I not only have
> > to build the ARM kernel and boot on the real hardware to test, I also
> > have to build the UML kernel and boot in UML.  If that is correct
> > then I see this as a major problem for me.
>
> I've already raised this issue elsewhere in the series. Restricting
> the DT tests to UML is a non-starter.

I have already stated my position elsewhere on the matter, but in
summary: Ensuring most tests can run without external dependencies
(hardware, VM, etc) has a lot of benefits and should be supported in
nearly all cases, but such tests should also work when compiled to run
on real hardware/VM; the tooling might not be as good in the latter
case, but I understand that there are good reasons to support it
nonetheless.

So I am going to try to add basic support for running tests on other
architectures in the next version or two.

>
> > Brendan, in the above quote you said that in the future you would
> > like to make the "tests compile into completely independent test
> > binaries".  I am assuming those are intended to run as standalone
> > user space programs instead of inside UML.  Is that correct?  If
> > so, how will KUnit tests be able to test code that uses locking
> > mechanisms that require instructions that are not available to
> > user space execution?  (I _think_ that such instructions may be
> > present, depending on which locking mechanism, but I might be
> > mistaken.)
>
> I think he means as kernel modules as kunit is for testing internal
> kernel interfaces. kselftest is userspace level tests.

Frank is right: my long term goal is to make it so unit tests can run
as stand alone user space programs.

>
> If this were true about locking, then UML itself would not be viable.
>
> > Another possible concern that I have for removing the devicetree
> > unit tests from my normal kernel build process is that I think
> > that the ability to use sparse to analyze the source in the
> > unit tests is removed.  Please correct me if I misunderstand
> > that.
> >
> > Another issue is that the devicetree unit tests will no longer
> > be cross compiled with my ARM compiler, so I lose a small
> > amount of testing for compiler related issues.
>
> 0-day does that for you. :)
>
> > Overall, I'm still trying to learn enough to determine whether
> > the gains from moving to KUnit outweigh the losses.

Of course.

From what I have seen so far, the DT unittests seem like a pretty good
use case for KUnit. If you don't mind, what frustrates you most about
the tests you have now?

What are the most common breakages you see?

When do they get caught?

My initial reaction when I looked at the tests was that it seemed like
it would be hard to understand what caused a failure and it seemed
non-obvious where a test for a new feature should go.

To me, the thing that seemed like it needed the most work was
refactoring the tests to make them easier to understand. For example,
when I started breaking the tests apart, I found some cases that I
really had to stare at (or run diff on them) to figure out what they
did differently.

Looking forward to getting your thoughts.

^ permalink raw reply	[flat|nested] 232+ messages in thread

* [RFC v3 17/19] of: unittest: migrate tests to run on KUnit
  2018-12-04 13:40         ` robh
  2018-12-04 13:40           ` Rob Herring
@ 2018-12-05 23:42           ` brendanhiggins
  2018-12-05 23:42             ` Brendan Higgins
  2018-12-07  0:41             ` robh
  1 sibling, 2 replies; 232+ messages in thread
From: brendanhiggins @ 2018-12-05 23:42 UTC (permalink / raw)


On Tue, Dec 4, 2018 at 5:41 AM Rob Herring <robh at kernel.org> wrote:
>
> On Mon, Dec 3, 2018 at 6:14 PM Brendan Higgins
> <brendanhiggins at google.com> wrote:
> >
> > On Thu, Nov 29, 2018 at 4:40 PM Randy Dunlap <rdunlap at infradead.org> wrote:
> > >
> > > On 11/28/18 12:56 PM, Rob Herring wrote:
> > > >> diff --git a/drivers/of/Kconfig b/drivers/of/Kconfig
> > > >> index ad3fcad4d75b8..f309399deac20 100644
> > > >> --- a/drivers/of/Kconfig
> > > >> +++ b/drivers/of/Kconfig
> > > >> @@ -15,6 +15,7 @@ if OF
> > > >>  config OF_UNITTEST
> > > >>         bool "Device Tree runtime unit tests"
> > > >>         depends on !SPARC
> > > >> +       depends on KUNIT
> > > > Unless KUNIT has depends, better to be a select here.
> > >
> > > That's just style or taste.  I would prefer to use depends
> > > instead of select, but that's also just my preference.
> >
> > I prefer depends too, but Rob is the maintainer here.
>
> Well, we should be consistent, not the follow the whims of each maintainer.

Sorry, I don't think that came out the way I meant it. I don't really
think we are consistent on this point across the kernel, and I don't
feel very strongly about the point, so I was just looking to follow
the path of least resistance. (I also just assumed Rob would keep us
consistent within drivers/of/.)

I figure if we are running unit tests from the test runner script or
from an automated system, you won't be hunting for dependencies for a
single test every time you want to run a test, so select doesn't make
it easier to configure in most imagined use cases.

KUNIT hypothetically should not depend on anything, so select should
be safe to use.

On the other hand, if we end up being wrong on this point and KUnit
gains widespread adoption, I would prefer not to be in a position
where I have to change a bunch of configs all over the kernel because
this example got copied and pasted.

^ permalink raw reply	[flat|nested] 232+ messages in thread

* [RFC v3 18/19] of: unittest: split out a couple of test cases from unittest
  2018-12-04 10:58   ` frowand.list
  2018-12-04 10:58     ` Frank Rowand
@ 2018-12-05 23:54     ` brendanhiggins
  2018-12-05 23:54       ` Brendan Higgins
  2019-02-14 23:57       ` frowand.list
  1 sibling, 2 replies; 232+ messages in thread
From: brendanhiggins @ 2018-12-05 23:54 UTC (permalink / raw)


On Tue, Dec 4, 2018 at 2:58 AM Frank Rowand <frowand.list at gmail.com> wrote:
>
> Hi Brendan,
>
> On 11/28/18 11:36 AM, Brendan Higgins wrote:
> > Split out a couple of test cases that test features in base.c from the
> > unittest.c monolith. The intention is that we will eventually split out
> > all test cases and group them together based on what portion of device
> > tree they test.
>
> Why does splitting this file apart improve the implementation?

This is in preparation for patch 19/19 and other hypothetical future
patches where test cases are split up and grouped together by what
portion of DT they test (for example the parsing tests and the
platform/device tests would probably go in separate files as well). This
patch by itself does not do anything useful, but I figured it made
patch 19/19 (and, if you like what I am doing, subsequent patches)
easier to review.

^ permalink raw reply	[flat|nested] 232+ messages in thread

* [RFC v3 14/19] Documentation: kunit: add documentation for KUnit
  2018-12-03 23:53       ` brendanhiggins
  2018-12-03 23:53         ` Brendan Higgins
@ 2018-12-06 12:16         ` kieran.bingham
  2018-12-06 12:16           ` Kieran Bingham
  2019-02-09  0:56           ` brendanhiggins
  1 sibling, 2 replies; 232+ messages in thread
From: kieran.bingham @ 2018-12-06 12:16 UTC (permalink / raw)


Hi Brendan,

On 03/12/2018 23:53, Brendan Higgins wrote:
> On Thu, Nov 29, 2018 at 7:45 PM Luis Chamberlain <mcgrof@kernel.org> wrote:
>>
>> On Thu, Nov 29, 2018 at 01:56:37PM +0000, Kieran Bingham wrote:
>>> Hi Brendan,
>>>
>>> Please excuse the top posting, but I'm replying here as I'm following
>>> the section "Creating a kunitconfig" in Documentation/kunit/start.rst.
>>>
>>> Could the three line kunitconfig file live under say
>>>        arch/um/configs/kunit_defconfig?


On further consideration of this topic - I mentioned putting it in
  arch/um/configs

- but I think this is wrong.

We now have a location for config fragments, which is essentially what
this is, under kernel/configs.

So perhaps an addition such as:

 kernel/configs/kunit.config

would be more appropriate - and less (UM) architecture specific.
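
Something like this, say - going from memory of the three lines in
start.rst, so treat the exact symbols as illustrative:

 # kernel/configs/kunit.config
 CONFIG_KUNIT=y
 CONFIG_KUNIT_TEST=y
 CONFIG_KUNIT_EXAMPLE_TEST=y

Anyone could then pull the fragment in with the existing kbuild
machinery:

 make defconfig kunit.config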



>>>
>>> So that it's always provided? And could even be extended with tests
>>> which people would expect to be run by default? (say in distributions)
>>
>> Indeed, and then a top level 'make kunitconfig' could use it as well.
> 
> Yep, I totally agree.
> 

-- 
Regards
--
Kieran

^ permalink raw reply	[flat|nested] 232+ messages in thread

* [RFC v3 11/19] kunit: add Python libraries for handing KUnit config and kernel
  2018-12-04 20:47       ` mcgrof
  2018-12-04 20:47         ` Luis Chamberlain
@ 2018-12-06 12:32         ` kieran.bingham
  2018-12-06 12:32           ` Kieran Bingham
                             ` (3 more replies)
  1 sibling, 4 replies; 232+ messages in thread
From: kieran.bingham @ 2018-12-06 12:32 UTC (permalink / raw)


Hi Luis,

On 04/12/2018 20:47, Luis Chamberlain wrote:
> On Mon, Dec 03, 2018 at 03:48:15PM -0800, Brendan Higgins wrote:
>> On Thu, Nov 29, 2018 at 5:54 AM Kieran Bingham
>> <kieran.bingham@ideasonboard.com> wrote:
>>>
>>> Hi Brendan,
>>>
>>> Thanks again for this series!
>>>
>>> On 28/11/2018 19:36, Brendan Higgins wrote:
>>>> The ultimate goal is to create minimal isolated test binaries; in the
>>>> meantime we are using UML to provide the infrastructure to run tests, so
>>>> define an abstract way to configure and run tests that allow us to
>>>> change the context in which tests are built without affecting the user.
>>>> This also makes pretty and dynamic error reporting, and a lot of other
>>>> nice features easier.
>>>
>>>
>>> I wonder if we could somehow generate a shared library object
>>> 'libkernel' or 'libumlinux' from a UM configured set of headers and
>>> objects so that we could create binary targets directly ?
>>
>> That's an interesting idea. I think it would be difficult to figure
>> out exactly where to draw the line of what goes in there and what
>> needs to be built specific to a test a priori. Of course, that leads
>> into the biggest problem in general, needing to know what I need to
>> build to test the thing that I want to test.
>>
>> Nevertheless, I could definitely imagine that being useful in a lot of cases.
> 
> Whether or not we can abstract away the kernel into such a mechanism
> with uml libraries is a good question worth exploring.
> 
> Developers working upstream do modify their kernels a lot, so we'd have
> to update such libraries quite a bit, but I think that's fine too. The
> *real* value I think from the above suggestion would be enterprise /
> mobile distros or stable kernel maintainers which have a static kernel
> they need to support for a relatively *long time*, consider a 10 year
> time frame. Running unit tests without qemu with uml and libraries for
> respective kernels seems real worthy.


I think any such library might be something generated by the kernel
build system, so if someone makes substantial changes to a core
component provided by the library - it can be up to them to build a
corresponding userspace library as well.

We could also consider only providing *static* libraries rather than
dynamic ones. So anyone building some userspace tool / test with this
would be required to compile against (the version of) the kernel they
expect perhaps... - much like we expect modules to be compiled currently.

And then the userspace binary would be sufficiently able to live its
life on its own :)
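
Something like this perhaps (entirely hypothetical target and file
names):

 $ make ARCH=um libumlinux.a
 $ cc -Itools/include my_test.c libumlinux.a -o my_test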


> The overhead for testing a unit test for said targets, *ideally*, would
> just be to to reboot into the system with such libraries available, a
> unit test would just look for the respective uname -r library and mimic
> that kernel, much the same way enterprise distributions today rely on
> having debugging symbols available to run against crash / gdb. Having
> debug modules / kernel for crash requires such effort already, so this
> would just be an extra layer of other prospect tests.

Oh - although, yes - there are some good concepts there - but I'm a bit
wary of how easy it would be to 'run' said test against multiple
kernel version libraries... there would be a lot of possible ABI
conflicts perhaps.

My main initial idea for a libumlinux is to provide infrastructure such
as our linked-lists and other kernel formatting so that we can take
kernel code directly to userspace for test and debug (assuming that
there are no hardware dependencies or things that we can't mock out)
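
The lists, for instance, already make that trip today via tools/include;
a sketch of a plain userspace test, assuming -Itools/include is on the
compiler command line:

#include <linux/list.h>	/* the tools/include copy */
#include <stdio.h>

struct item {
	int value;
	struct list_head node;
};

int main(void)
{
	LIST_HEAD(items);
	struct item a = { .value = 1 }, b = { .value = 2 };
	struct item *it;

	list_add_tail(&a.node, &items);
	list_add_tail(&b.node, &items);
	list_for_each_entry(it, &items, node)
		printf("%d\n", it->value);
	return 0;
}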


I think all of this could complement kunit of course - this isn't
suggesting an alternative implementation :-)


> All ideaware for now, but the roadmap seems to be paving itself.

I guess all great ideas start as ideaware somehow ...

Now we just have to start the race to see who can tweak the kernel build
system to produce an output library first :)

 (I won't be upset if I don't win the race)

-- 
Regards
--
Kieran

^ permalink raw reply	[flat|nested] 232+ messages in thread

* [RFC v3 11/19] kunit: add Python libraries for handing KUnit config and kernel
  2018-12-06 12:32         ` kieran.bingham
  2018-12-06 12:32           ` Kieran Bingham
@ 2018-12-06 15:37           ` willy
  2018-12-06 15:37             ` Matthew Wilcox
                               ` (2 more replies)
  2018-12-07  1:05           ` mcgrof
  2018-12-07 18:35           ` kent.overstreet
  3 siblings, 3 replies; 232+ messages in thread
From: willy @ 2018-12-06 15:37 UTC (permalink / raw)


On Thu, Dec 06, 2018 at 12:32:47PM +0000, Kieran Bingham wrote:
> On 04/12/2018 20:47, Luis Chamberlain wrote:
> > On Mon, Dec 03, 2018 at 03:48:15PM -0800, Brendan Higgins wrote:
> >> On Thu, Nov 29, 2018 at 5:54 AM Kieran Bingham
> >> <kieran.bingham@ideasonboard.com> wrote:
> >>>
> >>> Hi Brendan,
> >>>
> >>> Thanks again for this series!
> >>>
> >>> On 28/11/2018 19:36, Brendan Higgins wrote:
> >>>> The ultimate goal is to create minimal isolated test binaries; in the
> >>>> meantime we are using UML to provide the infrastructure to run tests, so
> >>>> define an abstract way to configure and run tests that allow us to
> >>>> change the context in which tests are built without affecting the user.
> >>>> This also makes pretty and dynamic error reporting, and a lot of other
> >>>> nice features easier.
> >>>
> >>>
> >>> I wonder if we could somehow generate a shared library object
> >>> 'libkernel' or 'libumlinux' from a UM configured set of headers and
> >>> objects so that we could create binary targets directly ?
> >>
> >> That's an interesting idea. I think it would be difficult to figure
> >> out exactly where to draw the line of what goes in there and what
> >> needs to be built specific to a test a priori. Of course, that leads
> >> into the biggest problem in general, needing to know what I need to
> >> build to test the thing that I want to test.
> >>
> >> Nevertheless, I could definitely imagine that being useful in a lot of cases.
> > 
> > Whether or not we can abstract away the kernel into such a mechanism
> > with uml libraries is a good question worth exploring.
> > 
> > Developers working upstream do modify their kernels a lot, so we'd have
> > to update such libraries quite a bit, but I think that's fine too. The
> > *real* value I think from the above suggestion would be enterprise /
> > mobile distros or stable kernel maintainers which have a static kernel
> > they need to support for a relatively *long time*, consider a 10 year
> > time frame. Running unit tests without qemu with uml and libraries for
> > respective kernels seems real worthy.
> 
> I think any such library might be something generated by the kernel
> build system, so if someone makes substantial changes to a core
> component provided by the library - it can be up to them to build a
> corresponding userspace library as well.
> 
> We could also consider only providing *static* libraries rather than
> dynamic ones. So anyone building some userspace tool / test with this
> would be required to compile against (the version of) the kernel they
> expect perhaps... - much like we expect modules to be compiled currently.
> 
> And then the userspace binary would be sufficiently able to live its
> life on its own :)
> 
> > The overhead for testing a unit test for said targets, *ideally*, would
> > just be to to reboot into the system with such libraries available, a
> > unit test would just look for the respective uname -r library and mimic
> > that kernel, much the same way enterprise distributions today rely on
> > having debugging symbols available to run against crash / gdb. Having
> > debug modules / kernel for crash requires such effort already, so this
> > would just be an extra layer of other prospect tests.
> 
> Oh - although, yes - there are some good concepts there - but I'm a bit
> wary of how easy it would be to 'run' said test against multiple
> kernel version libraries... there would be a lot of possible ABI
> conflicts perhaps.
> 
> My main initial idea for a libumlinux is to provide infrastructure such
> as our linked-lists and other kernel formatting so that we can take
> kernel code directly to userspace for test and debug (assuming that
> there are no hardware dependencies or things that we can't mock out)
> 
> I think all of this could complement kunit of course - this isn't
> suggesting an alternative implementation :-)

I suspect the reason Luis cc'd me on this is that we already have some
artisanally-crafted userspace kernel-mocking interfaces under tools/.
The tools/testing/radix-tree directory is the source of some of this,
but I've been moving pieces out into tools/ more generally where it
makes sense to.

We have liburcu already, which is good.  The main sticking points are:

 - No emulation of kernel thread interfaces
 - The kernel does not provide the ability to aggressively fail memory
   allocations (which is useful when trying to exercise the memory failure
   paths).
 - printk has started adding a lot of %pX enhancements which printf
   obviously doesn't know about.
 - No global pseudo-random number generator in the kernel.  Probably
   we should steal the i915 one.

I know Dan Williams has also done a lot of working mocking kernel
interfaces for libnvdimm.

^ permalink raw reply	[flat|nested] 232+ messages in thread

* [RFC v3 17/19] of: unittest: migrate tests to run on KUnit
  2018-12-05 23:42           ` brendanhiggins
  2018-12-05 23:42             ` Brendan Higgins
@ 2018-12-07  0:41             ` robh
  2018-12-07  0:41               ` Rob Herring
  1 sibling, 1 reply; 232+ messages in thread
From: robh @ 2018-12-07  0:41 UTC (permalink / raw)


On Wed, Dec 5, 2018 at 5:43 PM Brendan Higgins
<brendanhiggins@google.com> wrote:
>
> On Tue, Dec 4, 2018 at 5:41 AM Rob Herring <robh@kernel.org> wrote:
> >
> > On Mon, Dec 3, 2018 at 6:14 PM Brendan Higgins
> > <brendanhiggins@google.com> wrote:
> > >
> > > On Thu, Nov 29, 2018 at 4:40 PM Randy Dunlap <rdunlap@infradead.org> wrote:
> > > >
> > > > On 11/28/18 12:56 PM, Rob Herring wrote:
> > > > >> diff --git a/drivers/of/Kconfig b/drivers/of/Kconfig
> > > > >> index ad3fcad4d75b8..f309399deac20 100644
> > > > >> --- a/drivers/of/Kconfig
> > > > >> +++ b/drivers/of/Kconfig
> > > > >> @@ -15,6 +15,7 @@ if OF
> > > > >>  config OF_UNITTEST
> > > > >>         bool "Device Tree runtime unit tests"
> > > > >>         depends on !SPARC
> > > > >> +       depends on KUNIT
> > > > > Unless KUNIT has depends, better to be a select here.
> > > >
> > > > That's just style or taste.  I would prefer to use depends
> > > > instead of select, but that's also just my preference.
> > >
> > > I prefer depends too, but Rob is the maintainer here.
> >
> > Well, we should be consistent, not follow the whims of each maintainer.
>
> Sorry, I don't think that came out the way I meant it. I don't really
> think we are consistent on this point across the kernel, and I don't
> feel very strongly about the point, so I was just looking to follow
> the path of least resistance. (I also just assumed Rob would keep us
> consistent within drivers/of/.)

I meant across unittests, we should be consistent. All unittests should
do either "depends on KUNIT" or "select KUNIT". The question I would ask
is: does KUNIT need to be user visible, or is it useful to enable without
any unittests enabled? With depends, a user has 2 options to go enable
vs. 1 with select.

But if you want a global kill switch to turn off all unittests, then
depends works better.
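
To spell out the two forms for the entry in the diff above (sketch):

 config OF_UNITTEST
        bool "Device Tree runtime unit tests"
        depends on !SPARC
        select KUNIT

versus the "depends on KUNIT" line as posted, with KUNIT then enabled
(or globally disabled) on its own.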

> I figure if we are running unit tests from the test runner script or
> from an automated system, you won't be hunting for dependencies for a
> single test every time you want to run a test, so select doesn't make
> it easier to configure in most imagined use cases.
>
> KUNIT hypothetically should not depend on anything, so select should
> be safe to use.
>
> On the other hand, if we end up being wrong on this point and KUnit
> gains widespread adoption, I would prefer not to be in a position
> where I have to change a bunch of configs all over the kernel because
> this example got copied and pasted.

You'll be so happy that 100s of tests have been created using kunit,
it won't be a big deal. :)

In any case, I wouldn't spend more time on this.

Rob

^ permalink raw reply	[flat|nested] 232+ messages in thread

* [RFC v3 11/19] kunit: add Python libraries for handing KUnit config and kernel
  2018-12-06 12:32         ` kieran.bingham
  2018-12-06 12:32           ` Kieran Bingham
  2018-12-06 15:37           ` willy
@ 2018-12-07  1:05           ` mcgrof
  2018-12-07  1:05             ` Luis Chamberlain
  2018-12-07 18:35           ` kent.overstreet
  3 siblings, 1 reply; 232+ messages in thread
From: mcgrof @ 2018-12-07  1:05 UTC (permalink / raw)


On Thu, Dec 06, 2018 at 12:32:47PM +0000, Kieran Bingham wrote:
> My main initial idea for a libumlinux is to provide infrastructure such
> as our linked-lists and other kernel formatting so that we can take
> kernel code directly to userspace for test and debug (assuming that
> there are no hardware dependencies or things that we can't mock out)

The tools/ directory already does this for tons of things. It's where
I ended up placing some API I tested a long time ago when I wanted to
test it in userspace, and provide the unit test in userspace (for my
linker table patches).

> Now we just have to start the race to see who can tweak the kernel build
> system to produce an output library first :)

Should be relatively easy if the tools directory is used. Yes, there is
an inherent risk of duplication, but that was decided long ago.

  Luis

^ permalink raw reply	[flat|nested] 232+ messages in thread

* [RFC v3 11/19] kunit: add Python libraries for handing KUnit config and kernel
  2018-12-06 15:37           ` willy
  2018-12-06 15:37             ` Matthew Wilcox
@ 2018-12-07 11:30             ` kieran.bingham
  2018-12-07 11:30               ` Kieran Bingham
  2018-12-11 14:09             ` pmladek
  2 siblings, 1 reply; 232+ messages in thread
From: kieran.bingham @ 2018-12-07 11:30 UTC (permalink / raw)


Hi Matthew,

On 06/12/2018 15:37, Matthew Wilcox wrote:
> On Thu, Dec 06, 2018 at 12:32:47PM +0000, Kieran Bingham wrote:
>> On 04/12/2018 20:47, Luis Chamberlain wrote:
>>> On Mon, Dec 03, 2018 at 03:48:15PM -0800, Brendan Higgins wrote:
>>>> On Thu, Nov 29, 2018 at 5:54 AM Kieran Bingham
>>>> <kieran.bingham@ideasonboard.com> wrote:
>>>>>
>>>>> Hi Brendan,
>>>>>
>>>>> Thanks again for this series!
>>>>>
>>>>> On 28/11/2018 19:36, Brendan Higgins wrote:
>>>>>> The ultimate goal is to create minimal isolated test binaries; in the
>>>>>> meantime we are using UML to provide the infrastructure to run tests, so
>>>>>> define an abstract way to configure and run tests that allow us to
>>>>>> change the context in which tests are built without affecting the user.
>>>>>> This also makes pretty and dynamic error reporting, and a lot of other
>>>>>> nice features easier.
>>>>>
>>>>>
>>>>> I wonder if we could somehow generate a shared library object
>>>>> 'libkernel' or 'libumlinux' from a UM configured set of headers and
>>>>> objects so that we could create binary targets directly ?
>>>>
>>>> That's an interesting idea. I think it would be difficult to figure
>>>> out exactly where to draw the line of what goes in there and what
>>>> needs to be built specific to a test a priori. Of course, that leads
>>>> into the biggest problem in general, needing to know what I need to
>>>> build to test the thing that I want to test.
>>>>
>>>> Nevertheless, I could definitely imagine that being useful in a lot of cases.
>>>
>>> Whether or not we can abstract away the kernel into such a mechanism
>>> with uml libraries is a good question worth exploring.
>>>
>>> Developers working upstream do modify their kernels a lot, so we'd have
>>> to update such libraries quite a bit, but I think that's fine too. The
>>> *real* value I think from the above suggestion would be enterprise /
>>> mobile distros or stable kernel maintainers which have a static kernel
>>> they need to support for a relatively *long time*, consider a 10 year
>>> time frame. Running unit tests without qemu with uml and libraries for
>>> respective kernels seems real worthy.
>>
>> I think any such library might be something generated by the kernel
>> build system, so if someone makes substantial changes to a core
>> component provided by the library - it can be up to them to build a
>> corresponding userspace library as well.
>>
>> We could also consider only providing *static* libraries rather than
>> dynamic ones. So anyone building some userspace tool / test with this
>> would be required to compile against (the version of) the kernel they
>> expect perhaps... - much like we expect modules to be compiled currently.
>>
>> And then the userspace binary would be sufficiently able to live its
>> life on its own :)
>>
>>> The overhead for testing a unit test for said targets, *ideally*, would
>>> just be to to reboot into the system with such libraries available, a
>>> unit test would just look for the respective uname -r library and mimic
>>> that kernel, much the same way enterprise distributions today rely on
>>> having debugging symbols available to run against crash / gdb. Having
>>> debug modules / kernel for crash requires such effort already, so this
>>> would just be an extra layer of other prospect tests.
>>
>> Oh - although, yes - there are some good concepts there - but I'm a bit
>> wary of how easy it would be to 'run' said test against multiple
>> kernel version libraries... there would be a lot of possible ABI
>> conflicts perhaps.
>>
>> My main initial idea for a libumlinux is to provide infrastructure such
>> as our linked-lists and other kernel formatting so that we can take
>> kernel code directly to userspace for test and debug (assuming that
>> there are no hardware dependencies or things that we can't mock out)
>>
>> I think all of this could complement kunit of course - this isn't
>> suggesting an alternative implementation :-)
> 
> I suspect the reason Luis cc'd me on this is that we already have some
> artisinally-crafted userspace kernel-mocking interfaces under tools/.

Aha - excellent - I had hoped to grab you at Plumbers to talk about
this, after hearing you mention something at your Xarray talk - but
didn't seem to find a suitable time.

> The tools/testing/radix-tree directory is the source of some of this,
> but I've been moving pieces out into tools/ more generally where it
> makes sense to.

Sounds like we already have a starting point then.


> We have liburcu already, which is good.  The main sticking points are:
> 
>  - No emulation of kernel thread interfaces

Scheduling finesse aside, this shouldn't be too hard to emulate/wrap
with pthreads?
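
A completely untested sketch of the sort of shim I mean, with the
signature trimmed down from the real kthread_run():

#include <pthread.h>
#include <stdlib.h>

struct task_struct {
	pthread_t thread;
	int (*threadfn)(void *data);
	void *data;
};

static void *kthread_trampoline(void *arg)
{
	struct task_struct *task = arg;

	task->threadfn(task->data);
	return NULL;
}

struct task_struct *kthread_run(int (*threadfn)(void *data), void *data)
{
	struct task_struct *task = malloc(sizeof(*task));

	if (!task)
		return NULL;
	task->threadfn = threadfn;
	task->data = data;
	if (pthread_create(&task->thread, NULL, kthread_trampoline, task)) {
		free(task);
		return NULL;
	}
	return task;
}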


>  - The kernel does not provide the ability to aggressively fail memory
>    allocations (which is useful when trying to exercise the memory failure
>    paths).

Fault injection throughout would certainly be a valuable addition to any
unit-testing.

Wrapping tests into a single userspace binary could facilitate further
memory leak checking or other valgrind facilities too.
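
On the allocation-failure side, even a crude knob in a kmalloc() shim
would go a long way - a sketch, with an invented fail_nth control:

#include <stdlib.h>

/* Fail the Nth allocation from now; -1 disables the injection. */
static int fail_nth = -1;

void *kmalloc(size_t size, unsigned int gfp_flags)
{
	(void)gfp_flags;	/* no GFP semantics in userspace */

	if (fail_nth >= 0 && fail_nth-- == 0)
		return NULL;
	return malloc(size);
}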



>  - printk has started adding a lot of %pX enhancements which printf
>    obviously doesn't know about.

Wrapping through User-mode Linux essentially provides this already
though. In fact I guess that goes for the thread interfaces topic above too.


>  - No global pseudo-random number generator in the kernel.  Probably
>    we should steal the i915 one.
> 
> I know Dan Williams has also done a lot of working mocking kernel
> interfaces for libnvdimm.


Thanks for the references - more to investigate.

-- 
Regards
--
Kieran

^ permalink raw reply	[flat|nested] 232+ messages in thread

* [RFC v3 11/19] kunit: add Python libraries for handing KUnit config and kernel
  2018-12-06 12:32         ` kieran.bingham
                             ` (2 preceding siblings ...)
  2018-12-07  1:05           ` mcgrof
@ 2018-12-07 18:35           ` kent.overstreet
  2018-12-07 18:35             ` Kent Overstreet
  3 siblings, 1 reply; 232+ messages in thread
From: kent.overstreet @ 2018-12-07 18:35 UTC (permalink / raw)


On Thu, Dec 06, 2018 at 12:32:47PM +0000, Kieran Bingham wrote:
> Oh - although, yes - there are some good concepts there - but I'm a bit
> weary of how easy it would be to 'run' the said test against multiple
> kernel version libraries... there would be a lot of possible ABI
> conflicts perhaps.
> 
> My main initial idea for a libumlinux is to provide infrastructure such
> as our linked-lists and other kernel formatting so that we can take
> kernel code directly to userspace for test and debug (assuming that
> there are no hardware dependencies or things that we can't mock out)

I think this would be a really wonderful thing to make happen, and could
potentially be much more widely useful than just running tests, by making
it easier to share code between both kernel and userspace.

For bcachefs I've got a shim layer that lets me build almost everything in
fs/bcachefs and use it as a library in the userspace bcachefs-tools - e.g. for
fsck and migrate. Mine was a quick and dirty hack, but even so it's been
_extremely_ useful and a major success - I think if this became something more
official a lot of uses would be found for it.

I'm not sure if you've actually started on this (haven't seen most of the thread
yet), but if any of the bcachefs-tools shim code is useful feel free to steal it
- I've got dirt-simple, minimum viable shims for the kthread api, workqueues,
timers, the block layer, and assorted other stuff:

https://evilpiepirate.org/git/bcachefs-tools.git/

Going forward, one issue is going to be that a libumlinux is going to want to
shim some interfaces, and for other things it'll just want to pull in the kernel
implementation - e.g. rhashtables. It might be nice if we could refactor things
a bit so that things like rhashtables could be built as a standalone library, as
is.

^ permalink raw reply	[flat|nested] 232+ messages in thread

* [RFC v3 11/19] kunit: add Python libraries for handing KUnit config and kernel
  2018-12-06 15:37           ` willy
  2018-12-06 15:37             ` Matthew Wilcox
  2018-12-07 11:30             ` kieran.bingham
@ 2018-12-11 14:09             ` pmladek
  2018-12-11 14:09               ` Petr Mladek
  2018-12-11 14:41               ` rostedt
  2 siblings, 2 replies; 232+ messages in thread
From: pmladek @ 2018-12-11 14:09 UTC (permalink / raw)


On Thu 2018-12-06 07:37:18, Matthew Wilcox wrote:
> On Thu, Dec 06, 2018 at 12:32:47PM +0000, Kieran Bingham wrote:
> > On 04/12/2018 20:47, Luis Chamberlain wrote:
> > > On Mon, Dec 03, 2018 at 03:48:15PM -0800, Brendan Higgins wrote:
> > >> On Thu, Nov 29, 2018 at 5:54 AM Kieran Bingham
> > >> <kieran.bingham@ideasonboard.com> wrote:
> > > Developers working upstream do modify their kernels a lot, so we'd have
> > > to update such libraries quite a bit, but I think that's fine too. The
> > > *real* value I think from the above suggestion would be enterprise /
> > > mobile distros or stable kernel maintainers which have a static kernel
> > > they need to support for a relatively *long time*, consider a 10 year
> > > time frame. Running unit tests without qemu with uml and libraries for
> > > respective kernels seems real worthy.
> > 
> > I think any such library might be something generated by the kernel
> > build system, so if someone makes substantial changes to a core
> > component provided by the library - it can be up to them to build a
> > corresponding userspace library as well.
> > 
> > My main initial idea for a libumlinux is to provide infrastructure such
> > as our linked-lists and other kernel formatting so that we can take
> > kernel code directly to userspace for test and debug (assuming that
> > there are no hardware dependencies or things that we can't mock out)
> 
> We have liburcu already, which is good.  The main sticking points are:
> 
>  - printk has started adding a lot of %pX enhancements which printf
>    obviously doesn't know about.

I wonder how big a problem it is and whether it is worth using another
approach.

An alternative would be to replace them with helper functions
that would produce the same string. The meaning would be easier
to understand. But concatenating with the surrounding text
would be less elegant. People might start using pr_cont(),
which is problematic (mixed lines).
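
For example, a test build could carry a helper instead of "%pI4" - just
a sketch with an invented name:

#include <stdio.h>

static const char *ip4_str(const unsigned char addr[4], char *buf, size_t len)
{
	snprintf(buf, len, "%u.%u.%u.%u", addr[0], addr[1], addr[2], addr[3]);
	return buf;
}

The extra buffer at every call site (given some unsigned char ip[4]) is
the less elegant part:

	char buf[16];
	printf("src %s\n", ip4_str(ip, buf, sizeof(buf)));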

Also, the %pX formats are mostly used to print the contents of some
structures. Even the helper functions would need some maintenance
to keep them compatible.

BTW: The printk() feature was introduced 10 years ago by
commit 4d8a743cdd2690c0bc8 ("vsprintf: add infrastructure
support for extended '%p' specifiers").

Best Regards,
Petr

^ permalink raw reply	[flat|nested] 232+ messages in thread

* [RFC v3 11/19] kunit: add Python libraries for handing KUnit config and kernel
  2018-12-11 14:09             ` pmladek
  2018-12-11 14:09               ` Petr Mladek
@ 2018-12-11 14:41               ` rostedt
  2018-12-11 14:41                 ` Steven Rostedt
  2018-12-11 17:01                 ` anton.ivanov
  1 sibling, 2 replies; 232+ messages in thread
From: rostedt @ 2018-12-11 14:41 UTC (permalink / raw)


On Tue, 11 Dec 2018 15:09:26 +0100
Petr Mladek <pmladek@suse.com> wrote:

> > We have liburcu already, which is good.  The main sticking points are:
> > 
> >  - printk has started adding a lot of %pX enhancements which printf
> >    obviously doesn't know about.  
> 
> I wonder how big problem it is and if it is worth using another
> approach.

No, please do not change the %pX approach.

> 
> An alternative would be to replace them with helper functions
> the would produce the same string. The meaning would be easier
> to understand. But concatenating with the surrounding text
> would be less elegant. People might start using pr_cont()
> that is problematic (mixed lines).
> 
> Also the %pX formats are mostly used to print context of some
> structures. Even the helper functions would need some maintenance
> to keep them compatible.
> 
> BTW: The printk() feature has been introduced 10 years ago by
> the commit 4d8a743cdd2690c0bc8 ("vsprintf: add infrastructure
> support for extended '%p' specifiers").

trace-cmd and perf know about most of the %pX data and how to read it.
Perhaps we can extend the libtraceevent library to export a generic way
to read data from printk() output for other tools to use.

-- Steve

^ permalink raw reply	[flat|nested] 232+ messages in thread

* [RFC v3 11/19] kunit: add Python libraries for handing KUnit config and kernel
  2018-12-11 14:41               ` rostedt
  2018-12-11 14:41                 ` Steven Rostedt
@ 2018-12-11 17:01                 ` anton.ivanov
  2018-12-11 17:01                   ` Anton Ivanov
  2019-02-09  0:40                   ` brendanhiggins
  1 sibling, 2 replies; 232+ messages in thread
From: anton.ivanov @ 2018-12-11 17:01 UTC (permalink / raw)



On 12/11/18 2:41 PM, Steven Rostedt wrote:
> On Tue, 11 Dec 2018 15:09:26 +0100
> Petr Mladek <pmladek@suse.com> wrote:
>
>>> We have liburcu already, which is good.  The main sticking points are:
>>>
>>>   - printk has started adding a lot of %pX enhancements which printf
>>>     obviously doesn't know about.
>> I wonder how big a problem it is and whether it is worth using
>> another approach.
> No, please do not change the %pX approach.
>
>> An alternative would be to replace them with helper functions
>> that would produce the same string. The meaning would be easier
>> to understand, but concatenating with the surrounding text
>> would be less elegant. People might start using pr_cont(),
>> which is problematic (mixed lines).
>>
>> Also, the %pX formats are mostly used to print the contents of
>> some structures. Even the helper functions would need some
>> maintenance to stay compatible.
>>
>> BTW, this printk() feature was introduced 10 years ago by
>> commit 4d8a743cdd2690c0bc8 ("vsprintf: add infrastructure
>> support for extended '%p' specifiers").
> trace-cmd and perf know about most of the %pX data and how to read it.
> Perhaps we can extend the libtraceevent library to export a generic way
> to read data from printk() output for other tools to use.

Going back for a second to using UML for this: the UML console at
present is interrupt driven - it emulates serial I/O using several
different back-ends (file descriptors, xterm, or actual ttys/ptys).
Epoll events on the host side are used to trigger the UML interrupts -
both read and write.

This works OK for normal use, but it may result in all kinds of
interesting false positives/false negatives when UML is used to run
unit tests against a change that alters interrupt behavior.

IMO it may be useful to consider some alternatives specifically for
unit-test coverage, where printk and/or the whole console output
bypasses some of the IRQ-driven semantics altogether.

-- 

Anton R. Ivanov

Cambridge Greys Limited, England and Wales company No 10273661
http://www.cambridgegreys.com/

^ permalink raw reply	[flat|nested] 232+ messages in thread

* [RFC v3 11/19] kunit: add Python libraries for handing KUnit config and kernel
  2018-12-11 17:01                 ` anton.ivanov
  2018-12-11 17:01                   ` Anton Ivanov
@ 2019-02-09  0:40                   ` brendanhiggins
  2019-02-09  0:40                     ` Brendan Higgins
  1 sibling, 1 reply; 232+ messages in thread
From: brendanhiggins @ 2019-02-09  0:40 UTC (permalink / raw)


On Tue, Dec 11, 2018 at 9:02 AM Anton Ivanov
<anton.ivanov@cambridgegreys.com> wrote:
>
>
> On 12/11/18 2:41 PM, Steven Rostedt wrote:
> > On Tue, 11 Dec 2018 15:09:26 +0100
> > Petr Mladek <pmladek@suse.com> wrote:
> >
> >>> We have liburcu already, which is good.  The main sticking points are:
> >>>
> >>>   - printk has started adding a lot of %pX enhancements which printf
> >>>     obviously doesn't know about.
> >> I wonder how big a problem it is and whether it is worth using
> >> another approach.
> > No, please do not change the %pX approach.
> >
> >> An alternative would be to replace them with helper functions
> >> that would produce the same string. The meaning would be easier
> >> to understand, but concatenating with the surrounding text
> >> would be less elegant. People might start using pr_cont(),
> >> which is problematic (mixed lines).
> >>
> >> Also, the %pX formats are mostly used to print the contents of
> >> some structures. Even the helper functions would need some
> >> maintenance to stay compatible.
> >>
> >> BTW, this printk() feature was introduced 10 years ago by
> >> commit 4d8a743cdd2690c0bc8 ("vsprintf: add infrastructure
> >> support for extended '%p' specifiers").
> > trace-cmd and perf know about most of the %pX data and how to read it.
> > Perhaps we can extend the libtraceevent library to export a generic way
> > to read data from printk() output for other tools to use.
>
> Going back for a second to using UML for this. UML console at present is
> interrupt driven - it emulates serial IO using several different
> back-ends (file descriptors, xterm or actual tty/ptys). Epoll events on
> the host side are used to trigger the UML interrupts - both read and write.
>
> This works OK for normal use, but may result in all kinds of interesting
> false positives/false negatives when UML is used to run unit tests
> against a change which changes interrupt behavior.
>
> IMO it may be useful to consider some alternatives specifically for unit
> test coverage purposes where printk and/or the whole console output
> altogether bypass some of the IRQ driven semantics.

Whoops, sorry, didn't see your comment before I went on vacation.

I completely agree. It is also annoying when trying to test other
really low-level parts of the kernel. I would really like to get KUnit
to the point where it does not have any dependencies on anything in
the kernel, but that is very challenging for many reasons. This
loosely relates to what Luis, I, and others have discussed in other
threads about having a stricter notion of code dependencies in the
kernel. Thinking about it now, I suspect it might be easier to limit
KUnit's dependency on kernel infrastructure first; that could help
motivate the later work.

^ permalink raw reply	[flat|nested] 232+ messages in thread

* [RFC v3 14/19] Documentation: kunit: add documentation for KUnit
  2018-12-06 12:16         ` kieran.bingham
  2018-12-06 12:16           ` Kieran Bingham
@ 2019-02-09  0:56           ` brendanhiggins
  2019-02-09  0:56             ` Brendan Higgins
  2019-02-11 12:16             ` kieran.bingham
  1 sibling, 2 replies; 232+ messages in thread
From: brendanhiggins @ 2019-02-09  0:56 UTC (permalink / raw)


On Thu, Dec 6, 2018 at 4:16 AM Kieran Bingham
<kieran.bingham@ideasonboard.com> wrote:
>
> Hi Brendan,
>
> On 03/12/2018 23:53, Brendan Higgins wrote:
> >> On Thu, Nov 29, 2018 at 7:45 PM Luis Chamberlain <mcgrof@kernel.org> wrote:
> >>
> >> On Thu, Nov 29, 2018 at 01:56:37PM +0000, Kieran Bingham wrote:
> >>> Hi Brendan,
> >>>
> >>> Please excuse the top posting, but I'm replying here as I'm following
> >>> the section "Creating a kunitconfig" in Documentation/kunit/start.rst.
> >>>
> >>> Could the three line kunitconfig file live under say
> >>>        arch/um/configs/kunit_defconfig?
>
>
> Further consideration to this topic - I mentioned putting it in
>   arch/um/configs
>
> - but I think this is wrong.
>
> We now have a location for config-fragments, which is essentially what
> this is, under kernel/configs
>
> So perhaps an addition as :
>
>  kernel/configs/kunit.config
>
> Would be more appropriate - and less (UM) architecture specific.

Sorry for the long radio silence.

I just got around to doing this, and I found that there are some
configs that are desirable to have when running KUnit under x86 in a
VM, but not under UML. So should we have one fragment that goes in
with the config fragments and others that go into the architectures?
Another idea: it would be nice to have a KUnit config that runs all
known tests (this probably won't work in practice once we start
testing mutually exclusive things or things with lots of ifdeffery,
but it is probably something we should try to maintain as best we
can); this probably shouldn't go in with the fragments, right?

I will be sending another revision out soon, but I figured I might be
able to catch you before I did so.

^ permalink raw reply	[flat|nested] 232+ messages in thread

* [RFC v3 14/19] Documentation: kunit: add documentation for KUnit
  2019-02-09  0:56           ` brendanhiggins
  2019-02-09  0:56             ` Brendan Higgins
@ 2019-02-11 12:16             ` kieran.bingham
  2019-02-11 12:16               ` Kieran Bingham
  2019-02-12 22:10               ` brendanhiggins
  1 sibling, 2 replies; 232+ messages in thread
From: kieran.bingham @ 2019-02-11 12:16 UTC (permalink / raw)


Hi Brendan,

On 09/02/2019 00:56, Brendan Higgins wrote:
> On Thu, Dec 6, 2018 at 4:16 AM Kieran Bingham
> <kieran.bingham@ideasonboard.com> wrote:
>>
>> Hi Brendan,
>>
>> On 03/12/2018 23:53, Brendan Higgins wrote:
>>> On Thu, Nov 29, 2018 at 7:45 PM Luis Chamberlain <mcgrof@kernel.org> wrote:
>>>>
>>>> On Thu, Nov 29, 2018 at 01:56:37PM +0000, Kieran Bingham wrote:
>>>>> Hi Brendan,
>>>>>
>>>>> Please excuse the top posting, but I'm replying here as I'm following
>>>>> the section "Creating a kunitconfig" in Documentation/kunit/start.rst.
>>>>>
>>>>> Could the three line kunitconfig file live under say
>>>>>        arch/um/configs/kunit_defconfig?
>>
>>
>> Further consideration to this topic - I mentioned putting it in
>>   arch/um/configs
>>
>> - but I think this is wrong.
>>
>> We now have a location for config-fragments, which is essentially what
>> this is, under kernel/configs
>>
>> So perhaps an addition as :
>>
>>  kernel/configs/kunit.config
>>
>> Would be more appropriate - and less (UM) architecture specific.
> 
> Sorry for the long radio silence.
> 
> I just got around to doing this and I found that there are some
> configs that are desirable to have when running KUnit under x86 in a
> VM, but not UML. 

Should this behaviour you mention be handled by Kconfig dependencies?

depends on (KUNIT && UML)
or
depends on (KUNIT && !UML)

or such?
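
i.e. on a test's Kconfig entry, something along these lines (an
illustrative entry, not one from this series):

```
config FOO_KUNIT_TEST
	bool "KUnit test for foo"
	depends on KUNIT && !UML
```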

An example of which configs you are referring to would perhaps help
to understand the issue.


> So should we have one that goes in with
> config-fragments and others that go into architectures? Another idea,
> it would be nice to have a KUnit config that runs all known tests

This might also be a config option added to the tests directly like
COMPILE_TEST perhaps?

(Not sure what that would be called though ... KUNIT_RUNTIME_TEST?)

I think that might be more maintainable as otherwise each new test would
have to modify the {min,def}{config,fragment} ...


> (this probably won't work in practice once we start testing mutually
> exclusive things or things with lots of ifdeffery, but it is probably
> something we should try to maintain as best we can); this probably
> shouldn't go in with the fragments, right?

Sounds like we agree there :)

> 
> I will be sending another revision out soon, but I figured I might be
> able to catch you before I did so.

Thanks for thinking of me.
I hope I managed to reply in time to help and not hinder your progress.

-- 
Regards
--
Kieran

^ permalink raw reply	[flat|nested] 232+ messages in thread

* [RFC v3 14/19] Documentation: kunit: add documentation for KUnit
  2019-02-11 12:16             ` kieran.bingham
  2019-02-11 12:16               ` Kieran Bingham
@ 2019-02-12 22:10               ` brendanhiggins
  2019-02-12 22:10                 ` Brendan Higgins
  2019-02-13 21:55                 ` kieran.bingham
  1 sibling, 2 replies; 232+ messages in thread
From: brendanhiggins @ 2019-02-12 22:10 UTC (permalink / raw)


On Mon, Feb 11, 2019 at 4:16 AM Kieran Bingham
<kieran.bingham@ideasonboard.com> wrote:
>
> Hi Brendan,
>
> On 09/02/2019 00:56, Brendan Higgins wrote:
> > On Thu, Dec 6, 2018 at 4:16 AM Kieran Bingham
> > <kieran.bingham@ideasonboard.com> wrote:
> >>
> >> Hi Brendan,
> >>
> >> On 03/12/2018 23:53, Brendan Higgins wrote:
> >>> On Thu, Nov 29, 2018 at 7:45 PM Luis Chamberlain <mcgrof@kernel.org> wrote:
> >>>>
> >>>> On Thu, Nov 29, 2018 at 01:56:37PM +0000, Kieran Bingham wrote:
> >>>>> Hi Brendan,
> >>>>>
> >>>>> Please excuse the top posting, but I'm replying here as I'm following
> >>>>> the section "Creating a kunitconfig" in Documentation/kunit/start.rst.
> >>>>>
> >>>>> Could the three line kunitconfig file live under say
> >>>>>        arch/um/configs/kunit_defconfig?
> >>
> >>
> >> Further consideration to this topic - I mentioned putting it in
> >>   arch/um/configs
> >>
> >> - but I think this is wrong.
> >>
> >> We now have a location for config-fragments, which is essentially what
> >> this is, under kernel/configs
> >>
> >> So perhaps an addition as :
> >>
> >>  kernel/configs/kunit.config
> >>
> >> Would be more appropriate - and less (UM) architecture specific.
> >
> > Sorry for the long radio silence.
> >
> > I just got around to doing this and I found that there are some
> > configs that are desirable to have when running KUnit under x86 in a
> > VM, but not UML.
>
> Should this behaviour you mention be handled by Kconfig dependencies?
>
> depends on (KUNIT && UML)
> or
> depends on (KUNIT && !UML)
>
> or such?

Not really. Anything that is strictly necessary to run KUnit on an
architecture should of course be turned on as a dependency, like you
suggest, but I am talking about stuff that you would probably want to
get yourself going, yet is by no means necessary.

>
> An example of which configs you are referring to would help to
> understand the issue perhaps.
>

For example, you might want to enable a serial console that is known
to work with a fairly generic qemu setup when building for x86:
CONFIG_SERIAL_8250=y
CONFIG_SERIAL_8250_CONSOLE=y

Obviously that is not a dependency, and not even particularly useful
to people who know what they are doing, but someone who is new or just
wants something to work out of the box would probably want it.

>
> > So should we have one that goes in with
> > config-fragments and others that go into architectures? Another idea,
> > it would be nice to have a KUnit config that runs all known tests
>
> This might also be a config option added to the tests directly like
> COMPILE_TEST perhaps?

That just allows a bunch of drivers to be compiled; it does not
actually go through and turn the configs on, right? I mean, there is
no a priori way to know that there is a configuration which spans all
possible options available under COMPILE_TEST, right? Maybe I
misunderstand what you are suggesting...

>
> (Not sure what that would be called though ... KUNIT_RUNTIME_TEST?)
>
> I think that might be more maintainable as otherwise each new test would
> have to modify the {min,def}{config,fragment} ...
>

Looking at kselftest-merge, they just start out with a set of
fragments whose union should contain all tests and then merge it with
a base .config (probably intended to be $(ARCH)_defconfig). However, I
don't know if that is the state of the art.
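
Roughly, the merge step boils down to something like this (a sketch
using the in-tree merge script; the fragment path assumes the
kernel/configs/kunit.config idea discussed above):

```
make defconfig
./scripts/kconfig/merge_config.sh -m .config kernel/configs/kunit.config
make olddefconfig
```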

>
> > (this probably won't work in practice once we start testing mutually
> > exclusive things or things with lots of ifdeffery, but it is probably
> > something we should try to maintain as best we can); this probably
> > shouldn't go in with the fragments, right?
>
> Sounds like we agree there :)

Totally. Long term we will need something a lot more sophisticated
than anything under discussion here. I was talking about this with
Luis on another thread:
https://groups.google.com/forum/#!topic/kunit-dev/EQ1x0SzrUus (feel
free to chime in!). Nevertheless, that's a really hard problem and I
figure some variant of defconfigs and config fragments will work well
enough until we reach that point.

>
> >
> > I will be sending another revision out soon, but I figured I might be
> > able to catch you before I did so.
>
> Thanks for thinking of me.

How can I forget? You have been super helpful!

> I hope I managed to reply in time to help and not hinder your progress.

Yep, no trouble at all. You are the one helping me :-)

Thanks!

^ permalink raw reply	[flat|nested] 232+ messages in thread

* [RFC v3 17/19] of: unittest: migrate tests to run on KUnit
       [not found]   ` <CAL_Jsq+09Kx7yMBC_Jw45QGmk6U_fp4N6HOZDwYrM4tWw+_dOA@mail.gmail.com>
  2018-11-30  0:39     ` rdunlap
  2018-12-04  0:08     ` brendanhiggins
@ 2019-02-13  1:44     ` brendanhiggins
  2019-02-13  1:44       ` Brendan Higgins
                         ` (2 more replies)
  2 siblings, 3 replies; 232+ messages in thread
From: brendanhiggins @ 2019-02-13  1:44 UTC (permalink / raw)


On Wed, Nov 28, 2018 at 12:56 PM Rob Herring <robh@kernel.org> wrote:
>
> On Wed, Nov 28, 2018 at 1:38 PM Brendan Higgins
> <brendanhiggins@google.com> wrote:
> >
> > Migrate tests without any cleanup, or modifying test logic in any way,
> > to run under KUnit using the KUnit expectation and assertion API.
>
> Nice! You beat me to it. This is probably going to conflict with what
> is in the DT tree for 4.21. Also, please Cc the DT list for
> drivers/of/ changes.
>
> Looks good to me, but a few mostly formatting comments below.

I just realized that we never talked about your other comments, and I
still have some questions. (Sorry, it was the last thing I looked at
while getting v4 ready.) No worries if you don't get to it before I
send v4 out, I just didn't want you to think I was ignoring you.

>
> >
> > Signed-off-by: Brendan Higgins <brendanhiggins@google.com>
> > ---
> >  drivers/of/Kconfig    |    1 +
> >  drivers/of/unittest.c | 1405 ++++++++++++++++++++++-------------------
> >  2 files changed, 752 insertions(+), 654 deletions(-)
> >
<snip>
> > diff --git a/drivers/of/unittest.c b/drivers/of/unittest.c
> > index 41b49716ac75f..a5ef44730ffdb 100644
> > --- a/drivers/of/unittest.c
> > +++ b/drivers/of/unittest.c
<snip>
> > -
> > -static void __init of_unittest_find_node_by_name(void)
> > +static void of_unittest_find_node_by_name(struct kunit *test)
>
> Why do we have to drop __init everywhere? The tests run later?

From the standpoint of a unit test, __init doesn't really make any
sense, right? I know that right now we are running as part of a
kernel, but the goal should be that a unit test is not part of a
kernel and we just include what we need.

Even so, that's the future. For now, I did not put the KUnit
infrastructure in the .init section because I didn't think it belonged
there. In practice, KUnit only knows how to run during the init phase
of the kernel, but I don't think it should be restricted to that. You
should be able to run tests whenever you want, because you should be
able to test anything, right? I figured any such restriction is
misleading, would get in the way at worst, and is unnecessary at best,
especially since people shouldn't build a production kernel with all
kinds of unit tests inside.

>
> >  {
> >         struct device_node *np;
> >         const char *options, *name;
> >
<snip>
> >
> >
> > -       np = of_find_node_by_path("/testcase-data/missing-path");
> > -       unittest(!np, "non-existent path returned node %pOF\n", np);
> > +       KUNIT_EXPECT_EQ_MSG(test,
> > +                           of_find_node_by_path("/testcase-data/missing-path"),
> > +                           NULL,
> > +                           "non-existent path returned node %pOF\n", np);
>
> 1 tab indent would help with less vertical code (in general, not this
> one so much).

Will do.

>
> >         of_node_put(np);
> >
> > -       np = of_find_node_by_path("missing-alias");
> > -       unittest(!np, "non-existent alias returned node %pOF\n", np);
> > +       KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("missing-alias"), NULL,
> > +                           "non-existent alias returned node %pOF\n", np);
> >         of_node_put(np);
> >
> > -       np = of_find_node_by_path("testcase-alias/missing-path");
> > -       unittest(!np, "non-existent alias with relative path returned node %pOF\n", np);
> > +       KUNIT_EXPECT_EQ_MSG(test,
> > +                           of_find_node_by_path("testcase-alias/missing-path"),
> > +                           NULL,
> > +                           "non-existent alias with relative path returned node %pOF\n",
> > +                           np);
> >         of_node_put(np);
> >
<snip>
> >
> > -static void __init of_unittest_property_string(void)
> > +static void of_unittest_property_string(struct kunit *test)
> >  {
> >         const char *strings[4];
> >         struct device_node *np;
> >         int rc;
> >
> >         np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
> > -       if (!np) {
> > -               pr_err("No testcase data in device tree\n");
> > -               return;
> > -       }
> > -
> > -       rc = of_property_match_string(np, "phandle-list-names", "first");
> > -       unittest(rc == 0, "first expected:0 got:%i\n", rc);
> > -       rc = of_property_match_string(np, "phandle-list-names", "second");
> > -       unittest(rc == 1, "second expected:1 got:%i\n", rc);
> > -       rc = of_property_match_string(np, "phandle-list-names", "third");
> > -       unittest(rc == 2, "third expected:2 got:%i\n", rc);
> > -       rc = of_property_match_string(np, "phandle-list-names", "fourth");
> > -       unittest(rc == -ENODATA, "unmatched string; rc=%i\n", rc);
> > -       rc = of_property_match_string(np, "missing-property", "blah");
> > -       unittest(rc == -EINVAL, "missing property; rc=%i\n", rc);
> > -       rc = of_property_match_string(np, "empty-property", "blah");
> > -       unittest(rc == -ENODATA, "empty property; rc=%i\n", rc);
> > -       rc = of_property_match_string(np, "unterminated-string", "blah");
> > -       unittest(rc == -EILSEQ, "unterminated string; rc=%i\n", rc);
> > +       KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> > +
> > +       KUNIT_EXPECT_EQ(test,
> > +                       of_property_match_string(np,
> > +                                                "phandle-list-names",
> > +                                                "first"),
> > +                       0);
> > +       KUNIT_EXPECT_EQ(test,
> > +                       of_property_match_string(np,
> > +                                                "phandle-list-names",
> > +                                                "second"),
> > +                       1);
>
> Fewer lines on these would be better even if we go over 80 chars.

On the of_property_match_string(...), I have no opinion. I will do
whatever you like best.

Nevertheless, as far as the KUNIT_EXPECT_*(...) formatting goes, I do have an
opinion: I am trying to establish a good, readable convention. Consider an
expect statement structured as
```
KUNIT_EXPECT_*(
    test,
    expect_arg_0, ..., expect_arg_n,
    fmt_str, fmt_arg_0, ..., fmt_arg_n)
```
where `test` is the `struct kunit` context argument, `expect_arg_{0, ..., n}`
are the arguments the expectation is being made about (so in the above example,
`of_property_match_string(...)` and `1`), and `fmt_*` is the optional format
string that comes at the end of some expectations.

The pattern I had been trying to promote is the following:

1) If everything fits on 1 line, do that.
2) If you must make a line split, prefer to keep `test` on its own line,
`expect_arg_{0, ..., n}` should be kept together, if possible, and the format
string should follow the conventions already most commonly used with format
strings.
3) If you must split up `expect_arg_{0, ..., n}` each argument should get its
own line and should not share a line with either `test` or any `fmt_*`.
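
Applied to the macros above, the three cases would look roughly like
this (a sketch reusing expectations from this patch):

```c
/* 1) Everything fits on one line. */
KUNIT_EXPECT_EQ(test, rc, 0);

/* 2) Split: `test` on its own line, expectation arguments together. */
KUNIT_EXPECT_EQ_MSG(test,
		    rc, -ENODATA,
		    "unmatched string; rc=%i\n", rc);

/* 3) Expectation arguments split up: one argument per line. */
KUNIT_EXPECT_EQ_MSG(test,
		    of_property_match_string(np, "phandle-list-names", "fourth"),
		    -ENODATA,
		    "unmatched string");
```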

The reason I care about this so much is that expectations should be
extremely easy to read; they are the most important part of a unit
test because they tell you what the test is verifying. I am not
married to the formatting I proposed above, but I want something that
makes it extremely easy to identify the arguments the expectation is
on. Maybe that means I need to add some syntactic fluff to make it
clearer, I don't know, but this is definitely something we need to get
right, especially in the earliest examples.

>
> > +       KUNIT_EXPECT_EQ(test,
> > +                       of_property_match_string(np,
> > +                                                "phandle-list-names",
> > +                                                "third"),
> > +                       2);
> > +       KUNIT_EXPECT_EQ_MSG(test,
> > +                           of_property_match_string(np,
> > +                                                    "phandle-list-names",
> > +                                                    "fourth"),
> > +                           -ENODATA,
> > +                           "unmatched string");
> > +       KUNIT_EXPECT_EQ_MSG(test,
> > +                           of_property_match_string(np,
> > +                                                    "missing-property",
> > +                                                    "blah"),
> > +                           -EINVAL,
> > +                           "missing property");
> > +       KUNIT_EXPECT_EQ_MSG(test,
> > +                           of_property_match_string(np,
> > +                                                    "empty-property",
> > +                                                    "blah"),
> > +                           -ENODATA,
> > +                           "empty property");
> > +       KUNIT_EXPECT_EQ_MSG(test,
> > +                           of_property_match_string(np,
> > +                                                    "unterminated-string",
> > +                                                    "blah"),
> > +                           -EILSEQ,
> > +                           "unterminated string");
<snip>
> >  /* test insertion of a bus with parent devices */
> > -static void __init of_unittest_overlay_10(void)
> > +static void of_unittest_overlay_10(struct kunit *test)
> >  {
> > -       int ret;
> >         char *child_path;
> >
> >         /* device should disable */
> > -       ret = of_unittest_apply_overlay_check(10, 10, 0, 1, PDEV_OVERLAY);
> > -       if (unittest(ret == 0,
> > -                       "overlay test %d failed; overlay application\n", 10))
> > -               return;
> > +       KUNIT_ASSERT_EQ_MSG(test,
> > +                           of_unittest_apply_overlay_check(test,
> > +                                                           10,
> > +                                                           10,
> > +                                                           0,
> > +                                                           1,
> > +                                                           PDEV_OVERLAY),
>
> I prefer putting multiple args on a line and having fewer lines.

Looking at this now, I tend to agree, but I don't think I saw a
consistent way to break them up for these functions. I figured there
should be some type of pattern.

>
> > +                           0,
> > +                           "overlay test %d failed; overlay application\n",
> > +                           10);
> >
> >         child_path = kasprintf(GFP_KERNEL, "%s/test-unittest101",
> >                         unittest_path(10, PDEV_OVERLAY));
> > -       if (unittest(child_path, "overlay test %d failed; kasprintf\n", 10))
> > -               return;
> > +       KUNIT_ASSERT_NOT_ERR_OR_NULL(test, child_path);
> >
> > -       ret = of_path_device_type_exists(child_path, PDEV_OVERLAY);
> > +       KUNIT_EXPECT_TRUE_MSG(test,
> > +                             of_path_device_type_exists(child_path,
> > +                                                        PDEV_OVERLAY),
> > +                             "overlay test %d failed; no child device\n", 10);
> >         kfree(child_path);
> > -
> > -       unittest(ret, "overlay test %d failed; no child device\n", 10);
> >  }
<snip>

^ permalink raw reply	[flat|nested] 232+ messages in thread

* [RFC v3 14/19] Documentation: kunit: add documentation for KUnit
  2019-02-12 22:10               ` brendanhiggins
  2019-02-12 22:10                 ` Brendan Higgins
@ 2019-02-13 21:55                 ` kieran.bingham
  2019-02-13 21:55                   ` Kieran Bingham
  2019-02-14  0:17                   ` brendanhiggins
  1 sibling, 2 replies; 232+ messages in thread
From: kieran.bingham @ 2019-02-13 21:55 UTC (permalink / raw)


Hi Brendan,

On 12/02/2019 22:10, Brendan Higgins wrote:
> On Mon, Feb 11, 2019 at 4:16 AM Kieran Bingham
> <kieran.bingham@ideasonboard.com> wrote:
>>
>> Hi Brendan,
>>
>> On 09/02/2019 00:56, Brendan Higgins wrote:
>>> On Thu, Dec 6, 2018 at 4:16 AM Kieran Bingham
>>> <kieran.bingham@ideasonboard.com> wrote:
>>>>
>>>> Hi Brendan,
>>>>
>>>> On 03/12/2018 23:53, Brendan Higgins wrote:
>>>>> On Thu, Nov 29, 2018 at 7:45 PM Luis Chamberlain <mcgrof@kernel.org> wrote:
>>>>>>
>>>>>> On Thu, Nov 29, 2018 at 01:56:37PM +0000, Kieran Bingham wrote:
>>>>>>> Hi Brendan,
>>>>>>>
>>>>>>> Please excuse the top posting, but I'm replying here as I'm following
>>>>>>> the section "Creating a kunitconfig" in Documentation/kunit/start.rst.
>>>>>>>
>>>>>>> Could the three line kunitconfig file live under say
>>>>>>>        arch/um/configs/kunit_defconfig?
>>>>
>>>>
>>>> Further consideration to this topic - I mentioned putting it in
>>>>   arch/um/configs
>>>>
>>>> - but I think this is wrong.
>>>>
>>>> We now have a location for config-fragments, which is essentially what
>>>> this is, under kernel/configs
>>>>
>>>> So perhaps an addition as:
>>>>
>>>>  kernel/configs/kunit.config
>>>>
>>>> Would be more appropriate - and less (UM) architecture specific.
>>>
>>> Sorry for the long radio silence.
>>>
>>> I just got around to doing this and I found that there are some
>>> configs that are desirable to have when running KUnit under x86 in a
>>> VM, but not UML.
>>
>> Should this behaviour you mention be handled by the KCONFIG depends flags?
>>
>> depends on (KUNIT & UML)
>> or
>> depends on (KUNIT & !UML)
>>
>> or such?
> 
> Not really. Anything that is strictly necessary to run KUnit on an
> architecture should of course be turned on as a dependency like you
> suggest, but I am talking about stuff that you would probably want in
> order to get yourself going, but that is by no means necessary.
> 
>>
>> An example of which configs you are referring to would help to
>> understand the issue perhaps.
>>
> 
> For example, you might want to enable a serial console that is known
> to work with a fairly generic qemu setup when building for x86:
> CONFIG_SERIAL_8250=y
> CONFIG_SERIAL_8250_CONSOLE=y
> 
> Obviously not a dependency, and not even particularly useful to people
> who know what they are doing, but someone who is new or just wants
> something to work out of the box would probably want it.

It sounds like that would be a config fragment for qemu?

Although - perhaps this is already covered by the following fragment:
   kernel/configs/kvm_guest.config
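
For reference, a fragment along those lines would be tiny. A minimal
sketch (CONFIG_KUNIT is the symbol this series introduces; the second
line and the path are illustrative, per the discussion above):

```
# kernel/configs/kunit.config -- hypothetical fragment
CONFIG_KUNIT=y
CONFIG_KUNIT_TEST=y
```

It could then be merged on top of an existing .config with the in-tree
helper, e.g.:

```
scripts/kconfig/merge_config.sh -m .config kernel/configs/kunit.config
make olddefconfig
```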


>>> So should we have one that goes in with
>>> config-fragments and others that go into architectures? Another idea:
>>> it would be nice to have a KUnit config that runs all known tests
>>
>> This might also be a config option added to the tests directly like
>> COMPILE_TEST perhaps?
> 
> That just allows a bunch of drivers to be compiled, it does not
> actually go through and turn the configs on, right? I mean, there is
> no a priori way to know that there is a configuration which spans all
> possible options available under COMPILE_TEST, right? Maybe I
> misunderstand what you are suggesting...

Bah - you're right of course. I was mis-remembering the functionality of
COMPILE_TEST as if it were some sort of 'select', but it's just an enable.

Sorry for the confusion.
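
(For anyone following along: COMPILE_TEST usually appears as a
dependency widener, so it only makes options *available*. Something
like the following, where FOO_DRIVER and ARCH_BAR are made up for
illustration:

```
config FOO_DRIVER
	tristate "Foo driver"
	depends on ARCH_BAR || COMPILE_TEST
```

Nothing gets turned on unless the user or a config fragment enables
it.)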



>> (Not sure what that would be called though ... KUNIT_RUNTIME_TEST?)
>>
>> I think that might be more maintainable as otherwise each new test would
>> have to modify the {min,def}{config,fragment} ...
>>
> 
> Looking at kselftest-merge, they just start out with a set of
> fragments in which the union should contain all tests and then merge
> it with a base .config (probably intended to be $(ARCH)_defconfig).
> However, I don't know if that is the state of the art.
> 
>>
>>> (this probably won't work in practice once we start testing mutually
>>> exclusive things or things with lots of ifdeffery, but it is probably
>>> something we should try to maintain as best we can?); this probably
>>> shouldn't go in with the fragments, right?
>>
>> Sounds like we agree there :)
> 
> Totally. Long term we will need something a lot more sophisticated
> than anything under discussion here. I was talking about this with
> Luis on another thread:
> https://groups.google.com/forum/#!topic/kunit-dev/EQ1x0SzrUus (feel
> free to chime in!). Nevertheless, that's a really hard problem and I
> figure some variant of defconfigs and config fragments will work well
> enough until we reach that point.
> 
>>
>>>
>>> I will be sending another revision out soon, but I figured I might be
>>> able to catch you before I did so.
>>
>> Thanks for thinking of me.
> 
> How can I forget? You have been super helpful!
> 
>> I hope I managed to reply in time to help and not hinder your progress.
> 
> Yep, no trouble at all. You are the one helping me :-)
> 
> Thanks!
> 

-- 
Regards
--
Kieran

^ permalink raw reply	[flat|nested] 232+ messages in thread

* [RFC v3 14/19] Documentation: kunit: add documentation for KUnit
  2019-02-13 21:55                 ` kieran.bingham
  2019-02-13 21:55                   ` Kieran Bingham
@ 2019-02-14  0:17                   ` brendanhiggins
  2019-02-14  0:17                     ` Brendan Higgins
  2019-02-14 17:26                     ` mcgrof
  1 sibling, 2 replies; 232+ messages in thread
From: brendanhiggins @ 2019-02-14  0:17 UTC (permalink / raw)


On Wed, Feb 13, 2019 at 1:55 PM Kieran Bingham
<kieran.bingham at ideasonboard.com> wrote:
>
> Hi Brendan,
>
> On 12/02/2019 22:10, Brendan Higgins wrote:
> > On Mon, Feb 11, 2019 at 4:16 AM Kieran Bingham
> > <kieran.bingham at ideasonboard.com> wrote:
> >>
> >> Hi Brendan,
> >>
> >> On 09/02/2019 00:56, Brendan Higgins wrote:
> >>> On Thu, Dec 6, 2018 at 4:16 AM Kieran Bingham
> >>> <kieran.bingham at ideasonboard.com> wrote:
> >>>>
> >>>> Hi Brendan,
> >>>>
> >>>> On 03/12/2018 23:53, Brendan Higgins wrote:
> >>>>> On Thu, Nov 29, 2018 at 7:45 PM Luis Chamberlain <mcgrof at kernel.org> wrote:
> >>>>>>
> >>>>>> On Thu, Nov 29, 2018 at 01:56:37PM +0000, Kieran Bingham wrote:
> >>>>>>> Hi Brendan,
> >>>>>>>
> >>>>>>> Please excuse the top posting, but I'm replying here as I'm following
> >>>>>>> the section "Creating a kunitconfig" in Documentation/kunit/start.rst.
> >>>>>>>
> >>>>>>> Could the three line kunitconfig file live under say
> >>>>>>>        arch/um/configs/kunit_defconfig?
> >>>>
> >>>>
> >>>> Further consideration to this topic - I mentioned putting it in
> >>>>   arch/um/configs
> >>>>
> >>>> - but I think this is wrong.
> >>>>
> >>>> We now have a location for config-fragments, which is essentially what
> >>>> this is, under kernel/configs
> >>>>
> >>>>  So perhaps an addition as:
> >>>>
> >>>>  kernel/configs/kunit.config
> >>>>
> >>>> Would be more appropriate - and less (UM) architecture specific.
> >>>
> >>> Sorry for the long radio silence.
> >>>
> >>> I just got around to doing this and I found that there are some
> >>> configs that are desirable to have when running KUnit under x86 in a
> >>> VM, but not UML.
> >>
> >> Should this behaviour you mention be handled by the KCONFIG depends flags?
> >>
> >> depends on (KUNIT & UML)
> >> or
> >> depends on (KUNIT & !UML)
> >>
> >> or such?
> >
> > Not really. Anything that is strictly necessary to run KUnit on an
> > architecture should of course be turned on as a dependency like you
> > suggest, but I am talking about stuff that you would probably want in
> > order to get yourself going, but that is by no means necessary.
> >
> >>
> >> An example of which configs you are referring to would help to
> >> understand the issue perhaps.
> >>
> >
> > For example, you might want to enable a serial console that is known
> > to work with a fairly generic qemu setup when building for x86:
> > CONFIG_SERIAL_8250=y
> > CONFIG_SERIAL_8250_CONSOLE=y
> >
> > Obviously not a dependency, and not even particularly useful to people
> > who know what they are doing, but someone who is new or just wants
> > something to work out of the box would probably want it.
>
> It sounds like that would be a config fragment for qemu?
>
> Although - perhaps this is already covered by the following fragment:
>    kernel/configs/kvm_guest.config
>

Oh, yep, you are right. Does that mean we should bother at all with a defconfig?

Luis, I know you said you wanted one. I am thinking we just stick with
the UML one. The downside there is that we then get stuck maintaining
both the fragment and the defconfig. Right now (in the new revision I
am working on) I have the Python kunit_tool copy the defconfig if no
kunitconfig is provided and a flag is set. It would be pretty
straightforward to make it merge in the fragment instead.

All that being said, I think I am going to drop the arch/x86
defconfig, since I think we all agree that it is not very useful, but
keep the UML defconfig and the fragment. That will at least give us
something concrete to discuss.

>
> >>> So should we have one that goes in with
> >>> config-fragments and others that go into architectures? Another idea:
> >>> it would be nice to have a KUnit config that runs all known tests
> >>
> >> This might also be a config option added to the tests directly like
> >> COMPILE_TEST perhaps?
> >
> > That just allows a bunch of drivers to be compiled, it does not
> > actually go through and turn the configs on, right? I mean, there is
> > no a priori way to know that there is a configuration which spans all
> > possible options available under COMPILE_TEST, right? Maybe I
> > misunderstand what you are suggesting...
>
> Bah - you're right of course. I was mis-remembering the functionality of
> COMPILE_TEST as if it were some sort of 'select', but it's just an enable.
>
> Sorry for the confusion.
>

No problem, I thought for a second that it was a good example too (and
I wish it were; it would make my life so much easier!). I remember
getting emails with a COMPILE_TEST config attached that demonstrated
an invalid build caused by my changes; presumably that email bot just
tries random configs with a new change until it finds one that breaks.

>
> >> (Not sure what that would be called though ... KUNIT_RUNTIME_TEST?)
> >>
> >> I think that might be more maintainable as otherwise each new test would
> >> have to modify the {min,def}{config,fragment} ...
> >>
> >
> > Looking at kselftest-merge, they just start out with a set of
> > fragments in which the union should contain all tests and then merge
> > it with a base .config (probably intended to be $(ARCH)_defconfig).
> > However, I don't know if that is the state of the art.
> >
> >>
> >>> (this probably won't work in practice once we start testing mutually
> >>> exclusive things or things with lots of ifdeffery, but it is probably
> >>> something we should try to maintain as best we can?); this probably
> >>> shouldn't go in with the fragments, right?
> >>
> >> Sounds like we agree there :)
> >
> > Totally. Long term we will need something a lot more sophisticated
> > than anything under discussion here. I was talking about this with
> > Luis on another thread:
> > https://groups.google.com/forum/#!topic/kunit-dev/EQ1x0SzrUus (feel
> > free to chime in!). Nevertheless, that's a really hard problem and I
> > figure some variant of defconfigs and config fragments will work well
> > enough until we reach that point.
> >
> >>
> >>>
> >>> I will be sending another revision out soon, but I figured I might be
> >>> able to catch you before I did so.
> >>
> >> Thanks for thinking of me.
> >
> > How can I forget? You have been super helpful!
> >
> >> I hope I managed to reply in time to help and not hinder your progress.
> >
> > Yep, no trouble at all. You are the one helping me :-)
> >
> > Thanks!
> >
>
> --
> Regards
> --
> Kieran

^ permalink raw reply	[flat|nested] 232+ messages in thread

* [RFC v3 14/19] Documentation: kunit: add documentation for KUnit
  2019-02-14  0:17                   ` brendanhiggins
  2019-02-14  0:17                     ` Brendan Higgins
@ 2019-02-14 17:26                     ` mcgrof
  2019-02-14 17:26                       ` Luis Chamberlain
  2019-02-14 22:07                       ` brendanhiggins
  1 sibling, 2 replies; 232+ messages in thread
From: mcgrof @ 2019-02-14 17:26 UTC (permalink / raw)


On Wed, Feb 13, 2019 at 04:17:13PM -0800, Brendan Higgins wrote:
> On Wed, Feb 13, 2019 at 1:55 PM Kieran Bingham
> <kieran.bingham at ideasonboard.com> wrote:
> Oh, yep, you are right. Does that mean we should bother at all with a defconfig?

If one wanted a qemu-enabled kernel together with KUnit, one
could simply run:

make kvmconfig
make kunitconfig

That would get the default "bells and whistles" you suggest above
and keep the KUnit config as a fragment.

Hm, actually kvmconfig doesn't really enable the options required for
qemu, so perhaps a qemu fragment would be good. It would have the
serial stuff, for instance.
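
That is, something like the following, where kunitconfig is the
hypothetical target under discussion (kvmconfig already exists on x86):

```
make defconfig
make kvmconfig      # merges the KVM guest options
make kunitconfig    # would merge the KUnit fragment on top
```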

> Luis, I know you said you wanted one. I am thinking just stick with
> the UML one? The downside there is we then get stuck having to
> maintain the fragment and the defconfig. I right now (in the new
> revision I am working on) have the Python kunit_tool copy the
> defconfig if no kunitconfig is provided and a flag is set. It would be
> pretty straightforward to make it merge in the fragment instead.

Up to you in the end.

  Luis

^ permalink raw reply	[flat|nested] 232+ messages in thread

* [RFC v3 17/19] of: unittest: migrate tests to run on KUnit
  2019-02-13  1:44     ` brendanhiggins
  2019-02-13  1:44       ` Brendan Higgins
@ 2019-02-14 20:10       ` robh
  2019-02-14 20:10         ` Rob Herring
  2019-02-14 21:52         ` brendanhiggins
  2019-02-18 22:56       ` frowand.list
  2 siblings, 2 replies; 232+ messages in thread
From: robh @ 2019-02-14 20:10 UTC (permalink / raw)


On Tue, Feb 12, 2019 at 7:44 PM Brendan Higgins
<brendanhiggins at google.com> wrote:
>
> On Wed, Nov 28, 2018 at 12:56 PM Rob Herring <robh at kernel.org> wrote:
> >
> > On Wed, Nov 28, 2018 at 1:38 PM Brendan Higgins
> > <brendanhiggins at google.com> wrote:
> > >
> > > Migrate tests without any cleanup, or modifying test logic in any way, to
> > > run under KUnit using the KUnit expectation and assertion API.
> >
> > Nice! You beat me to it. This is probably going to conflict with what
> > is in the DT tree for 4.21. Also, please Cc the DT list for
> > drivers/of/ changes.
> >
> > Looks good to me, but a few mostly formatting comments below.
>
> I just realized that we never talked about your other comments, and I
> still have some questions. (Sorry, it was the last thing I looked at
> while getting v4 ready.) No worries if you don't get to it before I
> send v4 out, I just didn't want you to think I was ignoring you.
>
> >
> > >
> > > Signed-off-by: Brendan Higgins <brendanhiggins at google.com>
> > > ---
> > >  drivers/of/Kconfig    |    1 +
> > >  drivers/of/unittest.c | 1405 ++++++++++++++++++++++-------------------
> > >  2 files changed, 752 insertions(+), 654 deletions(-)
> > >
> <snip>
> > > diff --git a/drivers/of/unittest.c b/drivers/of/unittest.c
> > > index 41b49716ac75f..a5ef44730ffdb 100644
> > > --- a/drivers/of/unittest.c
> > > +++ b/drivers/of/unittest.c
> <snip>
> > > -
> > > -static void __init of_unittest_find_node_by_name(void)
> > > +static void of_unittest_find_node_by_name(struct kunit *test)
> >
> > Why do we have to drop __init everywhere? The tests run later?
>
> From the standpoint of a unit test __init doesn't really make any
> sense, right? I know that right now we are running as part of a
> kernel, but the goal should be that a unit test is not part of a
> kernel and we just include what we need.

Well, the test only runs during boot, and it is better to free the
space when done with it. There was some desire to make it a kernel
module, and then we'd also need to get rid of __init too.
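
(To spell out the trade-off: __init code lives in the .init sections,
which the kernel frees once boot completes, so an __init-marked test
costs memory only during boot. A purely illustrative sketch, with a
made-up function name:

```c
/* Sketch: this function and its code are discarded after boot. */
static int __init example_unittest_init(void)
{
	pr_info("runs once during boot; freed afterwards\n");
	return 0;
}
late_initcall(example_unittest_init);
```

Dropping __init means the test code stays resident for the life of the
kernel.)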

> Even so, that's the future. For now, I did not put the KUnit
> infrastructure in the .init section because I didn't think it belonged
> there. In practice, KUnit only knows how to run during the init phase
> of the kernel, but I don't think it should be restricted there. You
> should be able to run tests whenever you want because you should be
> able to test anything right? I figured any restriction on that is
> misleading and will potentially get in the way at worst, and
> unnecessary at best especially since people shouldn't build a
> production kernel with all kinds of unit tests inside.

More folks will run things if they can be enabled on production
kernels. If size is the only issue, modules mitigate that. However,
there are probably APIs to test that we don't want to export to
modules.

I think in general, we change things in the kernel when needed, not
for something in the future. Changing __init is simple enough to do
later.

OTOH, things get copied, and maybe this is something we don't want
copied, so we can remove it if you want to.

> <snip>
> > >
> > > -static void __init of_unittest_property_string(void)
> > > +static void of_unittest_property_string(struct kunit *test)
> > >  {
> > >         const char *strings[4];
> > >         struct device_node *np;
> > >         int rc;
> > >
> > >         np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
> > > -       if (!np) {
> > > -               pr_err("No testcase data in device tree\n");
> > > -               return;
> > > -       }
> > > -
> > > -       rc = of_property_match_string(np, "phandle-list-names", "first");
> > > -       unittest(rc == 0, "first expected:0 got:%i\n", rc);
> > > -       rc = of_property_match_string(np, "phandle-list-names", "second");
> > > -       unittest(rc == 1, "second expected:1 got:%i\n", rc);
> > > -       rc = of_property_match_string(np, "phandle-list-names", "third");
> > > -       unittest(rc == 2, "third expected:2 got:%i\n", rc);
> > > -       rc = of_property_match_string(np, "phandle-list-names", "fourth");
> > > -       unittest(rc == -ENODATA, "unmatched string; rc=%i\n", rc);
> > > -       rc = of_property_match_string(np, "missing-property", "blah");
> > > -       unittest(rc == -EINVAL, "missing property; rc=%i\n", rc);
> > > -       rc = of_property_match_string(np, "empty-property", "blah");
> > > -       unittest(rc == -ENODATA, "empty property; rc=%i\n", rc);
> > > -       rc = of_property_match_string(np, "unterminated-string", "blah");
> > > -       unittest(rc == -EILSEQ, "unterminated string; rc=%i\n", rc);
> > > +       KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> > > +
> > > +       KUNIT_EXPECT_EQ(test,
> > > +                       of_property_match_string(np,
> > > +                                                "phandle-list-names",
> > > +                                                "first"),
> > > +                       0);
> > > +       KUNIT_EXPECT_EQ(test,
> > > +                       of_property_match_string(np,
> > > +                                                "phandle-list-names",
> > > +                                                "second"),
> > > +                       1);
> >
> > Fewer lines on these would be better even if we go over 80 chars.
>
> On the of_property_match_string(...), I have no opinion. I will do
> whatever you like best.
>
> Nevertheless, as far as the KUNIT_EXPECT_*(...), I do have an opinion: I am
> trying to establish a good, readable convention. Given an expect statement
> structured as
> ```
> KUNIT_EXPECT_*(
>     test,
>     expect_arg_0, ..., expect_arg_n,
>     fmt_str, fmt_arg_0, ..., fmt_arg_n)
> ```
> where `test` is the `struct kunit` context argument, `expect_arg_{0, ..., n}`
> are the arguments the expectation is being made about (so in the above example,
> `of_property_match_string(...)` and `1`), and `fmt_*` is the optional format
> string that comes at the end of some expectations.
>
> The pattern I had been trying to promote is the following:
>
> 1) If everything fits on 1 line, do that.
> 2) If you must make a line split, prefer to keep `test` on its own line,
> `expect_arg_{0, ..., n}` should be kept together, if possible, and the format
> string should follow the conventions already most commonly used with format
> strings.
> 3) If you must split up `expect_arg_{0, ..., n}` each argument should get its
> own line and should not share a line with either `test` or any `fmt_*`.

You'd better write a checkpatch.pl check or else good luck enforcing that. :)

> The reason I care about this so much is because expectations should be
> extremely easy to read; they are the most important part of a unit
> test because they tell you what the test is verifying. I am not
> married to the formatting I proposed above, but I want something that
> makes it extremely easy to identify the arguments that the expectation
> is on. Maybe that means that I need to add some syntactic fluff to
> make it clearer, I don't know, but this is definitely something we
> need to get right, especially in the earliest examples.

Makes sense. I think putting the test (of_property_match_string) on
one line furthers the readability.
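
For illustration, combining that preference with the convention above,
the earlier example might be formatted as (a sketch, not a new rule):

```c
KUNIT_EXPECT_EQ(test,
		of_property_match_string(np, "phandle-list-names", "first"),
		0);
```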

Rob

^ permalink raw reply	[flat|nested] 232+ messages in thread

* [RFC v3 17/19] of: unittest: migrate tests to run on KUnit
  2019-02-14 20:10       ` robh
  2019-02-14 20:10         ` Rob Herring
@ 2019-02-14 21:52         ` brendanhiggins
  2019-02-14 21:52           ` Brendan Higgins
  1 sibling, 1 reply; 232+ messages in thread
From: brendanhiggins @ 2019-02-14 21:52 UTC (permalink / raw)


On Thu, Feb 14, 2019 at 12:10 PM Rob Herring <robh at kernel.org> wrote:
>
> On Tue, Feb 12, 2019 at 7:44 PM Brendan Higgins
> <brendanhiggins at google.com> wrote:
> >
> > On Wed, Nov 28, 2018 at 12:56 PM Rob Herring <robh at kernel.org> wrote:
> > >
> > > On Wed, Nov 28, 2018 at 1:38 PM Brendan Higgins
> > > <brendanhiggins at google.com> wrote:
> > > >
> > > > Migrate tests without any cleanup, or modifying test logic in any way, to
> > > > run under KUnit using the KUnit expectation and assertion API.
> > >
> > > Nice! You beat me to it. This is probably going to conflict with what
> > > is in the DT tree for 4.21. Also, please Cc the DT list for
> > > drivers/of/ changes.
> > >
> > > Looks good to me, but a few mostly formatting comments below.
> >
> > I just realized that we never talked about your other comments, and I
> > still have some questions. (Sorry, it was the last thing I looked at
> > while getting v4 ready.) No worries if you don't get to it before I
> > send v4 out, I just didn't want you to think I was ignoring you.
> >
> > >
> > > >
> > > > Signed-off-by: Brendan Higgins <brendanhiggins at google.com>
> > > > ---
> > > >  drivers/of/Kconfig    |    1 +
> > > >  drivers/of/unittest.c | 1405 ++++++++++++++++++++++-------------------
> > > >  2 files changed, 752 insertions(+), 654 deletions(-)
> > > >
> > <snip>
> > > > diff --git a/drivers/of/unittest.c b/drivers/of/unittest.c
> > > > index 41b49716ac75f..a5ef44730ffdb 100644
> > > > --- a/drivers/of/unittest.c
> > > > +++ b/drivers/of/unittest.c
> > <snip>
> > > > -
> > > > -static void __init of_unittest_find_node_by_name(void)
> > > > +static void of_unittest_find_node_by_name(struct kunit *test)
> > >
> > > Why do we have to drop __init everywhere? The tests run later?
> >
> > From the standpoint of a unit test __init doesn't really make any
> > sense, right? I know that right now we are running as part of a
> > kernel, but the goal should be that a unit test is not part of a
> > kernel and we just include what we need.
>
> Well, the test only runs during boot, and it is better to free the
> space when done with it. There was some desire to make it a kernel
> module, and then we'd also need to get rid of __init too.
>
> > Even so, that's the future. For now, I did not put the KUnit
> > infrastructure in the .init section because I didn't think it belonged
> > there. In practice, KUnit only knows how to run during the init phase
> > of the kernel, but I don't think it should be restricted there. You
> > should be able to run tests whenever you want because you should be
> > able to test anything, right? I figured any restriction on that is
> > misleading and will potentially get in the way at worst, and be
> > unnecessary at best, especially since people shouldn't build a
> > production kernel with all kinds of unit tests inside.
>
> More folks will run things if they can be enabled on production
> kernels. If size is the only issue, modules mitigate that. However,
> there are probably APIs to test that we don't want to export to
> modules.
>
> I think in general, we change things in the kernel when needed, not
> for something in the future. Changing __init is simple enough to do
> later.
>
> OTOH, things get copied, and maybe this is something we don't want
> copied, so we can remove it if you want to.

Mmmm...I just realized that the patch I sent you the other day makes
this patch unhappy because unflatten_device_tree is in the .init
section. So I will need to fix that. I still think that the correct
course of action is to make KUnit non-init. Luis pointed out in
another thread that to be 100% sure that everything will be properly
initialized, KUnit must be able to run after all initialization takes
place.

>
> > <snip>
> > > >
> > > > -static void __init of_unittest_property_string(void)
> > > > +static void of_unittest_property_string(struct kunit *test)
> > > >  {
> > > >         const char *strings[4];
> > > >         struct device_node *np;
> > > >         int rc;
> > > >
> > > >         np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
> > > > -       if (!np) {
> > > > -               pr_err("No testcase data in device tree\n");
> > > > -               return;
> > > > -       }
> > > > -
> > > > -       rc = of_property_match_string(np, "phandle-list-names", "first");
> > > > -       unittest(rc == 0, "first expected:0 got:%i\n", rc);
> > > > -       rc = of_property_match_string(np, "phandle-list-names", "second");
> > > > -       unittest(rc == 1, "second expected:1 got:%i\n", rc);
> > > > -       rc = of_property_match_string(np, "phandle-list-names", "third");
> > > > -       unittest(rc == 2, "third expected:2 got:%i\n", rc);
> > > > -       rc = of_property_match_string(np, "phandle-list-names", "fourth");
> > > > -       unittest(rc == -ENODATA, "unmatched string; rc=%i\n", rc);
> > > > -       rc = of_property_match_string(np, "missing-property", "blah");
> > > > -       unittest(rc == -EINVAL, "missing property; rc=%i\n", rc);
> > > > -       rc = of_property_match_string(np, "empty-property", "blah");
> > > > -       unittest(rc == -ENODATA, "empty property; rc=%i\n", rc);
> > > > -       rc = of_property_match_string(np, "unterminated-string", "blah");
> > > > -       unittest(rc == -EILSEQ, "unterminated string; rc=%i\n", rc);
> > > > +       KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> > > > +
> > > > +       KUNIT_EXPECT_EQ(test,
> > > > +                       of_property_match_string(np,
> > > > +                                                "phandle-list-names",
> > > > +                                                "first"),
> > > > +                       0);
> > > > +       KUNIT_EXPECT_EQ(test,
> > > > +                       of_property_match_string(np,
> > > > +                                                "phandle-list-names",
> > > > +                                                "second"),
> > > > +                       1);
> > >
> > > Fewer lines on these would be better even if we go over 80 chars.
> >
> > On the of_property_match_string(...), I have no opinion. I will do
> > whatever you like best.
> >
> > Nevertheless, as far as the KUNIT_EXPECT_*(...), I do have an opinion: I am
> > trying to establish a good, readable convention. Given an expect statement
> > structured as
> > ```
> > KUNIT_EXPECT_*(
> >     test,
> >     expect_arg_0, ..., expect_arg_n,
> >     fmt_str, fmt_arg_0, ..., fmt_arg_n)
> > ```
> > where `test` is the `struct kunit` context argument, `expect_arg_{0, ..., n}`
> > are the arguments the expectation is being made about (so in the above example,
> > `of_property_match_string(...)` and `1`), and `fmt_*` is the optional format
> > string that comes at the end of some expectations.
> >
> > The pattern I had been trying to promote is the following:
> >
> > 1) If everything fits on 1 line, do that.
> > 2) If you must make a line split, prefer to keep `test` on its own line,
> > `expect_arg_{0, ..., n}` should be kept together, if possible, and the format
> > string should follow the conventions already most commonly used with format
> > strings.
> > 3) If you must split up `expect_arg_{0, ..., n}` each argument should get its
> > own line and should not share a line with either `test` or any `fmt_*`.
>
> You'd better write a checkpatch.pl check or else good luck enforcing that. :)

Absolutely. Well I already had to touch checkpatch.pl for something
else, so at least I know roughly what I am getting myself into.

>
> > The reason I care about this so much is because expectations should be
> > extremely easy to read; they are the most important part of a unit
> > test because they tell you what the test is verifying. I am not
> > married to the formatting I proposed above, but I want something that
> > makes it extremely easy to identify the arguments that the expectation
> > is on. Maybe that means that I need to add some syntactic fluff to
> > make it clearer, I don't know, but this is definitely something we
> > need to get right, especially in the earliest examples.
>
> Makes sense. I think putting the test (of_property_match_string) on
> one line furthers the readability.

Fair enough, I tried to apply your comments the best that I could on
v4, but I think I will probably need to make another pass (especially
given the init thing).

Anyway, let's continue the discussion on v4.

Cheers

^ permalink raw reply	[flat|nested] 232+ messages in thread

* [RFC v3 17/19] of: unittest: migrate tests to run on KUnit
  2019-02-14 21:52         ` brendanhiggins
@ 2019-02-14 21:52           ` Brendan Higgins
  0 siblings, 0 replies; 232+ messages in thread
From: Brendan Higgins @ 2019-02-14 21:52 UTC (permalink / raw)


On Thu, Feb 14, 2019@12:10 PM Rob Herring <robh@kernel.org> wrote:
>
> On Tue, Feb 12, 2019 at 7:44 PM Brendan Higgins
> <brendanhiggins@google.com> wrote:
> >
> > On Wed, Nov 28, 2018@12:56 PM Rob Herring <robh@kernel.org> wrote:
> > >
> > > On Wed, Nov 28, 2018 at 1:38 PM Brendan Higgins
> > > <brendanhiggins@google.com> wrote:
> > > >
> > > > Migrate tests without any cleanup, or modifying test logic in anyway to
> > > > run under KUnit using the KUnit expectation and assertion API.
> > >
> > > Nice! You beat me to it. This is probably going to conflict with what
> > > is in the DT tree for 4.21. Also, please Cc the DT list for
> > > drivers/of/ changes.
> > >
> > > Looks good to me, but a few mostly formatting comments below.
> >
> > I just realized that we never talked about your other comments, and I
> > still have some questions. (Sorry, it was the last thing I looked at
> > while getting v4 ready.) No worries if you don't get to it before I
> > send v4 out, I just didn't want you to think I was ignoring you.
> >
> > >
> > > >
> > > > Signed-off-by: Brendan Higgins <brendanhiggins at google.com>
> > > > ---
> > > >  drivers/of/Kconfig    |    1 +
> > > >  drivers/of/unittest.c | 1405 ++++++++++++++++++++++-------------------
> > > >  2 files changed, 752 insertions(+), 654 deletions(-)
> > > >
> > <snip>
> > > > diff --git a/drivers/of/unittest.c b/drivers/of/unittest.c
> > > > index 41b49716ac75f..a5ef44730ffdb 100644
> > > > --- a/drivers/of/unittest.c
> > > > +++ b/drivers/of/unittest.c
> > <snip>
> > > > -
> > > > -static void __init of_unittest_find_node_by_name(void)
> > > > +static void of_unittest_find_node_by_name(struct kunit *test)
> > >
> > > Why do we have to drop __init everywhere? The tests run later?
> >
> > From the standpoint of a unit test __init doesn't really make any
> > sense, right? I know that right now we are running as part of a
> > kernel, but the goal should be that a unit test is not part of a
> > kernel and we just include what we need.
>
> Well, the test only runs during boot and better to free the space when
> done with it. There was some desire to make it a kernel module and
> then we'd also need to get rid of __init too.
>
> > Even so, that's the future. For now, I did not put the KUnit
> > infrastructure in the .init section because I didn't think it belonged
> > there. In practice, KUnit only knows how to run during the init phase
> > of the kernel, but I don't think it should be restricted there. You
> > should be able to run tests whenever you want because you should be
> > able to test anything right? I figured any restriction on that is
> > misleading and will potentially get in the way at worst, and
> > unnecessary at best especially since people shouldn't build a
> > production kernel with all kinds of unit tests inside.
>
> More folks will run things if they can be enabled on production
> kernels. If size is the only issue, modules mitigate that. However,
> there's probably APIs to test which we don't want to export to
> modules.
>
> I think in general, we change things in the kernel when needed, not
> for something in the future. Changing __init is simple enough to do
> later.
>
> OTOH, things get copied and maybe this we don't want copied, so we can
> remove it if you want to.

Mmmm...I just realized that the patch I sent you the other day makes
this patch unhappy because unflatten_device_tree is in the .init
section. So I will need to fix that. I still think that the correct
course of action is to make KUnit non init. Luis pointed out in
another thread that to be 100% sure that everything will be properly
initialized, KUnit must be able to run after all initialization takes
place.

>
> > <snip>
> > > >
> > > > -static void __init of_unittest_property_string(void)
> > > > +static void of_unittest_property_string(struct kunit *test)
> > > >  {
> > > >         const char *strings[4];
> > > >         struct device_node *np;
> > > >         int rc;
> > > >
> > > >         np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
> > > > -       if (!np) {
> > > > -               pr_err("No testcase data in device tree\n");
> > > > -               return;
> > > > -       }
> > > > -
> > > > -       rc = of_property_match_string(np, "phandle-list-names", "first");
> > > > -       unittest(rc == 0, "first expected:0 got:%i\n", rc);
> > > > -       rc = of_property_match_string(np, "phandle-list-names", "second");
> > > > -       unittest(rc == 1, "second expected:1 got:%i\n", rc);
> > > > -       rc = of_property_match_string(np, "phandle-list-names", "third");
> > > > -       unittest(rc == 2, "third expected:2 got:%i\n", rc);
> > > > -       rc = of_property_match_string(np, "phandle-list-names", "fourth");
> > > > -       unittest(rc == -ENODATA, "unmatched string; rc=%i\n", rc);
> > > > -       rc = of_property_match_string(np, "missing-property", "blah");
> > > > -       unittest(rc == -EINVAL, "missing property; rc=%i\n", rc);
> > > > -       rc = of_property_match_string(np, "empty-property", "blah");
> > > > -       unittest(rc == -ENODATA, "empty property; rc=%i\n", rc);
> > > > -       rc = of_property_match_string(np, "unterminated-string", "blah");
> > > > -       unittest(rc == -EILSEQ, "unterminated string; rc=%i\n", rc);
> > > > +       KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> > > > +
> > > > +       KUNIT_EXPECT_EQ(test,
> > > > +                       of_property_match_string(np,
> > > > +                                                "phandle-list-names",
> > > > +                                                "first"),
> > > > +                       0);
> > > > +       KUNIT_EXPECT_EQ(test,
> > > > +                       of_property_match_string(np,
> > > > +                                                "phandle-list-names",
> > > > +                                                "second"),
> > > > +                       1);
> > >
> > > Fewer lines on these would be better even if we go over 80 chars.
> >
> > On the of_property_match_string(...), I have no opinion. I will do
> > whatever you like best.
> >
> > Nevertheless, as far as the KUNIT_EXPECT_*(...), I do have an opinion: I am
> > trying to establish a good, readable convention. Given an expect statement
> > structured as
> > ```
> > KUNIT_EXPECT_*(
> >     test,
> >     expect_arg_0, ..., expect_arg_n,
> >     fmt_str, fmt_arg_0, ..., fmt_arg_n)
> > ```
> > where `test` is the `struct kunit` context argument, `expect_arg_{0, ..., n}`
> > are the arguments the expectations is being made about (so in the above example,
> > `of_property_match_string(...)` and `1`), and `fmt_*` is the optional format
> > string that comes at the end of some expectations.
> >
> > The pattern I had been trying to promote is the following:
> >
> > 1) If everything fits on 1 line, do that.
> > 2) If you must make a line split, prefer to keep `test` on its own line,
> > `expect_arg_{0, ..., n}` should be kept together, if possible, and the format
> > string should follow the conventions already most commonly used with format
> > strings.
> > 3) If you must split up `expect_arg_{0, ..., n}`, each argument should get its
> > own line and should not share a line with either `test` or any `fmt_*`.
>
> You'd better write a checkpatch.pl check or else good luck enforcing that. :)

Absolutely. Well, I already had to touch checkpatch.pl for something
else, so at least I know roughly what I am getting myself into.
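
To make the three rules above concrete, here is a quick sketch of how
I picture each case (the values and the format string in rule 3 are
made up):

```
/* 1) Everything fits on one line: */
KUNIT_EXPECT_EQ(test, rc, 0);

/* 2) A split is needed: `test` on its own line, args kept together: */
KUNIT_EXPECT_EQ(test,
		of_property_match_string(np, "phandle-list-names", "first"),
		0);

/*
 * 3) The args themselves must be split: one line per argument, none
 *    sharing a line with `test` or the format string:
 */
KUNIT_EXPECT_EQ(test,
		of_property_match_string(np,
					 "phandle-list-names",
					 "second"),
		1,
		"phandle-list-names[1] should match \"second\"");
```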

>
> > The reason I care about this so much is that expectations should be
> > extremely easy to read; they are the most important part of a unit
> > test because they tell you what the test is verifying. I am not
> > married to the formatting I proposed above, but I want something that
> > makes it extremely easy to identify the arguments that the expectation
> > is on. Maybe that means that I need to add some syntactic fluff to
> > make it clearer, I don't know, but this is definitely something we
> > need to get right, especially in the earliest examples.
>
> Makes sense. I think putting the test (of_property_match_string) on
> one line furthers the readability.

Fair enough, I tried to apply your comments as best I could in v4,
but I think I will probably need to make another pass (especially
given the init issue).

Anyway, let's continue the discussion on v4.

Cheers

^ permalink raw reply	[flat|nested] 232+ messages in thread

* [RFC v3 14/19] Documentation: kunit: add documentation for KUnit
  2019-02-14 17:26                     ` mcgrof
  2019-02-14 17:26                       ` Luis Chamberlain
@ 2019-02-14 22:07                       ` brendanhiggins
  2019-02-14 22:07                         ` Brendan Higgins
  1 sibling, 1 reply; 232+ messages in thread
From: brendanhiggins @ 2019-02-14 22:07 UTC (permalink / raw)


On Thu, Feb 14, 2019 at 9:26 AM Luis Chamberlain <mcgrof at kernel.org> wrote:
>
> On Wed, Feb 13, 2019 at 04:17:13PM -0800, Brendan Higgins wrote:
> > On Wed, Feb 13, 2019 at 1:55 PM Kieran Bingham
> > <kieran.bingham at ideasonboard.com> wrote:
> > Oh, yep, you are right. Does that mean we should bother at all with a defconfig?
>
> If one wanted a qemu-enabled kernel and also KUnit, one could
> simply run:
>
> make kvmconfig
> make kunitconfig
>
> That would give you the default "bells and whistles" you suggest
> above and keep the kunit config as a fragment.
>
> Hm, actually the kvmconfig target doesn't really enable the required
> fragments for qemu, so perhaps a defconfig would be good. It would
> have the serial stuff, for instance.
>
> > Luis, I know you said you wanted one. I am thinking just stick with
> > the UML one? The downside there is we then get stuck having to
> > maintain the fragment and the defconfig. Right now (in the new
> > revision I am working on) I have the Python kunit_tool copy the
> > defconfig if no kunitconfig is provided and a flag is set. It would be
> > pretty straightforward to make it merge in the fragment instead.
>
> Up to you in the end.

I don't really have any opinions on the matter; I don't really use
defconfigs in any of my workflows. So, I just want whatever is easier
for people. The thing that makes the most sense to me would be to
provide a "merge-kunitconfig" option similar to what kselftest does,
but I don't intend to do that in the initial patchset, unless
someone really thinks that I should do it. So in the meantime, I guess
provide both since that gives people options?

In any case, I just (finally) sent out v4, so I suggest we continue the
discussion over there.

Cheers

^ permalink raw reply	[flat|nested] 232+ messages in thread

* [RFC v3 18/19] of: unittest: split out a couple of test cases from unittest
  2018-12-05 23:54     ` brendanhiggins
  2018-12-05 23:54       ` Brendan Higgins
@ 2019-02-14 23:57       ` frowand.list
  2019-02-14 23:57         ` Frank Rowand
  2019-02-15  0:56         ` brendanhiggins
  1 sibling, 2 replies; 232+ messages in thread
From: frowand.list @ 2019-02-14 23:57 UTC (permalink / raw)


On 12/5/18 3:54 PM, Brendan Higgins wrote:
> On Tue, Dec 4, 2018 at 2:58 AM Frank Rowand <frowand.list at gmail.com> wrote:
>>
>> Hi Brendan,
>>
>> On 11/28/18 11:36 AM, Brendan Higgins wrote:
>>> Split out a couple of test cases that test these features in base.c from the
>>> unittest.c monolith. The intention is that we will eventually split out
>>> all test cases and group them together based on what portion of device
>>> tree they test.
>>
>> Why does splitting this file apart improve the implementation?
> 
> This is in preparation for patch 19/19 and other hypothetical future
> patches where test cases are split up and grouped together by what
> portion of DT they test (for example the parsing tests and the
> platform/device tests would probably go separate files as well). This
> patch by itself does not do anything useful, but I figured it made
> patch 19/19 (and, if you like what I am doing, subsequent patches)
> easier to review.

I do not see any value in splitting the devicetree tests into
multiple files.

Please help me understand what the benefits of such a split are.

Thanks,

Frank

^ permalink raw reply	[flat|nested] 232+ messages in thread

* [RFC v3 18/19] of: unittest: split out a couple of test cases from unittest
  2019-02-14 23:57       ` frowand.list
  2019-02-14 23:57         ` Frank Rowand
@ 2019-02-15  0:56         ` brendanhiggins
  2019-02-15  0:56           ` Brendan Higgins
  2019-02-15  2:05           ` frowand.list
  1 sibling, 2 replies; 232+ messages in thread
From: brendanhiggins @ 2019-02-15  0:56 UTC (permalink / raw)


On Thu, Feb 14, 2019 at 3:57 PM Frank Rowand <frowand.list at gmail.com> wrote:
>
> On 12/5/18 3:54 PM, Brendan Higgins wrote:
> > On Tue, Dec 4, 2018 at 2:58 AM Frank Rowand <frowand.list at gmail.com> wrote:
> >>
> >> Hi Brendan,
> >>
> >> On 11/28/18 11:36 AM, Brendan Higgins wrote:
> >>> Split out a couple of test cases that test these features in base.c from the
> >>> unittest.c monolith. The intention is that we will eventually split out
> >>> all test cases and group them together based on what portion of device
> >>> tree they test.
> >>
> >> Why does splitting this file apart improve the implementation?
> >
> > This is in preparation for patch 19/19 and other hypothetical future
> > patches where test cases are split up and grouped together by what
> > portion of DT they test (for example the parsing tests and the
> > platform/device tests would probably go separate files as well). This
> > patch by itself does not do anything useful, but I figured it made
> > patch 19/19 (and, if you like what I am doing, subsequent patches)
> > easier to review.
>
> I do not see any value in splitting the devicetree tests into
> multiple files.
>
> Please help me understand what the benefits of such a split are.

Sorry, I thought it made sense in context of what I am doing in the
following patch. All I am trying to do is to provide an effective way
of grouping test cases. To be clear, the idea, assuming you agree, is
that we would follow up with several other patches like this one and
the subsequent patch, one which would pull out a couple test
functions, as I have done here, and another that splits those
functions up into a bunch of proper test cases.

I thought that having that many unrelated test cases in a single file
would just be a pain to sort through, deal with, review, whatever.

This is not something I feel particularly strongly about, it is just
pretty atypical from my experience to have so many unrelated test
cases in a single file.

Maybe you would prefer that I break up the test cases first, and then
we split up the file as appropriate?

I just assumed that we would agree it would be way too much stuff for
a single file, so I went ahead and broke it up first, because I
thought it would make it easier to review in that order rather than
the other way around.

Cheers

^ permalink raw reply	[flat|nested] 232+ messages in thread

* [RFC v3 18/19] of: unittest: split out a couple of test cases from unittest
  2019-02-15  0:56         ` brendanhiggins
  2019-02-15  0:56           ` Brendan Higgins
@ 2019-02-15  2:05           ` frowand.list
  2019-02-15  2:05             ` Frank Rowand
  2019-02-15 10:56             ` brendanhiggins
  1 sibling, 2 replies; 232+ messages in thread
From: frowand.list @ 2019-02-15  2:05 UTC (permalink / raw)


On 2/14/19 4:56 PM, Brendan Higgins wrote:
> On Thu, Feb 14, 2019 at 3:57 PM Frank Rowand <frowand.list at gmail.com> wrote:
>>
>> On 12/5/18 3:54 PM, Brendan Higgins wrote:
>>> On Tue, Dec 4, 2018 at 2:58 AM Frank Rowand <frowand.list at gmail.com> wrote:
>>>>
>>>> Hi Brendan,
>>>>
>>>> On 11/28/18 11:36 AM, Brendan Higgins wrote:
>>>>> Split out a couple of test cases that test these features in base.c from the
>>>>> unittest.c monolith. The intention is that we will eventually split out
>>>>> all test cases and group them together based on what portion of device
>>>>> tree they test.
>>>>
>>>> Why does splitting this file apart improve the implementation?
>>>
>>> This is in preparation for patch 19/19 and other hypothetical future
>>> patches where test cases are split up and grouped together by what
>>> portion of DT they test (for example the parsing tests and the
>>> platform/device tests would probably go separate files as well). This
>>> patch by itself does not do anything useful, but I figured it made
>>> patch 19/19 (and, if you like what I am doing, subsequent patches)
>>> easier to review.
>>
>> I do not see any value in splitting the devicetree tests into
>> multiple files.
>>
>> Please help me understand what the benefits of such a split are.

Note that my following comments are specific to the current devicetree
unittests, and may not apply to the general case of unit tests in other
subsystems.


> Sorry, I thought it made sense in context of what I am doing in the
> following patch. All I am trying to do is to provide an effective way
> of grouping test cases. To be clear, the idea, assuming you agree, is

Looking at _just_ the first few fragments of the following patch, the
change is to break down a moderate size function of related tests,
of_unittest_find_node_by_name(), into a lot of extremely small functions.
Then to find the execution order of the many small functions requires
finding the array of_test_find_node_by_name_cases[].  Then I have to
chase off into the kunit test runner core, where I find that the set
of tests in of_test_find_node_by_name_cases[] is processed by a
late_initcall().  So now the order of the various test groupings,
declared via module_test(), is subject to the fragile ordering
of initcalls.
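
Roughly, the chain I had to follow looks like this (my paraphrase of
the patches, not the exact code):

```
/* The array that determines which test cases run: */
static struct kunit_case of_test_find_node_by_name_cases[] = {
	KUNIT_CASE(of_test_find_node_by_name),
	/* ... one entry per small function ... */
	{},
};

/* ... which is registered via module_test(), which in turn (roughly)
 * expands to a late_initcall():
 */
#define module_test(module) \
	static int module_kunit_init_##module(void) \
	{ \
		return kunit_run_tests(&module); \
	} \
	late_initcall(module_kunit_init_##module)
```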

There are ordering dependencies within the devicetree unittests.

I do not like breaking the test cases down into such small atoms.

I do not see any value __for devicetree unittests__ of having
such small atoms.

It makes it harder for me to read the source of the tests and
understand the order they will execute.  It also makes it harder
for me to read through the actual tests (in this example the
tests that are currently grouped in of_unittest_find_node_by_name())
because of all the extra function headers injected into the
existing single function to break it apart into many smaller
functions.

Breaking the tests into separate chunks, each chunk invoked
independently as the result of module_test() of each chunk,
loses the summary result for the devicetree unittests of
how many tests are run and how many passed.  This is the
only statistic that I need to determine whether the
unittests have detected a new fault caused by a specific
patch or commit.  I don't need to look at any individual
test result unless the overall result reports a failure.


> that we would follow up with several other patches like this one and
> the subsequent patch, one which would pull out a couple test
> functions, as I have done here, and another that splits those
> functions up into a bunch of proper test cases.
> 
> I thought that having that many unrelated test cases in a single file
> would just be a pain to sort through, deal with, review, whatever.

Having all the test cases in a single file makes it easier for me to
read, understand, modify, and maintain the tests.


> This is not something I feel particularly strongly about, it is just
> pretty atypical from my experience to have so many unrelated test
> cases in a single file.
> 
> Maybe you would prefer that I break up the test cases first, and then
> we split up the file as appropriate?

I prefer that the test cases not be broken up arbitrarily.  There _may_
be cases where the devicetree unittests are currently not well grouped
and may benefit from change, but if so that should be handled independently
of any transformation into a KUnit framework.


> I just assumed that we would agree it would be way too much stuff for
> a single file, so I went ahead and broke it up first, because I
> thought it would make it easier to review in that order rather than
> the other way around.

^ permalink raw reply	[flat|nested] 232+ messages in thread

* [RFC v3 18/19] of: unittest: split out a couple of test cases from unittest
  2019-02-15  2:05           ` frowand.list
  2019-02-15  2:05             ` Frank Rowand
@ 2019-02-15 10:56             ` brendanhiggins
  2019-02-15 10:56               ` Brendan Higgins
  2019-02-18 22:25               ` frowand.list
  1 sibling, 2 replies; 232+ messages in thread
From: brendanhiggins @ 2019-02-15 10:56 UTC (permalink / raw)


On Thu, Feb 14, 2019 at 6:05 PM Frank Rowand <frowand.list at gmail.com> wrote:
>
> On 2/14/19 4:56 PM, Brendan Higgins wrote:
> > On Thu, Feb 14, 2019 at 3:57 PM Frank Rowand <frowand.list at gmail.com> wrote:
> >>
> >> On 12/5/18 3:54 PM, Brendan Higgins wrote:
> >>> On Tue, Dec 4, 2018 at 2:58 AM Frank Rowand <frowand.list at gmail.com> wrote:
> >>>>
> >>>> Hi Brendan,
> >>>>
> >>>> On 11/28/18 11:36 AM, Brendan Higgins wrote:
> >>>>> Split out a couple of test cases that test these features in base.c from the
> >>>>> unittest.c monolith. The intention is that we will eventually split out
> >>>>> all test cases and group them together based on what portion of device
> >>>>> tree they test.
> >>>>
> >>>> Why does splitting this file apart improve the implementation?
> >>>
> >>> This is in preparation for patch 19/19 and other hypothetical future
> >>> patches where test cases are split up and grouped together by what
> >>> portion of DT they test (for example the parsing tests and the
> >>> platform/device tests would probably go separate files as well). This
> >>> patch by itself does not do anything useful, but I figured it made
> >>> patch 19/19 (and, if you like what I am doing, subsequent patches)
> >>> easier to review.
> >>
> >> I do not see any value in splitting the devicetree tests into
> >> multiple files.
> >>
> >> Please help me understand what the benefits of such a split are.
>
> Note that my following comments are specific to the current devicetree
> unittests, and may not apply to the general case of unit tests in other
> subsystems.
>
Note taken.
>
> > Sorry, I thought it made sense in context of what I am doing in the
> > following patch. All I am trying to do is to provide an effective way
> > of grouping test cases. To be clear, the idea, assuming you agree, is
>
> Looking at _just_ the first few fragments of the following patch, the
> change is to break down a moderate size function of related tests,
> of_unittest_find_node_by_name(), into a lot of extremely small functions.

Hmm...I wouldn't call that a moderate function. By my standards those
functions are pretty large. In any case, I want to limit the
discussion to specifically what a test case should look like, and the
general consensus outside of the kernel is that unit test cases should
be very, very small. The reason is that each test case is supposed to
test one specific property; it should be obvious what that property
is; and it should be obvious what is needed to exercise that property.
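
As a sketch of the granularity I mean (the case name is hypothetical,
but the macros and node path are the ones from this series):

```
/* One case, one property: the consumer-a node can be found by path. */
static void of_test_find_node_by_path_basic(struct kunit *test)
{
	struct device_node *np;

	np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	of_node_put(np);
}
```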

> Then to find the execution order of the many small functions requires
> finding the array of_test_find_node_by_name_cases[].  Then I have to

Execution order shouldn't matter. Each test case should be totally
hermetic. Obviously, in this case we depend on the preceding test case
to clean up properly, but that is something I am working on.

> chase off into the kunit test runner core, where I find that the set
> of tests in of_test_find_node_by_name_cases[] is processed by a
> late_initcall().  So now the order of the various test groupings,

That's fair. You are not the only one to complain about that. The
late_initcall is a hack which I plan on replacing shortly (and yes I
know that me planning on doing something doesn't mean much in this
discussion, but that's what I got); regardless, order shouldn't
matter.

> declared via module_test(), is subject to the fragile ordering
> of initcalls.
>
> There are ordering dependencies within the devicetree unittests.

There are ordering dependencies in the current devicetree unittests
today, but, if I may be so bold, that is something that I would like to fix.

>
> I do not like breaking the test cases down into such small atoms.
>
> I do not see any value __for devicetree unittests__ of having
> such small atoms.

I imagine it probably makes less sense in the context of a strict
dependency order, but that is something that I want to do away with.
Ideally, when you look at a test case you shouldn't need to think
about anything other than the code under test and the test case
itself; so in my universe, a smaller test case should mean less to
think about.

I don't want to get hung up on size too much because I don't think
this is what it is really about. I think you and I can agree that a
test should be as simple and complete as possible. The ideal test
should cover all behavior, and should be obviously correct (since
otherwise we would have to test the test too). Obviously, these two
goals are at odds, so the compromise I attempt to make is to make a
bunch of test cases which are separately simple enough to be obviously
correct at first glance, and the sum total of all the tests provides
the necessary coverage. Additionally, because each test case is
independent of every other test case, they can be reasoned about
individually, and it is not necessary to reason about them as a group.
Hypothetically, this should give you the best of both worlds.

So even if I failed in execution, I think the principle is good.

>
> It makes it harder for me to read the source of the tests and
> understand the order they will execute.  It also makes it harder
> for me to read through the actual tests (in this example the
> tests that are currently grouped in of_unittest_find_node_by_name())
> because of all the extra function headers injected into the
> existing single function to break it apart into many smaller
> functions.

Well, now the same groups are expressed as test modules: a test module
is just a collection of closely related test cases, grouped together
for exactly that reason. Nevertheless, I argue this is superior to
grouping them together in a function, because a test module (elsewhere
called a test suite) relates test cases together while making it clear
that they are still logically independent; two test cases in a suite
should run completely independently of each other.
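
Concretely, something like this sketch (the case names are
hypothetical); each KUNIT_CASE() runs independently, and the
kunit_module just relates them:

```
static struct kunit_case of_test_find_node_by_name_cases[] = {
	KUNIT_CASE(of_test_find_node_by_path_basic),
	KUNIT_CASE(of_test_find_node_by_name_basic),
	{},
};

static struct kunit_module of_test_find_node_by_name_module = {
	.name = "of-test-find-node-by-name",
	.test_cases = of_test_find_node_by_name_cases,
};
module_test(of_test_find_node_by_name_module);
```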

>
> Breaking the tests into separate chunks, each chunk invoked
> independently as the result of module_test() of each chunk,
> loses the summary result for the devicetree unittests of
> how many tests are run and how many passed.  This is the

We still provide that. Well, we provide a total result of all tests
run, but they are already grouped by test module, and we could provide
module level summaries; that would be pretty trivial.

> only statistic that I need to determine whether the
> unittests have detected a new fault caused by a specific
> patch or commit.  I don't need to look at any individual
> test result unless the overall result reports a failure.

Yep, we do that too.

>
>
> > that we would follow up with several other patches like this one and
> > the subsequent patch, one which would pull out a couple test
> > functions, as I have done here, and another that splits those
> > functions up into a bunch of proper test cases.
> >
> > I thought that having that many unrelated test cases in a single file
> > would just be a pain to sort through, deal with, review, whatever.
>
> Having all the test cases in a single file makes it easier for me to
> read, understand, modify, and maintain the tests.

Alright, well that's a much harder thing to make a strong statement
about. From my experience, I have usually seen one or two *maybe
three* test suites in a single file, and you have a lot more than that
in the file right now, but this sounds like a discussion for later
anyway.

>
> > This is not something I feel particularly strongly about, it is just
> > pretty atypical from my experience to have so many unrelated test
> > cases in a single file.
> >
> > Maybe you would prefer that I break up the test cases first, and then
> > we split up the file as appropriate?
>
> I prefer that the test cases not be broken up arbitrarily.  There _may_

I wasn't trying to break them up arbitrarily. I thought I was doing it
according to a pattern (breaking up the file, that is), but maybe I
just hadn't looked at enough examples.

> be cases where the devicetree unittests are currently not well grouped
> and may benefit from change, but if so that should be handled independently
> of any transformation into a KUnit framework.

I agree. I did this because I wanted to illustrate what I thought real
world KUnit unit tests should look like (I also wanted to be able to
show off KUnit test features that help you write these kinds of
tests); I was not necessarily intending that all the of: unittest
patches would get merged in with the whole RFC. I was mostly trying to
create cause for discussion (which it seems like I succeeded at ;-) ).

So fair enough, I will propose these patches separately and later
(except of course this one that splits up the file). Do you want the
initial transformation to the KUnit framework in the main KUnit
patchset, or do you want that to be done separately? If I recall, Rob
suggested this as a good initial example that other people could refer
to, and some people seemed to think that I needed one to help guide
the discussion and provide direction for early users. I don't
necessarily think that means the initial real world example needs to
be a part of the initial patchset though.

Cheers

^ permalink raw reply	[flat|nested] 232+ messages in thread

* [RFC v3 18/19] of: unittest: split out a couple of test cases from unittest
  2019-02-15 10:56             ` brendanhiggins
  2019-02-15 10:56               ` Brendan Higgins
@ 2019-02-18 22:25               ` frowand.list
  2019-02-18 22:25                 ` Frank Rowand
  2019-02-20 20:44                 ` frowand.list
  1 sibling, 2 replies; 232+ messages in thread
From: frowand.list @ 2019-02-18 22:25 UTC (permalink / raw)


On 2/15/19 2:56 AM, Brendan Higgins wrote:
> On Thu, Feb 14, 2019 at 6:05 PM Frank Rowand <frowand.list at gmail.com> wrote:
>>
>> On 2/14/19 4:56 PM, Brendan Higgins wrote:
>>> On Thu, Feb 14, 2019 at 3:57 PM Frank Rowand <frowand.list at gmail.com> wrote:
>>>>
>>>> On 12/5/18 3:54 PM, Brendan Higgins wrote:
>>>>> On Tue, Dec 4, 2018 at 2:58 AM Frank Rowand <frowand.list at gmail.com> wrote:
>>>>>>
>>>>>> Hi Brendan,
>>>>>>
>>>>>> On 11/28/18 11:36 AM, Brendan Higgins wrote:
>>>>>>> Split out a couple of test cases that test these features in base.c from the
>>>>>>> unittest.c monolith. The intention is that we will eventually split out
>>>>>>> all test cases and group them together based on what portion of device
>>>>>>> tree they test.
>>>>>>
>>>>>> Why does splitting this file apart improve the implementation?
>>>>>
>>>>> This is in preparation for patch 19/19 and other hypothetical future
>>>>> patches where test cases are split up and grouped together by what
>>>>> portion of DT they test (for example the parsing tests and the
>>>>> platform/device tests would probably go separate files as well). This
>>>>> patch by itself does not do anything useful, but I figured it made
>>>>> patch 19/19 (and, if you like what I am doing, subsequent patches)
>>>>> easier to review.
>>>>
>>>> I do not see any value in splitting the devicetree tests into
>>>> multiple files.
>>>>
>>>> Please help me understand what the benefits of such a split are.
>>
>> Note that my following comments are specific to the current devicetree
>> unittests, and may not apply to the general case of unit tests in other
>> subsystems.
>>
> Note taken.
>>
>>> Sorry, I thought it made sense in context of what I am doing in the
>>> following patch. All I am trying to do is to provide an effective way
>>> of grouping test cases. To be clear, the idea, assuming you agree, is
>>
>> Looking at _just_ the first few fragments of the following patch, the
>> change is to break down a moderate size function of related tests,
>> of_unittest_find_node_by_name(), into a lot of extremely small functions.
> 
> Hmm...I wouldn't call that a moderate function. By my standards those
> functions are pretty large. In any case, I want to limit the
> discussion to specifically what a test case should look like, and the
> general consensus outside of the kernel is that unit test cases should
> be very, very small. The reason is that each test case is supposed to
> test one specific property; it should be obvious what that property
> is; and it should be obvious what is needed to exercise that property.

That is a valid model and philosophy of unit test design.

It is not a model that the devicetree unit tests can be shoehorned
into.  Sort of...  In a sense, the existing devicetree unit tests
already do that, if you consider each unittest() (and sometimes a few
lines of code that create the result that unittest() checks) to be a
separate unit test.  But the kunit model does not consider the roughly
equivalent KUNIT_EXPECT_EQ(), etc, to be a unit test; the unit test
in kunit would be KUNIT_CASE().  Also, it is a little confusing to
me that the initialization and clean up on exit occur one level
higher than KUNIT_CASE(), in struct kunit_module.  I think the
confusion is just a matter of a slight conflict in the documentation
(btw, the documents were very helpful for me to understand the
overall concepts and model).
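
In other words, as I read the documentation, the shape is roughly the
following (the field names are per the patches; the bodies and
comments are my reading of them):

```
static int of_test_init(struct kunit *test)
{
	/* runs before each KUNIT_CASE() in the module */
	return 0;
}

static void of_test_exit(struct kunit *test)
{
	/* runs after each case; undoes whatever init did */
}

static struct kunit_module of_test_module = {
	.name = "of-test",
	.init = of_test_init,
	.exit = of_test_exit,
	.test_cases = of_test_cases,
};
```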


>> Then to find the execution order of the many small functions requires
>> finding the array of_test_find_node_by_name_cases[].  Then I have to
> 
> Execution order shouldn't matter. Each test case should be totally
> hermetic. Obviously, in this case we depend on the preceding test case
> to clean up properly, but that is something I am working on.

But the order _does_ matter for the devicetree unit tests.

That is one of the problems.  The devicetree unit tests are not small,
independent tests.  Some of the tests change state in a way that
following tests depend upon.

The design documents also mention that each unit test should have
a pre-test initialization, and a post-test cleanup to remove the
results of the initialization.

The devicetree unit tests have a large, intrusive initialization.
Once again, not a good fit for this model.

The devicetree unit tests also have an undocumented (and not at all
obvious) need to leave state changed in some cases after the test
completes.  There are cases where the way that I fully validate
the success of the tests is to examine the state of the live
devicetree via /proc/device-tree/.  Ideally, this would be done by
a script or a program, but creating that is not near the top of
my todo list.


>> chase off into the kunit test runner core, where I find that the set
>> of tests in of_test_find_node_by_name_cases[] is processed by a
>> late_initcall().  So now the order of the various test groupings,
> 
> That's fair. You are not the only one to complain about that. The
> late_initcall is a hack which I plan on replacing shortly (and yes I
> know that me planning on doing something doesn't mean much in this
> discussion, but that's what I got); regardless, order shouldn't
> matter.

But again, it does.


>> declared via module_test(), is subject to the fragile ordering
>> of initcalls.
>>
>> There are ordering dependencies within the devicetree unittests.
> 
> There are ordering dependencies in the current devicetree unittests
> today, but, if I may be so bold, that is something that I would like to fix.
> 
>>
>> I do not like breaking the test cases down into such small atoms.
>>
>> I do not see any value __for devicetree unittests__ of having
>> such small atoms.
> 
> I imagine it probably makes less sense in the context of a strict
> dependency order, but that is something that I want to do away with.
> Ideally, when you look at a test case you shouldn't need to think
> about anything other than the code under test and the test case
> itself; so in my universe, a smaller test case should mean less to
> think about.

For the general case, I think that is an excellent model.


> I don't want to get hung up on size too much because I don't think
> this is what it is really about. I think you and I can agree that a
> test should be as simple and complete as possible. The ideal test
> should cover all behavior, and should be obviously correct (since
> otherwise we would have to test the test too). Obviously, these two
> goals are at odds, so the compromise I attempt to make is to make a
> bunch of test cases which are separately simple enough to be obviously
> correct at first glance, and the sum total of all the tests provides
> the necessary coverage. Additionally, because each test case is
> independent of every other test case, they can be reasoned about
> individually, and it is not necessary to reason about them as a group.
> Hypothetically, this should give you the best of both worlds.
> 
> So even if I failed in execution, I think the principle is good.
> 
>>
>> It makes it harder for me to read the source of the tests and
>> understand the order they will execute.  It also makes it harder
>> for me to read through the actual tests (in this example the
>> tests that are currently grouped in of_unittest_find_node_by_name())
>> because of all the extra function headers injected into the
>> existing single function to break it apart into many smaller
>> functions.
> 
> Well, now the same groups are expressed as test modules: a test module
> is just a collection of closely related test cases, grouped together
> for exactly that reason. Nevertheless, I argue this is superior to
> grouping them together in a function, because a test module (elsewhere
> called a test suite) relates test cases together while making it clear
> that they are still logically independent; two test cases in a suite
> should run completely independently of each other.

That is missing my point.  Converting to the kunit format adds a
lot of boilerplate function declarations.  Compare that extra
boilerplate to a one-line comment.  The argument I am making is
about clarity of the source code.

It may be a little hard to see my point given the current state of
unittest.c.  I could definitely make that much more readable using
the current model.
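
For example, compare (hypothetical, but representative):

```
/* Today, inside one larger function, a one-line comment is all the
 * structure a group of checks needs:
 */
	/* find the consumer-a node */
	np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
	unittest(np, "missing testcase data\n");

/* In the kunit format, the same group carries a full function
 * declaration plus a KUNIT_CASE() entry in a cases array:
 */
static void of_test_find_consumer_a(struct kunit *test)
{
	struct device_node *np;

	np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
}
```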


>>
>> Breaking the tests into separate chunks, each chunk invoked
>> independently as the result of module_test() of each chunk,
>> loses the summary result for the devicetree unittests of
>> how many tests are run and how many passed.  This is the
> 
> We still provide that. Well, we provide a total result of all tests
> run, but they are already grouped by test module, and we could provide
> module level summaries; that would be pretty trivial.

Providing the module level summary (assuming that all of the devicetree
tests were in a single module) would meet this need.


>> only statistic that I need to determine whether the
>> unittests have detected a new fault caused by a specific
>> patch or commit.  I don't need to look at any individual
>> test result unless the overall result reports a failure.
> 
> Yep, we do that too.

Well, when you add the module level summary...


>>
>>
>>> that we would follow up with several other patches like this one and
>>> the subsequent patch, one which would pull out a couple test
>>> functions, as I have done here, and another that splits those
>>> functions up into a bunch of proper test cases.
>>>
>>> I thought that having that many unrelated test cases in a single file
>>> would just be a pain to sort through, deal with, review, whatever.
>>
>> Having all the test cases in a single file makes it easier for me to
>> read, understand, modify, and maintain the tests.
> 
> Alright, well that's a much harder thing to make a strong statement
> about. From my experience, I have usually seen one or two *maybe
> three* test suites in a single file, and you have a lot more than that
> in the file right now, but this sounds like a discussion for later
> anyway.

drivers/of/test-common.c is already split out by the patch series.


>>
>>> This is not something I feel particularly strongly about, it is just
>>> pretty atypical from my experience to have so many unrelated test
>>> cases in a single file.
>>>
>>> Maybe you would prefer that I break up the test cases first, and then
>>> we split up the file as appropriate?
>>
>> I prefer that the test cases not be broken up arbitrarily.  There _may_
> 
> I wasn't trying to break them up arbitrarily. I thought I was doing it
> according to a pattern (breaking up the file, that is), but maybe I
> just hadn't looked at enough examples.

This goes back to the kunit model of putting each test into a separate
function that can be a KUNIT_CASE().  That is a model that I do not agree
with for devicetree.


>> be cases where the devicetree unittests are currently not well grouped
>> and may benefit from change, but if so that should be handled independently
>> of any transformation into a KUnit framework.
> 
> I agree. I did this because I wanted to illustrate what I thought real
> world KUnit unit tests should look like (I also wanted to be able to
> show off KUnit test features that help you write these kinds of
> tests); I was not necessarily intending that all the of: unittest
> patches would get merged in with the whole RFC. I was mostly trying to
> create cause for discussion (which it seems like I succeeded at ;-) ).
> 
> So fair enough, I will propose these patches separately and later
> (except of course this one that splits up the file). Do you want the
> initial transformation to the KUnit framework in the main KUnit
> patchset, or do you want that to be done separately? If I recall, Rob
> suggested this as a good initial example that other people could refer
> to, and some people seemed to think that I needed one to help guide
> the discussion and provide direction for early users. I don't
> necessarily think that means the initial real world example needs to
> be a part of the initial patchset though.
> 
> Cheers
> 

^ permalink raw reply	[flat|nested] 232+ messages in thread

* [RFC v3 17/19] of: unittest: migrate tests to run on KUnit
  2019-02-13  1:44     ` brendanhiggins
  2019-02-13  1:44       ` Brendan Higgins
  2019-02-14 20:10       ` robh
@ 2019-02-18 22:56       ` frowand.list
  2019-02-18 22:56         ` Frank Rowand
  2019-02-28  0:29         ` brendanhiggins
  2 siblings, 2 replies; 232+ messages in thread
From: frowand.list @ 2019-02-18 22:56 UTC (permalink / raw)


On 2/12/19 5:44 PM, Brendan Higgins wrote:
> On Wed, Nov 28, 2018 at 12:56 PM Rob Herring <robh at kernel.org> wrote:
>>
>> On Wed, Nov 28, 2018 at 1:38 PM Brendan Higgins
>> <brendanhiggins at google.com> wrote:
>>>
>>> Migrate tests without any cleanup, or modifying test logic in any way, to
>>> run under KUnit using the KUnit expectation and assertion API.
>>
>> Nice! You beat me to it. This is probably going to conflict with what
>> is in the DT tree for 4.21. Also, please Cc the DT list for
>> drivers/of/ changes.
>>
>> Looks good to me, but a few mostly formatting comments below.
> 
> I just realized that we never talked about your other comments, and I
> still have some questions. (Sorry, it was the last thing I looked at
> while getting v4 ready.) No worries if you don't get to it before I
> send v4 out, I just didn't want you to think I was ignoring you.
> 
>>
>>>
>>> Signed-off-by: Brendan Higgins <brendanhiggins at google.com>
>>> ---
>>>  drivers/of/Kconfig    |    1 +
>>>  drivers/of/unittest.c | 1405 ++++++++++++++++++++++-------------------
>>>  2 files changed, 752 insertions(+), 654 deletions(-)
>>>
> <snip>
>>> diff --git a/drivers/of/unittest.c b/drivers/of/unittest.c
>>> index 41b49716ac75f..a5ef44730ffdb 100644
>>> --- a/drivers/of/unittest.c
>>> +++ b/drivers/of/unittest.c
> <snip>
>>> -
>>> -static void __init of_unittest_find_node_by_name(void)
>>> +static void of_unittest_find_node_by_name(struct kunit *test)
>>
>> Why do we have to drop __init everywhere? The tests run later?
> 
> From the standpoint of a unit test __init doesn't really make any
> sense, right? I know that right now we are running as part of a
> kernel, but the goal should be that a unit test is not part of a
> kernel and we just include what we need.
> 
> Even so, that's the future. For now, I did not put the KUnit
> infrastructure in the .init section because I didn't think it belonged
> there. In practice, KUnit only knows how to run during the init phase
> of the kernel, but I don't think it should be restricted there. You
> should be able to run tests whenever you want because you should be
> able to test anything, right? I figured any restriction on that is
> misleading and will potentially get in the way at worst, and be
> unnecessary at best, especially since people shouldn't build a
> production kernel with all kinds of unit tests inside.
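
(Background on the __init point: .init.text is discarded once boot finishes,
so code that may run afterwards cannot safely reference it; modpost flags such
references as section mismatches. A contrived sketch, not code from the
patches:)

```c
#include <linux/init.h>

static int __init early_only_helper(void)
{
	return 0;	/* placed in .init.text, freed after boot */
}

static int run_anytime(void)
{
	/* modpost would warn here: section mismatch in reference
	 * from .text to .init.text */
	return early_only_helper();
}
```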
> 
>>
>>>  {
>>>         struct device_node *np;
>>>         const char *options, *name;
>>>
> <snip>
>>>
>>>
>>> -       np = of_find_node_by_path("/testcase-data/missing-path");
>>> -       unittest(!np, "non-existent path returned node %pOF\n", np);
>>> +       KUNIT_EXPECT_EQ_MSG(test,
>>> +                           of_find_node_by_path("/testcase-data/missing-path"),
>>> +                           NULL,
>>> +                           "non-existent path returned node %pOF\n", np);
>>
>> 1 tab indent would help with less vertical code (in general, not this
>> one so much).
> 
> Will do.
> 
>>
>>>         of_node_put(np);
>>>
>>> -       np = of_find_node_by_path("missing-alias");
>>> -       unittest(!np, "non-existent alias returned node %pOF\n", np);
>>> +       KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("missing-alias"), NULL,
>>> +                           "non-existent alias returned node %pOF\n", np);
>>>         of_node_put(np);
>>>
>>> -       np = of_find_node_by_path("testcase-alias/missing-path");
>>> -       unittest(!np, "non-existent alias with relative path returned node %pOF\n", np);
>>> +       KUNIT_EXPECT_EQ_MSG(test,
>>> +                           of_find_node_by_path("testcase-alias/missing-path"),
>>> +                           NULL,
>>> +                           "non-existent alias with relative path returned node %pOF\n",
>>> +                           np);
>>>         of_node_put(np);
>>>
> <snip>
>>>
>>> -static void __init of_unittest_property_string(void)
>>> +static void of_unittest_property_string(struct kunit *test)
>>>  {
>>>         const char *strings[4];
>>>         struct device_node *np;
>>>         int rc;
>>>
>>>         np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
>>> -       if (!np) {
>>> -               pr_err("No testcase data in device tree\n");
>>> -               return;
>>> -       }
>>> -
>>> -       rc = of_property_match_string(np, "phandle-list-names", "first");
>>> -       unittest(rc == 0, "first expected:0 got:%i\n", rc);
>>> -       rc = of_property_match_string(np, "phandle-list-names", "second");
>>> -       unittest(rc == 1, "second expected:1 got:%i\n", rc);
>>> -       rc = of_property_match_string(np, "phandle-list-names", "third");
>>> -       unittest(rc == 2, "third expected:2 got:%i\n", rc);
>>> -       rc = of_property_match_string(np, "phandle-list-names", "fourth");
>>> -       unittest(rc == -ENODATA, "unmatched string; rc=%i\n", rc);
>>> -       rc = of_property_match_string(np, "missing-property", "blah");
>>> -       unittest(rc == -EINVAL, "missing property; rc=%i\n", rc);
>>> -       rc = of_property_match_string(np, "empty-property", "blah");
>>> -       unittest(rc == -ENODATA, "empty property; rc=%i\n", rc);
>>> -       rc = of_property_match_string(np, "unterminated-string", "blah");
>>> -       unittest(rc == -EILSEQ, "unterminated string; rc=%i\n", rc);
>>> +       KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>>> +
>>> +       KUNIT_EXPECT_EQ(test,
>>> +                       of_property_match_string(np,
>>> +                                                "phandle-list-names",
>>> +                                                "first"),
>>> +                       0);
>>> +       KUNIT_EXPECT_EQ(test,
>>> +                       of_property_match_string(np,
>>> +                                                "phandle-list-names",
>>> +                                                "second"),
>>> +                       1);
>>
>> Fewer lines on these would be better even if we go over 80 chars.

Agreed.  unittest.c already goes past 80 characters in plenty of places, and
it is a file that benefits from doing so.


> On the of_property_match_string(...), I have no opinion. I will do
> whatever you like best.
> 
> Nevertheless, as far as the KUNIT_EXPECT_*(...) calls go, I do have an opinion: I am
> trying to establish a good, readable convention. Given an expect statement
> structured as
> ```
> KUNIT_EXPECT_*(
>     test,
>     expect_arg_0, ..., expect_arg_n,
>     fmt_str, fmt_arg_0, ..., fmt_arg_n)
> ```
> where `test` is the `struct kunit` context argument, `expect_arg_{0, ..., n}`
> are the arguments the expectation is being made about (so in the above example,
> `of_property_match_string(...)` and `1`), and `fmt_*` is the optional format
> string that comes at the end of some expectations.
> 
> The pattern I had been trying to promote is the following:
> 
> 1) If everything fits on 1 line, do that.
> 2) If you must make a line split, prefer to keep `test` on its own line,
> `expect_arg_{0, ..., n}` should be kept together, if possible, and the format
> string should follow the conventions already most commonly used with format
> strings.
> 3) If you must split up `expect_arg_{0, ..., n}`, each argument should get its
> own line and should not share a line with either `test` or any `fmt_*`.
> 
> The reason I care about this so much is because expectations should be
> extremely easy to read; they are the most important part of a unit
> test because they tell you what the test is verifying. I am not
> married to the formatting I proposed above, but I want something that
> will be extremely easy to identify the arguments that the expectation
> is on. Maybe that means that I need to add some syntactic fluff to
> make it clearer, I don't know, but this is definitely something we
> need to get right, especially in the earliest examples.
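
Applying those three rules to calls quoted from this patch, the shapes would
come out roughly as follows (a sketch of the proposed convention, not code
from the series):

```c
/* 1) fits on one line: leave it on one line */
KUNIT_EXPECT_EQ(test, of_property_count_strings(np, "string-property"), 1);

/* 2) must split: `test` on its own line, the expectation arguments kept
 * together, format string last */
KUNIT_EXPECT_EQ_MSG(
	test,
	of_property_match_string(np, "phandle-list-names", "fourth"),
	-ENODATA,
	"unmatched string");

/* 3) the expectation arguments themselves must split: one per line */
KUNIT_EXPECT_EQ(
	test,
	of_property_match_string(np,
				 "phandle-list-names",
				 "second"),
	1);
```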

I will probably raise the ire of the kernel formatting rule makers by offering
what I think is a _much_ more readable format __for this specific case__.
In other words, for drivers/of/unittest.c.

If you cannot make your mail window _very_ wide, or if this email has been
line wrapped, this example will not be clear.

Two possible formats:


### -----  version 1, as created by the patch series

static void of_unittest_property_string(struct kunit *test)
{
        const char *strings[4];
        struct device_node *np;
        int rc;

        np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
        KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);

        KUNIT_EXPECT_EQ(
                test,
                of_property_match_string(np, "phandle-list-names", "first"),
                0);
        KUNIT_EXPECT_EQ(
                test,
                of_property_match_string(np, "phandle-list-names", "second"),
                1);
        KUNIT_EXPECT_EQ(
                test,
                of_property_match_string(np, "phandle-list-names", "third"),
                2);
        KUNIT_EXPECT_EQ_MSG(
                test,
                of_property_match_string(np, "phandle-list-names", "fourth"),
                -ENODATA,
                "unmatched string");
        KUNIT_EXPECT_EQ_MSG(
                test,
                of_property_match_string(np, "missing-property", "blah"),
                -EINVAL,
                "missing property");
        KUNIT_EXPECT_EQ_MSG(
                test,
                of_property_match_string(np, "empty-property", "blah"),
                -ENODATA,
                "empty property");
        KUNIT_EXPECT_EQ_MSG(
                test,
                of_property_match_string(np, "unterminated-string", "blah"),
                -EILSEQ,
                "unterminated string");

        /* of_property_count_strings() tests */
        KUNIT_EXPECT_EQ(test,
                        of_property_count_strings(np, "string-property"), 1);
        KUNIT_EXPECT_EQ(test,
                        of_property_count_strings(np, "phandle-list-names"), 3);
        KUNIT_EXPECT_EQ_MSG(
                test,
                of_property_count_strings(np, "unterminated-string"), -EILSEQ,
                "unterminated string");
        KUNIT_EXPECT_EQ_MSG(
                test,
                of_property_count_strings(np, "unterminated-string-list"),
                -EILSEQ,
                "unterminated string array");




### -----  version 2, modified to use really long lines

static void of_unittest_property_string(struct kunit *test)
{
        const char *strings[4];
        struct device_node *np;
        int rc;

        np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
        KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);

        KUNIT_EXPECT_EQ(    test, of_property_match_string(np, "phandle-list-names", "first"),  0);
        KUNIT_EXPECT_EQ(    test, of_property_match_string(np, "phandle-list-names", "second"), 1);
        KUNIT_EXPECT_EQ(    test, of_property_match_string(np, "phandle-list-names", "third"),  2);
        KUNIT_EXPECT_EQ_MSG(test, of_property_match_string(np, "phandle-list-names", "fourth"), -ENODATA, "unmatched string");
        KUNIT_EXPECT_EQ_MSG(test, of_property_match_string(np, "missing-property", "blah"),     -EINVAL, "missing property");
        KUNIT_EXPECT_EQ_MSG(test, of_property_match_string(np, "empty-property", "blah"),       -ENODATA, "empty property");
        KUNIT_EXPECT_EQ_MSG(test, of_property_match_string(np, "unterminated-string", "blah"),  -EILSEQ, "unterminated string");

        /* of_property_count_strings() tests */
        KUNIT_EXPECT_EQ(    test, of_property_count_strings(np, "string-property"),             1);
        KUNIT_EXPECT_EQ(    test, of_property_count_strings(np, "phandle-list-names"),          3);
        KUNIT_EXPECT_EQ_MSG(test, of_property_count_strings(np, "unterminated-string"),         -EILSEQ, "unterminated string");
        KUNIT_EXPECT_EQ_MSG(test, of_property_count_strings(np, "unterminated-string-list"),    -EILSEQ, "unterminated string array");

        
        ------------------------  ------------------------------------------------------------- --------------------------------------
             ^                         ^                                                             ^
             |                         |                                                             |
             |                         |                                                             |
            mostly boilerplate        what is being tested                                          expected result, error message
            (can vary in relop
             and _MSG or not)

In my opinion, the second version is much more readable.  It is easy to see the
differences in the boilerplate.  It is easy to see what is being tested, and how
the arguments of the tested function vary for each test.  It is easy to see the
expected result and error message.  The entire block fits into a single short
window (though much wider).

- Frank

>>
>>> +       KUNIT_EXPECT_EQ(test,
>>> +                       of_property_match_string(np,
>>> +                                                "phandle-list-names",
>>> +                                                "third"),
>>> +                       2);
>>> +       KUNIT_EXPECT_EQ_MSG(test,
>>> +                           of_property_match_string(np,
>>> +                                                    "phandle-list-names",
>>> +                                                    "fourth"),
>>> +                           -ENODATA,
>>> +                           "unmatched string");
>>> +       KUNIT_EXPECT_EQ_MSG(test,
>>> +                           of_property_match_string(np,
>>> +                                                    "missing-property",
>>> +                                                    "blah"),
>>> +                           -EINVAL,
>>> +                           "missing property");
>>> +       KUNIT_EXPECT_EQ_MSG(test,
>>> +                           of_property_match_string(np,
>>> +                                                    "empty-property",
>>> +                                                    "blah"),
>>> +                           -ENODATA,
>>> +                           "empty property");
>>> +       KUNIT_EXPECT_EQ_MSG(test,
>>> +                           of_property_match_string(np,
>>> +                                                    "unterminated-string",
>>> +                                                    "blah"),
>>> +                           -EILSEQ,
>>> +                           "unterminated string");
> <snip>
>>>  /* test insertion of a bus with parent devices */
>>> -static void __init of_unittest_overlay_10(void)
>>> +static void of_unittest_overlay_10(struct kunit *test)
>>>  {
>>> -       int ret;
>>>         char *child_path;
>>>
>>>         /* device should disable */
>>> -       ret = of_unittest_apply_overlay_check(10, 10, 0, 1, PDEV_OVERLAY);
>>> -       if (unittest(ret == 0,
>>> -                       "overlay test %d failed; overlay application\n", 10))
>>> -               return;
>>> +       KUNIT_ASSERT_EQ_MSG(test,
>>> +                           of_unittest_apply_overlay_check(test,
>>> +                                                           10,
>>> +                                                           10,
>>> +                                                           0,
>>> +                                                           1,
>>> +                                                           PDEV_OVERLAY),
>>
>> I prefer putting multiple args on a line and having fewer lines.
> 
> Looking at this now, I tend to agree, but I don't think I saw a
> consistent way to break them up for these functions. I figured there
> should be some type of pattern.
> 
>>
>>> +                           0,
>>> +                           "overlay test %d failed; overlay application\n",
>>> +                           10);
>>>
>>>         child_path = kasprintf(GFP_KERNEL, "%s/test-unittest101",
>>>                         unittest_path(10, PDEV_OVERLAY));
>>> -       if (unittest(child_path, "overlay test %d failed; kasprintf\n", 10))
>>> -               return;
>>> +       KUNIT_ASSERT_NOT_ERR_OR_NULL(test, child_path);
>>>
>>> -       ret = of_path_device_type_exists(child_path, PDEV_OVERLAY);
>>> +       KUNIT_EXPECT_TRUE_MSG(test,
>>> +                             of_path_device_type_exists(child_path,
>>> +                                                        PDEV_OVERLAY),
>>> +                             "overlay test %d failed; no child device\n", 10);
>>>         kfree(child_path);
>>> -
>>> -       unittest(ret, "overlay test %d failed; no child device\n", 10);
>>>  }
> <snip>
> 

^ permalink raw reply	[flat|nested] 232+ messages in thread

* [RFC v3 18/19] of: unittest: split out a couple of test cases from unittest
  2019-02-18 22:25               ` frowand.list
  2019-02-18 22:25                 ` Frank Rowand
@ 2019-02-20 20:44                 ` frowand.list
  2019-02-20 20:44                   ` Frank Rowand
                                     ` (2 more replies)
  1 sibling, 3 replies; 232+ messages in thread
From: frowand.list @ 2019-02-20 20:44 UTC (permalink / raw)


On 2/18/19 2:25 PM, Frank Rowand wrote:
> On 2/15/19 2:56 AM, Brendan Higgins wrote:
>> On Thu, Feb 14, 2019 at 6:05 PM Frank Rowand <frowand.list at gmail.com> wrote:
>>>
>>> On 2/14/19 4:56 PM, Brendan Higgins wrote:
>>>> On Thu, Feb 14, 2019 at 3:57 PM Frank Rowand <frowand.list at gmail.com> wrote:
>>>>>
>>>>> On 12/5/18 3:54 PM, Brendan Higgins wrote:
>>>>>> On Tue, Dec 4, 2018 at 2:58 AM Frank Rowand <frowand.list at gmail.com> wrote:
>>>>>>>

< snip >

>
> It makes it harder for me to read the source of the tests and
> understand the order they will execute.  It also makes it harder
> for me to read through the actual tests (in this example the
> tests that are currently grouped in of_unittest_find_node_by_name())
> because of all the extra function headers injected into the
> existing single function to break it apart into many smaller
> functions.

< snip >

>>>> This is not something I feel particularly strongly about, it is just
>>>> pretty atypical from my experience to have so many unrelated test
>>>> cases in a single file.
>>>>
>>>> Maybe you would prefer that I break up the test cases first, and then
>>>> we split up the file as appropriate?
>>>
>>> I prefer that the test cases not be broken up arbitrarily.  There _may_

I expect that I created confusion by putting this in a reply to patch 18/19.
It is actually a comment about patch 19/19.  Sorry about that.


>>
>> I wasn't trying to break them up arbitrarily. I thought I was doing it
>> according to a pattern (breaking up the file, that is), but maybe I
>> just hadn't looked at enough examples.
> 
> This goes back to the kunit model of putting each test into a separate
> function that can be a KUNIT_CASE().  That is a model that I do not agree
> with for devicetree.

So now that I am actually talking about patch 19/19, let me give a concrete
example.  I will cut and paste (after my comments) the beginning portion
of base-test.c, after applying patch 19/19 (the "base version").  Then I
will cut and paste my alternative version which does not break the tests
down into individual functions (the "frank version").

I will also reply to this email with the base version and the frank version
as attachments, which will make it easier to save as separate versions
for easier viewing.  I'm not sure if an email with attachments will make
it through the list servers, but I am cautiously optimistic.

I am using v4 of the patch series because I never got v3 to cleanly apply
and it is not a constructive use of my time to do so since I have v4 applied.

One of the points I was trying to make is that readability suffers from the
approach taken by patches 18/19 and 19/19.

The base version contains the extra text of a function header for each
unit test.  This is visual noise and makes the file larger.  It is also
one more possible location of an error (although not likely).

The frank version has converted each of the new function headers into
a comment, using the function name with '_' converted to ' '.  The
comments are more readable than the function headers.  Note that I added
an extra blank line before each comment, which violates the kernel
coding standards, but I feel this makes the code more readable.

The base version needs to declare each of the individual test functions
in of_test_find_node_by_name_cases[]. It is possible that a test function
could be left out of of_test_find_node_by_name_cases[], in error.  This
will result in a compile warning (I think warning instead of error, but
I have not verified that) so the error might be caught or it might be
overlooked.
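
(gcc does indeed make this a warning rather than an error: a static test
function that is defined but never listed in the case table is merely unused.
A sketch of the failure mode, with invented names:)

```c
/* Defined, but accidentally omitted from the case table below... */
static void of_test_forgotten_case(struct kunit *test)
{
	KUNIT_EXPECT_EQ(test, 1, 1);
}

static struct kunit_case of_test_example_cases[] = {
	/* of_test_forgotten_case is missing here: the build still succeeds,
	 * gcc only emits "'of_test_forgotten_case' defined but not used"
	 * (-Wunused-function), and the test silently never runs. */
	{},
};
```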

In the base version, the order of execution of the test code requires
bouncing back and forth between the test functions and the coding of
of_test_find_node_by_name_cases[].

In the frank version the order of execution of the test code is obvious.

The base version is 265 lines.  The frank version is 208 lines, 57 lines
less.  Less is better.


## ==========  base version  ====================================

// SPDX-License-Identifier: GPL-2.0
/*
 * Unit tests for functions defined in base.c.
 */
#include <linux/of.h>

#include <kunit/test.h>

#include "test-common.h"

static void of_test_find_node_by_name_basic(struct kunit *test)
{
	struct device_node *np;
	const char *name;

	np = of_find_node_by_path("/testcase-data");
	name = kasprintf(GFP_KERNEL, "%pOF", np);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
			       "find /testcase-data failed\n");
	of_node_put(np);
	kfree(name);
}

static void of_test_find_node_by_name_trailing_slash(struct kunit *test)
{
	/* Test if trailing '/' works */
	KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("/testcase-data/"), NULL,
			    "trailing '/' on /testcase-data/ should fail\n");

}

static void of_test_find_node_by_name_multiple_components(struct kunit *test)
{
	struct device_node *np;
	const char *name;

	np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	name = kasprintf(GFP_KERNEL, "%pOF", np);
	KUNIT_EXPECT_STREQ_MSG(
		test, "/testcase-data/phandle-tests/consumer-a", name,
		"find /testcase-data/phandle-tests/consumer-a failed\n");
	of_node_put(np);
	kfree(name);
}

static void of_test_find_node_by_name_with_alias(struct kunit *test)
{
	struct device_node *np;
	const char *name;

	np = of_find_node_by_path("testcase-alias");
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	name = kasprintf(GFP_KERNEL, "%pOF", np);
	KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
			       "find testcase-alias failed\n");
	of_node_put(np);
	kfree(name);
}

static void of_test_find_node_by_name_with_alias_and_slash(struct kunit *test)
{
	/* Test if trailing '/' works on aliases */
	KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("testcase-alias/"), NULL,
			   "trailing '/' on testcase-alias/ should fail\n");
}

/*
 * TODO(brendanhiggins at google.com): This looks like a duplicate of
 * of_test_find_node_by_name_multiple_components
 */
static void of_test_find_node_by_name_multiple_components_2(struct kunit *test)
{
	struct device_node *np;
	const char *name;

	np = of_find_node_by_path("testcase-alias/phandle-tests/consumer-a");
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	name = kasprintf(GFP_KERNEL, "%pOF", np);
	KUNIT_EXPECT_STREQ_MSG(
		test, "/testcase-data/phandle-tests/consumer-a", name,
		"find testcase-alias/phandle-tests/consumer-a failed\n");
	of_node_put(np);
	kfree(name);
}

static void of_test_find_node_by_name_missing_path(struct kunit *test)
{
	struct device_node *np;

	KUNIT_EXPECT_EQ_MSG(
		test,
		np = of_find_node_by_path("/testcase-data/missing-path"), NULL,
		"non-existent path returned node %pOF\n", np);
	of_node_put(np);
}

static void of_test_find_node_by_name_missing_alias(struct kunit *test)
{
	struct device_node *np;

	KUNIT_EXPECT_EQ_MSG(
		test, np = of_find_node_by_path("missing-alias"), NULL,
		"non-existent alias returned node %pOF\n", np);
	of_node_put(np);
}

static void of_test_find_node_by_name_missing_alias_with_relative_path(
		struct kunit *test)
{
	struct device_node *np;

	KUNIT_EXPECT_EQ_MSG(
		test,
		np = of_find_node_by_path("testcase-alias/missing-path"), NULL,
		"non-existent alias with relative path returned node %pOF\n",
		np);
	of_node_put(np);
}

static void of_test_find_node_by_name_with_option(struct kunit *test)
{
	struct device_node *np;
	const char *options;

	np = of_find_node_opts_by_path("/testcase-data:testoption", &options);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	KUNIT_EXPECT_STREQ_MSG(test, "testoption", options,
			       "option path test failed\n");
	of_node_put(np);
}

static void of_test_find_node_by_name_with_option_and_slash(struct kunit *test)
{
	struct device_node *np;
	const char *options;

	np = of_find_node_opts_by_path("/testcase-data:test/option", &options);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
			       "option path test, subcase #1 failed\n");
	of_node_put(np);

	np = of_find_node_opts_by_path("/testcase-data/testcase-device1:test/option", &options);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
			       "option path test, subcase #2 failed\n");
	of_node_put(np);
}

static void of_test_find_node_by_name_with_null_option(struct kunit *test)
{
	struct device_node *np;

	np = of_find_node_opts_by_path("/testcase-data:testoption", NULL);
	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(test, np,
					 "NULL option path test failed\n");
	of_node_put(np);
}

static void of_test_find_node_by_name_with_option_alias(struct kunit *test)
{
	struct device_node *np;
	const char *options;

	np = of_find_node_opts_by_path("testcase-alias:testaliasoption",
				       &options);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	KUNIT_EXPECT_STREQ_MSG(test, "testaliasoption", options,
			       "option alias path test failed\n");
	of_node_put(np);
}

static void of_test_find_node_by_name_with_option_alias_and_slash(
		struct kunit *test)
{
	struct device_node *np;
	const char *options;

	np = of_find_node_opts_by_path("testcase-alias:test/alias/option",
				       &options);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	KUNIT_EXPECT_STREQ_MSG(test, "test/alias/option", options,
			       "option alias path test, subcase #1 failed\n");
	of_node_put(np);
}

static void of_test_find_node_by_name_with_null_option_alias(struct kunit *test)
{
	struct device_node *np;

	np = of_find_node_opts_by_path("testcase-alias:testaliasoption", NULL);
	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(
			test, np, "NULL option alias path test failed\n");
	of_node_put(np);
}

static void of_test_find_node_by_name_option_clearing(struct kunit *test)
{
	struct device_node *np;
	const char *options;

	options = "testoption";
	np = of_find_node_opts_by_path("testcase-alias", &options);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	KUNIT_EXPECT_EQ_MSG(test, options, NULL,
			    "option clearing test failed\n");
	of_node_put(np);
}

static void of_test_find_node_by_name_option_clearing_root(struct kunit *test)
{
	struct device_node *np;
	const char *options;

	options = "testoption";
	np = of_find_node_opts_by_path("/", &options);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	KUNIT_EXPECT_EQ_MSG(test, options, NULL,
			    "option clearing root node test failed\n");
	of_node_put(np);
}

static int of_test_find_node_by_name_init(struct kunit *test)
{
	/* adding data for unittest */
	KUNIT_ASSERT_EQ(test, 0, unittest_data_add());

	if (!of_aliases)
		of_aliases = of_find_node_by_path("/aliases");

	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, of_find_node_by_path(
			"/testcase-data/phandle-tests/consumer-a"));

	return 0;
}

static struct kunit_case of_test_find_node_by_name_cases[] = {
	KUNIT_CASE(of_test_find_node_by_name_basic),
	KUNIT_CASE(of_test_find_node_by_name_trailing_slash),
	KUNIT_CASE(of_test_find_node_by_name_multiple_components),
	KUNIT_CASE(of_test_find_node_by_name_with_alias),
	KUNIT_CASE(of_test_find_node_by_name_with_alias_and_slash),
	KUNIT_CASE(of_test_find_node_by_name_multiple_components_2),
	KUNIT_CASE(of_test_find_node_by_name_missing_path),
	KUNIT_CASE(of_test_find_node_by_name_missing_alias),
	KUNIT_CASE(of_test_find_node_by_name_missing_alias_with_relative_path),
	KUNIT_CASE(of_test_find_node_by_name_with_option),
	KUNIT_CASE(of_test_find_node_by_name_with_option_and_slash),
	KUNIT_CASE(of_test_find_node_by_name_with_null_option),
	KUNIT_CASE(of_test_find_node_by_name_with_option_alias),
	KUNIT_CASE(of_test_find_node_by_name_with_option_alias_and_slash),
	KUNIT_CASE(of_test_find_node_by_name_with_null_option_alias),
	KUNIT_CASE(of_test_find_node_by_name_option_clearing),
	KUNIT_CASE(of_test_find_node_by_name_option_clearing_root),
	{},
};

static struct kunit_module of_test_find_node_by_name_module = {
	.name = "of-test-find-node-by-name",
	.init = of_test_find_node_by_name_init,
	.test_cases = of_test_find_node_by_name_cases,
};
module_test(of_test_find_node_by_name_module);


## ==========  frank version  ===================================

// SPDX-License-Identifier: GPL-2.0
/*
 * Unit tests for functions defined in base.c.
 */
#include <linux/of.h>

#include <kunit/test.h>

#include "test-common.h"

static void of_unittest_find_node_by_name(struct kunit *test)
{
	struct device_node *np;
	const char *options, *name;


	// find node by name basic

	np = of_find_node_by_path("/testcase-data");
	name = kasprintf(GFP_KERNEL, "%pOF", np);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
			       "find /testcase-data failed\n");
	of_node_put(np);
	kfree(name);


	// find node by name trailing slash

	/* Test if trailing '/' works */
	KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("/testcase-data/"), NULL,
			    "trailing '/' on /testcase-data/ should fail\n");


	// find node by name multiple components

	np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	name = kasprintf(GFP_KERNEL, "%pOF", np);
	KUNIT_EXPECT_STREQ_MSG(
		test, "/testcase-data/phandle-tests/consumer-a", name,
		"find /testcase-data/phandle-tests/consumer-a failed\n");
	of_node_put(np);
	kfree(name);


	// find node by name with alias

	np = of_find_node_by_path("testcase-alias");
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	name = kasprintf(GFP_KERNEL, "%pOF", np);
	KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
			       "find testcase-alias failed\n");
	of_node_put(np);
	kfree(name);


	// find node by name with alias and slash

	/* Test if trailing '/' works on aliases */
	KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("testcase-alias/"), NULL,
			    "trailing '/' on testcase-alias/ should fail\n");


	// find node by name multiple components 2

	np = of_find_node_by_path("testcase-alias/phandle-tests/consumer-a");
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	name = kasprintf(GFP_KERNEL, "%pOF", np);
	KUNIT_EXPECT_STREQ_MSG(
		test, "/testcase-data/phandle-tests/consumer-a", name,
		"find testcase-alias/phandle-tests/consumer-a failed\n");
	of_node_put(np);
	kfree(name);


	// find node by name missing path

	KUNIT_EXPECT_EQ_MSG(
		test,
		np = of_find_node_by_path("/testcase-data/missing-path"), NULL,
		"non-existent path returned node %pOF\n", np);
	of_node_put(np);


	// find node by name missing alias

	KUNIT_EXPECT_EQ_MSG(
		test, np = of_find_node_by_path("missing-alias"), NULL,
		"non-existent alias returned node %pOF\n", np);
	of_node_put(np);


	// find node by name missing alias with relative path

	KUNIT_EXPECT_EQ_MSG(
		test,
		np = of_find_node_by_path("testcase-alias/missing-path"), NULL,
		"non-existent alias with relative path returned node %pOF\n",
		np);
	of_node_put(np);


	// find node by name with option

	np = of_find_node_opts_by_path("/testcase-data:testoption", &options);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	KUNIT_EXPECT_STREQ_MSG(test, "testoption", options,
			       "option path test failed\n");
	of_node_put(np);


	// find node by name with option and slash

	np = of_find_node_opts_by_path("/testcase-data:test/option", &options);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
			       "option path test, subcase #1 failed\n");
	of_node_put(np);

	np = of_find_node_opts_by_path("/testcase-data/testcase-device1:test/option", &options);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
			       "option path test, subcase #2 failed\n");
	of_node_put(np);


	// find node by name with null option

	np = of_find_node_opts_by_path("/testcase-data:testoption", NULL);
	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(test, np,
					 "NULL option path test failed\n");
	of_node_put(np);


	// find node by name with option alias

	np = of_find_node_opts_by_path("testcase-alias:testaliasoption",
				       &options);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	KUNIT_EXPECT_STREQ_MSG(test, "testaliasoption", options,
			       "option alias path test failed\n");
	of_node_put(np);


	// find node by name with option alias and slash

	np = of_find_node_opts_by_path("testcase-alias:test/alias/option",
				       &options);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	KUNIT_EXPECT_STREQ_MSG(test, "test/alias/option", options,
			       "option alias path test, subcase #1 failed\n");
	of_node_put(np);


	// find node by name with null option alias

	np = of_find_node_opts_by_path("testcase-alias:testaliasoption", NULL);
	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(
			test, np, "NULL option alias path test failed\n");
	of_node_put(np);


	// find node by name option clearing

	options = "testoption";
	np = of_find_node_opts_by_path("testcase-alias", &options);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	KUNIT_EXPECT_EQ_MSG(test, options, NULL,
			    "option clearing test failed\n");
	of_node_put(np);


	// find node by name option clearing root

	options = "testoption";
	np = of_find_node_opts_by_path("/", &options);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	KUNIT_EXPECT_EQ_MSG(test, options, NULL,
			    "option clearing root node test failed\n");
	of_node_put(np);
}

static int of_test_init(struct kunit *test)
{
	/* adding data for unittest */
	KUNIT_ASSERT_EQ(test, 0, unittest_data_add());

	if (!of_aliases)
		of_aliases = of_find_node_by_path("/aliases");

	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, of_find_node_by_path(
			"/testcase-data/phandle-tests/consumer-a"));

	return 0;
}

static struct kunit_case of_test_cases[] = {
	KUNIT_CASE(of_unittest_find_node_by_name),
	{},
};

static struct kunit_module of_test_module = {
	.name = "of-base-test",
	.init = of_test_init,
	.test_cases = of_test_cases,
};
module_test(of_test_module);


> 
> 
>>> be cases where the devicetree unittests are currently not well grouped
>>> and may benefit from change, but if so that should be handled independently
>>> of any transformation into a KUnit framework.
>>
>> I agree. I did this because I wanted to illustrate what I thought real
>> world KUnit unit tests should look like (I also wanted to be able to
>> show off KUnit test features that help you write these kinds of
>> tests); I was not necessarily intending that all the of: unittest
>> patches would get merged in with the whole RFC. I was mostly trying to
>> create cause for discussion (which it seems like I succeeded at ;-) ).
>>
>> So fair enough, I will propose these patches separately and later
>> (except of course this one that splits up the file). Do you want the
>> initial transformation to the KUnit framework in the main KUnit
>> patchset, or do you want that to be done separately? If I recall, Rob
>> suggested this as a good initial example that other people could refer
>> to, and some people seemed to think that I needed one to help guide
>> the discussion and provide direction for early users. I don't
>> necessarily think that means the initial real world example needs to
>> be a part of the initial patchset though.
>>
>> Cheers
>>
> 
> 

^ permalink raw reply	[flat|nested] 232+ messages in thread

* [RFC v3 18/19] of: unittest: split out a couple of test cases from unittest
  2019-02-20 20:44                 ` frowand.list
@ 2019-02-20 20:44                   ` Frank Rowand
  2019-02-20 20:47                   ` frowand.list
  2019-02-28  3:52                   ` brendanhiggins
  2 siblings, 0 replies; 232+ messages in thread
From: Frank Rowand @ 2019-02-20 20:44 UTC (permalink / raw)


On 2/18/19 2:25 PM, Frank Rowand wrote:
> On 2/15/19 2:56 AM, Brendan Higgins wrote:
>> On Thu, Feb 14, 2019 at 6:05 PM Frank Rowand <frowand.list@gmail.com> wrote:
>>>
>>> On 2/14/19 4:56 PM, Brendan Higgins wrote:
>>>> On Thu, Feb 14, 2019 at 3:57 PM Frank Rowand <frowand.list@gmail.com> wrote:
>>>>>
>>>>> On 12/5/18 3:54 PM, Brendan Higgins wrote:
>>>>>> On Tue, Dec 4, 2018 at 2:58 AM Frank Rowand <frowand.list@gmail.com> wrote:
>>>>>>>

< snip >

>
> It makes it harder for me to read the source of the tests and
> understand the order they will execute.  It also makes it harder
> for me to read through the actual tests (in this example the
> tests that are currently grouped in of_unittest_find_node_by_name())
> because of all the extra function headers injected into the
> existing single function to break it apart into many smaller
> functions.

< snip >

>>>> This is not something I feel particularly strongly about; it is just
>>>> pretty atypical, in my experience, to have so many unrelated test
>>>> cases in a single file.
>>>>
>>>> Maybe you would prefer that I break up the test cases first, and then
>>>> we split up the file as appropriate?
>>>
>>> I prefer that the test cases not be broken up arbitrarily.  There _may_

I expect that I created confusion by putting this in a reply to patch 18/19.
It is actually a comment about patch 19/19.  Sorry about that.


>>
>> I wasn't trying to break them up arbitrarily. I thought I was doing it
>> according to a pattern (breaking up the file, that is), but maybe I
>> just hadn't looked at enough examples.
> 
> This goes back to the kunit model of putting each test into a separate
> function that can be a KUNIT_CASE().  That is a model that I do not agree
> with for devicetree.

So now that I am actually talking about patch 19/19, let me give a concrete
example.  I will cut and paste (after my comments) the beginning portion
of base-test.c, after applying patch 19/19 (the "base version").  Then I
will cut and paste my alternative version, which does not break the tests
down into individual functions (the "frank version").

I will also reply to this email with the base version and the frank version
as attachments, which will make it easier to save them as separate files
for viewing.  I'm not sure if an email with attachments will make it
through the list servers, but I am cautiously optimistic.

I am using v4 of the patch series because I never got v3 to apply cleanly,
and it is not a constructive use of my time to do so now that I have v4
applied.

One of the points I was trying to make is that readability suffers from the
approach taken by patches 18/19 and 19/19.

The base version contains the extra text of a function header for each
unit test.  This is visual noise and makes the file larger.  It is also
one more possible location of an error (although not likely).

The frank version has converted each of the new function headers into
a comment, using the function name with '_' converted to ' '.  The
comments are more readable than the function headers.  Note that I added
an extra blank line before each comment, which violates the kernel
coding standards, but I feel this makes the code more readable.
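
As a concrete illustration, taken directly from the listings below, the
first test's function header in the base version,

	static void of_test_find_node_by_name_basic(struct kunit *test)

becomes this comment in the frank version:

	// find node by name basic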

The base version needs to declare each of the individual test functions
in of_test_find_node_by_name_cases[].  It is possible for a test function
to be left out of that array in error.  This will result in a compile
warning (I think a warning rather than an error, but I have not verified
that), so the omission might be caught or it might be overlooked.
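
To make the failure mode concrete, here is a minimal sketch (hypothetical
test names, not taken from the patches).  A static test function that is
defined but dropped from the cases array still compiles; the kernel's
-Wall builds only flag the unused static function:

	static void of_test_example_present(struct kunit *test)
	{
		KUNIT_EXPECT_EQ(test, 1, 1);
	}

	/* defined, but accidentally left out of the cases array below */
	static void of_test_example_forgotten(struct kunit *test)
	{
		KUNIT_EXPECT_EQ(test, 2, 2);
	}

	static struct kunit_case of_test_example_cases[] = {
		KUNIT_CASE(of_test_example_present),
		{},
	};

gcc reports something like "warning: 'of_test_example_forgotten' defined
but not used [-Wunused-function]", i.e. a warning rather than an error,
so the omission can indeed slip through.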

In the base version, following the order of execution of the test code
requires bouncing back and forth between the test functions and the
ordering of entries in of_test_find_node_by_name_cases[].
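
The cases array is what defines the running order: the runner walks it
entry by entry until the empty {} terminator.  Roughly (a sketch only;
the run_case field name is inferred from the KUNIT_CASE() initializer
and not verified against the runner source):

	struct kunit_case *tc;

	for (tc = module->test_cases; tc->run_case; tc++)
		tc->run_case(test);	/* array order, not file order */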

In the frank version the order of execution of the test code is obvious.

The base version is 265 lines.  The frank version is 208 lines, 57 lines
fewer.  Less is better.


## ==========  base version  ====================================

// SPDX-License-Identifier: GPL-2.0
/*
 * Unit tests for functions defined in base.c.
 */
#include <linux/of.h>

#include <kunit/test.h>

#include "test-common.h"

static void of_test_find_node_by_name_basic(struct kunit *test)
{
	struct device_node *np;
	const char *name;

	np = of_find_node_by_path("/testcase-data");
	name = kasprintf(GFP_KERNEL, "%pOF", np);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
			       "find /testcase-data failed\n");
	of_node_put(np);
	kfree(name);
}

static void of_test_find_node_by_name_trailing_slash(struct kunit *test)
{
	/* Test if trailing '/' works */
	KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("/testcase-data/"), NULL,
			    "trailing '/' on /testcase-data/ should fail\n");

}

static void of_test_find_node_by_name_multiple_components(struct kunit *test)
{
	struct device_node *np;
	const char *name;

	np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	name = kasprintf(GFP_KERNEL, "%pOF", np);
	KUNIT_EXPECT_STREQ_MSG(
		test, "/testcase-data/phandle-tests/consumer-a", name,
		"find /testcase-data/phandle-tests/consumer-a failed\n");
	of_node_put(np);
	kfree(name);
}

static void of_test_find_node_by_name_with_alias(struct kunit *test)
{
	struct device_node *np;
	const char *name;

	np = of_find_node_by_path("testcase-alias");
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	name = kasprintf(GFP_KERNEL, "%pOF", np);
	KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
			       "find testcase-alias failed\n");
	of_node_put(np);
	kfree(name);
}

static void of_test_find_node_by_name_with_alias_and_slash(struct kunit *test)
{
	/* Test if trailing '/' works on aliases */
	KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("testcase-alias/"), NULL,
			   "trailing '/' on testcase-alias/ should fail\n");
}

/*
 * TODO(brendanhiggins@google.com): This looks like a duplicate of
 * of_test_find_node_by_name_multiple_components
 */
static void of_test_find_node_by_name_multiple_components_2(struct kunit *test)
{
	struct device_node *np;
	const char *name;

	np = of_find_node_by_path("testcase-alias/phandle-tests/consumer-a");
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	name = kasprintf(GFP_KERNEL, "%pOF", np);
	KUNIT_EXPECT_STREQ_MSG(
		test, "/testcase-data/phandle-tests/consumer-a", name,
		"find testcase-alias/phandle-tests/consumer-a failed\n");
	of_node_put(np);
	kfree(name);
}

static void of_test_find_node_by_name_missing_path(struct kunit *test)
{
	struct device_node *np;

	KUNIT_EXPECT_EQ_MSG(
		test,
		np = of_find_node_by_path("/testcase-data/missing-path"), NULL,
		"non-existent path returned node %pOF\n", np);
	of_node_put(np);
}

static void of_test_find_node_by_name_missing_alias(struct kunit *test)
{
	struct device_node *np;

	KUNIT_EXPECT_EQ_MSG(
		test, np = of_find_node_by_path("missing-alias"), NULL,
		"non-existent alias returned node %pOF\n", np);
	of_node_put(np);
}

static void of_test_find_node_by_name_missing_alias_with_relative_path(
		struct kunit *test)
{
	struct device_node *np;

	KUNIT_EXPECT_EQ_MSG(
		test,
		np = of_find_node_by_path("testcase-alias/missing-path"), NULL,
		"non-existent alias with relative path returned node %pOF\n",
		np);
	of_node_put(np);
}

static void of_test_find_node_by_name_with_option(struct kunit *test)
{
	struct device_node *np;
	const char *options;

	np = of_find_node_opts_by_path("/testcase-data:testoption", &options);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	KUNIT_EXPECT_STREQ_MSG(test, "testoption", options,
			       "option path test failed\n");
	of_node_put(np);
}

static void of_test_find_node_by_name_with_option_and_slash(struct kunit *test)
{
	struct device_node *np;
	const char *options;

	np = of_find_node_opts_by_path("/testcase-data:test/option", &options);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
			       "option path test, subcase #1 failed\n");
	of_node_put(np);

	np = of_find_node_opts_by_path("/testcase-data/testcase-device1:test/option", &options);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
			       "option path test, subcase #2 failed\n");
	of_node_put(np);
}

static void of_test_find_node_by_name_with_null_option(struct kunit *test)
{
	struct device_node *np;

	np = of_find_node_opts_by_path("/testcase-data:testoption", NULL);
	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(test, np,
					 "NULL option path test failed\n");
	of_node_put(np);
}

static void of_test_find_node_by_name_with_option_alias(struct kunit *test)
{
	struct device_node *np;
	const char *options;

	np = of_find_node_opts_by_path("testcase-alias:testaliasoption",
				       &options);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	KUNIT_EXPECT_STREQ_MSG(test, "testaliasoption", options,
			       "option alias path test failed\n");
	of_node_put(np);
}

static void of_test_find_node_by_name_with_option_alias_and_slash(
		struct kunit *test)
{
	struct device_node *np;
	const char *options;

	np = of_find_node_opts_by_path("testcase-alias:test/alias/option",
				       &options);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	KUNIT_EXPECT_STREQ_MSG(test, "test/alias/option", options,
			       "option alias path test, subcase #1 failed\n");
	of_node_put(np);
}

static void of_test_find_node_by_name_with_null_option_alias(struct kunit *test)
{
	struct device_node *np;

	np = of_find_node_opts_by_path("testcase-alias:testaliasoption", NULL);
	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(
			test, np, "NULL option alias path test failed\n");
	of_node_put(np);
}

static void of_test_find_node_by_name_option_clearing(struct kunit *test)
{
	struct device_node *np;
	const char *options;

	options = "testoption";
	np = of_find_node_opts_by_path("testcase-alias", &options);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	KUNIT_EXPECT_EQ_MSG(test, options, NULL,
			    "option clearing test failed\n");
	of_node_put(np);
}

static void of_test_find_node_by_name_option_clearing_root(struct kunit *test)
{
	struct device_node *np;
	const char *options;

	options = "testoption";
	np = of_find_node_opts_by_path("/", &options);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	KUNIT_EXPECT_EQ_MSG(test, options, NULL,
			    "option clearing root node test failed\n");
	of_node_put(np);
}

static int of_test_find_node_by_name_init(struct kunit *test)
{
	/* adding data for unittest */
	KUNIT_ASSERT_EQ(test, 0, unittest_data_add());

	if (!of_aliases)
		of_aliases = of_find_node_by_path("/aliases");

	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, of_find_node_by_path(
			"/testcase-data/phandle-tests/consumer-a"));

	return 0;
}

static struct kunit_case of_test_find_node_by_name_cases[] = {
	KUNIT_CASE(of_test_find_node_by_name_basic),
	KUNIT_CASE(of_test_find_node_by_name_trailing_slash),
	KUNIT_CASE(of_test_find_node_by_name_multiple_components),
	KUNIT_CASE(of_test_find_node_by_name_with_alias),
	KUNIT_CASE(of_test_find_node_by_name_with_alias_and_slash),
	KUNIT_CASE(of_test_find_node_by_name_multiple_components_2),
	KUNIT_CASE(of_test_find_node_by_name_missing_path),
	KUNIT_CASE(of_test_find_node_by_name_missing_alias),
	KUNIT_CASE(of_test_find_node_by_name_missing_alias_with_relative_path),
	KUNIT_CASE(of_test_find_node_by_name_with_option),
	KUNIT_CASE(of_test_find_node_by_name_with_option_and_slash),
	KUNIT_CASE(of_test_find_node_by_name_with_null_option),
	KUNIT_CASE(of_test_find_node_by_name_with_option_alias),
	KUNIT_CASE(of_test_find_node_by_name_with_option_alias_and_slash),
	KUNIT_CASE(of_test_find_node_by_name_with_null_option_alias),
	KUNIT_CASE(of_test_find_node_by_name_option_clearing),
	KUNIT_CASE(of_test_find_node_by_name_option_clearing_root),
	{},
};

static struct kunit_module of_test_find_node_by_name_module = {
	.name = "of-test-find-node-by-name",
	.init = of_test_find_node_by_name_init,
	.test_cases = of_test_find_node_by_name_cases,
};
module_test(of_test_find_node_by_name_module);


## ==========  frank version  ===================================

// SPDX-License-Identifier: GPL-2.0
/*
 * Unit tests for functions defined in base.c.
 */
#include <linux/of.h>

#include <kunit/test.h>

#include "test-common.h"

static void of_unittest_find_node_by_name(struct kunit *test)
{
	struct device_node *np;
	const char *options, *name;


	// find node by name basic

	np = of_find_node_by_path("/testcase-data");
	name = kasprintf(GFP_KERNEL, "%pOF", np);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
			       "find /testcase-data failed\n");
	of_node_put(np);
	kfree(name);


	// find node by name trailing slash

	/* Test if trailing '/' works */
	KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("/testcase-data/"), NULL,
			    "trailing '/' on /testcase-data/ should fail\n");


	// find node by name multiple components

	np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	name = kasprintf(GFP_KERNEL, "%pOF", np);
	KUNIT_EXPECT_STREQ_MSG(
		test, "/testcase-data/phandle-tests/consumer-a", name,
		"find /testcase-data/phandle-tests/consumer-a failed\n");
	of_node_put(np);
	kfree(name);


	// find node by name with alias

	np = of_find_node_by_path("testcase-alias");
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	name = kasprintf(GFP_KERNEL, "%pOF", np);
	KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
			       "find testcase-alias failed\n");
	of_node_put(np);
	kfree(name);


	// find node by name with alias and slash

	/* Test if trailing '/' works on aliases */
	KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("testcase-alias/"), NULL,
			    "trailing '/' on testcase-alias/ should fail\n");


	// find node by name multiple components 2

	np = of_find_node_by_path("testcase-alias/phandle-tests/consumer-a");
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	name = kasprintf(GFP_KERNEL, "%pOF", np);
	KUNIT_EXPECT_STREQ_MSG(
		test, "/testcase-data/phandle-tests/consumer-a", name,
		"find testcase-alias/phandle-tests/consumer-a failed\n");
	of_node_put(np);
	kfree(name);


	// find node by name missing path

	KUNIT_EXPECT_EQ_MSG(
		test,
		np = of_find_node_by_path("/testcase-data/missing-path"), NULL,
		"non-existent path returned node %pOF\n", np);
	of_node_put(np);


	// find node by name missing alias

	KUNIT_EXPECT_EQ_MSG(
		test, np = of_find_node_by_path("missing-alias"), NULL,
		"non-existent alias returned node %pOF\n", np);
	of_node_put(np);


	// find node by name missing alias with relative path

	KUNIT_EXPECT_EQ_MSG(
		test,
		np = of_find_node_by_path("testcase-alias/missing-path"), NULL,
		"non-existent alias with relative path returned node %pOF\n",
		np);
	of_node_put(np);


	// find node by name with option

	np = of_find_node_opts_by_path("/testcase-data:testoption", &options);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	KUNIT_EXPECT_STREQ_MSG(test, "testoption", options,
			       "option path test failed\n");
	of_node_put(np);


	// find node by name with option and slash

	np = of_find_node_opts_by_path("/testcase-data:test/option", &options);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
			       "option path test, subcase #1 failed\n");
	of_node_put(np);

	np = of_find_node_opts_by_path("/testcase-data/testcase-device1:test/option", &options);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
			       "option path test, subcase #2 failed\n");
	of_node_put(np);


	// find node by name with null option

	np = of_find_node_opts_by_path("/testcase-data:testoption", NULL);
	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(test, np,
					 "NULL option path test failed\n");
	of_node_put(np);


	// find node by name with option alias

	np = of_find_node_opts_by_path("testcase-alias:testaliasoption",
				       &options);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	KUNIT_EXPECT_STREQ_MSG(test, "testaliasoption", options,
			       "option alias path test failed\n");
	of_node_put(np);


	// find node by name with option alias and slash

	np = of_find_node_opts_by_path("testcase-alias:test/alias/option",
				       &options);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	KUNIT_EXPECT_STREQ_MSG(test, "test/alias/option", options,
			       "option alias path test, subcase #1 failed\n");
	of_node_put(np);


	// find node by name with null option alias

	np = of_find_node_opts_by_path("testcase-alias:testaliasoption", NULL);
	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(
			test, np, "NULL option alias path test failed\n");
	of_node_put(np);


	// find node by name option clearing

	options = "testoption";
	np = of_find_node_opts_by_path("testcase-alias", &options);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	KUNIT_EXPECT_EQ_MSG(test, options, NULL,
			    "option clearing test failed\n");
	of_node_put(np);


	// find node by name option clearing root

	options = "testoption";
	np = of_find_node_opts_by_path("/", &options);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	KUNIT_EXPECT_EQ_MSG(test, options, NULL,
			    "option clearing root node test failed\n");
	of_node_put(np);
}

static int of_test_init(struct kunit *test)
{
	/* adding data for unittest */
	KUNIT_ASSERT_EQ(test, 0, unittest_data_add());

	if (!of_aliases)
		of_aliases = of_find_node_by_path("/aliases");

	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, of_find_node_by_path(
			"/testcase-data/phandle-tests/consumer-a"));

	return 0;
}

static struct kunit_case of_test_cases[] = {
	KUNIT_CASE(of_unittest_find_node_by_name),
	{},
};

static struct kunit_module of_test_module = {
	.name = "of-base-test",
	.init = of_test_init,
	.test_cases = of_test_cases,
};
module_test(of_test_module);


> 
> 
>>> be cases where the devicetree unittests are currently not well grouped
>>> and may benefit from change, but if so that should be handled independently
>>> of any transformation into a KUnit framework.
>>
>> I agree. I did this because I wanted to illustrate what I thought real
>> world KUnit unit tests should look like (I also wanted to be able to
>> show off KUnit test features that help you write these kinds of
>> tests); I was not necessarily intending that all the of: unittest
>> patches would get merged in with the whole RFC. I was mostly trying to
>> create cause for discussion (which it seems like I succeeded at ;-) ).
>>
>> So fair enough, I will propose these patches separately and later
>> (except of course this one that splits up the file). Do you want the
>> initial transformation to the KUnit framework in the main KUnit
>> patchset, or do you want that to be done separately? If I recall, Rob
>> suggested this as a good initial example that other people could refer
>> to, and some people seemed to think that I needed one to help guide
>> the discussion and provide direction for early users. I don't
>> necessarily think that means the initial real world example needs to
>> be a part of the initial patchset though.
>>
>> Cheers
>>
> 
> 

^ permalink raw reply	[flat|nested] 232+ messages in thread

* [RFC v3 18/19] of: unittest: split out a couple of test cases from unittest
  2019-02-20 20:44                 ` frowand.list
  2019-02-20 20:44                   ` Frank Rowand
@ 2019-02-20 20:47                   ` frowand.list
  2019-02-20 20:47                     ` Frank Rowand
  2019-02-28  3:52                   ` brendanhiggins
  2 siblings, 1 reply; 232+ messages in thread
From: frowand.list @ 2019-02-20 20:47 UTC (permalink / raw)


On 2/20/19 12:44 PM, Frank Rowand wrote:
> On 2/18/19 2:25 PM, Frank Rowand wrote:
>> On 2/15/19 2:56 AM, Brendan Higgins wrote:
>>> On Thu, Feb 14, 2019 at 6:05 PM Frank Rowand <frowand.list@gmail.com> wrote:
>>>>
>>>> On 2/14/19 4:56 PM, Brendan Higgins wrote:
>>>>> On Thu, Feb 14, 2019 at 3:57 PM Frank Rowand <frowand.list@gmail.com> wrote:
>>>>>>
>>>>>> On 12/5/18 3:54 PM, Brendan Higgins wrote:
>>>>>>> On Tue, Dec 4, 2018 at 2:58 AM Frank Rowand <frowand.list@gmail.com> wrote:
>>>>>>>>
> 
> < snip >
> 
>>
>> It makes it harder for me to read the source of the tests and
>> understand the order they will execute.  It also makes it harder
>> for me to read through the actual tests (in this example the
>> tests that are currently grouped in of_unittest_find_node_by_name())
>> because of all the extra function headers injected into the
>> existing single function to break it apart into many smaller
>> functions.
> 
> < snip >
> 
>>>>> This is not something I feel particularly strongly about; it is just
>>>>> pretty atypical, in my experience, to have so many unrelated test
>>>>> cases in a single file.
>>>>>
>>>>> Maybe you would prefer that I break up the test cases first, and then
>>>>> we split up the file as appropriate?
>>>>
>>>> I prefer that the test cases not be broken up arbitrarily.  There _may_
> 
> I expect that I created confusion by putting this in a reply to patch 18/19.
> It is actually a comment about patch 19/19.  Sorry about that.
> 
> 
>>>
>>> I wasn't trying to break them up arbitrarily. I thought I was doing it
>>> according to a pattern (breaking up the file, that is), but maybe I
>>> just hadn't looked at enough examples.
>>
>> This goes back to the kunit model of putting each test into a separate
>> function that can be a KUNIT_CASE().  That is a model that I do not agree
>> with for devicetree.
> 
> So now that I am actually talking about patch 19/19, let me give a concrete
> example.  I will cut and paste (after my comments) the beginning portion
> of base-test.c, after applying patch 19/19 (the "base version").  Then I
> will cut and paste my alternative version, which does not break the tests
> down into individual functions (the "frank version").
> 
> I will also reply to this email with the base version and the frank version
> as attachments, which will make it easier to save them as separate files
> for viewing.  I'm not sure if an email with attachments will make it
> through the list servers, but I am cautiously optimistic.

base_version and frank_version attached.

-Frank


> 
> I am using v4 of the patch series because I never got v3 to apply cleanly,
> and it is not a constructive use of my time to do so now that I have v4
> applied.
> 
> One of the points I was trying to make is that readability suffers from the
> approach taken by patches 18/19 and 19/19.
> 
> The base version contains the extra text of a function header for each
> unit test.  This is visual noise and makes the file larger.  It is also
> one more possible location of an error (although not likely).
> 
> The frank version has converted each of the new function headers into
> a comment, using the function name with '_' converted to ' '.  The
> comments are more readable than the function headers.  Note that I added
> an extra blank line before each comment, which violates the kernel
> coding standards, but I feel this makes the code more readable.
> 
> The base version needs to declare each of the individual test functions
> in of_test_find_node_by_name_cases[].  It is possible for a test function
> to be left out of that array in error.  This will result in a compile
> warning (I think a warning rather than an error, but I have not verified
> that), so the omission might be caught or it might be overlooked.
> 
> In the base version, following the order of execution of the test code
> requires bouncing back and forth between the test functions and the
> ordering of entries in of_test_find_node_by_name_cases[].
> 
> In the frank version the order of execution of the test code is obvious.
> 
> The base version is 265 lines.  The frank version is 208 lines, 57 lines
> fewer.  Less is better.
> 
> 
> ## ==========  base version  ====================================
> 
> // SPDX-License-Identifier: GPL-2.0
> /*
>  * Unit tests for functions defined in base.c.
>  */
> #include <linux/of.h>
> 
> #include <kunit/test.h>
> 
> #include "test-common.h"
> 
> static void of_test_find_node_by_name_basic(struct kunit *test)
> {
> 	struct device_node *np;
> 	const char *name;
> 
> 	np = of_find_node_by_path("/testcase-data");
> 	name = kasprintf(GFP_KERNEL, "%pOF", np);
> 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> 	KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
> 			       "find /testcase-data failed\n");
> 	of_node_put(np);
> 	kfree(name);
> }
> 
> static void of_test_find_node_by_name_trailing_slash(struct kunit *test)
> {
> 	/* Test if trailing '/' works */
> 	KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("/testcase-data/"), NULL,
> 			    "trailing '/' on /testcase-data/ should fail\n");
> }
> 
> static void of_test_find_node_by_name_multiple_components(struct kunit *test)
> {
> 	struct device_node *np;
> 	const char *name;
> 
> 	np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
> 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> 	name = kasprintf(GFP_KERNEL, "%pOF", np);
> 	KUNIT_EXPECT_STREQ_MSG(
> 		test, "/testcase-data/phandle-tests/consumer-a", name,
> 		"find /testcase-data/phandle-tests/consumer-a failed\n");
> 	of_node_put(np);
> 	kfree(name);
> }
> 
> static void of_test_find_node_by_name_with_alias(struct kunit *test)
> {
> 	struct device_node *np;
> 	const char *name;
> 
> 	np = of_find_node_by_path("testcase-alias");
> 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> 	name = kasprintf(GFP_KERNEL, "%pOF", np);
> 	KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
> 			       "find testcase-alias failed\n");
> 	of_node_put(np);
> 	kfree(name);
> }
> 
> static void of_test_find_node_by_name_with_alias_and_slash(struct kunit *test)
> {
> 	/* Test if trailing '/' works on aliases */
> 	KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("testcase-alias/"), NULL,
> 			    "trailing '/' on testcase-alias/ should fail\n");
> }
> 
> /*
>  * TODO(brendanhiggins@google.com): This looks like a duplicate of
>  * of_test_find_node_by_name_multiple_components
>  */
> static void of_test_find_node_by_name_multiple_components_2(struct kunit *test)
> {
> 	struct device_node *np;
> 	const char *name;
> 
> 	np = of_find_node_by_path("testcase-alias/phandle-tests/consumer-a");
> 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> 	name = kasprintf(GFP_KERNEL, "%pOF", np);
> 	KUNIT_EXPECT_STREQ_MSG(
> 		test, "/testcase-data/phandle-tests/consumer-a", name,
> 		"find testcase-alias/phandle-tests/consumer-a failed\n");
> 	of_node_put(np);
> 	kfree(name);
> }
> 
> static void of_test_find_node_by_name_missing_path(struct kunit *test)
> {
> 	struct device_node *np;
> 
> 	KUNIT_EXPECT_EQ_MSG(
> 		test,
> 		np = of_find_node_by_path("/testcase-data/missing-path"), NULL,
> 		"non-existent path returned node %pOF\n", np);
> 	of_node_put(np);
> }
> 
> static void of_test_find_node_by_name_missing_alias(struct kunit *test)
> {
> 	struct device_node *np;
> 
> 	KUNIT_EXPECT_EQ_MSG(
> 		test, np = of_find_node_by_path("missing-alias"), NULL,
> 		"non-existent alias returned node %pOF\n", np);
> 	of_node_put(np);
> }
> 
> static void of_test_find_node_by_name_missing_alias_with_relative_path(
> 		struct kunit *test)
> {
> 	struct device_node *np;
> 
> 	KUNIT_EXPECT_EQ_MSG(
> 		test,
> 		np = of_find_node_by_path("testcase-alias/missing-path"), NULL,
> 		"non-existent alias with relative path returned node %pOF\n",
> 		np);
> 	of_node_put(np);
> }
> 
> static void of_test_find_node_by_name_with_option(struct kunit *test)
> {
> 	struct device_node *np;
> 	const char *options;
> 
> 	np = of_find_node_opts_by_path("/testcase-data:testoption", &options);
> 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> 	KUNIT_EXPECT_STREQ_MSG(test, "testoption", options,
> 			       "option path test failed\n");
> 	of_node_put(np);
> }
> 
> static void of_test_find_node_by_name_with_option_and_slash(struct kunit *test)
> {
> 	struct device_node *np;
> 	const char *options;
> 
> 	np = of_find_node_opts_by_path("/testcase-data:test/option", &options);
> 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> 	KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
> 			       "option path test, subcase #1 failed\n");
> 	of_node_put(np);
> 
> 	np = of_find_node_opts_by_path("/testcase-data/testcase-device1:test/option", &options);
> 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> 	KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
> 			       "option path test, subcase #2 failed\n");
> 	of_node_put(np);
> }
> 
> static void of_test_find_node_by_name_with_null_option(struct kunit *test)
> {
> 	struct device_node *np;
> 
> 	np = of_find_node_opts_by_path("/testcase-data:testoption", NULL);
> 	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(test, np,
> 					 "NULL option path test failed\n");
> 	of_node_put(np);
> }
> 
> static void of_test_find_node_by_name_with_option_alias(struct kunit *test)
> {
> 	struct device_node *np;
> 	const char *options;
> 
> 	np = of_find_node_opts_by_path("testcase-alias:testaliasoption",
> 				       &options);
> 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> 	KUNIT_EXPECT_STREQ_MSG(test, "testaliasoption", options,
> 			       "option alias path test failed\n");
> 	of_node_put(np);
> }
> 
> static void of_test_find_node_by_name_with_option_alias_and_slash(
> 		struct kunit *test)
> {
> 	struct device_node *np;
> 	const char *options;
> 
> 	np = of_find_node_opts_by_path("testcase-alias:test/alias/option",
> 				       &options);
> 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> 	KUNIT_EXPECT_STREQ_MSG(test, "test/alias/option", options,
> 			       "option alias path test, subcase #1 failed\n");
> 	of_node_put(np);
> }
> 
> static void of_test_find_node_by_name_with_null_option_alias(struct kunit *test)
> {
> 	struct device_node *np;
> 
> 	np = of_find_node_opts_by_path("testcase-alias:testaliasoption", NULL);
> 	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(
> 			test, np, "NULL option alias path test failed\n");
> 	of_node_put(np);
> }
> 
> static void of_test_find_node_by_name_option_clearing(struct kunit *test)
> {
> 	struct device_node *np;
> 	const char *options;
> 
> 	options = "testoption";
> 	np = of_find_node_opts_by_path("testcase-alias", &options);
> 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> 	KUNIT_EXPECT_EQ_MSG(test, options, NULL,
> 			    "option clearing test failed\n");
> 	of_node_put(np);
> }
> 
> static void of_test_find_node_by_name_option_clearing_root(struct kunit *test)
> {
> 	struct device_node *np;
> 	const char *options;
> 
> 	options = "testoption";
> 	np = of_find_node_opts_by_path("/", &options);
> 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> 	KUNIT_EXPECT_EQ_MSG(test, options, NULL,
> 			    "option clearing root node test failed\n");
> 	of_node_put(np);
> }
> 
> static int of_test_find_node_by_name_init(struct kunit *test)
> {
> 	/* adding data for unittest */
> 	KUNIT_ASSERT_EQ(test, 0, unittest_data_add());
> 
> 	if (!of_aliases)
> 		of_aliases = of_find_node_by_path("/aliases");
> 
> 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, of_find_node_by_path(
> 			"/testcase-data/phandle-tests/consumer-a"));
> 
> 	return 0;
> }
> 
> static struct kunit_case of_test_find_node_by_name_cases[] = {
> 	KUNIT_CASE(of_test_find_node_by_name_basic),
> 	KUNIT_CASE(of_test_find_node_by_name_trailing_slash),
> 	KUNIT_CASE(of_test_find_node_by_name_multiple_components),
> 	KUNIT_CASE(of_test_find_node_by_name_with_alias),
> 	KUNIT_CASE(of_test_find_node_by_name_with_alias_and_slash),
> 	KUNIT_CASE(of_test_find_node_by_name_multiple_components_2),
> 	KUNIT_CASE(of_test_find_node_by_name_missing_path),
> 	KUNIT_CASE(of_test_find_node_by_name_missing_alias),
> 	KUNIT_CASE(of_test_find_node_by_name_missing_alias_with_relative_path),
> 	KUNIT_CASE(of_test_find_node_by_name_with_option),
> 	KUNIT_CASE(of_test_find_node_by_name_with_option_and_slash),
> 	KUNIT_CASE(of_test_find_node_by_name_with_null_option),
> 	KUNIT_CASE(of_test_find_node_by_name_with_option_alias),
> 	KUNIT_CASE(of_test_find_node_by_name_with_option_alias_and_slash),
> 	KUNIT_CASE(of_test_find_node_by_name_with_null_option_alias),
> 	KUNIT_CASE(of_test_find_node_by_name_option_clearing),
> 	KUNIT_CASE(of_test_find_node_by_name_option_clearing_root),
> 	{},
> };
> 
> static struct kunit_module of_test_find_node_by_name_module = {
> 	.name = "of-test-find-node-by-name",
> 	.init = of_test_find_node_by_name_init,
> 	.test_cases = of_test_find_node_by_name_cases,
> };
> module_test(of_test_find_node_by_name_module);
> 
> 
> ## ==========  frank version  ===================================
> 
> // SPDX-License-Identifier: GPL-2.0
> /*
>  * Unit tests for functions defined in base.c.
>  */
> #include <linux/of.h>
> 
> #include <kunit/test.h>
> 
> #include "test-common.h"
> 
> static void of_unittest_find_node_by_name(struct kunit *test)
> {
> 	struct device_node *np;
> 	const char *options, *name;
> 
> 
> 	// find node by name basic
> 
> 	np = of_find_node_by_path("/testcase-data");
> 	name = kasprintf(GFP_KERNEL, "%pOF", np);
> 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> 	KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
> 			       "find /testcase-data failed\n");
> 	of_node_put(np);
> 	kfree(name);
> 
> 
> 	// find node by name trailing slash
> 
> 	/* Test if trailing '/' works */
> 	KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("/testcase-data/"), NULL,
> 			    "trailing '/' on /testcase-data/ should fail\n");
> 
> 
> 	// find node by name multiple components
> 
> 	np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
> 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> 	name = kasprintf(GFP_KERNEL, "%pOF", np);
> 	KUNIT_EXPECT_STREQ_MSG(
> 		test, "/testcase-data/phandle-tests/consumer-a", name,
> 		"find /testcase-data/phandle-tests/consumer-a failed\n");
> 	of_node_put(np);
> 	kfree(name);
> 
> 
> 	// find node by name with alias
> 
> 	np = of_find_node_by_path("testcase-alias");
> 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> 	name = kasprintf(GFP_KERNEL, "%pOF", np);
> 	KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
> 			       "find testcase-alias failed\n");
> 	of_node_put(np);
> 	kfree(name);
> 
> 
> 	// find node by name with alias and slash
> 
> 	/* Test if trailing '/' works on aliases */
> 	KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("testcase-alias/"), NULL,
> 			    "trailing '/' on testcase-alias/ should fail\n");
> 
> 
> 	// find node by name multiple components 2
> 
> 	np = of_find_node_by_path("testcase-alias/phandle-tests/consumer-a");
> 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> 	name = kasprintf(GFP_KERNEL, "%pOF", np);
> 	KUNIT_EXPECT_STREQ_MSG(
> 		test, "/testcase-data/phandle-tests/consumer-a", name,
> 		"find testcase-alias/phandle-tests/consumer-a failed\n");
> 	of_node_put(np);
> 	kfree(name);
> 
> 
> 	// find node by name missing path
> 
> 	KUNIT_EXPECT_EQ_MSG(
> 		test,
> 		np = of_find_node_by_path("/testcase-data/missing-path"), NULL,
> 		"non-existent path returned node %pOF\n", np);
> 	of_node_put(np);
> 
> 
> 	// find node by name missing alias
> 
> 	KUNIT_EXPECT_EQ_MSG(
> 		test, np = of_find_node_by_path("missing-alias"), NULL,
> 		"non-existent alias returned node %pOF\n", np);
> 	of_node_put(np);
> 
> 
> 	// find node by name missing alias with relative path
> 
> 	KUNIT_EXPECT_EQ_MSG(
> 		test,
> 		np = of_find_node_by_path("testcase-alias/missing-path"), NULL,
> 		"non-existent alias with relative path returned node %pOF\n",
> 		np);
> 	of_node_put(np);
> 
> 
> 	// find node by name with option
> 
> 	np = of_find_node_opts_by_path("/testcase-data:testoption", &options);
> 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> 	KUNIT_EXPECT_STREQ_MSG(test, "testoption", options,
> 			       "option path test failed\n");
> 	of_node_put(np);
> 
> 
> 	// find node by name with option and slash
> 
> 	np = of_find_node_opts_by_path("/testcase-data:test/option", &options);
> 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> 	KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
> 			       "option path test, subcase #1 failed\n");
> 	of_node_put(np);
> 
> 	np = of_find_node_opts_by_path("/testcase-data/testcase-device1:test/option", &options);
> 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> 	KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
> 			       "option path test, subcase #2 failed\n");
> 	of_node_put(np);
> 
> 
> 	// find node by name with null option
> 
> 	np = of_find_node_opts_by_path("/testcase-data:testoption", NULL);
> 	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(test, np,
> 					 "NULL option path test failed\n");
> 	of_node_put(np);
> 
> 
> 	// find node by name with option alias
> 
> 	np = of_find_node_opts_by_path("testcase-alias:testaliasoption",
> 				       &options);
> 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> 	KUNIT_EXPECT_STREQ_MSG(test, "testaliasoption", options,
> 			       "option alias path test failed\n");
> 	of_node_put(np);
> 
> 
> 	// find node by name with option alias and slash
> 
> 	np = of_find_node_opts_by_path("testcase-alias:test/alias/option",
> 				       &options);
> 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> 	KUNIT_EXPECT_STREQ_MSG(test, "test/alias/option", options,
> 			       "option alias path test, subcase #1 failed\n");
> 	of_node_put(np);
> 
> 
> 	// find node by name with null option alias
> 
> 	np = of_find_node_opts_by_path("testcase-alias:testaliasoption", NULL);
> 	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(
> 			test, np, "NULL option alias path test failed\n");
> 	of_node_put(np);
> 
> 
> 	// find node by name option clearing
> 
> 	options = "testoption";
> 	np = of_find_node_opts_by_path("testcase-alias", &options);
> 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> 	KUNIT_EXPECT_EQ_MSG(test, options, NULL,
> 			    "option clearing test failed\n");
> 	of_node_put(np);
> 
> 
> 	// find node by name option clearing root
> 
> 	options = "testoption";
> 	np = of_find_node_opts_by_path("/", &options);
> 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> 	KUNIT_EXPECT_EQ_MSG(test, options, NULL,
> 			    "option clearing root node test failed\n");
> 	of_node_put(np);
> }
> 
> static int of_test_init(struct kunit *test)
> {
> 	/* adding data for unittest */
> 	KUNIT_ASSERT_EQ(test, 0, unittest_data_add());
> 
> 	if (!of_aliases)
> 		of_aliases = of_find_node_by_path("/aliases");
> 
> 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, of_find_node_by_path(
> 			"/testcase-data/phandle-tests/consumer-a"));
> 
> 	return 0;
> }
> 
> static struct kunit_case of_test_cases[] = {
> 	KUNIT_CASE(of_unittest_find_node_by_name),
> 	{},
> };
> 
> static struct kunit_module of_test_module = {
> 	.name = "of-base-test",
> 	.init = of_test_init,
> 	.test_cases = of_test_cases,
> };
> module_test(of_test_module);
> 
> 
>>
>>
>>>> be cases where the devicetree unittests are currently not well grouped
>>>> and may benefit from change, but if so that should be handled independently
>>>> of any transformation into a KUnit framework.
>>>
>>> I agree. I did this because I wanted to illustrate what I thought real
>>> world KUnit unit tests should look like (I also wanted to be able to
>>> show off KUnit test features that help you write these kinds of
>>> tests); I was not necessarily intending that all the of: unittest
>>> patches would get merged in with the whole RFC. I was mostly trying to
>>> create cause for discussion (which it seems like I succeeded at ;-) ).
>>>
>>> So fair enough, I will propose these patches separately and later
>>> (except of course this one that splits up the file). Do you want the
>>> initial transformation to the KUnit framework in the main KUnit
>>> patchset, or do you want that to be done separately? If I recall, Rob
>>> suggested this as a good initial example that other people could refer
>>> to, and some people seemed to think that I needed one to help guide
>>> the discussion and provide direction for early users. I don't
>>> necessarily think that means the initial real world example needs to
>>> be a part of the initial patchset though.
>>>
>>> Cheers
>>>
>>
>>
> 
> 

-------------- next part --------------
// SPDX-License-Identifier: GPL-2.0
/*
 * Unit tests for functions defined in base.c.
 */
#include <linux/of.h>

#include <kunit/test.h>

#include "test-common.h"

static void of_test_find_node_by_name_basic(struct kunit *test)
{
	struct device_node *np;
	const char *name;

	np = of_find_node_by_path("/testcase-data");
	name = kasprintf(GFP_KERNEL, "%pOF", np);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
			       "find /testcase-data failed\n");
	of_node_put(np);
	kfree(name);
}

static void of_test_find_node_by_name_trailing_slash(struct kunit *test)
{
	/* Test if trailing '/' works */
	KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("/testcase-data/"), NULL,
			    "trailing '/' on /testcase-data/ should fail\n");

}

static void of_test_find_node_by_name_multiple_components(struct kunit *test)
{
	struct device_node *np;
	const char *name;

	np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	name = kasprintf(GFP_KERNEL, "%pOF", np);
	KUNIT_EXPECT_STREQ_MSG(
		test, "/testcase-data/phandle-tests/consumer-a", name,
		"find /testcase-data/phandle-tests/consumer-a failed\n");
	of_node_put(np);
	kfree(name);
}

static void of_test_find_node_by_name_with_alias(struct kunit *test)
{
	struct device_node *np;
	const char *name;

	np = of_find_node_by_path("testcase-alias");
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	name = kasprintf(GFP_KERNEL, "%pOF", np);
	KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
			       "find testcase-alias failed\n");
	of_node_put(np);
	kfree(name);
}

static void of_test_find_node_by_name_with_alias_and_slash(struct kunit *test)
{
	/* Test if trailing '/' works on aliases */
	KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("testcase-alias/"), NULL,
			   "trailing '/' on testcase-alias/ should fail\n");
}

/*
 * TODO(brendanhiggins@google.com): This looks like a duplicate of
 * of_test_find_node_by_name_multiple_components
 */
static void of_test_find_node_by_name_multiple_components_2(struct kunit *test)
{
	struct device_node *np;
	const char *name;

	np = of_find_node_by_path("testcase-alias/phandle-tests/consumer-a");
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	name = kasprintf(GFP_KERNEL, "%pOF", np);
	KUNIT_EXPECT_STREQ_MSG(
		test, "/testcase-data/phandle-tests/consumer-a", name,
		"find testcase-alias/phandle-tests/consumer-a failed\n");
	of_node_put(np);
	kfree(name);
}

static void of_test_find_node_by_name_missing_path(struct kunit *test)
{
	struct device_node *np;

	KUNIT_EXPECT_EQ_MSG(
		test,
		np = of_find_node_by_path("/testcase-data/missing-path"), NULL,
		"non-existent path returned node %pOF\n", np);
	of_node_put(np);
}

static void of_test_find_node_by_name_missing_alias(struct kunit *test)
{
	struct device_node *np;

	KUNIT_EXPECT_EQ_MSG(
		test, np = of_find_node_by_path("missing-alias"), NULL,
		"non-existent alias returned node %pOF\n", np);
	of_node_put(np);
}

static void of_test_find_node_by_name_missing_alias_with_relative_path(
		struct kunit *test)
{
	struct device_node *np;

	KUNIT_EXPECT_EQ_MSG(
		test,
		np = of_find_node_by_path("testcase-alias/missing-path"), NULL,
		"non-existent alias with relative path returned node %pOF\n",
		np);
	of_node_put(np);
}

static void of_test_find_node_by_name_with_option(struct kunit *test)
{
	struct device_node *np;
	const char *options;

	np = of_find_node_opts_by_path("/testcase-data:testoption", &options);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	KUNIT_EXPECT_STREQ_MSG(test, "testoption", options,
			       "option path test failed\n");
	of_node_put(np);
}

static void of_test_find_node_by_name_with_option_and_slash(struct kunit *test)
{
	struct device_node *np;
	const char *options;

	np = of_find_node_opts_by_path("/testcase-data:test/option", &options);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
			       "option path test, subcase #1 failed\n");
	of_node_put(np);

	np = of_find_node_opts_by_path("/testcase-data/testcase-device1:test/option", &options);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
			       "option path test, subcase #2 failed\n");
	of_node_put(np);
}

static void of_test_find_node_by_name_with_null_option(struct kunit *test)
{
	struct device_node *np;

	np = of_find_node_opts_by_path("/testcase-data:testoption", NULL);
	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(test, np,
					 "NULL option path test failed\n");
	of_node_put(np);
}

static void of_test_find_node_by_name_with_option_alias(struct kunit *test)
{
	struct device_node *np;
	const char *options;

	np = of_find_node_opts_by_path("testcase-alias:testaliasoption",
				       &options);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	KUNIT_EXPECT_STREQ_MSG(test, "testaliasoption", options,
			       "option alias path test failed\n");
	of_node_put(np);
}

static void of_test_find_node_by_name_with_option_alias_and_slash(
		struct kunit *test)
{
	struct device_node *np;
	const char *options;

	np = of_find_node_opts_by_path("testcase-alias:test/alias/option",
				       &options);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	KUNIT_EXPECT_STREQ_MSG(test, "test/alias/option", options,
			       "option alias path test, subcase #1 failed\n");
	of_node_put(np);
}

static void of_test_find_node_by_name_with_null_option_alias(struct kunit *test)
{
	struct device_node *np;

	np = of_find_node_opts_by_path("testcase-alias:testaliasoption", NULL);
	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(
			test, np, "NULL option alias path test failed\n");
	of_node_put(np);
}

static void of_test_find_node_by_name_option_clearing(struct kunit *test)
{
	struct device_node *np;
	const char *options;

	options = "testoption";
	np = of_find_node_opts_by_path("testcase-alias", &options);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	KUNIT_EXPECT_EQ_MSG(test, options, NULL,
			    "option clearing test failed\n");
	of_node_put(np);
}

static void of_test_find_node_by_name_option_clearing_root(struct kunit *test)
{
	struct device_node *np;
	const char *options;

	options = "testoption";
	np = of_find_node_opts_by_path("/", &options);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	KUNIT_EXPECT_EQ_MSG(test, options, NULL,
			    "option clearing root node test failed\n");
	of_node_put(np);
}

static int of_test_find_node_by_name_init(struct kunit *test)
{
	/* adding data for unittest */
	KUNIT_ASSERT_EQ(test, 0, unittest_data_add());

	if (!of_aliases)
		of_aliases = of_find_node_by_path("/aliases");

	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, of_find_node_by_path(
			"/testcase-data/phandle-tests/consumer-a"));

	return 0;
}

static struct kunit_case of_test_find_node_by_name_cases[] = {
	KUNIT_CASE(of_test_find_node_by_name_basic),
	KUNIT_CASE(of_test_find_node_by_name_trailing_slash),
	KUNIT_CASE(of_test_find_node_by_name_multiple_components),
	KUNIT_CASE(of_test_find_node_by_name_with_alias),
	KUNIT_CASE(of_test_find_node_by_name_with_alias_and_slash),
	KUNIT_CASE(of_test_find_node_by_name_multiple_components_2),
	KUNIT_CASE(of_test_find_node_by_name_missing_path),
	KUNIT_CASE(of_test_find_node_by_name_missing_alias),
	KUNIT_CASE(of_test_find_node_by_name_missing_alias_with_relative_path),
	KUNIT_CASE(of_test_find_node_by_name_with_option),
	KUNIT_CASE(of_test_find_node_by_name_with_option_and_slash),
	KUNIT_CASE(of_test_find_node_by_name_with_null_option),
	KUNIT_CASE(of_test_find_node_by_name_with_option_alias),
	KUNIT_CASE(of_test_find_node_by_name_with_option_alias_and_slash),
	KUNIT_CASE(of_test_find_node_by_name_with_null_option_alias),
	KUNIT_CASE(of_test_find_node_by_name_option_clearing),
	KUNIT_CASE(of_test_find_node_by_name_option_clearing_root),
	{},
};

static struct kunit_module of_test_find_node_by_name_module = {
	.name = "of-test-find-node-by-name",
	.init = of_test_find_node_by_name_init,
	.test_cases = of_test_find_node_by_name_cases,
};
module_test(of_test_find_node_by_name_module);
-------------- next part --------------
// SPDX-License-Identifier: GPL-2.0
/*
 * Unit tests for functions defined in base.c.
 */
#include <linux/of.h>

#include <kunit/test.h>

#include "test-common.h"

static void of_unittest_find_node_by_name(struct kunit *test)
{
	struct device_node *np;
	const char *options, *name;


	// find node by name basic

	np = of_find_node_by_path("/testcase-data");
	name = kasprintf(GFP_KERNEL, "%pOF", np);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
			       "find /testcase-data failed\n");
	of_node_put(np);
	kfree(name);


	// find node by name trailing slash

	/* Test if trailing '/' works */
	KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("/testcase-data/"), NULL,
			    "trailing '/' on /testcase-data/ should fail\n");


	// find node by name multiple components

	np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	name = kasprintf(GFP_KERNEL, "%pOF", np);
	KUNIT_EXPECT_STREQ_MSG(
		test, "/testcase-data/phandle-tests/consumer-a", name,
		"find /testcase-data/phandle-tests/consumer-a failed\n");
	of_node_put(np);
	kfree(name);


	// find node by name with alias

	np = of_find_node_by_path("testcase-alias");
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	name = kasprintf(GFP_KERNEL, "%pOF", np);
	KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
			       "find testcase-alias failed\n");
	of_node_put(np);
	kfree(name);


	// find node by name with alias and slash

	/* Test if trailing '/' works on aliases */
	KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("testcase-alias/"), NULL,
			    "trailing '/' on testcase-alias/ should fail\n");


	// find node by name multiple components 2

	np = of_find_node_by_path("testcase-alias/phandle-tests/consumer-a");
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	name = kasprintf(GFP_KERNEL, "%pOF", np);
	KUNIT_EXPECT_STREQ_MSG(
		test, "/testcase-data/phandle-tests/consumer-a", name,
		"find testcase-alias/phandle-tests/consumer-a failed\n");
	of_node_put(np);
	kfree(name);


	// find node by name missing path

	KUNIT_EXPECT_EQ_MSG(
		test,
		np = of_find_node_by_path("/testcase-data/missing-path"), NULL,
		"non-existent path returned node %pOF\n", np);
	of_node_put(np);


	// find node by name missing alias

	KUNIT_EXPECT_EQ_MSG(
		test, np = of_find_node_by_path("missing-alias"), NULL,
		"non-existent alias returned node %pOF\n", np);
	of_node_put(np);


	// find node by name missing alias with relative path

	KUNIT_EXPECT_EQ_MSG(
		test,
		np = of_find_node_by_path("testcase-alias/missing-path"), NULL,
		"non-existent alias with relative path returned node %pOF\n",
		np);
	of_node_put(np);


	// find node by name with option

	np = of_find_node_opts_by_path("/testcase-data:testoption", &options);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	KUNIT_EXPECT_STREQ_MSG(test, "testoption", options,
			       "option path test failed\n");
	of_node_put(np);


	// find node by name with option and slash

	np = of_find_node_opts_by_path("/testcase-data:test/option", &options);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
			       "option path test, subcase #1 failed\n");
	of_node_put(np);

	np = of_find_node_opts_by_path("/testcase-data/testcase-device1:test/option", &options);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
			       "option path test, subcase #2 failed\n");
	of_node_put(np);


	// find node by name with null option

	np = of_find_node_opts_by_path("/testcase-data:testoption", NULL);
	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(test, np,
					 "NULL option path test failed\n");
	of_node_put(np);


	// find node by name with option alias

	np = of_find_node_opts_by_path("testcase-alias:testaliasoption",
				       &options);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	KUNIT_EXPECT_STREQ_MSG(test, "testaliasoption", options,
			       "option alias path test failed\n");
	of_node_put(np);


	// find node by name with option alias and slash

	np = of_find_node_opts_by_path("testcase-alias:test/alias/option",
				       &options);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	KUNIT_EXPECT_STREQ_MSG(test, "test/alias/option", options,
			       "option alias path test, subcase #1 failed\n");
	of_node_put(np);


	// find node by name with null option alias

	np = of_find_node_opts_by_path("testcase-alias:testaliasoption", NULL);
	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(
			test, np, "NULL option alias path test failed\n");
	of_node_put(np);


	// find node by name option clearing

	options = "testoption";
	np = of_find_node_opts_by_path("testcase-alias", &options);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	KUNIT_EXPECT_EQ_MSG(test, options, NULL,
			    "option clearing test failed\n");
	of_node_put(np);


	// find node by name option clearing root

	options = "testoption";
	np = of_find_node_opts_by_path("/", &options);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	KUNIT_EXPECT_EQ_MSG(test, options, NULL,
			    "option clearing root node test failed\n");
	of_node_put(np);
}

static int of_test_init(struct kunit *test)
{
	/* adding data for unittest */
	KUNIT_ASSERT_EQ(test, 0, unittest_data_add());

	if (!of_aliases)
		of_aliases = of_find_node_by_path("/aliases");

	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, of_find_node_by_path(
			"/testcase-data/phandle-tests/consumer-a"));

	return 0;
}

static struct kunit_case of_test_cases[] = {
	KUNIT_CASE(of_unittest_find_node_by_name),
	{},
};

static struct kunit_module of_test_module = {
	.name = "of-base-test",
	.init = of_test_init,
	.test_cases = of_test_cases,
};
module_test(of_test_module);

^ permalink raw reply	[flat|nested] 232+ messages in thread

* [RFC v3 18/19] of: unittest: split out a couple of test cases from unittest
  2019-02-20 20:47                   ` frowand.list
@ 2019-02-20 20:47                     ` Frank Rowand
  0 siblings, 0 replies; 232+ messages in thread
From: Frank Rowand @ 2019-02-20 20:47 UTC (permalink / raw)


On 2/20/19 12:44 PM, Frank Rowand wrote:
> On 2/18/19 2:25 PM, Frank Rowand wrote:
>> On 2/15/19 2:56 AM, Brendan Higgins wrote:
>>> On Thu, Feb 14, 2019 at 6:05 PM Frank Rowand <frowand.list@gmail.com> wrote:
>>>>
>>>> On 2/14/19 4:56 PM, Brendan Higgins wrote:
>>>>> On Thu, Feb 14, 2019 at 3:57 PM Frank Rowand <frowand.list@gmail.com> wrote:
>>>>>>
>>>>>> On 12/5/18 3:54 PM, Brendan Higgins wrote:
>>>>>>> On Tue, Dec 4, 2018 at 2:58 AM Frank Rowand <frowand.list@gmail.com> wrote:
>>>>>>>>
> 
> < snip >
> 
>>
>> It makes it harder for me to read the source of the tests and
>> understand the order they will execute.  It also makes it harder
>> for me to read through the actual tests (in this example the
>> tests that are currently grouped in of_unittest_find_node_by_name())
>> because of all the extra function headers injected into the
>> existing single function to break it apart into many smaller
>> functions.
> 
> < snip >
> 
>>>>> This is not something I feel particularly strongly about, it is just
>>>>> pretty atypical from my experience to have so many unrelated test
>>>>> cases in a single file.
>>>>>
>>>>> Maybe you would prefer that I break up the test cases first, and then
>>>>> we split up the file as appropriate?
>>>>
>>>> I prefer that the test cases not be broken up arbitrarily.  There _may_
> 
> I expect that I created confusion by putting this in a reply to patch 18/19.
> It is actually a comment about patch 19/19.  Sorry about that.
> 
> 
>>>
>>> I wasn't trying to break them up arbitrarily. I thought I was doing it
>>> according to a pattern (breaking up the file, that is), but maybe I
>>> just hadn't looked at enough examples.
>>
>> This goes back to the kunit model of putting each test into a separate
>> function that can be a KUNIT_CASE().  That is a model that I do not agree
>> with for devicetree.
> 
> So now that I am actually talking about patch 19/19, let me give a concrete
> example.  I will cut and paste (after my comments), the beginning portion
> of base-test.c, after applying patch 19/19 (the "base version".  Then I
> will cut and paste my alternative version which does not break the tests
> down into individual functions (the "frank version").
> 
> I will also reply to this email with the base version and the frank version
> as attachments, which will make it easier to save as separate versions
> for easier viewing.  I'm not sure if an email with attachments will make
> it through the list servers, but I am cautiously optimistic.

base_version and frank_version attached.

-Frank


> 
> I am using v4 of the patch series because I never got v3 to cleanly apply
> and it is not a constructive use of my time to do so since I have v4 applied.
> 
> One of the points I was trying to make is that readability suffers from the
> approach taken by patches 18/19 and 19/19.
> 
> The base version contains the extra text of a function header for each
> unit test.  This is visual noise and makes the file larger.  It is also
> one more possible location of an error (although not likely).
> 
> The frank version has converted each of the new function headers into
> a comment, using the function name with '_' converted to ' '.  The
> comments are more readable than the function headers.  Note that I added
> an extra blank line before each comment, which violates the kernel
> coding standards, but I feel this makes the code more readable.
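
For illustration, a base version function header such as

	static void of_test_find_node_by_name_basic(struct kunit *test)

becomes, in the frank version, the comment

	// find node by name basic

preceded by that extra blank line.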
> 
> The base version needs to declare each of the individual test functions
> in of_test_find_node_by_name_cases[]. It is possible that a test function
> could be left out of of_test_find_node_by_name_cases[], in error.  This
> will result in a compile warning (I think warning instead of error, but
> I have not verified that) so the error might be caught or it might be
> overlooked.
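
To illustrate, here is a minimal sketch, not taken from the patch series,
assuming the kernel's usual -Wall build: a static test function that is
defined but accidentally left out of the cases array,

	static void of_test_find_node_by_name_basic(struct kunit *test)
	{
		/* ... */
	}

	static struct kunit_case of_test_find_node_by_name_cases[] = {
		/* of_test_find_node_by_name_basic accidentally omitted */
		KUNIT_CASE(of_test_find_node_by_name_trailing_slash),
		{},
	};

is reported by gcc as something like "'of_test_find_node_by_name_basic'
defined but not used [-Wunused-function]", which is indeed a warning
rather than an error unless the build also uses -Werror.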
> 
> In the base version, following the order of execution of the test code
> requires bouncing back and forth between the test functions and the
> entries in of_test_find_node_by_name_cases[].
> 
> In the frank version, the order of execution of the test code is obvious.
> 
> The base version is 265 lines.  The frank version is 208 lines, 57 lines
> less.  Less is better.
> 
> 
> ## ==========  base version  ====================================
> 
> // SPDX-License-Identifier: GPL-2.0
> /*
>  * Unit tests for functions defined in base.c.
>  */
> #include <linux/of.h>
> 
> #include <kunit/test.h>
> 
> #include "test-common.h"
> 
> static void of_test_find_node_by_name_basic(struct kunit *test)
> {
> 	struct device_node *np;
> 	const char *name;
> 
> 	np = of_find_node_by_path("/testcase-data");
> 	name = kasprintf(GFP_KERNEL, "%pOF", np);
> 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> 	KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
> 			       "find /testcase-data failed\n");
> 	of_node_put(np);
> 	kfree(name);
> }
> 
> static void of_test_find_node_by_name_trailing_slash(struct kunit *test)
> {
> 	/* Test if trailing '/' works */
> 	KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("/testcase-data/"), NULL,
> 			    "trailing '/' on /testcase-data/ should fail\n");
> }
> 
> static void of_test_find_node_by_name_multiple_components(struct kunit *test)
> {
> 	struct device_node *np;
> 	const char *name;
> 
> 	np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
> 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> 	name = kasprintf(GFP_KERNEL, "%pOF", np);
> 	KUNIT_EXPECT_STREQ_MSG(
> 		test, "/testcase-data/phandle-tests/consumer-a", name,
> 		"find /testcase-data/phandle-tests/consumer-a failed\n");
> 	of_node_put(np);
> 	kfree(name);
> }
> 
> static void of_test_find_node_by_name_with_alias(struct kunit *test)
> {
> 	struct device_node *np;
> 	const char *name;
> 
> 	np = of_find_node_by_path("testcase-alias");
> 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> 	name = kasprintf(GFP_KERNEL, "%pOF", np);
> 	KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
> 			       "find testcase-alias failed\n");
> 	of_node_put(np);
> 	kfree(name);
> }
> 
> static void of_test_find_node_by_name_with_alias_and_slash(struct kunit *test)
> {
> 	/* Test if trailing '/' works on aliases */
> 	KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("testcase-alias/"), NULL,
> 			    "trailing '/' on testcase-alias/ should fail\n");
> }
> 
> /*
>  * TODO(brendanhiggins at google.com): This looks like a duplicate of
>  * of_test_find_node_by_name_multiple_components
>  */
> static void of_test_find_node_by_name_multiple_components_2(struct kunit *test)
> {
> 	struct device_node *np;
> 	const char *name;
> 
> 	np = of_find_node_by_path("testcase-alias/phandle-tests/consumer-a");
> 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> 	name = kasprintf(GFP_KERNEL, "%pOF", np);
> 	KUNIT_EXPECT_STREQ_MSG(
> 		test, "/testcase-data/phandle-tests/consumer-a", name,
> 		"find testcase-alias/phandle-tests/consumer-a failed\n");
> 	of_node_put(np);
> 	kfree(name);
> }
> 
> static void of_test_find_node_by_name_missing_path(struct kunit *test)
> {
> 	struct device_node *np;
> 
> 	KUNIT_EXPECT_EQ_MSG(
> 		test,
> 		np = of_find_node_by_path("/testcase-data/missing-path"), NULL,
> 		"non-existent path returned node %pOF\n", np);
> 	of_node_put(np);
> }
> 
> static void of_test_find_node_by_name_missing_alias(struct kunit *test)
> {
> 	struct device_node *np;
> 
> 	KUNIT_EXPECT_EQ_MSG(
> 		test, np = of_find_node_by_path("missing-alias"), NULL,
> 		"non-existent alias returned node %pOF\n", np);
> 	of_node_put(np);
> }
> 
> static void of_test_find_node_by_name_missing_alias_with_relative_path(
> 		struct kunit *test)
> {
> 	struct device_node *np;
> 
> 	KUNIT_EXPECT_EQ_MSG(
> 		test,
> 		np = of_find_node_by_path("testcase-alias/missing-path"), NULL,
> 		"non-existent alias with relative path returned node %pOF\n",
> 		np);
> 	of_node_put(np);
> }
> 
> static void of_test_find_node_by_name_with_option(struct kunit *test)
> {
> 	struct device_node *np;
> 	const char *options;
> 
> 	np = of_find_node_opts_by_path("/testcase-data:testoption", &options);
> 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> 	KUNIT_EXPECT_STREQ_MSG(test, "testoption", options,
> 			       "option path test failed\n");
> 	of_node_put(np);
> }
> 
> static void of_test_find_node_by_name_with_option_and_slash(struct kunit *test)
> {
> 	struct device_node *np;
> 	const char *options;
> 
> 	np = of_find_node_opts_by_path("/testcase-data:test/option", &options);
> 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> 	KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
> 			       "option path test, subcase #1 failed\n");
> 	of_node_put(np);
> 
> 	np = of_find_node_opts_by_path("/testcase-data/testcase-device1:test/option", &options);
> 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> 	KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
> 			       "option path test, subcase #2 failed\n");
> 	of_node_put(np);
> }
> 
> static void of_test_find_node_by_name_with_null_option(struct kunit *test)
> {
> 	struct device_node *np;
> 
> 	np = of_find_node_opts_by_path("/testcase-data:testoption", NULL);
> 	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(test, np,
> 					 "NULL option path test failed\n");
> 	of_node_put(np);
> }
> 
> static void of_test_find_node_by_name_with_option_alias(struct kunit *test)
> {
> 	struct device_node *np;
> 	const char *options;
> 
> 	np = of_find_node_opts_by_path("testcase-alias:testaliasoption",
> 				       &options);
> 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> 	KUNIT_EXPECT_STREQ_MSG(test, "testaliasoption", options,
> 			       "option alias path test failed\n");
> 	of_node_put(np);
> }
> 
> static void of_test_find_node_by_name_with_option_alias_and_slash(
> 		struct kunit *test)
> {
> 	struct device_node *np;
> 	const char *options;
> 
> 	np = of_find_node_opts_by_path("testcase-alias:test/alias/option",
> 				       &options);
> 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> 	KUNIT_EXPECT_STREQ_MSG(test, "test/alias/option", options,
> 			       "option alias path test, subcase #1 failed\n");
> 	of_node_put(np);
> }
> 
> static void of_test_find_node_by_name_with_null_option_alias(struct kunit *test)
> {
> 	struct device_node *np;
> 
> 	np = of_find_node_opts_by_path("testcase-alias:testaliasoption", NULL);
> 	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(
> 			test, np, "NULL option alias path test failed\n");
> 	of_node_put(np);
> }
> 
> static void of_test_find_node_by_name_option_clearing(struct kunit *test)
> {
> 	struct device_node *np;
> 	const char *options;
> 
> 	options = "testoption";
> 	np = of_find_node_opts_by_path("testcase-alias", &options);
> 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> 	KUNIT_EXPECT_EQ_MSG(test, options, NULL,
> 			    "option clearing test failed\n");
> 	of_node_put(np);
> }
> 
> static void of_test_find_node_by_name_option_clearing_root(struct kunit *test)
> {
> 	struct device_node *np;
> 	const char *options;
> 
> 	options = "testoption";
> 	np = of_find_node_opts_by_path("/", &options);
> 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> 	KUNIT_EXPECT_EQ_MSG(test, options, NULL,
> 			    "option clearing root node test failed\n");
> 	of_node_put(np);
> }
> 
> static int of_test_find_node_by_name_init(struct kunit *test)
> {
> 	/* adding data for unittest */
> 	KUNIT_ASSERT_EQ(test, 0, unittest_data_add());
> 
> 	if (!of_aliases)
> 		of_aliases = of_find_node_by_path("/aliases");
> 
> 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, of_find_node_by_path(
> 			"/testcase-data/phandle-tests/consumer-a"));
> 
> 	return 0;
> }
> 
> static struct kunit_case of_test_find_node_by_name_cases[] = {
> 	KUNIT_CASE(of_test_find_node_by_name_basic),
> 	KUNIT_CASE(of_test_find_node_by_name_trailing_slash),
> 	KUNIT_CASE(of_test_find_node_by_name_multiple_components),
> 	KUNIT_CASE(of_test_find_node_by_name_with_alias),
> 	KUNIT_CASE(of_test_find_node_by_name_with_alias_and_slash),
> 	KUNIT_CASE(of_test_find_node_by_name_multiple_components_2),
> 	KUNIT_CASE(of_test_find_node_by_name_missing_path),
> 	KUNIT_CASE(of_test_find_node_by_name_missing_alias),
> 	KUNIT_CASE(of_test_find_node_by_name_missing_alias_with_relative_path),
> 	KUNIT_CASE(of_test_find_node_by_name_with_option),
> 	KUNIT_CASE(of_test_find_node_by_name_with_option_and_slash),
> 	KUNIT_CASE(of_test_find_node_by_name_with_null_option),
> 	KUNIT_CASE(of_test_find_node_by_name_with_option_alias),
> 	KUNIT_CASE(of_test_find_node_by_name_with_option_alias_and_slash),
> 	KUNIT_CASE(of_test_find_node_by_name_with_null_option_alias),
> 	KUNIT_CASE(of_test_find_node_by_name_option_clearing),
> 	KUNIT_CASE(of_test_find_node_by_name_option_clearing_root),
> 	{},
> };
> 
> static struct kunit_module of_test_find_node_by_name_module = {
> 	.name = "of-test-find-node-by-name",
> 	.init = of_test_find_node_by_name_init,
> 	.test_cases = of_test_find_node_by_name_cases,
> };
> module_test(of_test_find_node_by_name_module);
> 
> 
> ## ==========  frank version  ===================================
> 
> // SPDX-License-Identifier: GPL-2.0
> /*
>  * Unit tests for functions defined in base.c.
>  */
> #include <linux/of.h>
> 
> #include <kunit/test.h>
> 
> #include "test-common.h"
> 
> static void of_unittest_find_node_by_name(struct kunit *test)
> {
> 	struct device_node *np;
> 	const char *options, *name;
> 
> 
> 	// find node by name basic
> 
> 	np = of_find_node_by_path("/testcase-data");
> 	name = kasprintf(GFP_KERNEL, "%pOF", np);
> 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> 	KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
> 			       "find /testcase-data failed\n");
> 	of_node_put(np);
> 	kfree(name);
> 
> 
> 	// find node by name trailing slash
> 
> 	/* Test if trailing '/' works */
> 	KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("/testcase-data/"), NULL,
> 			    "trailing '/' on /testcase-data/ should fail\n");
> 
> 
> 	// find node by name multiple components
> 
> 	np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
> 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> 	name = kasprintf(GFP_KERNEL, "%pOF", np);
> 	KUNIT_EXPECT_STREQ_MSG(
> 		test, "/testcase-data/phandle-tests/consumer-a", name,
> 		"find /testcase-data/phandle-tests/consumer-a failed\n");
> 	of_node_put(np);
> 	kfree(name);
> 
> 
> 	// find node by name with alias
> 
> 	np = of_find_node_by_path("testcase-alias");
> 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> 	name = kasprintf(GFP_KERNEL, "%pOF", np);
> 	KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
> 			       "find testcase-alias failed\n");
> 	of_node_put(np);
> 	kfree(name);
> 
> 
> 	// find node by name with alias and slash
> 
> 	/* Test if trailing '/' works on aliases */
> 	KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("testcase-alias/"), NULL,
> 			    "trailing '/' on testcase-alias/ should fail\n");
> 
> 
> 	// find node by name multiple components 2
> 
> 	np = of_find_node_by_path("testcase-alias/phandle-tests/consumer-a");
> 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> 	name = kasprintf(GFP_KERNEL, "%pOF", np);
> 	KUNIT_EXPECT_STREQ_MSG(
> 		test, "/testcase-data/phandle-tests/consumer-a", name,
> 		"find testcase-alias/phandle-tests/consumer-a failed\n");
> 	of_node_put(np);
> 	kfree(name);
> 
> 
> 	// find node by name missing path
> 
> 	KUNIT_EXPECT_EQ_MSG(
> 		test,
> 		np = of_find_node_by_path("/testcase-data/missing-path"), NULL,
> 		"non-existent path returned node %pOF\n", np);
> 	of_node_put(np);
> 
> 
> 	// find node by name missing alias
> 
> 	KUNIT_EXPECT_EQ_MSG(
> 		test, np = of_find_node_by_path("missing-alias"), NULL,
> 		"non-existent alias returned node %pOF\n", np);
> 	of_node_put(np);
> 
> 
> 	// find node by name missing alias with relative path
> 
> 	KUNIT_EXPECT_EQ_MSG(
> 		test,
> 		np = of_find_node_by_path("testcase-alias/missing-path"), NULL,
> 		"non-existent alias with relative path returned node %pOF\n",
> 		np);
> 	of_node_put(np);
> 
> 
> 	// find node by name with option
> 
> 	np = of_find_node_opts_by_path("/testcase-data:testoption", &options);
> 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> 	KUNIT_EXPECT_STREQ_MSG(test, "testoption", options,
> 			       "option path test failed\n");
> 	of_node_put(np);
> 
> 
> 	// find node by name with option and slash
> 
> 	np = of_find_node_opts_by_path("/testcase-data:test/option", &options);
> 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> 	KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
> 			       "option path test, subcase #1 failed\n");
> 	of_node_put(np);
> 
> 	np = of_find_node_opts_by_path("/testcase-data/testcase-device1:test/option", &options);
> 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> 	KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
> 			       "option path test, subcase #2 failed\n");
> 	of_node_put(np);
> 
> 
> 	// find node by name with null option
> 
> 	np = of_find_node_opts_by_path("/testcase-data:testoption", NULL);
> 	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(test, np,
> 					 "NULL option path test failed\n");
> 	of_node_put(np);
> 
> 
> 	// find node by name with option alias
> 
> 	np = of_find_node_opts_by_path("testcase-alias:testaliasoption",
> 				       &options);
> 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> 	KUNIT_EXPECT_STREQ_MSG(test, "testaliasoption", options,
> 			       "option alias path test failed\n");
> 	of_node_put(np);
> 
> 
> 	// find node by name with option alias and slash
> 
> 	np = of_find_node_opts_by_path("testcase-alias:test/alias/option",
> 				       &options);
> 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> 	KUNIT_EXPECT_STREQ_MSG(test, "test/alias/option", options,
> 			       "option alias path test, subcase #1 failed\n");
> 	of_node_put(np);
> 
> 
> 	// find node by name with null option alias
> 
> 	np = of_find_node_opts_by_path("testcase-alias:testaliasoption", NULL);
> 	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(
> 			test, np, "NULL option alias path test failed\n");
> 	of_node_put(np);
> 
> 
> 	// find node by name option clearing
> 
> 	options = "testoption";
> 	np = of_find_node_opts_by_path("testcase-alias", &options);
> 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> 	KUNIT_EXPECT_EQ_MSG(test, options, NULL,
> 			    "option clearing test failed\n");
> 	of_node_put(np);
> 
> 
> 	// find node by name option clearing root
> 
> 	options = "testoption";
> 	np = of_find_node_opts_by_path("/", &options);
> 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> 	KUNIT_EXPECT_EQ_MSG(test, options, NULL,
> 			    "option clearing root node test failed\n");
> 	of_node_put(np);
> }
> 
> static int of_test_init(struct kunit *test)
> {
> 	/* adding data for unittest */
> 	KUNIT_ASSERT_EQ(test, 0, unittest_data_add());
> 
> 	if (!of_aliases)
> 		of_aliases = of_find_node_by_path("/aliases");
> 
> 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, of_find_node_by_path(
> 			"/testcase-data/phandle-tests/consumer-a"));
> 
> 	return 0;
> }
> 
> static struct kunit_case of_test_cases[] = {
> 	KUNIT_CASE(of_unittest_find_node_by_name),
> 	{},
> };
> 
> static struct kunit_module of_test_module = {
> 	.name = "of-base-test",
> 	.init = of_test_init,
> 	.test_cases = of_test_cases,
> };
> module_test(of_test_module);
> 
> 
>>
>>
>>>> be cases where the devicetree unittests are currently not well grouped
>>>> and may benefit from change, but if so that should be handled independently
>>>> of any transformation into a KUnit framework.
>>>
>>> I agree. I did this because I wanted to illustrate what I thought real
>>> world KUnit unit tests should look like (I also wanted to be able to
>>> show off KUnit test features that help you write these kinds of
>>> tests); I was not necessarily intending that all the of: unittest
>>> patches would get merged in with the whole RFC. I was mostly trying to
>>> create cause for discussion (which it seems like I succeeded at ;-) ).
>>>
>>> So fair enough, I will propose these patches separately and later
>>> (except of course this one that splits up the file). Do you want the
>>> initial transformation to the KUnit framework in the main KUnit
>>> patchset, or do you want that to be done separately? If I recall, Rob
>>> suggested this as a good initial example that other people could refer
>>> to, and some people seemed to think that I needed one to help guide
>>> the discussion and provide direction for early users. I don't
>>> necessarily think that means the initial real world example needs to
>>> be a part of the initial patchset though.
>>>
>>> Cheers
>>>
>>
>>
> 
> 

-------------- next part --------------
// SPDX-License-Identifier: GPL-2.0
/*
 * Unit tests for functions defined in base.c.
 */
#include <linux/of.h>

#include <kunit/test.h>

#include "test-common.h"

static void of_test_find_node_by_name_basic(struct kunit *test)
{
	struct device_node *np;
	const char *name;

	np = of_find_node_by_path("/testcase-data");
	name = kasprintf(GFP_KERNEL, "%pOF", np);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
			       "find /testcase-data failed\n");
	of_node_put(np);
	kfree(name);
}

static void of_test_find_node_by_name_trailing_slash(struct kunit *test)
{
	/* Test if trailing '/' works */
	KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("/testcase-data/"), NULL,
			    "trailing '/' on /testcase-data/ should fail\n");

}

static void of_test_find_node_by_name_multiple_components(struct kunit *test)
{
	struct device_node *np;
	const char *name;

	np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	name = kasprintf(GFP_KERNEL, "%pOF", np);
	KUNIT_EXPECT_STREQ_MSG(
		test, "/testcase-data/phandle-tests/consumer-a", name,
		"find /testcase-data/phandle-tests/consumer-a failed\n");
	of_node_put(np);
	kfree(name);
}

static void of_test_find_node_by_name_with_alias(struct kunit *test)
{
	struct device_node *np;
	const char *name;

	np = of_find_node_by_path("testcase-alias");
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	name = kasprintf(GFP_KERNEL, "%pOF", np);
	KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
			       "find testcase-alias failed\n");
	of_node_put(np);
	kfree(name);
}

static void of_test_find_node_by_name_with_alias_and_slash(struct kunit *test)
{
	/* Test if trailing '/' works on aliases */
	KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("testcase-alias/"), NULL,
			   "trailing '/' on testcase-alias/ should fail\n");
}

/*
 * TODO(brendanhiggins at google.com): This looks like a duplicate of
 * of_test_find_node_by_name_multiple_components
 */
static void of_test_find_node_by_name_multiple_components_2(struct kunit *test)
{
	struct device_node *np;
	const char *name;

	np = of_find_node_by_path("testcase-alias/phandle-tests/consumer-a");
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	name = kasprintf(GFP_KERNEL, "%pOF", np);
	KUNIT_EXPECT_STREQ_MSG(
		test, "/testcase-data/phandle-tests/consumer-a", name,
		"find testcase-alias/phandle-tests/consumer-a failed\n");
	of_node_put(np);
	kfree(name);
}

static void of_test_find_node_by_name_missing_path(struct kunit *test)
{
	struct device_node *np;

	KUNIT_EXPECT_EQ_MSG(
		test,
		np = of_find_node_by_path("/testcase-data/missing-path"), NULL,
		"non-existent path returned node %pOF\n", np);
	of_node_put(np);
}

static void of_test_find_node_by_name_missing_alias(struct kunit *test)
{
	struct device_node *np;

	KUNIT_EXPECT_EQ_MSG(
		test, np = of_find_node_by_path("missing-alias"), NULL,
		"non-existent alias returned node %pOF\n", np);
	of_node_put(np);
}

static void of_test_find_node_by_name_missing_alias_with_relative_path(
		struct kunit *test)
{
	struct device_node *np;

	KUNIT_EXPECT_EQ_MSG(
		test,
		np = of_find_node_by_path("testcase-alias/missing-path"), NULL,
		"non-existent alias with relative path returned node %pOF\n",
		np);
	of_node_put(np);
}

static void of_test_find_node_by_name_with_option(struct kunit *test)
{
	struct device_node *np;
	const char *options;

	np = of_find_node_opts_by_path("/testcase-data:testoption", &options);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	KUNIT_EXPECT_STREQ_MSG(test, "testoption", options,
			       "option path test failed\n");
	of_node_put(np);
}

static void of_test_find_node_by_name_with_option_and_slash(struct kunit *test)
{
	struct device_node *np;
	const char *options;

	np = of_find_node_opts_by_path("/testcase-data:test/option", &options);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
			       "option path test, subcase #1 failed\n");
	of_node_put(np);

	np = of_find_node_opts_by_path("/testcase-data/testcase-device1:test/option", &options);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
			       "option path test, subcase #2 failed\n");
	of_node_put(np);
}

static void of_test_find_node_by_name_with_null_option(struct kunit *test)
{
	struct device_node *np;

	np = of_find_node_opts_by_path("/testcase-data:testoption", NULL);
	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(test, np,
					 "NULL option path test failed\n");
	of_node_put(np);
}

static void of_test_find_node_by_name_with_option_alias(struct kunit *test)
{
	struct device_node *np;
	const char *options;

	np = of_find_node_opts_by_path("testcase-alias:testaliasoption",
				       &options);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	KUNIT_EXPECT_STREQ_MSG(test, "testaliasoption", options,
			       "option alias path test failed\n");
	of_node_put(np);
}

static void of_test_find_node_by_name_with_option_alias_and_slash(
		struct kunit *test)
{
	struct device_node *np;
	const char *options;

	np = of_find_node_opts_by_path("testcase-alias:test/alias/option",
				       &options);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	KUNIT_EXPECT_STREQ_MSG(test, "test/alias/option", options,
			       "option alias path test, subcase #1 failed\n");
	of_node_put(np);
}

static void of_test_find_node_by_name_with_null_option_alias(struct kunit *test)
{
	struct device_node *np;

	np = of_find_node_opts_by_path("testcase-alias:testaliasoption", NULL);
	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(
			test, np, "NULL option alias path test failed\n");
	of_node_put(np);
}

static void of_test_find_node_by_name_option_clearing(struct kunit *test)
{
	struct device_node *np;
	const char *options;

	options = "testoption";
	np = of_find_node_opts_by_path("testcase-alias", &options);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	KUNIT_EXPECT_EQ_MSG(test, options, NULL,
			    "option clearing test failed\n");
	of_node_put(np);
}

static void of_test_find_node_by_name_option_clearing_root(struct kunit *test)
{
	struct device_node *np;
	const char *options;

	options = "testoption";
	np = of_find_node_opts_by_path("/", &options);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	KUNIT_EXPECT_EQ_MSG(test, options, NULL,
			    "option clearing root node test failed\n");
	of_node_put(np);
}

static int of_test_find_node_by_name_init(struct kunit *test)
{
	/* adding data for unittest */
	KUNIT_ASSERT_EQ(test, 0, unittest_data_add());

	if (!of_aliases)
		of_aliases = of_find_node_by_path("/aliases");

	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, of_find_node_by_path(
			"/testcase-data/phandle-tests/consumer-a"));

	return 0;
}

static struct kunit_case of_test_find_node_by_name_cases[] = {
	KUNIT_CASE(of_test_find_node_by_name_basic),
	KUNIT_CASE(of_test_find_node_by_name_trailing_slash),
	KUNIT_CASE(of_test_find_node_by_name_multiple_components),
	KUNIT_CASE(of_test_find_node_by_name_with_alias),
	KUNIT_CASE(of_test_find_node_by_name_with_alias_and_slash),
	KUNIT_CASE(of_test_find_node_by_name_multiple_components_2),
	KUNIT_CASE(of_test_find_node_by_name_missing_path),
	KUNIT_CASE(of_test_find_node_by_name_missing_alias),
	KUNIT_CASE(of_test_find_node_by_name_missing_alias_with_relative_path),
	KUNIT_CASE(of_test_find_node_by_name_with_option),
	KUNIT_CASE(of_test_find_node_by_name_with_option_and_slash),
	KUNIT_CASE(of_test_find_node_by_name_with_null_option),
	KUNIT_CASE(of_test_find_node_by_name_with_option_alias),
	KUNIT_CASE(of_test_find_node_by_name_with_option_alias_and_slash),
	KUNIT_CASE(of_test_find_node_by_name_with_null_option_alias),
	KUNIT_CASE(of_test_find_node_by_name_option_clearing),
	KUNIT_CASE(of_test_find_node_by_name_option_clearing_root),
	{},
};

static struct kunit_module of_test_find_node_by_name_module = {
	.name = "of-test-find-node-by-name",
	.init = of_test_find_node_by_name_init,
	.test_cases = of_test_find_node_by_name_cases,
};
module_test(of_test_find_node_by_name_module);
-------------- next part --------------
// SPDX-License-Identifier: GPL-2.0
/*
 * Unit tests for functions defined in base.c.
 */
#include <linux/of.h>

#include <kunit/test.h>

#include "test-common.h"

static void of_unittest_find_node_by_name(struct kunit *test)
{
	struct device_node *np;
	const char *options, *name;


	// find node by name basic

	np = of_find_node_by_path("/testcase-data");
	name = kasprintf(GFP_KERNEL, "%pOF", np);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
			       "find /testcase-data failed\n");
	of_node_put(np);
	kfree(name);


	// find node by name trailing slash

	/* Test if trailing '/' works */
	KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("/testcase-data/"), NULL,
			    "trailing '/' on /testcase-data/ should fail\n");


	// find node by name multiple components

	np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	name = kasprintf(GFP_KERNEL, "%pOF", np);
	KUNIT_EXPECT_STREQ_MSG(
		test, "/testcase-data/phandle-tests/consumer-a", name,
		"find /testcase-data/phandle-tests/consumer-a failed\n");
	of_node_put(np);
	kfree(name);


	// find node by name with alias

	np = of_find_node_by_path("testcase-alias");
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	name = kasprintf(GFP_KERNEL, "%pOF", np);
	KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
			       "find testcase-alias failed\n");
	of_node_put(np);
	kfree(name);


	// find node by name with alias and slash

	/* Test if trailing '/' works on aliases */
	KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("testcase-alias/"), NULL,
			    "trailing '/' on testcase-alias/ should fail\n");


	// find node by name multiple components 2

	np = of_find_node_by_path("testcase-alias/phandle-tests/consumer-a");
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	name = kasprintf(GFP_KERNEL, "%pOF", np);
	KUNIT_EXPECT_STREQ_MSG(
		test, "/testcase-data/phandle-tests/consumer-a", name,
		"find testcase-alias/phandle-tests/consumer-a failed\n");
	of_node_put(np);
	kfree(name);


	// find node by name missing path

	KUNIT_EXPECT_EQ_MSG(
		test,
		np = of_find_node_by_path("/testcase-data/missing-path"), NULL,
		"non-existent path returned node %pOF\n", np);
	of_node_put(np);


	// find node by name missing alias

	KUNIT_EXPECT_EQ_MSG(
		test, np = of_find_node_by_path("missing-alias"), NULL,
		"non-existent alias returned node %pOF\n", np);
	of_node_put(np);


	// find node by name missing alias with relative path

	KUNIT_EXPECT_EQ_MSG(
		test,
		np = of_find_node_by_path("testcase-alias/missing-path"), NULL,
		"non-existent alias with relative path returned node %pOF\n",
		np);
	of_node_put(np);


	// find node by name with option

	np = of_find_node_opts_by_path("/testcase-data:testoption", &options);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	KUNIT_EXPECT_STREQ_MSG(test, "testoption", options,
			       "option path test failed\n");
	of_node_put(np);


	// find node by name with option and slash

	np = of_find_node_opts_by_path("/testcase-data:test/option", &options);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
			       "option path test, subcase #1 failed\n");
	of_node_put(np);

	np = of_find_node_opts_by_path("/testcase-data/testcase-device1:test/option", &options);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
			       "option path test, subcase #2 failed\n");
	of_node_put(np);


	// find node by name with null option

	np = of_find_node_opts_by_path("/testcase-data:testoption", NULL);
	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(test, np,
					 "NULL option path test failed\n");
	of_node_put(np);


	// find node by name with option alias

	np = of_find_node_opts_by_path("testcase-alias:testaliasoption",
				       &options);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	KUNIT_EXPECT_STREQ_MSG(test, "testaliasoption", options,
			       "option alias path test failed\n");
	of_node_put(np);


	// find node by name with option alias and slash

	np = of_find_node_opts_by_path("testcase-alias:test/alias/option",
				       &options);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	KUNIT_EXPECT_STREQ_MSG(test, "test/alias/option", options,
			       "option alias path test, subcase #1 failed\n");
	of_node_put(np);


	// find node by name with null option alias

	np = of_find_node_opts_by_path("testcase-alias:testaliasoption", NULL);
	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(
			test, np, "NULL option alias path test failed\n");
	of_node_put(np);


	// find node by name option clearing

	options = "testoption";
	np = of_find_node_opts_by_path("testcase-alias", &options);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	KUNIT_EXPECT_EQ_MSG(test, options, NULL,
			    "option clearing test failed\n");
	of_node_put(np);


	// find node by name option clearing root

	options = "testoption";
	np = of_find_node_opts_by_path("/", &options);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	KUNIT_EXPECT_EQ_MSG(test, options, NULL,
			    "option clearing root node test failed\n");
	of_node_put(np);
}

static int of_test_init(struct kunit *test)
{
	/* adding data for unittest */
	KUNIT_ASSERT_EQ(test, 0, unittest_data_add());

	if (!of_aliases)
		of_aliases = of_find_node_by_path("/aliases");

	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, of_find_node_by_path(
			"/testcase-data/phandle-tests/consumer-a"));

	return 0;
}

static struct kunit_case of_test_cases[] = {
	KUNIT_CASE(of_unittest_find_node_by_name),
	{},
};

static struct kunit_module of_test_module = {
	.name = "of-base-test",
	.init = of_test_init,
	.test_cases = of_test_cases,
};
module_test(of_test_module);

^ permalink raw reply	[flat|nested] 232+ messages in thread

* [RFC v3 17/19] of: unittest: migrate tests to run on KUnit
  2019-02-18 22:56       ` frowand.list
  2019-02-18 22:56         ` Frank Rowand
@ 2019-02-28  0:29         ` brendanhiggins
  2019-02-28  0:29           ` Brendan Higgins
  1 sibling, 1 reply; 232+ messages in thread
From: brendanhiggins @ 2019-02-28  0:29 UTC (permalink / raw)


On Mon, Feb 18, 2019 at 2:56 PM Frank Rowand <frowand.list at gmail.com> wrote:
>
> On 2/12/19 5:44 PM, Brendan Higgins wrote:
> > On Wed, Nov 28, 2018 at 12:56 PM Rob Herring <robh at kernel.org> wrote:
> >>
> >> On Wed, Nov 28, 2018 at 1:38 PM Brendan Higgins
> >> <brendanhiggins at google.com> wrote:
<snip>
> >>> ---
> >>>  drivers/of/Kconfig    |    1 +
> >>>  drivers/of/unittest.c | 1405 ++++++++++++++++++++++-------------------
> >>>  2 files changed, 752 insertions(+), 654 deletions(-)
> >>>
> > <snip>
> >>> diff --git a/drivers/of/unittest.c b/drivers/of/unittest.c
> >>> index 41b49716ac75f..a5ef44730ffdb 100644
> >>> --- a/drivers/of/unittest.c
> >>> +++ b/drivers/of/unittest.c
<snip>
> >>> +
> >>> +       KUNIT_EXPECT_EQ(test,
> >>> +                       of_property_match_string(np,
> >>> +                                                "phandle-list-names",
> >>> +                                                "first"),
> >>> +                       0);
> >>> +       KUNIT_EXPECT_EQ(test,
> >>> +                       of_property_match_string(np,
> >>> +                                                "phandle-list-names",
> >>> +                                                "second"),
> >>> +                       1);
> >>
> >> Fewer lines on these would be better even if we go over 80 chars.
>
> Agreed.  unittest.c already is a greater than 80 char file in general, and
> is a file that benefits from that.
>

Noted.

>
> > On the of_property_match_string(...), I have no opinion. I will do
> > whatever you like best.
> >
> > Nevertheless, as far as the KUNIT_EXPECT_*(...), I do have an opinion: I am
> > trying to establish a good, readable convention. Given an expect statement
> > structured as
> > ```
> > KUNIT_EXPECT_*(
> >     test,
> >     expect_arg_0, ..., expect_arg_n,
> >     fmt_str, fmt_arg_0, ..., fmt_arg_n)
> > ```
> > where `test` is the `struct kunit` context argument, `expect_arg_{0, ..., n}`
> > are the arguments the expectation is being made about (so in the above example,
> > `of_property_match_string(...)` and `1`), and `fmt_*` is the optional format
> > string that comes at the end of some expectations.
> >
> > The pattern I had been trying to promote is the following:
> >
> > 1) If everything fits on 1 line, do that.
> > 2) If you must make a line split, prefer to keep `test` on its own line,
> > `expect_arg_{0, ..., n}` should be kept together, if possible, and the format
> > string should follow the conventions already most commonly used with format
> > strings.
> > 3) If you must split up `expect_arg_{0, ..., n}`, each argument should get its
> > own line and should not share a line with either `test` or any `fmt_*`.
> >
> > The reason I care about this so much is because expectations should be
> > extremely easy to read; they are the most important part of a unit
> > test because they tell you what the test is verifying. I am not
> > married to the formatting I proposed above, but I want something that
> > will be extremely easy to identify the arguments that the expectation
> > is on. Maybe that means that I need to add some syntactic fluff to
> > make it clearer, I don't know, but this is definitely something we
> > need to get right, especially in the earliest examples.
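
To make those three rules concrete, here is a minimal sketch (illustrative
only, assuming `np`, `name`, and `rc` are declared as in the surrounding
tests):

```
/* 1) everything fits on one line */
KUNIT_EXPECT_EQ(test, rc, 0);

/*
 * 2) a split is needed: `test` kept on its own line, the expectation
 * arguments kept together
 */
KUNIT_EXPECT_EQ(
	test,
	of_property_match_string(np, "phandle-list-names", "first"), 0);

/*
 * 3) the arguments must be split up: one per line, sharing a line with
 * neither `test` nor the format string
 */
KUNIT_EXPECT_STREQ_MSG(
	test,
	"/testcase-data/phandle-tests/consumer-a",
	name,
	"find consumer-a failed\n");
```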
>
> I will probably raise the ire of the kernel formatting rule makers by offering
> what I think is a _much_ more readable format __for this specific case__.
> In other words for drivers/of/unittest.c.
>
> If you can not make your mail window _very_ wide, or if this email has been
> line wrapped, this example will not be clear.
>
> Two possible formats:
>
>
> ### -----  version 1, as created by the patch series
>
> static void of_unittest_property_string(struct kunit *test)
> {
>         const char *strings[4];
>         struct device_node *np;
>         int rc;
>
>         np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>
>         KUNIT_EXPECT_EQ(
>                 test,
>                 of_property_match_string(np, "phandle-list-names", "first"),
>                 0);
>         KUNIT_EXPECT_EQ(
>                 test,
>                 of_property_match_string(np, "phandle-list-names", "second"),
>                 1);
>         KUNIT_EXPECT_EQ(
>                 test,
>                 of_property_match_string(np, "phandle-list-names", "third"),
>                 2);
>         KUNIT_EXPECT_EQ_MSG(
>                 test,
>                 of_property_match_string(np, "phandle-list-names", "fourth"),
>                 -ENODATA,
>                 "unmatched string");
>         KUNIT_EXPECT_EQ_MSG(
>                 test,
>                 of_property_match_string(np, "missing-property", "blah"),
>                 -EINVAL,
>                 "missing property");
>         KUNIT_EXPECT_EQ_MSG(
>                 test,
>                 of_property_match_string(np, "empty-property", "blah"),
>                 -ENODATA,
>                 "empty property");
>         KUNIT_EXPECT_EQ_MSG(
>                 test,
>                 of_property_match_string(np, "unterminated-string", "blah"),
>                 -EILSEQ,
>                 "unterminated string");
>
>         /* of_property_count_strings() tests */
>         KUNIT_EXPECT_EQ(test,
>                         of_property_count_strings(np, "string-property"), 1);
>         KUNIT_EXPECT_EQ(test,
>                         of_property_count_strings(np, "phandle-list-names"), 3);
>         KUNIT_EXPECT_EQ_MSG(
>                 test,
>                 of_property_count_strings(np, "unterminated-string"), -EILSEQ,
>                 "unterminated string");
>         KUNIT_EXPECT_EQ_MSG(
>                 test,
>                 of_property_count_strings(np, "unterminated-string-list"),
>                 -EILSEQ,
>                 "unterminated string array");
>
>
>
>
> ### -----  version 2, modified to use really long lines
>
> static void of_unittest_property_string(struct kunit *test)
> {
>         const char *strings[4];
>         struct device_node *np;
>         int rc;
>
>         np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>
>         KUNIT_EXPECT_EQ(    test, of_property_match_string(np, "phandle-list-names", "first"),  0);
>         KUNIT_EXPECT_EQ(    test, of_property_match_string(np, "phandle-list-names", "second"), 1);
>         KUNIT_EXPECT_EQ(    test, of_property_match_string(np, "phandle-list-names", "third"),  2);
>         KUNIT_EXPECT_EQ_MSG(test, of_property_match_string(np, "phandle-list-names", "fourth"), -ENODATA, "unmatched string");
>         KUNIT_EXPECT_EQ_MSG(test, of_property_match_string(np, "missing-property", "blah"),     -EINVAL, "missing property");
>         KUNIT_EXPECT_EQ_MSG(test, of_property_match_string(np, "empty-property", "blah"),       -ENODATA, "empty property");
>         KUNIT_EXPECT_EQ_MSG(test, of_property_match_string(np, "unterminated-string", "blah"),  -EILSEQ, "unterminated string");
>
>         /* of_property_count_strings() tests */
>         KUNIT_EXPECT_EQ(    test, of_property_count_strings(np, "string-property"),             1);
>         KUNIT_EXPECT_EQ(    test, of_property_count_strings(np, "phandle-list-names"),          3);
>         KUNIT_EXPECT_EQ_MSG(test, of_property_count_strings(np, "unterminated-string"),         -EILSEQ, "unterminated string");
>         KUNIT_EXPECT_EQ_MSG(test, of_property_count_strings(np, "unterminated-string-list"),    -EILSEQ, "unterminated string array");
>
>
>         ------------------------  ------------------------------------------------------------- --------------------------------------
>              ^                         ^                                                             ^
>              |                         |                                                             |
>              |                         |                                                             |
>             mostly boilerplate        what is being tested                                          expected result, error message
>             (can vary in relop
>              and _MSG or not)
>
> In my opinion, the second version is much more readable.  It is easy to see the
> differences in the boilerplate.  It is easy to see what is being tested, and how
> the arguments of the tested function vary for each test.  It is easy to see the
> expected result and error message.  The entire block fits into a single short
> window (though much wider).

I have no opinion on the over 80 char thing, so as long as everyone
else is okay with it, I have no complaints.

Cheers

^ permalink raw reply	[flat|nested] 232+ messages in thread

* [RFC v3 18/19] of: unittest: split out a couple of test cases from unittest
  2019-02-20 20:44                 ` frowand.list
  2019-02-20 20:44                   ` Frank Rowand
  2019-02-20 20:47                   ` frowand.list
@ 2019-02-28  3:52                   ` brendanhiggins
  2019-02-28  3:52                     ` Brendan Higgins
  2019-03-22  0:22                     ` frowand.list
  2 siblings, 2 replies; 232+ messages in thread
From: brendanhiggins @ 2019-02-28  3:52 UTC (permalink / raw)


On Wed, Feb 20, 2019 at 12:45 PM Frank Rowand <frowand.list at gmail.com> wrote:
>
> On 2/18/19 2:25 PM, Frank Rowand wrote:
> > On 2/15/19 2:56 AM, Brendan Higgins wrote:
> >> On Thu, Feb 14, 2019 at 6:05 PM Frank Rowand <frowand.list at gmail.com> wrote:
> >>>
> >>> On 2/14/19 4:56 PM, Brendan Higgins wrote:
> >>>> On Thu, Feb 14, 2019 at 3:57 PM Frank Rowand <frowand.list at gmail.com> wrote:
> >>>>>
> >>>>> On 12/5/18 3:54 PM, Brendan Higgins wrote:
> >>>>>> On Tue, Dec 4, 2018 at 2:58 AM Frank Rowand <frowand.list at gmail.com> wrote:
> >>>>>>>
>
> < snip >
>
> >
> > It makes it harder for me to read the source of the tests and
> > understand the order they will execute.  It also makes it harder
> > for me to read through the actual tests (in this example the
> > tests that are currently grouped in of_unittest_find_node_by_name())
> > because of all the extra function headers injected into the
> > existing single function to break it apart into many smaller
> > functions.
>
> < snip >
>
> >>>> This is not something I feel particularly strongly about, it is just
> >>>> pretty atypical from my experience to have so many unrelated test
> >>>> cases in a single file.
> >>>>
> >>>> Maybe you would prefer that I break up the test cases first, and then
> >>>> we split up the file as appropriate?
> >>>
> >>> I prefer that the test cases not be broken up arbitrarily.  There _may_
>
> I expect that I created confusion by putting this in a reply to patch 18/19.
> It is actually a comment about patch 19/19.  Sorry about that.
>

No worries.

>
> >>
> >> I wasn't trying to break them up arbitrarily. I thought I was doing it
> >> according to a pattern (breaking up the file, that is), but maybe I
> >> just hadn't looked at enough examples.
> >
> > This goes back to the kunit model of putting each test into a separate
> > function that can be a KUNIT_CASE().  That is a model that I do not agree
> > with for devicetree.
>
> So now that I am actually talking about patch 19/19, let me give a concrete
> example.  I will cut and paste (after my comments) the beginning portion
> of base-test.c, after applying patch 19/19 (the "base version").  Then I
> will cut and paste my alternative version, which does not break the tests
> down into individual functions (the "frank version").

Awesome, thanks for putting the comparison together!

>
> I will also reply to this email with the base version and the frank version
> as attachments, which will make it easier to save as separate versions
> for easier viewing.  I'm not sure if an email with attachments will make
> it through the list servers, but I am cautiously optimistic.
>
> I am using v4 of the patch series because I never got v3 to apply cleanly,
> and it is not a constructive use of my time to do so since I already have
> v4 applied.
>
> One of the points I was trying to make is that readability suffers from the
> approach taken by patches 18/19 and 19/19.

I understood that point.

>
> The base version contains the extra text of a function header for each
> unit test.  This is visual noise and makes the file larger.  It is also
> one more possible location of an error (although not likely).

I don't see how it is much more visual noise than a comment.
Admittedly, a space versus an underscore might be nicer, but I think a
function name is more likely to be kept up to date than a comment,
even if they are both purely informational. It also forces the user to
actually name all the tests. Even then, I am not married to doing it
this exact way. The thing I really care about is isolating the code in
each test case so that it can be executed separately.

A side thought: when I was proofreading this, it occurred to me that
you might not like the function name over the comment partly because
you think about them differently. You aren't used to seeing a function
used to frame things or communicate information in this way. Is this
true? Admittedly, I have gotten used to a lot of unit test frameworks
that break up test cases by function, so I am wondering if part of the
difference in comfort with this framing comes from the fact that I am
really used to seeing it this way and you are not? If this is the
case, maybe it would be better if we had something like:

KUNIT_DECLARE_CASE(case_id, "Test case description")
{
        KUNIT_EXPECT_EQ(kunit, ...);
        ...
}

Just a thought.
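
To make that slightly more concrete, here is a rough sketch of how
such a macro could expand. This is only an illustration, not something
in this series; it assumes struct kunit_case has the run_case and name
members that KUNIT_CASE() already implies:

#define KUNIT_DECLARE_CASE(case_id, description)                 \
        static void case_id(struct kunit *kunit);                \
        static struct kunit_case kunit_case_##case_id = {        \
                .run_case = case_id,                             \
                .name = description,                             \
        };                                                       \
        static void case_id(struct kunit *kunit)

The description could then read like prose while every case still gets
its own function that can be run in isolation.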

>
> The frank version has converted each of the new function headers into
> a comment, using the function name with '_' converted to ' '.  The
> comments are more readable than the function headers.  Note that I added
> an extra blank line before each comment, which violates the kernel
> coding standards, but I feel this makes the code more readable.

I agree that the extra space is an improvement, but I think any
sufficient visual separation would work.

>
> The base version needs to declare each of the individual test functions
> in of_test_find_node_by_name_cases[]. It is possible that a test function
> could be left out of of_test_find_node_by_name_cases[], in error.  This
> will result in a compile warning (I think warning instead of error, but
> I have not verified that) so the error might be caught or it might be
> overlooked.

It's a warning, but that can be fixed.
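
For reference, it is the standard unused-static-function diagnostic;
with gcc it looks roughly like this (file and line invented for
illustration):

  base-test.c:42:13: warning: 'of_test_find_node_by_name_basic' defined but not used [-Wunused-function]

Building with -Werror=unused-function (or adding a dedicated check to
the framework) would turn an overlooked test function into a hard
error.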

>
> In the base version, following the order of execution of the test code
> requires bouncing back and forth between the test functions and the coding
> of of_test_find_node_by_name_cases[].

You shouldn't need to bounce back and forth because the order in which
the tests run shouldn't matter.

>
> In the frank version the order of execution of the test code is obvious.

So I know we were arguing before over whether order *does* matter in
some of the other test cases (none in the example that you or I
posted), but wouldn't it be better if the order of execution didn't
matter? If you don't allow a user to depend on the execution order of
test cases, then arguably these dependencies between test cases would
never form and the order wouldn't matter.

>
> The base version is 265 lines.  The frank version is 208 lines, 57 lines
> less.  Less is better.

I agree that less is better, but there are different kinds of less to
consider. I prefer less logic in a function to fewer lines overall.

It seems we are in agreement that test cases should be small and
simple, so I won't dwell on that point any longer. I agree that the
test cases themselves when taken in isolation in base version and
frank version are equally simple (obviously, they are the same).

If I am correct, we are only debating whether it is best to put each
test case in its own function or not. That being said, I honestly
still think my version (base version) is easier to understand. The
reason I think mine is easier to read is entirely because of the code
isolation provided by each test case running in its own function. I
can look at a test case by itself and know that it doesn't depend on
anything that happened in a preceding test case. It is true that I
have to look in different places in the file, but I think that is more
than made up for by the fact that in order to understand a test case,
I only have to look at two functions: init, and the test case itself
(well, also exit if you care about how things are cleaned up). I don't
have to look through every single test case that precedes it.
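
Put differently, the per-case flow I am relying on amounts to the
following sketch (based on how init/exit and test_cases are described
in this thread, not the literal KUnit runner):

        struct kunit_case *test_case;

        for (test_case = module->test_cases; test_case->run_case; test_case++) {
                if (module->init)
                        module->init(test);        /* fresh state in test->priv */
                test_case->run_case(test);         /* the case under test */
                if (module->exit)
                        module->exit(test);        /* tear that state back down */
        }

So reading any one case never requires reading more than init, the
case itself, and exit.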

It might not be immediately obvious what isolation my version provides
over your version at first glance, and that is exactly the point. We
know that they are the same because you pulled the test cases out of
my version, but what about the other test suite in 19/19,
of_test_dynamic? If you notice, I did not just break out each test case by
wrapping it in a function; that didn't work because there was a
dependency between some of the test cases. I removed that dependency,
so that each test case is actually isolated:

## ============== single function (18/19) version ===========
static void of_unittest_dynamic(struct kunit *test)
{
        struct device_node *np;
        struct property *prop;

        np = of_find_node_by_path("/testcase-data");
        KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);

        /* Array of 4 properties for the purpose of testing */
        prop = kcalloc(4, sizeof(*prop), GFP_KERNEL);
        KUNIT_ASSERT_NOT_ERR_OR_NULL(test, prop);

        /* Add a new property - should pass */
        prop->name = "new-property";
        prop->value = "new-property-data";
        prop->length = strlen(prop->value) + 1;
        KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop), 0,
                            "Adding a new property failed\n");

        /* Try to add an existing property - should fail */
        prop++;
        prop->name = "new-property";
        prop->value = "new-property-data-should-fail";
        prop->length = strlen(prop->value) + 1;
        KUNIT_EXPECT_NE_MSG(test, of_add_property(np, prop), 0,
                            "Adding an existing property should have failed\n");

        /* Try to modify an existing property - should pass */
        prop->value = "modify-property-data-should-pass";
        prop->length = strlen(prop->value) + 1;
        KUNIT_EXPECT_EQ_MSG(
                test, of_update_property(np, prop), 0,
                "Updating an existing property should have passed\n");

        /* Try to modify non-existent property - should pass */
        prop++;
        prop->name = "modify-property";
        prop->value = "modify-missing-property-data-should-pass";
        prop->length = strlen(prop->value) + 1;
        KUNIT_EXPECT_EQ_MSG(test, of_update_property(np, prop), 0,
                            "Updating a missing property should have passed\n");

        /* Remove property - should pass */
        KUNIT_EXPECT_EQ_MSG(test, of_remove_property(np, prop), 0,
                            "Removing a property should have passed\n");

        /* Adding very large property - should pass */
        prop++;
        prop->name = "large-property-PAGE_SIZEx8";
        prop->length = PAGE_SIZE * 8;
        prop->value = kzalloc(prop->length, GFP_KERNEL);
        KUNIT_ASSERT_NOT_ERR_OR_NULL(test, prop->value);
        KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop), 0,
                            "Adding a large property should have passed\n");
}

## ============== multi function (19/19) version ===========
struct of_test_dynamic_context {
        struct device_node *np;
        struct property *prop0;
        struct property *prop1;
};

static void of_test_dynamic_basic(struct kunit *test)
{
        struct of_test_dynamic_context *ctx = test->priv;
        struct device_node *np = ctx->np;
        struct property *prop0 = ctx->prop0;

        /* Add a new property - should pass */
        prop0->name = "new-property";
        prop0->value = "new-property-data";
        prop0->length = strlen(prop0->value) + 1;
        KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop0), 0,
                            "Adding a new property failed\n");

        /* Test that we can remove a property */
        KUNIT_EXPECT_EQ(test, of_remove_property(np, prop0), 0);
}

static void of_test_dynamic_add_existing_property(struct kunit *test)
{
        struct of_test_dynamic_context *ctx = test->priv;
        struct device_node *np = ctx->np;
        struct property *prop0 = ctx->prop0, *prop1 = ctx->prop1;

        /* Add a new property - should pass */
        prop0->name = "new-property";
        prop0->value = "new-property-data";
        prop0->length = strlen(prop0->value) + 1;
        KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop0), 0,
                            "Adding a new property failed\n");

        /* Try to add an existing property - should fail */
        prop1->name = "new-property";
        prop1->value = "new-property-data-should-fail";
        prop1->length = strlen(prop1->value) + 1;
        KUNIT_EXPECT_NE_MSG(test, of_add_property(np, prop1), 0,
                            "Adding an existing property should have failed\n");
}

static void of_test_dynamic_modify_existing_property(struct kunit *test)
{
        struct of_test_dynamic_context *ctx = test->priv;
        struct device_node *np = ctx->np;
        struct property *prop0 = ctx->prop0, *prop1 = ctx->prop1;

        /* Add a new property - should pass */
        prop0->name = "new-property";
        prop0->value = "new-property-data";
        prop0->length = strlen(prop0->value) + 1;
        KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop0), 0,
                            "Adding a new property failed\n");

        /* Try to modify an existing property - should pass */
        prop1->name = "new-property";
        prop1->value = "modify-property-data-should-pass";
        prop1->length = strlen(prop1->value) + 1;
        KUNIT_EXPECT_EQ_MSG(test, of_update_property(np, prop1), 0,
                            "Updating an existing property should have
passed\n");
}

static void of_test_dynamic_modify_non_existent_property(struct kunit *test)
{
        struct of_test_dynamic_context *ctx = test->priv;
        struct device_node *np = ctx->np;
        struct property *prop0 = ctx->prop0;

        /* Try to modify non-existent property - should pass */
        prop0->name = "modify-property";
        prop0->value = "modify-missing-property-data-should-pass";
        prop0->length = strlen(prop0->value) + 1;
        KUNIT_EXPECT_EQ_MSG(test, of_update_property(np, prop0), 0,
                            "Updating a missing property should have passed\n");
}

static void of_test_dynamic_large_property(struct kunit *test)
{
        struct of_test_dynamic_context *ctx = test->priv;
        struct device_node *np = ctx->np;
        struct property *prop0 = ctx->prop0;

        /* Adding very large property - should pass */
        prop0->name = "large-property-PAGE_SIZEx8";
        prop0->length = PAGE_SIZE * 8;
        prop0->value = kunit_kzalloc(test, prop0->length, GFP_KERNEL);
        KUNIT_ASSERT_NOT_ERR_OR_NULL(test, prop0->value);

        KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop0), 0,
                            "Adding a large property should have passed\n");
}

static int of_test_dynamic_init(struct kunit *test)
{
        struct of_test_dynamic_context *ctx;

        KUNIT_ASSERT_EQ(test, 0, unittest_data_add());

        if (!of_aliases)
                of_aliases = of_find_node_by_path("/aliases");

        KUNIT_ASSERT_NOT_ERR_OR_NULL(test, of_find_node_by_path(
                        "/testcase-data/phandle-tests/consumer-a"));

        ctx = kunit_kzalloc(test, sizeof(*ctx), GFP_KERNEL);
        KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx);
        test->priv = ctx;

        ctx->np = of_find_node_by_path("/testcase-data");
        KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx->np);

        ctx->prop0 = kunit_kzalloc(test, sizeof(*ctx->prop0), GFP_KERNEL);
        KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx->prop0);

        ctx->prop1 = kunit_kzalloc(test, sizeof(*ctx->prop1), GFP_KERNEL);
        KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx->prop1);

        return 0;
}

static void of_test_dynamic_exit(struct kunit *test)
{
        struct of_test_dynamic_context *ctx = test->priv;
        struct device_node *np = ctx->np;

        of_remove_property(np, ctx->prop0);
        of_remove_property(np, ctx->prop1);
        of_node_put(np);
}

static struct kunit_case of_test_dynamic_cases[] = {
        KUNIT_CASE(of_test_dynamic_basic),
        KUNIT_CASE(of_test_dynamic_add_existing_property),
        KUNIT_CASE(of_test_dynamic_modify_existing_property),
        KUNIT_CASE(of_test_dynamic_modify_non_existent_property),
        KUNIT_CASE(of_test_dynamic_large_property),
        {},
};

static struct kunit_module of_test_dynamic_module = {
        .name = "of-dynamic-test",
        .init = of_test_dynamic_init,
        .exit = of_test_dynamic_exit,
        .test_cases = of_test_dynamic_cases,
};
module_test(of_test_dynamic_module);

Compare the test cases of_test_dynamic_basic,
of_test_dynamic_add_existing_property,
of_test_dynamic_modify_existing_property, and
of_test_dynamic_modify_non_existent_property to the originals. My
version is much longer overall, but I think it is still much easier to
understand. I can say from when I was trying to split this up in the
first place that it was not obvious which properties were expected to
be populated as a precondition for a given test case (except the first
one, of course). Whereas, in my version, it is immediately obvious
what the preconditions are for a test case. I think you can apply the
same logic to the examples you provided: in the frank version, I don't
immediately know whether one test case does something that is a
precondition for another.

My version also makes it easier to run a test case entirely by itself,
which is really valuable for debugging purposes. A common thing that
happens when you have lots of unit tests is that something breaks and
lots of tests fail. If the test cases are good, there should be just a
couple (ideally one) that directly assert the violated property; those
are the test cases you actually want to focus on, and the rest are
noise for the purposes of that breakage. In my version, it is much
easier to turn off the test cases that you don't care about and focus
in on the ones that exercise the violated property.
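
For example, with of_test_dynamic_cases[] above, narrowing a debugging
session down to the one case that directly asserts the violated
property is a quick edit (modulo the unused-function warning discussed
earlier):

static struct kunit_case of_test_dynamic_cases[] = {
        /* KUNIT_CASE(of_test_dynamic_basic), */
        KUNIT_CASE(of_test_dynamic_add_existing_property),
        /* KUNIT_CASE(of_test_dynamic_modify_existing_property), */
        /* KUNIT_CASE(of_test_dynamic_modify_non_existent_property), */
        /* KUNIT_CASE(of_test_dynamic_large_property), */
        {},
};

None of the remaining cases are affected, because none of them depend
on state left behind by the disabled ones.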

Now I know that some of these features, hermeticity especially, but
others as well (test suite summary, erroring on unused test case
functions, etc.), are not actually in KUnit as it is under
consideration here. Maybe it would be best to save these last two
patches (18/19 and 19/19) until I have those other features checked in
and then reconsider them?

>
> ## ==========  base version  ====================================
>
> // SPDX-License-Identifier: GPL-2.0
> /*
>  * Unit tests for functions defined in base.c.
>  */
> #include <linux/of.h>
>
> #include <kunit/test.h>
>
> #include "test-common.h"
>
> static void of_test_find_node_by_name_basic(struct kunit *test)
> {
>         struct device_node *np;
>         const char *name;
>
>         np = of_find_node_by_path("/testcase-data");
>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>         name = kasprintf(GFP_KERNEL, "%pOF", np);
>         KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
>                                "find /testcase-data failed\n");
>         of_node_put(np);
>         kfree(name);
> }
>
> static void of_test_find_node_by_name_trailing_slash(struct kunit *test)
> {
>         /* Test if trailing '/' works */
>         KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("/testcase-data/"), NULL,
>                             "trailing '/' on /testcase-data/ should fail\n");
>
> }
>
> static void of_test_find_node_by_name_multiple_components(struct kunit *test)
> {
>         struct device_node *np;
>         const char *name;
>
>         np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>         name = kasprintf(GFP_KERNEL, "%pOF", np);
>         KUNIT_EXPECT_STREQ_MSG(
>                 test, "/testcase-data/phandle-tests/consumer-a", name,
>                 "find /testcase-data/phandle-tests/consumer-a failed\n");
>         of_node_put(np);
>         kfree(name);
> }
>
> static void of_test_find_node_by_name_with_alias(struct kunit *test)
> {
>         struct device_node *np;
>         const char *name;
>
>         np = of_find_node_by_path("testcase-alias");
>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>         name = kasprintf(GFP_KERNEL, "%pOF", np);
>         KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
>                                "find testcase-alias failed\n");
>         of_node_put(np);
>         kfree(name);
> }
>
> static void of_test_find_node_by_name_with_alias_and_slash(struct kunit *test)
> {
>         /* Test if trailing '/' works on aliases */
>         KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("testcase-alias/"), NULL,
>                            "trailing '/' on testcase-alias/ should fail\n");
> }
>
> /*
>  * TODO(brendanhiggins at google.com): This looks like a duplicate of
>  * of_test_find_node_by_name_multiple_components
>  */
> static void of_test_find_node_by_name_multiple_components_2(struct kunit *test)
> {
>         struct device_node *np;
>         const char *name;
>
>         np = of_find_node_by_path("testcase-alias/phandle-tests/consumer-a");
>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>         name = kasprintf(GFP_KERNEL, "%pOF", np);
>         KUNIT_EXPECT_STREQ_MSG(
>                 test, "/testcase-data/phandle-tests/consumer-a", name,
>                 "find testcase-alias/phandle-tests/consumer-a failed\n");
>         of_node_put(np);
>         kfree(name);
> }
>
> static void of_test_find_node_by_name_missing_path(struct kunit *test)
> {
>         struct device_node *np;
>
>         KUNIT_EXPECT_EQ_MSG(
>                 test,
>                 np = of_find_node_by_path("/testcase-data/missing-path"), NULL,
>                 "non-existent path returned node %pOF\n", np);
>         of_node_put(np);
> }
>
> static void of_test_find_node_by_name_missing_alias(struct kunit *test)
> {
>         struct device_node *np;
>
>         KUNIT_EXPECT_EQ_MSG(
>                 test, np = of_find_node_by_path("missing-alias"), NULL,
>                 "non-existent alias returned node %pOF\n", np);
>         of_node_put(np);
> }
>
> static void of_test_find_node_by_name_missing_alias_with_relative_path(
>                 struct kunit *test)
> {
>         struct device_node *np;
>
>         KUNIT_EXPECT_EQ_MSG(
>                 test,
>                 np = of_find_node_by_path("testcase-alias/missing-path"), NULL,
>                 "non-existent alias with relative path returned node %pOF\n",
>                 np);
>         of_node_put(np);
> }
>
> static void of_test_find_node_by_name_with_option(struct kunit *test)
> {
>         struct device_node *np;
>         const char *options;
>
>         np = of_find_node_opts_by_path("/testcase-data:testoption", &options);
>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>         KUNIT_EXPECT_STREQ_MSG(test, "testoption", options,
>                                "option path test failed\n");
>         of_node_put(np);
> }
>
> static void of_test_find_node_by_name_with_option_and_slash(struct kunit *test)
> {
>         struct device_node *np;
>         const char *options;
>
>         np = of_find_node_opts_by_path("/testcase-data:test/option", &options);
>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>         KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
>                                "option path test, subcase #1 failed\n");
>         of_node_put(np);
>
>         np = of_find_node_opts_by_path("/testcase-data/testcase-device1:test/option", &options);
>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>         KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
>                                "option path test, subcase #2 failed\n");
>         of_node_put(np);
> }
>
> static void of_test_find_node_by_name_with_null_option(struct kunit *test)
> {
>         struct device_node *np;
>
>         np = of_find_node_opts_by_path("/testcase-data:testoption", NULL);
>         KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(test, np,
>                                          "NULL option path test failed\n");
>         of_node_put(np);
> }
>
> static void of_test_find_node_by_name_with_option_alias(struct kunit *test)
> {
>         struct device_node *np;
>         const char *options;
>
>         np = of_find_node_opts_by_path("testcase-alias:testaliasoption",
>                                        &options);
>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>         KUNIT_EXPECT_STREQ_MSG(test, "testaliasoption", options,
>                                "option alias path test failed\n");
>         of_node_put(np);
> }
>
> static void of_test_find_node_by_name_with_option_alias_and_slash(
>                 struct kunit *test)
> {
>         struct device_node *np;
>         const char *options;
>
>         np = of_find_node_opts_by_path("testcase-alias:test/alias/option",
>                                        &options);
>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>         KUNIT_EXPECT_STREQ_MSG(test, "test/alias/option", options,
>                                "option alias path test, subcase #1 failed\n");
>         of_node_put(np);
> }
>
> static void of_test_find_node_by_name_with_null_option_alias(struct kunit *test)
> {
>         struct device_node *np;
>
>         np = of_find_node_opts_by_path("testcase-alias:testaliasoption", NULL);
>         KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(
>                         test, np, "NULL option alias path test failed\n");
>         of_node_put(np);
> }
>
> static void of_test_find_node_by_name_option_clearing(struct kunit *test)
> {
>         struct device_node *np;
>         const char *options;
>
>         options = "testoption";
>         np = of_find_node_opts_by_path("testcase-alias", &options);
>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>         KUNIT_EXPECT_EQ_MSG(test, options, NULL,
>                             "option clearing test failed\n");
>         of_node_put(np);
> }
>
> static void of_test_find_node_by_name_option_clearing_root(struct kunit *test)
> {
>         struct device_node *np;
>         const char *options;
>
>         options = "testoption";
>         np = of_find_node_opts_by_path("/", &options);
>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>         KUNIT_EXPECT_EQ_MSG(test, options, NULL,
>                             "option clearing root node test failed\n");
>         of_node_put(np);
> }
>
> static int of_test_find_node_by_name_init(struct kunit *test)
> {
>         /* adding data for unittest */
>         KUNIT_ASSERT_EQ(test, 0, unittest_data_add());
>
>         if (!of_aliases)
>                 of_aliases = of_find_node_by_path("/aliases");
>
>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, of_find_node_by_path(
>                         "/testcase-data/phandle-tests/consumer-a"));
>
>         return 0;
> }
>
> static struct kunit_case of_test_find_node_by_name_cases[] = {
>         KUNIT_CASE(of_test_find_node_by_name_basic),
>         KUNIT_CASE(of_test_find_node_by_name_trailing_slash),
>         KUNIT_CASE(of_test_find_node_by_name_multiple_components),
>         KUNIT_CASE(of_test_find_node_by_name_with_alias),
>         KUNIT_CASE(of_test_find_node_by_name_with_alias_and_slash),
>         KUNIT_CASE(of_test_find_node_by_name_multiple_components_2),
>         KUNIT_CASE(of_test_find_node_by_name_missing_path),
>         KUNIT_CASE(of_test_find_node_by_name_missing_alias),
>         KUNIT_CASE(of_test_find_node_by_name_missing_alias_with_relative_path),
>         KUNIT_CASE(of_test_find_node_by_name_with_option),
>         KUNIT_CASE(of_test_find_node_by_name_with_option_and_slash),
>         KUNIT_CASE(of_test_find_node_by_name_with_null_option),
>         KUNIT_CASE(of_test_find_node_by_name_with_option_alias),
>         KUNIT_CASE(of_test_find_node_by_name_with_option_alias_and_slash),
>         KUNIT_CASE(of_test_find_node_by_name_with_null_option_alias),
>         KUNIT_CASE(of_test_find_node_by_name_option_clearing),
>         KUNIT_CASE(of_test_find_node_by_name_option_clearing_root),
>         {},
> };
>
> static struct kunit_module of_test_find_node_by_name_module = {
>         .name = "of-test-find-node-by-name",
>         .init = of_test_find_node_by_name_init,
>         .test_cases = of_test_find_node_by_name_cases,
> };
> module_test(of_test_find_node_by_name_module);
>
>
> ## ==========  frank version  ===================================
>
> // SPDX-License-Identifier: GPL-2.0
> /*
>  * Unit tests for functions defined in base.c.
>  */
> #include <linux/of.h>
>
> #include <kunit/test.h>
>
> #include "test-common.h"
>
> static void of_unittest_find_node_by_name(struct kunit *test)
> {
>         struct device_node *np;
>         const char *options, *name;
>
>
>         // find node by name basic
>
>         np = of_find_node_by_path("/testcase-data");
>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>         name = kasprintf(GFP_KERNEL, "%pOF", np);
>         KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
>                                "find /testcase-data failed\n");
>         of_node_put(np);
>         kfree(name);
>
>
>         // find node by name trailing slash
>
>         /* Test if trailing '/' works */
>         KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("/testcase-data/"), NULL,
>                             "trailing '/' on /testcase-data/ should fail\n");
>
>
>         // find node by name multiple components
>
>         np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>         name = kasprintf(GFP_KERNEL, "%pOF", np);
>         KUNIT_EXPECT_STREQ_MSG(
>                 test, "/testcase-data/phandle-tests/consumer-a", name,
>                 "find /testcase-data/phandle-tests/consumer-a failed\n");
>         of_node_put(np);
>         kfree(name);
>
>
>         // find node by name with alias
>
>         np = of_find_node_by_path("testcase-alias");
>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>         name = kasprintf(GFP_KERNEL, "%pOF", np);
>         KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
>                                "find testcase-alias failed\n");
>         of_node_put(np);
>         kfree(name);
>
>
>         // find node by name with alias and slash
>
>         /* Test if trailing '/' works on aliases */
>         KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("testcase-alias/"), NULL,
>                             "trailing '/' on testcase-alias/ should fail\n");
>
>
>         // find node by name multiple components 2
>
>         np = of_find_node_by_path("testcase-alias/phandle-tests/consumer-a");
>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>         name = kasprintf(GFP_KERNEL, "%pOF", np);
>         KUNIT_EXPECT_STREQ_MSG(
>                 test, "/testcase-data/phandle-tests/consumer-a", name,
>                 "find testcase-alias/phandle-tests/consumer-a failed\n");
>         of_node_put(np);
>         kfree(name);
>
>
>         // find node by name missing path
>
>         KUNIT_EXPECT_EQ_MSG(
>                 test,
>                 np = of_find_node_by_path("/testcase-data/missing-path"), NULL,
>                 "non-existent path returned node %pOF\n", np);
>         of_node_put(np);
>
>
>         // find node by name missing alias
>
>         KUNIT_EXPECT_EQ_MSG(
>                 test, np = of_find_node_by_path("missing-alias"), NULL,
>                 "non-existent alias returned node %pOF\n", np);
>         of_node_put(np);
>
>
>         //  find node by name missing alias with relative path
>
>         KUNIT_EXPECT_EQ_MSG(
>                 test,
>                 np = of_find_node_by_path("testcase-alias/missing-path"), NULL,
>                 "non-existent alias with relative path returned node %pOF\n",
>                 np);
>         of_node_put(np);
>
>
>         // find node by name with option
>
>         np = of_find_node_opts_by_path("/testcase-data:testoption", &options);
>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>         KUNIT_EXPECT_STREQ_MSG(test, "testoption", options,
>                                "option path test failed\n");
>         of_node_put(np);
>
>
>         // find node by name with option and slash
>
>         np = of_find_node_opts_by_path("/testcase-data:test/option", &options);
>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>         KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
>                                "option path test, subcase #1 failed\n");
>         of_node_put(np);
>
>         np = of_find_node_opts_by_path("/testcase-data/testcase-device1:test/option", &options);
>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>         KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
>                                "option path test, subcase #2 failed\n");
>         of_node_put(np);
>
>
>         // find node by name with null option
>
>         np = of_find_node_opts_by_path("/testcase-data:testoption", NULL);
>         KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(test, np,
>                                          "NULL option path test failed\n");
>         of_node_put(np);
>
>
>         // find node by name with option alias
>
>         np = of_find_node_opts_by_path("testcase-alias:testaliasoption",
>                                        &options);
>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>         KUNIT_EXPECT_STREQ_MSG(test, "testaliasoption", options,
>                                "option alias path test failed\n");
>         of_node_put(np);
>
>
>         // find node by name with option alias and slash
>
>         np = of_find_node_opts_by_path("testcase-alias:test/alias/option",
>                                        &options);
>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>         KUNIT_EXPECT_STREQ_MSG(test, "test/alias/option", options,
>                                "option alias path test, subcase #1 failed\n");
>         of_node_put(np);
>
>
>         // find node by name with null option alias
>
>         np = of_find_node_opts_by_path("testcase-alias:testaliasoption", NULL);
>         KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(
>                         test, np, "NULL option alias path test failed\n");
>         of_node_put(np);
>
>
>         // find node by name option clearing
>
>         options = "testoption";
>         np = of_find_node_opts_by_path("testcase-alias", &options);
>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>         KUNIT_EXPECT_EQ_MSG(test, options, NULL,
>                             "option clearing test failed\n");
>         of_node_put(np);
>
>
>         // find node by name option clearing root
>
>         options = "testoption";
>         np = of_find_node_opts_by_path("/", &options);
>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>         KUNIT_EXPECT_EQ_MSG(test, options, NULL,
>                             "option clearing root node test failed\n");
>         of_node_put(np);
> }
>
> static int of_test_init(struct kunit *test)
> {
>         /* adding data for unittest */
>         KUNIT_ASSERT_EQ(test, 0, unittest_data_add());
>
>         if (!of_aliases)
>                 of_aliases = of_find_node_by_path("/aliases");
>
>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, of_find_node_by_path(
>                         "/testcase-data/phandle-tests/consumer-a"));
>
>         return 0;
> }
>
> static struct kunit_case of_test_cases[] = {
>         KUNIT_CASE(of_unittest_find_node_by_name),
>         {},
> };
>
> static struct kunit_module of_test_module = {
>         .name = "of-base-test",
>         .init = of_test_init,
>         .test_cases = of_test_cases,
> };
> module_test(of_test_module);
>
>
> >
> >
> >>> be cases where the devicetree unittests are currently not well grouped
> >>> and may benefit from change, but if so that should be handled independently
> >>> of any transformation into a KUnit framework.
> >>
> >> I agree. I did this because I wanted to illustrate what I thought real
> >> world KUnit unit tests should look like (I also wanted to be able to
> >> show off KUnit test features that help you write these kinds of
> >> tests); I was not necessarily intending that all the of: unittest
> >> patches would get merged in with the whole RFC. I was mostly trying to
> >> create cause for discussion (which it seems like I succeeded at ;-) ).
> >>
> >> So fair enough, I will propose these patches separately and later
> >> (except of course this one that splits up the file). Do you want the
> >> initial transformation to the KUnit framework in the main KUnit
> >> patchset, or do you want that to be done separately? If I recall, Rob
> >> suggested this as a good initial example that other people could refer
> >> to, and some people seemed to think that I needed one to help guide
> >> the discussion and provide direction for early users. I don't
> >> necessarily think that means the initial real world example needs to
> >> be a part of the initial patchset though.

I really appreciate you taking the time to discuss these difficult points :-)

If the way I want to express test cases here is really that difficult
to read, then it means that I have some work to do to make it better,
because I plan on constructing other test cases in a very similar way.
So, if you think that these test cases have real readability issues,
then there is something I need to improve, either in the framework or
in the documentation.

So if you would rather discuss these patches later, once I have added
the features that would make the notion of hermeticity stronger, or
would make summaries better, or anything else I mentioned, that's fine
with me; but if you think there is something fundamentally wrong with
my approach, I would rather figure out the right way to handle it
sooner rather than later.

Looking forward to hearing what you think!

Cheers

^ permalink raw reply	[flat|nested] 232+ messages in thread

* [RFC v3 18/19] of: unittest: split out a couple of test cases from unittest
  2019-02-28  3:52                   ` brendanhiggins
@ 2019-02-28  3:52                     ` Brendan Higgins
  2019-03-22  0:22                     ` frowand.list
  1 sibling, 0 replies; 232+ messages in thread
From: Brendan Higgins @ 2019-02-28  3:52 UTC (permalink / raw)


On Wed, Feb 20, 2019@12:45 PM Frank Rowand <frowand.list@gmail.com> wrote:
>
> On 2/18/19 2:25 PM, Frank Rowand wrote:
> > On 2/15/19 2:56 AM, Brendan Higgins wrote:
> >> On Thu, Feb 14, 2019@6:05 PM Frank Rowand <frowand.list@gmail.com> wrote:
> >>>
> >>> On 2/14/19 4:56 PM, Brendan Higgins wrote:
> >>>> On Thu, Feb 14, 2019@3:57 PM Frank Rowand <frowand.list@gmail.com> wrote:
> >>>>>
> >>>>> On 12/5/18 3:54 PM, Brendan Higgins wrote:
> >>>>>> On Tue, Dec 4, 2018@2:58 AM Frank Rowand <frowand.list@gmail.com> wrote:
> >>>>>>>
>
> < snip >
>
> >
> > It makes it harder for me to read the source of the tests and
> > understand the order they will execute.  It also makes it harder
> > for me to read through the actual tests (in this example the
> > tests that are currently grouped in of_unittest_find_node_by_name())
> > because of all the extra function headers injected into the
> > existing single function to break it apart into many smaller
> > functions.
>
> < snip >
>
> >>>> This is not something I feel particularly strongly about, it is just
> >>>> pretty atypical from my experience to have so many unrelated test
> >>>> cases in a single file.
> >>>>
> >>>> Maybe you would prefer that I break up the test cases first, and then
> >>>> we split up the file as appropriate?
> >>>
> >>> I prefer that the test cases not be broken up arbitrarily.  There _may_
>
> I expect that I created confusion by putting this in a reply to patch 18/19.
> It is actually a comment about patch 19/19.  Sorry about that.
>

No worries.

>
> >>
> >> I wasn't trying to break them up arbitrarily. I thought I was doing it
> >> according to a pattern (breaking up the file, that is), but maybe I
> >> just hadn't looked at enough examples.
> >
> > This goes back to the kunit model of putting each test into a separate
> > function that can be a KUNIT_CASE().  That is a model that I do not agree
> > with for devicetree.
>
> So now that I am actually talking about patch 19/19, let me give a concrete
> example.  I will cut and paste (after my comments), the beginning portion
> of base-test.c, after applying patch 19/19 (the "base version".  Then I
> will cut and paste my alternative version which does not break the tests
> down into individual functions (the "frank version").

Awesome, thanks for putting the comparison together!

>
> I will also reply to this email with the base version and the frank version
> as attachments, which will make it easier to save as separate versions
> for easier viewing.  I'm not sure if an email with attachments will make
> it through the list servers, but I am cautiously optimistic.
>
> I am using v4 of the patch series because I never got v3 to cleanly apply
> and it is not a constructive use of my time to do so since I have v4 applied.
>
> One of the points I was trying to make is that readability suffers from the
> approach taken by patches 18/19 and 19/19.

I understood that point.

>
> The base version contains the extra text of a function header for each
> unit test.  This is visual noise and makes the file larger.  It is also
> one more possible location of an error (although not likely).

I don't see how it is much more visual noise than a comment.
Admittedly, a space versus an underscore might be nice, but I think it
is also more likely that a function name is more likely to be kept up
to date than a comment even if they are both informational. It also
forces the user to actually name all the tests. Even then, I am not
married to doing it this exact way. The thing I really care about is
isolating the code in each test case so that it can be executed
separately.

A side thought, when I was proofreading this, it occurred to me that
you might not like the function name over comment partly because you
think about them differently. You aren't used to seeing a function
used to frame things or communicate information in this way. Is this
true? Admittedly, I have gotten used to a lot of unit test frameworks
that break up test cases by function, so I wondering if part of the
difference in comfortability with this framing might come from the
fact that I am really used to seeing it this way and you are not? If
this is the case, maybe it would be better if we had something like:

KUNIT_DECLARE_CASE(case_id, "Test case description")
{
        KUNIT_EXPECT_EQ(kunit, ...);
        ...
}

Just a thought.

>
> The frank version has converted each of the new function headers into
> a comment, using the function name with '_' converted to ' '.  The
> comments are more readable than the function headers.  Note that I added
> an extra blank line before each comment, which violates the kernel
> coding standards, but I feel this makes the code more readable.

I agree that the extra space is an improvement, but I think any
sufficient visual separation would work.

>
> The base version needs to declare each of the individual test functions
> in of_test_find_node_by_name_cases[]. It is possible that a test function
> could be left out of of_test_find_node_by_name_cases[], in error.  This
> will result in a compile warning (I think warning instead of error, but
> I have not verified that) so the error might be caught or it might be
> overlooked.

It's a warning, but that can be fixed.

>
> In the base version, the order of execution of the test code requires
> bouncing back and forth between the test functions and the coding of
> of_test_find_node_by_name_cases[].

You shouldn't need to bounce back and forth because the order in which
the tests run shouldn't matter.

>
> In the frank version the order of execution of the test code is obvious.

So I know we were arguing before over whether order *does* matter in
some of the other test cases (none in the example that you or I
posted), but wouldn't it be better if the order of execution didn't
matter? If you don't allow a user to depend on the execution of test
cases, then arguably these test case dependencies would never form and
the order wouldn't matter.
>
> It is possible that a test function could be left out of
> of_test_find_node_by_name_cases[], in error.  This will result in a compile
> warning (I think warning instead of error, but I have not verified that)
> so it might be caught or it might be overlooked.
>
> The base version is 265 lines.  The frank version is 208 lines, 57 lines
> less.  Less is better.

I agree that less is better, but there are different kinds of less to
consider. I prefer less logic in a function to fewer lines overall.

It seems we are in agreement that test cases should be small and
simple, so I won't dwell on that point any longer. I agree that the
test cases themselves when taken in isolation in base version and
frank version are equally simple (obviously, they are the same).

If I am correct, we are only debating whether it is best to put each
test case in its own function or not. That being said, I honestly
still think my version (base version) is easier to understand. The
reason I think mine is easier to read is entirely because of the code
isolation provided by each test case running it its own function. I
can look at a test case by itself and know that it doesn't depend on
anything that happened in a preceding test case. It is true that I
have to look in different places in the file, but I think that is more
than made up for by the fact that in order to understand a test case,
I only have to look at two functions: init, and the test case itself
(well, also exit if you care about how things are cleaned up). I don't
have to look through every single test case that proceeds it.

It might not be immediately obvious what isolation my version provides
over your version at first glance, and that is exactly the point. We
know that they are the same because you pulled the test cases out of
my version, but what about the other test suite in 19/19,
of_test_dynamic? If you notice, I did not just break each test case by
wrapping it in a function; that didn't work because there was a
dependency between some of the test cases. I removed that dependency,
so that each test case is actually isolated:

## ============== single function (18/19) version ===========
static void of_unittest_dynamic(struct kunit *test)
{
        struct device_node *np;
        struct property *prop;

        np = of_find_node_by_path("/testcase-data");
        KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);

        /* Array of 4 properties for the purpose of testing */
        prop = kcalloc(4, sizeof(*prop), GFP_KERNEL);
        KUNIT_ASSERT_NOT_ERR_OR_NULL(test, prop);

        /* Add a new property - should pass*/
        prop->name = "new-property";
        prop->value = "new-property-data";
        prop->length = strlen(prop->value) + 1;
        KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop), 0,
                            "Adding a new property failed\n");

        /* Try to add an existing property - should fail */
        prop++;
        prop->name = "new-property";
        prop->value = "new-property-data-should-fail";
        prop->length = strlen(prop->value) + 1;
        KUNIT_EXPECT_NE_MSG(test, of_add_property(np, prop), 0,
                            "Adding an existing property should have failed\n");

        /* Try to modify an existing property - should pass */
        prop->value = "modify-property-data-should-pass";
        prop->length = strlen(prop->value) + 1;
        KUNIT_EXPECT_EQ_MSG(
                test, of_update_property(np, prop), 0,
                "Updating an existing property should have passed\n");

        /* Try to modify non-existent property - should pass*/
        prop++;
        prop->name = "modify-property";
        prop->value = "modify-missing-property-data-should-pass";
        prop->length = strlen(prop->value) + 1;
        KUNIT_EXPECT_EQ_MSG(test, of_update_property(np, prop), 0,
                            "Updating a missing property should have passed\n");

        /* Remove property - should pass */
        KUNIT_EXPECT_EQ_MSG(test, of_remove_property(np, prop), 0,
                            "Removing a property should have passed\n");

        /* Adding very large property - should pass */
        prop++;
        prop->name = "large-property-PAGE_SIZEx8";
        prop->length = PAGE_SIZE * 8;
        prop->value = kzalloc(prop->length, GFP_KERNEL);
        KUNIT_ASSERT_NOT_ERR_OR_NULL(test, prop->value);
        KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop), 0,
                            "Adding a large property should have passed\n");
}

## ============== multi function (19/19) version ===========
struct of_test_dynamic_context {
        struct device_node *np;
        struct property *prop0;
        struct property *prop1;
};

static void of_test_dynamic_basic(struct kunit *test)
{
        struct of_test_dynamic_context *ctx = test->priv;
        struct device_node *np = ctx->np;
        struct property *prop0 = ctx->prop0;

        /* Add a new property - should pass*/
        prop0->name = "new-property";
        prop0->value = "new-property-data";
        prop0->length = strlen(prop0->value) + 1;
        KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop0), 0,
                            "Adding a new property failed\n");

        /* Test that we can remove a property */
        KUNIT_EXPECT_EQ(test, of_remove_property(np, prop0), 0);
}

static void of_test_dynamic_add_existing_property(struct kunit *test)
{
        struct of_test_dynamic_context *ctx = test->priv;
        struct device_node *np = ctx->np;
        struct property *prop0 = ctx->prop0, *prop1 = ctx->prop1;

        /* Add a new property - should pass*/
        prop0->name = "new-property";
        prop0->value = "new-property-data";
        prop0->length = strlen(prop0->value) + 1;
        KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop0), 0,
                            "Adding a new property failed\n");

        /* Try to add an existing property - should fail */
        prop1->name = "new-property";
        prop1->value = "new-property-data-should-fail";
        prop1->length = strlen(prop1->value) + 1;
        KUNIT_EXPECT_NE_MSG(test, of_add_property(np, prop1), 0,
                            "Adding an existing property should have failed\n");
}

static void of_test_dynamic_modify_existing_property(struct kunit *test)
{
        struct of_test_dynamic_context *ctx = test->priv;
        struct device_node *np = ctx->np;
        struct property *prop0 = ctx->prop0, *prop1 = ctx->prop1;

        /* Add a new property - should pass*/
        prop0->name = "new-property";
        prop0->value = "new-property-data";
        prop0->length = strlen(prop0->value) + 1;
        KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop0), 0,
                            "Adding a new property failed\n");

        /* Try to modify an existing property - should pass */
        prop1->name = "new-property";
        prop1->value = "modify-property-data-should-pass";
        prop1->length = strlen(prop1->value) + 1;
        KUNIT_EXPECT_EQ_MSG(test, of_update_property(np, prop1), 0,
                            "Updating an existing property should have
passed\n");
}

static void of_test_dynamic_modify_non_existent_property(struct kunit *test)
{
        struct of_test_dynamic_context *ctx = test->priv;
        struct device_node *np = ctx->np;
        struct property *prop0 = ctx->prop0;

        /* Try to modify non-existent property - should pass*/
        prop0->name = "modify-property";
        prop0->value = "modify-missing-property-data-should-pass";
        prop0->length = strlen(prop0->value) + 1;
        KUNIT_EXPECT_EQ_MSG(test, of_update_property(np, prop0), 0,
                            "Updating a missing property should have passed\n");
}

static void of_test_dynamic_large_property(struct kunit *test)
{
        struct of_test_dynamic_context *ctx = test->priv;
        struct device_node *np = ctx->np;
        struct property *prop0 = ctx->prop0;

        /* Adding very large property - should pass */
        prop0->name = "large-property-PAGE_SIZEx8";
        prop0->length = PAGE_SIZE * 8;
        prop0->value = kunit_kzalloc(test, prop0->length, GFP_KERNEL);
        KUNIT_ASSERT_NOT_ERR_OR_NULL(test, prop0->value);

        KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop0), 0,
                            "Adding a large property should have passed\n");
}

static int of_test_dynamic_init(struct kunit *test)
{
        struct of_test_dynamic_context *ctx;

        KUNIT_ASSERT_EQ(test, 0, unittest_data_add());

        if (!of_aliases)
                of_aliases = of_find_node_by_path("/aliases");

        KUNIT_ASSERT_NOT_ERR_OR_NULL(test, of_find_node_by_path(
                        "/testcase-data/phandle-tests/consumer-a"));

        ctx = kunit_kzalloc(test, sizeof(*ctx), GFP_KERNEL);
        KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx);
        test->priv = ctx;

        ctx->np = of_find_node_by_path("/testcase-data");
        KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx->np);

        ctx->prop0 = kunit_kzalloc(test, sizeof(*ctx->prop0), GFP_KERNEL);
        KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx->prop0);

        ctx->prop1 = kunit_kzalloc(test, sizeof(*ctx->prop1), GFP_KERNEL);
        KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx->prop1);

        return 0;
}

static void of_test_dynamic_exit(struct kunit *test)
{
        struct of_test_dynamic_context *ctx = test->priv;
        struct device_node *np = ctx->np;

        of_remove_property(np, ctx->prop0);
        of_remove_property(np, ctx->prop1);
        of_node_put(np);
}

static struct kunit_case of_test_dynamic_cases[] = {
        KUNIT_CASE(of_test_dynamic_basic),
        KUNIT_CASE(of_test_dynamic_add_existing_property),
        KUNIT_CASE(of_test_dynamic_modify_existing_property),
        KUNIT_CASE(of_test_dynamic_modify_non_existent_property),
        KUNIT_CASE(of_test_dynamic_large_property),
        {},
};

static struct kunit_module of_test_dynamic_module = {
        .name = "of-dynamic-test",
        .init = of_test_dynamic_init,
        .exit = of_test_dynamic_exit,
        .test_cases = of_test_dynamic_cases,
};
module_test(of_test_dynamic_module);

Compare the test cases of_test_dynamic_basic,
of_test_dynamic_add_existing_property,
of_test_dynamic_modify_existing_property, and
of_test_dynamic_modify_non_existent_property to the originals. My
version is much longer overall, but I think it is still much easier to
understand. I can say from when I was first trying to split this up
that it was not obvious what properties were expected to be populated
as a precondition for a given test case (except the first one, of
course). Whereas, in my version, it is immediately obvious what the
preconditions are for a test case. I think you can apply this same
logic to the examples you provided: in the frank version, I don't
immediately know whether one test case does something that is a
precondition for another.
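
To make the hidden preconditions concrete, here is an excerpt from the
single function (18/19) version with my annotations added (the code
itself is unchanged); the shared prop pointer silently carries state
from one block to the next:

        /* Earlier block: adds "new-property" using slot 0 of the array. */
        prop->name = "new-property";
        prop->value = "new-property-data";
        ...
        /* Later block: assumes the block above succeeded; prop++ steps to
         * slot 1 and reuses the same name so that the add must fail. */
        prop++;
        prop->name = "new-property";
        prop->value = "new-property-data-should-fail";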

My version also makes it easier to run a test case entirely by itself,
which is really valuable for debugging purposes. A common thing that
happens when you have lots of unit tests is that something breaks and
many tests fail. If the test cases are good, there should be just a
couple of test cases (ideally one) that directly assert the violated
property; those are the test cases you actually want to focus on, and
the rest are noise for the purposes of that breakage. In my version, it
is much easier to turn off the test cases that you don't care about and
focus on the ones that exercise the violated property.
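
For example (just a debugging sketch, not part of the patch), isolating
of_test_dynamic_large_property is a matter of commenting out the other
entries in the case list; until the error on unused test case functions
mentioned below exists, the compiler will merely warn about the
now-unused functions:

        static struct kunit_case of_test_dynamic_cases[] = {
                /* KUNIT_CASE(of_test_dynamic_basic), */
                /* KUNIT_CASE(of_test_dynamic_add_existing_property), */
                /* KUNIT_CASE(of_test_dynamic_modify_existing_property), */
                /* KUNIT_CASE(of_test_dynamic_modify_non_existent_property), */
                KUNIT_CASE(of_test_dynamic_large_property),
                {},
        };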

Now I know that some of these features, hermeticity especially, but
others as well (test suite summary, error on unused test case
functions, etc.), are not actually in KUnit as it is under
consideration here. Maybe it would be best to set aside these last two
patches (18/19 and 19/19) until I have those other features checked in,
and reconsider them then?

>
> ## ==========  base version  ====================================
>
> // SPDX-License-Identifier: GPL-2.0
> /*
>  * Unit tests for functions defined in base.c.
>  */
> #include <linux/of.h>
>
> #include <kunit/test.h>
>
> #include "test-common.h"
>
> static void of_test_find_node_by_name_basic(struct kunit *test)
> {
>         struct device_node *np;
>         const char *name;
>
>         np = of_find_node_by_path("/testcase-data");
>         name = kasprintf(GFP_KERNEL, "%pOF", np);
>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>         KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
>                                "find /testcase-data failed\n");
>         of_node_put(np);
>         kfree(name);
> }
>
> static void of_test_find_node_by_name_trailing_slash(struct kunit *test)
> {
>         /* Test if trailing '/' works */
>         KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("/testcase-data/"), NULL,
>                             "trailing '/' on /testcase-data/ should fail\n");
>
> }
>
> static void of_test_find_node_by_name_multiple_components(struct kunit *test)
> {
>         struct device_node *np;
>         const char *name;
>
>         np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>         name = kasprintf(GFP_KERNEL, "%pOF", np);
>         KUNIT_EXPECT_STREQ_MSG(
>                 test, "/testcase-data/phandle-tests/consumer-a", name,
>                 "find /testcase-data/phandle-tests/consumer-a failed\n");
>         of_node_put(np);
>         kfree(name);
> }
>
> static void of_test_find_node_by_name_with_alias(struct kunit *test)
> {
>         struct device_node *np;
>         const char *name;
>
>         np = of_find_node_by_path("testcase-alias");
>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>         name = kasprintf(GFP_KERNEL, "%pOF", np);
>         KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
>                                "find testcase-alias failed\n");
>         of_node_put(np);
>         kfree(name);
> }
>
> static void of_test_find_node_by_name_with_alias_and_slash(struct kunit *test)
> {
>         /* Test if trailing '/' works on aliases */
>         KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("testcase-alias/"), NULL,
>                            "trailing '/' on testcase-alias/ should fail\n");
> }
>
> /*
>  * TODO(brendanhiggins at google.com): This looks like a duplicate of
>  * of_test_find_node_by_name_multiple_components
>  */
> static void of_test_find_node_by_name_multiple_components_2(struct kunit *test)
> {
>         struct device_node *np;
>         const char *name;
>
>         np = of_find_node_by_path("testcase-alias/phandle-tests/consumer-a");
>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>         name = kasprintf(GFP_KERNEL, "%pOF", np);
>         KUNIT_EXPECT_STREQ_MSG(
>                 test, "/testcase-data/phandle-tests/consumer-a", name,
>                 "find testcase-alias/phandle-tests/consumer-a failed\n");
>         of_node_put(np);
>         kfree(name);
> }
>
> static void of_test_find_node_by_name_missing_path(struct kunit *test)
> {
>         struct device_node *np;
>
>         KUNIT_EXPECT_EQ_MSG(
>                 test,
>                 np = of_find_node_by_path("/testcase-data/missing-path"), NULL,
>                 "non-existent path returned node %pOF\n", np);
>         of_node_put(np);
> }
>
> static void of_test_find_node_by_name_missing_alias(struct kunit *test)
> {
>         struct device_node *np;
>
>         KUNIT_EXPECT_EQ_MSG(
>                 test, np = of_find_node_by_path("missing-alias"), NULL,
>                 "non-existent alias returned node %pOF\n", np);
>         of_node_put(np);
> }
>
> static void of_test_find_node_by_name_missing_alias_with_relative_path(
>                 struct kunit *test)
> {
>         struct device_node *np;
>
>         KUNIT_EXPECT_EQ_MSG(
>                 test,
>                 np = of_find_node_by_path("testcase-alias/missing-path"), NULL,
>                 "non-existent alias with relative path returned node %pOF\n",
>                 np);
>         of_node_put(np);
> }
>
> static void of_test_find_node_by_name_with_option(struct kunit *test)
> {
>         struct device_node *np;
>         const char *options;
>
>         np = of_find_node_opts_by_path("/testcase-data:testoption", &options);
>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>         KUNIT_EXPECT_STREQ_MSG(test, "testoption", options,
>                                "option path test failed\n");
>         of_node_put(np);
> }
>
> static void of_test_find_node_by_name_with_option_and_slash(struct kunit *test)
> {
>         struct device_node *np;
>         const char *options;
>
>         np = of_find_node_opts_by_path("/testcase-data:test/option", &options);
>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>         KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
>                                "option path test, subcase #1 failed\n");
>         of_node_put(np);
>
>         np = of_find_node_opts_by_path("/testcase-data/testcase-device1:test/option", &options);
>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>         KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
>                                "option path test, subcase #2 failed\n");
>         of_node_put(np);
> }
>
> static void of_test_find_node_by_name_with_null_option(struct kunit *test)
> {
>         struct device_node *np;
>
>         np = of_find_node_opts_by_path("/testcase-data:testoption", NULL);
>         KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(test, np,
>                                          "NULL option path test failed\n");
>         of_node_put(np);
> }
>
> static void of_test_find_node_by_name_with_option_alias(struct kunit *test)
> {
>         struct device_node *np;
>         const char *options;
>
>         np = of_find_node_opts_by_path("testcase-alias:testaliasoption",
>                                        &options);
>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>         KUNIT_EXPECT_STREQ_MSG(test, "testaliasoption", options,
>                                "option alias path test failed\n");
>         of_node_put(np);
> }
>
> static void of_test_find_node_by_name_with_option_alias_and_slash(
>                 struct kunit *test)
> {
>         struct device_node *np;
>         const char *options;
>
>         np = of_find_node_opts_by_path("testcase-alias:test/alias/option",
>                                        &options);
>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>         KUNIT_EXPECT_STREQ_MSG(test, "test/alias/option", options,
>                                "option alias path test, subcase #1 failed\n");
>         of_node_put(np);
> }
>
> static void of_test_find_node_by_name_with_null_option_alias(struct kunit *test)
> {
>         struct device_node *np;
>
>         np = of_find_node_opts_by_path("testcase-alias:testaliasoption", NULL);
>         KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(
>                         test, np, "NULL option alias path test failed\n");
>         of_node_put(np);
> }
>
> static void of_test_find_node_by_name_option_clearing(struct kunit *test)
> {
>         struct device_node *np;
>         const char *options;
>
>         options = "testoption";
>         np = of_find_node_opts_by_path("testcase-alias", &options);
>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>         KUNIT_EXPECT_EQ_MSG(test, options, NULL,
>                             "option clearing test failed\n");
>         of_node_put(np);
> }
>
> static void of_test_find_node_by_name_option_clearing_root(struct kunit *test)
> {
>         struct device_node *np;
>         const char *options;
>
>         options = "testoption";
>         np = of_find_node_opts_by_path("/", &options);
>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>         KUNIT_EXPECT_EQ_MSG(test, options, NULL,
>                             "option clearing root node test failed\n");
>         of_node_put(np);
> }
>
> static int of_test_find_node_by_name_init(struct kunit *test)
> {
>         /* adding data for unittest */
>         KUNIT_ASSERT_EQ(test, 0, unittest_data_add());
>
>         if (!of_aliases)
>                 of_aliases = of_find_node_by_path("/aliases");
>
>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, of_find_node_by_path(
>                         "/testcase-data/phandle-tests/consumer-a"));
>
>         return 0;
> }
>
> static struct kunit_case of_test_find_node_by_name_cases[] = {
>         KUNIT_CASE(of_test_find_node_by_name_basic),
>         KUNIT_CASE(of_test_find_node_by_name_trailing_slash),
>         KUNIT_CASE(of_test_find_node_by_name_multiple_components),
>         KUNIT_CASE(of_test_find_node_by_name_with_alias),
>         KUNIT_CASE(of_test_find_node_by_name_with_alias_and_slash),
>         KUNIT_CASE(of_test_find_node_by_name_multiple_components_2),
>         KUNIT_CASE(of_test_find_node_by_name_missing_path),
>         KUNIT_CASE(of_test_find_node_by_name_missing_alias),
>         KUNIT_CASE(of_test_find_node_by_name_missing_alias_with_relative_path),
>         KUNIT_CASE(of_test_find_node_by_name_with_option),
>         KUNIT_CASE(of_test_find_node_by_name_with_option_and_slash),
>         KUNIT_CASE(of_test_find_node_by_name_with_null_option),
>         KUNIT_CASE(of_test_find_node_by_name_with_option_alias),
>         KUNIT_CASE(of_test_find_node_by_name_with_option_alias_and_slash),
>         KUNIT_CASE(of_test_find_node_by_name_with_null_option_alias),
>         KUNIT_CASE(of_test_find_node_by_name_option_clearing),
>         KUNIT_CASE(of_test_find_node_by_name_option_clearing_root),
>         {},
> };
>
> static struct kunit_module of_test_find_node_by_name_module = {
>         .name = "of-test-find-node-by-name",
>         .init = of_test_find_node_by_name_init,
>         .test_cases = of_test_find_node_by_name_cases,
> };
> module_test(of_test_find_node_by_name_module);
>
>
> ## ==========  frank version  ===================================
>
> // SPDX-License-Identifier: GPL-2.0
> /*
>  * Unit tests for functions defined in base.c.
>  */
> #include <linux/of.h>
>
> #include <kunit/test.h>
>
> #include "test-common.h"
>
> static void of_unittest_find_node_by_name(struct kunit *test)
> {
>         struct device_node *np;
>         const char *options, *name;
>
>
>         // find node by name basic
>
>         np = of_find_node_by_path("/testcase-data");
>         name = kasprintf(GFP_KERNEL, "%pOF", np);
>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>         KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
>                                "find /testcase-data failed\n");
>         of_node_put(np);
>         kfree(name);
>
>
>         // find node by name trailing slash
>
>         /* Test if trailing '/' works */
>         KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("/testcase-data/"), NULL,
>                             "trailing '/' on /testcase-data/ should fail\n");
>
>
>         // find node by name multiple components
>
>         np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>         name = kasprintf(GFP_KERNEL, "%pOF", np);
>         KUNIT_EXPECT_STREQ_MSG(
>                 test, "/testcase-data/phandle-tests/consumer-a", name,
>                 "find /testcase-data/phandle-tests/consumer-a failed\n");
>         of_node_put(np);
>         kfree(name);
>
>
>         // find node by name with alias
>
>         np = of_find_node_by_path("testcase-alias");
>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>         name = kasprintf(GFP_KERNEL, "%pOF", np);
>         KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
>                                "find testcase-alias failed\n");
>         of_node_put(np);
>         kfree(name);
>
>
>         // find node by name with alias and slash
>
>         /* Test if trailing '/' works on aliases */
>         KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("testcase-alias/"), NULL,
>                             "trailing '/' on testcase-alias/ should fail\n");
>
>
>         // find node by name multiple components 2
>
>         np = of_find_node_by_path("testcase-alias/phandle-tests/consumer-a");
>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>         name = kasprintf(GFP_KERNEL, "%pOF", np);
>         KUNIT_EXPECT_STREQ_MSG(
>                 test, "/testcase-data/phandle-tests/consumer-a", name,
>                 "find testcase-alias/phandle-tests/consumer-a failed\n");
>         of_node_put(np);
>         kfree(name);
>
>
>         // find node by name missing path
>
>         KUNIT_EXPECT_EQ_MSG(
>                 test,
>                 np = of_find_node_by_path("/testcase-data/missing-path"), NULL,
>                 "non-existent path returned node %pOF\n", np);
>         of_node_put(np);
>
>
>         // find node by name missing alias
>
>         KUNIT_EXPECT_EQ_MSG(
>                 test, np = of_find_node_by_path("missing-alias"), NULL,
>                 "non-existent alias returned node %pOF\n", np);
>         of_node_put(np);
>
>
>         // find node by name missing alias with relative path
>
>         KUNIT_EXPECT_EQ_MSG(
>                 test,
>                 np = of_find_node_by_path("testcase-alias/missing-path"), NULL,
>                 "non-existent alias with relative path returned node %pOF\n",
>                 np);
>         of_node_put(np);
>
>
>         // find node by name with option
>
>         np = of_find_node_opts_by_path("/testcase-data:testoption", &options);
>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>         KUNIT_EXPECT_STREQ_MSG(test, "testoption", options,
>                                "option path test failed\n");
>         of_node_put(np);
>
>
>         // find node by name with option and slash
>
>         np = of_find_node_opts_by_path("/testcase-data:test/option", &options);
>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>         KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
>                                "option path test, subcase #1 failed\n");
>         of_node_put(np);
>
>         np = of_find_node_opts_by_path("/testcase-data/testcase-device1:test/option", &options);
>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>         KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
>                                "option path test, subcase #2 failed\n");
>         of_node_put(np);
>
>
>         // find node by name with null option
>
>         np = of_find_node_opts_by_path("/testcase-data:testoption", NULL);
>         KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(test, np,
>                                          "NULL option path test failed\n");
>         of_node_put(np);
>
>
>         // find node by name with option alias
>
>         np = of_find_node_opts_by_path("testcase-alias:testaliasoption",
>                                        &options);
>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>         KUNIT_EXPECT_STREQ_MSG(test, "testaliasoption", options,
>                                "option alias path test failed\n");
>         of_node_put(np);
>
>
>         // find node by name with option alias and slash
>
>         np = of_find_node_opts_by_path("testcase-alias:test/alias/option",
>                                        &options);
>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>         KUNIT_EXPECT_STREQ_MSG(test, "test/alias/option", options,
>                                "option alias path test, subcase #1 failed\n");
>         of_node_put(np);
>
>
>         // find node by name with null option alias
>
>         np = of_find_node_opts_by_path("testcase-alias:testaliasoption", NULL);
>         KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(
>                         test, np, "NULL option alias path test failed\n");
>         of_node_put(np);
>
>
>         // find node by name option clearing
>
>         options = "testoption";
>         np = of_find_node_opts_by_path("testcase-alias", &options);
>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>         KUNIT_EXPECT_EQ_MSG(test, options, NULL,
>                             "option clearing test failed\n");
>         of_node_put(np);
>
>
>         // find node by name option clearing root
>
>         options = "testoption";
>         np = of_find_node_opts_by_path("/", &options);
>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>         KUNIT_EXPECT_EQ_MSG(test, options, NULL,
>                             "option clearing root node test failed\n");
>         of_node_put(np);
> }
>
> static int of_test_init(struct kunit *test)
> {
>         /* adding data for unittest */
>         KUNIT_ASSERT_EQ(test, 0, unittest_data_add());
>
>         if (!of_aliases)
>                 of_aliases = of_find_node_by_path("/aliases");
>
>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, of_find_node_by_path(
>                         "/testcase-data/phandle-tests/consumer-a"));
>
>         return 0;
> }
>
> static struct kunit_case of_test_cases[] = {
>         KUNIT_CASE(of_unittest_find_node_by_name),
>         {},
> };
>
> static struct kunit_module of_test_module = {
>         .name = "of-base-test",
>         .init = of_test_init,
>         .test_cases = of_test_cases,
> };
> module_test(of_test_module);
>
>
> >
> >
> >>> be cases where the devicetree unittests are currently not well grouped
> >>> and may benefit from change, but if so that should be handled independently
> >>> of any transformation into a KUnit framework.
> >>
> >> I agree. I did this because I wanted to illustrate what I thought real
> >> world KUnit unit tests should look like (I also wanted to be able to
> >> show off KUnit test features that help you write these kinds of
> >> tests); I was not necessarily intending that all the of: unittest
> >> patches would get merged in with the whole RFC. I was mostly trying to
> >> create cause for discussion (which it seems like I succeeded at ;-) ).
> >>
> >> So fair enough, I will propose these patches separately and later
> >> (except of course this one that splits up the file). Do you want the
> >> initial transformation to the KUnit framework in the main KUnit
> >> patchset, or do you want that to be done separately? If I recall, Rob
> >> suggested this as a good initial example that other people could refer
> >> to, and some people seemed to think that I needed one to help guide
> >> the discussion and provide direction for early users. I don't
> >> necessarily think that means the initial real world example needs to
> >> be a part of the initial patchset though.

I really appreciate you taking the time to discuss these difficult points :-)

If the way I want to express test cases here is really that difficult
to read, then I have some work to do to make it better, because I plan
on constructing other test cases in a very similar way. So, if you
think that these test cases have real readability issues, then there is
something I need to improve, either in the framework or in the
documentation.

If you would rather discuss these patches later, once I have added
those features that would make the notion of hermeticity stronger,
improve the summaries, or anything else I mentioned, that's fine with
me; but if you think there is something fundamentally wrong with my
approach, I would rather figure out the right way to handle it sooner
rather than later.

Looking forward to hearing what you think!

Cheers

^ permalink raw reply	[flat|nested] 232+ messages in thread

* [RFC v3 18/19] of: unittest: split out a couple of test cases from unittest
  2019-02-28  3:52                   ` brendanhiggins
  2019-02-28  3:52                     ` Brendan Higgins
@ 2019-03-22  0:22                     ` frowand.list
  2019-03-22  0:22                       ` Frank Rowand
                                         ` (2 more replies)
  1 sibling, 3 replies; 232+ messages in thread
From: frowand.list @ 2019-03-22  0:22 UTC (permalink / raw)


On 2/27/19 7:52 PM, Brendan Higgins wrote:
> On Wed, Feb 20, 2019 at 12:45 PM Frank Rowand <frowand.list at gmail.com> wrote:
>>
>> On 2/18/19 2:25 PM, Frank Rowand wrote:
>>> On 2/15/19 2:56 AM, Brendan Higgins wrote:
>>>> On Thu, Feb 14, 2019 at 6:05 PM Frank Rowand <frowand.list at gmail.com> wrote:
>>>>>
>>>>> On 2/14/19 4:56 PM, Brendan Higgins wrote:
>>>>>> On Thu, Feb 14, 2019 at 3:57 PM Frank Rowand <frowand.list at gmail.com> wrote:
>>>>>>>
>>>>>>> On 12/5/18 3:54 PM, Brendan Higgins wrote:
>>>>>>>> On Tue, Dec 4, 2018 at 2:58 AM Frank Rowand <frowand.list at gmail.com> wrote:
>>>>>>>>>
>>
>> < snip >
>>
>>>
>>> It makes it harder for me to read the source of the tests and
>>> understand the order they will execute.  It also makes it harder
>>> for me to read through the actual tests (in this example the
>>> tests that are currently grouped in of_unittest_find_node_by_name())
>>> because of all the extra function headers injected into the
>>> existing single function to break it apart into many smaller
>>> functions.
>>
>> < snip >
>>
>>>>>> This is not something I feel particularly strongly about, it is just
>>>>>> pretty atypical from my experience to have so many unrelated test
>>>>>> cases in a single file.
>>>>>>
>>>>>> Maybe you would prefer that I break up the test cases first, and then
>>>>>> we split up the file as appropriate?
>>>>>
>>>>> I prefer that the test cases not be broken up arbitrarily.  There _may_
>>
>> I expect that I created confusion by putting this in a reply to patch 18/19.
>> It is actually a comment about patch 19/19.  Sorry about that.
>>
> 
> No worries.
> 
>>
>>>>
>>>> I wasn't trying to break them up arbitrarily. I thought I was doing it
>>>> according to a pattern (breaking up the file, that is), but maybe I
>>>> just hadn't looked at enough examples.
>>>
>>> This goes back to the kunit model of putting each test into a separate
>>> function that can be a KUNIT_CASE().  That is a model that I do not agree
>>> with for devicetree.
>>
>> So now that I am actually talking about patch 19/19, let me give a concrete
>> example.  I will cut and paste (after my comments) the beginning portion
>> of base-test.c after applying patch 19/19 (the "base version").  Then I
>> will cut and paste my alternative version, which does not break the tests
>> down into individual functions (the "frank version").
> 
> Awesome, thanks for putting the comparison together!
> 
>>
>> I will also reply to this email with the base version and the frank version
>> as attachments, which will make it easier to save them as separate files
>> for viewing.  I'm not sure if an email with attachments will make
>> it through the list servers, but I am cautiously optimistic.
>>
>> I am using v4 of the patch series because I never got v3 to cleanly apply
>> and it is not a constructive use of my time to do so since I have v4 applied.
>>
>> One of the points I was trying to make is that readability suffers from the
>> approach taken by patches 18/19 and 19/19.
> 
> I understood that point.
> 
>>
>> The base version contains the extra text of a function header for each
>> unit test.  This is visual noise and makes the file larger.  It is also
>> one more possible location of an error (although not likely).
> 
> I don't see how it is much more visual noise than a comment.
> Admittedly, a space versus an underscore might be nice, but I think a
> function name is more likely to be kept up to date than a comment,
> even if they are both informational. It also
> forces the user to actually name all the tests. Even then, I am not
> married to doing it this exact way. The thing I really care about is
> isolating the code in each test case so that it can be executed
> separately.
> 
> A side thought: when I was proofreading this, it occurred to me that
> you might not like a function name over a comment partly because you
> think about them differently. You aren't used to seeing a function
> used to frame things or communicate information in this way. Is this

No.  It is more visual clutter and it is more functional clutter that
potentially has to be validated.


> true? Admittedly, I have gotten used to a lot of unit test frameworks
> that break up test cases by function, so I am wondering if part of the
> difference in comfort with this framing comes from the fact that I am
> really used to seeing it this way and you are not? If
> this is the case, maybe it would be better if we had something like:
> 
> KUNIT_DECLARE_CASE(case_id, "Test case description")
> {
>         KUNIT_EXPECT_EQ(kunit, ...);
>         ...
> }
> 
> Just a thought.
> 
>>
>> The frank version has converted each of the new function headers into
>> a comment, using the function name with '_' converted to ' '.  The
>> comments are more readable than the function headers.  Note that I added
>> an extra blank line before each comment, which violates the kernel
>> coding standards, but I feel this makes the code more readable.
> 
> I agree that the extra space is an improvement, but I think any
> sufficient visual separation would work.
> 
>>
>> The base version needs to declare each of the individual test functions
>> in of_test_find_node_by_name_cases[]. It is possible that a test function
>> could be left out of of_test_find_node_by_name_cases[], in error.  This
>> will result in a compile warning (I think warning instead of error, but
>> I have not verified that) so the error might be caught or it might be
>> overlooked.
> 
> It's a warning, but that can be fixed.
> 
>>
>> In the base version, following the order of execution of the test code
>> requires bouncing back and forth between the test functions and the
>> entries in of_test_find_node_by_name_cases[].
> 
> You shouldn't need to bounce back and forth because the order in which
> the tests run shouldn't matter.

If one can't guarantee total independence of all of the tests, with no
side effects, then yes.  But that is not my world.  To make that
guarantee, I would need to be able to run just a single test in an
entire test run.

I actually want to make side effects possible.  Whether from other
tests or from live kernel code that is accessing the live devicetree.
Any extra stress makes me happier.

I forget the exact term that has been tossed around, but to me the
devicetree unittests are more like system validation, release tests,
acceptance tests, and stress tests.  Not unit tests in the philosophy
of KUnit.

I do see the value of pure unit tests, and there are rare times that
my devicetree use case might be better served by that approach.  But
if so, it is very easy for me to add a simple pure test when debugging.
My general use case does not map onto this model.


>>
>> In the frank version the order of execution of the test code is obvious.
> 
> So I know we were arguing before over whether order *does* matter in
> some of the other test cases (none in the example that you or I
> posted), but wouldn't it be better if the order of execution didn't
> matter? If you don't allow a user to depend on the execution order of test
> cases, then arguably these test case dependencies would never form and
> the order wouldn't matter.

Reality intrudes.  Order does matter.


>>
>> The base version is 265 lines.  The frank version is 208 lines, 57 lines
>> less.  Less is better.
> 
> I agree that less is better, but there are different kinds of less to
> consider. I prefer less logic in a function to fewer lines overall.
> 
> It seems we are in agreement that test cases should be small and
> simple, so I won't dwell on that point any longer. I agree that the

As a general guide for simple unit tests, sure.

For my case, no.  Reality intrudes.

KUnit has a nice architectural view of what a unit test should be.

The existing devicetree "unittests" are not such unit tests.  They
simply share the same name.

The devicetree unittests do not fit into a clean:
  - initialize
  - do one test
  - clean up
model.
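
(For reference, here is my sketch of that model, using the structures
quoted above; the my_test_* names are placeholders, not code from the
patchset:)

        static int my_test_init(struct kunit *test)
        {
                /* initialize */
                return 0;
        }

        static void my_test_case(struct kunit *test)
        {
                /* do one test */
        }

        static void my_test_exit(struct kunit *test)
        {
                /* clean up */
        }

        static struct kunit_case my_test_cases[] = {
                KUNIT_CASE(my_test_case),
                {},
        };

        static struct kunit_module my_test_module = {
                .name = "my-test",
                .init = my_test_init,
                .exit = my_test_exit,
                .test_cases = my_test_cases,
        };
        module_test(my_test_module);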

Trying to force them into that model will not work.  The initialization
is not a simple, easy-to-decompose thing, and trying to decompose it
can actually make the code more complex and messier.

Clean up can NOT occur, because part of my test validation is looking
at the state of the device tree after the tests complete, viewed
through the /proc/device-tree/ interface.


> test cases themselves, when taken in isolation, in the base version and
> the frank version are equally simple (obviously, they are the same).
> 
> If I am correct, we are only debating whether it is best to put each
> test case in its own function or not. That being said, I honestly
> still think my version (base version) is easier to understand. The
> reason I think mine is easier to read is entirely because of the code
> isolation provided by each test case running in its own function. I
> can look at a test case by itself and know that it doesn't depend on
> anything that happened in a preceding test case. It is true that I
> have to look in different places in the file, but I think that is more
> than made up for by the fact that in order to understand a test case,
> I only have to look at two functions: init, and the test case itself
> (well, also exit if you care about how things are cleaned up). I don't
> have to look through every single test case that precedes it.
> 
> It might not be immediately obvious what isolation my version provides
> over your version at first glance, and that is exactly the point. We
> know that they are the same because you pulled the test cases out of
> my version, but what about the other test suite in 19/19,
> of_test_dynamic? If you notice, I did not just break out each test case
> by wrapping it in a function; that didn't work because there was a
> dependency between some of the test cases. I removed that dependency,
> so that each test case is actually isolated:
> 
> ## ============== single function (18/19) version ===========
> static void of_unittest_dynamic(struct kunit *test)
> {
>         struct device_node *np;
>         struct property *prop;
> 
>         np = of_find_node_by_path("/testcase-data");
>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> 
>         /* Array of 4 properties for the purpose of testing */
>         prop = kcalloc(4, sizeof(*prop), GFP_KERNEL);
>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, prop);
> 
>         /* Add a new property - should pass */
>         prop->name = "new-property";
>         prop->value = "new-property-data";
>         prop->length = strlen(prop->value) + 1;
>         KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop), 0,
>                             "Adding a new property failed\n");
> 
>         /* Try to add an existing property - should fail */
>         prop++;
>         prop->name = "new-property";
>         prop->value = "new-property-data-should-fail";
>         prop->length = strlen(prop->value) + 1;
>         KUNIT_EXPECT_NE_MSG(test, of_add_property(np, prop), 0,
>                             "Adding an existing property should have failed\n");
> 
>         /* Try to modify an existing property - should pass */
>         prop->value = "modify-property-data-should-pass";
>         prop->length = strlen(prop->value) + 1;
>         KUNIT_EXPECT_EQ_MSG(
>                 test, of_update_property(np, prop), 0,
>                 "Updating an existing property should have passed\n");
> 
>         /* Try to modify non-existent property - should pass */
>         prop++;
>         prop->name = "modify-property";
>         prop->value = "modify-missing-property-data-should-pass";
>         prop->length = strlen(prop->value) + 1;
>         KUNIT_EXPECT_EQ_MSG(test, of_update_property(np, prop), 0,
>                             "Updating a missing property should have passed\n");
> 
>         /* Remove property - should pass */
>         KUNIT_EXPECT_EQ_MSG(test, of_remove_property(np, prop), 0,
>                             "Removing a property should have passed\n");
> 
>         /* Adding very large property - should pass */
>         prop++;
>         prop->name = "large-property-PAGE_SIZEx8";
>         prop->length = PAGE_SIZE * 8;
>         prop->value = kzalloc(prop->length, GFP_KERNEL);
>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, prop->value);
>         KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop), 0,
>                             "Adding a large property should have passed\n");
> }
> 
> ## ============== multi function (19/19) version ===========
> struct of_test_dynamic_context {
>         struct device_node *np;
>         struct property *prop0;
>         struct property *prop1;
> };
> 
> static void of_test_dynamic_basic(struct kunit *test)
> {
>         struct of_test_dynamic_context *ctx = test->priv;
>         struct device_node *np = ctx->np;
>         struct property *prop0 = ctx->prop0;
> 
>         /* Add a new property - should pass */
>         prop0->name = "new-property";
>         prop0->value = "new-property-data";
>         prop0->length = strlen(prop0->value) + 1;
>         KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop0), 0,
>                             "Adding a new property failed\n");
> 
>         /* Test that we can remove a property */
>         KUNIT_EXPECT_EQ(test, of_remove_property(np, prop0), 0);
> }
> 
> static void of_test_dynamic_add_existing_property(struct kunit *test)
> {
>         struct of_test_dynamic_context *ctx = test->priv;
>         struct device_node *np = ctx->np;
>         struct property *prop0 = ctx->prop0, *prop1 = ctx->prop1;
> 
>         /* Add a new property - should pass */
>         prop0->name = "new-property";
>         prop0->value = "new-property-data";
>         prop0->length = strlen(prop0->value) + 1;
>         KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop0), 0,
>                             "Adding a new property failed\n");
> 
>         /* Try to add an existing property - should fail */
>         prop1->name = "new-property";
>         prop1->value = "new-property-data-should-fail";
>         prop1->length = strlen(prop1->value) + 1;
>         KUNIT_EXPECT_NE_MSG(test, of_add_property(np, prop1), 0,
>                             "Adding an existing property should have failed\n");
> }
> 
> static void of_test_dynamic_modify_existing_property(struct kunit *test)
> {
>         struct of_test_dynamic_context *ctx = test->priv;
>         struct device_node *np = ctx->np;
>         struct property *prop0 = ctx->prop0, *prop1 = ctx->prop1;
> 
>         /* Add a new property - should pass */
>         prop0->name = "new-property";
>         prop0->value = "new-property-data";
>         prop0->length = strlen(prop0->value) + 1;
>         KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop0), 0,
>                             "Adding a new property failed\n");
> 
>         /* Try to modify an existing property - should pass */
>         prop1->name = "new-property";
>         prop1->value = "modify-property-data-should-pass";
>         prop1->length = strlen(prop1->value) + 1;
>         KUNIT_EXPECT_EQ_MSG(test, of_update_property(np, prop1), 0,
>                             "Updating an existing property should have passed\n");
> }
> 
> static void of_test_dynamic_modify_non_existent_property(struct kunit *test)
> {
>         struct of_test_dynamic_context *ctx = test->priv;
>         struct device_node *np = ctx->np;
>         struct property *prop0 = ctx->prop0;
> 
>         /* Try to modify non-existent property - should pass */
>         prop0->name = "modify-property";
>         prop0->value = "modify-missing-property-data-should-pass";
>         prop0->length = strlen(prop0->value) + 1;
>         KUNIT_EXPECT_EQ_MSG(test, of_update_property(np, prop0), 0,
>                             "Updating a missing property should have passed\n");
> }
> 
> static void of_test_dynamic_large_property(struct kunit *test)
> {
>         struct of_test_dynamic_context *ctx = test->priv;
>         struct device_node *np = ctx->np;
>         struct property *prop0 = ctx->prop0;
> 
>         /* Adding very large property - should pass */
>         prop0->name = "large-property-PAGE_SIZEx8";
>         prop0->length = PAGE_SIZE * 8;
>         prop0->value = kunit_kzalloc(test, prop0->length, GFP_KERNEL);
>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, prop0->value);
> 
>         KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop0), 0,
>                             "Adding a large property should have passed\n");
> }
> 
> static int of_test_dynamic_init(struct kunit *test)
> {
>         struct of_test_dynamic_context *ctx;
> 
>         KUNIT_ASSERT_EQ(test, 0, unittest_data_add());
> 
>         if (!of_aliases)
>                 of_aliases = of_find_node_by_path("/aliases");
> 
>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, of_find_node_by_path(
>                         "/testcase-data/phandle-tests/consumer-a"));
> 
>         ctx = kunit_kzalloc(test, sizeof(*ctx), GFP_KERNEL);
>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx);
>         test->priv = ctx;
> 
>         ctx->np = of_find_node_by_path("/testcase-data");
>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx->np);
> 
>         ctx->prop0 = kunit_kzalloc(test, sizeof(*ctx->prop0), GFP_KERNEL);
>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx->prop0);
> 
>         ctx->prop1 = kunit_kzalloc(test, sizeof(*ctx->prop1), GFP_KERNEL);
>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx->prop1);
> 
>         return 0;
> }
> 
> static void of_test_dynamic_exit(struct kunit *test)
> {
>         struct of_test_dynamic_context *ctx = test->priv;
>         struct device_node *np = ctx->np;
> 
>         of_remove_property(np, ctx->prop0);
>         of_remove_property(np, ctx->prop1);
>         of_node_put(np);
> }
> 
> static struct kunit_case of_test_dynamic_cases[] = {
>         KUNIT_CASE(of_test_dynamic_basic),
>         KUNIT_CASE(of_test_dynamic_add_existing_property),
>         KUNIT_CASE(of_test_dynamic_modify_existing_property),
>         KUNIT_CASE(of_test_dynamic_modify_non_existent_property),
>         KUNIT_CASE(of_test_dynamic_large_property),
>         {},
> };
> 
> static struct kunit_module of_test_dynamic_module = {
>         .name = "of-dynamic-test",
>         .init = of_test_dynamic_init,
>         .exit = of_test_dynamic_exit,
>         .test_cases = of_test_dynamic_cases,
> };
> module_test(of_test_dynamic_module);
> 
> Compare the test cases of_test_dynamic_basic,
> of_test_dynamic_add_existing_property,
> of_test_dynamic_modify_existing_property, and
> of_test_dynamic_modify_non_existent_property to the originals. My
> version is much longer overall, but I think it is still much easier to
> understand. I can say from when I was first trying to split this up
> that it was not obvious what properties were expected to be populated
> as a precondition for a given test case (except the first one, of
> course). Whereas, in my version, it is immediately obvious what the
> preconditions are for a test case. I think you can apply this same
> logic to the examples you provided: in the frank version, I don't
> immediately know whether one test case does something that is a
> precondition for another.

Yes, that is a real problem in the current code, but easily fixed
with comments.
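
For example (a sketch of the kind of comment I mean, applied to the
single function version quoted above):

        /* precondition: the "Add a new property" block above succeeded */
        /* Try to modify an existing property - should pass */
        prop->value = "modify-property-data-should-pass";
        prop->length = strlen(prop->value) + 1;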


> My version also makes it easier to run a test case entirely by itself,
> which is really valuable for debugging purposes. A common thing that
> happens when you have lots of unit tests is that something breaks and
> many tests fail. If the test cases are good, there should be just a
> couple of test cases (ideally one) that directly assert the violated
> property; those are the test cases you actually want to focus on, and
> the rest are noise for the purposes of that breakage. In my version, it
> is much easier to turn off the test cases that you don't care about and
> focus on the ones that exercise the violated property.
> 
> Now I know that some of these features, hermeticity especially, but
> others as well (test suite summary, error on unused test case
> functions, etc.), are not actually in KUnit as it is under
> consideration here. Maybe it would be best to set aside these last two
> patches (18/19 and 19/19) until I have those other features checked in,
> and reconsider them then?

Thanks for leaving 18/19 and 19/19 off in v4.

-Frank

> 
>>
>> < snip  base version and frank version, quoted in full above >
>>
>>
>>>
>>>
>>>>> be cases where the devicetree unittests are currently not well grouped
>>>>> and may benefit from change, but if so that should be handled independently
>>>>> of any transformation into a KUnit framework.
>>>>
>>>> I agree. I did this because I wanted to illustrate what I thought real
>>>> world KUnit unit tests should look like (I also wanted to be able to
>>>> show off KUnit test features that help you write these kinds of
>>>> tests); I was not necessarily intending that all the of: unittest
>>>> patches would get merged in with the whole RFC. I was mostly trying to
>>>> create cause for discussion (which it seems like I succeeded at ;-) ).
>>>>
>>>> So fair enough, I will propose these patches separately and later
>>>> (except of course this one that splits up the file). Do you want the
>>>> initial transformation to the KUnit framework in the main KUnit
>>>> patchset, or do you want that to be done separately? If I recall, Rob
>>>> suggested this as a good initial example that other people could refer
>>>> to, and some people seemed to think that I needed one to help guide
>>>> the discussion and provide direction for early users. I don't
>>>> necessarily think that means the initial real world example needs to
>>>> be a part of the initial patchset though.
> 
> I really appreciate you taking the time to discuss these difficult points :-)
> 
> If the way I want to express test cases here is really that difficult
> to read, then I have some work to do to make it better, because I plan
> on constructing other test cases in a very similar way. So, if you
> think that these test cases have real readability issues, then there
> is something I need to improve in either the framework or the
> documentation.
> 
> So if you would rather discuss these patches later, once I have added
> the features that would make the notion of hermeticity stronger, or
> would make summaries better, or anything else I mentioned, that's fine
> with me; but if you think there is something fundamentally wrong with
> my approach, I would rather figure out the right way to handle it
> sooner rather than later.
> 
> Looking forward to hearing what you think!
> 
> Cheers
> 

^ permalink raw reply	[flat|nested] 232+ messages in thread

* [RFC v3 00/19] kunit: introduce KUnit, the Linux kernel unit testing framework
  2018-12-05 23:10     ` brendanhiggins
  2018-12-05 23:10       ` Brendan Higgins
@ 2019-03-22  0:27       ` frowand.list
  2019-03-22  0:27         ` Frank Rowand
  2019-03-25 22:04         ` brendanhiggins
  1 sibling, 2 replies; 232+ messages in thread
From: frowand.list @ 2019-03-22  0:27 UTC (permalink / raw)


On 12/5/18 3:10 PM, Brendan Higgins wrote:
> On Tue, Dec 4, 2018 at 5:49 AM Rob Herring <robh at kernel.org> wrote:
>>
>> On Tue, Dec 4, 2018 at 5:40 AM Frank Rowand <frowand.list at gmail.com> wrote:
>>>
>>> Hi Brendan, Rob,
>>>
>>> Pulling a comment from way back in the v1 patch thread:
>>>
>>> On 10/17/18 3:22 PM, Brendan Higgins wrote:
>>>> On Wed, Oct 17, 2018 at 10:49 AM <Tim.Bird at sony.com> wrote:
>>>
>>> < snip >
>>>
>>>> The test and the code under test are linked together in the same
>>>> binary and are compiled under Kbuild. Right now I am linking
>>>> everything into a UML kernel, but I would ultimately like to make
>>>> tests compile into completely independent test binaries. So each test
>>>> file would get compiled into its own test binary and would link
>>>> against only the code needed to run the test, but we are a bit of a
>>>> ways off from that.
>>>
>>> I have never used UML, so you should expect naive questions from me,
>>> exhibiting my lack of understanding.
>>>
>>> Does this mean that I have to build a UML architecture kernel to run
>>> the KUnit tests?
>>
>> In this version of the patch series, yes.
>>
>>> *** Rob, if the answer is yes, then it seems like for my workflow,
>>> which is to build for real ARM hardware, my work is doubled (or
>>> worse), because for every patch/commit that I apply, I not only have
>>> to build the ARM kernel and boot on the real hardware to test, I also
>>> have to build the UML kernel and boot in UML.  If that is correct
>>> then I see this as a major problem for me.
>>
>> I've already raised this issue elsewhere in the series. Restricting
>> the DT tests to UML is a non-starter.
> 

> I have already stated my position elsewhere on the matter, but in
> summary: Ensuring most tests can run without external dependencies
> (hardware, VM, etc) has a lot of benefits and should be supported in
> nearly all cases, but such tests should also work when compiled to run
> on real hardware/VM; the tooling might not be as good in the latter
> case, but I understand that there are good reasons to support it
> nonetheless.

And my needs are the exact opposite.  My tests must run on real hardware,
in the context of the real operating system subsystems and drivers
potentially causing issues.

It is useful if the tests can also run without that dependency.

-Frank


> 
> So I am going to try to add basic support for running tests on other
> architectures in the next version or two.

< snip >

^ permalink raw reply	[flat|nested] 232+ messages in thread

* [RFC v3 18/19] of: unittest: split out a couple of test cases from unittest
  2019-03-22  0:22                     ` frowand.list
  2019-03-22  0:22                       ` Frank Rowand
@ 2019-03-22  1:30                       ` brendanhiggins
  2019-03-22  1:30                         ` Brendan Higgins
                                           ` (2 more replies)
  2019-03-22  1:34                       ` frowand.list
  2 siblings, 3 replies; 232+ messages in thread
From: brendanhiggins @ 2019-03-22  1:30 UTC (permalink / raw)


On Thu, Mar 21, 2019 at 5:22 PM Frank Rowand <frowand.list at gmail.com> wrote:
>
> On 2/27/19 7:52 PM, Brendan Higgins wrote:
> > On Wed, Feb 20, 2019 at 12:45 PM Frank Rowand <frowand.list at gmail.com> wrote:
> >>
> >> On 2/18/19 2:25 PM, Frank Rowand wrote:
> >>> On 2/15/19 2:56 AM, Brendan Higgins wrote:
> >>>> On Thu, Feb 14, 2019 at 6:05 PM Frank Rowand <frowand.list at gmail.com> wrote:
> >>>>>
> >>>>> On 2/14/19 4:56 PM, Brendan Higgins wrote:
> >>>>>> On Thu, Feb 14, 2019 at 3:57 PM Frank Rowand <frowand.list at gmail.com> wrote:
> >>>>>>>
> >>>>>>> On 12/5/18 3:54 PM, Brendan Higgins wrote:
> >>>>>>>> On Tue, Dec 4, 2018 at 2:58 AM Frank Rowand <frowand.list at gmail.com> wrote:
< snip >
> >>
> >> In the base version, the order of execution of the test code requires
> >> bouncing back and forth between the test functions and the coding of
> >> of_test_find_node_by_name_cases[].
> >
> > You shouldn't need to bounce back and forth because the order in which
> > the tests run shouldn't matter.
>
> If one can't guarantee total independence of all of the tests, with no
> side effects, then yes.  But that is not my world.  To make that
> guarantee, I would need to be able to run just a single test in an
> entire test run.
>
> I actually want to make side effects possible.  Whether from other
> tests or from live kernel code that is accessing the live devicetree.
> Any extra stress makes me happier.
>
> I forget the exact term that has been tossed around, but to me the
> devicetree unittests are more like system validation, release tests,
> acceptance tests, and stress tests.  Not unit tests in the philosophy
> of KUnit.

Ah, I understand. I thought that they were actually trying to be unit
tests; that pretty much moots this discussion, then. Integration tests
and end-to-end tests are valuable as long as that is actually what you
are trying to do.

>
> I do see the value of pure unit tests, and there are rare times that
> my devicetree use case might be better served by that approach.  But
> if so, it is very easy for me to add a simple pure test when debugging.
> My general use case does not map onto this model.

Why do you think it is rare that you would actually want unit tests?

I mean, if you don't get much code churn, then maybe it's not going to
provide you a ton of value to immediately go and write a bunch of unit
tests right now, but I can't think of a single time when it has hurt.
Unit tests, from my experience, are usually the easiest tests to
maintain and the most helpful when I am developing.

Maybe I need to understand your use case better.
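
To be concrete about the kind of test I find helpful, here is roughly
the smallest possible suite in the framework as posted (just a sketch
against the API used elsewhere in this thread; the example names and
the trivial assertion are mine, not from any patch):

#include <kunit/test.h>

/*
 * A single isolated case: it asserts one property and shares no state
 * with any other case, so it can run (or fail) entirely on its own.
 */
static void example_basic_test(struct kunit *test)
{
        KUNIT_EXPECT_EQ(test, 4, 2 + 2);
}

static struct kunit_case example_test_cases[] = {
        KUNIT_CASE(example_basic_test),
        {},
};

static struct kunit_module example_test_module = {
        .name = "example-test",
        .test_cases = example_test_cases,
};
module_test(example_test_module);

When something breaks, a case like that points at exactly one property,
which is the isolation I keep coming back to.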

>
>
> >>
> >> In the frank version the order of execution of the test code is obvious.
> >
> > So I know we were arguing before over whether order *does* matter in
> > some of the other test cases (none in the example that you or I
> > posted), but wouldn't it be better if the order of execution didn't
> > matter? If you don't allow a user to depend on the execution of test
> > cases, then arguably these test case dependencies would never form and
> > the order wouldn't matter.
>
> Reality intrudes.  Order does matter.
>
>
> >>
> >> It is possible that a test function could be left out of
> >> of_test_find_node_by_name_cases[], in error.  This will result in a compile
> >> warning (I think warning instead of error, but I have not verified that)
> >> so it might be caught or it might be overlooked.
> >>
> >> The base version is 265 lines.  The frank version is 208 lines, 57 lines
> >> less.  Less is better.
> >
> > I agree that less is better, but there are different kinds of less to
> > consider. I prefer less logic in a function to fewer lines overall.
> >
> > It seems we are in agreement that test cases should be small and
> > simple, so I won't dwell on that point any longer. I agree that the
>
> As a general guide for simple unit tests, sure.
>
> For my case, no.  Reality intrudes.
>
> KUnit has a nice architectural view of what a unit test should be.

Cool, I am glad you think so! That actually means a lot to me. I was
afraid I wasn't conveying the idea properly and that was the root of
this debate.

>
> The existing devicetree "unittests" are not such unit tests.  They
> simply share the same name.
>
> The devicetree unittests do not fit into a clean:
>   - initialize
>   - do one test
>   - clean up
> model.
>
> Trying to force them into that model will not work.  The initialize
> is not a simple, easy to decompose thing.  And trying to decompose
> it can actually make the code more complex and messier.
>
> Clean up can NOT occur, because part of my test validation is looking
> at the state of the device tree after the tests complete, viewed
> through the /proc/device-tree/ interface.
>

Again, if they are not actually intended to be unit tests, then I
think that is fine.

< snip >

> > Compare the test cases for adding of_test_dynamic_basic,
> > of_test_dynamic_add_existing_property,
> > of_test_dynamic_modify_existing_property, and
> > of_test_dynamic_modify_non_existent_property to the originals. My
> > version is much longer overall, but I think is still much easier to
> > understand. I can say from when I was trying to split this up in the
> > first place, it was not obvious what properties were expected to be
> > populated as a precondition for a given test case (except the first
> > one of course). Whereas, in my version, it is immediately obvious what
> > the preconditions are for a test case. I think you can apply this same
> > logic to the examples you provided: in the frank version, I don't
> > immediately know if one test case does something that is a
> > precondition for another test case.
>
> Yes, that is a real problem in the current code, but easily fixed
> with comments.

I think it is best when you don't need comments, but in this case, I
think I have to agree with you.

>
>
> > My version also makes it easier to run a test case entirely by itself
> > which is really valuable for debugging purposes. A common thing that
> > happens when you have lots of unit tests is something breaks and lots
> > of tests fail. If the test cases are good, there should be just a
> > couple (ideally one) test cases that directly assert the violated
> > property; those are the test cases you actually want to focus on, the
> > rest are noise for the purposes of that breakage. In my version, it is
> > much easier to turn off the test cases that you don't care about and
> > then focus in on the ones that exercise the violated property.
> >
> > Now I know that, hermeticity especially, but other features as well
> > (test suite summary, error on unused test case function, etc) are not
> > actually in KUnit as it is under consideration here. Maybe it would be
> > best to save these last two patches (18/19, and 19/19) until I have
> > these other features checked in and reconsider them then?
>
> Thanks for leaving 18/19 and 19/19 off in v4.

Sure, no problem. It was pretty clear that it was a waste of both of
our times to continue discussing those at this juncture. :-)

Do you still want me to try to convert the DT not-exactly-unittest to
KUnit? I would kind of prefer (I don't feel *super* strongly about the
matter) we don't call it that since I was intending for it to be the
flagship initial example, but I certainly don't mind trying to clean
this patch up to get it up to snuff. It's really just a question of
whether it is worth it to you.

< snip >

Cheers!

^ permalink raw reply	[flat|nested] 232+ messages in thread

* [RFC v3 18/19] of: unittest: split out a couple of test cases from unittest
  2019-03-22  0:22                     ` frowand.list
  2019-03-22  0:22                       ` Frank Rowand
  2019-03-22  1:30                       ` brendanhiggins
@ 2019-03-22  1:34                       ` frowand.list
  2019-03-22  1:34                         ` Frank Rowand
  2019-03-25 22:18                         ` brendanhiggins
  2 siblings, 2 replies; 232+ messages in thread
From: frowand.list @ 2019-03-22  1:34 UTC (permalink / raw)


On 3/21/19 5:22 PM, Frank Rowand wrote:
> On 2/27/19 7:52 PM, Brendan Higgins wrote:

< snip >

>> Now I know that, hermeticity especially, but other features as well
>> (test suite summary, error on unused test case function, etc) are not
>> actually in KUnit as it is under consideration here. Maybe it would be
>> best to save these last two patches (18/19, and 19/19) until I have
>> these other features checked in and reconsider them then?
> 
> Thanks for leaving 18/19 and 19/19 off in v4.

Oops, they got into v4 but as 16/17 and 17/17, I think.  But it sounds
like you are planning to leave them out of v5.

> 
> -Frank

^ permalink raw reply	[flat|nested] 232+ messages in thread

* [RFC v3 18/19] of: unittest: split out a couple of test cases from unittest
  2019-03-22  1:30                       ` brendanhiggins
  2019-03-22  1:30                         ` Brendan Higgins
@ 2019-03-22  1:47                         ` frowand.list
  2019-03-22  1:47                           ` Frank Rowand
  2019-03-25 22:15                           ` brendanhiggins
  2019-09-20 16:57                         ` Rob Herring
  2 siblings, 2 replies; 232+ messages in thread
From: frowand.list @ 2019-03-22  1:47 UTC (permalink / raw)


On 3/21/19 6:30 PM, Brendan Higgins wrote:
> On Thu, Mar 21, 2019 at 5:22 PM Frank Rowand <frowand.list at gmail.com> wrote:
>>
>> On 2/27/19 7:52 PM, Brendan Higgins wrote:

< snip >  but thanks for the comments in the snipped section.


>>
>> Thanks for leaving 18/19 and 19/19 off in v4.
> 
> Sure, no problem. It was pretty clear that it was a waste of both of
> our times to continue discussing those at this juncture. :-)
> 
> Do you still want me to try to convert the DT not-exactly-unittest to
> KUnit? I would kind of prefer (I don't feel *super* strongly about the
> matter) we don't call it that since I was intending for it to be the
> flagship initial example, but I certainly don't mind trying to clean
> this patch up to get it up to snuff. It's really just a question of
> whether it is worth it to you.

In the long term, if KUnit is adopted by the kernel, then I think it
probably makes sense for devicetree unittest to convert from using
our own unittest() function to report an individual test pass/fail
to instead use something like KUNIT_EXPECT_*() to provide more
consistent test messages to test frameworks.  That is assuming
KUNIT_EXPECT_*() provides comparable functionality.  I still have
not looked into that question since the converted tests (patch 15/17
in v4) still do not execute without throwing internal errors.

If that conversion occurred, I would also avoid the ASSERTs.
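
To be concrete, a sketch of one check before and after such a
conversion (the particular KUNIT_EXPECT_*() variant here is a
stand-in, since I have not verified what is available):

	/* current style in drivers/of/unittest.c */
	np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
	unittest(np, "missing testcase data\n");

	/* hypothetical KUnit style */
	np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
	KUNIT_EXPECT_NOT_ERR_OR_NULL(test, np);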

> 
> < snip >
> 
> Cheers!
> 

^ permalink raw reply	[flat|nested] 232+ messages in thread

* [RFC v3 00/19] kunit: introduce KUnit, the Linux kernel unit testing framework
  2019-03-22  0:27       ` frowand.list
  2019-03-22  0:27         ` Frank Rowand
@ 2019-03-25 22:04         ` brendanhiggins
  2019-03-25 22:04           ` Brendan Higgins
  1 sibling, 1 reply; 232+ messages in thread
From: brendanhiggins @ 2019-03-25 22:04 UTC (permalink / raw)


On Thu, Mar 21, 2019 at 5:28 PM Frank Rowand <frowand.list at gmail.com> wrote:
>
> On 12/5/18 3:10 PM, Brendan Higgins wrote:
> > On Tue, Dec 4, 2018 at 5:49 AM Rob Herring <robh at kernel.org> wrote:
> >>
> >> On Tue, Dec 4, 2018 at 5:40 AM Frank Rowand <frowand.list at gmail.com> wrote:
> >>>
> >>> Hi Brendan, Rob,
> >>>
> >>> Pulling a comment from way back in the v1 patch thread:
> >>>
> >>> On 10/17/18 3:22 PM, Brendan Higgins wrote:
> >>>> On Wed, Oct 17, 2018 at 10:49 AM <Tim.Bird at sony.com> wrote:
> >>>
> >>> < snip >
> >>>
> >>>> The test and the code under test are linked together in the same
> >>>> binary and are compiled under Kbuild. Right now I am linking
> >>>> everything into a UML kernel, but I would ultimately like to make
> >>>> tests compile into completely independent test binaries. So each test
> >>>> file would get compiled into its own test binary and would link
> >>>> against only the code needed to run the test, but we are a bit of a
> >>>> ways off from that.
> >>>
> >>> I have never used UML, so you should expect naive questions from me,
> >>> exhibiting my lack of understanding.
> >>>
> >>> Does this mean that I have to build a UML architecture kernel to run
> >>> the KUnit tests?
> >>
> >> In this version of the patch series, yes.
> >>
> >>> *** Rob, if the answer is yes, then it seems like for my workflow,
> >>> which is to build for real ARM hardware, my work is doubled (or
> >>> worse), because for every patch/commit that I apply, I not only have
> >>> to build the ARM kernel and boot on the real hardware to test, I also
> >>> have to build the UML kernel and boot in UML.  If that is correct
> >>> then I see this as a major problem for me.
> >>
> >> I've already raised this issue elsewhere in the series. Restricting
> >> the DT tests to UML is a non-starter.
> >
>
> > I have already stated my position elsewhere on the matter, but in
> > summary: Ensuring most tests can run without external dependencies
> > (hardware, VM, etc) has a lot of benefits and should be supported in
> > nearly all cases, but such tests should also work when compiled to run
> > on real hardware/VM; the tooling might not be as good in the latter
> > case, but I understand that there are good reasons to support it
> > nonetheless.
>
> And my needs are the exact opposite.  My tests must run on real hardware,
> in the context of the real operating system subsystems and drivers
> potentially causing issues.

Right, Rob pointed this out, and I fixed this in v4. To be clear, as
of RFC v4 you can run KUnit tests on non-UML architectures, we tested
it on x86 and ARM.

>
> It is useful if the tests can also run without that dependency.

This, of course, is still the main intended use case, but there is
nothing to stop you from using it on real hardware.
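
For reference, the hermetic flow with the current patches is roughly:

	./tools/testing/kunit/kunit.py run

while on real hardware you just enable CONFIG_KUNIT (plus whichever
test configs you want) in your normal defconfig, build and boot as
usual, and read the results out of the kernel log.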

^ permalink raw reply	[flat|nested] 232+ messages in thread

* [RFC v3 18/19] of: unittest: split out a couple of test cases from unittest
  2019-03-22  1:47                         ` frowand.list
  2019-03-22  1:47                           ` Frank Rowand
@ 2019-03-25 22:15                           ` brendanhiggins
  2019-03-25 22:15                             ` Brendan Higgins
  1 sibling, 1 reply; 232+ messages in thread
From: brendanhiggins @ 2019-03-25 22:15 UTC (permalink / raw)


On Thu, Mar 21, 2019 at 6:47 PM Frank Rowand <frowand.list at gmail.com> wrote:
>
> On 3/21/19 6:30 PM, Brendan Higgins wrote:
> > On Thu, Mar 21, 2019 at 5:22 PM Frank Rowand <frowand.list at gmail.com> wrote:
> >>
> >> On 2/27/19 7:52 PM, Brendan Higgins wrote:
>
> < snip >  but thanks for the comments in the snipped section.
>
>
> >>
> >> Thanks for leaving 18/19 and 19/19 off in v4.
> >
> > Sure, no problem. It was pretty clear that it was a waste of both of
> > our times to continue discussing those at this juncture. :-)
> >
> > Do you still want me to try to convert the DT not-exactly-unittest to
> > KUnit? I would kind of prefer (I don't feel *super* strongly about the
> > matter) we don't call it that since I was intending for it to be the
> > flagship initial example, but I certainly don't mind trying to clean
> > this patch up to get it up to snuff. It's really just a question of
> > whether it is worth it to you.
>
> In the long term, if KUnit is adopted by the kernel, then I think it
> probably makes sense for devicetree unittest to convert from using
> our own unittest() function to report an individual test pass/fail
> to instead use something like KUNIT_EXPECT_*() to provide more
> consistent test messages to test frameworks.  That is assuming
> KUNIT_EXPECT_*() provides comparable functionality.  I still have
> not looked into that question since the converted tests (patch 15/17
> in v4) still do not execute without throwing internal errors.

Sounds good.

>
> If that conversion occurred, I would also avoid the ASSERTs.

Noted.

^ permalink raw reply	[flat|nested] 232+ messages in thread

* [RFC v3 18/19] of: unittest: split out a couple of test cases from unittest
  2019-03-22  1:34                       ` frowand.list
  2019-03-22  1:34                         ` Frank Rowand
@ 2019-03-25 22:18                         ` brendanhiggins
  2019-03-25 22:18                           ` Brendan Higgins
  1 sibling, 1 reply; 232+ messages in thread
From: brendanhiggins @ 2019-03-25 22:18 UTC (permalink / raw)


On Thu, Mar 21, 2019 at 6:34 PM Frank Rowand <frowand.list at gmail.com> wrote:
>
> On 3/21/19 5:22 PM, Frank Rowand wrote:
> > On 2/27/19 7:52 PM, Brendan Higgins wrote:
>
> < snip >
>
> >> Now I know that, hermeticity especially, but other features as well
> >> (test suite summary, error on unused test case function, etc) are not
> >> actually in KUnit as it is under consideration here. Maybe it would be
> >> best to save these last two patches (18/19, and 19/19) until I have
> >> these other features checked in and reconsider them then?
> >
> > Thanks for leaving 18/19 and 19/19 off in v4.
>
> Oops, they got into v4 but as 16/17 and 17/17, I think.  But it sounds
> like you are planning to leave them out of v5.

Oh, I thought you meant v5 when you were thanking me. In any case, to
confirm, I will be leaving off 16/17 and 17/17 in the next version.

^ permalink raw reply	[flat|nested] 232+ messages in thread

* Re: [RFC v3 18/19] of: unittest: split out a couple of test cases from unittest
  2019-03-22  1:30                       ` brendanhiggins
  2019-03-22  1:30                         ` Brendan Higgins
  2019-03-22  1:47                         ` frowand.list
@ 2019-09-20 16:57                         ` Rob Herring
  2019-09-21 23:57                           ` Frank Rowand
  2 siblings, 1 reply; 232+ messages in thread
From: Rob Herring @ 2019-09-20 16:57 UTC (permalink / raw)
  To: Brendan Higgins, Frank Rowand
  Cc: Greg KH, Kees Cook, Luis Chamberlain, shuah, Joel Stanley,
	Michael Ellerman, Joe Perches, brakmo, Steven Rostedt, Bird,
	Timothy, Kevin Hilman, Julia Lawall, linux-kselftest, kunit-dev,
	Linux Kernel Mailing List, Jeff Dike, Richard Weinberger,
	linux-um, Daniel Vetter, dri-devel, Dan Williams, linux-nvdimm,
	Kieran Bingham, Knut Omang

Following up from LPC discussions...

On Thu, Mar 21, 2019 at 8:30 PM Brendan Higgins
<brendanhiggins@google.com> wrote:
>
> On Thu, Mar 21, 2019 at 5:22 PM Frank Rowand <frowand.list@gmail.com> wrote:
> >
> > On 2/27/19 7:52 PM, Brendan Higgins wrote:
> > > On Wed, Feb 20, 2019 at 12:45 PM Frank Rowand <frowand.list@gmail.com> wrote:
> > >>
> > >> On 2/18/19 2:25 PM, Frank Rowand wrote:
> > >>> On 2/15/19 2:56 AM, Brendan Higgins wrote:
> > >>>> On Thu, Feb 14, 2019 at 6:05 PM Frank Rowand <frowand.list@gmail.com> wrote:
> > >>>>>
> > >>>>> On 2/14/19 4:56 PM, Brendan Higgins wrote:
> > >>>>>> On Thu, Feb 14, 2019 at 3:57 PM Frank Rowand <frowand.list@gmail.com> wrote:
> > >>>>>>>
> > >>>>>>> On 12/5/18 3:54 PM, Brendan Higgins wrote:
> > >>>>>>>> On Tue, Dec 4, 2018 at 2:58 AM Frank Rowand <frowand.list@gmail.com> wrote:
> < snip >
> > >>
> > >> In the base version, the order of execution of the test code requires
> > >> bouncing back and forth between the test functions and the coding of
> > >> of_test_find_node_by_name_cases[].
> > >
> > > You shouldn't need to bounce back and forth because the order in which
> > > the tests run shouldn't matter.
> >
> > If one can't guarantee total independence of all of the tests, with no
> > side effects, then yes.  But that is not my world.  To make that
> > guarantee, I would need to be able to run just a single test in an
> > entire test run.
> >
> > I actually want to make side effects possible.  Whether from other
> > tests or from live kernel code that is accessing the live devicetree.
> > Any extra stress makes me happier.
> >
> > I forget the exact term that has been tossed around, but to me the
> > devicetree unittests are more like system validation, release tests,
> > acceptance tests, and stress tests.  Not unit tests in the philosophy
> > of KUnit.
>
> Ah, I understand. I thought that they were actually trying to be unit
> tests; that pretty much voids this discussion then. Integration tests
> and end to end tests are valuable as long as that is actually what you
> are trying to do.

There's a mixture. There's a whole bunch of tests that are basically
just testing various DT APIs and use a static DT. Those are all unit
tests IMO.

Then there's all the overlay tests Frank has added. I guess some of
those are not unittests in the strictest sense. Regardless, if we're
reporting test results, we should align our reporting with what will
become the rest of the kernel.

> > I do see the value of pure unit tests, and there are rare times that
> > my devicetree use case might be better served by that approach.  But
> > if so, it is very easy for me to add a simple pure test when debugging.
> > My general use case does not map onto this model.
>
> Why do you think it is rare that you would actually want unit tests?

I don't. We should have a unittest (or multiple) for every single DT
API call and that should be a requirement to add any new APIs.
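
Each such test can be tiny; something like this per API (the case name
is invented, the API and macro are real):

static void of_unittest_find_node_by_name(struct kunit *test)
{
	struct device_node *np;

	np = of_find_node_by_name(NULL, "testcase-data");
	KUNIT_EXPECT_NOT_ERR_OR_NULL(test, np);
	of_node_put(np);
}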

> I mean, if you don't get much code churn, then maybe it's not going to
> provide you a ton of value to immediately go and write a bunch of unit
> tests right now, but I can't think of a single time where it's hurt.
> Unit tests, from my experience, are usually the easiest tests to
> maintain, and the most helpful when I am developing.
>
> Maybe I need to understand your use case better.
>
> >
> >
> > >>
> > >> In the frank version the order of execution of the test code is obvious.
> > >
> > > So I know we were arguing before over whether order *does* matter in
> > > some of the other test cases (none in the example that you or I
> > > posted), but wouldn't it be better if the order of execution didn't
> > > matter? If you don't allow a user to depend on the execution of test
> > > cases, then arguably these test case dependencies would never form and
> > > the order wouldn't matter.
> >
> > Reality intrudes.  Order does matter.
> >
> >
> > >>
> > >> It is possible that a test function could be left out of
> > >> of_test_find_node_by_name_cases[], in error.  This will result in a compile
> > >> warning (I think warning instead of error, but I have not verified that)
> > >> so it might be caught or it might be overlooked.
> > >>
> > >> The base version is 265 lines.  The frank version is 208 lines, 57 lines
> > >> less.  Less is better.
> > >
> > > I agree that less is better, but there are different kinds of less to
> > > consider. I prefer less logic in a function to fewer lines overall.
> > >
> > > It seems we are in agreement that test cases should be small and
> > > simple, so I won't dwell on that point any longer. I agree that the
> >
> > As a general guide for simple unit tests, sure.
> >
> > For my case, no.  Reality intrudes.
> >
> > KUnit has a nice architectural view of what a unit test should be.
>
> Cool, I am glad you think so! That actually means a lot to me. I was
> afraid I wasn't conveying the idea properly and that was the root of
> this debate.
>
> >
> > The existing devicetree "unittests" are not such unit tests.  They
> > simply share the same name.
> >
> > The devicetree unittests do not fit into a clean:
> >   - initialize
> >   - do one test
> >   - clean up
> > model.

Initialize being static and clean-up being NULL still fits into this model.
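
i.e. something like the following sketch (field names follow the kunit
patches; the of_unittest_cases array is assumed to exist):

static struct kunit_suite of_unittest_suite = {
	.name = "of-unittest",
	/* .init/.exit left NULL: DT built once at boot, no per-case teardown */
	.test_cases = of_unittest_cases,
};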

> > Trying to force them into that model will not work.  The initialize
> > is not a simple, easy to decompose thing.  And trying to decompose
> > it can actually make the code more complex and messier.
> >
> > Clean up can NOT occur, because part of my test validation is looking
> > at the state of the device tree after the tests complete, viewed
> > through the /proc/device-tree/ interface.

Well, that's pretty ugly to have the test in the kernel and the
validation in userspace. I can see why you do, but that seems like a
problem in how those tests are defined and run.

> Again, if they are not actually intended to be unit tests, then I
> think that is fine.
>
> < snip >
>
> > > Compare the test cases for adding of_test_dynamic_basic,
> > > of_test_dynamic_add_existing_property,
> > > of_test_dynamic_modify_existing_property, and
> > > of_test_dynamic_modify_non_existent_property to the originals. My
> > > version is much longer overall, but I think is still much easier to
> > > understand. I can say from when I was trying to split this up in the
> > > first place, it was not obvious what properties were expected to be
> > > populated as a precondition for a given test case (except the first
> > > one of course). Whereas, in my version, it is immediately obvious what
> > > the preconditions are for a test case. I think you can apply this same
> > > logic to the examples you provided: in the frank version, I don't
> > > immediately know if one test case does something that is a
> > > precondition for another test case.
> >
> > Yes, that is a real problem in the current code, but easily fixed
> > with comments.
>
> I think it is best when you don't need comments, but in this case, I
> think I have to agree with you.
>
> >
> >
> > > My version also makes it easier to run a test case entirely by itself
> > > which is really valuable for debugging purposes. A common thing that
> > > happens when you have lots of unit tests is something breaks and lots
> > > of tests fail. If the test cases are good, there should be just a
> > > couple (ideally one) test cases that directly assert the violated
> > > property; those are the test cases you actually want to focus on, the
> > > rest are noise for the purposes of that breakage. In my version, it is
> > > much easier to turn off the test cases that you don't care about and
> > > then focus in on the ones that exercise the violated property.
> > >
> > > Now I know that, hermeticity especially, but other features as well
> > > (test suite summary, error on unused test case function, etc) are not
> > > actually in KUnit as it is under consideration here. Maybe it would be
> > > best to save these last two patches (18/19, and 19/19) until I have
> > > these other features checked in and reconsider them then?
> >
> > Thanks for leaving 18/19 and 19/19 off in v4.
>
> Sure, no problem. It was pretty clear that it was a waste of both of
> our times to continue discussing those at this juncture. :-)
>
> Do you still want me to try to convert the DT not-exactly-unittest to
> KUnit? I would kind of prefer (I don't feel *super* strongly about the
> matter) we don't call it that since I was intending for it to be the
> flagship initial example, but I certainly don't mind trying to clean
> this patch up to get it up to snuff. It's really just a question of
> whether it is worth it to you.

I still want to see this happen at least for the parts that are
clearly unit tests. And for the parts that aren't, Frank should move
them out of of/unittest.c.

So how to move forward? Convert tests one by one? Take a first swag at
which ones are unit tests and which aren't?

Brendan, do you still have DT unittest patches that work with current kunit?

Rob

^ permalink raw reply	[flat|nested] 232+ messages in thread

* Re: [RFC v3 18/19] of: unittest: split out a couple of test cases from unittest
  2019-09-20 16:57                         ` Rob Herring
@ 2019-09-21 23:57                           ` Frank Rowand
  0 siblings, 0 replies; 232+ messages in thread
From: Frank Rowand @ 2019-09-21 23:57 UTC (permalink / raw)
  To: Rob Herring, Brendan Higgins
  Cc: Greg KH, Kees Cook, Luis Chamberlain, shuah, Joel Stanley,
	Michael Ellerman, Joe Perches, brakmo, Steven Rostedt, Bird,
	Timothy, Kevin Hilman, Julia Lawall, linux-kselftest, kunit-dev,
	Linux Kernel Mailing List, Jeff Dike, Richard Weinberger,
	linux-um, Daniel Vetter, dri-devel, Dan Williams, linux-nvdimm,
	Kieran Bingham, Knut Omang

On 9/20/19 9:57 AM, Rob Herring wrote:
> Following up from LPC discussions...
> 
> On Thu, Mar 21, 2019 at 8:30 PM Brendan Higgins
> <brendanhiggins@google.com> wrote:
>>
>> On Thu, Mar 21, 2019 at 5:22 PM Frank Rowand <frowand.list@gmail.com> wrote:
>>>
>>> On 2/27/19 7:52 PM, Brendan Higgins wrote:
>>>> On Wed, Feb 20, 2019 at 12:45 PM Frank Rowand <frowand.list@gmail.com> wrote:
>>>>>
>>>>> On 2/18/19 2:25 PM, Frank Rowand wrote:
>>>>>> On 2/15/19 2:56 AM, Brendan Higgins wrote:
>>>>>>> On Thu, Feb 14, 2019 at 6:05 PM Frank Rowand <frowand.list@gmail.com> wrote:
>>>>>>>>
>>>>>>>> On 2/14/19 4:56 PM, Brendan Higgins wrote:
>>>>>>>>> On Thu, Feb 14, 2019 at 3:57 PM Frank Rowand <frowand.list@gmail.com> wrote:
>>>>>>>>>>
>>>>>>>>>> On 12/5/18 3:54 PM, Brendan Higgins wrote:
>>>>>>>>>>> On Tue, Dec 4, 2018 at 2:58 AM Frank Rowand <frowand.list@gmail.com> wrote:
>> < snip >
>>>>>
>>>>> In the base version, the order of execution of the test code requires
>>>>> bouncing back and forth between the test functions and the coding of
>>>>> of_test_find_node_by_name_cases[].
>>>>
>>>> You shouldn't need to bounce back and forth because the order in which
>>>> the tests run shouldn't matter.
>>>
>>> If one can't guarantee total independence of all of the tests, with no
>>> side effects, then yes.  But that is not my world.  To make that
>>> guarantee, I would need to be able to run just a single test in an
>>> entire test run.
>>>
>>> I actually want to make side effects possible.  Whether from other
>>> tests or from live kernel code that is accessing the live devicetree.
>>> Any extra stress makes me happier.
>>>
>>> I forget the exact term that has been tossed around, but to me the
>>> devicetree unittests are more like system validation, release tests,
>>> acceptance tests, and stress tests.  Not unit tests in the philosophy
>>> of KUnit.
>>
>> Ah, I understand. I thought that they were actually trying to be unit
>> tests; that pretty much voids this discussion then. Integration tests
>> and end to end tests are valuable as long as that is actually what you
>> are trying to do.
> 
> There's a mixture. There's a whole bunch of tests that are basically
> just testing various DT APIs and use a static DT. Those are all unit
> tests IMO.
> 
> Then there's all the overlay tests Frank has added. I guess some of
> those are not unittests in the strictest sense. Regardless, if we're
> reporting test results, we should align our reporting with what will
> become the rest of the kernel.

The last time I talked to you at LPC, I was still resisting moving the
DT unittests to the kunit framework.  But I think I am on board now.

Brendan agreed to accept a kunit patch from me (when I write it) to enable
the DT unittests to report # of tests run and # of tests passed/failed,
as is currently the case.
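
(Today that reporting is a single summary line at the end of the run,
roughly:

  ### dt-test ### end of unittest - N passed, M failed

with N and M being the actual counts; whatever KUnit emits needs to be
at least that easy for a test harness to scrape.)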

Brendan also agreed that null initialization and clean up would be ok
for the DT unittests, but that he does not want that model to be
frequently used.  (You mention this idea later in the email I am
replying to.)


> 
>>> I do see the value of pure unit tests, and there are rare times that
>>> my devicetree use case might be better served by that approach.  But
>>> if so, it is very easy for me to add a simple pure test when debugging.
>>> My general use case does not map onto this model.
>>
>> Why do you think it is rare that you would actually want unit tests?
> 
> I don't. We should have a unittest (or multiple) for every single DT
> API call and that should be a requirement to add any new APIs.
> 
>> I mean, if you don't get much code churn, then maybe it's not going to
>> provide you a ton of value to immediately go and write a bunch of unit
>> tests right now, but I can't think of a single time where it's hurt.
>> Unit tests, from my experience, are usually the easiest tests to
>> maintain, and the most helpful when I am developing.
>>
>> Maybe I need to understand your use case better.
>>
>>>
>>>
>>>>>
>>>>> In the frank version the order of execution of the test code is obvious.
>>>>
>>>> So I know we were arguing before over whether order *does* matter in
>>>> some of the other test cases (none in the example that you or I
>>>> posted), but wouldn't it be better if the order of execution didn't
>>>> matter? If you don't allow a user to depend on the execution of test
>>>> cases, then arguably these test case dependencies would never form and
>>>> the order wouldn't matter.
>>>
>>> Reality intrudes.  Order does matter.
>>>
>>>
>>>>>
>>>>> It is possible that a test function could be left out of
>>>>> of_test_find_node_by_name_cases[], in error.  This will result in a compile
>>>>> warning (I think warning instead of error, but I have not verified that)
>>>>> so it might be caught or it might be overlooked.
>>>>>
>>>>> The base version is 265 lines.  The frank version is 208 lines, 57 lines
>>>>> less.  Less is better.
>>>>
>>>> I agree that less is better, but there are different kinds of less to
>>>> consider. I prefer less logic in a function to fewer lines overall.
>>>>
>>>> It seems we are in agreement that test cases should be small and
>>>> simple, so I won't dwell on that point any longer. I agree that the
>>>
>>> As a general guide for simple unit tests, sure.
>>>
>>> For my case, no.  Reality intrudes.
>>>
>>> KUnit has a nice architectural view of what a unit test should be.
>>
>> Cool, I am glad you think so! That actually means a lot to me. I was
>> afraid I wasn't conveying the idea properly and that was the root of
>> this debate.
>>
>>>
>>> The existing devicetree "unittests" are not such unit tests.  They
>>> simply share the same name.
>>>
>>> The devicetree unittests do not fit into a clean:
>>>   - initialize
>>>   - do one test
>>>   - clean up
>>> model.
> 
> Initialize being static and clean-up being NULL still fits into this model.
> 
>>> Trying to force them into that model will not work.  The initialize
>>> is not a simple, easy to decompose thing.  And trying to decompose
>>> it can actually make the code more complex and messier.
>>>
>>> Clean up can NOT occur, because part of my test validation is looking
>>> at the state of the device tree after the tests complete, viewed
>>> through the /proc/device-tree/ interface.
> 
> Well, that's pretty ugly to have the test in the kernel and the
> validation in userspace. I can see why you do, but that seems like a
> problem in how those tests are defined and run.

Yes it is ugly.  Any good suggestions on a better solution?
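
(For context, the userspace half today is nothing fancier than poking
at the exposed tree after the run, e.g.:

	ls /proc/device-tree/testcase-data/

and checking that the nodes and properties the tests should have left
behind are actually there.)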


> 
>> Again, if they are not actually intended to be unit tests, then I
>> think that is fine.
>>
>> < snip >
>>
>>>> Compare the test cases for adding of_test_dynamic_basic,
>>>> of_test_dynamic_add_existing_property,
>>>> of_test_dynamic_modify_existing_property, and
>>>> of_test_dynamic_modify_non_existent_property to the originals. My
>>>> version is much longer overall, but I think is still much easier to
>>>> understand. I can say from when I was trying to split this up in the
>>>> first place, it was not obvious what properties were expected to be
>>>> populated as a precondition for a given test case (except the first
>>>> one of course). Whereas, in my version, it is immediately obvious what
>>>> the preconditions are for a test case. I think you can apply this same
>>>> logic to the examples you provided: in the frank version, I don't
>>>> immediately know if one test case does something that is a
>>>> precondition for another test case.
>>>
>>> Yes, that is a real problem in the current code, but easily fixed
>>> with comments.
>>
>> I think it is best when you don't need comments, but in this case, I
>> think I have to agree with you.
>>
>>>
>>>
>>>> My version also makes it easier to run a test case entirely by itself
>>>> which is really valuable for debugging purposes. A common thing that
>>>> happens when you have lots of unit tests is something breaks and lots
>>>> of tests fail. If the test cases are good, there should be just a
>>>> couple (ideally one) test cases that directly assert the violated
>>>> property; those are the test cases you actually want to focus on, the
>>>> rest are noise for the purposes of that breakage. In my version, it is
>>>> much easier to turn off the test cases that you don't care about and
>>>> then focus in on the ones that exercise the violated property.
>>>>
>>>> Now I know that, hermeticity especially, but other features as well
>>>> (test suite summary, error on unused test case function, etc) are not
>>>> actually in KUnit as it is under consideration here. Maybe it would be
>>>> best to save these last two patches (18/19, and 19/19) until I have
>>>> these other features checked in and reconsider them then?
>>>
>>> Thanks for leaving 18/19 and 19/19 off in v4.
>>
>> Sure, no problem. It was pretty clear that it was a waste of both of
>> our times to continue discussing those at this juncture. :-)
>>
>> Do you still want me to try to convert the DT not-exactly-unittest to
>> KUnit? I would kind of prefer (I don't feel *super* strongly about the
>> matter) we don't call it that since I was intending for it to be the
>> flagship initial example, but I certainly don't mind trying to clean
>> this patch up to get it up to snuff. It's really just a question of
>> whether it is worth it to you.
> 
> I still want to see this happen at least for the parts that are
> clearly unit tests. And for the parts that aren't, Frank should move
> them out of of/unittest.c.
> 
> So how to move forward? Convert tests one by one? Take a first swag at
> which ones are unit tests and which aren't?

By the end of LPC I decided that I would go ahead and take a shot at
modifying the DT unittests to use the kunit framework.  So I am
volunteering myself to do the work.  This has the additional benefit of
testing whether someone like me can successfully follow the
documentation.

It looked like there was a good chance that kunit would get accepted in
the merge window, so I was going to do the DT unittest work based on
-rc1.  I see that Brendan has submitted a couple more versions after
the merge window opened.  I think that kunit will be stable enough
after -rc1 (even if not yet merged by Linus) that I will be comfortable
doing the work on top of the latest patch series at the close of the
merge window.


> 
> Brendan, do you still have DT unittest patches that work with current kunit?

I never saw a version of the DT unittest patches that worked.  I was watching
and waiting before doing a serious review of the DT unittest portion of the
kunit patch series.

-Frank

> 
> Rob
> 


^ permalink raw reply	[flat|nested] 232+ messages in thread

end of thread, other threads:[~2019-09-21 23:57 UTC | newest]

Thread overview: 232+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-11-28 19:36 [RFC v3 00/19] kunit: introduce KUnit, the Linux kernel unit testing framework brendanhiggins
2018-11-28 19:36 ` Brendan Higgins
2018-11-28 19:36 ` [RFC v3 01/19] kunit: test: add KUnit test runner core brendanhiggins
2018-11-28 19:36   ` Brendan Higgins
2018-11-30  3:14   ` mcgrof
2018-11-30  3:14     ` Luis Chamberlain
2018-12-01  1:51     ` brendanhiggins
2018-12-01  1:51       ` Brendan Higgins
2018-12-01  2:57       ` mcgrof
2018-12-01  2:57         ` Luis Chamberlain
2018-12-05 13:15     ` anton.ivanov
2018-12-05 13:15       ` Anton Ivanov
2018-12-05 14:45       ` arnd
2018-12-05 14:45         ` Arnd Bergmann
2018-12-05 14:49         ` anton.ivanov
2018-12-05 14:49           ` Anton Ivanov
2018-11-30  3:28   ` mcgrof
2018-11-30  3:28     ` Luis Chamberlain
2018-12-01  2:08     ` brendanhiggins
2018-12-01  2:08       ` Brendan Higgins
2018-12-01  3:10       ` mcgrof
2018-12-01  3:10         ` Luis Chamberlain
2018-12-03 22:47         ` brendanhiggins
2018-12-03 22:47           ` Brendan Higgins
2018-12-01  3:02   ` mcgrof
2018-12-01  3:02     ` Luis Chamberlain
2018-11-28 19:36 ` [RFC v3 02/19] kunit: test: add test resource management API brendanhiggins
2018-11-28 19:36   ` Brendan Higgins
2018-11-28 19:36 ` [RFC v3 03/19] kunit: test: add string_stream a std::stream like string builder brendanhiggins
2018-11-28 19:36   ` Brendan Higgins
2018-11-30  3:29   ` mcgrof
2018-11-30  3:29     ` Luis Chamberlain
2018-12-01  2:14     ` brendanhiggins
2018-12-01  2:14       ` Brendan Higgins
2018-12-01  3:12       ` mcgrof
2018-12-01  3:12         ` Luis Chamberlain
2018-12-03 10:55     ` pmladek
2018-12-03 10:55       ` Petr Mladek
2018-12-04  0:35       ` brendanhiggins
2018-12-04  0:35         ` Brendan Higgins
2018-11-28 19:36 ` [RFC v3 04/19] kunit: test: add test_stream a std::stream like logger brendanhiggins
2018-11-28 19:36   ` Brendan Higgins
2018-11-28 19:36 ` [RFC v3 05/19] kunit: test: add the concept of expectations brendanhiggins
2018-11-28 19:36   ` Brendan Higgins
2018-11-28 19:36 ` [RFC v3 06/19] arch: um: enable running kunit from User Mode Linux brendanhiggins
2018-11-28 19:36   ` Brendan Higgins
2018-11-28 21:26   ` robh
2018-11-28 21:26     ` Rob Herring
2018-11-30  3:37     ` mcgrof
2018-11-30  3:37       ` Luis Chamberlain
2018-11-30 14:05       ` robh
2018-11-30 14:05         ` Rob Herring
2018-11-30 18:22         ` mcgrof
2018-11-30 18:22           ` Luis Chamberlain
2018-12-03 23:22           ` brendanhiggins
2018-12-03 23:22             ` Brendan Higgins
2018-11-30  3:30   ` mcgrof
2018-11-30  3:30     ` Luis Chamberlain
2018-11-28 19:36 ` [RFC v3 07/19] kunit: test: add initial tests brendanhiggins
2018-11-28 19:36   ` Brendan Higgins
2018-11-30  3:40   ` mcgrof
2018-11-30  3:40     ` Luis Chamberlain
2018-12-03 23:26     ` brendanhiggins
2018-12-03 23:26       ` Brendan Higgins
2018-12-03 23:43       ` mcgrof
2018-12-03 23:43         ` Luis Chamberlain
2018-11-28 19:36 ` [RFC v3 08/19] arch: um: add shim to trap to allow installing a fault catcher for tests brendanhiggins
2018-11-28 19:36   ` Brendan Higgins
2018-11-30  3:34   ` mcgrof
2018-11-30  3:34     ` Luis Chamberlain
2018-12-03 23:34     ` brendanhiggins
2018-12-03 23:34       ` Brendan Higgins
2018-12-03 23:46       ` mcgrof
2018-12-03 23:46         ` Luis Chamberlain
2018-12-04  0:44         ` brendanhiggins
2018-12-04  0:44           ` Brendan Higgins
2018-11-30  3:41   ` mcgrof
2018-11-30  3:41     ` Luis Chamberlain
2018-12-03 23:37     ` brendanhiggins
2018-12-03 23:37       ` Brendan Higgins
2018-11-28 19:36 ` [RFC v3 09/19] kunit: test: add the concept of assertions brendanhiggins
2018-11-28 19:36   ` Brendan Higgins
2018-11-28 19:36 ` [RFC v3 10/19] kunit: test: add test managed resource tests brendanhiggins
2018-11-28 19:36   ` Brendan Higgins
2018-11-28 19:36 ` [RFC v3 11/19] kunit: add Python libraries for handing KUnit config and kernel brendanhiggins
2018-11-28 19:36   ` Brendan Higgins
2018-11-29 13:54   ` kieran.bingham
2018-11-29 13:54     ` Kieran Bingham
2018-12-03 23:48     ` brendanhiggins
2018-12-03 23:48       ` Brendan Higgins
2018-12-04 20:47       ` mcgrof
2018-12-04 20:47         ` Luis Chamberlain
2018-12-06 12:32         ` kieran.bingham
2018-12-06 12:32           ` Kieran Bingham
2018-12-06 15:37           ` willy
2018-12-06 15:37             ` Matthew Wilcox
2018-12-07 11:30             ` kieran.bingham
2018-12-07 11:30               ` Kieran Bingham
2018-12-11 14:09             ` pmladek
2018-12-11 14:09               ` Petr Mladek
2018-12-11 14:41               ` rostedt
2018-12-11 14:41                 ` Steven Rostedt
2018-12-11 17:01                 ` anton.ivanov
2018-12-11 17:01                   ` Anton Ivanov
2019-02-09  0:40                   ` brendanhiggins
2019-02-09  0:40                     ` Brendan Higgins
2018-12-07  1:05           ` mcgrof
2018-12-07  1:05             ` Luis Chamberlain
2018-12-07 18:35           ` kent.overstreet
2018-12-07 18:35             ` Kent Overstreet
2018-11-30  3:44   ` mcgrof
2018-11-30  3:44     ` Luis Chamberlain
2018-12-03 23:50     ` brendanhiggins
2018-12-03 23:50       ` Brendan Higgins
2018-12-04 20:48       ` mcgrof
2018-12-04 20:48         ` Luis Chamberlain
2018-11-28 19:36 ` [RFC v3 12/19] kunit: add KUnit wrapper script and simple output parser brendanhiggins
2018-11-28 19:36   ` Brendan Higgins
2018-11-28 19:36 ` [RFC v3 13/19] kunit: improve output from python wrapper brendanhiggins
2018-11-28 19:36   ` Brendan Higgins
2018-11-28 19:36 ` [RFC v3 14/19] Documentation: kunit: add documentation for KUnit brendanhiggins
2018-11-28 19:36   ` Brendan Higgins
2018-11-29 13:56   ` kieran.bingham
2018-11-29 13:56     ` Kieran Bingham
2018-11-30  3:45     ` mcgrof
2018-11-30  3:45       ` Luis Chamberlain
2018-12-03 23:53       ` brendanhiggins
2018-12-03 23:53         ` Brendan Higgins
2018-12-06 12:16         ` kieran.bingham
2018-12-06 12:16           ` Kieran Bingham
2019-02-09  0:56           ` brendanhiggins
2019-02-09  0:56             ` Brendan Higgins
2019-02-11 12:16             ` kieran.bingham
2019-02-11 12:16               ` Kieran Bingham
2019-02-12 22:10               ` brendanhiggins
2019-02-12 22:10                 ` Brendan Higgins
2019-02-13 21:55                 ` kieran.bingham
2019-02-13 21:55                   ` Kieran Bingham
2019-02-14  0:17                   ` brendanhiggins
2019-02-14  0:17                     ` Brendan Higgins
2019-02-14 17:26                     ` mcgrof
2019-02-14 17:26                       ` Luis Chamberlain
2019-02-14 22:07                       ` brendanhiggins
2019-02-14 22:07                         ` Brendan Higgins
2018-11-28 19:36 ` [RFC v3 15/19] MAINTAINERS: add entry for KUnit the unit testing framework brendanhiggins
2018-11-28 19:36   ` Brendan Higgins
2018-11-28 19:36 ` [RFC v3 16/19] arch: um: make UML unflatten device tree when testing brendanhiggins
2018-11-28 19:36   ` Brendan Higgins
2018-11-28 21:16   ` robh
2018-11-28 21:16     ` Rob Herring
2018-12-04  0:00     ` brendanhiggins
2018-12-04  0:00       ` Brendan Higgins
2018-11-30  3:46   ` mcgrof
2018-11-30  3:46     ` Luis Chamberlain
2018-12-04  0:02     ` brendanhiggins
2018-12-04  0:02       ` Brendan Higgins
2018-11-28 19:36 ` [RFC v3 17/19] of: unittest: migrate tests to run on KUnit brendanhiggins
2018-11-28 19:36   ` Brendan Higgins
     [not found]   ` <CAL_Jsq+09Kx7yMBC_Jw45QGmk6U_fp4N6HOZDwYrM4tWw+_dOA@mail.gmail.com>
2018-11-30  0:39     ` rdunlap
2018-11-30  0:39       ` Randy Dunlap
2018-12-04  0:13       ` brendanhiggins
2018-12-04  0:13         ` Brendan Higgins
2018-12-04 13:40         ` robh
2018-12-04 13:40           ` Rob Herring
2018-12-05 23:42           ` brendanhiggins
2018-12-05 23:42             ` Brendan Higgins
2018-12-07  0:41             ` robh
2018-12-07  0:41               ` Rob Herring
2018-12-04  0:08     ` brendanhiggins
2018-12-04  0:08       ` Brendan Higgins
2019-02-13  1:44     ` brendanhiggins
2019-02-13  1:44       ` Brendan Higgins
2019-02-14 20:10       ` robh
2019-02-14 20:10         ` Rob Herring
2019-02-14 21:52         ` brendanhiggins
2019-02-14 21:52           ` Brendan Higgins
2019-02-18 22:56       ` frowand.list
2019-02-18 22:56         ` Frank Rowand
2019-02-28  0:29         ` brendanhiggins
2019-02-28  0:29           ` Brendan Higgins
2018-12-04 10:56   ` frowand.list
2018-12-04 10:56     ` Frank Rowand
2018-11-28 19:36 ` [RFC v3 18/19] of: unittest: split out a couple of test cases from unittest brendanhiggins
2018-11-28 19:36   ` Brendan Higgins
2018-12-04 10:58   ` frowand.list
2018-12-04 10:58     ` Frank Rowand
2018-12-05 23:54     ` brendanhiggins
2018-12-05 23:54       ` Brendan Higgins
2019-02-14 23:57       ` frowand.list
2019-02-14 23:57         ` Frank Rowand
2019-02-15  0:56         ` brendanhiggins
2019-02-15  0:56           ` Brendan Higgins
2019-02-15  2:05           ` frowand.list
2019-02-15  2:05             ` Frank Rowand
2019-02-15 10:56             ` brendanhiggins
2019-02-15 10:56               ` Brendan Higgins
2019-02-18 22:25               ` frowand.list
2019-02-18 22:25                 ` Frank Rowand
2019-02-20 20:44                 ` frowand.list
2019-02-20 20:44                   ` Frank Rowand
2019-02-20 20:47                   ` frowand.list
2019-02-20 20:47                     ` Frank Rowand
2019-02-28  3:52                   ` brendanhiggins
2019-02-28  3:52                     ` Brendan Higgins
2019-03-22  0:22                     ` frowand.list
2019-03-22  0:22                       ` Frank Rowand
2019-03-22  1:30                       ` brendanhiggins
2019-03-22  1:30                         ` Brendan Higgins
2019-03-22  1:47                         ` frowand.list
2019-03-22  1:47                           ` Frank Rowand
2019-03-25 22:15                           ` brendanhiggins
2019-03-25 22:15                             ` Brendan Higgins
2019-09-20 16:57                         ` Rob Herring
2019-09-21 23:57                           ` Frank Rowand
2019-03-22  1:34                       ` frowand.list
2019-03-22  1:34                         ` Frank Rowand
2019-03-25 22:18                         ` brendanhiggins
2019-03-25 22:18                           ` Brendan Higgins
2018-11-28 19:36 ` [RFC v3 19/19] of: unittest: split up some super large test cases brendanhiggins
2018-11-28 19:36   ` Brendan Higgins
2018-12-04 10:52 ` [RFC v3 00/19] kunit: introduce KUnit, the Linux kernel unit testing framework frowand.list
2018-12-04 10:52   ` Frank Rowand
2018-12-04 11:40 ` frowand.list
2018-12-04 11:40   ` Frank Rowand
2018-12-04 13:49   ` robh
2018-12-04 13:49     ` Rob Herring
2018-12-05 23:10     ` brendanhiggins
2018-12-05 23:10       ` Brendan Higgins
2019-03-22  0:27       ` frowand.list
2019-03-22  0:27         ` Frank Rowand
2019-03-25 22:04         ` brendanhiggins
2019-03-25 22:04           ` Brendan Higgins
