* [RFC v4 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework
@ 2019-02-14 21:37 ` brendanhiggins
  0 siblings, 0 replies; 316+ messages in thread
From: Brendan Higgins @ 2019-02-14 21:37 UTC (permalink / raw)
  To: keescook, mcgrof, shuah, robh, kieran.bingham, frowand.list
  Cc: gregkh, joel, mpe, joe, brakmo, rostedt, Tim.Bird, khilman,
	julia.lawall, linux-kselftest, kunit-dev, linux-kernel, jdike,
	richard, linux-um, daniel, dri-devel, dan.j.williams,
	linux-nvdimm, knut.omang, devicetree, pmladek, Alexander.Levin,
	amir73il, dan.carpenter, wfg, Brendan Higgins

This patch set proposes KUnit, a lightweight unit testing and mocking
framework for the Linux kernel.

Unlike Autotest and kselftest, KUnit is a true unit testing framework;
it does not require installing the kernel on a test machine or in a VM
and does not require tests to be written in userspace running on a host
kernel. Additionally, KUnit is fast: from invocation to completion it can
run several dozen tests in under a second. Currently, KUnit's own test
suite runs in under a second from the initial invocation (build time
excluded).

KUnit is heavily inspired by JUnit, Python's unittest.mock, and
Googletest/Googlemock for C++. KUnit provides facilities for defining
unit test cases, grouping related test cases into test suites, providing
common infrastructure for running tests, mocking, spying, and much more.
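
For illustration, here is a minimal sketch of what a test looks like with
the API proposed in this series (the add() function here is hypothetical,
and KUNIT_EXPECT_EQ() is introduced by a later patch in the series):

#include <kunit/test.h>

/* A hypothetical function under test. */
static int add(int a, int b)
{
	return a + b;
}

static void add_test_basic(struct kunit *test)
{
	/* KUNIT_EXPECT_EQ() comes from a later patch in this series. */
	KUNIT_EXPECT_EQ(test, 1, add(1, 0));
	KUNIT_EXPECT_EQ(test, 2, add(1, 1));
}

static struct kunit_case add_test_cases[] = {
	KUNIT_CASE(add_test_basic),
	{},
};

static struct kunit_module add_test_module = {
	.name = "add-test",
	.test_cases = add_test_cases,
};
module_test(add_test_module);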

## What's so special about unit testing?

A unit test is supposed to test a single unit of code in isolation,
hence the name. There should be no dependencies outside the control of
the test; this means no external dependencies, which makes tests orders
of magnitude faster. Likewise, since there are no external dependencies,
there are no hoops to jump through to run the tests. Additionally, this
makes unit tests deterministic: a failing unit test always indicates a
problem. Finally, because unit tests necessarily have finer granularity,
they can easily exercise all code paths, solving the classic problem of
error handling code being difficult to test.
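
As a sketch of that last point (again hypothetical; parse_flag() is made up
and KUNIT_EXPECT_EQ() comes from a later patch in this series), an error
path can be driven directly with crafted inputs rather than by coaxing a
running kernel into a failure state:

#include <kunit/test.h>
#include <linux/errno.h>
#include <linux/string.h>

/* A hypothetical function under test with error paths. */
static int parse_flag(const char *s)
{
	if (!s)
		return -EINVAL;
	if (!strcmp(s, "on"))
		return 1;
	if (!strcmp(s, "off"))
		return 0;
	return -EINVAL;
}

static void parse_flag_error_test(struct kunit *test)
{
	/* Exercise the error handling paths directly. */
	KUNIT_EXPECT_EQ(test, -EINVAL, parse_flag(NULL));
	KUNIT_EXPECT_EQ(test, -EINVAL, parse_flag("bogus"));
}

The test case would then be registered in a kunit_case array and a
kunit_module exactly as in the sketch above.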

## Is KUnit trying to replace other testing frameworks for the kernel?

No. Most existing tests for the Linux kernel are end-to-end tests, which
have their place. A well-tested system has lots of unit tests, a
reasonable number of integration tests, and some end-to-end tests. KUnit
is just trying to fill the unit testing niche, which is currently not
well covered.

## More information on KUnit

There is a bunch of documentation near the end of this patch set that
describes how to use KUnit and best practices for writing unit tests.
For convenience, I am hosting the compiled docs here:
https://google.github.io/kunit-docs/third_party/kernel/docs/
Additionally, I have applied these patches to a branch:
https://kunit.googlesource.com/linux/+/kunit/rfc/5.0-rc5/v4
The repo may be cloned with:
git clone https://kunit.googlesource.com/linux
This patchset is on the kunit/rfc/5.0-rc5/v4 branch.

## Changes Since Last Version

 - Got KUnit working on (hypothetically) all architectures (tested on
   x86), as per Rob's (and others') request
 - Punted all KUnit features/patches depending on UML for now.
 - Broke out UML-specific support into arch/um/* as per "[RFC v3 01/19]
   kunit: test: add KUnit test runner core", as requested by Luis.
 - Added support to kunit_tool to allow it to build kernels in external
   directories, as suggested by Kieran.
 - Added a UML defconfig, and a config fragment for KUnit as suggested
   by Kieran and Luis.
 - Cleaned up and reformatted a bunch of stuff.

-- 
2.21.0.rc0.258.g878e2cd30e-goog

* [RFC v4 01/17] kunit: test: add KUnit test runner core
@ 2019-02-14 21:37     ` Brendan Higgins
  0 siblings, 0 replies; 316+ messages in thread
From: Brendan Higgins @ 2019-02-14 21:37 UTC (permalink / raw)
  To: keescook, mcgrof, shuah, robh, kieran.bingham, frowand.list
  Cc: gregkh, joel, mpe, joe, brakmo, rostedt, Tim.Bird, khilman,
	julia.lawall, linux-kselftest, kunit-dev, linux-kernel, jdike,
	richard, linux-um, daniel, dri-devel, dan.j.williams,
	linux-nvdimm, knut.omang, devicetree, pmladek, Alexander.Levin,
	amir73il, dan.carpenter, wfg, Brendan Higgins

Add core facilities for defining unit tests. This provides a common way
to define test cases, that is, functions that execute code under test and
determine whether it behaves as expected. It also provides a way to group
related test cases into test suites (here called test_modules).

For now, just define test cases and how to execute them; support for
setting expectations on code under test will be added in a later patch.
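
To sketch the intended usage (illustrative only; the example_* names below
are made up, and only the API added by this patch is assumed):

#include <kunit/test.h>

struct example_ctx {
	int dummy;
};

static int example_test_init(struct kunit *test)
{
	/* Runs before every test case; stash fixture data in test->priv. */
	test->priv = kzalloc(sizeof(struct example_ctx), GFP_KERNEL);
	if (!test->priv)
		return -ENOMEM;
	return 0;
}

static void example_test_exit(struct kunit *test)
{
	/* Runs after every test case. */
	kfree(test->priv);
}

static void example_smoke_test(struct kunit *test)
{
	struct example_ctx *ctx = test->priv;

	kunit_info(test, "fixture allocated at %p", ctx);
}

static struct kunit_case example_test_cases[] = {
	KUNIT_CASE(example_smoke_test),
	{},
};

static struct kunit_module example_test_module = {
	.name = "example-test",
	.init = example_test_init,
	.exit = example_test_exit,
	.test_cases = example_test_cases,
};
module_test(example_test_module);

With the runner added here, the example above would log something along the
lines of "kunit example-test: example_smoke_test passed" followed by
"kunit example-test: all tests passed".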

Signed-off-by: Brendan Higgins <brendanhiggins@google.com>
---
 include/kunit/test.h | 165 ++++++++++++++++++++++++++++++++++++++++++
 kunit/Kconfig        |  16 +++++
 kunit/Makefile       |   1 +
 kunit/test.c         | 168 +++++++++++++++++++++++++++++++++++++++++++
 4 files changed, 350 insertions(+)
 create mode 100644 include/kunit/test.h
 create mode 100644 kunit/Kconfig
 create mode 100644 kunit/Makefile
 create mode 100644 kunit/test.c

diff --git a/include/kunit/test.h b/include/kunit/test.h
new file mode 100644
index 0000000000000..23c2ebedd6dd9
--- /dev/null
+++ b/include/kunit/test.h
@@ -0,0 +1,165 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Base unit test (KUnit) API.
+ *
+ * Copyright (C) 2019, Google LLC.
+ * Author: Brendan Higgins <brendanhiggins@google.com>
+ */
+
+#ifndef _KUNIT_TEST_H
+#define _KUNIT_TEST_H
+
+#include <linux/types.h>
+#include <linux/slab.h>
+
+struct kunit;
+
+/**
+ * struct kunit_case - represents an individual test case.
+ * @run_case: the function representing the actual test case.
+ * @name: the name of the test case.
+ *
+ * A test case is a function with the signature ``void (*)(struct kunit *)``
+ * that makes expectations (see KUNIT_EXPECT_TRUE()) about code under test. Each
+ * test case is associated with a &struct kunit_module and will be run after the
+ * module's init function and followed by the module's exit function.
+ *
+ * A test case should be static and should only be created with the KUNIT_CASE()
+ * macro; additionally, every array of test cases should be terminated with an
+ * empty test case.
+ *
+ * Example:
+ *
+ * .. code-block:: c
+ *
+ *	void add_test_basic(struct kunit *test)
+ *	{
+ *		KUNIT_EXPECT_EQ(test, 1, add(1, 0));
+ *		KUNIT_EXPECT_EQ(test, 2, add(1, 1));
+ *		KUNIT_EXPECT_EQ(test, 0, add(-1, 1));
+ *		KUNIT_EXPECT_EQ(test, INT_MAX, add(0, INT_MAX));
+ *		KUNIT_EXPECT_EQ(test, -1, add(INT_MAX, INT_MIN));
+ *	}
+ *
+ *	static struct kunit_case example_test_cases[] = {
+ *		KUNIT_CASE(add_test_basic),
+ *		{},
+ *	};
+ *
+ */
+struct kunit_case {
+	void (*run_case)(struct kunit *test);
+	const char name[256];
+
+	/* private: internal use only. */
+	bool success;
+};
+
+/**
+ * KUNIT_CASE - A helper for creating a &struct kunit_case
+ * @test_name: a reference to a test case function.
+ *
+ * Takes a symbol for a function representing a test case and creates a
+ * &struct kunit_case object from it. See the documentation for
+ * &struct kunit_case for an example on how to use it.
+ */
+#define KUNIT_CASE(test_name) { .run_case = test_name, .name = #test_name }
+
+/**
+ * struct kunit_module - describes a related collection of &struct kunit_case s.
+ * @name: the name of the test module. Purely informational.
+ * @init: called before every test case.
+ * @exit: called after every test case.
+ * @test_cases: a null terminated array of test cases.
+ *
+ * A kunit_module is a collection of related &struct kunit_case s, such that
+ * @init is called before every test case and @exit is called after every test
+ * case, similar to the notion of a *test fixture* or a *test class* in other
+ * unit testing frameworks like JUnit or Googletest.
+ *
+ * Every &struct kunit_case must be associated with a kunit_module for KUnit to
+ * run it.
+ */
+struct kunit_module {
+	const char name[256];
+	int (*init)(struct kunit *test);
+	void (*exit)(struct kunit *test);
+	struct kunit_case *test_cases;
+};
+
+/**
+ * struct kunit - represents a running instance of a test.
+ * @priv: for user to store arbitrary data. Commonly used to pass data created
+ * in the init function (see &struct kunit_module).
+ *
+ * Used to store information about the current context under which the test is
+ * running. Most of this data is private and should only be accessed indirectly
+ * via public functions; the one exception is @priv which can be used by the
+ * test writer to store arbitrary data.
+ */
+struct kunit {
+	void *priv;
+
+	/* private: internal use only. */
+	const char *name; /* Read only after initialization! */
+	spinlock_t lock; /* Guards all mutable test state. */
+	bool success; /* Protected by lock. */
+	void (*vprintk)(const struct kunit *test,
+			const char *level,
+			struct va_format *vaf);
+};
+
+int kunit_init_test(struct kunit *test, const char *name);
+
+int kunit_run_tests(struct kunit_module *module);
+
+/**
+ * module_test() - used to register a &struct kunit_module with KUnit.
+ * @module: a statically allocated &struct kunit_module.
+ *
+ * Registers @module with the test framework. See &struct kunit_module for more
+ * information.
+ */
+#define module_test(module) \
+		static int module_kunit_init##module(void) \
+		{ \
+			return kunit_run_tests(&module); \
+		} \
+		late_initcall(module_kunit_init##module)
+
+void __printf(3, 4) kunit_printk(const char *level,
+				 const struct kunit *test,
+				 const char *fmt, ...);
+
+/**
+ * kunit_info() - Prints an INFO level message associated with the current test.
+ * @test: The test context object.
+ * @fmt: A printk() style format string.
+ *
+ * Prints an info level message associated with the test module being run. Takes
+ * a variable number of format parameters just like printk().
+ */
+#define kunit_info(test, fmt, ...) \
+		kunit_printk(KERN_INFO, test, fmt, ##__VA_ARGS__)
+
+/**
+ * kunit_warn() - Prints a WARN level message associated with the current test.
+ * @test: The test context object.
+ * @fmt: A printk() style format string.
+ *
+ * See kunit_info().
+ */
+#define kunit_warn(test, fmt, ...) \
+		kunit_printk(KERN_WARNING, test, fmt, ##__VA_ARGS__)
+
+/**
+ * kunit_err() - Prints an ERROR level message associated with the current test.
+ * @test: The test context object.
+ * @fmt: A printk() style format string.
+ *
+ * See kunit_info().
+ */
+#define kunit_err(test, fmt, ...) \
+		kunit_printk(KERN_ERR, test, fmt, ##__VA_ARGS__)
+
+#endif /* _KUNIT_TEST_H */
diff --git a/kunit/Kconfig b/kunit/Kconfig
new file mode 100644
index 0000000000000..64480092b2c24
--- /dev/null
+++ b/kunit/Kconfig
@@ -0,0 +1,16 @@
+#
+# KUnit base configuration
+#
+
+menu "KUnit support"
+
+config KUNIT
+	bool "Enable support for unit tests (KUnit)"
+	help
+	  Enables support for kernel unit tests (KUnit), a lightweight unit
+	  testing and mocking framework for the Linux kernel. These tests are
+	  able to be run locally on a developer's workstation without a VM or
+	  special hardware. For more information, please see
+	  Documentation/kunit/
+
+endmenu
diff --git a/kunit/Makefile b/kunit/Makefile
new file mode 100644
index 0000000000000..5efdc4dea2c08
--- /dev/null
+++ b/kunit/Makefile
@@ -0,0 +1 @@
+obj-$(CONFIG_KUNIT) +=			test.o
diff --git a/kunit/test.c b/kunit/test.c
new file mode 100644
index 0000000000000..0b4396f92086e
--- /dev/null
+++ b/kunit/test.c
@@ -0,0 +1,168 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Base unit test (KUnit) API.
+ *
+ * Copyright (C) 2019, Google LLC.
+ * Author: Brendan Higgins <brendanhiggins@google.com>
+ */
+
+#include <linux/sched.h>
+#include <linux/sched/debug.h>
+#include <os.h>
+#include <kunit/test.h>
+
+static bool kunit_get_success(struct kunit *test)
+{
+	unsigned long flags;
+	bool success;
+
+	spin_lock_irqsave(&test->lock, flags);
+	success = test->success;
+	spin_unlock_irqrestore(&test->lock, flags);
+
+	return success;
+}
+
+static void kunit_set_success(struct kunit *test, bool success)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&test->lock, flags);
+	test->success = success;
+	spin_unlock_irqrestore(&test->lock, flags);
+}
+
+static int kunit_vprintk_emit(const struct kunit *test,
+			      int level,
+			      const char *fmt,
+			      va_list args)
+{
+	return vprintk_emit(0, level, NULL, 0, fmt, args);
+}
+
+static int kunit_printk_emit(const struct kunit *test,
+			     int level,
+			     const char *fmt, ...)
+{
+	va_list args;
+	int ret;
+
+	va_start(args, fmt);
+	ret = kunit_vprintk_emit(test, level, fmt, args);
+	va_end(args);
+
+	return ret;
+}
+
+static void kunit_vprintk(const struct kunit *test,
+			  const char *level,
+			  struct va_format *vaf)
+{
+	kunit_printk_emit(test,
+			  level[1] - '0',
+			  "kunit %s: %pV", test->name, vaf);
+}
+
+int kunit_init_test(struct kunit *test, const char *name)
+{
+	spin_lock_init(&test->lock);
+	test->name = name;
+	test->vprintk = kunit_vprintk;
+
+	return 0;
+}
+
+/*
+ * Initializes and runs a test case. Does not clean up or do post validations.
+ */
+static void kunit_run_case_internal(struct kunit *test,
+				    struct kunit_module *module,
+				    struct kunit_case *test_case)
+{
+	int ret;
+
+	if (module->init) {
+		ret = module->init(test);
+		if (ret) {
+			kunit_err(test, "failed to initialize: %d", ret);
+			kunit_set_success(test, false);
+			return;
+		}
+	}
+
+	test_case->run_case(test);
+}
+
+/*
+ * Performs post validations and cleanup after a test case was run.
+ * XXX: Should ONLY BE CALLED AFTER kunit_run_case_internal!
+ */
+static void kunit_run_case_cleanup(struct kunit *test,
+				   struct kunit_module *module,
+				   struct kunit_case *test_case)
+{
+	if (module->exit)
+		module->exit(test);
+}
+
+/*
+ * Performs all logic to run a test case.
+ */
+static bool kunit_run_case(struct kunit *test,
+			   struct kunit_module *module,
+			   struct kunit_case *test_case)
+{
+	kunit_set_success(test, true);
+
+	kunit_run_case_internal(test, module, test_case);
+	kunit_run_case_cleanup(test, module, test_case);
+
+	return kunit_get_success(test);
+}
+
+int kunit_run_tests(struct kunit_module *module)
+{
+	bool all_passed = true, success;
+	struct kunit_case *test_case;
+	struct kunit test;
+	int ret;
+
+	ret = kunit_init_test(&test, module->name);
+	if (ret)
+		return ret;
+
+	for (test_case = module->test_cases; test_case->run_case; test_case++) {
+		success = kunit_run_case(&test, module, test_case);
+		if (!success)
+			all_passed = false;
+
+		kunit_info(&test,
+			  "%s %s",
+			  test_case->name,
+			  success ? "passed" : "failed");
+	}
+
+	if (all_passed)
+		kunit_info(&test, "all tests passed");
+	else
+		kunit_info(&test, "one or more tests failed");
+
+	return 0;
+}
+
+void kunit_printk(const char *level,
+		  const struct kunit *test,
+		  const char *fmt, ...)
+{
+	struct va_format vaf;
+	va_list args;
+
+	va_start(args, fmt);
+
+	vaf.fmt = fmt;
+	vaf.va = &args;
+
+	test->vprintk(test, level, &vaf);
+
+	va_end(args);
+}
-- 
2.21.0.rc0.258.g878e2cd30e-goog


* [RFC v4 01/17] kunit: test: add KUnit test runner core
@ 2019-02-14 21:37     ` Brendan Higgins
  0 siblings, 0 replies; 316+ messages in thread
From: brendanhiggins @ 2019-02-14 21:37 UTC (permalink / raw)


Add core facilities for defining unit tests; this provides a common way
to define test cases, functions that execute code which is under test
and determine whether the code under test behaves as expected; this also
provides a way to group together related test cases in test suites (here
we call them test_modules).

Just define test cases and how to execute them for now; setting
expectations on code will be defined later.

Signed-off-by: Brendan Higgins <brendanhiggins at google.com>
---
 include/kunit/test.h | 165 ++++++++++++++++++++++++++++++++++++++++++
 kunit/Kconfig        |  16 +++++
 kunit/Makefile       |   1 +
 kunit/test.c         | 168 +++++++++++++++++++++++++++++++++++++++++++
 4 files changed, 350 insertions(+)
 create mode 100644 include/kunit/test.h
 create mode 100644 kunit/Kconfig
 create mode 100644 kunit/Makefile
 create mode 100644 kunit/test.c

diff --git a/include/kunit/test.h b/include/kunit/test.h
new file mode 100644
index 0000000000000..23c2ebedd6dd9
--- /dev/null
+++ b/include/kunit/test.h
@@ -0,0 +1,165 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Base unit test (KUnit) API.
+ *
+ * Copyright (C) 2019, Google LLC.
+ * Author: Brendan Higgins <brendanhiggins at google.com>
+ */
+
+#ifndef _KUNIT_TEST_H
+#define _KUNIT_TEST_H
+
+#include <linux/types.h>
+#include <linux/slab.h>
+
+struct kunit;
+
+/**
+ * struct kunit_case - represents an individual test case.
+ * @run_case: the function representing the actual test case.
+ * @name: the name of the test case.
+ *
+ * A test case is a function with the signature, ``void (*)(struct kunit *)``
+ * that makes expectations (see KUNIT_EXPECT_TRUE()) about code under test. Each
+ * test case is associated with a &struct kunit_module and will be run after the
+ * module's init function and followed by the module's exit function.
+ *
+ * A test case should be static and should only be created with the KUNIT_CASE()
+ * macro; additionally, every array of test cases should be terminated with an
+ * empty test case.
+ *
+ * Example:
+ *
+ * .. code-block:: c
+ *
+ *	void add_test_basic(struct kunit *test)
+ *	{
+ *		KUNIT_EXPECT_EQ(test, 1, add(1, 0));
+ *		KUNIT_EXPECT_EQ(test, 2, add(1, 1));
+ *		KUNIT_EXPECT_EQ(test, 0, add(-1, 1));
+ *		KUNIT_EXPECT_EQ(test, INT_MAX, add(0, INT_MAX));
+ *		KUNIT_EXPECT_EQ(test, -1, add(INT_MAX, INT_MIN));
+ *	}
+ *
+ *	static struct kunit_case example_test_cases[] = {
+ *		KUNIT_CASE(add_test_basic),
+ *		{},
+ *	};
+ *
+ */
+struct kunit_case {
+	void (*run_case)(struct kunit *test);
+	const char name[256];
+
+	/* private: internal use only. */
+	bool success;
+};
+
+/**
+ * KUNIT_CASE - A helper for creating a &struct kunit_case
+ * @test_name: a reference to a test case function.
+ *
+ * Takes a symbol for a function representing a test case and creates a
+ * &struct kunit_case object from it. See the documentation for
+ * &struct kunit_case for an example on how to use it.
+ */
+#define KUNIT_CASE(test_name) { .run_case = test_name, .name = #test_name }
+
+/**
+ * struct kunit_module - describes a related collection of &struct kunit_case s.
+ * @name: the name of the test. Purely informational.
+ * @init: called before every test case.
+ * @exit: called after every test case.
+ * @test_cases: a null terminated array of test cases.
+ *
+ * A kunit_module is a collection of related &struct kunit_case s, such that
+ * @init is called before every test case and @exit is called after every test
+ * case, similar to the notion of a *test fixture* or a *test class* in other
+ * unit testing frameworks like JUnit or Googletest.
+ *
+ * Every &struct kunit_case must be associated with a kunit_module for KUnit to
+ * run it.
+ */
+struct kunit_module {
+	const char name[256];
+	int (*init)(struct kunit *test);
+	void (*exit)(struct kunit *test);
+	struct kunit_case *test_cases;
+};
+
+/**
+ * struct kunit - represents a running instance of a test.
+ * @priv: for user to store arbitrary data. Commonly used to pass data created
+ * in the init function (see &struct kunit_module).
+ *
+ * Used to store information about the current context under which the test is
+ * running. Most of this data is private and should only be accessed indirectly
+ * via public functions; the one exception is @priv which can be used by the
+ * test writer to store arbitrary data.
+ */
+struct kunit {
+	void *priv;
+
+	/* private: internal use only. */
+	const char *name; /* Read only after initialization! */
+	spinlock_t lock; /* Gaurds all mutable test state. */
+	bool success; /* Protected by lock. */
+	void (*vprintk)(const struct kunit *test,
+			const char *level,
+			struct va_format *vaf);
+};
+
+int kunit_init_test(struct kunit *test, const char *name);
+
+int kunit_run_tests(struct kunit_module *module);
+
+/**
+ * module_test() - used to register a &struct kunit_module with KUnit.
+ * @module: a statically allocated &struct kunit_module.
+ *
+ * Registers @module with the test framework. See &struct kunit_module for more
+ * information.
+ */
+#define module_test(module) \
+		static int module_kunit_init##module(void) \
+		{ \
+			return kunit_run_tests(&module); \
+		} \
+		late_initcall(module_kunit_init##module)
+
+void __printf(3, 4) kunit_printk(const char *level,
+				 const struct kunit *test,
+				 const char *fmt, ...);
+
+/**
+ * kunit_info() - Prints an INFO level message associated with the current test.
+ * @test: The test context object.
+ * @fmt: A printk() style format string.
+ *
+ * Prints an info level message associated with the test module being run. Takes
+ * a variable number of format parameters just like printk().
+ */
+#define kunit_info(test, fmt, ...) \
+		kunit_printk(KERN_INFO, test, fmt, ##__VA_ARGS__)
+
+/**
+ * kunit_warn() - Prints a WARN level message associated with the current test.
+ * @test: The test context object.
+ * @fmt: A printk() style format string.
+ *
+ * See kunit_info().
+ */
+#define kunit_warn(test, fmt, ...) \
+		kunit_printk(KERN_WARNING, test, fmt, ##__VA_ARGS__)
+
+/**
+ * kunit_err() - Prints an ERROR level message associated with the current test.
+ * @test: The test context object.
+ * @fmt: A printk() style format string.
+ *
+ * See kunit_info().
+ */
+#define kunit_err(test, fmt, ...) \
+		kunit_printk(KERN_ERR, test, fmt, ##__VA_ARGS__)
+
+#endif /* _KUNIT_TEST_H */
diff --git a/kunit/Kconfig b/kunit/Kconfig
new file mode 100644
index 0000000000000..64480092b2c24
--- /dev/null
+++ b/kunit/Kconfig
@@ -0,0 +1,16 @@
+#
+# KUnit base configuration
+#
+
+menu "KUnit support"
+
+config KUNIT
+	bool "Enable support for unit tests (KUnit)"
+	help
+	  Enables support for kernel unit tests (KUnit), a lightweight unit
+	  testing and mocking framework for the Linux kernel. These tests are
+	  able to be run locally on a developer's workstation without a VM or
+	  special hardware. For more information, please see
+	  Documentation/kunit/
+
+endmenu
diff --git a/kunit/Makefile b/kunit/Makefile
new file mode 100644
index 0000000000000..5efdc4dea2c08
--- /dev/null
+++ b/kunit/Makefile
@@ -0,0 +1 @@
+obj-$(CONFIG_KUNIT) +=			test.o
diff --git a/kunit/test.c b/kunit/test.c
new file mode 100644
index 0000000000000..0b4396f92086e
--- /dev/null
+++ b/kunit/test.c
@@ -0,0 +1,168 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Base unit test (KUnit) API.
+ *
+ * Copyright (C) 2019, Google LLC.
+ * Author: Brendan Higgins <brendanhiggins at google.com>
+ */
+
+#include <linux/sched.h>
+#include <linux/sched/debug.h>
+#include <os.h>
+#include <kunit/test.h>
+
+static bool kunit_get_success(struct kunit *test)
+{
+	unsigned long flags;
+	bool success;
+
+	spin_lock_irqsave(&test->lock, flags);
+	success = test->success;
+	spin_unlock_irqrestore(&test->lock, flags);
+
+	return success;
+}
+
+static void kunit_set_success(struct kunit *test, bool success)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&test->lock, flags);
+	test->success = success;
+	spin_unlock_irqrestore(&test->lock, flags);
+}
+
+static int kunit_vprintk_emit(const struct kunit *test,
+			      int level,
+			      const char *fmt,
+			      va_list args)
+{
+	return vprintk_emit(0, level, NULL, 0, fmt, args);
+}
+
+static int kunit_printk_emit(const struct kunit *test,
+			     int level,
+			     const char *fmt, ...)
+{
+	va_list args;
+	int ret;
+
+	va_start(args, fmt);
+	ret = kunit_vprintk_emit(test, level, fmt, args);
+	va_end(args);
+
+	return ret;
+}
+
+static void kunit_vprintk(const struct kunit *test,
+			  const char *level,
+			  struct va_format *vaf)
+{
+	kunit_printk_emit(test,
+			  level[1] - '0',
+			  "kunit %s: %pV", test->name, vaf);
+}
+
+int kunit_init_test(struct kunit *test, const char *name)
+{
+	spin_lock_init(&test->lock);
+	test->name = name;
+	test->vprintk = kunit_vprintk;
+
+	return 0;
+}
+
+/*
+ * Initializes and runs test case. Does not clean up or do post validations.
+ */
+static void kunit_run_case_internal(struct kunit *test,
+				    struct kunit_module *module,
+				    struct kunit_case *test_case)
+{
+	int ret;
+
+	if (module->init) {
+		ret = module->init(test);
+		if (ret) {
+			kunit_err(test, "failed to initialize: %d", ret);
+			kunit_set_success(test, false);
+			return;
+		}
+	}
+
+	test_case->run_case(test);
+}
+
+/*
+ * Performs post validations and cleanup after a test case was run.
+ * XXX: Should ONLY BE CALLED AFTER kunit_run_case_internal!
+ */
+static void kunit_run_case_cleanup(struct kunit *test,
+				   struct kunit_module *module,
+				   struct kunit_case *test_case)
+{
+	if (module->exit)
+		module->exit(test);
+}
+
+/*
+ * Performs all logic to run a test case.
+ */
+static bool kunit_run_case(struct kunit *test,
+			   struct kunit_module *module,
+			   struct kunit_case *test_case)
+{
+	kunit_set_success(test, true);
+
+	kunit_run_case_internal(test, module, test_case);
+	kunit_run_case_cleanup(test, module, test_case);
+
+	return kunit_get_success(test);
+}
+
+int kunit_run_tests(struct kunit_module *module)
+{
+	bool all_passed = true, success;
+	struct kunit_case *test_case;
+	struct kunit test;
+	int ret;
+
+	ret = kunit_init_test(&test, module->name);
+	if (ret)
+		return ret;
+
+	for (test_case = module->test_cases; test_case->run_case; test_case++) {
+		success = kunit_run_case(&test, module, test_case);
+		if (!success)
+			all_passed = false;
+
+		kunit_info(&test,
+			  "%s %s",
+			  test_case->name,
+			  success ? "passed" : "failed");
+	}
+
+	if (all_passed)
+		kunit_info(&test, "all tests passed");
+	else
+		kunit_info(&test, "one or more tests failed");
+
+	return 0;
+}
+
+void kunit_printk(const char *level,
+		  const struct kunit *test,
+		  const char *fmt, ...)
+{
+	struct va_format vaf;
+	va_list args;
+
+	va_start(args, fmt);
+
+	vaf.fmt = fmt;
+	vaf.va = &args;
+
+	test->vprintk(test, level, &vaf);
+
+	va_end(args);
+}
-- 
2.21.0.rc0.258.g878e2cd30e-goog

^ permalink raw reply related	[flat|nested] 316+ messages in thread

* [RFC v4 01/17] kunit: test: add KUnit test runner core
@ 2019-02-14 21:37     ` Brendan Higgins
  0 siblings, 0 replies; 316+ messages in thread
From: Brendan Higgins @ 2019-02-14 21:37 UTC (permalink / raw)


Add core facilities for defining unit tests; this provides a common way
to define test cases, functions that execute code which is under test
and determine whether the code under test behaves as expected; this also
provides a way to group together related test cases in test suites (here
we call them test_modules).

Just define test cases and how to execute them for now; setting
expectations on code will be defined later.

Signed-off-by: Brendan Higgins <brendanhiggins at google.com>
---
 include/kunit/test.h | 165 ++++++++++++++++++++++++++++++++++++++++++
 kunit/Kconfig        |  16 +++++
 kunit/Makefile       |   1 +
 kunit/test.c         | 168 +++++++++++++++++++++++++++++++++++++++++++
 4 files changed, 350 insertions(+)
 create mode 100644 include/kunit/test.h
 create mode 100644 kunit/Kconfig
 create mode 100644 kunit/Makefile
 create mode 100644 kunit/test.c

diff --git a/include/kunit/test.h b/include/kunit/test.h
new file mode 100644
index 0000000000000..23c2ebedd6dd9
--- /dev/null
+++ b/include/kunit/test.h
@@ -0,0 +1,165 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Base unit test (KUnit) API.
+ *
+ * Copyright (C) 2019, Google LLC.
+ * Author: Brendan Higgins <brendanhiggins at google.com>
+ */
+
+#ifndef _KUNIT_TEST_H
+#define _KUNIT_TEST_H
+
+#include <linux/types.h>
+#include <linux/slab.h>
+
+struct kunit;
+
+/**
+ * struct kunit_case - represents an individual test case.
+ * @run_case: the function representing the actual test case.
+ * @name: the name of the test case.
+ *
+ * A test case is a function with the signature, ``void (*)(struct kunit *)``
+ * that makes expectations (see KUNIT_EXPECT_TRUE()) about code under test. Each
+ * test case is associated with a &struct kunit_module and will be run after the
+ * module's init function and followed by the module's exit function.
+ *
+ * A test case should be static and should only be created with the KUNIT_CASE()
+ * macro; additionally, every array of test cases should be terminated with an
+ * empty test case.
+ *
+ * Example:
+ *
+ * .. code-block:: c
+ *
+ *	void add_test_basic(struct kunit *test)
+ *	{
+ *		KUNIT_EXPECT_EQ(test, 1, add(1, 0));
+ *		KUNIT_EXPECT_EQ(test, 2, add(1, 1));
+ *		KUNIT_EXPECT_EQ(test, 0, add(-1, 1));
+ *		KUNIT_EXPECT_EQ(test, INT_MAX, add(0, INT_MAX));
+ *		KUNIT_EXPECT_EQ(test, -1, add(INT_MAX, INT_MIN));
+ *	}
+ *
+ *	static struct kunit_case example_test_cases[] = {
+ *		KUNIT_CASE(add_test_basic),
+ *		{},
+ *	};
+ *
+ */
+struct kunit_case {
+	void (*run_case)(struct kunit *test);
+	const char name[256];
+
+	/* private: internal use only. */
+	bool success;
+};
+
+/**
+ * KUNIT_CASE - A helper for creating a &struct kunit_case
+ * @test_name: a reference to a test case function.
+ *
+ * Takes a symbol for a function representing a test case and creates a
+ * &struct kunit_case object from it. See the documentation for
+ * &struct kunit_case for an example on how to use it.
+ */
+#define KUNIT_CASE(test_name) { .run_case = test_name, .name = #test_name }
+
+/**
+ * struct kunit_module - describes a related collection of &struct kunit_case s.
+ * @name: the name of the test. Purely informational.
+ * @init: called before every test case.
+ * @exit: called after every test case.
+ * @test_cases: a null terminated array of test cases.
+ *
+ * A kunit_module is a collection of related &struct kunit_case s, such that
+ * @init is called before every test case and @exit is called after every test
+ * case, similar to the notion of a *test fixture* or a *test class* in other
+ * unit testing frameworks like JUnit or Googletest.
+ *
+ * Every &struct kunit_case must be associated with a kunit_module for KUnit to
+ * run it.
+ */
+struct kunit_module {
+	const char name[256];
+	int (*init)(struct kunit *test);
+	void (*exit)(struct kunit *test);
+	struct kunit_case *test_cases;
+};
+
+/**
+ * struct kunit - represents a running instance of a test.
+ * @priv: for user to store arbitrary data. Commonly used to pass data created
+ * in the init function (see &struct kunit_module).
+ *
+ * Used to store information about the current context under which the test is
+ * running. Most of this data is private and should only be accessed indirectly
+ * via public functions; the one exception is @priv which can be used by the
+ * test writer to store arbitrary data.
+ */
+struct kunit {
+	void *priv;
+
+	/* private: internal use only. */
+	const char *name; /* Read only after initialization! */
+	spinlock_t lock; /* Gaurds all mutable test state. */
+	bool success; /* Protected by lock. */
+	void (*vprintk)(const struct kunit *test,
+			const char *level,
+			struct va_format *vaf);
+};
+
+int kunit_init_test(struct kunit *test, const char *name);
+
+int kunit_run_tests(struct kunit_module *module);
+
+/**
+ * module_test() - used to register a &struct kunit_module with KUnit.
+ * @module: a statically allocated &struct kunit_module.
+ *
+ * Registers @module with the test framework. See &struct kunit_module for more
+ * information.
+ */
+#define module_test(module) \
+		static int module_kunit_init##module(void) \
+		{ \
+			return kunit_run_tests(&module); \
+		} \
+		late_initcall(module_kunit_init##module)
+
+void __printf(3, 4) kunit_printk(const char *level,
+				 const struct kunit *test,
+				 const char *fmt, ...);
+
+/**
+ * kunit_info() - Prints an INFO level message associated with the current test.
+ * @test: The test context object.
+ * @fmt: A printk() style format string.
+ *
+ * Prints an info level message associated with the test module being run. Takes
+ * a variable number of format parameters just like printk().
+ */
+#define kunit_info(test, fmt, ...) \
+		kunit_printk(KERN_INFO, test, fmt, ##__VA_ARGS__)
+
+/**
+ * kunit_warn() - Prints a WARN level message associated with the current test.
+ * @test: The test context object.
+ * @fmt: A printk() style format string.
+ *
+ * See kunit_info().
+ */
+#define kunit_warn(test, fmt, ...) \
+		kunit_printk(KERN_WARNING, test, fmt, ##__VA_ARGS__)
+
+/**
+ * kunit_err() - Prints an ERROR level message associated with the current test.
+ * @test: The test context object.
+ * @fmt: A printk() style format string.
+ *
+ * See kunit_info().
+ */
+#define kunit_err(test, fmt, ...) \
+		kunit_printk(KERN_ERR, test, fmt, ##__VA_ARGS__)
+
+#endif /* _KUNIT_TEST_H */
diff --git a/kunit/Kconfig b/kunit/Kconfig
new file mode 100644
index 0000000000000..64480092b2c24
--- /dev/null
+++ b/kunit/Kconfig
@@ -0,0 +1,16 @@
+#
+# KUnit base configuration
+#
+
+menu "KUnit support"
+
+config KUNIT
+	bool "Enable support for unit tests (KUnit)"
+	help
+	  Enables support for kernel unit tests (KUnit), a lightweight unit
+	  testing and mocking framework for the Linux kernel. These tests are
+	  able to be run locally on a developer's workstation without a VM or
+	  special hardware. For more information, please see
+	  Documentation/kunit/
+
+endmenu
diff --git a/kunit/Makefile b/kunit/Makefile
new file mode 100644
index 0000000000000..5efdc4dea2c08
--- /dev/null
+++ b/kunit/Makefile
@@ -0,0 +1 @@
+obj-$(CONFIG_KUNIT) +=			test.o
diff --git a/kunit/test.c b/kunit/test.c
new file mode 100644
index 0000000000000..0b4396f92086e
--- /dev/null
+++ b/kunit/test.c
@@ -0,0 +1,168 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Base unit test (KUnit) API.
+ *
+ * Copyright (C) 2019, Google LLC.
+ * Author: Brendan Higgins <brendanhiggins@google.com>
+ */
+
+#include <linux/sched.h>
+#include <linux/sched/debug.h>
+#include <os.h>
+#include <kunit/test.h>
+
+static bool kunit_get_success(struct kunit *test)
+{
+	unsigned long flags;
+	bool success;
+
+	spin_lock_irqsave(&test->lock, flags);
+	success = test->success;
+	spin_unlock_irqrestore(&test->lock, flags);
+
+	return success;
+}
+
+static void kunit_set_success(struct kunit *test, bool success)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&test->lock, flags);
+	test->success = success;
+	spin_unlock_irqrestore(&test->lock, flags);
+}
+
+static int kunit_vprintk_emit(const struct kunit *test,
+			      int level,
+			      const char *fmt,
+			      va_list args)
+{
+	return vprintk_emit(0, level, NULL, 0, fmt, args);
+}
+
+static int kunit_printk_emit(const struct kunit *test,
+			     int level,
+			     const char *fmt, ...)
+{
+	va_list args;
+	int ret;
+
+	va_start(args, fmt);
+	ret = kunit_vprintk_emit(test, level, fmt, args);
+	va_end(args);
+
+	return ret;
+}
+
+static void kunit_vprintk(const struct kunit *test,
+			  const char *level,
+			  struct va_format *vaf)
+{
+	kunit_printk_emit(test,
+			  level[1] - '0',
+			  "kunit %s: %pV", test->name, vaf);
+}
+
+int kunit_init_test(struct kunit *test, const char *name)
+{
+	spin_lock_init(&test->lock);
+	test->name = name;
+	test->vprintk = kunit_vprintk;
+
+	return 0;
+}
+
+/*
+ * Initializes and runs a test case. Does not clean up or do post validations.
+ */
+static void kunit_run_case_internal(struct kunit *test,
+				    struct kunit_module *module,
+				    struct kunit_case *test_case)
+{
+	int ret;
+
+	if (module->init) {
+		ret = module->init(test);
+		if (ret) {
+			kunit_err(test, "failed to initialize: %d", ret);
+			kunit_set_success(test, false);
+			return;
+		}
+	}
+
+	test_case->run_case(test);
+}
+
+/*
+ * Performs post validations and cleanup after a test case was run.
+ * XXX: Should ONLY BE CALLED AFTER kunit_run_case_internal!
+ */
+static void kunit_run_case_cleanup(struct kunit *test,
+				   struct kunit_module *module,
+				   struct kunit_case *test_case)
+{
+	if (module->exit)
+		module->exit(test);
+}
+
+/*
+ * Performs all logic to run a test case.
+ */
+static bool kunit_run_case(struct kunit *test,
+			   struct kunit_module *module,
+			   struct kunit_case *test_case)
+{
+	kunit_set_success(test, true);
+
+	kunit_run_case_internal(test, module, test_case);
+	kunit_run_case_cleanup(test, module, test_case);
+
+	return kunit_get_success(test);
+}
+
+int kunit_run_tests(struct kunit_module *module)
+{
+	bool all_passed = true, success;
+	struct kunit_case *test_case;
+	struct kunit test;
+	int ret;
+
+	ret = kunit_init_test(&test, module->name);
+	if (ret)
+		return ret;
+
+	for (test_case = module->test_cases; test_case->run_case; test_case++) {
+		success = kunit_run_case(&test, module, test_case);
+		if (!success)
+			all_passed = false;
+
+		kunit_info(&test,
+			  "%s %s",
+			  test_case->name,
+			  success ? "passed" : "failed");
+	}
+
+	if (all_passed)
+		kunit_info(&test, "all tests passed");
+	else
+		kunit_info(&test, "one or more tests failed");
+
+	return 0;
+}
+
+void kunit_printk(const char *level,
+		  const struct kunit *test,
+		  const char *fmt, ...)
+{
+	struct va_format vaf;
+	va_list args;
+
+	va_start(args, fmt);
+
+	vaf.fmt = fmt;
+	vaf.va = &args;
+
+	test->vprintk(test, level, &vaf);
+
+	va_end(args);
+}
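
To make the registration flow concrete, a minimal test module built only on
this patch might look like the sketch below. The add() function and all of
the example_* names are invented for illustration, and expectation macros
(KUNIT_EXPECT_*) only arrive in a later patch, so the case simply exercises
the code and logs the result:

	static int add(int a, int b)
	{
		return a + b;
	}

	static void add_test_basic(struct kunit *test)
	{
		/* No expectation macros yet; just run the code under test. */
		kunit_info(test, "add(1, 1) == %d", add(1, 1));
	}

	static int example_init(struct kunit *test)
	{
		/* Per-case setup would go here. */
		return 0;
	}

	static void example_exit(struct kunit *test)
	{
		/* Per-case teardown would go here. */
	}

	static struct kunit_case example_test_cases[] = {
		KUNIT_CASE(add_test_basic),
		{},
	};

	static struct kunit_module example_test_module = {
		.name = "example-test",
		.init = example_init,
		.exit = example_exit,
		.test_cases = example_test_cases,
	};
	module_test(example_test_module);
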
-- 
2.21.0.rc0.258.g878e2cd30e-goog

^ permalink raw reply related	[flat|nested] 316+ messages in thread

* [RFC v4 02/17] kunit: test: add test resource management API
  2019-02-14 21:37 ` brendanhiggins
@ 2019-02-14 21:37   ` brendanhiggins
  -1 siblings, 0 replies; 316+ messages in thread
From: Brendan Higgins @ 2019-02-14 21:37 UTC (permalink / raw)
  To: keescook, mcgrof, shuah, robh, kieran.bingham, frowand.list
  Cc: gregkh, joel, mpe, joe, brakmo, rostedt, Tim.Bird, khilman,
	julia.lawall, linux-kselftest, kunit-dev, linux-kernel, jdike,
	richard, linux-um, daniel, dri-devel, dan.j.williams,
	linux-nvdimm, knut.omang, devicetree, pmladek, Alexander.Levin,
	amir73il, dan.carpenter, wfg, Brendan Higgins

Create a common API for test-managed resources such as memory and test
objects. A test often needs to set up infrastructure for use in its test
cases; this can be anything from allocating some memory to bringing up an
entire driver stack. Define facilities for creating "test resources",
which are managed by the test infrastructure and automatically cleaned up
at the conclusion of the test.
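
As an illustration, a test's init function can lean entirely on test-managed
allocations and skip manual teardown. The sketch below is hypothetical (the
my_test_* names are invented), but it only uses the API added here:

	struct my_test_state {
		void *buffer;
	};

	static int my_test_init(struct kunit *test)
	{
		struct my_test_state *state;

		state = kunit_kzalloc(test, sizeof(*state), GFP_KERNEL);
		if (!state)
			return -ENOMEM;

		state->buffer = kunit_kmalloc(test, PAGE_SIZE, GFP_KERNEL);
		if (!state->buffer)
			return -ENOMEM;

		test->priv = state;
		return 0;
	}

	/*
	 * No matching exit work is needed for these allocations:
	 * kunit_cleanup() releases every test-managed resource when the
	 * test case concludes.
	 */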

Signed-off-by: Brendan Higgins <brendanhiggins@google.com>
---
 include/kunit/test.h | 109 +++++++++++++++++++++++++++++++++++++++++++
 kunit/test.c         |  95 +++++++++++++++++++++++++++++++++++++
 2 files changed, 204 insertions(+)

diff --git a/include/kunit/test.h b/include/kunit/test.h
index 23c2ebedd6dd9..21abc9e953969 100644
--- a/include/kunit/test.h
+++ b/include/kunit/test.h
@@ -12,6 +12,69 @@
 #include <linux/types.h>
 #include <linux/slab.h>
 
+struct kunit_resource;
+
+typedef int (*kunit_resource_init_t)(struct kunit_resource *, void *);
+typedef void (*kunit_resource_free_t)(struct kunit_resource *);
+
+/**
+ * struct kunit_resource - represents a *test managed resource*
+ * @allocation: for the user to store arbitrary data.
+ * @free: a user supplied function to free the resource. Populated by
+ * kunit_alloc_resource().
+ *
+ * Represents a *test managed resource*, a resource which will automatically be
+ * cleaned up at the end of a test case.
+ *
+ * Example:
+ *
+ * .. code-block:: c
+ *
+ *	struct kunit_kmalloc_params {
+ *		size_t size;
+ *		gfp_t gfp;
+ *	};
+ *
+ *	static int kunit_kmalloc_init(struct kunit_resource *res, void *context)
+ *	{
+ *		struct kunit_kmalloc_params *params = context;
+ *		res->allocation = kmalloc(params->size, params->gfp);
+ *
+ *		if (!res->allocation)
+ *			return -ENOMEM;
+ *
+ *		return 0;
+ *	}
+ *
+ *	static void kunit_kmalloc_free(struct kunit_resource *res)
+ *	{
+ *		kfree(res->allocation);
+ *	}
+ *
+ *	void *kunit_kmalloc(struct kunit *test, size_t size, gfp_t gfp)
+ *	{
+ *		struct kunit_kmalloc_params params;
+ *		struct kunit_resource *res;
+ *
+ *		params.size = size;
+ *		params.gfp = gfp;
+ *
+ *		res = kunit_alloc_resource(test, kunit_kmalloc_init,
+ *			kunit_kmalloc_free, &params);
+ *		if (res)
+ *			return res->allocation;
+ *		else
+ *			return NULL;
+ *	}
+ */
+struct kunit_resource {
+	void *allocation;
+	kunit_resource_free_t free;
+
+	/* private: internal use only. */
+	struct list_head node;
+};
+
 struct kunit;
 
 /**
@@ -104,6 +167,7 @@ struct kunit {
 	const char *name; /* Read only after initialization! */
 	spinlock_t lock; /* Guards all mutable test state. */
 	bool success; /* Protected by lock. */
+	struct list_head resources; /* Protected by lock. */
 	void (*vprintk)(const struct kunit *test,
 			const char *level,
 			struct va_format *vaf);
@@ -127,6 +191,51 @@ int kunit_run_tests(struct kunit_module *module);
 		} \
 		late_initcall(module_kunit_init##module)
 
+/**
+ * kunit_alloc_resource() - Allocates a *test managed resource*.
+ * @test: The test context object.
+ * @init: a user supplied function to initialize the resource.
+ * @free: a user supplied function to free the resource.
+ * @context: for the user to pass in arbitrary data.
+ *
+ * Allocates a *test managed resource*, a resource which will automatically be
+ * cleaned up at the end of a test case. See &struct kunit_resource for an
+ * example.
+ */
+struct kunit_resource *kunit_alloc_resource(struct kunit *test,
+					    kunit_resource_init_t init,
+					    kunit_resource_free_t free,
+					    void *context);
+
+void kunit_free_resource(struct kunit *test, struct kunit_resource *res);
+
+/**
+ * kunit_kmalloc() - Like kmalloc() except the allocation is *test managed*.
+ * @test: The test context object.
+ * @size: The size in bytes of the desired memory.
+ * @gfp: flags passed to underlying kmalloc().
+ *
+ * Just like `kmalloc(...)`, except the allocation is managed by the test case
+ * and is automatically cleaned up after the test case concludes. See &struct
+ * kunit_resource for more information.
+ */
+void *kunit_kmalloc(struct kunit *test, size_t size, gfp_t gfp);
+
+/**
+ * kunit_kzalloc() - Just like kunit_kmalloc(), but zeroes the allocation.
+ * @test: The test context object.
+ * @size: The size in bytes of the desired memory.
+ * @gfp: flags passed to underlying kmalloc().
+ *
+ * See kzalloc() and kunit_kmalloc() for more information.
+ */
+static inline void *kunit_kzalloc(struct kunit *test, size_t size, gfp_t gfp)
+{
+	return kunit_kmalloc(test, size, gfp | __GFP_ZERO);
+}
+
+void kunit_cleanup(struct kunit *test);
+
 void __printf(3, 4) kunit_printk(const char *level,
 				 const struct kunit *test,
 				 const char *fmt, ...);
diff --git a/kunit/test.c b/kunit/test.c
index 0b4396f92086e..84f2e1c040af3 100644
--- a/kunit/test.c
+++ b/kunit/test.c
@@ -66,6 +66,7 @@ static void kunit_vprintk(const struct kunit *test,
 int kunit_init_test(struct kunit *test, const char *name)
 {
 	spin_lock_init(&test->lock);
+	INIT_LIST_HEAD(&test->resources);
 	test->name = name;
 	test->vprintk = kunit_vprintk;
 
@@ -93,6 +94,11 @@ static void kunit_run_case_internal(struct kunit *test,
 	test_case->run_case(test);
 }
 
+static void kunit_case_internal_cleanup(struct kunit *test)
+{
+	kunit_cleanup(test);
+}
+
 /*
  * Performs post validations and cleanup after a test case was run.
  * XXX: Should ONLY BE CALLED AFTER kunit_run_case_internal!
@@ -103,6 +109,8 @@ static void kunit_run_case_cleanup(struct kunit *test,
 {
 	if (module->exit)
 		module->exit(test);
+
+	kunit_case_internal_cleanup(test);
 }
 
 /*
@@ -150,6 +158,93 @@ int kunit_run_tests(struct kunit_module *module)
 	return 0;
 }
 
+struct kunit_resource *kunit_alloc_resource(struct kunit *test,
+					    kunit_resource_init_t init,
+					    kunit_resource_free_t free,
+					    void *context)
+{
+	struct kunit_resource *res;
+	unsigned long flags;
+	int ret;
+
+	res = kzalloc(sizeof(*res), GFP_KERNEL);
+	if (!res)
+		return NULL;
+
+	ret = init(res, context);
+	if (ret)
+		return NULL;
+
+	res->free = free;
+	spin_lock_irqsave(&test->lock, flags);
+	list_add_tail(&res->node, &test->resources);
+	spin_unlock_irqrestore(&test->lock, flags);
+
+	return res;
+}
+
+void kunit_free_resource(struct kunit *test, struct kunit_resource *res)
+{
+	res->free(res);
+	list_del(&res->node);
+	kfree(res);
+}
+
+struct kunit_kmalloc_params {
+	size_t size;
+	gfp_t gfp;
+};
+
+static int kunit_kmalloc_init(struct kunit_resource *res, void *context)
+{
+	struct kunit_kmalloc_params *params = context;
+
+	res->allocation = kmalloc(params->size, params->gfp);
+	if (!res->allocation)
+		return -ENOMEM;
+
+	return 0;
+}
+
+static void kunit_kmalloc_free(struct kunit_resource *res)
+{
+	kfree(res->allocation);
+}
+
+void *kunit_kmalloc(struct kunit *test, size_t size, gfp_t gfp)
+{
+	struct kunit_kmalloc_params params;
+	struct kunit_resource *res;
+
+	params.size = size;
+	params.gfp = gfp;
+
+	res = kunit_alloc_resource(test,
+				   kunit_kmalloc_init,
+				   kunit_kmalloc_free,
+				   &params);
+
+	if (res)
+		return res->allocation;
+	else
+		return NULL;
+}
+
+void kunit_cleanup(struct kunit *test)
+{
+	struct kunit_resource *resource, *resource_safe;
+	unsigned long flags;
+
+	spin_lock_irqsave(&test->lock, flags);
+	list_for_each_entry_safe(resource,
+				 resource_safe,
+				 &test->resources,
+				 node) {
+		kunit_free_resource(test, resource);
+	}
+	spin_unlock_irqrestore(&test->lock, flags);
+}
+
 void kunit_printk(const char *level,
 		  const struct kunit *test,
 		  const char *fmt, ...)
-- 
2.21.0.rc0.258.g878e2cd30e-goog

^ permalink raw reply related	[flat|nested] 316+ messages in thread

* [RFC v4 03/17] kunit: test: add string_stream a std::stream like string builder
@ 2019-02-14 21:37     ` Brendan Higgins
  0 siblings, 0 replies; 316+ messages in thread
From: Brendan Higgins @ 2019-02-14 21:37 UTC (permalink / raw)
  To: keescook, mcgrof, shuah, robh, kieran.bingham, frowand.list
  Cc: gregkh, joel, mpe, joe, brakmo, rostedt, Tim.Bird, khilman,
	julia.lawall, linux-kselftest, kunit-dev, linux-kernel, jdike,
	richard, linux-um, daniel, dri-devel, dan.j.williams,
	linux-nvdimm, knut.omang, devicetree, pmladek, Alexander.Levin,
	amir73il, dan.carpenter, wfg, Brendan Higgins

A number of test features need to do fairly complicated string printing,
where a single preallocated format string with parameters is not enough.

So provide a library for building the string up piece by piece as you go,
similar to C++'s std::string.
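
For example, an error message can be assembled incrementally and only
rendered once at the end. The sketch below is illustrative only (the
report_mismatch() helper and its values are invented), but it sticks to the
interface added here:

	static void report_mismatch(int expected, int actual)
	{
		struct string_stream *stream = new_string_stream();
		char *message;

		if (!stream)
			return;

		stream->add(stream, "Expected %d", expected);
		stream->add(stream, ", but got %d", actual);

		/* get_string() returns a kzalloc'd copy; the caller frees it. */
		message = stream->get_string(stream);
		if (message)
			pr_info("%s\n", message);

		kfree(message);
		string_stream_put(stream);	/* drops the last reference */
	}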

Signed-off-by: Brendan Higgins <brendanhiggins@google.com>
---
Changes Since Last Version
 - None. There was some discussion about generalizing this or replacing it
   with an existing facility, but generalizing did not seem feasible, and
   nothing that exists today is a good replacement.
---
 include/kunit/string-stream.h |  44 ++++++++++
 kunit/Makefile                |   3 +-
 kunit/string-stream.c         | 149 ++++++++++++++++++++++++++++++++++
 3 files changed, 195 insertions(+), 1 deletion(-)
 create mode 100644 include/kunit/string-stream.h
 create mode 100644 kunit/string-stream.c

diff --git a/include/kunit/string-stream.h b/include/kunit/string-stream.h
new file mode 100644
index 0000000000000..280ee67559588
--- /dev/null
+++ b/include/kunit/string-stream.h
@@ -0,0 +1,44 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * C++ stream style string builder used in KUnit for building messages.
+ *
+ * Copyright (C) 2019, Google LLC.
+ * Author: Brendan Higgins <brendanhiggins@google.com>
+ */
+
+#ifndef _KUNIT_STRING_STREAM_H
+#define _KUNIT_STRING_STREAM_H
+
+#include <linux/types.h>
+#include <linux/spinlock.h>
+#include <linux/kref.h>
+#include <stdarg.h>
+
+struct string_stream_fragment {
+	struct list_head node;
+	char *fragment;
+};
+
+struct string_stream {
+	size_t length;
+	struct list_head fragments;
+
+	/* length and fragments are protected by this lock */
+	spinlock_t lock;
+	struct kref refcount;
+	int (*add)(struct string_stream *this, const char *fmt, ...);
+	int (*vadd)(struct string_stream *this, const char *fmt, va_list args);
+	char *(*get_string)(struct string_stream *this);
+	void (*clear)(struct string_stream *this);
+	bool (*is_empty)(struct string_stream *this);
+};
+
+struct string_stream *new_string_stream(void);
+
+void destroy_string_stream(struct string_stream *stream);
+
+void string_stream_get(struct string_stream *stream);
+
+int string_stream_put(struct string_stream *stream);
+
+#endif /* _KUNIT_STRING_STREAM_H */
diff --git a/kunit/Makefile b/kunit/Makefile
index 5efdc4dea2c08..275b565a0e81f 100644
--- a/kunit/Makefile
+++ b/kunit/Makefile
@@ -1 +1,2 @@
-obj-$(CONFIG_KUNIT) +=			test.o
+obj-$(CONFIG_KUNIT) +=			test.o \
+					string-stream.o
diff --git a/kunit/string-stream.c b/kunit/string-stream.c
new file mode 100644
index 0000000000000..e90fb595a5607
--- /dev/null
+++ b/kunit/string-stream.c
@@ -0,0 +1,149 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * C++ stream style string builder used in KUnit for building messages.
+ *
+ * Copyright (C) 2019, Google LLC.
+ * Author: Brendan Higgins <brendanhiggins@google.com>
+ */
+
+#include <linux/list.h>
+#include <linux/slab.h>
+#include <kunit/string-stream.h>
+
+static int string_stream_vadd(struct string_stream *this,
+			       const char *fmt,
+			       va_list args)
+{
+	struct string_stream_fragment *fragment;
+	int len;
+	va_list args_for_counting;
+	unsigned long flags;
+
+	/* Make a copy because `vsnprintf` could change it */
+	va_copy(args_for_counting, args);
+
+	/* Need space for null byte. */
+	len = vsnprintf(NULL, 0, fmt, args_for_counting) + 1;
+
+	va_end(args_for_counting);
+
+	fragment = kmalloc(sizeof(*fragment), GFP_KERNEL);
+	if (!fragment)
+		return -ENOMEM;
+
+	fragment->fragment = kmalloc(len, GFP_KERNEL);
+	if (!fragment->fragment) {
+		kfree(fragment);
+		return -ENOMEM;
+	}
+
+	len = vsnprintf(fragment->fragment, len, fmt, args);
+	spin_lock_irqsave(&this->lock, flags);
+	this->length += len;
+	list_add_tail(&fragment->node, &this->fragments);
+	spin_unlock_irqrestore(&this->lock, flags);
+	return 0;
+}
+
+static int string_stream_add(struct string_stream *this, const char *fmt, ...)
+{
+	va_list args;
+	int result;
+
+	va_start(args, fmt);
+	result = string_stream_vadd(this, fmt, args);
+	va_end(args);
+	return result;
+}
+
+static void string_stream_clear(struct string_stream *this)
+{
+	struct string_stream_fragment *fragment, *fragment_safe;
+	unsigned long flags;
+
+	spin_lock_irqsave(&this->lock, flags);
+	list_for_each_entry_safe(fragment,
+				 fragment_safe,
+				 &this->fragments,
+				 node) {
+		list_del(&fragment->node);
+		kfree(fragment->fragment);
+		kfree(fragment);
+	}
+	this->length = 0;
+	spin_unlock_irqrestore(&this->lock, flags);
+}
+
+static char *string_stream_get_string(struct string_stream *this)
+{
+	struct string_stream_fragment *fragment;
+	size_t buf_len = this->length + 1; /* +1 for null byte. */
+	char *buf;
+	unsigned long flags;
+
+	buf = kzalloc(buf_len, GFP_KERNEL);
+	if (!buf)
+		return NULL;
+
+	spin_lock_irqsave(&this->lock, flags);
+	list_for_each_entry(fragment, &this->fragments, node)
+		strlcat(buf, fragment->fragment, buf_len);
+	spin_unlock_irqrestore(&this->lock, flags);
+
+	return buf;
+}
+
+static bool string_stream_is_empty(struct string_stream *this)
+{
+	bool is_empty;
+	unsigned long flags;
+
+	spin_lock_irqsave(&this->lock, flags);
+	is_empty = list_empty(&this->fragments);
+	spin_unlock_irqrestore(&this->lock, flags);
+
+	return is_empty;
+}
+
+void destroy_string_stream(struct string_stream *stream)
+{
+	stream->clear(stream);
+	kfree(stream);
+}
+
+static void string_stream_destroy(struct kref *kref)
+{
+	struct string_stream *stream = container_of(kref,
+						    struct string_stream,
+						    refcount);
+	destroy_string_stream(stream);
+}
+
+struct string_stream *new_string_stream(void)
+{
+	struct string_stream *stream = kzalloc(sizeof(*stream), GFP_KERNEL);
+
+	if (!stream)
+		return NULL;
+
+	INIT_LIST_HEAD(&stream->fragments);
+	spin_lock_init(&stream->lock);
+	kref_init(&stream->refcount);
+	stream->add = string_stream_add;
+	stream->vadd = string_stream_vadd;
+	stream->get_string = string_stream_get_string;
+	stream->clear = string_stream_clear;
+	stream->is_empty = string_stream_is_empty;
+	return stream;
+}
+
+void string_stream_get(struct string_stream *stream)
+{
+	kref_get(&stream->refcount);
+}
+
+int string_stream_put(struct string_stream *stream)
+{
+	return kref_put(&stream->refcount, &string_stream_destroy);
+}
+
-- 
2.21.0.rc0.258.g878e2cd30e-goog


^ permalink raw reply related	[flat|nested] 316+ messages in thread

* [RFC v4 03/17] kunit: test: add string_stream a std::stream like string builder
@ 2019-02-14 21:37     ` Brendan Higgins
  0 siblings, 0 replies; 316+ messages in thread
From: brendanhiggins @ 2019-02-14 21:37 UTC (permalink / raw)


A number of test features need to do pretty complicated string printing
where it may not be possible to rely on a single preallocated string
with parameters.

So provide a library for constructing the string as you go similar to
C++'s std::string.

Signed-off-by: Brendan Higgins <brendanhiggins at google.com>
---
Changes Since Last Version
 - None. There was some discussion about maybe trying to generalize this
   or replace it with something existing, but it didn't seem feasible to
   generalize this, and there wasn't really anything that is a great
   replacement.
---
 include/kunit/string-stream.h |  44 ++++++++++
 kunit/Makefile                |   3 +-
 kunit/string-stream.c         | 149 ++++++++++++++++++++++++++++++++++
 3 files changed, 195 insertions(+), 1 deletion(-)
 create mode 100644 include/kunit/string-stream.h
 create mode 100644 kunit/string-stream.c

diff --git a/include/kunit/string-stream.h b/include/kunit/string-stream.h
new file mode 100644
index 0000000000000..280ee67559588
--- /dev/null
+++ b/include/kunit/string-stream.h
@@ -0,0 +1,44 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * C++ stream style string builder used in KUnit for building messages.
+ *
+ * Copyright (C) 2019, Google LLC.
+ * Author: Brendan Higgins <brendanhiggins at google.com>
+ */
+
+#ifndef _KUNIT_STRING_STREAM_H
+#define _KUNIT_STRING_STREAM_H
+
+#include <linux/types.h>
+#include <linux/spinlock.h>
+#include <linux/kref.h>
+#include <stdarg.h>
+
+struct string_stream_fragment {
+	struct list_head node;
+	char *fragment;
+};
+
+struct string_stream {
+	size_t length;
+	struct list_head fragments;
+
+	/* length and fragments are protected by this lock */
+	spinlock_t lock;
+	struct kref refcount;
+	int (*add)(struct string_stream *this, const char *fmt, ...);
+	int (*vadd)(struct string_stream *this, const char *fmt, va_list args);
+	char *(*get_string)(struct string_stream *this);
+	void (*clear)(struct string_stream *this);
+	bool (*is_empty)(struct string_stream *this);
+};
+
+struct string_stream *new_string_stream(void);
+
+void destroy_string_stream(struct string_stream *stream);
+
+void string_stream_get(struct string_stream *stream);
+
+int string_stream_put(struct string_stream *stream);
+
+#endif /* _KUNIT_STRING_STREAM_H */
diff --git a/kunit/Makefile b/kunit/Makefile
index 5efdc4dea2c08..275b565a0e81f 100644
--- a/kunit/Makefile
+++ b/kunit/Makefile
@@ -1 +1,2 @@
-obj-$(CONFIG_KUNIT) +=			test.o
+obj-$(CONFIG_KUNIT) +=			test.o \
+					string-stream.o
diff --git a/kunit/string-stream.c b/kunit/string-stream.c
new file mode 100644
index 0000000000000..e90fb595a5607
--- /dev/null
+++ b/kunit/string-stream.c
@@ -0,0 +1,149 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * C++ stream style string builder used in KUnit for building messages.
+ *
+ * Copyright (C) 2019, Google LLC.
+ * Author: Brendan Higgins <brendanhiggins at google.com>
+ */
+
+#include <linux/list.h>
+#include <linux/slab.h>
+#include <kunit/string-stream.h>
+
+static int string_stream_vadd(struct string_stream *this,
+			       const char *fmt,
+			       va_list args)
+{
+	struct string_stream_fragment *fragment;
+	int len;
+	va_list args_for_counting;
+	unsigned long flags;
+
+	/* Make a copy because `vsnprintf` could change it */
+	va_copy(args_for_counting, args);
+
+	/* Need space for null byte. */
+	len = vsnprintf(NULL, 0, fmt, args_for_counting) + 1;
+
+	va_end(args_for_counting);
+
+	fragment = kmalloc(sizeof(*fragment), GFP_KERNEL);
+	if (!fragment)
+		return -ENOMEM;
+
+	fragment->fragment = kmalloc(len, GFP_KERNEL);
+	if (!fragment->fragment) {
+		kfree(fragment);
+		return -ENOMEM;
+	}
+
+	len = vsnprintf(fragment->fragment, len, fmt, args);
+	spin_lock_irqsave(&this->lock, flags);
+	this->length += len;
+	list_add_tail(&fragment->node, &this->fragments);
+	spin_unlock_irqrestore(&this->lock, flags);
+	return 0;
+}
+
+static int string_stream_add(struct string_stream *this, const char *fmt, ...)
+{
+	va_list args;
+	int result;
+
+	va_start(args, fmt);
+	result = string_stream_vadd(this, fmt, args);
+	va_end(args);
+	return result;
+}
+
+static void string_stream_clear(struct string_stream *this)
+{
+	struct string_stream_fragment *fragment, *fragment_safe;
+	unsigned long flags;
+
+	spin_lock_irqsave(&this->lock, flags);
+	list_for_each_entry_safe(fragment,
+				 fragment_safe,
+				 &this->fragments,
+				 node) {
+		list_del(&fragment->node);
+		kfree(fragment->fragment);
+		kfree(fragment);
+	}
+	this->length = 0;
+	spin_unlock_irqrestore(&this->lock, flags);
+}
+
+static char *string_stream_get_string(struct string_stream *this)
+{
+	struct string_stream_fragment *fragment;
+	size_t buf_len = this->length + 1; /* +1 for null byte. */
+	char *buf;
+	unsigned long flags;
+
+	buf = kzalloc(buf_len, GFP_KERNEL);
+	if (!buf)
+		return NULL;
+
+	spin_lock_irqsave(&this->lock, flags);
+	list_for_each_entry(fragment, &this->fragments, node)
+		strlcat(buf, fragment->fragment, buf_len);
+	spin_unlock_irqrestore(&this->lock, flags);
+
+	return buf;
+}
+
+static bool string_stream_is_empty(struct string_stream *this)
+{
+	bool is_empty;
+	unsigned long flags;
+
+	spin_lock_irqsave(&this->lock, flags);
+	is_empty = list_empty(&this->fragments);
+	spin_unlock_irqrestore(&this->lock, flags);
+
+	return is_empty;
+}
+
+void destroy_string_stream(struct string_stream *stream)
+{
+	stream->clear(stream);
+	kfree(stream);
+}
+
+static void string_stream_destroy(struct kref *kref)
+{
+	struct string_stream *stream = container_of(kref,
+						    struct string_stream,
+						    refcount);
+	destroy_string_stream(stream);
+}
+
+struct string_stream *new_string_stream(void)
+{
+	struct string_stream *stream = kzalloc(sizeof(*stream), GFP_KERNEL);
+
+	if (!stream)
+		return NULL;
+
+	INIT_LIST_HEAD(&stream->fragments);
+	spin_lock_init(&stream->lock);
+	kref_init(&stream->refcount);
+	stream->add = string_stream_add;
+	stream->vadd = string_stream_vadd;
+	stream->get_string = string_stream_get_string;
+	stream->clear = string_stream_clear;
+	stream->is_empty = string_stream_is_empty;
+	return stream;
+}
+
+void string_stream_get(struct string_stream *stream)
+{
+	kref_get(&stream->refcount);
+}
+
+int string_stream_put(struct string_stream *stream)
+{
+	return kref_put(&stream->refcount, &string_stream_destroy);
+}
+
-- 
2.21.0.rc0.258.g878e2cd30e-goog

^ permalink raw reply related	[flat|nested] 316+ messages in thread

* [RFC v4 03/17] kunit: test: add string_stream a std::stream like string builder
@ 2019-02-14 21:37     ` Brendan Higgins
  0 siblings, 0 replies; 316+ messages in thread
From: Brendan Higgins @ 2019-02-14 21:37 UTC (permalink / raw)


A number of test features need to do pretty complicated string printing
where it may not be possible to rely on a single preallocated string
with parameters.

So provide a library for constructing the string as you go similar to
C++'s std::string.

Signed-off-by: Brendan Higgins <brendanhiggins at google.com>
---
Changes Since Last Version
 - None. There was some discussion about maybe trying to generalize this
   or replace it with something existing, but it didn't seem feasible to
   generalize this, and there wasn't really anything that is a great
   replacement.
---
 include/kunit/string-stream.h |  44 ++++++++++
 kunit/Makefile                |   3 +-
 kunit/string-stream.c         | 149 ++++++++++++++++++++++++++++++++++
 3 files changed, 195 insertions(+), 1 deletion(-)
 create mode 100644 include/kunit/string-stream.h
 create mode 100644 kunit/string-stream.c

diff --git a/include/kunit/string-stream.h b/include/kunit/string-stream.h
new file mode 100644
index 0000000000000..280ee67559588
--- /dev/null
+++ b/include/kunit/string-stream.h
@@ -0,0 +1,44 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * C++ stream style string builder used in KUnit for building messages.
+ *
+ * Copyright (C) 2019, Google LLC.
+ * Author: Brendan Higgins <brendanhiggins at google.com>
+ */
+
+#ifndef _KUNIT_STRING_STREAM_H
+#define _KUNIT_STRING_STREAM_H
+
+#include <linux/types.h>
+#include <linux/spinlock.h>
+#include <linux/kref.h>
+#include <stdarg.h>
+
+struct string_stream_fragment {
+	struct list_head node;
+	char *fragment;
+};
+
+struct string_stream {
+	size_t length;
+	struct list_head fragments;
+
+	/* length and fragments are protected by this lock */
+	spinlock_t lock;
+	struct kref refcount;
+	int (*add)(struct string_stream *this, const char *fmt, ...);
+	int (*vadd)(struct string_stream *this, const char *fmt, va_list args);
+	char *(*get_string)(struct string_stream *this);
+	void (*clear)(struct string_stream *this);
+	bool (*is_empty)(struct string_stream *this);
+};
+
+struct string_stream *new_string_stream(void);
+
+void destroy_string_stream(struct string_stream *stream);
+
+void string_stream_get(struct string_stream *stream);
+
+int string_stream_put(struct string_stream *stream);
+
+#endif /* _KUNIT_STRING_STREAM_H */
diff --git a/kunit/Makefile b/kunit/Makefile
index 5efdc4dea2c08..275b565a0e81f 100644
--- a/kunit/Makefile
+++ b/kunit/Makefile
@@ -1 +1,2 @@
-obj-$(CONFIG_KUNIT) +=			test.o
+obj-$(CONFIG_KUNIT) +=			test.o \
+					string-stream.o
diff --git a/kunit/string-stream.c b/kunit/string-stream.c
new file mode 100644
index 0000000000000..e90fb595a5607
--- /dev/null
+++ b/kunit/string-stream.c
@@ -0,0 +1,149 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * C++ stream style string builder used in KUnit for building messages.
+ *
+ * Copyright (C) 2019, Google LLC.
+ * Author: Brendan Higgins <brendanhiggins at google.com>
+ */
+
+#include <linux/list.h>
+#include <linux/slab.h>
+#include <kunit/string-stream.h>
+
+static int string_stream_vadd(struct string_stream *this,
+			       const char *fmt,
+			       va_list args)
+{
+	struct string_stream_fragment *fragment;
+	int len;
+	va_list args_for_counting;
+	unsigned long flags;
+
+	/* Make a copy because `vsnprintf` could change it */
+	va_copy(args_for_counting, args);
+
+	/* Need space for null byte. */
+	len = vsnprintf(NULL, 0, fmt, args_for_counting) + 1;
+
+	va_end(args_for_counting);
+
+	fragment = kmalloc(sizeof(*fragment), GFP_KERNEL);
+	if (!fragment)
+		return -ENOMEM;
+
+	fragment->fragment = kmalloc(len, GFP_KERNEL);
+	if (!fragment->fragment) {
+		kfree(fragment);
+		return -ENOMEM;
+	}
+
+	len = vsnprintf(fragment->fragment, len, fmt, args);
+	spin_lock_irqsave(&this->lock, flags);
+	this->length += len;
+	list_add_tail(&fragment->node, &this->fragments);
+	spin_unlock_irqrestore(&this->lock, flags);
+	return 0;
+}
+
+static int string_stream_add(struct string_stream *this, const char *fmt, ...)
+{
+	va_list args;
+	int result;
+
+	va_start(args, fmt);
+	result = string_stream_vadd(this, fmt, args);
+	va_end(args);
+	return result;
+}
+
+static void string_stream_clear(struct string_stream *this)
+{
+	struct string_stream_fragment *fragment, *fragment_safe;
+	unsigned long flags;
+
+	spin_lock_irqsave(&this->lock, flags);
+	list_for_each_entry_safe(fragment,
+				 fragment_safe,
+				 &this->fragments,
+				 node) {
+		list_del(&fragment->node);
+		kfree(fragment->fragment);
+		kfree(fragment);
+	}
+	this->length = 0;
+	spin_unlock_irqrestore(&this->lock, flags);
+}
+
+static char *string_stream_get_string(struct string_stream *this)
+{
+	struct string_stream_fragment *fragment;
+	size_t buf_len = this->length + 1; /* +1 for null byte. */
+	char *buf;
+	unsigned long flags;
+
+	buf = kzalloc(buf_len, GFP_KERNEL);
+	if (!buf)
+		return NULL;
+
+	spin_lock_irqsave(&this->lock, flags);
+	list_for_each_entry(fragment, &this->fragments, node)
+		strlcat(buf, fragment->fragment, buf_len);
+	spin_unlock_irqrestore(&this->lock, flags);
+
+	return buf;
+}
+
+static bool string_stream_is_empty(struct string_stream *this)
+{
+	bool is_empty;
+	unsigned long flags;
+
+	spin_lock_irqsave(&this->lock, flags);
+	is_empty = list_empty(&this->fragments);
+	spin_unlock_irqrestore(&this->lock, flags);
+
+	return is_empty;
+}
+
+void destroy_string_stream(struct string_stream *stream)
+{
+	stream->clear(stream);
+	kfree(stream);
+}
+
+static void string_stream_destroy(struct kref *kref)
+{
+	struct string_stream *stream = container_of(kref,
+						    struct string_stream,
+						    refcount);
+	destroy_string_stream(stream);
+}
+
+struct string_stream *new_string_stream(void)
+{
+	struct string_stream *stream = kzalloc(sizeof(*stream), GFP_KERNEL);
+
+	if (!stream)
+		return NULL;
+
+	INIT_LIST_HEAD(&stream->fragments);
+	spin_lock_init(&stream->lock);
+	kref_init(&stream->refcount);
+	stream->add = string_stream_add;
+	stream->vadd = string_stream_vadd;
+	stream->get_string = string_stream_get_string;
+	stream->clear = string_stream_clear;
+	stream->is_empty = string_stream_is_empty;
+	return stream;
+}
+
+void string_stream_get(struct string_stream *stream)
+{
+	kref_get(&stream->refcount);
+}
+
+int string_stream_put(struct string_stream *stream)
+{
+	return kref_put(&stream->refcount, &string_stream_destroy);
+}
+
-- 
2.21.0.rc0.258.g878e2cd30e-goog

^ permalink raw reply related	[flat|nested] 316+ messages in thread

* [RFC v4 03/17] kunit: test: add string_stream a std::stream like string builder
@ 2019-02-14 21:37     ` Brendan Higgins
  0 siblings, 0 replies; 316+ messages in thread
From: Brendan Higgins @ 2019-02-14 21:37 UTC (permalink / raw)
  To: keescook, mcgrof, shuah, robh, kieran.bingham, frowand.list
  Cc: brakmo, pmladek, amir73il, Brendan Higgins, dri-devel,
	Alexander.Levin, linux-kselftest, linux-nvdimm, richard,
	knut.omang, wfg, joel, jdike, dan.carpenter, devicetree,
	Tim.Bird, linux-um, rostedt, julia.lawall, dan.j.williams,
	kunit-dev, gregkh, linux-kernel, daniel, mpe, joe, khilman

A number of test features need to do pretty complicated string printing
where it may not be possible to rely on a single preallocated string
with parameters.

So provide a library for constructing the string as you go similar to
C++'s std::string.

Signed-off-by: Brendan Higgins <brendanhiggins@google.com>
---
Changes Since Last Version
 - None. There was some discussion about maybe trying to generalize this
   or replace it with something existing, but it didn't seem feasible to
   generalize this, and there wasn't really anything that is a great
   replacement.
---
 include/kunit/string-stream.h |  44 ++++++++++
 kunit/Makefile                |   3 +-
 kunit/string-stream.c         | 149 ++++++++++++++++++++++++++++++++++
 3 files changed, 195 insertions(+), 1 deletion(-)
 create mode 100644 include/kunit/string-stream.h
 create mode 100644 kunit/string-stream.c

diff --git a/include/kunit/string-stream.h b/include/kunit/string-stream.h
new file mode 100644
index 0000000000000..280ee67559588
--- /dev/null
+++ b/include/kunit/string-stream.h
@@ -0,0 +1,44 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * C++ stream style string builder used in KUnit for building messages.
+ *
+ * Copyright (C) 2019, Google LLC.
+ * Author: Brendan Higgins <brendanhiggins@google.com>
+ */
+
+#ifndef _KUNIT_STRING_STREAM_H
+#define _KUNIT_STRING_STREAM_H
+
+#include <linux/types.h>
+#include <linux/spinlock.h>
+#include <linux/kref.h>
+#include <stdarg.h>
+
+struct string_stream_fragment {
+	struct list_head node;
+	char *fragment;
+};
+
+struct string_stream {
+	size_t length;
+	struct list_head fragments;
+
+	/* length and fragments are protected by this lock */
+	spinlock_t lock;
+	struct kref refcount;
+	int (*add)(struct string_stream *this, const char *fmt, ...);
+	int (*vadd)(struct string_stream *this, const char *fmt, va_list args);
+	char *(*get_string)(struct string_stream *this);
+	void (*clear)(struct string_stream *this);
+	bool (*is_empty)(struct string_stream *this);
+};
+
+struct string_stream *new_string_stream(void);
+
+void destroy_string_stream(struct string_stream *stream);
+
+void string_stream_get(struct string_stream *stream);
+
+int string_stream_put(struct string_stream *stream);
+
+#endif /* _KUNIT_STRING_STREAM_H */
diff --git a/kunit/Makefile b/kunit/Makefile
index 5efdc4dea2c08..275b565a0e81f 100644
--- a/kunit/Makefile
+++ b/kunit/Makefile
@@ -1 +1,2 @@
-obj-$(CONFIG_KUNIT) +=			test.o
+obj-$(CONFIG_KUNIT) +=			test.o \
+					string-stream.o
diff --git a/kunit/string-stream.c b/kunit/string-stream.c
new file mode 100644
index 0000000000000..e90fb595a5607
--- /dev/null
+++ b/kunit/string-stream.c
@@ -0,0 +1,149 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * C++ stream style string builder used in KUnit for building messages.
+ *
+ * Copyright (C) 2019, Google LLC.
+ * Author: Brendan Higgins <brendanhiggins@google.com>
+ */
+
+#include <linux/list.h>
+#include <linux/slab.h>
+#include <kunit/string-stream.h>
+
+static int string_stream_vadd(struct string_stream *this,
+			       const char *fmt,
+			       va_list args)
+{
+	struct string_stream_fragment *fragment;
+	int len;
+	va_list args_for_counting;
+	unsigned long flags;
+
+	/* Make a copy because `vsnprintf` could change it */
+	va_copy(args_for_counting, args);
+
+	/* Need space for null byte. */
+	len = vsnprintf(NULL, 0, fmt, args_for_counting) + 1;
+
+	va_end(args_for_counting);
+
+	fragment = kmalloc(sizeof(*fragment), GFP_KERNEL);
+	if (!fragment)
+		return -ENOMEM;
+
+	fragment->fragment = kmalloc(len, GFP_KERNEL);
+	if (!fragment->fragment) {
+		kfree(fragment);
+		return -ENOMEM;
+	}
+
+	len = vsnprintf(fragment->fragment, len, fmt, args);
+	spin_lock_irqsave(&this->lock, flags);
+	this->length += len;
+	list_add_tail(&fragment->node, &this->fragments);
+	spin_unlock_irqrestore(&this->lock, flags);
+	return 0;
+}
+
+static int string_stream_add(struct string_stream *this, const char *fmt, ...)
+{
+	va_list args;
+	int result;
+
+	va_start(args, fmt);
+	result = string_stream_vadd(this, fmt, args);
+	va_end(args);
+	return result;
+}
+
+static void string_stream_clear(struct string_stream *this)
+{
+	struct string_stream_fragment *fragment, *fragment_safe;
+	unsigned long flags;
+
+	spin_lock_irqsave(&this->lock, flags);
+	list_for_each_entry_safe(fragment,
+				 fragment_safe,
+				 &this->fragments,
+				 node) {
+		list_del(&fragment->node);
+		kfree(fragment->fragment);
+		kfree(fragment);
+	}
+	this->length = 0;
+	spin_unlock_irqrestore(&this->lock, flags);
+}
+
+static char *string_stream_get_string(struct string_stream *this)
+{
+	struct string_stream_fragment *fragment;
+	size_t buf_len = this->length + 1; /* +1 for null byte. */
+	char *buf;
+	unsigned long flags;
+
+	buf = kzalloc(buf_len, GFP_KERNEL);
+	if (!buf)
+		return NULL;
+
+	spin_lock_irqsave(&this->lock, flags);
+	list_for_each_entry(fragment, &this->fragments, node)
+		strlcat(buf, fragment->fragment, buf_len);
+	spin_unlock_irqrestore(&this->lock, flags);
+
+	return buf;
+}
+
+static bool string_stream_is_empty(struct string_stream *this)
+{
+	bool is_empty;
+	unsigned long flags;
+
+	spin_lock_irqsave(&this->lock, flags);
+	is_empty = list_empty(&this->fragments);
+	spin_unlock_irqrestore(&this->lock, flags);
+
+	return is_empty;
+}
+
+void destroy_string_stream(struct string_stream *stream)
+{
+	stream->clear(stream);
+	kfree(stream);
+}
+
+static void string_stream_destroy(struct kref *kref)
+{
+	struct string_stream *stream = container_of(kref,
+						    struct string_stream,
+						    refcount);
+	destroy_string_stream(stream);
+}
+
+struct string_stream *new_string_stream(void)
+{
+	struct string_stream *stream = kzalloc(sizeof(*stream), GFP_KERNEL);
+
+	if (!stream)
+		return NULL;
+
+	INIT_LIST_HEAD(&stream->fragments);
+	spin_lock_init(&stream->lock);
+	kref_init(&stream->refcount);
+	stream->add = string_stream_add;
+	stream->vadd = string_stream_vadd;
+	stream->get_string = string_stream_get_string;
+	stream->clear = string_stream_clear;
+	stream->is_empty = string_stream_is_empty;
+	return stream;
+}
+
+void string_stream_get(struct string_stream *stream)
+{
+	kref_get(&stream->refcount);
+}
+
+int string_stream_put(struct string_stream *stream)
+{
+	return kref_put(&stream->refcount, &string_stream_destroy);
+}
+
-- 
2.21.0.rc0.258.g878e2cd30e-goog


_______________________________________________
linux-um mailing list
linux-um@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-um


^ permalink raw reply related	[flat|nested] 316+ messages in thread

* [RFC v4 04/17] kunit: test: add test_stream a std::stream like logger
@ 2019-02-14 21:37     ` Brendan Higgins
  0 siblings, 0 replies; 316+ messages in thread
From: Brendan Higgins @ 2019-02-14 21:37 UTC (permalink / raw)
  To: keescook, mcgrof, shuah, robh, kieran.bingham, frowand.list
  Cc: gregkh, joel, mpe, joe, brakmo, rostedt, Tim.Bird, khilman,
	julia.lawall, linux-kselftest, kunit-dev, linux-kernel, jdike,
	richard, linux-um, daniel, dri-devel, dan.j.williams,
	linux-nvdimm, knut.omang, devicetree, pmladek, Alexander.Levin,
	amir73il, dan.carpenter, wfg, Brendan Higgins

A lot of the expectation and assertion infrastructure prints out fairly
complicated test failure messages, so add a C++ stream style log library
for logging test results.

Signed-off-by: Brendan Higgins <brendanhiggins@google.com>
---
 include/kunit/kunit-stream.h |  50 ++++++++++++
 include/kunit/test.h         |   2 +
 kunit/Makefile               |   3 +-
 kunit/kunit-stream.c         | 153 +++++++++++++++++++++++++++++++++++
 kunit/test.c                 |   8 ++
 5 files changed, 215 insertions(+), 1 deletion(-)
 create mode 100644 include/kunit/kunit-stream.h
 create mode 100644 kunit/kunit-stream.c

diff --git a/include/kunit/kunit-stream.h b/include/kunit/kunit-stream.h
new file mode 100644
index 0000000000000..15ad83a6b7aae
--- /dev/null
+++ b/include/kunit/kunit-stream.h
@@ -0,0 +1,50 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * C++ stream style string formatter and printer used in KUnit for outputting
+ * KUnit messages.
+ *
+ * Copyright (C) 2019, Google LLC.
+ * Author: Brendan Higgins <brendanhiggins@google.com>
+ */
+
+#ifndef _KUNIT_KUNIT_STREAM_H
+#define _KUNIT_KUNIT_STREAM_H
+
+#include <linux/types.h>
+#include <kunit/string-stream.h>
+
+struct kunit;
+
+/**
+ * struct kunit_stream - a std::stream style string builder.
+ * @set_level: sets the level that this string should be printed at.
+ * @add: adds the formatted input to the internal buffer.
+ * @append: adds the contents of other to this.
+ * @commit: prints out the internal buffer to the user.
+ * @clear: clears the internal buffer.
+ *
+ * A std::stream style string builder. Allows messages to be built up and
+ * printed all at once.
+ */
+struct kunit_stream {
+	void (*set_level)(struct kunit_stream *this, const char *level);
+	void (*add)(struct kunit_stream *this, const char *fmt, ...);
+	void (*append)(struct kunit_stream *this, struct kunit_stream *other);
+	void (*commit)(struct kunit_stream *this);
+	void (*clear)(struct kunit_stream *this);
+	/* private: internal use only. */
+	struct kunit *test;
+	spinlock_t lock; /* Guards level. */
+	const char *level;
+	struct string_stream *internal_stream;
+};
+
+/**
+ * kunit_new_stream() - constructs a new &struct kunit_stream.
+ * @test: The test context object.
+ *
+ * Constructs a new test managed &struct kunit_stream.
+ */
+struct kunit_stream *kunit_new_stream(struct kunit *test);
+
+#endif /* _KUNIT_KUNIT_STREAM_H */
diff --git a/include/kunit/test.h b/include/kunit/test.h
index 21abc9e953969..75cd3c3ab1b4b 100644
--- a/include/kunit/test.h
+++ b/include/kunit/test.h
@@ -11,6 +11,7 @@
 
 #include <linux/types.h>
 #include <linux/slab.h>
+#include <kunit/kunit-stream.h>
 
 struct kunit_resource;
 
@@ -171,6 +172,7 @@ struct kunit {
 	void (*vprintk)(const struct kunit *test,
 			const char *level,
 			struct va_format *vaf);
+	void (*fail)(struct kunit *test, struct kunit_stream *stream);
 };
 
 int kunit_init_test(struct kunit *test, const char *name);
diff --git a/kunit/Makefile b/kunit/Makefile
index 275b565a0e81f..6ddc622ee6b1c 100644
--- a/kunit/Makefile
+++ b/kunit/Makefile
@@ -1,2 +1,3 @@
 obj-$(CONFIG_KUNIT) +=			test.o \
-					string-stream.o
+					string-stream.o \
+					kunit-stream.o
diff --git a/kunit/kunit-stream.c b/kunit/kunit-stream.c
new file mode 100644
index 0000000000000..bc88638aef3b1
--- /dev/null
+++ b/kunit/kunit-stream.c
@@ -0,0 +1,153 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * C++ stream style string formatter and printer used in KUnit for outputting
+ * KUnit messages.
+ *
+ * Copyright (C) 2019, Google LLC.
+ * Author: Brendan Higgins <brendanhiggins@google.com>
+ */
+
+#include <kunit/test.h>
+#include <kunit/kunit-stream.h>
+#include <kunit/string-stream.h>
+
+static const char *kunit_stream_get_level(struct kunit_stream *this)
+{
+	unsigned long flags;
+	const char *level;
+
+	spin_lock_irqsave(&this->lock, flags);
+	level = this->level;
+	spin_unlock_irqrestore(&this->lock, flags);
+
+	return level;
+}
+
+static void kunit_stream_set_level(struct kunit_stream *this, const char *level)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&this->lock, flags);
+	this->level = level;
+	spin_unlock_irqrestore(&this->lock, flags);
+}
+
+static void kunit_stream_add(struct kunit_stream *this, const char *fmt, ...)
+{
+	va_list args;
+	struct string_stream *stream = this->internal_stream;
+
+	va_start(args, fmt);
+	if (stream->vadd(stream, fmt, args) < 0)
+		kunit_err(this->test, "Failed to allocate fragment: %s", fmt);
+
+	va_end(args);
+}
+
+static void kunit_stream_append(struct kunit_stream *this,
+				struct kunit_stream *other)
+{
+	struct string_stream *other_stream = other->internal_stream;
+	const char *other_content;
+
+	other_content = other_stream->get_string(other_stream);
+
+	if (!other_content) {
+		kunit_err(this->test,
+			  "Failed to get string from second argument for appending.");
+		return;
+	}
+
+	this->add(this, other_content);
+}
+
+static void kunit_stream_clear(struct kunit_stream *this)
+{
+	this->internal_stream->clear(this->internal_stream);
+}
+
+static void kunit_stream_commit(struct kunit_stream *this)
+{
+	struct string_stream *stream = this->internal_stream;
+	struct string_stream_fragment *fragment;
+	const char *level;
+	char *buf;
+
+	level = kunit_stream_get_level(this);
+	if (!level) {
+		kunit_err(this->test,
+			  "Stream was committed without a specified log level.");
+		level = KERN_ERR;
+		this->set_level(this, level);
+	}
+
+	buf = stream->get_string(stream);
+	if (!buf) {
+		kunit_err(this->test,
+			 "Could not allocate buffer, dumping stream:");
+		list_for_each_entry(fragment, &stream->fragments, node) {
+			kunit_err(this->test, fragment->fragment);
+		}
+		goto cleanup;
+	}
+
+	kunit_printk(level, this->test, buf);
+	kfree(buf);
+
+cleanup:
+	this->clear(this);
+}
+
+static int kunit_stream_init(struct kunit_resource *res, void *context)
+{
+	struct kunit *test = context;
+	struct kunit_stream *stream;
+
+	stream = kzalloc(sizeof(*stream), GFP_KERNEL);
+	if (!stream)
+		return -ENOMEM;
+	res->allocation = stream;
+	stream->test = test;
+	spin_lock_init(&stream->lock);
+	stream->internal_stream = new_string_stream();
+
+	if (!stream->internal_stream)
+		return -ENOMEM;
+
+	stream->set_level = kunit_stream_set_level;
+	stream->add = kunit_stream_add;
+	stream->append = kunit_stream_append;
+	stream->commit = kunit_stream_commit;
+	stream->clear = kunit_stream_clear;
+
+	return 0;
+}
+
+static void kunit_stream_free(struct kunit_resource *res)
+{
+	struct kunit_stream *stream = res->allocation;
+
+	if (!stream->internal_stream->is_empty(stream->internal_stream)) {
+		kunit_err(stream->test,
+			 "End of test case reached with uncommitted stream entries.");
+		stream->commit(stream);
+	}
+
+	destroy_string_stream(stream->internal_stream);
+	kfree(stream);
+}
+
+struct kunit_stream *kunit_new_stream(struct kunit *test)
+{
+	struct kunit_resource *res;
+
+	res = kunit_alloc_resource(test,
+				   kunit_stream_init,
+				   kunit_stream_free,
+				   test);
+
+	if (res)
+		return res->allocation;
+	else
+		return NULL;
+}
diff --git a/kunit/test.c b/kunit/test.c
index 84f2e1c040af3..1a2e5e6b7ffee 100644
--- a/kunit/test.c
+++ b/kunit/test.c
@@ -63,12 +63,20 @@ static void kunit_vprintk(const struct kunit *test,
 			  "kunit %s: %pV", test->name, vaf);
 }
 
+static void kunit_fail(struct kunit *test, struct kunit_stream *stream)
+{
+	kunit_set_success(test, false);
+	stream->set_level(stream, KERN_ERR);
+	stream->commit(stream);
+}
+
 int kunit_init_test(struct kunit *test, const char *name)
 {
 	spin_lock_init(&test->lock);
 	INIT_LIST_HEAD(&test->resources);
 	test->name = name;
 	test->vprintk = kunit_vprintk;
+	test->fail = kunit_fail;
 
 	return 0;
 }
-- 
2.21.0.rc0.258.g878e2cd30e-goog


^ permalink raw reply related	[flat|nested] 316+ messages in thread
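
To make the intended flow concrete, a rough sketch of how a failing check could drive the stream added in this patch (illustrative only, not part of the patch; the function name and values are invented, and the real callers would be the expectation/assertion infrastructure the changelog refers to):

#include <kunit/test.h>
#include <kunit/kunit-stream.h>

static void example_report_failure(struct kunit *test, int expected, int actual)
{
	struct kunit_stream *stream = kunit_new_stream(test);

	if (!stream)
		return;

	/* Build the message in pieces; nothing is printed until commit. */
	stream->add(stream, "Expected value == %d, but\n", expected);
	stream->add(stream, "\t\tvalue == %d", actual);

	/*
	 * test->fail() (kunit_fail in kunit/test.c) marks the test case as
	 * failed, sets the level to KERN_ERR and commits the stream, which
	 * prints and then clears it.
	 */
	test->fail(test, stream);
}

Because kunit_new_stream() allocates the stream as a test-managed resource via kunit_alloc_resource(), the caller never frees it explicitly; kunit_stream_free() runs at the end of the test case and flushes any uncommitted fragments.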

* [RFC v4 04/17] kunit: test: add test_stream a std::stream like logger
@ 2019-02-14 21:37     ` Brendan Higgins
  0 siblings, 0 replies; 316+ messages in thread
From: brendanhiggins @ 2019-02-14 21:37 UTC (permalink / raw)


A lot of the expectation and assertion infrastructure prints out fairly
complicated test failure messages, so add a C++ style log library for
for logging test results.

Signed-off-by: Brendan Higgins <brendanhiggins at google.com>
---
 include/kunit/kunit-stream.h |  50 ++++++++++++
 include/kunit/test.h         |   2 +
 kunit/Makefile               |   3 +-
 kunit/kunit-stream.c         | 153 +++++++++++++++++++++++++++++++++++
 kunit/test.c                 |   8 ++
 5 files changed, 215 insertions(+), 1 deletion(-)
 create mode 100644 include/kunit/kunit-stream.h
 create mode 100644 kunit/kunit-stream.c

diff --git a/include/kunit/kunit-stream.h b/include/kunit/kunit-stream.h
new file mode 100644
index 0000000000000..15ad83a6b7aae
--- /dev/null
+++ b/include/kunit/kunit-stream.h
@@ -0,0 +1,50 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * C++ stream style string formatter and printer used in KUnit for outputting
+ * KUnit messages.
+ *
+ * Copyright (C) 2019, Google LLC.
+ * Author: Brendan Higgins <brendanhiggins at google.com>
+ */
+
+#ifndef _KUNIT_KUNIT_STREAM_H
+#define _KUNIT_KUNIT_STREAM_H
+
+#include <linux/types.h>
+#include <kunit/string-stream.h>
+
+struct kunit;
+
+/**
+ * struct kunit_stream - a std::stream style string builder.
+ * @set_level: sets the level that this string should be printed at.
+ * @add: adds the formatted input to the internal buffer.
+ * @append: adds the contents of other to this.
+ * @commit: prints out the internal buffer to the user.
+ * @clear: clears the internal buffer.
+ *
+ * A std::stream style string builder. Allows messages to be built up and
+ * printed all at once.
+ */
+struct kunit_stream {
+	void (*set_level)(struct kunit_stream *this, const char *level);
+	void (*add)(struct kunit_stream *this, const char *fmt, ...);
+	void (*append)(struct kunit_stream *this, struct kunit_stream *other);
+	void (*commit)(struct kunit_stream *this);
+	void (*clear)(struct kunit_stream *this);
+	/* private: internal use only. */
+	struct kunit *test;
+	spinlock_t lock; /* Guards level. */
+	const char *level;
+	struct string_stream *internal_stream;
+};
+
+/**
+ * kunit_new_stream() - constructs a new &struct kunit_stream.
+ * @test: The test context object.
+ *
+ * Constructs a new test managed &struct kunit_stream.
+ */
+struct kunit_stream *kunit_new_stream(struct kunit *test);
+
+#endif /* _KUNIT_KUNIT_STREAM_H */
diff --git a/include/kunit/test.h b/include/kunit/test.h
index 21abc9e953969..75cd3c3ab1b4b 100644
--- a/include/kunit/test.h
+++ b/include/kunit/test.h
@@ -11,6 +11,7 @@
 
 #include <linux/types.h>
 #include <linux/slab.h>
+#include <kunit/kunit-stream.h>
 
 struct kunit_resource;
 
@@ -171,6 +172,7 @@ struct kunit {
 	void (*vprintk)(const struct kunit *test,
 			const char *level,
 			struct va_format *vaf);
+	void (*fail)(struct kunit *test, struct kunit_stream *stream);
 };
 
 int kunit_init_test(struct kunit *test, const char *name);
diff --git a/kunit/Makefile b/kunit/Makefile
index 275b565a0e81f..6ddc622ee6b1c 100644
--- a/kunit/Makefile
+++ b/kunit/Makefile
@@ -1,2 +1,3 @@
 obj-$(CONFIG_KUNIT) +=			test.o \
-					string-stream.o
+					string-stream.o \
+					kunit-stream.o
diff --git a/kunit/kunit-stream.c b/kunit/kunit-stream.c
new file mode 100644
index 0000000000000..bc88638aef3b1
--- /dev/null
+++ b/kunit/kunit-stream.c
@@ -0,0 +1,153 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * C++ stream style string formatter and printer used in KUnit for outputting
+ * KUnit messages.
+ *
+ * Copyright (C) 2019, Google LLC.
+ * Author: Brendan Higgins <brendanhiggins at google.com>
+ */
+
+#include <kunit/test.h>
+#include <kunit/kunit-stream.h>
+#include <kunit/string-stream.h>
+
+static const char *kunit_stream_get_level(struct kunit_stream *this)
+{
+	unsigned long flags;
+	const char *level;
+
+	spin_lock_irqsave(&this->lock, flags);
+	level = this->level;
+	spin_unlock_irqrestore(&this->lock, flags);
+
+	return level;
+}
+
+static void kunit_stream_set_level(struct kunit_stream *this, const char *level)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&this->lock, flags);
+	this->level = level;
+	spin_unlock_irqrestore(&this->lock, flags);
+}
+
+static void kunit_stream_add(struct kunit_stream *this, const char *fmt, ...)
+{
+	va_list args;
+	struct string_stream *stream = this->internal_stream;
+
+	va_start(args, fmt);
+	if (stream->vadd(stream, fmt, args) < 0)
+		kunit_err(this->test, "Failed to allocate fragment: %s", fmt);
+
+	va_end(args);
+}
+
+static void kunit_stream_append(struct kunit_stream *this,
+				struct kunit_stream *other)
+{
+	struct string_stream *other_stream = other->internal_stream;
+	const char *other_content;
+
+	other_content = other_stream->get_string(other_stream);
+
+	if (!other_content) {
+		kunit_err(this->test,
+			  "Failed to get string from second argument for appending.");
+		return;
+	}
+
+	this->add(this, other_content);
+}
+
+static void kunit_stream_clear(struct kunit_stream *this)
+{
+	this->internal_stream->clear(this->internal_stream);
+}
+
+static void kunit_stream_commit(struct kunit_stream *this)
+{
+	struct string_stream *stream = this->internal_stream;
+	struct string_stream_fragment *fragment;
+	const char *level;
+	char *buf;
+
+	level = kunit_stream_get_level(this);
+	if (!level) {
+		kunit_err(this->test,
+			  "Stream was committed without a specified log level.");
+		level = KERN_ERR;
+		this->set_level(this, level);
+	}
+
+	buf = stream->get_string(stream);
+	if (!buf) {
+		kunit_err(this->test,
+			 "Could not allocate buffer, dumping stream:");
+		list_for_each_entry(fragment, &stream->fragments, node) {
+			kunit_err(this->test, fragment->fragment);
+		}
+		goto cleanup;
+	}
+
+	kunit_printk(level, this->test, buf);
+	kfree(buf);
+
+cleanup:
+	this->clear(this);
+}
+
+static int kunit_stream_init(struct kunit_resource *res, void *context)
+{
+	struct kunit *test = context;
+	struct kunit_stream *stream;
+
+	stream = kzalloc(sizeof(*stream), GFP_KERNEL);
+	if (!stream)
+		return -ENOMEM;
+	res->allocation = stream;
+	stream->test = test;
+	spin_lock_init(&stream->lock);
+	stream->internal_stream = new_string_stream();
+
+	if (!stream->internal_stream)
+		return -ENOMEM;
+
+	stream->set_level = kunit_stream_set_level;
+	stream->add = kunit_stream_add;
+	stream->append = kunit_stream_append;
+	stream->commit = kunit_stream_commit;
+	stream->clear = kunit_stream_clear;
+
+	return 0;
+}
+
+static void kunit_stream_free(struct kunit_resource *res)
+{
+	struct kunit_stream *stream = res->allocation;
+
+	if (!stream->internal_stream->is_empty(stream->internal_stream)) {
+		kunit_err(stream->test,
+			 "End of test case reached with uncommitted stream entries.");
+		stream->commit(stream);
+	}
+
+	destroy_string_stream(stream->internal_stream);
+	kfree(stream);
+}
+
+struct kunit_stream *kunit_new_stream(struct kunit *test)
+{
+	struct kunit_resource *res;
+
+	res = kunit_alloc_resource(test,
+				   kunit_stream_init,
+				   kunit_stream_free,
+				   test);
+
+	if (res)
+		return res->allocation;
+	else
+		return NULL;
+}
diff --git a/kunit/test.c b/kunit/test.c
index 84f2e1c040af3..1a2e5e6b7ffee 100644
--- a/kunit/test.c
+++ b/kunit/test.c
@@ -63,12 +63,20 @@ static void kunit_vprintk(const struct kunit *test,
 			  "kunit %s: %pV", test->name, vaf);
 }
 
+static void kunit_fail(struct kunit *test, struct kunit_stream *stream)
+{
+	kunit_set_success(test, false);
+	stream->set_level(stream, KERN_ERR);
+	stream->commit(stream);
+}
+
 int kunit_init_test(struct kunit *test, const char *name)
 {
 	spin_lock_init(&test->lock);
 	INIT_LIST_HEAD(&test->resources);
 	test->name = name;
 	test->vprintk = kunit_vprintk;
+	test->fail = kunit_fail;
 
 	return 0;
 }
-- 
2.21.0.rc0.258.g878e2cd30e-goog

^ permalink raw reply related	[flat|nested] 316+ messages in thread

* [RFC v4 04/17] kunit: test: add test_stream a std::stream like logger
@ 2019-02-14 21:37     ` Brendan Higgins
  0 siblings, 0 replies; 316+ messages in thread
From: Brendan Higgins @ 2019-02-14 21:37 UTC (permalink / raw)


A lot of the expectation and assertion infrastructure prints out fairly
complicated test failure messages, so add a C++ style log library for
for logging test results.

Signed-off-by: Brendan Higgins <brendanhiggins at google.com>
---
 include/kunit/kunit-stream.h |  50 ++++++++++++
 include/kunit/test.h         |   2 +
 kunit/Makefile               |   3 +-
 kunit/kunit-stream.c         | 153 +++++++++++++++++++++++++++++++++++
 kunit/test.c                 |   8 ++
 5 files changed, 215 insertions(+), 1 deletion(-)
 create mode 100644 include/kunit/kunit-stream.h
 create mode 100644 kunit/kunit-stream.c

diff --git a/include/kunit/kunit-stream.h b/include/kunit/kunit-stream.h
new file mode 100644
index 0000000000000..15ad83a6b7aae
--- /dev/null
+++ b/include/kunit/kunit-stream.h
@@ -0,0 +1,50 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * C++ stream style string formatter and printer used in KUnit for outputting
+ * KUnit messages.
+ *
+ * Copyright (C) 2019, Google LLC.
+ * Author: Brendan Higgins <brendanhiggins at google.com>
+ */
+
+#ifndef _KUNIT_KUNIT_STREAM_H
+#define _KUNIT_KUNIT_STREAM_H
+
+#include <linux/types.h>
+#include <kunit/string-stream.h>
+
+struct kunit;
+
+/**
+ * struct kunit_stream - a std::stream style string builder.
+ * @set_level: sets the level that this string should be printed at.
+ * @add: adds the formatted input to the internal buffer.
+ * @append: adds the contents of other to this.
+ * @commit: prints out the internal buffer to the user.
+ * @clear: clears the internal buffer.
+ *
+ * A std::stream style string builder. Allows messages to be built up and
+ * printed all at once.
+ */
+struct kunit_stream {
+	void (*set_level)(struct kunit_stream *this, const char *level);
+	void (*add)(struct kunit_stream *this, const char *fmt, ...);
+	void (*append)(struct kunit_stream *this, struct kunit_stream *other);
+	void (*commit)(struct kunit_stream *this);
+	void (*clear)(struct kunit_stream *this);
+	/* private: internal use only. */
+	struct kunit *test;
+	spinlock_t lock; /* Guards level. */
+	const char *level;
+	struct string_stream *internal_stream;
+};
+
+/**
+ * kunit_new_stream() - constructs a new &struct kunit_stream.
+ * @test: The test context object.
+ *
+ * Constructs a new test managed &struct kunit_stream.
+ */
+struct kunit_stream *kunit_new_stream(struct kunit *test);
+
+#endif /* _KUNIT_KUNIT_STREAM_H */
diff --git a/include/kunit/test.h b/include/kunit/test.h
index 21abc9e953969..75cd3c3ab1b4b 100644
--- a/include/kunit/test.h
+++ b/include/kunit/test.h
@@ -11,6 +11,7 @@
 
 #include <linux/types.h>
 #include <linux/slab.h>
+#include <kunit/kunit-stream.h>
 
 struct kunit_resource;
 
@@ -171,6 +172,7 @@ struct kunit {
 	void (*vprintk)(const struct kunit *test,
 			const char *level,
 			struct va_format *vaf);
+	void (*fail)(struct kunit *test, struct kunit_stream *stream);
 };
 
 int kunit_init_test(struct kunit *test, const char *name);
diff --git a/kunit/Makefile b/kunit/Makefile
index 275b565a0e81f..6ddc622ee6b1c 100644
--- a/kunit/Makefile
+++ b/kunit/Makefile
@@ -1,2 +1,3 @@
 obj-$(CONFIG_KUNIT) +=			test.o \
-					string-stream.o
+					string-stream.o \
+					kunit-stream.o
diff --git a/kunit/kunit-stream.c b/kunit/kunit-stream.c
new file mode 100644
index 0000000000000..bc88638aef3b1
--- /dev/null
+++ b/kunit/kunit-stream.c
@@ -0,0 +1,153 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * C++ stream style string formatter and printer used in KUnit for outputting
+ * KUnit messages.
+ *
+ * Copyright (C) 2019, Google LLC.
+ * Author: Brendan Higgins <brendanhiggins at google.com>
+ */
+
+#include <kunit/test.h>
+#include <kunit/kunit-stream.h>
+#include <kunit/string-stream.h>
+
+static const char *kunit_stream_get_level(struct kunit_stream *this)
+{
+	unsigned long flags;
+	const char *level;
+
+	spin_lock_irqsave(&this->lock, flags);
+	level = this->level;
+	spin_unlock_irqrestore(&this->lock, flags);
+
+	return level;
+}
+
+static void kunit_stream_set_level(struct kunit_stream *this, const char *level)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&this->lock, flags);
+	this->level = level;
+	spin_unlock_irqrestore(&this->lock, flags);
+}
+
+static void kunit_stream_add(struct kunit_stream *this, const char *fmt, ...)
+{
+	va_list args;
+	struct string_stream *stream = this->internal_stream;
+
+	va_start(args, fmt);
+	if (stream->vadd(stream, fmt, args) < 0)
+		kunit_err(this->test, "Failed to allocate fragment: %s", fmt);
+
+	va_end(args);
+}
+
+static void kunit_stream_append(struct kunit_stream *this,
+				struct kunit_stream *other)
+{
+	struct string_stream *other_stream = other->internal_stream;
+	const char *other_content;
+
+	other_content = other_stream->get_string(other_stream);
+
+	if (!other_content) {
+		kunit_err(this->test,
+			  "Failed to get string from second argument for appending.");
+		return;
+	}
+
+	this->add(this, other_content);
+}
+
+static void kunit_stream_clear(struct kunit_stream *this)
+{
+	this->internal_stream->clear(this->internal_stream);
+}
+
+static void kunit_stream_commit(struct kunit_stream *this)
+{
+	struct string_stream *stream = this->internal_stream;
+	struct string_stream_fragment *fragment;
+	const char *level;
+	char *buf;
+
+	level = kunit_stream_get_level(this);
+	if (!level) {
+		kunit_err(this->test,
+			  "Stream was committed without a specified log level.");
+		level = KERN_ERR;
+		this->set_level(this, level);
+	}
+
+	buf = stream->get_string(stream);
+	if (!buf) {
+		kunit_err(this->test,
+			 "Could not allocate buffer, dumping stream:");
+		list_for_each_entry(fragment, &stream->fragments, node) {
+			kunit_err(this->test, fragment->fragment);
+		}
+		goto cleanup;
+	}
+
+	kunit_printk(level, this->test, buf);
+	kfree(buf);
+
+cleanup:
+	this->clear(this);
+}
+
+static int kunit_stream_init(struct kunit_resource *res, void *context)
+{
+	struct kunit *test = context;
+	struct kunit_stream *stream;
+
+	stream = kzalloc(sizeof(*stream), GFP_KERNEL);
+	if (!stream)
+		return -ENOMEM;
+	res->allocation = stream;
+	stream->test = test;
+	spin_lock_init(&stream->lock);
+	stream->internal_stream = new_string_stream();
+
+	if (!stream->internal_stream)
+		return -ENOMEM;
+
+	stream->set_level = kunit_stream_set_level;
+	stream->add = kunit_stream_add;
+	stream->append = kunit_stream_append;
+	stream->commit = kunit_stream_commit;
+	stream->clear = kunit_stream_clear;
+
+	return 0;
+}
+
+static void kunit_stream_free(struct kunit_resource *res)
+{
+	struct kunit_stream *stream = res->allocation;
+
+	if (!stream->internal_stream->is_empty(stream->internal_stream)) {
+		kunit_err(stream->test,
+			 "End of test case reached with uncommitted stream entries.");
+		stream->commit(stream);
+	}
+
+	destroy_string_stream(stream->internal_stream);
+	kfree(stream);
+}
+
+struct kunit_stream *kunit_new_stream(struct kunit *test)
+{
+	struct kunit_resource *res;
+
+	res = kunit_alloc_resource(test,
+				   kunit_stream_init,
+				   kunit_stream_free,
+				   test);
+
+	if (res)
+		return res->allocation;
+	else
+		return NULL;
+}
diff --git a/kunit/test.c b/kunit/test.c
index 84f2e1c040af3..1a2e5e6b7ffee 100644
--- a/kunit/test.c
+++ b/kunit/test.c
@@ -63,12 +63,20 @@ static void kunit_vprintk(const struct kunit *test,
 			  "kunit %s: %pV", test->name, vaf);
 }
 
+static void kunit_fail(struct kunit *test, struct kunit_stream *stream)
+{
+	kunit_set_success(test, false);
+	stream->set_level(stream, KERN_ERR);
+	stream->commit(stream);
+}
+
 int kunit_init_test(struct kunit *test, const char *name)
 {
 	spin_lock_init(&test->lock);
 	INIT_LIST_HEAD(&test->resources);
 	test->name = name;
 	test->vprintk = kunit_vprintk;
+	test->fail = kunit_fail;
 
 	return 0;
 }
-- 
2.21.0.rc0.258.g878e2cd30e-goog
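
For illustration, here is a minimal sketch of how a caller might use the
stream API added by this patch (test is the struct kunit context object
passed to a test case; value and expected are placeholder variables used
only for this example):

    struct kunit_stream *stream = kunit_new_stream(test);

    /* Build the message up piece by piece. */
    stream->set_level(stream, KERN_INFO);
    stream->add(stream, "checking result:\n");
    stream->add(stream, "\tvalue == %d, expected == %d", value, expected);

    /* Print everything accumulated so far as one log message. */
    stream->commit(stream);

Since the stream is a test managed resource, it is cleaned up automatically
at the end of the test case.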

^ permalink raw reply related	[flat|nested] 316+ messages in thread

* [RFC v4 05/17] kunit: test: add the concept of expectations
@ 2019-02-14 21:37     ` Brendan Higgins
  0 siblings, 0 replies; 316+ messages in thread
From: Brendan Higgins @ 2019-02-14 21:37 UTC (permalink / raw)
  To: keescook, mcgrof, shuah, robh, kieran.bingham, frowand.list
  Cc: gregkh, joel, mpe, joe, brakmo, rostedt, Tim.Bird, khilman,
	julia.lawall, linux-kselftest, kunit-dev, linux-kernel, jdike,
	richard, linux-um, daniel, dri-devel, dan.j.williams,
	linux-nvdimm, knut.omang, devicetree, pmladek, Alexander.Levin,
	amir73il, dan.carpenter, wfg, Brendan Higgins

Add support for expectations, which allow properties to be specified and
then verified in tests.
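
For example, a test case exercising these macros might look roughly like the
following sketch (example_add() and the test case name are made up for
illustration; the struct kunit test case signature comes from the earlier
patches in this series):

    static void example_add_test(struct kunit *test)
    {
            /*
             * A failed expectation marks the test case as failed, but does
             * not stop it, so the remaining expectations still run.
             */
            KUNIT_EXPECT_EQ(test, 2, example_add(1, 1));
            KUNIT_EXPECT_NE(test, 0, example_add(1, 1));
            KUNIT_EXPECT_TRUE(test, example_add(1, 1) > 0);
    }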

Signed-off-by: Brendan Higgins <brendanhiggins@google.com>
---
 include/kunit/test.h | 415 +++++++++++++++++++++++++++++++++++++++++++
 kunit/test.c         |  34 ++++
 2 files changed, 449 insertions(+)

diff --git a/include/kunit/test.h b/include/kunit/test.h
index 75cd3c3ab1b4b..a36ad1a502c66 100644
--- a/include/kunit/test.h
+++ b/include/kunit/test.h
@@ -273,4 +273,419 @@ void __printf(3, 4) kunit_printk(const char *level,
 #define kunit_err(test, fmt, ...) \
 		kunit_printk(KERN_ERR, test, fmt, ##__VA_ARGS__)
 
+static inline struct kunit_stream *kunit_expect_start(struct kunit *test,
+						      const char *file,
+						      const char *line)
+{
+	struct kunit_stream *stream = kunit_new_stream(test);
+
+	stream->add(stream, "EXPECTATION FAILED at %s:%s\n\t", file, line);
+
+	return stream;
+}
+
+static inline void kunit_expect_end(struct kunit *test,
+				    bool success,
+				    struct kunit_stream *stream)
+{
+	if (!success)
+		test->fail(test, stream);
+	else
+		stream->clear(stream);
+}
+
+#define KUNIT_EXPECT_START(test) \
+		kunit_expect_start(test, __FILE__, __stringify(__LINE__))
+
+#define KUNIT_EXPECT_END(test, success, stream) \
+		kunit_expect_end(test, success, stream)
+
+#define KUNIT_EXPECT_MSG(test, success, message, fmt, ...) do {		       \
+	struct kunit_stream *__stream = KUNIT_EXPECT_START(test);	       \
+									       \
+	__stream->add(__stream, message);				       \
+	__stream->add(__stream, fmt, ##__VA_ARGS__);			       \
+	KUNIT_EXPECT_END(test, success, __stream);			       \
+} while (0)
+
+#define KUNIT_EXPECT(test, success, message) do {			       \
+	struct kunit_stream *__stream = KUNIT_EXPECT_START(test);	       \
+									       \
+	__stream->add(__stream, message);				       \
+	KUNIT_EXPECT_END(test, success, __stream);			       \
+} while (0)
+
+/**
+ * KUNIT_SUCCEED() - A no-op expectation. Only exists for code clarity.
+ * @test: The test context object.
+ *
+ * The opposite of KUNIT_FAIL(), it is an expectation that cannot fail. In other
+ * words, it does nothing and only exists for code clarity. See
+ * KUNIT_EXPECT_TRUE() for more information.
+ */
+#define KUNIT_SUCCEED(test) do {} while (0)
+
+/**
+ * KUNIT_FAIL() - Always causes a test to fail when evaluated.
+ * @test: The test context object.
+ * @fmt: an informational message to be printed when the assertion is made.
+ * @...: string format arguments.
+ *
+ * The opposite of KUNIT_SUCCEED(), it is an expectation that always fails. In
+ * other words, it always results in a failed expectation, and consequently
+ * always causes the test case to fail when evaluated. See KUNIT_EXPECT_TRUE()
+ * for more information.
+ */
+#define KUNIT_FAIL(test, fmt, ...) \
+		KUNIT_EXPECT_MSG(test, false, "", fmt, ##__VA_ARGS__)
+
+/**
+ * KUNIT_EXPECT_TRUE() - Causes a test failure when the expression is not true.
+ * @test: The test context object.
+ * @condition: an arbitrary boolean expression. The test fails when this does
+ * not evaluate to true.
+ *
+ * This and expectations of the form `KUNIT_EXPECT_*` will cause the test case
+ * to fail when the specified condition is not met; however, it will not prevent
+ * the test case from continuing to run; this is otherwise known as an
+ * *expectation failure*.
+ */
+#define KUNIT_EXPECT_TRUE(test, condition)				       \
+		KUNIT_EXPECT(test, (condition),				       \
+		       "Expected " #condition " is true, but is false.")
+
+#define KUNIT_EXPECT_TRUE_MSG(test, condition, fmt, ...)		       \
+		KUNIT_EXPECT_MSG(test, (condition),			       \
+				"Expected " #condition " is true, but is false.\n",\
+				fmt, ##__VA_ARGS__)
+
+/**
+ * KUNIT_EXPECT_FALSE() - Causes a test failure when the expression is not false.
+ * @test: The test context object.
+ * @condition: an arbitrary boolean expression. The test fails when this does
+ * not evaluate to false.
+ *
+ * Sets an expectation that @condition evaluates to false. See
+ * KUNIT_EXPECT_TRUE() for more information.
+ */
+#define KUNIT_EXPECT_FALSE(test, condition)				       \
+		KUNIT_EXPECT(test, !(condition),			       \
+		       "Expected " #condition " is false, but is true.")
+
+#define KUNIT_EXPECT_FALSE_MSG(test, condition, fmt, ...)		       \
+		KUNIT_EXPECT_MSG(test, !(condition),			       \
+				"Expected " #condition " is false, but is true.\n",\
+				fmt, ##__VA_ARGS__)
+
+void kunit_expect_binary_msg(struct kunit *test,
+			    long long left, const char *left_name,
+			    long long right, const char *right_name,
+			    bool compare_result,
+			    const char *compare_name,
+			    const char *file,
+			    const char *line,
+			    const char *fmt, ...);
+
+static inline void kunit_expect_binary(struct kunit *test,
+				       long long left, const char *left_name,
+				       long long right, const char *right_name,
+				       bool compare_result,
+				       const char *compare_name,
+				       const char *file,
+				       const char *line)
+{
+	struct kunit_stream *stream = kunit_expect_start(test, file, line);
+
+	stream->add(stream,
+		    "Expected %s %s %s, but\n",
+		    left_name, compare_name, right_name);
+	stream->add(stream, "\t\t%s == %lld\n", left_name, left);
+	stream->add(stream, "\t\t%s == %lld", right_name, right);
+
+	kunit_expect_end(test, compare_result, stream);
+}
+
+/*
+ * A factory macro for defining the expectations for the basic comparisons
+ * defined for the built in types.
+ *
+ * Unfortunately, there is no common type that all types can be promoted to for
+ * which all the binary operators behave the same way as for the actual types
+ * (for example, there is no type that long long and unsigned long long can
+ * both be cast to where the comparison result is preserved for all values). So
+ * the best we can do is do the comparison in the original types and then coerce
+ * everything to long long for printing; this way, the comparison behaves
+ * correctly and the printed out value usually makes sense without
+ * interpretation, but can always be interpreted to figure out the actual
+ * value.
+ */
+#define KUNIT_EXPECT_BINARY(test, left, condition, right) do {		       \
+	typeof(left) __left = (left);					       \
+	typeof(right) __right = (right);				       \
+	kunit_expect_binary(test,					       \
+			   (long long) __left, #left,			       \
+			   (long long) __right, #right,			       \
+			   __left condition __right, #condition,	       \
+			   __FILE__, __stringify(__LINE__));		       \
+} while (0)
+
+#define KUNIT_EXPECT_BINARY_MSG(test, left, condition, right, fmt, ...) do {   \
+	typeof(left) __left = (left);					       \
+	typeof(right) __right = (right);				       \
+	kunit_expect_binary_msg(test,					       \
+			   (long long) __left, #left,			       \
+			   (long long) __right, #right,			       \
+			   __left condition __right, #condition,	       \
+			   __FILE__, __stringify(__LINE__),		       \
+			   fmt, ##__VA_ARGS__);				       \
+} while (0)
+
+/**
+ * KUNIT_EXPECT_EQ() - Sets an expectation that @left and @right are equal.
+ * @test: The test context object.
+ * @left: an arbitrary expression that evaluates to a primitive C type.
+ * @right: an arbitrary expression that evaluates to a primitive C type.
+ *
+ * Sets an expectation that the values that @left and @right evaluate to are
+ * equal. This is semantically equivalent to
+ * KUNIT_EXPECT_TRUE(@test, (@left) == (@right)). See KUNIT_EXPECT_TRUE() for
+ * more information.
+ */
+#define KUNIT_EXPECT_EQ(test, left, right) \
+		KUNIT_EXPECT_BINARY(test, left, ==, right)
+
+#define KUNIT_EXPECT_EQ_MSG(test, left, right, fmt, ...)		       \
+		KUNIT_EXPECT_BINARY_MSG(test,				       \
+					left,				       \
+					==,				       \
+					right,				       \
+					fmt,				       \
+					##__VA_ARGS__)
+
+/**
+ * KUNIT_EXPECT_NE() - An expectation that @left and @right are not equal.
+ * @test: The test context object.
+ * @left: an arbitrary expression that evaluates to a primitive C type.
+ * @right: an arbitrary expression that evaluates to a primitive C type.
+ *
+ * Sets an expectation that the values that @left and @right evaluate to are not
+ * equal. This is semantically equivalent to
+ * KUNIT_EXPECT_TRUE(@test, (@left) != (@right)). See KUNIT_EXPECT_TRUE() for
+ * more information.
+ */
+#define KUNIT_EXPECT_NE(test, left, right) \
+		KUNIT_EXPECT_BINARY(test, left, !=, right)
+
+#define KUNIT_EXPECT_NE_MSG(test, left, right, fmt, ...)		       \
+		KUNIT_EXPECT_BINARY_MSG(test,				       \
+					left,				       \
+					!=,				       \
+					right,				       \
+					fmt,				       \
+					##__VA_ARGS__)
+
+/**
+ * KUNIT_EXPECT_LT() - An expectation that @left is less than @right.
+ * @test: The test context object.
+ * @left: an arbitrary expression that evaluates to a primitive C type.
+ * @right: an arbitrary expression that evaluates to a primitive C type.
+ *
+ * Sets an expectation that the value that @left evaluates to is less than the
+ * value that @right evaluates to. This is semantically equivalent to
+ * KUNIT_EXPECT_TRUE(@test, (@left) < (@right)). See KUNIT_EXPECT_TRUE() for
+ * more information.
+ */
+#define KUNIT_EXPECT_LT(test, left, right) \
+		KUNIT_EXPECT_BINARY(test, left, <, right)
+
+#define KUNIT_EXPECT_LT_MSG(test, left, right, fmt, ...)		       \
+		KUNIT_EXPECT_BINARY_MSG(test,				       \
+					left,				       \
+					<,				       \
+					right,				       \
+					fmt,				       \
+					##__VA_ARGS__)
+
+/**
+ * KUNIT_EXPECT_LE() - Expects that @left is less than or equal to @right.
+ * @test: The test context object.
+ * @left: an arbitrary expression that evaluates to a primitive C type.
+ * @right: an arbitrary expression that evaluates to a primitive C type.
+ *
+ * Sets an expectation that the value that @left evaluates to is less than or
+ * equal to the value that @right evaluates to. Semantically this is equivalent
+ * to KUNIT_EXPECT_TRUE(@test, (@left) <= (@right)). See KUNIT_EXPECT_TRUE() for
+ * more information.
+ */
+#define KUNIT_EXPECT_LE(test, left, right) \
+		KUNIT_EXPECT_BINARY(test, left, <=, right)
+
+#define KUNIT_EXPECT_LE_MSG(test, left, right, fmt, ...)		       \
+		KUNIT_EXPECT_BINARY_MSG(test,				       \
+					left,				       \
+					<=,				       \
+					right,				       \
+					fmt,				       \
+					##__VA_ARGS__)
+
+/**
+ * KUNIT_EXPECT_GT() - An expectation that @left is greater than @right.
+ * @test: The test context object.
+ * @left: an arbitrary expression that evaluates to a primitive C type.
+ * @right: an arbitrary expression that evaluates to a primitive C type.
+ *
+ * Sets an expectation that the value that @left evaluates to is greater than
+ * the value that @right evaluates to. This is semantically equivalent to
+ * KUNIT_EXPECT_TRUE(@test, (@left) > (@right)). See KUNIT_EXPECT_TRUE() for
+ * more information.
+ */
+#define KUNIT_EXPECT_GT(test, left, right) \
+		KUNIT_EXPECT_BINARY(test, left, >, right)
+
+#define KUNIT_EXPECT_GT_MSG(test, left, right, fmt, ...)		       \
+		KUNIT_EXPECT_BINARY_MSG(test,				       \
+					left,				       \
+					>,				       \
+					right,				       \
+					fmt,				       \
+					##__VA_ARGS__)
+
+/**
+ * KUNIT_EXPECT_GE() - Expects that @left is greater than or equal to @right.
+ * @test: The test context object.
+ * @left: an arbitrary expression that evaluates to a primitive C type.
+ * @right: an arbitrary expression that evaluates to a primitive C type.
+ *
+ * Sets an expectation that the value that @left evaluates to is greater than or
+ * equal to the value that @right evaluates to. This is semantically equivalent to
+ * KUNIT_EXPECT_TRUE(@test, (@left) >= (@right)). See KUNIT_EXPECT_TRUE() for
+ * more information.
+ */
+#define KUNIT_EXPECT_GE(test, left, right) \
+		KUNIT_EXPECT_BINARY(test, left, >=, right)
+
+#define KUNIT_EXPECT_GE_MSG(test, left, right, fmt, ...)		       \
+		KUNIT_EXPECT_BINARY_MSG(test,				       \
+					left,				       \
+					>=,				       \
+					right,				       \
+					fmt,				       \
+					##__VA_ARGS__)
+
+/**
+ * KUNIT_EXPECT_STREQ() - Expects that strings @left and @right are equal.
+ * @test: The test context object.
+ * @left: an arbitrary expression that evaluates to a null terminated string.
+ * @right: an arbitrary expression that evaluates to a null terminated string.
+ *
+ * Sets an expectation that the values that @left and @right evaluate to are
+ * equal. This is semantically equivalent to
+ * KUNIT_EXPECT_TRUE(@test, !strcmp((@left), (@right))). See KUNIT_EXPECT_TRUE()
+ * for more information.
+ */
+#define KUNIT_EXPECT_STREQ(test, left, right) do {			       \
+	struct kunit_stream *__stream = KUNIT_EXPECT_START(test);	       \
+	typeof(left) __left = (left);					       \
+	typeof(right) __right = (right);				       \
+									       \
+	__stream->add(__stream, "Expected " #left " == " #right ", but\n");    \
+	__stream->add(__stream, "\t\t%s == %s\n", #left, __left);	       \
+	__stream->add(__stream, "\t\t%s == %s\n", #right, __right);	       \
+									       \
+	KUNIT_EXPECT_END(test, !strcmp(left, right), __stream);		       \
+} while (0)
+
+#define KUNIT_EXPECT_STREQ_MSG(test, left, right, fmt, ...) do {	       \
+	struct kunit_stream *__stream = KUNIT_EXPECT_START(test);	       \
+	typeof(left) __left = (left);					       \
+	typeof(right) __right = (right);				       \
+									       \
+	__stream->add(__stream, "Expected " #left " == " #right ", but\n");    \
+	__stream->add(__stream, "\t\t%s == %s\n", #left, __left);	       \
+	__stream->add(__stream, "\t\t%s == %s\n", #right, __right);	       \
+	__stream->add(__stream, fmt, ##__VA_ARGS__);			       \
+									       \
+	KUNIT_EXPECT_END(test, !strcmp(left, right), __stream);		       \
+} while (0)
+
+/**
+ * KUNIT_EXPECT_STRNEQ() - Expects that strings @left and @right are not equal.
+ * @test: The test context object.
+ * @left: an arbitrary expression that evaluates to a null terminated string.
+ * @right: an arbitrary expression that evaluates to a null terminated string.
+ *
+ * Sets an expectation that the values that @left and @right evaluate to are
+ * not equal. This is semantically equivalent to
+ * KUNIT_EXPECT_TRUE(@test, strcmp((@left), (@right))). See KUNIT_EXPECT_TRUE()
+ * for more information.
+ */
+#define KUNIT_EXPECT_STRNEQ(test, left, right) do {			       \
+	struct kunit_stream *__stream = KUNIT_EXPECT_START(test);	       \
+	typeof(left) __left = (left);					       \
+	typeof(right) __right = (right);				       \
+									       \
+	__stream->add(__stream, "Expected " #left " == " #right ", but\n");    \
+	__stream->add(__stream, "\t\t%s == %s\n", #left, __left);	       \
+	__stream->add(__stream, "\t\t%s == %s\n", #right, __right);	       \
+									       \
+	KUNIT_EXPECT_END(test, strcmp(left, right), __stream);		       \
+} while (0)
+
+#define KUNIT_EXPECT_STRNEQ_MSG(test, left, right, fmt, ...) do {	       \
+	struct kunit_stream *__stream = KUNIT_EXPECT_START(test);	       \
+	typeof(left) __left = (left);					       \
+	typeof(right) __right = (right);				       \
+									       \
+	__stream->add(__stream, "Expected " #left " == " #right ", but\n");    \
+	__stream->add(__stream, "\t\t%s == %s\n", #left, __left);	       \
+	__stream->add(__stream, "\t\t%s == %s\n", #right, __right);	       \
+	__stream->add(__stream, fmt, ##__VA_ARGS__);			       \
+									       \
+	KUNIT_EXPECT_END(test, strcmp(left, right), __stream);		       \
+} while (0)
+
+/**
+ * KUNIT_EXPECT_NOT_ERR_OR_NULL() - Expects that @ptr is not null and not err.
+ * @test: The test context object.
+ * @ptr: an arbitrary pointer.
+ *
+ * Sets an expectation that the value that @ptr evaluates to is not null and not
+ * an errno stored in a pointer. This is semantically equivalent to
+ * KUNIT_EXPECT_TRUE(@test, !IS_ERR_OR_NULL(@ptr)). See KUNIT_EXPECT_TRUE() for
+ * more information.
+ */
+#define KUNIT_EXPECT_NOT_ERR_OR_NULL(test, ptr) do {			       \
+	struct kunit_stream *__stream = KUNIT_EXPECT_START(test);	       \
+	typeof(ptr) __ptr = (ptr);					       \
+									       \
+	if (!__ptr)							       \
+		__stream->add(__stream,					       \
+			      "Expected " #ptr " is not null, but is.");       \
+	if (IS_ERR(__ptr))						       \
+		__stream->add(__stream,					       \
+			      "Expected " #ptr " is not error, but is: %ld",   \
+			      PTR_ERR(__ptr));				       \
+									       \
+	KUNIT_EXPECT_END(test, !IS_ERR_OR_NULL(__ptr), __stream);	       \
+} while (0)
+
+#define KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(test, ptr, fmt, ...) do {	       \
+	struct kunit_stream *__stream = KUNIT_EXPECT_START(test);	       \
+	typeof(ptr) __ptr = (ptr);					       \
+									       \
+	if (!__ptr) {							       \
+		__stream->add(__stream,					       \
+			      "Expected " #ptr " is not null, but is.");       \
+		__stream->add(__stream, fmt, ##__VA_ARGS__);		       \
+	}								       \
+	if (IS_ERR(__ptr)) {						       \
+		__stream->add(__stream,					       \
+			      "Expected " #ptr " is not error, but is: %ld",   \
+			      PTR_ERR(__ptr));				       \
+									       \
+		__stream->add(__stream, fmt, ##__VA_ARGS__);		       \
+	}								       \
+	KUNIT_EXPECT_END(test, !IS_ERR_OR_NULL(__ptr), __stream);	       \
+} while (0)
+
 #endif /* _KUNIT_TEST_H */
diff --git a/kunit/test.c b/kunit/test.c
index 1a2e5e6b7ffee..d18c50d5ed671 100644
--- a/kunit/test.c
+++ b/kunit/test.c
@@ -269,3 +269,37 @@ void kunit_printk(const char *level,
 
 	va_end(args);
 }
+
+void kunit_expect_binary_msg(struct kunit *test,
+			     long long left, const char *left_name,
+			     long long right, const char *right_name,
+			     bool compare_result,
+			     const char *compare_name,
+			     const char *file,
+			     const char *line,
+			     const char *fmt, ...)
+{
+	struct kunit_stream *stream = kunit_expect_start(test, file, line);
+	struct va_format vaf;
+	va_list args;
+
+	stream->add(stream,
+		    "Expected %s %s %s, but\n",
+		    left_name, compare_name, right_name);
+	stream->add(stream, "\t\t%s == %lld\n", left_name, left);
+	stream->add(stream, "\t\t%s == %lld", right_name, right);
+
+	if (fmt) {
+		va_start(args, fmt);
+
+		vaf.fmt = fmt;
+		vaf.va = &args;
+
+		stream->add(stream, "\n%pV", &vaf);
+
+		va_end(args);
+	}
+
+	kunit_expect_end(test, compare_result, stream);
+}
+
-- 
2.21.0.rc0.258.g878e2cd30e-goog


^ permalink raw reply related	[flat|nested] 316+ messages in thread

* [RFC v4 05/17] kunit: test: add the concept of expectations
@ 2019-02-14 21:37     ` Brendan Higgins
  0 siblings, 0 replies; 316+ messages in thread
From: brendanhiggins @ 2019-02-14 21:37 UTC (permalink / raw)


Add support for expectations, which allow properties to be specified and
then verified in tests.
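
The *_MSG variants and KUNIT_FAIL() added below might be used roughly as
follows (the test_queue structure and the queue_is_empty() helper are
hypothetical, included only to illustrate the calling convention):

    static void example_queue_test(struct kunit *test)
    {
            if (!queue_is_empty(&test_queue))
                    KUNIT_FAIL(test, "queue should start out empty");

            /* The trailing format arguments are appended to the report. */
            KUNIT_EXPECT_EQ_MSG(test, 0, test_queue.count,
                                "found %d stale entries", test_queue.count);
    }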

Signed-off-by: Brendan Higgins <brendanhiggins@google.com>
---
 include/kunit/test.h | 415 +++++++++++++++++++++++++++++++++++++++++++
 kunit/test.c         |  34 ++++
 2 files changed, 449 insertions(+)

diff --git a/include/kunit/test.h b/include/kunit/test.h
index 75cd3c3ab1b4b..a36ad1a502c66 100644
--- a/include/kunit/test.h
+++ b/include/kunit/test.h
@@ -273,4 +273,419 @@ void __printf(3, 4) kunit_printk(const char *level,
 #define kunit_err(test, fmt, ...) \
 		kunit_printk(KERN_ERR, test, fmt, ##__VA_ARGS__)
 
+static inline struct kunit_stream *kunit_expect_start(struct kunit *test,
+						      const char *file,
+						      const char *line)
+{
+	struct kunit_stream *stream = kunit_new_stream(test);
+
+	stream->add(stream, "EXPECTATION FAILED at %s:%s\n\t", file, line);
+
+	return stream;
+}
+
+static inline void kunit_expect_end(struct kunit *test,
+				    bool success,
+				    struct kunit_stream *stream)
+{
+	if (!success)
+		test->fail(test, stream);
+	else
+		stream->clear(stream);
+}
+
+#define KUNIT_EXPECT_START(test) \
+		kunit_expect_start(test, __FILE__, __stringify(__LINE__))
+
+#define KUNIT_EXPECT_END(test, success, stream) \
+		kunit_expect_end(test, success, stream)
+
+#define KUNIT_EXPECT_MSG(test, success, message, fmt, ...) do {		       \
+	struct kunit_stream *__stream = KUNIT_EXPECT_START(test);	       \
+									       \
+	__stream->add(__stream, message);				       \
+	__stream->add(__stream, fmt, ##__VA_ARGS__);			       \
+	KUNIT_EXPECT_END(test, success, __stream);			       \
+} while (0)
+
+#define KUNIT_EXPECT(test, success, message) do {			       \
+	struct kunit_stream *__stream = KUNIT_EXPECT_START(test);	       \
+									       \
+	__stream->add(__stream, message);				       \
+	KUNIT_EXPECT_END(test, success, __stream);			       \
+} while (0)
+
+/**
+ * KUNIT_SUCCEED() - A no-op expectation. Only exists for code clarity.
+ * @test: The test context object.
+ *
+ * The opposite of KUNIT_FAIL(), it is an expectation that cannot fail. In other
+ * words, it does nothing and only exists for code clarity. See
+ * KUNIT_EXPECT_TRUE() for more information.
+ */
+#define KUNIT_SUCCEED(test) do {} while (0)
+
+/**
+ * KUNIT_FAIL() - Always causes a test to fail when evaluated.
+ * @test: The test context object.
+ * @fmt: an informational message to be printed when the assertion is made.
+ * @...: string format arguments.
+ *
+ * The opposite of KUNIT_SUCCEED(), it is an expectation that always fails. In
+ * other words, it always results in a failed expectation, and consequently
+ * always causes the test case to fail when evaluated. See KUNIT_EXPECT_TRUE()
+ * for more information.
+ */
+#define KUNIT_FAIL(test, fmt, ...) \
+		KUNIT_EXPECT_MSG(test, false, "", fmt, ##__VA_ARGS__)
+
+/**
+ * KUNIT_EXPECT_TRUE() - Causes a test failure when the expression is not true.
+ * @test: The test context object.
+ * @condition: an arbitrary boolean expression. The test fails when this does
+ * not evaluate to true.
+ *
+ * This and expectations of the form `KUNIT_EXPECT_*` will cause the test case
+ * to fail when the specified condition is not met; however, it will not prevent
+ * the test case from continuing to run; this is otherwise known as an
+ * *expectation failure*.
+ */
+#define KUNIT_EXPECT_TRUE(test, condition)				       \
+		KUNIT_EXPECT(test, (condition),				       \
+		       "Expected " #condition " is true, but is false.")
+
+#define KUNIT_EXPECT_TRUE_MSG(test, condition, fmt, ...)		       \
+		KUNIT_EXPECT_MSG(test, (condition),			       \
+				"Expected " #condition " is true, but is false.\n",\
+				fmt, ##__VA_ARGS__)
+
+/**
+ * KUNIT_EXPECT_FALSE() - Causes a test failure when the expression is not false.
+ * @test: The test context object.
+ * @condition: an arbitrary boolean expression. The test fails when this does
+ * not evaluate to false.
+ *
+ * Sets an expectation that @condition evaluates to false. See
+ * KUNIT_EXPECT_TRUE() for more information.
+ */
+#define KUNIT_EXPECT_FALSE(test, condition)				       \
+		KUNIT_EXPECT(test, !(condition),			       \
+		       "Expected " #condition " is false, but is true.")
+
+#define KUNIT_EXPECT_FALSE_MSG(test, condition, fmt, ...)		       \
+		KUNIT_EXPECT_MSG(test, !(condition),			       \
+				"Expected " #condition " is false, but is true.\n",\
+				fmt, ##__VA_ARGS__)
+
+void kunit_expect_binary_msg(struct kunit *test,
+			    long long left, const char *left_name,
+			    long long right, const char *right_name,
+			    bool compare_result,
+			    const char *compare_name,
+			    const char *file,
+			    const char *line,
+			    const char *fmt, ...);
+
+static inline void kunit_expect_binary(struct kunit *test,
+				       long long left, const char *left_name,
+				       long long right, const char *right_name,
+				       bool compare_result,
+				       const char *compare_name,
+				       const char *file,
+				       const char *line)
+{
+	struct kunit_stream *stream = kunit_expect_start(test, file, line);
+
+	stream->add(stream,
+		    "Expected %s %s %s, but\n",
+		    left_name, compare_name, right_name);
+	stream->add(stream, "\t\t%s == %lld\n", left_name, left);
+	stream->add(stream, "\t\t%s == %lld", right_name, right);
+
+	kunit_expect_end(test, compare_result, stream);
+}
+
+/*
+ * A factory macro for defining the expectations for the basic comparisons
+ * defined for the built in types.
+ *
+ * Unfortunately, there is no common type that all types can be promoted to for
+ * which all the binary operators behave the same way as for the actual types
+ * (for example, there is no type that long long and unsigned long long can
+ * both be cast to where the comparison result is preserved for all values). So
+ * the best we can do is do the comparison in the original types and then coerce
+ * everything to long long for printing; this way, the comparison behaves
+ * correctly and the printed out value usually makes sense without
+ * interpretation, but can always be interpreted to figure out the actual
+ * value.
+ */
+#define KUNIT_EXPECT_BINARY(test, left, condition, right) do {		       \
+	typeof(left) __left = (left);					       \
+	typeof(right) __right = (right);				       \
+	kunit_expect_binary(test,					       \
+			   (long long) __left, #left,			       \
+			   (long long) __right, #right,			       \
+			   __left condition __right, #condition,	       \
+			   __FILE__, __stringify(__LINE__));		       \
+} while (0)
+
+#define KUNIT_EXPECT_BINARY_MSG(test, left, condition, right, fmt, ...) do {   \
+	typeof(left) __left = (left);					       \
+	typeof(right) __right = (right);				       \
+	kunit_expect_binary_msg(test,					       \
+			   (long long) __left, #left,			       \
+			   (long long) __right, #right,			       \
+			   __left condition __right, #condition,	       \
+			   __FILE__, __stringify(__LINE__),		       \
+			   fmt, ##__VA_ARGS__);				       \
+} while (0)
+
+/**
+ * KUNIT_EXPECT_EQ() - Sets an expectation that @left and @right are equal.
+ * @test: The test context object.
+ * @left: an arbitrary expression that evaluates to a primitive C type.
+ * @right: an arbitrary expression that evaluates to a primitive C type.
+ *
+ * Sets an expectation that the values that @left and @right evaluate to are
+ * equal. This is semantically equivalent to
+ * KUNIT_EXPECT_TRUE(@test, (@left) == (@right)). See KUNIT_EXPECT_TRUE() for
+ * more information.
+ */
+#define KUNIT_EXPECT_EQ(test, left, right) \
+		KUNIT_EXPECT_BINARY(test, left, ==, right)
+
+#define KUNIT_EXPECT_EQ_MSG(test, left, right, fmt, ...)		       \
+		KUNIT_EXPECT_BINARY_MSG(test,				       \
+					left,				       \
+					==,				       \
+					right,				       \
+					fmt,				       \
+					##__VA_ARGS__)
+
+/**
+ * KUNIT_EXPECT_NE() - An expectation that @left and @right are not equal.
+ * @test: The test context object.
+ * @left: an arbitrary expression that evaluates to a primitive C type.
+ * @right: an arbitrary expression that evaluates to a primitive C type.
+ *
+ * Sets an expectation that the values that @left and @right evaluate to are not
+ * equal. This is semantically equivalent to
+ * KUNIT_EXPECT_TRUE(@test, (@left) != (@right)). See KUNIT_EXPECT_TRUE() for
+ * more information.
+ */
+#define KUNIT_EXPECT_NE(test, left, right) \
+		KUNIT_EXPECT_BINARY(test, left, !=, right)
+
+#define KUNIT_EXPECT_NE_MSG(test, left, right, fmt, ...)		       \
+		KUNIT_EXPECT_BINARY_MSG(test,				       \
+					left,				       \
+					!=,				       \
+					right,				       \
+					fmt,				       \
+					##__VA_ARGS__)
+
+/**
+ * KUNIT_EXPECT_LT() - An expectation that @left is less than @right.
+ * @test: The test context object.
+ * @left: an arbitrary expression that evaluates to a primitive C type.
+ * @right: an arbitrary expression that evaluates to a primitive C type.
+ *
+ * Sets an expectation that the value that @left evaluates to is less than the
+ * value that @right evaluates to. This is semantically equivalent to
+ * KUNIT_EXPECT_TRUE(@test, (@left) < (@right)). See KUNIT_EXPECT_TRUE() for
+ * more information.
+ */
+#define KUNIT_EXPECT_LT(test, left, right) \
+		KUNIT_EXPECT_BINARY(test, left, <, right)
+
+#define KUNIT_EXPECT_LT_MSG(test, left, right, fmt, ...)		       \
+		KUNIT_EXPECT_BINARY_MSG(test,				       \
+					left,				       \
+					<,				       \
+					right,				       \
+					fmt,				       \
+					##__VA_ARGS__)
+
+/**
+ * KUNIT_EXPECT_LE() - Expects that @left is less than or equal to @right.
+ * @test: The test context object.
+ * @left: an arbitrary expression that evaluates to a primitive C type.
+ * @right: an arbitrary expression that evaluates to a primitive C type.
+ *
+ * Sets an expectation that the value that @left evaluates to is less than or
+ * equal to the value that @right evaluates to. Semantically this is equivalent
+ * to KUNIT_EXPECT_TRUE(@test, (@left) <= (@right)). See KUNIT_EXPECT_TRUE() for
+ * more information.
+ */
+#define KUNIT_EXPECT_LE(test, left, right) \
+		KUNIT_EXPECT_BINARY(test, left, <=, right)
+
+#define KUNIT_EXPECT_LE_MSG(test, left, right, fmt, ...)		       \
+		KUNIT_EXPECT_BINARY_MSG(test,				       \
+					left,				       \
+					<=,				       \
+					right,				       \
+					fmt,				       \
+					##__VA_ARGS__)
+
+/**
+ * KUNIT_EXPECT_GT() - An expectation that @left is greater than @right.
+ * @test: The test context object.
+ * @left: an arbitrary expression that evaluates to a primitive C type.
+ * @right: an arbitrary expression that evaluates to a primitive C type.
+ *
+ * Sets an expectation that the value that @left evaluates to is greater than
+ * the value that @right evaluates to. This is semantically equivalent to
+ * KUNIT_EXPECT_TRUE(@test, (@left) > (@right)). See KUNIT_EXPECT_TRUE() for
+ * more information.
+ */
+#define KUNIT_EXPECT_GT(test, left, right) \
+		KUNIT_EXPECT_BINARY(test, left, >, right)
+
+#define KUNIT_EXPECT_GT_MSG(test, left, right, fmt, ...)		       \
+		KUNIT_EXPECT_BINARY_MSG(test,				       \
+					left,				       \
+					>,				       \
+					right,				       \
+					fmt,				       \
+					##__VA_ARGS__)
+
+/**
+ * KUNIT_EXPECT_GE() - Expects that @left is greater than or equal to @right.
+ * @test: The test context object.
+ * @left: an arbitrary expression that evaluates to a primitive C type.
+ * @right: an arbitrary expression that evaluates to a primitive C type.
+ *
+ * Sets an expectation that the value that @left evaluates to is greater than
+ * or equal to the value that @right evaluates to. This is semantically
+ * equivalent to KUNIT_EXPECT_TRUE(@test, (@left) >= (@right)). See
+ * KUNIT_EXPECT_TRUE() for more information.
+ */
+#define KUNIT_EXPECT_GE(test, left, right) \
+		KUNIT_EXPECT_BINARY(test, left, >=, right)
+
+#define KUNIT_EXPECT_GE_MSG(test, left, right, fmt, ...)		       \
+		KUNIT_EXPECT_BINARY_MSG(test,				       \
+					left,				       \
+					>=,				       \
+					right,				       \
+					fmt,				       \
+					##__VA_ARGS__)
+
+/**
+ * KUNIT_EXPECT_STREQ() - Expects that strings @left and @right are equal.
+ * @test: The test context object.
+ * @left: an arbitrary expression that evaluates to a null terminated string.
+ * @right: an arbitrary expression that evaluates to a null terminated string.
+ *
+ * Sets an expectation that the values that @left and @right evaluate to are
+ * equal. This is semantically equivalent to
+ * KUNIT_EXPECT_TRUE(@test, !strcmp((@left), (@right))). See KUNIT_EXPECT_TRUE()
+ * for more information.
+ */
+#define KUNIT_EXPECT_STREQ(test, left, right) do {			       \
+	struct kunit_stream *__stream = KUNIT_EXPECT_START(test);	       \
+	typeof(left) __left = (left);					       \
+	typeof(right) __right = (right);				       \
+									       \
+	__stream->add(__stream, "Expected " #left " == " #right ", but\n");    \
+	__stream->add(__stream, "\t\t%s == %s\n", #left, __left);	       \
+	__stream->add(__stream, "\t\t%s == %s\n", #right, __right);	       \
+									       \
+	KUNIT_EXPECT_END(test, !strcmp(__left, __right), __stream);	       \
+} while (0)
+
+#define KUNIT_EXPECT_STREQ_MSG(test, left, right, fmt, ...) do {	       \
+	struct kunit_stream *__stream = KUNIT_EXPECT_START(test);	       \
+	typeof(left) __left = (left);					       \
+	typeof(right) __right = (right);				       \
+									       \
+	__stream->add(__stream, "Expected " #left " == " #right ", but\n");    \
+	__stream->add(__stream, "\t\t%s == %s\n", #left, __left);	       \
+	__stream->add(__stream, "\t\t%s == %s\n", #right, __right);	       \
+	__stream->add(__stream, fmt, ##__VA_ARGS__);			       \
+									       \
+	KUNIT_EXPECT_END(test, !strcmp(__left, __right), __stream);	       \
+} while (0)
+
+/**
+ * KUNIT_EXPECT_STRNEQ() - Expects that strings @left and @right are not equal.
+ * @test: The test context object.
+ * @left: an arbitrary expression that evaluates to a null terminated string.
+ * @right: an arbitrary expression that evaluates to a null terminated string.
+ *
+ * Sets an expectation that the values that @left and @right evaluate to are
+ * not equal. This is semantically equivalent to
+ * KUNIT_EXPECT_TRUE(@test, strcmp((@left), (@right))). See KUNIT_EXPECT_TRUE()
+ * for more information.
+ */
+#define KUNIT_EXPECT_STRNEQ(test, left, right) do {			       \
+	struct kunit_stream *__stream = KUNIT_EXPECT_START(test);	       \
+	typeof(left) __left = (left);					       \
+	typeof(right) __right = (right);				       \
+									       \
+	__stream->add(__stream, "Expected " #left " == " #right ", but\n");    \
+	__stream->add(__stream, "\t\t%s == %s\n", #left, __left);	       \
+	__stream->add(__stream, "\t\t%s == %s\n", #right, __right);	       \
+									       \
+	KUNIT_EXPECT_END(test, strcmp(__left, __right), __stream);	       \
+} while (0)
+
+#define KUNIT_EXPECT_STRNEQ_MSG(test, left, right, fmt, ...) do {	       \
+	struct kunit_stream *__stream = KUNIT_EXPECT_START(test);	       \
+	typeof(left) __left = (left);					       \
+	typeof(right) __right = (right);				       \
+									       \
+	__stream->add(__stream, "Expected " #left " == " #right ", but\n");    \
+	__stream->add(__stream, "\t\t%s == %s\n", #left, __left);	       \
+	__stream->add(__stream, "\t\t%s == %s\n", #right, __right);	       \
+	__stream->add(__stream, fmt, ##__VA_ARGS__);			       \
+									       \
+	KUNIT_EXPECT_END(test, strcmp(__left, __right), __stream);	       \
+} while (0)
+
+/**
+ * KUNIT_EXPECT_NOT_ERR_OR_NULL() - Expects that @ptr is not null and not err.
+ * @test: The test context object.
+ * @ptr: an arbitrary pointer.
+ *
+ * Sets an expectation that the value that @ptr evaluates to is not null and not
+ * an errno stored in a pointer. This is semantically equivalent to
+ * KUNIT_EXPECT_TRUE(@test, !IS_ERR_OR_NULL(@ptr)). See KUNIT_EXPECT_TRUE() for
+ * more information.
+ */
+#define KUNIT_EXPECT_NOT_ERR_OR_NULL(test, ptr) do {			       \
+	struct kunit_stream *__stream = KUNIT_EXPECT_START(test);	       \
+	typeof(ptr) __ptr = (ptr);					       \
+									       \
+	if (!__ptr)							       \
+		__stream->add(__stream,					       \
+			      "Expected " #ptr " is not null, but is.");       \
+	if (IS_ERR(__ptr))						       \
+		__stream->add(__stream,					       \
+			      "Expected " #ptr " is not error, but is: %ld",   \
+			      PTR_ERR(__ptr));				       \
+									       \
+	KUNIT_EXPECT_END(test, !IS_ERR_OR_NULL(__ptr), __stream);	       \
+} while (0)
+
+#define KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(test, ptr, fmt, ...) do {	       \
+	struct kunit_stream *__stream = KUNIT_EXPECT_START(test);	       \
+	typeof(ptr) __ptr = (ptr);					       \
+									       \
+	if (!__ptr) {							       \
+		__stream->add(__stream,					       \
+			      "Expected " #ptr " is not null, but is.");       \
+		__stream->add(__stream, fmt, ##__VA_ARGS__);		       \
+	}								       \
+	if (IS_ERR(__ptr)) {						       \
+		__stream->add(__stream,					       \
+			      "Expected " #ptr " is not error, but is: %ld",   \
+			      PTR_ERR(__ptr));				       \
+									       \
+		__stream->add(__stream, fmt, ##__VA_ARGS__);		       \
+	}								       \
+	KUNIT_EXPECT_END(test, !IS_ERR_OR_NULL(__ptr), __stream);	       \
+} while (0)
+
 #endif /* _KUNIT_TEST_H */
diff --git a/kunit/test.c b/kunit/test.c
index 1a2e5e6b7ffee..d18c50d5ed671 100644
--- a/kunit/test.c
+++ b/kunit/test.c
@@ -269,3 +269,37 @@ void kunit_printk(const char *level,
 
 	va_end(args);
 }
+
+void kunit_expect_binary_msg(struct kunit *test,
+			     long long left, const char *left_name,
+			     long long right, const char *right_name,
+			     bool compare_result,
+			     const char *compare_name,
+			     const char *file,
+			     const char *line,
+			     const char *fmt, ...)
+{
+	struct kunit_stream *stream = kunit_expect_start(test, file, line);
+	struct va_format vaf;
+	va_list args;
+
+	stream->add(stream,
+		    "Expected %s %s %s, but\n",
+		    left_name, compare_name, right_name);
+	stream->add(stream, "\t\t%s == %lld\n", left_name, left);
+	stream->add(stream, "\t\t%s == %lld", right_name, right);
+
+	if (fmt) {
+		va_start(args, fmt);
+
+		vaf.fmt = fmt;
+		vaf.va = &args;
+
+		stream->add(stream, "\n%pV", &vaf);
+
+		va_end(args);
+	}
+
+	kunit_expect_end(test, compare_result, stream);
+}
+
-- 
2.21.0.rc0.258.g878e2cd30e-goog

^ permalink raw reply related	[flat|nested] 316+ messages in thread
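
To make the expectation API in this patch concrete, here is a small
illustrative sketch; it is not part of the patch. The function under test,
sign(), and the test case sign_test() are invented for the example, and the
test registration boilerplate defined elsewhere in the series is omitted.

#include <kunit/test.h>

/* Trivial function under test, invented purely for illustration. */
static int sign(int x)
{
	return (x > 0) - (x < 0);
}

/*
 * A failed expectation marks the test case as failed but does not stop it,
 * so several properties can be checked from a single test case.
 */
static void sign_test(struct kunit *test)
{
	const char *actual = "kunit";
	const char *expected = "kunit";

	KUNIT_EXPECT_EQ(test, sign(42), 1);
	KUNIT_EXPECT_EQ(test, sign(-7), -1);
	KUNIT_EXPECT_TRUE(test, sign(0) == 0);
	KUNIT_EXPECT_LT(test, sign(-5), sign(5));
	KUNIT_EXPECT_STREQ(test, actual, expected);
	KUNIT_EXPECT_EQ_MSG(test, sign(0), 0,
			    "zero should have no sign");
}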

* [RFC v4 06/17] kbuild: enable building KUnit
@ 2019-02-14 21:37     ` Brendan Higgins
  0 siblings, 0 replies; 316+ messages in thread
From: Brendan Higgins @ 2019-02-14 21:37 UTC (permalink / raw)
  To: keescook, mcgrof, shuah, robh, kieran.bingham, frowand.list
  Cc: gregkh, joel, mpe, joe, brakmo, rostedt, Tim.Bird, khilman,
	julia.lawall, linux-kselftest, kunit-dev, linux-kernel, jdike,
	richard, linux-um, daniel, dri-devel, dan.j.williams,
	linux-nvdimm, knut.omang, devicetree, pmladek, Alexander.Levin,
	amir73il, dan.carpenter, wfg, Brendan Higgins

Add KUnit to root Kconfig and Makefile allowing it to actually be built.

Signed-off-by: Brendan Higgins <brendanhiggins@google.com>
---
Changes Since Last Version
 - Rewrote patch description. This was previously called "[RFC v3 06/19]
   arch: um: enable running kunit from User Mode Linux," which was
   incorrect since this patch does not have any UML specific bits in it.
---
 Kconfig  | 2 ++
 Makefile | 2 +-
 2 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/Kconfig b/Kconfig
index 48a80beab6853..10428501edb78 100644
--- a/Kconfig
+++ b/Kconfig
@@ -30,3 +30,5 @@ source "crypto/Kconfig"
 source "lib/Kconfig"
 
 source "lib/Kconfig.debug"
+
+source "kunit/Kconfig"
diff --git a/Makefile b/Makefile
index 3142e67d03f1b..d10308eb7f214 100644
--- a/Makefile
+++ b/Makefile
@@ -958,7 +958,7 @@ endif
 PHONY += prepare0
 
 ifeq ($(KBUILD_EXTMOD),)
-core-y		+= kernel/ certs/ mm/ fs/ ipc/ security/ crypto/ block/
+core-y		+= kernel/ certs/ mm/ fs/ ipc/ security/ crypto/ block/ kunit/
 
 vmlinux-dirs	:= $(patsubst %/,%,$(filter %/, $(init-y) $(init-m) \
 		     $(core-y) $(core-m) $(drivers-y) $(drivers-m) \
-- 
2.21.0.rc0.258.g878e2cd30e-goog


^ permalink raw reply related	[flat|nested] 316+ messages in thread
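
As an illustration of what this hook-up buys (not part of the patch): with
kunit/ added to core-y, enabling the Kconfig entry introduced earlier in the
series is all that is needed for KUnit to be built into vmlinux. The symbol
name below is assumed from that kunit/Kconfig:

# illustrative .config fragment
CONFIG_KUNIT=y

No further Makefile changes should be needed; kbuild descends into kunit/
through the new core-y entry and picks up kunit/Makefile automatically.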

* [RFC v4 07/17] kunit: test: add initial tests
  2019-02-14 21:37 ` brendanhiggins
@ 2019-02-14 21:37   ` brendanhiggins
  -1 siblings, 0 replies; 316+ messages in thread
From: Brendan Higgins @ 2019-02-14 21:37 UTC (permalink / raw)
  To: keescook, mcgrof, shuah, robh, kieran.bingham, frowand.list
  Cc: gregkh, joel, mpe, joe, brakmo, rostedt, Tim.Bird, khilman,
	julia.lawall, linux-kselftest, kunit-dev, linux-kernel, jdike,
	richard, linux-um, daniel, dri-devel, dan.j.williams,
	linux-nvdimm, knut.omang, devicetree, pmladek, Alexander.Levin,
	amir73il, dan.carpenter, wfg, Brendan Higgins

Add a test for string stream along with a simpler example.

Signed-off-by: Brendan Higgins <brendanhiggins@google.com>
---
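For anyone who wants to try these tests out, a minimal config sketch; the
option names are taken from the Kconfig hunk below, everything else about
the kernel config is assumed:

    CONFIG_KUNIT=y
    CONFIG_KUNIT_TEST=y
    CONFIG_KUNIT_EXAMPLE_TEST=y
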
 kunit/Kconfig              | 12 ++++++
 kunit/Makefile             |  4 ++
 kunit/example-test.c       | 88 ++++++++++++++++++++++++++++++++++++++
 kunit/string-stream-test.c | 61 ++++++++++++++++++++++++++
 4 files changed, 165 insertions(+)
 create mode 100644 kunit/example-test.c
 create mode 100644 kunit/string-stream-test.c

diff --git a/kunit/Kconfig b/kunit/Kconfig
index 64480092b2c24..5cb500355c873 100644
--- a/kunit/Kconfig
+++ b/kunit/Kconfig
@@ -13,4 +13,16 @@ config KUNIT
 	  special hardware. For more information, please see
 	  Documentation/kunit/
 
+config KUNIT_TEST
+	bool "KUnit test for KUnit"
+	depends on KUNIT
+	help
+	  Enables KUnit test to test KUnit.
+
+config KUNIT_EXAMPLE_TEST
+	bool "Example test for KUnit"
+	depends on KUNIT
+	help
+	  Enables example KUnit test to demo features of KUnit.
+
 endmenu
diff --git a/kunit/Makefile b/kunit/Makefile
index 6ddc622ee6b1c..60a9ea6cb4697 100644
--- a/kunit/Makefile
+++ b/kunit/Makefile
@@ -1,3 +1,7 @@
 obj-$(CONFIG_KUNIT) +=			test.o \
 					string-stream.o \
 					kunit-stream.o
+
+obj-$(CONFIG_KUNIT_TEST) +=		string-stream-test.o
+
+obj-$(CONFIG_KUNIT_EXAMPLE_TEST) +=	example-test.o
diff --git a/kunit/example-test.c b/kunit/example-test.c
new file mode 100644
index 0000000000000..352f64a423e7c
--- /dev/null
+++ b/kunit/example-test.c
@@ -0,0 +1,88 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Example KUnit test to show how to use KUnit.
+ *
+ * Copyright (C) 2019, Google LLC.
+ * Author: Brendan Higgins <brendanhiggins@google.com>
+ */
+
+#include <kunit/test.h>
+
+/*
+ * This is the most fundamental element of KUnit, the test case. A test case
+ * makes a set of EXPECTATIONs and ASSERTIONs about the behavior of some code;
+ * if any expectations or assertions are not met, the test fails; otherwise,
+ * the test passes.
+ *
+ * In KUnit, a test case is just a function with the signature
+ * `void (*)(struct kunit *)`. `struct kunit` is a context object that stores
+ * information about the current test.
+ */
+static void example_simple_test(struct kunit *test)
+{
+	/*
+	 * This is an EXPECTATION; it is how KUnit tests things. When you want
+	 * to test a piece of code, you set some expectations about what the
+	 * code should do. KUnit then runs the test and verifies that the code's
+	 * behavior matched what was expected.
+	 */
+	KUNIT_EXPECT_EQ(test, 1 + 1, 2);
+}
+
+/*
+ * This is run once before each test case, see the comment on
+ * example_test_module for more information.
+ */
+static int example_test_init(struct kunit *test)
+{
+	kunit_info(test, "initializing");
+
+	return 0;
+}
+
+/*
+ * Here we make a list of all the test cases we want to add to the test module
+ * below.
+ */
+static struct kunit_case example_test_cases[] = {
+	/*
+	 * This is a helper to create a test case object from a test case
+	 * function; its exact function is not important to understand how to
+	 * use KUnit, just know that this is how you associate test cases with a
+	 * test module.
+	 */
+	KUNIT_CASE(example_simple_test),
+	{},
+};
+
+/*
+ * This defines a suite or grouping of tests.
+ *
+ * Test cases are defined as belonging to the suite by adding them to
+ * `kunit_cases`.
+ *
+ * Often it is desirable to run some function which will set up things which
+ * will be used by every test; this is accomplished with an `init` function
+ * which runs before each test case is invoked. Similarly, an `exit` function
+ * may be specified which runs after every test case and can be used for
+ * cleanup. For clarity, running tests in a test module would behave as follows:
+ *
+ * module.init(test);
+ * module.test_case[0](test);
+ * module.exit(test);
+ * module.init(test);
+ * module.test_case[1](test);
+ * module.exit(test);
+ * ...;
+ */
+static struct kunit_module example_test_module = {
+	.name = "example",
+	.init = example_test_init,
+	.test_cases = example_test_cases,
+};
+
+/*
+ * This registers the above test module telling KUnit that this is a suite of
+ * tests that need to be run.
+ */
+module_test(example_test_module);
diff --git a/kunit/string-stream-test.c b/kunit/string-stream-test.c
new file mode 100644
index 0000000000000..6cfef69568011
--- /dev/null
+++ b/kunit/string-stream-test.c
@@ -0,0 +1,61 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * KUnit test for struct string_stream.
+ *
+ * Copyright (C) 2019, Google LLC.
+ * Author: Brendan Higgins <brendanhiggins@google.com>
+ */
+
+#include <linux/slab.h>
+#include <kunit/test.h>
+#include <kunit/string-stream.h>
+
+static void string_stream_test_get_string(struct kunit *test)
+{
+	struct string_stream *stream = new_string_stream();
+	char *output;
+
+	stream->add(stream, "Foo");
+	stream->add(stream, " %s", "bar");
+
+	output = stream->get_string(stream);
+	KUNIT_EXPECT_STREQ(test, output, "Foo bar");
+	kfree(output);
+	destroy_string_stream(stream);
+}
+
+static void string_stream_test_add_and_clear(struct kunit *test)
+{
+	struct string_stream *stream = new_string_stream();
+	char *output;
+	int i;
+
+	for (i = 0; i < 10; i++)
+		stream->add(stream, "A");
+
+	output = stream->get_string(stream);
+	KUNIT_EXPECT_STREQ(test, output, "AAAAAAAAAA");
+	KUNIT_EXPECT_EQ(test, stream->length, 10);
+	KUNIT_EXPECT_FALSE(test, stream->is_empty(stream));
+	kfree(output);
+
+	stream->clear(stream);
+
+	output = stream->get_string(stream);
+	KUNIT_EXPECT_STREQ(test, output, "");
+	KUNIT_EXPECT_TRUE(test, stream->is_empty(stream));
+	destroy_string_stream(stream);
+}
+
+static struct kunit_case string_stream_test_cases[] = {
+	KUNIT_CASE(string_stream_test_get_string),
+	KUNIT_CASE(string_stream_test_add_and_clear),
+	{}
+};
+
+static struct kunit_module string_stream_test_module = {
+	.name = "string-stream-test",
+	.test_cases = string_stream_test_cases
+};
+module_test(string_stream_test_module);
+
-- 
2.21.0.rc0.258.g878e2cd30e-goog

^ permalink raw reply related	[flat|nested] 316+ messages in thread

* [RFC v4 08/17] kunit: test: add support for test abort
@ 2019-02-14 21:37     ` Brendan Higgins
  0 siblings, 0 replies; 316+ messages in thread
From: Brendan Higgins @ 2019-02-14 21:37 UTC (permalink / raw)
  To: keescook, mcgrof, shuah, robh, kieran.bingham, frowand.list
  Cc: gregkh, joel, mpe, joe, brakmo, rostedt, Tim.Bird, khilman,
	julia.lawall, linux-kselftest, kunit-dev, linux-kernel, jdike,
	richard, linux-um, daniel, dri-devel, dan.j.williams,
	linux-nvdimm, knut.omang, devicetree, pmladek, Alexander.Levin,
	amir73il, dan.carpenter, wfg, Brendan Higgins

Add support for aborting/bailing out of test cases. Needed for
implementing assertions.

Signed-off-by: Brendan Higgins <brendanhiggins@google.com>
---
Changes Since Last Version
 - This patch is new; it introduces a cross-architecture way to abort
   out of a test case (needed for KUNIT_ASSERT_*, see the next patch for
   details).
 - On a side note, this is not a complete replacement for the UML abort
   mechanism, but it covers the majority of the necessary functionality.
   UML architecture-specific features have been dropped from the initial
   patchset.
---
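As a rough sketch of how the next patch is expected to build on this; the
helper below is hypothetical, only the test->fail() and test->abort() hooks
come from this patch:

	#include <kunit/test.h>

	/* Hypothetical helper: report a failed check, then bail out. */
	static void kunit_assert_sketch(struct kunit *test, bool passed,
					struct kunit_stream *stream)
	{
		if (passed)
			return;
		test->fail(test, stream);	/* log the failure message */
		test->abort(test);		/* throws; never returns */
	}
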
 include/kunit/test.h |  24 +++++
 kunit/Makefile       |   3 +-
 kunit/test-test.c    | 127 ++++++++++++++++++++++++++
 kunit/test.c         | 208 +++++++++++++++++++++++++++++++++++++++++--
 4 files changed, 353 insertions(+), 9 deletions(-)
 create mode 100644 kunit/test-test.c

diff --git a/include/kunit/test.h b/include/kunit/test.h
index a36ad1a502c66..cd02dca96eb61 100644
--- a/include/kunit/test.h
+++ b/include/kunit/test.h
@@ -151,6 +151,26 @@ struct kunit_module {
 	struct kunit_case *test_cases;
 };
 
+struct kunit_try_catch_context {
+	struct kunit *test;
+	struct kunit_module *module;
+	struct kunit_case *test_case;
+	struct completion *try_completion;
+	int try_result;
+};
+
+struct kunit_try_catch {
+	void (*run)(struct kunit_try_catch *try_catch);
+	void (*throw)(struct kunit_try_catch *try_catch);
+	struct kunit_try_catch_context context;
+	void (*try)(struct kunit_try_catch_context *context);
+	void (*catch)(struct kunit_try_catch_context *context);
+};
+
+void kunit_try_catch_init(struct kunit_try_catch *try_catch);
+
+void kunit_generic_try_catch_init(struct kunit_try_catch *try_catch);
+
 /**
  * struct kunit - represents a running instance of a test.
  * @priv: for user to store arbitrary data. Commonly used to pass data created
@@ -166,13 +186,17 @@ struct kunit {
 
 	/* private: internal use only. */
 	const char *name; /* Read only after initialization! */
+	struct kunit_try_catch try_catch;
 	spinlock_t lock; /* Guards all mutable test state. */
 	bool success; /* Protected by lock. */
+	bool death_test; /* Protected by lock. */
 	struct list_head resources; /* Protected by lock. */
+	void (*set_death_test)(struct kunit *test, bool death_test);
 	void (*vprintk)(const struct kunit *test,
 			const char *level,
 			struct va_format *vaf);
 	void (*fail)(struct kunit *test, struct kunit_stream *stream);
+	void (*abort)(struct kunit *test);
 };
 
 int kunit_init_test(struct kunit *test, const char *name);
diff --git a/kunit/Makefile b/kunit/Makefile
index 60a9ea6cb4697..e4c300f67479a 100644
--- a/kunit/Makefile
+++ b/kunit/Makefile
@@ -2,6 +2,7 @@ obj-$(CONFIG_KUNIT) +=			test.o \
 					string-stream.o \
 					kunit-stream.o
 
-obj-$(CONFIG_KUNIT_TEST) +=		string-stream-test.o
+obj-$(CONFIG_KUNIT_TEST) +=		test-test.o \
+					string-stream-test.o
 
 obj-$(CONFIG_KUNIT_EXAMPLE_TEST) +=	example-test.o
diff --git a/kunit/test-test.c b/kunit/test-test.c
new file mode 100644
index 0000000000000..a936c041f1c8f
--- /dev/null
+++ b/kunit/test-test.c
@@ -0,0 +1,127 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * KUnit test for core test infrastructure.
+ *
+ * Copyright (C) 2019, Google LLC.
+ * Author: Brendan Higgins <brendanhiggins@google.com>
+ */
+#include <kunit/test.h>
+
+struct kunit_try_catch_test_context {
+	struct kunit_try_catch *try_catch;
+	bool function_called;
+};
+
+void kunit_test_successful_try(struct kunit_try_catch_context *context)
+{
+	struct kunit_try_catch_test_context *ctx = context->test->priv;
+
+	ctx->function_called = true;
+}
+
+void kunit_test_no_catch(struct kunit_try_catch_context *context)
+{
+	KUNIT_FAIL(context->test, "Catch should not be called.");
+}
+
+static void kunit_test_try_catch_successful_try_no_catch(struct kunit *test)
+{
+	struct kunit_try_catch_test_context *ctx = test->priv;
+	struct kunit_try_catch *try_catch = ctx->try_catch;
+
+	try_catch->try = kunit_test_successful_try;
+	try_catch->catch = kunit_test_no_catch;
+	try_catch->run(try_catch);
+
+	KUNIT_EXPECT_TRUE(test, ctx->function_called);
+}
+
+void kunit_test_unsuccessful_try(struct kunit_try_catch_context *context)
+{
+	struct kunit_try_catch *try_catch = container_of(context,
+							 struct kunit_try_catch,
+							 context);
+
+	try_catch->throw(try_catch);
+	KUNIT_FAIL(context->test, "This line should never be reached.");
+}
+
+void kunit_test_catch(struct kunit_try_catch_context *context)
+{
+	struct kunit_try_catch_test_context *ctx = context->test->priv;
+
+	ctx->function_called = true;
+}
+
+static void kunit_test_try_catch_unsuccessful_try_does_catch(struct kunit *test)
+{
+	struct kunit_try_catch_test_context *ctx = test->priv;
+	struct kunit_try_catch *try_catch = ctx->try_catch;
+
+	try_catch->try = kunit_test_unsuccessful_try;
+	try_catch->catch = kunit_test_catch;
+	try_catch->run(try_catch);
+
+	KUNIT_EXPECT_TRUE(test, ctx->function_called);
+}
+
+static void kunit_test_generic_try_catch_successful_try_no_catch(
+		struct kunit *test)
+{
+	struct kunit_try_catch_test_context *ctx = test->priv;
+	struct kunit_try_catch *try_catch = ctx->try_catch;
+
+	kunit_generic_try_catch_init(try_catch);
+
+	try_catch->try = kunit_test_successful_try;
+	try_catch->catch = kunit_test_no_catch;
+	try_catch->run(try_catch);
+
+	KUNIT_EXPECT_TRUE(test, ctx->function_called);
+}
+
+static void kunit_test_generic_try_catch_unsuccessful_try_does_catch(
+		struct kunit *test)
+{
+	struct kunit_try_catch_test_context *ctx = test->priv;
+	struct kunit_try_catch *try_catch = ctx->try_catch;
+
+	kunit_generic_try_catch_init(try_catch);
+
+	try_catch->try = kunit_test_unsuccessful_try;
+	try_catch->catch = kunit_test_catch;
+	try_catch->run(try_catch);
+
+	KUNIT_EXPECT_TRUE(test, ctx->function_called);
+}
+
+static int kunit_try_catch_test_init(struct kunit *test)
+{
+	struct kunit_try_catch_test_context *ctx;
+
+	ctx = kunit_kzalloc(test, sizeof(*ctx), GFP_KERNEL);
+	test->priv = ctx;
+
+	ctx->try_catch = kunit_kmalloc(test,
+				       sizeof(*ctx->try_catch),
+				       GFP_KERNEL);
+	kunit_try_catch_init(ctx->try_catch);
+	ctx->try_catch->context.test = test;
+
+	return 0;
+}
+
+static struct kunit_case kunit_try_catch_test_cases[] = {
+	KUNIT_CASE(kunit_test_try_catch_successful_try_no_catch),
+	KUNIT_CASE(kunit_test_try_catch_unsuccessful_try_does_catch),
+	KUNIT_CASE(kunit_test_generic_try_catch_successful_try_no_catch),
+	KUNIT_CASE(kunit_test_generic_try_catch_unsuccessful_try_does_catch),
+	{},
+};
+
+static struct kunit_module kunit_try_catch_test_module = {
+	.name = "kunit-try-catch-test",
+	.init = kunit_try_catch_test_init,
+	.test_cases = kunit_try_catch_test_cases,
+};
+module_test(kunit_try_catch_test_module);
diff --git a/kunit/test.c b/kunit/test.c
index d18c50d5ed671..6e5244642ab07 100644
--- a/kunit/test.c
+++ b/kunit/test.c
@@ -6,9 +6,9 @@
  * Author: Brendan Higgins <brendanhiggins@google.com>
  */
 
-#include <linux/sched.h>
 #include <linux/sched/debug.h>
-#include <os.h>
+#include <linux/completion.h>
+#include <linux/kthread.h>
 #include <kunit/test.h>
 
 static bool kunit_get_success(struct kunit *test)
@@ -32,6 +32,27 @@ static void kunit_set_success(struct kunit *test, bool success)
 	spin_unlock_irqrestore(&test->lock, flags);
 }
 
+static bool kunit_get_death_test(struct kunit *test)
+{
+	unsigned long flags;
+	bool death_test;
+
+	spin_lock_irqsave(&test->lock, flags);
+	death_test = test->death_test;
+	spin_unlock_irqrestore(&test->lock, flags);
+
+	return death_test;
+}
+
+static void kunit_set_death_test(struct kunit *test, bool death_test)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&test->lock, flags);
+	test->death_test = death_test;
+	spin_unlock_irqrestore(&test->lock, flags);
+}
+
 static int kunit_vprintk_emit(const struct kunit *test,
 			      int level,
 			      const char *fmt,
@@ -70,13 +91,29 @@ static void kunit_fail(struct kunit *test, struct kunit_stream *stream)
 	stream->commit(stream);
 }
 
+static void __noreturn kunit_abort(struct kunit *test)
+{
+	kunit_set_death_test(test, true);
+
+	test->try_catch.throw(&test->try_catch);
+
+	/*
+	 * Throw could not abort from test.
+	 */
+	kunit_err(test, "Throw could not abort from test!");
+	show_stack(NULL, NULL);
+	BUG();
+}
+
 int kunit_init_test(struct kunit *test, const char *name)
 {
 	spin_lock_init(&test->lock);
 	INIT_LIST_HEAD(&test->resources);
 	test->name = name;
+	test->set_death_test = kunit_set_death_test;
 	test->vprintk = kunit_vprintk;
 	test->fail = kunit_fail;
+	test->abort = kunit_abort;
 
 	return 0;
 }
@@ -122,16 +159,171 @@ static void kunit_run_case_cleanup(struct kunit *test,
 }
 
 /*
- * Performs all logic to run a test case.
+ * Handles an unexpected crash in a test case.
  */
-static bool kunit_run_case(struct kunit *test,
-			   struct kunit_module *module,
-			   struct kunit_case *test_case)
+static void kunit_handle_test_crash(struct kunit *test,
+				   struct kunit_module *module,
+				   struct kunit_case *test_case)
 {
-	kunit_set_success(test, true);
+	kunit_err(test, "%s crashed", test_case->name);
+	/*
+	 * TODO(brendanhiggins@google.com): This prints the stack trace up
+	 * through this frame, not up to the frame that caused the crash.
+	 */
+	show_stack(NULL, NULL);
+
+	kunit_case_internal_cleanup(test);
+}
+
+static void kunit_generic_throw(struct kunit_try_catch *try_catch)
+{
+	try_catch->context.try_result = -EFAULT;
+	complete_and_exit(try_catch->context.try_completion, -EFAULT);
+}
+
+static int kunit_generic_run_threadfn_adapter(void *data)
+{
+	struct kunit_try_catch *try_catch = data;
 
+	try_catch->try(&try_catch->context);
+
+	complete_and_exit(try_catch->context.try_completion, 0);
+}
+
+static void kunit_generic_run_try_catch(struct kunit_try_catch *try_catch)
+{
+	struct task_struct *task_struct;
+	struct kunit *test = try_catch->context.test;
+	int exit_code, wake_result;
+	DECLARE_COMPLETION(test_case_completion);
+
+	try_catch->context.try_completion = &test_case_completion;
+	try_catch->context.try_result = 0;
+	task_struct = kthread_create(kunit_generic_run_threadfn_adapter,
+					     try_catch,
+					     "kunit_try_catch_thread");
+	if (IS_ERR_OR_NULL(task_struct)) {
+		try_catch->catch(&try_catch->context);
+		return;
+	}
+
+	wake_result = wake_up_process(task_struct);
+	if (wake_result != 0 && wake_result != 1) {
+		kunit_err(test, "task was not woken properly: %d", wake_result);
+		try_catch->catch(&try_catch->context);
+	}
+
+	/*
+	 * TODO(brendanhiggins@google.com): We should probably have some type of
+	 * timeout here. The only question is what that timeout value should be.
+	 *
+	 * The intention has always been, at some point, to be able to label
+	 * tests with some type of size bucket (unit/small, integration/medium,
+	 * large/system/end-to-end, etc), where each size bucket would get a
+	 * default timeout value kind of like what Bazel does:
+	 * https://docs.bazel.build/versions/master/be/common-definitions.html#test.size
+	 * There is still some debate to be had on exactly how we do this. (For
+	 * one, we probably want to have some sort of test runner level
+	 * timeout.)
+	 *
+	 * For more background on this topic, see:
+	 * https://mike-bland.com/2011/11/01/small-medium-large.html
+	 */
+	wait_for_completion(&test_case_completion);
+
+	exit_code = try_catch->context.try_result;
+	if (exit_code == -EFAULT)
+		try_catch->catch(&try_catch->context);
+	else if (exit_code == -EINTR)
+		kunit_err(test, "wake_up_process() was never called.");
+	else if (exit_code)
+		kunit_err(test, "Unknown error: %d", exit_code);
+}
+
+void kunit_generic_try_catch_init(struct kunit_try_catch *try_catch)
+{
+	try_catch->run = kunit_generic_run_try_catch;
+	try_catch->throw = kunit_generic_throw;
+}
+
+void __weak kunit_try_catch_init(struct kunit_try_catch *try_catch)
+{
+	kunit_generic_try_catch_init(try_catch);
+}
+
+static void kunit_try_run_case(struct kunit_try_catch_context *context)
+{
+	struct kunit_try_catch_context *ctx = context;
+	struct kunit *test = ctx->test;
+	struct kunit_module *module = ctx->module;
+	struct kunit_case *test_case = ctx->test_case;
+
+	/*
+	 * kunit_run_case_internal may encounter a fatal error; if it does, the
+	 * test thread aborts via try_catch->throw() and the catch handler runs
+	 * instead of continuing normal control flow.
+	 */
 	kunit_run_case_internal(test, module, test_case);
+	/* This line may never be reached. */
 	kunit_run_case_cleanup(test, module, test_case);
+}
+
+static void kunit_catch_run_case(struct kunit_try_catch_context *context)
+{
+	struct kunit_try_catch_context *ctx = context;
+	struct kunit *test = ctx->test;
+	struct kunit_module *module = ctx->module;
+	struct kunit_case *test_case = ctx->test_case;
+
+	if (kunit_get_death_test(test)) {
+		/*
+		 * EXPECTED DEATH: kunit_run_case_internal encountered an
+		 * anticipated fatal error. Everything should be in a safe
+		 * state.
+		 */
+		kunit_run_case_cleanup(test, module, test_case);
+	} else {
+		/*
+		 * UNEXPECTED DEATH: kunit_run_case_internal encountered an
+		 * unanticipated fatal error. We have no idea what state the
+		 * test case is in.
+		 */
+		kunit_handle_test_crash(test, module, test_case);
+		kunit_set_success(test, false);
+	}
+}
+
+/*
+ * Performs all logic to run a test case. It also catches most errors that
+ * occur in a test case and reports them as failures.
+ *
+ * XXX: THIS DOES NOT FOLLOW NORMAL CONTROL FLOW. READ CAREFULLY!!!
+ */
+static bool kunit_run_case_catch_errors(struct kunit *test,
+				       struct kunit_module *module,
+				       struct kunit_case *test_case)
+{
+	struct kunit_try_catch *try_catch = &test->try_catch;
+	struct kunit_try_catch_context *context = &try_catch->context;
+
+	kunit_try_catch_init(try_catch);
+
+	kunit_set_success(test, true);
+	kunit_set_death_test(test, false);
+
+	/*
+	 * ENTER HANDLER: If a failure occurs, we enter here.
+	 */
+	context->test = test;
+	context->module = module;
+	context->test_case = test_case;
+	try_catch->try = kunit_try_run_case;
+	try_catch->catch = kunit_catch_run_case;
+	try_catch->run(try_catch);
+	/*
+	 * EXIT HANDLER: test case has been run and all possible errors have
+	 * been handled.
+	 */
 
 	return kunit_get_success(test);
 }
@@ -148,7 +340,7 @@ int kunit_run_tests(struct kunit_module *module)
 		return ret;
 
 	for (test_case = module->test_cases; test_case->run_case; test_case++) {
-		success = kunit_run_case(&test, module, test_case);
+		success = kunit_run_case_catch_errors(&test, module, test_case);
 		if (!success)
 			all_passed = false;
 
-- 
2.21.0.rc0.258.g878e2cd30e-goog


^ permalink raw reply related	[flat|nested] 316+ messages in thread

* [RFC v4 08/17] kunit: test: add support for test abort
@ 2019-02-14 21:37     ` Brendan Higgins
  0 siblings, 0 replies; 316+ messages in thread
From: brendanhiggins @ 2019-02-14 21:37 UTC (permalink / raw)


Add support for aborting/bailing out of test cases. Needed for
implementing assertions.
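
To illustrate the intended use of the primitive (a minimal sketch, not part of
this patch; it mirrors how the new kunit/test-test.c below exercises the API,
and the example_*() names are hypothetical):

	static void example_try(struct kunit_try_catch_context *context)
	{
		/* Runs in a separate kthread; a fatal error calls throw(). */
	}

	static void example_catch(struct kunit_try_catch_context *context)
	{
		/* Only reached if example_try() threw or could not be run. */
	}

	/* Caller side, e.g. in the test runner: */
	static void example_run(struct kunit *test)
	{
		struct kunit_try_catch *try_catch = &test->try_catch;

		kunit_try_catch_init(try_catch);
		try_catch->context.test = test;
		try_catch->try = example_try;
		try_catch->catch = example_catch;
		try_catch->run(try_catch);
	}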

Signed-off-by: Brendan Higgins <brendanhiggins at google.com>
---
Changes Since Last Version
 - This patch is new; it introduces a cross-architecture way to abort out of
   a test case (needed for KUNIT_ASSERT_*, see the next patch for details).
 - On a side note, this is not a complete replacement for the UML abort
   mechanism, but it covers the majority of the necessary functionality. UML
   architecture-specific features have been dropped from the initial
   patchset.
---
 include/kunit/test.h |  24 +++++
 kunit/Makefile       |   3 +-
 kunit/test-test.c    | 127 ++++++++++++++++++++++++++
 kunit/test.c         | 208 +++++++++++++++++++++++++++++++++++++++++--
 4 files changed, 353 insertions(+), 9 deletions(-)
 create mode 100644 kunit/test-test.c

diff --git a/include/kunit/test.h b/include/kunit/test.h
index a36ad1a502c66..cd02dca96eb61 100644
--- a/include/kunit/test.h
+++ b/include/kunit/test.h
@@ -151,6 +151,26 @@ struct kunit_module {
 	struct kunit_case *test_cases;
 };
 
+struct kunit_try_catch_context {
+	struct kunit *test;
+	struct kunit_module *module;
+	struct kunit_case *test_case;
+	struct completion *try_completion;
+	int try_result;
+};
+
+struct kunit_try_catch {
+	void (*run)(struct kunit_try_catch *try_catch);
+	void (*throw)(struct kunit_try_catch *try_catch);
+	struct kunit_try_catch_context context;
+	void (*try)(struct kunit_try_catch_context *context);
+	void (*catch)(struct kunit_try_catch_context *context);
+};
+
+void kunit_try_catch_init(struct kunit_try_catch *try_catch);
+
+void kunit_generic_try_catch_init(struct kunit_try_catch *try_catch);
+
 /**
  * struct kunit - represents a running instance of a test.
  * @priv: for user to store arbitrary data. Commonly used to pass data created
@@ -166,13 +186,17 @@ struct kunit {
 
 	/* private: internal use only. */
 	const char *name; /* Read only after initialization! */
+	struct kunit_try_catch try_catch;
 	spinlock_t lock; /* Guards all mutable test state. */
 	bool success; /* Protected by lock. */
+	bool death_test; /* Protected by lock. */
 	struct list_head resources; /* Protected by lock. */
+	void (*set_death_test)(struct kunit *test, bool death_test);
 	void (*vprintk)(const struct kunit *test,
 			const char *level,
 			struct va_format *vaf);
 	void (*fail)(struct kunit *test, struct kunit_stream *stream);
+	void (*abort)(struct kunit *test);
 };
 
 int kunit_init_test(struct kunit *test, const char *name);
diff --git a/kunit/Makefile b/kunit/Makefile
index 60a9ea6cb4697..e4c300f67479a 100644
--- a/kunit/Makefile
+++ b/kunit/Makefile
@@ -2,6 +2,7 @@ obj-$(CONFIG_KUNIT) +=			test.o \
 					string-stream.o \
 					kunit-stream.o
 
-obj-$(CONFIG_KUNIT_TEST) +=		string-stream-test.o
+obj-$(CONFIG_KUNIT_TEST) +=		test-test.o \
+					string-stream-test.o
 
 obj-$(CONFIG_KUNIT_EXAMPLE_TEST) +=	example-test.o
diff --git a/kunit/test-test.c b/kunit/test-test.c
new file mode 100644
index 0000000000000..a936c041f1c8f
--- /dev/null
+++ b/kunit/test-test.c
@@ -0,0 +1,127 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * KUnit test for core test infrastructure.
+ *
+ * Copyright (C) 2019, Google LLC.
+ * Author: Brendan Higgins <brendanhiggins at google.com>
+ */
+#include <kunit/test.h>
+
+struct kunit_try_catch_test_context {
+	struct kunit_try_catch *try_catch;
+	bool function_called;
+};
+
+void kunit_test_successful_try(struct kunit_try_catch_context *context)
+{
+	struct kunit_try_catch_test_context *ctx = context->test->priv;
+
+	ctx->function_called = true;
+}
+
+void kunit_test_no_catch(struct kunit_try_catch_context *context)
+{
+	KUNIT_FAIL(context->test, "Catch should not be called.");
+}
+
+static void kunit_test_try_catch_successful_try_no_catch(struct kunit *test)
+{
+	struct kunit_try_catch_test_context *ctx = test->priv;
+	struct kunit_try_catch *try_catch = ctx->try_catch;
+
+	try_catch->try = kunit_test_successful_try;
+	try_catch->catch = kunit_test_no_catch;
+	try_catch->run(try_catch);
+
+	KUNIT_EXPECT_TRUE(test, ctx->function_called);
+}
+
+void kunit_test_unsuccessful_try(struct kunit_try_catch_context *context)
+{
+	struct kunit_try_catch *try_catch = container_of(context,
+							 struct kunit_try_catch,
+							 context);
+
+	try_catch->throw(try_catch);
+	KUNIT_FAIL(context->test, "This line should never be reached.");
+}
+
+void kunit_test_catch(struct kunit_try_catch_context *context)
+{
+	struct kunit_try_catch_test_context *ctx = context->test->priv;
+
+	ctx->function_called = true;
+}
+
+static void kunit_test_try_catch_unsuccessful_try_does_catch(struct kunit *test)
+{
+	struct kunit_try_catch_test_context *ctx = test->priv;
+	struct kunit_try_catch *try_catch = ctx->try_catch;
+
+	try_catch->try = kunit_test_unsuccessful_try;
+	try_catch->catch = kunit_test_catch;
+	try_catch->run(try_catch);
+
+	KUNIT_EXPECT_TRUE(test, ctx->function_called);
+}
+
+static void kunit_test_generic_try_catch_successful_try_no_catch(
+		struct kunit *test)
+{
+	struct kunit_try_catch_test_context *ctx = test->priv;
+	struct kunit_try_catch *try_catch = ctx->try_catch;
+
+	kunit_generic_try_catch_init(try_catch);
+
+	try_catch->try = kunit_test_successful_try;
+	try_catch->catch = kunit_test_no_catch;
+	try_catch->run(try_catch);
+
+	KUNIT_EXPECT_TRUE(test, ctx->function_called);
+}
+
+static void kunit_test_generic_try_catch_unsuccessful_try_does_catch(
+		struct kunit *test)
+{
+	struct kunit_try_catch_test_context *ctx = test->priv;
+	struct kunit_try_catch *try_catch = ctx->try_catch;
+
+	kunit_generic_try_catch_init(try_catch);
+
+	try_catch->try = kunit_test_unsuccessful_try;
+	try_catch->catch = kunit_test_catch;
+	try_catch->run(try_catch);
+
+	KUNIT_EXPECT_TRUE(test, ctx->function_called);
+}
+
+static int kunit_try_catch_test_init(struct kunit *test)
+{
+	struct kunit_try_catch_test_context *ctx;
+
+	ctx = kunit_kzalloc(test, sizeof(*ctx), GFP_KERNEL);
+	test->priv = ctx;
+
+	ctx->try_catch = kunit_kmalloc(test,
+				       sizeof(*ctx->try_catch),
+				       GFP_KERNEL);
+	kunit_try_catch_init(ctx->try_catch);
+	ctx->try_catch->context.test = test;
+
+	return 0;
+}
+
+static struct kunit_case kunit_try_catch_test_cases[] = {
+	KUNIT_CASE(kunit_test_try_catch_successful_try_no_catch),
+	KUNIT_CASE(kunit_test_try_catch_unsuccessful_try_does_catch),
+	KUNIT_CASE(kunit_test_generic_try_catch_successful_try_no_catch),
+	KUNIT_CASE(kunit_test_generic_try_catch_unsuccessful_try_does_catch),
+	{},
+};
+
+static struct kunit_module kunit_try_catch_test_module = {
+	.name = "kunit-try-catch-test",
+	.init = kunit_try_catch_test_init,
+	.test_cases = kunit_try_catch_test_cases,
+};
+module_test(kunit_try_catch_test_module);
diff --git a/kunit/test.c b/kunit/test.c
index d18c50d5ed671..6e5244642ab07 100644
--- a/kunit/test.c
+++ b/kunit/test.c
@@ -6,9 +6,9 @@
  * Author: Brendan Higgins <brendanhiggins at google.com>
  */
 
-#include <linux/sched.h>
 #include <linux/sched/debug.h>
-#include <os.h>
+#include <linux/completion.h>
+#include <linux/kthread.h>
 #include <kunit/test.h>
 
 static bool kunit_get_success(struct kunit *test)
@@ -32,6 +32,27 @@ static void kunit_set_success(struct kunit *test, bool success)
 	spin_unlock_irqrestore(&test->lock, flags);
 }
 
+static bool kunit_get_death_test(struct kunit *test)
+{
+	unsigned long flags;
+	bool death_test;
+
+	spin_lock_irqsave(&test->lock, flags);
+	death_test = test->death_test;
+	spin_unlock_irqrestore(&test->lock, flags);
+
+	return death_test;
+}
+
+static void kunit_set_death_test(struct kunit *test, bool death_test)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&test->lock, flags);
+	test->death_test = death_test;
+	spin_unlock_irqrestore(&test->lock, flags);
+}
+
 static int kunit_vprintk_emit(const struct kunit *test,
 			      int level,
 			      const char *fmt,
@@ -70,13 +91,29 @@ static void kunit_fail(struct kunit *test, struct kunit_stream *stream)
 	stream->commit(stream);
 }
 
+static void __noreturn kunit_abort(struct kunit *test)
+{
+	kunit_set_death_test(test, true);
+
+	test->try_catch.throw(&test->try_catch);
+
+	/*
+	 * Throw could not abort from test.
+	 */
+	kunit_err(test, "Throw could not abort from test!");
+	show_stack(NULL, NULL);
+	BUG();
+}
+
 int kunit_init_test(struct kunit *test, const char *name)
 {
 	spin_lock_init(&test->lock);
 	INIT_LIST_HEAD(&test->resources);
 	test->name = name;
+	test->set_death_test = kunit_set_death_test;
 	test->vprintk = kunit_vprintk;
 	test->fail = kunit_fail;
+	test->abort = kunit_abort;
 
 	return 0;
 }
@@ -122,16 +159,171 @@ static void kunit_run_case_cleanup(struct kunit *test,
 }
 
 /*
- * Performs all logic to run a test case.
+ * Handles an unexpected crash in a test case.
  */
-static bool kunit_run_case(struct kunit *test,
-			   struct kunit_module *module,
-			   struct kunit_case *test_case)
+static void kunit_handle_test_crash(struct kunit *test,
+				   struct kunit_module *module,
+				   struct kunit_case *test_case)
 {
-	kunit_set_success(test, true);
+	kunit_err(test, "%s crashed", test_case->name);
+	/*
+	 * TODO(brendanhiggins at google.com): This prints the stack trace up
+	 * through this frame, not up to the frame that caused the crash.
+	 */
+	show_stack(NULL, NULL);
+
+	kunit_case_internal_cleanup(test);
+}
+
+static void kunit_generic_throw(struct kunit_try_catch *try_catch)
+{
+	try_catch->context.try_result = -EFAULT;
+	complete_and_exit(try_catch->context.try_completion, -EFAULT);
+}
+
+static int kunit_generic_run_threadfn_adapter(void *data)
+{
+	struct kunit_try_catch *try_catch = data;
 
+	try_catch->try(&try_catch->context);
+
+	complete_and_exit(try_catch->context.try_completion, 0);
+}
+
+static void kunit_generic_run_try_catch(struct kunit_try_catch *try_catch)
+{
+	struct task_struct *task_struct;
+	struct kunit *test = try_catch->context.test;
+	int exit_code, wake_result;
+	DECLARE_COMPLETION(test_case_completion);
+
+	try_catch->context.try_completion = &test_case_completion;
+	try_catch->context.try_result = 0;
+	task_struct = kthread_create(kunit_generic_run_threadfn_adapter,
+					     try_catch,
+					     "kunit_try_catch_thread");
+	if (IS_ERR_OR_NULL(task_struct)) {
+		try_catch->catch(&try_catch->context);
+		return;
+	}
+
+	wake_result = wake_up_process(task_struct);
+	if (wake_result != 0 && wake_result != 1) {
+		kunit_err(test, "task was not woken properly: %d", wake_result);
+		try_catch->catch(&try_catch->context);
+	}
+
+	/*
+	 * TODO(brendanhiggins at google.com): We should probably have some type of
+	 * timeout here. The only question is what that timeout value should be.
+	 *
+	 * The intention has always been, at some point, to be able to label
+	 * tests with some type of size bucket (unit/small, integration/medium,
+	 * large/system/end-to-end, etc), where each size bucket would get a
+	 * default timeout value kind of like what Bazel does:
+	 * https://docs.bazel.build/versions/master/be/common-definitions.html#test.size
+	 * There is still some debate to be had on exactly how we do this. (For
+	 * one, we probably want to have some sort of test runner level
+	 * timeout.)
+	 *
+	 * For more background on this topic, see:
+	 * https://mike-bland.com/2011/11/01/small-medium-large.html
+	 */
+	wait_for_completion(&test_case_completion);
+
+	exit_code = try_catch->context.try_result;
+	if (exit_code == -EFAULT)
+		try_catch->catch(&try_catch->context);
+	else if (exit_code == -EINTR)
+		kunit_err(test, "wake_up_process() was never called.");
+	else if (exit_code)
+		kunit_err(test, "Unknown error: %d", exit_code);
+}
+
+void kunit_generic_try_catch_init(struct kunit_try_catch *try_catch)
+{
+	try_catch->run = kunit_generic_run_try_catch;
+	try_catch->throw = kunit_generic_throw;
+}
+
+void __weak kunit_try_catch_init(struct kunit_try_catch *try_catch)
+{
+	kunit_generic_try_catch_init(try_catch);
+}
+
+static void kunit_try_run_case(struct kunit_try_catch_context *context)
+{
+	struct kunit_try_catch_context *ctx = context;
+	struct kunit *test = ctx->test;
+	struct kunit_module *module = ctx->module;
+	struct kunit_case *test_case = ctx->test_case;
+
+	/*
+	 * kunit_run_case_internal may encounter a fatal error; if it does, the
+	 * test thread aborts via try_catch->throw() and the catch handler runs
+	 * instead of continuing normal control flow.
+	 */
 	kunit_run_case_internal(test, module, test_case);
+	/* This line may never be reached. */
 	kunit_run_case_cleanup(test, module, test_case);
+}
+
+static void kunit_catch_run_case(struct kunit_try_catch_context *context)
+{
+	struct kunit_try_catch_context *ctx = context;
+	struct kunit *test = ctx->test;
+	struct kunit_module *module = ctx->module;
+	struct kunit_case *test_case = ctx->test_case;
+
+	if (kunit_get_death_test(test)) {
+		/*
+		 * EXPECTED DEATH: kunit_run_case_internal encountered an
+		 * anticipated fatal error. Everything should be in a safe
+		 * state.
+		 */
+		kunit_run_case_cleanup(test, module, test_case);
+	} else {
+		/*
+		 * UNEXPECTED DEATH: kunit_run_case_internal encountered an
+		 * unanticipated fatal error. We have no idea what state the
+		 * test case is in.
+		 */
+		kunit_handle_test_crash(test, module, test_case);
+		kunit_set_success(test, false);
+	}
+}
+
+/*
+ * Performs all logic to run a test case. It also catches most errors that
+ * occur in a test case and reports them as failures.
+ *
+ * XXX: THIS DOES NOT FOLLOW NORMAL CONTROL FLOW. READ CAREFULLY!!!
+ */
+static bool kunit_run_case_catch_errors(struct kunit *test,
+				       struct kunit_module *module,
+				       struct kunit_case *test_case)
+{
+	struct kunit_try_catch *try_catch = &test->try_catch;
+	struct kunit_try_catch_context *context = &try_catch->context;
+
+	kunit_try_catch_init(try_catch);
+
+	kunit_set_success(test, true);
+	kunit_set_death_test(test, false);
+
+	/*
+	 * ENTER HANDLER: If a failure occurs, we enter here.
+	 */
+	context->test = test;
+	context->module = module;
+	context->test_case = test_case;
+	try_catch->try = kunit_try_run_case;
+	try_catch->catch = kunit_catch_run_case;
+	try_catch->run(try_catch);
+	/*
+	 * EXIT HANDLER: test case has been run and all possible errors have
+	 * been handled.
+	 */
 
 	return kunit_get_success(test);
 }
@@ -148,7 +340,7 @@ int kunit_run_tests(struct kunit_module *module)
 		return ret;
 
 	for (test_case = module->test_cases; test_case->run_case; test_case++) {
-		success = kunit_run_case(&test, module, test_case);
+		success = kunit_run_case_catch_errors(&test, module, test_case);
 		if (!success)
 			all_passed = false;
 
-- 
2.21.0.rc0.258.g878e2cd30e-goog

^ permalink raw reply related	[flat|nested] 316+ messages in thread

* [RFC v4 08/17] kunit: test: add support for test abort
@ 2019-02-14 21:37     ` Brendan Higgins
  0 siblings, 0 replies; 316+ messages in thread
From: Brendan Higgins @ 2019-02-14 21:37 UTC (permalink / raw)
  To: keescook, mcgrof, shuah, robh, kieran.bingham, frowand.list
  Cc: brakmo, pmladek, amir73il, Brendan Higgins, dri-devel,
	Alexander.Levin, linux-kselftest, linux-nvdimm, richard,
	knut.omang, wfg, joel, jdike, dan.carpenter, devicetree,
	Tim.Bird, linux-um, rostedt, julia.lawall, dan.j.williams,
	kunit-dev, gregkh, linux-kernel, daniel, mpe, joe, khilman

Add support for aborting/bailing out of test cases. Needed for
implementing assertions.
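
For context only (an illustration, not part of this patch): a fatal check
layered on top of this API is expected to record the failure and then call the
new abort hook, roughly as below; example_fatal_check() is a hypothetical
helper, and the next patch adds the real KUNIT_ASSERT_* macros that follow
this pattern:

	static void example_fatal_check(struct kunit *test, bool ok,
					struct kunit_stream *stream)
	{
		if (ok) {
			stream->clear(stream);
			return;
		}
		test->fail(test, stream);	/* record the failure */
		test->abort(test);		/* does not return */
	}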

Signed-off-by: Brendan Higgins <brendanhiggins@google.com>
---
Changes Since Last Version
 - This patch is new; it introduces a cross-architecture way to abort out of
   a test case (needed for KUNIT_ASSERT_*, see the next patch for details).
 - On a side note, this is not a complete replacement for the UML abort
   mechanism, but it covers the majority of the necessary functionality. UML
   architecture-specific features have been dropped from the initial
   patchset; a sketch of how an architecture could hook in follows below.
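
   A hypothetical sketch of that hook (not part of this patch): an
   architecture with a better abort primitive can override the __weak
   kunit_try_catch_init(); the default simply falls back to the generic
   kthread-based implementation:

	/* Hypothetical arch override; for now it just delegates. */
	void kunit_try_catch_init(struct kunit_try_catch *try_catch)
	{
		/* An arch could install its own run()/throw() here instead. */
		kunit_generic_try_catch_init(try_catch);
	}
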
---
 include/kunit/test.h |  24 +++++
 kunit/Makefile       |   3 +-
 kunit/test-test.c    | 127 ++++++++++++++++++++++++++
 kunit/test.c         | 208 +++++++++++++++++++++++++++++++++++++++++--
 4 files changed, 353 insertions(+), 9 deletions(-)
 create mode 100644 kunit/test-test.c

diff --git a/include/kunit/test.h b/include/kunit/test.h
index a36ad1a502c66..cd02dca96eb61 100644
--- a/include/kunit/test.h
+++ b/include/kunit/test.h
@@ -151,6 +151,26 @@ struct kunit_module {
 	struct kunit_case *test_cases;
 };
 
+struct kunit_try_catch_context {
+	struct kunit *test;
+	struct kunit_module *module;
+	struct kunit_case *test_case;
+	struct completion *try_completion;
+	int try_result;
+};
+
+struct kunit_try_catch {
+	void (*run)(struct kunit_try_catch *try_catch);
+	void (*throw)(struct kunit_try_catch *try_catch);
+	struct kunit_try_catch_context context;
+	void (*try)(struct kunit_try_catch_context *context);
+	void (*catch)(struct kunit_try_catch_context *context);
+};
+
+void kunit_try_catch_init(struct kunit_try_catch *try_catch);
+
+void kunit_generic_try_catch_init(struct kunit_try_catch *try_catch);
+
 /**
  * struct kunit - represents a running instance of a test.
  * @priv: for user to store arbitrary data. Commonly used to pass data created
@@ -166,13 +186,17 @@ struct kunit {
 
 	/* private: internal use only. */
 	const char *name; /* Read only after initialization! */
+	struct kunit_try_catch try_catch;
 	spinlock_t lock; /* Guards all mutable test state. */
 	bool success; /* Protected by lock. */
+	bool death_test; /* Protected by lock. */
 	struct list_head resources; /* Protected by lock. */
+	void (*set_death_test)(struct kunit *test, bool death_test);
 	void (*vprintk)(const struct kunit *test,
 			const char *level,
 			struct va_format *vaf);
 	void (*fail)(struct kunit *test, struct kunit_stream *stream);
+	void (*abort)(struct kunit *test);
 };
 
 int kunit_init_test(struct kunit *test, const char *name);
diff --git a/kunit/Makefile b/kunit/Makefile
index 60a9ea6cb4697..e4c300f67479a 100644
--- a/kunit/Makefile
+++ b/kunit/Makefile
@@ -2,6 +2,7 @@ obj-$(CONFIG_KUNIT) +=			test.o \
 					string-stream.o \
 					kunit-stream.o
 
-obj-$(CONFIG_KUNIT_TEST) +=		string-stream-test.o
+obj-$(CONFIG_KUNIT_TEST) +=		test-test.o \
+					string-stream-test.o
 
 obj-$(CONFIG_KUNIT_EXAMPLE_TEST) +=	example-test.o
diff --git a/kunit/test-test.c b/kunit/test-test.c
new file mode 100644
index 0000000000000..a936c041f1c8f
--- /dev/null
+++ b/kunit/test-test.c
@@ -0,0 +1,127 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * KUnit test for core test infrastructure.
+ *
+ * Copyright (C) 2019, Google LLC.
+ * Author: Brendan Higgins <brendanhiggins@google.com>
+ */
+#include <kunit/test.h>
+
+struct kunit_try_catch_test_context {
+	struct kunit_try_catch *try_catch;
+	bool function_called;
+};
+
+void kunit_test_successful_try(struct kunit_try_catch_context *context)
+{
+	struct kunit_try_catch_test_context *ctx = context->test->priv;
+
+	ctx->function_called = true;
+}
+
+void kunit_test_no_catch(struct kunit_try_catch_context *context)
+{
+	KUNIT_FAIL(context->test, "Catch should not be called.");
+}
+
+static void kunit_test_try_catch_successful_try_no_catch(struct kunit *test)
+{
+	struct kunit_try_catch_test_context *ctx = test->priv;
+	struct kunit_try_catch *try_catch = ctx->try_catch;
+
+	try_catch->try = kunit_test_successful_try;
+	try_catch->catch = kunit_test_no_catch;
+	try_catch->run(try_catch);
+
+	KUNIT_EXPECT_TRUE(test, ctx->function_called);
+}
+
+void kunit_test_unsuccessful_try(struct kunit_try_catch_context *context)
+{
+	struct kunit_try_catch *try_catch = container_of(context,
+							 struct kunit_try_catch,
+							 context);
+
+	try_catch->throw(try_catch);
+	KUNIT_FAIL(context->test, "This line should never be reached.");
+}
+
+void kunit_test_catch(struct kunit_try_catch_context *context)
+{
+	struct kunit_try_catch_test_context *ctx = context->test->priv;
+
+	ctx->function_called = true;
+}
+
+static void kunit_test_try_catch_unsuccessful_try_does_catch(struct kunit *test)
+{
+	struct kunit_try_catch_test_context *ctx = test->priv;
+	struct kunit_try_catch *try_catch = ctx->try_catch;
+
+	try_catch->try = kunit_test_unsuccessful_try;
+	try_catch->catch = kunit_test_catch;
+	try_catch->run(try_catch);
+
+	KUNIT_EXPECT_TRUE(test, ctx->function_called);
+}
+
+static void kunit_test_generic_try_catch_successful_try_no_catch(
+		struct kunit *test)
+{
+	struct kunit_try_catch_test_context *ctx = test->priv;
+	struct kunit_try_catch *try_catch = ctx->try_catch;
+
+	kunit_generic_try_catch_init(try_catch);
+
+	try_catch->try = kunit_test_successful_try;
+	try_catch->catch = kunit_test_no_catch;
+	try_catch->run(try_catch);
+
+	KUNIT_EXPECT_TRUE(test, ctx->function_called);
+}
+
+static void kunit_test_generic_try_catch_unsuccessful_try_does_catch(
+		struct kunit *test)
+{
+	struct kunit_try_catch_test_context *ctx = test->priv;
+	struct kunit_try_catch *try_catch = ctx->try_catch;
+
+	kunit_generic_try_catch_init(try_catch);
+
+	try_catch->try = kunit_test_unsuccessful_try;
+	try_catch->catch = kunit_test_catch;
+	try_catch->run(try_catch);
+
+	KUNIT_EXPECT_TRUE(test, ctx->function_called);
+}
+
+static int kunit_try_catch_test_init(struct kunit *test)
+{
+	struct kunit_try_catch_test_context *ctx;
+
+	ctx = kunit_kzalloc(test, sizeof(*ctx), GFP_KERNEL);
+	test->priv = ctx;
+
+	ctx->try_catch = kunit_kmalloc(test,
+				       sizeof(*ctx->try_catch),
+				       GFP_KERNEL);
+	kunit_try_catch_init(ctx->try_catch);
+	ctx->try_catch->context.test = test;
+
+	return 0;
+}
+
+static struct kunit_case kunit_try_catch_test_cases[] = {
+	KUNIT_CASE(kunit_test_try_catch_successful_try_no_catch),
+	KUNIT_CASE(kunit_test_try_catch_unsuccessful_try_does_catch),
+	KUNIT_CASE(kunit_test_generic_try_catch_successful_try_no_catch),
+	KUNIT_CASE(kunit_test_generic_try_catch_unsuccessful_try_does_catch),
+	{},
+};
+
+static struct kunit_module kunit_try_catch_test_module = {
+	.name = "kunit-try-catch-test",
+	.init = kunit_try_catch_test_init,
+	.test_cases = kunit_try_catch_test_cases,
+};
+module_test(kunit_try_catch_test_module);
diff --git a/kunit/test.c b/kunit/test.c
index d18c50d5ed671..6e5244642ab07 100644
--- a/kunit/test.c
+++ b/kunit/test.c
@@ -6,9 +6,9 @@
  * Author: Brendan Higgins <brendanhiggins@google.com>
  */
 
-#include <linux/sched.h>
 #include <linux/sched/debug.h>
-#include <os.h>
+#include <linux/completion.h>
+#include <linux/kthread.h>
 #include <kunit/test.h>
 
 static bool kunit_get_success(struct kunit *test)
@@ -32,6 +32,27 @@ static void kunit_set_success(struct kunit *test, bool success)
 	spin_unlock_irqrestore(&test->lock, flags);
 }
 
+static bool kunit_get_death_test(struct kunit *test)
+{
+	unsigned long flags;
+	bool death_test;
+
+	spin_lock_irqsave(&test->lock, flags);
+	death_test = test->death_test;
+	spin_unlock_irqrestore(&test->lock, flags);
+
+	return death_test;
+}
+
+static void kunit_set_death_test(struct kunit *test, bool death_test)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&test->lock, flags);
+	test->death_test = death_test;
+	spin_unlock_irqrestore(&test->lock, flags);
+}
+
 static int kunit_vprintk_emit(const struct kunit *test,
 			      int level,
 			      const char *fmt,
@@ -70,13 +91,29 @@ static void kunit_fail(struct kunit *test, struct kunit_stream *stream)
 	stream->commit(stream);
 }
 
+static void __noreturn kunit_abort(struct kunit *test)
+{
+	kunit_set_death_test(test, true);
+
+	test->try_catch.throw(&test->try_catch);
+
+	/*
+	 * Throw could not abort from test.
+	 */
+	kunit_err(test, "Throw could not abort from test!");
+	show_stack(NULL, NULL);
+	BUG();
+}
+
 int kunit_init_test(struct kunit *test, const char *name)
 {
 	spin_lock_init(&test->lock);
 	INIT_LIST_HEAD(&test->resources);
 	test->name = name;
+	test->set_death_test = kunit_set_death_test;
 	test->vprintk = kunit_vprintk;
 	test->fail = kunit_fail;
+	test->abort = kunit_abort;
 
 	return 0;
 }
@@ -122,16 +159,171 @@ static void kunit_run_case_cleanup(struct kunit *test,
 }
 
 /*
- * Performs all logic to run a test case.
+ * Handles an unexpected crash in a test case.
  */
-static bool kunit_run_case(struct kunit *test,
-			   struct kunit_module *module,
-			   struct kunit_case *test_case)
+static void kunit_handle_test_crash(struct kunit *test,
+				   struct kunit_module *module,
+				   struct kunit_case *test_case)
 {
-	kunit_set_success(test, true);
+	kunit_err(test, "%s crashed", test_case->name);
+	/*
+	 * TODO(brendanhiggins@google.com): This prints the stack trace up
+	 * through this frame, not up to the frame that caused the crash.
+	 */
+	show_stack(NULL, NULL);
+
+	kunit_case_internal_cleanup(test);
+}
+
+static void kunit_generic_throw(struct kunit_try_catch *try_catch)
+{
+	try_catch->context.try_result = -EFAULT;
+	complete_and_exit(try_catch->context.try_completion, -EFAULT);
+}
+
+static int kunit_generic_run_threadfn_adapter(void *data)
+{
+	struct kunit_try_catch *try_catch = data;
 
+	try_catch->try(&try_catch->context);
+
+	complete_and_exit(try_catch->context.try_completion, 0);
+}
+
+static void kunit_generic_run_try_catch(struct kunit_try_catch *try_catch)
+{
+	struct task_struct *task_struct;
+	struct kunit *test = try_catch->context.test;
+	int exit_code, wake_result;
+	DECLARE_COMPLETION(test_case_completion);
+
+	try_catch->context.try_completion = &test_case_completion;
+	try_catch->context.try_result = 0;
+	task_struct = kthread_create(kunit_generic_run_threadfn_adapter,
+					     try_catch,
+					     "kunit_try_catch_thread");
+	if (IS_ERR_OR_NULL(task_struct)) {
+		try_catch->catch(&try_catch->context);
+		return;
+	}
+
+	wake_result = wake_up_process(task_struct);
+	if (wake_result != 0 && wake_result != 1) {
+		kunit_err(test, "task was not woken properly: %d", wake_result);
+		try_catch->catch(&try_catch->context);
+	}
+
+	/*
+	 * TODO(brendanhiggins@google.com): We should probably have some type of
+	 * timeout here. The only question is what that timeout value should be.
+	 *
+	 * The intention has always been, at some point, to be able to label
+	 * tests with some type of size bucket (unit/small, integration/medium,
+	 * large/system/end-to-end, etc), where each size bucket would get a
+	 * default timeout value kind of like what Bazel does:
+	 * https://docs.bazel.build/versions/master/be/common-definitions.html#test.size
+	 * There is still some debate to be had on exactly how we do this. (For
+	 * one, we probably want to have some sort of test runner level
+	 * timeout.)
+	 *
+	 * For more background on this topic, see:
+	 * https://mike-bland.com/2011/11/01/small-medium-large.html
+	 */
+	wait_for_completion(&test_case_completion);
+
+	exit_code = try_catch->context.try_result;
+	if (exit_code == -EFAULT)
+		try_catch->catch(&try_catch->context);
+	else if (exit_code == -EINTR)
+		kunit_err(test, "wake_up_process() was never called.");
+	else if (exit_code)
+		kunit_err(test, "Unknown error: %d", exit_code);
+}
+
+void kunit_generic_try_catch_init(struct kunit_try_catch *try_catch)
+{
+	try_catch->run = kunit_generic_run_try_catch;
+	try_catch->throw = kunit_generic_throw;
+}
+
+void __weak kunit_try_catch_init(struct kunit_try_catch *try_catch)
+{
+	kunit_generic_try_catch_init(try_catch);
+}
+
+static void kunit_try_run_case(struct kunit_try_catch_context *context)
+{
+	struct kunit_try_catch_context *ctx = context;
+	struct kunit *test = ctx->test;
+	struct kunit_module *module = ctx->module;
+	struct kunit_case *test_case = ctx->test_case;
+
+	/*
+	 * kunit_run_case_internal may encounter a fatal error; if it does, the
+	 * test thread aborts via try_catch->throw() and the catch handler runs
+	 * instead of continuing normal control flow.
+	 */
 	kunit_run_case_internal(test, module, test_case);
+	/* This line may never be reached. */
 	kunit_run_case_cleanup(test, module, test_case);
+}
+
+static void kunit_catch_run_case(struct kunit_try_catch_context *context)
+{
+	struct kunit_try_catch_context *ctx = context;
+	struct kunit *test = ctx->test;
+	struct kunit_module *module = ctx->module;
+	struct kunit_case *test_case = ctx->test_case;
+
+	if (kunit_get_death_test(test)) {
+		/*
+		 * EXPECTED DEATH: kunit_run_case_internal encountered an
+		 * anticipated fatal error. Everything should be in a safe
+		 * state.
+		 */
+		kunit_run_case_cleanup(test, module, test_case);
+	} else {
+		/*
+		 * UNEXPECTED DEATH: kunit_run_case_internal encountered an
+		 * unanticipated fatal error. We have no idea what state the
+		 * test case is in.
+		 */
+		kunit_handle_test_crash(test, module, test_case);
+		kunit_set_success(test, false);
+	}
+}
+
+/*
+ * Performs all logic to run a test case. It also catches most errors that
+ * occur in a test case and reports them as failures.
+ *
+ * XXX: THIS DOES NOT FOLLOW NORMAL CONTROL FLOW. READ CAREFULLY!!!
+ */
+static bool kunit_run_case_catch_errors(struct kunit *test,
+				       struct kunit_module *module,
+				       struct kunit_case *test_case)
+{
+	struct kunit_try_catch *try_catch = &test->try_catch;
+	struct kunit_try_catch_context *context = &try_catch->context;
+
+	kunit_try_catch_init(try_catch);
+
+	kunit_set_success(test, true);
+	kunit_set_death_test(test, false);
+
+	/*
+	 * ENTER HANDLER: If a failure occurs, we enter here.
+	 */
+	context->test = test;
+	context->module = module;
+	context->test_case = test_case;
+	try_catch->try = kunit_try_run_case;
+	try_catch->catch = kunit_catch_run_case;
+	try_catch->run(try_catch);
+	/*
+	 * EXIT HANDLER: test case has been run and all possible errors have
+	 * been handled.
+	 */
 
 	return kunit_get_success(test);
 }
@@ -148,7 +340,7 @@ int kunit_run_tests(struct kunit_module *module)
 		return ret;
 
 	for (test_case = module->test_cases; test_case->run_case; test_case++) {
-		success = kunit_run_case(&test, module, test_case);
+		success = kunit_run_case_catch_errors(&test, module, test_case);
 		if (!success)
 			all_passed = false;
 
-- 
2.21.0.rc0.258.g878e2cd30e-goog


_______________________________________________
linux-um mailing list
linux-um@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-um


^ permalink raw reply related	[flat|nested] 316+ messages in thread

* [RFC v4 09/17] kunit: test: add the concept of assertions
  2019-02-14 21:37 ` brendanhiggins
                       ` (2 preceding siblings ...)
  (?)
@ 2019-02-14 21:37     ` Brendan Higgins
  -1 siblings, 0 replies; 316+ messages in thread
From: Brendan Higgins @ 2019-02-14 21:37 UTC (permalink / raw)
  To: keescook-hpIqsD4AKlfQT0dZR+AlfA, mcgrof-DgEjT+Ai2ygdnm+yROfE0A,
	shuah-DgEjT+Ai2ygdnm+yROfE0A, robh-DgEjT+Ai2ygdnm+yROfE0A,
	kieran.bingham-ryLnwIuWjnjg/C1BVhZhaw,
	frowand.list-Re5JQEeQqe8AvxtiuMwx3w
  Cc: brakmo-b10kYP2dOMg, pmladek-IBi9RG/b67k,
	amir73il-Re5JQEeQqe8AvxtiuMwx3w, Brendan Higgins,
	dri-devel-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW,
	Alexander.Levin-0li6OtcxBFHby3iVrkZq2A,
	linux-kselftest-u79uwXL29TY76Z2rM5mHXA,
	linux-nvdimm-hn68Rpc1hR1g9hUCZPvPmw, richard-/L3Ra7n9ekc,
	knut.omang-QHcLZuEGTsvQT0dZR+AlfA, wfg-VuQAYsv1563Yd54FQh9/CA,
	joel-U3u1mxZcP9KHXe+LvDLADg, jdike-OPE4K8JWMJJBDgjK7y7TUQ,
	dan.carpenter-QHcLZuEGTsvQT0dZR+AlfA,
	devicetree-u79uwXL29TY76Z2rM5mHXA, Tim.Bird-7U/KSKJipcs,
	linux-um-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r,
	rostedt-nx8X9YLhiw1AfugRpC6u6w, julia.lawall-L2FTfq7BK8M,
	kunit-dev-/JYPxA39Uh5TLH3MbocFFw,
	gregkh-hQyY1W1yCW8ekmWlsbkhG0B+6BGkLq7r,
	linux-kernel-u79uwXL29TY76Z2rM5mHXA, daniel-/w4YWyX8dFk,
	mpe-Gsx/Oe8HsFggBc27wqDAHg, joe-6d6DIl74uiNBDgjK7y7TUQ,
	khilman-rdvid1DuHRBWk0Htik3J/w

Add support for assertions, which are like expectations except that the test
terminates immediately if the assertion is not satisfied.
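
For illustration (not part of the patch): a test case written against these
macros might look roughly like the sketch below; example_assert_test() is a
hypothetical test, and the point is that the assertion aborts the case before
the expectation could dereference a NULL pointer:

	#include <kunit/test.h>

	static void example_assert_test(struct kunit *test)
	{
		char *buffer = kunit_kzalloc(test, 8, GFP_KERNEL);

		/* Abort this test case immediately if the allocation failed. */
		KUNIT_ASSERT_TRUE(test, buffer != NULL);

		/* Only reached when buffer is valid. */
		KUNIT_EXPECT_EQ(test, buffer[0], (char)0);
	}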

Signed-off-by: Brendan Higgins <brendanhiggins-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org>
---
 include/kunit/test.h       | 397 ++++++++++++++++++++++++++++++++++++-
 kunit/string-stream-test.c |  12 +-
 kunit/test-test.c          |   2 +
 kunit/test.c               |  33 +++
 4 files changed, 435 insertions(+), 9 deletions(-)

diff --git a/include/kunit/test.h b/include/kunit/test.h
index cd02dca96eb61..c42c67a9729fd 100644
--- a/include/kunit/test.h
+++ b/include/kunit/test.h
@@ -84,9 +84,10 @@ struct kunit;
  * @name: the name of the test case.
  *
  * A test case is a function with the signature, ``void (*)(struct kunit *)``
- * that makes expectations (see KUNIT_EXPECT_TRUE()) about code under test. Each
- * test case is associated with a &struct kunit_module and will be run after the
- * module's init function and followed by the module's exit function.
+ * that makes expectations and assertions (see KUNIT_EXPECT_TRUE() and
+ * KUNIT_ASSERT_TRUE()) about code under test. Each test case is associated with
+ * a &struct kunit_module and will be run after the module's init function and
+ * followed by the module's exit function.
  *
  * A test case should be static and should only be created with the KUNIT_CASE()
  * macro; additionally, every array of test cases should be terminated with an
@@ -712,4 +713,394 @@ static inline void kunit_expect_binary(struct kunit *test,
 	KUNIT_EXPECT_END(test, !IS_ERR_OR_NULL(__ptr), __stream);	       \
 } while (0)
 
+static inline struct kunit_stream *kunit_assert_start(struct kunit *test,
+						    const char *file,
+						    const char *line)
+{
+	struct kunit_stream *stream = kunit_new_stream(test);
+
+	stream->add(stream, "ASSERTION FAILED at %s:%s\n\t", file, line);
+
+	return stream;
+}
+
+static inline void kunit_assert_end(struct kunit *test,
+				   bool success,
+				   struct kunit_stream *stream)
+{
+	if (!success) {
+		test->fail(test, stream);
+		test->abort(test);
+	} else {
+		stream->clear(stream);
+	}
+}
+
+#define KUNIT_ASSERT_START(test) \
+		kunit_assert_start(test, __FILE__, __stringify(__LINE__))
+
+#define KUNIT_ASSERT_END(test, success, stream) \
+		kunit_assert_end(test, success, stream)
+
+#define KUNIT_ASSERT(test, success, message) do {			       \
+	struct kunit_stream *__stream = KUNIT_ASSERT_START(test);	       \
+									       \
+	__stream->add(__stream, message);				       \
+	KUNIT_ASSERT_END(test, success, __stream);			       \
+} while (0)
+
+#define KUNIT_ASSERT_MSG(test, success, message, fmt, ...) do {		       \
+	struct kunit_stream *__stream = KUNIT_ASSERT_START(test);	       \
+									       \
+	__stream->add(__stream, message);				       \
+	__stream->add(__stream, fmt, ##__VA_ARGS__);			       \
+	KUNIT_ASSERT_END(test, success, __stream);			       \
+} while (0)
+
+#define KUNIT_ASSERT_FAILURE(test, fmt, ...) \
+		KUNIT_ASSERT_MSG(test, false, "", fmt, ##__VA_ARGS__)
+
+/**
+ * KUNIT_ASSERT_TRUE() - Sets an assertion that @condition is true.
+ * @test: The test context object.
+ * @condition: an arbitrary boolean expression. The test fails and aborts when
+ * this does not evaluate to true.
+ *
+ * This and assertions of the form `KUNIT_ASSERT_*` will cause the test case to
+ * fail *and immediately abort* when the specified condition is not met. Unlike
+ * an expectation failure, it will prevent the test case from continuing to run;
+ * this is otherwise known as an *assertion failure*.
+ */
+#define KUNIT_ASSERT_TRUE(test, condition)				       \
+		KUNIT_ASSERT(test, (condition),				       \
+		       "Asserted " #condition " is true, but is false.")
+
+#define KUNIT_ASSERT_TRUE_MSG(test, condition, fmt, ...)		       \
+		KUNIT_ASSERT_MSG(test, (condition),			       \
+				"Asserted " #condition " is true, but is false.\n",\
+				fmt, ##__VA_ARGS__)
+
+/**
+ * KUNIT_ASSERT_FALSE() - Sets an assertion that @condition is false.
+ * @test: The test context object.
+ * @condition: an arbitrary boolean expression.
+ *
+ * Sets an assertion that the value that @condition evaluates to is false. This
+ * is the same as KUNIT_EXPECT_FALSE(), except it causes an assertion failure
+ * (see KUNIT_ASSERT_TRUE()) when the assertion is not met.
+ */
+#define KUNIT_ASSERT_FALSE(test, condition)				       \
+		KUNIT_ASSERT(test, !(condition),			       \
+		       "Asserted " #condition " is false, but is true.")
+
+#define KUNIT_ASSERT_FALSE_MSG(test, condition, fmt, ...)		       \
+		KUNIT_ASSERT_MSG(test, !(condition),			       \
+				"Asserted " #condition " is false, but is true.\n",\
+				fmt, ##__VA_ARGS__)
+
+void kunit_assert_binary_msg(struct kunit *test,
+			    long long left, const char *left_name,
+			    long long right, const char *right_name,
+			    bool compare_result,
+			    const char *compare_name,
+			    const char *file,
+			    const char *line,
+			    const char *fmt, ...);
+
+static inline void kunit_assert_binary(struct kunit *test,
+				      long long left, const char *left_name,
+				      long long right, const char *right_name,
+				      bool compare_result,
+				      const char *compare_name,
+				      const char *file,
+				      const char *line)
+{
+	kunit_assert_binary_msg(test,
+			       left, left_name,
+			       right, right_name,
+			       compare_result,
+			       compare_name,
+			       file,
+			       line,
+			       NULL);
+}
+
+/*
+ * A factory macro for defining the expectations for the basic comparisons
+ * defined for the built in types.
+ *
+ * Unfortunately, there is no common type that all types can be promoted to for
+ * which all the binary operators behave the same way as for the actual types
+ * (for example, there is no type that long long and unsigned long long can
+ * both be cast to where the comparison result is preserved for all values). So
+ * the best we can do is do the comparison in the original types and then coerce
+ * everything to long long for printing; this way, the comparison behaves
+ * correctly and the printed out value usually makes sense without
+ * interpretation, but can always be interpreted to figure out the actual
+ * value.
+ */
+#define KUNIT_ASSERT_BINARY(test, left, condition, right) do {		       \
+	typeof(left) __left = (left);					       \
+	typeof(right) __right = (right);				       \
+	kunit_assert_binary(test,					       \
+			   (long long) __left, #left,			       \
+			   (long long) __right, #right,			       \
+			   __left condition __right, #condition,	       \
+			   __FILE__, __stringify(__LINE__));		       \
+} while (0)
+
+#define KUNIT_ASSERT_BINARY_MSG(test, left, condition, right, fmt, ...) do {   \
+	typeof(left) __left = (left);					       \
+	typeof(right) __right = (right);				       \
+	kunit_assert_binary_msg(test,					       \
+			       (long long) __left, #left,		       \
+			       (long long) __right, #right,		       \
+			       __left condition __right, #condition,	       \
+			       __FILE__, __stringify(__LINE__),		       \
+			       fmt, ##__VA_ARGS__);			       \
+} while (0)
+
+/**
+ * KUNIT_ASSERT_EQ() - Sets an assertion that @left and @right are equal.
+ * @test: The test context object.
+ * @left: an arbitrary expression that evaluates to a primitive C type.
+ * @right: an arbitrary expression that evaluates to a primitive C type.
+ *
+ * Sets an assertion that the values that @left and @right evaluate to are
+ * equal. This is the same as KUNIT_EXPECT_EQ(), except it causes an assertion
+ * failure (see KUNIT_ASSERT_TRUE()) when the assertion is not met.
+ */
+#define KUNIT_ASSERT_EQ(test, left, right) \
+		KUNIT_ASSERT_BINARY(test, left, ==, right)
+
+#define KUNIT_ASSERT_EQ_MSG(test, left, right, fmt, ...)		       \
+		KUNIT_ASSERT_BINARY_MSG(test,				       \
+					left,				       \
+					==,				       \
+					right,				       \
+					fmt,				       \
+					##__VA_ARGS__)
+
+/**
+ * KUNIT_ASSERT_NE() - An assertion that @left and @right are not equal.
+ * @test: The test context object.
+ * @left: an arbitrary expression that evaluates to a primitive C type.
+ * @right: an arbitrary expression that evaluates to a primitive C type.
+ *
+ * Sets an assertion that the values that @left and @right evaluate to are not
+ * equal. This is the same as KUNIT_EXPECT_NE(), except it causes an assertion
+ * failure (see KUNIT_ASSERT_TRUE()) when the assertion is not met.
+ */
+#define KUNIT_ASSERT_NE(test, left, right) \
+		KUNIT_ASSERT_BINARY(test, left, !=, right)
+
+#define KUNIT_ASSERT_NE_MSG(test, left, right, fmt, ...)		       \
+		KUNIT_ASSERT_BINARY_MSG(test,				       \
+					left,				       \
+					!=,				       \
+					right,				       \
+					fmt,				       \
+					##__VA_ARGS__)
+
+/**
+ * KUNIT_ASSERT_LT() - An assertion that @left is less than @right.
+ * @test: The test context object.
+ * @left: an arbitrary expression that evaluates to a primitive C type.
+ * @right: an arbitrary expression that evaluates to a primitive C type.
+ *
+ * Sets an assertion that the value that @left evaluates to is less than the
+ * value that @right evaluates to. This is the same as KUNIT_EXPECT_LT(), except
+ * it causes an assertion failure (see KUNIT_ASSERT_TRUE()) when the assertion
+ * is not met.
+ */
+#define KUNIT_ASSERT_LT(test, left, right) \
+		KUNIT_ASSERT_BINARY(test, left, <, right)
+
+#define KUNIT_ASSERT_LT_MSG(test, left, right, fmt, ...)		       \
+		KUNIT_ASSERT_BINARY_MSG(test,				       \
+					left,				       \
+					<,				       \
+					right,				       \
+					fmt,				       \
+					##__VA_ARGS__)
+/**
+ * KUNIT_ASSERT_LE() - An assertion that @left is less than or equal to @right.
+ * @test: The test context object.
+ * @left: an arbitrary expression that evaluates to a primitive C type.
+ * @right: an arbitrary expression that evaluates to a primitive C type.
+ *
+ * Sets an assertion that the value that @left evaluates to is less than or
+ * equal to the value that @right evaluates to. This is the same as
+ * KUNIT_EXPECT_LE(), except it causes an assertion failure (see
+ * KUNIT_ASSERT_TRUE()) when the assertion is not met.
+ */
+#define KUNIT_ASSERT_LE(test, left, right) \
+		KUNIT_ASSERT_BINARY(test, left, <=, right)
+
+#define KUNIT_ASSERT_LE_MSG(test, left, right, fmt, ...)		       \
+		KUNIT_ASSERT_BINARY_MSG(test,				       \
+					left,				       \
+					<=,				       \
+					right,				       \
+					fmt,				       \
+					##__VA_ARGS__)
+/**
+ * KUNIT_ASSERT_GT() - An assertion that @left is greater than @right.
+ * @test: The test context object.
+ * @left: an arbitrary expression that evaluates to a primitive C type.
+ * @right: an arbitrary expression that evaluates to a primitive C type.
+ *
+ * Sets an assertion that the value that @left evaluates to is greater than the
+ * value that @right evaluates to. This is the same as KUNIT_EXPECT_GT(), except
+ * it causes an assertion failure (see KUNIT_ASSERT_TRUE()) when the assertion
+ * is not met.
+ */
+#define KUNIT_ASSERT_GT(test, left, right) \
+		KUNIT_ASSERT_BINARY(test, left, >, right)
+
+#define KUNIT_ASSERT_GT_MSG(test, left, right, fmt, ...)		       \
+		KUNIT_ASSERT_BINARY_MSG(test,				       \
+					left,				       \
+					>,				       \
+					right,				       \
+					fmt,				       \
+					##__VA_ARGS__)
+
+/**
+ * KUNIT_ASSERT_GE() - Assertion that @left is greater than or equal to @right.
+ * @test: The test context object.
+ * @left: an arbitrary expression that evaluates to a primitive C type.
+ * @right: an arbitrary expression that evaluates to a primitive C type.
+ *
+ * Sets an assertion that the value that @left evaluates to is greater than or
+ * equal to the value that @right evaluates to. This is the same as
+ * KUNIT_EXPECT_GE(), except it causes an assertion failure (see
+ * KUNIT_ASSERT_TRUE()) when the assertion is not met.
+ */
+#define KUNIT_ASSERT_GE(test, left, right) \
+		KUNIT_ASSERT_BINARY(test, left, >=, right)
+
+#define KUNIT_ASSERT_GE_MSG(test, left, right, fmt, ...)		       \
+		KUNIT_ASSERT_BINARY_MSG(test,				       \
+					left,				       \
+					>=,				       \
+					right,				       \
+					fmt,				       \
+					##__VA_ARGS__)
+
+/**
+ * KUNIT_ASSERT_STREQ() - An assertion that strings @left and @right are equal.
+ * @test: The test context object.
+ * @left: an arbitrary expression that evaluates to a null terminated string.
+ * @right: an arbitrary expression that evaluates to a null terminated string.
+ *
+ * Sets an assertion that the values that @left and @right evaluate to are
+ * equal. This is the same as KUNIT_EXPECT_STREQ(), except it causes an
+ * assertion failure (see KUNIT_ASSERT_TRUE()) when the assertion is not met.
+ */
+#define KUNIT_ASSERT_STREQ(test, left, right) do {			       \
+	struct kunit_stream *__stream = KUNIT_ASSERT_START(test);	       \
+	typeof(left) __left = (left);					       \
+	typeof(right) __right = (right);				       \
+									       \
+	__stream->add(__stream, "Asserted " #left " == " #right ", but\n");    \
+	__stream->add(__stream, "\t\t%s == %s\n", #left, __left);	       \
+	__stream->add(__stream, "\t\t%s == %s\n", #right, __right);	       \
+									       \
+	KUNIT_ASSERT_END(test, !strcmp(left, right), __stream);		       \
+} while (0)
+
+#define KUNIT_ASSERT_STREQ_MSG(test, left, right, fmt, ...) do {	       \
+	struct kunit_stream *__stream = KUNIT_ASSERT_START(test);	       \
+	typeof(left) __left = (left);					       \
+	typeof(right) __right = (right);				       \
+									       \
+	__stream->add(__stream, "Asserted " #left " == " #right ", but\n");    \
+	__stream->add(__stream, "\t\t%s == %s\n", #left, __left);	       \
+	__stream->add(__stream, "\t\t%s == %s\n", #right, __right);	       \
+	__stream->add(__stream, fmt, ##__VA_ARGS__);			       \
+									       \
+	KUNIT_ASSERT_END(test, !strcmp(left, right), __stream);		       \
+} while (0)
+
+/**
+ * KUNIT_ASSERT_STRNEQ() - An assertion that strings @left and @right are not equal.
+ * @test: The test context object.
+ * @left: an arbitrary expression that evaluates to a null terminated string.
+ * @right: an arbitrary expression that evaluates to a null terminated string.
+ *
+ * Sets an assertion that the values that @left and @right evaluate to are
+ * not equal. This is semantically equivalent to
+ * KUNIT_ASSERT_TRUE(@test, strcmp((@left), (@right))). See KUNIT_ASSERT_TRUE()
+ * for more information.
+ */
+#define KUNIT_ASSERT_STRNEQ(test, left, right) do {			       \
+	struct kunit_stream *__stream = KUNIT_ASSERT_START(test);	       \
+	typeof(left) __left = (left);					       \
+	typeof(right) __right = (right);				       \
+									       \
+	__stream->add(__stream, "Asserted " #left " != " #right ", but\n");    \
+	__stream->add(__stream, "\t\t%s == %s\n", #left, __left);	       \
+	__stream->add(__stream, "\t\t%s == %s\n", #right, __right);	       \
+									       \
+	KUNIT_ASSERT_END(test, strcmp(left, right), __stream);		       \
+} while (0)
+
+#define KUNIT_ASSERT_STRNEQ_MSG(test, left, right, fmt, ...) do {	       \
+	struct kunit_stream *__stream = KUNIT_ASSERT_START(test);	       \
+	typeof(left) __left = (left);					       \
+	typeof(right) __right = (right);				       \
+									       \
+	__stream->add(__stream, "Asserted " #left " != " #right ", but\n");    \
+	__stream->add(__stream, "\t\t%s == %s\n", #left, __left);	       \
+	__stream->add(__stream, "\t\t%s == %s\n", #right, __right);	       \
+	__stream->add(__stream, fmt, ##__VA_ARGS__);			       \
+									       \
+	KUNIT_ASSERT_END(test, strcmp(left, right), __stream);		       \
+} while (0)
+
+/**
+ * KUNIT_ASSERT_NOT_ERR_OR_NULL() - Assertion that @ptr is not null and not err.
+ * @test: The test context object.
+ * @ptr: an arbitrary pointer.
+ *
+ * Sets an assertion that the value that @ptr evaluates to is not null and not
+ * an errno stored in a pointer. This is the same as
+ * KUNIT_EXPECT_NOT_ERR_OR_NULL(), except it causes an assertion failure (see
+ * KUNIT_ASSERT_TRUE()) when the assertion is not met.
+ */
+#define KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr) do {			       \
+	struct kunit_stream *__stream = KUNIT_ASSERT_START(test);	       \
+	typeof(ptr) __ptr = (ptr);					       \
+									       \
+	if (!__ptr)							       \
+		__stream->add(__stream,					       \
+			      "Asserted " #ptr " is not null, but is.");       \
+	if (IS_ERR(__ptr))						       \
+		__stream->add(__stream,					       \
+			      "Asserted " #ptr " is not error, but is: %ld",   \
+			      PTR_ERR(__ptr));				       \
+									       \
+	KUNIT_ASSERT_END(test, !IS_ERR_OR_NULL(__ptr), __stream);	       \
+} while (0)
+
+#define KUNIT_ASSERT_NOT_ERR_OR_NULL_MSG(test, ptr, fmt, ...) do {	       \
+	struct kunit_stream *__stream = KUNIT_ASSERT_START(test);	       \
+	typeof(ptr) __ptr = (ptr);					       \
+									       \
+	if (!__ptr) {							       \
+		__stream->add(__stream,					       \
+			      "Asserted " #ptr " is not null, but is.");       \
+		__stream->add(__stream, fmt, ##__VA_ARGS__);		       \
+	}								       \
+	if (IS_ERR(__ptr)) {						       \
+		__stream->add(__stream,					       \
+			      "Asserted " #ptr " is not error, but is: %ld",   \
+			      PTR_ERR(__ptr));				       \
+									       \
+		__stream->add(__stream, fmt, ##__VA_ARGS__);		       \
+	}								       \
+	KUNIT_ASSERT_END(test, !IS_ERR_OR_NULL(__ptr), __stream);	       \
+} while (0)
+
 #endif /* _KUNIT_TEST_H */
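To make the comparison-type rationale in the factory-macro comment above concrete, here is a minimal sketch (illustrative only, not part of the patch; the test-case name is invented and ULLONG_MAX comes from the usual kernel headers):

static void example_signedness_case(struct kunit *test)
{
	unsigned long long big = ULLONG_MAX;

	/*
	 * The comparison runs in the original unsigned type, so this passes.
	 * Had both operands been cast to long long before comparing, big
	 * would wrap to -1 and the check would wrongly fail; only the value
	 * printed on a failure is coerced, so a report may show -1 here.
	 */
	KUNIT_ASSERT_GT(test, big, 0);
}
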
diff --git a/kunit/string-stream-test.c b/kunit/string-stream-test.c
index 6cfef69568011..441afd11b43de 100644
--- a/kunit/string-stream-test.c
+++ b/kunit/string-stream-test.c
@@ -19,7 +19,7 @@ static void string_stream_test_get_string(struct kunit *test)
 	stream->add(stream, " %s", "bar");
 
 	output = stream->get_string(stream);
-	KUNIT_EXPECT_STREQ(test, output, "Foo bar");
+	KUNIT_ASSERT_STREQ(test, output, "Foo bar");
 	kfree(output);
 	destroy_string_stream(stream);
 }
@@ -34,16 +34,16 @@ static void string_stream_test_add_and_clear(struct kunit *test)
 		stream->add(stream, "A");
 
 	output = stream->get_string(stream);
-	KUNIT_EXPECT_STREQ(test, output, "AAAAAAAAAA");
-	KUNIT_EXPECT_EQ(test, stream->length, 10);
-	KUNIT_EXPECT_FALSE(test, stream->is_empty(stream));
+	KUNIT_ASSERT_STREQ(test, output, "AAAAAAAAAA");
+	KUNIT_ASSERT_EQ(test, stream->length, 10);
+	KUNIT_ASSERT_FALSE(test, stream->is_empty(stream));
 	kfree(output);
 
 	stream->clear(stream);
 
 	output = stream->get_string(stream);
-	KUNIT_EXPECT_STREQ(test, output, "");
-	KUNIT_EXPECT_TRUE(test, stream->is_empty(stream));
+	KUNIT_ASSERT_STREQ(test, output, "");
+	KUNIT_ASSERT_TRUE(test, stream->is_empty(stream));
 	destroy_string_stream(stream);
 }
 
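The EXPECT to ASSERT conversions above also change how a failure propagates: an expectation marks the test failed and lets it keep running, while an assertion marks it failed and aborts the test case. A minimal sketch of the difference (illustrative only; the function name is invented and the KUNIT_EXPECT_* macros are the ones added earlier in this series):

static void example_failure_modes(struct kunit *test)
{
	void *ptr = NULL;

	/* Expectation: records the failure, execution continues. */
	KUNIT_EXPECT_NOT_ERR_OR_NULL(test, ptr);

	/* Assertion: records the failure and aborts the test case here. */
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);

	/* Not reached when the assertion above fails. */
	KUNIT_EXPECT_EQ(test, 1 + 1, 2);
}
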
diff --git a/kunit/test-test.c b/kunit/test-test.c
index a936c041f1c8f..0b4ad6690310d 100644
--- a/kunit/test-test.c
+++ b/kunit/test-test.c
@@ -100,11 +100,13 @@ static int kunit_try_catch_test_init(struct kunit *test)
 	struct kunit_try_catch_test_context *ctx;
 
 	ctx = kunit_kzalloc(test, sizeof(*ctx), GFP_KERNEL);
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx);
 	test->priv = ctx;
 
 	ctx->try_catch = kunit_kmalloc(test,
 				       sizeof(*ctx->try_catch),
 				       GFP_KERNEL);
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx->try_catch);
 	kunit_try_catch_init(ctx->try_catch);
 	ctx->try_catch->context.test = test;
 
diff --git a/kunit/test.c b/kunit/test.c
index 6e5244642ab07..9cc8ecdb079b0 100644
--- a/kunit/test.c
+++ b/kunit/test.c
@@ -495,3 +495,36 @@ void kunit_expect_binary_msg(struct kunit *test,
 	kunit_expect_end(test, compare_result, stream);
 }
 
+void kunit_assert_binary_msg(struct kunit *test,
+			     long long left, const char *left_name,
+			     long long right, const char *right_name,
+			     bool compare_result,
+			     const char *compare_name,
+			     const char *file,
+			     const char *line,
+			     const char *fmt, ...)
+{
+	struct kunit_stream *stream = kunit_assert_start(test, file, line);
+	struct va_format vaf;
+	va_list args;
+
+	stream->add(stream,
+		    "Asserted %s %s %s, but\n",
+		    left_name, compare_name, right_name);
+	stream->add(stream, "\t\t%s == %lld\n", left_name, left);
+	stream->add(stream, "\t\t%s == %lld", right_name, right);
+
+	if (fmt) {
+		va_start(args, fmt);
+
+		vaf.fmt = fmt;
+		vaf.va = &args;
+
+		stream->add(stream, "\n%pV", &vaf);
+
+		va_end(args);
+	}
+
+	kunit_assert_end(test, compare_result, stream);
+}
+
-- 
2.21.0.rc0.258.g878e2cd30e-goog

^ permalink raw reply related	[flat|nested] 316+ messages in thread
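
For readers skimming the archive, here is a compact sketch of how a test case might combine the assertions added by this patch (illustrative only: the test name is invented, registration via KUNIT_CASE() in a kunit_module is omitted, and kunit_kzalloc() plus the usual kernel string helpers are assumed to be available as elsewhere in the series):

static void example_assertions_test(struct kunit *test)
{
	char *buf;

	buf = kunit_kzalloc(test, 16, GFP_KERNEL);
	/* Abort immediately if the allocation failed; every later check needs buf. */
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, buf);

	/* kunit_kzalloc() zeroes the buffer, so it starts out as an empty string. */
	KUNIT_ASSERT_EQ(test, strlen(buf), 0);
	KUNIT_ASSERT_STREQ(test, buf, "");

	snprintf(buf, 16, "%s", "hello");
	KUNIT_ASSERT_STREQ(test, buf, "hello");
	KUNIT_ASSERT_GT(test, strlen(buf), 0);
}

Because the first check is an assertion rather than an expectation, a failed allocation aborts the case before anything dereferences buf.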

* [RFC v4 09/17] kunit: test: add the concept of assertions
@ 2019-02-14 21:37     ` Brendan Higgins
  0 siblings, 0 replies; 316+ messages in thread
From: Brendan Higgins @ 2019-02-14 21:37 UTC (permalink / raw)


Add support for assertions which are like expectations except the test
terminates if the assertion is not satisfied.

Signed-off-by: Brendan Higgins <brendanhiggins@google.com>
---
 include/kunit/test.h       | 397 ++++++++++++++++++++++++++++++++++++-
 kunit/string-stream-test.c |  12 +-
 kunit/test-test.c          |   2 +
 kunit/test.c               |  33 +++
 4 files changed, 435 insertions(+), 9 deletions(-)

diff --git a/include/kunit/test.h b/include/kunit/test.h
index cd02dca96eb61..c42c67a9729fd 100644
--- a/include/kunit/test.h
+++ b/include/kunit/test.h
@@ -84,9 +84,10 @@ struct kunit;
  * @name: the name of the test case.
  *
  * A test case is a function with the signature, ``void (*)(struct kunit *)``
- * that makes expectations (see KUNIT_EXPECT_TRUE()) about code under test. Each
- * test case is associated with a &struct kunit_module and will be run after the
- * module's init function and followed by the module's exit function.
+ * that makes expectations and assertions (see KUNIT_EXPECT_TRUE() and
+ * KUNIT_ASSERT_TRUE()) about code under test. Each test case is associated with
+ * a &struct kunit_module and will be run after the module's init function and
+ * followed by the module's exit function.
  *
  * A test case should be static and should only be created with the KUNIT_CASE()
  * macro; additionally, every array of test cases should be terminated with an
@@ -712,4 +713,394 @@ static inline void kunit_expect_binary(struct kunit *test,
 	KUNIT_EXPECT_END(test, !IS_ERR_OR_NULL(__ptr), __stream);	       \
 } while (0)
 
+static inline struct kunit_stream *kunit_assert_start(struct kunit *test,
+						    const char *file,
+						    const char *line)
+{
+	struct kunit_stream *stream = kunit_new_stream(test);
+
+	stream->add(stream, "ASSERTION FAILED at %s:%s\n\t", file, line);
+
+	return stream;
+}
+
+static inline void kunit_assert_end(struct kunit *test,
+				   bool success,
+				   struct kunit_stream *stream)
+{
+	if (!success) {
+		test->fail(test, stream);
+		test->abort(test);
+	} else {
+		stream->clear(stream);
+	}
+}
+
+#define KUNIT_ASSERT_START(test) \
+		kunit_assert_start(test, __FILE__, __stringify(__LINE__))
+
+#define KUNIT_ASSERT_END(test, success, stream) \
+		kunit_assert_end(test, success, stream)
+
+#define KUNIT_ASSERT(test, success, message) do {			       \
+	struct kunit_stream *__stream = KUNIT_ASSERT_START(test);	       \
+									       \
+	__stream->add(__stream, message);				       \
+	KUNIT_ASSERT_END(test, success, __stream);			       \
+} while (0)
+
+#define KUNIT_ASSERT_MSG(test, success, message, fmt, ...) do {		       \
+	struct kunit_stream *__stream = KUNIT_ASSERT_START(test);	       \
+									       \
+	__stream->add(__stream, message);				       \
+	__stream->add(__stream, fmt, ##__VA_ARGS__);			       \
+	KUNIT_ASSERT_END(test, success, __stream);			       \
+} while (0)
+
+#define KUNIT_ASSERT_FAILURE(test, fmt, ...) \
+		KUNIT_ASSERT_MSG(test, false, "", fmt, ##__VA_ARGS__)
+
+/**
+ * KUNIT_ASSERT_TRUE() - Sets an assertion that @condition is true.
+ * @test: The test context object.
+ * @condition: an arbitrary boolean expression. The test fails and aborts when
+ * this does not evaluate to true.
+ *
+ * This and assertions of the form `KUNIT_ASSERT_*` will cause the test case to
+ * fail *and immediately abort* when the specified condition is not met. Unlike
+ * an expectation failure, it will prevent the test case from continuing to run;
+ * this is otherwise known as an *assertion failure*.
+ */
+#define KUNIT_ASSERT_TRUE(test, condition)				       \
+		KUNIT_ASSERT(test, (condition),				       \
+		       "Asserted " #condition " is true, but is false.")
+
+#define KUNIT_ASSERT_TRUE_MSG(test, condition, fmt, ...)		       \
+		KUNIT_ASSERT_MSG(test, (condition),			       \
+				"Asserted " #condition " is true, but is false.\n",\
+				fmt, ##__VA_ARGS__)
+
+/**
+ * KUNIT_ASSERT_FALSE() - Sets an assertion that @condition is false.
+ * @test: The test context object.
+ * @condition: an arbitrary boolean expression.
+ *
+ * Sets an assertion that the value that @condition evaluates to is false. This
+ * is the same as KUNIT_EXPECT_FALSE(), except it causes an assertion failure
+ * (see KUNIT_ASSERT_TRUE()) when the assertion is not met.
+ */
+#define KUNIT_ASSERT_FALSE(test, condition)				       \
+		KUNIT_ASSERT(test, !(condition),			       \
+		       "Asserted " #condition " is false, but is true.")
+
+#define KUNIT_ASSERT_FALSE_MSG(test, condition, fmt, ...)		       \
+		KUNIT_ASSERT_MSG(test, !(condition),			       \
+				"Asserted " #condition " is false, but is true.\n",\
+				fmt, ##__VA_ARGS__)
+
+void kunit_assert_binary_msg(struct kunit *test,
+			    long long left, const char *left_name,
+			    long long right, const char *right_name,
+			    bool compare_result,
+			    const char *compare_name,
+			    const char *file,
+			    const char *line,
+			    const char *fmt, ...);
+
+static inline void kunit_assert_binary(struct kunit *test,
+				      long long left, const char *left_name,
+				      long long right, const char *right_name,
+				      bool compare_result,
+				      const char *compare_name,
+				      const char *file,
+				      const char *line)
+{
+	kunit_assert_binary_msg(test,
+			       left, left_name,
+			       right, right_name,
+			       compare_result,
+			       compare_name,
+			       file,
+			       line,
+			       NULL);
+}
+
+/*
+ * A factory macro for defining the expectations for the basic comparisons
+ * defined for the built in types.
+ *
+ * Unfortunately, there is no common type that all types can be promoted to for
+ * which all the binary operators behave the same way as for the actual types
+ * (for example, there is no type that long long and unsigned long long can
+ * both be cast to where the comparison result is preserved for all values). So
+ * the best we can do is do the comparison in the original types and then coerce
+ * everything to long long for printing; this way, the comparison behaves
+ * correctly and the printed out value usually makes sense without
+ * interpretation, but can always be interpretted to figure out the actual
+ * value.
+ */
+#define KUNIT_ASSERT_BINARY(test, left, condition, right) do {		       \
+	typeof(left) __left = (left);					       \
+	typeof(right) __right = (right);				       \
+	kunit_assert_binary(test,					       \
+			   (long long) __left, #left,			       \
+			   (long long) __right, #right,			       \
+			   __left condition __right, #condition,	       \
+			   __FILE__, __stringify(__LINE__));		       \
+} while (0)
+
+#define KUNIT_ASSERT_BINARY_MSG(test, left, condition, right, fmt, ...) do {   \
+	typeof(left) __left = (left);					       \
+	typeof(right) __right = (right);				       \
+	kunit_assert_binary_msg(test,					       \
+			       (long long) __left, #left,		       \
+			       (long long) __right, #right,		       \
+			       __left condition __right, #condition,	       \
+			       __FILE__, __stringify(__LINE__),		       \
+			       fmt, ##__VA_ARGS__);			       \
+} while (0)
+
+/**
+ * KUNIT_ASSERT_EQ() - Sets an assertion that @left and @right are equal.
+ * @test: The test context object.
+ * @left: an arbitrary expression that evaluates to a primitive C type.
+ * @right: an arbitrary expression that evaluates to a primitive C type.
+ *
+ * Sets an assertion that the values that @left and @right evaluate to are
+ * equal. This is the same as KUNIT_EXPECT_EQ(), except it causes an assertion
+ * failure (see KUNIT_ASSERT_TRUE()) when the assertion is not met.
+ */
+#define KUNIT_ASSERT_EQ(test, left, right) \
+		KUNIT_ASSERT_BINARY(test, left, ==, right)
+
+#define KUNIT_ASSERT_EQ_MSG(test, left, right, fmt, ...)		       \
+		KUNIT_ASSERT_BINARY_MSG(test,				       \
+					left,				       \
+					==,				       \
+					right,				       \
+					fmt,				       \
+					##__VA_ARGS__)
+
+/**
+ * KUNIT_ASSERT_NE() - An assertion that @left and @right are not equal.
+ * @test: The test context object.
+ * @left: an arbitrary expression that evaluates to a primitive C type.
+ * @right: an arbitrary expression that evaluates to a primitive C type.
+ *
+ * Sets an assertion that the values that @left and @right evaluate to are not
+ * equal. This is the same as KUNIT_EXPECT_NE(), except it causes an assertion
+ * failure (see KUNIT_ASSERT_TRUE()) when the assertion is not met.
+ */
+#define KUNIT_ASSERT_NE(test, left, right) \
+		KUNIT_ASSERT_BINARY(test, left, !=, right)
+
+#define KUNIT_ASSERT_NE_MSG(test, left, right, fmt, ...)		       \
+		KUNIT_ASSERT_BINARY_MSG(test,				       \
+					left,				       \
+					!=,				       \
+					right,				       \
+					fmt,				       \
+					##__VA_ARGS__)
+
+/**
+ * KUNIT_ASSERT_LT() - An assertion that @left is less than @right.
+ * @test: The test context object.
+ * @left: an arbitrary expression that evaluates to a primitive C type.
+ * @right: an arbitrary expression that evaluates to a primitive C type.
+ *
+ * Sets an assertion that the value that @left evaluates to is less than the
+ * value that @right evaluates to. This is the same as KUNIT_EXPECT_LT(), except
+ * it causes an assertion failure (see KUNIT_ASSERT_TRUE()) when the assertion
+ * is not met.
+ */
+#define KUNIT_ASSERT_LT(test, left, right) \
+		KUNIT_ASSERT_BINARY(test, left, <, right)
+
+#define KUNIT_ASSERT_LT_MSG(test, left, right, fmt, ...)		       \
+		KUNIT_ASSERT_BINARY_MSG(test,				       \
+					left,				       \
+					<,				       \
+					right,				       \
+					fmt,				       \
+					##__VA_ARGS__)
+/**
+ * KUNIT_ASSERT_LE() - An assertion that @left is less than or equal to @right.
+ * @test: The test context object.
+ * @left: an arbitrary expression that evaluates to a primitive C type.
+ * @right: an arbitrary expression that evaluates to a primitive C type.
+ *
+ * Sets an assertion that the value that @left evaluates to is less than or
+ * equal to the value that @right evaluates to. This is the same as
+ * KUNIT_EXPECT_LE(), except it causes an assertion failure (see
+ * KUNIT_ASSERT_TRUE()) when the assertion is not met.
+ */
+#define KUNIT_ASSERT_LE(test, left, right) \
+		KUNIT_ASSERT_BINARY(test, left, <=, right)
+
+#define KUNIT_ASSERT_LE_MSG(test, left, right, fmt, ...)		       \
+		KUNIT_ASSERT_BINARY_MSG(test,				       \
+					left,				       \
+					<=,				       \
+					right,				       \
+					fmt,				       \
+					##__VA_ARGS__)
+/**
+ * KUNIT_ASSERT_GT() - An assertion that @left is greater than @right.
+ * @test: The test context object.
+ * @left: an arbitrary expression that evaluates to a primitive C type.
+ * @right: an arbitrary expression that evaluates to a primitive C type.
+ *
+ * Sets an assertion that the value that @left evaluates to is greater than the
+ * value that @right evaluates to. This is the same as KUNIT_EXPECT_GT(), except
+ * it causes an assertion failure (see KUNIT_ASSERT_TRUE()) when the assertion
+ * is not met.
+ */
+#define KUNIT_ASSERT_GT(test, left, right) \
+		KUNIT_ASSERT_BINARY(test, left, >, right)
+
+#define KUNIT_ASSERT_GT_MSG(test, left, right, fmt, ...)		       \
+		KUNIT_ASSERT_BINARY_MSG(test,				       \
+					left,				       \
+					>,				       \
+					right,				       \
+					fmt,				       \
+					##__VA_ARGS__)
+
+/**
+ * KUNIT_ASSERT_GE() - Assertion that @left is greater than or equal to @right.
+ * @test: The test context object.
+ * @left: an arbitrary expression that evaluates to a primitive C type.
+ * @right: an arbitrary expression that evaluates to a primitive C type.
+ *
+ * Sets an assertion that the value that @left evaluates to is greater than the
+ * value that @right evaluates to. This is the same as KUNIT_EXPECT_GE(), except
+ * it causes an assertion failure (see KUNIT_ASSERT_TRUE()) when the assertion
+ * is not met.
+ */
+#define KUNIT_ASSERT_GE(test, left, right) \
+		KUNIT_ASSERT_BINARY(test, left, >=, right)
+
+#define KUNIT_ASSERT_GE_MSG(test, left, right, fmt, ...)		       \
+		KUNIT_ASSERT_BINARY_MSG(test,				       \
+					left,				       \
+					>=,				       \
+					right,				       \
+					fmt,				       \
+					##__VA_ARGS__)
+
+/**
+ * KUNIT_ASSERT_STREQ() - An assertion that strings @left and @right are equal.
+ * @test: The test context object.
+ * @left: an arbitrary expression that evaluates to a null terminated string.
+ * @right: an arbitrary expression that evaluates to a null terminated string.
+ *
+ * Sets an assertion that the values that @left and @right evaluate to are
+ * equal. This is the same as KUNIT_EXPECT_STREQ(), except it causes an
+ * assertion failure (see KUNIT_ASSERT_TRUE()) when the assertion is not met.
+ */
+#define KUNIT_ASSERT_STREQ(test, left, right) do {			       \
+	struct kunit_stream *__stream = KUNIT_ASSERT_START(test);	       \
+	typeof(left) __left = (left);					       \
+	typeof(right) __right = (right);				       \
+									       \
+	__stream->add(__stream, "Asserted " #left " == " #right ", but\n");    \
+	__stream->add(__stream, "\t\t%s == %s\n", #left, __left);	       \
+	__stream->add(__stream, "\t\t%s == %s\n", #right, __right);	       \
+									       \
+	KUNIT_ASSERT_END(test, !strcmp(left, right), __stream);		       \
+} while (0)
+
+#define KUNIT_ASSERT_STREQ_MSG(test, left, right, fmt, ...) do {	       \
+	struct kunit_stream *__stream = KUNIT_ASSERT_START(test);	       \
+	typeof(left) __left = (left);					       \
+	typeof(right) __right = (right);				       \
+									       \
+	__stream->add(__stream, "Asserted " #left " == " #right ", but\n");    \
+	__stream->add(__stream, "\t\t%s == %s\n", #left, __left);	       \
+	__stream->add(__stream, "\t\t%s == %s\n", #right, __right);	       \
+	__stream->add(__stream, fmt, ##__VA_ARGS__);			       \
+									       \
+	KUNIT_ASSERT_END(test, !strcmp(left, right), __stream);		       \
+} while (0)
+
+/**
+ * KUNIT_ASSERT_STRNEQ() - Expects that strings @left and @right are not equal.
+ * @test: The test context object.
+ * @left: an arbitrary expression that evaluates to a null terminated string.
+ * @right: an arbitrary expression that evaluates to a null terminated string.
+ *
+ * Sets an expectation that the values that @left and @right evaluate to are
+ * not equal. This is semantically equivalent to
+ * KUNIT_ASSERT_TRUE(@test, strcmp((@left), (@right))). See KUNIT_ASSERT_TRUE()
+ * for more information.
+ */
+#define KUNIT_ASSERT_STRNEQ(test, left, right) do {			       \
+	struct kunit_stream *__stream = KUNIT_ASSERT_START(test);	       \
+	typeof(left) __left = (left);					       \
+	typeof(right) __right = (right);				       \
+									       \
+	__stream->add(__stream, "Asserted " #left " == " #right ", but\n");    \
+	__stream->add(__stream, "\t\t%s == %s\n", #left, __left);	       \
+	__stream->add(__stream, "\t\t%s == %s\n", #right, __right);	       \
+									       \
+	KUNIT_ASSERT_END(test, strcmp(left, right), __stream);		       \
+} while (0)
+
+#define KUNIT_ASSERT_STRNEQ_MSG(test, left, right, fmt, ...) do {	       \
+	struct kunit_stream *__stream = KUNIT_ASSERT_START(test);	       \
+	typeof(left) __left = (left);					       \
+	typeof(right) __right = (right);				       \
+									       \
+	__stream->add(__stream, "Asserted " #left " == " #right ", but\n");    \
+	__stream->add(__stream, "\t\t%s == %s\n", #left, __left);	       \
+	__stream->add(__stream, "\t\t%s == %s\n", #right, __right);	       \
+	__stream->add(__stream, fmt, ##__VA_ARGS__);			       \
+									       \
+	KUNIT_ASSERT_END(test, strcmp(left, right), __stream);		       \
+} while (0)
+
+/**
+ * KUNIT_ASSERT_NOT_ERR_OR_NULL() - Assertion that @ptr is not null and not err.
+ * @test: The test context object.
+ * @ptr: an arbitrary pointer.
+ *
+ * Sets an assertion that the value that @ptr evaluates to is not null and not
+ * an errno stored in a pointer. This is the same as
+ * KUNIT_EXPECT_NOT_ERR_OR_NULL(), except it causes an assertion failure (see
+ * KUNIT_ASSERT_TRUE()) when the assertion is not met.
+ */
+#define KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr) do {			       \
+	struct kunit_stream *__stream = KUNIT_ASSERT_START(test);	       \
+	typeof(ptr) __ptr = (ptr);					       \
+									       \
+	if (!__ptr)							       \
+		__stream->add(__stream,					       \
+			      "Asserted " #ptr " is not null, but is.");       \
+	if (IS_ERR(__ptr))						       \
+		__stream->add(__stream,					       \
+			      "Asserted " #ptr " is not error, but is: %ld",   \
+			      PTR_ERR(__ptr));				       \
+									       \
+	KUNIT_ASSERT_END(test, !IS_ERR_OR_NULL(__ptr), __stream);	       \
+} while (0)
+
+#define KUNIT_ASSERT_NOT_ERR_OR_NULL_MSG(test, ptr, fmt, ...) do {	       \
+	struct kunit_stream *__stream = KUNIT_ASSERT_START(test);	       \
+	typeof(ptr) __ptr = (ptr);					       \
+									       \
+	if (!__ptr) {							       \
+		__stream->add(__stream,					       \
+			      "Asserted " #ptr " is not null, but is.");       \
+		__stream->add(__stream, fmt, ##__VA_ARGS__);		       \
+	}								       \
+	if (IS_ERR(__ptr)) {						       \
+		__stream->add(__stream,					       \
+			      "Asserted " #ptr " is not error, but is: %ld",   \
+			      PTR_ERR(__ptr));				       \
+									       \
+		__stream->add(__stream, fmt, ##__VA_ARGS__);		       \
+	}								       \
+	KUNIT_ASSERT_END(test, !IS_ERR_OR_NULL(__ptr), __stream);	       \
+} while (0)
+
 #endif /* _KUNIT_TEST_H */
diff --git a/kunit/string-stream-test.c b/kunit/string-stream-test.c
index 6cfef69568011..441afd11b43de 100644
--- a/kunit/string-stream-test.c
+++ b/kunit/string-stream-test.c
@@ -19,7 +19,7 @@ static void string_stream_test_get_string(struct kunit *test)
 	stream->add(stream, " %s", "bar");
 
 	output = stream->get_string(stream);
-	KUNIT_EXPECT_STREQ(test, output, "Foo bar");
+	KUNIT_ASSERT_STREQ(test, output, "Foo bar");
 	kfree(output);
 	destroy_string_stream(stream);
 }
@@ -34,16 +34,16 @@ static void string_stream_test_add_and_clear(struct kunit *test)
 		stream->add(stream, "A");
 
 	output = stream->get_string(stream);
-	KUNIT_EXPECT_STREQ(test, output, "AAAAAAAAAA");
-	KUNIT_EXPECT_EQ(test, stream->length, 10);
-	KUNIT_EXPECT_FALSE(test, stream->is_empty(stream));
+	KUNIT_ASSERT_STREQ(test, output, "AAAAAAAAAA");
+	KUNIT_ASSERT_EQ(test, stream->length, 10);
+	KUNIT_ASSERT_FALSE(test, stream->is_empty(stream));
 	kfree(output);
 
 	stream->clear(stream);
 
 	output = stream->get_string(stream);
-	KUNIT_EXPECT_STREQ(test, output, "");
-	KUNIT_EXPECT_TRUE(test, stream->is_empty(stream));
+	KUNIT_ASSERT_STREQ(test, output, "");
+	KUNIT_ASSERT_TRUE(test, stream->is_empty(stream));
 	destroy_string_stream(stream);
 }
 
diff --git a/kunit/test-test.c b/kunit/test-test.c
index a936c041f1c8f..0b4ad6690310d 100644
--- a/kunit/test-test.c
+++ b/kunit/test-test.c
@@ -100,11 +100,13 @@ static int kunit_try_catch_test_init(struct kunit *test)
 	struct kunit_try_catch_test_context *ctx;
 
 	ctx = kunit_kzalloc(test, sizeof(*ctx), GFP_KERNEL);
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx);
 	test->priv = ctx;
 
 	ctx->try_catch = kunit_kmalloc(test,
 				       sizeof(*ctx->try_catch),
 				       GFP_KERNEL);
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx->try_catch);
 	kunit_try_catch_init(ctx->try_catch);
 	ctx->try_catch->context.test = test;
 
diff --git a/kunit/test.c b/kunit/test.c
index 6e5244642ab07..9cc8ecdb079b0 100644
--- a/kunit/test.c
+++ b/kunit/test.c
@@ -495,3 +495,36 @@ void kunit_expect_binary_msg(struct kunit *test,
 	kunit_expect_end(test, compare_result, stream);
 }
 
+void kunit_assert_binary_msg(struct kunit *test,
+			     long long left, const char *left_name,
+			     long long right, const char *right_name,
+			     bool compare_result,
+			     const char *compare_name,
+			     const char *file,
+			     const char *line,
+			     const char *fmt, ...)
+{
+	struct kunit_stream *stream = kunit_assert_start(test, file, line);
+	struct va_format vaf;
+	va_list args;
+
+	stream->add(stream,
+		    "Asserted %s %s %s, but\n",
+		    left_name, compare_name, right_name);
+	stream->add(stream, "\t\t%s == %lld\n", left_name, left);
+	stream->add(stream, "\t\t%s == %lld", right_name, right);
+
+	if (fmt) {
+		va_start(args, fmt);
+
+		vaf.fmt = fmt;
+		vaf.va = &args;
+
+		stream->add(stream, "\n%pV", &vaf);
+
+		va_end(args);
+	}
+
+	kunit_assert_end(test, compare_result, stream);
+}
+
-- 
2.21.0.rc0.258.g878e2cd30e-goog

^ permalink raw reply related	[flat|nested] 316+ messages in thread

* [RFC v4 09/17] kunit: test: add the concept of assertions
@ 2019-02-14 21:37     ` Brendan Higgins
  0 siblings, 0 replies; 316+ messages in thread
From: Brendan Higgins @ 2019-02-14 21:37 UTC (permalink / raw)
  To: keescook, mcgrof, shuah, robh, kieran.bingham, frowand.list
  Cc: brakmo, pmladek, amir73il, Brendan Higgins, dri-devel,
	Alexander.Levin, linux-kselftest, linux-nvdimm, richard,
	knut.omang, wfg, joel, jdike, dan.carpenter, devicetree,
	Tim.Bird, linux-um, rostedt, julia.lawall, dan.j.williams,
	kunit-dev, gregkh, linux-kernel, daniel, mpe, joe, khilman

Add support for assertions, which are like expectations except that the
test case is aborted if the assertion is not satisfied.
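
As an illustrative aside (not part of the patch): a minimal sketch of how a
test case might mix the two kinds of checks. The struct, test name, and
message below are made up for the example; kunit_kzalloc(),
KUNIT_ASSERT_NOT_ERR_OR_NULL(), KUNIT_EXPECT_EQ(), and KUNIT_ASSERT_EQ_MSG()
are interfaces used or added elsewhere in this series.

#include <kunit/test.h>

struct example_ctx {		/* hypothetical type, illustration only */
	int counter;
};

static void example_assertion_test(struct kunit *test)
{
	struct example_ctx *ctx;

	/* An assertion failure here aborts the whole test case... */
	ctx = kunit_kzalloc(test, sizeof(*ctx), GFP_KERNEL);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx);

	/* ...so later checks never dereference a bad pointer; an expectation
	 * failure, by contrast, is recorded and the test case keeps running.
	 */
	KUNIT_EXPECT_EQ(test, ctx->counter, 0);
	KUNIT_ASSERT_EQ_MSG(test, ctx->counter, 0,
			    "kunit_kzalloc() should return zeroed memory");
}

A failing KUNIT_ASSERT_* reports the failure just like an expectation and then
calls test->abort(), so the remainder of the test case is skipped.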

Signed-off-by: Brendan Higgins <brendanhiggins@google.com>
---
 include/kunit/test.h       | 397 ++++++++++++++++++++++++++++++++++++-
 kunit/string-stream-test.c |  12 +-
 kunit/test-test.c          |   2 +
 kunit/test.c               |  33 +++
 4 files changed, 435 insertions(+), 9 deletions(-)

diff --git a/include/kunit/test.h b/include/kunit/test.h
index cd02dca96eb61..c42c67a9729fd 100644
--- a/include/kunit/test.h
+++ b/include/kunit/test.h
@@ -84,9 +84,10 @@ struct kunit;
  * @name: the name of the test case.
  *
  * A test case is a function with the signature, ``void (*)(struct kunit *)``
- * that makes expectations (see KUNIT_EXPECT_TRUE()) about code under test. Each
- * test case is associated with a &struct kunit_module and will be run after the
- * module's init function and followed by the module's exit function.
+ * that makes expectations and assertions (see KUNIT_EXPECT_TRUE() and
+ * KUNIT_ASSERT_TRUE()) about code under test. Each test case is associated with
+ * a &struct kunit_module and will be run after the module's init function and
+ * followed by the module's exit function.
  *
  * A test case should be static and should only be created with the KUNIT_CASE()
  * macro; additionally, every array of test cases should be terminated with an
@@ -712,4 +713,394 @@ static inline void kunit_expect_binary(struct kunit *test,
 	KUNIT_EXPECT_END(test, !IS_ERR_OR_NULL(__ptr), __stream);	       \
 } while (0)
 
+static inline struct kunit_stream *kunit_assert_start(struct kunit *test,
+						    const char *file,
+						    const char *line)
+{
+	struct kunit_stream *stream = kunit_new_stream(test);
+
+	stream->add(stream, "ASSERTION FAILED at %s:%s\n\t", file, line);
+
+	return stream;
+}
+
+static inline void kunit_assert_end(struct kunit *test,
+				   bool success,
+				   struct kunit_stream *stream)
+{
+	if (!success) {
+		test->fail(test, stream);
+		test->abort(test);
+	} else {
+		stream->clear(stream);
+	}
+}
+
+#define KUNIT_ASSERT_START(test) \
+		kunit_assert_start(test, __FILE__, __stringify(__LINE__))
+
+#define KUNIT_ASSERT_END(test, success, stream) \
+		kunit_assert_end(test, success, stream)
+
+#define KUNIT_ASSERT(test, success, message) do {			       \
+	struct kunit_stream *__stream = KUNIT_ASSERT_START(test);	       \
+									       \
+	__stream->add(__stream, message);				       \
+	KUNIT_ASSERT_END(test, success, __stream);			       \
+} while (0)
+
+#define KUNIT_ASSERT_MSG(test, success, message, fmt, ...) do {		       \
+	struct kunit_stream *__stream = KUNIT_ASSERT_START(test);	       \
+									       \
+	__stream->add(__stream, message);				       \
+	__stream->add(__stream, fmt, ##__VA_ARGS__);			       \
+	KUNIT_ASSERT_END(test, success, __stream);			       \
+} while (0)
+
+#define KUNIT_ASSERT_FAILURE(test, fmt, ...) \
+		KUNIT_ASSERT_MSG(test, false, "", fmt, ##__VA_ARGS__)
+
+/**
+ * KUNIT_ASSERT_TRUE() - Sets an assertion that @condition is true.
+ * @test: The test context object.
+ * @condition: an arbitrary boolean expression. The test fails and aborts when
+ * this does not evaluate to true.
+ *
+ * This and other assertions of the form `KUNIT_ASSERT_*` cause the test case
+ * to fail *and abort immediately* when the specified condition is not met;
+ * unlike an expectation failure, the rest of the test case does not run. This
+ * is known as an *assertion failure*.
+ */
+#define KUNIT_ASSERT_TRUE(test, condition)				       \
+		KUNIT_ASSERT(test, (condition),				       \
+		       "Asserted " #condition " is true, but is false.")
+
+#define KUNIT_ASSERT_TRUE_MSG(test, condition, fmt, ...)		       \
+		KUNIT_ASSERT_MSG(test, (condition),			       \
+				"Asserted " #condition " is true, but is false.\n",\
+				fmt, ##__VA_ARGS__)
+
+/**
+ * KUNIT_ASSERT_FALSE() - Sets an assertion that @condition is false.
+ * @test: The test context object.
+ * @condition: an arbitrary boolean expression.
+ *
+ * Sets an assertion that the value that @condition evaluates to is false. This
+ * is the same as KUNIT_EXPECT_FALSE(), except it causes an assertion failure
+ * (see KUNIT_ASSERT_TRUE()) when the assertion is not met.
+ */
+#define KUNIT_ASSERT_FALSE(test, condition)				       \
+		KUNIT_ASSERT(test, !(condition),			       \
+		       "Asserted " #condition " is false, but is true.")
+
+#define KUNIT_ASSERT_FALSE_MSG(test, condition, fmt, ...)		       \
+		KUNIT_ASSERT_MSG(test, !(condition),			       \
+				"Asserted " #condition " is false, but is true.\n",\
+				fmt, ##__VA_ARGS__)
+
+void kunit_assert_binary_msg(struct kunit *test,
+			    long long left, const char *left_name,
+			    long long right, const char *right_name,
+			    bool compare_result,
+			    const char *compare_name,
+			    const char *file,
+			    const char *line,
+			    const char *fmt, ...);
+
+static inline void kunit_assert_binary(struct kunit *test,
+				      long long left, const char *left_name,
+				      long long right, const char *right_name,
+				      bool compare_result,
+				      const char *compare_name,
+				      const char *file,
+				      const char *line)
+{
+	kunit_assert_binary_msg(test,
+			       left, left_name,
+			       right, right_name,
+			       compare_result,
+			       compare_name,
+			       file,
+			       line,
+			       NULL);
+}
+
+/*
+ * A factory macro for defining the assertions for the basic comparisons
+ * defined for the built-in types.
+ *
+ * Unfortunately, there is no common type that all types can be promoted to for
+ * which all the binary operators behave the same way as for the actual types
+ * (for example, there is no type that long long and unsigned long long can
+ * both be cast to where the comparison result is preserved for all values). So
+ * the best we can do is do the comparison in the original types and then coerce
+ * everything to long long for printing; this way, the comparison behaves
+ * correctly and the printed value usually makes sense without
+ * interpretation, but it can always be interpreted to figure out the actual
+ * value.
+ */
+#define KUNIT_ASSERT_BINARY(test, left, condition, right) do {		       \
+	typeof(left) __left = (left);					       \
+	typeof(right) __right = (right);				       \
+	kunit_assert_binary(test,					       \
+			   (long long) __left, #left,			       \
+			   (long long) __right, #right,			       \
+			   __left condition __right, #condition,	       \
+			   __FILE__, __stringify(__LINE__));		       \
+} while (0)
+
+#define KUNIT_ASSERT_BINARY_MSG(test, left, condition, right, fmt, ...) do {   \
+	typeof(left) __left = (left);					       \
+	typeof(right) __right = (right);				       \
+	kunit_assert_binary_msg(test,					       \
+			       (long long) __left, #left,		       \
+			       (long long) __right, #right,		       \
+			       __left condition __right, #condition,	       \
+			       __FILE__, __stringify(__LINE__),		       \
+			       fmt, ##__VA_ARGS__);			       \
+} while (0)
+
+/**
+ * KUNIT_ASSERT_EQ() - Sets an assertion that @left and @right are equal.
+ * @test: The test context object.
+ * @left: an arbitrary expression that evaluates to a primitive C type.
+ * @right: an arbitrary expression that evaluates to a primitive C type.
+ *
+ * Sets an assertion that the values that @left and @right evaluate to are
+ * equal. This is the same as KUNIT_EXPECT_EQ(), except it causes an assertion
+ * failure (see KUNIT_ASSERT_TRUE()) when the assertion is not met.
+ */
+#define KUNIT_ASSERT_EQ(test, left, right) \
+		KUNIT_ASSERT_BINARY(test, left, ==, right)
+
+#define KUNIT_ASSERT_EQ_MSG(test, left, right, fmt, ...)		       \
+		KUNIT_ASSERT_BINARY_MSG(test,				       \
+					left,				       \
+					==,				       \
+					right,				       \
+					fmt,				       \
+					##__VA_ARGS__)
+
+/**
+ * KUNIT_ASSERT_NE() - An assertion that @left and @right are not equal.
+ * @test: The test context object.
+ * @left: an arbitrary expression that evaluates to a primitive C type.
+ * @right: an arbitrary expression that evaluates to a primitive C type.
+ *
+ * Sets an assertion that the values that @left and @right evaluate to are not
+ * equal. This is the same as KUNIT_EXPECT_NE(), except it causes an assertion
+ * failure (see KUNIT_ASSERT_TRUE()) when the assertion is not met.
+ */
+#define KUNIT_ASSERT_NE(test, left, right) \
+		KUNIT_ASSERT_BINARY(test, left, !=, right)
+
+#define KUNIT_ASSERT_NE_MSG(test, left, right, fmt, ...)		       \
+		KUNIT_ASSERT_BINARY_MSG(test,				       \
+					left,				       \
+					!=,				       \
+					right,				       \
+					fmt,				       \
+					##__VA_ARGS__)
+
+/**
+ * KUNIT_ASSERT_LT() - An assertion that @left is less than @right.
+ * @test: The test context object.
+ * @left: an arbitrary expression that evaluates to a primitive C type.
+ * @right: an arbitrary expression that evaluates to a primitive C type.
+ *
+ * Sets an assertion that the value that @left evaluates to is less than the
+ * value that @right evaluates to. This is the same as KUNIT_EXPECT_LT(), except
+ * it causes an assertion failure (see KUNIT_ASSERT_TRUE()) when the assertion
+ * is not met.
+ */
+#define KUNIT_ASSERT_LT(test, left, right) \
+		KUNIT_ASSERT_BINARY(test, left, <, right)
+
+#define KUNIT_ASSERT_LT_MSG(test, left, right, fmt, ...)		       \
+		KUNIT_ASSERT_BINARY_MSG(test,				       \
+					left,				       \
+					<,				       \
+					right,				       \
+					fmt,				       \
+					##__VA_ARGS__)
+/**
+ * KUNIT_ASSERT_LE() - An assertion that @left is less than or equal to @right.
+ * @test: The test context object.
+ * @left: an arbitrary expression that evaluates to a primitive C type.
+ * @right: an arbitrary expression that evaluates to a primitive C type.
+ *
+ * Sets an assertion that the value that @left evaluates to is less than or
+ * equal to the value that @right evaluates to. This is the same as
+ * KUNIT_EXPECT_LE(), except it causes an assertion failure (see
+ * KUNIT_ASSERT_TRUE()) when the assertion is not met.
+ */
+#define KUNIT_ASSERT_LE(test, left, right) \
+		KUNIT_ASSERT_BINARY(test, left, <=, right)
+
+#define KUNIT_ASSERT_LE_MSG(test, left, right, fmt, ...)		       \
+		KUNIT_ASSERT_BINARY_MSG(test,				       \
+					left,				       \
+					<=,				       \
+					right,				       \
+					fmt,				       \
+					##__VA_ARGS__)
+/**
+ * KUNIT_ASSERT_GT() - An assertion that @left is greater than @right.
+ * @test: The test context object.
+ * @left: an arbitrary expression that evaluates to a primitive C type.
+ * @right: an arbitrary expression that evaluates to a primitive C type.
+ *
+ * Sets an assertion that the value that @left evaluates to is greater than the
+ * value that @right evaluates to. This is the same as KUNIT_EXPECT_GT(), except
+ * it causes an assertion failure (see KUNIT_ASSERT_TRUE()) when the assertion
+ * is not met.
+ */
+#define KUNIT_ASSERT_GT(test, left, right) \
+		KUNIT_ASSERT_BINARY(test, left, >, right)
+
+#define KUNIT_ASSERT_GT_MSG(test, left, right, fmt, ...)		       \
+		KUNIT_ASSERT_BINARY_MSG(test,				       \
+					left,				       \
+					>,				       \
+					right,				       \
+					fmt,				       \
+					##__VA_ARGS__)
+
+/**
+ * KUNIT_ASSERT_GE() - Assertion that @left is greater than or equal to @right.
+ * @test: The test context object.
+ * @left: an arbitrary expression that evaluates to a primitive C type.
+ * @right: an arbitrary expression that evaluates to a primitive C type.
+ *
+ * Sets an assertion that the value that @left evaluates to is greater than or
+ * equal to the value that @right evaluates to. This is the same as
+ * KUNIT_EXPECT_GE(), except it causes an assertion failure (see
+ * KUNIT_ASSERT_TRUE()) when the assertion is not met.
+ */
+#define KUNIT_ASSERT_GE(test, left, right) \
+		KUNIT_ASSERT_BINARY(test, left, >=, right)
+
+#define KUNIT_ASSERT_GE_MSG(test, left, right, fmt, ...)		       \
+		KUNIT_ASSERT_BINARY_MSG(test,				       \
+					left,				       \
+					>=,				       \
+					right,				       \
+					fmt,				       \
+					##__VA_ARGS__)
+
+/**
+ * KUNIT_ASSERT_STREQ() - An assertion that strings @left and @right are equal.
+ * @test: The test context object.
+ * @left: an arbitrary expression that evaluates to a null terminated string.
+ * @right: an arbitrary expression that evaluates to a null terminated string.
+ *
+ * Sets an assertion that the values that @left and @right evaluate to are
+ * equal. This is the same as KUNIT_EXPECT_STREQ(), except it causes an
+ * assertion failure (see KUNIT_ASSERT_TRUE()) when the assertion is not met.
+ */
+#define KUNIT_ASSERT_STREQ(test, left, right) do {			       \
+	struct kunit_stream *__stream = KUNIT_ASSERT_START(test);	       \
+	typeof(left) __left = (left);					       \
+	typeof(right) __right = (right);				       \
+									       \
+	__stream->add(__stream, "Asserted " #left " == " #right ", but\n");    \
+	__stream->add(__stream, "\t\t%s == %s\n", #left, __left);	       \
+	__stream->add(__stream, "\t\t%s == %s\n", #right, __right);	       \
+									       \
+	KUNIT_ASSERT_END(test, !strcmp(left, right), __stream);		       \
+} while (0)
+
+#define KUNIT_ASSERT_STREQ_MSG(test, left, right, fmt, ...) do {	       \
+	struct kunit_stream *__stream = KUNIT_ASSERT_START(test);	       \
+	typeof(left) __left = (left);					       \
+	typeof(right) __right = (right);				       \
+									       \
+	__stream->add(__stream, "Asserted " #left " == " #right ", but\n");    \
+	__stream->add(__stream, "\t\t%s == %s\n", #left, __left);	       \
+	__stream->add(__stream, "\t\t%s == %s\n", #right, __right);	       \
+	__stream->add(__stream, fmt, ##__VA_ARGS__);			       \
+									       \
+	KUNIT_ASSERT_END(test, !strcmp(left, right), __stream);		       \
+} while (0)
+
+/**
+ * KUNIT_ASSERT_STRNEQ() - An assertion that strings @left and @right are not equal.
+ * @test: The test context object.
+ * @left: an arbitrary expression that evaluates to a null terminated string.
+ * @right: an arbitrary expression that evaluates to a null terminated string.
+ *
+ * Sets an assertion that the values that @left and @right evaluate to are
+ * not equal. This is semantically equivalent to
+ * KUNIT_ASSERT_TRUE(@test, strcmp((@left), (@right))). See KUNIT_ASSERT_TRUE()
+ * for more information.
+ */
+#define KUNIT_ASSERT_STRNEQ(test, left, right) do {			       \
+	struct kunit_stream *__stream = KUNIT_ASSERT_START(test);	       \
+	typeof(left) __left = (left);					       \
+	typeof(right) __right = (right);				       \
+									       \
+	__stream->add(__stream, "Asserted " #left " != " #right ", but\n");    \
+	__stream->add(__stream, "\t\t%s == %s\n", #left, __left);	       \
+	__stream->add(__stream, "\t\t%s == %s\n", #right, __right);	       \
+									       \
+	KUNIT_ASSERT_END(test, strcmp(left, right), __stream);		       \
+} while (0)
+
+#define KUNIT_ASSERT_STRNEQ_MSG(test, left, right, fmt, ...) do {	       \
+	struct kunit_stream *__stream = KUNIT_ASSERT_START(test);	       \
+	typeof(left) __left = (left);					       \
+	typeof(right) __right = (right);				       \
+									       \
+	__stream->add(__stream, "Asserted " #left " != " #right ", but\n");    \
+	__stream->add(__stream, "\t\t%s == %s\n", #left, __left);	       \
+	__stream->add(__stream, "\t\t%s == %s\n", #right, __right);	       \
+	__stream->add(__stream, fmt, ##__VA_ARGS__);			       \
+									       \
+	KUNIT_ASSERT_END(test, strcmp(left, right), __stream);		       \
+} while (0)
+
+/**
+ * KUNIT_ASSERT_NOT_ERR_OR_NULL() - Assertion that @ptr is not null and not err.
+ * @test: The test context object.
+ * @ptr: an arbitrary pointer.
+ *
+ * Sets an assertion that the value that @ptr evaluates to is not null and not
+ * an errno stored in a pointer. This is the same as
+ * KUNIT_EXPECT_NOT_ERR_OR_NULL(), except it causes an assertion failure (see
+ * KUNIT_ASSERT_TRUE()) when the assertion is not met.
+ */
+#define KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr) do {			       \
+	struct kunit_stream *__stream = KUNIT_ASSERT_START(test);	       \
+	typeof(ptr) __ptr = (ptr);					       \
+									       \
+	if (!__ptr)							       \
+		__stream->add(__stream,					       \
+			      "Asserted " #ptr " is not null, but is.");       \
+	if (IS_ERR(__ptr))						       \
+		__stream->add(__stream,					       \
+			      "Asserted " #ptr " is not error, but is: %ld",   \
+			      PTR_ERR(__ptr));				       \
+									       \
+	KUNIT_ASSERT_END(test, !IS_ERR_OR_NULL(__ptr), __stream);	       \
+} while (0)
+
+#define KUNIT_ASSERT_NOT_ERR_OR_NULL_MSG(test, ptr, fmt, ...) do {	       \
+	struct kunit_stream *__stream = KUNIT_ASSERT_START(test);	       \
+	typeof(ptr) __ptr = (ptr);					       \
+									       \
+	if (!__ptr) {							       \
+		__stream->add(__stream,					       \
+			      "Asserted " #ptr " is not null, but is.");       \
+		__stream->add(__stream, fmt, ##__VA_ARGS__);		       \
+	}								       \
+	if (IS_ERR(__ptr)) {						       \
+		__stream->add(__stream,					       \
+			      "Asserted " #ptr " is not error, but is: %ld",   \
+			      PTR_ERR(__ptr));				       \
+									       \
+		__stream->add(__stream, fmt, ##__VA_ARGS__);		       \
+	}								       \
+	KUNIT_ASSERT_END(test, !IS_ERR_OR_NULL(__ptr), __stream);	       \
+} while (0)
+
 #endif /* _KUNIT_TEST_H */
diff --git a/kunit/string-stream-test.c b/kunit/string-stream-test.c
index 6cfef69568011..441afd11b43de 100644
--- a/kunit/string-stream-test.c
+++ b/kunit/string-stream-test.c
@@ -19,7 +19,7 @@ static void string_stream_test_get_string(struct kunit *test)
 	stream->add(stream, " %s", "bar");
 
 	output = stream->get_string(stream);
-	KUNIT_EXPECT_STREQ(test, output, "Foo bar");
+	KUNIT_ASSERT_STREQ(test, output, "Foo bar");
 	kfree(output);
 	destroy_string_stream(stream);
 }
@@ -34,16 +34,16 @@ static void string_stream_test_add_and_clear(struct kunit *test)
 		stream->add(stream, "A");
 
 	output = stream->get_string(stream);
-	KUNIT_EXPECT_STREQ(test, output, "AAAAAAAAAA");
-	KUNIT_EXPECT_EQ(test, stream->length, 10);
-	KUNIT_EXPECT_FALSE(test, stream->is_empty(stream));
+	KUNIT_ASSERT_STREQ(test, output, "AAAAAAAAAA");
+	KUNIT_ASSERT_EQ(test, stream->length, 10);
+	KUNIT_ASSERT_FALSE(test, stream->is_empty(stream));
 	kfree(output);
 
 	stream->clear(stream);
 
 	output = stream->get_string(stream);
-	KUNIT_EXPECT_STREQ(test, output, "");
-	KUNIT_EXPECT_TRUE(test, stream->is_empty(stream));
+	KUNIT_ASSERT_STREQ(test, output, "");
+	KUNIT_ASSERT_TRUE(test, stream->is_empty(stream));
 	destroy_string_stream(stream);
 }
 
diff --git a/kunit/test-test.c b/kunit/test-test.c
index a936c041f1c8f..0b4ad6690310d 100644
--- a/kunit/test-test.c
+++ b/kunit/test-test.c
@@ -100,11 +100,13 @@ static int kunit_try_catch_test_init(struct kunit *test)
 	struct kunit_try_catch_test_context *ctx;
 
 	ctx = kunit_kzalloc(test, sizeof(*ctx), GFP_KERNEL);
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx);
 	test->priv = ctx;
 
 	ctx->try_catch = kunit_kmalloc(test,
 				       sizeof(*ctx->try_catch),
 				       GFP_KERNEL);
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx->try_catch);
 	kunit_try_catch_init(ctx->try_catch);
 	ctx->try_catch->context.test = test;
 
diff --git a/kunit/test.c b/kunit/test.c
index 6e5244642ab07..9cc8ecdb079b0 100644
--- a/kunit/test.c
+++ b/kunit/test.c
@@ -495,3 +495,36 @@ void kunit_expect_binary_msg(struct kunit *test,
 	kunit_expect_end(test, compare_result, stream);
 }
 
+void kunit_assert_binary_msg(struct kunit *test,
+			     long long left, const char *left_name,
+			     long long right, const char *right_name,
+			     bool compare_result,
+			     const char *compare_name,
+			     const char *file,
+			     const char *line,
+			     const char *fmt, ...)
+{
+	struct kunit_stream *stream = kunit_assert_start(test, file, line);
+	struct va_format vaf;
+	va_list args;
+
+	stream->add(stream,
+		    "Asserted %s %s %s, but\n",
+		    left_name, compare_name, right_name);
+	stream->add(stream, "\t\t%s == %lld\n", left_name, left);
+	stream->add(stream, "\t\t%s == %lld", right_name, right);
+
+	if (fmt) {
+		va_start(args, fmt);
+
+		vaf.fmt = fmt;
+		vaf.va = &args;
+
+		stream->add(stream, "\n%pV", &vaf);
+
+		va_end(args);
+	}
+
+	kunit_assert_end(test, compare_result, stream);
+}
+
-- 
2.21.0.rc0.258.g878e2cd30e-goog




^ permalink raw reply related	[flat|nested] 316+ messages in thread

* [RFC v4 10/17] kunit: test: add test managed resource tests
@ 2019-02-14 21:37     ` Brendan Higgins
  0 siblings, 0 replies; 316+ messages in thread
From: Brendan Higgins @ 2019-02-14 21:37 UTC (permalink / raw)
  To: keescook, mcgrof, shuah, robh, kieran.bingham, frowand.list
  Cc: gregkh, joel, mpe, joe, brakmo, rostedt, Tim.Bird, khilman,
	julia.lawall, linux-kselftest, kunit-dev, linux-kernel, jdike,
	richard, linux-um, daniel, dri-devel, dan.j.williams,
	linux-nvdimm, knut.omang, devicetree, pmladek, Alexander.Levin,
	amir73il, dan.carpenter, wfg, Brendan Higgins,
	Avinash Kondareddy

Add tests covering how tests interact with test-managed resources over
their lifetime.
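
As an illustrative aside (not part of the patch): a sketch of the resource
lifecycle these tests exercise. The fake_init/fake_free callbacks and the
context struct below are made up (they mirror the fakes added in this patch);
kunit_alloc_resource(), kunit_free_resource(), and kunit_cleanup() are the
interfaces under test.

#include <kunit/test.h>

struct fake_ctx {		/* hypothetical context, illustration only */
	bool initialized;
};

static int fake_init(struct kunit_resource *res, void *context)
{
	struct fake_ctx *ctx = context;

	/* Record what was "allocated" so the free callback can undo it. */
	res->allocation = &ctx->initialized;
	ctx->initialized = true;
	return 0;
}

static void fake_free(struct kunit_resource *res)
{
	*(bool *)res->allocation = false;
}

static void example_resource_test(struct kunit *test)
{
	struct fake_ctx ctx = { .initialized = false };
	struct kunit_resource *res;

	/* Register the resource with the test... */
	res = kunit_alloc_resource(test, fake_init, fake_free, &ctx);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, res);
	KUNIT_EXPECT_TRUE(test, ctx.initialized);

	/* ...and release it explicitly, or let kunit_cleanup() tear it down
	 * automatically when the test case ends.
	 */
	kunit_free_resource(test, res);
	KUNIT_EXPECT_FALSE(test, ctx.initialized);
}

kunit_cleanup() is expected to empty the test's resource list (freeing each
registered resource), which is what kunit_resource_test_cleanup_resources()
below checks via list_empty().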

Signed-off-by: Avinash Kondareddy <avikr@google.com>
Signed-off-by: Brendan Higgins <brendanhiggins@google.com>
---
 kunit/test-test.c | 121 ++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 121 insertions(+)

diff --git a/kunit/test-test.c b/kunit/test-test.c
index 0b4ad6690310d..bb34431398526 100644
--- a/kunit/test-test.c
+++ b/kunit/test-test.c
@@ -127,3 +127,124 @@ static struct kunit_module kunit_try_catch_test_module = {
 	.test_cases = kunit_try_catch_test_cases,
 };
 module_test(kunit_try_catch_test_module);
+
+/*
+ * Context for testing test-managed resources;
+ * is_resource_initialized stands in for an arbitrary resource.
+ */
+struct kunit_test_resource_context {
+	struct kunit test;
+	bool is_resource_initialized;
+};
+
+static int fake_resource_init(struct kunit_resource *res, void *context)
+{
+	struct kunit_test_resource_context *ctx = context;
+
+	res->allocation = &ctx->is_resource_initialized;
+	ctx->is_resource_initialized = true;
+	return 0;
+}
+
+static void fake_resource_free(struct kunit_resource *res)
+{
+	bool *is_resource_initialized = res->allocation;
+
+	*is_resource_initialized = false;
+}
+
+static void kunit_resource_test_init_resources(struct kunit *test)
+{
+	struct kunit_test_resource_context *ctx = test->priv;
+
+	kunit_init_test(&ctx->test, "testing_test_init_test");
+
+	KUNIT_EXPECT_TRUE(test, list_empty(&ctx->test.resources));
+}
+
+static void kunit_resource_test_alloc_resource(struct kunit *test)
+{
+	struct kunit_test_resource_context *ctx = test->priv;
+	struct kunit_resource *res;
+	kunit_resource_free_t free = fake_resource_free;
+
+	res = kunit_alloc_resource(&ctx->test,
+				   fake_resource_init,
+				   fake_resource_free,
+				   ctx);
+
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, res);
+	KUNIT_EXPECT_EQ(test, &ctx->is_resource_initialized, res->allocation);
+	KUNIT_EXPECT_TRUE(test, list_is_last(&res->node, &ctx->test.resources));
+	KUNIT_EXPECT_EQ(test, free, res->free);
+}
+
+static void kunit_resource_test_free_resource(struct kunit *test)
+{
+	struct kunit_test_resource_context *ctx = test->priv;
+	struct kunit_resource *res = kunit_alloc_resource(&ctx->test,
+							  fake_resource_init,
+							  fake_resource_free,
+							  ctx);
+
+	kunit_free_resource(&ctx->test, res);
+
+	KUNIT_EXPECT_EQ(test, false, ctx->is_resource_initialized);
+	KUNIT_EXPECT_TRUE(test, list_empty(&ctx->test.resources));
+}
+
+#define KUNIT_RESOURCE_NUM 5
+static void kunit_resource_test_cleanup_resources(struct kunit *test)
+{
+	int i;
+	struct kunit_test_resource_context *ctx = test->priv;
+	struct kunit_resource *resources[KUNIT_RESOURCE_NUM];
+
+	for (i = 0; i < KUNIT_RESOURCE_NUM; i++) {
+		resources[i] = kunit_alloc_resource(&ctx->test,
+						    fake_resource_init,
+						    fake_resource_free,
+						    ctx);
+	}
+
+	kunit_cleanup(&ctx->test);
+
+	KUNIT_EXPECT_TRUE(test, list_empty(&ctx->test.resources));
+}
+
+static int kunit_resource_test_init(struct kunit *test)
+{
+	struct kunit_test_resource_context *ctx =
+			kzalloc(sizeof(*ctx), GFP_KERNEL);
+
+	if (!ctx)
+		return -ENOMEM;
+	test->priv = ctx;
+
+	kunit_init_test(&ctx->test, "test_test_context");
+	return 0;
+}
+
+static void kunit_resource_test_exit(struct kunit *test)
+{
+	struct kunit_test_resource_context *ctx = test->priv;
+
+	kunit_cleanup(&ctx->test);
+	kfree(ctx);
+}
+
+static struct kunit_case kunit_resource_test_cases[] = {
+	KUNIT_CASE(kunit_resource_test_init_resources),
+	KUNIT_CASE(kunit_resource_test_alloc_resource),
+	KUNIT_CASE(kunit_resource_test_free_resource),
+	KUNIT_CASE(kunit_resource_test_cleanup_resources),
+	{},
+};
+
+static struct kunit_module kunit_resource_test_module = {
+	.name = "kunit-resource-test",
+	.init = kunit_resource_test_init,
+	.exit = kunit_resource_test_exit,
+	.test_cases = kunit_resource_test_cases,
+};
+module_test(kunit_resource_test_module);
-- 
2.21.0.rc0.258.g878e2cd30e-goog


^ permalink raw reply related	[flat|nested] 316+ messages in thread

* [RFC v4 11/17] kunit: tool: add Python wrappers for running KUnit tests
@ 2019-02-14 21:37     ` Brendan Higgins
  0 siblings, 0 replies; 316+ messages in thread
From: Brendan Higgins @ 2019-02-14 21:37 UTC (permalink / raw)
  To: keescook, mcgrof, shuah, robh, kieran.bingham, frowand.list
  Cc: gregkh, joel, mpe, joe, brakmo, rostedt, Tim.Bird, khilman,
	julia.lawall, linux-kselftest, kunit-dev, linux-kernel, jdike,
	richard, linux-um, daniel, dri-devel, dan.j.williams,
	linux-nvdimm, knut.omang, devicetree, pmladek, Alexander.Levin,
	amir73il, dan.carpenter, wfg, Felix Guo, Brendan Higgins

From: Felix Guo <felixguoxiuping@gmail.com>

The ultimate goal is to create minimal isolated test binaries; in the
meantime we are using UML to provide the infrastructure to run tests, so
we define an abstract way to configure and run tests that allows us to
change the context in which tests are built without affecting the user.
This also makes pretty, dynamic error reporting and a lot of other nice
features easier to add.

kunit_config.py:
  - parse .config and Kconfig files.

kunit_kernel.py: provides helper functions to:
  - configure the kernel using kunitconfig.
  - build the kernel with the appropriate configuration.
  - invoke the kernel and stream its output back (see the usage sketch
    after this summary).
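
A minimal usage sketch of these wrappers, based only on the interfaces in
the diff below; the CONFIG entries and file names are illustrative, not
part of the patch:

    import kunit_config

    # A kunitconfig is a plain list of Kconfig assignments; lines of the
    # form "# CONFIG_FOO is not set" are accepted, while other '#' comment
    # lines are silently skipped by the parser.
    kunitconfig = kunit_config.Kconfig()
    kunitconfig.parse_from_string('CONFIG_KUNIT=y\n# CONFIG_MMU is not set\n')

    # build_config()/build_reconfig() seed .config from the kunitconfig and
    # then verify that every requested entry survived "make olddefconfig":
    dot_config = kunit_config.Kconfig()
    dot_config.read_from_file('.config')  # assumes a generated .config exists
    print(kunitconfig.is_subset_of(dot_config))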

Signed-off-by: Felix Guo <felixguoxiuping@gmail.com>
Signed-off-by: Brendan Higgins <brendanhiggins@google.com>
---
Changes Since Last Version
 - Added support for building and running tests in an external
   directory.
 - Squashed with most other kunit_tool commits, since most did not
   represent a coherent new feature.
---
 tools/testing/kunit/.gitignore      |   3 +
 tools/testing/kunit/kunit.py        |  78 +++++++++++++++
 tools/testing/kunit/kunit_config.py |  66 +++++++++++++
 tools/testing/kunit/kunit_kernel.py | 148 ++++++++++++++++++++++++++++
 tools/testing/kunit/kunit_parser.py | 119 ++++++++++++++++++++++
 5 files changed, 414 insertions(+)
 create mode 100644 tools/testing/kunit/.gitignore
 create mode 100755 tools/testing/kunit/kunit.py
 create mode 100644 tools/testing/kunit/kunit_config.py
 create mode 100644 tools/testing/kunit/kunit_kernel.py
 create mode 100644 tools/testing/kunit/kunit_parser.py

diff --git a/tools/testing/kunit/.gitignore b/tools/testing/kunit/.gitignore
new file mode 100644
index 0000000000000..c791ff59a37a9
--- /dev/null
+++ b/tools/testing/kunit/.gitignore
@@ -0,0 +1,3 @@
+# Byte-compiled / optimized / DLL files
+__pycache__/
+*.py[cod]
\ No newline at end of file
diff --git a/tools/testing/kunit/kunit.py b/tools/testing/kunit/kunit.py
new file mode 100755
index 0000000000000..7413ec7351a20
--- /dev/null
+++ b/tools/testing/kunit/kunit.py
@@ -0,0 +1,78 @@
+#!/usr/bin/python3
+# SPDX-License-Identifier: GPL-2.0
+#
+# A thin wrapper on top of the KUnit Kernel
+#
+# Copyright (C) 2019, Google LLC.
+# Author: Felix Guo <felixguoxiuping@gmail.com>
+# Author: Brendan Higgins <brendanhiggins@google.com>
+
+import argparse
+import sys
+import os
+import time
+
+import kunit_config
+import kunit_kernel
+import kunit_parser
+
+parser = argparse.ArgumentParser(description='Runs KUnit tests.')
+
+parser.add_argument('--raw_output', help='don\'t format output from kernel',
+		    action='store_true')
+
+parser.add_argument('--timeout', help='maximum number of seconds to allow for '
+		    'all tests to run. This does not include time taken to '
+		    'build the tests.', type=int, default=300,
+		    metavar='timeout')
+
+parser.add_argument('--jobs',
+		    help='As in the make command, "Specifies  the number of '
+		    'jobs (commands) to run simultaneously."',
+		    type=int, default=8, metavar='jobs')
+
+parser.add_argument('--build_dir',
+		    help='As in the make command, it specifies the build '
+		    'directory.',
+		    type=str, default=None, metavar='build_dir')
+
+cli_args = parser.parse_args()
+
+linux = kunit_kernel.LinuxSourceTree()
+
+build_dir = None
+if cli_args.build_dir:
+	build_dir = cli_args.build_dir
+
+config_start = time.time()
+success = linux.build_reconfig(build_dir)
+config_end = time.time()
+if not success:
+	quit()
+
+kunit_parser.print_with_timestamp('Building KUnit Kernel ...')
+
+build_start = time.time()
+
+success = linux.build_um_kernel(jobs=cli_args.jobs, build_dir=build_dir)
+build_end = time.time()
+if not success:
+	quit()
+
+kunit_parser.print_with_timestamp('Starting KUnit Kernel ...')
+test_start = time.time()
+
+if cli_args.raw_output:
+	kunit_parser.raw_output(linux.run_kernel(timeout=cli_args.timeout,
+						 build_dir=build_dir))
+else:
+	kunit_parser.parse_run_tests(linux.run_kernel(timeout=cli_args.timeout,
+						      build_dir=build_dir))
+
+test_end = time.time()
+
+kunit_parser.print_with_timestamp((
+	"Elapsed time: %.3fs total, %.3fs configuring, %.3fs " +
+	"building, %.3fs running.\n") % (test_end - config_start,
+	config_end - config_start, build_end - build_start,
+	test_end - test_start))
diff --git a/tools/testing/kunit/kunit_config.py b/tools/testing/kunit/kunit_config.py
new file mode 100644
index 0000000000000..167f47d9ab8e4
--- /dev/null
+++ b/tools/testing/kunit/kunit_config.py
@@ -0,0 +1,66 @@
+# SPDX-License-Identifier: GPL-2.0
+#
+# Builds a .config from a kunitconfig.
+#
+# Copyright (C) 2019, Google LLC.
+# Author: Felix Guo <felixguoxiuping@gmail.com>
+# Author: Brendan Higgins <brendanhiggins@google.com>
+
+import collections
+import re
+
+CONFIG_IS_NOT_SET_PATTERN = r'^# CONFIG_\w+ is not set$'
+CONFIG_PATTERN = r'^CONFIG_\w+=\S+$'
+
+KconfigEntryBase = collections.namedtuple('KconfigEntry', ['raw_entry'])
+
+
+class KconfigEntry(KconfigEntryBase):
+
+	def __str__(self) -> str:
+		return self.raw_entry
+
+
+class KconfigParseError(Exception):
+	"""Error parsing Kconfig defconfig or .config."""
+
+
+class Kconfig(object):
+	"""Represents defconfig or .config specified using the Kconfig language."""
+
+	def __init__(self):
+		self._entries = []
+
+	def entries(self):
+		return set(self._entries)
+
+	def add_entry(self, entry: KconfigEntry) -> None:
+		self._entries.append(entry)
+
+	def is_subset_of(self, other: "Kconfig") -> bool:
+		return self.entries().issubset(other.entries())
+
+	def write_to_file(self, path: str) -> None:
+		with open(path, 'w') as f:
+			for entry in self.entries():
+				f.write(str(entry) + '\n')
+
+	def parse_from_string(self, blob: str) -> None:
+		"""Parses a string containing KconfigEntrys and populates this Kconfig."""
+		self._entries = []
+		is_not_set_matcher = re.compile(CONFIG_IS_NOT_SET_PATTERN)
+		config_matcher = re.compile(CONFIG_PATTERN)
+		for line in blob.split('\n'):
+			line = line.strip()
+			if not line:
+				continue
+			elif config_matcher.match(line) or is_not_set_matcher.match(line):
+				self._entries.append(KconfigEntry(line))
+			elif line[0] == '#':
+				continue
+			else:
+				raise KconfigParseError('Failed to parse: ' + line)
+
+	def read_from_file(self, path: str) -> None:
+		with open(path, 'r') as f:
+			self.parse_from_string(f.read())
diff --git a/tools/testing/kunit/kunit_kernel.py b/tools/testing/kunit/kunit_kernel.py
new file mode 100644
index 0000000000000..07c0abf2f47df
--- /dev/null
+++ b/tools/testing/kunit/kunit_kernel.py
@@ -0,0 +1,148 @@
+# SPDX-License-Identifier: GPL-2.0
+#
+# Runs UML kernel, collects output, and handles errors.
+#
+# Copyright (C) 2019, Google LLC.
+# Author: Felix Guo <felixguoxiuping@gmail.com>
+# Author: Brendan Higgins <brendanhiggins@google.com>
+
+
+import logging
+import subprocess
+import os
+
+import kunit_config
+
+KCONFIG_PATH = '.config'
+
+class ConfigError(Exception):
+	"""Represents an error trying to configure the Linux kernel."""
+
+
+class BuildError(Exception):
+	"""Represents an error trying to build the Linux kernel."""
+
+
+class LinuxSourceTreeOperations(object):
+	"""An abstraction over command line operations performed on a source tree."""
+
+	def make_mrproper(self):
+		try:
+			subprocess.check_output(['make', 'mrproper'])
+		except OSError as e:
+			raise ConfigError('Could not call make command: ' + e)
+		except subprocess.CalledProcessError as e:
+			raise ConfigError(e.output)
+
+	def make_olddefconfig(self, build_dir):
+		command = ['make', 'ARCH=um', 'olddefconfig']
+		if build_dir:
+			command += ['O=' + build_dir]
+		try:
+			subprocess.check_output(command)
+		except OSError as e:
+			raise ConfigError('Could not call make command: ' + e)
+		except subprocess.CalledProcessError as e:
+			raise ConfigError(e.output)
+
+	def make(self, jobs, build_dir):
+		command = ['make', 'ARCH=um', '--jobs=' + str(jobs)]
+		if build_dir:
+			command += ['O=' + build_dir]
+		try:
+			subprocess.check_output(command)
+		except OSError as e:
+			raise BuildError('Could not call execute make: ' + e)
+		except subprocess.CalledProcessError as e:
+			raise BuildError(e.output)
+
+	def linux_bin(self, params, timeout, build_dir):
+		"""Runs the Linux UML binary. Must be named 'linux'."""
+		linux_bin = './linux'
+		if build_dir:
+			linux_bin = os.path.join(build_dir, 'linux')
+		process = subprocess.Popen(
+			[linux_bin] + params,
+			stdin=subprocess.PIPE,
+			stdout=subprocess.PIPE,
+			stderr=subprocess.PIPE)
+		process.wait(timeout=timeout)
+		return process
+
+
+def get_kconfig_path(build_dir):
+	kconfig_path = KCONFIG_PATH
+	if build_dir:
+		kconfig_path = os.path.join(build_dir, KCONFIG_PATH)
+	return kconfig_path
+
+class LinuxSourceTree(object):
+	"""Represents a Linux kernel source tree with KUnit tests."""
+
+	def __init__(self):
+		self._kconfig = kunit_config.Kconfig()
+		self._kconfig.read_from_file('kunitconfig')
+		self._ops = LinuxSourceTreeOperations()
+
+	def clean(self):
+		try:
+			self._ops.make_mrproper()
+		except ConfigError as e:
+			logging.error(e)
+			return False
+		return True
+
+	def build_config(self, build_dir):
+		kconfig_path = get_kconfig_path(build_dir)
+		if build_dir and not os.path.exists(build_dir):
+			os.mkdir(build_dir)
+		self._kconfig.write_to_file(kconfig_path)
+		try:
+			self._ops.make_olddefconfig(build_dir)
+		except ConfigError as e:
+			logging.error(e)
+			return False
+		validated_kconfig = kunit_config.Kconfig()
+		validated_kconfig.read_from_file(kconfig_path)
+		if not self._kconfig.is_subset_of(validated_kconfig):
+			logging.error('Provided Kconfig is not contained in validated .config!')
+			return False
+		return True
+
+	def build_reconfig(self, build_dir):
+		"""Creates a new .config if it is not a subset of the kunitconfig."""
+		kconfig_path = get_kconfig_path(build_dir)
+		if os.path.exists(kconfig_path):
+			existing_kconfig = kunit_config.Kconfig()
+			existing_kconfig.read_from_file(kconfig_path)
+			if not self._kconfig.is_subset_of(existing_kconfig):
+				print('Regenerating .config ...')
+				os.remove(kconfig_path)
+				return self.build_config(build_dir)
+			else:
+				return True
+		else:
+			print('Generating .config ...')
+			return self.build_config(build_dir)
+
+	def build_um_kernel(self, jobs, build_dir):
+		try:
+			self._ops.make_olddefconfig(build_dir)
+			self._ops.make(jobs, build_dir)
+		except (ConfigError, BuildError) as e:
+			logging.error(e)
+			return False
+		used_kconfig = kunit_config.Kconfig()
+		used_kconfig.read_from_file(get_kconfig_path(build_dir))
+		if not self._kconfig.is_subset_of(used_kconfig):
+			logging.error('Provided Kconfig is not contained in final config!')
+			return False
+		return True
+
+	def run_kernel(self, args=[], timeout=None, build_dir=None):
+		args.extend(['mem=256M'])
+		process = self._ops.linux_bin(args, timeout, build_dir)
+		with open('test.log', 'w') as f:
+			for line in process.stdout:
+				f.write(line.rstrip().decode('ascii') + '\n')
+				yield line.rstrip().decode('ascii')
diff --git a/tools/testing/kunit/kunit_parser.py b/tools/testing/kunit/kunit_parser.py
new file mode 100644
index 0000000000000..6c81d4dcfabb5
--- /dev/null
+++ b/tools/testing/kunit/kunit_parser.py
@@ -0,0 +1,119 @@
+# SPDX-License-Identifier: GPL-2.0
+#
+# Parses test results from a kernel dmesg log.
+#
+# Copyright (C) 2019, Google LLC.
+# Author: Felix Guo <felixguoxiuping@gmail.com>
+# Author: Brendan Higgins <brendanhiggins@google.com>
+
+import re
+from datetime import datetime
+
+kunit_start_re = re.compile('printk: console .* enabled')
+kunit_end_re = re.compile('List of all partitions:')
+
+def isolate_kunit_output(kernel_output):
+	started = False
+	for line in kernel_output:
+		if kunit_start_re.match(line):
+			started = True
+		elif kunit_end_re.match(line):
+			break
+		elif started:
+			yield line
+
+def raw_output(kernel_output):
+	for line in kernel_output:
+		print(line)
+
+DIVIDER = "=" * 30
+
+RESET = '\033[0;0m'
+
+def red(text):
+	return '\033[1;31m' + text + RESET
+
+def yellow(text):
+	return '\033[1;33m' + text + RESET
+
+def green(text):
+	return '\033[1;32m' + text + RESET
+
+def print_with_timestamp(message):
+	print('[%s] %s' % (datetime.now().strftime('%H:%M:%S'), message))
+
+def print_log(log):
+	for m in log:
+		print_with_timestamp(m)
+
+def parse_run_tests(kernel_output):
+	test_case_output = re.compile('^kunit .*?: (.*)$')
+
+	test_module_success = re.compile('^kunit .*: all tests passed')
+	test_module_fail = re.compile('^kunit .*: one or more tests failed')
+
+	test_case_success = re.compile('^kunit (.*): (.*) passed')
+	test_case_fail = re.compile('^kunit (.*): (.*) failed')
+	test_case_crash = re.compile('^kunit (.*): (.*) crashed')
+
+	total_tests = set()
+	failed_tests = set()
+	crashed_tests = set()
+
+	def get_test_name(match):
+		return match.group(1) + ":" + match.group(2)
+
+	current_case_log = []
+	def end_one_test(match, log):
+		log.clear()
+		total_tests.add(get_test_name(match))
+
+	print_with_timestamp(DIVIDER)
+	for line in isolate_kunit_output(kernel_output):
+		# Ignore module output:
+		if (test_module_success.match(line) or
+		    test_module_fail.match(line)):
+			print_with_timestamp(DIVIDER)
+			continue
+
+		match = re.match(test_case_success, line)
+		if match:
+			print_with_timestamp(green("[PASSED] ") +
+					     get_test_name(match))
+			end_one_test(match, current_case_log)
+			continue
+
+		match = re.match(test_case_fail, line)
+		# Crashed tests will report as both failed and crashed. We only
+		# want to show and count it once.
+		if match and get_test_name(match) not in crashed_tests:
+			failed_tests.add(get_test_name(match))
+			print_with_timestamp(red("[FAILED] " +
+						 get_test_name(match)))
+			print_log(map(yellow, current_case_log))
+			print_with_timestamp("")
+			end_one_test(match, current_case_log)
+			continue
+
+		match = re.match(test_case_crash, line)
+		if match:
+			crashed_tests.add(get_test_name(match))
+			print_with_timestamp(yellow("[CRASH] " +
+						    get_test_name(match)))
+			print_log(current_case_log)
+			print_with_timestamp("")
+			end_one_test(match, current_case_log)
+			continue
+
+		# Strip off the `kunit module-name:` prefix
+		match = re.match(test_case_output, line)
+		if match:
+			current_case_log.append(match.group(1))
+		else:
+			current_case_log.append(line)
+
+	fmt = green if (len(failed_tests) + len(crashed_tests) == 0) else red
+	print_with_timestamp(
+		fmt("Testing complete. %d tests run. %d failed. %d crashed." %
+		    (len(total_tests), len(failed_tests), len(crashed_tests))))
+
-- 
2.21.0.rc0.258.g878e2cd30e-goog
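
For reference, a short sketch of the console-log format that
parse_run_tests() above recognizes; the module and test names are
hypothetical, but the line format follows the regular expressions in
kunit_parser.py:

    import kunit_parser

    # isolate_kunit_output() only yields lines between the console banner
    # and the "List of all partitions:" marker, so both are included here.
    fake_kernel_log = [
        'printk: console [tty0] enabled',
        'kunit example: example_test_foo passed',
        'kunit example: example_test_bar failed',
        'kunit example: one or more tests failed',
        'List of all partitions:',
    ]

    # Prints a [PASSED]/[FAILED] line per test case plus a final summary,
    # e.g. "Testing complete. 2 tests run. 1 failed. 0 crashed."
    kunit_parser.parse_run_tests(fake_kernel_log)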


* [RFC v4 11/17] kunit: tool: add Python wrappers for running KUnit tests
@ 2019-02-14 21:37     ` Brendan Higgins
  0 siblings, 0 replies; 316+ messages in thread
From: Brendan Higgins @ 2019-02-14 21:37 UTC (permalink / raw)


From: Felix Guo <felixguoxiuping@gmail.com>

The ultimate goal is to create minimal isolated test binaries; in the
meantime we are using UML to provide the infrastructure to run tests, so
we define an abstract way to configure and run tests that allows us to
change the context in which tests are built without affecting the user.
This also makes pretty, dynamic error reporting and a lot of other nice
features easier to add.

kunit_config.py:
  - parse .config and Kconfig files.

kunit_kernel.py: provides helper functions to:
  - configure the kernel using kunitconfig.
  - build the kernel with the appropriate configuration.
  - invoke the kernel and stream its output back.

Signed-off-by: Felix Guo <felixguoxiuping@gmail.com>
Signed-off-by: Brendan Higgins <brendanhiggins@google.com>
---
Changes Since Last Version
 - Added support for building and running tests in an external
   directory.
 - Squashed with most other kunit_tool commits, since most did not
   represent a coherent new feature.
---
 tools/testing/kunit/.gitignore      |   3 +
 tools/testing/kunit/kunit.py        |  78 +++++++++++++++
 tools/testing/kunit/kunit_config.py |  66 +++++++++++++
 tools/testing/kunit/kunit_kernel.py | 148 ++++++++++++++++++++++++++++
 tools/testing/kunit/kunit_parser.py | 119 ++++++++++++++++++++++
 5 files changed, 414 insertions(+)
 create mode 100644 tools/testing/kunit/.gitignore
 create mode 100755 tools/testing/kunit/kunit.py
 create mode 100644 tools/testing/kunit/kunit_config.py
 create mode 100644 tools/testing/kunit/kunit_kernel.py
 create mode 100644 tools/testing/kunit/kunit_parser.py

diff --git a/tools/testing/kunit/.gitignore b/tools/testing/kunit/.gitignore
new file mode 100644
index 0000000000000..c791ff59a37a9
--- /dev/null
+++ b/tools/testing/kunit/.gitignore
@@ -0,0 +1,3 @@
+# Byte-compiled / optimized / DLL files
+__pycache__/
+*.py[cod]
\ No newline at end of file
diff --git a/tools/testing/kunit/kunit.py b/tools/testing/kunit/kunit.py
new file mode 100755
index 0000000000000..7413ec7351a20
--- /dev/null
+++ b/tools/testing/kunit/kunit.py
@@ -0,0 +1,78 @@
+#!/usr/bin/python3
+# SPDX-License-Identifier: GPL-2.0
+#
+# A thin wrapper on top of the KUnit Kernel
+#
+# Copyright (C) 2019, Google LLC.
+# Author: Felix Guo <felixguoxiuping@gmail.com>
+# Author: Brendan Higgins <brendanhiggins@google.com>
+
+import argparse
+import sys
+import os
+import time
+
+import kunit_config
+import kunit_kernel
+import kunit_parser
+
+parser = argparse.ArgumentParser(description='Runs KUnit tests.')
+
+parser.add_argument('--raw_output', help='don\'t format output from kernel',
+		    action='store_true')
+
+parser.add_argument('--timeout', help='maximum number of seconds to allow for '
+		    'all tests to run. This does not include time taken to '
+		    'build the tests.', type=int, default=300,
+		    metavar='timeout')
+
+parser.add_argument('--jobs',
+		    help='As in the make command, "Specifies  the number of '
+		    'jobs (commands) to run simultaneously."',
+		    type=int, default=8, metavar='jobs')
+
+parser.add_argument('--build_dir',
+		    help='As in the make command, it specifies the build '
+		    'directory.',
+		    type=str, default=None, metavar='build_dir')
+
+cli_args = parser.parse_args()
+
+linux = kunit_kernel.LinuxSourceTree()
+
+build_dir = None
+if cli_args.build_dir:
+	build_dir = cli_args.build_dir
+
+config_start = time.time()
+success = linux.build_reconfig(build_dir)
+config_end = time.time()
+if not success:
+	quit()
+
+kunit_parser.print_with_timestamp('Building KUnit Kernel ...')
+
+build_start = time.time()
+
+success = linux.build_um_kernel(jobs=cli_args.jobs, build_dir=build_dir)
+build_end = time.time()
+if not success:
+	quit()
+
+kunit_parser.print_with_timestamp('Starting KUnit Kernel ...')
+test_start = time.time()
+
+if cli_args.raw_output:
+	kunit_parser.raw_output(linux.run_kernel(timeout=cli_args.timeout,
+						 build_dir=build_dir))
+else:
+	kunit_parser.parse_run_tests(linux.run_kernel(timeout=cli_args.timeout,
+						      build_dir=build_dir))
+
+test_end = time.time()
+
+kunit_parser.print_with_timestamp((
+	"Elapsed time: %.3fs total, %.3fs configuring, %.3fs " +
+	"building, %.3fs running.\n") % (test_end - config_start,
+	config_end - config_start, build_end - build_start,
+	test_end - test_start))
diff --git a/tools/testing/kunit/kunit_config.py b/tools/testing/kunit/kunit_config.py
new file mode 100644
index 0000000000000..167f47d9ab8e4
--- /dev/null
+++ b/tools/testing/kunit/kunit_config.py
@@ -0,0 +1,66 @@
+# SPDX-License-Identifier: GPL-2.0
+#
+# Builds a .config from a kunitconfig.
+#
+# Copyright (C) 2019, Google LLC.
+# Author: Felix Guo <felixguoxiuping@gmail.com>
+# Author: Brendan Higgins <brendanhiggins@google.com>
+
+import collections
+import re
+
+CONFIG_IS_NOT_SET_PATTERN = r'^# CONFIG_\w+ is not set$'
+CONFIG_PATTERN = r'^CONFIG_\w+=\S+$'
+
+KconfigEntryBase = collections.namedtuple('KconfigEntry', ['raw_entry'])
+
+
+class KconfigEntry(KconfigEntryBase):
+
+	def __str__(self) -> str:
+		return self.raw_entry
+
+
+class KconfigParseError(Exception):
+	"""Error parsing Kconfig defconfig or .config."""
+
+
+class Kconfig(object):
+	"""Represents defconfig or .config specified using the Kconfig language."""
+
+	def __init__(self):
+		self._entries = []
+
+	def entries(self):
+		return set(self._entries)
+
+	def add_entry(self, entry: KconfigEntry) -> None:
+		self._entries.append(entry)
+
+	def is_subset_of(self, other: "Kconfig") -> bool:
+		return self.entries().issubset(other.entries())
+
+	def write_to_file(self, path: str) -> None:
+		with open(path, 'w') as f:
+			for entry in self.entries():
+				f.write(str(entry) + '\n')
+
+	def parse_from_string(self, blob: str) -> None:
+		"""Parses a string containing KconfigEntrys and populates this Kconfig."""
+		self._entries = []
+		is_not_set_matcher = re.compile(CONFIG_IS_NOT_SET_PATTERN)
+		config_matcher = re.compile(CONFIG_PATTERN)
+		for line in blob.split('\n'):
+			line = line.strip()
+			if not line:
+				continue
+			elif config_matcher.match(line) or is_not_set_matcher.match(line):
+				self._entries.append(KconfigEntry(line))
+			elif line[0] == '#':
+				continue
+			else:
+				raise KconfigParseError('Failed to parse: ' + line)
+
+	def read_from_file(self, path: str) -> None:
+		with open(path, 'r') as f:
+			self.parse_from_string(f.read())
diff --git a/tools/testing/kunit/kunit_kernel.py b/tools/testing/kunit/kunit_kernel.py
new file mode 100644
index 0000000000000..07c0abf2f47df
--- /dev/null
+++ b/tools/testing/kunit/kunit_kernel.py
@@ -0,0 +1,148 @@
+# SPDX-License-Identifier: GPL-2.0
+#
+# Runs UML kernel, collects output, and handles errors.
+#
+# Copyright (C) 2019, Google LLC.
+# Author: Felix Guo <felixguoxiuping@gmail.com>
+# Author: Brendan Higgins <brendanhiggins@google.com>
+
+
+import logging
+import subprocess
+import os
+
+import kunit_config
+
+KCONFIG_PATH = '.config'
+
+class ConfigError(Exception):
+	"""Represents an error trying to configure the Linux kernel."""
+
+
+class BuildError(Exception):
+	"""Represents an error trying to build the Linux kernel."""
+
+
+class LinuxSourceTreeOperations(object):
+	"""An abstraction over command line operations performed on a source tree."""
+
+	def make_mrproper(self):
+		try:
+			subprocess.check_output(['make', 'mrproper'])
+		except OSError as e:
+			raise ConfigError('Could not call make command: ' + str(e))
+		except subprocess.CalledProcessError as e:
+			raise ConfigError(e.output)
+
+	def make_olddefconfig(self, build_dir):
+		command = ['make', 'ARCH=um', 'olddefconfig']
+		if build_dir:
+			command += ['O=' + build_dir]
+		try:
+			subprocess.check_output(command)
+		except OSError as e:
+			raise ConfigError('Could not call make command: ' + str(e))
+		except subprocess.CalledProcessError as e:
+			raise ConfigError(e.output)
+
+	def make(self, jobs, build_dir):
+		command = ['make', 'ARCH=um', '--jobs=' + str(jobs)]
+		if build_dir:
+			command += ['O=' + build_dir]
+		try:
+			subprocess.check_output(command)
+		except OSError as e:
+			raise BuildError('Could not execute make: ' + str(e))
+		except subprocess.CalledProcessError as e:
+			raise BuildError(e.output)
+
+	def linux_bin(self, params, timeout, build_dir):
+		"""Runs the Linux UML binary. Must be named 'linux'."""
+		linux_bin = './linux'
+		if build_dir:
+			linux_bin = os.path.join(build_dir, 'linux')
+		process = subprocess.Popen(
+			[linux_bin] + params,
+			stdin=subprocess.PIPE,
+			stdout=subprocess.PIPE,
+			stderr=subprocess.PIPE)
+		process.wait(timeout=timeout)
+		return process
+
+
+def get_kconfig_path(build_dir):
+	kconfig_path = KCONFIG_PATH
+	if build_dir:
+		kconfig_path = os.path.join(build_dir, KCONFIG_PATH)
+	return kconfig_path
+
+class LinuxSourceTree(object):
+	"""Represents a Linux kernel source tree with KUnit tests."""
+
+	def __init__(self):
+		self._kconfig = kunit_config.Kconfig()
+		self._kconfig.read_from_file('kunitconfig')
+		self._ops = LinuxSourceTreeOperations()
+
+	def clean(self):
+		try:
+			self._ops.make_mrproper()
+		except ConfigError as e:
+			logging.error(e)
+			return False
+		return True
+
+	def build_config(self, build_dir):
+		kconfig_path = get_kconfig_path(build_dir)
+		if build_dir and not os.path.exists(build_dir):
+			os.mkdir(build_dir)
+		self._kconfig.write_to_file(kconfig_path)
+		try:
+			self._ops.make_olddefconfig(build_dir)
+		except ConfigError as e:
+			logging.error(e)
+			return False
+		validated_kconfig = kunit_config.Kconfig()
+		validated_kconfig.read_from_file(kconfig_path)
+		if not self._kconfig.is_subset_of(validated_kconfig):
+			logging.error('Provided Kconfig is not contained in validated .config!')
+			return False
+		return True
+
+	def build_reconfig(self, build_dir):
+		"""Creates a new .config if it is not a subset of the kunitconfig."""
+		kconfig_path = get_kconfig_path(build_dir)
+		if os.path.exists(kconfig_path):
+			existing_kconfig = kunit_config.Kconfig()
+			existing_kconfig.read_from_file(kconfig_path)
+			if not self._kconfig.is_subset_of(existing_kconfig):
+				print('Regenerating .config ...')
+				os.remove(kconfig_path)
+				return self.build_config(build_dir)
+			else:
+				return True
+		else:
+			print('Generating .config ...')
+			return self.build_config(build_dir)
+
+	def build_um_kernel(self, jobs, build_dir):
+		try:
+			self._ops.make_olddefconfig(build_dir)
+			self._ops.make(jobs, build_dir)
+		except (ConfigError, BuildError) as e:
+			logging.error(e)
+			return False
+		used_kconfig = kunit_config.Kconfig()
+		used_kconfig.read_from_file(get_kconfig_path(build_dir))
+		if not self._kconfig.is_subset_of(used_kconfig):
+			logging.error('Provided Kconfig is not contained in final config!')
+			return False
+		return True
+
+	def run_kernel(self, args=None, timeout=None, build_dir=None):
+		# Avoid a shared mutable default; copy any caller-supplied args.
+		args = list(args) if args else []
+		args.extend(['mem=256M'])
+		process = self._ops.linux_bin(args, timeout, build_dir)
+		with open('test.log', 'w') as f:
+			for line in process.stdout:
+				f.write(line.rstrip().decode('ascii') + '\n')
+				yield line.rstrip().decode('ascii')
diff --git a/tools/testing/kunit/kunit_parser.py b/tools/testing/kunit/kunit_parser.py
new file mode 100644
index 0000000000000..6c81d4dcfabb5
--- /dev/null
+++ b/tools/testing/kunit/kunit_parser.py
@@ -0,0 +1,119 @@
+# SPDX-License-Identifier: GPL-2.0
+#
+# Parses test results from a kernel dmesg log.
+#
+# Copyright (C) 2019, Google LLC.
+# Author: Felix Guo <felixguoxiuping@gmail.com>
+# Author: Brendan Higgins <brendanhiggins@google.com>
+
+import re
+from datetime import datetime
+
+kunit_start_re = re.compile('printk: console .* enabled')
+kunit_end_re = re.compile('List of all partitions:')
+
+def isolate_kunit_output(kernel_output):
+	started = False
+	for line in kernel_output:
+		if kunit_start_re.match(line):
+			started = True
+		elif kunit_end_re.match(line):
+			break
+		elif started:
+			yield line
+
+def raw_output(kernel_output):
+	for line in kernel_output:
+		print(line)
+
+DIVIDER = "=" * 30
+
+RESET = '\033[0;0m'
+
+def red(text):
+	return '\033[1;31m' + text + RESET
+
+def yellow(text):
+	return '\033[1;33m' + text + RESET
+
+def green(text):
+	return '\033[1;32m' + text + RESET
+
+def print_with_timestamp(message):
+	print('[%s] %s' % (datetime.now().strftime('%H:%M:%S'), message))
+
+def print_log(log):
+	for m in log:
+		print_with_timestamp(m)
+
+def parse_run_tests(kernel_output):
+	test_case_output = re.compile('^kunit .*?: (.*)$')
+
+	test_module_success = re.compile('^kunit .*: all tests passed')
+	test_module_fail = re.compile('^kunit .*: one or more tests failed')
+
+	test_case_success = re.compile('^kunit (.*): (.*) passed')
+	test_case_fail = re.compile('^kunit (.*): (.*) failed')
+	test_case_crash = re.compile('^kunit (.*): (.*) crashed')
+
+	total_tests = set()
+	failed_tests = set()
+	crashed_tests = set()
+
+	def get_test_name(match):
+		return match.group(1) + ":" + match.group(2)
+
+	current_case_log = []
+	def end_one_test(match, log):
+		log.clear()
+		total_tests.add(get_test_name(match))
+
+	print_with_timestamp(DIVIDER)
+	for line in isolate_kunit_output(kernel_output):
+		# Ignore module output:
+		if (test_module_success.match(line) or
+		    test_module_fail.match(line)):
+			print_with_timestamp(DIVIDER)
+			continue
+
+		match = re.match(test_case_success, line)
+		if match:
+			print_with_timestamp(green("[PASSED] ") +
+					     get_test_name(match))
+			end_one_test(match, current_case_log)
+			continue
+
+		match = re.match(test_case_fail, line)
+		# Crashed tests will report as both failed and crashed. We only
+		# want to show and count it once.
+		if match and get_test_name(match) not in crashed_tests:
+			failed_tests.add(get_test_name(match))
+			print_with_timestamp(red("[FAILED] " +
+						 get_test_name(match)))
+			print_log(map(yellow, current_case_log))
+			print_with_timestamp("")
+			end_one_test(match, current_case_log)
+			continue
+
+		match = re.match(test_case_crash, line)
+		if match:
+			crashed_tests.add(get_test_name(match))
+			print_with_timestamp(yellow("[CRASH] " +
+						    get_test_name(match)))
+			print_log(current_case_log)
+			print_with_timestamp("")
+			end_one_test(match, current_case_log)
+			continue
+
+		# Strip off the `kunit module-name:` prefix
+		match = re.match(test_case_output, line)
+		if match:
+			current_case_log.append(match.group(1))
+		else:
+			current_case_log.append(line)
+
+	fmt = green if (len(failed_tests) + len(crashed_tests) == 0) else red
+	print_with_timestamp(
+		fmt("Testing complete. %d tests run. %d failed. %d crashed." %
+		    (len(total_tests), len(failed_tests), len(crashed_tests))))
+
-- 
2.21.0.rc0.258.g878e2cd30e-goog

^ permalink raw reply related	[flat|nested] 316+ messages in thread

* [RFC v4 12/17] kunit: defconfig: add defconfigs for building KUnit tests
@ 2019-02-14 21:37     ` Brendan Higgins
  0 siblings, 0 replies; 316+ messages in thread
From: Brendan Higgins @ 2019-02-14 21:37 UTC (permalink / raw)
  To: keescook, mcgrof, shuah, robh, kieran.bingham, frowand.list
  Cc: gregkh, joel, mpe, joe, brakmo, rostedt, Tim.Bird, khilman,
	julia.lawall, linux-kselftest, kunit-dev, linux-kernel, jdike,
	richard, linux-um, daniel, dri-devel, dan.j.williams,
	linux-nvdimm, knut.omang, devicetree, pmladek, Alexander.Levin,
	amir73il, dan.carpenter, wfg, Brendan Higgins

Add a defconfig for UML and a config fragment that can be used to
configure other architectures for building KUnit tests. Also add an
option to kunit_tool to use a defconfig to create the kunitconfig; a
sample invocation is shown below.
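
For example (assuming the tool is invoked from the root of the kernel
tree):

  ./tools/testing/kunit/kunit.py --defconfig

This copies arch/um/configs/kunit_defconfig to ./kunitconfig if no
kunitconfig is present yet, then configures, builds, and runs as before.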

Signed-off-by: Brendan Higgins <brendanhiggins@google.com>
---
Changes Since Last Version
 - This patch is new; it adds default configs for building KUnit.
 - NOTE: there is still some discussion to be had here about whether we
   should go with a defconfig, a config fragment, or both.
---
 arch/um/configs/kunit_defconfig              |  8 ++++++++
 tools/testing/kunit/configs/all_tests.config |  8 ++++++++
 tools/testing/kunit/kunit.py                 | 17 +++++++++++++++--
 tools/testing/kunit/kunit_kernel.py          |  3 ++-
 4 files changed, 33 insertions(+), 3 deletions(-)
 create mode 100644 arch/um/configs/kunit_defconfig
 create mode 100644 tools/testing/kunit/configs/all_tests.config

diff --git a/arch/um/configs/kunit_defconfig b/arch/um/configs/kunit_defconfig
new file mode 100644
index 0000000000000..bfe49689038f1
--- /dev/null
+++ b/arch/um/configs/kunit_defconfig
@@ -0,0 +1,8 @@
+CONFIG_OF=y
+CONFIG_OF_UNITTEST=y
+CONFIG_OF_OVERLAY=y
+CONFIG_I2C=y
+CONFIG_I2C_MUX=y
+CONFIG_KUNIT=y
+CONFIG_KUNIT_TEST=y
+CONFIG_KUNIT_EXAMPLE_TEST=y
diff --git a/tools/testing/kunit/configs/all_tests.config b/tools/testing/kunit/configs/all_tests.config
new file mode 100644
index 0000000000000..bfe49689038f1
--- /dev/null
+++ b/tools/testing/kunit/configs/all_tests.config
@@ -0,0 +1,8 @@
+CONFIG_OF=y
+CONFIG_OF_UNITTEST=y
+CONFIG_OF_OVERLAY=y
+CONFIG_I2C=y
+CONFIG_I2C_MUX=y
+CONFIG_KUNIT=y
+CONFIG_KUNIT_TEST=y
+CONFIG_KUNIT_EXAMPLE_TEST=y
diff --git a/tools/testing/kunit/kunit.py b/tools/testing/kunit/kunit.py
index 7413ec7351a20..63e9fb3b60200 100755
--- a/tools/testing/kunit/kunit.py
+++ b/tools/testing/kunit/kunit.py
@@ -11,6 +11,7 @@ import argparse
 import sys
 import os
 import time
+import shutil
 
 import kunit_config
 import kunit_kernel
@@ -36,14 +37,26 @@ parser.add_argument('--build_dir',
 		    'directory.',
 		    type=str, default=None, metavar='build_dir')
 
-cli_args = parser.parse_args()
+parser.add_argument('--defconfig',
+		    help='Uses a default kunitconfig.',
+		    action='store_true')
 
-linux = kunit_kernel.LinuxSourceTree()
+def create_default_kunitconfig():
+	if not os.path.exists(kunit_kernel.KUNITCONFIG_PATH):
+		shutil.copyfile('arch/um/configs/kunit_defconfig',
+				kunit_kernel.KUNITCONFIG_PATH)
+
+cli_args = parser.parse_args()
 
 build_dir = None
 if cli_args.build_dir:
 	build_dir = cli_args.build_dir
 
+if cli_args.defconfig:
+	create_default_kunitconfig()
+
+linux = kunit_kernel.LinuxSourceTree()
+
 config_start = time.time()
 success = linux.build_reconfig(build_dir)
 config_end = time.time()
diff --git a/tools/testing/kunit/kunit_kernel.py b/tools/testing/kunit/kunit_kernel.py
index 07c0abf2f47df..bf38768353313 100644
--- a/tools/testing/kunit/kunit_kernel.py
+++ b/tools/testing/kunit/kunit_kernel.py
@@ -14,6 +14,7 @@ import os
 import kunit_config
 
 KCONFIG_PATH = '.config'
+KUNITCONFIG_PATH = 'kunitconfig'
 
 class ConfigError(Exception):
 	"""Represents an error trying to configure the Linux kernel."""
@@ -81,7 +82,7 @@ class LinuxSourceTree(object):
 
 	def __init__(self):
 		self._kconfig = kunit_config.Kconfig()
-		self._kconfig.read_from_file('kunitconfig')
+		self._kconfig.read_from_file(KUNITCONFIG_PATH)
 		self._ops = LinuxSourceTreeOperations()
 
 	def clean(self):
-- 
2.21.0.rc0.258.g878e2cd30e-goog


^ permalink raw reply related	[flat|nested] 316+ messages in thread

* [RFC v4 13/17] Documentation: kunit: add documentation for KUnit
  2019-02-14 21:37 ` brendanhiggins
                       ` (2 preceding siblings ...)
  (?)
@ 2019-02-14 21:37     ` Brendan Higgins
  -1 siblings, 0 replies; 316+ messages in thread
From: Brendan Higgins @ 2019-02-14 21:37 UTC (permalink / raw)
  To: keescook, mcgrof, shuah, robh, kieran.bingham, frowand.list
  Cc: brakmo, pmladek, amir73il, Brendan Higgins, dri-devel,
	Alexander.Levin, linux-kselftest, linux-nvdimm, richard,
	knut.omang, Felix Guo, wfg, joel, jdike, dan.carpenter,
	devicetree, Tim.Bird, linux-um, rostedt, julia.lawall,
	kunit-dev, gregkh, linux-kernel, daniel, mpe, joe, khilman

Add documentation for KUnit, the Linux kernel unit testing framework.
- Add intro and usage guide for KUnit
- Add API reference

Signed-off-by: Felix Guo <felixguoxiuping@gmail.com>
Signed-off-by: Brendan Higgins <brendanhiggins@google.com>
---
 Documentation/index.rst           |   1 +
 Documentation/kunit/api/index.rst |  16 ++
 Documentation/kunit/api/test.rst  |  15 +
 Documentation/kunit/faq.rst       |  46 +++
 Documentation/kunit/index.rst     |  80 ++++++
 Documentation/kunit/start.rst     | 180 ++++++++++++
 Documentation/kunit/usage.rst     | 447 ++++++++++++++++++++++++++++++
 7 files changed, 785 insertions(+)
 create mode 100644 Documentation/kunit/api/index.rst
 create mode 100644 Documentation/kunit/api/test.rst
 create mode 100644 Documentation/kunit/faq.rst
 create mode 100644 Documentation/kunit/index.rst
 create mode 100644 Documentation/kunit/start.rst
 create mode 100644 Documentation/kunit/usage.rst

diff --git a/Documentation/index.rst b/Documentation/index.rst
index c858c2e66e361..9512de536b34a 100644
--- a/Documentation/index.rst
+++ b/Documentation/index.rst
@@ -65,6 +65,7 @@ merged much easier.
    kernel-hacking/index
    trace/index
    maintainer/index
+   kunit/index
 
 Kernel API documentation
 ------------------------
diff --git a/Documentation/kunit/api/index.rst b/Documentation/kunit/api/index.rst
new file mode 100644
index 0000000000000..c31c530088153
--- /dev/null
+++ b/Documentation/kunit/api/index.rst
@@ -0,0 +1,16 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+=============
+API Reference
+=============
+.. toctree::
+
+	test
+
+This section documents the KUnit kernel testing API. It is divided into the
+following sections:
+
+================================= ==============================================
+:doc:`test`                       documents all of the standard testing API
+                                  excluding mocking or mocking related features.
+================================= ==============================================
diff --git a/Documentation/kunit/api/test.rst b/Documentation/kunit/api/test.rst
new file mode 100644
index 0000000000000..7c926014f047c
--- /dev/null
+++ b/Documentation/kunit/api/test.rst
@@ -0,0 +1,15 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+========
+Test API
+========
+
+This file documents all of the standard testing API excluding mocking or mocking
+related features.
+
+.. kernel-doc:: include/kunit/test.h
+   :internal:
+
+.. kernel-doc:: include/kunit/kunit-stream.h
+   :internal:
+
diff --git a/Documentation/kunit/faq.rst b/Documentation/kunit/faq.rst
new file mode 100644
index 0000000000000..cb8e4fb2257a0
--- /dev/null
+++ b/Documentation/kunit/faq.rst
@@ -0,0 +1,46 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+=========================================
+Frequently Asked Questions
+=========================================
+
+How is this different from Autotest, kselftest, etc?
+====================================================
+KUnit is a unit testing framework. Autotest, kselftest (and some others) are
+not.
+
+A `unit test <https://martinfowler.com/bliki/UnitTest.html>`_ is supposed to
+test a single unit of code in isolation, hence the name. A unit test should be
+the finest granularity of testing and as such should allow all possible code
+paths to be tested in the code under test; this is only possible if the code
+under test is very small and does not have any external dependencies outside of
+the test's control like hardware.
+
+There are currently no testing frameworks for the kernel that do not require
+installing the kernel on a test machine or in a VM, and all of them require
+tests to be written in userspace and run on the kernel under test; this is
+true for Autotest, kselftest, and some others, which disqualifies any of them
+from being considered unit testing frameworks.
+
+What is the difference between a unit test and these other kinds of tests?
+==========================================================================
+Most existing tests for the Linux kernel would be categorized as an integration
+test, or an end-to-end test.
+
+- A unit test is supposed to test a single unit of code in isolation, hence the
+  name. A unit test should be the finest granularity of testing and as such
+  should allow all possible code paths to be tested in the code under test; this
+  is only possible if the code under test is very small and does not have any
+  external dependencies outside of the test's control like hardware.
+- An integration test tests the interaction between a minimal set of components,
+  usually just two or three. For example, someone might write an integration
+  test to test the interaction between a driver and a piece of hardware, or to
+  test the interaction between the userspace libraries the kernel provides and
+  the kernel itself; however, one of these tests would probably not test the
+  entire kernel along with hardware interactions and interactions with the
+  userspace.
+- An end-to-end test usually tests the entire system from the perspective of the
+  code under test. For example, someone might write an end-to-end test for the
+  kernel by installing a production configuration of the kernel on production
+  hardware with a production userspace and then trying to exercise some behavior
+  that depends on interactions between the hardware, the kernel, and userspace.
diff --git a/Documentation/kunit/index.rst b/Documentation/kunit/index.rst
new file mode 100644
index 0000000000000..c6710211b647f
--- /dev/null
+++ b/Documentation/kunit/index.rst
@@ -0,0 +1,80 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+=========================================
+KUnit - Unit Testing for the Linux Kernel
+=========================================
+
+.. toctree::
+	:maxdepth: 2
+
+	start
+	usage
+	api/index
+	faq
+
+What is KUnit?
+==============
+
+KUnit is a lightweight unit testing and mocking framework for the Linux kernel.
+These tests are able to be run locally on a developer's workstation without a VM
+or special hardware.
+
+KUnit is heavily inspired by JUnit, Python's unittest.mock, and
+Googletest/Googlemock for C++. KUnit provides facilities for defining unit test
+cases, grouping related test cases into test suites, providing common
+infrastructure for running tests, and much more.
+
+Get started now: :doc:`start`
+
+Why KUnit?
+==========
+
+A unit test is supposed to test a single unit of code in isolation, hence the
+name. A unit test should be the finest granularity of testing and as such should
+allow all possible code paths to be tested in the code under test; this is only
+possible if the code under test is very small and does not have any external
+dependencies outside of the test's control like hardware.
+
+Outside of KUnit, the testing frameworks currently available for the kernel
+all require installing the kernel on a test machine or in a VM, and all
+require tests to be written in userspace and run on the kernel under test;
+this is true for Autotest and kselftest, which disqualifies either of them
+from being considered a unit testing framework.
+
+KUnit addresses the problem of being able to run tests without needing a virtual
+machine or actual hardware with User Mode Linux. User Mode Linux is a Linux
+architecture, like ARM or x86; however, unlike other architectures it compiles
+to a standalone program that can be run like any other program directly inside
+of a host operating system; to be clear, it does not require any virtualization
+support; it is just a regular program.
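+
+As a rough sketch of what the KUnit wrapper does under the hood (the exact
+commands below are illustrative, not a stable interface), building and
+booting a UML kernel looks like:
+
+.. code-block:: bash
+
+	make ARCH=um olddefconfig
+	make ARCH=um --jobs=8
+	./linux mem=256M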
+
+KUnit is fast. Excluding build time, from invocation to completion KUnit can run
+several dozen tests in only 10 to 20 seconds; this might not sound like a big
+deal to some people, but having such fast and easy to run tests fundamentally
+changes the way you go about testing and even writing code in the first place.
+Linus himself said in his `git talk at Google
+<https://gist.github.com/lorn/1272686/revisions#diff-53c65572127855f1b003db4064a94573R874>`_:
+
+	"... a lot of people seem to think that performance is about doing the
+	same thing, just doing it faster, and that is not true. That is not what
+	performance is all about. If you can do something really fast, really
+	well, people will start using it differently."
+
+In this context Linus was talking about branching and merging,
+but this point also applies to testing. If your tests are slow, unreliable,
+difficult to write, and require a special setup or special hardware to run,
+then you wait a lot longer to write tests, and you wait a lot longer to run
+tests; this means that tests are likely to break, unlikely to test a lot of
+things, and are unlikely to be rerun once they pass. If your tests are really
+fast, you run them all the time, every time you make a change, and every time
+someone sends you some code. Why trust that someone ran all their tests
+correctly on every change when you can just run them yourself in less time than
+it takes to read their test log?
+
+How do I use it?
+===================
+
+*   :doc:`start` - for new users of KUnit
+*   :doc:`usage` - for a more detailed explanation of KUnit features
+*   :doc:`api/index` - for the list of KUnit APIs used for testing
+
diff --git a/Documentation/kunit/start.rst b/Documentation/kunit/start.rst
new file mode 100644
index 0000000000000..5cdba5091905e
--- /dev/null
+++ b/Documentation/kunit/start.rst
@@ -0,0 +1,180 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+===============
+Getting Started
+===============
+
+Installing dependencies
+=======================
+KUnit has the same dependencies as the Linux kernel. As long as you can build
+the kernel, you can run KUnit.
+
+KUnit Wrapper
+=============
+Included with KUnit is a simple Python wrapper that handles building and
+running the kernel, as well as formatting the output so that it is easy to
+read.
+
+The wrapper can be run with:
+
+.. code-block:: bash
+
+   ./tools/testing/kunit/kunit.py
+
+Creating a kunitconfig
+======================
+The Python script is a thin wrapper around Kbuild; as such, it needs to be
+configured with a ``kunitconfig`` file. This file is essentially a regular
+kernel config with the specific test targets added as well.
+
+.. code-block:: bash
+
+	git clone -b master https://kunit.googlesource.com/kunitconfig $PATH_TO_KUNITCONFIG_REPO
+	cd $PATH_TO_LINUX_REPO
+	ln -s $PATH_TO_KUNITCONFIG_REPO/kunitconfig kunitconfig
+
+You may want to add ``kunitconfig`` to your local ``.gitignore``.
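+
+For reference, a ``kunitconfig`` is just a collection of config options; a
+minimal one needs to turn on KUnit itself (this is shown only as an
+illustration, the repository above may enable additional options):
+
+.. code-block:: none
+
+	CONFIG_KUNIT=y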
+
+Verifying KUnit Works
+-------------------------
+
+To make sure that everything is set up correctly, simply invoke the Python
+wrapper from your kernel repo:
+
+.. code-block:: bash
+
+	./tools/testing/kunit/kunit.py
+
+.. note::
+   You may want to run ``make mrproper`` first.
+
+If everything worked correctly, you should see the following:
+
+.. code-block:: none
+
+	Generating .config ...
+	Building KUnit Kernel ...
+	Starting KUnit Kernel ...
+
+followed by a list of tests that are run. All of them should be passing.
+
+.. note::
+   Because it is building a lot of sources for the first time, the ``Building
+   KUnit Kernel`` step may take a while.
+
+Writing your first test
+==========================
+
+In your kernel repo let's add some code that we can test. Create a file
+``drivers/misc/example.h`` with the contents:
+
+.. code-block:: c
+
+	int misc_example_add(int left, int right);
+
+Next, create a file ``drivers/misc/example.c``:
+
+.. code-block:: c
+
+	#include <linux/errno.h>
+
+	#include "example.h"
+
+	int misc_example_add(int left, int right)
+	{
+		return left + right;
+	}
+
+Now add the following lines to ``drivers/misc/Kconfig``:
+
+.. code-block:: kconfig
+
+	config MISC_EXAMPLE
+		bool "My example"
+
+and the following lines to ``drivers/misc/Makefile``:
+
+.. code-block:: make
+
+	obj-$(CONFIG_MISC_EXAMPLE) += example.o
+
+Now we are ready to write the test. The test will be in
+``drivers/misc/example-test.c``:
+
+.. code-block:: c
+
+	#include <kunit/test.h>
+	#include "example.h"
+
+	/* Define the test cases. */
+
+	static void misc_example_add_test_basic(struct kunit *test)
+	{
+		KUNIT_EXPECT_EQ(test, 1, misc_example_add(1, 0));
+		KUNIT_EXPECT_EQ(test, 2, misc_example_add(1, 1));
+		KUNIT_EXPECT_EQ(test, 0, misc_example_add(-1, 1));
+		KUNIT_EXPECT_EQ(test, INT_MAX, misc_example_add(0, INT_MAX));
+		KUNIT_EXPECT_EQ(test, -1, misc_example_add(INT_MAX, INT_MIN));
+	}
+
+	static void misc_example_test_failure(struct kunit *test)
+	{
+		KUNIT_FAIL(test, "This test never passes.");
+	}
+
+	static struct kunit_case misc_example_test_cases[] = {
+		KUNIT_CASE(misc_example_add_test_basic),
+		KUNIT_CASE(misc_example_test_failure),
+		{},
+	};
+
+	static struct kunit_module misc_example_test_module = {
+		.name = "misc-example",
+		.test_cases = misc_example_test_cases,
+	};
+	module_test(misc_example_test_module);
+
+Now add the following to ``drivers/misc/Kconfig``:
+
+.. code-block:: kconfig
+
+	config MISC_EXAMPLE_TEST
+		bool "Test for my example"
+		depends on MISC_EXAMPLE && KUNIT
+
+and the following to ``drivers/misc/Makefile``:
+
+.. code-block:: make
+
+	obj-$(CONFIG_MISC_EXAMPLE_TEST) += example-test.o
+
+Now add it to your ``kunitconfig``:
+
+.. code-block:: none
+
+	CONFIG_MISC_EXAMPLE=y
+	CONFIG_MISC_EXAMPLE_TEST=y
+
+Now you can run the test:
+
+.. code-block:: bash
+
+	./tools/testing/kunit/kunit.py
+
+You should see the following failure:
+
+.. code-block:: none
+
+	...
+	[16:08:57] [PASSED] misc-example:misc_example_add_test_basic
+	[16:08:57] [FAILED] misc-example:misc_example_test_failure
+	[16:08:57] EXPECTATION FAILED at drivers/misc/example-test.c:17
+	[16:08:57] 	This test never passes.
+	...
+
+Congrats! You just wrote your first KUnit test!
+
+Next Steps
+=============
+*   Check out the :doc:`usage` page for a more
+    in-depth explanation of KUnit.
diff --git a/Documentation/kunit/usage.rst b/Documentation/kunit/usage.rst
new file mode 100644
index 0000000000000..96ef7f9a1add4
--- /dev/null
+++ b/Documentation/kunit/usage.rst
@@ -0,0 +1,447 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+=============
+Using KUnit
+=============
+
+The purpose of this document is to describe what KUnit is, how it works, how it
+is intended to be used, and all the concepts and terminology that are needed to
+understand it. This guide assumes a working knowledge of the Linux kernel and
+some basic knowledge of testing.
+
+For a high level introduction to KUnit, including setting up KUnit for your
+project, see :doc:`start`.
+
+Organization of this document
+=================================
+
+This document is organized into two main sections: Testing and Isolating
+Behavior. The first covers what a unit test is and how to use KUnit to write
+them. The second covers how to use KUnit to isolate code and make it possible
+to unit test code that was otherwise un-unit-testable.
+
+Testing
+==========
+
+What is KUnit?
+------------------
+
+"K" is short for "kernel" so "KUnit" is the "(Linux) Kernel Unit Testing
+Framework." KUnit is intended first and foremost for writing unit tests; it is
+general enough that it can be used to write integration tests; however, this is
+a secondary goal. KUnit has no ambition of being the only testing framework for
+the kernel; for example, it does not intend to be an end-to-end testing
+framework.
+
+What is Unit Testing?
+-------------------------
+
+A `unit test <https://martinfowler.com/bliki/UnitTest.html>`_ is a test that
+tests code at the smallest possible scope, a *unit* of code. In the C
+programming language that's a function.
+
+Unit tests should be written for all the publicly exposed functions in a
+compilation unit; that is, all the functions that are exported via a *class*
+(defined below) and all functions that are **not** static.
+
+Writing Tests
+-------------
+
+Test Cases
+~~~~~~~~~~
+
+The fundamental unit in KUnit is the test case. A test case is a function with
+the signature ``void (*)(struct kunit *test)``. It calls a function to be tested
+and then sets *expectations* for what should happen. For example:
+
+.. code-block:: c
+
+	void example_test_success(struct kunit *test)
+	{
+	}
+
+	void example_test_failure(struct kunit *test)
+	{
+		KUNIT_FAIL(test, "This test never passes.");
+	}
+
+In the above example ``example_test_success`` always passes because it does
+nothing; no expectations are set, so all expectations pass. On the other hand
+``example_test_failure`` always fails because it calls ``KUNIT_FAIL``, which is
+a special expectation that logs a message and causes the test case to fail.
+
+Expectations
+~~~~~~~~~~~~
+An *expectation* is a way to specify that you expect a piece of code to do
+something in a test. An expectation is called like a function. A test is made
+by setting expectations about the behavior of a piece of code under test; when
+one or more of the expectations fail, the test case fails and information about
+the failure is logged. For example:
+
+.. code-block:: c
+
+	void add_test_basic(struct kunit *test)
+	{
+		KUNIT_EXPECT_EQ(test, 1, add(1, 0));
+		KUNIT_EXPECT_EQ(test, 2, add(1, 1));
+	}
+
+In the above example ``add_test_basic`` sets a number of expectations about the
+behavior of a function called ``add``; the first parameter is always of type
+``struct kunit *``, which contains information about the current test context;
+the second parameter is the expected value; the last parameter is the actual
+value. If ``add`` meets all of these expectations, the test case
+``add_test_basic`` will pass; if any one of these expectations fails, the test
+case will fail.
+
+It is important to understand that a test case *fails* when any expectation is
+violated; however, the test will continue running, potentially trying other
+expectations until the test case ends or is otherwise terminated. This is as
+opposed to *assertions* which are discussed later.
+
+To learn about more expectations supported by KUnit, see :doc:`api/test`.
+
+.. note::
+   A single test case should be pretty short, pretty easy to understand, and
+   focused on a single behavior.
+
+For example, if we wanted to properly test the add function above, we would
+create additional test cases which would each test a different property that an
+add function should have, like this:
+
+.. code-block:: c
+
+	void add_test_basic(struct kunit *test)
+	{
+		KUNIT_EXPECT_EQ(test, 1, add(1, 0));
+		KUNIT_EXPECT_EQ(test, 2, add(1, 1));
+	}
+
+	void add_test_negative(struct kunit *test)
+	{
+		KUNIT_EXPECT_EQ(test, 0, add(-1, 1));
+	}
+
+	void add_test_max(struct kunit *test)
+	{
+		KUNIT_EXPECT_EQ(test, INT_MAX, add(0, INT_MAX));
+		KUNIT_EXPECT_EQ(test, -1, add(INT_MAX, INT_MIN));
+	}
+
+	void add_test_overflow(struct kunit *test)
+	{
+		KUNIT_EXPECT_EQ(test, INT_MIN, add(INT_MAX, 1));
+	}
+
+Notice how it is immediately obvious what properties we are testing for.
+
+Assertions
+~~~~~~~~~~
+
+KUnit also has the concept of an *assertion*. An assertion is just like an
+expectation except the assertion immediately terminates the test case if it is
+not satisfied.
+
+For example:
+
+.. code-block:: c
+
+	static void mock_test_do_expect_default_return(struct kunit *test)
+	{
+		struct mock_test_context *ctx = test->priv;
+		struct mock *mock = ctx->mock;
+		int param0 = 5, param1 = -5;
+		const char *two_param_types[] = {"int", "int"};
+		const void *two_params[] = {&param0, &param1};
+		const void *ret;
+
+		ret = mock->do_expect(mock,
+				      "test_printk", test_printk,
+				      two_param_types, two_params,
+				      ARRAY_SIZE(two_params));
+		KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ret);
+		KUNIT_EXPECT_EQ(test, -4, *((int *) ret));
+	}
+
+In this example, the method under test should return a pointer to a value, so
+if the pointer returned by the method is NULL or an error pointer, we don't want
+to bother continuing the test since the following expectation could crash the
+test case. ``KUNIT_ASSERT_NOT_ERR_OR_NULL(...)`` allows us to bail out of the
+test case if the appropriate conditions have not been satisfied to complete the
+test.
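+
+A simpler, contrived sketch of the same idea, using only helpers that appear
+elsewhere in this document (the test itself is hypothetical):
+
+.. code-block:: c
+
+	static void example_test_allocation(struct kunit *test)
+	{
+		int *val = kunit_kzalloc(test, sizeof(*val), GFP_KERNEL);
+
+		/* Aborts the test case immediately if the allocation failed... */
+		KUNIT_ASSERT_NOT_ERR_OR_NULL(test, val);
+
+		/* ...so the expectation below can safely dereference the pointer. */
+		KUNIT_EXPECT_EQ(test, 0, *val);
+	}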
+
+Modules / Test Suites
+~~~~~~~~~~~~~~~~~~~~~
+
+Now obviously one unit test isn't very helpful; the power comes from having
+many test cases covering all of your behaviors. Consequently it is common to
+have many *similar* tests; in order to reduce duplication in these closely
+related tests most unit testing frameworks provide the concept of a *test
+suite*, which KUnit calls a *test module*: a collection of test cases for a
+unit of code, with a set up function that gets invoked before every test case
+and a tear down function that gets invoked after every test case completes.
+
+Example:
+
+.. code-block:: c
+
+	static struct kunit_case example_test_cases[] = {
+		KUNIT_CASE(example_test_foo),
+		KUNIT_CASE(example_test_bar),
+		KUNIT_CASE(example_test_baz),
+		{},
+	};
+
+	static struct kunit_module example_test_module = {
+		.name = "example",
+		.init = example_test_init,
+		.exit = example_test_exit,
+		.test_cases = example_test_cases,
+	};
+	module_test(example_test_module);
+
+In the above example the test suite, ``example_test_module``, would run the test
+cases ``example_test_foo``, ``example_test_bar``, and ``example_test_baz``, each
+would have ``example_test_init`` called immediately before it and would have
+``example_test_exit`` called immediately after it.
+``module_test(example_test_module)`` registers the test suite with the KUnit
+test framework.
+
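+For completeness, the set up and tear down functions referenced above take the
+test context and run around every test case; a sketch mirroring the signatures
+used by the fixture example later in this document:
+
+.. code-block:: c
+
+	static int example_test_init(struct kunit *test)
+	{
+		/* Runs before each test case; return 0 on success. */
+		return 0;
+	}
+
+	static void example_test_exit(struct kunit *test)
+	{
+		/* Runs after each test case completes. */
+	}
+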
+.. note::
+   A test case will only be run if it is associated with a test suite.
+
+For more information on these types of things see :doc:`api/test`.
+
+Isolating Behavior
+==================
+
+The most important aspect of unit testing that other forms of testing do not
+provide is the ability to limit the amount of code under test to a single unit.
+In practice, this is only possible by being able to control what code gets run
+when the unit under test calls a function and this is usually accomplished
+through some sort of indirection where a function is exposed as part of an API
+such that the definition of that function can be changed without affecting the
+rest of the code base. In the kernel this primarily comes from two constructs:
+classes, which are structs that contain function pointers provided by the
+implementer, and architecture specific functions, which have definitions
+selected at compile time.
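+
+A minimal illustrative sketch of the second construct (the function name below
+is hypothetical, not an existing kernel API):
+
+.. code-block:: c
+
+	/* The definition that gets used is selected at compile time. */
+	#ifdef CONFIG_X86
+	static inline void arch_example_flush(void)
+	{
+		/* The x86 specific implementation would go here. */
+	}
+	#else
+	static inline void arch_example_flush(void)
+	{
+		/* A generic fallback implementation would go here. */
+	}
+	#endif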
+
+Classes
+-------
+
+Classes are not a construct that is built into the C programming language;
+however, they are an easily derived concept. Accordingly, pretty much every
+project that does not use a standardized object oriented library (like GNOME's
+GObject) has its own slightly different way of doing object oriented
+programming; the Linux kernel is no exception.
+
+The central concept in kernel object oriented programming is the class. In the
+kernel, a *class* is a struct that contains function pointers. This creates a
+contract between *implementers* and *users* since it forces them to use the
+same function signature without having to call the function directly. In order
+for it to truly be a class, the function pointers must specify that a pointer
+to the class, known as a *class handle*, be one of the parameters; this makes
+it possible for the member functions (also known as *methods*) to have access
+to member variables (more commonly known as *fields*) allowing the same
+implementation to have multiple *instances*.
+
+Typically a class can be *overridden* by *child classes* by embedding the
+*parent class* in the child class. Then when a method provided by the child
+class is called, the child implementation knows that the pointer passed to it
+points to a parent embedded within the child; because of this, the child can
+compute the pointer to itself, since the pointer to the parent is always a fixed
+offset from the pointer to the child: namely, the offset of the parent within
+the child struct. For example:
+
+.. code-block:: c
+
+	struct shape {
+		int (*area)(struct shape *this);
+	};
+
+	struct rectangle {
+		struct shape parent;
+		int length;
+		int width;
+	};
+
+	int rectangle_area(struct shape *this)
+	{
+		struct rectangle *self = container_of(this, struct rectangle, parent);
+
+		return self->length * self->width;
+	}
+
+	void rectangle_new(struct rectangle *self, int length, int width)
+	{
+		self->parent.area = rectangle_area;
+		self->length = length;
+		self->width = width;
+	}
+
+In this example (as in most kernel code) the operation of computing the pointer
+to the child from the pointer to the parent is done by ``container_of``.
+
+Faking Classes
+~~~~~~~~~~~~~~
+
+In order to unit test a piece of code that calls a method in a class, the
+behavior of the method must be controllable; otherwise, the test ceases to be a
+unit test and becomes an integration test.
+
+A fake just provides an implementation of a piece of code that is different from
+what runs in a production instance, but behaves identically from the standpoint
+of its callers; this is usually done to replace a dependency that is hard to
+deal with, or is slow.
+
+A good example of this might be implementing a fake EEPROM that just stores the
+"contents" in an internal buffer. For example, let's assume we have a class that
+represents an EEPROM:
+
+.. code-block:: c
+
+	struct eeprom {
+		ssize_t (*read)(struct eeprom *this, size_t offset, char *buffer, size_t count);
+		ssize_t (*write)(struct eeprom *this, size_t offset, const char *buffer, size_t count);
+	};
+
+And we want to test some code that buffers writes to the EEPROM:
+
+.. code-block:: c
+
+	struct eeprom_buffer {
+		ssize_t (*write)(struct eeprom_buffer *this, const char *buffer, size_t count);
+		int (*flush)(struct eeprom_buffer *this);
+		size_t flush_count; /* Flushes when buffer exceeds flush_count. */
+	};
+
+	struct eeprom_buffer *new_eeprom_buffer(struct eeprom *eeprom);
+	void destroy_eeprom_buffer(struct eeprom_buffer *eeprom_buffer);
+
+We can easily test this code by *faking out* the underlying EEPROM:
+
+.. code-block:: c
+
+	struct fake_eeprom {
+		struct eeprom parent;
+		char contents[FAKE_EEPROM_CONTENTS_SIZE];
+	};
+
+	ssize_t fake_eeprom_read(struct eeprom *parent, size_t offset, char *buffer, size_t count)
+	{
+		struct fake_eeprom *this = container_of(parent, struct fake_eeprom, parent);
+
+		count = min(count, FAKE_EEPROM_CONTENTS_SIZE - offset);
+		memcpy(buffer, this->contents + offset, count);
+
+		return count;
+	}
+
+	ssize_t fake_eeprom_write(struct eeprom *parent, size_t offset, const char *buffer, size_t count)
+	{
+		struct fake_eeprom *this = container_of(parent, struct fake_eeprom, parent);
+
+		count = min(count, FAKE_EEPROM_CONTENTS_SIZE - offset);
+		memcpy(this->contents + offset, buffer, count);
+
+		return count;
+	}
+
+	void fake_eeprom_init(struct fake_eeprom *this)
+	{
+		this->parent.read = fake_eeprom_read;
+		this->parent.write = fake_eeprom_write;
+		memset(this->contents, 0, FAKE_EEPROM_CONTENTS_SIZE);
+	}
+
+We can now use it to test ``struct eeprom_buffer``:
+
+.. code-block:: c
+
+	struct eeprom_buffer_test {
+		struct fake_eeprom *fake_eeprom;
+		struct eeprom_buffer *eeprom_buffer;
+	};
+
+	static void eeprom_buffer_test_does_not_write_until_flush(struct kunit *test)
+	{
+		struct eeprom_buffer_test *ctx = test->priv;
+		struct eeprom_buffer *eeprom_buffer = ctx->eeprom_buffer;
+		struct fake_eeprom *fake_eeprom = ctx->fake_eeprom;
+		char buffer[] = {0xff};
+
+		eeprom_buffer->flush_count = SIZE_MAX;
+
+		eeprom_buffer->write(eeprom_buffer, buffer, 1);
+		KUNIT_EXPECT_EQ(test, fake_eeprom->contents[0], 0);
+
+		eeprom_buffer->write(eeprom_buffer, buffer, 1);
+		KUNIT_EXPECT_EQ(test, fake_eeprom->contents[1], 0);
+
+		eeprom_buffer->flush(eeprom_buffer);
+		KUNIT_EXPECT_EQ(test, fake_eeprom->contents[0], 0xff);
+		KUNIT_EXPECT_EQ(test, fake_eeprom->contents[1], 0xff);
+	}
+
+	static void eeprom_buffer_test_flushes_after_flush_count_met(struct kunit *test)
+	{
+		struct eeprom_buffer_test *ctx = test->priv;
+		struct eeprom_buffer *eeprom_buffer = ctx->eeprom_buffer;
+		struct fake_eeprom *fake_eeprom = ctx->fake_eeprom;
+		char buffer[] = {0xff};
+
+		eeprom_buffer->flush_count = 2;
+
+		eeprom_buffer->write(eeprom_buffer, buffer, 1);
+		KUNIT_EXPECT_EQ(test, fake_eeprom->contents[0], 0);
+
+		eeprom_buffer->write(eeprom_buffer, buffer, 1);
+		KUNIT_EXPECT_EQ(test, fake_eeprom->contents[0], 0xff);
+		KUNIT_EXPECT_EQ(test, fake_eeprom->contents[1], 0xff);
+	}
+
+	static void eeprom_buffer_test_flushes_increments_of_flush_count(struct kunit *test)
+	{
+		struct eeprom_buffer_test *ctx = test->priv;
+		struct eeprom_buffer *eeprom_buffer = ctx->eeprom_buffer;
+		struct fake_eeprom *fake_eeprom = ctx->fake_eeprom;
+		char buffer[] = {0xff, 0xff};
+
+		eeprom_buffer->flush_count = 2;
+
+		eeprom_buffer->write(eeprom_buffer, buffer, 1);
+		KUNIT_EXPECT_EQ(test, fake_eeprom->contents[0], 0);
+
+		eeprom_buffer->write(eeprom_buffer, buffer, 2);
+		KUNIT_EXPECT_EQ(test, fake_eeprom->contents[0], 0xff);
+		KUNIT_EXPECT_EQ(test, fake_eeprom->contents[1], 0xff);
+		/* Should have only flushed the first two bytes. */
+		KUNIT_EXPECT_EQ(test, fake_eeprom->contents[2], 0);
+	}
+
+	static int eeprom_buffer_test_init(struct kunit *test)
+	{
+		struct eeprom_buffer_test *ctx;
+
+		ctx = kunit_kzalloc(test, sizeof(*ctx), GFP_KERNEL);
+		KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx);
+
+		ctx->fake_eeprom = kunit_kzalloc(test, sizeof(*ctx->fake_eeprom), GFP_KERNEL);
+		KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx->fake_eeprom);
+
+		ctx->eeprom_buffer = new_eeprom_buffer(&ctx->fake_eeprom->parent);
+		KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx->eeprom_buffer);
+
+		test->priv = ctx;
+
+		return 0;
+	}
+
+	static void eeprom_buffer_test_exit(struct kunit *test)
+	{
+		struct eeprom_buffer_test *ctx = test->priv;
+
+		destroy_eeprom_buffer(ctx->eeprom_buffer);
+	}
+
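+Wiring the fixture above into a test module follows the same pattern as the
+earlier examples; a sketch based on the API shown in this document:
+
+.. code-block:: c
+
+	static struct kunit_case eeprom_buffer_test_cases[] = {
+		KUNIT_CASE(eeprom_buffer_test_does_not_write_until_flush),
+		KUNIT_CASE(eeprom_buffer_test_flushes_after_flush_count_met),
+		KUNIT_CASE(eeprom_buffer_test_flushes_increments_of_flush_count),
+		{},
+	};
+
+	static struct kunit_module eeprom_buffer_test_module = {
+		.name = "eeprom-buffer",
+		.init = eeprom_buffer_test_init,
+		.exit = eeprom_buffer_test_exit,
+		.test_cases = eeprom_buffer_test_cases,
+	};
+	module_test(eeprom_buffer_test_module);
+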
-- 
2.21.0.rc0.258.g878e2cd30e-goog

^ permalink raw reply related	[flat|nested] 316+ messages in thread

* [RFC v4 13/17] Documentation: kunit: add documentation for KUnit
@ 2019-02-14 21:37     ` Brendan Higgins
  0 siblings, 0 replies; 316+ messages in thread
From: Brendan Higgins @ 2019-02-14 21:37 UTC (permalink / raw)
  To: keescook, mcgrof, shuah, robh, kieran.bingham, frowand.list
  Cc: gregkh, joel, mpe, joe, brakmo, rostedt, Tim.Bird, khilman,
	julia.lawall, linux-kselftest, kunit-dev, linux-kernel, jdike,
	richard, linux-um, daniel, dri-devel, dan.j.williams,
	linux-nvdimm, knut.omang, devicetree, pmladek, Alexander.Levin,
	amir73il, dan.carpenter, wfg, Brendan Higgins, Felix Guo

Add documentation for KUnit, the Linux kernel unit testing framework.
- Add intro and usage guide for KUnit
- Add API reference

Signed-off-by: Felix Guo <felixguoxiuping@gmail.com>
Signed-off-by: Brendan Higgins <brendanhiggins@google.com>
---
 Documentation/index.rst           |   1 +
 Documentation/kunit/api/index.rst |  16 ++
 Documentation/kunit/api/test.rst  |  15 +
 Documentation/kunit/faq.rst       |  46 +++
 Documentation/kunit/index.rst     |  80 ++++++
 Documentation/kunit/start.rst     | 180 ++++++++++++
 Documentation/kunit/usage.rst     | 447 ++++++++++++++++++++++++++++++
 7 files changed, 785 insertions(+)
 create mode 100644 Documentation/kunit/api/index.rst
 create mode 100644 Documentation/kunit/api/test.rst
 create mode 100644 Documentation/kunit/faq.rst
 create mode 100644 Documentation/kunit/index.rst
 create mode 100644 Documentation/kunit/start.rst
 create mode 100644 Documentation/kunit/usage.rst

diff --git a/Documentation/index.rst b/Documentation/index.rst
index c858c2e66e361..9512de536b34a 100644
--- a/Documentation/index.rst
+++ b/Documentation/index.rst
@@ -65,6 +65,7 @@ merged much easier.
    kernel-hacking/index
    trace/index
    maintainer/index
+   kunit/index
 
 Kernel API documentation
 ------------------------
diff --git a/Documentation/kunit/api/index.rst b/Documentation/kunit/api/index.rst
new file mode 100644
index 0000000000000..c31c530088153
--- /dev/null
+++ b/Documentation/kunit/api/index.rst
@@ -0,0 +1,16 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+=============
+API Reference
+=============
+.. toctree::
+
+	test
+
+This section documents the KUnit kernel testing API. It is divided into 3
+sections:
+
+================================= ==============================================
+:doc:`test`                       documents all of the standard testing API
+                                  excluding mocking or mocking related features.
+================================= ==============================================
diff --git a/Documentation/kunit/api/test.rst b/Documentation/kunit/api/test.rst
new file mode 100644
index 0000000000000..7c926014f047c
--- /dev/null
+++ b/Documentation/kunit/api/test.rst
@@ -0,0 +1,15 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+========
+Test API
+========
+
+This file documents all of the standard testing API excluding mocking or mocking
+related features.
+
+.. kernel-doc:: include/kunit/test.h
+   :internal:
+
+.. kernel-doc:: include/kunit/kunit-stream.h
+   :internal:
+
diff --git a/Documentation/kunit/faq.rst b/Documentation/kunit/faq.rst
new file mode 100644
index 0000000000000..cb8e4fb2257a0
--- /dev/null
+++ b/Documentation/kunit/faq.rst
@@ -0,0 +1,46 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+=========================================
+Frequently Asked Questions
+=========================================
+
+How is this different from Autotest, kselftest, etc?
+====================================================
+KUnit is a unit testing framework. Autotest, kselftest (and some others) are
+not.
+
+A `unit test <https://martinfowler.com/bliki/UnitTest.html>`_ is supposed to
+test a single unit of code in isolation, hence the name. A unit test should be
+the finest granularity of testing and as such should allow all possible code
+paths to be tested in the code under test; this is only possible if the code
+under test is very small and does not have any external dependencies outside of
+the test's control like hardware.
+
+There are no testing frameworks currently available for the kernel that do not
+require installing the kernel on a test machine or in a VM and all require
+tests to be written in userspace and run on the kernel under test; this is true
+for Autotest, kselftest, and some others, disqualifying any of them from being
+considered unit testing frameworks.
+
+What is the difference between a unit test and these other kinds of tests?
+==========================================================================
+Most existing tests for the Linux kernel would be categorized as an integration
+test, or an end-to-end test.
+
+- A unit test is supposed to test a single unit of code in isolation, hence the
+  name. A unit test should be the finest granularity of testing and as such
+  should allow all possible code paths to be tested in the code under test; this
+  is only possible if the code under test is very small and does not have any
+  external dependencies outside of the test's control like hardware.
+- An integration test tests the interaction between a minimal set of components,
+  usually just two or three. For example, someone might write an integration
+  test to test the interaction between a driver and a piece of hardware, or to
+  test the interaction between the userspace libraries the kernel provides and
+  the kernel itself; however, one of these tests would probably not test the
+  entire kernel along with hardware interactions and interactions with the
+  userspace.
+- An end-to-end test usually tests the entire system from the perspective of the
+  code under test. For example, someone might write an end-to-end test for the
+  kernel by installing a production configuration of the kernel on production
+  hardware with a production userspace and then trying to exercise some behavior
+  that depends on interactions between the hardware, the kernel, and userspace.
diff --git a/Documentation/kunit/index.rst b/Documentation/kunit/index.rst
new file mode 100644
index 0000000000000..c6710211b647f
--- /dev/null
+++ b/Documentation/kunit/index.rst
@@ -0,0 +1,80 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+=========================================
+KUnit - Unit Testing for the Linux Kernel
+=========================================
+
+.. toctree::
+	:maxdepth: 2
+
+	start
+	usage
+	api/index
+	faq
+
+What is KUnit?
+==============
+
+KUnit is a lightweight unit testing and mocking framework for the Linux kernel.
+These tests are able to be run locally on a developer's workstation without a VM
+or special hardware.
+
+KUnit is heavily inspired by JUnit, Python's unittest.mock, and
+Googletest/Googlemock for C++. KUnit provides facilities for defining unit test
+cases, grouping related test cases into test suites, providing common
+infrastructure for running tests, and much more.
+
+Get started now: :doc:`start`
+
+Why KUnit?
+==========
+
+A unit test is supposed to test a single unit of code in isolation, hence the
+name. A unit test should be the finest granularity of testing and as such should
+allow all possible code paths to be tested in the code under test; this is only
+possible if the code under test is very small and does not have any external
+dependencies outside of the test's control like hardware.
+
+Outside of KUnit, there are no testing frameworks currently
+available for the kernel that do not require installing the kernel on a test
+machine or in a VM and all require tests to be written in userspace running on
+the kernel; this is true for Autotest, and kselftest, disqualifying
+any of them from being considered unit testing frameworks.
+
+KUnit addresses the problem of being able to run tests without needing a virtual
+machine or actual hardware with User Mode Linux. User Mode Linux is a Linux
+architecture, like ARM or x86; however, unlike other architectures it compiles
+to a standalone program that can be run like any other program directly inside
+of a host operating system; to be clear, it does not require any virtualization
+support; it is just a regular program.
+
+KUnit is fast. Excluding build time, from invocation to completion KUnit can run
+several dozen tests in only 10 to 20 seconds; this might not sound like a big
+deal to some people, but having such fast and easy to run tests fundamentally
+changes the way you go about testing and even writing code in the first place.
+Linus himself said in his `git talk at Google
+<https://gist.github.com/lorn/1272686/revisions#diff-53c65572127855f1b003db4064a94573R874>`_:
+
+	"... a lot of people seem to think that performance is about doing the
+	same thing, just doing it faster, and that is not true. That is not what
+	performance is all about. If you can do something really fast, really
+	well, people will start using it differently."
+
+In this context Linus was talking about branching and merging,
+but this point also applies to testing. If your tests are slow, unreliable, are
+difficult to write, and require a special setup or special hardware to run,
+then you wait a lot longer to write tests, and you wait a lot longer to run
+tests; this means that tests are likely to break, unlikely to test a lot of
+things, and are unlikely to be rerun once they pass. If your tests are really
+fast, you run them all the time, every time you make a change, and every time
+someone sends you some code. Why trust that someone ran all their tests
+correctly on every change when you can just run them yourself in less time than
+it takes to read his / her test log?
+
+How do I use it?
+===================
+
+*   :doc:`start` - for new users of KUnit
+*   :doc:`usage` - for a more detailed explanation of KUnit features
+*   :doc:`api/index` - for the list of KUnit APIs used for testing
+
diff --git a/Documentation/kunit/start.rst b/Documentation/kunit/start.rst
new file mode 100644
index 0000000000000..5cdba5091905e
--- /dev/null
+++ b/Documentation/kunit/start.rst
@@ -0,0 +1,180 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+===============
+Getting Started
+===============
+
+Installing dependencies
+=======================
+KUnit has the same dependencies as the Linux kernel. As long as you can build
+the kernel, you can run KUnit.
+
+KUnit Wrapper
+=============
+Included with KUnit is a simple Python wrapper that helps format the output to
+easily use and read KUnit output. It handles building and running the kernel, as
+well as formatting the output.
+
+The wrapper can be run with:
+
+.. code-block:: bash
+
+   ./tools/testing/kunit/kunit.py
+
+Creating a kunitconfig
+======================
+The Python script is a thin wrapper around Kbuild as such, it needs to be
+configured with a ``kunitconfig`` file. This file essentially contains the
+regular Kernel config, with the specific test targets as well.
+
+.. code-block:: bash
+
+	git clone -b master https://kunit.googlesource.com/kunitconfig $PATH_TO_KUNITCONFIG_REPO
+	cd $PATH_TO_LINUX_REPO
+	ln -s $PATH_TO_KUNIT_CONFIG_REPO/kunitconfig kunitconfig
+
+You may want to add kunitconfig to your local gitignore.
+
+Verifying KUnit Works
+-------------------------
+
+To make sure that everything is set up correctly, simply invoke the Python
+wrapper from your kernel repo:
+
+.. code-block:: bash
+
+	./tools/testing/kunit/kunit.py
+
+.. note::
+   You may want to run ``make mrproper`` first.
+
+If everything worked correctly, you should see the following:
+
+.. code-block:: bash
+
+	Generating .config ...
+	Building KUnit Kernel ...
+	Starting KUnit Kernel ...
+
+followed by a list of tests that are run. All of them should be passing.
+
+.. note::
+   Because it is building a lot of sources for the first time, the ``Building
+   kunit kernel`` step may take a while.
+
+Writing your first test
+==========================
+
+In your kernel repo let's add some code that we can test. Create a file
+``drivers/misc/example.h`` with the contents:
+
+.. code-block:: c
+
+	int misc_example_add(int left, int right);
+
+create a file ``drivers/misc/example.c``:
+
+.. code-block:: c
+
+	#include <linux/errno.h>
+
+	#include "example.h"
+
+	int misc_example_add(int left, int right)
+	{
+		return left + right;
+	}
+
+Now add the following lines to ``drivers/misc/Kconfig``:
+
+.. code-block:: kconfig
+
+	config MISC_EXAMPLE
+		bool "My example"
+
+and the following lines to ``drivers/misc/Makefile``:
+
+.. code-block:: make
+
+	obj-$(CONFIG_MISC_EXAMPLE) += example.o
+
+Now we are ready to write the test. The test will be in
+``drivers/misc/example-test.c``:
+
+.. code-block:: c
+
+	#include <kunit/test.h>
+	#include "example.h"
+
+	/* Define the test cases. */
+
+	static void misc_example_add_test_basic(struct kunit *test)
+	{
+		KUNIT_EXPECT_EQ(test, 1, misc_example_add(1, 0));
+		KUNIT_EXPECT_EQ(test, 2, misc_example_add(1, 1));
+		KUNIT_EXPECT_EQ(test, 0, misc_example_add(-1, 1));
+		KUNIT_EXPECT_EQ(test, INT_MAX, misc_example_add(0, INT_MAX));
+		KUNIT_EXPECT_EQ(test, -1, misc_example_add(INT_MAX, INT_MIN));
+	}
+
+	static void misc_example_test_failure(struct kunit *test)
+	{
+		KUNIT_FAIL(test, "This test never passes.");
+	}
+
+	static struct kunit_case misc_example_test_cases[] = {
+		KUNIT_CASE(misc_example_add_test_basic),
+		KUNIT_CASE(misc_example_test_failure),
+		{},
+	};
+
+	static struct kunit_module misc_example_test_module = {
+		.name = "misc-example",
+		.test_cases = misc_example_test_cases,
+	};
+	module_test(misc_example_test_module);
+
+Now add the following to ``drivers/misc/Kconfig``:
+
+.. code-block:: kconfig
+
+	config MISC_EXAMPLE_TEST
+		bool "Test for my example"
+		depends on MISC_EXAMPLE && KUNIT
+
+and the following to ``drivers/misc/Makefile``:
+
+.. code-block:: make
+
+	obj-$(CONFIG_MISC_EXAMPLE_TEST) += example-test.o
+
+Now add it to your ``kunitconfig``:
+
+.. code-block:: none
+
+	CONFIG_MISC_EXAMPLE=y
+	CONFIG_MISC_EXAMPLE_TEST=y
+
+Now you can run the test:
+
+.. code-block:: bash
+
+	./tools/testing/kunit/kunit.py
+
+You should see the following failure:
+
+.. code-block:: none
+
+	...
+	[16:08:57] [PASSED] misc-example:misc_example_add_test_basic
+	[16:08:57] [FAILED] misc-example:misc_example_test_failure
+	[16:08:57] EXPECTATION FAILED at drivers/misc/example-test.c:17
+	[16:08:57] 	This test never passes.
+	...
+
+Congrats! You just wrote your first KUnit test!
+
+Next Steps
+=============
+*   Check out the :doc:`usage` page for a more
+    in-depth explanation of KUnit.
diff --git a/Documentation/kunit/usage.rst b/Documentation/kunit/usage.rst
new file mode 100644
index 0000000000000..96ef7f9a1add4
--- /dev/null
+++ b/Documentation/kunit/usage.rst
@@ -0,0 +1,447 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+=============
+Using KUnit
+=============
+
+The purpose of this document is to describe what KUnit is, how it works, how it
+is intended to be used, and all the concepts and terminology that are needed to
+understand it. This guide assumes a working knowledge of the Linux kernel and
+some basic knowledge of testing.
+
+For a high level introduction to KUnit, including setting up KUnit for your
+project, see :doc:`start`.
+
+Organization of this document
+=================================
+
+This document is organized into two main sections: Testing and Isolating
+Behavior. The first covers what a unit test is and how to use KUnit to write
+them. The second covers how to use KUnit to isolate code and make it possible
+to unit test code that was otherwise un-unit-testable.
+
+Testing
+==========
+
+What is KUnit?
+------------------
+
+"K" is short for "kernel" so "KUnit" is the "(Linux) Kernel Unit Testing
+Framework." KUnit is intended first and foremost for writing unit tests; it is
+general enough that it can be used to write integration tests; however, this is
+a secondary goal. KUnit has no ambition of being the only testing framework for
+the kernel; for example, it does not intend to be an end-to-end testing
+framework.
+
+What is Unit Testing?
+-------------------------
+
+A `unit test <https://martinfowler.com/bliki/UnitTest.html>`_ is a test that
+tests code at the smallest possible scope, a *unit* of code. In the C
+programming language that's a function.
+
+Unit tests should be written for all the publicly exposed functions in a
+compilation unit; so that is all the functions that are exported in either a
+*class* (defined below) or all functions which are **not** static.
+
+Writing Tests
+-------------
+
+Test Cases
+~~~~~~~~~~
+
+The fundamental unit in KUnit is the test case. A test case is a function with
+the signature ``void (*)(struct kunit *test)``. It calls a function to be tested
+and then sets *expectations* for what should happen. For example:
+
+.. code-block:: c
+
+	void example_test_success(struct kunit *test)
+	{
+	}
+
+	void example_test_failure(struct kunit *test)
+	{
+		KUNIT_FAIL(test, "This test never passes.");
+	}
+
+In the above example ``example_test_success`` always passes because it does
+nothing; no expectations are set, so all expectations pass. On the other hand
+``example_test_failure`` always fails because it calls ``KUNIT_FAIL``, which is
+a special expectation that logs a message and causes the test case to fail.
+
+Expectations
+~~~~~~~~~~~~
+An *expectation* is a way to specify that you expect a piece of code to do
+something in a test. An expectation is called like a function. A test is made
+by setting expectations about the behavior of a piece of code under test; when
+one or more of the expectations fail, the test case fails and information about
+the failure is logged. For example:
+
+.. code-block:: c
+
+	void add_test_basic(struct kunit *test)
+	{
+		KUNIT_EXPECT_EQ(test, 1, add(1, 0));
+		KUNIT_EXPECT_EQ(test, 2, add(1, 1));
+	}
+
+In the above example ``add_test_basic`` makes a number of assertions about the
+behavior of a function called ``add``; the first parameter is always of type
+``struct kunit *``, which contains information about the current test context;
+the second parameter, in this case, is what the value is expected to be; the
+last value is what the value actually is. If ``add`` passes all of these
+expectations, the test case, ``add_test_basic`` will pass; if any one of these
+expectations fail, the test case will fail.
+
+It is important to understand that a test case *fails* when any expectation is
+violated; however, the test will continue running, potentially trying other
+expectations until the test case ends or is otherwise terminated. This is as
+opposed to *assertions* which are discussed later.
+
+To learn about more expectations supported by KUnit, see :doc:`api/test`.
+
+.. note::
+   A single test case should be pretty short, pretty easy to understand,
+   focused on a single behavior.
+
+For example, if we wanted to properly test the add function above, we would
+create additional tests cases which would each test a different property that an
+add function should have like this:
+
+.. code-block:: c
+
+	void add_test_basic(struct kunit *test)
+	{
+		KUNIT_EXPECT_EQ(test, 1, add(1, 0));
+		KUNIT_EXPECT_EQ(test, 2, add(1, 1));
+	}
+
+	void add_test_negative(struct kunit *test)
+	{
+		KUNIT_EXPECT_EQ(test, 0, add(-1, 1));
+	}
+
+	void add_test_max(struct kunit *test)
+	{
+		KUNIT_EXPECT_EQ(test, INT_MAX, add(0, INT_MAX));
+		KUNIT_EXPECT_EQ(test, -1, add(INT_MAX, INT_MIN));
+	}
+
+	void add_test_overflow(struct kunit *test)
+	{
+		KUNIT_EXPECT_EQ(test, INT_MIN, add(INT_MAX, 1));
+	}
+
+Notice how it is immediately obvious what all the properties that we are testing
+for are.
+
+Assertions
+~~~~~~~~~~
+
+KUnit also has the concept of an *assertion*. An assertion is just like an
+expectation except the assertion immediately terminates the test case if it is
+not satisfied.
+
+For example:
+
+.. code-block:: c
+
+	static void mock_test_do_expect_default_return(struct kunit *test)
+	{
+		struct mock_test_context *ctx = test->priv;
+		struct mock *mock = ctx->mock;
+		int param0 = 5, param1 = -5;
+		const char *two_param_types[] = {"int", "int"};
+		const void *two_params[] = {&param0, &param1};
+		const void *ret;
+
+		ret = mock->do_expect(mock,
+				      "test_printk", test_printk,
+				      two_param_types, two_params,
+				      ARRAY_SIZE(two_params));
+		KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ret);
+		KUNIT_EXPECT_EQ(test, -4, *((int *) ret));
+	}
+
+In this example, the method under test should return a pointer to a value, so
+if the pointer returned by the method is null or an errno, we don't want to
+bother continuing the test since the following expectation could crash the test
+case. `ASSERT_NOT_ERR_OR_NULL(...)` allows us to bail out of the test case if
+the appropriate conditions have not been satisfied to complete the test.
+
+Modules / Test Suites
+~~~~~~~~~~~~~~~~~~~~~
+
+Now obviously one unit test isn't very helpful; the power comes from having
+many test cases covering all of your behaviors. Consequently it is common to
+have many *similar* tests; in order to reduce duplication in these closely
+related tests most unit testing frameworks provide the concept of a *test
+suite*, in KUnit we call it a *test module*; all it is is just a collection of
+test cases for a unit of code with a set up function that gets invoked before
+every test cases and then a tear down function that gets invoked after every
+test case completes.
+
+Example:
+
+.. code-block:: c
+
+	static struct kunit_case example_test_cases[] = {
+		KUNIT_CASE(example_test_foo),
+		KUNIT_CASE(example_test_bar),
+		KUNIT_CASE(example_test_baz),
+		{},
+	};
+
+	static struct kunit_module example_test_module[] = {
+		.name = "example",
+		.init = example_test_init,
+		.exit = example_test_exit,
+		.test_cases = example_test_cases,
+	};
+	module_test(example_test_module);
+
+In the above example the test suite, ``example_test_module``, would run the test
+cases ``example_test_foo``, ``example_test_bar``, and ``example_test_baz``, each
+would have ``example_test_init`` called immediately before it and would have
+``example_test_exit`` called immediately after it.
+``module_test(example_test_module)`` registers the test suite with the KUnit
+test framework.
+
+.. note::
+   A test case will only be run if it is associated with a test suite.
+
+For a more information on these types of things see the :doc:`api/test`.
+
+Isolating Behavior
+==================
+
+The most important aspect of unit testing that other forms of testing do not
+provide is the ability to limit the amount of code under test to a single unit.
+In practice, this is only possible by being able to control what code gets run
+when the unit under test calls a function and this is usually accomplished
+through some sort of indirection where a function is exposed as part of an API
+such that the definition of that function can be changed without affecting the
+rest of the code base. In the kernel this primarily comes from two constructs,
+classes, structs that contain function pointers that are provided by the
+implementer, and architecture specific functions which have definitions selected
+at compile time.
+
+Classes
+-------
+
+Classes are not a construct that is built into the C programming language;
+however, it is an easily derived concept. Accordingly, pretty much every project
+that does not use a standardized object oriented library (like GNOME's GObject)
+has their own slightly different way of doing object oriented programming; the
+Linux kernel is no exception.
+
+The central concept in kernel object oriented programming is the class. In the
+kernel, a *class* is a struct that contains function pointers. This creates a
+contract between *implementers* and *users* since it forces them to use the
+same function signature without having to call the function directly. In order
+for it to truly be a class, the function pointers must specify that a pointer
+to the class, known as a *class handle*, be one of the parameters; this makes
+it possible for the member functions (also known as *methods*) to have access
+to member variables (more commonly known as *fields*) allowing the same
+implementation to have multiple *instances*.
+
+Typically a class can be *overridden* by *child classes* by embedding the
+*parent class* in the child class. Then when a method provided by the child
+class is called, the child implementation knows that the pointer passed to it is
+of a parent contained within the child; because of this, the child can compute
+the pointer to itself because the pointer to the parent is always a fixed offset
+from the pointer to the child; this offset is the offset of the parent contained
+in the child struct. For example:
+
+.. code-block:: c
+
+	struct shape {
+		int (*area)(struct shape *this);
+	};
+
+	struct rectangle {
+		struct shape parent;
+		int length;
+		int width;
+	};
+
+	int rectangle_area(struct shape *this)
+	{
+		struct rectangle *self = container_of(this, struct shape, parent);
+
+		return self->length * self->width;
+	};
+
+	void rectangle_new(struct rectangle *self, int length, int width)
+	{
+		self->parent.area = rectangle_area;
+		self->length = length;
+		self->width = width;
+	}
+
+In this example (as in most kernel code) the operation of computing the pointer
+to the child from the pointer to the parent is done by ``container_of``.
+
+Faking Classes
+~~~~~~~~~~~~~~
+
+In order to unit test a piece of code that calls a method in a class, the
+behavior of the method must be controllable, otherwise the test ceases to be a
+unit test and becomes an integration test.
+
+A fake just provides an implementation of a piece of code that is different than
+what runs in a production instance, but behaves identically from the standpoint
+of the callers; this is usually done to replace a dependency that is hard to
+deal with, or is slow.
+
+A good example for this might be implementing a fake EEPROM that just stores the
+"contents" in an internal buffer. For example, let's assume we have a class that
+represents an EEPROM:
+
+.. code-block:: c
+
+	struct eeprom {
+		ssize_t (*read)(struct eeprom *this, size_t offset, char *buffer, size_t count);
+		ssize_t (*write)(struct eeprom *this, size_t offset, const char *buffer, size_t count);
+	};
+
+And we want to test some code that buffers writes to the EEPROM:
+
+.. code-block:: c
+
+	struct eeprom_buffer {
+		ssize_t (*write)(struct eeprom_buffer *this, const char *buffer, size_t count);
+		int flush(struct eeprom_buffer *this);
+		size_t flush_count; /* Flushes when buffer exceeds flush_count. */
+	};
+
+	struct eeprom_buffer *new_eeprom_buffer(struct eeprom *eeprom);
+	void destroy_eeprom_buffer(struct eeprom *eeprom);
+
+We can easily test this code by *faking out* the underlying EEPROM:
+
+.. code-block:: c
+
+	struct fake_eeprom {
+		struct eeprom parent;
+		char contents[FAKE_EEPROM_CONTENTS_SIZE];
+	};
+
+	ssize_t fake_eeprom_read(struct eeprom *parent, size_t offset, char *buffer, size_t count)
+	{
+		struct fake_eeprom *this = container_of(parent, struct fake_eeprom, parent);
+
+		count = min(count, FAKE_EEPROM_CONTENTS_SIZE - offset);
+		memcpy(buffer, this->contents + offset, count);
+
+		return count;
+	}
+
+	ssize_t fake_eeprom_write(struct eeprom *this, size_t offset, const char *buffer, size_t count)
+	{
+		struct fake_eeprom *this = container_of(parent, struct fake_eeprom, parent);
+
+		count = min(count, FAKE_EEPROM_CONTENTS_SIZE - offset);
+		memcpy(this->contents + offset, buffer, count);
+
+		return count;
+	}
+
+	void fake_eeprom_init(struct fake_eeprom *this)
+	{
+		this->parent.read = fake_eeprom_read;
+		this->parent.write = fake_eeprom_write;
+		memset(this->contents, 0, FAKE_EEPROM_CONTENTS_SIZE);
+	}
+
+We can now use it to test ``struct eeprom_buffer``:
+
+.. code-block:: c
+
+	struct eeprom_buffer_test {
+		struct fake_eeprom *fake_eeprom;
+		struct eeprom_buffer *eeprom_buffer;
+	};
+
+	static void eeprom_buffer_test_does_not_write_until_flush(struct kunit *test)
+	{
+		struct eeprom_buffer_test *ctx = test->priv;
+		struct eeprom_buffer *eeprom_buffer = ctx->eeprom_buffer;
+		struct fake_eeprom *fake_eeprom = ctx->fake_eeprom;
+		char buffer[] = {0xff};
+
+		eeprom_buffer->flush_count = SIZE_MAX;
+
+		eeprom_buffer->write(eeprom_buffer, buffer, 1);
+		KUNIT_EXPECT_EQ(test, fake_eeprom->contents[0], 0);
+
+		eeprom_buffer->write(eeprom_buffer, buffer, 1);
+		KUNIT_EXPECT_EQ(test, fake_eeprom->contents[1], 0);
+
+		eeprom_buffer->flush(eeprom_buffer);
+		KUNIT_EXPECT_EQ(test, fake_eeprom->contents[0], 0xff);
+		KUNIT_EXPECT_EQ(test, fake_eeprom->contents[1], 0xff);
+	}
+
+	static void eeprom_buffer_test_flushes_after_flush_count_met(struct kunit *test)
+	{
+		struct eeprom_buffer_test *ctx = test->priv;
+		struct eeprom_buffer *eeprom_buffer = ctx->eeprom_buffer;
+		struct fake_eeprom *fake_eeprom = ctx->fake_eeprom;
+		char buffer[] = {0xff};
+
+		eeprom_buffer->flush_count = 2;
+
+		eeprom_buffer->write(eeprom_buffer, buffer, 1);
+		KUNIT_EXPECT_EQ(test, fake_eeprom->contents[0], 0);
+
+		eeprom_buffer->write(eeprom_buffer, buffer, 1);
+		KUNIT_EXPECT_EQ(test, fake_eeprom->contents[0], 0xff);
+		KUNIT_EXPECT_EQ(test, fake_eeprom->contents[1], 0xff);
+	}
+
+	static void eeprom_buffer_test_flushes_increments_of_flush_count(struct kunit *test)
+	{
+		struct eeprom_buffer_test *ctx = test->priv;
+		struct eeprom_buffer *eeprom_buffer = ctx->eeprom_buffer;
+		struct fake_eeprom *fake_eeprom = ctx->fake_eeprom;
+		char buffer[] = {0xff, 0xff};
+
+		eeprom_buffer->flush_count = 2;
+
+		eeprom_buffer->write(eeprom_buffer, buffer, 1);
+		KUNIT_EXPECT_EQ(test, fake_eeprom->contents[0], 0);
+
+		eeprom_buffer->write(eeprom_buffer, buffer, 2);
+		KUNIT_EXPECT_EQ(test, fake_eeprom->contents[0], 0xff);
+		KUNIT_EXPECT_EQ(test, fake_eeprom->contents[1], 0xff);
+		/* Should have only flushed the first two bytes. */
+		KUNIT_EXPECT_EQ(test, fake_eeprom->contents[2], 0);
+	}
+
+	static int eeprom_buffer_test_init(struct kunit *test)
+	{
+		struct eeprom_buffer_test *ctx;
+
+		ctx = kunit_kzalloc(test, sizeof(*ctx), GFP_KERNEL);
+		ASSERT_NOT_ERR_OR_NULL(test, ctx);
+
+		ctx->fake_eeprom = kunit_kzalloc(test, sizeof(*ctx->fake_eeprom), GFP_KERNEL);
+		ASSERT_NOT_ERR_OR_NULL(test, ctx->fake_eeprom);
+
+		ctx->eeprom_buffer = new_eeprom_buffer(&ctx->fake_eeprom->parent);
+		ASSERT_NOT_ERR_OR_NULL(test, ctx->eeprom_buffer);
+
+		test->priv = ctx;
+
+		return 0;
+	}
+
+	static void eeprom_buffer_test_exit(struct kunit *test)
+	{
+		struct eeprom_buffer_test *ctx = test->priv;
+
+		destroy_eeprom_buffer(ctx->eeprom_buffer);
+	}
+
-- 
2.21.0.rc0.258.g878e2cd30e-goog


^ permalink raw reply related	[flat|nested] 316+ messages in thread

* [RFC v4 13/17] Documentation: kunit: add documentation for KUnit
@ 2019-02-14 21:37     ` Brendan Higgins
  0 siblings, 0 replies; 316+ messages in thread
From: brendanhiggins @ 2019-02-14 21:37 UTC (permalink / raw)


Add documentation for KUnit, the Linux kernel unit testing framework.
- Add intro and usage guide for KUnit
- Add API reference

Signed-off-by: Felix Guo <felixguoxiuping at gmail.com>
Signed-off-by: Brendan Higgins <brendanhiggins at google.com>
---
 Documentation/index.rst           |   1 +
 Documentation/kunit/api/index.rst |  16 ++
 Documentation/kunit/api/test.rst  |  15 +
 Documentation/kunit/faq.rst       |  46 +++
 Documentation/kunit/index.rst     |  80 ++++++
 Documentation/kunit/start.rst     | 180 ++++++++++++
 Documentation/kunit/usage.rst     | 447 ++++++++++++++++++++++++++++++
 7 files changed, 785 insertions(+)
 create mode 100644 Documentation/kunit/api/index.rst
 create mode 100644 Documentation/kunit/api/test.rst
 create mode 100644 Documentation/kunit/faq.rst
 create mode 100644 Documentation/kunit/index.rst
 create mode 100644 Documentation/kunit/start.rst
 create mode 100644 Documentation/kunit/usage.rst

diff --git a/Documentation/index.rst b/Documentation/index.rst
index c858c2e66e361..9512de536b34a 100644
--- a/Documentation/index.rst
+++ b/Documentation/index.rst
@@ -65,6 +65,7 @@ merged much easier.
    kernel-hacking/index
    trace/index
    maintainer/index
+   kunit/index
 
 Kernel API documentation
 ------------------------
diff --git a/Documentation/kunit/api/index.rst b/Documentation/kunit/api/index.rst
new file mode 100644
index 0000000000000..c31c530088153
--- /dev/null
+++ b/Documentation/kunit/api/index.rst
@@ -0,0 +1,16 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+=============
+API Reference
+=============
+.. toctree::
+
+	test
+
+This section documents the KUnit kernel testing API. It is divided into 3
+sections:
+
+================================= ==============================================
+:doc:`test`                       documents all of the standard testing API
+                                  excluding mocking or mocking related features.
+================================= ==============================================
diff --git a/Documentation/kunit/api/test.rst b/Documentation/kunit/api/test.rst
new file mode 100644
index 0000000000000..7c926014f047c
--- /dev/null
+++ b/Documentation/kunit/api/test.rst
@@ -0,0 +1,15 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+========
+Test API
+========
+
+This file documents all of the standard testing API excluding mocking or mocking
+related features.
+
+.. kernel-doc:: include/kunit/test.h
+   :internal:
+
+.. kernel-doc:: include/kunit/kunit-stream.h
+   :internal:
+
diff --git a/Documentation/kunit/faq.rst b/Documentation/kunit/faq.rst
new file mode 100644
index 0000000000000..cb8e4fb2257a0
--- /dev/null
+++ b/Documentation/kunit/faq.rst
@@ -0,0 +1,46 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+=========================================
+Frequently Asked Questions
+=========================================
+
+How is this different from Autotest, kselftest, etc?
+====================================================
+KUnit is a unit testing framework. Autotest, kselftest (and some others) are
+not.
+
+A `unit test <https://martinfowler.com/bliki/UnitTest.html>`_ is supposed to
+test a single unit of code in isolation, hence the name. A unit test should be
+the finest granularity of testing and as such should allow all possible code
+paths to be tested in the code under test; this is only possible if the code
+under test is very small and does not have any external dependencies outside of
+the test's control like hardware.
+
+All of the testing frameworks currently available for the kernel require
+installing the kernel on a test machine or in a VM, and all require tests to
+be written in userspace and run on the kernel under test; this is true for
+Autotest, kselftest, and some others, which disqualifies any of them from
+being considered unit testing frameworks.
+
+What is the difference between a unit test and these other kinds of tests?
+==========================================================================
+Most existing tests for the Linux kernel would be categorized as an integration
+test, or an end-to-end test.
+
+- A unit test is supposed to test a single unit of code in isolation, hence the
+  name. A unit test should be the finest granularity of testing and as such
+  should allow all possible code paths to be tested in the code under test; this
+  is only possible if the code under test is very small and does not have any
+  external dependencies outside of the test's control like hardware.
+- An integration test tests the interaction between a minimal set of components,
+  usually just two or three. For example, someone might write an integration
+  test to test the interaction between a driver and a piece of hardware, or to
+  test the interaction between the userspace libraries the kernel provides and
+  the kernel itself; however, one of these tests would probably not test the
+  entire kernel along with hardware interactions and interactions with
+  userspace.
+- An end-to-end test usually tests the entire system from the perspective of the
+  code under test. For example, someone might write an end-to-end test for the
+  kernel by installing a production configuration of the kernel on production
+  hardware with a production userspace and then trying to exercise some behavior
+  that depends on interactions between the hardware, the kernel, and userspace.
diff --git a/Documentation/kunit/index.rst b/Documentation/kunit/index.rst
new file mode 100644
index 0000000000000..c6710211b647f
--- /dev/null
+++ b/Documentation/kunit/index.rst
@@ -0,0 +1,80 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+=========================================
+KUnit - Unit Testing for the Linux Kernel
+=========================================
+
+.. toctree::
+	:maxdepth: 2
+
+	start
+	usage
+	api/index
+	faq
+
+What is KUnit?
+==============
+
+KUnit is a lightweight unit testing and mocking framework for the Linux kernel.
+These tests can be run locally on a developer's workstation without a VM or
+special hardware.
+
+KUnit is heavily inspired by JUnit, Python's unittest.mock, and
+Googletest/Googlemock for C++. KUnit provides facilities for defining unit test
+cases, grouping related test cases into test suites, providing common
+infrastructure for running tests, and much more.
+
+Get started now: :doc:`start`
+
+Why KUnit?
+==========
+
+A unit test is supposed to test a single unit of code in isolation, hence the
+name. A unit test should be the finest granularity of testing and as such should
+allow all possible code paths to be tested in the code under test; this is only
+possible if the code under test is very small and does not have any external
+dependencies outside of the test's control like hardware.
+
+Outside of KUnit, all of the testing frameworks currently available for the
+kernel require installing the kernel on a test machine or in a VM, and all
+require tests to be written in userspace and run on the kernel under test;
+this is true for Autotest and kselftest, which disqualifies them from being
+considered unit testing frameworks.
+
+KUnit addresses the problem of being able to run tests without needing a
+virtual machine or actual hardware by using User Mode Linux. User Mode Linux
+is a Linux architecture, like ARM or x86; however, unlike other architectures
+it compiles to a standalone program that can be run like any other program
+directly inside of a host operating system. To be clear, it does not require
+any virtualization support; it is just a regular program.
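+
+For a rough sketch of what this means in practice (the KUnit wrapper described
+in :doc:`start` automates all of this, so the commands below are only
+illustrative and the configuration details are assumptions), a UML kernel is
+built and run like an ordinary program:
+
+.. code-block:: bash
+
+	# Configure and build the kernel for the "um" architecture ...
+	make ARCH=um defconfig
+	make ARCH=um
+	# ... and run the resulting binary as a normal userspace process.
+	./linux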
+
+KUnit is fast. Excluding build time, from invocation to completion KUnit can run
+several dozen tests in only 10 to 20 seconds; this might not sound like a big
+deal to some people, but having such fast and easy to run tests fundamentally
+changes the way you go about testing and even writing code in the first place.
+Linus himself said in his `git talk at Google
+<https://gist.github.com/lorn/1272686/revisions#diff-53c65572127855f1b003db4064a94573R874>`_:
+
+	"... a lot of people seem to think that performance is about doing the
+	same thing, just doing it faster, and that is not true. That is not what
+	performance is all about. If you can do something really fast, really
+	well, people will start using it differently."
+
+In this context Linus was talking about branching and merging, but the point
+also applies to testing. If your tests are slow, unreliable, difficult to
+write, and require a special setup or special hardware to run, then you wait a
+lot longer to write tests, and you wait a lot longer to run tests; this means
+that tests are likely to break, unlikely to test a lot of things, and are
+unlikely to be rerun once they pass. If your tests are really fast, you run
+them all the time, every time you make a change, and every time someone sends
+you some code. Why trust that someone ran all their tests correctly on every
+change when you can just run them yourself in less time than it takes to read
+their test log?
+
+How do I use it?
+===================
+
+*   :doc:`start` - for new users of KUnit
+*   :doc:`usage` - for a more detailed explanation of KUnit features
+*   :doc:`api/index` - for the list of KUnit APIs used for testing
+
diff --git a/Documentation/kunit/start.rst b/Documentation/kunit/start.rst
new file mode 100644
index 0000000000000..5cdba5091905e
--- /dev/null
+++ b/Documentation/kunit/start.rst
@@ -0,0 +1,180 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+===============
+Getting Started
+===============
+
+Installing dependencies
+=======================
+KUnit has the same dependencies as the Linux kernel. As long as you can build
+the kernel, you can run KUnit.
+
+KUnit Wrapper
+=============
+Included with KUnit is a simple Python wrapper that makes KUnit output easy to
+use and read. It handles building and running the kernel, as well as
+formatting the output.
+
+The wrapper can be run with:
+
+.. code-block:: bash
+
+   ./tools/testing/kunit/kunit.py
+
+Creating a kunitconfig
+======================
+The Python script is a thin wrapper around Kbuild; as such, it needs to be
+configured with a ``kunitconfig`` file. This file essentially contains the
+regular kernel config, with the specific test targets added as well.
+
+.. code-block:: bash
+
+	git clone -b master https://kunit.googlesource.com/kunitconfig $PATH_TO_KUNITCONFIG_REPO
+	cd $PATH_TO_LINUX_REPO
+	ln -s $PATH_TO_KUNITCONFIG_REPO/kunitconfig kunitconfig
+
+You may want to add ``kunitconfig`` to your local ``.gitignore``.
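+
+For reference, a ``kunitconfig`` is just a config fragment. As a minimal
+sketch (the repository above provides the canonical version; the exact set of
+options needed is an assumption here), it would at least enable KUnit itself:
+
+.. code-block:: none
+
+	CONFIG_KUNIT=y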
+
+Verifying KUnit Works
+-------------------------
+
+To make sure that everything is set up correctly, simply invoke the Python
+wrapper from your kernel repo:
+
+.. code-block:: bash
+
+	./tools/testing/kunit/kunit.py
+
+.. note::
+   You may want to run ``make mrproper`` first.
+
+If everything worked correctly, you should see the following:
+
+.. code-block:: none
+
+	Generating .config ...
+	Building KUnit Kernel ...
+	Starting KUnit Kernel ...
+
+followed by a list of tests that are run. All of them should be passing.
+
+.. note::
+   Because it is building a lot of sources for the first time, the ``Building
+   KUnit Kernel`` step may take a while.
+
+Writing your first test
+==========================
+
+In your kernel repo, let's add some code that we can test. Create a file
+``drivers/misc/example.h`` with the contents:
+
+.. code-block:: c
+
+	int misc_example_add(int left, int right);
+
+Then create a file ``drivers/misc/example.c``:
+
+.. code-block:: c
+
+	#include <linux/errno.h>
+
+	#include "example.h"
+
+	int misc_example_add(int left, int right)
+	{
+		return left + right;
+	}
+
+Now add the following lines to ``drivers/misc/Kconfig``:
+
+.. code-block:: kconfig
+
+	config MISC_EXAMPLE
+		bool "My example"
+
+and the following lines to ``drivers/misc/Makefile``:
+
+.. code-block:: make
+
+	obj-$(CONFIG_MISC_EXAMPLE) += example.o
+
+Now we are ready to write the test. The test will be in
+``drivers/misc/example-test.c``:
+
+.. code-block:: c
+
+	#include <kunit/test.h>
+	#include "example.h"
+
+	/* Define the test cases. */
+
+	static void misc_example_add_test_basic(struct kunit *test)
+	{
+		KUNIT_EXPECT_EQ(test, 1, misc_example_add(1, 0));
+		KUNIT_EXPECT_EQ(test, 2, misc_example_add(1, 1));
+		KUNIT_EXPECT_EQ(test, 0, misc_example_add(-1, 1));
+		KUNIT_EXPECT_EQ(test, INT_MAX, misc_example_add(0, INT_MAX));
+		KUNIT_EXPECT_EQ(test, -1, misc_example_add(INT_MAX, INT_MIN));
+	}
+
+	static void misc_example_test_failure(struct kunit *test)
+	{
+		KUNIT_FAIL(test, "This test never passes.");
+	}
+
+	static struct kunit_case misc_example_test_cases[] = {
+		KUNIT_CASE(misc_example_add_test_basic),
+		KUNIT_CASE(misc_example_test_failure),
+		{},
+	};
+
+	static struct kunit_module misc_example_test_module = {
+		.name = "misc-example",
+		.test_cases = misc_example_test_cases,
+	};
+	module_test(misc_example_test_module);
+
+Now add the following to ``drivers/misc/Kconfig``:
+
+.. code-block:: kconfig
+
+	config MISC_EXAMPLE_TEST
+		bool "Test for my example"
+		depends on MISC_EXAMPLE && KUNIT
+
+and the following to ``drivers/misc/Makefile``:
+
+.. code-block:: make
+
+	obj-$(CONFIG_MISC_EXAMPLE_TEST) += example-test.o
+
+Now add it to your ``kunitconfig``:
+
+.. code-block:: none
+
+	CONFIG_MISC_EXAMPLE=y
+	CONFIG_MISC_EXAMPLE_TEST=y
+
+Now you can run the test:
+
+.. code-block:: bash
+
+	./tools/testing/kunit/kunit.py
+
+You should see the following failure:
+
+.. code-block:: none
+
+	...
+	[16:08:57] [PASSED] misc-example:misc_example_add_test_basic
+	[16:08:57] [FAILED] misc-example:misc_example_test_failure
+	[16:08:57] EXPECTATION FAILED at drivers/misc/example-test.c:17
+	[16:08:57] 	This test never passes.
+	...
+
+Congrats! You just wrote your first KUnit test!
+
+Next Steps
+=============
+*   Check out the :doc:`usage` page for a more
+    in-depth explanation of KUnit.
diff --git a/Documentation/kunit/usage.rst b/Documentation/kunit/usage.rst
new file mode 100644
index 0000000000000..96ef7f9a1add4
--- /dev/null
+++ b/Documentation/kunit/usage.rst
@@ -0,0 +1,447 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+=============
+Using KUnit
+=============
+
+The purpose of this document is to describe what KUnit is, how it works, how it
+is intended to be used, and all the concepts and terminology that are needed to
+understand it. This guide assumes a working knowledge of the Linux kernel and
+some basic knowledge of testing.
+
+For a high level introduction to KUnit, including setting up KUnit for your
+project, see :doc:`start`.
+
+Organization of this document
+=================================
+
+This document is organized into two main sections: Testing and Isolating
+Behavior. The first covers what a unit test is and how to use KUnit to write
+them. The second covers how to use KUnit to isolate code and make it possible
+to unit test code that was otherwise un-unit-testable.
+
+Testing
+==========
+
+What is KUnit?
+------------------
+
+"K" is short for "kernel" so "KUnit" is the "(Linux) Kernel Unit Testing
+Framework." KUnit is intended first and foremost for writing unit tests; it is
+general enough that it can be used to write integration tests; however, this is
+a secondary goal. KUnit has no ambition of being the only testing framework for
+the kernel; for example, it does not intend to be an end-to-end testing
+framework.
+
+What is Unit Testing?
+-------------------------
+
+A `unit test <https://martinfowler.com/bliki/UnitTest.html>`_ is a test that
+tests code at the smallest possible scope, a *unit* of code. In the C
+programming language that's a function.
+
+Unit tests should be written for all the publicly exposed functions in a
+compilation unit; that is, every function that is exported as part of a
+*class* (defined below) and every function that is **not** static.
+
+Writing Tests
+-------------
+
+Test Cases
+~~~~~~~~~~
+
+The fundamental unit in KUnit is the test case. A test case is a function with
+the signature ``void (*)(struct kunit *test)``. It calls a function to be tested
+and then sets *expectations* for what should happen. For example:
+
+.. code-block:: c
+
+	void example_test_success(struct kunit *test)
+	{
+	}
+
+	void example_test_failure(struct kunit *test)
+	{
+		KUNIT_FAIL(test, "This test never passes.");
+	}
+
+In the above example ``example_test_success`` always passes because it does
+nothing; no expectations are set, so all expectations pass. On the other hand
+``example_test_failure`` always fails because it calls ``KUNIT_FAIL``, which is
+a special expectation that logs a message and causes the test case to fail.
+
+Expectations
+~~~~~~~~~~~~
+An *expectation* is a way to specify that you expect a piece of code to do
+something in a test. An expectation is called like a function. A test is made
+by setting expectations about the behavior of a piece of code under test; when
+one or more of the expectations fail, the test case fails and information about
+the failure is logged. For example:
+
+.. code-block:: c
+
+	void add_test_basic(struct kunit *test)
+	{
+		KUNIT_EXPECT_EQ(test, 1, add(1, 0));
+		KUNIT_EXPECT_EQ(test, 2, add(1, 1));
+	}
+
+In the above example ``add_test_basic`` makes a number of expectations about
+the behavior of a function called ``add``; the first parameter is always of
+type ``struct kunit *``, which contains information about the current test
+context; the second parameter, in this case, is what the value is expected to
+be; the last value is what the value actually is. If ``add`` passes all of
+these expectations, the test case, ``add_test_basic``, will pass; if any one
+of these expectations fails, the test case will fail.
+
+It is important to understand that a test case *fails* when any expectation is
+violated; however, the test will continue running, potentially trying other
+expectations until the test case ends or is otherwise terminated. This is in
+contrast to *assertions*, which are discussed later.
+
+To learn about more expectations supported by KUnit, see :doc:`api/test`.
+
+.. note::
+   A single test case should be pretty short, pretty easy to understand, and
+   focused on a single behavior.
+
+For example, if we wanted to properly test the add function above, we would
+create additional test cases which would each test a different property that
+an add function should have, like this:
+
+.. code-block:: c
+
+	void add_test_basic(struct kunit *test)
+	{
+		KUNIT_EXPECT_EQ(test, 1, add(1, 0));
+		KUNIT_EXPECT_EQ(test, 2, add(1, 1));
+	}
+
+	void add_test_negative(struct kunit *test)
+	{
+		KUNIT_EXPECT_EQ(test, 0, add(-1, 1));
+	}
+
+	void add_test_max(struct kunit *test)
+	{
+		KUNIT_EXPECT_EQ(test, INT_MAX, add(0, INT_MAX));
+		KUNIT_EXPECT_EQ(test, -1, add(INT_MAX, INT_MIN));
+	}
+
+	void add_test_overflow(struct kunit *test)
+	{
+		KUNIT_EXPECT_EQ(test, INT_MIN, add(INT_MAX, 1));
+	}
+
+Notice how it is immediately obvious what properties we are testing for.
+
+Assertions
+~~~~~~~~~~
+
+KUnit also has the concept of an *assertion*. An assertion is just like an
+expectation except the assertion immediately terminates the test case if it is
+not satisfied.
+
+For example:
+
+.. code-block:: c
+
+	static void mock_test_do_expect_default_return(struct kunit *test)
+	{
+		struct mock_test_context *ctx = test->priv;
+		struct mock *mock = ctx->mock;
+		int param0 = 5, param1 = -5;
+		const char *two_param_types[] = {"int", "int"};
+		const void *two_params[] = {&param0, &param1};
+		const void *ret;
+
+		ret = mock->do_expect(mock,
+				      "test_printk", test_printk,
+				      two_param_types, two_params,
+				      ARRAY_SIZE(two_params));
+		KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ret);
+		KUNIT_EXPECT_EQ(test, -4, *((int *) ret));
+	}
+
+In this example, the method under test should return a pointer to a value, so
+if the pointer returned by the method is null or an errno, we don't want to
+bother continuing the test since the following expectation could crash the
+test case. ``KUNIT_ASSERT_NOT_ERR_OR_NULL(...)`` allows us to bail out of the
+test case if the appropriate conditions have not been satisfied to complete
+the test.
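+
+As a simpler, hypothetical sketch (``struct widget``, ``widget_alloc()``, and
+the ``refcount`` field are made up for illustration; only the KUnit macros are
+real), an assertion is typically used to guard a dereference that later
+expectations depend on:
+
+.. code-block:: c
+
+	static void widget_test_alloc(struct kunit *test)
+	{
+		struct widget *widget = widget_alloc();
+
+		/* Bail out of the test case immediately if allocation failed... */
+		KUNIT_ASSERT_NOT_ERR_OR_NULL(test, widget);
+		/* ...so that the dereference below is known to be safe. */
+		KUNIT_EXPECT_EQ(test, 0, widget->refcount);
+	}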
+
+Modules / Test Suites
+~~~~~~~~~~~~~~~~~~~~~
+
+Now obviously one unit test isn't very helpful; the power comes from having
+many test cases covering all of your behaviors. Consequently it is common to
+have many *similar* tests; in order to reduce duplication in these closely
+related tests most unit testing frameworks provide the concept of a *test
+suite*; in KUnit we call it a *test module*. A test module is just a
+collection of test cases for a unit of code, with a set up function that gets
+invoked before every test case and a tear down function that gets invoked
+after every test case completes.
+
+Example:
+
+.. code-block:: c
+
+	static struct kunit_case example_test_cases[] = {
+		KUNIT_CASE(example_test_foo),
+		KUNIT_CASE(example_test_bar),
+		KUNIT_CASE(example_test_baz),
+		{},
+	};
+
+	static struct kunit_module example_test_module = {
+		.name = "example",
+		.init = example_test_init,
+		.exit = example_test_exit,
+		.test_cases = example_test_cases,
+	};
+	module_test(example_test_module);
+
+In the above example the test suite, ``example_test_module``, would run the test
+cases ``example_test_foo``, ``example_test_bar``, and ``example_test_baz``, each
+would have ``example_test_init`` called immediately before it and would have
+``example_test_exit`` called immediately after it.
+``module_test(example_test_module)`` registers the test suite with the KUnit
+test framework.
+
+.. note::
+   A test case will only be run if it is associated with a test suite.
+
+For more information on these types of things, see :doc:`api/test`.
+
+Isolating Behavior
+==================
+
+The most important aspect of unit testing that other forms of testing do not
+provide is the ability to limit the amount of code under test to a single
+unit. In practice, this is only possible by being able to control what code
+gets run when the unit under test calls a function; this is usually
+accomplished through some sort of indirection where a function is exposed as
+part of an API such that the definition of that function can be changed
+without affecting the rest of the code base. In the kernel this primarily
+comes from two constructs: classes, which are structs that contain function
+pointers provided by the implementer, and architecture-specific functions,
+which have definitions selected at compile time.
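+
+As a tiny, hypothetical illustration of the second construct (the file names
+and the function below are made up for this sketch), the definition that gets
+built is selected by the architecture:
+
+.. code-block:: c
+
+	/* arch/x86/lib/do_something.c (hypothetical) */
+	int do_something(void) { return 1; }
+
+	/* arch/arm/lib/do_something.c (hypothetical) */
+	int do_something(void) { return 2; }
+
+	/*
+	 * Callers simply call do_something(); which definition they are built
+	 * against is selected at compile time by the architecture Makefiles.
+	 */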
+
+Classes
+-------
+
+Classes are not a construct that is built into the C programming language;
+however, they are an easily derived concept. Accordingly, pretty much every
+project that does not use a standardized object oriented library (like
+GNOME's GObject) has its own slightly different way of doing object oriented
+programming; the Linux kernel is no exception.
+
+The central concept in kernel object oriented programming is the class. In the
+kernel, a *class* is a struct that contains function pointers. This creates a
+contract between *implementers* and *users* since it forces them to use the
+same function signature without having to call the function directly. In order
+for it to truly be a class, the function pointers must specify that a pointer
+to the class, known as a *class handle*, be one of the parameters; this makes
+it possible for the member functions (also known as *methods*) to have access
+to member variables (more commonly known as *fields*) allowing the same
+implementation to have multiple *instances*.
+
+Typically a class can be *overridden* by *child classes* by embedding the
+*parent class* in the child class. Then when a method provided by the child
+class is called, the child implementation knows that the pointer passed to it
+points at a parent contained within the child; because of this, the child can
+compute the pointer to itself, since the pointer to the parent is always at a
+fixed offset from the pointer to the child; this offset is the offset of the
+parent member within the child struct. For example:
+
+.. code-block:: c
+
+	struct shape {
+		int (*area)(struct shape *this);
+	};
+
+	struct rectangle {
+		struct shape parent;
+		int length;
+		int width;
+	};
+
+	int rectangle_area(struct shape *this)
+	{
+		struct rectangle *self = container_of(this, struct rectangle, parent);
+
+		return self->length * self->width;
+	}
+
+	void rectangle_new(struct rectangle *self, int length, int width)
+	{
+		self->parent.area = rectangle_area;
+		self->length = length;
+		self->width = width;
+	}
+
+In this example (as in most kernel code) the operation of computing the pointer
+to the child from the pointer to the parent is done by ``container_of``.
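+
+As an illustrative caller (the function below is hypothetical and exists only
+for this sketch), a user invokes the method through the class handle and never
+needs to know which implementation it gets:
+
+.. code-block:: c
+
+	int rectangle_area_example(void)
+	{
+		struct rectangle r;
+		struct shape *shape;
+
+		rectangle_new(&r, 3, 4);
+		shape = &r.parent;
+		/* Dispatch goes through the function pointer set by rectangle_new(). */
+		return shape->area(shape); /* 3 * 4 == 12 */
+	}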
+
+Faking Classes
+~~~~~~~~~~~~~~
+
+In order to unit test a piece of code that calls a method in a class, the
+behavior of the method must be controllable; otherwise the test ceases to be a
+unit test and becomes an integration test.
+
+A fake just provides an implementation of a piece of code that is different
+from what runs in a production instance, but behaves identically from the
+standpoint of the callers; this is usually done to replace a dependency that
+is hard to deal with, or is slow.
+
+A good example for this might be implementing a fake EEPROM that just stores the
+"contents" in an internal buffer. For example, let's assume we have a class that
+represents an EEPROM:
+
+.. code-block:: c
+
+	struct eeprom {
+		ssize_t (*read)(struct eeprom *this, size_t offset, char *buffer, size_t count);
+		ssize_t (*write)(struct eeprom *this, size_t offset, const char *buffer, size_t count);
+	};
+
+And we want to test some code that buffers writes to the EEPROM:
+
+.. code-block:: c
+
+	struct eeprom_buffer {
+		ssize_t (*write)(struct eeprom_buffer *this, const char *buffer, size_t count);
+		int (*flush)(struct eeprom_buffer *this);
+		size_t flush_count; /* Flushes when buffer exceeds flush_count. */
+	};
+
+	struct eeprom_buffer *new_eeprom_buffer(struct eeprom *eeprom);
+	void destroy_eeprom_buffer(struct eeprom_buffer *eeprom_buffer);
+
+We can easily test this code by *faking out* the underlying EEPROM:
+
+.. code-block:: c
+
+	struct fake_eeprom {
+		struct eeprom parent;
+		char contents[FAKE_EEPROM_CONTENTS_SIZE];
+	};
+
+	ssize_t fake_eeprom_read(struct eeprom *parent, size_t offset, char *buffer, size_t count)
+	{
+		struct fake_eeprom *this = container_of(parent, struct fake_eeprom, parent);
+
+		count = min(count, FAKE_EEPROM_CONTENTS_SIZE - offset);
+		memcpy(buffer, this->contents + offset, count);
+
+		return count;
+	}
+
+	ssize_t fake_eeprom_write(struct eeprom *parent, size_t offset, const char *buffer, size_t count)
+	{
+		struct fake_eeprom *this = container_of(parent, struct fake_eeprom, parent);
+
+		count = min(count, FAKE_EEPROM_CONTENTS_SIZE - offset);
+		memcpy(this->contents + offset, buffer, count);
+
+		return count;
+	}
+
+	void fake_eeprom_init(struct fake_eeprom *this)
+	{
+		this->parent.read = fake_eeprom_read;
+		this->parent.write = fake_eeprom_write;
+		memset(this->contents, 0, FAKE_EEPROM_CONTENTS_SIZE);
+	}
+
+We can now use it to test ``struct eeprom_buffer``:
+
+.. code-block:: c
+
+	struct eeprom_buffer_test {
+		struct fake_eeprom *fake_eeprom;
+		struct eeprom_buffer *eeprom_buffer;
+	};
+
+	static void eeprom_buffer_test_does_not_write_until_flush(struct kunit *test)
+	{
+		struct eeprom_buffer_test *ctx = test->priv;
+		struct eeprom_buffer *eeprom_buffer = ctx->eeprom_buffer;
+		struct fake_eeprom *fake_eeprom = ctx->fake_eeprom;
+		char buffer[] = {0xff};
+
+		eeprom_buffer->flush_count = SIZE_MAX;
+
+		eeprom_buffer->write(eeprom_buffer, buffer, 1);
+		KUNIT_EXPECT_EQ(test, fake_eeprom->contents[0], 0);
+
+		eeprom_buffer->write(eeprom_buffer, buffer, 1);
+		KUNIT_EXPECT_EQ(test, fake_eeprom->contents[1], 0);
+
+		eeprom_buffer->flush(eeprom_buffer);
+		KUNIT_EXPECT_EQ(test, fake_eeprom->contents[0], 0xff);
+		KUNIT_EXPECT_EQ(test, fake_eeprom->contents[1], 0xff);
+	}
+
+	static void eeprom_buffer_test_flushes_after_flush_count_met(struct kunit *test)
+	{
+		struct eeprom_buffer_test *ctx = test->priv;
+		struct eeprom_buffer *eeprom_buffer = ctx->eeprom_buffer;
+		struct fake_eeprom *fake_eeprom = ctx->fake_eeprom;
+		char buffer[] = {0xff};
+
+		eeprom_buffer->flush_count = 2;
+
+		eeprom_buffer->write(eeprom_buffer, buffer, 1);
+		KUNIT_EXPECT_EQ(test, fake_eeprom->contents[0], 0);
+
+		eeprom_buffer->write(eeprom_buffer, buffer, 1);
+		KUNIT_EXPECT_EQ(test, fake_eeprom->contents[0], 0xff);
+		KUNIT_EXPECT_EQ(test, fake_eeprom->contents[1], 0xff);
+	}
+
+	static void eeprom_buffer_test_flushes_increments_of_flush_count(struct kunit *test)
+	{
+		struct eeprom_buffer_test *ctx = test->priv;
+		struct eeprom_buffer *eeprom_buffer = ctx->eeprom_buffer;
+		struct fake_eeprom *fake_eeprom = ctx->fake_eeprom;
+		char buffer[] = {0xff, 0xff};
+
+		eeprom_buffer->flush_count = 2;
+
+		eeprom_buffer->write(eeprom_buffer, buffer, 1);
+		KUNIT_EXPECT_EQ(test, fake_eeprom->contents[0], 0);
+
+		eeprom_buffer->write(eeprom_buffer, buffer, 2);
+		KUNIT_EXPECT_EQ(test, fake_eeprom->contents[0], 0xff);
+		KUNIT_EXPECT_EQ(test, fake_eeprom->contents[1], 0xff);
+		/* Should have only flushed the first two bytes. */
+		KUNIT_EXPECT_EQ(test, fake_eeprom->contents[2], 0);
+	}
+
+	static int eeprom_buffer_test_init(struct kunit *test)
+	{
+		struct eeprom_buffer_test *ctx;
+
+		ctx = kunit_kzalloc(test, sizeof(*ctx), GFP_KERNEL);
+		KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx);
+
+		ctx->fake_eeprom = kunit_kzalloc(test, sizeof(*ctx->fake_eeprom), GFP_KERNEL);
+		KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx->fake_eeprom);
+
+		ctx->eeprom_buffer = new_eeprom_buffer(&ctx->fake_eeprom->parent);
+		KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx->eeprom_buffer);
+
+		test->priv = ctx;
+
+		return 0;
+	}
+
+	static void eeprom_buffer_test_exit(struct kunit *test)
+	{
+		struct eeprom_buffer_test *ctx = test->priv;
+
+		destroy_eeprom_buffer(ctx->eeprom_buffer);
+	}
+
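+Finally, to tie it all together, the test cases and the init/exit functions
+are registered as a test module, following the pattern described earlier (a
+sketch; the module name ``eeprom-buffer`` is chosen only for illustration):
+
+.. code-block:: c
+
+	static struct kunit_case eeprom_buffer_test_cases[] = {
+		KUNIT_CASE(eeprom_buffer_test_does_not_write_until_flush),
+		KUNIT_CASE(eeprom_buffer_test_flushes_after_flush_count_met),
+		KUNIT_CASE(eeprom_buffer_test_flushes_increments_of_flush_count),
+		{},
+	};
+
+	static struct kunit_module eeprom_buffer_test_module = {
+		.name = "eeprom-buffer",
+		.init = eeprom_buffer_test_init,
+		.exit = eeprom_buffer_test_exit,
+		.test_cases = eeprom_buffer_test_cases,
+	};
+	module_test(eeprom_buffer_test_module);
+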
-- 
2.21.0.rc0.258.g878e2cd30e-goog

^ permalink raw reply related	[flat|nested] 316+ messages in thread

* [RFC v4 13/17] Documentation: kunit: add documentation for KUnit
@ 2019-02-14 21:37     ` Brendan Higgins
  0 siblings, 0 replies; 316+ messages in thread
From: Brendan Higgins @ 2019-02-14 21:37 UTC (permalink / raw)
  To: keescook, mcgrof, shuah, robh, kieran.bingham, frowand.list
  Cc: brakmo, pmladek, amir73il, Brendan Higgins, dri-devel,
	Alexander.Levin, linux-kselftest, linux-nvdimm, richard,
	knut.omang, Felix Guo, wfg, joel, jdike, dan.carpenter,
	devicetree, Tim.Bird, linux-um, rostedt, julia.lawall,
	dan.j.williams, kunit-dev, gregkh, linux-kernel, daniel, mpe,
	joe, khilman

Add documentation for KUnit, the Linux kernel unit testing framework.
- Add intro and usage guide for KUnit
- Add API reference

Signed-off-by: Felix Guo <felixguoxiuping@gmail.com>
Signed-off-by: Brendan Higgins <brendanhiggins@google.com>
---
 Documentation/index.rst           |   1 +
 Documentation/kunit/api/index.rst |  16 ++
 Documentation/kunit/api/test.rst  |  15 +
 Documentation/kunit/faq.rst       |  46 +++
 Documentation/kunit/index.rst     |  80 ++++++
 Documentation/kunit/start.rst     | 180 ++++++++++++
 Documentation/kunit/usage.rst     | 447 ++++++++++++++++++++++++++++++
 7 files changed, 785 insertions(+)
 create mode 100644 Documentation/kunit/api/index.rst
 create mode 100644 Documentation/kunit/api/test.rst
 create mode 100644 Documentation/kunit/faq.rst
 create mode 100644 Documentation/kunit/index.rst
 create mode 100644 Documentation/kunit/start.rst
 create mode 100644 Documentation/kunit/usage.rst

diff --git a/Documentation/index.rst b/Documentation/index.rst
index c858c2e66e361..9512de536b34a 100644
--- a/Documentation/index.rst
+++ b/Documentation/index.rst
@@ -65,6 +65,7 @@ merged much easier.
    kernel-hacking/index
    trace/index
    maintainer/index
+   kunit/index
 
 Kernel API documentation
 ------------------------
diff --git a/Documentation/kunit/api/index.rst b/Documentation/kunit/api/index.rst
new file mode 100644
index 0000000000000..c31c530088153
--- /dev/null
+++ b/Documentation/kunit/api/index.rst
@@ -0,0 +1,16 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+=============
+API Reference
+=============
+.. toctree::
+
+	test
+
+This section documents the KUnit kernel testing API. It is divided into the
+following sections:
+
+================================= ==============================================
+:doc:`test`                       documents all of the standard testing API
+                                  excluding mocking or mocking related features.
+================================= ==============================================
diff --git a/Documentation/kunit/api/test.rst b/Documentation/kunit/api/test.rst
new file mode 100644
index 0000000000000..7c926014f047c
--- /dev/null
+++ b/Documentation/kunit/api/test.rst
@@ -0,0 +1,15 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+========
+Test API
+========
+
+This file documents all of the standard testing API excluding mocking or mocking
+related features.
+
+.. kernel-doc:: include/kunit/test.h
+   :internal:
+
+.. kernel-doc:: include/kunit/kunit-stream.h
+   :internal:
+
diff --git a/Documentation/kunit/faq.rst b/Documentation/kunit/faq.rst
new file mode 100644
index 0000000000000..cb8e4fb2257a0
--- /dev/null
+++ b/Documentation/kunit/faq.rst
@@ -0,0 +1,46 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+=========================================
+Frequently Asked Questions
+=========================================
+
+How is this different from Autotest, kselftest, etc?
+====================================================
+KUnit is a unit testing framework. Autotest, kselftest (and some others) are
+not.
+
+A `unit test <https://martinfowler.com/bliki/UnitTest.html>`_ is supposed to
+test a single unit of code in isolation, hence the name. A unit test should be
+the finest granularity of testing and as such should allow all possible code
+paths to be tested in the code under test; this is only possible if the code
+under test is very small and does not have any external dependencies outside of
+the test's control like hardware.
+
+All testing frameworks currently available for the kernel require installing
+the kernel on a test machine or in a VM, and all require tests to be written in
+userspace and run on the kernel under test; this is true for Autotest,
+kselftest, and some others, which disqualifies any of them from being
+considered a unit testing framework.
+
+What is the difference between a unit test and these other kinds of tests?
+==========================================================================
+Most existing tests for the Linux kernel would be categorized as an integration
+test, or an end-to-end test.
+
+- A unit test is supposed to test a single unit of code in isolation, hence the
+  name. A unit test should be the finest granularity of testing and as such
+  should allow all possible code paths to be tested in the code under test; this
+  is only possible if the code under test is very small and does not have any
+  external dependencies outside of the test's control like hardware.
+- An integration test tests the interaction between a minimal set of components,
+  usually just two or three. For example, someone might write an integration
+  test to test the interaction between a driver and a piece of hardware, or to
+  test the interaction between the userspace libraries the kernel provides and
+  the kernel itself; however, one of these tests would probably not test the
+  entire kernel along with hardware interactions and interactions with the
+  userspace.
+- An end-to-end test usually tests the entire system from the perspective of the
+  code under test. For example, someone might write an end-to-end test for the
+  kernel by installing a production configuration of the kernel on production
+  hardware with a production userspace and then trying to exercise some behavior
+  that depends on interactions between the hardware, the kernel, and userspace.
diff --git a/Documentation/kunit/index.rst b/Documentation/kunit/index.rst
new file mode 100644
index 0000000000000..c6710211b647f
--- /dev/null
+++ b/Documentation/kunit/index.rst
@@ -0,0 +1,80 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+=========================================
+KUnit - Unit Testing for the Linux Kernel
+=========================================
+
+.. toctree::
+	:maxdepth: 2
+
+	start
+	usage
+	api/index
+	faq
+
+What is KUnit?
+==============
+
+KUnit is a lightweight unit testing and mocking framework for the Linux kernel.
+These tests are able to be run locally on a developer's workstation without a VM
+or special hardware.
+
+KUnit is heavily inspired by JUnit, Python's unittest.mock, and
+Googletest/Googlemock for C++. KUnit provides facilities for defining unit test
+cases, grouping related test cases into test suites, providing common
+infrastructure for running tests, and much more.
+
+Get started now: :doc:`start`
+
+Why KUnit?
+==========
+
+A unit test is supposed to test a single unit of code in isolation, hence the
+name. A unit test should be the finest granularity of testing and as such should
+allow all possible code paths to be tested in the code under test; this is only
+possible if the code under test is very small and does not have any external
+dependencies outside of the test's control like hardware.
+
+Outside of KUnit, all testing frameworks currently available for the kernel
+require installing the kernel on a test machine or in a VM, and all require
+tests to be written in userspace and run on the kernel under test; this is
+true for Autotest and kselftest, which disqualifies them from being considered
+unit testing frameworks.
+
+KUnit addresses the problem of being able to run tests without needing a virtual
+machine or actual hardware with User Mode Linux. User Mode Linux is a Linux
+architecture, like ARM or x86; however, unlike other architectures it compiles
+to a standalone program that can be run like any other program directly inside
+of a host operating system; to be clear, it does not require any virtualization
+support; it is just a regular program.
+
+KUnit is fast. Excluding build time, from invocation to completion KUnit can run
+several dozen tests in only 10 to 20 seconds; this might not sound like a big
+deal to some people, but having such fast and easy to run tests fundamentally
+changes the way you go about testing and even writing code in the first place.
+Linus himself said in his `git talk at Google
+<https://gist.github.com/lorn/1272686/revisions#diff-53c65572127855f1b003db4064a94573R874>`_:
+
+	"... a lot of people seem to think that performance is about doing the
+	same thing, just doing it faster, and that is not true. That is not what
+	performance is all about. If you can do something really fast, really
+	well, people will start using it differently."
+
+In this context Linus was talking about branching and merging,
+but this point also applies to testing. If your tests are slow, unreliable,
+difficult to write, and require a special setup or special hardware to run,
+then you wait a lot longer to write tests, and you wait a lot longer to run
+tests; this means that tests are likely to break, unlikely to test a lot of
+things, and are unlikely to be rerun once they pass. If your tests are really
+fast, you run them all the time, every time you make a change, and every time
+someone sends you some code. Why trust that someone ran all their tests
+correctly on every change when you can just run them yourself in less time
+than it takes to read their test log?
+
+How do I use it?
+===================
+
+*   :doc:`start` - for new users of KUnit
+*   :doc:`usage` - for a more detailed explanation of KUnit features
+*   :doc:`api/index` - for the list of KUnit APIs used for testing
+
diff --git a/Documentation/kunit/start.rst b/Documentation/kunit/start.rst
new file mode 100644
index 0000000000000..5cdba5091905e
--- /dev/null
+++ b/Documentation/kunit/start.rst
@@ -0,0 +1,180 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+===============
+Getting Started
+===============
+
+Installing dependencies
+=======================
+KUnit has the same dependencies as the Linux kernel. As long as you can build
+the kernel, you can run KUnit.
+
+KUnit Wrapper
+=============
+Included with KUnit is a simple Python wrapper that handles building and
+running the kernel, as well as formatting the output so that KUnit results are
+easy to read.
+
+The wrapper can be run with:
+
+.. code-block:: bash
+
+   ./tools/testing/kunit/kunit.py
+
+Creating a kunitconfig
+======================
+The Python script is a thin wrapper around Kbuild; as such, it needs to be
+configured with a ``kunitconfig`` file. This file is essentially a regular
+kernel config with the specific test targets enabled as well (a minimal sketch
+is shown below).
+
+.. code-block:: bash
+
+	git clone -b master https://kunit.googlesource.com/kunitconfig $PATH_TO_KUNITCONFIG_REPO
+	cd $PATH_TO_LINUX_REPO
+	ln -s $PATH_TO_KUNITCONFIG_REPO/kunitconfig kunitconfig
+
+You may want to add ``kunitconfig`` to your local ``.gitignore``.
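+
+For reference, the file is just a list of config options; a minimal sketch
+(the actual file in the repository above may enable additional options) could
+look like:
+
+.. code-block:: none
+
+	CONFIG_KUNIT=y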
+
+Verifying KUnit Works
+-------------------------
+
+To make sure that everything is set up correctly, simply invoke the Python
+wrapper from your kernel repo:
+
+.. code-block:: bash
+
+	./tools/testing/kunit/kunit.py
+
+.. note::
+   You may want to run ``make mrproper`` first.
+
+If everything worked correctly, you should see the following:
+
+.. code-block:: bash
+
+	Generating .config ...
+	Building KUnit Kernel ...
+	Starting KUnit Kernel ...
+
+followed by a list of tests that are run. All of them should be passing.
+
+.. note::
+   Because it is building a lot of sources for the first time, the ``Building
+   KUnit Kernel`` step may take a while.
+
+Writing your first test
+==========================
+
+In your kernel repo, let's add some code that we can test. Create a file
+``drivers/misc/example.h`` with the contents:
+
+.. code-block:: c
+
+	int misc_example_add(int left, int right);
+
+Then create a file ``drivers/misc/example.c``:
+
+.. code-block:: c
+
+	#include <linux/errno.h>
+
+	#include "example.h"
+
+	int misc_example_add(int left, int right)
+	{
+		return left + right;
+	}
+
+Now add the following lines to ``drivers/misc/Kconfig``:
+
+.. code-block:: kconfig
+
+	config MISC_EXAMPLE
+		bool "My example"
+
+and the following lines to ``drivers/misc/Makefile``:
+
+.. code-block:: make
+
+	obj-$(CONFIG_MISC_EXAMPLE) += example.o
+
+Now we are ready to write the test. The test will be in
+``drivers/misc/example-test.c``:
+
+.. code-block:: c
+
+	#include <kunit/test.h>
+	#include "example.h"
+
+	/* Define the test cases. */
+
+	static void misc_example_add_test_basic(struct kunit *test)
+	{
+		KUNIT_EXPECT_EQ(test, 1, misc_example_add(1, 0));
+		KUNIT_EXPECT_EQ(test, 2, misc_example_add(1, 1));
+		KUNIT_EXPECT_EQ(test, 0, misc_example_add(-1, 1));
+		KUNIT_EXPECT_EQ(test, INT_MAX, misc_example_add(0, INT_MAX));
+		KUNIT_EXPECT_EQ(test, -1, misc_example_add(INT_MAX, INT_MIN));
+	}
+
+	static void misc_example_test_failure(struct kunit *test)
+	{
+		KUNIT_FAIL(test, "This test never passes.");
+	}
+
+	static struct kunit_case misc_example_test_cases[] = {
+		KUNIT_CASE(misc_example_add_test_basic),
+		KUNIT_CASE(misc_example_test_failure),
+		{},
+	};
+
+	static struct kunit_module misc_example_test_module = {
+		.name = "misc-example",
+		.test_cases = misc_example_test_cases,
+	};
+	module_test(misc_example_test_module);
+
+Now add the following to ``drivers/misc/Kconfig``:
+
+.. code-block:: kconfig
+
+	config MISC_EXAMPLE_TEST
+		bool "Test for my example"
+		depends on MISC_EXAMPLE && KUNIT
+
+and the following to ``drivers/misc/Makefile``:
+
+.. code-block:: make
+
+	obj-$(CONFIG_MISC_EXAMPLE_TEST) += example-test.o
+
+Now add it to your ``kunitconfig``:
+
+.. code-block:: none
+
+	CONFIG_MISC_EXAMPLE=y
+	CONFIG_MISC_EXAMPLE_TEST=y
+
+Now you can run the test:
+
+.. code-block:: bash
+
+	./tools/testing/kunit/kunit.py
+
+You should see the following failure:
+
+.. code-block:: none
+
+	...
+	[16:08:57] [PASSED] misc-example:misc_example_add_test_basic
+	[16:08:57] [FAILED] misc-example:misc_example_test_failure
+	[16:08:57] EXPECTATION FAILED at drivers/misc/example-test.c:17
+	[16:08:57] 	This test never passes.
+	...
+
+Congrats! You just wrote your first KUnit test!
+
+Next Steps
+=============
+*   Check out the :doc:`usage` page for a more
+    in-depth explanation of KUnit.
diff --git a/Documentation/kunit/usage.rst b/Documentation/kunit/usage.rst
new file mode 100644
index 0000000000000..96ef7f9a1add4
--- /dev/null
+++ b/Documentation/kunit/usage.rst
@@ -0,0 +1,447 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+=============
+Using KUnit
+=============
+
+The purpose of this document is to describe what KUnit is, how it works, how it
+is intended to be used, and all the concepts and terminology that are needed to
+understand it. This guide assumes a working knowledge of the Linux kernel and
+some basic knowledge of testing.
+
+For a high level introduction to KUnit, including setting up KUnit for your
+project, see :doc:`start`.
+
+Organization of this document
+=================================
+
+This document is organized into two main sections: Testing and Isolating
+Behavior. The first covers what a unit test is and how to use KUnit to write
+them. The second covers how to use KUnit to isolate code and make it possible
+to unit test code that was otherwise un-unit-testable.
+
+Testing
+==========
+
+What is KUnit?
+------------------
+
+"K" is short for "kernel" so "KUnit" is the "(Linux) Kernel Unit Testing
+Framework." KUnit is intended first and foremost for writing unit tests; it is
+general enough that it can be used to write integration tests; however, this is
+a secondary goal. KUnit has no ambition of being the only testing framework for
+the kernel; for example, it does not intend to be an end-to-end testing
+framework.
+
+What is Unit Testing?
+-------------------------
+
+A `unit test <https://martinfowler.com/bliki/UnitTest.html>`_ is a test that
+tests code at the smallest possible scope, a *unit* of code. In the C
+programming language that's a function.
+
+Unit tests should be written for all the publicly exposed functions in a
+compilation unit; that is, all the functions that are exported via a *class*
+(defined below) and all functions which are **not** static.
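+
+For example, in a compilation unit sketched like the one below (hypothetical
+names, purely for illustration), ``sum_of_squares`` is the function to write
+unit tests against, while the static helper is exercised through it:
+
+.. code-block:: c
+
+	/* Internal helper; not part of the unit's public surface. */
+	static int square(int x)
+	{
+		return x * x;
+	}
+
+	/* Publicly exposed function; this is what unit tests should target. */
+	int sum_of_squares(int a, int b)
+	{
+		return square(a) + square(b);
+	}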
+
+Writing Tests
+-------------
+
+Test Cases
+~~~~~~~~~~
+
+The fundamental unit in KUnit is the test case. A test case is a function with
+the signature ``void (*)(struct kunit *test)``. It calls a function to be tested
+and then sets *expectations* for what should happen. For example:
+
+.. code-block:: c
+
+	void example_test_success(struct kunit *test)
+	{
+	}
+
+	void example_test_failure(struct kunit *test)
+	{
+		KUNIT_FAIL(test, "This test never passes.");
+	}
+
+In the above example ``example_test_success`` always passes because it does
+nothing; no expectations are set, so all expectations pass. On the other hand
+``example_test_failure`` always fails because it calls ``KUNIT_FAIL``, which is
+a special expectation that logs a message and causes the test case to fail.
+
+Expectations
+~~~~~~~~~~~~
+An *expectation* is a way to specify that you expect a piece of code to do
+something in a test. An expectation is called like a function. A test is made
+by setting expectations about the behavior of a piece of code under test; when
+one or more of the expectations fail, the test case fails and information about
+the failure is logged. For example:
+
+.. code-block:: c
+
+	void add_test_basic(struct kunit *test)
+	{
+		KUNIT_EXPECT_EQ(test, 1, add(1, 0));
+		KUNIT_EXPECT_EQ(test, 2, add(1, 1));
+	}
+
+In the above example ``add_test_basic`` makes a number of expectations about
+the behavior of a function called ``add``; the first parameter is always of
+type ``struct kunit *``, which contains information about the current test
+context; the second parameter, in this case, is what the value is expected to
+be; the last value is what the value actually is. If ``add`` passes all of
+these expectations, the test case ``add_test_basic`` will pass; if any one of
+these expectations fails, the test case will fail.
+
+It is important to understand that a test case *fails* when any expectation is
+violated; however, the test will continue running, potentially trying other
+expectations until the test case ends or is otherwise terminated. This is in
+contrast to *assertions*, which are discussed later.
+
+To learn about more expectations supported by KUnit, see :doc:`api/test`.
+
+.. note::
+   A single test case should be pretty short, pretty easy to understand, and
+   focused on a single behavior.
+
+For example, if we wanted to properly test the add function above, we would
+create additional test cases which would each test a different property that
+an add function should have, like this:
+
+.. code-block:: c
+
+	void add_test_basic(struct kunit *test)
+	{
+		KUNIT_EXPECT_EQ(test, 1, add(1, 0));
+		KUNIT_EXPECT_EQ(test, 2, add(1, 1));
+	}
+
+	void add_test_negative(struct kunit *test)
+	{
+		KUNIT_EXPECT_EQ(test, 0, add(-1, 1));
+	}
+
+	void add_test_max(struct kunit *test)
+	{
+		KUNIT_EXPECT_EQ(test, INT_MAX, add(0, INT_MAX));
+		KUNIT_EXPECT_EQ(test, -1, add(INT_MAX, INT_MIN));
+	}
+
+	void add_test_overflow(struct kunit *test)
+	{
+		KUNIT_EXPECT_EQ(test, INT_MIN, add(INT_MAX, 1));
+	}
+
+Notice how it is immediately obvious what properties we are testing for.
+
+Assertions
+~~~~~~~~~~
+
+KUnit also has the concept of an *assertion*. An assertion is just like an
+expectation except the assertion immediately terminates the test case if it is
+not satisfied.
+
+For example:
+
+.. code-block:: c
+
+	static void mock_test_do_expect_default_return(struct kunit *test)
+	{
+		struct mock_test_context *ctx = test->priv;
+		struct mock *mock = ctx->mock;
+		int param0 = 5, param1 = -5;
+		const char *two_param_types[] = {"int", "int"};
+		const void *two_params[] = {&param0, &param1};
+		const void *ret;
+
+		ret = mock->do_expect(mock,
+				      "test_printk", test_printk,
+				      two_param_types, two_params,
+				      ARRAY_SIZE(two_params));
+		KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ret);
+		KUNIT_EXPECT_EQ(test, -4, *((int *) ret));
+	}
+
+In this example, the method under test should return a pointer to a value, so
+if the pointer returned by the method is null or an errno, we don't want to
+bother continuing the test since the following expectation could crash the test
+case. ``KUNIT_ASSERT_NOT_ERR_OR_NULL(...)`` allows us to bail out of the test
+case if the appropriate conditions have not been satisfied to complete the test.
+
+Modules / Test Suites
+~~~~~~~~~~~~~~~~~~~~~
+
+Now obviously one unit test isn't very helpful; the power comes from having
+many test cases covering all of your behaviors. Consequently it is common to
+have many *similar* tests; in order to reduce duplication in these closely
+related tests most unit testing frameworks provide the concept of a *test
+suite*, which in KUnit is called a *test module*. A test module is just a
+collection of test cases for a unit of code, with a set up function that gets
+invoked before every test case and a tear down function that gets invoked
+after every test case completes.
+
+Example:
+
+.. code-block:: c
+
+	static struct kunit_case example_test_cases[] = {
+		KUNIT_CASE(example_test_foo),
+		KUNIT_CASE(example_test_bar),
+		KUNIT_CASE(example_test_baz),
+		{},
+	};
+
+	static struct kunit_module example_test_module = {
+		.name = "example",
+		.init = example_test_init,
+		.exit = example_test_exit,
+		.test_cases = example_test_cases,
+	};
+	module_test(example_test_module);
+
+In the above example the test suite, ``example_test_module``, would run the test
+cases ``example_test_foo``, ``example_test_bar``, and ``example_test_baz``, each
+would have ``example_test_init`` called immediately before it and would have
+``example_test_exit`` called immediately after it.
+``module_test(example_test_module)`` registers the test suite with the KUnit
+test framework.
+
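+The set up and tear down functions referenced above have the following shape;
+the bodies here are only a sketch, reusing the example's hypothetical names:
+
+.. code-block:: c
+
+	static int example_test_init(struct kunit *test)
+	{
+		/* Runs before each test case; return 0 on success. */
+		return 0;
+	}
+
+	static void example_test_exit(struct kunit *test)
+	{
+		/* Runs after each test case completes. */
+	}
+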
+.. note::
+   A test case will only be run if it is associated with a test suite.
+
+For more information on these types of things, see :doc:`api/test`.
+
+Isolating Behavior
+==================
+
+The most important aspect of unit testing that other forms of testing do not
+provide is the ability to limit the amount of code under test to a single unit.
+In practice, this is only possible by being able to control what code gets run
+when the unit under test calls a function; this is usually accomplished through
+some sort of indirection where a function is exposed as part of an API such
+that the definition of that function can be changed without affecting the rest
+of the code base. In the kernel this primarily comes from two constructs:
+classes, which are structs that contain function pointers provided by the
+implementer, and architecture-specific functions, which have definitions
+selected at compile time.
+
+Classes
+-------
+
+Classes are not a construct that is built into the C programming language;
+however, they are an easily derived concept. Accordingly, pretty much every
+project that does not use a standardized object oriented library (like GNOME's
+GObject) has its own slightly different way of doing object oriented
+programming; the Linux kernel is no exception.
+
+The central concept in kernel object oriented programming is the class. In the
+kernel, a *class* is a struct that contains function pointers. This creates a
+contract between *implementers* and *users* since it forces them to use the
+same function signature without having to call the function directly. In order
+for it to truly be a class, the function pointers must specify that a pointer
+to the class, known as a *class handle*, be one of the parameters; this makes
+it possible for the member functions (also known as *methods*) to have access
+to member variables (more commonly known as *fields*) allowing the same
+implementation to have multiple *instances*.
+
+Typically a class can be *overridden* by *child classes* by embedding the
+*parent class* in the child class. When a method provided by the child class
+is called, the child implementation knows that the pointer passed to it points
+to a parent embedded within the child; because of this, the child can compute
+the pointer to itself, since the pointer to the parent is always at a fixed
+offset from the pointer to the child: the offset of the parent member within
+the child struct. For example:
+
+.. code-block:: c
+
+	struct shape {
+		int (*area)(struct shape *this);
+	};
+
+	struct rectangle {
+		struct shape parent;
+		int length;
+		int width;
+	};
+
+	int rectangle_area(struct shape *this)
+	{
+		struct rectangle *self = container_of(this, struct rectangle, parent);
+
+		return self->length * self->width;
+	}
+
+	void rectangle_new(struct rectangle *self, int length, int width)
+	{
+		self->parent.area = rectangle_area;
+		self->length = length;
+		self->width = width;
+	}
+
+In this example (as in most kernel code) the operation of computing the pointer
+to the child from the pointer to the parent is done by ``container_of``.
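+
+A caller that only knows about ``struct shape`` can then use any implementation
+through the class handle; for instance (a hypothetical helper, not part of the
+example above):
+
+.. code-block:: c
+
+	int shape_get_area(struct shape *shape)
+	{
+		/* Dispatches to rectangle_area() when given a struct rectangle. */
+		return shape->area(shape);
+	}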
+
+Faking Classes
+~~~~~~~~~~~~~~
+
+In order to unit test a piece of code that calls a method in a class, the
+behavior of the method must be controllable, otherwise the test ceases to be a
+unit test and becomes an integration test.
+
+A fake just provides an implementation of a piece of code that is different
+from what runs in a production instance, but behaves identically from the
+standpoint of the callers; this is usually done to replace a dependency that is
+hard to deal with, or is slow.
+
+A good example for this might be implementing a fake EEPROM that just stores the
+"contents" in an internal buffer. For example, let's assume we have a class that
+represents an EEPROM:
+
+.. code-block:: c
+
+	struct eeprom {
+		ssize_t (*read)(struct eeprom *this, size_t offset, char *buffer, size_t count);
+		ssize_t (*write)(struct eeprom *this, size_t offset, const char *buffer, size_t count);
+	};
+
+And we want to test some code that buffers writes to the EEPROM:
+
+.. code-block:: c
+
+	struct eeprom_buffer {
+		ssize_t (*write)(struct eeprom_buffer *this, const char *buffer, size_t count);
+		int (*flush)(struct eeprom_buffer *this);
+		size_t flush_count; /* Flushes when buffer exceeds flush_count. */
+	};
+
+	struct eeprom_buffer *new_eeprom_buffer(struct eeprom *eeprom);
+	void destroy_eeprom_buffer(struct eeprom_buffer *eeprom_buffer);
+
+We can easily test this code by *faking out* the underlying EEPROM:
+
+.. code-block:: c
+
+	struct fake_eeprom {
+		struct eeprom parent;
+		char contents[FAKE_EEPROM_CONTENTS_SIZE];
+	};
+
+	ssize_t fake_eeprom_read(struct eeprom *parent, size_t offset, char *buffer, size_t count)
+	{
+		struct fake_eeprom *this = container_of(parent, struct fake_eeprom, parent);
+
+		count = min(count, FAKE_EEPROM_CONTENTS_SIZE - offset);
+		memcpy(buffer, this->contents + offset, count);
+
+		return count;
+	}
+
+	ssize_t fake_eeprom_write(struct eeprom *parent, size_t offset, const char *buffer, size_t count)
+	{
+		struct fake_eeprom *this = container_of(parent, struct fake_eeprom, parent);
+
+		count = min(count, FAKE_EEPROM_CONTENTS_SIZE - offset);
+		memcpy(this->contents + offset, buffer, count);
+
+		return count;
+	}
+
+	void fake_eeprom_init(struct fake_eeprom *this)
+	{
+		this->parent.read = fake_eeprom_read;
+		this->parent.write = fake_eeprom_write;
+		memset(this->contents, 0, FAKE_EEPROM_CONTENTS_SIZE);
+	}
+
+We can now use it to test ``struct eeprom_buffer``:
+
+.. code-block:: c
+
+	struct eeprom_buffer_test {
+		struct fake_eeprom *fake_eeprom;
+		struct eeprom_buffer *eeprom_buffer;
+	};
+
+	static void eeprom_buffer_test_does_not_write_until_flush(struct kunit *test)
+	{
+		struct eeprom_buffer_test *ctx = test->priv;
+		struct eeprom_buffer *eeprom_buffer = ctx->eeprom_buffer;
+		struct fake_eeprom *fake_eeprom = ctx->fake_eeprom;
+		char buffer[] = {0xff};
+
+		eeprom_buffer->flush_count = SIZE_MAX;
+
+		eeprom_buffer->write(eeprom_buffer, buffer, 1);
+		KUNIT_EXPECT_EQ(test, fake_eeprom->contents[0], 0);
+
+		eeprom_buffer->write(eeprom_buffer, buffer, 1);
+		KUNIT_EXPECT_EQ(test, fake_eeprom->contents[1], 0);
+
+		eeprom_buffer->flush(eeprom_buffer);
+		KUNIT_EXPECT_EQ(test, fake_eeprom->contents[0], 0xff);
+		KUNIT_EXPECT_EQ(test, fake_eeprom->contents[1], 0xff);
+	}
+
+	static void eeprom_buffer_test_flushes_after_flush_count_met(struct kunit *test)
+	{
+		struct eeprom_buffer_test *ctx = test->priv;
+		struct eeprom_buffer *eeprom_buffer = ctx->eeprom_buffer;
+		struct fake_eeprom *fake_eeprom = ctx->fake_eeprom;
+		char buffer[] = {0xff};
+
+		eeprom_buffer->flush_count = 2;
+
+		eeprom_buffer->write(eeprom_buffer, buffer, 1);
+		KUNIT_EXPECT_EQ(test, fake_eeprom->contents[0], 0);
+
+		eeprom_buffer->write(eeprom_buffer, buffer, 1);
+		KUNIT_EXPECT_EQ(test, fake_eeprom->contents[0], 0xff);
+		KUNIT_EXPECT_EQ(test, fake_eeprom->contents[1], 0xff);
+	}
+
+	static void eeprom_buffer_test_flushes_increments_of_flush_count(struct kunit *test)
+	{
+		struct eeprom_buffer_test *ctx = test->priv;
+		struct eeprom_buffer *eeprom_buffer = ctx->eeprom_buffer;
+		struct fake_eeprom *fake_eeprom = ctx->fake_eeprom;
+		char buffer[] = {0xff, 0xff};
+
+		eeprom_buffer->flush_count = 2;
+
+		eeprom_buffer->write(eeprom_buffer, buffer, 1);
+		KUNIT_EXPECT_EQ(test, fake_eeprom->contents[0], 0);
+
+		eeprom_buffer->write(eeprom_buffer, buffer, 2);
+		KUNIT_EXPECT_EQ(test, fake_eeprom->contents[0], 0xff);
+		KUNIT_EXPECT_EQ(test, fake_eeprom->contents[1], 0xff);
+		/* Should have only flushed the first two bytes. */
+		KUNIT_EXPECT_EQ(test, fake_eeprom->contents[2], 0);
+	}
+
+	static int eeprom_buffer_test_init(struct kunit *test)
+	{
+		struct eeprom_buffer_test *ctx;
+
+		ctx = kunit_kzalloc(test, sizeof(*ctx), GFP_KERNEL);
+		KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx);
+
+		ctx->fake_eeprom = kunit_kzalloc(test, sizeof(*ctx->fake_eeprom), GFP_KERNEL);
+		KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx->fake_eeprom);
+
+		ctx->eeprom_buffer = new_eeprom_buffer(&ctx->fake_eeprom->parent);
+		KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx->eeprom_buffer);
+
+		test->priv = ctx;
+
+		return 0;
+	}
+
+	static void eeprom_buffer_test_exit(struct kunit *test)
+	{
+		struct eeprom_buffer_test *ctx = test->priv;
+
+		destroy_eeprom_buffer(ctx->eeprom_buffer);
+	}
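+
+With the test cases and the ``init``/``exit`` functions in place, the last step
+is to tie them together in a test module, following the same pattern as the
+earlier examples (a sketch; the module name here is chosen for illustration):
+
+.. code-block:: c
+
+	static struct kunit_case eeprom_buffer_test_cases[] = {
+		KUNIT_CASE(eeprom_buffer_test_does_not_write_until_flush),
+		KUNIT_CASE(eeprom_buffer_test_flushes_after_flush_count_met),
+		KUNIT_CASE(eeprom_buffer_test_flushes_increments_of_flush_count),
+		{},
+	};
+
+	static struct kunit_module eeprom_buffer_test_module = {
+		.name = "eeprom-buffer",
+		.init = eeprom_buffer_test_init,
+		.exit = eeprom_buffer_test_exit,
+		.test_cases = eeprom_buffer_test_cases,
+	};
+	module_test(eeprom_buffer_test_module);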
+
-- 
2.21.0.rc0.258.g878e2cd30e-goog


_______________________________________________
linux-um mailing list
linux-um@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-um


^ permalink raw reply related	[flat|nested] 316+ messages in thread

* [RFC v4 14/17] MAINTAINERS: add entry for KUnit the unit testing framework
  2019-02-14 21:37 ` brendanhiggins
                       ` (2 preceding siblings ...)
  (?)
@ 2019-02-14 21:37     ` Brendan Higgins
  -1 siblings, 0 replies; 316+ messages in thread
From: Brendan Higgins @ 2019-02-14 21:37 UTC (permalink / raw)
  To: keescook-hpIqsD4AKlfQT0dZR+AlfA, mcgrof-DgEjT+Ai2ygdnm+yROfE0A,
	shuah-DgEjT+Ai2ygdnm+yROfE0A, robh-DgEjT+Ai2ygdnm+yROfE0A,
	kieran.bingham-ryLnwIuWjnjg/C1BVhZhaw,
	frowand.list-Re5JQEeQqe8AvxtiuMwx3w
  Cc: brakmo-b10kYP2dOMg, pmladek-IBi9RG/b67k,
	amir73il-Re5JQEeQqe8AvxtiuMwx3w, Brendan Higgins,
	dri-devel-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW,
	Alexander.Levin-0li6OtcxBFHby3iVrkZq2A,
	linux-kselftest-u79uwXL29TY76Z2rM5mHXA,
	linux-nvdimm-hn68Rpc1hR1g9hUCZPvPmw, richard-/L3Ra7n9ekc,
	knut.omang-QHcLZuEGTsvQT0dZR+AlfA, wfg-VuQAYsv1563Yd54FQh9/CA,
	joel-U3u1mxZcP9KHXe+LvDLADg, jdike-OPE4K8JWMJJBDgjK7y7TUQ,
	dan.carpenter-QHcLZuEGTsvQT0dZR+AlfA,
	devicetree-u79uwXL29TY76Z2rM5mHXA, Tim.Bird-7U/KSKJipcs,
	linux-um-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r,
	rostedt-nx8X9YLhiw1AfugRpC6u6w, julia.lawall-L2FTfq7BK8M,
	kunit-dev-/JYPxA39Uh5TLH3MbocFFw,
	gregkh-hQyY1W1yCW8ekmWlsbkhG0B+6BGkLq7r,
	linux-kernel-u79uwXL29TY76Z2rM5mHXA, daniel-/w4YWyX8dFk,
	mpe-Gsx/Oe8HsFggBc27wqDAHg, joe-6d6DIl74uiNBDgjK7y7TUQ,
	khilman-rdvid1DuHRBWk0Htik3J/w

Add myself as maintainer of KUnit, the Linux kernel's unit testing
framework.

Signed-off-by: Brendan Higgins <brendanhiggins-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org>
---
 MAINTAINERS | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index 8c68de3cfd80e..ff2cc9fcb49ad 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -8267,6 +8267,16 @@ S:	Maintained
 F:	tools/testing/selftests/
 F:	Documentation/dev-tools/kselftest*
 
+KERNEL UNIT TESTING FRAMEWORK (KUnit)
+M:	Brendan Higgins <brendanhiggins-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org>
+L:	kunit-dev-/JYPxA39Uh5TLH3MbocFFw@public.gmane.org
+W:	https://google.github.io/kunit-docs/third_party/kernel/docs/
+S:	Maintained
+F:	Documentation/kunit/
+F:	include/kunit/
+F:	kunit/
+F:	tools/testing/kunit/
+
 KERNEL USERMODE HELPER
 M:	Luis Chamberlain <mcgrof-DgEjT+Ai2ygdnm+yROfE0A@public.gmane.org>
 L:	linux-kernel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
-- 
2.21.0.rc0.258.g878e2cd30e-goog

^ permalink raw reply related	[flat|nested] 316+ messages in thread

* [RFC v4 14/17] MAINTAINERS: add entry for KUnit the unit testing framework
@ 2019-02-14 21:37     ` Brendan Higgins
  0 siblings, 0 replies; 316+ messages in thread
From: Brendan Higgins @ 2019-02-14 21:37 UTC (permalink / raw)
  To: keescook, mcgrof, shuah, robh, kieran.bingham, frowand.list
  Cc: gregkh, joel, mpe, joe, brakmo, rostedt, Tim.Bird, khilman,
	julia.lawall, linux-kselftest, kunit-dev, linux-kernel, jdike,
	richard, linux-um, daniel, dri-devel, dan.j.williams,
	linux-nvdimm, knut.omang, devicetree, pmladek, Alexander.Levin,
	amir73il, dan.carpenter, wfg, Brendan Higgins

Add myself as maintainer of KUnit, the Linux kernel's unit testing
framework.

Signed-off-by: Brendan Higgins <brendanhiggins@google.com>
---
 MAINTAINERS | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index 8c68de3cfd80e..ff2cc9fcb49ad 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -8267,6 +8267,16 @@ S:	Maintained
 F:	tools/testing/selftests/
 F:	Documentation/dev-tools/kselftest*
 
+KERNEL UNIT TESTING FRAMEWORK (KUnit)
+M:	Brendan Higgins <brendanhiggins@google.com>
+L:	kunit-dev@googlegroups.com
+W:	https://google.github.io/kunit-docs/third_party/kernel/docs/
+S:	Maintained
+F:	Documentation/kunit/
+F:	include/kunit/
+F:	kunit/
+F:	tools/testing/kunit/
+
 KERNEL USERMODE HELPER
 M:	Luis Chamberlain <mcgrof@kernel.org>
 L:	linux-kernel@vger.kernel.org
-- 
2.21.0.rc0.258.g878e2cd30e-goog


^ permalink raw reply related	[flat|nested] 316+ messages in thread

* [RFC v4 14/17] MAINTAINERS: add entry for KUnit the unit testing framework
@ 2019-02-14 21:37     ` Brendan Higgins
  0 siblings, 0 replies; 316+ messages in thread
From: brendanhiggins @ 2019-02-14 21:37 UTC (permalink / raw)


Add myself as maintainer of KUnit, the Linux kernel's unit testing
framework.

Signed-off-by: Brendan Higgins <brendanhiggins at google.com>
---
 MAINTAINERS | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index 8c68de3cfd80e..ff2cc9fcb49ad 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -8267,6 +8267,16 @@ S:	Maintained
 F:	tools/testing/selftests/
 F:	Documentation/dev-tools/kselftest*
 
+KERNEL UNIT TESTING FRAMEWORK (KUnit)
+M:	Brendan Higgins <brendanhiggins at google.com>
+L:	kunit-dev at googlegroups.com
+W:	https://google.github.io/kunit-docs/third_party/kernel/docs/
+S:	Maintained
+F:	Documentation/kunit/
+F:	include/kunit/
+F:	kunit/
+F:	tools/testing/kunit/
+
 KERNEL USERMODE HELPER
 M:	Luis Chamberlain <mcgrof at kernel.org>
 L:	linux-kernel at vger.kernel.org
-- 
2.21.0.rc0.258.g878e2cd30e-goog

^ permalink raw reply related	[flat|nested] 316+ messages in thread

* [RFC v4 14/17] MAINTAINERS: add entry for KUnit the unit testing framework
@ 2019-02-14 21:37     ` Brendan Higgins
  0 siblings, 0 replies; 316+ messages in thread
From: Brendan Higgins @ 2019-02-14 21:37 UTC (permalink / raw)


Add myself as maintainer of KUnit, the Linux kernel's unit testing
framework.

Signed-off-by: Brendan Higgins <brendanhiggins at google.com>
---
 MAINTAINERS | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index 8c68de3cfd80e..ff2cc9fcb49ad 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -8267,6 +8267,16 @@ S:	Maintained
 F:	tools/testing/selftests/
 F:	Documentation/dev-tools/kselftest*
 
+KERNEL UNIT TESTING FRAMEWORK (KUnit)
+M:	Brendan Higgins <brendanhiggins at google.com>
+L:	kunit-dev at googlegroups.com
+W:	https://google.github.io/kunit-docs/third_party/kernel/docs/
+S:	Maintained
+F:	Documentation/kunit/
+F:	include/kunit/
+F:	kunit/
+F:	tools/testing/kunit/
+
 KERNEL USERMODE HELPER
 M:	Luis Chamberlain <mcgrof at kernel.org>
 L:	linux-kernel at vger.kernel.org
-- 
2.21.0.rc0.258.g878e2cd30e-goog

^ permalink raw reply related	[flat|nested] 316+ messages in thread

* [RFC v4 14/17] MAINTAINERS: add entry for KUnit the unit testing framework
@ 2019-02-14 21:37     ` Brendan Higgins
  0 siblings, 0 replies; 316+ messages in thread
From: Brendan Higgins @ 2019-02-14 21:37 UTC (permalink / raw)
  To: keescook, mcgrof, shuah, robh, kieran.bingham, frowand.list
  Cc: brakmo, pmladek, amir73il, Brendan Higgins, dri-devel,
	Alexander.Levin, linux-kselftest, linux-nvdimm, richard,
	knut.omang, wfg, joel, jdike, dan.carpenter, devicetree,
	Tim.Bird, linux-um, rostedt, julia.lawall, dan.j.williams,
	kunit-dev, gregkh, linux-kernel, daniel, mpe, joe, khilman

Add myself as maintainer of KUnit, the Linux kernel's unit testing
framework.

Signed-off-by: Brendan Higgins <brendanhiggins@google.com>
---
 MAINTAINERS | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index 8c68de3cfd80e..ff2cc9fcb49ad 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -8267,6 +8267,16 @@ S:	Maintained
 F:	tools/testing/selftests/
 F:	Documentation/dev-tools/kselftest*
 
+KERNEL UNIT TESTING FRAMEWORK (KUnit)
+M:	Brendan Higgins <brendanhiggins@google.com>
+L:	kunit-dev@googlegroups.com
+W:	https://google.github.io/kunit-docs/third_party/kernel/docs/
+S:	Maintained
+F:	Documentation/kunit/
+F:	include/kunit/
+F:	kunit/
+F:	tools/testing/kunit/
+
 KERNEL USERMODE HELPER
 M:	Luis Chamberlain <mcgrof@kernel.org>
 L:	linux-kernel@vger.kernel.org
-- 
2.21.0.rc0.258.g878e2cd30e-goog


_______________________________________________
linux-um mailing list
linux-um@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-um


^ permalink raw reply related	[flat|nested] 316+ messages in thread

* [RFC v4 15/17] of: unittest: migrate tests to run on KUnit
  2019-02-14 21:37 ` brendanhiggins
                       ` (2 preceding siblings ...)
  (?)
@ 2019-02-14 21:37     ` Brendan Higgins
  -1 siblings, 0 replies; 316+ messages in thread
From: Brendan Higgins @ 2019-02-14 21:37 UTC (permalink / raw)
  To: keescook-hpIqsD4AKlfQT0dZR+AlfA, mcgrof-DgEjT+Ai2ygdnm+yROfE0A,
	shuah-DgEjT+Ai2ygdnm+yROfE0A, robh-DgEjT+Ai2ygdnm+yROfE0A,
	kieran.bingham-ryLnwIuWjnjg/C1BVhZhaw,
	frowand.list-Re5JQEeQqe8AvxtiuMwx3w
  Cc: brakmo-b10kYP2dOMg, pmladek-IBi9RG/b67k,
	amir73il-Re5JQEeQqe8AvxtiuMwx3w, Brendan Higgins,
	dri-devel-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW,
	Alexander.Levin-0li6OtcxBFHby3iVrkZq2A,
	linux-kselftest-u79uwXL29TY76Z2rM5mHXA,
	linux-nvdimm-hn68Rpc1hR1g9hUCZPvPmw, richard-/L3Ra7n9ekc,
	knut.omang-QHcLZuEGTsvQT0dZR+AlfA, wfg-VuQAYsv1563Yd54FQh9/CA,
	joel-U3u1mxZcP9KHXe+LvDLADg, jdike-OPE4K8JWMJJBDgjK7y7TUQ,
	dan.carpenter-QHcLZuEGTsvQT0dZR+AlfA,
	devicetree-u79uwXL29TY76Z2rM5mHXA, Tim.Bird-7U/KSKJipcs,
	linux-um-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r,
	rostedt-nx8X9YLhiw1AfugRpC6u6w, julia.lawall-L2FTfq7BK8M,
	kunit-dev-/JYPxA39Uh5TLH3MbocFFw,
	gregkh-hQyY1W1yCW8ekmWlsbkhG0B+6BGkLq7r,
	linux-kernel-u79uwXL29TY76Z2rM5mHXA, daniel-/w4YWyX8dFk,
	mpe-Gsx/Oe8HsFggBc27wqDAHg, joe-6d6DIl74uiNBDgjK7y7TUQ,
	khilman-rdvid1DuHRBWk0Htik3J/w

Migrate tests, without any cleanup or modifying test logic in any way, to
run under KUnit using the KUnit expectation and assertion API.

Signed-off-by: Brendan Higgins <brendanhiggins-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org>
---
 drivers/of/Kconfig    |    1 +
 drivers/of/unittest.c | 1310 +++++++++++++++++++++--------------------
 2 files changed, 671 insertions(+), 640 deletions(-)

diff --git a/drivers/of/Kconfig b/drivers/of/Kconfig
index ad3fcad4d75b8..f309399deac20 100644
--- a/drivers/of/Kconfig
+++ b/drivers/of/Kconfig
@@ -15,6 +15,7 @@ if OF
 config OF_UNITTEST
 	bool "Device Tree runtime unit tests"
 	depends on !SPARC
+	depends on KUNIT
 	select IRQ_DOMAIN
 	select OF_EARLY_FLATTREE
 	select OF_RESOLVE
diff --git a/drivers/of/unittest.c b/drivers/of/unittest.c
index effa4e2b9d992..96de69ccb3e63 100644
--- a/drivers/of/unittest.c
+++ b/drivers/of/unittest.c
@@ -26,186 +26,189 @@
 
 #include <linux/bitops.h>
 
+#include <kunit/test.h>
+
 #include "of_private.h"
 
-static struct unittest_results {
-	int passed;
-	int failed;
-} unittest_results;
-
-#define unittest(result, fmt, ...) ({ \
-	bool failed = !(result); \
-	if (failed) { \
-		unittest_results.failed++; \
-		pr_err("FAIL %s():%i " fmt, __func__, __LINE__, ##__VA_ARGS__); \
-	} else { \
-		unittest_results.passed++; \
-		pr_debug("pass %s():%i\n", __func__, __LINE__); \
-	} \
-	failed; \
-})
-
-static void __init of_unittest_find_node_by_name(void)
+static void of_unittest_find_node_by_name(struct kunit *test)
 {
 	struct device_node *np;
 	const char *options, *name;
 
 	np = of_find_node_by_path("/testcase-data");
 	name = kasprintf(GFP_KERNEL, "%pOF", np);
-	unittest(np && !strcmp("/testcase-data", name),
-		"find /testcase-data failed\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+	KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
+			       "find /testcase-data failed\n");
 	of_node_put(np);
 	kfree(name);
 
 	/* Test if trailing '/' works */
-	np = of_find_node_by_path("/testcase-data/");
-	unittest(!np, "trailing '/' on /testcase-data/ should fail\n");
+	KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("/testcase-data/"), NULL,
+			    "trailing '/' on /testcase-data/ should fail\n");
 
 	np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
 	name = kasprintf(GFP_KERNEL, "%pOF", np);
-	unittest(np && !strcmp("/testcase-data/phandle-tests/consumer-a", name),
+	KUNIT_EXPECT_STREQ_MSG(
+		test, "/testcase-data/phandle-tests/consumer-a", name,
 		"find /testcase-data/phandle-tests/consumer-a failed\n");
 	of_node_put(np);
 	kfree(name);
 
 	np = of_find_node_by_path("testcase-alias");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
 	name = kasprintf(GFP_KERNEL, "%pOF", np);
-	unittest(np && !strcmp("/testcase-data", name),
-		"find testcase-alias failed\n");
+	KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
+			       "find testcase-alias failed\n");
 	of_node_put(np);
 	kfree(name);
 
 	/* Test if trailing '/' works on aliases */
-	np = of_find_node_by_path("testcase-alias/");
-	unittest(!np, "trailing '/' on testcase-alias/ should fail\n");
+	KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("testcase-alias/"), NULL,
+			    "trailing '/' on testcase-alias/ should fail\n");
 
 	np = of_find_node_by_path("testcase-alias/phandle-tests/consumer-a");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
 	name = kasprintf(GFP_KERNEL, "%pOF", np);
-	unittest(np && !strcmp("/testcase-data/phandle-tests/consumer-a", name),
+	KUNIT_EXPECT_STREQ_MSG(
+		test, "/testcase-data/phandle-tests/consumer-a", name,
 		"find testcase-alias/phandle-tests/consumer-a failed\n");
 	of_node_put(np);
 	kfree(name);
 
-	np = of_find_node_by_path("/testcase-data/missing-path");
-	unittest(!np, "non-existent path returned node %pOF\n", np);
+	KUNIT_EXPECT_EQ_MSG(
+		test,
+		np = of_find_node_by_path("/testcase-data/missing-path"), NULL,
+		"non-existent path returned node %pOF\n", np);
 	of_node_put(np);
 
-	np = of_find_node_by_path("missing-alias");
-	unittest(!np, "non-existent alias returned node %pOF\n", np);
+	KUNIT_EXPECT_EQ_MSG(
+		test, np = of_find_node_by_path("missing-alias"), NULL,
+		"non-existent alias returned node %pOF\n", np);
 	of_node_put(np);
 
-	np = of_find_node_by_path("testcase-alias/missing-path");
-	unittest(!np, "non-existent alias with relative path returned node %pOF\n", np);
+	KUNIT_EXPECT_EQ_MSG(
+		test,
+		np = of_find_node_by_path("testcase-alias/missing-path"), NULL,
+		"non-existent alias with relative path returned node %pOF\n",
+		np);
 	of_node_put(np);
 
 	np = of_find_node_opts_by_path("/testcase-data:testoption", &options);
-	unittest(np && !strcmp("testoption", options),
-		 "option path test failed\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+	KUNIT_EXPECT_STREQ_MSG(test, "testoption", options,
+			       "option path test failed\n");
 	of_node_put(np);
 
 	np = of_find_node_opts_by_path("/testcase-data:test/option", &options);
-	unittest(np && !strcmp("test/option", options),
-		 "option path test, subcase #1 failed\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+	KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
+			       "option path test, subcase #1 failed\n");
 	of_node_put(np);
 
 	np = of_find_node_opts_by_path("/testcase-data/testcase-device1:test/option", &options);
-	unittest(np && !strcmp("test/option", options),
-		 "option path test, subcase #2 failed\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+	KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
+			       "option path test, subcase #2 failed\n");
 	of_node_put(np);
 
 	np = of_find_node_opts_by_path("/testcase-data:testoption", NULL);
-	unittest(np, "NULL option path test failed\n");
+	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(test, np,
+					 "NULL option path test failed\n");
 	of_node_put(np);
 
 	np = of_find_node_opts_by_path("testcase-alias:testaliasoption",
 				       &options);
-	unittest(np && !strcmp("testaliasoption", options),
-		 "option alias path test failed\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+	KUNIT_EXPECT_STREQ_MSG(test, "testaliasoption", options,
+			       "option alias path test failed\n");
 	of_node_put(np);
 
 	np = of_find_node_opts_by_path("testcase-alias:test/alias/option",
 				       &options);
-	unittest(np && !strcmp("test/alias/option", options),
-		 "option alias path test, subcase #1 failed\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+	KUNIT_EXPECT_STREQ_MSG(test, "test/alias/option", options,
+			       "option alias path test, subcase #1 failed\n");
 	of_node_put(np);
 
 	np = of_find_node_opts_by_path("testcase-alias:testaliasoption", NULL);
-	unittest(np, "NULL option alias path test failed\n");
+	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(
+			test, np, "NULL option alias path test failed\n");
 	of_node_put(np);
 
 	options = "testoption";
 	np = of_find_node_opts_by_path("testcase-alias", &options);
-	unittest(np && !options, "option clearing test failed\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+	KUNIT_EXPECT_EQ_MSG(test, options, NULL,
+			    "option clearing test failed\n");
 	of_node_put(np);
 
 	options = "testoption";
 	np = of_find_node_opts_by_path("/", &options);
-	unittest(np && !options, "option clearing root node test failed\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+	KUNIT_EXPECT_EQ_MSG(test, options, NULL,
+			    "option clearing root node test failed\n");
 	of_node_put(np);
 }
 
-static void __init of_unittest_dynamic(void)
+static void of_unittest_dynamic(struct kunit *test)
 {
 	struct device_node *np;
 	struct property *prop;
 
 	np = of_find_node_by_path("/testcase-data");
-	if (!np) {
-		pr_err("missing testcase data\n");
-		return;
-	}
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
 
 	/* Array of 4 properties for the purpose of testing */
 	prop = kcalloc(4, sizeof(*prop), GFP_KERNEL);
-	if (!prop) {
-		unittest(0, "kzalloc() failed\n");
-		return;
-	}
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, prop);
 
 	/* Add a new property - should pass*/
 	prop->name = "new-property";
 	prop->value = "new-property-data";
 	prop->length = strlen(prop->value) + 1;
-	unittest(of_add_property(np, prop) == 0, "Adding a new property failed\n");
+	KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop), 0,
+			    "Adding a new property failed\n");
 
 	/* Try to add an existing property - should fail */
 	prop++;
 	prop->name = "new-property";
 	prop->value = "new-property-data-should-fail";
 	prop->length = strlen(prop->value) + 1;
-	unittest(of_add_property(np, prop) != 0,
-		 "Adding an existing property should have failed\n");
+	KUNIT_EXPECT_NE_MSG(test, of_add_property(np, prop), 0,
+			    "Adding an existing property should have failed\n");
 
 	/* Try to modify an existing property - should pass */
 	prop->value = "modify-property-data-should-pass";
 	prop->length = strlen(prop->value) + 1;
-	unittest(of_update_property(np, prop) == 0,
-		 "Updating an existing property should have passed\n");
+	KUNIT_EXPECT_EQ_MSG(
+		test, of_update_property(np, prop), 0,
+		"Updating an existing property should have passed\n");
 
 	/* Try to modify non-existent property - should pass*/
 	prop++;
 	prop->name = "modify-property";
 	prop->value = "modify-missing-property-data-should-pass";
 	prop->length = strlen(prop->value) + 1;
-	unittest(of_update_property(np, prop) == 0,
-		 "Updating a missing property should have passed\n");
+	KUNIT_EXPECT_EQ_MSG(test, of_update_property(np, prop), 0,
+			    "Updating a missing property should have passed\n");
 
 	/* Remove property - should pass */
-	unittest(of_remove_property(np, prop) == 0,
-		 "Removing a property should have passed\n");
+	KUNIT_EXPECT_EQ_MSG(test, of_remove_property(np, prop), 0,
+			    "Removing a property should have passed\n");
 
 	/* Adding very large property - should pass */
 	prop++;
 	prop->name = "large-property-PAGE_SIZEx8";
 	prop->length = PAGE_SIZE * 8;
 	prop->value = kzalloc(prop->length, GFP_KERNEL);
-	unittest(prop->value != NULL, "Unable to allocate large buffer\n");
-	if (prop->value)
-		unittest(of_add_property(np, prop) == 0,
-			 "Adding a large property should have passed\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, prop->value);
+	KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop), 0,
+			    "Adding a large property should have passed\n");
 }
 
-static int __init of_unittest_check_node_linkage(struct device_node *np)
+static int of_unittest_check_node_linkage(struct device_node *np)
 {
 	struct device_node *child;
 	int count = 0, rc;
@@ -230,27 +233,30 @@ static int __init of_unittest_check_node_linkage(struct device_node *np)
 	return rc;
 }
 
-static void __init of_unittest_check_tree_linkage(void)
+static void of_unittest_check_tree_linkage(struct kunit *test)
 {
 	struct device_node *np;
 	int allnode_count = 0, child_count;
 
-	if (!of_root)
-		return;
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, of_root);
 
 	for_each_of_allnodes(np)
 		allnode_count++;
 	child_count = of_unittest_check_node_linkage(of_root);
 
-	unittest(child_count > 0, "Device node data structure is corrupted\n");
-	unittest(child_count == allnode_count,
-		 "allnodes list size (%i) doesn't match sibling lists size (%i)\n",
-		 allnode_count, child_count);
+	KUNIT_EXPECT_GT_MSG(test, child_count, 0,
+			    "Device node data structure is corrupted\n");
+	KUNIT_EXPECT_EQ_MSG(
+		test, child_count, allnode_count,
+		"allnodes list size (%i) doesn't match sibling lists size (%i)\n",
+		allnode_count, child_count);
 	pr_debug("allnodes list size (%i); sibling lists size (%i)\n", allnode_count, child_count);
 }
 
-static void __init of_unittest_printf_one(struct device_node *np, const char *fmt,
-					  const char *expected)
+static void of_unittest_printf_one(struct kunit *test,
+				   struct device_node *np,
+				   const char *fmt,
+				   const char *expected)
 {
 	unsigned char *buf;
 	int buf_size;
@@ -265,8 +271,12 @@ static void __init of_unittest_printf_one(struct device_node *np, const char *fm
 	memset(buf, 0xff, buf_size);
 	size = snprintf(buf, buf_size - 2, fmt, np);
 
-	/* use strcmp() instead of strncmp() here to be absolutely sure strings match */
-	unittest((strcmp(buf, expected) == 0) && (buf[size+1] == 0xff),
+	KUNIT_EXPECT_STREQ_MSG(
+		test, buf, expected,
+		"sprintf failed; fmt='%s' expected='%s' rslt='%s'\n",
+		fmt, expected, buf);
+	KUNIT_EXPECT_EQ_MSG(
+		test, buf[size+1], 0xff,
 		"sprintf failed; fmt='%s' expected='%s' rslt='%s'\n",
 		fmt, expected, buf);
 
@@ -276,44 +286,49 @@ static void __init of_unittest_printf_one(struct device_node *np, const char *fm
 		/* Clear the buffer, and make sure it works correctly still */
 		memset(buf, 0xff, buf_size);
 		snprintf(buf, size+1, fmt, np);
-		unittest(strncmp(buf, expected, size) == 0 && (buf[size+1] == 0xff),
+		KUNIT_EXPECT_STREQ_MSG(
+			test, buf, expected,
+			"snprintf failed; size=%i fmt='%s' expected='%s' rslt='%s'\n",
+			size, fmt, expected, buf);
+		KUNIT_EXPECT_EQ_MSG(
+			test, buf[size+1], 0xff,
 			"snprintf failed; size=%i fmt='%s' expected='%s' rslt='%s'\n",
 			size, fmt, expected, buf);
 	}
 	kfree(buf);
 }
 
-static void __init of_unittest_printf(void)
+static void of_unittest_printf(struct kunit *test)
 {
 	struct device_node *np;
 	const char *full_name = "/testcase-data/platform-tests/test-device@1/dev@100";
 	char phandle_str[16] = "";
 
 	np = of_find_node_by_path(full_name);
-	if (!np) {
-		unittest(np, "testcase data missing\n");
-		return;
-	}
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
 
 	num_to_str(phandle_str, sizeof(phandle_str), np->phandle, 0);
 
-	of_unittest_printf_one(np, "%pOF",  full_name);
-	of_unittest_printf_one(np, "%pOFf", full_name);
-	of_unittest_printf_one(np, "%pOFn", "dev");
-	of_unittest_printf_one(np, "%2pOFn", "dev");
-	of_unittest_printf_one(np, "%5pOFn", "  dev");
-	of_unittest_printf_one(np, "%pOFnc", "dev:test-sub-device");
-	of_unittest_printf_one(np, "%pOFp", phandle_str);
-	of_unittest_printf_one(np, "%pOFP", "dev@100");
-	of_unittest_printf_one(np, "ABC %pOFP ABC", "ABC dev@100 ABC");
-	of_unittest_printf_one(np, "%10pOFP", "   dev@100");
-	of_unittest_printf_one(np, "%-10pOFP", "dev@100   ");
-	of_unittest_printf_one(of_root, "%pOFP", "/");
-	of_unittest_printf_one(np, "%pOFF", "----");
-	of_unittest_printf_one(np, "%pOFPF", "dev@100:----");
-	of_unittest_printf_one(np, "%pOFPFPc", "dev@100:----:dev@100:test-sub-device");
-	of_unittest_printf_one(np, "%pOFc", "test-sub-device");
-	of_unittest_printf_one(np, "%pOFC",
+	of_unittest_printf_one(test, np, "%pOF",  full_name);
+	of_unittest_printf_one(test, np, "%pOFf", full_name);
+	of_unittest_printf_one(test, np, "%pOFn", "dev");
+	of_unittest_printf_one(test, np, "%2pOFn", "dev");
+	of_unittest_printf_one(test, np, "%5pOFn", "  dev");
+	of_unittest_printf_one(test, np, "%pOFnc", "dev:test-sub-device");
+	of_unittest_printf_one(test, np, "%pOFp", phandle_str);
+	of_unittest_printf_one(test, np, "%pOFP", "dev@100");
+	of_unittest_printf_one(test, np, "ABC %pOFP ABC", "ABC dev@100 ABC");
+	of_unittest_printf_one(test, np, "%10pOFP", "   dev@100");
+	of_unittest_printf_one(test, np, "%-10pOFP", "dev@100   ");
+	of_unittest_printf_one(test, of_root, "%pOFP", "/");
+	of_unittest_printf_one(test, np, "%pOFF", "----");
+	of_unittest_printf_one(test, np, "%pOFPF", "dev@100:----");
+	of_unittest_printf_one(test,
+			       np,
+			       "%pOFPFPc",
+			       "dev@100:----:dev@100:test-sub-device");
+	of_unittest_printf_one(test, np, "%pOFc", "test-sub-device");
+	of_unittest_printf_one(test, np, "%pOFC",
 			"\"test-sub-device\",\"test-compat2\",\"test-compat3\"");
 }
 
@@ -323,7 +338,7 @@ struct node_hash {
 };
 
 static DEFINE_HASHTABLE(phandle_ht, 8);
-static void __init of_unittest_check_phandles(void)
+static void of_unittest_check_phandles(struct kunit *test)
 {
 	struct device_node *np;
 	struct node_hash *nh;
@@ -335,24 +350,26 @@ static void __init of_unittest_check_phandles(void)
 			continue;
 
 		hash_for_each_possible(phandle_ht, nh, node, np->phandle) {
+			KUNIT_EXPECT_NE_MSG(
+				test, nh->np->phandle, np->phandle,
+				"Duplicate phandle! %i used by %pOF and %pOF\n",
+				np->phandle, nh->np, np);
 			if (nh->np->phandle == np->phandle) {
-				pr_info("Duplicate phandle! %i used by %pOF and %pOF\n",
-					np->phandle, nh->np, np);
 				dup_count++;
 				break;
 			}
 		}
 
 		nh = kzalloc(sizeof(*nh), GFP_KERNEL);
-		if (WARN_ON(!nh))
-			return;
+		KUNIT_ASSERT_NOT_ERR_OR_NULL(test, nh);
 
 		nh->np = np;
 		hash_add(phandle_ht, &nh->node, np->phandle);
 		phandle_count++;
 	}
-	unittest(dup_count == 0, "Found %i duplicates in %i phandles\n",
-		 dup_count, phandle_count);
+	KUNIT_EXPECT_EQ_MSG(test, dup_count, 0,
+			    "Found %i duplicates in %i phandles\n",
+			    dup_count, phandle_count);
 
 	/* Clean up */
 	hash_for_each_safe(phandle_ht, i, tmp, nh, node) {
@@ -361,20 +378,21 @@ static void __init of_unittest_check_phandles(void)
 	}
 }
 
-static void __init of_unittest_parse_phandle_with_args(void)
+static void of_unittest_parse_phandle_with_args(struct kunit *test)
 {
 	struct device_node *np;
 	struct of_phandle_args args;
-	int i, rc;
+	int i, rc = 0;
 
 	np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
-	if (!np) {
-		pr_err("missing testcase data\n");
-		return;
-	}
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
 
-	rc = of_count_phandle_with_args(np, "phandle-list", "#phandle-cells");
-	unittest(rc == 7, "of_count_phandle_with_args() returned %i, expected 7\n", rc);
+	rc = of_count_phandle_with_args(np, "phandle-list", "#phandle-cells");
+	KUNIT_EXPECT_EQ_MSG(
+		test, rc, 7,
+		"of_count_phandle_with_args() returned %i, expected 7\n", rc);
 
 	for (i = 0; i < 8; i++) {
 		bool passed = true;
@@ -428,85 +446,91 @@ static void __init of_unittest_parse_phandle_with_args(void)
 			passed = false;
 		}
 
-		unittest(passed, "index %i - data error on node %pOF rc=%i\n",
-			 i, args.np, rc);
+		KUNIT_EXPECT_TRUE_MSG(
+			test, passed,
+			"index %i - data error on node %pOF rc=%i\n",
+			i, args.np, rc);
 	}
 
 	/* Check for missing list property */
 	memset(&args, 0, sizeof(args));
-	rc = of_parse_phandle_with_args(np, "phandle-list-missing",
-					"#phandle-cells", 0, &args);
-	unittest(rc == -ENOENT, "expected:%i got:%i\n", -ENOENT, rc);
-	rc = of_count_phandle_with_args(np, "phandle-list-missing",
-					"#phandle-cells");
-	unittest(rc == -ENOENT, "expected:%i got:%i\n", -ENOENT, rc);
+	KUNIT_EXPECT_EQ(
+		test,
+		of_parse_phandle_with_args(
+			np, "phandle-list-missing", "#phandle-cells", 0, &args),
+		-ENOENT);
+	KUNIT_EXPECT_EQ(
+		test,
+		of_count_phandle_with_args(
+			np, "phandle-list-missing", "#phandle-cells"),
+		-ENOENT);
 
 	/* Check for missing cells property */
 	memset(&args, 0, sizeof(args));
-	rc = of_parse_phandle_with_args(np, "phandle-list",
-					"#phandle-cells-missing", 0, &args);
-	unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
-	rc = of_count_phandle_with_args(np, "phandle-list",
-					"#phandle-cells-missing");
-	unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
+	KUNIT_EXPECT_EQ(
+		test,
+		of_parse_phandle_with_args(
+			np, "phandle-list", "#phandle-cells-missing", 0, &args),
+		-EINVAL);
+	KUNIT_EXPECT_EQ(
+		test,
+		of_count_phandle_with_args(
+			np, "phandle-list", "#phandle-cells-missing"),
+		-EINVAL);
 
 	/* Check for bad phandle in list */
 	memset(&args, 0, sizeof(args));
-	rc = of_parse_phandle_with_args(np, "phandle-list-bad-phandle",
-					"#phandle-cells", 0, &args);
-	unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
-	rc = of_count_phandle_with_args(np, "phandle-list-bad-phandle",
-					"#phandle-cells");
-	unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
+	KUNIT_EXPECT_EQ(
+		test,
+		of_parse_phandle_with_args(np, "phandle-list-bad-phandle",
+					   "#phandle-cells", 0, &args),
+		-EINVAL);
+	KUNIT_EXPECT_EQ(
+		test,
+		of_count_phandle_with_args(
+			np, "phandle-list-bad-phandle", "#phandle-cells"),
+		-EINVAL);
 
 	/* Check for incorrectly formed argument list */
 	memset(&args, 0, sizeof(args));
-	rc = of_parse_phandle_with_args(np, "phandle-list-bad-args",
-					"#phandle-cells", 1, &args);
-	unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
-	rc = of_count_phandle_with_args(np, "phandle-list-bad-args",
-					"#phandle-cells");
-	unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
+	KUNIT_EXPECT_EQ(
+		test,
+		of_parse_phandle_with_args(np, "phandle-list-bad-args",
+					   "#phandle-cells", 1, &args),
+		-EINVAL);
+	KUNIT_EXPECT_EQ(
+		test,
+		of_count_phandle_with_args(
+			np, "phandle-list-bad-args", "#phandle-cells"),
+		-EINVAL);
 }
 
-static void __init of_unittest_parse_phandle_with_args_map(void)
+static void of_unittest_parse_phandle_with_args_map(struct kunit *test)
 {
 	struct device_node *np, *p0, *p1, *p2, *p3;
 	struct of_phandle_args args;
 	int i, rc;
 
 	np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-b");
-	if (!np) {
-		pr_err("missing testcase data\n");
-		return;
-	}
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
 
 	p0 = of_find_node_by_path("/testcase-data/phandle-tests/provider0");
-	if (!p0) {
-		pr_err("missing testcase data\n");
-		return;
-	}
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, p0);
 
 	p1 = of_find_node_by_path("/testcase-data/phandle-tests/provider1");
-	if (!p1) {
-		pr_err("missing testcase data\n");
-		return;
-	}
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, p1);
 
 	p2 = of_find_node_by_path("/testcase-data/phandle-tests/provider2");
-	if (!p2) {
-		pr_err("missing testcase data\n");
-		return;
-	}
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, p2);
 
 	p3 = of_find_node_by_path("/testcase-data/phandle-tests/provider3");
-	if (!p3) {
-		pr_err("missing testcase data\n");
-		return;
-	}
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, p3);
 
-	rc = of_count_phandle_with_args(np, "phandle-list", "#phandle-cells");
-	unittest(rc == 7, "of_count_phandle_with_args() returned %i, expected 7\n", rc);
+	KUNIT_EXPECT_EQ(test,
+		       of_count_phandle_with_args(np,
+						  "phandle-list",
+						  "#phandle-cells"),
+		       7);
 
 	for (i = 0; i < 8; i++) {
 		bool passed = true;
@@ -564,121 +588,186 @@ static void __init of_unittest_parse_phandle_with_args_map(void)
 			passed = false;
 		}
 
-		unittest(passed, "index %i - data error on node %s rc=%i\n",
-			 i, args.np->full_name, rc);
+		KUNIT_EXPECT_TRUE_MSG(
+			test, passed,
+			"index %i - data error on node %s rc=%i\n",
+			i, (args.np ? args.np->full_name : "missing np"), rc);
 	}
 
 	/* Check for missing list property */
 	memset(&args, 0, sizeof(args));
-	rc = of_parse_phandle_with_args_map(np, "phandle-list-missing",
-					    "phandle", 0, &args);
-	unittest(rc == -ENOENT, "expected:%i got:%i\n", -ENOENT, rc);
+	KUNIT_EXPECT_EQ(
+		test,
+		of_parse_phandle_with_args_map(
+			np, "phandle-list-missing", "phandle", 0, &args),
+		-ENOENT);
 
 	/* Check for missing cells,map,mask property */
 	memset(&args, 0, sizeof(args));
-	rc = of_parse_phandle_with_args_map(np, "phandle-list",
-					    "phandle-missing", 0, &args);
-	unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
+	KUNIT_EXPECT_EQ(
+		test,
+		of_parse_phandle_with_args_map(
+			np, "phandle-list", "phandle-missing", 0, &args),
+		-EINVAL);
 
 	/* Check for bad phandle in list */
 	memset(&args, 0, sizeof(args));
-	rc = of_parse_phandle_with_args_map(np, "phandle-list-bad-phandle",
-					    "phandle", 0, &args);
-	unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
+	KUNIT_EXPECT_EQ(
+		test,
+		of_parse_phandle_with_args_map(
+			np, "phandle-list-bad-phandle", "phandle", 0, &args),
+		-EINVAL);
 
 	/* Check for incorrectly formed argument list */
 	memset(&args, 0, sizeof(args));
-	rc = of_parse_phandle_with_args_map(np, "phandle-list-bad-args",
-					    "phandle", 1, &args);
-	unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
+	KUNIT_EXPECT_EQ(
+		test,
+		of_parse_phandle_with_args_map(
+			np, "phandle-list-bad-args", "phandle", 1, &args),
+		-EINVAL);
 }
 
-static void __init of_unittest_property_string(void)
+static void of_unittest_property_string(struct kunit *test)
 {
 	const char *strings[4];
 	struct device_node *np;
 	int rc;
 
 	np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
-	if (!np) {
-		pr_err("No testcase data in device tree\n");
-		return;
-	}
-
-	rc = of_property_match_string(np, "phandle-list-names", "first");
-	unittest(rc == 0, "first expected:0 got:%i\n", rc);
-	rc = of_property_match_string(np, "phandle-list-names", "second");
-	unittest(rc == 1, "second expected:1 got:%i\n", rc);
-	rc = of_property_match_string(np, "phandle-list-names", "third");
-	unittest(rc == 2, "third expected:2 got:%i\n", rc);
-	rc = of_property_match_string(np, "phandle-list-names", "fourth");
-	unittest(rc == -ENODATA, "unmatched string; rc=%i\n", rc);
-	rc = of_property_match_string(np, "missing-property", "blah");
-	unittest(rc == -EINVAL, "missing property; rc=%i\n", rc);
-	rc = of_property_match_string(np, "empty-property", "blah");
-	unittest(rc == -ENODATA, "empty property; rc=%i\n", rc);
-	rc = of_property_match_string(np, "unterminated-string", "blah");
-	unittest(rc == -EILSEQ, "unterminated string; rc=%i\n", rc);
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+
+	KUNIT_EXPECT_EQ(
+		test,
+		of_property_match_string(np, "phandle-list-names", "first"),
+		0);
+	KUNIT_EXPECT_EQ(
+		test,
+		of_property_match_string(np, "phandle-list-names", "second"),
+		1);
+	KUNIT_EXPECT_EQ(
+		test,
+		of_property_match_string(np, "phandle-list-names", "third"),
+		2);
+	KUNIT_EXPECT_EQ_MSG(
+		test,
+		of_property_match_string(np, "phandle-list-names", "fourth"),
+		-ENODATA,
+		"unmatched string");
+	KUNIT_EXPECT_EQ_MSG(
+		test,
+		of_property_match_string(np, "missing-property", "blah"),
+		-EINVAL,
+		"missing property");
+	KUNIT_EXPECT_EQ_MSG(
+		test,
+		of_property_match_string(np, "empty-property", "blah"),
+		-ENODATA,
+		"empty property");
+	KUNIT_EXPECT_EQ_MSG(
+		test,
+		of_property_match_string(np, "unterminated-string", "blah"),
+		-EILSEQ,
+		"unterminated string");
 
 	/* of_property_count_strings() tests */
-	rc = of_property_count_strings(np, "string-property");
-	unittest(rc == 1, "Incorrect string count; rc=%i\n", rc);
-	rc = of_property_count_strings(np, "phandle-list-names");
-	unittest(rc == 3, "Incorrect string count; rc=%i\n", rc);
-	rc = of_property_count_strings(np, "unterminated-string");
-	unittest(rc == -EILSEQ, "unterminated string; rc=%i\n", rc);
-	rc = of_property_count_strings(np, "unterminated-string-list");
-	unittest(rc == -EILSEQ, "unterminated string array; rc=%i\n", rc);
+	KUNIT_EXPECT_EQ(test,
+			of_property_count_strings(np, "string-property"), 1);
+	KUNIT_EXPECT_EQ(test,
+			of_property_count_strings(np, "phandle-list-names"), 3);
+	KUNIT_EXPECT_EQ_MSG(
+		test,
+		of_property_count_strings(np, "unterminated-string"), -EILSEQ,
+		"unterminated string");
+	KUNIT_EXPECT_EQ_MSG(
+		test,
+		of_property_count_strings(np, "unterminated-string-list"),
+		-EILSEQ,
+		"unterminated string array");
 
 	/* of_property_read_string_index() tests */
 	rc = of_property_read_string_index(np, "string-property", 0, strings);
-	unittest(rc == 0 && !strcmp(strings[0], "foobar"), "of_property_read_string_index() failure; rc=%i\n", rc);
+	KUNIT_ASSERT_EQ(test, rc, 0);
+	KUNIT_EXPECT_STREQ(test, strings[0], "foobar");
+
 	strings[0] = NULL;
 	rc = of_property_read_string_index(np, "string-property", 1, strings);
-	unittest(rc == -ENODATA && strings[0] == NULL, "of_property_read_string_index() failure; rc=%i\n", rc);
+	KUNIT_EXPECT_EQ(test, rc, -ENODATA);
+	KUNIT_EXPECT_EQ(test, strings[0], NULL);
+
 	rc = of_property_read_string_index(np, "phandle-list-names", 0, strings);
-	unittest(rc == 0 && !strcmp(strings[0], "first"), "of_property_read_string_index() failure; rc=%i\n", rc);
+	KUNIT_ASSERT_EQ(test, rc, 0);
+	KUNIT_EXPECT_STREQ(test, strings[0], "first");
+
 	rc = of_property_read_string_index(np, "phandle-list-names", 1, strings);
-	unittest(rc == 0 && !strcmp(strings[0], "second"), "of_property_read_string_index() failure; rc=%i\n", rc);
+	KUNIT_ASSERT_EQ(test, rc, 0);
+	KUNIT_EXPECT_STREQ(test, strings[0], "second");
+
 	rc = of_property_read_string_index(np, "phandle-list-names", 2, strings);
-	unittest(rc == 0 && !strcmp(strings[0], "third"), "of_property_read_string_index() failure; rc=%i\n", rc);
+	KUNIT_ASSERT_EQ(test, rc, 0);
+	KUNIT_EXPECT_STREQ(test, strings[0], "third");
+
 	strings[0] = NULL;
 	rc = of_property_read_string_index(np, "phandle-list-names", 3, strings);
-	unittest(rc == -ENODATA && strings[0] == NULL, "of_property_read_string_index() failure; rc=%i\n", rc);
+	KUNIT_EXPECT_EQ(test, rc, -ENODATA);
+	KUNIT_EXPECT_EQ(test, strings[0], NULL);
+
 	strings[0] = NULL;
 	rc = of_property_read_string_index(np, "unterminated-string", 0, strings);
-	unittest(rc == -EILSEQ && strings[0] == NULL, "of_property_read_string_index() failure; rc=%i\n", rc);
+	KUNIT_EXPECT_EQ(test, rc, -EILSEQ);
+	KUNIT_EXPECT_EQ(test, strings[0], NULL);
+
 	rc = of_property_read_string_index(np, "unterminated-string-list", 0, strings);
-	unittest(rc == 0 && !strcmp(strings[0], "first"), "of_property_read_string_index() failure; rc=%i\n", rc);
+	KUNIT_ASSERT_EQ(test, rc, 0);
+	KUNIT_EXPECT_STREQ(test, strings[0], "first");
+
 	strings[0] = NULL;
 	rc = of_property_read_string_index(np, "unterminated-string-list", 2, strings); /* should fail */
-	unittest(rc == -EILSEQ && strings[0] == NULL, "of_property_read_string_index() failure; rc=%i\n", rc);
-	strings[1] = NULL;
+	KUNIT_EXPECT_EQ(test, rc, -EILSEQ);
+	KUNIT_EXPECT_EQ(test, strings[0], NULL);
 
 	/* of_property_read_string_array() tests */
-	rc = of_property_read_string_array(np, "string-property", strings, 4);
-	unittest(rc == 1, "Incorrect string count; rc=%i\n", rc);
-	rc = of_property_read_string_array(np, "phandle-list-names", strings, 4);
-	unittest(rc == 3, "Incorrect string count; rc=%i\n", rc);
-	rc = of_property_read_string_array(np, "unterminated-string", strings, 4);
-	unittest(rc == -EILSEQ, "unterminated string; rc=%i\n", rc);
+	strings[1] = NULL;
+	KUNIT_EXPECT_EQ(
+		test,
+		of_property_read_string_array(
+			np, "string-property", strings, 4),
+		1);
+	KUNIT_EXPECT_EQ(
+		test,
+		of_property_read_string_array(
+			np, "phandle-list-names", strings, 4),
+		3);
+	KUNIT_EXPECT_EQ_MSG(
+		test,
+		of_property_read_string_array(
+			np, "unterminated-string", strings, 4),
+		-EILSEQ,
+		"unterminated string");
 	/* -- An incorrectly formed string should cause a failure */
-	rc = of_property_read_string_array(np, "unterminated-string-list", strings, 4);
-	unittest(rc == -EILSEQ, "unterminated string array; rc=%i\n", rc);
+	KUNIT_EXPECT_EQ_MSG(
+		test,
+		of_property_read_string_array(
+			np, "unterminated-string-list", strings, 4),
+		-EILSEQ,
+		"unterminated string array");
 	/* -- parsing the correctly formed strings should still work: */
 	strings[2] = NULL;
 	rc = of_property_read_string_array(np, "unterminated-string-list", strings, 2);
-	unittest(rc == 2 && strings[2] == NULL, "of_property_read_string_array() failure; rc=%i\n", rc);
+	KUNIT_EXPECT_EQ(test, rc, 2);
+	KUNIT_EXPECT_EQ(test, strings[2], NULL);
+
 	strings[1] = NULL;
 	rc = of_property_read_string_array(np, "phandle-list-names", strings, 1);
-	unittest(rc == 1 && strings[1] == NULL, "Overwrote end of string array; rc=%i, str='%s'\n", rc, strings[1]);
+	KUNIT_ASSERT_EQ(test, rc, 1);
+	KUNIT_EXPECT_EQ_MSG(test, strings[1], NULL,
+			    "Overwrote end of string array");
 }
 
 #define propcmp(p1, p2) (((p1)->length == (p2)->length) && \
 			(p1)->value && (p2)->value && \
 			!memcmp((p1)->value, (p2)->value, (p1)->length) && \
 			!strcmp((p1)->name, (p2)->name))
-static void __init of_unittest_property_copy(void)
+static void of_unittest_property_copy(struct kunit *test)
 {
 #ifdef CONFIG_OF_DYNAMIC
 	struct property p1 = { .name = "p1", .length = 0, .value = "" };
@@ -686,20 +775,24 @@ static void __init of_unittest_property_copy(void)
 	struct property *new;
 
 	new = __of_prop_dup(&p1, GFP_KERNEL);
-	unittest(new && propcmp(&p1, new), "empty property didn't copy correctly\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, new);
+	KUNIT_EXPECT_TRUE_MSG(test, propcmp(&p1, new),
+			      "empty property didn't copy correctly");
 	kfree(new->value);
 	kfree(new->name);
 	kfree(new);
 
 	new = __of_prop_dup(&p2, GFP_KERNEL);
-	unittest(new && propcmp(&p2, new), "non-empty property didn't copy correctly\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, new);
+	KUNIT_EXPECT_TRUE_MSG(test, propcmp(&p2, new),
+			      "non-empty property didn't copy correctly");
 	kfree(new->value);
 	kfree(new->name);
 	kfree(new);
 #endif
 }
 
-static void __init of_unittest_changeset(void)
+static void of_unittest_changeset(struct kunit *test)
 {
 #ifdef CONFIG_OF_DYNAMIC
 	struct property *ppadd, padd = { .name = "prop-add", .length = 1, .value = "" };
@@ -712,32 +805,32 @@ static void __init of_unittest_changeset(void)
 	struct of_changeset chgset;
 
 	n1 = __of_node_dup(NULL, "n1");
-	unittest(n1, "testcase setup failure\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, n1);
 
 	n2 = __of_node_dup(NULL, "n2");
-	unittest(n2, "testcase setup failure\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, n2);
 
 	n21 = __of_node_dup(NULL, "n21");
-	unittest(n21, "testcase setup failure %p\n", n21);
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, n21);
 
 	nchangeset = of_find_node_by_path("/testcase-data/changeset");
 	nremove = of_get_child_by_name(nchangeset, "node-remove");
-	unittest(nremove, "testcase setup failure\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, nremove);
 
 	ppadd = __of_prop_dup(&padd, GFP_KERNEL);
-	unittest(ppadd, "testcase setup failure\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ppadd);
 
 	ppname_n1  = __of_prop_dup(&pname_n1, GFP_KERNEL);
-	unittest(ppname_n1, "testcase setup failure\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ppname_n1);
 
 	ppname_n2  = __of_prop_dup(&pname_n2, GFP_KERNEL);
-	unittest(ppname_n2, "testcase setup failure\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ppname_n2);
 
 	ppname_n21 = __of_prop_dup(&pname_n21, GFP_KERNEL);
-	unittest(ppname_n21, "testcase setup failure\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ppname_n21);
 
 	ppupdate = __of_prop_dup(&pupdate, GFP_KERNEL);
-	unittest(ppupdate, "testcase setup failure\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ppupdate);
 
 	parent = nchangeset;
 	n1->parent = parent;
@@ -745,54 +838,72 @@ static void __init of_unittest_changeset(void)
 	n21->parent = n2;
 
 	ppremove = of_find_property(parent, "prop-remove", NULL);
-	unittest(ppremove, "failed to find removal prop");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ppremove);
 
 	of_changeset_init(&chgset);
 
-	unittest(!of_changeset_attach_node(&chgset, n1), "fail attach n1\n");
-	unittest(!of_changeset_add_property(&chgset, n1, ppname_n1), "fail add prop name\n");
-
-	unittest(!of_changeset_attach_node(&chgset, n2), "fail attach n2\n");
-	unittest(!of_changeset_add_property(&chgset, n2, ppname_n2), "fail add prop name\n");
-
-	unittest(!of_changeset_detach_node(&chgset, nremove), "fail remove node\n");
-	unittest(!of_changeset_add_property(&chgset, n21, ppname_n21), "fail add prop name\n");
-
-	unittest(!of_changeset_attach_node(&chgset, n21), "fail attach n21\n");
-
-	unittest(!of_changeset_add_property(&chgset, parent, ppadd), "fail add prop prop-add\n");
-	unittest(!of_changeset_update_property(&chgset, parent, ppupdate), "fail update prop\n");
-	unittest(!of_changeset_remove_property(&chgset, parent, ppremove), "fail remove prop\n");
-
-	unittest(!of_changeset_apply(&chgset), "apply failed\n");
+	KUNIT_EXPECT_FALSE_MSG(test, of_changeset_attach_node(&chgset, n1),
+			       "fail attach n1\n");
+	KUNIT_EXPECT_FALSE_MSG(
+		test, of_changeset_add_property(&chgset, n1, ppname_n1),
+		"fail add prop name\n");
+
+	KUNIT_EXPECT_FALSE_MSG(test, of_changeset_attach_node(&chgset, n2),
+			       "fail attach n2\n");
+	KUNIT_EXPECT_FALSE_MSG(
+		test, of_changeset_add_property(&chgset, n2, ppname_n2),
+		"fail add prop name\n");
+
+	KUNIT_EXPECT_FALSE_MSG(test, of_changeset_detach_node(&chgset, nremove),
+			       "fail remove node\n");
+	KUNIT_EXPECT_FALSE_MSG(
+		test, of_changeset_add_property(&chgset, n21, ppname_n21),
+		"fail add prop name\n");
+
+	KUNIT_EXPECT_FALSE_MSG(test, of_changeset_attach_node(&chgset, n21),
+			       "fail attach n21\n");
+
+	KUNIT_EXPECT_FALSE_MSG(
+		test,
+		of_changeset_add_property(&chgset, parent, ppadd),
+		"fail add prop prop-add\n");
+	KUNIT_EXPECT_FALSE_MSG(
+		test,
+		of_changeset_update_property(&chgset, parent, ppupdate),
+		"fail update prop\n");
+	KUNIT_EXPECT_FALSE_MSG(
+		test,
+		of_changeset_remove_property(&chgset, parent, ppremove),
+		"fail remove prop\n");
+
+	KUNIT_EXPECT_FALSE_MSG(test, of_changeset_apply(&chgset),
+			       "apply failed\n");
 
 	of_node_put(nchangeset);
 
 	/* Make sure node names are constructed correctly */
-	unittest((np = of_find_node_by_path("/testcase-data/changeset/n2/n21")),
-		 "'%pOF' not added\n", n21);
+	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(
+		test,
+		np = of_find_node_by_path("/testcase-data/changeset/n2/n21"),
+		"'%pOF' not added\n", n21);
 	of_node_put(np);
 
-	unittest(!of_changeset_revert(&chgset), "revert failed\n");
+	KUNIT_EXPECT_FALSE(test, of_changeset_revert(&chgset));
 
 	of_changeset_destroy(&chgset);
 #endif
 }
 
-static void __init of_unittest_parse_interrupts(void)
+static void of_unittest_parse_interrupts(struct kunit *test)
 {
 	struct device_node *np;
 	struct of_phandle_args args;
 	int i, rc;
 
-	if (of_irq_workarounds & OF_IMAP_OLDWORLD_MAC)
-		return;
+	KUNIT_ASSERT_FALSE(test, of_irq_workarounds & OF_IMAP_OLDWORLD_MAC);
 
 	np = of_find_node_by_path("/testcase-data/interrupts/interrupts0");
-	if (!np) {
-		pr_err("missing testcase data\n");
-		return;
-	}
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
 
 	for (i = 0; i < 4; i++) {
 		bool passed = true;
@@ -804,16 +915,15 @@ static void __init of_unittest_parse_interrupts(void)
 		passed &= (args.args_count == 1);
 		passed &= (args.args[0] == (i + 1));
 
-		unittest(passed, "index %i - data error on node %pOF rc=%i\n",
-			 i, args.np, rc);
+		KUNIT_EXPECT_TRUE_MSG(
+			test, passed,
+			"index %i - data error on node %pOF rc=%i\n",
+			i, args.np, rc);
 	}
 	of_node_put(np);
 
 	np = of_find_node_by_path("/testcase-data/interrupts/interrupts1");
-	if (!np) {
-		pr_err("missing testcase data\n");
-		return;
-	}
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
 
 	for (i = 0; i < 4; i++) {
 		bool passed = true;
@@ -850,26 +960,24 @@ static void __init of_unittest_parse_interrupts(void)
 		default:
 			passed = false;
 		}
-		unittest(passed, "index %i - data error on node %pOF rc=%i\n",
-			 i, args.np, rc);
+		KUNIT_EXPECT_TRUE_MSG(
+			test, passed,
+			"index %i - data error on node %pOF rc=%i\n",
+			i, args.np, rc);
 	}
 	of_node_put(np);
 }
 
-static void __init of_unittest_parse_interrupts_extended(void)
+static void of_unittest_parse_interrupts_extended(struct kunit *test)
 {
 	struct device_node *np;
 	struct of_phandle_args args;
 	int i, rc;
 
-	if (of_irq_workarounds & OF_IMAP_OLDWORLD_MAC)
-		return;
+	KUNIT_ASSERT_FALSE(test, of_irq_workarounds & OF_IMAP_OLDWORLD_MAC);
 
 	np = of_find_node_by_path("/testcase-data/interrupts/interrupts-extended0");
-	if (!np) {
-		pr_err("missing testcase data\n");
-		return;
-	}
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
 
 	for (i = 0; i < 7; i++) {
 		bool passed = true;
@@ -924,8 +1032,10 @@ static void __init of_unittest_parse_interrupts_extended(void)
 			passed = false;
 		}
 
-		unittest(passed, "index %i - data error on node %pOF rc=%i\n",
-			 i, args.np, rc);
+		KUNIT_EXPECT_TRUE_MSG(
+			test, passed,
+			"index %i - data error on node %pOF rc=%i\n",
+			i, args.np, rc);
 	}
 	of_node_put(np);
 }
@@ -965,7 +1075,7 @@ static struct {
 	{ .path = "/testcase-data/match-node/name9", .data = "K", },
 };
 
-static void __init of_unittest_match_node(void)
+static void of_unittest_match_node(struct kunit *test)
 {
 	struct device_node *np;
 	const struct of_device_id *match;
@@ -973,26 +1083,19 @@ static void __init of_unittest_match_node(void)
 
 	for (i = 0; i < ARRAY_SIZE(match_node_tests); i++) {
 		np = of_find_node_by_path(match_node_tests[i].path);
-		if (!np) {
-			unittest(0, "missing testcase node %s\n",
-				match_node_tests[i].path);
-			continue;
-		}
+		KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
 
 		match = of_match_node(match_node_table, np);
-		if (!match) {
-			unittest(0, "%s didn't match anything\n",
-				match_node_tests[i].path);
-			continue;
-		}
+		KUNIT_ASSERT_NOT_ERR_OR_NULL_MSG(test, match,
+						 "%s didn't match anything",
+						 match_node_tests[i].path);
 
-		if (strcmp(match->data, match_node_tests[i].data) != 0) {
-			unittest(0, "%s got wrong match. expected %s, got %s\n",
-				match_node_tests[i].path, match_node_tests[i].data,
-				(const char *)match->data);
-			continue;
-		}
-		unittest(1, "passed");
+		KUNIT_EXPECT_STREQ_MSG(
+			test,
+			match->data, match_node_tests[i].data,
+			"%s got wrong match. expected %s, got %s\n",
+			match_node_tests[i].path, match_node_tests[i].data,
+			(const char *)match->data);
 	}
 }
 
@@ -1004,9 +1107,9 @@ static struct resource test_bus_res = {
 static const struct platform_device_info test_bus_info = {
 	.name = "unittest-bus",
 };
-static void __init of_unittest_platform_populate(void)
+static void of_unittest_platform_populate(struct kunit *test)
 {
-	int irq, rc;
+	int irq;
 	struct device_node *np, *child, *grandchild;
 	struct platform_device *pdev, *test_bus;
 	const struct of_device_id match[] = {
@@ -1020,32 +1123,27 @@ static void __init of_unittest_platform_populate(void)
 	/* Test that a missing irq domain returns -EPROBE_DEFER */
 	np = of_find_node_by_path("/testcase-data/testcase-device1");
 	pdev = of_find_device_by_node(np);
-	unittest(pdev, "device 1 creation failed\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, pdev);
 
 	if (!(of_irq_workarounds & OF_IMAP_OLDWORLD_MAC)) {
 		irq = platform_get_irq(pdev, 0);
-		unittest(irq == -EPROBE_DEFER,
-			 "device deferred probe failed - %d\n", irq);
+		KUNIT_ASSERT_EQ(test, irq, -EPROBE_DEFER);
 
 		/* Test that a parsing failure does not return -EPROBE_DEFER */
 		np = of_find_node_by_path("/testcase-data/testcase-device2");
 		pdev = of_find_device_by_node(np);
-		unittest(pdev, "device 2 creation failed\n");
+		KUNIT_ASSERT_NOT_ERR_OR_NULL(test, pdev);
 		irq = platform_get_irq(pdev, 0);
-		unittest(irq < 0 && irq != -EPROBE_DEFER,
-			 "device parsing error failed - %d\n", irq);
+		KUNIT_ASSERT_TRUE_MSG(test, irq < 0 && irq != -EPROBE_DEFER,
+				      "device parsing error failed - %d\n",
+				      irq);
 	}
 
 	np = of_find_node_by_path("/testcase-data/platform-tests");
-	unittest(np, "No testcase data in device tree\n");
-	if (!np)
-		return;
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
 
 	test_bus = platform_device_register_full(&test_bus_info);
-	rc = PTR_ERR_OR_ZERO(test_bus);
-	unittest(!rc, "testbus registration failed; rc=%i\n", rc);
-	if (rc)
-		return;
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, test_bus);
 	test_bus->dev.of_node = np;
 
 	/*
@@ -1060,17 +1158,19 @@ static void __init of_unittest_platform_populate(void)
 	of_platform_populate(np, match, NULL, &test_bus->dev);
 	for_each_child_of_node(np, child) {
 		for_each_child_of_node(child, grandchild)
-			unittest(of_find_device_by_node(grandchild),
-				 "Could not create device for node '%pOFn'\n",
-				 grandchild);
+			KUNIT_EXPECT_TRUE_MSG(
+				test, of_find_device_by_node(grandchild),
+				"Could not create device for node '%pOFn'\n",
+				grandchild);
 	}
 
 	of_platform_depopulate(&test_bus->dev);
 	for_each_child_of_node(np, child) {
 		for_each_child_of_node(child, grandchild)
-			unittest(!of_find_device_by_node(grandchild),
-				 "device didn't get destroyed '%pOFn'\n",
-				 grandchild);
+			KUNIT_EXPECT_FALSE_MSG(
+				test, of_find_device_by_node(grandchild),
+				"device didn't get destroyed '%pOFn'\n",
+				grandchild);
 	}
 
 	platform_device_unregister(test_bus);
@@ -1171,7 +1271,7 @@ static void attach_node_and_children(struct device_node *np)
  *	unittest_data_add - Reads, copies data from
  *	linked tree and attaches it to the live tree
  */
-static int __init unittest_data_add(void)
+static int unittest_data_add(void)
 {
 	void *unittest_data;
 	struct device_node *unittest_data_node, *np;
@@ -1242,7 +1342,7 @@ static int __init unittest_data_add(void)
 }
 
 #ifdef CONFIG_OF_OVERLAY
-static int __init overlay_data_apply(const char *overlay_name, int *overlay_id);
+static int overlay_data_apply(const char *overlay_name, int *overlay_id);
 
 static int unittest_probe(struct platform_device *pdev)
 {
@@ -1471,172 +1571,146 @@ static void of_unittest_destroy_tracked_overlays(void)
 	} while (defers > 0);
 }
 
-static int __init of_unittest_apply_overlay(int overlay_nr, int *overlay_id)
+static int of_unittest_apply_overlay(struct kunit *test,
+				     int overlay_nr,
+				     int *overlay_id)
 {
 	const char *overlay_name;
 
 	overlay_name = overlay_name_from_nr(overlay_nr);
 
-	if (!overlay_data_apply(overlay_name, overlay_id)) {
-		unittest(0, "could not apply overlay \"%s\"\n",
-				overlay_name);
-		return -EFAULT;
-	}
+	KUNIT_ASSERT_TRUE_MSG(test,
+			      overlay_data_apply(overlay_name, overlay_id),
+			      "could not apply overlay \"%s\"\n", overlay_name);
 	of_unittest_track_overlay(*overlay_id);
 
 	return 0;
 }
 
 /* apply an overlay while checking before and after states */
-static int __init of_unittest_apply_overlay_check(int overlay_nr,
+static int of_unittest_apply_overlay_check(struct kunit *test, int overlay_nr,
 		int unittest_nr, int before, int after,
 		enum overlay_type ovtype)
 {
 	int ret, ovcs_id;
 
 	/* unittest device must not be in before state */
-	if (of_unittest_device_exists(unittest_nr, ovtype) != before) {
-		unittest(0, "%s with device @\"%s\" %s\n",
-				overlay_name_from_nr(overlay_nr),
-				unittest_path(unittest_nr, ovtype),
-				!before ? "enabled" : "disabled");
-		return -EINVAL;
-	}
+	KUNIT_ASSERT_EQ_MSG(
+		test, of_unittest_device_exists(unittest_nr, ovtype), before,
+		"%s with device @\"%s\" %s\n", overlay_name_from_nr(overlay_nr),
+		unittest_path(unittest_nr, ovtype),
+		!before ? "enabled" : "disabled");
 
 	ovcs_id = 0;
-	ret = of_unittest_apply_overlay(overlay_nr, &ovcs_id);
+	ret = of_unittest_apply_overlay(test, overlay_nr, &ovcs_id);
 	if (ret != 0) {
-		/* of_unittest_apply_overlay already called unittest() */
+		/* of_unittest_apply_overlay already set expectation */
 		return ret;
 	}
 
 	/* unittest device must be to set to after state */
-	if (of_unittest_device_exists(unittest_nr, ovtype) != after) {
-		unittest(0, "%s failed to create @\"%s\" %s\n",
-				overlay_name_from_nr(overlay_nr),
-				unittest_path(unittest_nr, ovtype),
-				!after ? "enabled" : "disabled");
-		return -EINVAL;
-	}
+	KUNIT_ASSERT_EQ_MSG(
+		test, of_unittest_device_exists(unittest_nr, ovtype), after,
+		"%s failed to create @\"%s\" %s\n",
+		overlay_name_from_nr(overlay_nr),
+		unittest_path(unittest_nr, ovtype),
+		!after ? "enabled" : "disabled");
 
 	return 0;
 }
 
 /* apply an overlay and then revert it while checking before, after states */
-static int __init of_unittest_apply_revert_overlay_check(int overlay_nr,
+static int of_unittest_apply_revert_overlay_check(
+		struct kunit *test, int overlay_nr,
 		int unittest_nr, int before, int after,
 		enum overlay_type ovtype)
 {
 	int ret, ovcs_id;
 
 	/* unittest device must be in before state */
-	if (of_unittest_device_exists(unittest_nr, ovtype) != before) {
-		unittest(0, "%s with device @\"%s\" %s\n",
-				overlay_name_from_nr(overlay_nr),
-				unittest_path(unittest_nr, ovtype),
-				!before ? "enabled" : "disabled");
-		return -EINVAL;
-	}
+	KUNIT_ASSERT_EQ_MSG(
+		test, of_unittest_device_exists(unittest_nr, ovtype), before,
+		"%s with device @\"%s\" %s\n", overlay_name_from_nr(overlay_nr),
+		unittest_path(unittest_nr, ovtype),
+		!before ? "enabled" : "disabled");
 
 	/* apply the overlay */
 	ovcs_id = 0;
-	ret = of_unittest_apply_overlay(overlay_nr, &ovcs_id);
+	ret = of_unittest_apply_overlay(test, overlay_nr, &ovcs_id);
 	if (ret != 0) {
-		/* of_unittest_apply_overlay already called unittest() */
+		/* of_unittest_apply_overlay already set expectation. */
 		return ret;
 	}
 
 	/* unittest device must be in after state */
-	if (of_unittest_device_exists(unittest_nr, ovtype) != after) {
-		unittest(0, "%s failed to create @\"%s\" %s\n",
-				overlay_name_from_nr(overlay_nr),
-				unittest_path(unittest_nr, ovtype),
-				!after ? "enabled" : "disabled");
-		return -EINVAL;
-	}
-
-	ret = of_overlay_remove(&ovcs_id);
-	if (ret != 0) {
-		unittest(0, "%s failed to be destroyed @\"%s\"\n",
-				overlay_name_from_nr(overlay_nr),
-				unittest_path(unittest_nr, ovtype));
-		return ret;
-	}
+	KUNIT_ASSERT_EQ_MSG(
+		test, of_unittest_device_exists(unittest_nr, ovtype), after,
+		"%s failed to create @\"%s\" %s\n",
+		overlay_name_from_nr(overlay_nr),
+		unittest_path(unittest_nr, ovtype),
+		!after ? "enabled" : "disabled");
+
+	KUNIT_ASSERT_EQ_MSG(test, of_overlay_remove(&ovcs_id), 0,
+			    "%s failed to be destroyed @\"%s\"\n",
+			    overlay_name_from_nr(overlay_nr),
+			    unittest_path(unittest_nr, ovtype));
 
 	/* unittest device must be again in before state */
-	if (of_unittest_device_exists(unittest_nr, PDEV_OVERLAY) != before) {
-		unittest(0, "%s with device @\"%s\" %s\n",
-				overlay_name_from_nr(overlay_nr),
-				unittest_path(unittest_nr, ovtype),
-				!before ? "enabled" : "disabled");
-		return -EINVAL;
-	}
+	KUNIT_ASSERT_EQ_MSG(
+		test,
+		of_unittest_device_exists(unittest_nr, PDEV_OVERLAY), before,
+		"%s with device @\"%s\" %s\n",
+		overlay_name_from_nr(overlay_nr),
+		unittest_path(unittest_nr, ovtype),
+		!before ? "enabled" : "disabled");
 
 	return 0;
 }
 
 /* test activation of device */
-static void __init of_unittest_overlay_0(void)
+static void of_unittest_overlay_0(struct kunit *test)
 {
 	/* device should enable */
-	if (of_unittest_apply_overlay_check(0, 0, 0, 1, PDEV_OVERLAY))
-		return;
-
-	unittest(1, "overlay test %d passed\n", 0);
+	of_unittest_apply_overlay_check(test, 0, 0, 0, 1, PDEV_OVERLAY);
 }
 
 /* test deactivation of device */
-static void __init of_unittest_overlay_1(void)
+static void of_unittest_overlay_1(struct kunit *test)
 {
 	/* device should disable */
-	if (of_unittest_apply_overlay_check(1, 1, 1, 0, PDEV_OVERLAY))
-		return;
-
-	unittest(1, "overlay test %d passed\n", 1);
+	of_unittest_apply_overlay_check(test, 1, 1, 1, 0, PDEV_OVERLAY);
 }
 
 /* test activation of device */
-static void __init of_unittest_overlay_2(void)
+static void of_unittest_overlay_2(struct kunit *test)
 {
 	/* device should enable */
-	if (of_unittest_apply_overlay_check(2, 2, 0, 1, PDEV_OVERLAY))
-		return;
-
-	unittest(1, "overlay test %d passed\n", 2);
+	of_unittest_apply_overlay_check(test, 2, 2, 0, 1, PDEV_OVERLAY);
 }
 
 /* test deactivation of device */
-static void __init of_unittest_overlay_3(void)
+static void of_unittest_overlay_3(struct kunit *test)
 {
 	/* device should disable */
-	if (of_unittest_apply_overlay_check(3, 3, 1, 0, PDEV_OVERLAY))
-		return;
-
-	unittest(1, "overlay test %d passed\n", 3);
+	of_unittest_apply_overlay_check(test, 3, 3, 1, 0, PDEV_OVERLAY);
 }
 
 /* test activation of a full device node */
-static void __init of_unittest_overlay_4(void)
+static void of_unittest_overlay_4(struct kunit *test)
 {
 	/* device should disable */
-	if (of_unittest_apply_overlay_check(4, 4, 0, 1, PDEV_OVERLAY))
-		return;
-
-	unittest(1, "overlay test %d passed\n", 4);
+	of_unittest_apply_overlay_check(test, 4, 4, 0, 1, PDEV_OVERLAY);
 }
 
 /* test overlay apply/revert sequence */
-static void __init of_unittest_overlay_5(void)
+static void of_unittest_overlay_5(struct kunit *test)
 {
 	/* device should disable */
-	if (of_unittest_apply_revert_overlay_check(5, 5, 0, 1, PDEV_OVERLAY))
-		return;
-
-	unittest(1, "overlay test %d passed\n", 5);
+	of_unittest_apply_revert_overlay_check(test, 5, 5, 0, 1, PDEV_OVERLAY);
 }
 
 /* test overlay application in sequence */
-static void __init of_unittest_overlay_6(void)
+static void of_unittest_overlay_6(struct kunit *test)
 {
 	int i, ov_id[2], ovcs_id;
 	int overlay_nr = 6, unittest_nr = 6;
@@ -1645,74 +1719,67 @@ static void __init of_unittest_overlay_6(void)
 
 	/* unittest device must be in before state */
 	for (i = 0; i < 2; i++) {
-		if (of_unittest_device_exists(unittest_nr + i, PDEV_OVERLAY)
-				!= before) {
-			unittest(0, "%s with device @\"%s\" %s\n",
-					overlay_name_from_nr(overlay_nr + i),
-					unittest_path(unittest_nr + i,
-						PDEV_OVERLAY),
-					!before ? "enabled" : "disabled");
-			return;
-		}
+		KUNIT_ASSERT_EQ_MSG(test,
+				    of_unittest_device_exists(unittest_nr + i,
+							      PDEV_OVERLAY),
+				    before,
+				    "%s with device @\"%s\" %s\n",
+				    overlay_name_from_nr(overlay_nr + i),
+				    unittest_path(unittest_nr + i,
+						  PDEV_OVERLAY),
+				    !before ? "enabled" : "disabled");
 	}
 
 	/* apply the overlays */
 	for (i = 0; i < 2; i++) {
-
 		overlay_name = overlay_name_from_nr(overlay_nr + i);
 
-		if (!overlay_data_apply(overlay_name, &ovcs_id)) {
-			unittest(0, "could not apply overlay \"%s\"\n",
-					overlay_name);
-			return;
-		}
+		KUNIT_ASSERT_TRUE_MSG(
+			test, overlay_data_apply(overlay_name, &ovcs_id),
+			"could not apply overlay \"%s\"\n", overlay_name);
 		ov_id[i] = ovcs_id;
 		of_unittest_track_overlay(ov_id[i]);
 	}
 
 	for (i = 0; i < 2; i++) {
 		/* unittest device must be in after state */
-		if (of_unittest_device_exists(unittest_nr + i, PDEV_OVERLAY)
-				!= after) {
-			unittest(0, "overlay @\"%s\" failed @\"%s\" %s\n",
-					overlay_name_from_nr(overlay_nr + i),
-					unittest_path(unittest_nr + i,
-						PDEV_OVERLAY),
-					!after ? "enabled" : "disabled");
-			return;
-		}
+		KUNIT_ASSERT_EQ_MSG(test,
+				    of_unittest_device_exists(unittest_nr + i,
+							      PDEV_OVERLAY),
+				    after,
+				    "overlay @\"%s\" failed @\"%s\" %s\n",
+				    overlay_name_from_nr(overlay_nr + i),
+				    unittest_path(unittest_nr + i,
+						  PDEV_OVERLAY),
+				    !after ? "enabled" : "disabled");
 	}
 
 	for (i = 1; i >= 0; i--) {
 		ovcs_id = ov_id[i];
-		if (of_overlay_remove(&ovcs_id)) {
-			unittest(0, "%s failed destroy @\"%s\"\n",
-					overlay_name_from_nr(overlay_nr + i),
-					unittest_path(unittest_nr + i,
-						PDEV_OVERLAY));
-			return;
-		}
+		KUNIT_ASSERT_FALSE_MSG(
+			test, of_overlay_remove(&ovcs_id),
+			"%s failed destroy @\"%s\"\n",
+			overlay_name_from_nr(overlay_nr + i),
+			unittest_path(unittest_nr + i, PDEV_OVERLAY));
 		of_unittest_untrack_overlay(ov_id[i]);
 	}
 
 	for (i = 0; i < 2; i++) {
 		/* unittest device must be again in before state */
-		if (of_unittest_device_exists(unittest_nr + i, PDEV_OVERLAY)
-				!= before) {
-			unittest(0, "%s with device @\"%s\" %s\n",
-					overlay_name_from_nr(overlay_nr + i),
-					unittest_path(unittest_nr + i,
-						PDEV_OVERLAY),
-					!before ? "enabled" : "disabled");
-			return;
-		}
+		KUNIT_ASSERT_EQ_MSG(test,
+				    of_unittest_device_exists(unittest_nr + i,
+							      PDEV_OVERLAY),
+				    before,
+				    "%s with device @\"%s\" %s\n",
+				    overlay_name_from_nr(overlay_nr + i),
+				    unittest_path(unittest_nr + i,
+						  PDEV_OVERLAY),
+				    !before ? "enabled" : "disabled");
 	}
-
-	unittest(1, "overlay test %d passed\n", 6);
 }
 
 /* test overlay application in sequence */
-static void __init of_unittest_overlay_8(void)
+static void of_unittest_overlay_8(struct kunit *test)
 {
 	int i, ov_id[2], ovcs_id;
 	int overlay_nr = 8, unittest_nr = 8;
@@ -1722,76 +1789,64 @@ static void __init of_unittest_overlay_8(void)
 
 	/* apply the overlays */
 	for (i = 0; i < 2; i++) {
-
 		overlay_name = overlay_name_from_nr(overlay_nr + i);
 
-		if (!overlay_data_apply(overlay_name, &ovcs_id)) {
-			unittest(0, "could not apply overlay \"%s\"\n",
-					overlay_name);
-			return;
-		}
+		KUNIT_ASSERT_TRUE_MSG(
+			test, overlay_data_apply(overlay_name, &ovcs_id),
+			"could not apply overlay \"%s\"\n", overlay_name);
 		ov_id[i] = ovcs_id;
 		of_unittest_track_overlay(ov_id[i]);
 	}
 
 	/* now try to remove first overlay (it should fail) */
 	ovcs_id = ov_id[0];
-	if (!of_overlay_remove(&ovcs_id)) {
-		unittest(0, "%s was destroyed @\"%s\"\n",
-				overlay_name_from_nr(overlay_nr + 0),
-				unittest_path(unittest_nr,
-					PDEV_OVERLAY));
-		return;
-	}
+	KUNIT_ASSERT_TRUE_MSG(
+		test, of_overlay_remove(&ovcs_id),
+		"%s was destroyed @\"%s\"\n",
+		overlay_name_from_nr(overlay_nr + 0),
+		unittest_path(unittest_nr, PDEV_OVERLAY));
 
 	/* removing them in order should work */
 	for (i = 1; i >= 0; i--) {
 		ovcs_id = ov_id[i];
-		if (of_overlay_remove(&ovcs_id)) {
-			unittest(0, "%s not destroyed @\"%s\"\n",
-					overlay_name_from_nr(overlay_nr + i),
-					unittest_path(unittest_nr,
-						PDEV_OVERLAY));
-			return;
-		}
+		KUNIT_ASSERT_FALSE_MSG(
+			test, of_overlay_remove(&ovcs_id),
+			"%s not destroyed @\"%s\"\n",
+			overlay_name_from_nr(overlay_nr + i),
+			unittest_path(unittest_nr, PDEV_OVERLAY));
 		of_unittest_untrack_overlay(ov_id[i]);
 	}
-
-	unittest(1, "overlay test %d passed\n", 8);
 }
 
 /* test insertion of a bus with parent devices */
-static void __init of_unittest_overlay_10(void)
+static void of_unittest_overlay_10(struct kunit *test)
 {
-	int ret;
 	char *child_path;
 
 	/* device should disable */
-	ret = of_unittest_apply_overlay_check(10, 10, 0, 1, PDEV_OVERLAY);
-	if (unittest(ret == 0,
-			"overlay test %d failed; overlay application\n", 10))
-		return;
+	KUNIT_ASSERT_EQ_MSG(
+		test,
+		of_unittest_apply_overlay_check(
+				test, 10, 10, 0, 1, PDEV_OVERLAY),
+		0,
+		"overlay test %d failed; overlay application\n", 10);
 
 	child_path = kasprintf(GFP_KERNEL, "%s/test-unittest101",
 			unittest_path(10, PDEV_OVERLAY));
-	if (unittest(child_path, "overlay test %d failed; kasprintf\n", 10))
-		return;
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, child_path);
 
-	ret = of_path_device_type_exists(child_path, PDEV_OVERLAY);
+	KUNIT_EXPECT_TRUE_MSG(
+		test, of_path_device_type_exists(child_path, PDEV_OVERLAY),
+		"overlay test %d failed; no child device\n", 10);
 	kfree(child_path);
-
-	unittest(ret, "overlay test %d failed; no child device\n", 10);
 }
 
 /* test insertion of a bus with parent devices (and revert) */
-static void __init of_unittest_overlay_11(void)
+static void of_unittest_overlay_11(struct kunit *test)
 {
-	int ret;
-
 	/* device should disable */
-	ret = of_unittest_apply_revert_overlay_check(11, 11, 0, 1,
-			PDEV_OVERLAY);
-	unittest(ret == 0, "overlay test %d failed; overlay apply\n", 11);
+	KUNIT_EXPECT_FALSE(test, of_unittest_apply_revert_overlay_check(
+		test, 11, 11, 0, 1, PDEV_OVERLAY));
 }
 
 #if IS_BUILTIN(CONFIG_I2C) && IS_ENABLED(CONFIG_OF_OVERLAY)
@@ -2013,25 +2068,18 @@ static struct i2c_driver unittest_i2c_mux_driver = {
 
 #endif
 
-static int of_unittest_overlay_i2c_init(void)
+static int of_unittest_overlay_i2c_init(struct kunit *test)
 {
-	int ret;
-
-	ret = i2c_add_driver(&unittest_i2c_dev_driver);
-	if (unittest(ret == 0,
-			"could not register unittest i2c device driver\n"))
-		return ret;
+	KUNIT_ASSERT_EQ_MSG(test, i2c_add_driver(&unittest_i2c_dev_driver), 0,
+			    "could not register unittest i2c device driver\n");
 
-	ret = platform_driver_register(&unittest_i2c_bus_driver);
-	if (unittest(ret == 0,
-			"could not register unittest i2c bus driver\n"))
-		return ret;
+	KUNIT_ASSERT_EQ_MSG(
+		test, platform_driver_register(&unittest_i2c_bus_driver), 0,
+		"could not register unittest i2c bus driver\n");
 
 #if IS_BUILTIN(CONFIG_I2C_MUX)
-	ret = i2c_add_driver(&unittest_i2c_mux_driver);
-	if (unittest(ret == 0,
-			"could not register unittest i2c mux driver\n"))
-		return ret;
+	KUNIT_ASSERT_EQ_MSG(test, i2c_add_driver(&unittest_i2c_mux_driver), 0,
+			    "could not register unittest i2c mux driver\n");
 #endif
 
 	return 0;
@@ -2046,101 +2094,85 @@ static void of_unittest_overlay_i2c_cleanup(void)
 	i2c_del_driver(&unittest_i2c_dev_driver);
 }
 
-static void __init of_unittest_overlay_i2c_12(void)
+static void of_unittest_overlay_i2c_12(struct kunit *test)
 {
 	/* device should enable */
-	if (of_unittest_apply_overlay_check(12, 12, 0, 1, I2C_OVERLAY))
-		return;
-
-	unittest(1, "overlay test %d passed\n", 12);
+	of_unittest_apply_overlay_check(test, 12, 12, 0, 1, I2C_OVERLAY);
 }
 
 /* test deactivation of device */
-static void __init of_unittest_overlay_i2c_13(void)
+static void of_unittest_overlay_i2c_13(struct kunit *test)
 {
 	/* device should disable */
-	if (of_unittest_apply_overlay_check(13, 13, 1, 0, I2C_OVERLAY))
-		return;
-
-	unittest(1, "overlay test %d passed\n", 13);
+	of_unittest_apply_overlay_check(test, 13, 13, 1, 0, I2C_OVERLAY);
 }
 
 /* just check for i2c mux existence */
-static void of_unittest_overlay_i2c_14(void)
+static void of_unittest_overlay_i2c_14(struct kunit *test)
 {
+	KUNIT_SUCCEED(test);
 }
 
-static void __init of_unittest_overlay_i2c_15(void)
+static void of_unittest_overlay_i2c_15(struct kunit *test)
 {
 	/* device should enable */
-	if (of_unittest_apply_overlay_check(15, 15, 0, 1, I2C_OVERLAY))
-		return;
-
-	unittest(1, "overlay test %d passed\n", 15);
+	of_unittest_apply_overlay_check(test, 15, 15, 0, 1, I2C_OVERLAY);
 }
 
 #else
 
-static inline void of_unittest_overlay_i2c_14(void) { }
-static inline void of_unittest_overlay_i2c_15(void) { }
+static inline void of_unittest_overlay_i2c_14(struct kunit *test) { }
+static inline void of_unittest_overlay_i2c_15(struct kunit *test) { }
 
 #endif
 
-static void __init of_unittest_overlay(void)
+static void of_unittest_overlay(struct kunit *test)
 {
 	struct device_node *bus_np = NULL;
 
-	if (platform_driver_register(&unittest_driver)) {
-		unittest(0, "could not register unittest driver\n");
-		goto out;
-	}
+	KUNIT_ASSERT_FALSE_MSG(test, platform_driver_register(&unittest_driver),
+			       "could not register unittest driver\n");
 
 	bus_np = of_find_node_by_path(bus_path);
-	if (bus_np == NULL) {
-		unittest(0, "could not find bus_path \"%s\"\n", bus_path);
-		goto out;
-	}
+	KUNIT_ASSERT_NOT_ERR_OR_NULL_MSG(
+		test, bus_np, "could not find bus_path \"%s\"\n", bus_path);
 
-	if (of_platform_default_populate(bus_np, NULL, NULL)) {
-		unittest(0, "could not populate bus @ \"%s\"\n", bus_path);
-		goto out;
-	}
-
-	if (!of_unittest_device_exists(100, PDEV_OVERLAY)) {
-		unittest(0, "could not find unittest0 @ \"%s\"\n",
-				unittest_path(100, PDEV_OVERLAY));
-		goto out;
-	}
+	KUNIT_ASSERT_FALSE_MSG(
+		test, of_platform_default_populate(bus_np, NULL, NULL),
+		"could not populate bus @ \"%s\"\n", bus_path);
 
-	if (of_unittest_device_exists(101, PDEV_OVERLAY)) {
-		unittest(0, "unittest1 @ \"%s\" should not exist\n",
-				unittest_path(101, PDEV_OVERLAY));
-		goto out;
-	}
+	KUNIT_ASSERT_TRUE_MSG(
+		test, of_unittest_device_exists(100, PDEV_OVERLAY),
+		"could not find unittest0 @ \"%s\"\n",
+		unittest_path(100, PDEV_OVERLAY));
 
-	unittest(1, "basic infrastructure of overlays passed");
+	KUNIT_ASSERT_FALSE_MSG(
+		test, of_unittest_device_exists(101, PDEV_OVERLAY),
+		"unittest1 @ \"%s\" should not exist\n",
+		unittest_path(101, PDEV_OVERLAY));
 
 	/* tests in sequence */
-	of_unittest_overlay_0();
-	of_unittest_overlay_1();
-	of_unittest_overlay_2();
-	of_unittest_overlay_3();
-	of_unittest_overlay_4();
-	of_unittest_overlay_5();
-	of_unittest_overlay_6();
-	of_unittest_overlay_8();
-
-	of_unittest_overlay_10();
-	of_unittest_overlay_11();
+	of_unittest_overlay_0(test);
+	of_unittest_overlay_1(test);
+	of_unittest_overlay_2(test);
+	of_unittest_overlay_3(test);
+	of_unittest_overlay_4(test);
+	of_unittest_overlay_5(test);
+	of_unittest_overlay_6(test);
+	of_unittest_overlay_8(test);
+
+	of_unittest_overlay_10(test);
+	of_unittest_overlay_11(test);
 
 #if IS_BUILTIN(CONFIG_I2C)
-	if (unittest(of_unittest_overlay_i2c_init() == 0, "i2c init failed\n"))
-		goto out;
+	KUNIT_ASSERT_EQ_MSG(test, of_unittest_overlay_i2c_init(test), 0,
+			    "i2c init failed\n");
 
-	of_unittest_overlay_i2c_12();
-	of_unittest_overlay_i2c_13();
-	of_unittest_overlay_i2c_14();
-	of_unittest_overlay_i2c_15();
+	of_unittest_overlay_i2c_12(test);
+	of_unittest_overlay_i2c_13(test);
+	of_unittest_overlay_i2c_14(test);
+	of_unittest_overlay_i2c_15(test);
 
 	of_unittest_overlay_i2c_cleanup();
 #endif
@@ -2152,7 +2184,7 @@ static void __init of_unittest_overlay(void)
 }
 
 #else
-static inline void __init of_unittest_overlay(void) { }
+static inline void of_unittest_overlay(struct kunit *test) { }
 #endif
 
 #ifdef CONFIG_OF_OVERLAY
@@ -2313,7 +2345,7 @@ void __init unittest_unflatten_overlay_base(void)
  *
  * Return 0 on unexpected error.
  */
-static int __init overlay_data_apply(const char *overlay_name, int *overlay_id)
+static int overlay_data_apply(const char *overlay_name, int *overlay_id)
 {
 	struct overlay_info *info;
 	int found = 0;
@@ -2359,19 +2391,17 @@ static int __init overlay_data_apply(const char *overlay_name, int *overlay_id)
  * The first part of the function is _not_ normal overlay usage; it is
  * finishing splicing the base overlay device tree into the live tree.
  */
-static __init void of_unittest_overlay_high_level(void)
+static void of_unittest_overlay_high_level(struct kunit *test)
 {
 	struct device_node *last_sibling;
 	struct device_node *np;
 	struct device_node *of_symbols;
-	struct device_node *overlay_base_symbols;
+	struct device_node *overlay_base_symbols = NULL;
 	struct device_node **pprev;
 	struct property *prop;
 
-	if (!overlay_base_root) {
-		unittest(0, "overlay_base_root not initialized\n");
-		return;
-	}
+	KUNIT_ASSERT_TRUE_MSG(test, overlay_base_root,
+			      "overlay_base_root not initialized\n");
 
 	/*
 	 * Could not fixup phandles in unittest_unflatten_overlay_base()
@@ -2418,11 +2448,9 @@ static __init void of_unittest_overlay_high_level(void)
 	for_each_child_of_node(overlay_base_root, np) {
 		struct device_node *base_child;
 		for_each_child_of_node(of_root, base_child) {
-			if (!strcmp(np->full_name, base_child->full_name)) {
-				unittest(0, "illegal node name in overlay_base %pOFn",
-					 np);
-				return;
-			}
+			KUNIT_ASSERT_STRNEQ_MSG(
+				test, np->full_name, base_child->full_name,
+				"illegal node name in overlay_base %pOFn", np);
 		}
 	}
 
@@ -2456,21 +2484,24 @@ static __init void of_unittest_overlay_high_level(void)
 
 			new_prop = __of_prop_dup(prop, GFP_KERNEL);
 			if (!new_prop) {
-				unittest(0, "__of_prop_dup() of '%s' from overlay_base node __symbols__",
-					 prop->name);
+				KUNIT_FAIL(test,
+					   "__of_prop_dup() of '%s' from overlay_base node __symbols__",
+					   prop->name);
 				goto err_unlock;
 			}
 			if (__of_add_property(of_symbols, new_prop)) {
 				/* "name" auto-generated by unflatten */
 				if (!strcmp(new_prop->name, "name"))
 					continue;
-				unittest(0, "duplicate property '%s' in overlay_base node __symbols__",
-					 prop->name);
+				KUNIT_FAIL(test,
+					   "duplicate property '%s' in overlay_base node __symbols__",
+					   prop->name);
 				goto err_unlock;
 			}
 			if (__of_add_property_sysfs(of_symbols, new_prop)) {
-				unittest(0, "unable to add property '%s' in overlay_base node __symbols__ to sysfs",
-					 prop->name);
+				KUNIT_FAIL(test,
+					   "unable to add property '%s' in overlay_base node __symbols__ to sysfs",
+					   prop->name);
 				goto err_unlock;
 			}
 		}
@@ -2481,20 +2512,24 @@ static __init void of_unittest_overlay_high_level(void)
 
 	/* now do the normal overlay usage test */
 
-	unittest(overlay_data_apply("overlay", NULL),
-		 "Adding overlay 'overlay' failed\n");
+	KUNIT_EXPECT_TRUE_MSG(test, overlay_data_apply("overlay", NULL),
+			      "Adding overlay 'overlay' failed\n");
 
-	unittest(overlay_data_apply("overlay_bad_add_dup_node", NULL),
-		 "Adding overlay 'overlay_bad_add_dup_node' failed\n");
+	KUNIT_EXPECT_TRUE_MSG(
+		test, overlay_data_apply("overlay_bad_add_dup_node", NULL),
+		"Adding overlay 'overlay_bad_add_dup_node' failed\n");
 
-	unittest(overlay_data_apply("overlay_bad_add_dup_prop", NULL),
-		 "Adding overlay 'overlay_bad_add_dup_prop' failed\n");
+	KUNIT_EXPECT_TRUE_MSG(
+		test, overlay_data_apply("overlay_bad_add_dup_prop", NULL),
+		"Adding overlay 'overlay_bad_add_dup_prop' failed\n");
 
-	unittest(overlay_data_apply("overlay_bad_phandle", NULL),
-		 "Adding overlay 'overlay_bad_phandle' failed\n");
+	KUNIT_EXPECT_TRUE_MSG(
+		test, overlay_data_apply("overlay_bad_phandle", NULL),
+		"Adding overlay 'overlay_bad_phandle' failed\n");
 
-	unittest(overlay_data_apply("overlay_bad_symbol", NULL),
-		 "Adding overlay 'overlay_bad_symbol' failed\n");
+	KUNIT_EXPECT_TRUE_MSG(
+		test, overlay_data_apply("overlay_bad_symbol", NULL),
+		"Adding overlay 'overlay_bad_symbol' failed\n");
 
 	return;
 
@@ -2504,57 +2539,52 @@ static __init void of_unittest_overlay_high_level(void)
 
 #else
 
-static inline __init void of_unittest_overlay_high_level(void) {}
+static inline void of_unittest_overlay_high_level(struct kunit *test) {}
 
 #endif
 
-static int __init of_unittest(void)
+static int of_test_init(struct kunit *test)
 {
-	struct device_node *np;
-	int res;
-
 	/* adding data for unittest */
-	res = unittest_data_add();
-	if (res)
-		return res;
+	KUNIT_ASSERT_EQ(test, 0, unittest_data_add());
+
 	if (!of_aliases)
 		of_aliases = of_find_node_by_path("/aliases");
 
-	np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
-	if (!np) {
-		pr_info("No testcase data in device tree; not running tests\n");
-		return 0;
-	}
-	of_node_put(np);
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, of_find_node_by_path(
+		"/testcase-data/phandle-tests/consumer-a"));
 
 	if (IS_ENABLED(CONFIG_UML))
 		unflatten_device_tree();
 
-	pr_info("start of unittest - you will see error messages\n");
-	of_unittest_check_tree_linkage();
-	of_unittest_check_phandles();
-	of_unittest_find_node_by_name();
-	of_unittest_dynamic();
-	of_unittest_parse_phandle_with_args();
-	of_unittest_parse_phandle_with_args_map();
-	of_unittest_printf();
-	of_unittest_property_string();
-	of_unittest_property_copy();
-	of_unittest_changeset();
-	of_unittest_parse_interrupts();
-	of_unittest_parse_interrupts_extended();
-	of_unittest_match_node();
-	of_unittest_platform_populate();
-	of_unittest_overlay();
+	return 0;
+}
 
+static struct kunit_case of_test_cases[] = {
+	KUNIT_CASE(of_unittest_check_tree_linkage),
+	KUNIT_CASE(of_unittest_check_phandles),
+	KUNIT_CASE(of_unittest_find_node_by_name),
+	KUNIT_CASE(of_unittest_dynamic),
+	KUNIT_CASE(of_unittest_parse_phandle_with_args),
+	KUNIT_CASE(of_unittest_parse_phandle_with_args_map),
+	KUNIT_CASE(of_unittest_printf),
+	KUNIT_CASE(of_unittest_property_string),
+	KUNIT_CASE(of_unittest_property_copy),
+	KUNIT_CASE(of_unittest_changeset),
+	KUNIT_CASE(of_unittest_parse_interrupts),
+	KUNIT_CASE(of_unittest_parse_interrupts_extended),
+	KUNIT_CASE(of_unittest_match_node),
+	KUNIT_CASE(of_unittest_platform_populate),
+	KUNIT_CASE(of_unittest_overlay),
 	/* Double check linkage after removing testcase data */
-	of_unittest_check_tree_linkage();
-
-	of_unittest_overlay_high_level();
-
-	pr_info("end of unittest - %i passed, %i failed\n",
-		unittest_results.passed, unittest_results.failed);
+	KUNIT_CASE(of_unittest_check_tree_linkage),
+	KUNIT_CASE(of_unittest_overlay_high_level),
+	{},
+};
 
-	return 0;
-}
-late_initcall(of_unittest);
+static struct kunit_module of_test_module = {
+	.name = "of-test",
+	.init = of_test_init,
+	.test_cases = of_test_cases,
+};
+module_test(of_test_module);
-- 
2.21.0.rc0.258.g878e2cd30e-goog


* [RFC v4 15/17] of: unittest: migrate tests to run on KUnit
@ 2019-02-14 21:37     ` Brendan Higgins
  0 siblings, 0 replies; 316+ messages in thread
From: Brendan Higgins @ 2019-02-14 21:37 UTC (permalink / raw)
  To: keescook, mcgrof, shuah, robh, kieran.bingham, frowand.list
  Cc: gregkh, joel, mpe, joe, brakmo, rostedt, Tim.Bird, khilman,
	julia.lawall, linux-kselftest, kunit-dev, linux-kernel, jdike,
	richard, linux-um, daniel, dri-devel, dan.j.williams,
	linux-nvdimm, knut.omang, devicetree, pmladek, Alexander.Levin,
	amir73il, dan.carpenter, wfg, Brendan Higgins

Migrate the tests to run under KUnit using the KUnit expectation and
assertion APIs, without any cleanup and without modifying test logic in
any way.

Signed-off-by: Brendan Higgins <brendanhiggins@google.com>
---
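A note for reviewers (illustration only, not part of the diff below):
the conversion is intended to be purely mechanical. A minimal sketch of
what a converted case ends up looking like; the example_of_test() and
"example-of-test" names are hypothetical, while the node path, property
name and KUnit macros are the ones used in this patch:

	#include <kunit/test.h>
	#include <linux/errno.h>
	#include <linux/of.h>

	static void example_of_test(struct kunit *test)
	{
		struct device_node *np;

		/* was: np = ...; if (!np) { pr_err(...); return; } */
		np = of_find_node_by_path("/testcase-data");
		KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);

		/* was: unittest(rc == -EINVAL, "missing property; rc=%i\n", rc); */
		KUNIT_EXPECT_EQ_MSG(test,
				    of_property_match_string(np, "missing-property",
							     "blah"),
				    -EINVAL, "missing property\n");

		of_node_put(np);
	}

	static struct kunit_case example_of_test_cases[] = {
		KUNIT_CASE(example_of_test),
		{},
	};

	static struct kunit_module example_of_test_module = {
		.name = "example-of-test",
		.test_cases = example_of_test_cases,
	};
	module_test(example_of_test_module);

KUNIT_EXPECT_* records a failure and lets the case keep running, while
KUNIT_ASSERT_* aborts the current case; the asserts replace the old
"pr_err(...); return;" early exits.
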
 drivers/of/Kconfig    |    1 +
 drivers/of/unittest.c | 1310 +++++++++++++++++++++--------------------
 2 files changed, 671 insertions(+), 640 deletions(-)

diff --git a/drivers/of/Kconfig b/drivers/of/Kconfig
index ad3fcad4d75b8..f309399deac20 100644
--- a/drivers/of/Kconfig
+++ b/drivers/of/Kconfig
@@ -15,6 +15,7 @@ if OF
 config OF_UNITTEST
 	bool "Device Tree runtime unit tests"
 	depends on !SPARC
+	depends on KUNIT
 	select IRQ_DOMAIN
 	select OF_EARLY_FLATTREE
 	select OF_RESOLVE
diff --git a/drivers/of/unittest.c b/drivers/of/unittest.c
index effa4e2b9d992..96de69ccb3e63 100644
--- a/drivers/of/unittest.c
+++ b/drivers/of/unittest.c
@@ -26,186 +26,189 @@
 
 #include <linux/bitops.h>
 
+#include <kunit/test.h>
+
 #include "of_private.h"
 
-static struct unittest_results {
-	int passed;
-	int failed;
-} unittest_results;
-
-#define unittest(result, fmt, ...) ({ \
-	bool failed = !(result); \
-	if (failed) { \
-		unittest_results.failed++; \
-		pr_err("FAIL %s():%i " fmt, __func__, __LINE__, ##__VA_ARGS__); \
-	} else { \
-		unittest_results.passed++; \
-		pr_debug("pass %s():%i\n", __func__, __LINE__); \
-	} \
-	failed; \
-})
-
-static void __init of_unittest_find_node_by_name(void)
+static void of_unittest_find_node_by_name(struct kunit *test)
 {
 	struct device_node *np;
 	const char *options, *name;
 
 	np = of_find_node_by_path("/testcase-data");
 	name = kasprintf(GFP_KERNEL, "%pOF", np);
-	unittest(np && !strcmp("/testcase-data", name),
-		"find /testcase-data failed\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+	KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
+			       "find /testcase-data failed\n");
 	of_node_put(np);
 	kfree(name);
 
 	/* Test if trailing '/' works */
-	np = of_find_node_by_path("/testcase-data/");
-	unittest(!np, "trailing '/' on /testcase-data/ should fail\n");
+	KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("/testcase-data/"), NULL,
+			    "trailing '/' on /testcase-data/ should fail\n");
 
 	np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
 	name = kasprintf(GFP_KERNEL, "%pOF", np);
-	unittest(np && !strcmp("/testcase-data/phandle-tests/consumer-a", name),
+	KUNIT_EXPECT_STREQ_MSG(
+		test, "/testcase-data/phandle-tests/consumer-a", name,
 		"find /testcase-data/phandle-tests/consumer-a failed\n");
 	of_node_put(np);
 	kfree(name);
 
 	np = of_find_node_by_path("testcase-alias");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
 	name = kasprintf(GFP_KERNEL, "%pOF", np);
-	unittest(np && !strcmp("/testcase-data", name),
-		"find testcase-alias failed\n");
+	KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
+			       "find testcase-alias failed\n");
 	of_node_put(np);
 	kfree(name);
 
 	/* Test if trailing '/' works on aliases */
-	np = of_find_node_by_path("testcase-alias/");
-	unittest(!np, "trailing '/' on testcase-alias/ should fail\n");
+	KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("testcase-alias/"), NULL,
+			    "trailing '/' on testcase-alias/ should fail\n");
 
 	np = of_find_node_by_path("testcase-alias/phandle-tests/consumer-a");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
 	name = kasprintf(GFP_KERNEL, "%pOF", np);
-	unittest(np && !strcmp("/testcase-data/phandle-tests/consumer-a", name),
+	KUNIT_EXPECT_STREQ_MSG(
+		test, "/testcase-data/phandle-tests/consumer-a", name,
 		"find testcase-alias/phandle-tests/consumer-a failed\n");
 	of_node_put(np);
 	kfree(name);
 
-	np = of_find_node_by_path("/testcase-data/missing-path");
-	unittest(!np, "non-existent path returned node %pOF\n", np);
+	KUNIT_EXPECT_EQ_MSG(
+		test,
+		np = of_find_node_by_path("/testcase-data/missing-path"), NULL,
+		"non-existent path returned node %pOF\n", np);
 	of_node_put(np);
 
-	np = of_find_node_by_path("missing-alias");
-	unittest(!np, "non-existent alias returned node %pOF\n", np);
+	KUNIT_EXPECT_EQ_MSG(
+		test, np = of_find_node_by_path("missing-alias"), NULL,
+		"non-existent alias returned node %pOF\n", np);
 	of_node_put(np);
 
-	np = of_find_node_by_path("testcase-alias/missing-path");
-	unittest(!np, "non-existent alias with relative path returned node %pOF\n", np);
+	KUNIT_EXPECT_EQ_MSG(
+		test,
+		np = of_find_node_by_path("testcase-alias/missing-path"), NULL,
+		"non-existent alias with relative path returned node %pOF\n",
+		np);
 	of_node_put(np);
 
 	np = of_find_node_opts_by_path("/testcase-data:testoption", &options);
-	unittest(np && !strcmp("testoption", options),
-		 "option path test failed\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+	KUNIT_EXPECT_STREQ_MSG(test, "testoption", options,
+			       "option path test failed\n");
 	of_node_put(np);
 
 	np = of_find_node_opts_by_path("/testcase-data:test/option", &options);
-	unittest(np && !strcmp("test/option", options),
-		 "option path test, subcase #1 failed\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+	KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
+			       "option path test, subcase #1 failed\n");
 	of_node_put(np);
 
 	np = of_find_node_opts_by_path("/testcase-data/testcase-device1:test/option", &options);
-	unittest(np && !strcmp("test/option", options),
-		 "option path test, subcase #2 failed\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+	KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
+			       "option path test, subcase #2 failed\n");
 	of_node_put(np);
 
 	np = of_find_node_opts_by_path("/testcase-data:testoption", NULL);
-	unittest(np, "NULL option path test failed\n");
+	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(test, np,
+					 "NULL option path test failed\n");
 	of_node_put(np);
 
 	np = of_find_node_opts_by_path("testcase-alias:testaliasoption",
 				       &options);
-	unittest(np && !strcmp("testaliasoption", options),
-		 "option alias path test failed\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+	KUNIT_EXPECT_STREQ_MSG(test, "testaliasoption", options,
+			       "option alias path test failed\n");
 	of_node_put(np);
 
 	np = of_find_node_opts_by_path("testcase-alias:test/alias/option",
 				       &options);
-	unittest(np && !strcmp("test/alias/option", options),
-		 "option alias path test, subcase #1 failed\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+	KUNIT_EXPECT_STREQ_MSG(test, "test/alias/option", options,
+			       "option alias path test, subcase #1 failed\n");
 	of_node_put(np);
 
 	np = of_find_node_opts_by_path("testcase-alias:testaliasoption", NULL);
-	unittest(np, "NULL option alias path test failed\n");
+	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(
+			test, np, "NULL option alias path test failed\n");
 	of_node_put(np);
 
 	options = "testoption";
 	np = of_find_node_opts_by_path("testcase-alias", &options);
-	unittest(np && !options, "option clearing test failed\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+	KUNIT_EXPECT_EQ_MSG(test, options, NULL,
+			    "option clearing test failed\n");
 	of_node_put(np);
 
 	options = "testoption";
 	np = of_find_node_opts_by_path("/", &options);
-	unittest(np && !options, "option clearing root node test failed\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+	KUNIT_EXPECT_EQ_MSG(test, options, NULL,
+			    "option clearing root node test failed\n");
 	of_node_put(np);
 }
 
-static void __init of_unittest_dynamic(void)
+static void of_unittest_dynamic(struct kunit *test)
 {
 	struct device_node *np;
 	struct property *prop;
 
 	np = of_find_node_by_path("/testcase-data");
-	if (!np) {
-		pr_err("missing testcase data\n");
-		return;
-	}
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
 
 	/* Array of 4 properties for the purpose of testing */
 	prop = kcalloc(4, sizeof(*prop), GFP_KERNEL);
-	if (!prop) {
-		unittest(0, "kzalloc() failed\n");
-		return;
-	}
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, prop);
 
 	/* Add a new property - should pass*/
 	prop->name = "new-property";
 	prop->value = "new-property-data";
 	prop->length = strlen(prop->value) + 1;
-	unittest(of_add_property(np, prop) == 0, "Adding a new property failed\n");
+	KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop), 0,
+			    "Adding a new property failed\n");
 
 	/* Try to add an existing property - should fail */
 	prop++;
 	prop->name = "new-property";
 	prop->value = "new-property-data-should-fail";
 	prop->length = strlen(prop->value) + 1;
-	unittest(of_add_property(np, prop) != 0,
-		 "Adding an existing property should have failed\n");
+	KUNIT_EXPECT_NE_MSG(test, of_add_property(np, prop), 0,
+			    "Adding an existing property should have failed\n");
 
 	/* Try to modify an existing property - should pass */
 	prop->value = "modify-property-data-should-pass";
 	prop->length = strlen(prop->value) + 1;
-	unittest(of_update_property(np, prop) == 0,
-		 "Updating an existing property should have passed\n");
+	KUNIT_EXPECT_EQ_MSG(
+		test, of_update_property(np, prop), 0,
+		"Updating an existing property should have passed\n");
 
 	/* Try to modify non-existent property - should pass*/
 	prop++;
 	prop->name = "modify-property";
 	prop->value = "modify-missing-property-data-should-pass";
 	prop->length = strlen(prop->value) + 1;
-	unittest(of_update_property(np, prop) == 0,
-		 "Updating a missing property should have passed\n");
+	KUNIT_EXPECT_EQ_MSG(test, of_update_property(np, prop), 0,
+			    "Updating a missing property should have passed\n");
 
 	/* Remove property - should pass */
-	unittest(of_remove_property(np, prop) == 0,
-		 "Removing a property should have passed\n");
+	KUNIT_EXPECT_EQ_MSG(test, of_remove_property(np, prop), 0,
+			    "Removing a property should have passed\n");
 
 	/* Adding very large property - should pass */
 	prop++;
 	prop->name = "large-property-PAGE_SIZEx8";
 	prop->length = PAGE_SIZE * 8;
 	prop->value = kzalloc(prop->length, GFP_KERNEL);
-	unittest(prop->value != NULL, "Unable to allocate large buffer\n");
-	if (prop->value)
-		unittest(of_add_property(np, prop) == 0,
-			 "Adding a large property should have passed\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, prop->value);
+	KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop), 0,
+			    "Adding a large property should have passed\n");
 }
 
-static int __init of_unittest_check_node_linkage(struct device_node *np)
+static int of_unittest_check_node_linkage(struct device_node *np)
 {
 	struct device_node *child;
 	int count = 0, rc;
@@ -230,27 +233,30 @@ static int __init of_unittest_check_node_linkage(struct device_node *np)
 	return rc;
 }
 
-static void __init of_unittest_check_tree_linkage(void)
+static void of_unittest_check_tree_linkage(struct kunit *test)
 {
 	struct device_node *np;
 	int allnode_count = 0, child_count;
 
-	if (!of_root)
-		return;
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, of_root);
 
 	for_each_of_allnodes(np)
 		allnode_count++;
 	child_count = of_unittest_check_node_linkage(of_root);
 
-	unittest(child_count > 0, "Device node data structure is corrupted\n");
-	unittest(child_count == allnode_count,
-		 "allnodes list size (%i) doesn't match sibling lists size (%i)\n",
-		 allnode_count, child_count);
+	KUNIT_EXPECT_GT_MSG(test, child_count, 0,
+			    "Device node data structure is corrupted\n");
+	KUNIT_EXPECT_EQ_MSG(
+		test, child_count, allnode_count,
+		"allnodes list size (%i) doesn't match sibling lists size (%i)\n",
+		allnode_count, child_count);
 	pr_debug("allnodes list size (%i); sibling lists size (%i)\n", allnode_count, child_count);
 }
 
-static void __init of_unittest_printf_one(struct device_node *np, const char *fmt,
-					  const char *expected)
+static void of_unittest_printf_one(struct kunit *test,
+				   struct device_node *np,
+				   const char *fmt,
+				   const char *expected)
 {
 	unsigned char *buf;
 	int buf_size;
@@ -265,8 +271,12 @@ static void __init of_unittest_printf_one(struct device_node *np, const char *fm
 	memset(buf, 0xff, buf_size);
 	size = snprintf(buf, buf_size - 2, fmt, np);
 
-	/* use strcmp() instead of strncmp() here to be absolutely sure strings match */
-	unittest((strcmp(buf, expected) == 0) && (buf[size+1] == 0xff),
+	KUNIT_EXPECT_STREQ_MSG(
+		test, buf, expected,
+		"sprintf failed; fmt='%s' expected='%s' rslt='%s'\n",
+		fmt, expected, buf);
+	KUNIT_EXPECT_EQ_MSG(
+		test, buf[size+1], 0xff,
 		"sprintf failed; fmt='%s' expected='%s' rslt='%s'\n",
 		fmt, expected, buf);
 
@@ -276,44 +286,49 @@ static void __init of_unittest_printf_one(struct device_node *np, const char *fm
 		/* Clear the buffer, and make sure it works correctly still */
 		memset(buf, 0xff, buf_size);
 		snprintf(buf, size+1, fmt, np);
-		unittest(strncmp(buf, expected, size) == 0 && (buf[size+1] == 0xff),
+		KUNIT_EXPECT_STREQ_MSG(
+			test, buf, expected,
+			"snprintf failed; size=%i fmt='%s' expected='%s' rslt='%s'\n",
+			size, fmt, expected, buf);
+		KUNIT_EXPECT_EQ_MSG(
+			test, buf[size+1], 0xff,
 			"snprintf failed; size=%i fmt='%s' expected='%s' rslt='%s'\n",
 			size, fmt, expected, buf);
 	}
 	kfree(buf);
 }
 
-static void __init of_unittest_printf(void)
+static void of_unittest_printf(struct kunit *test)
 {
 	struct device_node *np;
 	const char *full_name = "/testcase-data/platform-tests/test-device@1/dev@100";
 	char phandle_str[16] = "";
 
 	np = of_find_node_by_path(full_name);
-	if (!np) {
-		unittest(np, "testcase data missing\n");
-		return;
-	}
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
 
 	num_to_str(phandle_str, sizeof(phandle_str), np->phandle, 0);
 
-	of_unittest_printf_one(np, "%pOF",  full_name);
-	of_unittest_printf_one(np, "%pOFf", full_name);
-	of_unittest_printf_one(np, "%pOFn", "dev");
-	of_unittest_printf_one(np, "%2pOFn", "dev");
-	of_unittest_printf_one(np, "%5pOFn", "  dev");
-	of_unittest_printf_one(np, "%pOFnc", "dev:test-sub-device");
-	of_unittest_printf_one(np, "%pOFp", phandle_str);
-	of_unittest_printf_one(np, "%pOFP", "dev@100");
-	of_unittest_printf_one(np, "ABC %pOFP ABC", "ABC dev@100 ABC");
-	of_unittest_printf_one(np, "%10pOFP", "   dev@100");
-	of_unittest_printf_one(np, "%-10pOFP", "dev@100   ");
-	of_unittest_printf_one(of_root, "%pOFP", "/");
-	of_unittest_printf_one(np, "%pOFF", "----");
-	of_unittest_printf_one(np, "%pOFPF", "dev@100:----");
-	of_unittest_printf_one(np, "%pOFPFPc", "dev@100:----:dev@100:test-sub-device");
-	of_unittest_printf_one(np, "%pOFc", "test-sub-device");
-	of_unittest_printf_one(np, "%pOFC",
+	of_unittest_printf_one(test, np, "%pOF",  full_name);
+	of_unittest_printf_one(test, np, "%pOFf", full_name);
+	of_unittest_printf_one(test, np, "%pOFn", "dev");
+	of_unittest_printf_one(test, np, "%2pOFn", "dev");
+	of_unittest_printf_one(test, np, "%5pOFn", "  dev");
+	of_unittest_printf_one(test, np, "%pOFnc", "dev:test-sub-device");
+	of_unittest_printf_one(test, np, "%pOFp", phandle_str);
+	of_unittest_printf_one(test, np, "%pOFP", "dev@100");
+	of_unittest_printf_one(test, np, "ABC %pOFP ABC", "ABC dev@100 ABC");
+	of_unittest_printf_one(test, np, "%10pOFP", "   dev@100");
+	of_unittest_printf_one(test, np, "%-10pOFP", "dev@100   ");
+	of_unittest_printf_one(test, of_root, "%pOFP", "/");
+	of_unittest_printf_one(test, np, "%pOFF", "----");
+	of_unittest_printf_one(test, np, "%pOFPF", "dev@100:----");
+	of_unittest_printf_one(test,
+			       np,
+			       "%pOFPFPc",
+			       "dev@100:----:dev@100:test-sub-device");
+	of_unittest_printf_one(test, np, "%pOFc", "test-sub-device");
+	of_unittest_printf_one(test, np, "%pOFC",
 			"\"test-sub-device\",\"test-compat2\",\"test-compat3\"");
 }
 
@@ -323,7 +338,7 @@ struct node_hash {
 };
 
 static DEFINE_HASHTABLE(phandle_ht, 8);
-static void __init of_unittest_check_phandles(void)
+static void of_unittest_check_phandles(struct kunit *test)
 {
 	struct device_node *np;
 	struct node_hash *nh;
@@ -335,24 +350,26 @@ static void __init of_unittest_check_phandles(void)
 			continue;
 
 		hash_for_each_possible(phandle_ht, nh, node, np->phandle) {
+			KUNIT_EXPECT_NE_MSG(
+				test, nh->np->phandle, np->phandle,
+				"Duplicate phandle! %i used by %pOF and %pOF\n",
+				np->phandle, nh->np, np);
 			if (nh->np->phandle == np->phandle) {
-				pr_info("Duplicate phandle! %i used by %pOF and %pOF\n",
-					np->phandle, nh->np, np);
 				dup_count++;
 				break;
 			}
 		}
 
 		nh = kzalloc(sizeof(*nh), GFP_KERNEL);
-		if (WARN_ON(!nh))
-			return;
+		KUNIT_ASSERT_NOT_ERR_OR_NULL(test, nh);
 
 		nh->np = np;
 		hash_add(phandle_ht, &nh->node, np->phandle);
 		phandle_count++;
 	}
-	unittest(dup_count == 0, "Found %i duplicates in %i phandles\n",
-		 dup_count, phandle_count);
+	KUNIT_EXPECT_EQ_MSG(test, dup_count, 0,
+			    "Found %i duplicates in %i phandles\n",
+			    dup_count, phandle_count);
 
 	/* Clean up */
 	hash_for_each_safe(phandle_ht, i, tmp, nh, node) {
@@ -361,20 +378,21 @@ static void __init of_unittest_check_phandles(void)
 	}
 }
 
-static void __init of_unittest_parse_phandle_with_args(void)
+static void of_unittest_parse_phandle_with_args(struct kunit *test)
 {
 	struct device_node *np;
 	struct of_phandle_args args;
-	int i, rc;
+	int i, rc = 0;
 
 	np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
-	if (!np) {
-		pr_err("missing testcase data\n");
-		return;
-	}
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
 
-	rc = of_count_phandle_with_args(np, "phandle-list", "#phandle-cells");
-	unittest(rc == 7, "of_count_phandle_with_args() returned %i, expected 7\n", rc);
+	KUNIT_EXPECT_EQ_MSG(
+		test,
+		of_count_phandle_with_args(
+			np, "phandle-list", "#phandle-cells"),
+		7,
+		"of_count_phandle_with_args() did not return 7\n");
 
 	for (i = 0; i < 8; i++) {
 		bool passed = true;
@@ -428,85 +446,91 @@ static void __init of_unittest_parse_phandle_with_args(void)
 			passed = false;
 		}
 
-		unittest(passed, "index %i - data error on node %pOF rc=%i\n",
-			 i, args.np, rc);
+		KUNIT_EXPECT_TRUE_MSG(
+			test, passed,
+			"index %i - data error on node %pOF rc=%i\n",
+			i, args.np, rc);
 	}
 
 	/* Check for missing list property */
 	memset(&args, 0, sizeof(args));
-	rc = of_parse_phandle_with_args(np, "phandle-list-missing",
-					"#phandle-cells", 0, &args);
-	unittest(rc == -ENOENT, "expected:%i got:%i\n", -ENOENT, rc);
-	rc = of_count_phandle_with_args(np, "phandle-list-missing",
-					"#phandle-cells");
-	unittest(rc == -ENOENT, "expected:%i got:%i\n", -ENOENT, rc);
+	KUNIT_EXPECT_EQ(
+		test,
+		of_parse_phandle_with_args(
+			np, "phandle-list-missing", "#phandle-cells", 0, &args),
+		-ENOENT);
+	KUNIT_EXPECT_EQ(
+		test,
+		of_count_phandle_with_args(
+			np, "phandle-list-missing", "#phandle-cells"),
+		-ENOENT);
 
 	/* Check for missing cells property */
 	memset(&args, 0, sizeof(args));
-	rc = of_parse_phandle_with_args(np, "phandle-list",
-					"#phandle-cells-missing", 0, &args);
-	unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
-	rc = of_count_phandle_with_args(np, "phandle-list",
-					"#phandle-cells-missing");
-	unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
+	KUNIT_EXPECT_EQ(
+		test,
+		of_parse_phandle_with_args(
+			np, "phandle-list", "#phandle-cells-missing", 0, &args),
+		-EINVAL);
+	KUNIT_EXPECT_EQ(
+		test,
+		of_count_phandle_with_args(
+			np, "phandle-list", "#phandle-cells-missing"),
+		-EINVAL);
 
 	/* Check for bad phandle in list */
 	memset(&args, 0, sizeof(args));
-	rc = of_parse_phandle_with_args(np, "phandle-list-bad-phandle",
-					"#phandle-cells", 0, &args);
-	unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
-	rc = of_count_phandle_with_args(np, "phandle-list-bad-phandle",
-					"#phandle-cells");
-	unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
+	KUNIT_EXPECT_EQ(
+		test,
+		of_parse_phandle_with_args(np, "phandle-list-bad-phandle",
+					   "#phandle-cells", 0, &args),
+		-EINVAL);
+	KUNIT_EXPECT_EQ(
+		test,
+		of_count_phandle_with_args(
+			np, "phandle-list-bad-phandle", "#phandle-cells"),
+		-EINVAL);
 
 	/* Check for incorrectly formed argument list */
 	memset(&args, 0, sizeof(args));
-	rc = of_parse_phandle_with_args(np, "phandle-list-bad-args",
-					"#phandle-cells", 1, &args);
-	unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
-	rc = of_count_phandle_with_args(np, "phandle-list-bad-args",
-					"#phandle-cells");
-	unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
+	KUNIT_EXPECT_EQ(
+		test,
+		of_parse_phandle_with_args(np, "phandle-list-bad-args",
+					   "#phandle-cells", 1, &args),
+		-EINVAL);
+	KUNIT_EXPECT_EQ(
+		test,
+		of_count_phandle_with_args(
+			np, "phandle-list-bad-args", "#phandle-cells"),
+		-EINVAL);
 }
 
-static void __init of_unittest_parse_phandle_with_args_map(void)
+static void of_unittest_parse_phandle_with_args_map(struct kunit *test)
 {
 	struct device_node *np, *p0, *p1, *p2, *p3;
 	struct of_phandle_args args;
 	int i, rc;
 
 	np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-b");
-	if (!np) {
-		pr_err("missing testcase data\n");
-		return;
-	}
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
 
 	p0 = of_find_node_by_path("/testcase-data/phandle-tests/provider0");
-	if (!p0) {
-		pr_err("missing testcase data\n");
-		return;
-	}
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, p0);
 
 	p1 = of_find_node_by_path("/testcase-data/phandle-tests/provider1");
-	if (!p1) {
-		pr_err("missing testcase data\n");
-		return;
-	}
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, p1);
 
 	p2 = of_find_node_by_path("/testcase-data/phandle-tests/provider2");
-	if (!p2) {
-		pr_err("missing testcase data\n");
-		return;
-	}
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, p2);
 
 	p3 = of_find_node_by_path("/testcase-data/phandle-tests/provider3");
-	if (!p3) {
-		pr_err("missing testcase data\n");
-		return;
-	}
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, p3);
 
-	rc = of_count_phandle_with_args(np, "phandle-list", "#phandle-cells");
-	unittest(rc == 7, "of_count_phandle_with_args() returned %i, expected 7\n", rc);
+	KUNIT_EXPECT_EQ(test,
+		       of_count_phandle_with_args(np,
+						  "phandle-list",
+						  "#phandle-cells"),
+		       7);
 
 	for (i = 0; i < 8; i++) {
 		bool passed = true;
@@ -564,121 +588,186 @@ static void __init of_unittest_parse_phandle_with_args_map(void)
 			passed = false;
 		}
 
-		unittest(passed, "index %i - data error on node %s rc=%i\n",
-			 i, args.np->full_name, rc);
+		KUNIT_EXPECT_TRUE_MSG(
+			test, passed,
+			"index %i - data error on node %s rc=%i\n",
+			i, (args.np ? args.np->full_name : "missing np"), rc);
 	}
 
 	/* Check for missing list property */
 	memset(&args, 0, sizeof(args));
-	rc = of_parse_phandle_with_args_map(np, "phandle-list-missing",
-					    "phandle", 0, &args);
-	unittest(rc == -ENOENT, "expected:%i got:%i\n", -ENOENT, rc);
+	KUNIT_EXPECT_EQ(
+		test,
+		of_parse_phandle_with_args_map(
+			np, "phandle-list-missing", "phandle", 0, &args),
+		-ENOENT);
 
 	/* Check for missing cells,map,mask property */
 	memset(&args, 0, sizeof(args));
-	rc = of_parse_phandle_with_args_map(np, "phandle-list",
-					    "phandle-missing", 0, &args);
-	unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
+	KUNIT_EXPECT_EQ(
+		test,
+		of_parse_phandle_with_args_map(
+			np, "phandle-list", "phandle-missing", 0, &args),
+		-EINVAL);
 
 	/* Check for bad phandle in list */
 	memset(&args, 0, sizeof(args));
-	rc = of_parse_phandle_with_args_map(np, "phandle-list-bad-phandle",
-					    "phandle", 0, &args);
-	unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
+	KUNIT_EXPECT_EQ(
+		test,
+		of_parse_phandle_with_args_map(
+			np, "phandle-list-bad-phandle", "phandle", 0, &args),
+		-EINVAL);
 
 	/* Check for incorrectly formed argument list */
 	memset(&args, 0, sizeof(args));
-	rc = of_parse_phandle_with_args_map(np, "phandle-list-bad-args",
-					    "phandle", 1, &args);
-	unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
+	KUNIT_EXPECT_EQ(
+		test,
+		of_parse_phandle_with_args_map(
+			np, "phandle-list-bad-args", "phandle", 1, &args),
+		-EINVAL);
 }
 
-static void __init of_unittest_property_string(void)
+static void of_unittest_property_string(struct kunit *test)
 {
 	const char *strings[4];
 	struct device_node *np;
 	int rc;
 
 	np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
-	if (!np) {
-		pr_err("No testcase data in device tree\n");
-		return;
-	}
-
-	rc = of_property_match_string(np, "phandle-list-names", "first");
-	unittest(rc == 0, "first expected:0 got:%i\n", rc);
-	rc = of_property_match_string(np, "phandle-list-names", "second");
-	unittest(rc == 1, "second expected:1 got:%i\n", rc);
-	rc = of_property_match_string(np, "phandle-list-names", "third");
-	unittest(rc == 2, "third expected:2 got:%i\n", rc);
-	rc = of_property_match_string(np, "phandle-list-names", "fourth");
-	unittest(rc == -ENODATA, "unmatched string; rc=%i\n", rc);
-	rc = of_property_match_string(np, "missing-property", "blah");
-	unittest(rc == -EINVAL, "missing property; rc=%i\n", rc);
-	rc = of_property_match_string(np, "empty-property", "blah");
-	unittest(rc == -ENODATA, "empty property; rc=%i\n", rc);
-	rc = of_property_match_string(np, "unterminated-string", "blah");
-	unittest(rc == -EILSEQ, "unterminated string; rc=%i\n", rc);
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+
+	KUNIT_EXPECT_EQ(
+		test,
+		of_property_match_string(np, "phandle-list-names", "first"),
+		0);
+	KUNIT_EXPECT_EQ(
+		test,
+		of_property_match_string(np, "phandle-list-names", "second"),
+		1);
+	KUNIT_EXPECT_EQ(
+		test,
+		of_property_match_string(np, "phandle-list-names", "third"),
+		2);
+	KUNIT_EXPECT_EQ_MSG(
+		test,
+		of_property_match_string(np, "phandle-list-names", "fourth"),
+		-ENODATA,
+		"unmatched string");
+	KUNIT_EXPECT_EQ_MSG(
+		test,
+		of_property_match_string(np, "missing-property", "blah"),
+		-EINVAL,
+		"missing property");
+	KUNIT_EXPECT_EQ_MSG(
+		test,
+		of_property_match_string(np, "empty-property", "blah"),
+		-ENODATA,
+		"empty property");
+	KUNIT_EXPECT_EQ_MSG(
+		test,
+		of_property_match_string(np, "unterminated-string", "blah"),
+		-EILSEQ,
+		"unterminated string");
 
 	/* of_property_count_strings() tests */
-	rc = of_property_count_strings(np, "string-property");
-	unittest(rc == 1, "Incorrect string count; rc=%i\n", rc);
-	rc = of_property_count_strings(np, "phandle-list-names");
-	unittest(rc == 3, "Incorrect string count; rc=%i\n", rc);
-	rc = of_property_count_strings(np, "unterminated-string");
-	unittest(rc == -EILSEQ, "unterminated string; rc=%i\n", rc);
-	rc = of_property_count_strings(np, "unterminated-string-list");
-	unittest(rc == -EILSEQ, "unterminated string array; rc=%i\n", rc);
+	KUNIT_EXPECT_EQ(test,
+			of_property_count_strings(np, "string-property"), 1);
+	KUNIT_EXPECT_EQ(test,
+			of_property_count_strings(np, "phandle-list-names"), 3);
+	KUNIT_EXPECT_EQ_MSG(
+		test,
+		of_property_count_strings(np, "unterminated-string"), -EILSEQ,
+		"unterminated string");
+	KUNIT_EXPECT_EQ_MSG(
+		test,
+		of_property_count_strings(np, "unterminated-string-list"),
+		-EILSEQ,
+		"unterminated string array");
 
 	/* of_property_read_string_index() tests */
 	rc = of_property_read_string_index(np, "string-property", 0, strings);
-	unittest(rc == 0 && !strcmp(strings[0], "foobar"), "of_property_read_string_index() failure; rc=%i\n", rc);
+	KUNIT_ASSERT_EQ(test, rc, 0);
+	KUNIT_EXPECT_STREQ(test, strings[0], "foobar");
+
 	strings[0] = NULL;
 	rc = of_property_read_string_index(np, "string-property", 1, strings);
-	unittest(rc == -ENODATA && strings[0] == NULL, "of_property_read_string_index() failure; rc=%i\n", rc);
+	KUNIT_EXPECT_EQ(test, rc, -ENODATA);
+	KUNIT_EXPECT_EQ(test, strings[0], NULL);
+
 	rc = of_property_read_string_index(np, "phandle-list-names", 0, strings);
-	unittest(rc == 0 && !strcmp(strings[0], "first"), "of_property_read_string_index() failure; rc=%i\n", rc);
+	KUNIT_ASSERT_EQ(test, rc, 0);
+	KUNIT_EXPECT_STREQ(test, strings[0], "first");
+
 	rc = of_property_read_string_index(np, "phandle-list-names", 1, strings);
-	unittest(rc == 0 && !strcmp(strings[0], "second"), "of_property_read_string_index() failure; rc=%i\n", rc);
+	KUNIT_ASSERT_EQ(test, rc, 0);
+	KUNIT_EXPECT_STREQ(test, strings[0], "second");
+
 	rc = of_property_read_string_index(np, "phandle-list-names", 2, strings);
-	unittest(rc == 0 && !strcmp(strings[0], "third"), "of_property_read_string_index() failure; rc=%i\n", rc);
+	KUNIT_ASSERT_EQ(test, rc, 0);
+	KUNIT_EXPECT_STREQ(test, strings[0], "third");
+
 	strings[0] = NULL;
 	rc = of_property_read_string_index(np, "phandle-list-names", 3, strings);
-	unittest(rc == -ENODATA && strings[0] == NULL, "of_property_read_string_index() failure; rc=%i\n", rc);
+	KUNIT_EXPECT_EQ(test, rc, -ENODATA);
+	KUNIT_EXPECT_EQ(test, strings[0], NULL);
+
 	strings[0] = NULL;
 	rc = of_property_read_string_index(np, "unterminated-string", 0, strings);
-	unittest(rc == -EILSEQ && strings[0] == NULL, "of_property_read_string_index() failure; rc=%i\n", rc);
+	KUNIT_EXPECT_EQ(test, rc, -EILSEQ);
+	KUNIT_EXPECT_EQ(test, strings[0], NULL);
+
 	rc = of_property_read_string_index(np, "unterminated-string-list", 0, strings);
-	unittest(rc == 0 && !strcmp(strings[0], "first"), "of_property_read_string_index() failure; rc=%i\n", rc);
+	KUNIT_ASSERT_EQ(test, rc, 0);
+	KUNIT_EXPECT_STREQ(test, strings[0], "first");
+
 	strings[0] = NULL;
 	rc = of_property_read_string_index(np, "unterminated-string-list", 2, strings); /* should fail */
-	unittest(rc == -EILSEQ && strings[0] == NULL, "of_property_read_string_index() failure; rc=%i\n", rc);
-	strings[1] = NULL;
+	KUNIT_EXPECT_EQ(test, rc, -EILSEQ);
+	KUNIT_EXPECT_EQ(test, strings[0], NULL);
 
 	/* of_property_read_string_array() tests */
-	rc = of_property_read_string_array(np, "string-property", strings, 4);
-	unittest(rc == 1, "Incorrect string count; rc=%i\n", rc);
-	rc = of_property_read_string_array(np, "phandle-list-names", strings, 4);
-	unittest(rc == 3, "Incorrect string count; rc=%i\n", rc);
-	rc = of_property_read_string_array(np, "unterminated-string", strings, 4);
-	unittest(rc == -EILSEQ, "unterminated string; rc=%i\n", rc);
+	strings[1] = NULL;
+	KUNIT_EXPECT_EQ(
+		test,
+		of_property_read_string_array(
+			np, "string-property", strings, 4),
+		1);
+	KUNIT_EXPECT_EQ(
+		test,
+		of_property_read_string_array(
+			np, "phandle-list-names", strings, 4),
+		3);
+	KUNIT_EXPECT_EQ_MSG(
+		test,
+		of_property_read_string_array(
+			np, "unterminated-string", strings, 4),
+		-EILSEQ,
+		"unterminated string");
 	/* -- An incorrectly formed string should cause a failure */
-	rc = of_property_read_string_array(np, "unterminated-string-list", strings, 4);
-	unittest(rc == -EILSEQ, "unterminated string array; rc=%i\n", rc);
+	KUNIT_EXPECT_EQ_MSG(
+		test,
+		of_property_read_string_array(
+			np, "unterminated-string-list", strings, 4),
+		-EILSEQ,
+		"unterminated string array");
 	/* -- parsing the correctly formed strings should still work: */
 	strings[2] = NULL;
 	rc = of_property_read_string_array(np, "unterminated-string-list", strings, 2);
-	unittest(rc == 2 && strings[2] == NULL, "of_property_read_string_array() failure; rc=%i\n", rc);
+	KUNIT_EXPECT_EQ(test, rc, 2);
+	KUNIT_EXPECT_EQ(test, strings[2], NULL);
+
 	strings[1] = NULL;
 	rc = of_property_read_string_array(np, "phandle-list-names", strings, 1);
-	unittest(rc == 1 && strings[1] == NULL, "Overwrote end of string array; rc=%i, str='%s'\n", rc, strings[1]);
+	KUNIT_ASSERT_EQ(test, rc, 1);
+	KUNIT_EXPECT_EQ_MSG(test, strings[1], NULL,
+			    "Overwrote end of string array");
 }
 
 #define propcmp(p1, p2) (((p1)->length == (p2)->length) && \
 			(p1)->value && (p2)->value && \
 			!memcmp((p1)->value, (p2)->value, (p1)->length) && \
 			!strcmp((p1)->name, (p2)->name))
-static void __init of_unittest_property_copy(void)
+static void of_unittest_property_copy(struct kunit *test)
 {
 #ifdef CONFIG_OF_DYNAMIC
 	struct property p1 = { .name = "p1", .length = 0, .value = "" };
@@ -686,20 +775,24 @@ static void __init of_unittest_property_copy(void)
 	struct property *new;
 
 	new = __of_prop_dup(&p1, GFP_KERNEL);
-	unittest(new && propcmp(&p1, new), "empty property didn't copy correctly\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, new);
+	KUNIT_EXPECT_TRUE_MSG(test, propcmp(&p1, new),
+			      "empty property didn't copy correctly");
 	kfree(new->value);
 	kfree(new->name);
 	kfree(new);
 
 	new = __of_prop_dup(&p2, GFP_KERNEL);
-	unittest(new && propcmp(&p2, new), "non-empty property didn't copy correctly\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, new);
+	KUNIT_EXPECT_TRUE_MSG(test, propcmp(&p2, new),
+			      "non-empty property didn't copy correctly");
 	kfree(new->value);
 	kfree(new->name);
 	kfree(new);
 #endif
 }
 
-static void __init of_unittest_changeset(void)
+static void of_unittest_changeset(struct kunit *test)
 {
 #ifdef CONFIG_OF_DYNAMIC
 	struct property *ppadd, padd = { .name = "prop-add", .length = 1, .value = "" };
@@ -712,32 +805,32 @@ static void __init of_unittest_changeset(void)
 	struct of_changeset chgset;
 
 	n1 = __of_node_dup(NULL, "n1");
-	unittest(n1, "testcase setup failure\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, n1);
 
 	n2 = __of_node_dup(NULL, "n2");
-	unittest(n2, "testcase setup failure\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, n2);
 
 	n21 = __of_node_dup(NULL, "n21");
-	unittest(n21, "testcase setup failure %p\n", n21);
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, n21);
 
 	nchangeset = of_find_node_by_path("/testcase-data/changeset");
 	nremove = of_get_child_by_name(nchangeset, "node-remove");
-	unittest(nremove, "testcase setup failure\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, nremove);
 
 	ppadd = __of_prop_dup(&padd, GFP_KERNEL);
-	unittest(ppadd, "testcase setup failure\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ppadd);
 
 	ppname_n1  = __of_prop_dup(&pname_n1, GFP_KERNEL);
-	unittest(ppname_n1, "testcase setup failure\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ppname_n1);
 
 	ppname_n2  = __of_prop_dup(&pname_n2, GFP_KERNEL);
-	unittest(ppname_n2, "testcase setup failure\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ppname_n2);
 
 	ppname_n21 = __of_prop_dup(&pname_n21, GFP_KERNEL);
-	unittest(ppname_n21, "testcase setup failure\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ppname_n21);
 
 	ppupdate = __of_prop_dup(&pupdate, GFP_KERNEL);
-	unittest(ppupdate, "testcase setup failure\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ppupdate);
 
 	parent = nchangeset;
 	n1->parent = parent;
@@ -745,54 +838,72 @@ static void __init of_unittest_changeset(void)
 	n21->parent = n2;
 
 	ppremove = of_find_property(parent, "prop-remove", NULL);
-	unittest(ppremove, "failed to find removal prop");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ppremove);
 
 	of_changeset_init(&chgset);
 
-	unittest(!of_changeset_attach_node(&chgset, n1), "fail attach n1\n");
-	unittest(!of_changeset_add_property(&chgset, n1, ppname_n1), "fail add prop name\n");
-
-	unittest(!of_changeset_attach_node(&chgset, n2), "fail attach n2\n");
-	unittest(!of_changeset_add_property(&chgset, n2, ppname_n2), "fail add prop name\n");
-
-	unittest(!of_changeset_detach_node(&chgset, nremove), "fail remove node\n");
-	unittest(!of_changeset_add_property(&chgset, n21, ppname_n21), "fail add prop name\n");
-
-	unittest(!of_changeset_attach_node(&chgset, n21), "fail attach n21\n");
-
-	unittest(!of_changeset_add_property(&chgset, parent, ppadd), "fail add prop prop-add\n");
-	unittest(!of_changeset_update_property(&chgset, parent, ppupdate), "fail update prop\n");
-	unittest(!of_changeset_remove_property(&chgset, parent, ppremove), "fail remove prop\n");
-
-	unittest(!of_changeset_apply(&chgset), "apply failed\n");
+	KUNIT_EXPECT_FALSE_MSG(test, of_changeset_attach_node(&chgset, n1),
+			       "fail attach n1\n");
+	KUNIT_EXPECT_FALSE_MSG(
+		test, of_changeset_add_property(&chgset, n1, ppname_n1),
+		"fail add prop name\n");
+
+	KUNIT_EXPECT_FALSE_MSG(test, of_changeset_attach_node(&chgset, n2),
+			       "fail attach n2\n");
+	KUNIT_EXPECT_FALSE_MSG(
+		test, of_changeset_add_property(&chgset, n2, ppname_n2),
+			       "fail add prop name\n");
+
+	KUNIT_EXPECT_FALSE_MSG(test, of_changeset_detach_node(&chgset, nremove),
+			       "fail remove node\n");
+	KUNIT_EXPECT_FALSE_MSG(
+		test, of_changeset_add_property(&chgset, n21, ppname_n21),
+		"fail add prop name\n");
+
+	KUNIT_EXPECT_FALSE_MSG(test, of_changeset_attach_node(&chgset, n21),
+			       "fail attach n21\n");
+
+	KUNIT_EXPECT_FALSE_MSG(
+		test,
+		of_changeset_add_property(&chgset, parent, ppadd),
+		"fail add prop prop-add\n");
+	KUNIT_EXPECT_FALSE_MSG(
+		test,
+		of_changeset_update_property(&chgset, parent, ppupdate),
+		"fail update prop\n");
+	KUNIT_EXPECT_FALSE_MSG(
+		test,
+		of_changeset_remove_property(&chgset, parent, ppremove),
+		"fail remove prop\n");
+
+	KUNIT_EXPECT_FALSE_MSG(test, of_changeset_apply(&chgset),
+			       "apply failed\n");
 
 	of_node_put(nchangeset);
 
 	/* Make sure node names are constructed correctly */
-	unittest((np = of_find_node_by_path("/testcase-data/changeset/n2/n21")),
-		 "'%pOF' not added\n", n21);
+	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(
+		test,
+		np = of_find_node_by_path("/testcase-data/changeset/n2/n21"),
+		"'%pOF' not added\n", n21);
 	of_node_put(np);
 
-	unittest(!of_changeset_revert(&chgset), "revert failed\n");
+	KUNIT_EXPECT_FALSE(test, of_changeset_revert(&chgset));
 
 	of_changeset_destroy(&chgset);
 #endif
 }
 
-static void __init of_unittest_parse_interrupts(void)
+static void of_unittest_parse_interrupts(struct kunit *test)
 {
 	struct device_node *np;
 	struct of_phandle_args args;
 	int i, rc;
 
-	if (of_irq_workarounds & OF_IMAP_OLDWORLD_MAC)
-		return;
+	KUNIT_ASSERT_FALSE(test, of_irq_workarounds & OF_IMAP_OLDWORLD_MAC);
 
 	np = of_find_node_by_path("/testcase-data/interrupts/interrupts0");
-	if (!np) {
-		pr_err("missing testcase data\n");
-		return;
-	}
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
 
 	for (i = 0; i < 4; i++) {
 		bool passed = true;
@@ -804,16 +915,15 @@ static void __init of_unittest_parse_interrupts(void)
 		passed &= (args.args_count == 1);
 		passed &= (args.args[0] == (i + 1));
 
-		unittest(passed, "index %i - data error on node %pOF rc=%i\n",
-			 i, args.np, rc);
+		KUNIT_EXPECT_TRUE_MSG(
+			test, passed,
+			"index %i - data error on node %pOF rc=%i\n",
+			i, args.np, rc);
 	}
 	of_node_put(np);
 
 	np = of_find_node_by_path("/testcase-data/interrupts/interrupts1");
-	if (!np) {
-		pr_err("missing testcase data\n");
-		return;
-	}
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
 
 	for (i = 0; i < 4; i++) {
 		bool passed = true;
@@ -850,26 +960,24 @@ static void __init of_unittest_parse_interrupts(void)
 		default:
 			passed = false;
 		}
-		unittest(passed, "index %i - data error on node %pOF rc=%i\n",
-			 i, args.np, rc);
+		KUNIT_EXPECT_TRUE_MSG(
+			test, passed,
+			"index %i - data error on node %pOF rc=%i\n",
+			i, args.np, rc);
 	}
 	of_node_put(np);
 }
 
-static void __init of_unittest_parse_interrupts_extended(void)
+static void of_unittest_parse_interrupts_extended(struct kunit *test)
 {
 	struct device_node *np;
 	struct of_phandle_args args;
 	int i, rc;
 
-	if (of_irq_workarounds & OF_IMAP_OLDWORLD_MAC)
-		return;
+	KUNIT_ASSERT_FALSE(test, of_irq_workarounds & OF_IMAP_OLDWORLD_MAC);
 
 	np = of_find_node_by_path("/testcase-data/interrupts/interrupts-extended0");
-	if (!np) {
-		pr_err("missing testcase data\n");
-		return;
-	}
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
 
 	for (i = 0; i < 7; i++) {
 		bool passed = true;
@@ -924,8 +1032,10 @@ static void __init of_unittest_parse_interrupts_extended(void)
 			passed = false;
 		}
 
-		unittest(passed, "index %i - data error on node %pOF rc=%i\n",
-			 i, args.np, rc);
+		KUNIT_EXPECT_TRUE_MSG(
+			test, passed,
+			"index %i - data error on node %pOF rc=%i\n",
+			i, args.np, rc);
 	}
 	of_node_put(np);
 }
@@ -965,7 +1075,7 @@ static struct {
 	{ .path = "/testcase-data/match-node/name9", .data = "K", },
 };
 
-static void __init of_unittest_match_node(void)
+static void of_unittest_match_node(struct kunit *test)
 {
 	struct device_node *np;
 	const struct of_device_id *match;
@@ -973,26 +1083,19 @@ static void __init of_unittest_match_node(void)
 
 	for (i = 0; i < ARRAY_SIZE(match_node_tests); i++) {
 		np = of_find_node_by_path(match_node_tests[i].path);
-		if (!np) {
-			unittest(0, "missing testcase node %s\n",
-				match_node_tests[i].path);
-			continue;
-		}
+		KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
 
 		match = of_match_node(match_node_table, np);
-		if (!match) {
-			unittest(0, "%s didn't match anything\n",
-				match_node_tests[i].path);
-			continue;
-		}
+		KUNIT_ASSERT_NOT_ERR_OR_NULL_MSG(test, match,
+						 "%s didn't match anything",
+						 match_node_tests[i].path);
 
-		if (strcmp(match->data, match_node_tests[i].data) != 0) {
-			unittest(0, "%s got wrong match. expected %s, got %s\n",
-				match_node_tests[i].path, match_node_tests[i].data,
-				(const char *)match->data);
-			continue;
-		}
-		unittest(1, "passed");
+		KUNIT_EXPECT_STREQ_MSG(
+			test,
+			match->data, match_node_tests[i].data,
+			"%s got wrong match. expected %s, got %s\n",
+			match_node_tests[i].path, match_node_tests[i].data,
+			(const char *)match->data);
 	}
 }
 
@@ -1004,9 +1107,9 @@ static struct resource test_bus_res = {
 static const struct platform_device_info test_bus_info = {
 	.name = "unittest-bus",
 };
-static void __init of_unittest_platform_populate(void)
+static void of_unittest_platform_populate(struct kunit *test)
 {
-	int irq, rc;
+	int irq;
 	struct device_node *np, *child, *grandchild;
 	struct platform_device *pdev, *test_bus;
 	const struct of_device_id match[] = {
@@ -1020,32 +1123,27 @@ static void __init of_unittest_platform_populate(void)
 	/* Test that a missing irq domain returns -EPROBE_DEFER */
 	np = of_find_node_by_path("/testcase-data/testcase-device1");
 	pdev = of_find_device_by_node(np);
-	unittest(pdev, "device 1 creation failed\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, pdev);
 
 	if (!(of_irq_workarounds & OF_IMAP_OLDWORLD_MAC)) {
 		irq = platform_get_irq(pdev, 0);
-		unittest(irq == -EPROBE_DEFER,
-			 "device deferred probe failed - %d\n", irq);
+		KUNIT_ASSERT_EQ(test, irq, -EPROBE_DEFER);
 
 		/* Test that a parsing failure does not return -EPROBE_DEFER */
 		np = of_find_node_by_path("/testcase-data/testcase-device2");
 		pdev = of_find_device_by_node(np);
-		unittest(pdev, "device 2 creation failed\n");
+		KUNIT_ASSERT_NOT_ERR_OR_NULL(test, pdev);
 		irq = platform_get_irq(pdev, 0);
-		unittest(irq < 0 && irq != -EPROBE_DEFER,
-			 "device parsing error failed - %d\n", irq);
+		KUNIT_ASSERT_TRUE_MSG(test, irq < 0 && irq != -EPROBE_DEFER,
+				      "device parsing error failed - %d\n",
+				      irq);
 	}
 
 	np = of_find_node_by_path("/testcase-data/platform-tests");
-	unittest(np, "No testcase data in device tree\n");
-	if (!np)
-		return;
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
 
 	test_bus = platform_device_register_full(&test_bus_info);
-	rc = PTR_ERR_OR_ZERO(test_bus);
-	unittest(!rc, "testbus registration failed; rc=%i\n", rc);
-	if (rc)
-		return;
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, test_bus);
 	test_bus->dev.of_node = np;
 
 	/*
@@ -1060,17 +1158,19 @@ static void __init of_unittest_platform_populate(void)
 	of_platform_populate(np, match, NULL, &test_bus->dev);
 	for_each_child_of_node(np, child) {
 		for_each_child_of_node(child, grandchild)
-			unittest(of_find_device_by_node(grandchild),
-				 "Could not create device for node '%pOFn'\n",
-				 grandchild);
+			KUNIT_EXPECT_TRUE_MSG(
+				test, of_find_device_by_node(grandchild),
+				"Could not create device for node '%pOFn'\n",
+				grandchild);
 	}
 
 	of_platform_depopulate(&test_bus->dev);
 	for_each_child_of_node(np, child) {
 		for_each_child_of_node(child, grandchild)
-			unittest(!of_find_device_by_node(grandchild),
-				 "device didn't get destroyed '%pOFn'\n",
-				 grandchild);
+			KUNIT_EXPECT_FALSE_MSG(
+				test, of_find_device_by_node(grandchild),
+				"device didn't get destroyed '%pOFn'\n",
+				grandchild);
 	}
 
 	platform_device_unregister(test_bus);
@@ -1171,7 +1271,7 @@ static void attach_node_and_children(struct device_node *np)
  *	unittest_data_add - Reads, copies data from
  *	linked tree and attaches it to the live tree
  */
-static int __init unittest_data_add(void)
+static int unittest_data_add(void)
 {
 	void *unittest_data;
 	struct device_node *unittest_data_node, *np;
@@ -1242,7 +1342,7 @@ static int __init unittest_data_add(void)
 }
 
 #ifdef CONFIG_OF_OVERLAY
-static int __init overlay_data_apply(const char *overlay_name, int *overlay_id);
+static int overlay_data_apply(const char *overlay_name, int *overlay_id);
 
 static int unittest_probe(struct platform_device *pdev)
 {
@@ -1471,172 +1571,146 @@ static void of_unittest_destroy_tracked_overlays(void)
 	} while (defers > 0);
 }
 
-static int __init of_unittest_apply_overlay(int overlay_nr, int *overlay_id)
+static int of_unittest_apply_overlay(struct kunit *test,
+				     int overlay_nr,
+				     int *overlay_id)
 {
 	const char *overlay_name;
 
 	overlay_name = overlay_name_from_nr(overlay_nr);
 
-	if (!overlay_data_apply(overlay_name, overlay_id)) {
-		unittest(0, "could not apply overlay \"%s\"\n",
-				overlay_name);
-		return -EFAULT;
-	}
+	KUNIT_ASSERT_TRUE_MSG(test,
+			      overlay_data_apply(overlay_name, overlay_id),
+			      "could not apply overlay \"%s\"\n", overlay_name);
 	of_unittest_track_overlay(*overlay_id);
 
 	return 0;
 }
 
 /* apply an overlay while checking before and after states */
-static int __init of_unittest_apply_overlay_check(int overlay_nr,
+static int of_unittest_apply_overlay_check(struct kunit *test, int overlay_nr,
 		int unittest_nr, int before, int after,
 		enum overlay_type ovtype)
 {
 	int ret, ovcs_id;
 
 	/* unittest device must not be in before state */
-	if (of_unittest_device_exists(unittest_nr, ovtype) != before) {
-		unittest(0, "%s with device @\"%s\" %s\n",
-				overlay_name_from_nr(overlay_nr),
-				unittest_path(unittest_nr, ovtype),
-				!before ? "enabled" : "disabled");
-		return -EINVAL;
-	}
+	KUNIT_ASSERT_EQ_MSG(
+		test, of_unittest_device_exists(unittest_nr, ovtype), before,
+		"%s with device @\"%s\" %s\n", overlay_name_from_nr(overlay_nr),
+		unittest_path(unittest_nr, ovtype),
+		!before ? "enabled" : "disabled");
 
 	ovcs_id = 0;
-	ret = of_unittest_apply_overlay(overlay_nr, &ovcs_id);
+	ret = of_unittest_apply_overlay(test, overlay_nr, &ovcs_id);
 	if (ret != 0) {
-		/* of_unittest_apply_overlay already called unittest() */
+		/* of_unittest_apply_overlay already set expectation */
 		return ret;
 	}
 
 	/* unittest device must be to set to after state */
-	if (of_unittest_device_exists(unittest_nr, ovtype) != after) {
-		unittest(0, "%s failed to create @\"%s\" %s\n",
-				overlay_name_from_nr(overlay_nr),
-				unittest_path(unittest_nr, ovtype),
-				!after ? "enabled" : "disabled");
-		return -EINVAL;
-	}
+	KUNIT_ASSERT_EQ_MSG(
+		test, of_unittest_device_exists(unittest_nr, ovtype), after,
+		"%s failed to create @\"%s\" %s\n",
+		overlay_name_from_nr(overlay_nr),
+		unittest_path(unittest_nr, ovtype),
+		!after ? "enabled" : "disabled");
 
 	return 0;
 }
 
 /* apply an overlay and then revert it while checking before, after states */
-static int __init of_unittest_apply_revert_overlay_check(int overlay_nr,
+static int of_unittest_apply_revert_overlay_check(
+		struct kunit *test, int overlay_nr,
 		int unittest_nr, int before, int after,
 		enum overlay_type ovtype)
 {
 	int ret, ovcs_id;
 
 	/* unittest device must be in before state */
-	if (of_unittest_device_exists(unittest_nr, ovtype) != before) {
-		unittest(0, "%s with device @\"%s\" %s\n",
-				overlay_name_from_nr(overlay_nr),
-				unittest_path(unittest_nr, ovtype),
-				!before ? "enabled" : "disabled");
-		return -EINVAL;
-	}
+	KUNIT_ASSERT_EQ_MSG(
+		test, of_unittest_device_exists(unittest_nr, ovtype), before,
+		"%s with device @\"%s\" %s\n", overlay_name_from_nr(overlay_nr),
+		unittest_path(unittest_nr, ovtype),
+		!before ? "enabled" : "disabled");
 
 	/* apply the overlay */
 	ovcs_id = 0;
-	ret = of_unittest_apply_overlay(overlay_nr, &ovcs_id);
+	ret = of_unittest_apply_overlay(test, overlay_nr, &ovcs_id);
 	if (ret != 0) {
-		/* of_unittest_apply_overlay already called unittest() */
+		/* of_unittest_apply_overlay already set expectation. */
 		return ret;
 	}
 
 	/* unittest device must be in after state */
-	if (of_unittest_device_exists(unittest_nr, ovtype) != after) {
-		unittest(0, "%s failed to create @\"%s\" %s\n",
-				overlay_name_from_nr(overlay_nr),
-				unittest_path(unittest_nr, ovtype),
-				!after ? "enabled" : "disabled");
-		return -EINVAL;
-	}
-
-	ret = of_overlay_remove(&ovcs_id);
-	if (ret != 0) {
-		unittest(0, "%s failed to be destroyed @\"%s\"\n",
-				overlay_name_from_nr(overlay_nr),
-				unittest_path(unittest_nr, ovtype));
-		return ret;
-	}
+	KUNIT_ASSERT_EQ_MSG(
+		test, of_unittest_device_exists(unittest_nr, ovtype), after,
+		"%s failed to create @\"%s\" %s\n",
+		overlay_name_from_nr(overlay_nr),
+		unittest_path(unittest_nr, ovtype),
+		!after ? "enabled" : "disabled");
+
+	KUNIT_ASSERT_EQ_MSG(test, of_overlay_remove(&ovcs_id), 0,
+			    "%s failed to be destroyed @\"%s\"\n",
+			    overlay_name_from_nr(overlay_nr),
+			    unittest_path(unittest_nr, ovtype));
 
 	/* unittest device must be again in before state */
-	if (of_unittest_device_exists(unittest_nr, PDEV_OVERLAY) != before) {
-		unittest(0, "%s with device @\"%s\" %s\n",
-				overlay_name_from_nr(overlay_nr),
-				unittest_path(unittest_nr, ovtype),
-				!before ? "enabled" : "disabled");
-		return -EINVAL;
-	}
+	KUNIT_ASSERT_EQ_MSG(
+		test,
+		of_unittest_device_exists(unittest_nr, PDEV_OVERLAY), before,
+		"%s with device @\"%s\" %s\n",
+		overlay_name_from_nr(overlay_nr),
+		unittest_path(unittest_nr, ovtype),
+		!before ? "enabled" : "disabled");
 
 	return 0;
 }
 
 /* test activation of device */
-static void __init of_unittest_overlay_0(void)
+static void of_unittest_overlay_0(struct kunit *test)
 {
 	/* device should enable */
-	if (of_unittest_apply_overlay_check(0, 0, 0, 1, PDEV_OVERLAY))
-		return;
-
-	unittest(1, "overlay test %d passed\n", 0);
+	of_unittest_apply_overlay_check(test, 0, 0, 0, 1, PDEV_OVERLAY);
 }
 
 /* test deactivation of device */
-static void __init of_unittest_overlay_1(void)
+static void of_unittest_overlay_1(struct kunit *test)
 {
 	/* device should disable */
-	if (of_unittest_apply_overlay_check(1, 1, 1, 0, PDEV_OVERLAY))
-		return;
-
-	unittest(1, "overlay test %d passed\n", 1);
+	of_unittest_apply_overlay_check(test, 1, 1, 1, 0, PDEV_OVERLAY);
 }
 
 /* test activation of device */
-static void __init of_unittest_overlay_2(void)
+static void of_unittest_overlay_2(struct kunit *test)
 {
 	/* device should enable */
-	if (of_unittest_apply_overlay_check(2, 2, 0, 1, PDEV_OVERLAY))
-		return;
-
-	unittest(1, "overlay test %d passed\n", 2);
+	of_unittest_apply_overlay_check(test, 2, 2, 0, 1, PDEV_OVERLAY);
 }
 
 /* test deactivation of device */
-static void __init of_unittest_overlay_3(void)
+static void of_unittest_overlay_3(struct kunit *test)
 {
 	/* device should disable */
-	if (of_unittest_apply_overlay_check(3, 3, 1, 0, PDEV_OVERLAY))
-		return;
-
-	unittest(1, "overlay test %d passed\n", 3);
+	of_unittest_apply_overlay_check(test, 3, 3, 1, 0, PDEV_OVERLAY);
 }
 
 /* test activation of a full device node */
-static void __init of_unittest_overlay_4(void)
+static void of_unittest_overlay_4(struct kunit *test)
 {
 	/* device should disable */
-	if (of_unittest_apply_overlay_check(4, 4, 0, 1, PDEV_OVERLAY))
-		return;
-
-	unittest(1, "overlay test %d passed\n", 4);
+	of_unittest_apply_overlay_check(test, 4, 4, 0, 1, PDEV_OVERLAY);
 }
 
 /* test overlay apply/revert sequence */
-static void __init of_unittest_overlay_5(void)
+static void of_unittest_overlay_5(struct kunit *test)
 {
 	/* device should disable */
-	if (of_unittest_apply_revert_overlay_check(5, 5, 0, 1, PDEV_OVERLAY))
-		return;
-
-	unittest(1, "overlay test %d passed\n", 5);
+	of_unittest_apply_revert_overlay_check(test, 5, 5, 0, 1, PDEV_OVERLAY);
 }
 
 /* test overlay application in sequence */
-static void __init of_unittest_overlay_6(void)
+static void of_unittest_overlay_6(struct kunit *test)
 {
 	int i, ov_id[2], ovcs_id;
 	int overlay_nr = 6, unittest_nr = 6;
@@ -1645,74 +1719,67 @@ static void __init of_unittest_overlay_6(void)
 
 	/* unittest device must be in before state */
 	for (i = 0; i < 2; i++) {
-		if (of_unittest_device_exists(unittest_nr + i, PDEV_OVERLAY)
-				!= before) {
-			unittest(0, "%s with device @\"%s\" %s\n",
-					overlay_name_from_nr(overlay_nr + i),
-					unittest_path(unittest_nr + i,
-						PDEV_OVERLAY),
-					!before ? "enabled" : "disabled");
-			return;
-		}
+		KUNIT_ASSERT_EQ_MSG(test,
+				    of_unittest_device_exists(unittest_nr + i,
+							      PDEV_OVERLAY),
+				    before,
+				    "%s with device @\"%s\" %s\n",
+				    overlay_name_from_nr(overlay_nr + i),
+				    unittest_path(unittest_nr + i,
+						  PDEV_OVERLAY),
+				    !before ? "enabled" : "disabled");
 	}
 
 	/* apply the overlays */
 	for (i = 0; i < 2; i++) {
-
 		overlay_name = overlay_name_from_nr(overlay_nr + i);
 
-		if (!overlay_data_apply(overlay_name, &ovcs_id)) {
-			unittest(0, "could not apply overlay \"%s\"\n",
-					overlay_name);
-			return;
-		}
+		KUNIT_ASSERT_TRUE_MSG(
+			test, overlay_data_apply(overlay_name, &ovcs_id),
+			"could not apply overlay \"%s\"\n", overlay_name);
 		ov_id[i] = ovcs_id;
 		of_unittest_track_overlay(ov_id[i]);
 	}
 
 	for (i = 0; i < 2; i++) {
 		/* unittest device must be in after state */
-		if (of_unittest_device_exists(unittest_nr + i, PDEV_OVERLAY)
-				!= after) {
-			unittest(0, "overlay @\"%s\" failed @\"%s\" %s\n",
-					overlay_name_from_nr(overlay_nr + i),
-					unittest_path(unittest_nr + i,
-						PDEV_OVERLAY),
-					!after ? "enabled" : "disabled");
-			return;
-		}
+		KUNIT_ASSERT_EQ_MSG(test,
+				    of_unittest_device_exists(unittest_nr + i,
+							      PDEV_OVERLAY),
+				    after,
+				    "overlay @\"%s\" failed @\"%s\" %s\n",
+				    overlay_name_from_nr(overlay_nr + i),
+				    unittest_path(unittest_nr + i,
+						  PDEV_OVERLAY),
+				    !after ? "enabled" : "disabled");
 	}
 
 	for (i = 1; i >= 0; i--) {
 		ovcs_id = ov_id[i];
-		if (of_overlay_remove(&ovcs_id)) {
-			unittest(0, "%s failed destroy @\"%s\"\n",
-					overlay_name_from_nr(overlay_nr + i),
-					unittest_path(unittest_nr + i,
-						PDEV_OVERLAY));
-			return;
-		}
+		KUNIT_ASSERT_FALSE_MSG(
+			test, of_overlay_remove(&ovcs_id),
+			"%s failed destroy @\"%s\"\n",
+			overlay_name_from_nr(overlay_nr + i),
+			unittest_path(unittest_nr + i, PDEV_OVERLAY));
 		of_unittest_untrack_overlay(ov_id[i]);
 	}
 
 	for (i = 0; i < 2; i++) {
 		/* unittest device must be again in before state */
-		if (of_unittest_device_exists(unittest_nr + i, PDEV_OVERLAY)
-				!= before) {
-			unittest(0, "%s with device @\"%s\" %s\n",
-					overlay_name_from_nr(overlay_nr + i),
-					unittest_path(unittest_nr + i,
-						PDEV_OVERLAY),
-					!before ? "enabled" : "disabled");
-			return;
-		}
+		KUNIT_ASSERT_EQ_MSG(test,
+				    of_unittest_device_exists(unittest_nr + i,
+							      PDEV_OVERLAY),
+				    before,
+				    "%s with device @\"%s\" %s\n",
+				    overlay_name_from_nr(overlay_nr + i),
+				    unittest_path(unittest_nr + i,
+						  PDEV_OVERLAY),
+				    !before ? "enabled" : "disabled");
 	}
-
-	unittest(1, "overlay test %d passed\n", 6);
 }
 
 /* test overlay application in sequence */
-static void __init of_unittest_overlay_8(void)
+static void of_unittest_overlay_8(struct kunit *test)
 {
 	int i, ov_id[2], ovcs_id;
 	int overlay_nr = 8, unittest_nr = 8;
@@ -1722,76 +1789,64 @@ static void __init of_unittest_overlay_8(void)
 
 	/* apply the overlays */
 	for (i = 0; i < 2; i++) {
-
 		overlay_name = overlay_name_from_nr(overlay_nr + i);
 
-		if (!overlay_data_apply(overlay_name, &ovcs_id)) {
-			unittest(0, "could not apply overlay \"%s\"\n",
-					overlay_name);
-			return;
-		}
+		KUNIT_ASSERT_TRUE_MSG(
+			test, overlay_data_apply(overlay_name, &ovcs_id),
+			"could not apply overlay \"%s\"\n", overlay_name);
 		ov_id[i] = ovcs_id;
 		of_unittest_track_overlay(ov_id[i]);
 	}
 
 	/* now try to remove first overlay (it should fail) */
 	ovcs_id = ov_id[0];
-	if (!of_overlay_remove(&ovcs_id)) {
-		unittest(0, "%s was destroyed @\"%s\"\n",
-				overlay_name_from_nr(overlay_nr + 0),
-				unittest_path(unittest_nr,
-					PDEV_OVERLAY));
-		return;
-	}
+	KUNIT_ASSERT_TRUE_MSG(
+		test, of_overlay_remove(&ovcs_id),
+		"%s was destroyed @\"%s\"\n",
+		overlay_name_from_nr(overlay_nr + 0),
+		unittest_path(unittest_nr, PDEV_OVERLAY));
 
 	/* removing them in order should work */
 	for (i = 1; i >= 0; i--) {
 		ovcs_id = ov_id[i];
-		if (of_overlay_remove(&ovcs_id)) {
-			unittest(0, "%s not destroyed @\"%s\"\n",
-					overlay_name_from_nr(overlay_nr + i),
-					unittest_path(unittest_nr,
-						PDEV_OVERLAY));
-			return;
-		}
+		KUNIT_ASSERT_FALSE_MSG(
+			test, of_overlay_remove(&ovcs_id),
+			"%s not destroyed @\"%s\"\n",
+			overlay_name_from_nr(overlay_nr + i),
+			unittest_path(unittest_nr, PDEV_OVERLAY));
 		of_unittest_untrack_overlay(ov_id[i]);
 	}
-
-	unittest(1, "overlay test %d passed\n", 8);
 }
 
 /* test insertion of a bus with parent devices */
-static void __init of_unittest_overlay_10(void)
+static void of_unittest_overlay_10(struct kunit *test)
 {
-	int ret;
 	char *child_path;
 
 	/* device should disable */
-	ret = of_unittest_apply_overlay_check(10, 10, 0, 1, PDEV_OVERLAY);
-	if (unittest(ret == 0,
-			"overlay test %d failed; overlay application\n", 10))
-		return;
+	KUNIT_ASSERT_EQ_MSG(
+		test,
+		of_unittest_apply_overlay_check(
+				test, 10, 10, 0, 1, PDEV_OVERLAY),
+		0,
+		"overlay test %d failed; overlay application\n", 10);
 
 	child_path = kasprintf(GFP_KERNEL, "%s/test-unittest101",
 			unittest_path(10, PDEV_OVERLAY));
-	if (unittest(child_path, "overlay test %d failed; kasprintf\n", 10))
-		return;
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, child_path);
 
-	ret = of_path_device_type_exists(child_path, PDEV_OVERLAY);
+	KUNIT_EXPECT_TRUE_MSG(
+		test, of_path_device_type_exists(child_path, PDEV_OVERLAY),
+		"overlay test %d failed; no child device\n", 10);
 	kfree(child_path);
-
-	unittest(ret, "overlay test %d failed; no child device\n", 10);
 }
 
 /* test insertion of a bus with parent devices (and revert) */
-static void __init of_unittest_overlay_11(void)
+static void of_unittest_overlay_11(struct kunit *test)
 {
-	int ret;
-
 	/* device should disable */
-	ret = of_unittest_apply_revert_overlay_check(11, 11, 0, 1,
-			PDEV_OVERLAY);
-	unittest(ret == 0, "overlay test %d failed; overlay apply\n", 11);
+	KUNIT_EXPECT_FALSE(test, of_unittest_apply_revert_overlay_check(
+		test, 11, 11, 0, 1, PDEV_OVERLAY));
 }
 
 #if IS_BUILTIN(CONFIG_I2C) && IS_ENABLED(CONFIG_OF_OVERLAY)
@@ -2013,25 +2068,18 @@ static struct i2c_driver unittest_i2c_mux_driver = {
 
 #endif
 
-static int of_unittest_overlay_i2c_init(void)
+static int of_unittest_overlay_i2c_init(struct kunit *test)
 {
-	int ret;
-
-	ret = i2c_add_driver(&unittest_i2c_dev_driver);
-	if (unittest(ret == 0,
-			"could not register unittest i2c device driver\n"))
-		return ret;
+	KUNIT_ASSERT_EQ_MSG(test, i2c_add_driver(&unittest_i2c_dev_driver), 0,
+			    "could not register unittest i2c device driver\n");
 
-	ret = platform_driver_register(&unittest_i2c_bus_driver);
-	if (unittest(ret == 0,
-			"could not register unittest i2c bus driver\n"))
-		return ret;
+	KUNIT_ASSERT_EQ_MSG(
+		test, platform_driver_register(&unittest_i2c_bus_driver), 0,
+		"could not register unittest i2c bus driver\n");
 
 #if IS_BUILTIN(CONFIG_I2C_MUX)
-	ret = i2c_add_driver(&unittest_i2c_mux_driver);
-	if (unittest(ret == 0,
-			"could not register unittest i2c mux driver\n"))
-		return ret;
+	KUNIT_ASSERT_EQ_MSG(test, i2c_add_driver(&unittest_i2c_mux_driver), 0,
+			    "could not register unittest i2c mux driver\n");
 #endif
 
 	return 0;
@@ -2046,101 +2094,85 @@ static void of_unittest_overlay_i2c_cleanup(void)
 	i2c_del_driver(&unittest_i2c_dev_driver);
 }
 
-static void __init of_unittest_overlay_i2c_12(void)
+static void of_unittest_overlay_i2c_12(struct kunit *test)
 {
 	/* device should enable */
-	if (of_unittest_apply_overlay_check(12, 12, 0, 1, I2C_OVERLAY))
-		return;
-
-	unittest(1, "overlay test %d passed\n", 12);
+	of_unittest_apply_overlay_check(test, 12, 12, 0, 1, I2C_OVERLAY);
 }
 
 /* test deactivation of device */
-static void __init of_unittest_overlay_i2c_13(void)
+static void of_unittest_overlay_i2c_13(struct kunit *test)
 {
 	/* device should disable */
-	if (of_unittest_apply_overlay_check(13, 13, 1, 0, I2C_OVERLAY))
-		return;
-
-	unittest(1, "overlay test %d passed\n", 13);
+	of_unittest_apply_overlay_check(test, 13, 13, 1, 0, I2C_OVERLAY);
 }
 
 /* just check for i2c mux existence */
-static void of_unittest_overlay_i2c_14(void)
+static void of_unittest_overlay_i2c_14(struct kunit *test)
 {
+	KUNIT_SUCCEED(test);
 }
 
-static void __init of_unittest_overlay_i2c_15(void)
+static void of_unittest_overlay_i2c_15(struct kunit *test)
 {
 	/* device should enable */
-	if (of_unittest_apply_overlay_check(15, 15, 0, 1, I2C_OVERLAY))
-		return;
-
-	unittest(1, "overlay test %d passed\n", 15);
+	of_unittest_apply_overlay_check(test, 15, 15, 0, 1, I2C_OVERLAY);
 }
 
 #else
 
-static inline void of_unittest_overlay_i2c_14(void) { }
-static inline void of_unittest_overlay_i2c_15(void) { }
+static inline void of_unittest_overlay_i2c_14(struct kunit *test) { }
+static inline void of_unittest_overlay_i2c_15(struct kunit *test) { }
 
 #endif
 
-static void __init of_unittest_overlay(void)
+static void of_unittest_overlay(struct kunit *test)
 {
 	struct device_node *bus_np = NULL;
 
-	if (platform_driver_register(&unittest_driver)) {
-		unittest(0, "could not register unittest driver\n");
-		goto out;
-	}
+	KUNIT_ASSERT_FALSE_MSG(test, platform_driver_register(&unittest_driver),
+			       "could not register unittest driver\n");
 
 	bus_np = of_find_node_by_path(bus_path);
-	if (bus_np == NULL) {
-		unittest(0, "could not find bus_path \"%s\"\n", bus_path);
-		goto out;
-	}
+	KUNIT_ASSERT_NOT_ERR_OR_NULL_MSG(
+		test, bus_np, "could not find bus_path \"%s\"\n", bus_path);
 
-	if (of_platform_default_populate(bus_np, NULL, NULL)) {
-		unittest(0, "could not populate bus @ \"%s\"\n", bus_path);
-		goto out;
-	}
-
-	if (!of_unittest_device_exists(100, PDEV_OVERLAY)) {
-		unittest(0, "could not find unittest0 @ \"%s\"\n",
-				unittest_path(100, PDEV_OVERLAY));
-		goto out;
-	}
+	KUNIT_ASSERT_FALSE_MSG(
+		test, of_platform_default_populate(bus_np, NULL, NULL),
+		"could not populate bus @ \"%s\"\n", bus_path);
 
-	if (of_unittest_device_exists(101, PDEV_OVERLAY)) {
-		unittest(0, "unittest1 @ \"%s\" should not exist\n",
-				unittest_path(101, PDEV_OVERLAY));
-		goto out;
-	}
+	KUNIT_ASSERT_TRUE_MSG(
+		test, of_unittest_device_exists(100, PDEV_OVERLAY),
+		"could not find unittest0 @ \"%s\"\n",
+		unittest_path(100, PDEV_OVERLAY));
 
-	unittest(1, "basic infrastructure of overlays passed");
+	KUNIT_ASSERT_FALSE_MSG(
+		test, of_unittest_device_exists(101, PDEV_OVERLAY),
+		"unittest1 @ \"%s\" should not exist\n",
+		unittest_path(101, PDEV_OVERLAY));
 
 	/* tests in sequence */
-	of_unittest_overlay_0();
-	of_unittest_overlay_1();
-	of_unittest_overlay_2();
-	of_unittest_overlay_3();
-	of_unittest_overlay_4();
-	of_unittest_overlay_5();
-	of_unittest_overlay_6();
-	of_unittest_overlay_8();
-
-	of_unittest_overlay_10();
-	of_unittest_overlay_11();
+	of_unittest_overlay_0(test);
+	of_unittest_overlay_1(test);
+	of_unittest_overlay_2(test);
+	of_unittest_overlay_3(test);
+	of_unittest_overlay_4(test);
+	of_unittest_overlay_5(test);
+	of_unittest_overlay_6(test);
+	of_unittest_overlay_8(test);
+
+	of_unittest_overlay_10(test);
+	of_unittest_overlay_11(test);
 
 #if IS_BUILTIN(CONFIG_I2C)
-	if (unittest(of_unittest_overlay_i2c_init() == 0, "i2c init failed\n"))
-		goto out;
+	KUNIT_ASSERT_EQ_MSG(test, of_unittest_overlay_i2c_init(test), 0,
+			    "i2c init failed\n");
 
-	of_unittest_overlay_i2c_12();
-	of_unittest_overlay_i2c_13();
-	of_unittest_overlay_i2c_14();
-	of_unittest_overlay_i2c_15();
+	of_unittest_overlay_i2c_12(test);
+	of_unittest_overlay_i2c_13(test);
+	of_unittest_overlay_i2c_14(test);
+	of_unittest_overlay_i2c_15(test);
 
 	of_unittest_overlay_i2c_cleanup();
 #endif
@@ -2152,7 +2184,7 @@ static void __init of_unittest_overlay(void)
 }
 
 #else
-static inline void __init of_unittest_overlay(void) { }
+static inline void of_unittest_overlay(struct kunit *test) { }
 #endif
 
 #ifdef CONFIG_OF_OVERLAY
@@ -2313,7 +2345,7 @@ void __init unittest_unflatten_overlay_base(void)
  *
  * Return 0 on unexpected error.
  */
-static int __init overlay_data_apply(const char *overlay_name, int *overlay_id)
+static int overlay_data_apply(const char *overlay_name, int *overlay_id)
 {
 	struct overlay_info *info;
 	int found = 0;
@@ -2359,19 +2391,17 @@ static int __init overlay_data_apply(const char *overlay_name, int *overlay_id)
  * The first part of the function is _not_ normal overlay usage; it is
  * finishing splicing the base overlay device tree into the live tree.
  */
-static __init void of_unittest_overlay_high_level(void)
+static void of_unittest_overlay_high_level(struct kunit *test)
 {
 	struct device_node *last_sibling;
 	struct device_node *np;
 	struct device_node *of_symbols;
-	struct device_node *overlay_base_symbols;
+	struct device_node *overlay_base_symbols = NULL;
 	struct device_node **pprev;
 	struct property *prop;
 
-	if (!overlay_base_root) {
-		unittest(0, "overlay_base_root not initialized\n");
-		return;
-	}
+	KUNIT_ASSERT_TRUE_MSG(test, overlay_base_root,
+			      "overlay_base_root not initialized\n");
 
 	/*
 	 * Could not fixup phandles in unittest_unflatten_overlay_base()
@@ -2418,11 +2448,9 @@ static __init void of_unittest_overlay_high_level(void)
 	for_each_child_of_node(overlay_base_root, np) {
 		struct device_node *base_child;
 		for_each_child_of_node(of_root, base_child) {
-			if (!strcmp(np->full_name, base_child->full_name)) {
-				unittest(0, "illegal node name in overlay_base %pOFn",
-					 np);
-				return;
-			}
+			KUNIT_ASSERT_STRNEQ_MSG(
+				test, np->full_name, base_child->full_name,
+				"illegal node name in overlay_base %pOFn", np);
 		}
 	}
 
@@ -2456,21 +2484,24 @@ static __init void of_unittest_overlay_high_level(void)
 
 			new_prop = __of_prop_dup(prop, GFP_KERNEL);
 			if (!new_prop) {
-				unittest(0, "__of_prop_dup() of '%s' from overlay_base node __symbols__",
-					 prop->name);
+				KUNIT_FAIL(test,
+					   "__of_prop_dup() of '%s' from overlay_base node __symbols__",
+					   prop->name);
 				goto err_unlock;
 			}
 			if (__of_add_property(of_symbols, new_prop)) {
 				/* "name" auto-generated by unflatten */
 				if (!strcmp(new_prop->name, "name"))
 					continue;
-				unittest(0, "duplicate property '%s' in overlay_base node __symbols__",
-					 prop->name);
+				KUNIT_FAIL(test,
+					   "duplicate property '%s' in overlay_base node __symbols__",
+					   prop->name);
 				goto err_unlock;
 			}
 			if (__of_add_property_sysfs(of_symbols, new_prop)) {
-				unittest(0, "unable to add property '%s' in overlay_base node __symbols__ to sysfs",
-					 prop->name);
+				KUNIT_FAIL(test,
+					   "unable to add property '%s' in overlay_base node __symbols__ to sysfs",
+					   prop->name);
 				goto err_unlock;
 			}
 		}
@@ -2481,20 +2512,24 @@ static __init void of_unittest_overlay_high_level(void)
 
 	/* now do the normal overlay usage test */
 
-	unittest(overlay_data_apply("overlay", NULL),
-		 "Adding overlay 'overlay' failed\n");
+	KUNIT_EXPECT_TRUE_MSG(test, overlay_data_apply("overlay", NULL),
+			      "Adding overlay 'overlay' failed\n");
 
-	unittest(overlay_data_apply("overlay_bad_add_dup_node", NULL),
-		 "Adding overlay 'overlay_bad_add_dup_node' failed\n");
+	KUNIT_EXPECT_TRUE_MSG(
+		test, overlay_data_apply("overlay_bad_add_dup_node", NULL),
+		"Adding overlay 'overlay_bad_add_dup_node' failed\n");
 
-	unittest(overlay_data_apply("overlay_bad_add_dup_prop", NULL),
-		 "Adding overlay 'overlay_bad_add_dup_prop' failed\n");
+	KUNIT_EXPECT_TRUE_MSG(
+		test, overlay_data_apply("overlay_bad_add_dup_prop", NULL),
+		"Adding overlay 'overlay_bad_add_dup_prop' failed\n");
 
-	unittest(overlay_data_apply("overlay_bad_phandle", NULL),
-		 "Adding overlay 'overlay_bad_phandle' failed\n");
+	KUNIT_EXPECT_TRUE_MSG(
+		test, overlay_data_apply("overlay_bad_phandle", NULL),
+		"Adding overlay 'overlay_bad_phandle' failed\n");
 
-	unittest(overlay_data_apply("overlay_bad_symbol", NULL),
-		 "Adding overlay 'overlay_bad_symbol' failed\n");
+	KUNIT_EXPECT_TRUE_MSG(
+		test, overlay_data_apply("overlay_bad_symbol", NULL),
+		"Adding overlay 'overlay_bad_symbol' failed\n");
 
 	return;
 
@@ -2504,57 +2539,52 @@ static __init void of_unittest_overlay_high_level(void)
 
 #else
 
-static inline __init void of_unittest_overlay_high_level(void) {}
+static inline void of_unittest_overlay_high_level(struct kunit *test) {}
 
 #endif
 
-static int __init of_unittest(void)
+static int of_test_init(struct kunit *test)
 {
-	struct device_node *np;
-	int res;
-
 	/* adding data for unittest */
-	res = unittest_data_add();
-	if (res)
-		return res;
+	KUNIT_ASSERT_EQ(test, 0, unittest_data_add());
+
 	if (!of_aliases)
 		of_aliases = of_find_node_by_path("/aliases");
 
-	np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
-	if (!np) {
-		pr_info("No testcase data in device tree; not running tests\n");
-		return 0;
-	}
-	of_node_put(np);
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, of_find_node_by_path(
+		"/testcase-data/phandle-tests/consumer-a"));
 
 	if (IS_ENABLED(CONFIG_UML))
 		unflatten_device_tree();
 
-	pr_info("start of unittest - you will see error messages\n");
-	of_unittest_check_tree_linkage();
-	of_unittest_check_phandles();
-	of_unittest_find_node_by_name();
-	of_unittest_dynamic();
-	of_unittest_parse_phandle_with_args();
-	of_unittest_parse_phandle_with_args_map();
-	of_unittest_printf();
-	of_unittest_property_string();
-	of_unittest_property_copy();
-	of_unittest_changeset();
-	of_unittest_parse_interrupts();
-	of_unittest_parse_interrupts_extended();
-	of_unittest_match_node();
-	of_unittest_platform_populate();
-	of_unittest_overlay();
+	return 0;
+}
 
+static struct kunit_case of_test_cases[] = {
+	KUNIT_CASE(of_unittest_check_tree_linkage),
+	KUNIT_CASE(of_unittest_check_phandles),
+	KUNIT_CASE(of_unittest_find_node_by_name),
+	KUNIT_CASE(of_unittest_dynamic),
+	KUNIT_CASE(of_unittest_parse_phandle_with_args),
+	KUNIT_CASE(of_unittest_parse_phandle_with_args_map),
+	KUNIT_CASE(of_unittest_printf),
+	KUNIT_CASE(of_unittest_property_string),
+	KUNIT_CASE(of_unittest_property_copy),
+	KUNIT_CASE(of_unittest_changeset),
+	KUNIT_CASE(of_unittest_parse_interrupts),
+	KUNIT_CASE(of_unittest_parse_interrupts_extended),
+	KUNIT_CASE(of_unittest_match_node),
+	KUNIT_CASE(of_unittest_platform_populate),
+	KUNIT_CASE(of_unittest_overlay),
 	/* Double check linkage after removing testcase data */
-	of_unittest_check_tree_linkage();
-
-	of_unittest_overlay_high_level();
-
-	pr_info("end of unittest - %i passed, %i failed\n",
-		unittest_results.passed, unittest_results.failed);
+	KUNIT_CASE(of_unittest_check_tree_linkage),
+	KUNIT_CASE(of_unittest_overlay_high_level),
+	{},
+};
 
-	return 0;
-}
-late_initcall(of_unittest);
+static struct kunit_module of_test_module = {
+	.name = "of-test",
+	.init = of_test_init,
+	.test_cases = of_test_cases,
+};
+module_test(of_test_module);
-- 
2.21.0.rc0.258.g878e2cd30e-goog



* [RFC v4 15/17] of: unittest: migrate tests to run on KUnit
@ 2019-02-14 21:37     ` Brendan Higgins
  0 siblings, 0 replies; 316+ messages in thread
From: brendanhiggins @ 2019-02-14 21:37 UTC (permalink / raw)


Migrate the tests to run under KUnit using the KUnit expectation and
assertion API, without any cleanup and without modifying test logic in
any way.

Signed-off-by: Brendan Higgins <brendanhiggins@google.com>
---
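For reviewers who have not used the KUnit API yet, here is a minimal
sketch of the conversion pattern applied mechanically throughout this
patch (illustrative only; example_add() and the "example" module below
are made up for this note and are not part of drivers/of): a check that
used to be written against the file-local unittest() macro becomes a
KUNIT_EXPECT_*()/KUNIT_ASSERT_*() call on the struct kunit context, and
the test functions are registered in a kunit_module instead of being
called one after another from a late_initcall.

#include <kunit/test.h>

/* Hypothetical function under test; stands in for the real DT helpers. */
static int example_add(int a, int b)
{
	return a + b;
}

static void example_add_test(struct kunit *test)
{
	/* An expectation records a failure and lets the test continue. */
	KUNIT_EXPECT_EQ(test, example_add(1, 1), 2);

	/*
	 * An assertion aborts the current test case on failure, which is
	 * why the old "if (unittest(...)) return;" early exits disappear
	 * in the diff below.
	 */
	KUNIT_ASSERT_EQ_MSG(test, example_add(2, 2), 4,
			    "example_add() is broken\n");
}

static struct kunit_case example_test_cases[] = {
	KUNIT_CASE(example_add_test),
	{},
};

static struct kunit_module example_test_module = {
	.name = "example-test",
	.test_cases = example_test_cases,
};
module_test(example_test_module);

The per-test pass/fail accounting that the old unittest_results struct
provided now comes from the KUnit core instead of the code under test.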
 drivers/of/Kconfig    |    1 +
 drivers/of/unittest.c | 1310 +++++++++++++++++++++--------------------
 2 files changed, 671 insertions(+), 640 deletions(-)

diff --git a/drivers/of/Kconfig b/drivers/of/Kconfig
index ad3fcad4d75b8..f309399deac20 100644
--- a/drivers/of/Kconfig
+++ b/drivers/of/Kconfig
@@ -15,6 +15,7 @@ if OF
 config OF_UNITTEST
 	bool "Device Tree runtime unit tests"
 	depends on !SPARC
+	depends on KUNIT
 	select IRQ_DOMAIN
 	select OF_EARLY_FLATTREE
 	select OF_RESOLVE
diff --git a/drivers/of/unittest.c b/drivers/of/unittest.c
index effa4e2b9d992..96de69ccb3e63 100644
--- a/drivers/of/unittest.c
+++ b/drivers/of/unittest.c
@@ -26,186 +26,189 @@
 
 #include <linux/bitops.h>
 
+#include <kunit/test.h>
+
 #include "of_private.h"
 
-static struct unittest_results {
-	int passed;
-	int failed;
-} unittest_results;
-
-#define unittest(result, fmt, ...) ({ \
-	bool failed = !(result); \
-	if (failed) { \
-		unittest_results.failed++; \
-		pr_err("FAIL %s():%i " fmt, __func__, __LINE__, ##__VA_ARGS__); \
-	} else { \
-		unittest_results.passed++; \
-		pr_debug("pass %s():%i\n", __func__, __LINE__); \
-	} \
-	failed; \
-})
-
-static void __init of_unittest_find_node_by_name(void)
+static void of_unittest_find_node_by_name(struct kunit *test)
 {
 	struct device_node *np;
 	const char *options, *name;
 
 	np = of_find_node_by_path("/testcase-data");
 	name = kasprintf(GFP_KERNEL, "%pOF", np);
-	unittest(np && !strcmp("/testcase-data", name),
-		"find /testcase-data failed\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+	KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
+			       "find /testcase-data failed\n");
 	of_node_put(np);
 	kfree(name);
 
 	/* Test if trailing '/' works */
-	np = of_find_node_by_path("/testcase-data/");
-	unittest(!np, "trailing '/' on /testcase-data/ should fail\n");
+	KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("/testcase-data/"), NULL,
+			    "trailing '/' on /testcase-data/ should fail\n");
 
 	np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
 	name = kasprintf(GFP_KERNEL, "%pOF", np);
-	unittest(np && !strcmp("/testcase-data/phandle-tests/consumer-a", name),
+	KUNIT_EXPECT_STREQ_MSG(
+		test, "/testcase-data/phandle-tests/consumer-a", name,
 		"find /testcase-data/phandle-tests/consumer-a failed\n");
 	of_node_put(np);
 	kfree(name);
 
 	np = of_find_node_by_path("testcase-alias");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
 	name = kasprintf(GFP_KERNEL, "%pOF", np);
-	unittest(np && !strcmp("/testcase-data", name),
-		"find testcase-alias failed\n");
+	KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
+			       "find testcase-alias failed\n");
 	of_node_put(np);
 	kfree(name);
 
 	/* Test if trailing '/' works on aliases */
-	np = of_find_node_by_path("testcase-alias/");
-	unittest(!np, "trailing '/' on testcase-alias/ should fail\n");
+	KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("testcase-alias/"), NULL,
+			    "trailing '/' on testcase-alias/ should fail\n");
 
 	np = of_find_node_by_path("testcase-alias/phandle-tests/consumer-a");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
 	name = kasprintf(GFP_KERNEL, "%pOF", np);
-	unittest(np && !strcmp("/testcase-data/phandle-tests/consumer-a", name),
+	KUNIT_EXPECT_STREQ_MSG(
+		test, "/testcase-data/phandle-tests/consumer-a", name,
 		"find testcase-alias/phandle-tests/consumer-a failed\n");
 	of_node_put(np);
 	kfree(name);
 
-	np = of_find_node_by_path("/testcase-data/missing-path");
-	unittest(!np, "non-existent path returned node %pOF\n", np);
+	KUNIT_EXPECT_EQ_MSG(
+		test,
+		np = of_find_node_by_path("/testcase-data/missing-path"), NULL,
+		"non-existent path returned node %pOF\n", np);
 	of_node_put(np);
 
-	np = of_find_node_by_path("missing-alias");
-	unittest(!np, "non-existent alias returned node %pOF\n", np);
+	KUNIT_EXPECT_EQ_MSG(
+		test, np = of_find_node_by_path("missing-alias"), NULL,
+		"non-existent alias returned node %pOF\n", np);
 	of_node_put(np);
 
-	np = of_find_node_by_path("testcase-alias/missing-path");
-	unittest(!np, "non-existent alias with relative path returned node %pOF\n", np);
+	KUNIT_EXPECT_EQ_MSG(
+		test,
+		np = of_find_node_by_path("testcase-alias/missing-path"), NULL,
+		"non-existent alias with relative path returned node %pOF\n",
+		np);
 	of_node_put(np);
 
 	np = of_find_node_opts_by_path("/testcase-data:testoption", &options);
-	unittest(np && !strcmp("testoption", options),
-		 "option path test failed\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+	KUNIT_EXPECT_STREQ_MSG(test, "testoption", options,
+			       "option path test failed\n");
 	of_node_put(np);
 
 	np = of_find_node_opts_by_path("/testcase-data:test/option", &options);
-	unittest(np && !strcmp("test/option", options),
-		 "option path test, subcase #1 failed\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+	KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
+			       "option path test, subcase #1 failed\n");
 	of_node_put(np);
 
 	np = of_find_node_opts_by_path("/testcase-data/testcase-device1:test/option", &options);
-	unittest(np && !strcmp("test/option", options),
-		 "option path test, subcase #2 failed\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+	KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
+			       "option path test, subcase #2 failed\n");
 	of_node_put(np);
 
 	np = of_find_node_opts_by_path("/testcase-data:testoption", NULL);
-	unittest(np, "NULL option path test failed\n");
+	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(test, np,
+					 "NULL option path test failed\n");
 	of_node_put(np);
 
 	np = of_find_node_opts_by_path("testcase-alias:testaliasoption",
 				       &options);
-	unittest(np && !strcmp("testaliasoption", options),
-		 "option alias path test failed\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+	KUNIT_EXPECT_STREQ_MSG(test, "testaliasoption", options,
+			       "option alias path test failed\n");
 	of_node_put(np);
 
 	np = of_find_node_opts_by_path("testcase-alias:test/alias/option",
 				       &options);
-	unittest(np && !strcmp("test/alias/option", options),
-		 "option alias path test, subcase #1 failed\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+	KUNIT_EXPECT_STREQ_MSG(test, "test/alias/option", options,
+			       "option alias path test, subcase #1 failed\n");
 	of_node_put(np);
 
 	np = of_find_node_opts_by_path("testcase-alias:testaliasoption", NULL);
-	unittest(np, "NULL option alias path test failed\n");
+	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(
+			test, np, "NULL option alias path test failed\n");
 	of_node_put(np);
 
 	options = "testoption";
 	np = of_find_node_opts_by_path("testcase-alias", &options);
-	unittest(np && !options, "option clearing test failed\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+	KUNIT_EXPECT_EQ_MSG(test, options, NULL,
+			    "option clearing test failed\n");
 	of_node_put(np);
 
 	options = "testoption";
 	np = of_find_node_opts_by_path("/", &options);
-	unittest(np && !options, "option clearing root node test failed\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+	KUNIT_EXPECT_EQ_MSG(test, options, NULL,
+			    "option clearing root node test failed\n");
 	of_node_put(np);
 }
 
-static void __init of_unittest_dynamic(void)
+static void of_unittest_dynamic(struct kunit *test)
 {
 	struct device_node *np;
 	struct property *prop;
 
 	np = of_find_node_by_path("/testcase-data");
-	if (!np) {
-		pr_err("missing testcase data\n");
-		return;
-	}
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
 
 	/* Array of 4 properties for the purpose of testing */
 	prop = kcalloc(4, sizeof(*prop), GFP_KERNEL);
-	if (!prop) {
-		unittest(0, "kzalloc() failed\n");
-		return;
-	}
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, prop);
 
 	/* Add a new property - should pass*/
 	prop->name = "new-property";
 	prop->value = "new-property-data";
 	prop->length = strlen(prop->value) + 1;
-	unittest(of_add_property(np, prop) == 0, "Adding a new property failed\n");
+	KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop), 0,
+			    "Adding a new property failed\n");
 
 	/* Try to add an existing property - should fail */
 	prop++;
 	prop->name = "new-property";
 	prop->value = "new-property-data-should-fail";
 	prop->length = strlen(prop->value) + 1;
-	unittest(of_add_property(np, prop) != 0,
-		 "Adding an existing property should have failed\n");
+	KUNIT_EXPECT_NE_MSG(test, of_add_property(np, prop), 0,
+			    "Adding an existing property should have failed\n");
 
 	/* Try to modify an existing property - should pass */
 	prop->value = "modify-property-data-should-pass";
 	prop->length = strlen(prop->value) + 1;
-	unittest(of_update_property(np, prop) == 0,
-		 "Updating an existing property should have passed\n");
+	KUNIT_EXPECT_EQ_MSG(
+		test, of_update_property(np, prop), 0,
+		"Updating an existing property should have passed\n");
 
 	/* Try to modify non-existent property - should pass*/
 	prop++;
 	prop->name = "modify-property";
 	prop->value = "modify-missing-property-data-should-pass";
 	prop->length = strlen(prop->value) + 1;
-	unittest(of_update_property(np, prop) == 0,
-		 "Updating a missing property should have passed\n");
+	KUNIT_EXPECT_EQ_MSG(test, of_update_property(np, prop), 0,
+			    "Updating a missing property should have passed\n");
 
 	/* Remove property - should pass */
-	unittest(of_remove_property(np, prop) == 0,
-		 "Removing a property should have passed\n");
+	KUNIT_EXPECT_EQ_MSG(test, of_remove_property(np, prop), 0,
+			    "Removing a property should have passed\n");
 
 	/* Adding very large property - should pass */
 	prop++;
 	prop->name = "large-property-PAGE_SIZEx8";
 	prop->length = PAGE_SIZE * 8;
 	prop->value = kzalloc(prop->length, GFP_KERNEL);
-	unittest(prop->value != NULL, "Unable to allocate large buffer\n");
-	if (prop->value)
-		unittest(of_add_property(np, prop) == 0,
-			 "Adding a large property should have passed\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, prop->value);
+	KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop), 0,
+			    "Adding a large property should have passed\n");
 }
 
-static int __init of_unittest_check_node_linkage(struct device_node *np)
+static int of_unittest_check_node_linkage(struct device_node *np)
 {
 	struct device_node *child;
 	int count = 0, rc;
@@ -230,27 +233,30 @@ static int __init of_unittest_check_node_linkage(struct device_node *np)
 	return rc;
 }
 
-static void __init of_unittest_check_tree_linkage(void)
+static void of_unittest_check_tree_linkage(struct kunit *test)
 {
 	struct device_node *np;
 	int allnode_count = 0, child_count;
 
-	if (!of_root)
-		return;
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, of_root);
 
 	for_each_of_allnodes(np)
 		allnode_count++;
 	child_count = of_unittest_check_node_linkage(of_root);
 
-	unittest(child_count > 0, "Device node data structure is corrupted\n");
-	unittest(child_count == allnode_count,
-		 "allnodes list size (%i) doesn't match sibling lists size (%i)\n",
-		 allnode_count, child_count);
+	KUNIT_EXPECT_GT_MSG(test, child_count, 0,
+			    "Device node data structure is corrupted\n");
+	KUNIT_EXPECT_EQ_MSG(
+		test, child_count, allnode_count,
+		"allnodes list size (%i) doesn't match sibling lists size (%i)\n",
+		allnode_count, child_count);
 	pr_debug("allnodes list size (%i); sibling lists size (%i)\n", allnode_count, child_count);
 }
 
-static void __init of_unittest_printf_one(struct device_node *np, const char *fmt,
-					  const char *expected)
+static void of_unittest_printf_one(struct kunit *test,
+				   struct device_node *np,
+				   const char *fmt,
+				   const char *expected)
 {
 	unsigned char *buf;
 	int buf_size;
@@ -265,8 +271,12 @@ static void __init of_unittest_printf_one(struct device_node *np, const char *fm
 	memset(buf, 0xff, buf_size);
 	size = snprintf(buf, buf_size - 2, fmt, np);
 
-	/* use strcmp() instead of strncmp() here to be absolutely sure strings match */
-	unittest((strcmp(buf, expected) == 0) && (buf[size+1] == 0xff),
+	KUNIT_EXPECT_STREQ_MSG(
+		test, buf, expected,
+		"sprintf failed; fmt='%s' expected='%s' rslt='%s'\n",
+		fmt, expected, buf);
+	KUNIT_EXPECT_EQ_MSG(
+		test, buf[size+1], 0xff,
 		"sprintf failed; fmt='%s' expected='%s' rslt='%s'\n",
 		fmt, expected, buf);
 
@@ -276,44 +286,49 @@ static void __init of_unittest_printf_one(struct device_node *np, const char *fm
 		/* Clear the buffer, and make sure it works correctly still */
 		memset(buf, 0xff, buf_size);
 		snprintf(buf, size+1, fmt, np);
-		unittest(strncmp(buf, expected, size) == 0 && (buf[size+1] == 0xff),
+		KUNIT_EXPECT_STREQ_MSG(
+			test, buf, expected,
+			"snprintf failed; size=%i fmt='%s' expected='%s' rslt='%s'\n",
+			size, fmt, expected, buf);
+		KUNIT_EXPECT_EQ_MSG(
+			test, buf[size+1], 0xff,
 			"snprintf failed; size=%i fmt='%s' expected='%s' rslt='%s'\n",
 			size, fmt, expected, buf);
 	}
 	kfree(buf);
 }
 
-static void __init of_unittest_printf(void)
+static void of_unittest_printf(struct kunit *test)
 {
 	struct device_node *np;
 	const char *full_name = "/testcase-data/platform-tests/test-device@1/dev@100";
 	char phandle_str[16] = "";
 
 	np = of_find_node_by_path(full_name);
-	if (!np) {
-		unittest(np, "testcase data missing\n");
-		return;
-	}
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
 
 	num_to_str(phandle_str, sizeof(phandle_str), np->phandle, 0);
 
-	of_unittest_printf_one(np, "%pOF",  full_name);
-	of_unittest_printf_one(np, "%pOFf", full_name);
-	of_unittest_printf_one(np, "%pOFn", "dev");
-	of_unittest_printf_one(np, "%2pOFn", "dev");
-	of_unittest_printf_one(np, "%5pOFn", "  dev");
-	of_unittest_printf_one(np, "%pOFnc", "dev:test-sub-device");
-	of_unittest_printf_one(np, "%pOFp", phandle_str);
-	of_unittest_printf_one(np, "%pOFP", "dev@100");
-	of_unittest_printf_one(np, "ABC %pOFP ABC", "ABC dev@100 ABC");
-	of_unittest_printf_one(np, "%10pOFP", "   dev@100");
-	of_unittest_printf_one(np, "%-10pOFP", "dev@100   ");
-	of_unittest_printf_one(of_root, "%pOFP", "/");
-	of_unittest_printf_one(np, "%pOFF", "----");
-	of_unittest_printf_one(np, "%pOFPF", "dev@100:----");
-	of_unittest_printf_one(np, "%pOFPFPc", "dev@100:----:dev@100:test-sub-device");
-	of_unittest_printf_one(np, "%pOFc", "test-sub-device");
-	of_unittest_printf_one(np, "%pOFC",
+	of_unittest_printf_one(test, np, "%pOF",  full_name);
+	of_unittest_printf_one(test, np, "%pOFf", full_name);
+	of_unittest_printf_one(test, np, "%pOFn", "dev");
+	of_unittest_printf_one(test, np, "%2pOFn", "dev");
+	of_unittest_printf_one(test, np, "%5pOFn", "  dev");
+	of_unittest_printf_one(test, np, "%pOFnc", "dev:test-sub-device");
+	of_unittest_printf_one(test, np, "%pOFp", phandle_str);
+	of_unittest_printf_one(test, np, "%pOFP", "dev@100");
+	of_unittest_printf_one(test, np, "ABC %pOFP ABC", "ABC dev@100 ABC");
+	of_unittest_printf_one(test, np, "%10pOFP", "   dev@100");
+	of_unittest_printf_one(test, np, "%-10pOFP", "dev@100   ");
+	of_unittest_printf_one(test, of_root, "%pOFP", "/");
+	of_unittest_printf_one(test, np, "%pOFF", "----");
+	of_unittest_printf_one(test, np, "%pOFPF", "dev@100:----");
+	of_unittest_printf_one(test,
+			       np,
+			       "%pOFPFPc",
+			       "dev@100:----:dev@100:test-sub-device");
+	of_unittest_printf_one(test, np, "%pOFc", "test-sub-device");
+	of_unittest_printf_one(test, np, "%pOFC",
 			"\"test-sub-device\",\"test-compat2\",\"test-compat3\"");
 }
 
@@ -323,7 +338,7 @@ struct node_hash {
 };
 
 static DEFINE_HASHTABLE(phandle_ht, 8);
-static void __init of_unittest_check_phandles(void)
+static void of_unittest_check_phandles(struct kunit *test)
 {
 	struct device_node *np;
 	struct node_hash *nh;
@@ -335,24 +350,26 @@ static void __init of_unittest_check_phandles(void)
 			continue;
 
 		hash_for_each_possible(phandle_ht, nh, node, np->phandle) {
+			KUNIT_EXPECT_NE_MSG(
+				test, nh->np->phandle, np->phandle,
+				"Duplicate phandle! %i used by %pOF and %pOF\n",
+				np->phandle, nh->np, np);
 			if (nh->np->phandle == np->phandle) {
-				pr_info("Duplicate phandle! %i used by %pOF and %pOF\n",
-					np->phandle, nh->np, np);
 				dup_count++;
 				break;
 			}
 		}
 
 		nh = kzalloc(sizeof(*nh), GFP_KERNEL);
-		if (WARN_ON(!nh))
-			return;
+		KUNIT_ASSERT_NOT_ERR_OR_NULL(test, nh);
 
 		nh->np = np;
 		hash_add(phandle_ht, &nh->node, np->phandle);
 		phandle_count++;
 	}
-	unittest(dup_count == 0, "Found %i duplicates in %i phandles\n",
-		 dup_count, phandle_count);
+	KUNIT_EXPECT_EQ_MSG(test, dup_count, 0,
+			    "Found %i duplicates in %i phandles\n",
+			    dup_count, phandle_count);
 
 	/* Clean up */
 	hash_for_each_safe(phandle_ht, i, tmp, nh, node) {
@@ -361,20 +378,21 @@ static void __init of_unittest_check_phandles(void)
 	}
 }
 
-static void __init of_unittest_parse_phandle_with_args(void)
+static void of_unittest_parse_phandle_with_args(struct kunit *test)
 {
 	struct device_node *np;
 	struct of_phandle_args args;
-	int i, rc;
+	int i, rc = 0;
 
 	np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
-	if (!np) {
-		pr_err("missing testcase data\n");
-		return;
-	}
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
 
-	rc = of_count_phandle_with_args(np, "phandle-list", "#phandle-cells");
-	unittest(rc == 7, "of_count_phandle_with_args() returned %i, expected 7\n", rc);
+	KUNIT_EXPECT_EQ_MSG(
+		test,
+		of_count_phandle_with_args(
+			np, "phandle-list", "#phandle-cells"),
+		7,
+		"of_count_phandle_with_args() did not return 7\n");
 
 	for (i = 0; i < 8; i++) {
 		bool passed = true;
@@ -428,85 +446,91 @@ static void __init of_unittest_parse_phandle_with_args(void)
 			passed = false;
 		}
 
-		unittest(passed, "index %i - data error on node %pOF rc=%i\n",
-			 i, args.np, rc);
+		KUNIT_EXPECT_TRUE_MSG(
+			test, passed,
+			"index %i - data error on node %pOF rc=%i\n",
+			i, args.np, rc);
 	}
 
 	/* Check for missing list property */
 	memset(&args, 0, sizeof(args));
-	rc = of_parse_phandle_with_args(np, "phandle-list-missing",
-					"#phandle-cells", 0, &args);
-	unittest(rc == -ENOENT, "expected:%i got:%i\n", -ENOENT, rc);
-	rc = of_count_phandle_with_args(np, "phandle-list-missing",
-					"#phandle-cells");
-	unittest(rc == -ENOENT, "expected:%i got:%i\n", -ENOENT, rc);
+	KUNIT_EXPECT_EQ(
+		test,
+		of_parse_phandle_with_args(
+			np, "phandle-list-missing", "#phandle-cells", 0, &args),
+		-ENOENT);
+	KUNIT_EXPECT_EQ(
+		test,
+		of_count_phandle_with_args(
+			np, "phandle-list-missing", "#phandle-cells"),
+		-ENOENT);
 
 	/* Check for missing cells property */
 	memset(&args, 0, sizeof(args));
-	rc = of_parse_phandle_with_args(np, "phandle-list",
-					"#phandle-cells-missing", 0, &args);
-	unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
-	rc = of_count_phandle_with_args(np, "phandle-list",
-					"#phandle-cells-missing");
-	unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
+	KUNIT_EXPECT_EQ(
+		test,
+		of_parse_phandle_with_args(
+			np, "phandle-list", "#phandle-cells-missing", 0, &args),
+		-EINVAL);
+	KUNIT_EXPECT_EQ(
+		test,
+		of_count_phandle_with_args(
+			np, "phandle-list", "#phandle-cells-missing"),
+		-EINVAL);
 
 	/* Check for bad phandle in list */
 	memset(&args, 0, sizeof(args));
-	rc = of_parse_phandle_with_args(np, "phandle-list-bad-phandle",
-					"#phandle-cells", 0, &args);
-	unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
-	rc = of_count_phandle_with_args(np, "phandle-list-bad-phandle",
-					"#phandle-cells");
-	unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
+	KUNIT_EXPECT_EQ(
+		test,
+		of_parse_phandle_with_args(np, "phandle-list-bad-phandle",
+					   "#phandle-cells", 0, &args),
+		-EINVAL);
+	KUNIT_EXPECT_EQ(
+		test,
+		of_count_phandle_with_args(
+			np, "phandle-list-bad-phandle", "#phandle-cells"),
+		-EINVAL);
 
 	/* Check for incorrectly formed argument list */
 	memset(&args, 0, sizeof(args));
-	rc = of_parse_phandle_with_args(np, "phandle-list-bad-args",
-					"#phandle-cells", 1, &args);
-	unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
-	rc = of_count_phandle_with_args(np, "phandle-list-bad-args",
-					"#phandle-cells");
-	unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
+	KUNIT_EXPECT_EQ(
+		test,
+		of_parse_phandle_with_args(np, "phandle-list-bad-args",
+					   "#phandle-cells", 1, &args),
+		-EINVAL);
+	KUNIT_EXPECT_EQ(
+		test,
+		of_count_phandle_with_args(
+			np, "phandle-list-bad-args", "#phandle-cells"),
+		-EINVAL);
 }
 
-static void __init of_unittest_parse_phandle_with_args_map(void)
+static void of_unittest_parse_phandle_with_args_map(struct kunit *test)
 {
 	struct device_node *np, *p0, *p1, *p2, *p3;
 	struct of_phandle_args args;
 	int i, rc;
 
 	np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-b");
-	if (!np) {
-		pr_err("missing testcase data\n");
-		return;
-	}
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
 
 	p0 = of_find_node_by_path("/testcase-data/phandle-tests/provider0");
-	if (!p0) {
-		pr_err("missing testcase data\n");
-		return;
-	}
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, p0);
 
 	p1 = of_find_node_by_path("/testcase-data/phandle-tests/provider1");
-	if (!p1) {
-		pr_err("missing testcase data\n");
-		return;
-	}
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, p1);
 
 	p2 = of_find_node_by_path("/testcase-data/phandle-tests/provider2");
-	if (!p2) {
-		pr_err("missing testcase data\n");
-		return;
-	}
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, p2);
 
 	p3 = of_find_node_by_path("/testcase-data/phandle-tests/provider3");
-	if (!p3) {
-		pr_err("missing testcase data\n");
-		return;
-	}
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, p3);
 
-	rc = of_count_phandle_with_args(np, "phandle-list", "#phandle-cells");
-	unittest(rc == 7, "of_count_phandle_with_args() returned %i, expected 7\n", rc);
+	KUNIT_EXPECT_EQ(test,
+		       of_count_phandle_with_args(np,
+						  "phandle-list",
+						  "#phandle-cells"),
+		       7);
 
 	for (i = 0; i < 8; i++) {
 		bool passed = true;
@@ -564,121 +588,186 @@ static void __init of_unittest_parse_phandle_with_args_map(void)
 			passed = false;
 		}
 
-		unittest(passed, "index %i - data error on node %s rc=%i\n",
-			 i, args.np->full_name, rc);
+		KUNIT_EXPECT_TRUE_MSG(
+			test, passed,
+			"index %i - data error on node %s rc=%i\n",
+			i, (args.np ? args.np->full_name : "missing np"), rc);
 	}
 
 	/* Check for missing list property */
 	memset(&args, 0, sizeof(args));
-	rc = of_parse_phandle_with_args_map(np, "phandle-list-missing",
-					    "phandle", 0, &args);
-	unittest(rc == -ENOENT, "expected:%i got:%i\n", -ENOENT, rc);
+	KUNIT_EXPECT_EQ(
+		test,
+		of_parse_phandle_with_args_map(
+			np, "phandle-list-missing", "phandle", 0, &args),
+		-ENOENT);
 
 	/* Check for missing cells,map,mask property */
 	memset(&args, 0, sizeof(args));
-	rc = of_parse_phandle_with_args_map(np, "phandle-list",
-					    "phandle-missing", 0, &args);
-	unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
+	KUNIT_EXPECT_EQ(
+		test,
+		of_parse_phandle_with_args_map(
+			np, "phandle-list", "phandle-missing", 0, &args),
+		-EINVAL);
 
 	/* Check for bad phandle in list */
 	memset(&args, 0, sizeof(args));
-	rc = of_parse_phandle_with_args_map(np, "phandle-list-bad-phandle",
-					    "phandle", 0, &args);
-	unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
+	KUNIT_EXPECT_EQ(
+		test,
+		of_parse_phandle_with_args_map(
+			np, "phandle-list-bad-phandle", "phandle", 0, &args),
+		-EINVAL);
 
 	/* Check for incorrectly formed argument list */
 	memset(&args, 0, sizeof(args));
-	rc = of_parse_phandle_with_args_map(np, "phandle-list-bad-args",
-					    "phandle", 1, &args);
-	unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
+	KUNIT_EXPECT_EQ(
+		test,
+		of_parse_phandle_with_args_map(
+			np, "phandle-list-bad-args", "phandle", 1, &args),
+		-EINVAL);
 }
 
-static void __init of_unittest_property_string(void)
+static void of_unittest_property_string(struct kunit *test)
 {
 	const char *strings[4];
 	struct device_node *np;
 	int rc;
 
 	np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
-	if (!np) {
-		pr_err("No testcase data in device tree\n");
-		return;
-	}
-
-	rc = of_property_match_string(np, "phandle-list-names", "first");
-	unittest(rc == 0, "first expected:0 got:%i\n", rc);
-	rc = of_property_match_string(np, "phandle-list-names", "second");
-	unittest(rc == 1, "second expected:1 got:%i\n", rc);
-	rc = of_property_match_string(np, "phandle-list-names", "third");
-	unittest(rc == 2, "third expected:2 got:%i\n", rc);
-	rc = of_property_match_string(np, "phandle-list-names", "fourth");
-	unittest(rc == -ENODATA, "unmatched string; rc=%i\n", rc);
-	rc = of_property_match_string(np, "missing-property", "blah");
-	unittest(rc == -EINVAL, "missing property; rc=%i\n", rc);
-	rc = of_property_match_string(np, "empty-property", "blah");
-	unittest(rc == -ENODATA, "empty property; rc=%i\n", rc);
-	rc = of_property_match_string(np, "unterminated-string", "blah");
-	unittest(rc == -EILSEQ, "unterminated string; rc=%i\n", rc);
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+
+	KUNIT_EXPECT_EQ(
+		test,
+		of_property_match_string(np, "phandle-list-names", "first"),
+		0);
+	KUNIT_EXPECT_EQ(
+		test,
+		of_property_match_string(np, "phandle-list-names", "second"),
+		1);
+	KUNIT_EXPECT_EQ(
+		test,
+		of_property_match_string(np, "phandle-list-names", "third"),
+		2);
+	KUNIT_EXPECT_EQ_MSG(
+		test,
+		of_property_match_string(np, "phandle-list-names", "fourth"),
+		-ENODATA,
+		"unmatched string");
+	KUNIT_EXPECT_EQ_MSG(
+		test,
+		of_property_match_string(np, "missing-property", "blah"),
+		-EINVAL,
+		"missing property");
+	KUNIT_EXPECT_EQ_MSG(
+		test,
+		of_property_match_string(np, "empty-property", "blah"),
+		-ENODATA,
+		"empty property");
+	KUNIT_EXPECT_EQ_MSG(
+		test,
+		of_property_match_string(np, "unterminated-string", "blah"),
+		-EILSEQ,
+		"unterminated string");
 
 	/* of_property_count_strings() tests */
-	rc = of_property_count_strings(np, "string-property");
-	unittest(rc == 1, "Incorrect string count; rc=%i\n", rc);
-	rc = of_property_count_strings(np, "phandle-list-names");
-	unittest(rc == 3, "Incorrect string count; rc=%i\n", rc);
-	rc = of_property_count_strings(np, "unterminated-string");
-	unittest(rc == -EILSEQ, "unterminated string; rc=%i\n", rc);
-	rc = of_property_count_strings(np, "unterminated-string-list");
-	unittest(rc == -EILSEQ, "unterminated string array; rc=%i\n", rc);
+	KUNIT_EXPECT_EQ(test,
+			of_property_count_strings(np, "string-property"), 1);
+	KUNIT_EXPECT_EQ(test,
+			of_property_count_strings(np, "phandle-list-names"), 3);
+	KUNIT_EXPECT_EQ_MSG(
+		test,
+		of_property_count_strings(np, "unterminated-string"), -EILSEQ,
+		"unterminated string");
+	KUNIT_EXPECT_EQ_MSG(
+		test,
+		of_property_count_strings(np, "unterminated-string-list"),
+		-EILSEQ,
+		"unterminated string array");
 
 	/* of_property_read_string_index() tests */
 	rc = of_property_read_string_index(np, "string-property", 0, strings);
-	unittest(rc == 0 && !strcmp(strings[0], "foobar"), "of_property_read_string_index() failure; rc=%i\n", rc);
+	KUNIT_ASSERT_EQ(test, rc, 0);
+	KUNIT_EXPECT_STREQ(test, strings[0], "foobar");
+
 	strings[0] = NULL;
 	rc = of_property_read_string_index(np, "string-property", 1, strings);
-	unittest(rc == -ENODATA && strings[0] == NULL, "of_property_read_string_index() failure; rc=%i\n", rc);
+	KUNIT_EXPECT_EQ(test, rc, -ENODATA);
+	KUNIT_EXPECT_EQ(test, strings[0], NULL);
+
 	rc = of_property_read_string_index(np, "phandle-list-names", 0, strings);
-	unittest(rc == 0 && !strcmp(strings[0], "first"), "of_property_read_string_index() failure; rc=%i\n", rc);
+	KUNIT_ASSERT_EQ(test, rc, 0);
+	KUNIT_EXPECT_STREQ(test, strings[0], "first");
+
 	rc = of_property_read_string_index(np, "phandle-list-names", 1, strings);
-	unittest(rc == 0 && !strcmp(strings[0], "second"), "of_property_read_string_index() failure; rc=%i\n", rc);
+	KUNIT_ASSERT_EQ(test, rc, 0);
+	KUNIT_EXPECT_STREQ(test, strings[0], "second");
+
 	rc = of_property_read_string_index(np, "phandle-list-names", 2, strings);
-	unittest(rc == 0 && !strcmp(strings[0], "third"), "of_property_read_string_index() failure; rc=%i\n", rc);
+	KUNIT_ASSERT_EQ(test, rc, 0);
+	KUNIT_EXPECT_STREQ(test, strings[0], "third");
+
 	strings[0] = NULL;
 	rc = of_property_read_string_index(np, "phandle-list-names", 3, strings);
-	unittest(rc == -ENODATA && strings[0] == NULL, "of_property_read_string_index() failure; rc=%i\n", rc);
+	KUNIT_EXPECT_EQ(test, rc, -ENODATA);
+	KUNIT_EXPECT_EQ(test, strings[0], NULL);
+
 	strings[0] = NULL;
 	rc = of_property_read_string_index(np, "unterminated-string", 0, strings);
-	unittest(rc == -EILSEQ && strings[0] == NULL, "of_property_read_string_index() failure; rc=%i\n", rc);
+	KUNIT_EXPECT_EQ(test, rc, -EILSEQ);
+	KUNIT_EXPECT_EQ(test, strings[0], NULL);
+
 	rc = of_property_read_string_index(np, "unterminated-string-list", 0, strings);
-	unittest(rc == 0 && !strcmp(strings[0], "first"), "of_property_read_string_index() failure; rc=%i\n", rc);
+	KUNIT_ASSERT_EQ(test, rc, 0);
+	KUNIT_EXPECT_STREQ(test, strings[0], "first");
+
 	strings[0] = NULL;
 	rc = of_property_read_string_index(np, "unterminated-string-list", 2, strings); /* should fail */
-	unittest(rc == -EILSEQ && strings[0] == NULL, "of_property_read_string_index() failure; rc=%i\n", rc);
-	strings[1] = NULL;
+	KUNIT_EXPECT_EQ(test, rc, -EILSEQ);
+	KUNIT_EXPECT_EQ(test, strings[0], NULL);
 
 	/* of_property_read_string_array() tests */
-	rc = of_property_read_string_array(np, "string-property", strings, 4);
-	unittest(rc == 1, "Incorrect string count; rc=%i\n", rc);
-	rc = of_property_read_string_array(np, "phandle-list-names", strings, 4);
-	unittest(rc == 3, "Incorrect string count; rc=%i\n", rc);
-	rc = of_property_read_string_array(np, "unterminated-string", strings, 4);
-	unittest(rc == -EILSEQ, "unterminated string; rc=%i\n", rc);
+	strings[1] = NULL;
+	KUNIT_EXPECT_EQ(
+		test,
+		of_property_read_string_array(
+			np, "string-property", strings, 4),
+		1);
+	KUNIT_EXPECT_EQ(
+		test,
+		of_property_read_string_array(
+			np, "phandle-list-names", strings, 4),
+		3);
+	KUNIT_EXPECT_EQ_MSG(
+		test,
+		of_property_read_string_array(
+			np, "unterminated-string", strings, 4),
+		-EILSEQ,
+		"unterminated string");
 	/* -- An incorrectly formed string should cause a failure */
-	rc = of_property_read_string_array(np, "unterminated-string-list", strings, 4);
-	unittest(rc == -EILSEQ, "unterminated string array; rc=%i\n", rc);
+	KUNIT_EXPECT_EQ_MSG(
+		test,
+		of_property_read_string_array(
+			np, "unterminated-string-list", strings, 4),
+		-EILSEQ,
+		"unterminated string array");
 	/* -- parsing the correctly formed strings should still work: */
 	strings[2] = NULL;
 	rc = of_property_read_string_array(np, "unterminated-string-list", strings, 2);
-	unittest(rc == 2 && strings[2] == NULL, "of_property_read_string_array() failure; rc=%i\n", rc);
+	KUNIT_EXPECT_EQ(test, rc, 2);
+	KUNIT_EXPECT_EQ(test, strings[2], NULL);
+
 	strings[1] = NULL;
 	rc = of_property_read_string_array(np, "phandle-list-names", strings, 1);
-	unittest(rc == 1 && strings[1] == NULL, "Overwrote end of string array; rc=%i, str='%s'\n", rc, strings[1]);
+	KUNIT_ASSERT_EQ(test, rc, 1);
+	KUNIT_EXPECT_EQ_MSG(test, strings[1], NULL,
+			    "Overwrote end of string array");
 }
 
 #define propcmp(p1, p2) (((p1)->length == (p2)->length) && \
 			(p1)->value && (p2)->value && \
 			!memcmp((p1)->value, (p2)->value, (p1)->length) && \
 			!strcmp((p1)->name, (p2)->name))
-static void __init of_unittest_property_copy(void)
+static void of_unittest_property_copy(struct kunit *test)
 {
 #ifdef CONFIG_OF_DYNAMIC
 	struct property p1 = { .name = "p1", .length = 0, .value = "" };
@@ -686,20 +775,24 @@ static void __init of_unittest_property_copy(void)
 	struct property *new;
 
 	new = __of_prop_dup(&p1, GFP_KERNEL);
-	unittest(new && propcmp(&p1, new), "empty property didn't copy correctly\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, new);
+	KUNIT_EXPECT_TRUE_MSG(test, propcmp(&p1, new),
+			      "empty property didn't copy correctly");
 	kfree(new->value);
 	kfree(new->name);
 	kfree(new);
 
 	new = __of_prop_dup(&p2, GFP_KERNEL);
-	unittest(new && propcmp(&p2, new), "non-empty property didn't copy correctly\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, new);
+	KUNIT_EXPECT_TRUE_MSG(test, propcmp(&p2, new),
+			      "non-empty property didn't copy correctly");
 	kfree(new->value);
 	kfree(new->name);
 	kfree(new);
 #endif
 }
 
-static void __init of_unittest_changeset(void)
+static void of_unittest_changeset(struct kunit *test)
 {
 #ifdef CONFIG_OF_DYNAMIC
 	struct property *ppadd, padd = { .name = "prop-add", .length = 1, .value = "" };
@@ -712,32 +805,32 @@ static void __init of_unittest_changeset(void)
 	struct of_changeset chgset;
 
 	n1 = __of_node_dup(NULL, "n1");
-	unittest(n1, "testcase setup failure\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, n1);
 
 	n2 = __of_node_dup(NULL, "n2");
-	unittest(n2, "testcase setup failure\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, n2);
 
 	n21 = __of_node_dup(NULL, "n21");
-	unittest(n21, "testcase setup failure %p\n", n21);
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, n21);
 
 	nchangeset = of_find_node_by_path("/testcase-data/changeset");
 	nremove = of_get_child_by_name(nchangeset, "node-remove");
-	unittest(nremove, "testcase setup failure\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, nremove);
 
 	ppadd = __of_prop_dup(&padd, GFP_KERNEL);
-	unittest(ppadd, "testcase setup failure\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ppadd);
 
 	ppname_n1  = __of_prop_dup(&pname_n1, GFP_KERNEL);
-	unittest(ppname_n1, "testcase setup failure\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ppname_n1);
 
 	ppname_n2  = __of_prop_dup(&pname_n2, GFP_KERNEL);
-	unittest(ppname_n2, "testcase setup failure\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ppname_n2);
 
 	ppname_n21 = __of_prop_dup(&pname_n21, GFP_KERNEL);
-	unittest(ppname_n21, "testcase setup failure\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ppname_n21);
 
 	ppupdate = __of_prop_dup(&pupdate, GFP_KERNEL);
-	unittest(ppupdate, "testcase setup failure\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ppupdate);
 
 	parent = nchangeset;
 	n1->parent = parent;
@@ -745,54 +838,72 @@ static void __init of_unittest_changeset(void)
 	n21->parent = n2;
 
 	ppremove = of_find_property(parent, "prop-remove", NULL);
-	unittest(ppremove, "failed to find removal prop");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ppremove);
 
 	of_changeset_init(&chgset);
 
-	unittest(!of_changeset_attach_node(&chgset, n1), "fail attach n1\n");
-	unittest(!of_changeset_add_property(&chgset, n1, ppname_n1), "fail add prop name\n");
-
-	unittest(!of_changeset_attach_node(&chgset, n2), "fail attach n2\n");
-	unittest(!of_changeset_add_property(&chgset, n2, ppname_n2), "fail add prop name\n");
-
-	unittest(!of_changeset_detach_node(&chgset, nremove), "fail remove node\n");
-	unittest(!of_changeset_add_property(&chgset, n21, ppname_n21), "fail add prop name\n");
-
-	unittest(!of_changeset_attach_node(&chgset, n21), "fail attach n21\n");
-
-	unittest(!of_changeset_add_property(&chgset, parent, ppadd), "fail add prop prop-add\n");
-	unittest(!of_changeset_update_property(&chgset, parent, ppupdate), "fail update prop\n");
-	unittest(!of_changeset_remove_property(&chgset, parent, ppremove), "fail remove prop\n");
-
-	unittest(!of_changeset_apply(&chgset), "apply failed\n");
+	KUNIT_EXPECT_FALSE_MSG(test, of_changeset_attach_node(&chgset, n1),
+			       "fail attach n1\n");
+	KUNIT_EXPECT_FALSE_MSG(
+		test, of_changeset_add_property(&chgset, n1, ppname_n1),
+		"fail add prop name\n");
+
+	KUNIT_EXPECT_FALSE_MSG(test, of_changeset_attach_node(&chgset, n2),
+			       "fail attach n2\n");
+	KUNIT_EXPECT_FALSE_MSG(
+		test, of_changeset_add_property(&chgset, n2, ppname_n2),
+			       "fail add prop name\n");
+
+	KUNIT_EXPECT_FALSE_MSG(test, of_changeset_detach_node(&chgset, nremove),
+			       "fail remove node\n");
+	KUNIT_EXPECT_FALSE_MSG(
+		test, of_changeset_add_property(&chgset, n21, ppname_n21),
+		"fail add prop name\n");
+
+	KUNIT_EXPECT_FALSE_MSG(test, of_changeset_attach_node(&chgset, n21),
+			       "fail attach n21\n");
+
+	KUNIT_EXPECT_FALSE_MSG(
+		test,
+		of_changeset_add_property(&chgset, parent, ppadd),
+		"fail add prop prop-add\n");
+	KUNIT_EXPECT_FALSE_MSG(
+		test,
+		of_changeset_update_property(&chgset, parent, ppupdate),
+		"fail update prop\n");
+	KUNIT_EXPECT_FALSE_MSG(
+		test,
+		of_changeset_remove_property(&chgset, parent, ppremove),
+		"fail remove prop\n");
+
+	KUNIT_EXPECT_FALSE_MSG(test, of_changeset_apply(&chgset),
+			       "apply failed\n");
 
 	of_node_put(nchangeset);
 
 	/* Make sure node names are constructed correctly */
-	unittest((np = of_find_node_by_path("/testcase-data/changeset/n2/n21")),
-		 "'%pOF' not added\n", n21);
+	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(
+		test,
+		np = of_find_node_by_path("/testcase-data/changeset/n2/n21"),
+		"'%pOF' not added\n", n21);
 	of_node_put(np);
 
-	unittest(!of_changeset_revert(&chgset), "revert failed\n");
+	KUNIT_EXPECT_FALSE(test, of_changeset_revert(&chgset));
 
 	of_changeset_destroy(&chgset);
 #endif
 }
 
-static void __init of_unittest_parse_interrupts(void)
+static void of_unittest_parse_interrupts(struct kunit *test)
 {
 	struct device_node *np;
 	struct of_phandle_args args;
 	int i, rc;
 
-	if (of_irq_workarounds & OF_IMAP_OLDWORLD_MAC)
-		return;
+	KUNIT_ASSERT_FALSE(test, of_irq_workarounds & OF_IMAP_OLDWORLD_MAC);
 
 	np = of_find_node_by_path("/testcase-data/interrupts/interrupts0");
-	if (!np) {
-		pr_err("missing testcase data\n");
-		return;
-	}
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
 
 	for (i = 0; i < 4; i++) {
 		bool passed = true;
@@ -804,16 +915,15 @@ static void __init of_unittest_parse_interrupts(void)
 		passed &= (args.args_count == 1);
 		passed &= (args.args[0] == (i + 1));
 
-		unittest(passed, "index %i - data error on node %pOF rc=%i\n",
-			 i, args.np, rc);
+		KUNIT_EXPECT_TRUE_MSG(
+			test, passed,
+			"index %i - data error on node %pOF rc=%i\n",
+			i, args.np, rc);
 	}
 	of_node_put(np);
 
 	np = of_find_node_by_path("/testcase-data/interrupts/interrupts1");
-	if (!np) {
-		pr_err("missing testcase data\n");
-		return;
-	}
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
 
 	for (i = 0; i < 4; i++) {
 		bool passed = true;
@@ -850,26 +960,24 @@ static void __init of_unittest_parse_interrupts(void)
 		default:
 			passed = false;
 		}
-		unittest(passed, "index %i - data error on node %pOF rc=%i\n",
-			 i, args.np, rc);
+		KUNIT_EXPECT_TRUE_MSG(
+			test, passed,
+			"index %i - data error on node %pOF rc=%i\n",
+			i, args.np, rc);
 	}
 	of_node_put(np);
 }
 
-static void __init of_unittest_parse_interrupts_extended(void)
+static void of_unittest_parse_interrupts_extended(struct kunit *test)
 {
 	struct device_node *np;
 	struct of_phandle_args args;
 	int i, rc;
 
-	if (of_irq_workarounds & OF_IMAP_OLDWORLD_MAC)
-		return;
+	KUNIT_ASSERT_FALSE(test, of_irq_workarounds & OF_IMAP_OLDWORLD_MAC);
 
 	np = of_find_node_by_path("/testcase-data/interrupts/interrupts-extended0");
-	if (!np) {
-		pr_err("missing testcase data\n");
-		return;
-	}
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
 
 	for (i = 0; i < 7; i++) {
 		bool passed = true;
@@ -924,8 +1032,10 @@ static void __init of_unittest_parse_interrupts_extended(void)
 			passed = false;
 		}
 
-		unittest(passed, "index %i - data error on node %pOF rc=%i\n",
-			 i, args.np, rc);
+		KUNIT_EXPECT_TRUE_MSG(
+			test, passed,
+			"index %i - data error on node %pOF rc=%i\n",
+			i, args.np, rc);
 	}
 	of_node_put(np);
 }
@@ -965,7 +1075,7 @@ static struct {
 	{ .path = "/testcase-data/match-node/name9", .data = "K", },
 };
 
-static void __init of_unittest_match_node(void)
+static void of_unittest_match_node(struct kunit *test)
 {
 	struct device_node *np;
 	const struct of_device_id *match;
@@ -973,26 +1083,19 @@ static void __init of_unittest_match_node(void)
 
 	for (i = 0; i < ARRAY_SIZE(match_node_tests); i++) {
 		np = of_find_node_by_path(match_node_tests[i].path);
-		if (!np) {
-			unittest(0, "missing testcase node %s\n",
-				match_node_tests[i].path);
-			continue;
-		}
+		KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
 
 		match = of_match_node(match_node_table, np);
-		if (!match) {
-			unittest(0, "%s didn't match anything\n",
-				match_node_tests[i].path);
-			continue;
-		}
+		KUNIT_ASSERT_NOT_ERR_OR_NULL_MSG(test, match,
+						 "%s didn't match anything\n",
+						 match_node_tests[i].path);
 
-		if (strcmp(match->data, match_node_tests[i].data) != 0) {
-			unittest(0, "%s got wrong match. expected %s, got %s\n",
-				match_node_tests[i].path, match_node_tests[i].data,
-				(const char *)match->data);
-			continue;
-		}
-		unittest(1, "passed");
+		KUNIT_EXPECT_STREQ_MSG(
+			test,
+			match->data, match_node_tests[i].data,
+			"%s got wrong match. expected %s, got %s\n",
+			match_node_tests[i].path, match_node_tests[i].data,
+			(const char *)match->data);
 	}
 }
 
@@ -1004,9 +1107,9 @@ static struct resource test_bus_res = {
 static const struct platform_device_info test_bus_info = {
 	.name = "unittest-bus",
 };
-static void __init of_unittest_platform_populate(void)
+static void of_unittest_platform_populate(struct kunit *test)
 {
-	int irq, rc;
+	int irq;
 	struct device_node *np, *child, *grandchild;
 	struct platform_device *pdev, *test_bus;
 	const struct of_device_id match[] = {
@@ -1020,32 +1123,27 @@ static void __init of_unittest_platform_populate(void)
 	/* Test that a missing irq domain returns -EPROBE_DEFER */
 	np = of_find_node_by_path("/testcase-data/testcase-device1");
 	pdev = of_find_device_by_node(np);
-	unittest(pdev, "device 1 creation failed\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, pdev);
 
 	if (!(of_irq_workarounds & OF_IMAP_OLDWORLD_MAC)) {
 		irq = platform_get_irq(pdev, 0);
-		unittest(irq == -EPROBE_DEFER,
-			 "device deferred probe failed - %d\n", irq);
+		KUNIT_ASSERT_EQ(test, irq, -EPROBE_DEFER);
 
 		/* Test that a parsing failure does not return -EPROBE_DEFER */
 		np = of_find_node_by_path("/testcase-data/testcase-device2");
 		pdev = of_find_device_by_node(np);
-		unittest(pdev, "device 2 creation failed\n");
+		KUNIT_ASSERT_NOT_ERR_OR_NULL(test, pdev);
 		irq = platform_get_irq(pdev, 0);
-		unittest(irq < 0 && irq != -EPROBE_DEFER,
-			 "device parsing error failed - %d\n", irq);
+		KUNIT_ASSERT_TRUE_MSG(test, irq < 0 && irq != -EPROBE_DEFER,
+				      "device parsing error failed - %d\n",
+				      irq);
 	}
 
 	np = of_find_node_by_path("/testcase-data/platform-tests");
-	unittest(np, "No testcase data in device tree\n");
-	if (!np)
-		return;
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
 
 	test_bus = platform_device_register_full(&test_bus_info);
-	rc = PTR_ERR_OR_ZERO(test_bus);
-	unittest(!rc, "testbus registration failed; rc=%i\n", rc);
-	if (rc)
-		return;
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, test_bus);
 	test_bus->dev.of_node = np;
 
 	/*
@@ -1060,17 +1158,19 @@ static void __init of_unittest_platform_populate(void)
 	of_platform_populate(np, match, NULL, &test_bus->dev);
 	for_each_child_of_node(np, child) {
 		for_each_child_of_node(child, grandchild)
-			unittest(of_find_device_by_node(grandchild),
-				 "Could not create device for node '%pOFn'\n",
-				 grandchild);
+			KUNIT_EXPECT_TRUE_MSG(
+				test, of_find_device_by_node(grandchild),
+				"Could not create device for node '%pOFn'\n",
+				grandchild);
 	}
 
 	of_platform_depopulate(&test_bus->dev);
 	for_each_child_of_node(np, child) {
 		for_each_child_of_node(child, grandchild)
-			unittest(!of_find_device_by_node(grandchild),
-				 "device didn't get destroyed '%pOFn'\n",
-				 grandchild);
+			KUNIT_EXPECT_FALSE_MSG(
+				test, of_find_device_by_node(grandchild),
+				"device didn't get destroyed '%pOFn'\n",
+				grandchild);
 	}
 
 	platform_device_unregister(test_bus);
@@ -1171,7 +1271,7 @@ static void attach_node_and_children(struct device_node *np)
  *	unittest_data_add - Reads, copies data from
  *	linked tree and attaches it to the live tree
  */
-static int __init unittest_data_add(void)
+static int unittest_data_add(void)
 {
 	void *unittest_data;
 	struct device_node *unittest_data_node, *np;
@@ -1242,7 +1342,7 @@ static int __init unittest_data_add(void)
 }
 
 #ifdef CONFIG_OF_OVERLAY
-static int __init overlay_data_apply(const char *overlay_name, int *overlay_id);
+static int overlay_data_apply(const char *overlay_name, int *overlay_id);
 
 static int unittest_probe(struct platform_device *pdev)
 {
@@ -1471,172 +1571,146 @@ static void of_unittest_destroy_tracked_overlays(void)
 	} while (defers > 0);
 }
 
-static int __init of_unittest_apply_overlay(int overlay_nr, int *overlay_id)
+static int of_unittest_apply_overlay(struct kunit *test,
+				     int overlay_nr,
+				     int *overlay_id)
 {
 	const char *overlay_name;
 
 	overlay_name = overlay_name_from_nr(overlay_nr);
 
-	if (!overlay_data_apply(overlay_name, overlay_id)) {
-		unittest(0, "could not apply overlay \"%s\"\n",
-				overlay_name);
-		return -EFAULT;
-	}
+	KUNIT_ASSERT_TRUE_MSG(test,
+			      overlay_data_apply(overlay_name, overlay_id),
+			      "could not apply overlay \"%s\"\n", overlay_name);
 	of_unittest_track_overlay(*overlay_id);
 
 	return 0;
 }
 
 /* apply an overlay while checking before and after states */
-static int __init of_unittest_apply_overlay_check(int overlay_nr,
+static int of_unittest_apply_overlay_check(struct kunit *test, int overlay_nr,
 		int unittest_nr, int before, int after,
 		enum overlay_type ovtype)
 {
 	int ret, ovcs_id;
 
 	/* unittest device must not be in before state */
-	if (of_unittest_device_exists(unittest_nr, ovtype) != before) {
-		unittest(0, "%s with device @\"%s\" %s\n",
-				overlay_name_from_nr(overlay_nr),
-				unittest_path(unittest_nr, ovtype),
-				!before ? "enabled" : "disabled");
-		return -EINVAL;
-	}
+	KUNIT_ASSERT_EQ_MSG(
+		test, of_unittest_device_exists(unittest_nr, ovtype), before,
+		"%s with device @\"%s\" %s\n", overlay_name_from_nr(overlay_nr),
+		unittest_path(unittest_nr, ovtype),
+		!before ? "enabled" : "disabled");
 
 	ovcs_id = 0;
-	ret = of_unittest_apply_overlay(overlay_nr, &ovcs_id);
+	ret = of_unittest_apply_overlay(test, overlay_nr, &ovcs_id);
 	if (ret != 0) {
-		/* of_unittest_apply_overlay already called unittest() */
+		/* of_unittest_apply_overlay already set expectation */
 		return ret;
 	}
 
 	/* unittest device must be set to after state */
-	if (of_unittest_device_exists(unittest_nr, ovtype) != after) {
-		unittest(0, "%s failed to create @\"%s\" %s\n",
-				overlay_name_from_nr(overlay_nr),
-				unittest_path(unittest_nr, ovtype),
-				!after ? "enabled" : "disabled");
-		return -EINVAL;
-	}
+	KUNIT_ASSERT_EQ_MSG(
+		test, of_unittest_device_exists(unittest_nr, ovtype), after,
+		"%s failed to create @\"%s\" %s\n",
+		overlay_name_from_nr(overlay_nr),
+		unittest_path(unittest_nr, ovtype),
+		!after ? "enabled" : "disabled");
 
 	return 0;
 }
 
 /* apply an overlay and then revert it while checking before, after states */
-static int __init of_unittest_apply_revert_overlay_check(int overlay_nr,
+static int of_unittest_apply_revert_overlay_check(
+		struct kunit *test, int overlay_nr,
 		int unittest_nr, int before, int after,
 		enum overlay_type ovtype)
 {
 	int ret, ovcs_id;
 
 	/* unittest device must be in before state */
-	if (of_unittest_device_exists(unittest_nr, ovtype) != before) {
-		unittest(0, "%s with device @\"%s\" %s\n",
-				overlay_name_from_nr(overlay_nr),
-				unittest_path(unittest_nr, ovtype),
-				!before ? "enabled" : "disabled");
-		return -EINVAL;
-	}
+	KUNIT_ASSERT_EQ_MSG(
+		test, of_unittest_device_exists(unittest_nr, ovtype), before,
+		"%s with device @\"%s\" %s\n", overlay_name_from_nr(overlay_nr),
+		unittest_path(unittest_nr, ovtype),
+		!before ? "enabled" : "disabled");
 
 	/* apply the overlay */
 	ovcs_id = 0;
-	ret = of_unittest_apply_overlay(overlay_nr, &ovcs_id);
+	ret = of_unittest_apply_overlay(test, overlay_nr, &ovcs_id);
 	if (ret != 0) {
-		/* of_unittest_apply_overlay already called unittest() */
+		/* of_unittest_apply_overlay already set expectation. */
 		return ret;
 	}
 
 	/* unittest device must be in after state */
-	if (of_unittest_device_exists(unittest_nr, ovtype) != after) {
-		unittest(0, "%s failed to create @\"%s\" %s\n",
-				overlay_name_from_nr(overlay_nr),
-				unittest_path(unittest_nr, ovtype),
-				!after ? "enabled" : "disabled");
-		return -EINVAL;
-	}
-
-	ret = of_overlay_remove(&ovcs_id);
-	if (ret != 0) {
-		unittest(0, "%s failed to be destroyed @\"%s\"\n",
-				overlay_name_from_nr(overlay_nr),
-				unittest_path(unittest_nr, ovtype));
-		return ret;
-	}
+	KUNIT_ASSERT_EQ_MSG(
+		test, of_unittest_device_exists(unittest_nr, ovtype), after,
+		"%s failed to create @\"%s\" %s\n",
+		overlay_name_from_nr(overlay_nr),
+		unittest_path(unittest_nr, ovtype),
+		!after ? "enabled" : "disabled");
+
+	KUNIT_ASSERT_EQ_MSG(test, of_overlay_remove(&ovcs_id), 0,
+			    "%s failed to be destroyed @\"%s\"\n",
+			    overlay_name_from_nr(overlay_nr),
+			    unittest_path(unittest_nr, ovtype));
 
 	/* unittest device must be again in before state */
-	if (of_unittest_device_exists(unittest_nr, PDEV_OVERLAY) != before) {
-		unittest(0, "%s with device @\"%s\" %s\n",
-				overlay_name_from_nr(overlay_nr),
-				unittest_path(unittest_nr, ovtype),
-				!before ? "enabled" : "disabled");
-		return -EINVAL;
-	}
+	KUNIT_ASSERT_EQ_MSG(
+		test,
+		of_unittest_device_exists(unittest_nr, PDEV_OVERLAY), before,
+		"%s with device @\"%s\" %s\n",
+		overlay_name_from_nr(overlay_nr),
+		unittest_path(unittest_nr, ovtype),
+		!before ? "enabled" : "disabled");
 
 	return 0;
 }
 
 /* test activation of device */
-static void __init of_unittest_overlay_0(void)
+static void of_unittest_overlay_0(struct kunit *test)
 {
 	/* device should enable */
-	if (of_unittest_apply_overlay_check(0, 0, 0, 1, PDEV_OVERLAY))
-		return;
-
-	unittest(1, "overlay test %d passed\n", 0);
+	of_unittest_apply_overlay_check(test, 0, 0, 0, 1, PDEV_OVERLAY);
 }
 
 /* test deactivation of device */
-static void __init of_unittest_overlay_1(void)
+static void of_unittest_overlay_1(struct kunit *test)
 {
 	/* device should disable */
-	if (of_unittest_apply_overlay_check(1, 1, 1, 0, PDEV_OVERLAY))
-		return;
-
-	unittest(1, "overlay test %d passed\n", 1);
+	of_unittest_apply_overlay_check(test, 1, 1, 1, 0, PDEV_OVERLAY);
 }
 
 /* test activation of device */
-static void __init of_unittest_overlay_2(void)
+static void of_unittest_overlay_2(struct kunit *test)
 {
 	/* device should enable */
-	if (of_unittest_apply_overlay_check(2, 2, 0, 1, PDEV_OVERLAY))
-		return;
-
-	unittest(1, "overlay test %d passed\n", 2);
+	of_unittest_apply_overlay_check(test, 2, 2, 0, 1, PDEV_OVERLAY);
 }
 
 /* test deactivation of device */
-static void __init of_unittest_overlay_3(void)
+static void of_unittest_overlay_3(struct kunit *test)
 {
 	/* device should disable */
-	if (of_unittest_apply_overlay_check(3, 3, 1, 0, PDEV_OVERLAY))
-		return;
-
-	unittest(1, "overlay test %d passed\n", 3);
+	of_unittest_apply_overlay_check(test, 3, 3, 1, 0, PDEV_OVERLAY);
 }
 
 /* test activation of a full device node */
-static void __init of_unittest_overlay_4(void)
+static void of_unittest_overlay_4(struct kunit *test)
 {
 	/* device should disable */
-	if (of_unittest_apply_overlay_check(4, 4, 0, 1, PDEV_OVERLAY))
-		return;
-
-	unittest(1, "overlay test %d passed\n", 4);
+	of_unittest_apply_overlay_check(test, 4, 4, 0, 1, PDEV_OVERLAY);
 }
 
 /* test overlay apply/revert sequence */
-static void __init of_unittest_overlay_5(void)
+static void of_unittest_overlay_5(struct kunit *test)
 {
 	/* device should disable */
-	if (of_unittest_apply_revert_overlay_check(5, 5, 0, 1, PDEV_OVERLAY))
-		return;
-
-	unittest(1, "overlay test %d passed\n", 5);
+	of_unittest_apply_revert_overlay_check(test, 5, 5, 0, 1, PDEV_OVERLAY);
 }
 
 /* test overlay application in sequence */
-static void __init of_unittest_overlay_6(void)
+static void of_unittest_overlay_6(struct kunit *test)
 {
 	int i, ov_id[2], ovcs_id;
 	int overlay_nr = 6, unittest_nr = 6;
@@ -1645,74 +1719,67 @@ static void __init of_unittest_overlay_6(void)
 
 	/* unittest device must be in before state */
 	for (i = 0; i < 2; i++) {
-		if (of_unittest_device_exists(unittest_nr + i, PDEV_OVERLAY)
-				!= before) {
-			unittest(0, "%s with device @\"%s\" %s\n",
-					overlay_name_from_nr(overlay_nr + i),
-					unittest_path(unittest_nr + i,
-						PDEV_OVERLAY),
-					!before ? "enabled" : "disabled");
-			return;
-		}
+		KUNIT_ASSERT_EQ_MSG(test,
+				    of_unittest_device_exists(unittest_nr + i,
+							      PDEV_OVERLAY),
+				    before,
+				    "%s with device @\"%s\" %s\n",
+				    overlay_name_from_nr(overlay_nr + i),
+				    unittest_path(unittest_nr + i,
+						  PDEV_OVERLAY),
+				    !before ? "enabled" : "disabled");
 	}
 
 	/* apply the overlays */
 	for (i = 0; i < 2; i++) {
-
 		overlay_name = overlay_name_from_nr(overlay_nr + i);
 
-		if (!overlay_data_apply(overlay_name, &ovcs_id)) {
-			unittest(0, "could not apply overlay \"%s\"\n",
-					overlay_name);
-			return;
-		}
+		KUNIT_ASSERT_TRUE_MSG(
+			test, overlay_data_apply(overlay_name, &ovcs_id),
+			"could not apply overlay \"%s\"\n", overlay_name);
 		ov_id[i] = ovcs_id;
 		of_unittest_track_overlay(ov_id[i]);
 	}
 
 	for (i = 0; i < 2; i++) {
 		/* unittest device must be in after state */
-		if (of_unittest_device_exists(unittest_nr + i, PDEV_OVERLAY)
-				!= after) {
-			unittest(0, "overlay @\"%s\" failed @\"%s\" %s\n",
-					overlay_name_from_nr(overlay_nr + i),
-					unittest_path(unittest_nr + i,
-						PDEV_OVERLAY),
-					!after ? "enabled" : "disabled");
-			return;
-		}
+		KUNIT_ASSERT_EQ_MSG(test,
+				    of_unittest_device_exists(unittest_nr + i,
+							      PDEV_OVERLAY),
+				    after,
+				    "overlay @\"%s\" failed @\"%s\" %s\n",
+				    overlay_name_from_nr(overlay_nr + i),
+				    unittest_path(unittest_nr + i,
+						  PDEV_OVERLAY),
+				    !after ? "enabled" : "disabled");
 	}
 
 	for (i = 1; i >= 0; i--) {
 		ovcs_id = ov_id[i];
-		if (of_overlay_remove(&ovcs_id)) {
-			unittest(0, "%s failed destroy @\"%s\"\n",
-					overlay_name_from_nr(overlay_nr + i),
-					unittest_path(unittest_nr + i,
-						PDEV_OVERLAY));
-			return;
-		}
+		KUNIT_ASSERT_FALSE_MSG(
+			test, of_overlay_remove(&ovcs_id),
+			"%s failed destroy @\"%s\"\n",
+			overlay_name_from_nr(overlay_nr + i),
+			unittest_path(unittest_nr + i, PDEV_OVERLAY));
 		of_unittest_untrack_overlay(ov_id[i]);
 	}
 
 	for (i = 0; i < 2; i++) {
 		/* unittest device must be again in before state */
-		if (of_unittest_device_exists(unittest_nr + i, PDEV_OVERLAY)
-				!= before) {
-			unittest(0, "%s with device @\"%s\" %s\n",
-					overlay_name_from_nr(overlay_nr + i),
-					unittest_path(unittest_nr + i,
-						PDEV_OVERLAY),
-					!before ? "enabled" : "disabled");
-			return;
-		}
+		KUNIT_ASSERT_EQ_MSG(test,
+				    of_unittest_device_exists(unittest_nr + i,
+							      PDEV_OVERLAY),
+				    before,
+				    "%s with device @\"%s\" %s\n",
+				    overlay_name_from_nr(overlay_nr + i),
+				    unittest_path(unittest_nr + i,
+						  PDEV_OVERLAY),
+				    !before ? "enabled" : "disabled");
 	}
-
-	unittest(1, "overlay test %d passed\n", 6);
 }
 
 /* test overlay application in sequence */
-static void __init of_unittest_overlay_8(void)
+static void of_unittest_overlay_8(struct kunit *test)
 {
 	int i, ov_id[2], ovcs_id;
 	int overlay_nr = 8, unittest_nr = 8;
@@ -1722,76 +1789,64 @@ static void __init of_unittest_overlay_8(void)
 
 	/* apply the overlays */
 	for (i = 0; i < 2; i++) {
-
 		overlay_name = overlay_name_from_nr(overlay_nr + i);
 
-		if (!overlay_data_apply(overlay_name, &ovcs_id)) {
-			unittest(0, "could not apply overlay \"%s\"\n",
-					overlay_name);
-			return;
-		}
+		KUNIT_ASSERT_TRUE_MSG(
+			test, overlay_data_apply(overlay_name, &ovcs_id),
+			"could not apply overlay \"%s\"\n", overlay_name);
 		ov_id[i] = ovcs_id;
 		of_unittest_track_overlay(ov_id[i]);
 	}
 
 	/* now try to remove first overlay (it should fail) */
 	ovcs_id = ov_id[0];
-	if (!of_overlay_remove(&ovcs_id)) {
-		unittest(0, "%s was destroyed @\"%s\"\n",
-				overlay_name_from_nr(overlay_nr + 0),
-				unittest_path(unittest_nr,
-					PDEV_OVERLAY));
-		return;
-	}
+	KUNIT_ASSERT_TRUE_MSG(
+		test, of_overlay_remove(&ovcs_id),
+		"%s was destroyed @\"%s\"\n",
+		overlay_name_from_nr(overlay_nr + 0),
+		unittest_path(unittest_nr, PDEV_OVERLAY));
 
 	/* removing them in order should work */
 	for (i = 1; i >= 0; i--) {
 		ovcs_id = ov_id[i];
-		if (of_overlay_remove(&ovcs_id)) {
-			unittest(0, "%s not destroyed @\"%s\"\n",
-					overlay_name_from_nr(overlay_nr + i),
-					unittest_path(unittest_nr,
-						PDEV_OVERLAY));
-			return;
-		}
+		KUNIT_ASSERT_FALSE_MSG(
+			test, of_overlay_remove(&ovcs_id),
+			"%s not destroyed @\"%s\"\n",
+			overlay_name_from_nr(overlay_nr + i),
+			unittest_path(unittest_nr, PDEV_OVERLAY));
 		of_unittest_untrack_overlay(ov_id[i]);
 	}
-
-	unittest(1, "overlay test %d passed\n", 8);
 }
 
 /* test insertion of a bus with parent devices */
-static void __init of_unittest_overlay_10(void)
+static void of_unittest_overlay_10(struct kunit *test)
 {
-	int ret;
 	char *child_path;
 
 	/* device should disable */
-	ret = of_unittest_apply_overlay_check(10, 10, 0, 1, PDEV_OVERLAY);
-	if (unittest(ret == 0,
-			"overlay test %d failed; overlay application\n", 10))
-		return;
+	KUNIT_ASSERT_EQ_MSG(
+		test,
+		of_unittest_apply_overlay_check(
+				test, 10, 10, 0, 1, PDEV_OVERLAY),
+		0,
+		"overlay test %d failed; overlay application\n", 10);
 
 	child_path = kasprintf(GFP_KERNEL, "%s/test-unittest101",
 			unittest_path(10, PDEV_OVERLAY));
-	if (unittest(child_path, "overlay test %d failed; kasprintf\n", 10))
-		return;
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, child_path);
 
-	ret = of_path_device_type_exists(child_path, PDEV_OVERLAY);
+	KUNIT_EXPECT_TRUE_MSG(
+		test, of_path_device_type_exists(child_path, PDEV_OVERLAY),
+		"overlay test %d failed; no child device\n", 10);
 	kfree(child_path);
-
-	unittest(ret, "overlay test %d failed; no child device\n", 10);
 }
 
 /* test insertion of a bus with parent devices (and revert) */
-static void __init of_unittest_overlay_11(void)
+static void of_unittest_overlay_11(struct kunit *test)
 {
-	int ret;
-
 	/* device should disable */
-	ret = of_unittest_apply_revert_overlay_check(11, 11, 0, 1,
-			PDEV_OVERLAY);
-	unittest(ret == 0, "overlay test %d failed; overlay apply\n", 11);
+	KUNIT_EXPECT_FALSE(test, of_unittest_apply_revert_overlay_check(
+		test, 11, 11, 0, 1, PDEV_OVERLAY));
 }
 
 #if IS_BUILTIN(CONFIG_I2C) && IS_ENABLED(CONFIG_OF_OVERLAY)
@@ -2013,25 +2068,18 @@ static struct i2c_driver unittest_i2c_mux_driver = {
 
 #endif
 
-static int of_unittest_overlay_i2c_init(void)
+static int of_unittest_overlay_i2c_init(struct kunit *test)
 {
-	int ret;
-
-	ret = i2c_add_driver(&unittest_i2c_dev_driver);
-	if (unittest(ret == 0,
-			"could not register unittest i2c device driver\n"))
-		return ret;
+	KUNIT_ASSERT_EQ_MSG(test, i2c_add_driver(&unittest_i2c_dev_driver), 0,
+			    "could not register unittest i2c device driver\n");
 
-	ret = platform_driver_register(&unittest_i2c_bus_driver);
-	if (unittest(ret == 0,
-			"could not register unittest i2c bus driver\n"))
-		return ret;
+	KUNIT_ASSERT_EQ_MSG(
+		test, platform_driver_register(&unittest_i2c_bus_driver), 0,
+		"could not register unittest i2c bus driver\n");
 
 #if IS_BUILTIN(CONFIG_I2C_MUX)
-	ret = i2c_add_driver(&unittest_i2c_mux_driver);
-	if (unittest(ret == 0,
-			"could not register unittest i2c mux driver\n"))
-		return ret;
+	KUNIT_ASSERT_EQ_MSG(test, i2c_add_driver(&unittest_i2c_mux_driver), 0,
+			    "could not register unittest i2c mux driver\n");
 #endif
 
 	return 0;
@@ -2046,101 +2094,85 @@ static void of_unittest_overlay_i2c_cleanup(void)
 	i2c_del_driver(&unittest_i2c_dev_driver);
 }
 
-static void __init of_unittest_overlay_i2c_12(void)
+static void of_unittest_overlay_i2c_12(struct kunit *test)
 {
 	/* device should enable */
-	if (of_unittest_apply_overlay_check(12, 12, 0, 1, I2C_OVERLAY))
-		return;
-
-	unittest(1, "overlay test %d passed\n", 12);
+	of_unittest_apply_overlay_check(test, 12, 12, 0, 1, I2C_OVERLAY);
 }
 
 /* test deactivation of device */
-static void __init of_unittest_overlay_i2c_13(void)
+static void of_unittest_overlay_i2c_13(struct kunit *test)
 {
 	/* device should disable */
-	if (of_unittest_apply_overlay_check(13, 13, 1, 0, I2C_OVERLAY))
-		return;
-
-	unittest(1, "overlay test %d passed\n", 13);
+	of_unittest_apply_overlay_check(test, 13, 13, 1, 0, I2C_OVERLAY);
 }
 
 /* just check for i2c mux existence */
-static void of_unittest_overlay_i2c_14(void)
+static void of_unittest_overlay_i2c_14(struct kunit *test)
 {
+	KUNIT_SUCCEED(test);
 }
 
-static void __init of_unittest_overlay_i2c_15(void)
+static void of_unittest_overlay_i2c_15(struct kunit *test)
 {
 	/* device should enable */
-	if (of_unittest_apply_overlay_check(15, 15, 0, 1, I2C_OVERLAY))
-		return;
-
-	unittest(1, "overlay test %d passed\n", 15);
+	of_unittest_apply_overlay_check(test, 15, 15, 0, 1, I2C_OVERLAY);
 }
 
 #else
 
-static inline void of_unittest_overlay_i2c_14(void) { }
-static inline void of_unittest_overlay_i2c_15(void) { }
+static inline void of_unittest_overlay_i2c_14(struct kunit *test) { }
+static inline void of_unittest_overlay_i2c_15(struct kunit *test) { }
 
 #endif
 
-static void __init of_unittest_overlay(void)
+static void of_unittest_overlay(struct kunit *test)
 {
 	struct device_node *bus_np = NULL;
 
-	if (platform_driver_register(&unittest_driver)) {
-		unittest(0, "could not register unittest driver\n");
-		goto out;
-	}
+	KUNIT_ASSERT_FALSE_MSG(test, platform_driver_register(&unittest_driver),
+			       "could not register unittest driver\n");
 
 	bus_np = of_find_node_by_path(bus_path);
-	if (bus_np == NULL) {
-		unittest(0, "could not find bus_path \"%s\"\n", bus_path);
-		goto out;
-	}
+	KUNIT_ASSERT_NOT_ERR_OR_NULL_MSG(
+		test, bus_np, "could not find bus_path \"%s\"\n", bus_path);
 
-	if (of_platform_default_populate(bus_np, NULL, NULL)) {
-		unittest(0, "could not populate bus @ \"%s\"\n", bus_path);
-		goto out;
-	}
-
-	if (!of_unittest_device_exists(100, PDEV_OVERLAY)) {
-		unittest(0, "could not find unittest0 @ \"%s\"\n",
-				unittest_path(100, PDEV_OVERLAY));
-		goto out;
-	}
+	KUNIT_ASSERT_FALSE_MSG(
+		test, of_platform_default_populate(bus_np, NULL, NULL),
+		"could not populate bus @ \"%s\"\n", bus_path);
 
-	if (of_unittest_device_exists(101, PDEV_OVERLAY)) {
-		unittest(0, "unittest1 @ \"%s\" should not exist\n",
-				unittest_path(101, PDEV_OVERLAY));
-		goto out;
-	}
+	KUNIT_ASSERT_TRUE_MSG(
+		test, of_unittest_device_exists(100, PDEV_OVERLAY),
+		"could not find unittest0 @ \"%s\"\n",
+		unittest_path(100, PDEV_OVERLAY));
 
-	unittest(1, "basic infrastructure of overlays passed");
+	KUNIT_ASSERT_FALSE_MSG(
+		test, of_unittest_device_exists(101, PDEV_OVERLAY),
+		"unittest1 @ \"%s\" should not exist\n",
+		unittest_path(101, PDEV_OVERLAY));
 
 	/* tests in sequence */
-	of_unittest_overlay_0();
-	of_unittest_overlay_1();
-	of_unittest_overlay_2();
-	of_unittest_overlay_3();
-	of_unittest_overlay_4();
-	of_unittest_overlay_5();
-	of_unittest_overlay_6();
-	of_unittest_overlay_8();
-
-	of_unittest_overlay_10();
-	of_unittest_overlay_11();
+	of_unittest_overlay_0(test);
+	of_unittest_overlay_1(test);
+	of_unittest_overlay_2(test);
+	of_unittest_overlay_3(test);
+	of_unittest_overlay_4(test);
+	of_unittest_overlay_5(test);
+	of_unittest_overlay_6(test);
+	of_unittest_overlay_8(test);
+
+	of_unittest_overlay_10(test);
+	of_unittest_overlay_11(test);
 
 #if IS_BUILTIN(CONFIG_I2C)
-	if (unittest(of_unittest_overlay_i2c_init() == 0, "i2c init failed\n"))
-		goto out;
+	KUNIT_ASSERT_EQ_MSG(test, of_unittest_overlay_i2c_init(test), 0,
+			    "i2c init failed\n");
 
-	of_unittest_overlay_i2c_12();
-	of_unittest_overlay_i2c_13();
-	of_unittest_overlay_i2c_14();
-	of_unittest_overlay_i2c_15();
+	of_unittest_overlay_i2c_12(test);
+	of_unittest_overlay_i2c_13(test);
+	of_unittest_overlay_i2c_14(test);
+	of_unittest_overlay_i2c_15(test);
 
 	of_unittest_overlay_i2c_cleanup();
 #endif
@@ -2152,7 +2184,7 @@ static void __init of_unittest_overlay(void)
 }
 
 #else
-static inline void __init of_unittest_overlay(void) { }
+static inline void of_unittest_overlay(struct kunit *test) { }
 #endif
 
 #ifdef CONFIG_OF_OVERLAY
@@ -2313,7 +2345,7 @@ void __init unittest_unflatten_overlay_base(void)
  *
  * Return 0 on unexpected error.
  */
-static int __init overlay_data_apply(const char *overlay_name, int *overlay_id)
+static int overlay_data_apply(const char *overlay_name, int *overlay_id)
 {
 	struct overlay_info *info;
 	int found = 0;
@@ -2359,19 +2391,17 @@ static int __init overlay_data_apply(const char *overlay_name, int *overlay_id)
  * The first part of the function is _not_ normal overlay usage; it is
  * finishing splicing the base overlay device tree into the live tree.
  */
-static __init void of_unittest_overlay_high_level(void)
+static void of_unittest_overlay_high_level(struct kunit *test)
 {
 	struct device_node *last_sibling;
 	struct device_node *np;
 	struct device_node *of_symbols;
-	struct device_node *overlay_base_symbols;
+	struct device_node *overlay_base_symbols = NULL;
 	struct device_node **pprev;
 	struct property *prop;
 
-	if (!overlay_base_root) {
-		unittest(0, "overlay_base_root not initialized\n");
-		return;
-	}
+	KUNIT_ASSERT_TRUE_MSG(test, overlay_base_root,
+			      "overlay_base_root not initialized\n");
 
 	/*
 	 * Could not fixup phandles in unittest_unflatten_overlay_base()
@@ -2418,11 +2448,9 @@ static __init void of_unittest_overlay_high_level(void)
 	for_each_child_of_node(overlay_base_root, np) {
 		struct device_node *base_child;
 		for_each_child_of_node(of_root, base_child) {
-			if (!strcmp(np->full_name, base_child->full_name)) {
-				unittest(0, "illegal node name in overlay_base %pOFn",
-					 np);
-				return;
-			}
+			KUNIT_ASSERT_STRNEQ_MSG(
+				test, np->full_name, base_child->full_name,
+				"illegal node name in overlay_base %pOFn", np);
 		}
 	}
 
@@ -2456,21 +2484,24 @@ static __init void of_unittest_overlay_high_level(void)
 
 			new_prop = __of_prop_dup(prop, GFP_KERNEL);
 			if (!new_prop) {
-				unittest(0, "__of_prop_dup() of '%s' from overlay_base node __symbols__",
-					 prop->name);
+				KUNIT_FAIL(test,
+					   "__of_prop_dup() of '%s' from overlay_base node __symbols__",
+					   prop->name);
 				goto err_unlock;
 			}
 			if (__of_add_property(of_symbols, new_prop)) {
 				/* "name" auto-generated by unflatten */
 				if (!strcmp(new_prop->name, "name"))
 					continue;
-				unittest(0, "duplicate property '%s' in overlay_base node __symbols__",
-					 prop->name);
+				KUNIT_FAIL(test,
+					   "duplicate property '%s' in overlay_base node __symbols__",
+					   prop->name);
 				goto err_unlock;
 			}
 			if (__of_add_property_sysfs(of_symbols, new_prop)) {
-				unittest(0, "unable to add property '%s' in overlay_base node __symbols__ to sysfs",
-					 prop->name);
+				KUNIT_FAIL(test,
+					   "unable to add property '%s' in overlay_base node __symbols__ to sysfs",
+					   prop->name);
 				goto err_unlock;
 			}
 		}
@@ -2481,20 +2512,24 @@ static __init void of_unittest_overlay_high_level(void)
 
 	/* now do the normal overlay usage test */
 
-	unittest(overlay_data_apply("overlay", NULL),
-		 "Adding overlay 'overlay' failed\n");
+	KUNIT_EXPECT_TRUE_MSG(test, overlay_data_apply("overlay", NULL),
+			      "Adding overlay 'overlay' failed\n");
 
-	unittest(overlay_data_apply("overlay_bad_add_dup_node", NULL),
-		 "Adding overlay 'overlay_bad_add_dup_node' failed\n");
+	KUNIT_EXPECT_TRUE_MSG(
+		test, overlay_data_apply("overlay_bad_add_dup_node", NULL),
+		"Adding overlay 'overlay_bad_add_dup_node' failed\n");
 
-	unittest(overlay_data_apply("overlay_bad_add_dup_prop", NULL),
-		 "Adding overlay 'overlay_bad_add_dup_prop' failed\n");
+	KUNIT_EXPECT_TRUE_MSG(
+		test, overlay_data_apply("overlay_bad_add_dup_prop", NULL),
+		"Adding overlay 'overlay_bad_add_dup_prop' failed\n");
 
-	unittest(overlay_data_apply("overlay_bad_phandle", NULL),
-		 "Adding overlay 'overlay_bad_phandle' failed\n");
+	KUNIT_EXPECT_TRUE_MSG(
+		test, overlay_data_apply("overlay_bad_phandle", NULL),
+		"Adding overlay 'overlay_bad_phandle' failed\n");
 
-	unittest(overlay_data_apply("overlay_bad_symbol", NULL),
-		 "Adding overlay 'overlay_bad_symbol' failed\n");
+	KUNIT_EXPECT_TRUE_MSG(
+		test, overlay_data_apply("overlay_bad_symbol", NULL),
+		"Adding overlay 'overlay_bad_symbol' failed\n");
 
 	return;
 
@@ -2504,57 +2539,52 @@ static __init void of_unittest_overlay_high_level(void)
 
 #else
 
-static inline __init void of_unittest_overlay_high_level(void) {}
+static inline void of_unittest_overlay_high_level(struct kunit *test) {}
 
 #endif
 
-static int __init of_unittest(void)
+static int of_test_init(struct kunit *test)
 {
-	struct device_node *np;
-	int res;
-
 	/* adding data for unittest */
-	res = unittest_data_add();
-	if (res)
-		return res;
+	KUNIT_ASSERT_EQ(test, 0, unittest_data_add());
+
 	if (!of_aliases)
 		of_aliases = of_find_node_by_path("/aliases");
 
-	np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
-	if (!np) {
-		pr_info("No testcase data in device tree; not running tests\n");
-		return 0;
-	}
-	of_node_put(np);
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, of_find_node_by_path(
+		"/testcase-data/phandle-tests/consumer-a"));
 
 	if (IS_ENABLED(CONFIG_UML))
 		unflatten_device_tree();
 
-	pr_info("start of unittest - you will see error messages\n");
-	of_unittest_check_tree_linkage();
-	of_unittest_check_phandles();
-	of_unittest_find_node_by_name();
-	of_unittest_dynamic();
-	of_unittest_parse_phandle_with_args();
-	of_unittest_parse_phandle_with_args_map();
-	of_unittest_printf();
-	of_unittest_property_string();
-	of_unittest_property_copy();
-	of_unittest_changeset();
-	of_unittest_parse_interrupts();
-	of_unittest_parse_interrupts_extended();
-	of_unittest_match_node();
-	of_unittest_platform_populate();
-	of_unittest_overlay();
+	return 0;
+}
 
+static struct kunit_case of_test_cases[] = {
+	KUNIT_CASE(of_unittest_check_tree_linkage),
+	KUNIT_CASE(of_unittest_check_phandles),
+	KUNIT_CASE(of_unittest_find_node_by_name),
+	KUNIT_CASE(of_unittest_dynamic),
+	KUNIT_CASE(of_unittest_parse_phandle_with_args),
+	KUNIT_CASE(of_unittest_parse_phandle_with_args_map),
+	KUNIT_CASE(of_unittest_printf),
+	KUNIT_CASE(of_unittest_property_string),
+	KUNIT_CASE(of_unittest_property_copy),
+	KUNIT_CASE(of_unittest_changeset),
+	KUNIT_CASE(of_unittest_parse_interrupts),
+	KUNIT_CASE(of_unittest_parse_interrupts_extended),
+	KUNIT_CASE(of_unittest_match_node),
+	KUNIT_CASE(of_unittest_platform_populate),
+	KUNIT_CASE(of_unittest_overlay),
 	/* Double check linkage after removing testcase data */
-	of_unittest_check_tree_linkage();
-
-	of_unittest_overlay_high_level();
-
-	pr_info("end of unittest - %i passed, %i failed\n",
-		unittest_results.passed, unittest_results.failed);
+	KUNIT_CASE(of_unittest_check_tree_linkage),
+	KUNIT_CASE(of_unittest_overlay_high_level),
+	{},
+};
 
-	return 0;
-}
-late_initcall(of_unittest);
+static struct kunit_module of_test_module = {
+	.name = "of-test",
+	.init = of_test_init,
+	.test_cases = of_test_cases,
+};
+module_test(of_test_module);
-- 
2.21.0.rc0.258.g878e2cd30e-goog

^ permalink raw reply related	[flat|nested] 316+ messages in thread

* [RFC v4 15/17] of: unittest: migrate tests to run on KUnit
@ 2019-02-14 21:37     ` Brendan Higgins
  0 siblings, 0 replies; 316+ messages in thread
From: Brendan Higgins @ 2019-02-14 21:37 UTC (permalink / raw)


Migrate the tests to run under KUnit, using the KUnit expectation and
assertion APIs, without any cleanup and without modifying test logic in
any way.
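
As an illustration only (not taken from the diff below), a typical
conversion replaces the open-coded unittest() macro with a KUnit
expectation inside a test case; do_something() and the "example" names
here are hypothetical placeholders:

  /* Before: unittest() keeps its own pass/fail counters. */
  static void __init example_check(void)
  {
  	/* do_something() stands in for whatever the check exercises. */
  	int rc = do_something();

  	unittest(rc == 0, "do_something() failed; rc=%i\n", rc);
  }

  /* After: the same check expressed as a KUnit test case. */
  #include <kunit/test.h>

  static void example_check(struct kunit *test)
  {
  	KUNIT_EXPECT_EQ_MSG(test, do_something(), 0,
  			    "do_something() failed\n");
  }

  static struct kunit_case example_test_cases[] = {
  	KUNIT_CASE(example_check),
  	{},
  };

  static struct kunit_module example_test_module = {
  	.name = "example-test",
  	.test_cases = example_test_cases,
  };
  module_test(example_test_module);

Suite-wide setup goes through the module's .init field, which is how
of_test_init() below takes over from the old late_initcall() entry
point.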

Signed-off-by: Brendan Higgins <brendanhiggins@google.com>
---
 drivers/of/Kconfig    |    1 +
 drivers/of/unittest.c | 1310 +++++++++++++++++++++--------------------
 2 files changed, 671 insertions(+), 640 deletions(-)

diff --git a/drivers/of/Kconfig b/drivers/of/Kconfig
index ad3fcad4d75b8..f309399deac20 100644
--- a/drivers/of/Kconfig
+++ b/drivers/of/Kconfig
@@ -15,6 +15,7 @@ if OF
 config OF_UNITTEST
 	bool "Device Tree runtime unit tests"
 	depends on !SPARC
+	depends on KUNIT
 	select IRQ_DOMAIN
 	select OF_EARLY_FLATTREE
 	select OF_RESOLVE
diff --git a/drivers/of/unittest.c b/drivers/of/unittest.c
index effa4e2b9d992..96de69ccb3e63 100644
--- a/drivers/of/unittest.c
+++ b/drivers/of/unittest.c
@@ -26,186 +26,189 @@
 
 #include <linux/bitops.h>
 
+#include <kunit/test.h>
+
 #include "of_private.h"
 
-static struct unittest_results {
-	int passed;
-	int failed;
-} unittest_results;
-
-#define unittest(result, fmt, ...) ({ \
-	bool failed = !(result); \
-	if (failed) { \
-		unittest_results.failed++; \
-		pr_err("FAIL %s():%i " fmt, __func__, __LINE__, ##__VA_ARGS__); \
-	} else { \
-		unittest_results.passed++; \
-		pr_debug("pass %s():%i\n", __func__, __LINE__); \
-	} \
-	failed; \
-})
-
-static void __init of_unittest_find_node_by_name(void)
+static void of_unittest_find_node_by_name(struct kunit *test)
 {
 	struct device_node *np;
 	const char *options, *name;
 
 	np = of_find_node_by_path("/testcase-data");
 	name = kasprintf(GFP_KERNEL, "%pOF", np);
-	unittest(np && !strcmp("/testcase-data", name),
-		"find /testcase-data failed\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+	KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
+			       "find /testcase-data failed\n");
 	of_node_put(np);
 	kfree(name);
 
 	/* Test if trailing '/' works */
-	np = of_find_node_by_path("/testcase-data/");
-	unittest(!np, "trailing '/' on /testcase-data/ should fail\n");
+	KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("/testcase-data/"), NULL,
+			    "trailing '/' on /testcase-data/ should fail\n");
 
 	np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
 	name = kasprintf(GFP_KERNEL, "%pOF", np);
-	unittest(np && !strcmp("/testcase-data/phandle-tests/consumer-a", name),
+	KUNIT_EXPECT_STREQ_MSG(
+		test, "/testcase-data/phandle-tests/consumer-a", name,
 		"find /testcase-data/phandle-tests/consumer-a failed\n");
 	of_node_put(np);
 	kfree(name);
 
 	np = of_find_node_by_path("testcase-alias");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
 	name = kasprintf(GFP_KERNEL, "%pOF", np);
-	unittest(np && !strcmp("/testcase-data", name),
-		"find testcase-alias failed\n");
+	KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
+			       "find testcase-alias failed\n");
 	of_node_put(np);
 	kfree(name);
 
 	/* Test if trailing '/' works on aliases */
-	np = of_find_node_by_path("testcase-alias/");
-	unittest(!np, "trailing '/' on testcase-alias/ should fail\n");
+	KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("testcase-alias/"), NULL,
+			    "trailing '/' on testcase-alias/ should fail\n");
 
 	np = of_find_node_by_path("testcase-alias/phandle-tests/consumer-a");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
 	name = kasprintf(GFP_KERNEL, "%pOF", np);
-	unittest(np && !strcmp("/testcase-data/phandle-tests/consumer-a", name),
+	KUNIT_EXPECT_STREQ_MSG(
+		test, "/testcase-data/phandle-tests/consumer-a", name,
 		"find testcase-alias/phandle-tests/consumer-a failed\n");
 	of_node_put(np);
 	kfree(name);
 
-	np = of_find_node_by_path("/testcase-data/missing-path");
-	unittest(!np, "non-existent path returned node %pOF\n", np);
+	KUNIT_EXPECT_EQ_MSG(
+		test,
+		np = of_find_node_by_path("/testcase-data/missing-path"), NULL,
+		"non-existent path returned node %pOF\n", np);
 	of_node_put(np);
 
-	np = of_find_node_by_path("missing-alias");
-	unittest(!np, "non-existent alias returned node %pOF\n", np);
+	KUNIT_EXPECT_EQ_MSG(
+		test, np = of_find_node_by_path("missing-alias"), NULL,
+		"non-existent alias returned node %pOF\n", np);
 	of_node_put(np);
 
-	np = of_find_node_by_path("testcase-alias/missing-path");
-	unittest(!np, "non-existent alias with relative path returned node %pOF\n", np);
+	KUNIT_EXPECT_EQ_MSG(
+		test,
+		np = of_find_node_by_path("testcase-alias/missing-path"), NULL,
+		"non-existent alias with relative path returned node %pOF\n",
+		np);
 	of_node_put(np);
 
 	np = of_find_node_opts_by_path("/testcase-data:testoption", &options);
-	unittest(np && !strcmp("testoption", options),
-		 "option path test failed\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+	KUNIT_EXPECT_STREQ_MSG(test, "testoption", options,
+			       "option path test failed\n");
 	of_node_put(np);
 
 	np = of_find_node_opts_by_path("/testcase-data:test/option", &options);
-	unittest(np && !strcmp("test/option", options),
-		 "option path test, subcase #1 failed\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+	KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
+			       "option path test, subcase #1 failed\n");
 	of_node_put(np);
 
 	np = of_find_node_opts_by_path("/testcase-data/testcase-device1:test/option", &options);
-	unittest(np && !strcmp("test/option", options),
-		 "option path test, subcase #2 failed\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+	KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
+			       "option path test, subcase #2 failed\n");
 	of_node_put(np);
 
 	np = of_find_node_opts_by_path("/testcase-data:testoption", NULL);
-	unittest(np, "NULL option path test failed\n");
+	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(test, np,
+					 "NULL option path test failed\n");
 	of_node_put(np);
 
 	np = of_find_node_opts_by_path("testcase-alias:testaliasoption",
 				       &options);
-	unittest(np && !strcmp("testaliasoption", options),
-		 "option alias path test failed\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+	KUNIT_EXPECT_STREQ_MSG(test, "testaliasoption", options,
+			       "option alias path test failed\n");
 	of_node_put(np);
 
 	np = of_find_node_opts_by_path("testcase-alias:test/alias/option",
 				       &options);
-	unittest(np && !strcmp("test/alias/option", options),
-		 "option alias path test, subcase #1 failed\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+	KUNIT_EXPECT_STREQ_MSG(test, "test/alias/option", options,
+			       "option alias path test, subcase #1 failed\n");
 	of_node_put(np);
 
 	np = of_find_node_opts_by_path("testcase-alias:testaliasoption", NULL);
-	unittest(np, "NULL option alias path test failed\n");
+	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(
+			test, np, "NULL option alias path test failed\n");
 	of_node_put(np);
 
 	options = "testoption";
 	np = of_find_node_opts_by_path("testcase-alias", &options);
-	unittest(np && !options, "option clearing test failed\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+	KUNIT_EXPECT_EQ_MSG(test, options, NULL,
+			    "option clearing test failed\n");
 	of_node_put(np);
 
 	options = "testoption";
 	np = of_find_node_opts_by_path("/", &options);
-	unittest(np && !options, "option clearing root node test failed\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+	KUNIT_EXPECT_EQ_MSG(test, options, NULL,
+			    "option clearing root node test failed\n");
 	of_node_put(np);
 }
 
-static void __init of_unittest_dynamic(void)
+static void of_unittest_dynamic(struct kunit *test)
 {
 	struct device_node *np;
 	struct property *prop;
 
 	np = of_find_node_by_path("/testcase-data");
-	if (!np) {
-		pr_err("missing testcase data\n");
-		return;
-	}
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
 
 	/* Array of 4 properties for the purpose of testing */
 	prop = kcalloc(4, sizeof(*prop), GFP_KERNEL);
-	if (!prop) {
-		unittest(0, "kzalloc() failed\n");
-		return;
-	}
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, prop);
 
 	/* Add a new property - should pass*/
 	prop->name = "new-property";
 	prop->value = "new-property-data";
 	prop->length = strlen(prop->value) + 1;
-	unittest(of_add_property(np, prop) == 0, "Adding a new property failed\n");
+	KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop), 0,
+			    "Adding a new property failed\n");
 
 	/* Try to add an existing property - should fail */
 	prop++;
 	prop->name = "new-property";
 	prop->value = "new-property-data-should-fail";
 	prop->length = strlen(prop->value) + 1;
-	unittest(of_add_property(np, prop) != 0,
-		 "Adding an existing property should have failed\n");
+	KUNIT_EXPECT_NE_MSG(test, of_add_property(np, prop), 0,
+			    "Adding an existing property should have failed\n");
 
 	/* Try to modify an existing property - should pass */
 	prop->value = "modify-property-data-should-pass";
 	prop->length = strlen(prop->value) + 1;
-	unittest(of_update_property(np, prop) == 0,
-		 "Updating an existing property should have passed\n");
+	KUNIT_EXPECT_EQ_MSG(
+		test, of_update_property(np, prop), 0,
+		"Updating an existing property should have passed\n");
 
 	/* Try to modify non-existent property - should pass*/
 	prop++;
 	prop->name = "modify-property";
 	prop->value = "modify-missing-property-data-should-pass";
 	prop->length = strlen(prop->value) + 1;
-	unittest(of_update_property(np, prop) == 0,
-		 "Updating a missing property should have passed\n");
+	KUNIT_EXPECT_EQ_MSG(test, of_update_property(np, prop), 0,
+			    "Updating a missing property should have passed\n");
 
 	/* Remove property - should pass */
-	unittest(of_remove_property(np, prop) == 0,
-		 "Removing a property should have passed\n");
+	KUNIT_EXPECT_EQ_MSG(test, of_remove_property(np, prop), 0,
+			    "Removing a property should have passed\n");
 
 	/* Adding very large property - should pass */
 	prop++;
 	prop->name = "large-property-PAGE_SIZEx8";
 	prop->length = PAGE_SIZE * 8;
 	prop->value = kzalloc(prop->length, GFP_KERNEL);
-	unittest(prop->value != NULL, "Unable to allocate large buffer\n");
-	if (prop->value)
-		unittest(of_add_property(np, prop) == 0,
-			 "Adding a large property should have passed\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, prop->value);
+	KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop), 0,
+			    "Adding a large property should have passed\n");
 }
 
-static int __init of_unittest_check_node_linkage(struct device_node *np)
+static int of_unittest_check_node_linkage(struct device_node *np)
 {
 	struct device_node *child;
 	int count = 0, rc;
@@ -230,27 +233,30 @@ static int __init of_unittest_check_node_linkage(struct device_node *np)
 	return rc;
 }
 
-static void __init of_unittest_check_tree_linkage(void)
+static void of_unittest_check_tree_linkage(struct kunit *test)
 {
 	struct device_node *np;
 	int allnode_count = 0, child_count;
 
-	if (!of_root)
-		return;
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, of_root);
 
 	for_each_of_allnodes(np)
 		allnode_count++;
 	child_count = of_unittest_check_node_linkage(of_root);
 
-	unittest(child_count > 0, "Device node data structure is corrupted\n");
-	unittest(child_count == allnode_count,
-		 "allnodes list size (%i) doesn't match sibling lists size (%i)\n",
-		 allnode_count, child_count);
+	KUNIT_EXPECT_GT_MSG(test, child_count, 0,
+			    "Device node data structure is corrupted\n");
+	KUNIT_EXPECT_EQ_MSG(
+		test, child_count, allnode_count,
+		"allnodes list size (%i) doesn't match sibling lists size (%i)\n",
+		allnode_count, child_count);
 	pr_debug("allnodes list size (%i); sibling lists size (%i)\n", allnode_count, child_count);
 }
 
-static void __init of_unittest_printf_one(struct device_node *np, const char *fmt,
-					  const char *expected)
+static void of_unittest_printf_one(struct kunit *test,
+				   struct device_node *np,
+				   const char *fmt,
+				   const char *expected)
 {
 	unsigned char *buf;
 	int buf_size;
@@ -265,8 +271,12 @@ static void __init of_unittest_printf_one(struct device_node *np, const char *fm
 	memset(buf, 0xff, buf_size);
 	size = snprintf(buf, buf_size - 2, fmt, np);
 
-	/* use strcmp() instead of strncmp() here to be absolutely sure strings match */
-	unittest((strcmp(buf, expected) == 0) && (buf[size+1] == 0xff),
+	KUNIT_EXPECT_STREQ_MSG(
+		test, buf, expected,
+		"sprintf failed; fmt='%s' expected='%s' rslt='%s'\n",
+		fmt, expected, buf);
+	KUNIT_EXPECT_EQ_MSG(
+		test, buf[size+1], 0xff,
 		"sprintf failed; fmt='%s' expected='%s' rslt='%s'\n",
 		fmt, expected, buf);
 
@@ -276,44 +286,49 @@ static void __init of_unittest_printf_one(struct device_node *np, const char *fm
 		/* Clear the buffer, and make sure it works correctly still */
 		memset(buf, 0xff, buf_size);
 		snprintf(buf, size+1, fmt, np);
-		unittest(strncmp(buf, expected, size) == 0 && (buf[size+1] == 0xff),
+		KUNIT_EXPECT_STREQ_MSG(
+			test, buf, expected,
+			"snprintf failed; size=%i fmt='%s' expected='%s' rslt='%s'\n",
+			size, fmt, expected, buf);
+		KUNIT_EXPECT_EQ_MSG(
+			test, buf[size+1], 0xff,
 			"snprintf failed; size=%i fmt='%s' expected='%s' rslt='%s'\n",
 			size, fmt, expected, buf);
 	}
 	kfree(buf);
 }
 
-static void __init of_unittest_printf(void)
+static void of_unittest_printf(struct kunit *test)
 {
 	struct device_node *np;
 	const char *full_name = "/testcase-data/platform-tests/test-device@1/dev@100";
 	char phandle_str[16] = "";
 
 	np = of_find_node_by_path(full_name);
-	if (!np) {
-		unittest(np, "testcase data missing\n");
-		return;
-	}
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
 
 	num_to_str(phandle_str, sizeof(phandle_str), np->phandle, 0);
 
-	of_unittest_printf_one(np, "%pOF",  full_name);
-	of_unittest_printf_one(np, "%pOFf", full_name);
-	of_unittest_printf_one(np, "%pOFn", "dev");
-	of_unittest_printf_one(np, "%2pOFn", "dev");
-	of_unittest_printf_one(np, "%5pOFn", "  dev");
-	of_unittest_printf_one(np, "%pOFnc", "dev:test-sub-device");
-	of_unittest_printf_one(np, "%pOFp", phandle_str);
-	of_unittest_printf_one(np, "%pOFP", "dev@100");
-	of_unittest_printf_one(np, "ABC %pOFP ABC", "ABC dev@100 ABC");
-	of_unittest_printf_one(np, "%10pOFP", "   dev@100");
-	of_unittest_printf_one(np, "%-10pOFP", "dev@100   ");
-	of_unittest_printf_one(of_root, "%pOFP", "/");
-	of_unittest_printf_one(np, "%pOFF", "----");
-	of_unittest_printf_one(np, "%pOFPF", "dev@100:----");
-	of_unittest_printf_one(np, "%pOFPFPc", "dev@100:----:dev@100:test-sub-device");
-	of_unittest_printf_one(np, "%pOFc", "test-sub-device");
-	of_unittest_printf_one(np, "%pOFC",
+	of_unittest_printf_one(test, np, "%pOF",  full_name);
+	of_unittest_printf_one(test, np, "%pOFf", full_name);
+	of_unittest_printf_one(test, np, "%pOFn", "dev");
+	of_unittest_printf_one(test, np, "%2pOFn", "dev");
+	of_unittest_printf_one(test, np, "%5pOFn", "  dev");
+	of_unittest_printf_one(test, np, "%pOFnc", "dev:test-sub-device");
+	of_unittest_printf_one(test, np, "%pOFp", phandle_str);
+	of_unittest_printf_one(test, np, "%pOFP", "dev@100");
+	of_unittest_printf_one(test, np, "ABC %pOFP ABC", "ABC dev@100 ABC");
+	of_unittest_printf_one(test, np, "%10pOFP", "   dev@100");
+	of_unittest_printf_one(test, np, "%-10pOFP", "dev@100   ");
+	of_unittest_printf_one(test, of_root, "%pOFP", "/");
+	of_unittest_printf_one(test, np, "%pOFF", "----");
+	of_unittest_printf_one(test, np, "%pOFPF", "dev@100:----");
+	of_unittest_printf_one(test,
+			       np,
+			       "%pOFPFPc",
+			       "dev@100:----:dev@100:test-sub-device");
+	of_unittest_printf_one(test, np, "%pOFc", "test-sub-device");
+	of_unittest_printf_one(test, np, "%pOFC",
 			"\"test-sub-device\",\"test-compat2\",\"test-compat3\"");
 }
 
@@ -323,7 +338,7 @@ struct node_hash {
 };
 
 static DEFINE_HASHTABLE(phandle_ht, 8);
-static void __init of_unittest_check_phandles(void)
+static void of_unittest_check_phandles(struct kunit *test)
 {
 	struct device_node *np;
 	struct node_hash *nh;
@@ -335,24 +350,26 @@ static void __init of_unittest_check_phandles(void)
 			continue;
 
 		hash_for_each_possible(phandle_ht, nh, node, np->phandle) {
+			KUNIT_EXPECT_NE_MSG(
+				test, nh->np->phandle, np->phandle,
+				"Duplicate phandle! %i used by %pOF and %pOF\n",
+				np->phandle, nh->np, np);
 			if (nh->np->phandle == np->phandle) {
-				pr_info("Duplicate phandle! %i used by %pOF and %pOF\n",
-					np->phandle, nh->np, np);
 				dup_count++;
 				break;
 			}
 		}
 
 		nh = kzalloc(sizeof(*nh), GFP_KERNEL);
-		if (WARN_ON(!nh))
-			return;
+		KUNIT_ASSERT_NOT_ERR_OR_NULL(test, nh);
 
 		nh->np = np;
 		hash_add(phandle_ht, &nh->node, np->phandle);
 		phandle_count++;
 	}
-	unittest(dup_count == 0, "Found %i duplicates in %i phandles\n",
-		 dup_count, phandle_count);
+	KUNIT_EXPECT_EQ_MSG(test, dup_count, 0,
+			    "Found %i duplicates in %i phandles\n",
+			    dup_count, phandle_count);
 
 	/* Clean up */
 	hash_for_each_safe(phandle_ht, i, tmp, nh, node) {
@@ -361,20 +378,21 @@ static void __init of_unittest_check_phandles(void)
 	}
 }
 
-static void __init of_unittest_parse_phandle_with_args(void)
+static void of_unittest_parse_phandle_with_args(struct kunit *test)
 {
 	struct device_node *np;
 	struct of_phandle_args args;
-	int i, rc;
+	int i, rc = 0;
 
 	np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
-	if (!np) {
-		pr_err("missing testcase data\n");
-		return;
-	}
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
 
-	rc = of_count_phandle_with_args(np, "phandle-list", "#phandle-cells");
-	unittest(rc == 7, "of_count_phandle_with_args() returned %i, expected 7\n", rc);
+	KUNIT_EXPECT_EQ_MSG(
+		test,
+		of_count_phandle_with_args(
+			np, "phandle-list", "#phandle-cells"),
+		7,
+		"of_count_phandle_with_args() returned %i, expected 7\n", rc);
 
 	for (i = 0; i < 8; i++) {
 		bool passed = true;
@@ -428,85 +446,91 @@ static void __init of_unittest_parse_phandle_with_args(void)
 			passed = false;
 		}
 
-		unittest(passed, "index %i - data error on node %pOF rc=%i\n",
-			 i, args.np, rc);
+		KUNIT_EXPECT_TRUE_MSG(
+			test, passed,
+			"index %i - data error on node %pOF rc=%i\n",
+			i, args.np, rc);
 	}
 
 	/* Check for missing list property */
 	memset(&args, 0, sizeof(args));
-	rc = of_parse_phandle_with_args(np, "phandle-list-missing",
-					"#phandle-cells", 0, &args);
-	unittest(rc == -ENOENT, "expected:%i got:%i\n", -ENOENT, rc);
-	rc = of_count_phandle_with_args(np, "phandle-list-missing",
-					"#phandle-cells");
-	unittest(rc == -ENOENT, "expected:%i got:%i\n", -ENOENT, rc);
+	KUNIT_EXPECT_EQ(
+		test,
+		of_parse_phandle_with_args(
+			np, "phandle-list-missing", "#phandle-cells", 0, &args),
+		-ENOENT);
+	KUNIT_EXPECT_EQ(
+		test,
+		of_count_phandle_with_args(
+			np, "phandle-list-missing", "#phandle-cells"),
+		-ENOENT);
 
 	/* Check for missing cells property */
 	memset(&args, 0, sizeof(args));
-	rc = of_parse_phandle_with_args(np, "phandle-list",
-					"#phandle-cells-missing", 0, &args);
-	unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
-	rc = of_count_phandle_with_args(np, "phandle-list",
-					"#phandle-cells-missing");
-	unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
+	KUNIT_EXPECT_EQ(
+		test,
+		of_parse_phandle_with_args(
+			np, "phandle-list", "#phandle-cells-missing", 0, &args),
+		-EINVAL);
+	KUNIT_EXPECT_EQ(
+		test,
+		of_count_phandle_with_args(
+			np, "phandle-list", "#phandle-cells-missing"),
+		-EINVAL);
 
 	/* Check for bad phandle in list */
 	memset(&args, 0, sizeof(args));
-	rc = of_parse_phandle_with_args(np, "phandle-list-bad-phandle",
-					"#phandle-cells", 0, &args);
-	unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
-	rc = of_count_phandle_with_args(np, "phandle-list-bad-phandle",
-					"#phandle-cells");
-	unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
+	KUNIT_EXPECT_EQ(
+		test,
+		of_parse_phandle_with_args(np, "phandle-list-bad-phandle",
+					   "#phandle-cells", 0, &args),
+		-EINVAL);
+	KUNIT_EXPECT_EQ(
+		test,
+		of_count_phandle_with_args(
+			np, "phandle-list-bad-phandle", "#phandle-cells"),
+		-EINVAL);
 
 	/* Check for incorrectly formed argument list */
 	memset(&args, 0, sizeof(args));
-	rc = of_parse_phandle_with_args(np, "phandle-list-bad-args",
-					"#phandle-cells", 1, &args);
-	unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
-	rc = of_count_phandle_with_args(np, "phandle-list-bad-args",
-					"#phandle-cells");
-	unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
+	KUNIT_EXPECT_EQ(
+		test,
+		of_parse_phandle_with_args(np, "phandle-list-bad-args",
+					   "#phandle-cells", 1, &args),
+		-EINVAL);
+	KUNIT_EXPECT_EQ(
+		test,
+		of_count_phandle_with_args(
+			np, "phandle-list-bad-args", "#phandle-cells"),
+		-EINVAL);
 }
 
-static void __init of_unittest_parse_phandle_with_args_map(void)
+static void of_unittest_parse_phandle_with_args_map(struct kunit *test)
 {
 	struct device_node *np, *p0, *p1, *p2, *p3;
 	struct of_phandle_args args;
 	int i, rc;
 
 	np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-b");
-	if (!np) {
-		pr_err("missing testcase data\n");
-		return;
-	}
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
 
 	p0 = of_find_node_by_path("/testcase-data/phandle-tests/provider0");
-	if (!p0) {
-		pr_err("missing testcase data\n");
-		return;
-	}
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, p0);
 
 	p1 = of_find_node_by_path("/testcase-data/phandle-tests/provider1");
-	if (!p1) {
-		pr_err("missing testcase data\n");
-		return;
-	}
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, p1);
 
 	p2 = of_find_node_by_path("/testcase-data/phandle-tests/provider2");
-	if (!p2) {
-		pr_err("missing testcase data\n");
-		return;
-	}
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, p2);
 
 	p3 = of_find_node_by_path("/testcase-data/phandle-tests/provider3");
-	if (!p3) {
-		pr_err("missing testcase data\n");
-		return;
-	}
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, p3);
 
-	rc = of_count_phandle_with_args(np, "phandle-list", "#phandle-cells");
-	unittest(rc == 7, "of_count_phandle_with_args() returned %i, expected 7\n", rc);
+	KUNIT_EXPECT_EQ(test,
+		       of_count_phandle_with_args(np,
+						  "phandle-list",
+						  "#phandle-cells"),
+		       7);
 
 	for (i = 0; i < 8; i++) {
 		bool passed = true;
@@ -564,121 +588,186 @@ static void __init of_unittest_parse_phandle_with_args_map(void)
 			passed = false;
 		}
 
-		unittest(passed, "index %i - data error on node %s rc=%i\n",
-			 i, args.np->full_name, rc);
+		KUNIT_EXPECT_TRUE_MSG(
+			test, passed,
+			"index %i - data error on node %s rc=%i\n",
+			i, (args.np ? args.np->full_name : "missing np"), rc);
 	}
 
 	/* Check for missing list property */
 	memset(&args, 0, sizeof(args));
-	rc = of_parse_phandle_with_args_map(np, "phandle-list-missing",
-					    "phandle", 0, &args);
-	unittest(rc == -ENOENT, "expected:%i got:%i\n", -ENOENT, rc);
+	KUNIT_EXPECT_EQ(
+		test,
+		of_parse_phandle_with_args_map(
+			np, "phandle-list-missing", "phandle", 0, &args),
+		-ENOENT);
 
 	/* Check for missing cells,map,mask property */
 	memset(&args, 0, sizeof(args));
-	rc = of_parse_phandle_with_args_map(np, "phandle-list",
-					    "phandle-missing", 0, &args);
-	unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
+	KUNIT_EXPECT_EQ(
+		test,
+		of_parse_phandle_with_args_map(
+			np, "phandle-list", "phandle-missing", 0, &args),
+		-EINVAL);
 
 	/* Check for bad phandle in list */
 	memset(&args, 0, sizeof(args));
-	rc = of_parse_phandle_with_args_map(np, "phandle-list-bad-phandle",
-					    "phandle", 0, &args);
-	unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
+	KUNIT_EXPECT_EQ(
+		test,
+		of_parse_phandle_with_args_map(
+			np, "phandle-list-bad-phandle", "phandle", 0, &args),
+		-EINVAL);
 
 	/* Check for incorrectly formed argument list */
 	memset(&args, 0, sizeof(args));
-	rc = of_parse_phandle_with_args_map(np, "phandle-list-bad-args",
-					    "phandle", 1, &args);
-	unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
+	KUNIT_EXPECT_EQ(
+		test,
+		of_parse_phandle_with_args_map(
+			np, "phandle-list-bad-args", "phandle", 1, &args),
+		-EINVAL);
 }
 
-static void __init of_unittest_property_string(void)
+static void of_unittest_property_string(struct kunit *test)
 {
 	const char *strings[4];
 	struct device_node *np;
 	int rc;
 
 	np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
-	if (!np) {
-		pr_err("No testcase data in device tree\n");
-		return;
-	}
-
-	rc = of_property_match_string(np, "phandle-list-names", "first");
-	unittest(rc == 0, "first expected:0 got:%i\n", rc);
-	rc = of_property_match_string(np, "phandle-list-names", "second");
-	unittest(rc == 1, "second expected:1 got:%i\n", rc);
-	rc = of_property_match_string(np, "phandle-list-names", "third");
-	unittest(rc == 2, "third expected:2 got:%i\n", rc);
-	rc = of_property_match_string(np, "phandle-list-names", "fourth");
-	unittest(rc == -ENODATA, "unmatched string; rc=%i\n", rc);
-	rc = of_property_match_string(np, "missing-property", "blah");
-	unittest(rc == -EINVAL, "missing property; rc=%i\n", rc);
-	rc = of_property_match_string(np, "empty-property", "blah");
-	unittest(rc == -ENODATA, "empty property; rc=%i\n", rc);
-	rc = of_property_match_string(np, "unterminated-string", "blah");
-	unittest(rc == -EILSEQ, "unterminated string; rc=%i\n", rc);
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+
+	KUNIT_EXPECT_EQ(
+		test,
+		of_property_match_string(np, "phandle-list-names", "first"),
+		0);
+	KUNIT_EXPECT_EQ(
+		test,
+		of_property_match_string(np, "phandle-list-names", "second"),
+		1);
+	KUNIT_EXPECT_EQ(
+		test,
+		of_property_match_string(np, "phandle-list-names", "third"),
+		2);
+	KUNIT_EXPECT_EQ_MSG(
+		test,
+		of_property_match_string(np, "phandle-list-names", "fourth"),
+		-ENODATA,
+		"unmatched string");
+	KUNIT_EXPECT_EQ_MSG(
+		test,
+		of_property_match_string(np, "missing-property", "blah"),
+		-EINVAL,
+		"missing property");
+	KUNIT_EXPECT_EQ_MSG(
+		test,
+		of_property_match_string(np, "empty-property", "blah"),
+		-ENODATA,
+		"empty property");
+	KUNIT_EXPECT_EQ_MSG(
+		test,
+		of_property_match_string(np, "unterminated-string", "blah"),
+		-EILSEQ,
+		"unterminated string");
 
 	/* of_property_count_strings() tests */
-	rc = of_property_count_strings(np, "string-property");
-	unittest(rc == 1, "Incorrect string count; rc=%i\n", rc);
-	rc = of_property_count_strings(np, "phandle-list-names");
-	unittest(rc == 3, "Incorrect string count; rc=%i\n", rc);
-	rc = of_property_count_strings(np, "unterminated-string");
-	unittest(rc == -EILSEQ, "unterminated string; rc=%i\n", rc);
-	rc = of_property_count_strings(np, "unterminated-string-list");
-	unittest(rc == -EILSEQ, "unterminated string array; rc=%i\n", rc);
+	KUNIT_EXPECT_EQ(test,
+			of_property_count_strings(np, "string-property"), 1);
+	KUNIT_EXPECT_EQ(test,
+			of_property_count_strings(np, "phandle-list-names"), 3);
+	KUNIT_EXPECT_EQ_MSG(
+		test,
+		of_property_count_strings(np, "unterminated-string"), -EILSEQ,
+		"unterminated string");
+	KUNIT_EXPECT_EQ_MSG(
+		test,
+		of_property_count_strings(np, "unterminated-string-list"),
+		-EILSEQ,
+		"unterminated string array");
 
 	/* of_property_read_string_index() tests */
 	rc = of_property_read_string_index(np, "string-property", 0, strings);
-	unittest(rc == 0 && !strcmp(strings[0], "foobar"), "of_property_read_string_index() failure; rc=%i\n", rc);
+	KUNIT_ASSERT_EQ(test, rc, 0);
+	KUNIT_EXPECT_STREQ(test, strings[0], "foobar");
+
 	strings[0] = NULL;
 	rc = of_property_read_string_index(np, "string-property", 1, strings);
-	unittest(rc == -ENODATA && strings[0] == NULL, "of_property_read_string_index() failure; rc=%i\n", rc);
+	KUNIT_EXPECT_EQ(test, rc, -ENODATA);
+	KUNIT_EXPECT_EQ(test, strings[0], NULL);
+
 	rc = of_property_read_string_index(np, "phandle-list-names", 0, strings);
-	unittest(rc == 0 && !strcmp(strings[0], "first"), "of_property_read_string_index() failure; rc=%i\n", rc);
+	KUNIT_ASSERT_EQ(test, rc, 0);
+	KUNIT_EXPECT_STREQ(test, strings[0], "first");
+
 	rc = of_property_read_string_index(np, "phandle-list-names", 1, strings);
-	unittest(rc == 0 && !strcmp(strings[0], "second"), "of_property_read_string_index() failure; rc=%i\n", rc);
+	KUNIT_ASSERT_EQ(test, rc, 0);
+	KUNIT_EXPECT_STREQ(test, strings[0], "second");
+
 	rc = of_property_read_string_index(np, "phandle-list-names", 2, strings);
-	unittest(rc == 0 && !strcmp(strings[0], "third"), "of_property_read_string_index() failure; rc=%i\n", rc);
+	KUNIT_ASSERT_EQ(test, rc, 0);
+	KUNIT_EXPECT_STREQ(test, strings[0], "third");
+
 	strings[0] = NULL;
 	rc = of_property_read_string_index(np, "phandle-list-names", 3, strings);
-	unittest(rc == -ENODATA && strings[0] == NULL, "of_property_read_string_index() failure; rc=%i\n", rc);
+	KUNIT_EXPECT_EQ(test, rc, -ENODATA);
+	KUNIT_EXPECT_EQ(test, strings[0], NULL);
+
 	strings[0] = NULL;
 	rc = of_property_read_string_index(np, "unterminated-string", 0, strings);
-	unittest(rc == -EILSEQ && strings[0] == NULL, "of_property_read_string_index() failure; rc=%i\n", rc);
+	KUNIT_EXPECT_EQ(test, rc, -EILSEQ);
+	KUNIT_EXPECT_EQ(test, strings[0], NULL);
+
 	rc = of_property_read_string_index(np, "unterminated-string-list", 0, strings);
-	unittest(rc == 0 && !strcmp(strings[0], "first"), "of_property_read_string_index() failure; rc=%i\n", rc);
+	KUNIT_ASSERT_EQ(test, rc, 0);
+	KUNIT_EXPECT_STREQ(test, strings[0], "first");
+
 	strings[0] = NULL;
 	rc = of_property_read_string_index(np, "unterminated-string-list", 2, strings); /* should fail */
-	unittest(rc == -EILSEQ && strings[0] == NULL, "of_property_read_string_index() failure; rc=%i\n", rc);
-	strings[1] = NULL;
+	KUNIT_EXPECT_EQ(test, rc, -EILSEQ);
+	KUNIT_EXPECT_EQ(test, strings[0], NULL);
 
 	/* of_property_read_string_array() tests */
-	rc = of_property_read_string_array(np, "string-property", strings, 4);
-	unittest(rc == 1, "Incorrect string count; rc=%i\n", rc);
-	rc = of_property_read_string_array(np, "phandle-list-names", strings, 4);
-	unittest(rc == 3, "Incorrect string count; rc=%i\n", rc);
-	rc = of_property_read_string_array(np, "unterminated-string", strings, 4);
-	unittest(rc == -EILSEQ, "unterminated string; rc=%i\n", rc);
+	strings[1] = NULL;
+	KUNIT_EXPECT_EQ(
+		test,
+		of_property_read_string_array(
+			np, "string-property", strings, 4),
+		1);
+	KUNIT_EXPECT_EQ(
+		test,
+		of_property_read_string_array(
+			np, "phandle-list-names", strings, 4),
+		3);
+	KUNIT_EXPECT_EQ_MSG(
+		test,
+		of_property_read_string_array(
+			np, "unterminated-string", strings, 4),
+		-EILSEQ,
+		"unterminated string");
 	/* -- An incorrectly formed string should cause a failure */
-	rc = of_property_read_string_array(np, "unterminated-string-list", strings, 4);
-	unittest(rc == -EILSEQ, "unterminated string array; rc=%i\n", rc);
+	KUNIT_EXPECT_EQ_MSG(
+		test,
+		of_property_read_string_array(
+			np, "unterminated-string-list", strings, 4),
+		-EILSEQ,
+		"unterminated string array");
 	/* -- parsing the correctly formed strings should still work: */
 	strings[2] = NULL;
 	rc = of_property_read_string_array(np, "unterminated-string-list", strings, 2);
-	unittest(rc == 2 && strings[2] == NULL, "of_property_read_string_array() failure; rc=%i\n", rc);
+	KUNIT_EXPECT_EQ(test, rc, 2);
+	KUNIT_EXPECT_EQ(test, strings[2], NULL);
+
 	strings[1] = NULL;
 	rc = of_property_read_string_array(np, "phandle-list-names", strings, 1);
-	unittest(rc == 1 && strings[1] == NULL, "Overwrote end of string array; rc=%i, str='%s'\n", rc, strings[1]);
+	KUNIT_ASSERT_EQ(test, rc, 1);
+	KUNIT_EXPECT_EQ_MSG(test, strings[1], NULL,
+			    "Overwrote end of string array");
 }
 
 #define propcmp(p1, p2) (((p1)->length == (p2)->length) && \
 			(p1)->value && (p2)->value && \
 			!memcmp((p1)->value, (p2)->value, (p1)->length) && \
 			!strcmp((p1)->name, (p2)->name))
-static void __init of_unittest_property_copy(void)
+static void of_unittest_property_copy(struct kunit *test)
 {
 #ifdef CONFIG_OF_DYNAMIC
 	struct property p1 = { .name = "p1", .length = 0, .value = "" };
@@ -686,20 +775,24 @@ static void __init of_unittest_property_copy(void)
 	struct property *new;
 
 	new = __of_prop_dup(&p1, GFP_KERNEL);
-	unittest(new && propcmp(&p1, new), "empty property didn't copy correctly\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, new);
+	KUNIT_EXPECT_TRUE_MSG(test, propcmp(&p1, new),
+			      "empty property didn't copy correctly");
 	kfree(new->value);
 	kfree(new->name);
 	kfree(new);
 
 	new = __of_prop_dup(&p2, GFP_KERNEL);
-	unittest(new && propcmp(&p2, new), "non-empty property didn't copy correctly\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, new);
+	KUNIT_EXPECT_TRUE_MSG(test, propcmp(&p2, new),
+			      "non-empty property didn't copy correctly");
 	kfree(new->value);
 	kfree(new->name);
 	kfree(new);
 #endif
 }
 
-static void __init of_unittest_changeset(void)
+static void of_unittest_changeset(struct kunit *test)
 {
 #ifdef CONFIG_OF_DYNAMIC
 	struct property *ppadd, padd = { .name = "prop-add", .length = 1, .value = "" };
@@ -712,32 +805,32 @@ static void __init of_unittest_changeset(void)
 	struct of_changeset chgset;
 
 	n1 = __of_node_dup(NULL, "n1");
-	unittest(n1, "testcase setup failure\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, n1);
 
 	n2 = __of_node_dup(NULL, "n2");
-	unittest(n2, "testcase setup failure\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, n2);
 
 	n21 = __of_node_dup(NULL, "n21");
-	unittest(n21, "testcase setup failure %p\n", n21);
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, n21);
 
 	nchangeset = of_find_node_by_path("/testcase-data/changeset");
 	nremove = of_get_child_by_name(nchangeset, "node-remove");
-	unittest(nremove, "testcase setup failure\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, nremove);
 
 	ppadd = __of_prop_dup(&padd, GFP_KERNEL);
-	unittest(ppadd, "testcase setup failure\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ppadd);
 
 	ppname_n1  = __of_prop_dup(&pname_n1, GFP_KERNEL);
-	unittest(ppname_n1, "testcase setup failure\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ppname_n1);
 
 	ppname_n2  = __of_prop_dup(&pname_n2, GFP_KERNEL);
-	unittest(ppname_n2, "testcase setup failure\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ppname_n2);
 
 	ppname_n21 = __of_prop_dup(&pname_n21, GFP_KERNEL);
-	unittest(ppname_n21, "testcase setup failure\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ppname_n21);
 
 	ppupdate = __of_prop_dup(&pupdate, GFP_KERNEL);
-	unittest(ppupdate, "testcase setup failure\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ppupdate);
 
 	parent = nchangeset;
 	n1->parent = parent;
@@ -745,54 +838,72 @@ static void __init of_unittest_changeset(void)
 	n21->parent = n2;
 
 	ppremove = of_find_property(parent, "prop-remove", NULL);
-	unittest(ppremove, "failed to find removal prop");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ppremove);
 
 	of_changeset_init(&chgset);
 
-	unittest(!of_changeset_attach_node(&chgset, n1), "fail attach n1\n");
-	unittest(!of_changeset_add_property(&chgset, n1, ppname_n1), "fail add prop name\n");
-
-	unittest(!of_changeset_attach_node(&chgset, n2), "fail attach n2\n");
-	unittest(!of_changeset_add_property(&chgset, n2, ppname_n2), "fail add prop name\n");
-
-	unittest(!of_changeset_detach_node(&chgset, nremove), "fail remove node\n");
-	unittest(!of_changeset_add_property(&chgset, n21, ppname_n21), "fail add prop name\n");
-
-	unittest(!of_changeset_attach_node(&chgset, n21), "fail attach n21\n");
-
-	unittest(!of_changeset_add_property(&chgset, parent, ppadd), "fail add prop prop-add\n");
-	unittest(!of_changeset_update_property(&chgset, parent, ppupdate), "fail update prop\n");
-	unittest(!of_changeset_remove_property(&chgset, parent, ppremove), "fail remove prop\n");
-
-	unittest(!of_changeset_apply(&chgset), "apply failed\n");
+	KUNIT_EXPECT_FALSE_MSG(test, of_changeset_attach_node(&chgset, n1),
+			       "fail attach n1\n");
+	KUNIT_EXPECT_FALSE_MSG(
+		test, of_changeset_add_property(&chgset, n1, ppname_n1),
+		"fail add prop name\n");
+
+	KUNIT_EXPECT_FALSE_MSG(test, of_changeset_attach_node(&chgset, n2),
+			       "fail attach n2\n");
+	KUNIT_EXPECT_FALSE_MSG(
+		test, of_changeset_add_property(&chgset, n2, ppname_n2),
+		"fail add prop name\n");
+
+	KUNIT_EXPECT_FALSE_MSG(test, of_changeset_detach_node(&chgset, nremove),
+			       "fail remove node\n");
+	KUNIT_EXPECT_FALSE_MSG(
+		test, of_changeset_add_property(&chgset, n21, ppname_n21),
+		"fail add prop name\n");
+
+	KUNIT_EXPECT_FALSE_MSG(test, of_changeset_attach_node(&chgset, n21),
+			       "fail attach n21\n");
+
+	KUNIT_EXPECT_FALSE_MSG(
+		test,
+		of_changeset_add_property(&chgset, parent, ppadd),
+		"fail add prop prop-add\n");
+	KUNIT_EXPECT_FALSE_MSG(
+		test,
+		of_changeset_update_property(&chgset, parent, ppupdate),
+		"fail update prop\n");
+	KUNIT_EXPECT_FALSE_MSG(
+		test,
+		of_changeset_remove_property(&chgset, parent, ppremove),
+		"fail remove prop\n");
+
+	KUNIT_EXPECT_FALSE_MSG(test, of_changeset_apply(&chgset),
+			       "apply failed\n");
 
 	of_node_put(nchangeset);
 
 	/* Make sure node names are constructed correctly */
-	unittest((np = of_find_node_by_path("/testcase-data/changeset/n2/n21")),
-		 "'%pOF' not added\n", n21);
+	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(
+		test,
+		np = of_find_node_by_path("/testcase-data/changeset/n2/n21"),
+		"'%pOF' not added\n", n21);
 	of_node_put(np);
 
-	unittest(!of_changeset_revert(&chgset), "revert failed\n");
+	KUNIT_EXPECT_FALSE(test, of_changeset_revert(&chgset));
 
 	of_changeset_destroy(&chgset);
 #endif
 }
 
-static void __init of_unittest_parse_interrupts(void)
+static void of_unittest_parse_interrupts(struct kunit *test)
 {
 	struct device_node *np;
 	struct of_phandle_args args;
 	int i, rc;
 
-	if (of_irq_workarounds & OF_IMAP_OLDWORLD_MAC)
-		return;
+	KUNIT_ASSERT_FALSE(test, of_irq_workarounds & OF_IMAP_OLDWORLD_MAC);
 
 	np = of_find_node_by_path("/testcase-data/interrupts/interrupts0");
-	if (!np) {
-		pr_err("missing testcase data\n");
-		return;
-	}
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
 
 	for (i = 0; i < 4; i++) {
 		bool passed = true;
@@ -804,16 +915,15 @@ static void __init of_unittest_parse_interrupts(void)
 		passed &= (args.args_count == 1);
 		passed &= (args.args[0] == (i + 1));
 
-		unittest(passed, "index %i - data error on node %pOF rc=%i\n",
-			 i, args.np, rc);
+		KUNIT_EXPECT_TRUE_MSG(
+			test, passed,
+			"index %i - data error on node %pOF rc=%i\n",
+			i, args.np, rc);
 	}
 	of_node_put(np);
 
 	np = of_find_node_by_path("/testcase-data/interrupts/interrupts1");
-	if (!np) {
-		pr_err("missing testcase data\n");
-		return;
-	}
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
 
 	for (i = 0; i < 4; i++) {
 		bool passed = true;
@@ -850,26 +960,24 @@ static void __init of_unittest_parse_interrupts(void)
 		default:
 			passed = false;
 		}
-		unittest(passed, "index %i - data error on node %pOF rc=%i\n",
-			 i, args.np, rc);
+		KUNIT_EXPECT_TRUE_MSG(
+			test, passed,
+			"index %i - data error on node %pOF rc=%i\n",
+			i, args.np, rc);
 	}
 	of_node_put(np);
 }
 
-static void __init of_unittest_parse_interrupts_extended(void)
+static void of_unittest_parse_interrupts_extended(struct kunit *test)
 {
 	struct device_node *np;
 	struct of_phandle_args args;
 	int i, rc;
 
-	if (of_irq_workarounds & OF_IMAP_OLDWORLD_MAC)
-		return;
+	KUNIT_ASSERT_FALSE(test, of_irq_workarounds & OF_IMAP_OLDWORLD_MAC);
 
 	np = of_find_node_by_path("/testcase-data/interrupts/interrupts-extended0");
-	if (!np) {
-		pr_err("missing testcase data\n");
-		return;
-	}
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
 
 	for (i = 0; i < 7; i++) {
 		bool passed = true;
@@ -924,8 +1032,10 @@ static void __init of_unittest_parse_interrupts_extended(void)
 			passed = false;
 		}
 
-		unittest(passed, "index %i - data error on node %pOF rc=%i\n",
-			 i, args.np, rc);
+		KUNIT_EXPECT_TRUE_MSG(
+			test, passed,
+			"index %i - data error on node %pOF rc=%i\n",
+			i, args.np, rc);
 	}
 	of_node_put(np);
 }
@@ -965,7 +1075,7 @@ static struct {
 	{ .path = "/testcase-data/match-node/name9", .data = "K", },
 };
 
-static void __init of_unittest_match_node(void)
+static void of_unittest_match_node(struct kunit *test)
 {
 	struct device_node *np;
 	const struct of_device_id *match;
@@ -973,26 +1083,19 @@ static void __init of_unittest_match_node(void)
 
 	for (i = 0; i < ARRAY_SIZE(match_node_tests); i++) {
 		np = of_find_node_by_path(match_node_tests[i].path);
-		if (!np) {
-			unittest(0, "missing testcase node %s\n",
-				match_node_tests[i].path);
-			continue;
-		}
+		KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
 
 		match = of_match_node(match_node_table, np);
-		if (!match) {
-			unittest(0, "%s didn't match anything\n",
-				match_node_tests[i].path);
-			continue;
-		}
+		KUNIT_ASSERT_NOT_ERR_OR_NULL_MSG(test, match,
+						 "%s didn't match anything\n",
+						 match_node_tests[i].path);
 
-		if (strcmp(match->data, match_node_tests[i].data) != 0) {
-			unittest(0, "%s got wrong match. expected %s, got %s\n",
-				match_node_tests[i].path, match_node_tests[i].data,
-				(const char *)match->data);
-			continue;
-		}
-		unittest(1, "passed");
+		KUNIT_EXPECT_STREQ_MSG(
+			test,
+			match->data, match_node_tests[i].data,
+			"%s got wrong match. expected %s, got %s\n",
+			match_node_tests[i].path, match_node_tests[i].data,
+			(const char *)match->data);
 	}
 }
 
@@ -1004,9 +1107,9 @@ static struct resource test_bus_res = {
 static const struct platform_device_info test_bus_info = {
 	.name = "unittest-bus",
 };
-static void __init of_unittest_platform_populate(void)
+static void of_unittest_platform_populate(struct kunit *test)
 {
-	int irq, rc;
+	int irq;
 	struct device_node *np, *child, *grandchild;
 	struct platform_device *pdev, *test_bus;
 	const struct of_device_id match[] = {
@@ -1020,32 +1123,27 @@ static void __init of_unittest_platform_populate(void)
 	/* Test that a missing irq domain returns -EPROBE_DEFER */
 	np = of_find_node_by_path("/testcase-data/testcase-device1");
 	pdev = of_find_device_by_node(np);
-	unittest(pdev, "device 1 creation failed\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, pdev);
 
 	if (!(of_irq_workarounds & OF_IMAP_OLDWORLD_MAC)) {
 		irq = platform_get_irq(pdev, 0);
-		unittest(irq == -EPROBE_DEFER,
-			 "device deferred probe failed - %d\n", irq);
+		KUNIT_ASSERT_EQ(test, irq, -EPROBE_DEFER);
 
 		/* Test that a parsing failure does not return -EPROBE_DEFER */
 		np = of_find_node_by_path("/testcase-data/testcase-device2");
 		pdev = of_find_device_by_node(np);
-		unittest(pdev, "device 2 creation failed\n");
+		KUNIT_ASSERT_NOT_ERR_OR_NULL(test, pdev);
 		irq = platform_get_irq(pdev, 0);
-		unittest(irq < 0 && irq != -EPROBE_DEFER,
-			 "device parsing error failed - %d\n", irq);
+		KUNIT_ASSERT_TRUE_MSG(test, irq < 0 && irq != -EPROBE_DEFER,
+				      "device parsing error failed - %d\n",
+				      irq);
 	}
 
 	np = of_find_node_by_path("/testcase-data/platform-tests");
-	unittest(np, "No testcase data in device tree\n");
-	if (!np)
-		return;
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
 
 	test_bus = platform_device_register_full(&test_bus_info);
-	rc = PTR_ERR_OR_ZERO(test_bus);
-	unittest(!rc, "testbus registration failed; rc=%i\n", rc);
-	if (rc)
-		return;
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, test_bus);
 	test_bus->dev.of_node = np;
 
 	/*
@@ -1060,17 +1158,19 @@ static void __init of_unittest_platform_populate(void)
 	of_platform_populate(np, match, NULL, &test_bus->dev);
 	for_each_child_of_node(np, child) {
 		for_each_child_of_node(child, grandchild)
-			unittest(of_find_device_by_node(grandchild),
-				 "Could not create device for node '%pOFn'\n",
-				 grandchild);
+			KUNIT_EXPECT_TRUE_MSG(
+				test, of_find_device_by_node(grandchild),
+				"Could not create device for node '%pOFn'\n",
+				grandchild);
 	}
 
 	of_platform_depopulate(&test_bus->dev);
 	for_each_child_of_node(np, child) {
 		for_each_child_of_node(child, grandchild)
-			unittest(!of_find_device_by_node(grandchild),
-				 "device didn't get destroyed '%pOFn'\n",
-				 grandchild);
+			KUNIT_EXPECT_FALSE_MSG(
+				test, of_find_device_by_node(grandchild),
+				"device didn't get destroyed '%pOFn'\n",
+				grandchild);
 	}
 
 	platform_device_unregister(test_bus);
@@ -1171,7 +1271,7 @@ static void attach_node_and_children(struct device_node *np)
  *	unittest_data_add - Reads, copies data from
  *	linked tree and attaches it to the live tree
  */
-static int __init unittest_data_add(void)
+static int unittest_data_add(void)
 {
 	void *unittest_data;
 	struct device_node *unittest_data_node, *np;
@@ -1242,7 +1342,7 @@ static int __init unittest_data_add(void)
 }
 
 #ifdef CONFIG_OF_OVERLAY
-static int __init overlay_data_apply(const char *overlay_name, int *overlay_id);
+static int overlay_data_apply(const char *overlay_name, int *overlay_id);
 
 static int unittest_probe(struct platform_device *pdev)
 {
@@ -1471,172 +1571,146 @@ static void of_unittest_destroy_tracked_overlays(void)
 	} while (defers > 0);
 }
 
-static int __init of_unittest_apply_overlay(int overlay_nr, int *overlay_id)
+static int of_unittest_apply_overlay(struct kunit *test,
+				     int overlay_nr,
+				     int *overlay_id)
 {
 	const char *overlay_name;
 
 	overlay_name = overlay_name_from_nr(overlay_nr);
 
-	if (!overlay_data_apply(overlay_name, overlay_id)) {
-		unittest(0, "could not apply overlay \"%s\"\n",
-				overlay_name);
-		return -EFAULT;
-	}
+	KUNIT_ASSERT_TRUE_MSG(test,
+			      overlay_data_apply(overlay_name, overlay_id),
+			      "could not apply overlay \"%s\"\n", overlay_name);
 	of_unittest_track_overlay(*overlay_id);
 
 	return 0;
 }
 
 /* apply an overlay while checking before and after states */
-static int __init of_unittest_apply_overlay_check(int overlay_nr,
+static int of_unittest_apply_overlay_check(struct kunit *test, int overlay_nr,
 		int unittest_nr, int before, int after,
 		enum overlay_type ovtype)
 {
 	int ret, ovcs_id;
 
 	/* unittest device must not be in before state */
-	if (of_unittest_device_exists(unittest_nr, ovtype) != before) {
-		unittest(0, "%s with device @\"%s\" %s\n",
-				overlay_name_from_nr(overlay_nr),
-				unittest_path(unittest_nr, ovtype),
-				!before ? "enabled" : "disabled");
-		return -EINVAL;
-	}
+	KUNIT_ASSERT_EQ_MSG(
+		test, of_unittest_device_exists(unittest_nr, ovtype), before,
+		"%s with device @\"%s\" %s\n", overlay_name_from_nr(overlay_nr),
+		unittest_path(unittest_nr, ovtype),
+		!before ? "enabled" : "disabled");
 
 	ovcs_id = 0;
-	ret = of_unittest_apply_overlay(overlay_nr, &ovcs_id);
+	ret = of_unittest_apply_overlay(test, overlay_nr, &ovcs_id);
 	if (ret != 0) {
-		/* of_unittest_apply_overlay already called unittest() */
+		/* of_unittest_apply_overlay already set expectation */
 		return ret;
 	}
 
 	/* unittest device must be to set to after state */
-	if (of_unittest_device_exists(unittest_nr, ovtype) != after) {
-		unittest(0, "%s failed to create @\"%s\" %s\n",
-				overlay_name_from_nr(overlay_nr),
-				unittest_path(unittest_nr, ovtype),
-				!after ? "enabled" : "disabled");
-		return -EINVAL;
-	}
+	KUNIT_ASSERT_EQ_MSG(
+		test, of_unittest_device_exists(unittest_nr, ovtype), after,
+		"%s failed to create @\"%s\" %s\n",
+		overlay_name_from_nr(overlay_nr),
+		unittest_path(unittest_nr, ovtype),
+		!after ? "enabled" : "disabled");
 
 	return 0;
 }
 
 /* apply an overlay and then revert it while checking before, after states */
-static int __init of_unittest_apply_revert_overlay_check(int overlay_nr,
+static int of_unittest_apply_revert_overlay_check(
+		struct kunit *test, int overlay_nr,
 		int unittest_nr, int before, int after,
 		enum overlay_type ovtype)
 {
 	int ret, ovcs_id;
 
 	/* unittest device must be in before state */
-	if (of_unittest_device_exists(unittest_nr, ovtype) != before) {
-		unittest(0, "%s with device @\"%s\" %s\n",
-				overlay_name_from_nr(overlay_nr),
-				unittest_path(unittest_nr, ovtype),
-				!before ? "enabled" : "disabled");
-		return -EINVAL;
-	}
+	KUNIT_ASSERT_EQ_MSG(
+		test, of_unittest_device_exists(unittest_nr, ovtype), before,
+		"%s with device @\"%s\" %s\n", overlay_name_from_nr(overlay_nr),
+		unittest_path(unittest_nr, ovtype),
+		!before ? "enabled" : "disabled");
 
 	/* apply the overlay */
 	ovcs_id = 0;
-	ret = of_unittest_apply_overlay(overlay_nr, &ovcs_id);
+	ret = of_unittest_apply_overlay(test, overlay_nr, &ovcs_id);
 	if (ret != 0) {
-		/* of_unittest_apply_overlay already called unittest() */
+		/* of_unittest_apply_overlay already set expectation. */
 		return ret;
 	}
 
 	/* unittest device must be in after state */
-	if (of_unittest_device_exists(unittest_nr, ovtype) != after) {
-		unittest(0, "%s failed to create @\"%s\" %s\n",
-				overlay_name_from_nr(overlay_nr),
-				unittest_path(unittest_nr, ovtype),
-				!after ? "enabled" : "disabled");
-		return -EINVAL;
-	}
-
-	ret = of_overlay_remove(&ovcs_id);
-	if (ret != 0) {
-		unittest(0, "%s failed to be destroyed @\"%s\"\n",
-				overlay_name_from_nr(overlay_nr),
-				unittest_path(unittest_nr, ovtype));
-		return ret;
-	}
+	KUNIT_ASSERT_EQ_MSG(
+		test, of_unittest_device_exists(unittest_nr, ovtype), after,
+		"%s failed to create @\"%s\" %s\n",
+		overlay_name_from_nr(overlay_nr),
+		unittest_path(unittest_nr, ovtype),
+		!after ? "enabled" : "disabled");
+
+	KUNIT_ASSERT_EQ_MSG(test, of_overlay_remove(&ovcs_id), 0,
+			    "%s failed to be destroyed @\"%s\"\n",
+			    overlay_name_from_nr(overlay_nr),
+			    unittest_path(unittest_nr, ovtype));
 
 	/* unittest device must be again in before state */
-	if (of_unittest_device_exists(unittest_nr, PDEV_OVERLAY) != before) {
-		unittest(0, "%s with device @\"%s\" %s\n",
-				overlay_name_from_nr(overlay_nr),
-				unittest_path(unittest_nr, ovtype),
-				!before ? "enabled" : "disabled");
-		return -EINVAL;
-	}
+	KUNIT_ASSERT_EQ_MSG(
+		test,
+		of_unittest_device_exists(unittest_nr, PDEV_OVERLAY), before,
+		"%s with device @\"%s\" %s\n",
+		overlay_name_from_nr(overlay_nr),
+		unittest_path(unittest_nr, ovtype),
+		!before ? "enabled" : "disabled");
 
 	return 0;
 }
 
 /* test activation of device */
-static void __init of_unittest_overlay_0(void)
+static void of_unittest_overlay_0(struct kunit *test)
 {
 	/* device should enable */
-	if (of_unittest_apply_overlay_check(0, 0, 0, 1, PDEV_OVERLAY))
-		return;
-
-	unittest(1, "overlay test %d passed\n", 0);
+	of_unittest_apply_overlay_check(test, 0, 0, 0, 1, PDEV_OVERLAY);
 }
 
 /* test deactivation of device */
-static void __init of_unittest_overlay_1(void)
+static void of_unittest_overlay_1(struct kunit *test)
 {
 	/* device should disable */
-	if (of_unittest_apply_overlay_check(1, 1, 1, 0, PDEV_OVERLAY))
-		return;
-
-	unittest(1, "overlay test %d passed\n", 1);
+	of_unittest_apply_overlay_check(test, 1, 1, 1, 0, PDEV_OVERLAY);
 }
 
 /* test activation of device */
-static void __init of_unittest_overlay_2(void)
+static void of_unittest_overlay_2(struct kunit *test)
 {
 	/* device should enable */
-	if (of_unittest_apply_overlay_check(2, 2, 0, 1, PDEV_OVERLAY))
-		return;
-
-	unittest(1, "overlay test %d passed\n", 2);
+	of_unittest_apply_overlay_check(test, 2, 2, 0, 1, PDEV_OVERLAY);
 }
 
 /* test deactivation of device */
-static void __init of_unittest_overlay_3(void)
+static void of_unittest_overlay_3(struct kunit *test)
 {
 	/* device should disable */
-	if (of_unittest_apply_overlay_check(3, 3, 1, 0, PDEV_OVERLAY))
-		return;
-
-	unittest(1, "overlay test %d passed\n", 3);
+	of_unittest_apply_overlay_check(test, 3, 3, 1, 0, PDEV_OVERLAY);
 }
 
 /* test activation of a full device node */
-static void __init of_unittest_overlay_4(void)
+static void of_unittest_overlay_4(struct kunit *test)
 {
 	/* device should disable */
-	if (of_unittest_apply_overlay_check(4, 4, 0, 1, PDEV_OVERLAY))
-		return;
-
-	unittest(1, "overlay test %d passed\n", 4);
+	of_unittest_apply_overlay_check(test, 4, 4, 0, 1, PDEV_OVERLAY);
 }
 
 /* test overlay apply/revert sequence */
-static void __init of_unittest_overlay_5(void)
+static void of_unittest_overlay_5(struct kunit *test)
 {
 	/* device should disable */
-	if (of_unittest_apply_revert_overlay_check(5, 5, 0, 1, PDEV_OVERLAY))
-		return;
-
-	unittest(1, "overlay test %d passed\n", 5);
+	of_unittest_apply_revert_overlay_check(test, 5, 5, 0, 1, PDEV_OVERLAY);
 }
 
 /* test overlay application in sequence */
-static void __init of_unittest_overlay_6(void)
+static void of_unittest_overlay_6(struct kunit *test)
 {
 	int i, ov_id[2], ovcs_id;
 	int overlay_nr = 6, unittest_nr = 6;
@@ -1645,74 +1719,67 @@ static void __init of_unittest_overlay_6(void)
 
 	/* unittest device must be in before state */
 	for (i = 0; i < 2; i++) {
-		if (of_unittest_device_exists(unittest_nr + i, PDEV_OVERLAY)
-				!= before) {
-			unittest(0, "%s with device @\"%s\" %s\n",
-					overlay_name_from_nr(overlay_nr + i),
-					unittest_path(unittest_nr + i,
-						PDEV_OVERLAY),
-					!before ? "enabled" : "disabled");
-			return;
-		}
+		KUNIT_ASSERT_EQ_MSG(test,
+				    of_unittest_device_exists(unittest_nr + i,
+							      PDEV_OVERLAY),
+				    before,
+				    "%s with device @\"%s\" %s\n",
+				    overlay_name_from_nr(overlay_nr + i),
+				    unittest_path(unittest_nr + i,
+						  PDEV_OVERLAY),
+				    !before ? "enabled" : "disabled");
 	}
 
 	/* apply the overlays */
 	for (i = 0; i < 2; i++) {
-
 		overlay_name = overlay_name_from_nr(overlay_nr + i);
 
-		if (!overlay_data_apply(overlay_name, &ovcs_id)) {
-			unittest(0, "could not apply overlay \"%s\"\n",
-					overlay_name);
-			return;
-		}
+		KUNIT_ASSERT_TRUE_MSG(
+			test, overlay_data_apply(overlay_name, &ovcs_id),
+			"could not apply overlay \"%s\"\n", overlay_name);
 		ov_id[i] = ovcs_id;
 		of_unittest_track_overlay(ov_id[i]);
 	}
 
 	for (i = 0; i < 2; i++) {
 		/* unittest device must be in after state */
-		if (of_unittest_device_exists(unittest_nr + i, PDEV_OVERLAY)
-				!= after) {
-			unittest(0, "overlay @\"%s\" failed @\"%s\" %s\n",
-					overlay_name_from_nr(overlay_nr + i),
-					unittest_path(unittest_nr + i,
-						PDEV_OVERLAY),
-					!after ? "enabled" : "disabled");
-			return;
-		}
+		KUNIT_ASSERT_EQ_MSG(test,
+				    of_unittest_device_exists(unittest_nr + i,
+							      PDEV_OVERLAY),
+				    after,
+				    "overlay @\"%s\" failed @\"%s\" %s\n",
+				    overlay_name_from_nr(overlay_nr + i),
+				    unittest_path(unittest_nr + i,
+						  PDEV_OVERLAY),
+				    !after ? "enabled" : "disabled");
 	}
 
 	for (i = 1; i >= 0; i--) {
 		ovcs_id = ov_id[i];
-		if (of_overlay_remove(&ovcs_id)) {
-			unittest(0, "%s failed destroy @\"%s\"\n",
-					overlay_name_from_nr(overlay_nr + i),
-					unittest_path(unittest_nr + i,
-						PDEV_OVERLAY));
-			return;
-		}
+		KUNIT_ASSERT_FALSE_MSG(
+			test, of_overlay_remove(&ovcs_id),
+			"%s failed destroy @\"%s\"\n",
+			overlay_name_from_nr(overlay_nr + i),
+			unittest_path(unittest_nr + i, PDEV_OVERLAY));
 		of_unittest_untrack_overlay(ov_id[i]);
 	}
 
 	for (i = 0; i < 2; i++) {
 		/* unittest device must be again in before state */
-		if (of_unittest_device_exists(unittest_nr + i, PDEV_OVERLAY)
-				!= before) {
-			unittest(0, "%s with device @\"%s\" %s\n",
-					overlay_name_from_nr(overlay_nr + i),
-					unittest_path(unittest_nr + i,
-						PDEV_OVERLAY),
-					!before ? "enabled" : "disabled");
-			return;
-		}
+		KUNIT_ASSERT_EQ_MSG(test,
+				    of_unittest_device_exists(unittest_nr + i,
+							      PDEV_OVERLAY),
+				    before,
+				    "%s with device @\"%s\" %s\n",
+				    overlay_name_from_nr(overlay_nr + i),
+				    unittest_path(unittest_nr + i,
+						  PDEV_OVERLAY),
+				    !before ? "enabled" : "disabled");
 	}
-
-	unittest(1, "overlay test %d passed\n", 6);
 }
 
 /* test overlay application in sequence */
-static void __init of_unittest_overlay_8(void)
+static void of_unittest_overlay_8(struct kunit *test)
 {
 	int i, ov_id[2], ovcs_id;
 	int overlay_nr = 8, unittest_nr = 8;
@@ -1722,76 +1789,64 @@ static void __init of_unittest_overlay_8(void)
 
 	/* apply the overlays */
 	for (i = 0; i < 2; i++) {
-
 		overlay_name = overlay_name_from_nr(overlay_nr + i);
 
-		if (!overlay_data_apply(overlay_name, &ovcs_id)) {
-			unittest(0, "could not apply overlay \"%s\"\n",
-					overlay_name);
-			return;
-		}
+		KUNIT_ASSERT_TRUE_MSG(
+			test, overlay_data_apply(overlay_name, &ovcs_id),
+			"could not apply overlay \"%s\"\n", overlay_name);
 		ov_id[i] = ovcs_id;
 		of_unittest_track_overlay(ov_id[i]);
 	}
 
 	/* now try to remove first overlay (it should fail) */
 	ovcs_id = ov_id[0];
-	if (!of_overlay_remove(&ovcs_id)) {
-		unittest(0, "%s was destroyed @\"%s\"\n",
-				overlay_name_from_nr(overlay_nr + 0),
-				unittest_path(unittest_nr,
-					PDEV_OVERLAY));
-		return;
-	}
+	KUNIT_ASSERT_TRUE_MSG(
+		test, of_overlay_remove(&ovcs_id),
+		"%s was destroyed @\"%s\"\n",
+		overlay_name_from_nr(overlay_nr + 0),
+		unittest_path(unittest_nr, PDEV_OVERLAY));
 
 	/* removing them in order should work */
 	for (i = 1; i >= 0; i--) {
 		ovcs_id = ov_id[i];
-		if (of_overlay_remove(&ovcs_id)) {
-			unittest(0, "%s not destroyed @\"%s\"\n",
-					overlay_name_from_nr(overlay_nr + i),
-					unittest_path(unittest_nr,
-						PDEV_OVERLAY));
-			return;
-		}
+		KUNIT_ASSERT_FALSE_MSG(
+			test, of_overlay_remove(&ovcs_id),
+			"%s not destroyed @\"%s\"\n",
+			overlay_name_from_nr(overlay_nr + i),
+			unittest_path(unittest_nr, PDEV_OVERLAY));
 		of_unittest_untrack_overlay(ov_id[i]);
 	}
-
-	unittest(1, "overlay test %d passed\n", 8);
 }
 
 /* test insertion of a bus with parent devices */
-static void __init of_unittest_overlay_10(void)
+static void of_unittest_overlay_10(struct kunit *test)
 {
-	int ret;
 	char *child_path;
 
 	/* device should disable */
-	ret = of_unittest_apply_overlay_check(10, 10, 0, 1, PDEV_OVERLAY);
-	if (unittest(ret == 0,
-			"overlay test %d failed; overlay application\n", 10))
-		return;
+	KUNIT_ASSERT_EQ_MSG(
+		test,
+		of_unittest_apply_overlay_check(
+				test, 10, 10, 0, 1, PDEV_OVERLAY),
+		0,
+		"overlay test %d failed; overlay application\n", 10);
 
 	child_path = kasprintf(GFP_KERNEL, "%s/test-unittest101",
 			unittest_path(10, PDEV_OVERLAY));
-	if (unittest(child_path, "overlay test %d failed; kasprintf\n", 10))
-		return;
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, child_path);
 
-	ret = of_path_device_type_exists(child_path, PDEV_OVERLAY);
+	KUNIT_EXPECT_TRUE_MSG(
+		test, of_path_device_type_exists(child_path, PDEV_OVERLAY),
+		"overlay test %d failed; no child device\n", 10);
 	kfree(child_path);
-
-	unittest(ret, "overlay test %d failed; no child device\n", 10);
 }
 
 /* test insertion of a bus with parent devices (and revert) */
-static void __init of_unittest_overlay_11(void)
+static void of_unittest_overlay_11(struct kunit *test)
 {
-	int ret;
-
 	/* device should disable */
-	ret = of_unittest_apply_revert_overlay_check(11, 11, 0, 1,
-			PDEV_OVERLAY);
-	unittest(ret == 0, "overlay test %d failed; overlay apply\n", 11);
+	KUNIT_EXPECT_FALSE(test, of_unittest_apply_revert_overlay_check(
+		test, 11, 11, 0, 1, PDEV_OVERLAY));
 }
 
 #if IS_BUILTIN(CONFIG_I2C) && IS_ENABLED(CONFIG_OF_OVERLAY)
@@ -2013,25 +2068,18 @@ static struct i2c_driver unittest_i2c_mux_driver = {
 
 #endif
 
-static int of_unittest_overlay_i2c_init(void)
+static int of_unittest_overlay_i2c_init(struct kunit *test)
 {
-	int ret;
-
-	ret = i2c_add_driver(&unittest_i2c_dev_driver);
-	if (unittest(ret == 0,
-			"could not register unittest i2c device driver\n"))
-		return ret;
+	KUNIT_ASSERT_EQ_MSG(test, i2c_add_driver(&unittest_i2c_dev_driver), 0,
+			    "could not register unittest i2c device driver\n");
 
-	ret = platform_driver_register(&unittest_i2c_bus_driver);
-	if (unittest(ret == 0,
-			"could not register unittest i2c bus driver\n"))
-		return ret;
+	KUNIT_ASSERT_EQ_MSG(
+		test, platform_driver_register(&unittest_i2c_bus_driver), 0,
+		"could not register unittest i2c bus driver\n");
 
 #if IS_BUILTIN(CONFIG_I2C_MUX)
-	ret = i2c_add_driver(&unittest_i2c_mux_driver);
-	if (unittest(ret == 0,
-			"could not register unittest i2c mux driver\n"))
-		return ret;
+	KUNIT_ASSERT_EQ_MSG(test, i2c_add_driver(&unittest_i2c_mux_driver), 0,
+			    "could not register unittest i2c mux driver\n");
 #endif
 
 	return 0;
@@ -2046,101 +2094,85 @@ static void of_unittest_overlay_i2c_cleanup(void)
 	i2c_del_driver(&unittest_i2c_dev_driver);
 }
 
-static void __init of_unittest_overlay_i2c_12(void)
+static void of_unittest_overlay_i2c_12(struct kunit *test)
 {
 	/* device should enable */
-	if (of_unittest_apply_overlay_check(12, 12, 0, 1, I2C_OVERLAY))
-		return;
-
-	unittest(1, "overlay test %d passed\n", 12);
+	of_unittest_apply_overlay_check(test, 12, 12, 0, 1, I2C_OVERLAY);
 }
 
 /* test deactivation of device */
-static void __init of_unittest_overlay_i2c_13(void)
+static void of_unittest_overlay_i2c_13(struct kunit *test)
 {
 	/* device should disable */
-	if (of_unittest_apply_overlay_check(13, 13, 1, 0, I2C_OVERLAY))
-		return;
-
-	unittest(1, "overlay test %d passed\n", 13);
+	of_unittest_apply_overlay_check(test, 13, 13, 1, 0, I2C_OVERLAY);
 }
 
 /* just check for i2c mux existence */
-static void of_unittest_overlay_i2c_14(void)
+static void of_unittest_overlay_i2c_14(struct kunit *test)
 {
+	KUNIT_SUCCEED(test);
 }
 
-static void __init of_unittest_overlay_i2c_15(void)
+static void of_unittest_overlay_i2c_15(struct kunit *test)
 {
 	/* device should enable */
-	if (of_unittest_apply_overlay_check(15, 15, 0, 1, I2C_OVERLAY))
-		return;
-
-	unittest(1, "overlay test %d passed\n", 15);
+	of_unittest_apply_overlay_check(test, 15, 15, 0, 1, I2C_OVERLAY);
 }
 
 #else
 
-static inline void of_unittest_overlay_i2c_14(void) { }
-static inline void of_unittest_overlay_i2c_15(void) { }
+static inline void of_unittest_overlay_i2c_14(struct kunit *test) { }
+static inline void of_unittest_overlay_i2c_15(struct kunit *test) { }
 
 #endif
 
-static void __init of_unittest_overlay(void)
+static void of_unittest_overlay(struct kunit *test)
 {
 	struct device_node *bus_np = NULL;
 
-	if (platform_driver_register(&unittest_driver)) {
-		unittest(0, "could not register unittest driver\n");
-		goto out;
-	}
+	KUNIT_ASSERT_FALSE_MSG(test, platform_driver_register(&unittest_driver),
+			       "could not register unittest driver\n");
 
 	bus_np = of_find_node_by_path(bus_path);
-	if (bus_np == NULL) {
-		unittest(0, "could not find bus_path \"%s\"\n", bus_path);
-		goto out;
-	}
+	KUNIT_ASSERT_NOT_ERR_OR_NULL_MSG(
+		test, bus_np, "could not find bus_path \"%s\"\n", bus_path);
 
-	if (of_platform_default_populate(bus_np, NULL, NULL)) {
-		unittest(0, "could not populate bus @ \"%s\"\n", bus_path);
-		goto out;
-	}
-
-	if (!of_unittest_device_exists(100, PDEV_OVERLAY)) {
-		unittest(0, "could not find unittest0 @ \"%s\"\n",
-				unittest_path(100, PDEV_OVERLAY));
-		goto out;
-	}
+	KUNIT_ASSERT_FALSE_MSG(
+		test, of_platform_default_populate(bus_np, NULL, NULL),
+		"could not populate bus @ \"%s\"\n", bus_path);
 
-	if (of_unittest_device_exists(101, PDEV_OVERLAY)) {
-		unittest(0, "unittest1 @ \"%s\" should not exist\n",
-				unittest_path(101, PDEV_OVERLAY));
-		goto out;
-	}
+	KUNIT_ASSERT_TRUE_MSG(
+		test, of_unittest_device_exists(100, PDEV_OVERLAY),
+		"could not find unittest0 @ \"%s\"\n",
+		unittest_path(100, PDEV_OVERLAY));
 
-	unittest(1, "basic infrastructure of overlays passed");
+	KUNIT_ASSERT_FALSE_MSG(
+		test, of_unittest_device_exists(101, PDEV_OVERLAY),
+		"unittest1 @ \"%s\" should not exist\n",
+		unittest_path(101, PDEV_OVERLAY));
 
 	/* tests in sequence */
-	of_unittest_overlay_0();
-	of_unittest_overlay_1();
-	of_unittest_overlay_2();
-	of_unittest_overlay_3();
-	of_unittest_overlay_4();
-	of_unittest_overlay_5();
-	of_unittest_overlay_6();
-	of_unittest_overlay_8();
-
-	of_unittest_overlay_10();
-	of_unittest_overlay_11();
+	of_unittest_overlay_0(test);
+	of_unittest_overlay_1(test);
+	of_unittest_overlay_2(test);
+	of_unittest_overlay_3(test);
+	of_unittest_overlay_4(test);
+	of_unittest_overlay_5(test);
+	of_unittest_overlay_6(test);
+	of_unittest_overlay_8(test);
+
+	of_unittest_overlay_10(test);
+	of_unittest_overlay_11(test);
 
 #if IS_BUILTIN(CONFIG_I2C)
-	if (unittest(of_unittest_overlay_i2c_init() == 0, "i2c init failed\n"))
-		goto out;
+	KUNIT_ASSERT_EQ_MSG(test, of_unittest_overlay_i2c_init(test), 0,
+			    "i2c init failed\n");
 
-	of_unittest_overlay_i2c_12();
-	of_unittest_overlay_i2c_13();
-	of_unittest_overlay_i2c_14();
-	of_unittest_overlay_i2c_15();
+	of_unittest_overlay_i2c_12(test);
+	of_unittest_overlay_i2c_13(test);
+	of_unittest_overlay_i2c_14(test);
+	of_unittest_overlay_i2c_15(test);
 
 	of_unittest_overlay_i2c_cleanup();
 #endif
@@ -2152,7 +2184,7 @@ static void __init of_unittest_overlay(void)
 }
 
 #else
-static inline void __init of_unittest_overlay(void) { }
+static inline void of_unittest_overlay(struct kunit *test) { }
 #endif
 
 #ifdef CONFIG_OF_OVERLAY
@@ -2313,7 +2345,7 @@ void __init unittest_unflatten_overlay_base(void)
  *
  * Return 0 on unexpected error.
  */
-static int __init overlay_data_apply(const char *overlay_name, int *overlay_id)
+static int overlay_data_apply(const char *overlay_name, int *overlay_id)
 {
 	struct overlay_info *info;
 	int found = 0;
@@ -2359,19 +2391,17 @@ static int __init overlay_data_apply(const char *overlay_name, int *overlay_id)
  * The first part of the function is _not_ normal overlay usage; it is
  * finishing splicing the base overlay device tree into the live tree.
  */
-static __init void of_unittest_overlay_high_level(void)
+static void of_unittest_overlay_high_level(struct kunit *test)
 {
 	struct device_node *last_sibling;
 	struct device_node *np;
 	struct device_node *of_symbols;
-	struct device_node *overlay_base_symbols;
+	struct device_node *overlay_base_symbols = NULL;
 	struct device_node **pprev;
 	struct property *prop;
 
-	if (!overlay_base_root) {
-		unittest(0, "overlay_base_root not initialized\n");
-		return;
-	}
+	KUNIT_ASSERT_TRUE_MSG(test, overlay_base_root,
+			      "overlay_base_root not initialized\n");
 
 	/*
 	 * Could not fixup phandles in unittest_unflatten_overlay_base()
@@ -2418,11 +2448,9 @@ static __init void of_unittest_overlay_high_level(void)
 	for_each_child_of_node(overlay_base_root, np) {
 		struct device_node *base_child;
 		for_each_child_of_node(of_root, base_child) {
-			if (!strcmp(np->full_name, base_child->full_name)) {
-				unittest(0, "illegal node name in overlay_base %pOFn",
-					 np);
-				return;
-			}
+			KUNIT_ASSERT_STRNEQ_MSG(
+				test, np->full_name, base_child->full_name,
+				"illegal node name in overlay_base %pOFn", np);
 		}
 	}
 
@@ -2456,21 +2484,24 @@ static __init void of_unittest_overlay_high_level(void)
 
 			new_prop = __of_prop_dup(prop, GFP_KERNEL);
 			if (!new_prop) {
-				unittest(0, "__of_prop_dup() of '%s' from overlay_base node __symbols__",
-					 prop->name);
+				KUNIT_FAIL(test,
+					   "__of_prop_dup() of '%s' from overlay_base node __symbols__",
+					   prop->name);
 				goto err_unlock;
 			}
 			if (__of_add_property(of_symbols, new_prop)) {
 				/* "name" auto-generated by unflatten */
 				if (!strcmp(new_prop->name, "name"))
 					continue;
-				unittest(0, "duplicate property '%s' in overlay_base node __symbols__",
-					 prop->name);
+				KUNIT_FAIL(test,
+					   "duplicate property '%s' in overlay_base node __symbols__",
+					   prop->name);
 				goto err_unlock;
 			}
 			if (__of_add_property_sysfs(of_symbols, new_prop)) {
-				unittest(0, "unable to add property '%s' in overlay_base node __symbols__ to sysfs",
-					 prop->name);
+				KUNIT_FAIL(test,
+					   "unable to add property '%s' in overlay_base node __symbols__ to sysfs",
+					   prop->name);
 				goto err_unlock;
 			}
 		}
@@ -2481,20 +2512,24 @@ static __init void of_unittest_overlay_high_level(void)
 
 	/* now do the normal overlay usage test */
 
-	unittest(overlay_data_apply("overlay", NULL),
-		 "Adding overlay 'overlay' failed\n");
+	KUNIT_EXPECT_TRUE_MSG(test, overlay_data_apply("overlay", NULL),
+			      "Adding overlay 'overlay' failed\n");
 
-	unittest(overlay_data_apply("overlay_bad_add_dup_node", NULL),
-		 "Adding overlay 'overlay_bad_add_dup_node' failed\n");
+	KUNIT_EXPECT_TRUE_MSG(
+		test, overlay_data_apply("overlay_bad_add_dup_node", NULL),
+		"Adding overlay 'overlay_bad_add_dup_node' failed\n");
 
-	unittest(overlay_data_apply("overlay_bad_add_dup_prop", NULL),
-		 "Adding overlay 'overlay_bad_add_dup_prop' failed\n");
+	KUNIT_EXPECT_TRUE_MSG(
+		test, overlay_data_apply("overlay_bad_add_dup_prop", NULL),
+		"Adding overlay 'overlay_bad_add_dup_prop' failed\n");
 
-	unittest(overlay_data_apply("overlay_bad_phandle", NULL),
-		 "Adding overlay 'overlay_bad_phandle' failed\n");
+	KUNIT_EXPECT_TRUE_MSG(
+		test, overlay_data_apply("overlay_bad_phandle", NULL),
+		"Adding overlay 'overlay_bad_phandle' failed\n");
 
-	unittest(overlay_data_apply("overlay_bad_symbol", NULL),
-		 "Adding overlay 'overlay_bad_symbol' failed\n");
+	KUNIT_EXPECT_TRUE_MSG(
+		test, overlay_data_apply("overlay_bad_symbol", NULL),
+		"Adding overlay 'overlay_bad_symbol' failed\n");
 
 	return;
 
@@ -2504,57 +2539,52 @@ static __init void of_unittest_overlay_high_level(void)
 
 #else
 
-static inline __init void of_unittest_overlay_high_level(void) {}
+static inline void of_unittest_overlay_high_level(struct kunit *test) {}
 
 #endif
 
-static int __init of_unittest(void)
+static int of_test_init(struct kunit *test)
 {
-	struct device_node *np;
-	int res;
-
 	/* adding data for unittest */
-	res = unittest_data_add();
-	if (res)
-		return res;
+	KUNIT_ASSERT_EQ(test, 0, unittest_data_add());
+
 	if (!of_aliases)
 		of_aliases = of_find_node_by_path("/aliases");
 
-	np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
-	if (!np) {
-		pr_info("No testcase data in device tree; not running tests\n");
-		return 0;
-	}
-	of_node_put(np);
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, of_find_node_by_path(
+		"/testcase-data/phandle-tests/consumer-a"));
 
 	if (IS_ENABLED(CONFIG_UML))
 		unflatten_device_tree();
 
-	pr_info("start of unittest - you will see error messages\n");
-	of_unittest_check_tree_linkage();
-	of_unittest_check_phandles();
-	of_unittest_find_node_by_name();
-	of_unittest_dynamic();
-	of_unittest_parse_phandle_with_args();
-	of_unittest_parse_phandle_with_args_map();
-	of_unittest_printf();
-	of_unittest_property_string();
-	of_unittest_property_copy();
-	of_unittest_changeset();
-	of_unittest_parse_interrupts();
-	of_unittest_parse_interrupts_extended();
-	of_unittest_match_node();
-	of_unittest_platform_populate();
-	of_unittest_overlay();
+	return 0;
+}
 
+static struct kunit_case of_test_cases[] = {
+	KUNIT_CASE(of_unittest_check_tree_linkage),
+	KUNIT_CASE(of_unittest_check_phandles),
+	KUNIT_CASE(of_unittest_find_node_by_name),
+	KUNIT_CASE(of_unittest_dynamic),
+	KUNIT_CASE(of_unittest_parse_phandle_with_args),
+	KUNIT_CASE(of_unittest_parse_phandle_with_args_map),
+	KUNIT_CASE(of_unittest_printf),
+	KUNIT_CASE(of_unittest_property_string),
+	KUNIT_CASE(of_unittest_property_copy),
+	KUNIT_CASE(of_unittest_changeset),
+	KUNIT_CASE(of_unittest_parse_interrupts),
+	KUNIT_CASE(of_unittest_parse_interrupts_extended),
+	KUNIT_CASE(of_unittest_match_node),
+	KUNIT_CASE(of_unittest_platform_populate),
+	KUNIT_CASE(of_unittest_overlay),
 	/* Double check linkage after removing testcase data */
-	of_unittest_check_tree_linkage();
-
-	of_unittest_overlay_high_level();
-
-	pr_info("end of unittest - %i passed, %i failed\n",
-		unittest_results.passed, unittest_results.failed);
+	KUNIT_CASE(of_unittest_check_tree_linkage),
+	KUNIT_CASE(of_unittest_overlay_high_level),
+	{},
+};
 
-	return 0;
-}
-late_initcall(of_unittest);
+static struct kunit_module of_test_module = {
+	.name = "of-test",
+	.init = of_test_init,
+	.test_cases = of_test_cases,
+};
+module_test(of_test_module);
-- 
2.21.0.rc0.258.g878e2cd30e-goog

^ permalink raw reply related	[flat|nested] 316+ messages in thread

* [RFC v4 15/17] of: unittest: migrate tests to run on KUnit
@ 2019-02-14 21:37     ` Brendan Higgins
  0 siblings, 0 replies; 316+ messages in thread
From: Brendan Higgins @ 2019-02-14 21:37 UTC (permalink / raw)
  To: keescook, mcgrof, shuah, robh, kieran.bingham, frowand.list
  Cc: brakmo, pmladek, amir73il, Brendan Higgins, dri-devel,
	Alexander.Levin, linux-kselftest, linux-nvdimm, richard,
	knut.omang, wfg, joel, jdike, dan.carpenter, devicetree,
	Tim.Bird, linux-um, rostedt, julia.lawall, dan.j.williams,
	kunit-dev, gregkh, linux-kernel, daniel, mpe, joe, khilman

Migrate the tests, without any cleanup or any change to test logic, to
run under KUnit using the KUnit expectation and assertion API.
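
For example, a check that was previously open-coded with the local
unittest() macro:

	rc = of_property_count_strings(np, "string-property");
	unittest(rc == 1, "Incorrect string count; rc=%i\n", rc);

becomes a KUnit expectation (this is one representative hunk from the
patch; it is not the only conversion pattern used):

	KUNIT_EXPECT_EQ(test,
			of_property_count_strings(np, "string-property"), 1);

Test functions now take a struct kunit *test argument, and the suite is
registered with module_test() instead of being driven from a
late_initcall().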

Signed-off-by: Brendan Higgins <brendanhiggins@google.com>
---
 drivers/of/Kconfig    |    1 +
 drivers/of/unittest.c | 1310 +++++++++++++++++++++--------------------
 2 files changed, 671 insertions(+), 640 deletions(-)

diff --git a/drivers/of/Kconfig b/drivers/of/Kconfig
index ad3fcad4d75b8..f309399deac20 100644
--- a/drivers/of/Kconfig
+++ b/drivers/of/Kconfig
@@ -15,6 +15,7 @@ if OF
 config OF_UNITTEST
 	bool "Device Tree runtime unit tests"
 	depends on !SPARC
+	depends on KUNIT
 	select IRQ_DOMAIN
 	select OF_EARLY_FLATTREE
 	select OF_RESOLVE
diff --git a/drivers/of/unittest.c b/drivers/of/unittest.c
index effa4e2b9d992..96de69ccb3e63 100644
--- a/drivers/of/unittest.c
+++ b/drivers/of/unittest.c
@@ -26,186 +26,189 @@
 
 #include <linux/bitops.h>
 
+#include <kunit/test.h>
+
 #include "of_private.h"
 
-static struct unittest_results {
-	int passed;
-	int failed;
-} unittest_results;
-
-#define unittest(result, fmt, ...) ({ \
-	bool failed = !(result); \
-	if (failed) { \
-		unittest_results.failed++; \
-		pr_err("FAIL %s():%i " fmt, __func__, __LINE__, ##__VA_ARGS__); \
-	} else { \
-		unittest_results.passed++; \
-		pr_debug("pass %s():%i\n", __func__, __LINE__); \
-	} \
-	failed; \
-})
-
-static void __init of_unittest_find_node_by_name(void)
+static void of_unittest_find_node_by_name(struct kunit *test)
 {
 	struct device_node *np;
 	const char *options, *name;
 
 	np = of_find_node_by_path("/testcase-data");
 	name = kasprintf(GFP_KERNEL, "%pOF", np);
-	unittest(np && !strcmp("/testcase-data", name),
-		"find /testcase-data failed\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+	KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
+			       "find /testcase-data failed\n");
 	of_node_put(np);
 	kfree(name);
 
 	/* Test if trailing '/' works */
-	np = of_find_node_by_path("/testcase-data/");
-	unittest(!np, "trailing '/' on /testcase-data/ should fail\n");
+	KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("/testcase-data/"), NULL,
+			    "trailing '/' on /testcase-data/ should fail\n");
 
 	np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
 	name = kasprintf(GFP_KERNEL, "%pOF", np);
-	unittest(np && !strcmp("/testcase-data/phandle-tests/consumer-a", name),
+	KUNIT_EXPECT_STREQ_MSG(
+		test, "/testcase-data/phandle-tests/consumer-a", name,
 		"find /testcase-data/phandle-tests/consumer-a failed\n");
 	of_node_put(np);
 	kfree(name);
 
 	np = of_find_node_by_path("testcase-alias");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
 	name = kasprintf(GFP_KERNEL, "%pOF", np);
-	unittest(np && !strcmp("/testcase-data", name),
-		"find testcase-alias failed\n");
+	KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
+			       "find testcase-alias failed\n");
 	of_node_put(np);
 	kfree(name);
 
 	/* Test if trailing '/' works on aliases */
-	np = of_find_node_by_path("testcase-alias/");
-	unittest(!np, "trailing '/' on testcase-alias/ should fail\n");
+	KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("testcase-alias/"), NULL,
+			    "trailing '/' on testcase-alias/ should fail\n");
 
 	np = of_find_node_by_path("testcase-alias/phandle-tests/consumer-a");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
 	name = kasprintf(GFP_KERNEL, "%pOF", np);
-	unittest(np && !strcmp("/testcase-data/phandle-tests/consumer-a", name),
+	KUNIT_EXPECT_STREQ_MSG(
+		test, "/testcase-data/phandle-tests/consumer-a", name,
 		"find testcase-alias/phandle-tests/consumer-a failed\n");
 	of_node_put(np);
 	kfree(name);
 
-	np = of_find_node_by_path("/testcase-data/missing-path");
-	unittest(!np, "non-existent path returned node %pOF\n", np);
+	KUNIT_EXPECT_EQ_MSG(
+		test,
+		np = of_find_node_by_path("/testcase-data/missing-path"), NULL,
+		"non-existent path returned node %pOF\n", np);
 	of_node_put(np);
 
-	np = of_find_node_by_path("missing-alias");
-	unittest(!np, "non-existent alias returned node %pOF\n", np);
+	KUNIT_EXPECT_EQ_MSG(
+		test, np = of_find_node_by_path("missing-alias"), NULL,
+		"non-existent alias returned node %pOF\n", np);
 	of_node_put(np);
 
-	np = of_find_node_by_path("testcase-alias/missing-path");
-	unittest(!np, "non-existent alias with relative path returned node %pOF\n", np);
+	KUNIT_EXPECT_EQ_MSG(
+		test,
+		np = of_find_node_by_path("testcase-alias/missing-path"), NULL,
+		"non-existent alias with relative path returned node %pOF\n",
+		np);
 	of_node_put(np);
 
 	np = of_find_node_opts_by_path("/testcase-data:testoption", &options);
-	unittest(np && !strcmp("testoption", options),
-		 "option path test failed\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+	KUNIT_EXPECT_STREQ_MSG(test, "testoption", options,
+			       "option path test failed\n");
 	of_node_put(np);
 
 	np = of_find_node_opts_by_path("/testcase-data:test/option", &options);
-	unittest(np && !strcmp("test/option", options),
-		 "option path test, subcase #1 failed\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+	KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
+			       "option path test, subcase #1 failed\n");
 	of_node_put(np);
 
 	np = of_find_node_opts_by_path("/testcase-data/testcase-device1:test/option", &options);
-	unittest(np && !strcmp("test/option", options),
-		 "option path test, subcase #2 failed\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+	KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
+			       "option path test, subcase #2 failed\n");
 	of_node_put(np);
 
 	np = of_find_node_opts_by_path("/testcase-data:testoption", NULL);
-	unittest(np, "NULL option path test failed\n");
+	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(test, np,
+					 "NULL option path test failed\n");
 	of_node_put(np);
 
 	np = of_find_node_opts_by_path("testcase-alias:testaliasoption",
 				       &options);
-	unittest(np && !strcmp("testaliasoption", options),
-		 "option alias path test failed\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+	KUNIT_EXPECT_STREQ_MSG(test, "testaliasoption", options,
+			       "option alias path test failed\n");
 	of_node_put(np);
 
 	np = of_find_node_opts_by_path("testcase-alias:test/alias/option",
 				       &options);
-	unittest(np && !strcmp("test/alias/option", options),
-		 "option alias path test, subcase #1 failed\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+	KUNIT_EXPECT_STREQ_MSG(test, "test/alias/option", options,
+			       "option alias path test, subcase #1 failed\n");
 	of_node_put(np);
 
 	np = of_find_node_opts_by_path("testcase-alias:testaliasoption", NULL);
-	unittest(np, "NULL option alias path test failed\n");
+	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(
+			test, np, "NULL option alias path test failed\n");
 	of_node_put(np);
 
 	options = "testoption";
 	np = of_find_node_opts_by_path("testcase-alias", &options);
-	unittest(np && !options, "option clearing test failed\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+	KUNIT_EXPECT_EQ_MSG(test, options, NULL,
+			    "option clearing test failed\n");
 	of_node_put(np);
 
 	options = "testoption";
 	np = of_find_node_opts_by_path("/", &options);
-	unittest(np && !options, "option clearing root node test failed\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+	KUNIT_EXPECT_EQ_MSG(test, options, NULL,
+			    "option clearing root node test failed\n");
 	of_node_put(np);
 }
 
-static void __init of_unittest_dynamic(void)
+static void of_unittest_dynamic(struct kunit *test)
 {
 	struct device_node *np;
 	struct property *prop;
 
 	np = of_find_node_by_path("/testcase-data");
-	if (!np) {
-		pr_err("missing testcase data\n");
-		return;
-	}
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
 
 	/* Array of 4 properties for the purpose of testing */
 	prop = kcalloc(4, sizeof(*prop), GFP_KERNEL);
-	if (!prop) {
-		unittest(0, "kzalloc() failed\n");
-		return;
-	}
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, prop);
 
 	/* Add a new property - should pass*/
 	prop->name = "new-property";
 	prop->value = "new-property-data";
 	prop->length = strlen(prop->value) + 1;
-	unittest(of_add_property(np, prop) == 0, "Adding a new property failed\n");
+	KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop), 0,
+			    "Adding a new property failed\n");
 
 	/* Try to add an existing property - should fail */
 	prop++;
 	prop->name = "new-property";
 	prop->value = "new-property-data-should-fail";
 	prop->length = strlen(prop->value) + 1;
-	unittest(of_add_property(np, prop) != 0,
-		 "Adding an existing property should have failed\n");
+	KUNIT_EXPECT_NE_MSG(test, of_add_property(np, prop), 0,
+			    "Adding an existing property should have failed\n");
 
 	/* Try to modify an existing property - should pass */
 	prop->value = "modify-property-data-should-pass";
 	prop->length = strlen(prop->value) + 1;
-	unittest(of_update_property(np, prop) == 0,
-		 "Updating an existing property should have passed\n");
+	KUNIT_EXPECT_EQ_MSG(
+		test, of_update_property(np, prop), 0,
+		"Updating an existing property should have passed\n");
 
 	/* Try to modify non-existent property - should pass*/
 	prop++;
 	prop->name = "modify-property";
 	prop->value = "modify-missing-property-data-should-pass";
 	prop->length = strlen(prop->value) + 1;
-	unittest(of_update_property(np, prop) == 0,
-		 "Updating a missing property should have passed\n");
+	KUNIT_EXPECT_EQ_MSG(test, of_update_property(np, prop), 0,
+			    "Updating a missing property should have passed\n");
 
 	/* Remove property - should pass */
-	unittest(of_remove_property(np, prop) == 0,
-		 "Removing a property should have passed\n");
+	KUNIT_EXPECT_EQ_MSG(test, of_remove_property(np, prop), 0,
+			    "Removing a property should have passed\n");
 
 	/* Adding very large property - should pass */
 	prop++;
 	prop->name = "large-property-PAGE_SIZEx8";
 	prop->length = PAGE_SIZE * 8;
 	prop->value = kzalloc(prop->length, GFP_KERNEL);
-	unittest(prop->value != NULL, "Unable to allocate large buffer\n");
-	if (prop->value)
-		unittest(of_add_property(np, prop) == 0,
-			 "Adding a large property should have passed\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, prop->value);
+	KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop), 0,
+			    "Adding a large property should have passed\n");
 }
 
-static int __init of_unittest_check_node_linkage(struct device_node *np)
+static int of_unittest_check_node_linkage(struct device_node *np)
 {
 	struct device_node *child;
 	int count = 0, rc;
@@ -230,27 +233,30 @@ static int __init of_unittest_check_node_linkage(struct device_node *np)
 	return rc;
 }
 
-static void __init of_unittest_check_tree_linkage(void)
+static void of_unittest_check_tree_linkage(struct kunit *test)
 {
 	struct device_node *np;
 	int allnode_count = 0, child_count;
 
-	if (!of_root)
-		return;
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, of_root);
 
 	for_each_of_allnodes(np)
 		allnode_count++;
 	child_count = of_unittest_check_node_linkage(of_root);
 
-	unittest(child_count > 0, "Device node data structure is corrupted\n");
-	unittest(child_count == allnode_count,
-		 "allnodes list size (%i) doesn't match sibling lists size (%i)\n",
-		 allnode_count, child_count);
+	KUNIT_EXPECT_GT_MSG(test, child_count, 0,
+			    "Device node data structure is corrupted\n");
+	KUNIT_EXPECT_EQ_MSG(
+		test, child_count, allnode_count,
+		"allnodes list size (%i) doesn't match sibling lists size (%i)\n",
+		allnode_count, child_count);
 	pr_debug("allnodes list size (%i); sibling lists size (%i)\n", allnode_count, child_count);
 }
 
-static void __init of_unittest_printf_one(struct device_node *np, const char *fmt,
-					  const char *expected)
+static void of_unittest_printf_one(struct kunit *test,
+				   struct device_node *np,
+				   const char *fmt,
+				   const char *expected)
 {
 	unsigned char *buf;
 	int buf_size;
@@ -265,8 +271,12 @@ static void __init of_unittest_printf_one(struct device_node *np, const char *fm
 	memset(buf, 0xff, buf_size);
 	size = snprintf(buf, buf_size - 2, fmt, np);
 
-	/* use strcmp() instead of strncmp() here to be absolutely sure strings match */
-	unittest((strcmp(buf, expected) == 0) && (buf[size+1] == 0xff),
+	KUNIT_EXPECT_STREQ_MSG(
+		test, buf, expected,
+		"sprintf failed; fmt='%s' expected='%s' rslt='%s'\n",
+		fmt, expected, buf);
+	KUNIT_EXPECT_EQ_MSG(
+		test, buf[size+1], 0xff,
 		"sprintf failed; fmt='%s' expected='%s' rslt='%s'\n",
 		fmt, expected, buf);
 
@@ -276,44 +286,49 @@ static void __init of_unittest_printf_one(struct device_node *np, const char *fm
 		/* Clear the buffer, and make sure it works correctly still */
 		memset(buf, 0xff, buf_size);
 		snprintf(buf, size+1, fmt, np);
-		unittest(strncmp(buf, expected, size) == 0 && (buf[size+1] == 0xff),
+		KUNIT_EXPECT_TRUE_MSG(
+			test, strncmp(buf, expected, size) == 0,
+			"snprintf failed; size=%i fmt='%s' expected='%s' rslt='%s'\n",
+			size, fmt, expected, buf);
+		KUNIT_EXPECT_EQ_MSG(
+			test, buf[size+1], 0xff,
 			"snprintf failed; size=%i fmt='%s' expected='%s' rslt='%s'\n",
 			size, fmt, expected, buf);
 	}
 	kfree(buf);
 }
 
-static void __init of_unittest_printf(void)
+static void of_unittest_printf(struct kunit *test)
 {
 	struct device_node *np;
 	const char *full_name = "/testcase-data/platform-tests/test-device@1/dev@100";
 	char phandle_str[16] = "";
 
 	np = of_find_node_by_path(full_name);
-	if (!np) {
-		unittest(np, "testcase data missing\n");
-		return;
-	}
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
 
 	num_to_str(phandle_str, sizeof(phandle_str), np->phandle, 0);
 
-	of_unittest_printf_one(np, "%pOF",  full_name);
-	of_unittest_printf_one(np, "%pOFf", full_name);
-	of_unittest_printf_one(np, "%pOFn", "dev");
-	of_unittest_printf_one(np, "%2pOFn", "dev");
-	of_unittest_printf_one(np, "%5pOFn", "  dev");
-	of_unittest_printf_one(np, "%pOFnc", "dev:test-sub-device");
-	of_unittest_printf_one(np, "%pOFp", phandle_str);
-	of_unittest_printf_one(np, "%pOFP", "dev@100");
-	of_unittest_printf_one(np, "ABC %pOFP ABC", "ABC dev@100 ABC");
-	of_unittest_printf_one(np, "%10pOFP", "   dev@100");
-	of_unittest_printf_one(np, "%-10pOFP", "dev@100   ");
-	of_unittest_printf_one(of_root, "%pOFP", "/");
-	of_unittest_printf_one(np, "%pOFF", "----");
-	of_unittest_printf_one(np, "%pOFPF", "dev@100:----");
-	of_unittest_printf_one(np, "%pOFPFPc", "dev@100:----:dev@100:test-sub-device");
-	of_unittest_printf_one(np, "%pOFc", "test-sub-device");
-	of_unittest_printf_one(np, "%pOFC",
+	of_unittest_printf_one(test, np, "%pOF",  full_name);
+	of_unittest_printf_one(test, np, "%pOFf", full_name);
+	of_unittest_printf_one(test, np, "%pOFn", "dev");
+	of_unittest_printf_one(test, np, "%2pOFn", "dev");
+	of_unittest_printf_one(test, np, "%5pOFn", "  dev");
+	of_unittest_printf_one(test, np, "%pOFnc", "dev:test-sub-device");
+	of_unittest_printf_one(test, np, "%pOFp", phandle_str);
+	of_unittest_printf_one(test, np, "%pOFP", "dev@100");
+	of_unittest_printf_one(test, np, "ABC %pOFP ABC", "ABC dev@100 ABC");
+	of_unittest_printf_one(test, np, "%10pOFP", "   dev@100");
+	of_unittest_printf_one(test, np, "%-10pOFP", "dev@100   ");
+	of_unittest_printf_one(test, of_root, "%pOFP", "/");
+	of_unittest_printf_one(test, np, "%pOFF", "----");
+	of_unittest_printf_one(test, np, "%pOFPF", "dev@100:----");
+	of_unittest_printf_one(test,
+			       np,
+			       "%pOFPFPc",
+			       "dev@100:----:dev@100:test-sub-device");
+	of_unittest_printf_one(test, np, "%pOFc", "test-sub-device");
+	of_unittest_printf_one(test, np, "%pOFC",
 			"\"test-sub-device\",\"test-compat2\",\"test-compat3\"");
 }
 
@@ -323,7 +338,7 @@ struct node_hash {
 };
 
 static DEFINE_HASHTABLE(phandle_ht, 8);
-static void __init of_unittest_check_phandles(void)
+static void of_unittest_check_phandles(struct kunit *test)
 {
 	struct device_node *np;
 	struct node_hash *nh;
@@ -335,24 +350,26 @@ static void __init of_unittest_check_phandles(void)
 			continue;
 
 		hash_for_each_possible(phandle_ht, nh, node, np->phandle) {
+			KUNIT_EXPECT_NE_MSG(
+				test, nh->np->phandle, np->phandle,
+				"Duplicate phandle! %i used by %pOF and %pOF\n",
+				np->phandle, nh->np, np);
 			if (nh->np->phandle == np->phandle) {
-				pr_info("Duplicate phandle! %i used by %pOF and %pOF\n",
-					np->phandle, nh->np, np);
 				dup_count++;
 				break;
 			}
 		}
 
 		nh = kzalloc(sizeof(*nh), GFP_KERNEL);
-		if (WARN_ON(!nh))
-			return;
+		KUNIT_ASSERT_NOT_ERR_OR_NULL(test, nh);
 
 		nh->np = np;
 		hash_add(phandle_ht, &nh->node, np->phandle);
 		phandle_count++;
 	}
-	unittest(dup_count == 0, "Found %i duplicates in %i phandles\n",
-		 dup_count, phandle_count);
+	KUNIT_EXPECT_EQ_MSG(test, dup_count, 0,
+			    "Found %i duplicates in %i phandles\n",
+			    dup_count, phandle_count);
 
 	/* Clean up */
 	hash_for_each_safe(phandle_ht, i, tmp, nh, node) {
@@ -361,20 +378,21 @@ static void __init of_unittest_check_phandles(void)
 	}
 }
 
-static void __init of_unittest_parse_phandle_with_args(void)
+static void of_unittest_parse_phandle_with_args(struct kunit *test)
 {
 	struct device_node *np;
 	struct of_phandle_args args;
-	int i, rc;
+	int i, rc = 0;
 
 	np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
-	if (!np) {
-		pr_err("missing testcase data\n");
-		return;
-	}
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
 
-	rc = of_count_phandle_with_args(np, "phandle-list", "#phandle-cells");
-	unittest(rc == 7, "of_count_phandle_with_args() returned %i, expected 7\n", rc);
+	KUNIT_EXPECT_EQ_MSG(
+		test,
+		of_count_phandle_with_args(
+			np, "phandle-list", "#phandle-cells"),
+		7,
+		"of_count_phandle_with_args() should have returned 7\n");
 
 	for (i = 0; i < 8; i++) {
 		bool passed = true;
@@ -428,85 +446,91 @@ static void __init of_unittest_parse_phandle_with_args(void)
 			passed = false;
 		}
 
-		unittest(passed, "index %i - data error on node %pOF rc=%i\n",
-			 i, args.np, rc);
+		KUNIT_EXPECT_TRUE_MSG(
+			test, passed,
+			"index %i - data error on node %pOF rc=%i\n",
+			i, args.np, rc);
 	}
 
 	/* Check for missing list property */
 	memset(&args, 0, sizeof(args));
-	rc = of_parse_phandle_with_args(np, "phandle-list-missing",
-					"#phandle-cells", 0, &args);
-	unittest(rc == -ENOENT, "expected:%i got:%i\n", -ENOENT, rc);
-	rc = of_count_phandle_with_args(np, "phandle-list-missing",
-					"#phandle-cells");
-	unittest(rc == -ENOENT, "expected:%i got:%i\n", -ENOENT, rc);
+	KUNIT_EXPECT_EQ(
+		test,
+		of_parse_phandle_with_args(
+			np, "phandle-list-missing", "#phandle-cells", 0, &args),
+		-ENOENT);
+	KUNIT_EXPECT_EQ(
+		test,
+		of_count_phandle_with_args(
+			np, "phandle-list-missing", "#phandle-cells"),
+		-ENOENT);
 
 	/* Check for missing cells property */
 	memset(&args, 0, sizeof(args));
-	rc = of_parse_phandle_with_args(np, "phandle-list",
-					"#phandle-cells-missing", 0, &args);
-	unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
-	rc = of_count_phandle_with_args(np, "phandle-list",
-					"#phandle-cells-missing");
-	unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
+	KUNIT_EXPECT_EQ(
+		test,
+		of_parse_phandle_with_args(
+			np, "phandle-list", "#phandle-cells-missing", 0, &args),
+		-EINVAL);
+	KUNIT_EXPECT_EQ(
+		test,
+		of_count_phandle_with_args(
+			np, "phandle-list", "#phandle-cells-missing"),
+		-EINVAL);
 
 	/* Check for bad phandle in list */
 	memset(&args, 0, sizeof(args));
-	rc = of_parse_phandle_with_args(np, "phandle-list-bad-phandle",
-					"#phandle-cells", 0, &args);
-	unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
-	rc = of_count_phandle_with_args(np, "phandle-list-bad-phandle",
-					"#phandle-cells");
-	unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
+	KUNIT_EXPECT_EQ(
+		test,
+		of_parse_phandle_with_args(np, "phandle-list-bad-phandle",
+					   "#phandle-cells", 0, &args),
+		-EINVAL);
+	KUNIT_EXPECT_EQ(
+		test,
+		of_count_phandle_with_args(
+			np, "phandle-list-bad-phandle", "#phandle-cells"),
+		-EINVAL);
 
 	/* Check for incorrectly formed argument list */
 	memset(&args, 0, sizeof(args));
-	rc = of_parse_phandle_with_args(np, "phandle-list-bad-args",
-					"#phandle-cells", 1, &args);
-	unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
-	rc = of_count_phandle_with_args(np, "phandle-list-bad-args",
-					"#phandle-cells");
-	unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
+	KUNIT_EXPECT_EQ(
+		test,
+		of_parse_phandle_with_args(np, "phandle-list-bad-args",
+					   "#phandle-cells", 1, &args),
+		-EINVAL);
+	KUNIT_EXPECT_EQ(
+		test,
+		of_count_phandle_with_args(
+			np, "phandle-list-bad-args", "#phandle-cells"),
+		-EINVAL);
 }
 
-static void __init of_unittest_parse_phandle_with_args_map(void)
+static void of_unittest_parse_phandle_with_args_map(struct kunit *test)
 {
 	struct device_node *np, *p0, *p1, *p2, *p3;
 	struct of_phandle_args args;
 	int i, rc;
 
 	np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-b");
-	if (!np) {
-		pr_err("missing testcase data\n");
-		return;
-	}
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
 
 	p0 = of_find_node_by_path("/testcase-data/phandle-tests/provider0");
-	if (!p0) {
-		pr_err("missing testcase data\n");
-		return;
-	}
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, p0);
 
 	p1 = of_find_node_by_path("/testcase-data/phandle-tests/provider1");
-	if (!p1) {
-		pr_err("missing testcase data\n");
-		return;
-	}
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, p1);
 
 	p2 = of_find_node_by_path("/testcase-data/phandle-tests/provider2");
-	if (!p2) {
-		pr_err("missing testcase data\n");
-		return;
-	}
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, p2);
 
 	p3 = of_find_node_by_path("/testcase-data/phandle-tests/provider3");
-	if (!p3) {
-		pr_err("missing testcase data\n");
-		return;
-	}
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, p3);
 
-	rc = of_count_phandle_with_args(np, "phandle-list", "#phandle-cells");
-	unittest(rc == 7, "of_count_phandle_with_args() returned %i, expected 7\n", rc);
+	KUNIT_EXPECT_EQ(test,
+		       of_count_phandle_with_args(np,
+						  "phandle-list",
+						  "#phandle-cells"),
+		       7);
 
 	for (i = 0; i < 8; i++) {
 		bool passed = true;
@@ -564,121 +588,186 @@ static void __init of_unittest_parse_phandle_with_args_map(void)
 			passed = false;
 		}
 
-		unittest(passed, "index %i - data error on node %s rc=%i\n",
-			 i, args.np->full_name, rc);
+		KUNIT_EXPECT_TRUE_MSG(
+			test, passed,
+			"index %i - data error on node %s rc=%i\n",
+			i, (args.np ? args.np->full_name : "missing np"), rc);
 	}
 
 	/* Check for missing list property */
 	memset(&args, 0, sizeof(args));
-	rc = of_parse_phandle_with_args_map(np, "phandle-list-missing",
-					    "phandle", 0, &args);
-	unittest(rc == -ENOENT, "expected:%i got:%i\n", -ENOENT, rc);
+	KUNIT_EXPECT_EQ(
+		test,
+		of_parse_phandle_with_args_map(
+			np, "phandle-list-missing", "phandle", 0, &args),
+		-ENOENT);
 
 	/* Check for missing cells,map,mask property */
 	memset(&args, 0, sizeof(args));
-	rc = of_parse_phandle_with_args_map(np, "phandle-list",
-					    "phandle-missing", 0, &args);
-	unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
+	KUNIT_EXPECT_EQ(
+		test,
+		of_parse_phandle_with_args_map(
+			np, "phandle-list", "phandle-missing", 0, &args),
+		-EINVAL);
 
 	/* Check for bad phandle in list */
 	memset(&args, 0, sizeof(args));
-	rc = of_parse_phandle_with_args_map(np, "phandle-list-bad-phandle",
-					    "phandle", 0, &args);
-	unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
+	KUNIT_EXPECT_EQ(
+		test,
+		of_parse_phandle_with_args_map(
+			np, "phandle-list-bad-phandle", "phandle", 0, &args),
+		-EINVAL);
 
 	/* Check for incorrectly formed argument list */
 	memset(&args, 0, sizeof(args));
-	rc = of_parse_phandle_with_args_map(np, "phandle-list-bad-args",
-					    "phandle", 1, &args);
-	unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
+	KUNIT_EXPECT_EQ(
+		test,
+		of_parse_phandle_with_args_map(
+			np, "phandle-list-bad-args", "phandle", 1, &args),
+		-EINVAL);
 }
 
-static void __init of_unittest_property_string(void)
+static void of_unittest_property_string(struct kunit *test)
 {
 	const char *strings[4];
 	struct device_node *np;
 	int rc;
 
 	np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
-	if (!np) {
-		pr_err("No testcase data in device tree\n");
-		return;
-	}
-
-	rc = of_property_match_string(np, "phandle-list-names", "first");
-	unittest(rc == 0, "first expected:0 got:%i\n", rc);
-	rc = of_property_match_string(np, "phandle-list-names", "second");
-	unittest(rc == 1, "second expected:1 got:%i\n", rc);
-	rc = of_property_match_string(np, "phandle-list-names", "third");
-	unittest(rc == 2, "third expected:2 got:%i\n", rc);
-	rc = of_property_match_string(np, "phandle-list-names", "fourth");
-	unittest(rc == -ENODATA, "unmatched string; rc=%i\n", rc);
-	rc = of_property_match_string(np, "missing-property", "blah");
-	unittest(rc == -EINVAL, "missing property; rc=%i\n", rc);
-	rc = of_property_match_string(np, "empty-property", "blah");
-	unittest(rc == -ENODATA, "empty property; rc=%i\n", rc);
-	rc = of_property_match_string(np, "unterminated-string", "blah");
-	unittest(rc == -EILSEQ, "unterminated string; rc=%i\n", rc);
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+
+	KUNIT_EXPECT_EQ(
+		test,
+		of_property_match_string(np, "phandle-list-names", "first"),
+		0);
+	KUNIT_EXPECT_EQ(
+		test,
+		of_property_match_string(np, "phandle-list-names", "second"),
+		1);
+	KUNIT_EXPECT_EQ(
+		test,
+		of_property_match_string(np, "phandle-list-names", "third"),
+		2);
+	KUNIT_EXPECT_EQ_MSG(
+		test,
+		of_property_match_string(np, "phandle-list-names", "fourth"),
+		-ENODATA,
+		"unmatched string");
+	KUNIT_EXPECT_EQ_MSG(
+		test,
+		of_property_match_string(np, "missing-property", "blah"),
+		-EINVAL,
+		"missing property");
+	KUNIT_EXPECT_EQ_MSG(
+		test,
+		of_property_match_string(np, "empty-property", "blah"),
+		-ENODATA,
+		"empty property");
+	KUNIT_EXPECT_EQ_MSG(
+		test,
+		of_property_match_string(np, "unterminated-string", "blah"),
+		-EILSEQ,
+		"unterminated string");
 
 	/* of_property_count_strings() tests */
-	rc = of_property_count_strings(np, "string-property");
-	unittest(rc == 1, "Incorrect string count; rc=%i\n", rc);
-	rc = of_property_count_strings(np, "phandle-list-names");
-	unittest(rc == 3, "Incorrect string count; rc=%i\n", rc);
-	rc = of_property_count_strings(np, "unterminated-string");
-	unittest(rc == -EILSEQ, "unterminated string; rc=%i\n", rc);
-	rc = of_property_count_strings(np, "unterminated-string-list");
-	unittest(rc == -EILSEQ, "unterminated string array; rc=%i\n", rc);
+	KUNIT_EXPECT_EQ(test,
+			of_property_count_strings(np, "string-property"), 1);
+	KUNIT_EXPECT_EQ(test,
+			of_property_count_strings(np, "phandle-list-names"), 3);
+	KUNIT_EXPECT_EQ_MSG(
+		test,
+		of_property_count_strings(np, "unterminated-string"), -EILSEQ,
+		"unterminated string");
+	KUNIT_EXPECT_EQ_MSG(
+		test,
+		of_property_count_strings(np, "unterminated-string-list"),
+		-EILSEQ,
+		"unterminated string array");
 
 	/* of_property_read_string_index() tests */
 	rc = of_property_read_string_index(np, "string-property", 0, strings);
-	unittest(rc == 0 && !strcmp(strings[0], "foobar"), "of_property_read_string_index() failure; rc=%i\n", rc);
+	KUNIT_ASSERT_EQ(test, rc, 0);
+	KUNIT_EXPECT_STREQ(test, strings[0], "foobar");
+
 	strings[0] = NULL;
 	rc = of_property_read_string_index(np, "string-property", 1, strings);
-	unittest(rc == -ENODATA && strings[0] == NULL, "of_property_read_string_index() failure; rc=%i\n", rc);
+	KUNIT_EXPECT_EQ(test, rc, -ENODATA);
+	KUNIT_EXPECT_EQ(test, strings[0], NULL);
+
 	rc = of_property_read_string_index(np, "phandle-list-names", 0, strings);
-	unittest(rc == 0 && !strcmp(strings[0], "first"), "of_property_read_string_index() failure; rc=%i\n", rc);
+	KUNIT_ASSERT_EQ(test, rc, 0);
+	KUNIT_EXPECT_STREQ(test, strings[0], "first");
+
 	rc = of_property_read_string_index(np, "phandle-list-names", 1, strings);
-	unittest(rc == 0 && !strcmp(strings[0], "second"), "of_property_read_string_index() failure; rc=%i\n", rc);
+	KUNIT_ASSERT_EQ(test, rc, 0);
+	KUNIT_EXPECT_STREQ(test, strings[0], "second");
+
 	rc = of_property_read_string_index(np, "phandle-list-names", 2, strings);
-	unittest(rc == 0 && !strcmp(strings[0], "third"), "of_property_read_string_index() failure; rc=%i\n", rc);
+	KUNIT_ASSERT_EQ(test, rc, 0);
+	KUNIT_EXPECT_STREQ(test, strings[0], "third");
+
 	strings[0] = NULL;
 	rc = of_property_read_string_index(np, "phandle-list-names", 3, strings);
-	unittest(rc == -ENODATA && strings[0] == NULL, "of_property_read_string_index() failure; rc=%i\n", rc);
+	KUNIT_EXPECT_EQ(test, rc, -ENODATA);
+	KUNIT_EXPECT_EQ(test, strings[0], NULL);
+
 	strings[0] = NULL;
 	rc = of_property_read_string_index(np, "unterminated-string", 0, strings);
-	unittest(rc == -EILSEQ && strings[0] == NULL, "of_property_read_string_index() failure; rc=%i\n", rc);
+	KUNIT_EXPECT_EQ(test, rc, -EILSEQ);
+	KUNIT_EXPECT_EQ(test, strings[0], NULL);
+
 	rc = of_property_read_string_index(np, "unterminated-string-list", 0, strings);
-	unittest(rc == 0 && !strcmp(strings[0], "first"), "of_property_read_string_index() failure; rc=%i\n", rc);
+	KUNIT_ASSERT_EQ(test, rc, 0);
+	KUNIT_EXPECT_STREQ(test, strings[0], "first");
+
 	strings[0] = NULL;
 	rc = of_property_read_string_index(np, "unterminated-string-list", 2, strings); /* should fail */
-	unittest(rc == -EILSEQ && strings[0] == NULL, "of_property_read_string_index() failure; rc=%i\n", rc);
-	strings[1] = NULL;
+	KUNIT_EXPECT_EQ(test, rc, -EILSEQ);
+	KUNIT_EXPECT_EQ(test, strings[0], NULL);
 
 	/* of_property_read_string_array() tests */
-	rc = of_property_read_string_array(np, "string-property", strings, 4);
-	unittest(rc == 1, "Incorrect string count; rc=%i\n", rc);
-	rc = of_property_read_string_array(np, "phandle-list-names", strings, 4);
-	unittest(rc == 3, "Incorrect string count; rc=%i\n", rc);
-	rc = of_property_read_string_array(np, "unterminated-string", strings, 4);
-	unittest(rc == -EILSEQ, "unterminated string; rc=%i\n", rc);
+	strings[1] = NULL;
+	KUNIT_EXPECT_EQ(
+		test,
+		of_property_read_string_array(
+			np, "string-property", strings, 4),
+		1);
+	KUNIT_EXPECT_EQ(
+		test,
+		of_property_read_string_array(
+			np, "phandle-list-names", strings, 4),
+		3);
+	KUNIT_EXPECT_EQ_MSG(
+		test,
+		of_property_read_string_array(
+			np, "unterminated-string", strings, 4),
+		-EILSEQ,
+		"unterminated string");
 	/* -- An incorrectly formed string should cause a failure */
-	rc = of_property_read_string_array(np, "unterminated-string-list", strings, 4);
-	unittest(rc == -EILSEQ, "unterminated string array; rc=%i\n", rc);
+	KUNIT_EXPECT_EQ_MSG(
+		test,
+		of_property_read_string_array(
+			np, "unterminated-string-list", strings, 4),
+		-EILSEQ,
+		"unterminated string array");
 	/* -- parsing the correctly formed strings should still work: */
 	strings[2] = NULL;
 	rc = of_property_read_string_array(np, "unterminated-string-list", strings, 2);
-	unittest(rc == 2 && strings[2] == NULL, "of_property_read_string_array() failure; rc=%i\n", rc);
+	KUNIT_EXPECT_EQ(test, rc, 2);
+	KUNIT_EXPECT_EQ(test, strings[2], NULL);
+
 	strings[1] = NULL;
 	rc = of_property_read_string_array(np, "phandle-list-names", strings, 1);
-	unittest(rc == 1 && strings[1] == NULL, "Overwrote end of string array; rc=%i, str='%s'\n", rc, strings[1]);
+	KUNIT_ASSERT_EQ(test, rc, 1);
+	KUNIT_EXPECT_EQ_MSG(test, strings[1], NULL,
+			    "Overwrote end of string array");
 }
 
 #define propcmp(p1, p2) (((p1)->length == (p2)->length) && \
 			(p1)->value && (p2)->value && \
 			!memcmp((p1)->value, (p2)->value, (p1)->length) && \
 			!strcmp((p1)->name, (p2)->name))
-static void __init of_unittest_property_copy(void)
+static void of_unittest_property_copy(struct kunit *test)
 {
 #ifdef CONFIG_OF_DYNAMIC
 	struct property p1 = { .name = "p1", .length = 0, .value = "" };
@@ -686,20 +775,24 @@ static void __init of_unittest_property_copy(void)
 	struct property *new;
 
 	new = __of_prop_dup(&p1, GFP_KERNEL);
-	unittest(new && propcmp(&p1, new), "empty property didn't copy correctly\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, new);
+	KUNIT_EXPECT_TRUE_MSG(test, propcmp(&p1, new),
+			      "empty property didn't copy correctly");
 	kfree(new->value);
 	kfree(new->name);
 	kfree(new);
 
 	new = __of_prop_dup(&p2, GFP_KERNEL);
-	unittest(new && propcmp(&p2, new), "non-empty property didn't copy correctly\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, new);
+	KUNIT_EXPECT_TRUE_MSG(test, propcmp(&p2, new),
+			      "non-empty property didn't copy correctly");
 	kfree(new->value);
 	kfree(new->name);
 	kfree(new);
 #endif
 }
 
-static void __init of_unittest_changeset(void)
+static void of_unittest_changeset(struct kunit *test)
 {
 #ifdef CONFIG_OF_DYNAMIC
 	struct property *ppadd, padd = { .name = "prop-add", .length = 1, .value = "" };
@@ -712,32 +805,32 @@ static void __init of_unittest_changeset(void)
 	struct of_changeset chgset;
 
 	n1 = __of_node_dup(NULL, "n1");
-	unittest(n1, "testcase setup failure\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, n1);
 
 	n2 = __of_node_dup(NULL, "n2");
-	unittest(n2, "testcase setup failure\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, n2);
 
 	n21 = __of_node_dup(NULL, "n21");
-	unittest(n21, "testcase setup failure %p\n", n21);
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, n21);
 
 	nchangeset = of_find_node_by_path("/testcase-data/changeset");
 	nremove = of_get_child_by_name(nchangeset, "node-remove");
-	unittest(nremove, "testcase setup failure\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, nremove);
 
 	ppadd = __of_prop_dup(&padd, GFP_KERNEL);
-	unittest(ppadd, "testcase setup failure\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ppadd);
 
 	ppname_n1  = __of_prop_dup(&pname_n1, GFP_KERNEL);
-	unittest(ppname_n1, "testcase setup failure\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ppname_n1);
 
 	ppname_n2  = __of_prop_dup(&pname_n2, GFP_KERNEL);
-	unittest(ppname_n2, "testcase setup failure\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ppname_n2);
 
 	ppname_n21 = __of_prop_dup(&pname_n21, GFP_KERNEL);
-	unittest(ppname_n21, "testcase setup failure\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ppname_n21);
 
 	ppupdate = __of_prop_dup(&pupdate, GFP_KERNEL);
-	unittest(ppupdate, "testcase setup failure\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ppupdate);
 
 	parent = nchangeset;
 	n1->parent = parent;
@@ -745,54 +838,72 @@ static void __init of_unittest_changeset(void)
 	n21->parent = n2;
 
 	ppremove = of_find_property(parent, "prop-remove", NULL);
-	unittest(ppremove, "failed to find removal prop");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ppremove);
 
 	of_changeset_init(&chgset);
 
-	unittest(!of_changeset_attach_node(&chgset, n1), "fail attach n1\n");
-	unittest(!of_changeset_add_property(&chgset, n1, ppname_n1), "fail add prop name\n");
-
-	unittest(!of_changeset_attach_node(&chgset, n2), "fail attach n2\n");
-	unittest(!of_changeset_add_property(&chgset, n2, ppname_n2), "fail add prop name\n");
-
-	unittest(!of_changeset_detach_node(&chgset, nremove), "fail remove node\n");
-	unittest(!of_changeset_add_property(&chgset, n21, ppname_n21), "fail add prop name\n");
-
-	unittest(!of_changeset_attach_node(&chgset, n21), "fail attach n21\n");
-
-	unittest(!of_changeset_add_property(&chgset, parent, ppadd), "fail add prop prop-add\n");
-	unittest(!of_changeset_update_property(&chgset, parent, ppupdate), "fail update prop\n");
-	unittest(!of_changeset_remove_property(&chgset, parent, ppremove), "fail remove prop\n");
-
-	unittest(!of_changeset_apply(&chgset), "apply failed\n");
+	KUNIT_EXPECT_FALSE_MSG(test, of_changeset_attach_node(&chgset, n1),
+			       "fail attach n1\n");
+	KUNIT_EXPECT_FALSE_MSG(
+		test, of_changeset_add_property(&chgset, n1, ppname_n1),
+		"fail add prop name\n");
+
+	KUNIT_EXPECT_FALSE_MSG(test, of_changeset_attach_node(&chgset, n2),
+			       "fail attach n2\n");
+	KUNIT_EXPECT_FALSE_MSG(
+		test, of_changeset_add_property(&chgset, n2, ppname_n2),
+		"fail add prop name\n");
+
+	KUNIT_EXPECT_FALSE_MSG(test, of_changeset_detach_node(&chgset, nremove),
+			       "fail remove node\n");
+	KUNIT_EXPECT_FALSE_MSG(
+		test, of_changeset_add_property(&chgset, n21, ppname_n21),
+		"fail add prop name\n");
+
+	KUNIT_EXPECT_FALSE_MSG(test, of_changeset_attach_node(&chgset, n21),
+			       "fail attach n21\n");
+
+	KUNIT_EXPECT_FALSE_MSG(
+		test,
+		of_changeset_add_property(&chgset, parent, ppadd),
+		"fail add prop prop-add\n");
+	KUNIT_EXPECT_FALSE_MSG(
+		test,
+		of_changeset_update_property(&chgset, parent, ppupdate),
+		"fail update prop\n");
+	KUNIT_EXPECT_FALSE_MSG(
+		test,
+		of_changeset_remove_property(&chgset, parent, ppremove),
+		"fail remove prop\n");
+
+	KUNIT_EXPECT_FALSE_MSG(test, of_changeset_apply(&chgset),
+			       "apply failed\n");
 
 	of_node_put(nchangeset);
 
 	/* Make sure node names are constructed correctly */
-	unittest((np = of_find_node_by_path("/testcase-data/changeset/n2/n21")),
-		 "'%pOF' not added\n", n21);
+	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(
+		test,
+		np = of_find_node_by_path("/testcase-data/changeset/n2/n21"),
+		"'%pOF' not added\n", n21);
 	of_node_put(np);
 
-	unittest(!of_changeset_revert(&chgset), "revert failed\n");
+	KUNIT_EXPECT_FALSE(test, of_changeset_revert(&chgset));
 
 	of_changeset_destroy(&chgset);
 #endif
 }
 
-static void __init of_unittest_parse_interrupts(void)
+static void of_unittest_parse_interrupts(struct kunit *test)
 {
 	struct device_node *np;
 	struct of_phandle_args args;
 	int i, rc;
 
-	if (of_irq_workarounds & OF_IMAP_OLDWORLD_MAC)
-		return;
+	KUNIT_ASSERT_FALSE(test, of_irq_workarounds & OF_IMAP_OLDWORLD_MAC);
 
 	np = of_find_node_by_path("/testcase-data/interrupts/interrupts0");
-	if (!np) {
-		pr_err("missing testcase data\n");
-		return;
-	}
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
 
 	for (i = 0; i < 4; i++) {
 		bool passed = true;
@@ -804,16 +915,15 @@ static void __init of_unittest_parse_interrupts(void)
 		passed &= (args.args_count == 1);
 		passed &= (args.args[0] == (i + 1));
 
-		unittest(passed, "index %i - data error on node %pOF rc=%i\n",
-			 i, args.np, rc);
+		KUNIT_EXPECT_TRUE_MSG(
+			test, passed,
+			"index %i - data error on node %pOF rc=%i\n",
+			i, args.np, rc);
 	}
 	of_node_put(np);
 
 	np = of_find_node_by_path("/testcase-data/interrupts/interrupts1");
-	if (!np) {
-		pr_err("missing testcase data\n");
-		return;
-	}
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
 
 	for (i = 0; i < 4; i++) {
 		bool passed = true;
@@ -850,26 +960,24 @@ static void __init of_unittest_parse_interrupts(void)
 		default:
 			passed = false;
 		}
-		unittest(passed, "index %i - data error on node %pOF rc=%i\n",
-			 i, args.np, rc);
+		KUNIT_EXPECT_TRUE_MSG(
+			test, passed,
+			"index %i - data error on node %pOF rc=%i\n",
+			i, args.np, rc);
 	}
 	of_node_put(np);
 }
 
-static void __init of_unittest_parse_interrupts_extended(void)
+static void of_unittest_parse_interrupts_extended(struct kunit *test)
 {
 	struct device_node *np;
 	struct of_phandle_args args;
 	int i, rc;
 
-	if (of_irq_workarounds & OF_IMAP_OLDWORLD_MAC)
-		return;
+	KUNIT_ASSERT_FALSE(test, of_irq_workarounds & OF_IMAP_OLDWORLD_MAC);
 
 	np = of_find_node_by_path("/testcase-data/interrupts/interrupts-extended0");
-	if (!np) {
-		pr_err("missing testcase data\n");
-		return;
-	}
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
 
 	for (i = 0; i < 7; i++) {
 		bool passed = true;
@@ -924,8 +1032,10 @@ static void __init of_unittest_parse_interrupts_extended(void)
 			passed = false;
 		}
 
-		unittest(passed, "index %i - data error on node %pOF rc=%i\n",
-			 i, args.np, rc);
+		KUNIT_EXPECT_TRUE_MSG(
+			test, passed,
+			"index %i - data error on node %pOF rc=%i\n",
+			i, args.np, rc);
 	}
 	of_node_put(np);
 }
@@ -965,7 +1075,7 @@ static struct {
 	{ .path = "/testcase-data/match-node/name9", .data = "K", },
 };
 
-static void __init of_unittest_match_node(void)
+static void of_unittest_match_node(struct kunit *test)
 {
 	struct device_node *np;
 	const struct of_device_id *match;
@@ -973,26 +1083,19 @@ static void __init of_unittest_match_node(void)
 
 	for (i = 0; i < ARRAY_SIZE(match_node_tests); i++) {
 		np = of_find_node_by_path(match_node_tests[i].path);
-		if (!np) {
-			unittest(0, "missing testcase node %s\n",
-				match_node_tests[i].path);
-			continue;
-		}
+		KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
 
 		match = of_match_node(match_node_table, np);
-		if (!match) {
-			unittest(0, "%s didn't match anything\n",
-				match_node_tests[i].path);
-			continue;
-		}
+		KUNIT_ASSERT_NOT_ERR_OR_NULL_MSG(test, match,
+						 "%s didn't match anything",
+						 match_node_tests[i].path);
 
-		if (strcmp(match->data, match_node_tests[i].data) != 0) {
-			unittest(0, "%s got wrong match. expected %s, got %s\n",
-				match_node_tests[i].path, match_node_tests[i].data,
-				(const char *)match->data);
-			continue;
-		}
-		unittest(1, "passed");
+		KUNIT_EXPECT_STREQ_MSG(
+			test,
+			match->data, match_node_tests[i].data,
+			"%s got wrong match. expected %s, got %s\n",
+			match_node_tests[i].path, match_node_tests[i].data,
+			(const char *)match->data);
 	}
 }
 
@@ -1004,9 +1107,9 @@ static struct resource test_bus_res = {
 static const struct platform_device_info test_bus_info = {
 	.name = "unittest-bus",
 };
-static void __init of_unittest_platform_populate(void)
+static void of_unittest_platform_populate(struct kunit *test)
 {
-	int irq, rc;
+	int irq;
 	struct device_node *np, *child, *grandchild;
 	struct platform_device *pdev, *test_bus;
 	const struct of_device_id match[] = {
@@ -1020,32 +1123,27 @@ static void __init of_unittest_platform_populate(void)
 	/* Test that a missing irq domain returns -EPROBE_DEFER */
 	np = of_find_node_by_path("/testcase-data/testcase-device1");
 	pdev = of_find_device_by_node(np);
-	unittest(pdev, "device 1 creation failed\n");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, pdev);
 
 	if (!(of_irq_workarounds & OF_IMAP_OLDWORLD_MAC)) {
 		irq = platform_get_irq(pdev, 0);
-		unittest(irq == -EPROBE_DEFER,
-			 "device deferred probe failed - %d\n", irq);
+		KUNIT_ASSERT_EQ(test, irq, -EPROBE_DEFER);
 
 		/* Test that a parsing failure does not return -EPROBE_DEFER */
 		np = of_find_node_by_path("/testcase-data/testcase-device2");
 		pdev = of_find_device_by_node(np);
-		unittest(pdev, "device 2 creation failed\n");
+		KUNIT_ASSERT_NOT_ERR_OR_NULL(test, pdev);
 		irq = platform_get_irq(pdev, 0);
-		unittest(irq < 0 && irq != -EPROBE_DEFER,
-			 "device parsing error failed - %d\n", irq);
+		KUNIT_ASSERT_TRUE_MSG(test, irq < 0 && irq != -EPROBE_DEFER,
+				      "device parsing error failed - %d\n",
+				      irq);
 	}
 
 	np = of_find_node_by_path("/testcase-data/platform-tests");
-	unittest(np, "No testcase data in device tree\n");
-	if (!np)
-		return;
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
 
 	test_bus = platform_device_register_full(&test_bus_info);
-	rc = PTR_ERR_OR_ZERO(test_bus);
-	unittest(!rc, "testbus registration failed; rc=%i\n", rc);
-	if (rc)
-		return;
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, test_bus);
 	test_bus->dev.of_node = np;
 
 	/*
@@ -1060,17 +1158,19 @@ static void __init of_unittest_platform_populate(void)
 	of_platform_populate(np, match, NULL, &test_bus->dev);
 	for_each_child_of_node(np, child) {
 		for_each_child_of_node(child, grandchild)
-			unittest(of_find_device_by_node(grandchild),
-				 "Could not create device for node '%pOFn'\n",
-				 grandchild);
+			KUNIT_EXPECT_TRUE_MSG(
+				test, of_find_device_by_node(grandchild),
+				"Could not create device for node '%pOFn'\n",
+				grandchild);
 	}
 
 	of_platform_depopulate(&test_bus->dev);
 	for_each_child_of_node(np, child) {
 		for_each_child_of_node(child, grandchild)
-			unittest(!of_find_device_by_node(grandchild),
-				 "device didn't get destroyed '%pOFn'\n",
-				 grandchild);
+			KUNIT_EXPECT_FALSE_MSG(
+				test, of_find_device_by_node(grandchild),
+				"device didn't get destroyed '%pOFn'\n",
+				grandchild);
 	}
 
 	platform_device_unregister(test_bus);
@@ -1171,7 +1271,7 @@ static void attach_node_and_children(struct device_node *np)
  *	unittest_data_add - Reads, copies data from
  *	linked tree and attaches it to the live tree
  */
-static int __init unittest_data_add(void)
+static int unittest_data_add(void)
 {
 	void *unittest_data;
 	struct device_node *unittest_data_node, *np;
@@ -1242,7 +1342,7 @@ static int __init unittest_data_add(void)
 }
 
 #ifdef CONFIG_OF_OVERLAY
-static int __init overlay_data_apply(const char *overlay_name, int *overlay_id);
+static int overlay_data_apply(const char *overlay_name, int *overlay_id);
 
 static int unittest_probe(struct platform_device *pdev)
 {
@@ -1471,172 +1571,146 @@ static void of_unittest_destroy_tracked_overlays(void)
 	} while (defers > 0);
 }
 
-static int __init of_unittest_apply_overlay(int overlay_nr, int *overlay_id)
+static int of_unittest_apply_overlay(struct kunit *test,
+				     int overlay_nr,
+				     int *overlay_id)
 {
 	const char *overlay_name;
 
 	overlay_name = overlay_name_from_nr(overlay_nr);
 
-	if (!overlay_data_apply(overlay_name, overlay_id)) {
-		unittest(0, "could not apply overlay \"%s\"\n",
-				overlay_name);
-		return -EFAULT;
-	}
+	KUNIT_ASSERT_TRUE_MSG(test,
+			      overlay_data_apply(overlay_name, overlay_id),
+			      "could not apply overlay \"%s\"\n", overlay_name);
 	of_unittest_track_overlay(*overlay_id);
 
 	return 0;
 }
 
 /* apply an overlay while checking before and after states */
-static int __init of_unittest_apply_overlay_check(int overlay_nr,
+static int of_unittest_apply_overlay_check(struct kunit *test, int overlay_nr,
 		int unittest_nr, int before, int after,
 		enum overlay_type ovtype)
 {
 	int ret, ovcs_id;
 
 	/* unittest device must not be in before state */
-	if (of_unittest_device_exists(unittest_nr, ovtype) != before) {
-		unittest(0, "%s with device @\"%s\" %s\n",
-				overlay_name_from_nr(overlay_nr),
-				unittest_path(unittest_nr, ovtype),
-				!before ? "enabled" : "disabled");
-		return -EINVAL;
-	}
+	KUNIT_ASSERT_EQ_MSG(
+		test, of_unittest_device_exists(unittest_nr, ovtype), before,
+		"%s with device @\"%s\" %s\n", overlay_name_from_nr(overlay_nr),
+		unittest_path(unittest_nr, ovtype),
+		!before ? "enabled" : "disabled");
 
 	ovcs_id = 0;
-	ret = of_unittest_apply_overlay(overlay_nr, &ovcs_id);
+	ret = of_unittest_apply_overlay(test, overlay_nr, &ovcs_id);
 	if (ret != 0) {
-		/* of_unittest_apply_overlay already called unittest() */
+		/* of_unittest_apply_overlay already set expectation */
 		return ret;
 	}
 
 	/* unittest device must be to set to after state */
-	if (of_unittest_device_exists(unittest_nr, ovtype) != after) {
-		unittest(0, "%s failed to create @\"%s\" %s\n",
-				overlay_name_from_nr(overlay_nr),
-				unittest_path(unittest_nr, ovtype),
-				!after ? "enabled" : "disabled");
-		return -EINVAL;
-	}
+	KUNIT_ASSERT_EQ_MSG(
+		test, of_unittest_device_exists(unittest_nr, ovtype), after,
+		"%s failed to create @\"%s\" %s\n",
+		overlay_name_from_nr(overlay_nr),
+		unittest_path(unittest_nr, ovtype),
+		!after ? "enabled" : "disabled");
 
 	return 0;
 }
 
 /* apply an overlay and then revert it while checking before, after states */
-static int __init of_unittest_apply_revert_overlay_check(int overlay_nr,
+static int of_unittest_apply_revert_overlay_check(
+		struct kunit *test, int overlay_nr,
 		int unittest_nr, int before, int after,
 		enum overlay_type ovtype)
 {
 	int ret, ovcs_id;
 
 	/* unittest device must be in before state */
-	if (of_unittest_device_exists(unittest_nr, ovtype) != before) {
-		unittest(0, "%s with device @\"%s\" %s\n",
-				overlay_name_from_nr(overlay_nr),
-				unittest_path(unittest_nr, ovtype),
-				!before ? "enabled" : "disabled");
-		return -EINVAL;
-	}
+	KUNIT_ASSERT_EQ_MSG(
+		test, of_unittest_device_exists(unittest_nr, ovtype), before,
+		"%s with device @\"%s\" %s\n", overlay_name_from_nr(overlay_nr),
+		unittest_path(unittest_nr, ovtype),
+		!before ? "enabled" : "disabled");
 
 	/* apply the overlay */
 	ovcs_id = 0;
-	ret = of_unittest_apply_overlay(overlay_nr, &ovcs_id);
+	ret = of_unittest_apply_overlay(test, overlay_nr, &ovcs_id);
 	if (ret != 0) {
-		/* of_unittest_apply_overlay already called unittest() */
+		/* of_unittest_apply_overlay already set expectation. */
 		return ret;
 	}
 
 	/* unittest device must be in after state */
-	if (of_unittest_device_exists(unittest_nr, ovtype) != after) {
-		unittest(0, "%s failed to create @\"%s\" %s\n",
-				overlay_name_from_nr(overlay_nr),
-				unittest_path(unittest_nr, ovtype),
-				!after ? "enabled" : "disabled");
-		return -EINVAL;
-	}
-
-	ret = of_overlay_remove(&ovcs_id);
-	if (ret != 0) {
-		unittest(0, "%s failed to be destroyed @\"%s\"\n",
-				overlay_name_from_nr(overlay_nr),
-				unittest_path(unittest_nr, ovtype));
-		return ret;
-	}
+	KUNIT_ASSERT_EQ_MSG(
+		test, of_unittest_device_exists(unittest_nr, ovtype), after,
+		"%s failed to create @\"%s\" %s\n",
+		overlay_name_from_nr(overlay_nr),
+		unittest_path(unittest_nr, ovtype),
+		!after ? "enabled" : "disabled");
+
+	KUNIT_ASSERT_EQ_MSG(test, of_overlay_remove(&ovcs_id), 0,
+			    "%s failed to be destroyed @\"%s\"\n",
+			    overlay_name_from_nr(overlay_nr),
+			    unittest_path(unittest_nr, ovtype));
 
 	/* unittest device must be again in before state */
-	if (of_unittest_device_exists(unittest_nr, PDEV_OVERLAY) != before) {
-		unittest(0, "%s with device @\"%s\" %s\n",
-				overlay_name_from_nr(overlay_nr),
-				unittest_path(unittest_nr, ovtype),
-				!before ? "enabled" : "disabled");
-		return -EINVAL;
-	}
+	KUNIT_ASSERT_EQ_MSG(
+		test,
+		of_unittest_device_exists(unittest_nr, PDEV_OVERLAY), before,
+		"%s with device @\"%s\" %s\n",
+		overlay_name_from_nr(overlay_nr),
+		unittest_path(unittest_nr, ovtype),
+		!before ? "enabled" : "disabled");
 
 	return 0;
 }
 
 /* test activation of device */
-static void __init of_unittest_overlay_0(void)
+static void of_unittest_overlay_0(struct kunit *test)
 {
 	/* device should enable */
-	if (of_unittest_apply_overlay_check(0, 0, 0, 1, PDEV_OVERLAY))
-		return;
-
-	unittest(1, "overlay test %d passed\n", 0);
+	of_unittest_apply_overlay_check(test, 0, 0, 0, 1, PDEV_OVERLAY);
 }
 
 /* test deactivation of device */
-static void __init of_unittest_overlay_1(void)
+static void of_unittest_overlay_1(struct kunit *test)
 {
 	/* device should disable */
-	if (of_unittest_apply_overlay_check(1, 1, 1, 0, PDEV_OVERLAY))
-		return;
-
-	unittest(1, "overlay test %d passed\n", 1);
+	of_unittest_apply_overlay_check(test, 1, 1, 1, 0, PDEV_OVERLAY);
 }
 
 /* test activation of device */
-static void __init of_unittest_overlay_2(void)
+static void of_unittest_overlay_2(struct kunit *test)
 {
 	/* device should enable */
-	if (of_unittest_apply_overlay_check(2, 2, 0, 1, PDEV_OVERLAY))
-		return;
-
-	unittest(1, "overlay test %d passed\n", 2);
+	of_unittest_apply_overlay_check(test, 2, 2, 0, 1, PDEV_OVERLAY);
 }
 
 /* test deactivation of device */
-static void __init of_unittest_overlay_3(void)
+static void of_unittest_overlay_3(struct kunit *test)
 {
 	/* device should disable */
-	if (of_unittest_apply_overlay_check(3, 3, 1, 0, PDEV_OVERLAY))
-		return;
-
-	unittest(1, "overlay test %d passed\n", 3);
+	of_unittest_apply_overlay_check(test, 3, 3, 1, 0, PDEV_OVERLAY);
 }
 
 /* test activation of a full device node */
-static void __init of_unittest_overlay_4(void)
+static void of_unittest_overlay_4(struct kunit *test)
 {
 	/* device should disable */
-	if (of_unittest_apply_overlay_check(4, 4, 0, 1, PDEV_OVERLAY))
-		return;
-
-	unittest(1, "overlay test %d passed\n", 4);
+	of_unittest_apply_overlay_check(test, 4, 4, 0, 1, PDEV_OVERLAY);
 }
 
 /* test overlay apply/revert sequence */
-static void __init of_unittest_overlay_5(void)
+static void of_unittest_overlay_5(struct kunit *test)
 {
 	/* device should disable */
-	if (of_unittest_apply_revert_overlay_check(5, 5, 0, 1, PDEV_OVERLAY))
-		return;
-
-	unittest(1, "overlay test %d passed\n", 5);
+	of_unittest_apply_revert_overlay_check(test, 5, 5, 0, 1, PDEV_OVERLAY);
 }
 
 /* test overlay application in sequence */
-static void __init of_unittest_overlay_6(void)
+static void of_unittest_overlay_6(struct kunit *test)
 {
 	int i, ov_id[2], ovcs_id;
 	int overlay_nr = 6, unittest_nr = 6;
@@ -1645,74 +1719,67 @@ static void __init of_unittest_overlay_6(void)
 
 	/* unittest device must be in before state */
 	for (i = 0; i < 2; i++) {
-		if (of_unittest_device_exists(unittest_nr + i, PDEV_OVERLAY)
-				!= before) {
-			unittest(0, "%s with device @\"%s\" %s\n",
-					overlay_name_from_nr(overlay_nr + i),
-					unittest_path(unittest_nr + i,
-						PDEV_OVERLAY),
-					!before ? "enabled" : "disabled");
-			return;
-		}
+		KUNIT_ASSERT_EQ_MSG(test,
+				    of_unittest_device_exists(unittest_nr + i,
+							      PDEV_OVERLAY),
+				    before,
+				    "%s with device @\"%s\" %s\n",
+				    overlay_name_from_nr(overlay_nr + i),
+				    unittest_path(unittest_nr + i,
+						  PDEV_OVERLAY),
+				    !before ? "enabled" : "disabled");
 	}
 
 	/* apply the overlays */
 	for (i = 0; i < 2; i++) {
-
 		overlay_name = overlay_name_from_nr(overlay_nr + i);
 
-		if (!overlay_data_apply(overlay_name, &ovcs_id)) {
-			unittest(0, "could not apply overlay \"%s\"\n",
-					overlay_name);
-			return;
-		}
+		KUNIT_ASSERT_TRUE_MSG(
+			test, overlay_data_apply(overlay_name, &ovcs_id),
+			"could not apply overlay \"%s\"\n", overlay_name);
 		ov_id[i] = ovcs_id;
 		of_unittest_track_overlay(ov_id[i]);
 	}
 
 	for (i = 0; i < 2; i++) {
 		/* unittest device must be in after state */
-		if (of_unittest_device_exists(unittest_nr + i, PDEV_OVERLAY)
-				!= after) {
-			unittest(0, "overlay @\"%s\" failed @\"%s\" %s\n",
-					overlay_name_from_nr(overlay_nr + i),
-					unittest_path(unittest_nr + i,
-						PDEV_OVERLAY),
-					!after ? "enabled" : "disabled");
-			return;
-		}
+		KUNIT_ASSERT_EQ_MSG(test,
+				    of_unittest_device_exists(unittest_nr + i,
+							      PDEV_OVERLAY),
+				    after,
+				    "overlay @\"%s\" failed @\"%s\" %s\n",
+				    overlay_name_from_nr(overlay_nr + i),
+				    unittest_path(unittest_nr + i,
+						  PDEV_OVERLAY),
+				    !after ? "enabled" : "disabled");
 	}
 
 	for (i = 1; i >= 0; i--) {
 		ovcs_id = ov_id[i];
-		if (of_overlay_remove(&ovcs_id)) {
-			unittest(0, "%s failed destroy @\"%s\"\n",
-					overlay_name_from_nr(overlay_nr + i),
-					unittest_path(unittest_nr + i,
-						PDEV_OVERLAY));
-			return;
-		}
+		KUNIT_ASSERT_FALSE_MSG(
+			test, of_overlay_remove(&ovcs_id),
+			"%s failed destroy @\"%s\"\n",
+			overlay_name_from_nr(overlay_nr + i),
+			unittest_path(unittest_nr + i, PDEV_OVERLAY));
 		of_unittest_untrack_overlay(ov_id[i]);
 	}
 
 	for (i = 0; i < 2; i++) {
 		/* unittest device must be again in before state */
-		if (of_unittest_device_exists(unittest_nr + i, PDEV_OVERLAY)
-				!= before) {
-			unittest(0, "%s with device @\"%s\" %s\n",
-					overlay_name_from_nr(overlay_nr + i),
-					unittest_path(unittest_nr + i,
-						PDEV_OVERLAY),
-					!before ? "enabled" : "disabled");
-			return;
-		}
+		KUNIT_ASSERT_EQ_MSG(test,
+				    of_unittest_device_exists(unittest_nr + i,
+							      PDEV_OVERLAY),
+				    before,
+				    "%s with device @\"%s\" %s\n",
+				    overlay_name_from_nr(overlay_nr + i),
+				    unittest_path(unittest_nr + i,
+						  PDEV_OVERLAY),
+				    !before ? "enabled" : "disabled");
 	}
-
-	unittest(1, "overlay test %d passed\n", 6);
 }
 
 /* test overlay application in sequence */
-static void __init of_unittest_overlay_8(void)
+static void of_unittest_overlay_8(struct kunit *test)
 {
 	int i, ov_id[2], ovcs_id;
 	int overlay_nr = 8, unittest_nr = 8;
@@ -1722,76 +1789,64 @@ static void __init of_unittest_overlay_8(void)
 
 	/* apply the overlays */
 	for (i = 0; i < 2; i++) {
-
 		overlay_name = overlay_name_from_nr(overlay_nr + i);
 
-		if (!overlay_data_apply(overlay_name, &ovcs_id)) {
-			unittest(0, "could not apply overlay \"%s\"\n",
-					overlay_name);
-			return;
-		}
+		KUNIT_ASSERT_TRUE_MSG(
+			test, overlay_data_apply(overlay_name, &ovcs_id),
+			"could not apply overlay \"%s\"\n", overlay_name);
 		ov_id[i] = ovcs_id;
 		of_unittest_track_overlay(ov_id[i]);
 	}
 
 	/* now try to remove first overlay (it should fail) */
 	ovcs_id = ov_id[0];
-	if (!of_overlay_remove(&ovcs_id)) {
-		unittest(0, "%s was destroyed @\"%s\"\n",
-				overlay_name_from_nr(overlay_nr + 0),
-				unittest_path(unittest_nr,
-					PDEV_OVERLAY));
-		return;
-	}
+	KUNIT_ASSERT_TRUE_MSG(
+		test, of_overlay_remove(&ovcs_id),
+		"%s was destroyed @\"%s\"\n",
+		overlay_name_from_nr(overlay_nr + 0),
+		unittest_path(unittest_nr, PDEV_OVERLAY));
 
 	/* removing them in order should work */
 	for (i = 1; i >= 0; i--) {
 		ovcs_id = ov_id[i];
-		if (of_overlay_remove(&ovcs_id)) {
-			unittest(0, "%s not destroyed @\"%s\"\n",
-					overlay_name_from_nr(overlay_nr + i),
-					unittest_path(unittest_nr,
-						PDEV_OVERLAY));
-			return;
-		}
+		KUNIT_ASSERT_FALSE_MSG(
+			test, of_overlay_remove(&ovcs_id),
+			"%s not destroyed @\"%s\"\n",
+			overlay_name_from_nr(overlay_nr + i),
+			unittest_path(unittest_nr, PDEV_OVERLAY));
 		of_unittest_untrack_overlay(ov_id[i]);
 	}
-
-	unittest(1, "overlay test %d passed\n", 8);
 }
 
 /* test insertion of a bus with parent devices */
-static void __init of_unittest_overlay_10(void)
+static void of_unittest_overlay_10(struct kunit *test)
 {
-	int ret;
 	char *child_path;
 
 	/* device should disable */
-	ret = of_unittest_apply_overlay_check(10, 10, 0, 1, PDEV_OVERLAY);
-	if (unittest(ret == 0,
-			"overlay test %d failed; overlay application\n", 10))
-		return;
+	KUNIT_ASSERT_EQ_MSG(
+		test,
+		of_unittest_apply_overlay_check(
+				test, 10, 10, 0, 1, PDEV_OVERLAY),
+		0,
+		"overlay test %d failed; overlay application\n", 10);
 
 	child_path = kasprintf(GFP_KERNEL, "%s/test-unittest101",
 			unittest_path(10, PDEV_OVERLAY));
-	if (unittest(child_path, "overlay test %d failed; kasprintf\n", 10))
-		return;
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, child_path);
 
-	ret = of_path_device_type_exists(child_path, PDEV_OVERLAY);
+	KUNIT_EXPECT_TRUE_MSG(
+		test, of_path_device_type_exists(child_path, PDEV_OVERLAY),
+		"overlay test %d failed; no child device\n", 10);
 	kfree(child_path);
-
-	unittest(ret, "overlay test %d failed; no child device\n", 10);
 }
 
 /* test insertion of a bus with parent devices (and revert) */
-static void __init of_unittest_overlay_11(void)
+static void of_unittest_overlay_11(struct kunit *test)
 {
-	int ret;
-
 	/* device should disable */
-	ret = of_unittest_apply_revert_overlay_check(11, 11, 0, 1,
-			PDEV_OVERLAY);
-	unittest(ret == 0, "overlay test %d failed; overlay apply\n", 11);
+	KUNIT_EXPECT_FALSE(test, of_unittest_apply_revert_overlay_check(
+		test, 11, 11, 0, 1, PDEV_OVERLAY));
 }
 
 #if IS_BUILTIN(CONFIG_I2C) && IS_ENABLED(CONFIG_OF_OVERLAY)
@@ -2013,25 +2068,18 @@ static struct i2c_driver unittest_i2c_mux_driver = {
 
 #endif
 
-static int of_unittest_overlay_i2c_init(void)
+static int of_unittest_overlay_i2c_init(struct kunit *test)
 {
-	int ret;
-
-	ret = i2c_add_driver(&unittest_i2c_dev_driver);
-	if (unittest(ret == 0,
-			"could not register unittest i2c device driver\n"))
-		return ret;
+	KUNIT_ASSERT_EQ_MSG(test, i2c_add_driver(&unittest_i2c_dev_driver), 0,
+			    "could not register unittest i2c device driver\n");
 
-	ret = platform_driver_register(&unittest_i2c_bus_driver);
-	if (unittest(ret == 0,
-			"could not register unittest i2c bus driver\n"))
-		return ret;
+	KUNIT_ASSERT_EQ_MSG(
+		test, platform_driver_register(&unittest_i2c_bus_driver), 0,
+		"could not register unittest i2c bus driver\n");
 
 #if IS_BUILTIN(CONFIG_I2C_MUX)
-	ret = i2c_add_driver(&unittest_i2c_mux_driver);
-	if (unittest(ret == 0,
-			"could not register unittest i2c mux driver\n"))
-		return ret;
+	KUNIT_ASSERT_EQ_MSG(test, i2c_add_driver(&unittest_i2c_mux_driver), 0,
+			    "could not register unittest i2c mux driver\n");
 #endif
 
 	return 0;
@@ -2046,101 +2094,85 @@ static void of_unittest_overlay_i2c_cleanup(void)
 	i2c_del_driver(&unittest_i2c_dev_driver);
 }
 
-static void __init of_unittest_overlay_i2c_12(void)
+static void of_unittest_overlay_i2c_12(struct kunit *test)
 {
 	/* device should enable */
-	if (of_unittest_apply_overlay_check(12, 12, 0, 1, I2C_OVERLAY))
-		return;
-
-	unittest(1, "overlay test %d passed\n", 12);
+	of_unittest_apply_overlay_check(test, 12, 12, 0, 1, I2C_OVERLAY);
 }
 
 /* test deactivation of device */
-static void __init of_unittest_overlay_i2c_13(void)
+static void of_unittest_overlay_i2c_13(struct kunit *test)
 {
 	/* device should disable */
-	if (of_unittest_apply_overlay_check(13, 13, 1, 0, I2C_OVERLAY))
-		return;
-
-	unittest(1, "overlay test %d passed\n", 13);
+	of_unittest_apply_overlay_check(test, 13, 13, 1, 0, I2C_OVERLAY);
 }
 
 /* just check for i2c mux existence */
-static void of_unittest_overlay_i2c_14(void)
+static void of_unittest_overlay_i2c_14(struct kunit *test)
 {
+	KUNIT_SUCCEED(test);
 }
 
-static void __init of_unittest_overlay_i2c_15(void)
+static void of_unittest_overlay_i2c_15(struct kunit *test)
 {
 	/* device should enable */
-	if (of_unittest_apply_overlay_check(15, 15, 0, 1, I2C_OVERLAY))
-		return;
-
-	unittest(1, "overlay test %d passed\n", 15);
+	of_unittest_apply_overlay_check(test, 15, 15, 0, 1, I2C_OVERLAY);
 }
 
 #else
 
-static inline void of_unittest_overlay_i2c_14(void) { }
-static inline void of_unittest_overlay_i2c_15(void) { }
+static inline void of_unittest_overlay_i2c_14(struct kunit *test) { }
+static inline void of_unittest_overlay_i2c_15(struct kunit *test) { }
 
 #endif
 
-static void __init of_unittest_overlay(void)
+static void of_unittest_overlay(struct kunit *test)
 {
 	struct device_node *bus_np = NULL;
 
-	if (platform_driver_register(&unittest_driver)) {
-		unittest(0, "could not register unittest driver\n");
-		goto out;
-	}
+	KUNIT_ASSERT_FALSE_MSG(test, platform_driver_register(&unittest_driver),
+			       "could not register unittest driver\n");
 
 	bus_np = of_find_node_by_path(bus_path);
-	if (bus_np == NULL) {
-		unittest(0, "could not find bus_path \"%s\"\n", bus_path);
-		goto out;
-	}
+	KUNIT_ASSERT_NOT_ERR_OR_NULL_MSG(
+		test, bus_np, "could not find bus_path \"%s\"\n", bus_path);
 
-	if (of_platform_default_populate(bus_np, NULL, NULL)) {
-		unittest(0, "could not populate bus @ \"%s\"\n", bus_path);
-		goto out;
-	}
-
-	if (!of_unittest_device_exists(100, PDEV_OVERLAY)) {
-		unittest(0, "could not find unittest0 @ \"%s\"\n",
-				unittest_path(100, PDEV_OVERLAY));
-		goto out;
-	}
+	KUNIT_ASSERT_FALSE_MSG(
+		test, of_platform_default_populate(bus_np, NULL, NULL),
+		"could not populate bus @ \"%s\"\n", bus_path);
 
-	if (of_unittest_device_exists(101, PDEV_OVERLAY)) {
-		unittest(0, "unittest1 @ \"%s\" should not exist\n",
-				unittest_path(101, PDEV_OVERLAY));
-		goto out;
-	}
+	KUNIT_ASSERT_TRUE_MSG(
+		test, of_unittest_device_exists(100, PDEV_OVERLAY),
+		"could not find unittest0 @ \"%s\"\n",
+		unittest_path(100, PDEV_OVERLAY));
 
-	unittest(1, "basic infrastructure of overlays passed");
+	KUNIT_ASSERT_FALSE_MSG(
+		test, of_unittest_device_exists(101, PDEV_OVERLAY),
+		"unittest1 @ \"%s\" should not exist\n",
+		unittest_path(101, PDEV_OVERLAY));
 
 	/* tests in sequence */
-	of_unittest_overlay_0();
-	of_unittest_overlay_1();
-	of_unittest_overlay_2();
-	of_unittest_overlay_3();
-	of_unittest_overlay_4();
-	of_unittest_overlay_5();
-	of_unittest_overlay_6();
-	of_unittest_overlay_8();
-
-	of_unittest_overlay_10();
-	of_unittest_overlay_11();
+	of_unittest_overlay_0(test);
+	of_unittest_overlay_1(test);
+	of_unittest_overlay_2(test);
+	of_unittest_overlay_3(test);
+	of_unittest_overlay_4(test);
+	of_unittest_overlay_5(test);
+	of_unittest_overlay_6(test);
+	of_unittest_overlay_8(test);
+
+	of_unittest_overlay_10(test);
+	of_unittest_overlay_11(test);
 
 #if IS_BUILTIN(CONFIG_I2C)
-	if (unittest(of_unittest_overlay_i2c_init() == 0, "i2c init failed\n"))
-		goto out;
+	KUNIT_ASSERT_EQ_MSG(test, of_unittest_overlay_i2c_init(test), 0,
+			    "i2c init failed\n");
 
-	of_unittest_overlay_i2c_12();
-	of_unittest_overlay_i2c_13();
-	of_unittest_overlay_i2c_14();
-	of_unittest_overlay_i2c_15();
+	of_unittest_overlay_i2c_12(test);
+	of_unittest_overlay_i2c_13(test);
+	of_unittest_overlay_i2c_14(test);
+	of_unittest_overlay_i2c_15(test);
 
 	of_unittest_overlay_i2c_cleanup();
 #endif
@@ -2152,7 +2184,7 @@ static void __init of_unittest_overlay(void)
 }
 
 #else
-static inline void __init of_unittest_overlay(void) { }
+static inline void of_unittest_overlay(struct kunit *test) { }
 #endif
 
 #ifdef CONFIG_OF_OVERLAY
@@ -2313,7 +2345,7 @@ void __init unittest_unflatten_overlay_base(void)
  *
  * Return 0 on unexpected error.
  */
-static int __init overlay_data_apply(const char *overlay_name, int *overlay_id)
+static int overlay_data_apply(const char *overlay_name, int *overlay_id)
 {
 	struct overlay_info *info;
 	int found = 0;
@@ -2359,19 +2391,17 @@ static int __init overlay_data_apply(const char *overlay_name, int *overlay_id)
  * The first part of the function is _not_ normal overlay usage; it is
  * finishing splicing the base overlay device tree into the live tree.
  */
-static __init void of_unittest_overlay_high_level(void)
+static void of_unittest_overlay_high_level(struct kunit *test)
 {
 	struct device_node *last_sibling;
 	struct device_node *np;
 	struct device_node *of_symbols;
-	struct device_node *overlay_base_symbols;
+	struct device_node *overlay_base_symbols = NULL;
 	struct device_node **pprev;
 	struct property *prop;
 
-	if (!overlay_base_root) {
-		unittest(0, "overlay_base_root not initialized\n");
-		return;
-	}
+	KUNIT_ASSERT_TRUE_MSG(test, overlay_base_root,
+			      "overlay_base_root not initialized\n");
 
 	/*
 	 * Could not fixup phandles in unittest_unflatten_overlay_base()
@@ -2418,11 +2448,9 @@ static __init void of_unittest_overlay_high_level(void)
 	for_each_child_of_node(overlay_base_root, np) {
 		struct device_node *base_child;
 		for_each_child_of_node(of_root, base_child) {
-			if (!strcmp(np->full_name, base_child->full_name)) {
-				unittest(0, "illegal node name in overlay_base %pOFn",
-					 np);
-				return;
-			}
+			KUNIT_ASSERT_STRNEQ_MSG(
+				test, np->full_name, base_child->full_name,
+				"illegal node name in overlay_base %pOFn", np);
 		}
 	}
 
@@ -2456,21 +2484,24 @@ static __init void of_unittest_overlay_high_level(void)
 
 			new_prop = __of_prop_dup(prop, GFP_KERNEL);
 			if (!new_prop) {
-				unittest(0, "__of_prop_dup() of '%s' from overlay_base node __symbols__",
-					 prop->name);
+				KUNIT_FAIL(test,
+					   "__of_prop_dup() of '%s' from overlay_base node __symbols__",
+					   prop->name);
 				goto err_unlock;
 			}
 			if (__of_add_property(of_symbols, new_prop)) {
 				/* "name" auto-generated by unflatten */
 				if (!strcmp(new_prop->name, "name"))
 					continue;
-				unittest(0, "duplicate property '%s' in overlay_base node __symbols__",
-					 prop->name);
+				KUNIT_FAIL(test,
+					   "duplicate property '%s' in overlay_base node __symbols__",
+					   prop->name);
 				goto err_unlock;
 			}
 			if (__of_add_property_sysfs(of_symbols, new_prop)) {
-				unittest(0, "unable to add property '%s' in overlay_base node __symbols__ to sysfs",
-					 prop->name);
+				KUNIT_FAIL(test,
+					   "unable to add property '%s' in overlay_base node __symbols__ to sysfs",
+					   prop->name);
 				goto err_unlock;
 			}
 		}
@@ -2481,20 +2512,24 @@ static __init void of_unittest_overlay_high_level(void)
 
 	/* now do the normal overlay usage test */
 
-	unittest(overlay_data_apply("overlay", NULL),
-		 "Adding overlay 'overlay' failed\n");
+	KUNIT_EXPECT_TRUE_MSG(test, overlay_data_apply("overlay", NULL),
+			      "Adding overlay 'overlay' failed\n");
 
-	unittest(overlay_data_apply("overlay_bad_add_dup_node", NULL),
-		 "Adding overlay 'overlay_bad_add_dup_node' failed\n");
+	KUNIT_EXPECT_TRUE_MSG(
+		test, overlay_data_apply("overlay_bad_add_dup_node", NULL),
+		"Adding overlay 'overlay_bad_add_dup_node' failed\n");
 
-	unittest(overlay_data_apply("overlay_bad_add_dup_prop", NULL),
-		 "Adding overlay 'overlay_bad_add_dup_prop' failed\n");
+	KUNIT_EXPECT_TRUE_MSG(
+		test, overlay_data_apply("overlay_bad_add_dup_prop", NULL),
+		"Adding overlay 'overlay_bad_add_dup_prop' failed\n");
 
-	unittest(overlay_data_apply("overlay_bad_phandle", NULL),
-		 "Adding overlay 'overlay_bad_phandle' failed\n");
+	KUNIT_EXPECT_TRUE_MSG(
+		test, overlay_data_apply("overlay_bad_phandle", NULL),
+		"Adding overlay 'overlay_bad_phandle' failed\n");
 
-	unittest(overlay_data_apply("overlay_bad_symbol", NULL),
-		 "Adding overlay 'overlay_bad_symbol' failed\n");
+	KUNIT_EXPECT_TRUE_MSG(
+		test, overlay_data_apply("overlay_bad_symbol", NULL),
+		"Adding overlay 'overlay_bad_symbol' failed\n");
 
 	return;
 
@@ -2504,57 +2539,52 @@ static __init void of_unittest_overlay_high_level(void)
 
 #else
 
-static inline __init void of_unittest_overlay_high_level(void) {}
+static inline void of_unittest_overlay_high_level(struct kunit *test) {}
 
 #endif
 
-static int __init of_unittest(void)
+static int of_test_init(struct kunit *test)
 {
-	struct device_node *np;
-	int res;
-
 	/* adding data for unittest */
-	res = unittest_data_add();
-	if (res)
-		return res;
+	KUNIT_ASSERT_EQ(test, 0, unittest_data_add());
+
 	if (!of_aliases)
 		of_aliases = of_find_node_by_path("/aliases");
 
-	np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
-	if (!np) {
-		pr_info("No testcase data in device tree; not running tests\n");
-		return 0;
-	}
-	of_node_put(np);
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, of_find_node_by_path(
+		"/testcase-data/phandle-tests/consumer-a"));
 
 	if (IS_ENABLED(CONFIG_UML))
 		unflatten_device_tree();
 
-	pr_info("start of unittest - you will see error messages\n");
-	of_unittest_check_tree_linkage();
-	of_unittest_check_phandles();
-	of_unittest_find_node_by_name();
-	of_unittest_dynamic();
-	of_unittest_parse_phandle_with_args();
-	of_unittest_parse_phandle_with_args_map();
-	of_unittest_printf();
-	of_unittest_property_string();
-	of_unittest_property_copy();
-	of_unittest_changeset();
-	of_unittest_parse_interrupts();
-	of_unittest_parse_interrupts_extended();
-	of_unittest_match_node();
-	of_unittest_platform_populate();
-	of_unittest_overlay();
+	return 0;
+}
 
+static struct kunit_case of_test_cases[] = {
+	KUNIT_CASE(of_unittest_check_tree_linkage),
+	KUNIT_CASE(of_unittest_check_phandles),
+	KUNIT_CASE(of_unittest_find_node_by_name),
+	KUNIT_CASE(of_unittest_dynamic),
+	KUNIT_CASE(of_unittest_parse_phandle_with_args),
+	KUNIT_CASE(of_unittest_parse_phandle_with_args_map),
+	KUNIT_CASE(of_unittest_printf),
+	KUNIT_CASE(of_unittest_property_string),
+	KUNIT_CASE(of_unittest_property_copy),
+	KUNIT_CASE(of_unittest_changeset),
+	KUNIT_CASE(of_unittest_parse_interrupts),
+	KUNIT_CASE(of_unittest_parse_interrupts_extended),
+	KUNIT_CASE(of_unittest_match_node),
+	KUNIT_CASE(of_unittest_platform_populate),
+	KUNIT_CASE(of_unittest_overlay),
 	/* Double check linkage after removing testcase data */
-	of_unittest_check_tree_linkage();
-
-	of_unittest_overlay_high_level();
-
-	pr_info("end of unittest - %i passed, %i failed\n",
-		unittest_results.passed, unittest_results.failed);
+	KUNIT_CASE(of_unittest_check_tree_linkage),
+	KUNIT_CASE(of_unittest_overlay_high_level),
+	{},
+};
 
-	return 0;
-}
-late_initcall(of_unittest);
+static struct kunit_module of_test_module = {
+	.name = "of-test",
+	.init = of_test_init,
+	.test_cases = of_test_cases,
+};
+module_test(of_test_module);
-- 
2.21.0.rc0.258.g878e2cd30e-goog
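
The conversion applied in the hunks above is mechanical: each
unittest(cond, fmt, ...) check becomes the equivalent KUNIT_EXPECT_*_MSG()
expectation (or KUNIT_FAIL() where the old code passed a literal 0), and the
late_initcall()-driven runner becomes a kunit_case table registered with
module_test(). A minimal sketch of the resulting shape follows; the case and
module names are invented for illustration and are not part of the patch.

#include <linux/of.h>
#include <kunit/test.h>

/* Hypothetical case: shows the unittest() -> KUNIT_EXPECT_*_MSG() style. */
static void example_node_present(struct kunit *test)
{
	struct device_node *np = of_find_node_by_path("/example-node");

	/* old style: unittest(np != NULL, "example node missing\n"); */
	KUNIT_EXPECT_TRUE_MSG(test, np != NULL, "example node missing\n");
	of_node_put(np);
}

/* Hypothetical suite: replaces the old late_initcall() runner. */
static struct kunit_case example_test_cases[] = {
	KUNIT_CASE(example_node_present),
	{},
};

static struct kunit_module example_test_module = {
	.name = "example-test",
	.test_cases = example_test_cases,
};
module_test(example_test_module);

As the hunks show, the original failure message strings are carried over
unchanged into the _MSG variants, so the diagnostics in the test log stay
the same.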



^ permalink raw reply related	[flat|nested] 316+ messages in thread

* [RFC v4 16/17] of: unittest: split out a couple of test cases from unittest
  2019-02-14 21:37 ` brendanhiggins
@ 2019-02-14 21:37   ` brendanhiggins
  -1 siblings, 0 replies; 316+ messages in thread
From: Brendan Higgins @ 2019-02-14 21:37 UTC (permalink / raw)
  To: keescook, mcgrof, shuah, robh, kieran.bingham, frowand.list
  Cc: gregkh, joel, mpe, joe, brakmo, rostedt, Tim.Bird, khilman,
	julia.lawall, linux-kselftest, kunit-dev, linux-kernel, jdike,
	richard, linux-um, daniel, dri-devel, dan.j.williams,
	linux-nvdimm, knut.omang, devicetree, pmladek, Alexander.Levin,
	amir73il, dan.carpenter, wfg, Brendan Higgins

Split out a couple of test cases that exercise features in base.c from
the unittest.c monolith. The intention is that we will eventually split
out all test cases and group them together based on which portion of the
device tree they test.
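
Concretely, the diff below moves the DTB fixture (unittest_data_add() and
its helpers) into test-common.c, and each split-out file includes
test-common.h, attaches the test data from its own init hook, and registers
its own kunit_module. A sketch of the skeleton such a new file would need;
the file, suite, and case names here are placeholders rather than anything
this patch adds.

// SPDX-License-Identifier: GPL-2.0
/* Hypothetical drivers/of/example-test.c: skeleton of a split-out suite. */
#include <linux/of.h>

#include <kunit/test.h>

#include "test-common.h"

static void of_unittest_example(struct kunit *test)
{
	struct device_node *np;

	/* runs against the live tree populated by unittest_data_add() */
	np = of_find_node_by_path("/testcase-data");
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	of_node_put(np);
}

static int of_example_test_init(struct kunit *test)
{
	/* shared fixture: attach the testcase DTB to the live tree */
	KUNIT_ASSERT_EQ(test, 0, unittest_data_add());
	return 0;
}

static struct kunit_case of_example_test_cases[] = {
	KUNIT_CASE(of_unittest_example),
	{},
};

static struct kunit_module of_example_test_module = {
	.name = "of-example-test",
	.init = of_example_test_init,
	.test_cases = of_example_test_cases,
};
module_test(of_example_test_module);

The only other change such a split needs is adding the new object to the
CONFIG_OF_UNITTEST line in drivers/of/Makefile, as the hunk below does for
base-test.o and test-common.o.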

Signed-off-by: Brendan Higgins <brendanhiggins@google.com>
---
 drivers/of/Makefile      |   2 +-
 drivers/of/base-test.c   | 214 ++++++++++++++++++++++++
 drivers/of/test-common.c | 175 ++++++++++++++++++++
 drivers/of/test-common.h |  16 ++
 drivers/of/unittest.c    | 345 +--------------------------------------
 5 files changed, 407 insertions(+), 345 deletions(-)
 create mode 100644 drivers/of/base-test.c
 create mode 100644 drivers/of/test-common.c
 create mode 100644 drivers/of/test-common.h

diff --git a/drivers/of/Makefile b/drivers/of/Makefile
index 663a4af0cccd5..4a4bd527d586c 100644
--- a/drivers/of/Makefile
+++ b/drivers/of/Makefile
@@ -8,7 +8,7 @@ obj-$(CONFIG_OF_PROMTREE) += pdt.o
 obj-$(CONFIG_OF_ADDRESS)  += address.o
 obj-$(CONFIG_OF_IRQ)    += irq.o
 obj-$(CONFIG_OF_NET)	+= of_net.o
-obj-$(CONFIG_OF_UNITTEST) += unittest.o
+obj-$(CONFIG_OF_UNITTEST) += unittest.o base-test.o test-common.o
 obj-$(CONFIG_OF_MDIO)	+= of_mdio.o
 obj-$(CONFIG_OF_RESERVED_MEM) += of_reserved_mem.o
 obj-$(CONFIG_OF_RESOLVE)  += resolver.o
diff --git a/drivers/of/base-test.c b/drivers/of/base-test.c
new file mode 100644
index 0000000000000..3d3f4f1b74800
--- /dev/null
+++ b/drivers/of/base-test.c
@@ -0,0 +1,214 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Unit tests for functions defined in base.c.
+ */
+#include <linux/of.h>
+
+#include <kunit/test.h>
+
+#include "test-common.h"
+
+static void of_unittest_find_node_by_name(struct kunit *test)
+{
+	struct device_node *np;
+	const char *options, *name;
+
+	np = of_find_node_by_path("/testcase-data");
+	name = kasprintf(GFP_KERNEL, "%pOF", np);
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+	KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
+			       "find /testcase-data failed\n");
+	of_node_put(np);
+	kfree(name);
+
+	/* Test if trailing '/' works */
+	KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("/testcase-data/"), NULL,
+			    "trailing '/' on /testcase-data/ should fail\n");
+
+	np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+	name = kasprintf(GFP_KERNEL, "%pOF", np);
+	KUNIT_EXPECT_STREQ_MSG(
+		test, "/testcase-data/phandle-tests/consumer-a", name,
+		"find /testcase-data/phandle-tests/consumer-a failed\n");
+	of_node_put(np);
+	kfree(name);
+
+	np = of_find_node_by_path("testcase-alias");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+	name = kasprintf(GFP_KERNEL, "%pOF", np);
+	KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
+			       "find testcase-alias failed\n");
+	of_node_put(np);
+	kfree(name);
+
+	/* Test if trailing '/' works on aliases */
+	KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("testcase-alias/"), NULL,
+			    "trailing '/' on testcase-alias/ should fail\n");
+
+	np = of_find_node_by_path("testcase-alias/phandle-tests/consumer-a");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+	name = kasprintf(GFP_KERNEL, "%pOF", np);
+	KUNIT_EXPECT_STREQ_MSG(
+		test, "/testcase-data/phandle-tests/consumer-a", name,
+		"find testcase-alias/phandle-tests/consumer-a failed\n");
+	of_node_put(np);
+	kfree(name);
+
+	KUNIT_EXPECT_EQ_MSG(
+		test,
+		np = of_find_node_by_path("/testcase-data/missing-path"), NULL,
+		"non-existent path returned node %pOF\n", np);
+	of_node_put(np);
+
+	KUNIT_EXPECT_EQ_MSG(
+		test, np = of_find_node_by_path("missing-alias"), NULL,
+		"non-existent alias returned node %pOF\n", np);
+	of_node_put(np);
+
+	KUNIT_EXPECT_EQ_MSG(
+		test,
+		np = of_find_node_by_path("testcase-alias/missing-path"), NULL,
+		"non-existent alias with relative path returned node %pOF\n",
+		np);
+	of_node_put(np);
+
+	np = of_find_node_opts_by_path("/testcase-data:testoption", &options);
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+	KUNIT_EXPECT_STREQ_MSG(test, "testoption", options,
+			       "option path test failed\n");
+	of_node_put(np);
+
+	np = of_find_node_opts_by_path("/testcase-data:test/option", &options);
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+	KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
+			       "option path test, subcase #1 failed\n");
+	of_node_put(np);
+
+	np = of_find_node_opts_by_path("/testcase-data/testcase-device1:test/option", &options);
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+	KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
+			       "option path test, subcase #2 failed\n");
+	of_node_put(np);
+
+	np = of_find_node_opts_by_path("/testcase-data:testoption", NULL);
+	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(test, np,
+					 "NULL option path test failed\n");
+	of_node_put(np);
+
+	np = of_find_node_opts_by_path("testcase-alias:testaliasoption",
+				       &options);
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+	KUNIT_EXPECT_STREQ_MSG(test, "testaliasoption", options,
+			       "option alias path test failed\n");
+	of_node_put(np);
+
+	np = of_find_node_opts_by_path("testcase-alias:test/alias/option",
+				       &options);
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+	KUNIT_EXPECT_STREQ_MSG(test, "test/alias/option", options,
+			       "option alias path test, subcase #1 failed\n");
+	of_node_put(np);
+
+	np = of_find_node_opts_by_path("testcase-alias:testaliasoption", NULL);
+	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(
+			test, np, "NULL option alias path test failed\n");
+	of_node_put(np);
+
+	options = "testoption";
+	np = of_find_node_opts_by_path("testcase-alias", &options);
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+	KUNIT_EXPECT_EQ_MSG(test, options, NULL,
+			    "option clearing test failed\n");
+	of_node_put(np);
+
+	options = "testoption";
+	np = of_find_node_opts_by_path("/", &options);
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+	KUNIT_EXPECT_EQ_MSG(test, options, NULL,
+			    "option clearing root node test failed\n");
+	of_node_put(np);
+}
+
+static void of_unittest_dynamic(struct kunit *test)
+{
+	struct device_node *np;
+	struct property *prop;
+
+	np = of_find_node_by_path("/testcase-data");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+
+	/* Array of 4 properties for the purpose of testing */
+	prop = kcalloc(4, sizeof(*prop), GFP_KERNEL);
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, prop);
+
+	/* Add a new property - should pass*/
+	prop->name = "new-property";
+	prop->value = "new-property-data";
+	prop->length = strlen(prop->value) + 1;
+	KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop), 0,
+			    "Adding a new property failed\n");
+
+	/* Try to add an existing property - should fail */
+	prop++;
+	prop->name = "new-property";
+	prop->value = "new-property-data-should-fail";
+	prop->length = strlen(prop->value) + 1;
+	KUNIT_EXPECT_NE_MSG(test, of_add_property(np, prop), 0,
+			    "Adding an existing property should have failed\n");
+
+	/* Try to modify an existing property - should pass */
+	prop->value = "modify-property-data-should-pass";
+	prop->length = strlen(prop->value) + 1;
+	KUNIT_EXPECT_EQ_MSG(
+		test, of_update_property(np, prop), 0,
+		"Updating an existing property should have passed\n");
+
+	/* Try to modify non-existent property - should pass*/
+	prop++;
+	prop->name = "modify-property";
+	prop->value = "modify-missing-property-data-should-pass";
+	prop->length = strlen(prop->value) + 1;
+	KUNIT_EXPECT_EQ_MSG(test, of_update_property(np, prop), 0,
+			    "Updating a missing property should have passed\n");
+
+	/* Remove property - should pass */
+	KUNIT_EXPECT_EQ_MSG(test, of_remove_property(np, prop), 0,
+			    "Removing a property should have passed\n");
+
+	/* Adding very large property - should pass */
+	prop++;
+	prop->name = "large-property-PAGE_SIZEx8";
+	prop->length = PAGE_SIZE * 8;
+	prop->value = kzalloc(prop->length, GFP_KERNEL);
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, prop->value);
+	KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop), 0,
+			    "Adding a large property should have passed\n");
+}
+
+static int of_test_init(struct kunit *test)
+{
+	/* adding data for unittest */
+	KUNIT_ASSERT_EQ(test, 0, unittest_data_add());
+
+	if (!of_aliases)
+		of_aliases = of_find_node_by_path("/aliases");
+
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, of_find_node_by_path(
+			"/testcase-data/phandle-tests/consumer-a"));
+
+	return 0;
+}
+
+static struct kunit_case of_test_cases[] = {
+	KUNIT_CASE(of_unittest_find_node_by_name),
+	KUNIT_CASE(of_unittest_dynamic),
+	{},
+};
+
+static struct kunit_module of_test_module = {
+	.name = "of-base-test",
+	.init = of_test_init,
+	.test_cases = of_test_cases,
+};
+module_test(of_test_module);
diff --git a/drivers/of/test-common.c b/drivers/of/test-common.c
new file mode 100644
index 0000000000000..4c9a5f3b82f7d
--- /dev/null
+++ b/drivers/of/test-common.c
@@ -0,0 +1,175 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Common code to be used by unit tests.
+ */
+#include "test-common.h"
+
+#include <linux/of_fdt.h>
+#include <linux/slab.h>
+
+#include "of_private.h"
+
+/**
+ *	update_node_properties - adds the properties
+ *	of np into dup node (present in live tree) and
+ *	updates parent of children of np to dup.
+ *
+ *	@np:	node whose properties are being added to the live tree
+ *	@dup:	node present in live tree to be updated
+ */
+static void update_node_properties(struct device_node *np,
+					struct device_node *dup)
+{
+	struct property *prop;
+	struct property *save_next;
+	struct device_node *child;
+	int ret;
+
+	for_each_child_of_node(np, child)
+		child->parent = dup;
+
+	/*
+	 * "unittest internal error: unable to add testdata property"
+	 *
+	 *    If this message reports a property in node '/__symbols__' then
+	 *    the respective unittest overlay contains a label that has the
+	 *    same name as a label in the live devicetree.  The label will
+	 *    be in the live devicetree only if the devicetree source was
+	 *    compiled with the '-@' option.  If you encounter this error,
+	 *    please consider renaming __all__ of the labels in the unittest
+	 *    overlay dts files with an odd prefix that is unlikely to be
+	 *    used in a real devicetree.
+	 */
+
+	/*
+	 * open code for_each_property_of_node() because of_add_property()
+	 * sets prop->next to NULL
+	 */
+	for (prop = np->properties; prop != NULL; prop = save_next) {
+		save_next = prop->next;
+		ret = of_add_property(dup, prop);
+		if (ret)
+			pr_err("unittest internal error: unable to add testdata property %pOF/%s",
+			       np, prop->name);
+	}
+}
+
+/**
+ *	attach_node_and_children - attaches nodes
+ *	and its children to live tree
+ *
+ *	@np:	Node to attach to live tree
+ */
+static void attach_node_and_children(struct device_node *np)
+{
+	struct device_node *next, *dup, *child;
+	unsigned long flags;
+	const char *full_name;
+
+	full_name = kasprintf(GFP_KERNEL, "%pOF", np);
+
+	if (!strcmp(full_name, "/__local_fixups__") ||
+	    !strcmp(full_name, "/__fixups__"))
+		return;
+
+	dup = of_find_node_by_path(full_name);
+	kfree(full_name);
+	if (dup) {
+		update_node_properties(np, dup);
+		return;
+	}
+
+	child = np->child;
+	np->child = NULL;
+
+	mutex_lock(&of_mutex);
+	raw_spin_lock_irqsave(&devtree_lock, flags);
+	np->sibling = np->parent->child;
+	np->parent->child = np;
+	of_node_clear_flag(np, OF_DETACHED);
+	raw_spin_unlock_irqrestore(&devtree_lock, flags);
+
+	__of_attach_node_sysfs(np);
+	mutex_unlock(&of_mutex);
+
+	while (child) {
+		next = child->sibling;
+		attach_node_and_children(child);
+		child = next;
+	}
+}
+
+/**
+ *	unittest_data_add - Reads, copies data from
+ *	linked tree and attaches it to the live tree
+ */
+int unittest_data_add(void)
+{
+	void *unittest_data;
+	struct device_node *unittest_data_node, *np;
+	/*
+	 * __dtb_testcases_begin[] and __dtb_testcases_end[] are magically
+	 * created by cmd_dt_S_dtb in scripts/Makefile.lib
+	 */
+	extern uint8_t __dtb_testcases_begin[];
+	extern uint8_t __dtb_testcases_end[];
+	const int size = __dtb_testcases_end - __dtb_testcases_begin;
+	int rc;
+
+	if (!size) {
+		pr_warn("%s: No testcase data to attach; not running tests\n",
+			__func__);
+		return -ENODATA;
+	}
+
+	/* creating copy */
+	unittest_data = kmemdup(__dtb_testcases_begin, size, GFP_KERNEL);
+
+	if (!unittest_data) {
+		pr_warn("%s: Failed to allocate memory for unittest_data; "
+			"not running tests\n", __func__);
+		return -ENOMEM;
+	}
+	of_fdt_unflatten_tree(unittest_data, NULL, &unittest_data_node);
+	if (!unittest_data_node) {
+		pr_warn("%s: No tree to attach; not running tests\n", __func__);
+		return -ENODATA;
+	}
+
+	/*
+	 * This lock normally encloses of_resolve_phandles()
+	 */
+	of_overlay_mutex_lock();
+
+	rc = of_resolve_phandles(unittest_data_node);
+	if (rc) {
+		pr_err("%s: Failed to resolve phandles (rc=%i)\n", __func__, rc);
+		of_overlay_mutex_unlock();
+		return -EINVAL;
+	}
+
+	if (!of_root) {
+		of_root = unittest_data_node;
+		for_each_of_allnodes(np)
+			__of_attach_node_sysfs(np);
+		of_aliases = of_find_node_by_path("/aliases");
+		of_chosen = of_find_node_by_path("/chosen");
+		of_overlay_mutex_unlock();
+		return 0;
+	}
+
+	/* attach the sub-tree to live tree */
+	np = unittest_data_node->child;
+	while (np) {
+		struct device_node *next = np->sibling;
+
+		np->parent = of_root;
+		attach_node_and_children(np);
+		np = next;
+	}
+
+	of_overlay_mutex_unlock();
+
+	return 0;
+}
+
diff --git a/drivers/of/test-common.h b/drivers/of/test-common.h
new file mode 100644
index 0000000000000..a35484406bbf1
--- /dev/null
+++ b/drivers/of/test-common.h
@@ -0,0 +1,16 @@
+/* SPDX-License-Identifier: GPL-2.0+ */
+/*
+ * Common code to be used by unit tests.
+ */
+#ifndef _LINUX_OF_TEST_COMMON_H
+#define _LINUX_OF_TEST_COMMON_H
+
+#include <linux/of.h>
+
+/**
+ *	unittest_data_add - Reads, copies data from
+ *	linked tree and attaches it to the live tree
+ */
+int unittest_data_add(void);
+
+#endif /* _LINUX_OF_TEST_COMMON_H */
diff --git a/drivers/of/unittest.c b/drivers/of/unittest.c
index 96de69ccb3e63..05a2610d0be7f 100644
--- a/drivers/of/unittest.c
+++ b/drivers/of/unittest.c
@@ -29,184 +29,7 @@
 #include <kunit/test.h>
 
 #include "of_private.h"
-
-static void of_unittest_find_node_by_name(struct kunit *test)
-{
-	struct device_node *np;
-	const char *options, *name;
-
-	np = of_find_node_by_path("/testcase-data");
-	name = kasprintf(GFP_KERNEL, "%pOF", np);
-	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
-	KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
-			       "find /testcase-data failed\n");
-	of_node_put(np);
-	kfree(name);
-
-	/* Test if trailing '/' works */
-	KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("/testcase-data/"), NULL,
-			    "trailing '/' on /testcase-data/ should fail\n");
-
-	np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
-	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
-	name = kasprintf(GFP_KERNEL, "%pOF", np);
-	KUNIT_EXPECT_STREQ_MSG(
-		test, "/testcase-data/phandle-tests/consumer-a", name,
-		"find /testcase-data/phandle-tests/consumer-a failed\n");
-	of_node_put(np);
-	kfree(name);
-
-	np = of_find_node_by_path("testcase-alias");
-	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
-	name = kasprintf(GFP_KERNEL, "%pOF", np);
-	KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
-			       "find testcase-alias failed\n");
-	of_node_put(np);
-	kfree(name);
-
-	/* Test if trailing '/' works on aliases */
-	KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("testcase-alias/"), NULL,
-			    "trailing '/' on testcase-alias/ should fail\n");
-
-	np = of_find_node_by_path("testcase-alias/phandle-tests/consumer-a");
-	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
-	name = kasprintf(GFP_KERNEL, "%pOF", np);
-	KUNIT_EXPECT_STREQ_MSG(
-		test, "/testcase-data/phandle-tests/consumer-a", name,
-		"find testcase-alias/phandle-tests/consumer-a failed\n");
-	of_node_put(np);
-	kfree(name);
-
-	KUNIT_EXPECT_EQ_MSG(
-		test,
-		np = of_find_node_by_path("/testcase-data/missing-path"), NULL,
-		"non-existent path returned node %pOF\n", np);
-	of_node_put(np);
-
-	KUNIT_EXPECT_EQ_MSG(
-		test, np = of_find_node_by_path("missing-alias"), NULL,
-		"non-existent alias returned node %pOF\n", np);
-	of_node_put(np);
-
-	KUNIT_EXPECT_EQ_MSG(
-		test,
-		np = of_find_node_by_path("testcase-alias/missing-path"), NULL,
-		"non-existent alias with relative path returned node %pOF\n",
-		np);
-	of_node_put(np);
-
-	np = of_find_node_opts_by_path("/testcase-data:testoption", &options);
-	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
-	KUNIT_EXPECT_STREQ_MSG(test, "testoption", options,
-			       "option path test failed\n");
-	of_node_put(np);
-
-	np = of_find_node_opts_by_path("/testcase-data:test/option", &options);
-	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
-	KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
-			       "option path test, subcase #1 failed\n");
-	of_node_put(np);
-
-	np = of_find_node_opts_by_path("/testcase-data/testcase-device1:test/option", &options);
-	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
-	KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
-			       "option path test, subcase #2 failed\n");
-	of_node_put(np);
-
-	np = of_find_node_opts_by_path("/testcase-data:testoption", NULL);
-	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(test, np,
-					 "NULL option path test failed\n");
-	of_node_put(np);
-
-	np = of_find_node_opts_by_path("testcase-alias:testaliasoption",
-				       &options);
-	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
-	KUNIT_EXPECT_STREQ_MSG(test, "testaliasoption", options,
-			       "option alias path test failed\n");
-	of_node_put(np);
-
-	np = of_find_node_opts_by_path("testcase-alias:test/alias/option",
-				       &options);
-	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
-	KUNIT_EXPECT_STREQ_MSG(test, "test/alias/option", options,
-			       "option alias path test, subcase #1 failed\n");
-	of_node_put(np);
-
-	np = of_find_node_opts_by_path("testcase-alias:testaliasoption", NULL);
-	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(
-			test, np, "NULL option alias path test failed\n");
-	of_node_put(np);
-
-	options = "testoption";
-	np = of_find_node_opts_by_path("testcase-alias", &options);
-	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
-	KUNIT_EXPECT_EQ_MSG(test, options, NULL,
-			    "option clearing test failed\n");
-	of_node_put(np);
-
-	options = "testoption";
-	np = of_find_node_opts_by_path("/", &options);
-	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
-	KUNIT_EXPECT_EQ_MSG(test, options, NULL,
-			    "option clearing root node test failed\n");
-	of_node_put(np);
-}
-
-static void of_unittest_dynamic(struct kunit *test)
-{
-	struct device_node *np;
-	struct property *prop;
-
-	np = of_find_node_by_path("/testcase-data");
-	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
-
-	/* Array of 4 properties for the purpose of testing */
-	prop = kcalloc(4, sizeof(*prop), GFP_KERNEL);
-	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, prop);
-
-	/* Add a new property - should pass*/
-	prop->name = "new-property";
-	prop->value = "new-property-data";
-	prop->length = strlen(prop->value) + 1;
-	KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop), 0,
-			    "Adding a new property failed\n");
-
-	/* Try to add an existing property - should fail */
-	prop++;
-	prop->name = "new-property";
-	prop->value = "new-property-data-should-fail";
-	prop->length = strlen(prop->value) + 1;
-	KUNIT_EXPECT_NE_MSG(test, of_add_property(np, prop), 0,
-			    "Adding an existing property should have failed\n");
-
-	/* Try to modify an existing property - should pass */
-	prop->value = "modify-property-data-should-pass";
-	prop->length = strlen(prop->value) + 1;
-	KUNIT_EXPECT_EQ_MSG(
-		test, of_update_property(np, prop), 0,
-		"Updating an existing property should have passed\n");
-
-	/* Try to modify non-existent property - should pass*/
-	prop++;
-	prop->name = "modify-property";
-	prop->value = "modify-missing-property-data-should-pass";
-	prop->length = strlen(prop->value) + 1;
-	KUNIT_EXPECT_EQ_MSG(test, of_update_property(np, prop), 0,
-			    "Updating a missing property should have passed\n");
-
-	/* Remove property - should pass */
-	KUNIT_EXPECT_EQ_MSG(test, of_remove_property(np, prop), 0,
-			    "Removing a property should have passed\n");
-
-	/* Adding very large property - should pass */
-	prop++;
-	prop->name = "large-property-PAGE_SIZEx8";
-	prop->length = PAGE_SIZE * 8;
-	prop->value = kzalloc(prop->length, GFP_KERNEL);
-	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, prop->value);
-	KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop), 0,
-			    "Adding a large property should have passed\n");
-}
+#include "test-common.h"
 
 static int of_unittest_check_node_linkage(struct device_node *np)
 {
@@ -1177,170 +1000,6 @@ static void of_unittest_platform_populate(struct kunit *test)
 	of_node_put(np);
 }
 
-/**
- *	update_node_properties - adds the properties
- *	of np into dup node (present in live tree) and
- *	updates parent of children of np to dup.
- *
- *	@np:	node whose properties are being added to the live tree
- *	@dup:	node present in live tree to be updated
- */
-static void update_node_properties(struct device_node *np,
-					struct device_node *dup)
-{
-	struct property *prop;
-	struct property *save_next;
-	struct device_node *child;
-	int ret;
-
-	for_each_child_of_node(np, child)
-		child->parent = dup;
-
-	/*
-	 * "unittest internal error: unable to add testdata property"
-	 *
-	 *    If this message reports a property in node '/__symbols__' then
-	 *    the respective unittest overlay contains a label that has the
-	 *    same name as a label in the live devicetree.  The label will
-	 *    be in the live devicetree only if the devicetree source was
-	 *    compiled with the '-@' option.  If you encounter this error,
-	 *    please consider renaming __all__ of the labels in the unittest
-	 *    overlay dts files with an odd prefix that is unlikely to be
-	 *    used in a real devicetree.
-	 */
-
-	/*
-	 * open code for_each_property_of_node() because of_add_property()
-	 * sets prop->next to NULL
-	 */
-	for (prop = np->properties; prop != NULL; prop = save_next) {
-		save_next = prop->next;
-		ret = of_add_property(dup, prop);
-		if (ret)
-			pr_err("unittest internal error: unable to add testdata property %pOF/%s",
-			       np, prop->name);
-	}
-}
-
-/**
- *	attach_node_and_children - attaches nodes
- *	and its children to live tree
- *
- *	@np:	Node to attach to live tree
- */
-static void attach_node_and_children(struct device_node *np)
-{
-	struct device_node *next, *dup, *child;
-	unsigned long flags;
-	const char *full_name;
-
-	full_name = kasprintf(GFP_KERNEL, "%pOF", np);
-
-	if (!strcmp(full_name, "/__local_fixups__") ||
-	    !strcmp(full_name, "/__fixups__"))
-		return;
-
-	dup = of_find_node_by_path(full_name);
-	kfree(full_name);
-	if (dup) {
-		update_node_properties(np, dup);
-		return;
-	}
-
-	child = np->child;
-	np->child = NULL;
-
-	mutex_lock(&of_mutex);
-	raw_spin_lock_irqsave(&devtree_lock, flags);
-	np->sibling = np->parent->child;
-	np->parent->child = np;
-	of_node_clear_flag(np, OF_DETACHED);
-	raw_spin_unlock_irqrestore(&devtree_lock, flags);
-
-	__of_attach_node_sysfs(np);
-	mutex_unlock(&of_mutex);
-
-	while (child) {
-		next = child->sibling;
-		attach_node_and_children(child);
-		child = next;
-	}
-}
-
-/**
- *	unittest_data_add - Reads, copies data from
- *	linked tree and attaches it to the live tree
- */
-static int unittest_data_add(void)
-{
-	void *unittest_data;
-	struct device_node *unittest_data_node, *np;
-	/*
-	 * __dtb_testcases_begin[] and __dtb_testcases_end[] are magically
-	 * created by cmd_dt_S_dtb in scripts/Makefile.lib
-	 */
-	extern uint8_t __dtb_testcases_begin[];
-	extern uint8_t __dtb_testcases_end[];
-	const int size = __dtb_testcases_end - __dtb_testcases_begin;
-	int rc;
-
-	if (!size) {
-		pr_warn("%s: No testcase data to attach; not running tests\n",
-			__func__);
-		return -ENODATA;
-	}
-
-	/* creating copy */
-	unittest_data = kmemdup(__dtb_testcases_begin, size, GFP_KERNEL);
-
-	if (!unittest_data) {
-		pr_warn("%s: Failed to allocate memory for unittest_data; "
-			"not running tests\n", __func__);
-		return -ENOMEM;
-	}
-	of_fdt_unflatten_tree(unittest_data, NULL, &unittest_data_node);
-	if (!unittest_data_node) {
-		pr_warn("%s: No tree to attach; not running tests\n", __func__);
-		return -ENODATA;
-	}
-
-	/*
-	 * This lock normally encloses of_resolve_phandles()
-	 */
-	of_overlay_mutex_lock();
-
-	rc = of_resolve_phandles(unittest_data_node);
-	if (rc) {
-		pr_err("%s: Failed to resolve phandles (rc=%i)\n", __func__, rc);
-		of_overlay_mutex_unlock();
-		return -EINVAL;
-	}
-
-	if (!of_root) {
-		of_root = unittest_data_node;
-		for_each_of_allnodes(np)
-			__of_attach_node_sysfs(np);
-		of_aliases = of_find_node_by_path("/aliases");
-		of_chosen = of_find_node_by_path("/chosen");
-		of_overlay_mutex_unlock();
-		return 0;
-	}
-
-	/* attach the sub-tree to live tree */
-	np = unittest_data_node->child;
-	while (np) {
-		struct device_node *next = np->sibling;
-
-		np->parent = of_root;
-		attach_node_and_children(np);
-		np = next;
-	}
-
-	of_overlay_mutex_unlock();
-
-	return 0;
-}
-
 #ifdef CONFIG_OF_OVERLAY
 static int overlay_data_apply(const char *overlay_name, int *overlay_id);
 
@@ -2563,8 +2222,6 @@ static int of_test_init(struct kunit *test)
 static struct kunit_case of_test_cases[] = {
 	KUNIT_CASE(of_unittest_check_tree_linkage),
 	KUNIT_CASE(of_unittest_check_phandles),
-	KUNIT_CASE(of_unittest_find_node_by_name),
-	KUNIT_CASE(of_unittest_dynamic),
 	KUNIT_CASE(of_unittest_parse_phandle_with_args),
 	KUNIT_CASE(of_unittest_parse_phandle_with_args_map),
 	KUNIT_CASE(of_unittest_printf),
-- 
2.21.0.rc0.258.g878e2cd30e-goog

^ permalink raw reply related	[flat|nested] 316+ messages in thread

* [RFC v4 16/17] of: unittest: split out a couple of test cases from unittest
@ 2019-02-14 21:37   ` brendanhiggins
  0 siblings, 0 replies; 316+ messages in thread
From: Brendan Higgins @ 2019-02-14 21:37 UTC (permalink / raw)


Split out a couple of test cases that exercise features in base.c from
the unittest.c monolith. The intention is that we will eventually split
out all test cases and group them together based on which portion of the
device tree they test.

Signed-off-by: Brendan Higgins <brendanhiggins at google.com>
---
 drivers/of/Makefile      |   2 +-
 drivers/of/base-test.c   | 214 ++++++++++++++++++++++++
 drivers/of/test-common.c | 175 ++++++++++++++++++++
 drivers/of/test-common.h |  16 ++
 drivers/of/unittest.c    | 345 +--------------------------------------
 5 files changed, 407 insertions(+), 345 deletions(-)
 create mode 100644 drivers/of/base-test.c
 create mode 100644 drivers/of/test-common.c
 create mode 100644 drivers/of/test-common.h

diff --git a/drivers/of/Makefile b/drivers/of/Makefile
index 663a4af0cccd5..4a4bd527d586c 100644
--- a/drivers/of/Makefile
+++ b/drivers/of/Makefile
@@ -8,7 +8,7 @@ obj-$(CONFIG_OF_PROMTREE) += pdt.o
 obj-$(CONFIG_OF_ADDRESS)  += address.o
 obj-$(CONFIG_OF_IRQ)    += irq.o
 obj-$(CONFIG_OF_NET)	+= of_net.o
-obj-$(CONFIG_OF_UNITTEST) += unittest.o
+obj-$(CONFIG_OF_UNITTEST) += unittest.o base-test.o test-common.o
 obj-$(CONFIG_OF_MDIO)	+= of_mdio.o
 obj-$(CONFIG_OF_RESERVED_MEM) += of_reserved_mem.o
 obj-$(CONFIG_OF_RESOLVE)  += resolver.o
diff --git a/drivers/of/base-test.c b/drivers/of/base-test.c
new file mode 100644
index 0000000000000..3d3f4f1b74800
--- /dev/null
+++ b/drivers/of/base-test.c
@@ -0,0 +1,214 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Unit tests for functions defined in base.c.
+ */
+#include <linux/of.h>
+
+#include <kunit/test.h>
+
+#include "test-common.h"
+
+static void of_unittest_find_node_by_name(struct kunit *test)
+{
+	struct device_node *np;
+	const char *options, *name;
+
+	np = of_find_node_by_path("/testcase-data");
+	name = kasprintf(GFP_KERNEL, "%pOF", np);
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+	KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
+			       "find /testcase-data failed\n");
+	of_node_put(np);
+	kfree(name);
+
+	/* Test if trailing '/' works */
+	KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("/testcase-data/"), NULL,
+			    "trailing '/' on /testcase-data/ should fail\n");
+
+	np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+	name = kasprintf(GFP_KERNEL, "%pOF", np);
+	KUNIT_EXPECT_STREQ_MSG(
+		test, "/testcase-data/phandle-tests/consumer-a", name,
+		"find /testcase-data/phandle-tests/consumer-a failed\n");
+	of_node_put(np);
+	kfree(name);
+
+	np = of_find_node_by_path("testcase-alias");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+	name = kasprintf(GFP_KERNEL, "%pOF", np);
+	KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
+			       "find testcase-alias failed\n");
+	of_node_put(np);
+	kfree(name);
+
+	/* Test if trailing '/' works on aliases */
+	KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("testcase-alias/"), NULL,
+			    "trailing '/' on testcase-alias/ should fail\n");
+
+	np = of_find_node_by_path("testcase-alias/phandle-tests/consumer-a");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+	name = kasprintf(GFP_KERNEL, "%pOF", np);
+	KUNIT_EXPECT_STREQ_MSG(
+		test, "/testcase-data/phandle-tests/consumer-a", name,
+		"find testcase-alias/phandle-tests/consumer-a failed\n");
+	of_node_put(np);
+	kfree(name);
+
+	KUNIT_EXPECT_EQ_MSG(
+		test,
+		np = of_find_node_by_path("/testcase-data/missing-path"), NULL,
+		"non-existent path returned node %pOF\n", np);
+	of_node_put(np);
+
+	KUNIT_EXPECT_EQ_MSG(
+		test, np = of_find_node_by_path("missing-alias"), NULL,
+		"non-existent alias returned node %pOF\n", np);
+	of_node_put(np);
+
+	KUNIT_EXPECT_EQ_MSG(
+		test,
+		np = of_find_node_by_path("testcase-alias/missing-path"), NULL,
+		"non-existent alias with relative path returned node %pOF\n",
+		np);
+	of_node_put(np);
+
+	np = of_find_node_opts_by_path("/testcase-data:testoption", &options);
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+	KUNIT_EXPECT_STREQ_MSG(test, "testoption", options,
+			       "option path test failed\n");
+	of_node_put(np);
+
+	np = of_find_node_opts_by_path("/testcase-data:test/option", &options);
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+	KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
+			       "option path test, subcase #1 failed\n");
+	of_node_put(np);
+
+	np = of_find_node_opts_by_path("/testcase-data/testcase-device1:test/option", &options);
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+	KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
+			       "option path test, subcase #2 failed\n");
+	of_node_put(np);
+
+	np = of_find_node_opts_by_path("/testcase-data:testoption", NULL);
+	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(test, np,
+					 "NULL option path test failed\n");
+	of_node_put(np);
+
+	np = of_find_node_opts_by_path("testcase-alias:testaliasoption",
+				       &options);
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+	KUNIT_EXPECT_STREQ_MSG(test, "testaliasoption", options,
+			       "option alias path test failed\n");
+	of_node_put(np);
+
+	np = of_find_node_opts_by_path("testcase-alias:test/alias/option",
+				       &options);
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+	KUNIT_EXPECT_STREQ_MSG(test, "test/alias/option", options,
+			       "option alias path test, subcase #1 failed\n");
+	of_node_put(np);
+
+	np = of_find_node_opts_by_path("testcase-alias:testaliasoption", NULL);
+	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(
+			test, np, "NULL option alias path test failed\n");
+	of_node_put(np);
+
+	options = "testoption";
+	np = of_find_node_opts_by_path("testcase-alias", &options);
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+	KUNIT_EXPECT_EQ_MSG(test, options, NULL,
+			    "option clearing test failed\n");
+	of_node_put(np);
+
+	options = "testoption";
+	np = of_find_node_opts_by_path("/", &options);
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+	KUNIT_EXPECT_EQ_MSG(test, options, NULL,
+			    "option clearing root node test failed\n");
+	of_node_put(np);
+}
+
+static void of_unittest_dynamic(struct kunit *test)
+{
+	struct device_node *np;
+	struct property *prop;
+
+	np = of_find_node_by_path("/testcase-data");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+
+	/* Array of 4 properties for the purpose of testing */
+	prop = kcalloc(4, sizeof(*prop), GFP_KERNEL);
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, prop);
+
+	/* Add a new property - should pass*/
+	prop->name = "new-property";
+	prop->value = "new-property-data";
+	prop->length = strlen(prop->value) + 1;
+	KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop), 0,
+			    "Adding a new property failed\n");
+
+	/* Try to add an existing property - should fail */
+	prop++;
+	prop->name = "new-property";
+	prop->value = "new-property-data-should-fail";
+	prop->length = strlen(prop->value) + 1;
+	KUNIT_EXPECT_NE_MSG(test, of_add_property(np, prop), 0,
+			    "Adding an existing property should have failed\n");
+
+	/* Try to modify an existing property - should pass */
+	prop->value = "modify-property-data-should-pass";
+	prop->length = strlen(prop->value) + 1;
+	KUNIT_EXPECT_EQ_MSG(
+		test, of_update_property(np, prop), 0,
+		"Updating an existing property should have passed\n");
+
+	/* Try to modify non-existent property - should pass*/
+	prop++;
+	prop->name = "modify-property";
+	prop->value = "modify-missing-property-data-should-pass";
+	prop->length = strlen(prop->value) + 1;
+	KUNIT_EXPECT_EQ_MSG(test, of_update_property(np, prop), 0,
+			    "Updating a missing property should have passed\n");
+
+	/* Remove property - should pass */
+	KUNIT_EXPECT_EQ_MSG(test, of_remove_property(np, prop), 0,
+			    "Removing a property should have passed\n");
+
+	/* Adding very large property - should pass */
+	prop++;
+	prop->name = "large-property-PAGE_SIZEx8";
+	prop->length = PAGE_SIZE * 8;
+	prop->value = kzalloc(prop->length, GFP_KERNEL);
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, prop->value);
+	KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop), 0,
+			    "Adding a large property should have passed\n");
+}
+
+static int of_test_init(struct kunit *test)
+{
+	/* adding data for unittest */
+	KUNIT_ASSERT_EQ(test, 0, unittest_data_add());
+
+	if (!of_aliases)
+		of_aliases = of_find_node_by_path("/aliases");
+
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, of_find_node_by_path(
+			"/testcase-data/phandle-tests/consumer-a"));
+
+	return 0;
+}
+
+static struct kunit_case of_test_cases[] = {
+	KUNIT_CASE(of_unittest_find_node_by_name),
+	KUNIT_CASE(of_unittest_dynamic),
+	{},
+};
+
+static struct kunit_module of_test_module = {
+	.name = "of-base-test",
+	.init = of_test_init,
+	.test_cases = of_test_cases,
+};
+module_test(of_test_module);
diff --git a/drivers/of/test-common.c b/drivers/of/test-common.c
new file mode 100644
index 0000000000000..4c9a5f3b82f7d
--- /dev/null
+++ b/drivers/of/test-common.c
@@ -0,0 +1,175 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Common code to be used by unit tests.
+ */
+#include "test-common.h"
+
+#include <linux/of_fdt.h>
+#include <linux/slab.h>
+
+#include "of_private.h"
+
+/**
+ *	update_node_properties - adds the properties
+ *	of np into dup node (present in live tree) and
+ *	updates parent of children of np to dup.
+ *
+ *	@np:	node whose properties are being added to the live tree
+ *	@dup:	node present in live tree to be updated
+ */
+static void update_node_properties(struct device_node *np,
+					struct device_node *dup)
+{
+	struct property *prop;
+	struct property *save_next;
+	struct device_node *child;
+	int ret;
+
+	for_each_child_of_node(np, child)
+		child->parent = dup;
+
+	/*
+	 * "unittest internal error: unable to add testdata property"
+	 *
+	 *    If this message reports a property in node '/__symbols__' then
+	 *    the respective unittest overlay contains a label that has the
+	 *    same name as a label in the live devicetree.  The label will
+	 *    be in the live devicetree only if the devicetree source was
+	 *    compiled with the '-@' option.  If you encounter this error,
+	 *    please consider renaming __all__ of the labels in the unittest
+	 *    overlay dts files with an odd prefix that is unlikely to be
+	 *    used in a real devicetree.
+	 */
+
+	/*
+	 * open code for_each_property_of_node() because of_add_property()
+	 * sets prop->next to NULL
+	 */
+	for (prop = np->properties; prop != NULL; prop = save_next) {
+		save_next = prop->next;
+		ret = of_add_property(dup, prop);
+		if (ret)
+			pr_err("unittest internal error: unable to add testdata property %pOF/%s",
+			       np, prop->name);
+	}
+}
+
+/**
+ *	attach_node_and_children - attaches nodes
+ *	and its children to live tree
+ *
+ *	@np:	Node to attach to live tree
+ */
+static void attach_node_and_children(struct device_node *np)
+{
+	struct device_node *next, *dup, *child;
+	unsigned long flags;
+	const char *full_name;
+
+	full_name = kasprintf(GFP_KERNEL, "%pOF", np);
+
+	if (!strcmp(full_name, "/__local_fixups__") ||
+	    !strcmp(full_name, "/__fixups__"))
+		return;
+
+	dup = of_find_node_by_path(full_name);
+	kfree(full_name);
+	if (dup) {
+		update_node_properties(np, dup);
+		return;
+	}
+
+	child = np->child;
+	np->child = NULL;
+
+	mutex_lock(&of_mutex);
+	raw_spin_lock_irqsave(&devtree_lock, flags);
+	np->sibling = np->parent->child;
+	np->parent->child = np;
+	of_node_clear_flag(np, OF_DETACHED);
+	raw_spin_unlock_irqrestore(&devtree_lock, flags);
+
+	__of_attach_node_sysfs(np);
+	mutex_unlock(&of_mutex);
+
+	while (child) {
+		next = child->sibling;
+		attach_node_and_children(child);
+		child = next;
+	}
+}
+
+/**
+ *	unittest_data_add - Reads, copies data from
+ *	linked tree and attaches it to the live tree
+ */
+int unittest_data_add(void)
+{
+	void *unittest_data;
+	struct device_node *unittest_data_node, *np;
+	/*
+	 * __dtb_testcases_begin[] and __dtb_testcases_end[] are magically
+	 * created by cmd_dt_S_dtb in scripts/Makefile.lib
+	 */
+	extern uint8_t __dtb_testcases_begin[];
+	extern uint8_t __dtb_testcases_end[];
+	const int size = __dtb_testcases_end - __dtb_testcases_begin;
+	int rc;
+
+	if (!size) {
+		pr_warn("%s: No testcase data to attach; not running tests\n",
+			__func__);
+		return -ENODATA;
+	}
+
+	/* creating copy */
+	unittest_data = kmemdup(__dtb_testcases_begin, size, GFP_KERNEL);
+
+	if (!unittest_data) {
+		pr_warn("%s: Failed to allocate memory for unittest_data; "
+			"not running tests\n", __func__);
+		return -ENOMEM;
+	}
+	of_fdt_unflatten_tree(unittest_data, NULL, &unittest_data_node);
+	if (!unittest_data_node) {
+		pr_warn("%s: No tree to attach; not running tests\n", __func__);
+		return -ENODATA;
+	}
+
+	/*
+	 * This lock normally encloses of_resolve_phandles()
+	 */
+	of_overlay_mutex_lock();
+
+	rc = of_resolve_phandles(unittest_data_node);
+	if (rc) {
+		pr_err("%s: Failed to resolve phandles (rc=%i)\n", __func__, rc);
+		of_overlay_mutex_unlock();
+		return -EINVAL;
+	}
+
+	if (!of_root) {
+		of_root = unittest_data_node;
+		for_each_of_allnodes(np)
+			__of_attach_node_sysfs(np);
+		of_aliases = of_find_node_by_path("/aliases");
+		of_chosen = of_find_node_by_path("/chosen");
+		of_overlay_mutex_unlock();
+		return 0;
+	}
+
+	/* attach the sub-tree to live tree */
+	np = unittest_data_node->child;
+	while (np) {
+		struct device_node *next = np->sibling;
+
+		np->parent = of_root;
+		attach_node_and_children(np);
+		np = next;
+	}
+
+	of_overlay_mutex_unlock();
+
+	return 0;
+}
+
diff --git a/drivers/of/test-common.h b/drivers/of/test-common.h
new file mode 100644
index 0000000000000..a35484406bbf1
--- /dev/null
+++ b/drivers/of/test-common.h
@@ -0,0 +1,16 @@
+/* SPDX-License-Identifier: GPL-2.0+ */
+/*
+ * Common code to be used by unit tests.
+ */
+#ifndef _LINUX_OF_TEST_COMMON_H
+#define _LINUX_OF_TEST_COMMON_H
+
+#include <linux/of.h>
+
+/**
+ *	unittest_data_add - Reads, copies data from
+ *	linked tree and attaches it to the live tree
+ */
+int unittest_data_add(void);
+
+#endif /* _LINUX_OF_TEST_COMMON_H */
diff --git a/drivers/of/unittest.c b/drivers/of/unittest.c
index 96de69ccb3e63..05a2610d0be7f 100644
--- a/drivers/of/unittest.c
+++ b/drivers/of/unittest.c
@@ -29,184 +29,7 @@
 #include <kunit/test.h>
 
 #include "of_private.h"
-
-static void of_unittest_find_node_by_name(struct kunit *test)
-{
-	struct device_node *np;
-	const char *options, *name;
-
-	np = of_find_node_by_path("/testcase-data");
-	name = kasprintf(GFP_KERNEL, "%pOF", np);
-	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
-	KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
-			       "find /testcase-data failed\n");
-	of_node_put(np);
-	kfree(name);
-
-	/* Test if trailing '/' works */
-	KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("/testcase-data/"), NULL,
-			    "trailing '/' on /testcase-data/ should fail\n");
-
-	np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
-	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
-	name = kasprintf(GFP_KERNEL, "%pOF", np);
-	KUNIT_EXPECT_STREQ_MSG(
-		test, "/testcase-data/phandle-tests/consumer-a", name,
-		"find /testcase-data/phandle-tests/consumer-a failed\n");
-	of_node_put(np);
-	kfree(name);
-
-	np = of_find_node_by_path("testcase-alias");
-	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
-	name = kasprintf(GFP_KERNEL, "%pOF", np);
-	KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
-			       "find testcase-alias failed\n");
-	of_node_put(np);
-	kfree(name);
-
-	/* Test if trailing '/' works on aliases */
-	KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("testcase-alias/"), NULL,
-			    "trailing '/' on testcase-alias/ should fail\n");
-
-	np = of_find_node_by_path("testcase-alias/phandle-tests/consumer-a");
-	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
-	name = kasprintf(GFP_KERNEL, "%pOF", np);
-	KUNIT_EXPECT_STREQ_MSG(
-		test, "/testcase-data/phandle-tests/consumer-a", name,
-		"find testcase-alias/phandle-tests/consumer-a failed\n");
-	of_node_put(np);
-	kfree(name);
-
-	KUNIT_EXPECT_EQ_MSG(
-		test,
-		np = of_find_node_by_path("/testcase-data/missing-path"), NULL,
-		"non-existent path returned node %pOF\n", np);
-	of_node_put(np);
-
-	KUNIT_EXPECT_EQ_MSG(
-		test, np = of_find_node_by_path("missing-alias"), NULL,
-		"non-existent alias returned node %pOF\n", np);
-	of_node_put(np);
-
-	KUNIT_EXPECT_EQ_MSG(
-		test,
-		np = of_find_node_by_path("testcase-alias/missing-path"), NULL,
-		"non-existent alias with relative path returned node %pOF\n",
-		np);
-	of_node_put(np);
-
-	np = of_find_node_opts_by_path("/testcase-data:testoption", &options);
-	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
-	KUNIT_EXPECT_STREQ_MSG(test, "testoption", options,
-			       "option path test failed\n");
-	of_node_put(np);
-
-	np = of_find_node_opts_by_path("/testcase-data:test/option", &options);
-	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
-	KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
-			       "option path test, subcase #1 failed\n");
-	of_node_put(np);
-
-	np = of_find_node_opts_by_path("/testcase-data/testcase-device1:test/option", &options);
-	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
-	KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
-			       "option path test, subcase #2 failed\n");
-	of_node_put(np);
-
-	np = of_find_node_opts_by_path("/testcase-data:testoption", NULL);
-	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(test, np,
-					 "NULL option path test failed\n");
-	of_node_put(np);
-
-	np = of_find_node_opts_by_path("testcase-alias:testaliasoption",
-				       &options);
-	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
-	KUNIT_EXPECT_STREQ_MSG(test, "testaliasoption", options,
-			       "option alias path test failed\n");
-	of_node_put(np);
-
-	np = of_find_node_opts_by_path("testcase-alias:test/alias/option",
-				       &options);
-	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
-	KUNIT_EXPECT_STREQ_MSG(test, "test/alias/option", options,
-			       "option alias path test, subcase #1 failed\n");
-	of_node_put(np);
-
-	np = of_find_node_opts_by_path("testcase-alias:testaliasoption", NULL);
-	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(
-			test, np, "NULL option alias path test failed\n");
-	of_node_put(np);
-
-	options = "testoption";
-	np = of_find_node_opts_by_path("testcase-alias", &options);
-	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
-	KUNIT_EXPECT_EQ_MSG(test, options, NULL,
-			    "option clearing test failed\n");
-	of_node_put(np);
-
-	options = "testoption";
-	np = of_find_node_opts_by_path("/", &options);
-	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
-	KUNIT_EXPECT_EQ_MSG(test, options, NULL,
-			    "option clearing root node test failed\n");
-	of_node_put(np);
-}
-
-static void of_unittest_dynamic(struct kunit *test)
-{
-	struct device_node *np;
-	struct property *prop;
-
-	np = of_find_node_by_path("/testcase-data");
-	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
-
-	/* Array of 4 properties for the purpose of testing */
-	prop = kcalloc(4, sizeof(*prop), GFP_KERNEL);
-	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, prop);
-
-	/* Add a new property - should pass*/
-	prop->name = "new-property";
-	prop->value = "new-property-data";
-	prop->length = strlen(prop->value) + 1;
-	KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop), 0,
-			    "Adding a new property failed\n");
-
-	/* Try to add an existing property - should fail */
-	prop++;
-	prop->name = "new-property";
-	prop->value = "new-property-data-should-fail";
-	prop->length = strlen(prop->value) + 1;
-	KUNIT_EXPECT_NE_MSG(test, of_add_property(np, prop), 0,
-			    "Adding an existing property should have failed\n");
-
-	/* Try to modify an existing property - should pass */
-	prop->value = "modify-property-data-should-pass";
-	prop->length = strlen(prop->value) + 1;
-	KUNIT_EXPECT_EQ_MSG(
-		test, of_update_property(np, prop), 0,
-		"Updating an existing property should have passed\n");
-
-	/* Try to modify non-existent property - should pass*/
-	prop++;
-	prop->name = "modify-property";
-	prop->value = "modify-missing-property-data-should-pass";
-	prop->length = strlen(prop->value) + 1;
-	KUNIT_EXPECT_EQ_MSG(test, of_update_property(np, prop), 0,
-			    "Updating a missing property should have passed\n");
-
-	/* Remove property - should pass */
-	KUNIT_EXPECT_EQ_MSG(test, of_remove_property(np, prop), 0,
-			    "Removing a property should have passed\n");
-
-	/* Adding very large property - should pass */
-	prop++;
-	prop->name = "large-property-PAGE_SIZEx8";
-	prop->length = PAGE_SIZE * 8;
-	prop->value = kzalloc(prop->length, GFP_KERNEL);
-	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, prop->value);
-	KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop), 0,
-			    "Adding a large property should have passed\n");
-}
+#include "test-common.h"
 
 static int of_unittest_check_node_linkage(struct device_node *np)
 {
@@ -1177,170 +1000,6 @@ static void of_unittest_platform_populate(struct kunit *test)
 	of_node_put(np);
 }
 
-/**
- *	update_node_properties - adds the properties
- *	of np into dup node (present in live tree) and
- *	updates parent of children of np to dup.
- *
- *	@np:	node whose properties are being added to the live tree
- *	@dup:	node present in live tree to be updated
- */
-static void update_node_properties(struct device_node *np,
-					struct device_node *dup)
-{
-	struct property *prop;
-	struct property *save_next;
-	struct device_node *child;
-	int ret;
-
-	for_each_child_of_node(np, child)
-		child->parent = dup;
-
-	/*
-	 * "unittest internal error: unable to add testdata property"
-	 *
-	 *    If this message reports a property in node '/__symbols__' then
-	 *    the respective unittest overlay contains a label that has the
-	 *    same name as a label in the live devicetree.  The label will
-	 *    be in the live devicetree only if the devicetree source was
-	 *    compiled with the '-@' option.  If you encounter this error,
-	 *    please consider renaming __all__ of the labels in the unittest
-	 *    overlay dts files with an odd prefix that is unlikely to be
-	 *    used in a real devicetree.
-	 */
-
-	/*
-	 * open code for_each_property_of_node() because of_add_property()
-	 * sets prop->next to NULL
-	 */
-	for (prop = np->properties; prop != NULL; prop = save_next) {
-		save_next = prop->next;
-		ret = of_add_property(dup, prop);
-		if (ret)
-			pr_err("unittest internal error: unable to add testdata property %pOF/%s",
-			       np, prop->name);
-	}
-}
-
-/**
- *	attach_node_and_children - attaches nodes
- *	and its children to live tree
- *
- *	@np:	Node to attach to live tree
- */
-static void attach_node_and_children(struct device_node *np)
-{
-	struct device_node *next, *dup, *child;
-	unsigned long flags;
-	const char *full_name;
-
-	full_name = kasprintf(GFP_KERNEL, "%pOF", np);
-
-	if (!strcmp(full_name, "/__local_fixups__") ||
-	    !strcmp(full_name, "/__fixups__"))
-		return;
-
-	dup = of_find_node_by_path(full_name);
-	kfree(full_name);
-	if (dup) {
-		update_node_properties(np, dup);
-		return;
-	}
-
-	child = np->child;
-	np->child = NULL;
-
-	mutex_lock(&of_mutex);
-	raw_spin_lock_irqsave(&devtree_lock, flags);
-	np->sibling = np->parent->child;
-	np->parent->child = np;
-	of_node_clear_flag(np, OF_DETACHED);
-	raw_spin_unlock_irqrestore(&devtree_lock, flags);
-
-	__of_attach_node_sysfs(np);
-	mutex_unlock(&of_mutex);
-
-	while (child) {
-		next = child->sibling;
-		attach_node_and_children(child);
-		child = next;
-	}
-}
-
-/**
- *	unittest_data_add - Reads, copies data from
- *	linked tree and attaches it to the live tree
- */
-static int unittest_data_add(void)
-{
-	void *unittest_data;
-	struct device_node *unittest_data_node, *np;
-	/*
-	 * __dtb_testcases_begin[] and __dtb_testcases_end[] are magically
-	 * created by cmd_dt_S_dtb in scripts/Makefile.lib
-	 */
-	extern uint8_t __dtb_testcases_begin[];
-	extern uint8_t __dtb_testcases_end[];
-	const int size = __dtb_testcases_end - __dtb_testcases_begin;
-	int rc;
-
-	if (!size) {
-		pr_warn("%s: No testcase data to attach; not running tests\n",
-			__func__);
-		return -ENODATA;
-	}
-
-	/* creating copy */
-	unittest_data = kmemdup(__dtb_testcases_begin, size, GFP_KERNEL);
-
-	if (!unittest_data) {
-		pr_warn("%s: Failed to allocate memory for unittest_data; "
-			"not running tests\n", __func__);
-		return -ENOMEM;
-	}
-	of_fdt_unflatten_tree(unittest_data, NULL, &unittest_data_node);
-	if (!unittest_data_node) {
-		pr_warn("%s: No tree to attach; not running tests\n", __func__);
-		return -ENODATA;
-	}
-
-	/*
-	 * This lock normally encloses of_resolve_phandles()
-	 */
-	of_overlay_mutex_lock();
-
-	rc = of_resolve_phandles(unittest_data_node);
-	if (rc) {
-		pr_err("%s: Failed to resolve phandles (rc=%i)\n", __func__, rc);
-		of_overlay_mutex_unlock();
-		return -EINVAL;
-	}
-
-	if (!of_root) {
-		of_root = unittest_data_node;
-		for_each_of_allnodes(np)
-			__of_attach_node_sysfs(np);
-		of_aliases = of_find_node_by_path("/aliases");
-		of_chosen = of_find_node_by_path("/chosen");
-		of_overlay_mutex_unlock();
-		return 0;
-	}
-
-	/* attach the sub-tree to live tree */
-	np = unittest_data_node->child;
-	while (np) {
-		struct device_node *next = np->sibling;
-
-		np->parent = of_root;
-		attach_node_and_children(np);
-		np = next;
-	}
-
-	of_overlay_mutex_unlock();
-
-	return 0;
-}
-
 #ifdef CONFIG_OF_OVERLAY
 static int overlay_data_apply(const char *overlay_name, int *overlay_id);
 
@@ -2563,8 +2222,6 @@ static int of_test_init(struct kunit *test)
 static struct kunit_case of_test_cases[] = {
 	KUNIT_CASE(of_unittest_check_tree_linkage),
 	KUNIT_CASE(of_unittest_check_phandles),
-	KUNIT_CASE(of_unittest_find_node_by_name),
-	KUNIT_CASE(of_unittest_dynamic),
 	KUNIT_CASE(of_unittest_parse_phandle_with_args),
 	KUNIT_CASE(of_unittest_parse_phandle_with_args_map),
 	KUNIT_CASE(of_unittest_printf),
-- 
2.21.0.rc0.258.g878e2cd30e-goog

^ permalink raw reply related	[flat|nested] 316+ messages in thread

* [RFC v4 16/17] of: unittest: split out a couple of test cases from unittest
@ 2019-02-14 21:37   ` brendanhiggins
  0 siblings, 0 replies; 316+ messages in thread
From: Brendan Higgins @ 2019-02-14 21:37 UTC (permalink / raw)
  To: keescook, mcgrof, shuah, robh, kieran.bingham, frowand.list
  Cc: brakmo, pmladek, amir73il, Brendan Higgins, dri-devel,
	Alexander.Levin, linux-kselftest, linux-nvdimm, richard,
	knut.omang, wfg, joel, jdike, dan.carpenter, devicetree,
	Tim.Bird, linux-um, rostedt, julia.lawall, dan.j.williams,
	kunit-dev, gregkh, linux-kernel, daniel, mpe, joe, khilman

Split out a couple of test cases that exercise features in base.c from the
unittest.c monolith. The intention is that we will eventually split out
all test cases and group them together based on what portion of device
tree they test.
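
For reference, each split-out file ends up with the same shape: a set of
focused test functions, a kunit_case array, and a kunit_module registered
with module_test(), with the shared devicetree fixture pulled in from
test-common.h. A minimal sketch of that layout (of_test_example and the
"of-example-test" module name are made up for illustration; the macros and
helpers are the ones used in this patch):

	#include <linux/of.h>

	#include <kunit/test.h>

	#include "test-common.h"

	/* One focused check per test function. */
	static void of_test_example(struct kunit *test)
	{
		struct device_node *np;

		np = of_find_node_by_path("/testcase-data");
		KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
		of_node_put(np);
	}

	/* Shared fixture: attach the test devicetree data before each case. */
	static int of_test_example_init(struct kunit *test)
	{
		KUNIT_ASSERT_EQ(test, 0, unittest_data_add());
		return 0;
	}

	static struct kunit_case of_test_example_cases[] = {
		KUNIT_CASE(of_test_example),
		{},
	};

	static struct kunit_module of_test_example_module = {
		.name = "of-example-test",
		.init = of_test_example_init,
		.test_cases = of_test_example_cases,
	};
	module_test(of_test_example_module);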

Signed-off-by: Brendan Higgins <brendanhiggins@google.com>
---
 drivers/of/Makefile      |   2 +-
 drivers/of/base-test.c   | 214 ++++++++++++++++++++++++
 drivers/of/test-common.c | 175 ++++++++++++++++++++
 drivers/of/test-common.h |  16 ++
 drivers/of/unittest.c    | 345 +--------------------------------------
 5 files changed, 407 insertions(+), 345 deletions(-)
 create mode 100644 drivers/of/base-test.c
 create mode 100644 drivers/of/test-common.c
 create mode 100644 drivers/of/test-common.h

diff --git a/drivers/of/Makefile b/drivers/of/Makefile
index 663a4af0cccd5..4a4bd527d586c 100644
--- a/drivers/of/Makefile
+++ b/drivers/of/Makefile
@@ -8,7 +8,7 @@ obj-$(CONFIG_OF_PROMTREE) += pdt.o
 obj-$(CONFIG_OF_ADDRESS)  += address.o
 obj-$(CONFIG_OF_IRQ)    += irq.o
 obj-$(CONFIG_OF_NET)	+= of_net.o
-obj-$(CONFIG_OF_UNITTEST) += unittest.o
+obj-$(CONFIG_OF_UNITTEST) += unittest.o base-test.o test-common.o
 obj-$(CONFIG_OF_MDIO)	+= of_mdio.o
 obj-$(CONFIG_OF_RESERVED_MEM) += of_reserved_mem.o
 obj-$(CONFIG_OF_RESOLVE)  += resolver.o
diff --git a/drivers/of/base-test.c b/drivers/of/base-test.c
new file mode 100644
index 0000000000000..3d3f4f1b74800
--- /dev/null
+++ b/drivers/of/base-test.c
@@ -0,0 +1,214 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Unit tests for functions defined in base.c.
+ */
+#include <linux/of.h>
+
+#include <kunit/test.h>
+
+#include "test-common.h"
+
+static void of_unittest_find_node_by_name(struct kunit *test)
+{
+	struct device_node *np;
+	const char *options, *name;
+
+	np = of_find_node_by_path("/testcase-data");
+	name = kasprintf(GFP_KERNEL, "%pOF", np);
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+	KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
+			       "find /testcase-data failed\n");
+	of_node_put(np);
+	kfree(name);
+
+	/* Test if trailing '/' works */
+	KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("/testcase-data/"), NULL,
+			    "trailing '/' on /testcase-data/ should fail\n");
+
+	np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+	name = kasprintf(GFP_KERNEL, "%pOF", np);
+	KUNIT_EXPECT_STREQ_MSG(
+		test, "/testcase-data/phandle-tests/consumer-a", name,
+		"find /testcase-data/phandle-tests/consumer-a failed\n");
+	of_node_put(np);
+	kfree(name);
+
+	np = of_find_node_by_path("testcase-alias");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+	name = kasprintf(GFP_KERNEL, "%pOF", np);
+	KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
+			       "find testcase-alias failed\n");
+	of_node_put(np);
+	kfree(name);
+
+	/* Test if trailing '/' works on aliases */
+	KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("testcase-alias/"), NULL,
+			    "trailing '/' on testcase-alias/ should fail\n");
+
+	np = of_find_node_by_path("testcase-alias/phandle-tests/consumer-a");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+	name = kasprintf(GFP_KERNEL, "%pOF", np);
+	KUNIT_EXPECT_STREQ_MSG(
+		test, "/testcase-data/phandle-tests/consumer-a", name,
+		"find testcase-alias/phandle-tests/consumer-a failed\n");
+	of_node_put(np);
+	kfree(name);
+
+	KUNIT_EXPECT_EQ_MSG(
+		test,
+		np = of_find_node_by_path("/testcase-data/missing-path"), NULL,
+		"non-existent path returned node %pOF\n", np);
+	of_node_put(np);
+
+	KUNIT_EXPECT_EQ_MSG(
+		test, np = of_find_node_by_path("missing-alias"), NULL,
+		"non-existent alias returned node %pOF\n", np);
+	of_node_put(np);
+
+	KUNIT_EXPECT_EQ_MSG(
+		test,
+		np = of_find_node_by_path("testcase-alias/missing-path"), NULL,
+		"non-existent alias with relative path returned node %pOF\n",
+		np);
+	of_node_put(np);
+
+	np = of_find_node_opts_by_path("/testcase-data:testoption", &options);
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+	KUNIT_EXPECT_STREQ_MSG(test, "testoption", options,
+			       "option path test failed\n");
+	of_node_put(np);
+
+	np = of_find_node_opts_by_path("/testcase-data:test/option", &options);
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+	KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
+			       "option path test, subcase #1 failed\n");
+	of_node_put(np);
+
+	np = of_find_node_opts_by_path("/testcase-data/testcase-device1:test/option", &options);
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+	KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
+			       "option path test, subcase #2 failed\n");
+	of_node_put(np);
+
+	np = of_find_node_opts_by_path("/testcase-data:testoption", NULL);
+	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(test, np,
+					 "NULL option path test failed\n");
+	of_node_put(np);
+
+	np = of_find_node_opts_by_path("testcase-alias:testaliasoption",
+				       &options);
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+	KUNIT_EXPECT_STREQ_MSG(test, "testaliasoption", options,
+			       "option alias path test failed\n");
+	of_node_put(np);
+
+	np = of_find_node_opts_by_path("testcase-alias:test/alias/option",
+				       &options);
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+	KUNIT_EXPECT_STREQ_MSG(test, "test/alias/option", options,
+			       "option alias path test, subcase #1 failed\n");
+	of_node_put(np);
+
+	np = of_find_node_opts_by_path("testcase-alias:testaliasoption", NULL);
+	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(
+			test, np, "NULL option alias path test failed\n");
+	of_node_put(np);
+
+	options = "testoption";
+	np = of_find_node_opts_by_path("testcase-alias", &options);
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+	KUNIT_EXPECT_EQ_MSG(test, options, NULL,
+			    "option clearing test failed\n");
+	of_node_put(np);
+
+	options = "testoption";
+	np = of_find_node_opts_by_path("/", &options);
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+	KUNIT_EXPECT_EQ_MSG(test, options, NULL,
+			    "option clearing root node test failed\n");
+	of_node_put(np);
+}
+
+static void of_unittest_dynamic(struct kunit *test)
+{
+	struct device_node *np;
+	struct property *prop;
+
+	np = of_find_node_by_path("/testcase-data");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+
+	/* Array of 4 properties for the purpose of testing */
+	prop = kcalloc(4, sizeof(*prop), GFP_KERNEL);
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, prop);
+
+	/* Add a new property - should pass*/
+	prop->name = "new-property";
+	prop->value = "new-property-data";
+	prop->length = strlen(prop->value) + 1;
+	KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop), 0,
+			    "Adding a new property failed\n");
+
+	/* Try to add an existing property - should fail */
+	prop++;
+	prop->name = "new-property";
+	prop->value = "new-property-data-should-fail";
+	prop->length = strlen(prop->value) + 1;
+	KUNIT_EXPECT_NE_MSG(test, of_add_property(np, prop), 0,
+			    "Adding an existing property should have failed\n");
+
+	/* Try to modify an existing property - should pass */
+	prop->value = "modify-property-data-should-pass";
+	prop->length = strlen(prop->value) + 1;
+	KUNIT_EXPECT_EQ_MSG(
+		test, of_update_property(np, prop), 0,
+		"Updating an existing property should have passed\n");
+
+	/* Try to modify non-existent property - should pass*/
+	prop++;
+	prop->name = "modify-property";
+	prop->value = "modify-missing-property-data-should-pass";
+	prop->length = strlen(prop->value) + 1;
+	KUNIT_EXPECT_EQ_MSG(test, of_update_property(np, prop), 0,
+			    "Updating a missing property should have passed\n");
+
+	/* Remove property - should pass */
+	KUNIT_EXPECT_EQ_MSG(test, of_remove_property(np, prop), 0,
+			    "Removing a property should have passed\n");
+
+	/* Adding very large property - should pass */
+	prop++;
+	prop->name = "large-property-PAGE_SIZEx8";
+	prop->length = PAGE_SIZE * 8;
+	prop->value = kzalloc(prop->length, GFP_KERNEL);
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, prop->value);
+	KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop), 0,
+			    "Adding a large property should have passed\n");
+}
+
+static int of_test_init(struct kunit *test)
+{
+	/* adding data for unittest */
+	KUNIT_ASSERT_EQ(test, 0, unittest_data_add());
+
+	if (!of_aliases)
+		of_aliases = of_find_node_by_path("/aliases");
+
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, of_find_node_by_path(
+			"/testcase-data/phandle-tests/consumer-a"));
+
+	return 0;
+}
+
+static struct kunit_case of_test_cases[] = {
+	KUNIT_CASE(of_unittest_find_node_by_name),
+	KUNIT_CASE(of_unittest_dynamic),
+	{},
+};
+
+static struct kunit_module of_test_module = {
+	.name = "of-base-test",
+	.init = of_test_init,
+	.test_cases = of_test_cases,
+};
+module_test(of_test_module);
diff --git a/drivers/of/test-common.c b/drivers/of/test-common.c
new file mode 100644
index 0000000000000..4c9a5f3b82f7d
--- /dev/null
+++ b/drivers/of/test-common.c
@@ -0,0 +1,175 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Common code to be used by unit tests.
+ */
+#include "test-common.h"
+
+#include <linux/of_fdt.h>
+#include <linux/slab.h>
+
+#include "of_private.h"
+
+/**
+ *	update_node_properties - adds the properties
+ *	of np into dup node (present in live tree) and
+ *	updates parent of children of np to dup.
+ *
+ *	@np:	node whose properties are being added to the live tree
+ *	@dup:	node present in live tree to be updated
+ */
+static void update_node_properties(struct device_node *np,
+					struct device_node *dup)
+{
+	struct property *prop;
+	struct property *save_next;
+	struct device_node *child;
+	int ret;
+
+	for_each_child_of_node(np, child)
+		child->parent = dup;
+
+	/*
+	 * "unittest internal error: unable to add testdata property"
+	 *
+	 *    If this message reports a property in node '/__symbols__' then
+	 *    the respective unittest overlay contains a label that has the
+	 *    same name as a label in the live devicetree.  The label will
+	 *    be in the live devicetree only if the devicetree source was
+	 *    compiled with the '-@' option.  If you encounter this error,
+	 *    please consider renaming __all__ of the labels in the unittest
+	 *    overlay dts files with an odd prefix that is unlikely to be
+	 *    used in a real devicetree.
+	 */
+
+	/*
+	 * open code for_each_property_of_node() because of_add_property()
+	 * sets prop->next to NULL
+	 */
+	for (prop = np->properties; prop != NULL; prop = save_next) {
+		save_next = prop->next;
+		ret = of_add_property(dup, prop);
+		if (ret)
+			pr_err("unittest internal error: unable to add testdata property %pOF/%s",
+			       np, prop->name);
+	}
+}
+
+/**
+ *	attach_node_and_children - attaches nodes
+ *	and its children to live tree
+ *
+ *	@np:	Node to attach to live tree
+ */
+static void attach_node_and_children(struct device_node *np)
+{
+	struct device_node *next, *dup, *child;
+	unsigned long flags;
+	const char *full_name;
+
+	full_name = kasprintf(GFP_KERNEL, "%pOF", np);
+
+	if (!strcmp(full_name, "/__local_fixups__") ||
+	    !strcmp(full_name, "/__fixups__"))
+		return;
+
+	dup = of_find_node_by_path(full_name);
+	kfree(full_name);
+	if (dup) {
+		update_node_properties(np, dup);
+		return;
+	}
+
+	child = np->child;
+	np->child = NULL;
+
+	mutex_lock(&of_mutex);
+	raw_spin_lock_irqsave(&devtree_lock, flags);
+	np->sibling = np->parent->child;
+	np->parent->child = np;
+	of_node_clear_flag(np, OF_DETACHED);
+	raw_spin_unlock_irqrestore(&devtree_lock, flags);
+
+	__of_attach_node_sysfs(np);
+	mutex_unlock(&of_mutex);
+
+	while (child) {
+		next = child->sibling;
+		attach_node_and_children(child);
+		child = next;
+	}
+}
+
+/**
+ *	unittest_data_add - Reads, copies data from
+ *	linked tree and attaches it to the live tree
+ */
+int unittest_data_add(void)
+{
+	void *unittest_data;
+	struct device_node *unittest_data_node, *np;
+	/*
+	 * __dtb_testcases_begin[] and __dtb_testcases_end[] are magically
+	 * created by cmd_dt_S_dtb in scripts/Makefile.lib
+	 */
+	extern uint8_t __dtb_testcases_begin[];
+	extern uint8_t __dtb_testcases_end[];
+	const int size = __dtb_testcases_end - __dtb_testcases_begin;
+	int rc;
+
+	if (!size) {
+		pr_warn("%s: No testcase data to attach; not running tests\n",
+			__func__);
+		return -ENODATA;
+	}
+
+	/* creating copy */
+	unittest_data = kmemdup(__dtb_testcases_begin, size, GFP_KERNEL);
+
+	if (!unittest_data) {
+		pr_warn("%s: Failed to allocate memory for unittest_data; "
+			"not running tests\n", __func__);
+		return -ENOMEM;
+	}
+	of_fdt_unflatten_tree(unittest_data, NULL, &unittest_data_node);
+	if (!unittest_data_node) {
+		pr_warn("%s: No tree to attach; not running tests\n", __func__);
+		return -ENODATA;
+	}
+
+	/*
+	 * This lock normally encloses of_resolve_phandles()
+	 */
+	of_overlay_mutex_lock();
+
+	rc = of_resolve_phandles(unittest_data_node);
+	if (rc) {
+		pr_err("%s: Failed to resolve phandles (rc=%i)\n", __func__, rc);
+		of_overlay_mutex_unlock();
+		return -EINVAL;
+	}
+
+	if (!of_root) {
+		of_root = unittest_data_node;
+		for_each_of_allnodes(np)
+			__of_attach_node_sysfs(np);
+		of_aliases = of_find_node_by_path("/aliases");
+		of_chosen = of_find_node_by_path("/chosen");
+		of_overlay_mutex_unlock();
+		return 0;
+	}
+
+	/* attach the sub-tree to live tree */
+	np = unittest_data_node->child;
+	while (np) {
+		struct device_node *next = np->sibling;
+
+		np->parent = of_root;
+		attach_node_and_children(np);
+		np = next;
+	}
+
+	of_overlay_mutex_unlock();
+
+	return 0;
+}
+
diff --git a/drivers/of/test-common.h b/drivers/of/test-common.h
new file mode 100644
index 0000000000000..a35484406bbf1
--- /dev/null
+++ b/drivers/of/test-common.h
@@ -0,0 +1,16 @@
+/* SPDX-License-Identifier: GPL-2.0+ */
+/*
+ * Common code to be used by unit tests.
+ */
+#ifndef _LINUX_OF_TEST_COMMON_H
+#define _LINUX_OF_TEST_COMMON_H
+
+#include <linux/of.h>
+
+/**
+ *	unittest_data_add - Reads, copies data from
+ *	linked tree and attaches it to the live tree
+ */
+int unittest_data_add(void);
+
+#endif /* _LINUX_OF_TEST_COMMON_H */
diff --git a/drivers/of/unittest.c b/drivers/of/unittest.c
index 96de69ccb3e63..05a2610d0be7f 100644
--- a/drivers/of/unittest.c
+++ b/drivers/of/unittest.c
@@ -29,184 +29,7 @@
 #include <kunit/test.h>
 
 #include "of_private.h"
-
-static void of_unittest_find_node_by_name(struct kunit *test)
-{
-	struct device_node *np;
-	const char *options, *name;
-
-	np = of_find_node_by_path("/testcase-data");
-	name = kasprintf(GFP_KERNEL, "%pOF", np);
-	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
-	KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
-			       "find /testcase-data failed\n");
-	of_node_put(np);
-	kfree(name);
-
-	/* Test if trailing '/' works */
-	KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("/testcase-data/"), NULL,
-			    "trailing '/' on /testcase-data/ should fail\n");
-
-	np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
-	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
-	name = kasprintf(GFP_KERNEL, "%pOF", np);
-	KUNIT_EXPECT_STREQ_MSG(
-		test, "/testcase-data/phandle-tests/consumer-a", name,
-		"find /testcase-data/phandle-tests/consumer-a failed\n");
-	of_node_put(np);
-	kfree(name);
-
-	np = of_find_node_by_path("testcase-alias");
-	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
-	name = kasprintf(GFP_KERNEL, "%pOF", np);
-	KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
-			       "find testcase-alias failed\n");
-	of_node_put(np);
-	kfree(name);
-
-	/* Test if trailing '/' works on aliases */
-	KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("testcase-alias/"), NULL,
-			    "trailing '/' on testcase-alias/ should fail\n");
-
-	np = of_find_node_by_path("testcase-alias/phandle-tests/consumer-a");
-	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
-	name = kasprintf(GFP_KERNEL, "%pOF", np);
-	KUNIT_EXPECT_STREQ_MSG(
-		test, "/testcase-data/phandle-tests/consumer-a", name,
-		"find testcase-alias/phandle-tests/consumer-a failed\n");
-	of_node_put(np);
-	kfree(name);
-
-	KUNIT_EXPECT_EQ_MSG(
-		test,
-		np = of_find_node_by_path("/testcase-data/missing-path"), NULL,
-		"non-existent path returned node %pOF\n", np);
-	of_node_put(np);
-
-	KUNIT_EXPECT_EQ_MSG(
-		test, np = of_find_node_by_path("missing-alias"), NULL,
-		"non-existent alias returned node %pOF\n", np);
-	of_node_put(np);
-
-	KUNIT_EXPECT_EQ_MSG(
-		test,
-		np = of_find_node_by_path("testcase-alias/missing-path"), NULL,
-		"non-existent alias with relative path returned node %pOF\n",
-		np);
-	of_node_put(np);
-
-	np = of_find_node_opts_by_path("/testcase-data:testoption", &options);
-	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
-	KUNIT_EXPECT_STREQ_MSG(test, "testoption", options,
-			       "option path test failed\n");
-	of_node_put(np);
-
-	np = of_find_node_opts_by_path("/testcase-data:test/option", &options);
-	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
-	KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
-			       "option path test, subcase #1 failed\n");
-	of_node_put(np);
-
-	np = of_find_node_opts_by_path("/testcase-data/testcase-device1:test/option", &options);
-	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
-	KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
-			       "option path test, subcase #2 failed\n");
-	of_node_put(np);
-
-	np = of_find_node_opts_by_path("/testcase-data:testoption", NULL);
-	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(test, np,
-					 "NULL option path test failed\n");
-	of_node_put(np);
-
-	np = of_find_node_opts_by_path("testcase-alias:testaliasoption",
-				       &options);
-	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
-	KUNIT_EXPECT_STREQ_MSG(test, "testaliasoption", options,
-			       "option alias path test failed\n");
-	of_node_put(np);
-
-	np = of_find_node_opts_by_path("testcase-alias:test/alias/option",
-				       &options);
-	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
-	KUNIT_EXPECT_STREQ_MSG(test, "test/alias/option", options,
-			       "option alias path test, subcase #1 failed\n");
-	of_node_put(np);
-
-	np = of_find_node_opts_by_path("testcase-alias:testaliasoption", NULL);
-	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(
-			test, np, "NULL option alias path test failed\n");
-	of_node_put(np);
-
-	options = "testoption";
-	np = of_find_node_opts_by_path("testcase-alias", &options);
-	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
-	KUNIT_EXPECT_EQ_MSG(test, options, NULL,
-			    "option clearing test failed\n");
-	of_node_put(np);
-
-	options = "testoption";
-	np = of_find_node_opts_by_path("/", &options);
-	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
-	KUNIT_EXPECT_EQ_MSG(test, options, NULL,
-			    "option clearing root node test failed\n");
-	of_node_put(np);
-}
-
-static void of_unittest_dynamic(struct kunit *test)
-{
-	struct device_node *np;
-	struct property *prop;
-
-	np = of_find_node_by_path("/testcase-data");
-	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
-
-	/* Array of 4 properties for the purpose of testing */
-	prop = kcalloc(4, sizeof(*prop), GFP_KERNEL);
-	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, prop);
-
-	/* Add a new property - should pass*/
-	prop->name = "new-property";
-	prop->value = "new-property-data";
-	prop->length = strlen(prop->value) + 1;
-	KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop), 0,
-			    "Adding a new property failed\n");
-
-	/* Try to add an existing property - should fail */
-	prop++;
-	prop->name = "new-property";
-	prop->value = "new-property-data-should-fail";
-	prop->length = strlen(prop->value) + 1;
-	KUNIT_EXPECT_NE_MSG(test, of_add_property(np, prop), 0,
-			    "Adding an existing property should have failed\n");
-
-	/* Try to modify an existing property - should pass */
-	prop->value = "modify-property-data-should-pass";
-	prop->length = strlen(prop->value) + 1;
-	KUNIT_EXPECT_EQ_MSG(
-		test, of_update_property(np, prop), 0,
-		"Updating an existing property should have passed\n");
-
-	/* Try to modify non-existent property - should pass*/
-	prop++;
-	prop->name = "modify-property";
-	prop->value = "modify-missing-property-data-should-pass";
-	prop->length = strlen(prop->value) + 1;
-	KUNIT_EXPECT_EQ_MSG(test, of_update_property(np, prop), 0,
-			    "Updating a missing property should have passed\n");
-
-	/* Remove property - should pass */
-	KUNIT_EXPECT_EQ_MSG(test, of_remove_property(np, prop), 0,
-			    "Removing a property should have passed\n");
-
-	/* Adding very large property - should pass */
-	prop++;
-	prop->name = "large-property-PAGE_SIZEx8";
-	prop->length = PAGE_SIZE * 8;
-	prop->value = kzalloc(prop->length, GFP_KERNEL);
-	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, prop->value);
-	KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop), 0,
-			    "Adding a large property should have passed\n");
-}
+#include "test-common.h"
 
 static int of_unittest_check_node_linkage(struct device_node *np)
 {
@@ -1177,170 +1000,6 @@ static void of_unittest_platform_populate(struct kunit *test)
 	of_node_put(np);
 }
 
-/**
- *	update_node_properties - adds the properties
- *	of np into dup node (present in live tree) and
- *	updates parent of children of np to dup.
- *
- *	@np:	node whose properties are being added to the live tree
- *	@dup:	node present in live tree to be updated
- */
-static void update_node_properties(struct device_node *np,
-					struct device_node *dup)
-{
-	struct property *prop;
-	struct property *save_next;
-	struct device_node *child;
-	int ret;
-
-	for_each_child_of_node(np, child)
-		child->parent = dup;
-
-	/*
-	 * "unittest internal error: unable to add testdata property"
-	 *
-	 *    If this message reports a property in node '/__symbols__' then
-	 *    the respective unittest overlay contains a label that has the
-	 *    same name as a label in the live devicetree.  The label will
-	 *    be in the live devicetree only if the devicetree source was
-	 *    compiled with the '-@' option.  If you encounter this error,
-	 *    please consider renaming __all__ of the labels in the unittest
-	 *    overlay dts files with an odd prefix that is unlikely to be
-	 *    used in a real devicetree.
-	 */
-
-	/*
-	 * open code for_each_property_of_node() because of_add_property()
-	 * sets prop->next to NULL
-	 */
-	for (prop = np->properties; prop != NULL; prop = save_next) {
-		save_next = prop->next;
-		ret = of_add_property(dup, prop);
-		if (ret)
-			pr_err("unittest internal error: unable to add testdata property %pOF/%s",
-			       np, prop->name);
-	}
-}
-
-/**
- *	attach_node_and_children - attaches nodes
- *	and its children to live tree
- *
- *	@np:	Node to attach to live tree
- */
-static void attach_node_and_children(struct device_node *np)
-{
-	struct device_node *next, *dup, *child;
-	unsigned long flags;
-	const char *full_name;
-
-	full_name = kasprintf(GFP_KERNEL, "%pOF", np);
-
-	if (!strcmp(full_name, "/__local_fixups__") ||
-	    !strcmp(full_name, "/__fixups__"))
-		return;
-
-	dup = of_find_node_by_path(full_name);
-	kfree(full_name);
-	if (dup) {
-		update_node_properties(np, dup);
-		return;
-	}
-
-	child = np->child;
-	np->child = NULL;
-
-	mutex_lock(&of_mutex);
-	raw_spin_lock_irqsave(&devtree_lock, flags);
-	np->sibling = np->parent->child;
-	np->parent->child = np;
-	of_node_clear_flag(np, OF_DETACHED);
-	raw_spin_unlock_irqrestore(&devtree_lock, flags);
-
-	__of_attach_node_sysfs(np);
-	mutex_unlock(&of_mutex);
-
-	while (child) {
-		next = child->sibling;
-		attach_node_and_children(child);
-		child = next;
-	}
-}
-
-/**
- *	unittest_data_add - Reads, copies data from
- *	linked tree and attaches it to the live tree
- */
-static int unittest_data_add(void)
-{
-	void *unittest_data;
-	struct device_node *unittest_data_node, *np;
-	/*
-	 * __dtb_testcases_begin[] and __dtb_testcases_end[] are magically
-	 * created by cmd_dt_S_dtb in scripts/Makefile.lib
-	 */
-	extern uint8_t __dtb_testcases_begin[];
-	extern uint8_t __dtb_testcases_end[];
-	const int size = __dtb_testcases_end - __dtb_testcases_begin;
-	int rc;
-
-	if (!size) {
-		pr_warn("%s: No testcase data to attach; not running tests\n",
-			__func__);
-		return -ENODATA;
-	}
-
-	/* creating copy */
-	unittest_data = kmemdup(__dtb_testcases_begin, size, GFP_KERNEL);
-
-	if (!unittest_data) {
-		pr_warn("%s: Failed to allocate memory for unittest_data; "
-			"not running tests\n", __func__);
-		return -ENOMEM;
-	}
-	of_fdt_unflatten_tree(unittest_data, NULL, &unittest_data_node);
-	if (!unittest_data_node) {
-		pr_warn("%s: No tree to attach; not running tests\n", __func__);
-		return -ENODATA;
-	}
-
-	/*
-	 * This lock normally encloses of_resolve_phandles()
-	 */
-	of_overlay_mutex_lock();
-
-	rc = of_resolve_phandles(unittest_data_node);
-	if (rc) {
-		pr_err("%s: Failed to resolve phandles (rc=%i)\n", __func__, rc);
-		of_overlay_mutex_unlock();
-		return -EINVAL;
-	}
-
-	if (!of_root) {
-		of_root = unittest_data_node;
-		for_each_of_allnodes(np)
-			__of_attach_node_sysfs(np);
-		of_aliases = of_find_node_by_path("/aliases");
-		of_chosen = of_find_node_by_path("/chosen");
-		of_overlay_mutex_unlock();
-		return 0;
-	}
-
-	/* attach the sub-tree to live tree */
-	np = unittest_data_node->child;
-	while (np) {
-		struct device_node *next = np->sibling;
-
-		np->parent = of_root;
-		attach_node_and_children(np);
-		np = next;
-	}
-
-	of_overlay_mutex_unlock();
-
-	return 0;
-}
-
 #ifdef CONFIG_OF_OVERLAY
 static int overlay_data_apply(const char *overlay_name, int *overlay_id);
 
@@ -2563,8 +2222,6 @@ static int of_test_init(struct kunit *test)
 static struct kunit_case of_test_cases[] = {
 	KUNIT_CASE(of_unittest_check_tree_linkage),
 	KUNIT_CASE(of_unittest_check_phandles),
-	KUNIT_CASE(of_unittest_find_node_by_name),
-	KUNIT_CASE(of_unittest_dynamic),
 	KUNIT_CASE(of_unittest_parse_phandle_with_args),
 	KUNIT_CASE(of_unittest_parse_phandle_with_args_map),
 	KUNIT_CASE(of_unittest_printf),
-- 
2.21.0.rc0.258.g878e2cd30e-goog

^ permalink raw reply related	[flat|nested] 316+ messages in thread

* [RFC v4 17/17] of: unittest: split up some super large test cases
@ 2019-02-14 21:37   ` brendanhiggins
  -1 siblings, 0 replies; 316+ messages in thread
From: Brendan Higgins @ 2019-02-14 21:37 UTC (permalink / raw)
  To: keescook, mcgrof, shuah, robh, kieran.bingham, frowand.list
  Cc: gregkh, joel, mpe, joe, brakmo, rostedt, Tim.Bird, khilman,
	julia.lawall, linux-kselftest, kunit-dev, linux-kernel, jdike,
	richard, linux-um, daniel, dri-devel, dan.j.williams,
	linux-nvdimm, knut.omang, devicetree, pmladek, Alexander.Levin,
	amir73il, dan.carpenter, wfg, Brendan Higgins

Split up the super large test cases of_unittest_find_node_by_name and
of_unittest_dynamic into properly sized, more clearly defined test cases.
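
The split of of_unittest_dynamic follows a fixture pattern: the state the
old monolithic case built up as it went (the target node and a pool of
properties) moves into a context struct that the module init hook stashes
on test->priv, and each smaller case pulls it back out. A rough sketch of
that pattern (the init body shown here is an assumption about how the
context gets populated and is not quoted from the patch; the patch's own
init is authoritative, while of_test_dynamic_basic mirrors the case added
below):

	struct of_test_dynamic_context {
		struct device_node *np;
		struct property *prop0;
		struct property *prop1;
	};

	/* Assumed setup: allocate shared state once, hand it to each case. */
	static int of_test_dynamic_init(struct kunit *test)
	{
		struct of_test_dynamic_context *ctx;

		ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
		KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx);

		ctx->np = of_find_node_by_path("/testcase-data");
		KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx->np);

		ctx->prop0 = kzalloc(sizeof(*ctx->prop0), GFP_KERNEL);
		KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx->prop0);
		ctx->prop1 = kzalloc(sizeof(*ctx->prop1), GFP_KERNEL);
		KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx->prop1);

		test->priv = ctx;
		return 0;
	}

	/* Each case reads its fixture back from test->priv. */
	static void of_test_dynamic_basic(struct kunit *test)
	{
		struct of_test_dynamic_context *ctx = test->priv;
		struct device_node *np = ctx->np;
		struct property *prop0 = ctx->prop0;

		prop0->name = "new-property";
		prop0->value = "new-property-data";
		prop0->length = strlen(prop0->value) + 1;
		KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop0), 0,
				    "Adding a new property failed\n");
		KUNIT_EXPECT_EQ(test, of_remove_property(np, prop0), 0);
	}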

Signed-off-by: Brendan Higgins <brendanhiggins@google.com>
---
 drivers/of/base-test.c | 297 ++++++++++++++++++++++++++++++++++-------
 1 file changed, 249 insertions(+), 48 deletions(-)

diff --git a/drivers/of/base-test.c b/drivers/of/base-test.c
index 3d3f4f1b74800..7b44c967ed2fd 100644
--- a/drivers/of/base-test.c
+++ b/drivers/of/base-test.c
@@ -8,10 +8,10 @@
 
 #include "test-common.h"
 
-static void of_unittest_find_node_by_name(struct kunit *test)
+static void of_test_find_node_by_name_basic(struct kunit *test)
 {
 	struct device_node *np;
-	const char *options, *name;
+	const char *name;
 
 	np = of_find_node_by_path("/testcase-data");
 	name = kasprintf(GFP_KERNEL, "%pOF", np);
@@ -20,11 +20,21 @@ static void of_unittest_find_node_by_name(struct kunit *test)
 			       "find /testcase-data failed\n");
 	of_node_put(np);
 	kfree(name);
+}
 
+static void of_test_find_node_by_name_trailing_slash(struct kunit *test)
+{
 	/* Test if trailing '/' works */
 	KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("/testcase-data/"), NULL,
 			    "trailing '/' on /testcase-data/ should fail\n");
 
+}
+
+static void of_test_find_node_by_name_multiple_components(struct kunit *test)
+{
+	struct device_node *np;
+	const char *name;
+
 	np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
 	name = kasprintf(GFP_KERNEL, "%pOF", np);
@@ -33,6 +43,12 @@ static void of_unittest_find_node_by_name(struct kunit *test)
 		"find /testcase-data/phandle-tests/consumer-a failed\n");
 	of_node_put(np);
 	kfree(name);
+}
+
+static void of_test_find_node_by_name_with_alias(struct kunit *test)
+{
+	struct device_node *np;
+	const char *name;
 
 	np = of_find_node_by_path("testcase-alias");
 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
@@ -41,10 +57,23 @@ static void of_unittest_find_node_by_name(struct kunit *test)
 			       "find testcase-alias failed\n");
 	of_node_put(np);
 	kfree(name);
+}
 
+static void of_test_find_node_by_name_with_alias_and_slash(struct kunit *test)
+{
 	/* Test if trailing '/' works on aliases */
 	KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("testcase-alias/"), NULL,
-			    "trailing '/' on testcase-alias/ should fail\n");
+			   "trailing '/' on testcase-alias/ should fail\n");
+}
+
+/*
+ * TODO(brendanhiggins@google.com): This looks like a duplicate of
+ * of_test_find_node_by_name_multiple_components
+ */
+static void of_test_find_node_by_name_multiple_components_2(struct kunit *test)
+{
+	struct device_node *np;
+	const char *name;
 
 	np = of_find_node_by_path("testcase-alias/phandle-tests/consumer-a");
 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
@@ -54,17 +83,33 @@ static void of_unittest_find_node_by_name(struct kunit *test)
 		"find testcase-alias/phandle-tests/consumer-a failed\n");
 	of_node_put(np);
 	kfree(name);
+}
+
+static void of_test_find_node_by_name_missing_path(struct kunit *test)
+{
+	struct device_node *np;
 
 	KUNIT_EXPECT_EQ_MSG(
 		test,
 		np = of_find_node_by_path("/testcase-data/missing-path"), NULL,
 		"non-existent path returned node %pOF\n", np);
 	of_node_put(np);
+}
+
+static void of_test_find_node_by_name_missing_alias(struct kunit *test)
+{
+	struct device_node *np;
 
 	KUNIT_EXPECT_EQ_MSG(
 		test, np = of_find_node_by_path("missing-alias"), NULL,
 		"non-existent alias returned node %pOF\n", np);
 	of_node_put(np);
+}
+
+static void of_test_find_node_by_name_missing_alias_with_relative_path(
+		struct kunit *test)
+{
+	struct device_node *np;
 
 	KUNIT_EXPECT_EQ_MSG(
 		test,
@@ -72,12 +117,24 @@ static void of_unittest_find_node_by_name(struct kunit *test)
 		"non-existent alias with relative path returned node %pOF\n",
 		np);
 	of_node_put(np);
+}
+
+static void of_test_find_node_by_name_with_option(struct kunit *test)
+{
+	struct device_node *np;
+	const char *options;
 
 	np = of_find_node_opts_by_path("/testcase-data:testoption", &options);
 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
 	KUNIT_EXPECT_STREQ_MSG(test, "testoption", options,
 			       "option path test failed\n");
 	of_node_put(np);
+}
+
+static void of_test_find_node_by_name_with_option_and_slash(struct kunit *test)
+{
+	struct device_node *np;
+	const char *options;
 
 	np = of_find_node_opts_by_path("/testcase-data:test/option", &options);
 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
@@ -90,11 +147,22 @@ static void of_unittest_find_node_by_name(struct kunit *test)
 	KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
 			       "option path test, subcase #2 failed\n");
 	of_node_put(np);
+}
+
+static void of_test_find_node_by_name_with_null_option(struct kunit *test)
+{
+	struct device_node *np;
 
 	np = of_find_node_opts_by_path("/testcase-data:testoption", NULL);
 	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(test, np,
 					 "NULL option path test failed\n");
 	of_node_put(np);
+}
+
+static void of_test_find_node_by_name_with_option_alias(struct kunit *test)
+{
+	struct device_node *np;
+	const char *options;
 
 	np = of_find_node_opts_by_path("testcase-alias:testaliasoption",
 				       &options);
@@ -102,6 +170,13 @@ static void of_unittest_find_node_by_name(struct kunit *test)
 	KUNIT_EXPECT_STREQ_MSG(test, "testaliasoption", options,
 			       "option alias path test failed\n");
 	of_node_put(np);
+}
+
+static void of_test_find_node_by_name_with_option_alias_and_slash(
+		struct kunit *test)
+{
+	struct device_node *np;
+	const char *options;
 
 	np = of_find_node_opts_by_path("testcase-alias:test/alias/option",
 				       &options);
@@ -109,11 +184,22 @@ static void of_unittest_find_node_by_name(struct kunit *test)
 	KUNIT_EXPECT_STREQ_MSG(test, "test/alias/option", options,
 			       "option alias path test, subcase #1 failed\n");
 	of_node_put(np);
+}
+
+static void of_test_find_node_by_name_with_null_option_alias(struct kunit *test)
+{
+	struct device_node *np;
 
 	np = of_find_node_opts_by_path("testcase-alias:testaliasoption", NULL);
 	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(
 			test, np, "NULL option alias path test failed\n");
 	of_node_put(np);
+}
+
+static void of_test_find_node_by_name_option_clearing(struct kunit *test)
+{
+	struct device_node *np;
+	const char *options;
 
 	options = "testoption";
 	np = of_find_node_opts_by_path("testcase-alias", &options);
@@ -121,6 +207,12 @@ static void of_unittest_find_node_by_name(struct kunit *test)
 	KUNIT_EXPECT_EQ_MSG(test, options, NULL,
 			    "option clearing test failed\n");
 	of_node_put(np);
+}
+
+static void of_test_find_node_by_name_option_clearing_root(struct kunit *test)
+{
+	struct device_node *np;
+	const char *options;
 
 	options = "testoption";
 	np = of_find_node_opts_by_path("/", &options);
@@ -130,65 +222,147 @@ static void of_unittest_find_node_by_name(struct kunit *test)
 	of_node_put(np);
 }
 
-static void of_unittest_dynamic(struct kunit *test)
+static int of_test_find_node_by_name_init(struct kunit *test)
 {
+	/* adding data for unittest */
+	KUNIT_ASSERT_EQ(test, 0, unittest_data_add());
+
+	if (!of_aliases)
+		of_aliases = of_find_node_by_path("/aliases");
+
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, of_find_node_by_path(
+			"/testcase-data/phandle-tests/consumer-a"));
+
+	return 0;
+}
+
+static struct kunit_case of_test_find_node_by_name_cases[] = {
+	KUNIT_CASE(of_test_find_node_by_name_basic),
+	KUNIT_CASE(of_test_find_node_by_name_trailing_slash),
+	KUNIT_CASE(of_test_find_node_by_name_multiple_components),
+	KUNIT_CASE(of_test_find_node_by_name_with_alias),
+	KUNIT_CASE(of_test_find_node_by_name_with_alias_and_slash),
+	KUNIT_CASE(of_test_find_node_by_name_multiple_components_2),
+	KUNIT_CASE(of_test_find_node_by_name_missing_path),
+	KUNIT_CASE(of_test_find_node_by_name_missing_alias),
+	KUNIT_CASE(of_test_find_node_by_name_missing_alias_with_relative_path),
+	KUNIT_CASE(of_test_find_node_by_name_with_option),
+	KUNIT_CASE(of_test_find_node_by_name_with_option_and_slash),
+	KUNIT_CASE(of_test_find_node_by_name_with_null_option),
+	KUNIT_CASE(of_test_find_node_by_name_with_option_alias),
+	KUNIT_CASE(of_test_find_node_by_name_with_option_alias_and_slash),
+	KUNIT_CASE(of_test_find_node_by_name_with_null_option_alias),
+	KUNIT_CASE(of_test_find_node_by_name_option_clearing),
+	KUNIT_CASE(of_test_find_node_by_name_option_clearing_root),
+	{},
+};
+
+static struct kunit_module of_test_find_node_by_name_module = {
+	.name = "of-test-find-node-by-name",
+	.init = of_test_find_node_by_name_init,
+	.test_cases = of_test_find_node_by_name_cases,
+};
+module_test(of_test_find_node_by_name_module);
+
+struct of_test_dynamic_context {
 	struct device_node *np;
-	struct property *prop;
+	struct property *prop0;
+	struct property *prop1;
+};
 
-	np = of_find_node_by_path("/testcase-data");
-	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+static void of_test_dynamic_basic(struct kunit *test)
+{
+	struct of_test_dynamic_context *ctx = test->priv;
+	struct device_node *np = ctx->np;
+	struct property *prop0 = ctx->prop0;
 
-	/* Array of 4 properties for the purpose of testing */
-	prop = kcalloc(4, sizeof(*prop), GFP_KERNEL);
-	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, prop);
+	/* Add a new property - should pass*/
+	prop0->name = "new-property";
+	prop0->value = "new-property-data";
+	prop0->length = strlen(prop0->value) + 1;
+	KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop0), 0,
+			    "Adding a new property failed\n");
+
+	/* Test that we can remove a property */
+	KUNIT_EXPECT_EQ(test, of_remove_property(np, prop0), 0);
+}
+
+static void of_test_dynamic_add_existing_property(struct kunit *test)
+{
+	struct of_test_dynamic_context *ctx = test->priv;
+	struct device_node *np = ctx->np;
+	struct property *prop0 = ctx->prop0, *prop1 = ctx->prop1;
 
 	/* Add a new property - should pass*/
-	prop->name = "new-property";
-	prop->value = "new-property-data";
-	prop->length = strlen(prop->value) + 1;
-	KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop), 0,
+	prop0->name = "new-property";
+	prop0->value = "new-property-data";
+	prop0->length = strlen(prop0->value) + 1;
+	KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop0), 0,
 			    "Adding a new property failed\n");
 
 	/* Try to add an existing property - should fail */
-	prop++;
-	prop->name = "new-property";
-	prop->value = "new-property-data-should-fail";
-	prop->length = strlen(prop->value) + 1;
-	KUNIT_EXPECT_NE_MSG(test, of_add_property(np, prop), 0,
+	prop1->name = "new-property";
+	prop1->value = "new-property-data-should-fail";
+	prop1->length = strlen(prop1->value) + 1;
+	KUNIT_EXPECT_NE_MSG(test, of_add_property(np, prop1), 0,
 			    "Adding an existing property should have failed\n");
+}
+
+static void of_test_dynamic_modify_existing_property(struct kunit *test)
+{
+	struct of_test_dynamic_context *ctx = test->priv;
+	struct device_node *np = ctx->np;
+	struct property *prop0 = ctx->prop0, *prop1 = ctx->prop1;
+
+	/* Add a new property - should pass*/
+	prop0->name = "new-property";
+	prop0->value = "new-property-data";
+	prop0->length = strlen(prop0->value) + 1;
+	KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop0), 0,
+			    "Adding a new property failed\n");
 
 	/* Try to modify an existing property - should pass */
-	prop->value = "modify-property-data-should-pass";
-	prop->length = strlen(prop->value) + 1;
-	KUNIT_EXPECT_EQ_MSG(
-		test, of_update_property(np, prop), 0,
-		"Updating an existing property should have passed\n");
+	prop1->name = "new-property";
+	prop1->value = "modify-property-data-should-pass";
+	prop1->length = strlen(prop1->value) + 1;
+	KUNIT_EXPECT_EQ_MSG(test, of_update_property(np, prop1), 0,
+			    "Updating an existing property should have passed\n");
+}
+
+static void of_test_dynamic_modify_non_existent_property(struct kunit *test)
+{
+	struct of_test_dynamic_context *ctx = test->priv;
+	struct device_node *np = ctx->np;
+	struct property *prop0 = ctx->prop0;
 
 	/* Try to modify non-existent property - should pass*/
-	prop++;
-	prop->name = "modify-property";
-	prop->value = "modify-missing-property-data-should-pass";
-	prop->length = strlen(prop->value) + 1;
-	KUNIT_EXPECT_EQ_MSG(test, of_update_property(np, prop), 0,
+	prop0->name = "modify-property";
+	prop0->value = "modify-missing-property-data-should-pass";
+	prop0->length = strlen(prop0->value) + 1;
+	KUNIT_EXPECT_EQ_MSG(test, of_update_property(np, prop0), 0,
 			    "Updating a missing property should have passed\n");
+}
 
-	/* Remove property - should pass */
-	KUNIT_EXPECT_EQ_MSG(test, of_remove_property(np, prop), 0,
-			    "Removing a property should have passed\n");
+static void of_test_dynamic_large_property(struct kunit *test)
+{
+	struct of_test_dynamic_context *ctx = test->priv;
+	struct device_node *np = ctx->np;
+	struct property *prop0 = ctx->prop0;
 
 	/* Adding very large property - should pass */
-	prop++;
-	prop->name = "large-property-PAGE_SIZEx8";
-	prop->length = PAGE_SIZE * 8;
-	prop->value = kzalloc(prop->length, GFP_KERNEL);
-	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, prop->value);
-	KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop), 0,
+	prop0->name = "large-property-PAGE_SIZEx8";
+	prop0->length = PAGE_SIZE * 8;
+	prop0->value = kunit_kzalloc(test, prop0->length, GFP_KERNEL);
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, prop0->value);
+
+	KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop0), 0,
 			    "Adding a large property should have passed\n");
 }
 
-static int of_test_init(struct kunit *test)
+static int of_test_dynamic_init(struct kunit *test)
 {
-	/* adding data for unittest */
+	struct of_test_dynamic_context *ctx;
+
 	KUNIT_ASSERT_EQ(test, 0, unittest_data_add());
 
 	if (!of_aliases)
@@ -197,18 +371,45 @@ static int of_test_init(struct kunit *test)
 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, of_find_node_by_path(
 			"/testcase-data/phandle-tests/consumer-a"));
 
+	ctx = kunit_kzalloc(test, sizeof(*ctx), GFP_KERNEL);
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx);
+	test->priv = ctx;
+
+	ctx->np = of_find_node_by_path("/testcase-data");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx->np);
+
+	ctx->prop0 = kunit_kzalloc(test, sizeof(*ctx->prop0), GFP_KERNEL);
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx->prop0);
+
+	ctx->prop1 = kunit_kzalloc(test, sizeof(*ctx->prop1), GFP_KERNEL);
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx->prop1);
+
 	return 0;
 }
 
-static struct kunit_case of_test_cases[] = {
-	KUNIT_CASE(of_unittest_find_node_by_name),
-	KUNIT_CASE(of_unittest_dynamic),
+static void of_test_dynamic_exit(struct kunit *test)
+{
+	struct of_test_dynamic_context *ctx = test->priv;
+	struct device_node *np = ctx->np;
+
+	of_remove_property(np, ctx->prop0);
+	of_remove_property(np, ctx->prop1);
+	of_node_put(np);
+}
+
+static struct kunit_case of_test_dynamic_cases[] = {
+	KUNIT_CASE(of_test_dynamic_basic),
+	KUNIT_CASE(of_test_dynamic_add_existing_property),
+	KUNIT_CASE(of_test_dynamic_modify_existing_property),
+	KUNIT_CASE(of_test_dynamic_modify_non_existent_property),
+	KUNIT_CASE(of_test_dynamic_large_property),
 	{},
 };
 
-static struct kunit_module of_test_module = {
-	.name = "of-base-test",
-	.init = of_test_init,
-	.test_cases = of_test_cases,
+static struct kunit_module of_test_dynamic_module = {
+	.name = "of-dynamic-test",
+	.init = of_test_dynamic_init,
+	.exit = of_test_dynamic_exit,
+	.test_cases = of_test_dynamic_cases,
 };
-module_test(of_test_module);
+module_test(of_test_dynamic_module);
-- 
2.21.0.rc0.258.g878e2cd30e-goog

^ permalink raw reply related	[flat|nested] 316+ messages in thread

* Re: [RFC v4 10/17] kunit: test: add test managed resource tests
@ 2019-02-15 20:54         ` Stephen Boyd
  0 siblings, 0 replies; 316+ messages in thread
From: Stephen Boyd @ 2019-02-15 20:54 UTC (permalink / raw)
  To: Brendan Higgins, frowand.list, keescook, kieran.bingham, mcgrof,
	robh, shuah
  Cc: gregkh, joel, mpe, joe, brakmo, rostedt, Tim.Bird, khilman,
	julia.lawall, linux-kselftest, kunit-dev, linux-kernel, jdike,
	richard, linux-um, daniel, dri-devel, dan.j.williams,
	linux-nvdimm, knut.omang, devicetree, pmladek, Alexander.Levin,
	amir73il, dan.carpenter, wfg, Brendan Higgins,
	Avinash Kondareddy

Quoting Brendan Higgins (2019-02-14 13:37:22)
> diff --git a/kunit/test-test.c b/kunit/test-test.c
> index 0b4ad6690310d..bb34431398526 100644
> --- a/kunit/test-test.c
> +++ b/kunit/test-test.c
[...]
> +
> +#define KUNIT_RESOURCE_NUM 5
> +static void kunit_resource_test_cleanup_resources(struct kunit *test)
> +{
> +       int i;
> +       struct kunit_test_resource_context *ctx = test->priv;
> +       struct kunit_resource *resources[KUNIT_RESOURCE_NUM];
> +
> +       for (i = 0; i < KUNIT_RESOURCE_NUM; i++) {

Nitpick: This could use ARRAY_SIZE(resources) and then the #define could
be dropped.
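
For illustration, a minimal sketch of that change (only the array
declaration and the loop bound differ; the rest of the test stays as
quoted):

	struct kunit_resource *resources[5];

	for (i = 0; i < ARRAY_SIZE(resources); i++) {
		resources[i] = kunit_alloc_resource(&ctx->test,
						    fake_resource_init,
						    fake_resource_free,
						    ctx);
	}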

> +               resources[i] = kunit_alloc_resource(&ctx->test,
> +                                                   fake_resource_init,
> +                                                   fake_resource_free,
> +                                                   ctx);
> +       }
> +
> +       kunit_cleanup(&ctx->test);
> +
> +       KUNIT_EXPECT_TRUE(test, list_empty(&ctx->test.resources));
> +}
> +
[...]
> +
> +static struct kunit_case kunit_resource_test_cases[] = {

Can these arrays be const?
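
If the framework can accept them as const (which depends on whether the
runner ever writes into struct kunit_case, e.g. to record per-case
results), a sketch of what that might look like:

	static const struct kunit_case kunit_resource_test_cases[] = {
		KUNIT_CASE(kunit_resource_test_init_resources),
		KUNIT_CASE(kunit_resource_test_alloc_resource),
		KUNIT_CASE(kunit_resource_test_free_resource),
		KUNIT_CASE(kunit_resource_test_cleanup_resources),
		{},
	};

with struct kunit_module's member declared as:

	const struct kunit_case *test_cases;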

> +       KUNIT_CASE(kunit_resource_test_init_resources),
> +       KUNIT_CASE(kunit_resource_test_alloc_resource),
> +       KUNIT_CASE(kunit_resource_test_free_resource),
> +       KUNIT_CASE(kunit_resource_test_cleanup_resources),
> +       {},
> +};

^ permalink raw reply	[flat|nested] 316+ messages in thread

* Re: [RFC v4 02/17] kunit: test: add test resource management API
@ 2019-02-15 21:01     ` Stephen Boyd
  0 siblings, 0 replies; 316+ messages in thread
From: Stephen Boyd @ 2019-02-15 21:01 UTC (permalink / raw)
  To: Brendan Higgins, frowand.list, keescook, kieran.bingham, mcgrof,
	robh, shuah
  Cc: gregkh, joel, mpe, joe, brakmo, rostedt, Tim.Bird, khilman,
	julia.lawall, linux-kselftest, kunit-dev, linux-kernel, jdike,
	richard, linux-um, daniel, dri-devel, dan.j.williams,
	linux-nvdimm, knut.omang, devicetree, pmladek, Alexander.Levin,
	amir73il, dan.carpenter, wfg, Brendan Higgins

Quoting Brendan Higgins (2019-02-14 13:37:14)
> @@ -104,6 +167,7 @@ struct kunit {
>         const char *name; /* Read only after initialization! */
>         spinlock_t lock; /* Gaurds all mutable test state. */
>         bool success; /* Protected by lock. */
> +       struct list_head resources; /* Protected by lock. */
>         void (*vprintk)(const struct kunit *test,
>                         const char *level,
>                         struct va_format *vaf);
> @@ -127,6 +191,51 @@ int kunit_run_tests(struct kunit_module *module);
>                 } \
>                 late_initcall(module_kunit_init##module)
>  
> +/**
> + * kunit_alloc_resource() - Allocates a *test managed resource*.
> + * @test: The test context object.
> + * @init: a user supplied function to initialize the resource.
> + * @free: a user supplied function to free the resource.
> + * @context: for the user to pass in arbitrary data.

Nitpick: "pass in arbitrary data to the init function"? Maybe that
provides some more clarity.
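
As a small sketch of how @context reaches @init (hypothetical names, and
assuming callback signatures of int init(struct kunit_resource *, void *)
and void free(struct kunit_resource *)):

	struct example_ctx { bool initialized; };	/* hypothetical */

	static int example_res_init(struct kunit_resource *res, void *context)
	{
		struct example_ctx *ctx = context; /* same pointer passed below */

		ctx->initialized = true;
		return 0;
	}

	static void example_res_free(struct kunit_resource *res)
	{
	}

	/* later, in a test case, with struct example_ctx *ctx set up: */
	kunit_alloc_resource(test, example_res_init, example_res_free, ctx);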

> + *
> + * Allocates a *test managed resource*, a resource which will automatically be
> + * cleaned up at the end of a test case. See &struct kunit_resource for an
> + * example.
> + */

^ permalink raw reply	[flat|nested] 316+ messages in thread

* Re: [RFC v4 15/17] of: unittest: migrate tests to run on KUnit
@ 2019-02-16  0:24         ` Frank Rowand
  0 siblings, 0 replies; 316+ messages in thread
From: Frank Rowand @ 2019-02-16  0:24 UTC (permalink / raw)
  To: Brendan Higgins, keescook, mcgrof, shuah, robh, kieran.bingham
  Cc: brakmo, pmladek, amir73il, dri-devel, Alexander.Levin,
	linux-kselftest, linux-nvdimm, richard, knut.omang, wfg, joel,
	jdike, dan.carpenter, devicetree, Tim.Bird, linux-um, rostedt,
	julia.lawall, kunit-dev, gregkh, linux-kernel, daniel, mpe,
	joe, khilman

On 2/14/19 1:37 PM, Brendan Higgins wrote:
> Migrate tests, without any cleanup or modifying test logic in any way,
> to run under KUnit using the KUnit expectation and assertion API.
> 
> Signed-off-by: Brendan Higgins <brendanhiggins@google.com>
> ---
>  drivers/of/Kconfig    |    1 +
>  drivers/of/unittest.c | 1310 +++++++++++++++++++++--------------------
>  2 files changed, 671 insertions(+), 640 deletions(-)
> 
> diff --git a/drivers/of/Kconfig b/drivers/of/Kconfig
> index ad3fcad4d75b8..f309399deac20 100644
> --- a/drivers/of/Kconfig
> +++ b/drivers/of/Kconfig
> @@ -15,6 +15,7 @@ if OF
>  config OF_UNITTEST
>  	bool "Device Tree runtime unit tests"
>  	depends on !SPARC
> +	depends on KUNIT
>  	select IRQ_DOMAIN
>  	select OF_EARLY_FLATTREE
>  	select OF_RESOLVE
> diff --git a/drivers/of/unittest.c b/drivers/of/unittest.c

These comments are from applying the patches to 5.0-rc3.

The final hunk of this patch fails to apply because it depends upon

   [PATCH v1 0/1] of: unittest: unflatten device tree on UML when testing.

If I apply that patch then I can apply patches 15 through 17.

If I apply patches 1 through 14 and boot the uml kernel then the devicetree
unittest result is:

  ### dt-test ### FAIL of_unittest_overlay_high_level():2372 overlay_base_root not initialized
  ### dt-test ### end of unittest - 219 passed, 1 failed

This is as expected from your previous reports, and is fixed after
applying

   [PATCH v1 0/1] of: unittest: unflatten device tree on UML when testing.

with the devicetree unittest result of:

   ### dt-test ### end of unittest - 224 passed, 0 failed

After adding patch 15, there are a lot of "unittest internal error" messages.

-Frank


> index effa4e2b9d992..96de69ccb3e63 100644
> --- a/drivers/of/unittest.c
> +++ b/drivers/of/unittest.c
> @@ -26,186 +26,189 @@
>  
>  #include <linux/bitops.h>
>  
> +#include <kunit/test.h>
> +
>  #include "of_private.h"
>  
> -static struct unittest_results {
> -	int passed;
> -	int failed;
> -} unittest_results;
> -
> -#define unittest(result, fmt, ...) ({ \
> -	bool failed = !(result); \
> -	if (failed) { \
> -		unittest_results.failed++; \
> -		pr_err("FAIL %s():%i " fmt, __func__, __LINE__, ##__VA_ARGS__); \
> -	} else { \
> -		unittest_results.passed++; \
> -		pr_debug("pass %s():%i\n", __func__, __LINE__); \
> -	} \
> -	failed; \
> -})
> -
> -static void __init of_unittest_find_node_by_name(void)
> +static void of_unittest_find_node_by_name(struct kunit *test)
>  {
>  	struct device_node *np;
>  	const char *options, *name;
>  
>  	np = of_find_node_by_path("/testcase-data");
>  	name = kasprintf(GFP_KERNEL, "%pOF", np);
> -	unittest(np && !strcmp("/testcase-data", name),
> -		"find /testcase-data failed\n");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> +	KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
> +			       "find /testcase-data failed\n");
>  	of_node_put(np);
>  	kfree(name);
>  
>  	/* Test if trailing '/' works */
> -	np = of_find_node_by_path("/testcase-data/");
> -	unittest(!np, "trailing '/' on /testcase-data/ should fail\n");
> +	KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("/testcase-data/"), NULL,
> +			    "trailing '/' on /testcase-data/ should fail\n");
>  
>  	np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>  	name = kasprintf(GFP_KERNEL, "%pOF", np);
> -	unittest(np && !strcmp("/testcase-data/phandle-tests/consumer-a", name),
> +	KUNIT_EXPECT_STREQ_MSG(
> +		test, "/testcase-data/phandle-tests/consumer-a", name,
>  		"find /testcase-data/phandle-tests/consumer-a failed\n");
>  	of_node_put(np);
>  	kfree(name);
>  
>  	np = of_find_node_by_path("testcase-alias");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>  	name = kasprintf(GFP_KERNEL, "%pOF", np);
> -	unittest(np && !strcmp("/testcase-data", name),
> -		"find testcase-alias failed\n");
> +	KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
> +			       "find testcase-alias failed\n");
>  	of_node_put(np);
>  	kfree(name);
>  
>  	/* Test if trailing '/' works on aliases */
> -	np = of_find_node_by_path("testcase-alias/");
> -	unittest(!np, "trailing '/' on testcase-alias/ should fail\n");
> +	KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("testcase-alias/"), NULL,
> +			    "trailing '/' on testcase-alias/ should fail\n");
>  
>  	np = of_find_node_by_path("testcase-alias/phandle-tests/consumer-a");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>  	name = kasprintf(GFP_KERNEL, "%pOF", np);
> -	unittest(np && !strcmp("/testcase-data/phandle-tests/consumer-a", name),
> +	KUNIT_EXPECT_STREQ_MSG(
> +		test, "/testcase-data/phandle-tests/consumer-a", name,
>  		"find testcase-alias/phandle-tests/consumer-a failed\n");
>  	of_node_put(np);
>  	kfree(name);
>  
> -	np = of_find_node_by_path("/testcase-data/missing-path");
> -	unittest(!np, "non-existent path returned node %pOF\n", np);
> +	KUNIT_EXPECT_EQ_MSG(
> +		test,
> +		np = of_find_node_by_path("/testcase-data/missing-path"), NULL,
> +		"non-existent path returned node %pOF\n", np);
>  	of_node_put(np);
>  
> -	np = of_find_node_by_path("missing-alias");
> -	unittest(!np, "non-existent alias returned node %pOF\n", np);
> +	KUNIT_EXPECT_EQ_MSG(
> +		test, np = of_find_node_by_path("missing-alias"), NULL,
> +		"non-existent alias returned node %pOF\n", np);
>  	of_node_put(np);
>  
> -	np = of_find_node_by_path("testcase-alias/missing-path");
> -	unittest(!np, "non-existent alias with relative path returned node %pOF\n", np);
> +	KUNIT_EXPECT_EQ_MSG(
> +		test,
> +		np = of_find_node_by_path("testcase-alias/missing-path"), NULL,
> +		"non-existent alias with relative path returned node %pOF\n",
> +		np);
>  	of_node_put(np);
>  
>  	np = of_find_node_opts_by_path("/testcase-data:testoption", &options);
> -	unittest(np && !strcmp("testoption", options),
> -		 "option path test failed\n");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> +	KUNIT_EXPECT_STREQ_MSG(test, "testoption", options,
> +			       "option path test failed\n");
>  	of_node_put(np);
>  
>  	np = of_find_node_opts_by_path("/testcase-data:test/option", &options);
> -	unittest(np && !strcmp("test/option", options),
> -		 "option path test, subcase #1 failed\n");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> +	KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
> +			       "option path test, subcase #1 failed\n");
>  	of_node_put(np);
>  
>  	np = of_find_node_opts_by_path("/testcase-data/testcase-device1:test/option", &options);
> -	unittest(np && !strcmp("test/option", options),
> -		 "option path test, subcase #2 failed\n");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> +	KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
> +			       "option path test, subcase #2 failed\n");
>  	of_node_put(np);
>  
>  	np = of_find_node_opts_by_path("/testcase-data:testoption", NULL);
> -	unittest(np, "NULL option path test failed\n");
> +	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(test, np,
> +					 "NULL option path test failed\n");
>  	of_node_put(np);
>  
>  	np = of_find_node_opts_by_path("testcase-alias:testaliasoption",
>  				       &options);
> -	unittest(np && !strcmp("testaliasoption", options),
> -		 "option alias path test failed\n");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> +	KUNIT_EXPECT_STREQ_MSG(test, "testaliasoption", options,
> +			       "option alias path test failed\n");
>  	of_node_put(np);
>  
>  	np = of_find_node_opts_by_path("testcase-alias:test/alias/option",
>  				       &options);
> -	unittest(np && !strcmp("test/alias/option", options),
> -		 "option alias path test, subcase #1 failed\n");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> +	KUNIT_EXPECT_STREQ_MSG(test, "test/alias/option", options,
> +			       "option alias path test, subcase #1 failed\n");
>  	of_node_put(np);
>  
>  	np = of_find_node_opts_by_path("testcase-alias:testaliasoption", NULL);
> -	unittest(np, "NULL option alias path test failed\n");
> +	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(
> +			test, np, "NULL option alias path test failed\n");
>  	of_node_put(np);
>  
>  	options = "testoption";
>  	np = of_find_node_opts_by_path("testcase-alias", &options);
> -	unittest(np && !options, "option clearing test failed\n");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> +	KUNIT_EXPECT_EQ_MSG(test, options, NULL,
> +			    "option clearing test failed\n");
>  	of_node_put(np);
>  
>  	options = "testoption";
>  	np = of_find_node_opts_by_path("/", &options);
> -	unittest(np && !options, "option clearing root node test failed\n");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> +	KUNIT_EXPECT_EQ_MSG(test, options, NULL,
> +			    "option clearing root node test failed\n");
>  	of_node_put(np);
>  }
>  
> -static void __init of_unittest_dynamic(void)
> +static void of_unittest_dynamic(struct kunit *test)
>  {
>  	struct device_node *np;
>  	struct property *prop;
>  
>  	np = of_find_node_by_path("/testcase-data");
> -	if (!np) {
> -		pr_err("missing testcase data\n");
> -		return;
> -	}
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>  
>  	/* Array of 4 properties for the purpose of testing */
>  	prop = kcalloc(4, sizeof(*prop), GFP_KERNEL);
> -	if (!prop) {
> -		unittest(0, "kzalloc() failed\n");
> -		return;
> -	}
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, prop);
>  
>  	/* Add a new property - should pass*/
>  	prop->name = "new-property";
>  	prop->value = "new-property-data";
>  	prop->length = strlen(prop->value) + 1;
> -	unittest(of_add_property(np, prop) == 0, "Adding a new property failed\n");
> +	KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop), 0,
> +			    "Adding a new property failed\n");
>  
>  	/* Try to add an existing property - should fail */
>  	prop++;
>  	prop->name = "new-property";
>  	prop->value = "new-property-data-should-fail";
>  	prop->length = strlen(prop->value) + 1;
> -	unittest(of_add_property(np, prop) != 0,
> -		 "Adding an existing property should have failed\n");
> +	KUNIT_EXPECT_NE_MSG(test, of_add_property(np, prop), 0,
> +			    "Adding an existing property should have failed\n");
>  
>  	/* Try to modify an existing property - should pass */
>  	prop->value = "modify-property-data-should-pass";
>  	prop->length = strlen(prop->value) + 1;
> -	unittest(of_update_property(np, prop) == 0,
> -		 "Updating an existing property should have passed\n");
> +	KUNIT_EXPECT_EQ_MSG(
> +		test, of_update_property(np, prop), 0,
> +		"Updating an existing property should have passed\n");
>  
>  	/* Try to modify non-existent property - should pass*/
>  	prop++;
>  	prop->name = "modify-property";
>  	prop->value = "modify-missing-property-data-should-pass";
>  	prop->length = strlen(prop->value) + 1;
> -	unittest(of_update_property(np, prop) == 0,
> -		 "Updating a missing property should have passed\n");
> +	KUNIT_EXPECT_EQ_MSG(test, of_update_property(np, prop), 0,
> +			    "Updating a missing property should have passed\n");
>  
>  	/* Remove property - should pass */
> -	unittest(of_remove_property(np, prop) == 0,
> -		 "Removing a property should have passed\n");
> +	KUNIT_EXPECT_EQ_MSG(test, of_remove_property(np, prop), 0,
> +			    "Removing a property should have passed\n");
>  
>  	/* Adding very large property - should pass */
>  	prop++;
>  	prop->name = "large-property-PAGE_SIZEx8";
>  	prop->length = PAGE_SIZE * 8;
>  	prop->value = kzalloc(prop->length, GFP_KERNEL);
> -	unittest(prop->value != NULL, "Unable to allocate large buffer\n");
> -	if (prop->value)
> -		unittest(of_add_property(np, prop) == 0,
> -			 "Adding a large property should have passed\n");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, prop->value);
> +	KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop), 0,
> +			    "Adding a large property should have passed\n");
>  }
>  
> -static int __init of_unittest_check_node_linkage(struct device_node *np)
> +static int of_unittest_check_node_linkage(struct device_node *np)
>  {
>  	struct device_node *child;
>  	int count = 0, rc;
> @@ -230,27 +233,30 @@ static int __init of_unittest_check_node_linkage(struct device_node *np)
>  	return rc;
>  }
>  
> -static void __init of_unittest_check_tree_linkage(void)
> +static void of_unittest_check_tree_linkage(struct kunit *test)
>  {
>  	struct device_node *np;
>  	int allnode_count = 0, child_count;
>  
> -	if (!of_root)
> -		return;
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, of_root);
>  
>  	for_each_of_allnodes(np)
>  		allnode_count++;
>  	child_count = of_unittest_check_node_linkage(of_root);
>  
> -	unittest(child_count > 0, "Device node data structure is corrupted\n");
> -	unittest(child_count == allnode_count,
> -		 "allnodes list size (%i) doesn't match sibling lists size (%i)\n",
> -		 allnode_count, child_count);
> +	KUNIT_EXPECT_GT_MSG(test, child_count, 0,
> +			    "Device node data structure is corrupted\n");
> +	KUNIT_EXPECT_EQ_MSG(
> +		test, child_count, allnode_count,
> +		"allnodes list size (%i) doesn't match sibling lists size (%i)\n",
> +		allnode_count, child_count);
>  	pr_debug("allnodes list size (%i); sibling lists size (%i)\n", allnode_count, child_count);
>  }
>  
> -static void __init of_unittest_printf_one(struct device_node *np, const char *fmt,
> -					  const char *expected)
> +static void of_unittest_printf_one(struct kunit *test,
> +				   struct device_node *np,
> +				   const char *fmt,
> +				   const char *expected)
>  {
>  	unsigned char *buf;
>  	int buf_size;
> @@ -265,8 +271,12 @@ static void __init of_unittest_printf_one(struct device_node *np, const char *fm
>  	memset(buf, 0xff, buf_size);
>  	size = snprintf(buf, buf_size - 2, fmt, np);
>  
> -	/* use strcmp() instead of strncmp() here to be absolutely sure strings match */
> -	unittest((strcmp(buf, expected) == 0) && (buf[size+1] == 0xff),
> +	KUNIT_EXPECT_STREQ_MSG(
> +		test, buf, expected,
> +		"sprintf failed; fmt='%s' expected='%s' rslt='%s'\n",
> +		fmt, expected, buf);
> +	KUNIT_EXPECT_EQ_MSG(
> +		test, buf[size+1], 0xff,
>  		"sprintf failed; fmt='%s' expected='%s' rslt='%s'\n",
>  		fmt, expected, buf);
>  
> @@ -276,44 +286,49 @@ static void __init of_unittest_printf_one(struct device_node *np, const char *fm
>  		/* Clear the buffer, and make sure it works correctly still */
>  		memset(buf, 0xff, buf_size);
>  		snprintf(buf, size+1, fmt, np);
> -		unittest(strncmp(buf, expected, size) == 0 && (buf[size+1] == 0xff),
> +		KUNIT_EXPECT_STREQ_MSG(
> +			test, buf, expected,
> +			"snprintf failed; size=%i fmt='%s' expected='%s' rslt='%s'\n",
> +			size, fmt, expected, buf);
> +		KUNIT_EXPECT_EQ_MSG(
> +			test, buf[size+1], 0xff,
>  			"snprintf failed; size=%i fmt='%s' expected='%s' rslt='%s'\n",
>  			size, fmt, expected, buf);
>  	}
>  	kfree(buf);
>  }
>  
> -static void __init of_unittest_printf(void)
> +static void of_unittest_printf(struct kunit *test)
>  {
>  	struct device_node *np;
>  	const char *full_name = "/testcase-data/platform-tests/test-device@1/dev@100";
>  	char phandle_str[16] = "";
>  
>  	np = of_find_node_by_path(full_name);
> -	if (!np) {
> -		unittest(np, "testcase data missing\n");
> -		return;
> -	}
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>  
>  	num_to_str(phandle_str, sizeof(phandle_str), np->phandle, 0);
>  
> -	of_unittest_printf_one(np, "%pOF",  full_name);
> -	of_unittest_printf_one(np, "%pOFf", full_name);
> -	of_unittest_printf_one(np, "%pOFn", "dev");
> -	of_unittest_printf_one(np, "%2pOFn", "dev");
> -	of_unittest_printf_one(np, "%5pOFn", "  dev");
> -	of_unittest_printf_one(np, "%pOFnc", "dev:test-sub-device");
> -	of_unittest_printf_one(np, "%pOFp", phandle_str);
> -	of_unittest_printf_one(np, "%pOFP", "dev@100");
> -	of_unittest_printf_one(np, "ABC %pOFP ABC", "ABC dev@100 ABC");
> -	of_unittest_printf_one(np, "%10pOFP", "   dev@100");
> -	of_unittest_printf_one(np, "%-10pOFP", "dev@100   ");
> -	of_unittest_printf_one(of_root, "%pOFP", "/");
> -	of_unittest_printf_one(np, "%pOFF", "----");
> -	of_unittest_printf_one(np, "%pOFPF", "dev@100:----");
> -	of_unittest_printf_one(np, "%pOFPFPc", "dev@100:----:dev@100:test-sub-device");
> -	of_unittest_printf_one(np, "%pOFc", "test-sub-device");
> -	of_unittest_printf_one(np, "%pOFC",
> +	of_unittest_printf_one(test, np, "%pOF",  full_name);
> +	of_unittest_printf_one(test, np, "%pOFf", full_name);
> +	of_unittest_printf_one(test, np, "%pOFn", "dev");
> +	of_unittest_printf_one(test, np, "%2pOFn", "dev");
> +	of_unittest_printf_one(test, np, "%5pOFn", "  dev");
> +	of_unittest_printf_one(test, np, "%pOFnc", "dev:test-sub-device");
> +	of_unittest_printf_one(test, np, "%pOFp", phandle_str);
> +	of_unittest_printf_one(test, np, "%pOFP", "dev@100");
> +	of_unittest_printf_one(test, np, "ABC %pOFP ABC", "ABC dev@100 ABC");
> +	of_unittest_printf_one(test, np, "%10pOFP", "   dev@100");
> +	of_unittest_printf_one(test, np, "%-10pOFP", "dev@100   ");
> +	of_unittest_printf_one(test, of_root, "%pOFP", "/");
> +	of_unittest_printf_one(test, np, "%pOFF", "----");
> +	of_unittest_printf_one(test, np, "%pOFPF", "dev@100:----");
> +	of_unittest_printf_one(test,
> +			       np,
> +			       "%pOFPFPc",
> +			       "dev@100:----:dev@100:test-sub-device");
> +	of_unittest_printf_one(test, np, "%pOFc", "test-sub-device");
> +	of_unittest_printf_one(test, np, "%pOFC",
>  			"\"test-sub-device\",\"test-compat2\",\"test-compat3\"");
>  }
>  
> @@ -323,7 +338,7 @@ struct node_hash {
>  };
>  
>  static DEFINE_HASHTABLE(phandle_ht, 8);
> -static void __init of_unittest_check_phandles(void)
> +static void of_unittest_check_phandles(struct kunit *test)
>  {
>  	struct device_node *np;
>  	struct node_hash *nh;
> @@ -335,24 +350,26 @@ static void __init of_unittest_check_phandles(void)
>  			continue;
>  
>  		hash_for_each_possible(phandle_ht, nh, node, np->phandle) {
> +			KUNIT_EXPECT_NE_MSG(
> +				test, nh->np->phandle, np->phandle,
> +				"Duplicate phandle! %i used by %pOF and %pOF\n",
> +				np->phandle, nh->np, np);
>  			if (nh->np->phandle == np->phandle) {
> -				pr_info("Duplicate phandle! %i used by %pOF and %pOF\n",
> -					np->phandle, nh->np, np);
>  				dup_count++;
>  				break;
>  			}
>  		}
>  
>  		nh = kzalloc(sizeof(*nh), GFP_KERNEL);
> -		if (WARN_ON(!nh))
> -			return;
> +		KUNIT_ASSERT_NOT_ERR_OR_NULL(test, nh);
>  
>  		nh->np = np;
>  		hash_add(phandle_ht, &nh->node, np->phandle);
>  		phandle_count++;
>  	}
> -	unittest(dup_count == 0, "Found %i duplicates in %i phandles\n",
> -		 dup_count, phandle_count);
> +	KUNIT_EXPECT_EQ_MSG(test, dup_count, 0,
> +			    "Found %i duplicates in %i phandles\n",
> +			    dup_count, phandle_count);
>  
>  	/* Clean up */
>  	hash_for_each_safe(phandle_ht, i, tmp, nh, node) {
> @@ -361,20 +378,21 @@ static void __init of_unittest_check_phandles(void)
>  	}
>  }
>  
> -static void __init of_unittest_parse_phandle_with_args(void)
> +static void of_unittest_parse_phandle_with_args(struct kunit *test)
>  {
>  	struct device_node *np;
>  	struct of_phandle_args args;
> -	int i, rc;
> +	int i, rc = 0;
>  
>  	np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
> -	if (!np) {
> -		pr_err("missing testcase data\n");
> -		return;
> -	}
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>  
> -	rc = of_count_phandle_with_args(np, "phandle-list", "#phandle-cells");
> -	unittest(rc == 7, "of_count_phandle_with_args() returned %i, expected 7\n", rc);
> +	KUNIT_EXPECT_EQ_MSG(
> +		test,
> +		of_count_phandle_with_args(
> +			np, "phandle-list", "#phandle-cells"),
> +		7,
> +		"of_count_phandle_with_args() returned %i, expected 7\n", rc);
>  
>  	for (i = 0; i < 8; i++) {
>  		bool passed = true;
> @@ -428,85 +446,91 @@ static void __init of_unittest_parse_phandle_with_args(void)
>  			passed = false;
>  		}
>  
> -		unittest(passed, "index %i - data error on node %pOF rc=%i\n",
> -			 i, args.np, rc);
> +		KUNIT_EXPECT_TRUE_MSG(
> +			test, passed,
> +			"index %i - data error on node %pOF rc=%i\n",
> +			i, args.np, rc);
>  	}
>  
>  	/* Check for missing list property */
>  	memset(&args, 0, sizeof(args));
> -	rc = of_parse_phandle_with_args(np, "phandle-list-missing",
> -					"#phandle-cells", 0, &args);
> -	unittest(rc == -ENOENT, "expected:%i got:%i\n", -ENOENT, rc);
> -	rc = of_count_phandle_with_args(np, "phandle-list-missing",
> -					"#phandle-cells");
> -	unittest(rc == -ENOENT, "expected:%i got:%i\n", -ENOENT, rc);
> +	KUNIT_EXPECT_EQ(
> +		test,
> +		of_parse_phandle_with_args(
> +			np, "phandle-list-missing", "#phandle-cells", 0, &args),
> +		-ENOENT);
> +	KUNIT_EXPECT_EQ(
> +		test,
> +		of_count_phandle_with_args(
> +			np, "phandle-list-missing", "#phandle-cells"),
> +		-ENOENT);
>  
>  	/* Check for missing cells property */
>  	memset(&args, 0, sizeof(args));
> -	rc = of_parse_phandle_with_args(np, "phandle-list",
> -					"#phandle-cells-missing", 0, &args);
> -	unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
> -	rc = of_count_phandle_with_args(np, "phandle-list",
> -					"#phandle-cells-missing");
> -	unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
> +	KUNIT_EXPECT_EQ(
> +		test,
> +		of_parse_phandle_with_args(
> +			np, "phandle-list", "#phandle-cells-missing", 0, &args),
> +		-EINVAL);
> +	KUNIT_EXPECT_EQ(
> +		test,
> +		of_count_phandle_with_args(
> +			np, "phandle-list", "#phandle-cells-missing"),
> +		-EINVAL);
>  
>  	/* Check for bad phandle in list */
>  	memset(&args, 0, sizeof(args));
> -	rc = of_parse_phandle_with_args(np, "phandle-list-bad-phandle",
> -					"#phandle-cells", 0, &args);
> -	unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
> -	rc = of_count_phandle_with_args(np, "phandle-list-bad-phandle",
> -					"#phandle-cells");
> -	unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
> +	KUNIT_EXPECT_EQ(
> +		test,
> +		of_parse_phandle_with_args(np, "phandle-list-bad-phandle",
> +					   "#phandle-cells", 0, &args),
> +		-EINVAL);
> +	KUNIT_EXPECT_EQ(
> +		test,
> +		of_count_phandle_with_args(
> +			np, "phandle-list-bad-phandle", "#phandle-cells"),
> +		-EINVAL);
>  
>  	/* Check for incorrectly formed argument list */
>  	memset(&args, 0, sizeof(args));
> -	rc = of_parse_phandle_with_args(np, "phandle-list-bad-args",
> -					"#phandle-cells", 1, &args);
> -	unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
> -	rc = of_count_phandle_with_args(np, "phandle-list-bad-args",
> -					"#phandle-cells");
> -	unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
> +	KUNIT_EXPECT_EQ(
> +		test,
> +		of_parse_phandle_with_args(np, "phandle-list-bad-args",
> +					   "#phandle-cells", 1, &args),
> +		-EINVAL);
> +	KUNIT_EXPECT_EQ(
> +		test,
> +		of_count_phandle_with_args(
> +			np, "phandle-list-bad-args", "#phandle-cells"),
> +		-EINVAL);
>  }
>  
> -static void __init of_unittest_parse_phandle_with_args_map(void)
> +static void of_unittest_parse_phandle_with_args_map(struct kunit *test)
>  {
>  	struct device_node *np, *p0, *p1, *p2, *p3;
>  	struct of_phandle_args args;
>  	int i, rc;
>  
>  	np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-b");
> -	if (!np) {
> -		pr_err("missing testcase data\n");
> -		return;
> -	}
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>  
>  	p0 = of_find_node_by_path("/testcase-data/phandle-tests/provider0");
> -	if (!p0) {
> -		pr_err("missing testcase data\n");
> -		return;
> -	}
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, p0);
>  
>  	p1 = of_find_node_by_path("/testcase-data/phandle-tests/provider1");
> -	if (!p1) {
> -		pr_err("missing testcase data\n");
> -		return;
> -	}
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, p1);
>  
>  	p2 = of_find_node_by_path("/testcase-data/phandle-tests/provider2");
> -	if (!p2) {
> -		pr_err("missing testcase data\n");
> -		return;
> -	}
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, p2);
>  
>  	p3 = of_find_node_by_path("/testcase-data/phandle-tests/provider3");
> -	if (!p3) {
> -		pr_err("missing testcase data\n");
> -		return;
> -	}
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, p3);
>  
> -	rc = of_count_phandle_with_args(np, "phandle-list", "#phandle-cells");
> -	unittest(rc == 7, "of_count_phandle_with_args() returned %i, expected 7\n", rc);
> +	KUNIT_EXPECT_EQ(test,
> +		       of_count_phandle_with_args(np,
> +						  "phandle-list",
> +						  "#phandle-cells"),
> +		       7);
>  
>  	for (i = 0; i < 8; i++) {
>  		bool passed = true;
> @@ -564,121 +588,186 @@ static void __init of_unittest_parse_phandle_with_args_map(void)
>  			passed = false;
>  		}
>  
> -		unittest(passed, "index %i - data error on node %s rc=%i\n",
> -			 i, args.np->full_name, rc);
> +		KUNIT_EXPECT_TRUE_MSG(
> +			test, passed,
> +			"index %i - data error on node %s rc=%i\n",
> +			i, (args.np ? args.np->full_name : "missing np"), rc);
>  	}
>  
>  	/* Check for missing list property */
>  	memset(&args, 0, sizeof(args));
> -	rc = of_parse_phandle_with_args_map(np, "phandle-list-missing",
> -					    "phandle", 0, &args);
> -	unittest(rc == -ENOENT, "expected:%i got:%i\n", -ENOENT, rc);
> +	KUNIT_EXPECT_EQ(
> +		test,
> +		of_parse_phandle_with_args_map(
> +			np, "phandle-list-missing", "phandle", 0, &args),
> +		-ENOENT);
>  
>  	/* Check for missing cells,map,mask property */
>  	memset(&args, 0, sizeof(args));
> -	rc = of_parse_phandle_with_args_map(np, "phandle-list",
> -					    "phandle-missing", 0, &args);
> -	unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
> +	KUNIT_EXPECT_EQ(
> +		test,
> +		of_parse_phandle_with_args_map(
> +			np, "phandle-list", "phandle-missing", 0, &args),
> +		-EINVAL);
>  
>  	/* Check for bad phandle in list */
>  	memset(&args, 0, sizeof(args));
> -	rc = of_parse_phandle_with_args_map(np, "phandle-list-bad-phandle",
> -					    "phandle", 0, &args);
> -	unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
> +	KUNIT_EXPECT_EQ(
> +		test,
> +		of_parse_phandle_with_args_map(
> +			np, "phandle-list-bad-phandle", "phandle", 0, &args),
> +		-EINVAL);
>  
>  	/* Check for incorrectly formed argument list */
>  	memset(&args, 0, sizeof(args));
> -	rc = of_parse_phandle_with_args_map(np, "phandle-list-bad-args",
> -					    "phandle", 1, &args);
> -	unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
> +	KUNIT_EXPECT_EQ(
> +		test,
> +		of_parse_phandle_with_args_map(
> +			np, "phandle-list-bad-args", "phandle", 1, &args),
> +		-EINVAL);
>  }
>  
> -static void __init of_unittest_property_string(void)
> +static void of_unittest_property_string(struct kunit *test)
>  {
>  	const char *strings[4];
>  	struct device_node *np;
>  	int rc;
>  
>  	np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
> -	if (!np) {
> -		pr_err("No testcase data in device tree\n");
> -		return;
> -	}
> -
> -	rc = of_property_match_string(np, "phandle-list-names", "first");
> -	unittest(rc == 0, "first expected:0 got:%i\n", rc);
> -	rc = of_property_match_string(np, "phandle-list-names", "second");
> -	unittest(rc == 1, "second expected:1 got:%i\n", rc);
> -	rc = of_property_match_string(np, "phandle-list-names", "third");
> -	unittest(rc == 2, "third expected:2 got:%i\n", rc);
> -	rc = of_property_match_string(np, "phandle-list-names", "fourth");
> -	unittest(rc == -ENODATA, "unmatched string; rc=%i\n", rc);
> -	rc = of_property_match_string(np, "missing-property", "blah");
> -	unittest(rc == -EINVAL, "missing property; rc=%i\n", rc);
> -	rc = of_property_match_string(np, "empty-property", "blah");
> -	unittest(rc == -ENODATA, "empty property; rc=%i\n", rc);
> -	rc = of_property_match_string(np, "unterminated-string", "blah");
> -	unittest(rc == -EILSEQ, "unterminated string; rc=%i\n", rc);
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> +
> +	KUNIT_EXPECT_EQ(
> +		test,
> +		of_property_match_string(np, "phandle-list-names", "first"),
> +		0);
> +	KUNIT_EXPECT_EQ(
> +		test,
> +		of_property_match_string(np, "phandle-list-names", "second"),
> +		1);
> +	KUNIT_EXPECT_EQ(
> +		test,
> +		of_property_match_string(np, "phandle-list-names", "third"),
> +		2);
> +	KUNIT_EXPECT_EQ_MSG(
> +		test,
> +		of_property_match_string(np, "phandle-list-names", "fourth"),
> +		-ENODATA,
> +		"unmatched string");
> +	KUNIT_EXPECT_EQ_MSG(
> +		test,
> +		of_property_match_string(np, "missing-property", "blah"),
> +		-EINVAL,
> +		"missing property");
> +	KUNIT_EXPECT_EQ_MSG(
> +		test,
> +		of_property_match_string(np, "empty-property", "blah"),
> +		-ENODATA,
> +		"empty property");
> +	KUNIT_EXPECT_EQ_MSG(
> +		test,
> +		of_property_match_string(np, "unterminated-string", "blah"),
> +		-EILSEQ,
> +		"unterminated string");
>  
>  	/* of_property_count_strings() tests */
> -	rc = of_property_count_strings(np, "string-property");
> -	unittest(rc == 1, "Incorrect string count; rc=%i\n", rc);
> -	rc = of_property_count_strings(np, "phandle-list-names");
> -	unittest(rc == 3, "Incorrect string count; rc=%i\n", rc);
> -	rc = of_property_count_strings(np, "unterminated-string");
> -	unittest(rc == -EILSEQ, "unterminated string; rc=%i\n", rc);
> -	rc = of_property_count_strings(np, "unterminated-string-list");
> -	unittest(rc == -EILSEQ, "unterminated string array; rc=%i\n", rc);
> +	KUNIT_EXPECT_EQ(test,
> +			of_property_count_strings(np, "string-property"), 1);
> +	KUNIT_EXPECT_EQ(test,
> +			of_property_count_strings(np, "phandle-list-names"), 3);
> +	KUNIT_EXPECT_EQ_MSG(
> +		test,
> +		of_property_count_strings(np, "unterminated-string"), -EILSEQ,
> +		"unterminated string");
> +	KUNIT_EXPECT_EQ_MSG(
> +		test,
> +		of_property_count_strings(np, "unterminated-string-list"),
> +		-EILSEQ,
> +		"unterminated string array");
>  
>  	/* of_property_read_string_index() tests */
>  	rc = of_property_read_string_index(np, "string-property", 0, strings);
> -	unittest(rc == 0 && !strcmp(strings[0], "foobar"), "of_property_read_string_index() failure; rc=%i\n", rc);
> +	KUNIT_ASSERT_EQ(test, rc, 0);
> +	KUNIT_EXPECT_STREQ(test, strings[0], "foobar");
> +
>  	strings[0] = NULL;
>  	rc = of_property_read_string_index(np, "string-property", 1, strings);
> -	unittest(rc == -ENODATA && strings[0] == NULL, "of_property_read_string_index() failure; rc=%i\n", rc);
> +	KUNIT_EXPECT_EQ(test, rc, -ENODATA);
> +	KUNIT_EXPECT_EQ(test, strings[0], NULL);
> +
>  	rc = of_property_read_string_index(np, "phandle-list-names", 0, strings);
> -	unittest(rc == 0 && !strcmp(strings[0], "first"), "of_property_read_string_index() failure; rc=%i\n", rc);
> +	KUNIT_ASSERT_EQ(test, rc, 0);
> +	KUNIT_EXPECT_STREQ(test, strings[0], "first");
> +
>  	rc = of_property_read_string_index(np, "phandle-list-names", 1, strings);
> -	unittest(rc == 0 && !strcmp(strings[0], "second"), "of_property_read_string_index() failure; rc=%i\n", rc);
> +	KUNIT_ASSERT_EQ(test, rc, 0);
> +	KUNIT_EXPECT_STREQ(test, strings[0], "second");
> +
>  	rc = of_property_read_string_index(np, "phandle-list-names", 2, strings);
> -	unittest(rc == 0 && !strcmp(strings[0], "third"), "of_property_read_string_index() failure; rc=%i\n", rc);
> +	KUNIT_ASSERT_EQ(test, rc, 0);
> +	KUNIT_EXPECT_STREQ(test, strings[0], "third");
> +
>  	strings[0] = NULL;
>  	rc = of_property_read_string_index(np, "phandle-list-names", 3, strings);
> -	unittest(rc == -ENODATA && strings[0] == NULL, "of_property_read_string_index() failure; rc=%i\n", rc);
> +	KUNIT_EXPECT_EQ(test, rc, -ENODATA);
> +	KUNIT_EXPECT_EQ(test, strings[0], NULL);
> +
>  	strings[0] = NULL;
>  	rc = of_property_read_string_index(np, "unterminated-string", 0, strings);
> -	unittest(rc == -EILSEQ && strings[0] == NULL, "of_property_read_string_index() failure; rc=%i\n", rc);
> +	KUNIT_EXPECT_EQ(test, rc, -EILSEQ);
> +	KUNIT_EXPECT_EQ(test, strings[0], NULL);
> +
>  	rc = of_property_read_string_index(np, "unterminated-string-list", 0, strings);
> -	unittest(rc == 0 && !strcmp(strings[0], "first"), "of_property_read_string_index() failure; rc=%i\n", rc);
> +	KUNIT_ASSERT_EQ(test, rc, 0);
> +	KUNIT_EXPECT_STREQ(test, strings[0], "first");
> +
>  	strings[0] = NULL;
>  	rc = of_property_read_string_index(np, "unterminated-string-list", 2, strings); /* should fail */
> -	unittest(rc == -EILSEQ && strings[0] == NULL, "of_property_read_string_index() failure; rc=%i\n", rc);
> -	strings[1] = NULL;
> +	KUNIT_EXPECT_EQ(test, rc, -EILSEQ);
> +	KUNIT_EXPECT_EQ(test, strings[0], NULL);
>  
>  	/* of_property_read_string_array() tests */
> -	rc = of_property_read_string_array(np, "string-property", strings, 4);
> -	unittest(rc == 1, "Incorrect string count; rc=%i\n", rc);
> -	rc = of_property_read_string_array(np, "phandle-list-names", strings, 4);
> -	unittest(rc == 3, "Incorrect string count; rc=%i\n", rc);
> -	rc = of_property_read_string_array(np, "unterminated-string", strings, 4);
> -	unittest(rc == -EILSEQ, "unterminated string; rc=%i\n", rc);
> +	strings[1] = NULL;
> +	KUNIT_EXPECT_EQ(
> +		test,
> +		of_property_read_string_array(
> +			np, "string-property", strings, 4),
> +		1);
> +	KUNIT_EXPECT_EQ(
> +		test,
> +		of_property_read_string_array(
> +			np, "phandle-list-names", strings, 4),
> +		3);
> +	KUNIT_EXPECT_EQ_MSG(
> +		test,
> +		of_property_read_string_array(
> +			np, "unterminated-string", strings, 4),
> +		-EILSEQ,
> +		"unterminated string");
>  	/* -- An incorrectly formed string should cause a failure */
> -	rc = of_property_read_string_array(np, "unterminated-string-list", strings, 4);
> -	unittest(rc == -EILSEQ, "unterminated string array; rc=%i\n", rc);
> +	KUNIT_EXPECT_EQ_MSG(
> +		test,
> +		of_property_read_string_array(
> +			np, "unterminated-string-list", strings, 4),
> +		-EILSEQ,
> +		"unterminated string array");
>  	/* -- parsing the correctly formed strings should still work: */
>  	strings[2] = NULL;
>  	rc = of_property_read_string_array(np, "unterminated-string-list", strings, 2);
> -	unittest(rc == 2 && strings[2] == NULL, "of_property_read_string_array() failure; rc=%i\n", rc);
> +	KUNIT_EXPECT_EQ(test, rc, 2);
> +	KUNIT_EXPECT_EQ(test, strings[2], NULL);
> +
>  	strings[1] = NULL;
>  	rc = of_property_read_string_array(np, "phandle-list-names", strings, 1);
> -	unittest(rc == 1 && strings[1] == NULL, "Overwrote end of string array; rc=%i, str='%s'\n", rc, strings[1]);
> +	KUNIT_ASSERT_EQ(test, rc, 1);
> +	KUNIT_EXPECT_EQ_MSG(test, strings[1], NULL,
> +			    "Overwrote end of string array");
>  }
>  
>  #define propcmp(p1, p2) (((p1)->length == (p2)->length) && \
>  			(p1)->value && (p2)->value && \
>  			!memcmp((p1)->value, (p2)->value, (p1)->length) && \
>  			!strcmp((p1)->name, (p2)->name))
> -static void __init of_unittest_property_copy(void)
> +static void of_unittest_property_copy(struct kunit *test)
>  {
>  #ifdef CONFIG_OF_DYNAMIC
>  	struct property p1 = { .name = "p1", .length = 0, .value = "" };
> @@ -686,20 +775,24 @@ static void __init of_unittest_property_copy(void)
>  	struct property *new;
>  
>  	new = __of_prop_dup(&p1, GFP_KERNEL);
> -	unittest(new && propcmp(&p1, new), "empty property didn't copy correctly\n");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, new);
> +	KUNIT_EXPECT_TRUE_MSG(test, propcmp(&p1, new),
> +			      "empty property didn't copy correctly");
>  	kfree(new->value);
>  	kfree(new->name);
>  	kfree(new);
>  
>  	new = __of_prop_dup(&p2, GFP_KERNEL);
> -	unittest(new && propcmp(&p2, new), "non-empty property didn't copy correctly\n");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, new);
> +	KUNIT_EXPECT_TRUE_MSG(test, propcmp(&p2, new),
> +			      "non-empty property didn't copy correctly");
>  	kfree(new->value);
>  	kfree(new->name);
>  	kfree(new);
>  #endif
>  }
>  
> -static void __init of_unittest_changeset(void)
> +static void of_unittest_changeset(struct kunit *test)
>  {
>  #ifdef CONFIG_OF_DYNAMIC
>  	struct property *ppadd, padd = { .name = "prop-add", .length = 1, .value = "" };
> @@ -712,32 +805,32 @@ static void __init of_unittest_changeset(void)
>  	struct of_changeset chgset;
>  
>  	n1 = __of_node_dup(NULL, "n1");
> -	unittest(n1, "testcase setup failure\n");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, n1);
>  
>  	n2 = __of_node_dup(NULL, "n2");
> -	unittest(n2, "testcase setup failure\n");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, n2);
>  
>  	n21 = __of_node_dup(NULL, "n21");
> -	unittest(n21, "testcase setup failure %p\n", n21);
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, n21);
>  
>  	nchangeset = of_find_node_by_path("/testcase-data/changeset");
>  	nremove = of_get_child_by_name(nchangeset, "node-remove");
> -	unittest(nremove, "testcase setup failure\n");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, nremove);
>  
>  	ppadd = __of_prop_dup(&padd, GFP_KERNEL);
> -	unittest(ppadd, "testcase setup failure\n");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ppadd);
>  
>  	ppname_n1  = __of_prop_dup(&pname_n1, GFP_KERNEL);
> -	unittest(ppname_n1, "testcase setup failure\n");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ppname_n1);
>  
>  	ppname_n2  = __of_prop_dup(&pname_n2, GFP_KERNEL);
> -	unittest(ppname_n2, "testcase setup failure\n");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ppname_n2);
>  
>  	ppname_n21 = __of_prop_dup(&pname_n21, GFP_KERNEL);
> -	unittest(ppname_n21, "testcase setup failure\n");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ppname_n21);
>  
>  	ppupdate = __of_prop_dup(&pupdate, GFP_KERNEL);
> -	unittest(ppupdate, "testcase setup failure\n");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ppupdate);
>  
>  	parent = nchangeset;
>  	n1->parent = parent;
> @@ -745,54 +838,72 @@ static void __init of_unittest_changeset(void)
>  	n21->parent = n2;
>  
>  	ppremove = of_find_property(parent, "prop-remove", NULL);
> -	unittest(ppremove, "failed to find removal prop");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ppremove);
>  
>  	of_changeset_init(&chgset);
>  
> -	unittest(!of_changeset_attach_node(&chgset, n1), "fail attach n1\n");
> -	unittest(!of_changeset_add_property(&chgset, n1, ppname_n1), "fail add prop name\n");
> -
> -	unittest(!of_changeset_attach_node(&chgset, n2), "fail attach n2\n");
> -	unittest(!of_changeset_add_property(&chgset, n2, ppname_n2), "fail add prop name\n");
> -
> -	unittest(!of_changeset_detach_node(&chgset, nremove), "fail remove node\n");
> -	unittest(!of_changeset_add_property(&chgset, n21, ppname_n21), "fail add prop name\n");
> -
> -	unittest(!of_changeset_attach_node(&chgset, n21), "fail attach n21\n");
> -
> -	unittest(!of_changeset_add_property(&chgset, parent, ppadd), "fail add prop prop-add\n");
> -	unittest(!of_changeset_update_property(&chgset, parent, ppupdate), "fail update prop\n");
> -	unittest(!of_changeset_remove_property(&chgset, parent, ppremove), "fail remove prop\n");
> -
> -	unittest(!of_changeset_apply(&chgset), "apply failed\n");
> +	KUNIT_EXPECT_FALSE_MSG(test, of_changeset_attach_node(&chgset, n1),
> +			       "fail attach n1\n");
> +	KUNIT_EXPECT_FALSE_MSG(
> +		test, of_changeset_add_property(&chgset, n1, ppname_n1),
> +		"fail add prop name\n");
> +
> +	KUNIT_EXPECT_FALSE_MSG(test, of_changeset_attach_node(&chgset, n2),
> +			       "fail attach n2\n");
> +	KUNIT_EXPECT_FALSE_MSG(
> +		test, of_changeset_add_property(&chgset, n2, ppname_n2),
> +			       "fail add prop name\n");
> +
> +	KUNIT_EXPECT_FALSE_MSG(test, of_changeset_detach_node(&chgset, nremove),
> +			       "fail remove node\n");
> +	KUNIT_EXPECT_FALSE_MSG(
> +		test, of_changeset_add_property(&chgset, n21, ppname_n21),
> +		"fail add prop name\n");
> +
> +	KUNIT_EXPECT_FALSE_MSG(test, of_changeset_attach_node(&chgset, n21),
> +			       "fail attach n21\n");
> +
> +	KUNIT_EXPECT_FALSE_MSG(
> +		test,
> +		of_changeset_add_property(&chgset, parent, ppadd),
> +		"fail add prop prop-add\n");
> +	KUNIT_EXPECT_FALSE_MSG(
> +		test,
> +		of_changeset_update_property(&chgset, parent, ppupdate),
> +		"fail update prop\n");
> +	KUNIT_EXPECT_FALSE_MSG(
> +		test,
> +		of_changeset_remove_property(&chgset, parent, ppremove),
> +		"fail remove prop\n");
> +
> +	KUNIT_EXPECT_FALSE_MSG(test, of_changeset_apply(&chgset),
> +			       "apply failed\n");
>  
>  	of_node_put(nchangeset);
>  
>  	/* Make sure node names are constructed correctly */
> -	unittest((np = of_find_node_by_path("/testcase-data/changeset/n2/n21")),
> -		 "'%pOF' not added\n", n21);
> +	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(
> +		test,
> +		np = of_find_node_by_path("/testcase-data/changeset/n2/n21"),
> +		"'%pOF' not added\n", n21);
>  	of_node_put(np);
>  
> -	unittest(!of_changeset_revert(&chgset), "revert failed\n");
> +	KUNIT_EXPECT_FALSE(test, of_changeset_revert(&chgset));
>  
>  	of_changeset_destroy(&chgset);
>  #endif
>  }
>  
> -static void __init of_unittest_parse_interrupts(void)
> +static void of_unittest_parse_interrupts(struct kunit *test)
>  {
>  	struct device_node *np;
>  	struct of_phandle_args args;
>  	int i, rc;
>  
> -	if (of_irq_workarounds & OF_IMAP_OLDWORLD_MAC)
> -		return;
> +	KUNIT_ASSERT_FALSE(test, of_irq_workarounds & OF_IMAP_OLDWORLD_MAC);
>  
>  	np = of_find_node_by_path("/testcase-data/interrupts/interrupts0");
> -	if (!np) {
> -		pr_err("missing testcase data\n");
> -		return;
> -	}
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>  
>  	for (i = 0; i < 4; i++) {
>  		bool passed = true;
> @@ -804,16 +915,15 @@ static void __init of_unittest_parse_interrupts(void)
>  		passed &= (args.args_count == 1);
>  		passed &= (args.args[0] == (i + 1));
>  
> -		unittest(passed, "index %i - data error on node %pOF rc=%i\n",
> -			 i, args.np, rc);
> +		KUNIT_EXPECT_TRUE_MSG(
> +			test, passed,
> +			"index %i - data error on node %pOF rc=%i\n",
> +			i, args.np, rc);
>  	}
>  	of_node_put(np);
>  
>  	np = of_find_node_by_path("/testcase-data/interrupts/interrupts1");
> -	if (!np) {
> -		pr_err("missing testcase data\n");
> -		return;
> -	}
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>  
>  	for (i = 0; i < 4; i++) {
>  		bool passed = true;
> @@ -850,26 +960,24 @@ static void __init of_unittest_parse_interrupts(void)
>  		default:
>  			passed = false;
>  		}
> -		unittest(passed, "index %i - data error on node %pOF rc=%i\n",
> -			 i, args.np, rc);
> +		KUNIT_EXPECT_TRUE_MSG(
> +			test, passed,
> +			"index %i - data error on node %pOF rc=%i\n",
> +			i, args.np, rc);
>  	}
>  	of_node_put(np);
>  }
>  
> -static void __init of_unittest_parse_interrupts_extended(void)
> +static void of_unittest_parse_interrupts_extended(struct kunit *test)
>  {
>  	struct device_node *np;
>  	struct of_phandle_args args;
>  	int i, rc;
>  
> -	if (of_irq_workarounds & OF_IMAP_OLDWORLD_MAC)
> -		return;
> +	KUNIT_ASSERT_FALSE(test, of_irq_workarounds & OF_IMAP_OLDWORLD_MAC);
>  
>  	np = of_find_node_by_path("/testcase-data/interrupts/interrupts-extended0");
> -	if (!np) {
> -		pr_err("missing testcase data\n");
> -		return;
> -	}
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>  
>  	for (i = 0; i < 7; i++) {
>  		bool passed = true;
> @@ -924,8 +1032,10 @@ static void __init of_unittest_parse_interrupts_extended(void)
>  			passed = false;
>  		}
>  
> -		unittest(passed, "index %i - data error on node %pOF rc=%i\n",
> -			 i, args.np, rc);
> +		KUNIT_EXPECT_TRUE_MSG(
> +			test, passed,
> +			"index %i - data error on node %pOF rc=%i\n",
> +			i, args.np, rc);
>  	}
>  	of_node_put(np);
>  }
> @@ -965,7 +1075,7 @@ static struct {
>  	{ .path = "/testcase-data/match-node/name9", .data = "K", },
>  };
>  
> -static void __init of_unittest_match_node(void)
> +static void of_unittest_match_node(struct kunit *test)
>  {
>  	struct device_node *np;
>  	const struct of_device_id *match;
> @@ -973,26 +1083,19 @@ static void __init of_unittest_match_node(void)
>  
>  	for (i = 0; i < ARRAY_SIZE(match_node_tests); i++) {
>  		np = of_find_node_by_path(match_node_tests[i].path);
> -		if (!np) {
> -			unittest(0, "missing testcase node %s\n",
> -				match_node_tests[i].path);
> -			continue;
> -		}
> +		KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>  
>  		match = of_match_node(match_node_table, np);
> -		if (!match) {
> -			unittest(0, "%s didn't match anything\n",
> -				match_node_tests[i].path);
> -			continue;
> -		}
> +		KUNIT_ASSERT_NOT_ERR_OR_NULL_MSG(test, match,
> +						 "%s didn't match anything",
> +						 match_node_tests[i].path);
>  
> -		if (strcmp(match->data, match_node_tests[i].data) != 0) {
> -			unittest(0, "%s got wrong match. expected %s, got %s\n",
> -				match_node_tests[i].path, match_node_tests[i].data,
> -				(const char *)match->data);
> -			continue;
> -		}
> -		unittest(1, "passed");
> +		KUNIT_EXPECT_STREQ_MSG(
> +			test,
> +			match->data, match_node_tests[i].data,
> +			"%s got wrong match. expected %s, got %s\n",
> +			match_node_tests[i].path, match_node_tests[i].data,
> +			(const char *)match->data);
>  	}
>  }
>  
> @@ -1004,9 +1107,9 @@ static struct resource test_bus_res = {
>  static const struct platform_device_info test_bus_info = {
>  	.name = "unittest-bus",
>  };
> -static void __init of_unittest_platform_populate(void)
> +static void of_unittest_platform_populate(struct kunit *test)
>  {
> -	int irq, rc;
> +	int irq;
>  	struct device_node *np, *child, *grandchild;
>  	struct platform_device *pdev, *test_bus;
>  	const struct of_device_id match[] = {
> @@ -1020,32 +1123,27 @@ static void __init of_unittest_platform_populate(void)
>  	/* Test that a missing irq domain returns -EPROBE_DEFER */
>  	np = of_find_node_by_path("/testcase-data/testcase-device1");
>  	pdev = of_find_device_by_node(np);
> -	unittest(pdev, "device 1 creation failed\n");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, pdev);
>  
>  	if (!(of_irq_workarounds & OF_IMAP_OLDWORLD_MAC)) {
>  		irq = platform_get_irq(pdev, 0);
> -		unittest(irq == -EPROBE_DEFER,
> -			 "device deferred probe failed - %d\n", irq);
> +		KUNIT_ASSERT_EQ(test, irq, -EPROBE_DEFER);
>  
>  		/* Test that a parsing failure does not return -EPROBE_DEFER */
>  		np = of_find_node_by_path("/testcase-data/testcase-device2");
>  		pdev = of_find_device_by_node(np);
> -		unittest(pdev, "device 2 creation failed\n");
> +		KUNIT_ASSERT_NOT_ERR_OR_NULL(test, pdev);
>  		irq = platform_get_irq(pdev, 0);
> -		unittest(irq < 0 && irq != -EPROBE_DEFER,
> -			 "device parsing error failed - %d\n", irq);
> +		KUNIT_ASSERT_TRUE_MSG(test, irq < 0 && irq != -EPROBE_DEFER,
> +				      "device parsing error failed - %d\n",
> +				      irq);
>  	}
>  
>  	np = of_find_node_by_path("/testcase-data/platform-tests");
> -	unittest(np, "No testcase data in device tree\n");
> -	if (!np)
> -		return;
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>  
>  	test_bus = platform_device_register_full(&test_bus_info);
> -	rc = PTR_ERR_OR_ZERO(test_bus);
> -	unittest(!rc, "testbus registration failed; rc=%i\n", rc);
> -	if (rc)
> -		return;
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, test_bus);
>  	test_bus->dev.of_node = np;
>  
>  	/*
> @@ -1060,17 +1158,19 @@ static void __init of_unittest_platform_populate(void)
>  	of_platform_populate(np, match, NULL, &test_bus->dev);
>  	for_each_child_of_node(np, child) {
>  		for_each_child_of_node(child, grandchild)
> -			unittest(of_find_device_by_node(grandchild),
> -				 "Could not create device for node '%pOFn'\n",
> -				 grandchild);
> +			KUNIT_EXPECT_TRUE_MSG(
> +				test, of_find_device_by_node(grandchild),
> +				"Could not create device for node '%pOFn'\n",
> +				grandchild);
>  	}
>  
>  	of_platform_depopulate(&test_bus->dev);
>  	for_each_child_of_node(np, child) {
>  		for_each_child_of_node(child, grandchild)
> -			unittest(!of_find_device_by_node(grandchild),
> -				 "device didn't get destroyed '%pOFn'\n",
> -				 grandchild);
> +			KUNIT_EXPECT_FALSE_MSG(
> +				test, of_find_device_by_node(grandchild),
> +				"device didn't get destroyed '%pOFn'\n",
> +				grandchild);
>  	}
>  
>  	platform_device_unregister(test_bus);
> @@ -1171,7 +1271,7 @@ static void attach_node_and_children(struct device_node *np)
>   *	unittest_data_add - Reads, copies data from
>   *	linked tree and attaches it to the live tree
>   */
> -static int __init unittest_data_add(void)
> +static int unittest_data_add(void)
>  {
>  	void *unittest_data;
>  	struct device_node *unittest_data_node, *np;
> @@ -1242,7 +1342,7 @@ static int __init unittest_data_add(void)
>  }
>  
>  #ifdef CONFIG_OF_OVERLAY
> -static int __init overlay_data_apply(const char *overlay_name, int *overlay_id);
> +static int overlay_data_apply(const char *overlay_name, int *overlay_id);
>  
>  static int unittest_probe(struct platform_device *pdev)
>  {
> @@ -1471,172 +1571,146 @@ static void of_unittest_destroy_tracked_overlays(void)
>  	} while (defers > 0);
>  }
>  
> -static int __init of_unittest_apply_overlay(int overlay_nr, int *overlay_id)
> +static int of_unittest_apply_overlay(struct kunit *test,
> +				     int overlay_nr,
> +				     int *overlay_id)
>  {
>  	const char *overlay_name;
>  
>  	overlay_name = overlay_name_from_nr(overlay_nr);
>  
> -	if (!overlay_data_apply(overlay_name, overlay_id)) {
> -		unittest(0, "could not apply overlay \"%s\"\n",
> -				overlay_name);
> -		return -EFAULT;
> -	}
> +	KUNIT_ASSERT_TRUE_MSG(test,
> +			      overlay_data_apply(overlay_name, overlay_id),
> +			      "could not apply overlay \"%s\"\n", overlay_name);
>  	of_unittest_track_overlay(*overlay_id);
>  
>  	return 0;
>  }
>  
>  /* apply an overlay while checking before and after states */
> -static int __init of_unittest_apply_overlay_check(int overlay_nr,
> +static int of_unittest_apply_overlay_check(struct kunit *test, int overlay_nr,
>  		int unittest_nr, int before, int after,
>  		enum overlay_type ovtype)
>  {
>  	int ret, ovcs_id;
>  
>  	/* unittest device must not be in before state */
> -	if (of_unittest_device_exists(unittest_nr, ovtype) != before) {
> -		unittest(0, "%s with device @\"%s\" %s\n",
> -				overlay_name_from_nr(overlay_nr),
> -				unittest_path(unittest_nr, ovtype),
> -				!before ? "enabled" : "disabled");
> -		return -EINVAL;
> -	}
> +	KUNIT_ASSERT_EQ_MSG(
> +		test, of_unittest_device_exists(unittest_nr, ovtype), before,
> +		"%s with device @\"%s\" %s\n", overlay_name_from_nr(overlay_nr),
> +		unittest_path(unittest_nr, ovtype),
> +		!before ? "enabled" : "disabled");
>  
>  	ovcs_id = 0;
> -	ret = of_unittest_apply_overlay(overlay_nr, &ovcs_id);
> +	ret = of_unittest_apply_overlay(test, overlay_nr, &ovcs_id);
>  	if (ret != 0) {
> -		/* of_unittest_apply_overlay already called unittest() */
> +		/* of_unittest_apply_overlay already set expectation */
>  		return ret;
>  	}
>  
>  	/* unittest device must be to set to after state */
> -	if (of_unittest_device_exists(unittest_nr, ovtype) != after) {
> -		unittest(0, "%s failed to create @\"%s\" %s\n",
> -				overlay_name_from_nr(overlay_nr),
> -				unittest_path(unittest_nr, ovtype),
> -				!after ? "enabled" : "disabled");
> -		return -EINVAL;
> -	}
> +	KUNIT_ASSERT_EQ_MSG(
> +		test, of_unittest_device_exists(unittest_nr, ovtype), after,
> +		"%s failed to create @\"%s\" %s\n",
> +		overlay_name_from_nr(overlay_nr),
> +		unittest_path(unittest_nr, ovtype),
> +		!after ? "enabled" : "disabled");
>  
>  	return 0;
>  }
>  
>  /* apply an overlay and then revert it while checking before, after states */
> -static int __init of_unittest_apply_revert_overlay_check(int overlay_nr,
> +static int of_unittest_apply_revert_overlay_check(
> +		struct kunit *test, int overlay_nr,
>  		int unittest_nr, int before, int after,
>  		enum overlay_type ovtype)
>  {
>  	int ret, ovcs_id;
>  
>  	/* unittest device must be in before state */
> -	if (of_unittest_device_exists(unittest_nr, ovtype) != before) {
> -		unittest(0, "%s with device @\"%s\" %s\n",
> -				overlay_name_from_nr(overlay_nr),
> -				unittest_path(unittest_nr, ovtype),
> -				!before ? "enabled" : "disabled");
> -		return -EINVAL;
> -	}
> +	KUNIT_ASSERT_EQ_MSG(
> +		test, of_unittest_device_exists(unittest_nr, ovtype), before,
> +		"%s with device @\"%s\" %s\n", overlay_name_from_nr(overlay_nr),
> +		unittest_path(unittest_nr, ovtype),
> +		!before ? "enabled" : "disabled");
>  
>  	/* apply the overlay */
>  	ovcs_id = 0;
> -	ret = of_unittest_apply_overlay(overlay_nr, &ovcs_id);
> +	ret = of_unittest_apply_overlay(test, overlay_nr, &ovcs_id);
>  	if (ret != 0) {
> -		/* of_unittest_apply_overlay already called unittest() */
> +		/* of_unittest_apply_overlay already set expectation. */
>  		return ret;
>  	}
>  
>  	/* unittest device must be in after state */
> -	if (of_unittest_device_exists(unittest_nr, ovtype) != after) {
> -		unittest(0, "%s failed to create @\"%s\" %s\n",
> -				overlay_name_from_nr(overlay_nr),
> -				unittest_path(unittest_nr, ovtype),
> -				!after ? "enabled" : "disabled");
> -		return -EINVAL;
> -	}
> -
> -	ret = of_overlay_remove(&ovcs_id);
> -	if (ret != 0) {
> -		unittest(0, "%s failed to be destroyed @\"%s\"\n",
> -				overlay_name_from_nr(overlay_nr),
> -				unittest_path(unittest_nr, ovtype));
> -		return ret;
> -	}
> +	KUNIT_ASSERT_EQ_MSG(
> +		test, of_unittest_device_exists(unittest_nr, ovtype), after,
> +		"%s failed to create @\"%s\" %s\n",
> +		overlay_name_from_nr(overlay_nr),
> +		unittest_path(unittest_nr, ovtype),
> +		!after ? "enabled" : "disabled");
> +
> +	KUNIT_ASSERT_EQ_MSG(test, of_overlay_remove(&ovcs_id), 0,
> +			    "%s failed to be destroyed @\"%s\"\n",
> +			    overlay_name_from_nr(overlay_nr),
> +			    unittest_path(unittest_nr, ovtype));
>  
>  	/* unittest device must be again in before state */
> -	if (of_unittest_device_exists(unittest_nr, PDEV_OVERLAY) != before) {
> -		unittest(0, "%s with device @\"%s\" %s\n",
> -				overlay_name_from_nr(overlay_nr),
> -				unittest_path(unittest_nr, ovtype),
> -				!before ? "enabled" : "disabled");
> -		return -EINVAL;
> -	}
> +	KUNIT_ASSERT_EQ_MSG(
> +		test,
> +		of_unittest_device_exists(unittest_nr, PDEV_OVERLAY), before,
> +		"%s with device @\"%s\" %s\n",
> +		overlay_name_from_nr(overlay_nr),
> +		unittest_path(unittest_nr, ovtype),
> +		!before ? "enabled" : "disabled");
>  
>  	return 0;
>  }
>  
>  /* test activation of device */
> -static void __init of_unittest_overlay_0(void)
> +static void of_unittest_overlay_0(struct kunit *test)
>  {
>  	/* device should enable */
> -	if (of_unittest_apply_overlay_check(0, 0, 0, 1, PDEV_OVERLAY))
> -		return;
> -
> -	unittest(1, "overlay test %d passed\n", 0);
> +	of_unittest_apply_overlay_check(test, 0, 0, 0, 1, PDEV_OVERLAY);
>  }
>  
>  /* test deactivation of device */
> -static void __init of_unittest_overlay_1(void)
> +static void of_unittest_overlay_1(struct kunit *test)
>  {
>  	/* device should disable */
> -	if (of_unittest_apply_overlay_check(1, 1, 1, 0, PDEV_OVERLAY))
> -		return;
> -
> -	unittest(1, "overlay test %d passed\n", 1);
> +	of_unittest_apply_overlay_check(test, 1, 1, 1, 0, PDEV_OVERLAY);
>  }
>  
>  /* test activation of device */
> -static void __init of_unittest_overlay_2(void)
> +static void of_unittest_overlay_2(struct kunit *test)
>  {
>  	/* device should enable */
> -	if (of_unittest_apply_overlay_check(2, 2, 0, 1, PDEV_OVERLAY))
> -		return;
> -
> -	unittest(1, "overlay test %d passed\n", 2);
> +	of_unittest_apply_overlay_check(test, 2, 2, 0, 1, PDEV_OVERLAY);
>  }
>  
>  /* test deactivation of device */
> -static void __init of_unittest_overlay_3(void)
> +static void of_unittest_overlay_3(struct kunit *test)
>  {
>  	/* device should disable */
> -	if (of_unittest_apply_overlay_check(3, 3, 1, 0, PDEV_OVERLAY))
> -		return;
> -
> -	unittest(1, "overlay test %d passed\n", 3);
> +	of_unittest_apply_overlay_check(test, 3, 3, 1, 0, PDEV_OVERLAY);
>  }
>  
>  /* test activation of a full device node */
> -static void __init of_unittest_overlay_4(void)
> +static void of_unittest_overlay_4(struct kunit *test)
>  {
>  	/* device should disable */
> -	if (of_unittest_apply_overlay_check(4, 4, 0, 1, PDEV_OVERLAY))
> -		return;
> -
> -	unittest(1, "overlay test %d passed\n", 4);
> +	of_unittest_apply_overlay_check(test, 4, 4, 0, 1, PDEV_OVERLAY);
>  }
>  
>  /* test overlay apply/revert sequence */
> -static void __init of_unittest_overlay_5(void)
> +static void of_unittest_overlay_5(struct kunit *test)
>  {
>  	/* device should disable */
> -	if (of_unittest_apply_revert_overlay_check(5, 5, 0, 1, PDEV_OVERLAY))
> -		return;
> -
> -	unittest(1, "overlay test %d passed\n", 5);
> +	of_unittest_apply_revert_overlay_check(test, 5, 5, 0, 1, PDEV_OVERLAY);
>  }
>  
>  /* test overlay application in sequence */
> -static void __init of_unittest_overlay_6(void)
> +static void of_unittest_overlay_6(struct kunit *test)
>  {
>  	int i, ov_id[2], ovcs_id;
>  	int overlay_nr = 6, unittest_nr = 6;
> @@ -1645,74 +1719,67 @@ static void __init of_unittest_overlay_6(void)
>  
>  	/* unittest device must be in before state */
>  	for (i = 0; i < 2; i++) {
> -		if (of_unittest_device_exists(unittest_nr + i, PDEV_OVERLAY)
> -				!= before) {
> -			unittest(0, "%s with device @\"%s\" %s\n",
> -					overlay_name_from_nr(overlay_nr + i),
> -					unittest_path(unittest_nr + i,
> -						PDEV_OVERLAY),
> -					!before ? "enabled" : "disabled");
> -			return;
> -		}
> +		KUNIT_ASSERT_EQ_MSG(test,
> +				    of_unittest_device_exists(unittest_nr + i,
> +							      PDEV_OVERLAY),
> +				    before,
> +				    "%s with device @\"%s\" %s\n",
> +				    overlay_name_from_nr(overlay_nr + i),
> +				    unittest_path(unittest_nr + i,
> +						  PDEV_OVERLAY),
> +				    !before ? "enabled" : "disabled");
>  	}
>  
>  	/* apply the overlays */
>  	for (i = 0; i < 2; i++) {
> -
>  		overlay_name = overlay_name_from_nr(overlay_nr + i);
>  
> -		if (!overlay_data_apply(overlay_name, &ovcs_id)) {
> -			unittest(0, "could not apply overlay \"%s\"\n",
> -					overlay_name);
> -			return;
> -		}
> +		KUNIT_ASSERT_TRUE_MSG(
> +			test, overlay_data_apply(overlay_name, &ovcs_id),
> +			"could not apply overlay \"%s\"\n", overlay_name);
>  		ov_id[i] = ovcs_id;
>  		of_unittest_track_overlay(ov_id[i]);
>  	}
>  
>  	for (i = 0; i < 2; i++) {
>  		/* unittest device must be in after state */
> -		if (of_unittest_device_exists(unittest_nr + i, PDEV_OVERLAY)
> -				!= after) {
> -			unittest(0, "overlay @\"%s\" failed @\"%s\" %s\n",
> -					overlay_name_from_nr(overlay_nr + i),
> -					unittest_path(unittest_nr + i,
> -						PDEV_OVERLAY),
> -					!after ? "enabled" : "disabled");
> -			return;
> -		}
> +		KUNIT_ASSERT_EQ_MSG(test,
> +				    of_unittest_device_exists(unittest_nr + i,
> +							      PDEV_OVERLAY),
> +				    after,
> +				    "overlay @\"%s\" failed @\"%s\" %s\n",
> +				    overlay_name_from_nr(overlay_nr + i),
> +				    unittest_path(unittest_nr + i,
> +						  PDEV_OVERLAY),
> +				    !after ? "enabled" : "disabled");
>  	}
>  
>  	for (i = 1; i >= 0; i--) {
>  		ovcs_id = ov_id[i];
> -		if (of_overlay_remove(&ovcs_id)) {
> -			unittest(0, "%s failed destroy @\"%s\"\n",
> -					overlay_name_from_nr(overlay_nr + i),
> -					unittest_path(unittest_nr + i,
> -						PDEV_OVERLAY));
> -			return;
> -		}
> +		KUNIT_ASSERT_FALSE_MSG(
> +			test, of_overlay_remove(&ovcs_id),
> +			"%s failed destroy @\"%s\"\n",
> +			overlay_name_from_nr(overlay_nr + i),
> +			unittest_path(unittest_nr + i, PDEV_OVERLAY));
>  		of_unittest_untrack_overlay(ov_id[i]);
>  	}
>  
>  	for (i = 0; i < 2; i++) {
>  		/* unittest device must be again in before state */
> -		if (of_unittest_device_exists(unittest_nr + i, PDEV_OVERLAY)
> -				!= before) {
> -			unittest(0, "%s with device @\"%s\" %s\n",
> -					overlay_name_from_nr(overlay_nr + i),
> -					unittest_path(unittest_nr + i,
> -						PDEV_OVERLAY),
> -					!before ? "enabled" : "disabled");
> -			return;
> -		}
> +		KUNIT_ASSERT_EQ_MSG(test,
> +				    of_unittest_device_exists(unittest_nr + i,
> +							      PDEV_OVERLAY),
> +				    before,
> +				    "%s with device @\"%s\" %s\n",
> +				    overlay_name_from_nr(overlay_nr + i),
> +				    unittest_path(unittest_nr + i,
> +						  PDEV_OVERLAY),
> +				    !before ? "enabled" : "disabled");
>  	}
> -
> -	unittest(1, "overlay test %d passed\n", 6);
>  }
>  
>  /* test overlay application in sequence */
> -static void __init of_unittest_overlay_8(void)
> +static void of_unittest_overlay_8(struct kunit *test)
>  {
>  	int i, ov_id[2], ovcs_id;
>  	int overlay_nr = 8, unittest_nr = 8;
> @@ -1722,76 +1789,64 @@ static void __init of_unittest_overlay_8(void)
>  
>  	/* apply the overlays */
>  	for (i = 0; i < 2; i++) {
> -
>  		overlay_name = overlay_name_from_nr(overlay_nr + i);
>  
> -		if (!overlay_data_apply(overlay_name, &ovcs_id)) {
> -			unittest(0, "could not apply overlay \"%s\"\n",
> -					overlay_name);
> -			return;
> -		}
> +		KUNIT_ASSERT_TRUE_MSG(
> +			test, overlay_data_apply(overlay_name, &ovcs_id),
> +			"could not apply overlay \"%s\"\n", overlay_name);
>  		ov_id[i] = ovcs_id;
>  		of_unittest_track_overlay(ov_id[i]);
>  	}
>  
>  	/* now try to remove first overlay (it should fail) */
>  	ovcs_id = ov_id[0];
> -	if (!of_overlay_remove(&ovcs_id)) {
> -		unittest(0, "%s was destroyed @\"%s\"\n",
> -				overlay_name_from_nr(overlay_nr + 0),
> -				unittest_path(unittest_nr,
> -					PDEV_OVERLAY));
> -		return;
> -	}
> +	KUNIT_ASSERT_TRUE_MSG(
> +		test, of_overlay_remove(&ovcs_id),
> +		"%s was destroyed @\"%s\"\n",
> +		overlay_name_from_nr(overlay_nr + 0),
> +		unittest_path(unittest_nr, PDEV_OVERLAY));
>  
>  	/* removing them in order should work */
>  	for (i = 1; i >= 0; i--) {
>  		ovcs_id = ov_id[i];
> -		if (of_overlay_remove(&ovcs_id)) {
> -			unittest(0, "%s not destroyed @\"%s\"\n",
> -					overlay_name_from_nr(overlay_nr + i),
> -					unittest_path(unittest_nr,
> -						PDEV_OVERLAY));
> -			return;
> -		}
> +		KUNIT_ASSERT_FALSE_MSG(
> +			test, of_overlay_remove(&ovcs_id),
> +			"%s not destroyed @\"%s\"\n",
> +			overlay_name_from_nr(overlay_nr + i),
> +			unittest_path(unittest_nr, PDEV_OVERLAY));
>  		of_unittest_untrack_overlay(ov_id[i]);
>  	}
> -
> -	unittest(1, "overlay test %d passed\n", 8);
>  }
>  
>  /* test insertion of a bus with parent devices */
> -static void __init of_unittest_overlay_10(void)
> +static void of_unittest_overlay_10(struct kunit *test)
>  {
> -	int ret;
>  	char *child_path;
>  
>  	/* device should disable */
> -	ret = of_unittest_apply_overlay_check(10, 10, 0, 1, PDEV_OVERLAY);
> -	if (unittest(ret == 0,
> -			"overlay test %d failed; overlay application\n", 10))
> -		return;
> +	KUNIT_ASSERT_EQ_MSG(
> +		test,
> +		of_unittest_apply_overlay_check(
> +				test, 10, 10, 0, 1, PDEV_OVERLAY),
> +		0,
> +		"overlay test %d failed; overlay application\n", 10);
>  
>  	child_path = kasprintf(GFP_KERNEL, "%s/test-unittest101",
>  			unittest_path(10, PDEV_OVERLAY));
> -	if (unittest(child_path, "overlay test %d failed; kasprintf\n", 10))
> -		return;
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, child_path);
>  
> -	ret = of_path_device_type_exists(child_path, PDEV_OVERLAY);
> +	KUNIT_EXPECT_TRUE_MSG(
> +		test, of_path_device_type_exists(child_path, PDEV_OVERLAY),
> +		"overlay test %d failed; no child device\n", 10);
>  	kfree(child_path);
> -
> -	unittest(ret, "overlay test %d failed; no child device\n", 10);
>  }
>  
>  /* test insertion of a bus with parent devices (and revert) */
> -static void __init of_unittest_overlay_11(void)
> +static void of_unittest_overlay_11(struct kunit *test)
>  {
> -	int ret;
> -
>  	/* device should disable */
> -	ret = of_unittest_apply_revert_overlay_check(11, 11, 0, 1,
> -			PDEV_OVERLAY);
> -	unittest(ret == 0, "overlay test %d failed; overlay apply\n", 11);
> +	KUNIT_EXPECT_FALSE(test, of_unittest_apply_revert_overlay_check(
> +		test, 11, 11, 0, 1, PDEV_OVERLAY));
>  }
>  
>  #if IS_BUILTIN(CONFIG_I2C) && IS_ENABLED(CONFIG_OF_OVERLAY)
> @@ -2013,25 +2068,18 @@ static struct i2c_driver unittest_i2c_mux_driver = {
>  
>  #endif
>  
> -static int of_unittest_overlay_i2c_init(void)
> +static int of_unittest_overlay_i2c_init(struct kunit *test)
>  {
> -	int ret;
> -
> -	ret = i2c_add_driver(&unittest_i2c_dev_driver);
> -	if (unittest(ret == 0,
> -			"could not register unittest i2c device driver\n"))
> -		return ret;
> +	KUNIT_ASSERT_EQ_MSG(test, i2c_add_driver(&unittest_i2c_dev_driver), 0,
> +			    "could not register unittest i2c device driver\n");
>  
> -	ret = platform_driver_register(&unittest_i2c_bus_driver);
> -	if (unittest(ret == 0,
> -			"could not register unittest i2c bus driver\n"))
> -		return ret;
> +	KUNIT_ASSERT_EQ_MSG(
> +		test, platform_driver_register(&unittest_i2c_bus_driver), 0,
> +		"could not register unittest i2c bus driver\n");
>  
>  #if IS_BUILTIN(CONFIG_I2C_MUX)
> -	ret = i2c_add_driver(&unittest_i2c_mux_driver);
> -	if (unittest(ret == 0,
> -			"could not register unittest i2c mux driver\n"))
> -		return ret;
> +	KUNIT_ASSERT_EQ_MSG(test, i2c_add_driver(&unittest_i2c_mux_driver), 0,
> +			    "could not register unittest i2c mux driver\n");
>  #endif
>  
>  	return 0;
> @@ -2046,101 +2094,85 @@ static void of_unittest_overlay_i2c_cleanup(void)
>  	i2c_del_driver(&unittest_i2c_dev_driver);
>  }
>  
> -static void __init of_unittest_overlay_i2c_12(void)
> +static void of_unittest_overlay_i2c_12(struct kunit *test)
>  {
>  	/* device should enable */
> -	if (of_unittest_apply_overlay_check(12, 12, 0, 1, I2C_OVERLAY))
> -		return;
> -
> -	unittest(1, "overlay test %d passed\n", 12);
> +	of_unittest_apply_overlay_check(test, 12, 12, 0, 1, I2C_OVERLAY);
>  }
>  
>  /* test deactivation of device */
> -static void __init of_unittest_overlay_i2c_13(void)
> +static void of_unittest_overlay_i2c_13(struct kunit *test)
>  {
>  	/* device should disable */
> -	if (of_unittest_apply_overlay_check(13, 13, 1, 0, I2C_OVERLAY))
> -		return;
> -
> -	unittest(1, "overlay test %d passed\n", 13);
> +	of_unittest_apply_overlay_check(test, 13, 13, 1, 0, I2C_OVERLAY);
>  }
>  
>  /* just check for i2c mux existence */
> -static void of_unittest_overlay_i2c_14(void)
> +static void of_unittest_overlay_i2c_14(struct kunit *test)
>  {
> +	KUNIT_SUCCEED(test);
>  }
>  
> -static void __init of_unittest_overlay_i2c_15(void)
> +static void of_unittest_overlay_i2c_15(struct kunit *test)
>  {
>  	/* device should enable */
> -	if (of_unittest_apply_overlay_check(15, 15, 0, 1, I2C_OVERLAY))
> -		return;
> -
> -	unittest(1, "overlay test %d passed\n", 15);
> +	of_unittest_apply_overlay_check(test, 15, 15, 0, 1, I2C_OVERLAY);
>  }
>  
>  #else
>  
> -static inline void of_unittest_overlay_i2c_14(void) { }
> -static inline void of_unittest_overlay_i2c_15(void) { }
> +static inline void of_unittest_overlay_i2c_14(struct kunit *test) { }
> +static inline void of_unittest_overlay_i2c_15(struct kunit *test) { }
>  
>  #endif
>  
> -static void __init of_unittest_overlay(void)
> +static void of_unittest_overlay(struct kunit *test)
>  {
>  	struct device_node *bus_np = NULL;
>  
> -	if (platform_driver_register(&unittest_driver)) {
> -		unittest(0, "could not register unittest driver\n");
> -		goto out;
> -	}
> +	KUNIT_ASSERT_FALSE_MSG(test, platform_driver_register(&unittest_driver),
> +			       "could not register unittest driver\n");
>  
>  	bus_np = of_find_node_by_path(bus_path);
> -	if (bus_np == NULL) {
> -		unittest(0, "could not find bus_path \"%s\"\n", bus_path);
> -		goto out;
> -	}
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL_MSG(
> +		test, bus_np, "could not find bus_path \"%s\"\n", bus_path);
>  
> -	if (of_platform_default_populate(bus_np, NULL, NULL)) {
> -		unittest(0, "could not populate bus @ \"%s\"\n", bus_path);
> -		goto out;
> -	}
> -
> -	if (!of_unittest_device_exists(100, PDEV_OVERLAY)) {
> -		unittest(0, "could not find unittest0 @ \"%s\"\n",
> -				unittest_path(100, PDEV_OVERLAY));
> -		goto out;
> -	}
> +	KUNIT_ASSERT_FALSE_MSG(
> +		test, of_platform_default_populate(bus_np, NULL, NULL),
> +		"could not populate bus @ \"%s\"\n", bus_path);
>  
> -	if (of_unittest_device_exists(101, PDEV_OVERLAY)) {
> -		unittest(0, "unittest1 @ \"%s\" should not exist\n",
> -				unittest_path(101, PDEV_OVERLAY));
> -		goto out;
> -	}
> +	KUNIT_ASSERT_TRUE_MSG(
> +		test, of_unittest_device_exists(100, PDEV_OVERLAY),
> +		"could not find unittest0 @ \"%s\"\n",
> +		unittest_path(100, PDEV_OVERLAY));
>  
> -	unittest(1, "basic infrastructure of overlays passed");
> +	KUNIT_ASSERT_FALSE_MSG(
> +		test, of_unittest_device_exists(101, PDEV_OVERLAY),
> +		"unittest1 @ \"%s\" should not exist\n",
> +		unittest_path(101, PDEV_OVERLAY));
>  
>  	/* tests in sequence */
> -	of_unittest_overlay_0();
> -	of_unittest_overlay_1();
> -	of_unittest_overlay_2();
> -	of_unittest_overlay_3();
> -	of_unittest_overlay_4();
> -	of_unittest_overlay_5();
> -	of_unittest_overlay_6();
> -	of_unittest_overlay_8();
> -
> -	of_unittest_overlay_10();
> -	of_unittest_overlay_11();
> +	of_unittest_overlay_0(test);
> +	of_unittest_overlay_1(test);
> +	of_unittest_overlay_2(test);
> +	of_unittest_overlay_3(test);
> +	of_unittest_overlay_4(test);
> +	of_unittest_overlay_5(test);
> +	of_unittest_overlay_6(test);
> +	of_unittest_overlay_8(test);
> +
> +	of_unittest_overlay_10(test);
> +	of_unittest_overlay_11(test);
>  
>  #if IS_BUILTIN(CONFIG_I2C)
> -	if (unittest(of_unittest_overlay_i2c_init() == 0, "i2c init failed\n"))
> -		goto out;
> +	KUNIT_ASSERT_EQ_MSG(test, of_unittest_overlay_i2c_init(test), 0,
> +			    "i2c init failed\n");
> +	goto out;
>  
> -	of_unittest_overlay_i2c_12();
> -	of_unittest_overlay_i2c_13();
> -	of_unittest_overlay_i2c_14();
> -	of_unittest_overlay_i2c_15();
> +	of_unittest_overlay_i2c_12(test);
> +	of_unittest_overlay_i2c_13(test);
> +	of_unittest_overlay_i2c_14(test);
> +	of_unittest_overlay_i2c_15(test);
>  
>  	of_unittest_overlay_i2c_cleanup();
>  #endif
> @@ -2152,7 +2184,7 @@ static void __init of_unittest_overlay(void)
>  }
>  
>  #else
> -static inline void __init of_unittest_overlay(void) { }
> +static inline void of_unittest_overlay(struct kunit *test) { }
>  #endif
>  
>  #ifdef CONFIG_OF_OVERLAY
> @@ -2313,7 +2345,7 @@ void __init unittest_unflatten_overlay_base(void)
>   *
>   * Return 0 on unexpected error.
>   */
> -static int __init overlay_data_apply(const char *overlay_name, int *overlay_id)
> +static int overlay_data_apply(const char *overlay_name, int *overlay_id)
>  {
>  	struct overlay_info *info;
>  	int found = 0;
> @@ -2359,19 +2391,17 @@ static int __init overlay_data_apply(const char *overlay_name, int *overlay_id)
>   * The first part of the function is _not_ normal overlay usage; it is
>   * finishing splicing the base overlay device tree into the live tree.
>   */
> -static __init void of_unittest_overlay_high_level(void)
> +static void of_unittest_overlay_high_level(struct kunit *test)
>  {
>  	struct device_node *last_sibling;
>  	struct device_node *np;
>  	struct device_node *of_symbols;
> -	struct device_node *overlay_base_symbols;
> +	struct device_node *overlay_base_symbols = 0;
>  	struct device_node **pprev;
>  	struct property *prop;
>  
> -	if (!overlay_base_root) {
> -		unittest(0, "overlay_base_root not initialized\n");
> -		return;
> -	}
> +	KUNIT_ASSERT_TRUE_MSG(test, overlay_base_root,
> +			      "overlay_base_root not initialized\n");
>  
>  	/*
>  	 * Could not fixup phandles in unittest_unflatten_overlay_base()
> @@ -2418,11 +2448,9 @@ static __init void of_unittest_overlay_high_level(void)
>  	for_each_child_of_node(overlay_base_root, np) {
>  		struct device_node *base_child;
>  		for_each_child_of_node(of_root, base_child) {
> -			if (!strcmp(np->full_name, base_child->full_name)) {
> -				unittest(0, "illegal node name in overlay_base %pOFn",
> -					 np);
> -				return;
> -			}
> +			KUNIT_ASSERT_STRNEQ_MSG(
> +				test, np->full_name, base_child->full_name,
> +				"illegal node name in overlay_base %pOFn", np);
>  		}
>  	}
>  
> @@ -2456,21 +2484,24 @@ static __init void of_unittest_overlay_high_level(void)
>  
>  			new_prop = __of_prop_dup(prop, GFP_KERNEL);
>  			if (!new_prop) {
> -				unittest(0, "__of_prop_dup() of '%s' from overlay_base node __symbols__",
> -					 prop->name);
> +				KUNIT_FAIL(test,
> +					   "__of_prop_dup() of '%s' from overlay_base node __symbols__",
> +					   prop->name);
>  				goto err_unlock;
>  			}
>  			if (__of_add_property(of_symbols, new_prop)) {
>  				/* "name" auto-generated by unflatten */
>  				if (!strcmp(new_prop->name, "name"))
>  					continue;
> -				unittest(0, "duplicate property '%s' in overlay_base node __symbols__",
> -					 prop->name);
> +				KUNIT_FAIL(test,
> +					   "duplicate property '%s' in overlay_base node __symbols__",
> +					   prop->name);
>  				goto err_unlock;
>  			}
>  			if (__of_add_property_sysfs(of_symbols, new_prop)) {
> -				unittest(0, "unable to add property '%s' in overlay_base node __symbols__ to sysfs",
> -					 prop->name);
> +				KUNIT_FAIL(test,
> +					   "unable to add property '%s' in overlay_base node __symbols__ to sysfs",
> +					   prop->name);
>  				goto err_unlock;
>  			}
>  		}
> @@ -2481,20 +2512,24 @@ static __init void of_unittest_overlay_high_level(void)
>  
>  	/* now do the normal overlay usage test */
>  
> -	unittest(overlay_data_apply("overlay", NULL),
> -		 "Adding overlay 'overlay' failed\n");
> +	KUNIT_EXPECT_TRUE_MSG(test, overlay_data_apply("overlay", NULL),
> +			      "Adding overlay 'overlay' failed\n");
>  
> -	unittest(overlay_data_apply("overlay_bad_add_dup_node", NULL),
> -		 "Adding overlay 'overlay_bad_add_dup_node' failed\n");
> +	KUNIT_EXPECT_TRUE_MSG(
> +		test, overlay_data_apply("overlay_bad_add_dup_node", NULL),
> +		"Adding overlay 'overlay_bad_add_dup_node' failed\n");
>  
> -	unittest(overlay_data_apply("overlay_bad_add_dup_prop", NULL),
> -		 "Adding overlay 'overlay_bad_add_dup_prop' failed\n");
> +	KUNIT_EXPECT_TRUE_MSG(
> +		test, overlay_data_apply("overlay_bad_add_dup_prop", NULL),
> +		"Adding overlay 'overlay_bad_add_dup_prop' failed\n");
>  
> -	unittest(overlay_data_apply("overlay_bad_phandle", NULL),
> -		 "Adding overlay 'overlay_bad_phandle' failed\n");
> +	KUNIT_EXPECT_TRUE_MSG(
> +		test, overlay_data_apply("overlay_bad_phandle", NULL),
> +		"Adding overlay 'overlay_bad_phandle' failed\n");
>  
> -	unittest(overlay_data_apply("overlay_bad_symbol", NULL),
> -		 "Adding overlay 'overlay_bad_symbol' failed\n");
> +	KUNIT_EXPECT_TRUE_MSG(
> +		test, overlay_data_apply("overlay_bad_symbol", NULL),
> +		"Adding overlay 'overlay_bad_symbol' failed\n");
>  
>  	return;
>  
> @@ -2504,57 +2539,52 @@ static __init void of_unittest_overlay_high_level(void)
>  
>  #else
>  
> -static inline __init void of_unittest_overlay_high_level(void) {}
> +static inline void of_unittest_overlay_high_level(struct kunit *test) {}
>  
>  #endif
>  
> -static int __init of_unittest(void)
> +static int of_test_init(struct kunit *test)
>  {
> -	struct device_node *np;
> -	int res;
> -
>  	/* adding data for unittest */
> -	res = unittest_data_add();
> -	if (res)
> -		return res;
> +	KUNIT_ASSERT_EQ(test, 0, unittest_data_add());
> +
>  	if (!of_aliases)
>  		of_aliases = of_find_node_by_path("/aliases");
>  
> -	np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
> -	if (!np) {
> -		pr_info("No testcase data in device tree; not running tests\n");
> -		return 0;
> -	}
> -	of_node_put(np);
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, of_find_node_by_path(
> +		"/testcase-data/phandle-tests/consumer-a"));
>  
>  	if (IS_ENABLED(CONFIG_UML))
>  		unflatten_device_tree();
>  
> -	pr_info("start of unittest - you will see error messages\n");
> -	of_unittest_check_tree_linkage();
> -	of_unittest_check_phandles();
> -	of_unittest_find_node_by_name();
> -	of_unittest_dynamic();
> -	of_unittest_parse_phandle_with_args();
> -	of_unittest_parse_phandle_with_args_map();
> -	of_unittest_printf();
> -	of_unittest_property_string();
> -	of_unittest_property_copy();
> -	of_unittest_changeset();
> -	of_unittest_parse_interrupts();
> -	of_unittest_parse_interrupts_extended();
> -	of_unittest_match_node();
> -	of_unittest_platform_populate();
> -	of_unittest_overlay();
> +	return 0;
> +}
>  
> +static struct kunit_case of_test_cases[] = {
> +	KUNIT_CASE(of_unittest_check_tree_linkage),
> +	KUNIT_CASE(of_unittest_check_phandles),
> +	KUNIT_CASE(of_unittest_find_node_by_name),
> +	KUNIT_CASE(of_unittest_dynamic),
> +	KUNIT_CASE(of_unittest_parse_phandle_with_args),
> +	KUNIT_CASE(of_unittest_parse_phandle_with_args_map),
> +	KUNIT_CASE(of_unittest_printf),
> +	KUNIT_CASE(of_unittest_property_string),
> +	KUNIT_CASE(of_unittest_property_copy),
> +	KUNIT_CASE(of_unittest_changeset),
> +	KUNIT_CASE(of_unittest_parse_interrupts),
> +	KUNIT_CASE(of_unittest_parse_interrupts_extended),
> +	KUNIT_CASE(of_unittest_match_node),
> +	KUNIT_CASE(of_unittest_platform_populate),
> +	KUNIT_CASE(of_unittest_overlay),
>  	/* Double check linkage after removing testcase data */
> -	of_unittest_check_tree_linkage();
> -
> -	of_unittest_overlay_high_level();
> -
> -	pr_info("end of unittest - %i passed, %i failed\n",
> -		unittest_results.passed, unittest_results.failed);
> +	KUNIT_CASE(of_unittest_check_tree_linkage),
> +	KUNIT_CASE(of_unittest_overlay_high_level),
> +	{},
> +};
>  
> -	return 0;
> -}
> -late_initcall(of_unittest);
> +static struct kunit_module of_test_module = {
> +	.name = "of-test",
> +	.init = of_test_init,
> +	.test_cases = of_test_cases,
> +};
> +module_test(of_test_module);
> 
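
To see the end state in isolation: every entry in of_test_cases above
becomes its own test case, and the module's .init (of_test_init here,
which loads the testcase data) is run by the KUnit core before the
cases. A hypothetical extra case would be wired up like this
(of_unittest_example and "of-example" are made-up names, not part of
the patch):

#include <kunit/test.h>

static void of_unittest_example(struct kunit *test)
{
	/* trivial body, just to show the shape of a case */
	KUNIT_EXPECT_EQ(test, 1 + 1, 2);
}

static struct kunit_case of_example_cases[] = {
	KUNIT_CASE(of_unittest_example),
	{},
};

static struct kunit_module of_example_module = {
	.name = "of-example",
	.test_cases = of_example_cases,
};
module_test(of_example_module);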

^ permalink raw reply	[flat|nested] 316+ messages in thread

* Re: [RFC v4 15/17] of: unittest: migrate tests to run on KUnit
@ 2019-02-16  0:24         ` Frank Rowand
  0 siblings, 0 replies; 316+ messages in thread
From: Frank Rowand @ 2019-02-16  0:24 UTC (permalink / raw)
  To: Brendan Higgins, keescook, mcgrof, shuah, robh, kieran.bingham
  Cc: gregkh, joel, mpe, joe, brakmo, rostedt, Tim.Bird, khilman,
	julia.lawall, linux-kselftest, kunit-dev, linux-kernel, jdike,
	richard, linux-um, daniel, dri-devel, dan.j.williams,
	linux-nvdimm, knut.omang, devicetree, pmladek, Alexander.Levin,
	amir73il, dan.carpenter, wfg

On 2/14/19 1:37 PM, Brendan Higgins wrote:
> Migrate tests, without any cleanup or modifying test logic in any way,
> to run under KUnit using the KUnit expectation and assertion API.
> 
> Signed-off-by: Brendan Higgins <brendanhiggins@google.com>
> ---
>  drivers/of/Kconfig    |    1 +
>  drivers/of/unittest.c | 1310 +++++++++++++++++++++--------------------
>  2 files changed, 671 insertions(+), 640 deletions(-)
> 
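
For anyone reading the conversion below without having used the KUnit
API yet, the whole diff follows one pattern; here is an illustrative
sketch (the function below is made up for illustration, it is not taken
from the patch):

#include <linux/of.h>
#include <kunit/test.h>

static void example_conversion(struct kunit *test)
{
	struct device_node *np;

	/* old: np = ...; if (!np) { pr_err("missing testcase data\n"); return; } */
	np = of_find_node_by_path("/testcase-data");
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);

	/* old: unittest(np->child, "no child nodes found\n"); */
	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(test, np->child,
					 "no child nodes found\n");
	of_node_put(np);
}

As I understand the KUnit core, a failed KUNIT_EXPECT_* is logged and
the case keeps running, while a failed KUNIT_ASSERT_* aborts the
current case, which is why most of the old "print an error and return
early" paths below become ASSERTs.
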
> diff --git a/drivers/of/Kconfig b/drivers/of/Kconfig
> index ad3fcad4d75b8..f309399deac20 100644
> --- a/drivers/of/Kconfig
> +++ b/drivers/of/Kconfig
> @@ -15,6 +15,7 @@ if OF
>  config OF_UNITTEST
>  	bool "Device Tree runtime unit tests"
>  	depends on !SPARC
> +	depends on KUNIT
>  	select IRQ_DOMAIN
>  	select OF_EARLY_FLATTREE
>  	select OF_RESOLVE
> diff --git a/drivers/of/unittest.c b/drivers/of/unittest.c

These comments are from applying the patches to 5.0-rc3.

The final hunk of this patch fails to apply because it depends upon

   [PATCH v1 0/1] of: unittest: unflatten device tree on UML when testing.

If I apply that patch then I can apply patches 15 through 17.

If I apply patches 1 through 14 and boot the UML kernel, then the
devicetree unittest result is:

  ### dt-test ### FAIL of_unittest_overlay_high_level():2372 overlay_base_root not initialized
  ### dt-test ### end of unittest - 219 passed, 1 failed

This is as expected from your previous reports, and is fixed after
applying

   [PATCH v1 0/1] of: unittest: unflatten device tree on UML when testing.

with the devicetree unittest result of:

   ### dt-test ### end of unittest - 224 passed, 0 failed

After adding patch 15, there are a lot of "unittest internal error" messages.

-Frank


> index effa4e2b9d992..96de69ccb3e63 100644
> --- a/drivers/of/unittest.c
> +++ b/drivers/of/unittest.c
> @@ -26,186 +26,189 @@
>  
>  #include <linux/bitops.h>
>  
> +#include <kunit/test.h>
> +
>  #include "of_private.h"
>  
> -static struct unittest_results {
> -	int passed;
> -	int failed;
> -} unittest_results;
> -
> -#define unittest(result, fmt, ...) ({ \
> -	bool failed = !(result); \
> -	if (failed) { \
> -		unittest_results.failed++; \
> -		pr_err("FAIL %s():%i " fmt, __func__, __LINE__, ##__VA_ARGS__); \
> -	} else { \
> -		unittest_results.passed++; \
> -		pr_debug("pass %s():%i\n", __func__, __LINE__); \
> -	} \
> -	failed; \
> -})
> -
> -static void __init of_unittest_find_node_by_name(void)
> +static void of_unittest_find_node_by_name(struct kunit *test)
>  {
>  	struct device_node *np;
>  	const char *options, *name;
>  
>  	np = of_find_node_by_path("/testcase-data");
>  	name = kasprintf(GFP_KERNEL, "%pOF", np);
> -	unittest(np && !strcmp("/testcase-data", name),
> -		"find /testcase-data failed\n");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> +	KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
> +			       "find /testcase-data failed\n");
>  	of_node_put(np);
>  	kfree(name);
>  
>  	/* Test if trailing '/' works */
> -	np = of_find_node_by_path("/testcase-data/");
> -	unittest(!np, "trailing '/' on /testcase-data/ should fail\n");
> +	KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("/testcase-data/"), NULL,
> +			    "trailing '/' on /testcase-data/ should fail\n");
>  
>  	np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>  	name = kasprintf(GFP_KERNEL, "%pOF", np);
> -	unittest(np && !strcmp("/testcase-data/phandle-tests/consumer-a", name),
> +	KUNIT_EXPECT_STREQ_MSG(
> +		test, "/testcase-data/phandle-tests/consumer-a", name,
>  		"find /testcase-data/phandle-tests/consumer-a failed\n");
>  	of_node_put(np);
>  	kfree(name);
>  
>  	np = of_find_node_by_path("testcase-alias");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>  	name = kasprintf(GFP_KERNEL, "%pOF", np);
> -	unittest(np && !strcmp("/testcase-data", name),
> -		"find testcase-alias failed\n");
> +	KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
> +			       "find testcase-alias failed\n");
>  	of_node_put(np);
>  	kfree(name);
>  
>  	/* Test if trailing '/' works on aliases */
> -	np = of_find_node_by_path("testcase-alias/");
> -	unittest(!np, "trailing '/' on testcase-alias/ should fail\n");
> +	KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("testcase-alias/"), NULL,
> +			    "trailing '/' on testcase-alias/ should fail\n");
>  
>  	np = of_find_node_by_path("testcase-alias/phandle-tests/consumer-a");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>  	name = kasprintf(GFP_KERNEL, "%pOF", np);
> -	unittest(np && !strcmp("/testcase-data/phandle-tests/consumer-a", name),
> +	KUNIT_EXPECT_STREQ_MSG(
> +		test, "/testcase-data/phandle-tests/consumer-a", name,
>  		"find testcase-alias/phandle-tests/consumer-a failed\n");
>  	of_node_put(np);
>  	kfree(name);
>  
> -	np = of_find_node_by_path("/testcase-data/missing-path");
> -	unittest(!np, "non-existent path returned node %pOF\n", np);
> +	KUNIT_EXPECT_EQ_MSG(
> +		test,
> +		np = of_find_node_by_path("/testcase-data/missing-path"), NULL,
> +		"non-existent path returned node %pOF\n", np);
>  	of_node_put(np);
>  
> -	np = of_find_node_by_path("missing-alias");
> -	unittest(!np, "non-existent alias returned node %pOF\n", np);
> +	KUNIT_EXPECT_EQ_MSG(
> +		test, np = of_find_node_by_path("missing-alias"), NULL,
> +		"non-existent alias returned node %pOF\n", np);
>  	of_node_put(np);
>  
> -	np = of_find_node_by_path("testcase-alias/missing-path");
> -	unittest(!np, "non-existent alias with relative path returned node %pOF\n", np);
> +	KUNIT_EXPECT_EQ_MSG(
> +		test,
> +		np = of_find_node_by_path("testcase-alias/missing-path"), NULL,
> +		"non-existent alias with relative path returned node %pOF\n",
> +		np);
>  	of_node_put(np);
>  
>  	np = of_find_node_opts_by_path("/testcase-data:testoption", &options);
> -	unittest(np && !strcmp("testoption", options),
> -		 "option path test failed\n");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> +	KUNIT_EXPECT_STREQ_MSG(test, "testoption", options,
> +			       "option path test failed\n");
>  	of_node_put(np);
>  
>  	np = of_find_node_opts_by_path("/testcase-data:test/option", &options);
> -	unittest(np && !strcmp("test/option", options),
> -		 "option path test, subcase #1 failed\n");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> +	KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
> +			       "option path test, subcase #1 failed\n");
>  	of_node_put(np);
>  
>  	np = of_find_node_opts_by_path("/testcase-data/testcase-device1:test/option", &options);
> -	unittest(np && !strcmp("test/option", options),
> -		 "option path test, subcase #2 failed\n");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> +	KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
> +			       "option path test, subcase #2 failed\n");
>  	of_node_put(np);
>  
>  	np = of_find_node_opts_by_path("/testcase-data:testoption", NULL);
> -	unittest(np, "NULL option path test failed\n");
> +	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(test, np,
> +					 "NULL option path test failed\n");
>  	of_node_put(np);
>  
>  	np = of_find_node_opts_by_path("testcase-alias:testaliasoption",
>  				       &options);
> -	unittest(np && !strcmp("testaliasoption", options),
> -		 "option alias path test failed\n");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> +	KUNIT_EXPECT_STREQ_MSG(test, "testaliasoption", options,
> +			       "option alias path test failed\n");
>  	of_node_put(np);
>  
>  	np = of_find_node_opts_by_path("testcase-alias:test/alias/option",
>  				       &options);
> -	unittest(np && !strcmp("test/alias/option", options),
> -		 "option alias path test, subcase #1 failed\n");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> +	KUNIT_EXPECT_STREQ_MSG(test, "test/alias/option", options,
> +			       "option alias path test, subcase #1 failed\n");
>  	of_node_put(np);
>  
>  	np = of_find_node_opts_by_path("testcase-alias:testaliasoption", NULL);
> -	unittest(np, "NULL option alias path test failed\n");
> +	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(
> +			test, np, "NULL option alias path test failed\n");
>  	of_node_put(np);
>  
>  	options = "testoption";
>  	np = of_find_node_opts_by_path("testcase-alias", &options);
> -	unittest(np && !options, "option clearing test failed\n");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> +	KUNIT_EXPECT_EQ_MSG(test, options, NULL,
> +			    "option clearing test failed\n");
>  	of_node_put(np);
>  
>  	options = "testoption";
>  	np = of_find_node_opts_by_path("/", &options);
> -	unittest(np && !options, "option clearing root node test failed\n");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> +	KUNIT_EXPECT_EQ_MSG(test, options, NULL,
> +			    "option clearing root node test failed\n");
>  	of_node_put(np);
>  }
>  
> -static void __init of_unittest_dynamic(void)
> +static void of_unittest_dynamic(struct kunit *test)
>  {
>  	struct device_node *np;
>  	struct property *prop;
>  
>  	np = of_find_node_by_path("/testcase-data");
> -	if (!np) {
> -		pr_err("missing testcase data\n");
> -		return;
> -	}
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>  
>  	/* Array of 4 properties for the purpose of testing */
>  	prop = kcalloc(4, sizeof(*prop), GFP_KERNEL);
> -	if (!prop) {
> -		unittest(0, "kzalloc() failed\n");
> -		return;
> -	}
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, prop);
>  
>  	/* Add a new property - should pass*/
>  	prop->name = "new-property";
>  	prop->value = "new-property-data";
>  	prop->length = strlen(prop->value) + 1;
> -	unittest(of_add_property(np, prop) == 0, "Adding a new property failed\n");
> +	KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop), 0,
> +			    "Adding a new property failed\n");
>  
>  	/* Try to add an existing property - should fail */
>  	prop++;
>  	prop->name = "new-property";
>  	prop->value = "new-property-data-should-fail";
>  	prop->length = strlen(prop->value) + 1;
> -	unittest(of_add_property(np, prop) != 0,
> -		 "Adding an existing property should have failed\n");
> +	KUNIT_EXPECT_NE_MSG(test, of_add_property(np, prop), 0,
> +			    "Adding an existing property should have failed\n");
>  
>  	/* Try to modify an existing property - should pass */
>  	prop->value = "modify-property-data-should-pass";
>  	prop->length = strlen(prop->value) + 1;
> -	unittest(of_update_property(np, prop) == 0,
> -		 "Updating an existing property should have passed\n");
> +	KUNIT_EXPECT_EQ_MSG(
> +		test, of_update_property(np, prop), 0,
> +		"Updating an existing property should have passed\n");
>  
>  	/* Try to modify non-existent property - should pass*/
>  	prop++;
>  	prop->name = "modify-property";
>  	prop->value = "modify-missing-property-data-should-pass";
>  	prop->length = strlen(prop->value) + 1;
> -	unittest(of_update_property(np, prop) == 0,
> -		 "Updating a missing property should have passed\n");
> +	KUNIT_EXPECT_EQ_MSG(test, of_update_property(np, prop), 0,
> +			    "Updating a missing property should have passed\n");
>  
>  	/* Remove property - should pass */
> -	unittest(of_remove_property(np, prop) == 0,
> -		 "Removing a property should have passed\n");
> +	KUNIT_EXPECT_EQ_MSG(test, of_remove_property(np, prop), 0,
> +			    "Removing a property should have passed\n");
>  
>  	/* Adding very large property - should pass */
>  	prop++;
>  	prop->name = "large-property-PAGE_SIZEx8";
>  	prop->length = PAGE_SIZE * 8;
>  	prop->value = kzalloc(prop->length, GFP_KERNEL);
> -	unittest(prop->value != NULL, "Unable to allocate large buffer\n");
> -	if (prop->value)
> -		unittest(of_add_property(np, prop) == 0,
> -			 "Adding a large property should have passed\n");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, prop->value);
> +	KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop), 0,
> +			    "Adding a large property should have passed\n");
>  }
>  
> -static int __init of_unittest_check_node_linkage(struct device_node *np)
> +static int of_unittest_check_node_linkage(struct device_node *np)
>  {
>  	struct device_node *child;
>  	int count = 0, rc;
> @@ -230,27 +233,30 @@ static int __init of_unittest_check_node_linkage(struct device_node *np)
>  	return rc;
>  }
>  
> -static void __init of_unittest_check_tree_linkage(void)
> +static void of_unittest_check_tree_linkage(struct kunit *test)
>  {
>  	struct device_node *np;
>  	int allnode_count = 0, child_count;
>  
> -	if (!of_root)
> -		return;
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, of_root);
>  
>  	for_each_of_allnodes(np)
>  		allnode_count++;
>  	child_count = of_unittest_check_node_linkage(of_root);
>  
> -	unittest(child_count > 0, "Device node data structure is corrupted\n");
> -	unittest(child_count == allnode_count,
> -		 "allnodes list size (%i) doesn't match sibling lists size (%i)\n",
> -		 allnode_count, child_count);
> +	KUNIT_EXPECT_GT_MSG(test, child_count, 0,
> +			    "Device node data structure is corrupted\n");
> +	KUNIT_EXPECT_EQ_MSG(
> +		test, child_count, allnode_count,
> +		"allnodes list size (%i) doesn't match sibling lists size (%i)\n",
> +		allnode_count, child_count);
>  	pr_debug("allnodes list size (%i); sibling lists size (%i)\n", allnode_count, child_count);
>  }
>  
> -static void __init of_unittest_printf_one(struct device_node *np, const char *fmt,
> -					  const char *expected)
> +static void of_unittest_printf_one(struct kunit *test,
> +				   struct device_node *np,
> +				   const char *fmt,
> +				   const char *expected)
>  {
>  	unsigned char *buf;
>  	int buf_size;
> @@ -265,8 +271,12 @@ static void __init of_unittest_printf_one(struct device_node *np, const char *fm
>  	memset(buf, 0xff, buf_size);
>  	size = snprintf(buf, buf_size - 2, fmt, np);
>  
> -	/* use strcmp() instead of strncmp() here to be absolutely sure strings match */
> -	unittest((strcmp(buf, expected) == 0) && (buf[size+1] == 0xff),
> +	KUNIT_EXPECT_STREQ_MSG(
> +		test, buf, expected,
> +		"sprintf failed; fmt='%s' expected='%s' rslt='%s'\n",
> +		fmt, expected, buf);
> +	KUNIT_EXPECT_EQ_MSG(
> +		test, buf[size+1], 0xff,
>  		"sprintf failed; fmt='%s' expected='%s' rslt='%s'\n",
>  		fmt, expected, buf);
>  
> @@ -276,44 +286,49 @@ static void __init of_unittest_printf_one(struct device_node *np, const char *fm
>  		/* Clear the buffer, and make sure it works correctly still */
>  		memset(buf, 0xff, buf_size);
>  		snprintf(buf, size+1, fmt, np);
> -		unittest(strncmp(buf, expected, size) == 0 && (buf[size+1] == 0xff),
> +		KUNIT_EXPECT_STREQ_MSG(
> +			test, buf, expected,
> +			"snprintf failed; size=%i fmt='%s' expected='%s' rslt='%s'\n",
> +			size, fmt, expected, buf);
> +		KUNIT_EXPECT_EQ_MSG(
> +			test, buf[size+1], 0xff,
>  			"snprintf failed; size=%i fmt='%s' expected='%s' rslt='%s'\n",
>  			size, fmt, expected, buf);
>  	}
>  	kfree(buf);
>  }
>  
> -static void __init of_unittest_printf(void)
> +static void of_unittest_printf(struct kunit *test)
>  {
>  	struct device_node *np;
>  	const char *full_name = "/testcase-data/platform-tests/test-device@1/dev@100";
>  	char phandle_str[16] = "";
>  
>  	np = of_find_node_by_path(full_name);
> -	if (!np) {
> -		unittest(np, "testcase data missing\n");
> -		return;
> -	}
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>  
>  	num_to_str(phandle_str, sizeof(phandle_str), np->phandle, 0);
>  
> -	of_unittest_printf_one(np, "%pOF",  full_name);
> -	of_unittest_printf_one(np, "%pOFf", full_name);
> -	of_unittest_printf_one(np, "%pOFn", "dev");
> -	of_unittest_printf_one(np, "%2pOFn", "dev");
> -	of_unittest_printf_one(np, "%5pOFn", "  dev");
> -	of_unittest_printf_one(np, "%pOFnc", "dev:test-sub-device");
> -	of_unittest_printf_one(np, "%pOFp", phandle_str);
> -	of_unittest_printf_one(np, "%pOFP", "dev@100");
> -	of_unittest_printf_one(np, "ABC %pOFP ABC", "ABC dev@100 ABC");
> -	of_unittest_printf_one(np, "%10pOFP", "   dev@100");
> -	of_unittest_printf_one(np, "%-10pOFP", "dev@100   ");
> -	of_unittest_printf_one(of_root, "%pOFP", "/");
> -	of_unittest_printf_one(np, "%pOFF", "----");
> -	of_unittest_printf_one(np, "%pOFPF", "dev@100:----");
> -	of_unittest_printf_one(np, "%pOFPFPc", "dev@100:----:dev@100:test-sub-device");
> -	of_unittest_printf_one(np, "%pOFc", "test-sub-device");
> -	of_unittest_printf_one(np, "%pOFC",
> +	of_unittest_printf_one(test, np, "%pOF",  full_name);
> +	of_unittest_printf_one(test, np, "%pOFf", full_name);
> +	of_unittest_printf_one(test, np, "%pOFn", "dev");
> +	of_unittest_printf_one(test, np, "%2pOFn", "dev");
> +	of_unittest_printf_one(test, np, "%5pOFn", "  dev");
> +	of_unittest_printf_one(test, np, "%pOFnc", "dev:test-sub-device");
> +	of_unittest_printf_one(test, np, "%pOFp", phandle_str);
> +	of_unittest_printf_one(test, np, "%pOFP", "dev@100");
> +	of_unittest_printf_one(test, np, "ABC %pOFP ABC", "ABC dev@100 ABC");
> +	of_unittest_printf_one(test, np, "%10pOFP", "   dev@100");
> +	of_unittest_printf_one(test, np, "%-10pOFP", "dev@100   ");
> +	of_unittest_printf_one(test, of_root, "%pOFP", "/");
> +	of_unittest_printf_one(test, np, "%pOFF", "----");
> +	of_unittest_printf_one(test, np, "%pOFPF", "dev@100:----");
> +	of_unittest_printf_one(test,
> +			       np,
> +			       "%pOFPFPc",
> +			       "dev@100:----:dev@100:test-sub-device");
> +	of_unittest_printf_one(test, np, "%pOFc", "test-sub-device");
> +	of_unittest_printf_one(test, np, "%pOFC",
>  			"\"test-sub-device\",\"test-compat2\",\"test-compat3\"");
>  }
>  
> @@ -323,7 +338,7 @@ struct node_hash {
>  };
>  
>  static DEFINE_HASHTABLE(phandle_ht, 8);
> -static void __init of_unittest_check_phandles(void)
> +static void of_unittest_check_phandles(struct kunit *test)
>  {
>  	struct device_node *np;
>  	struct node_hash *nh;
> @@ -335,24 +350,26 @@ static void __init of_unittest_check_phandles(void)
>  			continue;
>  
>  		hash_for_each_possible(phandle_ht, nh, node, np->phandle) {
> +			KUNIT_EXPECT_NE_MSG(
> +				test, nh->np->phandle, np->phandle,
> +				"Duplicate phandle! %i used by %pOF and %pOF\n",
> +				np->phandle, nh->np, np);
>  			if (nh->np->phandle == np->phandle) {
> -				pr_info("Duplicate phandle! %i used by %pOF and %pOF\n",
> -					np->phandle, nh->np, np);
>  				dup_count++;
>  				break;
>  			}
>  		}
>  
>  		nh = kzalloc(sizeof(*nh), GFP_KERNEL);
> -		if (WARN_ON(!nh))
> -			return;
> +		KUNIT_ASSERT_NOT_ERR_OR_NULL(test, nh);
>  
>  		nh->np = np;
>  		hash_add(phandle_ht, &nh->node, np->phandle);
>  		phandle_count++;
>  	}
> -	unittest(dup_count == 0, "Found %i duplicates in %i phandles\n",
> -		 dup_count, phandle_count);
> +	KUNIT_EXPECT_EQ_MSG(test, dup_count, 0,
> +			    "Found %i duplicates in %i phandles\n",
> +			    dup_count, phandle_count);
>  
>  	/* Clean up */
>  	hash_for_each_safe(phandle_ht, i, tmp, nh, node) {
> @@ -361,20 +378,21 @@ static void __init of_unittest_check_phandles(void)
>  	}
>  }
>  
> -static void __init of_unittest_parse_phandle_with_args(void)
> +static void of_unittest_parse_phandle_with_args(struct kunit *test)
>  {
>  	struct device_node *np;
>  	struct of_phandle_args args;
> -	int i, rc;
> +	int i, rc = 0;
>  
>  	np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
> -	if (!np) {
> -		pr_err("missing testcase data\n");
> -		return;
> -	}
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>  
> -	rc = of_count_phandle_with_args(np, "phandle-list", "#phandle-cells");
> -	unittest(rc == 7, "of_count_phandle_with_args() returned %i, expected 7\n", rc);
> +	KUNIT_EXPECT_EQ_MSG(
> +		test,
> +		of_count_phandle_with_args(
> +			np, "phandle-list", "#phandle-cells"),
> +		7,
> +		"of_count_phandle_with_args() did not return 7\n");
>  
>  	for (i = 0; i < 8; i++) {
>  		bool passed = true;
> @@ -428,85 +446,91 @@ static void __init of_unittest_parse_phandle_with_args(void)
>  			passed = false;
>  		}
>  
> -		unittest(passed, "index %i - data error on node %pOF rc=%i\n",
> -			 i, args.np, rc);
> +		KUNIT_EXPECT_TRUE_MSG(
> +			test, passed,
> +			"index %i - data error on node %pOF rc=%i\n",
> +			i, args.np, rc);
>  	}
>  
>  	/* Check for missing list property */
>  	memset(&args, 0, sizeof(args));
> -	rc = of_parse_phandle_with_args(np, "phandle-list-missing",
> -					"#phandle-cells", 0, &args);
> -	unittest(rc == -ENOENT, "expected:%i got:%i\n", -ENOENT, rc);
> -	rc = of_count_phandle_with_args(np, "phandle-list-missing",
> -					"#phandle-cells");
> -	unittest(rc == -ENOENT, "expected:%i got:%i\n", -ENOENT, rc);
> +	KUNIT_EXPECT_EQ(
> +		test,
> +		of_parse_phandle_with_args(
> +			np, "phandle-list-missing", "#phandle-cells", 0, &args),
> +		-ENOENT);
> +	KUNIT_EXPECT_EQ(
> +		test,
> +		of_count_phandle_with_args(
> +			np, "phandle-list-missing", "#phandle-cells"),
> +		-ENOENT);
>  
>  	/* Check for missing cells property */
>  	memset(&args, 0, sizeof(args));
> -	rc = of_parse_phandle_with_args(np, "phandle-list",
> -					"#phandle-cells-missing", 0, &args);
> -	unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
> -	rc = of_count_phandle_with_args(np, "phandle-list",
> -					"#phandle-cells-missing");
> -	unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
> +	KUNIT_EXPECT_EQ(
> +		test,
> +		of_parse_phandle_with_args(
> +			np, "phandle-list", "#phandle-cells-missing", 0, &args),
> +		-EINVAL);
> +	KUNIT_EXPECT_EQ(
> +		test,
> +		of_count_phandle_with_args(
> +			np, "phandle-list", "#phandle-cells-missing"),
> +		-EINVAL);
>  
>  	/* Check for bad phandle in list */
>  	memset(&args, 0, sizeof(args));
> -	rc = of_parse_phandle_with_args(np, "phandle-list-bad-phandle",
> -					"#phandle-cells", 0, &args);
> -	unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
> -	rc = of_count_phandle_with_args(np, "phandle-list-bad-phandle",
> -					"#phandle-cells");
> -	unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
> +	KUNIT_EXPECT_EQ(
> +		test,
> +		of_parse_phandle_with_args(np, "phandle-list-bad-phandle",
> +					   "#phandle-cells", 0, &args),
> +		-EINVAL);
> +	KUNIT_EXPECT_EQ(
> +		test,
> +		of_count_phandle_with_args(
> +			np, "phandle-list-bad-phandle", "#phandle-cells"),
> +		-EINVAL);
>  
>  	/* Check for incorrectly formed argument list */
>  	memset(&args, 0, sizeof(args));
> -	rc = of_parse_phandle_with_args(np, "phandle-list-bad-args",
> -					"#phandle-cells", 1, &args);
> -	unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
> -	rc = of_count_phandle_with_args(np, "phandle-list-bad-args",
> -					"#phandle-cells");
> -	unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
> +	KUNIT_EXPECT_EQ(
> +		test,
> +		of_parse_phandle_with_args(np, "phandle-list-bad-args",
> +					   "#phandle-cells", 1, &args),
> +		-EINVAL);
> +	KUNIT_EXPECT_EQ(
> +		test,
> +		of_count_phandle_with_args(
> +			np, "phandle-list-bad-args", "#phandle-cells"),
> +		-EINVAL);
>  }
>  
> -static void __init of_unittest_parse_phandle_with_args_map(void)
> +static void of_unittest_parse_phandle_with_args_map(struct kunit *test)
>  {
>  	struct device_node *np, *p0, *p1, *p2, *p3;
>  	struct of_phandle_args args;
>  	int i, rc;
>  
>  	np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-b");
> -	if (!np) {
> -		pr_err("missing testcase data\n");
> -		return;
> -	}
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>  
>  	p0 = of_find_node_by_path("/testcase-data/phandle-tests/provider0");
> -	if (!p0) {
> -		pr_err("missing testcase data\n");
> -		return;
> -	}
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, p0);
>  
>  	p1 = of_find_node_by_path("/testcase-data/phandle-tests/provider1");
> -	if (!p1) {
> -		pr_err("missing testcase data\n");
> -		return;
> -	}
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, p1);
>  
>  	p2 = of_find_node_by_path("/testcase-data/phandle-tests/provider2");
> -	if (!p2) {
> -		pr_err("missing testcase data\n");
> -		return;
> -	}
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, p2);
>  
>  	p3 = of_find_node_by_path("/testcase-data/phandle-tests/provider3");
> -	if (!p3) {
> -		pr_err("missing testcase data\n");
> -		return;
> -	}
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, p3);
>  
> -	rc = of_count_phandle_with_args(np, "phandle-list", "#phandle-cells");
> -	unittest(rc == 7, "of_count_phandle_with_args() returned %i, expected 7\n", rc);
> +	KUNIT_EXPECT_EQ(test,
> +		       of_count_phandle_with_args(np,
> +						  "phandle-list",
> +						  "#phandle-cells"),
> +		       7);
>  
>  	for (i = 0; i < 8; i++) {
>  		bool passed = true;
> @@ -564,121 +588,186 @@ static void __init of_unittest_parse_phandle_with_args_map(void)
>  			passed = false;
>  		}
>  
> -		unittest(passed, "index %i - data error on node %s rc=%i\n",
> -			 i, args.np->full_name, rc);
> +		KUNIT_EXPECT_TRUE_MSG(
> +			test, passed,
> +			"index %i - data error on node %s rc=%i\n",
> +			i, (args.np ? args.np->full_name : "missing np"), rc);
>  	}
>  
>  	/* Check for missing list property */
>  	memset(&args, 0, sizeof(args));
> -	rc = of_parse_phandle_with_args_map(np, "phandle-list-missing",
> -					    "phandle", 0, &args);
> -	unittest(rc == -ENOENT, "expected:%i got:%i\n", -ENOENT, rc);
> +	KUNIT_EXPECT_EQ(
> +		test,
> +		of_parse_phandle_with_args_map(
> +			np, "phandle-list-missing", "phandle", 0, &args),
> +		-ENOENT);
>  
>  	/* Check for missing cells,map,mask property */
>  	memset(&args, 0, sizeof(args));
> -	rc = of_parse_phandle_with_args_map(np, "phandle-list",
> -					    "phandle-missing", 0, &args);
> -	unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
> +	KUNIT_EXPECT_EQ(
> +		test,
> +		of_parse_phandle_with_args_map(
> +			np, "phandle-list", "phandle-missing", 0, &args),
> +		-EINVAL);
>  
>  	/* Check for bad phandle in list */
>  	memset(&args, 0, sizeof(args));
> -	rc = of_parse_phandle_with_args_map(np, "phandle-list-bad-phandle",
> -					    "phandle", 0, &args);
> -	unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
> +	KUNIT_EXPECT_EQ(
> +		test,
> +		of_parse_phandle_with_args_map(
> +			np, "phandle-list-bad-phandle", "phandle", 0, &args),
> +		-EINVAL);
>  
>  	/* Check for incorrectly formed argument list */
>  	memset(&args, 0, sizeof(args));
> -	rc = of_parse_phandle_with_args_map(np, "phandle-list-bad-args",
> -					    "phandle", 1, &args);
> -	unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
> +	KUNIT_EXPECT_EQ(
> +		test,
> +		of_parse_phandle_with_args_map(
> +			np, "phandle-list-bad-args", "phandle", 1, &args),
> +		-EINVAL);
>  }
>  
> -static void __init of_unittest_property_string(void)
> +static void of_unittest_property_string(struct kunit *test)
>  {
>  	const char *strings[4];
>  	struct device_node *np;
>  	int rc;
>  
>  	np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
> -	if (!np) {
> -		pr_err("No testcase data in device tree\n");
> -		return;
> -	}
> -
> -	rc = of_property_match_string(np, "phandle-list-names", "first");
> -	unittest(rc == 0, "first expected:0 got:%i\n", rc);
> -	rc = of_property_match_string(np, "phandle-list-names", "second");
> -	unittest(rc == 1, "second expected:1 got:%i\n", rc);
> -	rc = of_property_match_string(np, "phandle-list-names", "third");
> -	unittest(rc == 2, "third expected:2 got:%i\n", rc);
> -	rc = of_property_match_string(np, "phandle-list-names", "fourth");
> -	unittest(rc == -ENODATA, "unmatched string; rc=%i\n", rc);
> -	rc = of_property_match_string(np, "missing-property", "blah");
> -	unittest(rc == -EINVAL, "missing property; rc=%i\n", rc);
> -	rc = of_property_match_string(np, "empty-property", "blah");
> -	unittest(rc == -ENODATA, "empty property; rc=%i\n", rc);
> -	rc = of_property_match_string(np, "unterminated-string", "blah");
> -	unittest(rc == -EILSEQ, "unterminated string; rc=%i\n", rc);
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> +
> +	KUNIT_EXPECT_EQ(
> +		test,
> +		of_property_match_string(np, "phandle-list-names", "first"),
> +		0);
> +	KUNIT_EXPECT_EQ(
> +		test,
> +		of_property_match_string(np, "phandle-list-names", "second"),
> +		1);
> +	KUNIT_EXPECT_EQ(
> +		test,
> +		of_property_match_string(np, "phandle-list-names", "third"),
> +		2);
> +	KUNIT_EXPECT_EQ_MSG(
> +		test,
> +		of_property_match_string(np, "phandle-list-names", "fourth"),
> +		-ENODATA,
> +		"unmatched string");
> +	KUNIT_EXPECT_EQ_MSG(
> +		test,
> +		of_property_match_string(np, "missing-property", "blah"),
> +		-EINVAL,
> +		"missing property");
> +	KUNIT_EXPECT_EQ_MSG(
> +		test,
> +		of_property_match_string(np, "empty-property", "blah"),
> +		-ENODATA,
> +		"empty property");
> +	KUNIT_EXPECT_EQ_MSG(
> +		test,
> +		of_property_match_string(np, "unterminated-string", "blah"),
> +		-EILSEQ,
> +		"unterminated string");
>  
>  	/* of_property_count_strings() tests */
> -	rc = of_property_count_strings(np, "string-property");
> -	unittest(rc == 1, "Incorrect string count; rc=%i\n", rc);
> -	rc = of_property_count_strings(np, "phandle-list-names");
> -	unittest(rc == 3, "Incorrect string count; rc=%i\n", rc);
> -	rc = of_property_count_strings(np, "unterminated-string");
> -	unittest(rc == -EILSEQ, "unterminated string; rc=%i\n", rc);
> -	rc = of_property_count_strings(np, "unterminated-string-list");
> -	unittest(rc == -EILSEQ, "unterminated string array; rc=%i\n", rc);
> +	KUNIT_EXPECT_EQ(test,
> +			of_property_count_strings(np, "string-property"), 1);
> +	KUNIT_EXPECT_EQ(test,
> +			of_property_count_strings(np, "phandle-list-names"), 3);
> +	KUNIT_EXPECT_EQ_MSG(
> +		test,
> +		of_property_count_strings(np, "unterminated-string"), -EILSEQ,
> +		"unterminated string");
> +	KUNIT_EXPECT_EQ_MSG(
> +		test,
> +		of_property_count_strings(np, "unterminated-string-list"),
> +		-EILSEQ,
> +		"unterminated string array");
>  
>  	/* of_property_read_string_index() tests */
>  	rc = of_property_read_string_index(np, "string-property", 0, strings);
> -	unittest(rc == 0 && !strcmp(strings[0], "foobar"), "of_property_read_string_index() failure; rc=%i\n", rc);
> +	KUNIT_ASSERT_EQ(test, rc, 0);
> +	KUNIT_EXPECT_STREQ(test, strings[0], "foobar");
> +
>  	strings[0] = NULL;
>  	rc = of_property_read_string_index(np, "string-property", 1, strings);
> -	unittest(rc == -ENODATA && strings[0] == NULL, "of_property_read_string_index() failure; rc=%i\n", rc);
> +	KUNIT_EXPECT_EQ(test, rc, -ENODATA);
> +	KUNIT_EXPECT_EQ(test, strings[0], NULL);
> +
>  	rc = of_property_read_string_index(np, "phandle-list-names", 0, strings);
> -	unittest(rc == 0 && !strcmp(strings[0], "first"), "of_property_read_string_index() failure; rc=%i\n", rc);
> +	KUNIT_ASSERT_EQ(test, rc, 0);
> +	KUNIT_EXPECT_STREQ(test, strings[0], "first");
> +
>  	rc = of_property_read_string_index(np, "phandle-list-names", 1, strings);
> -	unittest(rc == 0 && !strcmp(strings[0], "second"), "of_property_read_string_index() failure; rc=%i\n", rc);
> +	KUNIT_ASSERT_EQ(test, rc, 0);
> +	KUNIT_EXPECT_STREQ(test, strings[0], "second");
> +
>  	rc = of_property_read_string_index(np, "phandle-list-names", 2, strings);
> -	unittest(rc == 0 && !strcmp(strings[0], "third"), "of_property_read_string_index() failure; rc=%i\n", rc);
> +	KUNIT_ASSERT_EQ(test, rc, 0);
> +	KUNIT_EXPECT_STREQ(test, strings[0], "third");
> +
>  	strings[0] = NULL;
>  	rc = of_property_read_string_index(np, "phandle-list-names", 3, strings);
> -	unittest(rc == -ENODATA && strings[0] == NULL, "of_property_read_string_index() failure; rc=%i\n", rc);
> +	KUNIT_EXPECT_EQ(test, rc, -ENODATA);
> +	KUNIT_EXPECT_EQ(test, strings[0], NULL);
> +
>  	strings[0] = NULL;
>  	rc = of_property_read_string_index(np, "unterminated-string", 0, strings);
> -	unittest(rc == -EILSEQ && strings[0] == NULL, "of_property_read_string_index() failure; rc=%i\n", rc);
> +	KUNIT_EXPECT_EQ(test, rc, -EILSEQ);
> +	KUNIT_EXPECT_EQ(test, strings[0], NULL);
> +
>  	rc = of_property_read_string_index(np, "unterminated-string-list", 0, strings);
> -	unittest(rc == 0 && !strcmp(strings[0], "first"), "of_property_read_string_index() failure; rc=%i\n", rc);
> +	KUNIT_ASSERT_EQ(test, rc, 0);
> +	KUNIT_EXPECT_STREQ(test, strings[0], "first");
> +
>  	strings[0] = NULL;
>  	rc = of_property_read_string_index(np, "unterminated-string-list", 2, strings); /* should fail */
> -	unittest(rc == -EILSEQ && strings[0] == NULL, "of_property_read_string_index() failure; rc=%i\n", rc);
> -	strings[1] = NULL;
> +	KUNIT_EXPECT_EQ(test, rc, -EILSEQ);
> +	KUNIT_EXPECT_EQ(test, strings[0], NULL);
>  
>  	/* of_property_read_string_array() tests */
> -	rc = of_property_read_string_array(np, "string-property", strings, 4);
> -	unittest(rc == 1, "Incorrect string count; rc=%i\n", rc);
> -	rc = of_property_read_string_array(np, "phandle-list-names", strings, 4);
> -	unittest(rc == 3, "Incorrect string count; rc=%i\n", rc);
> -	rc = of_property_read_string_array(np, "unterminated-string", strings, 4);
> -	unittest(rc == -EILSEQ, "unterminated string; rc=%i\n", rc);
> +	strings[1] = NULL;
> +	KUNIT_EXPECT_EQ(
> +		test,
> +		of_property_read_string_array(
> +			np, "string-property", strings, 4),
> +		1);
> +	KUNIT_EXPECT_EQ(
> +		test,
> +		of_property_read_string_array(
> +			np, "phandle-list-names", strings, 4),
> +		3);
> +	KUNIT_EXPECT_EQ_MSG(
> +		test,
> +		of_property_read_string_array(
> +			np, "unterminated-string", strings, 4),
> +		-EILSEQ,
> +		"unterminated string");
>  	/* -- An incorrectly formed string should cause a failure */
> -	rc = of_property_read_string_array(np, "unterminated-string-list", strings, 4);
> -	unittest(rc == -EILSEQ, "unterminated string array; rc=%i\n", rc);
> +	KUNIT_EXPECT_EQ_MSG(
> +		test,
> +		of_property_read_string_array(
> +			np, "unterminated-string-list", strings, 4),
> +		-EILSEQ,
> +		"unterminated string array");
>  	/* -- parsing the correctly formed strings should still work: */
>  	strings[2] = NULL;
>  	rc = of_property_read_string_array(np, "unterminated-string-list", strings, 2);
> -	unittest(rc == 2 && strings[2] == NULL, "of_property_read_string_array() failure; rc=%i\n", rc);
> +	KUNIT_EXPECT_EQ(test, rc, 2);
> +	KUNIT_EXPECT_EQ(test, strings[2], NULL);
> +
>  	strings[1] = NULL;
>  	rc = of_property_read_string_array(np, "phandle-list-names", strings, 1);
> -	unittest(rc == 1 && strings[1] == NULL, "Overwrote end of string array; rc=%i, str='%s'\n", rc, strings[1]);
> +	KUNIT_ASSERT_EQ(test, rc, 1);
> +	KUNIT_EXPECT_EQ_MSG(test, strings[1], NULL,
> +			    "Overwrote end of string array");
>  }
>  
>  #define propcmp(p1, p2) (((p1)->length == (p2)->length) && \
>  			(p1)->value && (p2)->value && \
>  			!memcmp((p1)->value, (p2)->value, (p1)->length) && \
>  			!strcmp((p1)->name, (p2)->name))
> -static void __init of_unittest_property_copy(void)
> +static void of_unittest_property_copy(struct kunit *test)
>  {
>  #ifdef CONFIG_OF_DYNAMIC
>  	struct property p1 = { .name = "p1", .length = 0, .value = "" };
> @@ -686,20 +775,24 @@ static void __init of_unittest_property_copy(void)
>  	struct property *new;
>  
>  	new = __of_prop_dup(&p1, GFP_KERNEL);
> -	unittest(new && propcmp(&p1, new), "empty property didn't copy correctly\n");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, new);
> +	KUNIT_EXPECT_TRUE_MSG(test, propcmp(&p1, new),
> +			      "empty property didn't copy correctly");
>  	kfree(new->value);
>  	kfree(new->name);
>  	kfree(new);
>  
>  	new = __of_prop_dup(&p2, GFP_KERNEL);
> -	unittest(new && propcmp(&p2, new), "non-empty property didn't copy correctly\n");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, new);
> +	KUNIT_EXPECT_TRUE_MSG(test, propcmp(&p2, new),
> +			      "non-empty property didn't copy correctly");
>  	kfree(new->value);
>  	kfree(new->name);
>  	kfree(new);
>  #endif
>  }
>  
> -static void __init of_unittest_changeset(void)
> +static void of_unittest_changeset(struct kunit *test)
>  {
>  #ifdef CONFIG_OF_DYNAMIC
>  	struct property *ppadd, padd = { .name = "prop-add", .length = 1, .value = "" };
> @@ -712,32 +805,32 @@ static void __init of_unittest_changeset(void)
>  	struct of_changeset chgset;
>  
>  	n1 = __of_node_dup(NULL, "n1");
> -	unittest(n1, "testcase setup failure\n");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, n1);
>  
>  	n2 = __of_node_dup(NULL, "n2");
> -	unittest(n2, "testcase setup failure\n");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, n2);
>  
>  	n21 = __of_node_dup(NULL, "n21");
> -	unittest(n21, "testcase setup failure %p\n", n21);
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, n21);
>  
>  	nchangeset = of_find_node_by_path("/testcase-data/changeset");
>  	nremove = of_get_child_by_name(nchangeset, "node-remove");
> -	unittest(nremove, "testcase setup failure\n");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, nremove);
>  
>  	ppadd = __of_prop_dup(&padd, GFP_KERNEL);
> -	unittest(ppadd, "testcase setup failure\n");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ppadd);
>  
>  	ppname_n1  = __of_prop_dup(&pname_n1, GFP_KERNEL);
> -	unittest(ppname_n1, "testcase setup failure\n");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ppname_n1);
>  
>  	ppname_n2  = __of_prop_dup(&pname_n2, GFP_KERNEL);
> -	unittest(ppname_n2, "testcase setup failure\n");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ppname_n2);
>  
>  	ppname_n21 = __of_prop_dup(&pname_n21, GFP_KERNEL);
> -	unittest(ppname_n21, "testcase setup failure\n");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ppname_n21);
>  
>  	ppupdate = __of_prop_dup(&pupdate, GFP_KERNEL);
> -	unittest(ppupdate, "testcase setup failure\n");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ppupdate);
>  
>  	parent = nchangeset;
>  	n1->parent = parent;
> @@ -745,54 +838,72 @@ static void __init of_unittest_changeset(void)
>  	n21->parent = n2;
>  
>  	ppremove = of_find_property(parent, "prop-remove", NULL);
> -	unittest(ppremove, "failed to find removal prop");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ppremove);
>  
>  	of_changeset_init(&chgset);
>  
> -	unittest(!of_changeset_attach_node(&chgset, n1), "fail attach n1\n");
> -	unittest(!of_changeset_add_property(&chgset, n1, ppname_n1), "fail add prop name\n");
> -
> -	unittest(!of_changeset_attach_node(&chgset, n2), "fail attach n2\n");
> -	unittest(!of_changeset_add_property(&chgset, n2, ppname_n2), "fail add prop name\n");
> -
> -	unittest(!of_changeset_detach_node(&chgset, nremove), "fail remove node\n");
> -	unittest(!of_changeset_add_property(&chgset, n21, ppname_n21), "fail add prop name\n");
> -
> -	unittest(!of_changeset_attach_node(&chgset, n21), "fail attach n21\n");
> -
> -	unittest(!of_changeset_add_property(&chgset, parent, ppadd), "fail add prop prop-add\n");
> -	unittest(!of_changeset_update_property(&chgset, parent, ppupdate), "fail update prop\n");
> -	unittest(!of_changeset_remove_property(&chgset, parent, ppremove), "fail remove prop\n");
> -
> -	unittest(!of_changeset_apply(&chgset), "apply failed\n");
> +	KUNIT_EXPECT_FALSE_MSG(test, of_changeset_attach_node(&chgset, n1),
> +			       "fail attach n1\n");
> +	KUNIT_EXPECT_FALSE_MSG(
> +		test, of_changeset_add_property(&chgset, n1, ppname_n1),
> +		"fail add prop name\n");
> +
> +	KUNIT_EXPECT_FALSE_MSG(test, of_changeset_attach_node(&chgset, n2),
> +			       "fail attach n2\n");
> +	KUNIT_EXPECT_FALSE_MSG(
> +		test, of_changeset_add_property(&chgset, n2, ppname_n2),
> +		"fail add prop name\n");
> +
> +	KUNIT_EXPECT_FALSE_MSG(test, of_changeset_detach_node(&chgset, nremove),
> +			       "fail remove node\n");
> +	KUNIT_EXPECT_FALSE_MSG(
> +		test, of_changeset_add_property(&chgset, n21, ppname_n21),
> +		"fail add prop name\n");
> +
> +	KUNIT_EXPECT_FALSE_MSG(test, of_changeset_attach_node(&chgset, n21),
> +			       "fail attach n21\n");
> +
> +	KUNIT_EXPECT_FALSE_MSG(
> +		test,
> +		of_changeset_add_property(&chgset, parent, ppadd),
> +		"fail add prop prop-add\n");
> +	KUNIT_EXPECT_FALSE_MSG(
> +		test,
> +		of_changeset_update_property(&chgset, parent, ppupdate),
> +		"fail update prop\n");
> +	KUNIT_EXPECT_FALSE_MSG(
> +		test,
> +		of_changeset_remove_property(&chgset, parent, ppremove),
> +		"fail remove prop\n");
> +
> +	KUNIT_EXPECT_FALSE_MSG(test, of_changeset_apply(&chgset),
> +			       "apply failed\n");
>  
>  	of_node_put(nchangeset);
>  
>  	/* Make sure node names are constructed correctly */
> -	unittest((np = of_find_node_by_path("/testcase-data/changeset/n2/n21")),
> -		 "'%pOF' not added\n", n21);
> +	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(
> +		test,
> +		np = of_find_node_by_path("/testcase-data/changeset/n2/n21"),
> +		"'%pOF' not added\n", n21);
>  	of_node_put(np);
>  
> -	unittest(!of_changeset_revert(&chgset), "revert failed\n");
> +	KUNIT_EXPECT_FALSE(test, of_changeset_revert(&chgset));
>  
>  	of_changeset_destroy(&chgset);
>  #endif
>  }
>  
> -static void __init of_unittest_parse_interrupts(void)
> +static void of_unittest_parse_interrupts(struct kunit *test)
>  {
>  	struct device_node *np;
>  	struct of_phandle_args args;
>  	int i, rc;
>  
> -	if (of_irq_workarounds & OF_IMAP_OLDWORLD_MAC)
> -		return;
> +	KUNIT_ASSERT_FALSE(test, of_irq_workarounds & OF_IMAP_OLDWORLD_MAC);
>  
>  	np = of_find_node_by_path("/testcase-data/interrupts/interrupts0");
> -	if (!np) {
> -		pr_err("missing testcase data\n");
> -		return;
> -	}
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>  
>  	for (i = 0; i < 4; i++) {
>  		bool passed = true;
> @@ -804,16 +915,15 @@ static void __init of_unittest_parse_interrupts(void)
>  		passed &= (args.args_count == 1);
>  		passed &= (args.args[0] == (i + 1));
>  
> -		unittest(passed, "index %i - data error on node %pOF rc=%i\n",
> -			 i, args.np, rc);
> +		KUNIT_EXPECT_TRUE_MSG(
> +			test, passed,
> +			"index %i - data error on node %pOF rc=%i\n",
> +			i, args.np, rc);
>  	}
>  	of_node_put(np);
>  
>  	np = of_find_node_by_path("/testcase-data/interrupts/interrupts1");
> -	if (!np) {
> -		pr_err("missing testcase data\n");
> -		return;
> -	}
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>  
>  	for (i = 0; i < 4; i++) {
>  		bool passed = true;
> @@ -850,26 +960,24 @@ static void __init of_unittest_parse_interrupts(void)
>  		default:
>  			passed = false;
>  		}
> -		unittest(passed, "index %i - data error on node %pOF rc=%i\n",
> -			 i, args.np, rc);
> +		KUNIT_EXPECT_TRUE_MSG(
> +			test, passed,
> +			"index %i - data error on node %pOF rc=%i\n",
> +			i, args.np, rc);
>  	}
>  	of_node_put(np);
>  }
>  
> -static void __init of_unittest_parse_interrupts_extended(void)
> +static void of_unittest_parse_interrupts_extended(struct kunit *test)
>  {
>  	struct device_node *np;
>  	struct of_phandle_args args;
>  	int i, rc;
>  
> -	if (of_irq_workarounds & OF_IMAP_OLDWORLD_MAC)
> -		return;
> +	KUNIT_ASSERT_FALSE(test, of_irq_workarounds & OF_IMAP_OLDWORLD_MAC);
>  
>  	np = of_find_node_by_path("/testcase-data/interrupts/interrupts-extended0");
> -	if (!np) {
> -		pr_err("missing testcase data\n");
> -		return;
> -	}
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>  
>  	for (i = 0; i < 7; i++) {
>  		bool passed = true;
> @@ -924,8 +1032,10 @@ static void __init of_unittest_parse_interrupts_extended(void)
>  			passed = false;
>  		}
>  
> -		unittest(passed, "index %i - data error on node %pOF rc=%i\n",
> -			 i, args.np, rc);
> +		KUNIT_EXPECT_TRUE_MSG(
> +			test, passed,
> +			"index %i - data error on node %pOF rc=%i\n",
> +			i, args.np, rc);
>  	}
>  	of_node_put(np);
>  }
> @@ -965,7 +1075,7 @@ static struct {
>  	{ .path = "/testcase-data/match-node/name9", .data = "K", },
>  };
>  
> -static void __init of_unittest_match_node(void)
> +static void of_unittest_match_node(struct kunit *test)
>  {
>  	struct device_node *np;
>  	const struct of_device_id *match;
> @@ -973,26 +1083,19 @@ static void __init of_unittest_match_node(void)
>  
>  	for (i = 0; i < ARRAY_SIZE(match_node_tests); i++) {
>  		np = of_find_node_by_path(match_node_tests[i].path);
> -		if (!np) {
> -			unittest(0, "missing testcase node %s\n",
> -				match_node_tests[i].path);
> -			continue;
> -		}
> +		KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>  
>  		match = of_match_node(match_node_table, np);
> -		if (!match) {
> -			unittest(0, "%s didn't match anything\n",
> -				match_node_tests[i].path);
> -			continue;
> -		}
> +		KUNIT_ASSERT_NOT_ERR_OR_NULL_MSG(test, match,
> +						 "%s didn't match anything",
> +						 match_node_tests[i].path);
>  
> -		if (strcmp(match->data, match_node_tests[i].data) != 0) {
> -			unittest(0, "%s got wrong match. expected %s, got %s\n",
> -				match_node_tests[i].path, match_node_tests[i].data,
> -				(const char *)match->data);
> -			continue;
> -		}
> -		unittest(1, "passed");
> +		KUNIT_EXPECT_STREQ_MSG(
> +			test,
> +			match->data, match_node_tests[i].data,
> +			"%s got wrong match. expected %s, got %s\n",
> +			match_node_tests[i].path, match_node_tests[i].data,
> +			(const char *)match->data);
>  	}
>  }
>  
> @@ -1004,9 +1107,9 @@ static struct resource test_bus_res = {
>  static const struct platform_device_info test_bus_info = {
>  	.name = "unittest-bus",
>  };
> -static void __init of_unittest_platform_populate(void)
> +static void of_unittest_platform_populate(struct kunit *test)
>  {
> -	int irq, rc;
> +	int irq;
>  	struct device_node *np, *child, *grandchild;
>  	struct platform_device *pdev, *test_bus;
>  	const struct of_device_id match[] = {
> @@ -1020,32 +1123,27 @@ static void __init of_unittest_platform_populate(void)
>  	/* Test that a missing irq domain returns -EPROBE_DEFER */
>  	np = of_find_node_by_path("/testcase-data/testcase-device1");
>  	pdev = of_find_device_by_node(np);
> -	unittest(pdev, "device 1 creation failed\n");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, pdev);
>  
>  	if (!(of_irq_workarounds & OF_IMAP_OLDWORLD_MAC)) {
>  		irq = platform_get_irq(pdev, 0);
> -		unittest(irq == -EPROBE_DEFER,
> -			 "device deferred probe failed - %d\n", irq);
> +		KUNIT_ASSERT_EQ(test, irq, -EPROBE_DEFER);
>  
>  		/* Test that a parsing failure does not return -EPROBE_DEFER */
>  		np = of_find_node_by_path("/testcase-data/testcase-device2");
>  		pdev = of_find_device_by_node(np);
> -		unittest(pdev, "device 2 creation failed\n");
> +		KUNIT_ASSERT_NOT_ERR_OR_NULL(test, pdev);
>  		irq = platform_get_irq(pdev, 0);
> -		unittest(irq < 0 && irq != -EPROBE_DEFER,
> -			 "device parsing error failed - %d\n", irq);
> +		KUNIT_ASSERT_TRUE_MSG(test, irq < 0 && irq != -EPROBE_DEFER,
> +				      "device parsing error failed - %d\n",
> +				      irq);
>  	}
>  
>  	np = of_find_node_by_path("/testcase-data/platform-tests");
> -	unittest(np, "No testcase data in device tree\n");
> -	if (!np)
> -		return;
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>  
>  	test_bus = platform_device_register_full(&test_bus_info);
> -	rc = PTR_ERR_OR_ZERO(test_bus);
> -	unittest(!rc, "testbus registration failed; rc=%i\n", rc);
> -	if (rc)
> -		return;
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, test_bus);
>  	test_bus->dev.of_node = np;
>  
>  	/*
> @@ -1060,17 +1158,19 @@ static void __init of_unittest_platform_populate(void)
>  	of_platform_populate(np, match, NULL, &test_bus->dev);
>  	for_each_child_of_node(np, child) {
>  		for_each_child_of_node(child, grandchild)
> -			unittest(of_find_device_by_node(grandchild),
> -				 "Could not create device for node '%pOFn'\n",
> -				 grandchild);
> +			KUNIT_EXPECT_TRUE_MSG(
> +				test, of_find_device_by_node(grandchild),
> +				"Could not create device for node '%pOFn'\n",
> +				grandchild);
>  	}
>  
>  	of_platform_depopulate(&test_bus->dev);
>  	for_each_child_of_node(np, child) {
>  		for_each_child_of_node(child, grandchild)
> -			unittest(!of_find_device_by_node(grandchild),
> -				 "device didn't get destroyed '%pOFn'\n",
> -				 grandchild);
> +			KUNIT_EXPECT_FALSE_MSG(
> +				test, of_find_device_by_node(grandchild),
> +				"device didn't get destroyed '%pOFn'\n",
> +				grandchild);
>  	}
>  
>  	platform_device_unregister(test_bus);
> @@ -1171,7 +1271,7 @@ static void attach_node_and_children(struct device_node *np)
>   *	unittest_data_add - Reads, copies data from
>   *	linked tree and attaches it to the live tree
>   */
> -static int __init unittest_data_add(void)
> +static int unittest_data_add(void)
>  {
>  	void *unittest_data;
>  	struct device_node *unittest_data_node, *np;
> @@ -1242,7 +1342,7 @@ static int __init unittest_data_add(void)
>  }
>  
>  #ifdef CONFIG_OF_OVERLAY
> -static int __init overlay_data_apply(const char *overlay_name, int *overlay_id);
> +static int overlay_data_apply(const char *overlay_name, int *overlay_id);
>  
>  static int unittest_probe(struct platform_device *pdev)
>  {
> @@ -1471,172 +1571,146 @@ static void of_unittest_destroy_tracked_overlays(void)
>  	} while (defers > 0);
>  }
>  
> -static int __init of_unittest_apply_overlay(int overlay_nr, int *overlay_id)
> +static int of_unittest_apply_overlay(struct kunit *test,
> +				     int overlay_nr,
> +				     int *overlay_id)
>  {
>  	const char *overlay_name;
>  
>  	overlay_name = overlay_name_from_nr(overlay_nr);
>  
> -	if (!overlay_data_apply(overlay_name, overlay_id)) {
> -		unittest(0, "could not apply overlay \"%s\"\n",
> -				overlay_name);
> -		return -EFAULT;
> -	}
> +	KUNIT_ASSERT_TRUE_MSG(test,
> +			      overlay_data_apply(overlay_name, overlay_id),
> +			      "could not apply overlay \"%s\"\n", overlay_name);
>  	of_unittest_track_overlay(*overlay_id);
>  
>  	return 0;
>  }
>  
>  /* apply an overlay while checking before and after states */
> -static int __init of_unittest_apply_overlay_check(int overlay_nr,
> +static int of_unittest_apply_overlay_check(struct kunit *test, int overlay_nr,
>  		int unittest_nr, int before, int after,
>  		enum overlay_type ovtype)
>  {
>  	int ret, ovcs_id;
>  
>  	/* unittest device must not be in before state */
> -	if (of_unittest_device_exists(unittest_nr, ovtype) != before) {
> -		unittest(0, "%s with device @\"%s\" %s\n",
> -				overlay_name_from_nr(overlay_nr),
> -				unittest_path(unittest_nr, ovtype),
> -				!before ? "enabled" : "disabled");
> -		return -EINVAL;
> -	}
> +	KUNIT_ASSERT_EQ_MSG(
> +		test, of_unittest_device_exists(unittest_nr, ovtype), before,
> +		"%s with device @\"%s\" %s\n", overlay_name_from_nr(overlay_nr),
> +		unittest_path(unittest_nr, ovtype),
> +		!before ? "enabled" : "disabled");
>  
>  	ovcs_id = 0;
> -	ret = of_unittest_apply_overlay(overlay_nr, &ovcs_id);
> +	ret = of_unittest_apply_overlay(test, overlay_nr, &ovcs_id);
>  	if (ret != 0) {
> -		/* of_unittest_apply_overlay already called unittest() */
> +		/* of_unittest_apply_overlay already set expectation */
>  		return ret;
>  	}
>  
>  	/* unittest device must be to set to after state */
> -	if (of_unittest_device_exists(unittest_nr, ovtype) != after) {
> -		unittest(0, "%s failed to create @\"%s\" %s\n",
> -				overlay_name_from_nr(overlay_nr),
> -				unittest_path(unittest_nr, ovtype),
> -				!after ? "enabled" : "disabled");
> -		return -EINVAL;
> -	}
> +	KUNIT_ASSERT_EQ_MSG(
> +		test, of_unittest_device_exists(unittest_nr, ovtype), after,
> +		"%s failed to create @\"%s\" %s\n",
> +		overlay_name_from_nr(overlay_nr),
> +		unittest_path(unittest_nr, ovtype),
> +		!after ? "enabled" : "disabled");
>  
>  	return 0;
>  }
>  
>  /* apply an overlay and then revert it while checking before, after states */
> -static int __init of_unittest_apply_revert_overlay_check(int overlay_nr,
> +static int of_unittest_apply_revert_overlay_check(
> +		struct kunit *test, int overlay_nr,
>  		int unittest_nr, int before, int after,
>  		enum overlay_type ovtype)
>  {
>  	int ret, ovcs_id;
>  
>  	/* unittest device must be in before state */
> -	if (of_unittest_device_exists(unittest_nr, ovtype) != before) {
> -		unittest(0, "%s with device @\"%s\" %s\n",
> -				overlay_name_from_nr(overlay_nr),
> -				unittest_path(unittest_nr, ovtype),
> -				!before ? "enabled" : "disabled");
> -		return -EINVAL;
> -	}
> +	KUNIT_ASSERT_EQ_MSG(
> +		test, of_unittest_device_exists(unittest_nr, ovtype), before,
> +		"%s with device @\"%s\" %s\n", overlay_name_from_nr(overlay_nr),
> +		unittest_path(unittest_nr, ovtype),
> +		!before ? "enabled" : "disabled");
>  
>  	/* apply the overlay */
>  	ovcs_id = 0;
> -	ret = of_unittest_apply_overlay(overlay_nr, &ovcs_id);
> +	ret = of_unittest_apply_overlay(test, overlay_nr, &ovcs_id);
>  	if (ret != 0) {
> -		/* of_unittest_apply_overlay already called unittest() */
> +		/* of_unittest_apply_overlay already set expectation. */
>  		return ret;
>  	}
>  
>  	/* unittest device must be in after state */
> -	if (of_unittest_device_exists(unittest_nr, ovtype) != after) {
> -		unittest(0, "%s failed to create @\"%s\" %s\n",
> -				overlay_name_from_nr(overlay_nr),
> -				unittest_path(unittest_nr, ovtype),
> -				!after ? "enabled" : "disabled");
> -		return -EINVAL;
> -	}
> -
> -	ret = of_overlay_remove(&ovcs_id);
> -	if (ret != 0) {
> -		unittest(0, "%s failed to be destroyed @\"%s\"\n",
> -				overlay_name_from_nr(overlay_nr),
> -				unittest_path(unittest_nr, ovtype));
> -		return ret;
> -	}
> +	KUNIT_ASSERT_EQ_MSG(
> +		test, of_unittest_device_exists(unittest_nr, ovtype), after,
> +		"%s failed to create @\"%s\" %s\n",
> +		overlay_name_from_nr(overlay_nr),
> +		unittest_path(unittest_nr, ovtype),
> +		!after ? "enabled" : "disabled");
> +
> +	KUNIT_ASSERT_EQ_MSG(test, of_overlay_remove(&ovcs_id), 0,
> +			    "%s failed to be destroyed @\"%s\"\n",
> +			    overlay_name_from_nr(overlay_nr),
> +			    unittest_path(unittest_nr, ovtype));
>  
>  	/* unittest device must be again in before state */
> -	if (of_unittest_device_exists(unittest_nr, PDEV_OVERLAY) != before) {
> -		unittest(0, "%s with device @\"%s\" %s\n",
> -				overlay_name_from_nr(overlay_nr),
> -				unittest_path(unittest_nr, ovtype),
> -				!before ? "enabled" : "disabled");
> -		return -EINVAL;
> -	}
> +	KUNIT_ASSERT_EQ_MSG(
> +		test,
> +		of_unittest_device_exists(unittest_nr, PDEV_OVERLAY), before,
> +		"%s with device @\"%s\" %s\n",
> +		overlay_name_from_nr(overlay_nr),
> +		unittest_path(unittest_nr, ovtype),
> +		!before ? "enabled" : "disabled");
>  
>  	return 0;
>  }
>  
>  /* test activation of device */
> -static void __init of_unittest_overlay_0(void)
> +static void of_unittest_overlay_0(struct kunit *test)
>  {
>  	/* device should enable */
> -	if (of_unittest_apply_overlay_check(0, 0, 0, 1, PDEV_OVERLAY))
> -		return;
> -
> -	unittest(1, "overlay test %d passed\n", 0);
> +	of_unittest_apply_overlay_check(test, 0, 0, 0, 1, PDEV_OVERLAY);
>  }
>  
>  /* test deactivation of device */
> -static void __init of_unittest_overlay_1(void)
> +static void of_unittest_overlay_1(struct kunit *test)
>  {
>  	/* device should disable */
> -	if (of_unittest_apply_overlay_check(1, 1, 1, 0, PDEV_OVERLAY))
> -		return;
> -
> -	unittest(1, "overlay test %d passed\n", 1);
> +	of_unittest_apply_overlay_check(test, 1, 1, 1, 0, PDEV_OVERLAY);
>  }
>  
>  /* test activation of device */
> -static void __init of_unittest_overlay_2(void)
> +static void of_unittest_overlay_2(struct kunit *test)
>  {
>  	/* device should enable */
> -	if (of_unittest_apply_overlay_check(2, 2, 0, 1, PDEV_OVERLAY))
> -		return;
> -
> -	unittest(1, "overlay test %d passed\n", 2);
> +	of_unittest_apply_overlay_check(test, 2, 2, 0, 1, PDEV_OVERLAY);
>  }
>  
>  /* test deactivation of device */
> -static void __init of_unittest_overlay_3(void)
> +static void of_unittest_overlay_3(struct kunit *test)
>  {
>  	/* device should disable */
> -	if (of_unittest_apply_overlay_check(3, 3, 1, 0, PDEV_OVERLAY))
> -		return;
> -
> -	unittest(1, "overlay test %d passed\n", 3);
> +	of_unittest_apply_overlay_check(test, 3, 3, 1, 0, PDEV_OVERLAY);
>  }
>  
>  /* test activation of a full device node */
> -static void __init of_unittest_overlay_4(void)
> +static void of_unittest_overlay_4(struct kunit *test)
>  {
>  	/* device should disable */
> -	if (of_unittest_apply_overlay_check(4, 4, 0, 1, PDEV_OVERLAY))
> -		return;
> -
> -	unittest(1, "overlay test %d passed\n", 4);
> +	of_unittest_apply_overlay_check(test, 4, 4, 0, 1, PDEV_OVERLAY);
>  }
>  
>  /* test overlay apply/revert sequence */
> -static void __init of_unittest_overlay_5(void)
> +static void of_unittest_overlay_5(struct kunit *test)
>  {
>  	/* device should disable */
> -	if (of_unittest_apply_revert_overlay_check(5, 5, 0, 1, PDEV_OVERLAY))
> -		return;
> -
> -	unittest(1, "overlay test %d passed\n", 5);
> +	of_unittest_apply_revert_overlay_check(test, 5, 5, 0, 1, PDEV_OVERLAY);
>  }
>  
>  /* test overlay application in sequence */
> -static void __init of_unittest_overlay_6(void)
> +static void of_unittest_overlay_6(struct kunit *test)
>  {
>  	int i, ov_id[2], ovcs_id;
>  	int overlay_nr = 6, unittest_nr = 6;
> @@ -1645,74 +1719,67 @@ static void __init of_unittest_overlay_6(void)
>  
>  	/* unittest device must be in before state */
>  	for (i = 0; i < 2; i++) {
> -		if (of_unittest_device_exists(unittest_nr + i, PDEV_OVERLAY)
> -				!= before) {
> -			unittest(0, "%s with device @\"%s\" %s\n",
> -					overlay_name_from_nr(overlay_nr + i),
> -					unittest_path(unittest_nr + i,
> -						PDEV_OVERLAY),
> -					!before ? "enabled" : "disabled");
> -			return;
> -		}
> +		KUNIT_ASSERT_EQ_MSG(test,
> +				    of_unittest_device_exists(unittest_nr + i,
> +							      PDEV_OVERLAY),
> +				    before,
> +				    "%s with device @\"%s\" %s\n",
> +				    overlay_name_from_nr(overlay_nr + i),
> +				    unittest_path(unittest_nr + i,
> +						  PDEV_OVERLAY),
> +				    !before ? "enabled" : "disabled");
>  	}
>  
>  	/* apply the overlays */
>  	for (i = 0; i < 2; i++) {
> -
>  		overlay_name = overlay_name_from_nr(overlay_nr + i);
>  
> -		if (!overlay_data_apply(overlay_name, &ovcs_id)) {
> -			unittest(0, "could not apply overlay \"%s\"\n",
> -					overlay_name);
> -			return;
> -		}
> +		KUNIT_ASSERT_TRUE_MSG(
> +			test, overlay_data_apply(overlay_name, &ovcs_id),
> +			"could not apply overlay \"%s\"\n", overlay_name);
>  		ov_id[i] = ovcs_id;
>  		of_unittest_track_overlay(ov_id[i]);
>  	}
>  
>  	for (i = 0; i < 2; i++) {
>  		/* unittest device must be in after state */
> -		if (of_unittest_device_exists(unittest_nr + i, PDEV_OVERLAY)
> -				!= after) {
> -			unittest(0, "overlay @\"%s\" failed @\"%s\" %s\n",
> -					overlay_name_from_nr(overlay_nr + i),
> -					unittest_path(unittest_nr + i,
> -						PDEV_OVERLAY),
> -					!after ? "enabled" : "disabled");
> -			return;
> -		}
> +		KUNIT_ASSERT_EQ_MSG(test,
> +				    of_unittest_device_exists(unittest_nr + i,
> +							      PDEV_OVERLAY),
> +				    after,
> +				    "overlay @\"%s\" failed @\"%s\" %s\n",
> +				    overlay_name_from_nr(overlay_nr + i),
> +				    unittest_path(unittest_nr + i,
> +						  PDEV_OVERLAY),
> +				    !after ? "enabled" : "disabled");
>  	}
>  
>  	for (i = 1; i >= 0; i--) {
>  		ovcs_id = ov_id[i];
> -		if (of_overlay_remove(&ovcs_id)) {
> -			unittest(0, "%s failed destroy @\"%s\"\n",
> -					overlay_name_from_nr(overlay_nr + i),
> -					unittest_path(unittest_nr + i,
> -						PDEV_OVERLAY));
> -			return;
> -		}
> +		KUNIT_ASSERT_FALSE_MSG(
> +			test, of_overlay_remove(&ovcs_id),
> +			"%s failed destroy @\"%s\"\n",
> +			overlay_name_from_nr(overlay_nr + i),
> +			unittest_path(unittest_nr + i, PDEV_OVERLAY));
>  		of_unittest_untrack_overlay(ov_id[i]);
>  	}
>  
>  	for (i = 0; i < 2; i++) {
>  		/* unittest device must be again in before state */
> -		if (of_unittest_device_exists(unittest_nr + i, PDEV_OVERLAY)
> -				!= before) {
> -			unittest(0, "%s with device @\"%s\" %s\n",
> -					overlay_name_from_nr(overlay_nr + i),
> -					unittest_path(unittest_nr + i,
> -						PDEV_OVERLAY),
> -					!before ? "enabled" : "disabled");
> -			return;
> -		}
> +		KUNIT_ASSERT_EQ_MSG(test,
> +				    of_unittest_device_exists(unittest_nr + i,
> +							      PDEV_OVERLAY),
> +				    before,
> +				    "%s with device @\"%s\" %s\n",
> +				    overlay_name_from_nr(overlay_nr + i),
> +				    unittest_path(unittest_nr + i,
> +						  PDEV_OVERLAY),
> +				    !before ? "enabled" : "disabled");
>  	}
> -
> -	unittest(1, "overlay test %d passed\n", 6);
>  }
>  
>  /* test overlay application in sequence */
> -static void __init of_unittest_overlay_8(void)
> +static void of_unittest_overlay_8(struct kunit *test)
>  {
>  	int i, ov_id[2], ovcs_id;
>  	int overlay_nr = 8, unittest_nr = 8;
> @@ -1722,76 +1789,64 @@ static void __init of_unittest_overlay_8(void)
>  
>  	/* apply the overlays */
>  	for (i = 0; i < 2; i++) {
> -
>  		overlay_name = overlay_name_from_nr(overlay_nr + i);
>  
> -		if (!overlay_data_apply(overlay_name, &ovcs_id)) {
> -			unittest(0, "could not apply overlay \"%s\"\n",
> -					overlay_name);
> -			return;
> -		}
> +		KUNIT_ASSERT_TRUE_MSG(
> +			test, overlay_data_apply(overlay_name, &ovcs_id),
> +			"could not apply overlay \"%s\"\n", overlay_name);
>  		ov_id[i] = ovcs_id;
>  		of_unittest_track_overlay(ov_id[i]);
>  	}
>  
>  	/* now try to remove first overlay (it should fail) */
>  	ovcs_id = ov_id[0];
> -	if (!of_overlay_remove(&ovcs_id)) {
> -		unittest(0, "%s was destroyed @\"%s\"\n",
> -				overlay_name_from_nr(overlay_nr + 0),
> -				unittest_path(unittest_nr,
> -					PDEV_OVERLAY));
> -		return;
> -	}
> +	KUNIT_ASSERT_TRUE_MSG(
> +		test, of_overlay_remove(&ovcs_id),
> +		"%s was destroyed @\"%s\"\n",
> +		overlay_name_from_nr(overlay_nr + 0),
> +		unittest_path(unittest_nr, PDEV_OVERLAY));
>  
>  	/* removing them in order should work */
>  	for (i = 1; i >= 0; i--) {
>  		ovcs_id = ov_id[i];
> -		if (of_overlay_remove(&ovcs_id)) {
> -			unittest(0, "%s not destroyed @\"%s\"\n",
> -					overlay_name_from_nr(overlay_nr + i),
> -					unittest_path(unittest_nr,
> -						PDEV_OVERLAY));
> -			return;
> -		}
> +		KUNIT_ASSERT_FALSE_MSG(
> +			test, of_overlay_remove(&ovcs_id),
> +			"%s not destroyed @\"%s\"\n",
> +			overlay_name_from_nr(overlay_nr + i),
> +			unittest_path(unittest_nr, PDEV_OVERLAY));
>  		of_unittest_untrack_overlay(ov_id[i]);
>  	}
> -
> -	unittest(1, "overlay test %d passed\n", 8);
>  }
>  
>  /* test insertion of a bus with parent devices */
> -static void __init of_unittest_overlay_10(void)
> +static void of_unittest_overlay_10(struct kunit *test)
>  {
> -	int ret;
>  	char *child_path;
>  
>  	/* device should disable */
> -	ret = of_unittest_apply_overlay_check(10, 10, 0, 1, PDEV_OVERLAY);
> -	if (unittest(ret == 0,
> -			"overlay test %d failed; overlay application\n", 10))
> -		return;
> +	KUNIT_ASSERT_EQ_MSG(
> +		test,
> +		of_unittest_apply_overlay_check(
> +				test, 10, 10, 0, 1, PDEV_OVERLAY),
> +		0,
> +		"overlay test %d failed; overlay application\n", 10);
>  
>  	child_path = kasprintf(GFP_KERNEL, "%s/test-unittest101",
>  			unittest_path(10, PDEV_OVERLAY));
> -	if (unittest(child_path, "overlay test %d failed; kasprintf\n", 10))
> -		return;
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, child_path);
>  
> -	ret = of_path_device_type_exists(child_path, PDEV_OVERLAY);
> +	KUNIT_EXPECT_TRUE_MSG(
> +		test, of_path_device_type_exists(child_path, PDEV_OVERLAY),
> +		"overlay test %d failed; no child device\n", 10);
>  	kfree(child_path);
> -
> -	unittest(ret, "overlay test %d failed; no child device\n", 10);
>  }
>  
>  /* test insertion of a bus with parent devices (and revert) */
> -static void __init of_unittest_overlay_11(void)
> +static void of_unittest_overlay_11(struct kunit *test)
>  {
> -	int ret;
> -
>  	/* device should disable */
> -	ret = of_unittest_apply_revert_overlay_check(11, 11, 0, 1,
> -			PDEV_OVERLAY);
> -	unittest(ret == 0, "overlay test %d failed; overlay apply\n", 11);
> +	KUNIT_EXPECT_FALSE(test, of_unittest_apply_revert_overlay_check(
> +		test, 11, 11, 0, 1, PDEV_OVERLAY));
>  }
>  
>  #if IS_BUILTIN(CONFIG_I2C) && IS_ENABLED(CONFIG_OF_OVERLAY)
> @@ -2013,25 +2068,18 @@ static struct i2c_driver unittest_i2c_mux_driver = {
>  
>  #endif
>  
> -static int of_unittest_overlay_i2c_init(void)
> +static int of_unittest_overlay_i2c_init(struct kunit *test)
>  {
> -	int ret;
> -
> -	ret = i2c_add_driver(&unittest_i2c_dev_driver);
> -	if (unittest(ret == 0,
> -			"could not register unittest i2c device driver\n"))
> -		return ret;
> +	KUNIT_ASSERT_EQ_MSG(test, i2c_add_driver(&unittest_i2c_dev_driver), 0,
> +			    "could not register unittest i2c device driver\n");
>  
> -	ret = platform_driver_register(&unittest_i2c_bus_driver);
> -	if (unittest(ret == 0,
> -			"could not register unittest i2c bus driver\n"))
> -		return ret;
> +	KUNIT_ASSERT_EQ_MSG(
> +		test, platform_driver_register(&unittest_i2c_bus_driver), 0,
> +		"could not register unittest i2c bus driver\n");
>  
>  #if IS_BUILTIN(CONFIG_I2C_MUX)
> -	ret = i2c_add_driver(&unittest_i2c_mux_driver);
> -	if (unittest(ret == 0,
> -			"could not register unittest i2c mux driver\n"))
> -		return ret;
> +	KUNIT_ASSERT_EQ_MSG(test, i2c_add_driver(&unittest_i2c_mux_driver), 0,
> +			    "could not register unittest i2c mux driver\n");
>  #endif
>  
>  	return 0;
> @@ -2046,101 +2094,85 @@ static void of_unittest_overlay_i2c_cleanup(void)
>  	i2c_del_driver(&unittest_i2c_dev_driver);
>  }
>  
> -static void __init of_unittest_overlay_i2c_12(void)
> +static void of_unittest_overlay_i2c_12(struct kunit *test)
>  {
>  	/* device should enable */
> -	if (of_unittest_apply_overlay_check(12, 12, 0, 1, I2C_OVERLAY))
> -		return;
> -
> -	unittest(1, "overlay test %d passed\n", 12);
> +	of_unittest_apply_overlay_check(test, 12, 12, 0, 1, I2C_OVERLAY);
>  }
>  
>  /* test deactivation of device */
> -static void __init of_unittest_overlay_i2c_13(void)
> +static void of_unittest_overlay_i2c_13(struct kunit *test)
>  {
>  	/* device should disable */
> -	if (of_unittest_apply_overlay_check(13, 13, 1, 0, I2C_OVERLAY))
> -		return;
> -
> -	unittest(1, "overlay test %d passed\n", 13);
> +	of_unittest_apply_overlay_check(test, 13, 13, 1, 0, I2C_OVERLAY);
>  }
>  
>  /* just check for i2c mux existence */
> -static void of_unittest_overlay_i2c_14(void)
> +static void of_unittest_overlay_i2c_14(struct kunit *test)
>  {
> +	KUNIT_SUCCEED(test);
>  }
>  
> -static void __init of_unittest_overlay_i2c_15(void)
> +static void of_unittest_overlay_i2c_15(struct kunit *test)
>  {
>  	/* device should enable */
> -	if (of_unittest_apply_overlay_check(15, 15, 0, 1, I2C_OVERLAY))
> -		return;
> -
> -	unittest(1, "overlay test %d passed\n", 15);
> +	of_unittest_apply_overlay_check(test, 15, 15, 0, 1, I2C_OVERLAY);
>  }
>  
>  #else
>  
> -static inline void of_unittest_overlay_i2c_14(void) { }
> -static inline void of_unittest_overlay_i2c_15(void) { }
> +static inline void of_unittest_overlay_i2c_14(struct kunit *test) { }
> +static inline void of_unittest_overlay_i2c_15(struct kunit *test) { }
>  
>  #endif
>  
> -static void __init of_unittest_overlay(void)
> +static void of_unittest_overlay(struct kunit *test)
>  {
>  	struct device_node *bus_np = NULL;
>  
> -	if (platform_driver_register(&unittest_driver)) {
> -		unittest(0, "could not register unittest driver\n");
> -		goto out;
> -	}
> +	KUNIT_ASSERT_FALSE_MSG(test, platform_driver_register(&unittest_driver),
> +			       "could not register unittest driver\n");
>  
>  	bus_np = of_find_node_by_path(bus_path);
> -	if (bus_np == NULL) {
> -		unittest(0, "could not find bus_path \"%s\"\n", bus_path);
> -		goto out;
> -	}
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL_MSG(
> +		test, bus_np, "could not find bus_path \"%s\"\n", bus_path);
>  
> -	if (of_platform_default_populate(bus_np, NULL, NULL)) {
> -		unittest(0, "could not populate bus @ \"%s\"\n", bus_path);
> -		goto out;
> -	}
> -
> -	if (!of_unittest_device_exists(100, PDEV_OVERLAY)) {
> -		unittest(0, "could not find unittest0 @ \"%s\"\n",
> -				unittest_path(100, PDEV_OVERLAY));
> -		goto out;
> -	}
> +	KUNIT_ASSERT_FALSE_MSG(
> +		test, of_platform_default_populate(bus_np, NULL, NULL),
> +		"could not populate bus @ \"%s\"\n", bus_path);
>  
> -	if (of_unittest_device_exists(101, PDEV_OVERLAY)) {
> -		unittest(0, "unittest1 @ \"%s\" should not exist\n",
> -				unittest_path(101, PDEV_OVERLAY));
> -		goto out;
> -	}
> +	KUNIT_ASSERT_TRUE_MSG(
> +		test, of_unittest_device_exists(100, PDEV_OVERLAY),
> +		"could not find unittest0 @ \"%s\"\n",
> +		unittest_path(100, PDEV_OVERLAY));
>  
> -	unittest(1, "basic infrastructure of overlays passed");
> +	KUNIT_ASSERT_FALSE_MSG(
> +		test, of_unittest_device_exists(101, PDEV_OVERLAY),
> +		"unittest1 @ \"%s\" should not exist\n",
> +		unittest_path(101, PDEV_OVERLAY));
>  
>  	/* tests in sequence */
> -	of_unittest_overlay_0();
> -	of_unittest_overlay_1();
> -	of_unittest_overlay_2();
> -	of_unittest_overlay_3();
> -	of_unittest_overlay_4();
> -	of_unittest_overlay_5();
> -	of_unittest_overlay_6();
> -	of_unittest_overlay_8();
> -
> -	of_unittest_overlay_10();
> -	of_unittest_overlay_11();
> +	of_unittest_overlay_0(test);
> +	of_unittest_overlay_1(test);
> +	of_unittest_overlay_2(test);
> +	of_unittest_overlay_3(test);
> +	of_unittest_overlay_4(test);
> +	of_unittest_overlay_5(test);
> +	of_unittest_overlay_6(test);
> +	of_unittest_overlay_8(test);
> +
> +	of_unittest_overlay_10(test);
> +	of_unittest_overlay_11(test);
>  
>  #if IS_BUILTIN(CONFIG_I2C)
> -	if (unittest(of_unittest_overlay_i2c_init() == 0, "i2c init failed\n"))
> -		goto out;
> +	KUNIT_ASSERT_EQ_MSG(test, of_unittest_overlay_i2c_init(test), 0,
> +			    "i2c init failed\n");
>  
> -	of_unittest_overlay_i2c_12();
> -	of_unittest_overlay_i2c_13();
> -	of_unittest_overlay_i2c_14();
> -	of_unittest_overlay_i2c_15();
> +	of_unittest_overlay_i2c_12(test);
> +	of_unittest_overlay_i2c_13(test);
> +	of_unittest_overlay_i2c_14(test);
> +	of_unittest_overlay_i2c_15(test);
>  
>  	of_unittest_overlay_i2c_cleanup();
>  #endif
> @@ -2152,7 +2184,7 @@ static void __init of_unittest_overlay(void)
>  }
>  
>  #else
> -static inline void __init of_unittest_overlay(void) { }
> +static inline void of_unittest_overlay(struct kunit *test) { }
>  #endif
>  
>  #ifdef CONFIG_OF_OVERLAY
> @@ -2313,7 +2345,7 @@ void __init unittest_unflatten_overlay_base(void)
>   *
>   * Return 0 on unexpected error.
>   */
> -static int __init overlay_data_apply(const char *overlay_name, int *overlay_id)
> +static int overlay_data_apply(const char *overlay_name, int *overlay_id)
>  {
>  	struct overlay_info *info;
>  	int found = 0;
> @@ -2359,19 +2391,17 @@ static int __init overlay_data_apply(const char *overlay_name, int *overlay_id)
>   * The first part of the function is _not_ normal overlay usage; it is
>   * finishing splicing the base overlay device tree into the live tree.
>   */
> -static __init void of_unittest_overlay_high_level(void)
> +static void of_unittest_overlay_high_level(struct kunit *test)
>  {
>  	struct device_node *last_sibling;
>  	struct device_node *np;
>  	struct device_node *of_symbols;
> -	struct device_node *overlay_base_symbols;
> +	struct device_node *overlay_base_symbols = NULL;
>  	struct device_node **pprev;
>  	struct property *prop;
>  
> -	if (!overlay_base_root) {
> -		unittest(0, "overlay_base_root not initialized\n");
> -		return;
> -	}
> +	KUNIT_ASSERT_TRUE_MSG(test, overlay_base_root,
> +			      "overlay_base_root not initialized\n");
>  
>  	/*
>  	 * Could not fixup phandles in unittest_unflatten_overlay_base()
> @@ -2418,11 +2448,9 @@ static __init void of_unittest_overlay_high_level(void)
>  	for_each_child_of_node(overlay_base_root, np) {
>  		struct device_node *base_child;
>  		for_each_child_of_node(of_root, base_child) {
> -			if (!strcmp(np->full_name, base_child->full_name)) {
> -				unittest(0, "illegal node name in overlay_base %pOFn",
> -					 np);
> -				return;
> -			}
> +			KUNIT_ASSERT_STRNEQ_MSG(
> +				test, np->full_name, base_child->full_name,
> +				"illegal node name in overlay_base %pOFn", np);
>  		}
>  	}
>  
> @@ -2456,21 +2484,24 @@ static __init void of_unittest_overlay_high_level(void)
>  
>  			new_prop = __of_prop_dup(prop, GFP_KERNEL);
>  			if (!new_prop) {
> -				unittest(0, "__of_prop_dup() of '%s' from overlay_base node __symbols__",
> -					 prop->name);
> +				KUNIT_FAIL(test,
> +					   "__of_prop_dup() of '%s' from overlay_base node __symbols__",
> +					   prop->name);
>  				goto err_unlock;
>  			}
>  			if (__of_add_property(of_symbols, new_prop)) {
>  				/* "name" auto-generated by unflatten */
>  				if (!strcmp(new_prop->name, "name"))
>  					continue;
> -				unittest(0, "duplicate property '%s' in overlay_base node __symbols__",
> -					 prop->name);
> +				KUNIT_FAIL(test,
> +					   "duplicate property '%s' in overlay_base node __symbols__",
> +					   prop->name);
>  				goto err_unlock;
>  			}
>  			if (__of_add_property_sysfs(of_symbols, new_prop)) {
> -				unittest(0, "unable to add property '%s' in overlay_base node __symbols__ to sysfs",
> -					 prop->name);
> +				KUNIT_FAIL(test,
> +					   "unable to add property '%s' in overlay_base node __symbols__ to sysfs",
> +					   prop->name);
>  				goto err_unlock;
>  			}
>  		}
> @@ -2481,20 +2512,24 @@ static __init void of_unittest_overlay_high_level(void)
>  
>  	/* now do the normal overlay usage test */
>  
> -	unittest(overlay_data_apply("overlay", NULL),
> -		 "Adding overlay 'overlay' failed\n");
> +	KUNIT_EXPECT_TRUE_MSG(test, overlay_data_apply("overlay", NULL),
> +			      "Adding overlay 'overlay' failed\n");
>  
> -	unittest(overlay_data_apply("overlay_bad_add_dup_node", NULL),
> -		 "Adding overlay 'overlay_bad_add_dup_node' failed\n");
> +	KUNIT_EXPECT_TRUE_MSG(
> +		test, overlay_data_apply("overlay_bad_add_dup_node", NULL),
> +		"Adding overlay 'overlay_bad_add_dup_node' failed\n");
>  
> -	unittest(overlay_data_apply("overlay_bad_add_dup_prop", NULL),
> -		 "Adding overlay 'overlay_bad_add_dup_prop' failed\n");
> +	KUNIT_EXPECT_TRUE_MSG(
> +		test, overlay_data_apply("overlay_bad_add_dup_prop", NULL),
> +		"Adding overlay 'overlay_bad_add_dup_prop' failed\n");
>  
> -	unittest(overlay_data_apply("overlay_bad_phandle", NULL),
> -		 "Adding overlay 'overlay_bad_phandle' failed\n");
> +	KUNIT_EXPECT_TRUE_MSG(
> +		test, overlay_data_apply("overlay_bad_phandle", NULL),
> +		"Adding overlay 'overlay_bad_phandle' failed\n");
>  
> -	unittest(overlay_data_apply("overlay_bad_symbol", NULL),
> -		 "Adding overlay 'overlay_bad_symbol' failed\n");
> +	KUNIT_EXPECT_TRUE_MSG(
> +		test, overlay_data_apply("overlay_bad_symbol", NULL),
> +		"Adding overlay 'overlay_bad_symbol' failed\n");
>  
>  	return;
>  
> @@ -2504,57 +2539,52 @@ static __init void of_unittest_overlay_high_level(void)
>  
>  #else
>  
> -static inline __init void of_unittest_overlay_high_level(void) {}
> +static inline void of_unittest_overlay_high_level(struct kunit *test) {}
>  
>  #endif
>  
> -static int __init of_unittest(void)
> +static int of_test_init(struct kunit *test)
>  {
> -	struct device_node *np;
> -	int res;
> -
>  	/* adding data for unittest */
> -	res = unittest_data_add();
> -	if (res)
> -		return res;
> +	KUNIT_ASSERT_EQ(test, 0, unittest_data_add());
> +
>  	if (!of_aliases)
>  		of_aliases = of_find_node_by_path("/aliases");
>  
> -	np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
> -	if (!np) {
> -		pr_info("No testcase data in device tree; not running tests\n");
> -		return 0;
> -	}
> -	of_node_put(np);
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, of_find_node_by_path(
> +		"/testcase-data/phandle-tests/consumer-a"));
>  
>  	if (IS_ENABLED(CONFIG_UML))
>  		unflatten_device_tree();
>  
> -	pr_info("start of unittest - you will see error messages\n");
> -	of_unittest_check_tree_linkage();
> -	of_unittest_check_phandles();
> -	of_unittest_find_node_by_name();
> -	of_unittest_dynamic();
> -	of_unittest_parse_phandle_with_args();
> -	of_unittest_parse_phandle_with_args_map();
> -	of_unittest_printf();
> -	of_unittest_property_string();
> -	of_unittest_property_copy();
> -	of_unittest_changeset();
> -	of_unittest_parse_interrupts();
> -	of_unittest_parse_interrupts_extended();
> -	of_unittest_match_node();
> -	of_unittest_platform_populate();
> -	of_unittest_overlay();
> +	return 0;
> +}
>  
> +static struct kunit_case of_test_cases[] = {
> +	KUNIT_CASE(of_unittest_check_tree_linkage),
> +	KUNIT_CASE(of_unittest_check_phandles),
> +	KUNIT_CASE(of_unittest_find_node_by_name),
> +	KUNIT_CASE(of_unittest_dynamic),
> +	KUNIT_CASE(of_unittest_parse_phandle_with_args),
> +	KUNIT_CASE(of_unittest_parse_phandle_with_args_map),
> +	KUNIT_CASE(of_unittest_printf),
> +	KUNIT_CASE(of_unittest_property_string),
> +	KUNIT_CASE(of_unittest_property_copy),
> +	KUNIT_CASE(of_unittest_changeset),
> +	KUNIT_CASE(of_unittest_parse_interrupts),
> +	KUNIT_CASE(of_unittest_parse_interrupts_extended),
> +	KUNIT_CASE(of_unittest_match_node),
> +	KUNIT_CASE(of_unittest_platform_populate),
> +	KUNIT_CASE(of_unittest_overlay),
>  	/* Double check linkage after removing testcase data */
> -	of_unittest_check_tree_linkage();
> -
> -	of_unittest_overlay_high_level();
> -
> -	pr_info("end of unittest - %i passed, %i failed\n",
> -		unittest_results.passed, unittest_results.failed);
> +	KUNIT_CASE(of_unittest_check_tree_linkage),
> +	KUNIT_CASE(of_unittest_overlay_high_level),
> +	{},
> +};
>  
> -	return 0;
> -}
> -late_initcall(of_unittest);
> +static struct kunit_module of_test_module = {
> +	.name = "of-test",
> +	.init = of_test_init,
> +	.test_cases = of_test_cases,
> +};
> +module_test(of_test_module);
> 
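For reference, the registration boilerplate above (the kunit_case array, the
kunit_module, and module_test()) reduces to the following minimal sketch. It
uses only the API shown in the hunk above; the "foo_test" names are
placeholders for illustration and are not part of the patch:

	/* Illustrative only; "foo_test" is a placeholder, not code from the patch. */
	static int foo_test_init(struct kunit *test)
	{
		/* init hook; returns 0 on success (mirrors of_test_init() above) */
		return 0;
	}

	static void foo_test_trivial(struct kunit *test)
	{
		KUNIT_EXPECT_EQ(test, 1 + 1, 2);
	}

	static struct kunit_case foo_test_cases[] = {
		KUNIT_CASE(foo_test_trivial),
		{},
	};

	static struct kunit_module foo_test_module = {
		.name = "foo-test",
		.init = foo_test_init,
		.test_cases = foo_test_cases,
	};
	module_test(foo_test_module);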



* [RFC v4 15/17] of: unittest: migrate tests to run on KUnit
@ 2019-02-16  0:24         ` Frank Rowand
  0 siblings, 0 replies; 316+ messages in thread
From: frowand.list @ 2019-02-16  0:24 UTC (permalink / raw)


On 2/14/19 1:37 PM, Brendan Higgins wrote:
> Migrate the tests to run under KUnit using the KUnit expectation and
> assertion API, without any cleanup and without modifying test logic in
> any way.
> 
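In other words, every open-coded unittest() check becomes a KUnit expectation
or assertion: KUNIT_ASSERT_* aborts the current test case on failure (taking
the place of the old "log an error and return early" pattern), while
KUNIT_EXPECT_* records the failure and keeps the case running. A minimal
sketch of the pattern, not code taken from the patch (the node path and
property name are simply reused from the existing test data):

	/* Illustrative only; example_case() is not one of the migrated tests. */
	static void example_case(struct kunit *test)
	{
		struct device_node *np;

		np = of_find_node_by_path("/testcase-data");
		/* Abort this case if the test data is missing. */
		KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);

		/* Record a failure but keep running on mismatch. */
		KUNIT_EXPECT_EQ_MSG(test,
				    of_property_count_strings(np, "string-property"), 1,
				    "Incorrect string count\n");

		of_node_put(np);
	}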
> Signed-off-by: Brendan Higgins <brendanhiggins at google.com>
> ---
>  drivers/of/Kconfig    |    1 +
>  drivers/of/unittest.c | 1310 +++++++++++++++++++++--------------------
>  2 files changed, 671 insertions(+), 640 deletions(-)
> 
> diff --git a/drivers/of/Kconfig b/drivers/of/Kconfig
> index ad3fcad4d75b8..f309399deac20 100644
> --- a/drivers/of/Kconfig
> +++ b/drivers/of/Kconfig
> @@ -15,6 +15,7 @@ if OF
>  config OF_UNITTEST
>  	bool "Device Tree runtime unit tests"
>  	depends on !SPARC
> +	depends on KUNIT
>  	select IRQ_DOMAIN
>  	select OF_EARLY_FLATTREE
>  	select OF_RESOLVE
> diff --git a/drivers/of/unittest.c b/drivers/of/unittest.c

These comments are from applying the patches to 5.0-rc3.

The final hunk of this patch fails to apply because it depends upon

   [PATCH v1 0/1] of: unittest: unflatten device tree on UML when testing.

If I apply that patch then I can apply patches 15 through 17.

If I apply patches 1 through 14 and boot the UML kernel, the devicetree
unittest result is:

  ### dt-test ### FAIL of_unittest_overlay_high_level():2372 overlay_base_root not initialized
  ### dt-test ### end of unittest - 219 passed, 1 failed

This is as expected from your previous reports, and is fixed after
applying

   [PATCH v1 0/1] of: unittest: unflatten device tree on UML when testing.

with the devicetree unittest result of:

   ### dt-test ### end of unittest - 224 passed, 0 failed

After adding patch 15, there are a lot of "unittest internal error" messages.

-Frank


> index effa4e2b9d992..96de69ccb3e63 100644
> --- a/drivers/of/unittest.c
> +++ b/drivers/of/unittest.c
> @@ -26,186 +26,189 @@
>  
>  #include <linux/bitops.h>
>  
> +#include <kunit/test.h>
> +
>  #include "of_private.h"
>  
> -static struct unittest_results {
> -	int passed;
> -	int failed;
> -} unittest_results;
> -
> -#define unittest(result, fmt, ...) ({ \
> -	bool failed = !(result); \
> -	if (failed) { \
> -		unittest_results.failed++; \
> -		pr_err("FAIL %s():%i " fmt, __func__, __LINE__, ##__VA_ARGS__); \
> -	} else { \
> -		unittest_results.passed++; \
> -		pr_debug("pass %s():%i\n", __func__, __LINE__); \
> -	} \
> -	failed; \
> -})
> -
> -static void __init of_unittest_find_node_by_name(void)
> +static void of_unittest_find_node_by_name(struct kunit *test)
>  {
>  	struct device_node *np;
>  	const char *options, *name;
>  
>  	np = of_find_node_by_path("/testcase-data");
>  	name = kasprintf(GFP_KERNEL, "%pOF", np);
> -	unittest(np && !strcmp("/testcase-data", name),
> -		"find /testcase-data failed\n");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> +	KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
> +			       "find /testcase-data failed\n");
>  	of_node_put(np);
>  	kfree(name);
>  
>  	/* Test if trailing '/' works */
> -	np = of_find_node_by_path("/testcase-data/");
> -	unittest(!np, "trailing '/' on /testcase-data/ should fail\n");
> +	KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("/testcase-data/"), NULL,
> +			    "trailing '/' on /testcase-data/ should fail\n");
>  
>  	np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>  	name = kasprintf(GFP_KERNEL, "%pOF", np);
> -	unittest(np && !strcmp("/testcase-data/phandle-tests/consumer-a", name),
> +	KUNIT_EXPECT_STREQ_MSG(
> +		test, "/testcase-data/phandle-tests/consumer-a", name,
>  		"find /testcase-data/phandle-tests/consumer-a failed\n");
>  	of_node_put(np);
>  	kfree(name);
>  
>  	np = of_find_node_by_path("testcase-alias");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>  	name = kasprintf(GFP_KERNEL, "%pOF", np);
> -	unittest(np && !strcmp("/testcase-data", name),
> -		"find testcase-alias failed\n");
> +	KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
> +			       "find testcase-alias failed\n");
>  	of_node_put(np);
>  	kfree(name);
>  
>  	/* Test if trailing '/' works on aliases */
> -	np = of_find_node_by_path("testcase-alias/");
> -	unittest(!np, "trailing '/' on testcase-alias/ should fail\n");
> +	KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("testcase-alias/"), NULL,
> +			    "trailing '/' on testcase-alias/ should fail\n");
>  
>  	np = of_find_node_by_path("testcase-alias/phandle-tests/consumer-a");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>  	name = kasprintf(GFP_KERNEL, "%pOF", np);
> -	unittest(np && !strcmp("/testcase-data/phandle-tests/consumer-a", name),
> +	KUNIT_EXPECT_STREQ_MSG(
> +		test, "/testcase-data/phandle-tests/consumer-a", name,
>  		"find testcase-alias/phandle-tests/consumer-a failed\n");
>  	of_node_put(np);
>  	kfree(name);
>  
> -	np = of_find_node_by_path("/testcase-data/missing-path");
> -	unittest(!np, "non-existent path returned node %pOF\n", np);
> +	KUNIT_EXPECT_EQ_MSG(
> +		test,
> +		np = of_find_node_by_path("/testcase-data/missing-path"), NULL,
> +		"non-existent path returned node %pOF\n", np);
>  	of_node_put(np);
>  
> -	np = of_find_node_by_path("missing-alias");
> -	unittest(!np, "non-existent alias returned node %pOF\n", np);
> +	KUNIT_EXPECT_EQ_MSG(
> +		test, np = of_find_node_by_path("missing-alias"), NULL,
> +		"non-existent alias returned node %pOF\n", np);
>  	of_node_put(np);
>  
> -	np = of_find_node_by_path("testcase-alias/missing-path");
> -	unittest(!np, "non-existent alias with relative path returned node %pOF\n", np);
> +	KUNIT_EXPECT_EQ_MSG(
> +		test,
> +		np = of_find_node_by_path("testcase-alias/missing-path"), NULL,
> +		"non-existent alias with relative path returned node %pOF\n",
> +		np);
>  	of_node_put(np);
>  
>  	np = of_find_node_opts_by_path("/testcase-data:testoption", &options);
> -	unittest(np && !strcmp("testoption", options),
> -		 "option path test failed\n");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> +	KUNIT_EXPECT_STREQ_MSG(test, "testoption", options,
> +			       "option path test failed\n");
>  	of_node_put(np);
>  
>  	np = of_find_node_opts_by_path("/testcase-data:test/option", &options);
> -	unittest(np && !strcmp("test/option", options),
> -		 "option path test, subcase #1 failed\n");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> +	KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
> +			       "option path test, subcase #1 failed\n");
>  	of_node_put(np);
>  
>  	np = of_find_node_opts_by_path("/testcase-data/testcase-device1:test/option", &options);
> -	unittest(np && !strcmp("test/option", options),
> -		 "option path test, subcase #2 failed\n");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> +	KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
> +			       "option path test, subcase #2 failed\n");
>  	of_node_put(np);
>  
>  	np = of_find_node_opts_by_path("/testcase-data:testoption", NULL);
> -	unittest(np, "NULL option path test failed\n");
> +	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(test, np,
> +					 "NULL option path test failed\n");
>  	of_node_put(np);
>  
>  	np = of_find_node_opts_by_path("testcase-alias:testaliasoption",
>  				       &options);
> -	unittest(np && !strcmp("testaliasoption", options),
> -		 "option alias path test failed\n");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> +	KUNIT_EXPECT_STREQ_MSG(test, "testaliasoption", options,
> +			       "option alias path test failed\n");
>  	of_node_put(np);
>  
>  	np = of_find_node_opts_by_path("testcase-alias:test/alias/option",
>  				       &options);
> -	unittest(np && !strcmp("test/alias/option", options),
> -		 "option alias path test, subcase #1 failed\n");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> +	KUNIT_EXPECT_STREQ_MSG(test, "test/alias/option", options,
> +			       "option alias path test, subcase #1 failed\n");
>  	of_node_put(np);
>  
>  	np = of_find_node_opts_by_path("testcase-alias:testaliasoption", NULL);
> -	unittest(np, "NULL option alias path test failed\n");
> +	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(
> +			test, np, "NULL option alias path test failed\n");
>  	of_node_put(np);
>  
>  	options = "testoption";
>  	np = of_find_node_opts_by_path("testcase-alias", &options);
> -	unittest(np && !options, "option clearing test failed\n");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> +	KUNIT_EXPECT_EQ_MSG(test, options, NULL,
> +			    "option clearing test failed\n");
>  	of_node_put(np);
>  
>  	options = "testoption";
>  	np = of_find_node_opts_by_path("/", &options);
> -	unittest(np && !options, "option clearing root node test failed\n");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> +	KUNIT_EXPECT_EQ_MSG(test, options, NULL,
> +			    "option clearing root node test failed\n");
>  	of_node_put(np);
>  }
>  
> -static void __init of_unittest_dynamic(void)
> +static void of_unittest_dynamic(struct kunit *test)
>  {
>  	struct device_node *np;
>  	struct property *prop;
>  
>  	np = of_find_node_by_path("/testcase-data");
> -	if (!np) {
> -		pr_err("missing testcase data\n");
> -		return;
> -	}
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>  
>  	/* Array of 4 properties for the purpose of testing */
>  	prop = kcalloc(4, sizeof(*prop), GFP_KERNEL);
> -	if (!prop) {
> -		unittest(0, "kzalloc() failed\n");
> -		return;
> -	}
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, prop);
>  
>  	/* Add a new property - should pass*/
>  	prop->name = "new-property";
>  	prop->value = "new-property-data";
>  	prop->length = strlen(prop->value) + 1;
> -	unittest(of_add_property(np, prop) == 0, "Adding a new property failed\n");
> +	KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop), 0,
> +			    "Adding a new property failed\n");
>  
>  	/* Try to add an existing property - should fail */
>  	prop++;
>  	prop->name = "new-property";
>  	prop->value = "new-property-data-should-fail";
>  	prop->length = strlen(prop->value) + 1;
> -	unittest(of_add_property(np, prop) != 0,
> -		 "Adding an existing property should have failed\n");
> +	KUNIT_EXPECT_NE_MSG(test, of_add_property(np, prop), 0,
> +			    "Adding an existing property should have failed\n");
>  
>  	/* Try to modify an existing property - should pass */
>  	prop->value = "modify-property-data-should-pass";
>  	prop->length = strlen(prop->value) + 1;
> -	unittest(of_update_property(np, prop) == 0,
> -		 "Updating an existing property should have passed\n");
> +	KUNIT_EXPECT_EQ_MSG(
> +		test, of_update_property(np, prop), 0,
> +		"Updating an existing property should have passed\n");
>  
>  	/* Try to modify non-existent property - should pass*/
>  	prop++;
>  	prop->name = "modify-property";
>  	prop->value = "modify-missing-property-data-should-pass";
>  	prop->length = strlen(prop->value) + 1;
> -	unittest(of_update_property(np, prop) == 0,
> -		 "Updating a missing property should have passed\n");
> +	KUNIT_EXPECT_EQ_MSG(test, of_update_property(np, prop), 0,
> +			    "Updating a missing property should have passed\n");
>  
>  	/* Remove property - should pass */
> -	unittest(of_remove_property(np, prop) == 0,
> -		 "Removing a property should have passed\n");
> +	KUNIT_EXPECT_EQ_MSG(test, of_remove_property(np, prop), 0,
> +			    "Removing a property should have passed\n");
>  
>  	/* Adding very large property - should pass */
>  	prop++;
>  	prop->name = "large-property-PAGE_SIZEx8";
>  	prop->length = PAGE_SIZE * 8;
>  	prop->value = kzalloc(prop->length, GFP_KERNEL);
> -	unittest(prop->value != NULL, "Unable to allocate large buffer\n");
> -	if (prop->value)
> -		unittest(of_add_property(np, prop) == 0,
> -			 "Adding a large property should have passed\n");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, prop->value);
> +	KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop), 0,
> +			    "Adding a large property should have passed\n");
>  }
>  
> -static int __init of_unittest_check_node_linkage(struct device_node *np)
> +static int of_unittest_check_node_linkage(struct device_node *np)
>  {
>  	struct device_node *child;
>  	int count = 0, rc;
> @@ -230,27 +233,30 @@ static int __init of_unittest_check_node_linkage(struct device_node *np)
>  	return rc;
>  }
>  
> -static void __init of_unittest_check_tree_linkage(void)
> +static void of_unittest_check_tree_linkage(struct kunit *test)
>  {
>  	struct device_node *np;
>  	int allnode_count = 0, child_count;
>  
> -	if (!of_root)
> -		return;
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, of_root);
>  
>  	for_each_of_allnodes(np)
>  		allnode_count++;
>  	child_count = of_unittest_check_node_linkage(of_root);
>  
> -	unittest(child_count > 0, "Device node data structure is corrupted\n");
> -	unittest(child_count == allnode_count,
> -		 "allnodes list size (%i) doesn't match sibling lists size (%i)\n",
> -		 allnode_count, child_count);
> +	KUNIT_EXPECT_GT_MSG(test, child_count, 0,
> +			    "Device node data structure is corrupted\n");
> +	KUNIT_EXPECT_EQ_MSG(
> +		test, child_count, allnode_count,
> +		"allnodes list size (%i) doesn't match sibling lists size (%i)\n",
> +		allnode_count, child_count);
>  	pr_debug("allnodes list size (%i); sibling lists size (%i)\n", allnode_count, child_count);
>  }
>  
> -static void __init of_unittest_printf_one(struct device_node *np, const char *fmt,
> -					  const char *expected)
> +static void of_unittest_printf_one(struct kunit *test,
> +				   struct device_node *np,
> +				   const char *fmt,
> +				   const char *expected)
>  {
>  	unsigned char *buf;
>  	int buf_size;
> @@ -265,8 +271,12 @@ static void __init of_unittest_printf_one(struct device_node *np, const char *fm
>  	memset(buf, 0xff, buf_size);
>  	size = snprintf(buf, buf_size - 2, fmt, np);
>  
> -	/* use strcmp() instead of strncmp() here to be absolutely sure strings match */
> -	unittest((strcmp(buf, expected) == 0) && (buf[size+1] == 0xff),
> +	KUNIT_EXPECT_STREQ_MSG(
> +		test, buf, expected,
> +		"sprintf failed; fmt='%s' expected='%s' rslt='%s'\n",
> +		fmt, expected, buf);
> +	KUNIT_EXPECT_EQ_MSG(
> +		test, buf[size+1], 0xff,
>  		"sprintf failed; fmt='%s' expected='%s' rslt='%s'\n",
>  		fmt, expected, buf);
>  
> @@ -276,44 +286,49 @@ static void __init of_unittest_printf_one(struct device_node *np, const char *fm
>  		/* Clear the buffer, and make sure it works correctly still */
>  		memset(buf, 0xff, buf_size);
>  		snprintf(buf, size+1, fmt, np);
> -		unittest(strncmp(buf, expected, size) == 0 && (buf[size+1] == 0xff),
> +		KUNIT_EXPECT_STREQ_MSG(
> +			test, buf, expected,
> +			"snprintf failed; size=%i fmt='%s' expected='%s' rslt='%s'\n",
> +			size, fmt, expected, buf);
> +		KUNIT_EXPECT_EQ_MSG(
> +			test, buf[size+1], 0xff,
>  			"snprintf failed; size=%i fmt='%s' expected='%s' rslt='%s'\n",
>  			size, fmt, expected, buf);
>  	}
>  	kfree(buf);
>  }
>  
> -static void __init of_unittest_printf(void)
> +static void of_unittest_printf(struct kunit *test)
>  {
>  	struct device_node *np;
>  	const char *full_name = "/testcase-data/platform-tests/test-device@1/dev@100";
>  	char phandle_str[16] = "";
>  
>  	np = of_find_node_by_path(full_name);
> -	if (!np) {
> -		unittest(np, "testcase data missing\n");
> -		return;
> -	}
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>  
>  	num_to_str(phandle_str, sizeof(phandle_str), np->phandle, 0);
>  
> -	of_unittest_printf_one(np, "%pOF",  full_name);
> -	of_unittest_printf_one(np, "%pOFf", full_name);
> -	of_unittest_printf_one(np, "%pOFn", "dev");
> -	of_unittest_printf_one(np, "%2pOFn", "dev");
> -	of_unittest_printf_one(np, "%5pOFn", "  dev");
> -	of_unittest_printf_one(np, "%pOFnc", "dev:test-sub-device");
> -	of_unittest_printf_one(np, "%pOFp", phandle_str);
> -	of_unittest_printf_one(np, "%pOFP", "dev@100");
> -	of_unittest_printf_one(np, "ABC %pOFP ABC", "ABC dev@100 ABC");
> -	of_unittest_printf_one(np, "%10pOFP", "   dev@100");
> -	of_unittest_printf_one(np, "%-10pOFP", "dev@100   ");
> -	of_unittest_printf_one(of_root, "%pOFP", "/");
> -	of_unittest_printf_one(np, "%pOFF", "----");
> -	of_unittest_printf_one(np, "%pOFPF", "dev@100:----");
> -	of_unittest_printf_one(np, "%pOFPFPc", "dev@100:----:dev@100:test-sub-device");
> -	of_unittest_printf_one(np, "%pOFc", "test-sub-device");
> -	of_unittest_printf_one(np, "%pOFC",
> +	of_unittest_printf_one(test, np, "%pOF",  full_name);
> +	of_unittest_printf_one(test, np, "%pOFf", full_name);
> +	of_unittest_printf_one(test, np, "%pOFn", "dev");
> +	of_unittest_printf_one(test, np, "%2pOFn", "dev");
> +	of_unittest_printf_one(test, np, "%5pOFn", "  dev");
> +	of_unittest_printf_one(test, np, "%pOFnc", "dev:test-sub-device");
> +	of_unittest_printf_one(test, np, "%pOFp", phandle_str);
> +	of_unittest_printf_one(test, np, "%pOFP", "dev@100");
> +	of_unittest_printf_one(test, np, "ABC %pOFP ABC", "ABC dev@100 ABC");
> +	of_unittest_printf_one(test, np, "%10pOFP", "   dev@100");
> +	of_unittest_printf_one(test, np, "%-10pOFP", "dev@100   ");
> +	of_unittest_printf_one(test, of_root, "%pOFP", "/");
> +	of_unittest_printf_one(test, np, "%pOFF", "----");
> +	of_unittest_printf_one(test, np, "%pOFPF", "dev@100:----");
> +	of_unittest_printf_one(test,
> +			       np,
> +			       "%pOFPFPc",
> +			       "dev@100:----:dev@100:test-sub-device");
> +	of_unittest_printf_one(test, np, "%pOFc", "test-sub-device");
> +	of_unittest_printf_one(test, np, "%pOFC",
>  			"\"test-sub-device\",\"test-compat2\",\"test-compat3\"");
>  }
>  
> @@ -323,7 +338,7 @@ struct node_hash {
>  };
>  
>  static DEFINE_HASHTABLE(phandle_ht, 8);
> -static void __init of_unittest_check_phandles(void)
> +static void of_unittest_check_phandles(struct kunit *test)
>  {
>  	struct device_node *np;
>  	struct node_hash *nh;
> @@ -335,24 +350,26 @@ static void __init of_unittest_check_phandles(void)
>  			continue;
>  
>  		hash_for_each_possible(phandle_ht, nh, node, np->phandle) {
> +			KUNIT_EXPECT_NE_MSG(
> +				test, nh->np->phandle, np->phandle,
> +				"Duplicate phandle! %i used by %pOF and %pOF\n",
> +				np->phandle, nh->np, np);
>  			if (nh->np->phandle == np->phandle) {
> -				pr_info("Duplicate phandle! %i used by %pOF and %pOF\n",
> -					np->phandle, nh->np, np);
>  				dup_count++;
>  				break;
>  			}
>  		}
>  
>  		nh = kzalloc(sizeof(*nh), GFP_KERNEL);
> -		if (WARN_ON(!nh))
> -			return;
> +		KUNIT_ASSERT_NOT_ERR_OR_NULL(test, nh);
>  
>  		nh->np = np;
>  		hash_add(phandle_ht, &nh->node, np->phandle);
>  		phandle_count++;
>  	}
> -	unittest(dup_count == 0, "Found %i duplicates in %i phandles\n",
> -		 dup_count, phandle_count);
> +	KUNIT_EXPECT_EQ_MSG(test, dup_count, 0,
> +			    "Found %i duplicates in %i phandles\n",
> +			    dup_count, phandle_count);
>  
>  	/* Clean up */
>  	hash_for_each_safe(phandle_ht, i, tmp, nh, node) {
> @@ -361,20 +378,21 @@ static void __init of_unittest_check_phandles(void)
>  	}
>  }
>  
> -static void __init of_unittest_parse_phandle_with_args(void)
> +static void of_unittest_parse_phandle_with_args(struct kunit *test)
>  {
>  	struct device_node *np;
>  	struct of_phandle_args args;
> -	int i, rc;
> +	int i, rc = 0;
>  
>  	np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
> -	if (!np) {
> -		pr_err("missing testcase data\n");
> -		return;
> -	}
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>  
> -	rc = of_count_phandle_with_args(np, "phandle-list", "#phandle-cells");
> -	unittest(rc == 7, "of_count_phandle_with_args() returned %i, expected 7\n", rc);
> +	KUNIT_EXPECT_EQ_MSG(
> +		test,
> +		of_count_phandle_with_args(
> +			np, "phandle-list", "#phandle-cells"),
> +		7,
> +		"of_count_phandle_with_args() returned %i, expected 7\n", rc);
>  
>  	for (i = 0; i < 8; i++) {
>  		bool passed = true;
> @@ -428,85 +446,91 @@ static void __init of_unittest_parse_phandle_with_args(void)
>  			passed = false;
>  		}
>  
> -		unittest(passed, "index %i - data error on node %pOF rc=%i\n",
> -			 i, args.np, rc);
> +		KUNIT_EXPECT_TRUE_MSG(
> +			test, passed,
> +			"index %i - data error on node %pOF rc=%i\n",
> +			i, args.np, rc);
>  	}
>  
>  	/* Check for missing list property */
>  	memset(&args, 0, sizeof(args));
> -	rc = of_parse_phandle_with_args(np, "phandle-list-missing",
> -					"#phandle-cells", 0, &args);
> -	unittest(rc == -ENOENT, "expected:%i got:%i\n", -ENOENT, rc);
> -	rc = of_count_phandle_with_args(np, "phandle-list-missing",
> -					"#phandle-cells");
> -	unittest(rc == -ENOENT, "expected:%i got:%i\n", -ENOENT, rc);
> +	KUNIT_EXPECT_EQ(
> +		test,
> +		of_parse_phandle_with_args(
> +			np, "phandle-list-missing", "#phandle-cells", 0, &args),
> +		-ENOENT);
> +	KUNIT_EXPECT_EQ(
> +		test,
> +		of_count_phandle_with_args(
> +			np, "phandle-list-missing", "#phandle-cells"),
> +		-ENOENT);
>  
>  	/* Check for missing cells property */
>  	memset(&args, 0, sizeof(args));
> -	rc = of_parse_phandle_with_args(np, "phandle-list",
> -					"#phandle-cells-missing", 0, &args);
> -	unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
> -	rc = of_count_phandle_with_args(np, "phandle-list",
> -					"#phandle-cells-missing");
> -	unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
> +	KUNIT_EXPECT_EQ(
> +		test,
> +		of_parse_phandle_with_args(
> +			np, "phandle-list", "#phandle-cells-missing", 0, &args),
> +		-EINVAL);
> +	KUNIT_EXPECT_EQ(
> +		test,
> +		of_count_phandle_with_args(
> +			np, "phandle-list", "#phandle-cells-missing"),
> +		-EINVAL);
>  
>  	/* Check for bad phandle in list */
>  	memset(&args, 0, sizeof(args));
> -	rc = of_parse_phandle_with_args(np, "phandle-list-bad-phandle",
> -					"#phandle-cells", 0, &args);
> -	unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
> -	rc = of_count_phandle_with_args(np, "phandle-list-bad-phandle",
> -					"#phandle-cells");
> -	unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
> +	KUNIT_EXPECT_EQ(
> +		test,
> +		of_parse_phandle_with_args(np, "phandle-list-bad-phandle",
> +					   "#phandle-cells", 0, &args),
> +		-EINVAL);
> +	KUNIT_EXPECT_EQ(
> +		test,
> +		of_count_phandle_with_args(
> +			np, "phandle-list-bad-phandle", "#phandle-cells"),
> +		-EINVAL);
>  
>  	/* Check for incorrectly formed argument list */
>  	memset(&args, 0, sizeof(args));
> -	rc = of_parse_phandle_with_args(np, "phandle-list-bad-args",
> -					"#phandle-cells", 1, &args);
> -	unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
> -	rc = of_count_phandle_with_args(np, "phandle-list-bad-args",
> -					"#phandle-cells");
> -	unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
> +	KUNIT_EXPECT_EQ(
> +		test,
> +		of_parse_phandle_with_args(np, "phandle-list-bad-args",
> +					   "#phandle-cells", 1, &args),
> +		-EINVAL);
> +	KUNIT_EXPECT_EQ(
> +		test,
> +		of_count_phandle_with_args(
> +			np, "phandle-list-bad-args", "#phandle-cells"),
> +		-EINVAL);
>  }
>  
> -static void __init of_unittest_parse_phandle_with_args_map(void)
> +static void of_unittest_parse_phandle_with_args_map(struct kunit *test)
>  {
>  	struct device_node *np, *p0, *p1, *p2, *p3;
>  	struct of_phandle_args args;
>  	int i, rc;
>  
>  	np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-b");
> -	if (!np) {
> -		pr_err("missing testcase data\n");
> -		return;
> -	}
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>  
>  	p0 = of_find_node_by_path("/testcase-data/phandle-tests/provider0");
> -	if (!p0) {
> -		pr_err("missing testcase data\n");
> -		return;
> -	}
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, p0);
>  
>  	p1 = of_find_node_by_path("/testcase-data/phandle-tests/provider1");
> -	if (!p1) {
> -		pr_err("missing testcase data\n");
> -		return;
> -	}
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, p1);
>  
>  	p2 = of_find_node_by_path("/testcase-data/phandle-tests/provider2");
> -	if (!p2) {
> -		pr_err("missing testcase data\n");
> -		return;
> -	}
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, p2);
>  
>  	p3 = of_find_node_by_path("/testcase-data/phandle-tests/provider3");
> -	if (!p3) {
> -		pr_err("missing testcase data\n");
> -		return;
> -	}
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, p3);
>  
> -	rc = of_count_phandle_with_args(np, "phandle-list", "#phandle-cells");
> -	unittest(rc == 7, "of_count_phandle_with_args() returned %i, expected 7\n", rc);
> +	KUNIT_EXPECT_EQ(test,
> +		       of_count_phandle_with_args(np,
> +						  "phandle-list",
> +						  "#phandle-cells"),
> +		       7);
>  
>  	for (i = 0; i < 8; i++) {
>  		bool passed = true;
> @@ -564,121 +588,186 @@ static void __init of_unittest_parse_phandle_with_args_map(void)
>  			passed = false;
>  		}
>  
> -		unittest(passed, "index %i - data error on node %s rc=%i\n",
> -			 i, args.np->full_name, rc);
> +		KUNIT_EXPECT_TRUE_MSG(
> +			test, passed,
> +			"index %i - data error on node %s rc=%i\n",
> +			i, (args.np ? args.np->full_name : "missing np"), rc);
>  	}
>  
>  	/* Check for missing list property */
>  	memset(&args, 0, sizeof(args));
> -	rc = of_parse_phandle_with_args_map(np, "phandle-list-missing",
> -					    "phandle", 0, &args);
> -	unittest(rc == -ENOENT, "expected:%i got:%i\n", -ENOENT, rc);
> +	KUNIT_EXPECT_EQ(
> +		test,
> +		of_parse_phandle_with_args_map(
> +			np, "phandle-list-missing", "phandle", 0, &args),
> +		-ENOENT);
>  
>  	/* Check for missing cells,map,mask property */
>  	memset(&args, 0, sizeof(args));
> -	rc = of_parse_phandle_with_args_map(np, "phandle-list",
> -					    "phandle-missing", 0, &args);
> -	unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
> +	KUNIT_EXPECT_EQ(
> +		test,
> +		of_parse_phandle_with_args_map(
> +			np, "phandle-list", "phandle-missing", 0, &args),
> +		-EINVAL);
>  
>  	/* Check for bad phandle in list */
>  	memset(&args, 0, sizeof(args));
> -	rc = of_parse_phandle_with_args_map(np, "phandle-list-bad-phandle",
> -					    "phandle", 0, &args);
> -	unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
> +	KUNIT_EXPECT_EQ(
> +		test,
> +		of_parse_phandle_with_args_map(
> +			np, "phandle-list-bad-phandle", "phandle", 0, &args),
> +		-EINVAL);
>  
>  	/* Check for incorrectly formed argument list */
>  	memset(&args, 0, sizeof(args));
> -	rc = of_parse_phandle_with_args_map(np, "phandle-list-bad-args",
> -					    "phandle", 1, &args);
> -	unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
> +	KUNIT_EXPECT_EQ(
> +		test,
> +		of_parse_phandle_with_args_map(
> +			np, "phandle-list-bad-args", "phandle", 1, &args),
> +		-EINVAL);
>  }
>  
> -static void __init of_unittest_property_string(void)
> +static void of_unittest_property_string(struct kunit *test)
>  {
>  	const char *strings[4];
>  	struct device_node *np;
>  	int rc;
>  
>  	np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
> -	if (!np) {
> -		pr_err("No testcase data in device tree\n");
> -		return;
> -	}
> -
> -	rc = of_property_match_string(np, "phandle-list-names", "first");
> -	unittest(rc == 0, "first expected:0 got:%i\n", rc);
> -	rc = of_property_match_string(np, "phandle-list-names", "second");
> -	unittest(rc == 1, "second expected:1 got:%i\n", rc);
> -	rc = of_property_match_string(np, "phandle-list-names", "third");
> -	unittest(rc == 2, "third expected:2 got:%i\n", rc);
> -	rc = of_property_match_string(np, "phandle-list-names", "fourth");
> -	unittest(rc == -ENODATA, "unmatched string; rc=%i\n", rc);
> -	rc = of_property_match_string(np, "missing-property", "blah");
> -	unittest(rc == -EINVAL, "missing property; rc=%i\n", rc);
> -	rc = of_property_match_string(np, "empty-property", "blah");
> -	unittest(rc == -ENODATA, "empty property; rc=%i\n", rc);
> -	rc = of_property_match_string(np, "unterminated-string", "blah");
> -	unittest(rc == -EILSEQ, "unterminated string; rc=%i\n", rc);
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> +
> +	KUNIT_EXPECT_EQ(
> +		test,
> +		of_property_match_string(np, "phandle-list-names", "first"),
> +		0);
> +	KUNIT_EXPECT_EQ(
> +		test,
> +		of_property_match_string(np, "phandle-list-names", "second"),
> +		1);
> +	KUNIT_EXPECT_EQ(
> +		test,
> +		of_property_match_string(np, "phandle-list-names", "third"),
> +		2);
> +	KUNIT_EXPECT_EQ_MSG(
> +		test,
> +		of_property_match_string(np, "phandle-list-names", "fourth"),
> +		-ENODATA,
> +		"unmatched string");
> +	KUNIT_EXPECT_EQ_MSG(
> +		test,
> +		of_property_match_string(np, "missing-property", "blah"),
> +		-EINVAL,
> +		"missing property");
> +	KUNIT_EXPECT_EQ_MSG(
> +		test,
> +		of_property_match_string(np, "empty-property", "blah"),
> +		-ENODATA,
> +		"empty property");
> +	KUNIT_EXPECT_EQ_MSG(
> +		test,
> +		of_property_match_string(np, "unterminated-string", "blah"),
> +		-EILSEQ,
> +		"unterminated string");
>  
>  	/* of_property_count_strings() tests */
> -	rc = of_property_count_strings(np, "string-property");
> -	unittest(rc == 1, "Incorrect string count; rc=%i\n", rc);
> -	rc = of_property_count_strings(np, "phandle-list-names");
> -	unittest(rc == 3, "Incorrect string count; rc=%i\n", rc);
> -	rc = of_property_count_strings(np, "unterminated-string");
> -	unittest(rc == -EILSEQ, "unterminated string; rc=%i\n", rc);
> -	rc = of_property_count_strings(np, "unterminated-string-list");
> -	unittest(rc == -EILSEQ, "unterminated string array; rc=%i\n", rc);
> +	KUNIT_EXPECT_EQ(test,
> +			of_property_count_strings(np, "string-property"), 1);
> +	KUNIT_EXPECT_EQ(test,
> +			of_property_count_strings(np, "phandle-list-names"), 3);
> +	KUNIT_EXPECT_EQ_MSG(
> +		test,
> +		of_property_count_strings(np, "unterminated-string"), -EILSEQ,
> +		"unterminated string");
> +	KUNIT_EXPECT_EQ_MSG(
> +		test,
> +		of_property_count_strings(np, "unterminated-string-list"),
> +		-EILSEQ,
> +		"unterminated string array");
>  
>  	/* of_property_read_string_index() tests */
>  	rc = of_property_read_string_index(np, "string-property", 0, strings);
> -	unittest(rc == 0 && !strcmp(strings[0], "foobar"), "of_property_read_string_index() failure; rc=%i\n", rc);
> +	KUNIT_ASSERT_EQ(test, rc, 0);
> +	KUNIT_EXPECT_STREQ(test, strings[0], "foobar");
> +
>  	strings[0] = NULL;
>  	rc = of_property_read_string_index(np, "string-property", 1, strings);
> -	unittest(rc == -ENODATA && strings[0] == NULL, "of_property_read_string_index() failure; rc=%i\n", rc);
> +	KUNIT_EXPECT_EQ(test, rc, -ENODATA);
> +	KUNIT_EXPECT_EQ(test, strings[0], NULL);
> +
>  	rc = of_property_read_string_index(np, "phandle-list-names", 0, strings);
> -	unittest(rc == 0 && !strcmp(strings[0], "first"), "of_property_read_string_index() failure; rc=%i\n", rc);
> +	KUNIT_ASSERT_EQ(test, rc, 0);
> +	KUNIT_EXPECT_STREQ(test, strings[0], "first");
> +
>  	rc = of_property_read_string_index(np, "phandle-list-names", 1, strings);
> -	unittest(rc == 0 && !strcmp(strings[0], "second"), "of_property_read_string_index() failure; rc=%i\n", rc);
> +	KUNIT_ASSERT_EQ(test, rc, 0);
> +	KUNIT_EXPECT_STREQ(test, strings[0], "second");
> +
>  	rc = of_property_read_string_index(np, "phandle-list-names", 2, strings);
> -	unittest(rc == 0 && !strcmp(strings[0], "third"), "of_property_read_string_index() failure; rc=%i\n", rc);
> +	KUNIT_ASSERT_EQ(test, rc, 0);
> +	KUNIT_EXPECT_STREQ(test, strings[0], "third");
> +
>  	strings[0] = NULL;
>  	rc = of_property_read_string_index(np, "phandle-list-names", 3, strings);
> -	unittest(rc == -ENODATA && strings[0] == NULL, "of_property_read_string_index() failure; rc=%i\n", rc);
> +	KUNIT_EXPECT_EQ(test, rc, -ENODATA);
> +	KUNIT_EXPECT_EQ(test, strings[0], NULL);
> +
>  	strings[0] = NULL;
>  	rc = of_property_read_string_index(np, "unterminated-string", 0, strings);
> -	unittest(rc == -EILSEQ && strings[0] == NULL, "of_property_read_string_index() failure; rc=%i\n", rc);
> +	KUNIT_EXPECT_EQ(test, rc, -EILSEQ);
> +	KUNIT_EXPECT_EQ(test, strings[0], NULL);
> +
>  	rc = of_property_read_string_index(np, "unterminated-string-list", 0, strings);
> -	unittest(rc == 0 && !strcmp(strings[0], "first"), "of_property_read_string_index() failure; rc=%i\n", rc);
> +	KUNIT_ASSERT_EQ(test, rc, 0);
> +	KUNIT_EXPECT_STREQ(test, strings[0], "first");
> +
>  	strings[0] = NULL;
>  	rc = of_property_read_string_index(np, "unterminated-string-list", 2, strings); /* should fail */
> -	unittest(rc == -EILSEQ && strings[0] == NULL, "of_property_read_string_index() failure; rc=%i\n", rc);
> -	strings[1] = NULL;
> +	KUNIT_EXPECT_EQ(test, rc, -EILSEQ);
> +	KUNIT_EXPECT_EQ(test, strings[0], NULL);
>  
>  	/* of_property_read_string_array() tests */
> -	rc = of_property_read_string_array(np, "string-property", strings, 4);
> -	unittest(rc == 1, "Incorrect string count; rc=%i\n", rc);
> -	rc = of_property_read_string_array(np, "phandle-list-names", strings, 4);
> -	unittest(rc == 3, "Incorrect string count; rc=%i\n", rc);
> -	rc = of_property_read_string_array(np, "unterminated-string", strings, 4);
> -	unittest(rc == -EILSEQ, "unterminated string; rc=%i\n", rc);
> +	strings[1] = NULL;
> +	KUNIT_EXPECT_EQ(
> +		test,
> +		of_property_read_string_array(
> +			np, "string-property", strings, 4),
> +		1);
> +	KUNIT_EXPECT_EQ(
> +		test,
> +		of_property_read_string_array(
> +			np, "phandle-list-names", strings, 4),
> +		3);
> +	KUNIT_EXPECT_EQ_MSG(
> +		test,
> +		of_property_read_string_array(
> +			np, "unterminated-string", strings, 4),
> +		-EILSEQ,
> +		"unterminated string");
>  	/* -- An incorrectly formed string should cause a failure */
> -	rc = of_property_read_string_array(np, "unterminated-string-list", strings, 4);
> -	unittest(rc == -EILSEQ, "unterminated string array; rc=%i\n", rc);
> +	KUNIT_EXPECT_EQ_MSG(
> +		test,
> +		of_property_read_string_array(
> +			np, "unterminated-string-list", strings, 4),
> +		-EILSEQ,
> +		"unterminated string array");
>  	/* -- parsing the correctly formed strings should still work: */
>  	strings[2] = NULL;
>  	rc = of_property_read_string_array(np, "unterminated-string-list", strings, 2);
> -	unittest(rc == 2 && strings[2] == NULL, "of_property_read_string_array() failure; rc=%i\n", rc);
> +	KUNIT_EXPECT_EQ(test, rc, 2);
> +	KUNIT_EXPECT_EQ(test, strings[2], NULL);
> +
>  	strings[1] = NULL;
>  	rc = of_property_read_string_array(np, "phandle-list-names", strings, 1);
> -	unittest(rc == 1 && strings[1] == NULL, "Overwrote end of string array; rc=%i, str='%s'\n", rc, strings[1]);
> +	KUNIT_ASSERT_EQ(test, rc, 1);
> +	KUNIT_EXPECT_EQ_MSG(test, strings[1], NULL,
> +			    "Overwrote end of string array");
>  }
>  
>  #define propcmp(p1, p2) (((p1)->length == (p2)->length) && \
>  			(p1)->value && (p2)->value && \
>  			!memcmp((p1)->value, (p2)->value, (p1)->length) && \
>  			!strcmp((p1)->name, (p2)->name))
> -static void __init of_unittest_property_copy(void)
> +static void of_unittest_property_copy(struct kunit *test)
>  {
>  #ifdef CONFIG_OF_DYNAMIC
>  	struct property p1 = { .name = "p1", .length = 0, .value = "" };
> @@ -686,20 +775,24 @@ static void __init of_unittest_property_copy(void)
>  	struct property *new;
>  
>  	new = __of_prop_dup(&p1, GFP_KERNEL);
> -	unittest(new && propcmp(&p1, new), "empty property didn't copy correctly\n");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, new);
> +	KUNIT_EXPECT_TRUE_MSG(test, propcmp(&p1, new),
> +			      "empty property didn't copy correctly");
>  	kfree(new->value);
>  	kfree(new->name);
>  	kfree(new);
>  
>  	new = __of_prop_dup(&p2, GFP_KERNEL);
> -	unittest(new && propcmp(&p2, new), "non-empty property didn't copy correctly\n");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, new);
> +	KUNIT_EXPECT_TRUE_MSG(test, propcmp(&p2, new),
> +			      "non-empty property didn't copy correctly");
>  	kfree(new->value);
>  	kfree(new->name);
>  	kfree(new);
>  #endif
>  }
>  
> -static void __init of_unittest_changeset(void)
> +static void of_unittest_changeset(struct kunit *test)
>  {
>  #ifdef CONFIG_OF_DYNAMIC
>  	struct property *ppadd, padd = { .name = "prop-add", .length = 1, .value = "" };
> @@ -712,32 +805,32 @@ static void __init of_unittest_changeset(void)
>  	struct of_changeset chgset;
>  
>  	n1 = __of_node_dup(NULL, "n1");
> -	unittest(n1, "testcase setup failure\n");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, n1);
>  
>  	n2 = __of_node_dup(NULL, "n2");
> -	unittest(n2, "testcase setup failure\n");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, n2);
>  
>  	n21 = __of_node_dup(NULL, "n21");
> -	unittest(n21, "testcase setup failure %p\n", n21);
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, n21);
>  
>  	nchangeset = of_find_node_by_path("/testcase-data/changeset");
>  	nremove = of_get_child_by_name(nchangeset, "node-remove");
> -	unittest(nremove, "testcase setup failure\n");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, nremove);
>  
>  	ppadd = __of_prop_dup(&padd, GFP_KERNEL);
> -	unittest(ppadd, "testcase setup failure\n");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ppadd);
>  
>  	ppname_n1  = __of_prop_dup(&pname_n1, GFP_KERNEL);
> -	unittest(ppname_n1, "testcase setup failure\n");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ppname_n1);
>  
>  	ppname_n2  = __of_prop_dup(&pname_n2, GFP_KERNEL);
> -	unittest(ppname_n2, "testcase setup failure\n");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ppname_n2);
>  
>  	ppname_n21 = __of_prop_dup(&pname_n21, GFP_KERNEL);
> -	unittest(ppname_n21, "testcase setup failure\n");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ppname_n21);
>  
>  	ppupdate = __of_prop_dup(&pupdate, GFP_KERNEL);
> -	unittest(ppupdate, "testcase setup failure\n");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ppupdate);
>  
>  	parent = nchangeset;
>  	n1->parent = parent;
> @@ -745,54 +838,72 @@ static void __init of_unittest_changeset(void)
>  	n21->parent = n2;
>  
>  	ppremove = of_find_property(parent, "prop-remove", NULL);
> -	unittest(ppremove, "failed to find removal prop");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ppremove);
>  
>  	of_changeset_init(&chgset);
>  
> -	unittest(!of_changeset_attach_node(&chgset, n1), "fail attach n1\n");
> -	unittest(!of_changeset_add_property(&chgset, n1, ppname_n1), "fail add prop name\n");
> -
> -	unittest(!of_changeset_attach_node(&chgset, n2), "fail attach n2\n");
> -	unittest(!of_changeset_add_property(&chgset, n2, ppname_n2), "fail add prop name\n");
> -
> -	unittest(!of_changeset_detach_node(&chgset, nremove), "fail remove node\n");
> -	unittest(!of_changeset_add_property(&chgset, n21, ppname_n21), "fail add prop name\n");
> -
> -	unittest(!of_changeset_attach_node(&chgset, n21), "fail attach n21\n");
> -
> -	unittest(!of_changeset_add_property(&chgset, parent, ppadd), "fail add prop prop-add\n");
> -	unittest(!of_changeset_update_property(&chgset, parent, ppupdate), "fail update prop\n");
> -	unittest(!of_changeset_remove_property(&chgset, parent, ppremove), "fail remove prop\n");
> -
> -	unittest(!of_changeset_apply(&chgset), "apply failed\n");
> +	KUNIT_EXPECT_FALSE_MSG(test, of_changeset_attach_node(&chgset, n1),
> +			       "fail attach n1\n");
> +	KUNIT_EXPECT_FALSE_MSG(
> +		test, of_changeset_add_property(&chgset, n1, ppname_n1),
> +		"fail add prop name\n");
> +
> +	KUNIT_EXPECT_FALSE_MSG(test, of_changeset_attach_node(&chgset, n2),
> +			       "fail attach n2\n");
> +	KUNIT_EXPECT_FALSE_MSG(
> +		test, of_changeset_add_property(&chgset, n2, ppname_n2),
> +			       "fail add prop name\n");
> +
> +	KUNIT_EXPECT_FALSE_MSG(test, of_changeset_detach_node(&chgset, nremove),
> +			       "fail remove node\n");
> +	KUNIT_EXPECT_FALSE_MSG(
> +		test, of_changeset_add_property(&chgset, n21, ppname_n21),
> +		"fail add prop name\n");
> +
> +	KUNIT_EXPECT_FALSE_MSG(test, of_changeset_attach_node(&chgset, n21),
> +			       "fail attach n21\n");
> +
> +	KUNIT_EXPECT_FALSE_MSG(
> +		test,
> +		of_changeset_add_property(&chgset, parent, ppadd),
> +		"fail add prop prop-add\n");
> +	KUNIT_EXPECT_FALSE_MSG(
> +		test,
> +		of_changeset_update_property(&chgset, parent, ppupdate),
> +		"fail update prop\n");
> +	KUNIT_EXPECT_FALSE_MSG(
> +		test,
> +		of_changeset_remove_property(&chgset, parent, ppremove),
> +		"fail remove prop\n");
> +
> +	KUNIT_EXPECT_FALSE_MSG(test, of_changeset_apply(&chgset),
> +			       "apply failed\n");
>  
>  	of_node_put(nchangeset);
>  
>  	/* Make sure node names are constructed correctly */
> -	unittest((np = of_find_node_by_path("/testcase-data/changeset/n2/n21")),
> -		 "'%pOF' not added\n", n21);
> +	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(
> +		test,
> +		np = of_find_node_by_path("/testcase-data/changeset/n2/n21"),
> +		"'%pOF' not added\n", n21);
>  	of_node_put(np);
>  
> -	unittest(!of_changeset_revert(&chgset), "revert failed\n");
> +	KUNIT_EXPECT_FALSE(test, of_changeset_revert(&chgset));
>  
>  	of_changeset_destroy(&chgset);
>  #endif
>  }
>  
> -static void __init of_unittest_parse_interrupts(void)
> +static void of_unittest_parse_interrupts(struct kunit *test)
>  {
>  	struct device_node *np;
>  	struct of_phandle_args args;
>  	int i, rc;
>  
> -	if (of_irq_workarounds & OF_IMAP_OLDWORLD_MAC)
> -		return;
> +	KUNIT_ASSERT_FALSE(test, of_irq_workarounds & OF_IMAP_OLDWORLD_MAC);
>  
>  	np = of_find_node_by_path("/testcase-data/interrupts/interrupts0");
> -	if (!np) {
> -		pr_err("missing testcase data\n");
> -		return;
> -	}
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>  
>  	for (i = 0; i < 4; i++) {
>  		bool passed = true;
> @@ -804,16 +915,15 @@ static void __init of_unittest_parse_interrupts(void)
>  		passed &= (args.args_count == 1);
>  		passed &= (args.args[0] == (i + 1));
>  
> -		unittest(passed, "index %i - data error on node %pOF rc=%i\n",
> -			 i, args.np, rc);
> +		KUNIT_EXPECT_TRUE_MSG(
> +			test, passed,
> +			"index %i - data error on node %pOF rc=%i\n",
> +			i, args.np, rc);
>  	}
>  	of_node_put(np);
>  
>  	np = of_find_node_by_path("/testcase-data/interrupts/interrupts1");
> -	if (!np) {
> -		pr_err("missing testcase data\n");
> -		return;
> -	}
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>  
>  	for (i = 0; i < 4; i++) {
>  		bool passed = true;
> @@ -850,26 +960,24 @@ static void __init of_unittest_parse_interrupts(void)
>  		default:
>  			passed = false;
>  		}
> -		unittest(passed, "index %i - data error on node %pOF rc=%i\n",
> -			 i, args.np, rc);
> +		KUNIT_EXPECT_TRUE_MSG(
> +			test, passed,
> +			"index %i - data error on node %pOF rc=%i\n",
> +			i, args.np, rc);
>  	}
>  	of_node_put(np);
>  }
>  
> -static void __init of_unittest_parse_interrupts_extended(void)
> +static void of_unittest_parse_interrupts_extended(struct kunit *test)
>  {
>  	struct device_node *np;
>  	struct of_phandle_args args;
>  	int i, rc;
>  
> -	if (of_irq_workarounds & OF_IMAP_OLDWORLD_MAC)
> -		return;
> +	KUNIT_ASSERT_FALSE(test, of_irq_workarounds & OF_IMAP_OLDWORLD_MAC);
>  
>  	np = of_find_node_by_path("/testcase-data/interrupts/interrupts-extended0");
> -	if (!np) {
> -		pr_err("missing testcase data\n");
> -		return;
> -	}
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>  
>  	for (i = 0; i < 7; i++) {
>  		bool passed = true;
> @@ -924,8 +1032,10 @@ static void __init of_unittest_parse_interrupts_extended(void)
>  			passed = false;
>  		}
>  
> -		unittest(passed, "index %i - data error on node %pOF rc=%i\n",
> -			 i, args.np, rc);
> +		KUNIT_EXPECT_TRUE_MSG(
> +			test, passed,
> +			"index %i - data error on node %pOF rc=%i\n",
> +			i, args.np, rc);
>  	}
>  	of_node_put(np);
>  }
> @@ -965,7 +1075,7 @@ static struct {
>  	{ .path = "/testcase-data/match-node/name9", .data = "K", },
>  };
>  
> -static void __init of_unittest_match_node(void)
> +static void of_unittest_match_node(struct kunit *test)
>  {
>  	struct device_node *np;
>  	const struct of_device_id *match;
> @@ -973,26 +1083,19 @@ static void __init of_unittest_match_node(void)
>  
>  	for (i = 0; i < ARRAY_SIZE(match_node_tests); i++) {
>  		np = of_find_node_by_path(match_node_tests[i].path);
> -		if (!np) {
> -			unittest(0, "missing testcase node %s\n",
> -				match_node_tests[i].path);
> -			continue;
> -		}
> +		KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>  
>  		match = of_match_node(match_node_table, np);
> -		if (!match) {
> -			unittest(0, "%s didn't match anything\n",
> -				match_node_tests[i].path);
> -			continue;
> -		}
> +		KUNIT_ASSERT_NOT_ERR_OR_NULL_MSG(test, np,
> +						 "%s didn't match anything",
> +						 match_node_tests[i].path);
>  
> -		if (strcmp(match->data, match_node_tests[i].data) != 0) {
> -			unittest(0, "%s got wrong match. expected %s, got %s\n",
> -				match_node_tests[i].path, match_node_tests[i].data,
> -				(const char *)match->data);
> -			continue;
> -		}
> -		unittest(1, "passed");
> +		KUNIT_EXPECT_STREQ_MSG(
> +			test,
> +			match->data, match_node_tests[i].data,
> +			"%s got wrong match. expected %s, got %s\n",
> +			match_node_tests[i].path, match_node_tests[i].data,
> +			(const char *)match->data);
>  	}
>  }
>  
> @@ -1004,9 +1107,9 @@ static struct resource test_bus_res = {
>  static const struct platform_device_info test_bus_info = {
>  	.name = "unittest-bus",
>  };
> -static void __init of_unittest_platform_populate(void)
> +static void of_unittest_platform_populate(struct kunit *test)
>  {
> -	int irq, rc;
> +	int irq;
>  	struct device_node *np, *child, *grandchild;
>  	struct platform_device *pdev, *test_bus;
>  	const struct of_device_id match[] = {
> @@ -1020,32 +1123,27 @@ static void __init of_unittest_platform_populate(void)
>  	/* Test that a missing irq domain returns -EPROBE_DEFER */
>  	np = of_find_node_by_path("/testcase-data/testcase-device1");
>  	pdev = of_find_device_by_node(np);
> -	unittest(pdev, "device 1 creation failed\n");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, pdev);
>  
>  	if (!(of_irq_workarounds & OF_IMAP_OLDWORLD_MAC)) {
>  		irq = platform_get_irq(pdev, 0);
> -		unittest(irq == -EPROBE_DEFER,
> -			 "device deferred probe failed - %d\n", irq);
> +		KUNIT_ASSERT_EQ(test, irq, -EPROBE_DEFER);
>  
>  		/* Test that a parsing failure does not return -EPROBE_DEFER */
>  		np = of_find_node_by_path("/testcase-data/testcase-device2");
>  		pdev = of_find_device_by_node(np);
> -		unittest(pdev, "device 2 creation failed\n");
> +		KUNIT_ASSERT_NOT_ERR_OR_NULL(test, pdev);
>  		irq = platform_get_irq(pdev, 0);
> -		unittest(irq < 0 && irq != -EPROBE_DEFER,
> -			 "device parsing error failed - %d\n", irq);
> +		KUNIT_ASSERT_TRUE_MSG(test, irq < 0 && irq != -EPROBE_DEFER,
> +				      "device parsing error failed - %d\n",
> +				      irq);
>  	}
>  
>  	np = of_find_node_by_path("/testcase-data/platform-tests");
> -	unittest(np, "No testcase data in device tree\n");
> -	if (!np)
> -		return;
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>  
>  	test_bus = platform_device_register_full(&test_bus_info);
> -	rc = PTR_ERR_OR_ZERO(test_bus);
> -	unittest(!rc, "testbus registration failed; rc=%i\n", rc);
> -	if (rc)
> -		return;
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, test_bus);
>  	test_bus->dev.of_node = np;
>  
>  	/*
> @@ -1060,17 +1158,19 @@ static void __init of_unittest_platform_populate(void)
>  	of_platform_populate(np, match, NULL, &test_bus->dev);
>  	for_each_child_of_node(np, child) {
>  		for_each_child_of_node(child, grandchild)
> -			unittest(of_find_device_by_node(grandchild),
> -				 "Could not create device for node '%pOFn'\n",
> -				 grandchild);
> +			KUNIT_EXPECT_TRUE_MSG(
> +				test, of_find_device_by_node(grandchild),
> +				"Could not create device for node '%pOFn'\n",
> +				grandchild);
>  	}
>  
>  	of_platform_depopulate(&test_bus->dev);
>  	for_each_child_of_node(np, child) {
>  		for_each_child_of_node(child, grandchild)
> -			unittest(!of_find_device_by_node(grandchild),
> -				 "device didn't get destroyed '%pOFn'\n",
> -				 grandchild);
> +			KUNIT_EXPECT_FALSE_MSG(
> +				test, of_find_device_by_node(grandchild),
> +				"device didn't get destroyed '%pOFn'\n",
> +				grandchild);
>  	}
>  
>  	platform_device_unregister(test_bus);
> @@ -1171,7 +1271,7 @@ static void attach_node_and_children(struct device_node *np)
>   *	unittest_data_add - Reads, copies data from
>   *	linked tree and attaches it to the live tree
>   */
> -static int __init unittest_data_add(void)
> +static int unittest_data_add(void)
>  {
>  	void *unittest_data;
>  	struct device_node *unittest_data_node, *np;
> @@ -1242,7 +1342,7 @@ static int __init unittest_data_add(void)
>  }
>  
>  #ifdef CONFIG_OF_OVERLAY
> -static int __init overlay_data_apply(const char *overlay_name, int *overlay_id);
> +static int overlay_data_apply(const char *overlay_name, int *overlay_id);
>  
>  static int unittest_probe(struct platform_device *pdev)
>  {
> @@ -1471,172 +1571,146 @@ static void of_unittest_destroy_tracked_overlays(void)
>  	} while (defers > 0);
>  }
>  
> -static int __init of_unittest_apply_overlay(int overlay_nr, int *overlay_id)
> +static int of_unittest_apply_overlay(struct kunit *test,
> +				     int overlay_nr,
> +				     int *overlay_id)
>  {
>  	const char *overlay_name;
>  
>  	overlay_name = overlay_name_from_nr(overlay_nr);
>  
> -	if (!overlay_data_apply(overlay_name, overlay_id)) {
> -		unittest(0, "could not apply overlay \"%s\"\n",
> -				overlay_name);
> -		return -EFAULT;
> -	}
> +	KUNIT_ASSERT_TRUE_MSG(test,
> +			      overlay_data_apply(overlay_name, overlay_id),
> +			      "could not apply overlay \"%s\"\n", overlay_name);
>  	of_unittest_track_overlay(*overlay_id);
>  
>  	return 0;
>  }
>  
>  /* apply an overlay while checking before and after states */
> -static int __init of_unittest_apply_overlay_check(int overlay_nr,
> +static int of_unittest_apply_overlay_check(struct kunit *test, int overlay_nr,
>  		int unittest_nr, int before, int after,
>  		enum overlay_type ovtype)
>  {
>  	int ret, ovcs_id;
>  
>  	/* unittest device must not be in before state */
> -	if (of_unittest_device_exists(unittest_nr, ovtype) != before) {
> -		unittest(0, "%s with device @\"%s\" %s\n",
> -				overlay_name_from_nr(overlay_nr),
> -				unittest_path(unittest_nr, ovtype),
> -				!before ? "enabled" : "disabled");
> -		return -EINVAL;
> -	}
> +	KUNIT_ASSERT_EQ_MSG(
> +		test, of_unittest_device_exists(unittest_nr, ovtype), before,
> +		"%s with device @\"%s\" %s\n", overlay_name_from_nr(overlay_nr),
> +		unittest_path(unittest_nr, ovtype),
> +		!before ? "enabled" : "disabled");
>  
>  	ovcs_id = 0;
> -	ret = of_unittest_apply_overlay(overlay_nr, &ovcs_id);
> +	ret = of_unittest_apply_overlay(test, overlay_nr, &ovcs_id);
>  	if (ret != 0) {
> -		/* of_unittest_apply_overlay already called unittest() */
> +		/* of_unittest_apply_overlay already set expectation */
>  		return ret;
>  	}
>  
>  	/* unittest device must be to set to after state */
> -	if (of_unittest_device_exists(unittest_nr, ovtype) != after) {
> -		unittest(0, "%s failed to create @\"%s\" %s\n",
> -				overlay_name_from_nr(overlay_nr),
> -				unittest_path(unittest_nr, ovtype),
> -				!after ? "enabled" : "disabled");
> -		return -EINVAL;
> -	}
> +	KUNIT_ASSERT_EQ_MSG(
> +		test, of_unittest_device_exists(unittest_nr, ovtype), after,
> +		"%s failed to create @\"%s\" %s\n",
> +		overlay_name_from_nr(overlay_nr),
> +		unittest_path(unittest_nr, ovtype),
> +		!after ? "enabled" : "disabled");
>  
>  	return 0;
>  }
>  
>  /* apply an overlay and then revert it while checking before, after states */
> -static int __init of_unittest_apply_revert_overlay_check(int overlay_nr,
> +static int of_unittest_apply_revert_overlay_check(
> +		struct kunit *test, int overlay_nr,
>  		int unittest_nr, int before, int after,
>  		enum overlay_type ovtype)
>  {
>  	int ret, ovcs_id;
>  
>  	/* unittest device must be in before state */
> -	if (of_unittest_device_exists(unittest_nr, ovtype) != before) {
> -		unittest(0, "%s with device @\"%s\" %s\n",
> -				overlay_name_from_nr(overlay_nr),
> -				unittest_path(unittest_nr, ovtype),
> -				!before ? "enabled" : "disabled");
> -		return -EINVAL;
> -	}
> +	KUNIT_ASSERT_EQ_MSG(
> +		test, of_unittest_device_exists(unittest_nr, ovtype), before,
> +		"%s with device @\"%s\" %s\n", overlay_name_from_nr(overlay_nr),
> +		unittest_path(unittest_nr, ovtype),
> +		!before ? "enabled" : "disabled");
>  
>  	/* apply the overlay */
>  	ovcs_id = 0;
> -	ret = of_unittest_apply_overlay(overlay_nr, &ovcs_id);
> +	ret = of_unittest_apply_overlay(test, overlay_nr, &ovcs_id);
>  	if (ret != 0) {
> -		/* of_unittest_apply_overlay already called unittest() */
> +		/* of_unittest_apply_overlay already set expectation. */
>  		return ret;
>  	}
>  
>  	/* unittest device must be in after state */
> -	if (of_unittest_device_exists(unittest_nr, ovtype) != after) {
> -		unittest(0, "%s failed to create @\"%s\" %s\n",
> -				overlay_name_from_nr(overlay_nr),
> -				unittest_path(unittest_nr, ovtype),
> -				!after ? "enabled" : "disabled");
> -		return -EINVAL;
> -	}
> -
> -	ret = of_overlay_remove(&ovcs_id);
> -	if (ret != 0) {
> -		unittest(0, "%s failed to be destroyed @\"%s\"\n",
> -				overlay_name_from_nr(overlay_nr),
> -				unittest_path(unittest_nr, ovtype));
> -		return ret;
> -	}
> +	KUNIT_ASSERT_EQ_MSG(
> +		test, of_unittest_device_exists(unittest_nr, ovtype), after,
> +		"%s failed to create @\"%s\" %s\n",
> +		overlay_name_from_nr(overlay_nr),
> +		unittest_path(unittest_nr, ovtype),
> +		!after ? "enabled" : "disabled");
> +
> +	KUNIT_ASSERT_EQ_MSG(test, of_overlay_remove(&ovcs_id), 0,
> +			    "%s failed to be destroyed @\"%s\"\n",
> +			    overlay_name_from_nr(overlay_nr),
> +			    unittest_path(unittest_nr, ovtype));
>  
>  	/* unittest device must be again in before state */
> -	if (of_unittest_device_exists(unittest_nr, PDEV_OVERLAY) != before) {
> -		unittest(0, "%s with device @\"%s\" %s\n",
> -				overlay_name_from_nr(overlay_nr),
> -				unittest_path(unittest_nr, ovtype),
> -				!before ? "enabled" : "disabled");
> -		return -EINVAL;
> -	}
> +	KUNIT_ASSERT_EQ_MSG(
> +		test,
> +		of_unittest_device_exists(unittest_nr, PDEV_OVERLAY), before,
> +		"%s with device @\"%s\" %s\n",
> +		overlay_name_from_nr(overlay_nr),
> +		unittest_path(unittest_nr, ovtype),
> +		!before ? "enabled" : "disabled");
>  
>  	return 0;
>  }
>  
>  /* test activation of device */
> -static void __init of_unittest_overlay_0(void)
> +static void of_unittest_overlay_0(struct kunit *test)
>  {
>  	/* device should enable */
> -	if (of_unittest_apply_overlay_check(0, 0, 0, 1, PDEV_OVERLAY))
> -		return;
> -
> -	unittest(1, "overlay test %d passed\n", 0);
> +	of_unittest_apply_overlay_check(test, 0, 0, 0, 1, PDEV_OVERLAY);
>  }
>  
>  /* test deactivation of device */
> -static void __init of_unittest_overlay_1(void)
> +static void of_unittest_overlay_1(struct kunit *test)
>  {
>  	/* device should disable */
> -	if (of_unittest_apply_overlay_check(1, 1, 1, 0, PDEV_OVERLAY))
> -		return;
> -
> -	unittest(1, "overlay test %d passed\n", 1);
> +	of_unittest_apply_overlay_check(test, 1, 1, 1, 0, PDEV_OVERLAY);
>  }
>  
>  /* test activation of device */
> -static void __init of_unittest_overlay_2(void)
> +static void of_unittest_overlay_2(struct kunit *test)
>  {
>  	/* device should enable */
> -	if (of_unittest_apply_overlay_check(2, 2, 0, 1, PDEV_OVERLAY))
> -		return;
> -
> -	unittest(1, "overlay test %d passed\n", 2);
> +	of_unittest_apply_overlay_check(test, 2, 2, 0, 1, PDEV_OVERLAY);
>  }
>  
>  /* test deactivation of device */
> -static void __init of_unittest_overlay_3(void)
> +static void of_unittest_overlay_3(struct kunit *test)
>  {
>  	/* device should disable */
> -	if (of_unittest_apply_overlay_check(3, 3, 1, 0, PDEV_OVERLAY))
> -		return;
> -
> -	unittest(1, "overlay test %d passed\n", 3);
> +	of_unittest_apply_overlay_check(test, 3, 3, 1, 0, PDEV_OVERLAY);
>  }
>  
>  /* test activation of a full device node */
> -static void __init of_unittest_overlay_4(void)
> +static void of_unittest_overlay_4(struct kunit *test)
>  {
>  	/* device should disable */
> -	if (of_unittest_apply_overlay_check(4, 4, 0, 1, PDEV_OVERLAY))
> -		return;
> -
> -	unittest(1, "overlay test %d passed\n", 4);
> +	of_unittest_apply_overlay_check(test, 4, 4, 0, 1, PDEV_OVERLAY);
>  }
>  
>  /* test overlay apply/revert sequence */
> -static void __init of_unittest_overlay_5(void)
> +static void of_unittest_overlay_5(struct kunit *test)
>  {
>  	/* device should disable */
> -	if (of_unittest_apply_revert_overlay_check(5, 5, 0, 1, PDEV_OVERLAY))
> -		return;
> -
> -	unittest(1, "overlay test %d passed\n", 5);
> +	of_unittest_apply_revert_overlay_check(test, 5, 5, 0, 1, PDEV_OVERLAY);
>  }
>  
>  /* test overlay application in sequence */
> -static void __init of_unittest_overlay_6(void)
> +static void of_unittest_overlay_6(struct kunit *test)
>  {
>  	int i, ov_id[2], ovcs_id;
>  	int overlay_nr = 6, unittest_nr = 6;
> @@ -1645,74 +1719,67 @@ static void __init of_unittest_overlay_6(void)
>  
>  	/* unittest device must be in before state */
>  	for (i = 0; i < 2; i++) {
> -		if (of_unittest_device_exists(unittest_nr + i, PDEV_OVERLAY)
> -				!= before) {
> -			unittest(0, "%s with device @\"%s\" %s\n",
> -					overlay_name_from_nr(overlay_nr + i),
> -					unittest_path(unittest_nr + i,
> -						PDEV_OVERLAY),
> -					!before ? "enabled" : "disabled");
> -			return;
> -		}
> +		KUNIT_ASSERT_EQ_MSG(test,
> +				    of_unittest_device_exists(unittest_nr + i,
> +							      PDEV_OVERLAY),
> +				    before,
> +				    "%s with device @\"%s\" %s\n",
> +				    overlay_name_from_nr(overlay_nr + i),
> +				    unittest_path(unittest_nr + i,
> +						  PDEV_OVERLAY),
> +				    !before ? "enabled" : "disabled");
>  	}
>  
>  	/* apply the overlays */
>  	for (i = 0; i < 2; i++) {
> -
>  		overlay_name = overlay_name_from_nr(overlay_nr + i);
>  
> -		if (!overlay_data_apply(overlay_name, &ovcs_id)) {
> -			unittest(0, "could not apply overlay \"%s\"\n",
> -					overlay_name);
> -			return;
> -		}
> +		KUNIT_ASSERT_TRUE_MSG(
> +			test, overlay_data_apply(overlay_name, &ovcs_id),
> +			"could not apply overlay \"%s\"\n", overlay_name);
>  		ov_id[i] = ovcs_id;
>  		of_unittest_track_overlay(ov_id[i]);
>  	}
>  
>  	for (i = 0; i < 2; i++) {
>  		/* unittest device must be in after state */
> -		if (of_unittest_device_exists(unittest_nr + i, PDEV_OVERLAY)
> -				!= after) {
> -			unittest(0, "overlay @\"%s\" failed @\"%s\" %s\n",
> -					overlay_name_from_nr(overlay_nr + i),
> -					unittest_path(unittest_nr + i,
> -						PDEV_OVERLAY),
> -					!after ? "enabled" : "disabled");
> -			return;
> -		}
> +		KUNIT_ASSERT_EQ_MSG(test,
> +				    of_unittest_device_exists(unittest_nr + i,
> +							      PDEV_OVERLAY),
> +				    after,
> +				    "overlay @\"%s\" failed @\"%s\" %s\n",
> +				    overlay_name_from_nr(overlay_nr + i),
> +				    unittest_path(unittest_nr + i,
> +						  PDEV_OVERLAY),
> +				    !after ? "enabled" : "disabled");
>  	}
>  
>  	for (i = 1; i >= 0; i--) {
>  		ovcs_id = ov_id[i];
> -		if (of_overlay_remove(&ovcs_id)) {
> -			unittest(0, "%s failed destroy @\"%s\"\n",
> -					overlay_name_from_nr(overlay_nr + i),
> -					unittest_path(unittest_nr + i,
> -						PDEV_OVERLAY));
> -			return;
> -		}
> +		KUNIT_ASSERT_FALSE_MSG(
> +			test, of_overlay_remove(&ovcs_id),
> +			"%s failed destroy @\"%s\"\n",
> +			overlay_name_from_nr(overlay_nr + i),
> +			unittest_path(unittest_nr + i, PDEV_OVERLAY));
>  		of_unittest_untrack_overlay(ov_id[i]);
>  	}
>  
>  	for (i = 0; i < 2; i++) {
>  		/* unittest device must be again in before state */
> -		if (of_unittest_device_exists(unittest_nr + i, PDEV_OVERLAY)
> -				!= before) {
> -			unittest(0, "%s with device @\"%s\" %s\n",
> -					overlay_name_from_nr(overlay_nr + i),
> -					unittest_path(unittest_nr + i,
> -						PDEV_OVERLAY),
> -					!before ? "enabled" : "disabled");
> -			return;
> -		}
> +		KUNIT_ASSERT_EQ_MSG(test,
> +				    of_unittest_device_exists(unittest_nr + i,
> +							      PDEV_OVERLAY),
> +				    before,
> +				    "%s with device @\"%s\" %s\n",
> +				    overlay_name_from_nr(overlay_nr + i),
> +				    unittest_path(unittest_nr + i,
> +						  PDEV_OVERLAY),
> +				    !before ? "enabled" : "disabled");
>  	}
> -
> -	unittest(1, "overlay test %d passed\n", 6);
>  }
>  
>  /* test overlay application in sequence */
> -static void __init of_unittest_overlay_8(void)
> +static void of_unittest_overlay_8(struct kunit *test)
>  {
>  	int i, ov_id[2], ovcs_id;
>  	int overlay_nr = 8, unittest_nr = 8;
> @@ -1722,76 +1789,64 @@ static void __init of_unittest_overlay_8(void)
>  
>  	/* apply the overlays */
>  	for (i = 0; i < 2; i++) {
> -
>  		overlay_name = overlay_name_from_nr(overlay_nr + i);
>  
> -		if (!overlay_data_apply(overlay_name, &ovcs_id)) {
> -			unittest(0, "could not apply overlay \"%s\"\n",
> -					overlay_name);
> -			return;
> -		}
> +		KUNIT_ASSERT_TRUE_MSG(
> +			test, overlay_data_apply(overlay_name, &ovcs_id),
> +			"could not apply overlay \"%s\"\n", overlay_name);
>  		ov_id[i] = ovcs_id;
>  		of_unittest_track_overlay(ov_id[i]);
>  	}
>  
>  	/* now try to remove first overlay (it should fail) */
>  	ovcs_id = ov_id[0];
> -	if (!of_overlay_remove(&ovcs_id)) {
> -		unittest(0, "%s was destroyed @\"%s\"\n",
> -				overlay_name_from_nr(overlay_nr + 0),
> -				unittest_path(unittest_nr,
> -					PDEV_OVERLAY));
> -		return;
> -	}
> +	KUNIT_ASSERT_TRUE_MSG(
> +		test, of_overlay_remove(&ovcs_id),
> +		"%s was destroyed @\"%s\"\n",
> +		overlay_name_from_nr(overlay_nr + 0),
> +		unittest_path(unittest_nr, PDEV_OVERLAY));
>  
>  	/* removing them in order should work */
>  	for (i = 1; i >= 0; i--) {
>  		ovcs_id = ov_id[i];
> -		if (of_overlay_remove(&ovcs_id)) {
> -			unittest(0, "%s not destroyed @\"%s\"\n",
> -					overlay_name_from_nr(overlay_nr + i),
> -					unittest_path(unittest_nr,
> -						PDEV_OVERLAY));
> -			return;
> -		}
> +		KUNIT_ASSERT_FALSE_MSG(
> +			test, of_overlay_remove(&ovcs_id),
> +			"%s not destroyed @\"%s\"\n",
> +			overlay_name_from_nr(overlay_nr + i),
> +			unittest_path(unittest_nr, PDEV_OVERLAY));
>  		of_unittest_untrack_overlay(ov_id[i]);
>  	}
> -
> -	unittest(1, "overlay test %d passed\n", 8);
>  }
>  
>  /* test insertion of a bus with parent devices */
> -static void __init of_unittest_overlay_10(void)
> +static void of_unittest_overlay_10(struct kunit *test)
>  {
> -	int ret;
>  	char *child_path;
>  
>  	/* device should disable */
> -	ret = of_unittest_apply_overlay_check(10, 10, 0, 1, PDEV_OVERLAY);
> -	if (unittest(ret == 0,
> -			"overlay test %d failed; overlay application\n", 10))
> -		return;
> +	KUNIT_ASSERT_EQ_MSG(
> +		test,
> +		of_unittest_apply_overlay_check(
> +				test, 10, 10, 0, 1, PDEV_OVERLAY),
> +		0,
> +		"overlay test %d failed; overlay application\n", 10);
>  
>  	child_path = kasprintf(GFP_KERNEL, "%s/test-unittest101",
>  			unittest_path(10, PDEV_OVERLAY));
> -	if (unittest(child_path, "overlay test %d failed; kasprintf\n", 10))
> -		return;
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, child_path);
>  
> -	ret = of_path_device_type_exists(child_path, PDEV_OVERLAY);
> +	KUNIT_EXPECT_TRUE_MSG(
> +		test, of_path_device_type_exists(child_path, PDEV_OVERLAY),
> +		"overlay test %d failed; no child device\n", 10);
>  	kfree(child_path);
> -
> -	unittest(ret, "overlay test %d failed; no child device\n", 10);
>  }
>  
>  /* test insertion of a bus with parent devices (and revert) */
> -static void __init of_unittest_overlay_11(void)
> +static void of_unittest_overlay_11(struct kunit *test)
>  {
> -	int ret;
> -
>  	/* device should disable */
> -	ret = of_unittest_apply_revert_overlay_check(11, 11, 0, 1,
> -			PDEV_OVERLAY);
> -	unittest(ret == 0, "overlay test %d failed; overlay apply\n", 11);
> +	KUNIT_EXPECT_FALSE(test, of_unittest_apply_revert_overlay_check(
> +		test, 11, 11, 0, 1, PDEV_OVERLAY));
>  }
>  
>  #if IS_BUILTIN(CONFIG_I2C) && IS_ENABLED(CONFIG_OF_OVERLAY)
> @@ -2013,25 +2068,18 @@ static struct i2c_driver unittest_i2c_mux_driver = {
>  
>  #endif
>  
> -static int of_unittest_overlay_i2c_init(void)
> +static int of_unittest_overlay_i2c_init(struct kunit *test)
>  {
> -	int ret;
> -
> -	ret = i2c_add_driver(&unittest_i2c_dev_driver);
> -	if (unittest(ret == 0,
> -			"could not register unittest i2c device driver\n"))
> -		return ret;
> +	KUNIT_ASSERT_EQ_MSG(test, i2c_add_driver(&unittest_i2c_dev_driver), 0,
> +			    "could not register unittest i2c device driver\n");
>  
> -	ret = platform_driver_register(&unittest_i2c_bus_driver);
> -	if (unittest(ret == 0,
> -			"could not register unittest i2c bus driver\n"))
> -		return ret;
> +	KUNIT_ASSERT_EQ_MSG(
> +		test, platform_driver_register(&unittest_i2c_bus_driver), 0,
> +		"could not register unittest i2c bus driver\n");
>  
>  #if IS_BUILTIN(CONFIG_I2C_MUX)
> -	ret = i2c_add_driver(&unittest_i2c_mux_driver);
> -	if (unittest(ret == 0,
> -			"could not register unittest i2c mux driver\n"))
> -		return ret;
> +	KUNIT_ASSERT_EQ_MSG(test, i2c_add_driver(&unittest_i2c_mux_driver), 0,
> +			    "could not register unittest i2c mux driver\n");
>  #endif
>  
>  	return 0;
> @@ -2046,101 +2094,85 @@ static void of_unittest_overlay_i2c_cleanup(void)
>  	i2c_del_driver(&unittest_i2c_dev_driver);
>  }
>  
> -static void __init of_unittest_overlay_i2c_12(void)
> +static void of_unittest_overlay_i2c_12(struct kunit *test)
>  {
>  	/* device should enable */
> -	if (of_unittest_apply_overlay_check(12, 12, 0, 1, I2C_OVERLAY))
> -		return;
> -
> -	unittest(1, "overlay test %d passed\n", 12);
> +	of_unittest_apply_overlay_check(test, 12, 12, 0, 1, I2C_OVERLAY);
>  }
>  
>  /* test deactivation of device */
> -static void __init of_unittest_overlay_i2c_13(void)
> +static void of_unittest_overlay_i2c_13(struct kunit *test)
>  {
>  	/* device should disable */
> -	if (of_unittest_apply_overlay_check(13, 13, 1, 0, I2C_OVERLAY))
> -		return;
> -
> -	unittest(1, "overlay test %d passed\n", 13);
> +	of_unittest_apply_overlay_check(test, 13, 13, 1, 0, I2C_OVERLAY);
>  }
>  
>  /* just check for i2c mux existence */
> -static void of_unittest_overlay_i2c_14(void)
> +static void of_unittest_overlay_i2c_14(struct kunit *test)
>  {
> +	KUNIT_SUCCEED(test);
>  }
>  
> -static void __init of_unittest_overlay_i2c_15(void)
> +static void of_unittest_overlay_i2c_15(struct kunit *test)
>  {
>  	/* device should enable */
> -	if (of_unittest_apply_overlay_check(15, 15, 0, 1, I2C_OVERLAY))
> -		return;
> -
> -	unittest(1, "overlay test %d passed\n", 15);
> +	of_unittest_apply_overlay_check(test, 15, 15, 0, 1, I2C_OVERLAY);
>  }
>  
>  #else
>  
> -static inline void of_unittest_overlay_i2c_14(void) { }
> -static inline void of_unittest_overlay_i2c_15(void) { }
> +static inline void of_unittest_overlay_i2c_14(struct kunit *test) { }
> +static inline void of_unittest_overlay_i2c_15(struct kunit *test) { }
>  
>  #endif
>  
> -static void __init of_unittest_overlay(void)
> +static void of_unittest_overlay(struct kunit *test)
>  {
>  	struct device_node *bus_np = NULL;
>  
> -	if (platform_driver_register(&unittest_driver)) {
> -		unittest(0, "could not register unittest driver\n");
> -		goto out;
> -	}
> +	KUNIT_ASSERT_FALSE_MSG(test, platform_driver_register(&unittest_driver),
> +			       "could not register unittest driver\n");
>  
>  	bus_np = of_find_node_by_path(bus_path);
> -	if (bus_np == NULL) {
> -		unittest(0, "could not find bus_path \"%s\"\n", bus_path);
> -		goto out;
> -	}
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL_MSG(
> +		test, bus_np, "could not find bus_path \"%s\"\n", bus_path);
>  
> -	if (of_platform_default_populate(bus_np, NULL, NULL)) {
> -		unittest(0, "could not populate bus @ \"%s\"\n", bus_path);
> -		goto out;
> -	}
> -
> -	if (!of_unittest_device_exists(100, PDEV_OVERLAY)) {
> -		unittest(0, "could not find unittest0 @ \"%s\"\n",
> -				unittest_path(100, PDEV_OVERLAY));
> -		goto out;
> -	}
> +	KUNIT_ASSERT_FALSE_MSG(
> +		test, of_platform_default_populate(bus_np, NULL, NULL),
> +		"could not populate bus @ \"%s\"\n", bus_path);
>  
> -	if (of_unittest_device_exists(101, PDEV_OVERLAY)) {
> -		unittest(0, "unittest1 @ \"%s\" should not exist\n",
> -				unittest_path(101, PDEV_OVERLAY));
> -		goto out;
> -	}
> +	KUNIT_ASSERT_TRUE_MSG(
> +		test, of_unittest_device_exists(100, PDEV_OVERLAY),
> +		"could not find unittest0 @ \"%s\"\n",
> +		unittest_path(100, PDEV_OVERLAY));
>  
> -	unittest(1, "basic infrastructure of overlays passed");
> +	KUNIT_ASSERT_FALSE_MSG(
> +		test, of_unittest_device_exists(101, PDEV_OVERLAY),
> +		"unittest1 @ \"%s\" should not exist\n",
> +		unittest_path(101, PDEV_OVERLAY));
>  
>  	/* tests in sequence */
> -	of_unittest_overlay_0();
> -	of_unittest_overlay_1();
> -	of_unittest_overlay_2();
> -	of_unittest_overlay_3();
> -	of_unittest_overlay_4();
> -	of_unittest_overlay_5();
> -	of_unittest_overlay_6();
> -	of_unittest_overlay_8();
> -
> -	of_unittest_overlay_10();
> -	of_unittest_overlay_11();
> +	of_unittest_overlay_0(test);
> +	of_unittest_overlay_1(test);
> +	of_unittest_overlay_2(test);
> +	of_unittest_overlay_3(test);
> +	of_unittest_overlay_4(test);
> +	of_unittest_overlay_5(test);
> +	of_unittest_overlay_6(test);
> +	of_unittest_overlay_8(test);
> +
> +	of_unittest_overlay_10(test);
> +	of_unittest_overlay_11(test);
>  
>  #if IS_BUILTIN(CONFIG_I2C)
> -	if (unittest(of_unittest_overlay_i2c_init() == 0, "i2c init failed\n"))
> -		goto out;
> +	KUNIT_ASSERT_EQ_MSG(test, of_unittest_overlay_i2c_init(test), 0,
> +			    "i2c init failed\n");
> +	goto out;
>  
> -	of_unittest_overlay_i2c_12();
> -	of_unittest_overlay_i2c_13();
> -	of_unittest_overlay_i2c_14();
> -	of_unittest_overlay_i2c_15();
> +	of_unittest_overlay_i2c_12(test);
> +	of_unittest_overlay_i2c_13(test);
> +	of_unittest_overlay_i2c_14(test);
> +	of_unittest_overlay_i2c_15(test);
>  
>  	of_unittest_overlay_i2c_cleanup();
>  #endif
> @@ -2152,7 +2184,7 @@ static void __init of_unittest_overlay(void)
>  }
>  
>  #else
> -static inline void __init of_unittest_overlay(void) { }
> +static inline void of_unittest_overlay(struct kunit *test) { }
>  #endif
>  
>  #ifdef CONFIG_OF_OVERLAY
> @@ -2313,7 +2345,7 @@ void __init unittest_unflatten_overlay_base(void)
>   *
>   * Return 0 on unexpected error.
>   */
> -static int __init overlay_data_apply(const char *overlay_name, int *overlay_id)
> +static int overlay_data_apply(const char *overlay_name, int *overlay_id)
>  {
>  	struct overlay_info *info;
>  	int found = 0;
> @@ -2359,19 +2391,17 @@ static int __init overlay_data_apply(const char *overlay_name, int *overlay_id)
>   * The first part of the function is _not_ normal overlay usage; it is
>   * finishing splicing the base overlay device tree into the live tree.
>   */
> -static __init void of_unittest_overlay_high_level(void)
> +static void of_unittest_overlay_high_level(struct kunit *test)
>  {
>  	struct device_node *last_sibling;
>  	struct device_node *np;
>  	struct device_node *of_symbols;
> -	struct device_node *overlay_base_symbols;
> +	struct device_node *overlay_base_symbols = 0;
>  	struct device_node **pprev;
>  	struct property *prop;
>  
> -	if (!overlay_base_root) {
> -		unittest(0, "overlay_base_root not initialized\n");
> -		return;
> -	}
> +	KUNIT_ASSERT_TRUE_MSG(test, overlay_base_root,
> +			      "overlay_base_root not initialized\n");
>  
>  	/*
>  	 * Could not fixup phandles in unittest_unflatten_overlay_base()
> @@ -2418,11 +2448,9 @@ static __init void of_unittest_overlay_high_level(void)
>  	for_each_child_of_node(overlay_base_root, np) {
>  		struct device_node *base_child;
>  		for_each_child_of_node(of_root, base_child) {
> -			if (!strcmp(np->full_name, base_child->full_name)) {
> -				unittest(0, "illegal node name in overlay_base %pOFn",
> -					 np);
> -				return;
> -			}
> +			KUNIT_ASSERT_STRNEQ_MSG(
> +				test, np->full_name, base_child->full_name,
> +				"illegal node name in overlay_base %pOFn", np);
>  		}
>  	}
>  
> @@ -2456,21 +2484,24 @@ static __init void of_unittest_overlay_high_level(void)
>  
>  			new_prop = __of_prop_dup(prop, GFP_KERNEL);
>  			if (!new_prop) {
> -				unittest(0, "__of_prop_dup() of '%s' from overlay_base node __symbols__",
> -					 prop->name);
> +				KUNIT_FAIL(test,
> +					   "__of_prop_dup() of '%s' from overlay_base node __symbols__",
> +					   prop->name);
>  				goto err_unlock;
>  			}
>  			if (__of_add_property(of_symbols, new_prop)) {
>  				/* "name" auto-generated by unflatten */
>  				if (!strcmp(new_prop->name, "name"))
>  					continue;
> -				unittest(0, "duplicate property '%s' in overlay_base node __symbols__",
> -					 prop->name);
> +				KUNIT_FAIL(test,
> +					   "duplicate property '%s' in overlay_base node __symbols__",
> +					   prop->name);
>  				goto err_unlock;
>  			}
>  			if (__of_add_property_sysfs(of_symbols, new_prop)) {
> -				unittest(0, "unable to add property '%s' in overlay_base node __symbols__ to sysfs",
> -					 prop->name);
> +				KUNIT_FAIL(test,
> +					   "unable to add property '%s' in overlay_base node __symbols__ to sysfs",
> +					   prop->name);
>  				goto err_unlock;
>  			}
>  		}
> @@ -2481,20 +2512,24 @@ static __init void of_unittest_overlay_high_level(void)
>  
>  	/* now do the normal overlay usage test */
>  
> -	unittest(overlay_data_apply("overlay", NULL),
> -		 "Adding overlay 'overlay' failed\n");
> +	KUNIT_EXPECT_TRUE_MSG(test, overlay_data_apply("overlay", NULL),
> +			      "Adding overlay 'overlay' failed\n");
>  
> -	unittest(overlay_data_apply("overlay_bad_add_dup_node", NULL),
> -		 "Adding overlay 'overlay_bad_add_dup_node' failed\n");
> +	KUNIT_EXPECT_TRUE_MSG(
> +		test, overlay_data_apply("overlay_bad_add_dup_node", NULL),
> +		"Adding overlay 'overlay_bad_add_dup_node' failed\n");
>  
> -	unittest(overlay_data_apply("overlay_bad_add_dup_prop", NULL),
> -		 "Adding overlay 'overlay_bad_add_dup_prop' failed\n");
> +	KUNIT_EXPECT_TRUE_MSG(
> +		test, overlay_data_apply("overlay_bad_add_dup_prop", NULL),
> +		"Adding overlay 'overlay_bad_add_dup_prop' failed\n");
>  
> -	unittest(overlay_data_apply("overlay_bad_phandle", NULL),
> -		 "Adding overlay 'overlay_bad_phandle' failed\n");
> +	KUNIT_EXPECT_TRUE_MSG(
> +		test, overlay_data_apply("overlay_bad_phandle", NULL),
> +		"Adding overlay 'overlay_bad_phandle' failed\n");
>  
> -	unittest(overlay_data_apply("overlay_bad_symbol", NULL),
> -		 "Adding overlay 'overlay_bad_symbol' failed\n");
> +	KUNIT_EXPECT_TRUE_MSG(
> +		test, overlay_data_apply("overlay_bad_symbol", NULL),
> +		"Adding overlay 'overlay_bad_symbol' failed\n");
>  
>  	return;
>  
> @@ -2504,57 +2539,52 @@ static __init void of_unittest_overlay_high_level(void)
>  
>  #else
>  
> -static inline __init void of_unittest_overlay_high_level(void) {}
> +static inline void of_unittest_overlay_high_level(struct kunit *test) {}
>  
>  #endif
>  
> -static int __init of_unittest(void)
> +static int of_test_init(struct kunit *test)
>  {
> -	struct device_node *np;
> -	int res;
> -
>  	/* adding data for unittest */
> -	res = unittest_data_add();
> -	if (res)
> -		return res;
> +	KUNIT_ASSERT_EQ(test, 0, unittest_data_add());
> +
>  	if (!of_aliases)
>  		of_aliases = of_find_node_by_path("/aliases");
>  
> -	np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
> -	if (!np) {
> -		pr_info("No testcase data in device tree; not running tests\n");
> -		return 0;
> -	}
> -	of_node_put(np);
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, of_find_node_by_path(
> +		"/testcase-data/phandle-tests/consumer-a"));
>  
>  	if (IS_ENABLED(CONFIG_UML))
>  		unflatten_device_tree();
>  
> -	pr_info("start of unittest - you will see error messages\n");
> -	of_unittest_check_tree_linkage();
> -	of_unittest_check_phandles();
> -	of_unittest_find_node_by_name();
> -	of_unittest_dynamic();
> -	of_unittest_parse_phandle_with_args();
> -	of_unittest_parse_phandle_with_args_map();
> -	of_unittest_printf();
> -	of_unittest_property_string();
> -	of_unittest_property_copy();
> -	of_unittest_changeset();
> -	of_unittest_parse_interrupts();
> -	of_unittest_parse_interrupts_extended();
> -	of_unittest_match_node();
> -	of_unittest_platform_populate();
> -	of_unittest_overlay();
> +	return 0;
> +}
>  
> +static struct kunit_case of_test_cases[] = {
> +	KUNIT_CASE(of_unittest_check_tree_linkage),
> +	KUNIT_CASE(of_unittest_check_phandles),
> +	KUNIT_CASE(of_unittest_find_node_by_name),
> +	KUNIT_CASE(of_unittest_dynamic),
> +	KUNIT_CASE(of_unittest_parse_phandle_with_args),
> +	KUNIT_CASE(of_unittest_parse_phandle_with_args_map),
> +	KUNIT_CASE(of_unittest_printf),
> +	KUNIT_CASE(of_unittest_property_string),
> +	KUNIT_CASE(of_unittest_property_copy),
> +	KUNIT_CASE(of_unittest_changeset),
> +	KUNIT_CASE(of_unittest_parse_interrupts),
> +	KUNIT_CASE(of_unittest_parse_interrupts_extended),
> +	KUNIT_CASE(of_unittest_match_node),
> +	KUNIT_CASE(of_unittest_platform_populate),
> +	KUNIT_CASE(of_unittest_overlay),
>  	/* Double check linkage after removing testcase data */
> -	of_unittest_check_tree_linkage();
> -
> -	of_unittest_overlay_high_level();
> -
> -	pr_info("end of unittest - %i passed, %i failed\n",
> -		unittest_results.passed, unittest_results.failed);
> +	KUNIT_CASE(of_unittest_check_tree_linkage),
> +	KUNIT_CASE(of_unittest_overlay_high_level),
> +	{},
> +};
>  
> -	return 0;
> -}
> -late_initcall(of_unittest);
> +static struct kunit_module of_test_module = {
> +	.name = "of-test",
> +	.init = of_test_init,
> +	.test_cases = of_test_cases,
> +};
> +module_test(of_test_module);
> 

^ permalink raw reply	[flat|nested] 316+ messages in thread

* [RFC v4 15/17] of: unittest: migrate tests to run on KUnit
@ 2019-02-16  0:24         ` Frank Rowand
  0 siblings, 0 replies; 316+ messages in thread
From: Frank Rowand @ 2019-02-16  0:24 UTC (permalink / raw)


On 2/14/19 1:37 PM, Brendan Higgins wrote:
> Migrate the tests to run under KUnit using the KUnit expectation and
> assertion API, without any cleanup or changes to the test logic.
> 
> Signed-off-by: Brendan Higgins <brendanhiggins@google.com>
> ---
>  drivers/of/Kconfig    |    1 +
>  drivers/of/unittest.c | 1310 +++++++++++++++++++++--------------------
>  2 files changed, 671 insertions(+), 640 deletions(-)
> 
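
To make the mechanical shape of the conversion concrete, here is a minimal
sketch of the before/after pattern (illustration only, not taken from the
patch; the function name is made up, and the old-style call is shown in the
comment):

  #include <kunit/test.h>
  #include <linux/of.h>

  static void example_find_testcase_data(struct kunit *test)
  {
  	struct device_node *np;

  	/*
  	 * Old style (removed by this patch):
  	 *
  	 *	np = of_find_node_by_path("/testcase-data");
  	 *	unittest(np, "find /testcase-data failed\n");
  	 *
  	 * New style: a failed KUNIT_ASSERT_* aborts the test case,
  	 * while a failed KUNIT_EXPECT_* records the failure and lets
  	 * the case keep running.
  	 */
  	np = of_find_node_by_path("/testcase-data");
  	KUNIT_ASSERT_NOT_ERR_OR_NULL_MSG(test, np,
  					 "find /testcase-data failed\n");
  	of_node_put(np);
  }

In the patch the converted cases are then listed in of_test_cases[] and
registered with module_test(), replacing the old late_initcall() runner.
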
> diff --git a/drivers/of/Kconfig b/drivers/of/Kconfig
> index ad3fcad4d75b8..f309399deac20 100644
> --- a/drivers/of/Kconfig
> +++ b/drivers/of/Kconfig
> @@ -15,6 +15,7 @@ if OF
>  config OF_UNITTEST
>  	bool "Device Tree runtime unit tests"
>  	depends on !SPARC
> +	depends on KUNIT
>  	select IRQ_DOMAIN
>  	select OF_EARLY_FLATTREE
>  	select OF_RESOLVE
> diff --git a/drivers/of/unittest.c b/drivers/of/unittest.c

These comments are from applying the patches to 5.0-rc3.

The final hunk of this patch fails to apply because it depends upon

   [PATCH v1 0/1] of: unittest: unflatten device tree on UML when testing.

If I apply that patch then I can apply patches 15 through 17.

If I apply patches 1 through 14 and boot the UML kernel, the devicetree
unittest result is:

  ### dt-test ### FAIL of_unittest_overlay_high_level():2372 overlay_base_root not initialized
  ### dt-test ### end of unittest - 219 passed, 1 failed

This is as expected from your previous reports, and is fixed after
applying

   [PATCH v1 0/1] of: unittest: unflatten device tree on UML when testing.

with the devicetree unittest result of:

   ### dt-test ### end of unittest - 224 passed, 0 failed

After applying patch 15, there are a lot of "unittest internal error" messages.

-Frank


> index effa4e2b9d992..96de69ccb3e63 100644
> --- a/drivers/of/unittest.c
> +++ b/drivers/of/unittest.c
> @@ -26,186 +26,189 @@
>  
>  #include <linux/bitops.h>
>  
> +#include <kunit/test.h>
> +
>  #include "of_private.h"
>  
> -static struct unittest_results {
> -	int passed;
> -	int failed;
> -} unittest_results;
> -
> -#define unittest(result, fmt, ...) ({ \
> -	bool failed = !(result); \
> -	if (failed) { \
> -		unittest_results.failed++; \
> -		pr_err("FAIL %s():%i " fmt, __func__, __LINE__, ##__VA_ARGS__); \
> -	} else { \
> -		unittest_results.passed++; \
> -		pr_debug("pass %s():%i\n", __func__, __LINE__); \
> -	} \
> -	failed; \
> -})
> -
> -static void __init of_unittest_find_node_by_name(void)
> +static void of_unittest_find_node_by_name(struct kunit *test)
>  {
>  	struct device_node *np;
>  	const char *options, *name;
>  
>  	np = of_find_node_by_path("/testcase-data");
>  	name = kasprintf(GFP_KERNEL, "%pOF", np);
> -	unittest(np && !strcmp("/testcase-data", name),
> -		"find /testcase-data failed\n");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> +	KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
> +			       "find /testcase-data failed\n");
>  	of_node_put(np);
>  	kfree(name);
>  
>  	/* Test if trailing '/' works */
> -	np = of_find_node_by_path("/testcase-data/");
> -	unittest(!np, "trailing '/' on /testcase-data/ should fail\n");
> +	KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("/testcase-data/"), NULL,
> +			    "trailing '/' on /testcase-data/ should fail\n");
>  
>  	np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>  	name = kasprintf(GFP_KERNEL, "%pOF", np);
> -	unittest(np && !strcmp("/testcase-data/phandle-tests/consumer-a", name),
> +	KUNIT_EXPECT_STREQ_MSG(
> +		test, "/testcase-data/phandle-tests/consumer-a", name,
>  		"find /testcase-data/phandle-tests/consumer-a failed\n");
>  	of_node_put(np);
>  	kfree(name);
>  
>  	np = of_find_node_by_path("testcase-alias");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>  	name = kasprintf(GFP_KERNEL, "%pOF", np);
> -	unittest(np && !strcmp("/testcase-data", name),
> -		"find testcase-alias failed\n");
> +	KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
> +			       "find testcase-alias failed\n");
>  	of_node_put(np);
>  	kfree(name);
>  
>  	/* Test if trailing '/' works on aliases */
> -	np = of_find_node_by_path("testcase-alias/");
> -	unittest(!np, "trailing '/' on testcase-alias/ should fail\n");
> +	KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("testcase-alias/"), NULL,
> +			    "trailing '/' on testcase-alias/ should fail\n");
>  
>  	np = of_find_node_by_path("testcase-alias/phandle-tests/consumer-a");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>  	name = kasprintf(GFP_KERNEL, "%pOF", np);
> -	unittest(np && !strcmp("/testcase-data/phandle-tests/consumer-a", name),
> +	KUNIT_EXPECT_STREQ_MSG(
> +		test, "/testcase-data/phandle-tests/consumer-a", name,
>  		"find testcase-alias/phandle-tests/consumer-a failed\n");
>  	of_node_put(np);
>  	kfree(name);
>  
> -	np = of_find_node_by_path("/testcase-data/missing-path");
> -	unittest(!np, "non-existent path returned node %pOF\n", np);
> +	KUNIT_EXPECT_EQ_MSG(
> +		test,
> +		np = of_find_node_by_path("/testcase-data/missing-path"), NULL,
> +		"non-existent path returned node %pOF\n", np);
>  	of_node_put(np);
>  
> -	np = of_find_node_by_path("missing-alias");
> -	unittest(!np, "non-existent alias returned node %pOF\n", np);
> +	KUNIT_EXPECT_EQ_MSG(
> +		test, np = of_find_node_by_path("missing-alias"), NULL,
> +		"non-existent alias returned node %pOF\n", np);
>  	of_node_put(np);
>  
> -	np = of_find_node_by_path("testcase-alias/missing-path");
> -	unittest(!np, "non-existent alias with relative path returned node %pOF\n", np);
> +	KUNIT_EXPECT_EQ_MSG(
> +		test,
> +		np = of_find_node_by_path("testcase-alias/missing-path"), NULL,
> +		"non-existent alias with relative path returned node %pOF\n",
> +		np);
>  	of_node_put(np);
>  
>  	np = of_find_node_opts_by_path("/testcase-data:testoption", &options);
> -	unittest(np && !strcmp("testoption", options),
> -		 "option path test failed\n");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> +	KUNIT_EXPECT_STREQ_MSG(test, "testoption", options,
> +			       "option path test failed\n");
>  	of_node_put(np);
>  
>  	np = of_find_node_opts_by_path("/testcase-data:test/option", &options);
> -	unittest(np && !strcmp("test/option", options),
> -		 "option path test, subcase #1 failed\n");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> +	KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
> +			       "option path test, subcase #1 failed\n");
>  	of_node_put(np);
>  
>  	np = of_find_node_opts_by_path("/testcase-data/testcase-device1:test/option", &options);
> -	unittest(np && !strcmp("test/option", options),
> -		 "option path test, subcase #2 failed\n");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> +	KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
> +			       "option path test, subcase #2 failed\n");
>  	of_node_put(np);
>  
>  	np = of_find_node_opts_by_path("/testcase-data:testoption", NULL);
> -	unittest(np, "NULL option path test failed\n");
> +	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(test, np,
> +					 "NULL option path test failed\n");
>  	of_node_put(np);
>  
>  	np = of_find_node_opts_by_path("testcase-alias:testaliasoption",
>  				       &options);
> -	unittest(np && !strcmp("testaliasoption", options),
> -		 "option alias path test failed\n");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> +	KUNIT_EXPECT_STREQ_MSG(test, "testaliasoption", options,
> +			       "option alias path test failed\n");
>  	of_node_put(np);
>  
>  	np = of_find_node_opts_by_path("testcase-alias:test/alias/option",
>  				       &options);
> -	unittest(np && !strcmp("test/alias/option", options),
> -		 "option alias path test, subcase #1 failed\n");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> +	KUNIT_EXPECT_STREQ_MSG(test, "test/alias/option", options,
> +			       "option alias path test, subcase #1 failed\n");
>  	of_node_put(np);
>  
>  	np = of_find_node_opts_by_path("testcase-alias:testaliasoption", NULL);
> -	unittest(np, "NULL option alias path test failed\n");
> +	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(
> +			test, np, "NULL option alias path test failed\n");
>  	of_node_put(np);
>  
>  	options = "testoption";
>  	np = of_find_node_opts_by_path("testcase-alias", &options);
> -	unittest(np && !options, "option clearing test failed\n");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> +	KUNIT_EXPECT_EQ_MSG(test, options, NULL,
> +			    "option clearing test failed\n");
>  	of_node_put(np);
>  
>  	options = "testoption";
>  	np = of_find_node_opts_by_path("/", &options);
> -	unittest(np && !options, "option clearing root node test failed\n");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> +	KUNIT_EXPECT_EQ_MSG(test, options, NULL,
> +			    "option clearing root node test failed\n");
>  	of_node_put(np);
>  }
>  
> -static void __init of_unittest_dynamic(void)
> +static void of_unittest_dynamic(struct kunit *test)
>  {
>  	struct device_node *np;
>  	struct property *prop;
>  
>  	np = of_find_node_by_path("/testcase-data");
> -	if (!np) {
> -		pr_err("missing testcase data\n");
> -		return;
> -	}
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>  
>  	/* Array of 4 properties for the purpose of testing */
>  	prop = kcalloc(4, sizeof(*prop), GFP_KERNEL);
> -	if (!prop) {
> -		unittest(0, "kzalloc() failed\n");
> -		return;
> -	}
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, prop);
>  
>  	/* Add a new property - should pass*/
>  	prop->name = "new-property";
>  	prop->value = "new-property-data";
>  	prop->length = strlen(prop->value) + 1;
> -	unittest(of_add_property(np, prop) == 0, "Adding a new property failed\n");
> +	KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop), 0,
> +			    "Adding a new property failed\n");
>  
>  	/* Try to add an existing property - should fail */
>  	prop++;
>  	prop->name = "new-property";
>  	prop->value = "new-property-data-should-fail";
>  	prop->length = strlen(prop->value) + 1;
> -	unittest(of_add_property(np, prop) != 0,
> -		 "Adding an existing property should have failed\n");
> +	KUNIT_EXPECT_NE_MSG(test, of_add_property(np, prop), 0,
> +			    "Adding an existing property should have failed\n");
>  
>  	/* Try to modify an existing property - should pass */
>  	prop->value = "modify-property-data-should-pass";
>  	prop->length = strlen(prop->value) + 1;
> -	unittest(of_update_property(np, prop) == 0,
> -		 "Updating an existing property should have passed\n");
> +	KUNIT_EXPECT_EQ_MSG(
> +		test, of_update_property(np, prop), 0,
> +		"Updating an existing property should have passed\n");
>  
>  	/* Try to modify non-existent property - should pass*/
>  	prop++;
>  	prop->name = "modify-property";
>  	prop->value = "modify-missing-property-data-should-pass";
>  	prop->length = strlen(prop->value) + 1;
> -	unittest(of_update_property(np, prop) == 0,
> -		 "Updating a missing property should have passed\n");
> +	KUNIT_EXPECT_EQ_MSG(test, of_update_property(np, prop), 0,
> +			    "Updating a missing property should have passed\n");
>  
>  	/* Remove property - should pass */
> -	unittest(of_remove_property(np, prop) == 0,
> -		 "Removing a property should have passed\n");
> +	KUNIT_EXPECT_EQ_MSG(test, of_remove_property(np, prop), 0,
> +			    "Removing a property should have passed\n");
>  
>  	/* Adding very large property - should pass */
>  	prop++;
>  	prop->name = "large-property-PAGE_SIZEx8";
>  	prop->length = PAGE_SIZE * 8;
>  	prop->value = kzalloc(prop->length, GFP_KERNEL);
> -	unittest(prop->value != NULL, "Unable to allocate large buffer\n");
> -	if (prop->value)
> -		unittest(of_add_property(np, prop) == 0,
> -			 "Adding a large property should have passed\n");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, prop->value);
> +	KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop), 0,
> +			    "Adding a large property should have passed\n");
>  }
>  
> -static int __init of_unittest_check_node_linkage(struct device_node *np)
> +static int of_unittest_check_node_linkage(struct device_node *np)
>  {
>  	struct device_node *child;
>  	int count = 0, rc;
> @@ -230,27 +233,30 @@ static int __init of_unittest_check_node_linkage(struct device_node *np)
>  	return rc;
>  }
>  
> -static void __init of_unittest_check_tree_linkage(void)
> +static void of_unittest_check_tree_linkage(struct kunit *test)
>  {
>  	struct device_node *np;
>  	int allnode_count = 0, child_count;
>  
> -	if (!of_root)
> -		return;
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, of_root);
>  
>  	for_each_of_allnodes(np)
>  		allnode_count++;
>  	child_count = of_unittest_check_node_linkage(of_root);
>  
> -	unittest(child_count > 0, "Device node data structure is corrupted\n");
> -	unittest(child_count == allnode_count,
> -		 "allnodes list size (%i) doesn't match sibling lists size (%i)\n",
> -		 allnode_count, child_count);
> +	KUNIT_EXPECT_GT_MSG(test, child_count, 0,
> +			    "Device node data structure is corrupted\n");
> +	KUNIT_EXPECT_EQ_MSG(
> +		test, child_count, allnode_count,
> +		"allnodes list size (%i) doesn't match sibling lists size (%i)\n",
> +		allnode_count, child_count);
>  	pr_debug("allnodes list size (%i); sibling lists size (%i)\n", allnode_count, child_count);
>  }
>  
> -static void __init of_unittest_printf_one(struct device_node *np, const char *fmt,
> -					  const char *expected)
> +static void of_unittest_printf_one(struct kunit *test,
> +				   struct device_node *np,
> +				   const char *fmt,
> +				   const char *expected)
>  {
>  	unsigned char *buf;
>  	int buf_size;
> @@ -265,8 +271,12 @@ static void __init of_unittest_printf_one(struct device_node *np, const char *fm
>  	memset(buf, 0xff, buf_size);
>  	size = snprintf(buf, buf_size - 2, fmt, np);
>  
> -	/* use strcmp() instead of strncmp() here to be absolutely sure strings match */
> -	unittest((strcmp(buf, expected) == 0) && (buf[size+1] == 0xff),
> +	KUNIT_EXPECT_STREQ_MSG(
> +		test, buf, expected,
> +		"sprintf failed; fmt='%s' expected='%s' rslt='%s'\n",
> +		fmt, expected, buf);
> +	KUNIT_EXPECT_EQ_MSG(
> +		test, buf[size+1], 0xff,
>  		"sprintf failed; fmt='%s' expected='%s' rslt='%s'\n",
>  		fmt, expected, buf);
>  
> @@ -276,44 +286,49 @@ static void __init of_unittest_printf_one(struct device_node *np, const char *fm
>  		/* Clear the buffer, and make sure it works correctly still */
>  		memset(buf, 0xff, buf_size);
>  		snprintf(buf, size+1, fmt, np);
> -		unittest(strncmp(buf, expected, size) == 0 && (buf[size+1] == 0xff),
> +		KUNIT_EXPECT_STREQ_MSG(
> +			test, buf, expected,
> +			"snprintf failed; size=%i fmt='%s' expected='%s' rslt='%s'\n",
> +			size, fmt, expected, buf);
> +		KUNIT_EXPECT_EQ_MSG(
> +			test, buf[size+1], 0xff,
>  			"snprintf failed; size=%i fmt='%s' expected='%s' rslt='%s'\n",
>  			size, fmt, expected, buf);
>  	}
>  	kfree(buf);
>  }
>  
> -static void __init of_unittest_printf(void)
> +static void of_unittest_printf(struct kunit *test)
>  {
>  	struct device_node *np;
>  	const char *full_name = "/testcase-data/platform-tests/test-device@1/dev@100";
>  	char phandle_str[16] = "";
>  
>  	np = of_find_node_by_path(full_name);
> -	if (!np) {
> -		unittest(np, "testcase data missing\n");
> -		return;
> -	}
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>  
>  	num_to_str(phandle_str, sizeof(phandle_str), np->phandle, 0);
>  
> -	of_unittest_printf_one(np, "%pOF",  full_name);
> -	of_unittest_printf_one(np, "%pOFf", full_name);
> -	of_unittest_printf_one(np, "%pOFn", "dev");
> -	of_unittest_printf_one(np, "%2pOFn", "dev");
> -	of_unittest_printf_one(np, "%5pOFn", "  dev");
> -	of_unittest_printf_one(np, "%pOFnc", "dev:test-sub-device");
> -	of_unittest_printf_one(np, "%pOFp", phandle_str);
> -	of_unittest_printf_one(np, "%pOFP", "dev@100");
> -	of_unittest_printf_one(np, "ABC %pOFP ABC", "ABC dev@100 ABC");
> -	of_unittest_printf_one(np, "%10pOFP", "   dev@100");
> -	of_unittest_printf_one(np, "%-10pOFP", "dev@100   ");
> -	of_unittest_printf_one(of_root, "%pOFP", "/");
> -	of_unittest_printf_one(np, "%pOFF", "----");
> -	of_unittest_printf_one(np, "%pOFPF", "dev@100:----");
> -	of_unittest_printf_one(np, "%pOFPFPc", "dev@100:----:dev@100:test-sub-device");
> -	of_unittest_printf_one(np, "%pOFc", "test-sub-device");
> -	of_unittest_printf_one(np, "%pOFC",
> +	of_unittest_printf_one(test, np, "%pOF",  full_name);
> +	of_unittest_printf_one(test, np, "%pOFf", full_name);
> +	of_unittest_printf_one(test, np, "%pOFn", "dev");
> +	of_unittest_printf_one(test, np, "%2pOFn", "dev");
> +	of_unittest_printf_one(test, np, "%5pOFn", "  dev");
> +	of_unittest_printf_one(test, np, "%pOFnc", "dev:test-sub-device");
> +	of_unittest_printf_one(test, np, "%pOFp", phandle_str);
> +	of_unittest_printf_one(test, np, "%pOFP", "dev@100");
> +	of_unittest_printf_one(test, np, "ABC %pOFP ABC", "ABC dev@100 ABC");
> +	of_unittest_printf_one(test, np, "%10pOFP", "   dev@100");
> +	of_unittest_printf_one(test, np, "%-10pOFP", "dev@100   ");
> +	of_unittest_printf_one(test, of_root, "%pOFP", "/");
> +	of_unittest_printf_one(test, np, "%pOFF", "----");
> +	of_unittest_printf_one(test, np, "%pOFPF", "dev@100:----");
> +	of_unittest_printf_one(test,
> +			       np,
> +			       "%pOFPFPc",
> +			       "dev@100:----:dev@100:test-sub-device");
> +	of_unittest_printf_one(test, np, "%pOFc", "test-sub-device");
> +	of_unittest_printf_one(test, np, "%pOFC",
>  			"\"test-sub-device\",\"test-compat2\",\"test-compat3\"");
>  }
>  
> @@ -323,7 +338,7 @@ struct node_hash {
>  };
>  
>  static DEFINE_HASHTABLE(phandle_ht, 8);
> -static void __init of_unittest_check_phandles(void)
> +static void of_unittest_check_phandles(struct kunit *test)
>  {
>  	struct device_node *np;
>  	struct node_hash *nh;
> @@ -335,24 +350,26 @@ static void __init of_unittest_check_phandles(void)
>  			continue;
>  
>  		hash_for_each_possible(phandle_ht, nh, node, np->phandle) {
> +			KUNIT_EXPECT_NE_MSG(
> +				test, nh->np->phandle, np->phandle,
> +				"Duplicate phandle! %i used by %pOF and %pOF\n",
> +				np->phandle, nh->np, np);
>  			if (nh->np->phandle == np->phandle) {
> -				pr_info("Duplicate phandle! %i used by %pOF and %pOF\n",
> -					np->phandle, nh->np, np);
>  				dup_count++;
>  				break;
>  			}
>  		}
>  
>  		nh = kzalloc(sizeof(*nh), GFP_KERNEL);
> -		if (WARN_ON(!nh))
> -			return;
> +		KUNIT_ASSERT_NOT_ERR_OR_NULL(test, nh);
>  
>  		nh->np = np;
>  		hash_add(phandle_ht, &nh->node, np->phandle);
>  		phandle_count++;
>  	}
> -	unittest(dup_count == 0, "Found %i duplicates in %i phandles\n",
> -		 dup_count, phandle_count);
> +	KUNIT_EXPECT_EQ_MSG(test, dup_count, 0,
> +			    "Found %i duplicates in %i phandles\n",
> +			    dup_count, phandle_count);
>  
>  	/* Clean up */
>  	hash_for_each_safe(phandle_ht, i, tmp, nh, node) {
> @@ -361,20 +378,21 @@ static void __init of_unittest_check_phandles(void)
>  	}
>  }
>  
> -static void __init of_unittest_parse_phandle_with_args(void)
> +static void of_unittest_parse_phandle_with_args(struct kunit *test)
>  {
>  	struct device_node *np;
>  	struct of_phandle_args args;
> -	int i, rc;
> +	int i, rc = 0;
>  
>  	np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
> -	if (!np) {
> -		pr_err("missing testcase data\n");
> -		return;
> -	}
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>  
> -	rc = of_count_phandle_with_args(np, "phandle-list", "#phandle-cells");
> -	unittest(rc == 7, "of_count_phandle_with_args() returned %i, expected 7\n", rc);
> +	KUNIT_EXPECT_EQ_MSG(
> +		test,
> +		of_count_phandle_with_args(
> +			np, "phandle-list", "#phandle-cells"),
> +		7,
> +		"of_count_phandle_with_args() returned %i, expected 7\n", rc);
>  
>  	for (i = 0; i < 8; i++) {
>  		bool passed = true;
> @@ -428,85 +446,91 @@ static void __init of_unittest_parse_phandle_with_args(void)
>  			passed = false;
>  		}
>  
> -		unittest(passed, "index %i - data error on node %pOF rc=%i\n",
> -			 i, args.np, rc);
> +		KUNIT_EXPECT_TRUE_MSG(
> +			test, passed,
> +			"index %i - data error on node %pOF rc=%i\n",
> +			i, args.np, rc);
>  	}
>  
>  	/* Check for missing list property */
>  	memset(&args, 0, sizeof(args));
> -	rc = of_parse_phandle_with_args(np, "phandle-list-missing",
> -					"#phandle-cells", 0, &args);
> -	unittest(rc == -ENOENT, "expected:%i got:%i\n", -ENOENT, rc);
> -	rc = of_count_phandle_with_args(np, "phandle-list-missing",
> -					"#phandle-cells");
> -	unittest(rc == -ENOENT, "expected:%i got:%i\n", -ENOENT, rc);
> +	KUNIT_EXPECT_EQ(
> +		test,
> +		of_parse_phandle_with_args(
> +			np, "phandle-list-missing", "#phandle-cells", 0, &args),
> +		-ENOENT);
> +	KUNIT_EXPECT_EQ(
> +		test,
> +		of_count_phandle_with_args(
> +			np, "phandle-list-missing", "#phandle-cells"),
> +		-ENOENT);
>  
>  	/* Check for missing cells property */
>  	memset(&args, 0, sizeof(args));
> -	rc = of_parse_phandle_with_args(np, "phandle-list",
> -					"#phandle-cells-missing", 0, &args);
> -	unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
> -	rc = of_count_phandle_with_args(np, "phandle-list",
> -					"#phandle-cells-missing");
> -	unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
> +	KUNIT_EXPECT_EQ(
> +		test,
> +		of_parse_phandle_with_args(
> +			np, "phandle-list", "#phandle-cells-missing", 0, &args),
> +		-EINVAL);
> +	KUNIT_EXPECT_EQ(
> +		test,
> +		of_count_phandle_with_args(
> +			np, "phandle-list", "#phandle-cells-missing"),
> +		-EINVAL);
>  
>  	/* Check for bad phandle in list */
>  	memset(&args, 0, sizeof(args));
> -	rc = of_parse_phandle_with_args(np, "phandle-list-bad-phandle",
> -					"#phandle-cells", 0, &args);
> -	unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
> -	rc = of_count_phandle_with_args(np, "phandle-list-bad-phandle",
> -					"#phandle-cells");
> -	unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
> +	KUNIT_EXPECT_EQ(
> +		test,
> +		of_parse_phandle_with_args(np, "phandle-list-bad-phandle",
> +					   "#phandle-cells", 0, &args),
> +		-EINVAL);
> +	KUNIT_EXPECT_EQ(
> +		test,
> +		of_count_phandle_with_args(
> +			np, "phandle-list-bad-phandle", "#phandle-cells"),
> +		-EINVAL);
>  
>  	/* Check for incorrectly formed argument list */
>  	memset(&args, 0, sizeof(args));
> -	rc = of_parse_phandle_with_args(np, "phandle-list-bad-args",
> -					"#phandle-cells", 1, &args);
> -	unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
> -	rc = of_count_phandle_with_args(np, "phandle-list-bad-args",
> -					"#phandle-cells");
> -	unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
> +	KUNIT_EXPECT_EQ(
> +		test,
> +		of_parse_phandle_with_args(np, "phandle-list-bad-args",
> +					   "#phandle-cells", 1, &args),
> +		-EINVAL);
> +	KUNIT_EXPECT_EQ(
> +		test,
> +		of_count_phandle_with_args(
> +			np, "phandle-list-bad-args", "#phandle-cells"),
> +		-EINVAL);
>  }
>  
> -static void __init of_unittest_parse_phandle_with_args_map(void)
> +static void of_unittest_parse_phandle_with_args_map(struct kunit *test)
>  {
>  	struct device_node *np, *p0, *p1, *p2, *p3;
>  	struct of_phandle_args args;
>  	int i, rc;
>  
>  	np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-b");
> -	if (!np) {
> -		pr_err("missing testcase data\n");
> -		return;
> -	}
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>  
>  	p0 = of_find_node_by_path("/testcase-data/phandle-tests/provider0");
> -	if (!p0) {
> -		pr_err("missing testcase data\n");
> -		return;
> -	}
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, p0);
>  
>  	p1 = of_find_node_by_path("/testcase-data/phandle-tests/provider1");
> -	if (!p1) {
> -		pr_err("missing testcase data\n");
> -		return;
> -	}
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, p1);
>  
>  	p2 = of_find_node_by_path("/testcase-data/phandle-tests/provider2");
> -	if (!p2) {
> -		pr_err("missing testcase data\n");
> -		return;
> -	}
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, p2);
>  
>  	p3 = of_find_node_by_path("/testcase-data/phandle-tests/provider3");
> -	if (!p3) {
> -		pr_err("missing testcase data\n");
> -		return;
> -	}
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, p3);
>  
> -	rc = of_count_phandle_with_args(np, "phandle-list", "#phandle-cells");
> -	unittest(rc == 7, "of_count_phandle_with_args() returned %i, expected 7\n", rc);
> +	KUNIT_EXPECT_EQ(test,
> +		       of_count_phandle_with_args(np,
> +						  "phandle-list",
> +						  "#phandle-cells"),
> +		       7);
>  
>  	for (i = 0; i < 8; i++) {
>  		bool passed = true;
> @@ -564,121 +588,186 @@ static void __init of_unittest_parse_phandle_with_args_map(void)
>  			passed = false;
>  		}
>  
> -		unittest(passed, "index %i - data error on node %s rc=%i\n",
> -			 i, args.np->full_name, rc);
> +		KUNIT_EXPECT_TRUE_MSG(
> +			test, passed,
> +			"index %i - data error on node %s rc=%i\n",
> +			i, (args.np ? args.np->full_name : "missing np"), rc);
>  	}
>  
>  	/* Check for missing list property */
>  	memset(&args, 0, sizeof(args));
> -	rc = of_parse_phandle_with_args_map(np, "phandle-list-missing",
> -					    "phandle", 0, &args);
> -	unittest(rc == -ENOENT, "expected:%i got:%i\n", -ENOENT, rc);
> +	KUNIT_EXPECT_EQ(
> +		test,
> +		of_parse_phandle_with_args_map(
> +			np, "phandle-list-missing", "phandle", 0, &args),
> +		-ENOENT);
>  
>  	/* Check for missing cells,map,mask property */
>  	memset(&args, 0, sizeof(args));
> -	rc = of_parse_phandle_with_args_map(np, "phandle-list",
> -					    "phandle-missing", 0, &args);
> -	unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
> +	KUNIT_EXPECT_EQ(
> +		test,
> +		of_parse_phandle_with_args_map(
> +			np, "phandle-list", "phandle-missing", 0, &args),
> +		-EINVAL);
>  
>  	/* Check for bad phandle in list */
>  	memset(&args, 0, sizeof(args));
> -	rc = of_parse_phandle_with_args_map(np, "phandle-list-bad-phandle",
> -					    "phandle", 0, &args);
> -	unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
> +	KUNIT_EXPECT_EQ(
> +		test,
> +		of_parse_phandle_with_args_map(
> +			np, "phandle-list-bad-phandle", "phandle", 0, &args),
> +		-EINVAL);
>  
>  	/* Check for incorrectly formed argument list */
>  	memset(&args, 0, sizeof(args));
> -	rc = of_parse_phandle_with_args_map(np, "phandle-list-bad-args",
> -					    "phandle", 1, &args);
> -	unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
> +	KUNIT_EXPECT_EQ(
> +		test,
> +		of_parse_phandle_with_args_map(
> +			np, "phandle-list-bad-args", "phandle", 1, &args),
> +		-EINVAL);
>  }
>  
> -static void __init of_unittest_property_string(void)
> +static void of_unittest_property_string(struct kunit *test)
>  {
>  	const char *strings[4];
>  	struct device_node *np;
>  	int rc;
>  
>  	np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
> -	if (!np) {
> -		pr_err("No testcase data in device tree\n");
> -		return;
> -	}
> -
> -	rc = of_property_match_string(np, "phandle-list-names", "first");
> -	unittest(rc == 0, "first expected:0 got:%i\n", rc);
> -	rc = of_property_match_string(np, "phandle-list-names", "second");
> -	unittest(rc == 1, "second expected:1 got:%i\n", rc);
> -	rc = of_property_match_string(np, "phandle-list-names", "third");
> -	unittest(rc == 2, "third expected:2 got:%i\n", rc);
> -	rc = of_property_match_string(np, "phandle-list-names", "fourth");
> -	unittest(rc == -ENODATA, "unmatched string; rc=%i\n", rc);
> -	rc = of_property_match_string(np, "missing-property", "blah");
> -	unittest(rc == -EINVAL, "missing property; rc=%i\n", rc);
> -	rc = of_property_match_string(np, "empty-property", "blah");
> -	unittest(rc == -ENODATA, "empty property; rc=%i\n", rc);
> -	rc = of_property_match_string(np, "unterminated-string", "blah");
> -	unittest(rc == -EILSEQ, "unterminated string; rc=%i\n", rc);
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> +
> +	KUNIT_EXPECT_EQ(
> +		test,
> +		of_property_match_string(np, "phandle-list-names", "first"),
> +		0);
> +	KUNIT_EXPECT_EQ(
> +		test,
> +		of_property_match_string(np, "phandle-list-names", "second"),
> +		1);
> +	KUNIT_EXPECT_EQ(
> +		test,
> +		of_property_match_string(np, "phandle-list-names", "third"),
> +		2);
> +	KUNIT_EXPECT_EQ_MSG(
> +		test,
> +		of_property_match_string(np, "phandle-list-names", "fourth"),
> +		-ENODATA,
> +		"unmatched string");
> +	KUNIT_EXPECT_EQ_MSG(
> +		test,
> +		of_property_match_string(np, "missing-property", "blah"),
> +		-EINVAL,
> +		"missing property");
> +	KUNIT_EXPECT_EQ_MSG(
> +		test,
> +		of_property_match_string(np, "empty-property", "blah"),
> +		-ENODATA,
> +		"empty property");
> +	KUNIT_EXPECT_EQ_MSG(
> +		test,
> +		of_property_match_string(np, "unterminated-string", "blah"),
> +		-EILSEQ,
> +		"unterminated string");
>  
>  	/* of_property_count_strings() tests */
> -	rc = of_property_count_strings(np, "string-property");
> -	unittest(rc == 1, "Incorrect string count; rc=%i\n", rc);
> -	rc = of_property_count_strings(np, "phandle-list-names");
> -	unittest(rc == 3, "Incorrect string count; rc=%i\n", rc);
> -	rc = of_property_count_strings(np, "unterminated-string");
> -	unittest(rc == -EILSEQ, "unterminated string; rc=%i\n", rc);
> -	rc = of_property_count_strings(np, "unterminated-string-list");
> -	unittest(rc == -EILSEQ, "unterminated string array; rc=%i\n", rc);
> +	KUNIT_EXPECT_EQ(test,
> +			of_property_count_strings(np, "string-property"), 1);
> +	KUNIT_EXPECT_EQ(test,
> +			of_property_count_strings(np, "phandle-list-names"), 3);
> +	KUNIT_EXPECT_EQ_MSG(
> +		test,
> +		of_property_count_strings(np, "unterminated-string"), -EILSEQ,
> +		"unterminated string");
> +	KUNIT_EXPECT_EQ_MSG(
> +		test,
> +		of_property_count_strings(np, "unterminated-string-list"),
> +		-EILSEQ,
> +		"unterminated string array");
>  
>  	/* of_property_read_string_index() tests */
>  	rc = of_property_read_string_index(np, "string-property", 0, strings);
> -	unittest(rc == 0 && !strcmp(strings[0], "foobar"), "of_property_read_string_index() failure; rc=%i\n", rc);
> +	KUNIT_ASSERT_EQ(test, rc, 0);
> +	KUNIT_EXPECT_STREQ(test, strings[0], "foobar");
> +
>  	strings[0] = NULL;
>  	rc = of_property_read_string_index(np, "string-property", 1, strings);
> -	unittest(rc == -ENODATA && strings[0] == NULL, "of_property_read_string_index() failure; rc=%i\n", rc);
> +	KUNIT_EXPECT_EQ(test, rc, -ENODATA);
> +	KUNIT_EXPECT_EQ(test, strings[0], NULL);
> +
>  	rc = of_property_read_string_index(np, "phandle-list-names", 0, strings);
> -	unittest(rc == 0 && !strcmp(strings[0], "first"), "of_property_read_string_index() failure; rc=%i\n", rc);
> +	KUNIT_ASSERT_EQ(test, rc, 0);
> +	KUNIT_EXPECT_STREQ(test, strings[0], "first");
> +
>  	rc = of_property_read_string_index(np, "phandle-list-names", 1, strings);
> -	unittest(rc == 0 && !strcmp(strings[0], "second"), "of_property_read_string_index() failure; rc=%i\n", rc);
> +	KUNIT_ASSERT_EQ(test, rc, 0);
> +	KUNIT_EXPECT_STREQ(test, strings[0], "second");
> +
>  	rc = of_property_read_string_index(np, "phandle-list-names", 2, strings);
> -	unittest(rc == 0 && !strcmp(strings[0], "third"), "of_property_read_string_index() failure; rc=%i\n", rc);
> +	KUNIT_ASSERT_EQ(test, rc, 0);
> +	KUNIT_EXPECT_STREQ(test, strings[0], "third");
> +
>  	strings[0] = NULL;
>  	rc = of_property_read_string_index(np, "phandle-list-names", 3, strings);
> -	unittest(rc == -ENODATA && strings[0] == NULL, "of_property_read_string_index() failure; rc=%i\n", rc);
> +	KUNIT_EXPECT_EQ(test, rc, -ENODATA);
> +	KUNIT_EXPECT_EQ(test, strings[0], NULL);
> +
>  	strings[0] = NULL;
>  	rc = of_property_read_string_index(np, "unterminated-string", 0, strings);
> -	unittest(rc == -EILSEQ && strings[0] == NULL, "of_property_read_string_index() failure; rc=%i\n", rc);
> +	KUNIT_EXPECT_EQ(test, rc, -EILSEQ);
> +	KUNIT_EXPECT_EQ(test, strings[0], NULL);
> +
>  	rc = of_property_read_string_index(np, "unterminated-string-list", 0, strings);
> -	unittest(rc == 0 && !strcmp(strings[0], "first"), "of_property_read_string_index() failure; rc=%i\n", rc);
> +	KUNIT_ASSERT_EQ(test, rc, 0);
> +	KUNIT_EXPECT_STREQ(test, strings[0], "first");
> +
>  	strings[0] = NULL;
>  	rc = of_property_read_string_index(np, "unterminated-string-list", 2, strings); /* should fail */
> -	unittest(rc == -EILSEQ && strings[0] == NULL, "of_property_read_string_index() failure; rc=%i\n", rc);
> -	strings[1] = NULL;
> +	KUNIT_EXPECT_EQ(test, rc, -EILSEQ);
> +	KUNIT_EXPECT_EQ(test, strings[0], NULL);
>  
>  	/* of_property_read_string_array() tests */
> -	rc = of_property_read_string_array(np, "string-property", strings, 4);
> -	unittest(rc == 1, "Incorrect string count; rc=%i\n", rc);
> -	rc = of_property_read_string_array(np, "phandle-list-names", strings, 4);
> -	unittest(rc == 3, "Incorrect string count; rc=%i\n", rc);
> -	rc = of_property_read_string_array(np, "unterminated-string", strings, 4);
> -	unittest(rc == -EILSEQ, "unterminated string; rc=%i\n", rc);
> +	strings[1] = NULL;
> +	KUNIT_EXPECT_EQ(
> +		test,
> +		of_property_read_string_array(
> +			np, "string-property", strings, 4),
> +		1);
> +	KUNIT_EXPECT_EQ(
> +		test,
> +		of_property_read_string_array(
> +			np, "phandle-list-names", strings, 4),
> +		3);
> +	KUNIT_EXPECT_EQ_MSG(
> +		test,
> +		of_property_read_string_array(
> +			np, "unterminated-string", strings, 4),
> +		-EILSEQ,
> +		"unterminated string");
>  	/* -- An incorrectly formed string should cause a failure */
> -	rc = of_property_read_string_array(np, "unterminated-string-list", strings, 4);
> -	unittest(rc == -EILSEQ, "unterminated string array; rc=%i\n", rc);
> +	KUNIT_EXPECT_EQ_MSG(
> +		test,
> +		of_property_read_string_array(
> +			np, "unterminated-string-list", strings, 4),
> +		-EILSEQ,
> +		"unterminated string array");
>  	/* -- parsing the correctly formed strings should still work: */
>  	strings[2] = NULL;
>  	rc = of_property_read_string_array(np, "unterminated-string-list", strings, 2);
> -	unittest(rc == 2 && strings[2] == NULL, "of_property_read_string_array() failure; rc=%i\n", rc);
> +	KUNIT_EXPECT_EQ(test, rc, 2);
> +	KUNIT_EXPECT_EQ(test, strings[2], NULL);
> +
>  	strings[1] = NULL;
>  	rc = of_property_read_string_array(np, "phandle-list-names", strings, 1);
> -	unittest(rc == 1 && strings[1] == NULL, "Overwrote end of string array; rc=%i, str='%s'\n", rc, strings[1]);
> +	KUNIT_ASSERT_EQ(test, rc, 1);
> +	KUNIT_EXPECT_EQ_MSG(test, strings[1], NULL,
> +			    "Overwrote end of string array");
>  }
>  
>  #define propcmp(p1, p2) (((p1)->length == (p2)->length) && \
>  			(p1)->value && (p2)->value && \
>  			!memcmp((p1)->value, (p2)->value, (p1)->length) && \
>  			!strcmp((p1)->name, (p2)->name))
> -static void __init of_unittest_property_copy(void)
> +static void of_unittest_property_copy(struct kunit *test)
>  {
>  #ifdef CONFIG_OF_DYNAMIC
>  	struct property p1 = { .name = "p1", .length = 0, .value = "" };
> @@ -686,20 +775,24 @@ static void __init of_unittest_property_copy(void)
>  	struct property *new;
>  
>  	new = __of_prop_dup(&p1, GFP_KERNEL);
> -	unittest(new && propcmp(&p1, new), "empty property didn't copy correctly\n");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, new);
> +	KUNIT_EXPECT_TRUE_MSG(test, propcmp(&p1, new),
> +			      "empty property didn't copy correctly");
>  	kfree(new->value);
>  	kfree(new->name);
>  	kfree(new);
>  
>  	new = __of_prop_dup(&p2, GFP_KERNEL);
> -	unittest(new && propcmp(&p2, new), "non-empty property didn't copy correctly\n");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, new);
> +	KUNIT_EXPECT_TRUE_MSG(test, propcmp(&p2, new),
> +			      "non-empty property didn't copy correctly");
>  	kfree(new->value);
>  	kfree(new->name);
>  	kfree(new);
>  #endif
>  }
>  
> -static void __init of_unittest_changeset(void)
> +static void of_unittest_changeset(struct kunit *test)
>  {
>  #ifdef CONFIG_OF_DYNAMIC
>  	struct property *ppadd, padd = { .name = "prop-add", .length = 1, .value = "" };
> @@ -712,32 +805,32 @@ static void __init of_unittest_changeset(void)
>  	struct of_changeset chgset;
>  
>  	n1 = __of_node_dup(NULL, "n1");
> -	unittest(n1, "testcase setup failure\n");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, n1);
>  
>  	n2 = __of_node_dup(NULL, "n2");
> -	unittest(n2, "testcase setup failure\n");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, n2);
>  
>  	n21 = __of_node_dup(NULL, "n21");
> -	unittest(n21, "testcase setup failure %p\n", n21);
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, n21);
>  
>  	nchangeset = of_find_node_by_path("/testcase-data/changeset");
>  	nremove = of_get_child_by_name(nchangeset, "node-remove");
> -	unittest(nremove, "testcase setup failure\n");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, nremove);
>  
>  	ppadd = __of_prop_dup(&padd, GFP_KERNEL);
> -	unittest(ppadd, "testcase setup failure\n");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ppadd);
>  
>  	ppname_n1  = __of_prop_dup(&pname_n1, GFP_KERNEL);
> -	unittest(ppname_n1, "testcase setup failure\n");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ppname_n1);
>  
>  	ppname_n2  = __of_prop_dup(&pname_n2, GFP_KERNEL);
> -	unittest(ppname_n2, "testcase setup failure\n");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ppname_n2);
>  
>  	ppname_n21 = __of_prop_dup(&pname_n21, GFP_KERNEL);
> -	unittest(ppname_n21, "testcase setup failure\n");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ppname_n21);
>  
>  	ppupdate = __of_prop_dup(&pupdate, GFP_KERNEL);
> -	unittest(ppupdate, "testcase setup failure\n");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ppupdate);
>  
>  	parent = nchangeset;
>  	n1->parent = parent;
> @@ -745,54 +838,72 @@ static void __init of_unittest_changeset(void)
>  	n21->parent = n2;
>  
>  	ppremove = of_find_property(parent, "prop-remove", NULL);
> -	unittest(ppremove, "failed to find removal prop");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ppremove);
>  
>  	of_changeset_init(&chgset);
>  
> -	unittest(!of_changeset_attach_node(&chgset, n1), "fail attach n1\n");
> -	unittest(!of_changeset_add_property(&chgset, n1, ppname_n1), "fail add prop name\n");
> -
> -	unittest(!of_changeset_attach_node(&chgset, n2), "fail attach n2\n");
> -	unittest(!of_changeset_add_property(&chgset, n2, ppname_n2), "fail add prop name\n");
> -
> -	unittest(!of_changeset_detach_node(&chgset, nremove), "fail remove node\n");
> -	unittest(!of_changeset_add_property(&chgset, n21, ppname_n21), "fail add prop name\n");
> -
> -	unittest(!of_changeset_attach_node(&chgset, n21), "fail attach n21\n");
> -
> -	unittest(!of_changeset_add_property(&chgset, parent, ppadd), "fail add prop prop-add\n");
> -	unittest(!of_changeset_update_property(&chgset, parent, ppupdate), "fail update prop\n");
> -	unittest(!of_changeset_remove_property(&chgset, parent, ppremove), "fail remove prop\n");
> -
> -	unittest(!of_changeset_apply(&chgset), "apply failed\n");
> +	KUNIT_EXPECT_FALSE_MSG(test, of_changeset_attach_node(&chgset, n1),
> +			       "fail attach n1\n");
> +	KUNIT_EXPECT_FALSE_MSG(
> +		test, of_changeset_add_property(&chgset, n1, ppname_n1),
> +		"fail add prop name\n");
> +
> +	KUNIT_EXPECT_FALSE_MSG(test, of_changeset_attach_node(&chgset, n2),
> +			       "fail attach n2\n");
> +	KUNIT_EXPECT_FALSE_MSG(
> +		test, of_changeset_add_property(&chgset, n2, ppname_n2),
> +		"fail add prop name\n");
> +
> +	KUNIT_EXPECT_FALSE_MSG(test, of_changeset_detach_node(&chgset, nremove),
> +			       "fail remove node\n");
> +	KUNIT_EXPECT_FALSE_MSG(
> +		test, of_changeset_add_property(&chgset, n21, ppname_n21),
> +		"fail add prop name\n");
> +
> +	KUNIT_EXPECT_FALSE_MSG(test, of_changeset_attach_node(&chgset, n21),
> +			       "fail attach n21\n");
> +
> +	KUNIT_EXPECT_FALSE_MSG(
> +		test,
> +		of_changeset_add_property(&chgset, parent, ppadd),
> +		"fail add prop prop-add\n");
> +	KUNIT_EXPECT_FALSE_MSG(
> +		test,
> +		of_changeset_update_property(&chgset, parent, ppupdate),
> +		"fail update prop\n");
> +	KUNIT_EXPECT_FALSE_MSG(
> +		test,
> +		of_changeset_remove_property(&chgset, parent, ppremove),
> +		"fail remove prop\n");
> +
> +	KUNIT_EXPECT_FALSE_MSG(test, of_changeset_apply(&chgset),
> +			       "apply failed\n");
>  
>  	of_node_put(nchangeset);
>  
>  	/* Make sure node names are constructed correctly */
> -	unittest((np = of_find_node_by_path("/testcase-data/changeset/n2/n21")),
> -		 "'%pOF' not added\n", n21);
> +	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(
> +		test,
> +		np = of_find_node_by_path("/testcase-data/changeset/n2/n21"),
> +		"'%pOF' not added\n", n21);
>  	of_node_put(np);
>  
> -	unittest(!of_changeset_revert(&chgset), "revert failed\n");
> +	KUNIT_EXPECT_FALSE(test, of_changeset_revert(&chgset));
>  
>  	of_changeset_destroy(&chgset);
>  #endif
>  }
>  
> -static void __init of_unittest_parse_interrupts(void)
> +static void of_unittest_parse_interrupts(struct kunit *test)
>  {
>  	struct device_node *np;
>  	struct of_phandle_args args;
>  	int i, rc;
>  
> -	if (of_irq_workarounds & OF_IMAP_OLDWORLD_MAC)
> -		return;
> +	KUNIT_ASSERT_FALSE(test, of_irq_workarounds & OF_IMAP_OLDWORLD_MAC);
>  
>  	np = of_find_node_by_path("/testcase-data/interrupts/interrupts0");
> -	if (!np) {
> -		pr_err("missing testcase data\n");
> -		return;
> -	}
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>  
>  	for (i = 0; i < 4; i++) {
>  		bool passed = true;
> @@ -804,16 +915,15 @@ static void __init of_unittest_parse_interrupts(void)
>  		passed &= (args.args_count == 1);
>  		passed &= (args.args[0] == (i + 1));
>  
> -		unittest(passed, "index %i - data error on node %pOF rc=%i\n",
> -			 i, args.np, rc);
> +		KUNIT_EXPECT_TRUE_MSG(
> +			test, passed,
> +			"index %i - data error on node %pOF rc=%i\n",
> +			i, args.np, rc);
>  	}
>  	of_node_put(np);
>  
>  	np = of_find_node_by_path("/testcase-data/interrupts/interrupts1");
> -	if (!np) {
> -		pr_err("missing testcase data\n");
> -		return;
> -	}
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>  
>  	for (i = 0; i < 4; i++) {
>  		bool passed = true;
> @@ -850,26 +960,24 @@ static void __init of_unittest_parse_interrupts(void)
>  		default:
>  			passed = false;
>  		}
> -		unittest(passed, "index %i - data error on node %pOF rc=%i\n",
> -			 i, args.np, rc);
> +		KUNIT_EXPECT_TRUE_MSG(
> +			test, passed,
> +			"index %i - data error on node %pOF rc=%i\n",
> +			i, args.np, rc);
>  	}
>  	of_node_put(np);
>  }
>  
> -static void __init of_unittest_parse_interrupts_extended(void)
> +static void of_unittest_parse_interrupts_extended(struct kunit *test)
>  {
>  	struct device_node *np;
>  	struct of_phandle_args args;
>  	int i, rc;
>  
> -	if (of_irq_workarounds & OF_IMAP_OLDWORLD_MAC)
> -		return;
> +	KUNIT_ASSERT_FALSE(test, of_irq_workarounds & OF_IMAP_OLDWORLD_MAC);
>  
>  	np = of_find_node_by_path("/testcase-data/interrupts/interrupts-extended0");
> -	if (!np) {
> -		pr_err("missing testcase data\n");
> -		return;
> -	}
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>  
>  	for (i = 0; i < 7; i++) {
>  		bool passed = true;
> @@ -924,8 +1032,10 @@ static void __init of_unittest_parse_interrupts_extended(void)
>  			passed = false;
>  		}
>  
> -		unittest(passed, "index %i - data error on node %pOF rc=%i\n",
> -			 i, args.np, rc);
> +		KUNIT_EXPECT_TRUE_MSG(
> +			test, passed,
> +			"index %i - data error on node %pOF rc=%i\n",
> +			i, args.np, rc);
>  	}
>  	of_node_put(np);
>  }
> @@ -965,7 +1075,7 @@ static struct {
>  	{ .path = "/testcase-data/match-node/name9", .data = "K", },
>  };
>  
> -static void __init of_unittest_match_node(void)
> +static void of_unittest_match_node(struct kunit *test)
>  {
>  	struct device_node *np;
>  	const struct of_device_id *match;
> @@ -973,26 +1083,19 @@ static void __init of_unittest_match_node(void)
>  
>  	for (i = 0; i < ARRAY_SIZE(match_node_tests); i++) {
>  		np = of_find_node_by_path(match_node_tests[i].path);
> -		if (!np) {
> -			unittest(0, "missing testcase node %s\n",
> -				match_node_tests[i].path);
> -			continue;
> -		}
> +		KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>  
>  		match = of_match_node(match_node_table, np);
> -		if (!match) {
> -			unittest(0, "%s didn't match anything\n",
> -				match_node_tests[i].path);
> -			continue;
> -		}
> +		KUNIT_ASSERT_NOT_ERR_OR_NULL_MSG(test, match,
> +						 "%s didn't match anything",
> +						 match_node_tests[i].path);
>  
> -		if (strcmp(match->data, match_node_tests[i].data) != 0) {
> -			unittest(0, "%s got wrong match. expected %s, got %s\n",
> -				match_node_tests[i].path, match_node_tests[i].data,
> -				(const char *)match->data);
> -			continue;
> -		}
> -		unittest(1, "passed");
> +		KUNIT_EXPECT_STREQ_MSG(
> +			test,
> +			match->data, match_node_tests[i].data,
> +			"%s got wrong match. expected %s, got %s\n",
> +			match_node_tests[i].path, match_node_tests[i].data,
> +			(const char *)match->data);
>  	}
>  }
>  
> @@ -1004,9 +1107,9 @@ static struct resource test_bus_res = {
>  static const struct platform_device_info test_bus_info = {
>  	.name = "unittest-bus",
>  };
> -static void __init of_unittest_platform_populate(void)
> +static void of_unittest_platform_populate(struct kunit *test)
>  {
> -	int irq, rc;
> +	int irq;
>  	struct device_node *np, *child, *grandchild;
>  	struct platform_device *pdev, *test_bus;
>  	const struct of_device_id match[] = {
> @@ -1020,32 +1123,27 @@ static void __init of_unittest_platform_populate(void)
>  	/* Test that a missing irq domain returns -EPROBE_DEFER */
>  	np = of_find_node_by_path("/testcase-data/testcase-device1");
>  	pdev = of_find_device_by_node(np);
> -	unittest(pdev, "device 1 creation failed\n");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, pdev);
>  
>  	if (!(of_irq_workarounds & OF_IMAP_OLDWORLD_MAC)) {
>  		irq = platform_get_irq(pdev, 0);
> -		unittest(irq == -EPROBE_DEFER,
> -			 "device deferred probe failed - %d\n", irq);
> +		KUNIT_ASSERT_EQ(test, irq, -EPROBE_DEFER);
>  
>  		/* Test that a parsing failure does not return -EPROBE_DEFER */
>  		np = of_find_node_by_path("/testcase-data/testcase-device2");
>  		pdev = of_find_device_by_node(np);
> -		unittest(pdev, "device 2 creation failed\n");
> +		KUNIT_ASSERT_NOT_ERR_OR_NULL(test, pdev);
>  		irq = platform_get_irq(pdev, 0);
> -		unittest(irq < 0 && irq != -EPROBE_DEFER,
> -			 "device parsing error failed - %d\n", irq);
> +		KUNIT_ASSERT_TRUE_MSG(test, irq < 0 && irq != -EPROBE_DEFER,
> +				      "device parsing error failed - %d\n",
> +				      irq);
>  	}
>  
>  	np = of_find_node_by_path("/testcase-data/platform-tests");
> -	unittest(np, "No testcase data in device tree\n");
> -	if (!np)
> -		return;
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>  
>  	test_bus = platform_device_register_full(&test_bus_info);
> -	rc = PTR_ERR_OR_ZERO(test_bus);
> -	unittest(!rc, "testbus registration failed; rc=%i\n", rc);
> -	if (rc)
> -		return;
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, test_bus);
>  	test_bus->dev.of_node = np;
>  
>  	/*
> @@ -1060,17 +1158,19 @@ static void __init of_unittest_platform_populate(void)
>  	of_platform_populate(np, match, NULL, &test_bus->dev);
>  	for_each_child_of_node(np, child) {
>  		for_each_child_of_node(child, grandchild)
> -			unittest(of_find_device_by_node(grandchild),
> -				 "Could not create device for node '%pOFn'\n",
> -				 grandchild);
> +			KUNIT_EXPECT_TRUE_MSG(
> +				test, of_find_device_by_node(grandchild),
> +				"Could not create device for node '%pOFn'\n",
> +				grandchild);
>  	}
>  
>  	of_platform_depopulate(&test_bus->dev);
>  	for_each_child_of_node(np, child) {
>  		for_each_child_of_node(child, grandchild)
> -			unittest(!of_find_device_by_node(grandchild),
> -				 "device didn't get destroyed '%pOFn'\n",
> -				 grandchild);
> +			KUNIT_EXPECT_FALSE_MSG(
> +				test, of_find_device_by_node(grandchild),
> +				"device didn't get destroyed '%pOFn'\n",
> +				grandchild);
>  	}
>  
>  	platform_device_unregister(test_bus);
> @@ -1171,7 +1271,7 @@ static void attach_node_and_children(struct device_node *np)
>   *	unittest_data_add - Reads, copies data from
>   *	linked tree and attaches it to the live tree
>   */
> -static int __init unittest_data_add(void)
> +static int unittest_data_add(void)
>  {
>  	void *unittest_data;
>  	struct device_node *unittest_data_node, *np;
> @@ -1242,7 +1342,7 @@ static int __init unittest_data_add(void)
>  }
>  
>  #ifdef CONFIG_OF_OVERLAY
> -static int __init overlay_data_apply(const char *overlay_name, int *overlay_id);
> +static int overlay_data_apply(const char *overlay_name, int *overlay_id);
>  
>  static int unittest_probe(struct platform_device *pdev)
>  {
> @@ -1471,172 +1571,146 @@ static void of_unittest_destroy_tracked_overlays(void)
>  	} while (defers > 0);
>  }
>  
> -static int __init of_unittest_apply_overlay(int overlay_nr, int *overlay_id)
> +static int of_unittest_apply_overlay(struct kunit *test,
> +				     int overlay_nr,
> +				     int *overlay_id)
>  {
>  	const char *overlay_name;
>  
>  	overlay_name = overlay_name_from_nr(overlay_nr);
>  
> -	if (!overlay_data_apply(overlay_name, overlay_id)) {
> -		unittest(0, "could not apply overlay \"%s\"\n",
> -				overlay_name);
> -		return -EFAULT;
> -	}
> +	KUNIT_ASSERT_TRUE_MSG(test,
> +			      overlay_data_apply(overlay_name, overlay_id),
> +			      "could not apply overlay \"%s\"\n", overlay_name);
>  	of_unittest_track_overlay(*overlay_id);
>  
>  	return 0;
>  }
>  
>  /* apply an overlay while checking before and after states */
> -static int __init of_unittest_apply_overlay_check(int overlay_nr,
> +static int of_unittest_apply_overlay_check(struct kunit *test, int overlay_nr,
>  		int unittest_nr, int before, int after,
>  		enum overlay_type ovtype)
>  {
>  	int ret, ovcs_id;
>  
>  	/* unittest device must not be in before state */
> -	if (of_unittest_device_exists(unittest_nr, ovtype) != before) {
> -		unittest(0, "%s with device @\"%s\" %s\n",
> -				overlay_name_from_nr(overlay_nr),
> -				unittest_path(unittest_nr, ovtype),
> -				!before ? "enabled" : "disabled");
> -		return -EINVAL;
> -	}
> +	KUNIT_ASSERT_EQ_MSG(
> +		test, of_unittest_device_exists(unittest_nr, ovtype), before,
> +		"%s with device @\"%s\" %s\n", overlay_name_from_nr(overlay_nr),
> +		unittest_path(unittest_nr, ovtype),
> +		!before ? "enabled" : "disabled");
>  
>  	ovcs_id = 0;
> -	ret = of_unittest_apply_overlay(overlay_nr, &ovcs_id);
> +	ret = of_unittest_apply_overlay(test, overlay_nr, &ovcs_id);
>  	if (ret != 0) {
> -		/* of_unittest_apply_overlay already called unittest() */
> +		/* of_unittest_apply_overlay already set expectation */
>  		return ret;
>  	}
>  
>  	/* unittest device must be to set to after state */
> -	if (of_unittest_device_exists(unittest_nr, ovtype) != after) {
> -		unittest(0, "%s failed to create @\"%s\" %s\n",
> -				overlay_name_from_nr(overlay_nr),
> -				unittest_path(unittest_nr, ovtype),
> -				!after ? "enabled" : "disabled");
> -		return -EINVAL;
> -	}
> +	KUNIT_ASSERT_EQ_MSG(
> +		test, of_unittest_device_exists(unittest_nr, ovtype), after,
> +		"%s failed to create @\"%s\" %s\n",
> +		overlay_name_from_nr(overlay_nr),
> +		unittest_path(unittest_nr, ovtype),
> +		!after ? "enabled" : "disabled");
>  
>  	return 0;
>  }
>  
>  /* apply an overlay and then revert it while checking before, after states */
> -static int __init of_unittest_apply_revert_overlay_check(int overlay_nr,
> +static int of_unittest_apply_revert_overlay_check(
> +		struct kunit *test, int overlay_nr,
>  		int unittest_nr, int before, int after,
>  		enum overlay_type ovtype)
>  {
>  	int ret, ovcs_id;
>  
>  	/* unittest device must be in before state */
> -	if (of_unittest_device_exists(unittest_nr, ovtype) != before) {
> -		unittest(0, "%s with device @\"%s\" %s\n",
> -				overlay_name_from_nr(overlay_nr),
> -				unittest_path(unittest_nr, ovtype),
> -				!before ? "enabled" : "disabled");
> -		return -EINVAL;
> -	}
> +	KUNIT_ASSERT_EQ_MSG(
> +		test, of_unittest_device_exists(unittest_nr, ovtype), before,
> +		"%s with device @\"%s\" %s\n", overlay_name_from_nr(overlay_nr),
> +		unittest_path(unittest_nr, ovtype),
> +		!before ? "enabled" : "disabled");
>  
>  	/* apply the overlay */
>  	ovcs_id = 0;
> -	ret = of_unittest_apply_overlay(overlay_nr, &ovcs_id);
> +	ret = of_unittest_apply_overlay(test, overlay_nr, &ovcs_id);
>  	if (ret != 0) {
> -		/* of_unittest_apply_overlay already called unittest() */
> +		/* of_unittest_apply_overlay already set expectation. */
>  		return ret;
>  	}
>  
>  	/* unittest device must be in after state */
> -	if (of_unittest_device_exists(unittest_nr, ovtype) != after) {
> -		unittest(0, "%s failed to create @\"%s\" %s\n",
> -				overlay_name_from_nr(overlay_nr),
> -				unittest_path(unittest_nr, ovtype),
> -				!after ? "enabled" : "disabled");
> -		return -EINVAL;
> -	}
> -
> -	ret = of_overlay_remove(&ovcs_id);
> -	if (ret != 0) {
> -		unittest(0, "%s failed to be destroyed @\"%s\"\n",
> -				overlay_name_from_nr(overlay_nr),
> -				unittest_path(unittest_nr, ovtype));
> -		return ret;
> -	}
> +	KUNIT_ASSERT_EQ_MSG(
> +		test, of_unittest_device_exists(unittest_nr, ovtype), after,
> +		"%s failed to create @\"%s\" %s\n",
> +		overlay_name_from_nr(overlay_nr),
> +		unittest_path(unittest_nr, ovtype),
> +		!after ? "enabled" : "disabled");
> +
> +	KUNIT_ASSERT_EQ_MSG(test, of_overlay_remove(&ovcs_id), 0,
> +			    "%s failed to be destroyed @\"%s\"\n",
> +			    overlay_name_from_nr(overlay_nr),
> +			    unittest_path(unittest_nr, ovtype));
>  
>  	/* unittest device must be again in before state */
> -	if (of_unittest_device_exists(unittest_nr, PDEV_OVERLAY) != before) {
> -		unittest(0, "%s with device @\"%s\" %s\n",
> -				overlay_name_from_nr(overlay_nr),
> -				unittest_path(unittest_nr, ovtype),
> -				!before ? "enabled" : "disabled");
> -		return -EINVAL;
> -	}
> +	KUNIT_ASSERT_EQ_MSG(
> +		test,
> +		of_unittest_device_exists(unittest_nr, PDEV_OVERLAY), before,
> +		"%s with device @\"%s\" %s\n",
> +		overlay_name_from_nr(overlay_nr),
> +		unittest_path(unittest_nr, ovtype),
> +		!before ? "enabled" : "disabled");
>  
>  	return 0;
>  }
>  
>  /* test activation of device */
> -static void __init of_unittest_overlay_0(void)
> +static void of_unittest_overlay_0(struct kunit *test)
>  {
>  	/* device should enable */
> -	if (of_unittest_apply_overlay_check(0, 0, 0, 1, PDEV_OVERLAY))
> -		return;
> -
> -	unittest(1, "overlay test %d passed\n", 0);
> +	of_unittest_apply_overlay_check(test, 0, 0, 0, 1, PDEV_OVERLAY);
>  }
>  
>  /* test deactivation of device */
> -static void __init of_unittest_overlay_1(void)
> +static void of_unittest_overlay_1(struct kunit *test)
>  {
>  	/* device should disable */
> -	if (of_unittest_apply_overlay_check(1, 1, 1, 0, PDEV_OVERLAY))
> -		return;
> -
> -	unittest(1, "overlay test %d passed\n", 1);
> +	of_unittest_apply_overlay_check(test, 1, 1, 1, 0, PDEV_OVERLAY);
>  }
>  
>  /* test activation of device */
> -static void __init of_unittest_overlay_2(void)
> +static void of_unittest_overlay_2(struct kunit *test)
>  {
>  	/* device should enable */
> -	if (of_unittest_apply_overlay_check(2, 2, 0, 1, PDEV_OVERLAY))
> -		return;
> -
> -	unittest(1, "overlay test %d passed\n", 2);
> +	of_unittest_apply_overlay_check(test, 2, 2, 0, 1, PDEV_OVERLAY);
>  }
>  
>  /* test deactivation of device */
> -static void __init of_unittest_overlay_3(void)
> +static void of_unittest_overlay_3(struct kunit *test)
>  {
>  	/* device should disable */
> -	if (of_unittest_apply_overlay_check(3, 3, 1, 0, PDEV_OVERLAY))
> -		return;
> -
> -	unittest(1, "overlay test %d passed\n", 3);
> +	of_unittest_apply_overlay_check(test, 3, 3, 1, 0, PDEV_OVERLAY);
>  }
>  
>  /* test activation of a full device node */
> -static void __init of_unittest_overlay_4(void)
> +static void of_unittest_overlay_4(struct kunit *test)
>  {
>  	/* device should disable */
> -	if (of_unittest_apply_overlay_check(4, 4, 0, 1, PDEV_OVERLAY))
> -		return;
> -
> -	unittest(1, "overlay test %d passed\n", 4);
> +	of_unittest_apply_overlay_check(test, 4, 4, 0, 1, PDEV_OVERLAY);
>  }
>  
>  /* test overlay apply/revert sequence */
> -static void __init of_unittest_overlay_5(void)
> +static void of_unittest_overlay_5(struct kunit *test)
>  {
>  	/* device should disable */
> -	if (of_unittest_apply_revert_overlay_check(5, 5, 0, 1, PDEV_OVERLAY))
> -		return;
> -
> -	unittest(1, "overlay test %d passed\n", 5);
> +	of_unittest_apply_revert_overlay_check(test, 5, 5, 0, 1, PDEV_OVERLAY);
>  }
>  
>  /* test overlay application in sequence */
> -static void __init of_unittest_overlay_6(void)
> +static void of_unittest_overlay_6(struct kunit *test)
>  {
>  	int i, ov_id[2], ovcs_id;
>  	int overlay_nr = 6, unittest_nr = 6;
> @@ -1645,74 +1719,67 @@ static void __init of_unittest_overlay_6(void)
>  
>  	/* unittest device must be in before state */
>  	for (i = 0; i < 2; i++) {
> -		if (of_unittest_device_exists(unittest_nr + i, PDEV_OVERLAY)
> -				!= before) {
> -			unittest(0, "%s with device @\"%s\" %s\n",
> -					overlay_name_from_nr(overlay_nr + i),
> -					unittest_path(unittest_nr + i,
> -						PDEV_OVERLAY),
> -					!before ? "enabled" : "disabled");
> -			return;
> -		}
> +		KUNIT_ASSERT_EQ_MSG(test,
> +				    of_unittest_device_exists(unittest_nr + i,
> +							      PDEV_OVERLAY),
> +				    before,
> +				    "%s with device @\"%s\" %s\n",
> +				    overlay_name_from_nr(overlay_nr + i),
> +				    unittest_path(unittest_nr + i,
> +						  PDEV_OVERLAY),
> +				    !before ? "enabled" : "disabled");
>  	}
>  
>  	/* apply the overlays */
>  	for (i = 0; i < 2; i++) {
> -
>  		overlay_name = overlay_name_from_nr(overlay_nr + i);
>  
> -		if (!overlay_data_apply(overlay_name, &ovcs_id)) {
> -			unittest(0, "could not apply overlay \"%s\"\n",
> -					overlay_name);
> -			return;
> -		}
> +		KUNIT_ASSERT_TRUE_MSG(
> +			test, overlay_data_apply(overlay_name, &ovcs_id),
> +			"could not apply overlay \"%s\"\n", overlay_name);
>  		ov_id[i] = ovcs_id;
>  		of_unittest_track_overlay(ov_id[i]);
>  	}
>  
>  	for (i = 0; i < 2; i++) {
>  		/* unittest device must be in after state */
> -		if (of_unittest_device_exists(unittest_nr + i, PDEV_OVERLAY)
> -				!= after) {
> -			unittest(0, "overlay @\"%s\" failed @\"%s\" %s\n",
> -					overlay_name_from_nr(overlay_nr + i),
> -					unittest_path(unittest_nr + i,
> -						PDEV_OVERLAY),
> -					!after ? "enabled" : "disabled");
> -			return;
> -		}
> +		KUNIT_ASSERT_EQ_MSG(test,
> +				    of_unittest_device_exists(unittest_nr + i,
> +							      PDEV_OVERLAY),
> +				    after,
> +				    "overlay @\"%s\" failed @\"%s\" %s\n",
> +				    overlay_name_from_nr(overlay_nr + i),
> +				    unittest_path(unittest_nr + i,
> +						  PDEV_OVERLAY),
> +				    !after ? "enabled" : "disabled");
>  	}
>  
>  	for (i = 1; i >= 0; i--) {
>  		ovcs_id = ov_id[i];
> -		if (of_overlay_remove(&ovcs_id)) {
> -			unittest(0, "%s failed destroy @\"%s\"\n",
> -					overlay_name_from_nr(overlay_nr + i),
> -					unittest_path(unittest_nr + i,
> -						PDEV_OVERLAY));
> -			return;
> -		}
> +		KUNIT_ASSERT_FALSE_MSG(
> +			test, of_overlay_remove(&ovcs_id),
> +			"%s failed destroy @\"%s\"\n",
> +			overlay_name_from_nr(overlay_nr + i),
> +			unittest_path(unittest_nr + i, PDEV_OVERLAY));
>  		of_unittest_untrack_overlay(ov_id[i]);
>  	}
>  
>  	for (i = 0; i < 2; i++) {
>  		/* unittest device must be again in before state */
> -		if (of_unittest_device_exists(unittest_nr + i, PDEV_OVERLAY)
> -				!= before) {
> -			unittest(0, "%s with device @\"%s\" %s\n",
> -					overlay_name_from_nr(overlay_nr + i),
> -					unittest_path(unittest_nr + i,
> -						PDEV_OVERLAY),
> -					!before ? "enabled" : "disabled");
> -			return;
> -		}
> +		KUNIT_ASSERT_EQ_MSG(test,
> +				    of_unittest_device_exists(unittest_nr + i,
> +							      PDEV_OVERLAY),
> +				    before,
> +				    "%s with device @\"%s\" %s\n",
> +				    overlay_name_from_nr(overlay_nr + i),
> +				    unittest_path(unittest_nr + i,
> +						  PDEV_OVERLAY),
> +				    !before ? "enabled" : "disabled");
>  	}
> -
> -	unittest(1, "overlay test %d passed\n", 6);
>  }
>  
>  /* test overlay application in sequence */
> -static void __init of_unittest_overlay_8(void)
> +static void of_unittest_overlay_8(struct kunit *test)
>  {
>  	int i, ov_id[2], ovcs_id;
>  	int overlay_nr = 8, unittest_nr = 8;
> @@ -1722,76 +1789,64 @@ static void __init of_unittest_overlay_8(void)
>  
>  	/* apply the overlays */
>  	for (i = 0; i < 2; i++) {
> -
>  		overlay_name = overlay_name_from_nr(overlay_nr + i);
>  
> -		if (!overlay_data_apply(overlay_name, &ovcs_id)) {
> -			unittest(0, "could not apply overlay \"%s\"\n",
> -					overlay_name);
> -			return;
> -		}
> +		KUNIT_ASSERT_TRUE_MSG(
> +			test, overlay_data_apply(overlay_name, &ovcs_id),
> +			"could not apply overlay \"%s\"\n", overlay_name);
>  		ov_id[i] = ovcs_id;
>  		of_unittest_track_overlay(ov_id[i]);
>  	}
>  
>  	/* now try to remove first overlay (it should fail) */
>  	ovcs_id = ov_id[0];
> -	if (!of_overlay_remove(&ovcs_id)) {
> -		unittest(0, "%s was destroyed @\"%s\"\n",
> -				overlay_name_from_nr(overlay_nr + 0),
> -				unittest_path(unittest_nr,
> -					PDEV_OVERLAY));
> -		return;
> -	}
> +	KUNIT_ASSERT_TRUE_MSG(
> +		test, of_overlay_remove(&ovcs_id),
> +		"%s was destroyed @\"%s\"\n",
> +		overlay_name_from_nr(overlay_nr + 0),
> +		unittest_path(unittest_nr, PDEV_OVERLAY));
>  
>  	/* removing them in order should work */
>  	for (i = 1; i >= 0; i--) {
>  		ovcs_id = ov_id[i];
> -		if (of_overlay_remove(&ovcs_id)) {
> -			unittest(0, "%s not destroyed @\"%s\"\n",
> -					overlay_name_from_nr(overlay_nr + i),
> -					unittest_path(unittest_nr,
> -						PDEV_OVERLAY));
> -			return;
> -		}
> +		KUNIT_ASSERT_FALSE_MSG(
> +			test, of_overlay_remove(&ovcs_id),
> +			"%s not destroyed @\"%s\"\n",
> +			overlay_name_from_nr(overlay_nr + i),
> +			unittest_path(unittest_nr, PDEV_OVERLAY));
>  		of_unittest_untrack_overlay(ov_id[i]);
>  	}
> -
> -	unittest(1, "overlay test %d passed\n", 8);
>  }
>  
>  /* test insertion of a bus with parent devices */
> -static void __init of_unittest_overlay_10(void)
> +static void of_unittest_overlay_10(struct kunit *test)
>  {
> -	int ret;
>  	char *child_path;
>  
>  	/* device should disable */
> -	ret = of_unittest_apply_overlay_check(10, 10, 0, 1, PDEV_OVERLAY);
> -	if (unittest(ret == 0,
> -			"overlay test %d failed; overlay application\n", 10))
> -		return;
> +	KUNIT_ASSERT_EQ_MSG(
> +		test,
> +		of_unittest_apply_overlay_check(
> +				test, 10, 10, 0, 1, PDEV_OVERLAY),
> +		0,
> +		"overlay test %d failed; overlay application\n", 10);
>  
>  	child_path = kasprintf(GFP_KERNEL, "%s/test-unittest101",
>  			unittest_path(10, PDEV_OVERLAY));
> -	if (unittest(child_path, "overlay test %d failed; kasprintf\n", 10))
> -		return;
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, child_path);
>  
> -	ret = of_path_device_type_exists(child_path, PDEV_OVERLAY);
> +	KUNIT_EXPECT_TRUE_MSG(
> +		test, of_path_device_type_exists(child_path, PDEV_OVERLAY),
> +		"overlay test %d failed; no child device\n", 10);
>  	kfree(child_path);
> -
> -	unittest(ret, "overlay test %d failed; no child device\n", 10);
>  }
>  
>  /* test insertion of a bus with parent devices (and revert) */
> -static void __init of_unittest_overlay_11(void)
> +static void of_unittest_overlay_11(struct kunit *test)
>  {
> -	int ret;
> -
>  	/* device should disable */
> -	ret = of_unittest_apply_revert_overlay_check(11, 11, 0, 1,
> -			PDEV_OVERLAY);
> -	unittest(ret == 0, "overlay test %d failed; overlay apply\n", 11);
> +	KUNIT_EXPECT_FALSE(test, of_unittest_apply_revert_overlay_check(
> +		test, 11, 11, 0, 1, PDEV_OVERLAY));
>  }
>  
>  #if IS_BUILTIN(CONFIG_I2C) && IS_ENABLED(CONFIG_OF_OVERLAY)
> @@ -2013,25 +2068,18 @@ static struct i2c_driver unittest_i2c_mux_driver = {
>  
>  #endif
>  
> -static int of_unittest_overlay_i2c_init(void)
> +static int of_unittest_overlay_i2c_init(struct kunit *test)
>  {
> -	int ret;
> -
> -	ret = i2c_add_driver(&unittest_i2c_dev_driver);
> -	if (unittest(ret == 0,
> -			"could not register unittest i2c device driver\n"))
> -		return ret;
> +	KUNIT_ASSERT_EQ_MSG(test, i2c_add_driver(&unittest_i2c_dev_driver), 0,
> +			    "could not register unittest i2c device driver\n");
>  
> -	ret = platform_driver_register(&unittest_i2c_bus_driver);
> -	if (unittest(ret == 0,
> -			"could not register unittest i2c bus driver\n"))
> -		return ret;
> +	KUNIT_ASSERT_EQ_MSG(
> +		test, platform_driver_register(&unittest_i2c_bus_driver), 0,
> +		"could not register unittest i2c bus driver\n");
>  
>  #if IS_BUILTIN(CONFIG_I2C_MUX)
> -	ret = i2c_add_driver(&unittest_i2c_mux_driver);
> -	if (unittest(ret == 0,
> -			"could not register unittest i2c mux driver\n"))
> -		return ret;
> +	KUNIT_ASSERT_EQ_MSG(test, i2c_add_driver(&unittest_i2c_mux_driver), 0,
> +			    "could not register unittest i2c mux driver\n");
>  #endif
>  
>  	return 0;
> @@ -2046,101 +2094,85 @@ static void of_unittest_overlay_i2c_cleanup(void)
>  	i2c_del_driver(&unittest_i2c_dev_driver);
>  }
>  
> -static void __init of_unittest_overlay_i2c_12(void)
> +static void of_unittest_overlay_i2c_12(struct kunit *test)
>  {
>  	/* device should enable */
> -	if (of_unittest_apply_overlay_check(12, 12, 0, 1, I2C_OVERLAY))
> -		return;
> -
> -	unittest(1, "overlay test %d passed\n", 12);
> +	of_unittest_apply_overlay_check(test, 12, 12, 0, 1, I2C_OVERLAY);
>  }
>  
>  /* test deactivation of device */
> -static void __init of_unittest_overlay_i2c_13(void)
> +static void of_unittest_overlay_i2c_13(struct kunit *test)
>  {
>  	/* device should disable */
> -	if (of_unittest_apply_overlay_check(13, 13, 1, 0, I2C_OVERLAY))
> -		return;
> -
> -	unittest(1, "overlay test %d passed\n", 13);
> +	of_unittest_apply_overlay_check(test, 13, 13, 1, 0, I2C_OVERLAY);
>  }
>  
>  /* just check for i2c mux existence */
> -static void of_unittest_overlay_i2c_14(void)
> +static void of_unittest_overlay_i2c_14(struct kunit *test)
>  {
> +	KUNIT_SUCCEED(test);
>  }
>  
> -static void __init of_unittest_overlay_i2c_15(void)
> +static void of_unittest_overlay_i2c_15(struct kunit *test)
>  {
>  	/* device should enable */
> -	if (of_unittest_apply_overlay_check(15, 15, 0, 1, I2C_OVERLAY))
> -		return;
> -
> -	unittest(1, "overlay test %d passed\n", 15);
> +	of_unittest_apply_overlay_check(test, 15, 15, 0, 1, I2C_OVERLAY);
>  }
>  
>  #else
>  
> -static inline void of_unittest_overlay_i2c_14(void) { }
> -static inline void of_unittest_overlay_i2c_15(void) { }
> +static inline void of_unittest_overlay_i2c_14(struct kunit *test) { }
> +static inline void of_unittest_overlay_i2c_15(struct kunit *test) { }
>  
>  #endif
>  
> -static void __init of_unittest_overlay(void)
> +static void of_unittest_overlay(struct kunit *test)
>  {
>  	struct device_node *bus_np = NULL;
>  
> -	if (platform_driver_register(&unittest_driver)) {
> -		unittest(0, "could not register unittest driver\n");
> -		goto out;
> -	}
> +	KUNIT_ASSERT_FALSE_MSG(test, platform_driver_register(&unittest_driver),
> +			       "could not register unittest driver\n");
>  
>  	bus_np = of_find_node_by_path(bus_path);
> -	if (bus_np == NULL) {
> -		unittest(0, "could not find bus_path \"%s\"\n", bus_path);
> -		goto out;
> -	}
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL_MSG(
> +		test, bus_np, "could not find bus_path \"%s\"\n", bus_path);
>  
> -	if (of_platform_default_populate(bus_np, NULL, NULL)) {
> -		unittest(0, "could not populate bus @ \"%s\"\n", bus_path);
> -		goto out;
> -	}
> -
> -	if (!of_unittest_device_exists(100, PDEV_OVERLAY)) {
> -		unittest(0, "could not find unittest0 @ \"%s\"\n",
> -				unittest_path(100, PDEV_OVERLAY));
> -		goto out;
> -	}
> +	KUNIT_ASSERT_FALSE_MSG(
> +		test, of_platform_default_populate(bus_np, NULL, NULL),
> +		"could not populate bus @ \"%s\"\n", bus_path);
>  
> -	if (of_unittest_device_exists(101, PDEV_OVERLAY)) {
> -		unittest(0, "unittest1 @ \"%s\" should not exist\n",
> -				unittest_path(101, PDEV_OVERLAY));
> -		goto out;
> -	}
> +	KUNIT_ASSERT_TRUE_MSG(
> +		test, of_unittest_device_exists(100, PDEV_OVERLAY),
> +		"could not find unittest0 @ \"%s\"\n",
> +		unittest_path(100, PDEV_OVERLAY));
>  
> -	unittest(1, "basic infrastructure of overlays passed");
> +	KUNIT_ASSERT_FALSE_MSG(
> +		test, of_unittest_device_exists(101, PDEV_OVERLAY),
> +		"unittest1 @ \"%s\" should not exist\n",
> +		unittest_path(101, PDEV_OVERLAY));
>  
>  	/* tests in sequence */
> -	of_unittest_overlay_0();
> -	of_unittest_overlay_1();
> -	of_unittest_overlay_2();
> -	of_unittest_overlay_3();
> -	of_unittest_overlay_4();
> -	of_unittest_overlay_5();
> -	of_unittest_overlay_6();
> -	of_unittest_overlay_8();
> -
> -	of_unittest_overlay_10();
> -	of_unittest_overlay_11();
> +	of_unittest_overlay_0(test);
> +	of_unittest_overlay_1(test);
> +	of_unittest_overlay_2(test);
> +	of_unittest_overlay_3(test);
> +	of_unittest_overlay_4(test);
> +	of_unittest_overlay_5(test);
> +	of_unittest_overlay_6(test);
> +	of_unittest_overlay_8(test);
> +
> +	of_unittest_overlay_10(test);
> +	of_unittest_overlay_11(test);
>  
>  #if IS_BUILTIN(CONFIG_I2C)
> -	if (unittest(of_unittest_overlay_i2c_init() == 0, "i2c init failed\n"))
> -		goto out;
> +	KUNIT_ASSERT_EQ_MSG(test, of_unittest_overlay_i2c_init(test), 0,
> +			    "i2c init failed\n");
>  
> -	of_unittest_overlay_i2c_12();
> -	of_unittest_overlay_i2c_13();
> -	of_unittest_overlay_i2c_14();
> -	of_unittest_overlay_i2c_15();
> +	of_unittest_overlay_i2c_12(test);
> +	of_unittest_overlay_i2c_13(test);
> +	of_unittest_overlay_i2c_14(test);
> +	of_unittest_overlay_i2c_15(test);
>  
>  	of_unittest_overlay_i2c_cleanup();
>  #endif
> @@ -2152,7 +2184,7 @@ static void __init of_unittest_overlay(void)
>  }
>  
>  #else
> -static inline void __init of_unittest_overlay(void) { }
> +static inline void of_unittest_overlay(struct kunit *test) { }
>  #endif
>  
>  #ifdef CONFIG_OF_OVERLAY
> @@ -2313,7 +2345,7 @@ void __init unittest_unflatten_overlay_base(void)
>   *
>   * Return 0 on unexpected error.
>   */
> -static int __init overlay_data_apply(const char *overlay_name, int *overlay_id)
> +static int overlay_data_apply(const char *overlay_name, int *overlay_id)
>  {
>  	struct overlay_info *info;
>  	int found = 0;
> @@ -2359,19 +2391,17 @@ static int __init overlay_data_apply(const char *overlay_name, int *overlay_id)
>   * The first part of the function is _not_ normal overlay usage; it is
>   * finishing splicing the base overlay device tree into the live tree.
>   */
> -static __init void of_unittest_overlay_high_level(void)
> +static void of_unittest_overlay_high_level(struct kunit *test)
>  {
>  	struct device_node *last_sibling;
>  	struct device_node *np;
>  	struct device_node *of_symbols;
> -	struct device_node *overlay_base_symbols;
> +	struct device_node *overlay_base_symbols = NULL;
>  	struct device_node **pprev;
>  	struct property *prop;
>  
> -	if (!overlay_base_root) {
> -		unittest(0, "overlay_base_root not initialized\n");
> -		return;
> -	}
> +	KUNIT_ASSERT_TRUE_MSG(test, overlay_base_root,
> +			      "overlay_base_root not initialized\n");
>  
>  	/*
>  	 * Could not fixup phandles in unittest_unflatten_overlay_base()
> @@ -2418,11 +2448,9 @@ static __init void of_unittest_overlay_high_level(void)
>  	for_each_child_of_node(overlay_base_root, np) {
>  		struct device_node *base_child;
>  		for_each_child_of_node(of_root, base_child) {
> -			if (!strcmp(np->full_name, base_child->full_name)) {
> -				unittest(0, "illegal node name in overlay_base %pOFn",
> -					 np);
> -				return;
> -			}
> +			KUNIT_ASSERT_STRNEQ_MSG(
> +				test, np->full_name, base_child->full_name,
> +				"illegal node name in overlay_base %pOFn", np);
>  		}
>  	}
>  
> @@ -2456,21 +2484,24 @@ static __init void of_unittest_overlay_high_level(void)
>  
>  			new_prop = __of_prop_dup(prop, GFP_KERNEL);
>  			if (!new_prop) {
> -				unittest(0, "__of_prop_dup() of '%s' from overlay_base node __symbols__",
> -					 prop->name);
> +				KUNIT_FAIL(test,
> +					   "__of_prop_dup() of '%s' from overlay_base node __symbols__",
> +					   prop->name);
>  				goto err_unlock;
>  			}
>  			if (__of_add_property(of_symbols, new_prop)) {
>  				/* "name" auto-generated by unflatten */
>  				if (!strcmp(new_prop->name, "name"))
>  					continue;
> -				unittest(0, "duplicate property '%s' in overlay_base node __symbols__",
> -					 prop->name);
> +				KUNIT_FAIL(test,
> +					   "duplicate property '%s' in overlay_base node __symbols__",
> +					   prop->name);
>  				goto err_unlock;
>  			}
>  			if (__of_add_property_sysfs(of_symbols, new_prop)) {
> -				unittest(0, "unable to add property '%s' in overlay_base node __symbols__ to sysfs",
> -					 prop->name);
> +				KUNIT_FAIL(test,
> +					   "unable to add property '%s' in overlay_base node __symbols__ to sysfs",
> +					   prop->name);
>  				goto err_unlock;
>  			}
>  		}
> @@ -2481,20 +2512,24 @@ static __init void of_unittest_overlay_high_level(void)
>  
>  	/* now do the normal overlay usage test */
>  
> -	unittest(overlay_data_apply("overlay", NULL),
> -		 "Adding overlay 'overlay' failed\n");
> +	KUNIT_EXPECT_TRUE_MSG(test, overlay_data_apply("overlay", NULL),
> +			      "Adding overlay 'overlay' failed\n");
>  
> -	unittest(overlay_data_apply("overlay_bad_add_dup_node", NULL),
> -		 "Adding overlay 'overlay_bad_add_dup_node' failed\n");
> +	KUNIT_EXPECT_TRUE_MSG(
> +		test, overlay_data_apply("overlay_bad_add_dup_node", NULL),
> +		"Adding overlay 'overlay_bad_add_dup_node' failed\n");
>  
> -	unittest(overlay_data_apply("overlay_bad_add_dup_prop", NULL),
> -		 "Adding overlay 'overlay_bad_add_dup_prop' failed\n");
> +	KUNIT_EXPECT_TRUE_MSG(
> +		test, overlay_data_apply("overlay_bad_add_dup_prop", NULL),
> +		"Adding overlay 'overlay_bad_add_dup_prop' failed\n");
>  
> -	unittest(overlay_data_apply("overlay_bad_phandle", NULL),
> -		 "Adding overlay 'overlay_bad_phandle' failed\n");
> +	KUNIT_EXPECT_TRUE_MSG(
> +		test, overlay_data_apply("overlay_bad_phandle", NULL),
> +		"Adding overlay 'overlay_bad_phandle' failed\n");
>  
> -	unittest(overlay_data_apply("overlay_bad_symbol", NULL),
> -		 "Adding overlay 'overlay_bad_symbol' failed\n");
> +	KUNIT_EXPECT_TRUE_MSG(
> +		test, overlay_data_apply("overlay_bad_symbol", NULL),
> +		"Adding overlay 'overlay_bad_symbol' failed\n");
>  
>  	return;
>  
> @@ -2504,57 +2539,52 @@ static __init void of_unittest_overlay_high_level(void)
>  
>  #else
>  
> -static inline __init void of_unittest_overlay_high_level(void) {}
> +static inline void of_unittest_overlay_high_level(struct kunit *test) {}
>  
>  #endif
>  
> -static int __init of_unittest(void)
> +static int of_test_init(struct kunit *test)
>  {
> -	struct device_node *np;
> -	int res;
> -
>  	/* adding data for unittest */
> -	res = unittest_data_add();
> -	if (res)
> -		return res;
> +	KUNIT_ASSERT_EQ(test, 0, unittest_data_add());
> +
>  	if (!of_aliases)
>  		of_aliases = of_find_node_by_path("/aliases");
>  
> -	np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
> -	if (!np) {
> -		pr_info("No testcase data in device tree; not running tests\n");
> -		return 0;
> -	}
> -	of_node_put(np);
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, of_find_node_by_path(
> +		"/testcase-data/phandle-tests/consumer-a"));
>  
>  	if (IS_ENABLED(CONFIG_UML))
>  		unflatten_device_tree();
>  
> -	pr_info("start of unittest - you will see error messages\n");
> -	of_unittest_check_tree_linkage();
> -	of_unittest_check_phandles();
> -	of_unittest_find_node_by_name();
> -	of_unittest_dynamic();
> -	of_unittest_parse_phandle_with_args();
> -	of_unittest_parse_phandle_with_args_map();
> -	of_unittest_printf();
> -	of_unittest_property_string();
> -	of_unittest_property_copy();
> -	of_unittest_changeset();
> -	of_unittest_parse_interrupts();
> -	of_unittest_parse_interrupts_extended();
> -	of_unittest_match_node();
> -	of_unittest_platform_populate();
> -	of_unittest_overlay();
> +	return 0;
> +}
>  
> +static struct kunit_case of_test_cases[] = {
> +	KUNIT_CASE(of_unittest_check_tree_linkage),
> +	KUNIT_CASE(of_unittest_check_phandles),
> +	KUNIT_CASE(of_unittest_find_node_by_name),
> +	KUNIT_CASE(of_unittest_dynamic),
> +	KUNIT_CASE(of_unittest_parse_phandle_with_args),
> +	KUNIT_CASE(of_unittest_parse_phandle_with_args_map),
> +	KUNIT_CASE(of_unittest_printf),
> +	KUNIT_CASE(of_unittest_property_string),
> +	KUNIT_CASE(of_unittest_property_copy),
> +	KUNIT_CASE(of_unittest_changeset),
> +	KUNIT_CASE(of_unittest_parse_interrupts),
> +	KUNIT_CASE(of_unittest_parse_interrupts_extended),
> +	KUNIT_CASE(of_unittest_match_node),
> +	KUNIT_CASE(of_unittest_platform_populate),
> +	KUNIT_CASE(of_unittest_overlay),
>  	/* Double check linkage after removing testcase data */
> -	of_unittest_check_tree_linkage();
> -
> -	of_unittest_overlay_high_level();
> -
> -	pr_info("end of unittest - %i passed, %i failed\n",
> -		unittest_results.passed, unittest_results.failed);
> +	KUNIT_CASE(of_unittest_check_tree_linkage),
> +	KUNIT_CASE(of_unittest_overlay_high_level),
> +	{},
> +};
>  
> -	return 0;
> -}
> -late_initcall(of_unittest);
> +static struct kunit_module of_test_module = {
> +	.name = "of-test",
> +	.init = of_test_init,
> +	.test_cases = of_test_cases,
> +};
> +module_test(of_test_module);
> 

^ permalink raw reply	[flat|nested] 316+ messages in thread

* Re: [RFC v4 15/17] of: unittest: migrate tests to run on KUnit
@ 2019-02-16  0:24         ` Frank Rowand
  0 siblings, 0 replies; 316+ messages in thread
From: Frank Rowand @ 2019-02-16  0:24 UTC (permalink / raw)
  To: Brendan Higgins, keescook, mcgrof, shuah, robh, kieran.bingham
  Cc: brakmo, pmladek, amir73il, dri-devel, Alexander.Levin,
	linux-kselftest, linux-nvdimm, richard, knut.omang, wfg, joel,
	jdike, dan.carpenter, devicetree, Tim.Bird, linux-um, rostedt,
	julia.lawall, dan.j.williams, kunit-dev, gregkh, linux-kernel,
	daniel, mpe, joe, khilman

On 2/14/19 1:37 PM, Brendan Higgins wrote:
> Migrate tests, without any cleanup or modifying test logic in any way,
> to run under KUnit using the KUnit expectation and assertion API.
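
The conversion pattern is mechanical: a check that used to bump the
pass/fail counters in the old unittest() macro becomes a KUnit assertion
(which aborts the test case on failure) or expectation (which records the
failure and continues), each taking the struct kunit context, and the test
functions are collected into a kunit_case table registered via
module_test(), as the end of the diff shows. A minimal sketch of that
shape, using placeholder names (example_case, example_module) rather than
code taken from the patch:

  #include <kunit/test.h>

  /*
   * Sketch only; the values below stand in for real return codes and
   * parsed strings from drivers/of/unittest.c.
   */
  static void example_case(struct kunit *test)
  {
  	int rc = 0;			/* e.g. an of_property_read_*() result */
  	const char *name = "foobar";	/* e.g. a string read from a property */

  	/* was: unittest(rc == 0 && !strcmp(name, "foobar"), "...", rc); */
  	KUNIT_ASSERT_EQ(test, rc, 0);
  	KUNIT_EXPECT_STREQ(test, name, "foobar");
  }

  static struct kunit_case example_cases[] = {
  	KUNIT_CASE(example_case),
  	{},
  };

  static struct kunit_module example_module = {
  	.name = "example-test",
  	.test_cases = example_cases,
  };
  module_test(example_module);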
> 
> Signed-off-by: Brendan Higgins <brendanhiggins@google.com>
> ---
>  drivers/of/Kconfig    |    1 +
>  drivers/of/unittest.c | 1310 +++++++++++++++++++++--------------------
>  2 files changed, 671 insertions(+), 640 deletions(-)
> 
> diff --git a/drivers/of/Kconfig b/drivers/of/Kconfig
> index ad3fcad4d75b8..f309399deac20 100644
> --- a/drivers/of/Kconfig
> +++ b/drivers/of/Kconfig
> @@ -15,6 +15,7 @@ if OF
>  config OF_UNITTEST
>  	bool "Device Tree runtime unit tests"
>  	depends on !SPARC
> +	depends on KUNIT
>  	select IRQ_DOMAIN
>  	select OF_EARLY_FLATTREE
>  	select OF_RESOLVE
> diff --git a/drivers/of/unittest.c b/drivers/of/unittest.c

These comments are from applying the patches to 5.0-rc3.

The final hunk of this patch fails to apply because it depends upon

   [PATCH v1 0/1] of: unittest: unflatten device tree on UML when testing.

If I apply that patch then I can apply patches 15 through 17.

If I apply patches 1 through 14 and boot the uml kernel then the devicetree
unittest result is:

  ### dt-test ### FAIL of_unittest_overlay_high_level():2372 overlay_base_root not initialized
  ### dt-test ### end of unittest - 219 passed, 1 failed

This is as expected from your previous reports, and is fixed after
applying

   [PATCH v1 0/1] of: unittest: unflatten device tree on UML when testing.

with the devicetree unittest result of:

   ### dt-test ### end of unittest - 224 passed, 0 failed

After adding patch 15, there are a lot of "unittest internal error" messages.

-Frank


> index effa4e2b9d992..96de69ccb3e63 100644
> --- a/drivers/of/unittest.c
> +++ b/drivers/of/unittest.c
> @@ -26,186 +26,189 @@
>  
>  #include <linux/bitops.h>
>  
> +#include <kunit/test.h>
> +
>  #include "of_private.h"
>  
> -static struct unittest_results {
> -	int passed;
> -	int failed;
> -} unittest_results;
> -
> -#define unittest(result, fmt, ...) ({ \
> -	bool failed = !(result); \
> -	if (failed) { \
> -		unittest_results.failed++; \
> -		pr_err("FAIL %s():%i " fmt, __func__, __LINE__, ##__VA_ARGS__); \
> -	} else { \
> -		unittest_results.passed++; \
> -		pr_debug("pass %s():%i\n", __func__, __LINE__); \
> -	} \
> -	failed; \
> -})
> -
> -static void __init of_unittest_find_node_by_name(void)
> +static void of_unittest_find_node_by_name(struct kunit *test)
>  {
>  	struct device_node *np;
>  	const char *options, *name;
>  
>  	np = of_find_node_by_path("/testcase-data");
>  	name = kasprintf(GFP_KERNEL, "%pOF", np);
> -	unittest(np && !strcmp("/testcase-data", name),
> -		"find /testcase-data failed\n");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> +	KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
> +			       "find /testcase-data failed\n");
>  	of_node_put(np);
>  	kfree(name);
>  
>  	/* Test if trailing '/' works */
> -	np = of_find_node_by_path("/testcase-data/");
> -	unittest(!np, "trailing '/' on /testcase-data/ should fail\n");
> +	KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("/testcase-data/"), NULL,
> +			    "trailing '/' on /testcase-data/ should fail\n");
>  
>  	np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>  	name = kasprintf(GFP_KERNEL, "%pOF", np);
> -	unittest(np && !strcmp("/testcase-data/phandle-tests/consumer-a", name),
> +	KUNIT_EXPECT_STREQ_MSG(
> +		test, "/testcase-data/phandle-tests/consumer-a", name,
>  		"find /testcase-data/phandle-tests/consumer-a failed\n");
>  	of_node_put(np);
>  	kfree(name);
>  
>  	np = of_find_node_by_path("testcase-alias");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>  	name = kasprintf(GFP_KERNEL, "%pOF", np);
> -	unittest(np && !strcmp("/testcase-data", name),
> -		"find testcase-alias failed\n");
> +	KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
> +			       "find testcase-alias failed\n");
>  	of_node_put(np);
>  	kfree(name);
>  
>  	/* Test if trailing '/' works on aliases */
> -	np = of_find_node_by_path("testcase-alias/");
> -	unittest(!np, "trailing '/' on testcase-alias/ should fail\n");
> +	KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("testcase-alias/"), NULL,
> +			    "trailing '/' on testcase-alias/ should fail\n");
>  
>  	np = of_find_node_by_path("testcase-alias/phandle-tests/consumer-a");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>  	name = kasprintf(GFP_KERNEL, "%pOF", np);
> -	unittest(np && !strcmp("/testcase-data/phandle-tests/consumer-a", name),
> +	KUNIT_EXPECT_STREQ_MSG(
> +		test, "/testcase-data/phandle-tests/consumer-a", name,
>  		"find testcase-alias/phandle-tests/consumer-a failed\n");
>  	of_node_put(np);
>  	kfree(name);
>  
> -	np = of_find_node_by_path("/testcase-data/missing-path");
> -	unittest(!np, "non-existent path returned node %pOF\n", np);
> +	KUNIT_EXPECT_EQ_MSG(
> +		test,
> +		np = of_find_node_by_path("/testcase-data/missing-path"), NULL,
> +		"non-existent path returned node %pOF\n", np);
>  	of_node_put(np);
>  
> -	np = of_find_node_by_path("missing-alias");
> -	unittest(!np, "non-existent alias returned node %pOF\n", np);
> +	KUNIT_EXPECT_EQ_MSG(
> +		test, np = of_find_node_by_path("missing-alias"), NULL,
> +		"non-existent alias returned node %pOF\n", np);
>  	of_node_put(np);
>  
> -	np = of_find_node_by_path("testcase-alias/missing-path");
> -	unittest(!np, "non-existent alias with relative path returned node %pOF\n", np);
> +	KUNIT_EXPECT_EQ_MSG(
> +		test,
> +		np = of_find_node_by_path("testcase-alias/missing-path"), NULL,
> +		"non-existent alias with relative path returned node %pOF\n",
> +		np);
>  	of_node_put(np);
>  
>  	np = of_find_node_opts_by_path("/testcase-data:testoption", &options);
> -	unittest(np && !strcmp("testoption", options),
> -		 "option path test failed\n");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> +	KUNIT_EXPECT_STREQ_MSG(test, "testoption", options,
> +			       "option path test failed\n");
>  	of_node_put(np);
>  
>  	np = of_find_node_opts_by_path("/testcase-data:test/option", &options);
> -	unittest(np && !strcmp("test/option", options),
> -		 "option path test, subcase #1 failed\n");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> +	KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
> +			       "option path test, subcase #1 failed\n");
>  	of_node_put(np);
>  
>  	np = of_find_node_opts_by_path("/testcase-data/testcase-device1:test/option", &options);
> -	unittest(np && !strcmp("test/option", options),
> -		 "option path test, subcase #2 failed\n");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> +	KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
> +			       "option path test, subcase #2 failed\n");
>  	of_node_put(np);
>  
>  	np = of_find_node_opts_by_path("/testcase-data:testoption", NULL);
> -	unittest(np, "NULL option path test failed\n");
> +	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(test, np,
> +					 "NULL option path test failed\n");
>  	of_node_put(np);
>  
>  	np = of_find_node_opts_by_path("testcase-alias:testaliasoption",
>  				       &options);
> -	unittest(np && !strcmp("testaliasoption", options),
> -		 "option alias path test failed\n");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> +	KUNIT_EXPECT_STREQ_MSG(test, "testaliasoption", options,
> +			       "option alias path test failed\n");
>  	of_node_put(np);
>  
>  	np = of_find_node_opts_by_path("testcase-alias:test/alias/option",
>  				       &options);
> -	unittest(np && !strcmp("test/alias/option", options),
> -		 "option alias path test, subcase #1 failed\n");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> +	KUNIT_EXPECT_STREQ_MSG(test, "test/alias/option", options,
> +			       "option alias path test, subcase #1 failed\n");
>  	of_node_put(np);
>  
>  	np = of_find_node_opts_by_path("testcase-alias:testaliasoption", NULL);
> -	unittest(np, "NULL option alias path test failed\n");
> +	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(
> +			test, np, "NULL option alias path test failed\n");
>  	of_node_put(np);
>  
>  	options = "testoption";
>  	np = of_find_node_opts_by_path("testcase-alias", &options);
> -	unittest(np && !options, "option clearing test failed\n");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> +	KUNIT_EXPECT_EQ_MSG(test, options, NULL,
> +			    "option clearing test failed\n");
>  	of_node_put(np);
>  
>  	options = "testoption";
>  	np = of_find_node_opts_by_path("/", &options);
> -	unittest(np && !options, "option clearing root node test failed\n");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> +	KUNIT_EXPECT_EQ_MSG(test, options, NULL,
> +			    "option clearing root node test failed\n");
>  	of_node_put(np);
>  }
>  
> -static void __init of_unittest_dynamic(void)
> +static void of_unittest_dynamic(struct kunit *test)
>  {
>  	struct device_node *np;
>  	struct property *prop;
>  
>  	np = of_find_node_by_path("/testcase-data");
> -	if (!np) {
> -		pr_err("missing testcase data\n");
> -		return;
> -	}
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>  
>  	/* Array of 4 properties for the purpose of testing */
>  	prop = kcalloc(4, sizeof(*prop), GFP_KERNEL);
> -	if (!prop) {
> -		unittest(0, "kzalloc() failed\n");
> -		return;
> -	}
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, prop);
>  
>  	/* Add a new property - should pass*/
>  	prop->name = "new-property";
>  	prop->value = "new-property-data";
>  	prop->length = strlen(prop->value) + 1;
> -	unittest(of_add_property(np, prop) == 0, "Adding a new property failed\n");
> +	KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop), 0,
> +			    "Adding a new property failed\n");
>  
>  	/* Try to add an existing property - should fail */
>  	prop++;
>  	prop->name = "new-property";
>  	prop->value = "new-property-data-should-fail";
>  	prop->length = strlen(prop->value) + 1;
> -	unittest(of_add_property(np, prop) != 0,
> -		 "Adding an existing property should have failed\n");
> +	KUNIT_EXPECT_NE_MSG(test, of_add_property(np, prop), 0,
> +			    "Adding an existing property should have failed\n");
>  
>  	/* Try to modify an existing property - should pass */
>  	prop->value = "modify-property-data-should-pass";
>  	prop->length = strlen(prop->value) + 1;
> -	unittest(of_update_property(np, prop) == 0,
> -		 "Updating an existing property should have passed\n");
> +	KUNIT_EXPECT_EQ_MSG(
> +		test, of_update_property(np, prop), 0,
> +		"Updating an existing property should have passed\n");
>  
>  	/* Try to modify non-existent property - should pass*/
>  	prop++;
>  	prop->name = "modify-property";
>  	prop->value = "modify-missing-property-data-should-pass";
>  	prop->length = strlen(prop->value) + 1;
> -	unittest(of_update_property(np, prop) == 0,
> -		 "Updating a missing property should have passed\n");
> +	KUNIT_EXPECT_EQ_MSG(test, of_update_property(np, prop), 0,
> +			    "Updating a missing property should have passed\n");
>  
>  	/* Remove property - should pass */
> -	unittest(of_remove_property(np, prop) == 0,
> -		 "Removing a property should have passed\n");
> +	KUNIT_EXPECT_EQ_MSG(test, of_remove_property(np, prop), 0,
> +			    "Removing a property should have passed\n");
>  
>  	/* Adding very large property - should pass */
>  	prop++;
>  	prop->name = "large-property-PAGE_SIZEx8";
>  	prop->length = PAGE_SIZE * 8;
>  	prop->value = kzalloc(prop->length, GFP_KERNEL);
> -	unittest(prop->value != NULL, "Unable to allocate large buffer\n");
> -	if (prop->value)
> -		unittest(of_add_property(np, prop) == 0,
> -			 "Adding a large property should have passed\n");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, prop->value);
> +	KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop), 0,
> +			    "Adding a large property should have passed\n");
>  }
>  
> -static int __init of_unittest_check_node_linkage(struct device_node *np)
> +static int of_unittest_check_node_linkage(struct device_node *np)
>  {
>  	struct device_node *child;
>  	int count = 0, rc;
> @@ -230,27 +233,30 @@ static int __init of_unittest_check_node_linkage(struct device_node *np)
>  	return rc;
>  }
>  
> -static void __init of_unittest_check_tree_linkage(void)
> +static void of_unittest_check_tree_linkage(struct kunit *test)
>  {
>  	struct device_node *np;
>  	int allnode_count = 0, child_count;
>  
> -	if (!of_root)
> -		return;
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, of_root);
>  
>  	for_each_of_allnodes(np)
>  		allnode_count++;
>  	child_count = of_unittest_check_node_linkage(of_root);
>  
> -	unittest(child_count > 0, "Device node data structure is corrupted\n");
> -	unittest(child_count == allnode_count,
> -		 "allnodes list size (%i) doesn't match sibling lists size (%i)\n",
> -		 allnode_count, child_count);
> +	KUNIT_EXPECT_GT_MSG(test, child_count, 0,
> +			    "Device node data structure is corrupted\n");
> +	KUNIT_EXPECT_EQ_MSG(
> +		test, child_count, allnode_count,
> +		"allnodes list size (%i) doesn't match sibling lists size (%i)\n",
> +		allnode_count, child_count);
>  	pr_debug("allnodes list size (%i); sibling lists size (%i)\n", allnode_count, child_count);
>  }
>  
> -static void __init of_unittest_printf_one(struct device_node *np, const char *fmt,
> -					  const char *expected)
> +static void of_unittest_printf_one(struct kunit *test,
> +				   struct device_node *np,
> +				   const char *fmt,
> +				   const char *expected)
>  {
>  	unsigned char *buf;
>  	int buf_size;
> @@ -265,8 +271,12 @@ static void __init of_unittest_printf_one(struct device_node *np, const char *fm
>  	memset(buf, 0xff, buf_size);
>  	size = snprintf(buf, buf_size - 2, fmt, np);
>  
> -	/* use strcmp() instead of strncmp() here to be absolutely sure strings match */
> -	unittest((strcmp(buf, expected) == 0) && (buf[size+1] == 0xff),
> +	KUNIT_EXPECT_STREQ_MSG(
> +		test, buf, expected,
> +		"sprintf failed; fmt='%s' expected='%s' rslt='%s'\n",
> +		fmt, expected, buf);
> +	KUNIT_EXPECT_EQ_MSG(
> +		test, buf[size+1], 0xff,
>  		"sprintf failed; fmt='%s' expected='%s' rslt='%s'\n",
>  		fmt, expected, buf);
>  
> @@ -276,44 +286,49 @@ static void __init of_unittest_printf_one(struct device_node *np, const char *fm
>  		/* Clear the buffer, and make sure it works correctly still */
>  		memset(buf, 0xff, buf_size);
>  		snprintf(buf, size+1, fmt, np);
> -		unittest(strncmp(buf, expected, size) == 0 && (buf[size+1] == 0xff),
> +		KUNIT_EXPECT_STREQ_MSG(
> +			test, buf, expected,
> +			"snprintf failed; size=%i fmt='%s' expected='%s' rslt='%s'\n",
> +			size, fmt, expected, buf);
> +		KUNIT_EXPECT_EQ_MSG(
> +			test, buf[size+1], 0xff,
>  			"snprintf failed; size=%i fmt='%s' expected='%s' rslt='%s'\n",
>  			size, fmt, expected, buf);
>  	}
>  	kfree(buf);
>  }
>  
> -static void __init of_unittest_printf(void)
> +static void of_unittest_printf(struct kunit *test)
>  {
>  	struct device_node *np;
>  	const char *full_name = "/testcase-data/platform-tests/test-device@1/dev@100";
>  	char phandle_str[16] = "";
>  
>  	np = of_find_node_by_path(full_name);
> -	if (!np) {
> -		unittest(np, "testcase data missing\n");
> -		return;
> -	}
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>  
>  	num_to_str(phandle_str, sizeof(phandle_str), np->phandle, 0);
>  
> -	of_unittest_printf_one(np, "%pOF",  full_name);
> -	of_unittest_printf_one(np, "%pOFf", full_name);
> -	of_unittest_printf_one(np, "%pOFn", "dev");
> -	of_unittest_printf_one(np, "%2pOFn", "dev");
> -	of_unittest_printf_one(np, "%5pOFn", "  dev");
> -	of_unittest_printf_one(np, "%pOFnc", "dev:test-sub-device");
> -	of_unittest_printf_one(np, "%pOFp", phandle_str);
> -	of_unittest_printf_one(np, "%pOFP", "dev@100");
> -	of_unittest_printf_one(np, "ABC %pOFP ABC", "ABC dev@100 ABC");
> -	of_unittest_printf_one(np, "%10pOFP", "   dev@100");
> -	of_unittest_printf_one(np, "%-10pOFP", "dev@100   ");
> -	of_unittest_printf_one(of_root, "%pOFP", "/");
> -	of_unittest_printf_one(np, "%pOFF", "----");
> -	of_unittest_printf_one(np, "%pOFPF", "dev@100:----");
> -	of_unittest_printf_one(np, "%pOFPFPc", "dev@100:----:dev@100:test-sub-device");
> -	of_unittest_printf_one(np, "%pOFc", "test-sub-device");
> -	of_unittest_printf_one(np, "%pOFC",
> +	of_unittest_printf_one(test, np, "%pOF",  full_name);
> +	of_unittest_printf_one(test, np, "%pOFf", full_name);
> +	of_unittest_printf_one(test, np, "%pOFn", "dev");
> +	of_unittest_printf_one(test, np, "%2pOFn", "dev");
> +	of_unittest_printf_one(test, np, "%5pOFn", "  dev");
> +	of_unittest_printf_one(test, np, "%pOFnc", "dev:test-sub-device");
> +	of_unittest_printf_one(test, np, "%pOFp", phandle_str);
> +	of_unittest_printf_one(test, np, "%pOFP", "dev@100");
> +	of_unittest_printf_one(test, np, "ABC %pOFP ABC", "ABC dev@100 ABC");
> +	of_unittest_printf_one(test, np, "%10pOFP", "   dev@100");
> +	of_unittest_printf_one(test, np, "%-10pOFP", "dev@100   ");
> +	of_unittest_printf_one(test, of_root, "%pOFP", "/");
> +	of_unittest_printf_one(test, np, "%pOFF", "----");
> +	of_unittest_printf_one(test, np, "%pOFPF", "dev@100:----");
> +	of_unittest_printf_one(test,
> +			       np,
> +			       "%pOFPFPc",
> +			       "dev@100:----:dev@100:test-sub-device");
> +	of_unittest_printf_one(test, np, "%pOFc", "test-sub-device");
> +	of_unittest_printf_one(test, np, "%pOFC",
>  			"\"test-sub-device\",\"test-compat2\",\"test-compat3\"");
>  }
>  
> @@ -323,7 +338,7 @@ struct node_hash {
>  };
>  
>  static DEFINE_HASHTABLE(phandle_ht, 8);
> -static void __init of_unittest_check_phandles(void)
> +static void of_unittest_check_phandles(struct kunit *test)
>  {
>  	struct device_node *np;
>  	struct node_hash *nh;
> @@ -335,24 +350,26 @@ static void __init of_unittest_check_phandles(void)
>  			continue;
>  
>  		hash_for_each_possible(phandle_ht, nh, node, np->phandle) {
> +			KUNIT_EXPECT_NE_MSG(
> +				test, nh->np->phandle, np->phandle,
> +				"Duplicate phandle! %i used by %pOF and %pOF\n",
> +				np->phandle, nh->np, np);
>  			if (nh->np->phandle == np->phandle) {
> -				pr_info("Duplicate phandle! %i used by %pOF and %pOF\n",
> -					np->phandle, nh->np, np);
>  				dup_count++;
>  				break;
>  			}
>  		}
>  
>  		nh = kzalloc(sizeof(*nh), GFP_KERNEL);
> -		if (WARN_ON(!nh))
> -			return;
> +		KUNIT_ASSERT_NOT_ERR_OR_NULL(test, nh);
>  
>  		nh->np = np;
>  		hash_add(phandle_ht, &nh->node, np->phandle);
>  		phandle_count++;
>  	}
> -	unittest(dup_count == 0, "Found %i duplicates in %i phandles\n",
> -		 dup_count, phandle_count);
> +	KUNIT_EXPECT_EQ_MSG(test, dup_count, 0,
> +			    "Found %i duplicates in %i phandles\n",
> +			    dup_count, phandle_count);
>  
>  	/* Clean up */
>  	hash_for_each_safe(phandle_ht, i, tmp, nh, node) {
> @@ -361,20 +378,21 @@ static void __init of_unittest_check_phandles(void)
>  	}
>  }
>  
> -static void __init of_unittest_parse_phandle_with_args(void)
> +static void of_unittest_parse_phandle_with_args(struct kunit *test)
>  {
>  	struct device_node *np;
>  	struct of_phandle_args args;
> -	int i, rc;
> +	int i, rc = 0;
>  
>  	np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
> -	if (!np) {
> -		pr_err("missing testcase data\n");
> -		return;
> -	}
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>  
> -	rc = of_count_phandle_with_args(np, "phandle-list", "#phandle-cells");
> -	unittest(rc == 7, "of_count_phandle_with_args() returned %i, expected 7\n", rc);
> +	KUNIT_EXPECT_EQ_MSG(
> +		test,
> +		of_count_phandle_with_args(
> +			np, "phandle-list", "#phandle-cells"),
> +		7,
> +		"of_count_phandle_with_args() should have returned 7\n");
>  
>  	for (i = 0; i < 8; i++) {
>  		bool passed = true;
> @@ -428,85 +446,91 @@ static void __init of_unittest_parse_phandle_with_args(void)
>  			passed = false;
>  		}
>  
> -		unittest(passed, "index %i - data error on node %pOF rc=%i\n",
> -			 i, args.np, rc);
> +		KUNIT_EXPECT_TRUE_MSG(
> +			test, passed,
> +			"index %i - data error on node %pOF rc=%i\n",
> +			i, args.np, rc);
>  	}
>  
>  	/* Check for missing list property */
>  	memset(&args, 0, sizeof(args));
> -	rc = of_parse_phandle_with_args(np, "phandle-list-missing",
> -					"#phandle-cells", 0, &args);
> -	unittest(rc == -ENOENT, "expected:%i got:%i\n", -ENOENT, rc);
> -	rc = of_count_phandle_with_args(np, "phandle-list-missing",
> -					"#phandle-cells");
> -	unittest(rc == -ENOENT, "expected:%i got:%i\n", -ENOENT, rc);
> +	KUNIT_EXPECT_EQ(
> +		test,
> +		of_parse_phandle_with_args(
> +			np, "phandle-list-missing", "#phandle-cells", 0, &args),
> +		-ENOENT);
> +	KUNIT_EXPECT_EQ(
> +		test,
> +		of_count_phandle_with_args(
> +			np, "phandle-list-missing", "#phandle-cells"),
> +		-ENOENT);
>  
>  	/* Check for missing cells property */
>  	memset(&args, 0, sizeof(args));
> -	rc = of_parse_phandle_with_args(np, "phandle-list",
> -					"#phandle-cells-missing", 0, &args);
> -	unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
> -	rc = of_count_phandle_with_args(np, "phandle-list",
> -					"#phandle-cells-missing");
> -	unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
> +	KUNIT_EXPECT_EQ(
> +		test,
> +		of_parse_phandle_with_args(
> +			np, "phandle-list", "#phandle-cells-missing", 0, &args),
> +		-EINVAL);
> +	KUNIT_EXPECT_EQ(
> +		test,
> +		of_count_phandle_with_args(
> +			np, "phandle-list", "#phandle-cells-missing"),
> +		-EINVAL);
>  
>  	/* Check for bad phandle in list */
>  	memset(&args, 0, sizeof(args));
> -	rc = of_parse_phandle_with_args(np, "phandle-list-bad-phandle",
> -					"#phandle-cells", 0, &args);
> -	unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
> -	rc = of_count_phandle_with_args(np, "phandle-list-bad-phandle",
> -					"#phandle-cells");
> -	unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
> +	KUNIT_EXPECT_EQ(
> +		test,
> +		of_parse_phandle_with_args(np, "phandle-list-bad-phandle",
> +					   "#phandle-cells", 0, &args),
> +		-EINVAL);
> +	KUNIT_EXPECT_EQ(
> +		test,
> +		of_count_phandle_with_args(
> +			np, "phandle-list-bad-phandle", "#phandle-cells"),
> +		-EINVAL);
>  
>  	/* Check for incorrectly formed argument list */
>  	memset(&args, 0, sizeof(args));
> -	rc = of_parse_phandle_with_args(np, "phandle-list-bad-args",
> -					"#phandle-cells", 1, &args);
> -	unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
> -	rc = of_count_phandle_with_args(np, "phandle-list-bad-args",
> -					"#phandle-cells");
> -	unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
> +	KUNIT_EXPECT_EQ(
> +		test,
> +		of_parse_phandle_with_args(np, "phandle-list-bad-args",
> +					   "#phandle-cells", 1, &args),
> +		-EINVAL);
> +	KUNIT_EXPECT_EQ(
> +		test,
> +		of_count_phandle_with_args(
> +			np, "phandle-list-bad-args", "#phandle-cells"),
> +		-EINVAL);
>  }
>  
> -static void __init of_unittest_parse_phandle_with_args_map(void)
> +static void of_unittest_parse_phandle_with_args_map(struct kunit *test)
>  {
>  	struct device_node *np, *p0, *p1, *p2, *p3;
>  	struct of_phandle_args args;
>  	int i, rc;
>  
>  	np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-b");
> -	if (!np) {
> -		pr_err("missing testcase data\n");
> -		return;
> -	}
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>  
>  	p0 = of_find_node_by_path("/testcase-data/phandle-tests/provider0");
> -	if (!p0) {
> -		pr_err("missing testcase data\n");
> -		return;
> -	}
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, p0);
>  
>  	p1 = of_find_node_by_path("/testcase-data/phandle-tests/provider1");
> -	if (!p1) {
> -		pr_err("missing testcase data\n");
> -		return;
> -	}
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, p1);
>  
>  	p2 = of_find_node_by_path("/testcase-data/phandle-tests/provider2");
> -	if (!p2) {
> -		pr_err("missing testcase data\n");
> -		return;
> -	}
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, p2);
>  
>  	p3 = of_find_node_by_path("/testcase-data/phandle-tests/provider3");
> -	if (!p3) {
> -		pr_err("missing testcase data\n");
> -		return;
> -	}
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, p3);
>  
> -	rc = of_count_phandle_with_args(np, "phandle-list", "#phandle-cells");
> -	unittest(rc == 7, "of_count_phandle_with_args() returned %i, expected 7\n", rc);
> +	KUNIT_EXPECT_EQ(test,
> +		       of_count_phandle_with_args(np,
> +						  "phandle-list",
> +						  "#phandle-cells"),
> +		       7);
>  
>  	for (i = 0; i < 8; i++) {
>  		bool passed = true;
> @@ -564,121 +588,186 @@ static void __init of_unittest_parse_phandle_with_args_map(void)
>  			passed = false;
>  		}
>  
> -		unittest(passed, "index %i - data error on node %s rc=%i\n",
> -			 i, args.np->full_name, rc);
> +		KUNIT_EXPECT_TRUE_MSG(
> +			test, passed,
> +			"index %i - data error on node %s rc=%i\n",
> +			i, (args.np ? args.np->full_name : "missing np"), rc);
>  	}
>  
>  	/* Check for missing list property */
>  	memset(&args, 0, sizeof(args));
> -	rc = of_parse_phandle_with_args_map(np, "phandle-list-missing",
> -					    "phandle", 0, &args);
> -	unittest(rc == -ENOENT, "expected:%i got:%i\n", -ENOENT, rc);
> +	KUNIT_EXPECT_EQ(
> +		test,
> +		of_parse_phandle_with_args_map(
> +			np, "phandle-list-missing", "phandle", 0, &args),
> +		-ENOENT);
>  
>  	/* Check for missing cells,map,mask property */
>  	memset(&args, 0, sizeof(args));
> -	rc = of_parse_phandle_with_args_map(np, "phandle-list",
> -					    "phandle-missing", 0, &args);
> -	unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
> +	KUNIT_EXPECT_EQ(
> +		test,
> +		of_parse_phandle_with_args_map(
> +			np, "phandle-list", "phandle-missing", 0, &args),
> +		-EINVAL);
>  
>  	/* Check for bad phandle in list */
>  	memset(&args, 0, sizeof(args));
> -	rc = of_parse_phandle_with_args_map(np, "phandle-list-bad-phandle",
> -					    "phandle", 0, &args);
> -	unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
> +	KUNIT_EXPECT_EQ(
> +		test,
> +		of_parse_phandle_with_args_map(
> +			np, "phandle-list-bad-phandle", "phandle", 0, &args),
> +		-EINVAL);
>  
>  	/* Check for incorrectly formed argument list */
>  	memset(&args, 0, sizeof(args));
> -	rc = of_parse_phandle_with_args_map(np, "phandle-list-bad-args",
> -					    "phandle", 1, &args);
> -	unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
> +	KUNIT_EXPECT_EQ(
> +		test,
> +		of_parse_phandle_with_args_map(
> +			np, "phandle-list-bad-args", "phandle", 1, &args),
> +		-EINVAL);
>  }
>  
> -static void __init of_unittest_property_string(void)
> +static void of_unittest_property_string(struct kunit *test)
>  {
>  	const char *strings[4];
>  	struct device_node *np;
>  	int rc;
>  
>  	np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
> -	if (!np) {
> -		pr_err("No testcase data in device tree\n");
> -		return;
> -	}
> -
> -	rc = of_property_match_string(np, "phandle-list-names", "first");
> -	unittest(rc == 0, "first expected:0 got:%i\n", rc);
> -	rc = of_property_match_string(np, "phandle-list-names", "second");
> -	unittest(rc == 1, "second expected:1 got:%i\n", rc);
> -	rc = of_property_match_string(np, "phandle-list-names", "third");
> -	unittest(rc == 2, "third expected:2 got:%i\n", rc);
> -	rc = of_property_match_string(np, "phandle-list-names", "fourth");
> -	unittest(rc == -ENODATA, "unmatched string; rc=%i\n", rc);
> -	rc = of_property_match_string(np, "missing-property", "blah");
> -	unittest(rc == -EINVAL, "missing property; rc=%i\n", rc);
> -	rc = of_property_match_string(np, "empty-property", "blah");
> -	unittest(rc == -ENODATA, "empty property; rc=%i\n", rc);
> -	rc = of_property_match_string(np, "unterminated-string", "blah");
> -	unittest(rc == -EILSEQ, "unterminated string; rc=%i\n", rc);
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> +
> +	KUNIT_EXPECT_EQ(
> +		test,
> +		of_property_match_string(np, "phandle-list-names", "first"),
> +		0);
> +	KUNIT_EXPECT_EQ(
> +		test,
> +		of_property_match_string(np, "phandle-list-names", "second"),
> +		1);
> +	KUNIT_EXPECT_EQ(
> +		test,
> +		of_property_match_string(np, "phandle-list-names", "third"),
> +		2);
> +	KUNIT_EXPECT_EQ_MSG(
> +		test,
> +		of_property_match_string(np, "phandle-list-names", "fourth"),
> +		-ENODATA,
> +		"unmatched string");
> +	KUNIT_EXPECT_EQ_MSG(
> +		test,
> +		of_property_match_string(np, "missing-property", "blah"),
> +		-EINVAL,
> +		"missing property");
> +	KUNIT_EXPECT_EQ_MSG(
> +		test,
> +		of_property_match_string(np, "empty-property", "blah"),
> +		-ENODATA,
> +		"empty property");
> +	KUNIT_EXPECT_EQ_MSG(
> +		test,
> +		of_property_match_string(np, "unterminated-string", "blah"),
> +		-EILSEQ,
> +		"unterminated string");
>  
>  	/* of_property_count_strings() tests */
> -	rc = of_property_count_strings(np, "string-property");
> -	unittest(rc == 1, "Incorrect string count; rc=%i\n", rc);
> -	rc = of_property_count_strings(np, "phandle-list-names");
> -	unittest(rc == 3, "Incorrect string count; rc=%i\n", rc);
> -	rc = of_property_count_strings(np, "unterminated-string");
> -	unittest(rc == -EILSEQ, "unterminated string; rc=%i\n", rc);
> -	rc = of_property_count_strings(np, "unterminated-string-list");
> -	unittest(rc == -EILSEQ, "unterminated string array; rc=%i\n", rc);
> +	KUNIT_EXPECT_EQ(test,
> +			of_property_count_strings(np, "string-property"), 1);
> +	KUNIT_EXPECT_EQ(test,
> +			of_property_count_strings(np, "phandle-list-names"), 3);
> +	KUNIT_EXPECT_EQ_MSG(
> +		test,
> +		of_property_count_strings(np, "unterminated-string"), -EILSEQ,
> +		"unterminated string");
> +	KUNIT_EXPECT_EQ_MSG(
> +		test,
> +		of_property_count_strings(np, "unterminated-string-list"),
> +		-EILSEQ,
> +		"unterminated string array");
>  
>  	/* of_property_read_string_index() tests */
>  	rc = of_property_read_string_index(np, "string-property", 0, strings);
> -	unittest(rc == 0 && !strcmp(strings[0], "foobar"), "of_property_read_string_index() failure; rc=%i\n", rc);
> +	KUNIT_ASSERT_EQ(test, rc, 0);
> +	KUNIT_EXPECT_STREQ(test, strings[0], "foobar");
> +
>  	strings[0] = NULL;
>  	rc = of_property_read_string_index(np, "string-property", 1, strings);
> -	unittest(rc == -ENODATA && strings[0] == NULL, "of_property_read_string_index() failure; rc=%i\n", rc);
> +	KUNIT_EXPECT_EQ(test, rc, -ENODATA);
> +	KUNIT_EXPECT_EQ(test, strings[0], NULL);
> +
>  	rc = of_property_read_string_index(np, "phandle-list-names", 0, strings);
> -	unittest(rc == 0 && !strcmp(strings[0], "first"), "of_property_read_string_index() failure; rc=%i\n", rc);
> +	KUNIT_ASSERT_EQ(test, rc, 0);
> +	KUNIT_EXPECT_STREQ(test, strings[0], "first");
> +
>  	rc = of_property_read_string_index(np, "phandle-list-names", 1, strings);
> -	unittest(rc == 0 && !strcmp(strings[0], "second"), "of_property_read_string_index() failure; rc=%i\n", rc);
> +	KUNIT_ASSERT_EQ(test, rc, 0);
> +	KUNIT_EXPECT_STREQ(test, strings[0], "second");
> +
>  	rc = of_property_read_string_index(np, "phandle-list-names", 2, strings);
> -	unittest(rc == 0 && !strcmp(strings[0], "third"), "of_property_read_string_index() failure; rc=%i\n", rc);
> +	KUNIT_ASSERT_EQ(test, rc, 0);
> +	KUNIT_EXPECT_STREQ(test, strings[0], "third");
> +
>  	strings[0] = NULL;
>  	rc = of_property_read_string_index(np, "phandle-list-names", 3, strings);
> -	unittest(rc == -ENODATA && strings[0] == NULL, "of_property_read_string_index() failure; rc=%i\n", rc);
> +	KUNIT_EXPECT_EQ(test, rc, -ENODATA);
> +	KUNIT_EXPECT_EQ(test, strings[0], NULL);
> +
>  	strings[0] = NULL;
>  	rc = of_property_read_string_index(np, "unterminated-string", 0, strings);
> -	unittest(rc == -EILSEQ && strings[0] == NULL, "of_property_read_string_index() failure; rc=%i\n", rc);
> +	KUNIT_EXPECT_EQ(test, rc, -EILSEQ);
> +	KUNIT_EXPECT_EQ(test, strings[0], NULL);
> +
>  	rc = of_property_read_string_index(np, "unterminated-string-list", 0, strings);
> -	unittest(rc == 0 && !strcmp(strings[0], "first"), "of_property_read_string_index() failure; rc=%i\n", rc);
> +	KUNIT_ASSERT_EQ(test, rc, 0);
> +	KUNIT_EXPECT_STREQ(test, strings[0], "first");
> +
>  	strings[0] = NULL;
>  	rc = of_property_read_string_index(np, "unterminated-string-list", 2, strings); /* should fail */
> -	unittest(rc == -EILSEQ && strings[0] == NULL, "of_property_read_string_index() failure; rc=%i\n", rc);
> -	strings[1] = NULL;
> +	KUNIT_EXPECT_EQ(test, rc, -EILSEQ);
> +	KUNIT_EXPECT_EQ(test, strings[0], NULL);
>  
>  	/* of_property_read_string_array() tests */
> -	rc = of_property_read_string_array(np, "string-property", strings, 4);
> -	unittest(rc == 1, "Incorrect string count; rc=%i\n", rc);
> -	rc = of_property_read_string_array(np, "phandle-list-names", strings, 4);
> -	unittest(rc == 3, "Incorrect string count; rc=%i\n", rc);
> -	rc = of_property_read_string_array(np, "unterminated-string", strings, 4);
> -	unittest(rc == -EILSEQ, "unterminated string; rc=%i\n", rc);
> +	strings[1] = NULL;
> +	KUNIT_EXPECT_EQ(
> +		test,
> +		of_property_read_string_array(
> +			np, "string-property", strings, 4),
> +		1);
> +	KUNIT_EXPECT_EQ(
> +		test,
> +		of_property_read_string_array(
> +			np, "phandle-list-names", strings, 4),
> +		3);
> +	KUNIT_EXPECT_EQ_MSG(
> +		test,
> +		of_property_read_string_array(
> +			np, "unterminated-string", strings, 4),
> +		-EILSEQ,
> +		"unterminated string");
>  	/* -- An incorrectly formed string should cause a failure */
> -	rc = of_property_read_string_array(np, "unterminated-string-list", strings, 4);
> -	unittest(rc == -EILSEQ, "unterminated string array; rc=%i\n", rc);
> +	KUNIT_EXPECT_EQ_MSG(
> +		test,
> +		of_property_read_string_array(
> +			np, "unterminated-string-list", strings, 4),
> +		-EILSEQ,
> +		"unterminated string array");
>  	/* -- parsing the correctly formed strings should still work: */
>  	strings[2] = NULL;
>  	rc = of_property_read_string_array(np, "unterminated-string-list", strings, 2);
> -	unittest(rc == 2 && strings[2] == NULL, "of_property_read_string_array() failure; rc=%i\n", rc);
> +	KUNIT_EXPECT_EQ(test, rc, 2);
> +	KUNIT_EXPECT_EQ(test, strings[2], NULL);
> +
>  	strings[1] = NULL;
>  	rc = of_property_read_string_array(np, "phandle-list-names", strings, 1);
> -	unittest(rc == 1 && strings[1] == NULL, "Overwrote end of string array; rc=%i, str='%s'\n", rc, strings[1]);
> +	KUNIT_ASSERT_EQ(test, rc, 1);
> +	KUNIT_EXPECT_EQ_MSG(test, strings[1], NULL,
> +			    "Overwrote end of string array");
>  }
>  
>  #define propcmp(p1, p2) (((p1)->length == (p2)->length) && \
>  			(p1)->value && (p2)->value && \
>  			!memcmp((p1)->value, (p2)->value, (p1)->length) && \
>  			!strcmp((p1)->name, (p2)->name))
> -static void __init of_unittest_property_copy(void)
> +static void of_unittest_property_copy(struct kunit *test)
>  {
>  #ifdef CONFIG_OF_DYNAMIC
>  	struct property p1 = { .name = "p1", .length = 0, .value = "" };
> @@ -686,20 +775,24 @@ static void __init of_unittest_property_copy(void)
>  	struct property *new;
>  
>  	new = __of_prop_dup(&p1, GFP_KERNEL);
> -	unittest(new && propcmp(&p1, new), "empty property didn't copy correctly\n");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, new);
> +	KUNIT_EXPECT_TRUE_MSG(test, propcmp(&p1, new),
> +			      "empty property didn't copy correctly");
>  	kfree(new->value);
>  	kfree(new->name);
>  	kfree(new);
>  
>  	new = __of_prop_dup(&p2, GFP_KERNEL);
> -	unittest(new && propcmp(&p2, new), "non-empty property didn't copy correctly\n");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, new);
> +	KUNIT_EXPECT_TRUE_MSG(test, propcmp(&p2, new),
> +			      "non-empty property didn't copy correctly");
>  	kfree(new->value);
>  	kfree(new->name);
>  	kfree(new);
>  #endif
>  }
>  
> -static void __init of_unittest_changeset(void)
> +static void of_unittest_changeset(struct kunit *test)
>  {
>  #ifdef CONFIG_OF_DYNAMIC
>  	struct property *ppadd, padd = { .name = "prop-add", .length = 1, .value = "" };
> @@ -712,32 +805,32 @@ static void __init of_unittest_changeset(void)
>  	struct of_changeset chgset;
>  
>  	n1 = __of_node_dup(NULL, "n1");
> -	unittest(n1, "testcase setup failure\n");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, n1);
>  
>  	n2 = __of_node_dup(NULL, "n2");
> -	unittest(n2, "testcase setup failure\n");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, n2);
>  
>  	n21 = __of_node_dup(NULL, "n21");
> -	unittest(n21, "testcase setup failure %p\n", n21);
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, n21);
>  
>  	nchangeset = of_find_node_by_path("/testcase-data/changeset");
>  	nremove = of_get_child_by_name(nchangeset, "node-remove");
> -	unittest(nremove, "testcase setup failure\n");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, nremove);
>  
>  	ppadd = __of_prop_dup(&padd, GFP_KERNEL);
> -	unittest(ppadd, "testcase setup failure\n");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ppadd);
>  
>  	ppname_n1  = __of_prop_dup(&pname_n1, GFP_KERNEL);
> -	unittest(ppname_n1, "testcase setup failure\n");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ppname_n1);
>  
>  	ppname_n2  = __of_prop_dup(&pname_n2, GFP_KERNEL);
> -	unittest(ppname_n2, "testcase setup failure\n");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ppname_n2);
>  
>  	ppname_n21 = __of_prop_dup(&pname_n21, GFP_KERNEL);
> -	unittest(ppname_n21, "testcase setup failure\n");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ppname_n21);
>  
>  	ppupdate = __of_prop_dup(&pupdate, GFP_KERNEL);
> -	unittest(ppupdate, "testcase setup failure\n");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ppupdate);
>  
>  	parent = nchangeset;
>  	n1->parent = parent;
> @@ -745,54 +838,72 @@ static void __init of_unittest_changeset(void)
>  	n21->parent = n2;
>  
>  	ppremove = of_find_property(parent, "prop-remove", NULL);
> -	unittest(ppremove, "failed to find removal prop");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ppremove);
>  
>  	of_changeset_init(&chgset);
>  
> -	unittest(!of_changeset_attach_node(&chgset, n1), "fail attach n1\n");
> -	unittest(!of_changeset_add_property(&chgset, n1, ppname_n1), "fail add prop name\n");
> -
> -	unittest(!of_changeset_attach_node(&chgset, n2), "fail attach n2\n");
> -	unittest(!of_changeset_add_property(&chgset, n2, ppname_n2), "fail add prop name\n");
> -
> -	unittest(!of_changeset_detach_node(&chgset, nremove), "fail remove node\n");
> -	unittest(!of_changeset_add_property(&chgset, n21, ppname_n21), "fail add prop name\n");
> -
> -	unittest(!of_changeset_attach_node(&chgset, n21), "fail attach n21\n");
> -
> -	unittest(!of_changeset_add_property(&chgset, parent, ppadd), "fail add prop prop-add\n");
> -	unittest(!of_changeset_update_property(&chgset, parent, ppupdate), "fail update prop\n");
> -	unittest(!of_changeset_remove_property(&chgset, parent, ppremove), "fail remove prop\n");
> -
> -	unittest(!of_changeset_apply(&chgset), "apply failed\n");
> +	KUNIT_EXPECT_FALSE_MSG(test, of_changeset_attach_node(&chgset, n1),
> +			       "fail attach n1\n");
> +	KUNIT_EXPECT_FALSE_MSG(
> +		test, of_changeset_add_property(&chgset, n1, ppname_n1),
> +		"fail add prop name\n");
> +
> +	KUNIT_EXPECT_FALSE_MSG(test, of_changeset_attach_node(&chgset, n2),
> +			       "fail attach n2\n");
> +	KUNIT_EXPECT_FALSE_MSG(
> +		test, of_changeset_add_property(&chgset, n2, ppname_n2),
> +		"fail add prop name\n");
> +
> +	KUNIT_EXPECT_FALSE_MSG(test, of_changeset_detach_node(&chgset, nremove),
> +			       "fail remove node\n");
> +	KUNIT_EXPECT_FALSE_MSG(
> +		test, of_changeset_add_property(&chgset, n21, ppname_n21),
> +		"fail add prop name\n");
> +
> +	KUNIT_EXPECT_FALSE_MSG(test, of_changeset_attach_node(&chgset, n21),
> +			       "fail attach n21\n");
> +
> +	KUNIT_EXPECT_FALSE_MSG(
> +		test,
> +		of_changeset_add_property(&chgset, parent, ppadd),
> +		"fail add prop prop-add\n");
> +	KUNIT_EXPECT_FALSE_MSG(
> +		test,
> +		of_changeset_update_property(&chgset, parent, ppupdate),
> +		"fail update prop\n");
> +	KUNIT_EXPECT_FALSE_MSG(
> +		test,
> +		of_changeset_remove_property(&chgset, parent, ppremove),
> +		"fail remove prop\n");
> +
> +	KUNIT_EXPECT_FALSE_MSG(test, of_changeset_apply(&chgset),
> +			       "apply failed\n");
>  
>  	of_node_put(nchangeset);
>  
>  	/* Make sure node names are constructed correctly */
> -	unittest((np = of_find_node_by_path("/testcase-data/changeset/n2/n21")),
> -		 "'%pOF' not added\n", n21);
> +	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(
> +		test,
> +		np = of_find_node_by_path("/testcase-data/changeset/n2/n21"),
> +		"'%pOF' not added\n", n21);
>  	of_node_put(np);
>  
> -	unittest(!of_changeset_revert(&chgset), "revert failed\n");
> +	KUNIT_EXPECT_FALSE(test, of_changeset_revert(&chgset));
>  
>  	of_changeset_destroy(&chgset);
>  #endif
>  }
>  
> -static void __init of_unittest_parse_interrupts(void)
> +static void of_unittest_parse_interrupts(struct kunit *test)
>  {
>  	struct device_node *np;
>  	struct of_phandle_args args;
>  	int i, rc;
>  
> -	if (of_irq_workarounds & OF_IMAP_OLDWORLD_MAC)
> -		return;
> +	KUNIT_ASSERT_FALSE(test, of_irq_workarounds & OF_IMAP_OLDWORLD_MAC);
>  
>  	np = of_find_node_by_path("/testcase-data/interrupts/interrupts0");
> -	if (!np) {
> -		pr_err("missing testcase data\n");
> -		return;
> -	}
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>  
>  	for (i = 0; i < 4; i++) {
>  		bool passed = true;
> @@ -804,16 +915,15 @@ static void __init of_unittest_parse_interrupts(void)
>  		passed &= (args.args_count == 1);
>  		passed &= (args.args[0] == (i + 1));
>  
> -		unittest(passed, "index %i - data error on node %pOF rc=%i\n",
> -			 i, args.np, rc);
> +		KUNIT_EXPECT_TRUE_MSG(
> +			test, passed,
> +			"index %i - data error on node %pOF rc=%i\n",
> +			i, args.np, rc);
>  	}
>  	of_node_put(np);
>  
>  	np = of_find_node_by_path("/testcase-data/interrupts/interrupts1");
> -	if (!np) {
> -		pr_err("missing testcase data\n");
> -		return;
> -	}
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>  
>  	for (i = 0; i < 4; i++) {
>  		bool passed = true;
> @@ -850,26 +960,24 @@ static void __init of_unittest_parse_interrupts(void)
>  		default:
>  			passed = false;
>  		}
> -		unittest(passed, "index %i - data error on node %pOF rc=%i\n",
> -			 i, args.np, rc);
> +		KUNIT_EXPECT_TRUE_MSG(
> +			test, passed,
> +			"index %i - data error on node %pOF rc=%i\n",
> +			i, args.np, rc);
>  	}
>  	of_node_put(np);
>  }
>  
> -static void __init of_unittest_parse_interrupts_extended(void)
> +static void of_unittest_parse_interrupts_extended(struct kunit *test)
>  {
>  	struct device_node *np;
>  	struct of_phandle_args args;
>  	int i, rc;
>  
> -	if (of_irq_workarounds & OF_IMAP_OLDWORLD_MAC)
> -		return;
> +	KUNIT_ASSERT_FALSE(test, of_irq_workarounds & OF_IMAP_OLDWORLD_MAC);
>  
>  	np = of_find_node_by_path("/testcase-data/interrupts/interrupts-extended0");
> -	if (!np) {
> -		pr_err("missing testcase data\n");
> -		return;
> -	}
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>  
>  	for (i = 0; i < 7; i++) {
>  		bool passed = true;
> @@ -924,8 +1032,10 @@ static void __init of_unittest_parse_interrupts_extended(void)
>  			passed = false;
>  		}
>  
> -		unittest(passed, "index %i - data error on node %pOF rc=%i\n",
> -			 i, args.np, rc);
> +		KUNIT_EXPECT_TRUE_MSG(
> +			test, passed,
> +			"index %i - data error on node %pOF rc=%i\n",
> +			i, args.np, rc);
>  	}
>  	of_node_put(np);
>  }
> @@ -965,7 +1075,7 @@ static struct {
>  	{ .path = "/testcase-data/match-node/name9", .data = "K", },
>  };
>  
> -static void __init of_unittest_match_node(void)
> +static void of_unittest_match_node(struct kunit *test)
>  {
>  	struct device_node *np;
>  	const struct of_device_id *match;
> @@ -973,26 +1083,19 @@ static void __init of_unittest_match_node(void)
>  
>  	for (i = 0; i < ARRAY_SIZE(match_node_tests); i++) {
>  		np = of_find_node_by_path(match_node_tests[i].path);
> -		if (!np) {
> -			unittest(0, "missing testcase node %s\n",
> -				match_node_tests[i].path);
> -			continue;
> -		}
> +		KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>  
>  		match = of_match_node(match_node_table, np);
> -		if (!match) {
> -			unittest(0, "%s didn't match anything\n",
> -				match_node_tests[i].path);
> -			continue;
> -		}
> +		KUNIT_ASSERT_NOT_ERR_OR_NULL_MSG(test, match,
> +						 "%s didn't match anything\n",
> +						 match_node_tests[i].path);
>  
> -		if (strcmp(match->data, match_node_tests[i].data) != 0) {
> -			unittest(0, "%s got wrong match. expected %s, got %s\n",
> -				match_node_tests[i].path, match_node_tests[i].data,
> -				(const char *)match->data);
> -			continue;
> -		}
> -		unittest(1, "passed");
> +		KUNIT_EXPECT_STREQ_MSG(
> +			test,
> +			match->data, match_node_tests[i].data,
> +			"%s got wrong match. expected %s, got %s\n",
> +			match_node_tests[i].path, match_node_tests[i].data,
> +			(const char *)match->data);
>  	}
>  }
>  
> @@ -1004,9 +1107,9 @@ static struct resource test_bus_res = {
>  static const struct platform_device_info test_bus_info = {
>  	.name = "unittest-bus",
>  };
> -static void __init of_unittest_platform_populate(void)
> +static void of_unittest_platform_populate(struct kunit *test)
>  {
> -	int irq, rc;
> +	int irq;
>  	struct device_node *np, *child, *grandchild;
>  	struct platform_device *pdev, *test_bus;
>  	const struct of_device_id match[] = {
> @@ -1020,32 +1123,27 @@ static void __init of_unittest_platform_populate(void)
>  	/* Test that a missing irq domain returns -EPROBE_DEFER */
>  	np = of_find_node_by_path("/testcase-data/testcase-device1");
>  	pdev = of_find_device_by_node(np);
> -	unittest(pdev, "device 1 creation failed\n");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, pdev);
>  
>  	if (!(of_irq_workarounds & OF_IMAP_OLDWORLD_MAC)) {
>  		irq = platform_get_irq(pdev, 0);
> -		unittest(irq == -EPROBE_DEFER,
> -			 "device deferred probe failed - %d\n", irq);
> +		KUNIT_ASSERT_EQ(test, irq, -EPROBE_DEFER);
>  
>  		/* Test that a parsing failure does not return -EPROBE_DEFER */
>  		np = of_find_node_by_path("/testcase-data/testcase-device2");
>  		pdev = of_find_device_by_node(np);
> -		unittest(pdev, "device 2 creation failed\n");
> +		KUNIT_ASSERT_NOT_ERR_OR_NULL(test, pdev);
>  		irq = platform_get_irq(pdev, 0);
> -		unittest(irq < 0 && irq != -EPROBE_DEFER,
> -			 "device parsing error failed - %d\n", irq);
> +		KUNIT_ASSERT_TRUE_MSG(test, irq < 0 && irq != -EPROBE_DEFER,
> +				      "device parsing error failed - %d\n",
> +				      irq);
>  	}
>  
>  	np = of_find_node_by_path("/testcase-data/platform-tests");
> -	unittest(np, "No testcase data in device tree\n");
> -	if (!np)
> -		return;
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>  
>  	test_bus = platform_device_register_full(&test_bus_info);
> -	rc = PTR_ERR_OR_ZERO(test_bus);
> -	unittest(!rc, "testbus registration failed; rc=%i\n", rc);
> -	if (rc)
> -		return;
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, test_bus);
>  	test_bus->dev.of_node = np;
>  
>  	/*
> @@ -1060,17 +1158,19 @@ static void __init of_unittest_platform_populate(void)
>  	of_platform_populate(np, match, NULL, &test_bus->dev);
>  	for_each_child_of_node(np, child) {
>  		for_each_child_of_node(child, grandchild)
> -			unittest(of_find_device_by_node(grandchild),
> -				 "Could not create device for node '%pOFn'\n",
> -				 grandchild);
> +			KUNIT_EXPECT_TRUE_MSG(
> +				test, of_find_device_by_node(grandchild),
> +				"Could not create device for node '%pOFn'\n",
> +				grandchild);
>  	}
>  
>  	of_platform_depopulate(&test_bus->dev);
>  	for_each_child_of_node(np, child) {
>  		for_each_child_of_node(child, grandchild)
> -			unittest(!of_find_device_by_node(grandchild),
> -				 "device didn't get destroyed '%pOFn'\n",
> -				 grandchild);
> +			KUNIT_EXPECT_FALSE_MSG(
> +				test, of_find_device_by_node(grandchild),
> +				"device didn't get destroyed '%pOFn'\n",
> +				grandchild);
>  	}
>  
>  	platform_device_unregister(test_bus);
> @@ -1171,7 +1271,7 @@ static void attach_node_and_children(struct device_node *np)
>   *	unittest_data_add - Reads, copies data from
>   *	linked tree and attaches it to the live tree
>   */
> -static int __init unittest_data_add(void)
> +static int unittest_data_add(void)
>  {
>  	void *unittest_data;
>  	struct device_node *unittest_data_node, *np;
> @@ -1242,7 +1342,7 @@ static int __init unittest_data_add(void)
>  }
>  
>  #ifdef CONFIG_OF_OVERLAY
> -static int __init overlay_data_apply(const char *overlay_name, int *overlay_id);
> +static int overlay_data_apply(const char *overlay_name, int *overlay_id);
>  
>  static int unittest_probe(struct platform_device *pdev)
>  {
> @@ -1471,172 +1571,146 @@ static void of_unittest_destroy_tracked_overlays(void)
>  	} while (defers > 0);
>  }
>  
> -static int __init of_unittest_apply_overlay(int overlay_nr, int *overlay_id)
> +static int of_unittest_apply_overlay(struct kunit *test,
> +				     int overlay_nr,
> +				     int *overlay_id)
>  {
>  	const char *overlay_name;
>  
>  	overlay_name = overlay_name_from_nr(overlay_nr);
>  
> -	if (!overlay_data_apply(overlay_name, overlay_id)) {
> -		unittest(0, "could not apply overlay \"%s\"\n",
> -				overlay_name);
> -		return -EFAULT;
> -	}
> +	KUNIT_ASSERT_TRUE_MSG(test,
> +			      overlay_data_apply(overlay_name, overlay_id),
> +			      "could not apply overlay \"%s\"\n", overlay_name);
>  	of_unittest_track_overlay(*overlay_id);
>  
>  	return 0;
>  }
>  
>  /* apply an overlay while checking before and after states */
> -static int __init of_unittest_apply_overlay_check(int overlay_nr,
> +static int of_unittest_apply_overlay_check(struct kunit *test, int overlay_nr,
>  		int unittest_nr, int before, int after,
>  		enum overlay_type ovtype)
>  {
>  	int ret, ovcs_id;
>  
>  	/* unittest device must not be in before state */
> -	if (of_unittest_device_exists(unittest_nr, ovtype) != before) {
> -		unittest(0, "%s with device @\"%s\" %s\n",
> -				overlay_name_from_nr(overlay_nr),
> -				unittest_path(unittest_nr, ovtype),
> -				!before ? "enabled" : "disabled");
> -		return -EINVAL;
> -	}
> +	KUNIT_ASSERT_EQ_MSG(
> +		test, of_unittest_device_exists(unittest_nr, ovtype), before,
> +		"%s with device @\"%s\" %s\n", overlay_name_from_nr(overlay_nr),
> +		unittest_path(unittest_nr, ovtype),
> +		!before ? "enabled" : "disabled");
>  
>  	ovcs_id = 0;
> -	ret = of_unittest_apply_overlay(overlay_nr, &ovcs_id);
> +	ret = of_unittest_apply_overlay(test, overlay_nr, &ovcs_id);
>  	if (ret != 0) {
> -		/* of_unittest_apply_overlay already called unittest() */
> +		/* of_unittest_apply_overlay already set expectation */
>  		return ret;
>  	}
>  
>  	/* unittest device must be to set to after state */
> -	if (of_unittest_device_exists(unittest_nr, ovtype) != after) {
> -		unittest(0, "%s failed to create @\"%s\" %s\n",
> -				overlay_name_from_nr(overlay_nr),
> -				unittest_path(unittest_nr, ovtype),
> -				!after ? "enabled" : "disabled");
> -		return -EINVAL;
> -	}
> +	KUNIT_ASSERT_EQ_MSG(
> +		test, of_unittest_device_exists(unittest_nr, ovtype), after,
> +		"%s failed to create @\"%s\" %s\n",
> +		overlay_name_from_nr(overlay_nr),
> +		unittest_path(unittest_nr, ovtype),
> +		!after ? "enabled" : "disabled");
>  
>  	return 0;
>  }
>  
>  /* apply an overlay and then revert it while checking before, after states */
> -static int __init of_unittest_apply_revert_overlay_check(int overlay_nr,
> +static int of_unittest_apply_revert_overlay_check(
> +		struct kunit *test, int overlay_nr,
>  		int unittest_nr, int before, int after,
>  		enum overlay_type ovtype)
>  {
>  	int ret, ovcs_id;
>  
>  	/* unittest device must be in before state */
> -	if (of_unittest_device_exists(unittest_nr, ovtype) != before) {
> -		unittest(0, "%s with device @\"%s\" %s\n",
> -				overlay_name_from_nr(overlay_nr),
> -				unittest_path(unittest_nr, ovtype),
> -				!before ? "enabled" : "disabled");
> -		return -EINVAL;
> -	}
> +	KUNIT_ASSERT_EQ_MSG(
> +		test, of_unittest_device_exists(unittest_nr, ovtype), before,
> +		"%s with device @\"%s\" %s\n", overlay_name_from_nr(overlay_nr),
> +		unittest_path(unittest_nr, ovtype),
> +		!before ? "enabled" : "disabled");
>  
>  	/* apply the overlay */
>  	ovcs_id = 0;
> -	ret = of_unittest_apply_overlay(overlay_nr, &ovcs_id);
> +	ret = of_unittest_apply_overlay(test, overlay_nr, &ovcs_id);
>  	if (ret != 0) {
> -		/* of_unittest_apply_overlay already called unittest() */
> +		/* of_unittest_apply_overlay already set expectation */
>  		return ret;
>  	}
>  
>  	/* unittest device must be in after state */
> -	if (of_unittest_device_exists(unittest_nr, ovtype) != after) {
> -		unittest(0, "%s failed to create @\"%s\" %s\n",
> -				overlay_name_from_nr(overlay_nr),
> -				unittest_path(unittest_nr, ovtype),
> -				!after ? "enabled" : "disabled");
> -		return -EINVAL;
> -	}
> -
> -	ret = of_overlay_remove(&ovcs_id);
> -	if (ret != 0) {
> -		unittest(0, "%s failed to be destroyed @\"%s\"\n",
> -				overlay_name_from_nr(overlay_nr),
> -				unittest_path(unittest_nr, ovtype));
> -		return ret;
> -	}
> +	KUNIT_ASSERT_EQ_MSG(
> +		test, of_unittest_device_exists(unittest_nr, ovtype), after,
> +		"%s failed to create @\"%s\" %s\n",
> +		overlay_name_from_nr(overlay_nr),
> +		unittest_path(unittest_nr, ovtype),
> +		!after ? "enabled" : "disabled");
> +
> +	KUNIT_ASSERT_EQ_MSG(test, of_overlay_remove(&ovcs_id), 0,
> +			    "%s failed to be destroyed @\"%s\"\n",
> +			    overlay_name_from_nr(overlay_nr),
> +			    unittest_path(unittest_nr, ovtype));
>  
>  	/* unittest device must be again in before state */
> -	if (of_unittest_device_exists(unittest_nr, PDEV_OVERLAY) != before) {
> -		unittest(0, "%s with device @\"%s\" %s\n",
> -				overlay_name_from_nr(overlay_nr),
> -				unittest_path(unittest_nr, ovtype),
> -				!before ? "enabled" : "disabled");
> -		return -EINVAL;
> -	}
> +	KUNIT_ASSERT_EQ_MSG(
> +		test,
> +		of_unittest_device_exists(unittest_nr, PDEV_OVERLAY), before,
> +		"%s with device @\"%s\" %s\n",
> +		overlay_name_from_nr(overlay_nr),
> +		unittest_path(unittest_nr, ovtype),
> +		!before ? "enabled" : "disabled");
>  
>  	return 0;
>  }
>  
>  /* test activation of device */
> -static void __init of_unittest_overlay_0(void)
> +static void of_unittest_overlay_0(struct kunit *test)
>  {
>  	/* device should enable */
> -	if (of_unittest_apply_overlay_check(0, 0, 0, 1, PDEV_OVERLAY))
> -		return;
> -
> -	unittest(1, "overlay test %d passed\n", 0);
> +	of_unittest_apply_overlay_check(test, 0, 0, 0, 1, PDEV_OVERLAY);
>  }
>  
>  /* test deactivation of device */
> -static void __init of_unittest_overlay_1(void)
> +static void of_unittest_overlay_1(struct kunit *test)
>  {
>  	/* device should disable */
> -	if (of_unittest_apply_overlay_check(1, 1, 1, 0, PDEV_OVERLAY))
> -		return;
> -
> -	unittest(1, "overlay test %d passed\n", 1);
> +	of_unittest_apply_overlay_check(test, 1, 1, 1, 0, PDEV_OVERLAY);
>  }
>  
>  /* test activation of device */
> -static void __init of_unittest_overlay_2(void)
> +static void of_unittest_overlay_2(struct kunit *test)
>  {
>  	/* device should enable */
> -	if (of_unittest_apply_overlay_check(2, 2, 0, 1, PDEV_OVERLAY))
> -		return;
> -
> -	unittest(1, "overlay test %d passed\n", 2);
> +	of_unittest_apply_overlay_check(test, 2, 2, 0, 1, PDEV_OVERLAY);
>  }
>  
>  /* test deactivation of device */
> -static void __init of_unittest_overlay_3(void)
> +static void of_unittest_overlay_3(struct kunit *test)
>  {
>  	/* device should disable */
> -	if (of_unittest_apply_overlay_check(3, 3, 1, 0, PDEV_OVERLAY))
> -		return;
> -
> -	unittest(1, "overlay test %d passed\n", 3);
> +	of_unittest_apply_overlay_check(test, 3, 3, 1, 0, PDEV_OVERLAY);
>  }
>  
>  /* test activation of a full device node */
> -static void __init of_unittest_overlay_4(void)
> +static void of_unittest_overlay_4(struct kunit *test)
>  {
>  	/* device should disable */
> -	if (of_unittest_apply_overlay_check(4, 4, 0, 1, PDEV_OVERLAY))
> -		return;
> -
> -	unittest(1, "overlay test %d passed\n", 4);
> +	of_unittest_apply_overlay_check(test, 4, 4, 0, 1, PDEV_OVERLAY);
>  }
>  
>  /* test overlay apply/revert sequence */
> -static void __init of_unittest_overlay_5(void)
> +static void of_unittest_overlay_5(struct kunit *test)
>  {
>  	/* device should disable */
> -	if (of_unittest_apply_revert_overlay_check(5, 5, 0, 1, PDEV_OVERLAY))
> -		return;
> -
> -	unittest(1, "overlay test %d passed\n", 5);
> +	of_unittest_apply_revert_overlay_check(test, 5, 5, 0, 1, PDEV_OVERLAY);
>  }
>  
>  /* test overlay application in sequence */
> -static void __init of_unittest_overlay_6(void)
> +static void of_unittest_overlay_6(struct kunit *test)
>  {
>  	int i, ov_id[2], ovcs_id;
>  	int overlay_nr = 6, unittest_nr = 6;
> @@ -1645,74 +1719,67 @@ static void __init of_unittest_overlay_6(void)
>  
>  	/* unittest device must be in before state */
>  	for (i = 0; i < 2; i++) {
> -		if (of_unittest_device_exists(unittest_nr + i, PDEV_OVERLAY)
> -				!= before) {
> -			unittest(0, "%s with device @\"%s\" %s\n",
> -					overlay_name_from_nr(overlay_nr + i),
> -					unittest_path(unittest_nr + i,
> -						PDEV_OVERLAY),
> -					!before ? "enabled" : "disabled");
> -			return;
> -		}
> +		KUNIT_ASSERT_EQ_MSG(test,
> +				    of_unittest_device_exists(unittest_nr + i,
> +							      PDEV_OVERLAY),
> +				    before,
> +				    "%s with device @\"%s\" %s\n",
> +				    overlay_name_from_nr(overlay_nr + i),
> +				    unittest_path(unittest_nr + i,
> +						  PDEV_OVERLAY),
> +				    !before ? "enabled" : "disabled");
>  	}
>  
>  	/* apply the overlays */
>  	for (i = 0; i < 2; i++) {
> -
>  		overlay_name = overlay_name_from_nr(overlay_nr + i);
>  
> -		if (!overlay_data_apply(overlay_name, &ovcs_id)) {
> -			unittest(0, "could not apply overlay \"%s\"\n",
> -					overlay_name);
> -			return;
> -		}
> +		KUNIT_ASSERT_TRUE_MSG(
> +			test, overlay_data_apply(overlay_name, &ovcs_id),
> +			"could not apply overlay \"%s\"\n", overlay_name);
>  		ov_id[i] = ovcs_id;
>  		of_unittest_track_overlay(ov_id[i]);
>  	}
>  
>  	for (i = 0; i < 2; i++) {
>  		/* unittest device must be in after state */
> -		if (of_unittest_device_exists(unittest_nr + i, PDEV_OVERLAY)
> -				!= after) {
> -			unittest(0, "overlay @\"%s\" failed @\"%s\" %s\n",
> -					overlay_name_from_nr(overlay_nr + i),
> -					unittest_path(unittest_nr + i,
> -						PDEV_OVERLAY),
> -					!after ? "enabled" : "disabled");
> -			return;
> -		}
> +		KUNIT_ASSERT_EQ_MSG(test,
> +				    of_unittest_device_exists(unittest_nr + i,
> +							      PDEV_OVERLAY),
> +				    after,
> +				    "overlay @\"%s\" failed @\"%s\" %s\n",
> +				    overlay_name_from_nr(overlay_nr + i),
> +				    unittest_path(unittest_nr + i,
> +						  PDEV_OVERLAY),
> +				    !after ? "enabled" : "disabled");
>  	}
>  
>  	for (i = 1; i >= 0; i--) {
>  		ovcs_id = ov_id[i];
> -		if (of_overlay_remove(&ovcs_id)) {
> -			unittest(0, "%s failed destroy @\"%s\"\n",
> -					overlay_name_from_nr(overlay_nr + i),
> -					unittest_path(unittest_nr + i,
> -						PDEV_OVERLAY));
> -			return;
> -		}
> +		KUNIT_ASSERT_FALSE_MSG(
> +			test, of_overlay_remove(&ovcs_id),
> +			"%s failed destroy @\"%s\"\n",
> +			overlay_name_from_nr(overlay_nr + i),
> +			unittest_path(unittest_nr + i, PDEV_OVERLAY));
>  		of_unittest_untrack_overlay(ov_id[i]);
>  	}
>  
>  	for (i = 0; i < 2; i++) {
>  		/* unittest device must be again in before state */
> -		if (of_unittest_device_exists(unittest_nr + i, PDEV_OVERLAY)
> -				!= before) {
> -			unittest(0, "%s with device @\"%s\" %s\n",
> -					overlay_name_from_nr(overlay_nr + i),
> -					unittest_path(unittest_nr + i,
> -						PDEV_OVERLAY),
> -					!before ? "enabled" : "disabled");
> -			return;
> -		}
> +		KUNIT_ASSERT_EQ_MSG(test,
> +				    of_unittest_device_exists(unittest_nr + i,
> +							      PDEV_OVERLAY),
> +				    before,
> +				    "%s with device @\"%s\" %s\n",
> +				    overlay_name_from_nr(overlay_nr + i),
> +				    unittest_path(unittest_nr + i,
> +						  PDEV_OVERLAY),
> +				    !before ? "enabled" : "disabled");
>  	}
> -
> -	unittest(1, "overlay test %d passed\n", 6);
>  }
>  
>  /* test overlay application in sequence */
> -static void __init of_unittest_overlay_8(void)
> +static void of_unittest_overlay_8(struct kunit *test)
>  {
>  	int i, ov_id[2], ovcs_id;
>  	int overlay_nr = 8, unittest_nr = 8;
> @@ -1722,76 +1789,64 @@ static void __init of_unittest_overlay_8(void)
>  
>  	/* apply the overlays */
>  	for (i = 0; i < 2; i++) {
> -
>  		overlay_name = overlay_name_from_nr(overlay_nr + i);
>  
> -		if (!overlay_data_apply(overlay_name, &ovcs_id)) {
> -			unittest(0, "could not apply overlay \"%s\"\n",
> -					overlay_name);
> -			return;
> -		}
> +		KUNIT_ASSERT_TRUE_MSG(
> +			test, overlay_data_apply(overlay_name, &ovcs_id),
> +			"could not apply overlay \"%s\"\n", overlay_name);
>  		ov_id[i] = ovcs_id;
>  		of_unittest_track_overlay(ov_id[i]);
>  	}
>  
>  	/* now try to remove first overlay (it should fail) */
>  	ovcs_id = ov_id[0];
> -	if (!of_overlay_remove(&ovcs_id)) {
> -		unittest(0, "%s was destroyed @\"%s\"\n",
> -				overlay_name_from_nr(overlay_nr + 0),
> -				unittest_path(unittest_nr,
> -					PDEV_OVERLAY));
> -		return;
> -	}
> +	KUNIT_ASSERT_TRUE_MSG(
> +		test, of_overlay_remove(&ovcs_id),
> +		"%s was destroyed @\"%s\"\n",
> +		overlay_name_from_nr(overlay_nr + 0),
> +		unittest_path(unittest_nr, PDEV_OVERLAY));
>  
>  	/* removing them in order should work */
>  	for (i = 1; i >= 0; i--) {
>  		ovcs_id = ov_id[i];
> -		if (of_overlay_remove(&ovcs_id)) {
> -			unittest(0, "%s not destroyed @\"%s\"\n",
> -					overlay_name_from_nr(overlay_nr + i),
> -					unittest_path(unittest_nr,
> -						PDEV_OVERLAY));
> -			return;
> -		}
> +		KUNIT_ASSERT_FALSE_MSG(
> +			test, of_overlay_remove(&ovcs_id),
> +			"%s not destroyed @\"%s\"\n",
> +			overlay_name_from_nr(overlay_nr + i),
> +			unittest_path(unittest_nr, PDEV_OVERLAY));
>  		of_unittest_untrack_overlay(ov_id[i]);
>  	}
> -
> -	unittest(1, "overlay test %d passed\n", 8);
>  }
>  
>  /* test insertion of a bus with parent devices */
> -static void __init of_unittest_overlay_10(void)
> +static void of_unittest_overlay_10(struct kunit *test)
>  {
> -	int ret;
>  	char *child_path;
>  
>  	/* device should disable */
> -	ret = of_unittest_apply_overlay_check(10, 10, 0, 1, PDEV_OVERLAY);
> -	if (unittest(ret == 0,
> -			"overlay test %d failed; overlay application\n", 10))
> -		return;
> +	KUNIT_ASSERT_EQ_MSG(
> +		test,
> +		of_unittest_apply_overlay_check(
> +				test, 10, 10, 0, 1, PDEV_OVERLAY),
> +		0,
> +		"overlay test %d failed; overlay application\n", 10);
>  
>  	child_path = kasprintf(GFP_KERNEL, "%s/test-unittest101",
>  			unittest_path(10, PDEV_OVERLAY));
> -	if (unittest(child_path, "overlay test %d failed; kasprintf\n", 10))
> -		return;
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, child_path);
>  
> -	ret = of_path_device_type_exists(child_path, PDEV_OVERLAY);
> +	KUNIT_EXPECT_TRUE_MSG(
> +		test, of_path_device_type_exists(child_path, PDEV_OVERLAY),
> +		"overlay test %d failed; no child device\n", 10);
>  	kfree(child_path);
> -
> -	unittest(ret, "overlay test %d failed; no child device\n", 10);
>  }
>  
>  /* test insertion of a bus with parent devices (and revert) */
> -static void __init of_unittest_overlay_11(void)
> +static void of_unittest_overlay_11(struct kunit *test)
>  {
> -	int ret;
> -
>  	/* device should disable */
> -	ret = of_unittest_apply_revert_overlay_check(11, 11, 0, 1,
> -			PDEV_OVERLAY);
> -	unittest(ret == 0, "overlay test %d failed; overlay apply\n", 11);
> +	KUNIT_EXPECT_FALSE(test, of_unittest_apply_revert_overlay_check(
> +		test, 11, 11, 0, 1, PDEV_OVERLAY));
>  }
>  
>  #if IS_BUILTIN(CONFIG_I2C) && IS_ENABLED(CONFIG_OF_OVERLAY)
> @@ -2013,25 +2068,18 @@ static struct i2c_driver unittest_i2c_mux_driver = {
>  
>  #endif
>  
> -static int of_unittest_overlay_i2c_init(void)
> +static int of_unittest_overlay_i2c_init(struct kunit *test)
>  {
> -	int ret;
> -
> -	ret = i2c_add_driver(&unittest_i2c_dev_driver);
> -	if (unittest(ret == 0,
> -			"could not register unittest i2c device driver\n"))
> -		return ret;
> +	KUNIT_ASSERT_EQ_MSG(test, i2c_add_driver(&unittest_i2c_dev_driver), 0,
> +			    "could not register unittest i2c device driver\n");
>  
> -	ret = platform_driver_register(&unittest_i2c_bus_driver);
> -	if (unittest(ret == 0,
> -			"could not register unittest i2c bus driver\n"))
> -		return ret;
> +	KUNIT_ASSERT_EQ_MSG(
> +		test, platform_driver_register(&unittest_i2c_bus_driver), 0,
> +		"could not register unittest i2c bus driver\n");
>  
>  #if IS_BUILTIN(CONFIG_I2C_MUX)
> -	ret = i2c_add_driver(&unittest_i2c_mux_driver);
> -	if (unittest(ret == 0,
> -			"could not register unittest i2c mux driver\n"))
> -		return ret;
> +	KUNIT_ASSERT_EQ_MSG(test, i2c_add_driver(&unittest_i2c_mux_driver), 0,
> +			    "could not register unittest i2c mux driver\n");
>  #endif
>  
>  	return 0;
> @@ -2046,101 +2094,85 @@ static void of_unittest_overlay_i2c_cleanup(void)
>  	i2c_del_driver(&unittest_i2c_dev_driver);
>  }
>  
> -static void __init of_unittest_overlay_i2c_12(void)
> +static void of_unittest_overlay_i2c_12(struct kunit *test)
>  {
>  	/* device should enable */
> -	if (of_unittest_apply_overlay_check(12, 12, 0, 1, I2C_OVERLAY))
> -		return;
> -
> -	unittest(1, "overlay test %d passed\n", 12);
> +	of_unittest_apply_overlay_check(test, 12, 12, 0, 1, I2C_OVERLAY);
>  }
>  
>  /* test deactivation of device */
> -static void __init of_unittest_overlay_i2c_13(void)
> +static void of_unittest_overlay_i2c_13(struct kunit *test)
>  {
>  	/* device should disable */
> -	if (of_unittest_apply_overlay_check(13, 13, 1, 0, I2C_OVERLAY))
> -		return;
> -
> -	unittest(1, "overlay test %d passed\n", 13);
> +	of_unittest_apply_overlay_check(test, 13, 13, 1, 0, I2C_OVERLAY);
>  }
>  
>  /* just check for i2c mux existence */
> -static void of_unittest_overlay_i2c_14(void)
> +static void of_unittest_overlay_i2c_14(struct kunit *test)
>  {
> +	KUNIT_SUCCEED(test);
>  }
>  
> -static void __init of_unittest_overlay_i2c_15(void)
> +static void of_unittest_overlay_i2c_15(struct kunit *test)
>  {
>  	/* device should enable */
> -	if (of_unittest_apply_overlay_check(15, 15, 0, 1, I2C_OVERLAY))
> -		return;
> -
> -	unittest(1, "overlay test %d passed\n", 15);
> +	of_unittest_apply_overlay_check(test, 15, 15, 0, 1, I2C_OVERLAY);
>  }
>  
>  #else
>  
> -static inline void of_unittest_overlay_i2c_14(void) { }
> -static inline void of_unittest_overlay_i2c_15(void) { }
> +static inline void of_unittest_overlay_i2c_14(struct kunit *test) { }
> +static inline void of_unittest_overlay_i2c_15(struct kunit *test) { }
>  
>  #endif
>  
> -static void __init of_unittest_overlay(void)
> +static void of_unittest_overlay(struct kunit *test)
>  {
>  	struct device_node *bus_np = NULL;
>  
> -	if (platform_driver_register(&unittest_driver)) {
> -		unittest(0, "could not register unittest driver\n");
> -		goto out;
> -	}
> +	KUNIT_ASSERT_FALSE_MSG(test, platform_driver_register(&unittest_driver),
> +			       "could not register unittest driver\n");
>  
>  	bus_np = of_find_node_by_path(bus_path);
> -	if (bus_np == NULL) {
> -		unittest(0, "could not find bus_path \"%s\"\n", bus_path);
> -		goto out;
> -	}
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL_MSG(
> +		test, bus_np, "could not find bus_path \"%s\"\n", bus_path);
>  
> -	if (of_platform_default_populate(bus_np, NULL, NULL)) {
> -		unittest(0, "could not populate bus @ \"%s\"\n", bus_path);
> -		goto out;
> -	}
> -
> -	if (!of_unittest_device_exists(100, PDEV_OVERLAY)) {
> -		unittest(0, "could not find unittest0 @ \"%s\"\n",
> -				unittest_path(100, PDEV_OVERLAY));
> -		goto out;
> -	}
> +	KUNIT_ASSERT_FALSE_MSG(
> +		test, of_platform_default_populate(bus_np, NULL, NULL),
> +		"could not populate bus @ \"%s\"\n", bus_path);
>  
> -	if (of_unittest_device_exists(101, PDEV_OVERLAY)) {
> -		unittest(0, "unittest1 @ \"%s\" should not exist\n",
> -				unittest_path(101, PDEV_OVERLAY));
> -		goto out;
> -	}
> +	KUNIT_ASSERT_TRUE_MSG(
> +		test, of_unittest_device_exists(100, PDEV_OVERLAY),
> +		"could not find unittest0 @ \"%s\"\n",
> +		unittest_path(100, PDEV_OVERLAY));
>  
> -	unittest(1, "basic infrastructure of overlays passed");
> +	KUNIT_ASSERT_FALSE_MSG(
> +		test, of_unittest_device_exists(101, PDEV_OVERLAY),
> +		"unittest1 @ \"%s\" should not exist\n",
> +		unittest_path(101, PDEV_OVERLAY));
>  
>  	/* tests in sequence */
> -	of_unittest_overlay_0();
> -	of_unittest_overlay_1();
> -	of_unittest_overlay_2();
> -	of_unittest_overlay_3();
> -	of_unittest_overlay_4();
> -	of_unittest_overlay_5();
> -	of_unittest_overlay_6();
> -	of_unittest_overlay_8();
> -
> -	of_unittest_overlay_10();
> -	of_unittest_overlay_11();
> +	of_unittest_overlay_0(test);
> +	of_unittest_overlay_1(test);
> +	of_unittest_overlay_2(test);
> +	of_unittest_overlay_3(test);
> +	of_unittest_overlay_4(test);
> +	of_unittest_overlay_5(test);
> +	of_unittest_overlay_6(test);
> +	of_unittest_overlay_8(test);
> +
> +	of_unittest_overlay_10(test);
> +	of_unittest_overlay_11(test);
>  
>  #if IS_BUILTIN(CONFIG_I2C)
> -	if (unittest(of_unittest_overlay_i2c_init() == 0, "i2c init failed\n"))
> -		goto out;
> +	KUNIT_ASSERT_EQ_MSG(test, of_unittest_overlay_i2c_init(test), 0,
> +			    "i2c init failed\n");
>  
> -	of_unittest_overlay_i2c_12();
> -	of_unittest_overlay_i2c_13();
> -	of_unittest_overlay_i2c_14();
> -	of_unittest_overlay_i2c_15();
> +	of_unittest_overlay_i2c_12(test);
> +	of_unittest_overlay_i2c_13(test);
> +	of_unittest_overlay_i2c_14(test);
> +	of_unittest_overlay_i2c_15(test);
>  
>  	of_unittest_overlay_i2c_cleanup();
>  #endif
> @@ -2152,7 +2184,7 @@ static void __init of_unittest_overlay(void)
>  }
>  
>  #else
> -static inline void __init of_unittest_overlay(void) { }
> +static inline void of_unittest_overlay(struct kunit *test) { }
>  #endif
>  
>  #ifdef CONFIG_OF_OVERLAY
> @@ -2313,7 +2345,7 @@ void __init unittest_unflatten_overlay_base(void)
>   *
>   * Return 0 on unexpected error.
>   */
> -static int __init overlay_data_apply(const char *overlay_name, int *overlay_id)
> +static int overlay_data_apply(const char *overlay_name, int *overlay_id)
>  {
>  	struct overlay_info *info;
>  	int found = 0;
> @@ -2359,19 +2391,17 @@ static int __init overlay_data_apply(const char *overlay_name, int *overlay_id)
>   * The first part of the function is _not_ normal overlay usage; it is
>   * finishing splicing the base overlay device tree into the live tree.
>   */
> -static __init void of_unittest_overlay_high_level(void)
> +static void of_unittest_overlay_high_level(struct kunit *test)
>  {
>  	struct device_node *last_sibling;
>  	struct device_node *np;
>  	struct device_node *of_symbols;
> -	struct device_node *overlay_base_symbols;
> +	struct device_node *overlay_base_symbols = 0;
>  	struct device_node **pprev;
>  	struct property *prop;
>  
> -	if (!overlay_base_root) {
> -		unittest(0, "overlay_base_root not initialized\n");
> -		return;
> -	}
> +	KUNIT_ASSERT_TRUE_MSG(test, overlay_base_root,
> +			      "overlay_base_root not initialized\n");
>  
>  	/*
>  	 * Could not fixup phandles in unittest_unflatten_overlay_base()
> @@ -2418,11 +2448,9 @@ static __init void of_unittest_overlay_high_level(void)
>  	for_each_child_of_node(overlay_base_root, np) {
>  		struct device_node *base_child;
>  		for_each_child_of_node(of_root, base_child) {
> -			if (!strcmp(np->full_name, base_child->full_name)) {
> -				unittest(0, "illegal node name in overlay_base %pOFn",
> -					 np);
> -				return;
> -			}
> +			KUNIT_ASSERT_STRNEQ_MSG(
> +				test, np->full_name, base_child->full_name,
> +				"illegal node name in overlay_base %pOFn", np);
>  		}
>  	}
>  
> @@ -2456,21 +2484,24 @@ static __init void of_unittest_overlay_high_level(void)
>  
>  			new_prop = __of_prop_dup(prop, GFP_KERNEL);
>  			if (!new_prop) {
> -				unittest(0, "__of_prop_dup() of '%s' from overlay_base node __symbols__",
> -					 prop->name);
> +				KUNIT_FAIL(test,
> +					   "__of_prop_dup() of '%s' from overlay_base node __symbols__",
> +					   prop->name);
>  				goto err_unlock;
>  			}
>  			if (__of_add_property(of_symbols, new_prop)) {
>  				/* "name" auto-generated by unflatten */
>  				if (!strcmp(new_prop->name, "name"))
>  					continue;
> -				unittest(0, "duplicate property '%s' in overlay_base node __symbols__",
> -					 prop->name);
> +				KUNIT_FAIL(test,
> +					   "duplicate property '%s' in overlay_base node __symbols__",
> +					   prop->name);
>  				goto err_unlock;
>  			}
>  			if (__of_add_property_sysfs(of_symbols, new_prop)) {
> -				unittest(0, "unable to add property '%s' in overlay_base node __symbols__ to sysfs",
> -					 prop->name);
> +				KUNIT_FAIL(test,
> +					   "unable to add property '%s' in overlay_base node __symbols__ to sysfs",
> +					   prop->name);
>  				goto err_unlock;
>  			}
>  		}
> @@ -2481,20 +2512,24 @@ static __init void of_unittest_overlay_high_level(void)
>  
>  	/* now do the normal overlay usage test */
>  
> -	unittest(overlay_data_apply("overlay", NULL),
> -		 "Adding overlay 'overlay' failed\n");
> +	KUNIT_EXPECT_TRUE_MSG(test, overlay_data_apply("overlay", NULL),
> +			      "Adding overlay 'overlay' failed\n");
>  
> -	unittest(overlay_data_apply("overlay_bad_add_dup_node", NULL),
> -		 "Adding overlay 'overlay_bad_add_dup_node' failed\n");
> +	KUNIT_EXPECT_TRUE_MSG(
> +		test, overlay_data_apply("overlay_bad_add_dup_node", NULL),
> +		"Adding overlay 'overlay_bad_add_dup_node' failed\n");
>  
> -	unittest(overlay_data_apply("overlay_bad_add_dup_prop", NULL),
> -		 "Adding overlay 'overlay_bad_add_dup_prop' failed\n");
> +	KUNIT_EXPECT_TRUE_MSG(
> +		test, overlay_data_apply("overlay_bad_add_dup_prop", NULL),
> +		"Adding overlay 'overlay_bad_add_dup_prop' failed\n");
>  
> -	unittest(overlay_data_apply("overlay_bad_phandle", NULL),
> -		 "Adding overlay 'overlay_bad_phandle' failed\n");
> +	KUNIT_EXPECT_TRUE_MSG(
> +		test, overlay_data_apply("overlay_bad_phandle", NULL),
> +		"Adding overlay 'overlay_bad_phandle' failed\n");
>  
> -	unittest(overlay_data_apply("overlay_bad_symbol", NULL),
> -		 "Adding overlay 'overlay_bad_symbol' failed\n");
> +	KUNIT_EXPECT_TRUE_MSG(
> +		test, overlay_data_apply("overlay_bad_symbol", NULL),
> +		"Adding overlay 'overlay_bad_symbol' failed\n");
>  
>  	return;
>  
> @@ -2504,57 +2539,52 @@ static __init void of_unittest_overlay_high_level(void)
>  
>  #else
>  
> -static inline __init void of_unittest_overlay_high_level(void) {}
> +static inline void of_unittest_overlay_high_level(struct kunit *test) {}
>  
>  #endif
>  
> -static int __init of_unittest(void)
> +static int of_test_init(struct kunit *test)
>  {
> -	struct device_node *np;
> -	int res;
> -
>  	/* adding data for unittest */
> -	res = unittest_data_add();
> -	if (res)
> -		return res;
> +	KUNIT_ASSERT_EQ(test, 0, unittest_data_add());
> +
>  	if (!of_aliases)
>  		of_aliases = of_find_node_by_path("/aliases");
>  
> -	np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
> -	if (!np) {
> -		pr_info("No testcase data in device tree; not running tests\n");
> -		return 0;
> -	}
> -	of_node_put(np);
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, of_find_node_by_path(
> +		"/testcase-data/phandle-tests/consumer-a"));
>  
>  	if (IS_ENABLED(CONFIG_UML))
>  		unflatten_device_tree();
>  
> -	pr_info("start of unittest - you will see error messages\n");
> -	of_unittest_check_tree_linkage();
> -	of_unittest_check_phandles();
> -	of_unittest_find_node_by_name();
> -	of_unittest_dynamic();
> -	of_unittest_parse_phandle_with_args();
> -	of_unittest_parse_phandle_with_args_map();
> -	of_unittest_printf();
> -	of_unittest_property_string();
> -	of_unittest_property_copy();
> -	of_unittest_changeset();
> -	of_unittest_parse_interrupts();
> -	of_unittest_parse_interrupts_extended();
> -	of_unittest_match_node();
> -	of_unittest_platform_populate();
> -	of_unittest_overlay();
> +	return 0;
> +}
>  
> +static struct kunit_case of_test_cases[] = {
> +	KUNIT_CASE(of_unittest_check_tree_linkage),
> +	KUNIT_CASE(of_unittest_check_phandles),
> +	KUNIT_CASE(of_unittest_find_node_by_name),
> +	KUNIT_CASE(of_unittest_dynamic),
> +	KUNIT_CASE(of_unittest_parse_phandle_with_args),
> +	KUNIT_CASE(of_unittest_parse_phandle_with_args_map),
> +	KUNIT_CASE(of_unittest_printf),
> +	KUNIT_CASE(of_unittest_property_string),
> +	KUNIT_CASE(of_unittest_property_copy),
> +	KUNIT_CASE(of_unittest_changeset),
> +	KUNIT_CASE(of_unittest_parse_interrupts),
> +	KUNIT_CASE(of_unittest_parse_interrupts_extended),
> +	KUNIT_CASE(of_unittest_match_node),
> +	KUNIT_CASE(of_unittest_platform_populate),
> +	KUNIT_CASE(of_unittest_overlay),
>  	/* Double check linkage after removing testcase data */
> -	of_unittest_check_tree_linkage();
> -
> -	of_unittest_overlay_high_level();
> -
> -	pr_info("end of unittest - %i passed, %i failed\n",
> -		unittest_results.passed, unittest_results.failed);
> +	KUNIT_CASE(of_unittest_check_tree_linkage),
> +	KUNIT_CASE(of_unittest_overlay_high_level),
> +	{},
> +};
>  
> -	return 0;
> -}
> -late_initcall(of_unittest);
> +static struct kunit_module of_test_module = {
> +	.name = "of-test",
> +	.init = of_test_init,
> +	.test_cases = of_test_cases,
> +};
> +module_test(of_test_module);
> 


^ permalink raw reply	[flat|nested] 316+ messages in thread

* Re: [RFC v4 08/17] kunit: test: add support for test abort
@ 2019-02-18 19:52         ` Frank Rowand
  0 siblings, 0 replies; 316+ messages in thread
From: Frank Rowand @ 2019-02-18 19:52 UTC (permalink / raw)
  To: Brendan Higgins, keescook, mcgrof, shuah, robh, kieran.bingham
  Cc: gregkh, joel, mpe, joe, brakmo, rostedt, Tim.Bird, khilman,
	julia.lawall, linux-kselftest, kunit-dev, linux-kernel, jdike,
	richard, linux-um, daniel, dri-devel, dan.j.williams,
	linux-nvdimm, knut.omang, devicetree, pmladek, Alexander.Levin,
	amir73il, dan.carpenter, wfg

On 2/14/19 1:37 PM, Brendan Higgins wrote:
> Add support for aborting/bailing out of test cases. Needed for
> implementing assertions.
> 
> Signed-off-by: Brendan Higgins <brendanhiggins@google.com>
> ---
> Changes Since Last Version
>  - This patch is new introducing a new cross-architecture way to abort
>    out of a test case (needed for KUNIT_ASSERT_*, see next patch for
>    details).
>  - On a side note, this is not a complete replacement for the UML abort
>    mechanism, but covers the majority of necessary functionality. UML
>    architecture specific features have been dropped from the initial
>    patchset.
> ---
>  include/kunit/test.h |  24 +++++
>  kunit/Makefile       |   3 +-
>  kunit/test-test.c    | 127 ++++++++++++++++++++++++++
>  kunit/test.c         | 208 +++++++++++++++++++++++++++++++++++++++++--
>  4 files changed, 353 insertions(+), 9 deletions(-)
>  create mode 100644 kunit/test-test.c

< snip >

> diff --git a/kunit/test.c b/kunit/test.c
> index d18c50d5ed671..6e5244642ab07 100644
> --- a/kunit/test.c
> +++ b/kunit/test.c
> @@ -6,9 +6,9 @@
>   * Author: Brendan Higgins <brendanhiggins@google.com>
>   */
>  
> -#include <linux/sched.h>
>  #include <linux/sched/debug.h>
> -#include <os.h>
> +#include <linux/completion.h>
> +#include <linux/kthread.h>
>  #include <kunit/test.h>
>  
>  static bool kunit_get_success(struct kunit *test)
> @@ -32,6 +32,27 @@ static void kunit_set_success(struct kunit *test, bool success)
>  	spin_unlock_irqrestore(&test->lock, flags);
>  }
>  
> +static bool kunit_get_death_test(struct kunit *test)
> +{
> +	unsigned long flags;
> +	bool death_test;
> +
> +	spin_lock_irqsave(&test->lock, flags);
> +	death_test = test->death_test;
> +	spin_unlock_irqrestore(&test->lock, flags);
> +
> +	return death_test;
> +}
> +
> +static void kunit_set_death_test(struct kunit *test, bool death_test)
> +{
> +	unsigned long flags;
> +
> +	spin_lock_irqsave(&test->lock, flags);
> +	test->death_test = death_test;
> +	spin_unlock_irqrestore(&test->lock, flags);
> +}
> +
>  static int kunit_vprintk_emit(const struct kunit *test,
>  			      int level,
>  			      const char *fmt,
> @@ -70,13 +91,29 @@ static void kunit_fail(struct kunit *test, struct kunit_stream *stream)
>  	stream->commit(stream);
>  }
>  
> +static void __noreturn kunit_abort(struct kunit *test)
> +{
> +	kunit_set_death_test(test, true);
> +
> +	test->try_catch.throw(&test->try_catch);
> +
> +	/*
> +	 * Throw could not abort from test.
> +	 */
> +	kunit_err(test, "Throw could not abort from test!");
> +	show_stack(NULL, NULL);
> +	BUG();

kunit_abort() is what will be called as the result of an assert failure.

BUG(), which is a panic, which crashes the system, is not acceptable
in the Linux kernel.  You will just annoy Linus if you submit this.

-Frank

< snip >
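
A minimal sketch, not taken from the patch or from this thread, of how
kunit_abort() could report the same failure without BUG().  It reuses
only the helpers visible in the quoted diff (kunit_set_death_test(),
the try_catch throw(), kunit_err()) plus the standard WARN() macro, and
drops __noreturn since the function may now return:

static void kunit_abort(struct kunit *test)
{
	kunit_set_death_test(test, true);

	test->try_catch.throw(&test->try_catch);

	/*
	 * throw() returning at all is a framework bug; warn (which also
	 * dumps a stack trace) and let the test exit instead of panicking
	 * the whole kernel.
	 */
	WARN(1, "KUnit: throw() was unable to abort the test\n");
	kunit_err(test, "Throw could not abort from test!");
}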

^ permalink raw reply	[flat|nested] 316+ messages in thread

* Re: [RFC v4 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework
  2019-02-14 21:37 ` brendanhiggins
  (?)
  (?)
@ 2019-02-18 20:02   ` frowand.list
  -1 siblings, 0 replies; 316+ messages in thread
From: Frank Rowand @ 2019-02-18 20:02 UTC (permalink / raw)
  To: Brendan Higgins, keescook, mcgrof, shuah, robh, kieran.bingham
  Cc: gregkh, joel, mpe, joe, brakmo, rostedt, Tim.Bird, khilman,
	julia.lawall, linux-kselftest, kunit-dev, linux-kernel, jdike,
	richard, linux-um, daniel, dri-devel, dan.j.williams,
	linux-nvdimm, knut.omang, devicetree, pmladek, Alexander.Levin,
	amir73il, dan.carpenter, wfg

On 2/14/19 1:37 PM, Brendan Higgins wrote:
> This patch set proposes KUnit, a lightweight unit testing and mocking
> framework for the Linux kernel.
> 
> Unlike Autotest and kselftest, KUnit is a true unit testing framework;
> it does not require installing the kernel on a test machine or in a VM
> and does not require tests to be written in userspace running on a host
> kernel. Additionally, KUnit is fast: From invocation to completion KUnit
> can run several dozen tests in under a second. Currently, the entire
> KUnit test suite for KUnit runs in under a second from the initial
> invocation (build time excluded).
> 
> KUnit is heavily inspired by JUnit, Python's unittest.mock, and
> Googletest/Googlemock for C++. KUnit provides facilities for defining
> unit test cases, grouping related test cases into test suites, providing
> common infrastructure for running tests, mocking, spying, and much more.
> 
> ## What's so special about unit testing?
> 
> A unit test is supposed to test a single unit of code in isolation,
> hence the name. There should be no dependencies outside the control of
> the test; this means no external dependencies, which makes tests orders
> of magnitudes faster. Likewise, since there are no external dependencies,
> there are no hoops to jump through to run the tests. Additionally, this
> makes unit tests deterministic: a failing unit test always indicates a
> problem. Finally, because unit tests necessarily have finer granularity,
> they are able to test all code paths easily solving the classic problem
> of difficulty in exercising error handling code.
> 
> ## Is KUnit trying to replace other testing frameworks for the kernel?
> 
> No. Most existing tests for the Linux kernel are end-to-end tests, which
> have their place. A well tested system has lots of unit tests, a
> reasonable number of integration tests, and some end-to-end tests. KUnit
> is just trying to address the unit test space which is currently not
> being addressed.
> 
> ## More information on KUnit
> 
> There is a bunch of documentation near the end of this patch set that
> describes how to use KUnit and best practices for writing unit tests.
> For convenience I am hosting the compiled docs here:
> https://google.github.io/kunit-docs/third_party/kernel/docs/
> Additionally for convenience, I have applied these patches to a branch:
> https://kunit.googlesource.com/linux/+/kunit/rfc/5.0-rc5/v4
> The repo may be cloned with:
> git clone https://kunit.googlesource.com/linux
> This patchset is on the kunit/rfc/5.0-rc5/v4 branch.
> 
> ## Changes Since Last Version
> 
>  - Got KUnit working on (hypothetically) all architectures (tested on
>    x86), as per Rob's (and other's) request
>  - Punting all KUnit features/patches depending on UML for now.
>  - Broke out UML specific support into arch/um/* as per "[RFC v3 01/19]
>    kunit: test: add KUnit test runner core", as requested by Luis.
>  - Added support to kunit_tool to allow it to build kernels in external
>    directories, as suggested by Kieran.
>  - Added a UML defconfig, and a config fragment for KUnit as suggested
>    by Kieran and Luis.
>  - Cleaned up, and reformatted a bunch of stuff.
> 

I have not read through the patches in any detail.  I have read some of
the code to try to understand the patches to the devicetree unit tests.
So that may limit how valid my comments below are.

I found the code difficult to read in places where it should have been
much simpler to read.  Structuring the code in a pseudo-object-oriented
style meant that everywhere in a code path that I encountered a dynamic
function call, I had to go find where that dynamic function call was
initialized (and, being the cautious person that I am, verify that the
value of that dynamic function call was not changed anywhere else).
With primitive vi and tags, that search would instead have been a
simple key press (or at worst a few keys) if hard-coded function
calls had been used instead of dynamic function calls.  In the code
paths that I looked at, I did not see any case of a dynamic function
being anything other than the value it was originally initialized as.
There may be such cases; I did not read the entire patch set.  There
may also be cases envisioned in the architect's mind of how this
flexibility may be of future value.  Dunno.

-Frank
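
As a purely illustrative aside on the readability point above, a toy
program (plain C, not KUnit code, all names hypothetical) contrasting a
dynamic function call with a hard-coded one:

#include <stdio.h>

struct printer {
	void (*emit)(const char *msg);	/* dynamic: assigned elsewhere */
};

static void emit_to_console(const char *msg)
{
	printf("%s\n", msg);
}

int main(void)
{
	struct printer p = { .emit = emit_to_console };

	p.emit("via function pointer");		/* reader must find the initializer */
	emit_to_console("via direct call");	/* one tag jump to the definition */
	return 0;
}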

^ permalink raw reply	[flat|nested] 316+ messages in thread

* Re: [RFC v4 10/17] kunit: test: add test managed resource tests
@ 2019-02-19 23:20           ` Brendan Higgins
  0 siblings, 0 replies; 316+ messages in thread
From: Brendan Higgins @ 2019-02-19 23:20 UTC (permalink / raw)
  To: Stephen Boyd
  Cc: Frank Rowand, Kees Cook, Kieran Bingham, Luis Chamberlain,
	Rob Herring, shuah, Greg KH, Joel Stanley, Michael Ellerman,
	Joe Perches, brakmo, Steven Rostedt, Bird, Timothy, Kevin Hilman,
	Julia Lawall, linux-kselftest, kunit-dev,
	Linux Kernel Mailing List, Jeff Dike, Richard Weinberger,
	linux-um, Daniel Vetter, dri-devel, Dan Williams, linux-nvdimm,
	Knut Omang, devicetree, Petr Mladek, Sasha Levin, amir73il,
	dan.carpenter, wfg, Avinash Kondareddy

On Fri, Feb 15, 2019 at 12:54 PM Stephen Boyd <sboyd@kernel.org> wrote:
>
> Quoting Brendan Higgins (2019-02-14 13:37:22)
> > diff --git a/kunit/test-test.c b/kunit/test-test.c
> > index 0b4ad6690310d..bb34431398526 100644
> > --- a/kunit/test-test.c
> > +++ b/kunit/test-test.c
> [...]
> > +
> > +#define KUNIT_RESOURCE_NUM 5
> > +static void kunit_resource_test_cleanup_resources(struct kunit *test)
> > +{
> > +       int i;
> > +       struct kunit_test_resource_context *ctx = test->priv;
> > +       struct kunit_resource *resources[KUNIT_RESOURCE_NUM];
> > +
> > +       for (i = 0; i < KUNIT_RESOURCE_NUM; i++) {
>
> Nitpick: This could use ARRAY_SIZE(resources) and then the #define could
> be dropped.

Noted. Will fix in next revision.
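
A minimal sketch of the quoted loop with that suggestion applied,
assuming the same fake_resource_init()/fake_resource_free() helpers and
ctx from the quoted test; ARRAY_SIZE() is the standard kernel macro:

	struct kunit_resource *resources[5];
	int i;

	for (i = 0; i < ARRAY_SIZE(resources); i++) {
		resources[i] = kunit_alloc_resource(&ctx->test,
						    fake_resource_init,
						    fake_resource_free,
						    ctx);
	}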

>
> > +               resources[i] = kunit_alloc_resource(&ctx->test,
> > +                                                   fake_resource_init,
> > +                                                   fake_resource_free,
> > +                                                   ctx);
> > +       }
> > +
> > +       kunit_cleanup(&ctx->test);
> > +
> > +       KUNIT_EXPECT_TRUE(test, list_empty(&ctx->test.resources));
> > +}
> > +
> [...]
> > +
> > +static struct kunit_case kunit_resource_test_cases[] = {
>
> Can these arrays be const?

There is some private mutable state inside of `struct kunit_case` that
would be kind of annoying to pull out; I don't think it would make it
cleaner.

^ permalink raw reply	[flat|nested] 316+ messages in thread

* Re: [RFC v4 02/17] kunit: test: add test resource management API
@ 2019-02-19 23:24       ` Brendan Higgins
  0 siblings, 0 replies; 316+ messages in thread
From: Brendan Higgins @ 2019-02-19 23:24 UTC (permalink / raw)
  To: Stephen Boyd
  Cc: Frank Rowand, Kees Cook, Kieran Bingham, Luis Chamberlain,
	Rob Herring, shuah, Greg KH, Joel Stanley, Michael Ellerman,
	Joe Perches, brakmo, Steven Rostedt, Bird, Timothy, Kevin Hilman,
	Julia Lawall, linux-kselftest, kunit-dev,
	Linux Kernel Mailing List, Jeff Dike, Richard Weinberger,
	linux-um, Daniel Vetter, dri-devel, Dan Williams, linux-nvdimm,
	Knut Omang, devicetree, Petr Mladek, Sasha Levin, amir73il,
	dan.carpenter, wfg

On Fri, Feb 15, 2019 at 1:01 PM Stephen Boyd <sboyd@kernel.org> wrote:
>
> Quoting Brendan Higgins (2019-02-14 13:37:14)
> > @@ -104,6 +167,7 @@ struct kunit {
> >         const char *name; /* Read only after initialization! */
> >         spinlock_t lock; /* Gaurds all mutable test state. */
> >         bool success; /* Protected by lock. */
> > +       struct list_head resources; /* Protected by lock. */
> >         void (*vprintk)(const struct kunit *test,
> >                         const char *level,
> >                         struct va_format *vaf);
> > @@ -127,6 +191,51 @@ int kunit_run_tests(struct kunit_module *module);
> >                 } \
> >                 late_initcall(module_kunit_init##module)
> >
> > +/**
> > + * kunit_alloc_resource() - Allocates a *test managed resource*.
> > + * @test: The test context object.
> > + * @init: a user supplied function to initialize the resource.
> > + * @free: a user supplied function to free the resource.
> > + * @context: for the user to pass in arbitrary data.
>
> Nitpick: "pass in arbitrary data to the init function"? Maybe that
> provides some more clarity.

I think that makes sense. Will fix in next revision.
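
A sketch of how the kernel-doc from the quoted hunk might read with
Stephen's wording applied (illustration only, not the final patch):

/**
 * kunit_alloc_resource() - Allocates a *test managed resource*.
 * @test: The test context object.
 * @init: a user supplied function to initialize the resource.
 * @free: a user supplied function to free the resource.
 * @context: for the user to pass in arbitrary data to the init function.
 */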

^ permalink raw reply	[flat|nested] 316+ messages in thread

* Re: [RFC v4 15/17] of: unittest: migrate tests to run on KUnit
@ 2019-02-20  2:24             ` Brendan Higgins
  0 siblings, 0 replies; 316+ messages in thread
From: Brendan Higgins @ 2019-02-20  2:24 UTC (permalink / raw)
  To: Frank Rowand
  Cc: Kees Cook, Luis Chamberlain, shuah, Rob Herring, Kieran Bingham,
	Greg KH, Joel Stanley, Michael Ellerman, Joe Perches, brakmo,
	Steven Rostedt, Bird, Timothy, Kevin Hilman, Julia Lawall,
	linux-kselftest, kunit-dev, Linux Kernel Mailing List, Jeff Dike,
	Richard Weinberger, linux-um, Daniel Vetter, dri-devel,
	Dan Williams, linux-nvdimm, Knut Omang, devicetree, Petr Mladek,
	Sasha Levin, amir73il, dan.carpenter, wfg

On Fri, Feb 15, 2019 at 4:24 PM Frank Rowand <frowand.list@gmail.com> wrote:
>
> On 2/14/19 1:37 PM, Brendan Higgins wrote:
> > Migrate tests without any cleanup, or modifying test logic in any way, to
> > run under KUnit using the KUnit expectation and assertion API.
> >
> > Signed-off-by: Brendan Higgins <brendanhiggins@google.com>
> > ---
> >  drivers/of/Kconfig    |    1 +
> >  drivers/of/unittest.c | 1310 +++++++++++++++++++++--------------------
> >  2 files changed, 671 insertions(+), 640 deletions(-)
> >
> > diff --git a/drivers/of/Kconfig b/drivers/of/Kconfig
> > index ad3fcad4d75b8..f309399deac20 100644
> > --- a/drivers/of/Kconfig
> > +++ b/drivers/of/Kconfig
> > @@ -15,6 +15,7 @@ if OF
> >  config OF_UNITTEST
> >       bool "Device Tree runtime unit tests"
> >       depends on !SPARC
> > +     depends on KUNIT
> >       select IRQ_DOMAIN
> >       select OF_EARLY_FLATTREE
> >       select OF_RESOLVE
> > diff --git a/drivers/of/unittest.c b/drivers/of/unittest.c
>
> These comments are from applying the patches to 5.0-rc3.
>
> The final hunk of this patch fails to apply because it depends upon
>
>    [PATCH v1 0/1] of: unittest: unflatten device tree on UML when testing.
>

Whoops, I probably should have made a note of that in the commit
description or cover letter, sorry.

> If I apply that patch then I can apply patches 15 through 17.
>
> If I apply patches 1 through 14 and boot the uml kernel then the devicetree
> unittest result is:
>
>   ### dt-test ### FAIL of_unittest_overlay_high_level():2372 overlay_base_root not initialized
>   ### dt-test ### end of unittest - 219 passed, 1 failed
>
> This is as expected from your previous reports, and is fixed after
> applying
>
>    [PATCH v1 0/1] of: unittest: unflatten device tree on UML when testing.
>
> with the devicetree unittest result of:
>
>    ### dt-test ### end of unittest - 224 passed, 0 failed
>
> After adding patch 15, there are a lot of "unittest internal error" messages.

Yeah, I meant to ask you about that. I thought it was due to a change
you made, but after further examination, just now, I found it was my
fault. Sorry for not mentioning that anywhere. I will fix it in v5.

Thanks!

^ permalink raw reply	[flat|nested] 316+ messages in thread

* Re: [RFC v4 08/17] kunit: test: add support for test abort
@ 2019-02-20  3:39             ` Brendan Higgins
  0 siblings, 0 replies; 316+ messages in thread
From: Brendan Higgins @ 2019-02-20  3:39 UTC (permalink / raw)
  To: Frank Rowand
  Cc: Kees Cook, Luis Chamberlain, shuah, Rob Herring, Kieran Bingham,
	Greg KH, Joel Stanley, Michael Ellerman, Joe Perches, brakmo,
	Steven Rostedt, Bird, Timothy, Kevin Hilman, Julia Lawall,
	linux-kselftest, kunit-dev, Linux Kernel Mailing List, Jeff Dike,
	Richard Weinberger, linux-um, Daniel Vetter, dri-devel,
	Dan Williams, linux-nvdimm, Knut Omang, devicetree, Petr Mladek,
	Sasha Levin, Amir Goldstein, dan.carpenter, wfg

On Mon, Feb 18, 2019 at 11:52 AM Frank Rowand <frowand.list@gmail.com> wrote:
>
> On 2/14/19 1:37 PM, Brendan Higgins wrote:
> > Add support for aborting/bailing out of test cases. Needed for
> > implementing assertions.
> >
> > Signed-off-by: Brendan Higgins <brendanhiggins@google.com>
> > ---
> > Changes Since Last Version
> >  - This patch is new introducing a new cross-architecture way to abort
> >    out of a test case (needed for KUNIT_ASSERT_*, see next patch for
> >    details).
> >  - On a side note, this is not a complete replacement for the UML abort
> >    mechanism, but covers the majority of necessary functionality. UML
> >    architecture specific features have been dropped from the initial
> >    patchset.
> > ---
> >  include/kunit/test.h |  24 +++++
> >  kunit/Makefile       |   3 +-
> >  kunit/test-test.c    | 127 ++++++++++++++++++++++++++
> >  kunit/test.c         | 208 +++++++++++++++++++++++++++++++++++++++++--
> >  4 files changed, 353 insertions(+), 9 deletions(-)
> >  create mode 100644 kunit/test-test.c
>
> < snip >
>
> > diff --git a/kunit/test.c b/kunit/test.c
> > index d18c50d5ed671..6e5244642ab07 100644
> > --- a/kunit/test.c
> > +++ b/kunit/test.c
> > @@ -6,9 +6,9 @@
> >   * Author: Brendan Higgins <brendanhiggins@google.com>
> >   */
> >
> > -#include <linux/sched.h>
> >  #include <linux/sched/debug.h>
> > -#include <os.h>
> > +#include <linux/completion.h>
> > +#include <linux/kthread.h>
> >  #include <kunit/test.h>
> >
> >  static bool kunit_get_success(struct kunit *test)
> > @@ -32,6 +32,27 @@ static void kunit_set_success(struct kunit *test, bool success)
> >       spin_unlock_irqrestore(&test->lock, flags);
> >  }
> >
> > +static bool kunit_get_death_test(struct kunit *test)
> > +{
> > +     unsigned long flags;
> > +     bool death_test;
> > +
> > +     spin_lock_irqsave(&test->lock, flags);
> > +     death_test = test->death_test;
> > +     spin_unlock_irqrestore(&test->lock, flags);
> > +
> > +     return death_test;
> > +}
> > +
> > +static void kunit_set_death_test(struct kunit *test, bool death_test)
> > +{
> > +     unsigned long flags;
> > +
> > +     spin_lock_irqsave(&test->lock, flags);
> > +     test->death_test = death_test;
> > +     spin_unlock_irqrestore(&test->lock, flags);
> > +}
> > +
> >  static int kunit_vprintk_emit(const struct kunit *test,
> >                             int level,
> >                             const char *fmt,
> > @@ -70,13 +91,29 @@ static void kunit_fail(struct kunit *test, struct kunit_stream *stream)
> >       stream->commit(stream);
> >  }
> >
> > +static void __noreturn kunit_abort(struct kunit *test)
> > +{
> > +     kunit_set_death_test(test, true);
> > +
> > +     test->try_catch.throw(&test->try_catch);
> > +
> > +     /*
> > +      * Throw could not abort from test.
> > +      */
> > +     kunit_err(test, "Throw could not abort from test!");
> > +     show_stack(NULL, NULL);
> > +     BUG();
>
> kunit_abort() is what will be called as the result of an assert failure.

Yep. Does that need to be clarified somewhere?

>
> BUG(), which is a panic, which is crashing the system, is not acceptable
> in the Linux kernel.  You will just annoy Linus if you submit this.

Sorry, I thought this was an acceptable use case since a) this should
never be compiled in a production kernel, b) we are in a pretty bad,
unpredictable state if we get here and keep going. I think you might
have said elsewhere that you think "a" is not valid? In any case, I
can replace this with a WARN; would that be acceptable?
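
For context, a rough sketch of the WARN-based alternative floated above
(illustration only, not the actual patch: it drops the __noreturn
annotation, reuses the helpers from the quoted hunk, and leaves open how
callers should proceed if the throw really does fail):

static void kunit_abort(struct kunit *test)
{
	kunit_set_death_test(test, true);

	/* Expected to jump out of the test case and never return. */
	test->try_catch.throw(&test->try_catch);

	/*
	 * Reaching this point means the abort mechanism itself is broken:
	 * complain loudly and mark the test failed, but do not BUG() the
	 * whole kernel.
	 */
	WARN_ONCE(true, "KUnit: throw could not abort from test!\n");
	kunit_err(test, "Throw could not abort from test!");
	kunit_set_success(test, false);
}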

^ permalink raw reply	[flat|nested] 316+ messages in thread

* Re: [RFC v4 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework
@ 2019-02-20  6:34     ` Brendan Higgins
  0 siblings, 0 replies; 316+ messages in thread
From: Brendan Higgins @ 2019-02-20  6:34 UTC (permalink / raw)
  To: Frank Rowand
  Cc: Kees Cook, Luis Chamberlain, shuah, Rob Herring, Kieran Bingham,
	Greg KH, Joel Stanley, Michael Ellerman, Joe Perches, brakmo,
	Steven Rostedt, Bird, Timothy, Kevin Hilman, Julia Lawall,
	linux-kselftest, kunit-dev, Linux Kernel Mailing List, Jeff Dike,
	Richard Weinberger, linux-um, Daniel Vetter, dri-devel,
	Dan Williams, linux-nvdimm, Knut Omang, devicetree, Petr Mladek,
	Sasha Levin, Amir Goldstein, dan.carpenter, wfg

On Mon, Feb 18, 2019 at 12:02 PM Frank Rowand <frowand.list@gmail.com> wrote:
<snip>
> I have not read through the patches in any detail.  I have read some of
> the code to try to understand the patches to the devicetree unit tests.
> So that may limit how valid my comments below are.

No problem.

>
> I found the code difficult to read in places where it should have been
> much simpler to read.  Structuring the code in a pseudo object oriented
> style meant that everywhere in a code path that I encountered a dynamic
> function call, I had to go find where that dynamic function call was
> initialized (and being the cautious person that I am, verify that
> nowhere else was the value of that dynamic function call changed).  With
> primitive vi and tags, that search would have instead just been a
> simple key press (or at worst a few keys) if hard coded function
> calls were done instead of dynamic function calls.  In the code paths
> that I looked at, I did not see any case of a dynamic function being
> anything other than the value it was originally initialized as.
> There may be such cases, I did not read the entire patch set.  There
> may also be cases envisioned in the architects mind of how this
> flexibility may be of future value.  Dunno.

Yeah, a lot of it is intended to make architecture specific
implementations and some other future work easier. Some of it is also
for testing purposes. Admittedly some is for neither reason, but given
the heavy usage elsewhere, I figured there was no harm since it was
all private internal usage anyway.
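
To make the style question concrete, here is a tiny illustration of the
two call styles being discussed; every name in it is invented for the
example and none of it comes from the patch set.

struct try_catch_ops_sketch {
	/*
	 * "Dynamic function call": the target is resolved through a
	 * pointer, so a reader has to find where ->throw was assigned.
	 */
	void (*throw)(void *context);
};

/* One possible backend, e.g. a kthread-based implementation. */
static void kthread_throw(void *context)
{
	/* ... */
}

static void abort_via_ops(struct try_catch_ops_sketch *ops, void *context)
{
	/* Indirect call: allows per-architecture backends and fakes in
	 * tests, at the cost of an extra hop when reading the code. */
	ops->throw(context);
}

static void abort_direct(void *context)
{
	/* Hard-coded call: fixed at build time, but trivially greppable. */
	kthread_throw(context);
}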

^ permalink raw reply	[flat|nested] 316+ messages in thread

* Re: [RFC v4 08/17] kunit: test: add support for test abort
@ 2019-02-20  6:44               ` Frank Rowand
  0 siblings, 0 replies; 316+ messages in thread
From: Frank Rowand @ 2019-02-20  6:44 UTC (permalink / raw)
  To: Brendan Higgins
  Cc: Kees Cook, Luis Chamberlain, shuah, Rob Herring, Kieran Bingham,
	Greg KH, Joel Stanley, Michael Ellerman, Joe Perches, brakmo,
	Steven Rostedt, Bird, Timothy, Kevin Hilman, Julia Lawall,
	linux-kselftest, kunit-dev, Linux Kernel Mailing List, Jeff Dike,
	Richard Weinberger, linux-um, Daniel Vetter, dri-devel,
	Dan Williams, linux-nvdimm, Knut Omang, devicetree, Petr Mladek,
	Sasha Levin, Amir Goldstein, dan.carpenter, wfg

On 2/19/19 7:39 PM, Brendan Higgins wrote:
> On Mon, Feb 18, 2019 at 11:52 AM Frank Rowand <frowand.list@gmail.com> wrote:
>>
>> On 2/14/19 1:37 PM, Brendan Higgins wrote:
>>> Add support for aborting/bailing out of test cases. Needed for
>>> implementing assertions.
>>>
>>> Signed-off-by: Brendan Higgins <brendanhiggins@google.com>
>>> ---
>>> Changes Since Last Version
>>>  - This patch is new introducing a new cross-architecture way to abort
>>>    out of a test case (needed for KUNIT_ASSERT_*, see next patch for
>>>    details).
>>>  - On a side note, this is not a complete replacement for the UML abort
>>>    mechanism, but covers the majority of necessary functionality. UML
>>>    architecture specific features have been dropped from the initial
>>>    patchset.
>>> ---
>>>  include/kunit/test.h |  24 +++++
>>>  kunit/Makefile       |   3 +-
>>>  kunit/test-test.c    | 127 ++++++++++++++++++++++++++
>>>  kunit/test.c         | 208 +++++++++++++++++++++++++++++++++++++++++--
>>>  4 files changed, 353 insertions(+), 9 deletions(-)
>>>  create mode 100644 kunit/test-test.c
>>
>> < snip >
>>
>>> diff --git a/kunit/test.c b/kunit/test.c
>>> index d18c50d5ed671..6e5244642ab07 100644
>>> --- a/kunit/test.c
>>> +++ b/kunit/test.c
>>> @@ -6,9 +6,9 @@
>>>   * Author: Brendan Higgins <brendanhiggins@google.com>
>>>   */
>>>
>>> -#include <linux/sched.h>
>>>  #include <linux/sched/debug.h>
>>> -#include <os.h>
>>> +#include <linux/completion.h>
>>> +#include <linux/kthread.h>
>>>  #include <kunit/test.h>
>>>
>>>  static bool kunit_get_success(struct kunit *test)
>>> @@ -32,6 +32,27 @@ static void kunit_set_success(struct kunit *test, bool success)
>>>       spin_unlock_irqrestore(&test->lock, flags);
>>>  }
>>>
>>> +static bool kunit_get_death_test(struct kunit *test)
>>> +{
>>> +     unsigned long flags;
>>> +     bool death_test;
>>> +
>>> +     spin_lock_irqsave(&test->lock, flags);
>>> +     death_test = test->death_test;
>>> +     spin_unlock_irqrestore(&test->lock, flags);
>>> +
>>> +     return death_test;
>>> +}
>>> +
>>> +static void kunit_set_death_test(struct kunit *test, bool death_test)
>>> +{
>>> +     unsigned long flags;
>>> +
>>> +     spin_lock_irqsave(&test->lock, flags);
>>> +     test->death_test = death_test;
>>> +     spin_unlock_irqrestore(&test->lock, flags);
>>> +}
>>> +
>>>  static int kunit_vprintk_emit(const struct kunit *test,
>>>                             int level,
>>>                             const char *fmt,
>>> @@ -70,13 +91,29 @@ static void kunit_fail(struct kunit *test, struct kunit_stream *stream)
>>>       stream->commit(stream);
>>>  }
>>>
>>> +static void __noreturn kunit_abort(struct kunit *test)
>>> +{
>>> +     kunit_set_death_test(test, true);
>>> +
>>> +     test->try_catch.throw(&test->try_catch);
>>> +
>>> +     /*
>>> +      * Throw could not abort from test.
>>> +      */
>>> +     kunit_err(test, "Throw could not abort from test!");
>>> +     show_stack(NULL, NULL);
>>> +     BUG();
>>
>> kunit_abort() is what will be called as the result of an assert failure.
> 
> Yep. Does that need to be clarified somewhere?
>>
>> BUG(), which is a panic, which is crashing the system is not acceptable
>> in the Linux kernel.  You will just annoy Linus if you submit this.
> 
> Sorry, I thought this was an acceptable use case since a) this should
> never be compiled into a production kernel, and b) we are in a pretty bad,
> unpredictable state if we get here and keep going. I think you might
> have said elsewhere that you think "a" is not valid? In any case, I
> can replace this with a WARN; would that be acceptable?

A WARN may or may not make sense, depending on the context.  It may
be sufficient to simply report a test failure (as in the old version
of case (2) below).

Answers to "a)" and "b)":

a) it might be in a production kernel

a') it is not acceptable in my development kernel either

b) No.  You don't crash a developer's kernel either unless it is
required to avoid data corruption.

b') And you can not do replacements like:

(1) in of_unittest_check_tree_linkage()

-----  old  -----

        if (!of_root)
                return;

-----  new  -----

        KUNIT_ASSERT_NOT_ERR_OR_NULL(test, of_root);

(2) in of_unittest_property_string()

-----  old  -----

        /* of_property_read_string_index() tests */
        rc = of_property_read_string_index(np, "string-property", 0, strings);
        unittest(rc == 0 && !strcmp(strings[0], "foobar"), "of_property_read_string_index() failure; rc=%i\n", rc);

-----  new  -----

        /* of_property_read_string_index() tests */
        rc = of_property_read_string_index(np, "string-property", 0, strings);
        KUNIT_ASSERT_EQ(test, rc, 0);
        KUNIT_EXPECT_STREQ(test, strings[0], "foobar");


If a test fails, that is no reason to abort testing.  The remainder of the unit
tests can still run.  There may be cascading failures, but that is ok.
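
For illustration, a minimal sketch of what a non-aborting conversion of case
(2) could look like, assuming the KUNIT_EXPECT_EQ/KUNIT_EXPECT_STREQ macros
from this series; an expectation records the failure but lets the rest of the
test keep running:

        /* of_property_read_string_index() tests */
        rc = of_property_read_string_index(np, "string-property", 0, strings);
        KUNIT_EXPECT_EQ(test, rc, 0);
        if (rc == 0)
                KUNIT_EXPECT_STREQ(test, strings[0], "foobar");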

^ permalink raw reply	[flat|nested] 316+ messages in thread

* Re: [RFC v4 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework
  2019-02-20  6:34     ` Brendan Higgins
                         ` (2 preceding siblings ...)
  (?)
@ 2019-02-20  6:46       ` Frank Rowand
  -1 siblings, 0 replies; 316+ messages in thread
From: Frank Rowand @ 2019-02-20  6:46 UTC (permalink / raw)
  To: Brendan Higgins
  Cc: Kees Cook, Luis Chamberlain, shuah, Rob Herring, Kieran Bingham,
	Greg KH, Joel Stanley, Michael Ellerman, Joe Perches, brakmo,
	Steven Rostedt, Bird, Timothy, Kevin Hilman, Julia Lawall,
	linux-kselftest, kunit-dev, Linux Kernel Mailing List, Jeff Dike,
	Richard Weinberger, linux-um, Daniel Vetter, dri-devel,
	Dan Williams, linux-nvdimm, Knut Omang, devicetree, Petr Mladek,
	Sasha Levin, Amir Goldstein, dan.carpenter, wfg

On 2/19/19 10:34 PM, Brendan Higgins wrote:
> On Mon, Feb 18, 2019 at 12:02 PM Frank Rowand <frowand.list@gmail.com> wrote:
> <snip>
>> I have not read through the patches in any detail.  I have read some of
>> the code to try to understand the patches to the devicetree unit tests.
>> So that may limit how valid my comments below are.
> 
> No problem.
> 
>>
>> I found the code difficult to read in places where it should have been
>> much simpler to read.  Structuring the code in a pseudo object oriented
>> style meant that everywhere in a code path that I encountered a dynamic
>> function call, I had to go find where that dynamic function call was
>> initialized (and, being the cautious person that I am, verify that
>> nowhere else was the value of that dynamic function call changed).  With
>> primitive vi and tags, that search would have instead just been a
>> simple key press (or at worst a few keys) if hard coded function
>> calls were done instead of dynamic function calls.  In the code paths
>> that I looked at, I did not see any case of a dynamic function being
>> anything other than the value it was originally initialized as.
>> There may be such cases, I did not read the entire patch set.  There
>> may also be cases envisioned in the architect's mind of how this
>> flexibility may be of future value.  Dunno.
> 
> Yeah, a lot of it is intended to make architecture specific
> implementations and some other future work easier. Some of it is also
> for testing purposes. Admittedly some is for neither reason, but given
> the heavy usage elsewhere, I figured there was no harm since it was
> all private internal usage anyway.
> 

Increasing the cost for me (and all the other potential code readers)
to read the code is harm.

^ permalink raw reply	[flat|nested] 316+ messages in thread

* Re: [RFC v4 10/17] kunit: test: add test managed resource tests
  2019-02-19 23:20           ` Brendan Higgins
@ 2019-02-20 22:03             ` Stephen Boyd
  -1 siblings, 0 replies; 316+ messages in thread
From: Stephen Boyd @ 2019-02-20 22:03 UTC (permalink / raw)
  To: Brendan Higgins
  Cc: brakmo, amir73il, dri-devel, Sasha Levin, linux-kselftest,
	Frank Rowand, Rob Herring, linux-nvdimm, Richard Weinberger,
	Knut Omang, Kieran Bingham, wfg, Joel Stanley, Jeff Dike,
	dan.carpenter, devicetree, shuah, Bird, Timothy, Kees Cook,
	linux-um, Steven Rostedt, Julia Lawall, kunit-dev, om,
	Petr Mladek, Greg KH, Linux Kernel Mailing List,
	Luis Chamberlain, Avinash Kondareddy, Daniel Vetter,
	Michael Ellerman, Joe Perches, Kevin Hilman

Quoting Brendan Higgins (2019-02-19 15:20:18)
> On Fri, Feb 15, 2019 at 12:54 PM Stephen Boyd <sboyd@kernel.org> wrote:
> >
> > Quoting Brendan Higgins (2019-02-14 13:37:22)
> > > +
> > > +static struct kunit_case kunit_resource_test_cases[] = {
> >
> > Can these arrays be const?
> 
> There is some private mutable state inside of `struct kunit_case` that
> would be kind of annoying to pull out; I don't think it would make it
> cleaner.

Fair enough. Thanks for checking.

^ permalink raw reply	[flat|nested] 316+ messages in thread

* Re: [RFC v4 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework
  2019-02-20  6:46       ` Frank Rowand
                           ` (2 preceding siblings ...)
  (?)
@ 2019-02-22 20:52         ` Thiago Jung Bauermann
  -1 siblings, 0 replies; 316+ messages in thread
From: Thiago Jung Bauermann @ 2019-02-22 20:52 UTC (permalink / raw)
  To: Frank Rowand
  Cc: Brendan Higgins, Kees Cook, Luis Chamberlain, shuah, Rob Herring,
	Kieran Bingham, Greg KH, Joel Stanley, Michael Ellerman,
	Joe Perches, brakmo, Steven Rostedt, Bird, Timothy, Kevin Hilman,
	Julia Lawall, linux-kselftest, kunit-dev,
	Linux Kernel Mailing List, Jeff Dike, Richard Weinberger,
	linux-um, Daniel Vetter


Frank Rowand <frowand.list@gmail.com> writes:

> On 2/19/19 10:34 PM, Brendan Higgins wrote:
>> On Mon, Feb 18, 2019 at 12:02 PM Frank Rowand <frowand.list@gmail.com> wrote:
>> <snip>
>>> I have not read through the patches in any detail.  I have read some of
>>> the code to try to understand the patches to the devicetree unit tests.
>>> So that may limit how valid my comments below are.
>>
>> No problem.
>>
>>>
>>> I found the code difficult to read in places where it should have been
>>> much simpler to read.  Structuring the code in a pseudo object oriented
>>> style meant that everywhere in a code path that I encountered a dynamic
>>> function call, I had to go find where that dynamic function call was
>>> initialized (and being the cautious person that I am, verify that
>>> no where else was the value of that dynamic function call).  With
>>> primitive vi and tags, that search would have instead just been a
>>> simple key press (or at worst a few keys) if hard coded function
>>> calls were done instead of dynamic function calls.  In the code paths
>>> that I looked at, I did not see any case of a dynamic function being
>>> anything other than the value it was originally initialized as.
>>> There may be such cases, I did not read the entire patch set.  There
>>> may also be cases envisioned in the architects mind of how this
>>> flexibility may be of future value.  Dunno.
>>
>> Yeah, a lot of it is intended to make architecture specific
>> implementations and some other future work easier. Some of it is also
>> for testing purposes. Admittedly some is for neither reason, but given
>> the heavy usage elsewhere, I figured there was no harm since it was
>> all private internal usage anyway.
>>
>
> Increasing the cost for me (and all the other potential code readers)
> to read the code is harm.

Dynamic function calls aren't necessary for arch-specific
implementations either. See for example arch_kexec_image_load() in
kernel/kexec_file.c, which uses a weak symbol that is overridden by
arch-specific code. Not everybody likes weak symbols, so another
alternative (which admittedly not everybody likes either) is to use a
macro with the name of the arch-specific function, as used by
arch_kexec_post_alloc_pages() in <linux/kexec.h> for instance.
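
For illustration, a minimal sketch of the weak-symbol pattern applied here;
kunit_arch_throw() and the file names are hypothetical, but the __weak
mechanism itself is the same one this series already uses for
kunit_try_catch_init():

        /* kunit/test.c: generic default, used unless an arch overrides it */
        void __weak kunit_arch_throw(struct kunit *test)
        {
                kunit_generic_throw(&test->try_catch);
        }

        /* arch/um/kunit.c (hypothetical): the strong definition wins at link time */
        void kunit_arch_throw(struct kunit *test)
        {
                /* architecture-specific abort path would go here */
        }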

--
Thiago Jung Bauermann
IBM Linux Technology Center

^ permalink raw reply	[flat|nested] 316+ messages in thread

* Re: [RFC v4 08/17] kunit: test: add support for test abort
  2019-02-14 21:37     ` Brendan Higgins
                           ` (2 preceding siblings ...)
  (?)
@ 2019-02-26 20:35         ` Stephen Boyd
  -1 siblings, 0 replies; 316+ messages in thread
From: Stephen Boyd @ 2019-02-26 20:35 UTC (permalink / raw)
  To: frowand.list, keescook, kieran.bingham, mcgrof, robh, shuah
  Cc: brakmo, pmladek, amir73il, Brendan Higgins, dri-devel,
	Alexander.Levin, linux-kselftest, linux-nvdimm, richard, knut.omang,
	wfg, joel, jdike, dan.carpenter, devicetree, Tim.Bird, linux-um,
	rostedt, julia.lawall, kunit-dev, gregkh, linux-kernel, daniel, mpe,
	joe, khilman

Quoting Brendan Higgins (2019-02-14 13:37:20)
> Add support for aborting/bailing out of test cases. Needed for
> implementing assertions.

Can you add some more text here with the motivating reasons for
implementing assertions and bailing out of test cases?

For example, I wonder why unit tests can't just return with a failure
when they need to abort and then the test runner would detect that error
via the return value from the 'run test' function. That would be a more
direct approach, but also more verbose than a single KUNIT_ASSERT()
line. It would be more kernel idiomatic too because the control flow
isn't hidden inside a macro and it isn't intimately connected with
kthreads and completions.
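
Roughly the shape that alternative would take; this is a hypothetical sketch,
not code from this series, and struct foo/alloc_foo() are made-up names:

        /* the test case returns an error; the runner records it and moves on */
        static int example_test_case(struct kunit *test)
        {
                struct foo *foo = alloc_foo();

                if (!foo)
                        return -ENOMEM;

                KUNIT_EXPECT_EQ(test, foo->bar, 7);
                free_foo(foo);

                return 0;
        }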

> 
> Signed-off-by: Brendan Higgins <brendanhiggins@google.com>
[...]
> diff --git a/kunit/test-test.c b/kunit/test-test.c
> new file mode 100644
> index 0000000000000..a936c041f1c8f

Could this whole file be another patch? It seems to be a test for the
try catch mechanism.

> diff --git a/kunit/test.c b/kunit/test.c
> index d18c50d5ed671..6e5244642ab07 100644
> --- a/kunit/test.c
> +++ b/kunit/test.c
[...]
> +
> +static void kunit_generic_throw(struct kunit_try_catch *try_catch)
> +{
> +       try_catch->context.try_result = -EFAULT;
> +       complete_and_exit(try_catch->context.try_completion, -EFAULT);
> +}
> +
> +static int kunit_generic_run_threadfn_adapter(void *data)
> +{
> +       struct kunit_try_catch *try_catch = data;
>  
> +       try_catch->try(&try_catch->context);
> +
> +       complete_and_exit(try_catch->context.try_completion, 0);

The exit code doesn't matter, right? If so, it might be clearer to just
return 0 from this function because kthreads exit themselves as far as I
recall.
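
A sketch of that simpler shape, assuming nothing else ever reads the exit code:

        static int kunit_generic_run_threadfn_adapter(void *data)
        {
                struct kunit_try_catch *try_catch = data;

                try_catch->try(&try_catch->context);
                complete(try_catch->context.try_completion);

                return 0; /* returning ends the kthread; the code goes unused */
        }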

> +}
> +
> +static void kunit_generic_run_try_catch(struct kunit_try_catch *try_catch)
> +{
> +       struct task_struct *task_struct;
> +       struct kunit *test = try_catch->context.test;
> +       int exit_code, wake_result;
> +       DECLARE_COMPLETION(test_case_completion);

DECLARE_COMPLETION_ONSTACK()?

> +
> +       try_catch->context.try_completion = &test_case_completion;
> +       try_catch->context.try_result = 0;
> +       task_struct = kthread_create(kunit_generic_run_threadfn_adapter,
> +                                            try_catch,
> +                                            "kunit_try_catch_thread");
> +       if (IS_ERR_OR_NULL(task_struct)) {

It looks like NULL is never returned from kthread_create(), so don't
check for it here.

> +               try_catch->catch(&try_catch->context);
> +               return;
> +       }
> +
> +       wake_result = wake_up_process(task_struct);
> +       if (wake_result != 0 && wake_result != 1) {

These are the only two possible return values of wake_up_process(), so
why not just use kthread_run() and check for an error on the process
creation?

> +               kunit_err(test, "task was not woken properly: %d", wake_result);
> +               try_catch->catch(&try_catch->context);
> +       }
> +
> +       /*
> +        * TODO(brendanhiggins-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org): We should probably have some type of
> +        * timeout here. The only question is what that timeout value should be.
> +        *
> +        * The intention has always been, at some point, to be able to label
> +        * tests with some type of size bucket (unit/small, integration/medium,
> +        * large/system/end-to-end, etc), where each size bucket would get a
> +        * default timeout value kind of like what Bazel does:
> +        * https://docs.bazel.build/versions/master/be/common-definitions.html#test.size
> +        * There is still some debate to be had on exactly how we do this. (For
> +        * one, we probably want to have some sort of test runner level
> +        * timeout.)
> +        *
> +        * For more background on this topic, see:
> +        * https://mike-bland.com/2011/11/01/small-medium-large.html
> +        */
> +       wait_for_completion(&test_case_completion);

It doesn't seem like a bad idea to make this have some arbitrarily large
timeout value to start with. Just to make sure the whole thing doesn't
get wedged. It's a timeout, so in theory it should never trigger if it's
large enough.

> +
> +       exit_code = try_catch->context.try_result;
> +       if (exit_code == -EFAULT)
> +               try_catch->catch(&try_catch->context);
> +       else if (exit_code == -EINTR)
> +               kunit_err(test, "wake_up_process() was never called.");

Does kunit_err() add newlines? It would be good to stick to the rest of
kernel style (printk, tracing, etc.) and rely on the callers to have the
newlines they want. Also, remove the full-stop because it isn't really
necessary to have those in error logs.

> +       else if (exit_code)
> +               kunit_err(test, "Unknown error: %d", exit_code);
> +}
> +
> +void kunit_generic_try_catch_init(struct kunit_try_catch *try_catch)
> +{
> +       try_catch->run = kunit_generic_run_try_catch;

Is the idea that 'run' would be anything besides
'kunit_generic_run_try_catch'? If it isn't going to be different, then
maybe it's simpler to just have a function like
kunit_generic_run_try_catch() that is called by the unit tests instead
of having to write 'try_catch->run(try_catch)' and indirect for the
basic case. Maybe I've missed the point entirely though and this is all
scaffolding for more complicated exception handling later on.

> +       try_catch->throw = kunit_generic_throw;
> +}
> +
> +void __weak kunit_try_catch_init(struct kunit_try_catch *try_catch)
> +{
> +       kunit_generic_try_catch_init(try_catch);
> +}
> +
> +static void kunit_try_run_case(struct kunit_try_catch_context *context)
> +{
> +       struct kunit_try_catch_context *ctx = context;
> +       struct kunit *test = ctx->test;
> +       struct kunit_module *module = ctx->module;
> +       struct kunit_case *test_case = ctx->test_case;
> +
> +       /*
> +        * kunit_run_case_internal may encounter a fatal error; if it does, we
> +        * will jump to ENTER_HANDLER above instead of continuing normal control

Where is 'ENTER_HANDLER' above?

> +        * flow.
> +        */
>         kunit_run_case_internal(test, module, test_case);
> +       /* This line may never be reached. */
>         kunit_run_case_cleanup(test, module, test_case);
> +}

^ permalink raw reply	[flat|nested] 316+ messages in thread

* Re: [RFC v4 08/17] kunit: test: add support for test abort
@ 2019-02-26 20:35         ` Stephen Boyd
  0 siblings, 0 replies; 316+ messages in thread
From: Stephen Boyd @ 2019-02-26 20:35 UTC (permalink / raw)
  To: Brendan Higgins, frowand.list, keescook, kieran.bingham, mcgrof,
	robh, shuah
  Cc: gregkh, joel, mpe, joe, brakmo, rostedt, Tim.Bird, khilman,
	julia.lawall, linux-kselftest, kunit-dev, linux-kernel, jdike,
	richard, linux-um, daniel, dri-devel, dan.j.williams,
	linux-nvdimm, knut.omang, devicetree, pmladek, Alexander.Levin,
	amir73il, dan.carpenter, wfg, Brendan Higgins

Quoting Brendan Higgins (2019-02-14 13:37:20)
> Add support for aborting/bailing out of test cases. Needed for
> implementing assertions.

Can you add some more text here with the motivating reasons for
implementing assertions and bailing out of test cases?

For example, I wonder why unit tests can't just return with a failure
when they need to abort and then the test runner would detect that error
via the return value from the 'run test' function. That would be a more
direct approach, but also more verbose than a single KUNIT_ASSERT()
line. It would be more kernel idiomatic too because the control flow
isn't hidden inside a macro and it isn't intimately connected with
kthreads and completions.
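
To make the comparison concrete, the two styles would look roughly like
this (sketch only; the macro and helper names here are illustrative, not
necessarily the real API):

static void example_test_assert(struct kunit *test)
{
        struct foo *foo = alloc_foo();

        /* abort of the test case is hidden inside the macro */
        KUNIT_ASSERT_NOT_NULL(test, foo);
        do_something_with(foo);
}

static int example_test_return(struct kunit *test)
{
        struct foo *foo = alloc_foo();

        if (!foo)
                return -ENOMEM; /* runner marks the case failed */
        do_something_with(foo);
        return 0;
}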

> 
> Signed-off-by: Brendan Higgins <brendanhiggins@google.com>
[...]
> diff --git a/kunit/test-test.c b/kunit/test-test.c
> new file mode 100644
> index 0000000000000..a936c041f1c8f

Could this whole file be another patch? It seems to be a test for the
try catch mechanism.

> diff --git a/kunit/test.c b/kunit/test.c
> index d18c50d5ed671..6e5244642ab07 100644
> --- a/kunit/test.c
> +++ b/kunit/test.c
[...]
> +
> +static void kunit_generic_throw(struct kunit_try_catch *try_catch)
> +{
> +       try_catch->context.try_result = -EFAULT;
> +       complete_and_exit(try_catch->context.try_completion, -EFAULT);
> +}
> +
> +static int kunit_generic_run_threadfn_adapter(void *data)
> +{
> +       struct kunit_try_catch *try_catch = data;
>  
> +       try_catch->try(&try_catch->context);
> +
> +       complete_and_exit(try_catch->context.try_completion, 0);

The exit code doesn't matter, right? If so, it might be clearer to just
return 0 from this function because kthreads exit themselves as far as I
recall.

> +}
> +
> +static void kunit_generic_run_try_catch(struct kunit_try_catch *try_catch)
> +{
> +       struct task_struct *task_struct;
> +       struct kunit *test = try_catch->context.test;
> +       int exit_code, wake_result;
> +       DECLARE_COMPLETION(test_case_completion);

DECLARE_COMPLETION_ONSTACK()?
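
i.e.:

        DECLARE_COMPLETION_ONSTACK(test_case_completion);

Since the completion lives on this function's stack, the _ONSTACK
variant is the one that keeps lockdep happy.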

> +
> +       try_catch->context.try_completion = &test_case_completion;
> +       try_catch->context.try_result = 0;
> +       task_struct = kthread_create(kunit_generic_run_threadfn_adapter,
> +                                            try_catch,
> +                                            "kunit_try_catch_thread");
> +       if (IS_ERR_OR_NULL(task_struct)) {

It looks like NULL is never returned from kthread_create(), so don't
check for it here.

> +               try_catch->catch(&try_catch->context);
> +               return;
> +       }
> +
> +       wake_result = wake_up_process(task_struct);
> +       if (wake_result != 0 && wake_result != 1) {

These are the only two possible return values of wake_up_process(), so
why not just use kthread_run() and check for an error on the process
creation?
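
Roughly (untested sketch):

        task_struct = kthread_run(kunit_generic_run_threadfn_adapter,
                                  try_catch, "kunit_try_catch_thread");
        if (IS_ERR(task_struct)) {
                try_catch->catch(&try_catch->context);
                return;
        }

kthread_run() does the create + wake_up_process() for you and returns an
ERR_PTR() on failure, so the two checks collapse into one.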

> +               kunit_err(test, "task was not woken properly: %d", wake_result);
> +               try_catch->catch(&try_catch->context);
> +       }
> +
> +       /*
> +        * TODO(brendanhiggins@google.com): We should probably have some type of
> +        * timeout here. The only question is what that timeout value should be.
> +        *
> +        * The intention has always been, at some point, to be able to label
> +        * tests with some type of size bucket (unit/small, integration/medium,
> +        * large/system/end-to-end, etc), where each size bucket would get a
> +        * default timeout value kind of like what Bazel does:
> +        * https://docs.bazel.build/versions/master/be/common-definitions.html#test.size
> +        * There is still some debate to be had on exactly how we do this. (For
> +        * one, we probably want to have some sort of test runner level
> +        * timeout.)
> +        *
> +        * For more background on this topic, see:
> +        * https://mike-bland.com/2011/11/01/small-medium-large.html
> +        */
> +       wait_for_completion(&test_case_completion);

It doesn't seem like a bad idea to make this have some arbitrarily large
timeout value to start with. Just to make sure the whole thing doesn't
get wedged. It's a timeout, so in theory it should never trigger if it's
large enough.
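
Something like this would do; the number below is pulled out of thin air:

        /* e.g. five minutes; generous enough that it should never fire */
        if (!wait_for_completion_timeout(&test_case_completion, 300 * HZ))
                kunit_err(test, "test case timed out\n");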

> +
> +       exit_code = try_catch->context.try_result;
> +       if (exit_code == -EFAULT)
> +               try_catch->catch(&try_catch->context);
> +       else if (exit_code == -EINTR)
> +               kunit_err(test, "wake_up_process() was never called.");

Does kunit_err() add newlines? It would be good to stick to the rest of
kernel style (printk, tracing, etc.) and rely on the callers to have the
newlines they want. Also, remove the full-stop because it isn't really
necessary to have those in error logs.
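
i.e. prefer something like:

        kunit_err(test, "wake_up_process() was never called\n");

and let each call site decide where the newlines go.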

> +       else if (exit_code)
> +               kunit_err(test, "Unknown error: %d", exit_code);
> +}
> +
> +void kunit_generic_try_catch_init(struct kunit_try_catch *try_catch)
> +{
> +       try_catch->run = kunit_generic_run_try_catch;

Is the idea that 'run' would be anything besides
'kunit_generic_run_try_catch'? If it isn't going to be different, then
maybe it's simpler to just have a function like
kunit_generic_run_try_catch() that is called by the unit tests instead
of having to write 'try_catch->run(try_catch)' and indirect for the
basic case. Maybe I've missed the point entirely though and this is all
scaffolding for more complicated exception handling later on.
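
In other words, for the basic case the call site would simply be:

        kunit_generic_run_try_catch(try_catch);

rather than an indirect call through the 'run' pointer.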

> +       try_catch->throw = kunit_generic_throw;
> +}
> +
> +void __weak kunit_try_catch_init(struct kunit_try_catch *try_catch)
> +{
> +       kunit_generic_try_catch_init(try_catch);
> +}
> +
> +static void kunit_try_run_case(struct kunit_try_catch_context *context)
> +{
> +       struct kunit_try_catch_context *ctx = context;
> +       struct kunit *test = ctx->test;
> +       struct kunit_module *module = ctx->module;
> +       struct kunit_case *test_case = ctx->test_case;
> +
> +       /*
> +        * kunit_run_case_internal may encounter a fatal error; if it does, we
> +        * will jump to ENTER_HANDLER above instead of continuing normal control

Where is 'ENTER_HANDLER' above?

> +        * flow.
> +        */
>         kunit_run_case_internal(test, module, test_case);
> +       /* This line may never be reached. */
>         kunit_run_case_cleanup(test, module, test_case);
> +}

^ permalink raw reply	[flat|nested] 316+ messages in thread

* Re: [RFC v4 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework
@ 2019-02-28  4:15         ` Brendan Higgins
  0 siblings, 0 replies; 316+ messages in thread
From: Brendan Higgins @ 2019-02-28  4:15 UTC (permalink / raw)
  To: Frank Rowand
  Cc: Kees Cook, Luis Chamberlain, shuah, Rob Herring, Kieran Bingham,
	Greg KH, Joel Stanley, Michael Ellerman, Joe Perches, brakmo,
	Steven Rostedt, Bird, Timothy, Kevin Hilman, Julia Lawall,
	linux-kselftest, kunit-dev, Linux Kernel Mailing List, Jeff Dike,
	Richard Weinberger, linux-um, Daniel Vetter, dri-devel,
	Dan Williams, linux-nvdimm, Knut Omang, devicetree, Petr Mladek,
	Sasha Levin, Amir Goldstein, dan.carpenter, wfg

On Tue, Feb 19, 2019 at 10:46 PM Frank Rowand <frowand.list@gmail.com> wrote:
>
> On 2/19/19 10:34 PM, Brendan Higgins wrote:
> > On Mon, Feb 18, 2019 at 12:02 PM Frank Rowand <frowand.list@gmail.com> wrote:
> > <snip>
> >> I have not read through the patches in any detail.  I have read some of
> >> the code to try to understand the patches to the devicetree unit tests.
> >> So that may limit how valid my comments below are.
> >
> > No problem.
> >
> >>
> >> I found the code difficult to read in places where it should have been
> >> much simpler to read.  Structuring the code in a pseudo object oriented
> >> style meant that everywhere in a code path that I encountered a dynamic
> >> function call, I had to go find where that dynamic function call was
> >> initialized (and being the cautious person that I am, verify that
> >> nowhere else was the value of that dynamic function call).  With
> >> primitive vi and tags, that search would have instead just been a
> >> simple key press (or at worst a few keys) if hard coded function
> >> calls were done instead of dynamic function calls.  In the code paths
> >> that I looked at, I did not see any case of a dynamic function being
> >> anything other than the value it was originally initialized as.
> >> There may be such cases, I did not read the entire patch set.  There
> >> may also be cases envisioned in the architect's mind of how this
> >> flexibility may be of future value.  Dunno.
> >
> > Yeah, a lot of it is intended to make architecture specific
> > implementations and some other future work easier. Some of it is also
> > for testing purposes. Admittedly some is for neither reason, but given
> > the heavy usage elsewhere, I figured there was no harm since it was
> > all private internal usage anyway.
> >
>
> Increasing the cost for me (and all the other potential code readers)
> to read the code is harm.

You are right. I like the object oriented C style; I didn't think it
hurt readability.

In any case, I will go through and replace instances where I am not
using it for one of the above reasons.

^ permalink raw reply	[flat|nested] 316+ messages in thread

* Re: [RFC v4 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework
@ 2019-02-28  4:18           ` Brendan Higgins
  0 siblings, 0 replies; 316+ messages in thread
From: Brendan Higgins @ 2019-02-28  4:18 UTC (permalink / raw)
  To: Thiago Jung Bauermann
  Cc: Frank Rowand, Kees Cook, Luis Chamberlain, shuah, Rob Herring,
	Kieran Bingham, Greg KH, Joel Stanley, Michael Ellerman,
	Joe Perches, brakmo, Steven Rostedt, Bird, Timothy, Kevin Hilman,
	Julia Lawall, linux-kselftest, kunit-dev,
	Linux Kernel Mailing List, Jeff Dike, Richard Weinberger,
	linux-um, Daniel Vetter, dri-devel, Dan Williams, linux-nvdimm,
	Knut Omang, devicetree, Petr Mladek, Sasha Levin, Amir Goldstein,
	dan.carpenter, wfg

On Fri, Feb 22, 2019 at 12:53 PM Thiago Jung Bauermann
<bauerman@linux.ibm.com> wrote:
>
>
> Frank Rowand <frowand.list@gmail.com> writes:
>
> > On 2/19/19 10:34 PM, Brendan Higgins wrote:
> >> On Mon, Feb 18, 2019 at 12:02 PM Frank Rowand <frowand.list@gmail.com> wrote:
> >> <snip>
> >>> I have not read through the patches in any detail.  I have read some of
> >>> the code to try to understand the patches to the devicetree unit tests.
> >>> So that may limit how valid my comments below are.
> >>
> >> No problem.
> >>
> >>>
> >>> I found the code difficult to read in places where it should have been
> >>> much simpler to read.  Structuring the code in a pseudo object oriented
> >>> style meant that everywhere in a code path that I encountered a dynamic
> >>> function call, I had to go find where that dynamic function call was
> >>> initialized (and being the cautious person that I am, verify that
> >>> nowhere else was the value of that dynamic function call).  With
> >>> primitive vi and tags, that search would have instead just been a
> >>> simple key press (or at worst a few keys) if hard coded function
> >>> calls were done instead of dynamic function calls.  In the code paths
> >>> that I looked at, I did not see any case of a dynamic function being
> >>> anything other than the value it was originally initialized as.
> >>> There may be such cases, I did not read the entire patch set.  There
> >>> may also be cases envisioned in the architect's mind of how this
> >>> flexibility may be of future value.  Dunno.
> >>
> >> Yeah, a lot of it is intended to make architecture specific
> >> implementations and some other future work easier. Some of it is also
> >> for testing purposes. Admittedly some is for neither reason, but given
> >> the heavy usage elsewhere, I figured there was no harm since it was
> >> all private internal usage anyway.
> >>
> >
> > Increasing the cost for me (and all the other potential code readers)
> > to read the code is harm.
>
> Dynamic function calls aren't necessary for arch-specific
> implementations either. See for example arch_kexec_image_load() in
> kernel/kexec_file.c, which uses a weak symbol that is overridden by
> arch-specific code. Not everybody likes weak symbols, so another
> alternative (which admittedly not everybody likes either) is to use a
> macro with the name of the arch-specific function, as used by
> arch_kexec_post_alloc_pages() in <linux/kexec.h> for instance.

I personally have a strong preference for dynamic function calls over
weak symbols or macros, but I can change it if it really makes
anyone's eyes bleed.
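
For anyone following along, the weak-symbol pattern Thiago describes
boils down to something like this (hypothetical names, just to show the
shape):

        /* common code provides a default implementation ... */
        void __weak arch_foo_init(struct foo *f)
        {
                generic_foo_init(f);
        }

        /*
         * ... and an architecture that needs different behavior defines
         * a non-weak symbol with the same name; the linker then picks
         * the strong definition over the weak one.
         */

The function-pointer style expresses the same kind of override as data
in a struct instead.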

^ permalink raw reply	[flat|nested] 316+ messages in thread

* Re: [RFC v4 08/17] kunit: test: add support for test abort
  2019-02-20  6:44               ` Frank Rowand
@ 2019-02-28  7:42                 ` Brendan Higgins
  -1 siblings, 0 replies; 316+ messages in thread
From: Brendan Higgins @ 2019-02-28  7:42 UTC (permalink / raw)
  To: Frank Rowand
  Cc: Kees Cook, Luis Chamberlain, shuah, Rob Herring, Kieran Bingham,
	Greg KH, Joel Stanley, Michael Ellerman, Joe Perches, brakmo,
	Steven Rostedt, Bird, Timothy, Kevin Hilman, Julia Lawall,
	linux-kselftest, kunit-dev, Linux Kernel Mailing List, Jeff Dike,
	Richard Weinberger, linux-um, Daniel Vetter, dri-devel

On Tue, Feb 19, 2019 at 10:44 PM Frank Rowand <frowand.list@gmail.com> wrote:
>
> On 2/19/19 7:39 PM, Brendan Higgins wrote:
> > On Mon, Feb 18, 2019 at 11:52 AM Frank Rowand <frowand.list@gmail.com> wrote:
> >>
> >> On 2/14/19 1:37 PM, Brendan Higgins wrote:
> >>> Add support for aborting/bailing out of test cases. Needed for
> >>> implementing assertions.
> >>>
> >>> Signed-off-by: Brendan Higgins <brendanhiggins@google.com>
> >>> ---
> >>> Changes Since Last Version
> >>>  - This patch is new; it introduces a cross-architecture way to abort
> >>>    out of a test case (needed for KUNIT_ASSERT_*, see next patch for
> >>>    details).
> >>>  - On a side note, this is not a complete replacement for the UML abort
> >>>    mechanism, but covers the majority of necessary functionality. UML
> >>>    architecture specific features have been dropped from the initial
> >>>    patchset.
> >>> ---
> >>>  include/kunit/test.h |  24 +++++
> >>>  kunit/Makefile       |   3 +-
> >>>  kunit/test-test.c    | 127 ++++++++++++++++++++++++++
> >>>  kunit/test.c         | 208 +++++++++++++++++++++++++++++++++++++++++--
> >>>  4 files changed, 353 insertions(+), 9 deletions(-)
> >>>  create mode 100644 kunit/test-test.c
> >>
> >> < snip >
> >>
> >>> diff --git a/kunit/test.c b/kunit/test.c
> >>> index d18c50d5ed671..6e5244642ab07 100644
> >>> --- a/kunit/test.c
> >>> +++ b/kunit/test.c
> >>> @@ -6,9 +6,9 @@
> >>>   * Author: Brendan Higgins <brendanhiggins@google.com>
> >>>   */
> >>>
> >>> -#include <linux/sched.h>
> >>>  #include <linux/sched/debug.h>
> >>> -#include <os.h>
> >>> +#include <linux/completion.h>
> >>> +#include <linux/kthread.h>
> >>>  #include <kunit/test.h>
> >>>
> >>>  static bool kunit_get_success(struct kunit *test)
> >>> @@ -32,6 +32,27 @@ static void kunit_set_success(struct kunit *test, bool success)
> >>>       spin_unlock_irqrestore(&test->lock, flags);
> >>>  }
> >>>
> >>> +static bool kunit_get_death_test(struct kunit *test)
> >>> +{
> >>> +     unsigned long flags;
> >>> +     bool death_test;
> >>> +
> >>> +     spin_lock_irqsave(&test->lock, flags);
> >>> +     death_test = test->death_test;
> >>> +     spin_unlock_irqrestore(&test->lock, flags);
> >>> +
> >>> +     return death_test;
> >>> +}
> >>> +
> >>> +static void kunit_set_death_test(struct kunit *test, bool death_test)
> >>> +{
> >>> +     unsigned long flags;
> >>> +
> >>> +     spin_lock_irqsave(&test->lock, flags);
> >>> +     test->death_test = death_test;
> >>> +     spin_unlock_irqrestore(&test->lock, flags);
> >>> +}
> >>> +
> >>>  static int kunit_vprintk_emit(const struct kunit *test,
> >>>                             int level,
> >>>                             const char *fmt,
> >>> @@ -70,13 +91,29 @@ static void kunit_fail(struct kunit *test, struct kunit_stream *stream)
> >>>       stream->commit(stream);
> >>>  }
> >>>
> >>> +static void __noreturn kunit_abort(struct kunit *test)
> >>> +{
> >>> +     kunit_set_death_test(test, true);
> >>> +
> >>> +     test->try_catch.throw(&test->try_catch);
> >>> +
> >>> +     /*
> >>> +      * Throw could not abort from test.
> >>> +      */
> >>> +     kunit_err(test, "Throw could not abort from test!");
> >>> +     show_stack(NULL, NULL);
> >>> +     BUG();
> >>
> >> kunit_abort() is what will be called as the result of an assert failure.
> >
> > Yep. Does that need to be clarified somewhere?
> >>
> >> BUG(), which is a panic, which is crashing the system is not acceptable
> >> in the Linux kernel.  You will just annoy Linus if you submit this.
> >
> > Sorry, I thought this was an acceptable use case since, a) this should
> > never be compiled in a production kernel, b) we are in a pretty bad,
> > unpredictable state if we get here and keep going. I think you might
> > have said elsewhere that you think "a" is not valid? In any case, I
> > can replace this with a WARN, would that be acceptable?
>
> A WARN may or may not make sense, depending on the context.  It may
> be sufficient to simply report a test failure (as in the old version
> of case (2) below).
>
> Answers to "a)" and "b)":
>
> a) it might be in a production kernel

Sorry for a possibly stupid question: how might that be? Why would
someone intentionally build unit tests into a production kernel?

>
> a') it is not acceptable in my development kernel either

Fair enough.

>
> b) No.  You don't crash a developer's kernel either unless it is
> required to avoid data corruption.

Alright, I thought that was one of those cases, but I am not going to
push the point. Also, in case it wasn't clear, the path where BUG()
gets called is reached only if there is a bug in KUnit itself, not
merely because a test case fails catastrophically.
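
For what it's worth, the WARN variant I had in mind would be roughly
the following (a sketch only, replacing the BUG() call at the end of
kunit_abort()):

        kunit_err(test, "Throw could not abort from test!");
        show_stack(NULL, NULL);
        WARN(1, "KUnit: throw could not abort from test");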

>
> b') And you can not do replacements like:
>
> (1) in of_unittest_check_tree_linkage()
>
> -----  old  -----
>
>         if (!of_root)
>                 return;
>
> -----  new  -----
>
>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, of_root);
>
> (2) in of_unittest_property_string()
>
> -----  old  -----
>
>         /* of_property_read_string_index() tests */
>         rc = of_property_read_string_index(np, "string-property", 0, strings);
>         unittest(rc == 0 && !strcmp(strings[0], "foobar"), "of_property_read_string_index() failure; rc=%i\n", rc);
>
> -----  new  -----
>
>         /* of_property_read_string_index() tests */
>         rc = of_property_read_string_index(np, "string-property", 0, strings);
>         KUNIT_ASSERT_EQ(test, rc, 0);
>         KUNIT_EXPECT_STREQ(test, strings[0], "foobar");
>
>
> If a test fails, that is no reason to abort testing.  The remainder of the unit
> tests can still run.  There may be cascading failures, but that is ok.

Sure, that's what I am trying to do. I don't see how (1) changes
anything: a failed KUNIT_ASSERT_* only bails out of the current test
case; it does not quit the entire test suite, let alone crash the kernel.

In case it wasn't clear above,
> >>> +     test->try_catch.throw(&test->try_catch);
should never, ever return. The only time it would is if KUnit itself
were badly broken. This should never actually happen, even if the
assertion that called it was violated. KUNIT_ASSERT_* just says, "this
is a prerequisite property for this test case"; if it is violated, the
test case should fail and bail out because the preconditions for the
test case cannot be satisfied. Nevertheless, the other test cases will
still run.
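
To make that concrete, here is a minimal sketch using the
KUNIT_ASSERT_*/KUNIT_EXPECT_* macros from this series (the example_*
helpers are made up):

static void example_test_case(struct kunit *test)
{
        void *obj = example_alloc_object();  /* made-up helper */

        /* Premise: if this fails, only this test case bails out; the
         * other cases in the suite still run.
         */
        KUNIT_ASSERT_NOT_ERR_OR_NULL(test, obj);

        /* The property actually under test. */
        KUNIT_EXPECT_EQ(test, example_get_refcount(obj), 1);  /* made-up */
}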

^ permalink raw reply	[flat|nested] 316+ messages in thread

* Re: [RFC v4 08/17] kunit: test: add support for test abort
@ 2019-02-28  9:03             ` Brendan Higgins
  0 siblings, 0 replies; 316+ messages in thread
From: Brendan Higgins @ 2019-02-28  9:03 UTC (permalink / raw)
  To: Stephen Boyd
  Cc: Frank Rowand, Kees Cook, Kieran Bingham, Luis Chamberlain,
	Rob Herring, shuah, Greg KH, Joel Stanley, Michael Ellerman,
	Joe Perches, brakmo, Steven Rostedt, Bird, Timothy, Kevin Hilman,
	Julia Lawall, linux-kselftest, kunit-dev,
	Linux Kernel Mailing List, Jeff Dike, Richard Weinberger,
	linux-um, Daniel Vetter, dri-devel, Dan Williams, linux-nvdimm,
	Knut Omang, devicetree, Petr Mladek, Sasha Levin, Amir Goldstein,
	dan.carpenter, wfg

On Tue, Feb 26, 2019 at 12:35 PM Stephen Boyd <sboyd@kernel.org> wrote:
>
> Quoting Brendan Higgins (2019-02-14 13:37:20)
> > Add support for aborting/bailing out of test cases. Needed for
> > implementing assertions.
>
> Can you add some more text here with the motivating reasons for
> implementing assertions and bailing out of test cases?

Sure. Yeah, this comes before the commit that adds assertions, so I
should probably put a better explanation here.
>
> For example, I wonder why unit tests can't just return with a failure

Well, you could. You can just do the check as you would without KUnit,
except call KUNIT_FAIL(...) before you return. For example, instead
of:

KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);

you could do:

if (IS_ERR_OR_NULL(ptr)) {
        KUNIT_FAIL(test, "ptr is an errno or null: %ld", PTR_ERR(ptr));
        return;
}

> when they need to abort and then the test runner would detect that error
> via the return value from the 'run test' function. That would be a more
> direct approach, but also more verbose than a single KUNIT_ASSERT()
> line. It would be more kernel idiomatic too because the control flow

Yeah, I was intentionally going against that idiom. I think that idiom
makes test logic more complicated than it needs to be, especially if
the assertion failure happens in a helper function; then you have to
pass that error all the way back up. It is important that test code
be as simple as possible, to the point of being obviously correct at
first glance, because there are no tests for the tests themselves.

The idea with assertions is that you use them to state all the
preconditions for your test. Logically speaking, these are the
premises of the test case, so if a premise isn't true, there is no
point in continuing the test case because there are no conclusions
that can be drawn without the premises. An expectation, by contrast,
is the thing you are trying to prove. This distinction is not used
universally in xUnit-style test frameworks, but I really like it as a
convention.
You could still express the idea of a premise using the above idiom,
but I think KUNIT_ASSERT_* states the intended idea perfectly.
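
As a rough illustration of the helper-function point (struct foo and
the foo_* functions are invented for the example):

static struct foo *example_setup_foo(struct kunit *test)
{
        struct foo *foo = foo_create();  /* made-up */

        /* The premise is stated right here in the helper; no error
         * plumbing back to the caller is needed.
         */
        KUNIT_ASSERT_NOT_ERR_OR_NULL(test, foo);
        return foo;
}

static void example_foo_test(struct kunit *test)
{
        struct foo *foo = example_setup_foo(test);

        /* Expectation: the conclusion the test is trying to prove. */
        KUNIT_EXPECT_EQ(test, foo_frobnicate(foo), 0);  /* made-up */
}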

> isn't hidden inside a macro and it isn't intimately connected with
> kthreads and completions.

Yeah, I wasn't a fan of that myself, but it is broadly available. My
previous version (still the architecture-specific version for UML,
though not in this patchset) relies on UML_LONGJMP, but that obviously
only works on UML. A number of people wanted support for other
architectures. Rob and Luis specifically wanted me to provide a
version of abort that would work on any architecture, even if it only
had reduced functionality; I thought this fit the bill okay.

>
> >
> > Signed-off-by: Brendan Higgins <brendanhiggins@google.com>
> [...]
> > diff --git a/kunit/test-test.c b/kunit/test-test.c
> > new file mode 100644
> > index 0000000000000..a936c041f1c8f
>
> Could this whole file be another patch? It seems to be a test for the
> try catch mechanism.

Sure.

>
> > diff --git a/kunit/test.c b/kunit/test.c
> > index d18c50d5ed671..6e5244642ab07 100644
> > --- a/kunit/test.c
> > +++ b/kunit/test.c
> [...]
> > +
> > +static void kunit_generic_throw(struct kunit_try_catch *try_catch)
> > +{
> > +       try_catch->context.try_result = -EFAULT;
> > +       complete_and_exit(try_catch->context.try_completion, -EFAULT);
> > +}
> > +
> > +static int kunit_generic_run_threadfn_adapter(void *data)
> > +{
> > +       struct kunit_try_catch *try_catch = data;
> >
> > +       try_catch->try(&try_catch->context);
> > +
> > +       complete_and_exit(try_catch->context.try_completion, 0);
>
> The exit code doesn't matter, right? If so, it might be clearer to just
> return 0 from this function because kthreads exit themselves as far as I
> recall.

You mean complete and then return?
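
If so, a sketch of what I think you are suggesting (same function as
above, with complete_and_exit() split into complete() plus a plain
return):

static int kunit_generic_run_threadfn_adapter(void *data)
{
        struct kunit_try_catch *try_catch = data;

        try_catch->try(&try_catch->context);
        complete(try_catch->context.try_completion);

        return 0;  /* the kthread exits when the threadfn returns */
}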

>
> > +}
> > +
> > +static void kunit_generic_run_try_catch(struct kunit_try_catch *try_catch)
> > +{
> > +       struct task_struct *task_struct;
> > +       struct kunit *test = try_catch->context.test;
> > +       int exit_code, wake_result;
> > +       DECLARE_COMPLETION(test_case_completion);
>
> DECLARE_COMPLETION_ONSTACK()?

Whoops, yeah, that one.

>
> > +
> > +       try_catch->context.try_completion = &test_case_completion;
> > +       try_catch->context.try_result = 0;
> > +       task_struct = kthread_create(kunit_generic_run_threadfn_adapter,
> > +                                            try_catch,
> > +                                            "kunit_try_catch_thread");
> > +       if (IS_ERR_OR_NULL(task_struct)) {
>
> It looks like NULL is never returned from kthread_create(), so don't
> check for it here.

Bad habit, sorry.

>
> > +               try_catch->catch(&try_catch->context);
> > +               return;
> > +       }
> > +
> > +       wake_result = wake_up_process(task_struct);
> > +       if (wake_result != 0 && wake_result != 1) {
>
> These are the only two possible return values of wake_up_process(), so
> why not just use kthread_run() and check for an error on the process
> creation?

Good point, I am not sure why I did that.
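
So roughly (sketch only):

        task_struct = kthread_run(kunit_generic_run_threadfn_adapter,
                                  try_catch,
                                  "kunit_try_catch_thread");
        if (IS_ERR(task_struct)) {
                try_catch->catch(&try_catch->context);
                return;
        }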

>
> > +               kunit_err(test, "task was not woken properly: %d", wake_result);
> > +               try_catch->catch(&try_catch->context);
> > +       }
> > +
> > +       /*
> > +        * TODO(brendanhiggins@google.com): We should probably have some type of
> > +        * timeout here. The only question is what that timeout value should be.
> > +        *
> > +        * The intention has always been, at some point, to be able to label
> > +        * tests with some type of size bucket (unit/small, integration/medium,
> > +        * large/system/end-to-end, etc), where each size bucket would get a
> > +        * default timeout value kind of like what Bazel does:
> > +        * https://docs.bazel.build/versions/master/be/common-definitions.html#test.size
> > +        * There is still some debate to be had on exactly how we do this. (For
> > +        * one, we probably want to have some sort of test runner level
> > +        * timeout.)
> > +        *
> > +        * For more background on this topic, see:
> > +        * https://mike-bland.com/2011/11/01/small-medium-large.html
> > +        */
> > +       wait_for_completion(&test_case_completion);
>
> It doesn't seem like a bad idea to make this have some arbitrarily large
> timeout value to start with. Just to make sure the whole thing doesn't
> get wedged. It's a timeout, so in theory it should never trigger if it's
> large enough.

Seems reasonable.
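
Something like this would probably do as a starting point; the 300
second value here is arbitrary, just meant to be "never hit in
practice":

        if (!wait_for_completion_timeout(&test_case_completion, 300 * HZ))
                kunit_err(test, "test case timed out");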

>
> > +
> > +       exit_code = try_catch->context.try_result;
> > +       if (exit_code == -EFAULT)
> > +               try_catch->catch(&try_catch->context);
> > +       else if (exit_code == -EINTR)
> > +               kunit_err(test, "wake_up_process() was never called.");
>
> Does kunit_err() add newlines? It would be good to stick to the rest of
> kernel style (printk, tracing, etc.) and rely on the callers to have the
> newlines they want. Also, remove the full-stop because it isn't really
> necessary to have those in error logs.

Will do.

>
> > +       else if (exit_code)
> > +               kunit_err(test, "Unknown error: %d", exit_code);
> > +}
> > +
> > +void kunit_generic_try_catch_init(struct kunit_try_catch *try_catch)
> > +{
> > +       try_catch->run = kunit_generic_run_try_catch;
>
> Is the idea that 'run' would be anything besides
> 'kunit_generic_run_try_catch'? If it isn't going to be different, then

Yeah, it can be overridden with an architecture specific version.

> maybe it's simpler to just have a function like
> kunit_generic_run_try_catch() that is called by the unit tests instead
> of having to write 'try_catch->run(try_catch)' and indirect for the
> basic case. Maybe I've missed the point entirely though and this is all
> scaffolding for more complicated exception handling later on.

Yeah, the idea is that different architectures can override exception
handling with their own implementation. This is just the generic one.
For example, UML has one that doesn't depend on kthreads or
completions; the UML version also allows recovery from some segfault
conditions.
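
So an architecture can provide its own non-weak kunit_try_catch_init()
that overrides the __weak generic one, roughly like this (the
kunit_uml_* names are hypothetical):

void kunit_try_catch_init(struct kunit_try_catch *try_catch)
{
        /* Overrides the __weak generic version. */
        try_catch->run = kunit_uml_run_try_catch;  /* hypothetical */
        try_catch->throw = kunit_uml_throw;        /* hypothetical */
}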

>
> > +       try_catch->throw = kunit_generic_throw;
> > +}
> > +
> > +void __weak kunit_try_catch_init(struct kunit_try_catch *try_catch)
> > +{
> > +       kunit_generic_try_catch_init(try_catch);
> > +}
> > +
> > +static void kunit_try_run_case(struct kunit_try_catch_context *context)
> > +{
> > +       struct kunit_try_catch_context *ctx = context;
> > +       struct kunit *test = ctx->test;
> > +       struct kunit_module *module = ctx->module;
> > +       struct kunit_case *test_case = ctx->test_case;
> > +
> > +       /*
> > +        * kunit_run_case_internal may encounter a fatal error; if it does, we
> > +        * will jump to ENTER_HANDLER above instead of continuing normal control
>
> Where is 'ENTER_HANDLER' above?

Whoops, sorry, that is left over from v3. Will remove.

>
> > +        * flow.
> > +        */
> >         kunit_run_case_internal(test, module, test_case);
> > +       /* This line may never be reached. */
> >         kunit_run_case_cleanup(test, module, test_case);
> > +}

Thanks!

^ permalink raw reply	[flat|nested] 316+ messages in thread

* Re: [RFC v4 08/17] kunit: test: add support for test abort
  2019-02-28  9:03             ` Brendan Higgins
                                 ` (2 preceding siblings ...)
  (?)
@ 2019-02-28 13:54               ` Dan Carpenter
  -1 siblings, 0 replies; 316+ messages in thread
From: Dan Carpenter @ 2019-02-28 13:54 UTC (permalink / raw)
  To: Brendan Higgins
  Cc: brakmo, Petr Mladek, Amir Goldstein, dri-devel, Sasha Levin,
	linux-kselftest, shuah, Frank Rowand, linux-nvdimm,
	Richard Weinberger, Knut Omang, Kieran Bingham, wfg,
	Joel Stanley, Bird, Timothy, devicetree, Jeff Dike, Kees Cook,
	linux-um, Steven Rostedt, Julia Lawall, Dan Williams, kunit-dev,
	Stephen Boyd, Greg KH, Linux

On Thu, Feb 28, 2019 at 01:03:24AM -0800, Brendan Higgins wrote:
> you could do:
> 
> if (IS_ERR_OR_NULL(ptr)) {
>         KUNIT_FAIL(test, "ptr is an errno or null: %ld", ptr);
>         return;
> }

It's best not to mix error pointers and NULL, but when we do mix them,
it means that NULL is a special kind of success.  Like we try to load
a feature and we get back:

    valid pointer <-- success
    null          <-- feature is disabled.  not an error.
    error pointer <-- feature is broken.  fail.
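
In caller code that usually ends up as a three-way branch, roughly
(load_feature() is just a made-up example):

        feature = load_feature(dev);
        if (IS_ERR(feature))
                return PTR_ERR(feature);  /* broken: propagate the error */
        if (!feature)
                return 0;                 /* disabled: not an error, carry on */
        /* valid pointer: go ahead and use the feature */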

regards,
dan carpenter

^ permalink raw reply	[flat|nested] 316+ messages in thread

* Re: [RFC v4 08/17] kunit: test: add support for test abort
@ 2019-02-28 18:02               ` Stephen Boyd
  0 siblings, 0 replies; 316+ messages in thread
From: Stephen Boyd @ 2019-02-28 18:02 UTC (permalink / raw)
  To: Brendan Higgins
  Cc: brakmo, Amir Goldstein, dri-devel, Sasha Levin, linux-kselftest,
	Frank Rowand, Rob Herring, linux-nvdimm, Richard Weinberger,
	Knut Omang, Kieran Bingham, wfg, Joel Stanley, Jeff Dike,
	dan.carpenter, devicetree, shuah, Bird, Timothy, Kees Cook,
	linux-um, Steven Rostedt, Julia Lawall, kunit-dev, om,
	Petr Mladek, Greg KH, Linux Kernel Mailing List,
	Luis Chamberlain, Daniel Vetter, Michael Ellerman, Joe Perches,
	Kevin Hilman

Quoting Brendan Higgins (2019-02-28 01:03:24)
> On Tue, Feb 26, 2019 at 12:35 PM Stephen Boyd <sboyd@kernel.org> wrote:
> >
> > when they need to abort and then the test runner would detect that error
> > via the return value from the 'run test' function. That would be a more
> > direct approach, but also more verbose than a single KUNIT_ASSERT()
> > line. It would be more kernel idiomatic too because the control flow
> 
> Yeah, I was intentionally going against that idiom. I think that idiom
> makes test logic more complicated than it needs to be, especially if
> the assertion failure happens in a helper function; then you have to
> pass that error all the way back up. It is important that test code
> should be as simple as possible to the point of being immediately
> obviously correct at first glance because there are no tests for
> tests.
> 
> The idea with assertions is that you use them to state all the
> preconditions for your test. Logically speaking, these are the
> premises of the test case, so if a premise isn't true, there is no
> point in continuing the test case because there are no conclusions
> that can be drawn without the premises. Whereas, the expectation is
> the thing you are trying to prove. It is not used universally in
> x-unit style test frameworks, but I really like it as a convention.
> You could still express the idea of a premise using the above idiom,
> but I think KUNIT_ASSERT_* states the intended idea perfectly.

Fair enough. It would be great if these sorts of things were described
in the commit text.

Is the assumption that things like held locks and refcounted elements
won't exist when one of these assertions is made? It sounds like some of
the cleanup logic could be fairly complicated if a helper function
changes some state and then an assert fails and we have to unwind all
the state from a corrupt location. A similar problem exists for a test
timeout too. How do we get back to a sane state if the test locks up for
a long time? Just don't try?

> 
> > isn't hidden inside a macro and it isn't intimately connected with
> > kthreads and completions.
> 
> Yeah, I wasn't a fan of that myself, but it was broadly available. My
> previous version (still the architecture specific version for UML, not
> in this patchset though) relies on UML_LONGJMP, but it obviously only
> works on UML. A number of people wanted support for other
> architectures. Rob and Luis specifically wanted me to provide a
> version of abort that would work on any architecture, even if it only
> had reduced functionality; I thought this fit the bill okay.

Ok.

> 
> >
> > >
> > > diff --git a/kunit/test.c b/kunit/test.c
> > > index d18c50d5ed671..6e5244642ab07 100644
> > > --- a/kunit/test.c
> > > +++ b/kunit/test.c
> > [...]
> > > +
> > > +static void kunit_generic_throw(struct kunit_try_catch *try_catch)
> > > +{
> > > +       try_catch->context.try_result = -EFAULT;
> > > +       complete_and_exit(try_catch->context.try_completion, -EFAULT);
> > > +}
> > > +
> > > +static int kunit_generic_run_threadfn_adapter(void *data)
> > > +{
> > > +       struct kunit_try_catch *try_catch = data;
> > >
> > > +       try_catch->try(&try_catch->context);
> > > +
> > > +       complete_and_exit(try_catch->context.try_completion, 0);
> >
> > The exit code doesn't matter, right? If so, it might be clearer to just
> > return 0 from this function because kthreads exit themselves as far as I
> > recall.
> 
> You mean complete and then return?

Yes. I was confused for a minute because I thought the exit code was
checked, but it isn't. Instead, the try_catch->context.try_result is
where the test result goes, so calling exit explicitly doesn't seem to
be important here, but it is important in the throw case.
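
i.e. something like this (just a sketch of the suggestion, not the patch
as posted):

        static int kunit_generic_run_threadfn_adapter(void *data)
        {
                struct kunit_try_catch *try_catch = data;

                try_catch->try(&try_catch->context);

                complete(try_catch->context.try_completion);
                return 0;  /* kthread exits on return; the code isn't checked */
        }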

> 
> >
> > > +       else if (exit_code)
> > > +               kunit_err(test, "Unknown error: %d", exit_code);
> > > +}
> > > +
> > > +void kunit_generic_try_catch_init(struct kunit_try_catch *try_catch)
> > > +{
> > > +       try_catch->run = kunit_generic_run_try_catch;
> >
> > Is the idea that 'run' would be anything besides
> > 'kunit_generic_run_try_catch'? If it isn't going to be different, then
> 
> Yeah, it can be overridden with an architecture specific version.
> 
> > maybe it's simpler to just have a function like
> > kunit_generic_run_try_catch() that is called by the unit tests instead
> > of having to write 'try_catch->run(try_catch)' and indirect for the
> > basic case. Maybe I've missed the point entirely though and this is all
> > scaffolding for more complicated exception handling later on.
> 
> Yeah, the idea is that different architectures can override exception
> handling with their own implementation. This is just the generic one.
> For example, UML has one that doesn't depend on kthreads or
> completions; the UML version also allows recovery from some segfault
> conditions.

Ok, got it. It may still be nice to have a wrapper or macro for that
try_catch->run(try_catch) statement so we don't have to know that a
try_catch struct has a run member.

	static inline void kunit_run_try_catch(struct kunit_try_catch *try_catch)
	{
		try_catch->run(try_catch);
	}

^ permalink raw reply	[flat|nested] 316+ messages in thread

* Re: [RFC v4 08/17] kunit: test: add support for test abort
  2019-02-28 13:54               ` Dan Carpenter
                                   ` (2 preceding siblings ...)
  (?)
@ 2019-03-04 22:28                 ` Brendan Higgins
  -1 siblings, 0 replies; 316+ messages in thread
From: Brendan Higgins @ 2019-03-04 22:28 UTC (permalink / raw)
  To: Dan Carpenter
  Cc: Stephen Boyd, Frank Rowand, Kees Cook, Kieran Bingham,
	Luis Chamberlain, Rob Herring, shuah, Greg KH, Joel Stanley,
	Michael Ellerman, Joe Perches, brakmo, Steven Rostedt, Bird,
	Timothy, Kevin Hilman, Julia Lawall, linux-kselftest, kunit-dev,
	Linux Kernel Mailing List, Jeff Dike, Richard Weinberger,
	linux-um

On Thu, Feb 28, 2019 at 5:55 AM Dan Carpenter <dan.carpenter@oracle.com> wrote:
>
> On Thu, Feb 28, 2019 at 01:03:24AM -0800, Brendan Higgins wrote:
> > you could do:
> >
> > if (IS_ERR_OR_NULL(ptr)) {
> >         KUNIT_FAIL(test, "ptr is an errno or null: %ld", ptr);
> >         return;
> > }
>
> It's best to not mix error pointers and NULL but when we do mix them,
> it means that NULL is a special kind of success.  Like we try to load
> a feature and we get back:
>
>     valid pointer <-- success
>     null          <-- feature is disabled.  not an error.
>     error pointer <-- feature is broken.  fail.

Thanks for pointing that out! Will fix.
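
Probably something along these lines in the next revision (just a
sketch, the actual fix may differ):

        if (IS_ERR(ptr)) {
                KUNIT_FAIL(test, "ptr is an errno: %ld", PTR_ERR(ptr));
                return;
        }
        if (!ptr) {
                KUNIT_FAIL(test, "ptr is unexpectedly NULL");
                return;
        }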

^ permalink raw reply	[flat|nested] 316+ messages in thread

* Re: [RFC v4 08/17] kunit: test: add support for test abort
@ 2019-03-04 22:39                   ` Brendan Higgins
  0 siblings, 0 replies; 316+ messages in thread
From: Brendan Higgins @ 2019-03-04 22:39 UTC (permalink / raw)
  To: Stephen Boyd
  Cc: Frank Rowand, Kees Cook, Kieran Bingham, Luis Chamberlain,
	Rob Herring, shuah, Greg KH, Joel Stanley, Michael Ellerman,
	Joe Perches, brakmo, Steven Rostedt, Bird, Timothy, Kevin Hilman,
	Julia Lawall, linux-kselftest, kunit-dev,
	Linux Kernel Mailing List, Jeff Dike, Richard Weinberger,
	linux-um, Daniel Vetter, dri-devel, Dan Williams, linux-nvdimm,
	Knut Omang, devicetree, Petr Mladek, Sasha Levin, Amir Goldstein,
	Dan Carpenter, wfg

On Thu, Feb 28, 2019 at 10:02 AM Stephen Boyd <sboyd@kernel.org> wrote:
>
> Quoting Brendan Higgins (2019-02-28 01:03:24)
> > On Tue, Feb 26, 2019 at 12:35 PM Stephen Boyd <sboyd@kernel.org> wrote:
> > >
> > > when they need to abort and then the test runner would detect that error
> > > via the return value from the 'run test' function. That would be a more
> > > direct approach, but also more verbose than a single KUNIT_ASSERT()
> > > line. It would be more kernel idiomatic too because the control flow
> >
> > Yeah, I was intentionally going against that idiom. I think that idiom
> > makes test logic more complicated than it needs to be, especially if
> > the assertion failure happens in a helper function; then you have to
> > pass that error all the way back up. It is important that test code
> > should be as simple as possible to the point of being immediately
> > obviously correct at first glance because there are no tests for
> > tests.
> >
> > The idea with assertions is that you use them to state all the
> > preconditions for your test. Logically speaking, these are the
> > premises of the test case, so if a premise isn't true, there is no
> > point in continuing the test case because there are no conclusions
> > that can be drawn without the premises. Whereas, the expectation is
> > the thing you are trying to prove. It is not used universally in
> > x-unit style test frameworks, but I really like it as a convention.
> > You could still express the idea of a premise using the above idiom,
> > but I think KUNIT_ASSERT_* states the intended idea perfectly.
>
> Fair enough. It would be great if these sorts of things were described
> in the commit text.

Good point. Will fix.

>
> Is the assumption that things like held locks and refcounted elements
> won't exist when one of these assertions is made? It sounds like some of
> the cleanup logic could be fairly complicated if a helper function
> changes some state and then an assert fails and we have to unwind all
> the state from a corrupt location. A similar problem exists for a test
> timeout too. How do we get back to a sane state if the test locks up for
> a long time? Just don't try?

It depends on the situation: if it is part of a KUnit test itself
(probably not code under test), then you can use the kunit_resource
API: https://lkml.org/lkml/2019/2/14/1125; it is inspired by the
devm_* family of functions, such that when a KUnit test case ends, for
any reason, all the kunit_resources are automatically cleaned up.
Similarly, kunit_module.exit is called at the end of every test case,
regardless of how it terminates.
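
As a rough illustration (the example_* names are made up; only the exit
hook itself matters here), that exit handler is where a test would drop
locks or free state that its init function or helpers set up:

        static void *example_state;  /* set up by init or a helper */

        static void example_test_exit(struct kunit *test)
        {
                /*
                 * Runs at the end of every test case, even one aborted
                 * by a failed assertion, so state is released on every
                 * path out of the case.
                 */
                kfree(example_state);
                example_state = NULL;
        }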

>
> >
> > > isn't hidden inside a macro and it isn't intimately connected with
> > > kthreads and completions.
> >
> > Yeah, I wasn't a fan of that myself, but it was broadly available. My
> > previous version (still the architecture specific version for UML, not
> > in this patchset though) relies on UML_LONGJMP, but it obviously only
> > works on UML. A number of people wanted support for other
> > architectures. Rob and Luis specifically wanted me to provide a
> > version of abort that would work on any architecture, even if it only
> > had reduced functionality; I thought this fit the bill okay.
>
> Ok.
>
> >
> > >
> > > >
> > > > diff --git a/kunit/test.c b/kunit/test.c
> > > > index d18c50d5ed671..6e5244642ab07 100644
> > > > --- a/kunit/test.c
> > > > +++ b/kunit/test.c
> > > [...]
> > > > +
> > > > +static void kunit_generic_throw(struct kunit_try_catch *try_catch)
> > > > +{
> > > > +       try_catch->context.try_result = -EFAULT;
> > > > +       complete_and_exit(try_catch->context.try_completion, -EFAULT);
> > > > +}
> > > > +
> > > > +static int kunit_generic_run_threadfn_adapter(void *data)
> > > > +{
> > > > +       struct kunit_try_catch *try_catch = data;
> > > >
> > > > +       try_catch->try(&try_catch->context);
> > > > +
> > > > +       complete_and_exit(try_catch->context.try_completion, 0);
> > >
> > > The exit code doesn't matter, right? If so, it might be clearer to just
> > > return 0 from this function because kthreads exit themselves as far as I
> > > recall.
> >
> > You mean complete and then return?
>
> Yes. I was confused for a minute because I thought the exit code was
> checked, but it isn't. Instead, the try_catch->context.try_result is
> where the test result goes, so calling exit explicitly doesn't seem to
> be important here, but it is important in the throw case.

Yep.

>
> >
> > >
> > > > +       else if (exit_code)
> > > > +               kunit_err(test, "Unknown error: %d", exit_code);
> > > > +}
> > > > +
> > > > +void kunit_generic_try_catch_init(struct kunit_try_catch *try_catch)
> > > > +{
> > > > +       try_catch->run = kunit_generic_run_try_catch;
> > >
> > > Is the idea that 'run' would be anything besides
> > > 'kunit_generic_run_try_catch'? If it isn't going to be different, then
> >
> > Yeah, it can be overridden with an architecture specific version.
> >
> > > maybe it's simpler to just have a function like
> > > kunit_generic_run_try_catch() that is called by the unit tests instead
> > > of having to write 'try_catch->run(try_catch)' and indirect for the
> > > basic case. Maybe I've missed the point entirely though and this is all
> > > scaffolding for more complicated exception handling later on.
> >
> > Yeah, the idea is that different architectures can override exception
> > handling with their own implementation. This is just the generic one.
> > For example, UML has one that doesn't depend on kthreads or
> > completions; the UML version also allows recovery from some segfault
> > conditions.
>
> Ok, got it. It may still be nice to have a wrapper or macro for that
> try_catch->run(try_catch) statement so we don't have to know that a
> try_catch struct has a run member.
>
>         static inline void kunit_run_try_catch(struct kunit_try_catch *try_catch)
>         {
>                 try_catch->run(try_catch);
>         }

Makes sense. Will fix in the next revision.

^ permalink raw reply	[flat|nested] 316+ messages in thread

* Re: [RFC v4 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework
  2019-02-14 21:37 ` brendanhiggins
                     ` (2 preceding siblings ...)
  (?)
@ 2019-03-04 23:01   ` Brendan Higgins
  -1 siblings, 0 replies; 316+ messages in thread
From: Brendan Higgins @ 2019-03-04 23:01 UTC (permalink / raw)
  To: Kees Cook, Luis Chamberlain, shuah, Rob Herring, Kieran Bingham,
	Frank Rowand
  Cc: brakmo, Petr Mladek, Amir Goldstein, dri-devel, Sasha Levin,
	linux-kselftest, linux-nvdimm, Richard Weinberger, Knut Omang,
	wfg, Joel Stanley, Jeff Dike, Dan Carpenter, devicetree, Bird,
	Timothy, linux-um, Steven Rostedt, Julia Lawall, Dan Williams,
	kunit-dev, Greg KH, Linux Kernel Mailing List, Michael Ellerman,
	Joe Perches, Kevin Hilman

On Thu, Feb 14, 2019 at 1:38 PM Brendan Higgins
<brendanhiggins@google.com> wrote:
>
> This patch set proposes KUnit, a lightweight unit testing and mocking
> framework for the Linux kernel.
>

<snip>

> ## More information on KUnit
>
> There is a bunch of documentation near the end of this patch set that
> describes how to use KUnit and best practices for writing unit tests.
> For convenience I am hosting the compiled docs here:
> https://google.github.io/kunit-docs/third_party/kernel/docs/
> Additionally for convenience, I have applied these patches to a branch:
> https://kunit.googlesource.com/linux/+/kunit/rfc/5.0-rc5/v4
> The repo may be cloned with:
> git clone https://kunit.googlesource.com/linux
> This patchset is on the kunit/rfc/5.0-rc5/v4 branch.
>
> ## Changes Since Last Version
>
>  - Got KUnit working on (hypothetically) all architectures (tested on
>    x86), as per Rob's (and other's) request
>  - Punting all KUnit features/patches depending on UML for now.
>  - Broke out UML specific support into arch/um/* as per "[RFC v3 01/19]
>    kunit: test: add KUnit test runner core", as requested by Luis.
>  - Added support to kunit_tool to allow it to build kernels in external
>    directories, as suggested by Kieran.
>  - Added a UML defconfig, and a config fragment for KUnit as suggested
>    by Kieran and Luis.
>  - Cleaned up, and reformatted a bunch of stuff.
>
> --
> 2.21.0.rc0.258.g878e2cd30e-goog
>

Someone suggested I should send the next revision out as "PATCH"
instead of "RFC" since there seems to be general consensus about
everything at a high level, with a couple of exceptions.

At this time I am planning on sending the next revision out as "[PATCH
v1 00/NN] kunit: introduce KUnit, the Linux kernel unit testing
framework". Initially I wasn't sure if the next revision should be
"[PATCH v1 ...]" or "[PATCH v5 ...]". Please let me know if you have a
strong objection to the former.

In the next revision, I will be dropping the last two of three patches
for the DT unit tests as there don't seem to be enough features
currently available to justify the heavy refactoring I did; however, I
will still include the patch that just converts everything over to
KUnit without restructuring the test cases:
https://lkml.org/lkml/2019/2/14/1133

I should have the next revision out in a week or so.

^ permalink raw reply	[flat|nested] 316+ messages in thread

* Re: [RFC v4 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework
@ 2019-03-21  1:07     ` Logan Gunthorpe
  0 siblings, 0 replies; 316+ messages in thread
From: Logan Gunthorpe @ 2019-03-21  1:07 UTC (permalink / raw)
  To: Brendan Higgins, keescook, mcgrof, shuah, robh, kieran.bingham,
	frowand.list
  Cc: gregkh, joel, mpe, joe, brakmo, rostedt, Tim.Bird, khilman,
	julia.lawall, linux-kselftest, kunit-dev, linux-kernel, jdike,
	richard, linux-um, daniel, dri-devel, dan.j.williams,
	linux-nvdimm, knut.omang, devicetree, pmladek, Alexander.Levin,
	amir73il, dan.carpenter, wfg

Hi,

On 2019-02-14 2:37 p.m., Brendan Higgins wrote:
> This patch set proposes KUnit, a lightweight unit testing and mocking
> framework for the Linux kernel.

I haven't followed the entire conversation but I saw the KUnit write-up
on LWN and ended up, as an exercise, giving it a try.

I really like the idea of having a fast unit testing infrastructure in
the kernel. Occasionally, I write userspace tests for tricky functions
that I essentially write by copying the code over to a throw away C file
and exercise them as I need. I think it would be great to be able to
keep these tests around in a way that they can be run by anyone who
wants to touch the code.

I was just dealing with some functions that required some mocked up
tests so I thought I'd give KUnit a try. I found writing the code very
easy and the infrastructure I was testing was quite simple to mock out
the hardware.

However, I got a bit hung up by one issue: I was writing unit tests for
code in the NTB tree which itself depends on CONFIG_PCI which cannot be
enabled in UML (for what should be obvious reasons). I managed to work
around this because, as luck would have it, all the functions I cared
about testing were actually static inline functions in headers. So I
placed my test code in the kunit folder (so it would compile) and hacked
around a couple of functions I didn't care about that would not be
compiled.
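
To give a flavour, the tests ended up shaped roughly like the sketch
below. This is a boiled-down illustration rather than one of the actual
commits: the fake ops structure and helper are made up, and the macro
and header spellings are the ones from this RFC as I understand them.

        #include <kunit/test.h>

        /* Fake "hardware": an ops table plus a register file in plain memory. */
        struct fake_hw_ops {
                u32 (*read_reg)(void *priv, int reg);
        };

        static u32 fake_read_reg(void *priv, int reg)
        {
                u32 *regs = priv;

                return regs[reg];
        }

        /* Stand-in for one of the static inline helpers from the NTB headers. */
        static inline u32 helper_under_test(const struct fake_hw_ops *ops,
                                            void *priv, int reg)
        {
                return ops->read_reg(priv, reg);
        }

        static void helper_reads_expected_register(struct kunit *test)
        {
                u32 regs[4] = { 0, 0, 42, 0 };
                struct fake_hw_ops ops = { .read_reg = fake_read_reg };

                KUNIT_EXPECT_EQ(test, (u32)42, helper_under_test(&ops, regs, 2));
        }

The real tests are just this pattern repeated for each helper, plus the
usual kunit_module registration boilerplate.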

In the end I got it to work acceptably, but I get the impression that
KUnit will not be usable for wide swaths of kernel code that can't be
compiled in UML. Has there been any discussion or ideas on how to work
around this so it can be more generally useful? Or will this feature be
restricted roughly to non-drivers and functions in headers that don't
have #ifdefs around them?

If you're interested in seeing the unit tests I ended up writing you can
find the commits here[1].

Thanks,

Logan

[1] https://github.com/sbates130272/linux-p2pmem/ ntb_kunit

^ permalink raw reply	[flat|nested] 316+ messages in thread

* Re: [RFC v4 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework
@ 2019-03-21  5:23         ` Knut Omang
  0 siblings, 0 replies; 316+ messages in thread
From: Knut Omang @ 2019-03-21  5:23 UTC (permalink / raw)
  To: Logan Gunthorpe, Brendan Higgins, keescook, mcgrof, shuah, robh,
	kieran.bingham, frowand.list
  Cc: gregkh, joel, mpe, joe, brakmo, rostedt, Tim.Bird, khilman,
	julia.lawall, linux-kselftest, kunit-dev, linux-kernel, jdike,
	richard, linux-um, daniel, dri-devel, dan.j.williams,
	linux-nvdimm, devicetree, pmladek, Alexander.Levin, amir73il,
	dan.carpenter, wfg

Hi Logan,

On Wed, 2019-03-20 at 19:07 -0600, Logan Gunthorpe wrote:
> Hi,
> 
> On 2019-02-14 2:37 p.m., Brendan Higgins wrote:
> > This patch set proposes KUnit, a lightweight unit testing and mocking
> > framework for the Linux kernel.
> 
> I haven't followed the entire conversation but I saw the KUnit write-up
> on LWN and ended up, as an exercise, giving it a try.
> 
> I really like the idea of having a fast unit testing infrastructure in
> the kernel. Occasionally, I write userspace tests for tricky functions
> that I essentially write by copying the code over to a throw away C file
> and exercise them as I need. I think it would be great to be able to
> keep these tests around in a way that they can be run by anyone who
> wants to touch the code.
> 
> I was just dealing with some functions that required some mocked up
> tests so I thought I'd give KUnit a try. I found writing the code very
> easy and the infrastructure I was testing was quite simple to mock out
> the hardware.
> 
> However, I got a bit hung up by one issue: I was writing unit tests for
> code in the NTB tree which itself depends on CONFIG_PCI which cannot be
> enabled in UML (for what should be obvious reasons). I managed to work
> around this because, as luck would have it, all the functions I cared
> about testing were actually static inline functions in headers. So I
> placed my test code in the kunit folder (so it would compile) and hacked
> around a couple of functions I didn't care about that would not be
> compiled.
> 
> In the end I got it to work acceptably, but I get the impression that
> KUnit will not be usable for wide swaths of kernel code that can't be
> compiled in UML. Has there been any discussion or ideas on how to work
> around this so it can be more generally useful? Or will this feature be
> restricted roughly to non-drivers and functions in headers that don't
> have #ifdefs around them?

Testing drivers, hardware and firmware within production kernels was the use
case that inspired KTF (Kernel Test Framework). Currently KTF is available as a
standalone git repository. That's been the most efficient form for us so far, 
as we typically want tests to be developed once but deployed on many different
kernel versions simultaneously, as part of continuous integration.

But we're also working towards a suitable proposal for how it can be
smoothly integrated into the kernel while still keeping the benefits
and tools to allow cross-kernel development of tests. As part of this,
I have a master's student who has been looking at converting some of
the existing kernel test suites to KTF, and we have more examples coming 
from our internal usage, as we get more mileage and more users.
See for instance this recent blog entry testing skbuff as an example,
on the Oracle kernel development blog:

https://blogs.oracle.com/linux/writing-kernel-tests-with-the-new-kernel-test-framework-ktf

Other relevant links:

Git repo: https://github.com/oracle/ktf
Formatted docs: http://heim.ifi.uio.no/~knuto/ktf/
LWN mention from my presentation at LPC'17: https://lwn.net/Articles/735034/
Earlier Oracle blog post: https://blogs.oracle.com/linux/oracles-new-kernel-test-framework-for-linux-v2
OSS'18 presentation slides: https://events.linuxfoundation.org/wp-content/uploads/2017/12/Test-Driven-Kernel-Development-Knut-Omang-Oracle.pdf

> If you're interested in seeing the unit tests I ended up writing you can
> find the commits here[1].

It would certainly be interesting to see how the use cases you struggled with
would work with KTF ;-)

Thanks,
Knut

>
> Thanks,
> 
> Logan
> 
> [1] https://github.com/sbates130272/linux-p2pmem/ ntb_kunit


^ permalink raw reply	[flat|nested] 316+ messages in thread

* [RFC v4 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework
@ 2019-03-21  5:23         ` Knut Omang
  0 siblings, 0 replies; 316+ messages in thread
From: Knut Omang @ 2019-03-21  5:23 UTC (permalink / raw)


Hi Logan,

On Wed, 2019-03-20@19:07 -0600, Logan Gunthorpe wrote:
> Hi,
> 
> On 2019-02-14 2:37 p.m., Brendan Higgins wrote:
> > This patch set proposes KUnit, a lightweight unit testing and mocking
> > framework for the Linux kernel.
> 
> I haven't followed the entire conversation but I saw the KUnit write-up
> on LWN and ended up, as an exercise, giving it a try.
> 
> I really like the idea of having a fast unit testing infrastructure in
> the kernel. Occasionally, I write userspace tests for tricky functions
> that I essentially write by copying the code over to a throw away C file
> and exercise them as I need. I think it would be great to be able to
> keep these tests around in a way that they can be run by anyone who
> wants to touch the code.
> 
> I was just dealing with some functions that required some mocked up
> tests so I thought I'd give KUnit a try. I found writing the code very
> easy and the infrastructure I was testing was quite simple to mock out
> the hardware.
> 
> However, I got a bit hung up by one issue: I was writing unit tests for
> code in the NTB tree which itself depends on CONFIG_PCI which cannot be
> enabled in UML (for what should be obvious reasons). I managed to work
> around this because, as luck would have it, all the functions I cared
> about testing were actually static inline functions in headers. So I
> placed my test code in the kunit folder (so it would compile) and hacked
> around a couple a of functions I didn't care about that would not be
> compiled.
> 
> In the end I got it to work acceptably, but I get the impression that
> KUnit will not be usable for wide swaths of kernel code that can't be
> compiled in UML. Has there been any discussion or ideas on how to work
> around this so it can be more generally useful? Or will this feature be
> restricted roughly to non-drivers and functions in headers that don't
> have #ifdefs around them?

Testing drivers, hardware and firmware within production kernels was the use
case that inspired KTF (Kernel Test Framework). Currently KTF is available as a
standalone git repository. That's been the most efficient form for us so far, 
as we typically want tests to be developed once but deployed on many different
kernel versions simultaneously, as part of continuous integration.

But we're also working towards a suitable proposal for how it can be 
smoothly integrated into the kernel, but while still keeping the benefits 
and tools to allow cross-kernel development of tests. As part of this,
I have a master student who has been looking at converting some of 
the existing kernel test suites to KTF, and we have more examples coming 
from our internal usage, as we get more mileage and more users.
See for instance this recent blog entry testing skbuff as an example,
on the Oracle kernel development blog:

https://blogs.oracle.com/linux/writing-kernel-tests-with-the-new-kernel-test-framework-ktf

Other relevant links:

Git repo: https://github.com/oracle/ktf
Formatted docs: http://heim.ifi.uio.no/~knuto/ktf/
LWN mention from my presentation at LPC'17: https://lwn.net/Articles/735034/
Earlier Oracle blog post: https://blogs.oracle.com/linux/oracles-new-kernel-test-framework-for-linux-v2
OSS'18 presentation slides: https://events.linuxfoundation.org/wp-content/uploads/2017/12/Test-Driven-Kernel-Development-Knut-Omang-Oracle.pdf

> If you're interested in seeing the unit tests I ended up writing you can
> find the commits here[1].

It would certainly be interesting to see how the use cases you struggled with
would work with KTF ;-)

Thanks,
Knut

>
> Thanks,
> 
> Logan
> 
> [1] https://github.com/sbates130272/linux-p2pmem/ ntb_kunit

^ permalink raw reply	[flat|nested] 316+ messages in thread

* Re: [RFC v4 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework
@ 2019-03-21  5:23         ` Knut Omang
  0 siblings, 0 replies; 316+ messages in thread
From: Knut Omang @ 2019-03-21  5:23 UTC (permalink / raw)
  To: Logan Gunthorpe, Brendan Higgins, keescook, mcgrof, shuah, robh,
	kieran.bingham, frowand.list
  Cc: brakmo, pmladek, amir73il, dri-devel, Alexander.Levin,
	linux-kselftest, linux-nvdimm, richard, wfg, joel, jdike,
	dan.carpenter, devicetree, Tim.Bird, linux-um, rostedt,
	julia.lawall, dan.j.williams, kunit-dev, gregkh, linux-kernel,
	daniel, mpe, joe, khilman

Hi Logan,

On Wed, 2019-03-20 at 19:07 -0600, Logan Gunthorpe wrote:
> Hi,
> 
> On 2019-02-14 2:37 p.m., Brendan Higgins wrote:
> > This patch set proposes KUnit, a lightweight unit testing and mocking
> > framework for the Linux kernel.
> 
> I haven't followed the entire conversation but I saw the KUnit write-up
> on LWN and ended up, as an exercise, giving it a try.
> 
> I really like the idea of having a fast unit testing infrastructure in
> the kernel. Occasionally, I write userspace tests for tricky functions
> that I essentially write by copying the code over to a throw away C file
> and exercise them as I need. I think it would be great to be able to
> keep these tests around in a way that they can be run by anyone who
> wants to touch the code.
> 
> I was just dealing with some functions that required some mocked up
> tests so I thought I'd give KUnit a try. I found writing the code very
> easy and the infrastructure I was testing was quite simple to mock out
> the hardware.
> 
> However, I got a bit hung up by one issue: I was writing unit tests for
> code in the NTB tree which itself depends on CONFIG_PCI which cannot be
> enabled in UML (for what should be obvious reasons). I managed to work
> around this because, as luck would have it, all the functions I cared
> about testing were actually static inline functions in headers. So I
> placed my test code in the kunit folder (so it would compile) and hacked
> around a couple a of functions I didn't care about that would not be
> compiled.
> 
> In the end I got it to work acceptably, but I get the impression that
> KUnit will not be usable for wide swaths of kernel code that can't be
> compiled in UML. Has there been any discussion or ideas on how to work
> around this so it can be more generally useful? Or will this feature be
> restricted roughly to non-drivers and functions in headers that don't
> have #ifdefs around them?

Testing drivers, hardware and firmware within production kernels was the use
case that inspired KTF (Kernel Test Framework). Currently KTF is available as a
standalone git repository. That's been the most efficient form for us so far, 
as we typically want tests to be developed once but deployed on many different
kernel versions simultaneously, as part of continuous integration.

But we're also working towards a suitable proposal for how it can be 
smoothly integrated into the kernel, but while still keeping the benefits 
and tools to allow cross-kernel development of tests. As part of this,
I have a master student who has been looking at converting some of 
the existing kernel test suites to KTF, and we have more examples coming 
from our internal usage, as we get more mileage and more users.
See for instance this recent blog entry testing skbuff as an example,
on the Oracle kernel development blog:

https://blogs.oracle.com/linux/writing-kernel-tests-with-the-new-kernel-test-framework-ktf

Other relevant links:

Git repo: https://github.com/oracle/ktf
Formatted docs: http://heim.ifi.uio.no/~knuto/ktf/
LWN mention from my presentation at LPC'17: https://lwn.net/Articles/735034/
Earlier Oracle blog post: https://blogs.oracle.com/linux/oracles-new-kernel-test-framework-for-linux-v2
OSS'18 presentation slides: https://events.linuxfoundation.org/wp-content/uploads/2017/12/Test-Driven-Kernel-Development-Knut-Omang-Oracle.pdf

> If you're interested in seeing the unit tests I ended up writing you can
> find the commits here[1].

It would certainly be interesting to see how the use cases you struggled with
would work with KTF ;-)

Thanks,
Knut

>
> Thanks,
> 
> Logan
> 
> [1] https://github.com/sbates130272/linux-p2pmem/ ntb_kunit


_______________________________________________
linux-um mailing list
linux-um@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-um


^ permalink raw reply	[flat|nested] 316+ messages in thread

* Re: [RFC v4 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework
  2019-03-21  5:23         ` Knut Omang
                             ` (3 preceding siblings ...)
  (?)
@ 2019-03-21 15:56           ` Logan Gunthorpe
  -1 siblings, 0 replies; 316+ messages in thread
From: Logan Gunthorpe @ 2019-03-21 15:56 UTC (permalink / raw)
  To: Knut Omang, Brendan Higgins, keescook, mcgrof, shuah, robh,
	kieran.bingham, frowand.list
  Cc: brakmo, pmladek, amir73il, dri-devel, Alexander.Levin,
	linux-kselftest, linux-nvdimm, richard, wfg, joel, jdike,
	dan.carpenter, devicetree, Tim.Bird, linux-um, rostedt,
	julia.lawall, kunit-dev, gregkh, linux-kernel, daniel, mpe, joe,
	khilman



On 2019-03-20 11:23 p.m., Knut Omang wrote:
> Testing drivers, hardware and firmware within production kernels was the use
> case that inspired KTF (Kernel Test Framework). Currently KTF is available as a
> standalone git repository. That's been the most efficient form for us so far, 
> as we typically want tests to be developed once but deployed on many different
> kernel versions simultaneously, as part of continuous integration.

Interesting. It seems like it's really in direct competition with KUnit.
I didn't really go into it in too much detail but these are my thoughts:

From a developer perspective I think KTF not being in the kernel tree is
a huge negative. I want minimal effort to include my tests in a patch
series and minimal effort for other developers to be able to use them.
Needing to submit these tests to another project or use another project
to run them is too much friction.

Also I think the goal of having tests that run on any kernel version is
a pipe dream. You'd absolutely need a way to encode which kernel
versions a test is expected to pass on because the tests might not make
sense until a feature is finished in upstream. And this makes it even
harder to develop these tests because, when we write them, we might not
even know which kernel version the feature will be added to. Similarly,
if a feature is removed or substantially changed, someone will have to
do a patch to disable the test for subsequent kernel versions and create
a new test for changed features. So, IMO, tests absolutely have to be
part of the kernel tree so they can be changed with the respective
features they test.
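
Concretely, I'd expect that to mean wrapping tests in guards like the
following (KTF-style macros and the version number are purely illustrative):

  #include <linux/version.h>
  #include "ktf.h"

  /* Hypothetical: the feature under test only landed in v4.16, so the
   * test has to be compiled out entirely on anything older. */
  #if LINUX_VERSION_CODE >= KERNEL_VERSION(4, 16, 0)
  TEST(feature, new_api_works)
  {
          EXPECT_TRUE(true);      /* placeholder for the real checks */
  }
  #endif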

Kunit's ability to run without having to build and run the entire kernel
 is also a huge plus. (Assuming there's a way to get around the build
dependency issues). Because of this, it can be very quick to run these
tests which makes development a *lot* easier seeing you don't have to
reboot a machine every time you want to test a fix.
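
For reference, such a test is just a small C file in the tree, roughly like
the sketch below (following the API names in the KUnit documentation; the
exact spelling may differ between revisions of the patch set):

  #include <kunit/test.h>

  /* A pure-function check: no hardware, no VM, no reboot needed. */
  static void example_add_test(struct kunit *test)
  {
          KUNIT_EXPECT_EQ(test, 3, 1 + 2);
  }

  static struct kunit_case example_test_cases[] = {
          KUNIT_CASE(example_add_test),
          {}
  };

  static struct kunit_suite example_test_suite = {
          .name = "example",
          .test_cases = example_test_cases,
  };
  kunit_test_suite(example_test_suite);

The edit-build-run loop for something like that is seconds, not minutes.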

Logan

^ permalink raw reply	[flat|nested] 316+ messages in thread

* Re: [RFC v4 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework
@ 2019-03-21 16:55               ` Brendan Higgins
  0 siblings, 0 replies; 316+ messages in thread
From: Brendan Higgins @ 2019-03-21 16:55 UTC (permalink / raw)
  To: Logan Gunthorpe
  Cc: Knut Omang, Kees Cook, Luis Chamberlain, shuah, Rob Herring,
	Kieran Bingham, Frank Rowand, Greg KH, Joel Stanley,
	Michael Ellerman, Joe Perches, brakmo, Steven Rostedt, Bird,
	Timothy, Kevin Hilman, Julia Lawall, linux-kselftest, kunit-dev,
	Linux Kernel Mailing List, Jeff Dike, Richard Weinberger,
	linux-um, Daniel Vetter, dri-devel, Dan Williams, linux-nvdimm,
	devicetree, Petr Mladek, Sasha Levin, Amir Goldstein,
	Dan Carpenter, wfg

On Thu, Mar 21, 2019 at 8:56 AM Logan Gunthorpe <logang@deltatee.com> wrote:
>
>
>
> On 2019-03-20 11:23 p.m., Knut Omang wrote:
> > Testing drivers, hardware and firmware within production kernels was the use
> > case that inspired KTF (Kernel Test Framework). Currently KTF is available as a
> > standalone git repository. That's been the most efficient form for us so far,
> > as we typically want tests to be developed once but deployed on many different
> > kernel versions simultaneously, as part of continuous integration.
>
> Interesting. It seems like it's really in direct competition with KUnit.

I won't speak for Knut, but I don't think we are in competition. I see
KTF as a novel way to do a kind of white box end-to-end testing for
the Linux kernel, which is a valuable thing, especially in some
circumstances. I could see KTF having a lot of value for someone who
wants to maintain out of tree drivers, in particular.

Nevertheless, I don't really see KTF as a real unit testing framework
for a number of different reasons; you pointed out some below, but the
main one, I think, is that it requires booting a real kernel on actual
hardware. I imagine it could be made to work on a VM, but that
isn't really the point; it fundamentally depends on having part of the
test, or at least driving the test from userspace on top of the kernel
under test. Knut, myself, and others, had a previous discussion to
this effect here: https://lkml.org/lkml/2018/11/24/170

> I didn't really go into it in too much detail but these are my thoughts:
>
> From a developer perspective I think KTF not being in the kernel tree is
> a huge negative. I want minimal effort to include my tests in a patch
> series and minimal effort for other developers to be able to use them.
> Needing to submit these tests to another project or use another project
> to run them is too much friction.
>
> Also I think the goal of having tests that run on any kernel version is
> a pipe dream. You'd absolutely need a way to encode which kernel
> versions a test is expected to pass on because the tests might not make
> sense until a feature is finished in upstream. And this makes it even
> harder to develop these tests because, when we write them, we might not
> even know which kernel version the feature will be added to. Similarly,
> if a feature is removed or substantially changed, someone will have to
> do a patch to disable the test for subsequent kernel versions and create
> a new test for changed features. So, IMO, tests absolutely have to be
> part of the kernel tree so they can be changed with the respective
> features they test.
>
> Kunit's ability to run without having to build and run the entire kernel
>  is also a huge plus. (Assuming there's a way to get around the build
> dependency issues). Because of this, it can be very quick to run these
> tests which makes development a *lot* easier seeing you don't have to
> reboot a machine every time you want to test a fix.

I will reply to your comments directly on your original email. I don't
want to hijack this thread, in case we want to discuss the topic of
KUnit vs. KTF further.

^ permalink raw reply	[flat|nested] 316+ messages in thread

* Re: [RFC v4 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework
@ 2019-03-21 19:13                   ` Knut Omang
  0 siblings, 0 replies; 316+ messages in thread
From: Knut Omang @ 2019-03-21 19:13 UTC (permalink / raw)
  To: Brendan Higgins, Logan Gunthorpe
  Cc: Kees Cook, Luis Chamberlain, shuah, Rob Herring, Kieran Bingham,
	Frank Rowand, Greg KH, Joel Stanley, Michael Ellerman,
	Joe Perches, brakmo, Steven Rostedt, Bird, Timothy, Kevin Hilman,
	Julia Lawall, linux-kselftest, kunit-dev,
	Linux Kernel Mailing List, Jeff Dike, Richard Weinberger,
	linux-um, Daniel Vetter, dri-devel, Dan Williams, linux-nvdimm,
	devicetree, Petr Mladek, Sasha Levin, Amir Goldstein,
	Dan Carpenter, wfg, Alan Maguire

On Thu, 2019-03-21 at 09:55 -0700, Brendan Higgins wrote:
> On Thu, Mar 21, 2019 at 8:56 AM Logan Gunthorpe <logang@deltatee.com> wrote:
> > 
> > 
> > On 2019-03-20 11:23 p.m., Knut Omang wrote:
> > > Testing drivers, hardware and firmware within production kernels was the
> > > use
> > > case that inspired KTF (Kernel Test Framework). Currently KTF is available
> > > as a
> > > standalone git repository. That's been the most efficient form for us so
> > > far,
> > > as we typically want tests to be developed once but deployed on many
> > > different
> > > kernel versions simultaneously, as part of continuous integration.
> > 
> > Interesting. It seems like it's really in direct competition with KUnit.
> 
> I won't speak for Knut, but I don't think we are in competition. 

I would rather say we have some overlap in functionality :)
My understanding is that we have a common goal of providing better
infrastructure for testing, but have approached this whole problem space
from somewhat different perspectives.

> I see
> KTF as a novel way to do a kind of white box end-to-end testing for
> the Linux kernel, which is a valuable thing, especially in some
> circumstances. I could see KTF having a lot of value for someone who
> wants to maintain out of tree drivers, in particular.

The best argument here is really good examples.
I'm not sure the distinction between "black box" and "white box" testing is
useful here; there are always underlying assumptions behind specific,
deterministic tests. Writing a test is a very good way to get to
understand a piece of code. Just look at the flow of the example at
https://blogs.oracle.com/linux/writing-kernel-tests-with-the-new-kernel-test-framework-ktf:
these are clearly unit tests, but the knowledge gathering is an important
part of the motivation!

> Nevertheless, I don't really see KTF as a real unit testing framework
> for a number of different reasons; you pointed out some below, but I
> think the main one being that it requires booting a real kernel on
> actual hardware; 

That depends on what you want to test. If you need hardware (or simulated or
emulated hardware) for the test, of course you would need to have that
hardware, but if, let's say, you just wanted to run tests like the skbuff
example tests (see link above), you wouldn't need anything more than what you
need to run KUnit tests.

> I imagine it could be made to work on a VM, but that
> isn't really the point; it fundamentally depends on having part of the
> test, or at least driving the test from userspace on top of the kernel
> under test. Knut, myself, and others, had a previous discussion to
> this effect here: https://lkml.org/lkml/2018/11/24/170

> > I didn't really go into it in too much detail but these are my thoughts:
> > 
> > From a developer perspective I think KTF not being in the kernel tree is
> > a huge negative. I want minimal effort to include my tests in a patch
> > series and minimal effort for other developers to be able to use them.
> > Needing to submit these tests to another project or use another project
> > to run them is too much friction.

As said, I recognize the need to upstream KTF, and we are working to do that,
that's why I bother to write this :)

> > Also I think the goal of having tests that run on any kernel version is
> > a pipe dream. 

I have fulfilled that dream, so I know it is possible (Inifinband driver,
kernels from 2.6.39 to 4.8.x or so..) I know a lot of projects would benefit
from support for such workflows, but that's not really the point here - we want
to achieve both goals!

> > You'd absolutely need a way to encode which kernel
> > versions a test is expected to pass on because the tests might not make
> > sense until a feature is finished in upstream. And this makes it even
> > harder to develop these tests because, when we write them, we might not
> > even know which kernel version the feature will be added to. Similarly,
> > if a feature is removed or substantially changed, someone will have to
> > do a patch to disable the test for subsequent kernel versions and create
> > a new test for changed features. So, IMO, tests absolutely have to be
> > part of the kernel tree so they can be changed with the respective
> > features they test.

Of course a feature that is not part of a kernel cannot easily pass for that
kernel. And yes, testing for kernel version might be necessary in some cases,
and even to write a section of extra code to handle differences, still that's 
worth the benefit.
And that's also a use case: "Can I use kernel v.X.Y.Z if I need feature w?"
Lets assume we had a set of tests covering a particular feature, and someone
needed that feature, then they could just run the latest set of tests for that
feature on an older kernel to determine if they had enough support for what they
needed. If necessary, they could then backport the feature, and run the tests to
verify that they actually implemented it correctly.

On example I recall of this from the Infiniband driver times was 
the need to have a predictable way to efficiently use huge scatterlists across
kernels. We relied upon scatterlist chaining in a particular way, and the API
descriptions did not really specify to a detailed enough level how the
guaranteed semantics were supposed to be.
I wrote a few simple KTF tests that tested the driver code for the semantics we
expected, and ran them against older and newer kernels and used them to make
sure we would have a driver that worked across a few subtle changes to
scatterlists and their use. 

> > Kunit's ability to run without having to build and run the entire kernel
> >  is also a huge plus. 

IMHO the UML kernel is still a kernel running inside a user land program,
and so is a QEMU/KVM VM, which is my favourite KTF test environment.
 
Also with UML it is more difficult/less useful to deploy user space tools such
as valgrind, which IMHO would be my main reason for getting kernel code out of
the kernel. I recognize that there's a need for 
doing just that (e.g. compiling complicated data structures entirely in user
space with mocked interfaces) but I think it would be much more useful 
to be able to do that without the additional complexity of UML (or QEMU).

> > (Assuming there's a way to get around the build
> > dependency issues). Because of this, it can be very quick to run these
> > tests which makes development a *lot* easier seeing you don't have to
> > reboot a machine every time you want to test a fix.

If your target component under test can be built as a kernel module, or set of
modules, with KTF your workflow would not involve booting at all (unless you
happened to crash the system with one of your tests, that is :) )

You would just unload your module under test and the test module, recompile the
two and insmod again. My work current work cycle on this is just a few seconds.

> I will reply to your comments directly on your original email. I don't
> want to hijack this thread, in case we want to discuss the topic of
> KUnit vs. KTF further.

Good idea!
We can at least agree upon that such an important matter as this can 
be worthy a good, detailed discussion! :)

Thanks!
Knut


^ permalink raw reply	[flat|nested] 316+ messages in thread

* [RFC v4 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework
@ 2019-03-21 19:13                   ` Knut Omang
  0 siblings, 0 replies; 316+ messages in thread
From: knut.omang @ 2019-03-21 19:13 UTC (permalink / raw)


On Thu, 2019-03-21 at 09:55 -0700, Brendan Higgins wrote:
> On Thu, Mar 21, 2019 at 8:56 AM Logan Gunthorpe <logang at deltatee.com> wrote:
> > 
> > 
> > On 2019-03-20 11:23 p.m., Knut Omang wrote:
> > > Testing drivers, hardware and firmware within production kernels was the
> > > use
> > > case that inspired KTF (Kernel Test Framework). Currently KTF is available
> > > as a
> > > standalone git repository. That's been the most efficient form for us so
> > > far,
> > > as we typically want tests to be developed once but deployed on many
> > > different
> > > kernel versions simultaneously, as part of continuous integration.
> > 
> > Interesting. It seems like it's really in direct competition with KUnit.
> 
> I won't speak for Knut, but I don't think we are in competition. 

I would rather say we have some overlap in functionality :)
My understanding is that we have a common goal of providing better
infrastructure for testing, but have approached this whole problem complex from
somewhat different perspectives.

> I see
> KTF as a novel way to do a kind of white box end-to-end testing for
> the Linux kernel, which is a valuable thing, especially in some
> circumstances. I could see KTF having a lot of value for someone who
> wants to maintain out of tree drivers, in particular.

The best argument here is really good examples.
I'm not sure the distinction between "black box" and "white box" testing is
useful here, there's always underlying assumptions behind specific,
deterministic tests. Writing a test is a very good way to get to 
understand a piece of code. Just look at the flow of the example in
https://blogs.oracle.com/linux/writing-kernel-tests-with-the-new-kernel-test-framework-ktf, clearly unit tests, but the knowledge gathering is an important
part and motivation!

> Nevertheless, I don't really see KTF as a real unit testing framework
> for a number of different reasons; you pointed out some below, but I
> think the main one being that it requires booting a real kernel on
> actual hardware; 

That depends on what you want to test. If you need hardware (or simulated or
emulated hardware) for the test, of course you would need to have that hardware,
but if, lets say, you just wanted to run tests like the skbuff example tests
(see link above) you wouldn't need anything more than what you need to run KUnit
tests.

> I imagine it could be made to work on a VM, but that
> isn't really the point; it fundamentally depends on having part of the
> test, or at least driving the test from userspace on top of the kernel
> under test. Knut, myself, and others, had a previous discussion to
> this effect here: https://lkml.org/lkml/2018/11/24/170

> > I didn't really go into it in too much detail but these are my thoughts:
> > 
> > From a developer perspective I think KTF not being in the kernel tree is
> > a huge negative. I want minimal effort to include my tests in a patch
> > series and minimal effort for other developers to be able to use them.
> > Needing to submit these tests to another project or use another project
> > to run them is too much friction.

As said, I recognize the need to upstream KTF, and we are working to do that,
that's why I bother to write this :)

> > Also I think the goal of having tests that run on any kernel version is
> > a pipe dream. 

I have fulfilled that dream, so I know it is possible (Inifinband driver,
kernels from 2.6.39 to 4.8.x or so..) I know a lot of projects would benefit
from support for such workflows, but that's not really the point here - we want
to achieve both goals!

> > You'd absolutely need a way to encode which kernel
> > versions a test is expected to pass on because the tests might not make
> > sense until a feature is finished in upstream. And this makes it even
> > harder to develop these tests because, when we write them, we might not
> > even know which kernel version the feature will be added to. Similarly,
> > if a feature is removed or substantially changed, someone will have to
> > do a patch to disable the test for subsequent kernel versions and create
> > a new test for changed features. So, IMO, tests absolutely have to be
> > part of the kernel tree so they can be changed with the respective
> > features they test.

Of course a feature that is not part of a kernel cannot easily pass for that
kernel. And yes, testing for kernel version might be necessary in some cases,
and even to write a section of extra code to handle differences, still that's 
worth the benefit.
And that's also a use case: "Can I use kernel v.X.Y.Z if I need feature w?"
Lets assume we had a set of tests covering a particular feature, and someone
needed that feature, then they could just run the latest set of tests for that
feature on an older kernel to determine if they had enough support for what they
needed. If necessary, they could then backport the feature, and run the tests to
verify that they actually implemented it correctly.

On example I recall of this from the Infiniband driver times was 
the need to have a predictable way to efficiently use huge scatterlists across
kernels. We relied upon scatterlist chaining in a particular way, and the API
descriptions did not really specify to a detailed enough level how the
guaranteed semantics were supposed to be.
I wrote a few simple KTF tests that tested the driver code for the semantics we
expected, and ran them against older and newer kernels and used them to make
sure we would have a driver that worked across a few subtle changes to
scatterlists and their use. 

> > Kunit's ability to run without having to build and run the entire kernel
> >  is also a huge plus. 

IMHO the UML kernel is still a kernel running inside a user land program,
and so is a QEMU/KVM VM, which is my favourite KTF test environment.
 
Also with UML it is more difficult/less useful to deploy user space tools such
as valgrind, which IMHO would be my main reason for getting kernel code out of
the kernel. I recognize that there's a need for 
doing just that (e.g. compiling complicated data structures entirely in user
space with mocked interfaces) but I think it would be much more useful 
to be able to do that without the additional complexity of UML (or QEMU).

> > (Assuming there's a way to get around the build
> > dependency issues). Because of this, it can be very quick to run these
> > tests which makes development a *lot* easier seeing you don't have to
> > reboot a machine every time you want to test a fix.

If your target component under test can be built as a kernel module, or set of
modules, with KTF your workflow would not involve booting at all (unless you
happened to crash the system with one of your tests, that is :) )

You would just unload your module under test and the test module, recompile the
two and insmod again. My work current work cycle on this is just a few seconds.

> I will reply to your comments directly on your original email. I don't
> want to hijack this thread, in case we want to discuss the topic of
> KUnit vs. KTF further.

Good idea!
We can at least agree upon that such an important matter as this can 
be worthy a good, detailed discussion! :)

Thanks!
Knut

^ permalink raw reply	[flat|nested] 316+ messages in thread

* [RFC v4 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework
@ 2019-03-21 19:13                   ` Knut Omang
  0 siblings, 0 replies; 316+ messages in thread
From: Knut Omang @ 2019-03-21 19:13 UTC (permalink / raw)


On Thu, 2019-03-21@09:55 -0700, Brendan Higgins wrote:
> On Thu, Mar 21, 2019@8:56 AM Logan Gunthorpe <logang@deltatee.com> wrote:
> > 
> > 
> > On 2019-03-20 11:23 p.m., Knut Omang wrote:
> > > Testing drivers, hardware and firmware within production kernels was the
> > > use
> > > case that inspired KTF (Kernel Test Framework). Currently KTF is available
> > > as a
> > > standalone git repository. That's been the most efficient form for us so
> > > far,
> > > as we typically want tests to be developed once but deployed on many
> > > different
> > > kernel versions simultaneously, as part of continuous integration.
> > 
> > Interesting. It seems like it's really in direct competition with KUnit.
> 
> I won't speak for Knut, but I don't think we are in competition. 

I would rather say we have some overlap in functionality :)
My understanding is that we have a common goal of providing better
infrastructure for testing, but have approached this whole problem complex from
somewhat different perspectives.

> I see
> KTF as a novel way to do a kind of white box end-to-end testing for
> the Linux kernel, which is a valuable thing, especially in some
> circumstances. I could see KTF having a lot of value for someone who
> wants to maintain out of tree drivers, in particular.

The best argument here is really good examples.
I'm not sure the distinction between "black box" and "white box" testing is
useful here, there's always underlying assumptions behind specific,
deterministic tests. Writing a test is a very good way to get to 
understand a piece of code. Just look at the flow of the example in
https://blogs.oracle.com/linux/writing-kernel-tests-with-the-new-kernel-test-framework-ktf, clearly unit tests, but the knowledge gathering is an important
part and motivation!

> Nevertheless, I don't really see KTF as a real unit testing framework
> for a number of different reasons; you pointed out some below, but I
> think the main one being that it requires booting a real kernel on
> actual hardware; 

That depends on what you want to test. If you need hardware (or simulated or
emulated hardware) for the test, of course you would need to have that hardware,
but if, lets say, you just wanted to run tests like the skbuff example tests
(see link above) you wouldn't need anything more than what you need to run KUnit
tests.

> I imagine it could be made to work on a VM, but that
> isn't really the point; it fundamentally depends on having part of the
> test, or at least driving the test from userspace on top of the kernel
> under test. Knut, myself, and others, had a previous discussion to
> this effect here: https://lkml.org/lkml/2018/11/24/170

> > I didn't really go into it in too much detail but these are my thoughts:
> > 
> > From a developer perspective I think KTF not being in the kernel tree is
> > a huge negative. I want minimal effort to include my tests in a patch
> > series and minimal effort for other developers to be able to use them.
> > Needing to submit these tests to another project or use another project
> > to run them is too much friction.

As said, I recognize the need to upstream KTF, and we are working to do that,
that's why I bother to write this :)

> > Also I think the goal of having tests that run on any kernel version is
> > a pipe dream. 

I have fulfilled that dream, so I know it is possible (Inifinband driver,
kernels from 2.6.39 to 4.8.x or so..) I know a lot of projects would benefit
from support for such workflows, but that's not really the point here - we want
to achieve both goals!

> > You'd absolutely need a way to encode which kernel
> > versions a test is expected to pass on because the tests might not make
> > sense until a feature is finished in upstream. And this makes it even
> > harder to develop these tests because, when we write them, we might not
> > even know which kernel version the feature will be added to. Similarly,
> > if a feature is removed or substantially changed, someone will have to
> > do a patch to disable the test for subsequent kernel versions and create
> > a new test for changed features. So, IMO, tests absolutely have to be
> > part of the kernel tree so they can be changed with the respective
> > features they test.

Of course a feature that is not part of a kernel cannot easily pass for that
kernel. And yes, testing for kernel version might be necessary in some cases,
and even to write a section of extra code to handle differences, still that's 
worth the benefit.
And that's also a use case: "Can I use kernel v.X.Y.Z if I need feature w?"
Lets assume we had a set of tests covering a particular feature, and someone
needed that feature, then they could just run the latest set of tests for that
feature on an older kernel to determine if they had enough support for what they
needed. If necessary, they could then backport the feature, and run the tests to
verify that they actually implemented it correctly.

On example I recall of this from the Infiniband driver times was 
the need to have a predictable way to efficiently use huge scatterlists across
kernels. We relied upon scatterlist chaining in a particular way, and the API
descriptions did not really specify to a detailed enough level how the
guaranteed semantics were supposed to be.
I wrote a few simple KTF tests that tested the driver code for the semantics we
expected, and ran them against older and newer kernels and used them to make
sure we would have a driver that worked across a few subtle changes to
scatterlists and their use. 

> > Kunit's ability to run without having to build and run the entire kernel
> >  is also a huge plus. 

IMHO the UML kernel is still a kernel running inside a user land program,
and so is a QEMU/KVM VM, which is my favourite KTF test environment.
 
Also with UML it is more difficult/less useful to deploy user space tools such
as valgrind, which IMHO would be my main reason for getting kernel code out of
the kernel. I recognize that there's a need for 
doing just that (e.g. compiling complicated data structures entirely in user
space with mocked interfaces) but I think it would be much more useful 
to be able to do that without the additional complexity of UML (or QEMU).

> > (Assuming there's a way to get around the build
> > dependency issues). Because of this, it can be very quick to run these
> > tests which makes development a *lot* easier seeing you don't have to
> > reboot a machine every time you want to test a fix.

If your target component under test can be built as a kernel module, or set of
modules, with KTF your workflow would not involve booting at all (unless you
happened to crash the system with one of your tests, that is :) )

You would just unload your module under test and the test module, recompile the
two and insmod again. My work current work cycle on this is just a few seconds.

> I will reply to your comments directly on your original email. I don't
> want to hijack this thread, in case we want to discuss the topic of
> KUnit vs. KTF further.

Good idea!
We can at least agree upon that such an important matter as this can 
be worthy a good, detailed discussion! :)

Thanks!
Knut

^ permalink raw reply	[flat|nested] 316+ messages in thread

* Re: [RFC v4 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework
@ 2019-03-21 19:13                   ` Knut Omang
  0 siblings, 0 replies; 316+ messages in thread
From: Knut Omang @ 2019-03-21 19:13 UTC (permalink / raw)
  To: Brendan Higgins, Logan Gunthorpe
  Cc: brakmo, Petr Mladek, Amir Goldstein, dri-devel, Sasha Levin,
	linux-kselftest, Frank Rowand, Rob Herring, linux-nvdimm,
	Richard Weinberger, Kieran Bingham, wfg, Joel Stanley, Jeff Dike,
	Dan Carpenter, devicetree, shuah, Bird,

On Thu, 2019-03-21 at 09:55 -0700, Brendan Higgins wrote:
> On Thu, Mar 21, 2019 at 8:56 AM Logan Gunthorpe <logang@deltatee.com> wrote:
> > 
> > 
> > On 2019-03-20 11:23 p.m., Knut Omang wrote:
> > > Testing drivers, hardware and firmware within production kernels was the
> > > use
> > > case that inspired KTF (Kernel Test Framework). Currently KTF is available
> > > as a
> > > standalone git repository. That's been the most efficient form for us so
> > > far,
> > > as we typically want tests to be developed once but deployed on many
> > > different
> > > kernel versions simultaneously, as part of continuous integration.
> > 
> > Interesting. It seems like it's really in direct competition with KUnit.
> 
> I won't speak for Knut, but I don't think we are in competition. 

I would rather say we have some overlap in functionality :)
My understanding is that we have a common goal of providing better
infrastructure for testing, but have approached this whole problem complex from
somewhat different perspectives.

> I see
> KTF as a novel way to do a kind of white box end-to-end testing for
> the Linux kernel, which is a valuable thing, especially in some
> circumstances. I could see KTF having a lot of value for someone who
> wants to maintain out of tree drivers, in particular.

The best argument here is really good examples.
I'm not sure the distinction between "black box" and "white box" testing is
useful here, there's always underlying assumptions behind specific,
deterministic tests. Writing a test is a very good way to get to 
understand a piece of code. Just look at the flow of the example in
https://blogs.oracle.com/linux/writing-kernel-tests-with-the-new-kernel-test-framework-ktf, clearly unit tests, but the knowledge gathering is an important
part and motivation!

> Nevertheless, I don't really see KTF as a real unit testing framework
> for a number of different reasons; you pointed out some below, but I
> think the main one being that it requires booting a real kernel on
> actual hardware; 

That depends on what you want to test. If you need hardware (or simulated or
emulated hardware) for the test, of course you would need to have that hardware,
but if, lets say, you just wanted to run tests like the skbuff example tests
(see link above) you wouldn't need anything more than what you need to run KUnit
tests.

> I imagine it could be made to work on a VM, but that
> isn't really the point; it fundamentally depends on having part of the
> test, or at least driving the test from userspace on top of the kernel
> under test. Knut, myself, and others, had a previous discussion to
> this effect here: https://lkml.org/lkml/2018/11/24/170

> > I didn't really go into it in too much detail but these are my thoughts:
> > 
> > From a developer perspective I think KTF not being in the kernel tree is
> > a huge negative. I want minimal effort to include my tests in a patch
> > series and minimal effort for other developers to be able to use them.
> > Needing to submit these tests to another project or use another project
> > to run them is too much friction.

As said, I recognize the need to upstream KTF, and we are working to do that,
that's why I bother to write this :)

> > Also I think the goal of having tests that run on any kernel version is
> > a pipe dream. 

I have fulfilled that dream, so I know it is possible (Inifinband driver,
kernels from 2.6.39 to 4.8.x or so..) I know a lot of projects would benefit
from support for such workflows, but that's not really the point here - we want
to achieve both goals!

> > You'd absolutely need a way to encode which kernel
> > versions a test is expected to pass on because the tests might not make
> > sense until a feature is finished in upstream. And this makes it even
> > harder to develop these tests because, when we write them, we might not
> > even know which kernel version the feature will be added to. Similarly,
> > if a feature is removed or substantially changed, someone will have to
> > do a patch to disable the test for subsequent kernel versions and create
> > a new test for changed features. So, IMO, tests absolutely have to be
> > part of the kernel tree so they can be changed with the respective
> > features they test.

Of course a feature that is not part of a kernel cannot easily pass for that
kernel. And yes, testing for kernel version might be necessary in some cases,
and even to write a section of extra code to handle differences, still that's 
worth the benefit.
And that's also a use case: "Can I use kernel v.X.Y.Z if I need feature w?"
Lets assume we had a set of tests covering a particular feature, and someone
needed that feature, then they could just run the latest set of tests for that
feature on an older kernel to determine if they had enough support for what they
needed. If necessary, they could then backport the feature, and run the tests to
verify that they actually implemented it correctly.

On example I recall of this from the Infiniband driver times was 
the need to have a predictable way to efficiently use huge scatterlists across
kernels. We relied upon scatterlist chaining in a particular way, and the API
descriptions did not really specify to a detailed enough level how the
guaranteed semantics were supposed to be.
I wrote a few simple KTF tests that tested the driver code for the semantics we
expected, and ran them against older and newer kernels and used them to make
sure we would have a driver that worked across a few subtle changes to
scatterlists and their use. 

> > Kunit's ability to run without having to build and run the entire kernel
> >  is also a huge plus. 

IMHO the UML kernel is still a kernel running inside a user land program,
and so is a QEMU/KVM VM, which is my favourite KTF test environment.
 
Also with UML it is more difficult/less useful to deploy user space tools such
as valgrind, which IMHO would be my main reason for getting kernel code out of
the kernel. I recognize that there's a need for 
doing just that (e.g. compiling complicated data structures entirely in user
space with mocked interfaces) but I think it would be much more useful 
to be able to do that without the additional complexity of UML (or QEMU).

> > (Assuming there's a way to get around the build
> > dependency issues). Because of this, it can be very quick to run these
> > tests which makes development a *lot* easier seeing you don't have to
> > reboot a machine every time you want to test a fix.

If your target component under test can be built as a kernel module, or set of
modules, with KTF your workflow would not involve booting at all (unless you
happened to crash the system with one of your tests, that is :) )

You would just unload your module under test and the test module, recompile the
two and insmod again. My work current work cycle on this is just a few seconds.

> I will reply to your comments directly on your original email. I don't
> want to hijack this thread, in case we want to discuss the topic of
> KUnit vs. KTF further.

Good idea!
We can at least agree upon that such an important matter as this can 
be worthy a good, detailed discussion! :)

Thanks!
Knut


_______________________________________________
linux-um mailing list
linux-um@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-um


^ permalink raw reply	[flat|nested] 316+ messages in thread

* Re: [RFC v4 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework
  2019-03-21 19:13                   ` Knut Omang
                                       ` (3 preceding siblings ...)
  (?)
@ 2019-03-21 19:29                     ` Logan Gunthorpe
  -1 siblings, 0 replies; 316+ messages in thread
From: Logan Gunthorpe @ 2019-03-21 19:29 UTC (permalink / raw)
  To: Knut Omang, Brendan Higgins
  Cc: brakmo, Petr Mladek, Amir Goldstein, dri-devel, Sasha Levin,
	linux-kselftest, Frank Rowand, Rob Herring, linux-nvdimm,
	Richard Weinberger, Kieran Bingham, wfg, Joel Stanley, Jeff Dike,
	Dan Carpenter, devicetree, shuah, Bird,



On 2019-03-21 1:13 p.m., Knut Omang wrote:
>> Nevertheless, I don't really see KTF as a real unit testing framework
>> for a number of different reasons; you pointed out some below, but I
>> think the main one being that it requires booting a real kernel on
>> actual hardware; 
> 
> That depends on what you want to test. If you need hardware (or simulated or
> emulated hardware) for the test, of course you would need to have that hardware,
> but if, lets say, you just wanted to run tests like the skbuff example tests
> (see link above) you wouldn't need anything more than what you need to run KUnit
> tests.

I'm starting to get the same impression: KTF isn't unit testing. When we
are saying "unit tests" we are specifying exactly what we want to test:
small sections of code in isolation. So by definition you should not
need hardware for this.

> I have fulfilled that dream, so I know it is possible (Inifinband driver,
> kernels from 2.6.39 to 4.8.x or so..) I know a lot of projects would benefit
> from support for such workflows, but that's not really the point here - we want
> to achieve both goals!

This is what makes me think we are not talking about testing the same
things. We are not talking about end to end testing of entire drivers
but smaller sections of code. A unit test is far more granular and
despite an infinband driver existing for 2.6.39 through 4.8, the
internal implementation could be drastically different. But unit tests
would be testing internal details which could be very different version
to version and has to evolve with the implementation.

> If your target component under test can be built as a kernel module, or set of
> modules, with KTF your workflow would not involve booting at all (unless you
> happened to crash the system with one of your tests, that is :) )

> You would just unload your module under test and the test module, recompile the
> two and insmod again. My work current work cycle on this is just a few seconds.

Yes, I'm sure we've all done that many a time but it's really beside the
point. Kunit offers a much nicer method for running a lot of unit tests
on existing code.

Logan
_______________________________________________
Linux-nvdimm mailing list
Linux-nvdimm@lists.01.org
https://lists.01.org/mailman/listinfo/linux-nvdimm

^ permalink raw reply	[flat|nested] 316+ messages in thread

* Re: [RFC v4 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework
@ 2019-03-21 19:29                     ` Logan Gunthorpe
  0 siblings, 0 replies; 316+ messages in thread
From: Logan Gunthorpe @ 2019-03-21 19:29 UTC (permalink / raw)
  To: Knut Omang, Brendan Higgins
  Cc: Kees Cook, Luis Chamberlain, shuah, Rob Herring, Kieran Bingham,
	Frank Rowand, Greg KH, Joel Stanley, Michael Ellerman,
	Joe Perches, brakmo, Steven Rostedt, Bird, Timothy, Kevin Hilman,
	Julia Lawall, linux-kselftest, kunit-dev,
	Linux Kernel Mailing List, Jeff Dike, Richard Weinberger,
	linux-um, Daniel Vetter, dri-devel, Dan Williams, linux-nvdimm,
	devicetree, Petr Mladek, Sasha Levin, Amir Goldstein,
	Dan Carpenter, wfg, Alan Maguire



On 2019-03-21 1:13 p.m., Knut Omang wrote:
>> Nevertheless, I don't really see KTF as a real unit testing framework
>> for a number of different reasons; you pointed out some below, but I
>> think the main one being that it requires booting a real kernel on
>> actual hardware; 
> 
> That depends on what you want to test. If you need hardware (or simulated or
> emulated hardware) for the test, of course you would need to have that hardware,
> but if, lets say, you just wanted to run tests like the skbuff example tests
> (see link above) you wouldn't need anything more than what you need to run KUnit
> tests.

I'm starting to get the same impression: KTF isn't unit testing. When we
are saying "unit tests" we are specifying exactly what we want to test:
small sections of code in isolation. So by definition you should not
need hardware for this.
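
To make that concrete, a unit test in this sense is just a small function
exercising one piece of code, with nothing behind it but the code itself. A
rough sketch using KUnit-style names (struct kunit, KUNIT_CASE, KUNIT_EXPECT_EQ,
kunit_test_suite) - the exact names should be checked against the revision
under review, and example_add() is a made-up function standing in for the code
under test:

#include <kunit/test.h>

/* Made-up, self-contained function standing in for the code under test. */
static int example_add(int a, int b)
{
	return a + b;
}

static void example_add_test(struct kunit *test)
{
	/* Exercises the function in isolation: no hardware, no boot, no setup. */
	KUNIT_EXPECT_EQ(test, 3, example_add(1, 2));
	KUNIT_EXPECT_EQ(test, 0, example_add(-2, 2));
}

static struct kunit_case example_test_cases[] = {
	KUNIT_CASE(example_add_test),
	{}
};

static struct kunit_suite example_test_suite = {
	.name = "example",
	.test_cases = example_test_cases,
};
kunit_test_suite(example_test_suite);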

> I have fulfilled that dream, so I know it is possible (Infiniband driver,
> kernels from 2.6.39 to 4.8.x or so..) I know a lot of projects would benefit
> from support for such workflows, but that's not really the point here - we want
> to achieve both goals!

This is what makes me think we are not talking about testing the same
things. We are not talking about end-to-end testing of entire drivers
but smaller sections of code. A unit test is far more granular and
despite an Infiniband driver existing for 2.6.39 through 4.8, the
internal implementation could be drastically different. But unit tests
would be testing internal details which could be very different from
version to version and have to evolve with the implementation.

> If your target component under test can be built as a kernel module, or set of
> modules, with KTF your workflow would not involve booting at all (unless you
> happened to crash the system with one of your tests, that is :) )

> You would just unload your module under test and the test module, recompile the
> two and insmod again. My current work cycle on this is just a few seconds.

Yes, I'm sure we've all done that many a time but it's really beside the
point. KUnit offers a much nicer method for running a lot of unit tests
on existing code.

Logan

^ permalink raw reply	[flat|nested] 316+ messages in thread

* Re: [RFC v4 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework
@ 2019-03-21 20:14                         ` Knut Omang
  0 siblings, 0 replies; 316+ messages in thread
From: Knut Omang @ 2019-03-21 20:14 UTC (permalink / raw)
  To: Logan Gunthorpe, Brendan Higgins
  Cc: Kees Cook, Luis Chamberlain, shuah, Rob Herring, Kieran Bingham,
	Frank Rowand, Greg KH, Joel Stanley, Michael Ellerman,
	Joe Perches, brakmo, Steven Rostedt, Bird, Timothy, Kevin Hilman,
	Julia Lawall, linux-kselftest, kunit-dev,
	Linux Kernel Mailing List, Jeff Dike, Richard Weinberger,
	linux-um, Daniel Vetter, dri-devel, Dan Williams, linux-nvdimm,
	devicetree, Petr Mladek, Sasha Levin, Amir Goldstein,
	Dan Carpenter, wfg, Alan Maguire

On Thu, 2019-03-21 at 13:29 -0600, Logan Gunthorpe wrote:
> 
> On 2019-03-21 1:13 p.m., Knut Omang wrote:
> > > Nevertheless, I don't really see KTF as a real unit testing framework
> > > for a number of different reasons; you pointed out some below, but I
> > > think the main one being that it requires booting a real kernel on
> > > actual hardware; 
> > 
> > That depends on what you want to test. If you need hardware (or simulated or
> > emulated hardware) for the test, of course you would need to have that
> > hardware,
> > but if, let's say, you just wanted to run tests like the skbuff example tests
> > (see link above) you wouldn't need anything more than what you need to run
> > KUnit
> > tests.
> 
> I'm starting to get the same impression: KTF isn't unit testing. When we
> are saying "unit tests" we are specifying exactly what we want to test:
> small sections of code in isolation. So by definition you should not
> need hardware for this.

In my world hardware is just that: a piece of code. It can be in many forms, but
it is still code to be tested, and code that can be changed (but sometimes at a
slightly higher cost than a recompile ;-)

But that's not the point here: KTF can be used for your narrower definition of
unit tests, and it can also be used for small, precise tests, for instance for
particular bugs, that you would not characterize as unit tests. They still serve
the same purpose, and I believe in a pragmatic approach to this. We want to
maximize the value of our time. I believe there's a sweet spot in return on
investment somewhere on the scale from purist unit testing to just writing code
and testing it with existing applications. We're targeting that ;-)

> > I have fulfilled that dream, so I know it is possible (Infiniband driver,
> > kernels from 2.6.39 to 4.8.x or so..) I know a lot of projects would benefit
> > from support for such workflows, but that's not really the point here - we
> > want
> > to achieve both goals!
> 
> This is what makes me think we are not talking about testing the same
> things. We are not talking about end to end testing of entire drivers
> but smaller sections of code.

No! I am talking about testing units within a driver, or within any kernel
component. I am sure you agree that what constitutes a unit depends on what
level of abstraction you are looking at.

>  A unit test is far more granular and
> despite an Infiniband driver existing for 2.6.39 through 4.8, the
> internal implementation could be drastically different. But unit tests
> would be testing internal details which could be very different from
> version to version and have to evolve with the implementation.

> > If your target component under test can be built as a kernel module, or set
> > of
> > modules, with KTF your workflow would not involve booting at all (unless you
> > happened to crash the system with one of your tests, that is :) )
> > You would just unload your module under test and the test module, recompile
> > the
> > two and insmod again. My current work cycle on this is just a few
> > seconds.
> 
> Yes, I'm sure we've all done that many a time but it's really beside the
> point. KUnit offers a much nicer method for running a lot of unit tests
> on existing code.

Again, use cases and examples are the key here...

Knut



^ permalink raw reply	[flat|nested] 316+ messages in thread

* Re: [RFC v4 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework
@ 2019-03-21 20:14                         ` Knut Omang
  0 siblings, 0 replies; 316+ messages in thread
From: Knut Omang @ 2019-03-21 20:14 UTC (permalink / raw)
  To: Logan Gunthorpe, Brendan Higgins
  Cc: brakmo, Petr Mladek, Amir Goldstein, dri-devel, Sasha Levin,
	linux-kselftest, Frank Rowand, Rob Herring, linux-nvdimm,
	Richard Weinberger, Kieran Bingham, wfg, Joel Stanley, Jeff Dike,
	Dan Carpenter, devicetree, shuah, Bird,

On Thu, 2019-03-21 at 13:29 -0600, Logan Gunthorpe wrote:
> 
> On 2019-03-21 1:13 p.m., Knut Omang wrote:
> > > Nevertheless, I don't really see KTF as a real unit testing framework
> > > for a number of different reasons; you pointed out some below, but I
> > > think the main one being that it requires booting a real kernel on
> > > actual hardware; 
> > 
> > That depends on what you want to test. If you need hardware (or simulated or
> > emulated hardware) for the test, of course you would need to have that
> > hardware,
> > but if, lets say, you just wanted to run tests like the skbuff example tests
> > (see link above) you wouldn't need anything more than what you need to run
> > KUnit
> > tests.
> 
> I'm starting to get the same impression: KTF isn't unit testing. When we
> are saying "unit tests" we are specifying exactly what we want to test:
> small sections of code in isolation. So by definition you should not
> need hardware for this.

In my world hardware is just that: a piece of code. It can be in many forms, but
it is still code to be tested, and code that can be changed (but sometimes at a
slightly higher cost than a recompile ;-)

But that's not the point here: KTF can be used for your narrower definition of
unit tests, and it can be used for small, precise tests, for particular bugs for
instance, that you would not characterize as a unit test, still it serves the
same purpose, and I believe in a pragmatic approach to this. We want to maximize
the value of our time. I believe there's a sweet point wrt return on investment
on the scale from purist unit testing to just writing code and test with
existing applications. We're targeting that ;-)

> > I have fulfilled that dream, so I know it is possible (Inifinband driver,
> > kernels from 2.6.39 to 4.8.x or so..) I know a lot of projects would benefit
> > from support for such workflows, but that's not really the point here - we
> > want
> > to achieve both goals!
> 
> This is what makes me think we are not talking about testing the same
> things. We are not talking about end to end testing of entire drivers
> but smaller sections of code.

No! I am talking about testing units within a driver, or within any kernel
component. I am sure you agree that what constitutes a unit depend on what 
level of abstraction you are looking at.

>  A unit test is far more granular and
> despite an infinband driver existing for 2.6.39 through 4.8, the
> internal implementation could be drastically different. But unit tests
> would be testing internal details which could be very different version
> to version and has to evolve with the implementation.

> > If your target component under test can be built as a kernel module, or set
> > of
> > modules, with KTF your workflow would not involve booting at all (unless you
> > happened to crash the system with one of your tests, that is :) )
> > You would just unload your module under test and the test module, recompile
> > the
> > two and insmod again. My work current work cycle on this is just a few
> > seconds.
> 
> Yes, I'm sure we've all done that many a time but it's really beside the
> point. Kunit offers a much nicer method for running a lot of unit tests
> on existing code.

Again, use cases and examples are the key here,..

Knut



_______________________________________________
linux-um mailing list
linux-um@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-um


^ permalink raw reply	[flat|nested] 316+ messages in thread

* Re: [RFC v4 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework
  2019-03-21  1:07     ` Logan Gunthorpe
                           ` (2 preceding siblings ...)
  (?)
@ 2019-03-21 22:07         ` Brendan Higgins
  -1 siblings, 0 replies; 316+ messages in thread
From: Brendan Higgins @ 2019-03-21 22:07 UTC (permalink / raw)
  To: Logan Gunthorpe
  Cc: brakmo-b10kYP2dOMg, Petr Mladek, Amir Goldstein, dri-devel,
	Sasha Levin, linux-kselftest-u79uwXL29TY76Z2rM5mHXA,
	Frank Rowand, Rob Herring, linux-nvdimm, Richard Weinberger,
	Knut Omang, Kieran Bingham, wfg-VuQAYsv1563Yd54FQh9/CA,
	Joel Stanley, Jeff Dike, Dan Carpenter, devicetree,
	shuah-DgEjT+Ai2ygdnm+yROfE0A, Bird, Timothy, Kees Cook,
	linux-um-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r, Steven Rostedt,
	Julia Lawall, kunit-dev-/JYPxA39Uh5TLH3MbocFFw, Greg KH, Linux

On Wed, Mar 20, 2019 at 6:08 PM Logan Gunthorpe <logang-OTvnGxWRz7hWk0Htik3J/w@public.gmane.org> wrote:
>
> Hi,
>
> On 2019-02-14 2:37 p.m., Brendan Higgins wrote:
> > This patch set proposes KUnit, a lightweight unit testing and mocking
> > framework for the Linux kernel.
>
> I haven't followed the entire conversation but I saw the KUnit write-up
> on LWN and ended up, as an exercise, giving it a try.

Awesome! Thanks for taking the time to try it out and to give me feedback!

>
> I really like the idea of having a fast unit testing infrastructure in
> the kernel. Occasionally, I write userspace tests for tricky functions
> that I essentially write by copying the code over to a throw away C file
> and exercise them as I need. I think it would be great to be able to
> keep these tests around in a way that they can be run by anyone who
> wants to touch the code.
>
> I was just dealing with some functions that required some mocked up
> tests so I thought I'd give KUnit a try. I found writing the code very
> easy and the infrastructure I was testing was quite simple to mock out
> the hardware.
>
> However, I got a bit hung up by one issue: I was writing unit tests for
> code in the NTB tree which itself depends on CONFIG_PCI which cannot be
> enabled in UML (for what should be obvious reasons). I managed to work
> around this because, as luck would have it, all the functions I cared
> about testing were actually static inline functions in headers. So I
> placed my test code in the kunit folder (so it would compile) and hacked
> around a couple a of functions I didn't care about that would not be
> compiled.

A couple of points, as for needing CONFIG_PCI; my plan to deal with
that type of thing has been that we would add support for a KUnit/UML
version that is just for KUnit. It would mock out the necessary bits
to provide a fake hardware implementation for anything that might
depend on it. I wrote a prototype for mocking/faking MMIO that I
presented to the list here[1]; it is not part of the current patchset
because we decided it would be best to focus on getting an MVP in, but
I plan on bringing it back up at some point. Anyway, what do you
generally think of this approach?

>
> In the end I got it to work acceptably, but I get the impression that

Awesome, I looked at the code you posted and it doesn't look like you
have had too many troubles. One thing that stood out to me, why did
you need to put it in the kunit/ dir?

> KUnit will not be usable for wide swaths of kernel code that can't be
> compiled in UML. Has there been any discussion or ideas on how to work

For the most part, I was planning on relying on mocking or faking the
hardware out as you have done. I have found this worked pretty well in
other instances. I know there are some edge cases for which even this
won't work like code in arch/; for that you *can* compile KUnit for
non-UML, but that is not really unit testing any more, but I included
it that as more of a last ditch effort, debugging aid, or to allow a
user to write integration tests (also very much because other people
asked me to do it ;-) ). So the main plan here is mocking and faking.

There has been some additional discussion here[2] (see the replies);
this thread is more about the future of trying to pull out kernel
dependencies which I discussed further with Luis (off list) here[3]
(in reality the discussion has been spread across a number of
different threads, but I think those are places you could at least get
context to jump in and talk about mocking and faking hardware and
Linux kernel resources).

In short, I think the idea is we are using UML as scaffolding and we
will gradually try to make it more independent by either reducing
dependencies on kernel resources (by providing high quality fakes) or
by somehow providing a better view of the dependencies so that you can
specify more directly what piece of code you need for your test. This
part is obviously very long term, but I think that's what we need to
do if we want to do really high quality unit testing, regardless of
whether we use KUnit or not. In anycase, right now, we are just
working on the basic tools to write *a* unit test for the kernel.

> around this so it can be more generally useful? Or will this feature be
> restricted roughly to non-drivers and functions in headers that don't
> have #ifdefs around them?

I hope not.

The #ifdef thing is something that I don't have a good answer for at
this time. Luis and I talked about it, but haven't come up with any
good ideas, sorry.

As for drivers, I found that testing drivers is quite doable. You can
see an example of a test I wrote for a i2c bus driver here[4]. I know
the i2c subsystem is pretty simple, but the general principle should
apply elsewhere.

>
> If you're interested in seeing the unit tests I ended up writing you can
> find the commits here[1].
>
> Thanks,
>
> Logan
>
> [1] https://github.com/sbates130272/linux-p2pmem/ ntb_kunit

I am looking forward to see what you think!

Cheers

[1] https://lkml.org/lkml/2018/10/17/122
[2] https://lkml.org/lkml/2018/11/29/93
[3] https://groups.google.com/forum/#!topic/kunit-dev/EQ1x0SzrUus
[4] https://kunit.googlesource.com/linux/+/e10484ad2f9fc7926412ec84739fe105981b4771/drivers/i2c/busses/i2c-aspeed-test.c

^ permalink raw reply	[flat|nested] 316+ messages in thread

* Re: [RFC v4 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework
@ 2019-03-21 22:07         ` Brendan Higgins
  0 siblings, 0 replies; 316+ messages in thread
From: Brendan Higgins @ 2019-03-21 22:07 UTC (permalink / raw)
  To: Logan Gunthorpe
  Cc: Kees Cook, Luis Chamberlain, shuah, Rob Herring, Kieran Bingham,
	Frank Rowand, Greg KH, Joel Stanley, Michael Ellerman,
	Joe Perches, brakmo, Steven Rostedt, Bird, Timothy, Kevin Hilman,
	Julia Lawall, linux-kselftest, kunit-dev,
	Linux Kernel Mailing List, Jeff Dike, Richard Weinberger,
	linux-um, Daniel Vetter, dri-devel, Dan Williams, linux-nvdimm,
	Knut Omang, devicetree, Petr Mladek, Sasha Levin, Amir Goldstein,
	Dan Carpenter, wfg

On Wed, Mar 20, 2019 at 6:08 PM Logan Gunthorpe <logang@deltatee.com> wrote:
>
> Hi,
>
> On 2019-02-14 2:37 p.m., Brendan Higgins wrote:
> > This patch set proposes KUnit, a lightweight unit testing and mocking
> > framework for the Linux kernel.
>
> I haven't followed the entire conversation but I saw the KUnit write-up
> on LWN and ended up, as an exercise, giving it a try.

Awesome! Thanks for taking the time to try it out and to give me feedback!

>
> I really like the idea of having a fast unit testing infrastructure in
> the kernel. Occasionally, I write userspace tests for tricky functions
> that I essentially write by copying the code over to a throwaway C file
> and exercising them as I need. I think it would be great to be able to
> keep these tests around in a way that they can be run by anyone who
> wants to touch the code.
>
> I was just dealing with some functions that required some mocked up
> tests so I thought I'd give KUnit a try. I found writing the code very
> easy, and it was quite simple to mock out the hardware for the
> infrastructure I was testing.
>
> However, I got a bit hung up by one issue: I was writing unit tests for
> code in the NTB tree which itself depends on CONFIG_PCI which cannot be
> enabled in UML (for what should be obvious reasons). I managed to work
> around this because, as luck would have it, all the functions I cared
> about testing were actually static inline functions in headers. So I
> placed my test code in the kunit folder (so it would compile) and hacked
> around a couple of functions I didn't care about that would not be
> compiled.

A couple of points, as for needing CONFIG_PCI; my plan to deal with
that type of thing has been that we would add support for a KUnit/UML
version that is just for KUnit. It would mock out the necessary bits
to provide a fake hardware implementation for anything that might
depend on it. I wrote a prototype for mocking/faking MMIO that I
presented to the list here[1]; it is not part of the current patchset
because we decided it would be best to focus on getting an MVP in, but
I plan on bringing it back up at some point. Anyway, what do you
generally think of this approach?
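
To give a rough picture of what I mean by a fake hardware implementation (the
names below are purely illustrative, not the actual interface from the
prototype in [1]): the idea is that the code under test does its register
accesses through a small ops table, and a test swaps in accessors that are
backed by plain memory instead of an ioremap()ed BAR:

/*
 * Illustrative sketch only -- hypothetical names, not the API from the
 * prototype in [1].
 */
#include <linux/types.h>
#include <linux/io.h>

struct mmio_ops {
	u32 (*read32)(void *ctx, unsigned long offset);
	void (*write32)(void *ctx, unsigned long offset, u32 val);
	void *ctx;
};

/* Production path: go through the ioremap()ed registers. */
static u32 hw_read32(void *ctx, unsigned long offset)
{
	return readl((void __iomem *)ctx + offset);
}

static void hw_write32(void *ctx, unsigned long offset, u32 val)
{
	writel(val, (void __iomem *)ctx + offset);
}

/* Test path: the "registers" are just an array the test can inspect. */
struct fake_regs {
	u32 reg[64];
};

static u32 fake_read32(void *ctx, unsigned long offset)
{
	return ((struct fake_regs *)ctx)->reg[offset / 4];
}

static void fake_write32(void *ctx, unsigned long offset, u32 val)
{
	((struct fake_regs *)ctx)->reg[offset / 4] = val;
}

A driver written against something like struct mmio_ops can then run entirely
under UML, with the test asserting on the fake register contents after each
operation; the prototype in [1] explores a more general version of the same
idea.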

>
> In the end I got it to work acceptably, but I get the impression that

Awesome, I looked at the code you posted and it doesn't look like you
had too many troubles. One thing that stood out to me: why did
you need to put it in the kunit/ dir?

> KUnit will not be usable for wide swaths of kernel code that can't be
> compiled in UML. Has there been any discussion or ideas on how to work

For the most part, I was planning on relying on mocking or faking the
hardware out, as you have done; I have found that this works pretty well in
other instances. I know there are some edge cases for which even this
won't work, like code in arch/; for that you *can* compile KUnit for
non-UML architectures, but that is not really unit testing any more. I included
that more as a last-ditch effort, a debugging aid, and a way to let a
user write integration tests (also very much because other people
asked me to do it ;-) ). So the main plan here is mocking and faking.

There has been some additional discussion here[2] (see the replies);
that thread is more about the future of trying to pull out kernel
dependencies, which I discussed further with Luis (off list) here[3].
(In reality the discussion has been spread across a number of
different threads, but I think those are the places where you could at least
get the context to jump in and talk about mocking and faking hardware and
Linux kernel resources.)

In short, I think the idea is that we are using UML as scaffolding, and we
will gradually try to make it more independent, either by reducing
dependencies on kernel resources (by providing high quality fakes) or
by somehow providing a better view of the dependencies so that you can
specify more directly what piece of code you need for your test. This
part is obviously very long term, but I think that's what we need to
do if we want to do really high quality unit testing, regardless of
whether we use KUnit or not. In any case, right now we are just
working on the basic tools to write *a* unit test for the kernel.

> around this so it can be more generally useful? Or will this feature be
> restricted roughly to non-drivers and functions in headers that don't
> have #ifdefs around them?

I hope not.

The #ifdef thing is something that I don't have a good answer for at
this time. Luis and I talked about it, but haven't come up with any
good ideas, sorry.

As for drivers, I found that testing drivers is quite doable. You can
see an example of a test I wrote for an i2c bus driver here[4]. I know
the i2c subsystem is pretty simple, but the general principle should
apply elsewhere.
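
For anyone who hasn't looked at the patches yet, the boilerplate for a test
like that is pretty small. Roughly (the exact macro spellings here may not
match this revision of the series), a test looks like:

#include <kunit/test.h>

/* One test case: it receives a struct kunit context and makes expectations. */
static void example_add_test(struct kunit *test)
{
	KUNIT_EXPECT_EQ(test, 3, 1 + 2);
	KUNIT_EXPECT_NE(test, 0, 1 + 2);
}

static struct kunit_case example_test_cases[] = {
	KUNIT_CASE(example_add_test),
	{}
};

static struct kunit_suite example_test_suite = {
	.name = "example",
	.test_cases = example_test_cases,
};
kunit_test_suite(example_test_suite);

A real driver test like [4] follows the same basic pattern, just with the
hardware pieces mocked or faked out instead of touching a real bus.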

>
> If you're interested in seeing the unit tests I ended up writing you can
> find the commits here[1].
>
> Thanks,
>
> Logan
>
> [1] https://github.com/sbates130272/linux-p2pmem/ ntb_kunit

I am looking forward to seeing what you think!

Cheers

[1] https://lkml.org/lkml/2018/10/17/122
[2] https://lkml.org/lkml/2018/11/29/93
[3] https://groups.google.com/forum/#!topic/kunit-dev/EQ1x0SzrUus
[4] https://kunit.googlesource.com/linux/+/e10484ad2f9fc7926412ec84739fe105981b4771/drivers/i2c/busses/i2c-aspeed-test.c

^ permalink raw reply	[flat|nested] 316+ messages in thread

* Re: [RFC v4 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework
@ 2019-03-21 22:26           ` Logan Gunthorpe
  0 siblings, 0 replies; 316+ messages in thread
From: Logan Gunthorpe @ 2019-03-21 22:26 UTC (permalink / raw)
  To: Brendan Higgins
  Cc: Kees Cook, Luis Chamberlain, shuah, Rob Herring, Kieran Bingham,
	Frank Rowand, Greg KH, Joel Stanley, Michael Ellerman,
	Joe Perches, brakmo, Steven Rostedt, Bird, Timothy, Kevin Hilman,
	Julia Lawall, linux-kselftest, kunit-dev,
	Linux Kernel Mailing List, Jeff Dike, Richard Weinberger,
	linux-um, Daniel Vetter, dri-devel, Dan Williams, linux-nvdimm,
	Knut Omang, devicetree, Petr Mladek, Sasha Levin, Amir Goldstein,
	Dan Carpenter, wfg



On 2019-03-21 4:07 p.m., Brendan Higgins wrote:
> A couple of points, as for needing CONFIG_PCI; my plan to deal with
> that type of thing has been that we would add support for a KUnit/UML
> version that is just for KUnit. It would mock out the necessary bits
> to provide a fake hardware implementation for anything that might
> depend on it. I wrote a prototype for mocking/faking MMIO that I
> presented to the list here[1]; it is not part of the current patchset
> because we decided it would be best to focus on getting an MVP in, but
> I plan on bringing it back up at some point. Anyway, what do you
> generally think of this approach?

Yes, I was wondering if that might be possible. I think that's a great
approach, but it will unfortunately take a lot of work before larger
swaths of the kernel are testable in KUnit with UML. Having more common
mocked infrastructure will be a great by-product of it, though.

> Awesome, I looked at the code you posted and it doesn't look like you
> had too many troubles. One thing that stood out to me: why did
> you need to put it in the kunit/ dir?

Yeah, writing the code was super easy. Only afterwards did I realize I
couldn't easily get it to build.

Putting it in the kunit directory was necessary because nothing in the
NTB tree builds unless CONFIG_NTB is set (see drivers/Makefile) and
CONFIG_NTB depends on CONFIG_PCI. I didn't experiment to see how hard it
would be to set CONFIG_NTB without CONFIG_PCI; I assumed it would be tricky.

> I am looking forward to seeing what you think!

Generally, I'm impressed and want to see this work in upstream as soon
as possible so I can start to make use of it!

Logan

^ permalink raw reply	[flat|nested] 316+ messages in thread

* Re: [RFC v4 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework
@ 2019-03-21 23:33             ` Brendan Higgins
  0 siblings, 0 replies; 316+ messages in thread
From: Brendan Higgins @ 2019-03-21 23:33 UTC (permalink / raw)
  To: Logan Gunthorpe
  Cc: Kees Cook, Luis Chamberlain, shuah, Rob Herring, Kieran Bingham,
	Frank Rowand, Greg KH, Joel Stanley, Michael Ellerman,
	Joe Perches, brakmo, Steven Rostedt, Bird, Timothy, Kevin Hilman,
	Julia Lawall, linux-kselftest, kunit-dev,
	Linux Kernel Mailing List, Jeff Dike, Richard Weinberger,
	linux-um, Daniel Vetter, dri-devel, Dan Williams, linux-nvdimm,
	Knut Omang, devicetree, Petr Mladek, Sasha Levin, Amir Goldstein,
	Dan Carpenter, wfg

On Thu, Mar 21, 2019 at 3:27 PM Logan Gunthorpe <logang@deltatee.com> wrote:
>
>
>
> On 2019-03-21 4:07 p.m., Brendan Higgins wrote:
> > A couple of points, as for needing CONFIG_PCI; my plan to deal with
> > that type of thing has been that we would add support for a KUnit/UML
> > version that is just for KUnit. It would mock out the necessary bits
> > to provide a fake hardware implementation for anything that might
> > depend on it. I wrote a prototype for mocking/faking MMIO that I
> > presented to the list here[1]; it is not part of the current patchset
> > because we decided it would be best to focus on getting an MVP in, but
> > I plan on bringing it back up at some point. Anyway, what do you
> > generally think of this approach?
>
> Yes, I was wondering if that might be possible. I think that's a great
> approach, but it will unfortunately take a lot of work before larger
> swaths of the kernel are testable in KUnit with UML. Having more common
> mocked infrastructure will be a great by-product of it, though.

Yeah, it's unfortunate that the best way to do something often takes
so much longer.

>
> > Awesome, I looked at the code you posted and it doesn't look like you
> > had too many troubles. One thing that stood out to me: why did
> > you need to put it in the kunit/ dir?
>
> Yeah, writing the code was super easy. Only afterwards did I realize I
> couldn't easily get it to build.

Yeah, we really need to fix that; unfortunately, broadly addressing
that problem is really hard and will most likely take a long time.

>
> Putting it in the kunit directory was necessary because nothing in the
> NTB tree builds unless CONFIG_NTB is set (see drivers/Makefile) and
> CONFIG_NTB depends on CONFIG_PCI. I didn't experiment to see how hard it
> would be to set CONFIG_NTB without CONFIG_PCI; I assumed it would be tricky.
>
> > I am looking forward to seeing what you think!
>
> Generally, I'm impressed and want to see this work in upstream as soon
> as possible so I can start to make use of it!

Great to hear! I was trying to get the next revision out this week,
but addressing some of the comments is taking a little longer than
expected. I should have something together fairly soon though
(hopefully next week). The good news is that the next revision will be
non-RFC; most of the feedback has settled down and I think we are
ready to start figuring out how to merge it. Fingers crossed :-)

Cheers

^ permalink raw reply	[flat|nested] 316+ messages in thread

* Re: [RFC v4 08/17] kunit: test: add support for test abort
@ 2019-03-22  1:09                     ` Frank Rowand
  0 siblings, 0 replies; 316+ messages in thread
From: Frank Rowand @ 2019-03-22  1:09 UTC (permalink / raw)
  To: Brendan Higgins
  Cc: Kees Cook, Luis Chamberlain, shuah, Rob Herring, Kieran Bingham,
	Greg KH, Joel Stanley, Michael Ellerman, Joe Perches, brakmo,
	Steven Rostedt, Bird, Timothy, Kevin Hilman, Julia Lawall,
	linux-kselftest, kunit-dev, Linux Kernel Mailing List, Jeff Dike,
	Richard Weinberger, linux-um, Daniel Vetter, dri-devel,
	Dan Williams, linux-nvdimm, Knut Omang, devicetree, Petr Mladek,
	Sasha Levin, Amir Goldstein, dan.carpenter, wfg, Frank Rowand

On 2/27/19 11:42 PM, Brendan Higgins wrote:
> On Tue, Feb 19, 2019 at 10:44 PM Frank Rowand <frowand.list@gmail.com> wrote:
>>
>> On 2/19/19 7:39 PM, Brendan Higgins wrote:
>>> On Mon, Feb 18, 2019 at 11:52 AM Frank Rowand <frowand.list@gmail.com> wrote:
>>>>
>>>> On 2/14/19 1:37 PM, Brendan Higgins wrote:
>>>>> Add support for aborting/bailing out of test cases. Needed for
>>>>> implementing assertions.
>>>>>
>>>>> Signed-off-by: Brendan Higgins <brendanhiggins@google.com>
>>>>> ---
>>>>> Changes Since Last Version
>>>>>  - This patch is new, introducing a new cross-architecture way to abort
>>>>>    out of a test case (needed for KUNIT_ASSERT_*, see next patch for
>>>>>    details).
>>>>>  - On a side note, this is not a complete replacement for the UML abort
>>>>>    mechanism, but covers the majority of necessary functionality. UML
>>>>>    architecture-specific features have been dropped from the initial
>>>>>    patchset.
>>>>> ---
>>>>>  include/kunit/test.h |  24 +++++
>>>>>  kunit/Makefile       |   3 +-
>>>>>  kunit/test-test.c    | 127 ++++++++++++++++++++++++++
>>>>>  kunit/test.c         | 208 +++++++++++++++++++++++++++++++++++++++++--
>>>>>  4 files changed, 353 insertions(+), 9 deletions(-)
>>>>>  create mode 100644 kunit/test-test.c
>>>>
>>>> < snip >
>>>>
>>>>> diff --git a/kunit/test.c b/kunit/test.c
>>>>> index d18c50d5ed671..6e5244642ab07 100644
>>>>> --- a/kunit/test.c
>>>>> +++ b/kunit/test.c
>>>>> @@ -6,9 +6,9 @@
>>>>>   * Author: Brendan Higgins <brendanhiggins@google.com>
>>>>>   */
>>>>>
>>>>> -#include <linux/sched.h>
>>>>>  #include <linux/sched/debug.h>
>>>>> -#include <os.h>
>>>>> +#include <linux/completion.h>
>>>>> +#include <linux/kthread.h>
>>>>>  #include <kunit/test.h>
>>>>>
>>>>>  static bool kunit_get_success(struct kunit *test)
>>>>> @@ -32,6 +32,27 @@ static void kunit_set_success(struct kunit *test, bool success)
>>>>>       spin_unlock_irqrestore(&test->lock, flags);
>>>>>  }
>>>>>
>>>>> +static bool kunit_get_death_test(struct kunit *test)
>>>>> +{
>>>>> +     unsigned long flags;
>>>>> +     bool death_test;
>>>>> +
>>>>> +     spin_lock_irqsave(&test->lock, flags);
>>>>> +     death_test = test->death_test;
>>>>> +     spin_unlock_irqrestore(&test->lock, flags);
>>>>> +
>>>>> +     return death_test;
>>>>> +}
>>>>> +
>>>>> +static void kunit_set_death_test(struct kunit *test, bool death_test)
>>>>> +{
>>>>> +     unsigned long flags;
>>>>> +
>>>>> +     spin_lock_irqsave(&test->lock, flags);
>>>>> +     test->death_test = death_test;
>>>>> +     spin_unlock_irqrestore(&test->lock, flags);
>>>>> +}
>>>>> +
>>>>>  static int kunit_vprintk_emit(const struct kunit *test,
>>>>>                             int level,
>>>>>                             const char *fmt,
>>>>> @@ -70,13 +91,29 @@ static void kunit_fail(struct kunit *test, struct kunit_stream *stream)
>>>>>       stream->commit(stream);
>>>>>  }
>>>>>
>>>>> +static void __noreturn kunit_abort(struct kunit *test)
>>>>> +{
>>>>> +     kunit_set_death_test(test, true);
>>>>> +
>>>>> +     test->try_catch.throw(&test->try_catch);
>>>>> +
>>>>> +     /*
>>>>> +      * Throw could not abort from test.
>>>>> +      */
>>>>> +     kunit_err(test, "Throw could not abort from test!");
>>>>> +     show_stack(NULL, NULL);
>>>>> +     BUG();
>>>>
>>>> kunit_abort() is what will be called as the result of an assert failure.
>>>
>>> Yep. Does that need to be clarified somewhere?
>>>>
>>>> BUG(), which is a panic, which is crashing the system is not acceptable
>>>> in the Linux kernel.  You will just annoy Linus if you submit this.
>>>
>>> Sorry, I thought this was an acceptable use case since, a) this should
>>> never be compiled in a production kernel, b) we are in a pretty bad,
>>> unpredictable state if we get here and keep going. I think you might
>>> have said elsewhere that you think "a" is not valid? In any case, I
>>> can replace this with a WARN, would that be acceptable?
>>
>> A WARN may or may not make sense, depending on the context.  It may
>> be sufficient to simply report a test failure (as in the old version
>> of case (2) below).
>>
>> Answers to "a)" and "b)":
>>
>> a) it might be in a production kernel
> 
> Sorry for a possibly stupid question: how might that happen? Why would
> someone intentionally build unit tests into a production kernel?

People do things.  Just expect it.


>>
>> a') it is not acceptable in my development kernel either
> 
> Fair enough.
> 
>>
>> b) No.  You don't crash a developer's kernel either unless it is
>> required to avoid data corruption.
> 
> Alright, I thought that was one of those cases, but I am not going to
> push the point. Also, in case it wasn't clear, the path where BUG()
> gets called only happens if there is a bug in KUnit itself, not just
> because a test case fails catastrophically.

Still not out of the woods.  Still facing Lions and Tigers and Bears,
Oh my!

So kunit_abort() is normally called as the result of an assert
failure (as written many lines further above).

kunit_abort()
   test->try_catch.throw(&test->try_catch)
   // this is really kunit_generic_throw(), yes?
      complete_and_exit()
         if (comp)
            // comp is test_case_completion?
            complete(comp)
         do_exit()
            // void __noreturn do_exit(long code)
            // depending on the task, either panic
            // or the task dies

I did not read through enough of the code to understand what is going
on here.  Is each kunit_module executed in a newly created thread?
And if kunit_abort() is called then that thread dies?  Or something
else?
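
In other words, is it roughly the shape of the sketch below?  (The names
are completely made up; this is only my reading of the quoted patch, not
the actual KUnit code.)

#include <linux/kernel.h>
#include <linux/err.h>
#include <linux/kthread.h>
#include <linux/completion.h>

/* Hypothetical per-test-case context. */
struct case_ctx {
        struct completion done;         /* signalled on normal exit or abort */
        void (*run_case)(void *priv);   /* the test case body */
        void *priv;
};

static int case_thread_fn(void *data)
{
        struct case_ctx *ctx = data;

        /*
         * An ASSERT failure ends up in throw() -> complete_and_exit()
         * somewhere inside here and never returns.
         */
        ctx->run_case(ctx->priv);

        /* Normal exit: signal the parent; only this kthread dies. */
        complete_and_exit(&ctx->done, 0);
}

static void run_one_case(struct case_ctx *ctx)
{
        struct task_struct *task;

        init_completion(&ctx->done);
        task = kthread_run(case_thread_fn, ctx, "kunit-case");
        if (IS_ERR(task))
                return;         /* infrastructure failure, not a test failure */

        /* The parent only waits, then moves on to the next test case. */
        wait_for_completion(&ctx->done);
}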


>>
>> b') And you can not do replacements like:
>>
>> (1) in of_unittest_check_tree_linkage()
>>
>> -----  old  -----
>>
>>         if (!of_root)
>>                 return;
>>
>> -----  new  -----
>>
>>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, of_root);
>>
>> (2) in of_unittest_property_string()
>>
>> -----  old  -----
>>
>>         /* of_property_read_string_index() tests */
>>         rc = of_property_read_string_index(np, "string-property", 0, strings);
>>         unittest(rc == 0 && !strcmp(strings[0], "foobar"), "of_property_read_string_index() failure; rc=%i\n", rc);
>>
>> -----  new  -----
>>
>>         /* of_property_read_string_index() tests */
>>         rc = of_property_read_string_index(np, "string-property", 0, strings);
>>         KUNIT_ASSERT_EQ(test, rc, 0);
>>         KUNIT_EXPECT_STREQ(test, strings[0], "foobar");
>>
>>
>> If a test fails, that is no reason to abort testing.  The remainder of the unit
>> tests can still run.  There may be cascading failures, but that is ok.
> 
> Sure, that's what I am trying to do. I don't see how (1) changes
> anything: a failed KUNIT_ASSERT_* only bails on the current test case;
> it does not quit the entire test suite, let alone crash the kernel.

This may be another case of the question of whether a kunit_module should
contain approximately a single KUNIT_EXPECT_*() or a larger number of them.

I still want, for example, of_unittest_property_string() to include a large
number of KUNIT_EXPECT_*() instances.  In that case I still want the rest of
the tests in the kunit_module to be executed even after a KUNIT_ASSERT_*()
fails.  The existing test code has that property.
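
Concretely, the shape I want to remain possible is something like the
hypothetical test body below, written with the macro names proposed in
this series (the node lookup and the "string-property" values are only
illustrative, taken from the quoted example above):

#include <kunit/test.h>
#include <linux/of.h>

static void of_unittest_property_string_sketch(struct kunit *test)
{
        struct device_node *np;
        const char *strings[4];
        int rc;

        np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
        /* A failed ASSERT bails out of this one test case only. */
        KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);

        /*
         * Failed EXPECTs record the failure and keep going, so a single
         * case can hold many of them, as the existing unittests do.
         */
        rc = of_property_read_string_index(np, "string-property", 0, strings);
        KUNIT_EXPECT_EQ(test, rc, 0);
        KUNIT_EXPECT_STREQ(test, strings[0], "foobar");
}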


> 
> In case it wasn't clear above,
>>>>> +     test->try_catch.throw(&test->try_catch);
> should never, ever return. The only time it would is if KUnit
> were very broken. This should never actually happen, even if the
> assertion that called it was violated. KUNIT_ASSERT_* just says, "this
> is a prerequisite property for this test case"; if it is violated, the
> test case should fail and bail because the preconditions for the test
> case cannot be satisfied. Nevertheless, other test cases will still
> run.
> 


^ permalink raw reply	[flat|nested] 316+ messages in thread

* Re: [RFC v4 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework
@ 2019-03-22  1:12               ` Frank Rowand
  0 siblings, 0 replies; 316+ messages in thread
From: Frank Rowand @ 2019-03-22  1:12 UTC (permalink / raw)
  To: Brendan Higgins, Logan Gunthorpe
  Cc: Kees Cook, Luis Chamberlain, shuah, Rob Herring, Kieran Bingham,
	Greg KH, Joel Stanley, Michael Ellerman, Joe Perches, brakmo,
	Steven Rostedt, Bird, Timothy, Kevin Hilman, Julia Lawall,
	linux-kselftest, kunit-dev, Linux Kernel Mailing List, Jeff Dike,
	Richard Weinberger, linux-um, Daniel Vetter, dri-devel

On 3/21/19 4:33 PM, Brendan Higgins wrote:
> On Thu, Mar 21, 2019 at 3:27 PM Logan Gunthorpe <logang@deltatee.com> wrote:
>>
>>
>>
>> On 2019-03-21 4:07 p.m., Brendan Higgins wrote:
>>> A couple of points, as for needing CONFIG_PCI; my plan to deal with
>>> that type of thing has been that we would add support for a KUnit/UML
>>> version that is just for KUnit. It would mock out the necessary bits
>>> to provide a fake hardware implementation for anything that might
>>> depend on it. I wrote a prototype for mocking/faking MMIO that I
>>> presented to the list here[1]; it is not part of the current patchset
>>> because we decided it would be best to focus on getting an MVP in, but
>>> I plan on bringing it back up at some point. Anyway, what do you
>>> generally think of this approach?
>>
>> Yes, I was wondering if that might be possible. I think that's a great
>> approach but it will unfortunately take a lot of work before larger
>> swaths of the kernel are testable in Kunit with UML. Having more common
>> mocked infrastructure will be great by-product of it though.
> 
> Yeah, it's unfortunate that the best way to do something often takes
> so much longer.
> 
>>
>>> Awesome, I looked at the code you posted and it doesn't look like you
>>> have had too many troubles. One thing that stood out to me, why did
>>> you need to put it in the kunit/ dir?
>>
>> Yeah, writing the code was super easy. Only after, did I realized I
>> couldn't get it to easily build.
> 
> Yeah, we really need to fix that; unfortunately, broadly addressing
> that problem is really hard and will most likely take a long time.
> 
>>
>> Putting it in the kunit directory was necessary because nothing in the
>> NTB tree builds unless CONFIG_NTB is set (see drivers/Makefile) and
>> CONFIG_NTB depends on CONFIG_PCI. I didn't experiment to see how hard it
>> would be to set CONFIG_NTB without CONFIG_PCI; I assumed it would be tricky.
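For reference, the dependency chain being described looks roughly like this (abridged from drivers/Makefile and drivers/ntb/Kconfig; exact lines may differ by kernel version):

# drivers/Makefile: the ntb/ directory is only descended into with CONFIG_NTB
obj-$(CONFIG_NTB)		+= ntb/

# drivers/ntb/Kconfig: NTB itself cannot be enabled without PCI
menuconfig NTB
	tristate "Non-Transparent Bridge support"
	depends on PCI

So anything living under drivers/ntb/ is unreachable in a PCI-less UML build, which is why the test ended up under the kunit/ directory instead.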
>>
>>> I am looking forward to seeing what you think!
>>
>> Generally, I'm impressed and want to see this work in upstream as soon
>> as possible so I can start to make use of it!
> 
> Great to hear! I was trying to get the next revision out this week,
> but addressing some of the comments is taking a little longer than
> expected. I should have something together fairly soon though
> (hopefully next week). Good news is that next revision will be
> non-RFC; most of the feedback has settled down and I think we are
> ready to start figuring out how to merge it. Fingers crossed :-)
> 
> Cheers

I'll be out of the office next week and will not be able to review.
Please hold off on any devicetree related files until after I review.

Thanks,

Frank

^ permalink raw reply	[flat|nested] 316+ messages in thread

* Re: [RFC v4 16/17] of: unittest: split out a couple of test cases from unittest
  2019-02-14 21:37   ` brendanhiggins
                       ` (2 preceding siblings ...)
  (?)
@ 2019-03-22  1:14     ` Frank Rowand
  -1 siblings, 0 replies; 316+ messages in thread
From: Frank Rowand @ 2019-03-22  1:14 UTC (permalink / raw)
  To: Brendan Higgins, keescook, mcgrof, shuah, robh, kieran.bingham
  Cc: brakmo, pmladek, amir73il, dri-devel, Alexander.Levin,
	linux-kselftest, linux-nvdimm, richard, knut.omang, wfg, joel,
	jdike, dan.carpenter, devicetree, Tim.Bird, linux-um, rostedt,
	julia.lawall, kunit-dev, gregkh, linux-kernel, daniel, mpe, joe,
	khilman

On 2/14/19 1:37 PM, Brendan Higgins wrote:
> Split out a couple of test cases that test features in base.c from the
> unittest.c monolith. The intention is that we will eventually split out
> all test cases and group them together based on what portion of device
> tree they test.

I still object to this patch.  I do not want this code scattered into
additional files.

-Frank


> 
> Signed-off-by: Brendan Higgins <brendanhiggins@google.com>
> ---
>  drivers/of/Makefile      |   2 +-
>  drivers/of/base-test.c   | 214 ++++++++++++++++++++++++
>  drivers/of/test-common.c | 175 ++++++++++++++++++++
>  drivers/of/test-common.h |  16 ++
>  drivers/of/unittest.c    | 345 +--------------------------------------
>  5 files changed, 407 insertions(+), 345 deletions(-)
>  create mode 100644 drivers/of/base-test.c
>  create mode 100644 drivers/of/test-common.c
>  create mode 100644 drivers/of/test-common.h
> 
> diff --git a/drivers/of/Makefile b/drivers/of/Makefile
> index 663a4af0cccd5..4a4bd527d586c 100644
> --- a/drivers/of/Makefile
> +++ b/drivers/of/Makefile
> @@ -8,7 +8,7 @@ obj-$(CONFIG_OF_PROMTREE) += pdt.o
>  obj-$(CONFIG_OF_ADDRESS)  += address.o
>  obj-$(CONFIG_OF_IRQ)    += irq.o
>  obj-$(CONFIG_OF_NET)	+= of_net.o
> -obj-$(CONFIG_OF_UNITTEST) += unittest.o
> +obj-$(CONFIG_OF_UNITTEST) += unittest.o base-test.o test-common.o
>  obj-$(CONFIG_OF_MDIO)	+= of_mdio.o
>  obj-$(CONFIG_OF_RESERVED_MEM) += of_reserved_mem.o
>  obj-$(CONFIG_OF_RESOLVE)  += resolver.o
> diff --git a/drivers/of/base-test.c b/drivers/of/base-test.c
> new file mode 100644
> index 0000000000000..3d3f4f1b74800
> --- /dev/null
> +++ b/drivers/of/base-test.c
> @@ -0,0 +1,214 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Unit tests for functions defined in base.c.
> + */
> +#include <linux/of.h>
> +
> +#include <kunit/test.h>
> +
> +#include "test-common.h"
> +
> +static void of_unittest_find_node_by_name(struct kunit *test)
> +{
> +	struct device_node *np;
> +	const char *options, *name;
> +
> +	np = of_find_node_by_path("/testcase-data");
> +	name = kasprintf(GFP_KERNEL, "%pOF", np);
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> +	KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
> +			       "find /testcase-data failed\n");
> +	of_node_put(np);
> +	kfree(name);
> +
> +	/* Test if trailing '/' works */
> +	KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("/testcase-data/"), NULL,
> +			    "trailing '/' on /testcase-data/ should fail\n");
> +
> +	np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> +	name = kasprintf(GFP_KERNEL, "%pOF", np);
> +	KUNIT_EXPECT_STREQ_MSG(
> +		test, "/testcase-data/phandle-tests/consumer-a", name,
> +		"find /testcase-data/phandle-tests/consumer-a failed\n");
> +	of_node_put(np);
> +	kfree(name);
> +
> +	np = of_find_node_by_path("testcase-alias");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> +	name = kasprintf(GFP_KERNEL, "%pOF", np);
> +	KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
> +			       "find testcase-alias failed\n");
> +	of_node_put(np);
> +	kfree(name);
> +
> +	/* Test if trailing '/' works on aliases */
> +	KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("testcase-alias/"), NULL,
> +			    "trailing '/' on testcase-alias/ should fail\n");
> +
> +	np = of_find_node_by_path("testcase-alias/phandle-tests/consumer-a");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> +	name = kasprintf(GFP_KERNEL, "%pOF", np);
> +	KUNIT_EXPECT_STREQ_MSG(
> +		test, "/testcase-data/phandle-tests/consumer-a", name,
> +		"find testcase-alias/phandle-tests/consumer-a failed\n");
> +	of_node_put(np);
> +	kfree(name);
> +
> +	KUNIT_EXPECT_EQ_MSG(
> +		test,
> +		np = of_find_node_by_path("/testcase-data/missing-path"), NULL,
> +		"non-existent path returned node %pOF\n", np);
> +	of_node_put(np);
> +
> +	KUNIT_EXPECT_EQ_MSG(
> +		test, np = of_find_node_by_path("missing-alias"), NULL,
> +		"non-existent alias returned node %pOF\n", np);
> +	of_node_put(np);
> +
> +	KUNIT_EXPECT_EQ_MSG(
> +		test,
> +		np = of_find_node_by_path("testcase-alias/missing-path"), NULL,
> +		"non-existent alias with relative path returned node %pOF\n",
> +		np);
> +	of_node_put(np);
> +
> +	np = of_find_node_opts_by_path("/testcase-data:testoption", &options);
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> +	KUNIT_EXPECT_STREQ_MSG(test, "testoption", options,
> +			       "option path test failed\n");
> +	of_node_put(np);
> +
> +	np = of_find_node_opts_by_path("/testcase-data:test/option", &options);
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> +	KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
> +			       "option path test, subcase #1 failed\n");
> +	of_node_put(np);
> +
> +	np = of_find_node_opts_by_path("/testcase-data/testcase-device1:test/option", &options);
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> +	KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
> +			       "option path test, subcase #2 failed\n");
> +	of_node_put(np);
> +
> +	np = of_find_node_opts_by_path("/testcase-data:testoption", NULL);
> +	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(test, np,
> +					 "NULL option path test failed\n");
> +	of_node_put(np);
> +
> +	np = of_find_node_opts_by_path("testcase-alias:testaliasoption",
> +				       &options);
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> +	KUNIT_EXPECT_STREQ_MSG(test, "testaliasoption", options,
> +			       "option alias path test failed\n");
> +	of_node_put(np);
> +
> +	np = of_find_node_opts_by_path("testcase-alias:test/alias/option",
> +				       &options);
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> +	KUNIT_EXPECT_STREQ_MSG(test, "test/alias/option", options,
> +			       "option alias path test, subcase #1 failed\n");
> +	of_node_put(np);
> +
> +	np = of_find_node_opts_by_path("testcase-alias:testaliasoption", NULL);
> +	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(
> +			test, np, "NULL option alias path test failed\n");
> +	of_node_put(np);
> +
> +	options = "testoption";
> +	np = of_find_node_opts_by_path("testcase-alias", &options);
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> +	KUNIT_EXPECT_EQ_MSG(test, options, NULL,
> +			    "option clearing test failed\n");
> +	of_node_put(np);
> +
> +	options = "testoption";
> +	np = of_find_node_opts_by_path("/", &options);
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> +	KUNIT_EXPECT_EQ_MSG(test, options, NULL,
> +			    "option clearing root node test failed\n");
> +	of_node_put(np);
> +}
> +
> +static void of_unittest_dynamic(struct kunit *test)
> +{
> +	struct device_node *np;
> +	struct property *prop;
> +
> +	np = of_find_node_by_path("/testcase-data");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> +
> +	/* Array of 4 properties for the purpose of testing */
> +	prop = kcalloc(4, sizeof(*prop), GFP_KERNEL);
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, prop);
> +
> +	/* Add a new property - should pass*/
> +	prop->name = "new-property";
> +	prop->value = "new-property-data";
> +	prop->length = strlen(prop->value) + 1;
> +	KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop), 0,
> +			    "Adding a new property failed\n");
> +
> +	/* Try to add an existing property - should fail */
> +	prop++;
> +	prop->name = "new-property";
> +	prop->value = "new-property-data-should-fail";
> +	prop->length = strlen(prop->value) + 1;
> +	KUNIT_EXPECT_NE_MSG(test, of_add_property(np, prop), 0,
> +			    "Adding an existing property should have failed\n");
> +
> +	/* Try to modify an existing property - should pass */
> +	prop->value = "modify-property-data-should-pass";
> +	prop->length = strlen(prop->value) + 1;
> +	KUNIT_EXPECT_EQ_MSG(
> +		test, of_update_property(np, prop), 0,
> +		"Updating an existing property should have passed\n");
> +
> +	/* Try to modify non-existent property - should pass*/
> +	prop++;
> +	prop->name = "modify-property";
> +	prop->value = "modify-missing-property-data-should-pass";
> +	prop->length = strlen(prop->value) + 1;
> +	KUNIT_EXPECT_EQ_MSG(test, of_update_property(np, prop), 0,
> +			    "Updating a missing property should have passed\n");
> +
> +	/* Remove property - should pass */
> +	KUNIT_EXPECT_EQ_MSG(test, of_remove_property(np, prop), 0,
> +			    "Removing a property should have passed\n");
> +
> +	/* Adding very large property - should pass */
> +	prop++;
> +	prop->name = "large-property-PAGE_SIZEx8";
> +	prop->length = PAGE_SIZE * 8;
> +	prop->value = kzalloc(prop->length, GFP_KERNEL);
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, prop->value);
> +	KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop), 0,
> +			    "Adding a large property should have passed\n");
> +}
> +
> +static int of_test_init(struct kunit *test)
> +{
> +	/* adding data for unittest */
> +	KUNIT_ASSERT_EQ(test, 0, unittest_data_add());
> +
> +	if (!of_aliases)
> +		of_aliases = of_find_node_by_path("/aliases");
> +
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, of_find_node_by_path(
> +			"/testcase-data/phandle-tests/consumer-a"));
> +
> +	return 0;
> +}
> +
> +static struct kunit_case of_test_cases[] = {
> +	KUNIT_CASE(of_unittest_find_node_by_name),
> +	KUNIT_CASE(of_unittest_dynamic),
> +	{},
> +};
> +
> +static struct kunit_module of_test_module = {
> +	.name = "of-base-test",
> +	.init = of_test_init,
> +	.test_cases = of_test_cases,
> +};
> +module_test(of_test_module);
> diff --git a/drivers/of/test-common.c b/drivers/of/test-common.c
> new file mode 100644
> index 0000000000000..4c9a5f3b82f7d
> --- /dev/null
> +++ b/drivers/of/test-common.c
> @@ -0,0 +1,175 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Common code to be used by unit tests.
> + */
> +#include "test-common.h"
> +
> +#include <linux/of_fdt.h>
> +#include <linux/slab.h>
> +
> +#include "of_private.h"
> +
> +/**
> + *	update_node_properties - adds the properties
> + *	of np into dup node (present in live tree) and
> + *	updates parent of children of np to dup.
> + *
> + *	@np:	node whose properties are being added to the live tree
> + *	@dup:	node present in live tree to be updated
> + */
> +static void update_node_properties(struct device_node *np,
> +					struct device_node *dup)
> +{
> +	struct property *prop;
> +	struct property *save_next;
> +	struct device_node *child;
> +	int ret;
> +
> +	for_each_child_of_node(np, child)
> +		child->parent = dup;
> +
> +	/*
> +	 * "unittest internal error: unable to add testdata property"
> +	 *
> +	 *    If this message reports a property in node '/__symbols__' then
> +	 *    the respective unittest overlay contains a label that has the
> +	 *    same name as a label in the live devicetree.  The label will
> +	 *    be in the live devicetree only if the devicetree source was
> +	 *    compiled with the '-@' option.  If you encounter this error,
> +	 *    please consider renaming __all__ of the labels in the unittest
> +	 *    overlay dts files with an odd prefix that is unlikely to be
> +	 *    used in a real devicetree.
> +	 */
> +
> +	/*
> +	 * open code for_each_property_of_node() because of_add_property()
> +	 * sets prop->next to NULL
> +	 */
> +	for (prop = np->properties; prop != NULL; prop = save_next) {
> +		save_next = prop->next;
> +		ret = of_add_property(dup, prop);
> +		if (ret)
> +			pr_err("unittest internal error: unable to add testdata property %pOF/%s",
> +			       np, prop->name);
> +	}
> +}
> +
> +/**
> + *	attach_node_and_children - attaches nodes
> + *	and its children to live tree
> + *
> + *	@np:	Node to attach to live tree
> + */
> +static void attach_node_and_children(struct device_node *np)
> +{
> +	struct device_node *next, *dup, *child;
> +	unsigned long flags;
> +	const char *full_name;
> +
> +	full_name = kasprintf(GFP_KERNEL, "%pOF", np);
> +
> +	if (!strcmp(full_name, "/__local_fixups__") ||
> +	    !strcmp(full_name, "/__fixups__"))
> +		return;
> +
> +	dup = of_find_node_by_path(full_name);
> +	kfree(full_name);
> +	if (dup) {
> +		update_node_properties(np, dup);
> +		return;
> +	}
> +
> +	child = np->child;
> +	np->child = NULL;
> +
> +	mutex_lock(&of_mutex);
> +	raw_spin_lock_irqsave(&devtree_lock, flags);
> +	np->sibling = np->parent->child;
> +	np->parent->child = np;
> +	of_node_clear_flag(np, OF_DETACHED);
> +	raw_spin_unlock_irqrestore(&devtree_lock, flags);
> +
> +	__of_attach_node_sysfs(np);
> +	mutex_unlock(&of_mutex);
> +
> +	while (child) {
> +		next = child->sibling;
> +		attach_node_and_children(child);
> +		child = next;
> +	}
> +}
> +
> +/**
> + *	unittest_data_add - Reads, copies data from
> + *	linked tree and attaches it to the live tree
> + */
> +int unittest_data_add(void)
> +{
> +	void *unittest_data;
> +	struct device_node *unittest_data_node, *np;
> +	/*
> +	 * __dtb_testcases_begin[] and __dtb_testcases_end[] are magically
> +	 * created by cmd_dt_S_dtb in scripts/Makefile.lib
> +	 */
> +	extern uint8_t __dtb_testcases_begin[];
> +	extern uint8_t __dtb_testcases_end[];
> +	const int size = __dtb_testcases_end - __dtb_testcases_begin;
> +	int rc;
> +
> +	if (!size) {
> +		pr_warn("%s: No testcase data to attach; not running tests\n",
> +			__func__);
> +		return -ENODATA;
> +	}
> +
> +	/* creating copy */
> +	unittest_data = kmemdup(__dtb_testcases_begin, size, GFP_KERNEL);
> +
> +	if (!unittest_data) {
> +		pr_warn("%s: Failed to allocate memory for unittest_data; "
> +			"not running tests\n", __func__);
> +		return -ENOMEM;
> +	}
> +	of_fdt_unflatten_tree(unittest_data, NULL, &unittest_data_node);
> +	if (!unittest_data_node) {
> +		pr_warn("%s: No tree to attach; not running tests\n", __func__);
> +		return -ENODATA;
> +	}
> +
> +	/*
> +	 * This lock normally encloses of_resolve_phandles()
> +	 */
> +	of_overlay_mutex_lock();
> +
> +	rc = of_resolve_phandles(unittest_data_node);
> +	if (rc) {
> +		pr_err("%s: Failed to resolve phandles (rc=%i)\n", __func__, rc);
> +		of_overlay_mutex_unlock();
> +		return -EINVAL;
> +	}
> +
> +	if (!of_root) {
> +		of_root = unittest_data_node;
> +		for_each_of_allnodes(np)
> +			__of_attach_node_sysfs(np);
> +		of_aliases = of_find_node_by_path("/aliases");
> +		of_chosen = of_find_node_by_path("/chosen");
> +		of_overlay_mutex_unlock();
> +		return 0;
> +	}
> +
> +	/* attach the sub-tree to live tree */
> +	np = unittest_data_node->child;
> +	while (np) {
> +		struct device_node *next = np->sibling;
> +
> +		np->parent = of_root;
> +		attach_node_and_children(np);
> +		np = next;
> +	}
> +
> +	of_overlay_mutex_unlock();
> +
> +	return 0;
> +}
> +
> diff --git a/drivers/of/test-common.h b/drivers/of/test-common.h
> new file mode 100644
> index 0000000000000..a35484406bbf1
> --- /dev/null
> +++ b/drivers/of/test-common.h
> @@ -0,0 +1,16 @@
> +/* SPDX-License-Identifier: GPL-2.0+ */
> +/*
> + * Common code to be used by unit tests.
> + */
> +#ifndef _LINUX_OF_TEST_COMMON_H
> +#define _LINUX_OF_TEST_COMMON_H
> +
> +#include <linux/of.h>
> +
> +/**
> + *	unittest_data_add - Reads, copies data from
> + *	linked tree and attaches it to the live tree
> + */
> +int unittest_data_add(void);
> +
> +#endif /* _LINUX_OF_TEST_COMMON_H */
> diff --git a/drivers/of/unittest.c b/drivers/of/unittest.c
> index 96de69ccb3e63..05a2610d0be7f 100644
> --- a/drivers/of/unittest.c
> +++ b/drivers/of/unittest.c
> @@ -29,184 +29,7 @@
>  #include <kunit/test.h>
>  
>  #include "of_private.h"
> -
> -static void of_unittest_find_node_by_name(struct kunit *test)
> -{
> -	struct device_node *np;
> -	const char *options, *name;
> -
> -	np = of_find_node_by_path("/testcase-data");
> -	name = kasprintf(GFP_KERNEL, "%pOF", np);
> -	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> -	KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
> -			       "find /testcase-data failed\n");
> -	of_node_put(np);
> -	kfree(name);
> -
> -	/* Test if trailing '/' works */
> -	KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("/testcase-data/"), NULL,
> -			    "trailing '/' on /testcase-data/ should fail\n");
> -
> -	np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
> -	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> -	name = kasprintf(GFP_KERNEL, "%pOF", np);
> -	KUNIT_EXPECT_STREQ_MSG(
> -		test, "/testcase-data/phandle-tests/consumer-a", name,
> -		"find /testcase-data/phandle-tests/consumer-a failed\n");
> -	of_node_put(np);
> -	kfree(name);
> -
> -	np = of_find_node_by_path("testcase-alias");
> -	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> -	name = kasprintf(GFP_KERNEL, "%pOF", np);
> -	KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
> -			       "find testcase-alias failed\n");
> -	of_node_put(np);
> -	kfree(name);
> -
> -	/* Test if trailing '/' works on aliases */
> -	KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("testcase-alias/"), NULL,
> -			    "trailing '/' on testcase-alias/ should fail\n");
> -
> -	np = of_find_node_by_path("testcase-alias/phandle-tests/consumer-a");
> -	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> -	name = kasprintf(GFP_KERNEL, "%pOF", np);
> -	KUNIT_EXPECT_STREQ_MSG(
> -		test, "/testcase-data/phandle-tests/consumer-a", name,
> -		"find testcase-alias/phandle-tests/consumer-a failed\n");
> -	of_node_put(np);
> -	kfree(name);
> -
> -	KUNIT_EXPECT_EQ_MSG(
> -		test,
> -		np = of_find_node_by_path("/testcase-data/missing-path"), NULL,
> -		"non-existent path returned node %pOF\n", np);
> -	of_node_put(np);
> -
> -	KUNIT_EXPECT_EQ_MSG(
> -		test, np = of_find_node_by_path("missing-alias"), NULL,
> -		"non-existent alias returned node %pOF\n", np);
> -	of_node_put(np);
> -
> -	KUNIT_EXPECT_EQ_MSG(
> -		test,
> -		np = of_find_node_by_path("testcase-alias/missing-path"), NULL,
> -		"non-existent alias with relative path returned node %pOF\n",
> -		np);
> -	of_node_put(np);
> -
> -	np = of_find_node_opts_by_path("/testcase-data:testoption", &options);
> -	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> -	KUNIT_EXPECT_STREQ_MSG(test, "testoption", options,
> -			       "option path test failed\n");
> -	of_node_put(np);
> -
> -	np = of_find_node_opts_by_path("/testcase-data:test/option", &options);
> -	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> -	KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
> -			       "option path test, subcase #1 failed\n");
> -	of_node_put(np);
> -
> -	np = of_find_node_opts_by_path("/testcase-data/testcase-device1:test/option", &options);
> -	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> -	KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
> -			       "option path test, subcase #2 failed\n");
> -	of_node_put(np);
> -
> -	np = of_find_node_opts_by_path("/testcase-data:testoption", NULL);
> -	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(test, np,
> -					 "NULL option path test failed\n");
> -	of_node_put(np);
> -
> -	np = of_find_node_opts_by_path("testcase-alias:testaliasoption",
> -				       &options);
> -	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> -	KUNIT_EXPECT_STREQ_MSG(test, "testaliasoption", options,
> -			       "option alias path test failed\n");
> -	of_node_put(np);
> -
> -	np = of_find_node_opts_by_path("testcase-alias:test/alias/option",
> -				       &options);
> -	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> -	KUNIT_EXPECT_STREQ_MSG(test, "test/alias/option", options,
> -			       "option alias path test, subcase #1 failed\n");
> -	of_node_put(np);
> -
> -	np = of_find_node_opts_by_path("testcase-alias:testaliasoption", NULL);
> -	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(
> -			test, np, "NULL option alias path test failed\n");
> -	of_node_put(np);
> -
> -	options = "testoption";
> -	np = of_find_node_opts_by_path("testcase-alias", &options);
> -	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> -	KUNIT_EXPECT_EQ_MSG(test, options, NULL,
> -			    "option clearing test failed\n");
> -	of_node_put(np);
> -
> -	options = "testoption";
> -	np = of_find_node_opts_by_path("/", &options);
> -	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> -	KUNIT_EXPECT_EQ_MSG(test, options, NULL,
> -			    "option clearing root node test failed\n");
> -	of_node_put(np);
> -}
> -
> -static void of_unittest_dynamic(struct kunit *test)
> -{
> -	struct device_node *np;
> -	struct property *prop;
> -
> -	np = of_find_node_by_path("/testcase-data");
> -	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> -
> -	/* Array of 4 properties for the purpose of testing */
> -	prop = kcalloc(4, sizeof(*prop), GFP_KERNEL);
> -	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, prop);
> -
> -	/* Add a new property - should pass*/
> -	prop->name = "new-property";
> -	prop->value = "new-property-data";
> -	prop->length = strlen(prop->value) + 1;
> -	KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop), 0,
> -			    "Adding a new property failed\n");
> -
> -	/* Try to add an existing property - should fail */
> -	prop++;
> -	prop->name = "new-property";
> -	prop->value = "new-property-data-should-fail";
> -	prop->length = strlen(prop->value) + 1;
> -	KUNIT_EXPECT_NE_MSG(test, of_add_property(np, prop), 0,
> -			    "Adding an existing property should have failed\n");
> -
> -	/* Try to modify an existing property - should pass */
> -	prop->value = "modify-property-data-should-pass";
> -	prop->length = strlen(prop->value) + 1;
> -	KUNIT_EXPECT_EQ_MSG(
> -		test, of_update_property(np, prop), 0,
> -		"Updating an existing property should have passed\n");
> -
> -	/* Try to modify non-existent property - should pass*/
> -	prop++;
> -	prop->name = "modify-property";
> -	prop->value = "modify-missing-property-data-should-pass";
> -	prop->length = strlen(prop->value) + 1;
> -	KUNIT_EXPECT_EQ_MSG(test, of_update_property(np, prop), 0,
> -			    "Updating a missing property should have passed\n");
> -
> -	/* Remove property - should pass */
> -	KUNIT_EXPECT_EQ_MSG(test, of_remove_property(np, prop), 0,
> -			    "Removing a property should have passed\n");
> -
> -	/* Adding very large property - should pass */
> -	prop++;
> -	prop->name = "large-property-PAGE_SIZEx8";
> -	prop->length = PAGE_SIZE * 8;
> -	prop->value = kzalloc(prop->length, GFP_KERNEL);
> -	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, prop->value);
> -	KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop), 0,
> -			    "Adding a large property should have passed\n");
> -}
> +#include "test-common.h"
>  
>  static int of_unittest_check_node_linkage(struct device_node *np)
>  {
> @@ -1177,170 +1000,6 @@ static void of_unittest_platform_populate(struct kunit *test)
>  	of_node_put(np);
>  }
>  
> -/**
> - *	update_node_properties - adds the properties
> - *	of np into dup node (present in live tree) and
> - *	updates parent of children of np to dup.
> - *
> - *	@np:	node whose properties are being added to the live tree
> - *	@dup:	node present in live tree to be updated
> - */
> -static void update_node_properties(struct device_node *np,
> -					struct device_node *dup)
> -{
> -	struct property *prop;
> -	struct property *save_next;
> -	struct device_node *child;
> -	int ret;
> -
> -	for_each_child_of_node(np, child)
> -		child->parent = dup;
> -
> -	/*
> -	 * "unittest internal error: unable to add testdata property"
> -	 *
> -	 *    If this message reports a property in node '/__symbols__' then
> -	 *    the respective unittest overlay contains a label that has the
> -	 *    same name as a label in the live devicetree.  The label will
> -	 *    be in the live devicetree only if the devicetree source was
> -	 *    compiled with the '-@' option.  If you encounter this error,
> -	 *    please consider renaming __all__ of the labels in the unittest
> -	 *    overlay dts files with an odd prefix that is unlikely to be
> -	 *    used in a real devicetree.
> -	 */
> -
> -	/*
> -	 * open code for_each_property_of_node() because of_add_property()
> -	 * sets prop->next to NULL
> -	 */
> -	for (prop = np->properties; prop != NULL; prop = save_next) {
> -		save_next = prop->next;
> -		ret = of_add_property(dup, prop);
> -		if (ret)
> -			pr_err("unittest internal error: unable to add testdata property %pOF/%s",
> -			       np, prop->name);
> -	}
> -}
> -
> -/**
> - *	attach_node_and_children - attaches nodes
> - *	and its children to live tree
> - *
> - *	@np:	Node to attach to live tree
> - */
> -static void attach_node_and_children(struct device_node *np)
> -{
> -	struct device_node *next, *dup, *child;
> -	unsigned long flags;
> -	const char *full_name;
> -
> -	full_name = kasprintf(GFP_KERNEL, "%pOF", np);
> -
> -	if (!strcmp(full_name, "/__local_fixups__") ||
> -	    !strcmp(full_name, "/__fixups__"))
> -		return;
> -
> -	dup = of_find_node_by_path(full_name);
> -	kfree(full_name);
> -	if (dup) {
> -		update_node_properties(np, dup);
> -		return;
> -	}
> -
> -	child = np->child;
> -	np->child = NULL;
> -
> -	mutex_lock(&of_mutex);
> -	raw_spin_lock_irqsave(&devtree_lock, flags);
> -	np->sibling = np->parent->child;
> -	np->parent->child = np;
> -	of_node_clear_flag(np, OF_DETACHED);
> -	raw_spin_unlock_irqrestore(&devtree_lock, flags);
> -
> -	__of_attach_node_sysfs(np);
> -	mutex_unlock(&of_mutex);
> -
> -	while (child) {
> -		next = child->sibling;
> -		attach_node_and_children(child);
> -		child = next;
> -	}
> -}
> -
> -/**
> - *	unittest_data_add - Reads, copies data from
> - *	linked tree and attaches it to the live tree
> - */
> -static int unittest_data_add(void)
> -{
> -	void *unittest_data;
> -	struct device_node *unittest_data_node, *np;
> -	/*
> -	 * __dtb_testcases_begin[] and __dtb_testcases_end[] are magically
> -	 * created by cmd_dt_S_dtb in scripts/Makefile.lib
> -	 */
> -	extern uint8_t __dtb_testcases_begin[];
> -	extern uint8_t __dtb_testcases_end[];
> -	const int size = __dtb_testcases_end - __dtb_testcases_begin;
> -	int rc;
> -
> -	if (!size) {
> -		pr_warn("%s: No testcase data to attach; not running tests\n",
> -			__func__);
> -		return -ENODATA;
> -	}
> -
> -	/* creating copy */
> -	unittest_data = kmemdup(__dtb_testcases_begin, size, GFP_KERNEL);
> -
> -	if (!unittest_data) {
> -		pr_warn("%s: Failed to allocate memory for unittest_data; "
> -			"not running tests\n", __func__);
> -		return -ENOMEM;
> -	}
> -	of_fdt_unflatten_tree(unittest_data, NULL, &unittest_data_node);
> -	if (!unittest_data_node) {
> -		pr_warn("%s: No tree to attach; not running tests\n", __func__);
> -		return -ENODATA;
> -	}
> -
> -	/*
> -	 * This lock normally encloses of_resolve_phandles()
> -	 */
> -	of_overlay_mutex_lock();
> -
> -	rc = of_resolve_phandles(unittest_data_node);
> -	if (rc) {
> -		pr_err("%s: Failed to resolve phandles (rc=%i)\n", __func__, rc);
> -		of_overlay_mutex_unlock();
> -		return -EINVAL;
> -	}
> -
> -	if (!of_root) {
> -		of_root = unittest_data_node;
> -		for_each_of_allnodes(np)
> -			__of_attach_node_sysfs(np);
> -		of_aliases = of_find_node_by_path("/aliases");
> -		of_chosen = of_find_node_by_path("/chosen");
> -		of_overlay_mutex_unlock();
> -		return 0;
> -	}
> -
> -	/* attach the sub-tree to live tree */
> -	np = unittest_data_node->child;
> -	while (np) {
> -		struct device_node *next = np->sibling;
> -
> -		np->parent = of_root;
> -		attach_node_and_children(np);
> -		np = next;
> -	}
> -
> -	of_overlay_mutex_unlock();
> -
> -	return 0;
> -}
> -
>  #ifdef CONFIG_OF_OVERLAY
>  static int overlay_data_apply(const char *overlay_name, int *overlay_id);
>  
> @@ -2563,8 +2222,6 @@ static int of_test_init(struct kunit *test)
>  static struct kunit_case of_test_cases[] = {
>  	KUNIT_CASE(of_unittest_check_tree_linkage),
>  	KUNIT_CASE(of_unittest_check_phandles),
> -	KUNIT_CASE(of_unittest_find_node_by_name),
> -	KUNIT_CASE(of_unittest_dynamic),
>  	KUNIT_CASE(of_unittest_parse_phandle_with_args),
>  	KUNIT_CASE(of_unittest_parse_phandle_with_args_map),
>  	KUNIT_CASE(of_unittest_printf),
> 

^ permalink raw reply	[flat|nested] 316+ messages in thread

* Re: [RFC v4 16/17] of: unittest: split out a couple of test cases from unittest
@ 2019-03-22  1:14     ` Frank Rowand
  0 siblings, 0 replies; 316+ messages in thread
From: Frank Rowand @ 2019-03-22  1:14 UTC (permalink / raw)
  To: Brendan Higgins, keescook, mcgrof, shuah, robh, kieran.bingham
  Cc: gregkh, joel, mpe, joe, brakmo, rostedt, Tim.Bird, khilman,
	julia.lawall, linux-kselftest, kunit-dev, linux-kernel, jdike,
	richard, linux-um, daniel, dri-devel, dan.j.williams,
	linux-nvdimm, knut.omang, devicetree, pmladek, Alexander.Levin,
	amir73il, dan.carpenter, wfg

On 2/14/19 1:37 PM, Brendan Higgins wrote:
> Split out a couple of test cases that test features in base.c from the
> unittest.c monolith. The intention is that we will eventually split out
> all test cases and group them together based on what portion of device
> tree they test.

I still object to this patch.  I do not want this code scattered into
additional files.

-Frank


> 
> Signed-off-by: Brendan Higgins <brendanhiggins@google.com>
> ---
>  drivers/of/Makefile      |   2 +-
>  drivers/of/base-test.c   | 214 ++++++++++++++++++++++++
>  drivers/of/test-common.c | 175 ++++++++++++++++++++
>  drivers/of/test-common.h |  16 ++
>  drivers/of/unittest.c    | 345 +--------------------------------------
>  5 files changed, 407 insertions(+), 345 deletions(-)
>  create mode 100644 drivers/of/base-test.c
>  create mode 100644 drivers/of/test-common.c
>  create mode 100644 drivers/of/test-common.h
> 
> diff --git a/drivers/of/Makefile b/drivers/of/Makefile
> index 663a4af0cccd5..4a4bd527d586c 100644
> --- a/drivers/of/Makefile
> +++ b/drivers/of/Makefile
> @@ -8,7 +8,7 @@ obj-$(CONFIG_OF_PROMTREE) += pdt.o
>  obj-$(CONFIG_OF_ADDRESS)  += address.o
>  obj-$(CONFIG_OF_IRQ)    += irq.o
>  obj-$(CONFIG_OF_NET)	+= of_net.o
> -obj-$(CONFIG_OF_UNITTEST) += unittest.o
> +obj-$(CONFIG_OF_UNITTEST) += unittest.o base-test.o test-common.o
>  obj-$(CONFIG_OF_MDIO)	+= of_mdio.o
>  obj-$(CONFIG_OF_RESERVED_MEM) += of_reserved_mem.o
>  obj-$(CONFIG_OF_RESOLVE)  += resolver.o
> diff --git a/drivers/of/base-test.c b/drivers/of/base-test.c
> new file mode 100644
> index 0000000000000..3d3f4f1b74800
> --- /dev/null
> +++ b/drivers/of/base-test.c
> @@ -0,0 +1,214 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Unit tests for functions defined in base.c.
> + */
> +#include <linux/of.h>
> +
> +#include <kunit/test.h>
> +
> +#include "test-common.h"
> +
> +static void of_unittest_find_node_by_name(struct kunit *test)
> +{
> +	struct device_node *np;
> +	const char *options, *name;
> +
> +	np = of_find_node_by_path("/testcase-data");
> +	name = kasprintf(GFP_KERNEL, "%pOF", np);
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> +	KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
> +			       "find /testcase-data failed\n");
> +	of_node_put(np);
> +	kfree(name);
> +
> +	/* Test if trailing '/' works */
> +	KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("/testcase-data/"), NULL,
> +			    "trailing '/' on /testcase-data/ should fail\n");
> +
> +	np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> +	name = kasprintf(GFP_KERNEL, "%pOF", np);
> +	KUNIT_EXPECT_STREQ_MSG(
> +		test, "/testcase-data/phandle-tests/consumer-a", name,
> +		"find /testcase-data/phandle-tests/consumer-a failed\n");
> +	of_node_put(np);
> +	kfree(name);
> +
> +	np = of_find_node_by_path("testcase-alias");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> +	name = kasprintf(GFP_KERNEL, "%pOF", np);
> +	KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
> +			       "find testcase-alias failed\n");
> +	of_node_put(np);
> +	kfree(name);
> +
> +	/* Test if trailing '/' works on aliases */
> +	KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("testcase-alias/"), NULL,
> +			    "trailing '/' on testcase-alias/ should fail\n");
> +
> +	np = of_find_node_by_path("testcase-alias/phandle-tests/consumer-a");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> +	name = kasprintf(GFP_KERNEL, "%pOF", np);
> +	KUNIT_EXPECT_STREQ_MSG(
> +		test, "/testcase-data/phandle-tests/consumer-a", name,
> +		"find testcase-alias/phandle-tests/consumer-a failed\n");
> +	of_node_put(np);
> +	kfree(name);
> +
> +	KUNIT_EXPECT_EQ_MSG(
> +		test,
> +		np = of_find_node_by_path("/testcase-data/missing-path"), NULL,
> +		"non-existent path returned node %pOF\n", np);
> +	of_node_put(np);
> +
> +	KUNIT_EXPECT_EQ_MSG(
> +		test, np = of_find_node_by_path("missing-alias"), NULL,
> +		"non-existent alias returned node %pOF\n", np);
> +	of_node_put(np);
> +
> +	KUNIT_EXPECT_EQ_MSG(
> +		test,
> +		np = of_find_node_by_path("testcase-alias/missing-path"), NULL,
> +		"non-existent alias with relative path returned node %pOF\n",
> +		np);
> +	of_node_put(np);
> +
> +	np = of_find_node_opts_by_path("/testcase-data:testoption", &options);
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> +	KUNIT_EXPECT_STREQ_MSG(test, "testoption", options,
> +			       "option path test failed\n");
> +	of_node_put(np);
> +
> +	np = of_find_node_opts_by_path("/testcase-data:test/option", &options);
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> +	KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
> +			       "option path test, subcase #1 failed\n");
> +	of_node_put(np);
> +
> +	np = of_find_node_opts_by_path("/testcase-data/testcase-device1:test/option", &options);
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> +	KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
> +			       "option path test, subcase #2 failed\n");
> +	of_node_put(np);
> +
> +	np = of_find_node_opts_by_path("/testcase-data:testoption", NULL);
> +	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(test, np,
> +					 "NULL option path test failed\n");
> +	of_node_put(np);
> +
> +	np = of_find_node_opts_by_path("testcase-alias:testaliasoption",
> +				       &options);
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> +	KUNIT_EXPECT_STREQ_MSG(test, "testaliasoption", options,
> +			       "option alias path test failed\n");
> +	of_node_put(np);
> +
> +	np = of_find_node_opts_by_path("testcase-alias:test/alias/option",
> +				       &options);
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> +	KUNIT_EXPECT_STREQ_MSG(test, "test/alias/option", options,
> +			       "option alias path test, subcase #1 failed\n");
> +	of_node_put(np);
> +
> +	np = of_find_node_opts_by_path("testcase-alias:testaliasoption", NULL);
> +	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(
> +			test, np, "NULL option alias path test failed\n");
> +	of_node_put(np);
> +
> +	options = "testoption";
> +	np = of_find_node_opts_by_path("testcase-alias", &options);
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> +	KUNIT_EXPECT_EQ_MSG(test, options, NULL,
> +			    "option clearing test failed\n");
> +	of_node_put(np);
> +
> +	options = "testoption";
> +	np = of_find_node_opts_by_path("/", &options);
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> +	KUNIT_EXPECT_EQ_MSG(test, options, NULL,
> +			    "option clearing root node test failed\n");
> +	of_node_put(np);
> +}
> +
> +static void of_unittest_dynamic(struct kunit *test)
> +{
> +	struct device_node *np;
> +	struct property *prop;
> +
> +	np = of_find_node_by_path("/testcase-data");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> +
> +	/* Array of 4 properties for the purpose of testing */
> +	prop = kcalloc(4, sizeof(*prop), GFP_KERNEL);
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, prop);
> +
> +	/* Add a new property - should pass*/
> +	prop->name = "new-property";
> +	prop->value = "new-property-data";
> +	prop->length = strlen(prop->value) + 1;
> +	KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop), 0,
> +			    "Adding a new property failed\n");
> +
> +	/* Try to add an existing property - should fail */
> +	prop++;
> +	prop->name = "new-property";
> +	prop->value = "new-property-data-should-fail";
> +	prop->length = strlen(prop->value) + 1;
> +	KUNIT_EXPECT_NE_MSG(test, of_add_property(np, prop), 0,
> +			    "Adding an existing property should have failed\n");
> +
> +	/* Try to modify an existing property - should pass */
> +	prop->value = "modify-property-data-should-pass";
> +	prop->length = strlen(prop->value) + 1;
> +	KUNIT_EXPECT_EQ_MSG(
> +		test, of_update_property(np, prop), 0,
> +		"Updating an existing property should have passed\n");
> +
> +	/* Try to modify non-existent property - should pass*/
> +	prop++;
> +	prop->name = "modify-property";
> +	prop->value = "modify-missing-property-data-should-pass";
> +	prop->length = strlen(prop->value) + 1;
> +	KUNIT_EXPECT_EQ_MSG(test, of_update_property(np, prop), 0,
> +			    "Updating a missing property should have passed\n");
> +
> +	/* Remove property - should pass */
> +	KUNIT_EXPECT_EQ_MSG(test, of_remove_property(np, prop), 0,
> +			    "Removing a property should have passed\n");
> +
> +	/* Adding very large property - should pass */
> +	prop++;
> +	prop->name = "large-property-PAGE_SIZEx8";
> +	prop->length = PAGE_SIZE * 8;
> +	prop->value = kzalloc(prop->length, GFP_KERNEL);
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, prop->value);
> +	KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop), 0,
> +			    "Adding a large property should have passed\n");
> +}
> +
> +static int of_test_init(struct kunit *test)
> +{
> +	/* adding data for unittest */
> +	KUNIT_ASSERT_EQ(test, 0, unittest_data_add());
> +
> +	if (!of_aliases)
> +		of_aliases = of_find_node_by_path("/aliases");
> +
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, of_find_node_by_path(
> +			"/testcase-data/phandle-tests/consumer-a"));
> +
> +	return 0;
> +}
> +
> +static struct kunit_case of_test_cases[] = {
> +	KUNIT_CASE(of_unittest_find_node_by_name),
> +	KUNIT_CASE(of_unittest_dynamic),
> +	{},
> +};
> +
> +static struct kunit_module of_test_module = {
> +	.name = "of-base-test",
> +	.init = of_test_init,
> +	.test_cases = of_test_cases,
> +};
> +module_test(of_test_module);
> diff --git a/drivers/of/test-common.c b/drivers/of/test-common.c
> new file mode 100644
> index 0000000000000..4c9a5f3b82f7d
> --- /dev/null
> +++ b/drivers/of/test-common.c
> @@ -0,0 +1,175 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Common code to be used by unit tests.
> + */
> +#include "test-common.h"
> +
> +#include <linux/of_fdt.h>
> +#include <linux/slab.h>
> +
> +#include "of_private.h"
> +
> +/**
> + *	update_node_properties - adds the properties
> + *	of np into dup node (present in live tree) and
> + *	updates parent of children of np to dup.
> + *
> + *	@np:	node whose properties are being added to the live tree
> + *	@dup:	node present in live tree to be updated
> + */
> +static void update_node_properties(struct device_node *np,
> +					struct device_node *dup)
> +{
> +	struct property *prop;
> +	struct property *save_next;
> +	struct device_node *child;
> +	int ret;
> +
> +	for_each_child_of_node(np, child)
> +		child->parent = dup;
> +
> +	/*
> +	 * "unittest internal error: unable to add testdata property"
> +	 *
> +	 *    If this message reports a property in node '/__symbols__' then
> +	 *    the respective unittest overlay contains a label that has the
> +	 *    same name as a label in the live devicetree.  The label will
> +	 *    be in the live devicetree only if the devicetree source was
> +	 *    compiled with the '-@' option.  If you encounter this error,
> +	 *    please consider renaming __all__ of the labels in the unittest
> +	 *    overlay dts files with an odd prefix that is unlikely to be
> +	 *    used in a real devicetree.
> +	 */
> +
> +	/*
> +	 * open code for_each_property_of_node() because of_add_property()
> +	 * sets prop->next to NULL
> +	 */
> +	for (prop = np->properties; prop != NULL; prop = save_next) {
> +		save_next = prop->next;
> +		ret = of_add_property(dup, prop);
> +		if (ret)
> +			pr_err("unittest internal error: unable to add testdata property %pOF/%s",
> +			       np, prop->name);
> +	}
> +}
> +
> +/**
> + *	attach_node_and_children - attaches nodes
> + *	and its children to live tree
> + *
> + *	@np:	Node to attach to live tree
> + */
> +static void attach_node_and_children(struct device_node *np)
> +{
> +	struct device_node *next, *dup, *child;
> +	unsigned long flags;
> +	const char *full_name;
> +
> +	full_name = kasprintf(GFP_KERNEL, "%pOF", np);
> +
> +	if (!strcmp(full_name, "/__local_fixups__") ||
> +	    !strcmp(full_name, "/__fixups__"))
> +		return;
> +
> +	dup = of_find_node_by_path(full_name);
> +	kfree(full_name);
> +	if (dup) {
> +		update_node_properties(np, dup);
> +		return;
> +	}
> +
> +	child = np->child;
> +	np->child = NULL;
> +
> +	mutex_lock(&of_mutex);
> +	raw_spin_lock_irqsave(&devtree_lock, flags);
> +	np->sibling = np->parent->child;
> +	np->parent->child = np;
> +	of_node_clear_flag(np, OF_DETACHED);
> +	raw_spin_unlock_irqrestore(&devtree_lock, flags);
> +
> +	__of_attach_node_sysfs(np);
> +	mutex_unlock(&of_mutex);
> +
> +	while (child) {
> +		next = child->sibling;
> +		attach_node_and_children(child);
> +		child = next;
> +	}
> +}
> +
> +/**
> + *	unittest_data_add - Reads, copies data from
> + *	linked tree and attaches it to the live tree
> + */
> +int unittest_data_add(void)
> +{
> +	void *unittest_data;
> +	struct device_node *unittest_data_node, *np;
> +	/*
> +	 * __dtb_testcases_begin[] and __dtb_testcases_end[] are magically
> +	 * created by cmd_dt_S_dtb in scripts/Makefile.lib
> +	 */
> +	extern uint8_t __dtb_testcases_begin[];
> +	extern uint8_t __dtb_testcases_end[];
> +	const int size = __dtb_testcases_end - __dtb_testcases_begin;
> +	int rc;
> +
> +	if (!size) {
> +		pr_warn("%s: No testcase data to attach; not running tests\n",
> +			__func__);
> +		return -ENODATA;
> +	}
> +
> +	/* creating copy */
> +	unittest_data = kmemdup(__dtb_testcases_begin, size, GFP_KERNEL);
> +
> +	if (!unittest_data) {
> +		pr_warn("%s: Failed to allocate memory for unittest_data; "
> +			"not running tests\n", __func__);
> +		return -ENOMEM;
> +	}
> +	of_fdt_unflatten_tree(unittest_data, NULL, &unittest_data_node);
> +	if (!unittest_data_node) {
> +		pr_warn("%s: No tree to attach; not running tests\n", __func__);
> +		return -ENODATA;
> +	}
> +
> +	/*
> +	 * This lock normally encloses of_resolve_phandles()
> +	 */
> +	of_overlay_mutex_lock();
> +
> +	rc = of_resolve_phandles(unittest_data_node);
> +	if (rc) {
> +		pr_err("%s: Failed to resolve phandles (rc=%i)\n", __func__, rc);
> +		of_overlay_mutex_unlock();
> +		return -EINVAL;
> +	}
> +
> +	if (!of_root) {
> +		of_root = unittest_data_node;
> +		for_each_of_allnodes(np)
> +			__of_attach_node_sysfs(np);
> +		of_aliases = of_find_node_by_path("/aliases");
> +		of_chosen = of_find_node_by_path("/chosen");
> +		of_overlay_mutex_unlock();
> +		return 0;
> +	}
> +
> +	/* attach the sub-tree to live tree */
> +	np = unittest_data_node->child;
> +	while (np) {
> +		struct device_node *next = np->sibling;
> +
> +		np->parent = of_root;
> +		attach_node_and_children(np);
> +		np = next;
> +	}
> +
> +	of_overlay_mutex_unlock();
> +
> +	return 0;
> +}
> +
> diff --git a/drivers/of/test-common.h b/drivers/of/test-common.h
> new file mode 100644
> index 0000000000000..a35484406bbf1
> --- /dev/null
> +++ b/drivers/of/test-common.h
> @@ -0,0 +1,16 @@
> +/* SPDX-License-Identifier: GPL-2.0+ */
> +/*
> + * Common code to be used by unit tests.
> + */
> +#ifndef _LINUX_OF_TEST_COMMON_H
> +#define _LINUX_OF_TEST_COMMON_H
> +
> +#include <linux/of.h>
> +
> +/**
> + *	unittest_data_add - Reads, copies data from
> + *	linked tree and attaches it to the live tree
> + */
> +int unittest_data_add(void);
> +
> +#endif /* _LINUX_OF_TEST_COMMON_H */
> diff --git a/drivers/of/unittest.c b/drivers/of/unittest.c
> index 96de69ccb3e63..05a2610d0be7f 100644
> --- a/drivers/of/unittest.c
> +++ b/drivers/of/unittest.c
> @@ -29,184 +29,7 @@
>  #include <kunit/test.h>
>  
>  #include "of_private.h"
> -
> -static void of_unittest_find_node_by_name(struct kunit *test)
> -{
> -	struct device_node *np;
> -	const char *options, *name;
> -
> -	np = of_find_node_by_path("/testcase-data");
> -	name = kasprintf(GFP_KERNEL, "%pOF", np);
> -	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> -	KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
> -			       "find /testcase-data failed\n");
> -	of_node_put(np);
> -	kfree(name);
> -
> -	/* Test if trailing '/' works */
> -	KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("/testcase-data/"), NULL,
> -			    "trailing '/' on /testcase-data/ should fail\n");
> -
> -	np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
> -	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> -	name = kasprintf(GFP_KERNEL, "%pOF", np);
> -	KUNIT_EXPECT_STREQ_MSG(
> -		test, "/testcase-data/phandle-tests/consumer-a", name,
> -		"find /testcase-data/phandle-tests/consumer-a failed\n");
> -	of_node_put(np);
> -	kfree(name);
> -
> -	np = of_find_node_by_path("testcase-alias");
> -	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> -	name = kasprintf(GFP_KERNEL, "%pOF", np);
> -	KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
> -			       "find testcase-alias failed\n");
> -	of_node_put(np);
> -	kfree(name);
> -
> -	/* Test if trailing '/' works on aliases */
> -	KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("testcase-alias/"), NULL,
> -			    "trailing '/' on testcase-alias/ should fail\n");
> -
> -	np = of_find_node_by_path("testcase-alias/phandle-tests/consumer-a");
> -	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> -	name = kasprintf(GFP_KERNEL, "%pOF", np);
> -	KUNIT_EXPECT_STREQ_MSG(
> -		test, "/testcase-data/phandle-tests/consumer-a", name,
> -		"find testcase-alias/phandle-tests/consumer-a failed\n");
> -	of_node_put(np);
> -	kfree(name);
> -
> -	KUNIT_EXPECT_EQ_MSG(
> -		test,
> -		np = of_find_node_by_path("/testcase-data/missing-path"), NULL,
> -		"non-existent path returned node %pOF\n", np);
> -	of_node_put(np);
> -
> -	KUNIT_EXPECT_EQ_MSG(
> -		test, np = of_find_node_by_path("missing-alias"), NULL,
> -		"non-existent alias returned node %pOF\n", np);
> -	of_node_put(np);
> -
> -	KUNIT_EXPECT_EQ_MSG(
> -		test,
> -		np = of_find_node_by_path("testcase-alias/missing-path"), NULL,
> -		"non-existent alias with relative path returned node %pOF\n",
> -		np);
> -	of_node_put(np);
> -
> -	np = of_find_node_opts_by_path("/testcase-data:testoption", &options);
> -	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> -	KUNIT_EXPECT_STREQ_MSG(test, "testoption", options,
> -			       "option path test failed\n");
> -	of_node_put(np);
> -
> -	np = of_find_node_opts_by_path("/testcase-data:test/option", &options);
> -	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> -	KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
> -			       "option path test, subcase #1 failed\n");
> -	of_node_put(np);
> -
> -	np = of_find_node_opts_by_path("/testcase-data/testcase-device1:test/option", &options);
> -	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> -	KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
> -			       "option path test, subcase #2 failed\n");
> -	of_node_put(np);
> -
> -	np = of_find_node_opts_by_path("/testcase-data:testoption", NULL);
> -	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(test, np,
> -					 "NULL option path test failed\n");
> -	of_node_put(np);
> -
> -	np = of_find_node_opts_by_path("testcase-alias:testaliasoption",
> -				       &options);
> -	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> -	KUNIT_EXPECT_STREQ_MSG(test, "testaliasoption", options,
> -			       "option alias path test failed\n");
> -	of_node_put(np);
> -
> -	np = of_find_node_opts_by_path("testcase-alias:test/alias/option",
> -				       &options);
> -	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> -	KUNIT_EXPECT_STREQ_MSG(test, "test/alias/option", options,
> -			       "option alias path test, subcase #1 failed\n");
> -	of_node_put(np);
> -
> -	np = of_find_node_opts_by_path("testcase-alias:testaliasoption", NULL);
> -	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(
> -			test, np, "NULL option alias path test failed\n");
> -	of_node_put(np);
> -
> -	options = "testoption";
> -	np = of_find_node_opts_by_path("testcase-alias", &options);
> -	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> -	KUNIT_EXPECT_EQ_MSG(test, options, NULL,
> -			    "option clearing test failed\n");
> -	of_node_put(np);
> -
> -	options = "testoption";
> -	np = of_find_node_opts_by_path("/", &options);
> -	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> -	KUNIT_EXPECT_EQ_MSG(test, options, NULL,
> -			    "option clearing root node test failed\n");
> -	of_node_put(np);
> -}
> -
> -static void of_unittest_dynamic(struct kunit *test)
> -{
> -	struct device_node *np;
> -	struct property *prop;
> -
> -	np = of_find_node_by_path("/testcase-data");
> -	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> -
> -	/* Array of 4 properties for the purpose of testing */
> -	prop = kcalloc(4, sizeof(*prop), GFP_KERNEL);
> -	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, prop);
> -
> -	/* Add a new property - should pass*/
> -	prop->name = "new-property";
> -	prop->value = "new-property-data";
> -	prop->length = strlen(prop->value) + 1;
> -	KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop), 0,
> -			    "Adding a new property failed\n");
> -
> -	/* Try to add an existing property - should fail */
> -	prop++;
> -	prop->name = "new-property";
> -	prop->value = "new-property-data-should-fail";
> -	prop->length = strlen(prop->value) + 1;
> -	KUNIT_EXPECT_NE_MSG(test, of_add_property(np, prop), 0,
> -			    "Adding an existing property should have failed\n");
> -
> -	/* Try to modify an existing property - should pass */
> -	prop->value = "modify-property-data-should-pass";
> -	prop->length = strlen(prop->value) + 1;
> -	KUNIT_EXPECT_EQ_MSG(
> -		test, of_update_property(np, prop), 0,
> -		"Updating an existing property should have passed\n");
> -
> -	/* Try to modify non-existent property - should pass*/
> -	prop++;
> -	prop->name = "modify-property";
> -	prop->value = "modify-missing-property-data-should-pass";
> -	prop->length = strlen(prop->value) + 1;
> -	KUNIT_EXPECT_EQ_MSG(test, of_update_property(np, prop), 0,
> -			    "Updating a missing property should have passed\n");
> -
> -	/* Remove property - should pass */
> -	KUNIT_EXPECT_EQ_MSG(test, of_remove_property(np, prop), 0,
> -			    "Removing a property should have passed\n");
> -
> -	/* Adding very large property - should pass */
> -	prop++;
> -	prop->name = "large-property-PAGE_SIZEx8";
> -	prop->length = PAGE_SIZE * 8;
> -	prop->value = kzalloc(prop->length, GFP_KERNEL);
> -	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, prop->value);
> -	KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop), 0,
> -			    "Adding a large property should have passed\n");
> -}
> +#include "test-common.h"
>  
>  static int of_unittest_check_node_linkage(struct device_node *np)
>  {
> @@ -1177,170 +1000,6 @@ static void of_unittest_platform_populate(struct kunit *test)
>  	of_node_put(np);
>  }
>  
> -/**
> - *	update_node_properties - adds the properties
> - *	of np into dup node (present in live tree) and
> - *	updates parent of children of np to dup.
> - *
> - *	@np:	node whose properties are being added to the live tree
> - *	@dup:	node present in live tree to be updated
> - */
> -static void update_node_properties(struct device_node *np,
> -					struct device_node *dup)
> -{
> -	struct property *prop;
> -	struct property *save_next;
> -	struct device_node *child;
> -	int ret;
> -
> -	for_each_child_of_node(np, child)
> -		child->parent = dup;
> -
> -	/*
> -	 * "unittest internal error: unable to add testdata property"
> -	 *
> -	 *    If this message reports a property in node '/__symbols__' then
> -	 *    the respective unittest overlay contains a label that has the
> -	 *    same name as a label in the live devicetree.  The label will
> -	 *    be in the live devicetree only if the devicetree source was
> -	 *    compiled with the '-@' option.  If you encounter this error,
> -	 *    please consider renaming __all__ of the labels in the unittest
> -	 *    overlay dts files with an odd prefix that is unlikely to be
> -	 *    used in a real devicetree.
> -	 */
> -
> -	/*
> -	 * open code for_each_property_of_node() because of_add_property()
> -	 * sets prop->next to NULL
> -	 */
> -	for (prop = np->properties; prop != NULL; prop = save_next) {
> -		save_next = prop->next;
> -		ret = of_add_property(dup, prop);
> -		if (ret)
> -			pr_err("unittest internal error: unable to add testdata property %pOF/%s",
> -			       np, prop->name);
> -	}
> -}
> -
> -/**
> - *	attach_node_and_children - attaches nodes
> - *	and its children to live tree
> - *
> - *	@np:	Node to attach to live tree
> - */
> -static void attach_node_and_children(struct device_node *np)
> -{
> -	struct device_node *next, *dup, *child;
> -	unsigned long flags;
> -	const char *full_name;
> -
> -	full_name = kasprintf(GFP_KERNEL, "%pOF", np);
> -
> -	if (!strcmp(full_name, "/__local_fixups__") ||
> -	    !strcmp(full_name, "/__fixups__"))
> -		return;
> -
> -	dup = of_find_node_by_path(full_name);
> -	kfree(full_name);
> -	if (dup) {
> -		update_node_properties(np, dup);
> -		return;
> -	}
> -
> -	child = np->child;
> -	np->child = NULL;
> -
> -	mutex_lock(&of_mutex);
> -	raw_spin_lock_irqsave(&devtree_lock, flags);
> -	np->sibling = np->parent->child;
> -	np->parent->child = np;
> -	of_node_clear_flag(np, OF_DETACHED);
> -	raw_spin_unlock_irqrestore(&devtree_lock, flags);
> -
> -	__of_attach_node_sysfs(np);
> -	mutex_unlock(&of_mutex);
> -
> -	while (child) {
> -		next = child->sibling;
> -		attach_node_and_children(child);
> -		child = next;
> -	}
> -}
> -
> -/**
> - *	unittest_data_add - Reads, copies data from
> - *	linked tree and attaches it to the live tree
> - */
> -static int unittest_data_add(void)
> -{
> -	void *unittest_data;
> -	struct device_node *unittest_data_node, *np;
> -	/*
> -	 * __dtb_testcases_begin[] and __dtb_testcases_end[] are magically
> -	 * created by cmd_dt_S_dtb in scripts/Makefile.lib
> -	 */
> -	extern uint8_t __dtb_testcases_begin[];
> -	extern uint8_t __dtb_testcases_end[];
> -	const int size = __dtb_testcases_end - __dtb_testcases_begin;
> -	int rc;
> -
> -	if (!size) {
> -		pr_warn("%s: No testcase data to attach; not running tests\n",
> -			__func__);
> -		return -ENODATA;
> -	}
> -
> -	/* creating copy */
> -	unittest_data = kmemdup(__dtb_testcases_begin, size, GFP_KERNEL);
> -
> -	if (!unittest_data) {
> -		pr_warn("%s: Failed to allocate memory for unittest_data; "
> -			"not running tests\n", __func__);
> -		return -ENOMEM;
> -	}
> -	of_fdt_unflatten_tree(unittest_data, NULL, &unittest_data_node);
> -	if (!unittest_data_node) {
> -		pr_warn("%s: No tree to attach; not running tests\n", __func__);
> -		return -ENODATA;
> -	}
> -
> -	/*
> -	 * This lock normally encloses of_resolve_phandles()
> -	 */
> -	of_overlay_mutex_lock();
> -
> -	rc = of_resolve_phandles(unittest_data_node);
> -	if (rc) {
> -		pr_err("%s: Failed to resolve phandles (rc=%i)\n", __func__, rc);
> -		of_overlay_mutex_unlock();
> -		return -EINVAL;
> -	}
> -
> -	if (!of_root) {
> -		of_root = unittest_data_node;
> -		for_each_of_allnodes(np)
> -			__of_attach_node_sysfs(np);
> -		of_aliases = of_find_node_by_path("/aliases");
> -		of_chosen = of_find_node_by_path("/chosen");
> -		of_overlay_mutex_unlock();
> -		return 0;
> -	}
> -
> -	/* attach the sub-tree to live tree */
> -	np = unittest_data_node->child;
> -	while (np) {
> -		struct device_node *next = np->sibling;
> -
> -		np->parent = of_root;
> -		attach_node_and_children(np);
> -		np = next;
> -	}
> -
> -	of_overlay_mutex_unlock();
> -
> -	return 0;
> -}
> -
>  #ifdef CONFIG_OF_OVERLAY
>  static int overlay_data_apply(const char *overlay_name, int *overlay_id);
>  
> @@ -2563,8 +2222,6 @@ static int of_test_init(struct kunit *test)
>  static struct kunit_case of_test_cases[] = {
>  	KUNIT_CASE(of_unittest_check_tree_linkage),
>  	KUNIT_CASE(of_unittest_check_phandles),
> -	KUNIT_CASE(of_unittest_find_node_by_name),
> -	KUNIT_CASE(of_unittest_dynamic),
>  	KUNIT_CASE(of_unittest_parse_phandle_with_args),
>  	KUNIT_CASE(of_unittest_parse_phandle_with_args_map),
>  	KUNIT_CASE(of_unittest_printf),
> 


^ permalink raw reply	[flat|nested] 316+ messages in thread
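
As a reading aid for the patch quoted above: every split-out OF test file is meant to follow the same shape as base-test.c, pulling the shared fixture in through test-common.h. The sketch below is illustrative only and is not part of the patch; the file and symbol names (foo-test.c, of_unittest_example, of_example_test_module) are invented, while the KUnit names (KUNIT_CASE, struct kunit_case, struct kunit_module, module_test) are used exactly as they appear in this RFC series.

/*
 * Illustrative only -- not part of the patch.  A hypothetical
 * drivers/of/foo-test.c following the same shape as base-test.c from
 * the quoted diff: include the shared helper, attach the test data in
 * init, then register the cases with the KUnit core.
 */
#include <linux/of.h>

#include <kunit/test.h>

#include "test-common.h"

static void of_unittest_example(struct kunit *test)
{
	struct device_node *np;

	/* The data attached by unittest_data_add() is visible here. */
	np = of_find_node_by_path("/testcase-data");
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	of_node_put(np);
}

static int of_example_test_init(struct kunit *test)
{
	/* Attach the unittest device tree data before any case runs. */
	KUNIT_ASSERT_EQ(test, 0, unittest_data_add());
	return 0;
}

static struct kunit_case of_example_test_cases[] = {
	KUNIT_CASE(of_unittest_example),
	{},
};

static struct kunit_module of_example_test_module = {
	.name = "of-example-test",
	.init = of_example_test_init,
	.test_cases = of_example_test_cases,
};
module_test(of_example_test_module);

Wiring the new object into drivers/of/Makefile under CONFIG_OF_UNITTEST, as the patch does for base-test.o and test-common.o, is all that is needed for the KUnit core to pick the suite up.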

* [RFC v4 16/17] of: unittest: split out a couple of test cases from unittest
@ 2019-03-22  1:14     ` Frank Rowand
  0 siblings, 0 replies; 316+ messages in thread
From: frowand.list @ 2019-03-22  1:14 UTC (permalink / raw)


On 2/14/19 1:37 PM, Brendan Higgins wrote:
> Split out a couple of test cases that test features in base.c from the
> unittest.c monolith. The intention is that we will eventually split out
> all test cases and group them together based on what portion of device
> tree they test.

I still object to this patch.  I do not want this code scattered into
additional files.

-Frank


> 
> Signed-off-by: Brendan Higgins <brendanhiggins@google.com>
> ---
>  drivers/of/Makefile      |   2 +-
>  drivers/of/base-test.c   | 214 ++++++++++++++++++++++++
>  drivers/of/test-common.c | 175 ++++++++++++++++++++
>  drivers/of/test-common.h |  16 ++
>  drivers/of/unittest.c    | 345 +--------------------------------------
>  5 files changed, 407 insertions(+), 345 deletions(-)
>  create mode 100644 drivers/of/base-test.c
>  create mode 100644 drivers/of/test-common.c
>  create mode 100644 drivers/of/test-common.h
> 
> diff --git a/drivers/of/Makefile b/drivers/of/Makefile
> index 663a4af0cccd5..4a4bd527d586c 100644
> --- a/drivers/of/Makefile
> +++ b/drivers/of/Makefile
> @@ -8,7 +8,7 @@ obj-$(CONFIG_OF_PROMTREE) += pdt.o
>  obj-$(CONFIG_OF_ADDRESS)  += address.o
>  obj-$(CONFIG_OF_IRQ)    += irq.o
>  obj-$(CONFIG_OF_NET)	+= of_net.o
> -obj-$(CONFIG_OF_UNITTEST) += unittest.o
> +obj-$(CONFIG_OF_UNITTEST) += unittest.o base-test.o test-common.o
>  obj-$(CONFIG_OF_MDIO)	+= of_mdio.o
>  obj-$(CONFIG_OF_RESERVED_MEM) += of_reserved_mem.o
>  obj-$(CONFIG_OF_RESOLVE)  += resolver.o
> diff --git a/drivers/of/base-test.c b/drivers/of/base-test.c
> new file mode 100644
> index 0000000000000..3d3f4f1b74800
> --- /dev/null
> +++ b/drivers/of/base-test.c
> @@ -0,0 +1,214 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Unit tests for functions defined in base.c.
> + */
> +#include <linux/of.h>
> +
> +#include <kunit/test.h>
> +
> +#include "test-common.h"
> +
> +static void of_unittest_find_node_by_name(struct kunit *test)
> +{
> +	struct device_node *np;
> +	const char *options, *name;
> +
> +	np = of_find_node_by_path("/testcase-data");
> +	name = kasprintf(GFP_KERNEL, "%pOF", np);
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> +	KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
> +			       "find /testcase-data failed\n");
> +	of_node_put(np);
> +	kfree(name);
> +
> +	/* Test if trailing '/' works */
> +	KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("/testcase-data/"), NULL,
> +			    "trailing '/' on /testcase-data/ should fail\n");
> +
> +	np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> +	name = kasprintf(GFP_KERNEL, "%pOF", np);
> +	KUNIT_EXPECT_STREQ_MSG(
> +		test, "/testcase-data/phandle-tests/consumer-a", name,
> +		"find /testcase-data/phandle-tests/consumer-a failed\n");
> +	of_node_put(np);
> +	kfree(name);
> +
> +	np = of_find_node_by_path("testcase-alias");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> +	name = kasprintf(GFP_KERNEL, "%pOF", np);
> +	KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
> +			       "find testcase-alias failed\n");
> +	of_node_put(np);
> +	kfree(name);
> +
> +	/* Test if trailing '/' works on aliases */
> +	KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("testcase-alias/"), NULL,
> +			    "trailing '/' on testcase-alias/ should fail\n");
> +
> +	np = of_find_node_by_path("testcase-alias/phandle-tests/consumer-a");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> +	name = kasprintf(GFP_KERNEL, "%pOF", np);
> +	KUNIT_EXPECT_STREQ_MSG(
> +		test, "/testcase-data/phandle-tests/consumer-a", name,
> +		"find testcase-alias/phandle-tests/consumer-a failed\n");
> +	of_node_put(np);
> +	kfree(name);
> +
> +	KUNIT_EXPECT_EQ_MSG(
> +		test,
> +		np = of_find_node_by_path("/testcase-data/missing-path"), NULL,
> +		"non-existent path returned node %pOF\n", np);
> +	of_node_put(np);
> +
> +	KUNIT_EXPECT_EQ_MSG(
> +		test, np = of_find_node_by_path("missing-alias"), NULL,
> +		"non-existent alias returned node %pOF\n", np);
> +	of_node_put(np);
> +
> +	KUNIT_EXPECT_EQ_MSG(
> +		test,
> +		np = of_find_node_by_path("testcase-alias/missing-path"), NULL,
> +		"non-existent alias with relative path returned node %pOF\n",
> +		np);
> +	of_node_put(np);
> +
> +	np = of_find_node_opts_by_path("/testcase-data:testoption", &options);
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> +	KUNIT_EXPECT_STREQ_MSG(test, "testoption", options,
> +			       "option path test failed\n");
> +	of_node_put(np);
> +
> +	np = of_find_node_opts_by_path("/testcase-data:test/option", &options);
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> +	KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
> +			       "option path test, subcase #1 failed\n");
> +	of_node_put(np);
> +
> +	np = of_find_node_opts_by_path("/testcase-data/testcase-device1:test/option", &options);
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> +	KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
> +			       "option path test, subcase #2 failed\n");
> +	of_node_put(np);
> +
> +	np = of_find_node_opts_by_path("/testcase-data:testoption", NULL);
> +	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(test, np,
> +					 "NULL option path test failed\n");
> +	of_node_put(np);
> +
> +	np = of_find_node_opts_by_path("testcase-alias:testaliasoption",
> +				       &options);
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> +	KUNIT_EXPECT_STREQ_MSG(test, "testaliasoption", options,
> +			       "option alias path test failed\n");
> +	of_node_put(np);
> +
> +	np = of_find_node_opts_by_path("testcase-alias:test/alias/option",
> +				       &options);
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> +	KUNIT_EXPECT_STREQ_MSG(test, "test/alias/option", options,
> +			       "option alias path test, subcase #1 failed\n");
> +	of_node_put(np);
> +
> +	np = of_find_node_opts_by_path("testcase-alias:testaliasoption", NULL);
> +	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(
> +			test, np, "NULL option alias path test failed\n");
> +	of_node_put(np);
> +
> +	options = "testoption";
> +	np = of_find_node_opts_by_path("testcase-alias", &options);
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> +	KUNIT_EXPECT_EQ_MSG(test, options, NULL,
> +			    "option clearing test failed\n");
> +	of_node_put(np);
> +
> +	options = "testoption";
> +	np = of_find_node_opts_by_path("/", &options);
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> +	KUNIT_EXPECT_EQ_MSG(test, options, NULL,
> +			    "option clearing root node test failed\n");
> +	of_node_put(np);
> +}
> +
> +static void of_unittest_dynamic(struct kunit *test)
> +{
> +	struct device_node *np;
> +	struct property *prop;
> +
> +	np = of_find_node_by_path("/testcase-data");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> +
> +	/* Array of 4 properties for the purpose of testing */
> +	prop = kcalloc(4, sizeof(*prop), GFP_KERNEL);
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, prop);
> +
> +	/* Add a new property - should pass*/
> +	prop->name = "new-property";
> +	prop->value = "new-property-data";
> +	prop->length = strlen(prop->value) + 1;
> +	KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop), 0,
> +			    "Adding a new property failed\n");
> +
> +	/* Try to add an existing property - should fail */
> +	prop++;
> +	prop->name = "new-property";
> +	prop->value = "new-property-data-should-fail";
> +	prop->length = strlen(prop->value) + 1;
> +	KUNIT_EXPECT_NE_MSG(test, of_add_property(np, prop), 0,
> +			    "Adding an existing property should have failed\n");
> +
> +	/* Try to modify an existing property - should pass */
> +	prop->value = "modify-property-data-should-pass";
> +	prop->length = strlen(prop->value) + 1;
> +	KUNIT_EXPECT_EQ_MSG(
> +		test, of_update_property(np, prop), 0,
> +		"Updating an existing property should have passed\n");
> +
> +	/* Try to modify non-existent property - should pass*/
> +	prop++;
> +	prop->name = "modify-property";
> +	prop->value = "modify-missing-property-data-should-pass";
> +	prop->length = strlen(prop->value) + 1;
> +	KUNIT_EXPECT_EQ_MSG(test, of_update_property(np, prop), 0,
> +			    "Updating a missing property should have passed\n");
> +
> +	/* Remove property - should pass */
> +	KUNIT_EXPECT_EQ_MSG(test, of_remove_property(np, prop), 0,
> +			    "Removing a property should have passed\n");
> +
> +	/* Adding very large property - should pass */
> +	prop++;
> +	prop->name = "large-property-PAGE_SIZEx8";
> +	prop->length = PAGE_SIZE * 8;
> +	prop->value = kzalloc(prop->length, GFP_KERNEL);
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, prop->value);
> +	KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop), 0,
> +			    "Adding a large property should have passed\n");
> +}
> +
> +static int of_test_init(struct kunit *test)
> +{
> +	/* adding data for unittest */
> +	KUNIT_ASSERT_EQ(test, 0, unittest_data_add());
> +
> +	if (!of_aliases)
> +		of_aliases = of_find_node_by_path("/aliases");
> +
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, of_find_node_by_path(
> +			"/testcase-data/phandle-tests/consumer-a"));
> +
> +	return 0;
> +}
> +
> +static struct kunit_case of_test_cases[] = {
> +	KUNIT_CASE(of_unittest_find_node_by_name),
> +	KUNIT_CASE(of_unittest_dynamic),
> +	{},
> +};
> +
> +static struct kunit_module of_test_module = {
> +	.name = "of-base-test",
> +	.init = of_test_init,
> +	.test_cases = of_test_cases,
> +};
> +module_test(of_test_module);
> diff --git a/drivers/of/test-common.c b/drivers/of/test-common.c
> new file mode 100644
> index 0000000000000..4c9a5f3b82f7d
> --- /dev/null
> +++ b/drivers/of/test-common.c
> @@ -0,0 +1,175 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Common code to be used by unit tests.
> + */
> +#include "test-common.h"
> +
> +#include <linux/of_fdt.h>
> +#include <linux/slab.h>
> +
> +#include "of_private.h"
> +
> +/**
> + *	update_node_properties - adds the properties
> + *	of np into dup node (present in live tree) and
> + *	updates parent of children of np to dup.
> + *
> + *	@np:	node whose properties are being added to the live tree
> + *	@dup:	node present in live tree to be updated
> + */
> +static void update_node_properties(struct device_node *np,
> +					struct device_node *dup)
> +{
> +	struct property *prop;
> +	struct property *save_next;
> +	struct device_node *child;
> +	int ret;
> +
> +	for_each_child_of_node(np, child)
> +		child->parent = dup;
> +
> +	/*
> +	 * "unittest internal error: unable to add testdata property"
> +	 *
> +	 *    If this message reports a property in node '/__symbols__' then
> +	 *    the respective unittest overlay contains a label that has the
> +	 *    same name as a label in the live devicetree.  The label will
> +	 *    be in the live devicetree only if the devicetree source was
> +	 *    compiled with the '-@' option.  If you encounter this error,
> +	 *    please consider renaming __all__ of the labels in the unittest
> +	 *    overlay dts files with an odd prefix that is unlikely to be
> +	 *    used in a real devicetree.
> +	 */
> +
> +	/*
> +	 * open code for_each_property_of_node() because of_add_property()
> +	 * sets prop->next to NULL
> +	 */
> +	for (prop = np->properties; prop != NULL; prop = save_next) {
> +		save_next = prop->next;
> +		ret = of_add_property(dup, prop);
> +		if (ret)
> +			pr_err("unittest internal error: unable to add testdata property %pOF/%s",
> +			       np, prop->name);
> +	}
> +}
> +
> +/**
> + *	attach_node_and_children - attaches nodes
> + *	and its children to live tree
> + *
> + *	@np:	Node to attach to live tree
> + */
> +static void attach_node_and_children(struct device_node *np)
> +{
> +	struct device_node *next, *dup, *child;
> +	unsigned long flags;
> +	const char *full_name;
> +
> +	full_name = kasprintf(GFP_KERNEL, "%pOF", np);
> +
> +	if (!strcmp(full_name, "/__local_fixups__") ||
> +	    !strcmp(full_name, "/__fixups__"))
> +		return;
> +
> +	dup = of_find_node_by_path(full_name);
> +	kfree(full_name);
> +	if (dup) {
> +		update_node_properties(np, dup);
> +		return;
> +	}
> +
> +	child = np->child;
> +	np->child = NULL;
> +
> +	mutex_lock(&of_mutex);
> +	raw_spin_lock_irqsave(&devtree_lock, flags);
> +	np->sibling = np->parent->child;
> +	np->parent->child = np;
> +	of_node_clear_flag(np, OF_DETACHED);
> +	raw_spin_unlock_irqrestore(&devtree_lock, flags);
> +
> +	__of_attach_node_sysfs(np);
> +	mutex_unlock(&of_mutex);
> +
> +	while (child) {
> +		next = child->sibling;
> +		attach_node_and_children(child);
> +		child = next;
> +	}
> +}
> +
> +/**
> + *	unittest_data_add - Reads, copies data from
> + *	linked tree and attaches it to the live tree
> + */
> +int unittest_data_add(void)
> +{
> +	void *unittest_data;
> +	struct device_node *unittest_data_node, *np;
> +	/*
> +	 * __dtb_testcases_begin[] and __dtb_testcases_end[] are magically
> +	 * created by cmd_dt_S_dtb in scripts/Makefile.lib
> +	 */
> +	extern uint8_t __dtb_testcases_begin[];
> +	extern uint8_t __dtb_testcases_end[];
> +	const int size = __dtb_testcases_end - __dtb_testcases_begin;
> +	int rc;
> +
> +	if (!size) {
> +		pr_warn("%s: No testcase data to attach; not running tests\n",
> +			__func__);
> +		return -ENODATA;
> +	}
> +
> +	/* creating copy */
> +	unittest_data = kmemdup(__dtb_testcases_begin, size, GFP_KERNEL);
> +
> +	if (!unittest_data) {
> +		pr_warn("%s: Failed to allocate memory for unittest_data; "
> +			"not running tests\n", __func__);
> +		return -ENOMEM;
> +	}
> +	of_fdt_unflatten_tree(unittest_data, NULL, &unittest_data_node);
> +	if (!unittest_data_node) {
> +		pr_warn("%s: No tree to attach; not running tests\n", __func__);
> +		return -ENODATA;
> +	}
> +
> +	/*
> +	 * This lock normally encloses of_resolve_phandles()
> +	 */
> +	of_overlay_mutex_lock();
> +
> +	rc = of_resolve_phandles(unittest_data_node);
> +	if (rc) {
> +		pr_err("%s: Failed to resolve phandles (rc=%i)\n", __func__, rc);
> +		of_overlay_mutex_unlock();
> +		return -EINVAL;
> +	}
> +
> +	if (!of_root) {
> +		of_root = unittest_data_node;
> +		for_each_of_allnodes(np)
> +			__of_attach_node_sysfs(np);
> +		of_aliases = of_find_node_by_path("/aliases");
> +		of_chosen = of_find_node_by_path("/chosen");
> +		of_overlay_mutex_unlock();
> +		return 0;
> +	}
> +
> +	/* attach the sub-tree to live tree */
> +	np = unittest_data_node->child;
> +	while (np) {
> +		struct device_node *next = np->sibling;
> +
> +		np->parent = of_root;
> +		attach_node_and_children(np);
> +		np = next;
> +	}
> +
> +	of_overlay_mutex_unlock();
> +
> +	return 0;
> +}
> +
> diff --git a/drivers/of/test-common.h b/drivers/of/test-common.h
> new file mode 100644
> index 0000000000000..a35484406bbf1
> --- /dev/null
> +++ b/drivers/of/test-common.h
> @@ -0,0 +1,16 @@
> +/* SPDX-License-Identifier: GPL-2.0+ */
> +/*
> + * Common code to be used by unit tests.
> + */
> +#ifndef _LINUX_OF_TEST_COMMON_H
> +#define _LINUX_OF_TEST_COMMON_H
> +
> +#include <linux/of.h>
> +
> +/**
> + *	unittest_data_add - Reads, copies data from
> + *	linked tree and attaches it to the live tree
> + */
> +int unittest_data_add(void);
> +
> +#endif /* _LINUX_OF_TEST_COMMON_H */
> diff --git a/drivers/of/unittest.c b/drivers/of/unittest.c
> index 96de69ccb3e63..05a2610d0be7f 100644
> --- a/drivers/of/unittest.c
> +++ b/drivers/of/unittest.c
> @@ -29,184 +29,7 @@
>  #include <kunit/test.h>
>  
>  #include "of_private.h"
> -
> -static void of_unittest_find_node_by_name(struct kunit *test)
> -{
> -	struct device_node *np;
> -	const char *options, *name;
> -
> -	np = of_find_node_by_path("/testcase-data");
> -	name = kasprintf(GFP_KERNEL, "%pOF", np);
> -	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> -	KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
> -			       "find /testcase-data failed\n");
> -	of_node_put(np);
> -	kfree(name);
> -
> -	/* Test if trailing '/' works */
> -	KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("/testcase-data/"), NULL,
> -			    "trailing '/' on /testcase-data/ should fail\n");
> -
> -	np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
> -	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> -	name = kasprintf(GFP_KERNEL, "%pOF", np);
> -	KUNIT_EXPECT_STREQ_MSG(
> -		test, "/testcase-data/phandle-tests/consumer-a", name,
> -		"find /testcase-data/phandle-tests/consumer-a failed\n");
> -	of_node_put(np);
> -	kfree(name);
> -
> -	np = of_find_node_by_path("testcase-alias");
> -	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> -	name = kasprintf(GFP_KERNEL, "%pOF", np);
> -	KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
> -			       "find testcase-alias failed\n");
> -	of_node_put(np);
> -	kfree(name);
> -
> -	/* Test if trailing '/' works on aliases */
> -	KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("testcase-alias/"), NULL,
> -			    "trailing '/' on testcase-alias/ should fail\n");
> -
> -	np = of_find_node_by_path("testcase-alias/phandle-tests/consumer-a");
> -	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> -	name = kasprintf(GFP_KERNEL, "%pOF", np);
> -	KUNIT_EXPECT_STREQ_MSG(
> -		test, "/testcase-data/phandle-tests/consumer-a", name,
> -		"find testcase-alias/phandle-tests/consumer-a failed\n");
> -	of_node_put(np);
> -	kfree(name);
> -
> -	KUNIT_EXPECT_EQ_MSG(
> -		test,
> -		np = of_find_node_by_path("/testcase-data/missing-path"), NULL,
> -		"non-existent path returned node %pOF\n", np);
> -	of_node_put(np);
> -
> -	KUNIT_EXPECT_EQ_MSG(
> -		test, np = of_find_node_by_path("missing-alias"), NULL,
> -		"non-existent alias returned node %pOF\n", np);
> -	of_node_put(np);
> -
> -	KUNIT_EXPECT_EQ_MSG(
> -		test,
> -		np = of_find_node_by_path("testcase-alias/missing-path"), NULL,
> -		"non-existent alias with relative path returned node %pOF\n",
> -		np);
> -	of_node_put(np);
> -
> -	np = of_find_node_opts_by_path("/testcase-data:testoption", &options);
> -	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> -	KUNIT_EXPECT_STREQ_MSG(test, "testoption", options,
> -			       "option path test failed\n");
> -	of_node_put(np);
> -
> -	np = of_find_node_opts_by_path("/testcase-data:test/option", &options);
> -	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> -	KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
> -			       "option path test, subcase #1 failed\n");
> -	of_node_put(np);
> -
> -	np = of_find_node_opts_by_path("/testcase-data/testcase-device1:test/option", &options);
> -	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> -	KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
> -			       "option path test, subcase #2 failed\n");
> -	of_node_put(np);
> -
> -	np = of_find_node_opts_by_path("/testcase-data:testoption", NULL);
> -	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(test, np,
> -					 "NULL option path test failed\n");
> -	of_node_put(np);
> -
> -	np = of_find_node_opts_by_path("testcase-alias:testaliasoption",
> -				       &options);
> -	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> -	KUNIT_EXPECT_STREQ_MSG(test, "testaliasoption", options,
> -			       "option alias path test failed\n");
> -	of_node_put(np);
> -
> -	np = of_find_node_opts_by_path("testcase-alias:test/alias/option",
> -				       &options);
> -	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> -	KUNIT_EXPECT_STREQ_MSG(test, "test/alias/option", options,
> -			       "option alias path test, subcase #1 failed\n");
> -	of_node_put(np);
> -
> -	np = of_find_node_opts_by_path("testcase-alias:testaliasoption", NULL);
> -	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(
> -			test, np, "NULL option alias path test failed\n");
> -	of_node_put(np);
> -
> -	options = "testoption";
> -	np = of_find_node_opts_by_path("testcase-alias", &options);
> -	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> -	KUNIT_EXPECT_EQ_MSG(test, options, NULL,
> -			    "option clearing test failed\n");
> -	of_node_put(np);
> -
> -	options = "testoption";
> -	np = of_find_node_opts_by_path("/", &options);
> -	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> -	KUNIT_EXPECT_EQ_MSG(test, options, NULL,
> -			    "option clearing root node test failed\n");
> -	of_node_put(np);
> -}
> -
> -static void of_unittest_dynamic(struct kunit *test)
> -{
> -	struct device_node *np;
> -	struct property *prop;
> -
> -	np = of_find_node_by_path("/testcase-data");
> -	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> -
> -	/* Array of 4 properties for the purpose of testing */
> -	prop = kcalloc(4, sizeof(*prop), GFP_KERNEL);
> -	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, prop);
> -
> -	/* Add a new property - should pass*/
> -	prop->name = "new-property";
> -	prop->value = "new-property-data";
> -	prop->length = strlen(prop->value) + 1;
> -	KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop), 0,
> -			    "Adding a new property failed\n");
> -
> -	/* Try to add an existing property - should fail */
> -	prop++;
> -	prop->name = "new-property";
> -	prop->value = "new-property-data-should-fail";
> -	prop->length = strlen(prop->value) + 1;
> -	KUNIT_EXPECT_NE_MSG(test, of_add_property(np, prop), 0,
> -			    "Adding an existing property should have failed\n");
> -
> -	/* Try to modify an existing property - should pass */
> -	prop->value = "modify-property-data-should-pass";
> -	prop->length = strlen(prop->value) + 1;
> -	KUNIT_EXPECT_EQ_MSG(
> -		test, of_update_property(np, prop), 0,
> -		"Updating an existing property should have passed\n");
> -
> -	/* Try to modify non-existent property - should pass*/
> -	prop++;
> -	prop->name = "modify-property";
> -	prop->value = "modify-missing-property-data-should-pass";
> -	prop->length = strlen(prop->value) + 1;
> -	KUNIT_EXPECT_EQ_MSG(test, of_update_property(np, prop), 0,
> -			    "Updating a missing property should have passed\n");
> -
> -	/* Remove property - should pass */
> -	KUNIT_EXPECT_EQ_MSG(test, of_remove_property(np, prop), 0,
> -			    "Removing a property should have passed\n");
> -
> -	/* Adding very large property - should pass */
> -	prop++;
> -	prop->name = "large-property-PAGE_SIZEx8";
> -	prop->length = PAGE_SIZE * 8;
> -	prop->value = kzalloc(prop->length, GFP_KERNEL);
> -	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, prop->value);
> -	KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop), 0,
> -			    "Adding a large property should have passed\n");
> -}
> +#include "test-common.h"
>  
>  static int of_unittest_check_node_linkage(struct device_node *np)
>  {
> @@ -1177,170 +1000,6 @@ static void of_unittest_platform_populate(struct kunit *test)
>  	of_node_put(np);
>  }
>  
> -/**
> - *	update_node_properties - adds the properties
> - *	of np into dup node (present in live tree) and
> - *	updates parent of children of np to dup.
> - *
> - *	@np:	node whose properties are being added to the live tree
> - *	@dup:	node present in live tree to be updated
> - */
> -static void update_node_properties(struct device_node *np,
> -					struct device_node *dup)
> -{
> -	struct property *prop;
> -	struct property *save_next;
> -	struct device_node *child;
> -	int ret;
> -
> -	for_each_child_of_node(np, child)
> -		child->parent = dup;
> -
> -	/*
> -	 * "unittest internal error: unable to add testdata property"
> -	 *
> -	 *    If this message reports a property in node '/__symbols__' then
> -	 *    the respective unittest overlay contains a label that has the
> -	 *    same name as a label in the live devicetree.  The label will
> -	 *    be in the live devicetree only if the devicetree source was
> -	 *    compiled with the '-@' option.  If you encounter this error,
> -	 *    please consider renaming __all__ of the labels in the unittest
> -	 *    overlay dts files with an odd prefix that is unlikely to be
> -	 *    used in a real devicetree.
> -	 */
> -
> -	/*
> -	 * open code for_each_property_of_node() because of_add_property()
> -	 * sets prop->next to NULL
> -	 */
> -	for (prop = np->properties; prop != NULL; prop = save_next) {
> -		save_next = prop->next;
> -		ret = of_add_property(dup, prop);
> -		if (ret)
> -			pr_err("unittest internal error: unable to add testdata property %pOF/%s",
> -			       np, prop->name);
> -	}
> -}
> -
> -/**
> - *	attach_node_and_children - attaches nodes
> - *	and its children to live tree
> - *
> - *	@np:	Node to attach to live tree
> - */
> -static void attach_node_and_children(struct device_node *np)
> -{
> -	struct device_node *next, *dup, *child;
> -	unsigned long flags;
> -	const char *full_name;
> -
> -	full_name = kasprintf(GFP_KERNEL, "%pOF", np);
> -
> -	if (!strcmp(full_name, "/__local_fixups__") ||
> -	    !strcmp(full_name, "/__fixups__"))
> -		return;
> -
> -	dup = of_find_node_by_path(full_name);
> -	kfree(full_name);
> -	if (dup) {
> -		update_node_properties(np, dup);
> -		return;
> -	}
> -
> -	child = np->child;
> -	np->child = NULL;
> -
> -	mutex_lock(&of_mutex);
> -	raw_spin_lock_irqsave(&devtree_lock, flags);
> -	np->sibling = np->parent->child;
> -	np->parent->child = np;
> -	of_node_clear_flag(np, OF_DETACHED);
> -	raw_spin_unlock_irqrestore(&devtree_lock, flags);
> -
> -	__of_attach_node_sysfs(np);
> -	mutex_unlock(&of_mutex);
> -
> -	while (child) {
> -		next = child->sibling;
> -		attach_node_and_children(child);
> -		child = next;
> -	}
> -}
> -
> -/**
> - *	unittest_data_add - Reads, copies data from
> - *	linked tree and attaches it to the live tree
> - */
> -static int unittest_data_add(void)
> -{
> -	void *unittest_data;
> -	struct device_node *unittest_data_node, *np;
> -	/*
> -	 * __dtb_testcases_begin[] and __dtb_testcases_end[] are magically
> -	 * created by cmd_dt_S_dtb in scripts/Makefile.lib
> -	 */
> -	extern uint8_t __dtb_testcases_begin[];
> -	extern uint8_t __dtb_testcases_end[];
> -	const int size = __dtb_testcases_end - __dtb_testcases_begin;
> -	int rc;
> -
> -	if (!size) {
> -		pr_warn("%s: No testcase data to attach; not running tests\n",
> -			__func__);
> -		return -ENODATA;
> -	}
> -
> -	/* creating copy */
> -	unittest_data = kmemdup(__dtb_testcases_begin, size, GFP_KERNEL);
> -
> -	if (!unittest_data) {
> -		pr_warn("%s: Failed to allocate memory for unittest_data; "
> -			"not running tests\n", __func__);
> -		return -ENOMEM;
> -	}
> -	of_fdt_unflatten_tree(unittest_data, NULL, &unittest_data_node);
> -	if (!unittest_data_node) {
> -		pr_warn("%s: No tree to attach; not running tests\n", __func__);
> -		return -ENODATA;
> -	}
> -
> -	/*
> -	 * This lock normally encloses of_resolve_phandles()
> -	 */
> -	of_overlay_mutex_lock();
> -
> -	rc = of_resolve_phandles(unittest_data_node);
> -	if (rc) {
> -		pr_err("%s: Failed to resolve phandles (rc=%i)\n", __func__, rc);
> -		of_overlay_mutex_unlock();
> -		return -EINVAL;
> -	}
> -
> -	if (!of_root) {
> -		of_root = unittest_data_node;
> -		for_each_of_allnodes(np)
> -			__of_attach_node_sysfs(np);
> -		of_aliases = of_find_node_by_path("/aliases");
> -		of_chosen = of_find_node_by_path("/chosen");
> -		of_overlay_mutex_unlock();
> -		return 0;
> -	}
> -
> -	/* attach the sub-tree to live tree */
> -	np = unittest_data_node->child;
> -	while (np) {
> -		struct device_node *next = np->sibling;
> -
> -		np->parent = of_root;
> -		attach_node_and_children(np);
> -		np = next;
> -	}
> -
> -	of_overlay_mutex_unlock();
> -
> -	return 0;
> -}
> -
>  #ifdef CONFIG_OF_OVERLAY
>  static int overlay_data_apply(const char *overlay_name, int *overlay_id);
>  
> @@ -2563,8 +2222,6 @@ static int of_test_init(struct kunit *test)
>  static struct kunit_case of_test_cases[] = {
>  	KUNIT_CASE(of_unittest_check_tree_linkage),
>  	KUNIT_CASE(of_unittest_check_phandles),
> -	KUNIT_CASE(of_unittest_find_node_by_name),
> -	KUNIT_CASE(of_unittest_dynamic),
>  	KUNIT_CASE(of_unittest_parse_phandle_with_args),
>  	KUNIT_CASE(of_unittest_parse_phandle_with_args_map),
>  	KUNIT_CASE(of_unittest_printf),
> 

^ permalink raw reply	[flat|nested] 316+ messages in thread
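
A note on the error contract of unittest_data_add() as visible in the quoted test-common.c: it returns 0 on success, -ENODATA when no testcase DTB is linked in or the blob cannot be unflattened, -ENOMEM when copying the blob fails, and -EINVAL when phandle resolution fails. The caller-side sketch below is hypothetical (the wrapper name is invented) and only restates those return values.

/*
 * Hypothetical caller-side handling of unittest_data_add(), based only
 * on the return values visible in the quoted test-common.c.  The
 * wrapper name is invented for illustration.
 */
#include <linux/errno.h>
#include <linux/printk.h>

#include "test-common.h"

static int of_test_attach_data(void)
{
	int rc = unittest_data_add();

	switch (rc) {
	case 0:
		/* testcase data is now attached to the live tree */
		return 0;
	case -ENODATA:
		/* no built-in testcase DTB, or it could not be unflattened */
		pr_info("OF unittest data missing; skipping tests\n");
		return rc;
	case -ENOMEM:	/* copying the DTB failed */
	case -EINVAL:	/* phandle resolution failed */
	default:
		pr_err("attaching OF unittest data failed (rc=%d)\n", rc);
		return rc;
	}
}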

* [RFC v4 16/17] of: unittest: split out a couple of test cases from unittest
@ 2019-03-22  1:14     ` Frank Rowand
  0 siblings, 0 replies; 316+ messages in thread
From: Frank Rowand @ 2019-03-22  1:14 UTC (permalink / raw)


On 2/14/19 1:37 PM, Brendan Higgins wrote:
> Split out a couple of test cases that test features in base.c from the
> unittest.c monolith. The intention is that we will eventually split out
> all test cases and group them together based on what portion of device
> tree they test.

I still object to this patch.  I do not want this code scattered into
additional files.

-Frank


> 
> Signed-off-by: Brendan Higgins <brendanhiggins@google.com>
> ---
>  drivers/of/Makefile      |   2 +-
>  drivers/of/base-test.c   | 214 ++++++++++++++++++++++++
>  drivers/of/test-common.c | 175 ++++++++++++++++++++
>  drivers/of/test-common.h |  16 ++
>  drivers/of/unittest.c    | 345 +--------------------------------------
>  5 files changed, 407 insertions(+), 345 deletions(-)
>  create mode 100644 drivers/of/base-test.c
>  create mode 100644 drivers/of/test-common.c
>  create mode 100644 drivers/of/test-common.h
> 
> diff --git a/drivers/of/Makefile b/drivers/of/Makefile
> index 663a4af0cccd5..4a4bd527d586c 100644
> --- a/drivers/of/Makefile
> +++ b/drivers/of/Makefile
> @@ -8,7 +8,7 @@ obj-$(CONFIG_OF_PROMTREE) += pdt.o
>  obj-$(CONFIG_OF_ADDRESS)  += address.o
>  obj-$(CONFIG_OF_IRQ)    += irq.o
>  obj-$(CONFIG_OF_NET)	+= of_net.o
> -obj-$(CONFIG_OF_UNITTEST) += unittest.o
> +obj-$(CONFIG_OF_UNITTEST) += unittest.o base-test.o test-common.o
>  obj-$(CONFIG_OF_MDIO)	+= of_mdio.o
>  obj-$(CONFIG_OF_RESERVED_MEM) += of_reserved_mem.o
>  obj-$(CONFIG_OF_RESOLVE)  += resolver.o
> diff --git a/drivers/of/base-test.c b/drivers/of/base-test.c
> new file mode 100644
> index 0000000000000..3d3f4f1b74800
> --- /dev/null
> +++ b/drivers/of/base-test.c
> @@ -0,0 +1,214 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Unit tests for functions defined in base.c.
> + */
> +#include <linux/of.h>
> +
> +#include <kunit/test.h>
> +
> +#include "test-common.h"
> +
> +static void of_unittest_find_node_by_name(struct kunit *test)
> +{
> +	struct device_node *np;
> +	const char *options, *name;
> +
> +	np = of_find_node_by_path("/testcase-data");
> +	name = kasprintf(GFP_KERNEL, "%pOF", np);
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> +	KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
> +			       "find /testcase-data failed\n");
> +	of_node_put(np);
> +	kfree(name);
> +
> +	/* Test if trailing '/' works */
> +	KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("/testcase-data/"), NULL,
> +			    "trailing '/' on /testcase-data/ should fail\n");
> +
> +	np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> +	name = kasprintf(GFP_KERNEL, "%pOF", np);
> +	KUNIT_EXPECT_STREQ_MSG(
> +		test, "/testcase-data/phandle-tests/consumer-a", name,
> +		"find /testcase-data/phandle-tests/consumer-a failed\n");
> +	of_node_put(np);
> +	kfree(name);
> +
> +	np = of_find_node_by_path("testcase-alias");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> +	name = kasprintf(GFP_KERNEL, "%pOF", np);
> +	KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
> +			       "find testcase-alias failed\n");
> +	of_node_put(np);
> +	kfree(name);
> +
> +	/* Test if trailing '/' works on aliases */
> +	KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("testcase-alias/"), NULL,
> +			    "trailing '/' on testcase-alias/ should fail\n");
> +
> +	np = of_find_node_by_path("testcase-alias/phandle-tests/consumer-a");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> +	name = kasprintf(GFP_KERNEL, "%pOF", np);
> +	KUNIT_EXPECT_STREQ_MSG(
> +		test, "/testcase-data/phandle-tests/consumer-a", name,
> +		"find testcase-alias/phandle-tests/consumer-a failed\n");
> +	of_node_put(np);
> +	kfree(name);
> +
> +	KUNIT_EXPECT_EQ_MSG(
> +		test,
> +		np = of_find_node_by_path("/testcase-data/missing-path"), NULL,
> +		"non-existent path returned node %pOF\n", np);
> +	of_node_put(np);
> +
> +	KUNIT_EXPECT_EQ_MSG(
> +		test, np = of_find_node_by_path("missing-alias"), NULL,
> +		"non-existent alias returned node %pOF\n", np);
> +	of_node_put(np);
> +
> +	KUNIT_EXPECT_EQ_MSG(
> +		test,
> +		np = of_find_node_by_path("testcase-alias/missing-path"), NULL,
> +		"non-existent alias with relative path returned node %pOF\n",
> +		np);
> +	of_node_put(np);
> +
> +	np = of_find_node_opts_by_path("/testcase-data:testoption", &options);
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> +	KUNIT_EXPECT_STREQ_MSG(test, "testoption", options,
> +			       "option path test failed\n");
> +	of_node_put(np);
> +
> +	np = of_find_node_opts_by_path("/testcase-data:test/option", &options);
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> +	KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
> +			       "option path test, subcase #1 failed\n");
> +	of_node_put(np);
> +
> +	np = of_find_node_opts_by_path("/testcase-data/testcase-device1:test/option", &options);
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> +	KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
> +			       "option path test, subcase #2 failed\n");
> +	of_node_put(np);
> +
> +	np = of_find_node_opts_by_path("/testcase-data:testoption", NULL);
> +	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(test, np,
> +					 "NULL option path test failed\n");
> +	of_node_put(np);
> +
> +	np = of_find_node_opts_by_path("testcase-alias:testaliasoption",
> +				       &options);
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> +	KUNIT_EXPECT_STREQ_MSG(test, "testaliasoption", options,
> +			       "option alias path test failed\n");
> +	of_node_put(np);
> +
> +	np = of_find_node_opts_by_path("testcase-alias:test/alias/option",
> +				       &options);
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> +	KUNIT_EXPECT_STREQ_MSG(test, "test/alias/option", options,
> +			       "option alias path test, subcase #1 failed\n");
> +	of_node_put(np);
> +
> +	np = of_find_node_opts_by_path("testcase-alias:testaliasoption", NULL);
> +	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(
> +			test, np, "NULL option alias path test failed\n");
> +	of_node_put(np);
> +
> +	options = "testoption";
> +	np = of_find_node_opts_by_path("testcase-alias", &options);
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> +	KUNIT_EXPECT_EQ_MSG(test, options, NULL,
> +			    "option clearing test failed\n");
> +	of_node_put(np);
> +
> +	options = "testoption";
> +	np = of_find_node_opts_by_path("/", &options);
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> +	KUNIT_EXPECT_EQ_MSG(test, options, NULL,
> +			    "option clearing root node test failed\n");
> +	of_node_put(np);
> +}
> +
> +static void of_unittest_dynamic(struct kunit *test)
> +{
> +	struct device_node *np;
> +	struct property *prop;
> +
> +	np = of_find_node_by_path("/testcase-data");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> +
> +	/* Array of 4 properties for the purpose of testing */
> +	prop = kcalloc(4, sizeof(*prop), GFP_KERNEL);
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, prop);
> +
> +	/* Add a new property - should pass*/
> +	prop->name = "new-property";
> +	prop->value = "new-property-data";
> +	prop->length = strlen(prop->value) + 1;
> +	KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop), 0,
> +			    "Adding a new property failed\n");
> +
> +	/* Try to add an existing property - should fail */
> +	prop++;
> +	prop->name = "new-property";
> +	prop->value = "new-property-data-should-fail";
> +	prop->length = strlen(prop->value) + 1;
> +	KUNIT_EXPECT_NE_MSG(test, of_add_property(np, prop), 0,
> +			    "Adding an existing property should have failed\n");
> +
> +	/* Try to modify an existing property - should pass */
> +	prop->value = "modify-property-data-should-pass";
> +	prop->length = strlen(prop->value) + 1;
> +	KUNIT_EXPECT_EQ_MSG(
> +		test, of_update_property(np, prop), 0,
> +		"Updating an existing property should have passed\n");
> +
> +	/* Try to modify non-existent property - should pass*/
> +	prop++;
> +	prop->name = "modify-property";
> +	prop->value = "modify-missing-property-data-should-pass";
> +	prop->length = strlen(prop->value) + 1;
> +	KUNIT_EXPECT_EQ_MSG(test, of_update_property(np, prop), 0,
> +			    "Updating a missing property should have passed\n");
> +
> +	/* Remove property - should pass */
> +	KUNIT_EXPECT_EQ_MSG(test, of_remove_property(np, prop), 0,
> +			    "Removing a property should have passed\n");
> +
> +	/* Adding very large property - should pass */
> +	prop++;
> +	prop->name = "large-property-PAGE_SIZEx8";
> +	prop->length = PAGE_SIZE * 8;
> +	prop->value = kzalloc(prop->length, GFP_KERNEL);
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, prop->value);
> +	KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop), 0,
> +			    "Adding a large property should have passed\n");
> +}
> +
> +static int of_test_init(struct kunit *test)
> +{
> +	/* adding data for unittest */
> +	KUNIT_ASSERT_EQ(test, 0, unittest_data_add());
> +
> +	if (!of_aliases)
> +		of_aliases = of_find_node_by_path("/aliases");
> +
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, of_find_node_by_path(
> +			"/testcase-data/phandle-tests/consumer-a"));
> +
> +	return 0;
> +}
> +
> +static struct kunit_case of_test_cases[] = {
> +	KUNIT_CASE(of_unittest_find_node_by_name),
> +	KUNIT_CASE(of_unittest_dynamic),
> +	{},
> +};
> +
> +static struct kunit_module of_test_module = {
> +	.name = "of-base-test",
> +	.init = of_test_init,
> +	.test_cases = of_test_cases,
> +};
> +module_test(of_test_module);
> diff --git a/drivers/of/test-common.c b/drivers/of/test-common.c
> new file mode 100644
> index 0000000000000..4c9a5f3b82f7d
> --- /dev/null
> +++ b/drivers/of/test-common.c
> @@ -0,0 +1,175 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Common code to be used by unit tests.
> + */
> +#include "test-common.h"
> +
> +#include <linux/of_fdt.h>
> +#include <linux/slab.h>
> +
> +#include "of_private.h"
> +
> +/**
> + *	update_node_properties - adds the properties
> + *	of np into dup node (present in live tree) and
> + *	updates parent of children of np to dup.
> + *
> + *	@np:	node whose properties are being added to the live tree
> + *	@dup:	node present in live tree to be updated
> + */
> +static void update_node_properties(struct device_node *np,
> +					struct device_node *dup)
> +{
> +	struct property *prop;
> +	struct property *save_next;
> +	struct device_node *child;
> +	int ret;
> +
> +	for_each_child_of_node(np, child)
> +		child->parent = dup;
> +
> +	/*
> +	 * "unittest internal error: unable to add testdata property"
> +	 *
> +	 *    If this message reports a property in node '/__symbols__' then
> +	 *    the respective unittest overlay contains a label that has the
> +	 *    same name as a label in the live devicetree.  The label will
> +	 *    be in the live devicetree only if the devicetree source was
> +	 *    compiled with the '-@' option.  If you encounter this error,
> +	 *    please consider renaming __all__ of the labels in the unittest
> +	 *    overlay dts files with an odd prefix that is unlikely to be
> +	 *    used in a real devicetree.
> +	 */
> +
> +	/*
> +	 * open code for_each_property_of_node() because of_add_property()
> +	 * sets prop->next to NULL
> +	 */
> +	for (prop = np->properties; prop != NULL; prop = save_next) {
> +		save_next = prop->next;
> +		ret = of_add_property(dup, prop);
> +		if (ret)
> +			pr_err("unittest internal error: unable to add testdata property %pOF/%s",
> +			       np, prop->name);
> +	}
> +}
> +
> +/**
> + *	attach_node_and_children - attaches nodes
> + *	and its children to live tree
> + *
> + *	@np:	Node to attach to live tree
> + */
> +static void attach_node_and_children(struct device_node *np)
> +{
> +	struct device_node *next, *dup, *child;
> +	unsigned long flags;
> +	const char *full_name;
> +
> +	full_name = kasprintf(GFP_KERNEL, "%pOF", np);
> +
> +	if (!strcmp(full_name, "/__local_fixups__") ||
> +	    !strcmp(full_name, "/__fixups__"))
> +		return;
> +
> +	dup = of_find_node_by_path(full_name);
> +	kfree(full_name);
> +	if (dup) {
> +		update_node_properties(np, dup);
> +		return;
> +	}
> +
> +	child = np->child;
> +	np->child = NULL;
> +
> +	mutex_lock(&of_mutex);
> +	raw_spin_lock_irqsave(&devtree_lock, flags);
> +	np->sibling = np->parent->child;
> +	np->parent->child = np;
> +	of_node_clear_flag(np, OF_DETACHED);
> +	raw_spin_unlock_irqrestore(&devtree_lock, flags);
> +
> +	__of_attach_node_sysfs(np);
> +	mutex_unlock(&of_mutex);
> +
> +	while (child) {
> +		next = child->sibling;
> +		attach_node_and_children(child);
> +		child = next;
> +	}
> +}
> +
> +/**
> + *	unittest_data_add - Reads, copies data from
> + *	linked tree and attaches it to the live tree
> + */
> +int unittest_data_add(void)
> +{
> +	void *unittest_data;
> +	struct device_node *unittest_data_node, *np;
> +	/*
> +	 * __dtb_testcases_begin[] and __dtb_testcases_end[] are magically
> +	 * created by cmd_dt_S_dtb in scripts/Makefile.lib
> +	 */
> +	extern uint8_t __dtb_testcases_begin[];
> +	extern uint8_t __dtb_testcases_end[];
> +	const int size = __dtb_testcases_end - __dtb_testcases_begin;
> +	int rc;
> +
> +	if (!size) {
> +		pr_warn("%s: No testcase data to attach; not running tests\n",
> +			__func__);
> +		return -ENODATA;
> +	}
> +
> +	/* creating copy */
> +	unittest_data = kmemdup(__dtb_testcases_begin, size, GFP_KERNEL);
> +
> +	if (!unittest_data) {
> +		pr_warn("%s: Failed to allocate memory for unittest_data; "
> +			"not running tests\n", __func__);
> +		return -ENOMEM;
> +	}
> +	of_fdt_unflatten_tree(unittest_data, NULL, &unittest_data_node);
> +	if (!unittest_data_node) {
> +		pr_warn("%s: No tree to attach; not running tests\n", __func__);
> +		return -ENODATA;
> +	}
> +
> +	/*
> +	 * This lock normally encloses of_resolve_phandles()
> +	 */
> +	of_overlay_mutex_lock();
> +
> +	rc = of_resolve_phandles(unittest_data_node);
> +	if (rc) {
> +		pr_err("%s: Failed to resolve phandles (rc=%i)\n", __func__, rc);
> +		of_overlay_mutex_unlock();
> +		return -EINVAL;
> +	}
> +
> +	if (!of_root) {
> +		of_root = unittest_data_node;
> +		for_each_of_allnodes(np)
> +			__of_attach_node_sysfs(np);
> +		of_aliases = of_find_node_by_path("/aliases");
> +		of_chosen = of_find_node_by_path("/chosen");
> +		of_overlay_mutex_unlock();
> +		return 0;
> +	}
> +
> +	/* attach the sub-tree to live tree */
> +	np = unittest_data_node->child;
> +	while (np) {
> +		struct device_node *next = np->sibling;
> +
> +		np->parent = of_root;
> +		attach_node_and_children(np);
> +		np = next;
> +	}
> +
> +	of_overlay_mutex_unlock();
> +
> +	return 0;
> +}
> +
> diff --git a/drivers/of/test-common.h b/drivers/of/test-common.h
> new file mode 100644
> index 0000000000000..a35484406bbf1
> --- /dev/null
> +++ b/drivers/of/test-common.h
> @@ -0,0 +1,16 @@
> +/* SPDX-License-Identifier: GPL-2.0+ */
> +/*
> + * Common code to be used by unit tests.
> + */
> +#ifndef _LINUX_OF_TEST_COMMON_H
> +#define _LINUX_OF_TEST_COMMON_H
> +
> +#include <linux/of.h>
> +
> +/**
> + *	unittest_data_add - Reads, copies data from
> + *	linked tree and attaches it to the live tree
> + */
> +int unittest_data_add(void);
> +
> +#endif /* _LINUX_OF_TEST_COMMON_H */
> diff --git a/drivers/of/unittest.c b/drivers/of/unittest.c
> index 96de69ccb3e63..05a2610d0be7f 100644
> --- a/drivers/of/unittest.c
> +++ b/drivers/of/unittest.c
> @@ -29,184 +29,7 @@
>  #include <kunit/test.h>
>  
>  #include "of_private.h"
> -
> -static void of_unittest_find_node_by_name(struct kunit *test)
> -{
> -	struct device_node *np;
> -	const char *options, *name;
> -
> -	np = of_find_node_by_path("/testcase-data");
> -	name = kasprintf(GFP_KERNEL, "%pOF", np);
> -	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> -	KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
> -			       "find /testcase-data failed\n");
> -	of_node_put(np);
> -	kfree(name);
> -
> -	/* Test if trailing '/' works */
> -	KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("/testcase-data/"), NULL,
> -			    "trailing '/' on /testcase-data/ should fail\n");
> -
> -	np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
> -	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> -	name = kasprintf(GFP_KERNEL, "%pOF", np);
> -	KUNIT_EXPECT_STREQ_MSG(
> -		test, "/testcase-data/phandle-tests/consumer-a", name,
> -		"find /testcase-data/phandle-tests/consumer-a failed\n");
> -	of_node_put(np);
> -	kfree(name);
> -
> -	np = of_find_node_by_path("testcase-alias");
> -	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> -	name = kasprintf(GFP_KERNEL, "%pOF", np);
> -	KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
> -			       "find testcase-alias failed\n");
> -	of_node_put(np);
> -	kfree(name);
> -
> -	/* Test if trailing '/' works on aliases */
> -	KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("testcase-alias/"), NULL,
> -			    "trailing '/' on testcase-alias/ should fail\n");
> -
> -	np = of_find_node_by_path("testcase-alias/phandle-tests/consumer-a");
> -	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> -	name = kasprintf(GFP_KERNEL, "%pOF", np);
> -	KUNIT_EXPECT_STREQ_MSG(
> -		test, "/testcase-data/phandle-tests/consumer-a", name,
> -		"find testcase-alias/phandle-tests/consumer-a failed\n");
> -	of_node_put(np);
> -	kfree(name);
> -
> -	KUNIT_EXPECT_EQ_MSG(
> -		test,
> -		np = of_find_node_by_path("/testcase-data/missing-path"), NULL,
> -		"non-existent path returned node %pOF\n", np);
> -	of_node_put(np);
> -
> -	KUNIT_EXPECT_EQ_MSG(
> -		test, np = of_find_node_by_path("missing-alias"), NULL,
> -		"non-existent alias returned node %pOF\n", np);
> -	of_node_put(np);
> -
> -	KUNIT_EXPECT_EQ_MSG(
> -		test,
> -		np = of_find_node_by_path("testcase-alias/missing-path"), NULL,
> -		"non-existent alias with relative path returned node %pOF\n",
> -		np);
> -	of_node_put(np);
> -
> -	np = of_find_node_opts_by_path("/testcase-data:testoption", &options);
> -	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> -	KUNIT_EXPECT_STREQ_MSG(test, "testoption", options,
> -			       "option path test failed\n");
> -	of_node_put(np);
> -
> -	np = of_find_node_opts_by_path("/testcase-data:test/option", &options);
> -	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> -	KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
> -			       "option path test, subcase #1 failed\n");
> -	of_node_put(np);
> -
> -	np = of_find_node_opts_by_path("/testcase-data/testcase-device1:test/option", &options);
> -	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> -	KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
> -			       "option path test, subcase #2 failed\n");
> -	of_node_put(np);
> -
> -	np = of_find_node_opts_by_path("/testcase-data:testoption", NULL);
> -	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(test, np,
> -					 "NULL option path test failed\n");
> -	of_node_put(np);
> -
> -	np = of_find_node_opts_by_path("testcase-alias:testaliasoption",
> -				       &options);
> -	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> -	KUNIT_EXPECT_STREQ_MSG(test, "testaliasoption", options,
> -			       "option alias path test failed\n");
> -	of_node_put(np);
> -
> -	np = of_find_node_opts_by_path("testcase-alias:test/alias/option",
> -				       &options);
> -	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> -	KUNIT_EXPECT_STREQ_MSG(test, "test/alias/option", options,
> -			       "option alias path test, subcase #1 failed\n");
> -	of_node_put(np);
> -
> -	np = of_find_node_opts_by_path("testcase-alias:testaliasoption", NULL);
> -	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(
> -			test, np, "NULL option alias path test failed\n");
> -	of_node_put(np);
> -
> -	options = "testoption";
> -	np = of_find_node_opts_by_path("testcase-alias", &options);
> -	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> -	KUNIT_EXPECT_EQ_MSG(test, options, NULL,
> -			    "option clearing test failed\n");
> -	of_node_put(np);
> -
> -	options = "testoption";
> -	np = of_find_node_opts_by_path("/", &options);
> -	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> -	KUNIT_EXPECT_EQ_MSG(test, options, NULL,
> -			    "option clearing root node test failed\n");
> -	of_node_put(np);
> -}
> -
> -static void of_unittest_dynamic(struct kunit *test)
> -{
> -	struct device_node *np;
> -	struct property *prop;
> -
> -	np = of_find_node_by_path("/testcase-data");
> -	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> -
> -	/* Array of 4 properties for the purpose of testing */
> -	prop = kcalloc(4, sizeof(*prop), GFP_KERNEL);
> -	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, prop);
> -
> -	/* Add a new property - should pass*/
> -	prop->name = "new-property";
> -	prop->value = "new-property-data";
> -	prop->length = strlen(prop->value) + 1;
> -	KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop), 0,
> -			    "Adding a new property failed\n");
> -
> -	/* Try to add an existing property - should fail */
> -	prop++;
> -	prop->name = "new-property";
> -	prop->value = "new-property-data-should-fail";
> -	prop->length = strlen(prop->value) + 1;
> -	KUNIT_EXPECT_NE_MSG(test, of_add_property(np, prop), 0,
> -			    "Adding an existing property should have failed\n");
> -
> -	/* Try to modify an existing property - should pass */
> -	prop->value = "modify-property-data-should-pass";
> -	prop->length = strlen(prop->value) + 1;
> -	KUNIT_EXPECT_EQ_MSG(
> -		test, of_update_property(np, prop), 0,
> -		"Updating an existing property should have passed\n");
> -
> -	/* Try to modify non-existent property - should pass*/
> -	prop++;
> -	prop->name = "modify-property";
> -	prop->value = "modify-missing-property-data-should-pass";
> -	prop->length = strlen(prop->value) + 1;
> -	KUNIT_EXPECT_EQ_MSG(test, of_update_property(np, prop), 0,
> -			    "Updating a missing property should have passed\n");
> -
> -	/* Remove property - should pass */
> -	KUNIT_EXPECT_EQ_MSG(test, of_remove_property(np, prop), 0,
> -			    "Removing a property should have passed\n");
> -
> -	/* Adding very large property - should pass */
> -	prop++;
> -	prop->name = "large-property-PAGE_SIZEx8";
> -	prop->length = PAGE_SIZE * 8;
> -	prop->value = kzalloc(prop->length, GFP_KERNEL);
> -	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, prop->value);
> -	KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop), 0,
> -			    "Adding a large property should have passed\n");
> -}
> +#include "test-common.h"
>  
>  static int of_unittest_check_node_linkage(struct device_node *np)
>  {
> @@ -1177,170 +1000,6 @@ static void of_unittest_platform_populate(struct kunit *test)
>  	of_node_put(np);
>  }
>  
> -/**
> - *	update_node_properties - adds the properties
> - *	of np into dup node (present in live tree) and
> - *	updates parent of children of np to dup.
> - *
> - *	@np:	node whose properties are being added to the live tree
> - *	@dup:	node present in live tree to be updated
> - */
> -static void update_node_properties(struct device_node *np,
> -					struct device_node *dup)
> -{
> -	struct property *prop;
> -	struct property *save_next;
> -	struct device_node *child;
> -	int ret;
> -
> -	for_each_child_of_node(np, child)
> -		child->parent = dup;
> -
> -	/*
> -	 * "unittest internal error: unable to add testdata property"
> -	 *
> -	 *    If this message reports a property in node '/__symbols__' then
> -	 *    the respective unittest overlay contains a label that has the
> -	 *    same name as a label in the live devicetree.  The label will
> -	 *    be in the live devicetree only if the devicetree source was
> -	 *    compiled with the '-@' option.  If you encounter this error,
> -	 *    please consider renaming __all__ of the labels in the unittest
> -	 *    overlay dts files with an odd prefix that is unlikely to be
> -	 *    used in a real devicetree.
> -	 */
> -
> -	/*
> -	 * open code for_each_property_of_node() because of_add_property()
> -	 * sets prop->next to NULL
> -	 */
> -	for (prop = np->properties; prop != NULL; prop = save_next) {
> -		save_next = prop->next;
> -		ret = of_add_property(dup, prop);
> -		if (ret)
> -			pr_err("unittest internal error: unable to add testdata property %pOF/%s",
> -			       np, prop->name);
> -	}
> -}
> -
> -/**
> - *	attach_node_and_children - attaches nodes
> - *	and its children to live tree
> - *
> - *	@np:	Node to attach to live tree
> - */
> -static void attach_node_and_children(struct device_node *np)
> -{
> -	struct device_node *next, *dup, *child;
> -	unsigned long flags;
> -	const char *full_name;
> -
> -	full_name = kasprintf(GFP_KERNEL, "%pOF", np);
> -
> -	if (!strcmp(full_name, "/__local_fixups__") ||
> -	    !strcmp(full_name, "/__fixups__"))
> -		return;
> -
> -	dup = of_find_node_by_path(full_name);
> -	kfree(full_name);
> -	if (dup) {
> -		update_node_properties(np, dup);
> -		return;
> -	}
> -
> -	child = np->child;
> -	np->child = NULL;
> -
> -	mutex_lock(&of_mutex);
> -	raw_spin_lock_irqsave(&devtree_lock, flags);
> -	np->sibling = np->parent->child;
> -	np->parent->child = np;
> -	of_node_clear_flag(np, OF_DETACHED);
> -	raw_spin_unlock_irqrestore(&devtree_lock, flags);
> -
> -	__of_attach_node_sysfs(np);
> -	mutex_unlock(&of_mutex);
> -
> -	while (child) {
> -		next = child->sibling;
> -		attach_node_and_children(child);
> -		child = next;
> -	}
> -}
> -
> -/**
> - *	unittest_data_add - Reads, copies data from
> - *	linked tree and attaches it to the live tree
> - */
> -static int unittest_data_add(void)
> -{
> -	void *unittest_data;
> -	struct device_node *unittest_data_node, *np;
> -	/*
> -	 * __dtb_testcases_begin[] and __dtb_testcases_end[] are magically
> -	 * created by cmd_dt_S_dtb in scripts/Makefile.lib
> -	 */
> -	extern uint8_t __dtb_testcases_begin[];
> -	extern uint8_t __dtb_testcases_end[];
> -	const int size = __dtb_testcases_end - __dtb_testcases_begin;
> -	int rc;
> -
> -	if (!size) {
> -		pr_warn("%s: No testcase data to attach; not running tests\n",
> -			__func__);
> -		return -ENODATA;
> -	}
> -
> -	/* creating copy */
> -	unittest_data = kmemdup(__dtb_testcases_begin, size, GFP_KERNEL);
> -
> -	if (!unittest_data) {
> -		pr_warn("%s: Failed to allocate memory for unittest_data; "
> -			"not running tests\n", __func__);
> -		return -ENOMEM;
> -	}
> -	of_fdt_unflatten_tree(unittest_data, NULL, &unittest_data_node);
> -	if (!unittest_data_node) {
> -		pr_warn("%s: No tree to attach; not running tests\n", __func__);
> -		return -ENODATA;
> -	}
> -
> -	/*
> -	 * This lock normally encloses of_resolve_phandles()
> -	 */
> -	of_overlay_mutex_lock();
> -
> -	rc = of_resolve_phandles(unittest_data_node);
> -	if (rc) {
> -		pr_err("%s: Failed to resolve phandles (rc=%i)\n", __func__, rc);
> -		of_overlay_mutex_unlock();
> -		return -EINVAL;
> -	}
> -
> -	if (!of_root) {
> -		of_root = unittest_data_node;
> -		for_each_of_allnodes(np)
> -			__of_attach_node_sysfs(np);
> -		of_aliases = of_find_node_by_path("/aliases");
> -		of_chosen = of_find_node_by_path("/chosen");
> -		of_overlay_mutex_unlock();
> -		return 0;
> -	}
> -
> -	/* attach the sub-tree to live tree */
> -	np = unittest_data_node->child;
> -	while (np) {
> -		struct device_node *next = np->sibling;
> -
> -		np->parent = of_root;
> -		attach_node_and_children(np);
> -		np = next;
> -	}
> -
> -	of_overlay_mutex_unlock();
> -
> -	return 0;
> -}
> -
>  #ifdef CONFIG_OF_OVERLAY
>  static int overlay_data_apply(const char *overlay_name, int *overlay_id);
>  
> @@ -2563,8 +2222,6 @@ static int of_test_init(struct kunit *test)
>  static struct kunit_case of_test_cases[] = {
>  	KUNIT_CASE(of_unittest_check_tree_linkage),
>  	KUNIT_CASE(of_unittest_check_phandles),
> -	KUNIT_CASE(of_unittest_find_node_by_name),
> -	KUNIT_CASE(of_unittest_dynamic),
>  	KUNIT_CASE(of_unittest_parse_phandle_with_args),
>  	KUNIT_CASE(of_unittest_parse_phandle_with_args_map),
>  	KUNIT_CASE(of_unittest_printf),
> 


* Re: [RFC v4 16/17] of: unittest: split out a couple of test cases from unittest
@ 2019-03-22  1:14     ` Frank Rowand
  0 siblings, 0 replies; 316+ messages in thread
From: Frank Rowand @ 2019-03-22  1:14 UTC (permalink / raw)
  To: Brendan Higgins, keescook, mcgrof, shuah, robh, kieran.bingham
  Cc: brakmo, pmladek, amir73il, dri-devel, Alexander.Levin,
	linux-kselftest, linux-nvdimm, richard, knut.omang, wfg, joel,
	jdike, dan.carpenter, devicetree, Tim.Bird, linux-um, rostedt,
	julia.lawall, dan.j.williams, kunit-dev, gregkh, linux-kernel,
	daniel, mpe, joe, khilman

On 2/14/19 1:37 PM, Brendan Higgins wrote:
> Split out a couple of test cases that test features in base.c from the
> unittest.c monolith. The intention is that we will eventually split out
> all test cases and group them together based on what portion of the
> device tree they test.
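
The split in this patch amounts to moving the two test functions into
drivers/of/base-test.c and registering them there with the KUnit
boilerplate sketched below (condensed from the diff further down, not a
literal excerpt; the function bodies are elided):

	#include <linux/of.h>
	#include <kunit/test.h>

	#include "test-common.h"	/* provides unittest_data_add() */

	/* bodies moved verbatim from drivers/of/unittest.c */
	static void of_unittest_find_node_by_name(struct kunit *test);
	static void of_unittest_dynamic(struct kunit *test);

	static int of_test_init(struct kunit *test)
	{
		/* attach the test device tree data before each case */
		KUNIT_ASSERT_EQ(test, 0, unittest_data_add());
		return 0;
	}

	static struct kunit_case of_test_cases[] = {
		KUNIT_CASE(of_unittest_find_node_by_name),
		KUNIT_CASE(of_unittest_dynamic),
		{},
	};

	static struct kunit_module of_test_module = {
		.name = "of-base-test",
		.init = of_test_init,
		.test_cases = of_test_cases,
	};
	module_test(of_test_module);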

I still object to this patch.  I do not want this code scattered into
additional files.

-Frank


> 
> Signed-off-by: Brendan Higgins <brendanhiggins@google.com>
> ---
>  drivers/of/Makefile      |   2 +-
>  drivers/of/base-test.c   | 214 ++++++++++++++++++++++++
>  drivers/of/test-common.c | 175 ++++++++++++++++++++
>  drivers/of/test-common.h |  16 ++
>  drivers/of/unittest.c    | 345 +--------------------------------------
>  5 files changed, 407 insertions(+), 345 deletions(-)
>  create mode 100644 drivers/of/base-test.c
>  create mode 100644 drivers/of/test-common.c
>  create mode 100644 drivers/of/test-common.h
> 
> diff --git a/drivers/of/Makefile b/drivers/of/Makefile
> index 663a4af0cccd5..4a4bd527d586c 100644
> --- a/drivers/of/Makefile
> +++ b/drivers/of/Makefile
> @@ -8,7 +8,7 @@ obj-$(CONFIG_OF_PROMTREE) += pdt.o
>  obj-$(CONFIG_OF_ADDRESS)  += address.o
>  obj-$(CONFIG_OF_IRQ)    += irq.o
>  obj-$(CONFIG_OF_NET)	+= of_net.o
> -obj-$(CONFIG_OF_UNITTEST) += unittest.o
> +obj-$(CONFIG_OF_UNITTEST) += unittest.o base-test.o test-common.o
>  obj-$(CONFIG_OF_MDIO)	+= of_mdio.o
>  obj-$(CONFIG_OF_RESERVED_MEM) += of_reserved_mem.o
>  obj-$(CONFIG_OF_RESOLVE)  += resolver.o
> diff --git a/drivers/of/base-test.c b/drivers/of/base-test.c
> new file mode 100644
> index 0000000000000..3d3f4f1b74800
> --- /dev/null
> +++ b/drivers/of/base-test.c
> @@ -0,0 +1,214 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Unit tests for functions defined in base.c.
> + */
> +#include <linux/of.h>
> +
> +#include <kunit/test.h>
> +
> +#include "test-common.h"
> +
> +static void of_unittest_find_node_by_name(struct kunit *test)
> +{
> +	struct device_node *np;
> +	const char *options, *name;
> +
> +	np = of_find_node_by_path("/testcase-data");
> +	name = kasprintf(GFP_KERNEL, "%pOF", np);
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> +	KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
> +			       "find /testcase-data failed\n");
> +	of_node_put(np);
> +	kfree(name);
> +
> +	/* Test if trailing '/' works */
> +	KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("/testcase-data/"), NULL,
> +			    "trailing '/' on /testcase-data/ should fail\n");
> +
> +	np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> +	name = kasprintf(GFP_KERNEL, "%pOF", np);
> +	KUNIT_EXPECT_STREQ_MSG(
> +		test, "/testcase-data/phandle-tests/consumer-a", name,
> +		"find /testcase-data/phandle-tests/consumer-a failed\n");
> +	of_node_put(np);
> +	kfree(name);
> +
> +	np = of_find_node_by_path("testcase-alias");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> +	name = kasprintf(GFP_KERNEL, "%pOF", np);
> +	KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
> +			       "find testcase-alias failed\n");
> +	of_node_put(np);
> +	kfree(name);
> +
> +	/* Test if trailing '/' works on aliases */
> +	KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("testcase-alias/"), NULL,
> +			    "trailing '/' on testcase-alias/ should fail\n");
> +
> +	np = of_find_node_by_path("testcase-alias/phandle-tests/consumer-a");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> +	name = kasprintf(GFP_KERNEL, "%pOF", np);
> +	KUNIT_EXPECT_STREQ_MSG(
> +		test, "/testcase-data/phandle-tests/consumer-a", name,
> +		"find testcase-alias/phandle-tests/consumer-a failed\n");
> +	of_node_put(np);
> +	kfree(name);
> +
> +	KUNIT_EXPECT_EQ_MSG(
> +		test,
> +		np = of_find_node_by_path("/testcase-data/missing-path"), NULL,
> +		"non-existent path returned node %pOF\n", np);
> +	of_node_put(np);
> +
> +	KUNIT_EXPECT_EQ_MSG(
> +		test, np = of_find_node_by_path("missing-alias"), NULL,
> +		"non-existent alias returned node %pOF\n", np);
> +	of_node_put(np);
> +
> +	KUNIT_EXPECT_EQ_MSG(
> +		test,
> +		np = of_find_node_by_path("testcase-alias/missing-path"), NULL,
> +		"non-existent alias with relative path returned node %pOF\n",
> +		np);
> +	of_node_put(np);
> +
> +	np = of_find_node_opts_by_path("/testcase-data:testoption", &options);
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> +	KUNIT_EXPECT_STREQ_MSG(test, "testoption", options,
> +			       "option path test failed\n");
> +	of_node_put(np);
> +
> +	np = of_find_node_opts_by_path("/testcase-data:test/option", &options);
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> +	KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
> +			       "option path test, subcase #1 failed\n");
> +	of_node_put(np);
> +
> +	np = of_find_node_opts_by_path("/testcase-data/testcase-device1:test/option", &options);
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> +	KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
> +			       "option path test, subcase #2 failed\n");
> +	of_node_put(np);
> +
> +	np = of_find_node_opts_by_path("/testcase-data:testoption", NULL);
> +	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(test, np,
> +					 "NULL option path test failed\n");
> +	of_node_put(np);
> +
> +	np = of_find_node_opts_by_path("testcase-alias:testaliasoption",
> +				       &options);
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> +	KUNIT_EXPECT_STREQ_MSG(test, "testaliasoption", options,
> +			       "option alias path test failed\n");
> +	of_node_put(np);
> +
> +	np = of_find_node_opts_by_path("testcase-alias:test/alias/option",
> +				       &options);
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> +	KUNIT_EXPECT_STREQ_MSG(test, "test/alias/option", options,
> +			       "option alias path test, subcase #1 failed\n");
> +	of_node_put(np);
> +
> +	np = of_find_node_opts_by_path("testcase-alias:testaliasoption", NULL);
> +	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(
> +			test, np, "NULL option alias path test failed\n");
> +	of_node_put(np);
> +
> +	options = "testoption";
> +	np = of_find_node_opts_by_path("testcase-alias", &options);
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> +	KUNIT_EXPECT_EQ_MSG(test, options, NULL,
> +			    "option clearing test failed\n");
> +	of_node_put(np);
> +
> +	options = "testoption";
> +	np = of_find_node_opts_by_path("/", &options);
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> +	KUNIT_EXPECT_EQ_MSG(test, options, NULL,
> +			    "option clearing root node test failed\n");
> +	of_node_put(np);
> +}
> +
> +static void of_unittest_dynamic(struct kunit *test)
> +{
> +	struct device_node *np;
> +	struct property *prop;
> +
> +	np = of_find_node_by_path("/testcase-data");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> +
> +	/* Array of 4 properties for the purpose of testing */
> +	prop = kcalloc(4, sizeof(*prop), GFP_KERNEL);
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, prop);
> +
> +	/* Add a new property - should pass*/
> +	prop->name = "new-property";
> +	prop->value = "new-property-data";
> +	prop->length = strlen(prop->value) + 1;
> +	KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop), 0,
> +			    "Adding a new property failed\n");
> +
> +	/* Try to add an existing property - should fail */
> +	prop++;
> +	prop->name = "new-property";
> +	prop->value = "new-property-data-should-fail";
> +	prop->length = strlen(prop->value) + 1;
> +	KUNIT_EXPECT_NE_MSG(test, of_add_property(np, prop), 0,
> +			    "Adding an existing property should have failed\n");
> +
> +	/* Try to modify an existing property - should pass */
> +	prop->value = "modify-property-data-should-pass";
> +	prop->length = strlen(prop->value) + 1;
> +	KUNIT_EXPECT_EQ_MSG(
> +		test, of_update_property(np, prop), 0,
> +		"Updating an existing property should have passed\n");
> +
> +	/* Try to modify non-existent property - should pass*/
> +	prop++;
> +	prop->name = "modify-property";
> +	prop->value = "modify-missing-property-data-should-pass";
> +	prop->length = strlen(prop->value) + 1;
> +	KUNIT_EXPECT_EQ_MSG(test, of_update_property(np, prop), 0,
> +			    "Updating a missing property should have passed\n");
> +
> +	/* Remove property - should pass */
> +	KUNIT_EXPECT_EQ_MSG(test, of_remove_property(np, prop), 0,
> +			    "Removing a property should have passed\n");
> +
> +	/* Adding very large property - should pass */
> +	prop++;
> +	prop->name = "large-property-PAGE_SIZEx8";
> +	prop->length = PAGE_SIZE * 8;
> +	prop->value = kzalloc(prop->length, GFP_KERNEL);
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, prop->value);
> +	KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop), 0,
> +			    "Adding a large property should have passed\n");
> +}
> +
> +static int of_test_init(struct kunit *test)
> +{
> +	/* adding data for unittest */
> +	KUNIT_ASSERT_EQ(test, 0, unittest_data_add());
> +
> +	if (!of_aliases)
> +		of_aliases = of_find_node_by_path("/aliases");
> +
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, of_find_node_by_path(
> +			"/testcase-data/phandle-tests/consumer-a"));
> +
> +	return 0;
> +}
> +
> +static struct kunit_case of_test_cases[] = {
> +	KUNIT_CASE(of_unittest_find_node_by_name),
> +	KUNIT_CASE(of_unittest_dynamic),
> +	{},
> +};
> +
> +static struct kunit_module of_test_module = {
> +	.name = "of-base-test",
> +	.init = of_test_init,
> +	.test_cases = of_test_cases,
> +};
> +module_test(of_test_module);
> diff --git a/drivers/of/test-common.c b/drivers/of/test-common.c
> new file mode 100644
> index 0000000000000..4c9a5f3b82f7d
> --- /dev/null
> +++ b/drivers/of/test-common.c
> @@ -0,0 +1,175 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Common code to be used by unit tests.
> + */
> +#include "test-common.h"
> +
> +#include <linux/of_fdt.h>
> +#include <linux/slab.h>
> +
> +#include "of_private.h"
> +
> +/**
> + *	update_node_properties - adds the properties
> + *	of np into dup node (present in live tree) and
> + *	updates parent of children of np to dup.
> + *
> + *	@np:	node whose properties are being added to the live tree
> + *	@dup:	node present in live tree to be updated
> + */
> +static void update_node_properties(struct device_node *np,
> +					struct device_node *dup)
> +{
> +	struct property *prop;
> +	struct property *save_next;
> +	struct device_node *child;
> +	int ret;
> +
> +	for_each_child_of_node(np, child)
> +		child->parent = dup;
> +
> +	/*
> +	 * "unittest internal error: unable to add testdata property"
> +	 *
> +	 *    If this message reports a property in node '/__symbols__' then
> +	 *    the respective unittest overlay contains a label that has the
> +	 *    same name as a label in the live devicetree.  The label will
> +	 *    be in the live devicetree only if the devicetree source was
> +	 *    compiled with the '-@' option.  If you encounter this error,
> +	 *    please consider renaming __all__ of the labels in the unittest
> +	 *    overlay dts files with an odd prefix that is unlikely to be
> +	 *    used in a real devicetree.
> +	 */
> +
> +	/*
> +	 * open code for_each_property_of_node() because of_add_property()
> +	 * sets prop->next to NULL
> +	 */
> +	for (prop = np->properties; prop != NULL; prop = save_next) {
> +		save_next = prop->next;
> +		ret = of_add_property(dup, prop);
> +		if (ret)
> +			pr_err("unittest internal error: unable to add testdata property %pOF/%s",
> +			       np, prop->name);
> +	}
> +}
> +
> +/**
> + *	attach_node_and_children - attaches nodes
> + *	and its children to live tree
> + *
> + *	@np:	Node to attach to live tree
> + */
> +static void attach_node_and_children(struct device_node *np)
> +{
> +	struct device_node *next, *dup, *child;
> +	unsigned long flags;
> +	const char *full_name;
> +
> +	full_name = kasprintf(GFP_KERNEL, "%pOF", np);
> +
> +	if (!strcmp(full_name, "/__local_fixups__") ||
> +	    !strcmp(full_name, "/__fixups__"))
> +		return;
> +
> +	dup = of_find_node_by_path(full_name);
> +	kfree(full_name);
> +	if (dup) {
> +		update_node_properties(np, dup);
> +		return;
> +	}
> +
> +	child = np->child;
> +	np->child = NULL;
> +
> +	mutex_lock(&of_mutex);
> +	raw_spin_lock_irqsave(&devtree_lock, flags);
> +	np->sibling = np->parent->child;
> +	np->parent->child = np;
> +	of_node_clear_flag(np, OF_DETACHED);
> +	raw_spin_unlock_irqrestore(&devtree_lock, flags);
> +
> +	__of_attach_node_sysfs(np);
> +	mutex_unlock(&of_mutex);
> +
> +	while (child) {
> +		next = child->sibling;
> +		attach_node_and_children(child);
> +		child = next;
> +	}
> +}
> +
> +/**
> + *	unittest_data_add - Reads, copies data from
> + *	linked tree and attaches it to the live tree
> + */
> +int unittest_data_add(void)
> +{
> +	void *unittest_data;
> +	struct device_node *unittest_data_node, *np;
> +	/*
> +	 * __dtb_testcases_begin[] and __dtb_testcases_end[] are magically
> +	 * created by cmd_dt_S_dtb in scripts/Makefile.lib
> +	 */
> +	extern uint8_t __dtb_testcases_begin[];
> +	extern uint8_t __dtb_testcases_end[];
> +	const int size = __dtb_testcases_end - __dtb_testcases_begin;
> +	int rc;
> +
> +	if (!size) {
> +		pr_warn("%s: No testcase data to attach; not running tests\n",
> +			__func__);
> +		return -ENODATA;
> +	}
> +
> +	/* creating copy */
> +	unittest_data = kmemdup(__dtb_testcases_begin, size, GFP_KERNEL);
> +
> +	if (!unittest_data) {
> +		pr_warn("%s: Failed to allocate memory for unittest_data; "
> +			"not running tests\n", __func__);
> +		return -ENOMEM;
> +	}
> +	of_fdt_unflatten_tree(unittest_data, NULL, &unittest_data_node);
> +	if (!unittest_data_node) {
> +		pr_warn("%s: No tree to attach; not running tests\n", __func__);
> +		return -ENODATA;
> +	}
> +
> +	/*
> +	 * This lock normally encloses of_resolve_phandles()
> +	 */
> +	of_overlay_mutex_lock();
> +
> +	rc = of_resolve_phandles(unittest_data_node);
> +	if (rc) {
> +		pr_err("%s: Failed to resolve phandles (rc=%i)\n", __func__, rc);
> +		of_overlay_mutex_unlock();
> +		return -EINVAL;
> +	}
> +
> +	if (!of_root) {
> +		of_root = unittest_data_node;
> +		for_each_of_allnodes(np)
> +			__of_attach_node_sysfs(np);
> +		of_aliases = of_find_node_by_path("/aliases");
> +		of_chosen = of_find_node_by_path("/chosen");
> +		of_overlay_mutex_unlock();
> +		return 0;
> +	}
> +
> +	/* attach the sub-tree to live tree */
> +	np = unittest_data_node->child;
> +	while (np) {
> +		struct device_node *next = np->sibling;
> +
> +		np->parent = of_root;
> +		attach_node_and_children(np);
> +		np = next;
> +	}
> +
> +	of_overlay_mutex_unlock();
> +
> +	return 0;
> +}
> +
> diff --git a/drivers/of/test-common.h b/drivers/of/test-common.h
> new file mode 100644
> index 0000000000000..a35484406bbf1
> --- /dev/null
> +++ b/drivers/of/test-common.h
> @@ -0,0 +1,16 @@
> +/* SPDX-License-Identifier: GPL-2.0+ */
> +/*
> + * Common code to be used by unit tests.
> + */
> +#ifndef _LINUX_OF_TEST_COMMON_H
> +#define _LINUX_OF_TEST_COMMON_H
> +
> +#include <linux/of.h>
> +
> +/**
> + *	unittest_data_add - Reads, copies data from
> + *	linked tree and attaches it to the live tree
> + */
> +int unittest_data_add(void);
> +
> +#endif /* _LINUX_OF_TEST_COMMON_H */
> diff --git a/drivers/of/unittest.c b/drivers/of/unittest.c
> index 96de69ccb3e63..05a2610d0be7f 100644
> --- a/drivers/of/unittest.c
> +++ b/drivers/of/unittest.c
> @@ -29,184 +29,7 @@
>  #include <kunit/test.h>
>  
>  #include "of_private.h"
> -
> -static void of_unittest_find_node_by_name(struct kunit *test)
> -{
> -	struct device_node *np;
> -	const char *options, *name;
> -
> -	np = of_find_node_by_path("/testcase-data");
> -	name = kasprintf(GFP_KERNEL, "%pOF", np);
> -	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> -	KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
> -			       "find /testcase-data failed\n");
> -	of_node_put(np);
> -	kfree(name);
> -
> -	/* Test if trailing '/' works */
> -	KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("/testcase-data/"), NULL,
> -			    "trailing '/' on /testcase-data/ should fail\n");
> -
> -	np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
> -	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> -	name = kasprintf(GFP_KERNEL, "%pOF", np);
> -	KUNIT_EXPECT_STREQ_MSG(
> -		test, "/testcase-data/phandle-tests/consumer-a", name,
> -		"find /testcase-data/phandle-tests/consumer-a failed\n");
> -	of_node_put(np);
> -	kfree(name);
> -
> -	np = of_find_node_by_path("testcase-alias");
> -	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> -	name = kasprintf(GFP_KERNEL, "%pOF", np);
> -	KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
> -			       "find testcase-alias failed\n");
> -	of_node_put(np);
> -	kfree(name);
> -
> -	/* Test if trailing '/' works on aliases */
> -	KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("testcase-alias/"), NULL,
> -			    "trailing '/' on testcase-alias/ should fail\n");
> -
> -	np = of_find_node_by_path("testcase-alias/phandle-tests/consumer-a");
> -	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> -	name = kasprintf(GFP_KERNEL, "%pOF", np);
> -	KUNIT_EXPECT_STREQ_MSG(
> -		test, "/testcase-data/phandle-tests/consumer-a", name,
> -		"find testcase-alias/phandle-tests/consumer-a failed\n");
> -	of_node_put(np);
> -	kfree(name);
> -
> -	KUNIT_EXPECT_EQ_MSG(
> -		test,
> -		np = of_find_node_by_path("/testcase-data/missing-path"), NULL,
> -		"non-existent path returned node %pOF\n", np);
> -	of_node_put(np);
> -
> -	KUNIT_EXPECT_EQ_MSG(
> -		test, np = of_find_node_by_path("missing-alias"), NULL,
> -		"non-existent alias returned node %pOF\n", np);
> -	of_node_put(np);
> -
> -	KUNIT_EXPECT_EQ_MSG(
> -		test,
> -		np = of_find_node_by_path("testcase-alias/missing-path"), NULL,
> -		"non-existent alias with relative path returned node %pOF\n",
> -		np);
> -	of_node_put(np);
> -
> -	np = of_find_node_opts_by_path("/testcase-data:testoption", &options);
> -	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> -	KUNIT_EXPECT_STREQ_MSG(test, "testoption", options,
> -			       "option path test failed\n");
> -	of_node_put(np);
> -
> -	np = of_find_node_opts_by_path("/testcase-data:test/option", &options);
> -	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> -	KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
> -			       "option path test, subcase #1 failed\n");
> -	of_node_put(np);
> -
> -	np = of_find_node_opts_by_path("/testcase-data/testcase-device1:test/option", &options);
> -	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> -	KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
> -			       "option path test, subcase #2 failed\n");
> -	of_node_put(np);
> -
> -	np = of_find_node_opts_by_path("/testcase-data:testoption", NULL);
> -	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(test, np,
> -					 "NULL option path test failed\n");
> -	of_node_put(np);
> -
> -	np = of_find_node_opts_by_path("testcase-alias:testaliasoption",
> -				       &options);
> -	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> -	KUNIT_EXPECT_STREQ_MSG(test, "testaliasoption", options,
> -			       "option alias path test failed\n");
> -	of_node_put(np);
> -
> -	np = of_find_node_opts_by_path("testcase-alias:test/alias/option",
> -				       &options);
> -	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> -	KUNIT_EXPECT_STREQ_MSG(test, "test/alias/option", options,
> -			       "option alias path test, subcase #1 failed\n");
> -	of_node_put(np);
> -
> -	np = of_find_node_opts_by_path("testcase-alias:testaliasoption", NULL);
> -	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(
> -			test, np, "NULL option alias path test failed\n");
> -	of_node_put(np);
> -
> -	options = "testoption";
> -	np = of_find_node_opts_by_path("testcase-alias", &options);
> -	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> -	KUNIT_EXPECT_EQ_MSG(test, options, NULL,
> -			    "option clearing test failed\n");
> -	of_node_put(np);
> -
> -	options = "testoption";
> -	np = of_find_node_opts_by_path("/", &options);
> -	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> -	KUNIT_EXPECT_EQ_MSG(test, options, NULL,
> -			    "option clearing root node test failed\n");
> -	of_node_put(np);
> -}
> -
> -static void of_unittest_dynamic(struct kunit *test)
> -{
> -	struct device_node *np;
> -	struct property *prop;
> -
> -	np = of_find_node_by_path("/testcase-data");
> -	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> -
> -	/* Array of 4 properties for the purpose of testing */
> -	prop = kcalloc(4, sizeof(*prop), GFP_KERNEL);
> -	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, prop);
> -
> -	/* Add a new property - should pass*/
> -	prop->name = "new-property";
> -	prop->value = "new-property-data";
> -	prop->length = strlen(prop->value) + 1;
> -	KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop), 0,
> -			    "Adding a new property failed\n");
> -
> -	/* Try to add an existing property - should fail */
> -	prop++;
> -	prop->name = "new-property";
> -	prop->value = "new-property-data-should-fail";
> -	prop->length = strlen(prop->value) + 1;
> -	KUNIT_EXPECT_NE_MSG(test, of_add_property(np, prop), 0,
> -			    "Adding an existing property should have failed\n");
> -
> -	/* Try to modify an existing property - should pass */
> -	prop->value = "modify-property-data-should-pass";
> -	prop->length = strlen(prop->value) + 1;
> -	KUNIT_EXPECT_EQ_MSG(
> -		test, of_update_property(np, prop), 0,
> -		"Updating an existing property should have passed\n");
> -
> -	/* Try to modify non-existent property - should pass*/
> -	prop++;
> -	prop->name = "modify-property";
> -	prop->value = "modify-missing-property-data-should-pass";
> -	prop->length = strlen(prop->value) + 1;
> -	KUNIT_EXPECT_EQ_MSG(test, of_update_property(np, prop), 0,
> -			    "Updating a missing property should have passed\n");
> -
> -	/* Remove property - should pass */
> -	KUNIT_EXPECT_EQ_MSG(test, of_remove_property(np, prop), 0,
> -			    "Removing a property should have passed\n");
> -
> -	/* Adding very large property - should pass */
> -	prop++;
> -	prop->name = "large-property-PAGE_SIZEx8";
> -	prop->length = PAGE_SIZE * 8;
> -	prop->value = kzalloc(prop->length, GFP_KERNEL);
> -	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, prop->value);
> -	KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop), 0,
> -			    "Adding a large property should have passed\n");
> -}
> +#include "test-common.h"
>  
>  static int of_unittest_check_node_linkage(struct device_node *np)
>  {
> @@ -1177,170 +1000,6 @@ static void of_unittest_platform_populate(struct kunit *test)
>  	of_node_put(np);
>  }
>  
> -/**
> - *	update_node_properties - adds the properties
> - *	of np into dup node (present in live tree) and
> - *	updates parent of children of np to dup.
> - *
> - *	@np:	node whose properties are being added to the live tree
> - *	@dup:	node present in live tree to be updated
> - */
> -static void update_node_properties(struct device_node *np,
> -					struct device_node *dup)
> -{
> -	struct property *prop;
> -	struct property *save_next;
> -	struct device_node *child;
> -	int ret;
> -
> -	for_each_child_of_node(np, child)
> -		child->parent = dup;
> -
> -	/*
> -	 * "unittest internal error: unable to add testdata property"
> -	 *
> -	 *    If this message reports a property in node '/__symbols__' then
> -	 *    the respective unittest overlay contains a label that has the
> -	 *    same name as a label in the live devicetree.  The label will
> -	 *    be in the live devicetree only if the devicetree source was
> -	 *    compiled with the '-@' option.  If you encounter this error,
> -	 *    please consider renaming __all__ of the labels in the unittest
> -	 *    overlay dts files with an odd prefix that is unlikely to be
> -	 *    used in a real devicetree.
> -	 */
> -
> -	/*
> -	 * open code for_each_property_of_node() because of_add_property()
> -	 * sets prop->next to NULL
> -	 */
> -	for (prop = np->properties; prop != NULL; prop = save_next) {
> -		save_next = prop->next;
> -		ret = of_add_property(dup, prop);
> -		if (ret)
> -			pr_err("unittest internal error: unable to add testdata property %pOF/%s",
> -			       np, prop->name);
> -	}
> -}
> -
> -/**
> - *	attach_node_and_children - attaches nodes
> - *	and its children to live tree
> - *
> - *	@np:	Node to attach to live tree
> - */
> -static void attach_node_and_children(struct device_node *np)
> -{
> -	struct device_node *next, *dup, *child;
> -	unsigned long flags;
> -	const char *full_name;
> -
> -	full_name = kasprintf(GFP_KERNEL, "%pOF", np);
> -
> -	if (!strcmp(full_name, "/__local_fixups__") ||
> -	    !strcmp(full_name, "/__fixups__"))
> -		return;
> -
> -	dup = of_find_node_by_path(full_name);
> -	kfree(full_name);
> -	if (dup) {
> -		update_node_properties(np, dup);
> -		return;
> -	}
> -
> -	child = np->child;
> -	np->child = NULL;
> -
> -	mutex_lock(&of_mutex);
> -	raw_spin_lock_irqsave(&devtree_lock, flags);
> -	np->sibling = np->parent->child;
> -	np->parent->child = np;
> -	of_node_clear_flag(np, OF_DETACHED);
> -	raw_spin_unlock_irqrestore(&devtree_lock, flags);
> -
> -	__of_attach_node_sysfs(np);
> -	mutex_unlock(&of_mutex);
> -
> -	while (child) {
> -		next = child->sibling;
> -		attach_node_and_children(child);
> -		child = next;
> -	}
> -}
> -
> -/**
> - *	unittest_data_add - Reads, copies data from
> - *	linked tree and attaches it to the live tree
> - */
> -static int unittest_data_add(void)
> -{
> -	void *unittest_data;
> -	struct device_node *unittest_data_node, *np;
> -	/*
> -	 * __dtb_testcases_begin[] and __dtb_testcases_end[] are magically
> -	 * created by cmd_dt_S_dtb in scripts/Makefile.lib
> -	 */
> -	extern uint8_t __dtb_testcases_begin[];
> -	extern uint8_t __dtb_testcases_end[];
> -	const int size = __dtb_testcases_end - __dtb_testcases_begin;
> -	int rc;
> -
> -	if (!size) {
> -		pr_warn("%s: No testcase data to attach; not running tests\n",
> -			__func__);
> -		return -ENODATA;
> -	}
> -
> -	/* creating copy */
> -	unittest_data = kmemdup(__dtb_testcases_begin, size, GFP_KERNEL);
> -
> -	if (!unittest_data) {
> -		pr_warn("%s: Failed to allocate memory for unittest_data; "
> -			"not running tests\n", __func__);
> -		return -ENOMEM;
> -	}
> -	of_fdt_unflatten_tree(unittest_data, NULL, &unittest_data_node);
> -	if (!unittest_data_node) {
> -		pr_warn("%s: No tree to attach; not running tests\n", __func__);
> -		return -ENODATA;
> -	}
> -
> -	/*
> -	 * This lock normally encloses of_resolve_phandles()
> -	 */
> -	of_overlay_mutex_lock();
> -
> -	rc = of_resolve_phandles(unittest_data_node);
> -	if (rc) {
> -		pr_err("%s: Failed to resolve phandles (rc=%i)\n", __func__, rc);
> -		of_overlay_mutex_unlock();
> -		return -EINVAL;
> -	}
> -
> -	if (!of_root) {
> -		of_root = unittest_data_node;
> -		for_each_of_allnodes(np)
> -			__of_attach_node_sysfs(np);
> -		of_aliases = of_find_node_by_path("/aliases");
> -		of_chosen = of_find_node_by_path("/chosen");
> -		of_overlay_mutex_unlock();
> -		return 0;
> -	}
> -
> -	/* attach the sub-tree to live tree */
> -	np = unittest_data_node->child;
> -	while (np) {
> -		struct device_node *next = np->sibling;
> -
> -		np->parent = of_root;
> -		attach_node_and_children(np);
> -		np = next;
> -	}
> -
> -	of_overlay_mutex_unlock();
> -
> -	return 0;
> -}
> -
>  #ifdef CONFIG_OF_OVERLAY
>  static int overlay_data_apply(const char *overlay_name, int *overlay_id);
>  
> @@ -2563,8 +2222,6 @@ static int of_test_init(struct kunit *test)
>  static struct kunit_case of_test_cases[] = {
>  	KUNIT_CASE(of_unittest_check_tree_linkage),
>  	KUNIT_CASE(of_unittest_check_phandles),
> -	KUNIT_CASE(of_unittest_find_node_by_name),
> -	KUNIT_CASE(of_unittest_dynamic),
>  	KUNIT_CASE(of_unittest_parse_phandle_with_args),
>  	KUNIT_CASE(of_unittest_parse_phandle_with_args_map),
>  	KUNIT_CASE(of_unittest_printf),
> 





* Re: [RFC v4 17/17] of: unittest: split up some super large test cases
@ 2019-03-22  1:16     ` Frank Rowand
  0 siblings, 0 replies; 316+ messages in thread
From: Frank Rowand @ 2019-03-22  1:16 UTC (permalink / raw)
  To: Brendan Higgins, keescook, mcgrof, shuah, robh, kieran.bingham
  Cc: brakmo, pmladek, amir73il, dri-devel, Alexander.Levin,
	linux-kselftest, linux-nvdimm, richard, knut.omang, wfg, joel,
	jdike, dan.carpenter, devicetree, Tim.Bird, linux-um, rostedt,
	julia.lawall, kunit-dev, gregkh, linux-kernel, daniel, mpe, joe,
	khilman

On 2/14/19 1:37 PM, Brendan Higgins wrote:
> Split up the super large test cases of_unittest_find_node_by_name and
> of_unittest_dynamic into properly sized and defined test cases.
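
With this split, each of the former mega-tests becomes its own
kunit_module whose init hook stashes shared state in test->priv,
roughly as follows (condensed from the diff below, not a literal
excerpt):

	struct of_test_dynamic_context {
		struct device_node *np;
		struct property *prop0;
		struct property *prop1;
	};

	static int of_test_dynamic_init(struct kunit *test)
	{
		struct of_test_dynamic_context *ctx;

		KUNIT_ASSERT_EQ(test, 0, unittest_data_add());

		ctx = kunit_kzalloc(test, sizeof(*ctx), GFP_KERNEL);
		KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx);
		test->priv = ctx;

		/* node and scratch properties shared by every case */
		ctx->np = of_find_node_by_path("/testcase-data");
		KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx->np);
		ctx->prop0 = kunit_kzalloc(test, sizeof(*ctx->prop0), GFP_KERNEL);
		ctx->prop1 = kunit_kzalloc(test, sizeof(*ctx->prop1), GFP_KERNEL);

		return 0;
	}

	static void of_test_dynamic_basic(struct kunit *test)
	{
		struct of_test_dynamic_context *ctx = test->priv;

		ctx->prop0->name = "new-property";
		ctx->prop0->value = "new-property-data";
		ctx->prop0->length = strlen(ctx->prop0->value) + 1;
		KUNIT_EXPECT_EQ(test, of_add_property(ctx->np, ctx->prop0), 0);
		KUNIT_EXPECT_EQ(test, of_remove_property(ctx->np, ctx->prop0), 0);
	}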

I also still object to this patch.

-Frank


> 
> Signed-off-by: Brendan Higgins <brendanhiggins@google.com>
> ---
>  drivers/of/base-test.c | 297 ++++++++++++++++++++++++++++++++++-------
>  1 file changed, 249 insertions(+), 48 deletions(-)
> 
> diff --git a/drivers/of/base-test.c b/drivers/of/base-test.c
> index 3d3f4f1b74800..7b44c967ed2fd 100644
> --- a/drivers/of/base-test.c
> +++ b/drivers/of/base-test.c
> @@ -8,10 +8,10 @@
>  
>  #include "test-common.h"
>  
> -static void of_unittest_find_node_by_name(struct kunit *test)
> +static void of_test_find_node_by_name_basic(struct kunit *test)
>  {
>  	struct device_node *np;
> -	const char *options, *name;
> +	const char *name;
>  
>  	np = of_find_node_by_path("/testcase-data");
>  	name = kasprintf(GFP_KERNEL, "%pOF", np);
> @@ -20,11 +20,21 @@ static void of_unittest_find_node_by_name(struct kunit *test)
>  			       "find /testcase-data failed\n");
>  	of_node_put(np);
>  	kfree(name);
> +}
>  
> +static void of_test_find_node_by_name_trailing_slash(struct kunit *test)
> +{
>  	/* Test if trailing '/' works */
>  	KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("/testcase-data/"), NULL,
>  			    "trailing '/' on /testcase-data/ should fail\n");
>  
> +}
> +
> +static void of_test_find_node_by_name_multiple_components(struct kunit *test)
> +{
> +	struct device_node *np;
> +	const char *name;
> +
>  	np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
>  	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>  	name = kasprintf(GFP_KERNEL, "%pOF", np);
> @@ -33,6 +43,12 @@ static void of_unittest_find_node_by_name(struct kunit *test)
>  		"find /testcase-data/phandle-tests/consumer-a failed\n");
>  	of_node_put(np);
>  	kfree(name);
> +}
> +
> +static void of_test_find_node_by_name_with_alias(struct kunit *test)
> +{
> +	struct device_node *np;
> +	const char *name;
>  
>  	np = of_find_node_by_path("testcase-alias");
>  	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> @@ -41,10 +57,23 @@ static void of_unittest_find_node_by_name(struct kunit *test)
>  			       "find testcase-alias failed\n");
>  	of_node_put(np);
>  	kfree(name);
> +}
>  
> +static void of_test_find_node_by_name_with_alias_and_slash(struct kunit *test)
> +{
>  	/* Test if trailing '/' works on aliases */
>  	KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("testcase-alias/"), NULL,
> -			    "trailing '/' on testcase-alias/ should fail\n");
> +			   "trailing '/' on testcase-alias/ should fail\n");
> +}
> +
> +/*
> + * TODO(brendanhiggins@google.com): This looks like a duplicate of
> + * of_test_find_node_by_name_multiple_components
> + */
> +static void of_test_find_node_by_name_multiple_components_2(struct kunit *test)
> +{
> +	struct device_node *np;
> +	const char *name;
>  
>  	np = of_find_node_by_path("testcase-alias/phandle-tests/consumer-a");
>  	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> @@ -54,17 +83,33 @@ static void of_unittest_find_node_by_name(struct kunit *test)
>  		"find testcase-alias/phandle-tests/consumer-a failed\n");
>  	of_node_put(np);
>  	kfree(name);
> +}
> +
> +static void of_test_find_node_by_name_missing_path(struct kunit *test)
> +{
> +	struct device_node *np;
>  
>  	KUNIT_EXPECT_EQ_MSG(
>  		test,
>  		np = of_find_node_by_path("/testcase-data/missing-path"), NULL,
>  		"non-existent path returned node %pOF\n", np);
>  	of_node_put(np);
> +}
> +
> +static void of_test_find_node_by_name_missing_alias(struct kunit *test)
> +{
> +	struct device_node *np;
>  
>  	KUNIT_EXPECT_EQ_MSG(
>  		test, np = of_find_node_by_path("missing-alias"), NULL,
>  		"non-existent alias returned node %pOF\n", np);
>  	of_node_put(np);
> +}
> +
> +static void of_test_find_node_by_name_missing_alias_with_relative_path(
> +		struct kunit *test)
> +{
> +	struct device_node *np;
>  
>  	KUNIT_EXPECT_EQ_MSG(
>  		test,
> @@ -72,12 +117,24 @@ static void of_unittest_find_node_by_name(struct kunit *test)
>  		"non-existent alias with relative path returned node %pOF\n",
>  		np);
>  	of_node_put(np);
> +}
> +
> +static void of_test_find_node_by_name_with_option(struct kunit *test)
> +{
> +	struct device_node *np;
> +	const char *options;
>  
>  	np = of_find_node_opts_by_path("/testcase-data:testoption", &options);
>  	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>  	KUNIT_EXPECT_STREQ_MSG(test, "testoption", options,
>  			       "option path test failed\n");
>  	of_node_put(np);
> +}
> +
> +static void of_test_find_node_by_name_with_option_and_slash(struct kunit *test)
> +{
> +	struct device_node *np;
> +	const char *options;
>  
>  	np = of_find_node_opts_by_path("/testcase-data:test/option", &options);
>  	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> @@ -90,11 +147,22 @@ static void of_unittest_find_node_by_name(struct kunit *test)
>  	KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
>  			       "option path test, subcase #2 failed\n");
>  	of_node_put(np);
> +}
> +
> +static void of_test_find_node_by_name_with_null_option(struct kunit *test)
> +{
> +	struct device_node *np;
>  
>  	np = of_find_node_opts_by_path("/testcase-data:testoption", NULL);
>  	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(test, np,
>  					 "NULL option path test failed\n");
>  	of_node_put(np);
> +}
> +
> +static void of_test_find_node_by_name_with_option_alias(struct kunit *test)
> +{
> +	struct device_node *np;
> +	const char *options;
>  
>  	np = of_find_node_opts_by_path("testcase-alias:testaliasoption",
>  				       &options);
> @@ -102,6 +170,13 @@ static void of_unittest_find_node_by_name(struct kunit *test)
>  	KUNIT_EXPECT_STREQ_MSG(test, "testaliasoption", options,
>  			       "option alias path test failed\n");
>  	of_node_put(np);
> +}
> +
> +static void of_test_find_node_by_name_with_option_alias_and_slash(
> +		struct kunit *test)
> +{
> +	struct device_node *np;
> +	const char *options;
>  
>  	np = of_find_node_opts_by_path("testcase-alias:test/alias/option",
>  				       &options);
> @@ -109,11 +184,22 @@ static void of_unittest_find_node_by_name(struct kunit *test)
>  	KUNIT_EXPECT_STREQ_MSG(test, "test/alias/option", options,
>  			       "option alias path test, subcase #1 failed\n");
>  	of_node_put(np);
> +}
> +
> +static void of_test_find_node_by_name_with_null_option_alias(struct kunit *test)
> +{
> +	struct device_node *np;
>  
>  	np = of_find_node_opts_by_path("testcase-alias:testaliasoption", NULL);
>  	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(
>  			test, np, "NULL option alias path test failed\n");
>  	of_node_put(np);
> +}
> +
> +static void of_test_find_node_by_name_option_clearing(struct kunit *test)
> +{
> +	struct device_node *np;
> +	const char *options;
>  
>  	options = "testoption";
>  	np = of_find_node_opts_by_path("testcase-alias", &options);
> @@ -121,6 +207,12 @@ static void of_unittest_find_node_by_name(struct kunit *test)
>  	KUNIT_EXPECT_EQ_MSG(test, options, NULL,
>  			    "option clearing test failed\n");
>  	of_node_put(np);
> +}
> +
> +static void of_test_find_node_by_name_option_clearing_root(struct kunit *test)
> +{
> +	struct device_node *np;
> +	const char *options;
>  
>  	options = "testoption";
>  	np = of_find_node_opts_by_path("/", &options);
> @@ -130,65 +222,147 @@ static void of_unittest_find_node_by_name(struct kunit *test)
>  	of_node_put(np);
>  }
>  
> -static void of_unittest_dynamic(struct kunit *test)
> +static int of_test_find_node_by_name_init(struct kunit *test)
>  {
> +	/* adding data for unittest */
> +	KUNIT_ASSERT_EQ(test, 0, unittest_data_add());
> +
> +	if (!of_aliases)
> +		of_aliases = of_find_node_by_path("/aliases");
> +
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, of_find_node_by_path(
> +			"/testcase-data/phandle-tests/consumer-a"));
> +
> +	return 0;
> +}
> +
> +static struct kunit_case of_test_find_node_by_name_cases[] = {
> +	KUNIT_CASE(of_test_find_node_by_name_basic),
> +	KUNIT_CASE(of_test_find_node_by_name_trailing_slash),
> +	KUNIT_CASE(of_test_find_node_by_name_multiple_components),
> +	KUNIT_CASE(of_test_find_node_by_name_with_alias),
> +	KUNIT_CASE(of_test_find_node_by_name_with_alias_and_slash),
> +	KUNIT_CASE(of_test_find_node_by_name_multiple_components_2),
> +	KUNIT_CASE(of_test_find_node_by_name_missing_path),
> +	KUNIT_CASE(of_test_find_node_by_name_missing_alias),
> +	KUNIT_CASE(of_test_find_node_by_name_missing_alias_with_relative_path),
> +	KUNIT_CASE(of_test_find_node_by_name_with_option),
> +	KUNIT_CASE(of_test_find_node_by_name_with_option_and_slash),
> +	KUNIT_CASE(of_test_find_node_by_name_with_null_option),
> +	KUNIT_CASE(of_test_find_node_by_name_with_option_alias),
> +	KUNIT_CASE(of_test_find_node_by_name_with_option_alias_and_slash),
> +	KUNIT_CASE(of_test_find_node_by_name_with_null_option_alias),
> +	KUNIT_CASE(of_test_find_node_by_name_option_clearing),
> +	KUNIT_CASE(of_test_find_node_by_name_option_clearing_root),
> +	{},
> +};
> +
> +static struct kunit_module of_test_find_node_by_name_module = {
> +	.name = "of-test-find-node-by-name",
> +	.init = of_test_find_node_by_name_init,
> +	.test_cases = of_test_find_node_by_name_cases,
> +};
> +module_test(of_test_find_node_by_name_module);
> +
> +struct of_test_dynamic_context {
>  	struct device_node *np;
> -	struct property *prop;
> +	struct property *prop0;
> +	struct property *prop1;
> +};
>  
> -	np = of_find_node_by_path("/testcase-data");
> -	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> +static void of_test_dynamic_basic(struct kunit *test)
> +{
> +	struct of_test_dynamic_context *ctx = test->priv;
> +	struct device_node *np = ctx->np;
> +	struct property *prop0 = ctx->prop0;
>  
> -	/* Array of 4 properties for the purpose of testing */
> -	prop = kcalloc(4, sizeof(*prop), GFP_KERNEL);
> -	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, prop);
> +	/* Add a new property - should pass*/
> +	prop0->name = "new-property";
> +	prop0->value = "new-property-data";
> +	prop0->length = strlen(prop0->value) + 1;
> +	KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop0), 0,
> +			    "Adding a new property failed\n");
> +
> +	/* Test that we can remove a property */
> +	KUNIT_EXPECT_EQ(test, of_remove_property(np, prop0), 0);
> +}
> +
> +static void of_test_dynamic_add_existing_property(struct kunit *test)
> +{
> +	struct of_test_dynamic_context *ctx = test->priv;
> +	struct device_node *np = ctx->np;
> +	struct property *prop0 = ctx->prop0, *prop1 = ctx->prop1;
>  
>  	/* Add a new property - should pass*/
> -	prop->name = "new-property";
> -	prop->value = "new-property-data";
> -	prop->length = strlen(prop->value) + 1;
> -	KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop), 0,
> +	prop0->name = "new-property";
> +	prop0->value = "new-property-data";
> +	prop0->length = strlen(prop0->value) + 1;
> +	KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop0), 0,
>  			    "Adding a new property failed\n");
>  
>  	/* Try to add an existing property - should fail */
> -	prop++;
> -	prop->name = "new-property";
> -	prop->value = "new-property-data-should-fail";
> -	prop->length = strlen(prop->value) + 1;
> -	KUNIT_EXPECT_NE_MSG(test, of_add_property(np, prop), 0,
> +	prop1->name = "new-property";
> +	prop1->value = "new-property-data-should-fail";
> +	prop1->length = strlen(prop1->value) + 1;
> +	KUNIT_EXPECT_NE_MSG(test, of_add_property(np, prop1), 0,
>  			    "Adding an existing property should have failed\n");
> +}
> +
> +static void of_test_dynamic_modify_existing_property(struct kunit *test)
> +{
> +	struct of_test_dynamic_context *ctx = test->priv;
> +	struct device_node *np = ctx->np;
> +	struct property *prop0 = ctx->prop0, *prop1 = ctx->prop1;
> +
> +	/* Add a new property - should pass*/
> +	prop0->name = "new-property";
> +	prop0->value = "new-property-data";
> +	prop0->length = strlen(prop0->value) + 1;
> +	KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop0), 0,
> +			    "Adding a new property failed\n");
>  
>  	/* Try to modify an existing property - should pass */
> -	prop->value = "modify-property-data-should-pass";
> -	prop->length = strlen(prop->value) + 1;
> -	KUNIT_EXPECT_EQ_MSG(
> -		test, of_update_property(np, prop), 0,
> -		"Updating an existing property should have passed\n");
> +	prop1->name = "new-property";
> +	prop1->value = "modify-property-data-should-pass";
> +	prop1->length = strlen(prop1->value) + 1;
> +	KUNIT_EXPECT_EQ_MSG(test, of_update_property(np, prop1), 0,
> +			    "Updating an existing property should have passed\n");
> +}
> +
> +static void of_test_dynamic_modify_non_existent_property(struct kunit *test)
> +{
> +	struct of_test_dynamic_context *ctx = test->priv;
> +	struct device_node *np = ctx->np;
> +	struct property *prop0 = ctx->prop0;
>  
>  	/* Try to modify non-existent property - should pass*/
> -	prop++;
> -	prop->name = "modify-property";
> -	prop->value = "modify-missing-property-data-should-pass";
> -	prop->length = strlen(prop->value) + 1;
> -	KUNIT_EXPECT_EQ_MSG(test, of_update_property(np, prop), 0,
> +	prop0->name = "modify-property";
> +	prop0->value = "modify-missing-property-data-should-pass";
> +	prop0->length = strlen(prop0->value) + 1;
> +	KUNIT_EXPECT_EQ_MSG(test, of_update_property(np, prop0), 0,
>  			    "Updating a missing property should have passed\n");
> +}
>  
> -	/* Remove property - should pass */
> -	KUNIT_EXPECT_EQ_MSG(test, of_remove_property(np, prop), 0,
> -			    "Removing a property should have passed\n");
> +static void of_test_dynamic_large_property(struct kunit *test)
> +{
> +	struct of_test_dynamic_context *ctx = test->priv;
> +	struct device_node *np = ctx->np;
> +	struct property *prop0 = ctx->prop0;
>  
>  	/* Adding very large property - should pass */
> -	prop++;
> -	prop->name = "large-property-PAGE_SIZEx8";
> -	prop->length = PAGE_SIZE * 8;
> -	prop->value = kzalloc(prop->length, GFP_KERNEL);
> -	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, prop->value);
> -	KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop), 0,
> +	prop0->name = "large-property-PAGE_SIZEx8";
> +	prop0->length = PAGE_SIZE * 8;
> +	prop0->value = kunit_kzalloc(test, prop0->length, GFP_KERNEL);
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, prop0->value);
> +
> +	KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop0), 0,
>  			    "Adding a large property should have passed\n");
>  }
>  
> -static int of_test_init(struct kunit *test)
> +static int of_test_dynamic_init(struct kunit *test)
>  {
> -	/* adding data for unittest */
> +	struct of_test_dynamic_context *ctx;
> +
>  	KUNIT_ASSERT_EQ(test, 0, unittest_data_add());
>  
>  	if (!of_aliases)
> @@ -197,18 +371,45 @@ static int of_test_init(struct kunit *test)
>  	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, of_find_node_by_path(
>  			"/testcase-data/phandle-tests/consumer-a"));
>  
> +	ctx = kunit_kzalloc(test, sizeof(*ctx), GFP_KERNEL);
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx);
> +	test->priv = ctx;
> +
> +	ctx->np = of_find_node_by_path("/testcase-data");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx->np);
> +
> +	ctx->prop0 = kunit_kzalloc(test, sizeof(*ctx->prop0), GFP_KERNEL);
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx->prop0);
> +
> +	ctx->prop1 = kunit_kzalloc(test, sizeof(*ctx->prop1), GFP_KERNEL);
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx->prop1);
> +
>  	return 0;
>  }
>  
> -static struct kunit_case of_test_cases[] = {
> -	KUNIT_CASE(of_unittest_find_node_by_name),
> -	KUNIT_CASE(of_unittest_dynamic),
> +static void of_test_dynamic_exit(struct kunit *test)
> +{
> +	struct of_test_dynamic_context *ctx = test->priv;
> +	struct device_node *np = ctx->np;
> +
> +	of_remove_property(np, ctx->prop0);
> +	of_remove_property(np, ctx->prop1);
> +	of_node_put(np);
> +}
> +
> +static struct kunit_case of_test_dynamic_cases[] = {
> +	KUNIT_CASE(of_test_dynamic_basic),
> +	KUNIT_CASE(of_test_dynamic_add_existing_property),
> +	KUNIT_CASE(of_test_dynamic_modify_existing_property),
> +	KUNIT_CASE(of_test_dynamic_modify_non_existent_property),
> +	KUNIT_CASE(of_test_dynamic_large_property),
>  	{},
>  };
>  
> -static struct kunit_module of_test_module = {
> -	.name = "of-base-test",
> -	.init = of_test_init,
> -	.test_cases = of_test_cases,
> +static struct kunit_module of_test_dynamic_module = {
> +	.name = "of-dynamic-test",
> +	.init = of_test_dynamic_init,
> +	.exit = of_test_dynamic_exit,
> +	.test_cases = of_test_dynamic_cases,
>  };
> -module_test(of_test_module);
> +module_test(of_test_dynamic_module);
> 



* Re: [RFC v4 17/17] of: unittest: split up some super large test cases
@ 2019-03-22  1:16     ` Frank Rowand
  0 siblings, 0 replies; 316+ messages in thread
From: Frank Rowand @ 2019-03-22  1:16 UTC (permalink / raw)
  To: Brendan Higgins, keescook, mcgrof, shuah, robh, kieran.bingham
  Cc: gregkh, joel, mpe, joe, brakmo, rostedt, Tim.Bird, khilman,
	julia.lawall, linux-kselftest, kunit-dev, linux-kernel, jdike,
	richard, linux-um, daniel, dri-devel, dan.j.williams,
	linux-nvdimm, knut.omang, devicetree, pmladek, Alexander.Levin,
	amir73il, dan.carpenter, wfg

On 2/14/19 1:37 PM, Brendan Higgins wrote:
> Split up the super large test cases of_unittest_find_node_by_name and
> of_unittest_dynamic into properly sized and defined test cases.

I also still object to this patch.

-Frank


> 
> Signed-off-by: Brendan Higgins <brendanhiggins@google.com>
> ---
>  drivers/of/base-test.c | 297 ++++++++++++++++++++++++++++++++++-------
>  1 file changed, 249 insertions(+), 48 deletions(-)
> 
> diff --git a/drivers/of/base-test.c b/drivers/of/base-test.c
> index 3d3f4f1b74800..7b44c967ed2fd 100644
> --- a/drivers/of/base-test.c
> +++ b/drivers/of/base-test.c
> @@ -8,10 +8,10 @@
>  
>  #include "test-common.h"
>  
> -static void of_unittest_find_node_by_name(struct kunit *test)
> +static void of_test_find_node_by_name_basic(struct kunit *test)
>  {
>  	struct device_node *np;
> -	const char *options, *name;
> +	const char *name;
>  
>  	np = of_find_node_by_path("/testcase-data");
>  	name = kasprintf(GFP_KERNEL, "%pOF", np);
> @@ -20,11 +20,21 @@ static void of_unittest_find_node_by_name(struct kunit *test)
>  			       "find /testcase-data failed\n");
>  	of_node_put(np);
>  	kfree(name);
> +}
>  
> +static void of_test_find_node_by_name_trailing_slash(struct kunit *test)
> +{
>  	/* Test if trailing '/' works */
>  	KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("/testcase-data/"), NULL,
>  			    "trailing '/' on /testcase-data/ should fail\n");
>  
> +}
> +
> +static void of_test_find_node_by_name_multiple_components(struct kunit *test)
> +{
> +	struct device_node *np;
> +	const char *name;
> +
>  	np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
>  	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>  	name = kasprintf(GFP_KERNEL, "%pOF", np);
> @@ -33,6 +43,12 @@ static void of_unittest_find_node_by_name(struct kunit *test)
>  		"find /testcase-data/phandle-tests/consumer-a failed\n");
>  	of_node_put(np);
>  	kfree(name);
> +}
> +
> +static void of_test_find_node_by_name_with_alias(struct kunit *test)
> +{
> +	struct device_node *np;
> +	const char *name;
>  
>  	np = of_find_node_by_path("testcase-alias");
>  	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> @@ -41,10 +57,23 @@ static void of_unittest_find_node_by_name(struct kunit *test)
>  			       "find testcase-alias failed\n");
>  	of_node_put(np);
>  	kfree(name);
> +}
>  
> +static void of_test_find_node_by_name_with_alias_and_slash(struct kunit *test)
> +{
>  	/* Test if trailing '/' works on aliases */
>  	KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("testcase-alias/"), NULL,
> -			    "trailing '/' on testcase-alias/ should fail\n");
> +			   "trailing '/' on testcase-alias/ should fail\n");
> +}
> +
> +/*
> + * TODO(brendanhiggins@google.com): This looks like a duplicate of
> + * of_test_find_node_by_name_multiple_components
> + */
> +static void of_test_find_node_by_name_multiple_components_2(struct kunit *test)
> +{
> +	struct device_node *np;
> +	const char *name;
>  
>  	np = of_find_node_by_path("testcase-alias/phandle-tests/consumer-a");
>  	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> @@ -54,17 +83,33 @@ static void of_unittest_find_node_by_name(struct kunit *test)
>  		"find testcase-alias/phandle-tests/consumer-a failed\n");
>  	of_node_put(np);
>  	kfree(name);
> +}
> +
> +static void of_test_find_node_by_name_missing_path(struct kunit *test)
> +{
> +	struct device_node *np;
>  
>  	KUNIT_EXPECT_EQ_MSG(
>  		test,
>  		np = of_find_node_by_path("/testcase-data/missing-path"), NULL,
>  		"non-existent path returned node %pOF\n", np);
>  	of_node_put(np);
> +}
> +
> +static void of_test_find_node_by_name_missing_alias(struct kunit *test)
> +{
> +	struct device_node *np;
>  
>  	KUNIT_EXPECT_EQ_MSG(
>  		test, np = of_find_node_by_path("missing-alias"), NULL,
>  		"non-existent alias returned node %pOF\n", np);
>  	of_node_put(np);
> +}
> +
> +static void of_test_find_node_by_name_missing_alias_with_relative_path(
> +		struct kunit *test)
> +{
> +	struct device_node *np;
>  
>  	KUNIT_EXPECT_EQ_MSG(
>  		test,
> @@ -72,12 +117,24 @@ static void of_unittest_find_node_by_name(struct kunit *test)
>  		"non-existent alias with relative path returned node %pOF\n",
>  		np);
>  	of_node_put(np);
> +}
> +
> +static void of_test_find_node_by_name_with_option(struct kunit *test)
> +{
> +	struct device_node *np;
> +	const char *options;
>  
>  	np = of_find_node_opts_by_path("/testcase-data:testoption", &options);
>  	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>  	KUNIT_EXPECT_STREQ_MSG(test, "testoption", options,
>  			       "option path test failed\n");
>  	of_node_put(np);
> +}
> +
> +static void of_test_find_node_by_name_with_option_and_slash(struct kunit *test)
> +{
> +	struct device_node *np;
> +	const char *options;
>  
>  	np = of_find_node_opts_by_path("/testcase-data:test/option", &options);
>  	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> @@ -90,11 +147,22 @@ static void of_unittest_find_node_by_name(struct kunit *test)
>  	KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
>  			       "option path test, subcase #2 failed\n");
>  	of_node_put(np);
> +}
> +
> +static void of_test_find_node_by_name_with_null_option(struct kunit *test)
> +{
> +	struct device_node *np;
>  
>  	np = of_find_node_opts_by_path("/testcase-data:testoption", NULL);
>  	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(test, np,
>  					 "NULL option path test failed\n");
>  	of_node_put(np);
> +}
> +
> +static void of_test_find_node_by_name_with_option_alias(struct kunit *test)
> +{
> +	struct device_node *np;
> +	const char *options;
>  
>  	np = of_find_node_opts_by_path("testcase-alias:testaliasoption",
>  				       &options);
> @@ -102,6 +170,13 @@ static void of_unittest_find_node_by_name(struct kunit *test)
>  	KUNIT_EXPECT_STREQ_MSG(test, "testaliasoption", options,
>  			       "option alias path test failed\n");
>  	of_node_put(np);
> +}
> +
> +static void of_test_find_node_by_name_with_option_alias_and_slash(
> +		struct kunit *test)
> +{
> +	struct device_node *np;
> +	const char *options;
>  
>  	np = of_find_node_opts_by_path("testcase-alias:test/alias/option",
>  				       &options);
> @@ -109,11 +184,22 @@ static void of_unittest_find_node_by_name(struct kunit *test)
>  	KUNIT_EXPECT_STREQ_MSG(test, "test/alias/option", options,
>  			       "option alias path test, subcase #1 failed\n");
>  	of_node_put(np);
> +}
> +
> +static void of_test_find_node_by_name_with_null_option_alias(struct kunit *test)
> +{
> +	struct device_node *np;
>  
>  	np = of_find_node_opts_by_path("testcase-alias:testaliasoption", NULL);
>  	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(
>  			test, np, "NULL option alias path test failed\n");
>  	of_node_put(np);
> +}
> +
> +static void of_test_find_node_by_name_option_clearing(struct kunit *test)
> +{
> +	struct device_node *np;
> +	const char *options;
>  
>  	options = "testoption";
>  	np = of_find_node_opts_by_path("testcase-alias", &options);
> @@ -121,6 +207,12 @@ static void of_unittest_find_node_by_name(struct kunit *test)
>  	KUNIT_EXPECT_EQ_MSG(test, options, NULL,
>  			    "option clearing test failed\n");
>  	of_node_put(np);
> +}
> +
> +static void of_test_find_node_by_name_option_clearing_root(struct kunit *test)
> +{
> +	struct device_node *np;
> +	const char *options;
>  
>  	options = "testoption";
>  	np = of_find_node_opts_by_path("/", &options);
> @@ -130,65 +222,147 @@ static void of_unittest_find_node_by_name(struct kunit *test)
>  	of_node_put(np);
>  }
>  
> -static void of_unittest_dynamic(struct kunit *test)
> +static int of_test_find_node_by_name_init(struct kunit *test)
>  {
> +	/* adding data for unittest */
> +	KUNIT_ASSERT_EQ(test, 0, unittest_data_add());
> +
> +	if (!of_aliases)
> +		of_aliases = of_find_node_by_path("/aliases");
> +
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, of_find_node_by_path(
> +			"/testcase-data/phandle-tests/consumer-a"));
> +
> +	return 0;
> +}
> +
> +static struct kunit_case of_test_find_node_by_name_cases[] = {
> +	KUNIT_CASE(of_test_find_node_by_name_basic),
> +	KUNIT_CASE(of_test_find_node_by_name_trailing_slash),
> +	KUNIT_CASE(of_test_find_node_by_name_multiple_components),
> +	KUNIT_CASE(of_test_find_node_by_name_with_alias),
> +	KUNIT_CASE(of_test_find_node_by_name_with_alias_and_slash),
> +	KUNIT_CASE(of_test_find_node_by_name_multiple_components_2),
> +	KUNIT_CASE(of_test_find_node_by_name_missing_path),
> +	KUNIT_CASE(of_test_find_node_by_name_missing_alias),
> +	KUNIT_CASE(of_test_find_node_by_name_missing_alias_with_relative_path),
> +	KUNIT_CASE(of_test_find_node_by_name_with_option),
> +	KUNIT_CASE(of_test_find_node_by_name_with_option_and_slash),
> +	KUNIT_CASE(of_test_find_node_by_name_with_null_option),
> +	KUNIT_CASE(of_test_find_node_by_name_with_option_alias),
> +	KUNIT_CASE(of_test_find_node_by_name_with_option_alias_and_slash),
> +	KUNIT_CASE(of_test_find_node_by_name_with_null_option_alias),
> +	KUNIT_CASE(of_test_find_node_by_name_option_clearing),
> +	KUNIT_CASE(of_test_find_node_by_name_option_clearing_root),
> +	{},
> +};
> +
> +static struct kunit_module of_test_find_node_by_name_module = {
> +	.name = "of-test-find-node-by-name",
> +	.init = of_test_find_node_by_name_init,
> +	.test_cases = of_test_find_node_by_name_cases,
> +};
> +module_test(of_test_find_node_by_name_module);
> +
> +struct of_test_dynamic_context {
>  	struct device_node *np;
> -	struct property *prop;
> +	struct property *prop0;
> +	struct property *prop1;
> +};
>  
> -	np = of_find_node_by_path("/testcase-data");
> -	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> +static void of_test_dynamic_basic(struct kunit *test)
> +{
> +	struct of_test_dynamic_context *ctx = test->priv;
> +	struct device_node *np = ctx->np;
> +	struct property *prop0 = ctx->prop0;
>  
> -	/* Array of 4 properties for the purpose of testing */
> -	prop = kcalloc(4, sizeof(*prop), GFP_KERNEL);
> -	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, prop);
> +	/* Add a new property - should pass*/
> +	prop0->name = "new-property";
> +	prop0->value = "new-property-data";
> +	prop0->length = strlen(prop0->value) + 1;
> +	KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop0), 0,
> +			    "Adding a new property failed\n");
> +
> +	/* Test that we can remove a property */
> +	KUNIT_EXPECT_EQ(test, of_remove_property(np, prop0), 0);
> +}
> +
> +static void of_test_dynamic_add_existing_property(struct kunit *test)
> +{
> +	struct of_test_dynamic_context *ctx = test->priv;
> +	struct device_node *np = ctx->np;
> +	struct property *prop0 = ctx->prop0, *prop1 = ctx->prop1;
>  
>  	/* Add a new property - should pass*/
> -	prop->name = "new-property";
> -	prop->value = "new-property-data";
> -	prop->length = strlen(prop->value) + 1;
> -	KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop), 0,
> +	prop0->name = "new-property";
> +	prop0->value = "new-property-data";
> +	prop0->length = strlen(prop0->value) + 1;
> +	KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop0), 0,
>  			    "Adding a new property failed\n");
>  
>  	/* Try to add an existing property - should fail */
> -	prop++;
> -	prop->name = "new-property";
> -	prop->value = "new-property-data-should-fail";
> -	prop->length = strlen(prop->value) + 1;
> -	KUNIT_EXPECT_NE_MSG(test, of_add_property(np, prop), 0,
> +	prop1->name = "new-property";
> +	prop1->value = "new-property-data-should-fail";
> +	prop1->length = strlen(prop1->value) + 1;
> +	KUNIT_EXPECT_NE_MSG(test, of_add_property(np, prop1), 0,
>  			    "Adding an existing property should have failed\n");
> +}
> +
> +static void of_test_dynamic_modify_existing_property(struct kunit *test)
> +{
> +	struct of_test_dynamic_context *ctx = test->priv;
> +	struct device_node *np = ctx->np;
> +	struct property *prop0 = ctx->prop0, *prop1 = ctx->prop1;
> +
> +	/* Add a new property - should pass*/
> +	prop0->name = "new-property";
> +	prop0->value = "new-property-data";
> +	prop0->length = strlen(prop0->value) + 1;
> +	KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop0), 0,
> +			    "Adding a new property failed\n");
>  
>  	/* Try to modify an existing property - should pass */
> -	prop->value = "modify-property-data-should-pass";
> -	prop->length = strlen(prop->value) + 1;
> -	KUNIT_EXPECT_EQ_MSG(
> -		test, of_update_property(np, prop), 0,
> -		"Updating an existing property should have passed\n");
> +	prop1->name = "new-property";
> +	prop1->value = "modify-property-data-should-pass";
> +	prop1->length = strlen(prop1->value) + 1;
> +	KUNIT_EXPECT_EQ_MSG(test, of_update_property(np, prop1), 0,
> +			    "Updating an existing property should have passed\n");
> +}
> +
> +static void of_test_dynamic_modify_non_existent_property(struct kunit *test)
> +{
> +	struct of_test_dynamic_context *ctx = test->priv;
> +	struct device_node *np = ctx->np;
> +	struct property *prop0 = ctx->prop0;
>  
>  	/* Try to modify non-existent property - should pass*/
> -	prop++;
> -	prop->name = "modify-property";
> -	prop->value = "modify-missing-property-data-should-pass";
> -	prop->length = strlen(prop->value) + 1;
> -	KUNIT_EXPECT_EQ_MSG(test, of_update_property(np, prop), 0,
> +	prop0->name = "modify-property";
> +	prop0->value = "modify-missing-property-data-should-pass";
> +	prop0->length = strlen(prop0->value) + 1;
> +	KUNIT_EXPECT_EQ_MSG(test, of_update_property(np, prop0), 0,
>  			    "Updating a missing property should have passed\n");
> +}
>  
> -	/* Remove property - should pass */
> -	KUNIT_EXPECT_EQ_MSG(test, of_remove_property(np, prop), 0,
> -			    "Removing a property should have passed\n");
> +static void of_test_dynamic_large_property(struct kunit *test)
> +{
> +	struct of_test_dynamic_context *ctx = test->priv;
> +	struct device_node *np = ctx->np;
> +	struct property *prop0 = ctx->prop0;
>  
>  	/* Adding very large property - should pass */
> -	prop++;
> -	prop->name = "large-property-PAGE_SIZEx8";
> -	prop->length = PAGE_SIZE * 8;
> -	prop->value = kzalloc(prop->length, GFP_KERNEL);
> -	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, prop->value);
> -	KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop), 0,
> +	prop0->name = "large-property-PAGE_SIZEx8";
> +	prop0->length = PAGE_SIZE * 8;
> +	prop0->value = kunit_kzalloc(test, prop0->length, GFP_KERNEL);
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, prop0->value);
> +
> +	KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop0), 0,
>  			    "Adding a large property should have passed\n");
>  }
>  
> -static int of_test_init(struct kunit *test)
> +static int of_test_dynamic_init(struct kunit *test)
>  {
> -	/* adding data for unittest */
> +	struct of_test_dynamic_context *ctx;
> +
>  	KUNIT_ASSERT_EQ(test, 0, unittest_data_add());
>  
>  	if (!of_aliases)
> @@ -197,18 +371,45 @@ static int of_test_init(struct kunit *test)
>  	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, of_find_node_by_path(
>  			"/testcase-data/phandle-tests/consumer-a"));
>  
> +	ctx = kunit_kzalloc(test, sizeof(*ctx), GFP_KERNEL);
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx);
> +	test->priv = ctx;
> +
> +	ctx->np = of_find_node_by_path("/testcase-data");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx->np);
> +
> +	ctx->prop0 = kunit_kzalloc(test, sizeof(*ctx->prop0), GFP_KERNEL);
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx->prop0);
> +
> +	ctx->prop1 = kunit_kzalloc(test, sizeof(*ctx->prop1), GFP_KERNEL);
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx->prop1);
> +
>  	return 0;
>  }
>  
> -static struct kunit_case of_test_cases[] = {
> -	KUNIT_CASE(of_unittest_find_node_by_name),
> -	KUNIT_CASE(of_unittest_dynamic),
> +static void of_test_dynamic_exit(struct kunit *test)
> +{
> +	struct of_test_dynamic_context *ctx = test->priv;
> +	struct device_node *np = ctx->np;
> +
> +	of_remove_property(np, ctx->prop0);
> +	of_remove_property(np, ctx->prop1);
> +	of_node_put(np);
> +}
> +
> +static struct kunit_case of_test_dynamic_cases[] = {
> +	KUNIT_CASE(of_test_dynamic_basic),
> +	KUNIT_CASE(of_test_dynamic_add_existing_property),
> +	KUNIT_CASE(of_test_dynamic_modify_existing_property),
> +	KUNIT_CASE(of_test_dynamic_modify_non_existent_property),
> +	KUNIT_CASE(of_test_dynamic_large_property),
>  	{},
>  };
>  
> -static struct kunit_module of_test_module = {
> -	.name = "of-base-test",
> -	.init = of_test_init,
> -	.test_cases = of_test_cases,
> +static struct kunit_module of_test_dynamic_module = {
> +	.name = "of-dynamic-test",
> +	.init = of_test_dynamic_init,
> +	.exit = of_test_dynamic_exit,
> +	.test_cases = of_test_dynamic_cases,
>  };
> -module_test(of_test_module);
> +module_test(of_test_dynamic_module);
> 


^ permalink raw reply	[flat|nested] 316+ messages in thread
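The refactoring pattern the patch applies is worth spelling out once, since it repeats for every hunk above: each former sub-check becomes a standalone case function, shared state moves into an init hook that allocates a context with kunit_kzalloc() and hangs it off test->priv, teardown goes into an exit hook, and the cases are registered through a kunit_case array and a kunit_module passed to module_test(). Below is a minimal sketch of that layout; it uses only APIs that appear in this series, but the example_* names and the header paths are illustrative assumptions rather than code from the patch.

#include <kunit/test.h>		/* KUnit header from this series; exact path assumed */
#include <linux/slab.h>		/* GFP_KERNEL */

struct example_context {
	int value;
};

static int example_test_init(struct kunit *test)
{
	struct example_context *ctx;

	/* Test-managed allocation: freed automatically when the test ends. */
	ctx = kunit_kzalloc(test, sizeof(*ctx), GFP_KERNEL);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx);

	ctx->value = 42;
	test->priv = ctx;

	return 0;
}

static void example_test_exit(struct kunit *test)
{
	/*
	 * Release anything that is not test-managed; the patch uses this
	 * hook for of_remove_property() and of_node_put().
	 */
}

static void example_test_basic(struct kunit *test)
{
	struct example_context *ctx = test->priv;

	KUNIT_EXPECT_EQ(test, ctx->value, 42);
}

static struct kunit_case example_test_cases[] = {
	KUNIT_CASE(example_test_basic),
	{},
};

static struct kunit_module example_test_module = {
	.name = "example-test",
	.init = example_test_init,
	.exit = example_test_exit,
	.test_cases = example_test_cases,
};
module_test(example_test_module);

The practical upside of the split is that each case then passes or fails independently instead of one oversized function reporting a single aggregate result, which appears to be what "properly sized and defined test cases" in the commit message is after.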

> -	prop->length = strlen(prop->value) + 1;
> -	KUNIT_EXPECT_EQ_MSG(
> -		test, of_update_property(np, prop), 0,
> -		"Updating an existing property should have passed\n");
> +	prop1->name = "new-property";
> +	prop1->value = "modify-property-data-should-pass";
> +	prop1->length = strlen(prop1->value) + 1;
> +	KUNIT_EXPECT_EQ_MSG(test, of_update_property(np, prop1), 0,
> +			    "Updating an existing property should have passed\n");
> +}
> +
> +static void of_test_dynamic_modify_non_existent_property(struct kunit *test)
> +{
> +	struct of_test_dynamic_context *ctx = test->priv;
> +	struct device_node *np = ctx->np;
> +	struct property *prop0 = ctx->prop0;
>  
>  	/* Try to modify non-existent property - should pass*/
> -	prop++;
> -	prop->name = "modify-property";
> -	prop->value = "modify-missing-property-data-should-pass";
> -	prop->length = strlen(prop->value) + 1;
> -	KUNIT_EXPECT_EQ_MSG(test, of_update_property(np, prop), 0,
> +	prop0->name = "modify-property";
> +	prop0->value = "modify-missing-property-data-should-pass";
> +	prop0->length = strlen(prop0->value) + 1;
> +	KUNIT_EXPECT_EQ_MSG(test, of_update_property(np, prop0), 0,
>  			    "Updating a missing property should have passed\n");
> +}
>  
> -	/* Remove property - should pass */
> -	KUNIT_EXPECT_EQ_MSG(test, of_remove_property(np, prop), 0,
> -			    "Removing a property should have passed\n");
> +static void of_test_dynamic_large_property(struct kunit *test)
> +{
> +	struct of_test_dynamic_context *ctx = test->priv;
> +	struct device_node *np = ctx->np;
> +	struct property *prop0 = ctx->prop0;
>  
>  	/* Adding very large property - should pass */
> -	prop++;
> -	prop->name = "large-property-PAGE_SIZEx8";
> -	prop->length = PAGE_SIZE * 8;
> -	prop->value = kzalloc(prop->length, GFP_KERNEL);
> -	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, prop->value);
> -	KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop), 0,
> +	prop0->name = "large-property-PAGE_SIZEx8";
> +	prop0->length = PAGE_SIZE * 8;
> +	prop0->value = kunit_kzalloc(test, prop0->length, GFP_KERNEL);
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, prop0->value);
> +
> +	KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop0), 0,
>  			    "Adding a large property should have passed\n");
>  }
>  
> -static int of_test_init(struct kunit *test)
> +static int of_test_dynamic_init(struct kunit *test)
>  {
> -	/* adding data for unittest */
> +	struct of_test_dynamic_context *ctx;
> +
>  	KUNIT_ASSERT_EQ(test, 0, unittest_data_add());
>  
>  	if (!of_aliases)
> @@ -197,18 +371,45 @@ static int of_test_init(struct kunit *test)
>  	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, of_find_node_by_path(
>  			"/testcase-data/phandle-tests/consumer-a"));
>  
> +	ctx = kunit_kzalloc(test, sizeof(*ctx), GFP_KERNEL);
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx);
> +	test->priv = ctx;
> +
> +	ctx->np = of_find_node_by_path("/testcase-data");
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx->np);
> +
> +	ctx->prop0 = kunit_kzalloc(test, sizeof(*ctx->prop0), GFP_KERNEL);
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx->prop0);
> +
> +	ctx->prop1 = kunit_kzalloc(test, sizeof(*ctx->prop1), GFP_KERNEL);
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx->prop1);
> +
>  	return 0;
>  }
>  
> -static struct kunit_case of_test_cases[] = {
> -	KUNIT_CASE(of_unittest_find_node_by_name),
> -	KUNIT_CASE(of_unittest_dynamic),
> +static void of_test_dynamic_exit(struct kunit *test)
> +{
> +	struct of_test_dynamic_context *ctx = test->priv;
> +	struct device_node *np = ctx->np;
> +
> +	of_remove_property(np, ctx->prop0);
> +	of_remove_property(np, ctx->prop1);
> +	of_node_put(np);
> +}
> +
> +static struct kunit_case of_test_dynamic_cases[] = {
> +	KUNIT_CASE(of_test_dynamic_basic),
> +	KUNIT_CASE(of_test_dynamic_add_existing_property),
> +	KUNIT_CASE(of_test_dynamic_modify_existing_property),
> +	KUNIT_CASE(of_test_dynamic_modify_non_existent_property),
> +	KUNIT_CASE(of_test_dynamic_large_property),
>  	{},
>  };
>  
> -static struct kunit_module of_test_module = {
> -	.name = "of-base-test",
> -	.init = of_test_init,
> -	.test_cases = of_test_cases,
> +static struct kunit_module of_test_dynamic_module = {
> +	.name = "of-dynamic-test",
> +	.init = of_test_dynamic_init,
> +	.exit = of_test_dynamic_exit,
> +	.test_cases = of_test_dynamic_cases,
>  };
> -module_test(of_test_module);
> +module_test(of_test_dynamic_module);
> 

^ permalink raw reply	[flat|nested] 316+ messages in thread

* Re: [RFC v4 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework
  2019-03-04 23:01   ` Brendan Higgins
                       ` (3 preceding siblings ...)
  (?)
@ 2019-03-22  1:23     ` Frank Rowand
  -1 siblings, 0 replies; 316+ messages in thread
From: Frank Rowand @ 2019-03-22  1:23 UTC (permalink / raw)
  To: Brendan Higgins, Kees Cook, Luis Chamberlain, shuah, Rob Herring,
	Kieran Bingham
  Cc: brakmo, Petr Mladek, Amir Goldstein, dri-devel, Sasha Levin,
	linux-kselftest, linux-nvdimm, Richard Weinberger, Knut Omang,
	wfg, Joel Stanley, Jeff Dike, Dan Carpenter, devicetree, Bird,
	Timothy, linux-um, Steven Rostedt, Julia Lawall, kunit-dev,
	Greg KH, Linux Kernel Mailing List, Daniel Vetter,
	Michael Ellerman, Joe Perches, Kevin Hilman

On 3/4/19 3:01 PM, Brendan Higgins wrote:
> On Thu, Feb 14, 2019 at 1:38 PM Brendan Higgins
> <brendanhiggins@google.com> wrote:
>>
>> This patch set proposes KUnit, a lightweight unit testing and mocking
>> framework for the Linux kernel.
>>
> 
> <snip>
> 
>> ## More information on KUnit
>>
>> There is a bunch of documentation near the end of this patch set that
>> describes how to use KUnit and best practices for writing unit tests.
>> For convenience I am hosting the compiled docs here:
>> https://google.github.io/kunit-docs/third_party/kernel/docs/
>> Additionally for convenience, I have applied these patches to a branch:
>> https://kunit.googlesource.com/linux/+/kunit/rfc/5.0-rc5/v4
>> The repo may be cloned with:
>> git clone https://kunit.googlesource.com/linux
>> This patchset is on the kunit/rfc/5.0-rc5/v4 branch.
>>
>> ## Changes Since Last Version
>>
>>  - Got KUnit working on (hypothetically) all architectures (tested on
>>    x86), as per Rob's (and other's) request
>>  - Punting all KUnit features/patches depending on UML for now.
>>  - Broke out UML specific support into arch/um/* as per "[RFC v3 01/19]
>>    kunit: test: add KUnit test runner core", as requested by Luis.
>>  - Added support to kunit_tool to allow it to build kernels in external
>>    directories, as suggested by Kieran.
>>  - Added a UML defconfig, and a config fragment for KUnit as suggested
>>    by Kieran and Luis.
>>  - Cleaned up, and reformatted a bunch of stuff.
>>
>> --
>> 2.21.0.rc0.258.g878e2cd30e-goog
>>
> 
> Someone suggested I should send the next revision out as "PATCH"
> instead of "RFC" since there seems to be general consensus about
> everything at a high level, with a couple exceptions.
> 
> At this time I am planning on sending the next revision out as "[PATCH
> v1 00/NN] kunit: introduce KUnit, the Linux kernel unit testing
> framework". Initially I wasn't sure if the next revision should be
> "[PATCH v1 ...]" or "[PATCH v5 ...]". Please let me know if you have a
> strong objection to the former.
> 
> In the next revision, I will be dropping the last two of three patches
> for the DT unit tests as there doesn't seem to be enough features
> currently available to justify the heavy refactoring I did; however, I

Thank you.


> will still include the patch that just converts everything over to
> KUnit without restructuring the test cases:
> https://lkml.org/lkml/2019/2/14/1133

The link doesn't work for me (don't worry about that), so I'm assuming
this is:

   [RFC v4 15/17] of: unittest: migrate tests to run on KUnit

The conversation on that patch ended after:

   >> After adding patch 15, there are a lot of "unittest internal error" messages.
   > 
   > Yeah, I meant to ask you about that. I thought it was due to a change
   > you made, but after further examination, just now, I found it was my
   > fault. Sorry for not mentioning that anywhere. I will fix it in v5.

It is not worth my time to look at patch 15 when it is that broken.  So I
have not done any review of it.

So no, I think you are still in the RFC stage unless you drop patch 15.

> 
> I should have the next revision out in a week or so.
> 

^ permalink raw reply	[flat|nested] 316+ messages in thread

* Re: [RFC v4 08/17] kunit: test: add support for test abort
  2019-03-22  1:09                     ` Frank Rowand
                                         ` (2 preceding siblings ...)
  (?)
@ 2019-03-22  1:41                       ` Brendan Higgins
  -1 siblings, 0 replies; 316+ messages in thread
From: Brendan Higgins @ 2019-03-22  1:41 UTC (permalink / raw)
  To: Frank Rowand
  Cc: Kees Cook, Luis Chamberlain, shuah, Rob Herring, Kieran Bingham,
	Greg KH, Joel Stanley, Michael Ellerman, Joe Perches, brakmo,
	Steven Rostedt, Bird, Timothy, Kevin Hilman, Julia Lawall,
	linux-kselftest, kunit-dev, Linux Kernel Mailing List, Jeff Dike,
	Richard Weinberger, linux-um, Daniel Vetter, dri-devel

On Thu, Mar 21, 2019 at 6:10 PM Frank Rowand <frowand.list@gmail.com> wrote:
>
> On 2/27/19 11:42 PM, Brendan Higgins wrote:
> > On Tue, Feb 19, 2019 at 10:44 PM Frank Rowand <frowand.list@gmail.com> wrote:
> >>
> >> On 2/19/19 7:39 PM, Brendan Higgins wrote:
> >>> On Mon, Feb 18, 2019 at 11:52 AM Frank Rowand <frowand.list@gmail.com> wrote:
> >>>>
> >>>> On 2/14/19 1:37 PM, Brendan Higgins wrote:
> >>>>> Add support for aborting/bailing out of test cases. Needed for
> >>>>> implementing assertions.
> >>>>>
> >>>>> Signed-off-by: Brendan Higgins <brendanhiggins@google.com>
> >>>>> ---
> >>>>> Changes Since Last Version
> >>>>>  - This patch is new introducing a new cross-architecture way to abort
> >>>>>    out of a test case (needed for KUNIT_ASSERT_*, see next patch for
> >>>>>    details).
> >>>>>  - On a side note, this is not a complete replacement for the UML abort
> >>>>>    mechanism, but covers the majority of necessary functionality. UML
> >>>>>    architecture specific featurs have been dropped from the initial
> >>>>>    patchset.
> >>>>> ---
> >>>>>  include/kunit/test.h |  24 +++++
> >>>>>  kunit/Makefile       |   3 +-
> >>>>>  kunit/test-test.c    | 127 ++++++++++++++++++++++++++
> >>>>>  kunit/test.c         | 208 +++++++++++++++++++++++++++++++++++++++++--
> >>>>>  4 files changed, 353 insertions(+), 9 deletions(-)
> >>>>>  create mode 100644 kunit/test-test.c
> >>>>
> >>>> < snip >
> >>>>
> >>>>> diff --git a/kunit/test.c b/kunit/test.c
> >>>>> index d18c50d5ed671..6e5244642ab07 100644
> >>>>> --- a/kunit/test.c
> >>>>> +++ b/kunit/test.c
> >>>>> @@ -6,9 +6,9 @@
> >>>>>   * Author: Brendan Higgins <brendanhiggins@google.com>
> >>>>>   */
> >>>>>
> >>>>> -#include <linux/sched.h>
> >>>>>  #include <linux/sched/debug.h>
> >>>>> -#include <os.h>
> >>>>> +#include <linux/completion.h>
> >>>>> +#include <linux/kthread.h>
> >>>>>  #include <kunit/test.h>
> >>>>>
> >>>>>  static bool kunit_get_success(struct kunit *test)
> >>>>> @@ -32,6 +32,27 @@ static void kunit_set_success(struct kunit *test, bool success)
> >>>>>       spin_unlock_irqrestore(&test->lock, flags);
> >>>>>  }
> >>>>>
> >>>>> +static bool kunit_get_death_test(struct kunit *test)
> >>>>> +{
> >>>>> +     unsigned long flags;
> >>>>> +     bool death_test;
> >>>>> +
> >>>>> +     spin_lock_irqsave(&test->lock, flags);
> >>>>> +     death_test = test->death_test;
> >>>>> +     spin_unlock_irqrestore(&test->lock, flags);
> >>>>> +
> >>>>> +     return death_test;
> >>>>> +}
> >>>>> +
> >>>>> +static void kunit_set_death_test(struct kunit *test, bool death_test)
> >>>>> +{
> >>>>> +     unsigned long flags;
> >>>>> +
> >>>>> +     spin_lock_irqsave(&test->lock, flags);
> >>>>> +     test->death_test = death_test;
> >>>>> +     spin_unlock_irqrestore(&test->lock, flags);
> >>>>> +}
> >>>>> +
> >>>>>  static int kunit_vprintk_emit(const struct kunit *test,
> >>>>>                             int level,
> >>>>>                             const char *fmt,
> >>>>> @@ -70,13 +91,29 @@ static void kunit_fail(struct kunit *test, struct kunit_stream *stream)
> >>>>>       stream->commit(stream);
> >>>>>  }
> >>>>>
> >>>>> +static void __noreturn kunit_abort(struct kunit *test)
> >>>>> +{
> >>>>> +     kunit_set_death_test(test, true);
> >>>>> +
> >>>>> +     test->try_catch.throw(&test->try_catch);
> >>>>> +
> >>>>> +     /*
> >>>>> +      * Throw could not abort from test.
> >>>>> +      */
> >>>>> +     kunit_err(test, "Throw could not abort from test!");
> >>>>> +     show_stack(NULL, NULL);
> >>>>> +     BUG();
> >>>>
> >>>> kunit_abort() is what will be call as the result of an assert failure.
> >>>
> >>> Yep. Does that need clarified somewhere.
> >>>>
> >>>> BUG(), which is a panic, which is crashing the system is not acceptable
> >>>> in the Linux kernel.  You will just annoy Linus if you submit this.
> >>>
> >>> Sorry, I thought this was an acceptable use case since, a) this should
> >>> never be compiled in a production kernel, b) we are in a pretty bad,
> >>> unpredictable state if we get here and keep going. I think you might
> >>> have said elsewhere that you think "a" is not valid? In any case, I
> >>> can replace this with a WARN, would that be acceptable?
> >>
> >> A WARN may or may not make sense, depending on the context.  It may
> >> be sufficient to simply report a test failure (as in the old version
> >> of case (2) below.
> >>
> >> Answers to "a)" and "b)":
> >>
> >> a) it might be in a production kernel
> >
> > Sorry for a possibly stupid question, how might it be so? Why would
> > someone intentionally build unit tests into a production kernel?
>
> People do things.  Just expect it.

Huh, alright. I will take your word for it then.

>
> >>
> >> a') it is not acceptable in my development kernel either
> >
> > Fair enough.
> >
> >>
> >> b) No.  You don't crash a developer's kernel either unless it is
> >> required to avoid data corruption.
> >
> > Alright, I thought that was one of those cases, but I am not going to
> > push the point. Also, in case it wasn't clear, the path where BUG()
> > gets called only happens if there is a bug in KUnit itself, not just
> > because a test case fails catastrophically.
>
> Still not out of the woods.  Still facing Lions and Tigers and Bears,
> Oh my!

Nope, I guess not :-)

>
> So kunit_abort() is normally called as the result of an assert
> failure (as written many lines further above).
>
> kunit_abort()
>    test->try_catch.throw(&test->try_catch)
>    // this is really kunit_generic_throw(), yes?
>       complete_and_exit()
>          if (comp)
>             // comp is test_case_completion?
>             complete(comp)
>          do_exit()
>             // void __noreturn do_exit(long code)
>             // depending on the task, either panic
>             // or the task dies

You are right up until after it calls do_exit().

KUnit actually spawns a thread for the test case to run in so that
when exit is called, only the test case thread dies. The thread that
started KUnit is never affected.

>
> I did not read through enough of the code to understand what is going
> on here.  Is each kunit_module executed in a newly created thread?
> And if kunit_abort() is called then that thread dies?  Or something
> else?

Mostly right, each kunit_case (not kunit_module) gets executed in its
own newly created thread. If kunit_abort() is called in that thread,
the kunit_case thread dies. The parent thread keeps going, and other
test cases are executed.
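
Roughly, a minimal sketch of that structure (illustrative only, not the
code from this series; run_case_fn() and run_case_in_thread() are
invented names) looks like this:

#include <linux/completion.h>
#include <linux/err.h>
#include <linux/kernel.h>
#include <linux/kthread.h>

struct case_context {
	void (*test_case)(void);
	struct completion done;
};

static int run_case_fn(void *data)
{
	struct case_context *ctx = data;

	/* The test case may abort early by calling complete_and_exit()
	 * itself; that is what throw() boils down to. */
	ctx->test_case();

	/* Normal exit: wake the parent, then end only this thread. */
	complete_and_exit(&ctx->done, 0);
}

static void run_case_in_thread(void (*test_case)(void))
{
	struct case_context ctx = { .test_case = test_case };
	struct task_struct *task;

	init_completion(&ctx.done);
	task = kthread_run(run_case_fn, &ctx, "kunit-test-case");
	if (IS_ERR(task))
		return;

	/* The parent blocks here; an abort takes down the child only. */
	wait_for_completion(&ctx.done);
}

So even when a test case "throws", do_exit() only ends the per-case
thread; the thread driving the suite just sees the completion fire and
moves on to the next case.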

>
>
> >>
> >> b') And you can not do replacements like:
> >>
> >> (1) in of_unittest_check_tree_linkage()
> >>
> >> -----  old  -----
> >>
> >>         if (!of_root)
> >>                 return;
> >>
> >> -----  new  -----
> >>
> >>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, of_root);
> >>
> >> (2) in of_unittest_property_string()
> >>
> >> -----  old  -----
> >>
> >>         /* of_property_read_string_index() tests */
> >>         rc = of_property_read_string_index(np, "string-property", 0, strings);
> >>         unittest(rc == 0 && !strcmp(strings[0], "foobar"), "of_property_read_string_index() failure; rc=%i\n", rc);
> >>
> >> -----  new  -----
> >>
> >>         /* of_property_read_string_index() tests */
> >>         rc = of_property_read_string_index(np, "string-property", 0, strings);
> >>         KUNIT_ASSERT_EQ(test, rc, 0);
> >>         KUNIT_EXPECT_STREQ(test, strings[0], "foobar");
> >>
> >>
> >> If a test fails, that is no reason to abort testing.  The remainder of the unit
> >> tests can still run.  There may be cascading failures, but that is ok.
> >
> > Sure, that's what I am trying to do. I don't see how (1) changes
> > anything, a failed KUNIT_ASSERT_* only bails on the current test case,
> > it does not quit the entire test suite let alone crash the kernel.
>
> This may be another case of whether a kunit_module is approximately a
> single KUNIT_EXPECT_*() or a larger number of them.
>
> I still want, for example, of_unittest_property_string() to include a large
> number of KUNIT_EXPECT_*() instances.  In that case I still want the rest of
> the tests in the kunit_module to be executed even after a KUNIT_ASSERT_*()
> fails.  The existing test code has that property.

Sure, in the context of the reply you just sent me on the DT unittest
thread, that makes sense. I can pull the assertions out of all but the
ones that would have terminated the collection of test cases (where you
return early), if that makes it better.
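
Concretely, for snippet (2) quoted above, I am thinking of something
like the sketch below. The kunit_case wrapper, the node lookup, and the
node path are illustrative; only the two checks at the end mirror the
snippet itself:

#include <kunit/test.h>
#include <linux/of.h>

static void of_unittest_property_string_case(struct kunit *test)
{
        const char *strings[4];
        struct device_node *np;
        int rc;

        np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
        /* Nothing below can run without the node: bail out of this case only. */
        KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);

        /* of_property_read_string_index() tests */
        rc = of_property_read_string_index(np, "string-property", 0, strings);
        /* Expectations record a failure but let the remaining checks run. */
        KUNIT_EXPECT_EQ(test, rc, 0);
        KUNIT_EXPECT_STREQ(test, strings[0], "foobar");

        of_node_put(np);
}

A failed KUNIT_EXPECT_* above only marks the case as failed and keeps
going; a failed KUNIT_ASSERT_* bails out of this one case, and the rest
of the suite still runs.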

^ permalink raw reply	[flat|nested] 316+ messages in thread

* Re: [RFC v4 16/17] of: unittest: split out a couple of test cases from unittest
  2019-03-22  1:14     ` Frank Rowand
                         ` (3 preceding siblings ...)
@ 2019-03-22  1:45       ` Brendan Higgins
  -1 siblings, 0 replies; 316+ messages in thread
From: Brendan Higgins @ 2019-03-22  1:45 UTC (permalink / raw)
  To: Frank Rowand
  Cc: brakmo, Petr Mladek, Amir Goldstein, dri-devel, Sasha Levin,
	linux-kselftest, shuah, Rob Herring, linux-nvdimm,
	Richard Weinberger, Knut Omang, Kieran Bingham, wfg,
	Joel Stanley, Jeff Dike, Dan Carpenter, devicetree, Bird,
	Timothy, Kees Cook, linux-um, Steven Rostedt, Julia Lawall,
	kunit-dev, Greg KH, Linux Kernel Mailing List, Luis Chamberlain,
	Daniel Vetter, Michael Ellerman, Joe Perches, Kevin Hilman

On Thu, Mar 21, 2019 at 6:15 PM Frank Rowand <frowand.list@gmail.com> wrote:
>
> On 2/14/19 1:37 PM, Brendan Higgins wrote:
> > Split out a couple of test cases that test these features in base.c from the
> > unittest.c monolith. The intention is that we will eventually split out
> > all test cases and group them together based on what portion of device
> > tree they test.
>
> I still object to this patch.  I do not want this code scattered into
> additional files.

Sure, no problem. I will remove this from future revisions.

^ permalink raw reply	[flat|nested] 316+ messages in thread

* Re: [RFC v4 17/17] of: unittest: split up some super large test cases
@ 2019-03-22  1:45         ` Brendan Higgins
  0 siblings, 0 replies; 316+ messages in thread
From: Brendan Higgins @ 2019-03-22  1:45 UTC (permalink / raw)
  To: Frank Rowand
  Cc: Kees Cook, Luis Chamberlain, shuah, Rob Herring, Kieran Bingham,
	Greg KH, Joel Stanley, Michael Ellerman, Joe Perches, brakmo,
	Steven Rostedt, Bird, Timothy, Kevin Hilman, Julia Lawall,
	linux-kselftest, kunit-dev, Linux Kernel Mailing List, Jeff Dike,
	Richard Weinberger, linux-um, Daniel Vetter, dri-devel,
	Dan Williams, linux-nvdimm, Knut Omang, devicetree, Petr Mladek,
	Sasha Levin, Amir Goldstein, Dan Carpenter, wfg

On Thu, Mar 21, 2019 at 6:16 PM Frank Rowand <frowand.list@gmail.com> wrote:
>
> On 2/14/19 1:37 PM, Brendan Higgins wrote:
> > Split up the super large test cases of_unittest_find_node_by_name and
> > of_unittest_dynamic into properly sized and defined test cases.
>
> I also still object to this patch.

I figured. Will drop.

^ permalink raw reply	[flat|nested] 316+ messages in thread

* Re: [RFC v4 08/17] kunit: test: add support for test abort
  2019-03-22  1:41                       ` Brendan Higgins
                                             ` (2 preceding siblings ...)
@ 2019-03-22  7:10                           ` Knut Omang
  -1 siblings, 0 replies; 316+ messages in thread
From: Knut Omang @ 2019-03-22  7:10 UTC (permalink / raw)
  To: Brendan Higgins, Frank Rowand
  Cc: brakmo, Petr Mladek, Amir Goldstein, dri-devel, Sasha Levin,
	linux-kselftest, shuah, Rob Herring, linux-nvdimm,
	Richard Weinberger, Kieran Bingham, wfg, Joel Stanley, Jeff Dike,
	Dan Carpenter, devicetree, Bird, Timothy, Kees Cook, linux-um,
	Steven Rostedt, Julia Lawall, kunit-dev, Greg KH,
	Linux Kernel Mailing List, Luis Chamberlain

On Thu, 2019-03-21 at 18:41 -0700, Brendan Higgins wrote:
> On Thu, Mar 21, 2019 at 6:10 PM Frank Rowand <frowand.list@gmail.com> wrote:
> > On 2/27/19 11:42 PM, Brendan Higgins wrote:
> > > On Tue, Feb 19, 2019 at 10:44 PM Frank Rowand <frowand.list@gmail.com> wrote:
> > > > On 2/19/19 7:39 PM, Brendan Higgins wrote:
> > > > > On Mon, Feb 18, 2019 at 11:52 AM Frank Rowand <frowand.list@gmail.com> wrote:
> > > > > > On 2/14/19 1:37 PM, Brendan Higgins wrote:
> > > > > > > Add support for aborting/bailing out of test cases. Needed for
> > > > > > > implementing assertions.
> > > > > > >
> > > > > > > Signed-off-by: Brendan Higgins <brendanhiggins@google.com>
> > > > > > > ---
> > > > > > > Changes Since Last Version
> > > > > > >  - This patch is new introducing a new cross-architecture way to
> > > > > > > abort
> > > > > > >    out of a test case (needed for KUNIT_ASSERT_*, see next patch
> > > > > > > for
> > > > > > >    details).
> > > > > > >  - On a side note, this is not a complete replacement for the UML
> > > > > > > abort
> > > > > > >    mechanism, but covers the majority of necessary functionality.
> > > > > > > UML
> > > > > > >    architecture specific features have been dropped from the
> > > > > > > initial
> > > > > > >    patchset.
> > > > > > > ---
> > > > > > >  include/kunit/test.h |  24 +++++
> > > > > > >  kunit/Makefile       |   3 +-
> > > > > > >  kunit/test-test.c    | 127 ++++++++++++++++++++++++++
> > > > > > >  kunit/test.c         | 208
> > > > > > > +++++++++++++++++++++++++++++++++++++++++--
> > > > > > >  4 files changed, 353 insertions(+), 9 deletions(-)
> > > > > > >  create mode 100644 kunit/test-test.c
> > > > > > 
> > > > > > < snip >
> > > > > > 
> > > > > > > diff --git a/kunit/test.c b/kunit/test.c
> > > > > > > index d18c50d5ed671..6e5244642ab07 100644
> > > > > > > --- a/kunit/test.c
> > > > > > > +++ b/kunit/test.c
> > > > > > > @@ -6,9 +6,9 @@
> > > > > > >   * Author: Brendan Higgins <brendanhiggins-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org>
> > > > > > >   */
> > > > > > > 
> > > > > > > -#include <linux/sched.h>
> > > > > > >  #include <linux/sched/debug.h>
> > > > > > > -#include <os.h>
> > > > > > > +#include <linux/completion.h>
> > > > > > > +#include <linux/kthread.h>
> > > > > > >  #include <kunit/test.h>
> > > > > > > 
> > > > > > >  static bool kunit_get_success(struct kunit *test)
> > > > > > > @@ -32,6 +32,27 @@ static void kunit_set_success(struct kunit
> > > > > > > *test, bool success)
> > > > > > >       spin_unlock_irqrestore(&test->lock, flags);
> > > > > > >  }
> > > > > > > 
> > > > > > > +static bool kunit_get_death_test(struct kunit *test)
> > > > > > > +{
> > > > > > > +     unsigned long flags;
> > > > > > > +     bool death_test;
> > > > > > > +
> > > > > > > +     spin_lock_irqsave(&test->lock, flags);
> > > > > > > +     death_test = test->death_test;
> > > > > > > +     spin_unlock_irqrestore(&test->lock, flags);
> > > > > > > +
> > > > > > > +     return death_test;
> > > > > > > +}
> > > > > > > +
> > > > > > > +static void kunit_set_death_test(struct kunit *test, bool
> > > > > > > death_test)
> > > > > > > +{
> > > > > > > +     unsigned long flags;
> > > > > > > +
> > > > > > > +     spin_lock_irqsave(&test->lock, flags);
> > > > > > > +     test->death_test = death_test;
> > > > > > > +     spin_unlock_irqrestore(&test->lock, flags);
> > > > > > > +}
> > > > > > > +
> > > > > > >  static int kunit_vprintk_emit(const struct kunit *test,
> > > > > > >                             int level,
> > > > > > >                             const char *fmt,
> > > > > > > @@ -70,13 +91,29 @@ static void kunit_fail(struct kunit *test,
> > > > > > > struct kunit_stream *stream)
> > > > > > >       stream->commit(stream);
> > > > > > >  }
> > > > > > > 
> > > > > > > +static void __noreturn kunit_abort(struct kunit *test)
> > > > > > > +{
> > > > > > > +     kunit_set_death_test(test, true);
> > > > > > > +
> > > > > > > +     test->try_catch.throw(&test->try_catch);
> > > > > > > +
> > > > > > > +     /*
> > > > > > > +      * Throw could not abort from test.
> > > > > > > +      */
> > > > > > > +     kunit_err(test, "Throw could not abort from test!");
> > > > > > > +     show_stack(NULL, NULL);
> > > > > > > +     BUG();
> > > > > > 
> > > > > > kunit_abort() is what will be called as the result of an assert
> > > > > > failure.
> > > > > 
> > > > > Yep. Does that need to be clarified somewhere?
> > > > > > BUG(), which is a panic, which is crashing the system is not
> > > > > > acceptable
> > > > > > in the Linux kernel.  You will just annoy Linus if you submit this.
> > > > > 
> > > > > Sorry, I thought this was an acceptable use case since, a) this should
> > > > > never be compiled in a production kernel, b) we are in a pretty bad,
> > > > > unpredictable state if we get here and keep going. I think you might
> > > > > have said elsewhere that you think "a" is not valid? In any case, I
> > > > > can replace this with a WARN, would that be acceptable?
> > > > 
> > > > A WARN may or may not make sense, depending on the context.  It may
> > > > be sufficient to simply report a test failure (as in the old version
> > > > of case (2) below).
> > > > 
> > > > Answers to "a)" and "b)":
> > > > 
> > > > a) it might be in a production kernel
> > > 
> > > Sorry for a possibly stupid question, how might it be so? Why would
> > > someone intentionally build unit tests into a production kernel?
> > 
> > People do things.  Just expect it.
> 
> Huh, alright. I will take your word for it then.

I have a better explanation: production kernels have bugs, unfortunately.
And sometimes those need to be investigated on systems that cannot be
brought down or affected more than absolutely necessary, maybe via a third
party doing the execution. A lightweight, precise test (well tested ahead of
time :) ) might be a way of proving or disproving assumptions that can lead
to the development and application of a fix.

IMHO you're confusing "building into" with temporarily applying, then removing
again - like the difference between running a local user space program vs
installing it under /usr and having it in everyone's PATH.

> > > > a') it is not acceptable in my development kernel either

I think one of the fundamental properties of a good test framework is that it
should not, by itself, require changes to the code under test.

Knut

> > > Fair enough.
> > > 
> > > > b) No.  You don't crash a developer's kernel either unless it is
> > > > required to avoid data corruption.
> > > Alright, I thought that was one of those cases, but I am not going to
> > > push the point. Also, in case it wasn't clear, the path where BUG()
> > > gets called only happens if there is a bug in KUnit itself, not just
> > > because a test case fails catastrophically.
> > 
> > Still not out of the woods.  Still facing Lions and Tigers and Bears,
> > Oh my!
> 
> Nope, I guess not :-)
> 
> > So kunit_abort() is normally called as the result of an assert
> > failure (as written many lines further above).
> > 
> > kunit_abort()
> >    test->try_catch.throw(&test->try_catch)
> >    // this is really kunit_generic_throw(), yes?
> >       complete_and_exit()
> >          if (comp)
> >             // comp is test_case_completion?
> >             complete(comp)
> >          do_exit()
> >             // void __noreturn do_exit(long code)
> >             // depending on the task, either panic
> >             // or the task dies
> 
> You are right up until after it calls do_exit().
> 
> KUnit actually spawns a thread for the test case to run in so that
> when exit is called, only the test case thread dies. The thread that
> started KUnit is never affected.
> 
> > I did not read through enough of the code to understand what is going
> > on here.  Is each kunit_module executed in a newly created thread?
> > And if kunit_abort() is called then that thread dies?  Or something
> > else?
> 
> Mostly right, each kunit_case (not kunit_module) gets executed in its
> own newly created thread. If kunit_abort() is called in that thread,
> the kunit_case thread dies. The parent thread keeps going, and other
> test cases are executed.
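
To make the control flow concrete, here is a minimal sketch of the
thread-per-test-case shape described above. This is not the code from the
series; the struct and function names (sketch_try_catch, sketch_throw, and so
on) are invented for illustration, and error reporting is omitted.

#include <linux/completion.h>
#include <linux/err.h>
#include <linux/kthread.h>

struct sketch_try_catch {
        struct completion done;  /* signalled when the test-case thread finishes */
        void (*try_fn)(void *);  /* the body of one test case */
        void *ctx;
};

static int sketch_thread_fn(void *data)
{
        struct sketch_try_catch *tc = data;

        tc->try_fn(tc->ctx);              /* run the test case */
        complete_and_exit(&tc->done, 0);  /* normal exit: only this thread dies */
}

/* throw(): called from inside the test-case thread on an ASSERT failure */
static void __noreturn sketch_throw(struct sketch_try_catch *tc)
{
        complete_and_exit(&tc->done, 1);  /* kills the test-case thread, not its parent */
}

static void sketch_run(struct sketch_try_catch *tc)
{
        struct task_struct *task;

        init_completion(&tc->done);
        task = kthread_run(sketch_thread_fn, tc, "kunit-try-catch");
        if (IS_ERR(task))
                return;                   /* could not start the test-case thread */
        wait_for_completion(&tc->done);   /* the parent thread survives either way */
}
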
> 
> > 
> > > > b') And you can not do replacements like:
> > > > 
> > > > (1) in of_unittest_check_tree_linkage()
> > > > 
> > > > -----  old  -----
> > > > 
> > > >         if (!of_root)
> > > >                 return;
> > > > 
> > > > -----  new  -----
> > > > 
> > > >         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, of_root);
> > > > 
> > > > (2) in of_unittest_property_string()
> > > > 
> > > > -----  old  -----
> > > > 
> > > >         /* of_property_read_string_index() tests */
> > > >         rc = of_property_read_string_index(np, "string-property", 0,
> > > > strings);
> > > >         unittest(rc == 0 && !strcmp(strings[0], "foobar"),
> > > > "of_property_read_string_index() failure; rc=%i\n", rc);
> > > > 
> > > > -----  new  -----
> > > > 
> > > >         /* of_property_read_string_index() tests */
> > > >         rc = of_property_read_string_index(np, "string-property", 0,
> > > > strings);
> > > >         KUNIT_ASSERT_EQ(test, rc, 0);
> > > >         KUNIT_EXPECT_STREQ(test, strings[0], "foobar");
> > > > 
> > > > 
> > > > If a test fails, that is no reason to abort testing.  The remainder of
> > > > the unit tests can still run.  There may be cascading failures, but
> > > > that is ok.
> > > 
> > > Sure, that's what I am trying to do. I don't see how (1) changes
> > > anything, a failed KUNIT_ASSERT_* only bails on the current test case,
> > > it does not quit the entire test suite let alone crash the kernel.
> > 
> > This may be another case of whether a kunit_module is approximately a
> > single KUNIT_EXPECT_*() or a larger number of them.
> > 
> > I still want, for example, of_unittest_property_string() to include a large
> > number of KUNIT_EXPECT_*() instances.  In that case I still want the rest of
> > the tests in the kunit_module to be executed even after a KUNIT_ASSERT_*()
> > fails.  The existing test code has that property.
> 
> Sure, in the context of the reply you just sent me on the DT unittest
> thread, that makes sense. I can pull out all but the ones that would
> have terminated the collection of test cases (where you return early),
> if that makes it better.
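
As a summary of the semantics being settled on above, a hedged sketch of what
this could look like with the macros from this series: a failed KUNIT_EXPECT_*
is recorded and the test case keeps running, while a failed KUNIT_ASSERT_*
aborts only the current test case. The function name below is invented and the
suite/module registration is omitted.

#include <kunit/test.h>

static void example_expect_vs_assert(struct kunit *test)
{
        const char *s = "foobar";
        int rc = 0;

        /*
         * An EXPECT failure is recorded and the test case keeps going,
         * so the checks that follow still execute.
         */
        KUNIT_EXPECT_STREQ(test, s, "foobar");

        /*
         * An ASSERT failure bails out of this test case only (its thread
         * exits); the remaining cases in the module still run.
         */
        KUNIT_ASSERT_EQ(test, rc, 0);
}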

^ permalink raw reply	[flat|nested] 316+ messages in thread

* Re: [RFC v4 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework
@ 2019-03-25 22:11         ` Brendan Higgins
  0 siblings, 0 replies; 316+ messages in thread
From: Brendan Higgins @ 2019-03-25 22:11 UTC (permalink / raw)
  To: Frank Rowand
  Cc: Kees Cook, Luis Chamberlain, shuah, Rob Herring, Kieran Bingham,
	Greg KH, Joel Stanley, Michael Ellerman, Joe Perches, brakmo,
	Steven Rostedt, Bird, Timothy, Kevin Hilman, Julia Lawall,
	linux-kselftest, kunit-dev, Linux Kernel Mailing List, Jeff Dike,
	Richard Weinberger, linux-um, Daniel Vetter, dri-devel,
	Dan Williams, linux-nvdimm, Knut Omang, devicetree, Petr Mladek,
	Sasha Levin, Amir Goldstein, Dan Carpenter, wfg

On Thu, Mar 21, 2019 at 6:23 PM Frank Rowand <frowand.list@gmail.com> wrote:
>
> On 3/4/19 3:01 PM, Brendan Higgins wrote:
> > On Thu, Feb 14, 2019 at 1:38 PM Brendan Higgins
< snip >
> > Someone suggested I should send the next revision out as "PATCH"
> > instead of "RFC" since there seems to be general consensus about
> > everything at a high level, with a couple exceptions.
> >
> > At this time I am planning on sending the next revision out as "[PATCH
> > v1 00/NN] kunit: introduce KUnit, the Linux kernel unit testing
> > framework". Initially I wasn't sure if the next revision should be
> > "[PATCH v1 ...]" or "[PATCH v5 ...]". Please let me know if you have a
> > strong objection to the former.
> >
> > In the next revision, I will be dropping the last two of three patches
> > for the DT unit tests as there doesn't seem to be enough features
> > currently available to justify the heavy refactoring I did; however, I
>
> Thank you.
>
>
> > will still include the patch that just converts everything over to
> > KUnit without restructuring the test cases:
> > https://lkml.org/lkml/2019/2/14/1133
>
> The link doesn't work for me (don't worry about that), so I'm assuming
> this is:
>
>    [RFC v4 15/17] of: unittest: migrate tests to run on KUnit

That's correct.

>
> The conversation on that patch ended after:
>
>    >> After adding patch 15, there are a lot of "unittest internal error" messages.
>    >
>    > Yeah, I meant to ask you about that. I thought it was due to a change
>    > you made, but after further examination, just now, I found it was my
>    > fault. Sorry for not mentioning that anywhere. I will fix it in v5.
>
> It is not worth my time to look at patch 15 when it is that broken.  So I
> have not done any review of it.

Right, I didn't expect you to; we were still discussing things on RFC
v3 at the time. I think I got your comments on v3 in a very short time
frame around sending out v4, which is why they were not addressed.

>
> So no, I think you are still in the RFC stage unless you drop patch 15.

Noted. I might split that out into a separate RFC then.

>
> >
> > I should have the next revision out in a week or so.
> >
>

Cheers!

^ permalink raw reply	[flat|nested] 316+ messages in thread

* [RFC v4 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework
@ 2019-03-25 22:11         ` Brendan Higgins
  0 siblings, 0 replies; 316+ messages in thread
From: brendanhiggins @ 2019-03-25 22:11 UTC (permalink / raw)


On Thu, Mar 21, 2019 at 6:23 PM Frank Rowand <frowand.list at gmail.com> wrote:
>
> On 3/4/19 3:01 PM, Brendan Higgins wrote:
> > On Thu, Feb 14, 2019 at 1:38 PM Brendan Higgins
< snip >
> > Someone suggested I should send the next revision out as "PATCH"
> > instead of "RFC" since there seems to be general consensus about
> > everything at a high level, with a couple exceptions.
> >
> > At this time I am planning on sending the next revision out as "[PATCH
> > v1 00/NN] kunit: introduce KUnit, the Linux kernel unit testing
> > framework". Initially I wasn't sure if the next revision should be
> > "[PATCH v1 ...]" or "[PATCH v5 ...]". Please let me know if you have a
> > strong objection to the former.
> >
> > In the next revision, I will be dropping the last two of three patches
> > for the DT unit tests as there doesn't seem to be enough features
> > currently available to justify the heavy refactoring I did; however, I
>
> Thank you.
>
>
> > will still include the patch that just converts everything over to
> > KUnit without restructuring the test cases:
> > https://lkml.org/lkml/2019/2/14/1133
>
> The link doesn't work for me (don't worry about that), so I'm assuming
> this is:
>
>    [RFC v4 15/17] of: unittest: migrate tests to run on KUnit

That's correct.

>
> The conversation on that patch ended after:
>
>    >> After adding patch 15, there are a lot of "unittest internal error" messages.
>    >
>    > Yeah, I meant to ask you about that. I thought it was due to a change
>    > you made, but after further examination, just now, I found it was my
>    > fault. Sorry for not mentioning that anywhere. I will fix it in v5.
>
> It is not worth my time to look at patch 15 when it is that broken.  So I
> have not done any review of it.

Right, I didn't expect you to, we were still discussing things on RFC
v3 at the time. I think I got you comments on v3 in a very short time
frame around sending out v4; hence why your comments were not
addressed.

>
> So no, I think you are still in the RFC stage unless you drop patch 15.

Noted. I might split that out into a separate RFC then.

>
> >
> > I should have the next revision out in a week or so.
> >
>

Cheers!

^ permalink raw reply	[flat|nested] 316+ messages in thread

* Re: [RFC v4 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework
@ 2019-03-25 22:11         ` Brendan Higgins
  0 siblings, 0 replies; 316+ messages in thread
From: Brendan Higgins @ 2019-03-25 22:11 UTC (permalink / raw)
  To: Frank Rowand
  Cc: brakmo, Petr Mladek, Amir Goldstein, dri-devel, Sasha Levin,
	linux-kselftest, shuah, Rob Herring, linux-nvdimm,
	Richard Weinberger, Knut Omang, Kieran Bingham, wfg,
	Joel Stanley, Jeff Dike, Dan Carpenter, devicetree, Bird,
	Timothy, Kees Cook, linux-um, Steven Rostedt, Julia Lawall,
	Dan Williams, kunit-dev, Greg KH, Linux Kernel Mailing List,
	Luis Chamberlain, Daniel Vetter, Michael Ellerman, Joe Perches,
	Kevin Hilman

On Thu, Mar 21, 2019 at 6:23 PM Frank Rowand <frowand.list@gmail.com> wrote:
>
> On 3/4/19 3:01 PM, Brendan Higgins wrote:
> > On Thu, Feb 14, 2019 at 1:38 PM Brendan Higgins
< snip >
> > Someone suggested I should send the next revision out as "PATCH"
> > instead of "RFC" since there seems to be general consensus about
> > everything at a high level, with a couple exceptions.
> >
> > At this time I am planning on sending the next revision out as "[PATCH
> > v1 00/NN] kunit: introduce KUnit, the Linux kernel unit testing
> > framework". Initially I wasn't sure if the next revision should be
> > "[PATCH v1 ...]" or "[PATCH v5 ...]". Please let me know if you have a
> > strong objection to the former.
> >
> > In the next revision, I will be dropping the last two of three patches
> > for the DT unit tests as there doesn't seem to be enough features
> > currently available to justify the heavy refactoring I did; however, I
>
> Thank you.
>
>
> > will still include the patch that just converts everything over to
> > KUnit without restructuring the test cases:
> > https://lkml.org/lkml/2019/2/14/1133
>
> The link doesn't work for me (don't worry about that), so I'm assuming
> this is:
>
>    [RFC v4 15/17] of: unittest: migrate tests to run on KUnit

That's correct.

>
> The conversation on that patch ended after:
>
>    >> After adding patch 15, there are a lot of "unittest internal error" messages.
>    >
>    > Yeah, I meant to ask you about that. I thought it was due to a change
>    > you made, but after further examination, just now, I found it was my
>    > fault. Sorry for not mentioning that anywhere. I will fix it in v5.
>
> It is not worth my time to look at patch 15 when it is that broken.  So I
> have not done any review of it.

Right, I didn't expect you to, we were still discussing things on RFC
v3 at the time. I think I got you comments on v3 in a very short time
frame around sending out v4; hence why your comments were not
addressed.

>
> So no, I think you are still in the RFC stage unless you drop patch 15.

Noted. I might split that out into a separate RFC then.

>
> >
> > I should have the next revision out in a week or so.
> >
>

Cheers!

_______________________________________________
linux-um mailing list
linux-um@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-um


^ permalink raw reply	[flat|nested] 316+ messages in thread

* Re: [RFC v4 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework
@ 2019-03-25 22:12                   ` Brendan Higgins
  0 siblings, 0 replies; 316+ messages in thread
From: Brendan Higgins @ 2019-03-25 22:12 UTC (permalink / raw)
  To: Frank Rowand
  Cc: Logan Gunthorpe, Kees Cook, Luis Chamberlain, shuah, Rob Herring,
	Kieran Bingham, Greg KH, Joel Stanley, Michael Ellerman,
	Joe Perches, brakmo, Steven Rostedt, Bird, Timothy, Kevin Hilman,
	Julia Lawall, linux-kselftest, kunit-dev,
	Linux Kernel Mailing List, Jeff Dike, Richard Weinberger,
	linux-um, Daniel Vetter, dri-devel, Dan Williams, linux-nvdimm,
	Knut Omang, devicetree, Petr Mladek, Sasha Levin, Amir Goldstein,
	Dan Carpenter, wfg, Frank Rowand-real

On Thu, Mar 21, 2019 at 6:12 PM Frank Rowand <frowand.list@gmail.com> wrote:
>
> On 3/21/19 4:33 PM, Brendan Higgins wrote:
> > On Thu, Mar 21, 2019 at 3:27 PM Logan Gunthorpe <logang@deltatee.com> wrote:
> >>
> >>
> >>
> >> On 2019-03-21 4:07 p.m., Brendan Higgins wrote:
> >>> A couple of points, as for needing CONFIG_PCI; my plan to deal with
> >>> that type of thing has been that we would add support for a KUnit/UML
> >>> version that is just for KUnit. It would mock out the necessary bits
> >>> to provide a fake hardware implementation for anything that might
> >>> depend on it. I wrote a prototype for mocking/faking MMIO that I
> >>> presented to the list here[1]; it is not part of the current patchset
> >>> because we decided it would be best to focus on getting an MVP in, but
> >>> I plan on bringing it back up at some point. Anyway, what do you
> >>> generally think of this approach?
> >>
> >> Yes, I was wondering if that might be possible. I think that's a great
> >> approach, but it will unfortunately take a lot of work before larger
> >> swaths of the kernel are testable in KUnit with UML. Having more common
> >> mocked infrastructure will be a great by-product of it, though.
> >
> > Yeah, it's unfortunate that the best way to do something often takes
> > so much longer.
> >
> >>
> >>> Awesome, I looked at the code you posted and it doesn't look like you
> >>> have had too many troubles. One thing that stood out to me, why did
> >>> you need to put it in the kunit/ dir?
> >>
> >> Yeah, writing the code was super easy. Only afterward did I realize I
> >> couldn't get it to build easily.
> >
> > Yeah, we really need to fix that; unfortunately, broadly addressing
> > that problem is really hard and will most likely take a long time.
> >
> >>
> >> Putting it in the kunit directory was necessary because nothing in the
> >> NTB tree builds unless CONFIG_NTB is set (see drivers/Makefile) and
> >> CONFIG_NTB depends on CONFIG_PCI. I didn't experiment to see how hard it
> >> would be to set CONFIG_NTB without CONFIG_PCI; I assumed it would be tricky.
> >>
> >>> I am looking forward to seeing what you think!
> >>
> >> Generally, I'm impressed and want to see this work in upstream as soon
> >> as possible so I can start to make use of it!
> >
> > Great to hear! I was trying to get the next revision out this week,
> > but addressing some of the comments is taking a little longer than
> > expected. I should have something together fairly soon though
> > (hopefully next week). Good news is that next revision will be
> > non-RFC; most of the feedback has settled down and I think we are
> > ready to start figuring out how to merge it. Fingers crossed :-)
> >
> > Cheers
>
> I'll be out of the office next week and will not be able to review.
> Please hold off on any devicetree related files until after I review.

Will do.

^ permalink raw reply	[flat|nested] 316+ messages in thread
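
The faking approach discussed in the message above amounts to backing the
"registers" with plain memory and having the code under test go through
replaceable accessors, so that logic which normally sits behind CONFIG_PCI
can run under UML with no hardware present. The sketch below is not code
from the patchset and none of these names exist in the kernel; it is a
standalone userspace model of the idea, invented purely for illustration.

/*
 * Hypothetical sketch of a fake MMIO region: plain memory stands in for
 * device registers, and the code under test only sees the accessors.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define FAKE_MMIO_WORDS 64

struct fake_mmio {
        uint32_t regs[FAKE_MMIO_WORDS]; /* backing store instead of device memory */
};

/* Stand-ins for readl()/writel() that the code under test would call. */
static uint32_t fake_readl(struct fake_mmio *m, unsigned int off)
{
        return m->regs[off / 4];
}

static void fake_writel(struct fake_mmio *m, unsigned int off, uint32_t val)
{
        m->regs[off / 4] = val;
}

/* A toy "driver" routine that only knows about the accessors above. */
static int toy_reset_device(struct fake_mmio *m)
{
        fake_writel(m, 0x0, 0x1);       /* poke a made-up reset bit */
        return fake_readl(m, 0x4) == 0; /* made-up status register reads back 0 */
}

int main(void)
{
        struct fake_mmio m;

        memset(&m, 0, sizeof(m));
        printf("reset %s\n", toy_reset_device(&m) ? "ok" : "failed");
        return 0;
}

A real conversion would route a driver's register accesses through an
interface of roughly this shape, so a test can observe and script the
"device" without any PCI hardware behind it.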

* Re: [RFC v4 08/17] kunit: test: add support for test abort
@ 2019-03-25 22:32                             ` Brendan Higgins
  0 siblings, 0 replies; 316+ messages in thread
From: Brendan Higgins @ 2019-03-25 22:32 UTC (permalink / raw)
  To: Knut Omang
  Cc: Frank Rowand, Kees Cook, Luis Chamberlain, shuah, Rob Herring,
	Kieran Bingham, Greg KH, Joel Stanley, Michael Ellerman,
	Joe Perches, brakmo, Steven Rostedt, Bird, Timothy, Kevin Hilman,
	Julia Lawall, linux-kselftest, kunit-dev,
	Linux Kernel Mailing List, Jeff Dike, Richard Weinberger,
	linux-um, Daniel Vetter, dri-devel, Dan Williams, linux-nvdimm,
	devicetree, Petr Mladek, Sasha Levin, Amir Goldstein,
	Dan Carpenter, wfg

On Fri, Mar 22, 2019 at 12:11 AM Knut Omang <knut.omang@oracle.com> wrote:
>
> On Thu, 2019-03-21 at 18:41 -0700, Brendan Higgins wrote:
> > On Thu, Mar 21, 2019 at 6:10 PM Frank Rowand <frowand.list@gmail.com> wrote:
> > > On 2/27/19 11:42 PM, Brendan Higgins wrote:
> > > > On Tue, Feb 19, 2019 at 10:44 PM Frank Rowand <frowand.list@gmail.com>
> > > > wrote:
> > > > > On 2/19/19 7:39 PM, Brendan Higgins wrote:
> > > > > > On Mon, Feb 18, 2019 at 11:52 AM Frank Rowand <frowand.list@gmail.com>
> > > > > > wrote:
> > > > > > > On 2/14/19 1:37 PM, Brendan Higgins wrote:
< snip >
> > > > > > > kunit_abort() is what will be called as the result of an assert
> > > > > > > failure.
> > > > > >
> > > > > > Yep. Does that need to be clarified somewhere?
> > > > > > > BUG(), which is a panic, which is crashing the system is not
> > > > > > > acceptable
> > > > > > > in the Linux kernel.  You will just annoy Linus if you submit this.
> > > > > >
> > > > > > Sorry, I thought this was an acceptable use case since, a) this should
> > > > > > never be compiled in a production kernel, b) we are in a pretty bad,
> > > > > > unpredictable state if we get here and keep going. I think you might
> > > > > > have said elsewhere that you think "a" is not valid? In any case, I
> > > > > > can replace this with a WARN, would that be acceptable?
> > > > >
> > > > > A WARN may or may not make sense, depending on the context.  It may
> > > > > be sufficient to simply report a test failure (as in the old version
> > > > > of case (2) below.
> > > > >
> > > > > Answers to "a)" and "b)":
> > > > >
> > > > > a) it might be in a production kernel
> > > >
> > > > Sorry for a possibly stupid question, how might it be so? Why would
> > > > someone intentionally build unit tests into a production kernel?
> > >
> > > People do things.  Just expect it.
> >
> > Huh, alright. I will take your word for it then.
>
> I have a better explanation: Production kernels have bugs, unfortunately.
> And sometimes those need to be investigated on systems that cannot be
> brought down or affected more than absolutely necessary, maybe via a third party
> doing the execution. A lightweight, precise test (well tested ahead :) ) might
> be a way of proving or disproving assumptions that can lead to the development
> and application of a fix.

Sorry, you are not suggesting testing in production, are you? To be
clear, I am not concerned about someone using testing, KUnit, or
whatever in a *production-like* environment: that's not what we are
talking about here. My assumption is that no one will deploy tests
into actual production.

>
> IMHO you're confusing "building into" with temporary applying, then removing
> again - like the difference between running a local user space program vs
> installing it under /usr and have it in everyone's PATH.

I don't really see the point of distinguishing between "building into"
and "temporary applying" in this case; that's part of my point. Maybe
it makes sense in whitebox end-to-end testing, but in the case of unit
testing, I don't think so.

>
> > > > > a') it is not acceptable in my development kernel either
>
> I think one of the fundamental properties of a good test framework is that it
> should not require changes to the code under test by itself.
>

Sure, but that has nothing to do with the environment the code/tests
are running in.

< snip >

Cheers

^ permalink raw reply	[flat|nested] 316+ messages in thread
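
The kunit_abort() discussion above is about where an assertion failure
should stop: the behaviour being asked for is that a failed assertion ends
only the current test case and reports it, rather than taking the whole
kernel down with BUG(). As a rough, purely illustrative model (plain
userspace C; the names and the setjmp()/longjmp() mechanism are invented
for this sketch and are not the patchset's implementation), the control
flow looks like this:

#include <setjmp.h>
#include <stdio.h>

static jmp_buf test_abort_jmp;
static int test_failed;

/* Record the failure and unwind to the runner; no panic, no BUG(). */
#define SKETCH_ASSERT(cond) do {                                \
        if (!(cond)) {                                          \
                test_failed = 1;                                \
                fprintf(stderr, "assert failed: %s\n", #cond);  \
                longjmp(test_abort_jmp, 1);                     \
        }                                                       \
} while (0)

static void example_test_case(void)
{
        SKETCH_ASSERT(1 + 1 == 2); /* passes, execution continues */
        SKETCH_ASSERT(2 + 2 == 5); /* fails: the rest of the case is skipped */
        fprintf(stderr, "not reached\n");
}

int main(void)
{
        /* The "runner": run the case, catch the abort, report the result. */
        if (setjmp(test_abort_jmp) == 0)
                example_test_case();
        printf("test case %s\n", test_failed ? "failed" : "passed");
        return test_failed;
}

The real framework has its own abort and reporting machinery; the sketch
only shows the property under debate in the review: a failing assertion
aborts the one case, and the runner carries on.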

* Re: [RFC v4 08/17] kunit: test: add support for test abort
@ 2019-03-26  7:44                                 ` Knut Omang
  0 siblings, 0 replies; 316+ messages in thread
From: Knut Omang @ 2019-03-26  7:44 UTC (permalink / raw)
  To: Brendan Higgins
  Cc: Frank Rowand, Kees Cook, Luis Chamberlain, shuah, Rob Herring,
	Kieran Bingham, Greg KH, Joel Stanley, Michael Ellerman,
	Joe Perches, brakmo, Steven Rostedt, Bird, Timothy, Kevin Hilman,
	Julia Lawall, linux-kselftest, kunit-dev,
	Linux Kernel Mailing List, Jeff Dike, Richard Weinberger,
	linux-um, Daniel Vetter, dri-devel, Dan Williams, linux-nvdimm,
	devicetree, Petr Mladek, Sasha Levin, Amir Goldstein,
	Dan Carpenter, wfg

On Mon, 2019-03-25 at 15:32 -0700, Brendan Higgins wrote:
> On Fri, Mar 22, 2019 at 12:11 AM Knut Omang <knut.omang@oracle.com> wrote:
> > On Thu, 2019-03-21 at 18:41 -0700, Brendan Higgins wrote:
> > > On Thu, Mar 21, 2019 at 6:10 PM Frank Rowand <frowand.list@gmail.com> wrote:
> > > > On 2/27/19 11:42 PM, Brendan Higgins wrote:
> > > > > On Tue, Feb 19, 2019 at 10:44 PM Frank Rowand <frowand.list@gmail.com>
> > > > > wrote:
> > > > > > On 2/19/19 7:39 PM, Brendan Higgins wrote:
> > > > > > > On Mon, Feb 18, 2019 at 11:52 AM Frank Rowand <frowand.list@gmail.com>
> > > > > > > wrote:
> > > > > > > > On 2/14/19 1:37 PM, Brendan Higgins wrote:
> < snip >
> > > > > > > > kunit_abort() is what will be called as the result of an assert
> > > > > > > > failure.
> > > > > > > 
> > > > > > > Yep. Does that need to be clarified somewhere?
> > > > > > > > BUG(), which is a panic, which is crashing the system is not
> > > > > > > > acceptable
> > > > > > > > in the Linux kernel.  You will just annoy Linus if you submit this.
> > > > > > > 
> > > > > > > Sorry, I thought this was an acceptable use case since, a) this should
> > > > > > > never be compiled in a production kernel, b) we are in a pretty bad,
> > > > > > > unpredictable state if we get here and keep going. I think you might
> > > > > > > have said elsewhere that you think "a" is not valid? In any case, I
> > > > > > > can replace this with a WARN, would that be acceptable?
> > > > > > 
> > > > > > A WARN may or may not make sense, depending on the context.  It may
> > > > > > be sufficient to simply report a test failure (as in the old version
> > > > > > of case (2) below.
> > > > > > 
> > > > > > Answers to "a)" and "b)":
> > > > > > 
> > > > > > a) it might be in a production kernel
> > > > > 
> > > > > Sorry for a possibly stupid question, how might it be so? Why would
> > > > > someone intentionally build unit tests into a production kernel?
> > > > 
> > > > People do things.  Just expect it.
> > > 
> > > Huh, alright. I will take your word for it then.
> > 
> > I have a better explanation: Production kernels have bugs, unfortunately.
> > And sometimes those need to be investigated on systems that cannot be
> > brought down or affected more than absolutely necessary, maybe via a third party
> > doing the execution. A lightweight, precise test (well tested ahead :) ) might
> > be a way of proving or disproving assumptions that can lead to the development
> > and application of a fix.
> 
> Sorry, you are not suggesting testing in production, are you? To be
> clear, I am not concerned about someone using testing, KUnit, or
> whatever in a *production-like* environment: that's not what we are
> talking about here. My assumption is that no one will deploy tests
> into actual production.

And my take is that you should not make such assumptions.
Even the cost of bringing down a "production-like" environment can be
significant, and the test infrastructure shouldn't think of itself as 
important enough to justify doing such things.

> > IMHO you're confusing "building into" with temporary applying, then removing
> > again - like the difference between running a local user space program vs
> > installing it under /usr and have it in everyone's PATH.
> 
> I don't really see the point of distinguishing between "building into"
> and "temporary applying" in this case; that's part of my point. Maybe
> it makes sense in whitebox end-to-end testing, but in the case of unit
> testing, I don't think so.
> 
> > > > > > a') it is not acceptable in my development kernel either
> > 
> > I think one of the fundamental properties of a good test framework is that it
> > should not require changes to the code under test by itself.
> > 
> 
> Sure, but that has nothing to do with the environment the code/tests
> are running in.

Well, just that if the tests require a special environment to run, 
you limit the usability of the tests in detecting or ruling out real issues.

Thanks,
Knut

> 
> < snip >
> 
> Cheers


^ permalink raw reply	[flat|nested] 316+ messages in thread
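
One concrete reading of the "built into a production kernel" question in
this subthread is compile-time gating: test-only code normally sits behind
a dedicated config option, so a configuration that leaves the option off
never compiles it in. The sketch below is hypothetical (the option name and
functions are invented, and it is plain userspace C rather than the
patchset's Kconfig wiring); it only illustrates that gating, not how anyone
should configure a production system.

/* Build with -DCONFIG_FOO_UNIT_TESTS to include the test-only code. */
#include <stdio.h>

static int foo_add(int a, int b)
{
        return a + b;
}

#ifdef CONFIG_FOO_UNIT_TESTS
/* Compiled only when the test option is set; otherwise absent from the binary. */
static void foo_add_test(void)
{
        if (foo_add(2, 3) != 5)
                fprintf(stderr, "foo_add test failed\n");
        else
                printf("foo_add test passed\n");
}
#endif

int main(void)
{
        printf("foo_add(2, 3) = %d\n", foo_add(2, 3));
#ifdef CONFIG_FOO_UNIT_TESTS
        foo_add_test();
#endif
        return 0;
}

Whether such an option is ever switched on outside development is exactly
the disagreement above; the gating itself says nothing about how the tests
are later applied or removed.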

* Re: [RFC v4 08/17] kunit: test: add support for test abort
@ 2019-03-26  7:44                                 ` Knut Omang
  0 siblings, 0 replies; 316+ messages in thread
From: Knut Omang @ 2019-03-26  7:44 UTC (permalink / raw)
  To: Brendan Higgins
  Cc: brakmo, Petr Mladek, Amir Goldstein, dri-devel, Sasha Levin,
	linux-kselftest, Frank Rowand, Rob Herring, linux-nvdimm,
	Richard Weinberger, Kieran Bingham, wfg, Joel Stanley, Jeff Dike,
	Dan Carpenter, devicetree, shuah, Bird,

On Mon, 2019-03-25 at 15:32 -0700, Brendan Higgins wrote:
> On Fri, Mar 22, 2019 at 12:11 AM Knut Omang <knut.omang@oracle.com> wrote:
> > On Thu, 2019-03-21 at 18:41 -0700, Brendan Higgins wrote:
> > > On Thu, Mar 21, 2019 at 6:10 PM Frank Rowand <frowand.list@gmail.com> wrote:
> > > > On 2/27/19 11:42 PM, Brendan Higgins wrote:
> > > > > On Tue, Feb 19, 2019 at 10:44 PM Frank Rowand <frowand.list@gmail.com>
> > > > > wrote:
> > > > > > On 2/19/19 7:39 PM, Brendan Higgins wrote:
> > > > > > > On Mon, Feb 18, 2019 at 11:52 AM Frank Rowand <frowand.list@gmail.com>
> > > > > > > wrote:
> > > > > > > > On 2/14/19 1:37 PM, Brendan Higgins wrote:
> < snip >
> > > > > > > > kunit_abort() is what will be call as the result of an assert
> > > > > > > > failure.
> > > > > > > 
> > > > > > > Yep. Does that need clarified somewhere.
> > > > > > > > BUG(), which is a panic, which is crashing the system is not
> > > > > > > > acceptable
> > > > > > > > in the Linux kernel.  You will just annoy Linus if you submit this.
> > > > > > > 
> > > > > > > Sorry, I thought this was an acceptable use case since a) this
> > > > > > > should never be compiled into a production kernel, and b) we are
> > > > > > > in a pretty bad, unpredictable state if we get here and keep
> > > > > > > going. I think you might have said elsewhere that you think "a"
> > > > > > > is not valid? In any case, I can replace this with a WARN; would
> > > > > > > that be acceptable?
> > > > > > 
> > > > > > A WARN may or may not make sense, depending on the context.  It may
> > > > > > be sufficient to simply report a test failure (as in the old version
> > > > > > of case (2) below).
> > > > > > 
> > > > > > Answers to "a)" and "b)":
> > > > > > 
> > > > > > a) it might be in a production kernel
> > > > > 
> > > > > Sorry for a possibly stupid question: how might that be so? Why would
> > > > > someone intentionally build unit tests into a production kernel?
> > > > 
> > > > People do things.  Just expect it.
> > > 
> > > Huh, alright. I will take your word for it then.
> > 
> > I have a better explanation: Production kernels have bugs, unfortunately.
> > And sometimes those need to be investigated on systems that cannot be
> > brought down or affected more than absolutely necessary, maybe via a third
> > party doing the execution. A lightweight, precise test (well tested ahead
> > of time :) ) might be a way of proving or disproving assumptions that can
> > lead to the development and application of a fix.
> 
> Sorry, you are not suggesting testing in production, are you? To be
> clear, I am not concerned about someone using testing, KUnit, or
> whatever in a *production-like* environment: that's not what we are
> talking about here. My assumption is that no one will deploy tests
> into actual production.

And my take is that you should not make such assumptions.
Even the cost of bringing down a "production-like" environment can be
significant, and the test infrastructure shouldn't think of itself as 
important enough to justify doing such things.

> > IMHO you're confusing "building into" with temporarily applying, then
> > removing again - like the difference between running a local user space
> > program vs installing it under /usr and having it in everyone's PATH.
> 
> I don't really see the point of distinguishing between "building into"
> and "temporarily applying" in this case; that's part of my point. Maybe
> it makes sense in whitebox end-to-end testing, but in the case of unit
> testing, I don't think so.
> 
> > > > > > a') it is not acceptable in my development kernel either
> > 
> > I think one of the fundamental properties of a good test framework is that
> > it should not, by itself, require changes to the code under test.
> > 
> 
> Sure, but that has nothing to do with the environment the code/tests
> are running in.

Well, just that if the tests require a special environment to run,
you limit the usefulness of the tests in detecting or ruling out real issues.

Thanks,
Knut

> 
> < snip >
> 
> Cheers
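
For readers following the assertion-abort discussion quoted above, here is
a minimal, purely hypothetical sketch of the idea being argued for: an
assertion failure terminates only the thread running the offending test
case and is then reported as an ordinary failure, with no BUG() involved.
The names (my_test, my_test_abort, MY_ASSERT_TRUE, my_test_run_case) are
invented for illustration and are not the API proposed in this RFC.

/*
 * Hypothetical sketch, not the RFC's actual implementation: each test
 * case runs in its own kthread; an assertion failure records the result
 * and exits that thread; the runner only logs the outcome.
 */
#include <linux/completion.h>
#include <linux/err.h>
#include <linux/kernel.h>
#include <linux/kthread.h>
#include <linux/printk.h>

struct my_test {
        bool success;
        struct completion done;
};

/* Must be called from the test-case thread itself. */
static void __noreturn my_test_abort(struct my_test *test)
{
        test->success = false;
        /*
         * Wake the runner and terminate only this thread;
         * complete_and_exit() was renamed kthread_complete_and_exit()
         * in later kernels.
         */
        complete_and_exit(&test->done, -EINTR);
}

/* Stand-in for an assertion macro. */
#define MY_ASSERT_TRUE(test, cond)                                     \
        do {                                                           \
                if (!(cond)) {                                         \
                        pr_warn("assertion failed: %s\n", #cond);      \
                        my_test_abort(test);                           \
                }                                                      \
        } while (0)

static int my_test_case_thread(void *data)
{
        struct my_test *test = data;

        /* The actual test body would run here. */
        MY_ASSERT_TRUE(test, 1 + 1 == 2);

        test->success = true;
        complete_and_exit(&test->done, 0);
        return 0;  /* not reached */
}

static void my_test_run_case(struct my_test *test)
{
        struct task_struct *task;

        test->success = false;
        init_completion(&test->done);

        task = kthread_run(my_test_case_thread, test, "my-test-case");
        if (IS_ERR(task)) {
                pr_err("my_test: could not start test thread\n");
                return;
        }

        wait_for_completion(&test->done);
        if (!test->success)
                pr_warn("my_test: test case FAILED\n");  /* report, no BUG() */
}

The point of the sketch is that only the failing case's thread is torn
down; the machine keeps running and the runner merely logs the result,
in the spirit of "simply report a test failure" suggested above.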


_______________________________________________
linux-um mailing list
linux-um@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-um


^ permalink raw reply	[flat|nested] 316+ messages in thread
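
As a concrete illustration of "temporarily applying, then removing again"
from the exchange above (entirely hypothetical; the file and symbol names
are invented), a throwaway check can be built as an out-of-tree module,
loaded on the running kernel, and unloaded once it has reported its
result, leaving the kernel image itself untouched:

/*
 * Hypothetical sketch: a test that is loaded, reports PASS/FAIL via the
 * log, and is removed again, rather than being built into the kernel.
 */
#include <linux/init.h>
#include <linux/module.h>
#include <linux/printk.h>
#include <linux/slab.h>

static int __init tmp_test_init(void)
{
        void *p = kmalloc(32, GFP_KERNEL);

        /* Report the outcome instead of halting the machine. */
        if (!p) {
                pr_warn("tmp_test: FAIL: kmalloc(32) returned NULL\n");
        } else {
                pr_info("tmp_test: PASS: kmalloc(32) succeeded\n");
                kfree(p);
        }

        /* Returning 0 keeps the module loaded until it is removed. */
        return 0;
}

static void __exit tmp_test_exit(void)
{
        pr_info("tmp_test: unloaded, nothing left behind\n");
}

module_init(tmp_test_init);
module_exit(tmp_test_exit);
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Throwaway test module sketch");

Something like "insmod tmp_test.ko; dmesg | tail; rmmod tmp_test" runs the
check and removes it again; nothing is compiled into the kernel image, which
is the distinction being debated above.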

end of thread, other threads:[~2019-03-26  7:46 UTC | newest]

Thread overview: 316+ messages
2019-02-14 21:37 [RFC v4 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework Brendan Higgins
2019-02-14 21:37 ` Brendan Higgins
2019-02-14 21:37 ` Brendan Higgins
2019-02-14 21:37 ` brendanhiggins
2019-02-14 21:37 ` [RFC v4 02/17] kunit: test: add test resource management API Brendan Higgins
2019-02-14 21:37   ` Brendan Higgins
2019-02-14 21:37   ` Brendan Higgins
2019-02-14 21:37   ` brendanhiggins
2019-02-15 21:01   ` Stephen Boyd
2019-02-15 21:01     ` Stephen Boyd
2019-02-15 21:01     ` Stephen Boyd
2019-02-15 21:01     ` sboyd
2019-02-15 21:01     ` Stephen Boyd
2019-02-19 23:24     ` Brendan Higgins
2019-02-19 23:24       ` Brendan Higgins
2019-02-19 23:24       ` Brendan Higgins
2019-02-19 23:24       ` brendanhiggins
2019-02-19 23:24       ` Brendan Higgins
2019-02-14 21:37 ` [RFC v4 07/17] kunit: test: add initial tests Brendan Higgins
2019-02-14 21:37   ` Brendan Higgins
2019-02-14 21:37   ` Brendan Higgins
2019-02-14 21:37   ` brendanhiggins
2019-02-14 21:37 ` [RFC v4 16/17] of: unittest: split out a couple of test cases from unittest Brendan Higgins
2019-02-14 21:37   ` Brendan Higgins
2019-02-14 21:37   ` Brendan Higgins
2019-02-14 21:37   ` brendanhiggins
2019-03-22  1:14   ` Frank Rowand
2019-03-22  1:14     ` Frank Rowand
2019-03-22  1:14     ` Frank Rowand
2019-03-22  1:14     ` frowand.list
2019-03-22  1:14     ` Frank Rowand
2019-03-22  1:45     ` Brendan Higgins
2019-03-22  1:45       ` Brendan Higgins
2019-03-22  1:45       ` Brendan Higgins
2019-03-22  1:45       ` brendanhiggins
2019-03-22  1:45       ` Brendan Higgins
2019-03-22  1:45       ` Brendan Higgins
2019-02-14 21:37 ` [RFC v4 17/17] of: unittest: split up some super large test cases Brendan Higgins
2019-02-14 21:37   ` Brendan Higgins
2019-02-14 21:37   ` Brendan Higgins
2019-02-14 21:37   ` brendanhiggins
2019-03-22  1:16   ` Frank Rowand
2019-03-22  1:16     ` Frank Rowand
2019-03-22  1:16     ` Frank Rowand
2019-03-22  1:16     ` frowand.list
2019-03-22  1:16     ` Frank Rowand
2019-03-22  1:16     ` Frank Rowand
     [not found]     ` <09b06e6d-fd36-707e-cb7a-e935bd930510-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
2019-03-22  1:45       ` Brendan Higgins
2019-03-22  1:45         ` Brendan Higgins
2019-03-22  1:45         ` Brendan Higgins
2019-03-22  1:45         ` brendanhiggins
2019-03-22  1:45         ` Brendan Higgins
2019-02-18 20:02 ` [RFC v4 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework Frank Rowand
2019-02-18 20:02   ` Frank Rowand
2019-02-18 20:02   ` Frank Rowand
2019-02-18 20:02   ` frowand.list
2019-02-20  6:34   ` Brendan Higgins
2019-02-20  6:34     ` Brendan Higgins
2019-02-20  6:34     ` Brendan Higgins
2019-02-20  6:34     ` brendanhiggins
2019-02-20  6:34     ` Brendan Higgins
2019-02-20  6:46     ` Frank Rowand
2019-02-20  6:46       ` Frank Rowand
2019-02-20  6:46       ` Frank Rowand
2019-02-20  6:46       ` frowand.list
2019-02-20  6:46       ` Frank Rowand
2019-02-22 20:52       ` Thiago Jung Bauermann
2019-02-22 20:52         ` Thiago Jung Bauermann
2019-02-22 20:52         ` Thiago Jung Bauermann
2019-02-22 20:52         ` bauerman
2019-02-22 20:52         ` Thiago Jung Bauermann
2019-02-28  4:18         ` Brendan Higgins
2019-02-28  4:18           ` Brendan Higgins
2019-02-28  4:18           ` brendanhiggins
2019-02-28  4:18           ` Brendan Higgins
2019-02-28  4:15       ` Brendan Higgins
2019-02-28  4:15         ` Brendan Higgins
2019-02-28  4:15         ` Brendan Higgins
2019-02-28  4:15         ` brendanhiggins
2019-02-28  4:15         ` Brendan Higgins
2019-03-04 23:01 ` Brendan Higgins
2019-03-04 23:01   ` Brendan Higgins
2019-03-04 23:01   ` Brendan Higgins
2019-03-04 23:01   ` brendanhiggins
2019-03-04 23:01   ` Brendan Higgins
2019-03-22  1:23   ` Frank Rowand
2019-03-22  1:23     ` Frank Rowand
2019-03-22  1:23     ` Frank Rowand
2019-03-22  1:23     ` frowand.list
2019-03-22  1:23     ` Frank Rowand
2019-03-22  1:23     ` Frank Rowand
     [not found]     ` <0e6eb370-3e62-e1a5-1b91-bccc5868e8e4-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
2019-03-25 22:11       ` Brendan Higgins
2019-03-25 22:11         ` Brendan Higgins
2019-03-25 22:11         ` Brendan Higgins
2019-03-25 22:11         ` brendanhiggins
2019-03-25 22:11         ` Brendan Higgins
     [not found] ` <20190214213729.21702-1-brendanhiggins-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org>
2019-02-14 21:37   ` [RFC v4 01/17] kunit: test: add KUnit test runner core Brendan Higgins
2019-02-14 21:37     ` Brendan Higgins
2019-02-14 21:37     ` Brendan Higgins
2019-02-14 21:37     ` brendanhiggins
2019-02-14 21:37     ` Brendan Higgins
2019-02-14 21:37   ` [RFC v4 03/17] kunit: test: add string_stream a std::stream like string builder Brendan Higgins
2019-02-14 21:37     ` Brendan Higgins
2019-02-14 21:37     ` Brendan Higgins
2019-02-14 21:37     ` brendanhiggins
2019-02-14 21:37     ` Brendan Higgins
2019-02-14 21:37   ` [RFC v4 04/17] kunit: test: add test_stream a std::stream like logger Brendan Higgins
2019-02-14 21:37     ` Brendan Higgins
2019-02-14 21:37     ` Brendan Higgins
2019-02-14 21:37     ` brendanhiggins
2019-02-14 21:37     ` Brendan Higgins
2019-02-14 21:37   ` [RFC v4 05/17] kunit: test: add the concept of expectations Brendan Higgins
2019-02-14 21:37     ` Brendan Higgins
2019-02-14 21:37     ` Brendan Higgins
2019-02-14 21:37     ` brendanhiggins
2019-02-14 21:37     ` Brendan Higgins
2019-02-14 21:37   ` [RFC v4 06/17] kbuild: enable building KUnit Brendan Higgins
2019-02-14 21:37     ` Brendan Higgins
2019-02-14 21:37     ` Brendan Higgins
2019-02-14 21:37     ` brendanhiggins
2019-02-14 21:37     ` Brendan Higgins
2019-02-14 21:37   ` [RFC v4 08/17] kunit: test: add support for test abort Brendan Higgins
2019-02-14 21:37     ` Brendan Higgins
2019-02-14 21:37     ` Brendan Higgins
2019-02-14 21:37     ` brendanhiggins
2019-02-14 21:37     ` Brendan Higgins
     [not found]     ` <20190214213729.21702-9-brendanhiggins-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org>
2019-02-18 19:52       ` Frank Rowand
2019-02-18 19:52         ` Frank Rowand
2019-02-18 19:52         ` frowand.list
2019-02-18 19:52         ` Frank Rowand
     [not found]         ` <da1995fa-a362-dfe6-8184-7fcdf2b923e8-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
2019-02-20  3:39           ` Brendan Higgins
2019-02-20  3:39             ` Brendan Higgins
2019-02-20  3:39             ` Brendan Higgins
2019-02-20  3:39             ` brendanhiggins
2019-02-20  3:39             ` Brendan Higgins
2019-02-20  6:44             ` Frank Rowand
2019-02-20  6:44               ` Frank Rowand
2019-02-20  6:44               ` Frank Rowand
2019-02-20  6:44               ` frowand.list
2019-02-20  6:44               ` Frank Rowand
2019-02-20  6:44               ` Frank Rowand
2019-02-28  7:42               ` Brendan Higgins
2019-02-28  7:42                 ` Brendan Higgins
2019-02-28  7:42                 ` Brendan Higgins
2019-02-28  7:42                 ` brendanhiggins
2019-02-28  7:42                 ` Brendan Higgins
     [not found]                 ` <CAFd5g47EDmsBWKNiW0jpHW2VG_GWCfe8UO+=ofgM2_ru+_UBQA-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2019-03-22  1:09                   ` Frank Rowand
2019-03-22  1:09                     ` Frank Rowand
2019-03-22  1:09                     ` Frank Rowand
2019-03-22  1:09                     ` frowand.list
2019-03-22  1:09                     ` Frank Rowand
2019-03-22  1:41                     ` Brendan Higgins
2019-03-22  1:41                       ` Brendan Higgins
2019-03-22  1:41                       ` Brendan Higgins
2019-03-22  1:41                       ` brendanhiggins
2019-03-22  1:41                       ` Brendan Higgins
     [not found]                       ` <CAFd5g46AP9yZQ+z+60HGaZuqhJQmfSBw9+r62w4k=cGiMEkqLA-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2019-03-22  7:10                         ` Knut Omang
2019-03-22  7:10                           ` Knut Omang
2019-03-22  7:10                           ` Knut Omang
2019-03-22  7:10                           ` knut.omang
2019-03-22  7:10                           ` Knut Omang
2019-03-25 22:32                           ` Brendan Higgins
2019-03-25 22:32                             ` Brendan Higgins
2019-03-25 22:32                             ` brendanhiggins
2019-03-25 22:32                             ` Brendan Higgins
     [not found]                             ` <CAFd5g44eqjN-nVCJuoeYFCxwVa5AorWiAnXe-tFCAc11zDgJFA-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2019-03-26  7:44                               ` Knut Omang
2019-03-26  7:44                                 ` Knut Omang
2019-03-26  7:44                                 ` Knut Omang
2019-03-26  7:44                                 ` knut.omang
2019-03-26  7:44                                 ` Knut Omang
2019-02-26 20:35       ` Stephen Boyd
2019-02-26 20:35         ` Stephen Boyd
2019-02-26 20:35         ` Stephen Boyd
2019-02-26 20:35         ` sboyd
2019-02-26 20:35         ` Stephen Boyd
     [not found]         ` <155121334527.260864.5324117081460979741-n1Xw8LXHxjTHt/MElyovVYaSKrA+ACpX0E9HWUfgJXw@public.gmane.org>
2019-02-28  9:03           ` Brendan Higgins
2019-02-28  9:03             ` Brendan Higgins
2019-02-28  9:03             ` brendanhiggins
2019-02-28  9:03             ` Brendan Higgins
2019-02-28 13:54             ` Dan Carpenter
2019-02-28 13:54               ` Dan Carpenter
2019-02-28 13:54               ` Dan Carpenter
2019-02-28 13:54               ` dan.carpenter
2019-02-28 13:54               ` Dan Carpenter
2019-03-04 22:28               ` Brendan Higgins
2019-03-04 22:28                 ` Brendan Higgins
2019-03-04 22:28                 ` Brendan Higgins
2019-03-04 22:28                 ` brendanhiggins
2019-03-04 22:28                 ` Brendan Higgins
2019-02-28 18:02             ` Stephen Boyd
2019-02-28 18:02               ` Stephen Boyd
     [not found]               ` <155137694423.260864.2846034318906225490-n1Xw8LXHxjTHt/MElyovVYaSKrA+ACpX0E9HWUfgJXw@public.gmane.org>
2019-03-04 22:39                 ` Brendan Higgins
2019-03-04 22:39                   ` Brendan Higgins
2019-03-04 22:39                   ` Brendan Higgins
2019-03-04 22:39                   ` brendanhiggins
2019-03-04 22:39                   ` Brendan Higgins
2019-02-14 21:37   ` [RFC v4 09/17] kunit: test: add the concept of assertions Brendan Higgins
2019-02-14 21:37     ` Brendan Higgins
2019-02-14 21:37     ` Brendan Higgins
2019-02-14 21:37     ` brendanhiggins
2019-02-14 21:37     ` Brendan Higgins
2019-02-14 21:37   ` [RFC v4 10/17] kunit: test: add test managed resource tests Brendan Higgins
2019-02-14 21:37     ` Brendan Higgins
2019-02-14 21:37     ` Brendan Higgins
2019-02-14 21:37     ` brendanhiggins
2019-02-14 21:37     ` Brendan Higgins
     [not found]     ` <20190214213729.21702-11-brendanhiggins-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org>
2019-02-15 20:54       ` Stephen Boyd
2019-02-15 20:54         ` Stephen Boyd
2019-02-15 20:54         ` Stephen Boyd
2019-02-15 20:54         ` sboyd
2019-02-15 20:54         ` Stephen Boyd
2019-02-19 23:20         ` Brendan Higgins
2019-02-19 23:20           ` Brendan Higgins
2019-02-19 23:20           ` Brendan Higgins
2019-02-19 23:20           ` brendanhiggins
2019-02-19 23:20           ` Brendan Higgins
2019-02-20 22:03           ` Stephen Boyd
2019-02-20 22:03             ` Stephen Boyd
2019-02-14 21:37   ` [RFC v4 11/17] kunit: tool: add Python wrappers for running KUnit tests Brendan Higgins
2019-02-14 21:37     ` Brendan Higgins
2019-02-14 21:37     ` Brendan Higgins
2019-02-14 21:37     ` brendanhiggins
2019-02-14 21:37     ` Brendan Higgins
2019-02-14 21:37   ` [RFC v4 12/17] kunit: defconfig: add defconfigs for building " Brendan Higgins
2019-02-14 21:37     ` Brendan Higgins
2019-02-14 21:37     ` Brendan Higgins
2019-02-14 21:37     ` brendanhiggins
2019-02-14 21:37     ` Brendan Higgins
2019-02-14 21:37   ` [RFC v4 13/17] Documentation: kunit: add documentation for KUnit Brendan Higgins
2019-02-14 21:37     ` Brendan Higgins
2019-02-14 21:37     ` Brendan Higgins
2019-02-14 21:37     ` brendanhiggins
2019-02-14 21:37     ` Brendan Higgins
2019-02-14 21:37   ` [RFC v4 14/17] MAINTAINERS: add entry for KUnit the unit testing framework Brendan Higgins
2019-02-14 21:37     ` Brendan Higgins
2019-02-14 21:37     ` Brendan Higgins
2019-02-14 21:37     ` brendanhiggins
2019-02-14 21:37     ` Brendan Higgins
2019-02-14 21:37   ` [RFC v4 15/17] of: unittest: migrate tests to run on KUnit Brendan Higgins
2019-02-14 21:37     ` Brendan Higgins
2019-02-14 21:37     ` Brendan Higgins
2019-02-14 21:37     ` brendanhiggins
2019-02-14 21:37     ` Brendan Higgins
     [not found]     ` <20190214213729.21702-16-brendanhiggins-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org>
2019-02-16  0:24       ` Frank Rowand
2019-02-16  0:24         ` Frank Rowand
2019-02-16  0:24         ` Frank Rowand
2019-02-16  0:24         ` frowand.list
2019-02-16  0:24         ` Frank Rowand
     [not found]         ` <cda7c8db-a6d0-6a93-5c33-9ccf32dfd29a-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
2019-02-20  2:24           ` Brendan Higgins
2019-02-20  2:24             ` Brendan Higgins
2019-02-20  2:24             ` Brendan Higgins
2019-02-20  2:24             ` brendanhiggins
2019-02-20  2:24             ` Brendan Higgins
2019-03-21  1:07   ` [RFC v4 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework Logan Gunthorpe
2019-03-21  1:07     ` Logan Gunthorpe
2019-03-21  1:07     ` Logan Gunthorpe
2019-03-21  1:07     ` logang
2019-03-21  1:07     ` Logan Gunthorpe
     [not found]     ` <6d9b3b21-1179-3a45-7545-30aa15306cb4-OTvnGxWRz7hWk0Htik3J/w@public.gmane.org>
2019-03-21  5:23       ` Knut Omang
2019-03-21  5:23         ` Knut Omang
2019-03-21  5:23         ` Knut Omang
2019-03-21  5:23         ` knut.omang
2019-03-21  5:23         ` Knut Omang
2019-03-21 15:56         ` Logan Gunthorpe
2019-03-21 15:56           ` Logan Gunthorpe
2019-03-21 15:56           ` Logan Gunthorpe
2019-03-21 15:56           ` logang
2019-03-21 15:56           ` Logan Gunthorpe
2019-03-21 15:56           ` Logan Gunthorpe
     [not found]           ` <ce355f5c-189c-816c-cde4-fb4e816d44e7-OTvnGxWRz7hWk0Htik3J/w@public.gmane.org>
2019-03-21 16:55             ` Brendan Higgins
2019-03-21 16:55               ` Brendan Higgins
2019-03-21 16:55               ` Brendan Higgins
2019-03-21 16:55               ` brendanhiggins
2019-03-21 16:55               ` Brendan Higgins
     [not found]               ` <CAFd5g45LKU+SZAFzn3RNCoxhzum0NAr9t+UJ80SJDMc_FeKgBQ-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2019-03-21 19:13                 ` Knut Omang
2019-03-21 19:13                   ` Knut Omang
2019-03-21 19:13                   ` Knut Omang
2019-03-21 19:13                   ` knut.omang
2019-03-21 19:13                   ` Knut Omang
2019-03-21 19:29                   ` Logan Gunthorpe
2019-03-21 19:29                     ` Logan Gunthorpe
2019-03-21 19:29                     ` Logan Gunthorpe
2019-03-21 19:29                     ` logang
2019-03-21 19:29                     ` Logan Gunthorpe
2019-03-21 19:29                     ` Logan Gunthorpe
     [not found]                     ` <961494a3-d08c-2720-c59d-7d7008edb288-OTvnGxWRz7hWk0Htik3J/w@public.gmane.org>
2019-03-21 20:14                       ` Knut Omang
2019-03-21 20:14                         ` Knut Omang
2019-03-21 20:14                         ` Knut Omang
2019-03-21 20:14                         ` knut.omang
2019-03-21 20:14                         ` Knut Omang
2019-03-21 22:07       ` Brendan Higgins
2019-03-21 22:07         ` Brendan Higgins
2019-03-21 22:07         ` Brendan Higgins
2019-03-21 22:07         ` brendanhiggins
2019-03-21 22:07         ` Brendan Higgins
2019-03-21 22:26         ` Logan Gunthorpe
2019-03-21 22:26           ` Logan Gunthorpe
2019-03-21 22:26           ` Logan Gunthorpe
2019-03-21 22:26           ` logang
2019-03-21 22:26           ` Logan Gunthorpe
2019-03-21 23:33           ` Brendan Higgins
2019-03-21 23:33             ` Brendan Higgins
2019-03-21 23:33             ` Brendan Higgins
2019-03-21 23:33             ` brendanhiggins
2019-03-21 23:33             ` Brendan Higgins
2019-03-22  1:12             ` Frank Rowand
2019-03-22  1:12               ` Frank Rowand
2019-03-22  1:12               ` Frank Rowand
2019-03-22  1:12               ` frowand.list
2019-03-22  1:12               ` Frank Rowand
2019-03-22  1:12               ` Frank Rowand
     [not found]               ` <aea24c8e-5ce8-5f33-c81b-d2eeef588ec8-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
2019-03-25 22:12                 ` Brendan Higgins
2019-03-25 22:12                   ` Brendan Higgins
2019-03-25 22:12                   ` Brendan Higgins
2019-03-25 22:12                   ` brendanhiggins
2019-03-25 22:12                   ` Brendan Higgins
