Linux-Trace-Devel Archive on lore.kernel.org
* [PATCH v4 00/11] Build trace-cruncher as Python package
@ 2021-07-07 13:21 Yordan Karadzhov (VMware)
  2021-07-07 13:21 ` [PATCH v4 01/11] trace-cruncher: Refactor the part that wraps ftrace Yordan Karadzhov (VMware)
                   ` (11 more replies)
  0 siblings, 12 replies; 13+ messages in thread
From: Yordan Karadzhov (VMware) @ 2021-07-07 13:21 UTC (permalink / raw)
  To: linux-trace-devel; +Cc: Yordan Karadzhov (VMware)

This patch-set restructures the project and makes it build as a native
Python package. Although it looks like a complete rewrite, it is
essentially just a switch from using Cython to using the C API of
Python directly. Cython is still used, but only for the implementation
of the NumPy data wrapper. The functionality that wraps Ftrace is
extended substantially. This is made possible by switching to the
recently released libraries libtraceevent and libtracefs.

Major changes in v4:
 - More robust signal handling in iterate_trace() (PATCH 02/11). 

Major changes in v3:
 - More basic methods for tracing are added ([PATCH 02/11] new).
 - Auto-naming of the instances is supported.
 - Recently implemented new APIs in libtracefs are adopted.
 
Changes in v2:
 - Addressing the comments made by Steven in his review.
 - Start using the libtracefs APIs for enable/disable events.
 - Add functionalities for enable/disable event filters.


Yordan Karadzhov (VMware) (11):
  trace-cruncher: Refactor the part that wraps ftrace
  trace-cruncher: Add basic methods for tracing
  trace-cruncher: Refactor the part that wraps libkshark
  trace-cruncher: Add "utils"
  trace-cruncher: Refactor the examples
  trace-cruncher: Add ftracepy example
  trace-cruncher: Add Makefile
  trace-cruncher: Update README.md
  trace-cruncher: Remove all leftover files.
  trace-cruncher: Add testing
  trace-cruncher: Add github workflow for CI testing

 .github/workflows/main.yml                    |   58 +
 0001-kernel-shark-Add-_DEVEL-build-flag.patch |   90 -
 0002-kernel-shark-Add-reg_pid-plugin.patch    |  231 --
 Makefile                                      |   33 +
 README.md                                     |   84 +-
 clean.sh                                      |    6 -
 examples/gpareto_fit.py                       |  328 ---
 examples/ksharksetup.py                       |   24 -
 examples/page_faults.py                       |  120 --
 examples/sched_wakeup.py                      |   70 +-
 examples/start_tracing.py                     |   20 +
 libkshark-py.c                                |  224 --
 libkshark_wrapper.pyx                         |  361 ----
 np_setup.py                                   |   90 -
 setup.py                                      |   81 +
 src/common.h                                  |  105 +
 src/ftracepy-utils.c                          | 1869 +++++++++++++++++
 src/ftracepy-utils.h                          |  144 ++
 src/ftracepy.c                                |  292 +++
 src/ksharkpy-utils.c                          |  411 ++++
 src/ksharkpy-utils.h                          |   41 +
 src/ksharkpy.c                                |   94 +
 src/npdatawrapper.pyx                         |  203 ++
 src/trace2matrix.c                            |   40 +
 tests/0_get_data/__init__.py                  |    0
 tests/0_get_data/test_get_data.py             |   26 +
 tests/1_unit/__init__.py                      |    0
 tests/1_unit/test_01_ftracepy_unit.py         |  471 +++++
 tests/1_unit/test_02_datawrapper_unit.py      |   41 +
 tests/1_unit/test_03_ksharkpy_unit.py         |   72 +
 tests/2_integration/__init__.py               |    0
 .../test_01_ftracepy_integration.py           |  113 +
 .../test_03_ksharkpy_integration.py           |   25 +
 tests/__init__.py                             |    0
 tracecruncher/__init__.py                     |    0
 tracecruncher/ft_utils.py                     |   19 +
 tracecruncher/ks_utils.py                     |  227 ++
 37 files changed, 4469 insertions(+), 1544 deletions(-)
 create mode 100644 .github/workflows/main.yml
 delete mode 100644 0001-kernel-shark-Add-_DEVEL-build-flag.patch
 delete mode 100644 0002-kernel-shark-Add-reg_pid-plugin.patch
 create mode 100644 Makefile
 delete mode 100755 clean.sh
 delete mode 100755 examples/gpareto_fit.py
 delete mode 100644 examples/ksharksetup.py
 delete mode 100755 examples/page_faults.py
 create mode 100755 examples/start_tracing.py
 delete mode 100644 libkshark-py.c
 delete mode 100644 libkshark_wrapper.pyx
 delete mode 100755 np_setup.py
 create mode 100644 setup.py
 create mode 100644 src/common.h
 create mode 100644 src/ftracepy-utils.c
 create mode 100644 src/ftracepy-utils.h
 create mode 100644 src/ftracepy.c
 create mode 100644 src/ksharkpy-utils.c
 create mode 100644 src/ksharkpy-utils.h
 create mode 100644 src/ksharkpy.c
 create mode 100644 src/npdatawrapper.pyx
 create mode 100644 src/trace2matrix.c
 create mode 100644 tests/0_get_data/__init__.py
 create mode 100755 tests/0_get_data/test_get_data.py
 create mode 100644 tests/1_unit/__init__.py
 create mode 100644 tests/1_unit/test_01_ftracepy_unit.py
 create mode 100755 tests/1_unit/test_02_datawrapper_unit.py
 create mode 100755 tests/1_unit/test_03_ksharkpy_unit.py
 create mode 100644 tests/2_integration/__init__.py
 create mode 100755 tests/2_integration/test_01_ftracepy_integration.py
 create mode 100755 tests/2_integration/test_03_ksharkpy_integration.py
 create mode 100644 tests/__init__.py
 create mode 100644 tracecruncher/__init__.py
 create mode 100644 tracecruncher/ft_utils.py
 create mode 100644 tracecruncher/ks_utils.py

-- 
2.27.0



* [PATCH v4 01/11] trace-cruncher: Refactor the part that wraps ftrace
  2021-07-07 13:21 [PATCH v4 00/11] Build trace-cruncher as Python package Yordan Karadzhov (VMware)
@ 2021-07-07 13:21 ` Yordan Karadzhov (VMware)
  2021-07-07 13:21 ` [PATCH v4 02/11] trace-cruncher: Add basic methods for tracing Yordan Karadzhov (VMware)
                   ` (10 subsequent siblings)
  11 siblings, 0 replies; 13+ messages in thread
From: Yordan Karadzhov (VMware) @ 2021-07-07 13:21 UTC (permalink / raw)
  To: linux-trace-devel; +Cc: Yordan Karadzhov (VMware)

In order to be able to build the project as a native Python
package, containing several sub-packages implemented as C
extensions via Python's C API, the part of the interface
that relies on libtracefs and libtraceevent (and libtracecmd
in the future) needs to be re-implemented as an extension
called "tracecruncher.ftracepy". Note that this new extension
has a stand-alone build that is completely decoupled from the
existing build system used by trace-cruncher.

Signed-off-by: Yordan Karadzhov (VMware) <y.karadz@gmail.com>
---
 setup.py             |   68 ++
 src/common.h         |  105 +++
 src/ftracepy-utils.c | 1540 ++++++++++++++++++++++++++++++++++++++++++
 src/ftracepy-utils.h |  132 ++++
 src/ftracepy.c       |  272 ++++++++
 5 files changed, 2117 insertions(+)
 create mode 100644 setup.py
 create mode 100644 src/common.h
 create mode 100644 src/ftracepy-utils.c
 create mode 100644 src/ftracepy-utils.h
 create mode 100644 src/ftracepy.c

diff --git a/setup.py b/setup.py
new file mode 100644
index 0000000..6a5d6df
--- /dev/null
+++ b/setup.py
@@ -0,0 +1,68 @@
+#!/usr/bin/env python3
+
+"""
+SPDX-License-Identifier: LGPL-2.1
+
+Copyright 2019 VMware Inc, Yordan Karadzhov (VMware) <y.karadz@gmail.com>
+"""
+
+from setuptools import setup, find_packages
+from distutils.core import Extension
+from Cython.Build import cythonize
+
+import pkgconfig as pkg
+
+
+def third_party_paths():
+    pkg_traceevent = pkg.parse('libtraceevent')
+    pkg_ftracepy = pkg.parse('libtracefs')
+    pkg_tracecmd = pkg.parse('libtracecmd')
+
+    include_dirs = []
+    include_dirs.extend(pkg_traceevent['include_dirs'])
+    include_dirs.extend(pkg_ftracepy['include_dirs'])
+    include_dirs.extend(pkg_tracecmd['include_dirs'])
+
+    library_dirs = []
+    library_dirs.extend(pkg_traceevent['library_dirs'])
+    library_dirs.extend(pkg_ftracepy['library_dirs'])
+    library_dirs.extend(pkg_tracecmd['library_dirs'])
+    library_dirs = list(set(library_dirs))
+
+    return include_dirs, library_dirs
+
+include_dirs, library_dirs = third_party_paths()
+
+def extension(name, sources, libraries):
+    runtime_library_dirs = list(library_dirs)
+    runtime_library_dirs.append('$ORIGIN')
+    return Extension(name, sources=sources,
+                           include_dirs=include_dirs,
+                           library_dirs=library_dirs,
+                           runtime_library_dirs=runtime_library_dirs,
+                           libraries=libraries,
+                           )
+
+def main():
+    module_ft = extension(name='tracecruncher.ftracepy',
+                          sources=['src/ftracepy.c', 'src/ftracepy-utils.c'],
+                          libraries=['traceevent', 'tracefs'])
+
+    setup(name='tracecruncher',
+          version='0.1.0',
+          description='NumPy based interface for accessing tracing data in Python.',
+          author='Yordan Karadzhov (VMware)',
+          author_email='y.karadz@gmail.com',
+          url='https://github.com/vmware/trace-cruncher',
+          license='LGPL-2.1',
+          packages=find_packages(),
+          ext_modules=[module_ft],
+          classifiers=[
+              'Development Status :: 3 - Alpha',
+              'Programming Language :: Python :: 3',
+              ]
+          )
+
+
+if __name__ == '__main__':
+    main()
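A small aside on third_party_paths() above: `list(set(library_dirs))`
removes duplicates but does not preserve the order in which pkg-config
reported the -L paths, which can matter when several paths provide the
same library. A sketch of an order-preserving alternative (plain
Python, not part of the patch):

```python
def dedup_keep_order(paths):
    # dict preserves insertion order (Python 3.7+), so this keeps the
    # first occurrence of each path while dropping later duplicates.
    return list(dict.fromkeys(paths))

dirs = ['/usr/local/lib', '/usr/lib', '/usr/local/lib']
print(dedup_keep_order(dirs))  # ['/usr/local/lib', '/usr/lib']
```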
diff --git a/src/common.h b/src/common.h
new file mode 100644
index 0000000..9985328
--- /dev/null
+++ b/src/common.h
@@ -0,0 +1,105 @@
+/* SPDX-License-Identifier: LGPL-2.1 */
+
+/*
+ * Copyright (C) 2017 VMware Inc, Yordan Karadzhov <y.karadz@gmail.com>
+ */
+
+#ifndef _TC_COMMON_H
+#define _TC_COMMON_H
+
+// C
+#include <ctype.h>
+#include <stdbool.h>
+#include <string.h>
+
+#define TRACECRUNCHER_ERROR	tracecruncher_error
+#define KSHARK_ERROR		kshark_error
+#define TEP_ERROR		tep_error
+#define TFS_ERROR		tfs_error
+
+#define KS_INIT_ERROR \
+	PyErr_SetString(KSHARK_ERROR, "libkshark failed to initialize");
+
+#define MEM_ERROR \
+	PyErr_SetString(TRACECRUNCHER_ERROR, "failed to allocate memory");
+
+static const char *NO_ARG = "/NONE/";
+
+static inline bool is_all(const char *arg)
+{
+	const char all[] = "all";
+	const char *p = &all[0];
+
+	for (; *arg; arg++, p++) {
+		if (tolower(*arg) != *p)
+			return false;
+	}
+	return !(*p);
+}
+
+static inline bool is_no_arg(const char *arg)
+{
+	return arg[0] == '\0' || arg == NO_ARG;
+}
+
+static inline bool is_set(const char *arg)
+{
+	return !(is_all(arg) || is_no_arg(arg));
+}
+
+static inline void no_free()
+{
+}
+
+#define NO_FREE		no_free
+
+#define STR(x) #x
+
+#define MAKE_TYPE_STR(x) STR(traceevent.x)
+
+#define MAKE_DIC_STR(x) STR(libtraceevent x object)
+
+#define C_OBJECT_WRAPPER_DECLARE(c_type, py_type)				\
+	typedef struct {							\
+	PyObject_HEAD								\
+	struct c_type *ptrObj;							\
+} py_type;									\
+PyObject *py_type##_New(struct c_type *evt_ptr);				\
+bool py_type##TypeInit();							\
+
+#define  C_OBJECT_WRAPPER(c_type, py_type, ptr_free)				\
+static PyTypeObject py_type##Type = {						\
+	PyVarObject_HEAD_INIT(NULL, 0) MAKE_TYPE_STR(c_type)			\
+};										\
+PyObject *py_type##_New(struct c_type *evt_ptr)					\
+{										\
+	py_type *newObject;							\
+	newObject = PyObject_New(py_type, &py_type##Type);			\
+	newObject->ptrObj = evt_ptr;						\
+	return (PyObject *) newObject;						\
+}										\
+static int py_type##_init(py_type *self, PyObject *args, PyObject *kwargs)	\
+{										\
+	self->ptrObj = NULL;							\
+	return 0;								\
+}										\
+static void py_type##_dealloc(py_type *self)					\
+{										\
+	ptr_free(self->ptrObj);							\
+	Py_TYPE(self)->tp_free(self);						\
+}										\
+bool py_type##TypeInit()							\
+{										\
+	py_type##Type.tp_new = PyType_GenericNew;				\
+	py_type##Type.tp_basicsize = sizeof(py_type);				\
+	py_type##Type.tp_init = (initproc) py_type##_init;			\
+	py_type##Type.tp_dealloc = (destructor) py_type##_dealloc;		\
+	py_type##Type.tp_flags = Py_TPFLAGS_DEFAULT;				\
+	py_type##Type.tp_doc = MAKE_DIC_STR(c_type);				\
+	py_type##Type.tp_methods = py_type##_methods;				\
+	if (PyType_Ready(&py_type##Type) < 0)					\
+		return false;							\
+	Py_INCREF(&py_type##Type);						\
+	return true;								\
+}										\
+
+#endif
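The is_all()/is_no_arg()/is_set() helpers above encode the argument
convention used throughout the bindings: NO_ARG (or an empty string)
means the argument was omitted, "all" (case-insensitive) selects
everything, and any other string is a concrete name. The same logic,
sketched in Python purely for illustration:

```python
NO_ARG = "/NONE/"

def is_all(arg):
    # Case-insensitive match against "all", as in common.h's is_all().
    return arg.lower() == "all"

def is_no_arg(arg):
    # Either the sentinel default or an explicitly empty string.
    return arg == "" or arg == NO_ARG

def is_set(arg):
    # A "set" argument is a concrete name: neither omitted nor "all".
    return not (is_all(arg) or is_no_arg(arg))
```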
diff --git a/src/ftracepy-utils.c b/src/ftracepy-utils.c
new file mode 100644
index 0000000..b34c45b
--- /dev/null
+++ b/src/ftracepy-utils.c
@@ -0,0 +1,1540 @@
+// SPDX-License-Identifier: LGPL-2.1
+
+/*
+ * Copyright (C) 2021 VMware Inc, Yordan Karadzhov (VMware) <y.karadz@gmail.com>
+ */
+
+#ifndef _GNU_SOURCE
+/** Use GNU C Library. */
+#define _GNU_SOURCE
+#endif // _GNU_SOURCE
+
+// C
+#include <search.h>
+#include <string.h>
+#include <sys/wait.h>
+#include <signal.h>
+#include <time.h>
+
+// trace-cruncher
+#include "ftracepy-utils.h"
+
+static void *instance_root;
+PyObject *TFS_ERROR;
+PyObject *TEP_ERROR;
+PyObject *TRACECRUNCHER_ERROR;
+
+PyObject *PyTepRecord_time(PyTepRecord* self)
+{
+	unsigned long long ts = self->ptrObj ? self->ptrObj->ts : 0;
+	return PyLong_FromLongLong(ts);
+}
+
+PyObject *PyTepRecord_cpu(PyTepRecord* self)
+{
+	int cpu = self->ptrObj ? self->ptrObj->cpu : -1;
+	return PyLong_FromLong(cpu);
+}
+
+PyObject *PyTepEvent_name(PyTepEvent* self)
+{
+	const char *name = self->ptrObj ? self->ptrObj->name : "nil";
+	return PyUnicode_FromString(name);
+}
+
+PyObject *PyTepEvent_id(PyTepEvent* self)
+{
+	int id = self->ptrObj ? self->ptrObj->id : -1;
+	return PyLong_FromLong(id);
+}
+
+PyObject *PyTepEvent_field_names(PyTepEvent* self)
+{
+	struct tep_format_field *field, **fields;
+	struct tep_event *event = self->ptrObj;
+	int i = 0, nr_fields;
+	PyObject *list;
+
+	nr_fields = event->format.nr_fields + event->format.nr_common;
+	list = PyList_New(nr_fields);
+
+	/* Get all common fields. */
+	fields = tep_event_common_fields(event);
+	if (!fields) {
+		PyErr_Format(TEP_ERROR,
+			     "Failed to get common fields for event \'%s\'",
+			     self->ptrObj->name);
+		return NULL;
+	}
+
+	for (field = *fields; field; field = field->next)
+		PyList_SET_ITEM(list, i++, PyUnicode_FromString(field->name));
+	free(fields);
+
+	/* Add all unique fields. */
+	fields = tep_event_fields(event);
+	if (!fields) {
+		PyErr_Format(TEP_ERROR,
+			     "Failed to get fields for event \'%s\'",
+			     self->ptrObj->name);
+		return NULL;
+	}
+
+	for (field = *fields; field; field = field->next)
+		PyList_SET_ITEM(list, i++, PyUnicode_FromString(field->name));
+	free(fields);
+
+	return list;
+}
+
+static bool is_number(struct tep_format_field *field)
+{
+	int number_field_mask = TEP_FIELD_IS_SIGNED |
+				TEP_FIELD_IS_LONG |
+				TEP_FIELD_IS_FLAG;
+
+	return !field->flags || field->flags & number_field_mask;
+}
+
+PyObject *PyTepEvent_parse_record_field(PyTepEvent* self, PyObject *args,
+							  PyObject *kwargs)
+{
+	struct tep_format_field *field;
+	const char *field_name;
+	PyTepRecord *record;
+
+	static char *kwlist[] = {"record", "field", NULL};
+	if (!PyArg_ParseTupleAndKeywords(args,
+					 kwargs,
+					 "Os",
+					 kwlist,
+					 &record,
+					 &field_name)) {
+		return NULL;
+	}
+
+	field = tep_find_field(self->ptrObj, field_name);
+	if (!field)
+		field = tep_find_common_field(self->ptrObj, field_name);
+
+	if (!field) {
+		PyErr_Format(TEP_ERROR,
+			     "Failed to find field \'%s\' in event \'%s\'",
+			     field_name, self->ptrObj->name);
+		return NULL;
+	}
+
+	if (!field->size)
+		return PyUnicode_FromString("(nil)");
+
+	if (field->flags & TEP_FIELD_IS_STRING) {
+		char *val_str = record->ptrObj->data + field->offset;
+		return PyUnicode_FromString(val_str);
+	} else if (is_number(field)) {
+		unsigned long long val;
+
+		tep_read_number_field(field, record->ptrObj->data, &val);
+		return PyLong_FromLongLong(val);
+	} else if (field->flags & TEP_FIELD_IS_POINTER) {
+		void *val = record->ptrObj->data + field->offset;
+		char ptr_string[11];
+
+		sprintf(ptr_string, "%p", val);
+		return PyUnicode_FromString(ptr_string);
+	}
+
+	PyErr_Format(TEP_ERROR,
+		     "Unsupported field format \"%li\" (TODO: implement this)",
+		     field->flags);
+	return NULL;
+}
+
+int get_pid(struct tep_event *event, struct tep_record *record)
+{
+	const char *field_name = "common_pid";
+	struct tep_format_field *field;
+	unsigned long long val;
+
+	field = tep_find_common_field(event, field_name);
+	if (!field) {
+		PyErr_Format(TEP_ERROR,
+			     "Failed to find field \'%s\' in event \'%s\'",
+			     field_name, event->name);
+		return -1;
+	}
+
+	tep_read_number_field(field, record->data, &val);
+
+	return val;
+}
+
+PyObject *PyTepEvent_get_pid(PyTepEvent* self, PyObject *args,
+					       PyObject *kwargs)
+{
+	static char *kwlist[] = {"record", NULL};
+	PyTepRecord *record;
+	int pid;
+
+	if (!PyArg_ParseTupleAndKeywords(args,
+					 kwargs,
+					 "O",
+					 kwlist,
+					 &record)) {
+		return NULL;
+	}
+
+	pid = get_pid(self->ptrObj, record->ptrObj);
+	if (pid < 0)
+		return NULL;
+
+	return PyLong_FromLong(pid);
+}
+
+static const char **get_arg_list(PyObject *py_list)
+{
+	const char **argv = NULL;
+	PyObject *arg_py;
+	int i, n;
+
+	if (!PyList_CheckExact(py_list))
+		goto fail;
+
+	n = PyList_Size(py_list);
+	argv = calloc(n + 1, sizeof(*argv));
+	if (!argv) {
+		MEM_ERROR
+		return NULL;
+	}
+	for (i = 0; i < n; ++i) {
+		arg_py = PyList_GetItem(py_list, i);
+		if (!PyUnicode_Check(arg_py))
+			goto fail;
+
+		argv[i] = PyUnicode_DATA(arg_py);
+	}
+
+	return argv;
+
+ fail:
+	PyErr_SetString(TRACECRUNCHER_ERROR,
+			"Failed to parse argument list.");
+	free(argv);
+	return NULL;
+}
+
+PyObject *PyTep_init_local(PyTep *self, PyObject *args,
+					PyObject *kwargs)
+{
+	static char *kwlist[] = {"dir", "systems", NULL};
+	struct tep_handle *tep = NULL;
+	PyObject *system_list = NULL;
+	const char *dir_str;
+
+	if (!PyArg_ParseTupleAndKeywords(args,
+					 kwargs,
+					 "s|O",
+					 kwlist,
+					 &dir_str,
+					 &system_list)) {
+		return NULL;
+	}
+
+	if (system_list) {
+		const char **sys_names = get_arg_list(system_list);
+
+		if (!sys_names) {
+			PyErr_SetString(TFS_ERROR,
+					"Inconsistent \"systems\" argument.");
+			return NULL;
+		}
+
+		tep = tracefs_local_events_system(dir_str, sys_names);
+		free(sys_names);
+	} else {
+		tep = tracefs_local_events(dir_str);
+	}
+
+	if (!tep) {
+		PyErr_Format(TFS_ERROR,
+			     "Failed to get local events from \'%s\'.",
+			     dir_str);
+		return NULL;
+	}
+
+	tep_free(self->ptrObj);
+	self->ptrObj = tep;
+
+	Py_RETURN_NONE;
+}
+
+PyObject *PyTep_get_event(PyTep *self, PyObject *args,
+				       PyObject *kwargs)
+{
+	static char *kwlist[] = {"system", "name", NULL};
+	const char *system, *event_name;
+	struct tep_event *event;
+
+	if (!PyArg_ParseTupleAndKeywords(args,
+					 kwargs,
+					 "ss",
+					 kwlist,
+					 &system,
+					 &event_name)) {
+		return NULL;
+	}
+
+	event = tep_find_event_by_name(self->ptrObj, system, event_name);
+
+	return PyTepEvent_New(event);
+}
+
+static bool check_file(struct tracefs_instance *instance, const char *file)
+{
+	if (!tracefs_file_exists(instance, file)) {
+		PyErr_Format(TFS_ERROR, "File %s does not exist.", file);
+		return false;
+	}
+
+	return true;
+}
+
+static bool check_dir(struct tracefs_instance *instance, const char *dir)
+{
+	if (!tracefs_dir_exists(instance, dir)) {
+		PyErr_Format(TFS_ERROR, "Directory %s does not exist.", dir);
+		return false;
+	}
+
+	return true;
+}
+
+const char *top_instance_name = "top";
+static const char *get_instance_name(struct tracefs_instance *instance)
+{
+	const char *name = tracefs_instance_get_name(instance);
+	return name ? name : top_instance_name;
+}
+
+static int write_to_file(struct tracefs_instance *instance,
+			 const char *file,
+			 const char *val)
+{
+	int size;
+
+	if (!check_file(instance, file))
+		return -1;
+
+	size = tracefs_instance_file_write(instance, file, val);
+	if (size <= 0) {
+		PyErr_Format(TFS_ERROR,
+			     "Can not write \'%s\' to file \'%s\' (inst: \'%s\').",
+			     val, file, get_instance_name(instance));
+		PyErr_Print();
+	}
+
+	return size;
+}
+
+static int append_to_file(struct tracefs_instance *instance,
+			  const char *file,
+			  const char *val)
+{
+	int size;
+
+	if (!check_file(instance, file))
+		return -1;
+
+	size = tracefs_instance_file_append(instance, file, val);
+	if (size <= 0) {
+		PyErr_Format(TFS_ERROR,
+			     "Can not append \'%s\' to file \'%s\' (inst: \'%s\').",
+			     val, file, get_instance_name(instance));
+		PyErr_Print();
+	}
+
+	return size;
+}
+
+static int read_from_file(struct tracefs_instance *instance,
+			  const char *file,
+			  char **val)
+{
+	int size;
+
+	if (!check_file(instance, file))
+		return -1;
+
+	*val = tracefs_instance_file_read(instance, file, &size);
+	if (size < 0)
+		PyErr_Format(TFS_ERROR, "Can not read from file %s", file);
+
+	return size;
+}
+
+static inline void trim_new_line(char *val)
+{
+	size_t len = strlen(val);
+
+	if (len && val[len - 1] == '\n')
+		val[len - 1] = '\0';
+}
+
+static bool write_to_file_and_check(struct tracefs_instance *instance,
+				    const char *file,
+				    const char *val)
+{
+	char *read_val;
+	int ret;
+
+	if (write_to_file(instance, file, val) <= 0)
+		return false;
+
+	if (read_from_file(instance, file, &read_val) <= 0)
+		return false;
+
+	trim_new_line(read_val);
+	ret = strcmp(read_val, val);
+	free(read_val);
+
+	return ret == 0;
+}
+
+static PyObject *tfs_list2py_list(char **list)
+{
+	PyObject *py_list = PyList_New(0);
+	int i;
+
+	for (i = 0; list && list[i]; i++)
+		PyList_Append(py_list, PyUnicode_FromString(list[i]));
+
+	tracefs_list_free(list);
+
+	return py_list;
+}
+
+struct instance_wrapper {
+	struct tracefs_instance *ptr;
+	const char *name;
+};
+
+const char *instance_wrapper_get_name(const struct instance_wrapper *iw)
+{
+	if (!iw->ptr)
+		return iw->name;
+
+	return tracefs_instance_get_name(iw->ptr);
+}
+
+static int instance_compare(const void *a, const void *b)
+{
+	const struct instance_wrapper *iwa, *iwb;
+
+	iwa = (const struct instance_wrapper *) a;
+	iwb = (const struct instance_wrapper *) b;
+
+	return strcmp(instance_wrapper_get_name(iwa),
+		      instance_wrapper_get_name(iwb));
+}
+
+void instance_wrapper_free(void *ptr)
+{
+	struct instance_wrapper *iw;
+	if (!ptr)
+		return;
+
+	iw = ptr;
+	if (iw->ptr) {
+		if (tracefs_instance_destroy(iw->ptr) < 0)
+			fprintf(stderr,
+				"\ntfs_error: Failed to destroy instance '%s'.\n",
+				get_instance_name(iw->ptr));
+
+		free(iw->ptr);
+	}
+
+	free(ptr);
+}
+
+static void destroy_all_instances(void)
+{
+	tdestroy(instance_root, instance_wrapper_free);
+	instance_root = NULL;
+}
+
+static struct tracefs_instance *find_instance(const char *name)
+{
+	struct instance_wrapper iw, **iw_ptr;
+	if (!is_set(name))
+		return NULL;
+
+	if (!tracefs_instance_exists(name)) {
+		PyErr_Format(TFS_ERROR, "Trace instance \'%s\' does not exist.",
+			     name);
+		return NULL;
+	}
+
+	iw.ptr = NULL;
+	iw.name = name;
+	iw_ptr = tfind(&iw, &instance_root, instance_compare);
+	if (!iw_ptr || !(*iw_ptr) || !(*iw_ptr)->ptr ||
+	    strcmp(tracefs_instance_get_name((*iw_ptr)->ptr), name) != 0) {
+		PyErr_Format(TFS_ERROR, "Unable to find trace instance \'%s\'.",
+			     name);
+		return NULL;
+	}
+
+	return (*iw_ptr)->ptr;
+}
+
+bool get_optional_instance(const char *instance_name,
+			   struct tracefs_instance **instance)
+{
+	*instance = NULL;
+	if (is_set(instance_name)) {
+		*instance = find_instance(instance_name);
+		if (!*instance) {
+			PyErr_Format(TFS_ERROR,
+				     "Failed to find instance \'%s\'.",
+				     instance_name);
+			return false;
+		}
+	}
+
+	return true;
+}
+
+bool get_instance_from_arg(PyObject *args, PyObject *kwargs,
+			   struct tracefs_instance **instance)
+{
+	const char *instance_name;
+
+	static char *kwlist[] = {"instance", NULL};
+	instance_name = NO_ARG;
+	if (!PyArg_ParseTupleAndKeywords(args,
+					 kwargs,
+					 "|s",
+					 kwlist,
+					 &instance_name)) {
+		return false;
+	}
+
+	if (!get_optional_instance(instance_name, instance))
+		return false;
+
+	return true;
+}
+
+PyObject *PyFtrace_dir(PyObject *self)
+{
+	return PyUnicode_FromString(tracefs_tracing_dir());
+}
+
+static char aname_pool[] =
+	"0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz";
+
+#define ANAME_LEN	16
+
+char auto_name[ANAME_LEN];
+
+static const char *autoname()
+{
+	int i, n, pool_size = sizeof(aname_pool);
+	struct timeval now;
+
+	gettimeofday(&now, NULL);
+	srand(now.tv_usec);
+
+	for (i = 0; i < ANAME_LEN - 1; ++i) {
+		n = rand() % (pool_size - 1);
+		auto_name[i] = aname_pool[n];
+	}
+	auto_name[i] = 0;
+
+	return auto_name;
+}
+
+static bool tracing_OFF(struct tracefs_instance *instance);
+
+PyObject *PyFtrace_create_instance(PyObject *self, PyObject *args,
+						   PyObject *kwargs)
+{
+	struct instance_wrapper *iw, **iw_ptr;
+	struct tracefs_instance *instance;
+	const char *name = NO_ARG;
+	int tracing_on = true;
+
+	static char *kwlist[] = {"name", "tracing_on", NULL};
+	if (!PyArg_ParseTupleAndKeywords(args,
+					 kwargs,
+					 "|sp",
+					 kwlist,
+					 &name,
+					 &tracing_on)) {
+		return NULL;
+	}
+
+	if (!is_set(name))
+		name = autoname();
+
+	instance = tracefs_instance_create(name);
+	if (!instance ||
+	    !tracefs_instance_exists(name) ||
+	    !tracefs_instance_is_new(instance)) {
+		PyErr_Format(TFS_ERROR,
+			     "Failed to create new trace instance \'%s\'.",
+			     name);
+		return NULL;
+	}
+
+	iw = calloc(1, sizeof(*iw));
+	if (!iw) {
+		MEM_ERROR
+		return NULL;
+	}
+
+	iw->ptr = instance;
+	iw_ptr = tsearch(iw, &instance_root, instance_compare);
+	if (!iw_ptr || !(*iw_ptr) || !(*iw_ptr)->ptr ||
+	    strcmp(tracefs_instance_get_name((*iw_ptr)->ptr), name) != 0) {
+		PyErr_Format(TFS_ERROR,
+			     "Failed to store new trace instance \'%s\'.",
+			     name);
+		tracefs_instance_destroy(instance);
+		tracefs_instance_free(instance);
+		free(iw);
+
+		return NULL;
+	}
+
+	if (!tracing_on)
+		tracing_OFF(instance);
+
+	return PyUnicode_FromString(name);
+}
+
+PyObject *PyFtrace_destroy_instance(PyObject *self, PyObject *args,
+						    PyObject *kwargs)
+{
+	struct tracefs_instance *instance;
+	struct instance_wrapper iw;
+	char *name;
+
+	static char *kwlist[] = {"name", NULL};
+	if (!PyArg_ParseTupleAndKeywords(args,
+					 kwargs,
+					 "s",
+					 kwlist,
+					 &name)) {
+		return NULL;
+	}
+
+	if (is_all(name)) {
+		destroy_all_instances();
+		Py_RETURN_NONE;
+	}
+
+	instance = find_instance(name);
+	if (!instance) {
+		PyErr_Format(TFS_ERROR,
+			     "Unable to destroy trace instance \'%s\'.",
+			     name);
+		return NULL;
+	}
+
+	iw.ptr = NULL;
+	iw.name = name;
+	tdelete(&iw, &instance_root, instance_compare);
+
+	tracefs_instance_destroy(instance);
+	tracefs_instance_free(instance);
+
+	Py_RETURN_NONE;
+}
+
+PyObject *instance_list;
+
+static void instance_action(const void *nodep, VISIT which, int depth)
+{
+	struct instance_wrapper *iw = *(struct instance_wrapper **) nodep;
+	const char *name;
+
+	switch(which) {
+	case preorder:
+	case endorder:
+		break;
+
+	case postorder:
+	case leaf:
+		name = tracefs_instance_get_name(iw->ptr);
+		PyList_Append(instance_list, PyUnicode_FromString(name));
+		break;
+	}
+}
+
+PyObject *PyFtrace_get_all_instances(PyObject *self)
+{
+	instance_list = PyList_New(0);
+	twalk(instance_root, instance_action);
+
+	return instance_list;
+}
+
+PyObject *PyFtrace_destroy_all_instances(PyObject *self)
+{
+	destroy_all_instances();
+
+	Py_RETURN_NONE;
+}
+
+PyObject *PyFtrace_instance_dir(PyObject *self, PyObject *args,
+						PyObject *kwargs)
+{
+	struct tracefs_instance *instance;
+
+	if (!get_instance_from_arg(args, kwargs, &instance))
+		return NULL;
+
+	return PyUnicode_FromString(tracefs_instance_get_dir(instance));
+}
+
+PyObject *PyFtrace_available_tracers(PyObject *self, PyObject *args,
+						     PyObject *kwargs)
+{
+	struct tracefs_instance *instance;
+	char **list;
+
+	if (!get_instance_from_arg(args, kwargs, &instance))
+		return NULL;
+
+	list = tracefs_tracers(tracefs_instance_get_dir(instance));
+	if (!list)
+		return NULL;
+
+	return tfs_list2py_list(list);
+}
+
+PyObject *PyFtrace_set_current_tracer(PyObject *self, PyObject *args,
+						      PyObject *kwargs)
+{
+	const char *file = "current_tracer", *tracer, *instance_name;
+	struct tracefs_instance *instance;
+
+	static char *kwlist[] = {"tracer", "instance", NULL};
+	tracer = instance_name = NO_ARG;
+	if (!PyArg_ParseTupleAndKeywords(args,
+					 kwargs,
+					 "|ss",
+					 kwlist,
+					 &tracer,
+					 &instance_name)) {
+		return NULL;
+	}
+
+	if (!get_optional_instance(instance_name, &instance))
+		return NULL;
+
+	if (is_set(tracer) &&
+	    strcmp(tracer, "nop") != 0) {
+		char **all_tracers =
+			tracefs_tracers(tracefs_instance_get_dir(instance));
+		int i;
+
+		for (i = 0; all_tracers && all_tracers[i]; i++) {
+			if (!strcmp(all_tracers[i], tracer))
+				break;
+		}
+
+		if (!all_tracers || !all_tracers[i]) {
+			PyErr_Format(TFS_ERROR,
+				     "Tracer \'%s\' is not available.",
+				     tracer);
+			return NULL;
+		}
+	} else if (!is_set(tracer)) {
+		tracer = "nop";
+	}
+
+	if (!write_to_file_and_check(instance, file, tracer)) {
+		PyErr_Format(TFS_ERROR, "Failed to enable tracer \'%s\'",
+			     tracer);
+		return NULL;
+	}
+
+	Py_RETURN_NONE;
+}
+
+PyObject *PyFtrace_get_current_tracer(PyObject *self, PyObject *args,
+						      PyObject *kwargs)
+{
+	const char *file = "current_tracer";
+	struct tracefs_instance *instance;
+	PyObject *ret;
+	char *tracer;
+
+	if (!get_instance_from_arg(args, kwargs, &instance))
+		return NULL;
+
+	if (read_from_file(instance, file, &tracer) <= 0)
+		return NULL;
+
+	trim_new_line(tracer);
+	ret = PyUnicode_FromString(tracer);
+	free(tracer);
+
+	return ret;
+}
+
+PyObject *PyFtrace_available_event_systems(PyObject *self, PyObject *args,
+							   PyObject *kwargs)
+{
+	struct tracefs_instance *instance;
+	char **list;
+
+	if (!get_instance_from_arg(args, kwargs, &instance))
+		return NULL;
+
+	list = tracefs_event_systems(tracefs_instance_get_dir(instance));
+	if (!list)
+		return NULL;
+
+	return tfs_list2py_list(list);
+}
+
+PyObject *PyFtrace_available_system_events(PyObject *self, PyObject *args,
+							   PyObject *kwargs)
+{
+	static char *kwlist[] = {"system", "instance", NULL};
+	const char *instance_name = NO_ARG, *system;
+	struct tracefs_instance *instance;
+	char **list;
+
+	if (!PyArg_ParseTupleAndKeywords(args,
+					 kwargs,
+					 "s|s",
+					 kwlist,
+					 &system,
+					 &instance_name)) {
+		return NULL;
+	}
+
+	if (!get_optional_instance(instance_name, &instance))
+		return NULL;
+
+	list = tracefs_system_events(tracefs_instance_get_dir(instance),
+				     system);
+	if (!list)
+		return NULL;
+
+	return tfs_list2py_list(list);
+}
+
+bool get_event_enable_file(struct tracefs_instance *instance,
+			   const char *system, const char *event,
+			   char **path)
+{
+	char *buff = calloc(PATH_MAX, 1);
+	const char *instance_name;
+
+	if (!buff) {
+		MEM_ERROR
+		return false;
+	}
+
+	if ((is_all(system) && is_all(event)) ||
+	    (is_all(system) && is_no_arg(event)) ||
+	    (is_no_arg(system) && is_all(event))) {
+		strcpy(buff, "events/enable");
+
+		*path = buff;
+	} else if (is_set(system)) {
+		strcpy(buff, "events/");
+		strcat(buff, system);
+		if (!check_dir(instance, buff))
+			goto fail;
+
+		if (is_set(event)) {
+			strcat(buff, "/");
+			strcat(buff, event);
+			if (!check_dir(instance, buff))
+				goto fail;
+
+			strcat(buff, "/enable");
+		} else {
+			strcat(buff, "/enable");
+		}
+
+		*path = buff;
+	} else {
+		goto fail;
+	}
+
+	return true;
+
+ fail:
+	instance_name =
+		instance ? tracefs_instance_get_name(instance) : "top";
+	PyErr_Format(TFS_ERROR,
+		     "Failed to locate event:\n Instance: %s  System: %s  Event: %s",
+		     instance_name, system, event);
+	free(buff);
+	*path = NULL;
+	return false;
+}
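The branching above maps a (system, event) pair to the tracefs "enable" file that controls it. A minimal Python sketch of the same selection logic (illustrative only: `ALL`, `NO_ARG`, and `event_enable_path` are stand-ins for the C code's `is_all()`/`is_no_arg()` markers, not part of the module's API):

```python
ALL = "all"     # stands in for the is_all() marker
NO_ARG = None   # stands in for the NO_ARG / is_no_arg() marker

def event_enable_path(system=NO_ARG, event=NO_ARG):
    """Tracefs-relative path of the 'enable' file, or None on bad input."""
    def is_all(x):    return x == ALL
    def is_no_arg(x): return x is NO_ARG
    def is_set(x):    return not is_all(x) and not is_no_arg(x)

    # Everything (or "all" plus an unset argument): the global enable file.
    if (is_all(system) and is_all(event)) or \
       (is_all(system) and is_no_arg(event)) or \
       (is_no_arg(system) and is_all(event)):
        return "events/enable"
    # A concrete system, optionally narrowed to a concrete event.
    if is_set(system):
        path = "events/" + system
        if is_set(event):
            path += "/" + event
        return path + "/enable"
    return None  # e.g. an event given without a system

print(event_enable_path(ALL, ALL))                 # → events/enable
print(event_enable_path("sched", "sched_switch"))  # → events/sched/sched_switch/enable
```

As in the C code, naming an event without its system is rejected, since the event's "enable" file cannot be located.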
+
+static bool event_enable_disable(struct tracefs_instance *instance,
+				 const char *system, const char *event,
+				 bool enable)
+{
+	int ret;
+
+	if (system && !is_set(system))
+		system = NULL;
+
+	if (event && !is_set(event))
+		event = NULL;
+
+	if (enable)
+		ret = tracefs_event_enable(instance, system, event);
+	else
+		ret = tracefs_event_disable(instance, system, event);
+
+	if (ret != 0) {
+		PyErr_Format(TFS_ERROR,
+			     "Failed to enable/disable event:\n System: %s  Event: %s",
+			     system ? system : "NULL",
+			     event ? event : "NULL");
+
+		return false;
+	}
+
+	return true;
+}
+
+static bool set_enable_event(PyObject *self,
+			     PyObject *args, PyObject *kwargs,
+			     bool enable)
+{
+	static char *kwlist[] = {"instance", "system", "event", NULL};
+	const char *instance_name, *system, *event;
+	struct tracefs_instance *instance;
+
+	instance_name = system = event = NO_ARG;
+	if (!PyArg_ParseTupleAndKeywords(args,
+					 kwargs,
+					 "|sss",
+					 kwlist,
+					 &instance_name,
+					 &system,
+					 &event)) {
+		return false;
+	}
+
+	if (!get_optional_instance(instance_name, &instance))
+		return false;
+
+	return event_enable_disable(instance, system, event, enable);
+}
+
+#define ON	"1"
+#define OFF	"0"
+
+PyObject *PyFtrace_enable_event(PyObject *self, PyObject *args,
+						PyObject *kwargs)
+{
+	if (!set_enable_event(self, args, kwargs, true))
+		return NULL;
+
+	Py_RETURN_NONE;
+}
+
+PyObject *PyFtrace_disable_event(PyObject *self, PyObject *args,
+						 PyObject *kwargs)
+{
+	if (!set_enable_event(self, args, kwargs, false))
+		return NULL;
+
+	Py_RETURN_NONE;
+}
+
+static bool set_enable_events(PyObject *self, PyObject *args, PyObject *kwargs,
+			      bool enable)
+{
+	static char *kwlist[] = {"instance", "systems", "events", NULL};
+	PyObject *system_list = NULL, *event_list = NULL, *system_event_list;
+	const char **systems = NULL, **events = NULL;
+	struct tracefs_instance *instance;
+	const char *instance_name;
+	char *file = NULL;
+	int ret, s, e;
+
+	instance_name = NO_ARG;
+	if (!PyArg_ParseTupleAndKeywords(args,
+					 kwargs,
+					 "|sOO",
+					 kwlist,
+					 &instance_name,
+					 &system_list,
+					 &event_list)) {
+		return false;
+	}
+
+	if (!get_optional_instance(instance_name, &instance))
+		return false;
+
+	if (!system_list && !event_list)
+		return event_enable_disable(instance, NULL, NULL, enable);
+
+	if (!system_list && event_list) {
+		if (PyUnicode_Check(event_list) &&
+		    is_all(PyUnicode_DATA(event_list))) {
+			return event_enable_disable(instance, NULL, NULL, enable);
+		} else {
+			PyErr_SetString(TFS_ERROR,
+					"Failed to enable events for unspecified system");
+			return false;
+		}
+	}
+
+	systems = get_arg_list(system_list);
+	if (!systems) {
+		PyErr_SetString(TFS_ERROR, "Inconsistent \"systems\" argument.");
+		return false;
+	}
+
+	if (!event_list) {
+		for (s = 0; systems[s]; ++s) {
+			ret = event_enable_disable(instance, systems[s], NULL, enable);
+			if (!ret)
+				goto fail;
+		}
+
+		free(systems);
+		return true;
+	}
+
+	if (!PyList_CheckExact(event_list))
+		goto fail_with_err;
+
+	for (s = 0; systems[s]; ++s) {
+		system_event_list = PyList_GetItem(event_list, s);
+		if (!system_event_list || !PyList_CheckExact(system_event_list))
+			goto fail_with_err;
+
+		events = get_arg_list(system_event_list);
+		if (!events)
+			goto fail_with_err;
+
+		for (e = 0; events[e]; ++e) {
+			if (!event_enable_disable(instance, systems[s], events[e], enable))
+				goto fail;
+		}
+
+		free(events);
+		events = NULL;
+	}
+
+	free(systems);
+
+	return true;
+
+ fail_with_err:
+	PyErr_SetString(TFS_ERROR, "Inconsistent \"events\" argument.");
+
+ fail:
+	free(systems);
+	free(events);
+	free(file);
+
+	return false;
+}
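set_enable_events() pairs each entry of "systems" with the list of event names at the same position in "events". A short Python sketch of that pairing (the helper name and the explicit length check are illustrative; the C code reports the same mismatch as an inconsistent-argument error):

```python
def expand_event_args(systems, events=None):
    """Yield (system, event) pairs the way set_enable_events() walks them."""
    if events is None:
        # No events given: each system is enabled/disabled as a whole.
        return [(s, None) for s in systems]
    if len(events) != len(systems):
        raise ValueError('inconsistent "events" argument')
    pairs = []
    for system, system_events in zip(systems, events):
        # events[i] must itself be a list of event names for systems[i].
        pairs.extend((system, e) for e in system_events)
    return pairs

print(expand_event_args(["sched", "irq"],
                        [["sched_switch"], ["irq_handler_entry"]]))
# → [('sched', 'sched_switch'), ('irq', 'irq_handler_entry')]
```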
+
+PyObject *PyFtrace_enable_events(PyObject *self, PyObject *args,
+						 PyObject *kwargs)
+{
+	if (!set_enable_events(self, args, kwargs, true))
+		return NULL;
+
+	Py_RETURN_NONE;
+}
+
+PyObject *PyFtrace_disable_events(PyObject *self, PyObject *args,
+						  PyObject *kwargs)
+{
+	if (!set_enable_events(self, args, kwargs, false))
+		return NULL;
+
+	Py_RETURN_NONE;
+}
+
+PyObject *PyFtrace_event_is_enabled(PyObject *self, PyObject *args,
+						    PyObject *kwargs)
+{
+	static char *kwlist[] = {"instance", "system", "event", NULL};
+	const char *instance_name, *system, *event;
+	struct tracefs_instance *instance;
+	char *file, *val;
+	PyObject *ret;
+
+	instance_name = system = event = NO_ARG;
+	if (!PyArg_ParseTupleAndKeywords(args,
+					 kwargs,
+					 "|sss",
+					 kwlist,
+					 &instance_name,
+					 &system,
+					 &event)) {
+		return NULL;
+	}
+
+	if (!get_optional_instance(instance_name, &instance))
+		return NULL;
+
+	if (!get_event_enable_file(instance, system, event, &file))
+		return NULL;
+
+	if (read_from_file(instance, file, &val) <= 0) {
+		free(file);
+		return NULL;
+	}
+
+	trim_new_line(val);
+	ret = PyUnicode_FromString(val);
+
+	free(file);
+	free(val);
+
+	return ret;
+}
+
+PyObject *PyFtrace_set_event_filter(PyObject *self, PyObject *args,
+						    PyObject *kwargs)
+{
+	const char *instance_name = NO_ARG, *system, *event, *filter;
+	struct tracefs_instance *instance;
+	char path[PATH_MAX];
+
+	static char *kwlist[] = {"system", "event", "filter", "instance", NULL};
+	if (!PyArg_ParseTupleAndKeywords(args,
+					 kwargs,
+					 "sss|s",
+					 kwlist,
+					 &system,
+					 &event,
+					 &filter,
+					 &instance_name)) {
+		return NULL;
+	}
+
+	if (!get_optional_instance(instance_name, &instance))
+		return NULL;
+
+	snprintf(path, PATH_MAX, "events/%s/%s/filter", system, event);
+	if (!write_to_file_and_check(instance, path, filter)) {
+		PyErr_SetString(TFS_ERROR, "Failed to set event filter");
+		return NULL;
+	}
+
+	Py_RETURN_NONE;
+}
+
+PyObject *PyFtrace_clear_event_filter(PyObject *self, PyObject *args,
+						      PyObject *kwargs)
+{
+	const char *instance_name = NO_ARG, *system, *event;
+	struct tracefs_instance *instance;
+	char path[PATH_MAX];
+
+	static char *kwlist[] = {"system", "event", "instance", NULL};
+	if (!PyArg_ParseTupleAndKeywords(args,
+					 kwargs,
+					 "ss|s",
+					 kwlist,
+					 &system,
+					 &event,
+					 &instance_name)) {
+		return NULL;
+	}
+
+	if (!get_optional_instance(instance_name, &instance))
+		return NULL;
+
+	snprintf(path, PATH_MAX, "events/%s/%s/filter", system, event);
+	if (!write_to_file(instance, path, OFF)) {
+		PyErr_SetString(TFS_ERROR, "Failed to clear event filter");
+		return NULL;
+	}
+
+	Py_RETURN_NONE;
+}
+
+static bool tracing_ON(struct tracefs_instance *instance)
+{
+	int ret = tracefs_trace_on(instance);
+
+	if (ret < 0 ||
+	    tracefs_trace_is_on(instance) != 1) {
+		const char *instance_name =
+			instance ? tracefs_instance_get_name(instance) : "top";
+
+		PyErr_Format(TFS_ERROR,
+			     "Failed to start tracing (Instance: %s)",
+			     instance_name);
+		return false;
+	}
+
+	return true;
+}
+
+PyObject *PyFtrace_tracing_ON(PyObject *self, PyObject *args,
+					      PyObject *kwargs)
+{
+	struct tracefs_instance *instance;
+
+	if (!get_instance_from_arg(args, kwargs, &instance))
+		return NULL;
+
+	if (!tracing_ON(instance))
+		return NULL;
+
+	Py_RETURN_NONE;
+}
+
+static bool tracing_OFF(struct tracefs_instance *instance)
+{
+	int ret = tracefs_trace_off(instance);
+
+	if (ret < 0 ||
+	    tracefs_trace_is_on(instance) != 0) {
+		const char *instance_name =
+			instance ? tracefs_instance_get_name(instance) : "top";
+
+		PyErr_Format(TFS_ERROR,
+			     "Failed to stop tracing (Instance: %s)",
+			     instance_name);
+		return false;
+	}
+
+	return true;
+}
+
+PyObject *PyFtrace_tracing_OFF(PyObject *self, PyObject *args,
+					       PyObject *kwargs)
+{
+	struct tracefs_instance *instance;
+
+	if (!get_instance_from_arg(args, kwargs, &instance))
+		return NULL;
+
+	if (!tracing_OFF(instance))
+		return NULL;
+
+	Py_RETURN_NONE;
+}
+
+PyObject *PyFtrace_is_tracing_ON(PyObject *self, PyObject *args,
+						 PyObject *kwargs)
+{
+	struct tracefs_instance *instance;
+	int ret;
+
+	if (!get_instance_from_arg(args, kwargs, &instance))
+		return NULL;
+
+	ret = tracefs_trace_is_on(instance);
+	if (ret < 0) {
+		const char *instance_name =
+			instance ? tracefs_instance_get_name(instance) : "top";
+
+		PyErr_Format(TFS_ERROR,
+			     "Failed to check if tracing is ON (Instance: %s)",
+			     instance_name);
+		return NULL;
+	}
+
+	if (ret == 0)
+		Py_RETURN_FALSE;
+
+	Py_RETURN_TRUE;
+}
+
+static bool pid2file(struct tracefs_instance *instance,
+		     const char *file,
+		     int pid,
+		     bool append)
+{
+	char pid_str[100];
+
+	if (sprintf(pid_str, "%d", pid) <= 0)
+		return false;
+
+	if (append) {
+		if (!append_to_file(instance, file, pid_str))
+			return false;
+	} else {
+		if (!write_to_file_and_check(instance, file, pid_str))
+			return false;
+	}
+
+	return true;
+}
+
+static bool set_pid(struct tracefs_instance *instance,
+		    const char *file, PyObject *pid_val)
+{
+	PyObject *item;
+	int n, i, pid;
+
+	if (PyList_CheckExact(pid_val)) {
+		n = PyList_Size(pid_val);
+		for (i = 0; i < n; ++i) {
+			item = PyList_GetItem(pid_val, i);
+			if (!PyLong_CheckExact(item))
+				goto fail;
+
+			pid = PyLong_AsLong(item);
+			if (!pid2file(instance, file, pid, true))
+				goto fail;
+		}
+	} else if (PyLong_CheckExact(pid_val)) {
+		pid = PyLong_AsLong(pid_val);
+		if (!pid2file(instance, file, pid, true))
+			goto fail;
+	} else {
+		goto fail;
+	}
+
+	return true;
+
+ fail:
+	PyErr_Format(TFS_ERROR, "Failed to set PIDs for \"%s\"",
+		     file);
+	return false;
+}
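set_pid() accepts either a single PID or a list of PIDs. The same input normalization in Python (the helper name is illustrative; the explicit bool exclusion mirrors the C code's use of PyLong_CheckExact(), which rejects booleans):

```python
def normalize_pids(pid_val):
    """Accept one int or a list of ints, like set_pid() in the C code."""
    if isinstance(pid_val, int) and not isinstance(pid_val, bool):
        return [str(pid_val)]
    if isinstance(pid_val, list) and \
       all(isinstance(p, int) and not isinstance(p, bool) for p in pid_val):
        # Each PID is written to the control file as a decimal string.
        return [str(p) for p in pid_val]
    raise TypeError("Failed to set PIDs: expected an int or a list of ints")

print(normalize_pids([1, 42]))  # → ['1', '42']
```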
+
+PyObject *PyFtrace_set_event_pid(PyObject *self, PyObject *args,
+						 PyObject *kwargs)
+{
+	const char *instance_name = NO_ARG;
+	struct tracefs_instance *instance;
+	PyObject *pid_val;
+
+	static char *kwlist[] = {"pid", "instance", NULL};
+	if (!PyArg_ParseTupleAndKeywords(args,
+					 kwargs,
+					 "O|s",
+					 kwlist,
+					 &pid_val,
+					 &instance_name)) {
+		return NULL;
+	}
+
+	if (!get_optional_instance(instance_name, &instance))
+		return NULL;
+
+	if (!set_pid(instance, "set_event_pid", pid_val))
+		return NULL;
+
+	Py_RETURN_NONE;
+}
+
+PyObject *PyFtrace_set_ftrace_pid(PyObject *self, PyObject *args,
+						  PyObject *kwargs)
+{
+	const char *instance_name = NO_ARG;
+	struct tracefs_instance *instance;
+	PyObject *pid_val;
+
+	static char *kwlist[] = {"pid", "instance", NULL};
+	if (!PyArg_ParseTupleAndKeywords(args,
+					 kwargs,
+					 "O|s",
+					 kwlist,
+					 &pid_val,
+					 &instance_name)) {
+		return NULL;
+	}
+
+	if (!get_optional_instance(instance_name, &instance))
+		return NULL;
+
+	if (!set_pid(instance, "set_ftrace_pid", pid_val))
+		return NULL;
+
+	Py_RETURN_NONE;
+}
+
+static bool set_opt(struct tracefs_instance *instance,
+		    const char *opt, const char *val)
+{
+	char file[PATH_MAX];
+
+	if (snprintf(file, PATH_MAX, "options/%s", opt) <= 0 ||
+	    !write_to_file_and_check(instance, file, val)) {
+		PyErr_Format(TFS_ERROR, "Failed to set option \"%s\"", opt);
+		return false;
+	}
+
+	return true;
+}
+
+static PyObject *set_option_py_args(PyObject *args, PyObject *kwargs,
+				   const char *val)
+{
+	const char *instance_name = NO_ARG, *opt;
+	struct tracefs_instance *instance;
+
+	static char *kwlist[] = {"option", "instance", NULL};
+	if (!PyArg_ParseTupleAndKeywords(args,
+					 kwargs,
+					 "s|s",
+					 kwlist,
+					 &opt,
+					 &instance_name)) {
+		return NULL;
+	}
+
+	if (!get_optional_instance(instance_name, &instance))
+		return NULL;
+
+	if (!set_opt(instance, opt, val))
+		return NULL;
+
+	Py_RETURN_NONE;
+}
+
+PyObject *PyFtrace_enable_option(PyObject *self, PyObject *args,
+						 PyObject *kwargs)
+{
+	return set_option_py_args(args, kwargs, ON);
+}
+
+PyObject *PyFtrace_disable_option(PyObject *self, PyObject *args,
+						  PyObject *kwargs)
+{
+	return set_option_py_args(args, kwargs, OFF);
+}
+
+PyObject *PyFtrace_option_is_set(PyObject *self, PyObject *args,
+						 PyObject *kwargs)
+{
+	const char *instance_name = NO_ARG, *opt;
+	struct tracefs_instance *instance;
+	enum tracefs_option_id opt_id;
+
+	static char *kwlist[] = {"option", "instance", NULL};
+	if (!PyArg_ParseTupleAndKeywords(args,
+					 kwargs,
+					 "s|s",
+					 kwlist,
+					 &opt,
+					 &instance_name)) {
+		return NULL;
+	}
+
+	if (!get_optional_instance(instance_name, &instance))
+		return NULL;
+
+	opt_id = tracefs_option_id(opt);
+	if (tracefs_option_is_enabled(instance, opt_id))
+		Py_RETURN_TRUE;
+
+	Py_RETURN_FALSE;
+}
+
+static PyObject *get_option_list(struct tracefs_instance *instance,
+				 bool enabled)
+{
+	const struct tracefs_options_mask *mask;
+	PyObject *list = PyList_New(0);
+	int i;
+
+	mask = enabled ? tracefs_options_get_enabled(instance) :
+			 tracefs_options_get_supported(instance);
+
+	for (i = 0; i < TRACEFS_OPTION_MAX; ++i)
+		if (tracefs_option_mask_is_set(mask, i)) {
+			const char *opt = tracefs_option_name(i);
+			PyList_Append(list, PyUnicode_FromString(opt));
+		}
+
+	return list;
+}
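get_option_list() walks the option ids from 0 to TRACEFS_OPTION_MAX and keeps the names whose bit is set in the mask returned by libtracefs. The same idea in a few lines of Python (the function name and sample option names are illustrative):

```python
def options_from_mask(mask, names):
    """Return the names whose bit is set in 'mask' (bit i <-> option id i)."""
    return [names[i] for i in range(len(names)) if mask & (1 << i)]

# Bits 0 and 2 are set, so the first and third names are reported.
print(options_from_mask(0b101, ["annotate", "bin", "block"]))
# → ['annotate', 'block']
```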
+
+PyObject *PyFtrace_enabled_options(PyObject *self, PyObject *args,
+						   PyObject *kwargs)
+{
+	struct tracefs_instance *instance;
+
+	if (!get_instance_from_arg(args, kwargs, &instance))
+		return NULL;
+
+	return get_option_list(instance, true);
+}
+
+PyObject *PyFtrace_supported_options(PyObject *self, PyObject *args,
+						     PyObject *kwargs)
+{
+	struct tracefs_instance *instance;
+
+	if (!get_instance_from_arg(args, kwargs, &instance))
+		return NULL;
+
+	return get_option_list(instance, false);
+}
+
+static bool set_fork_options(struct tracefs_instance *instance, bool enable)
+{
+	if (enable) {
+		if (tracefs_option_enable(instance, TRACEFS_OPTION_EVENT_FORK) < 0 ||
+		    tracefs_option_enable(instance, TRACEFS_OPTION_FUNCTION_FORK) < 0)
+			return false;
+	} else {
+		if (tracefs_option_disable(instance, TRACEFS_OPTION_EVENT_FORK) < 0 ||
+		    tracefs_option_disable(instance, TRACEFS_OPTION_FUNCTION_FORK) < 0)
+			return false;
+	}
+
+	return true;
+}
+
+static bool hook2pid(struct tracefs_instance *instance, PyObject *pid_val, int fork)
+{
+	if (!set_pid(instance, "set_ftrace_pid", pid_val) ||
+	    !set_pid(instance, "set_event_pid", pid_val))
+		goto fail;
+
+	if (fork < 0)
+		return true;
+
+	if (!set_fork_options(instance, fork))
+		goto fail;
+
+	return true;
+
+ fail:
+	PyErr_SetString(TFS_ERROR, "Failed to hook to PID");
+	PyErr_Print();
+	return false;
+}
+
+PyObject *PyFtrace_hook2pid(PyObject *self, PyObject *args, PyObject *kwargs)
+{
+	static char *kwlist[] = {"pid", "fork", "instance", NULL};
+	const char *instance_name = NO_ARG;
+	struct tracefs_instance *instance;
+	PyObject *pid_val;
+	int fork = -1;
+
+	if (!PyArg_ParseTupleAndKeywords(args,
+					 kwargs,
+					 "O|ps",
+					 kwlist,
+					 &pid_val,
+					 &fork,
+					 &instance_name)) {
+		return NULL;
+	}
+
+	if (!get_optional_instance(instance_name, &instance))
+		return NULL;
+
+	if (!hook2pid(instance, pid_val, fork))
+		return NULL;
+
+	Py_RETURN_NONE;
+}
+
+void PyFtrace_at_exit(void)
+{
+	destroy_all_instances();
+}
diff --git a/src/ftracepy-utils.h b/src/ftracepy-utils.h
new file mode 100644
index 0000000..44fceab
--- /dev/null
+++ b/src/ftracepy-utils.h
@@ -0,0 +1,132 @@
+/* SPDX-License-Identifier: LGPL-2.1 */
+
+/*
+ * Copyright (C) 2021 VMware Inc, Yordan Karadzhov <y.karadz@gmail.com>
+ */
+
+#ifndef _TC_FTRACE_PY_UTILS
+#define _TC_FTRACE_PY_UTILS
+
+// Python
+#include <Python.h>
+
+// libtracefs
+#include "tracefs.h"
+
+// trace-cruncher
+#include "common.h"
+
+C_OBJECT_WRAPPER_DECLARE(tep_record, PyTepRecord)
+
+C_OBJECT_WRAPPER_DECLARE(tep_event, PyTepEvent)
+
+C_OBJECT_WRAPPER_DECLARE(tep_handle, PyTep)
+
+PyObject *PyTepRecord_time(PyTepRecord* self);
+
+PyObject *PyTepRecord_cpu(PyTepRecord* self);
+
+PyObject *PyTepEvent_name(PyTepEvent* self);
+
+PyObject *PyTepEvent_id(PyTepEvent* self);
+
+PyObject *PyTepEvent_field_names(PyTepEvent* self);
+
+PyObject *PyTepEvent_parse_record_field(PyTepEvent* self, PyObject *args,
+							  PyObject *kwargs);
+
+PyObject *PyTepEvent_get_pid(PyTepEvent* self, PyObject *args,
+					       PyObject *kwargs);
+
+PyObject *PyTep_init_local(PyTep *self, PyObject *args,
+					PyObject *kwargs);
+
+PyObject *PyTep_get_event(PyTep *self, PyObject *args,
+				       PyObject *kwargs);
+
+PyObject *PyFtrace_dir(PyObject *self);
+
+PyObject *PyFtrace_create_instance(PyObject *self, PyObject *args,
+						   PyObject *kwargs);
+
+PyObject *PyFtrace_destroy_instance(PyObject *self, PyObject *args,
+						    PyObject *kwargs);
+
+PyObject *PyFtrace_get_all_instances(PyObject *self);
+
+PyObject *PyFtrace_destroy_all_instances(PyObject *self);
+
+PyObject *PyFtrace_instance_dir(PyObject *self, PyObject *args,
+						PyObject *kwargs);
+
+PyObject *PyFtrace_available_tracers(PyObject *self, PyObject *args,
+						     PyObject *kwargs);
+
+PyObject *PyFtrace_set_current_tracer(PyObject *self, PyObject *args,
+						      PyObject *kwargs);
+
+PyObject *PyFtrace_get_current_tracer(PyObject *self, PyObject *args,
+						      PyObject *kwargs);
+
+PyObject *PyFtrace_available_event_systems(PyObject *self, PyObject *args,
+							   PyObject *kwargs);
+
+PyObject *PyFtrace_available_system_events(PyObject *self, PyObject *args,
+							   PyObject *kwargs);
+
+PyObject *PyFtrace_enable_event(PyObject *self, PyObject *args,
+						PyObject *kwargs);
+
+PyObject *PyFtrace_disable_event(PyObject *self, PyObject *args,
+						 PyObject *kwargs);
+
+PyObject *PyFtrace_enable_events(PyObject *self, PyObject *args,
+						 PyObject *kwargs);
+
+PyObject *PyFtrace_disable_events(PyObject *self, PyObject *args,
+						  PyObject *kwargs);
+
+PyObject *PyFtrace_event_is_enabled(PyObject *self, PyObject *args,
+						    PyObject *kwargs);
+
+PyObject *PyFtrace_set_event_filter(PyObject *self, PyObject *args,
+						    PyObject *kwargs);
+
+PyObject *PyFtrace_clear_event_filter(PyObject *self, PyObject *args,
+						      PyObject *kwargs);
+
+PyObject *PyFtrace_tracing_ON(PyObject *self, PyObject *args,
+					      PyObject *kwargs);
+
+PyObject *PyFtrace_tracing_OFF(PyObject *self, PyObject *args,
+					       PyObject *kwargs);
+
+PyObject *PyFtrace_is_tracing_ON(PyObject *self, PyObject *args,
+						 PyObject *kwargs);
+
+PyObject *PyFtrace_set_event_pid(PyObject *self, PyObject *args,
+						 PyObject *kwargs);
+
+PyObject *PyFtrace_set_ftrace_pid(PyObject *self, PyObject *args,
+						  PyObject *kwargs);
+
+PyObject *PyFtrace_enable_option(PyObject *self, PyObject *args,
+						 PyObject *kwargs);
+
+PyObject *PyFtrace_disable_option(PyObject *self, PyObject *args,
+						  PyObject *kwargs);
+
+PyObject *PyFtrace_option_is_set(PyObject *self, PyObject *args,
+						 PyObject *kwargs);
+
+PyObject *PyFtrace_supported_options(PyObject *self, PyObject *args,
+						     PyObject *kwargs);
+
+PyObject *PyFtrace_enabled_options(PyObject *self, PyObject *args,
+						   PyObject *kwargs);
+
+PyObject *PyFtrace_hook2pid(PyObject *self, PyObject *args, PyObject *kwargs);
+
+void PyFtrace_at_exit(void);
+
+#endif
diff --git a/src/ftracepy.c b/src/ftracepy.c
new file mode 100644
index 0000000..2cdcc33
--- /dev/null
+++ b/src/ftracepy.c
@@ -0,0 +1,272 @@
+// SPDX-License-Identifier: LGPL-2.1
+
+/*
+ * Copyright (C) 2021 VMware Inc, Yordan Karadzhov (VMware) <y.karadz@gmail.com>
+ */
+
+// trace-cruncher
+#include "ftracepy-utils.h"
+
+extern PyObject *TFS_ERROR;
+extern PyObject *TEP_ERROR;
+extern PyObject *TRACECRUNCHER_ERROR;
+
+static PyMethodDef PyTepRecord_methods[] = {
+	{"time",
+	 (PyCFunction) PyTepRecord_time,
+	 METH_NOARGS,
+	 "Get the time of the record."
+	},
+	{"CPU",
+	 (PyCFunction) PyTepRecord_cpu,
+	 METH_NOARGS,
+	 "Get the CPU Id of the record."
+	},
+	{NULL}
+};
+
+C_OBJECT_WRAPPER(tep_record, PyTepRecord, NO_FREE)
+
+static PyMethodDef PyTepEvent_methods[] = {
+	{"name",
+	 (PyCFunction) PyTepEvent_name,
+	 METH_NOARGS,
+	 "Get the name of the event."
+	},
+	{"id",
+	 (PyCFunction) PyTepEvent_id,
+	 METH_NOARGS,
+	 "Get the unique identifier of the event."
+	},
+	{"field_names",
+	 (PyCFunction) PyTepEvent_field_names,
+	 METH_NOARGS,
+	 "Get the names of all fields."
+	},
+	{"parse_record_field",
+	 (PyCFunction) PyTepEvent_parse_record_field,
+	 METH_VARARGS | METH_KEYWORDS,
+	 "Get the content of a record field."
+	},
+	{"get_pid",
+	 (PyCFunction) PyTepEvent_get_pid,
+	 METH_VARARGS | METH_KEYWORDS,
+	 "Get the PID of the task that generated the record."
+	},
+	{NULL}
+};
+
+C_OBJECT_WRAPPER(tep_event, PyTepEvent, NO_FREE)
+
+static PyMethodDef PyTep_methods[] = {
+	{"init_local",
+	 (PyCFunction) PyTep_init_local,
+	 METH_VARARGS | METH_KEYWORDS,
+	 "Initialize from local instance."
+	},
+	{"get_event",
+	 (PyCFunction) PyTep_get_event,
+	 METH_VARARGS | METH_KEYWORDS,
+	 "Get a PyTepEvent object."
+	},
+	{NULL}
+};
+
+C_OBJECT_WRAPPER(tep_handle, PyTep, tep_free)
+
+static PyMethodDef ftracepy_methods[] = {
+	{"dir",
+	 (PyCFunction) PyFtrace_dir,
+	 METH_NOARGS,
+	 "Get the absolute path to the tracefs directory."
+	},
+	{"create_instance",
+	 (PyCFunction) PyFtrace_create_instance,
+	 METH_VARARGS | METH_KEYWORDS,
+	 "Create new tracefs instance."
+	},
+	{"get_all_instances",
+	 (PyCFunction) PyFtrace_get_all_instances,
+	 METH_NOARGS,
+	 "Get all existing tracefs instances."
+	},
+	{"destroy_instance",
+	 (PyCFunction) PyFtrace_destroy_instance,
+	 METH_VARARGS | METH_KEYWORDS,
+	 "Destroy existing tracefs instance."
+	},
+	{"destroy_all_instances",
+	 (PyCFunction) PyFtrace_destroy_all_instances,
+	 METH_NOARGS,
+	 "Destroy all existing tracefs instances."
+	},
+	{"instance_dir",
+	 (PyCFunction) PyFtrace_instance_dir,
+	 METH_VARARGS | METH_KEYWORDS,
+	 "Get the absolute path to the instance directory."
+	},
+	{"available_tracers",
+	 (PyCFunction) PyFtrace_available_tracers,
+	 METH_VARARGS | METH_KEYWORDS,
+	 "Get a list of available tracers."
+	},
+	{"set_current_tracer",
+	 (PyCFunction) PyFtrace_set_current_tracer,
+	 METH_VARARGS | METH_KEYWORDS,
+	 "Enable a tracer."
+	},
+	{"get_current_tracer",
+	 (PyCFunction) PyFtrace_get_current_tracer,
+	 METH_VARARGS | METH_KEYWORDS,
+	 "Check the enabled tracer."
+	},
+	{"available_event_systems",
+	 (PyCFunction) PyFtrace_available_event_systems,
+	 METH_VARARGS | METH_KEYWORDS,
+	 "Get a list of available trace event systems."
+	},
+	{"available_system_events",
+	 (PyCFunction) PyFtrace_available_system_events,
+	 METH_VARARGS | METH_KEYWORDS,
+	 "Get a list of available trace events for a given system."
+	},
+	{"enable_event",
+	 (PyCFunction) PyFtrace_enable_event,
+	 METH_VARARGS | METH_KEYWORDS,
+	 "Enable a trace event."
+	},
+	{"disable_event",
+	 (PyCFunction) PyFtrace_disable_event,
+	 METH_VARARGS | METH_KEYWORDS,
+	 "Disable a trace event."
+	},
+	{"enable_events",
+	 (PyCFunction) PyFtrace_enable_events,
+	 METH_VARARGS | METH_KEYWORDS,
+	 "Enable multiple trace events."
+	},
+	{"disable_events",
+	 (PyCFunction) PyFtrace_disable_events,
+	 METH_VARARGS | METH_KEYWORDS,
+	 "Disable multiple trace events."
+	},
+	{"event_is_enabled",
+	 (PyCFunction) PyFtrace_event_is_enabled,
+	 METH_VARARGS | METH_KEYWORDS,
+	 "Check if event is enabled."
+	},
+	{"set_event_filter",
+	 (PyCFunction) PyFtrace_set_event_filter,
+	 METH_VARARGS | METH_KEYWORDS,
+	 "Define event filter."
+	},
+	{"clear_event_filter",
+	 (PyCFunction) PyFtrace_clear_event_filter,
+	 METH_VARARGS | METH_KEYWORDS,
+	 "Clear event filter."
+	},
+	{"tracing_ON",
+	 (PyCFunction) PyFtrace_tracing_ON,
+	 METH_VARARGS | METH_KEYWORDS,
+	 "Start tracing."
+	},
+	{"tracing_OFF",
+	 (PyCFunction) PyFtrace_tracing_OFF,
+	 METH_VARARGS | METH_KEYWORDS,
+	 "Stop tracing."
+	},
+	{"is_tracing_ON",
+	 (PyCFunction) PyFtrace_is_tracing_ON,
+	 METH_VARARGS | METH_KEYWORDS,
+	 "Check if tracing is ON."
+	},
+	{"set_event_pid",
+	 (PyCFunction) PyFtrace_set_event_pid,
+	 METH_VARARGS | METH_KEYWORDS,
+	 "Set the PIDs whose events will be traced."
+	},
+	{"set_ftrace_pid",
+	 (PyCFunction) PyFtrace_set_ftrace_pid,
+	 METH_VARARGS | METH_KEYWORDS,
+	 "Set the PIDs whose functions will be traced."
+	},
+	{"enable_option",
+	 (PyCFunction) PyFtrace_enable_option,
+	 METH_VARARGS | METH_KEYWORDS,
+	 "Enable a trace option."
+	},
+	{"disable_option",
+	 (PyCFunction) PyFtrace_disable_option,
+	 METH_VARARGS | METH_KEYWORDS,
+	 "Disable a trace option."
+	},
+	{"option_is_set",
+	 (PyCFunction) PyFtrace_option_is_set,
+	 METH_VARARGS | METH_KEYWORDS,
+	 "Check if a trace option is enabled."
+	},
+	{"supported_options",
+	 (PyCFunction) PyFtrace_supported_options,
+	 METH_VARARGS | METH_KEYWORDS,
+	 "Get a list of all supported options."
+	},
+	{"enabled_options",
+	 (PyCFunction) PyFtrace_enabled_options,
+	 METH_VARARGS | METH_KEYWORDS,
+	 "Get a list of all enabled options."
+	},
+	{"hook2pid",
+	 (PyCFunction) PyFtrace_hook2pid,
+	 METH_VARARGS | METH_KEYWORDS,
+	 "Trace only particular process."
+	},
+	{NULL, NULL, 0, NULL}
+};
+
+static struct PyModuleDef ftracepy_module = {
+	PyModuleDef_HEAD_INIT,
+	"ftracepy",
+	"Python interface for Ftrace.",
+	-1,
+	ftracepy_methods
+};
+
+PyMODINIT_FUNC PyInit_ftracepy(void)
+{
+	if (!PyTepTypeInit())
+		return NULL;
+
+	if (!PyTepEventTypeInit())
+		return NULL;
+
+	if (!PyTepRecordTypeInit())
+		return NULL;
+
+	TFS_ERROR = PyErr_NewException("tracecruncher.ftracepy.tfs_error",
+				       NULL, NULL);
+
+	TEP_ERROR = PyErr_NewException("tracecruncher.ftracepy.tep_error",
+				       NULL, NULL);
+
+	TRACECRUNCHER_ERROR = PyErr_NewException("tracecruncher.tc_error",
+						 NULL, NULL);
+
+	PyObject *module =  PyModule_Create(&ftracepy_module);
+
+	PyModule_AddObject(module, "tep_handle", (PyObject *) &PyTepType);
+	PyModule_AddObject(module, "tep_event", (PyObject *) &PyTepEventType);
+	PyModule_AddObject(module, "tep_record", (PyObject *) &PyTepRecordType);
+
+	PyModule_AddObject(module, "tfs_error", TFS_ERROR);
+	PyModule_AddObject(module, "tep_error", TEP_ERROR);
+	PyModule_AddObject(module, "tc_error", TRACECRUNCHER_ERROR);
+
+	if (geteuid() != 0) {
+		PyErr_SetString(TFS_ERROR,
+				"Permission denied. Root privileges are required.");
+		return NULL;
+	}
+
+	Py_AtExit(PyFtrace_at_exit);
+
+	return module;
+}
-- 
2.27.0



* [PATCH v4 02/11] trace-cruncher: Add basic methods for tracing
  2021-07-07 13:21 [PATCH v4 00/11] Build trace-cruncher as Python pakage Yordan Karadzhov (VMware)
  2021-07-07 13:21 ` [PATCH v4 01/11] trace-cruncher: Refactor the part that wraps ftrace Yordan Karadzhov (VMware)
@ 2021-07-07 13:21 ` Yordan Karadzhov (VMware)
  2021-07-07 13:21 ` [PATCH v4 03/11] trace-cruncher: Refactor the part that wraps libkshark Yordan Karadzhov (VMware)
                   ` (9 subsequent siblings)
  11 siblings, 0 replies; 13+ messages in thread
From: Yordan Karadzhov (VMware) @ 2021-07-07 13:21 UTC (permalink / raw)
  To: linux-trace-devel; +Cc: Yordan Karadzhov (VMware)

Here we define a set of basic methods for starting the tracing process
and accessing the trace data.

Signed-off-by: Yordan Karadzhov (VMware) <y.karadz@gmail.com>
---
 src/ftracepy-utils.c | 329 +++++++++++++++++++++++++++++++++++++++++++
 src/ftracepy-utils.h |  12 ++
 src/ftracepy.c       |  20 +++
 3 files changed, 361 insertions(+)

diff --git a/src/ftracepy-utils.c b/src/ftracepy-utils.c
index b34c45b..91a319e 100644
--- a/src/ftracepy-utils.c
+++ b/src/ftracepy-utils.c
@@ -1507,6 +1507,335 @@ static bool hook2pid(struct tracefs_instance *instance, PyObject *pid_val, int f
 	return false;
 }
 
+static void start_tracing_procces(struct tracefs_instance *instance,
+				  char *const *argv,
+				  char *const *envp)
+{
+	PyObject *pid_val = PyList_New(1);
+
+	PyList_SET_ITEM(pid_val, 0, PyLong_FromLong(getpid()));
+	if (!hook2pid(instance, pid_val, true))
+		exit(1);
+
+	tracing_ON(instance);
+	if (execvpe(argv[0], argv, envp) < 0) {
+		PyErr_Format(TFS_ERROR, "Failed to exec \'%s\'",
+			     argv[0]);
+	}
+
+	exit(1);
+}
+
+static PyObject *get_callback_func(const char *plugin_name, const char *py_callback)
+{
+	PyObject *py_name, *py_module, *py_func;
+
+	py_name = PyUnicode_FromString(plugin_name);
+	py_module = PyImport_Import(py_name);
+	if (!py_module) {
+		PyErr_Format(TFS_ERROR, "Failed to import plugin \'%s\'",
+			     plugin_name);
+		return NULL;
+	}
+
+	py_func = PyObject_GetAttrString(py_module, py_callback);
+	if (!py_func || !PyCallable_Check(py_func)) {
+		PyErr_Format(TFS_ERROR,
+			     "Failed to import callback from plugin \'%s\'",
+			     plugin_name);
+		return NULL;
+	}
+
+	return py_func;
+}
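get_callback_func() imports a user "plugin" module by name and looks up a callable attribute in it. The pure-Python equivalent of that lookup (the helper name is illustrative; importlib.import_module plays the role of PyImport_Import, and the stdlib "math" module stands in for a user plugin):

```python
import importlib

def get_callback(plugin_name, callback_name):
    """Import 'plugin_name' and return its callable 'callback_name'."""
    module = importlib.import_module(plugin_name)
    func = getattr(module, callback_name, None)
    if not callable(func):
        raise ImportError("Failed to import callback from plugin %r" % plugin_name)
    return func

# Demo with a stdlib module standing in for a user plugin:
sqrt = get_callback("math", "sqrt")
print(sqrt(9.0))  # → 3.0
```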
+
+struct callback_context {
+	void	*py_callback;
+
+	bool	status;
+} callback_ctx;
+
+static int callback(struct tep_event *event, struct tep_record *record,
+		    int cpu, void *ctx_ptr)
+{
+	struct callback_context *ctx = ctx_ptr;
+	PyObject *ret;
+
+	record->cpu = cpu; // Remove when the bug in libtracefs is fixed.
+
+	PyObject *py_tep_event = PyTepEvent_New(event);
+	PyObject *py_tep_record = PyTepRecord_New(record);
+
+	PyObject *arglist = PyTuple_New(2);
+	PyTuple_SetItem(arglist, 0, py_tep_event);
+	PyTuple_SetItem(arglist, 1, py_tep_record);
+
+	ret = PyObject_CallObject((PyObject *)ctx->py_callback, arglist);
+	Py_DECREF(arglist);
+
+	if (ret) {
+		Py_DECREF(ret);
+	} else {
+		if (PyErr_Occurred()) {
+			if (PyErr_ExceptionMatches(PyExc_SystemExit)) {
+				PyErr_Clear();
+			} else {
+				PyErr_Print();
+			}
+		}
+
+		ctx->status = false;
+	}
+
+	return 0;
+}
+
+static bool notrace_this_pid(struct tracefs_instance *instance)
+{
+	int pid = getpid();
+
+	if (!pid2file(instance, "set_ftrace_notrace_pid", pid, true) ||
+	    !pid2file(instance, "set_event_notrace_pid", pid, true)) {
+		PyErr_SetString(TFS_ERROR,
+				"Failed to disable tracing for \'this\' process.");
+		return false;
+	}
+
+	return true;
+}
+
+static void iterate_raw_events_waitpid(struct tracefs_instance *instance,
+				       struct tep_handle *tep,
+				       PyObject *py_func,
+				       pid_t pid)
+{
+	callback_ctx.py_callback = py_func;
+	do {
+		tracefs_iterate_raw_events(tep, instance, NULL, 0,
+					   callback, &callback_ctx);
+	} while (waitpid(pid, NULL, WNOHANG) != pid);
+}
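The loop above keeps draining the ring buffer and only stops once waitpid() with WNOHANG reaps the traced child. A Python sketch of the same poll-then-reap pattern (Linux-only, using os.fork; the helper name is illustrative and the drain callback stands in for tracefs_iterate_raw_events()):

```python
import os

def drain_until_child_exits(pid, drain_once):
    """Drain pending events, then check the child with WNOHANG, repeatedly."""
    while True:
        drain_once()  # stands in for one tracefs_iterate_raw_events() pass
        reaped, _status = os.waitpid(pid, os.WNOHANG)
        if reaped == pid:  # (0, 0) is returned while the child still runs
            return

pid = os.fork()
if pid == 0:
    os._exit(0)  # child: exit immediately
drain_until_child_exits(pid, lambda: None)
print("child reaped")
```

Draining once more after the child exits matters: events emitted just before exit would otherwise be lost, which is why the C code uses a do/while rather than a plain while loop.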
+
+static bool init_callback_tep(struct tracefs_instance *instance,
+			      const char *plugin,
+			      const char *py_callback,
+			      struct tep_handle **tep,
+			      PyObject **py_func)
+{
+	*py_func = get_callback_func(plugin, py_callback);
+	if (!*py_func)
+		return false;
+
+	*tep = tracefs_local_events(tracefs_instance_get_dir(instance));
+	if (!*tep) {
+		PyErr_Format(TFS_ERROR,
+			     "Unable to get a 'tep' handle for instance \'%s\'.",
+			     get_instance_name(instance));
+		return false;
+	}
+
+	if (!notrace_this_pid(instance))
+		return false;
+
+	return true;
+}
+
+PyObject *PyFtrace_trace_shell_process(PyObject *self, PyObject *args,
+						       PyObject *kwargs)
+{
+	const char *plugin = "__main__", *py_callback = "callback", *instance_name;
+	static char *kwlist[] = {"process", "plugin", "callback", "instance", NULL};
+	struct tracefs_instance *instance;
+	struct tep_handle *tep;
+	PyObject *py_func;
+	char *process;
+	pid_t pid;
+
+	instance_name = NO_ARG;
+	if (!PyArg_ParseTupleAndKeywords(args,
+					 kwargs,
+					 "s|sss",
+					 kwlist,
+					 &process,
+					 &plugin,
+					 &py_callback,
+					 &instance_name)) {
+		return NULL;
+	}
+
+	if (!get_optional_instance(instance_name, &instance))
+		return NULL;
+
+	if (!init_callback_tep(instance, plugin, py_callback, &tep, &py_func))
+		return NULL;
+
+	pid = fork();
+	if (pid < 0) {
+		PyErr_SetString(TFS_ERROR, "Failed to fork");
+		return NULL;
+	}
+
+	if (pid == 0) {
+		char *argv[] = {getenv("SHELL"), "-c", process, NULL};
+		char *envp[] = {NULL};
+
+		start_tracing_procces(instance, argv, envp);
+	}
+
+	iterate_raw_events_waitpid(instance, tep, py_func, pid);
+
+	Py_RETURN_NONE;
+}
+
+PyObject *PyFtrace_trace_process(PyObject *self, PyObject *args,
+						 PyObject *kwargs)
+{
+	const char *plugin = "__main__", *py_callback = "callback", *instance_name;
+	static char *kwlist[] = {"argv", "plugin", "callback", "instance", NULL};
+	struct tracefs_instance *instance;
+	struct tep_handle *tep;
+	PyObject *py_func, *py_argv, *py_arg;
+	pid_t pid;
+	int i, argc;
+
+	instance_name = NO_ARG;
+	if (!PyArg_ParseTupleAndKeywords(args,
+					 kwargs,
+					 "O|sss",
+					 kwlist,
+					 &py_argv,
+					 &plugin,
+					 &py_callback,
+					 &instance_name)) {
+		return NULL;
+	}
+
+	if (!get_optional_instance(instance_name, &instance))
+		return NULL;
+
+	if (!init_callback_tep(instance, plugin, py_callback, &tep, &py_func))
+		return NULL;
+
+	if (!PyList_CheckExact(py_argv)) {
+		PyErr_SetString(TFS_ERROR, "Failed to parse \'argv\' list");
+		return NULL;
+	}
+
+	argc = PyList_Size(py_argv);
+
+	pid = fork();
+	if (pid < 0) {
+		PyErr_SetString(TFS_ERROR, "Failed to fork");
+		return NULL;
+	}
+
+	if (pid == 0) {
+		char *argv[argc + 1];
+		char *envp[] = {NULL};
+
+		for (i = 0; i < argc; ++i) {
+			py_arg = PyList_GetItem(py_argv, i);
+			if (!PyUnicode_Check(py_arg))
+				return NULL;
+
+			argv[i] = PyUnicode_DATA(py_arg);
+		}
+		argv[argc] = NULL;
+		start_tracing_procces(instance, argv, envp);
+	}
+
+	iterate_raw_events_waitpid(instance, tep, py_func, pid);
+
+	Py_RETURN_NONE;
+}
+
+static struct tracefs_instance *pipe_instance;
+
+static void pipe_stop(int sig)
+{
+	tracefs_trace_pipe_stop(pipe_instance);
+}
+
+PyObject *PyFtrace_read_trace(PyObject *self, PyObject *args,
+					      PyObject *kwargs)
+{
+	signal(SIGINT, pipe_stop);
+
+	if (!get_instance_from_arg(args, kwargs, &pipe_instance) ||
+	    !notrace_this_pid(pipe_instance))
+		return NULL;
+
+	tracing_ON(pipe_instance);
+	if (tracefs_trace_pipe_print(pipe_instance, 0) < 0) {
+		PyErr_Format(TFS_ERROR,
+			     "Unable to read trace data from instance \'%s\'.",
+			     get_instance_name(pipe_instance));
+		return NULL;
+	}
+
+	signal(SIGINT, SIG_DFL);
+	Py_RETURN_NONE;
+}
+
+struct tracefs_instance *itr_instance;
+static bool iterate_keep_going;
+
+static void iterate_stop(int sig)
+{
+	iterate_keep_going = false;
+	tracefs_trace_pipe_stop(itr_instance);
+}
+
+PyObject *PyFtrace_iterate_trace(PyObject *self, PyObject *args,
+					         PyObject *kwargs)
+{
+	static char *kwlist[] = {"plugin", "callback", "instance", NULL};
+	const char *plugin = "__main__", *py_callback = "callback";
+	bool *callback_status = &callback_ctx.status;
+	bool *keep_going = &iterate_keep_going;
+
+	const char *instance_name;
+	struct tep_handle *tep;
+	PyObject *py_func;
+	int ret;
+
+	(*(volatile bool *)keep_going) = true;
+	signal(SIGINT, iterate_stop);
+
+	instance_name = NO_ARG;
+	if (!PyArg_ParseTupleAndKeywords(args,
+					 kwargs,
+					 "|sss",
+					 kwlist,
+					 &plugin,
+					 &py_callback,
+					 &instance_name)) {
+		return NULL;
+	}
+
+	py_func = get_callback_func(plugin, py_callback);
+	if (!py_func ||
+	    !get_optional_instance(instance_name, &itr_instance) ||
+	    !notrace_this_pid(itr_instance))
+		return NULL;
+
+	tep = tracefs_local_events(tracefs_instance_get_dir(itr_instance));
+	(*(volatile bool *)callback_status) = true;
+	callback_ctx.py_callback = py_func;
+	tracing_ON(itr_instance);
+
+	while (*(volatile bool *)keep_going) {
+		ret = tracefs_iterate_raw_events(tep, itr_instance, NULL, 0,
+						 callback, &callback_ctx);
+
+		if (*(volatile bool *)callback_status == false || ret < 0)
+			break;
+	}
+
+	signal(SIGINT, SIG_DFL);
+	Py_RETURN_NONE;
+}
+
 PyObject *PyFtrace_hook2pid(PyObject *self, PyObject *args, PyObject *kwargs)
 {
 	static char *kwlist[] = {"pid", "fork", "instance", NULL};
diff --git a/src/ftracepy-utils.h b/src/ftracepy-utils.h
index 44fceab..3699aaa 100644
--- a/src/ftracepy-utils.h
+++ b/src/ftracepy-utils.h
@@ -125,6 +125,18 @@ PyObject *PyFtrace_supported_options(PyObject *self, PyObject *args,
 PyObject *PyFtrace_enabled_options(PyObject *self, PyObject *args,
 						   PyObject *kwargs);
 
+PyObject *PyFtrace_trace_process(PyObject *self, PyObject *args,
+						 PyObject *kwargs);
+
+PyObject *PyFtrace_trace_shell_process(PyObject *self, PyObject *args,
+						       PyObject *kwargs);
+
+PyObject *PyFtrace_read_trace(PyObject *self, PyObject *args,
+					      PyObject *kwargs);
+
+PyObject *PyFtrace_iterate_trace(PyObject *self, PyObject *args,
+						 PyObject *kwargs);
+
 PyObject *PyFtrace_hook2pid(PyObject *self, PyObject *args, PyObject *kwargs);
 
 void PyFtrace_at_exit(void);
diff --git a/src/ftracepy.c b/src/ftracepy.c
index 2cdcc33..5dd61e4 100644
--- a/src/ftracepy.c
+++ b/src/ftracepy.c
@@ -214,6 +214,26 @@ static PyMethodDef ftracepy_methods[] = {
 	 METH_VARARGS | METH_KEYWORDS,
	 "Get a list of all supported options."
 	},
+	{"trace_process",
+	 (PyCFunction) PyFtrace_trace_process,
+	 METH_VARARGS | METH_KEYWORDS,
+	 "Trace a process."
+	},
+	{"trace_shell_process",
+	 (PyCFunction) PyFtrace_trace_shell_process,
+	 METH_VARARGS | METH_KEYWORDS,
+	 "Trace a process executed within a shell."
+	},
+	{"read_trace",
+	 (PyCFunction) PyFtrace_read_trace,
+	 METH_VARARGS | METH_KEYWORDS,
+	 "Read and print the trace data."
+	},
+	{"iterate_trace",
+	 (PyCFunction) PyFtrace_iterate_trace,
+	 METH_VARARGS | METH_KEYWORDS,
+	 "Iterate over the recorded trace events."
+	},
 	{"hook2pid",
 	 (PyCFunction) PyFtrace_hook2pid,
 	 METH_VARARGS | METH_KEYWORDS,
-- 
2.27.0
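The control flow of `PyFtrace_trace_process` above (fork, exec the target in the child, iterate events in the parent until `waitpid(WNOHANG)` reports the child has exited) can be sketched in plain Python. This is an illustrative stand-in, not part of the patch: the helper name is ours, and the real event iteration is replaced by a sleep.

```python
import os
import time

def run_and_wait(argv):
    """Fork, exec argv in the child, poll in the parent until it exits.

    Stand-in for the fork()/exec/waitpid(WNOHANG) loop used by
    PyFtrace_trace_process; the raw-event iteration is omitted.
    """
    pid = os.fork()
    if pid == 0:
        # child: process image is replaced by the traced program
        os.execvp(argv[0], argv)
    # parent: waitpid(WNOHANG) returns (0, 0) while the child still runs
    while os.waitpid(pid, os.WNOHANG)[0] != pid:
        time.sleep(0.01)  # here the C code iterates the raw trace events
    return pid
```

The same non-blocking `waitpid` check is what lets the C code keep draining trace events while the traced process is alive.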


^ permalink raw reply	[flat|nested] 13+ messages in thread

* [PATCH v4 03/11] trace-cruncher: Refactor the part that wraps libkshark
  2021-07-07 13:21 [PATCH v4 00/11] Build trace-cruncher as Python pakage Yordan Karadzhov (VMware)
  2021-07-07 13:21 ` [PATCH v4 01/11] trace-cruncher: Refactor the part that wraps ftrace Yordan Karadzhov (VMware)
  2021-07-07 13:21 ` [PATCH v4 02/11] trace-cruncher: Add basic methods for tracing Yordan Karadzhov (VMware)
@ 2021-07-07 13:21 ` Yordan Karadzhov (VMware)
  2021-07-07 13:21 ` [PATCH v4 04/11] trace-cruncher: Add "utils" Yordan Karadzhov (VMware)
                   ` (8 subsequent siblings)
  11 siblings, 0 replies; 13+ messages in thread
From: Yordan Karadzhov (VMware) @ 2021-07-07 13:21 UTC (permalink / raw)
  To: linux-trace-devel; +Cc: Yordan Karadzhov (VMware)

The part of the interface that relies on libkshark gets
re-implemented as an extension called "tracecruncher.ksharkpy".
The new extension gets built together with the previously
implemented "tracecruncher.ftracepy" extension.

Signed-off-by: Yordan Karadzhov (VMware) <y.karadz@gmail.com>
---
 setup.py              |  17 +-
 src/ksharkpy-utils.c  | 411 ++++++++++++++++++++++++++++++++++++++++++
 src/ksharkpy-utils.h  |  41 +++++
 src/ksharkpy.c        |  94 ++++++++++
 src/npdatawrapper.pyx | 203 +++++++++++++++++++++
 src/trace2matrix.c    |  40 ++++
 6 files changed, 804 insertions(+), 2 deletions(-)
 create mode 100644 src/ksharkpy-utils.c
 create mode 100644 src/ksharkpy-utils.h
 create mode 100644 src/ksharkpy.c
 create mode 100644 src/npdatawrapper.pyx
 create mode 100644 src/trace2matrix.c
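One detail of the diff below worth calling out: `PyKShark_get_tasks` sorts the PIDs with `qsort()` and then groups them by task name into a dictionary of PID lists. The same logic, mirrored in plain Python (the helper and its input format are illustrative, not a real API):

```python
def group_tasks(records):
    """Group (comm, pid) pairs into {comm: sorted list of pids}.

    Pure-Python mirror of the qsort() + PyDict/PyList loop in
    PyKShark_get_tasks below.
    """
    tasks = {}
    for comm, pid in sorted(records, key=lambda rec: rec[1]):
        tasks.setdefault(comm, []).append(pid)
    return tasks
```

Sorting by PID first is what guarantees that each per-task list comes out ordered without a second pass.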

diff --git a/setup.py b/setup.py
index 6a5d6df..4d7e727 100644
--- a/setup.py
+++ b/setup.py
@@ -11,22 +11,26 @@ from distutils.core import Extension
 from Cython.Build import cythonize
 
 import pkgconfig as pkg
+import numpy as np
 
 
 def third_party_paths():
     pkg_traceevent = pkg.parse('libtraceevent')
     pkg_ftracepy = pkg.parse('libtracefs')
     pkg_tracecmd = pkg.parse('libtracecmd')
+    pkg_kshark = pkg.parse('libkshark')
 
-    include_dirs = []
+    include_dirs = [np.get_include()]
     include_dirs.extend(pkg_traceevent['include_dirs'])
     include_dirs.extend(pkg_ftracepy['include_dirs'])
     include_dirs.extend(pkg_tracecmd['include_dirs'])
+    include_dirs.extend(pkg_kshark['include_dirs'])
 
     library_dirs = []
     library_dirs.extend(pkg_traceevent['library_dirs'])
     library_dirs.extend(pkg_ftracepy['library_dirs'])
     library_dirs.extend(pkg_tracecmd['library_dirs'])
+    library_dirs.extend(pkg_kshark['library_dirs'])
     library_dirs = list(set(library_dirs))
 
     return include_dirs, library_dirs
@@ -48,6 +52,15 @@ def main():
                           sources=['src/ftracepy.c', 'src/ftracepy-utils.c'],
                           libraries=['traceevent', 'tracefs'])
 
+    cythonize('src/npdatawrapper.pyx', language_level = "3")
+    module_data = extension(name='tracecruncher.npdatawrapper',
+                            sources=['src/npdatawrapper.c'],
+                            libraries=['kshark'])
+
+    module_ks = extension(name='tracecruncher.ksharkpy',
+                          sources=['src/ksharkpy.c', 'src/ksharkpy-utils.c'],
+                          libraries=['kshark'])
+
     setup(name='tracecruncher',
           version='0.1.0',
           description='NumPy based interface for accessing tracing data in Python.',
@@ -56,7 +69,7 @@ def main():
           url='https://github.com/vmware/trace-cruncher',
           license='LGPL-2.1',
           packages=find_packages(),
-          ext_modules=[module_ft],
+          ext_modules=[module_ft, module_data, module_ks],
           classifiers=[
               'Development Status :: 3 - Alpha',
               'Programming Language :: Python :: 3',
diff --git a/src/ksharkpy-utils.c b/src/ksharkpy-utils.c
new file mode 100644
index 0000000..12972fb
--- /dev/null
+++ b/src/ksharkpy-utils.c
@@ -0,0 +1,411 @@
+// SPDX-License-Identifier: LGPL-2.1
+
+/*
+ * Copyright (C) 2021 VMware Inc, Yordan Karadzhov (VMware) <y.karadz@gmail.com>
+ */
+
+#ifndef _GNU_SOURCE
+/** Use GNU C Library. */
+#define _GNU_SOURCE
+#endif // _GNU_SOURCE
+
+// C
+#include <string.h>
+
+// KernelShark
+#include "libkshark.h"
+#include "libkshark-plugin.h"
+#include "libkshark-model.h"
+#include "libkshark-tepdata.h"
+
+// trace-cruncher
+#include "ksharkpy-utils.h"
+
+PyObject *KSHARK_ERROR = NULL;
+PyObject *TRACECRUNCHER_ERROR = NULL;
+
+PyObject *PyKShark_open(PyObject *self, PyObject *args, PyObject *kwargs)
+{
+	struct kshark_context *kshark_ctx = NULL;
+	char *fname;
+	int sd;
+
+	static char *kwlist[] = {"file_name", NULL};
+	if(!PyArg_ParseTupleAndKeywords(args,
+					kwargs,
+					"s",
+					kwlist,
+					&fname)) {
+		return NULL;
+	}
+
+	if (!kshark_instance(&kshark_ctx)) {
+		KS_INIT_ERROR
+		return NULL;
+	}
+
+	sd = kshark_open(kshark_ctx, fname);
+	if (sd < 0) {
+		PyErr_Format(KSHARK_ERROR, "Failed to open file \'%s\'", fname);
+		return NULL;
+	}
+
+	return PyLong_FromLong(sd);
+}
+
+PyObject *PyKShark_close(PyObject* self, PyObject* noarg)
+{
+	struct kshark_context *kshark_ctx = NULL;
+
+	if (!kshark_instance(&kshark_ctx)) {
+		KS_INIT_ERROR
+		return NULL;
+	}
+
+	kshark_close_all(kshark_ctx);
+
+	Py_RETURN_NONE;
+}
+
+static bool is_tep_data(const char *file_name)
+{
+	if (!kshark_tep_check_data(file_name)) {
+		PyErr_Format(KSHARK_ERROR, "\'%s\' is not a TEP data file.",
+			     file_name);
+		return false;
+	}
+
+	return true;
+}
+
+PyObject *PyKShark_open_tep_buffer(PyObject *self, PyObject *args,
+						   PyObject *kwargs)
+{
+	struct kshark_context *kshark_ctx = NULL;
+	char *file_name, *buffer_name;
+	int sd, sd_top;
+
+	static char *kwlist[] = {"file_name", "buffer_name", NULL};
+	if(!PyArg_ParseTupleAndKeywords(args,
+					kwargs,
+					"ss",
+					kwlist,
+					&file_name,
+					&buffer_name)) {
+		return NULL;
+	}
+
+	if (!kshark_instance(&kshark_ctx)) {
+		KS_INIT_ERROR
+		return NULL;
+	}
+
+	if (!is_tep_data(file_name))
+		return NULL;
+
+	sd_top = kshark_tep_find_top_stream(kshark_ctx, file_name);
+	if (sd_top < 0) {
+		/* The "top" stream has to be initialized first. */
+		sd_top = kshark_open(kshark_ctx, file_name);
+	}
+
+	if (sd_top < 0)
+		return NULL;
+
+	sd = kshark_tep_open_buffer(kshark_ctx, sd_top, buffer_name);
+	if (sd < 0) {
+		PyErr_Format(KSHARK_ERROR,
+			     "Failed to open buffer \'%s\' in file \'%s\'",
+			     buffer_name, file_name);
+		return NULL;
+	}
+
+	return PyLong_FromLong(sd);
+}
+
+static struct kshark_data_stream *get_stream(int stream_id)
+{
+	struct kshark_context *kshark_ctx = NULL;
+	struct kshark_data_stream *stream;
+
+	if (!kshark_instance(&kshark_ctx)) {
+		KS_INIT_ERROR
+		return NULL;
+	}
+
+	stream = kshark_get_data_stream(kshark_ctx, stream_id);
+	if (!stream) {
+		PyErr_Format(KSHARK_ERROR,
+			     "No data stream %i loaded.",
+			     stream_id);
+		return NULL;
+	}
+
+	return stream;
+}
+
+PyObject *PyKShark_set_clock_offset(PyObject* self, PyObject* args,
+						    PyObject *kwargs)
+{
+	struct kshark_data_stream *stream;
+	int64_t offset;
+	int stream_id;
+
+	static char *kwlist[] = {"stream_id", "offset", NULL};
+	if (!PyArg_ParseTupleAndKeywords(args,
+					 kwargs,
+					 "iL",
+					 kwlist,
+					 &stream_id,
+					 &offset)) {
+		return NULL;
+	}
+
+	stream = get_stream(stream_id);
+	if (!stream)
+		return NULL;
+
+	if (stream->calib_array)
+		free(stream->calib_array);
+
+	stream->calib_array = malloc(sizeof(*stream->calib_array));
+	if (!stream->calib_array) {
+		MEM_ERROR
+		return NULL;
+	}
+
+	stream->calib_array[0] = offset;
+	stream->calib_array_size = 1;
+
+	stream->calib = kshark_offset_calib;
+
+	Py_RETURN_NONE;
+}
+
+static int compare(const void *a, const void *b)
+{
+	int a_i, b_i;
+
+	a_i = *(const int *) a;
+	b_i = *(const int *) b;
+
+	if (a_i > b_i)
+		return +1;
+
+	if (a_i < b_i)
+		return -1;
+
+	return 0;
+}
+
+PyObject *PyKShark_get_tasks(PyObject* self, PyObject* args, PyObject *kwargs)
+{
+	struct kshark_context *kshark_ctx = NULL;
+	const char *comm;
+	int sd, *pids;
+	ssize_t i, n;
+
+	static char *kwlist[] = {"stream_id", NULL};
+	if (!PyArg_ParseTupleAndKeywords(args,
+					 kwargs,
+					 "i",
+					 kwlist,
+					 &sd)) {
+		return NULL;
+	}
+
+	if (!kshark_instance(&kshark_ctx)) {
+		KS_INIT_ERROR
+		return NULL;
+	}
+
+	n = kshark_get_task_pids(kshark_ctx, sd, &pids);
+	if (n <= 0) {
+		PyErr_SetString(KSHARK_ERROR,
+				"Failed to retrieve the PIDs of the tasks");
+		return NULL;
+	}
+
+	qsort(pids, n, sizeof(*pids), compare);
+
+	PyObject *tasks, *pid_list, *pid_val;
+
+	tasks = PyDict_New();
+	for (i = 0; i < n; ++i) {
+		comm = kshark_comm_from_pid(sd, pids[i]);
+		pid_val = PyLong_FromLong(pids[i]);
+		pid_list = PyDict_GetItemString(tasks, comm);
+		if (!pid_list) {
+			pid_list = PyList_New(1);
+			PyList_SET_ITEM(pid_list, 0, pid_val);
+			PyDict_SetItemString(tasks, comm, pid_list);
+		} else {
+			PyList_Append(pid_list, pid_val);
+		}
+	}
+
+	return tasks;
+}
+
+PyObject *PyKShark_event_id(PyObject *self, PyObject *args, PyObject *kwargs)
+{
+	struct kshark_data_stream *stream;
+	int stream_id, event_id;
+	const char *name;
+
+	static char *kwlist[] = {"stream_id", "name", NULL};
+	if (!PyArg_ParseTupleAndKeywords(args,
+					 kwargs,
+					 "is",
+					 kwlist,
+					 &stream_id,
+					 &name)) {
+		return NULL;
+	}
+
+	stream = get_stream(stream_id);
+	if (!stream)
+		return NULL;
+
+	event_id = kshark_find_event_id(stream, name);
+	if (event_id < 0) {
+		PyErr_Format(KSHARK_ERROR,
+			     "Failed to retrieve the Id of event \'%s\' in stream \'%s\'",
+			     name, stream->file);
+		return NULL;
+	}
+
+	return PyLong_FromLong(event_id);
+}
+
+PyObject *PyKShark_event_name(PyObject *self, PyObject *args,
+					      PyObject *kwargs)
+{
+	struct kshark_data_stream *stream;
+	struct kshark_entry entry;
+	int stream_id, event_id;
+	PyObject *ret;
+	char *name;
+
+	static char *kwlist[] = {"stream_id", "event_id", NULL};
+	if (!PyArg_ParseTupleAndKeywords(args,
+					 kwargs,
+					 "ii",
+					 kwlist,
+					 &stream_id,
+					 &event_id)) {
+		return NULL;
+	}
+
+	stream = get_stream(stream_id);
+	if (!stream)
+		return NULL;
+
+	entry.event_id = event_id;
+	entry.stream_id = stream_id;
+	entry.visible = 0xFF;
+	name = kshark_get_event_name(&entry);
+	if (!name) {
+		PyErr_Format(KSHARK_ERROR,
+			     "Failed to retrieve the name of event \'id=%i\' in stream \'%s\'",
+			     event_id, stream->file);
+		return NULL;
+	}
+
+	ret = PyUnicode_FromString(name);
+	free(name);
+
+	return ret;
+}
+
+PyObject *PyKShark_read_event_field(PyObject *self, PyObject *args,
+						    PyObject *kwargs)
+{
+	struct kshark_context *kshark_ctx = NULL;
+	struct kshark_entry entry;
+	int event_id, ret, sd;
+	const char *field;
+	int64_t offset;
+	int64_t val;
+
+	static char *kwlist[] = {"stream_id", "offset", "event_id", "field", NULL};
+	if(!PyArg_ParseTupleAndKeywords(args,
+					kwargs,
+					"iLis",
+					kwlist,
+					&sd,
+					&offset,
+					&event_id,
+					&field)) {
+		return NULL;
+	}
+
+	if (!kshark_instance(&kshark_ctx)) {
+		KS_INIT_ERROR
+		return NULL;
+	}
+
+	entry.event_id = event_id;
+	entry.offset = offset;
+	entry.stream_id = sd;
+
+	ret = kshark_read_event_field_int(&entry, field, &val);
+	if (ret != 0) {
+		PyErr_Format(KSHARK_ERROR,
+			     "Failed to read field '%s' of event '%i'",
+			     field, event_id);
+		return NULL;
+	}
+
+	return PyLong_FromLong(val);
+}
+
+PyObject *PyKShark_new_session_file(PyObject *self, PyObject *args,
+						    PyObject *kwargs)
+{
+	struct kshark_context *kshark_ctx = NULL;
+	struct kshark_config_doc *session;
+	struct kshark_config_doc *plugins;
+	struct kshark_config_doc *markers;
+	struct kshark_config_doc *model;
+	struct kshark_trace_histo histo;
+	const char *session_file;
+
+	static char *kwlist[] = {"session_file", NULL};
+	if (!PyArg_ParseTupleAndKeywords(args,
+					 kwargs,
+					 "s",
+					 kwlist,
+					 &session_file)) {
+		return NULL;
+	}
+
+	if (!kshark_instance(&kshark_ctx)) {
+		KS_INIT_ERROR
+		return NULL;
+	}
+
+	session = kshark_config_new("kshark.config.session",
+				    KS_CONFIG_JSON);
+
+	kshark_ctx->filter_mask = KS_TEXT_VIEW_FILTER_MASK |
+				  KS_GRAPH_VIEW_FILTER_MASK |
+				  KS_EVENT_VIEW_FILTER_MASK;
+
+	kshark_export_all_dstreams(kshark_ctx, &session);
+
+	ksmodel_init(&histo);
+	model = kshark_export_model(&histo, KS_CONFIG_JSON);
+	kshark_config_doc_add(session, "Model", model);
+
+	markers = kshark_config_new("kshark.config.markers", KS_CONFIG_JSON);
+	kshark_config_doc_add(session, "Markers", markers);
+
+	plugins = kshark_config_new("kshark.config.plugins", KS_CONFIG_JSON);
+	kshark_config_doc_add(session, "User Plugins", plugins);
+
+	kshark_save_config_file(session_file, session);
+	kshark_free_config_doc(session);
+
+	Py_RETURN_NONE;
+}
diff --git a/src/ksharkpy-utils.h b/src/ksharkpy-utils.h
new file mode 100644
index 0000000..6d17d2e
--- /dev/null
+++ b/src/ksharkpy-utils.h
@@ -0,0 +1,41 @@
+/* SPDX-License-Identifier: LGPL-2.1 */
+
+/*
+ * Copyright (C) 2021 VMware Inc, Yordan Karadzhov <y.karadz@gmail.com>
+ */
+
+#ifndef _TC_KSHARK_PY_UTILS
+#define _TC_KSHARK_PY_UTILS
+
+// Python
+#include <Python.h>
+
+// trace-cruncher
+#include "common.h"
+
+C_OBJECT_WRAPPER_DECLARE(kshark_data_stream, PyKSharkStream)
+
+PyObject *PyKShark_open(PyObject *self, PyObject *args, PyObject *kwargs);
+
+PyObject *PyKShark_close(PyObject* self, PyObject* noarg);
+
+PyObject *PyKShark_open_tep_buffer(PyObject *self, PyObject *args,
+						   PyObject *kwargs);
+
+PyObject *PyKShark_set_clock_offset(PyObject* self, PyObject* args,
+						    PyObject *kwargs);
+
+PyObject *PyKShark_get_tasks(PyObject* self, PyObject* args, PyObject *kwargs);
+
+PyObject *PyKShark_event_id(PyObject *self, PyObject *args, PyObject *kwargs);
+
+PyObject *PyKShark_event_name(PyObject *self, PyObject *args,
+					      PyObject *kwargs);
+
+PyObject *PyKShark_read_event_field(PyObject *self, PyObject *args,
+						    PyObject *kwargs);
+
+PyObject *PyKShark_new_session_file(PyObject *self, PyObject *args,
+						    PyObject *kwargs);
+
+#endif
diff --git a/src/ksharkpy.c b/src/ksharkpy.c
new file mode 100644
index 0000000..7cfb94b
--- /dev/null
+++ b/src/ksharkpy.c
@@ -0,0 +1,94 @@
+// SPDX-License-Identifier: LGPL-2.1
+
+/*
+ * Copyright (C) 2019 VMware Inc, Yordan Karadzhov (VMware) <y.karadz@gmail.com>
+ */
+
+/** Use GNU C Library. */
+#define _GNU_SOURCE 1
+
+// C
+#include <stdio.h>
+#include <dlfcn.h>
+
+// Python
+#include <Python.h>
+
+// trace-cruncher
+#include "ksharkpy-utils.h"
+#include "common.h"
+
+extern PyObject *KSHARK_ERROR;
+extern PyObject *TRACECRUNCHER_ERROR;
+
+static PyMethodDef ksharkpy_methods[] = {
+	{"open",
+	 (PyCFunction) PyKShark_open,
+	 METH_VARARGS | METH_KEYWORDS,
+	 "Open trace data file"
+	},
+	{"close",
+	 (PyCFunction) PyKShark_close,
+	 METH_VARARGS | METH_KEYWORDS,
+	 "Close trace data file"
+	},
+	{"open_tep_buffer",
+	 (PyCFunction) PyKShark_open_tep_buffer,
+	 METH_VARARGS | METH_KEYWORDS,
+	 "Open trace data buffer"
+	},
+	{"set_clock_offset",
+	 (PyCFunction) PyKShark_set_clock_offset,
+	 METH_VARARGS | METH_KEYWORDS,
+	 "Set the clock offset of the data stream"
+	},
+	{"get_tasks",
+	 (PyCFunction) PyKShark_get_tasks,
+	 METH_VARARGS | METH_KEYWORDS,
+	 "Get all tasks recorded in a trace file"
+	},
+	{"event_id",
+	 (PyCFunction) PyKShark_event_id,
+	 METH_VARARGS | METH_KEYWORDS,
+	 "Get the Id of the event from its name"
+	},
+	{"event_name",
+	 (PyCFunction) PyKShark_event_name,
+	 METH_VARARGS | METH_KEYWORDS,
+	 "Get the name of the event from its Id number"
+	},
+	{"read_event_field",
+	 (PyCFunction) PyKShark_read_event_field,
+	 METH_VARARGS | METH_KEYWORDS,
+	 "Get the value of an event field having a given name"
+	},
+	{"new_session_file",
+	 (PyCFunction) PyKShark_new_session_file,
+	 METH_VARARGS | METH_KEYWORDS,
+	 "Create new session description file"
+	},
+	{NULL, NULL, 0, NULL}
+};
+
+static struct PyModuleDef ksharkpy_module = {
+	PyModuleDef_HEAD_INIT,
+	"ksharkpy",
+	"",
+	-1,
+	ksharkpy_methods
+};
+
+PyMODINIT_FUNC PyInit_ksharkpy(void)
+{
+	PyObject *module = PyModule_Create(&ksharkpy_module);
+
+	KSHARK_ERROR = PyErr_NewException("tracecruncher.ksharkpy.ks_error",
+					  NULL, NULL);
+	PyModule_AddObject(module, "ks_error", KSHARK_ERROR);
+
+	TRACECRUNCHER_ERROR = PyErr_NewException("tracecruncher.tc_error",
+						 NULL, NULL);
+	PyModule_AddObject(module, "tc_error", TRACECRUNCHER_ERROR);
+
+	return module;
+}
diff --git a/src/npdatawrapper.pyx b/src/npdatawrapper.pyx
new file mode 100644
index 0000000..da55d67
--- /dev/null
+++ b/src/npdatawrapper.pyx
@@ -0,0 +1,203 @@
+"""
+SPDX-License-Identifier: LGPL-2.1
+
+Copyright 2019 VMware Inc, Yordan Karadzhov (VMware) <y.karadz@gmail.com>
+"""
+
+import ctypes
+
+# Import the Python-level symbols of numpy
+import numpy as np
+# Import the C-level symbols of numpy
+cimport numpy as np
+
+import json
+
+from libcpp cimport bool
+
+from libc.stdlib cimport free
+
+from cpython cimport PyObject, Py_INCREF
+
+from libc cimport stdint
+ctypedef stdint.int16_t int16_t
+ctypedef stdint.uint16_t uint16_t
+ctypedef stdint.int32_t int32_t
+ctypedef stdint.uint32_t uint32_t
+ctypedef stdint.int64_t int64_t
+ctypedef stdint.uint64_t uint64_t
+
+cdef extern from 'numpy/ndarraytypes.h':
+    int NPY_ARRAY_CARRAY
+
+# Numpy must be initialized!!!
+np.import_array()
+
+cdef extern from 'trace2matrix.c':
+    ssize_t trace2matrix(int stream_id,
+                         int16_t **event_array,
+                         int16_t **cpu_array,
+                         int32_t **pid_array,
+                         int64_t **offset_array,
+                         int64_t **ts_array)
+
+data_columns = ['event', 'cpu', 'pid', 'offset', 'time']
+
+data_column_types = {
+    data_columns[0]: np.NPY_INT16,
+    data_columns[1]: np.NPY_INT16,
+    data_columns[2]: np.NPY_INT32,
+    data_columns[3]: np.NPY_INT64,
+    data_columns[4]: np.NPY_UINT64
+    }
+
+cdef class KsDataWrapper:
+    cdef int item_size
+    cdef int data_size
+    cdef int data_type
+    cdef void* data_ptr
+
+    cdef init(self, int data_type,
+                    int data_size,
+                    int item_size,
+                    void* data_ptr):
+        """ This initialization cannot be done in the constructor because
+            we use C-level arguments.
+        """
+        self.item_size = item_size
+        self.data_size = data_size
+        self.data_type = data_type
+        self.data_ptr = data_ptr
+
+    def __array__(self):
+        """ Here we use the __array__ method, that is called when numpy
+            tries to get an array from the object.
+        """
+        cdef np.npy_intp shape[1]
+        shape[0] = <np.npy_intp> self.data_size
+
+        ndarray = np.PyArray_New(np.ndarray,
+                                 1, shape,
+                                 self.data_type,
+                                 NULL,
+                                 self.data_ptr,
+                                 self.item_size,
+                                 NPY_ARRAY_CARRAY,
+                                 <object>NULL)
+
+        return ndarray
+
+    def __dealloc__(self):
+        """ Free the data. This is called by Python when all the references to
+            the object are gone.
+        """
+        free(<void*>self.data_ptr)
+
+
+def load(stream_id, evt_data=True, cpu_data=True, pid_data=True,
+                    ofst_data=True, ts_data=True):
+    """ Python binding of the 'kshark_load_data_matrix' function that does not
+        copy the data. The input parameters can be used to avoid loading the
+        data from the unnecessary fields.
+    """
+    cdef int16_t *evt_c
+    cdef int16_t *cpu_c
+    cdef int32_t *pid_c
+    cdef int64_t *ofst_c
+    cdef int64_t *ts_c
+
+    cdef np.ndarray evt, cpu, pid, ofst, ts
+
+    if not evt_data:
+        evt_c = NULL
+
+    if not cpu_data:
+        cpu_c = NULL
+
+    if not pid_data:
+        pid_c = NULL
+
+    if not ofst_data:
+        ofst_c = NULL
+
+    if not ts_data:
+        ts_c = NULL
+
+    data_dict = {}
+
+    cdef ssize_t size
+
+    size = trace2matrix(stream_id, &evt_c, &cpu_c, &pid_c, &ofst_c, &ts_c)
+    if size <= 0:
+        raise Exception('No data has been loaded.')
+
+    if evt_data:
+        column = 'event'
+        array_wrapper_evt = KsDataWrapper()
+        array_wrapper_evt.init(data_type=data_column_types[column],
+                               data_size=size,
+                               item_size=0,
+                               data_ptr=<void *>evt_c)
+
+        evt = np.array(array_wrapper_evt, copy=False)
+        evt.base = <PyObject *> array_wrapper_evt
+        data_dict.update({column: evt})
+        Py_INCREF(array_wrapper_evt)
+
+    if cpu_data:
+        column = 'cpu'
+        array_wrapper_cpu = KsDataWrapper()
+        array_wrapper_cpu.init(data_type=data_column_types[column],
+                               data_size=size,
+                               item_size=0,
+                               data_ptr=<void *> cpu_c)
+
+        cpu = np.array(array_wrapper_cpu, copy=False)
+        cpu.base = <PyObject *> array_wrapper_cpu
+        data_dict.update({column: cpu})
+        Py_INCREF(array_wrapper_cpu)
+
+    if pid_data:
+        column = 'pid'
+        array_wrapper_pid = KsDataWrapper()
+        array_wrapper_pid.init(data_type=data_column_types[column],
+                               data_size=size,
+                               item_size=0,
+                               data_ptr=<void *>pid_c)
+
+        pid = np.array(array_wrapper_pid, copy=False)
+        pid.base = <PyObject *> array_wrapper_pid
+        data_dict.update({column: pid})
+        Py_INCREF(array_wrapper_pid)
+
+    if ofst_data:
+        column = 'offset'
+        array_wrapper_ofst = KsDataWrapper()
+        array_wrapper_ofst.init(data_type=data_column_types[column],
+                                data_size=size,
+                                item_size=0,
+                                data_ptr=<void *> ofst_c)
+
+
+        ofst = np.array(array_wrapper_ofst, copy=False)
+        ofst.base = <PyObject *> array_wrapper_ofst
+        data_dict.update({column: ofst})
+        Py_INCREF(array_wrapper_ofst)
+
+    if ts_data:
+        column = 'time'
+        array_wrapper_ts = KsDataWrapper()
+        array_wrapper_ts.init(data_type=data_column_types[column],
+                              data_size=size,
+                              item_size=0,
+                              data_ptr=<void *> ts_c)
+
+        ts = np.array(array_wrapper_ts, copy=False)
+        ts.base = <PyObject *> array_wrapper_ts
+        data_dict.update({column: ts})
+        Py_INCREF(array_wrapper_ts)
+
+    return data_dict
+
+def columns():
+    return data_columns
diff --git a/src/trace2matrix.c b/src/trace2matrix.c
new file mode 100644
index 0000000..1151ebe
--- /dev/null
+++ b/src/trace2matrix.c
@@ -0,0 +1,40 @@
+// SPDX-License-Identifier: LGPL-2.1
+
+/*
+ * Copyright 2019 VMware Inc, Yordan Karadzhov <ykaradzhov@vmware.com>
+ */
+
+// KernelShark
+#include "libkshark.h"
+
+ssize_t trace2matrix(int sd,
+		     int16_t **event_array,
+		     int16_t **cpu_array,
+		     int32_t **pid_array,
+		     int64_t **offset_array,
+		     int64_t **ts_array)
+{
+	struct kshark_generic_stream_interface *interface;
+	struct kshark_context *kshark_ctx = NULL;
+	struct kshark_data_stream *stream;
+	ssize_t total = 0;
+
+	if (!kshark_instance(&kshark_ctx))
+		return -1;
+
+	stream = kshark_get_data_stream(kshark_ctx, sd);
+	if (!stream)
+		return -1;
+
+	interface = stream->interface;
+	if (interface->type == KS_GENERIC_DATA_INTERFACE &&
+	    interface->load_matrix) {
+		total = interface->load_matrix(stream, kshark_ctx, event_array,
+								   cpu_array,
+								   pid_array,
+								   offset_array,
+								   ts_array);
+	}
+
+	return total;
+}
-- 
2.27.0
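The `KsDataWrapper` class in `npdatawrapper.pyx` above hands the C arrays to NumPy without copying: the ndarray is built directly over the malloc'ed buffer, and the wrapper frees that buffer only when the last Python reference is gone. The sharing half of that idea can be shown with stdlib tools alone (no NumPy, no C), using a `memoryview` over the same storage:

```python
import array

# int64 storage standing in for one of the malloc'ed columns (e.g. ts_array)
buf = array.array('q', [10, 20, 30])

# zero-copy view: shares the buffer, as the KsDataWrapper-backed ndarray does
view = memoryview(buf)

# a write through one handle is visible through the other
buf[1] = 99
```

What `memoryview` cannot model is the ownership transfer: in the Cython code `__dealloc__` calls `free()` on the C pointer, tying the buffer's lifetime to the wrapper object.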


^ permalink raw reply	[flat|nested] 13+ messages in thread

* [PATCH v4 04/11] trace-cruncher: Add "utils"
  2021-07-07 13:21 [PATCH v4 00/11] Build trace-cruncher as Python pakage Yordan Karadzhov (VMware)
                   ` (2 preceding siblings ...)
  2021-07-07 13:21 ` [PATCH v4 03/11] trace-cruncher: Refactor the part that wraps libkshark Yordan Karadzhov (VMware)
@ 2021-07-07 13:21 ` Yordan Karadzhov (VMware)
  2021-07-07 13:21 ` [PATCH v4 05/11] trace-cruncher: Refactor the examples Yordan Karadzhov (VMware)
                   ` (7 subsequent siblings)
  11 siblings, 0 replies; 13+ messages in thread
From: Yordan Karadzhov (VMware) @ 2021-07-07 13:21 UTC (permalink / raw)
  To: linux-trace-devel; +Cc: Yordan Karadzhov (VMware)

Place all the code that is pure Python in
tracecruncher/ks_utils.py and tracecruncher/ft_utils.py

Signed-off-by: Yordan Karadzhov (VMware) <y.karadz@gmail.com>
---
 tracecruncher/__init__.py |   0
 tracecruncher/ft_utils.py |  19 ++++
 tracecruncher/ks_utils.py | 227 ++++++++++++++++++++++++++++++++++++++
 3 files changed, 246 insertions(+)
 create mode 100644 tracecruncher/__init__.py
 create mode 100644 tracecruncher/ft_utils.py
 create mode 100644 tracecruncher/ks_utils.py

diff --git a/tracecruncher/__init__.py b/tracecruncher/__init__.py
new file mode 100644
index 0000000..e69de29
diff --git a/tracecruncher/ft_utils.py b/tracecruncher/ft_utils.py
new file mode 100644
index 0000000..eae161c
--- /dev/null
+++ b/tracecruncher/ft_utils.py
@@ -0,0 +1,19 @@
+"""
+SPDX-License-Identifier: LGPL-2.1
+
+Copyright 2019 VMware Inc, Yordan Karadzhov (VMware) <y.karadz@gmail.com>
+"""
+
+import sys
+import time
+
+from . import ftracepy as ft
+
+
+def find_event_id(system, event):
+    """ Get the unique identifier of a trace event.
+    """
+    tep = ft.tep_handle()
+    tep.init_local(dir=ft.dir(), systems=[system])
+
+    return tep.get_event(system=system, name=event).id()
diff --git a/tracecruncher/ks_utils.py b/tracecruncher/ks_utils.py
new file mode 100644
index 0000000..15c7835
--- /dev/null
+++ b/tracecruncher/ks_utils.py
@@ -0,0 +1,227 @@
+"""
+SPDX-License-Identifier: LGPL-2.1
+
+Copyright 2019 VMware Inc, Yordan Karadzhov (VMware) <y.karadz@gmail.com>
+"""
+
+import os
+import json
+
+from . import npdatawrapper as dw
+from . import ksharkpy as ks
+
+
+def size(data):
+    """ Get the number of trace records.
+    """
+    for key in dw.data_column_types:
+        if data[key] is not None:
+            return data[key].size
+
+    raise Exception('Data size is unknown.')
+
+
+class trace_file_stream:
+    def __init__(self, file_name='', buffer_name='top'):
+        """ Constructor.
+        """
+        self.file_name = file_name
+        self.buffer_name = buffer_name
+        self.stream_id = -1
+
+        if file_name:
+            self.open(file_name)
+
+    def open(self, file_name):
+        """ Open a trace file for reading.
+        """
+        self.file_name = file_name
+        self.stream_id = ks.open(self.file_name)
+
+    def open_buffer(self, file_name, buffer_name):
+        """ Open a particular buffer in a trace file for reading.
+        """
+        self.file_name = file_name
+        self.buffer_name = buffer_name
+        self.stream_id = ks.open_buffer(self.file_name, buffer_name)
+
+    def close(self):
+        """ Close this trace data stream.
+        """
+        if self.stream_id >= 0:
+            ks.close(self.stream_id)
+            self.stream_id = -1
+
+    def set_clock_offset(self, offset):
+        """ Set the clock offset to be added to the timestamps of this trace
+            data stream.
+        """
+        ks.set_clock_offset(stream_id=self.stream_id, offset=offset)
+
+    def load(self, cpu_data=True, pid_data=True, evt_data=True,
+             ofst_data=True, ts_data=True):
+        """ Load the trace data.
+        """
+        return dw.load(stream_id=self.stream_id,
+                       ofst_data=ofst_data,
+                       cpu_data=cpu_data,
+                       ts_data=ts_data,
+                       pid_data=pid_data,
+                       evt_data=evt_data)
+
+    def get_tasks(self):
+        """ Get a dictionary (name and PID) of all tasks present in the
+            tracing data.
+        """
+        return ks.get_tasks(stream_id=self.stream_id)
+
+    def event_id(self, name):
+        """ Retrieve the unique ID of the event from its name.
+        """
+        return ks.event_id(stream_id=self.stream_id, name=name)
+
+    def event_name(self, event_id):
+        """ Retrieve the name of the event from its unique ID.
+        """
+        return ks.event_name(stream_id=self.stream_id, event_id=event_id)
+
+    def read_event_field(self, offset, event_id, field):
+        """ Retrieve the value of a trace event field.
+        """
+        return ks.read_event_field(stream_id=self.stream_id,
+                                   offset=offset,
+                                   event_id=event_id,
+                                   field=field)
+
+    def __enter__(self):
+        """ Open the trace file when entering a 'with' block.
+        """
+        self.open(self.file_name)
+        return self
+
+    def __exit__(self,
+                 exception_type,
+                 exception_value,
+                 traceback):
+        """ Close the data stream when exiting the 'with' block.
+        """
+        self.close()
+
+    def __del__(self):
+        """ Destructor: make sure the data stream gets closed.
+        """
+        self.close()
+
+
+class ks_session:
+    def __init__(self, session_name):
+        """ Constructor.
+        """
+        self.gui_session(session_name)
+
+    def gui_session(self, session_name):
+        """ Generate a default KernelShark session description
+            file (JSON).
+        """
+        self.name, extension = os.path.splitext(session_name)
+        json_file = session_name
+        if extension != '.json':
+            json_file += '.json'
+
+        ks.new_session_file(session_file=json_file)
+
+        self.session_file = open(json_file, 'r+')
+        self.session_doc = json.load(self.session_file)
+
+        self.session_doc['Splitter'] = [1, 1]
+        self.session_doc['MainWindow'] = [1200, 800]
+        self.session_doc['ViewTop'] = 0
+        self.session_doc['ColorScheme'] = 0.75
+        self.session_doc['Model']['bins'] = 1000
+
+        self.session_doc['Markers']['markA'] = {}
+        self.session_doc['Markers']['markA']['isSet'] = False
+        self.session_doc['Markers']['markB'] = {}
+        self.session_doc['Markers']['markB']['isSet'] = False
+        self.session_doc['Markers']['Active'] = 'A'
+
+        for stream_doc in self.session_doc["data streams"]:
+            stream_doc['CPUPlots'] = []
+            stream_doc['TaskPlots'] = []
+
+        self.session_doc['ComboPlots'] = []
+
+    def set_cpu_plots(self, stream, plots):
+        """ Add a list of CPU plots to the KernelShark session description
+            file.
+        """
+        for stream_doc in self.session_doc['data streams']:
+            if stream_doc['stream id'] == stream.stream_id:
+                stream_doc['CPUPlots'] = list(map(int, plots))
+
+    def set_task_plots(self, stream, plots):
+        """ Add a list of Task plots to the KernelShark session description
+            file.
+        """
+        for stream_doc in self.session_doc['data streams']:
+            if stream_doc['stream id'] == stream.stream_id:
+                stream_doc['TaskPlots'] = list(map(int, plots))
+
+    def set_time_range(self, tmin, tmax):
+        """ Set the time range of the KernelShark visualization model.
+        """
+        self.session_doc['Model']['range'] = [int(tmin), int(tmax)]
+
+    def set_marker_a(self, row):
+        """ Set the position of Marker A.
+        """
+        self.session_doc['Markers']['markA']['isSet'] = True
+        self.session_doc['Markers']['markA']['row'] = int(row)
+
+    def set_marker_b(self, row):
+        """ Set the position of Marker B.
+        """
+        self.session_doc['Markers']['markB']['isSet'] = True
+        self.session_doc['Markers']['markB']['row'] = int(row)
+
+    def set_first_visible_row(self, row):
+        """ Set the number of the first visible row in the text data viewer.
+        """
+        self.session_doc['ViewTop'] = int(row)
+
+    def add_plugin(self, stream, plugin):
+        """ In the KernelShark session description file, add a plugin to be
+            registered to a given trace data stream.
+        """
+        for stream_doc in self.session_doc["data streams"]:
+            if stream_doc['stream id'] == stream.stream_id:
+                stream_doc['plugins']['registered'].append([plugin, True])
+
+    def add_event_filter(self, stream, events):
+        """ In the KernelShark session description file, add a list of
+            event IDs to be filtered out.
+        """
+        for stream_doc in self.session_doc["data streams"]:
+            if stream_doc['stream id'] == stream.stream_id:
+                stream_doc['filters']['hide event filter'] = events
+
+    def save(self):
+        """ Save the KernelShark session description to a JSON file.
+        """
+        self.session_file.seek(0)
+        json.dump(self.session_doc, self.session_file, indent=4)
+        self.session_file.truncate()
+
+
+def open_file(file_name):
+    """ Open a trace file for reading.
+    """
+    return trace_file_stream(file_name)
+
+
+def open_buffer(file_name, buffer_name):
+    """ Open a particular buffer in a trace file for reading.
+    """
+    s = trace_file_stream()
+    s.open_buffer(file_name, buffer_name)
+    return s
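The save() method above relies on a classic rewrite-in-place idiom: rewind a file opened in 'r+' mode, dump the updated document, then truncate whatever tail is left over from the old, possibly longer, content. A minimal self-contained sketch of the same pattern (the file and keys here are throwaway examples, not part of the patch):

```python
import json
import os
import tempfile

def save_session_doc(session_file, doc):
    """ Rewrite an already-open JSON file in place: rewind, dump the
        updated document, and truncate any leftover bytes.
    """
    session_file.seek(0)
    json.dump(doc, session_file, indent=4)
    session_file.truncate()

# Round trip with a throwaway file.
fd, path = tempfile.mkstemp(suffix='.json')
with os.fdopen(fd, 'w') as f:
    json.dump({'Model': {'bins': 0}, 'ViewTop': 0}, f)

with open(path, 'r+') as f:
    doc = json.load(f)
    doc['Model']['bins'] = 1000
    save_session_doc(f, doc)

with open(path) as f:
    result = json.load(f)
os.remove(path)
```

Without the final truncate(), shrinking the document would leave stale JSON fragments after the new content and corrupt the file.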
-- 
2.27.0


^ permalink raw reply	[flat|nested] 13+ messages in thread

* [PATCH v4 05/11] trace-cruncher: Refactor the examples
  2021-07-07 13:21 [PATCH v4 00/11] Build trace-cruncher as Python pakage Yordan Karadzhov (VMware)
                   ` (3 preceding siblings ...)
  2021-07-07 13:21 ` [PATCH v4 04/11] trace-cruncher: Add "utils" Yordan Karadzhov (VMware)
@ 2021-07-07 13:21 ` Yordan Karadzhov (VMware)
  2021-07-07 13:21 ` [PATCH v4 06/11] trace-cruncher: Add ftracefy example Yordan Karadzhov (VMware)
                   ` (6 subsequent siblings)
  11 siblings, 0 replies; 13+ messages in thread
From: Yordan Karadzhov (VMware) @ 2021-07-07 13:21 UTC (permalink / raw)
  To: linux-trace-devel; +Cc: Yordan Karadzhov (VMware)

For the moment we will keep only one example, "sched_wakeup.py".
"gpareto_fit.py" gets removed because it doesn't demonstrate anything
conceptually different from "sched_wakeup.py". The difference comes
from the more advanced statistical analysis of the data, which goes
beyond the scope of trace-cruncher. "page_faults.py" gets removed only
temporarily, because it requires some functionalities that are not yet
implemented in the ftrace libraries. Once those functionalities become
available, the example will be added back to trace-cruncher.

Signed-off-by: Yordan Karadzhov (VMware) <y.karadz@gmail.com>
---
 examples/gpareto_fit.py  | 328 ---------------------------------------
 examples/page_faults.py  | 120 --------------
 examples/sched_wakeup.py |  70 ++++-----
 3 files changed, 30 insertions(+), 488 deletions(-)
 delete mode 100755 examples/gpareto_fit.py
 delete mode 100755 examples/page_faults.py

diff --git a/examples/gpareto_fit.py b/examples/gpareto_fit.py
deleted file mode 100755
index 4a2bb2a..0000000
--- a/examples/gpareto_fit.py
+++ /dev/null
@@ -1,328 +0,0 @@
-#!/usr/bin/env python3
-
-"""
-SPDX-License-Identifier: LGPL-2.1
-
-Copyright 2019 VMware Inc, Yordan Karadzhov <ykaradzhov@vmware.com>
-"""
-
-import sys
-import json
-
-import matplotlib.pyplot as plt
-import scipy.stats as st
-import numpy as np
-
-from scipy.stats import genpareto as gpareto
-from scipy.optimize import curve_fit as cfit
-
-from ksharksetup import setup
-# Always call setup() before importing ksharkpy!!!
-setup()
-
-import ksharkpy as ks
-
-def chi2_test(hist, n_bins, c, loc, scale, norm):
-    """ Simple Chi^2 test for the goodness of the fit.
-    """
-    chi2 = n_empty_bins = 0
-    for i in range(len(hist[0])):
-        if hist[0][i] == 0:
-            # Ignore this empty bin.
-            n_empty_bins += 1
-            continue
-
-        # Get the center of bin i.
-        x = (hist[1][i] + hist[1][i + 1]) / 2
-        fit_val = gpareto.pdf(x, c=c, loc=loc, scale=scale)
-        chi = (fit_val - hist[0][i]) / np.sqrt(hist[0][i])
-        chi2 += chi**2
-
-    return  norm * chi2 / (n_bins - n_empty_bins)
-
-def quantile(p, P, c, loc, scale):
-    """ The quantile function of the Generalized Pareto distribution.
-    """
-    return loc + scale / c * ((P / p)**(c) - 1)
-
-
-def dq_dscale(p, P, c, scale):
-    """ Partial derivative of the quantile function.
-    """
-    return ((P / p)**c - 1) / c
-
-
-def dq_dc(p, P, c, scale):
-    """ Partial derivative of the quantile function.
-    """
-    return (scale * (np.log(P / p) * (P / p)**c ) / c
-          - scale * ((P / p)**c - 1) / (c**2))
-
-
-def dq_dP(p, P, c, scale):
-    """ Partial derivative of the quantile function.
-    """
-    return scale / P * (P / p)**c
-
-
-def error_P(n, N):
-    return np.sqrt(n) / N
-
-
-def error(p, P, c, scale, err_P, err_c, err_scale):
-    return np.sqrt((dq_dP(p, P, c, scale) * err_P)**2
-                 + (dq_dc(p, P, c, scale) * err_c)**2
-                 + (dq_dscale(p, P, c, scale) * err_scale)**2)
-
-
-def quantile_conf_bound(p, P, n, c, loc, scale, err_P, err_c, err_scale):
-    return (quantile(p=p, P=P, c=c, loc=loc, scale=scale)
-          + n * error(p=p, P=P, c=c, scale=scale,
-                      err_P=err_P, err_c=err_c, err_scale=err_scale));
-
-
-def get_latency(t0, t1):
-    """ Get the value of the latency in microseconds
-    """
-    return (t1 - t0) / 1000 - 1000
-
-
-def get_cpu_data(data, task_pid, start_id, stop_id, threshold):
-    """ Loop over the tracing data for a given CPU and find all latencies bigger
-        than the specified threshold.
-    """
-    # Get the size of the data.
-    size = ks.data_size(data)
-    #print("data size:", size)
-
-    time_start = -1
-    dt_ot = []
-    tot = 0
-    i = 0
-    i_start = 0;
-
-    while i < size:
-        if data["pid"][i] == task_pid and data['event'][i] == start_id:
-            time_start = data['time'][i]
-            i_start = i;
-            i = i + 1
-
-            while i < size:
-                if data["pid"][i] == task_pid and data['event'][i] == stop_id:
-                    delta = get_latency(time_start, data['time'][i])
-
-                    if delta > threshold and tot != 0:
-                        print('lat. over threshold: ', delta, i_start, i)
-                        dt_ot.append([delta, i_start, i])               
-
-                    tot = tot + 1
-                    break
-
-                i = i + 1
-        i = i + 1
-
-    print(task_pid, 'tot:', len(dt_ot), '/', tot)
-    return dt_ot, tot
-
-
-def make_ks_session(fname, data, start, stop):
-    """ Save a KernelShark session descriptor file (Json).
-        The sessions is zooming around the maximum observed latency.
-    """
-    sname = 'max_lat.json'
-    ks.new_session(fname, sname)
-    i_start = int(start)
-    i_stop = int(stop)
-
-    with open(sname, 'r+') as s:
-        session = json.load(s)
-        session['TaskPlots'] = [int(data['pid'][i_start])]
-        session['CPUPlots'] = [int(data['cpu'][i_start])]
-
-        delta = data['time'][i_stop] - data['time'][i_start]
-        tmin = int(data['time'][i_start] - delta)
-        tmax = int(data['time'][i_stop] + delta)
-        session['Model']['range'] = [tmin, tmax]
-
-        session['Markers']['markA']['isSet'] = True
-        session['Markers']['markA']['row'] = i_start)
-
-        session['Markers']['markB']['isSet'] = True
-        session['Markers']['markB']['row'] = i_stop)
-
-        session['ViewTop'] = i_start) - 5
-
-        ks.save_session(session, s)
-
-
-fname = str(sys.argv[1])
-status = ks.open_file(fname)
-if not status:
-    print ("Failed to open file ", fname)
-    sys.exit()
-
-ks.register_plugin('reg_pid')
-data = ks.load_data()
-
-# Get the Event Ids of the hrtimer_start and print events.
-start_id = ks.event_id('timer', 'hrtimer_start')
-stop_id = ks.event_id('ftrace', 'print')
-print("start_id", start_id)
-print("stop_id", stop_id)
-
-tasks = ks.get_tasks()
-jdb_pids = tasks['jitterdebugger']
-print('jitterdeburrer pids:', jdb_pids)
-jdb_pids.pop(0)
-
-threshold = 10
-data_ot = []
-tot = 0
-
-for task_pid in jdb_pids:
-    cpu_data, cpu_tot = get_cpu_data(data=data,
-                                     task_pid=task_pid,
-                                     start_id=start_id,
-                                     stop_id=stop_id,
-                                     threshold=threshold)
-
-    data_ot.extend(cpu_data)
-    tot += cpu_tot
-
-ks.close()
-
-dt_ot = np.array(data_ot)
-np.savetxt('peak_over_threshold_loaded.txt', dt_ot)
-
-make_ks_session(fname=fname, data=data, i_start=int(dt_ot[i_max_lat][1]),
-                                        i_stop=int(dt_ot[i_max_lat][2]))
-
-P = len(dt_ot) / tot
-err_P = error_P(n=len(dt_ot), N=tot)
-print('tot:', tot, ' P =', P)
-
-lat = dt_ot[:,0]
-#print(lat)
-i_max_lat = lat.argmax()
-print('imax:', i_max_lat, int(dt_ot[i_max_lat][1]))
-
-print('max', np.amax(dt_ot))
-
-start = threshold
-stop = 31
-n_bins = (stop - start) * 2
-
-bin_size = (stop - start) / n_bins
-
-x = np.linspace(start=start + bin_size / 2,
-                stop=stop - bin_size / 2,
-                num=n_bins)
-
-bins_ot = np.linspace(start=start, stop=stop, num=n_bins + 1)
-#print(bins_ot)
-
-fig, ax = plt.subplots(nrows=2, ncols=2)
-fig.tight_layout()
-ax[-1, -1].axis('off')
-
-hist_ot = ax[0][0].hist(x=lat, bins=bins_ot, histtype='stepfilled', alpha=0.3)
-ax[0][0].set_xlabel('latency [\u03BCs]', fontsize=8)
-ax[0][0].set_yscale('log')
-#print(hist_ot[0])
-
-hist_ot_norm = ax[1][0].hist(x=lat, bins=bins_ot,
-                             density=True, histtype='stepfilled', alpha=0.3)
-
-# Fit using the fitter of the genpareto class (shown in red).
-ret = gpareto.fit(lat, loc=threshold)
-ax[1][0].plot(x, gpareto.pdf(x, c=ret[0],  loc=ret[1],  scale=ret[2]),
-              'r-', lw=1, color='red',  alpha=0.8)
-
-ax[1][0].set_xlabel('latency [\u03BCs]', fontsize=8)
-print(ret)
-print('\ngoodness-of-fit: ' + '{:03.3f}'.format(chi2_test(hist_ot_norm,
-                                                          n_bins=n_bins,
-                                                          c=ret[0],
-                                                          loc=ret[1],
-                                                          scale=ret[2],
-                                                          norm=len(lat))))
-
-print("\n curve_fit:")
-# Fit using the curve_fit fitter. Fix the value of the "loc" parameter.
-popt, pcov = cfit(lambda x, c, scale: gpareto.pdf(x, c=c, loc=threshold, scale=scale),
-                  x, hist_ot_norm[0],
-                  p0=[ret[0], ret[2]])
-
-print(popt)
-print(pcov)
-
-ax[1][0].plot(x, gpareto.pdf(x, c=popt[0], loc=threshold, scale=popt[1]),
-              'r-', lw=1, color='blue', alpha=0.8)
-
-fit_legend = str('\u03BE = ' + '{:05.3f}'.format(popt[0]) +
-                 ' +- ' + '{:05.3f}'.format(pcov[0][0]**0.5) +
-                 ' (' + '{:03.2f}'.format(pcov[0][0]**0.5 / abs(popt[0]) * 100) + '%)')
-
-fit_legend += str('\n\u03C3 = ' + '{:05.3f}'.format(popt[1]) +
-                  ' +- ' + '{:05.3f}'.format(pcov[1][1]**0.5) +
-                  ' (' + '{:03.2f}'.format(pcov[1][1]**0.5 / abs(popt[1]) * 100) + '%)')
-
-fit_legend += '\n\u03BC = ' + str(threshold) + ' (const)'
-
-fit_legend += '\ngoodness-of-fit: ' + '{:03.3f}'.format(chi2_test(hist_ot_norm,
-                                                        n_bins=n_bins,
-                                                        c=popt[0],
-                                                        loc=threshold,
-                                                        scale=popt[1],
-                                                        norm=len(lat)))
-print(fit_legend)
-
-ax[0][1].set_xscale('log')
-##ax[0][1].set_yscale('log')
-ax[0][1].set_xlabel('Return period', fontsize=8)
-ax[0][1].set_ylabel('Return level [\u03BCs]', fontsize=6)
-ax[0][1].grid(True, linestyle=":", which="both")
-
-y = np.linspace(200000, 5000000, 400)
-ax[0][1].plot(y,
-              quantile(1 / y,
-                       P=P,
-                       c=popt[0],
-                       loc=threshold,
-                       scale=popt[1]),
-              'r-', lw=1, color='blue', alpha=0.8)
-
-ax[0][1].plot(y,
-              quantile_conf_bound(1 / y,
-                                  P=P,
-                                  n=+1, 
-                                  c=popt[0],
-                                  loc=threshold,
-                                  scale=popt[1],
-                                  err_P=err_P,
-                                  err_c= pcov[0][0]**0.5,
-                                  err_scale=pcov[1][1]**0.5),
-              'r-', lw=1, color='green', alpha=0.8)
-
-ax[0][1].plot(y,
-              quantile_conf_bound(1 / y,
-                                  P=P,
-                                  n=-1, 
-                                  c=popt[0],
-                                  loc=threshold,
-                                  scale=popt[1],
-                                  err_P=err_P,
-                                  err_c= pcov[0][0]**0.5,
-                                  err_scale=pcov[1][1]**0.5),
-              'r-', lw=1, color='green', alpha=0.8)
-
-props = dict(boxstyle='round', color='black', alpha=0.05)
-
-ax[1][1].text(0.05, 0.85,
-              fit_legend,
-              fontsize=9,
-              verticalalignment='top',
-              bbox=props)
-
-plt.savefig('figfit-all-loaded.png')
-#plt.show()
diff --git a/examples/page_faults.py b/examples/page_faults.py
deleted file mode 100755
index 446b12d..0000000
--- a/examples/page_faults.py
+++ /dev/null
@@ -1,120 +0,0 @@
-#!/usr/bin/env python3
-
-"""
-SPDX-License-Identifier: LGPL-2.1
-
-Copyright 2019 VMware Inc, Yordan Karadzhov <ykaradzhov@vmware.com>
-"""
-
-import os
-import sys
-import subprocess as sp
-import json
-
-import pprint as pr
-import matplotlib.pyplot as plt
-import scipy.stats as st
-import numpy as np
-from collections import Counter
-from tabulate import tabulate
-
-from ksharksetup import setup
-# Always call setup() before importing ksharkpy!!!
-setup()
-
-import ksharkpy as ks
-
-def gdb_decode_address(obj_file, obj_address):
-    """ Use gdb to examine the contents of the memory at this
-        address.
-    """
-    result = sp.run(['gdb',
-                     '--batch',
-                     '-ex',
-                     'x/i ' + str(obj_address),
-                     obj_file],
-                    stdout=sp.PIPE)
-
-    symbol = result.stdout.decode("utf-8").splitlines()
-
-    if symbol:
-        func = [symbol[0].split(':')[0], symbol[0].split(':')[1]]
-    else:
-        func = [obj_address]
-
-    func.append(obj_file)
-
-    return func
-
-# Get the name of the tracing data file.
-fname = str(sys.argv[1])
-
-ks.open_file(fname)
-ks.register_plugin('reg_pid')
-
-data = ks.load_data()
-tasks = ks.get_tasks()
-#pr.pprint(tasks)
-
-# Get the Event Ids of the page_fault_user or page_fault_kernel events.
-pf_eid = ks.event_id('exceptions', 'page_fault_user')
-
-# Gey the size of the data.
-d_size = ks.data_size(data)
-
-# Get the name of the user program.
-prog_name = str(sys.argv[2])
-
-table_headers = ['N p.f.', 'function', 'value', 'obj. file']
-table_list = []
-
-# Loop over all tasks associated with the user program.
-for j in range(len(tasks[prog_name])):
-    count = Counter()
-    task_pid = tasks[prog_name][j]
-    for i in range(0, d_size):
-        if data['event'][i] == pf_eid and data['pid'][i] == task_pid:
-            address = ks.read_event_field(offset=data['offset'][i],
-                                          event_id=pf_eid,
-                                          field='address')
-            ip = ks.read_event_field(offset=data['offset'][i],
-                                     event_id=pf_eid,
-                                     field='ip')
-            count[ip] += 1
-
-    pf_list = count.items()
-
-    # Sort the counters of the page fault instruction pointers. The most
-    # frequent will be on top.
-    pf_list = sorted(pf_list, key=lambda cnt: cnt[1], reverse=True)
-
-    i_max = 25
-    if i_max > len(pf_list):
-        i_max = len(pf_list)
-
-    for i in range(0, i_max):
-        func = ks.get_function(pf_list[i][0])
-        func_info = [func]
-        if func.startswith('0x'):
-            # The name of the function cannot be determined. We have an
-            # instruction pointer instead. Most probably this is a user-space
-            # function.
-            address = int(func, 0)
-            instruction = ks.map_instruction_address(task_pid, address)
-
-            if instruction['obj_file'] != 'UNKNOWN':
-                func_info = gdb_decode_address(instruction['obj_file'],
-                                               instruction['address'])
-            else:
-                func_info += ['', instruction['obj_file']]
-
-        else:
-            func_info = [func]
-
-        table_list.append([pf_list[i][1]] + func_info)
-
-ks.close()
-
-print("\n", tabulate(table_list,
-                     headers=table_headers,
-                     tablefmt='simple'))
diff --git a/examples/sched_wakeup.py b/examples/sched_wakeup.py
index 52f2688..acf3682 100755
--- a/examples/sched_wakeup.py
+++ b/examples/sched_wakeup.py
@@ -15,28 +15,20 @@ import matplotlib.pyplot as plt
 import scipy.stats as st
 import numpy as np
 
-from ksharksetup import setup
-# Always call setup() before importing ksharkpy!!!
-setup()
+import tracecruncher.ks_utils as tc
 
-import ksharkpy as ks
 # Get the name of the user program.
 if len(sys.argv) >= 2:
     fname = str(sys.argv[1])
 else:
     fname = input('choose a trace file: ')
 
-status = ks.open_file(fname)
-if not status:
-    print ("Failed to open file ", fname)
-    sys.exit()
-
-ks.register_plugin('reg_pid')
+f = tc.open_file(file_name=fname)
 
 # We do not need the Process Ids of the records.
 # Do not load the "pid" data.
-data = ks.load_data(pid_data=False)
-tasks = ks.get_tasks()
+data = f.load(pid_data=False)
+tasks = f.get_tasks()
 
 # Get the name of the user program.
 if len(sys.argv) >= 3:
@@ -48,11 +40,11 @@ else:
 task_pid = tasks[prog_name][0]
 
 # Get the Event Ids of the sched_switch and sched_waking events.
-ss_eid = ks.event_id('sched', 'sched_switch')
-w_eid = ks.event_id('sched', 'sched_waking')
+ss_eid = f.event_id(name='sched/sched_switch')
+w_eid = f.event_id(name='sched/sched_waking')
 
 # Gey the size of the data.
-i = data['offset'].size
+i = tc.size(data)
 
 dt = []
 delta_max = i_ss_max = i_sw_max = 0
@@ -60,7 +52,7 @@ delta_max = i_ss_max = i_sw_max = 0
 while i > 0:
     i = i - 1
     if data['event'][i] == ss_eid:
-        next_pid = ks.read_event_field(offset=data['offset'][i],
+        next_pid = f.read_event_field(offset=data['offset'][i],
                                        event_id=ss_eid,
                                        field='next_pid')
 
@@ -73,13 +65,13 @@ while i > 0:
                 i = i - 1
 
                 if data['event'][i] < 0 and cpu_ss == data['cpu'][i]:
-			# Ring buffer overflow. Ignore this case and continue.
+                        # Ring buffer overflow. Ignore this case and continue.
                         break
 
                 if data['event'][i] == ss_eid:
-                    next_pid = ks.read_event_field(offset=data['offset'][i],
-                                       event_id=ss_eid,
-                                       field='next_pid')
+                    next_pid = f.read_event_field(offset=data['offset'][i],
+                                                  event_id=ss_eid,
+                                                  field='next_pid')
                     if next_pid == task_pid:
                         # Second sched_switch for the same task. ?
                         time_ss = data['time'][i]
@@ -89,7 +81,7 @@ while i > 0:
                     continue
 
                 if (data['event'][i] == w_eid):
-                    waking_pid = ks.read_event_field(offset=data['offset'][i],
+                    waking_pid = f.read_event_field(offset=data['offset'][i],
                                                      event_id=w_eid,
                                                      field='pid')
 
@@ -107,6 +99,7 @@ while i > 0:
 desc = st.describe(np.array(dt))
 print(desc)
 
+# Plot the latency distribution.
 fig, ax = plt.subplots(nrows=1, ncols=1)
 fig.set_figheight(6)
 fig.set_figwidth(7)
@@ -119,30 +112,27 @@ ax.set_xlabel('latency [$\mu$s]')
 ax.hist(dt, bins=(100), histtype='step')
 plt.show()
 
-sname = 'sched.json'
-ks.new_session(fname, sname)
+# Prepare a session description for KernelShark.
+s = tc.ks_session('sched')
 
-with open(sname, 'r+') as s:
-    session = json.load(s)
-    session['TaskPlots'] = [task_pid]
-    session['CPUPlots'] = [int(data['cpu'][i_sw_max])]
+delta = data['time'][i_ss_max] - data['time'][i_sw_max]
+tmin = data['time'][i_sw_max] - delta
+tmax = data['time'][i_ss_max] + delta
 
-    if data['cpu'][i_ss_max] != data['cpu'][i_sw_max]:
-        session['CPUPlots'].append(int(data['cpu'][i_ss_max]))
+s.set_time_range(tmin=tmin, tmax=tmax)
 
-    delta = data['time'][i_ss_max] - data['time'][i_sw_max]
-    tmin = int(data['time'][i_sw_max] - delta)
-    tmax = int(data['time'][i_ss_max] + delta)
-    session['Model']['range'] = [tmin, tmax]
+cpu_plots = [data['cpu'][i_sw_max]]
+if data['cpu'][i_ss_max] != data['cpu'][i_sw_max]:
+    cpu_plots.append(data['cpu'][i_ss_max])
 
-    session['Markers']['markA']['isSet'] = True
-    session['Markers']['markA']['row'] = int(i_sw_max)
+s.set_cpu_plots(f, cpu_plots)
+s.set_task_plots(f, [task_pid])
 
-    session['Markers']['markB']['isSet'] = True
-    session['Markers']['markB']['row'] = int(i_ss_max)
+s.set_marker_a(i_sw_max)
+s.set_marker_b(i_ss_max)
 
-    session['ViewTop'] = int(i_sw_max) - 5
+s.set_first_visible_row(i_sw_max - 5)
 
-    ks.save_session(session, s)
+s.add_plugin(stream=f, plugin='sched_events')
 
-ks.close()
+s.save()
-- 
2.27.0


^ permalink raw reply	[flat|nested] 13+ messages in thread

* [PATCH v4 06/11] trace-cruncher: Add ftracepy example
  2021-07-07 13:21 [PATCH v4 00/11] Build trace-cruncher as Python pakage Yordan Karadzhov (VMware)
                   ` (5 preceding siblings ...)
  2021-07-07 13:21 ` [PATCH v4 06/11] trace-cruncher: Add ftracefy example Yordan Karadzhov (VMware)
@ 2021-07-07 13:21 ` Yordan Karadzhov (VMware)
  2021-07-07 13:21 ` [PATCH v4 07/11] trace-cruncher: Add Makefile Yordan Karadzhov (VMware)
                   ` (4 subsequent siblings)
  11 siblings, 0 replies; 13+ messages in thread
From: Yordan Karadzhov (VMware) @ 2021-07-07 13:21 UTC (permalink / raw)
  To: linux-trace-devel; +Cc: Yordan Karadzhov (VMware)

This is the most basic possible example. It can be considered the
equivalent of a "Hello world" program.

Signed-off-by: Yordan Karadzhov (VMware) <y.karadz@gmail.com>
---
 examples/start_tracing.py | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)
 create mode 100755 examples/start_tracing.py

diff --git a/examples/start_tracing.py b/examples/start_tracing.py
new file mode 100755
index 0000000..da36164
--- /dev/null
+++ b/examples/start_tracing.py
@@ -0,0 +1,20 @@
+#!/usr/bin/env python3
+
+"""
+SPDX-License-Identifier: CC-BY-4.0
+
+Copyright (C) 2021 VMware Inc, Yordan Karadzhov (VMware) <y.karadz@gmail.com>
+"""
+
+import tracecruncher.ftracepy as ft
+
+# Create a new Ftrace instance to work in.
+inst = ft.create_instance()
+
+# Enable the "sched_switch" event from system "sched" and all events from "irq".
+ft.enable_events(instance=inst,
+                 systems=['sched', 'irq'],
+                 events=[['sched_switch'],['all']])
+
+# Print the stream of trace events. "Ctrl+c" to stop tracing.
+ft.read_trace(instance=inst)
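The systems/events arguments appear to be parallel lists: each inner list of event names applies to the system at the same index, with the special name "all" selecting every event in that system. A small sketch of how that convention pairs up (an illustration of the calling convention as I read it, not the binding's actual implementation):

```python
def expand_event_selection(systems, events):
    """ Pair each system with its list of requested events.
        'systems' and 'events' are parallel lists; the name 'all'
        stands for every event in the corresponding system.
    """
    pairs = []
    for system, names in zip(systems, events):
        for name in names:
            pairs.append((system, name))
    return pairs

# The selection used in the example above.
selection = expand_event_selection(systems=['sched', 'irq'],
                                   events=[['sched_switch'], ['all']])
```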
-- 
2.27.0


^ permalink raw reply	[flat|nested] 13+ messages in thread

* [PATCH v4 07/11] trace-cruncher: Add Makefile
  2021-07-07 13:21 [PATCH v4 00/11] Build trace-cruncher as Python pakage Yordan Karadzhov (VMware)
                   ` (6 preceding siblings ...)
  2021-07-07 13:21 ` [PATCH v4 06/11] trace-cruncher: Add ftracepy example Yordan Karadzhov (VMware)
@ 2021-07-07 13:21 ` Yordan Karadzhov (VMware)
  2021-07-07 13:21 ` [PATCH v4 08/11] trace-cruncher: Update README.md Yordan Karadzhov (VMware)
                   ` (3 subsequent siblings)
  11 siblings, 0 replies; 13+ messages in thread
From: Yordan Karadzhov (VMware) @ 2021-07-07 13:21 UTC (permalink / raw)
  To: linux-trace-devel; +Cc: Yordan Karadzhov (VMware)

This simplifies the build procedure and makes it more intuitive.

Signed-off-by: Yordan Karadzhov (VMware) <y.karadz@gmail.com>
---
 Makefile | 33 +++++++++++++++++++++++++++++++++
 1 file changed, 33 insertions(+)
 create mode 100644 Makefile

diff --git a/Makefile b/Makefile
new file mode 100644
index 0000000..a509811
--- /dev/null
+++ b/Makefile
@@ -0,0 +1,33 @@
+#
+# SPDX-License-Identifier: GPL-2.0
+#
+# Copyright 2019 VMware Inc, Yordan Karadzhov (VMware) <y.karadz@gmail.com>
+#
+
+UID := $(shell id -u)
+
+CYAN	:= '\e[36m'
+PURPLE	:= '\e[35m'
+NC	:= '\e[0m'
+
+all:
+	@ echo ${CYAN}Building trace-cruncher:${NC};
+	python3 setup.py build
+
+clean:
+	rm -f src/npdatawrapper.c
+	rm -rf build
+
+install:
+	@ echo ${CYAN}Installing trace-cruncher:${NC};
+	python3 setup.py install --record install_manifest.txt
+
+uninstall:
+	@ if [ $(UID) -ne 0 ]; then \
+	echo ${PURPLE}Permission denied${NC} 1>&2; \
+	else \
+	echo ${CYAN}Uninstalling trace-cruncher:${NC}; \
+	xargs rm -v < install_manifest.txt; \
+	rm -rfv dist tracecruncher.egg-info; \
+	rm -fv install_manifest.txt; \
+	fi
-- 
2.27.0
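The `install`/`uninstall` pair above relies on `setup.py install --record install_manifest.txt`, which writes one installed path per line; `uninstall` then feeds that manifest to `xargs rm`. A hedged Python sketch of the same manifest-based removal pattern (the function name is hypothetical; the demo operates only on a throwaway temporary directory):

```python
import os
import tempfile

def uninstall_from_manifest(manifest_path):
    """Remove each installed file listed in the manifest (one path per
    line), mirroring the Makefile's 'xargs rm -v < install_manifest.txt',
    then remove the manifest itself as 'rm -fv install_manifest.txt' does."""
    removed = []
    with open(manifest_path) as manifest:
        for line in manifest:
            path = line.strip()
            if path and os.path.isfile(path):
                os.remove(path)
                removed.append(path)
    os.remove(manifest_path)
    return removed

# Demonstrate on a throwaway "installation".
with tempfile.TemporaryDirectory() as tmp:
    installed = [os.path.join(tmp, name) for name in ('a.py', 'b.so')]
    for path in installed:
        open(path, 'w').close()
    manifest = os.path.join(tmp, 'install_manifest.txt')
    with open(manifest, 'w') as f:
        f.write('\n'.join(installed) + '\n')
    gone = uninstall_from_manifest(manifest)
    assert gone == installed and not os.path.exists(manifest)
```

Recording a manifest at install time is what makes the uninstall target possible at all, since `setup.py install` has no built-in reverse operation.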


^ permalink raw reply	[flat|nested] 13+ messages in thread

* [PATCH v4 08/11] trace-cruncher: Update README.md
  2021-07-07 13:21 [PATCH v4 00/11] Build trace-cruncher as Python pakage Yordan Karadzhov (VMware)
                   ` (7 preceding siblings ...)
  2021-07-07 13:21 ` [PATCH v4 07/11] trace-cruncher: Add Makefile Yordan Karadzhov (VMware)
@ 2021-07-07 13:21 ` Yordan Karadzhov (VMware)
  2021-07-07 13:21 ` [PATCH v4 09/11] trace-cruncher: Remove all leftover files Yordan Karadzhov (VMware)
                   ` (2 subsequent siblings)
  11 siblings, 0 replies; 13+ messages in thread
From: Yordan Karadzhov (VMware) @ 2021-07-07 13:21 UTC (permalink / raw)
  To: linux-trace-devel; +Cc: Yordan Karadzhov (VMware)

Building instructions are updated to properly describe the
refactored version and the installation of all third-party
dependencies.

Signed-off-by: Yordan Karadzhov (VMware) <y.karadz@gmail.com>
---
 README.md | 84 +++++++++++++++++++++++++++++++++++--------------------
 1 file changed, 54 insertions(+), 30 deletions(-)

diff --git a/README.md b/README.md
index c5121ab..9a3696c 100644
--- a/README.md
+++ b/README.md
@@ -4,68 +4,92 @@
 
 ## Overview
 
-The Trace-Cruncher project aims to provide an interface between the existing instrumentation for collection and visualization of tracing data from the Linux kernel and the broad and very well developed ecosystem of instruments for data analysis available in Python. The interface will be based on NumPy.
+The Trace-Cruncher project aims to provide an interface between the existing instrumentation for collection and visualization of tracing data from the Linux kernel and the broad and very well developed ecosystem of instruments for data analysis available in Python. The interface is based on NumPy.
 
-NumPy implements an efficient multi-dimensional container of generic data and uses strong typing in order to provide fast data processing in Python. The  Trace-Cruncher will allow for sophisticated analysis of kernel tracing data via scripts, but it will also opens the door for exposing the kernel tracing data to the instruments provided by the scientific toolkit of Python like MatPlotLib, Stats, Scikit-Learn and even to the nowadays most popular frameworks for Machine Learning like TensorFlow and PyTorch. The Trace-Cruncher is strongly coupled to the KernelShark project and is build on top of the C API of libkshark.
+NumPy implements an efficient multi-dimensional container of generic data and uses strong typing in order to provide fast data processing in Python. The Trace-Cruncher allows for sophisticated analysis of kernel tracing data via scripts, but it also opens the door for exposing the kernel tracing data to the instruments provided by the scientific toolkit of Python like MatPlotLib, Stats, Scikit-Learn and even to the most popular Machine Learning frameworks like TensorFlow and PyTorch. The Trace-Cruncher is strongly coupled to the KernelShark project and is built on top of the C API of libkshark.
 
 ## Try it out
 
 ### Prerequisites
 
 Trace-Cruncher has the following external dependencies:
-  trace-cmd / KernelShark, Json-C, Cython, NumPy, MatPlotLib.
+  libtraceevent, libtracefs, KernelShark, Json-C, Cython, NumPy.
 
-1.1 In order to install the packages on Ubuntu do the following:
+1.1 In order to install all packages on Ubuntu do the following:
 
-    sudo apt-get install libjson-c-dev libpython3-dev cython3 -y
+    > sudo apt-get update
 
-    sudo apt-get install python3-numpy python3-matplotlib -y
+    > sudo apt-get install build-essential git cmake libjson-c-dev -y
 
-1.2 In order to install the packages on Fedora, as root do the following:
+    > sudo apt-get install libpython3-dev cython3 python3-numpy python3-pip -y
 
-    dnf install json-c-devel python3-devel python3-Cython -y
+    > sudo pip3 install --system pkgconfig GitPython
 
-    dnf install python3-numpy python3-matplotlib -y
+1.2 In order to install all packages on Fedora, as root do the following:
 
-2. In order to get the proper version of KernelShark / trace-cmd do the
-following:
+    > dnf install gcc gcc-c++ git cmake json-c-devel -y
 
-    git clone git://git.kernel.org/pub/scm/utils/trace-cmd/trace-cmd.git --branch=kernelshark-v1.1
+    > dnf install python3-devel python3-Cython python3-numpy python3-pip -y
 
-or download a tarball from here:
-https://git.kernel.org/pub/scm/utils/trace-cmd/trace-cmd.git/snapshot/trace-cmd-kernelshark-v1.1.tar.gz
+    > sudo pip3 install --system pkgconfig GitPython
 
-### Build & Run
 
-1. Patch trace-cmd / KernelShark:
+2. In order to install all third-party libraries, do the following:
+
+    > git clone https://git.kernel.org/pub/scm/libs/libtrace/libtraceevent.git/
+
+    > cd libtraceevent
+
+    > make
+
+    > sudo make install
+
+    > cd ..
+
+
+    > git clone https://git.kernel.org/pub/scm/libs/libtrace/libtracefs.git/
+
+    > cd libtracefs
 
-    cd path/to/trace-cmd/
+    > make
 
-    git am ../path/to/trace-cruncher/0001-kernel-shark-Add-_DEVEL-build-flag.patch
+    > sudo make install
 
-    git am ../path/to/trace-cruncher/0002-kernel-shark-Add-reg_pid-plugin.patch
+    > cd ..
 
-2. Install trace-cmd:
 
-    make
+    > git clone https://git.kernel.org/pub/scm/utils/trace-cmd/trace-cmd.git
 
-    sudo make install_libs
+    > cd trace-cmd
 
-3. Install KernelShark:
+    > make
 
-    cd kernel-shark/build
+    > sudo make install_libs
 
-    cmake -D_DEVEL=1 ../
+    > cd ..
 
-    make
 
-    sudo make install
+    > git clone https://github.com/yordan-karadzhov/kernel-shark-v2.beta.git kernel-shark
+
+    > cd kernel-shark/build
+
+    > cmake ..
+
+    > make
+
+    > sudo make install
+
+    > cd ../..
+
+### Build & Run
+
+Installing trace-cruncher is very simple. After downloading the source code, you just have to run:
 
-4. Build the NumPy API itself:
+     > cd trace-cruncher
 
-    cd path/to/trace-cruncher
+     > make
 
-    ./np_setup.py build_ext -i
+     > sudo make install
 
 ## Documentation
 
-- 
2.27.0


^ permalink raw reply	[flat|nested] 13+ messages in thread

* [PATCH v4 09/11] trace-cruncher: Remove all leftover files.
  2021-07-07 13:21 [PATCH v4 00/11] Build trace-cruncher as Python pakage Yordan Karadzhov (VMware)
                   ` (8 preceding siblings ...)
  2021-07-07 13:21 ` [PATCH v4 08/11] trace-cruncher: Update README.md Yordan Karadzhov (VMware)
@ 2021-07-07 13:21 ` Yordan Karadzhov (VMware)
  2021-07-07 13:21 ` [PATCH v4 10/11] trace-cruncher: Add testing Yordan Karadzhov (VMware)
  2021-07-07 13:21 ` [PATCH v4 11/11] trace-cruncher: Add github workflow for CI testing Yordan Karadzhov (VMware)
  11 siblings, 0 replies; 13+ messages in thread
From: Yordan Karadzhov (VMware) @ 2021-07-07 13:21 UTC (permalink / raw)
  To: linux-trace-devel; +Cc: Yordan Karadzhov (VMware)

This patch completes the refactoring of trace-cruncher into a Python
module. All obsoleted source files are removed.

Signed-off-by: Yordan Karadzhov (VMware) <y.karadz@gmail.com>
---
 0001-kernel-shark-Add-_DEVEL-build-flag.patch |  90 -----
 0002-kernel-shark-Add-reg_pid-plugin.patch    | 231 -----------
 clean.sh                                      |   6 -
 examples/ksharksetup.py                       |  24 --
 libkshark-py.c                                | 224 -----------
 libkshark_wrapper.pyx                         | 361 ------------------
 np_setup.py                                   |  90 -----
 7 files changed, 1026 deletions(-)
 delete mode 100644 0001-kernel-shark-Add-_DEVEL-build-flag.patch
 delete mode 100644 0002-kernel-shark-Add-reg_pid-plugin.patch
 delete mode 100755 clean.sh
 delete mode 100644 examples/ksharksetup.py
 delete mode 100644 libkshark-py.c
 delete mode 100644 libkshark_wrapper.pyx
 delete mode 100755 np_setup.py

diff --git a/0001-kernel-shark-Add-_DEVEL-build-flag.patch b/0001-kernel-shark-Add-_DEVEL-build-flag.patch
deleted file mode 100644
index ddd3fd4..0000000
--- a/0001-kernel-shark-Add-_DEVEL-build-flag.patch
+++ /dev/null
@@ -1,90 +0,0 @@
-From 6c9e3b3f29c8af4780bb46313c3af73fb5d852c7 Mon Sep 17 00:00:00 2001
-From: "Yordan Karadzhov (VMware)" <y.karadz@gmail.com>
-Date: Fri, 20 Sep 2019 14:31:15 +0300
-Subject: [PATCH 1/2] kernel-shark: Add _DEVEL build flag
-
-KernelShark can be built with -D_DEVEL=1 as a command-line argument
-for Cmake. In this case the headers of the libraries will be installed
-as well and a symbolic link that points to the version of the library
-being installed will be created.
-
-Signed-off-by: Yordan Karadzhov (VMware) <y.karadz@gmail.com>
----
- kernel-shark/README             |  3 +++
- kernel-shark/src/CMakeLists.txt | 33 +++++++++++++++++++++++++++++++++
- 2 files changed, 36 insertions(+)
-
-diff --git a/kernel-shark/README b/kernel-shark/README
-index 6c360bb..0f14212 100644
---- a/kernel-shark/README
-+++ b/kernel-shark/README
-@@ -96,6 +96,9 @@ the dialog will derive the absolut path to the trace-cmd executable from
- 
- If no build types is specified, the type will be "RelWithDebInfo".
- 
-+2.1.4 In order to install a development version (including headers e.t.c) add
-+-D_DEVEL=1 as a CMake Command-Line option.
-+
- Examples:
- 
-     cmake -D_DOXYGEN_DOC=1 -D_INSTALL_PREFIX=/usr ../
-diff --git a/kernel-shark/src/CMakeLists.txt b/kernel-shark/src/CMakeLists.txt
-index e20a030..305840b 100644
---- a/kernel-shark/src/CMakeLists.txt
-+++ b/kernel-shark/src/CMakeLists.txt
-@@ -1,5 +1,13 @@
- message("\n src ...")
- 
-+macro(install_symlink filepath sympath)
-+    install(CODE "execute_process(COMMAND ${CMAKE_COMMAND} -E create_symlink ${filepath} ${sympath})")
-+    install(CODE "LIST(APPEND CMAKE_INSTALL_MANIFEST_FILES ${sympath})")
-+    install(CODE "message(\"-- Created symlink: ${sympath} -> ${filepath}\")")
-+endmacro(install_symlink)
-+
-+set(KS_INCLUDS_DESTINATION "${_INSTALL_PREFIX}/include/${KS_APP_NAME}")
-+
- message(STATUS "libkshark")
- add_library(kshark SHARED libkshark.c
-                           libkshark-model.c
-@@ -16,6 +24,19 @@ set_target_properties(kshark  PROPERTIES SUFFIX	".so.${KS_VERSION_STRING}")
- 
- install(TARGETS kshark LIBRARY DESTINATION ${_INSTALL_PREFIX}/lib/${KS_APP_NAME})
- 
-+if (_DEVEL)
-+
-+    install_symlink("libkshark.so.${KS_VERSION_STRING}"
-+                    "${_INSTALL_PREFIX}/lib/${KS_APP_NAME}/libkshark.so")
-+
-+    install(FILES "${KS_DIR}/src/libkshark.h"
-+                  "${KS_DIR}/src/libkshark-plugin.h"
-+                  "${KS_DIR}/src/libkshark-model.h"
-+            DESTINATION ${KS_INCLUDS_DESTINATION}
-+            COMPONENT devel)
-+
-+endif (_DEVEL)
-+
- if (OPENGL_FOUND AND GLUT_FOUND)
- 
-     message(STATUS "libkshark-plot")
-@@ -30,6 +51,18 @@ if (OPENGL_FOUND AND GLUT_FOUND)
- 
-     install(TARGETS kshark-plot LIBRARY DESTINATION ${_INSTALL_PREFIX}/lib/${KS_APP_NAME})
- 
-+    if (_DEVEL)
-+
-+        install_symlink("libkshark-plot.so.${KS_VERSION_STRING}"
-+                        "${_INSTALL_PREFIX}/lib/${KS_APP_NAME}/libkshark-plot.so")
-+
-+        install(FILES "${KS_DIR}/src/KsPlotTools.hpp"
-+                      "${KS_DIR}/src/libkshark-plot.h"
-+                DESTINATION ${KS_INCLUDS_DESTINATION}
-+                COMPONENT devel)
-+
-+    endif (_DEVEL)
-+
- endif (OPENGL_FOUND AND GLUT_FOUND)
- 
- if (Qt5Widgets_FOUND AND Qt5Network_FOUND)
--- 
-2.20.1
-
diff --git a/0002-kernel-shark-Add-reg_pid-plugin.patch b/0002-kernel-shark-Add-reg_pid-plugin.patch
deleted file mode 100644
index 146e3e6..0000000
--- a/0002-kernel-shark-Add-reg_pid-plugin.patch
+++ /dev/null
@@ -1,231 +0,0 @@
-From b3efcb6368bc7f70a23e156dce6c58d09953889a Mon Sep 17 00:00:00 2001
-From: "Yordan Karadzhov (VMware)" <y.karadz@gmail.com>
-Date: Wed, 9 Oct 2019 16:57:27 +0300
-Subject: [PATCH 2/2] kernel-shark: Add "reg_pid" plugin
-
-"reg_pid" plugin is a simplified version of the "sched_events" plugin
-that makes sure that all tasks presented in the data are registered.
-All other functionalities of the "sched_events" plugin are removed.
-"reg_pid" plugin will be used by the NumPy interface (Trace-Cruncher).
-
-Signed-off-by: Yordan Karadzhov (VMware) <y.karadz@gmail.com>
----
- kernel-shark/src/plugins/CMakeLists.txt |   5 +-
- kernel-shark/src/plugins/reg_pid.c      | 189 ++++++++++++++++++++++++
- 2 files changed, 193 insertions(+), 1 deletion(-)
- create mode 100644 kernel-shark/src/plugins/reg_pid.c
-
-diff --git a/kernel-shark/src/plugins/CMakeLists.txt b/kernel-shark/src/plugins/CMakeLists.txt
-index 6c77179..bf69945 100644
---- a/kernel-shark/src/plugins/CMakeLists.txt
-+++ b/kernel-shark/src/plugins/CMakeLists.txt
-@@ -27,7 +27,10 @@ BUILD_PLUGIN(NAME missed_events
-              SOURCE missed_events.c MissedEvents.cpp)
- list(APPEND PLUGIN_LIST "missed_events default") # This plugin will be loaded by default
- 
--install(TARGETS sched_events missed_events
-+BUILD_PLUGIN(NAME reg_pid
-+             SOURCE reg_pid.c)
-+
-+install(TARGETS sched_events missed_events reg_pid
-         LIBRARY DESTINATION ${KS_PLUGIN_INSTALL_PREFIX})
- 
- set(PLUGINS ${PLUGIN_LIST} PARENT_SCOPE)
-diff --git a/kernel-shark/src/plugins/reg_pid.c b/kernel-shark/src/plugins/reg_pid.c
-new file mode 100644
-index 0000000..4116dd8
---- /dev/null
-+++ b/kernel-shark/src/plugins/reg_pid.c
-@@ -0,0 +1,189 @@
-+// SPDX-License-Identifier: LGPL-2.1
-+
-+/*
-+ * Copyright (C) 2018 VMware Inc, Yordan Karadzhov <y.karadz@gmail.com>
-+ */
-+
-+/**
-+ *  @file    reg_pid.c
-+ *  @brief   Defines a callback function for Sched events used to registers the
-+ *	     "next" task (if not registered already).
-+ */
-+
-+// C
-+#include <stdlib.h>
-+#include <stdio.h>
-+#include <assert.h>
-+
-+// KernelShark
-+#include "libkshark.h"
-+
-+/** Structure representing a plugin-specific context. */
-+struct plugin_pid_reg_context {
-+	/** Page event used to parse the page. */
-+	struct tep_handle	*pevent;
-+
-+	/** Pointer to the sched_switch_event object. */
-+	struct tep_event	*sched_switch_event;
-+
-+	/** Pointer to the sched_switch_next_field format descriptor. */
-+	struct tep_format_field	*sched_switch_next_field;
-+
-+	/** Pointer to the sched_switch_comm_field format descriptor. */
-+	struct tep_format_field	*sched_switch_comm_field;
-+};
-+
-+/** Plugin context instance. */
-+struct plugin_pid_reg_context *plugin_pid_reg_context_handler = NULL;
-+
-+static void plugin_free_context(struct plugin_pid_reg_context *plugin_ctx)
-+{
-+	if (!plugin_ctx)
-+		return;
-+
-+	free(plugin_ctx);
-+}
-+
-+static bool plugin_pid_reg_init_context(struct kshark_context *kshark_ctx)
-+{
-+	struct plugin_pid_reg_context *plugin_ctx;
-+	struct tep_event *event;
-+
-+	/* No context should exist when we initialize the plugin. */
-+	assert(plugin_pid_reg_context_handler == NULL);
-+
-+	if (!kshark_ctx->pevent)
-+		return false;
-+
-+	plugin_pid_reg_context_handler =
-+		calloc(1, sizeof(*plugin_pid_reg_context_handler));
-+	if (!plugin_pid_reg_context_handler) {
-+		fprintf(stderr,
-+			"Failed to allocate memory for plugin_pid_reg_context.\n");
-+		return false;
-+	}
-+
-+	plugin_ctx = plugin_pid_reg_context_handler;
-+	plugin_ctx->pevent = kshark_ctx->pevent;
-+
-+	event = tep_find_event_by_name(plugin_ctx->pevent,
-+				       "sched", "sched_switch");
-+	if (!event) {
-+		plugin_free_context(plugin_ctx);
-+		plugin_pid_reg_context_handler = NULL;
-+
-+		return false;
-+	}
-+
-+	plugin_ctx->sched_switch_event = event;
-+
-+	plugin_ctx->sched_switch_next_field =
-+		tep_find_any_field(event, "next_pid");
-+
-+	plugin_ctx->sched_switch_comm_field =
-+		tep_find_field(event, "next_comm");
-+
-+	return true;
-+}
-+
-+/**
-+ * @brief Get the Process Id of the next scheduled task.
-+ *
-+ * @param record: Input location for a sched_switch record.
-+ */
-+int plugin_get_next_pid(struct tep_record *record)
-+{
-+	struct plugin_pid_reg_context *plugin_ctx =
-+		plugin_pid_reg_context_handler;
-+	unsigned long long val;
-+	int ret;
-+
-+	ret = tep_read_number_field(plugin_ctx->sched_switch_next_field,
-+				    record->data, &val);
-+
-+	return ret ? : val;
-+}
-+
-+static void plugin_register_command(struct kshark_context *kshark_ctx,
-+				    struct tep_record *record,
-+				    int pid)
-+{
-+	struct plugin_pid_reg_context *plugin_ctx =
-+		plugin_pid_reg_context_handler;
-+	const char *comm;
-+
-+	if (!plugin_ctx->sched_switch_comm_field)
-+		return;
-+
-+	comm = record->data + plugin_ctx->sched_switch_comm_field->offset;
-+	/*
-+	 * TODO: The retrieve of the name of the command above needs to be
-+	 * implemented as a wrapper function in libtracevent.
-+	 */
-+
-+	if (!tep_is_pid_registered(kshark_ctx->pevent, pid))
-+			tep_register_comm(kshark_ctx->pevent, comm, pid);
-+}
-+
-+static void plugin_pid_reg_action(struct kshark_context *kshark_ctx,
-+				  struct tep_record *rec,
-+				  struct kshark_entry *entry)
-+{
-+	int pid = plugin_get_next_pid(rec);
-+	if (pid >= 0)
-+		plugin_register_command(kshark_ctx, rec, pid);
-+}
-+
-+static void nop_action(struct kshark_cpp_argv *argv, int val, int action)
-+{}
-+
-+static int plugin_pid_reg_init(struct kshark_context *kshark_ctx)
-+{
-+	struct plugin_pid_reg_context *plugin_ctx;
-+
-+	if (!plugin_pid_reg_init_context(kshark_ctx))
-+		return 0;
-+
-+	plugin_ctx = plugin_pid_reg_context_handler;
-+
-+	kshark_register_event_handler(&kshark_ctx->event_handlers,
-+				      plugin_ctx->sched_switch_event->id,
-+				      plugin_pid_reg_action,
-+				      nop_action);
-+
-+	return 1;
-+}
-+
-+static int plugin_pid_reg_close(struct kshark_context *kshark_ctx)
-+{
-+	struct plugin_pid_reg_context *plugin_ctx;
-+
-+	if (!plugin_pid_reg_context_handler)
-+		return 0;
-+
-+	plugin_ctx = plugin_pid_reg_context_handler;
-+
-+	kshark_unregister_event_handler(&kshark_ctx->event_handlers,
-+					plugin_ctx->sched_switch_event->id,
-+					plugin_pid_reg_action,
-+					nop_action);
-+
-+	plugin_free_context(plugin_ctx);
-+	plugin_pid_reg_context_handler = NULL;
-+
-+	return 1;
-+}
-+
-+/** Load this plugin. */
-+int KSHARK_PLUGIN_INITIALIZER(struct kshark_context *kshark_ctx)
-+{
-+// 	printf("--> pid_reg init\n");
-+	return plugin_pid_reg_init(kshark_ctx);
-+}
-+
-+/** Unload this plugin. */
-+int KSHARK_PLUGIN_DEINITIALIZER(struct kshark_context *kshark_ctx)
-+{
-+// 	printf("<-- pid reg close\n");
-+	return plugin_pid_reg_close(kshark_ctx);
-+}
--- 
-2.20.1
-
diff --git a/clean.sh b/clean.sh
deleted file mode 100755
index a739b88..0000000
--- a/clean.sh
+++ /dev/null
@@ -1,6 +0,0 @@
-#!/bin/bash
-
-rm libkshark_wrapper.c
-rm ksharkpy.cpython-3*.so
-rm -rf build/
-rm -rf examples/__pycache__/
diff --git a/examples/ksharksetup.py b/examples/ksharksetup.py
deleted file mode 100644
index 86729e3..0000000
--- a/examples/ksharksetup.py
+++ /dev/null
@@ -1,24 +0,0 @@
-#!/usr/bin/env python3
-
-"""
-SPDX-License-Identifier: LGPL-2.1
-
-Copyright 2019 VMware Inc, Yordan Karadzhov <ykaradzhov@vmware.com>
-"""
-
-import os
-import sys
-
-def setup():
-    os.chdir(os.path.dirname(__file__))
-    path = os.getcwd() + '/..'
-    sys.path.append(path)
-
-    if 'LD_LIBRARY_PATH' not in os.environ:
-        os.environ['LD_LIBRARY_PATH'] = '/usr/local/lib/kernelshark:/usr/local/lib/traceevent:/usr/local/lib/trace-cmd'
-        try:
-            os.execv(sys.argv[0], sys.argv)
-        except e:
-            print('Failed re-exec:', e)
-            sys.exit(1)
-
diff --git a/libkshark-py.c b/libkshark-py.c
deleted file mode 100644
index 8b39bae..0000000
--- a/libkshark-py.c
+++ /dev/null
@@ -1,224 +0,0 @@
-// SPDX-License-Identifier: LGPL-2.1
-
-/*
- * Copyright 2019 VMware Inc, Yordan Karadzhov <ykaradzhov@vmware.com>
- */
-
- /**
-  *  @file    libkshark-py.c
-  *  @brief   Python API for processing of FTRACE (trace-cmd) data.
-  */
-
-// KernelShark
-#include "kernelshark/libkshark.h"
-#include "kernelshark/libkshark-model.h"
-
-bool kspy_open(const char *fname)
-{
-	struct kshark_context *kshark_ctx = NULL;
-
-	if (!kshark_instance(&kshark_ctx))
-		return false;
-
-	return kshark_open(kshark_ctx, fname);
-}
-
-void kspy_close(void)
-{
-	struct kshark_context *kshark_ctx = NULL;
-
-	if (!kshark_instance(&kshark_ctx))
-		return;
-
-	kshark_close(kshark_ctx);
-	kshark_free(kshark_ctx);
-}
-
-static int compare(const void * a, const void * b)
-{
-	return *(int*)a - *(int*)b;
-}
-
-size_t kspy_get_tasks(int **pids, char ***names)
-{
-	struct kshark_context *kshark_ctx = NULL;
-	const char *comm;
-	ssize_t i, n;
-	int ret;
-
-	if (!kshark_instance(&kshark_ctx))
-		return 0;
-
-	n = kshark_get_task_pids(kshark_ctx, pids);
-	if (n == 0)
-		return 0;
-
-	qsort(*pids, n, sizeof(**pids), compare);
-
-	*names = calloc(n, sizeof(char*));
-	if (!(*names))
-		goto fail;
-
-	for (i = 0; i < n; ++i) {
-		comm = tep_data_comm_from_pid(kshark_ctx->pevent, (*pids)[i]);
-		ret = asprintf(&(*names)[i], "%s", comm);
-		if (ret < 1)
-			goto fail;
-	}
-
-	return n;
-
-  fail:
-	free(*pids);
-	free(*names);
-	return 0;
-}
-
-size_t kspy_trace2matrix(uint64_t **offset_array,
-			 uint16_t **cpu_array,
-			 uint64_t **ts_array,
-			 uint16_t **pid_array,
-			 int **event_array)
-{
-	struct kshark_context *kshark_ctx = NULL;
-	size_t total = 0;
-
-	if (!kshark_instance(&kshark_ctx))
-		return false;
-
-	total = kshark_load_data_matrix(kshark_ctx, offset_array,
-					cpu_array,
-					ts_array,
-					pid_array,
-					event_array);
-
-	return total;
-}
-
-int kspy_get_event_id(const char *sys, const char *evt)
-{
-	struct kshark_context *kshark_ctx = NULL;
-	struct tep_event *event;
-
-	if (!kshark_instance(&kshark_ctx))
-		return -1;
-
-	event = tep_find_event_by_name(kshark_ctx->pevent, sys, evt);
-
-	return event->id;
-}
-
-unsigned long long kspy_read_event_field(uint64_t offset,
-					 int id, const char *field)
-{
-	struct kshark_context *kshark_ctx = NULL;
-	struct tep_format_field *evt_field;
-	struct tep_record *record;
-	struct tep_event *event;
-	unsigned long long val;
-	int ret;
-
-	if (!kshark_instance(&kshark_ctx))
-		return 0;
-
-	event = tep_find_event(kshark_ctx->pevent, id);
-	if (!event)
-		return 0;
-
-	evt_field = tep_find_any_field(event, field);
-	if (!evt_field)
-		return 0;
-
-	record = tracecmd_read_at(kshark_ctx->handle, offset, NULL);
-	if (!record)
-		return 0;
-
-	ret = tep_read_number_field(evt_field, record->data, &val);
-	free_record(record);
-
-	if (ret != 0)
-		return 0;
-
-	return val;
-}
-
-const char *kspy_get_function(unsigned long long addr)
-{
-	struct kshark_context *kshark_ctx = NULL;
-
-	if (!kshark_instance(&kshark_ctx))
-		return "";
-
-	return tep_find_function(kshark_ctx->pevent, addr);
-}
-
-void kspy_register_plugin(const char *plugin)
-{
-	struct kshark_context *kshark_ctx = NULL;
-	char *lib_file;
-	int n;
-
-	if (!kshark_instance(&kshark_ctx))
-		return;
-
-	n = asprintf(&lib_file, "%s/plugin-%s.so", KS_PLUGIN_DIR, plugin);
-	if (n > 0) {
-		kshark_register_plugin(kshark_ctx, lib_file);
-		kshark_handle_plugins(kshark_ctx, KSHARK_PLUGIN_INIT);
-		free(lib_file);
-	}
-}
-
-const char *kspy_map_instruction_address(int pid, unsigned long long proc_addr,
-					 unsigned long long *obj_addr)
-{
-	struct kshark_context *kshark_ctx = NULL;
-	struct tracecmd_proc_addr_map *mem_map;
-
-	*obj_addr = 0;
-	if (!kshark_instance(&kshark_ctx))
-		return "UNKNOWN";
-
-	mem_map = tracecmd_search_task_map(kshark_ctx->handle,
-					   pid, proc_addr);
-
-	if (!mem_map)
-		return "UNKNOWN";
-
-	*obj_addr = proc_addr - mem_map->start;
-
-	return mem_map->lib_name;
-}
-
-void kspy_new_session_file(const char *data_file, const char *session_file)
-{
-	struct kshark_context *kshark_ctx = NULL;
-	struct kshark_trace_histo histo;
-	struct kshark_config_doc *session;
-	struct kshark_config_doc *filters;
-	struct kshark_config_doc *markers;
-	struct kshark_config_doc *model;
-	struct kshark_config_doc *file;
-
-	if (!kshark_instance(&kshark_ctx))
-		return;
-
-	session = kshark_config_new("kshark.config.session",
-				    KS_CONFIG_JSON);
-
-	file = kshark_export_trace_file(data_file, KS_CONFIG_JSON);
-	kshark_config_doc_add(session, "Data", file);
-
-	filters = kshark_export_all_filters(kshark_ctx, KS_CONFIG_JSON);
-	kshark_config_doc_add(session, "Filters", filters);
-
-	ksmodel_init(&histo);
-	model = kshark_export_model(&histo, KS_CONFIG_JSON);
-	kshark_config_doc_add(session, "Model", model);
-
-	markers = kshark_config_new("kshark.config.markers", KS_CONFIG_JSON);
-	kshark_config_doc_add(session, "Markers", markers);
-
-	kshark_save_config_file(session_file, session);
-	kshark_free_config_doc(session);
-}
diff --git a/libkshark_wrapper.pyx b/libkshark_wrapper.pyx
deleted file mode 100644
index 1b75685..0000000
--- a/libkshark_wrapper.pyx
+++ /dev/null
@@ -1,361 +0,0 @@
-"""
-SPDX-License-Identifier: LGPL-2.1
-
-Copyright 2019 VMware Inc, Yordan Karadzhov <ykaradzhov@vmware.com>
-"""
-
-import ctypes
-
-# Import the Python-level symbols of numpy
-import numpy as np
-# Import the C-level symbols of numpy
-cimport numpy as np
-
-import json
-
-from libcpp cimport bool
-
-from libc.stdlib cimport free
-
-from cpython cimport PyObject, Py_INCREF
-
-
-cdef extern from 'stdint.h':
-    ctypedef unsigned short uint8_t
-    ctypedef unsigned short uint16_t
-    ctypedef unsigned long long uint64_t
-
-cdef extern from 'numpy/ndarraytypes.h':
-    int NPY_ARRAY_CARRAY
-
-# Declare all C functions we are going to call
-cdef extern from 'libkshark-py.c':
-    bool kspy_open(const char *fname)
-
-cdef extern from 'libkshark-py.c':
-    bool kspy_close()
-
-cdef extern from 'libkshark-py.c':
-    size_t kspy_trace2matrix(uint64_t **offset_array,
-                             uint8_t **cpu_array,
-                             uint64_t **ts_array,
-                             uint16_t **pid_array,
-                             int **event_array)
-
-cdef extern from 'libkshark-py.c':
-    int kspy_get_event_id(const char *sys, const char *evt)
-
-cdef extern from 'libkshark-py.c':
-    uint64_t kspy_read_event_field(uint64_t offset,
-                                   int event_id,
-                                   const char *field)
-
-cdef extern from 'libkshark-py.c':
-    ssize_t kspy_get_tasks(int **pids, char ***names)
-
-cdef extern from 'libkshark-py.c':
-    const char *kspy_get_function(unsigned long long addr)
-
-cdef extern from 'libkshark-py.c':
-    void kspy_register_plugin(const char *file)
-
-cdef extern from 'libkshark-py.c':
-    const char *kspy_map_instruction_address(int pid,
-					     unsigned long long proc_addr,
-					     unsigned long long *obj_addr)
-
-cdef extern from 'kernelshark/libkshark.h':
-    int KS_EVENT_OVERFLOW
-
-cdef extern from 'libkshark-py.c':
-    void kspy_new_session_file(const char *data_file,
-                               const char *session_file)
-
-EVENT_OVERFLOW = KS_EVENT_OVERFLOW
-
-# Numpy must be initialized!!!
-np.import_array()
-
-
-cdef class KsDataWrapper:
-    cdef int item_size
-    cdef int data_size
-    cdef int data_type
-    cdef void* data_ptr
-
-    cdef init(self,
-              int data_type,
-              int data_size,
-              int item_size,
-              void* data_ptr):
-        """ This initialization cannot be done in the constructor because
-            we use C-level arguments.
-        """
-        self.item_size = item_size
-        self.data_size = data_size
-        self.data_type = data_type
-        self.data_ptr = data_ptr
-
-    def __array__(self):
-        """ Here we use the __array__ method, that is called when numpy
-            tries to get an array from the object.
-        """
-        cdef np.npy_intp shape[1]
-        shape[0] = <np.npy_intp> self.data_size
-
-        ndarray = np.PyArray_New(np.ndarray,
-                                 1, shape,
-                                 self.data_type,
-                                 NULL,
-                                 self.data_ptr,
-                                 self.item_size,
-                                 NPY_ARRAY_CARRAY,
-                                 <object>NULL)
-
-        return ndarray
-
-    def __dealloc__(self):
-        """ Free the data. This is called by Python when all the references to
-            the object are gone.
-        """
-        free(<void*>self.data_ptr)
-
-
-def c_str2py(char *c_str):
-    """ String convertion C -> Python
-    """
-    return ctypes.c_char_p(c_str).value.decode('utf-8')
-
-
-def py_str2c(py_str):
-    """ String convertion Python -> C
-    """
-    return py_str.encode('utf-8')
-
-
-def open_file(fname):
-    """ Open a tracing data file.
-    """
-    return kspy_open(py_str2c(fname))
-
-
-def close():
-    """ Open the session file.
-    """
-    kspy_close()
-
-
-def read_event_field(offset, event_id, field):
-    """ Read the value of a specific field of the trace event.
-    """
-    cdef uint64_t v
-
-    v = kspy_read_event_field(offset, event_id, py_str2c(field))
-    return v
-
-
-def event_id(system, event):
-    """ Get the unique Id of the event
-    """
-    return kspy_get_event_id(py_str2c(system), py_str2c(event))
-
-
-def get_tasks():
-    """ Get a dictionary of all task's PIDs
-    """
-    cdef int *pids
-    cdef char **names
-    cdef int size = kspy_get_tasks(&pids, &names)
-
-    task_dict = {}
-
-    for i in range(0, size):
-        name = c_str2py(names[i])
-        pid_list = task_dict.get(name)
-
-        if pid_list is None:
-            pid_list = []
-
-        pid_list.append(pids[i])
-        task_dict.update({name : pid_list})
-
-    return task_dict
-
-def get_function(ip):
-    """ Get the name of the function from its ip
-    """
-    func = kspy_get_function(ip)
-    if func:
-        return c_str2py(kspy_get_function(ip))
-
-    return str("0x%x" %ip)
-
-def register_plugin(plugin):
-    """
-    """
-    kspy_register_plugin(py_str2c(plugin))
-
-def map_instruction_address(pid, address):
-    """
-    """
-    cdef unsigned long long obj_addr;
-    cdef const char* obj_file;
-    obj_file = kspy_map_instruction_address(pid, address, &obj_addr)
-
-    return {'obj_file' : c_str2py(obj_file), 'address' : obj_addr}
-
-def load_data(ofst_data=True, cpu_data=True,
-	      ts_data=True, pid_data=True,
-	      evt_data=True):
-    """ Python binding of the 'kshark_load_data_matrix' function that does not
-        copy the data. The input parameters can be used to avoid loading the
-        data from the unnecessary fields.
-    """
-    cdef uint64_t *ofst_c
-    cdef uint16_t *cpu_c
-    cdef uint64_t *ts_c
-    cdef uint16_t *pid_c
-    cdef int *evt_c
-
-    cdef np.ndarray ofst
-    cdef np.ndarray cpu
-    cdef np.ndarray ts
-    cdef np.ndarray pid
-    cdef np.ndarray evt
-
-    if not ofst_data:
-        ofst_c = NULL
-
-    if not cpu_data:
-        cpu_c = NULL
-
-    if not ts_data:
-        ts_c = NULL
-
-    if not pid_data:
-        pid_c = NULL
-
-    if not evt_data:
-        evt_c = NULL
-
-    data_dict = {}
-
-    # Call the C function
-    size = kspy_trace2matrix(&ofst_c, &cpu_c, &ts_c, &pid_c, &evt_c)
-
-    if ofst_data:
-        array_wrapper_ofst = KsDataWrapper()
-        array_wrapper_ofst.init(data_type=np.NPY_UINT64,
-                                item_size=0,
-                                data_size=size,
-                                data_ptr=<void *> ofst_c)
-
-
-        ofst = np.array(array_wrapper_ofst, copy=False)
-        ofst.base = <PyObject *> array_wrapper_ofst
-        data_dict.update({'offset': ofst})
-        Py_INCREF(array_wrapper_ofst)
-
-    if cpu_data:
-        array_wrapper_cpu = KsDataWrapper()
-        array_wrapper_cpu.init(data_type=np.NPY_UINT16,
-                               data_size=size,
-                               item_size=0,
-                               data_ptr=<void *> cpu_c)
-
-        cpu = np.array(array_wrapper_cpu, copy=False)
-        cpu.base = <PyObject *> array_wrapper_cpu
-        data_dict.update({'cpu': cpu})
-        Py_INCREF(array_wrapper_cpu)
-
-    if ts_data:
-        array_wrapper_ts = KsDataWrapper()
-        array_wrapper_ts.init(data_type=np.NPY_UINT64,
-                              data_size=size,
-                              item_size=0,
-                              data_ptr=<void *> ts_c)
-
-        ts = np.array(array_wrapper_ts, copy=False)
-        ts.base = <PyObject *> array_wrapper_ts
-        data_dict.update({'time': ts})
-        Py_INCREF(array_wrapper_ts)
-
-    if pid_data:
-        array_wrapper_pid = KsDataWrapper()
-        array_wrapper_pid.init(data_type=np.NPY_UINT16,
-                               data_size=size,
-                               item_size=0,
-                               data_ptr=<void *>pid_c)
-
-        pid = np.array(array_wrapper_pid, copy=False)
-        pid.base = <PyObject *> array_wrapper_pid
-        data_dict.update({'pid': pid})
-        Py_INCREF(array_wrapper_pid)
-
-    if evt_data:
-        array_wrapper_evt = KsDataWrapper()
-        array_wrapper_evt.init(data_type=np.NPY_INT,
-                               data_size=size,
-                               item_size=0,
-                               data_ptr=<void *>evt_c)
-
-        evt = np.array(array_wrapper_evt, copy=False)
-        evt.base = <PyObject *> array_wrapper_evt
-        data_dict.update({'event': evt})
-        Py_INCREF(array_wrapper_evt)
-
-    return data_dict
-
-def data_size(data):
-    """ Get the number of trace records.
-    """
-    if data['offset'] is not None:
-        return data['offset'].size
-
-    if data['cpu'] is not None:
-        return data['cpu'].size
-
-    if data['time'] is not None:
-        return data['time'].size
-
-    if data['pid'] is not None:
-        return data['pid'].size
-
-    if data['event'] is not None:
-        return data['event'].size
-
-    return 0
-
-def save_session(session, s):
-    """ Save a KernelShark session description to a JSON file.
-    """
-    s.seek(0)
-    json.dump(session, s, indent=4)
-    s.truncate()
-
-
-def new_session(fname, sname):
-    """ Generate and save a default KernelShark session description
-        file (JSON).
-    """
-    kspy_new_session_file(py_str2c(fname), py_str2c(sname))
-
-    with open(sname, 'r+') as s:
-        session = json.load(s)
-
-        session['Filters']['filter mask'] = 7
-        session['CPUPlots'] = []
-        session['TaskPlots'] = []
-        session['Splitter'] = [1, 1]
-        session['MainWindow'] = [1200, 800]
-        session['ViewTop'] = 0
-        session['ColorScheme'] = 0.75
-        session['Model']['bins'] = 1000
-
-        session['Markers']['markA'] = {}
-        session['Markers']['markA']['isSet'] = False
-        session['Markers']['markB'] = {}
-        session['Markers']['markB']['isSet'] = False
-        session['Markers']['Active'] = 'A'
-
-        save_session(session, s)
diff --git a/np_setup.py b/np_setup.py
deleted file mode 100755
index 40bb6fc..0000000
--- a/np_setup.py
+++ /dev/null
@@ -1,90 +0,0 @@
-#!/usr/bin/env python3
-
-"""
-SPDX-License-Identifier: LGPL-2.1
-
-Copyright 2019 VMware Inc, Yordan Karadzhov <ykaradzhov@vmware.com>
-"""
-
-import sys
-import getopt
-
-from Cython.Distutils import build_ext
-from numpy.distutils.misc_util import Configuration
-from numpy.distutils.core import setup
-
-def lib_dirs(argv):
-    """ Function used to retrieve the library paths.
-    """
-    kslibdir = ''
-    evlibdir = ''
-    trlibdir = ''
-
-    try:
-        opts, args = getopt.getopt(
-            argv, 'k:t:e:', ['kslibdir=',
-                             'trlibdir=',
-                             'evlibdir='])
-
-    except getopt.GetoptError:
-        sys.exit(2)
-
-    for opt, arg in opts:
-        if opt in ('-k', '--kslibdir'):
-            kslibdir = arg
-        elif opt in ('-t', '--trlibdir'):
-            trlibdir = arg
-        elif opt in ('-e', '--evlibdir'):
-            evlibdir = arg
-
-    cmd1 = 1
-    for i in range(len(sys.argv)):
-        if sys.argv[i] == 'build_ext':
-            cmd1 = i
-
-    sys.argv = sys.argv[:1] + sys.argv[cmd1:]
-
-    if kslibdir == '':
-        kslibdir = '/usr/local/lib/kernelshark'
-
-    if evlibdir == '':
-        evlibdir = '/usr/local/lib/traceevent'
-
-    if trlibdir == '':
-        trlibdir = '/usr/local/lib/trace-cmd/'
-
-    return [kslibdir, evlibdir, trlibdir]
-
-
-def configuration(parent_package='',
-                  top_path=None,
-                  libs=['kshark', 'tracecmd', 'traceevent', 'json-c'],
-                  libdirs=['.']):
-    """ Function used to build configuration.
-    """
-    config = Configuration('', parent_package, top_path)
-    config.add_extension('ksharkpy',
-                         sources=['libkshark_wrapper.pyx'],
-                         libraries=libs,
-                         define_macros=[('KS_PLUGIN_DIR','\"' + libdirs[0] + '/plugins\"')],
-                         library_dirs=libdirs,
-                         depends=['libkshark-py.c'],
-                         include_dirs=libdirs)
-
-    return config
-
-
-def main(argv):
-    # Retrieve third-party libraries.
-    libdirs = lib_dirs(sys.argv[1:])
-
-    # Retrieve the parameters of the configuration.
-    params = configuration(libdirs=libdirs).todict()
-    params['cmdclass'] = dict(build_ext=build_ext)
-
-    ## Building the extension.
-    setup(**params)
-
-
-if __name__ == '__main__':
-    main(sys.argv[1:])
-- 
2.27.0


^ permalink raw reply	[flat|nested] 13+ messages in thread

* [PATCH v4 10/11] trace-cruncher: Add testing
  2021-07-07 13:21 [PATCH v4 00/11] Build trace-cruncher as Python pakage Yordan Karadzhov (VMware)
                   ` (9 preceding siblings ...)
  2021-07-07 13:21 ` [PATCH v4 09/11] trace-cruncher: Remove all leftover files Yordan Karadzhov (VMware)
@ 2021-07-07 13:21 ` Yordan Karadzhov (VMware)
  2021-07-07 13:21 ` [PATCH v4 11/11] trace-cruncher: Add github workflow for CI testing Yordan Karadzhov (VMware)
  11 siblings, 0 replies; 13+ messages in thread
From: Yordan Karadzhov (VMware) @ 2021-07-07 13:21 UTC (permalink / raw)
  To: linux-trace-devel; +Cc: Yordan Karadzhov (VMware)

Add basic infrastructure for CI testing.
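
The tests are plain `unittest` cases, so the whole suite can be driven by
standard unittest discovery. A minimal runner sketch (a hypothetical helper,
not part of this patch; discovery sorts directory names, so the `0_`, `1_`,
`2_` prefixes keep the data-download stage ahead of the unit and integration
stages):

```python
import unittest

def build_suite(base='tests'):
    # Discover all test_*.py files under the given base directory.
    # Discovery walks entries in sorted order, so 0_get_data runs first
    # and fetches the trace files the later stages read.
    loader = unittest.TestLoader()
    return loader.discover(start_dir=base, pattern='test_*.py')

if __name__ == '__main__':
    unittest.TextTestRunner(verbosity=2).run(build_suite())
```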

Signed-off-by: Yordan Karadzhov (VMware) <y.karadz@gmail.com>
---
 tests/0_get_data/__init__.py                  |   0
 tests/0_get_data/test_get_data.py             |  26 +
 tests/1_unit/__init__.py                      |   0
 tests/1_unit/test_01_ftracepy_unit.py         | 471 ++++++++++++++++++
 tests/1_unit/test_02_datawrapper_unit.py      |  41 ++
 tests/1_unit/test_03_ksharkpy_unit.py         |  72 +++
 tests/2_integration/__init__.py               |   0
 .../test_01_ftracepy_integration.py           | 113 +++++
 .../test_03_ksharkpy_integration.py           |  25 +
 tests/__init__.py                             |   0
 10 files changed, 748 insertions(+)
 create mode 100644 tests/0_get_data/__init__.py
 create mode 100755 tests/0_get_data/test_get_data.py
 create mode 100644 tests/1_unit/__init__.py
 create mode 100644 tests/1_unit/test_01_ftracepy_unit.py
 create mode 100755 tests/1_unit/test_02_datawrapper_unit.py
 create mode 100755 tests/1_unit/test_03_ksharkpy_unit.py
 create mode 100644 tests/2_integration/__init__.py
 create mode 100755 tests/2_integration/test_01_ftracepy_integration.py
 create mode 100755 tests/2_integration/test_03_ksharkpy_integration.py
 create mode 100644 tests/__init__.py

diff --git a/tests/0_get_data/__init__.py b/tests/0_get_data/__init__.py
new file mode 100644
index 0000000..e69de29
diff --git a/tests/0_get_data/test_get_data.py b/tests/0_get_data/test_get_data.py
new file mode 100755
index 0000000..53decd0
--- /dev/null
+++ b/tests/0_get_data/test_get_data.py
@@ -0,0 +1,26 @@
+#!/usr/bin/env python3
+
+"""
+SPDX-License-Identifier: LGPL-2.1
+
+Copyright (C) 2021 VMware Inc, Yordan Karadzhov (VMware) <y.karadz@gmail.com>
+"""
+
+import os
+import sys
+import shutil
+import unittest
+import git
+
+class GetTestData(unittest.TestCase):
+    def test_get_data(self):
+        data_dir = 'testdata'
+        if os.path.exists(data_dir) and os.path.isdir(data_dir):
+            shutil.rmtree(data_dir)
+
+        github_repo = 'https://github.com/yordan-karadzhov/kernel-shark_testdata.git'
+        repo = git.Repo.clone_from(github_repo, data_dir)
+
+
+if __name__ == '__main__':
+    unittest.main()
diff --git a/tests/1_unit/__init__.py b/tests/1_unit/__init__.py
new file mode 100644
index 0000000..e69de29
diff --git a/tests/1_unit/test_01_ftracepy_unit.py b/tests/1_unit/test_01_ftracepy_unit.py
new file mode 100644
index 0000000..e11c034
--- /dev/null
+++ b/tests/1_unit/test_01_ftracepy_unit.py
@@ -0,0 +1,471 @@
+#!/usr/bin/env python3
+
+"""
+SPDX-License-Identifier: LGPL-2.1
+
+Copyright (C) 2021 VMware Inc, Yordan Karadzhov (VMware) <y.karadz@gmail.com>
+"""
+
+import os
+import sys
+import unittest
+import tracecruncher.ftracepy as ft
+
+instance_name = 'test_instance1'
+another_instance_name = 'test_instance2'
+
+class InstanceTestCase(unittest.TestCase):
+    def test_dir(self):
+        tracefs_dir = ft.dir()
+        self.assertTrue(os.path.isdir(tracefs_dir))
+        instances_dir = tracefs_dir + '/instances/'
+        self.assertTrue(os.path.isdir(instances_dir))
+
+    def test_create_instance(self):
+        ft.create_instance(instance_name)
+        self.assertTrue(ft.is_tracing_ON(instance_name))
+        instances_dir = ft.dir() + '/instances/'
+        self.assertTrue(os.path.isdir(instances_dir + instance_name))
+
+        auto_inst = ft.create_instance(tracing_on=False)
+        self.assertFalse(ft.is_tracing_ON(auto_inst))
+        ft.destroy_instance(auto_inst)
+
+    def test_destroy_instance(self):
+        ft.destroy_instance(instance_name)
+        instances_dir = ft.dir() + '/instances/'
+        self.assertFalse(os.path.isdir(instances_dir + instance_name))
+
+        err = 'Unable to destroy trace instances'
+        with self.assertRaises(Exception) as context:
+            ft.destroy_instance(instance_name)
+        self.assertTrue(err in str(context.exception))
+
+        ft.create_instance(instance_name)
+        ft.create_instance(another_instance_name)
+        ft.destroy_all_instances()
+        self.assertFalse(os.path.isdir(instances_dir + instance_name))
+
+        ft.create_instance(instance_name)
+        ft.create_instance(another_instance_name)
+        ft.destroy_instance('all')
+        self.assertFalse(os.path.isdir(instances_dir + instance_name))
+
+    def test_get_all(self):
+        ft.create_instance(instance_name)
+        ft.create_instance(another_instance_name)
+        self.assertEqual(ft.get_all_instances(),
+                         [instance_name, another_instance_name])
+        ft.destroy_all_instances()
+
+    def test_instance_dir(self):
+        ft.create_instance(instance_name)
+        tracefs_dir = ft.dir()
+        instance_dir = tracefs_dir + '/instances/' + instance_name
+        self.assertEqual(instance_dir, ft.instance_dir(instance_name))
+        ft.destroy_all_instances()
+
+class PyTepTestCase(unittest.TestCase):
+    def test_init_local(self):
+        tracefs_dir = ft.dir()
+        tep = ft.tep_handle()
+        tep.init_local(tracefs_dir)
+
+        tep.init_local(dir=tracefs_dir, systems=['sched', 'irq'])
+
+        ft.create_instance(instance_name)
+        tracefs_dir = ft.instance_dir(instance_name)
+        tep.init_local(dir=tracefs_dir, systems=['sched', 'irq'])
+
+        err='function missing required argument \'dir\''
+        with self.assertRaises(Exception) as context:
+            tep.init_local(systems=['sched', 'irq'])
+        self.assertTrue(err in str(context.exception))
+
+        err='Failed to get local events from \'no_dir\''
+        with self.assertRaises(Exception) as context:
+            tep.init_local(dir='no_dir', systems=['sched', 'irq'])
+        self.assertTrue(err in str(context.exception))
+        ft.destroy_all_instances()
+
+    def test_get_event(self):
+        tracefs_dir = ft.dir()
+        tep = ft.tep_handle()
+        tep.init_local(tracefs_dir)
+        evt = tep.get_event(system='sched', name='sched_switch')
+
+
+class PyTepEventTestCase(unittest.TestCase):
+    def test_name(self):
+        tracefs_dir = ft.dir()
+        tep = ft.tep_handle()
+        tep.init_local(tracefs_dir)
+        evt = tep.get_event(system='sched', name='sched_switch')
+        self.assertEqual(evt.name(), 'sched_switch')
+
+    def test_field_names(self):
+        tracefs_dir = ft.dir()
+        tep = ft.tep_handle()
+        tep.init_local(tracefs_dir)
+        evt = tep.get_event(system='sched', name='sched_switch')
+        fields = evt.field_names()
+        self.assertEqual(fields, ['common_type',
+                                  'common_flags',
+                                  'common_preempt_count',
+                                  'common_pid',
+                                  'prev_comm',
+                                  'prev_pid',
+                                  'prev_prio',
+                                  'prev_state',
+                                  'next_comm',
+                                  'next_pid',
+                                  'next_prio'])
+
+
+class TracersTestCase(unittest.TestCase):
+    def test_available_tracers(self):
+        tracers = ft.available_tracers()
+        self.assertTrue(len(tracers) > 1)
+        self.assertTrue('function' in tracers)
+
+    def test_set_tracer(self):
+        ft.set_current_tracer(tracer='function')
+        ft.set_current_tracer(tracer='')
+
+        err = 'Tracer \'zero\' is not available.'
+        with self.assertRaises(Exception) as context:
+            ft.set_current_tracer(tracer='zero')
+        self.assertTrue(err in str(context.exception))
+
+
+class EventsTestCase(unittest.TestCase):
+    def test_available_systems(self):
+        systems = ft.available_event_systems()
+        self.assertTrue(len(systems) > 1)
+        self.assertTrue('sched' in systems)
+
+        ft.create_instance(instance_name)
+        systems = ft.available_event_systems(instance_name)
+        self.assertTrue(len(systems) > 1)
+        self.assertTrue('sched' in systems)
+
+        ft.destroy_all_instances()
+
+    def test_available_system_events(self):
+        events = ft.available_system_events(system='sched')
+        self.assertTrue(len(events) > 1)
+        self.assertTrue('sched_switch' in events)
+
+        ft.create_instance(instance_name)
+        events = ft.available_system_events(instance=instance_name,
+                                              system='sched')
+        self.assertTrue(len(events) > 1)
+        self.assertTrue('sched_switch' in events)
+
+        err = 'function missing required argument'
+        with self.assertRaises(Exception) as context:
+            ft.available_system_events(instance=instance_name)
+        self.assertTrue(err in str(context.exception))
+
+        ft.destroy_all_instances()
+
+    def test_enable_event(self):
+        ft.create_instance(instance_name)
+
+        ret = ft.event_is_enabled(instance=instance_name, system='all')
+        self.assertEqual(ret, '0')
+        ft.enable_event(instance=instance_name, system='all')
+        ret = ft.event_is_enabled(instance=instance_name, system='all')
+        self.assertEqual(ret, '1')
+        ft.disable_event(instance=instance_name, system='all')
+        ret = ft.event_is_enabled(instance=instance_name, system='all')
+        self.assertEqual(ret, '0')
+
+        ret = ft.event_is_enabled(instance=instance_name, event='all')
+        self.assertEqual(ret, '0')
+        ft.enable_event(instance=instance_name, event='all')
+        ret = ft.event_is_enabled(instance=instance_name, event='all')
+        self.assertEqual(ret, '1')
+        ft.disable_event(instance=instance_name, event='all')
+        ret = ft.event_is_enabled(instance=instance_name, event='all')
+        self.assertEqual(ret, '0')
+
+        ret = ft.event_is_enabled(instance=instance_name, system='sched')
+        self.assertEqual(ret, '0')
+        ft.enable_event(instance=instance_name, system='sched')
+        ret = ft.event_is_enabled(instance=instance_name, system='sched')
+        self.assertEqual(ret, '1')
+        ft.disable_event(instance=instance_name, system='sched')
+        ret = ft.event_is_enabled(instance=instance_name, system='sched')
+        self.assertEqual(ret, '0')
+
+        ft.enable_event(instance=instance_name,
+                        system='sched',
+                        event='sched_switch')
+        ret = ft.event_is_enabled(instance=instance_name,
+                                  system='sched',
+                                  event='sched_switch')
+        self.assertEqual(ret, '1')
+        ft.disable_event(instance=instance_name,
+                         system='sched',
+                         event='sched_switch')
+        ret = ft.event_is_enabled(instance=instance_name,
+                                  system='sched',
+                                  event='sched_switch')
+        self.assertEqual(ret, '0')
+
+        ft.enable_event(instance=instance_name,
+                        system='sched',
+                        event='all')
+        ret = ft.event_is_enabled(instance=instance_name,
+                                  system='sched',
+                                  event='all')
+        self.assertEqual(ret, '1')
+        ft.disable_event(instance=instance_name,
+                         system='sched',
+                         event='all')
+        ret = ft.event_is_enabled(instance=instance_name,
+                                  system='sched',
+                                  event='all')
+        self.assertEqual(ret, '0')
+
+        ft.enable_event(instance=instance_name,
+                        event='sched_switch')
+        ret = ft.event_is_enabled(instance=instance_name,
+                                  system='sched',
+                                  event='sched_switch')
+        self.assertEqual(ret, '1')
+        ft.disable_event(instance=instance_name,
+                         event='sched_switch')
+        ret = ft.event_is_enabled(instance=instance_name,
+                                  system='sched',
+                                  event='sched_switch')
+        self.assertEqual(ret, '0')
+
+        ft.enable_event(instance=instance_name,
+                        system='all',
+                        event='sched_switch')
+        ret = ft.event_is_enabled(instance=instance_name,
+                                  system='sched',
+                                  event='sched_switch')
+        self.assertEqual(ret, '1')
+        ft.disable_event(instance=instance_name,
+                         system='all',
+                         event='sched_switch')
+        ret = ft.event_is_enabled(instance=instance_name,
+                                  system='sched',
+                                  event='sched_switch')
+        self.assertEqual(ret, '0')
+
+        ft.destroy_all_instances()
+
+    def test_enable_event_err(self):
+        ft.create_instance(instance_name)
+
+        err = 'Failed to enable/disable event'
+        with self.assertRaises(Exception) as context:
+            ft.enable_event(instance=instance_name,
+                            system='zero')
+        self.assertTrue(err in str(context.exception))
+
+        with self.assertRaises(Exception) as context:
+            ft.enable_event(instance=instance_name,
+                            system='sched',
+                            event='zero')
+        self.assertTrue(err in str(context.exception))
+
+        ft.destroy_all_instances()
+
+    def test_enable_events(self):
+        ft.create_instance(instance_name)
+        ft.enable_events(instance=instance_name,
+                         events='all')
+
+        ret = ft.event_is_enabled(instance=instance_name,
+                                  event='all')
+        self.assertEqual(ret, '1')
+        ft.disable_events(instance=instance_name,
+                          events='all')
+
+        ret = ft.event_is_enabled(instance=instance_name,
+                                  event='all')
+        self.assertEqual(ret, '0')
+
+        ft.enable_events(instance=instance_name,
+                         systems=['sched', 'irq'])
+
+        ret = ft.event_is_enabled(instance=instance_name,
+                                  system='sched',
+                                  event='all')
+        self.assertEqual(ret, '1')
+
+        ret = ft.event_is_enabled(instance=instance_name,
+                                  system='irq',
+                                  event='all')
+        self.assertEqual(ret, '1')
+
+        ft.disable_events(instance=instance_name,
+                          systems=['sched', 'irq'])
+
+        ret = ft.event_is_enabled(instance=instance_name,
+                                  system='sched',
+                                  event='all')
+        self.assertEqual(ret, '0')
+
+        ret = ft.event_is_enabled(instance=instance_name,
+                                  system='irq',
+                                  event='all')
+        self.assertEqual(ret, '0')
+
+        ft.enable_events(instance=instance_name,
+                         systems=['sched', 'irq'],
+                         events=[['sched_switch', 'sched_waking'],
+                                 ['all']])
+
+        ret = ft.event_is_enabled(instance=instance_name,
+                                  system='sched',
+                                  event='sched_switch')
+        self.assertEqual(ret, '1')
+
+        ret = ft.event_is_enabled(instance=instance_name,
+                                  system='sched',
+                                  event='sched_waking')
+        self.assertEqual(ret, '1')
+
+        ret = ft.event_is_enabled(instance=instance_name,
+                                  system='sched',
+                                  event='sched_wakeup')
+        self.assertEqual(ret, '0')
+
+        ret = ft.event_is_enabled(instance=instance_name,
+                                  system='irq',
+                                  event='all')
+        self.assertEqual(ret, '1')
+
+        ft.disable_events(instance=instance_name,
+                          systems=['sched', 'irq'],
+                          events=[['sched_switch', 'sched_waking'],
+                                  ['all']])
+
+        ret = ft.event_is_enabled(instance=instance_name,
+                                  system='sched',
+                                  event='sched_switch')
+        self.assertEqual(ret, '0')
+
+        ret = ft.event_is_enabled(instance=instance_name,
+                                  system='sched',
+                                  event='sched_waking')
+        self.assertEqual(ret, '0')
+
+        ret = ft.event_is_enabled(instance=instance_name,
+                                  system='irq',
+                                  event='all')
+        self.assertEqual(ret, '0')
+
+        ft.destroy_all_instances()
+
+    def test_enable_events_err(self):
+        ft.create_instance(instance_name)
+
+        err = 'Inconsistent \"events\" argument'
+        with self.assertRaises(Exception) as context:
+            ft.enable_events(instance=instance_name,
+                             systems=['sched'],
+                             events=['all'])
+        self.assertTrue(err in str(context.exception))
+
+        err = 'Failed to enable events for unspecified system'
+        with self.assertRaises(Exception) as context:
+            ft.enable_events(instance=instance_name,
+                             events=['sched_switch', 'sched_wakeup'])
+        self.assertTrue(err in str(context.exception))
+
+        err = 'Failed to enable/disable event'
+        with self.assertRaises(Exception) as context:
+            ft.enable_events(instance=instance_name,
+                             systems=['sched'],
+                             events=[['no_event']])
+        self.assertTrue(err in str(context.exception))
+
+        ft.destroy_all_instances()
+
+
+class OptionsTestCase(unittest.TestCase):
+    def test_enable_option(self):
+        ft.create_instance(instance_name)
+        opt = 'event-fork'
+        ret = ft.option_is_set(instance=instance_name,
+                               option=opt)
+        self.assertFalse(ret)
+
+        ft.enable_option(instance=instance_name,
+                         option=opt)
+        ret = ft.option_is_set(instance=instance_name,
+                               option=opt)
+        self.assertTrue(ret)
+
+        ft.disable_option(instance=instance_name,
+                          option=opt)
+        ret = ft.option_is_set(instance=instance_name,
+                               option=opt)
+        self.assertFalse(ret)
+
+        opt = 'no-opt'
+        err = 'Failed to set option \"no-opt\"'
+        with self.assertRaises(Exception) as context:
+            ft.enable_option(instance=instance_name,
+                             option=opt)
+        self.assertTrue(err in str(context.exception))
+
+        ft.destroy_all_instances()
+
+    def test_supported_options(self):
+        ft.create_instance(instance_name)
+        opts = ft.supported_options(instance_name)
+        self.assertTrue(len(opts) > 20)
+        self.assertTrue('event-fork' in opts)
+
+        ft.destroy_all_instances()
+
+    def test_enabled_options(self):
+        ft.create_instance(instance_name)
+        opts = ft.enabled_options(instance_name)
+        n = len(opts)
+        ft.enable_option(instance=instance_name, option='function-fork')
+        ft.enable_option(instance=instance_name, option='event-fork')
+        opts = ft.enabled_options(instance_name)
+
+        self.assertEqual(len(opts), n + 2)
+        self.assertTrue('event-fork' in opts)
+        self.assertTrue('function-fork' in opts)
+
+        ft.destroy_all_instances()
+
+
+class TracingOnTestCase(unittest.TestCase):
+    def test_ON_OFF(self):
+        ft.tracing_ON()
+        self.assertTrue(ft.is_tracing_ON())
+        ft.tracing_OFF()
+
+        ft.create_instance(instance_name)
+        ft.tracing_ON(instance=instance_name)
+        self.assertTrue(ft.is_tracing_ON(instance=instance_name))
+        self.assertFalse(ft.is_tracing_ON())
+        ft.tracing_OFF(instance=instance_name)
+
+        ft.destroy_all_instances()
+
+    def test_err(self):
+        err = 'returned a result with an error set'
+        with self.assertRaises(Exception) as context:
+            ft.tracing_ON('zero')
+        self.assertTrue(err in str(context.exception))
+
+        with self.assertRaises(Exception) as context:
+            ft.tracing_OFF('zero')
+        self.assertTrue(err in str(context.exception))
+
+
+if __name__ == '__main__':
+    unittest.main()
diff --git a/tests/1_unit/test_02_datawrapper_unit.py b/tests/1_unit/test_02_datawrapper_unit.py
new file mode 100755
index 0000000..58c8706
--- /dev/null
+++ b/tests/1_unit/test_02_datawrapper_unit.py
@@ -0,0 +1,41 @@
+#!/usr/bin/env python3
+
+"""
+SPDX-License-Identifier: LGPL-2.1
+
+Copyright (C) 2021 VMware Inc, Yordan Karadzhov (VMware) <y.karadz@gmail.com>
+"""
+
+import os
+import sys
+import unittest
+import tracecruncher.ksharkpy as ks
+import tracecruncher.npdatawrapper as dw
+
+file_1 = 'testdata/trace_test1.dat'
+
+class DwPyTestCase(unittest.TestCase):
+    def test_columns(self):
+        self.assertEqual(dw.columns(), ['event', 'cpu', 'pid', 'offset', 'time'])
+
+    def test_load(self):
+        sd = ks.open(file_1)
+        data = dw.load(sd)
+        self.assertEqual(len(dw.columns()), len(data))
+        self.assertEqual(data['pid'].size, 1530)
+
+        data_no_ts = dw.load(sd, ts_data=False)
+        self.assertEqual(data_no_ts['pid'].size, 1530)
+        self.assertEqual(len(dw.columns()) - 1, len(data_no_ts))
+
+        data_pid_cpu = dw.load(sd, evt_data=False,
+                                   ofst_data=False,
+                                   ts_data=False)
+        self.assertEqual(data_pid_cpu['cpu'].size, 1530)
+        self.assertEqual(2, len(data_pid_cpu))
+
+        ks.close()
+
+
+if __name__ == '__main__':
+    unittest.main()
diff --git a/tests/1_unit/test_03_ksharkpy_unit.py b/tests/1_unit/test_03_ksharkpy_unit.py
new file mode 100755
index 0000000..c7da2a1
--- /dev/null
+++ b/tests/1_unit/test_03_ksharkpy_unit.py
@@ -0,0 +1,72 @@
+#!/usr/bin/env python3
+
+"""
+SPDX-License-Identifier: LGPL-2.1
+
+Copyright (C) 2021 VMware Inc, Yordan Karadzhov (VMware) <y.karadz@gmail.com>
+"""
+
+import os
+import sys
+import unittest
+import tracecruncher.ksharkpy as ks
+import tracecruncher.npdatawrapper as dw
+
+file_1 = 'testdata/trace_test1.dat'
+file_2 = 'testdata/trace_test2.dat'
+
+ss_id = 323
+
+class KsPyTestCase(unittest.TestCase):
+    def test_open_close(self):
+        sd = ks.open(file_1)
+        self.assertEqual(sd, 0)
+        sd = ks.open(file_2)
+        self.assertEqual(sd, 1)
+        ks.close()
+
+        sd = ks.open(file_1)
+        self.assertEqual(sd, 0)
+        ks.close()
+
+        err = 'Failed to open file'
+        with self.assertRaises(Exception) as context:
+            sd = ks.open('no_file')
+        self.assertTrue(err in str(context.exception))
+
+    def test_event_id(self):
+        sd = ks.open(file_1)
+        eid = ks.event_id(stream_id=sd, name='sched/sched_switch')
+        self.assertEqual(eid, ss_id)
+
+        err = 'Failed to retrieve the Id of event'
+        with self.assertRaises(Exception) as context:
+            eid = ks.event_id(stream_id=sd, name='sched/no_such_event')
+        self.assertTrue(err in str(context.exception))
+
+        ks.close()
+
+    def test_event_name(self):
+        sd = ks.open(file_1)
+        name = ks.event_name(stream_id=sd, event_id=ss_id)
+        self.assertEqual(name, 'sched/sched_switch')
+
+        err = 'Failed to retrieve the name of event'
+        with self.assertRaises(Exception) as context:
+            name = ks.event_name(stream_id=sd, event_id=2**30)
+        self.assertTrue(err in str(context.exception))
+
+        ks.close()
+
+    def test_read_field(self):
+        sd = ks.open(file_1)
+        data = dw.load(sd)
+        next_pid = ks.read_event_field(stream_id=sd,
+                                       offset=data['offset'],
+                                       event_id=ss_id,
+                                       field='next_pid')
+        self.assertEqual(next_pid, 4182)
+        ks.close()
+
+
+if __name__ == '__main__':
+    unittest.main()
diff --git a/tests/2_integration/__init__.py b/tests/2_integration/__init__.py
new file mode 100644
index 0000000..e69de29
diff --git a/tests/2_integration/test_01_ftracepy_integration.py b/tests/2_integration/test_01_ftracepy_integration.py
new file mode 100755
index 0000000..d3b2c6b
--- /dev/null
+++ b/tests/2_integration/test_01_ftracepy_integration.py
@@ -0,0 +1,113 @@
+#!/usr/bin/env python3
+
+"""
+SPDX-License-Identifier: LGPL-2.1
+
+Copyright (C) 2021 VMware Inc, Yordan Karadzhov (VMware) <y.karadz@gmail.com>
+"""
+
+import os
+import sys
+import unittest
+import tracecruncher.ftracepy as ft
+
+class InstanceTestCase(unittest.TestCase):
+    def test_create_instance(self):
+        tracefs_dir = ft.dir()
+        instances_dir = tracefs_dir + '/instances/'
+        self.assertEqual(len(os.listdir(instances_dir)), 0)
+
+        for i in range(25):
+            instance_name = 'test_instance_%s' % i
+            ft.create_instance(instance_name)
+            self.assertTrue(os.path.isdir(instances_dir + instance_name))
+
+        for i in range(15):
+            instance_name = 'test_instance_%s' % i
+            ft.destroy_instance(instance_name)
+            self.assertFalse(os.path.isdir(instances_dir + instance_name))
+
+        self.assertEqual(len(os.listdir(instances_dir)), 10)
+        ft.destroy_instance('all')
+        self.assertEqual(len(os.listdir(instances_dir)), 0)
+
+    def test_current_tracer(self):
+        current = ft.get_current_tracer()
+        self.assertEqual(current, 'nop')
+        ft.tracing_OFF()
+        name = 'function'
+        ft.set_current_tracer(tracer=name)
+        current = ft.get_current_tracer()
+        self.assertEqual(current, name)
+        ft.set_current_tracer()
+
+        instance_name = 'test_instance'
+        ft.create_instance(instance_name)
+        current = ft.get_current_tracer(instance=instance_name)
+        self.assertEqual(current, 'nop')
+        ft.tracing_OFF(instance=instance_name)
+        ft.set_current_tracer(instance=instance_name, tracer=name)
+        current = ft.get_current_tracer(instance=instance_name)
+        self.assertEqual(current, name)
+        ft.destroy_instance('all')
+
+    def test_enable_events(self):
+        instance_name = 'test_instance'
+        ft.create_instance(instance_name)
+        systems = ft.available_event_systems(instance=instance_name)
+        systems.remove('ftrace')
+        for s in systems:
+            ret = ft.event_is_enabled(instance=instance_name,
+                                       system=s)
+            self.assertEqual(ret, '0')
+            ft.enable_event(instance=instance_name,
+                             system=s)
+            ret = ft.event_is_enabled(instance=instance_name,
+                                       system=s)
+            self.assertEqual(ret, '1')
+
+            ft.disable_event(instance=instance_name,
+                             system=s)
+            events = ft.available_system_events(instance=instance_name,
+                                                 system=s)
+            for e in events:
+                ret = ft.event_is_enabled(instance=instance_name,
+                                           system=s,
+                                           event=e)
+                self.assertEqual(ret, '0')
+                ft.enable_event(instance=instance_name,
+                                 system=s,
+                                 event=e)
+                ret = ft.event_is_enabled(instance=instance_name,
+                                           system=s,
+                                           event=e)
+                self.assertEqual(ret, '1')
+                ret = ft.event_is_enabled(instance=instance_name,
+                                           system=s)
+                if e != events[-1]:
+                    self.assertEqual(ret, 'X')
+
+            ret = ft.event_is_enabled(instance=instance_name,
+                                       system=s)
+            self.assertEqual(ret, '1')
+
+        ret = ft.event_is_enabled(instance=instance_name,
+                                   system=s)
+        self.assertEqual(ret, '1')
+
+        ft.disable_event(instance=instance_name, event='all')
+        for s in systems:
+            ret = ft.event_is_enabled(instance=instance_name,
+                                       system=s)
+            self.assertEqual(ret, '0')
+            events = ft.available_system_events(instance=instance_name,
+                                                 system=s)
+            for e in events:
+                ret = ft.event_is_enabled(instance=instance_name,
+                                           system=s,
+                                           event=e)
+                self.assertEqual(ret, '0')
+
+
+if __name__ == '__main__':
+    unittest.main()
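The tri-state values checked in test_enable_events() ('0' all disabled, '1' all enabled, 'X' partially enabled) mirror the semantics of ftrace's per-system "enable" file. As a standalone sketch of that aggregation logic (illustrative only, not trace-cruncher's implementation):

```python
def aggregate_enable_state(flags):
    """Combine per-event enable flags ('0' or '1') into the
    tri-state value ftrace reports for a whole event system:
    '0' = all disabled, '1' = all enabled, 'X' = mixed."""
    if all(f == '1' for f in flags):
        return '1'
    if all(f == '0' for f in flags):
        return '0'
    return 'X'
```

This is why the test only asserts 'X' while iterating (some events still disabled) and expects '1' once the last event of the system has been enabled.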
diff --git a/tests/2_integration/test_03_ksharkpy_integration.py b/tests/2_integration/test_03_ksharkpy_integration.py
new file mode 100755
index 0000000..dd8e0b5
--- /dev/null
+++ b/tests/2_integration/test_03_ksharkpy_integration.py
@@ -0,0 +1,25 @@
+#!/usr/bin/env python3
+
+"""
+SPDX-License-Identifier: LGPL-2.1
+
+Copyright (C) 2021 VMware Inc, Yordan Karadzhov (VMware) <y.karadz@gmail.com>
+"""
+
+import os
+import sys
+import shutil
+import unittest
+import tracecruncher.ks_utils as tc
+
+class GetTestData(unittest.TestCase):
+    def test_open_and_read(self):
+        f = tc.open_file(file_name='testdata/trace_test1.dat')
+        data = f.load(pid_data=False)
+        tasks = f.get_tasks()
+        self.assertEqual(len(tasks), 29)
+        self.assertEqual(tasks['zoom'], [28121, 28137, 28141, 28199, 28201, 205666])
+
+
+if __name__ == '__main__':
+    unittest.main()
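The get_tasks() call above returns a mapping from task name to the list of PIDs seen under that name, which is what the 'zoom' assertion checks. A minimal sketch of building such a mapping from (comm, pid) pairs (tasks_from_records is a hypothetical helper, not part of ks_utils):

```python
from collections import defaultdict

def tasks_from_records(records):
    """Group PIDs by task name into a get_tasks()-style mapping.
    'records' is an iterable of (comm, pid) pairs; duplicate PIDs
    for the same comm are recorded only once."""
    tasks = defaultdict(list)
    for comm, pid in records:
        if pid not in tasks[comm]:
            tasks[comm].append(pid)
    return dict(tasks)
```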
diff --git a/tests/__init__.py b/tests/__init__.py
new file mode 100644
index 0000000..e69de29
-- 
2.27.0



* [PATCH v4 11/11] trace-cruncher: Add github workflow for CI testing
  2021-07-07 13:21 [PATCH v4 00/11] Build trace-cruncher as Python pakage Yordan Karadzhov (VMware)
                   ` (10 preceding siblings ...)
  2021-07-07 13:21 ` [PATCH v4 10/11] trace-cruncher: Add testing Yordan Karadzhov (VMware)
@ 2021-07-07 13:21 ` Yordan Karadzhov (VMware)
  11 siblings, 0 replies; 13+ messages in thread
From: Yordan Karadzhov (VMware) @ 2021-07-07 13:21 UTC (permalink / raw)
  To: linux-trace-devel; +Cc: Yordan Karadzhov (VMware)

The CI workflow runs once a week, as well as on any push to the
"master" and "yordan_devel" branches.

Signed-off-by: Yordan Karadzhov (VMware) <y.karadz@gmail.com>
---
 .github/workflows/main.yml | 58 ++++++++++++++++++++++++++++++++++++++
 1 file changed, 58 insertions(+)
 create mode 100644 .github/workflows/main.yml

diff --git a/.github/workflows/main.yml b/.github/workflows/main.yml
new file mode 100644
index 0000000..94abf64
--- /dev/null
+++ b/.github/workflows/main.yml
@@ -0,0 +1,58 @@
+name: trace-cruncher CI
+
+on:
+  push:
+    branches: [master, yordan_devel]
+  schedule:
+    - cron:  '0 15 * * THU'
+
+jobs:
+  build:
+    runs-on: ubuntu-latest
+
+    steps:
+    - uses: actions/checkout@v2
+
+    - name: Install Dependencies
+      shell: bash
+      run: |
+        sudo apt-get update
+        sudo apt-get install build-essential git cmake libjson-c-dev -y
+        sudo apt-get install libpython3-dev cython3 python3-numpy -y
+        sudo apt-get install python3-pip -y
+        sudo pip3 install --system pkgconfig GitPython
+        git clone https://git.kernel.org/pub/scm/libs/libtrace/libtraceevent.git/
+        cd libtraceevent
+        make
+        sudo make install
+        cd ..
+        git clone https://git.kernel.org/pub/scm/libs/libtrace/libtracefs.git/
+        cd libtracefs
+        make
+        sudo make install
+        cd ..
+        git clone https://git.kernel.org/pub/scm/utils/trace-cmd/trace-cmd.git
+        cd trace-cmd
+        make
+        sudo make install_libs
+        cd ..
+        git clone https://github.com/yordan-karadzhov/kernel-shark-v2.beta.git kernel-shark
+        cd kernel-shark/build
+        cmake ..
+        make
+        sudo make install
+        cd ../..
+
+    - name: Build
+      working-directory: ${{runner.workspace}}/trace-cruncher
+      shell: bash
+      # Build and install.
+      run: |
+        make
+        sudo make install
+
+    - name: Test
+      working-directory: ${{runner.workspace}}/trace-cruncher/tests
+      shell: bash
+      # Discover and run the Python unit tests.
+      run: sudo python3 -m unittest discover .
-- 
2.27.0
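The schedule entry '0 15 * * THU' in the workflow uses the standard five-field cron syntax (minute, hour, day-of-month, month, day-of-week), i.e. 15:00 UTC every Thursday. A small illustrative sketch that splits such an expression into named fields (parse_cron is a hypothetical helper, not part of the workflow or GitHub Actions):

```python
def parse_cron(expr):
    """Split a five-field cron expression into named fields.
    For '0 15 * * THU' this yields minute 0, hour 15 (UTC on
    GitHub-hosted runners), every Thursday."""
    fields = expr.split()
    if len(fields) != 5:
        raise ValueError('expected 5 cron fields, got %d' % len(fields))
    names = ('minute', 'hour', 'day_of_month', 'month', 'day_of_week')
    return dict(zip(names, fields))
```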



