* [PATCH v2 0/7]  Add visualization model for the Qt-based KernelShark
@ 2018-07-31 13:52 Yordan Karadzhov (VMware)
  2018-07-31 13:52 ` [PATCH v2 1/7] kernel-shark-qt: Change the type of the fields in struct kshark_entry Yordan Karadzhov (VMware)
                   ` (6 more replies)
  0 siblings, 7 replies; 21+ messages in thread
From: Yordan Karadzhov (VMware) @ 2018-07-31 13:52 UTC (permalink / raw)
  To: rostedt; +Cc: linux-trace-devel, Yordan Karadzhov (VMware)

This series of patches introduces the second part of the C API used by
the Qt-based version of KernelShark. This part of the API is responsible
for the visual navigation and browsing inside the trace data.

This is the second version of this series of patches.
Major changes from v1 are:

[1/7] New patch. Changes the type of the fields of struct kshark_entry.

[2/7], [3/7] and [5/7] This version of the patches contains a number of
improvements suggested by Steven Rostedt in his review. Thanks Steven!


Yordan Karadzhov (VMware) (7):
  kernel-shark-qt: Change the type of the fields in struct kshark_entry
  kernel-shark-qt: Add generic instruments for searching inside the
    trace data
  kernel-shark-qt: Introduce the visualization model used by the
    Qt-based KS
  kernel-shark-qt: Add an example showing how to manipulate the Vis.
    model.
  kernel-shark-qt: Define Data collections
  kernel-shark-qt: Make the Vis. model use Data collections.
  kernel-shark-qt: Changed the KernelShark version identifier.

 kernel-shark-qt/CMakeLists.txt             |    2 +-
 kernel-shark-qt/examples/CMakeLists.txt    |    4 +
 kernel-shark-qt/examples/datahisto.c       |  159 +++
 kernel-shark-qt/src/CMakeLists.txt         |    4 +-
 kernel-shark-qt/src/libkshark-collection.c |  828 ++++++++++++++
 kernel-shark-qt/src/libkshark-model.c      | 1174 ++++++++++++++++++++
 kernel-shark-qt/src/libkshark-model.h      |  152 +++
 kernel-shark-qt/src/libkshark.c            |  250 ++++-
 kernel-shark-qt/src/libkshark.h            |  173 ++-
 9 files changed, 2738 insertions(+), 8 deletions(-)
 create mode 100644 kernel-shark-qt/examples/datahisto.c
 create mode 100644 kernel-shark-qt/src/libkshark-collection.c
 create mode 100644 kernel-shark-qt/src/libkshark-model.c
 create mode 100644 kernel-shark-qt/src/libkshark-model.h

-- 
2.17.1


* [PATCH v2 1/7] kernel-shark-qt: Change the type of the fields in struct kshark_entry
  2018-07-31 13:52 [PATCH v2 0/7] Add visualization model for the Qt-based KernelShark Yordan Karadzhov (VMware)
@ 2018-07-31 13:52 ` Yordan Karadzhov (VMware)
  2018-07-31 13:52 ` [PATCH v2 2/7] kernel-shark-qt: Add generic instruments for searching inside the trace data Yordan Karadzhov (VMware)
                   ` (5 subsequent siblings)
  6 siblings, 0 replies; 21+ messages in thread
From: Yordan Karadzhov (VMware) @ 2018-07-31 13:52 UTC (permalink / raw)
  To: rostedt; +Cc: linux-trace-devel, Yordan Karadzhov (VMware)

This patch increases the maximum value limits of the "pid" and "cpu"
fields of struct kshark_entry. The type of the "visible" field is changed
as well, but only in order to preserve optimal packing of the structure.
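
For reference, here is a minimal sketch (not part of the patch) illustrating
why a 16-bit signed "pid" field is too narrow: on 64-bit kernels pid_max can
be raised up to PID_MAX_LIMIT (4194304), which overflows int16_t but fits
comfortably in int32_t. The struct below reuses only the field widths
introduced by this patch and is purely illustrative.

	#include <assert.h>
	#include <stdint.h>
	#include <stdio.h>

	/* Field widths as defined by this patch (illustration only). */
	struct entry_fields {
		uint16_t	visible;
		int16_t		cpu;
		int32_t		pid;
		int32_t		event_id;
	};

	int main(void)
	{
		/* PID_MAX_LIMIT on 64-bit kernels is 4194304 (2^22). */
		int32_t big_pid = 4194304;

		/* Such a PID would not fit in the old int16_t field. */
		assert(big_pid > INT16_MAX);
		printf("pid %d fits in the new int32_t field\n", big_pid);

		return 0;
	}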

Signed-off-by: Yordan Karadzhov (VMware) <y.karadz@gmail.com>
---
 kernel-shark-qt/src/libkshark.h | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/kernel-shark-qt/src/libkshark.h b/kernel-shark-qt/src/libkshark.h
index 2e26552..0ad31c0 100644
--- a/kernel-shark-qt/src/libkshark.h
+++ b/kernel-shark-qt/src/libkshark.h
@@ -42,16 +42,16 @@ struct kshark_entry {
 	 * kshark_filter_masks to check the level of visibility/invisibility
 	 * of the entry.
 	 */
-	uint8_t		visible;
+	uint16_t	visible;
 
 	/** The CPU core of the record. */
-	uint8_t		cpu;
+	int16_t		cpu;
 
 	/** The PID of the task the record was generated. */
-	int16_t		pid;
+	int32_t		pid;
 
 	/** Unique Id ot the trace event type. */
-	int		event_id;
+	int32_t		event_id;
 
 	/** The offset into the trace file, used to find the record. */
 	uint64_t	offset;
-- 
2.17.1


* [PATCH v2 2/7] kernel-shark-qt: Add generic instruments for searching inside the trace data
  2018-07-31 13:52 [PATCH v2 0/7] Add visualization model for the Qt-based KernelShark Yordan Karadzhov (VMware)
  2018-07-31 13:52 ` [PATCH v2 1/7] kernel-shark-qt: Change the type of the fields in struct kshark_entry Yordan Karadzhov (VMware)
@ 2018-07-31 13:52 ` Yordan Karadzhov (VMware)
  2018-07-31 21:43   ` Steven Rostedt
  2018-07-31 13:52 ` [PATCH v2 3/7] kernel-shark-qt: Introduce the visualization model used by the Qt-based KS Yordan Karadzhov (VMware)
                   ` (4 subsequent siblings)
  6 siblings, 1 reply; 21+ messages in thread
From: Yordan Karadzhov (VMware) @ 2018-07-31 13:52 UTC (permalink / raw)
  To: rostedt; +Cc: linux-trace-devel, Yordan Karadzhov (VMware)

This patch introduces the instrumentation for data extraction used by the
visualization model of the Qt-based KernelShark. The efficiency of these
searching instruments has a dominant effect on the performance of the model,
so it is worth explaining them in some detail.

The first type of instrument provides binary search inside time-sorted
arrays of kshark_entries or trace_records. The search returns the first
element of the array having a timestamp bigger than a given reference time.
The time complexity of these searches is O(log(n)).

The second type of instrument searches for the first (in time) entry that
satisfies an abstract Matching condition. Although the array is sorted in
time, we search for an abstract property, so for this purpose the array has
to be treated as unsorted and its elements have to be checked one by one.
If we search for a type of entry that is well represented in the array, the
time complexity of the search is effectively constant, because no matter how
big the array is, the search only goes through a small number of entries at
the beginning of the array (or at the end, if we search backwards) before it
finds the first match. However, if we search for sparse, or even nonexistent,
entries the time complexity becomes linear.

These explanations will make more sense with the following patches.
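
As a rough illustration of how these instruments are meant to be combined,
here is a minimal sketch (not part of the patch). It assumes that "data",
"n_rows" and the reference timestamp "time" are already available (e.g. from
kshark_load_data_entries()) and omits all error handling:

	/*
	 * Find the first entry at or after the given timestamp and then
	 * search forward for the first entry generated on CPU 0.
	 */
	size_t row = kshark_find_entry_by_time(time, data, 0, n_rows - 1);

	struct kshark_entry_request *req;
	const struct kshark_entry *e;
	ssize_t index;

	/*
	 * Request: start at "row", scan at most n_rows - row elements,
	 * match on CPU 0 and ignore the visibility masks.
	 */
	req = kshark_entry_request_alloc(row, n_rows - row,
					 kshark_match_cpu, 0,
					 false, 0);
	if (req) {
		e = kshark_get_entry_front(req, data, &index);
		free(req);
	}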

Signed-off-by: Yordan Karadzhov (VMware) <y.karadz@gmail.com>
---
 kernel-shark-qt/src/libkshark.c | 233 +++++++++++++++++++++++++++++++-
 kernel-shark-qt/src/libkshark.h |  86 +++++++++++-
 2 files changed, 317 insertions(+), 2 deletions(-)

diff --git a/kernel-shark-qt/src/libkshark.c b/kernel-shark-qt/src/libkshark.c
index 3299752..1796bf8 100644
--- a/kernel-shark-qt/src/libkshark.c
+++ b/kernel-shark-qt/src/libkshark.c
@@ -861,7 +861,7 @@ static const char *kshark_get_info(struct pevent *pe,
  * @returns The returned string contains a semicolon-separated list of data
  *	    fields.
  */
-char* kshark_dump_entry(struct kshark_entry *entry)
+char* kshark_dump_entry(const struct kshark_entry *entry)
 {
 	const char *event_name, *task, *lat, *info;
 	struct kshark_context *kshark_ctx;
@@ -908,3 +908,234 @@ char* kshark_dump_entry(struct kshark_entry *entry)
 
 	return NULL;
 }
+
+/**
+ * @brief Binary search inside a time-sorted array of kshark_entries.
+ * @param time: The value of time to search for.
+ * @param data: Input location for the trace data.
+ * @param l: Array index specifying the lower edge of the range to search in.
+ * @param h: Array index specifying the upper edge of the range to search in.
+ * @returns On success, the first kshark_entry inside the range, having a
+ *	    timestamp equal to or bigger than "time". In the case when no
+ *	    kshark_entry has been found inside the range, the function will
+ *	    return the value of "l" or "h".
+ */
+size_t kshark_find_entry_by_time(uint64_t time,
+				 struct kshark_entry **data,
+				 size_t l, size_t h)
+{
+	if (data[l]->ts >= time)
+		return l;
+
+	if (data[h]->ts < time)
+		return h;
+
+	size_t mid;
+	BSEARCH(h, l, data[mid]->ts < time);
+	return h;
+}
+
+/**
+ * @brief Binary search inside a time-sorted array of pevent_records.
+ * @param time: The value of time to search for.
+ * @param data: Input location for the trace data.
+ * @param l: Array index specifying the lower edge of the range to search in.
+ * @param h: Array index specifying the upper edge of the range to search in.
+ * @returns On success, the first pevent_record inside the range, having a
+ *	    timestamp equal to or bigger than "time". In the case when no
+ *	    pevent_record has been found inside the range, the function will
+ *	    return the value of "l" or "h".
+ */
+size_t kshark_find_record_by_time(uint64_t time,
+				  struct pevent_record **data,
+				  size_t l, size_t h)
+{
+	if (data[l]->ts >= time)
+		return l;
+
+	if (data[h]->ts < time)
+		return h;
+
+	size_t mid;
+	BSEARCH(h, l, data[mid]->ts < time);
+	return h;
+}
+
+/**
+ * @brief Simple Pid matching function to be used for data requests.
+ * @param kshark_ctx: Input location for the session context pointer.
+ * @param e: kshark_entry to be checked.
+ * @param pid: Matching condition value.
+ * @returns True if the Pid of the entry matches the value of "pid".
+ *	    Else false.
+ */
+bool kshark_match_pid(struct kshark_context *kshark_ctx,
+		      struct kshark_entry *e, int pid)
+{
+	if (e->pid == pid)
+		return true;
+
+	return false;
+}
+
+/**
+ * @brief Simple Cpu matching function to be used for data requests.
+ * @param kshark_ctx: Input location for the session context pointer.
+ * @param e: kshark_entry to be checked.
+ * @param cpu: Matching condition value.
+ * @returns True if the Cpu of the entry matches the value of "cpu".
+ *	    Else false.
+ */
+bool kshark_match_cpu(struct kshark_context *kshark_ctx,
+		      struct kshark_entry *e, int cpu)
+{
+	if (e->cpu == cpu)
+		return true;
+
+	return false;
+}
+
+/**
+ * @brief Create Data request. The request defines the properties of the
+ *	  requested kshark_entry.
+ * @param first: Array index specifying the position inside the array from
+ *		 where the search starts.
+ * @param n: Number of array elements to search in.
+ * @param cond: Matching condition function.
+ * @param val: Matching condition value, used by the Matching condition
+ *	       function.
+ * @param vis_only: If true, a visible entry is requested.
+ * @param vis_mask: If "vis_only" is true, use this mask to specify the level
+ *		    of visibility of the requested entry
+ * @returns Pointer to kshark_entry_request on success, or NULL on failure.
+ * 	    The user is responsible for freeing the returned
+ *	    kshark_entry_request.
+ */
+struct kshark_entry_request *
+kshark_entry_request_alloc(size_t first, size_t n,
+			   matching_condition_func cond, int val,
+			   bool vis_only, int vis_mask)
+{
+	struct kshark_entry_request *req = malloc(sizeof(*req));
+
+	if (!req) {
+		fprintf(stderr,
+			"Failed to allocate memory for entry request.\n");
+		return NULL;
+	}
+
+	req->first = first;
+	req->n = n;
+	req->cond = cond;
+	req->val = val;
+	req->vis_only = vis_only;
+	req->vis_mask = vis_mask;
+
+	return req;
+}
+
+/** Dummy entry, used to indicate the existence of filtered entries. */
+const struct kshark_entry dummy_entry = {
+	.next		= NULL,
+	.visible	= 0x00,
+	.cpu		= KS_FILTERED_BIN,
+	.pid		= KS_FILTERED_BIN,
+	.event_id	= -1,
+	.offset		= 0,
+	.ts		= 0
+};
+
+static const struct kshark_entry *
+get_entry(const struct kshark_entry_request *req,
+          struct kshark_entry **data,
+          ssize_t *index, size_t start, ssize_t end, int inc)
+{
+	struct kshark_context *kshark_ctx = NULL;
+	const struct kshark_entry *e = NULL;
+	ssize_t i;
+
+	if (index)
+		*index = KS_EMPTY_BIN;
+
+	if (!kshark_instance(&kshark_ctx))
+		return e;
+
+	for (i = start; i != end; i += inc) {
+		if (req->cond(kshark_ctx, data[i], req->val)) {
+			/*
+			 * Data satisfying the condition has been found.
+			 */
+			if (req->vis_only &&
+			    !(data[i]->visible & req->vis_mask)) {
+				/* This data entry has been filtered. */
+				e = &dummy_entry;
+			} else {
+				e = data[i];
+				break;
+			}
+		}
+	}
+
+	if (index) {
+		if (e)
+			*index = (e->event_id >= 0)? i : KS_FILTERED_BIN;
+		else
+			*index = KS_EMPTY_BIN;
+	}
+
+	return e;
+}
+
+/**
+ * @brief Search for an entry satisfying the requirements of a given Data
+ *	  request. Start from the position provided by the request and go
+ *	  searching in the direction of the increasing timestamps (front).
+ * @param req: Input location for Data request.
+ * @param data: Input location for the trace data.
+ * @param index: Optional output location for the index of the returned
+ *		 entry inside the array.
+ * @returns Pointer to the first entry satisfying the matching condition on
+ *	    success, or NULL on failure.
+ *	    In the special case when some entries, satisfying the Matching
+ *	    condition function have been found, but all these entries have
+ *	    been discarded because of the visibility criteria (filtered
+ *	    entries), the function returns a pointer to a special
+ *	    "Dummy entry".
+ */
+const struct kshark_entry *
+kshark_get_entry_front(const struct kshark_entry_request *req,
+                       struct kshark_entry **data,
+                       ssize_t *index)
+{
+	ssize_t end = req->first + req->n;
+
+	return get_entry(req, data, index, req->first, end, +1);
+}
+
+/**
+ * @brief Search for an entry satisfying the requirements of a given Data
+ *	  request. Start from the position provided by the request and go
+ *	  searching in the direction of the decreasing timestamps (back).
+ * @param req: Input location for Data request.
+ * @param data: Input location for the trace data.
+ * @param index: Optional output location for the index of the returned
+ *		 entry inside the array.
+ * @returns Pointer to the first entry satisfying the matching condition on
+ *	    success, or NULL on failure.
+ *	    In the special case when some entries, satisfying the Matching
+ *	    condition function have been found, but all these entries have
+ *	    been discarded because of the visibility criteria (filtered
+ *	    entries), the function returns a pointer to a special
+ *	    "Dummy entry".
+ */
+const struct kshark_entry *
+kshark_get_entry_back(const struct kshark_entry_request *req,
+                      struct kshark_entry **data,
+                      ssize_t *index)
+{
+	ssize_t end = req->first - req->n;
+	if (end < 0)
+		end = -1;
+
+	return get_entry(req, data, index, req->first, end, -1);
+}
diff --git a/kernel-shark-qt/src/libkshark.h b/kernel-shark-qt/src/libkshark.h
index 0ad31c0..adbd392 100644
--- a/kernel-shark-qt/src/libkshark.h
+++ b/kernel-shark-qt/src/libkshark.h
@@ -133,7 +133,7 @@ void kshark_close(struct kshark_context *kshark_ctx);
 
 void kshark_free(struct kshark_context *kshark_ctx);
 
-char* kshark_dump_entry(struct kshark_entry *entry);
+char* kshark_dump_entry(const struct kshark_entry *entry);
 
 /** Bit masks used to control the visibility of the entry after filtering. */
 enum kshark_filter_masks {
@@ -190,6 +190,90 @@ void kshark_filter_entries(struct kshark_context *kshark_ctx,
 			   struct kshark_entry **data,
 			   size_t n_entries);
 
+/** General purpose Binary search macro. */
+#define BSEARCH(h, l, cond)				\
+	({						\
+		while (h - l > 1) {			\
+			mid = (l + h) / 2;		\
+			if (cond)			\
+				l = mid;		\
+			else				\
+				h = mid;		\
+		}					\
+	})
+
+size_t kshark_find_entry_by_time(uint64_t time,
+				 struct kshark_entry **data_rows,
+				 size_t l, size_t h);
+
+size_t kshark_find_record_by_time(uint64_t time,
+				  struct pevent_record **data_rows,
+				  size_t l, size_t h);
+
+bool kshark_match_pid(struct kshark_context *kshark_ctx,
+		      struct kshark_entry *e, int pid);
+
+bool kshark_match_cpu(struct kshark_context *kshark_ctx,
+		      struct kshark_entry *e, int cpu);
+
+/** Empty bin identifier. */
+#define KS_EMPTY_BIN		-1
+
+/** Filtered bin identifier. */
+#define KS_FILTERED_BIN		-2
+
+/** Matching condition function type. To be used for data requests. */
+typedef bool (matching_condition_func)(struct kshark_context*,
+				       struct kshark_entry*,
+				       int);
+
+/**
+ * Data request structure, defining the properties of the required
+ * kshark_entry.
+ */
+struct kshark_entry_request {
+	/**
+	 * Array index specifying the position inside the array from where
+	 * the search starts.
+	 */
+	size_t first;
+
+	/** Number of array elements to search in. */
+	size_t n;
+
+	/** Matching condition function. */
+	matching_condition_func *cond;
+
+	/**
+	 * Matching condition value, used by the Matching condition function.
+	 */
+	int val;
+
+	/** If true, a visible entry is requested. */
+	bool vis_only;
+
+	/**
+	 * If "vis_only" is true, use this mask to specify the level of
+	 * visibility of the requested entry.
+	 */
+	uint8_t vis_mask;
+};
+
+struct kshark_entry_request *
+kshark_entry_request_alloc(size_t first, size_t n,
+			   matching_condition_func cond, int val,
+			   bool vis_only, int vis_mask);
+
+const struct kshark_entry *
+kshark_get_entry_front(const struct kshark_entry_request *req,
+		       struct kshark_entry **data,
+		       ssize_t *index);
+
+const struct kshark_entry *
+kshark_get_entry_back(const struct kshark_entry_request *req,
+		      struct kshark_entry **data,
+		      ssize_t *index);
+
 #ifdef __cplusplus
 }
 #endif
-- 
2.17.1


* [PATCH v2 3/7] kernel-shark-qt: Introduce the visualization model used by the Qt-based KS
  2018-07-31 13:52 [PATCH v2 0/7] Add visualization model for the Qt-based KernelShark Yordan Karadzhov (VMware)
  2018-07-31 13:52 ` [PATCH v2 1/7] kernel-shark-qt: Change the type of the fields in struct kshark_entry Yordan Karadzhov (VMware)
  2018-07-31 13:52 ` [PATCH v2 2/7] kernel-shark-qt: Add generic instruments for searching inside the trace data Yordan Karadzhov (VMware)
@ 2018-07-31 13:52 ` Yordan Karadzhov (VMware)
  2018-08-01  0:51   ` Steven Rostedt
                     ` (4 more replies)
  2018-07-31 13:52 ` [PATCH v2 4/7] kernel-shark-qt: Add an example showing how to manipulate the Vis. model Yordan Karadzhov (VMware)
                   ` (3 subsequent siblings)
  6 siblings, 5 replies; 21+ messages in thread
From: Yordan Karadzhov (VMware) @ 2018-07-31 13:52 UTC (permalink / raw)
  To: rostedt; +Cc: linux-trace-devel, Yordan Karadzhov (VMware)

The model used by the Qt-based KernelShark for visualization of trace data
is built around the concept of "Data Bins". When visualizing a large data-set
of trace records, we are limited by the number of screen pixels available for
drawing. The model divides the data-set into data-units, also called Bins.
A Bin has to be defined in such a way that the entire content of one Bin
can be summarized and visualized by a single graphical element.
This model uses the timestamps of the trace records as a criterion for
forming Bins. When the Model has to visualize all records inside a given
time-window, it divides this time-window into N smaller, uniformly sized
subintervals and defines that one Bin contains all trace records having
timestamps falling into one of these subintervals. Because the model operates
over an array of trace records sorted in time, the content of each Bin can be
retrieved simply by knowing the index of the first element inside this Bin
and the index of the first element of the next Bin. This means that knowing
the index of the first element in each Bin is enough to determine the State
of the model.

The State of the model can be modified by its five basic operations: Zoom-In,
Zoom-Out, Shift-Forward, Shift-Backward and Jump-To. After each of these
operations, the new State of the model is retrieved by using binary search
to find the index of the first element in each Bin. This means that each of
the five basic operations of the model has O(log(n)) time complexity (see
the previous change log).

In order to keep the visualization of the new state of the model as efficient
as possible, the model needs a way to summarize and visualize the content of
the Bins in constant time. This is achieved by limiting ourselves to only
checking the records at the beginning and at the end of each Bin. As
explained in the previous change log, this approach has the very
counter-intuitive effect of making the update of sparse (or empty) Graphs
much slower. The problem of the Sparse Graphs will be addressed in a later
patch, where "Data Collections" will be introduced.

Signed-off-by: Yordan Karadzhov (VMware) <y.karadz@gmail.com>
---
 kernel-shark-qt/src/CMakeLists.txt    |    3 +-
 kernel-shark-qt/src/libkshark-model.c | 1135 +++++++++++++++++++++++++
 kernel-shark-qt/src/libkshark-model.h |  142 ++++
 3 files changed, 1279 insertions(+), 1 deletion(-)
 create mode 100644 kernel-shark-qt/src/libkshark-model.c
 create mode 100644 kernel-shark-qt/src/libkshark-model.h

diff --git a/kernel-shark-qt/src/CMakeLists.txt b/kernel-shark-qt/src/CMakeLists.txt
index ed3c60e..ec22f63 100644
--- a/kernel-shark-qt/src/CMakeLists.txt
+++ b/kernel-shark-qt/src/CMakeLists.txt
@@ -1,7 +1,8 @@
 message("\n src ...")
 
 message(STATUS "libkshark")
-add_library(kshark SHARED libkshark.c)
+add_library(kshark SHARED libkshark.c
+                          libkshark-model.c)
 
 target_link_libraries(kshark ${CMAKE_DL_LIBS}
                              ${TRACEEVENT_LIBRARY}
diff --git a/kernel-shark-qt/src/libkshark-model.c b/kernel-shark-qt/src/libkshark-model.c
new file mode 100644
index 0000000..4a4e910
--- /dev/null
+++ b/kernel-shark-qt/src/libkshark-model.c
@@ -0,0 +1,1135 @@
+// SPDX-License-Identifier: LGPL-2.1
+
+/*
+ * Copyright (C) 2017 VMware Inc, Yordan Karadzhov <y.karadz@gmail.com>
+ */
+
+ /**
+  *  @file    libkshark-model.c
+  *  @brief   Visualization model for FTRACE (trace-cmd) data.
+  */
+
+// C
+#include <stdlib.h>
+
+// KernelShark
+#include "libkshark-model.h"
+
+#define UOB(histo) (histo->n_bins)
+#define LOB(histo) (histo->n_bins + 1)
+
+/**
+ * @brief Initialize the Visualization model.
+ * @param histo: Input location for the model descriptor.
+ */
+void ksmodel_init(struct kshark_trace_histo *histo)
+{
+	/*
+	 * Initialize an empty histo. The histo will have no bins and will
+	 * contain no data.
+	 */
+	histo->bin_size = 0;
+	histo->min = 0;
+	histo->max = 0;
+	histo->n_bins = 0;
+
+	histo->bin_count = NULL;
+	histo->map = NULL;
+}
+
+/**
+ * @brief Clear (reset) the Visualization model.
+ * @param histo: Input location for the model descriptor.
+ */
+void ksmodel_clear(struct kshark_trace_histo *histo)
+{
+	/* Reset the histo. It will have no bins and will contain no data. */
+	free(histo->map);
+	free(histo->bin_count);
+	ksmodel_init(histo);
+}
+
+static void ksmodel_reset_bins(struct kshark_trace_histo *histo,
+			       size_t first, size_t last)
+{
+	/* Reset the content of the bins. */
+	memset(&histo->map[first], KS_EMPTY_BIN,
+	       (last - first + 1) * sizeof(histo->map[0]));
+
+	memset(&histo->bin_count[first], 0,
+	       (last - first + 1) * sizeof(histo->bin_count[0]));
+}
+
+static bool ksmodel_histo_alloc(struct kshark_trace_histo *histo, size_t n)
+{
+	free(histo->bin_count);
+	free(histo->map);
+
+	/* Create bins. Two overflow bins are added. */
+	histo->map = calloc(n + 2, sizeof(*histo->map));
+	histo->bin_count = calloc(n + 2, sizeof(*histo->bin_count));
+
+	if (!histo->map || !histo->bin_count) {
+		ksmodel_clear(histo);
+		fprintf(stderr, "Failed to allocate memory for a histo.\n");
+		return false;
+	}
+
+	histo->n_bins = n;
+
+	return true;
+}
+
+static void ksmodel_set_in_range_bining(struct kshark_trace_histo *histo,
+					size_t n, uint64_t min, uint64_t max,
+					bool force_in_range)
+{
+	uint64_t corrected_range, delta_range, range = max - min;
+	struct kshark_entry *last;
+
+	/* The size of the bin must be >= 1, hence the range must be >= n. */
+	if (n == 0 || range < n)
+		return;
+
+	/*
+	 * If the number of bins changes, allocate memory for the descriptor
+	 * of the model.
+	 */
+	if (n != histo->n_bins) {
+		if (!ksmodel_histo_alloc(histo, n)) {
+			ksmodel_clear(histo);
+			return;
+		}
+	}
+
+	/* Reset the content of all bins (including overflow bins) to zero. */
+	ksmodel_reset_bins(histo, 0, histo->n_bins + 1);
+
+	if (range % n == 0) {
+		/*
+		 * The range is a multiple of the number of bins and needs no
+		 * adjustment. This is very unlikely to happen but still ...
+		 */
+		histo->min = min;
+		histo->max = max;
+		histo->bin_size = range / n;
+	} else {
+		/*
+		 * The range needs adjustment. The new range will be slightly
+		 * bigger, compared to the requested one.
+		 */
+		histo->bin_size = range / n + 1;
+		corrected_range = histo->bin_size * n;
+		delta_range = corrected_range - range;
+		histo->min = min - delta_range / 2;
+		histo->max = histo->min + corrected_range;
+
+		if (!force_in_range)
+			return;
+
+		/*
+		 * Make sure that the new range doesn't go outside of the time
+		 * interval of the dataset.
+		 */
+		last = histo->data[histo->data_size - 1];
+		if (histo->min < histo->data[0]->ts) {
+			histo->min = histo->data[0]->ts;
+			histo->max = histo->min + corrected_range;
+		} else if (histo->max > last->ts) {
+			histo->max = last->ts;
+			histo->min = histo->max - corrected_range;
+		}
+	}
+}
+
+/**
+ * @brief Prepare the bining of the Visualization model.
+ * @param histo: Input location for the model descriptor.
+ * @param n: Number of bins.
+ * @param min: Lower edge of the time-window to be visualized.
+ * @param max: Upper edge of the time-window to be visualized.
+ */
+void ksmodel_set_bining(struct kshark_trace_histo *histo,
+			size_t n, uint64_t min, uint64_t max)
+{
+	ksmodel_set_in_range_bining(histo, n, min, max, false);
+}
+
+static size_t ksmodel_set_lower_edge(struct kshark_trace_histo *histo)
+{
+	/*
+	 * Find the index of the first entry inside
+	 * the range (timestamp > min).
+	 */
+	size_t row = kshark_find_entry_by_time(histo->min,
+					       histo->data,
+					       0,
+					       histo->data_size - 1);
+
+	if (row != 0) {
+		/*
+		 * The first entry inside the range is not the first entry
+		 * of the dataset. This means that the Lower Overflow bin
+		 * contains data.
+		 */
+
+		/* Lower Overflow bin starts at "0". */
+		histo->map[LOB(histo)] = 0;
+
+		/*
+		 * The number of entries inside the Lower Overflow bin is
+		 * equal to the index of the first entry inside the range.
+		 */
+		histo->bin_count[LOB(histo)] = row;
+	}  else {
+		/* Lower Overflow bin is empty. */
+		histo->map[LOB(histo)] = KS_EMPTY_BIN;
+		histo->bin_count[LOB(histo)] = 0;
+	}
+
+	/*
+	 * Now check if the first entry inside the range falls into the
+	 * first bin.
+	 */
+	if (histo->data[row]->ts  < histo->min + histo->bin_size) {
+		/*
+		 * It is inside the first bin. Set the beginning
+		 * of the first bin.
+		 */
+		histo->map[0] = row;
+	} else {
+		/* The first bin is empty. */
+		histo->map[0] = KS_EMPTY_BIN;
+	}
+
+	return row;
+}
+
+static size_t ksmodel_set_upper_edge(struct kshark_trace_histo *histo)
+{
+	/*
+	 * Find the index of the first entry outside
+	 * the range (timestamp > max).
+	 */
+	size_t row = kshark_find_entry_by_time(histo->max,
+					       histo->data,
+					       0,
+					       histo->data_size - 1);
+
+	if (row < histo->data_size - 1 ||
+	    (row == histo->data_size - 1 &&
+	     histo->data[histo->data_size - 1]->ts > histo->max)) {
+		/*
+		 * The Upper Overflow bin contains data. Set its beginning
+		 * and the number of entries.
+		 */
+		histo->map[UOB(histo)] = row;
+		histo->bin_count[UOB(histo)] = histo->data_size - row;
+	}  else {
+		/* Upper Overflow bin is empty. */
+		histo->map[UOB(histo)] = KS_EMPTY_BIN;
+		histo->bin_count[UOB(histo)] = 0;
+	}
+
+	return row;
+}
+
+static void ksmodel_set_next_bin_edge(struct kshark_trace_histo *histo,
+				      size_t bin)
+{
+	size_t time, row, next_bin = bin + 1;
+
+	/* Calculate the beginning of the next bin. */
+	time = histo->min + next_bin * histo->bin_size;
+
+	/*
+	 * Find the index of the first entry inside
+	 * the next bin (timestamp > time).
+	 */
+	row = kshark_find_entry_by_time(time, histo->data, 0,
+					histo->data_size - 1);
+
+	/*
+	 * The timestamp of the very last entry of the dataset can be exactly
+	 * equal to the value of the upper edge of the range. This is very
+	 * likely to happen when we use ksmodel_set_in_range_bining(). In this
+	 * case we have to increase the size of the very last bin in order to
+	 * make sure that the last entry of the dataset will fall into it.
+	 */
+	if (next_bin == histo->n_bins - 1)
+		++time;
+
+	if (histo->data[row]->ts >= time + histo->bin_size) {
+		/* The bin is empty. */
+		histo->map[next_bin] = KS_EMPTY_BIN;
+		return;
+	}
+
+	/* Set the index of the first entry. */
+	histo->map[next_bin] = row;
+}
+
+static void ksmodel_set_bin_counts(struct kshark_trace_histo *histo)
+{
+	int i = 0, prev_not_empty;
+
+	memset(&histo->bin_count[0], 0,
+	       (histo->n_bins) * sizeof(histo->bin_count[0]));
+	/*
+	 * Find the first bin which contains data. Start by checking the
+	 * Lower Overflow bin.
+	 */
+	if (histo->map[histo->n_bins + 1] != KS_EMPTY_BIN) {
+		prev_not_empty = LOB(histo);
+	} else {
+		while (histo->map[i] < 0) {
+			++i;
+		}
+
+		prev_not_empty = i++;
+	}
+
+	/*
+	 * Starting from the first not empty bin, loop over all bins and fill
+	 * in the bin_count array to hold the number of entries in each bin.
+	 */
+	while (i < histo->n_bins) {
+		if (histo->map[i] != KS_EMPTY_BIN) {
+			/*
+			 * Here we set the number of entries in
+			 * "prev_not_empty" bin.
+			 */
+			histo->bin_count[prev_not_empty] =
+				histo->map[i] - histo->map[prev_not_empty];
+
+			prev_not_empty = i;
+		}
+
+		++i;
+	}
+
+	/* Check if the Upper Overflow bin contains data. */
+	if (histo->map[UOB(histo)] == KS_EMPTY_BIN) {
+		/*
+		 * The Upper Overflow bin is empty. Use the size of the
+		 * dataset to calculate the content of the previous not
+		 * empty bin.
+		 */
+		histo->bin_count[prev_not_empty] = histo->data_size -
+						   histo->map[prev_not_empty];
+	} else {
+		/*
+		 * Use the index of the first entry inside the Upper Overflow
+		 * bin to calculate the content of the previous not empty
+		 * bin.
+		 */
+		histo->bin_count[prev_not_empty] = histo->map[UOB(histo)] -
+						   histo->map[prev_not_empty];
+	}
+}
+
+/**
+ * @brief Provide the Visualization model with data. Calculate the current
+ *	  state of the model.
+ * @param histo: Input location for the model descriptor.
+ * @param data: Input location for the trace data.
+ * @param n: Number of bins.
+ */
+void ksmodel_fill(struct kshark_trace_histo *histo,
+		  struct kshark_entry **data, size_t n)
+{
+	int bin;
+
+	histo->data_size = n;
+	histo->data = data;
+
+	if (histo->n_bins == 0 ||
+	    histo->bin_size == 0 ||
+	    histo->data_size == 0) {
+		/*
+		 * Something is wrong with this histo.
+		 * Most likely the binning is not set.
+		 */
+		ksmodel_clear(histo);
+		fprintf(stderr,
+			"Unable to fill the model with data.\n");
+		fprintf(stderr,
+			"Try to set the bining of the model first.\n");
+
+		return;
+	}
+
+	/* Set the Lower Overflow bin */
+	ksmodel_set_lower_edge(histo);
+
+	/*
+	 * Loop over the dataset and set the beginning of all individual bins.
+	 */
+	bin = 0;
+	for (bin = 0; bin < histo->n_bins; ++bin)
+		ksmodel_set_next_bin_edge(histo, bin);
+
+	/* Set the Upper Overflow bin. */
+	ksmodel_set_upper_edge(histo);
+
+	/* Calculate the number of entries in each bin. */
+	ksmodel_set_bin_counts(histo);
+}
+
+/**
+ * @brief Get the total number of entries in a given bin.
+ * @param histo: Input location for the model descriptor.
+ * @param bin: Bin id.
+ * @returns The number of entries in this bin.
+ */
+size_t ksmodel_bin_count(struct kshark_trace_histo *histo, int bin)
+{
+	if (bin >= 0 && bin < histo->n_bins)
+		return histo->bin_count[bin];
+
+	if (bin == UPPER_OVERFLOW_BIN)
+		return histo->bin_count[UOB(histo)];
+
+	if (bin == LOWER_OVERFLOW_BIN)
+		return histo->bin_count[LOB(histo)];
+
+	return 0;
+}
+
+/**
+ * @brief Shift the time-window of the model forward. Recalculate the current
+ *	  state of the model.
+ * @param histo: Input location for the model descriptor.
+ * @param n: Number of bins to shift.
+ */
+void ksmodel_shift_forward(struct kshark_trace_histo *histo, size_t n)
+{
+	int bin;
+
+	if (!histo->data_size)
+		return;
+
+	if (histo->bin_count[UOB(histo)] == 0) {
+		/*
+		 * The Upper Overflow bin is empty. This means that we are at
+		 * the upper edge of the dataset already. Do nothing in this
+		 * case.
+		 */
+		return;
+	}
+
+	histo->min += n * histo->bin_size;
+	histo->max += n * histo->bin_size;
+
+	if (n >= histo->n_bins) {
+		/*
+		 * No overlap between the new and the old ranges. Recalculate
+		 * all bins from scratch. First calculate the new range.
+		 */
+		ksmodel_set_bining(histo, histo->n_bins, histo->min,
+							 histo->max);
+
+		ksmodel_fill(histo, histo->data, histo->data_size);
+		return;
+	}
+
+	/* Set the new Lower Overflow bin. */
+	ksmodel_set_lower_edge(histo);
+
+	/*
+	 * Copy the mapping indexes of all overlapping bins, starting from
+	 * bin "0" of the new histo. Note that the number of overlapping bins
+	 * is histo->n_bins - n.
+	 */
+	memmove(&histo->map[0], &histo->map[n],
+		sizeof(histo->map[0]) * (histo->n_bins - n));
+
+	/*
+	 * The mapping index of the old Upper Overflow bin is now the index
+	 * of the first new bin.
+	 */
+	bin = UOB(histo) - n;
+	histo->map[bin] = histo->map[UOB(histo)];
+
+	/* Calculate only the content of the new (non-overlapping) bins. */
+	for (; bin < histo->n_bins; ++bin)
+		ksmodel_set_next_bin_edge(histo, bin);
+
+	/*
+	 * Set the new Upper Overflow bin and calculate the number of entries
+	 * in each bin.
+	 */
+	ksmodel_set_upper_edge(histo);
+	ksmodel_set_bin_counts(histo);
+}
+
+/**
+ * @brief Shift the time-window of the model backward. Recalculate the current
+ *	  state of the model.
+ * @param histo: Input location for the model descriptor.
+ * @param n: Number of bins to shift.
+ */
+void ksmodel_shift_backward(struct kshark_trace_histo *histo, size_t n)
+{
+	int bin;
+
+	if (!histo->data_size)
+		return;
+
+	if (histo->bin_count[LOB(histo)] == 0) {
+		/*
+		 * The Lower Overflow bin is empty. This means that we are at
+		 * the Lower edge of the dataset already. Do nothing in this
+		 * case.
+		 */
+		return;
+	}
+
+	histo->min -= n * histo->bin_size;
+	histo->max -= n * histo->bin_size;
+
+	if (n >= histo->n_bins) {
+		/*
+		 * No overlap between the new and the old range. Recalculate
+		 * all bins from scratch. First calculate the new range.
+		 */
+		ksmodel_set_bining(histo, histo->n_bins, histo->min,
+							 histo->max);
+
+		ksmodel_fill(histo, histo->data, histo->data_size);
+		return;
+	}
+
+	/*
+	 * Copy the mapping indexes of all overlapping bins, starting from
+	 * bin "0" of the old histo. Note that the number of overlapping bins
+	 * is histo->n_bins - n.
+	 */
+	memmove(&histo->map[n], &histo->map[0],
+		sizeof(histo->map[0]) * (histo->n_bins - n));
+
+	/* Set the new Lower Overflow bin. */
+	ksmodel_set_lower_edge(histo);
+
+	/* Calculate only the content of the new (non-overlapping) bins. */
+	bin = 0;
+	while (bin < n) {
+		ksmodel_set_next_bin_edge(histo, bin);
+		++bin;
+	}
+
+	/*
+	 * Set the new Upper Overflow bin and calculate the number of entries
+	 * in each bin.
+	 */
+	ksmodel_set_upper_edge(histo);
+	ksmodel_set_bin_counts(histo);
+}
+
+/**
+ * @brief Move the time-window of the model to a given location. Recalculate
+ *	  the current state of the model.
+ * @param histo: Input location for the model descriptor.
+ * @param ts: position in time to be visualized.
+ */
+void ksmodel_jump_to(struct kshark_trace_histo *histo, size_t ts)
+{
+	size_t min, max, range_min;
+
+	if (ts > histo->min && ts < histo->max) {
+		/*
+		 * The new position is already inside the range.
+		 * Do nothing in this case.
+		 */
+		return;
+	}
+
+	/*
+	 * Calculate the new range without changing the size and the number
+	 * of bins.
+	 */
+	min = ts - histo->n_bins * histo->bin_size / 2;
+
+	/* Make sure that the range does not go outside of the dataset. */
+	if (min < histo->data[0]->ts)
+		min = histo->data[0]->ts;
+
+	range_min = histo->data[histo->data_size - 1]->ts -
+		   histo->n_bins * histo->bin_size;
+
+	if (min > range_min)
+		min = range_min;
+
+	max = min + histo->n_bins * histo->bin_size;
+
+	/* Use the new range to recalculate all bins from scratch. */
+	ksmodel_set_bining(histo, histo->n_bins, min, max);
+	ksmodel_fill(histo, histo->data, histo->data_size);
+}
+
+/**
+ * @brief Extend the time-window of the model. Recalculate the current state
+ *	  of the model.
+ * @param histo: Input location for the model descriptor.
+ * @param r: Scale factor of the zoom-out.
+ * @param mark: Focus point of the zoom-out.
+ */
+void ksmodel_zoom_out(struct kshark_trace_histo *histo,
+		      double r, int mark)
+{
+	size_t range, min, max, delta_min;
+	double delta_tot;
+
+	if (!histo->data_size)
+		return;
+
+	/*
+	 * If the marker is not set, assume that the focal point of the zoom
+	 * is the center of the range.
+	 */
+	if (mark < 0)
+		mark = histo->n_bins / 2;
+
+	/*
+	 * Calculate the new range of the histo. Use the bin of the marker
+	 * as a focal point for the zoom-out. With this the marker will stay
+	 * inside the same bin in the new histo.
+	 */
+	range = histo->max - histo->min;
+	delta_tot = range * r;
+	delta_min = delta_tot * mark / histo->n_bins;
+
+	min = histo->min - delta_min;
+	max = histo->max + (size_t) delta_tot - delta_min;
+
+	/* Make sure the new range doesn't go outside of the dataset. */
+	if (min < histo->data[0]->ts)
+		min = histo->data[0]->ts;
+
+	if (max > histo->data[histo->data_size - 1]->ts)
+		max = histo->data[histo->data_size - 1]->ts;
+
+	/*
+	 * Use the new range to recalculate all bins from scratch. Enforce
+	 * "In Range" adjustment of the range of the model, in order to avoid
+	 * slowly drifting outside of the data-set in the case when the very
+	 * first or the very last entry is used as a focal point.
+	 */
+	ksmodel_set_in_range_bining(histo, histo->n_bins, min, max, true);
+	ksmodel_fill(histo, histo->data, histo->data_size);
+}
+
+/**
+ * @brief Shrink the time-window of the model. Recalculate the current state
+ *	  of the model.
+ * @param histo: Input location for the model descriptor.
+ * @param r: Scale factor of the zoom-in.
+ * @param mark: Focus point of the zoom-in.
+ */
+void ksmodel_zoom_in(struct kshark_trace_histo *histo,
+		     double r, int mark)
+{
+	size_t range, min, max, delta_min;
+	double delta_tot;
+
+	if (!histo->data_size)
+		return;
+
+	/*
+	 * If the marker is not set, assume that the focal point of the zoom
+	 * is the center of the range.
+	 */
+	if (mark < 0)
+		mark = histo->n_bins / 2;
+
+	range = histo->max - histo->min;
+
+	/* Avoid overzooming. */
+	if (range < histo->n_bins * 4)
+		return;
+
+	/*
+	 * Calculate the new range of the histo. Use the bin of the marker
+	 * as a focal point for the zoom-in. With this the marker will stay
+	 * inside the same bin in the new histo.
+	 */
+	delta_tot =  range * r;
+	if (mark == (int)histo->n_bins - 1)
+		delta_min = delta_tot;
+	else if (mark == 0)
+		delta_min = 0;
+	else
+		delta_min = delta_tot * mark / histo->n_bins;
+
+	min = histo->min + delta_min;
+	max = histo->max - (size_t) delta_tot + delta_min;
+
+	/*
+	 * Use the new range to recalculate all bins from scratch. Enforce
+	 * "In Range" adjustment of the range of the model, in order to avoid
+	 * slowly drifting outside of the data-set in the case when the very
+	 * first or the very last entry is used as a focal point.
+	 */
+	ksmodel_set_in_range_bining(histo, histo->n_bins, min, max, true);
+	ksmodel_fill(histo, histo->data, histo->data_size);
+}
+
+/**
+ * @brief Get the index of the first entry in a given bin.
+ * @param histo: Input location for the model descriptor.
+ * @param bin: Bin id.
+ * @returns Index of the first entry in this bin. If the bin is empty the
+ *	    function returns negative error identifier (KS_EMPTY_BIN).
+ */
+ssize_t ksmodel_first_index_at_bin(struct kshark_trace_histo *histo, int bin)
+{
+	if (bin >= 0 && bin < (int) histo->n_bins)
+		return histo->map[bin];
+
+	if (bin == UPPER_OVERFLOW_BIN)
+		return histo->map[histo->n_bins];
+
+	if (bin == LOWER_OVERFLOW_BIN)
+		return histo->map[histo->n_bins + 1];
+
+	return KS_EMPTY_BIN;
+}
+
+/**
+ * @brief Get the index of the last entry in a given bin.
+ * @param histo: Input location for the model descriptor.
+ * @param bin: Bin id.
+ * @returns Index of the last entry in this bin. If the bin is empty the
+ *	    function returns negative error identifier (KS_EMPTY_BIN).
+ */
+ssize_t ksmodel_last_index_at_bin(struct kshark_trace_histo *histo, int bin)
+{
+	ssize_t index = ksmodel_first_index_at_bin(histo, bin);
+	size_t count = ksmodel_bin_count(histo, bin);
+
+	if (index >= 0 && count)
+		index += count - 1;
+
+	return index;
+}
+
+static bool ksmodel_is_visible(struct kshark_entry *e)
+{
+	if ((e->visible & KS_GRAPH_VIEW_FILTER_MASK) &&
+	    (e->visible & KS_EVENT_VIEW_FILTER_MASK))
+		return true;
+
+	return false;
+}
+
+static struct kshark_entry_request *
+ksmodel_entry_front_request_alloc(struct kshark_trace_histo *histo,
+				  int bin, bool vis_only,
+				  matching_condition_func func, int val)
+{
+	struct kshark_entry_request *req;
+	size_t first, n;
+
+	/* Get the number of entries in this bin. */
+	n = ksmodel_bin_count(histo, bin);
+	if (!n)
+		return NULL;
+
+	first = ksmodel_first_index_at_bin(histo, bin);
+
+	req = kshark_entry_request_alloc(first, n,
+					 func, val,
+					 vis_only, KS_GRAPH_VIEW_FILTER_MASK);
+
+	return req;
+}
+
+static struct kshark_entry_request *
+ksmodel_entry_back_request_alloc(struct kshark_trace_histo *histo,
+				 int bin, bool vis_only,
+				 matching_condition_func func, int val)
+{
+	struct kshark_entry_request *req;
+	size_t first, n;
+
+	/* Get the number of entries in this bin. */
+	n = ksmodel_bin_count(histo, bin);
+	if (!n)
+		return NULL;
+
+	first = ksmodel_last_index_at_bin(histo, bin);
+
+	req = kshark_entry_request_alloc(first, n,
+					 func, val,
+					 vis_only, KS_GRAPH_VIEW_FILTER_MASK);
+
+	return req;
+}
+
+/**
+ * @brief Get the index of the first entry from a given Cpu in a given bin.
+ * @param histo: Input location for the model descriptor.
+ * @param bin: Bin id.
+ * @param cpu: Cpu Id.
+ * @returns Index of the first entry from a given Cpu in this bin.
+ */
+ssize_t ksmodel_first_index_at_cpu(struct kshark_trace_histo *histo,
+				   int bin, int cpu)
+{
+	size_t i, n, first, not_found = KS_EMPTY_BIN;
+
+	n = ksmodel_bin_count(histo, bin);
+	if (!n)
+		return not_found;
+
+	first = ksmodel_first_index_at_bin(histo, bin);
+
+	for (i = first; i < first + n; ++i) {
+		if (histo->data[i]->cpu == cpu) {
+			if (ksmodel_is_visible(histo->data[i]))
+				return i;
+			else
+				not_found = KS_FILTERED_BIN;
+		}
+	}
+
+	return not_found;
+}
+
+/**
+ * @brief Get the index of the first entry from a given Task in a given bin.
+ * @param histo: Input location for the model descriptor.
+ * @param bin: Bin id.
+ * @param pid: Process Id of a task.
+ * @returns Index of the first entry from a given Task in this bin.
+ */
+ssize_t ksmodel_first_index_at_pid(struct kshark_trace_histo *histo,
+				   int bin, int pid)
+{
+	size_t i, n, first, not_found = KS_EMPTY_BIN;
+
+	n = ksmodel_bin_count(histo, bin);
+	if (!n)
+		return not_found;
+
+	first = ksmodel_first_index_at_bin(histo, bin);
+
+	for (i = first; i < first + n; ++i) {
+		if (histo->data[i]->pid == pid) {
+			if (ksmodel_is_visible(histo->data[i]))
+				return i;
+			else
+				not_found = KS_FILTERED_BIN;
+		}
+	}
+
+	return not_found;
+}
+
+/**
+ * @brief In a given bin, start from the front end of the bin and go towards
+ *	  the back end, searching for an entry satisfying the Matching
+ *	  condition defined by a Matching condition function.
+ * @param histo: Input location for the model descriptor.
+ * @param bin: Bin id.
+ * @param vis_only: If true, a visible entry is requested.
+ * @param func: Matching condition function.
+ * @param val: Matching condition value, used by the Matching condition
+ *	       function.
+ * @param index: Optional output location for the index of the requested
+ *		 entry inside the array.
+ * @returns Pointer to a kshark_entry, if an entry has been found. Else NULL.
+ */
+const struct kshark_entry *
+ksmodel_get_entry_front(struct kshark_trace_histo *histo,
+			int bin, bool vis_only,
+			matching_condition_func func, int val,
+			ssize_t *index)
+{
+	struct kshark_entry_request *req;
+	const struct kshark_entry *entry;
+
+	if (index)
+		*index = KS_EMPTY_BIN;
+
+	/* Set the position at the beginning of the bin and go forward. */
+	req = ksmodel_entry_front_request_alloc(histo, bin, vis_only,
+							    func, val);
+	if (!req)
+		return NULL;
+
+	entry = kshark_get_entry_front(req, histo->data, index);
+	free(req);
+
+	return entry;
+}
+
+/**
+ * @brief In a given bin, start from the back end of the bin and go towards
+ *	  the front end, searching for an entry satisfying the Matching
+ *	  condition defined by a Matching condition function.
+ * @param histo: Input location for the model descriptor.
+ * @param bin: Bin id.
+ * @param vis_only: If true, a visible entry is requested.
+ * @param func: Matching condition function.
+ * @param val: Matching condition value, used by the Matching condition
+ *	       function.
+ * @param index: Optional output location for the index of the requested
+ *		 entry inside the array.
+ * @returns Pointer to a kshark_entry, if an entry has been found. Else NULL.
+ */
+const struct kshark_entry *
+ksmodel_get_entry_back(struct kshark_trace_histo *histo,
+		       int bin, bool vis_only,
+		       matching_condition_func func, int val,
+		       ssize_t *index)
+{
+	struct kshark_entry_request *req;
+	const struct kshark_entry *entry;
+
+	if (index)
+		*index = KS_EMPTY_BIN;
+
+	/* Set the position at the end of the bin and go backwards. */
+	req = ksmodel_entry_back_request_alloc(histo, bin, vis_only,
+							   func, val);
+	if (!req)
+		return NULL;
+
+	entry = kshark_get_entry_back(req, histo->data, index);
+	free(req);
+
+	return entry;
+}
+
+static int ksmodel_get_entry_pid(const struct kshark_entry *entry)
+{
+	if (!entry) {
+		/* No data has been found. */
+		return KS_EMPTY_BIN;
+	}
+
+	/*
+	 * Note that if some data has been found, but this data is
+	 * filtered-out, the Dummy entry is returned. The PID of the Dummy
+	 * entry is KS_FILTERED_BIN.
+	 */
+
+	return entry->pid;
+}
+
+/**
+ * @brief In a given bin, start from the front end of the bin and go towards
+ *	  the back end, searching for an entry from a given CPU. Return
+ *	  the Process Id of the task of the entry found.
+ * @param histo: Input location for the model descriptor.
+ * @param bin: Bin id.
+ * @param cpu: CPU Id.
+ * @param vis_only: If true, a visible entry is requested.
+ * @param index: Optional output location for the index of the requested
+ *		 entry inside the array.
+ * @returns Process Id of the task if an entry has been found. Else a negative
+ *	    Identifier (KS_EMPTY_BIN or KS_FILTERED_BIN).
+ */
+int ksmodel_get_pid_front(struct kshark_trace_histo *histo,
+			  int bin, int cpu, bool vis_only,
+			  ssize_t *index)
+{
+	const struct kshark_entry *entry;
+
+	if (cpu < 0)
+		return KS_EMPTY_BIN;
+
+	entry = ksmodel_get_entry_front(histo, bin, vis_only,
+					       kshark_match_cpu, cpu,
+					       index);
+	return ksmodel_get_entry_pid(entry);
+}
+
+/**
+ * @brief In a given bin, start from the back end of the bin and go towards
+ *	  the front end, searching for an entry from a given CPU. Return
+ *	  the Process Id of the task of the entry found.
+ * @param histo: Input location for the model descriptor.
+ * @param bin: Bin id.
+ * @param cpu: CPU Id.
+ * @param vis_only: If true, a visible entry is requested.
+ * @param index: Optional output location for the index of the requested
+ *		 entry inside the array.
+ * @returns Process Id of the task if an entry has been found. Else a negative
+ *	    Identifier (KS_EMPTY_BIN or KS_FILTERED_BIN).
+ */
+int ksmodel_get_pid_back(struct kshark_trace_histo *histo,
+			 int bin, int cpu, bool vis_only,
+			 ssize_t *index)
+{
+	const struct kshark_entry *entry;
+
+	if (cpu < 0)
+		return KS_EMPTY_BIN;
+
+	entry = ksmodel_get_entry_back(histo, bin, vis_only,
+					      kshark_match_cpu, cpu,
+					      index);
+
+	return ksmodel_get_entry_pid(entry);
+}
+
+static int ksmodel_get_entry_cpu(const struct kshark_entry *entry)
+{
+	if (!entry) {
+		/* No data has been found. */
+		return KS_EMPTY_BIN;
+	}
+
+	/*
+	 * Note that if some data has been found, but this data is
+	 * filtered-out, the Dummy entry is returned. The CPU Id of the Dummy
+	 * entry is KS_FILTERED_BIN.
+	 */
+
+	return entry->cpu;
+}
+
+/**
+ * @brief In a given bin, start from the front end of the bin and go towards
+ *	  the back end, searching for an entry from a given PID. Return
+ *	  the CPU Id of the entry found.
+ * @param histo: Input location for the model descriptor.
+ * @param bin: Bin id.
+ * @param pid: Process Id.
+ * @param vis_only: If true, a visible entry is requested.
+ * @param index: Optional output location for the index of the requested
+ *		 entry inside the array.
+ * @returns CPU Id of the entry if an entry has been found. Else a negative
+ *	    Identifier (KS_EMPTY_BIN or KS_FILTERED_BIN).
+ */
+int ksmodel_get_cpu_front(struct kshark_trace_histo *histo,
+			  int bin, int pid, bool vis_only,
+			  ssize_t *index)
+{
+	const struct kshark_entry *entry;
+
+	if (pid < 0)
+		return KS_EMPTY_BIN;
+
+	entry = ksmodel_get_entry_front(histo, bin, vis_only,
+					       kshark_match_pid, pid,
+					       index);
+	return ksmodel_get_entry_cpu(entry);
+}
+
+/**
+ * @brief In a given bin, start from the back end of the bin and go towards
+ *	  the front end, searching for an entry from a given PID. Return
+ *	  the CPU Id of the entry found.
+ * @param histo: Input location for the model descriptor.
+ * @param bin: Bin id.
+ * @param pid: Process Id.
+ * @param vis_only: If true, a visible entry is requested.
+ * @param index: Optional output location for the index of the requested
+ *		 entry inside the array.
+ * @returns CPU Id of the entry if an entry has been found. Else a negative
+ *	    Identifier (KS_EMPTY_BIN or KS_FILTERED_BIN).
+ */
+int ksmodel_get_cpu_back(struct kshark_trace_histo *histo,
+			 int bin, int pid, bool vis_only,
+			 ssize_t *index)
+{
+	const struct kshark_entry *entry;
+
+	if (pid < 0)
+		return KS_EMPTY_BIN;
+
+	entry = ksmodel_get_entry_back(histo, bin, vis_only,
+					      kshark_match_pid, pid,
+					      index);
+
+	return ksmodel_get_entry_cpu(entry);
+}
+
+/**
+ * @brief Check if a visible trace event from a given Cpu exists in this bin.
+ * @param histo: Input location for the model descriptor.
+ * @param bin: Bin id.
+ * @param cpu: Cpu Id.
+ * @param index: Optional output location for the index of the requested
+ *		 entry inside the array.
+ * @returns True, if a visible entry exists in this bin. Else false.
+ */
+bool ksmodel_cpu_visible_event_exist(struct kshark_trace_histo *histo,
+				     int bin, int cpu, ssize_t *index)
+{
+	struct kshark_entry_request *req;
+	const struct kshark_entry *entry;
+
+	if (index)
+		*index = KS_EMPTY_BIN;
+
+	/* Set the position at the beginning of the bin and go forward. */
+	req = ksmodel_entry_front_request_alloc(histo,
+						bin, true,
+						kshark_match_cpu, cpu);
+	if (!req)
+		return false;
+
+	/*
+	 * The default visibility mask of the Model Data request is
+	 * KS_GRAPH_VIEW_FILTER_MASK. Change the mask to
+	 * KS_EVENT_VIEW_FILTER_MASK because we want to find a visible event.
+	 */
+	req->vis_mask = KS_EVENT_VIEW_FILTER_MASK;
+
+	entry = kshark_get_entry_front(req, histo->data, index);
+	free(req);
+
+	if (!entry || !entry->visible) {
+		/* No visible entry has been found. */
+		return false;
+	}
+
+	return true;
+}
+
+/**
+ * @brief Check if a visible trace event from a given Task exists in this bin.
+ * @param histo: Input location for the model descriptor.
+ * @param bin: Bin id.
+ * @param pid: Process Id of the task.
+ * @param index: Optional output location for the index of the requested
+ *		 entry inside the array.
+ * @returns True, if a visible entry exists in this bin. Else false.
+ */
+bool ksmodel_task_visible_event_exist(struct kshark_trace_histo *histo,
+				      int bin, int pid, ssize_t *index)
+{
+	struct kshark_entry_request *req;
+	const struct kshark_entry *entry;
+
+	if (index)
+		*index = KS_EMPTY_BIN;
+
+	/* Set the position at the beginning of the bin and go forward. */
+	req = ksmodel_entry_front_request_alloc(histo,
+						bin, true,
+						kshark_match_pid, pid);
+	if (!req)
+		return false;
+
+	/*
+	 * The default visibility mask of the Model Data request is
+	 * KS_GRAPH_VIEW_FILTER_MASK. Change the mask to
+	 * KS_EVENT_VIEW_FILTER_MASK because we want to find a visible event.
+	 */
+	req->vis_mask = KS_EVENT_VIEW_FILTER_MASK;
+
+	entry = kshark_get_entry_front(req, histo->data, index);
+	free(req);
+
+	if (!entry || !entry->visible) {
+		/* No visible entry has been found. */
+		return false;
+	}
+
+	return true;
+}
diff --git a/kernel-shark-qt/src/libkshark-model.h b/kernel-shark-qt/src/libkshark-model.h
new file mode 100644
index 0000000..15391a9
--- /dev/null
+++ b/kernel-shark-qt/src/libkshark-model.h
@@ -0,0 +1,142 @@
+/* SPDX-License-Identifier: LGPL-2.1 */
+
+/*
+ * Copyright (C) 2017 VMware Inc, Yordan Karadzhov <y.karadz@gmail.com>
+ */
+
+ /**
+  *  @file    libkshark-model.h
+  *  @brief   Visualization model for FTRACE (trace-cmd) data.
+  */
+
+#ifndef _LIB_KSHARK_MODEL_H
+#define _LIB_KSHARK_MODEL_H
+
+// KernelShark
+#include "libkshark.h"
+
+#ifdef __cplusplus
+extern "C" {
+#endif // __cplusplus
+
+/** Overflow Bin identifiers. */
+enum OverflowBin {
+	/** Identifier of the Upper Overflow Bin. */
+	UPPER_OVERFLOW_BIN = -1,
+
+	/** Identifier of the Lower Overflow Bin. */
+	LOWER_OVERFLOW_BIN = -2,
+};
+
+/** Structure describing the current state of the visualization model. */
+struct kshark_trace_histo {
+	/** Trace data. */
+	struct kshark_entry	**data;
+
+	/** The size of the data. */
+	size_t			data_size;
+
+	/** The index of the first entry in each bin. */
+	ssize_t			*map;
+
+	/** Number of entries in each bin. */
+	size_t			*bin_count;
+
+	/** Lower edge of the time-window to be visualized. */
+	uint64_t		min;
+
+	/** Upper edge of the time-window to be visualized. */
+	uint64_t		max;
+
+	/** The size of the bins. */
+	uint64_t		bin_size;
+
+	/** Number of bins. */
+	int			n_bins;
+};
+
+void ksmodel_init(struct kshark_trace_histo *histo);
+
+void ksmodel_clear(struct kshark_trace_histo *histo);
+
+void ksmodel_set_bining(struct kshark_trace_histo *histo,
+			size_t n, uint64_t min, uint64_t max);
+
+void ksmodel_fill(struct kshark_trace_histo *histo,
+		  struct kshark_entry **data, size_t n);
+
+size_t ksmodel_bin_count(struct kshark_trace_histo *histo, int bin);
+
+void ksmodel_shift_forward(struct kshark_trace_histo *histo, size_t n);
+
+void ksmodel_shift_backward(struct kshark_trace_histo *histo, size_t n);
+
+void ksmodel_jump_to(struct kshark_trace_histo *histo, size_t ts);
+
+void ksmodel_zoom_out(struct kshark_trace_histo *histo,
+		      double r, int mark);
+
+void ksmodel_zoom_in(struct kshark_trace_histo *histo,
+		     double r, int mark);
+
+ssize_t ksmodel_first_index_at_bin(struct kshark_trace_histo *histo, int bin);
+
+ssize_t ksmodel_last_index_at_bin(struct kshark_trace_histo *histo, int bin);
+
+ssize_t ksmodel_first_index_at_cpu(struct kshark_trace_histo *histo,
+				   int bin, int cpu);
+
+ssize_t ksmodel_first_index_at_pid(struct kshark_trace_histo *histo,
+				   int bin, int pid);
+
+const struct kshark_entry *
+ksmodel_get_entry_front(struct kshark_trace_histo *histo,
+			int bin, bool vis_only,
+			matching_condition_func func, int val,
+			ssize_t *index);
+
+const struct kshark_entry *
+ksmodel_get_entry_back(struct kshark_trace_histo *histo,
+		       int bin, bool vis_only,
+		       matching_condition_func func, int val,
+		       ssize_t *index);
+
+int ksmodel_get_pid_front(struct kshark_trace_histo *histo,
+			  int bin, int cpu, bool vis_only,
+			  ssize_t *index);
+
+int ksmodel_get_pid_back(struct kshark_trace_histo *histo,
+			 int bin, int cpu, bool vis_only,
+			 ssize_t *index);
+
+int ksmodel_get_cpu_front(struct kshark_trace_histo *histo,
+			  int bin, int pid, bool vis_only,
+			  ssize_t *index);
+
+int ksmodel_get_cpu_back(struct kshark_trace_histo *histo,
+			 int bin, int pid, bool vis_only,
+			 ssize_t *index);
+
+bool ksmodel_cpu_visible_event_exist(struct kshark_trace_histo *histo,
+				     int bin, int cpu, ssize_t *index);
+
+bool ksmodel_task_visible_event_exist(struct kshark_trace_histo *histo,
+				      int bin, int pid, ssize_t *index);
+
+static inline double ksmodel_bin_time(struct kshark_trace_histo *histo,
+				      int bin)
+{
+	return (histo->min + bin*histo->bin_size) * 1e-9;
+}
+
+static inline uint64_t ksmodel_bin_ts(struct kshark_trace_histo *histo,
+				      int bin)
+{
+	return (histo->min + bin*histo->bin_size);
+}
+
+#ifdef __cplusplus
+}
+#endif // __cplusplus
+
+#endif
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH v2 4/7] kernel-shark-qt: Add an example showing how to manipulate the Vis. model.
  2018-07-31 13:52 [PATCH v2 0/7] Add visualization model for the Qt-based KernelShark Yordan Karadzhov (VMware)
                   ` (2 preceding siblings ...)
  2018-07-31 13:52 ` [PATCH v2 3/7] kernel-shark-qt: Introduce the visualization model used by the Qt-based KS Yordan Karadzhov (VMware)
@ 2018-07-31 13:52 ` Yordan Karadzhov (VMware)
  2018-07-31 13:52 ` [PATCH v2 5/7] kernel-shark-qt: Define Data collections Yordan Karadzhov (VMware)
                   ` (2 subsequent siblings)
  6 siblings, 0 replies; 21+ messages in thread
From: Yordan Karadzhov (VMware) @ 2018-07-31 13:52 UTC (permalink / raw)
  To: rostedt; +Cc: linux-trace-devel, Yordan Karadzhov (VMware)

This patch introduces a basic example, showing how to initialize the
Visualization model and to use the API to perform some of the basic
operations.

Signed-off-by: Yordan Karadzhov (VMware) <y.karadz@gmail.com>
---
 kernel-shark-qt/examples/CMakeLists.txt |   4 +
 kernel-shark-qt/examples/datahisto.c    | 155 ++++++++++++++++++++++++
 2 files changed, 159 insertions(+)
 create mode 100644 kernel-shark-qt/examples/datahisto.c

diff --git a/kernel-shark-qt/examples/CMakeLists.txt b/kernel-shark-qt/examples/CMakeLists.txt
index 009fd1e..6906eba 100644
--- a/kernel-shark-qt/examples/CMakeLists.txt
+++ b/kernel-shark-qt/examples/CMakeLists.txt
@@ -7,3 +7,7 @@ target_link_libraries(dload   kshark)
 message(STATUS "datafilter")
 add_executable(dfilter          datafilter.c)
 target_link_libraries(dfilter   kshark)
+
+message(STATUS "datahisto")
+add_executable(dhisto          datahisto.c)
+target_link_libraries(dhisto   kshark)
diff --git a/kernel-shark-qt/examples/datahisto.c b/kernel-shark-qt/examples/datahisto.c
new file mode 100644
index 0000000..3f19870
--- /dev/null
+++ b/kernel-shark-qt/examples/datahisto.c
@@ -0,0 +1,155 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/*
+ * Copyright (C) 2018 VMware Inc, Yordan Karadzhov <y.karadz@gmail.com>
+ */
+
+// C
+#include <stdio.h>
+#include <stdlib.h>
+
+// KernelShark
+#include "libkshark.h"
+#include "libkshark-model.h"
+
+#define N_BINS 5
+
+const char *default_file = "trace.dat";
+
+void dump_bin(struct kshark_trace_histo *histo, int bin,
+	      const char *type, int val)
+{
+	const struct kshark_entry *e_front, *e_back;
+	char *entry_str;
+	ssize_t i_front, i_back;
+
+	printf("bin %i {\n", bin);
+	if (strcmp(type, "cpu") == 0) {
+		e_front = ksmodel_get_entry_front(histo, bin, true,
+						  kshark_match_cpu, val,
+						  &i_front);
+
+		e_back = ksmodel_get_entry_back(histo, bin, true,
+						kshark_match_cpu, val,
+						&i_back);
+	} else if (strcmp(type, "task") == 0) {
+		e_front = ksmodel_get_entry_front(histo, bin, true,
+						  kshark_match_pid, val,
+						  &i_front);
+
+		e_back = ksmodel_get_entry_back(histo, bin, true,
+						kshark_match_pid, val,
+						&i_back);
+	} else {
+		i_front = ksmodel_first_index_at_bin(histo, bin);
+		e_front = histo->data[i_front];
+
+		i_back = ksmodel_last_index_at_bin(histo, bin);
+		e_back = histo->data[i_back];
+	}
+
+	if (i_front == KS_EMPTY_BIN) {
+		puts ("EMPTY BIN");
+	} else {
+		entry_str = kshark_dump_entry(e_front);
+		printf("%li -> %s\n", i_front, entry_str);
+		free(entry_str);
+
+		entry_str = kshark_dump_entry(e_back);
+		printf("%li -> %s\n", i_back, entry_str);
+		free(entry_str);
+	}
+
+	puts("}\n");
+}
+
+void dump_histo(struct kshark_trace_histo *histo, const char *type, int val)
+{
+	size_t bin;
+
+	for (bin = 0; bin < histo->n_bins; ++bin)
+		dump_bin(histo, bin, type, val);
+}
+
+int main(int argc, char **argv)
+{
+	struct kshark_context *kshark_ctx;
+	struct kshark_entry **data = NULL;
+	struct kshark_trace_histo histo;
+	size_t i, n_rows, n_tasks;
+	bool status;
+	int *pids;
+
+	/* Create a new kshark session. */
+	kshark_ctx = NULL;
+	if (!kshark_instance(&kshark_ctx))
+		return 1;
+
+	/* Open a trace data file produced by trace-cmd. */
+	if (argc > 1)
+		status = kshark_open(kshark_ctx, argv[1]);
+	else
+		status = kshark_open(kshark_ctx, default_file);
+
+	if (!status) {
+		kshark_free(kshark_ctx);
+		return 1;
+	}
+
+	/* Load the content of the file into an array of entries. */
+	n_rows = kshark_load_data_entries(kshark_ctx, &data);
+
+	/* Get a list of all tasks. */
+	n_tasks = kshark_get_task_pids(kshark_ctx, &pids);
+
+	/* Initialize the Visualization Model. */
+	ksmodel_init(&histo);
+	ksmodel_set_bining(&histo, N_BINS, data[0]->ts,
+					   data[n_rows - 1]->ts);
+
+	/* Fill the model with data and calculate its state. */
+	ksmodel_fill(&histo, data, n_rows);
+
+	/* Dump the raw bins. */
+	dump_histo(&histo, "", 0);
+
+	puts("\n...\n\n");
+
+	/*
+	 * Change the state of the model. Do 50% Zoom-In and dump only CPU 0.
+	 */
+	ksmodel_zoom_in(&histo, .50, -1);
+	dump_histo(&histo, "cpu", 0);
+
+	puts("\n...\n\n");
+
+	/* Shift forward by two bins and this time dump only CPU 1. */
+	ksmodel_shift_forward(&histo, 2);
+	dump_histo(&histo, "cpu", 1);
+
+	puts("\n...\n\n");
+
+	/*
+	 * Do 10% Zoom-Out, using the last bin as a focal point. Dump the last
+	 * Task.
+	 */
+	ksmodel_zoom_out(&histo, .10, N_BINS - 1);
+	dump_histo(&histo, "task", pids[n_tasks - 1]);
+
+	/* Reset (clear) the model. */
+	ksmodel_clear(&histo);
+
+	/* Free the memory. */
+	for (i = 0; i < n_rows; ++i)
+		free(data[i]);
+
+	free(data);
+
+	/* Close the file. */
+	kshark_close(kshark_ctx);
+
+	/* Close the session. */
+	kshark_free(kshark_ctx);
+
+	return 0;
+}
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH v2 5/7] kernel-shark-qt: Define Data collections
  2018-07-31 13:52 [PATCH v2 0/7] Add visualization model for the Qt-based KernelShark Yordan Karadzhov (VMware)
                   ` (3 preceding siblings ...)
  2018-07-31 13:52 ` [PATCH v2 4/7] kernel-shark-qt: Add an example showing how to manipulate the Vis. model Yordan Karadzhov (VMware)
@ 2018-07-31 13:52 ` Yordan Karadzhov (VMware)
  2018-07-31 13:52 ` [PATCH v2 6/7] kernel-shark-qt: Make the Vis. model use " Yordan Karadzhov (VMware)
  2018-07-31 13:52 ` [PATCH v2 7/7] kernel-shark-qt: Changed the KernelShark version identifier Yordan Karadzhov (VMware)
  6 siblings, 0 replies; 21+ messages in thread
From: Yordan Karadzhov (VMware) @ 2018-07-31 13:52 UTC (permalink / raw)
  To: rostedt; +Cc: linux-trace-devel, Yordan Karadzhov (VMware)

Data collections are used to optimize the search for an entry having
an abstract property, defined by a Matching condition function and a
value. When a collection is processed, the data which is relevant for
the collection is enclosed in "Data intervals", defined by pairs of
"Resume" and "Break" points. It is guaranteed that the data outside of
the intervals contains no entries satisfying the abstract matching
condition. Once defined, the Data collection can be used when searching
for an entry having the same abstract property. The collection allows the
irrelevant data to be ignored, thus eliminating the linear worst-case time
complexity of the search.
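
For illustration only (this sketch is not part of the patch), a collection
could be registered and then used for a search roughly as follows, assuming
the API added by this patch and a data set already loaded with
kshark_load_data_entries(). The helper name is made up for the example and
the "pid" value is whatever task the caller is interested in:

	#include "libkshark.h"

	/* Find the first entry of a given task, using a Data collection. */
	static const struct kshark_entry *
	find_first_for_pid(struct kshark_context *kshark_ctx,
			   struct kshark_entry **data, size_t n_rows, int pid)
	{
		struct kshark_entry_collection *col;
		struct kshark_entry_request *req;
		const struct kshark_entry *entry;
		ssize_t index;

		/* Enclose all entries of this task in Resume/Break intervals. */
		col = kshark_register_data_collection(kshark_ctx, data, n_rows,
						      kshark_match_pid, pid, 0);
		if (!col)
			return NULL;

		/* Request a front (increasing timestamps) search over all data. */
		req = kshark_entry_request_alloc(0, n_rows,
						 kshark_match_pid, pid,
						 false, 0);
		if (!req)
			return NULL;

		/* The collection lets the search skip the irrelevant intervals. */
		entry = kshark_get_collection_entry_front(&req, data, col, &index);

		/* Free what remains of the (re-mapped) request list. */
		kshark_free_entry_request(req);

		return entry;
	}

The collection stays registered in kshark_ctx->collections, so later searches
for the same task can reuse it via kshark_find_data_collection().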

Signed-off-by: Yordan Karadzhov (VMware) <y.karadz@gmail.com>
---
 kernel-shark-qt/src/CMakeLists.txt         |   3 +-
 kernel-shark-qt/src/libkshark-collection.c | 827 +++++++++++++++++++++
 kernel-shark-qt/src/libkshark.c            |  16 +
 kernel-shark-qt/src/libkshark.h            |  79 ++
 4 files changed, 924 insertions(+), 1 deletion(-)
 create mode 100644 kernel-shark-qt/src/libkshark-collection.c

diff --git a/kernel-shark-qt/src/CMakeLists.txt b/kernel-shark-qt/src/CMakeLists.txt
index ec22f63..cd42920 100644
--- a/kernel-shark-qt/src/CMakeLists.txt
+++ b/kernel-shark-qt/src/CMakeLists.txt
@@ -2,7 +2,8 @@ message("\n src ...")
 
 message(STATUS "libkshark")
 add_library(kshark SHARED libkshark.c
-                          libkshark-model.c)
+                          libkshark-model.c
+                          libkshark-collection.c)
 
 target_link_libraries(kshark ${CMAKE_DL_LIBS}
                              ${TRACEEVENT_LIBRARY}
diff --git a/kernel-shark-qt/src/libkshark-collection.c b/kernel-shark-qt/src/libkshark-collection.c
new file mode 100644
index 0000000..79ada7f
--- /dev/null
+++ b/kernel-shark-qt/src/libkshark-collection.c
@@ -0,0 +1,827 @@
+// SPDX-License-Identifier: LGPL-2.1
+
+/*
+ * Copyright (C) 2018 VMware Inc, Yordan Karadzhov <y.karadz@gmail.com>
+ */
+
+ /**
+  *  @file    libkshark-collection.c
+  *  @brief   Data Collections.
+  */
+
+//C
+#include <stdbool.h>
+#include <stdlib.h>
+#include <assert.h>
+
+// KernelShark
+#include "libkshark.h"
+
+/* Quiet warnings over documenting simple structures */
+//! @cond Doxygen_Suppress
+
+enum collection_point_type {
+	COLLECTION_IGNORE = 0,
+	COLLECTION_RESUME,
+	COLLECTION_BREAK,
+};
+
+#define LAST_BIN		-3
+
+struct entry_list {
+	struct entry_list	*next;
+	size_t			index;
+	uint8_t			type;
+};
+
+//! @endcond
+
+static bool collection_add_entry(struct entry_list **list,
+				 size_t i, uint8_t type)
+{
+	struct entry_list *entry = *list;
+
+	if (entry->type != COLLECTION_IGNORE) {
+		entry->next = malloc(sizeof(*entry));
+		if (!entry->next)
+			return false;
+
+		entry = entry->next;
+		*list = entry;
+	}
+
+	entry->index = i;
+	entry->type = type;
+
+	return true;
+}
+
+static struct kshark_entry_collection *
+kshark_data_collection_alloc(struct kshark_context *kshark_ctx,
+			     struct kshark_entry **data,
+			     size_t first,
+			     size_t n_rows,
+			     matching_condition_func cond,
+			     int val,
+			     size_t margin)
+{
+	struct kshark_entry_collection *col_ptr = NULL;
+	struct kshark_entry *last_vis_entry = NULL;
+	struct entry_list *col_list, *temp;
+	size_t resume_count = 0, break_count = 0;
+	size_t i, j, end, last_added = 0;
+	bool good_data = false;
+
+	col_list = malloc(sizeof(*col_list));
+	if (!col_list)
+		goto fail;
+
+	temp = col_list;
+
+	if (margin != 0) {
+		/*
+		 * If this collection includes margin data, add a margin data
+		 * interval at the very beginning of the data-set.
+		 */
+		temp->index = first;
+		temp->type = COLLECTION_RESUME;
+		++resume_count;
+
+		collection_add_entry(&temp, first + margin - 1,
+				     COLLECTION_BREAK);
+		++break_count;
+	} else {
+		temp->type = COLLECTION_IGNORE;
+	}
+
+	end = first + n_rows - margin;
+	for (i = first + margin; i < end; ++i) {
+		if (!cond(kshark_ctx, data[i], val)) {
+			/*
+			 * The entry is irrelevant for this collection.
+			 * Do nothing.
+			 */
+			continue;
+		}
+
+		/* The Matching condition is satisfied. */
+		if (!good_data) {
+			/*
+			 * Resume the collection here. Add some margin data
+			 * in front of the data of interest.
+			 */
+			good_data = true;
+			if (last_added == 0 || last_added < i - margin) {
+				collection_add_entry(&temp, i - margin,
+						 COLLECTION_RESUME);
+				++resume_count;
+			} else {
+				/*
+				 * Ignore the last collection Break point.
+				 * Continue extending the previous data
+				 * interval.
+				 */
+				temp->type = COLLECTION_IGNORE;
+				--break_count;
+			}
+		} else if (good_data &&
+			   data[i]->next &&
+			   !cond(kshark_ctx, data[i]->next, val)) {
+			/*
+			 * Break the collection here. Add some margin data
+			 * after the data of interest.
+			 */
+			good_data = false;
+			last_vis_entry = data[i];
+
+			/* Keep adding entries until the "next" record. */
+			for (j = i + 1;
+			     j != end && last_vis_entry->next != data[j];
+			     j++)
+				;
+
+			/*
+			 * If the number of added entries is smaller than the
+			 * number of margin entries requested, keep adding
+			 * until you fill the margin.
+			 */
+			if (i + margin < j)
+				i = j;
+			else
+				i += margin;
+
+			last_added = i;
+			collection_add_entry(&temp, i, COLLECTION_BREAK);
+			++break_count;
+		}
+	}
+
+	if (good_data) {
+		collection_add_entry(&temp, end - 1, COLLECTION_BREAK);
+		++break_count;
+	}
+
+	if (margin != 0) {
+		/*
+		 * If this collection includes margin data, add a margin data
+		 * interval at the very end of the data-set.
+		 */
+		collection_add_entry(&temp, first + n_rows - margin,
+				 COLLECTION_RESUME);
+
+		++resume_count;
+
+		collection_add_entry(&temp, first + n_rows - 1,
+				 COLLECTION_BREAK);
+
+		++break_count;
+	}
+
+	/*
+	 * If everything is OK, we must have pairs of COLLECTION_RESUME
+	 * and COLLECTION_BREAK points.
+	 */
+	assert(break_count == resume_count);
+
+	/* Create the collection. */
+	col_ptr = malloc(sizeof(*col_ptr));
+	if (!col_ptr)
+		goto fail;
+
+	col_ptr->next = NULL;
+
+	col_ptr->resume_points = calloc(resume_count,
+					sizeof(*col_ptr->resume_points));
+	if (!col_ptr->resume_points)
+		goto fail;
+
+	col_ptr->break_points = calloc(break_count,
+				       sizeof(*col_ptr->break_points));
+	if (!col_ptr->break_points) {
+		free(col_ptr->resume_points);
+		goto fail;
+	}
+
+	col_ptr->cond = cond;
+	col_ptr->val = val;
+
+	col_ptr->size = resume_count;
+	for (i = 0; i < col_ptr->size; ++i) {
+		assert(col_list->type == COLLECTION_RESUME);
+		col_ptr->resume_points[i] = col_list->index;
+		temp = col_list;
+		col_list = col_list->next;
+		free(temp);
+
+		assert(col_list->type == COLLECTION_BREAK);
+		col_ptr->break_points[i] = col_list->index;
+		temp = col_list;
+		col_list = col_list->next;
+		free(temp);
+	}
+
+	return col_ptr;
+
+fail:
+	fprintf(stderr, "Failed to allocate memory for Data collection.\n");
+
+	free(col_ptr);
+	for (i = 0; i < resume_count + break_count; ++i) {
+		temp = col_list;
+		col_list = col_list->next;
+		free(temp);
+	}
+
+	return NULL;
+}
+
+static ssize_t
+map_collection_index_from_source(const struct kshark_entry_collection *col,
+				 size_t source_index)
+{
+	size_t l, h, mid;
+
+	if (!col->size || source_index > col->break_points[col->size - 1])
+		return KS_EMPTY_BIN;
+
+	l = 0;
+	h = col->size - 1;
+	if (source_index < col->resume_points[0])
+		return l;
+
+	if (source_index > col->resume_points[col->size - 1])
+		return LAST_BIN;
+
+	BSEARCH(h, l, (source_index > col->resume_points[mid]));
+
+	return h;
+}
+
+/*
+ * This function uses the intervals of the Data collection to transform the
+ * inputted single data request into a list of data requests. The new list of
+ * requests will ignore the data outside of the intervals of the collection.
+ */
+static int
+map_collection_back_request(const struct kshark_entry_collection *col,
+			    struct kshark_entry_request **req)
+{
+	struct kshark_entry_request *req_tmp = *req;
+	size_t req_first, req_end;
+	ssize_t col_index;
+	int req_count;
+
+	if (req_tmp->next || col->size == 0) {
+		fprintf(stderr, "Unexpected input in ");
+		fprintf(stderr, "map_collection_back_request()\n");
+		goto do_nothing;
+	}
+
+	req_end = req_tmp->first - req_tmp->n + 1;
+
+	/*
+	 * Find the first Resume Point of the collection which is equal or
+	 * greater than the first index of this request.
+	 */
+	col_index = map_collection_index_from_source(col, req_tmp->first);
+
+	/*
+	 * The value of "col_index" is ambiguous. Deal with all possible
+	 * cases.
+	 */
+	if (col_index == KS_EMPTY_BIN) {
+		/*
+		 * No overlap between the request and the collection.
+		 * Do nothing.
+		 */
+		goto do_nothing;
+	}
+
+	if (col_index == 0) {
+		if (req_tmp->first == col->resume_points[0]) {
+			/*
+			 * This is a special case. Because this is a Back
+			 * Request, if the beginning of this request is at
+			 * the Resume Point of the first interval, then there
+			 * is only one possible entry to look into.
+			 */
+			req_tmp->n = 1;
+			return 1;
+		}
+
+		/*
+		 * No overlap between the request and the collection.
+		 * Do nothing.
+		 */
+		goto do_nothing;
+	} else if (col_index > 0) {
+		/*
+		 * This is a Back Request, so the "col_index" interval of the
+		 * collection is guaranteed to be outside the requested data,
+		 * except in one special case.
+		 */
+		if (req_tmp->first == col->resume_points[col_index]) {
+			/*
+			 * We still have to check the very first entry of the
+			 * "col_index" interval.
+			 */
+			if (req_end > col->break_points[col_index - 1]) {
+				/*
+				 * The inputted request ends before the
+				 * beginning of the previous interval. There
+				 * is only one possible entry in this interval
+				 * to look into.
+				 */
+				req_tmp->n = 1;
+				return 1;
+			}
+		} else {
+			/* Move to the previous interval of the collection. */
+			--col_index;
+
+			if (req_tmp->first > col->break_points[col_index]) {
+				/*
+				 * The request overlaps with this interval of
+				 * the collection. Start from here, using the
+				 * Break Point of the interval as beginning of
+				 * the request.
+				 */
+				req_tmp->first = col->break_points[col_index];
+			}
+		}
+	} else if (col_index == LAST_BIN) {
+		/*
+		 * The inputted Back Request starts after the end of the last
+		 * interval of the collection.
+		 */
+		col_index = col->size - 1;
+		if (req_end > col->break_points[col_index]) {
+			/*
+			 * The inputted request ends after the end of the last
+			 * interval of the collection. There is no overlap
+			 * between the request and the collection. Do nothing.
+			 */
+			goto do_nothing;
+		}
+
+		/*
+		 * The request overlaps with last interval of the collection.
+		 * Start from here, using the Break Point of the last interval
+		 * as beginning of the request.
+		 */
+		req_tmp->first = col->break_points[col_index];
+	}
+
+	/*
+	 * Now loop over the intervals of the collection going backwards until
+	 * the end of the inputted request and create a separate request for
+	 * each of those intervals.
+	 */
+	req_count = 1;
+	while (col_index >= 0 && req_end <= col->break_points[col_index]) {
+		if (req_end >= col->resume_points[col_index]) {
+			/*
+			 * The last entry of the original request is inside
+			 * the "col_index" collection interval. Close the
+			 * collection request here and return.
+			 */
+			req_tmp->n = req_tmp->first - req_end + 1;
+			break;
+		}
+
+		/*
+		 * The last entry of the original request is outside of the
+		 * "col_index" interval. Close the collection request at the
+		 * end of this interval and move to the next one. Try to make
+		 * another request there.
+		 */
+		req_tmp->n = req_tmp->first -
+		             col->resume_points[col_index] + 1;
+
+		--col_index;
+
+		if (req_end > col->break_points[col_index]) {
+			/*
+			 * The last entry of the original request comes before
+			 * the next collection interval. Stop here.
+			 */
+			break;
+		}
+
+		if (col_index > 0) {
+			/* Make a new request. */
+			req_first = col->break_points[col_index];
+
+			req_tmp->next =
+				kshark_entry_request_alloc(req_first,
+							   0,
+							   req_tmp->cond,
+							   req_tmp->val,
+							   req_tmp->vis_only,
+							   req_tmp->vis_mask);
+
+			req_tmp = req_tmp->next;
+			++req_count;
+		}
+	}
+
+	return req_count;
+
+do_nothing:
+	kshark_free_entry_request(*req);
+	*req = NULL;
+	return 0;
+}
+
+/*
+ * This function uses the intervals of the Data collection to transform the
+ * inputted single data request into a list of data requests. The new list of
+ * requests will ignore the data outside of the intervals of the collection.
+ */
+static int
+map_collection_front_request(const struct kshark_entry_collection *col,
+			     struct kshark_entry_request **req)
+{
+	struct kshark_entry_request *req_tmp = *req;
+	size_t req_first, req_end;
+	ssize_t col_index;
+	int req_count;
+
+	if (req_tmp->next || col->size == 0) {
+		fprintf(stderr, "Unexpected input in ");
+		fprintf(stderr, "map_collection_front_request()\n");
+		goto do_nothing;
+	}
+
+	req_end = req_tmp->first + req_tmp->n - 1;
+
+	/*
+	 * Find the first Resume Point of the collection which is equal or
+	 * greater than the first index of this request.
+	 */
+	col_index = map_collection_index_from_source(col, req_tmp->first);
+
+	/*
+	 * The value of "col_index" is ambiguous. Deal with all possible
+	 * cases.
+	 */
+	if (col_index == KS_EMPTY_BIN) {
+		/*
+		 * No overlap between the request and the collection.
+		 * Do nothing.
+		 */
+		goto do_nothing;
+	}
+
+	if (col_index == 0) {
+		if (col->resume_points[0] > req_end) {
+			/*
+			 * The inputted request is in the gap before the first
+			 * interval of the collection. No overlap between the
+			 * request and the collection. Do nothing.
+			 */
+			goto do_nothing;
+		}
+
+		/*
+		 * The request overlaps with the "col_index" interval of the
+		 * collection. Start from here, using the Resume Point of
+		 * the interval as beginning of the request.
+		 */
+		req_tmp->first = col->resume_points[col_index];
+	} else if (col_index > 0) {
+		if (req_tmp->first > col->break_points[col_index - 1] &&
+		    req_end < col->resume_points[col_index]) {
+			/*
+			 * The inputted request is in the gap between interval
+			 * "col_index" and "col_index - 1". No overlap between
+			 * the request and the collection. Do nothing.
+			 */
+			goto do_nothing;
+		}
+
+		if (req_tmp->first <= col->break_points[col_index - 1]) {
+			/*
+			 * The beginning of this request is inside the previous
+			 * interval of the collection. Start from there and
+			 * keep the original beginning point.
+			 */
+			--col_index;
+		} else {
+			/*
+			 * The request overlaps with the "col_index" interval
+			 * of the collection. Start from here, using the Resume
+			 * Point of the interval as beginning of the request.
+			 */
+			req_tmp->first = col->resume_points[col_index];
+		}
+	} else if (col_index == LAST_BIN) {
+		/*
+		 * The inputted Front Request starts after the end of the last
+		 * interval of the collection. Do nothing.
+		 */
+		goto do_nothing;
+	}
+
+	/*
+	 * Now loop over the intervals of the collection going forwards until
+	 * the end of the inputted request and create a separate request for
+	 * each of those intervals.
+	 */
+	req_count = 1;
+	while (col_index < col->size &&
+	       req_end >= col->resume_points[col_index]) {
+		if (req_end <= col->break_points[col_index]) {
+			/*
+			 * The last entry of the original request is inside
+			 * the "col_index" collection interval.
+			 * Close the collection request here and return.
+			 */
+			req_tmp->n = req_end - req_tmp->first + 1;
+			break;
+		}
+
+		/*
+		 * The last entry of the original request is outside this
+		 * collection interval (col_index). Close the collection
+		 * request at the end of the interval and move to the next
+		 * interval. Try to make another request there.
+		 */
+		req_tmp->n = col->break_points[col_index] -
+			     req_tmp->first + 1;
+
+		++col_index;
+
+		if (req_end < col->resume_points[col_index]) {
+			/*
+			 * The last entry of the original request comes before
+			 * the beginning of next collection interval. Stop here.
+			 */
+			break;
+		}
+
+		if (col_index < col->size) {
+			/* Make a new request. */
+			req_first = col->resume_points[col_index];
+
+			req_tmp->next =
+				kshark_entry_request_alloc(req_first,
+							   0,
+							   req_tmp->cond,
+							   req_tmp->val,
+							   req_tmp->vis_only,
+							   req_tmp->vis_mask);
+
+			req_tmp = req_tmp->next;
+			++req_count;
+		}
+	}
+
+	return req_count;
+
+do_nothing:
+	kshark_free_entry_request(*req);
+	*req = NULL;
+	return 0;
+}
+
+/**
+ * @brief Search for an entry satisfying the requirements of a given Data
+ *	  request. Start from the position provided by the request and go
+ *	  searching in the direction of the increasing timestamps (front).
+ *	  The search is performed only inside the intervals, defined by
+ *	  the data collection.
+ * @param req: Input location for a single Data request. The inputted request
+ *	       will be transformed into a list of requests. This new list of
+ *	       requests will ignore the data outside of the intervals of the
+ *	       collection.
+ * @param data: Input location for the trace data.
+ * @param col: Input location for the Data collection.
+ * @param index: Optional output location for the index of the returned
+ *		 entry inside the array.
+ * @returns Pointer to the first entry satisfying the matching condition on
+ *	    success, or NULL on failure.
+ *	    In the special case when some entries, satisfying the Matching
+ *	    condition function have been found, but all these entries have
+ *	    been discarded because of the visibility criteria (filtered
+ *	    entries), the function returns a pointer to a special
+ *	    "Dummy entry".
+ */
+const struct kshark_entry *
+kshark_get_collection_entry_front(struct kshark_entry_request **req,
+				  struct kshark_entry **data,
+				  const struct kshark_entry_collection *col,
+				  ssize_t *index)
+{
+	const struct kshark_entry *entry = NULL;
+	int req_count;
+
+	/*
+	 * Use the intervals of the Data collection to redefine the data
+	 * request in a way which will ignore the data outside of the
+	 * intervals of the collection.
+	 */
+	req_count = map_collection_front_request(col, req);
+
+	if (index && !req_count)
+		*index = KS_EMPTY_BIN;
+
+	/*
+	 * Loop over the list of redefined requests and search until you find
+	 * the first matching entry.
+	 */
+	while (*req) {
+		entry = kshark_get_entry_front(*req, data, index);
+		if (entry)
+			break;
+
+		*req = (*req)->next;
+	}
+
+	return entry;
+}
+
+/**
+ * @brief Search for an entry satisfying the requirements of a given Data
+ *	  request. Start from the position provided by the request and go
+ *	  searching in the direction of the decreasing timestamps (back).
+ *	  The search is performed only inside the intervals, defined by
+ *	  the data collection.
+ * @param req: Input location for the Data request. The inputted request
+ *	       will be transformed into a list of requests. This new list of
+ *	       requests will ignore the data outside of the intervals of the
+ *	       collection.
+ * @param data: Input location for the trace data.
+ * @param col: Input location for the Data collection.
+ * @param index: Optional output location for the index of the returned
+ *		 entry inside the array.
+ * @returns Pointer to the first entry satisfying the matching condition on
+ *	    success, or NULL on failure.
+ *	    In the special case when some entries, satisfying the Matching
+ *	    condition function have been found, but all these entries have
+ *	    been discarded because of the visibility criteria (filtered
+ *	    entries), the function returns a pointer to a special
+ *	    "Dummy entry".
+ */
+const struct kshark_entry *
+kshark_get_collection_entry_back(struct kshark_entry_request **req,
+				 struct kshark_entry **data,
+				 const struct kshark_entry_collection *col,
+				 ssize_t *index)
+{
+	const struct kshark_entry *entry = NULL;
+	int req_count;
+
+	/*
+	 * Use the intervals of the Data collection to redefine the data
+	 * request in a way which will ignore the data outside of the
+	 * intervals of the collection.
+	 */
+	req_count = map_collection_back_request(col, req);
+	if (index && !req_count)
+		*index = KS_EMPTY_BIN;
+
+	/*
+	 * Loop over the list of redefined requests and search until you find
+	 * the first matching entry.
+	 */
+	while (*req) {
+		entry = kshark_get_entry_back(*req, data, index);
+		if (entry)
+			break;
+
+		*req = (*req)->next;
+	}
+
+	return entry;
+}
+
+/**
+ * @brief Search the list of Data collections and find the collection defined
+ *	  with a given Matching condition function and value.
+ * @param col: Input location for the Data collection list.
+ * @param cond: Matching condition function.
+ * @param val: Matching condition value, used by the Matching condition
+ *	       function.
+ * @returns Pointer to a Data collection on success, or NULL on failure.
+ */
+struct kshark_entry_collection *
+kshark_find_data_collection(struct kshark_entry_collection *col,
+			    matching_condition_func cond,
+			    int val)
+{
+	while (col) {
+		if (col->cond == cond && col->val == val)
+			return col;
+
+		col = col->next;
+	}
+
+	return NULL;
+}
+
+/**
+ * @brief Clear all data intervals of the given Data collection.
+ * @param col: Input location for the Data collection.
+ */
+void kshark_reset_data_collection(struct kshark_entry_collection *col)
+{
+	free(col->resume_points);
+	col->resume_points = NULL;
+
+	free(col->break_points);
+	col->break_points = NULL;
+
+	col->size = 0;
+}
+
+static void kshark_free_data_collection(struct kshark_entry_collection *col)
+{
+	free(col->resume_points);
+	free(col->break_points);
+	free(col);
+}
+
+/**
+ * @brief Allocate and process a data collection, defined with a given Matching
+ *	  condition function and value. Add this collection to the list of
+ *	  collections used by the session.
+ * @param kshark_ctx: Input location for the session context pointer.
+ * @param data: Input location for the trace data.
+ * @param n_rows: The size of the inputted data.
+ * @param cond: Matching condition function for the collection to be
+ *	        registered.
+ * @param val: Matching condition value for the collection to be registered.
+ * @param margin: The size of the additional (margin) data which does not
+ *		  satisfy the matching condition, but is added at the
+ *		  beginning and at the end of each interval of the collection,
+ *		  as well as at the beginning and at the end of the data-set. If
+ *		  "0", no margin data is added.
+ * @returns Pointer to the registered Data collection on success, or NULL
+ *	    on failure.
+ */
+struct kshark_entry_collection *
+kshark_register_data_collection(struct kshark_context *kshark_ctx,
+				struct kshark_entry **data,
+				size_t n_rows,
+				matching_condition_func cond,
+				int val,
+				size_t margin)
+{
+	struct kshark_entry_collection *col;
+
+	col = kshark_data_collection_alloc(kshark_ctx, data,
+					   0, n_rows,
+					   cond, val,
+					   margin);
+
+	if (col) {
+		col->next = kshark_ctx->collections;
+		kshark_ctx->collections = col;
+	}
+
+	return col;
+}
+
+/**
+ * @brief Search the list of Data collections for a collection defined
+ *	  with a given Matching condition function and value. If such a
+ *	  collection exists, unregister (remove and free) this collection
+ *	  from the list.
+ * @param col: Input location for the Data collection list.
+ * @param cond: Matching condition function of the collection to be
+ *	        unregistered.
+ * @param val: Matching condition value of the collection to be unregistered.
+ */
+void kshark_unregister_data_collection(struct kshark_entry_collection **col,
+				       matching_condition_func cond,
+				       int val)
+{
+	struct kshark_entry_collection **last = col;
+	struct kshark_entry_collection *list;
+
+	for (list = *col; list; list = list->next) {
+		if (list->cond == cond && list->val == val) {
+			*last = list->next;
+			kshark_free_data_collection(list);
+			return;
+		}
+
+		last = &list->next;
+	}
+}
+
+/**
+ * @brief Free all Data collections in a given list.
+ * @param col: Input location for the Data collection list.
+ */
+void kshark_free_collection_list(struct kshark_entry_collection *col)
+{
+	struct kshark_entry_collection *last;
+
+	while (col) {
+		last = col;
+		col = col->next;
+		kshark_free_data_collection(last);
+	}
+}
diff --git a/kernel-shark-qt/src/libkshark.c b/kernel-shark-qt/src/libkshark.c
index 1796bf8..d383b95 100644
--- a/kernel-shark-qt/src/libkshark.c
+++ b/kernel-shark-qt/src/libkshark.c
@@ -1024,6 +1024,7 @@ kshark_entry_request_alloc(size_t first, size_t n,
 		return NULL;
 	}
 
+	req->next = NULL;
 	req->first = first;
 	req->n = n;
 	req->cond = cond;
@@ -1034,6 +1035,21 @@ kshark_entry_request_alloc(size_t first, size_t n,
 	return req;
 }
 
+/**
+ * @brief Free all Data requests in a given list.
+ * @param req: Input location for the Data request list.
+ */
+void kshark_free_entry_request(struct kshark_entry_request *req)
+{
+	struct kshark_entry_request *last;
+
+	while (req) {
+		last = req;
+		req = req->next;
+		free(last);
+	}
+}
+
 /** Dummy entry, used to indicate the existence of filtered entries. */
 const struct kshark_entry dummy_entry = {
 	.next		= NULL,
diff --git a/kernel-shark-qt/src/libkshark.h b/kernel-shark-qt/src/libkshark.h
index adbd392..f3a63ce 100644
--- a/kernel-shark-qt/src/libkshark.h
+++ b/kernel-shark-qt/src/libkshark.h
@@ -115,6 +115,9 @@ struct kshark_context {
 	 * the event.
 	 */
 	struct event_filter		*advanced_event_filter;
+
+	/** List of Data collections. */
+	struct kshark_entry_collection *collections;
 };
 
 bool kshark_instance(struct kshark_context **kshark_ctx);
@@ -232,6 +235,9 @@ typedef bool (matching_condition_func)(struct kshark_context*,
  * kshark_entry.
  */
 struct kshark_entry_request {
+	/** Pointer to the next Data request. */
+	struct kshark_entry_request *next;
+
 	/**
 	 * Array index specifying the position inside the array from where
 	 * the search starts.
@@ -264,6 +270,8 @@ kshark_entry_request_alloc(size_t first, size_t n,
 			   matching_condition_func cond, int val,
 			   bool vis_only, int vis_mask);
 
+void kshark_free_entry_request(struct kshark_entry_request *req);
+
 const struct kshark_entry *
 kshark_get_entry_front(const struct kshark_entry_request *req,
 		       struct kshark_entry **data,
@@ -274,6 +282,77 @@ kshark_get_entry_back(const struct kshark_entry_request *req,
 		      struct kshark_entry **data,
 		      ssize_t *index);
 
+/**
+ * Data collections are used to optimize the search for an entry having
+ * an abstract property, defined by a Matching condition function and a
+ * value. When a collection is processed, the data which is relevant for
+ * the collection is enclosed in "Data intervals", defined by pairs of
+ * "Resume" and "Break" points. It is guaranteed that the data outside of
+ * the intervals contains no entries satisfying the abstract matching
+ * condition. Once defined, the Data collection can be used when searching
+ * for an entry having the same abstract property. The collection allows the
+ * irrelevant data to be ignored, thus eliminating the linear worst-case time
+ * complexity of the search.
+ */
+struct kshark_entry_collection {
+	/** Pointer to the next Data collection. */
+	struct kshark_entry_collection *next;
+
+	/** Matching condition function, used to define the collections. */
+	matching_condition_func *cond;
+
+	/**
+	 * Matching condition value, used by the Matching condition function
+	 * to define the collections.
+	 */
+	int val;
+
+	/**
+	 * Array of indexes defining the beginning of each individual data
+	 * interval.
+	 */
+	size_t *resume_points;
+
+	/**
+	 * Array of indexes defining the end of each individual data interval.
+	 */
+	size_t *break_points;
+
+	/** Number of data intervals in this collection. */
+	size_t size;
+};
+
+struct kshark_entry_collection *
+kshark_register_data_collection(struct kshark_context *kshark_ctx,
+				struct kshark_entry **data, size_t n_rows,
+				matching_condition_func cond, int val,
+				size_t margin);
+
+void kshark_unregister_data_collection(struct kshark_entry_collection **col,
+				       matching_condition_func cond,
+				       int val);
+
+struct kshark_entry_collection *
+kshark_find_data_collection(struct kshark_entry_collection *col,
+			    matching_condition_func cond,
+			    int val);
+
+void kshark_reset_data_collection(struct kshark_entry_collection *col);
+
+void kshark_free_collection_list(struct kshark_entry_collection *col);
+
+const struct kshark_entry *
+kshark_get_collection_entry_front(struct kshark_entry_request **req,
+				  struct kshark_entry **data,
+				  const struct kshark_entry_collection *col,
+				  ssize_t *index);
+
+const struct kshark_entry *
+kshark_get_collection_entry_back(struct kshark_entry_request **req,
+				 struct kshark_entry **data,
+				 const struct kshark_entry_collection *col,
+				 ssize_t *index);
+
 #ifdef __cplusplus
 }
 #endif
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH v2 6/7] kernel-shark-qt: Make the Vis. model use Data collections.
  2018-07-31 13:52 [PATCH v2 0/7] Add visualization model for the Qt-based KernelShark Yordan Karadzhov (VMware)
                   ` (4 preceding siblings ...)
  2018-07-31 13:52 ` [PATCH v2 5/7] kernel-shark-qt: Define Data collections Yordan Karadzhov (VMware)
@ 2018-07-31 13:52 ` Yordan Karadzhov (VMware)
  2018-07-31 13:52 ` [PATCH v2 7/7] kernel-shark-qt: Changed the KernelShark version identifier Yordan Karadzhov (VMware)
  6 siblings, 0 replies; 21+ messages in thread
From: Yordan Karadzhov (VMware) @ 2018-07-31 13:52 UTC (permalink / raw)
  To: rostedt; +Cc: linux-trace-devel, Yordan Karadzhov (VMware)

This patch optimizes the search instruments of the model by
adding the possibility of using Data collections.
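
For illustration only (not part of the patch), a caller of the model could
take advantage of the new "col" argument roughly as sketched below, assuming
an already filled histo and a collection registered as in the previous patch.
The helper name is made up for the example; passing NULL for the collection
keeps the old entry-by-entry search:

	#include "libkshark.h"
	#include "libkshark-model.h"

	/* First visible entry of a given CPU in a given bin. */
	static const struct kshark_entry *
	bin_front_entry(struct kshark_context *kshark_ctx,
			struct kshark_trace_histo *histo, int bin, int cpu)
	{
		struct kshark_entry_collection *col;
		ssize_t index;

		/* Reuse the collection for this CPU, if one has been registered. */
		col = kshark_find_data_collection(kshark_ctx->collections,
						  kshark_match_cpu, cpu);

		/* "col" may be NULL; the model then falls back to a linear scan. */
		return ksmodel_get_entry_front(histo, bin, true,
					       kshark_match_cpu, cpu,
					       col, &index);
	}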

Signed-off-by: Yordan Karadzhov (VMware) <y.karadz@gmail.com>
---
 kernel-shark-qt/examples/datahisto.c  |  4 ++
 kernel-shark-qt/src/libkshark-model.c | 57 +++++++++++++++++++++++----
 kernel-shark-qt/src/libkshark-model.h | 14 ++++++-
 3 files changed, 65 insertions(+), 10 deletions(-)

diff --git a/kernel-shark-qt/examples/datahisto.c b/kernel-shark-qt/examples/datahisto.c
index 3f19870..99ac495 100644
--- a/kernel-shark-qt/examples/datahisto.c
+++ b/kernel-shark-qt/examples/datahisto.c
@@ -27,18 +27,22 @@ void dump_bin(struct kshark_trace_histo *histo, int bin,
 	if (strcmp(type, "cpu") == 0) {
 		e_front = ksmodel_get_entry_front(histo, bin, true,
 						  kshark_match_cpu, val,
+						  NULL,
 						  &i_front);
 
 		e_back = ksmodel_get_entry_back(histo, bin, true,
 						kshark_match_cpu, val,
+						NULL,
 						&i_back);
 	} else if (strcmp(type, "task") == 0) {
 		e_front = ksmodel_get_entry_front(histo, bin, true,
 						  kshark_match_pid, val,
+						  NULL,
 						  &i_front);
 
 		e_back = ksmodel_get_entry_back(histo, bin, true,
 						kshark_match_pid, val,
+						NULL,
 						&i_back);
 	} else {
 		i_front = ksmodel_first_index_at_bin(histo, bin);
diff --git a/kernel-shark-qt/src/libkshark-model.c b/kernel-shark-qt/src/libkshark-model.c
index 4a4e910..73251f2 100644
--- a/kernel-shark-qt/src/libkshark-model.c
+++ b/kernel-shark-qt/src/libkshark-model.c
@@ -836,6 +836,7 @@ ssize_t ksmodel_first_index_at_pid(struct kshark_trace_histo *histo,
  * @param func: Matching condition function.
  * @param val: Matching condition value, used by the Matching condition
  *	       function.
+ * @param col: Optional input location for Data collection.
  * @param index: Optional output location for the index of the requested
  *		 entry inside the array.
  * @returns Pointer to a kshark_entry, if an entry has been found. Else NULL.
@@ -844,6 +845,7 @@ const struct kshark_entry *
 ksmodel_get_entry_front(struct kshark_trace_histo *histo,
 			int bin, bool vis_only,
 			matching_condition_func func, int val,
+			struct kshark_entry_collection *col,
 			ssize_t *index)
 {
 	struct kshark_entry_request *req;
@@ -858,7 +860,12 @@ ksmodel_get_entry_front(struct kshark_trace_histo *histo,
 	if (!req)
 		return NULL;
 
-	entry = kshark_get_entry_front(req, histo->data, index);
+	if (col && col->size)
+		entry = kshark_get_collection_entry_front(&req, histo->data,
+							  col, index);
+	else
+		entry = kshark_get_entry_front(req, histo->data, index);
+
 	free(req);
 
 	return entry;
@@ -874,6 +881,7 @@ ksmodel_get_entry_front(struct kshark_trace_histo *histo,
  * @param func: Matching condition function.
  * @param val: Matching condition value, used by the Matching condition
  *	       function.
+ * @param col: Optional input location for Data collection.
  * @param index: Optional output location for the index of the requested
  *		 entry inside the array.
  * @returns Pointer to a kshark_entry, if an entry has been found. Else NULL.
@@ -882,6 +890,7 @@ const struct kshark_entry *
 ksmodel_get_entry_back(struct kshark_trace_histo *histo,
 		       int bin, bool vis_only,
 		       matching_condition_func func, int val,
+		       struct kshark_entry_collection *col,
 		       ssize_t *index)
 {
 	struct kshark_entry_request *req;
@@ -896,7 +905,12 @@ ksmodel_get_entry_back(struct kshark_trace_histo *histo,
 	if (!req)
 		return NULL;
 
-	entry = kshark_get_entry_back(req, histo->data, index);
+	if (col && col->size)
+		entry = kshark_get_collection_entry_back(&req, histo->data,
+							  col, index);
+	else
+		entry = kshark_get_entry_back(req, histo->data, index);
+
 	free(req);
 
 	return entry;
@@ -926,6 +940,7 @@ static int ksmodel_get_entry_pid(const struct kshark_entry *entry)
  * @param bin: Bin id.
  * @param cpu: CPU Id.
  * @param vis_only: If true, a visible entry is requested.
+ * @param col: Optional input location for Data collection.
  * @param index: Optional output location for the index of the requested
  *		 entry inside the array.
  * @returns Process Id of the task if an entry has been found. Else a negative
@@ -933,6 +948,7 @@ static int ksmodel_get_entry_pid(const struct kshark_entry *entry)
  */
 int ksmodel_get_pid_front(struct kshark_trace_histo *histo,
 			  int bin, int cpu, bool vis_only,
+			  struct kshark_entry_collection *col,
 			  ssize_t *index)
 {
 	const struct kshark_entry *entry;
@@ -942,7 +958,8 @@ int ksmodel_get_pid_front(struct kshark_trace_histo *histo,
 
 	entry = ksmodel_get_entry_front(histo, bin, vis_only,
 					       kshark_match_cpu, cpu,
-					       index);
+					       col, index);
+
 	return ksmodel_get_entry_pid(entry);
 }
 
@@ -954,6 +971,7 @@ int ksmodel_get_pid_front(struct kshark_trace_histo *histo,
  * @param bin: Bin id.
  * @param cpu: CPU Id.
  * @param vis_only: If true, a visible entry is requested.
+ * @param col: Optional input location for Data collection.
  * @param index: Optional output location for the index of the requested
  *		 entry inside the array.
  * @returns Process Id of the task if an entry has been found. Else a negative
@@ -961,6 +979,7 @@ int ksmodel_get_pid_front(struct kshark_trace_histo *histo,
  */
 int ksmodel_get_pid_back(struct kshark_trace_histo *histo,
 			 int bin, int cpu, bool vis_only,
+			 struct kshark_entry_collection *col,
 			 ssize_t *index)
 {
 	const struct kshark_entry *entry;
@@ -970,7 +989,7 @@ int ksmodel_get_pid_back(struct kshark_trace_histo *histo,
 
 	entry = ksmodel_get_entry_back(histo, bin, vis_only,
 					      kshark_match_cpu, cpu,
-					      index);
+					      col, index);
 
 	return ksmodel_get_entry_pid(entry);
 }
@@ -999,6 +1018,7 @@ static int ksmodel_get_entry_cpu(const struct kshark_entry *entry)
  * @param bin: Bin id.
  * @param pid: Process Id.
  * @param vis_only: If true, a visible entry is requested.
+ * @param col: Optional input location for Data collection.
  * @param index: Optional output location for the index of the requested
  *		 entry inside the array.
  * @returns CPU Id of the entry if an entry has been found. Else a negative
@@ -1006,6 +1026,7 @@ static int ksmodel_get_entry_cpu(const struct kshark_entry *entry)
  */
 int ksmodel_get_cpu_front(struct kshark_trace_histo *histo,
 			  int bin, int pid, bool vis_only,
+			  struct kshark_entry_collection *col,
 			  ssize_t *index)
 {
 	const struct kshark_entry *entry;
@@ -1015,6 +1036,7 @@ int ksmodel_get_cpu_front(struct kshark_trace_histo *histo,
 
 	entry = ksmodel_get_entry_front(histo, bin, vis_only,
 					       kshark_match_pid, pid,
+					       col,
 					       index);
 	return ksmodel_get_entry_cpu(entry);
 }
@@ -1027,6 +1049,7 @@ int ksmodel_get_cpu_front(struct kshark_trace_histo *histo,
  * @param bin: Bin id.
  * @param pid: Process Id.
  * @param vis_only: If true, a visible entry is requested.
+ * @param col: Optional input location for Data collection.
  * @param index: Optional output location for the index of the requested
  *		 entry inside the array.
  * @returns CPU Id of the entry if an entry has been found. Else a negative
@@ -1034,6 +1057,7 @@ int ksmodel_get_cpu_front(struct kshark_trace_histo *histo,
  */
 int ksmodel_get_cpu_back(struct kshark_trace_histo *histo,
 			 int bin, int pid, bool vis_only,
+			 struct kshark_entry_collection *col,
 			 ssize_t *index)
 {
 	const struct kshark_entry *entry;
@@ -1043,6 +1067,7 @@ int ksmodel_get_cpu_back(struct kshark_trace_histo *histo,
 
 	entry = ksmodel_get_entry_back(histo, bin, vis_only,
 					      kshark_match_pid, pid,
+					      col,
 					      index);
 
 	return ksmodel_get_entry_cpu(entry);
@@ -1053,12 +1078,15 @@ int ksmodel_get_cpu_back(struct kshark_trace_histo *histo,
  * @param histo: Input location for the model descriptor.
  * @param bin: Bin id.
  * @param cpu: Cpu Id.
+ * @param col: Optional input location for Data collection.
  * @param index: Optional output location for the index of the requested
  *		 entry inside the array.
  * @returns True, if a visible entry exists in this bin. Else false.
  */
 bool ksmodel_cpu_visible_event_exist(struct kshark_trace_histo *histo,
-				     int bin, int cpu, ssize_t *index)
+				     int bin, int cpu,
+				     struct kshark_entry_collection *col,
+				     ssize_t *index)
 {
 	struct kshark_entry_request *req;
 	const struct kshark_entry *entry;
@@ -1080,7 +1108,12 @@ bool ksmodel_cpu_visible_event_exist(struct kshark_trace_histo *histo,
 	 */
 	req->vis_mask = KS_EVENT_VIEW_FILTER_MASK;
 
-	entry = kshark_get_entry_front(req, histo->data, index);
+	if (col && col->size)
+		entry = kshark_get_collection_entry_front(&req, histo->data,
+							  col, index);
+	else
+		entry = kshark_get_entry_front(req, histo->data, index);
+
 	free(req);
 
 	if (!entry || !entry->visible) {
@@ -1096,12 +1129,15 @@ bool ksmodel_cpu_visible_event_exist(struct kshark_trace_histo *histo,
  * @param histo: Input location for the model descriptor.
  * @param bin: Bin id.
  * @param pid: Process Id of the task.
+ * @param col: Optional input location for Data collection.
  * @param index: Optional output location for the index of the requested
  *		 entry inside the array.
  * @returns True, if a visible entry exists in this bin. Else false.
  */
 bool ksmodel_task_visible_event_exist(struct kshark_trace_histo *histo,
-				      int bin, int pid, ssize_t *index)
+				      int bin, int pid,
+				      struct kshark_entry_collection *col,
+				      ssize_t *index)
 {
 	struct kshark_entry_request *req;
 	const struct kshark_entry *entry;
@@ -1123,7 +1159,12 @@ bool ksmodel_task_visible_event_exist(struct kshark_trace_histo *histo,
 	 */
 	req->vis_mask = KS_EVENT_VIEW_FILTER_MASK;
 
-	entry = kshark_get_entry_front(req, histo->data, index);
+	if (col && col->size)
+		entry = kshark_get_collection_entry_front(&req, histo->data,
+							  col, index);
+	else
+		entry = kshark_get_entry_front(req, histo->data, index);
+
 	free(req);
 
 	if (!entry || !entry->visible) {
diff --git a/kernel-shark-qt/src/libkshark-model.h b/kernel-shark-qt/src/libkshark-model.h
index 15391a9..5ffa682 100644
--- a/kernel-shark-qt/src/libkshark-model.h
+++ b/kernel-shark-qt/src/libkshark-model.h
@@ -93,35 +93,45 @@ const struct kshark_entry *
 ksmodel_get_entry_front(struct kshark_trace_histo *histo,
 			int bin, bool vis_only,
 			matching_condition_func func, int val,
+			struct kshark_entry_collection *col,
 			ssize_t *index);
 
 const struct kshark_entry *
 ksmodel_get_entry_back(struct kshark_trace_histo *histo,
 		       int bin, bool vis_only,
 		       matching_condition_func func, int val,
+		       struct kshark_entry_collection *col,
 		       ssize_t *index);
 
 int ksmodel_get_pid_front(struct kshark_trace_histo *histo,
 			  int bin, int cpu, bool vis_only,
+			  struct kshark_entry_collection *col,
 			  ssize_t *index);
 
 int ksmodel_get_pid_back(struct kshark_trace_histo *histo,
 			 int bin, int cpu, bool vis_only,
+			 struct kshark_entry_collection *col,
 			 ssize_t *index);
 
 int ksmodel_get_cpu_front(struct kshark_trace_histo *histo,
 			  int bin, int pid, bool vis_only,
+			  struct kshark_entry_collection *col,
 			  ssize_t *index);
 
 int ksmodel_get_cpu_back(struct kshark_trace_histo *histo,
 			 int bin, int pid, bool vis_only,
+			 struct kshark_entry_collection *col,
 			 ssize_t *index);
 
 bool ksmodel_cpu_visible_event_exist(struct kshark_trace_histo *histo,
-				     int bin, int cpu, ssize_t *index);
+				     int bin, int cpu,
+				     struct kshark_entry_collection *col,
+				     ssize_t *index);
 
 bool ksmodel_task_visible_event_exist(struct kshark_trace_histo *histo,
-				      int bin, int pid, ssize_t *index);
+				      int bin, int pid,
+				      struct kshark_entry_collection *col,
+				      ssize_t *index);
 
 static inline double ksmodel_bin_time(struct kshark_trace_histo *histo,
 				      int bin)
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH v2 7/7] kernel-shark-qt: Changed the KernelShark version identifier.
  2018-07-31 13:52 [PATCH v2 0/7] Add visualization model for the Qt-based KernelShark Yordan Karadzhov (VMware)
                   ` (5 preceding siblings ...)
  2018-07-31 13:52 ` [PATCH v2 6/7] kernel-shark-qt: Make the Vis. model use " Yordan Karadzhov (VMware)
@ 2018-07-31 13:52 ` Yordan Karadzhov (VMware)
  6 siblings, 0 replies; 21+ messages in thread
From: Yordan Karadzhov (VMware) @ 2018-07-31 13:52 UTC (permalink / raw)
  To: rostedt; +Cc: linux-trace-devel, Yordan Karadzhov (VMware)

(Patch Id) ++

Signed-off-by: Yordan Karadzhov (VMware) <y.karadz@gmail.com>
---
 kernel-shark-qt/CMakeLists.txt | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel-shark-qt/CMakeLists.txt b/kernel-shark-qt/CMakeLists.txt
index 9ff12a1..7a802cd 100644
--- a/kernel-shark-qt/CMakeLists.txt
+++ b/kernel-shark-qt/CMakeLists.txt
@@ -6,7 +6,7 @@ project(kernel-shark-qt)
 
 set(KS_VERSION_MAJOR 0)
 set(KS_VERSION_MINOR 7)
-set(KS_VERSION_PATCH 0)
+set(KS_VERSION_PATCH 1)
 set(KS_VERSION_STRING ${KS_VERSION_MAJOR}.${KS_VERSION_MINOR}.${KS_VERSION_PATCH})
 message("\n project: Kernel Shark: (version: ${KS_VERSION_STRING})\n")
 
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 21+ messages in thread

* Re: [PATCH v2 2/7] kernel-shark-qt: Add generic instruments for searching inside the trace data
  2018-07-31 13:52 ` [PATCH v2 2/7] kernel-shark-qt: Add generic instruments for searching inside the trace data Yordan Karadzhov (VMware)
@ 2018-07-31 21:43   ` Steven Rostedt
  0 siblings, 0 replies; 21+ messages in thread
From: Steven Rostedt @ 2018-07-31 21:43 UTC (permalink / raw)
  To: Yordan Karadzhov (VMware); +Cc: linux-trace-devel, Tzvetomir Stoyanov

On Tue, 31 Jul 2018 16:52:43 +0300
"Yordan Karadzhov (VMware)" <y.karadz@gmail.com> wrote:

> This patch introduces the instrumentation for data extraction used by the
> visualization model of the Qt-based KernelShark. The effectiveness of these
> instruments for searching has a dominant effect on the performance of the
> model, so let's spend some time and explain this in detail.
> 
> The first type of instruments provides binary search inside time-sorted
> arrays of kshark_entries or trace_records. The search returns the first
> element of the array having a timestamp bigger than a reference time value.
> The time complexity of these searches is log(n).
> 
> The second type of instruments provides searching for the first (in time)
> entry satisfying an abstract Matching condition. The array is sorted in
> time, but we search for an abstract property, so for this search the array
> is considered unsorted and we have to iterate and check all elements of the
> array one by one. If we search for a type of entries which is well represented
> in the array, the time complexity of the search is constant, because no matter
> how big the array is, the search only goes through a small number of entries at
> the beginning of the array (or at the end, if we search backwards) before it
> finds the first match. However, if we search for sparse, or even nonexistent
> entries, the time complexity becomes linear.
> 
> These explanations will start making more sense with the following patches.
> 
> Signed-off-by: Yordan Karadzhov (VMware) <y.karadz@gmail.com>
> ---
>  kernel-shark-qt/src/libkshark.c | 233 +++++++++++++++++++++++++++++++-
>  kernel-shark-qt/src/libkshark.h |  86 +++++++++++-
>  2 files changed, 317 insertions(+), 2 deletions(-)
> 
> diff --git a/kernel-shark-qt/src/libkshark.c b/kernel-shark-qt/src/libkshark.c
> index 3299752..1796bf8 100644
> --- a/kernel-shark-qt/src/libkshark.c
> +++ b/kernel-shark-qt/src/libkshark.c
> @@ -861,7 +861,7 @@ static const char *kshark_get_info(struct pevent *pe,
>   * @returns The returned string contains a semicolon-separated list of data
>   *	    fields.
>   */
> -char* kshark_dump_entry(struct kshark_entry *entry)
> +char* kshark_dump_entry(const struct kshark_entry *entry)

Hmm, is this a separate change? It looks like it should be a separate
patch.

>  {
>  	const char *event_name, *task, *lat, *info;
>  	struct kshark_context *kshark_ctx;
> @@ -908,3 +908,234 @@ char* kshark_dump_entry(struct kshark_entry *entry)
>  
>  	return NULL;
>  }
> +
> +/**
> + * @brief Binary search inside a time-sorted array of kshark_entries.

Yeah having a space here would be nice.

> + * @param time: The value of time to search for.
> + * @param data: Input location for the trace data.
> + * @param l: Array index specifying the lower edge of the range to search in.
> + * @param h: Array index specifying the upper edge of the range to search in.

And here (and for others)

> + * @returns On success, the first kshark_entry inside the range, having a
> +	    timestamp equal or bigger than "time". In the case when no
> +	    kshark_entry has been found inside the range, the function will
> +	    return the value of "l" or "h".

Looks like 'l' or 'h' have a different meaning when it is returned.
Please comment it here as for what it means if it returns 'l' or 'h'.

Also, it looks like there's no difference if it found an entry at time
'l' or 'h' or if it found no entry at all.


> + */
> +size_t kshark_find_entry_by_time(uint64_t time,
> +				 struct kshark_entry **data,
> +				 size_t l, size_t h)
> +{
> +	if (data[l]->ts >= time)
> +		return l;
> +
> +	if (data[h]->ts < time)
> +		return h;
> +
> +	size_t mid;
> +	BSEARCH(h, l, data[mid]->ts < time);
> +	return h;
> +}
> +
> +/**
> + * @brief Binary search inside a time-sorted array of pevent_records.
> + * @param time: The value of time to search for.
> + * @param data: Input location for the trace data.
> + * @param l: Array index specifying the lower edge of the range to search in.
> + * @param h: Array index specifying the upper edge of the range to search in.
> + * @returns On success, the first pevent_record inside the range, having a
> +	    timestamp equal or bigger than "time". In the case when no
> +	    pevent_record has been found inside the range, the function will
> +	    return the value of "l" or "h".

Same here.

> + */
> +size_t kshark_find_record_by_time(uint64_t time,
> +				  struct pevent_record **data,
> +				  size_t l, size_t h)
> +{
> +	if (data[l]->ts >= time)
> +		return l;
> +
> +	if (data[h]->ts < time)
> +		return h;
> +
> +	size_t mid;
> +	BSEARCH(h, l, data[mid]->ts < time);
> +	return h;
> +}
> +
> +/**
> + * @brief Simple Pid matching function to be used for data requests.
> + * @param kshark_ctx: Input location for the session context pointer.
> + * @param e: kshark_entry to be checked.
> + * @param pid: Matching condition value.
> + * @returns True if the Pid of the entry matches the value of "pid".
> + *	    Else false.
> + */
> +bool kshark_match_pid(struct kshark_context *kshark_ctx,
> +		      struct kshark_entry *e, int pid)
> +{
> +	if (e->pid == pid)
> +		return true;
> +
> +	return false;
> +}
> +
> +/**
> + * @brief Simple Cpu matching function to be used for data requests.
> + * @param kshark_ctx: Input location for the session context pointer.
> + * @param e: kshark_entry to be checked.
> + * @param cpu: Matching condition value.
> + * @returns True if the Cpu of the entry matches the value of "cpu".
> + *	    Else false.
> + */
> +bool kshark_match_cpu(struct kshark_context *kshark_ctx,
> +		      struct kshark_entry *e, int cpu)
> +{
> +	if (e->cpu == cpu)
> +		return true;
> +
> +	return false;
> +}
> +
> +/**
> + * @brief Create Data request. The request defines the properties of the
> + *	  requested kshark_entry.
> + * @param first: Array index specifying the position inside the array from
> + *		 where the search starts.
> + * @param n: Number of array elements to search in.
> + * @param cond: Matching condition function.
> + * @param val: Matching condition value, used by the Matching condition
> + *	       function.
> + * @param vis_only: If true, a visible entry is requested.
> + * @param vis_mask: If "vis_only" is true, use this mask to specify the level
> + *		    of visibility of the requested entry
> + * @returns Pointer to kshark_entry_request on success, or NULL on failure.
> + * 	    The user is responsible for freeing the returned
> + *	    kshark_entry_request.
> + */
> +struct kshark_entry_request *
> +kshark_entry_request_alloc(size_t first, size_t n,
> +			   matching_condition_func cond, int val,
> +			   bool vis_only, int vis_mask)
> +{
> +	struct kshark_entry_request *req = malloc(sizeof(*req));
> +
> +	if (!req) {
> +		fprintf(stderr,
> +			"Failed to allocate memory for entry request.\n");
> +		return NULL;
> +	}
> +
> +	req->first = first;
> +	req->n = n;
> +	req->cond = cond;
> +	req->val = val;
> +	req->vis_only = vis_only;
> +	req->vis_mask = vis_mask;
> +
> +	return req;
> +}
> +
> +/** Dummy entry, used to indicate the existence of filtered entries. */
> +const struct kshark_entry dummy_entry = {
> +	.next		= NULL,
> +	.visible	= 0x00,
> +	.cpu		= KS_FILTERED_BIN,
> +	.pid		= KS_FILTERED_BIN,
> +	.event_id	= -1,
> +	.offset		= 0,
> +	.ts		= 0
> +};
> +
> +static const struct kshark_entry *
> +get_entry(const struct kshark_entry_request *req,
> +          struct kshark_entry **data,
> +          ssize_t *index, size_t start, ssize_t end, int inc)
> +{
> +	struct kshark_context *kshark_ctx = NULL;
> +	const struct kshark_entry *e = NULL;
> +	ssize_t i;
> +
> +	if (index)
> +		*index = KS_EMPTY_BIN;
> +
> +	if (!kshark_instance(&kshark_ctx))
> +		return e;
> +
> +	for (i = start; i != end; i += inc) {
> +		if (req->cond(kshark_ctx, data[i], req->val)) {
> +			/*
> +			 * Data satisfying the condition has been found.
> +			 */
> +			if (req->vis_only &&
> +			    !(data[i]->visible & req->vis_mask)) {
> +				/* This data entry has been filtered. */
> +				e = &dummy_entry;
> +			} else {
> +				e = data[i];
> +				break;
> +			}
> +		}
> +	}
> +
> +	if (index) {
> +		if (e)
> +			*index = (e->event_id >= 0)? i : KS_FILTERED_BIN;
> +		else
> +			*index = KS_EMPTY_BIN;
> +	}
> +
> +	return e;
> +}
> +
> +/**
> + * @brief Search for an entry satisfying the requirements of a given Data
> + *	  request. Start from the position provided by the request and go
> + *	  searching in the direction of the increasing timestamps (front).
> + * @param req: Input location for Data request.
> + * @param data: Input location for the trace data.
> + * @param index: Optional output location for the index of the returned
> + *		 entry inside the array.
> + * @returns Pointer to the first entry satisfying the matching condition on
> + *	    success, or NULL on failure.
> + *	    In the special case when some entries, satisfying the Matching
> + *	    condition function have been found, but all these entries have
> + *	    been discarded because of the visibility criteria (filtered
> + *	    entries), the function returns a pointer to a special
> + *	    "Dummy entry".
> + */
> +const struct kshark_entry *
> +kshark_get_entry_front(const struct kshark_entry_request *req,
> +                       struct kshark_entry **data,
> +                       ssize_t *index)
> +{
> +	ssize_t end = req->first + req->n;
> +
> +	return get_entry(req, data, index, req->first, end, +1);

You don't need the "+" in the +1 ;-)

> +}
> +
> +/**
> + * @brief Search for an entry satisfying the requirements of a given Data
> + *	  request. Start from the position provided by the request and go
> + *	  searching in the direction of the decreasing timestamps (back).
> + * @param req: Input location for Data request.
> + * @param data: Input location for the trace data.
> + * @param index: Optional output location for the index of the returned
> + *		 entry inside the array.
> + * @returns Pointer to the first entry satisfying the matching condition on
> + *	    success, or NULL on failure.
> + *	    In the special case when some entries, satisfying the Matching
> + *	    condition function have been found, but all these entries have
> + *	    been discarded because of the visibility criteria (filtered
> + *	    entries), the function returns a pointer to a special
> + *	    "Dummy entry".
> + */
> +const struct kshark_entry *
> +kshark_get_entry_back(const struct kshark_entry_request *req,
> +                      struct kshark_entry **data,
> +                      ssize_t *index)
> +{
> +	ssize_t end = req->first - req->n;
> +	if (end < 0)
> +		end = -1;
> +
> +	return get_entry(req, data, index, req->first, end, -1);
> +}
> diff --git a/kernel-shark-qt/src/libkshark.h b/kernel-shark-qt/src/libkshark.h
> index 0ad31c0..adbd392 100644
> --- a/kernel-shark-qt/src/libkshark.h
> +++ b/kernel-shark-qt/src/libkshark.h
> @@ -133,7 +133,7 @@ void kshark_close(struct kshark_context *kshark_ctx);
>  
>  void kshark_free(struct kshark_context *kshark_ctx);
>  
> -char* kshark_dump_entry(struct kshark_entry *entry);
> +char* kshark_dump_entry(const struct kshark_entry *entry);

Again, shouldn't this be in its own patch?

-- Steve

>  
>  /** Bit masks used to control the visibility of the entry after filtering. */
>  enum kshark_filter_masks {
> @@ -190,6 +190,90 @@ void kshark_filter_entries(struct kshark_context *kshark_ctx,
>  			   struct kshark_entry **data,
>  			   size_t n_entries);
>  
> +/** General purpose Binary search macro. */
> +#define BSEARCH(h, l, cond) 			\
> +	({						\
> +		while (h - l > 1) {			\
> +			mid = (l + h) / 2;		\
> +			if (cond)	\
> +				l = mid;		\
> +			else				\
> +				h = mid;		\
> +		}					\
> +	})
> +
> +size_t kshark_find_entry_by_time(uint64_t time,
> +				 struct kshark_entry **data_rows,
> +				 size_t l, size_t h);
> +
> +size_t kshark_find_record_by_time(uint64_t time,
> +				  struct pevent_record **data_rows,
> +				  size_t l, size_t h);
> +
> +bool kshark_match_pid(struct kshark_context *kshark_ctx,
> +		      struct kshark_entry *e, int pid);
> +
> +bool kshark_match_cpu(struct kshark_context *kshark_ctx,
> +		      struct kshark_entry *e, int cpu);
> +
> +/** Empty bin identifier. */
> +#define KS_EMPTY_BIN		-1
> +
> +/** Filtered bin identifier. */
> +#define KS_FILTERED_BIN		-2
> +
> +/** Matching condition function type. To be used for data requests */
> +typedef bool (matching_condition_func)(struct kshark_context*,
> +				       struct kshark_entry*,
> +				       int);
> +
> +/**
> + * Data request structure, defining the properties of the required
> + * kshark_entry.
> + */
> +struct kshark_entry_request {
> +	/**
> +	 * Array index specifying the position inside the array from where
> +	 * the search starts.
> +	 */
> +	size_t first;
> +
> +	/** Number of array elements to search in. */
> +	size_t n;
> +
> +	/** Matching condition function. */
> +	matching_condition_func *cond;
> +
> +	/**
> +	 * Matching condition value, used by the Matching condition function.
> +	 */
> +	int val;
> +
> +	/** If true, a visible entry is requested. */
> +	bool vis_only;
> +
> +	/**
> +	 * If "vis_only" is true, use this mask to specify the level of
> +	 * visibility of the requested entry.
> +	 */
> +	uint8_t vis_mask;
> +};
> +
> +struct kshark_entry_request *
> +kshark_entry_request_alloc(size_t first, size_t n,
> +			   matching_condition_func cond, int val,
> +			   bool vis_only, int vis_mask);
> +
> +const struct kshark_entry *
> +kshark_get_entry_front(const struct kshark_entry_request *req,
> +		       struct kshark_entry **data,
> +		       ssize_t *index);
> +
> +const struct kshark_entry *
> +kshark_get_entry_back(const struct kshark_entry_request *req,
> +		      struct kshark_entry **data,
> +		      ssize_t *index);
> +
>  #ifdef __cplusplus
>  }
>  #endif

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v2 3/7] kernel-shark-qt: Introduce the visualization model used by the Qt-based KS
  2018-07-31 13:52 ` [PATCH v2 3/7] kernel-shark-qt: Introduce the visualization model used by the Qt-based KS Yordan Karadzhov (VMware)
@ 2018-08-01  0:51   ` Steven Rostedt
  2018-08-01 16:10     ` Yordan Karadzhov
  2018-08-03 18:48     ` Steven Rostedt
  2018-08-01  1:43   ` Steven Rostedt
                     ` (3 subsequent siblings)
  4 siblings, 2 replies; 21+ messages in thread
From: Steven Rostedt @ 2018-08-01  0:51 UTC (permalink / raw)
  To: Yordan Karadzhov (VMware); +Cc: linux-trace-devel, Tzvetomir Stoyanov

On Tue, 31 Jul 2018 16:52:44 +0300
"Yordan Karadzhov (VMware)" <y.karadz@gmail.com> wrote:

> diff --git a/kernel-shark-qt/src/CMakeLists.txt b/kernel-shark-qt/src/CMakeLists.txt
> index ed3c60e..ec22f63 100644
> --- a/kernel-shark-qt/src/CMakeLists.txt
> +++ b/kernel-shark-qt/src/CMakeLists.txt
> @@ -1,7 +1,8 @@
>  message("\n src ...")
>  
>  message(STATUS "libkshark")
> -add_library(kshark SHARED libkshark.c)
> +add_library(kshark SHARED libkshark.c
> +                          libkshark-model.c)
>  
>  target_link_libraries(kshark ${CMAKE_DL_LIBS}
>                               ${TRACEEVENT_LIBRARY}
> diff --git a/kernel-shark-qt/src/libkshark-model.c b/kernel-shark-qt/src/libkshark-model.c
> new file mode 100644
> index 0000000..4a4e910
> --- /dev/null
> +++ b/kernel-shark-qt/src/libkshark-model.c
> @@ -0,0 +1,1135 @@
> +// SPDX-License-Identifier: LGPL-2.1
> +
> +/*
> + * Copyright (C) 2017 VMware Inc, Yordan Karadzhov <y.karadz@gmail.com>
> + */
> +
> + /**
> +  *  @file    libkshark.c
> +  *  @brief   Visualization model for FTRACE (trace-cmd) data.
> +  */
> +
> +// C
> +#include <stdlib.h>
> +
> +// KernelShark
> +#include "libkshark-model.h"
> +

Needs comment here explaining what these are.

> +#define UOB(histo) (histo->n_bins)
> +#define LOB(histo) (histo->n_bins + 1)

Perhaps add:

/* For all bins */
# define ALLB(histo) LOB(histo)

> +
> +/**
> + * @brief Initialize the Visualization model.
> + * @param histo: Input location for the model descriptor.
> + */
> +void ksmodel_init(struct kshark_trace_histo *histo)
> +{
> +	/*
> +	 * Initialize an empty histo. The histo will have no bins and will
> +	 * contain no data.
> +	 */
> +	histo->bin_size = 0;
> +	histo->min = 0;
> +	histo->max = 0;
> +	histo->n_bins = 0;
> +
> +	histo->bin_count = NULL;
> +	histo->map = NULL;
> +}
> +
> +/**
> + * @brief Clear (reset) the Visualization model.
> + * @param histo: Input location for the model descriptor.
> + */
> +void ksmodel_clear(struct kshark_trace_histo *histo)
> +{
> +	/* Reset the histo. It will have no bins and will contain no data. */
> +	free(histo->map);
> +	free(histo->bin_count);
> +	ksmodel_init(histo);
> +}
> +
> +static void ksmodel_reset_bins(struct kshark_trace_histo *histo,
> +			       size_t first, size_t last)
> +{
> +	/* Reset the content of the bins. */
> +	memset(&histo->map[first], KS_EMPTY_BIN,
> +	       (last - first + 1) * sizeof(histo->map[0]));

This patch should add a comment here and by KS_EMPTY_BIN stating that
KS_EMPTY_BIN is expected to be -1, as it is used to reset the entire
array with memset(). Since memset() fills the array with a single
repeated byte, every byte of the value must be the same, which works
for zero and -1.
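
For illustration, here is a tiny stand-alone check of that byte-pattern
assumption (not tied to the patch at all):

#include <assert.h>
#include <string.h>

int main(void)
{
	int map[4];

	/* memset() repeats a single byte; 0xff in every byte reads back as -1. */
	memset(map, -1, sizeof(map));
	assert(map[0] == -1 && map[3] == -1);

	return 0;
}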

> +
> +	memset(&histo->bin_count[first], 0,
> +	       (last - first + 1) * sizeof(histo->bin_count[0]));
> +}
> +
> +static bool ksmodel_histo_alloc(struct kshark_trace_histo *histo, size_t n)
> +{
> +	free(histo->bin_count);
> +	free(histo->map);
> +
> +	/* Create bins. Two overflow bins are added. */
> +	histo->map = calloc(n + 2, sizeof(*histo->map));
> +	histo->bin_count = calloc(n + 2, sizeof(*histo->bin_count));
> +
> +	if (!histo->map || !histo->bin_count) {
> +		ksmodel_clear(histo);
> +		fprintf(stderr, "Failed to allocate memory for a histo.\n");
> +		return false;
> +	}
> +
> +	histo->n_bins = n;
> +
> +	return true;
> +}
> +
> +static void ksmodel_set_in_range_bining(struct kshark_trace_histo *histo,
> +					size_t n, uint64_t min, uint64_t max,
> +					bool force_in_range)
> +{
> +	uint64_t corrected_range, delta_range, range = max - min;
> +	struct kshark_entry *last;
> +
> +	/* The size of the bin must be >= 1, hence the range must be >= n. */
> +	if (n == 0 || range < n)
> +		return;
> +
> +	/*
> +	 * If the number of bins changes, allocate memory for the descriptor
> +	 * of the model.
> +	 */
> +	if (n != histo->n_bins) {
> +		if (!ksmodel_histo_alloc(histo, n)) {
> +			ksmodel_clear(histo);
> +			return;
> +		}
> +	}
> +
> +	/* Reset the content of all bins (including overflow bins) to zero. */
> +	ksmodel_reset_bins(histo, 0, histo->n_bins + 1);

Here we could then have:

	ksmodel_reset_bins(histo, 0, ALLB(histo));

> +
> +	if (range % n == 0) {
> +		/*
> +		 * The range is multiple of the number of bin and needs no
> +		 * adjustment. This is very unlikely to happen but still ...
> +		 */
> +		histo->min = min;
> +		histo->max = max;
> +		histo->bin_size = range / n;
> +	} else {
> +		/*
> +		 * The range needs adjustment. The new range will be slightly
> +		 * bigger, compared to the requested one.
> +		 */
> +		histo->bin_size = range / n + 1;
> +		corrected_range = histo->bin_size * n;
> +		delta_range = corrected_range - range;
> +		histo->min = min - delta_range / 2;
> +		histo->max = histo->min + corrected_range;
> +
> +		if (!force_in_range)
> +			return;
> +
> +		/*
> +		 * Make sure that the new range doesn't go outside of the time
> +		 * interval of the dataset.
> +		 */
> +		last = histo->data[histo->data_size - 1];
> +		if (histo->min < histo->data[0]->ts) {
> +			histo->min = histo->data[0]->ts;
> +			histo->max = histo->min + corrected_range;
> +		} else if (histo->max > last->ts) {
> +			histo->max = last->ts;
> +			histo->min = histo->max - corrected_range;
> +		}

Hmm, Let's say the range of the data is 0..1,000,001 and we picked a
range of 999,999 starting at 0. And there's 1024 buckets. This would
have:

min = 0; max = 999999; n = 1024; range = 999999;

bin_size = 999999 / 1024 + 1 = 977;
correct_range = 977 * 1024 = 1000448;
delta_rang = 1000448 - 999999 = 449;
histo->min = 0 - 449 / 2 = -224;
histo->max = -224 + 1000448 = 1000224;

Now histo->min (-224) < histo->data[0]->ts (0) so

histo->min = 0;
histo->max = 0 + 1000448 = 1000448;

Thus we get max greater than the data set.

Actually, we would always get a range greater than the data set if the
data set itself is not divisible by the bin size. Is that a problem?
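
The same arithmetic as a quick throwaway program, in case you want to
play with other numbers (purely illustrative, not part of the patch):

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main(void)
{
	uint64_t min = 0, max = 999999, n = 1024;
	uint64_t range = max - min;
	uint64_t bin_size = range / n + 1;			/* 977 */
	uint64_t corrected_range = bin_size * n;		/* 1000448 */
	uint64_t delta_range = corrected_range - range;		/* 449 */
	int64_t new_min = (int64_t)min - (int64_t)(delta_range / 2);	/* -224 */
	int64_t new_max = new_min + (int64_t)corrected_range;		/* 1000224 */

	printf("min = %" PRId64 ", max = %" PRId64 "\n", new_min, new_max);

	return 0;
}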

-- Steve

> +	}
> +}
> +
> +/**
> + * @brief Prepare the bining of the Visualization model.
> + * @param histo: Input location for the model descriptor.
> + * @param n: Number of bins.
> + * @param min: Lower edge of the time-window to be visualized.
> + * @param max: Upper edge of the time-window to be visualized.
> + */
> +void ksmodel_set_bining(struct kshark_trace_histo *histo,
> +			size_t n, uint64_t min, uint64_t max)
> +{
> +	ksmodel_set_in_range_bining(histo, n, min, max, false);
> +}
> +
>

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v2 3/7] kernel-shark-qt: Introduce the visualization model used by the Qt-based KS
  2018-07-31 13:52 ` [PATCH v2 3/7] kernel-shark-qt: Introduce the visualization model used by the Qt-based KS Yordan Karadzhov (VMware)
  2018-08-01  0:51   ` Steven Rostedt
@ 2018-08-01  1:43   ` Steven Rostedt
  2018-08-01 18:22   ` Steven Rostedt
                     ` (2 subsequent siblings)
  4 siblings, 0 replies; 21+ messages in thread
From: Steven Rostedt @ 2018-08-01  1:43 UTC (permalink / raw)
  To: Yordan Karadzhov (VMware); +Cc: linux-trace-devel

On Tue, 31 Jul 2018 16:52:44 +0300
"Yordan Karadzhov (VMware)" <y.karadz@gmail.com> wrote:

> +static void ksmodel_set_next_bin_edge(struct kshark_trace_histo *histo,
> +				      size_t bin)
> +{
> +	size_t time, row, next_bin = bin + 1;
> +
> +	/* Calculate the beginning of the next bin. */
> +	time = histo->min + next_bin * histo->bin_size;
> +
> +	/*
> +	 * Find the index of the first entry inside
> +	 * the next bin (timestamp > time).
> +	 */
> +	row = kshark_find_entry_by_time(time, histo->data, 0,
> +					histo->data_size - 1);

Hmm, I see this used as this:

       for (bin = 0; bin < histo->n_bins; ++bin)
               ksmodel_set_next_bin_edge(histo, bin);

A lot. Thus we should be able to optimize this with:

static void ksmodel_set_next_bin_edge(struct kshark_trace_histo *histo,
				size_t bin, size_t *last_row)
{

	row = kshark_find_entry_by_time(time, histo->data, *last_row,
					histo->data_size - 1);

	*last_row = row;

And the caller can be:

	last_row = 0;
	for (bin = 0; bin < histo->n_bins; ++bin)
		ksmodel_set_next_bin_edge(histo, bin, &last_row);

Then we won't be doing the search from 0 each time. It should help speed
it up a little.

> +
> +	/*
> +	 * The timestamp of the very last entry of the dataset can be exactly
> +	 * equal to the value of the upper edge of the range. This is very
> +	 * likely to happen when we use ksmodel_set_in_range_bining(). In this
> +	 * case we have to increase the size of the very last bin in order to
> +	 * make sure that the last entry of the dataset will fall into it.
> +	 */
> +	if (next_bin == histo->n_bins - 1)
> +		++time;
> +
> +	if (histo->data[row]->ts >= time + histo->bin_size) {
> +		/* The bin is empty. */
> +		histo->map[next_bin] = KS_EMPTY_BIN;
> +		return;
> +	}
> +
> +	/* Set the index of the first entry. */
> +	histo->map[next_bin] = row;
> +}
> +


[..]

> +/**
> + * @brief Provide the Visualization model with data. Calculate the current
> + *	  state of the model.
> + * @param histo: Input location for the model descriptor.
> + * @param data: Input location for the trace data.
> + * @param n: Number of bins.
> + */
> +void ksmodel_fill(struct kshark_trace_histo *histo,
> +		  struct kshark_entry **data, size_t n)
> +{
> +	int bin;
> +
> +	histo->data_size = n;
> +	histo->data = data;
> +
> +	if (histo->n_bins == 0 ||
> +	    histo->bin_size == 0 ||
> +	    histo->data_size == 0) {
> +		/*
> +		 * Something is wrong with this histo.
> +		 * Most likely the binning is not set.
> +		 */
> +		ksmodel_clear(histo);
> +		fprintf(stderr,
> +			"Unable to fill the model with data.\n");
> +		fprintf(stderr,
> +			"Try to set the bining of the model first.\n");
> +
> +		return;
> +	}
> +
> +	/* Set the Lower Overflow bin */
> +	ksmodel_set_lower_edge(histo);
> +
> +	/*
> +	 * Loop over the dataset and set the beginning of all individual bins.
> +	 */
> +	bin = 0;

Superfluous bin assignment above.

> +	for (bin = 0; bin < histo->n_bins; ++bin)
> +		ksmodel_set_next_bin_edge(histo, bin);
> +
> +	/* Set the Upper Overflow bin. */
> +	ksmodel_set_upper_edge(histo);
> +
> +	/* Calculate the number of entries in each bin. */
> +	ksmodel_set_bin_counts(histo);
> +}
> +
> +/**
> + * @brief Get the total number of entries in a given bin.
> + * @param histo: Input location for the model descriptor.
> + * @param bin: Bin id.
> + * @returns The number of entries in this bin.
> + */
> +size_t ksmodel_bin_count(struct kshark_trace_histo *histo, int bin)
> +{
> +	if (bin >= 0 && bin < histo->n_bins)
> +		return histo->bin_count[bin];
> +
> +	if (bin == UPPER_OVERFLOW_BIN)
> +		return histo->bin_count[UOB(histo)];
> +
> +	if (bin == LOWER_OVERFLOW_BIN)
> +		return histo->bin_count[LOB(histo)];
> +
> +	return 0;
> +}
> +
> +/**
> + * @brief Shift the time-window of the model forward. Recalculate the current
> + *	  state of the model.
> + * @param histo: Input location for the model descriptor.
> + * @param n: Number of bins to shift.
> + */
> +void ksmodel_shift_forward(struct kshark_trace_histo *histo, size_t n)
> +{
> +	int bin;
> +	
> +	if (!histo->data_size)
> +		return;
> +
> +	if (histo->bin_count[UOB(histo)] == 0) {
> +		/*
> +		 * The Upper Overflow bin is empty. This means that we are at
> +		 * the upper edge of the dataset already. Do nothing in this
> +		 * case.
> +		 */
> +		return;
> +	}
> +
> +	histo->min += n * histo->bin_size;
> +	histo->max += n * histo->bin_size;
> +
> +	if (n >= histo->n_bins) {
> +		/*
> +		 * No overlap between the new and the old ranges. Recalculate
> +		 * all bins from scratch. First calculate the new range.
> +		 */
> +		ksmodel_set_bining(histo, histo->n_bins, histo->min,
> +							 histo->max);
> +
> +		ksmodel_fill(histo, histo->data, histo->data_size);
> +		return;
> +	}
> +
> +	/* Set the new Lower Overflow bin. */
> +	ksmodel_set_lower_edge(histo);
> +
> +	/*
> +	 * Copy the the mapping indexes of all overlaping bins starting from
> +	 * bin "0" of the new histo. Note that the number of overlaping bins
> +	 * is histo->n_bins - n.
> +	 */
> +	memmove(&histo->map[0], &histo->map[n],
> +		sizeof(histo->map[0]) * (histo->n_bins - n));
> +
> +	/*
> +	 * The the mapping index pf the old Upper Overflow bin is now index

"The the" and "pf"?

> +	 * of the first new bin.
> +	 */
> +	bin = UOB(histo) - n;
> +	histo->map[bin] = histo->map[UOB(histo)];
> +
> +	/* Calculate only the content of the new (non-overlapping) bins. */
> +	for (; bin < histo->n_bins; ++bin)
> +		ksmodel_set_next_bin_edge(histo, bin);
> +
> +	/*
> +	 * Set the new Upper Overflow bin and calculate the number of entries
> +	 * in each bin.
> +	 */
> +	ksmodel_set_upper_edge(histo);
> +	ksmodel_set_bin_counts(histo);
> +}
> +
> +/**
> + * @brief Shift the time-window of the model backward. Recalculate the current
> + *	  state of the model.
> + * @param histo: Input location for the model descriptor.
> + * @param n: Number of bins to shift.
> + */
> +void ksmodel_shift_backward(struct kshark_trace_histo *histo, size_t n)
> +{
> +	int bin;
> +
> +	if (!histo->data_size)
> +		return;
> +
> +	if (histo->bin_count[LOB(histo)] == 0) {
> +		/*
> +		 * The Lower Overflow bin is empty. This means that we are at
> +		 * the Lower edge of the dataset already. Do nothing in this
> +		 * case.
> +		 */
> +		return;
> +	}
> +
> +	histo->min -= n * histo->bin_size;
> +	histo->max -= n * histo->bin_size;
> +
> +	if (n >= histo->n_bins) {
> +		/*
> +		 * No overlap between the new and the old range. Recalculate
> +		 * all bins from scratch. First calculate the new range.
> +		 */
> +		ksmodel_set_bining(histo, histo->n_bins, histo->min,
> +							 histo->max);
> +
> +		ksmodel_fill(histo, histo->data, histo->data_size);
> +		return;
> +	}
> +
> +	/*
> +	 * Copy the the mapping indexes of all overlaping bins starting from
> +	 * bin "0" of the old histo. Note that the number of overlaping bins
> +	 * is histo->n_bins - n.
> +	 */
> +	memmove(&histo->map[n], &histo->map[0],
> +		sizeof(histo->map[0]) * (histo->n_bins - n));
> +
> +	/* Set the new Lower Overflow bin. */
> +	ksmodel_set_lower_edge(histo);
> +
> +	/* Calculate only the content of the new (non-overlapping) bins. */
> +	bin = 0;
> +	while (bin < n) {

Convert this to a for loop please.

-- Steve

> +		ksmodel_set_next_bin_edge(histo, bin);
> +		++bin;
> +	}
> +
> +	/*
> +	 * Set the new Upper Overflow bin and calculate the number of entries
> +	 * in each bin.
> +	 */
> +	ksmodel_set_upper_edge(histo);
> +	ksmodel_set_bin_counts(histo);
> +}
> 

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v2 3/7] kernel-shark-qt: Introduce the visualization model used by the Qt-based KS
  2018-08-01  0:51   ` Steven Rostedt
@ 2018-08-01 16:10     ` Yordan Karadzhov
  2018-08-03 18:48     ` Steven Rostedt
  1 sibling, 0 replies; 21+ messages in thread
From: Yordan Karadzhov @ 2018-08-01 16:10 UTC (permalink / raw)
  To: Steven Rostedt; +Cc: linux-trace-devel, Tzvetomir Stoyanov



On 1.08.2018 03:51, Steven Rostedt wrote:
>> +static void ksmodel_set_in_range_bining(struct kshark_trace_histo *histo,
>> +					size_t n, uint64_t min, uint64_t max,
>> +					bool force_in_range)
>> +{
>> +	uint64_t corrected_range, delta_range, range = max - min;
>> +	struct kshark_entry *last;
>> +
>> +	/* The size of the bin must be >= 1, hence the range must be >= n. */
>> +	if (n == 0 || range < n)
>> +		return;
>> +
>> +	/*
>> +	 * If the number of bins changes, allocate memory for the descriptor
>> +	 * of the model.
>> +	 */
>> +	if (n != histo->n_bins) {
>> +		if (!ksmodel_histo_alloc(histo, n)) {
>> +			ksmodel_clear(histo);
>> +			return;
>> +		}
>> +	}
>> +
>> +	/* Reset the content of all bins (including overflow bins) to zero. */
>> +	ksmodel_reset_bins(histo, 0, histo->n_bins + 1);
> Here we could then have:
>
> 	ksmodel_reset_bins(histo, 0, ALLB(histo));
>
>> +
>> +	if (range % n == 0) {
>> +		/*
>> +		 * The range is multiple of the number of bin and needs no
>> +		 * adjustment. This is very unlikely to happen but still ...
>> +		 */
>> +		histo->min = min;
>> +		histo->max = max;
>> +		histo->bin_size = range / n;
>> +	} else {
>> +		/*
>> +		 * The range needs adjustment. The new range will be slightly
>> +		 * bigger, compared to the requested one.
>> +		 */
>> +		histo->bin_size = range / n + 1;
>> +		corrected_range = histo->bin_size * n;
>> +		delta_range = corrected_range - range;
>> +		histo->min = min - delta_range / 2;
>> +		histo->max = histo->min + corrected_range;
>> +
>> +		if (!force_in_range)
>> +			return;
>> +
>> +		/*
>> +		 * Make sure that the new range doesn't go outside of the time
>> +		 * interval of the dataset.
>> +		 */
>> +		last = histo->data[histo->data_size - 1];
>> +		if (histo->min < histo->data[0]->ts) {
>> +			histo->min = histo->data[0]->ts;
>> +			histo->max = histo->min + corrected_range;
>> +		} else if (histo->max > last->ts) {
>> +			histo->max = last->ts;
>> +			histo->min = histo->max - corrected_range;
>> +		}
> Hmm, Let's say the range of the data is 0..1,000,001 and we picked a
> range of 999,999 starting at 0. And there's 1024 buckets. This would
> have:
>
> min = 0; max = 999999; n = 1024; range = 999999;
>
> bin_size = 999999 / 1024 + 1 = 977;
> correct_range = 977 * 1024 = 1000448;
> delta_rang = 1000448 - 999999 = 449;
> histo->min = 0 - 449 / 2 = -224;
> histo->max = -224 + 1000448 = 1000224;
>
> Now histo->min (-224) < histo->data[0]->ts (0) so
>
> histo->min = 0;
> histo->max = 0 + 1000448 = 1000448;
>
> Thus we get max greater than the data set.
>
> Actually, we would always get a range greater than the data set, if the
> data set itself is not divisible by the bin size. This that a problem?
Hi Steven,
In your example you consider the case when we want to visualize the
entire data-set. Indeed, in this case the true range of the histo will
be slightly bigger. This is not a problem.

The "force_in_range" part of the logic in this function deals with
another case. Let's stick to your example, but say that the current
range is from 0 to 10,000. Now if the user holds the "Zoom Out" button
for a second, this function will be called several hundred times, and
each call will add its own small negative correction to the value of
histo->min (initially 0). In the end we will have a significant part of
the graph outside of the data-set.

Thanks!
Yordan

>
> -- Steve
>
>> +	}
>> +}
>> +
>> +/**
>> + * @brief Prepare the bining of the Visualization model.
>> + * @param histo: Input location for the model descriptor.
>> + * @param n: Number of bins.
>> + * @param min: Lower edge of the time-window to be visualized.
>> + * @param max: Upper edge of the time-window to be visualized.
>> + */
>> +void ksmodel_set_bining(struct kshark_trace_histo *histo,
>> +			size_t n, uint64_t min, uint64_t max)
>> +{
>> +	ksmodel_set_in_range_bining(histo, n, min, max, false);
>> +}
>> +
>>

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v2 3/7] kernel-shark-qt: Introduce the visualization model used by the Qt-based KS
  2018-07-31 13:52 ` [PATCH v2 3/7] kernel-shark-qt: Introduce the visualization model used by the Qt-based KS Yordan Karadzhov (VMware)
  2018-08-01  0:51   ` Steven Rostedt
  2018-08-01  1:43   ` Steven Rostedt
@ 2018-08-01 18:22   ` Steven Rostedt
  2018-08-02 12:59     ` Yordan Karadzhov (VMware)
  2018-08-01 18:44   ` Steven Rostedt
  2018-08-01 18:50   ` Steven Rostedt
  4 siblings, 1 reply; 21+ messages in thread
From: Steven Rostedt @ 2018-08-01 18:22 UTC (permalink / raw)
  To: Yordan Karadzhov (VMware); +Cc: linux-trace-devel, Tzvetomir Stoyanov

On Tue, 31 Jul 2018 16:52:44 +0300
"Yordan Karadzhov (VMware)" <y.karadz@gmail.com> wrote:


I'd add a comment above this function (yes, static functions may have
header comments; they just don't need to be doxygen-style).

/*
 * Fill in the bin_count array, which maps the number of data rows that
 * exist within each bin.
 */

Or something like that.

> +static void ksmodel_set_bin_counts(struct kshark_trace_histo *histo)
> +{
> +	int i = 0, prev_not_empty;
> +
> +	memset(&histo->bin_count[0], 0,
> +	       (histo->n_bins) * sizeof(histo->bin_count[0]));
> +	/*
> +	 * Find the first bin which contains data. Start by checking the
> +	 * Lower Overflow bin.
> +	 */
> +	if (histo->map[histo->n_bins + 1] != KS_EMPTY_BIN) {

Hmm, shouldn't that be:

	if (histo->map[LOB(histo)] != KS_EMPTY_BIN) ?

> +		prev_not_empty = LOB(histo);
> +	} else {

Add a comment here:

		/* Find the first non-empty bin */

> +		while (histo->map[i] < 0) {

Can map[i] be a negative value other than KS_EMPTY_BIN?

> +			++i;
> +		}
> +
> +		prev_not_empty = i++;
> +	}
> +
> +	/*
> +	 * Starting from the first not empty bin, loop over all bins and fill
> +	 * in the bin_count array to hold the number of entries in each bin.
> +	 */
> +	while (i < histo->n_bins) {

The above should be a for loop:

	for (; i < histo->n_bins; i++) {


> +		if (histo->map[i] != KS_EMPTY_BIN) {
> +			/*
> +			 * Here we set the number of entries in
> +			 * "prev_not_empty" bin.

The above comment needs to be changed:

			/*
			 * The current bin is not empty, take its data
			 * row and subtract it from the data row of the
			 * previous not empty bin, which will give us
			 * the number of data rows in that bin.

Or something like that.

> +			 */
> +			histo->bin_count[prev_not_empty] =
> +				histo->map[i] - histo->map[prev_not_empty];
> +	
> +			prev_not_empty = i;
> +		}
> +
> +		++i;
> +	}
> +
> +	/* Check if the Upper Overflow bin contains data. */
> +	if (histo->map[UOB(histo)] == KS_EMPTY_BIN) {
> +		/*
> +		 * The Upper Overflow bin is empty. Use the size of the
> +		 * dataset to calculate the content of the previous not
> +		 * empty bin.
> +		 */
> +		histo->bin_count[prev_not_empty] = histo->data_size -
> +						   histo->map[prev_not_empty];
> +	} else {
> +		/*
> +		 * Use the index of the first entry inside the Upper Overflow
> +		 * bin to calculate the content of the previous not empty
> +		 * bin.
> +		 */
> +		histo->bin_count[prev_not_empty] = histo->map[UOB(histo)] -
> +						   histo->map[prev_not_empty];
> +	}
> +}
> +
> +/**
> + * @brief Provide the Visualization model with data. Calculate the current
> + *	  state of the model.
> + * @param histo: Input location for the model descriptor.
> + * @param data: Input location for the trace data.
> + * @param n: Number of bins.
> + */
> +void ksmodel_fill(struct kshark_trace_histo *histo,
> +		  struct kshark_entry **data, size_t n)
> +{
> +	int bin;
> +
> +	histo->data_size = n;
> +	histo->data = data;
> +
> +	if (histo->n_bins == 0 ||
> +	    histo->bin_size == 0 ||
> +	    histo->data_size == 0) {
> +		/*
> +		 * Something is wrong with this histo.
> +		 * Most likely the binning is not set.
> +		 */
> +		ksmodel_clear(histo);
> +		fprintf(stderr,
> +			"Unable to fill the model with data.\n");
> +		fprintf(stderr,
> +			"Try to set the bining of the model first.\n");
> +
> +		return;
> +	}
> +
> +	/* Set the Lower Overflow bin */
> +	ksmodel_set_lower_edge(histo);
> +
> +	/*
> +	 * Loop over the dataset and set the beginning of all individual bins.
> +	 */
> +	bin = 0;

As stated before, superfluous bin.

> +	for (bin = 0; bin < histo->n_bins; ++bin)
> +		ksmodel_set_next_bin_edge(histo, bin);
> +
> +	/* Set the Upper Overflow bin. */
> +	ksmodel_set_upper_edge(histo);
> +
> +	/* Calculate the number of entries in each bin. */
> +	ksmodel_set_bin_counts(histo);
> +}
> +
> +/**
> + * @brief Get the total number of entries in a given bin.
> + * @param histo: Input location for the model descriptor.
> + * @param bin: Bin id.
> + * @returns The number of entries in this bin.
> + */
> +size_t ksmodel_bin_count(struct kshark_trace_histo *histo, int bin)
> +{
> +	if (bin >= 0 && bin < histo->n_bins)
> +		return histo->bin_count[bin];
> +
> +	if (bin == UPPER_OVERFLOW_BIN)
> +		return histo->bin_count[UOB(histo)];
> +
> +	if (bin == LOWER_OVERFLOW_BIN)
> +		return histo->bin_count[LOB(histo)];
> +
> +	return 0;
> +}
> +
> +/**
> + * @brief Shift the time-window of the model forward. Recalculate the current
> + *	  state of the model.
> + * @param histo: Input location for the model descriptor.
> + * @param n: Number of bins to shift.
> + */
> +void ksmodel_shift_forward(struct kshark_trace_histo *histo, size_t n)
> +{
> +	int bin;
> +	
> +	if (!histo->data_size)
> +		return;
> +
> +	if (histo->bin_count[UOB(histo)] == 0) {

Or should this be:

	if (histo->map[UOB(histo)] == KS_EMPTY_BIN)  ?

I know it should mean the same, but we use map below, and I like to be
more consistent.

> +		/*
> +		 * The Upper Overflow bin is empty. This means that we are at
> +		 * the upper edge of the dataset already. Do nothing in this
> +		 * case.
> +		 */
> +		return;
> +	}
> +
> +	histo->min += n * histo->bin_size;
> +	histo->max += n * histo->bin_size;
> +
> +	if (n >= histo->n_bins) {
> +		/*
> +		 * No overlap between the new and the old ranges. Recalculate
> +		 * all bins from scratch. First calculate the new range.
> +		 */
> +		ksmodel_set_bining(histo, histo->n_bins, histo->min,
> +							 histo->max);
> +
> +		ksmodel_fill(histo, histo->data, histo->data_size);
> +		return;
> +	}
> +
> +	/* Set the new Lower Overflow bin. */
> +	ksmodel_set_lower_edge(histo);
> +

Hmm, the above also sets histo->map[0]. This should then equal
histo->map[n] right? I wonder if we should have a sanity check here
making sure that's the case.
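
Something as simple as this would do (just a sketch, with assert() used
only as shorthand; it assumes the equality is supposed to hold even when
the old bin "n" was empty):

	/*
	 * The first bin of the shifted histo must start where the old
	 * bin "n" started.
	 */
	assert(histo->map[0] == histo->map[n]);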

> +	/*
> +	 * Copy the the mapping indexes of all overlaping bins starting from
> +	 * bin "0" of the new histo. Note that the number of overlaping bins
> +	 * is histo->n_bins - n.
> +	 */
> +	memmove(&histo->map[0], &histo->map[n],
> +		sizeof(histo->map[0]) * (histo->n_bins - n));
> +
> +	/*
> +	 * The the mapping index pf the old Upper Overflow bin is now index
> +	 * of the first new bin.
> +	 */
> +	bin = UOB(histo) - n;
> +	histo->map[bin] = histo->map[UOB(histo)];
> +
> +	/* Calculate only the content of the new (non-overlapping) bins. */
> +	for (; bin < histo->n_bins; ++bin)
> +		ksmodel_set_next_bin_edge(histo, bin);
> +
> +	/*
> +	 * Set the new Upper Overflow bin and calculate the number of entries
> +	 * in each bin.
> +	 */
> +	ksmodel_set_upper_edge(histo);
> +	ksmodel_set_bin_counts(histo);
> +}
> +
> +/**
> + * @brief Shift the time-window of the model backward. Recalculate the current
> + *	  state of the model.
> + * @param histo: Input location for the model descriptor.
> + * @param n: Number of bins to shift.
> + */
> +void ksmodel_shift_backward(struct kshark_trace_histo *histo, size_t n)
> +{
> +	int bin;
> +
> +	if (!histo->data_size)
> +		return;
> +
> +	if (histo->bin_count[LOB(histo)] == 0) {

Again, this probably should be:

	if (histo->map[LOB(histo)] == KS_EMPTY_BIN) {


> +		/*
> +		 * The Lower Overflow bin is empty. This means that we are at
> +		 * the Lower edge of the dataset already. Do nothing in this
> +		 * case.
> +		 */
> +		return;
> +	}
> +
> +	histo->min -= n * histo->bin_size;
> +	histo->max -= n * histo->bin_size;
> +
> +	if (n >= histo->n_bins) {
> +		/*
> +		 * No overlap between the new and the old range. Recalculate
> +		 * all bins from scratch. First calculate the new range.
> +		 */
> +		ksmodel_set_bining(histo, histo->n_bins, histo->min,
> +							 histo->max);
> +
> +		ksmodel_fill(histo, histo->data, histo->data_size);
> +		return;
> +	}
> +
> +	/*
> +	 * Copy the the mapping indexes of all overlaping bins starting from
> +	 * bin "0" of the old histo. Note that the number of overlaping bins
> +	 * is histo->n_bins - n.
> +	 */
> +	memmove(&histo->map[n], &histo->map[0],
> +		sizeof(histo->map[0]) * (histo->n_bins - n));
> +
> +	/* Set the new Lower Overflow bin. */
> +	ksmodel_set_lower_edge(histo);
> +
> +	/* Calculate only the content of the new (non-overlapping) bins. */
> +	bin = 0;
> +	while (bin < n) {

This needs to be a for loop.

> +		ksmodel_set_next_bin_edge(histo, bin);
> +		++bin;
> +	}
> +
> +	/*
> +	 * Set the new Upper Overflow bin and calculate the number of entries
> +	 * in each bin.
> +	 */
> +	ksmodel_set_upper_edge(histo);
> +	ksmodel_set_bin_counts(histo);
> +}
> +
> +/**
> + * @brief Move the time-window of the model to a given location. Recalculate
> + *	  the current state of the model.
> + * @param histo: Input location for the model descriptor.
> + * @param ts: position in time to be visualized.
> + */
> +void ksmodel_jump_to(struct kshark_trace_histo *histo, size_t ts)
> +{
> +	size_t min, max, range_min;
> +
> +	if (ts > histo->min && ts < histo->max) {
> +		/*
> +		 * The new position is already inside the range.
> +		 * Do nothing in this case.
> +		 */
> +		return;
> +	}
> +
> +	/*
> +	 * Calculate the new range without changing the size and the number
> +	 * of bins.
> +	 */
> +	min = ts - histo->n_bins * histo->bin_size / 2;
> +
> +	/* Make sure that the range does not go outside of the dataset. */
> +	if (min < histo->data[0]->ts)
> +		min = histo->data[0]->ts;
> +

I wonder if we should make this an else

	else {

> +	range_min = histo->data[histo->data_size - 1]->ts -
> +		   histo->n_bins * histo->bin_size;
> +
> +	if (min > range_min)
> +		min = range_min;

	}

Making sure min stays within the data set?
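
I.e. the clamping would end up looking something like this (sketch of
the suggestion above):

	if (min < histo->data[0]->ts) {
		min = histo->data[0]->ts;
	} else {
		range_min = histo->data[histo->data_size - 1]->ts -
			    histo->n_bins * histo->bin_size;

		if (min > range_min)
			min = range_min;
	}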

> +
> +	max = min + histo->n_bins * histo->bin_size;
> +
> +	/* Use the new range to recalculate all bins from scratch. */
> +	ksmodel_set_bining(histo, histo->n_bins, min, max);
> +	ksmodel_fill(histo, histo->data, histo->data_size);
> +}
> +
> +/**
> + * @brief Extend the time-window of the model. Recalculate the current state
> + *	  of the model.
> + * @param histo: Input location for the model descriptor.
> + * @param r: Scale factor of the zoom-out.
> + * @param mark: Focus point of the zoom-out.
> + */
> +void ksmodel_zoom_out(struct kshark_trace_histo *histo,
> +		      double r, int mark)
> +{
> +	size_t range, min, max, delta_min;
> +	double delta_tot;
> +
> +	if (!histo->data_size)
> +		return;
> +
> +	/*
> +	 * If the marker is not set, assume that the focal point of the zoom
> +	 * is the center of the range.
> +	 */
> +	if (mark < 0)
> +		mark = histo->n_bins / 2;
> +
> +	/*
> +	 * Calculate the new range of the histo. Use the bin of the marker
> +	 * as a focal point for the zoomout. With this the maker will stay
> +	 * inside the same bin in the new histo.
> +	 */
> +	range = histo->max - histo->min;
> +	delta_tot = range * r;
> +	delta_min = delta_tot * mark / histo->n_bins;
> +
> +	min = histo->min - delta_min;
> +	max = histo->max + (size_t) delta_tot - delta_min;

Took me a bit to figure out what exactly the above is doing. Let me
explain what I think it is doing and you can correct me if I'm wrong.

We set delta_tot to increase by the percentage requested (easy).

Now we make delta_min equal to a percentage of delta_tot based on where
mark is in the original bins. If mark is zero, then mark was at 0% of
the original bins; if it was at histo->n_bins - 1, it was at (almost)
100%. If it is halfway, then we place delta_min at 50% of delta_tot.

Then we decrease the original min by the delta_tot * mark/n_bins
portion, and increase the max by the delta_tot * (1 - mark/n_bins)
portion.

Sound right? Maybe we can add a comment saying such?
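
Something along these lines, perhaps (wording is only a suggestion):

	/*
	 * Split the extra range around the focal point: the fraction of
	 * delta_tot that goes below "min" is proportional to where the
	 * marker sits in the current bins (mark / n_bins); the rest is
	 * added above "max". This way the marker stays in the same bin.
	 */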

> +
> +	/* Make sure the new range doesn't go outside of the dataset. */
> +	if (min < histo->data[0]->ts)
> +		min = histo->data[0]->ts;
> +
> +	if (max > histo->data[histo->data_size - 1]->ts)
> +		max = histo->data[histo->data_size - 1]->ts;
> +
> +	/*
> +	 * Use the new range to recalculate all bins from scratch. Enforce
> +	 * "In Range" adjustment of the range of the model, in order to avoid
> +	 * slowly drifting outside of the data-set in the case when the very
> +	 * first or the very last entry is used as a focal point.
> +	 */
> +	ksmodel_set_in_range_bining(histo, histo->n_bins, min, max, true);
> +	ksmodel_fill(histo, histo->data, histo->data_size);
> +}
> +
> +/**
> + * @brief Shrink the time-window of the model. Recalculate the current state
> + *	  of the model.
> + * @param histo: Input location for the model descriptor.
> + * @param r: Scale factor of the zoom-in.
> + * @param mark: Focus point of the zoom-in.
> + */
> +void ksmodel_zoom_in(struct kshark_trace_histo *histo,
> +		     double r, int mark)
> +{
> +	size_t range, min, max, delta_min;
> +	double delta_tot;
> +
> +	if (!histo->data_size)
> +		return;
> +
> +	/*
> +	 * If the marker is not set, assume that the focal point of the zoom
> +	 * is the center of the range.
> +	 */
> +	if (mark < 0)
> +		mark = histo->n_bins / 2;
> +
> +	range = histo->max - histo->min;
> +
> +	/* Avoid overzooming. */
> +	if (range < histo->n_bins * 4)
> +		return;
> +
> +	/*
> +	 * Calculate the new range of the histo. Use the bin of the marker
> +	 * as a focal point for the zoomin. With this the maker will stay
> +	 * inside the same bin in the new histo.
> +	 */
> +	delta_tot =  range * r;
> +	if (mark == (int)histo->n_bins - 1)
> +		delta_min = delta_tot;


> +	else if (mark == 0)
> +		delta_min = 0;
> +	else
> +		delta_min = delta_tot * mark / histo->n_bins;

The above two are equivalent:

	if (mark == 0)
Then
	delta_min = delta_tot * mark / histo->n_bins = 0


> +
> +	min = histo->min + delta_min;
> +	max = histo->max - (size_t) delta_tot + delta_min;
> +
> +	/*
> +	 * Use the new range to recalculate all bins from scratch. Enforce
> +	 * "In Range" adjustment of the range of the model, in order to avoid
> +	 * slowly drifting outside of the data-set in the case when the very
> +	 * first or the very last entry is used as a focal point.
> +	 */
> +	ksmodel_set_in_range_bining(histo, histo->n_bins, min, max, true);
> +	ksmodel_fill(histo, histo->data, histo->data_size);
> +}

Hmm, I think zoom_out and zoom_in could be combined:

static void ksmodel_zoom(struct kshark_trace_histo *histo,
			 double r, int mark, bool zoom_in)
{
	size_t range, min, max, delta_min;
	double delta_tot;

	if (!histo->data_size)
		return;

	/*
	 * If the marker is not set, assume that the focal point of the zoom
	 * is the center of the range.
	 */
	if (mark < 0)
		mark = histo->n_bins / 2;

	range = histo->max - histo->min;

	/* Avoid overzooming (only relevant when zooming in). */
	if (zoom_in && range < histo->n_bins * 4)
		return;

	/*
	 * Calculate the new range of the histo. Use the bin of the marker
	 * as a focal point for the zoomout. With this the maker will stay
	 * inside the same bin in the new histo.
	 */
	delta_tot = range * r;
	if (mark == (int)histo->n_bins - 1)
		delta_min = delta_tot;
	else
		delta_min = delta_tot * mark / histo->n_bins;

	
	min = zoom_in ? histo->min + delta_min : histo->min - delta_min;
	max = zoom_in ? histo->max - (size_t) delta_tot + delta_min :
		        histo->max + (size_t) delta_tot - delta_min;


	/* Make sure the new range doesn't go outside of the dataset. */
	if (min < histo->data[0]->ts)
		min = histo->data[0]->ts;

	if (max > histo->data[histo->data_size - 1]->ts)
		max = histo->data[histo->data_size - 1]->ts;

	/*
	 * Use the new range to recalculate all bins from scratch. Enforce
	 * "In Range" adjustment of the range of the model, in order to avoid
	 * slowly drifting outside of the data-set in the case when the very
	 * first or the very last entry is used as a focal point.
	 */
	ksmodel_set_in_range_bining(histo, histo->n_bins, min, max, true);
	ksmodel_fill(histo, histo->data, histo->data_size);
}

void ksmodel_zoom_out(struct kshark_trace_histo *histo,
		      double r, int mark)
{
	ksmodel_zoom(histo, r, mark, false);
}

void ksmodel_zoom_in(struct kshark_trace_histo *histo,
		     double r, int mark)
{
	ksmodel_zoom(histo, r, mark, true);
}

-- Steve

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v2 3/7] kernel-shark-qt: Introduce the visualization model used by the Qt-based KS
  2018-07-31 13:52 ` [PATCH v2 3/7] kernel-shark-qt: Introduce the visualization model used by the Qt-based KS Yordan Karadzhov (VMware)
                     ` (2 preceding siblings ...)
  2018-08-01 18:22   ` Steven Rostedt
@ 2018-08-01 18:44   ` Steven Rostedt
  2018-08-03 14:01     ` Yordan Karadzhov (VMware)
  2018-08-01 18:50   ` Steven Rostedt
  4 siblings, 1 reply; 21+ messages in thread
From: Steven Rostedt @ 2018-08-01 18:44 UTC (permalink / raw)
  To: Yordan Karadzhov (VMware); +Cc: linux-trace-devel, Tzvetomir Stoyanov

On Tue, 31 Jul 2018 16:52:44 +0300
"Yordan Karadzhov (VMware)" <y.karadz@gmail.com> wrote:

> +/**
> + * @brief Get the index of the first entry in a given bin.
> + * @param histo: Input location for the model descriptor.
> + * @param bin: Bin id.
> + * @returns Index of the first entry in this bin. If the bin is empty the
> + *	    function returns negative error identifier (KS_EMPTY_BIN).
> + */
> +ssize_t ksmodel_first_index_at_bin(struct kshark_trace_histo *histo, int bin)
> +{
> +	if (bin >= 0 && bin < (int) histo->n_bins)
> +		return histo->map[bin];
> +
> +	if (bin == UPPER_OVERFLOW_BIN)
> +		return histo->map[histo->n_bins];

		return histo->map[UOB(histo)];

> +
> +	if (bin == LOWER_OVERFLOW_BIN)
> +		return histo->map[histo->n_bins + 1];

		return histo->map[LOB(histo)];

> +
> +	return KS_EMPTY_BIN;
> +}
> +
> +/**
> + * @brief Get the index of the last entry in a given bin.
> + * @param histo: Input location for the model descriptor.
> + * @param bin: Bin id.
> + * @returns Index of the last entry in this bin. If the bin is empty the
> + *	    function returns negative error identifier (KS_EMPTY_BIN).
> + */
> +ssize_t ksmodel_last_index_at_bin(struct kshark_trace_histo *histo, int bin)
> +{
> +	ssize_t index = ksmodel_first_index_at_bin(histo, bin);
> +	size_t count = ksmodel_bin_count(histo, bin);
> +
> +	if (index >= 0 && count)
> +		index += count - 1;
> +
> +	return index;
> +}
> +
> +static bool ksmodel_is_visible(struct kshark_entry *e)
> +{
> +	if ((e->visible & KS_GRAPH_VIEW_FILTER_MASK) &&
> +	    (e->visible & KS_EVENT_VIEW_FILTER_MASK))
> +		return true;
> +
> +	return false;
> +}
> +
> +static struct kshark_entry_request *
> +ksmodel_entry_front_request_alloc(struct kshark_trace_histo *histo,
> +				  int bin, bool vis_only,
> +				  matching_condition_func func, int val)
> +{
> +	struct kshark_entry_request *req;
> +	size_t first, n;
> +
> +	/* Get the number of entries in this bin. */
> +	n = ksmodel_bin_count(histo, bin);
> +	if (!n)
> +		return NULL;
> +
> +	first = ksmodel_first_index_at_bin(histo, bin);
> +
> +	req = kshark_entry_request_alloc(first, n,
> +					 func, val,
> +					 vis_only, KS_GRAPH_VIEW_FILTER_MASK);

No need for req; just return the function:

	return kshark_entry_request_alloc(...);

> +
> +	return req;
> +}
> +
> +static struct kshark_entry_request *
> +ksmodel_entry_back_request_alloc(struct kshark_trace_histo *histo,
> +				 int bin, bool vis_only,
> +				 matching_condition_func func, int val)
> +{
> +	struct kshark_entry_request *req;
> +	size_t first, n;
> +
> +	/* Get the number of entries in this bin. */
> +	n = ksmodel_bin_count(histo, bin);
> +	if (!n)
> +		return NULL;
> +
> +	first = ksmodel_last_index_at_bin(histo, bin);
> +
> +	req = kshark_entry_request_alloc(first, n,
> +					 func, val,
> +					 vis_only, KS_GRAPH_VIEW_FILTER_MASK);

Same here.

> +
> +	return req;
> +}
> +
> +/**
> + * @brief Get the index of the first entry from a given Cpu in a given bin.
> + * @param histo: Input location for the model descriptor.
> + * @param bin: Bin id.
> + * @param cpu: Cpu Id.
> + * @returns Index of the first entry from a given Cpu in this bin.
> + */
> +ssize_t ksmodel_first_index_at_cpu(struct kshark_trace_histo *histo,
> +				   int bin, int cpu)
> +{
> +	size_t i, n, first, not_found = KS_EMPTY_BIN;
> +
> +	n = ksmodel_bin_count(histo, bin);
> +	if (!n)
> +		return not_found;
> +
> +	first = ksmodel_first_index_at_bin(histo, bin);

I wonder what this is used for. Don't we have per-cpu arrays or linked
lists?

> +
> +	for (i = first; i < first + n; ++i) {
> +		if (histo->data[i]->cpu == cpu) {
> +			if (ksmodel_is_visible(histo->data[i]))
> +				return i;
> +			else
> +				not_found = KS_FILTERED_BIN;
> +		}
> +	}
> +
> +	return not_found;
> +}
> +
> +/**
> + * @brief Get the index of the first entry from a given Task in a given bin.
> + * @param histo: Input location for the model descriptor.
> + * @param bin: Bin id.
> + * @param pid: Process Id of a task.
> + * @returns Index of the first entry from a given Task in this bin.
> + */
> +ssize_t ksmodel_first_index_at_pid(struct kshark_trace_histo *histo,
> +				   int bin, int pid)
> +{
> +	size_t i, n, first, not_found = KS_EMPTY_BIN;
> +
> +	n = ksmodel_bin_count(histo, bin);
> +	if (!n)
> +		return not_found;
> +
> +	first = ksmodel_first_index_at_bin(histo, bin);
> +	
> +	for (i = first; i < first + n; ++i) {
> +		if (histo->data[i]->pid == pid) {
> +			if (ksmodel_is_visible(histo->data[i]))
> +				return i;
> +			else
> +				not_found = KS_FILTERED_BIN;
> +		}
> +	}
> +
> +	return not_found;
> +}
> +
> +/**
> + * @brief In a given bin, start from the front end of the bin and go towards
> + *	  the back end, searching for an entry satisfying the Matching
> + *	  condition defined by a Matching condition function.
> + * @param histo: Input location for the model descriptor.
> + * @param bin: Bin id.
> + * @param vis_only: If true, a visible entry is requested.
> + * @param func: Matching condition function.
> + * @param val: Matching condition value, used by the Matching condition
> + *	       function.
> + * @param index: Optional output location for the index of the requested
> + *		 entry inside the array.
> + * @returns Pointer to a kshark_entry, if an entry has been found. Else NULL.
> + */
> +const struct kshark_entry *
> +ksmodel_get_entry_front(struct kshark_trace_histo *histo,
> +			int bin, bool vis_only,
> +			matching_condition_func func, int val,
> +			ssize_t *index)
> +{
> +	struct kshark_entry_request *req;
> +	const struct kshark_entry *entry;
> +
> +	if (index)
> +		*index = KS_EMPTY_BIN;
> +
> +	/* Set the position at the beginning of the bin and go forward. */
> +	req = ksmodel_entry_front_request_alloc(histo, bin, vis_only,
> +							    func, val);
> +	if (!req)
> +		return NULL;
> +
> +	entry = kshark_get_entry_front(req, histo->data, index);
> +	free(req);
> +
> +	return entry;
> +}

We could save on the allocation if we were to create the following:

void
kshark_entry_request_set(struct kshark_entry_request *req,
			 size_t first, size_t n,
			 matching_condition_func cond, int val,
			 bool vis_only, int vis_mask)
{
	req->first = first;
	req->n = n;
	req->cond = cond;
	req->val = val;
	req->vis_only = vis_only;
	req->vis_mask = vis_mask;
}

bool
ksmodel_entry_front_request_set(struct kshark_trace_histo *histo,
				struct kshark_entry_request *req,
				int bin, bool vis_only,
				matching_condition_func func, int val)
{
	size_t first, n;

	/* Get the number of entries in this bin. */
	n = ksmodel_bin_count(histo, bin);
	if (!n)
		return false;

	first = ksmodel_first_index_at_bin(histo, bin);

	kshark_entry_request_set(req, first, n,
				 func, val,
				 vis_only, KS_GRAPH_VIEW_FILTER_MASK);

	return true;
}

const struct kshark_entry *
ksmodel_get_entry_front(struct kshark_trace_histo *histo,
			int bin, bool vis_only,
			matching_condition_func func, int val,
			ssize_t *index)
{
	struct kshark_entry_request req;
	const struct kshark_entry *entry;
	bool ret;

	if (index)
		*index = KS_EMPTY_BIN;

	/* Set the position at the beginning of the bin and go forward. */
	ret = ksmodel_entry_front_request_set(histo, &req, bin, vis_only,
					      func, val);
	if (!ret)
		return NULL;

	entry = kshark_get_entry_front(&req, histo->data, index);

	return entry;
}

> +
> +/**
> + * @brief In a given bin, start from the back end of the bin and go towards
> + *	  the front end, searching for an entry satisfying the Matching
> + *	  condition defined by a Matching condition function.
> + * @param histo: Input location for the model descriptor.
> + * @param bin: Bin id.
> + * @param vis_only: If true, a visible entry is requested.
> + * @param func: Matching condition function.
> + * @param val: Matching condition value, used by the Matching condition
> + *	       function.
> + * @param index: Optional output location for the index of the requested
> + *		 entry inside the array.
> + * @returns Pointer to a kshark_entry, if an entry has been found. Else NULL.
> + */
> +const struct kshark_entry *
> +ksmodel_get_entry_back(struct kshark_trace_histo *histo,
> +		       int bin, bool vis_only,
> +		       matching_condition_func func, int val,
> +		       ssize_t *index)
> +{
> +	struct kshark_entry_request *req;
> +	const struct kshark_entry *entry;
> +
> +	if (index)
> +		*index = KS_EMPTY_BIN;
> +
> +	/* Set the position at the end of the bin and go backwards. */
> +	req = ksmodel_entry_back_request_alloc(histo, bin, vis_only,
> +							   func, val);
> +	if (!req)
> +		return NULL;
> +
> +	entry = kshark_get_entry_back(req, histo->data, index);
> +	free(req);

Ditto.


> +
> +	return entry;
> +}
> +
> +static int ksmodel_get_entry_pid(const struct kshark_entry *entry)
> +{
> +	if (!entry) {
> +		/* No data has been found. */
> +		return KS_EMPTY_BIN;
> +	}
> +
> +	/*
> +	 * Note that if some data has been found, but this data is
> +	 * filtered-out, the Dummy entry is returned. The PID of the Dummy
> +	 * entry is KS_FILTERED_BIN.
> +	 */
> +
> +	return entry->pid;
> +}
> +
> +/**
> + * @brief In a given bin, start from the front end of the bin and go towards
> + *	  the back end, searching for an entry from a given CPU. Return
> + *	  the Process Id of the task of the entry found.
> + * @param histo: Input location for the model descriptor.
> + * @param bin: Bin id.
> + * @param cpu: CPU Id.
> + * @param vis_only: If true, a visible entry is requested.
> + * @param index: Optional output location for the index of the requested
> + *		 entry inside the array.
> + * @returns Process Id of the task if an entry has been found. Else a negative
> + *	    Identifier (KS_EMPTY_BIN or KS_FILTERED_BIN).
> + */
> +int ksmodel_get_pid_front(struct kshark_trace_histo *histo,
> +			  int bin, int cpu, bool vis_only,
> +			  ssize_t *index)
> +{
> +	const struct kshark_entry *entry;
> +
> +	if (cpu < 0)
> +		return KS_EMPTY_BIN;
> +
> +	entry = ksmodel_get_entry_front(histo, bin, vis_only,
> +					       kshark_match_cpu, cpu,
> +					       index);
> +	return ksmodel_get_entry_pid(entry);
> +}
> +
> +/**
> + * @brief In a given bin, start from the back end of the bin and go towards
> + *	  the front end, searching for an entry from a given CPU. Return
> + *	  the Process Id of the task of the entry found.
> + * @param histo: Input location for the model descriptor.
> + * @param bin: Bin id.
> + * @param cpu: CPU Id.
> + * @param vis_only: If true, a visible entry is requested.
> + * @param index: Optional output location for the index of the requested
> + *		 entry inside the array.
> + * @returns Process Id of the task if an entry has been found. Else a negative
> + *	    Identifier (KS_EMPTY_BIN or KS_FILTERED_BIN).
> + */
> +int ksmodel_get_pid_back(struct kshark_trace_histo *histo,
> +			 int bin, int cpu, bool vis_only,
> +			 ssize_t *index)
> +{
> +	const struct kshark_entry *entry;
> +
> +	if (cpu < 0)
> +		return KS_EMPTY_BIN;
> +
> +	entry = ksmodel_get_entry_back(histo, bin, vis_only,
> +					      kshark_match_cpu, cpu,
> +					      index);
> +
> +	return ksmodel_get_entry_pid(entry);
> +}
> +
> +static int ksmodel_get_entry_cpu(const struct kshark_entry *entry)
> +{
> +	if (!entry) {
> +		/* No data has been found. */
> +		return KS_EMPTY_BIN;
> +	}
> +
> +	/*
> +	 * Note that if some data has been found, but this data is
> +	 * filtered-out, the Dummy entry is returned. The CPU Id of the Dummy
> +	 * entry is KS_FILTERED_BIN.
> +	 */
> +
> +	return entry->cpu;
> +}
> +
> +/**
> + * @brief In a given bin, start from the front end of the bin and go towards
> + *	  the back end, searching for an entry from a given PID. Return
> + *	  the CPU Id of the entry found.
> + * @param histo: Input location for the model descriptor.
> + * @param bin: Bin id.
> + * @param pid: Process Id.
> + * @param vis_only: If true, a visible entry is requested.
> + * @param index: Optional output location for the index of the requested
> + *		 entry inside the array.
> + * @returns CPU Id of the entry if an entry has been found. Else a negative
> + *	    Identifier (KS_EMPTY_BIN or KS_FILTERED_BIN).
> + */
> +int ksmodel_get_cpu_front(struct kshark_trace_histo *histo,
> +			  int bin, int pid, bool vis_only,
> +			  ssize_t *index)
> +{
> +	const struct kshark_entry *entry;
> +
> +	if (pid < 0)
> +		return KS_EMPTY_BIN;
> +
> +	entry = ksmodel_get_entry_front(histo, bin, vis_only,
> +					       kshark_match_pid, pid,
> +					       index);
> +	return ksmodel_get_entry_cpu(entry);
> +}
> +
> +/**
> + * @brief In a given bin, start from the back end of the bin and go towards
> + *	  the front end, searching for an entry from a given PID. Return
> + *	  the CPU Id of the entry found.
> + * @param histo: Input location for the model descriptor.
> + * @param bin: Bin id.
> + * @param pid: Process Id.
> + * @param vis_only: If true, a visible entry is requested.
> + * @param index: Optional output location for the index of the requested
> + *		 entry inside the array.
> + * @returns CPU Id of the entry if an entry has been found. Else a negative
> + *	    Identifier (KS_EMPTY_BIN or KS_FILTERED_BIN).
> + */
> +int ksmodel_get_cpu_back(struct kshark_trace_histo *histo,
> +			 int bin, int pid, bool vis_only,
> +			 ssize_t *index)
> +{
> +	const struct kshark_entry *entry;
> +
> +	if (pid < 0)
> +		return KS_EMPTY_BIN;
> +
> +	entry = ksmodel_get_entry_back(histo, bin, vis_only,
> +					      kshark_match_pid, pid,
> +					      index);
> +
> +	return ksmodel_get_entry_cpu(entry);
> +}
> +
> +/**
> + * @brief Check if a visible trace event from a given CPU exists in this bin.
> + * @param histo: Input location for the model descriptor.
> + * @param bin: Bin id.
> + * @param cpu: CPU Id.
> + * @param index: Optional output location for the index of the requested
> + *		 entry inside the array.
> + * @returns True, if a visible entry exists in this bin. Else false.
> + */
> +bool ksmodel_cpu_visible_event_exist(struct kshark_trace_histo *histo,
> +				     int bin, int cpu, ssize_t *index)
> +{
> +	struct kshark_entry_request *req;
> +	const struct kshark_entry *entry;
> +
> +	if (index)
> +		*index = KS_EMPTY_BIN;
> +
> +	/* Set the position at the beginning of the bin and go forward. */
> +	req = ksmodel_entry_front_request_alloc(histo,
> +						bin, true,
> +						kshark_match_cpu, cpu);

And would save an allocation here too.

> +	if (!req)
> +		return false;
> +
> +	/*
> +	 * The default visibility mask of the Model Data request is
> +	 * KS_GRAPH_VIEW_FILTER_MASK. Change the mask to
> +	 * KS_EVENT_VIEW_FILTER_MASK because we want to find a visible event.
> +	 */
> +	req->vis_mask = KS_EVENT_VIEW_FILTER_MASK;
> +
> +	entry = kshark_get_entry_front(req, histo->data, index);
> +	free(req);
> +
> +	if (!entry || !entry->visible) {
> +		/* No visible entry has been found. */
> +		return false;
> +	}
> +
> +	return true;
> +}
> +
> +/**
> + * @brief Check if a visible trace event from a given Task exists in this bin.
> + * @param histo: Input location for the model descriptor.
> + * @param bin: Bin id.
> + * @param pid: Process Id of the task.
> + * @param index: Optional output location for the index of the requested
> + *		 entry inside the array.
> + * @returns True, if a visible entry exists in this bin. Else false.
> + */
> +bool ksmodel_task_visible_event_exist(struct kshark_trace_histo *histo,
> +				      int bin, int pid, ssize_t *index)
> +{
> +	struct kshark_entry_request *req;
> +	const struct kshark_entry *entry;
> +
> +	if (index)
> +		*index = KS_EMPTY_BIN;
> +
> +	/* Set the position at the beginning of the bin and go forward. */
> +	req = ksmodel_entry_front_request_alloc(histo,
> +						bin, true,
> +						kshark_match_pid, pid);
> +	if (!req)
> +		return false;
> +
> +	/*
> +	 * The default visibility mask of the Model Data request is
> +	 * KS_GRAPH_VIEW_FILTER_MASK. Change the mask to
> +	 * KS_EVENT_VIEW_FILTER_MASK because we want to find a visible event.
> +	 */
> +	req->vis_mask = KS_EVENT_VIEW_FILTER_MASK;
> +
> +	entry = kshark_get_entry_front(req, histo->data, index);
> +	free(req);

And here.

-- Steve

> +
> +	if (!entry || !entry->visible) {
> +		/* No visible entry has been found. */
> +		return false;
> +	}
> +
> +	return true;
> +}
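
To make the intended use of the per-bin queries above concrete, here is a
minimal sketch of how a caller (e.g. the graph widget) could walk one CPU
lane. It is not part of the patch: the histo is assumed to be already
filled, and the printf stands in for whatever the GUI would actually draw.

#include <stdio.h>
#include "libkshark-model.h"

static void sketch_draw_cpu_lane(struct kshark_trace_histo *histo, int cpu)
{
	ssize_t index;
	int bin, pid;

	for (bin = 0; bin < histo->n_bins; ++bin) {
		/* Front-most task seen on this CPU in this bin. */
		pid = ksmodel_get_pid_front(histo, bin, cpu, true, &index);

		if (pid == KS_EMPTY_BIN)
			continue;	/* No data from this CPU here. */

		if (pid == KS_FILTERED_BIN)
			continue;	/* Only filtered-out data here. */

		/* Mark the bin only if a visible event exists in it. */
		if (ksmodel_cpu_visible_event_exist(histo, bin, cpu, &index))
			printf("bin %d: pid %d has a visible event\n",
			       bin, pid);
	}
}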

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v2 3/7] kernel-shark-qt: Introduce the visualization model used by the Qt-based KS
  2018-07-31 13:52 ` [PATCH v2 3/7] kernel-shark-qt: Introduce the visualization model used by the Qt-based KS Yordan Karadzhov (VMware)
                     ` (3 preceding siblings ...)
  2018-08-01 18:44   ` Steven Rostedt
@ 2018-08-01 18:50   ` Steven Rostedt
  2018-08-01 19:06     ` Yordan Karadzhov
  4 siblings, 1 reply; 21+ messages in thread
From: Steven Rostedt @ 2018-08-01 18:50 UTC (permalink / raw)
  To: Yordan Karadzhov (VMware); +Cc: linux-trace-devel

On Tue, 31 Jul 2018 16:52:44 +0300
"Yordan Karadzhov (VMware)" <y.karadz@gmail.com> wrote:

> index 0000000..15391a9
> --- /dev/null
> +++ b/kernel-shark-qt/src/libkshark-model.h
> @@ -0,0 +1,142 @@
> +/* SPDX-License-Identifier: LGPL-2.1 */
> +
> +/*
> + * Copyright (C) 2017 VMware Inc, Yordan Karadzhov <y.karadz@gmail.com>
> + */
> +
> + /**
> +  *  @file    libkshark-model.h
> +  *  @brief   Visualization model for FTRACE (trace-cmd) data.
> +  */
> +
> +#ifndef _LIB_KSHARK_MODEL_H
> +#define _LIB_KSHARK_MODEL_H
> +
> +// KernelShark
> +#include "libkshark.h"
> +
> +#ifdef __cplusplus
> +extern "C" {
> +#endif // __cplusplus
> +
> +/** Overflow Bin identifiers. */

Should add what an "Overflow Bin" is.

> +enum OverflowBin {
> +	/** Identifier of the Upper Overflow Bin. */
> +	UPPER_OVERFLOW_BIN = -1,
> +
> +	/** Identifier of the Lower Overflow Bin. */
> +	LOWER_OVERFLOW_BIN = -2,
> +};
> +
> +/** Structure describing the current state of the visualization model. */
> +struct kshark_trace_histo {
> +	/** Trace data. */
> +	struct kshark_entry	**data;
> +
> +	/** The size of the data. */

	/** The size of the above data array */

> +	size_t			data_size;
> +
> +	/** The index of the first entry in each bin. */

	/** The first entry (index of data array) in each bin */

> +	ssize_t			*map;
> +
> +	/** Number of entries in each bin. */
> +	size_t			*bin_count;
> +
> +	/** Lower edge of the time-window to be visualized. */
> +	uint64_t		min;
> +
> +	/** Upper edge of the time-window to be visualized. */
> +	uint64_t		max;
> +
> +	/** The size of the bins. */

	/** The size in time for each bin */

> +	uint64_t		bin_size;
> +
> +	/** Number of bins. */
> +	int			n_bins;
> +};

The rest looks good. Ug that was a big patch to review! ;-)

-- Steve

> +
> +void ksmodel_init(struct kshark_trace_histo *histo);
> +
> +void ksmodel_clear(struct kshark_trace_histo *histo);
> +
> +void ksmodel_set_bining(struct kshark_trace_histo *histo,
> +			size_t n, uint64_t min, uint64_t max);
> +
> +void ksmodel_fill(struct kshark_trace_histo *histo,
> +		  struct kshark_entry **data, size_t n);
> +
> +size_t ksmodel_bin_count(struct kshark_trace_histo *histo, int bin);
> +
> +void ksmodel_shift_forward(struct kshark_trace_histo *histo, size_t n);
> +
> +void ksmodel_shift_backward(struct kshark_trace_histo *histo, size_t n);
> +
> +void ksmodel_jump_to(struct kshark_trace_histo *histo, size_t ts);
> +
> +void ksmodel_zoom_out(struct kshark_trace_histo *histo,
> +		      double r, int mark);
> +
> +void ksmodel_zoom_in(struct kshark_trace_histo *histo,
> +		     double r, int mark);
> +
> +ssize_t ksmodel_first_index_at_bin(struct kshark_trace_histo *histo, int bin);
> +
> +ssize_t ksmodel_last_index_at_bin(struct kshark_trace_histo *histo, int bin);
> +
> +ssize_t ksmodel_first_index_at_cpu(struct kshark_trace_histo *histo,
> +				   int bin, int cpu);
> +
> +ssize_t ksmodel_first_index_at_pid(struct kshark_trace_histo *histo,
> +				   int bin, int pid);
> +
> +const struct kshark_entry *
> +ksmodel_get_entry_front(struct kshark_trace_histo *histo,
> +			int bin, bool vis_only,
> +			matching_condition_func func, int val,
> +			ssize_t *index);
> +
> +const struct kshark_entry *
> +ksmodel_get_entry_back(struct kshark_trace_histo *histo,
> +		       int bin, bool vis_only,
> +		       matching_condition_func func, int val,
> +		       ssize_t *index);
> +
> +int ksmodel_get_pid_front(struct kshark_trace_histo *histo,
> +			  int bin, int cpu, bool vis_only,
> +			  ssize_t *index);
> +
> +int ksmodel_get_pid_back(struct kshark_trace_histo *histo,
> +			 int bin, int cpu, bool vis_only,
> +			 ssize_t *index);
> +
> +int ksmodel_get_cpu_front(struct kshark_trace_histo *histo,
> +			  int bin, int pid, bool vis_only,
> +			  ssize_t *index);
> +
> +int ksmodel_get_cpu_back(struct kshark_trace_histo *histo,
> +			 int bin, int pid, bool vis_only,
> +			 ssize_t *index);
> +
> +bool ksmodel_cpu_visible_event_exist(struct kshark_trace_histo *histo,
> +				     int bin, int cpu, ssize_t *index);
> +
> +bool ksmodel_task_visible_event_exist(struct kshark_trace_histo *histo,
> +				      int bin, int pid, ssize_t *index);
> +
> +static inline double ksmodel_bin_time(struct kshark_trace_histo *histo,
> +				      int bin)
> +{
> +	return (histo->min + bin*histo->bin_size) * 1e-9;
> +}
> +
> +static inline uint64_t ksmodel_bin_ts(struct kshark_trace_histo *histo,
> +				      int bin)
> +{
> +	return (histo->min + bin*histo->bin_size);
> +}
> +
> +#ifdef __cplusplus
> +}
> +#endif // __cplusplus
> +
> +#endif
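
Since the header above only declares the API, a rough sketch of the call
sequence it is meant to support may help the review. The data array, its
size and the two time limits are placeholders that would normally come
from the loading code in libkshark.h, and 512 bins is an arbitrary choice.

#include <stdio.h>
#include "libkshark-model.h"

static void sketch_histo_walk(struct kshark_entry **data, size_t n_rows,
			      uint64_t ts_min, uint64_t ts_max)
{
	struct kshark_trace_histo histo;
	int bin;

	ksmodel_init(&histo);
	ksmodel_set_bining(&histo, 512, ts_min, ts_max);
	ksmodel_fill(&histo, data, n_rows);

	for (bin = 0; bin < histo.n_bins; ++bin)
		printf("bin %d: starts at %llu ns, holds %zu entries\n",
		       bin,
		       (unsigned long long) ksmodel_bin_ts(&histo, bin),
		       ksmodel_bin_count(&histo, bin));

	ksmodel_clear(&histo);
}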

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v2 3/7] kernel-shark-qt: Introduce the visualization model used by the Qt-based KS
  2018-08-01 18:50   ` Steven Rostedt
@ 2018-08-01 19:06     ` Yordan Karadzhov
  2018-08-01 19:11       ` Steven Rostedt
  0 siblings, 1 reply; 21+ messages in thread
From: Yordan Karadzhov @ 2018-08-01 19:06 UTC (permalink / raw)
  To: Steven Rostedt; +Cc: linux-trace-devel



On 1.08.2018 21:50, Steven Rostedt wrote:
>> +};
> The rest looks good. Ug that was a big patch to review! ;-)
It was a big patch indeed. Thank you very much!!!
Please hold off on the review of [4/7], because I found a couple of bugs in 
this patch. I will send v3 tomorrow.

Yordan

>
> -- Steve
>

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v2 3/7] kernel-shark-qt: Introduce the visualization model used by the Qt-based KS
  2018-08-01 19:06     ` Yordan Karadzhov
@ 2018-08-01 19:11       ` Steven Rostedt
  0 siblings, 0 replies; 21+ messages in thread
From: Steven Rostedt @ 2018-08-01 19:11 UTC (permalink / raw)
  To: Yordan Karadzhov; +Cc: linux-trace-devel

On Wed, 1 Aug 2018 22:06:44 +0300
Yordan Karadzhov <y.karadz@gmail.com> wrote:

> On 1.08.2018 21:50, Steven Rostedt wrote:
> >> +};  
> > The rest looks good. Ug that was a big patch to review! ;-)  
> It was a big patch indeed. Thank you very much!!!
> Please hold off on the review of [4/7], because I found a couple of bugs in 
> this patch. I will send v3 tomorrow.
> 

Thanks for letting me know!

-- Steve

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v2 3/7] kernel-shark-qt: Introduce the visualization model used by the Qt-based KS
  2018-08-01 18:22   ` Steven Rostedt
@ 2018-08-02 12:59     ` Yordan Karadzhov (VMware)
  0 siblings, 0 replies; 21+ messages in thread
From: Yordan Karadzhov (VMware) @ 2018-08-02 12:59 UTC (permalink / raw)
  To: Steven Rostedt; +Cc: linux-trace-devel, Tzvetomir Stoyanov



On  1.08.2018 21:22, Steven Rostedt wrote:
>> +	/*
>> +	 * Calculate the new range of the histo. Use the bin of the marker
>> +	 * as a focal point for the zoom-out. With this the marker will stay
>> +	 * inside the same bin in the new histo.
>> +	 */
>> +	range = histo->max - histo->min;
>> +	delta_tot = range * r;
>> +	delta_min = delta_tot * mark / histo->n_bins;
>> +
>> +	min = histo->min - delta_min;
>> +	max = histo->max + (size_t) delta_tot - delta_min;
> Took me a bit to figure out what exactly the above is doing. Let me
> explain what I think it is doing and you can correct me if I'm wrong.
> 
> We set delta_tot to increase by the percentage requested (easy).
> 
> Now we make delta_min equal to a percentage of delta_tot based on where
> mark is in the original bins. If mark is zero, then mark was at 0% of
> the original bins, if it was at histo->n_bins - 1, it was at (almost)
> 100%. If it is half way, then we place delta_min at %50 of delta_tot.
> 
> Then we subtract the original min by the delta_tot * mark/n_bins
> percentage, and add the max by delta_tot * (1 - mark/n_bins).
> 
> Sound right? Maybe we can add a comment saying such?
> 

Yes, this is a correct explanation. I will use it as a comment in the code.

Thanks!
Yordan
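
Running that arithmetic once with concrete numbers makes the focal-point
behaviour easy to check by hand. The toy program below is not KernelShark
code; it only repeats the formulas from the patch with assumed values.

#include <stdio.h>
#include <stdint.h>

static void toy_zoom_out(uint64_t *min, uint64_t *max,
			 double r, int mark, int n_bins)
{
	uint64_t range = *max - *min;
	double delta_tot = range * r;
	uint64_t delta_min = delta_tot * mark / n_bins;

	*min -= delta_min;
	*max += (uint64_t) delta_tot - delta_min;
}

int main(void)
{
	uint64_t min = 1000, max = 2000;

	/* Zoom out by 25%, with the marker sitting in bin 4 of 10. */
	toy_zoom_out(&min, &max, 0.25, 4, 10);

	printf("new range: [%llu, %llu)\n",
	       (unsigned long long) min, (unsigned long long) max);

	return 0;
}

With r = 0.25 and the marker in bin 4 of 10, the window grows from
[1000, 2000) to [900, 2150), the bin width grows from 100 to 125, and old
bin 4 ([1400, 1500)) is contained in new bin 4 ([1400, 1525)), so the
marker does not move to another bin.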

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v2 3/7] kernel-shark-qt: Introduce the visualization model used by the Qt-based KS
  2018-08-01 18:44   ` Steven Rostedt
@ 2018-08-03 14:01     ` Yordan Karadzhov (VMware)
  2018-08-03 16:00       ` Steven Rostedt
  0 siblings, 1 reply; 21+ messages in thread
From: Yordan Karadzhov (VMware) @ 2018-08-03 14:01 UTC (permalink / raw)
  To: Steven Rostedt; +Cc: linux-trace-devel, Tzvetomir Stoyanov



On  1.08.2018 21:44, Steven Rostedt wrote:
>> +
>> +/**
>> + * @brief In a given bin, start from the front end of the bin and go towards
>> + *	  the back end, searching for an entry satisfying the Matching
>> + *	  condition defined by a Matching condition function.
>> + * @param histo: Input location for the model descriptor.
>> + * @param bin: Bin id.
>> + * @param vis_only: If true, a visible entry is requested.
>> + * @param func: Matching condition function.
>> + * @param val: Matching condition value, used by the Matching condition
>> + *	       function.
>> + * @param index: Optional output location for the index of the requested
>> + *		 entry inside the array.
>> + * @returns Pointer to a kshark_entry, if an entry has been found. Else NULL.
>> + */
>> +const struct kshark_entry *
>> +ksmodel_get_entry_front(struct kshark_trace_histo *histo,
>> +			int bin, bool vis_only,
>> +			matching_condition_func func, int val,
>> +			ssize_t *index)
>> +{
>> +	struct kshark_entry_request *req;
>> +	const struct kshark_entry *entry;
>> +
>> +	if (index)
>> +		*index = KS_EMPTY_BIN;
>> +
>> +	/* Set the position at the beginning of the bin and go forward. */
>> +	req = ksmodel_entry_front_request_alloc(histo, bin, vis_only,
>> +							    func, val);
>> +	if (!req)
>> +		return NULL;
>> +
>> +	entry = kshark_get_entry_front(req, histo->data, index);
>> +	free(req);
>> +
>> +	return entry;
>> +}
> We could save on the allocation if we were to create the following:
> 
> void
> kshark_entry_request_set(struct kshark_entry_request *req,
> 			 size_t first, size_t n,
> 			 matching_condition_func cond, int val,
> 			 bool vis_only, int vis_mask)
> {
> 	req->first = first;
> 	req->n = n;
> 	req->cond = cond;
> 	req->val = val;
> 	req->vis_only = vis_only;
> 	req->vis_mask = vis_mask;
> }
> 
> bool
> ksmodel_entry_front_request_set(struct kshark_trace_histo *histo,
> 				struct kshark_entry_request *req,
> 				int bin, bool vis_only,
> 				matching_condition_func func, int val)
> {
> 	size_t first, n;
> 
> 	/* Get the number of entries in this bin. */
> 	n = ksmodel_bin_count(histo, bin);
> 	if (!n)
> 		return false;
> 
> 	first = ksmodel_first_index_at_bin(histo, bin);
> 
> 	kshark_entry_request_set(req, first, n,
> 				 func, val,
> 				 vis_only, KS_GRAPH_VIEW_FILTER_MASK);
> 
> 	return true;
> }
> 
> const struct kshark_entry *
> ksmodel_get_entry_front(struct kshark_trace_histo *histo,
> 			int bin, bool vis_only,
> 			matching_condition_func func, int val,
> 			ssize_t *index)
> {
> 	struct kshark_entry_request req;
> 	const struct kshark_entry *entry;
> 	bool ret;
> 
> 	if (index)
> 		*index = KS_EMPTY_BIN;
> 
> 	/* Set the position at the beginning of the bin and go forward. */
> 	ret = ksmodel_entry_front_request_set(histo, &req, bin, vis_only,
> 					      func, val);
> 	if (!ret)
> 		return NULL;
> 
> 	entry = kshark_get_entry_front(&req, histo->data, index);
> 
> 	return entry;
> }
> 

Hi Steven,
I have tried implementing this, but it becomes a bit ugly in the 
following patches where the single request is transformed into a linked 
list of requests.

Thanks!
Yordan

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v2 3/7] kernel-shark-qt: Introduce the visualization model used by the Qt-based KS
  2018-08-03 14:01     ` Yordan Karadzhov (VMware)
@ 2018-08-03 16:00       ` Steven Rostedt
  0 siblings, 0 replies; 21+ messages in thread
From: Steven Rostedt @ 2018-08-03 16:00 UTC (permalink / raw)
  To: Yordan Karadzhov (VMware); +Cc: linux-trace-devel, Tzvetomir Stoyanov

On Fri, 3 Aug 2018 17:01:45 +0300
"Yordan Karadzhov (VMware)" <y.karadz@gmail.com> wrote:

> Hi Steven,
> I have tried implementing this, but it becomes a bit ugly in the 
> following patches where the single request is transformed into a linked 
> list of requests.

OK. Then let's not implement it, and see if we can clean it up later
after most of the changes have been made. I was hoping that it would
make the later patches better, but if that's not the case, then let's
ditch the idea.

-- Steve

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v2 3/7] kernel-shark-qt: Introduce the visualization model used by the Qt-based KS
  2018-08-01  0:51   ` Steven Rostedt
  2018-08-01 16:10     ` Yordan Karadzhov
@ 2018-08-03 18:48     ` Steven Rostedt
  1 sibling, 0 replies; 21+ messages in thread
From: Steven Rostedt @ 2018-08-03 18:48 UTC (permalink / raw)
  To: Yordan Karadzhov (VMware); +Cc: linux-trace-devel, Tzvetomir Stoyanov

On Tue, 31 Jul 2018 20:51:13 -0400
Steven Rostedt <rostedt@goodmis.org> wrote:

> > +static void ksmodel_reset_bins(struct kshark_trace_histo *histo,
> > +			       size_t first, size_t last)
> > +{
> > +	/* Reset the content of the bins. */
> > +	memset(&histo->map[first], KS_EMPTY_BIN,
> > +	       (last - first + 1) * sizeof(histo->map[0]));  
> 
> This patch should add a comment here and by KS_EMPTY_BIN stating that
> KS_EMPTY_BIN is expected to be -1, as it is used to reset the entire
> array with memset(). As memset() can only fill memory with a single
> repeated byte value, every byte of the element must be the same, which
> works for zero and -1.
> 

Note, I added this too.

-- Steve

diff --git a/kernel-shark-qt/src/libkshark.h b/kernel-shark-qt/src/libkshark.h
index 4860e74d..122c030e 100644
--- a/kernel-shark-qt/src/libkshark.h
+++ b/kernel-shark-qt/src/libkshark.h
@@ -225,7 +225,10 @@ bool kshark_match_pid(struct kshark_context *kshark_ctx,
 bool kshark_match_cpu(struct kshark_context *kshark_ctx,
 		      struct kshark_entry *e, int cpu);
 
-/** Empty bin identifier. */
+/** Empty bin identifier.
+ * KS_EMPTY_BIN is used to reset entire arrays to empty with memset(),
+ * thus it must be -1 for that to work.
+ */
 #define KS_EMPTY_BIN		-1
 
 /** Filtered bin identifier. */

^ permalink raw reply related	[flat|nested] 21+ messages in thread

end of thread, other threads:[~2018-08-03 20:46 UTC | newest]

Thread overview: 21+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-07-31 13:52 [PATCH v2 0/7] Add visualization model for the Qt-based KernelShark Yordan Karadzhov (VMware)
2018-07-31 13:52 ` [PATCH v2 1/7] kernel-shark-qt: Change the type of the fields in struct kshark_entry Yordan Karadzhov (VMware)
2018-07-31 13:52 ` [PATCH v2 2/7] kernel-shark-qt: Add generic instruments for searching inside the trace data Yordan Karadzhov (VMware)
2018-07-31 21:43   ` Steven Rostedt
2018-07-31 13:52 ` [PATCH v2 3/7] kernel-shark-qt: Introduce the visualization model used by the Qt-based KS Yordan Karadzhov (VMware)
2018-08-01  0:51   ` Steven Rostedt
2018-08-01 16:10     ` Yordan Karadzhov
2018-08-03 18:48     ` Steven Rostedt
2018-08-01  1:43   ` Steven Rostedt
2018-08-01 18:22   ` Steven Rostedt
2018-08-02 12:59     ` Yordan Karadzhov (VMware)
2018-08-01 18:44   ` Steven Rostedt
2018-08-03 14:01     ` Yordan Karadzhov (VMware)
2018-08-03 16:00       ` Steven Rostedt
2018-08-01 18:50   ` Steven Rostedt
2018-08-01 19:06     ` Yordan Karadzhov
2018-08-01 19:11       ` Steven Rostedt
2018-07-31 13:52 ` [PATCH v2 4/7] kernel-shark-qt: Add an example showing how to manipulate the Vis. model Yordan Karadzhov (VMware)
2018-07-31 13:52 ` [PATCH v2 5/7] kernel-shark-qt: Define Data collections Yordan Karadzhov (VMware)
2018-07-31 13:52 ` [PATCH v2 6/7] kernel-shark-qt: Make the Vis. model use " Yordan Karadzhov (VMware)
2018-07-31 13:52 ` [PATCH v2 7/7] kernel-shark-qt: Changed the KernelShark version identifier Yordan Karadzhov (VMware)
