* [PATCH 0/4] libtracefs: More updates and fixes to mmap code
@ 2024-01-09 20:48 Steven Rostedt
From: Steven Rostedt @ 2024-01-09 20:48 UTC (permalink / raw)
To: linux-trace-devel; +Cc: Vincent Donnefort, Steven Rostedt (Google)
From: "Steven Rostedt (Google)" <rostedt@goodmis.org>
This series changes the iterator to use memory mapping when possible, which
also exposed bugs in the unit tests. Those are fixed here too.
Steven Rostedt (Google) (4):
libtracefs: Unmap mmap mapping on tracefs_cpu close
libtracefs: Use tracefs_cpu_*_buf() calls for iterator
libtracefs: Use mmapping for iterating raw events
libtracefs: Have tracefs_cpu_flush(_buf)() use mapping
src/tracefs-events.c | 57 +++++++++++++-------------------------------
src/tracefs-record.c | 11 +++++++++
2 files changed, 27 insertions(+), 41 deletions(-)
--
2.43.0
* [PATCH 1/4] libtracefs: Unmap mmap mapping on tracefs_cpu close
From: Steven Rostedt @ 2024-01-09 20:48 UTC (permalink / raw)
To: linux-trace-devel; +Cc: Vincent Donnefort, Steven Rostedt (Google)
From: "Steven Rostedt (Google)" <rostedt@goodmis.org>
tracefs_cpu_open_mapped() will mmap the ring buffer if memory mapping is
supported, but the mapping is never unmapped when the tracefs_cpu is closed.
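For reference, the usage pattern that hits this leak looks like the sketch
below (error handling elided; the function name and the use of CPU 0 of the
top-level instance are illustrative, not from the patch):

```c
#include <tracefs.h>

/* Open CPU 0 of the top-level instance (NULL) nonblocking, with the
 * ring buffer mmapped when the kernel supports it. Before this fix,
 * tracefs_cpu_close() freed the tracefs_cpu but leaked the mapping. */
static void read_cpu_zero(void)
{
	struct tracefs_cpu *tcpu;

	tcpu = tracefs_cpu_open_mapped(NULL, 0, true);
	if (!tcpu)
		return;

	/* ... read events via tracefs_cpu_read_buf() etc. ... */

	/* With this patch, the close also unmaps the mapping */
	tracefs_cpu_close(tcpu);
}
```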
Fixes: 2ed14b59 ("libtracefs: Add ring buffer memory mapping APIs")
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
---
src/tracefs-record.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/src/tracefs-record.c b/src/tracefs-record.c
index f51e18420bc7..4a59c61c195f 100644
--- a/src/tracefs-record.c
+++ b/src/tracefs-record.c
@@ -276,6 +276,7 @@ void tracefs_cpu_free_fd(struct tracefs_cpu *tcpu)
close_fd(tcpu->splice_pipe[0]);
close_fd(tcpu->splice_pipe[1]);
+ trace_unmap(tcpu->mapping);
kbuffer_free(tcpu->kbuf);
free(tcpu);
}
--
2.43.0
* [PATCH 2/4] libtracefs: Use tracefs_cpu_*_buf() calls for iterator
From: Steven Rostedt @ 2024-01-09 20:48 UTC (permalink / raw)
To: linux-trace-devel; +Cc: Vincent Donnefort, Steven Rostedt (Google)
From: "Steven Rostedt (Google)" <rostedt@goodmis.org>
The iterators were created before tracefs_cpu_buffered_read_buf() and
tracefs_cpu_flush_buf() existed, which return a kbuffer to iterate over.
Instead of having the iterator manage its own kbuffer, use the one that is
managed by the tracefs_cpu.
This will also allow the iterator to use the memory mapping code.
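The read path the iterator moves to can be sketched as follows (the helper
name is illustrative; the two libtracefs calls are the ones used in the
diff below):

```c
#include <tracefs.h>
#include <kbuffer.h>

/* Fetch the next subbuffer of events, using the kbuffer managed by
 * the tracefs_cpu instead of a locally allocated page + kbuffer. */
static struct kbuffer *next_subbuf(struct tracefs_cpu *tcpu)
{
	struct kbuffer *kbuf;

	/* Efficient path: only returns completely filled subbuffers */
	kbuf = tracefs_cpu_buffered_read_buf(tcpu, true);

	/* Fall back to flushing a partially filled subbuffer */
	if (!kbuf)
		kbuf = tracefs_cpu_flush_buf(tcpu);

	return kbuf; /* NULL means nothing left to read */
}
```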
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
---
src/tracefs-events.c | 49 ++++++++------------------------------------
1 file changed, 9 insertions(+), 40 deletions(-)
diff --git a/src/tracefs-events.c b/src/tracefs-events.c
index 3c844b0ab408..2571c4b43341 100644
--- a/src/tracefs-events.c
+++ b/src/tracefs-events.c
@@ -31,8 +31,6 @@ struct cpu_iterate {
struct tep_record record;
struct tep_event *event;
struct kbuffer *kbuf;
- void *page;
- int psize;
int cpu;
};
@@ -63,46 +61,24 @@ static int read_kbuf_record(struct cpu_iterate *cpu)
int read_next_page(struct tep_handle *tep, struct cpu_iterate *cpu)
{
- enum kbuffer_long_size long_size;
- enum kbuffer_endian endian;
- int r;
+ struct kbuffer *kbuf;
if (!cpu->tcpu)
return -1;
- r = tracefs_cpu_buffered_read(cpu->tcpu, cpu->page, true);
+ kbuf = tracefs_cpu_buffered_read_buf(cpu->tcpu, true);
/*
- * tracefs_cpu_buffered_read() only reads in full subbuffer size,
+ * tracefs_cpu_buffered_read_buf() only reads in full subbuffer size,
* but this wants partial buffers as well. If the function returns
- * empty (-1 for EAGAIN), try tracefs_cpu_read() next, as that can
+ * empty (-1 for EAGAIN), try tracefs_cpu_flush_buf() next, as that can
* read partially filled buffers too, but isn't as efficient.
*/
- if (r <= 0)
- r = tracefs_cpu_read(cpu->tcpu, cpu->page, true);
- if (r <= 0)
+ if (!kbuf)
+ kbuf = tracefs_cpu_flush_buf(cpu->tcpu);
+ if (!kbuf)
return -1;
- if (!cpu->kbuf) {
- if (tep_is_file_bigendian(tep))
- endian = KBUFFER_ENDIAN_BIG;
- else
- endian = KBUFFER_ENDIAN_LITTLE;
-
- if (tep_get_header_page_size(tep) == 8)
- long_size = KBUFFER_LSIZE_8;
- else
- long_size = KBUFFER_LSIZE_4;
-
- cpu->kbuf = kbuffer_alloc(long_size, endian);
- if (!cpu->kbuf)
- return -1;
- }
-
- kbuffer_load_subbuffer(cpu->kbuf, cpu->page);
- if (kbuffer_subbuffer_size(cpu->kbuf) > r) {
- tracefs_warning("%s: page_size > %d", __func__, r);
- return -1;
- }
+ cpu->kbuf = kbuf;
return 0;
}
@@ -314,11 +290,7 @@ static int open_cpu_files(struct tracefs_instance *instance, cpu_set_t *cpus,
tmp[i].tcpu = tcpu;
tmp[i].cpu = cpu;
- tmp[i].psize = tracefs_cpu_read_size(tcpu);
- tmp[i].page = malloc(tmp[i].psize);
-
- if (!tmp[i++].page)
- goto error;
+ i++;
}
*count = i;
return 0;
@@ -326,7 +298,6 @@ static int open_cpu_files(struct tracefs_instance *instance, cpu_set_t *cpus,
tmp = *all_cpus;
for (; i >= 0; i--) {
tracefs_cpu_close(tmp[i].tcpu);
- free(tmp[i].page);
}
free(tmp);
*all_cpus = NULL;
@@ -539,9 +510,7 @@ static int iterate_events(struct tep_handle *tep,
out:
if (all_cpus) {
for (i = 0; i < count; i++) {
- kbuffer_free(all_cpus[i].kbuf);
tracefs_cpu_close(all_cpus[i].tcpu);
- free(all_cpus[i].page);
}
free(all_cpus);
}
--
2.43.0
* [PATCH 3/4] libtracefs: Use mmapping for iterating raw events
From: Steven Rostedt @ 2024-01-09 20:48 UTC (permalink / raw)
To: linux-trace-devel; +Cc: Vincent Donnefort, Steven Rostedt (Google)
From: "Steven Rostedt (Google)" <rostedt@goodmis.org>
If mmapping the ring buffer is available, use that for iterating raw events,
as it requires less copying than splice buffering.
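Since tracefs_cpu_open_mapped() still succeeds when mapping is unsupported,
the read path has to check at runtime which mode it got. A sketch of that
selection (the helper name is illustrative):

```c
#include <tracefs.h>
#include <kbuffer.h>

/* Pick the read path based on whether the ring buffer was actually
 * mmapped. The buffered (splice) path copies through a pipe, so it
 * is skipped when a mapping is available. */
static struct kbuffer *read_events(struct tracefs_cpu *tcpu)
{
	if (tracefs_cpu_is_mapped(tcpu))
		/* mmapped: read directly from the mapping */
		return tracefs_cpu_read_buf(tcpu, true);

	/* No mapping: fall back to buffered (splice) reads */
	return tracefs_cpu_buffered_read_buf(tcpu, true);
}
```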
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
---
src/tracefs-events.c | 10 ++++++++--
1 file changed, 8 insertions(+), 2 deletions(-)
diff --git a/src/tracefs-events.c b/src/tracefs-events.c
index 2571c4b43341..9f620abebdda 100644
--- a/src/tracefs-events.c
+++ b/src/tracefs-events.c
@@ -32,6 +32,7 @@ struct cpu_iterate {
struct tep_event *event;
struct kbuffer *kbuf;
int cpu;
+ bool mapped;
};
static int read_kbuf_record(struct cpu_iterate *cpu)
@@ -66,7 +67,11 @@ int read_next_page(struct tep_handle *tep, struct cpu_iterate *cpu)
if (!cpu->tcpu)
return -1;
- kbuf = tracefs_cpu_buffered_read_buf(cpu->tcpu, true);
+ /* Do not do buffered reads if it is mapped */
+ if (cpu->mapped)
+ kbuf = tracefs_cpu_read_buf(cpu->tcpu, true);
+ else
+ kbuf = tracefs_cpu_buffered_read_buf(cpu->tcpu, true);
/*
* tracefs_cpu_buffered_read_buf() only reads in full subbuffer size,
* but this wants partial buffers as well. If the function returns
@@ -274,7 +279,7 @@ static int open_cpu_files(struct tracefs_instance *instance, cpu_set_t *cpus,
if (snapshot)
tcpu = tracefs_cpu_snapshot_open(instance, cpu, true);
else
- tcpu = tracefs_cpu_open(instance, cpu, true);
+ tcpu = tracefs_cpu_open_mapped(instance, cpu, true);
tmp = realloc(*all_cpus, (i + 1) * sizeof(*tmp));
if (!tmp) {
i--;
@@ -290,6 +295,7 @@ static int open_cpu_files(struct tracefs_instance *instance, cpu_set_t *cpus,
tmp[i].tcpu = tcpu;
tmp[i].cpu = cpu;
+ tmp[i].mapped = tracefs_cpu_is_mapped(tcpu);
i++;
}
*count = i;
--
2.43.0
* [PATCH 4/4] libtracefs: Have tracefs_cpu_flush(_buf)() use mapping
From: Steven Rostedt @ 2024-01-09 20:48 UTC (permalink / raw)
To: linux-trace-devel; +Cc: Vincent Donnefort, Steven Rostedt (Google)
From: "Steven Rostedt (Google)" <rostedt@goodmis.org>
If the tracefs_cpu is opened with tracefs_cpu_open_mapped() and uses
tracefs_cpu_read_buf() along with tracefs_cpu_flush_buf(), the flush will
load tcpu->kbuf with a new buffer, which may make the mmapped buffer go out
of sync.
If the tcpu is mapped, make sure tracefs_cpu_flush() and
tracefs_cpu_flush_buf() also use the mapping.
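A sketch of the end-of-trace drain this keeps consistent (the helper name
and the empty event-processing body are illustrative; the kbuffer iteration
calls are from libtraceevent):

```c
#include <tracefs.h>
#include <kbuffer.h>

/* Drain whatever remains in the ring buffer at the end of tracing.
 * With this fix, when tcpu is mapped the flush reads through the
 * same mapping as tracefs_cpu_read_buf(), keeping tcpu->kbuf in
 * sync instead of loading it from the splice pipe. */
static void drain(struct tracefs_cpu *tcpu)
{
	struct kbuffer *kbuf;
	unsigned long long ts;
	void *event;

	while ((kbuf = tracefs_cpu_flush_buf(tcpu)) != NULL) {
		for (event = kbuffer_read_event(kbuf, &ts); event;
		     event = kbuffer_next_event(kbuf, &ts)) {
			/* process event at timestamp ts */
		}
	}
}
```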
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
---
src/tracefs-record.c | 10 ++++++++++
1 file changed, 10 insertions(+)
diff --git a/src/tracefs-record.c b/src/tracefs-record.c
index 4a59c61c195f..fca3ddf9afbe 100644
--- a/src/tracefs-record.c
+++ b/src/tracefs-record.c
@@ -690,6 +690,9 @@ int tracefs_cpu_flush(struct tracefs_cpu *tcpu, void *buffer)
if (tcpu->buffered < 0)
tcpu->buffered = 0;
+ if (tcpu->mapping)
+ return trace_mmap_read(tcpu->mapping, buffer);
+
if (tcpu->buffered) {
ret = read(tcpu->splice_pipe[0], buffer, tcpu->subbuf_size);
if (ret > 0)
@@ -729,6 +732,13 @@ struct kbuffer *tracefs_cpu_flush_buf(struct tracefs_cpu *tcpu)
if (!get_buffer(tcpu))
return NULL;
+ if (tcpu->mapping) {
+ /* Make sure that reading is now non blocking */
+ set_nonblock(tcpu);
+ ret = trace_mmap_load_subbuf(tcpu->mapping, tcpu->kbuf);
+ return ret > 0 ? tcpu->kbuf : NULL;
+ }
+
ret = tracefs_cpu_flush(tcpu, tcpu->buffer);
if (ret <= 0)
return NULL;
--
2.43.0