* How to analyze huge collected trace data efficiently
@ 2014-09-11  1:38 zhenyu.ren
  0 siblings, 0 replies; 2+ messages in thread
From: zhenyu.ren @ 2014-09-11  1:38 UTC (permalink / raw)
  To: lttng-dev



Hi,

It is well known that tracing can easily produce huge amounts of data, so if we keep tracing on a production system for a while (an hour, a day, a week), we may end up with a huge CTF file. Analyzing a huge CTF file is very inefficient, especially when the events of interest fall within a narrow time window. Is there any way to improve this situation? Can we split a huge CTF file into several small CTF files? Or can LTTng produce a sequence of small, time-ordered CTF files, the way some loggers do?

Thanks,
zhenyu.ren
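For reference, LTTng and Babeltrace do offer ways to bound trace size at capture time and to carve out a time window at analysis time. Below is a minimal sketch: the session name, channel name, output path, event, and timestamps are hypothetical, and the exact option syntax should be checked against the installed versions of lttng(1) and babeltrace(1).

    # Capture time: cap on-disk size with per-stream trace-file rotation.
    # --tracefile-size/--tracefile-count are lttng enable-channel options
    # (LTTng 2.2+); each per-CPU stream is split into at most COUNT files
    # of SIZE bytes, the oldest being overwritten once the count is reached.
    lttng create mysession --output=/traces/mysession
    lttng enable-channel -k mychan --tracefile-size=67108864 --tracefile-count=16   # 16 x 64 MiB
    lttng enable-event -k -c mychan sched_switch
    lttng start

    # Analysis time: read only the window of interest instead of the whole
    # trace, using babeltrace 1.x's trimming option (timestamps below are
    # placeholders; see babeltrace(1) for the accepted timestamp formats):
    babeltrace --timerange "[2014-09-11 01:38:00, 2014-09-11 01:39:00]" /traces/mysession

Note that trace-file rotation bounds total disk usage rather than producing an ever-growing time-ordered series; for "keep only the recent past" use cases, LTTng's snapshot mode (lttng create --snapshot) is another option.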





Thread overview: 2+ messages
2014-09-11  1:38 How to analyze huge collected trace data efficiently zhenyu.ren
     [not found] <e4797186-ccb1-4efc-a68e-953e8575a42a@aliyun.com>
2014-09-11 11:32 ` Michel Dagenais
