From: Michel Dagenais
Subject: Re: How to analysis a huge trace-collected data efficiently
Date: Thu, 11 Sep 2014 07:32:04 -0400 (EDT)
To: "zhenyu.ren"
Cc: lttng-dev <lttng-dev@lists.lttng.org>

Yes, if you look at the options available, you can split the trace into smaller files and even limit the total size accumulated.

----- Original message -----
> Hi,
>    It's well known that tracing can easily produce huge amounts of data, so if we
> keep tracing on a production system for a while (an hour, a day, a week), we
> may end up with a huge CTF trace. Analyzing a huge CTF trace is very
> inefficient, especially when the events of interest fall within a narrow time
> window. Is there any way to change this? Can we split a huge CTF trace into
> several small ones? Or can LTTng produce a sequence of small, time-ordered
> CTF files, the way some loggers do?
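Concretely, the options referred to are LTTng's per-channel trace-file rotation settings. A minimal sketch, assuming LTTng 2.2 or later (the session and channel names here are placeholders, not from the original thread):

```shell
# Create a session, then enable a channel whose trace files are
# rotated: each file is capped at --tracefile-size bytes, and at most
# --tracefile-count files are kept, so the oldest data is overwritten
# once the cap is reached (64 MiB x 16 files = ~1 GiB total here).
lttng create mysession
lttng enable-channel --kernel \
    --tracefile-size 67108864 \
    --tracefile-count 16 \
    mychannel
lttng enable-event --kernel --all --channel mychannel
lttng start
```

Because each rotated file covers a bounded slice of the trace, a viewer or babeltrace can then be pointed at just the files overlapping the time window of interest instead of one monolithic CTF trace.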
_______________________________________________
lttng-dev mailing list
lttng-dev@lists.lttng.org
http://lists.lttng.org/cgi-bin/mailman/listinfo/lttng-dev