linux-kernel.vger.kernel.org archive mirror
* [PATCH 0/9] Fix references for some missing documentation files
@ 2018-06-26  9:49 Mauro Carvalho Chehab
  2018-06-26  9:49 ` [PATCH 1/9] scripts/documentation-file-ref-check: remove some false positives Mauro Carvalho Chehab
                   ` (9 more replies)
  0 siblings, 10 replies; 15+ messages in thread
From: Mauro Carvalho Chehab @ 2018-06-26  9:49 UTC (permalink / raw)
  To: Linux Doc Mailing List
  Cc: Mauro Carvalho Chehab, Mauro Carvalho Chehab, linux-kernel,
	Jonathan Corbet, Jacek Anaszewski, devicetree, Ingo Molnar,
	linux-kernel, Andrew Morton, linux-leds, intel-wired-lan,
	Mark Rutland, linux-gpio, David S. Miller, James Morris,
	Jeff Kirsher, Changbin Du, Masami Hiramatsu, netdev,
	Steven Rostedt, linux-input, linuxppc-dev, linux-scsi, kvm,
	virtualization, Andy Whitcroft, Joe Perches

Having nothing to do while waiting for my plane on the way back from
Japan, I ended up writing a small series of patches meant to reduce
the number of bad Documentation/* links that are detected by:
	./scripts/documentation-file-ref-check
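
For reference, the core idea of that script can be sketched roughly as
follows (a hedged Python approximation for illustration only; the real
tool is a Perl script that also handles wildcards and more prefixes):
scan text for strings that look like Documentation/ paths and flag the
ones that do not exist in the tree.

```python
import re

# Loose pattern for Documentation/ path references; illustrative only.
DOC_REF = re.compile(r'\bDocumentation/[A-Za-z0-9_.\-/]+')

def broken_doc_refs(text, exists):
    """Return Documentation/ references in `text` for which exists(ref) is false."""
    refs = DOC_REF.findall(text)
    return [ref for ref in refs if not exists(ref)]

# Toy "tree" where only bug-hunting.rst exists:
tree = {"Documentation/admin-guide/bug-hunting.rst"}
sample = ("see Documentation/admin-guide/oops-tracing.rst "
          "and Documentation/admin-guide/bug-hunting.rst")
print(broken_doc_refs(sample, tree.__contains__))
# -> ['Documentation/admin-guide/oops-tracing.rst']
```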

I ended up rebasing this patch series against linux-next, because of
these two patches:
	3b0c3ebe2a42 Documentation: e100: Fix docs build error
	805f16a5f12f Documentation: e1000: Fix docs build error

They basically fix documentation builds with the upstream kernel. Both
got merged in -rc2.

The first two patches in this series make the script ignore some
false positives.

Patches 3 to 6 correct the location of some documentation files.

Patches 7 and 8 were originally meant to fix the build errors. I ended
up rebasing them over linux-next as well, since they fix some problems
with the ReST syntax which cause warnings.

Patch 9 converts Documentation/trace/histogram.txt to ReST
syntax. It also had to be rebased against linux-next, due to some minor
conflicts with:
    064f35a95224 ("tracing: Fix some errors in histogram documentation")

After this series, the script still produces 16 warnings:

Documentation/devicetree/bindings/input/mtk-pmic-keys.txt: Documentation/devicetree/bindings/input/keys.txt
Documentation/devicetree/bindings/input/mtk-pmic-keys.txt: Documentation/devicetree/bindings/input/keys.txt
Documentation/devicetree/bindings/regulator/rohm,bd71837-regulator.txt: Documentation/devicetree/bindings/mfd/rohm,bd71837-pmic.txt
Documentation/devicetree/dynamic-resolution-notes.txt: Documentation/devicetree/dt-object-internal.txt
Documentation/scsi/scsi_mid_low_api.txt: Documentation/Configure.help
Documentation/translations/zh_CN/HOWTO: Documentation/DocBook/
Documentation/translations/zh_CN/basic_profiling.txt: Documentation/basic_profiling
Documentation/translations/zh_CN/basic_profiling.txt: Documentation/basic_profiling
MAINTAINERS: Documentation/fpga/
MAINTAINERS: Documentation/devicetree/bindings/rng/samsung,exynos5250-trng.txt
arch/powerpc/Kconfig: Documentation/vm/protection-keys.rst
drivers/isdn/mISDN/dsp_core.c: Documentation/isdn/mISDN.cert
drivers/scsi/Kconfig: file:Documentation/scsi/tmscsim.txt
drivers/vhost/vhost.c: Documentation/virtual/lguest/lguest.c
include/linux/fs_context.h: Documentation/filesystems/mounting.txt
include/linux/lsm_hooks.h: Documentation/filesystems/mounting.txt

IMHO, the above should be fixed by the corresponding maintainers.

The ones that scare me the most are the DT binding references, as the
binding documentation for some devices is likely broken.

Btw, two of the above are new in linux-next (include/linux/fs_context.h
and include/linux/lsm_hooks.h). That makes me wonder whether we should
add some similar logic (or run the detection script) at checkpatch.pl,
or make it call ./scripts/documentation-file-ref-check.
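
One way such a hook could behave (a hypothetical sketch, not existing
checkpatch.pl code): run the checker on the tree before and after a
patch, and complain only about newly introduced warnings, so the
existing backlog above does not block unrelated patches.

```python
def new_warnings(baseline, current):
    """Warnings present after a patch that were absent before it.

    Both arguments are iterables of 'source-file: missing-doc-file'
    lines, in the format printed by documentation-file-ref-check."""
    return sorted(set(current) - set(baseline))

# Hypothetical before/after runs of the checker:
before = {
    "drivers/scsi/Kconfig: file:Documentation/scsi/tmscsim.txt",
}
after = before | {
    "include/linux/fs_context.h: Documentation/filesystems/mounting.txt",
}
print(new_warnings(before, after))
# -> ['include/linux/fs_context.h: Documentation/filesystems/mounting.txt']
```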

Mauro Carvalho Chehab (9):
  scripts/documentation-file-ref-check: remove some false positives
  scripts/documentation-file-ref-check: ignore sched-pelt false positive
  docs: zh_CN: fix location of oops-tracing.txt
  devicetree: bindings: fix location of leds common file
  MAINTAINERS: fix location of ina2xx.txt device tree file
  gpio.h: fix location of gpio legacy documentation
  networking: e100.rst: Get rid of Sphinx warnings
  networking: e1000.rst: Get rid of Sphinx warnings
  docs: histogram.txt: convert it to ReST file format

 .../devicetree/bindings/leds/common.txt       |    2 +-
 Documentation/networking/e100.rst             |   27 +-
 Documentation/networking/e1000.rst            |  187 ++-
 Documentation/trace/events.rst                |    2 +-
 .../trace/{histogram.txt => histogram.rst}    | 1242 +++++++++--------
 Documentation/trace/index.rst                 |    1 +
 .../translations/zh_CN/oops-tracing.txt       |    4 +-
 MAINTAINERS                                   |    2 +-
 include/linux/gpio.h                          |    2 +-
 kernel/trace/Kconfig                          |    2 +-
 scripts/documentation-file-ref-check          |    6 +
 11 files changed, 767 insertions(+), 710 deletions(-)
 rename Documentation/trace/{histogram.txt => histogram.rst} (73%)

-- 
2.17.1



^ permalink raw reply	[flat|nested] 15+ messages in thread

* [PATCH 1/9] scripts/documentation-file-ref-check: remove some false positives
  2018-06-26  9:49 [PATCH 0/9] Fix references for some missing documentation files Mauro Carvalho Chehab
@ 2018-06-26  9:49 ` Mauro Carvalho Chehab
  2018-06-26  9:49 ` [PATCH 2/9] scripts/documentation-file-ref-check: ignore sched-pelt false positive Mauro Carvalho Chehab
                   ` (8 subsequent siblings)
  9 siblings, 0 replies; 15+ messages in thread
From: Mauro Carvalho Chehab @ 2018-06-26  9:49 UTC (permalink / raw)
  To: Linux Doc Mailing List
  Cc: Mauro Carvalho Chehab, Mauro Carvalho Chehab, linux-kernel,
	Jonathan Corbet, Jani Nikula

There are several false positives in tcm_mod_builder.txt:

    Documentation/target/tcm_mod_builder.txt: mnt/sdb/lio-core-2.6.git/Documentation/target/../../drivers/target/tcm_nab5000
    Documentation/target/tcm_mod_builder.txt: mnt/sdb/lio-core-2.6.git/Documentation/target/../../drivers/target/tcm_nab5000
    Documentation/target/tcm_mod_builder.txt: mnt/sdb/lio-core-2.6.git/Documentation/target/../../drivers/target/tcm_nab5000/tcm_nab5000_base.h
    Documentation/target/tcm_mod_builder.txt: mnt/sdb/lio-core-2.6.git/Documentation/target/../../include/target/target_core_fabric_ops.h
    Documentation/target/tcm_mod_builder.txt: mnt/sdb/lio-core-2.6.git/Documentation/target/../../drivers/target/tcm_nab5000/tcm_nab5000_fabric.c
    Documentation/target/tcm_mod_builder.txt: mnt/sdb/lio-core-2.6.git/Documentation/target/../../drivers/target/tcm_nab5000/tcm_nab5000_fabric.h
    Documentation/target/tcm_mod_builder.txt: mnt/sdb/lio-core-2.6.git/Documentation/target/../../drivers/target/tcm_nab5000/tcm_nab5000_configfs.c
    Documentation/target/tcm_mod_builder.txt: mnt/sdb/lio-core-2.6.git/Documentation/target/../../drivers/target/tcm_nab5000/Kbuild
    Documentation/target/tcm_mod_builder.txt: mnt/sdb/lio-core-2.6.git/Documentation/target/../../drivers/target/tcm_nab5000/Kconfig

Ignore them.

Signed-off-by: Mauro Carvalho Chehab <mchehab+samsung@kernel.org>
---
 scripts/documentation-file-ref-check | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/scripts/documentation-file-ref-check b/scripts/documentation-file-ref-check
index 078999a3fdff..857eb0d7458d 100755
--- a/scripts/documentation-file-ref-check
+++ b/scripts/documentation-file-ref-check
@@ -75,6 +75,9 @@ while (<IN>) {
 		# Remove URL false-positives
 		next if ($fulref =~ m/^http/);
 
+		# Discard some build examples from Documentation/target/tcm_mod_builder.txt
+		next if ($fulref =~ m,mnt/sdb/lio-core-2.6.git/Documentation/target,);
+
 		# Check if exists, evaluating wildcards
 		next if (grep -e, glob("$ref $fulref"));
 
-- 
2.17.1



* [PATCH 2/9] scripts/documentation-file-ref-check: ignore sched-pelt false positive
  2018-06-26  9:49 [PATCH 0/9] Fix references for some missing documentation files Mauro Carvalho Chehab
  2018-06-26  9:49 ` [PATCH 1/9] scripts/documentation-file-ref-check: remove some false positives Mauro Carvalho Chehab
@ 2018-06-26  9:49 ` Mauro Carvalho Chehab
  2018-06-26  9:49 ` [PATCH 3/9] docs: zh_CN: fix location of oops-tracing.txt Mauro Carvalho Chehab
                   ` (7 subsequent siblings)
  9 siblings, 0 replies; 15+ messages in thread
From: Mauro Carvalho Chehab @ 2018-06-26  9:49 UTC (permalink / raw)
  To: Linux Doc Mailing List
  Cc: Mauro Carvalho Chehab, Mauro Carvalho Chehab, linux-kernel,
	Jonathan Corbet, Jani Nikula

When Documentation/scheduler/sched-pelt.c is compiled, it generates
a file called Documentation/scheduler/sched-pelt. As this file only
exists after building that tool, we need an explicit check
to remove the false positive.

Signed-off-by: Mauro Carvalho Chehab <mchehab+samsung@kernel.org>
---
 scripts/documentation-file-ref-check | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/scripts/documentation-file-ref-check b/scripts/documentation-file-ref-check
index 857eb0d7458d..ad9db6821824 100755
--- a/scripts/documentation-file-ref-check
+++ b/scripts/documentation-file-ref-check
@@ -75,6 +75,9 @@ while (<IN>) {
 		# Remove URL false-positives
 		next if ($fulref =~ m/^http/);
 
+		# Remove sched-pelt false-positive
+		next if ($fulref =~ m,^Documentation/scheduler/sched-pelt$,);
+
 		# Discard some build examples from Documentation/target/tcm_mod_builder.txt
 		next if ($fulref =~ m,mnt/sdb/lio-core-2.6.git/Documentation/target,);
 
-- 
2.17.1



* [PATCH 3/9] docs: zh_CN: fix location of oops-tracing.txt
  2018-06-26  9:49 [PATCH 0/9] Fix references for some missing documentation files Mauro Carvalho Chehab
  2018-06-26  9:49 ` [PATCH 1/9] scripts/documentation-file-ref-check: remove some false positives Mauro Carvalho Chehab
  2018-06-26  9:49 ` [PATCH 2/9] scripts/documentation-file-ref-check: ignore sched-pelt false positive Mauro Carvalho Chehab
@ 2018-06-26  9:49 ` Mauro Carvalho Chehab
  2018-06-26  9:49 ` [PATCH 4/9] devicetree: bindings: fix location of leds common file Mauro Carvalho Chehab
                   ` (6 subsequent siblings)
  9 siblings, 0 replies; 15+ messages in thread
From: Mauro Carvalho Chehab @ 2018-06-26  9:49 UTC (permalink / raw)
  To: Linux Doc Mailing List
  Cc: Mauro Carvalho Chehab, Mauro Carvalho Chehab, linux-kernel,
	Jonathan Corbet, Harry Wei, linux-kernel

This file was merged into bug-hunting. Make the translation
point to its new location.

Fixes: f226e460875d ("admin-guide: merge oops-tracing with bug-hunting")
Signed-off-by: Mauro Carvalho Chehab <mchehab+samsung@kernel.org>
---
 Documentation/translations/zh_CN/oops-tracing.txt | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/Documentation/translations/zh_CN/oops-tracing.txt b/Documentation/translations/zh_CN/oops-tracing.txt
index 41ab53cc0e83..a893f04dfd5d 100644
--- a/Documentation/translations/zh_CN/oops-tracing.txt
+++ b/Documentation/translations/zh_CN/oops-tracing.txt
@@ -1,4 +1,4 @@
-Chinese translated version of Documentation/admin-guide/oops-tracing.rst
+Chinese translated version of Documentation/admin-guide/bug-hunting.rst
 
 If you have any comment or update to the content, please contact the
 original document maintainer directly.  However, if you have a problem
@@ -8,7 +8,7 @@ or if there is a problem with the translation.
 
 Chinese maintainer: Dave Young <hidave.darkstar@gmail.com>
 ---------------------------------------------------------------------
-Documentation/admin-guide/oops-tracing.rst 的中文翻译
+Documentation/admin-guide/bug-hunting.rst 的中文翻译
 
 如果想评论或更新本文的内容,请直接联系原文档的维护者。如果你使用英文
 交流有困难的话,也可以向中文版维护者求助。如果本翻译更新不及时或者翻
-- 
2.17.1



* [PATCH 4/9] devicetree: bindings: fix location of leds common file
  2018-06-26  9:49 [PATCH 0/9] Fix references for some missing documentation files Mauro Carvalho Chehab
                   ` (2 preceding siblings ...)
  2018-06-26  9:49 ` [PATCH 3/9] docs: zh_CN: fix location of oops-tracing.txt Mauro Carvalho Chehab
@ 2018-06-26  9:49 ` Mauro Carvalho Chehab
  2018-06-26 14:38   ` Pavel Machek
  2018-06-26 19:41   ` Jacek Anaszewski
  2018-06-26  9:49 ` [PATCH 5/9] MAINTAINERS: fix location of ina2xx.txt device tree file Mauro Carvalho Chehab
                   ` (5 subsequent siblings)
  9 siblings, 2 replies; 15+ messages in thread
From: Mauro Carvalho Chehab @ 2018-06-26  9:49 UTC (permalink / raw)
  To: Linux Doc Mailing List
  Cc: Mauro Carvalho Chehab, Mauro Carvalho Chehab, linux-kernel,
	Jonathan Corbet, Jacek Anaszewski, Pavel Machek, Rob Herring,
	Mark Rutland, linux-leds, devicetree

The leds.txt file was moved and renamed. Fix references to
it accordingly.

Fixes: f67605394f0b ("devicetree/bindings: Move gpio-leds binding into leds directory")
Signed-off-by: Mauro Carvalho Chehab <mchehab+samsung@kernel.org>
---
 Documentation/devicetree/bindings/leds/common.txt | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/Documentation/devicetree/bindings/leds/common.txt b/Documentation/devicetree/bindings/leds/common.txt
index 1d4afe9644b6..aa1399814a2a 100644
--- a/Documentation/devicetree/bindings/leds/common.txt
+++ b/Documentation/devicetree/bindings/leds/common.txt
@@ -31,7 +31,7 @@ Optional properties for child nodes:
      "backlight" - LED will act as a back-light, controlled by the framebuffer
 		   system
      "default-on" - LED will turn on (but for leds-gpio see "default-state"
-		    property in Documentation/devicetree/bindings/gpio/led.txt)
+		    property in Documentation/devicetree/bindings/leds/leds-gpio.txt)
      "heartbeat" - LED "double" flashes at a load average based rate
      "disk-activity" - LED indicates disk activity
      "ide-disk" - LED indicates IDE disk activity (deprecated),
-- 
2.17.1



* [PATCH 5/9] MAINTAINERS: fix location of ina2xx.txt device tree file
  2018-06-26  9:49 [PATCH 0/9] Fix references for some missing documentation files Mauro Carvalho Chehab
                   ` (3 preceding siblings ...)
  2018-06-26  9:49 ` [PATCH 4/9] devicetree: bindings: fix location of leds common file Mauro Carvalho Chehab
@ 2018-06-26  9:49 ` Mauro Carvalho Chehab
  2018-06-26  9:49 ` [PATCH 6/9] gpio.h: fix location of gpio legacy documentation Mauro Carvalho Chehab
                   ` (4 subsequent siblings)
  9 siblings, 0 replies; 15+ messages in thread
From: Mauro Carvalho Chehab @ 2018-06-26  9:49 UTC (permalink / raw)
  To: Linux Doc Mailing List
  Cc: Mauro Carvalho Chehab, Mauro Carvalho Chehab, linux-kernel,
	Jonathan Corbet, David S. Miller, Greg Kroah-Hartman,
	Andrew Morton, Randy Dunlap

This file got moved and merged, so the old reference no longer
exists. Fix it.

Fixes: 6e24d205a8aa ("hwmon: ina209: move binding docs to proper place")
Fixes: 62bc9f15e443 ("dt-bindings: merge ina209 binding into ina2xx binding")
Signed-off-by: Mauro Carvalho Chehab <mchehab+samsung@kernel.org>
---
 MAINTAINERS | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/MAINTAINERS b/MAINTAINERS
index a917139c2b65..77a0827845a2 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -7043,7 +7043,7 @@ M:	Guenter Roeck <linux@roeck-us.net>
 L:	linux-hwmon@vger.kernel.org
 S:	Maintained
 F:	Documentation/hwmon/ina209
-F:	Documentation/devicetree/bindings/i2c/ina209.txt
+F:	Documentation/devicetree/bindings/hwmon/ina2xx.txt
 F:	drivers/hwmon/ina209.c
 
 INA2XX HARDWARE MONITOR DRIVER
-- 
2.17.1



* [PATCH 6/9] gpio.h: fix location of gpio legacy documentation
  2018-06-26  9:49 [PATCH 0/9] Fix references for some missing documentation files Mauro Carvalho Chehab
                   ` (4 preceding siblings ...)
  2018-06-26  9:49 ` [PATCH 5/9] MAINTAINERS: fix location of ina2xx.txt device tree file Mauro Carvalho Chehab
@ 2018-06-26  9:49 ` Mauro Carvalho Chehab
  2018-06-29 12:36   ` Linus Walleij
  2018-06-26  9:49 ` [PATCH 7/9] networking: e100.rst: Get rid of Sphinx warnings Mauro Carvalho Chehab
                   ` (3 subsequent siblings)
  9 siblings, 1 reply; 15+ messages in thread
From: Mauro Carvalho Chehab @ 2018-06-26  9:49 UTC (permalink / raw)
  To: Linux Doc Mailing List
  Cc: Mauro Carvalho Chehab, Mauro Carvalho Chehab, linux-kernel,
	Jonathan Corbet, Linus Walleij, linux-gpio

This doc file was moved. Change the reference to it
accordingly.

Fixes: 7ee2c13080c9 ("Documentation: gpio: Move legacy documentation to driver-api")
Signed-off-by: Mauro Carvalho Chehab <mchehab+samsung@kernel.org>
---
 include/linux/gpio.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/linux/gpio.h b/include/linux/gpio.h
index 91ed23468530..39745b8bdd65 100644
--- a/include/linux/gpio.h
+++ b/include/linux/gpio.h
@@ -14,7 +14,7 @@
 
 #include <linux/errno.h>
 
-/* see Documentation/gpio/gpio-legacy.txt */
+/* see Documentation/driver-api/gpio/legacy.rst */
 
 /* make these flag values available regardless of GPIO kconfig options */
 #define GPIOF_DIR_OUT	(0 << 0)
-- 
2.17.1



* [PATCH 7/9] networking: e100.rst: Get rid of Sphinx warnings
  2018-06-26  9:49 [PATCH 0/9] Fix references for some missing documentation files Mauro Carvalho Chehab
                   ` (5 preceding siblings ...)
  2018-06-26  9:49 ` [PATCH 6/9] gpio.h: fix location of gpio legacy documentation Mauro Carvalho Chehab
@ 2018-06-26  9:49 ` Mauro Carvalho Chehab
  2018-06-26  9:49 ` [PATCH 8/9] networking: e1000.rst: " Mauro Carvalho Chehab
                   ` (2 subsequent siblings)
  9 siblings, 0 replies; 15+ messages in thread
From: Mauro Carvalho Chehab @ 2018-06-26  9:49 UTC (permalink / raw)
  To: Linux Doc Mailing List
  Cc: Mauro Carvalho Chehab, Mauro Carvalho Chehab, linux-kernel,
	Jonathan Corbet, Jeff Kirsher, David S. Miller, intel-wired-lan,
	netdev

    Documentation/networking/e100.rst:57: WARNING: Literal block expected; none found.
    Documentation/networking/e100.rst:68: WARNING: Literal block expected; none found.
    Documentation/networking/e100.rst:75: WARNING: Literal block expected; none found.
    Documentation/networking/e100.rst:84: WARNING: Literal block expected; none found.
    Documentation/networking/e100.rst:93: WARNING: Inline emphasis start-string without end-string.

While here, fix some highlights.

Signed-off-by: Mauro Carvalho Chehab <mchehab+samsung@kernel.org>
---
 Documentation/networking/e100.rst | 27 +++++++++++++++++----------
 1 file changed, 17 insertions(+), 10 deletions(-)

diff --git a/Documentation/networking/e100.rst b/Documentation/networking/e100.rst
index 9708f5fa76de..f81111eba9c5 100644
--- a/Documentation/networking/e100.rst
+++ b/Documentation/networking/e100.rst
@@ -47,41 +47,45 @@ Driver Configuration Parameters
 The default value for each parameter is generally the recommended setting,
 unless otherwise noted.
 
-Rx Descriptors: Number of receive descriptors. A receive descriptor is a data
+Rx Descriptors:
+   Number of receive descriptors. A receive descriptor is a data
    structure that describes a receive buffer and its attributes to the network
    controller. The data in the descriptor is used by the controller to write
    data from the controller to host memory. In the 3.x.x driver the valid range
    for this parameter is 64-256. The default value is 256. This parameter can be
    changed using the command::
 
-   ethtool -G eth? rx n
+     ethtool -G eth? rx n
 
    Where n is the number of desired Rx descriptors.
 
-Tx Descriptors: Number of transmit descriptors. A transmit descriptor is a data
+Tx Descriptors:
+   Number of transmit descriptors. A transmit descriptor is a data
    structure that describes a transmit buffer and its attributes to the network
    controller. The data in the descriptor is used by the controller to read
    data from the host memory to the controller. In the 3.x.x driver the valid
    range for this parameter is 64-256. The default value is 128. This parameter
    can be changed using the command::
 
-   ethtool -G eth? tx n
+     ethtool -G eth? tx n
 
    Where n is the number of desired Tx descriptors.
 
-Speed/Duplex: The driver auto-negotiates the link speed and duplex settings by
+Speed/Duplex:
+   The driver auto-negotiates the link speed and duplex settings by
    default. The ethtool utility can be used as follows to force speed/duplex.::
 
-   ethtool -s eth?  autoneg off speed {10|100} duplex {full|half}
+     ethtool -s eth?  autoneg off speed {10|100} duplex {full|half}
 
    NOTE: setting the speed/duplex to incorrect values will cause the link to
    fail.
 
-Event Log Message Level:  The driver uses the message level flag to log events
+Event Log Message Level:
+   The driver uses the message level flag to log events
    to syslog. The message level can be set at driver load time. It can also be
    set using the command::
 
-   ethtool -s eth? msglvl n
+     ethtool -s eth? msglvl n
 
 
 Additional Configurations
@@ -92,7 +96,7 @@ Configuring the Driver on Different Distributions
 
 Configuring a network driver to load properly when the system is started
 is distribution dependent.  Typically, the configuration process involves
-adding an alias line to /etc/modprobe.d/*.conf as well as editing other
+adding an alias line to `/etc/modprobe.d/*.conf` as well as editing other
 system startup scripts and/or configuration files.  Many popular Linux
 distributions ship with tools to make these changes for you.  To learn
 the proper way to configure a network device for your system, refer to
@@ -160,7 +164,10 @@ This results in unbalanced receive traffic.
 If you have multiple interfaces in a server, either turn on ARP
 filtering by
 
-(1) entering:: echo 1 > /proc/sys/net/ipv4/conf/all/arp_filter
+(1) entering::
+
+	echo 1 > /proc/sys/net/ipv4/conf/all/arp_filter
+
     (this only works if your kernel's version is higher than 2.4.5), or
 
 (2) installing the interfaces in separate broadcast domains (either
-- 
2.17.1



* [PATCH 8/9] networking: e1000.rst: Get rid of Sphinx warnings
  2018-06-26  9:49 [PATCH 0/9] Fix references for some missing documentation files Mauro Carvalho Chehab
                   ` (6 preceding siblings ...)
  2018-06-26  9:49 ` [PATCH 7/9] networking: e100.rst: Get rid of Sphinx warnings Mauro Carvalho Chehab
@ 2018-06-26  9:49 ` Mauro Carvalho Chehab
  2018-06-26  9:49 ` [PATCH 9/9] docs: histogram.txt: convert it to ReST file format Mauro Carvalho Chehab
  2018-07-02 17:27 ` [PATCH 0/9] Fix references for some missing documentation files Jonathan Corbet
  9 siblings, 0 replies; 15+ messages in thread
From: Mauro Carvalho Chehab @ 2018-06-26  9:49 UTC (permalink / raw)
  To: Linux Doc Mailing List
  Cc: Mauro Carvalho Chehab, Mauro Carvalho Chehab, linux-kernel,
	Jonathan Corbet, Jeff Kirsher, David S. Miller, intel-wired-lan,
	netdev

    Documentation/networking/e1000.rst:83: ERROR: Unexpected indentation.
    Documentation/networking/e1000.rst:84: WARNING: Block quote ends without a blank line; unexpected unindent.
    Documentation/networking/e1000.rst:173: WARNING: Definition list ends without a blank line; unexpected unindent.
    Documentation/networking/e1000.rst:236: WARNING: Definition list ends without a blank line; unexpected unindent.

While here, fix highlights and mark a table as such.

Signed-off-by: Mauro Carvalho Chehab <mchehab+samsung@kernel.org>
---
 Documentation/networking/e1000.rst | 187 +++++++++++++++++------------
 1 file changed, 112 insertions(+), 75 deletions(-)

diff --git a/Documentation/networking/e1000.rst b/Documentation/networking/e1000.rst
index 144b87eef153..f10dd4086921 100644
--- a/Documentation/networking/e1000.rst
+++ b/Documentation/networking/e1000.rst
@@ -34,7 +34,8 @@ Command Line Parameters
 The default value for each parameter is generally the recommended setting,
 unless otherwise noted.
 
-NOTES:  For more information about the AutoNeg, Duplex, and Speed
+NOTES:
+	For more information about the AutoNeg, Duplex, and Speed
         parameters, see the "Speed and Duplex Configuration" section in
         this document.
 
@@ -45,22 +46,27 @@ NOTES:  For more information about the AutoNeg, Duplex, and Speed
 
 AutoNeg
 -------
+
 (Supported only on adapters with copper connections)
-Valid Range:   0x01-0x0F, 0x20-0x2F
-Default Value: 0x2F
+
+:Valid Range:   0x01-0x0F, 0x20-0x2F
+:Default Value: 0x2F
 
 This parameter is a bit-mask that specifies the speed and duplex settings
 advertised by the adapter.  When this parameter is used, the Speed and
 Duplex parameters must not be specified.
 
-NOTE:  Refer to the Speed and Duplex section of this readme for more
+NOTE:
+       Refer to the Speed and Duplex section of this readme for more
        information on the AutoNeg parameter.
 
 Duplex
 ------
+
 (Supported only on adapters with copper connections)
-Valid Range:   0-2 (0=auto-negotiate, 1=half, 2=full)
-Default Value: 0
+
+:Valid Range:   0-2 (0=auto-negotiate, 1=half, 2=full)
+:Default Value: 0
 
 This defines the direction in which data is allowed to flow.  Can be
 either one or two-directional.  If both Duplex and the link partner are
@@ -70,18 +76,22 @@ duplex.
 
 FlowControl
 -----------
-Valid Range:   0-3 (0=none, 1=Rx only, 2=Tx only, 3=Rx&Tx)
-Default Value: Reads flow control settings from the EEPROM
+
+:Valid Range:   0-3 (0=none, 1=Rx only, 2=Tx only, 3=Rx&Tx)
+:Default Value: Reads flow control settings from the EEPROM
 
 This parameter controls the automatic generation(Tx) and response(Rx)
 to Ethernet PAUSE frames.
 
 InterruptThrottleRate
 ---------------------
+
 (not supported on Intel(R) 82542, 82543 or 82544-based adapters)
-Valid Range:   0,1,3,4,100-100000 (0=off, 1=dynamic, 3=dynamic conservative,
-                                 4=simplified balancing)
-Default Value: 3
+
+:Valid Range:
+   0,1,3,4,100-100000 (0=off, 1=dynamic, 3=dynamic conservative,
+   4=simplified balancing)
+:Default Value: 3
 
 The driver can limit the amount of interrupts per second that the adapter
 will generate for incoming packets. It does this by writing a value to the
@@ -135,13 +145,15 @@ Setting InterruptThrottleRate to 0 turns off any interrupt moderation
 and may improve small packet latency, but is generally not suitable
 for bulk throughput traffic.
 
-NOTE:  InterruptThrottleRate takes precedence over the TxAbsIntDelay and
+NOTE:
+       InterruptThrottleRate takes precedence over the TxAbsIntDelay and
        RxAbsIntDelay parameters.  In other words, minimizing the receive
        and/or transmit absolute delays does not force the controller to
        generate more interrupts than what the Interrupt Throttle Rate
        allows.
 
-CAUTION:  If you are using the Intel(R) PRO/1000 CT Network Connection
+CAUTION:
+          If you are using the Intel(R) PRO/1000 CT Network Connection
           (controller 82547), setting InterruptThrottleRate to a value
           greater than 75,000, may hang (stop transmitting) adapters
           under certain network conditions.  If this occurs a NETDEV
@@ -151,7 +163,8 @@ CAUTION:  If you are using the Intel(R) PRO/1000 CT Network Connection
           hang, ensure that InterruptThrottleRate is set no greater
           than 75,000 and is not set to 0.
 
-NOTE:  When e1000 is loaded with default settings and multiple adapters
+NOTE:
+       When e1000 is loaded with default settings and multiple adapters
        are in use simultaneously, the CPU utilization may increase non-
        linearly.  In order to limit the CPU utilization without impacting
        the overall throughput, we recommend that you load the driver as
@@ -168,9 +181,11 @@ NOTE:  When e1000 is loaded with default settings and multiple adapters
 
 RxDescriptors
 -------------
-Valid Range:   48-256 for 82542 and 82543-based adapters
-               48-4096 for all other supported adapters
-Default Value: 256
+
+:Valid Range:
+ - 48-256 for 82542 and 82543-based adapters
+ - 48-4096 for all other supported adapters
+:Default Value: 256
 
 This value specifies the number of receive buffer descriptors allocated
 by the driver.  Increasing this value allows the driver to buffer more
@@ -180,15 +195,17 @@ Each descriptor is 16 bytes.  A receive buffer is also allocated for each
 descriptor and can be either 2048, 4096, 8192, or 16384 bytes, depending
 on the MTU setting. The maximum MTU size is 16110.
 
-NOTE:  MTU designates the frame size.  It only needs to be set for Jumbo
+NOTE:
+       MTU designates the frame size.  It only needs to be set for Jumbo
        Frames.  Depending on the available system resources, the request
        for a higher number of receive descriptors may be denied.  In this
        case, use a lower number.
 
 RxIntDelay
 ----------
-Valid Range:   0-65535 (0=off)
-Default Value: 0
+
+:Valid Range:   0-65535 (0=off)
+:Default Value: 0
 
 This value delays the generation of receive interrupts in units of 1.024
 microseconds.  Receive interrupt reduction can improve CPU efficiency if
@@ -198,7 +215,8 @@ of TCP traffic.  If the system is reporting dropped receives, this value
 may be set too high, causing the driver to run out of available receive
 descriptors.
 
-CAUTION:  When setting RxIntDelay to a value other than 0, adapters may
+CAUTION:
+          When setting RxIntDelay to a value other than 0, adapters may
           hang (stop transmitting) under certain network conditions.  If
           this occurs a NETDEV WATCHDOG message is logged in the system
           event log.  In addition, the controller is automatically reset,
@@ -207,9 +225,11 @@ CAUTION:  When setting RxIntDelay to a value other than 0, adapters may
 
 RxAbsIntDelay
 -------------
+
 (This parameter is supported only on 82540, 82545 and later adapters.)
-Valid Range:   0-65535 (0=off)
-Default Value: 128
+
+:Valid Range:   0-65535 (0=off)
+:Default Value: 128
 
 This value, in units of 1.024 microseconds, limits the delay in which a
 receive interrupt is generated.  Useful only if RxIntDelay is non-zero,
@@ -220,9 +240,11 @@ conditions.
 
 Speed
 -----
+
 (This parameter is supported only on adapters with copper connections.)
-Valid Settings: 0, 10, 100, 1000
-Default Value:  0 (auto-negotiate at all supported speeds)
+
+:Valid Settings: 0, 10, 100, 1000
+:Default Value:  0 (auto-negotiate at all supported speeds)
 
 Speed forces the line speed to the specified value in megabits per second
 (Mbps).  If this parameter is not specified or is set to 0 and the link
@@ -231,22 +253,26 @@ speed.  Duplex should also be set when Speed is set to either 10 or 100.
 
 TxDescriptors
 -------------
-Valid Range:   48-256 for 82542 and 82543-based adapters
-               48-4096 for all other supported adapters
-Default Value: 256
+
+:Valid Range:
+  - 48-256 for 82542 and 82543-based adapters
+  - 48-4096 for all other supported adapters
+:Default Value: 256
 
 This value is the number of transmit descriptors allocated by the driver.
 Increasing this value allows the driver to queue more transmits.  Each
 descriptor is 16 bytes.
 
-NOTE:  Depending on the available system resources, the request for a
+NOTE:
+       Depending on the available system resources, the request for a
        higher number of transmit descriptors may be denied.  In this case,
        use a lower number.
 
 TxIntDelay
 ----------
-Valid Range:   0-65535 (0=off)
-Default Value: 8
+
+:Valid Range:   0-65535 (0=off)
+:Default Value: 8
 
 This value delays the generation of transmit interrupts in units of
 1.024 microseconds.  Transmit interrupt reduction can improve CPU
@@ -256,9 +282,11 @@ causing the driver to run out of available transmit descriptors.
 
 TxAbsIntDelay
 -------------
+
 (This parameter is supported only on 82540, 82545 and later adapters.)
-Valid Range:   0-65535 (0=off)
-Default Value: 32
+
+:Valid Range:   0-65535 (0=off)
+:Default Value: 32
 
 This value, in units of 1.024 microseconds, limits the delay in which a
 transmit interrupt is generated.  Useful only if TxIntDelay is non-zero,
@@ -269,18 +297,21 @@ network conditions.
 
 XsumRX
 ------
+
 (This parameter is NOT supported on the 82542-based adapter.)
-Valid Range:   0-1
-Default Value: 1
+
+:Valid Range:   0-1
+:Default Value: 1
 
 A value of '1' indicates that the driver should enable IP checksum
 offload for received packets (both UDP and TCP) to the adapter hardware.
 
 Copybreak
 ---------
-Valid Range:   0-xxxxxxx (0=off)
-Default Value: 256
-Usage: modprobe e1000.ko copybreak=128
+
+:Valid Range:   0-xxxxxxx (0=off)
+:Default Value: 256
+:Usage: modprobe e1000.ko copybreak=128
 
 Driver copies all packets below or equaling this size to a fresh RX
 buffer before handing it up the stack.
@@ -292,8 +323,9 @@ it is also available during runtime at
 
 SmartPowerDownEnable
 --------------------
-Valid Range: 0-1
-Default Value:  0 (disabled)
+
+:Valid Range: 0-1
+:Default Value:  0 (disabled)
 
 Allows PHY to turn off in lower power states. The user can turn off
 this parameter in supported chipsets.
@@ -309,14 +341,14 @@ fiber interface board only links at 1000 Mbps full-duplex.
 
 For copper-based boards, the keywords interact as follows:
 
-  The default operation is auto-negotiate.  The board advertises all
+- The default operation is auto-negotiate.  The board advertises all
   supported speed and duplex combinations, and it links at the highest
   common speed and duplex mode IF the link partner is set to auto-negotiate.
 
-  If Speed = 1000, limited auto-negotiation is enabled and only 1000 Mbps
+- If Speed = 1000, limited auto-negotiation is enabled and only 1000 Mbps
   is advertised (The 1000BaseT spec requires auto-negotiation.)
 
-  If Speed = 10 or 100, then both Speed and Duplex should be set.  Auto-
+- If Speed = 10 or 100, then both Speed and Duplex should be set.  Auto-
   negotiation is disabled, and the AutoNeg parameter is ignored.  Partner
   SHOULD also be forced.
 
@@ -328,13 +360,15 @@ process.
 The parameter may be specified as either a decimal or hexadecimal value as
 determined by the bitmap below.
 
+============== ====== ====== ======= ======= ====== ====== ======= ======
 Bit position   7      6      5       4       3      2      1       0
 Decimal Value  128    64     32      16      8      4      2       1
 Hex value      80     40     20      10      8      4      2       1
 Speed (Mbps)   N/A    N/A    1000    N/A     100    100    10      10
 Duplex                       Full            Full   Half   Full    Half
+============== ====== ====== ======= ======= ====== ====== ======= ======
 
-Some examples of using AutoNeg:
+Some examples of using AutoNeg::
 
   modprobe e1000 AutoNeg=0x01 (Restricts autonegotiation to 10 Half)
   modprobe e1000 AutoNeg=1 (Same as above)
@@ -357,56 +391,59 @@ Additional Configurations
 
 Jumbo Frames
 ------------
-Jumbo Frames support is enabled by changing the MTU to a value larger
-than the default of 1500.  Use the ifconfig command to increase the MTU
-size.  For example::
+
+  Jumbo Frames support is enabled by changing the MTU to a value larger than
+  the default of 1500.  Use the ifconfig command to increase the MTU size.
+  For example::
 
        ifconfig eth<x> mtu 9000 up
 
-This setting is not saved across reboots.  It can be made permanent if
-you add::
+  This setting is not saved across reboots.  It can be made permanent if
+  you add::
 
        MTU=9000
 
-to the file /etc/sysconfig/network-scripts/ifcfg-eth<x>.  This example
-applies to the Red Hat distributions; other distributions may store this
-setting in a different location.
+  to the file /etc/sysconfig/network-scripts/ifcfg-eth<x>.  This example
+  applies to the Red Hat distributions; other distributions may store this
+  setting in a different location.
 
-Notes: Degradation in throughput performance may be observed in some
-Jumbo frames environments.  If this is observed, increasing the
-application's socket buffer size and/or increasing the
-/proc/sys/net/ipv4/tcp_*mem entry values may help.  See the specific
-application manual and /usr/src/linux*/Documentation/
-networking/ip-sysctl.txt for more details.
+Notes:
+  Degradation in throughput performance may be observed in some Jumbo frames
+  environments. If this is observed, increasing the application's socket buffer
+  size and/or increasing the /proc/sys/net/ipv4/tcp_*mem entry values may help.
+  See the specific application manual and /usr/src/linux*/Documentation/
+  networking/ip-sysctl.txt for more details.
 
-- The maximum MTU setting for Jumbo Frames is 16110.  This value
-  coincides with the maximum Jumbo Frames size of 16128.
+  - The maximum MTU setting for Jumbo Frames is 16110.  This value coincides
+    with the maximum Jumbo Frames size of 16128.
 
-- Using Jumbo frames at 10 or 100 Mbps is not supported and may result
-  in poor performance or loss of link.
+  - Using Jumbo frames at 10 or 100 Mbps is not supported and may result in
+    poor performance or loss of link.
 
-- Adapters based on the Intel(R) 82542 and 82573V/E controller do not
-  support Jumbo Frames.  These correspond to the following product names:
-  Intel(R) PRO/1000 Gigabit Server Adapter Intel(R) PRO/1000 PM Network
-  Connection
+  - Adapters based on the Intel(R) 82542 and 82573V/E controller do not
+    support Jumbo Frames. These correspond to the following product names::
+
+     Intel(R) PRO/1000 Gigabit Server Adapter
+     Intel(R) PRO/1000 PM Network Connection
 
 ethtool
 -------
-The driver utilizes the ethtool interface for driver configuration and
-diagnostics, as well as displaying statistical information.  The ethtool
-version 1.6 or later is required for this functionality.
 
-The latest release of ethtool can be found from
-https://www.kernel.org/pub/software/network/ethtool/
+  The driver utilizes the ethtool interface for driver configuration and
+  diagnostics, as well as displaying statistical information.  The ethtool
+  version 1.6 or later is required for this functionality.
+
+  The latest release of ethtool can be found from
+  https://www.kernel.org/pub/software/network/ethtool/
 
 Enabling Wake on LAN* (WoL)
 ---------------------------
-WoL is configured through the ethtool* utility.
 
-WoL will be enabled on the system during the next shut down or reboot.
-For this driver version, in order to enable WoL, the e1000 driver must be
-loaded when shutting down or rebooting the system.
+  WoL is configured through the ethtool* utility.
 
+  WoL will be enabled on the system during the next shut down or reboot.
+  For this driver version, in order to enable WoL, the e1000 driver must be
+  loaded when shutting down or rebooting the system.
 
 Support
 =======
-- 
2.17.1



* [PATCH 9/9] docs: histogram.txt: convert it to ReST file format
  2018-06-26  9:49 [PATCH 0/9] Fix references for some missing documentation files Mauro Carvalho Chehab
                   ` (7 preceding siblings ...)
  2018-06-26  9:49 ` [PATCH 8/9] networking: e1000.rst: " Mauro Carvalho Chehab
@ 2018-06-26  9:49 ` Mauro Carvalho Chehab
  2018-06-26 14:20   ` Steven Rostedt
  2018-07-02 17:27 ` [PATCH 0/9] Fix references for some missing documentation files Jonathan Corbet
  9 siblings, 1 reply; 15+ messages in thread
From: Mauro Carvalho Chehab @ 2018-06-26  9:49 UTC (permalink / raw)
  To: Linux Doc Mailing List
  Cc: Mauro Carvalho Chehab, Mauro Carvalho Chehab, linux-kernel,
	Jonathan Corbet, Steven Rostedt, Ingo Molnar, Tom Zanussi,
	James Morris, Xiongwei Song, Changbin Du, Masami Hiramatsu,
	Joel Fernandes (Google)

Despite being mentioned in Documentation/trace/ftrace.rst as an
RST file, this file was still plain text, with several issues.
Convert it to ReST and add it to the trace index:
- Mark the document title as such;
- Identify and indent the literal blocks;
- Use the proper markup for tables.
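
For reference, the three conversion steps above map onto standard ReST
constructs; the fragment below is an illustrative sketch built from the
same constructs the patch introduces (title adornment, a simple table,
and a ``::`` literal block), not a verbatim excerpt of histogram.txt:

```rst
================
Event Histograms
================

Field modifiers can be listed in a simple table, delimited by
rows of ``=`` characters:

===== ===============================
.hex  display a number as a hex value
.sym  display an address as a symbol
===== ===============================

A literal block is introduced by ``::`` and indented::

    # echo 'hist:keys=skbaddr.hex:vals=len' > trigger
```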

Signed-off-by: Mauro Carvalho Chehab <mchehab+samsung@kernel.org>
---
 Documentation/trace/events.rst                |    2 +-
 .../trace/{histogram.txt => histogram.rst}    | 1242 +++++++++--------
 Documentation/trace/index.rst                 |    1 +
 kernel/trace/Kconfig                          |    2 +-
 4 files changed, 627 insertions(+), 620 deletions(-)
 rename Documentation/trace/{histogram.txt => histogram.rst} (73%)

diff --git a/Documentation/trace/events.rst b/Documentation/trace/events.rst
index 696dc69b8158..f7e1fcc0953c 100644
--- a/Documentation/trace/events.rst
+++ b/Documentation/trace/events.rst
@@ -524,4 +524,4 @@ The following commands are supported:
   totals derived from one or more trace event format fields and/or
   event counts (hitcount).
 
-  See Documentation/trace/histogram.txt for details and examples.
+  See Documentation/trace/histogram.rst for details and examples.
diff --git a/Documentation/trace/histogram.txt b/Documentation/trace/histogram.rst
similarity index 73%
rename from Documentation/trace/histogram.txt
rename to Documentation/trace/histogram.rst
index 7ffea6aa22e3..5ac724baea7d 100644
--- a/Documentation/trace/histogram.txt
+++ b/Documentation/trace/histogram.rst
@@ -1,6 +1,8 @@
-			     Event Histograms
+================
+Event Histograms
+================
 
-		    Documentation written by Tom Zanussi
+Documentation written by Tom Zanussi
 
 1. Introduction
 ===============
@@ -19,7 +21,7 @@
   derived from one or more trace event format fields and/or event
   counts (hitcount).
 
-  The format of a hist trigger is as follows:
+  The format of a hist trigger is as follows::
 
         hist:keys=<field1[,field2,...]>[:values=<field1[,field2,...]>]
           [:sort=<field1[,field2,...]>][:size=#entries][:pause][:continue]
@@ -68,6 +70,7 @@
   modified by appending any of the following modifiers to the field
   name:
 
+	=========== ==========================================
         .hex        display a number as a hex value
 	.sym        display an address as a symbol
 	.sym-offset display an address as a symbol and offset
@@ -75,6 +78,7 @@
 	.execname   display a common_pid as a program name
 	.log2       display log2 value rather than raw number
 	.usecs      display a common_timestamp in microseconds
+	=========== ==========================================
 
   Note that in general the semantics of a given field aren't
   interpreted when applying a modifier to it, but there are some
@@ -92,15 +96,15 @@
       pid-specific comm fields in the event itself.
 
   A typical usage scenario would be the following to enable a hist
-  trigger, read its current contents, and then turn it off:
+  trigger, read its current contents, and then turn it off::
 
-  # echo 'hist:keys=skbaddr.hex:vals=len' > \
-    /sys/kernel/debug/tracing/events/net/netif_rx/trigger
+    # echo 'hist:keys=skbaddr.hex:vals=len' > \
+      /sys/kernel/debug/tracing/events/net/netif_rx/trigger
 
-  # cat /sys/kernel/debug/tracing/events/net/netif_rx/hist
+    # cat /sys/kernel/debug/tracing/events/net/netif_rx/hist
 
-  # echo '!hist:keys=skbaddr.hex:vals=len' > \
-    /sys/kernel/debug/tracing/events/net/netif_rx/trigger
+    # echo '!hist:keys=skbaddr.hex:vals=len' > \
+      /sys/kernel/debug/tracing/events/net/netif_rx/trigger
 
   The trigger file itself can be read to show the details of the
   currently attached hist trigger.  This information is also displayed
@@ -140,7 +144,7 @@
   can be attached to a given event, allowing that event to kick off
   and stop aggregations on a host of other events.
 
-  The format is very similar to the enable/disable_event triggers:
+  The format is very similar to the enable/disable_event triggers::
 
       enable_hist:<system>:<event>[:count]
       disable_hist:<system>:<event>[:count]
@@ -153,16 +157,16 @@
   A typical usage scenario for the enable_hist/disable_hist triggers
   would be to first set up a paused hist trigger on some event,
   followed by an enable_hist/disable_hist pair that turns the hist
-  aggregation on and off when conditions of interest are hit:
+  aggregation on and off when conditions of interest are hit::
 
-  # echo 'hist:keys=skbaddr.hex:vals=len:pause' > \
-    /sys/kernel/debug/tracing/events/net/netif_receive_skb/trigger
+    # echo 'hist:keys=skbaddr.hex:vals=len:pause' > \
+      /sys/kernel/debug/tracing/events/net/netif_receive_skb/trigger
 
-  # echo 'enable_hist:net:netif_receive_skb if filename==/usr/bin/wget' > \
-    /sys/kernel/debug/tracing/events/sched/sched_process_exec/trigger
+    # echo 'enable_hist:net:netif_receive_skb if filename==/usr/bin/wget' > \
+      /sys/kernel/debug/tracing/events/sched/sched_process_exec/trigger
 
-  # echo 'disable_hist:net:netif_receive_skb if comm==wget' > \
-    /sys/kernel/debug/tracing/events/sched/sched_process_exit/trigger
+    # echo 'disable_hist:net:netif_receive_skb if comm==wget' > \
+      /sys/kernel/debug/tracing/events/sched/sched_process_exit/trigger
 
   The above sets up an initially paused hist trigger which is unpaused
   and starts aggregating events when a given program is executed, and
@@ -172,8 +176,8 @@
   The examples below provide a more concrete illustration of the
   concepts and typical usage patterns discussed above.
 
-  'special' event fields
-  ------------------------
+'special' event fields
+------------------------
 
   There are a number of 'special event fields' available for use as
   keys or values in a hist trigger.  These look like and behave as if
@@ -182,14 +186,16 @@
   event, and can be used anywhere an actual event field could be.
   They are:
 
-    common_timestamp       u64 - timestamp (from ring buffer) associated
-                                 with the event, in nanoseconds.  May be
-				 modified by .usecs to have timestamps
-				 interpreted as microseconds.
-    cpu                    int - the cpu on which the event occurred.
+    ====================== ==== =======================================
+    common_timestamp       u64  timestamp (from ring buffer) associated
+                                with the event, in nanoseconds.  May be
+			        modified by .usecs to have timestamps
+			        interpreted as microseconds.
+    cpu                    int  the cpu on which the event occurred.
+    ====================== ==== =======================================
 
-  Extended error information
-  --------------------------
+Extended error information
+--------------------------
 
   For some error conditions encountered when invoking a hist trigger
   command, extended error information is available via the
@@ -199,7 +205,7 @@
   be available until the next hist trigger command for that event.
 
   If available for a given error condition, the extended error
-  information and usage takes the following form:
+  information and usage takes the following form::
 
     # echo xxx > /sys/kernel/debug/tracing/events/sched/sched_wakeup/trigger
     echo: write error: Invalid argument
@@ -213,7 +219,7 @@
 
   The first set of examples creates aggregations using the kmalloc
   event.  The fields that can be used for the hist trigger are listed
-  in the kmalloc event's format file:
+  in the kmalloc event's format file::
 
     # cat /sys/kernel/debug/tracing/events/kmem/kmalloc/format
     name: kmalloc
@@ -232,7 +238,7 @@
 
   We'll start by creating a hist trigger that generates a simple table
   that lists the total number of bytes requested for each function in
-  the kernel that made one or more calls to kmalloc:
+  the kernel that made one or more calls to kmalloc::
 
     # echo 'hist:key=call_site:val=bytes_req' > \
             /sys/kernel/debug/tracing/events/kmem/kmalloc/trigger
@@ -247,7 +253,7 @@
 
   We'll let it run for awhile and then dump the contents of the 'hist'
   file in the kmalloc event's subdirectory (for readability, a number
-  of entries have been omitted):
+  of entries have been omitted)::
 
     # cat /sys/kernel/debug/tracing/events/kmem/kmalloc/hist
     # trigger info: hist:keys=call_site:vals=bytes_req:sort=hitcount:size=2048 [active]
@@ -287,7 +293,7 @@
   specified in the trigger, followed by the value(s) also specified in
   the trigger.  At the beginning of the output is a line that displays
   the trigger info, which can also be displayed by reading the
-  'trigger' file:
+  'trigger' file::
 
     # cat /sys/kernel/debug/tracing/events/kmem/kmalloc/trigger
     hist:keys=call_site:vals=bytes_req:sort=hitcount:size=2048 [active]
@@ -317,7 +323,7 @@
   frequencies.
 
   To turn the hist trigger off, simply call up the trigger in the
-  command history and re-execute it with a '!' prepended:
+  command history and re-execute it with a '!' prepended::
 
     # echo '!hist:key=call_site:val=bytes_req' > \
            /sys/kernel/debug/tracing/events/kmem/kmalloc/trigger
@@ -325,7 +331,7 @@
   Finally, notice that the call_site as displayed in the output above
   isn't really very useful.  It's an address, but normally addresses
   are displayed in hex.  To have a numeric field displayed as a hex
-  value, simply append '.hex' to the field name in the trigger:
+  value, simply append '.hex' to the field name in the trigger::
 
     # echo 'hist:key=call_site.hex:val=bytes_req' > \
            /sys/kernel/debug/tracing/events/kmem/kmalloc/trigger
@@ -370,7 +376,7 @@
   when looking at text addresses are the corresponding symbols
   instead.  To have an address displayed as symbolic value instead,
   simply append '.sym' or '.sym-offset' to the field name in the
-  trigger:
+  trigger::
 
     # echo 'hist:key=call_site.sym:val=bytes_req' > \
            /sys/kernel/debug/tracing/events/kmem/kmalloc/trigger
@@ -420,7 +426,7 @@
  run.  If instead we wanted to see the top kmalloc callers in
   terms of the number of bytes requested rather than the number of
   calls, and we wanted the top caller to appear at the top, we can use
-  the 'sort' parameter, along with the 'descending' modifier:
+  the 'sort' parameter, along with the 'descending' modifier::
 
     # echo 'hist:key=call_site.sym:val=bytes_req:sort=bytes_req.descending' > \
            /sys/kernel/debug/tracing/events/kmem/kmalloc/trigger
@@ -461,7 +467,7 @@
         Dropped: 0
 
   To display the offset and size information in addition to the symbol
-  name, just use 'sym-offset' instead:
+  name, just use 'sym-offset' instead::
 
     # echo 'hist:key=call_site.sym-offset:val=bytes_req:sort=bytes_req.descending' > \
            /sys/kernel/debug/tracing/events/kmem/kmalloc/trigger
@@ -500,7 +506,7 @@
   We can also add multiple fields to the 'values' parameter.  For
   example, we might want to see the total number of bytes allocated
   alongside bytes requested, and display the result sorted by bytes
-  allocated in a descending order:
+  allocated in a descending order::
 
     # echo 'hist:keys=call_site.sym:values=bytes_req,bytes_alloc:sort=bytes_alloc.descending' > \
            /sys/kernel/debug/tracing/events/kmem/kmalloc/trigger
@@ -543,7 +549,7 @@
   the hist trigger display symbolic call_sites, we can have the hist
   trigger additionally display the complete set of kernel stack traces
   that led to each call_site.  To do that, we simply use the special
-  value 'stacktrace' for the key parameter:
+  value 'stacktrace' for the key parameter::
 
     # echo 'hist:keys=stacktrace:values=bytes_req,bytes_alloc:sort=bytes_alloc' > \
            /sys/kernel/debug/tracing/events/kmem/kmalloc/trigger
@@ -554,7 +560,7 @@
   event, along with a running total of any of the event fields for
   that event.  Here we tally bytes requested and bytes allocated for
   every callpath in the system that led up to a kmalloc (in this case
-  every callpath to a kmalloc for a kernel compile):
+  every callpath to a kmalloc for a kernel compile)::
 
     # cat /sys/kernel/debug/tracing/events/kmem/kmalloc/hist
     # trigger info: hist:keys=stacktrace:vals=bytes_req,bytes_alloc:sort=bytes_alloc:size=2048 [active]
@@ -652,7 +658,7 @@
   gather and display sorted totals for each process, you can use the
   special .execname modifier to display the executable names for the
   processes in the table rather than raw pids.  The example below
-  keeps a per-process sum of total bytes read:
+  keeps a per-process sum of total bytes read::
 
     # echo 'hist:key=common_pid.execname:val=count:sort=count.descending' > \
            /sys/kernel/debug/tracing/events/syscalls/sys_enter_read/trigger
@@ -693,7 +699,7 @@
   gather and display a list of systemwide syscall hits, you can use
   the special .syscall modifier to display the syscall names rather
   than raw ids.  The example below keeps a running total of syscall
-  counts for the system during the run:
+  counts for the system during the run::
 
     # echo 'hist:key=id.syscall:val=hitcount' > \
            /sys/kernel/debug/tracing/events/raw_syscalls/sys_enter/trigger
@@ -735,19 +741,19 @@
         Entries: 72
         Dropped: 0
 
-    The syscall counts above provide a rough overall picture of system
-    call activity on the system; we can see for example that the most
-    popular system call on this system was the 'sys_ioctl' system call.
+  The syscall counts above provide a rough overall picture of system
+  call activity on the system; we can see for example that the most
+  popular system call on this system was the 'sys_ioctl' system call.
 
-    We can use 'compound' keys to refine that number and provide some
-    further insight as to which processes exactly contribute to the
-    overall ioctl count.
+  We can use 'compound' keys to refine that number and provide some
+  further insight as to which processes exactly contribute to the
+  overall ioctl count.
 
-    The command below keeps a hitcount for every unique combination of
-    system call id and pid - the end result is essentially a table
-    that keeps a per-pid sum of system call hits.  The results are
-    sorted using the system call id as the primary key, and the
-    hitcount sum as the secondary key:
+  The command below keeps a hitcount for every unique combination of
+  system call id and pid - the end result is essentially a table
+  that keeps a per-pid sum of system call hits.  The results are
+  sorted using the system call id as the primary key, and the
+  hitcount sum as the secondary key::
 
     # echo 'hist:key=id.syscall,common_pid.execname:val=hitcount:sort=id,hitcount' > \
            /sys/kernel/debug/tracing/events/raw_syscalls/sys_enter/trigger
@@ -793,11 +799,11 @@
         Entries: 323
         Dropped: 0
 
-    The above list does give us a breakdown of the ioctl syscall by
-    pid, but it also gives us quite a bit more than that, which we
-    don't really care about at the moment.  Since we know the syscall
-    id for sys_ioctl (16, displayed next to the sys_ioctl name), we
-    can use that to filter out all the other syscalls:
+  The above list does give us a breakdown of the ioctl syscall by
+  pid, but it also gives us quite a bit more than that, which we
+  don't really care about at the moment.  Since we know the syscall
+  id for sys_ioctl (16, displayed next to the sys_ioctl name), we
+  can use that to filter out all the other syscalls::
 
     # echo 'hist:key=id.syscall,common_pid.execname:val=hitcount:sort=id,hitcount if id == 16' > \
            /sys/kernel/debug/tracing/events/raw_syscalls/sys_enter/trigger
@@ -829,18 +835,18 @@
         Entries: 103
         Dropped: 0
 
-    The above output shows that 'compiz' and 'Xorg' are far and away
-    the heaviest ioctl callers (which might lead to questions about
-    whether they really need to be making all those calls and to
-    possible avenues for further investigation.)
+  The above output shows that 'compiz' and 'Xorg' are far and away
+  the heaviest ioctl callers (which might lead to questions about
+  whether they really need to be making all those calls and to
+  possible avenues for further investigation.)
 
-    The compound key examples used a key and a sum value (hitcount) to
-    sort the output, but we can just as easily use two keys instead.
-    Here's an example where we use a compound key composed of the the
-    common_pid and size event fields.  Sorting with pid as the primary
-    key and 'size' as the secondary key allows us to display an
-    ordered summary of the recvfrom sizes, with counts, received by
-    each process:
+  The compound key examples used a key and a sum value (hitcount) to
+  sort the output, but we can just as easily use two keys instead.
+  Here's an example where we use a compound key composed of the
+  common_pid and size event fields.  Sorting with pid as the primary
+  key and 'size' as the secondary key allows us to display an
+  ordered summary of the recvfrom sizes, with counts, received by
+  each process::
 
     # echo 'hist:key=common_pid.execname,size:val=hitcount:sort=common_pid,size' > \
            /sys/kernel/debug/tracing/events/syscalls/sys_enter_recvfrom/trigger
@@ -893,7 +899,7 @@
   demonstrates how you can manually pause and continue a hist trigger.
   In this example, we'll aggregate fork counts and don't expect a
   large number of entries in the hash table, so we'll drop it to a
-  much smaller number, say 256:
+  much smaller number, say 256::
 
     # echo 'hist:key=child_comm:val=hitcount:size=256' > \
            /sys/kernel/debug/tracing/events/sched/sched_process_fork/trigger
@@ -929,7 +935,7 @@
 
   If we want to pause the hist trigger, we can simply append :pause to
   the command that started the trigger.  Notice that the trigger info
-  displays as [paused]:
+  displays as [paused]::
 
     # echo 'hist:key=child_comm:val=hitcount:size=256:pause' >> \
            /sys/kernel/debug/tracing/events/sched/sched_process_fork/trigger
@@ -966,7 +972,7 @@
 
   To manually continue having the trigger aggregate events, append
   :cont instead.  Notice that the trigger info displays as [active]
-  again, and the data has changed:
+  again, and the data has changed::
 
     # echo 'hist:key=child_comm:val=hitcount:size=256:cont' >> \
            /sys/kernel/debug/tracing/events/sched/sched_process_fork/trigger
@@ -1020,7 +1026,7 @@
   wget.
 
   First we set up an initially paused stacktrace trigger on the
-  netif_receive_skb event:
+  netif_receive_skb event::
 
     # echo 'hist:key=stacktrace:vals=len:pause' > \
            /sys/kernel/debug/tracing/events/net/netif_receive_skb/trigger
@@ -1031,7 +1037,7 @@
   set up on netif_receive_skb if and only if it sees a
   sched_process_exec event with a filename of '/usr/bin/wget'.  When
   that happens, all netif_receive_skb events are aggregated into a
-  hash table keyed on stacktrace:
+  hash table keyed on stacktrace::
 
     # echo 'enable_hist:net:netif_receive_skb if filename==/usr/bin/wget' > \
            /sys/kernel/debug/tracing/events/sched/sched_process_exec/trigger
@@ -1039,7 +1045,7 @@
   The aggregation continues until the netif_receive_skb is paused
   again, which is what the following disable_hist event does by
   creating a similar setup on the sched_process_exit event, using the
-  filter 'comm==wget':
+  filter 'comm==wget'::
 
     # echo 'disable_hist:net:netif_receive_skb if comm==wget' > \
            /sys/kernel/debug/tracing/events/sched/sched_process_exit/trigger
@@ -1051,7 +1057,7 @@
   The overall effect is that netif_receive_skb events are aggregated
   into the hash table for only the duration of the wget.  Executing a
   wget command and then listing the 'hist' file will display the
-  output generated by the wget command:
+  output generated by the wget command::
 
     $ wget https://www.kernel.org/pub/linux/kernel/v3.x/patch-3.19.xz
 
@@ -1136,13 +1142,13 @@
   Suppose we wanted to try another run of the previous example but
   this time also wanted to see the complete list of events that went
   into the histogram.  In order to avoid having to set everything up
-  again, we can just clear the histogram first:
+  again, we can just clear the histogram first::
 
     # echo 'hist:key=stacktrace:vals=len:clear' >> \
            /sys/kernel/debug/tracing/events/net/netif_receive_skb/trigger
 
   Just to verify that it is in fact cleared, here's what we now see in
-  the hist file:
+  the hist file::
 
     # cat /sys/kernel/debug/tracing/events/net/netif_receive_skb/hist
     # trigger info: hist:keys=stacktrace:vals=len:sort=hitcount:size=2048 [paused]
@@ -1156,7 +1162,7 @@
   event occurring during the new run, which are in fact the same
   events being aggregated into the hash table, we add some additional
   'enable_event' events to the triggering sched_process_exec and
-  sched_process_exit events as such:
+  sched_process_exit events as such::
 
     # echo 'enable_event:net:netif_receive_skb if filename==/usr/bin/wget' > \
            /sys/kernel/debug/tracing/events/sched/sched_process_exec/trigger
@@ -1167,7 +1173,7 @@
   If you read the trigger files for the sched_process_exec and
   sched_process_exit triggers, you should see two triggers for each:
   one enabling/disabling the hist aggregation and the other
-  enabling/disabling the logging of events:
+  enabling/disabling the logging of events::
 
     # cat /sys/kernel/debug/tracing/events/sched/sched_process_exec/trigger
     enable_event:net:netif_receive_skb:unlimited if filename==/usr/bin/wget
@@ -1181,13 +1187,13 @@
   sched_process_exit events is hit and matches 'wget', it enables or
   disables both the histogram and the event log, and what you end up
   with is a hash table and set of events just covering the specified
-  duration.  Run the wget command again:
+  duration.  Run the wget command again::
 
     $ wget https://www.kernel.org/pub/linux/kernel/v3.x/patch-3.19.xz
 
   Displaying the 'hist' file should show something similar to what you
   saw in the last run, but this time you should also see the
-  individual events in the trace file:
+  individual events in the trace file::
 
     # cat /sys/kernel/debug/tracing/trace
 
@@ -1220,7 +1226,7 @@
   attached to a given event.  This capability can be useful for
   creating a set of different summaries derived from the same set of
   events, or for comparing the effects of different filters, among
-  other things.
+  other things::
 
     # echo 'hist:keys=skbaddr.hex:vals=len if len < 0' >> \
            /sys/kernel/debug/tracing/events/net/netif_receive_skb/trigger
@@ -1241,7 +1247,7 @@
   any existing hist triggers beforehand).
 
   Displaying the contents of the 'hist' file for the event shows the
-  contents of all five histograms:
+  contents of all five histograms::
 
     # cat /sys/kernel/debug/tracing/events/net/netif_receive_skb/hist
 
@@ -1361,7 +1367,7 @@
   output of events generated by tracepoints contained inside inline
   functions, but names can be used in a hist trigger on any event.
   For example, these two triggers when hit will update the same 'len'
-  field in the shared 'foo' histogram data:
+  field in the shared 'foo' histogram data::
 
     # echo 'hist:name=foo:keys=skbaddr.hex:vals=len' > \
            /sys/kernel/debug/tracing/events/net/netif_receive_skb/trigger
@@ -1369,7 +1375,7 @@
            /sys/kernel/debug/tracing/events/net/netif_rx/trigger
 
   You can see that they're updating common histogram data by reading
-  each event's hist files at the same time:
+  each event's hist files at the same time::
 
     # cat /sys/kernel/debug/tracing/events/net/netif_receive_skb/hist;
       cat /sys/kernel/debug/tracing/events/net/netif_rx/hist
@@ -1482,7 +1488,7 @@
   And here's an example that shows how to combine histogram data from
   any two events even if they don't share any 'compatible' fields
   other than 'hitcount' and 'stacktrace'.  These commands create a
-  couple of triggers named 'bar' using those fields:
+  couple of triggers named 'bar' using those fields::
 
     # echo 'hist:name=bar:key=stacktrace:val=hitcount' > \
            /sys/kernel/debug/tracing/events/sched/sched_process_fork/trigger
@@ -1490,7 +1496,7 @@
           /sys/kernel/debug/tracing/events/net/netif_rx/trigger
 
   And displaying the output of either shows some interesting if
-  somewhat confusing output:
+  somewhat confusing output::
 
     # cat /sys/kernel/debug/tracing/events/sched/sched_process_fork/hist
     # cat /sys/kernel/debug/tracing/events/net/netif_rx/hist
@@ -1705,7 +1711,7 @@ to any event field.
 
 Either keys or values can be saved and retrieved in this way.  This
 creates a variable named 'ts0' for a histogram entry with the key
-'next_pid':
+'next_pid'::
 
   # echo 'hist:keys=next_pid:vals=$ts0:ts0=common_timestamp ... >> \
 	event/trigger
@@ -1721,40 +1727,40 @@ Because 'vals=' is used, the common_timestamp variable value above
 will also be summed as a normal histogram value would (though for a
 timestamp it makes little sense).
 
-The below shows that a key value can also be saved in the same way:
+The below shows that a key value can also be saved in the same way::
 
   # echo 'hist:timer_pid=common_pid:key=timer_pid ...' >> event/trigger
 
 If a variable isn't a key variable or prefixed with 'vals=', the
 associated event field will be saved in a variable but won't be summed
-as a value:
+as a value::
 
   # echo 'hist:keys=next_pid:ts1=common_timestamp ...' >> event/trigger
 
 Multiple variables can be assigned at the same time.  The below would
 result in both ts0 and b being created as variables, with both
-common_timestamp and field1 additionally being summed as values:
+common_timestamp and field1 additionally being summed as values::
 
   # echo 'hist:keys=pid:vals=$ts0,$b:ts0=common_timestamp,b=field1 ...' >> \
 	event/trigger
 
 Note that variable assignments can appear either preceding or
 following their use.  The command below behaves identically to the
-command above:
+command above::
 
   # echo 'hist:keys=pid:ts0=common_timestamp,b=field1:vals=$ts0,$b ...' >> \
 	event/trigger
 
 Any number of variables not bound to a 'vals=' prefix can also be
 assigned by simply separating them with colons.  Below is the same
-thing but without the values being summed in the histogram:
+thing but without the values being summed in the histogram::
 
   # echo 'hist:keys=pid:ts0=common_timestamp:b=field1 ...' >> event/trigger
 
 Variables set as above can be referenced and used in expressions on
 another event.
 
-For example, here's how a latency can be calculated:
+For example, here's how a latency can be calculated::
 
   # echo 'hist:keys=pid,prio:ts0=common_timestamp ...' >> event1/trigger
   # echo 'hist:keys=next_pid:wakeup_lat=common_timestamp-$ts0 ...' >> event2/trigger
@@ -1764,7 +1770,7 @@ variable ts0.  In the next line, ts0 is subtracted from the second
 event's timestamp to produce the latency, which is then assigned into
 yet another variable, 'wakeup_lat'.  The hist trigger below in turn
 makes use of the wakeup_lat variable to compute a combined latency
-using the same key and variable from yet another event:
+using the same key and variable from yet another event::
 
   # echo 'hist:key=pid:wakeupswitch_lat=$wakeup_lat+$switchtime_lat ...' >> event3/trigger
 
@@ -1784,7 +1790,7 @@ separated by semicolons, to the tracing/synthetic_events file.
 
 For instance, the following creates a new event named 'wakeup_latency'
 with 3 fields: lat, pid, and prio.  Each of those fields is simply a
-variable reference to a variable on another event:
+variable reference to a variable on another event::
 
   # echo 'wakeup_latency \
           u64 lat; \
@@ -1793,13 +1799,13 @@ variable reference to a variable on another event:
 	  /sys/kernel/debug/tracing/synthetic_events
 
 Reading the tracing/synthetic_events file lists all the currently
-defined synthetic events, in this case the event defined above:
+defined synthetic events, in this case the event defined above::
 
   # cat /sys/kernel/debug/tracing/synthetic_events
     wakeup_latency u64 lat; pid_t pid; int prio
 
 An existing synthetic event definition can be removed by prepending
-the command that defined it with a '!':
+the command that defined it with a '!'::
 
   # echo '!wakeup_latency u64 lat pid_t pid int prio' >> \
     /sys/kernel/debug/tracing/synthetic_events
@@ -1811,13 +1817,13 @@ and variables defined on other events (see Section 2.2.3 below on
 how that is done using hist trigger 'onmatch' action). Once that is
 done, the 'wakeup_latency' synthetic event instance is created.
 
-A histogram can now be defined for the new synthetic event:
+A histogram can now be defined for the new synthetic event::
 
   # echo 'hist:keys=pid,prio,lat.log2:sort=pid,lat' >> \
         /sys/kernel/debug/tracing/events/synthetic/wakeup_latency/trigger
 
 The new event is created under the tracing/events/synthetic/ directory
-and looks and behaves just like any other event:
+and looks and behaves just like any other event::
 
   # ls /sys/kernel/debug/tracing/events/synthetic/wakeup_latency
         enable  filter  format  hist  id  trigger
@@ -1872,74 +1878,74 @@ hist trigger specification.
     As an example the below defines a simple synthetic event and uses
     a variable defined on the sched_wakeup_new event as a parameter
     when invoking the synthetic event.  Here we define the synthetic
-    event:
+    event::
 
-    # echo 'wakeup_new_test pid_t pid' >> \
-           /sys/kernel/debug/tracing/synthetic_events
+      # echo 'wakeup_new_test pid_t pid' >> \
+             /sys/kernel/debug/tracing/synthetic_events
 
-    # cat /sys/kernel/debug/tracing/synthetic_events
-          wakeup_new_test pid_t pid
+      # cat /sys/kernel/debug/tracing/synthetic_events
+            wakeup_new_test pid_t pid
 
     The following hist trigger both defines the missing testpid
     variable and specifies an onmatch() action that generates a
     wakeup_new_test synthetic event whenever a sched_wakeup_new event
     occurs, which because of the 'if comm == "cyclictest"' filter only
-    happens when the executable is cyclictest:
+    happens when the executable is cyclictest::
 
-    # echo 'hist:keys=$testpid:testpid=pid:onmatch(sched.sched_wakeup_new).\
-            wakeup_new_test($testpid) if comm=="cyclictest"' >> \
-            /sys/kernel/debug/tracing/events/sched/sched_wakeup_new/trigger
+      # echo 'hist:keys=$testpid:testpid=pid:onmatch(sched.sched_wakeup_new).\
+              wakeup_new_test($testpid) if comm=="cyclictest"' >> \
+              /sys/kernel/debug/tracing/events/sched/sched_wakeup_new/trigger
 
     Creating and displaying a histogram based on those events is now
     just a matter of using the fields and new synthetic event in the
-    tracing/events/synthetic directory, as usual:
+    tracing/events/synthetic directory, as usual::
 
-    # echo 'hist:keys=pid:sort=pid' >> \
-           /sys/kernel/debug/tracing/events/synthetic/wakeup_new_test/trigger
+      # echo 'hist:keys=pid:sort=pid' >> \
+             /sys/kernel/debug/tracing/events/synthetic/wakeup_new_test/trigger
 
     Running 'cyclictest' should cause wakeup_new events to generate
     wakeup_new_test synthetic events which should result in histogram
-    output in the wakeup_new_test event's hist file:
+    output in the wakeup_new_test event's hist file::
 
-    # cat /sys/kernel/debug/tracing/events/synthetic/wakeup_new_test/hist
+      # cat /sys/kernel/debug/tracing/events/synthetic/wakeup_new_test/hist
 
     A more typical usage would be to use two events to calculate a
     latency.  The following example uses a set of hist triggers to
-    produce a 'wakeup_latency' histogram:
+    produce a 'wakeup_latency' histogram.
 
-    First, we define a 'wakeup_latency' synthetic event:
+    First, we define a 'wakeup_latency' synthetic event::
 
-    # echo 'wakeup_latency u64 lat; pid_t pid; int prio' >> \
-            /sys/kernel/debug/tracing/synthetic_events
+      # echo 'wakeup_latency u64 lat; pid_t pid; int prio' >> \
+              /sys/kernel/debug/tracing/synthetic_events
 
     Next, we specify that whenever we see a sched_waking event for a
-    cyclictest thread, save the timestamp in a 'ts0' variable:
+    cyclictest thread, save the timestamp in a 'ts0' variable::
 
-    # echo 'hist:keys=$saved_pid:saved_pid=pid:ts0=common_timestamp.usecs \
-            if comm=="cyclictest"' >> \
-	    /sys/kernel/debug/tracing/events/sched/sched_waking/trigger
+      # echo 'hist:keys=$saved_pid:saved_pid=pid:ts0=common_timestamp.usecs \
+              if comm=="cyclictest"' >> \
+	      /sys/kernel/debug/tracing/events/sched/sched_waking/trigger
 
     Then, when the corresponding thread is actually scheduled onto the
     CPU by a sched_switch event, calculate the latency and use that
     along with another variable and an event field to generate a
-    wakeup_latency synthetic event:
+    wakeup_latency synthetic event::
 
-    # echo 'hist:keys=next_pid:wakeup_lat=common_timestamp.usecs-$ts0:\
-            onmatch(sched.sched_waking).wakeup_latency($wakeup_lat,\
-	            $saved_pid,next_prio) if next_comm=="cyclictest"' >> \
-	    /sys/kernel/debug/tracing/events/sched/sched_switch/trigger
+      # echo 'hist:keys=next_pid:wakeup_lat=common_timestamp.usecs-$ts0:\
+              onmatch(sched.sched_waking).wakeup_latency($wakeup_lat,\
+	              $saved_pid,next_prio) if next_comm=="cyclictest"' >> \
+	      /sys/kernel/debug/tracing/events/sched/sched_switch/trigger
 
     We also need to create a histogram on the wakeup_latency synthetic
-    event in order to aggregate the generated synthetic event data:
+    event in order to aggregate the generated synthetic event data::
 
-    # echo 'hist:keys=pid,prio,lat:sort=pid,lat' >> \
-            /sys/kernel/debug/tracing/events/synthetic/wakeup_latency/trigger
+      # echo 'hist:keys=pid,prio,lat:sort=pid,lat' >> \
+              /sys/kernel/debug/tracing/events/synthetic/wakeup_latency/trigger
 
     Finally, once we've run cyclictest to actually generate some
     events, we can see the output by looking at the wakeup_latency
-    synthetic event's hist file:
+    synthetic event's hist file::
 
-    # cat /sys/kernel/debug/tracing/events/synthetic/wakeup_latency/hist
+      # cat /sys/kernel/debug/tracing/events/synthetic/wakeup_latency/hist
 
  - onmax(var).save(field,...)
 
@@ -1961,38 +1967,38 @@ hist trigger specification.
     back to that pid, the timestamp difference is calculated.  If the
     resulting latency, stored in wakeup_lat, exceeds the current
     maximum latency, the values specified in the save() fields are
-    recorded:
+    recorded::
 
-    # echo 'hist:keys=pid:ts0=common_timestamp.usecs \
-            if comm=="cyclictest"' >> \
-            /sys/kernel/debug/tracing/events/sched/sched_waking/trigger
+      # echo 'hist:keys=pid:ts0=common_timestamp.usecs \
+              if comm=="cyclictest"' >> \
+              /sys/kernel/debug/tracing/events/sched/sched_waking/trigger
 
-    # echo 'hist:keys=next_pid:\
-            wakeup_lat=common_timestamp.usecs-$ts0:\
-            onmax($wakeup_lat).save(next_comm,prev_pid,prev_prio,prev_comm) \
-            if next_comm=="cyclictest"' >> \
-            /sys/kernel/debug/tracing/events/sched/sched_switch/trigger
+      # echo 'hist:keys=next_pid:\
+              wakeup_lat=common_timestamp.usecs-$ts0:\
+              onmax($wakeup_lat).save(next_comm,prev_pid,prev_prio,prev_comm) \
+              if next_comm=="cyclictest"' >> \
+              /sys/kernel/debug/tracing/events/sched/sched_switch/trigger
 
     When the histogram is displayed, the max value and the saved
     values corresponding to the max are displayed following the rest
-    of the fields:
+    of the fields::
 
-    # cat /sys/kernel/debug/tracing/events/sched/sched_switch/hist
-      { next_pid:       2255 } hitcount:        239
-        common_timestamp-ts0:          0
-        max:         27
-	next_comm: cyclictest
-        prev_pid:          0  prev_prio:        120  prev_comm: swapper/1
+      # cat /sys/kernel/debug/tracing/events/sched/sched_switch/hist
+        { next_pid:       2255 } hitcount:        239
+          common_timestamp-ts0:          0
+          max:         27
+	  next_comm: cyclictest
+          prev_pid:          0  prev_prio:        120  prev_comm: swapper/1
 
-      { next_pid:       2256 } hitcount:       2355
-        common_timestamp-ts0: 0
-        max:         49  next_comm: cyclictest
-        prev_pid:          0  prev_prio:        120  prev_comm: swapper/0
+        { next_pid:       2256 } hitcount:       2355
+          common_timestamp-ts0: 0
+          max:         49  next_comm: cyclictest
+          prev_pid:          0  prev_prio:        120  prev_comm: swapper/0
 
-      Totals:
-          Hits: 12970
-          Entries: 2
-          Dropped: 0
+        Totals:
+            Hits: 12970
+            Entries: 2
+            Dropped: 0
 
 3. User space creating a trigger
 --------------------------------
@@ -2002,24 +2008,24 @@ ring buffer. This can also act like an event, by writing into the trigger
 file located in /sys/kernel/tracing/events/ftrace/print/
 
 Modifying cyclictest to write into the trace_marker file before it sleeps
-and after it wakes up, something like this:
+and after it wakes up, something like this::
 
-static void traceputs(char *str)
-{
+  static void traceputs(char *str)
+  {
 	/* tracemark_fd is the trace_marker file descriptor */
 	if (tracemark_fd < 0)
 		return;
 	/* write the tracemark message */
 	write(tracemark_fd, str, strlen(str));
-}
+  }
 
-And later add something like:
+And later add something like::
 
 	traceputs("start");
 	clock_nanosleep(...);
 	traceputs("end");
 
-We can make a histogram from this:
+We can make a histogram from this::
 
  # cd /sys/kernel/tracing
  # echo 'latency u64 lat' > synthetic_events
@@ -2034,7 +2040,7 @@ it will call the "latency" synthetic event with the calculated latency as its
 parameter. Finally, a histogram is added to the latency synthetic event to
 record the calculated latency along with the pid.
 
-Now running cyclictest with:
+Now running cyclictest with::
 
  # ./cyclictest -p80 -d0 -i250 -n -a -t --tracemark -b 1000
 
@@ -2049,297 +2055,297 @@ Now running cyclictest with:
 
 Note, the -b 1000 is used just to make --tracemark available.
 
-Then we can see the histogram created by this with:
+Then we can see the histogram created by this with::
 
  # cat events/synthetic/latency/hist
-# event histogram
-#
-# trigger info: hist:keys=lat,common_pid:vals=hitcount:sort=lat:size=2048 [active]
-#
+ # event histogram
+ #
+ # trigger info: hist:keys=lat,common_pid:vals=hitcount:sort=lat:size=2048 [active]
+ #
 
-{ lat:        107, common_pid:       2039 } hitcount:          1
-{ lat:        122, common_pid:       2041 } hitcount:          1
-{ lat:        166, common_pid:       2039 } hitcount:          1
-{ lat:        174, common_pid:       2039 } hitcount:          1
-{ lat:        194, common_pid:       2041 } hitcount:          1
-{ lat:        196, common_pid:       2036 } hitcount:          1
-{ lat:        197, common_pid:       2038 } hitcount:          1
-{ lat:        198, common_pid:       2039 } hitcount:          1
-{ lat:        199, common_pid:       2039 } hitcount:          1
-{ lat:        200, common_pid:       2041 } hitcount:          1
-{ lat:        201, common_pid:       2039 } hitcount:          2
-{ lat:        202, common_pid:       2038 } hitcount:          1
-{ lat:        202, common_pid:       2043 } hitcount:          1
-{ lat:        203, common_pid:       2039 } hitcount:          1
-{ lat:        203, common_pid:       2036 } hitcount:          1
-{ lat:        203, common_pid:       2041 } hitcount:          1
-{ lat:        206, common_pid:       2038 } hitcount:          2
-{ lat:        207, common_pid:       2039 } hitcount:          1
-{ lat:        207, common_pid:       2036 } hitcount:          1
-{ lat:        208, common_pid:       2040 } hitcount:          1
-{ lat:        209, common_pid:       2043 } hitcount:          1
-{ lat:        210, common_pid:       2039 } hitcount:          1
-{ lat:        211, common_pid:       2039 } hitcount:          4
-{ lat:        212, common_pid:       2043 } hitcount:          1
-{ lat:        212, common_pid:       2039 } hitcount:          2
-{ lat:        213, common_pid:       2039 } hitcount:          1
-{ lat:        214, common_pid:       2038 } hitcount:          1
-{ lat:        214, common_pid:       2039 } hitcount:          2
-{ lat:        214, common_pid:       2042 } hitcount:          1
-{ lat:        215, common_pid:       2039 } hitcount:          1
-{ lat:        217, common_pid:       2036 } hitcount:          1
-{ lat:        217, common_pid:       2040 } hitcount:          1
-{ lat:        217, common_pid:       2039 } hitcount:          1
-{ lat:        218, common_pid:       2039 } hitcount:          6
-{ lat:        219, common_pid:       2039 } hitcount:          9
-{ lat:        220, common_pid:       2039 } hitcount:         11
-{ lat:        221, common_pid:       2039 } hitcount:          5
-{ lat:        221, common_pid:       2042 } hitcount:          1
-{ lat:        222, common_pid:       2039 } hitcount:          7
-{ lat:        223, common_pid:       2036 } hitcount:          1
-{ lat:        223, common_pid:       2039 } hitcount:          3
-{ lat:        224, common_pid:       2039 } hitcount:          4
-{ lat:        224, common_pid:       2037 } hitcount:          1
-{ lat:        224, common_pid:       2036 } hitcount:          2
-{ lat:        225, common_pid:       2039 } hitcount:          5
-{ lat:        225, common_pid:       2042 } hitcount:          1
-{ lat:        226, common_pid:       2039 } hitcount:          7
-{ lat:        226, common_pid:       2036 } hitcount:          4
-{ lat:        227, common_pid:       2039 } hitcount:          6
-{ lat:        227, common_pid:       2036 } hitcount:         12
-{ lat:        227, common_pid:       2043 } hitcount:          1
-{ lat:        228, common_pid:       2039 } hitcount:          7
-{ lat:        228, common_pid:       2036 } hitcount:         14
-{ lat:        229, common_pid:       2039 } hitcount:          9
-{ lat:        229, common_pid:       2036 } hitcount:          8
-{ lat:        229, common_pid:       2038 } hitcount:          1
-{ lat:        230, common_pid:       2039 } hitcount:         11
-{ lat:        230, common_pid:       2036 } hitcount:          6
-{ lat:        230, common_pid:       2043 } hitcount:          1
-{ lat:        230, common_pid:       2042 } hitcount:          2
-{ lat:        231, common_pid:       2041 } hitcount:          1
-{ lat:        231, common_pid:       2036 } hitcount:          6
-{ lat:        231, common_pid:       2043 } hitcount:          1
-{ lat:        231, common_pid:       2039 } hitcount:          8
-{ lat:        232, common_pid:       2037 } hitcount:          1
-{ lat:        232, common_pid:       2039 } hitcount:          6
-{ lat:        232, common_pid:       2040 } hitcount:          2
-{ lat:        232, common_pid:       2036 } hitcount:          5
-{ lat:        232, common_pid:       2043 } hitcount:          1
-{ lat:        233, common_pid:       2036 } hitcount:          5
-{ lat:        233, common_pid:       2039 } hitcount:         11
-{ lat:        234, common_pid:       2039 } hitcount:          4
-{ lat:        234, common_pid:       2038 } hitcount:          2
-{ lat:        234, common_pid:       2043 } hitcount:          2
-{ lat:        234, common_pid:       2036 } hitcount:         11
-{ lat:        234, common_pid:       2040 } hitcount:          1
-{ lat:        235, common_pid:       2037 } hitcount:          2
-{ lat:        235, common_pid:       2036 } hitcount:          8
-{ lat:        235, common_pid:       2043 } hitcount:          2
-{ lat:        235, common_pid:       2039 } hitcount:          5
-{ lat:        235, common_pid:       2042 } hitcount:          2
-{ lat:        235, common_pid:       2040 } hitcount:          4
-{ lat:        235, common_pid:       2041 } hitcount:          1
-{ lat:        236, common_pid:       2036 } hitcount:          7
-{ lat:        236, common_pid:       2037 } hitcount:          1
-{ lat:        236, common_pid:       2041 } hitcount:          5
-{ lat:        236, common_pid:       2039 } hitcount:          3
-{ lat:        236, common_pid:       2043 } hitcount:          9
-{ lat:        236, common_pid:       2040 } hitcount:          7
-{ lat:        237, common_pid:       2037 } hitcount:          1
-{ lat:        237, common_pid:       2040 } hitcount:          1
-{ lat:        237, common_pid:       2036 } hitcount:          9
-{ lat:        237, common_pid:       2039 } hitcount:          3
-{ lat:        237, common_pid:       2043 } hitcount:          8
-{ lat:        237, common_pid:       2042 } hitcount:          2
-{ lat:        237, common_pid:       2041 } hitcount:          2
-{ lat:        238, common_pid:       2043 } hitcount:         10
-{ lat:        238, common_pid:       2040 } hitcount:          1
-{ lat:        238, common_pid:       2037 } hitcount:          9
-{ lat:        238, common_pid:       2038 } hitcount:          1
-{ lat:        238, common_pid:       2039 } hitcount:          1
-{ lat:        238, common_pid:       2042 } hitcount:          3
-{ lat:        238, common_pid:       2036 } hitcount:          7
-{ lat:        239, common_pid:       2041 } hitcount:          1
-{ lat:        239, common_pid:       2043 } hitcount:         11
-{ lat:        239, common_pid:       2037 } hitcount:         11
-{ lat:        239, common_pid:       2038 } hitcount:          6
-{ lat:        239, common_pid:       2036 } hitcount:          7
-{ lat:        239, common_pid:       2040 } hitcount:          1
-{ lat:        239, common_pid:       2042 } hitcount:          9
-{ lat:        240, common_pid:       2037 } hitcount:         29
-{ lat:        240, common_pid:       2043 } hitcount:         15
-{ lat:        240, common_pid:       2040 } hitcount:         44
-{ lat:        240, common_pid:       2039 } hitcount:          1
-{ lat:        240, common_pid:       2041 } hitcount:          2
-{ lat:        240, common_pid:       2038 } hitcount:          1
-{ lat:        240, common_pid:       2036 } hitcount:         10
-{ lat:        240, common_pid:       2042 } hitcount:         13
-{ lat:        241, common_pid:       2036 } hitcount:         21
-{ lat:        241, common_pid:       2041 } hitcount:         36
-{ lat:        241, common_pid:       2037 } hitcount:         34
-{ lat:        241, common_pid:       2042 } hitcount:         14
-{ lat:        241, common_pid:       2040 } hitcount:         94
-{ lat:        241, common_pid:       2039 } hitcount:         12
-{ lat:        241, common_pid:       2038 } hitcount:          2
-{ lat:        241, common_pid:       2043 } hitcount:         28
-{ lat:        242, common_pid:       2040 } hitcount:        109
-{ lat:        242, common_pid:       2041 } hitcount:        506
-{ lat:        242, common_pid:       2039 } hitcount:        155
-{ lat:        242, common_pid:       2042 } hitcount:         21
-{ lat:        242, common_pid:       2037 } hitcount:         52
-{ lat:        242, common_pid:       2043 } hitcount:         21
-{ lat:        242, common_pid:       2036 } hitcount:         16
-{ lat:        242, common_pid:       2038 } hitcount:        156
-{ lat:        243, common_pid:       2037 } hitcount:         46
-{ lat:        243, common_pid:       2039 } hitcount:         40
-{ lat:        243, common_pid:       2042 } hitcount:        119
-{ lat:        243, common_pid:       2041 } hitcount:        611
-{ lat:        243, common_pid:       2036 } hitcount:         69
-{ lat:        243, common_pid:       2038 } hitcount:        784
-{ lat:        243, common_pid:       2040 } hitcount:        323
-{ lat:        243, common_pid:       2043 } hitcount:         14
-{ lat:        244, common_pid:       2043 } hitcount:         35
-{ lat:        244, common_pid:       2042 } hitcount:        305
-{ lat:        244, common_pid:       2039 } hitcount:          8
-{ lat:        244, common_pid:       2040 } hitcount:       4515
-{ lat:        244, common_pid:       2038 } hitcount:        371
-{ lat:        244, common_pid:       2037 } hitcount:         31
-{ lat:        244, common_pid:       2036 } hitcount:        114
-{ lat:        244, common_pid:       2041 } hitcount:       3396
-{ lat:        245, common_pid:       2036 } hitcount:        700
-{ lat:        245, common_pid:       2041 } hitcount:       2772
-{ lat:        245, common_pid:       2037 } hitcount:        268
-{ lat:        245, common_pid:       2039 } hitcount:        472
-{ lat:        245, common_pid:       2038 } hitcount:       2758
-{ lat:        245, common_pid:       2042 } hitcount:       3833
-{ lat:        245, common_pid:       2040 } hitcount:       3105
-{ lat:        245, common_pid:       2043 } hitcount:        645
-{ lat:        246, common_pid:       2038 } hitcount:       3451
-{ lat:        246, common_pid:       2041 } hitcount:        142
-{ lat:        246, common_pid:       2037 } hitcount:       5101
-{ lat:        246, common_pid:       2040 } hitcount:         68
-{ lat:        246, common_pid:       2043 } hitcount:       5099
-{ lat:        246, common_pid:       2039 } hitcount:       5608
-{ lat:        246, common_pid:       2042 } hitcount:       3723
-{ lat:        246, common_pid:       2036 } hitcount:       4738
-{ lat:        247, common_pid:       2042 } hitcount:        312
-{ lat:        247, common_pid:       2043 } hitcount:       2385
-{ lat:        247, common_pid:       2041 } hitcount:        452
-{ lat:        247, common_pid:       2038 } hitcount:        792
-{ lat:        247, common_pid:       2040 } hitcount:         78
-{ lat:        247, common_pid:       2036 } hitcount:       2375
-{ lat:        247, common_pid:       2039 } hitcount:       1834
-{ lat:        247, common_pid:       2037 } hitcount:       2655
-{ lat:        248, common_pid:       2037 } hitcount:         36
-{ lat:        248, common_pid:       2042 } hitcount:         11
-{ lat:        248, common_pid:       2038 } hitcount:        122
-{ lat:        248, common_pid:       2036 } hitcount:        135
-{ lat:        248, common_pid:       2039 } hitcount:         26
-{ lat:        248, common_pid:       2041 } hitcount:        503
-{ lat:        248, common_pid:       2043 } hitcount:         66
-{ lat:        248, common_pid:       2040 } hitcount:         46
-{ lat:        249, common_pid:       2037 } hitcount:         29
-{ lat:        249, common_pid:       2038 } hitcount:          1
-{ lat:        249, common_pid:       2043 } hitcount:         29
-{ lat:        249, common_pid:       2039 } hitcount:          8
-{ lat:        249, common_pid:       2042 } hitcount:         56
-{ lat:        249, common_pid:       2040 } hitcount:         27
-{ lat:        249, common_pid:       2041 } hitcount:         11
-{ lat:        249, common_pid:       2036 } hitcount:         27
-{ lat:        250, common_pid:       2038 } hitcount:          1
-{ lat:        250, common_pid:       2036 } hitcount:         30
-{ lat:        250, common_pid:       2040 } hitcount:         19
-{ lat:        250, common_pid:       2043 } hitcount:         22
-{ lat:        250, common_pid:       2042 } hitcount:         20
-{ lat:        250, common_pid:       2041 } hitcount:          1
-{ lat:        250, common_pid:       2039 } hitcount:          6
-{ lat:        250, common_pid:       2037 } hitcount:         48
-{ lat:        251, common_pid:       2037 } hitcount:         43
-{ lat:        251, common_pid:       2039 } hitcount:          1
-{ lat:        251, common_pid:       2036 } hitcount:         12
-{ lat:        251, common_pid:       2042 } hitcount:          2
-{ lat:        251, common_pid:       2041 } hitcount:          1
-{ lat:        251, common_pid:       2043 } hitcount:         15
-{ lat:        251, common_pid:       2040 } hitcount:          3
-{ lat:        252, common_pid:       2040 } hitcount:          1
-{ lat:        252, common_pid:       2036 } hitcount:         12
-{ lat:        252, common_pid:       2037 } hitcount:         21
-{ lat:        252, common_pid:       2043 } hitcount:         14
-{ lat:        253, common_pid:       2037 } hitcount:         21
-{ lat:        253, common_pid:       2039 } hitcount:          2
-{ lat:        253, common_pid:       2036 } hitcount:          9
-{ lat:        253, common_pid:       2043 } hitcount:          6
-{ lat:        253, common_pid:       2040 } hitcount:          1
-{ lat:        254, common_pid:       2036 } hitcount:          8
-{ lat:        254, common_pid:       2043 } hitcount:          3
-{ lat:        254, common_pid:       2041 } hitcount:          1
-{ lat:        254, common_pid:       2042 } hitcount:          1
-{ lat:        254, common_pid:       2039 } hitcount:          1
-{ lat:        254, common_pid:       2037 } hitcount:         12
-{ lat:        255, common_pid:       2043 } hitcount:          1
-{ lat:        255, common_pid:       2037 } hitcount:          2
-{ lat:        255, common_pid:       2036 } hitcount:          2
-{ lat:        255, common_pid:       2039 } hitcount:          8
-{ lat:        256, common_pid:       2043 } hitcount:          1
-{ lat:        256, common_pid:       2036 } hitcount:          4
-{ lat:        256, common_pid:       2039 } hitcount:          6
-{ lat:        257, common_pid:       2039 } hitcount:          5
-{ lat:        257, common_pid:       2036 } hitcount:          4
-{ lat:        258, common_pid:       2039 } hitcount:          5
-{ lat:        258, common_pid:       2036 } hitcount:          2
-{ lat:        259, common_pid:       2036 } hitcount:          7
-{ lat:        259, common_pid:       2039 } hitcount:          7
-{ lat:        260, common_pid:       2036 } hitcount:          8
-{ lat:        260, common_pid:       2039 } hitcount:          6
-{ lat:        261, common_pid:       2036 } hitcount:          5
-{ lat:        261, common_pid:       2039 } hitcount:          7
-{ lat:        262, common_pid:       2039 } hitcount:          5
-{ lat:        262, common_pid:       2036 } hitcount:          5
-{ lat:        263, common_pid:       2039 } hitcount:          7
-{ lat:        263, common_pid:       2036 } hitcount:          7
-{ lat:        264, common_pid:       2039 } hitcount:          9
-{ lat:        264, common_pid:       2036 } hitcount:          9
-{ lat:        265, common_pid:       2036 } hitcount:          5
-{ lat:        265, common_pid:       2039 } hitcount:          1
-{ lat:        266, common_pid:       2036 } hitcount:          1
-{ lat:        266, common_pid:       2039 } hitcount:          3
-{ lat:        267, common_pid:       2036 } hitcount:          1
-{ lat:        267, common_pid:       2039 } hitcount:          3
-{ lat:        268, common_pid:       2036 } hitcount:          1
-{ lat:        268, common_pid:       2039 } hitcount:          6
-{ lat:        269, common_pid:       2036 } hitcount:          1
-{ lat:        269, common_pid:       2043 } hitcount:          1
-{ lat:        269, common_pid:       2039 } hitcount:          2
-{ lat:        270, common_pid:       2040 } hitcount:          1
-{ lat:        270, common_pid:       2039 } hitcount:          6
-{ lat:        271, common_pid:       2041 } hitcount:          1
-{ lat:        271, common_pid:       2039 } hitcount:          5
-{ lat:        272, common_pid:       2039 } hitcount:         10
-{ lat:        273, common_pid:       2039 } hitcount:          8
-{ lat:        274, common_pid:       2039 } hitcount:          2
-{ lat:        275, common_pid:       2039 } hitcount:          1
-{ lat:        276, common_pid:       2039 } hitcount:          2
-{ lat:        276, common_pid:       2037 } hitcount:          1
-{ lat:        276, common_pid:       2038 } hitcount:          1
-{ lat:        277, common_pid:       2039 } hitcount:          1
-{ lat:        277, common_pid:       2042 } hitcount:          1
-{ lat:        278, common_pid:       2039 } hitcount:          1
-{ lat:        279, common_pid:       2039 } hitcount:          4
-{ lat:        279, common_pid:       2043 } hitcount:          1
-{ lat:        280, common_pid:       2039 } hitcount:          3
-{ lat:        283, common_pid:       2036 } hitcount:          2
-{ lat:        284, common_pid:       2039 } hitcount:          1
-{ lat:        284, common_pid:       2043 } hitcount:          1
-{ lat:        288, common_pid:       2039 } hitcount:          1
-{ lat:        289, common_pid:       2039 } hitcount:          1
-{ lat:        300, common_pid:       2039 } hitcount:          1
-{ lat:        384, common_pid:       2039 } hitcount:          1
+ { lat:        107, common_pid:       2039 } hitcount:          1
+ { lat:        122, common_pid:       2041 } hitcount:          1
+ { lat:        166, common_pid:       2039 } hitcount:          1
+ { lat:        174, common_pid:       2039 } hitcount:          1
+ { lat:        194, common_pid:       2041 } hitcount:          1
+ { lat:        196, common_pid:       2036 } hitcount:          1
+ { lat:        197, common_pid:       2038 } hitcount:          1
+ { lat:        198, common_pid:       2039 } hitcount:          1
+ { lat:        199, common_pid:       2039 } hitcount:          1
+ { lat:        200, common_pid:       2041 } hitcount:          1
+ { lat:        201, common_pid:       2039 } hitcount:          2
+ { lat:        202, common_pid:       2038 } hitcount:          1
+ { lat:        202, common_pid:       2043 } hitcount:          1
+ { lat:        203, common_pid:       2039 } hitcount:          1
+ { lat:        203, common_pid:       2036 } hitcount:          1
+ { lat:        203, common_pid:       2041 } hitcount:          1
+ { lat:        206, common_pid:       2038 } hitcount:          2
+ { lat:        207, common_pid:       2039 } hitcount:          1
+ { lat:        207, common_pid:       2036 } hitcount:          1
+ { lat:        208, common_pid:       2040 } hitcount:          1
+ { lat:        209, common_pid:       2043 } hitcount:          1
+ { lat:        210, common_pid:       2039 } hitcount:          1
+ { lat:        211, common_pid:       2039 } hitcount:          4
+ { lat:        212, common_pid:       2043 } hitcount:          1
+ { lat:        212, common_pid:       2039 } hitcount:          2
+ { lat:        213, common_pid:       2039 } hitcount:          1
+ { lat:        214, common_pid:       2038 } hitcount:          1
+ { lat:        214, common_pid:       2039 } hitcount:          2
+ { lat:        214, common_pid:       2042 } hitcount:          1
+ { lat:        215, common_pid:       2039 } hitcount:          1
+ { lat:        217, common_pid:       2036 } hitcount:          1
+ { lat:        217, common_pid:       2040 } hitcount:          1
+ { lat:        217, common_pid:       2039 } hitcount:          1
+ { lat:        218, common_pid:       2039 } hitcount:          6
+ { lat:        219, common_pid:       2039 } hitcount:          9
+ { lat:        220, common_pid:       2039 } hitcount:         11
+ { lat:        221, common_pid:       2039 } hitcount:          5
+ { lat:        221, common_pid:       2042 } hitcount:          1
+ { lat:        222, common_pid:       2039 } hitcount:          7
+ { lat:        223, common_pid:       2036 } hitcount:          1
+ { lat:        223, common_pid:       2039 } hitcount:          3
+ { lat:        224, common_pid:       2039 } hitcount:          4
+ { lat:        224, common_pid:       2037 } hitcount:          1
+ { lat:        224, common_pid:       2036 } hitcount:          2
+ { lat:        225, common_pid:       2039 } hitcount:          5
+ { lat:        225, common_pid:       2042 } hitcount:          1
+ { lat:        226, common_pid:       2039 } hitcount:          7
+ { lat:        226, common_pid:       2036 } hitcount:          4
+ { lat:        227, common_pid:       2039 } hitcount:          6
+ { lat:        227, common_pid:       2036 } hitcount:         12
+ { lat:        227, common_pid:       2043 } hitcount:          1
+ { lat:        228, common_pid:       2039 } hitcount:          7
+ { lat:        228, common_pid:       2036 } hitcount:         14
+ { lat:        229, common_pid:       2039 } hitcount:          9
+ { lat:        229, common_pid:       2036 } hitcount:          8
+ { lat:        229, common_pid:       2038 } hitcount:          1
+ { lat:        230, common_pid:       2039 } hitcount:         11
+ { lat:        230, common_pid:       2036 } hitcount:          6
+ { lat:        230, common_pid:       2043 } hitcount:          1
+ { lat:        230, common_pid:       2042 } hitcount:          2
+ { lat:        231, common_pid:       2041 } hitcount:          1
+ { lat:        231, common_pid:       2036 } hitcount:          6
+ { lat:        231, common_pid:       2043 } hitcount:          1
+ { lat:        231, common_pid:       2039 } hitcount:          8
+ { lat:        232, common_pid:       2037 } hitcount:          1
+ { lat:        232, common_pid:       2039 } hitcount:          6
+ { lat:        232, common_pid:       2040 } hitcount:          2
+ { lat:        232, common_pid:       2036 } hitcount:          5
+ { lat:        232, common_pid:       2043 } hitcount:          1
+ { lat:        233, common_pid:       2036 } hitcount:          5
+ { lat:        233, common_pid:       2039 } hitcount:         11
+ { lat:        234, common_pid:       2039 } hitcount:          4
+ { lat:        234, common_pid:       2038 } hitcount:          2
+ { lat:        234, common_pid:       2043 } hitcount:          2
+ { lat:        234, common_pid:       2036 } hitcount:         11
+ { lat:        234, common_pid:       2040 } hitcount:          1
+ { lat:        235, common_pid:       2037 } hitcount:          2
+ { lat:        235, common_pid:       2036 } hitcount:          8
+ { lat:        235, common_pid:       2043 } hitcount:          2
+ { lat:        235, common_pid:       2039 } hitcount:          5
+ { lat:        235, common_pid:       2042 } hitcount:          2
+ { lat:        235, common_pid:       2040 } hitcount:          4
+ { lat:        235, common_pid:       2041 } hitcount:          1
+ { lat:        236, common_pid:       2036 } hitcount:          7
+ { lat:        236, common_pid:       2037 } hitcount:          1
+ { lat:        236, common_pid:       2041 } hitcount:          5
+ { lat:        236, common_pid:       2039 } hitcount:          3
+ { lat:        236, common_pid:       2043 } hitcount:          9
+ { lat:        236, common_pid:       2040 } hitcount:          7
+ { lat:        237, common_pid:       2037 } hitcount:          1
+ { lat:        237, common_pid:       2040 } hitcount:          1
+ { lat:        237, common_pid:       2036 } hitcount:          9
+ { lat:        237, common_pid:       2039 } hitcount:          3
+ { lat:        237, common_pid:       2043 } hitcount:          8
+ { lat:        237, common_pid:       2042 } hitcount:          2
+ { lat:        237, common_pid:       2041 } hitcount:          2
+ { lat:        238, common_pid:       2043 } hitcount:         10
+ { lat:        238, common_pid:       2040 } hitcount:          1
+ { lat:        238, common_pid:       2037 } hitcount:          9
+ { lat:        238, common_pid:       2038 } hitcount:          1
+ { lat:        238, common_pid:       2039 } hitcount:          1
+ { lat:        238, common_pid:       2042 } hitcount:          3
+ { lat:        238, common_pid:       2036 } hitcount:          7
+ { lat:        239, common_pid:       2041 } hitcount:          1
+ { lat:        239, common_pid:       2043 } hitcount:         11
+ { lat:        239, common_pid:       2037 } hitcount:         11
+ { lat:        239, common_pid:       2038 } hitcount:          6
+ { lat:        239, common_pid:       2036 } hitcount:          7
+ { lat:        239, common_pid:       2040 } hitcount:          1
+ { lat:        239, common_pid:       2042 } hitcount:          9
+ { lat:        240, common_pid:       2037 } hitcount:         29
+ { lat:        240, common_pid:       2043 } hitcount:         15
+ { lat:        240, common_pid:       2040 } hitcount:         44
+ { lat:        240, common_pid:       2039 } hitcount:          1
+ { lat:        240, common_pid:       2041 } hitcount:          2
+ { lat:        240, common_pid:       2038 } hitcount:          1
+ { lat:        240, common_pid:       2036 } hitcount:         10
+ { lat:        240, common_pid:       2042 } hitcount:         13
+ { lat:        241, common_pid:       2036 } hitcount:         21
+ { lat:        241, common_pid:       2041 } hitcount:         36
+ { lat:        241, common_pid:       2037 } hitcount:         34
+ { lat:        241, common_pid:       2042 } hitcount:         14
+ { lat:        241, common_pid:       2040 } hitcount:         94
+ { lat:        241, common_pid:       2039 } hitcount:         12
+ { lat:        241, common_pid:       2038 } hitcount:          2
+ { lat:        241, common_pid:       2043 } hitcount:         28
+ { lat:        242, common_pid:       2040 } hitcount:        109
+ { lat:        242, common_pid:       2041 } hitcount:        506
+ { lat:        242, common_pid:       2039 } hitcount:        155
+ { lat:        242, common_pid:       2042 } hitcount:         21
+ { lat:        242, common_pid:       2037 } hitcount:         52
+ { lat:        242, common_pid:       2043 } hitcount:         21
+ { lat:        242, common_pid:       2036 } hitcount:         16
+ { lat:        242, common_pid:       2038 } hitcount:        156
+ { lat:        243, common_pid:       2037 } hitcount:         46
+ { lat:        243, common_pid:       2039 } hitcount:         40
+ { lat:        243, common_pid:       2042 } hitcount:        119
+ { lat:        243, common_pid:       2041 } hitcount:        611
+ { lat:        243, common_pid:       2036 } hitcount:         69
+ { lat:        243, common_pid:       2038 } hitcount:        784
+ { lat:        243, common_pid:       2040 } hitcount:        323
+ { lat:        243, common_pid:       2043 } hitcount:         14
+ { lat:        244, common_pid:       2043 } hitcount:         35
+ { lat:        244, common_pid:       2042 } hitcount:        305
+ { lat:        244, common_pid:       2039 } hitcount:          8
+ { lat:        244, common_pid:       2040 } hitcount:       4515
+ { lat:        244, common_pid:       2038 } hitcount:        371
+ { lat:        244, common_pid:       2037 } hitcount:         31
+ { lat:        244, common_pid:       2036 } hitcount:        114
+ { lat:        244, common_pid:       2041 } hitcount:       3396
+ { lat:        245, common_pid:       2036 } hitcount:        700
+ { lat:        245, common_pid:       2041 } hitcount:       2772
+ { lat:        245, common_pid:       2037 } hitcount:        268
+ { lat:        245, common_pid:       2039 } hitcount:        472
+ { lat:        245, common_pid:       2038 } hitcount:       2758
+ { lat:        245, common_pid:       2042 } hitcount:       3833
+ { lat:        245, common_pid:       2040 } hitcount:       3105
+ { lat:        245, common_pid:       2043 } hitcount:        645
+ { lat:        246, common_pid:       2038 } hitcount:       3451
+ { lat:        246, common_pid:       2041 } hitcount:        142
+ { lat:        246, common_pid:       2037 } hitcount:       5101
+ { lat:        246, common_pid:       2040 } hitcount:         68
+ { lat:        246, common_pid:       2043 } hitcount:       5099
+ { lat:        246, common_pid:       2039 } hitcount:       5608
+ { lat:        246, common_pid:       2042 } hitcount:       3723
+ { lat:        246, common_pid:       2036 } hitcount:       4738
+ { lat:        247, common_pid:       2042 } hitcount:        312
+ { lat:        247, common_pid:       2043 } hitcount:       2385
+ { lat:        247, common_pid:       2041 } hitcount:        452
+ { lat:        247, common_pid:       2038 } hitcount:        792
+ { lat:        247, common_pid:       2040 } hitcount:         78
+ { lat:        247, common_pid:       2036 } hitcount:       2375
+ { lat:        247, common_pid:       2039 } hitcount:       1834
+ { lat:        247, common_pid:       2037 } hitcount:       2655
+ { lat:        248, common_pid:       2037 } hitcount:         36
+ { lat:        248, common_pid:       2042 } hitcount:         11
+ { lat:        248, common_pid:       2038 } hitcount:        122
+ { lat:        248, common_pid:       2036 } hitcount:        135
+ { lat:        248, common_pid:       2039 } hitcount:         26
+ { lat:        248, common_pid:       2041 } hitcount:        503
+ { lat:        248, common_pid:       2043 } hitcount:         66
+ { lat:        248, common_pid:       2040 } hitcount:         46
+ { lat:        249, common_pid:       2037 } hitcount:         29
+ { lat:        249, common_pid:       2038 } hitcount:          1
+ { lat:        249, common_pid:       2043 } hitcount:         29
+ { lat:        249, common_pid:       2039 } hitcount:          8
+ { lat:        249, common_pid:       2042 } hitcount:         56
+ { lat:        249, common_pid:       2040 } hitcount:         27
+ { lat:        249, common_pid:       2041 } hitcount:         11
+ { lat:        249, common_pid:       2036 } hitcount:         27
+ { lat:        250, common_pid:       2038 } hitcount:          1
+ { lat:        250, common_pid:       2036 } hitcount:         30
+ { lat:        250, common_pid:       2040 } hitcount:         19
+ { lat:        250, common_pid:       2043 } hitcount:         22
+ { lat:        250, common_pid:       2042 } hitcount:         20
+ { lat:        250, common_pid:       2041 } hitcount:          1
+ { lat:        250, common_pid:       2039 } hitcount:          6
+ { lat:        250, common_pid:       2037 } hitcount:         48
+ { lat:        251, common_pid:       2037 } hitcount:         43
+ { lat:        251, common_pid:       2039 } hitcount:          1
+ { lat:        251, common_pid:       2036 } hitcount:         12
+ { lat:        251, common_pid:       2042 } hitcount:          2
+ { lat:        251, common_pid:       2041 } hitcount:          1
+ { lat:        251, common_pid:       2043 } hitcount:         15
+ { lat:        251, common_pid:       2040 } hitcount:          3
+ { lat:        252, common_pid:       2040 } hitcount:          1
+ { lat:        252, common_pid:       2036 } hitcount:         12
+ { lat:        252, common_pid:       2037 } hitcount:         21
+ { lat:        252, common_pid:       2043 } hitcount:         14
+ { lat:        253, common_pid:       2037 } hitcount:         21
+ { lat:        253, common_pid:       2039 } hitcount:          2
+ { lat:        253, common_pid:       2036 } hitcount:          9
+ { lat:        253, common_pid:       2043 } hitcount:          6
+ { lat:        253, common_pid:       2040 } hitcount:          1
+ { lat:        254, common_pid:       2036 } hitcount:          8
+ { lat:        254, common_pid:       2043 } hitcount:          3
+ { lat:        254, common_pid:       2041 } hitcount:          1
+ { lat:        254, common_pid:       2042 } hitcount:          1
+ { lat:        254, common_pid:       2039 } hitcount:          1
+ { lat:        254, common_pid:       2037 } hitcount:         12
+ { lat:        255, common_pid:       2043 } hitcount:          1
+ { lat:        255, common_pid:       2037 } hitcount:          2
+ { lat:        255, common_pid:       2036 } hitcount:          2
+ { lat:        255, common_pid:       2039 } hitcount:          8
+ { lat:        256, common_pid:       2043 } hitcount:          1
+ { lat:        256, common_pid:       2036 } hitcount:          4
+ { lat:        256, common_pid:       2039 } hitcount:          6
+ { lat:        257, common_pid:       2039 } hitcount:          5
+ { lat:        257, common_pid:       2036 } hitcount:          4
+ { lat:        258, common_pid:       2039 } hitcount:          5
+ { lat:        258, common_pid:       2036 } hitcount:          2
+ { lat:        259, common_pid:       2036 } hitcount:          7
+ { lat:        259, common_pid:       2039 } hitcount:          7
+ { lat:        260, common_pid:       2036 } hitcount:          8
+ { lat:        260, common_pid:       2039 } hitcount:          6
+ { lat:        261, common_pid:       2036 } hitcount:          5
+ { lat:        261, common_pid:       2039 } hitcount:          7
+ { lat:        262, common_pid:       2039 } hitcount:          5
+ { lat:        262, common_pid:       2036 } hitcount:          5
+ { lat:        263, common_pid:       2039 } hitcount:          7
+ { lat:        263, common_pid:       2036 } hitcount:          7
+ { lat:        264, common_pid:       2039 } hitcount:          9
+ { lat:        264, common_pid:       2036 } hitcount:          9
+ { lat:        265, common_pid:       2036 } hitcount:          5
+ { lat:        265, common_pid:       2039 } hitcount:          1
+ { lat:        266, common_pid:       2036 } hitcount:          1
+ { lat:        266, common_pid:       2039 } hitcount:          3
+ { lat:        267, common_pid:       2036 } hitcount:          1
+ { lat:        267, common_pid:       2039 } hitcount:          3
+ { lat:        268, common_pid:       2036 } hitcount:          1
+ { lat:        268, common_pid:       2039 } hitcount:          6
+ { lat:        269, common_pid:       2036 } hitcount:          1
+ { lat:        269, common_pid:       2043 } hitcount:          1
+ { lat:        269, common_pid:       2039 } hitcount:          2
+ { lat:        270, common_pid:       2040 } hitcount:          1
+ { lat:        270, common_pid:       2039 } hitcount:          6
+ { lat:        271, common_pid:       2041 } hitcount:          1
+ { lat:        271, common_pid:       2039 } hitcount:          5
+ { lat:        272, common_pid:       2039 } hitcount:         10
+ { lat:        273, common_pid:       2039 } hitcount:          8
+ { lat:        274, common_pid:       2039 } hitcount:          2
+ { lat:        275, common_pid:       2039 } hitcount:          1
+ { lat:        276, common_pid:       2039 } hitcount:          2
+ { lat:        276, common_pid:       2037 } hitcount:          1
+ { lat:        276, common_pid:       2038 } hitcount:          1
+ { lat:        277, common_pid:       2039 } hitcount:          1
+ { lat:        277, common_pid:       2042 } hitcount:          1
+ { lat:        278, common_pid:       2039 } hitcount:          1
+ { lat:        279, common_pid:       2039 } hitcount:          4
+ { lat:        279, common_pid:       2043 } hitcount:          1
+ { lat:        280, common_pid:       2039 } hitcount:          3
+ { lat:        283, common_pid:       2036 } hitcount:          2
+ { lat:        284, common_pid:       2039 } hitcount:          1
+ { lat:        284, common_pid:       2043 } hitcount:          1
+ { lat:        288, common_pid:       2039 } hitcount:          1
+ { lat:        289, common_pid:       2039 } hitcount:          1
+ { lat:        300, common_pid:       2039 } hitcount:          1
+ { lat:        384, common_pid:       2039 } hitcount:          1
 
-Totals:
-    Hits: 67625
-    Entries: 278
-    Dropped: 0
+ Totals:
+     Hits: 67625
+     Entries: 278
+     Dropped: 0
 
 Note, the writes are around the sleep, so ideally they will all be of 250
 microseconds. If you are wondering how there are several that are under
@@ -2350,7 +2356,7 @@ will be at 200 microseconds.
 
 But this could easily be done in userspace. To make this even more
 interesting, we can mix the histogram between events that happened in the
-kernel with trace_marker.
+kernel with trace_marker::
 
  # cd /sys/kernel/tracing
  # echo 'latency u64 lat' > synthetic_events
@@ -2362,177 +2368,177 @@ The difference this time is that instead of using the trace_marker to start
 the latency, the sched_waking event is used, matching the common_pid for the
 trace_marker write with the pid that is being woken by sched_waking.
 
-After running cyclictest again with the same parameters, we now have:
+After running cyclictest again with the same parameters, we now have::
 
  # cat events/synthetic/latency/hist
-# event histogram
-#
-# trigger info: hist:keys=lat,common_pid:vals=hitcount:sort=lat:size=2048 [active]
-#
+ # event histogram
+ #
+ # trigger info: hist:keys=lat,common_pid:vals=hitcount:sort=lat:size=2048 [active]
+ #
 
-{ lat:          7, common_pid:       2302 } hitcount:        640
-{ lat:          7, common_pid:       2299 } hitcount:         42
-{ lat:          7, common_pid:       2303 } hitcount:         18
-{ lat:          7, common_pid:       2305 } hitcount:        166
-{ lat:          7, common_pid:       2306 } hitcount:          1
-{ lat:          7, common_pid:       2301 } hitcount:         91
-{ lat:          7, common_pid:       2300 } hitcount:         17
-{ lat:          8, common_pid:       2303 } hitcount:       8296
-{ lat:          8, common_pid:       2304 } hitcount:       6864
-{ lat:          8, common_pid:       2305 } hitcount:       9464
-{ lat:          8, common_pid:       2301 } hitcount:       9213
-{ lat:          8, common_pid:       2306 } hitcount:       6246
-{ lat:          8, common_pid:       2302 } hitcount:       8797
-{ lat:          8, common_pid:       2299 } hitcount:       8771
-{ lat:          8, common_pid:       2300 } hitcount:       8119
-{ lat:          9, common_pid:       2305 } hitcount:       1519
-{ lat:          9, common_pid:       2299 } hitcount:       2346
-{ lat:          9, common_pid:       2303 } hitcount:       2841
-{ lat:          9, common_pid:       2301 } hitcount:       1846
-{ lat:          9, common_pid:       2304 } hitcount:       3861
-{ lat:          9, common_pid:       2302 } hitcount:       1210
-{ lat:          9, common_pid:       2300 } hitcount:       2762
-{ lat:          9, common_pid:       2306 } hitcount:       4247
-{ lat:         10, common_pid:       2299 } hitcount:         16
-{ lat:         10, common_pid:       2306 } hitcount:        333
-{ lat:         10, common_pid:       2303 } hitcount:         16
-{ lat:         10, common_pid:       2304 } hitcount:        168
-{ lat:         10, common_pid:       2302 } hitcount:        240
-{ lat:         10, common_pid:       2301 } hitcount:         28
-{ lat:         10, common_pid:       2300 } hitcount:         95
-{ lat:         10, common_pid:       2305 } hitcount:         18
-{ lat:         11, common_pid:       2303 } hitcount:          5
-{ lat:         11, common_pid:       2305 } hitcount:          8
-{ lat:         11, common_pid:       2306 } hitcount:        221
-{ lat:         11, common_pid:       2302 } hitcount:         76
-{ lat:         11, common_pid:       2304 } hitcount:         26
-{ lat:         11, common_pid:       2300 } hitcount:        125
-{ lat:         11, common_pid:       2299 } hitcount:          2
-{ lat:         12, common_pid:       2305 } hitcount:          3
-{ lat:         12, common_pid:       2300 } hitcount:          6
-{ lat:         12, common_pid:       2306 } hitcount:         90
-{ lat:         12, common_pid:       2302 } hitcount:          4
-{ lat:         12, common_pid:       2303 } hitcount:          1
-{ lat:         12, common_pid:       2304 } hitcount:        122
-{ lat:         13, common_pid:       2300 } hitcount:         12
-{ lat:         13, common_pid:       2301 } hitcount:          1
-{ lat:         13, common_pid:       2306 } hitcount:         32
-{ lat:         13, common_pid:       2302 } hitcount:          5
-{ lat:         13, common_pid:       2305 } hitcount:          1
-{ lat:         13, common_pid:       2303 } hitcount:          1
-{ lat:         13, common_pid:       2304 } hitcount:         61
-{ lat:         14, common_pid:       2303 } hitcount:          4
-{ lat:         14, common_pid:       2306 } hitcount:          5
-{ lat:         14, common_pid:       2305 } hitcount:          4
-{ lat:         14, common_pid:       2304 } hitcount:         62
-{ lat:         14, common_pid:       2302 } hitcount:         19
-{ lat:         14, common_pid:       2300 } hitcount:         33
-{ lat:         14, common_pid:       2299 } hitcount:          1
-{ lat:         14, common_pid:       2301 } hitcount:          4
-{ lat:         15, common_pid:       2305 } hitcount:          1
-{ lat:         15, common_pid:       2302 } hitcount:         25
-{ lat:         15, common_pid:       2300 } hitcount:         11
-{ lat:         15, common_pid:       2299 } hitcount:          5
-{ lat:         15, common_pid:       2301 } hitcount:          1
-{ lat:         15, common_pid:       2304 } hitcount:          8
-{ lat:         15, common_pid:       2303 } hitcount:          1
-{ lat:         15, common_pid:       2306 } hitcount:          6
-{ lat:         16, common_pid:       2302 } hitcount:         31
-{ lat:         16, common_pid:       2306 } hitcount:          3
-{ lat:         16, common_pid:       2300 } hitcount:          5
-{ lat:         17, common_pid:       2302 } hitcount:          6
-{ lat:         17, common_pid:       2303 } hitcount:          1
-{ lat:         18, common_pid:       2304 } hitcount:          1
-{ lat:         18, common_pid:       2302 } hitcount:          8
-{ lat:         18, common_pid:       2299 } hitcount:          1
-{ lat:         18, common_pid:       2301 } hitcount:          1
-{ lat:         19, common_pid:       2303 } hitcount:          4
-{ lat:         19, common_pid:       2304 } hitcount:          5
-{ lat:         19, common_pid:       2302 } hitcount:          4
-{ lat:         19, common_pid:       2299 } hitcount:          3
-{ lat:         19, common_pid:       2306 } hitcount:          1
-{ lat:         19, common_pid:       2300 } hitcount:          4
-{ lat:         19, common_pid:       2305 } hitcount:          5
-{ lat:         20, common_pid:       2299 } hitcount:          2
-{ lat:         20, common_pid:       2302 } hitcount:          3
-{ lat:         20, common_pid:       2305 } hitcount:          1
-{ lat:         20, common_pid:       2300 } hitcount:          2
-{ lat:         20, common_pid:       2301 } hitcount:          2
-{ lat:         20, common_pid:       2303 } hitcount:          3
-{ lat:         21, common_pid:       2305 } hitcount:          1
-{ lat:         21, common_pid:       2299 } hitcount:          5
-{ lat:         21, common_pid:       2303 } hitcount:          4
-{ lat:         21, common_pid:       2302 } hitcount:          7
-{ lat:         21, common_pid:       2300 } hitcount:          1
-{ lat:         21, common_pid:       2301 } hitcount:          5
-{ lat:         21, common_pid:       2304 } hitcount:          2
-{ lat:         22, common_pid:       2302 } hitcount:          5
-{ lat:         22, common_pid:       2303 } hitcount:          1
-{ lat:         22, common_pid:       2306 } hitcount:          3
-{ lat:         22, common_pid:       2301 } hitcount:          2
-{ lat:         22, common_pid:       2300 } hitcount:          1
-{ lat:         22, common_pid:       2299 } hitcount:          1
-{ lat:         22, common_pid:       2305 } hitcount:          1
-{ lat:         22, common_pid:       2304 } hitcount:          1
-{ lat:         23, common_pid:       2299 } hitcount:          1
-{ lat:         23, common_pid:       2306 } hitcount:          2
-{ lat:         23, common_pid:       2302 } hitcount:          6
-{ lat:         24, common_pid:       2302 } hitcount:          3
-{ lat:         24, common_pid:       2300 } hitcount:          1
-{ lat:         24, common_pid:       2306 } hitcount:          2
-{ lat:         24, common_pid:       2305 } hitcount:          1
-{ lat:         24, common_pid:       2299 } hitcount:          1
-{ lat:         25, common_pid:       2300 } hitcount:          1
-{ lat:         25, common_pid:       2302 } hitcount:          4
-{ lat:         26, common_pid:       2302 } hitcount:          2
-{ lat:         27, common_pid:       2305 } hitcount:          1
-{ lat:         27, common_pid:       2300 } hitcount:          1
-{ lat:         27, common_pid:       2302 } hitcount:          3
-{ lat:         28, common_pid:       2306 } hitcount:          1
-{ lat:         28, common_pid:       2302 } hitcount:          4
-{ lat:         29, common_pid:       2302 } hitcount:          1
-{ lat:         29, common_pid:       2300 } hitcount:          2
-{ lat:         29, common_pid:       2306 } hitcount:          1
-{ lat:         29, common_pid:       2304 } hitcount:          1
-{ lat:         30, common_pid:       2302 } hitcount:          4
-{ lat:         31, common_pid:       2302 } hitcount:          6
-{ lat:         32, common_pid:       2302 } hitcount:          1
-{ lat:         33, common_pid:       2299 } hitcount:          1
-{ lat:         33, common_pid:       2302 } hitcount:          3
-{ lat:         34, common_pid:       2302 } hitcount:          2
-{ lat:         35, common_pid:       2302 } hitcount:          1
-{ lat:         35, common_pid:       2304 } hitcount:          1
-{ lat:         36, common_pid:       2302 } hitcount:          4
-{ lat:         37, common_pid:       2302 } hitcount:          6
-{ lat:         38, common_pid:       2302 } hitcount:          2
-{ lat:         39, common_pid:       2302 } hitcount:          2
-{ lat:         39, common_pid:       2304 } hitcount:          1
-{ lat:         40, common_pid:       2304 } hitcount:          2
-{ lat:         40, common_pid:       2302 } hitcount:          5
-{ lat:         41, common_pid:       2304 } hitcount:          1
-{ lat:         41, common_pid:       2302 } hitcount:          8
-{ lat:         42, common_pid:       2302 } hitcount:          6
-{ lat:         42, common_pid:       2304 } hitcount:          1
-{ lat:         43, common_pid:       2302 } hitcount:          3
-{ lat:         43, common_pid:       2304 } hitcount:          4
-{ lat:         44, common_pid:       2302 } hitcount:          6
-{ lat:         45, common_pid:       2302 } hitcount:          5
-{ lat:         46, common_pid:       2302 } hitcount:          5
-{ lat:         47, common_pid:       2302 } hitcount:          7
-{ lat:         48, common_pid:       2301 } hitcount:          1
-{ lat:         48, common_pid:       2302 } hitcount:          9
-{ lat:         49, common_pid:       2302 } hitcount:          3
-{ lat:         50, common_pid:       2302 } hitcount:          1
-{ lat:         50, common_pid:       2301 } hitcount:          1
-{ lat:         51, common_pid:       2302 } hitcount:          2
-{ lat:         51, common_pid:       2301 } hitcount:          1
-{ lat:         61, common_pid:       2302 } hitcount:          1
-{ lat:        110, common_pid:       2302 } hitcount:          1
+ { lat:          7, common_pid:       2302 } hitcount:        640
+ { lat:          7, common_pid:       2299 } hitcount:         42
+ { lat:          7, common_pid:       2303 } hitcount:         18
+ { lat:          7, common_pid:       2305 } hitcount:        166
+ { lat:          7, common_pid:       2306 } hitcount:          1
+ { lat:          7, common_pid:       2301 } hitcount:         91
+ { lat:          7, common_pid:       2300 } hitcount:         17
+ { lat:          8, common_pid:       2303 } hitcount:       8296
+ { lat:          8, common_pid:       2304 } hitcount:       6864
+ { lat:          8, common_pid:       2305 } hitcount:       9464
+ { lat:          8, common_pid:       2301 } hitcount:       9213
+ { lat:          8, common_pid:       2306 } hitcount:       6246
+ { lat:          8, common_pid:       2302 } hitcount:       8797
+ { lat:          8, common_pid:       2299 } hitcount:       8771
+ { lat:          8, common_pid:       2300 } hitcount:       8119
+ { lat:          9, common_pid:       2305 } hitcount:       1519
+ { lat:          9, common_pid:       2299 } hitcount:       2346
+ { lat:          9, common_pid:       2303 } hitcount:       2841
+ { lat:          9, common_pid:       2301 } hitcount:       1846
+ { lat:          9, common_pid:       2304 } hitcount:       3861
+ { lat:          9, common_pid:       2302 } hitcount:       1210
+ { lat:          9, common_pid:       2300 } hitcount:       2762
+ { lat:          9, common_pid:       2306 } hitcount:       4247
+ { lat:         10, common_pid:       2299 } hitcount:         16
+ { lat:         10, common_pid:       2306 } hitcount:        333
+ { lat:         10, common_pid:       2303 } hitcount:         16
+ { lat:         10, common_pid:       2304 } hitcount:        168
+ { lat:         10, common_pid:       2302 } hitcount:        240
+ { lat:         10, common_pid:       2301 } hitcount:         28
+ { lat:         10, common_pid:       2300 } hitcount:         95
+ { lat:         10, common_pid:       2305 } hitcount:         18
+ { lat:         11, common_pid:       2303 } hitcount:          5
+ { lat:         11, common_pid:       2305 } hitcount:          8
+ { lat:         11, common_pid:       2306 } hitcount:        221
+ { lat:         11, common_pid:       2302 } hitcount:         76
+ { lat:         11, common_pid:       2304 } hitcount:         26
+ { lat:         11, common_pid:       2300 } hitcount:        125
+ { lat:         11, common_pid:       2299 } hitcount:          2
+ { lat:         12, common_pid:       2305 } hitcount:          3
+ { lat:         12, common_pid:       2300 } hitcount:          6
+ { lat:         12, common_pid:       2306 } hitcount:         90
+ { lat:         12, common_pid:       2302 } hitcount:          4
+ { lat:         12, common_pid:       2303 } hitcount:          1
+ { lat:         12, common_pid:       2304 } hitcount:        122
+ { lat:         13, common_pid:       2300 } hitcount:         12
+ { lat:         13, common_pid:       2301 } hitcount:          1
+ { lat:         13, common_pid:       2306 } hitcount:         32
+ { lat:         13, common_pid:       2302 } hitcount:          5
+ { lat:         13, common_pid:       2305 } hitcount:          1
+ { lat:         13, common_pid:       2303 } hitcount:          1
+ { lat:         13, common_pid:       2304 } hitcount:         61
+ { lat:         14, common_pid:       2303 } hitcount:          4
+ { lat:         14, common_pid:       2306 } hitcount:          5
+ { lat:         14, common_pid:       2305 } hitcount:          4
+ { lat:         14, common_pid:       2304 } hitcount:         62
+ { lat:         14, common_pid:       2302 } hitcount:         19
+ { lat:         14, common_pid:       2300 } hitcount:         33
+ { lat:         14, common_pid:       2299 } hitcount:          1
+ { lat:         14, common_pid:       2301 } hitcount:          4
+ { lat:         15, common_pid:       2305 } hitcount:          1
+ { lat:         15, common_pid:       2302 } hitcount:         25
+ { lat:         15, common_pid:       2300 } hitcount:         11
+ { lat:         15, common_pid:       2299 } hitcount:          5
+ { lat:         15, common_pid:       2301 } hitcount:          1
+ { lat:         15, common_pid:       2304 } hitcount:          8
+ { lat:         15, common_pid:       2303 } hitcount:          1
+ { lat:         15, common_pid:       2306 } hitcount:          6
+ { lat:         16, common_pid:       2302 } hitcount:         31
+ { lat:         16, common_pid:       2306 } hitcount:          3
+ { lat:         16, common_pid:       2300 } hitcount:          5
+ { lat:         17, common_pid:       2302 } hitcount:          6
+ { lat:         17, common_pid:       2303 } hitcount:          1
+ { lat:         18, common_pid:       2304 } hitcount:          1
+ { lat:         18, common_pid:       2302 } hitcount:          8
+ { lat:         18, common_pid:       2299 } hitcount:          1
+ { lat:         18, common_pid:       2301 } hitcount:          1
+ { lat:         19, common_pid:       2303 } hitcount:          4
+ { lat:         19, common_pid:       2304 } hitcount:          5
+ { lat:         19, common_pid:       2302 } hitcount:          4
+ { lat:         19, common_pid:       2299 } hitcount:          3
+ { lat:         19, common_pid:       2306 } hitcount:          1
+ { lat:         19, common_pid:       2300 } hitcount:          4
+ { lat:         19, common_pid:       2305 } hitcount:          5
+ { lat:         20, common_pid:       2299 } hitcount:          2
+ { lat:         20, common_pid:       2302 } hitcount:          3
+ { lat:         20, common_pid:       2305 } hitcount:          1
+ { lat:         20, common_pid:       2300 } hitcount:          2
+ { lat:         20, common_pid:       2301 } hitcount:          2
+ { lat:         20, common_pid:       2303 } hitcount:          3
+ { lat:         21, common_pid:       2305 } hitcount:          1
+ { lat:         21, common_pid:       2299 } hitcount:          5
+ { lat:         21, common_pid:       2303 } hitcount:          4
+ { lat:         21, common_pid:       2302 } hitcount:          7
+ { lat:         21, common_pid:       2300 } hitcount:          1
+ { lat:         21, common_pid:       2301 } hitcount:          5
+ { lat:         21, common_pid:       2304 } hitcount:          2
+ { lat:         22, common_pid:       2302 } hitcount:          5
+ { lat:         22, common_pid:       2303 } hitcount:          1
+ { lat:         22, common_pid:       2306 } hitcount:          3
+ { lat:         22, common_pid:       2301 } hitcount:          2
+ { lat:         22, common_pid:       2300 } hitcount:          1
+ { lat:         22, common_pid:       2299 } hitcount:          1
+ { lat:         22, common_pid:       2305 } hitcount:          1
+ { lat:         22, common_pid:       2304 } hitcount:          1
+ { lat:         23, common_pid:       2299 } hitcount:          1
+ { lat:         23, common_pid:       2306 } hitcount:          2
+ { lat:         23, common_pid:       2302 } hitcount:          6
+ { lat:         24, common_pid:       2302 } hitcount:          3
+ { lat:         24, common_pid:       2300 } hitcount:          1
+ { lat:         24, common_pid:       2306 } hitcount:          2
+ { lat:         24, common_pid:       2305 } hitcount:          1
+ { lat:         24, common_pid:       2299 } hitcount:          1
+ { lat:         25, common_pid:       2300 } hitcount:          1
+ { lat:         25, common_pid:       2302 } hitcount:          4
+ { lat:         26, common_pid:       2302 } hitcount:          2
+ { lat:         27, common_pid:       2305 } hitcount:          1
+ { lat:         27, common_pid:       2300 } hitcount:          1
+ { lat:         27, common_pid:       2302 } hitcount:          3
+ { lat:         28, common_pid:       2306 } hitcount:          1
+ { lat:         28, common_pid:       2302 } hitcount:          4
+ { lat:         29, common_pid:       2302 } hitcount:          1
+ { lat:         29, common_pid:       2300 } hitcount:          2
+ { lat:         29, common_pid:       2306 } hitcount:          1
+ { lat:         29, common_pid:       2304 } hitcount:          1
+ { lat:         30, common_pid:       2302 } hitcount:          4
+ { lat:         31, common_pid:       2302 } hitcount:          6
+ { lat:         32, common_pid:       2302 } hitcount:          1
+ { lat:         33, common_pid:       2299 } hitcount:          1
+ { lat:         33, common_pid:       2302 } hitcount:          3
+ { lat:         34, common_pid:       2302 } hitcount:          2
+ { lat:         35, common_pid:       2302 } hitcount:          1
+ { lat:         35, common_pid:       2304 } hitcount:          1
+ { lat:         36, common_pid:       2302 } hitcount:          4
+ { lat:         37, common_pid:       2302 } hitcount:          6
+ { lat:         38, common_pid:       2302 } hitcount:          2
+ { lat:         39, common_pid:       2302 } hitcount:          2
+ { lat:         39, common_pid:       2304 } hitcount:          1
+ { lat:         40, common_pid:       2304 } hitcount:          2
+ { lat:         40, common_pid:       2302 } hitcount:          5
+ { lat:         41, common_pid:       2304 } hitcount:          1
+ { lat:         41, common_pid:       2302 } hitcount:          8
+ { lat:         42, common_pid:       2302 } hitcount:          6
+ { lat:         42, common_pid:       2304 } hitcount:          1
+ { lat:         43, common_pid:       2302 } hitcount:          3
+ { lat:         43, common_pid:       2304 } hitcount:          4
+ { lat:         44, common_pid:       2302 } hitcount:          6
+ { lat:         45, common_pid:       2302 } hitcount:          5
+ { lat:         46, common_pid:       2302 } hitcount:          5
+ { lat:         47, common_pid:       2302 } hitcount:          7
+ { lat:         48, common_pid:       2301 } hitcount:          1
+ { lat:         48, common_pid:       2302 } hitcount:          9
+ { lat:         49, common_pid:       2302 } hitcount:          3
+ { lat:         50, common_pid:       2302 } hitcount:          1
+ { lat:         50, common_pid:       2301 } hitcount:          1
+ { lat:         51, common_pid:       2302 } hitcount:          2
+ { lat:         51, common_pid:       2301 } hitcount:          1
+ { lat:         61, common_pid:       2302 } hitcount:          1
+ { lat:        110, common_pid:       2302 } hitcount:          1
 
-Totals:
-    Hits: 89565
-    Entries: 158
-    Dropped: 0
+ Totals:
+     Hits: 89565
+     Entries: 158
+     Dropped: 0
 
 This doesn't tell us any information about how late cyclictest may have
 woken up, but it does show us a nice histogram of how long it took from
diff --git a/Documentation/trace/index.rst b/Documentation/trace/index.rst
index b58c10b04e27..306997941ba1 100644
--- a/Documentation/trace/index.rst
+++ b/Documentation/trace/index.rst
@@ -18,6 +18,7 @@ Linux Tracing Technologies
    events-nmi
    events-msr
    mmiotrace
+   histogram
    hwlat_detector
    intel_th
    stm
diff --git a/kernel/trace/Kconfig b/kernel/trace/Kconfig
index dcc0166d1997..2bd4a9181a0f 100644
--- a/kernel/trace/Kconfig
+++ b/kernel/trace/Kconfig
@@ -605,7 +605,7 @@ config HIST_TRIGGERS
 	  Inter-event tracing of quantities such as latencies is also
 	  supported using hist triggers under this option.
 
-	  See Documentation/trace/histogram.txt.
+	  See Documentation/trace/histogram.rst.
 	  If in doubt, say N.
 
 config MMIOTRACE_TEST
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 15+ messages in thread
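The per-pid latency table in the diff above is the output of a hist trigger. A rough sketch of driving one, assuming a kernel built with CONFIG_HIST_TRIGGERS=y, tracefs mounted at /sys/kernel/tracing, and the synthetic wakeup_latency event from histogram.rst's inter-event example — none of which are guaranteed on a given system:

```shell
# Install a histogram keyed on latency and pid, read it back, then
# remove it. The event path and 'lat' field follow histogram.rst's
# synthetic-event example and are assumptions, not guarantees.
TRACEFS=/sys/kernel/tracing
EVENT="$TRACEFS/events/synthetic/wakeup_latency"
TRIGGER='hist:keys=lat,common_pid:sort=lat'
if [ -w "$EVENT/trigger" ]; then
    echo "$TRIGGER" > "$EVENT/trigger"    # install the hist trigger
    cat "$EVENT/hist"                     # rows like: { lat: ..., common_pid: ... } hitcount: ...
    echo "!$TRIGGER" > "$EVENT/trigger"   # a leading '!' removes the trigger
else
    echo "tracefs event not writable; run as root on a suitable kernel"
fi
```

Writing the same trigger string back with a leading '!' is the standard tear-down; 'sort=lat' orders the buckets by the latency key, much like the layout in the converted document.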

* Re: [PATCH 9/9] docs: histogram.txt: convert it to ReST file format
  2018-06-26  9:49 ` [PATCH 9/9] docs: histogram.txt: convert it to ReST file format Mauro Carvalho Chehab
@ 2018-06-26 14:20   ` Steven Rostedt
  0 siblings, 0 replies; 15+ messages in thread
From: Steven Rostedt @ 2018-06-26 14:20 UTC (permalink / raw)
  To: Mauro Carvalho Chehab
  Cc: Linux Doc Mailing List, Mauro Carvalho Chehab, linux-kernel,
	Jonathan Corbet, Ingo Molnar, Tom Zanussi, James Morris,
	Xiongwei Song, Changbin Du, Masami Hiramatsu,
	Joel Fernandes (Google)

On Tue, 26 Jun 2018 06:49:11 -0300
Mauro Carvalho Chehab <mchehab+samsung@kernel.org> wrote:

> Despite being referenced by Documentation/trace/ftrace.rst as an
> RST file, this file was still plain text, with several issues.
> 
> Convert it to ReST and add it to the trace index:
> - Mark the document title as such;
> - Identify and indent the literal blocks;
> - Use the proper markup for tables.
> 
> Signed-off-by: Mauro Carvalho Chehab <mchehab+samsung@kernel.org>
> ---
>  Documentation/trace/events.rst                |    2 +-
>  .../trace/{histogram.txt => histogram.rst}    | 1242 +++++++++--------
>  Documentation/trace/index.rst                 |    1 +
>  kernel/trace/Kconfig                          |    2 +-
>  4 files changed, 627 insertions(+), 620 deletions(-)
>  rename Documentation/trace/{histogram.txt => histogram.rst} (73%)
> 

Acked-by: Steven Rostedt (VMware) <rostedt@goodmis.org>

-- Steve


* Re: [PATCH 4/9] devicetree: bindings: fix location of leds common file
  2018-06-26  9:49 ` [PATCH 4/9] devicetree: bindings: fix location of leds common file Mauro Carvalho Chehab
@ 2018-06-26 14:38   ` Pavel Machek
  2018-06-26 19:41   ` Jacek Anaszewski
  1 sibling, 0 replies; 15+ messages in thread
From: Pavel Machek @ 2018-06-26 14:38 UTC (permalink / raw)
  To: Mauro Carvalho Chehab
  Cc: Linux Doc Mailing List, Mauro Carvalho Chehab, linux-kernel,
	Jonathan Corbet, Jacek Anaszewski, Rob Herring, Mark Rutland,
	linux-leds, devicetree


On Tue 2018-06-26 06:49:06, Mauro Carvalho Chehab wrote:
> The leds.txt was moved and renamed. Fix references to
> it accordingly.
> 
> Fixes: f67605394f0b ("devicetree/bindings: Move gpio-leds binding into leds directory")
> Signed-off-by: Mauro Carvalho Chehab <mchehab+samsung@kernel.org>

Acked-by: Pavel Machek <pavel@ucw.cz>

-- 
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html



* Re: [PATCH 4/9] devicetree: bindings: fix location of leds common file
  2018-06-26  9:49 ` [PATCH 4/9] devicetree: bindings: fix location of leds common file Mauro Carvalho Chehab
  2018-06-26 14:38   ` Pavel Machek
@ 2018-06-26 19:41   ` Jacek Anaszewski
  1 sibling, 0 replies; 15+ messages in thread
From: Jacek Anaszewski @ 2018-06-26 19:41 UTC (permalink / raw)
  To: Mauro Carvalho Chehab, Linux Doc Mailing List
  Cc: Mauro Carvalho Chehab, linux-kernel, Jonathan Corbet,
	Pavel Machek, Rob Herring, Mark Rutland, linux-leds, devicetree

Hi Mauro.

Thank you for the patch.

On 06/26/2018 11:49 AM, Mauro Carvalho Chehab wrote:
> The leds.txt was moved and renamed. Fix references to
> it accordingly.
> 
> Fixes: f67605394f0b ("devicetree/bindings: Move gpio-leds binding into leds directory")
> Signed-off-by: Mauro Carvalho Chehab <mchehab+samsung@kernel.org>
> ---
>   Documentation/devicetree/bindings/leds/common.txt | 2 +-
>   1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/Documentation/devicetree/bindings/leds/common.txt b/Documentation/devicetree/bindings/leds/common.txt
> index 1d4afe9644b6..aa1399814a2a 100644
> --- a/Documentation/devicetree/bindings/leds/common.txt
> +++ b/Documentation/devicetree/bindings/leds/common.txt
> @@ -31,7 +31,7 @@ Optional properties for child nodes:
>        "backlight" - LED will act as a back-light, controlled by the framebuffer
>   		   system
>        "default-on" - LED will turn on (but for leds-gpio see "default-state"
> -		    property in Documentation/devicetree/bindings/gpio/led.txt)
> +		    property in Documentation/devicetree/bindings/leds/leds-gpio.txt)
>        "heartbeat" - LED "double" flashes at a load average based rate
>        "disk-activity" - LED indicates disk activity
>        "ide-disk" - LED indicates IDE disk activity (deprecated),
> 

Applied.

-- 
Best regards,
Jacek Anaszewski


* Re: [PATCH 6/9] gpio.h: fix location of gpio legacy documentation
  2018-06-26  9:49 ` [PATCH 6/9] gpio.h: fix location of gpio legacy documentation Mauro Carvalho Chehab
@ 2018-06-29 12:36   ` Linus Walleij
  0 siblings, 0 replies; 15+ messages in thread
From: Linus Walleij @ 2018-06-29 12:36 UTC (permalink / raw)
  To: Mauro Carvalho Chehab
  Cc: linux-doc, Mauro Carvalho Chehab, linux-kernel, Jonathan Corbet,
	open list:GPIO SUBSYSTEM

On Tue, Jun 26, 2018 at 11:49 AM Mauro Carvalho Chehab
<mchehab+samsung@kernel.org> wrote:

> This doc file was moved. Change its reference
> accordingly.
>
> Fixes: 7ee2c13080c9 ("Documentation: gpio: Move legacy documentation to driver-api")
> Signed-off-by: Mauro Carvalho Chehab <mchehab+samsung@kernel.org>

Reviewed-by: Linus Walleij <linus.walleij@linaro.org>

Should I apply this to the GPIO tree?

Yours,
Linus Walleij


* Re: [PATCH 0/9] Fix references for some missing documentation files
  2018-06-26  9:49 [PATCH 0/9] Fix references for some missing documentation files Mauro Carvalho Chehab
                   ` (8 preceding siblings ...)
  2018-06-26  9:49 ` [PATCH 9/9] docs: histogram.txt: convert it to ReST file format Mauro Carvalho Chehab
@ 2018-07-02 17:27 ` Jonathan Corbet
  9 siblings, 0 replies; 15+ messages in thread
From: Jonathan Corbet @ 2018-07-02 17:27 UTC (permalink / raw)
  To: Mauro Carvalho Chehab
  Cc: Linux Doc Mailing List, Mauro Carvalho Chehab, linux-kernel,
	Jacek Anaszewski, devicetree, Ingo Molnar, linux-kernel,
	Andrew Morton, linux-leds, intel-wired-lan, Mark Rutland,
	linux-gpio, David S. Miller, James Morris, Jeff Kirsher,
	Changbin Du, Masami Hiramatsu, netdev, Steven Rostedt,
	linux-input, linuxppc-dev, linux-scsi, kvm, virtualization,
	Andy Whitcroft, Joe Perches

On Tue, 26 Jun 2018 06:49:02 -0300
Mauro Carvalho Chehab <mchehab+samsung@kernel.org> wrote:

> Having nothing to do while waiting for my plane to arrive while
> returning back from Japan, I ended by writing a small series of 
> patches meant to reduce the number of bad Documentation/* 
> links that are detected by:
> 	./scripts/documentation-file-ref-check

I've applied everything except the two networking patches, since I expect
those to go through Dave's tree.

Thanks,

jon

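The checker this whole series revolves around walks the tree for Documentation/ references whose target file no longer exists. A toy illustration of the idea — not the real script's logic, which lives in Perl at scripts/documentation-file-ref-check in the kernel tree; the demo directory and file names below are made up:

```shell
# Minimal sketch of what documentation-file-ref-check reports: scan
# for Documentation/...(txt|rst) references and flag any whose target
# is missing. Tree layout here is invented for illustration only.
mkdir -p /tmp/refcheck-demo/Documentation/trace
cd /tmp/refcheck-demo
echo 'See Documentation/trace/histogram.txt.' > Kconfig
touch Documentation/trace/histogram.rst     # the .txt was renamed to .rst
grep -rhoE 'Documentation/[A-Za-z0-9_/.-]+\.(txt|rst)' --include='Kconfig' . |
while read -r ref; do
    [ -e "$ref" ] || echo "broken reference: $ref"
done
```

This prints "broken reference: Documentation/trace/histogram.txt" — the same class of stale link that patches 3 to 6 of the series fix.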

end of thread, other threads:[~2018-07-02 17:27 UTC | newest]

Thread overview: 15+ messages
-- links below jump to the message on this page --
2018-06-26  9:49 [PATCH 0/9] Fix references for some missing documentation files Mauro Carvalho Chehab
2018-06-26  9:49 ` [PATCH 1/9] scripts/documentation-file-ref-check: remove some false positives Mauro Carvalho Chehab
2018-06-26  9:49 ` [PATCH 2/9] scripts/documentation-file-ref-check: ignore sched-pelt false positive Mauro Carvalho Chehab
2018-06-26  9:49 ` [PATCH 3/9] docs: zh_CN: fix location of oops-tracing.txt Mauro Carvalho Chehab
2018-06-26  9:49 ` [PATCH 4/9] devicetree: bindings: fix location of leds common file Mauro Carvalho Chehab
2018-06-26 14:38   ` Pavel Machek
2018-06-26 19:41   ` Jacek Anaszewski
2018-06-26  9:49 ` [PATCH 5/9] MAINTAINERS: fix location of ina2xx.txt device tree file Mauro Carvalho Chehab
2018-06-26  9:49 ` [PATCH 6/9] gpio.h: fix location of gpio legacy documentation Mauro Carvalho Chehab
2018-06-29 12:36   ` Linus Walleij
2018-06-26  9:49 ` [PATCH 7/9] networking: e100.rst: Get rid of Sphinx warnings Mauro Carvalho Chehab
2018-06-26  9:49 ` [PATCH 8/9] networking: e1000.rst: " Mauro Carvalho Chehab
2018-06-26  9:49 ` [PATCH 9/9] docs: histogram.txt: convert it to ReST file format Mauro Carvalho Chehab
2018-06-26 14:20   ` Steven Rostedt
2018-07-02 17:27 ` [PATCH 0/9] Fix references for some missing documentation files Jonathan Corbet
