* [PATCH v2 01/29] IPMI.txt: standardize document format
@ 2017-06-17 15:26 Mauro Carvalho Chehab
  2017-06-17 15:26 ` [PATCH v2 02/29] IRQ-affinity.txt: " Mauro Carvalho Chehab
                   ` (28 more replies)
  0 siblings, 29 replies; 35+ messages in thread
From: Mauro Carvalho Chehab @ 2017-06-17 15:26 UTC (permalink / raw)
  To: Linux Doc Mailing List
  Cc: Mauro Carvalho Chehab, Mauro Carvalho Chehab, linux-kernel,
	Jonathan Corbet, Corey Minyard, openipmi-developer

Each text file under Documentation follows a different
format. Some don't even have titles!

Change its representation to follow the adopted standard,
using ReST markups for it to be parseable by Sphinx:

- fix document type;
- add missing markups for subitems;
- mark literal blocks;
- add whitespaces and blank lines where needed;
- use bulleted list markups where needed.

Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
---
 Documentation/IPMI.txt | 76 +++++++++++++++++++++++++++++---------------------
 1 file changed, 44 insertions(+), 32 deletions(-)

diff --git a/Documentation/IPMI.txt b/Documentation/IPMI.txt
index 6962cab997ef..aa77a25a0940 100644
--- a/Documentation/IPMI.txt
+++ b/Documentation/IPMI.txt
@@ -1,9 +1,8 @@
+=====================
+The Linux IPMI Driver
+=====================
 
-                          The Linux IPMI Driver
-			  ---------------------
-			      Corey Minyard
-			  <minyard@mvista.com>
-			    <minyard@acm.org>
+:Author: Corey Minyard <minyard@mvista.com> / <minyard@acm.org>
 
 The Intelligent Platform Management Interface, or IPMI, is a
 standard for controlling intelligent devices that monitor a system.
@@ -141,7 +140,7 @@ Addressing
 ----------
 
 The IPMI addressing works much like IP addresses, you have an overlay
-to handle the different address types.  The overlay is:
+to handle the different address types.  The overlay is::
 
   struct ipmi_addr
   {
@@ -153,7 +152,7 @@ to handle the different address types.  The overlay is:
 The addr_type determines what the address really is.  The driver
 currently understands two different types of addresses.
 
-"System Interface" addresses are defined as:
+"System Interface" addresses are defined as::
 
   struct ipmi_system_interface_addr
   {
@@ -166,7 +165,7 @@ straight to the BMC on the current card.  The channel must be
 IPMI_BMC_CHANNEL.
 
 Messages that are destined to go out on the IPMB bus use the
-IPMI_IPMB_ADDR_TYPE address type.  The format is
+IPMI_IPMB_ADDR_TYPE address type.  The format is::
 
   struct ipmi_ipmb_addr
   {
@@ -184,16 +183,16 @@ spec.
 Messages
 --------
 
-Messages are defined as:
+Messages are defined as::
 
-struct ipmi_msg
-{
+  struct ipmi_msg
+  {
 	unsigned char netfn;
 	unsigned char lun;
 	unsigned char cmd;
 	unsigned char *data;
 	int           data_len;
-};
+  };
 
 The driver takes care of adding/stripping the header information.  The
 data portion is just the data to be send (do NOT put addressing info
@@ -208,7 +207,7 @@ block of data, even when receiving messages.  Otherwise the driver
 will have no place to put the message.
 
 Messages coming up from the message handler in kernelland will come in
-as:
+as::
 
   struct ipmi_recv_msg
   {
@@ -246,6 +245,7 @@ and the user should not have to care what type of SMI is below them.
 
 
 Watching For Interfaces
+^^^^^^^^^^^^^^^^^^^^^^^
 
 When your code comes up, the IPMI driver may or may not have detected
 if IPMI devices exist.  So you might have to defer your setup until
@@ -256,6 +256,7 @@ and tell you when they come and go.
 
 
 Creating the User
+^^^^^^^^^^^^^^^^^
 
 To use the message handler, you must first create a user using
 ipmi_create_user.  The interface number specifies which SMI you want
@@ -272,6 +273,7 @@ closing the device automatically destroys the user.
 
 
 Messaging
+^^^^^^^^^
 
 To send a message from kernel-land, the ipmi_request_settime() call does
 pretty much all message handling.  Most of the parameter are
@@ -321,6 +323,7 @@ though, since it is tricky to manage your own buffers.
 
 
 Events and Incoming Commands
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
 The driver takes care of polling for IPMI events and receiving
 commands (commands are messages that are not responses, they are
@@ -367,7 +370,7 @@ in the system.  It discovers interfaces through a host of different
 methods, depending on the system.
 
 You can specify up to four interfaces on the module load line and
-control some module parameters:
+control some module parameters::
 
   modprobe ipmi_si.o type=<type1>,<type2>....
        ports=<port1>,<port2>... addrs=<addr1>,<addr2>...
@@ -437,7 +440,7 @@ default is one.  Setting to 0 is useful with the hotmod, but is
 obviously only useful for modules.
 
 When compiled into the kernel, the parameters can be specified on the
-kernel command line as:
+kernel command line as::
 
   ipmi_si.type=<type1>,<type2>...
        ipmi_si.ports=<port1>,<port2>... ipmi_si.addrs=<addr1>,<addr2>...
@@ -474,16 +477,22 @@ The driver supports a hot add and remove of interfaces.  This way,
 interfaces can be added or removed after the kernel is up and running.
 This is done using /sys/modules/ipmi_si/parameters/hotmod, which is a
 write-only parameter.  You write a string to this interface.  The string
-has the format:
+has the format::
+
    <op1>[:op2[:op3...]]
-The "op"s are:
+
+The "op"s are::
+
    add|remove,kcs|bt|smic,mem|i/o,<address>[,<opt1>[,<opt2>[,...]]]
-You can specify more than one interface on the line.  The "opt"s are:
+
+You can specify more than one interface on the line.  The "opt"s are::
+
    rsp=<regspacing>
    rsi=<regsize>
    rsh=<regshift>
    irq=<irq>
    ipmb=<ipmb slave addr>
+
 and these have the same meanings as discussed above.  Note that you
 can also use this on the kernel command line for a more compact format
 for specifying an interface.  Note that when removing an interface,
@@ -496,7 +505,7 @@ The SMBus Driver (SSIF)
 The SMBus driver allows up to 4 SMBus devices to be configured in the
 system.  By default, the driver will only register with something it
 finds in DMI or ACPI tables.  You can change this
-at module load time (for a module) with:
+at module load time (for a module) with::
 
   modprobe ipmi_ssif.o
 	addr=<i2caddr1>[,<i2caddr2>[,...]]
@@ -535,7 +544,7 @@ the smb_addr parameter unless you have DMI or ACPI data to tell the
 driver what to use.
 
 When compiled into the kernel, the addresses can be specified on the
-kernel command line as:
+kernel command line as::
 
   ipmb_ssif.addr=<i2caddr1>[,<i2caddr2>[...]]
 	ipmi_ssif.adapter=<adapter1>[,<adapter2>[...]]
@@ -565,9 +574,9 @@ Some users need more detailed information about a device, like where
 the address came from or the raw base device for the IPMI interface.
 You can use the IPMI smi_watcher to catch the IPMI interfaces as they
 come or go, and to grab the information, you can use the function
-ipmi_get_smi_info(), which returns the following structure:
+ipmi_get_smi_info(), which returns the following structure::
 
-struct ipmi_smi_info {
+  struct ipmi_smi_info {
 	enum ipmi_addr_src addr_src;
 	struct device *dev;
 	union {
@@ -575,7 +584,7 @@ struct ipmi_smi_info {
 			void *acpi_handle;
 		} acpi_info;
 	} addr_info;
-};
+  };
 
 Currently special info for only for SI_ACPI address sources is
 returned.  Others may be added as necessary.
@@ -590,7 +599,7 @@ Watchdog
 
 A watchdog timer is provided that implements the Linux-standard
 watchdog timer interface.  It has three module parameters that can be
-used to control it:
+used to control it::
 
   modprobe ipmi_watchdog timeout=<t> pretimeout=<t> action=<action type>
       preaction=<preaction type> preop=<preop type> start_now=x
@@ -635,7 +644,7 @@ watchdog device is closed.  The default value of nowayout is true
 if the CONFIG_WATCHDOG_NOWAYOUT option is enabled, or false if not.
 
 When compiled into the kernel, the kernel command line is available
-for configuring the watchdog:
+for configuring the watchdog::
 
   ipmi_watchdog.timeout=<t> ipmi_watchdog.pretimeout=<t>
 	ipmi_watchdog.action=<action type>
@@ -675,6 +684,7 @@ also get a bunch of OEM events holding the panic string.
 
 
 The field settings of the events are:
+
 * Generator ID: 0x21 (kernel)
 * EvM Rev: 0x03 (this event is formatting in IPMI 1.0 format)
 * Sensor Type: 0x20 (OS critical stop sensor)
@@ -683,18 +693,20 @@ The field settings of the events are:
 * Event Data 1: 0xa1 (Runtime stop in OEM bytes 2 and 3)
 * Event data 2: second byte of panic string
 * Event data 3: third byte of panic string
+
 See the IPMI spec for the details of the event layout.  This event is
 always sent to the local management controller.  It will handle routing
 the message to the right place
 
 Other OEM events have the following format:
-Record ID (bytes 0-1): Set by the SEL.
-Record type (byte 2): 0xf0 (OEM non-timestamped)
-byte 3: The slave address of the card saving the panic
-byte 4: A sequence number (starting at zero)
-The rest of the bytes (11 bytes) are the panic string.  If the panic string
-is longer than 11 bytes, multiple messages will be sent with increasing
-sequence numbers.
+
+* Record ID (bytes 0-1): Set by the SEL.
+* Record type (byte 2): 0xf0 (OEM non-timestamped)
+* byte 3: The slave address of the card saving the panic
+* byte 4: A sequence number (starting at zero)
+  The rest of the bytes (11 bytes) are the panic string.  If the panic string
+  is longer than 11 bytes, multiple messages will be sent with increasing
+  sequence numbers.
 
 Because you cannot send OEM events using the standard interface, this
 function will attempt to find an SEL and add the events there.  It
-- 
2.9.4

^ permalink raw reply related	[flat|nested] 35+ messages in thread
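The overlay addressing scheme this patch's text describes (a generic
``struct ipmi_addr`` whose leading ``addr_type`` field selects the concrete
layout, much like ``sockaddr``/``sa_family``) can be sketched with a short
Python model. The constant values are taken from ``include/uapi/linux/ipmi.h``
as best recalled, and the layouts are simplified, hypothetical stand-ins, not
the driver's actual API:

```python
import struct

# Constants mirroring include/uapi/linux/ipmi.h (assumed values)
IPMI_SYSTEM_INTERFACE_ADDR_TYPE = 0x0C
IPMI_IPMB_ADDR_TYPE = 0x01
IPMI_BMC_CHANNEL = 0xF

# struct ipmi_addr begins "int addr_type; short channel;" -- every
# concrete address type shares that prefix, so those fields can be
# decoded without knowing the full type, exactly the sockaddr trick.
HDR = struct.Struct("=ih")

def classify(addr_bytes):
    """Dispatch on addr_type, as the driver does internally."""
    addr_type, channel = HDR.unpack_from(addr_bytes)
    if addr_type == IPMI_SYSTEM_INTERFACE_ADDR_TYPE:
        return ("system-interface", channel)
    if addr_type == IPMI_IPMB_ADDR_TYPE:
        return ("ipmb", channel)
    return ("unknown", channel)

# Build a "system interface" address: addr_type, channel, lun
si = struct.pack("=ihB", IPMI_SYSTEM_INTERFACE_ADDR_TYPE, IPMI_BMC_CHANNEL, 0)
print(classify(si))  # ('system-interface', 15)
```

The point of the pattern is that new address types can be added without
changing any code that only routes on the common header.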

* [PATCH v2 02/29] IRQ-affinity.txt: standardize document format
  2017-06-17 15:26 [PATCH v2 01/29] IPMI.txt: standardize document format Mauro Carvalho Chehab
@ 2017-06-17 15:26 ` Mauro Carvalho Chehab
  2017-06-17 15:26 ` [PATCH v2 03/29] IRQ-domain.txt: " Mauro Carvalho Chehab
                   ` (27 subsequent siblings)
  28 siblings, 0 replies; 35+ messages in thread
From: Mauro Carvalho Chehab @ 2017-06-17 15:26 UTC (permalink / raw)
  To: Linux Doc Mailing List
  Cc: Mauro Carvalho Chehab, Mauro Carvalho Chehab, linux-kernel,
	Jonathan Corbet

Each text file under Documentation follows a different
format. Some don't even have titles!

Change its representation to follow the adopted standard,
using ReST markups for it to be parseable by Sphinx:

- Add a title for the document;
- mark literal blocks as such;
- use a bulleted list for changelog.

Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
---
 Documentation/IRQ-affinity.txt | 77 ++++++++++++++++++++++--------------------
 1 file changed, 41 insertions(+), 36 deletions(-)

diff --git a/Documentation/IRQ-affinity.txt b/Documentation/IRQ-affinity.txt
index 01a675175a36..29da5000836a 100644
--- a/Documentation/IRQ-affinity.txt
+++ b/Documentation/IRQ-affinity.txt
@@ -1,8 +1,11 @@
-ChangeLog:
-	Started by Ingo Molnar <mingo@redhat.com>
-	Update by Max Krasnyansky <maxk@qualcomm.com>
-
+================
 SMP IRQ affinity
+================
+
+ChangeLog:
+	- Started by Ingo Molnar <mingo@redhat.com>
+	- Update by Max Krasnyansky <maxk@qualcomm.com>
+
 
 /proc/irq/IRQ#/smp_affinity and /proc/irq/IRQ#/smp_affinity_list specify
 which target CPUs are permitted for a given IRQ source.  It's a bitmask
@@ -16,50 +19,52 @@ will be set to the default mask. It can then be changed as described above.
 Default mask is 0xffffffff.
 
 Here is an example of restricting IRQ44 (eth1) to CPU0-3 then restricting
-it to CPU4-7 (this is an 8-CPU SMP box):
+it to CPU4-7 (this is an 8-CPU SMP box)::
 
-[root@moon 44]# cd /proc/irq/44
-[root@moon 44]# cat smp_affinity
-ffffffff
+	[root@moon 44]# cd /proc/irq/44
+	[root@moon 44]# cat smp_affinity
+	ffffffff
 
-[root@moon 44]# echo 0f > smp_affinity
-[root@moon 44]# cat smp_affinity
-0000000f
-[root@moon 44]# ping -f h
-PING hell (195.4.7.3): 56 data bytes
-...
---- hell ping statistics ---
-6029 packets transmitted, 6027 packets received, 0% packet loss
-round-trip min/avg/max = 0.1/0.1/0.4 ms
-[root@moon 44]# cat /proc/interrupts | grep 'CPU\|44:'
-           CPU0       CPU1       CPU2       CPU3      CPU4       CPU5        CPU6       CPU7
- 44:       1068       1785       1785       1783         0          0           0         0    IO-APIC-level  eth1
+	[root@moon 44]# echo 0f > smp_affinity
+	[root@moon 44]# cat smp_affinity
+	0000000f
+	[root@moon 44]# ping -f h
+	PING hell (195.4.7.3): 56 data bytes
+	...
+	--- hell ping statistics ---
+	6029 packets transmitted, 6027 packets received, 0% packet loss
+	round-trip min/avg/max = 0.1/0.1/0.4 ms
+	[root@moon 44]# cat /proc/interrupts | grep 'CPU\|44:'
+		CPU0       CPU1       CPU2       CPU3      CPU4       CPU5        CPU6       CPU7
+	44:       1068       1785       1785       1783         0          0           0         0    IO-APIC-level  eth1
 
 As can be seen from the line above IRQ44 was delivered only to the first four
 processors (0-3).
 Now lets restrict that IRQ to CPU(4-7).
 
-[root@moon 44]# echo f0 > smp_affinity
-[root@moon 44]# cat smp_affinity
-000000f0
-[root@moon 44]# ping -f h
-PING hell (195.4.7.3): 56 data bytes
-..
---- hell ping statistics ---
-2779 packets transmitted, 2777 packets received, 0% packet loss
-round-trip min/avg/max = 0.1/0.5/585.4 ms
-[root@moon 44]# cat /proc/interrupts |  'CPU\|44:'
-           CPU0       CPU1       CPU2       CPU3      CPU4       CPU5        CPU6       CPU7
- 44:       1068       1785       1785       1783      1784       1069        1070       1069   IO-APIC-level  eth1
+::
+
+	[root@moon 44]# echo f0 > smp_affinity
+	[root@moon 44]# cat smp_affinity
+	000000f0
+	[root@moon 44]# ping -f h
+	PING hell (195.4.7.3): 56 data bytes
+	..
+	--- hell ping statistics ---
+	2779 packets transmitted, 2777 packets received, 0% packet loss
+	round-trip min/avg/max = 0.1/0.5/585.4 ms
+	[root@moon 44]# cat /proc/interrupts |  'CPU\|44:'
+		CPU0       CPU1       CPU2       CPU3      CPU4       CPU5        CPU6       CPU7
+	44:       1068       1785       1785       1783      1784       1069        1070       1069   IO-APIC-level  eth1
 
 This time around IRQ44 was delivered only to the last four processors.
 i.e counters for the CPU0-3 did not change.
 
-Here is an example of limiting that same irq (44) to cpus 1024 to 1031:
+Here is an example of limiting that same irq (44) to cpus 1024 to 1031::
 
-[root@moon 44]# echo 1024-1031 > smp_affinity_list
-[root@moon 44]# cat smp_affinity_list
-1024-1031
+	[root@moon 44]# echo 1024-1031 > smp_affinity_list
+	[root@moon 44]# cat smp_affinity_list
+	1024-1031
 
 Note that to do this with a bitmask would require 32 bitmasks of zero
 to follow the pertinent one.
-- 
2.9.4

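The hex values written to ``smp_affinity`` in the examples above are just
bitmasks with one bit per allowed CPU. A small Python sketch (illustrative
only; the helper name is made up) reproduces the values from the document and
shows why ``smp_affinity_list`` exists:

```python
def affinity_mask(cpus):
    """Bitmask with one bit per allowed CPU, i.e. the value
    written (in hex) to /proc/irq/IRQ#/smp_affinity."""
    mask = 0
    for cpu in cpus:
        mask |= 1 << cpu
    return mask

print(f"{affinity_mask(range(0, 4)):x}")  # f  (CPU0-3, "0f" in the example)
print(f"{affinity_mask(range(4, 8)):x}")  # f0 (CPU4-7)

# For CPUs 1024-1031 the set bits live in the 33rd 32-bit word, which
# is why "echo 1024-1031 > smp_affinity_list" is far more convenient:
words = (affinity_mask(range(1024, 1032)).bit_length() + 31) // 32
print(words)  # 33 comma-separated 32-bit words would be needed
```

This matches the closing remark of the patch: expressing CPUs 1024-1031 as a
bitmask requires 32 all-zero words after the pertinent one.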

* [PATCH v2 03/29] IRQ-domain.txt: standardize document format
  2017-06-17 15:26 [PATCH v2 01/29] IPMI.txt: standardize document format Mauro Carvalho Chehab
  2017-06-17 15:26 ` [PATCH v2 02/29] IRQ-affinity.txt: " Mauro Carvalho Chehab
@ 2017-06-17 15:26 ` Mauro Carvalho Chehab
  2017-06-17 15:26 ` [PATCH v2 04/29] irqflags-tracing.txt: " Mauro Carvalho Chehab
                   ` (26 subsequent siblings)
  28 siblings, 0 replies; 35+ messages in thread
From: Mauro Carvalho Chehab @ 2017-06-17 15:26 UTC (permalink / raw)
  To: Linux Doc Mailing List
  Cc: Mauro Carvalho Chehab, Mauro Carvalho Chehab, linux-kernel,
	Jonathan Corbet, Marc Zyngier

Each text file under Documentation follows a different
format. Some don't even have titles!

Change its representation to follow the adopted standard,
using ReST markups for it to be parseable by Sphinx:

- use proper markups for titles;
- mark literal blocks as such;
- add blank lines where needed.

Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
---
 Documentation/IRQ-domain.txt | 69 +++++++++++++++++++++++++++++++-------------
 1 file changed, 49 insertions(+), 20 deletions(-)

diff --git a/Documentation/IRQ-domain.txt b/Documentation/IRQ-domain.txt
index 1f246eb25ca5..4a1cd7645d85 100644
--- a/Documentation/IRQ-domain.txt
+++ b/Documentation/IRQ-domain.txt
@@ -1,4 +1,6 @@
-irq_domain interrupt number mapping library
+===============================================
+The irq_domain interrupt number mapping library
+===============================================
 
 The current design of the Linux kernel uses a single large number
 space where each separate IRQ source is assigned a different number.
@@ -36,7 +38,9 @@ irq_domain also implements translation from an abstract irq_fwspec
 structure to hwirq numbers (Device Tree and ACPI GSI so far), and can
 be easily extended to support other IRQ topology data sources.
 
-=== irq_domain usage ===
+irq_domain usage
+================
+
 An interrupt controller driver creates and registers an irq_domain by
 calling one of the irq_domain_add_*() functions (each mapping method
 has a different allocator function, more on that later).  The function
@@ -62,15 +66,21 @@ If the driver has the Linux IRQ number or the irq_data pointer, and
 needs to know the associated hwirq number (such as in the irq_chip
 callbacks) then it can be directly obtained from irq_data->hwirq.
 
-=== Types of irq_domain mappings ===
+Types of irq_domain mappings
+============================
+
 There are several mechanisms available for reverse mapping from hwirq
 to Linux irq, and each mechanism uses a different allocation function.
 Which reverse map type should be used depends on the use case.  Each
 of the reverse map types are described below:
 
-==== Linear ====
-irq_domain_add_linear()
-irq_domain_create_linear()
+Linear
+------
+
+::
+
+	irq_domain_add_linear()
+	irq_domain_create_linear()
 
 The linear reverse map maintains a fixed size table indexed by the
 hwirq number.  When a hwirq is mapped, an irq_desc is allocated for
@@ -89,9 +99,13 @@ accepts a more general abstraction 'struct fwnode_handle'.
 
 The majority of drivers should use the linear map.
 
-==== Tree ====
-irq_domain_add_tree()
-irq_domain_create_tree()
+Tree
+----
+
+::
+
+	irq_domain_add_tree()
+	irq_domain_create_tree()
 
 The irq_domain maintains a radix tree map from hwirq numbers to Linux
 IRQs.  When an hwirq is mapped, an irq_desc is allocated and the
@@ -109,8 +123,12 @@ accepts a more general abstraction 'struct fwnode_handle'.
 
 Very few drivers should need this mapping.
 
-==== No Map ===-
-irq_domain_add_nomap()
+No Map
+------
+
+::
+
+	irq_domain_add_nomap()
 
 The No Map mapping is to be used when the hwirq number is
 programmable in the hardware.  In this case it is best to program the
@@ -121,10 +139,14 @@ Linux IRQ number into the hardware.
 
 Most drivers cannot use this mapping.
 
-==== Legacy ====
-irq_domain_add_simple()
-irq_domain_add_legacy()
-irq_domain_add_legacy_isa()
+Legacy
+------
+
+::
+
+	irq_domain_add_simple()
+	irq_domain_add_legacy()
+	irq_domain_add_legacy_isa()
 
 The Legacy mapping is a special case for drivers that already have a
 range of irq_descs allocated for the hwirqs.  It is used when the
@@ -163,14 +185,17 @@ that the driver using the simple domain call irq_create_mapping()
 before any irq_find_mapping() since the latter will actually work
 for the static IRQ assignment case.
 
-==== Hierarchy IRQ domain ====
+Hierarchy IRQ domain
+--------------------
+
 On some architectures, there may be multiple interrupt controllers
 involved in delivering an interrupt from the device to the target CPU.
-Let's look at a typical interrupt delivering path on x86 platforms:
+Let's look at a typical interrupt delivering path on x86 platforms::
 
-Device --> IOAPIC -> Interrupt remapping Controller -> Local APIC -> CPU
+  Device --> IOAPIC -> Interrupt remapping Controller -> Local APIC -> CPU
 
 There are three interrupt controllers involved:
+
 1) IOAPIC controller
 2) Interrupt remapping controller
 3) Local APIC controller
@@ -180,7 +205,8 @@ hardware architecture, an irq_domain data structure is built for each
 interrupt controller and those irq_domains are organized into hierarchy.
 When building irq_domain hierarchy, the irq_domain near to the device is
 child and the irq_domain near to CPU is parent. So a hierarchy structure
-as below will be built for the example above.
+as below will be built for the example above::
+
 	CPU Vector irq_domain (root irq_domain to manage CPU vectors)
 		^
 		|
@@ -190,6 +216,7 @@ as below will be built for the example above.
 	IOAPIC irq_domain (manage IOAPIC delivery entries/pins)
 
 There are four major interfaces to use hierarchy irq_domain:
+
 1) irq_domain_alloc_irqs(): allocate IRQ descriptors and interrupt
    controller related resources to deliver these interrupts.
 2) irq_domain_free_irqs(): free IRQ descriptors and interrupt controller
@@ -199,7 +226,8 @@ There are four major interfaces to use hierarchy irq_domain:
 4) irq_domain_deactivate_irq(): deactivate interrupt controller hardware
    to stop delivering the interrupt.
 
-Following changes are needed to support hierarchy irq_domain.
+Following changes are needed to support hierarchy irq_domain:
+
 1) a new field 'parent' is added to struct irq_domain; it's used to
    maintain irq_domain hierarchy information.
 2) a new field 'parent_data' is added to struct irq_data; it's used to
@@ -223,6 +251,7 @@ software architecture.
 
 For an interrupt controller driver to support hierarchy irq_domain, it
 needs to:
+
 1) Implement irq_domain_ops.alloc and irq_domain_ops.free
 2) Optionally implement irq_domain_ops.activate and
    irq_domain_ops.deactivate.
-- 
2.9.4

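The "linear" reverse map that this document recommends for most drivers is
conceptually just a fixed-size table indexed by hwirq, filled in lazily as
mappings are created. A toy Python model (class and allocator are invented
for illustration, not kernel API) captures the behavior of
``irq_create_mapping()``/``irq_find_mapping()`` on a linear domain:

```python
class LinearIrqDomain:
    """Toy model of irq_domain_add_linear(): a fixed-size table,
    indexed by hwirq number, mapping to Linux irq numbers."""
    def __init__(self, size):
        self.revmap = [0] * size   # hwirq -> Linux irq (0 = unmapped)
        self._next_virq = 1        # stand-in for the global irq allocator

    def create_mapping(self, hwirq):
        # Allocate a Linux irq on first use; idempotent afterwards.
        if self.revmap[hwirq] == 0:
            self.revmap[hwirq] = self._next_virq
            self._next_virq += 1
        return self.revmap[hwirq]

    def find_mapping(self, hwirq):
        return self.revmap[hwirq]  # O(1) lookup: the point of "linear"

d = LinearIrqDomain(32)
virq = d.create_mapping(5)
print(virq, d.find_mapping(5), d.find_mapping(6))  # 1 1 0
```

The tree variant swaps the fixed table for a radix tree so that sparse,
large hwirq spaces don't waste memory, at the cost of a slower lookup.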

* [PATCH v2 04/29] irqflags-tracing.txt: standardize document format
  2017-06-17 15:26 [PATCH v2 01/29] IPMI.txt: standardize document format Mauro Carvalho Chehab
  2017-06-17 15:26 ` [PATCH v2 02/29] IRQ-affinity.txt: " Mauro Carvalho Chehab
  2017-06-17 15:26 ` [PATCH v2 03/29] IRQ-domain.txt: " Mauro Carvalho Chehab
@ 2017-06-17 15:26 ` Mauro Carvalho Chehab
  2017-06-17 15:26 ` [PATCH v2 05/29] IRQ.txt: add a markup for its title Mauro Carvalho Chehab
                   ` (25 subsequent siblings)
  28 siblings, 0 replies; 35+ messages in thread
From: Mauro Carvalho Chehab @ 2017-06-17 15:26 UTC (permalink / raw)
  To: Linux Doc Mailing List
  Cc: Mauro Carvalho Chehab, Mauro Carvalho Chehab, linux-kernel,
	Jonathan Corbet

Each text file under Documentation follows a different
format. Some don't even have titles!

Change its representation to follow the adopted standard,
using ReST markups for it to be parseable by Sphinx:

There isn't much to be done here: just mark the document
title as such and add a :Author:.

While here, use upper case at the beginning of a few paragraphs
that start with lower case.

Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
---
 Documentation/irqflags-tracing.txt | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/Documentation/irqflags-tracing.txt b/Documentation/irqflags-tracing.txt
index f6da05670e16..bdd208259fb3 100644
--- a/Documentation/irqflags-tracing.txt
+++ b/Documentation/irqflags-tracing.txt
@@ -1,8 +1,10 @@
+=======================
 IRQ-flags state tracing
+=======================
 
-started by Ingo Molnar <mingo@redhat.com>
+:Author: started by Ingo Molnar <mingo@redhat.com>
 
-the "irq-flags tracing" feature "traces" hardirq and softirq state, in
+The "irq-flags tracing" feature "traces" hardirq and softirq state, in
 that it gives interested subsystems an opportunity to be notified of
 every hardirqs-off/hardirqs-on, softirqs-off/softirqs-on event that
 happens in the kernel.
@@ -14,7 +16,7 @@ CONFIG_PROVE_RWSEM_LOCKING will be offered on an architecture - these
 are locking APIs that are not used in IRQ context. (the one exception
 for rwsems is worked around)
 
-architecture support for this is certainly not in the "trivial"
+Architecture support for this is certainly not in the "trivial"
 category, because lots of lowlevel assembly code deal with irq-flags
 state changes. But an architecture can be irq-flags-tracing enabled in a
 rather straightforward and risk-free manner.
@@ -41,7 +43,7 @@ irq-flags-tracing support:
   excluded from the irq-tracing [and lock validation] mechanism via
   lockdep_off()/lockdep_on().
 
-in general there is no risk from having an incomplete irq-flags-tracing
+In general there is no risk from having an incomplete irq-flags-tracing
 implementation in an architecture: lockdep will detect that and will
 turn itself off. I.e. the lock validator will still be reliable. There
 should be no crashes due to irq-tracing bugs. (except if the assembly
-- 
2.9.4

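The tracing contract described above (interested subsystems are told of every
hardirqs-off/hardirqs-on transition) can be modeled in a few lines of Python.
This is a hypothetical sketch of the bookkeeping, not the kernel's
``trace_hardirqs_on()``/``trace_hardirqs_off()`` implementation:

```python
class IrqFlagsTracer:
    """Toy model of irq-flags state tracing: low-level code announces
    every disable/enable, and only real state transitions are recorded
    (nested disables generate no new event)."""
    def __init__(self):
        self.hardirqs_enabled = True
        self.events = []

    def trace_hardirqs_off(self):
        if self.hardirqs_enabled:
            self.hardirqs_enabled = False
            self.events.append("off")

    def trace_hardirqs_on(self):
        if not self.hardirqs_enabled:
            self.hardirqs_enabled = True
            self.events.append("on")

t = IrqFlagsTracer()
t.trace_hardirqs_off()   # e.g. a local_irq_save() in a lowlevel path
t.trace_hardirqs_off()   # nested disable: state unchanged, no event
t.trace_hardirqs_on()    # the matching restore
print(t.events)  # ['off', 'on']
```

A consumer such as lockdep can then reason about lock acquisitions against a
reliable picture of the current hardirq state, which is exactly why an
incomplete architecture port merely makes lockdep turn itself off rather than
crash.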

* [PATCH v2 05/29] IRQ.txt: add a markup for its title
  2017-06-17 15:26 [PATCH v2 01/29] IPMI.txt: standardize document format Mauro Carvalho Chehab
                   ` (2 preceding siblings ...)
  2017-06-17 15:26 ` [PATCH v2 04/29] irqflags-tracing.txt: " Mauro Carvalho Chehab
@ 2017-06-17 15:26 ` Mauro Carvalho Chehab
  2017-06-17 15:26 ` [PATCH v2 06/29] isapnp.txt: promote title level Mauro Carvalho Chehab
                   ` (24 subsequent siblings)
  28 siblings, 0 replies; 35+ messages in thread
From: Mauro Carvalho Chehab @ 2017-06-17 15:26 UTC (permalink / raw)
  To: Linux Doc Mailing List
  Cc: Mauro Carvalho Chehab, Mauro Carvalho Chehab, linux-kernel,
	Jonathan Corbet

This simple document only needs a markup for its title to follow
the standard we're adopting for text documents.

Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
---
 Documentation/IRQ.txt | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/Documentation/IRQ.txt b/Documentation/IRQ.txt
index 1011e7175021..4273806a606b 100644
--- a/Documentation/IRQ.txt
+++ b/Documentation/IRQ.txt
@@ -1,4 +1,6 @@
+===============
 What is an IRQ?
+===============
 
 An IRQ is an interrupt request from a device.
 Currently they can come in over a pin, or over a packet.
-- 
2.9.4


* [PATCH v2 06/29] isapnp.txt: promote title level
  2017-06-17 15:26 [PATCH v2 01/29] IPMI.txt: standardize document format Mauro Carvalho Chehab
                   ` (3 preceding siblings ...)
  2017-06-17 15:26 ` [PATCH v2 05/29] IRQ.txt: add a markup for its title Mauro Carvalho Chehab
@ 2017-06-17 15:26 ` Mauro Carvalho Chehab
  2017-06-17 15:26 ` [PATCH v2 07/29] isa.txt: standardize document format Mauro Carvalho Chehab
                   ` (23 subsequent siblings)
  28 siblings, 0 replies; 35+ messages in thread
From: Mauro Carvalho Chehab @ 2017-06-17 15:26 UTC (permalink / raw)
  To: Linux Doc Mailing List
  Cc: Mauro Carvalho Chehab, Mauro Carvalho Chehab, linux-kernel,
	Jonathan Corbet, Jaroslav Kysela

This simple document only needs its title promoted to follow the
standard we're adopting for text documents.

Yet, IMHO, it would be worth merging this file with
Documentation/pnp.txt in the future.

Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
---
 Documentation/isapnp.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/Documentation/isapnp.txt b/Documentation/isapnp.txt
index 400d1b5b523d..8d0840ac847b 100644
--- a/Documentation/isapnp.txt
+++ b/Documentation/isapnp.txt
@@ -1,3 +1,4 @@
+==========================================================
 ISA Plug & Play support by Jaroslav Kysela <perex@suse.cz>
 ==========================================================
 
-- 
2.9.4


* [PATCH v2 07/29] isa.txt: standardize document format
  2017-06-17 15:26 [PATCH v2 01/29] IPMI.txt: standardize document format Mauro Carvalho Chehab
                   ` (4 preceding siblings ...)
  2017-06-17 15:26 ` [PATCH v2 06/29] isapnp.txt: promote title level Mauro Carvalho Chehab
@ 2017-06-17 15:26 ` Mauro Carvalho Chehab
  2017-06-17 15:26 ` [PATCH v2 08/29] kernel-per-CPU-kthreads.txt: " Mauro Carvalho Chehab
                   ` (22 subsequent siblings)
  28 siblings, 0 replies; 35+ messages in thread
From: Mauro Carvalho Chehab @ 2017-06-17 15:26 UTC (permalink / raw)
  To: Linux Doc Mailing List
  Cc: Mauro Carvalho Chehab, Mauro Carvalho Chehab, linux-kernel,
	Jonathan Corbet, William Breathitt Gray

Each text file under Documentation follows a different
format. Some don't even have titles!

Change its representation to follow the adopted standard,
using ReST markups for it to be parseable by Sphinx:

- Use the main title standard for this document;
- replace _foo_ by **foo** for emphasis;
- mark literal blocks as such.

Acked-by: William Breathitt Gray <vilhelm.gray@gmail.com>
Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
---
 Documentation/isa.txt | 53 ++++++++++++++++++++++++++-------------------------
 1 file changed, 27 insertions(+), 26 deletions(-)

diff --git a/Documentation/isa.txt b/Documentation/isa.txt
index f232c26a40be..def4a7b690b5 100644
--- a/Documentation/isa.txt
+++ b/Documentation/isa.txt
@@ -1,5 +1,6 @@
+===========
 ISA Drivers
------------
+===========
 
 The following text is adapted from the commit message of the initial
 commit of the ISA bus driver authored by Rene Herman.
@@ -23,17 +24,17 @@ that all device creation has been made internal as well.
 
 The usage model this provides is nice, and has been acked from the ALSA
 side by Takashi Iwai and Jaroslav Kysela. The ALSA driver module_init's
-now (for oldisa-only drivers) become:
+now (for oldisa-only drivers) become::
 
-static int __init alsa_card_foo_init(void)
-{
-	return isa_register_driver(&snd_foo_isa_driver, SNDRV_CARDS);
-}
+	static int __init alsa_card_foo_init(void)
+	{
+		return isa_register_driver(&snd_foo_isa_driver, SNDRV_CARDS);
+	}
 
-static void __exit alsa_card_foo_exit(void)
-{
-	isa_unregister_driver(&snd_foo_isa_driver);
-}
+	static void __exit alsa_card_foo_exit(void)
+	{
+		isa_unregister_driver(&snd_foo_isa_driver);
+	}
 
 Quite like the other bus models therefore. This removes a lot of
 duplicated init code from the ALSA ISA drivers.
@@ -47,11 +48,11 @@ parameter, indicating how many devices to create and call our methods
 with.
 
 The platform_driver callbacks are called with a platform_device param;
-the isa_driver callbacks are being called with a "struct device *dev,
-unsigned int id" pair directly -- with the device creation completely
+the isa_driver callbacks are being called with a ``struct device *dev,
+unsigned int id`` pair directly -- with the device creation completely
 internal to the bus it's much cleaner to not leak isa_dev's by passing
 them in at all. The id is the only thing we ever want other then the
-struct device * anyways, and it makes for nicer code in the callbacks as
+struct device anyways, and it makes for nicer code in the callbacks as
 well.
 
 With this additional .match() callback ISA drivers have all options. If
@@ -75,20 +76,20 @@ This exports only two functions; isa_{,un}register_driver().
 
 isa_register_driver() register's the struct device_driver, and then
 loops over the passed in ndev creating devices and registering them.
-This causes the bus match method to be called for them, which is:
+This causes the bus match method to be called for them, which is::
 
-int isa_bus_match(struct device *dev, struct device_driver *driver)
-{
-          struct isa_driver *isa_driver = to_isa_driver(driver);
+	int isa_bus_match(struct device *dev, struct device_driver *driver)
+	{
+		struct isa_driver *isa_driver = to_isa_driver(driver);
 
-          if (dev->platform_data == isa_driver) {
-                  if (!isa_driver->match ||
-                          isa_driver->match(dev, to_isa_dev(dev)->id))
-                          return 1;
-                  dev->platform_data = NULL;
-          }
-          return 0;
-}
+		if (dev->platform_data == isa_driver) {
+			if (!isa_driver->match ||
+				isa_driver->match(dev, to_isa_dev(dev)->id))
+				return 1;
+			dev->platform_data = NULL;
+		}
+		return 0;
+	}
 
 The first thing this does is check if this device is in fact one of this
 driver's devices by seeing if the device's platform_data pointer is set
@@ -102,7 +103,7 @@ well.
 Then, if the the driver did not provide a .match, it matches. If it did,
 the driver match() method is called to determine a match.
 
-If it did _not_ match, dev->platform_data is reset to indicate this to
+If it did **not** match, dev->platform_data is reset to indicate this to
 isa_register_driver which can then unregister the device again.
 
 If during all this, there's any error, or no devices matched at all
-- 
2.9.4

^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [PATCH v2 08/29] kernel-per-CPU-kthreads.txt: standardize document format
  2017-06-17 15:26 [PATCH v2 01/29] IPMI.txt: standardize document format Mauro Carvalho Chehab
                   ` (5 preceding siblings ...)
  2017-06-17 15:26 ` [PATCH v2 07/29] isa.txt: standardize document format Mauro Carvalho Chehab
@ 2017-06-17 15:26 ` Mauro Carvalho Chehab
  2017-06-17 15:26 ` [PATCH v2 09/29] kobject.txt: " Mauro Carvalho Chehab
                   ` (21 subsequent siblings)
  28 siblings, 0 replies; 35+ messages in thread
From: Mauro Carvalho Chehab @ 2017-06-17 15:26 UTC (permalink / raw)
  To: Linux Doc Mailing List
  Cc: Mauro Carvalho Chehab, Mauro Carvalho Chehab, linux-kernel,
	Jonathan Corbet

Each text file under Documentation follows a different
format. Some don't even have titles!

Change its representation to follow the adopted standard,
using ReST markups for it to be parseable by Sphinx:

- Use title markups;
- use "-" for bulleted lists;
- Split Name/Purpose on two lines, in order to make it visually
  easier to read (in text format), and to bold the title
  (on ReST output);
- Add blank lines to split bulleted lists;
- use sub-titles for the several kthread softirq types;
- mark one literal containing an asterisk as such, in order to
  avoid a warning from Sphinx.
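
As an illustration (taken from the hunks below), the Name/Purpose
split converts the old single-line form into an indented definition
that Sphinx renders with the field name bolded:

```rst
Name:
  ksoftirqd/%u

Purpose:
  Execute softirq handlers when threaded or when under heavy load.
```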

Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
---
 Documentation/kernel-per-CPU-kthreads.txt | 156 +++++++++++++++++++++++-------
 1 file changed, 121 insertions(+), 35 deletions(-)

diff --git a/Documentation/kernel-per-CPU-kthreads.txt b/Documentation/kernel-per-CPU-kthreads.txt
index 2cb7dc5c0e0d..0f00f9c164ac 100644
--- a/Documentation/kernel-per-CPU-kthreads.txt
+++ b/Documentation/kernel-per-CPU-kthreads.txt
@@ -1,27 +1,29 @@
-REDUCING OS JITTER DUE TO PER-CPU KTHREADS
+==========================================
+Reducing OS jitter due to per-cpu kthreads
+==========================================
 
 This document lists per-CPU kthreads in the Linux kernel and presents
 options to control their OS jitter.  Note that non-per-CPU kthreads are
 not listed here.  To reduce OS jitter from non-per-CPU kthreads, bind
 them to a "housekeeping" CPU dedicated to such work.
 
+References
+==========
 
-REFERENCES
+-	Documentation/IRQ-affinity.txt:  Binding interrupts to sets of CPUs.
 
-o	Documentation/IRQ-affinity.txt:  Binding interrupts to sets of CPUs.
+-	Documentation/cgroup-v1:  Using cgroups to bind tasks to sets of CPUs.
 
-o	Documentation/cgroup-v1:  Using cgroups to bind tasks to sets of CPUs.
-
-o	man taskset:  Using the taskset command to bind tasks to sets
+-	man taskset:  Using the taskset command to bind tasks to sets
 	of CPUs.
 
-o	man sched_setaffinity:  Using the sched_setaffinity() system
+-	man sched_setaffinity:  Using the sched_setaffinity() system
 	call to bind tasks to sets of CPUs.
 
-o	/sys/devices/system/cpu/cpuN/online:  Control CPU N's hotplug state,
+-	/sys/devices/system/cpu/cpuN/online:  Control CPU N's hotplug state,
 	writing "0" to offline and "1" to online.
 
-o	In order to locate kernel-generated OS jitter on CPU N:
+-	In order to locate kernel-generated OS jitter on CPU N:
 
 		cd /sys/kernel/debug/tracing
 		echo 1 > max_graph_depth # Increase the "1" for more detail
@@ -29,12 +31,17 @@ o	In order to locate kernel-generated OS jitter on CPU N:
 		# run workload
 		cat per_cpu/cpuN/trace
 
+kthreads
+========
 
-KTHREADS
+Name:
+  ehca_comp/%u
+
+Purpose:
+  Periodically process Infiniband-related work.
 
-Name: ehca_comp/%u
-Purpose: Periodically process Infiniband-related work.
 To reduce its OS jitter, do any of the following:
+
 1.	Don't use eHCA Infiniband hardware, instead choosing hardware
 	that does not require per-CPU kthreads.  This will prevent these
 	kthreads from being created in the first place.  (This will
@@ -46,26 +53,45 @@ To reduce its OS jitter, do any of the following:
 	provisioned only on selected CPUs.
 
 
-Name: irq/%d-%s
-Purpose: Handle threaded interrupts.
+Name:
+  irq/%d-%s
+
+Purpose:
+  Handle threaded interrupts.
+
 To reduce its OS jitter, do the following:
+
 1.	Use irq affinity to force the irq threads to execute on
 	some other CPU.
 
-Name: kcmtpd_ctr_%d
-Purpose: Handle Bluetooth work.
+Name:
+  kcmtpd_ctr_%d
+
+Purpose:
+  Handle Bluetooth work.
+
 To reduce its OS jitter, do one of the following:
+
 1.	Don't use Bluetooth, in which case these kthreads won't be
 	created in the first place.
 2.	Use irq affinity to force Bluetooth-related interrupts to
 	occur on some other CPU and furthermore initiate all
 	Bluetooth activity on some other CPU.
 
-Name: ksoftirqd/%u
-Purpose: Execute softirq handlers when threaded or when under heavy load.
+Name:
+  ksoftirqd/%u
+
+Purpose:
+  Execute softirq handlers when threaded or when under heavy load.
+
 To reduce its OS jitter, each softirq vector must be handled
 separately as follows:
-TIMER_SOFTIRQ:  Do all of the following:
+
+TIMER_SOFTIRQ
+-------------
+
+Do all of the following:
+
 1.	To the extent possible, keep the CPU out of the kernel when it
 	is non-idle, for example, by avoiding system calls and by forcing
 	both kernel threads and interrupts to execute elsewhere.
@@ -76,34 +102,59 @@ TIMER_SOFTIRQ:  Do all of the following:
 	first one back online.  Once you have onlined the CPUs in question,
 	do not offline any other CPUs, because doing so could force the
 	timer back onto one of the CPUs in question.
-NET_TX_SOFTIRQ and NET_RX_SOFTIRQ:  Do all of the following:
+
+NET_TX_SOFTIRQ and NET_RX_SOFTIRQ
+---------------------------------
+
+Do all of the following:
+
 1.	Force networking interrupts onto other CPUs.
 2.	Initiate any network I/O on other CPUs.
 3.	Once your application has started, prevent CPU-hotplug operations
 	from being initiated from tasks that might run on the CPU to
 	be de-jittered.  (It is OK to force this CPU offline and then
 	bring it back online before you start your application.)
-BLOCK_SOFTIRQ:  Do all of the following:
+
+BLOCK_SOFTIRQ
+-------------
+
+Do all of the following:
+
 1.	Force block-device interrupts onto some other CPU.
 2.	Initiate any block I/O on other CPUs.
 3.	Once your application has started, prevent CPU-hotplug operations
 	from being initiated from tasks that might run on the CPU to
 	be de-jittered.  (It is OK to force this CPU offline and then
 	bring it back online before you start your application.)
-IRQ_POLL_SOFTIRQ:  Do all of the following:
+
+IRQ_POLL_SOFTIRQ
+----------------
+
+Do all of the following:
+
 1.	Force block-device interrupts onto some other CPU.
 2.	Initiate any block I/O and block-I/O polling on other CPUs.
 3.	Once your application has started, prevent CPU-hotplug operations
 	from being initiated from tasks that might run on the CPU to
 	be de-jittered.  (It is OK to force this CPU offline and then
 	bring it back online before you start your application.)
-TASKLET_SOFTIRQ: Do one or more of the following:
+
+TASKLET_SOFTIRQ
+---------------
+
+Do one or more of the following:
+
 1.	Avoid use of drivers that use tasklets.  (Such drivers will contain
 	calls to things like tasklet_schedule().)
 2.	Convert all drivers that you must use from tasklets to workqueues.
 3.	Force interrupts for drivers using tasklets onto other CPUs,
 	and also do I/O involving these drivers on other CPUs.
-SCHED_SOFTIRQ: Do all of the following:
+
+SCHED_SOFTIRQ
+-------------
+
+Do all of the following:
+
 1.	Avoid sending scheduler IPIs to the CPU to be de-jittered,
 	for example, ensure that at most one runnable kthread is present
 	on that CPU.  If a thread that expects to run on the de-jittered
@@ -120,7 +171,12 @@ SCHED_SOFTIRQ: Do all of the following:
 	forcing both kernel threads and interrupts to execute elsewhere.
 	This further reduces the number of scheduler-clock interrupts
 	received by the de-jittered CPU.
-HRTIMER_SOFTIRQ:  Do all of the following:
+
+HRTIMER_SOFTIRQ
+---------------
+
+Do all of the following:
+
 1.	To the extent possible, keep the CPU out of the kernel when it
 	is non-idle.  For example, avoid system calls and force both
 	kernel threads and interrupts to execute elsewhere.
@@ -131,9 +187,15 @@ HRTIMER_SOFTIRQ:  Do all of the following:
 	back online.  Once you have onlined the CPUs in question, do not
 	offline any other CPUs, because doing so could force the timer
 	back onto one of the CPUs in question.
-RCU_SOFTIRQ:  Do at least one of the following:
+
+RCU_SOFTIRQ
+-----------
+
+Do at least one of the following:
+
 1.	Offload callbacks and keep the CPU in either dyntick-idle or
 	adaptive-ticks state by doing all of the following:
+
 	a.	CONFIG_NO_HZ_FULL=y and ensure that the CPU to be
 		de-jittered is marked as an adaptive-ticks CPU using the
 		"nohz_full=" boot parameter.  Bind the rcuo kthreads to
@@ -142,8 +204,10 @@ RCU_SOFTIRQ:  Do at least one of the following:
 		when it is non-idle, for example, by avoiding system
 		calls and by forcing both kernel threads and interrupts
 		to execute elsewhere.
+
 2.	Enable RCU to do its processing remotely via dyntick-idle by
 	doing all of the following:
+
 	a.	Build with CONFIG_NO_HZ=y and CONFIG_RCU_FAST_NO_HZ=y.
 	b.	Ensure that the CPU goes idle frequently, allowing other
 		CPUs to detect that it has passed through an RCU quiescent
@@ -155,15 +219,20 @@ RCU_SOFTIRQ:  Do at least one of the following:
 		calls and by forcing both kernel threads and interrupts
 		to execute elsewhere.
 
-Name: kworker/%u:%d%s (cpu, id, priority)
-Purpose: Execute workqueue requests
+Name:
+  kworker/%u:%d%s (cpu, id, priority)
+
+Purpose:
+  Execute workqueue requests
+
 To reduce its OS jitter, do any of the following:
+
 1.	Run your workload at a real-time priority, which will allow
 	preempting the kworker daemons.
 2.	A given workqueue can be made visible in the sysfs filesystem
 	by passing the WQ_SYSFS to that workqueue's alloc_workqueue().
 	Such a workqueue can be confined to a given subset of the
-	CPUs using the /sys/devices/virtual/workqueue/*/cpumask sysfs
+	CPUs using the ``/sys/devices/virtual/workqueue/*/cpumask`` sysfs
 	files.	The set of WQ_SYSFS workqueues can be displayed using
 	"ls sys/devices/virtual/workqueue".  That said, the workqueues
 	maintainer would like to caution people against indiscriminately
@@ -173,6 +242,7 @@ To reduce its OS jitter, do any of the following:
 	to remove it, even if its addition was a mistake.
 3.	Do any of the following needed to avoid jitter that your
 	application cannot tolerate:
+
 	a.	Build your kernel with CONFIG_SLUB=y rather than
 		CONFIG_SLAB=y, thus avoiding the slab allocator's periodic
 		use of each CPU's workqueues to run its cache_reap()
@@ -186,6 +256,7 @@ To reduce its OS jitter, do any of the following:
 		be able to build your kernel with CONFIG_CPU_FREQ=n to
 		avoid the CPU-frequency governor periodically running
 		on each CPU, including cs_dbs_timer() and od_dbs_timer().
+
 		WARNING:  Please check your CPU specifications to
 		make sure that this is safe on your particular system.
 	d.	As of v3.18, Christoph Lameter's on-demand vmstat workers
@@ -222,9 +293,14 @@ To reduce its OS jitter, do any of the following:
 		CONFIG_PMAC_RACKMETER=n to disable the CPU-meter,
 		avoiding OS jitter from rackmeter_do_timer().
 
-Name: rcuc/%u
-Purpose: Execute RCU callbacks in CONFIG_RCU_BOOST=y kernels.
+Name:
+  rcuc/%u
+
+Purpose:
+  Execute RCU callbacks in CONFIG_RCU_BOOST=y kernels.
+
 To reduce its OS jitter, do at least one of the following:
+
 1.	Build the kernel with CONFIG_PREEMPT=n.  This prevents these
 	kthreads from being created in the first place, and also obviates
 	the need for RCU priority boosting.  This approach is feasible
@@ -244,9 +320,14 @@ To reduce its OS jitter, do at least one of the following:
 	CPU, again preventing the rcuc/%u kthreads from having any work
 	to do.
 
-Name: rcuob/%d, rcuop/%d, and rcuos/%d
-Purpose: Offload RCU callbacks from the corresponding CPU.
+Name:
+  rcuob/%d, rcuop/%d, and rcuos/%d
+
+Purpose:
+  Offload RCU callbacks from the corresponding CPU.
+
 To reduce its OS jitter, do at least one of the following:
+
 1.	Use affinity, cgroups, or other mechanism to force these kthreads
 	to execute on some other CPU.
 2.	Build with CONFIG_RCU_NOCB_CPU=n, which will prevent these
@@ -254,9 +335,14 @@ To reduce its OS jitter, do at least one of the following:
 	note that this will not eliminate OS jitter, but will instead
 	shift it to RCU_SOFTIRQ.
 
-Name: watchdog/%u
-Purpose: Detect software lockups on each CPU.
+Name:
+  watchdog/%u
+
+Purpose:
+  Detect software lockups on each CPU.
+
 To reduce its OS jitter, do at least one of the following:
+
 1.	Build with CONFIG_LOCKUP_DETECTOR=n, which will prevent these
 	kthreads from being created in the first place.
 2.	Boot with "nosoftlockup=0", which will also prevent these kthreads
-- 
2.9.4


* [PATCH v2 09/29] kobject.txt: standardize document format
  2017-06-17 15:26 [PATCH v2 01/29] IPMI.txt: standardize document format Mauro Carvalho Chehab
                   ` (6 preceding siblings ...)
  2017-06-17 15:26 ` [PATCH v2 08/29] kernel-per-CPU-kthreads.txt: " Mauro Carvalho Chehab
@ 2017-06-17 15:26 ` Mauro Carvalho Chehab
  2017-06-17 15:26 ` [PATCH v2 10/29] kprobes.txt: " Mauro Carvalho Chehab
                   ` (20 subsequent siblings)
  28 siblings, 0 replies; 35+ messages in thread
From: Mauro Carvalho Chehab @ 2017-06-17 15:26 UTC (permalink / raw)
  To: Linux Doc Mailing List
  Cc: Mauro Carvalho Chehab, Mauro Carvalho Chehab, linux-kernel,
	Jonathan Corbet, Greg Kroah-Hartman

Each text file under Documentation follows a different
format. Some don't even have titles!

Change its representation to follow the adopted standard,
using ReST markups for it to be parseable by Sphinx:

- Add markups for titles;
- mark literal blocks as such;
- add needed whitespace/blank lines;
- use :Author: and :Last updated: for authorship.
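
Concretely, the free-form author and date lines become a ReST field
list (this is the form used in the hunk below):

```rst
:Author: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
:Last updated: December 19, 2007
```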

Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
---
 Documentation/kobject.txt | 69 ++++++++++++++++++++++++++++-------------------
 1 file changed, 42 insertions(+), 27 deletions(-)

diff --git a/Documentation/kobject.txt b/Documentation/kobject.txt
index 1be59a3a521c..fc9485d79061 100644
--- a/Documentation/kobject.txt
+++ b/Documentation/kobject.txt
@@ -1,13 +1,13 @@
+=====================================================================
 Everything you never wanted to know about kobjects, ksets, and ktypes
+=====================================================================
 
-Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+:Author: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+:Last updated: December 19, 2007
 
 Based on an original article by Jon Corbet for lwn.net written October 1,
 2003 and located at http://lwn.net/Articles/51437/
 
-Last updated December 19, 2007
-
-
 Part of the difficulty in understanding the driver model - and the kobject
 abstraction upon which it is built - is that there is no obvious starting
 place. Dealing with kobjects requires understanding a few different types,
@@ -47,6 +47,7 @@ approach will be taken, so we'll go back to kobjects.
 
 
 Embedding kobjects
+==================
 
 It is rare for kernel code to create a standalone kobject, with one major
 exception explained below.  Instead, kobjects are used to control access to
@@ -65,7 +66,7 @@ their own, but are invariably found embedded in the larger objects of
 interest.)
 
 So, for example, the UIO code in drivers/uio/uio.c has a structure that
-defines the memory region associated with a uio device:
+defines the memory region associated with a uio device::
 
     struct uio_map {
 	struct kobject kobj;
@@ -77,7 +78,7 @@ just a matter of using the kobj member.  Code that works with kobjects will
 often have the opposite problem, however: given a struct kobject pointer,
 what is the pointer to the containing structure?  You must avoid tricks
 (such as assuming that the kobject is at the beginning of the structure)
-and, instead, use the container_of() macro, found in <linux/kernel.h>:
+and, instead, use the container_of() macro, found in <linux/kernel.h>::
 
     container_of(pointer, type, member)
 
@@ -90,13 +91,13 @@ where:
 The return value from container_of() is a pointer to the corresponding
 container type. So, for example, a pointer "kp" to a struct kobject
 embedded *within* a struct uio_map could be converted to a pointer to the
-*containing* uio_map structure with:
+*containing* uio_map structure with::
 
     struct uio_map *u_map = container_of(kp, struct uio_map, kobj);
 
 For convenience, programmers often define a simple macro for "back-casting"
 kobject pointers to the containing type.  Exactly this happens in the
-earlier drivers/uio/uio.c, as you can see here:
+earlier drivers/uio/uio.c, as you can see here::
 
     struct uio_map {
         struct kobject kobj;
@@ -106,23 +107,25 @@ earlier drivers/uio/uio.c, as you can see here:
     #define to_map(map) container_of(map, struct uio_map, kobj)
 
 where the macro argument "map" is a pointer to the struct kobject in
-question.  That macro is subsequently invoked with:
+question.  That macro is subsequently invoked with::
 
     struct uio_map *map = to_map(kobj);
 
 
 Initialization of kobjects
+==========================
 
 Code which creates a kobject must, of course, initialize that object. Some
-of the internal fields are setup with a (mandatory) call to kobject_init():
+of the internal fields are setup with a (mandatory) call to kobject_init()::
 
     void kobject_init(struct kobject *kobj, struct kobj_type *ktype);
 
 The ktype is required for a kobject to be created properly, as every kobject
 must have an associated kobj_type.  After calling kobject_init(), to
-register the kobject with sysfs, the function kobject_add() must be called:
+register the kobject with sysfs, the function kobject_add() must be called::
 
-    int kobject_add(struct kobject *kobj, struct kobject *parent, const char *fmt, ...);
+    int kobject_add(struct kobject *kobj, struct kobject *parent,
+		    const char *fmt, ...);
 
 This sets up the parent of the kobject and the name for the kobject
 properly.  If the kobject is to be associated with a specific kset,
@@ -133,7 +136,7 @@ kset itself.
 
 As the name of the kobject is set when it is added to the kernel, the name
 of the kobject should never be manipulated directly.  If you must change
-the name of the kobject, call kobject_rename():
+the name of the kobject, call kobject_rename()::
 
     int kobject_rename(struct kobject *kobj, const char *new_name);
 
@@ -146,12 +149,12 @@ is being removed.  If your code needs to call this function, it is
 incorrect and needs to be fixed.
 
 To properly access the name of the kobject, use the function
-kobject_name():
+kobject_name()::
 
     const char *kobject_name(const struct kobject * kobj);
 
 There is a helper function to both initialize and add the kobject to the
-kernel at the same time, called surprisingly enough kobject_init_and_add():
+kernel at the same time, called surprisingly enough kobject_init_and_add()::
 
     int kobject_init_and_add(struct kobject *kobj, struct kobj_type *ktype,
                              struct kobject *parent, const char *fmt, ...);
@@ -161,10 +164,11 @@ kobject_add() functions described above.
 
 
 Uevents
+=======
 
 After a kobject has been registered with the kobject core, you need to
 announce to the world that it has been created.  This can be done with a
-call to kobject_uevent():
+call to kobject_uevent()::
 
     int kobject_uevent(struct kobject *kobj, enum kobject_action action);
 
@@ -180,11 +184,12 @@ hand.
 
 
 Reference counts
+================
 
 One of the key functions of a kobject is to serve as a reference counter
 for the object in which it is embedded. As long as references to the object
 exist, the object (and the code which supports it) must continue to exist.
-The low-level functions for manipulating a kobject's reference counts are:
+The low-level functions for manipulating a kobject's reference counts are::
 
     struct kobject *kobject_get(struct kobject *kobj);
     void kobject_put(struct kobject *kobj);
@@ -209,21 +214,24 @@ file Documentation/kref.txt in the Linux kernel source tree.
 
 
 Creating "simple" kobjects
+==========================
 
 Sometimes all that a developer wants is a way to create a simple directory
 in the sysfs hierarchy, and not have to mess with the whole complication of
 ksets, show and store functions, and other details.  This is the one
 exception where a single kobject should be created.  To create such an
-entry, use the function:
+entry, use the function::
 
     struct kobject *kobject_create_and_add(char *name, struct kobject *parent);
 
 This function will create a kobject and place it in sysfs in the location
 underneath the specified parent kobject.  To create simple attributes
-associated with this kobject, use:
+associated with this kobject, use::
 
     int sysfs_create_file(struct kobject *kobj, struct attribute *attr);
-or
+
+or::
+
     int sysfs_create_group(struct kobject *kobj, struct attribute_group *grp);
 
 Both types of attributes used here, with a kobject that has been created
@@ -236,6 +244,7 @@ implementation of a simple kobject and attributes.
 
 
 ktypes and release methods
+==========================
 
 One important thing still missing from the discussion is what happens to a
 kobject when its reference count reaches zero. The code which created the
@@ -257,7 +266,7 @@ is good practice to always use kobject_put() after kobject_init() to avoid
 errors creeping in.
 
 This notification is done through a kobject's release() method. Usually
-such a method has a form like:
+such a method has a form like::
 
     void my_object_release(struct kobject *kobj)
     {
@@ -281,7 +290,7 @@ leak in the kobject core, which makes people unhappy.
 
 Interestingly, the release() method is not stored in the kobject itself;
 instead, it is associated with the ktype. So let us introduce struct
-kobj_type:
+kobj_type::
 
     struct kobj_type {
 	    void (*release)(struct kobject *kobj);
@@ -306,6 +315,7 @@ automatically created for any kobject that is registered with this ktype.
 
 
 ksets
+=====
 
 A kset is merely a collection of kobjects that want to be associated with
 each other.  There is no restriction that they be of the same ktype, but be
@@ -335,13 +345,16 @@ kobject) in their parent.
 
 As a kset contains a kobject within it, it should always be dynamically
 created and never declared statically or on the stack.  To create a new
-kset use:
+kset use::
+
   struct kset *kset_create_and_add(const char *name,
 				   struct kset_uevent_ops *u,
 				   struct kobject *parent);
 
-When you are finished with the kset, call:
+When you are finished with the kset, call::
+
   void kset_unregister(struct kset *kset);
+
 to destroy it.  This removes the kset from sysfs and decrements its reference
 count.  When the reference count goes to zero, the kset will be released.
 Because other references to the kset may still exist, the release may happen
@@ -351,14 +364,14 @@ An example of using a kset can be seen in the
 samples/kobject/kset-example.c file in the kernel tree.
 
 If a kset wishes to control the uevent operations of the kobjects
-associated with it, it can use the struct kset_uevent_ops to handle it:
+associated with it, it can use the struct kset_uevent_ops to handle it::
 
-struct kset_uevent_ops {
+  struct kset_uevent_ops {
         int (*filter)(struct kset *kset, struct kobject *kobj);
         const char *(*name)(struct kset *kset, struct kobject *kobj);
         int (*uevent)(struct kset *kset, struct kobject *kobj,
                       struct kobj_uevent_env *env);
-};
+  };
 
 
 The filter function allows a kset to prevent a uevent from being emitted to
@@ -386,6 +399,7 @@ added below the parent kobject.
 
 
 Kobject removal
+===============
 
 After a kobject has been registered with the kobject core successfully, it
 must be cleaned up when the code is finished with it.  To do that, call
@@ -409,6 +423,7 @@ called, and the objects in the former circle release each other.
 
 
 Example code to copy from
+=========================
 
 For a more complete example of using ksets and kobjects properly, see the
 example programs samples/kobject/{kobject-example.c,kset-example.c},
-- 
2.9.4


* [PATCH v2 10/29] kprobes.txt: standardize document format
  2017-06-17 15:26 [PATCH v2 01/29] IPMI.txt: standardize document format Mauro Carvalho Chehab
                   ` (7 preceding siblings ...)
  2017-06-17 15:26 ` [PATCH v2 09/29] kobject.txt: " Mauro Carvalho Chehab
@ 2017-06-17 15:26 ` Mauro Carvalho Chehab
  2017-06-17 15:26 ` [PATCH v2 11/29] kref.txt: " Mauro Carvalho Chehab
                   ` (19 subsequent siblings)
  28 siblings, 0 replies; 35+ messages in thread
From: Mauro Carvalho Chehab @ 2017-06-17 15:26 UTC (permalink / raw)
  To: Linux Doc Mailing List
  Cc: Mauro Carvalho Chehab, Mauro Carvalho Chehab, linux-kernel,
	Jonathan Corbet, Ananth N Mavinakayanahalli,
	Anil S Keshavamurthy, David S. Miller, Masami Hiramatsu

Each text file under Documentation follows a different
format. Some don't even have titles!

Change its representation to follow the adopted standard,
using ReST markups for it to be parseable by Sphinx:

- comment the contents;
- add proper markups for titles;
- mark literal blocks as such;
- use :Author: for authorship;
- use the right markups for footnotes;
- escape some literals that would otherwise cause problems;
- fix indentation and add blank lines where needed.
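
For example, the old "(*)" footnote convention becomes a numbered
ReST footnote plus a cross-reference, as in the hunk below:

```rst
can trap at almost any kernel code address [1]_, specifying a handler
routine to be invoked when the breakpoint is hit.

.. [1] some parts of the kernel code can not be trapped, see
       :ref:`kprobes_blacklist`
```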

Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
---
 Documentation/kprobes.txt | 468 +++++++++++++++++++++++++++-------------------
 1 file changed, 276 insertions(+), 192 deletions(-)

diff --git a/Documentation/kprobes.txt b/Documentation/kprobes.txt
index 1f6d45abfe42..bb5ff6d04bac 100644
--- a/Documentation/kprobes.txt
+++ b/Documentation/kprobes.txt
@@ -1,30 +1,36 @@
-Title	: Kernel Probes (Kprobes)
-Authors	: Jim Keniston <jkenisto@us.ibm.com>
-	: Prasanna S Panchamukhi <prasanna.panchamukhi@gmail.com>
-	: Masami Hiramatsu <mhiramat@redhat.com>
+=======================
+Kernel Probes (Kprobes)
+=======================
 
-CONTENTS
+:Author: Jim Keniston <jkenisto@us.ibm.com>
+:Author: Prasanna S Panchamukhi <prasanna.panchamukhi@gmail.com>
+:Author: Masami Hiramatsu <mhiramat@redhat.com>
 
-1. Concepts: Kprobes, Jprobes, Return Probes
-2. Architectures Supported
-3. Configuring Kprobes
-4. API Reference
-5. Kprobes Features and Limitations
-6. Probe Overhead
-7. TODO
-8. Kprobes Example
-9. Jprobes Example
-10. Kretprobes Example
-Appendix A: The kprobes debugfs interface
-Appendix B: The kprobes sysctl interface
+.. CONTENTS
 
-1. Concepts: Kprobes, Jprobes, Return Probes
+  1. Concepts: Kprobes, Jprobes, Return Probes
+  2. Architectures Supported
+  3. Configuring Kprobes
+  4. API Reference
+  5. Kprobes Features and Limitations
+  6. Probe Overhead
+  7. TODO
+  8. Kprobes Example
+  9. Jprobes Example
+  10. Kretprobes Example
+  Appendix A: The kprobes debugfs interface
+  Appendix B: The kprobes sysctl interface
+
+Concepts: Kprobes, Jprobes, Return Probes
+=========================================
 
 Kprobes enables you to dynamically break into any kernel routine and
 collect debugging and performance information non-disruptively. You
-can trap at almost any kernel code address(*), specifying a handler
+can trap at almost any kernel code address [1]_, specifying a handler
 routine to be invoked when the breakpoint is hit.
-(*: some parts of the kernel code can not be trapped, see 1.5 Blacklist)
+
+.. [1] some parts of the kernel code can not be trapped, see
+       :ref:`kprobes_blacklist`)
 
 There are currently three types of probes: kprobes, jprobes, and
 kretprobes (also called return probes).  A kprobe can be inserted
@@ -40,8 +46,8 @@ registration function such as register_kprobe() specifies where
 the probe is to be inserted and what handler is to be called when
 the probe is hit.
 
-There are also register_/unregister_*probes() functions for batch
-registration/unregistration of a group of *probes. These functions
+There are also ``register_/unregister_*probes()`` functions for batch
+registration/unregistration of a group of ``*probes``. These functions
 can speed up unregistration process when you have to unregister
 a lot of probes at once.
 
@@ -51,9 +57,10 @@ things that you'll need to know in order to make the best use of
 Kprobes -- e.g., the difference between a pre_handler and
 a post_handler, and how to use the maxactive and nmissed fields of
 a kretprobe.  But if you're in a hurry to start using Kprobes, you
-can skip ahead to section 2.
+can skip ahead to :ref:`kprobes_archs_supported`.
 
-1.1 How Does a Kprobe Work?
+How Does a Kprobe Work?
+-----------------------
 
 When a kprobe is registered, Kprobes makes a copy of the probed
 instruction and replaces the first byte(s) of the probed instruction
@@ -75,7 +82,8 @@ After the instruction is single-stepped, Kprobes executes the
 "post_handler," if any, that is associated with the kprobe.
 Execution then continues with the instruction following the probepoint.
 
-1.2 How Does a Jprobe Work?
+How Does a Jprobe Work?
+-----------------------
 
 A jprobe is implemented using a kprobe that is placed on a function's
 entry point.  It employs a simple mirroring principle to allow
@@ -113,9 +121,11 @@ more than eight function arguments, an argument of more than sixteen
 bytes, or more than 64 bytes of argument data, depending on
 architecture).
 
-1.3 Return Probes
+Return Probes
+-------------
 
-1.3.1 How Does a Return Probe Work?
+How Does a Return Probe Work?
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
 When you call register_kretprobe(), Kprobes establishes a kprobe at
 the entry to the function.  When the probed function is called and this
@@ -150,7 +160,8 @@ zero when the return probe is registered, and is incremented every
 time the probed function is entered but there is no kretprobe_instance
 object available for establishing the return probe.
 
-1.3.2 Kretprobe entry-handler
+Kretprobe entry-handler
+^^^^^^^^^^^^^^^^^^^^^^^
 
 Kretprobes also provides an optional user-specified handler which runs
 on function entry. This handler is specified by setting the entry_handler
@@ -174,7 +185,10 @@ In case probed function is entered but there is no kretprobe_instance
 object available, then in addition to incrementing the nmissed count,
 the user entry_handler invocation is also skipped.
 
-1.4 How Does Jump Optimization Work?
+.. _kprobes_jump_optimization:
+
+How Does Jump Optimization Work?
+--------------------------------
 
 If your kernel is built with CONFIG_OPTPROBES=y (currently this flag
 is automatically set 'y' on x86/x86-64, non-preemptive kernel) and
@@ -182,53 +196,60 @@ the "debug.kprobes_optimization" kernel parameter is set to 1 (see
 sysctl(8)), Kprobes tries to reduce probe-hit overhead by using a jump
 instruction instead of a breakpoint instruction at each probepoint.
 
-1.4.1 Init a Kprobe
+Init a Kprobe
+^^^^^^^^^^^^^
 
 When a probe is registered, before attempting this optimization,
 Kprobes inserts an ordinary, breakpoint-based kprobe at the specified
 address. So, even if it's not possible to optimize this particular
 probepoint, there'll be a probe there.
 
-1.4.2 Safety Check
+Safety Check
+^^^^^^^^^^^^
 
 Before optimizing a probe, Kprobes performs the following safety checks:
 
 - Kprobes verifies that the region that will be replaced by the jump
-instruction (the "optimized region") lies entirely within one function.
-(A jump instruction is multiple bytes, and so may overlay multiple
-instructions.)
+  instruction (the "optimized region") lies entirely within one function.
+  (A jump instruction is multiple bytes, and so may overlay multiple
+  instructions.)
 
 - Kprobes analyzes the entire function and verifies that there is no
-jump into the optimized region.  Specifically:
+  jump into the optimized region.  Specifically:
+
   - the function contains no indirect jump;
   - the function contains no instruction that causes an exception (since
-  the fixup code triggered by the exception could jump back into the
-  optimized region -- Kprobes checks the exception tables to verify this);
-  and
+    the fixup code triggered by the exception could jump back into the
+    optimized region -- Kprobes checks the exception tables to verify this);
   - there is no near jump to the optimized region (other than to the first
-  byte).
+    byte).
 
 - For each instruction in the optimized region, Kprobes verifies that
-the instruction can be executed out of line.
+  the instruction can be executed out of line.
 
-1.4.3 Preparing Detour Buffer
+Preparing Detour Buffer
+^^^^^^^^^^^^^^^^^^^^^^^
 
 Next, Kprobes prepares a "detour" buffer, which contains the following
 instruction sequence:
+
 - code to push the CPU's registers (emulating a breakpoint trap)
 - a call to the trampoline code which calls user's probe handlers.
 - code to restore registers
 - the instructions from the optimized region
 - a jump back to the original execution path.
 
-1.4.4 Pre-optimization
+Pre-optimization
+^^^^^^^^^^^^^^^^
 
 After preparing the detour buffer, Kprobes verifies that none of the
 following situations exist:
+
 - The probe has either a break_handler (i.e., it's a jprobe) or a
-post_handler.
+  post_handler.
 - Other instructions in the optimized region are probed.
 - The probe is disabled.
+
 In any of the above cases, Kprobes won't start optimizing the probe.
 Since these are temporary situations, Kprobes tries to start
 optimizing it again if the situation is changed.
@@ -240,21 +261,23 @@ Kprobes returns control to the original instruction path by setting
 the CPU's instruction pointer to the copied code in the detour buffer
 -- thus at least avoiding the single-step.
 
-1.4.5 Optimization
+Optimization
+^^^^^^^^^^^^
 
 The Kprobe-optimizer doesn't insert the jump instruction immediately;
 rather, it calls synchronize_sched() for safety first, because it's
 possible for a CPU to be interrupted in the middle of executing the
-optimized region(*).  As you know, synchronize_sched() can ensure
+optimized region [3]_.  As you know, synchronize_sched() can ensure
 that all interruptions that were active when synchronize_sched()
 was called are done, but only if CONFIG_PREEMPT=n.  So, this version
-of kprobe optimization supports only kernels with CONFIG_PREEMPT=n.(**)
+of kprobe optimization supports only kernels with CONFIG_PREEMPT=n [4]_.
 
 After that, the Kprobe-optimizer calls stop_machine() to replace
 the optimized region with a jump instruction to the detour buffer,
 using text_poke_smp().
 
-1.4.6 Unoptimization
+Unoptimization
+^^^^^^^^^^^^^^
 
 When an optimized kprobe is unregistered, disabled, or blocked by
 another kprobe, it will be unoptimized.  If this happens before
@@ -263,15 +286,15 @@ optimized list.  If the optimization has been done, the jump is
 replaced with the original code (except for an int3 breakpoint in
 the first byte) by using text_poke_smp().
 
-(*)Please imagine that the 2nd instruction is interrupted and then
-the optimizer replaces the 2nd instruction with the jump *address*
-while the interrupt handler is running. When the interrupt
-returns to original address, there is no valid instruction,
-and it causes an unexpected result.
+.. [3] Please imagine that the 2nd instruction is interrupted and then
+   the optimizer replaces the 2nd instruction with the jump *address*
+   while the interrupt handler is running. When the interrupt
+   returns to original address, there is no valid instruction,
+   and it causes an unexpected result.
 
-(**)This optimization-safety checking may be replaced with the
-stop-machine method that ksplice uses for supporting a CONFIG_PREEMPT=y
-kernel.
+.. [4] This optimization-safety checking may be replaced with the
+   stop-machine method that ksplice uses for supporting a CONFIG_PREEMPT=y
+   kernel.
 
 NOTE for geeks:
 The jump optimization changes the kprobe's pre_handler behavior.
@@ -280,11 +303,17 @@ path by changing regs->ip and returning 1.  However, when the probe
 is optimized, that modification is ignored.  Thus, if you want to
 tweak the kernel's execution path, you need to suppress optimization,
 using one of the following techniques:
+
 - Specify an empty function for the kprobe's post_handler or break_handler.
- or
+
+or
+
 - Execute 'sysctl -w debug.kprobes_optimization=n'
 
-1.5 Blacklist
+.. _kprobes_blacklist:
+
+Blacklist
+---------
 
 Kprobes can probe most of the kernel except itself. This means
 that there are some functions where kprobes cannot probe. Probing
@@ -297,7 +326,10 @@ to specify a blacklisted function.
 Kprobes checks the given probe address against the blacklist and
 rejects registering it, if the given address is in the blacklist.
 
-2. Architectures Supported
+.. _kprobes_archs_supported:
+
+Architectures Supported
+=======================
 
 Kprobes, jprobes, and return probes are implemented on the following
 architectures:
@@ -312,7 +344,8 @@ architectures:
 - mips
 - s390
 
-3. Configuring Kprobes
+Configuring Kprobes
+===================
 
 When configuring the kernel using make menuconfig/xconfig/oldconfig,
 ensure that CONFIG_KPROBES is set to "y". Under "General setup", look
@@ -331,7 +364,8 @@ it useful to "Compile the kernel with debug info" (CONFIG_DEBUG_INFO),
 so you can use "objdump -d -l vmlinux" to see the source-to-object
 code mapping.
 
-4. API Reference
+API Reference
+=============
 
 The Kprobes API includes a "register" function and an "unregister"
 function for each type of probe. The API also includes "register_*probes"
@@ -340,10 +374,13 @@ Here are terse, mini-man-page specifications for these functions and
 the associated probe handlers that you'll write. See the files in the
 samples/kprobes/ sub-directory for examples.
 
-4.1 register_kprobe
+register_kprobe
+---------------
 
-#include <linux/kprobes.h>
-int register_kprobe(struct kprobe *kp);
+::
+
+	#include <linux/kprobes.h>
+	int register_kprobe(struct kprobe *kp);
 
 Sets a breakpoint at the address kp->addr.  When the breakpoint is
 hit, Kprobes calls kp->pre_handler.  After the probed instruction
@@ -354,61 +391,68 @@ kp->fault_handler.  Any or all handlers can be NULL. If kp->flags
 is set KPROBE_FLAG_DISABLED, that kp will be registered but disabled,
 so, its handlers aren't hit until calling enable_kprobe(kp).
 
-NOTE:
-1. With the introduction of the "symbol_name" field to struct kprobe,
-the probepoint address resolution will now be taken care of by the kernel.
-The following will now work:
+.. note::
+
+   1. With the introduction of the "symbol_name" field to struct kprobe,
+      the probepoint address resolution will now be taken care of by the kernel.
+      The following will now work::
 
 	kp.symbol_name = "symbol_name";
 
-(64-bit powerpc intricacies such as function descriptors are handled
-transparently)
+      (64-bit powerpc intricacies such as function descriptors are handled
+      transparently)
 
-2. Use the "offset" field of struct kprobe if the offset into the symbol
-to install a probepoint is known. This field is used to calculate the
-probepoint.
+   2. Use the "offset" field of struct kprobe if the offset into the symbol
+      to install a probepoint is known. This field is used to calculate the
+      probepoint.
 
-3. Specify either the kprobe "symbol_name" OR the "addr". If both are
-specified, kprobe registration will fail with -EINVAL.
+   3. Specify either the kprobe "symbol_name" OR the "addr". If both are
+      specified, kprobe registration will fail with -EINVAL.
 
-4. With CISC architectures (such as i386 and x86_64), the kprobes code
-does not validate if the kprobe.addr is at an instruction boundary.
-Use "offset" with caution.
+   4. With CISC architectures (such as i386 and x86_64), the kprobes code
+      does not validate if the kprobe.addr is at an instruction boundary.
+      Use "offset" with caution.
 
 register_kprobe() returns 0 on success, or a negative errno otherwise.
 
-User's pre-handler (kp->pre_handler):
-#include <linux/kprobes.h>
-#include <linux/ptrace.h>
-int pre_handler(struct kprobe *p, struct pt_regs *regs);
+User's pre-handler (kp->pre_handler)::
+
+	#include <linux/kprobes.h>
+	#include <linux/ptrace.h>
+	int pre_handler(struct kprobe *p, struct pt_regs *regs);
 
 Called with p pointing to the kprobe associated with the breakpoint,
 and regs pointing to the struct containing the registers saved when
 the breakpoint was hit.  Return 0 here unless you're a Kprobes geek.
 
-User's post-handler (kp->post_handler):
-#include <linux/kprobes.h>
-#include <linux/ptrace.h>
-void post_handler(struct kprobe *p, struct pt_regs *regs,
-	unsigned long flags);
+User's post-handler (kp->post_handler)::
+
+	#include <linux/kprobes.h>
+	#include <linux/ptrace.h>
+	void post_handler(struct kprobe *p, struct pt_regs *regs,
+			  unsigned long flags);
 
 p and regs are as described for the pre_handler.  flags always seems
 to be zero.
 
-User's fault-handler (kp->fault_handler):
-#include <linux/kprobes.h>
-#include <linux/ptrace.h>
-int fault_handler(struct kprobe *p, struct pt_regs *regs, int trapnr);
+User's fault-handler (kp->fault_handler)::
+
+	#include <linux/kprobes.h>
+	#include <linux/ptrace.h>
+	int fault_handler(struct kprobe *p, struct pt_regs *regs, int trapnr);
 
 p and regs are as described for the pre_handler.  trapnr is the
 architecture-specific trap number associated with the fault (e.g.,
 on i386, 13 for a general protection fault or 14 for a page fault).
 Returns 1 if it successfully handled the exception.
 
-4.2 register_jprobe
+register_jprobe
+---------------
 
-#include <linux/kprobes.h>
-int register_jprobe(struct jprobe *jp)
+::
+
+	#include <linux/kprobes.h>
+	int register_jprobe(struct jprobe *jp)
 
 Sets a breakpoint at the address jp->kp.addr, which must be the address
 of the first instruction of a function.  When the breakpoint is hit,
@@ -423,10 +467,13 @@ declaration must match.
 
 register_jprobe() returns 0 on success, or a negative errno otherwise.
 
-4.3 register_kretprobe
+register_kretprobe
+------------------
 
-#include <linux/kprobes.h>
-int register_kretprobe(struct kretprobe *rp);
+::
+
+	#include <linux/kprobes.h>
+	int register_kretprobe(struct kretprobe *rp);
 
 Establishes a return probe for the function whose address is
 rp->kp.addr.  When that function returns, Kprobes calls rp->handler.
@@ -436,14 +483,17 @@ register_kretprobe(); see "How Does a Return Probe Work?" for details.
 register_kretprobe() returns 0 on success, or a negative errno
 otherwise.
 
-User's return-probe handler (rp->handler):
-#include <linux/kprobes.h>
-#include <linux/ptrace.h>
-int kretprobe_handler(struct kretprobe_instance *ri, struct pt_regs *regs);
+User's return-probe handler (rp->handler)::
+
+	#include <linux/kprobes.h>
+	#include <linux/ptrace.h>
+	int kretprobe_handler(struct kretprobe_instance *ri,
+			      struct pt_regs *regs);
 
 regs is as described for kprobe.pre_handler.  ri points to the
 kretprobe_instance object, of which the following fields may be
 of interest:
+
 - ret_addr: the return address
 - rp: points to the corresponding kretprobe object
 - task: points to the corresponding task struct
@@ -456,74 +506,94 @@ the architecture's ABI.
 
 The handler's return value is currently ignored.
 
-4.4 unregister_*probe
+unregister_*probe
+------------------
 
-#include <linux/kprobes.h>
-void unregister_kprobe(struct kprobe *kp);
-void unregister_jprobe(struct jprobe *jp);
-void unregister_kretprobe(struct kretprobe *rp);
+::
+
+	#include <linux/kprobes.h>
+	void unregister_kprobe(struct kprobe *kp);
+	void unregister_jprobe(struct jprobe *jp);
+	void unregister_kretprobe(struct kretprobe *rp);
 
 Removes the specified probe.  The unregister function can be called
 at any time after the probe has been registered.
 
-NOTE:
-If the functions find an incorrect probe (ex. an unregistered probe),
-they clear the addr field of the probe.
+.. note::
 
-4.5 register_*probes
+   If the functions find an incorrect probe (ex. an unregistered probe),
+   they clear the addr field of the probe.
 
-#include <linux/kprobes.h>
-int register_kprobes(struct kprobe **kps, int num);
-int register_kretprobes(struct kretprobe **rps, int num);
-int register_jprobes(struct jprobe **jps, int num);
+register_*probes
+----------------
+
+::
+
+	#include <linux/kprobes.h>
+	int register_kprobes(struct kprobe **kps, int num);
+	int register_kretprobes(struct kretprobe **rps, int num);
+	int register_jprobes(struct jprobe **jps, int num);
 
 Registers each of the num probes in the specified array.  If any
 error occurs during registration, all probes in the array, up to
 the bad probe, are safely unregistered before the register_*probes
 function returns.
-- kps/rps/jps: an array of pointers to *probe data structures
+
+- kps/rps/jps: an array of pointers to ``*probe`` data structures
 - num: the number of the array entries.
 
-NOTE:
-You have to allocate(or define) an array of pointers and set all
-of the array entries before using these functions.
+.. note::
 
-4.6 unregister_*probes
+   You have to allocate (or define) an array of pointers and set all
+   of the array entries before using these functions.
 
-#include <linux/kprobes.h>
-void unregister_kprobes(struct kprobe **kps, int num);
-void unregister_kretprobes(struct kretprobe **rps, int num);
-void unregister_jprobes(struct jprobe **jps, int num);
+unregister_*probes
+------------------
+
+::
+
+	#include <linux/kprobes.h>
+	void unregister_kprobes(struct kprobe **kps, int num);
+	void unregister_kretprobes(struct kretprobe **rps, int num);
+	void unregister_jprobes(struct jprobe **jps, int num);
 
 Removes each of the num probes in the specified array at once.
 
-NOTE:
-If the functions find some incorrect probes (ex. unregistered
-probes) in the specified array, they clear the addr field of those
-incorrect probes. However, other probes in the array are
-unregistered correctly.
+.. note::
 
-4.7 disable_*probe
+   If the functions find some incorrect probes (ex. unregistered
+   probes) in the specified array, they clear the addr field of those
+   incorrect probes. However, other probes in the array are
+   unregistered correctly.
 
-#include <linux/kprobes.h>
-int disable_kprobe(struct kprobe *kp);
-int disable_kretprobe(struct kretprobe *rp);
-int disable_jprobe(struct jprobe *jp);
+disable_*probe
+--------------
 
-Temporarily disables the specified *probe. You can enable it again by using
+::
+
+	#include <linux/kprobes.h>
+	int disable_kprobe(struct kprobe *kp);
+	int disable_kretprobe(struct kretprobe *rp);
+	int disable_jprobe(struct jprobe *jp);
+
+Temporarily disables the specified ``*probe``. You can enable it again by using
 enable_*probe(). You must specify the probe which has been registered.
 
-4.8 enable_*probe
+enable_*probe
+-------------
 
-#include <linux/kprobes.h>
-int enable_kprobe(struct kprobe *kp);
-int enable_kretprobe(struct kretprobe *rp);
-int enable_jprobe(struct jprobe *jp);
+::
 
-Enables *probe which has been disabled by disable_*probe(). You must specify
+	#include <linux/kprobes.h>
+	int enable_kprobe(struct kprobe *kp);
+	int enable_kretprobe(struct kretprobe *rp);
+	int enable_jprobe(struct jprobe *jp);
+
+Enables ``*probe`` which has been disabled by disable_*probe(). You must specify
 the probe which has been registered.
 
-5. Kprobes Features and Limitations
+Kprobes Features and Limitations
+================================
 
 Kprobes allows multiple probes at the same address.  Currently,
 however, there cannot be multiple jprobes on the same function at
@@ -538,7 +608,7 @@ are discussed in this section.
 
 The register_*probe functions will return -EINVAL if you attempt
 to install a probe in the code that implements Kprobes (mostly
-kernel/kprobes.c and arch/*/kernel/kprobes.c, but also functions such
+kernel/kprobes.c and ``arch/*/kernel/kprobes.c``, but also functions such
 as do_page_fault and notifier_call_chain).
 
 If you install a probe in an inline-able function, Kprobes makes
@@ -602,19 +672,21 @@ explain it, we introduce some terminology. Imagine a 3-instruction
 sequence consisting of a two 2-byte instructions and one 3-byte
 instruction.
 
-        IA
-         |
-[-2][-1][0][1][2][3][4][5][6][7]
-        [ins1][ins2][  ins3 ]
-	[<-     DCR       ->]
-	   [<- JTPR ->]
+::
 
-ins1: 1st Instruction
-ins2: 2nd Instruction
-ins3: 3rd Instruction
-IA:  Insertion Address
-JTPR: Jump Target Prohibition Region
-DCR: Detoured Code Region
+		IA
+		|
+	[-2][-1][0][1][2][3][4][5][6][7]
+		[ins1][ins2][  ins3 ]
+		[<-     DCR       ->]
+		[<- JTPR ->]
+
+	ins1: 1st Instruction
+	ins2: 2nd Instruction
+	ins3: 3rd Instruction
+	IA:  Insertion Address
+	JTPR: Jump Target Prohibition Region
+	DCR: Detoured Code Region
 
 The instructions in DCR are copied to the out-of-line buffer
 of the kprobe, because the bytes in DCR are replaced by
@@ -628,7 +700,8 @@ d) DCR must not straddle the border between functions.
 Anyway, these limitations are checked by the in-kernel instruction
 decoder, so you don't need to worry about that.
 
-6. Probe Overhead
+Probe Overhead
+==============
 
 On a typical CPU in use in 2005, a kprobe hit takes 0.5 to 1.0
 microseconds to process.  Specifically, a benchmark that hits the same
@@ -638,70 +711,80 @@ return-probe hit typically takes 50-75% longer than a kprobe hit.
 When you have a return probe set on a function, adding a kprobe at
 the entry to that function adds essentially no overhead.
 
-Here are sample overhead figures (in usec) for different architectures.
-k = kprobe; j = jprobe; r = return probe; kr = kprobe + return probe
-on same function; jr = jprobe + return probe on same function
+Here are sample overhead figures (in usec) for different architectures::
 
-i386: Intel Pentium M, 1495 MHz, 2957.31 bogomips
-k = 0.57 usec; j = 1.00; r = 0.92; kr = 0.99; jr = 1.40
+  k = kprobe; j = jprobe; r = return probe; kr = kprobe + return probe
+  on same function; jr = jprobe + return probe on same function
 
-x86_64: AMD Opteron 246, 1994 MHz, 3971.48 bogomips
-k = 0.49 usec; j = 0.76; r = 0.80; kr = 0.82; jr = 1.07
+  i386: Intel Pentium M, 1495 MHz, 2957.31 bogomips
+  k = 0.57 usec; j = 1.00; r = 0.92; kr = 0.99; jr = 1.40
 
-ppc64: POWER5 (gr), 1656 MHz (SMT disabled, 1 virtual CPU per physical CPU)
-k = 0.77 usec; j = 1.31; r = 1.26; kr = 1.45; jr = 1.99
+  x86_64: AMD Opteron 246, 1994 MHz, 3971.48 bogomips
+  k = 0.49 usec; j = 0.76; r = 0.80; kr = 0.82; jr = 1.07
 
-6.1 Optimized Probe Overhead
+  ppc64: POWER5 (gr), 1656 MHz (SMT disabled, 1 virtual CPU per physical CPU)
+  k = 0.77 usec; j = 1.31; r = 1.26; kr = 1.45; jr = 1.99
+
+Optimized Probe Overhead
+------------------------
 
 Typically, an optimized kprobe hit takes 0.07 to 0.1 microseconds to
-process. Here are sample overhead figures (in usec) for x86 architectures.
-k = unoptimized kprobe, b = boosted (single-step skipped), o = optimized kprobe,
-r = unoptimized kretprobe, rb = boosted kretprobe, ro = optimized kretprobe.
+process. Here are sample overhead figures (in usec) for x86 architectures::
 
-i386: Intel(R) Xeon(R) E5410, 2.33GHz, 4656.90 bogomips
-k = 0.80 usec; b = 0.33; o = 0.05; r = 1.10; rb = 0.61; ro = 0.33
+  k = unoptimized kprobe, b = boosted (single-step skipped), o = optimized kprobe,
+  r = unoptimized kretprobe, rb = boosted kretprobe, ro = optimized kretprobe.
 
-x86-64: Intel(R) Xeon(R) E5410, 2.33GHz, 4656.90 bogomips
-k = 0.99 usec; b = 0.43; o = 0.06; r = 1.24; rb = 0.68; ro = 0.30
+  i386: Intel(R) Xeon(R) E5410, 2.33GHz, 4656.90 bogomips
+  k = 0.80 usec; b = 0.33; o = 0.05; r = 1.10; rb = 0.61; ro = 0.33
 
-7. TODO
+  x86-64: Intel(R) Xeon(R) E5410, 2.33GHz, 4656.90 bogomips
+  k = 0.99 usec; b = 0.43; o = 0.06; r = 1.24; rb = 0.68; ro = 0.30
+
+TODO
+====
 
 a. SystemTap (http://sourceware.org/systemtap): Provides a simplified
-programming interface for probe-based instrumentation.  Try it out.
+   programming interface for probe-based instrumentation.  Try it out.
 b. Kernel return probes for sparc64.
 c. Support for other architectures.
 d. User-space probes.
 e. Watchpoint probes (which fire on data references).
 
-8. Kprobes Example
+Kprobes Example
+===============
 
 See samples/kprobes/kprobe_example.c
 
-9. Jprobes Example
+Jprobes Example
+===============
 
 See samples/kprobes/jprobe_example.c
 
-10. Kretprobes Example
+Kretprobes Example
+==================
 
 See samples/kprobes/kretprobe_example.c
 
 For additional information on Kprobes, refer to the following URLs:
-http://www-106.ibm.com/developerworks/library/l-kprobes.html?ca=dgr-lnxw42Kprobe
-http://www.redhat.com/magazine/005mar05/features/kprobes/
-http://www-users.cs.umn.edu/~boutcher/kprobes/
-http://www.linuxsymposium.org/2006/linuxsymposium_procv2.pdf (pages 101-115)
 
+- http://www-106.ibm.com/developerworks/library/l-kprobes.html?ca=dgr-lnxw42Kprobe
+- http://www.redhat.com/magazine/005mar05/features/kprobes/
+- http://www-users.cs.umn.edu/~boutcher/kprobes/
+- http://www.linuxsymposium.org/2006/linuxsymposium_procv2.pdf (pages 101-115)
+
+
+The kprobes debugfs interface
+=============================
 
-Appendix A: The kprobes debugfs interface
 
 With recent kernels (> 2.6.20) the list of registered kprobes is visible
 under the /sys/kernel/debug/kprobes/ directory (assuming debugfs is mounted at /sys/kernel/debug).
 
-/sys/kernel/debug/kprobes/list: Lists all registered probes on the system
+/sys/kernel/debug/kprobes/list: Lists all registered probes on the system::
 
-c015d71a  k  vfs_read+0x0
-c011a316  j  do_fork+0x0
-c03dedc5  r  tcp_v4_rcv+0x0
+	c015d71a  k  vfs_read+0x0
+	c011a316  j  do_fork+0x0
+	c03dedc5  r  tcp_v4_rcv+0x0
 
 The first column provides the kernel address where the probe is inserted.
 The second column identifies the type of probe (k - kprobe, r - kretprobe
@@ -725,17 +808,18 @@ change each probe's disabling state. This means that disabled kprobes (marked
 [DISABLED]) will be not enabled if you turn ON all kprobes by this knob.
 
 
-Appendix B: The kprobes sysctl interface
+The kprobes sysctl interface
+============================
 
 /proc/sys/debug/kprobes-optimization: Turn kprobes optimization ON/OFF.
 
 When CONFIG_OPTPROBES=y, this sysctl interface appears and it provides
 a knob to globally and forcibly turn jump optimization (see section
-1.4) ON or OFF. By default, jump optimization is allowed (ON).
-If you echo "0" to this file or set "debug.kprobes_optimization" to
-0 via sysctl, all optimized probes will be unoptimized, and any new
-probes registered after that will not be optimized.  Note that this
-knob *changes* the optimized state. This means that optimized probes
-(marked [OPTIMIZED]) will be unoptimized ([OPTIMIZED] tag will be
+:ref:`kprobes_jump_optimization`) ON or OFF. By default, jump optimization
+is allowed (ON). If you echo "0" to this file or set
+"debug.kprobes_optimization" to 0 via sysctl, all optimized probes will be
+unoptimized, and any new probes registered after that will not be optimized.
+Note that this knob *changes* the optimized state. This means that optimized
+probes (marked [OPTIMIZED]) will be unoptimized ([OPTIMIZED] tag will be
 removed). If the knob is turned on, they will be optimized again.
 
-- 
2.9.4
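
[Editor's aside: as a rough illustration of the register_kprobe() API documented
in the patch above, a minimal module might look like the following. This is a
sketch in the spirit of samples/kprobes/kprobe_example.c, not a tested module;
the probed symbol (do_fork) and the message strings are arbitrary choices.]

```c
/* Minimal kprobe module sketch (illustrative only; see
 * samples/kprobes/kprobe_example.c for the maintained version).
 * do_fork is just an example symbol to probe. */
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/kprobes.h>

static struct kprobe kp = {
	.symbol_name = "do_fork",	/* address resolved by the kernel */
};

static int handler_pre(struct kprobe *p, struct pt_regs *regs)
{
	pr_info("pre_handler: probe at %p hit\n", p->addr);
	return 0;	/* 0: continue with normal single-stepping */
}

static void handler_post(struct kprobe *p, struct pt_regs *regs,
			 unsigned long flags)
{
	pr_info("post_handler: probe at %p done\n", p->addr);
}

static int __init kprobe_init(void)
{
	int ret;

	kp.pre_handler = handler_pre;
	kp.post_handler = handler_post;

	ret = register_kprobe(&kp);	/* 0 on success, -errno otherwise */
	if (ret < 0)
		pr_err("register_kprobe failed: %d\n", ret);
	return ret;
}

static void __exit kprobe_exit(void)
{
	unregister_kprobe(&kp);
}

module_init(kprobe_init);
module_exit(kprobe_exit);
MODULE_LICENSE("GPL");
```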


* [PATCH v2 11/29] kref.txt: standardize document format
  2017-06-17 15:26 [PATCH v2 01/29] IPMI.txt: standardize document format Mauro Carvalho Chehab
                   ` (8 preceding siblings ...)
  2017-06-17 15:26 ` [PATCH v2 10/29] kprobes.txt: " Mauro Carvalho Chehab
@ 2017-06-17 15:26 ` Mauro Carvalho Chehab
  2017-06-17 15:26 ` [PATCH v2 12/29] kselftest.rst: do some adjustments after ReST conversion Mauro Carvalho Chehab
                   ` (18 subsequent siblings)
  28 siblings, 0 replies; 35+ messages in thread
From: Mauro Carvalho Chehab @ 2017-06-17 15:26 UTC (permalink / raw)
  To: Linux Doc Mailing List
  Cc: Mauro Carvalho Chehab, Mauro Carvalho Chehab, linux-kernel,
	Jonathan Corbet

Each text file under Documentation follows a different
format. Some don't even have titles!

Change its representation to follow the adopted standard,
using ReST markups for it to be parseable by Sphinx:

- add a title for the document and section titles;
- move authorship information to the beginning and use
  :Author:
- mark literal blocks as such and indent them if needed.

Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
---
 Documentation/kref.txt | 285 ++++++++++++++++++++++++++-----------------------
 1 file changed, 150 insertions(+), 135 deletions(-)

diff --git a/Documentation/kref.txt b/Documentation/kref.txt
index d26a27ca964d..3af384156d7e 100644
--- a/Documentation/kref.txt
+++ b/Documentation/kref.txt
@@ -1,24 +1,42 @@
+===================================================
+Adding reference counters (krefs) to kernel objects
+===================================================
+
+:Author: Corey Minyard <minyard@acm.org>
+:Author: Thomas Hellstrom <thellstrom@vmware.com>
+
+A lot of this was lifted from Greg Kroah-Hartman's 2004 OLS paper and
+presentation on krefs, which can be found at:
+
+  - http://www.kroah.com/linux/talks/ols_2004_kref_paper/Reprint-Kroah-Hartman-OLS2004.pdf
+  - http://www.kroah.com/linux/talks/ols_2004_kref_talk/
+
+Introduction
+============
 
 krefs allow you to add reference counters to your objects.  If you
 have objects that are used in multiple places and passed around, and
 you don't have refcounts, your code is almost certainly broken.  If
 you want refcounts, krefs are the way to go.
 
-To use a kref, add one to your data structures like:
+To use a kref, add one to your data structures like::
 
-struct my_data
-{
+    struct my_data
+    {
 	.
 	.
 	struct kref refcount;
 	.
 	.
-};
+    };
 
 The kref can occur anywhere within the data structure.
 
+Initialization
+==============
+
 You must initialize the kref after you allocate it.  To do this, call
-kref_init as so:
+kref_init like so::
 
      struct my_data *data;
 
@@ -29,18 +47,25 @@ kref_init as so:
 
 This sets the refcount in the kref to 1.
 
+Kref rules
+==========
+
 Once you have an initialized kref, you must follow the following
 rules:
 
 1) If you make a non-temporary copy of a pointer, especially if
    it can be passed to another thread of execution, you must
-   increment the refcount with kref_get() before passing it off:
+   increment the refcount with kref_get() before passing it off::
+
        kref_get(&data->refcount);
+
    If you already have a valid pointer to a kref-ed structure (the
    refcount cannot go to zero) you may do this without a lock.
 
-2) When you are done with a pointer, you must call kref_put():
+2) When you are done with a pointer, you must call kref_put()::
+
        kref_put(&data->refcount, data_release);
+
    If this is the last reference to the pointer, the release
    routine will be called.  If the code never tries to get
    a valid pointer to a kref-ed structure without already
@@ -53,25 +78,25 @@ rules:
    structure must remain valid during the kref_get().
 
 For example, if you allocate some data and then pass it to another
-thread to process:
+thread to process::
 
-void data_release(struct kref *ref)
-{
+    void data_release(struct kref *ref)
+    {
 	struct my_data *data = container_of(ref, struct my_data, refcount);
 	kfree(data);
-}
+    }
 
-void more_data_handling(void *cb_data)
-{
+    void more_data_handling(void *cb_data)
+    {
 	struct my_data *data = cb_data;
 	.
 	. do stuff with data here
 	.
 	kref_put(&data->refcount, data_release);
-}
+    }
 
-int my_data_handler(void)
-{
+    int my_data_handler(void)
+    {
 	int rv = 0;
 	struct my_data *data;
 	struct task_struct *task;
@@ -91,10 +116,10 @@ int my_data_handler(void)
 	.
 	. do stuff with data here
 	.
- out:
+    out:
 	kref_put(&data->refcount, data_release);
 	return rv;
-}
+    }
 
 This way, it doesn't matter what order the two threads handle the
 data, the kref_put() handles knowing when the data is not referenced
@@ -104,7 +129,7 @@ put needs no lock because nothing tries to get the data without
 already holding a pointer.
 
 Note that the "before" in rule 1 is very important.  You should never
-do something like:
+do something like::
 
 	task = kthread_run(more_data_handling, data, "more_data_handling");
 	if (task == ERR_PTR(-ENOMEM)) {
@@ -124,14 +149,14 @@ bad style.  Don't do it.
 There are some situations where you can optimize the gets and puts.
 For instance, if you are done with an object and enqueuing it for
 something else or passing it off to something else, there is no reason
-to do a get then a put:
+to do a get then a put::
 
 	/* Silly extra get and put */
 	kref_get(&obj->ref);
 	enqueue(obj);
 	kref_put(&obj->ref, obj_cleanup);
 
-Just do the enqueue.  A comment about this is always welcome:
+Just do the enqueue.  A comment about this is always welcome::
 
 	enqueue(obj);
 	/* We are done with obj, so we pass our refcount off
@@ -142,109 +167,99 @@ instance, you have a list of items that are each kref-ed, and you wish
 to get the first one.  You can't just pull the first item off the list
 and kref_get() it.  That violates rule 3 because you are not already
 holding a valid pointer.  You must add a mutex (or some other lock).
-For instance:
+For instance::
 
-static DEFINE_MUTEX(mutex);
-static LIST_HEAD(q);
-struct my_data
-{
-	struct kref      refcount;
-	struct list_head link;
-};
+	static DEFINE_MUTEX(mutex);
+	static LIST_HEAD(q);
+	struct my_data
+	{
+		struct kref      refcount;
+		struct list_head link;
+	};
 
-static struct my_data *get_entry()
-{
-	struct my_data *entry = NULL;
-	mutex_lock(&mutex);
-	if (!list_empty(&q)) {
-		entry = container_of(q.next, struct my_data, link);
-		kref_get(&entry->refcount);
+	static struct my_data *get_entry()
+	{
+		struct my_data *entry = NULL;
+		mutex_lock(&mutex);
+		if (!list_empty(&q)) {
+			entry = container_of(q.next, struct my_data, link);
+			kref_get(&entry->refcount);
+		}
+		mutex_unlock(&mutex);
+		return entry;
 	}
-	mutex_unlock(&mutex);
-	return entry;
-}
 
-static void release_entry(struct kref *ref)
-{
-	struct my_data *entry = container_of(ref, struct my_data, refcount);
+	static void release_entry(struct kref *ref)
+	{
+		struct my_data *entry = container_of(ref, struct my_data, refcount);
 
-	list_del(&entry->link);
-	kfree(entry);
-}
+		list_del(&entry->link);
+		kfree(entry);
+	}
 
-static void put_entry(struct my_data *entry)
-{
-	mutex_lock(&mutex);
-	kref_put(&entry->refcount, release_entry);
-	mutex_unlock(&mutex);
-}
+	static void put_entry(struct my_data *entry)
+	{
+		mutex_lock(&mutex);
+		kref_put(&entry->refcount, release_entry);
+		mutex_unlock(&mutex);
+	}
 
 The kref_put() return value is useful if you do not want to hold the
 lock during the whole release operation.  Say you didn't want to call
 kfree() with the lock held in the example above (since it is kind of
-pointless to do so).  You could use kref_put() as follows:
+pointless to do so).  You could use kref_put() as follows::
 
-static void release_entry(struct kref *ref)
-{
-	/* All work is done after the return from kref_put(). */
-}
+	static void release_entry(struct kref *ref)
+	{
+		/* All work is done after the return from kref_put(). */
+	}
 
-static void put_entry(struct my_data *entry)
-{
-	mutex_lock(&mutex);
-	if (kref_put(&entry->refcount, release_entry)) {
-		list_del(&entry->link);
-		mutex_unlock(&mutex);
-		kfree(entry);
-	} else
-		mutex_unlock(&mutex);
-}
+	static void put_entry(struct my_data *entry)
+	{
+		mutex_lock(&mutex);
+		if (kref_put(&entry->refcount, release_entry)) {
+			list_del(&entry->link);
+			mutex_unlock(&mutex);
+			kfree(entry);
+		} else
+			mutex_unlock(&mutex);
+	}
 
 This is really more useful if you have to call other routines as part
 of the free operations that could take a long time or might claim the
 same lock.  Note that doing everything in the release routine is still
 preferred as it is a little neater.
 
-
-Corey Minyard <minyard@acm.org>
-
-A lot of this was lifted from Greg Kroah-Hartman's 2004 OLS paper and
-presentation on krefs, which can be found at:
-  http://www.kroah.com/linux/talks/ols_2004_kref_paper/Reprint-Kroah-Hartman-OLS2004.pdf
-and:
-  http://www.kroah.com/linux/talks/ols_2004_kref_talk/
-
-
 The above example could also be optimized using kref_get_unless_zero() in
-the following way:
+the following way::
 
-static struct my_data *get_entry()
-{
-	struct my_data *entry = NULL;
-	mutex_lock(&mutex);
-	if (!list_empty(&q)) {
-		entry = container_of(q.next, struct my_data, link);
-		if (!kref_get_unless_zero(&entry->refcount))
-			entry = NULL;
+	static struct my_data *get_entry()
+	{
+		struct my_data *entry = NULL;
+		mutex_lock(&mutex);
+		if (!list_empty(&q)) {
+			entry = container_of(q.next, struct my_data, link);
+			if (!kref_get_unless_zero(&entry->refcount))
+				entry = NULL;
+		}
+		mutex_unlock(&mutex);
+		return entry;
 	}
-	mutex_unlock(&mutex);
-	return entry;
-}
 
-static void release_entry(struct kref *ref)
-{
-	struct my_data *entry = container_of(ref, struct my_data, refcount);
+	static void release_entry(struct kref *ref)
+	{
+		struct my_data *entry = container_of(ref, struct my_data, refcount);
 
-	mutex_lock(&mutex);
-	list_del(&entry->link);
-	mutex_unlock(&mutex);
-	kfree(entry);
-}
+		mutex_lock(&mutex);
+		list_del(&entry->link);
+		mutex_unlock(&mutex);
+		kfree(entry);
+	}
 
-static void put_entry(struct my_data *entry)
-{
-	kref_put(&entry->refcount, release_entry);
-}
+	static void put_entry(struct my_data *entry)
+	{
+		kref_put(&entry->refcount, release_entry);
+	}
 
 Which is useful to remove the mutex lock around kref_put() in put_entry(), but
 it's important that kref_get_unless_zero is enclosed in the same critical
@@ -254,51 +269,51 @@ Note that it is illegal to use kref_get_unless_zero without checking its
 return value. If you are sure (by already having a valid pointer) that
 kref_get_unless_zero() will return true, then use kref_get() instead.
 
+Krefs and RCU
+=============
+
 The function kref_get_unless_zero also makes it possible to use rcu
-locking for lookups in the above example:
+locking for lookups in the above example::
 
-struct my_data
-{
-	struct rcu_head rhead;
-	.
-	struct kref refcount;
-	.
-	.
-};
+	struct my_data
+	{
+		struct rcu_head rhead;
+		.
+		struct kref refcount;
+		.
+		.
+	};
 
-static struct my_data *get_entry_rcu()
-{
-	struct my_data *entry = NULL;
-	rcu_read_lock();
-	if (!list_empty(&q)) {
-		entry = container_of(q.next, struct my_data, link);
-		if (!kref_get_unless_zero(&entry->refcount))
-			entry = NULL;
+	static struct my_data *get_entry_rcu()
+	{
+		struct my_data *entry = NULL;
+		rcu_read_lock();
+		if (!list_empty(&q)) {
+			entry = container_of(q.next, struct my_data, link);
+			if (!kref_get_unless_zero(&entry->refcount))
+				entry = NULL;
+		}
+		rcu_read_unlock();
+		return entry;
 	}
-	rcu_read_unlock();
-	return entry;
-}
 
-static void release_entry_rcu(struct kref *ref)
-{
-	struct my_data *entry = container_of(ref, struct my_data, refcount);
+	static void release_entry_rcu(struct kref *ref)
+	{
+		struct my_data *entry = container_of(ref, struct my_data, refcount);
 
-	mutex_lock(&mutex);
-	list_del_rcu(&entry->link);
-	mutex_unlock(&mutex);
-	kfree_rcu(entry, rhead);
-}
+		mutex_lock(&mutex);
+		list_del_rcu(&entry->link);
+		mutex_unlock(&mutex);
+		kfree_rcu(entry, rhead);
+	}
 
-static void put_entry(struct my_data *entry)
-{
-	kref_put(&entry->refcount, release_entry_rcu);
-}
+	static void put_entry(struct my_data *entry)
+	{
+		kref_put(&entry->refcount, release_entry_rcu);
+	}
 
 But note that the struct kref member needs to remain in valid memory for a
 rcu grace period after release_entry_rcu was called. That can be accomplished
 by using kfree_rcu(entry, rhead) as done above, or by calling synchronize_rcu()
 before using kfree, but note that synchronize_rcu() may sleep for a
 substantial amount of time.
-
-
-Thomas Hellstrom <thellstrom@vmware.com>
-- 
2.9.4

^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [PATCH v2 12/29] kselftest.rst: do some adjustments after ReST conversion
  2017-06-17 15:26 [PATCH v2 01/29] IPMI.txt: standardize document format Mauro Carvalho Chehab
                   ` (9 preceding siblings ...)
  2017-06-17 15:26 ` [PATCH v2 11/29] kref.txt: " Mauro Carvalho Chehab
@ 2017-06-17 15:26 ` Mauro Carvalho Chehab
  2017-06-23 14:04   ` Shuah Khan
  2017-06-17 15:26 ` [PATCH v2 13/29] ldm.txt: standardize document format Mauro Carvalho Chehab
                   ` (17 subsequent siblings)
  28 siblings, 1 reply; 35+ messages in thread
From: Mauro Carvalho Chehab @ 2017-06-17 15:26 UTC (permalink / raw)
  To: Linux Doc Mailing List
  Cc: Mauro Carvalho Chehab, Mauro Carvalho Chehab, linux-kernel,
	Jonathan Corbet, Shuah Khan, linux-kselftest

Do some minor adjustments after ReST conversion:

- On most documents, we prepend a "$ " before
  command-line arguments;


- Prefer to use :: on the preceding line;

- Split a multi-paragraph description into separate paragraphs.

Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
---
 Documentation/dev-tools/kselftest.rst | 32 ++++++++++++++++----------------
 1 file changed, 16 insertions(+), 16 deletions(-)

diff --git a/Documentation/dev-tools/kselftest.rst b/Documentation/dev-tools/kselftest.rst
index b3861500c42d..ebd03d11d2c2 100644
--- a/Documentation/dev-tools/kselftest.rst
+++ b/Documentation/dev-tools/kselftest.rst
@@ -19,15 +19,15 @@ Running the selftests (hotplug tests are run in limited mode)
 
 To build the tests::
 
-    make -C tools/testing/selftests
+  $ make -C tools/testing/selftests
 
 To run the tests::
 
-    make -C tools/testing/selftests run_tests
+  $ make -C tools/testing/selftests run_tests
 
 To build and run the tests with a single command, use::
 
-    make kselftest
+  $ make kselftest
 
 Note that some tests will require root privileges.
 
@@ -40,11 +40,11 @@ single test to run, or a list of tests to run.
 
 To run only tests targeted for a single subsystem::
 
-    make -C tools/testing/selftests TARGETS=ptrace run_tests
+  $ make -C tools/testing/selftests TARGETS=ptrace run_tests
 
 You can specify multiple tests to build and run::
 
-    make TARGETS="size timers" kselftest
+  $  make TARGETS="size timers" kselftest
 
 See the top-level tools/testing/selftests/Makefile for the list of all
 possible targets.
@@ -55,11 +55,11 @@ Running the full range hotplug selftests
 
 To build the hotplug tests::
 
-    make -C tools/testing/selftests hotplug
+  $ make -C tools/testing/selftests hotplug
 
 To run the hotplug tests::
 
-    make -C tools/testing/selftests run_hotplug
+  $ make -C tools/testing/selftests run_hotplug
 
 Note that some tests will require root privileges.
 
@@ -73,13 +73,13 @@ location.
 
 To install selftests in default location::
 
-    cd tools/testing/selftests
-    ./kselftest_install.sh
+   $ cd tools/testing/selftests
+   $ ./kselftest_install.sh
 
 To install selftests in a user specified location::
 
-    cd tools/testing/selftests
-    ./kselftest_install.sh install_dir
+   $ cd tools/testing/selftests
+   $ ./kselftest_install.sh install_dir
 
 Running installed selftests
 ===========================
@@ -88,12 +88,10 @@ Kselftest install as well as the Kselftest tarball provide a script
 named "run_kselftest.sh" to run the tests.
 
 You can simply do the following to run the installed Kselftests. Please
-note some tests will require root privileges.
+note some tests will require root privileges::
 
-::
-
-    cd kselftest
-    ./run_kselftest.sh
+   $ cd kselftest
+   $ ./run_kselftest.sh
 
 Contributing new tests
 ======================
@@ -114,8 +112,10 @@ Contributing new tests (details)
 
  * Use TEST_GEN_XXX if such binaries or files are generated during
    compiling.
+
    TEST_PROGS, TEST_GEN_PROGS mean it is the excutable tested by
    default.
+
    TEST_PROGS_EXTENDED, TEST_GEN_PROGS_EXTENDED mean it is the
    executable which is not tested by default.
    TEST_FILES, TEST_GEN_FILES mean it is the file which is used by
-- 
2.9.4

^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [PATCH v2 13/29] ldm.txt: standardize document format
  2017-06-17 15:26 [PATCH v2 01/29] IPMI.txt: standardize document format Mauro Carvalho Chehab
                   ` (10 preceding siblings ...)
  2017-06-17 15:26 ` [PATCH v2 12/29] kselftest.rst: do some adjustments after ReST conversion Mauro Carvalho Chehab
@ 2017-06-17 15:26 ` Mauro Carvalho Chehab
  2017-06-17 15:26 ` [PATCH v2 14/29] lockup-watchdogs.txt: " Mauro Carvalho Chehab
                   ` (16 subsequent siblings)
  28 siblings, 0 replies; 35+ messages in thread
From: Mauro Carvalho Chehab @ 2017-06-17 15:26 UTC (permalink / raw)
  To: Linux Doc Mailing List
  Cc: Mauro Carvalho Chehab, Mauro Carvalho Chehab, linux-kernel,
	Jonathan Corbet, Richard Russon (FlatCap),
	linux-ntfs-dev

Each text file under Documentation follows a different
format. Some don't even have titles!

Change its representation to follow the adopted standard,
using ReST markups for it to be parseable by Sphinx:

- Reformat its title;
- Use :Author: and :Last Updated: for authorship;
- Use note markup;
- Reformat table to match ReST standard;
- Use bulleted lists where needed.

Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
---
 Documentation/ldm.txt | 54 +++++++++++++++++++++++++++++++--------------------
 1 file changed, 33 insertions(+), 21 deletions(-)

diff --git a/Documentation/ldm.txt b/Documentation/ldm.txt
index 4f80edd14d0a..12c571368e73 100644
--- a/Documentation/ldm.txt
+++ b/Documentation/ldm.txt
@@ -1,9 +1,9 @@
+==========================================
+LDM - Logical Disk Manager (Dynamic Disks)
+==========================================
 
-            LDM - Logical Disk Manager (Dynamic Disks)
-            ------------------------------------------
-
-Originally Written by FlatCap - Richard Russon <ldm@flatcap.org>.
-Last Updated by Anton Altaparmakov on 30 March 2007 for Windows Vista.
+:Author: Originally Written by FlatCap - Richard Russon <ldm@flatcap.org>.
+:Last Updated: Anton Altaparmakov on 30 March 2007 for Windows Vista.
 
 Overview
 --------
@@ -37,24 +37,36 @@ Example
 -------
 
 Below we have a 50MiB disk, divided into seven partitions.
-N.B.  The missing 1MiB at the end of the disk is where the LDM database is
-      stored.
 
-  Device | Offset Bytes  Sectors  MiB | Size   Bytes  Sectors  MiB
-  -------+----------------------------+---------------------------
-  hda    |            0        0    0 |     52428800   102400   50
-  hda1   |     51380224   100352   49 |      1048576     2048    1
-  hda2   |        16384       32    0 |      6979584    13632    6
-  hda3   |      6995968    13664    6 |     10485760    20480   10
-  hda4   |     17481728    34144   16 |      4194304     8192    4
-  hda5   |     21676032    42336   20 |      5242880    10240    5
-  hda6   |     26918912    52576   25 |     10485760    20480   10
-  hda7   |     37404672    73056   35 |     13959168    27264   13
+.. note::
+
+   The missing 1MiB at the end of the disk is where the LDM database is
+   stored.
+
++-------++--------------+---------+-----++--------------+---------+----+
+|Device || Offset Bytes | Sectors | MiB || Size   Bytes | Sectors | MiB|
++=======++==============+=========+=====++==============+=========+====+
+|hda    ||            0 |       0 |   0 ||     52428800 |  102400 |  50|
++-------++--------------+---------+-----++--------------+---------+----+
+|hda1   ||     51380224 |  100352 |  49 ||      1048576 |    2048 |   1|
++-------++--------------+---------+-----++--------------+---------+----+
+|hda2   ||        16384 |      32 |   0 ||      6979584 |   13632 |   6|
++-------++--------------+---------+-----++--------------+---------+----+
+|hda3   ||      6995968 |   13664 |   6 ||     10485760 |   20480 |  10|
++-------++--------------+---------+-----++--------------+---------+----+
+|hda4   ||     17481728 |   34144 |  16 ||      4194304 |    8192 |   4|
++-------++--------------+---------+-----++--------------+---------+----+
+|hda5   ||     21676032 |   42336 |  20 ||      5242880 |   10240 |   5|
++-------++--------------+---------+-----++--------------+---------+----+
+|hda6   ||     26918912 |   52576 |  25 ||     10485760 |   20480 |  10|
++-------++--------------+---------+-----++--------------+---------+----+
+|hda7   ||     37404672 |   73056 |  35 ||     13959168 |   27264 |  13|
++-------++--------------+---------+-----++--------------+---------+----+
 
 The LDM Database may not store the partitions in the order that they appear on
 disk, but the driver will sort them.
 
-When Linux boots, you will see something like:
+When Linux boots, you will see something like::
 
   hda: 102400 sectors w/32KiB Cache, CHS=50/64/32
   hda: [LDM] hda1 hda2 hda3 hda4 hda5 hda6 hda7
@@ -65,13 +77,13 @@ Compiling LDM Support
 
 To enable LDM, choose the following two options: 
 
-  "Advanced partition selection" CONFIG_PARTITION_ADVANCED
-  "Windows Logical Disk Manager (Dynamic Disk) support" CONFIG_LDM_PARTITION
+  - "Advanced partition selection" CONFIG_PARTITION_ADVANCED
+  - "Windows Logical Disk Manager (Dynamic Disk) support" CONFIG_LDM_PARTITION
 
 If you believe the driver isn't working as it should, you can enable the extra
 debugging code.  This will produce a LOT of output.  The option is:
 
-  "Windows LDM extra logging" CONFIG_LDM_DEBUG
+  - "Windows LDM extra logging" CONFIG_LDM_DEBUG
 
 N.B. The partition code cannot be compiled as a module.
 
-- 
2.9.4

^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [PATCH v2 14/29] lockup-watchdogs.txt: standardize document format
  2017-06-17 15:26 [PATCH v2 01/29] IPMI.txt: standardize document format Mauro Carvalho Chehab
                   ` (11 preceding siblings ...)
  2017-06-17 15:26 ` [PATCH v2 13/29] ldm.txt: standardize document format Mauro Carvalho Chehab
@ 2017-06-17 15:26 ` Mauro Carvalho Chehab
  2017-06-17 15:26 ` [PATCH v2 15/29] lzo.txt: " Mauro Carvalho Chehab
                   ` (15 subsequent siblings)
  28 siblings, 0 replies; 35+ messages in thread
From: Mauro Carvalho Chehab @ 2017-06-17 15:26 UTC (permalink / raw)
  To: Linux Doc Mailing List
  Cc: Mauro Carvalho Chehab, Mauro Carvalho Chehab, linux-kernel,
	Jonathan Corbet

Each text file under Documentation follows a different
format. Some don't even have titles!

Change its representation to follow the adopted standard,
using ReST markups for it to be parseable by Sphinx.

This file is almost at ReST format. Just one title needs
to be adjusted, in order to follow the standard.

Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
---
 Documentation/lockup-watchdogs.txt | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/Documentation/lockup-watchdogs.txt b/Documentation/lockup-watchdogs.txt
index c8b8378513d6..290840c160af 100644
--- a/Documentation/lockup-watchdogs.txt
+++ b/Documentation/lockup-watchdogs.txt
@@ -30,7 +30,8 @@ timeout is set through the confusingly named "kernel.panic" sysctl),
 to cause the system to reboot automatically after a specified amount
 of time.
 
-=== Implementation ===
+Implementation
+==============
 
 The soft and hard lockup detectors are built on top of the hrtimer and
 perf subsystems, respectively. A direct consequence of this is that,
-- 
2.9.4

^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [PATCH v2 15/29] lzo.txt: standardize document format
  2017-06-17 15:26 [PATCH v2 01/29] IPMI.txt: standardize document format Mauro Carvalho Chehab
                   ` (12 preceding siblings ...)
  2017-06-17 15:26 ` [PATCH v2 14/29] lockup-watchdogs.txt: " Mauro Carvalho Chehab
@ 2017-06-17 15:26 ` Mauro Carvalho Chehab
  2017-06-17 15:26 ` [PATCH v2 16/29] mailbox.txt: " Mauro Carvalho Chehab
                   ` (14 subsequent siblings)
  28 siblings, 0 replies; 35+ messages in thread
From: Mauro Carvalho Chehab @ 2017-06-17 15:26 UTC (permalink / raw)
  To: Linux Doc Mailing List
  Cc: Mauro Carvalho Chehab, Mauro Carvalho Chehab, linux-kernel,
	Jonathan Corbet

Each text file under Documentation follows a different
format. Some don't even have titles!

Change its representation to follow the adopted standard,
using ReST markups for it to be parseable by Sphinx:

- Add markups for section titles;
- mark literal blocks;
- use ".. important::" for an important note.

Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
---
 Documentation/lzo.txt | 27 +++++++++++++++++----------
 1 file changed, 17 insertions(+), 10 deletions(-)

diff --git a/Documentation/lzo.txt b/Documentation/lzo.txt
index 285c54f66779..6fa6a93d0949 100644
--- a/Documentation/lzo.txt
+++ b/Documentation/lzo.txt
@@ -1,8 +1,9 @@
-
+===========================================================
 LZO stream format as understood by Linux's LZO decompressor
 ===========================================================
 
 Introduction
+============
 
   This is not a specification. No specification seems to be publicly available
   for the LZO stream format. This document describes what input format the LZO
@@ -14,12 +15,13 @@ Introduction
   for future bug reports.
 
 Description
+===========
 
   The stream is composed of a series of instructions, operands, and data. The
   instructions consist in a few bits representing an opcode, and bits forming
   the operands for the instruction, whose size and position depend on the
   opcode and on the number of literals copied by previous instruction. The
-  operands are used to indicate :
+  operands are used to indicate:
 
     - a distance when copying data from the dictionary (past output buffer)
     - a length (number of bytes to copy from dictionary)
@@ -38,7 +40,7 @@ Description
   of bits in the operand. If the number of bits isn't enough to represent the
   length, up to 255 may be added in increments by consuming more bytes with a
   rate of at most 255 per extra byte (thus the compression ratio cannot exceed
-  around 255:1). The variable length encoding using #bits is always the same :
+  around 255:1). The variable length encoding using #bits is always the same::
 
        length = byte & ((1 << #bits) - 1)
        if (!length) {
@@ -67,15 +69,19 @@ Description
   instruction may encode this distance (0001HLLL), it takes one LE16 operand
   for the distance, thus requiring 3 bytes.
 
-  IMPORTANT NOTE : in the code some length checks are missing because certain
-  instructions are called under the assumption that a certain number of bytes
-  follow because it has already been guaranteed before parsing the instructions.
-  They just have to "refill" this credit if they consume extra bytes. This is
-  an implementation design choice independent on the algorithm or encoding.
+  .. important::
+
+     In the code some length checks are missing because certain instructions
+     are called under the assumption that a certain number of bytes follow
+     because it has already been guaranteed before parsing the instructions.
+     They just have to "refill" this credit if they consume extra bytes. This
+     is an implementation design choice independent on the algorithm or
+     encoding.
 
 Byte sequences
+==============
 
-  First byte encoding :
+  First byte encoding::
 
       0..17   : follow regular instruction encoding, see below. It is worth
                 noting that codes 16 and 17 will represent a block copy from
@@ -91,7 +97,7 @@ Byte sequences
                 state = 4 [ don't copy extra literals ]
                 skip byte
 
-  Instruction encoding :
+  Instruction encoding::
 
       0 0 0 0 X X X X  (0..15)
         Depends on the number of literals copied by the last instruction.
@@ -156,6 +162,7 @@ Byte sequences
            distance = (H << 3) + D + 1
 
 Authors
+=======
 
   This document was written by Willy Tarreau <w@1wt.eu> on 2014/07/19 during an
   analysis of the decompression code available in Linux 3.16-rc5. The code is
-- 
2.9.4

^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [PATCH v2 16/29] mailbox.txt: standardize document format
  2017-06-17 15:26 [PATCH v2 01/29] IPMI.txt: standardize document format Mauro Carvalho Chehab
                   ` (13 preceding siblings ...)
  2017-06-17 15:26 ` [PATCH v2 15/29] lzo.txt: " Mauro Carvalho Chehab
@ 2017-06-17 15:26 ` Mauro Carvalho Chehab
  2017-06-17 15:26 ` [PATCH v2 17/29] memory-barriers.txt: " Mauro Carvalho Chehab
                   ` (13 subsequent siblings)
  28 siblings, 0 replies; 35+ messages in thread
From: Mauro Carvalho Chehab @ 2017-06-17 15:26 UTC (permalink / raw)
  To: Linux Doc Mailing List
  Cc: Mauro Carvalho Chehab, Mauro Carvalho Chehab, linux-kernel,
	Jonathan Corbet

Each text file under Documentation follows a different
format. Some don't even have titles!

Change its representation to follow the adopted standard,
using ReST markups for it to be parseable by Sphinx:

- Add markups for section titles;
- Use :Author: for authorship;
- Mark literal block as such and indent it.

Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
---
 Documentation/mailbox.txt | 181 ++++++++++++++++++++++++----------------------
 1 file changed, 95 insertions(+), 86 deletions(-)

diff --git a/Documentation/mailbox.txt b/Documentation/mailbox.txt
index 7ed371c85204..0ed95009cc30 100644
--- a/Documentation/mailbox.txt
+++ b/Documentation/mailbox.txt
@@ -1,7 +1,10 @@
-		The Common Mailbox Framework
-		Jassi Brar <jaswinder.singh@linaro.org>
+============================
+The Common Mailbox Framework
+============================
 
- This document aims to help developers write client and controller
+:Author: Jassi Brar <jaswinder.singh@linaro.org>
+
+This document aims to help developers write client and controller
 drivers for the API. But before we start, let us note that the
 client (especially) and controller drivers are likely going to be
 very platform specific because the remote firmware is likely to be
@@ -13,14 +16,17 @@ similar copies of code written for each platform. Having said that,
 nothing prevents the remote f/w to also be Linux based and use the
 same api there. However none of that helps us locally because we only
 ever deal at client's protocol level.
- Some of the choices made during implementation are the result of this
+
+Some of the choices made during implementation are the result of this
 peculiarity of this "common" framework.
 
 
 
-	Part 1 - Controller Driver (See include/linux/mailbox_controller.h)
+Controller Driver (See include/linux/mailbox_controller.h)
+==========================================================
 
- Allocate mbox_controller and the array of mbox_chan.
+
+Allocate mbox_controller and the array of mbox_chan.
 Populate mbox_chan_ops, except peek_data() all are mandatory.
 The controller driver might know a message has been consumed
 by the remote by getting an IRQ or polling some hardware flag
@@ -30,91 +36,94 @@ the controller driver should set via 'txdone_irq' or 'txdone_poll'
 or neither.
 
 
-	Part 2 - Client Driver (See include/linux/mailbox_client.h)
+Client Driver (See include/linux/mailbox_client.h)
+==================================================
 
- The client might want to operate in blocking mode (synchronously
+
+The client might want to operate in blocking mode (synchronously
 send a message through before returning) or non-blocking/async mode (submit
 a message and a callback function to the API and return immediately).
 
+::
 
-struct demo_client {
-	struct mbox_client cl;
-	struct mbox_chan *mbox;
-	struct completion c;
-	bool async;
-	/* ... */
-};
+	struct demo_client {
+		struct mbox_client cl;
+		struct mbox_chan *mbox;
+		struct completion c;
+		bool async;
+		/* ... */
+	};
 
-/*
- * This is the handler for data received from remote. The behaviour is purely
- * dependent upon the protocol. This is just an example.
- */
-static void message_from_remote(struct mbox_client *cl, void *mssg)
-{
-	struct demo_client *dc = container_of(cl, struct demo_client, cl);
-	if (dc->async) {
-		if (is_an_ack(mssg)) {
-			/* An ACK to our last sample sent */
-			return; /* Or do something else here */
-		} else { /* A new message from remote */
-			queue_req(mssg);
+	/*
+	* This is the handler for data received from remote. The behaviour is purely
+	* dependent upon the protocol. This is just an example.
+	*/
+	static void message_from_remote(struct mbox_client *cl, void *mssg)
+	{
+		struct demo_client *dc = container_of(cl, struct demo_client, cl);
+		if (dc->async) {
+			if (is_an_ack(mssg)) {
+				/* An ACK to our last sample sent */
+				return; /* Or do something else here */
+			} else { /* A new message from remote */
+				queue_req(mssg);
+			}
+		} else {
+			/* Remote f/w sends only ACK packets on this channel */
+			return;
 		}
-	} else {
-		/* Remote f/w sends only ACK packets on this channel */
-		return;
 	}
-}
-
-static void sample_sent(struct mbox_client *cl, void *mssg, int r)
-{
-	struct demo_client *dc = container_of(cl, struct demo_client, cl);
-	complete(&dc->c);
-}
-
-static void client_demo(struct platform_device *pdev)
-{
-	struct demo_client *dc_sync, *dc_async;
-	/* The controller already knows async_pkt and sync_pkt */
-	struct async_pkt ap;
-	struct sync_pkt sp;
-
-	dc_sync = kzalloc(sizeof(*dc_sync), GFP_KERNEL);
-	dc_async = kzalloc(sizeof(*dc_async), GFP_KERNEL);
-
-	/* Populate non-blocking mode client */
-	dc_async->cl.dev = &pdev->dev;
-	dc_async->cl.rx_callback = message_from_remote;
-	dc_async->cl.tx_done = sample_sent;
-	dc_async->cl.tx_block = false;
-	dc_async->cl.tx_tout = 0; /* doesn't matter here */
-	dc_async->cl.knows_txdone = false; /* depending upon protocol */
-	dc_async->async = true;
-	init_completion(&dc_async->c);
-
-	/* Populate blocking mode client */
-	dc_sync->cl.dev = &pdev->dev;
-	dc_sync->cl.rx_callback = message_from_remote;
-	dc_sync->cl.tx_done = NULL; /* operate in blocking mode */
-	dc_sync->cl.tx_block = true;
-	dc_sync->cl.tx_tout = 500; /* by half a second */
-	dc_sync->cl.knows_txdone = false; /* depending upon protocol */
-	dc_sync->async = false;
-
-	/* ASync mailbox is listed second in 'mboxes' property */
-	dc_async->mbox = mbox_request_channel(&dc_async->cl, 1);
-	/* Populate data packet */
-	/* ap.xxx = 123; etc */
-	/* Send async message to remote */
-	mbox_send_message(dc_async->mbox, &ap);
-
-	/* Sync mailbox is listed first in 'mboxes' property */
-	dc_sync->mbox = mbox_request_channel(&dc_sync->cl, 0);
-	/* Populate data packet */
-	/* sp.abc = 123; etc */
-	/* Send message to remote in blocking mode */
-	mbox_send_message(dc_sync->mbox, &sp);
-	/* At this point 'sp' has been sent */
-
-	/* Now wait for async chan to be done */
-	wait_for_completion(&dc_async->c);
-}
+
+	static void sample_sent(struct mbox_client *cl, void *mssg, int r)
+	{
+		struct demo_client *dc = container_of(cl, struct demo_client, cl);
+		complete(&dc->c);
+	}
+
+	static void client_demo(struct platform_device *pdev)
+	{
+		struct demo_client *dc_sync, *dc_async;
+		/* The controller already knows async_pkt and sync_pkt */
+		struct async_pkt ap;
+		struct sync_pkt sp;
+
+		dc_sync = kzalloc(sizeof(*dc_sync), GFP_KERNEL);
+		dc_async = kzalloc(sizeof(*dc_async), GFP_KERNEL);
+
+		/* Populate non-blocking mode client */
+		dc_async->cl.dev = &pdev->dev;
+		dc_async->cl.rx_callback = message_from_remote;
+		dc_async->cl.tx_done = sample_sent;
+		dc_async->cl.tx_block = false;
+		dc_async->cl.tx_tout = 0; /* doesn't matter here */
+		dc_async->cl.knows_txdone = false; /* depending upon protocol */
+		dc_async->async = true;
+		init_completion(&dc_async->c);
+
+		/* Populate blocking mode client */
+		dc_sync->cl.dev = &pdev->dev;
+		dc_sync->cl.rx_callback = message_from_remote;
+		dc_sync->cl.tx_done = NULL; /* operate in blocking mode */
+		dc_sync->cl.tx_block = true;
+		dc_sync->cl.tx_tout = 500; /* by half a second */
+		dc_sync->cl.knows_txdone = false; /* depending upon protocol */
+		dc_sync->async = false;
+
+		/* ASync mailbox is listed second in 'mboxes' property */
+		dc_async->mbox = mbox_request_channel(&dc_async->cl, 1);
+		/* Populate data packet */
+		/* ap.xxx = 123; etc */
+		/* Send async message to remote */
+		mbox_send_message(dc_async->mbox, &ap);
+
+		/* Sync mailbox is listed first in 'mboxes' property */
+		dc_sync->mbox = mbox_request_channel(&dc_sync->cl, 0);
+		/* Populate data packet */
+		/* sp.abc = 123; etc */
+		/* Send message to remote in blocking mode */
+		mbox_send_message(dc_sync->mbox, &sp);
+		/* At this point 'sp' has been sent */
+
+		/* Now wait for async chan to be done */
+		wait_for_completion(&dc_async->c);
+	}
-- 
2.9.4

^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [PATCH v2 17/29] memory-barriers.txt: standardize document format
  2017-06-17 15:26 [PATCH v2 01/29] IPMI.txt: standardize document format Mauro Carvalho Chehab
                   ` (14 preceding siblings ...)
  2017-06-17 15:26 ` [PATCH v2 16/29] mailbox.txt: " Mauro Carvalho Chehab
@ 2017-06-17 15:26 ` Mauro Carvalho Chehab
  2017-06-17 15:26 ` [PATCH v2 18/29] memory-barriers.txt: use literals for variables Mauro Carvalho Chehab
                   ` (12 subsequent siblings)
  28 siblings, 0 replies; 35+ messages in thread
From: Mauro Carvalho Chehab @ 2017-06-17 15:26 UTC (permalink / raw)
  To: Linux Doc Mailing List
  Cc: Mauro Carvalho Chehab, Mauro Carvalho Chehab, linux-kernel,
	Jonathan Corbet

Each text file under Documentation follows a different
format. Some don't even have titles!

Change its representation to follow the adopted standard,
using ReST markups for it to be parseable by Sphinx:

- use (#) instead of (*) as the latter is not recognized.
  As a bonus, using (#) will auto-number paragraphs
  on PDF/LaTeX and HTML outputs;
- use the ReST markup for auto-numbered footnotes;
- comment the internal CONTENTS table;
- use "::" instead of ":" in order to mark literal blocks and
  ascii artwork;
- use the ReST markups for a table;
- use "-" for some bulleted lists that aren't marked;
- Use :Author: for authorship;
- Don't use capital letters on titles.

NOTE:

Trying to build this file with Sphinx will produce
some warnings:
	Documentation/memory-barriers.rst:192: WARNING: Inline emphasis start-string without end-string.

That's because, in several places, the file uses asterisks like "*Q"
to identify variables.

As asterisks are used for emphasis, Sphinx expects a closing
asterisk and puts everything between them in italics.

To avoid this, one of the notations below
is needed:
	- \*Q
	- "*Q"
	- ``*Q``

The first notation can be confusing for those reading the
file in its ASCII format, so I don't think it is a good
idea to use it.

The other two notations would be OK. ``*Q`` gives the
additional bonus of using a different font in the HTML/PDF
output.

For now, as we're just standardizing the document
notation, let's not touch it; we should revisit this
later, when moving this file into one of the kernel
books.
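
To illustrate (a sketch for this discussion, not part of the
patch itself), the difference between the problematic and the
safe spellings looks like this in the ReST source:

```rst
.. bare asterisk: triggers the "Inline emphasis start-string
   without end-string" warning when Sphinx parses the file
the CPU will load *Q before issuing the load of D

.. inline literal: no warning, and rendered in a monospace
   font in the HTML/PDF output
the CPU will load ``*Q`` before issuing the load of D
```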

Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
---
 Documentation/memory-barriers.txt | 660 +++++++++++++++++++-------------------
 1 file changed, 329 insertions(+), 331 deletions(-)

diff --git a/Documentation/memory-barriers.txt b/Documentation/memory-barriers.txt
index 9d5e0f853f08..69cc3e770e8d 100644
--- a/Documentation/memory-barriers.txt
+++ b/Documentation/memory-barriers.txt
@@ -1,14 +1,13 @@
-			 ============================
-			 LINUX KERNEL MEMORY BARRIERS
-			 ============================
+============================
+Linux kernel memory barriers
+============================
 
-By: David Howells <dhowells@redhat.com>
-    Paul E. McKenney <paulmck@linux.vnet.ibm.com>
-    Will Deacon <will.deacon@arm.com>
-    Peter Zijlstra <peterz@infradead.org>
+:Author: David Howells <dhowells@redhat.com>
+:Author: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
+:Author: Will Deacon <will.deacon@arm.com>
+:Author: Peter Zijlstra <peterz@infradead.org>
 
-==========
-DISCLAIMER
+Disclaimer
 ==========
 
 This document is not a specification; it is intentionally (for the sake of
@@ -35,16 +34,14 @@ architecture because the way that arch works renders an explicit barrier
 unnecessary in that case.
 
 
-========
-CONTENTS
-========
+.. CONTENTS
 
- (*) Abstract memory access model.
+ (#) Abstract memory access model.
 
      - Device operations.
      - Guarantees.
 
- (*) What are memory barriers?
+ (#) What are memory barriers?
 
      - Varieties of memory barrier.
      - What may not be assumed about memory barriers?
@@ -55,58 +52,57 @@ CONTENTS
      - Read memory barriers vs load speculation.
      - Transitivity
 
- (*) Explicit kernel barriers.
+ (#) Explicit kernel barriers.
 
      - Compiler barrier.
      - CPU memory barriers.
      - MMIO write barrier.
 
- (*) Implicit kernel memory barriers.
+ (#) Implicit kernel memory barriers.
 
      - Lock acquisition functions.
      - Interrupt disabling functions.
      - Sleep and wake-up functions.
      - Miscellaneous functions.
 
- (*) Inter-CPU acquiring barrier effects.
+ (#) Inter-CPU acquiring barrier effects.
 
      - Acquires vs memory accesses.
      - Acquires vs I/O accesses.
 
- (*) Where are memory barriers needed?
+ (#) Where are memory barriers needed?
 
      - Interprocessor interaction.
      - Atomic operations.
      - Accessing devices.
      - Interrupts.
 
- (*) Kernel I/O barrier effects.
+ (#) Kernel I/O barrier effects.
 
- (*) Assumed minimum execution ordering model.
+ (#) Assumed minimum execution ordering model.
 
- (*) The effects of the cpu cache.
+ (#) The effects of the cpu cache.
 
      - Cache coherency.
      - Cache coherency vs DMA.
      - Cache coherency vs MMIO.
 
- (*) The things CPUs get up to.
+ (#) The things CPUs get up to.
 
      - And then there's the Alpha.
      - Virtual Machine Guests.
 
- (*) Example uses.
+ (#) Example uses.
 
      - Circular buffers.
 
- (*) References.
+ (#) References.
 
 
-============================
-ABSTRACT MEMORY ACCESS MODEL
+Abstract memory access model
 ============================
 
-Consider the following abstract model of the system:
+Consider the following abstract model of the system::
 
 		            :                :
 		            :                :
@@ -143,7 +139,7 @@ CPU are perceived by the rest of the system as the operations cross the
 interface between the CPU and rest of the system (the dotted lines).
 
 
-For example, consider the following sequence of events:
+For example, consider the following sequence of events::
 
 	CPU 1		CPU 2
 	===============	===============
@@ -152,7 +148,7 @@ For example, consider the following sequence of events:
 	B = 4;		y = A;
 
 The set of accesses as seen by the memory system in the middle can be arranged
-in 24 different combinations:
+in 24 different combinations::
 
 	STORE A=3,	STORE B=4,	y=LOAD A->3,	x=LOAD B->4
 	STORE A=3,	STORE B=4,	x=LOAD B->4,	y=LOAD A->3
@@ -164,7 +160,7 @@ in 24 different combinations:
 	STORE B=4, ...
 	...
 
-and can thus result in four different combinations of values:
+and can thus result in four different combinations of values::
 
 	x == 2, y == 1
 	x == 2, y == 3
@@ -177,7 +173,7 @@ perceived by the loads made by another CPU in the same order as the stores were
 committed.
 
 
-As a further example, consider this sequence of events:
+As a further example, consider this sequence of events::
 
 	CPU 1		CPU 2
 	===============	===============
@@ -187,7 +183,7 @@ As a further example, consider this sequence of events:
 
 There is an obvious data dependency here, as the value loaded into D depends on
 the address retrieved from P by CPU 2.  At the end of the sequence, any of the
-following results are possible:
+following results are possible::
 
 	(Q == &A) and (D == 1)
 	(Q == &B) and (D == 2)
@@ -197,7 +193,7 @@ Note that CPU 2 will never try and load C into D because the CPU will load P
 into Q before issuing the load of *Q.
 
 
-DEVICE OPERATIONS
+Device operations
 -----------------
 
 Some devices present their control interfaces as collections of memory
@@ -205,12 +201,12 @@ locations, but the order in which the control registers are accessed is very
 important.  For instance, imagine an ethernet card with a set of internal
 registers that are accessed through an address port register (A) and a data
 port register (D).  To read internal register 5, the following code might then
-be used:
+be used::
 
 	*A = 5;
 	x = *D;
 
-but this might show up as either of the following two sequences:
+but this might show up as either of the following two sequences::
 
 	STORE *A = 5, x = LOAD *D
 	x = LOAD *D, STORE *A = 5
@@ -219,17 +215,17 @@ the second of which will almost certainly result in a malfunction, since it set
 the address _after_ attempting to read the register.
 
 
-GUARANTEES
+Guarantees
 ----------
 
 There are some minimal guarantees that may be expected of a CPU:
 
- (*) On any given CPU, dependent memory accesses will be issued in order, with
-     respect to itself.  This means that for:
+ (#) On any given CPU, dependent memory accesses will be issued in order, with
+     respect to itself.  This means that for::
 
 	Q = READ_ONCE(P); smp_read_barrier_depends(); D = READ_ONCE(*Q);
 
-     the CPU will issue the following memory operations:
+     the CPU will issue the following memory operations::
 
 	Q = LOAD P, D = LOAD *Q
 
@@ -239,20 +235,20 @@ There are some minimal guarantees that may be expected of a CPU:
      should normally use something like rcu_dereference() instead of
      open-coding smp_read_barrier_depends().
 
- (*) Overlapping loads and stores within a particular CPU will appear to be
-     ordered within that CPU.  This means that for:
+ (#) Overlapping loads and stores within a particular CPU will appear to be
+     ordered within that CPU.  This means that for::
 
 	a = READ_ONCE(*X); WRITE_ONCE(*X, b);
 
-     the CPU will only issue the following sequence of memory operations:
+     the CPU will only issue the following sequence of memory operations::
 
 	a = LOAD *X, STORE *X = b
 
-     And for:
+     And for::
 
 	WRITE_ONCE(*X, c); d = READ_ONCE(*X);
 
-     the CPU will only issue:
+     the CPU will only issue::
 
 	STORE *X = c, d = LOAD *X
 
@@ -261,18 +257,18 @@ There are some minimal guarantees that may be expected of a CPU:
 
 And there are a number of things that _must_ or _must_not_ be assumed:
 
- (*) It _must_not_ be assumed that the compiler will do what you want
+ (#) It _must_not_ be assumed that the compiler will do what you want
      with memory references that are not protected by READ_ONCE() and
      WRITE_ONCE().  Without them, the compiler is within its rights to
      do all sorts of "creative" transformations, which are covered in
      the COMPILER BARRIER section.
 
- (*) It _must_not_ be assumed that independent loads and stores will be issued
-     in the order given.  This means that for:
+ (#) It _must_not_ be assumed that independent loads and stores will be issued
+     in the order given.  This means that for::
 
 	X = *A; Y = *B; *D = Z;
 
-     we may get any of the following sequences:
+     we may get any of the following sequences::
 
 	X = LOAD *A,  Y = LOAD *B,  STORE *D = Z
 	X = LOAD *A,  STORE *D = Z, Y = LOAD *B
@@ -281,22 +277,22 @@ And there are a number of things that _must_ or _must_not_ be assumed:
 	STORE *D = Z, X = LOAD *A,  Y = LOAD *B
 	STORE *D = Z, Y = LOAD *B,  X = LOAD *A
 
- (*) It _must_ be assumed that overlapping memory accesses may be merged or
-     discarded.  This means that for:
+ (#) It _must_ be assumed that overlapping memory accesses may be merged or
+     discarded.  This means that for::
 
 	X = *A; Y = *(A + 4);
 
-     we may get any one of the following sequences:
+     we may get any one of the following sequences::
 
 	X = LOAD *A; Y = LOAD *(A + 4);
 	Y = LOAD *(A + 4); X = LOAD *A;
 	{X, Y} = LOAD {*A, *(A + 4) };
 
-     And for:
+     And for::
 
 	*A = X; *(A + 4) = Y;
 
-     we may get any of:
+     we may get any of::
 
 	STORE *A = X; STORE *(A + 4) = Y;
 	STORE *(A + 4) = Y; STORE *A = X;
@@ -304,18 +300,18 @@ And there are a number of things that _must_ or _must_not_ be assumed:
 
 And there are anti-guarantees:
 
- (*) These guarantees do not apply to bitfields, because compilers often
+ (#) These guarantees do not apply to bitfields, because compilers often
      generate code to modify these using non-atomic read-modify-write
      sequences.  Do not attempt to use bitfields to synchronize parallel
      algorithms.
 
- (*) Even in cases where bitfields are protected by locks, all fields
+ (#) Even in cases where bitfields are protected by locks, all fields
      in a given bitfield must be protected by one lock.  If two fields
      in a given bitfield are protected by different locks, the compiler's
      non-atomic read-modify-write sequences can cause an update to one
      field to corrupt the value of an adjacent field.
 
- (*) These guarantees apply only to properly aligned and sized scalar
+ (#) These guarantees apply only to properly aligned and sized scalar
      variables.  "Properly sized" currently means variables that are
      the same size as "char", "short", "int" and "long".  "Properly
      aligned" means the natural alignment, thus no constraints for
@@ -347,8 +343,7 @@ And there are anti-guarantees:
 		sizes of those intervening bit-fields happen to be.
 
 
-=========================
-WHAT ARE MEMORY BARRIERS?
+What are memory barriers?
 =========================
 
 As can be seen above, independent memory operations are effectively performed
@@ -366,8 +361,7 @@ branch prediction and various types of caching.  Memory barriers are used to
 override or suppress these tricks, allowing the code to sanely control the
 interaction of multiple CPUs and/or devices.
 
-
-VARIETIES OF MEMORY BARRIER
+Varieties of memory barrier
 ---------------------------
 
 Memory barriers come in four basic varieties:
@@ -515,36 +509,36 @@ more substantial guarantees, but they may _not_ be relied upon outside of arch
 specific code.
 
 
-WHAT MAY NOT BE ASSUMED ABOUT MEMORY BARRIERS?
+What may not be assumed about memory barriers?
 ----------------------------------------------
 
 There are certain things that the Linux kernel memory barriers do not guarantee:
 
- (*) There is no guarantee that any of the memory accesses specified before a
+ (#) There is no guarantee that any of the memory accesses specified before a
      memory barrier will be _complete_ by the completion of a memory barrier
      instruction; the barrier can be considered to draw a line in that CPU's
      access queue that accesses of the appropriate type may not cross.
 
- (*) There is no guarantee that issuing a memory barrier on one CPU will have
+ (#) There is no guarantee that issuing a memory barrier on one CPU will have
      any direct effect on another CPU or any other hardware in the system.  The
      indirect effect will be the order in which the second CPU sees the effects
      of the first CPU's accesses occur, but see the next point:
 
- (*) There is no guarantee that a CPU will see the correct order of effects
+ (#) There is no guarantee that a CPU will see the correct order of effects
      from a second CPU's accesses, even _if_ the second CPU uses a memory
      barrier, unless the first CPU _also_ uses a matching memory barrier (see
      the subsection on "SMP Barrier Pairing").
 
- (*) There is no guarantee that some intervening piece of off-the-CPU
-     hardware[*] will not reorder the memory accesses.  CPU cache coherency
+ (#) There is no guarantee that some intervening piece of off-the-CPU
+     hardware [1]_ will not reorder the memory accesses.  CPU cache coherency
      mechanisms should propagate the indirect effects of a memory barrier
      between CPUs, but might not do so in order.
 
-	[*] For information on bus mastering DMA and coherency please read:
+	.. [1] For information on bus mastering DMA and coherency please read:
 
-	    Documentation/PCI/pci.txt
-	    Documentation/DMA-API-HOWTO.txt
-	    Documentation/DMA-API.txt
+	   - Documentation/PCI/pci.txt
+	   - Documentation/DMA-API-HOWTO.txt
+	   - Documentation/DMA-API.txt
 
 
 DATA DEPENDENCY BARRIERS
@@ -552,7 +546,7 @@ DATA DEPENDENCY BARRIERS
 
 The usage requirements of data dependency barriers are a little subtle, and
 it's not always obvious that they're needed.  To illustrate, consider the
-following sequence of events:
+following sequence of events::
 
 	CPU 1		      CPU 2
 	===============	      ===============
@@ -564,13 +558,13 @@ following sequence of events:
 			      D = *Q;
 
 There's a clear data dependency here, and it would seem that by the end of the
-sequence, Q must be either &A or &B, and that:
+sequence, Q must be either &A or &B, and that::
 
 	(Q == &A) implies (D == 1)
 	(Q == &B) implies (D == 4)
 
 But!  CPU 2's perception of P may be updated _before_ its perception of B, thus
-leading to the following situation:
+leading to the following situation::
 
 	(Q == &B) and (D == 2) ????
 
@@ -579,7 +573,7 @@ isn't, and this behaviour can be observed on certain real CPUs (such as the DEC
 Alpha).
 
 To deal with this, a data dependency barrier or better must be inserted
-between the address load and the data load:
+between the address load and the data load::
 
 	CPU 1		      CPU 2
 	===============	      ===============
@@ -594,7 +588,7 @@ between the address load and the data load:
 This enforces the occurrence of one of the two implications, and prevents the
 third possibility from arising.
 
-A data-dependency barrier must also order against dependent writes:
+A data-dependency barrier must also order against dependent writes::
 
 	CPU 1		      CPU 2
 	===============	      ===============
@@ -607,7 +601,7 @@ A data-dependency barrier must also order against dependent writes:
 			      *Q = 5;
 
 The data-dependency barrier must order the read into Q with the store
-into *Q.  This prohibits this outcome:
+into *Q.  This prohibits this outcome::
 
 	(Q == &B) && (B == 4)
 
@@ -637,7 +631,7 @@ target appearing to be incompletely initialised.
 See also the subsection on "Cache Coherency" for a more thorough example.
 
 
-CONTROL DEPENDENCIES
+Control dependencies
 --------------------
 
 Control dependencies can be a bit tricky because current compilers do
@@ -646,7 +640,7 @@ the compiler's ignorance from breaking your code.
 
 A load-load control dependency requires a full read memory barrier, not
 simply a data dependency barrier to make it work correctly.  Consider the
-following bit of code:
+following bit of code::
 
 	q = READ_ONCE(a);
 	if (q) {
@@ -658,7 +652,7 @@ This will not have the desired effect because there is no actual data
 dependency, but rather a control dependency that the CPU may short-circuit
 by attempting to predict the outcome in advance, so that other CPUs see
 the load from b as having happened before the load from a.  In such a
-case what's actually required is:
+case what's actually required is::
 
 	q = READ_ONCE(a);
 	if (q) {
@@ -667,7 +661,7 @@ case what's actually required is:
 	}
 
 However, stores are not speculated.  This means that ordering -is- provided
-for load-store control dependencies, as in the following example:
+for load-store control dependencies, as in the following example::
 
 	q = READ_ONCE(a);
 	if (q) {
@@ -684,7 +678,7 @@ Either can result in highly counterintuitive effects on ordering.
 Worse yet, if the compiler is able to prove (say) that the value of
 variable 'a' is always non-zero, it would be well within its rights
 to optimize the original example by eliminating the "if" statement
-as follows:
+as follows::
 
 	q = a;
 	b = 1;  /* BUG: Compiler and CPU can both reorder!!! */
@@ -692,7 +686,7 @@ as follows:
 So don't leave out the READ_ONCE().
 
 It is tempting to try to enforce ordering on identical stores on both
-branches of the "if" statement as follows:
+branches of the "if" statement as follows::
 
 	q = READ_ONCE(a);
 	if (q) {
@@ -706,7 +700,7 @@ branches of the "if" statement as follows:
 	}
 
 Unfortunately, current compilers will transform this as follows at high
-optimization levels:
+optimization levels::
 
 	q = READ_ONCE(a);
 	barrier();
@@ -724,7 +718,7 @@ Now there is no conditional between the load from 'a' and the store to
 The conditional is absolutely required, and must be present in the
 assembly code even after all compiler optimizations have been applied.
 Therefore, if you need ordering in this example, you need explicit
-memory barriers, for example, smp_store_release():
+memory barriers, for example, smp_store_release()::
 
 	q = READ_ONCE(a);
 	if (q) {
@@ -736,7 +730,7 @@ memory barriers, for example, smp_store_release():
 	}
 
 In contrast, without explicit memory barriers, two-legged-if control
-ordering is guaranteed only when the stores differ, for example:
+ordering is guaranteed only when the stores differ, for example::
 
 	q = READ_ONCE(a);
 	if (q) {
@@ -752,7 +746,7 @@ proving the value of 'a'.
 
 In addition, you need to be careful what you do with the local variable 'q',
 otherwise the compiler might be able to guess the value and again remove
-the needed conditional.  For example:
+the needed conditional.  For example::
 
 	q = READ_ONCE(a);
 	if (q % MAX) {
@@ -765,7 +759,7 @@ the needed conditional.  For example:
 
 If MAX is defined to be 1, then the compiler knows that (q % MAX) is
 equal to zero, in which case the compiler is within its rights to
-transform the above code into the following:
+transform the above code into the following::
 
 	q = READ_ONCE(a);
 	WRITE_ONCE(b, 2);
@@ -776,7 +770,7 @@ between the load from variable 'a' and the store to variable 'b'.  It is
 tempting to add a barrier(), but this does not help.  The conditional
 is gone, and the barrier won't bring it back.  Therefore, if you are
 relying on this ordering, you should make sure that MAX is greater than
-one, perhaps as follows:
+one, perhaps as follows::
 
 	q = READ_ONCE(a);
 	BUILD_BUG_ON(MAX <= 1); /* Order load from a with store to b. */
@@ -793,7 +787,7 @@ identical, as noted earlier, the compiler could pull this store outside
 of the 'if' statement.
 
 You must also be careful not to rely too much on boolean short-circuit
-evaluation.  Consider this example:
+evaluation.  Consider this example::
 
 	q = READ_ONCE(a);
 	if (q || 1 > 0)
@@ -801,7 +795,7 @@ evaluation.  Consider this example:
 
 Because the first condition cannot fault and the second condition is
 always true, the compiler can transform this example as following,
-defeating control dependency:
+defeating control dependency::
 
 	q = READ_ONCE(a);
 	WRITE_ONCE(b, 1);
@@ -813,7 +807,7 @@ the compiler to use the results.
 
 In addition, control dependencies apply only to the then-clause and
 else-clause of the if-statement in question.  In particular, it does
-not necessarily apply to code following the if-statement:
+not necessarily apply to code following the if-statement::
 
 	q = READ_ONCE(a);
 	if (q) {
@@ -828,7 +822,7 @@ compiler cannot reorder volatile accesses and also cannot reorder
 the writes to 'b' with the condition.  Unfortunately for this line
 of reasoning, the compiler might compile the two writes to 'b' as
 conditional-move instructions, as in this fanciful pseudo-assembly
-language:
+language::
 
 	ld r1,a
 	cmp r1,$0
@@ -846,7 +840,7 @@ invoked by those two clauses), not to code following that if-statement.
 
 Finally, control dependencies do -not- provide transitivity.  This is
 demonstrated by two related examples, with the initial values of
-'x' and 'y' both being zero:
+'x' and 'y' both being zero::
 
 	CPU 0                     CPU 1
 	=======================   =======================
@@ -858,7 +852,7 @@ demonstrated by two related examples, with the initial values of
 
 The above two-CPU example will never trigger the assert().  However,
 if control dependencies guaranteed transitivity (which they do not),
-then adding the following CPU would guarantee a related assertion:
+then adding the following CPU would guarantee a related assertion::
 
 	CPU 2
 	=====================
@@ -879,14 +873,14 @@ site: https://www.cl.cam.ac.uk/~pes20/ppcmem/index.html.
 
 In summary:
 
-  (*) Control dependencies can order prior loads against later stores.
+  (#) Control dependencies can order prior loads against later stores.
       However, they do -not- guarantee any other sort of ordering:
       Not prior loads against later loads, nor prior stores against
       later anything.  If you need these other forms of ordering,
       use smp_rmb(), smp_wmb(), or, in the case of prior stores and
       later loads, smp_mb().
 
-  (*) If both legs of the "if" statement begin with identical stores to
+  (#) If both legs of the "if" statement begin with identical stores to
       the same variable, then those stores must be ordered, either by
       preceding both of them with smp_mb() or by using smp_store_release()
       to carry out the stores.  Please note that it is -not- sufficient
@@ -895,30 +889,30 @@ In summary:
       destroy the control dependency while respecting the letter of the
       barrier() law.
 
-  (*) Control dependencies require at least one run-time conditional
+  (#) Control dependencies require at least one run-time conditional
       between the prior load and the subsequent store, and this
       conditional must involve the prior load.  If the compiler is able
       to optimize the conditional away, it will have also optimized
       away the ordering.  Careful use of READ_ONCE() and WRITE_ONCE()
       can help to preserve the needed conditional.
 
-  (*) Control dependencies require that the compiler avoid reordering the
+  (#) Control dependencies require that the compiler avoid reordering the
       dependency into nonexistence.  Careful use of READ_ONCE() or
       atomic{,64}_read() can help to preserve your control dependency.
       Please see the COMPILER BARRIER section for more information.
 
-  (*) Control dependencies apply only to the then-clause and else-clause
+  (#) Control dependencies apply only to the then-clause and else-clause
       of the if-statement containing the control dependency, including
       any functions that these two clauses call.  Control dependencies
       do -not- apply to code following the if-statement containing the
       control dependency.
 
-  (*) Control dependencies pair normally with other types of barriers.
+  (#) Control dependencies pair normally with other types of barriers.
 
-  (*) Control dependencies do -not- provide transitivity.  If you
+  (#) Control dependencies do -not- provide transitivity.  If you
       need transitivity, use smp_mb().
 
-  (*) Compilers do not understand control dependencies.  It is therefore
+  (#) Compilers do not understand control dependencies.  It is therefore
       your job to ensure that they do not break your code.
 
 
@@ -935,7 +929,7 @@ including of course general barriers.  A write barrier pairs with a data
 dependency barrier, a control dependency, an acquire barrier, a release
 barrier, a read barrier, or a general barrier.  Similarly a read barrier,
 control dependency, or a data dependency barrier pairs with a write
-barrier, an acquire barrier, a release barrier, or a general barrier:
+barrier, an acquire barrier, a release barrier, or a general barrier::
 
 	CPU 1		      CPU 2
 	===============	      ===============
@@ -945,7 +939,7 @@ barrier, an acquire barrier, a release barrier, or a general barrier:
 			      <read barrier>
 			      y = READ_ONCE(a);
 
-Or:
+Or::
 
 	CPU 1		      CPU 2
 	===============	      ===============================
@@ -955,7 +949,7 @@ Or:
 			      <data dependency barrier>
 			      y = *x;
 
-Or even:
+Or even::
 
 	CPU 1		      CPU 2
 	===============	      ===============================
@@ -973,7 +967,7 @@ the "weaker" type.
 
 [!] Note that the stores before the write barrier would normally be expected to
 match the loads after the read barrier or the data dependency barrier, and vice
-versa:
+versa::
 
 	CPU 1                               CPU 2
 	===================                 ===================
@@ -984,11 +978,11 @@ versa:
 	WRITE_ONCE(d, 4);    }----   --->{  y = READ_ONCE(b);
 
 
-EXAMPLES OF MEMORY BARRIER SEQUENCES
+Examples of memory barrier sequences
 ------------------------------------
 
 Firstly, write barriers act as partial orderings on store operations.
-Consider the following sequence of events:
+Consider the following sequence of events::
 
 	CPU 1
 	=======================
@@ -1002,7 +996,7 @@ Consider the following sequence of events:
 This sequence of events is committed to the memory coherence system in an order
 that the rest of the system might perceive as the unordered set of { STORE A,
 STORE B, STORE C } all occurring before the unordered set of { STORE D, STORE E
-}:
+}::
 
 	+-------+       :      :
 	|       |       +------+
@@ -1026,7 +1020,7 @@ STORE B, STORE C } all occurring before the unordered set of { STORE D, STORE E
 
 
 Secondly, data dependency barriers act as partial orderings on data-dependent
-loads.  Consider the following sequence of events:
+loads.  Consider the following sequence of events::
 
 	CPU 1			CPU 2
 	=======================	=======================
@@ -1039,7 +1033,7 @@ loads.  Consider the following sequence of events:
 				LOAD *C (reads B)
 
 Without intervention, CPU 2 may perceive the events on CPU 1 in some
-effectively random order, despite the write barrier issued by CPU 1:
+effectively random order, despite the write barrier issued by CPU 1::
 
 	+-------+       :      :                :       :
 	|       |       +------+                +-------+  | Sequence of update
@@ -1072,7 +1066,7 @@ In the above example, CPU 2 perceives that B is 7, despite the load of *C
 (which would be B) coming after the LOAD of C.
 
 If, however, a data dependency barrier were to be placed between the load of C
-and the load of *C (ie: B) on CPU 2:
+and the load of *C (ie: B) on CPU 2::
 
 	CPU 1			CPU 2
 	=======================	=======================
@@ -1085,7 +1079,7 @@ and the load of *C (ie: B) on CPU 2:
 				<data dependency barrier>
 				LOAD *C (reads B)
 
-then the following will occur:
+then the following will occur::
 
 	+-------+       :      :                :       :
 	|       |       +------+                +-------+
@@ -1113,7 +1107,7 @@ then the following will occur:
 
 
 And thirdly, a read barrier acts as a partial order on loads.  Consider the
-following sequence of events:
+following sequence of events::
 
 	CPU 1			CPU 2
 	=======================	=======================
@@ -1125,7 +1119,7 @@ following sequence of events:
 				LOAD A
 
 Without intervention, CPU 2 may then choose to perceive the events on CPU 1 in
-some effectively random order, despite the write barrier issued by CPU 1:
+some effectively random order, despite the write barrier issued by CPU 1::
 
 	+-------+       :      :                :       :
 	|       |       +------+                +-------+
@@ -1149,7 +1143,7 @@ some effectively random order, despite the write barrier issued by CPU 1:
 
 
 If, however, a read barrier were to be placed between the load of B and the
-load of A on CPU 2:
+load of A on CPU 2::
 
 	CPU 1			CPU 2
 	=======================	=======================
@@ -1162,7 +1156,7 @@ load of A on CPU 2:
 				LOAD A
 
 then the partial ordering imposed by CPU 1 will be perceived correctly by CPU
-2:
+2::
 
 	+-------+       :      :                :       :
 	|       |       +------+                +-------+
@@ -1185,7 +1179,7 @@ then the partial ordering imposed by CPU 1 will be perceived correctly by CPU
 
 
 To illustrate this more completely, consider what could happen if the code
-contained a load of A either side of the read barrier:
+contained a load of A either side of the read barrier::
 
 	CPU 1			CPU 2
 	=======================	=======================
@@ -1199,7 +1193,7 @@ contained a load of A either side of the read barrier:
 				LOAD A [second load of A]
 
 Even though the two loads of A both occur after the load of B, they may both
-come up with different values:
+come up with different values::
 
 	+-------+       :      :                :       :
 	|       |       +------+                +-------+
@@ -1225,7 +1219,7 @@ come up with different values:
 
 
 But it may be that the update to A from CPU 1 becomes perceptible to CPU 2
-before the read barrier completes anyway:
+before the read barrier completes anyway::
 
 	+-------+       :      :                :       :
 	|       |       +------+                +-------+
@@ -1255,7 +1249,7 @@ load of B came up with B == 2.  No such guarantee exists for the first load of
 A; that may come up with either A == 0 or A == 1.
 
 
-READ MEMORY BARRIERS VS LOAD SPECULATION
+Read memory barriers vs load speculation
 ----------------------------------------
 
 Many CPUs speculate with loads: that is they see that they will need to load an
@@ -1269,7 +1263,7 @@ It may turn out that the CPU didn't actually need the value - perhaps because a
 branch circumvented the load - in which case it can discard the value or just
 cache it for later use.
 
-Consider:
+Consider::
 
 	CPU 1			CPU 2
 	=======================	=======================
@@ -1278,7 +1272,7 @@ Consider:
 				DIVIDE		} take a long time to perform
 				LOAD A
 
-Which might appear as this:
+Which might appear as this::
 
 	                                        :       :       +-------+
 	                                        +-------+       |       |
@@ -1297,7 +1291,7 @@ Which might appear as this:
 
 
 Placing a read barrier or a data dependency barrier just before the second
-load:
+load::
 
 	CPU 1			CPU 2
 	=======================	=======================
@@ -1309,7 +1303,7 @@ load:
 
 will force any value speculatively obtained to be reconsidered to an extent
 dependent on the type of barrier used.  If there was no change made to the
-speculated memory location, then the speculated value will just be used:
+speculated memory location, then the speculated value will just be used::
 
 	                                        :       :       +-------+
 	                                        +-------+       |       |
@@ -1331,7 +1325,7 @@ speculated memory location, then the speculated value will just be used:
 
 
 but if there was an update or an invalidation from another CPU pending, then
-the speculation will be cancelled and the value reloaded:
+the speculation will be cancelled and the value reloaded::
 
 	                                        :       :       +-------+
 	                                        +-------+       |       |
@@ -1352,12 +1346,12 @@ the speculation will be cancelled and the value reloaded:
 	retrieved                               :       :       +-------+
 
 
-TRANSITIVITY
+Transitivity
 ------------
 
 Transitivity is a deeply intuitive notion about ordering that is not
 always provided by real computer systems.  The following example
-demonstrates transitivity:
+demonstrates transitivity::
 
 	CPU 1			CPU 2			CPU 3
 	=======================	=======================	=======================
@@ -1385,7 +1379,7 @@ also return 1.
 
 However, transitivity is -not- guaranteed for read or write barriers.
 For example, suppose that CPU 2's general barrier in the above example
-is changed to a read barrier as shown below:
+is changed to a read barrier as shown below::
 
 	CPU 1			CPU 2			CPU 3
 	=======================	=======================	=======================
@@ -1409,7 +1403,7 @@ General barriers provide "global transitivity", so that all CPUs will
 agree on the order of operations.  In contrast, a chain of release-acquire
 pairs provides only "local transitivity", so that only those CPUs on
 the chain are guaranteed to agree on the combined order of the accesses.
-For example, switching to C code in deference to Herman Hollerith:
+For example, switching to C code in deference to Herman Hollerith::
 
 	int u, v, x, y, z;
 
@@ -1443,23 +1437,23 @@ For example, switching to C code in deference to Herman Hollerith:
 
 Because cpu0(), cpu1(), and cpu2() participate in a local transitive
 chain of smp_store_release()/smp_load_acquire() pairs, the following
-outcome is prohibited:
+outcome is prohibited::
 
 	r0 == 1 && r1 == 1 && r2 == 1
 
 Furthermore, because of the release-acquire relationship between cpu0()
 and cpu1(), cpu1() must see cpu0()'s writes, so that the following
-outcome is prohibited:
+outcome is prohibited::
 
 	r1 == 1 && r5 == 0
 
 However, the transitivity of release-acquire is local to the participating
 CPUs and does not apply to cpu3().  Therefore, the following outcome
-is possible:
+is possible::
 
 	r0 == 0 && r1 == 1 && r2 == 1 && r3 == 0 && r4 == 0
 
-As an aside, the following outcome is also possible:
+As an aside, the following outcome is also possible::
 
 	r0 == 0 && r1 == 1 && r2 == 1 && r3 == 0 && r4 == 0 && r5 == 1
 
@@ -1476,7 +1470,7 @@ intended order.
 However, please keep in mind that smp_load_acquire() is not magic.
 In particular, it simply reads from its argument with ordering.  It does
 -not- ensure that any particular value will be read.  Therefore, the
-following outcome is possible:
+following outcome is possible::
 
 	r0 == 0 && r1 == 0 && r2 == 0 && r5 == 0
 
@@ -1487,25 +1481,24 @@ To reiterate, if your code requires global transitivity, use general
 barriers throughout.
 
 
-========================
-EXPLICIT KERNEL BARRIERS
+Explicit kernel barriers
 ========================
 
 The Linux kernel has a variety of different barriers that act at different
 levels:
 
-  (*) Compiler barrier.
+  (#) Compiler barrier.
 
-  (*) CPU memory barriers.
+  (#) CPU memory barriers.
 
-  (*) MMIO write barrier.
+  (#) MMIO write barrier.
 
 
-COMPILER BARRIER
+Compiler barrier
 ----------------
 
 The Linux kernel has an explicit compiler barrier function that prevents the
-compiler from moving the memory accesses either side of it to the other side:
+compiler from moving the memory accesses either side of it to the other side::
 
 	barrier();
 
@@ -1516,12 +1509,12 @@ accesses flagged by the READ_ONCE() or WRITE_ONCE().
 
 The barrier() function has the following effects:
 
- (*) Prevents the compiler from reordering accesses following the
+ (#) Prevents the compiler from reordering accesses following the
      barrier() to precede any accesses preceding the barrier().
      One example use for this property is to ease communication between
      interrupt-handler code and the code that was interrupted.
 
- (*) Within a loop, forces the compiler to load the variables used
+ (#) Within a loop, forces the compiler to load the variables used
      in that loop's conditional on each pass through that loop.
 
 The READ_ONCE() and WRITE_ONCE() functions can prevent any number of
@@ -1529,16 +1522,16 @@ optimizations that, while perfectly safe in single-threaded code, can
 be fatal in concurrent code.  Here are some examples of these sorts
 of optimizations:
 
- (*) The compiler is within its rights to reorder loads and stores
+ (#) The compiler is within its rights to reorder loads and stores
      to the same variable, and in some cases, the CPU is within its
      rights to reorder loads to the same variable.  This means that
-     the following code:
+     the following code::
 
 	a[0] = x;
 	a[1] = x;
 
      Might result in an older value of x stored in a[1] than in a[0].
-     Prevent both the compiler and the CPU from doing this as follows:
+     Prevent both the compiler and the CPU from doing this as follows::
 
 	a[0] = READ_ONCE(x);
 	a[1] = READ_ONCE(x);
@@ -1546,36 +1539,36 @@ of optimizations:
      In short, READ_ONCE() and WRITE_ONCE() provide cache coherence for
      accesses from multiple CPUs to a single variable.
 
- (*) The compiler is within its rights to merge successive loads from
+ (#) The compiler is within its rights to merge successive loads from
      the same variable.  Such merging can cause the compiler to "optimize"
-     the following code:
+     the following code::
 
 	while (tmp = a)
 		do_something_with(tmp);
 
      into the following code, which, although in some sense legitimate
      for single-threaded code, is almost certainly not what the developer
-     intended:
+     intended::
 
 	if (tmp = a)
 		for (;;)
 			do_something_with(tmp);
 
-     Use READ_ONCE() to prevent the compiler from doing this to you:
+     Use READ_ONCE() to prevent the compiler from doing this to you::
 
 	while (tmp = READ_ONCE(a))
 		do_something_with(tmp);
 
- (*) The compiler is within its rights to reload a variable, for example,
+ (#) The compiler is within its rights to reload a variable, for example,
      in cases where high register pressure prevents the compiler from
      keeping all data of interest in registers.  The compiler might
-     therefore optimize the variable 'tmp' out of our previous example:
+     therefore optimize the variable 'tmp' out of our previous example::
 
 	while (tmp = a)
 		do_something_with(tmp);
 
      This could result in the following code, which is perfectly safe in
-     single-threaded code, but can be fatal in concurrent code:
+     single-threaded code, but can be fatal in concurrent code::
 
 	while (a)
 		do_something_with(a);
@@ -1585,7 +1578,7 @@ of optimizations:
      a was modified by some other CPU between the "while" statement and
      the call to do_something_with().
 
-     Again, use READ_ONCE() to prevent the compiler from doing this:
+     Again, use READ_ONCE() to prevent the compiler from doing this::
 
 	while (tmp = READ_ONCE(a))
 		do_something_with(tmp);
@@ -1596,14 +1589,14 @@ of optimizations:
      single-threaded code, so you need to tell the compiler about cases
      where it is not safe.
 
- (*) The compiler is within its rights to omit a load entirely if it knows
+ (#) The compiler is within its rights to omit a load entirely if it knows
      what the value will be.  For example, if the compiler can prove that
-     the value of variable 'a' is always zero, it can optimize this code:
+     the value of variable 'a' is always zero, it can optimize this code::
 
 	while (tmp = a)
 		do_something_with(tmp);
 
-     Into this:
+     Into this::
 
 	do { } while (0);
 
@@ -1612,14 +1605,14 @@ of optimizations:
      will carry out its proof assuming that the current CPU is the only
      one updating variable 'a'.  If variable 'a' is shared, then the
      compiler's proof will be erroneous.  Use READ_ONCE() to tell the
-     compiler that it doesn't know as much as it thinks it does:
+     compiler that it doesn't know as much as it thinks it does::
 
 	while (tmp = READ_ONCE(a))
 		do_something_with(tmp);
 
      But please note that the compiler is also closely watching what you
      do with the value after the READ_ONCE().  For example, suppose you
-     do the following and MAX is a preprocessor macro with the value 1:
+     do the following and MAX is a preprocessor macro with the value 1::
 
 	while ((tmp = READ_ONCE(a)) % MAX)
 		do_something_with(tmp);
@@ -1629,12 +1622,12 @@ of optimizations:
      the code into near-nonexistence.  (It will still load from the
      variable 'a'.)
 
- (*) Similarly, the compiler is within its rights to omit a store entirely
+ (#) Similarly, the compiler is within its rights to omit a store entirely
      if it knows that the variable already has the value being stored.
      Again, the compiler assumes that the current CPU is the only one
      storing into the variable, which can cause the compiler to do the
      wrong thing for shared variables.  For example, suppose you have
-     the following:
+     the following::
 
 	a = 0;
 	... Code that does not store to variable a ...
@@ -1646,15 +1639,15 @@ of optimizations:
      meantime.
 
      Use WRITE_ONCE() to prevent the compiler from making this sort of
-     wrong guess:
+     wrong guess::
 
 	WRITE_ONCE(a, 0);
 	... Code that does not store to variable a ...
 	WRITE_ONCE(a, 0);
 
- (*) The compiler is within its rights to reorder memory accesses unless
+ (#) The compiler is within its rights to reorder memory accesses unless
      you tell it not to.  For example, consider the following interaction
-     between process-level code and an interrupt handler:
+     between process-level code and an interrupt handler::
 
 	void process_level(void)
 	{
@@ -1670,7 +1663,7 @@ of optimizations:
 
      There is nothing to prevent the compiler from transforming
      process_level() to the following, in fact, this might well be a
-     win for single-threaded code:
+     win for single-threaded code::
 
 	void process_level(void)
 	{
@@ -1680,7 +1673,7 @@ of optimizations:
 
      If the interrupt occurs between these two statement, then
      interrupt_handler() might be passed a garbled msg.  Use WRITE_ONCE()
-     to prevent this as follows:
+     to prevent this as follows::
 
 	void process_level(void)
 	{
@@ -1717,15 +1710,15 @@ of optimizations:
      respect the order in which the READ_ONCE()s and WRITE_ONCE()s occur,
      though the CPU of course need not do so.
 
- (*) The compiler is within its rights to invent stores to a variable,
-     as in the following example:
+ (#) The compiler is within its rights to invent stores to a variable,
+     as in the following example::
 
 	if (a)
 		b = a;
 	else
 		b = 42;
 
-     The compiler might save a branch by optimizing this as follows:
+     The compiler might save a branch by optimizing this as follows::
 
 	b = 42;
 	if (a)
@@ -1735,7 +1728,7 @@ of optimizations:
      a branch.  Unfortunately, in concurrent code, this optimization
      could cause some other CPU to see a spurious value of 42 -- even
      if variable 'a' was never zero -- when loading variable 'b'.
-     Use WRITE_ONCE() to prevent this as follows:
+     Use WRITE_ONCE() to prevent this as follows::
 
 	if (a)
 		WRITE_ONCE(b, a);
@@ -1747,13 +1740,13 @@ of optimizations:
      poor performance and scalability.  Use READ_ONCE() to prevent
      invented loads.
 
- (*) For aligned memory locations whose size allows them to be accessed
+ (#) For aligned memory locations whose size allows them to be accessed
      with a single memory-reference instruction, prevents "load tearing"
      and "store tearing," in which a single large access is replaced by
      multiple smaller accesses.  For example, given an architecture having
      16-bit store instructions with 7-bit immediate fields, the compiler
      might be tempted to use two 16-bit store-immediate instructions to
-     implement the following 32-bit store:
+     implement the following 32-bit store::
 
 	p = 0x00010002;
 
@@ -1763,12 +1756,12 @@ of optimizations:
      This optimization can therefore be a win in single-threaded code.
      In fact, a recent bug (since fixed) caused GCC to incorrectly use
      this optimization in a volatile store.  In the absence of such bugs,
-     use of WRITE_ONCE() prevents store tearing in the following example:
+     use of WRITE_ONCE() prevents store tearing in the following example::
 
 	WRITE_ONCE(p, 0x00010002);
 
      Use of packed structures can also result in load and store tearing,
-     as in this example:
+     as in this example::
 
 	struct __attribute__((__packed__)) foo {
 		short a;
@@ -1787,7 +1780,7 @@ of optimizations:
      implement these three assignment statements as a pair of 32-bit
      loads followed by a pair of 32-bit stores.  This would result in
      load tearing on 'foo1.b' and store tearing on 'foo2.b'.  READ_ONCE()
-     and WRITE_ONCE() again prevent tearing in this example:
+     and WRITE_ONCE() again prevent tearing in this example::
 
 	foo2.a = foo1.a;
 	WRITE_ONCE(foo2.b, READ_ONCE(foo1.b));
@@ -1804,17 +1797,19 @@ Please note that these compiler barriers have no direct effect on the CPU,
 which may then reorder things however it wishes.
 
 
-CPU MEMORY BARRIERS
+CPU memory barriers
 -------------------
 
 The Linux kernel has eight basic CPU memory barriers:
 
+	===============	=======================	===========================
 	TYPE		MANDATORY		SMP CONDITIONAL
 	===============	=======================	===========================
 	GENERAL		mb()			smp_mb()
 	WRITE		wmb()			smp_wmb()
 	READ		rmb()			smp_rmb()
 	DATA DEPENDENCY	read_barrier_depends()	smp_read_barrier_depends()
+	===============	=======================	===========================
 
 
 All memory barriers except the data dependency barriers imply a compiler
@@ -1849,15 +1844,15 @@ compiler and the CPU from reordering them.
 
 There are some more advanced barrier functions:
 
- (*) smp_store_mb(var, value)
+ (#) smp_store_mb(var, value)
 
      This assigns the value to the variable and then inserts a full memory
      barrier after it.  It isn't guaranteed to insert anything more than a
      compiler barrier in a UP compilation.
 
 
- (*) smp_mb__before_atomic();
- (*) smp_mb__after_atomic();
+ (#) smp_mb__before_atomic();
+ (#) smp_mb__after_atomic();
 
      These are for use with atomic (such as add, subtract, increment and
      decrement) functions that don't return a value, especially when used for
@@ -1867,7 +1862,7 @@ There are some more advanced barrier functions:
      value (such as set_bit and clear_bit).
 
      As an example, consider a piece of code that marks an object as being dead
-     and then decrements the object's reference count:
+     and then decrements the object's reference count::
 
 	obj->dead = 1;
 	smp_mb__before_atomic();
@@ -1880,7 +1875,7 @@ There are some more advanced barrier functions:
      operations" subsection for information on where to use these.
 
 
- (*) lockless_dereference();
+ (#) lockless_dereference();
 
      This can be thought of as a pointer-fetch wrapper around the
      smp_read_barrier_depends() data-dependency barrier.
@@ -1892,8 +1887,8 @@ There are some more advanced barrier functions:
      that can be used both with and without RCU.
 
 
- (*) dma_wmb();
- (*) dma_rmb();
+ (#) dma_wmb();
+ (#) dma_rmb();
 
      These are for use with consistent memory to guarantee the ordering
      of writes or reads of shared memory accessible to both the CPU and a
@@ -1902,7 +1897,7 @@ There are some more advanced barrier functions:
      For example, consider a device driver that shares memory with a device
      and uses a descriptor status value to indicate if the descriptor belongs
      to the device or the CPU, and a doorbell to notify it when new
-     descriptors are available:
+     descriptors are available::
 
 	if (desc->status != DEVICE_OWN) {
 		/* do not read data until we own descriptor */
@@ -1935,11 +1930,11 @@ There are some more advanced barrier functions:
      See Documentation/DMA-API.txt for more information on consistent memory.
 
 
-MMIO WRITE BARRIER
+MMIO write barrier
 ------------------
 
 The Linux kernel also has a special barrier for use with memory-mapped I/O
-writes:
+writes::
 
 	mmiowb();
 
@@ -1950,8 +1945,7 @@ CPU->Hardware interface and actually affect the hardware at some level.
 See the subsection "Acquires vs I/O accesses" for more information.
 
 
-===============================
-IMPLICIT KERNEL MEMORY BARRIERS
+Implicit kernel memory barriers
 ===============================
 
 Some of the other functions in the linux kernel imply memory barriers, amongst
@@ -1962,16 +1956,16 @@ provide more substantial guarantees, but these may not be relied upon outside
 of arch specific code.
 
 
-LOCK ACQUISITION FUNCTIONS
+Lock acquisition functions
 --------------------------
 
 The Linux kernel has a number of locking constructs:
 
- (*) spin locks
- (*) R/W spin locks
- (*) mutexes
- (*) semaphores
- (*) R/W semaphores
+ (#) spin locks
+ (#) R/W spin locks
+ (#) mutexes
+ (#) semaphores
+ (#) R/W semaphores
 
 In all cases there are variants on "ACQUIRE" operations and "RELEASE" operations
 for each construct.  These operations all imply certain barriers:
@@ -2019,14 +2013,14 @@ section may seep into the inside of the critical section.
 An ACQUIRE followed by a RELEASE may not be assumed to be full memory barrier
 because it is possible for an access preceding the ACQUIRE to happen after the
 ACQUIRE, and an access following the RELEASE to happen before the RELEASE, and
-the two accesses can themselves then cross:
+the two accesses can themselves then cross::
 
 	*A = a;
 	ACQUIRE M
 	RELEASE M
 	*B = b;
 
-may occur as:
+may occur as::
 
 	ACQUIRE M, STORE *B, STORE *A, RELEASE M
 
@@ -2039,14 +2033,14 @@ RELEASE may -not- be assumed to be a full memory barrier.
 Similarly, the reverse case of a RELEASE followed by an ACQUIRE does
 not imply a full memory barrier.  Therefore, the CPU's execution of the
 critical sections corresponding to the RELEASE and the ACQUIRE can cross,
-so that:
+so that::
 
 	*A = a;
 	RELEASE M
 	ACQUIRE N
 	*B = b;
 
-could occur as:
+could occur as::
 
 	ACQUIRE N, STORE *B, STORE *A, RELEASE M
 
@@ -2085,7 +2079,7 @@ with interrupt disabling operations.
 See also the section on "Inter-CPU acquiring barrier effects".
 
 
-As an example, consider the following:
+As an example, consider the following::
 
 	*A = a;
 	*B = b;
@@ -2096,13 +2090,13 @@ As an example, consider the following:
 	*E = e;
 	*F = f;
 
-The following sequence of events is acceptable:
+The following sequence of events is acceptable::
 
 	ACQUIRE, {*F,*A}, *E, {*C,*D}, *B, RELEASE
 
 	[+] Note that {*F,*A} indicates a combined access.
 
-But none of the following are:
+But none of the following are::
 
 	{*F,*A}, *B,	ACQUIRE, *C, *D,	RELEASE, *E
 	*A, *B, *C,	ACQUIRE, *D,		RELEASE, *E, *F
@@ -2111,7 +2105,7 @@ But none of the following are:
 
 
 
-INTERRUPT DISABLING FUNCTIONS
+Interrupt disabling functions
 -----------------------------
 
 Functions that disable interrupts (ACQUIRE equivalent) and enable interrupts
@@ -2120,7 +2114,7 @@ barriers are required in such a situation, they must be provided from some
 other means.
 
 
-SLEEP AND WAKE-UP FUNCTIONS
+Sleep and wake-up functions
 ---------------------------
 
 Sleeping and waking on an event flagged in global data can be viewed as an
@@ -2130,7 +2124,7 @@ these appear to happen in the right order, the primitives to begin the process
 of going to sleep, and the primitives to initiate a wake up imply certain
 barriers.
 
-Firstly, the sleeper normally follows something like this sequence of events:
+Firstly, the sleeper normally follows something like this sequence of events::
 
 	for (;;) {
 		set_current_state(TASK_UNINTERRUPTIBLE);
@@ -2140,7 +2134,7 @@ Firstly, the sleeper normally follows something like this sequence of events:
 	}
 
 A general memory barrier is interpolated automatically by set_current_state()
-after it has altered the task state:
+after it has altered the task state::
 
 	CPU 1
 	===============================
@@ -2150,14 +2144,14 @@ after it has altered the task state:
 	    <general barrier>
 	LOAD event_indicated
 
-set_current_state() may be wrapped by:
+set_current_state() may be wrapped by::
 
 	prepare_to_wait();
 	prepare_to_wait_exclusive();
 
 which therefore also imply a general memory barrier after setting the state.
 The whole sequence above is available in various canned forms, all of which
-interpolate the memory barrier in the right place:
+interpolate the memory barrier in the right place::
 
 	wait_event();
 	wait_event_interruptible();
@@ -2169,19 +2163,19 @@ interpolate the memory barrier in the right place:
 	wait_on_bit_lock();
 
 
-Secondly, code that performs a wake up normally follows something like this:
+Secondly, code that performs a wake up normally follows something like this::
 
 	event_indicated = 1;
 	wake_up(&event_wait_queue);
 
-or:
+or::
 
 	event_indicated = 1;
 	wake_up_process(event_daemon);
 
 A write memory barrier is implied by wake_up() and co.  if and only if they
 wake something up.  The barrier occurs before the task state is cleared, and so
-sits between the STORE to indicate the event and the STORE to set TASK_RUNNING:
+sits between the STORE to indicate the event and the STORE to set TASK_RUNNING::
 
 	CPU 1				CPU 2
 	===============================	===============================
@@ -2193,7 +2187,7 @@ sits between the STORE to indicate the event and the STORE to set TASK_RUNNING:
 
 To repeat, this write memory barrier is present if and only if something
 is actually awakened.  To see this, consider the following sequence of
-events, where X and Y are both initially zero:
+events, where X and Y are both initially zero::
 
 	CPU 1				CPU 2
 	===============================	===============================
@@ -2206,7 +2200,7 @@ events, where X and Y are both initially zero:
 In contrast, if a wakeup does occur, CPU 2's load from X would be guaranteed
 to see 1.
 
-The available waker functions include:
+The available waker functions include::
 
 	complete();
 	wake_up();
@@ -2228,7 +2222,7 @@ The available waker functions include:
 [!] Note that the memory barriers implied by the sleeper and the waker do _not_
 order multiple stores before the wake-up with respect to loads of those stored
 values after the sleeper has called set_current_state().  For instance, if the
-sleeper does:
+sleeper does::
 
 	set_current_state(TASK_INTERRUPTIBLE);
 	if (event_indicated)
@@ -2236,7 +2230,7 @@ sleeper does:
 	__set_current_state(TASK_RUNNING);
 	do_something(my_data);
 
-and the waker does:
+and the waker does::
 
 	my_data = value;
 	event_indicated = 1;
@@ -2245,7 +2239,7 @@ and the waker does:
 there's no guarantee that the change to event_indicated will be perceived by
 the sleeper as coming after the change to my_data.  In such a circumstance, the
 code on both sides must interpolate its own memory barriers between the
-separate data accesses.  Thus the above sleeper ought to do:
+separate data accesses.  Thus the above sleeper ought to do::
 
 	set_current_state(TASK_INTERRUPTIBLE);
 	if (event_indicated) {
@@ -2253,7 +2247,7 @@ separate data accesses.  Thus the above sleeper ought to do:
 		do_something(my_data);
 	}
 
-and the waker should do:
+and the waker should do::
 
 	my_data = value;
 	smp_wmb();
@@ -2261,16 +2255,15 @@ and the waker should do:
 	wake_up(&event_wait_queue);
 
 
-MISCELLANEOUS FUNCTIONS
+Miscellaneous functions
 -----------------------
 
 Other functions that imply barriers:
 
- (*) schedule() and similar imply full memory barriers.
+ (#) schedule() and similar imply full memory barriers.
 
 
-===================================
-INTER-CPU ACQUIRING BARRIER EFFECTS
+Inter-CPU acquiring barrier effects
 ===================================
 
 On SMP systems locking primitives give a more substantial form of barrier: one
@@ -2278,11 +2271,11 @@ that does affect memory access ordering on other CPUs, within the context of
 conflict on any particular lock.
 
 
-ACQUIRES VS MEMORY ACCESSES
+Acquires vs memory accesses
 ---------------------------
 
 Consider the following: the system has a pair of spinlocks (M) and (Q), and
-three CPUs; then should the following sequence of events occur:
+three CPUs; then should the following sequence of events occur::
 
 	CPU 1				CPU 2
 	===============================	===============================
@@ -2295,11 +2288,11 @@ three CPUs; then should the following sequence of events occur:
 
 Then there is no guarantee as to what order CPU 3 will see the accesses to *A
 through *H occur in, other than the constraints imposed by the separate locks
-on the separate CPUs.  It might, for example, see:
+on the separate CPUs.  It might, for example, see::
 
 	*E, ACQUIRE M, ACQUIRE Q, *G, *C, *F, *A, *B, RELEASE Q, *D, *H, RELEASE M
 
-But it won't see any of:
+But it won't see any of::
 
 	*B, *C or *D preceding ACQUIRE M
 	*A, *B or *C following RELEASE M
@@ -2308,7 +2301,7 @@ But it won't see any of:
 
 
 
-ACQUIRES VS I/O ACCESSES
+Acquires vs I/O accesses
 ------------------------
 
 Under certain circumstances (especially involving NUMA), I/O accesses within
@@ -2317,7 +2310,7 @@ PCI bridge, because the PCI bridge does not necessarily participate in the
 cache-coherence protocol, and is therefore incapable of issuing the required
 read memory barriers.
 
-For example:
+For example::
 
 	CPU 1				CPU 2
 	===============================	===============================
@@ -2330,7 +2323,7 @@ For example:
 					writel(5, DATA);
 					spin_unlock(Q);
 
-may be seen by the PCI bridge as follows:
+may be seen by the PCI bridge as follows::
 
 	STORE *ADDR = 0, STORE *ADDR = 4, STORE *DATA = 1, STORE *DATA = 5
 
@@ -2338,7 +2331,7 @@ which would probably cause the hardware to malfunction.
 
 
 What is necessary here is to intervene with an mmiowb() before dropping the
-spinlock, for example:
+spinlock, for example::
 
 	CPU 1				CPU 2
 	===============================	===============================
@@ -2359,7 +2352,7 @@ before either of the stores issued on CPU 2.
 
 Furthermore, following a store by a load from the same device obviates the need
 for the mmiowb(), because the load forces the store to complete before the load
-is performed:
+is performed::
 
 	CPU 1				CPU 2
 	===============================	===============================
@@ -2376,8 +2369,7 @@ is performed:
 See Documentation/driver-api/device-io.rst for more information.
 
 
-=================================
-WHERE ARE MEMORY BARRIERS NEEDED?
+Where are memory barriers needed?
 =================================
 
 Under normal operation, memory operation reordering is generally not going to
@@ -2385,16 +2377,16 @@ be a problem as a single-threaded linear piece of code will still appear to
 work correctly, even if it's in an SMP kernel.  There are, however, four
 circumstances in which reordering definitely _could_ be a problem:
 
- (*) Interprocessor interaction.
+ (#) Interprocessor interaction.
 
- (*) Atomic operations.
+ (#) Atomic operations.
 
- (*) Accessing devices.
+ (#) Accessing devices.
 
- (*) Interrupts.
+ (#) Interrupts.
 
 
-INTERPROCESSOR INTERACTION
+Interprocessor interaction
 --------------------------
 
 When there's a system with more than one processor, more than one CPU in the
@@ -2407,7 +2399,7 @@ a malfunction.
 
 Consider, for example, the R/W semaphore slow path.  Here a waiting process is
 queued on the semaphore, by virtue of it having a piece of its stack linked to
-the semaphore's list of waiting processes:
+the semaphore's list of waiting processes::
 
 	struct rw_semaphore {
 		...
@@ -2433,7 +2425,7 @@ To wake up a particular waiter, the up_read() or up_write() functions have to:
 
  (5) release the reference held on the waiter's task struct.
 
-In other words, it has to perform this sequence of events:
+In other words, it has to perform this sequence of events::
 
 	LOAD waiter->list.next;
 	LOAD waiter->task;
@@ -2451,7 +2443,7 @@ if the task pointer is cleared _before_ the next pointer in the list is read,
 another CPU might start processing the waiter and might clobber the waiter's
 stack before the up*() function has a chance to read the next pointer.
 
-Consider then what might happen to the above sequence of events:
+Consider then what might happen to the above sequence of events::
 
 	CPU 1				CPU 2
 	===============================	===============================
@@ -2474,7 +2466,7 @@ Consider then what might happen to the above sequence of events:
 This could be dealt with using the semaphore lock, but then the down_xxx()
 function has to needlessly get the spinlock again after being woken up.
 
-The way to deal with this is to insert a general SMP memory barrier:
+The way to deal with this is to insert a general SMP memory barrier::
 
 	LOAD waiter->list.next;
 	LOAD waiter->task;
@@ -2495,7 +2487,7 @@ right order without actually intervening in the CPU.  Since there's only one
 CPU, that CPU's dependency ordering logic will take care of everything else.
 
 
-ATOMIC OPERATIONS
+Atomic operations
 -----------------
 
 Whilst they are technically interprocessor interaction considerations, atomic
@@ -2506,7 +2498,7 @@ kernel.
 Any atomic operation that modifies some state in memory and returns information
 about the state (old or new) implies an SMP-conditional general memory barrier
 (smp_mb()) on each side of the actual operation (with the exception of
-explicit lock operations, described later).  These include:
+explicit lock operations, described later).  These include::
 
 	xchg();
 	atomic_xchg();			atomic_long_xchg();
@@ -2534,7 +2526,7 @@ such the implicit memory barrier effects are necessary.
 
 The following operations are potential problems as they do _not_ imply memory
 barriers, but might be used for implementing such things as RELEASE-class
-operations:
+operations::
 
 	atomic_set();
 	set_bit();
@@ -2547,7 +2539,7 @@ With these the appropriate explicit memory barrier should be used if necessary
 
 The following also do _not_ imply memory barriers, and so may require explicit
 memory barriers under some circumstances (smp_mb__before_atomic() for
-instance):
+instance)::
 
 	atomic_add();
 	atomic_sub();
@@ -2569,7 +2561,7 @@ specific order.
 Basically, each usage case has to be carefully considered as to whether memory
 barriers are needed or not.
 
-The following operations are special locking primitives:
+The following operations are special locking primitives::
 
 	test_and_set_bit_lock();
 	clear_bit_unlock();
@@ -2587,7 +2579,7 @@ and in such cases the special barrier primitives will be no-ops.
 See Documentation/atomic_ops.txt for more information.
 
 
-ACCESSING DEVICES
+Accessing devices
 -----------------
 
 Many devices can be memory mapped, and so appear to the CPU as if they're just
@@ -2633,7 +2625,7 @@ handled, thus the interrupt handler does not need to lock against that.
 
 However, consider a driver that was talking to an ethernet card that sports an
 address register and a data register.  If that driver's core talks to the card
-under interrupt-disablement and then the driver's interrupt handler is invoked:
+under interrupt-disablement and then the driver's interrupt handler is invoked::
 
 	LOCAL IRQ DISABLE
 	writew(ADDR, 3);
@@ -2645,7 +2637,7 @@ under interrupt-disablement and then the driver's interrupt handler is invoked:
 	</interrupt>
 
 The store to the data register might happen after the second store to the
-address register if ordering rules are sufficiently relaxed:
+address register if ordering rules are sufficiently relaxed::
 
 	STORE *ADDR = 3, STORE *ADDR = 4, STORE *DATA = y, q = LOAD *DATA
 
@@ -2666,14 +2658,13 @@ running on separate CPUs that communicate with each other.  If such a case is
 likely, then interrupt-disabling locks should be used to guarantee ordering.
 
 
-==========================
-KERNEL I/O BARRIER EFFECTS
+Kernel I/O barrier effects
 ==========================
 
 When accessing I/O memory, drivers should use the appropriate accessor
 functions:
 
- (*) inX(), outX():
+ (#) inX(), outX():
 
      These are intended to talk to I/O space rather than memory space, but
      that's primarily a CPU-specific concept.  The i386 and x86_64 processors
@@ -2695,7 +2686,7 @@ functions:
      They are not guaranteed to be fully ordered with respect to other types of
      memory and I/O operation.
 
- (*) readX(), writeX():
+ (#) readX(), writeX():
 
      Whether these are guaranteed to be fully ordered and uncombined with
      respect to each other on the issuing CPU depends on the characteristics
@@ -2708,12 +2699,12 @@ functions:
 
      However, intermediary hardware (such as a PCI bridge) may indulge in
      deferral if it so wishes; to flush a store, a load from the same location
-     is preferred[*], but a load from the same device or from configuration
+     is preferred [#]_, but a load from the same device or from configuration
      space should suffice for PCI.
 
-     [*] NOTE! attempting to load from the same location as was written to may
-	 cause a malfunction - consider the 16550 Rx/Tx serial registers for
-	 example.
+     .. [#] NOTE! attempting to load from the same location as was written to may
+	    cause a malfunction - consider the 16550 Rx/Tx serial registers for
+	    example.
 
      Used with prefetchable I/O memory, an mmiowb() barrier may be required to
      force stores to be ordered.
@@ -2721,7 +2712,7 @@ functions:
      Please refer to the PCI specification for more information on interactions
      between PCI transactions.
 
- (*) readX_relaxed(), writeX_relaxed()
+ (#) readX_relaxed(), writeX_relaxed()
 
      These are similar to readX() and writeX(), but provide weaker memory
      ordering guarantees.  Specifically, they do not guarantee ordering with
@@ -2731,14 +2722,13 @@ functions:
      the same peripheral are guaranteed to be ordered with respect to each
      other.
 
- (*) ioreadX(), iowriteX()
+ (#) ioreadX(), iowriteX()
 
      These will perform appropriately for the type of access they're actually
      doing, be it inX()/outX() or readX()/writeX().
 
 
-========================================
-ASSUMED MINIMUM EXECUTION ORDERING MODEL
+Assumed minimum execution ordering model
 ========================================
 
 It has to be assumed that the conceptual CPU is weakly-ordered but that it will
@@ -2750,13 +2740,13 @@ of arch-specific code.
 This means that it must be considered that the CPU will execute its instruction
 stream in any order it feels like - or even in parallel - provided that if an
 instruction in the stream depends on an earlier instruction, then that
-earlier instruction must be sufficiently complete[*] before the later
+earlier instruction must be sufficiently complete [#]_ before the later
 instruction may proceed; in other words: provided that the appearance of
 causality is maintained.
 
- [*] Some instructions have more than one effect - such as changing the
-     condition codes, changing registers or changing memory - and different
-     instructions may depend on different effects.
+ .. [#] Some instructions have more than one effect - such as changing the
+        condition codes, changing registers or changing memory - and different
+        instructions may depend on different effects.
 
 A CPU may also discard any instruction sequence that winds up having no
 ultimate effect.  For example, if two adjacent instructions both load an
@@ -2768,8 +2758,7 @@ stream in any way it sees fit, again provided the appearance of causality is
 maintained.
 
 
-============================
-THE EFFECTS OF THE CPU CACHE
+The effects of the CPU cache
 ============================
 
 The way cached memory operations are perceived across the system is affected to
@@ -2779,7 +2768,7 @@ memory coherence system that maintains the consistency of state in the system.
 As far as the way a CPU interacts with another part of the system through the
 caches goes, the memory system has to include the CPU's caches, and memory
 barriers for the most part act at the interface between the CPU and its cache
-(memory barriers logically act on the dotted line in the following diagram):
+(memory barriers logically act on the dotted line in the following diagram)::
 
 	    <--- CPU --->         :       <----------- Memory ----------->
 	                          :
@@ -2829,7 +2818,7 @@ the properties of the memory window through which devices are accessed and/or
 the use of any special device communication instructions the CPU may have.
 
 
-CACHE COHERENCY
+Cache coherency
 ---------------
 
 Life isn't quite as simple as it may appear above, however: for while the
@@ -2840,7 +2829,7 @@ become apparent in the same order on those other CPUs.
 
 
 Consider dealing with a system that has a pair of CPUs (1 & 2), each of which
-has a pair of parallel data caches (CPU 1 has A/B, and CPU 2 has C/D):
+has a pair of parallel data caches (CPU 1 has A/B, and CPU 2 has C/D)::
 
 	            :
 	            :                          +--------+
@@ -2864,26 +2853,26 @@ has a pair of parallel data caches (CPU 1 has A/B, and CPU 2 has C/D):
 
 Imagine the system has the following properties:
 
- (*) an odd-numbered cache line may be in cache A, cache C or it may still be
+ (#) an odd-numbered cache line may be in cache A, cache C or it may still be
      resident in memory;
 
- (*) an even-numbered cache line may be in cache B, cache D or it may still be
+ (#) an even-numbered cache line may be in cache B, cache D or it may still be
      resident in memory;
 
- (*) whilst the CPU core is interrogating one cache, the other cache may be
+ (#) whilst the CPU core is interrogating one cache, the other cache may be
      making use of the bus to access the rest of the system - perhaps to
      displace a dirty cacheline or to do a speculative load;
 
- (*) each cache has a queue of operations that need to be applied to that cache
+ (#) each cache has a queue of operations that need to be applied to that cache
      to maintain coherency with the rest of the system;
 
- (*) the coherency queue is not flushed by normal loads to lines already
+ (#) the coherency queue is not flushed by normal loads to lines already
      present in the cache, even though the contents of the queue may
      potentially affect those loads.
 
 Imagine, then, that two writes are made on the first CPU, with a write barrier
 between them to guarantee that they will appear to reach that CPU's caches in
-the requisite order:
+the requisite order::
 
 	CPU 1		CPU 2		COMMENT
 	===============	===============	=======================================
@@ -2897,7 +2886,7 @@ the requisite order:
 
 The write memory barrier forces the other CPUs in the system to perceive that
 the local CPU's caches have apparently been updated in the correct order.  But
-now imagine that the second CPU wants to read those values:
+now imagine that the second CPU wants to read those values::
 
 	CPU 1		CPU 2		COMMENT
 	===============	===============	=======================================
@@ -2908,7 +2897,7 @@ now imagine that the second CPU wants to read those values:
 The above pair of reads may then fail to happen in the expected order, as the
 cacheline holding p may get updated in one of the second CPU's caches whilst
 the update to the cacheline holding v is delayed in the other of the second
-CPU's caches by some other cache event:
+CPU's caches by some other cache event::
 
 	CPU 1		CPU 2		COMMENT
 	===============	===============	=======================================
@@ -2933,7 +2922,7 @@ as that committed on CPU 1.
 
 To intervene, we need to interpolate a data dependency barrier or a read
 barrier between the loads.  This will force the cache to commit its coherency
-queue before processing any further requests:
+queue before processing any further requests::
 
 	CPU 1		CPU 2		COMMENT
 	===============	===============	=======================================
@@ -2963,7 +2952,7 @@ cachelets for normal memory accesses.  The semantics of the Alpha removes the
 need for coordination in the absence of memory barriers.
 
 
-CACHE COHERENCY VS DMA
+Cache coherency vs DMA
 ----------------------
 
 Not all systems maintain cache coherency with respect to devices doing DMA.  In
@@ -2984,7 +2973,7 @@ cache on each CPU.
 See Documentation/cachetlb.txt for more information on cache management.
 
 
-CACHE COHERENCY VS MMIO
+Cache coherency vs MMIO
 -----------------------
 
 Memory mapped I/O usually takes place through memory locations that are part of
@@ -2999,13 +2988,12 @@ flushed between the cached memory write and the MMIO access if the two are in
 any way dependent.
 
 
-=========================
-THE THINGS CPUS GET UP TO
+The things CPUs get up to
 =========================
 
 A programmer might take it for granted that the CPU will perform memory
 operations in exactly the order specified, so that if the CPU is, for example,
-given the following piece of code to execute:
+given the following piece of code to execute::
 
 	a = READ_ONCE(*A);
 	WRITE_ONCE(*B, b);
@@ -3015,7 +3003,7 @@ given the following piece of code to execute:
 
 they would then expect that the CPU will complete the memory operation for each
 instruction before moving on to the next one, leading to a definite sequence of
-operations as seen by external observers in the system:
+operations as seen by external observers in the system::
 
 	LOAD *A, STORE *B, LOAD *C, LOAD *D, STORE *E.
 
@@ -3023,31 +3011,31 @@ operations as seen by external observers in the system:
 Reality is, of course, much messier.  With many CPUs and compilers, the above
 assumption doesn't hold because:
 
- (*) loads are more likely to need to be completed immediately to permit
+ (#) loads are more likely to need to be completed immediately to permit
      execution progress, whereas stores can often be deferred without a
      problem;
 
- (*) loads may be done speculatively, and the result discarded should it prove
+ (#) loads may be done speculatively, and the result discarded should it prove
      to have been unnecessary;
 
- (*) loads may be done speculatively, leading to the result having been fetched
+ (#) loads may be done speculatively, leading to the result having been fetched
      at the wrong time in the expected sequence of events;
 
- (*) the order of the memory accesses may be rearranged to promote better use
+ (#) the order of the memory accesses may be rearranged to promote better use
      of the CPU buses and caches;
 
- (*) loads and stores may be combined to improve performance when talking to
+ (#) loads and stores may be combined to improve performance when talking to
      memory or I/O hardware that can do batched accesses of adjacent locations,
      thus cutting down on transaction setup costs (memory and PCI devices may
      both be able to do this); and
 
- (*) the CPU's data cache may affect the ordering, and whilst cache-coherency
+ (#) the CPU's data cache may affect the ordering, and whilst cache-coherency
      mechanisms may alleviate this - once the store has actually hit the cache
      - there's no guarantee that the coherency management will be propagated in
      order to other CPUs.
 
 So what another CPU, say, might actually observe from the above piece of code
-is:
+is::
 
 	LOAD *A, ..., LOAD {*C,*D}, STORE *E, STORE *B
 
@@ -3056,7 +3044,7 @@ is:
 
 However, it is guaranteed that a CPU will be self-consistent: it will see its
 _own_ accesses appear to be correctly ordered, without the need for a memory
-barrier.  For instance with the following code:
+barrier.  For instance with the following code::
 
 	U = READ_ONCE(*A);
 	WRITE_ONCE(*A, V);
@@ -3066,7 +3054,7 @@ barrier.  For instance with the following code:
 	Z = READ_ONCE(*A);
 
 and assuming no intervention by an external influence, it can be assumed that
-the final result will appear to be:
+the final result will appear to be::
 
 	U == the original value of *A
 	X == W
@@ -3074,7 +3062,7 @@ the final result will appear to be:
 	*A == Y
 
 The code above may cause the CPU to generate the full sequence of memory
-accesses:
+accesses::
 
 	U=LOAD *A, STORE *A=V, STORE *A=W, X=LOAD *A, STORE *A=Y, Z=LOAD *A
 
@@ -3091,23 +3079,23 @@ and st.rel instructions (respectively) that prevent such reordering.
 The compiler may also combine, discard or defer elements of the sequence before
 the CPU even sees them.
 
-For instance:
+For instance::
 
 	*A = V;
 	*A = W;
 
-may be reduced to:
+may be reduced to::
 
 	*A = W;
 
 since, without either a write barrier or a WRITE_ONCE(), it can be
-assumed that the effect of the storage of V to *A is lost.  Similarly:
+assumed that the effect of the storage of V to *A is lost.  Similarly::
 
 	*A = Y;
 	Z = *A;
 
 may, without a memory barrier or a READ_ONCE() and WRITE_ONCE(), be
-reduced to:
+reduced to::
 
 	*A = Y;
 	Z = Y;
@@ -3115,7 +3103,7 @@ reduced to:
 and the LOAD operation never appear outside of the CPU.
 
 
-AND THEN THERE'S THE ALPHA
+And then there's the Alpha
 --------------------------
 
 The DEC Alpha CPU is one of the most relaxed CPUs there is.  Not only that,
@@ -3130,7 +3118,7 @@ The Alpha defines the Linux kernel's memory barrier model.
 See the subsection on "Cache Coherency" above.
 
 
-VIRTUAL MACHINE GUESTS
+Virtual machine guests
 ----------------------
 
 Guests running within virtual machines might be affected by SMP effects even if
@@ -3149,11 +3137,10 @@ in particular, they do not control MMIO effects: to control
 MMIO effects, use mandatory barriers.
 
 
-============
-EXAMPLE USES
+Example uses
 ============
 
-CIRCULAR BUFFERS
+Circular buffers
 ----------------
 
 Memory barriers can be used to implement circular buffering without the need
@@ -3164,58 +3151,69 @@ of a lock to serialise the producer with the consumer.  See:
 for details.
 
 
-==========
-REFERENCES
+
+References
 ==========
 
 Alpha AXP Architecture Reference Manual, Second Edition (Sites & Witek,
 Digital Press)
-	Chapter 5.2: Physical Address Space Characteristics
-	Chapter 5.4: Caches and Write Buffers
-	Chapter 5.5: Data Sharing
-	Chapter 5.6: Read/Write Ordering
+
+	- Chapter 5.2: Physical Address Space Characteristics
+	- Chapter 5.4: Caches and Write Buffers
+	- Chapter 5.5: Data Sharing
+	- Chapter 5.6: Read/Write Ordering
 
 AMD64 Architecture Programmer's Manual Volume 2: System Programming
-	Chapter 7.1: Memory-Access Ordering
-	Chapter 7.4: Buffering and Combining Memory Writes
+
+	- Chapter 7.1: Memory-Access Ordering
+	- Chapter 7.4: Buffering and Combining Memory Writes
 
 IA-32 Intel Architecture Software Developer's Manual, Volume 3:
 System Programming Guide
-	Chapter 7.1: Locked Atomic Operations
-	Chapter 7.2: Memory Ordering
-	Chapter 7.4: Serializing Instructions
+
+	- Chapter 7.1: Locked Atomic Operations
+	- Chapter 7.2: Memory Ordering
+	- Chapter 7.4: Serializing Instructions
 
 The SPARC Architecture Manual, Version 9
-	Chapter 8: Memory Models
-	Appendix D: Formal Specification of the Memory Models
-	Appendix J: Programming with the Memory Models
+
+	- Chapter 8: Memory Models
+	- Appendix D: Formal Specification of the Memory Models
+	- Appendix J: Programming with the Memory Models
 
 UltraSPARC Programmer Reference Manual
-	Chapter 5: Memory Accesses and Cacheability
-	Chapter 15: Sparc-V9 Memory Models
+
+	- Chapter 5: Memory Accesses and Cacheability
+	- Chapter 15: Sparc-V9 Memory Models
 
 UltraSPARC III Cu User's Manual
-	Chapter 9: Memory Models
+
+	- Chapter 9: Memory Models
 
 UltraSPARC IIIi Processor User's Manual
-	Chapter 8: Memory Models
+
+	- Chapter 8: Memory Models
 
 UltraSPARC Architecture 2005
-	Chapter 9: Memory
-	Appendix D: Formal Specifications of the Memory Models
+
+	- Chapter 9: Memory
+	- Appendix D: Formal Specifications of the Memory Models
 
 UltraSPARC T1 Supplement to the UltraSPARC Architecture 2005
-	Chapter 8: Memory Models
-	Appendix F: Caches and Cache Coherency
+
+	- Chapter 8: Memory Models
+	- Appendix F: Caches and Cache Coherency
 
 Solaris Internals, Core Kernel Architecture, p63-68:
-	Chapter 3.3: Hardware Considerations for Locks and
-			Synchronization
+
+	- Chapter 3.3: Hardware Considerations for Locks and Synchronization
 
 Unix Systems for Modern Architectures, Symmetric Multiprocessing and Caching
 for Kernel Programmers:
-	Chapter 13: Other Memory Models
+
+	- Chapter 13: Other Memory Models
 
 Intel Itanium Architecture Software Developer's Manual: Volume 1:
-	Section 2.6: Speculation
-	Section 4.4: Memory Access
+
+	- Section 2.6: Speculation
+	- Section 4.4: Memory Access
-- 
2.9.4


* [PATCH v2 18/29] memory-barriers.txt: use literals for variables
  2017-06-17 15:26 [PATCH v2 01/29] IPMI.txt: standardize document format Mauro Carvalho Chehab
                   ` (15 preceding siblings ...)
  2017-06-17 15:26 ` [PATCH v2 17/29] memory-barriers.txt: " Mauro Carvalho Chehab
@ 2017-06-17 15:26 ` Mauro Carvalho Chehab
  2017-06-17 15:26 ` [PATCH v2 19/29] memory-hotplug.txt: standardize document format Mauro Carvalho Chehab
                   ` (11 subsequent siblings)
  28 siblings, 0 replies; 35+ messages in thread
From: Mauro Carvalho Chehab @ 2017-06-17 15:26 UTC (permalink / raw)
  To: Linux Doc Mailing List
  Cc: Mauro Carvalho Chehab, Mauro Carvalho Chehab, linux-kernel,
	Jonathan Corbet, Mauro Carvalho Chehab

From: Mauro Carvalho Chehab <mchehab@osg.samsung.com>

The minimal adjustments to this file were not enough
to make it build cleanly with Sphinx:

  Documentation/memory-barriers.rst:192: WARNING: Inline emphasis start-string without end-string.
  Documentation/memory-barriers.rst:603: WARNING: Inline emphasis start-string without end-string.
  Documentation/memory-barriers.rst:1065: WARNING: Inline emphasis start-string without end-string.
  Documentation/memory-barriers.rst:1068: WARNING: Inline emphasis start-string without end-string.
  Documentation/memory-barriers.rst:2289: WARNING: Inline emphasis start-string without end-string.
  Documentation/memory-barriers.rst:2289: WARNING: Inline emphasis start-string without end-string.
  Documentation/memory-barriers.rst:3091: WARNING: Inline emphasis start-string without end-string.

What happens there is that, while some variables are
quoted as 'var' or `var`, most of them aren't, and some
start with an asterisk.

Standardize them by always using ``literal``. As a bonus,
the output will use the same monospaced font as the
literal blocks.

Signed-off-by: Mauro Carvalho Chehab <mchehab@osg.samsung.com>
Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
---
 Documentation/memory-barriers.txt | 154 +++++++++++++++++++-------------------
 1 file changed, 77 insertions(+), 77 deletions(-)

diff --git a/Documentation/memory-barriers.txt b/Documentation/memory-barriers.txt
index 69cc3e770e8d..f1642d927957 100644
--- a/Documentation/memory-barriers.txt
+++ b/Documentation/memory-barriers.txt
@@ -181,16 +181,16 @@ As a further example, consider this sequence of events::
 	B = 4;		Q = P;
 	P = &B		D = *Q;
 
-There is an obvious data dependency here, as the value loaded into D depends on
-the address retrieved from P by CPU 2.  At the end of the sequence, any of the
+There is an obvious data dependency here, as the value loaded into ``D`` depends on
+the address retrieved from ``P`` by CPU 2.  At the end of the sequence, any of the
 following results are possible::
 
 	(Q == &A) and (D == 1)
 	(Q == &B) and (D == 2)
 	(Q == &B) and (D == 4)
 
-Note that CPU 2 will never try and load C into D because the CPU will load P
-into Q before issuing the load of *Q.
+Note that CPU 2 will never try and load ``C`` into ``D`` because the CPU will load ``P``
+into ``Q`` before issuing the load of ``*Q``.
 
 
 Device operations
@@ -199,8 +199,8 @@ Device operations
 Some devices present their control interfaces as collections of memory
 locations, but the order in which the control registers are accessed is very
 important.  For instance, imagine an ethernet card with a set of internal
-registers that are accessed through an address port register (A) and a data
-port register (D).  To read internal register 5, the following code might then
+registers that are accessed through an address port register (``A``) and a data
+port register (``D``).  To read internal register 5, the following code might then
 be used::
 
 	*A = 5;
@@ -558,12 +558,12 @@ following sequence of events::
 			      D = *Q;
 
 There's a clear data dependency here, and it would seem that by the end of the
-sequence, Q must be either &A or &B, and that::
+sequence, ``Q`` must be either ``&A`` or ``&B``, and that::
 
 	(Q == &A) implies (D == 1)
 	(Q == &B) implies (D == 4)
 
-But!  CPU 2's perception of P may be updated _before_ its perception of B, thus
+But!  CPU 2's perception of ``P`` may be updated _before_ its perception of ``B``, thus
 leading to the following situation::
 
 	(Q == &B) and (D == 2) ????
@@ -600,8 +600,8 @@ A data-dependency barrier must also order against dependent writes::
 			      <data dependency barrier>
 			      *Q = 5;
 
-The data-dependency barrier must order the read into Q with the store
-into *Q.  This prohibits this outcome::
+The data-dependency barrier must order the read into ``Q`` with the store
+into ``*Q``.  This prohibits this outcome::
 
 	(Q == &B) && (B == 4)
 
@@ -615,11 +615,11 @@ prevents such records from being lost.
 [!] Note that this extremely counterintuitive situation arises most easily on
 machines with split caches, so that, for example, one cache bank processes
 even-numbered cache lines and the other bank processes odd-numbered cache
-lines.  The pointer P might be stored in an odd-numbered cache line, and the
-variable B might be stored in an even-numbered cache line.  Then, if the
+lines.  The pointer ``P`` might be stored in an odd-numbered cache line, and the
+variable ``B`` might be stored in an even-numbered cache line.  Then, if the
 even-numbered bank of the reading CPU's cache is extremely busy while the
-odd-numbered bank is idle, one can see the new value of the pointer P (&B),
-but the old value of the variable B (2).
+odd-numbered bank is idle, one can see the new value of the pointer ``P`` (``&B``),
+but the old value of the variable ``B`` (2).
 
 
 The data dependency barrier is very important to the RCU system,
@@ -651,7 +651,7 @@ following bit of code::
 This will not have the desired effect because there is no actual data
 dependency, but rather a control dependency that the CPU may short-circuit
 by attempting to predict the outcome in advance, so that other CPUs see
-the load from b as having happened before the load from a.  In such a
+the load from ``b`` as having happened before the load from ``a``.  In such a
 case what's actually required is::
 
 	q = READ_ONCE(a);
@@ -671,12 +671,12 @@ for load-store control dependencies, as in the following example::
 Control dependencies pair normally with other types of barriers.
 That said, please note that neither READ_ONCE() nor WRITE_ONCE()
 are optional! Without the READ_ONCE(), the compiler might combine the
-load from 'a' with other loads from 'a'.  Without the WRITE_ONCE(),
-the compiler might combine the store to 'b' with other stores to 'b'.
+load from ``a`` with other loads from ``a``.  Without the WRITE_ONCE(),
+the compiler might combine the store to ``b`` with other stores to ``b``.
 Either can result in highly counterintuitive effects on ordering.
 
 Worse yet, if the compiler is able to prove (say) that the value of
-variable 'a' is always non-zero, it would be well within its rights
+variable ``a`` is always non-zero, it would be well within its rights
 to optimize the original example by eliminating the "if" statement
 as follows::
 
@@ -713,8 +713,8 @@ optimization levels::
 		do_something_else();
 	}
 
-Now there is no conditional between the load from 'a' and the store to
-'b', which means that the CPU is within its rights to reorder them:
+Now there is no conditional between the load from ``a`` and the store to
+``b``, which means that the CPU is within its rights to reorder them:
 The conditional is absolutely required, and must be present in the
 assembly code even after all compiler optimizations have been applied.
 Therefore, if you need ordering in this example, you need explicit
@@ -742,9 +742,9 @@ ordering is guaranteed only when the stores differ, for example::
 	}
 
 The initial READ_ONCE() is still required to prevent the compiler from
-proving the value of 'a'.
+proving the value of ``a``.
 
-In addition, you need to be careful what you do with the local variable 'q',
+In addition, you need to be careful what you do with the local variable ``q``,
 otherwise the compiler might be able to guess the value and again remove
 the needed conditional.  For example::
 
@@ -757,7 +757,7 @@ the needed conditional.  For example::
 		do_something_else();
 	}
 
-If MAX is defined to be 1, then the compiler knows that (q % MAX) is
+If MAX is defined to be 1, then the compiler knows that ``(q % MAX)`` is
 equal to zero, in which case the compiler is within its rights to
 transform the above code into the following::
 
@@ -766,7 +766,7 @@ transform the above code into the following::
 	do_something_else();
 
 Given this transformation, the CPU is not required to respect the ordering
-between the load from variable 'a' and the store to variable 'b'.  It is
+between the load from variable ``a`` and the store to variable ``b``.  It is
 tempting to add a barrier(), but this does not help.  The conditional
 is gone, and the barrier won't bring it back.  Therefore, if you are
 relying on this ordering, you should make sure that MAX is greater than
@@ -782,7 +782,7 @@ one, perhaps as follows::
 		do_something_else();
 	}
 
-Please note once again that the stores to 'b' differ.  If they were
+Please note once again that the stores to ``b`` differ.  If they were
 identical, as noted earlier, the compiler could pull this store outside
 of the 'if' statement.
 
@@ -819,8 +819,8 @@ not necessarily apply to code following the if-statement::
 
 It is tempting to argue that there in fact is ordering because the
 compiler cannot reorder volatile accesses and also cannot reorder
-the writes to 'b' with the condition.  Unfortunately for this line
-of reasoning, the compiler might compile the two writes to 'b' as
+the writes to ``b`` with the condition.  Unfortunately for this line
+of reasoning, the compiler might compile the two writes to ``b`` as
 conditional-move instructions, as in this fanciful pseudo-assembly
 language::
 
@@ -832,7 +832,7 @@ language::
 	st $1,c
 
 A weakly ordered CPU would have no dependency of any sort between the load
-from 'a' and the store to 'c'.  The control dependencies would extend
+from ``a`` and the store to ``c``.  The control dependencies would extend
 only to the pair of cmov instructions and the store depending on them.
 In short, control dependencies apply only to the stores in the then-clause
 and else-clause of the if-statement in question (including functions
@@ -840,7 +840,7 @@ invoked by those two clauses), not to code following that if-statement.
 
 Finally, control dependencies do -not- provide transitivity.  This is
 demonstrated by two related examples, with the initial values of
-'x' and 'y' both being zero::
+``x`` and ``y`` both being zero::
 
 	CPU 0                     CPU 1
 	=======================   =======================
@@ -994,9 +994,9 @@ Consider the following sequence of events::
 	STORE E = 5
 
 This sequence of events is committed to the memory coherence system in an order
-that the rest of the system might perceive as the unordered set of { STORE A,
-STORE B, STORE C } all occurring before the unordered set of { STORE D, STORE E
-}::
+that the rest of the system might perceive as the unordered set of ``{ STORE A,
+STORE B, STORE C }`` all occurring before the unordered set of ``{ STORE D, STORE E
+}``::
 
 	+-------+       :      :
 	|       |       +------+
@@ -1062,11 +1062,11 @@ effectively random order, despite the write barrier issued by CPU 1::
 	                                        :       :
 
 
-In the above example, CPU 2 perceives that B is 7, despite the load of *C
-(which would be B) coming after the LOAD of C.
+In the above example, CPU 2 perceives that ``B`` is 7, despite the load of ``*C``
+(which would be ``B``) coming after the LOAD of ``C``.
 
-If, however, a data dependency barrier were to be placed between the load of C
-and the load of *C (ie: B) on CPU 2::
+If, however, a data dependency barrier were to be placed between the load of ``C``
+and the load of ``*C`` (ie: ``B``) on CPU 2::
 
 	CPU 1			CPU 2
 	=======================	=======================
@@ -1142,8 +1142,8 @@ some effectively random order, despite the write barrier issued by CPU 1::
 	                                        :       :
 
 
-If, however, a read barrier were to be placed between the load of B and the
-load of A on CPU 2::
+If, however, a read barrier were to be placed between the load of ``B`` and the
+load of ``A`` on CPU 2::
 
 	CPU 1			CPU 2
 	=======================	=======================
@@ -1179,7 +1179,7 @@ then the partial ordering imposed by CPU 1 will be perceived correctly by CPU
 
 
 To illustrate this more completely, consider what could happen if the code
-contained a load of A either side of the read barrier::
+contained a load of ``A`` either side of the read barrier::
 
 	CPU 1			CPU 2
 	=======================	=======================
@@ -1192,7 +1192,7 @@ contained a load of A either side of the read barrier::
 				<read barrier>
 				LOAD A [second load of A]
 
-Even though the two loads of A both occur after the load of B, they may both
+Even though the two loads of ``A`` both occur after the load of ``B``, they may both
 come up with different values::
 
 	+-------+       :      :                :       :
@@ -1218,7 +1218,7 @@ come up with different values::
 	                                        :       :       +-------+
 
 
-But it may be that the update to A from CPU 1 becomes perceptible to CPU 2
+But it may be that the update to ``A`` from CPU 1 becomes perceptible to CPU 2
 before the read barrier completes anyway::
 
 	+-------+       :      :                :       :
@@ -1244,9 +1244,9 @@ before the read barrier completes anyway::
 	                                        :       :       +-------+
 
 
-The guarantee is that the second load will always come up with A == 1 if the
-load of B came up with B == 2.  No such guarantee exists for the first load of
-A; that may come up with either A == 0 or A == 1.
+The guarantee is that the second load will always come up with ``A`` == 1 if the
+load of ``B`` came up with ``B`` == 2.  No such guarantee exists for the first load of
+``A``; that may come up with either ``A`` == 0 or ``A`` == 1.
 
 
 Read memory barriers vs load speculation
@@ -1360,21 +1360,21 @@ demonstrates transitivity::
 				<general barrier>	<general barrier>
 				LOAD Y			LOAD X
 
-Suppose that CPU 2's load from X returns 1 and its load from Y returns 0.
-This indicates that CPU 2's load from X in some sense follows CPU 1's
-store to X and that CPU 2's load from Y in some sense preceded CPU 3's
-store to Y.  The question is then "Can CPU 3's load from X return 0?"
+Suppose that CPU 2's load from ``X`` returns 1 and its load from ``Y`` returns 0.
+This indicates that CPU 2's load from ``X`` in some sense follows CPU 1's
+store to ``X`` and that CPU 2's load from ``Y`` in some sense preceded CPU 3's
+store to ``Y``.  The question is then "Can CPU 3's load from ``X`` return 0?"
 
-Because CPU 2's load from X in some sense came after CPU 1's store, it
-is natural to expect that CPU 3's load from X must therefore return 1.
+Because CPU 2's load from ``X`` in some sense came after CPU 1's store, it
+is natural to expect that CPU 3's load from ``X`` must therefore return 1.
 This expectation is an example of transitivity: if a load executing on
 CPU A follows a load from the same variable executing on CPU B, then
 CPU A's load must either return the same value that CPU B's load did,
 or must return some later value.
 
 In the Linux kernel, use of general memory barriers guarantees
-transitivity.  Therefore, in the above example, if CPU 2's load from X
-returns 1 and its load from Y returns 0, then CPU 3's load from X must
+transitivity.  Therefore, in the above example, if CPU 2's load from ``X``
+returns 1 and its load from ``Y`` returns 0, then CPU 3's load from ``X`` must
 also return 1.
 
 However, transitivity is -not- guaranteed for read or write barriers.
@@ -1389,8 +1389,8 @@ is changed to a read barrier as shown below::
 				LOAD Y			LOAD X
 
 This substitution destroys transitivity: in this example, it is perfectly
-legal for CPU 2's load from X to return 1, its load from Y to return 0,
-and CPU 3's load from X to return 0.
+legal for CPU 2's load from ``X`` to return 1, its load from ``Y`` to return 0,
+and CPU 3's load from ``X`` to return 0.
 
 The key point is that although CPU 2's read barrier orders its pair
 of loads, it does not guarantee to order CPU 1's store.  Therefore, if
@@ -1530,7 +1530,7 @@ of optimizations:
 	a[0] = x;
 	a[1] = x;
 
-     Might result in an older value of x stored in a[1] than in a[0].
+     Might result in an older value of ``x`` stored in ``a[1]`` than in ``a[0]``.
      Prevent both the compiler and the CPU from doing this as follows::
 
 	a[0] = READ_ONCE(x);
@@ -1562,7 +1562,7 @@ of optimizations:
  (#) The compiler is within its rights to reload a variable, for example,
      in cases where high register pressure prevents the compiler from
      keeping all data of interest in registers.  The compiler might
-     therefore optimize the variable 'tmp' out of our previous example::
+     therefore optimize the variable ``tmp`` out of our previous example::
 
 	while (tmp = a)
 		do_something_with(tmp);
@@ -1591,7 +1591,7 @@ of optimizations:
 
  (#) The compiler is within its rights to omit a load entirely if it knows
      what the value will be.  For example, if the compiler can prove that
-     the value of variable 'a' is always zero, it can optimize this code::
+     the value of variable ``a`` is always zero, it can optimize this code::
 
 	while (tmp = a)
 		do_something_with(tmp);
@@ -1603,7 +1603,7 @@ of optimizations:
      This transformation is a win for single-threaded code because it
      gets rid of a load and a branch.  The problem is that the compiler
      will carry out its proof assuming that the current CPU is the only
-     one updating variable 'a'.  If variable 'a' is shared, then the
+     one updating variable ``a``.  If variable ``a`` is shared, then the
      compiler's proof will be erroneous.  Use READ_ONCE() to tell the
      compiler that it doesn't know as much as it thinks it does::
 
@@ -1620,7 +1620,7 @@ of optimizations:
      Then the compiler knows that the result of the "%" operator applied
      to MAX will always be zero, again allowing the compiler to optimize
      the code into near-nonexistence.  (It will still load from the
-     variable 'a'.)
+     variable ``a``.)
 
  (#) Similarly, the compiler is within its rights to omit a store entirely
      if it knows that the variable already has the value being stored.
@@ -1633,9 +1633,9 @@ of optimizations:
 	... Code that does not store to variable a ...
 	a = 0;
 
-     The compiler sees that the value of variable 'a' is already zero, so
+     The compiler sees that the value of variable ``a`` is already zero, so
      it might well omit the second store.  This would come as a fatal
-     surprise if some other CPU might have stored to variable 'a' in the
+     surprise if some other CPU might have stored to variable ``a`` in the
      meantime.
 
      Use WRITE_ONCE() to prevent the compiler from making this sort of
@@ -1689,7 +1689,7 @@ of optimizations:
 
      Note that the READ_ONCE() and WRITE_ONCE() wrappers in
      interrupt_handler() are needed if this interrupt handler can itself
-     be interrupted by something that also accesses 'flag' and 'msg',
+     be interrupted by something that also accesses ``flag`` and ``msg``,
      for example, a nested interrupt or an NMI.  Otherwise, READ_ONCE()
      and WRITE_ONCE() are not needed in interrupt_handler() other than
      for documentation purposes.  (Note also that nested interrupts
@@ -1727,7 +1727,7 @@ of optimizations:
      In single-threaded code, this is not only safe, but also saves
      a branch.  Unfortunately, in concurrent code, this optimization
      could cause some other CPU to see a spurious value of 42 -- even
-     if variable 'a' was never zero -- when loading variable 'b'.
+     if variable ``a`` was never zero -- when loading variable ``b``.
      Use WRITE_ONCE() to prevent this as follows::
 
 	if (a)
@@ -1779,7 +1779,7 @@ of optimizations:
      volatile markings, the compiler would be well within its rights to
      implement these three assignment statements as a pair of 32-bit
      loads followed by a pair of 32-bit stores.  This would result in
-     load tearing on 'foo1.b' and store tearing on 'foo2.b'.  READ_ONCE()
+     load tearing on ``foo1.b`` and store tearing on ``foo2.b``.  READ_ONCE()
      and WRITE_ONCE() again prevent tearing in this example::
 
 	foo2.a = foo1.a;
@@ -1788,7 +1788,7 @@ of optimizations:
 
 All that aside, it is never necessary to use READ_ONCE() and
 WRITE_ONCE() on a variable that has been marked volatile.  For example,
-because 'jiffies' is marked volatile, it is never necessary to
+because ``jiffies`` is marked volatile, it is never necessary to
 say READ_ONCE(jiffies).  The reason for this is that READ_ONCE() and
 WRITE_ONCE() are implemented as volatile casts, which has no effect when
 its argument is already marked volatile.
@@ -1816,12 +1816,12 @@ All memory barriers except the data dependency barriers imply a compiler
 barrier.  Data dependencies do not impose any additional compiler ordering.
 
 Aside: In the case of data dependencies, the compiler would be expected
-to issue the loads in the correct order (eg. `a[b]` would have to load
-the value of b before loading a[b]), however there is no guarantee in
-the C specification that the compiler may not speculate the value of b
-(eg. is equal to 1) and load a before b (eg. tmp = a[1]; if (b != 1)
-tmp = a[b]; ).  There is also the problem of a compiler reloading b after
-having loaded a[b], thus having a newer copy of b than a[b].  A consensus
+to issue the loads in the correct order (eg. ``a[b]`` would have to load
+the value of ``b`` before loading ``a[b]``), however there is no guarantee in
+the C specification that the compiler may not speculate the value of ``b``
+(eg. is equal to 1) and load ``a`` before ``b`` (eg. ``tmp = a[1];``
+``if (b != 1) tmp = a[b];``).  There is also the problem of a compiler reloading ``b`` after
+having loaded ``a[b]``, thus having a newer copy of ``b`` than ``a[b]``.  A consensus
 has not yet been reached about these problems, however the READ_ONCE()
 macro is a good place to start looking.
 
@@ -2197,7 +2197,7 @@ events, where X and Y are both initially zero::
 	wake_up();			  load from Y sees 1, no memory barrier
 					load from X might see 0
 
-In contrast, if a wakeup does occur, CPU 2's load from X would be guaranteed
+In contrast, if a wakeup does occur, CPU 2's load from ``X`` would be guaranteed
 to see 1.
 
 The available waker functions include::
@@ -2274,7 +2274,7 @@ conflict on any particular lock.
 Acquires vs memory accesses
 ---------------------------
 
-Consider the following: the system has a pair of spinlocks (M) and (Q), and
+Consider the following: the system has a pair of spinlocks (``M``) and (``Q``), and
 three CPUs; then should the following sequence of events occur::
 
 	CPU 1				CPU 2
@@ -2286,8 +2286,8 @@ three CPUs; then should the following sequence of events occur::
 	RELEASE M			RELEASE Q
 	WRITE_ONCE(*D, d);		WRITE_ONCE(*H, h);
 
-Then there is no guarantee as to what order CPU 3 will see the accesses to *A
-through *H occur in, other than the constraints imposed by the separate locks
+Then there is no guarantee as to what order CPU 3 will see the accesses to ``*A``
+through ``*H`` occur in, other than the constraints imposed by the separate locks
 on the separate CPUs.  It might, for example, see::
 
 	*E, ACQUIRE M, ACQUIRE Q, *G, *C, *F, *A, *B, RELEASE Q, *D, *H, RELEASE M
@@ -2896,7 +2896,7 @@ now imagine that the second CPU wants to read those values::
 
 The above pair of reads may then fail to happen in the expected order, as the
 cacheline holding p may get updated in one of the second CPU's caches whilst
-the update to the cacheline holding v is delayed in the other of the second
+the update to the cacheline holding ``v`` is delayed in the other of the second
 CPU's caches by some other cache event::
 
 	CPU 1		CPU 2		COMMENT
@@ -3089,7 +3089,7 @@ may be reduced to::
 	*A = W;
 
 since, without either a write barrier or an WRITE_ONCE(), it can be
-assumed that the effect of the storage of V to *A is lost.  Similarly::
+assumed that the effect of the storage of ``V`` to ``*A`` is lost.  Similarly::
 
 	*A = Y;
 	Z = *A;
-- 
2.9.4

^ permalink raw reply related	[flat|nested] 35+ messages in thread
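
[Editorial aside: the READ_ONCE()/WRITE_ONCE() behaviour that the patch
above marks up can be sketched in ordinary user-space C.  The macro
bodies below are simplified analogues of the kernel's definitions in
include/linux/compiler.h, not the real ones (those also handle accesses
the CPU cannot perform in a single operation); the variable and function
names are illustrative only.]

```c
#include <assert.h>

/* Simplified user-space analogues of the kernel's READ_ONCE() and
 * WRITE_ONCE().  The volatile cast is the essential part: it forbids
 * the compiler from fusing, omitting, or tearing the marked access,
 * which is exactly what the quoted memory-barriers.txt text warns
 * plain accesses permit. */
#define READ_ONCE(x)     (*(volatile __typeof__(x) *)&(x))
#define WRITE_ONCE(x, v) (*(volatile __typeof__(x) *)&(x) = (v))

static int a;
static int b;

/* Two loads of 'a': with plain accesses the compiler may fuse them
 * into a single load; READ_ONCE() forces two distinct memory reads. */
int sum_two_loads(void)
{
	int first  = READ_ONCE(a);
	int second = READ_ONCE(a);

	return first + second;
}

/* A store the compiler might otherwise prove "redundant" and drop
 * (cf. the  a = 0; ... a = 0;  example in the text above). */
void publish(int v)
{
	WRITE_ONCE(b, v);
}
```

Built with optimizations, the plain-access versions of these helpers
exhibit precisely the load fusing and store elision the patch text
describes; the volatile casts rule both out.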

* [PATCH v2 19/29] memory-hotplug.txt: standardize document format
  2017-06-17 15:26 [PATCH v2 01/29] IPMI.txt: standardize document format Mauro Carvalho Chehab
                   ` (16 preceding siblings ...)
  2017-06-17 15:26 ` [PATCH v2 18/29] memory-barriers.txt: use literals for variables Mauro Carvalho Chehab
@ 2017-06-17 15:26 ` Mauro Carvalho Chehab
  2017-06-17 15:26 ` [PATCH v2 20/29] men-chameleon-bus.txt: " Mauro Carvalho Chehab
                   ` (10 subsequent siblings)
  28 siblings, 0 replies; 35+ messages in thread
From: Mauro Carvalho Chehab @ 2017-06-17 15:26 UTC (permalink / raw)
  To: Linux Doc Mailing List
  Cc: Mauro Carvalho Chehab, Mauro Carvalho Chehab, linux-kernel,
	Jonathan Corbet

Each text file under Documentation follows a different
format. Some don't even have titles!

Change its representation to follow the adopted standard,
using ReST markups for it to be parseable by Sphinx:

- use :Created: and :Updated: for the timestamps;
- comment its internal index;
- adjust titles and use proper markup;
- Whitespace fixes;
- Use cross references where needed;
- Use bulleted lists where needed;
- mark literal blocks;
- Use the ReST notation for a table.

Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
---
 Documentation/memory-hotplug.txt | 357 +++++++++++++++++++++------------------
 1 file changed, 195 insertions(+), 162 deletions(-)

diff --git a/Documentation/memory-hotplug.txt b/Documentation/memory-hotplug.txt
index 670f3ded0802..e2d1f455dcc7 100644
--- a/Documentation/memory-hotplug.txt
+++ b/Documentation/memory-hotplug.txt
@@ -2,43 +2,48 @@
 Memory Hotplug
 ==============
 
-Created:					Jul 28 2007
-Add description of notifier of memory hotplug	Oct 11 2007
+:Created:							Jul 28 2007
+:Updated: Add description of notifier of memory hotplug:	Oct 11 2007
 
 This document is about memory hotplug including how-to-use and current status.
 Because Memory Hotplug is still under development, contents of this text will
 be changed often.
 
-1. Introduction
-  1.1 purpose of memory hotplug
-  1.2. Phases of memory hotplug
-  1.3. Unit of Memory online/offline operation
-2. Kernel Configuration
-3. sysfs files for memory hotplug
-4. Physical memory hot-add phase
-  4.1 Hardware(Firmware) Support
-  4.2 Notify memory hot-add event by hand
-5. Logical Memory hot-add phase
-  5.1. State of memory
-  5.2. How to online memory
-6. Logical memory remove
-  6.1 Memory offline and ZONE_MOVABLE
-  6.2. How to offline memory
-7. Physical memory remove
-8. Memory hotplug event notifier
-9. Future Work List
-
-Note(1): x86_64's has special implementation for memory hotplug.
-         This text does not describe it.
-Note(2): This text assumes that sysfs is mounted at /sys.
-
-
----------------
-1. Introduction
----------------
-
-1.1 purpose of memory hotplug
-------------
+.. CONTENTS
+
+  1. Introduction
+    1.1 purpose of memory hotplug
+    1.2. Phases of memory hotplug
+    1.3. Unit of Memory online/offline operation
+  2. Kernel Configuration
+  3. sysfs files for memory hotplug
+  4. Physical memory hot-add phase
+    4.1 Hardware(Firmware) Support
+    4.2 Notify memory hot-add event by hand
+  5. Logical Memory hot-add phase
+    5.1. State of memory
+    5.2. How to online memory
+  6. Logical memory remove
+    6.1 Memory offline and ZONE_MOVABLE
+    6.2. How to offline memory
+  7. Physical memory remove
+  8. Memory hotplug event notifier
+  9. Future Work List
+
+
+.. note::
+
+    (1) x86_64 has a special implementation for memory hotplug.
+        This text does not describe it.
+    (2) This text assumes that sysfs is mounted at /sys.
+
+
+Introduction
+============
+
+purpose of memory hotplug
+-------------------------
+
 Memory Hotplug allows users to increase/decrease the amount of memory.
 Generally, there are two purposes.
 
@@ -53,9 +58,11 @@ hardware which supports memory power management.
 Linux memory hotplug is designed for both purpose.
 
 
-1.2. Phases of memory hotplug
----------------
-There are 2 phases in Memory Hotplug.
+Phases of memory hotplug
+------------------------
+
+There are 2 phases in Memory Hotplug:
+
   1) Physical Memory Hotplug phase
   2) Logical Memory Hotplug phase.
 
@@ -70,7 +77,7 @@ management tables, and makes sysfs files for new memory's operation.
 If firmware supports notification of connection of new memory to OS,
 this phase is triggered automatically. ACPI can notify this event. If not,
 "probe" operation by system administration is used instead.
-(see Section 4.).
+(see :ref:`memory_hotplug_physical_mem`).
 
 Logical Memory Hotplug phase is to change memory state into
 available/unavailable for users. Amount of memory from user's view is
@@ -83,11 +90,12 @@ Logical Memory Hotplug phase is triggered by write of sysfs file by system
 administrator. For the hot-add case, it must be executed after Physical Hotplug
 phase by hand.
 (However, if you writes udev's hotplug scripts for memory hotplug, these
- phases can be execute in seamless way.)
+phases can be executed in a seamless way.)
 
 
-1.3. Unit of Memory online/offline operation
-------------
+Unit of Memory online/offline operation
+---------------------------------------
+
 Memory hotplug uses SPARSEMEM memory model which allows memory to be divided
 into chunks of the same size. These chunks are called "sections". The size of
 a memory section is architecture dependent. For example, power uses 16MiB, ia64
@@ -97,46 +105,50 @@ Memory sections are combined into chunks referred to as "memory blocks". The
 size of a memory block is architecture dependent and represents the logical
 unit upon which memory online/offline operations are to be performed. The
 default size of a memory block is the same as memory section size unless an
-architecture specifies otherwise. (see Section 3.)
+architecture specifies otherwise. (see :ref:`memory_hotplug_sysfs_files`.)
 
 To determine the size (in bytes) of a memory block please read this file:
 
 /sys/devices/system/memory/block_size_bytes
 
 
------------------------
-2. Kernel Configuration
------------------------
+Kernel Configuration
+====================
+
 To use memory hotplug feature, kernel must be compiled with following
 config options.
 
-- For all memory hotplug
-    Memory model -> Sparse Memory  (CONFIG_SPARSEMEM)
-    Allow for memory hot-add       (CONFIG_MEMORY_HOTPLUG)
+- For all memory hotplug:
+    - Memory model -> Sparse Memory  (CONFIG_SPARSEMEM)
+    - Allow for memory hot-add       (CONFIG_MEMORY_HOTPLUG)
 
-- To enable memory removal, the following are also necessary
-    Allow for memory hot remove    (CONFIG_MEMORY_HOTREMOVE)
-    Page Migration                 (CONFIG_MIGRATION)
+- To enable memory removal, the following are also necessary:
+    - Allow for memory hot remove    (CONFIG_MEMORY_HOTREMOVE)
+    - Page Migration                 (CONFIG_MIGRATION)
 
-- For ACPI memory hotplug, the following are also necessary
-    Memory hotplug (under ACPI Support menu) (CONFIG_ACPI_HOTPLUG_MEMORY)
-    This option can be kernel module.
+- For ACPI memory hotplug, the following are also necessary:
+    - Memory hotplug (under ACPI Support menu) (CONFIG_ACPI_HOTPLUG_MEMORY)
+    - This option can be a kernel module.
 
 - As a related configuration, if your box has a feature of NUMA-node hotplug
   via ACPI, then this option is necessary too.
-    ACPI0004,PNP0A05 and PNP0A06 Container Driver (under ACPI Support menu)
-    (CONFIG_ACPI_CONTAINER).
-    This option can be kernel module too.
 
+    - ACPI0004,PNP0A05 and PNP0A06 Container Driver (under ACPI Support menu)
+      (CONFIG_ACPI_CONTAINER).
+
+     This option can be a kernel module too.
+
+
+.. _memory_hotplug_sysfs_files:
+
+sysfs files for memory hotplug
+==============================
 
---------------------------------
-3 sysfs files for memory hotplug
---------------------------------
 All memory blocks have their device information in sysfs.  Each memory block
-is described under /sys/devices/system/memory as
+is described under /sys/devices/system/memory as:
 
-/sys/devices/system/memory/memoryXXX
-(XXX is the memory block id.)
+	/sys/devices/system/memory/memoryXXX
+	(XXX is the memory block id.)
 
 For the memory block covered by the sysfs directory.  It is expected that all
 memory sections in this range are present and no memory holes exist in the
@@ -145,43 +157,53 @@ the existence of one should not affect the hotplug capabilities of the memory
 block.
 
 For example, assume 1GiB memory block size. A device for a memory starting at
-0x100000000 is /sys/device/system/memory/memory4
-(0x100000000 / 1Gib = 4)
+0x100000000 is /sys/device/system/memory/memory4::
+
+	(0x100000000 / 1GiB = 4)
+
 This device covers address range [0x100000000 ... 0x140000000)
 
 Under each memory block, you can see 5 files:
 
-/sys/devices/system/memory/memoryXXX/phys_index
-/sys/devices/system/memory/memoryXXX/phys_device
-/sys/devices/system/memory/memoryXXX/state
-/sys/devices/system/memory/memoryXXX/removable
-/sys/devices/system/memory/memoryXXX/valid_zones
+- /sys/devices/system/memory/memoryXXX/phys_index
+- /sys/devices/system/memory/memoryXXX/phys_device
+- /sys/devices/system/memory/memoryXXX/state
+- /sys/devices/system/memory/memoryXXX/removable
+- /sys/devices/system/memory/memoryXXX/valid_zones
+
+=================== ============================================================
+``phys_index``      read-only and contains memory block id, same as XXX.
+``state``           read-write
+
+                    - at read:  contains online/offline state of memory.
+                    - at write: user can specify "online_kernel",
 
-'phys_index'      : read-only and contains memory block id, same as XXX.
-'state'           : read-write
-                    at read:  contains online/offline state of memory.
-                    at write: user can specify "online_kernel",
                     "online_movable", "online", "offline" command
                     which will be performed on all sections in the block.
-'phys_device'     : read-only: designed to show the name of physical memory
+``phys_device``     read-only: designed to show the name of physical memory
                     device.  This is not well implemented now.
-'removable'       : read-only: contains an integer value indicating
+``removable``       read-only: contains an integer value indicating
                     whether the memory block is removable or not
                     removable.  A value of 1 indicates that the memory
                     block is removable and a value of 0 indicates that
                     it is not removable. A memory block is removable only if
                     every section in the block is removable.
-'valid_zones'     : read-only: designed to show which zones this memory block
+``valid_zones``     read-only: designed to show which zones this memory block
 		    can be onlined to.
-		    The first column shows it's default zone.
+
+		    The first column shows its default zone.
+
 		    "memory6/valid_zones: Normal Movable" shows this memoryblock
 		    can be onlined to ZONE_NORMAL by default and to ZONE_MOVABLE
 		    by online_movable.
+
 		    "memory7/valid_zones: Movable Normal" shows this memoryblock
 		    can be onlined to ZONE_MOVABLE by default and to ZONE_NORMAL
 		    by online_kernel.
+=================== ============================================================
+
+.. note::
 
-NOTE:
   These directories/files appear after physical memory hotplug phase.
 
 If CONFIG_NUMA is enabled the memoryXXX/ directories can also be accessed
@@ -193,13 +215,14 @@ For example:
 A backlink will also be created:
 /sys/devices/system/memory/memory9/node0 -> ../../node/node0
 
+.. _memory_hotplug_physical_mem:
 
---------------------------------
-4. Physical memory hot-add phase
---------------------------------
+Physical memory hot-add phase
+=============================
+
+Hardware(Firmware) Support
+--------------------------
 
-4.1 Hardware(Firmware) Support
-------------
 On x86_64/ia64 platform, memory hotplug by ACPI is supported.
 
 In general, the firmware (ACPI) which supports memory hotplug defines
@@ -209,7 +232,8 @@ script. This will be done automatically.
 
 But scripts for memory hotplug are not contained in generic udev package(now).
 You may have to write it by yourself or online/offline memory by hand.
-Please see "How to online memory", "How to offline memory" in this text.
+Please see :ref:`memory_hotplug_how_to_online_memory` and
+:ref:`memory_hotplug_how_to_offline_memory`.
 
 If firmware supports NUMA-node hotplug, and defines an object _HID "ACPI0004",
 "PNP0A05", or "PNP0A06", notification is asserted to it, and ACPI handler
@@ -217,8 +241,9 @@ calls hotplug code for all of objects which are defined in it.
 If memory device is found, memory hotplug code will be called.
 
 
-4.2 Notify memory hot-add event by hand
-------------
+Notify memory hot-add event by hand
+-----------------------------------
+
 On some architectures, the firmware may not notify the kernel of a memory
 hotplug event.  Therefore, the memory "probe" interface is supported to
 explicitly notify the kernel.  This interface depends on
@@ -229,45 +254,48 @@ notification.
 Probe interface is located at
 /sys/devices/system/memory/probe
 
-You can tell the physical address of new memory to the kernel by
+You can tell the physical address of new memory to the kernel by::
 
-% echo start_address_of_new_memory > /sys/devices/system/memory/probe
+	% echo start_address_of_new_memory > /sys/devices/system/memory/probe
 
 Then, [start_address_of_new_memory, start_address_of_new_memory +
 memory_block_size] memory range is hot-added. In this case, hotplug script is
 not called (in current implementation). You'll have to online memory by
-yourself.  Please see "How to online memory" in this text.
+yourself.  Please see :ref:`memory_hotplug_how_to_online_memory`.
 
 
-------------------------------
-5. Logical Memory hot-add phase
-------------------------------
+Logical Memory hot-add phase
+============================
 
-5.1. State of memory
-------------
-To see (online/offline) state of a memory block, read 'state' file.
+State of memory
+---------------
 
-% cat /sys/device/system/memory/memoryXXX/state
+To see (online/offline) state of a memory block, read 'state' file::
 
+	% cat /sys/device/system/memory/memoryXXX/state
 
-If the memory block is online, you'll read "online".
-If the memory block is offline, you'll read "offline".
 
+- If the memory block is online, you'll read "online".
+- If the memory block is offline, you'll read "offline".
+
+
+.. _memory_hotplug_how_to_online_memory:
+
+How to online memory
+--------------------
 
-5.2. How to online memory
-------------
 When the memory is hot-added, the kernel decides whether or not to "online"
-it according to the policy which can be read from "auto_online_blocks" file:
+it according to the policy which can be read from "auto_online_blocks" file::
 
-% cat /sys/devices/system/memory/auto_online_blocks
+	% cat /sys/devices/system/memory/auto_online_blocks
 
 The default depends on the CONFIG_MEMORY_HOTPLUG_DEFAULT_ONLINE kernel config
 option. If it is disabled the default is "offline" which means the newly added
 memory is not in a ready-to-use state and you have to "online" the newly added
 memory blocks manually. Automatic onlining can be requested by writing "online"
-to "auto_online_blocks" file:
+to "auto_online_blocks" file::
 
-% echo online > /sys/devices/system/memory/auto_online_blocks
+	% echo online > /sys/devices/system/memory/auto_online_blocks
 
 This sets a global policy and impacts all memory blocks that will subsequently
 be hotplugged. Currently offline blocks keep their state. It is possible, under
@@ -277,20 +305,22 @@ online. User space tools can check their "state" files
 
 If the automatic onlining wasn't requested, failed, or some memory block was
 offlined it is possible to change the individual block's state by writing to the
-"state" file:
+"state" file::
 
-% echo online > /sys/devices/system/memory/memoryXXX/state
+	% echo online > /sys/devices/system/memory/memoryXXX/state
 
 This onlining will not change the ZONE type of the target memory block,
-If the memory block is in ZONE_NORMAL, you can change it to ZONE_MOVABLE:
+If the memory block is in ZONE_NORMAL, you can change it to ZONE_MOVABLE::
 
-% echo online_movable > /sys/devices/system/memory/memoryXXX/state
-(NOTE: current limit: this memory block must be adjacent to ZONE_MOVABLE)
+	% echo online_movable > /sys/devices/system/memory/memoryXXX/state
 
-And if the memory block is in ZONE_MOVABLE, you can change it to ZONE_NORMAL:
+.. note:: current limit: this memory block must be adjacent to ZONE_MOVABLE
 
-% echo online_kernel > /sys/devices/system/memory/memoryXXX/state
-(NOTE: current limit: this memory block must be adjacent to ZONE_NORMAL)
+And if the memory block is in ZONE_MOVABLE, you can change it to ZONE_NORMAL::
+
+	% echo online_kernel > /sys/devices/system/memory/memoryXXX/state
+
+.. note:: current limit: this memory block must be adjacent to ZONE_NORMAL
 
 After this, memory block XXX's state will be 'online' and the amount of
 available memory will be increased.
@@ -300,12 +330,12 @@ This may be changed in future.
 
 
 
-------------------------
-6. Logical memory remove
-------------------------
+Logical memory remove
+=====================
+
+Memory offline and ZONE_MOVABLE
+-------------------------------
 
-6.1 Memory offline and ZONE_MOVABLE
-------------
 Memory offlining is more complicated than memory online. Because memory offline
 has to make the whole memory block be unused, memory offline can fail if
 the memory block includes memory which cannot be freed.
@@ -330,24 +360,27 @@ Assume the system has "TOTAL" amount of memory at boot time, this boot option
 creates ZONE_MOVABLE as following.
 
 1) When kernelcore=YYYY boot option is used,
-  Size of memory not for movable pages (not for offline) is YYYY.
-  Size of memory for movable pages (for offline) is TOTAL-YYYY.
+   Size of memory not for movable pages (not for offline) is YYYY.
+   Size of memory for movable pages (for offline) is TOTAL-YYYY.
 
 2) When movablecore=ZZZZ boot option is used,
-  Size of memory not for movable pages (not for offline) is TOTAL - ZZZZ.
-  Size of memory for movable pages (for offline) is ZZZZ.
+   Size of memory not for movable pages (not for offline) is TOTAL - ZZZZ.
+   Size of memory for movable pages (for offline) is ZZZZ.
 
+.. note::
 
-Note: Unfortunately, there is no information to show which memory block belongs
-to ZONE_MOVABLE. This is TBD.
+   Unfortunately, there is no information to show which memory block belongs
+   to ZONE_MOVABLE. This is TBD.
 
+.. _memory_hotplug_how_to_offline_memory:
+
+How to offline memory
+---------------------
 
-6.2. How to offline memory
-------------
 You can offline a memory block by using the same sysfs interface that was used
-in memory onlining.
+in memory onlining::
 
-% echo offline > /sys/devices/system/memory/memoryXXX/state
+	% echo offline > /sys/devices/system/memory/memoryXXX/state
 
 If offline succeeds, the state of the memory block is changed to be "offline".
 If it fails, some error core (like -EBUSY) will be returned by the kernel.
@@ -361,22 +394,22 @@ able to offline it (or not). (For example, a page is referred to by some kernel
 internal call and released soon.)
 
 Consideration:
-Memory hotplug's design direction is to make the possibility of memory offlining
-higher and to guarantee unplugging memory under any situation. But it needs
-more work. Returning -EBUSY under some situation may be good because the user
-can decide to retry more or not by himself. Currently, memory offlining code
-does some amount of retry with 120 seconds timeout.
+  Memory hotplug's design direction is to make the possibility of memory
+  offlining higher and to guarantee unplugging memory under any situation. But
+  it needs more work. Returning -EBUSY under some situation may be good because
+  the user can decide to retry more or not by himself. Currently, memory
+  offlining code does some amount of retry with 120 seconds timeout.
+
+Physical memory remove
+======================
 
--------------------------
-7. Physical memory remove
--------------------------
 Need more implementation yet....
  - Notification completion of remove works by OS to firmware.
  - Guard from remove if not yet.
 
---------------------------------
-8. Memory hotplug event notifier
---------------------------------
+Memory hotplug event notifier
+=============================
+
 Hotplugging events are sent to a notification queue.
 
 There are six types of notification defined in include/linux/memory.h:
@@ -406,14 +439,14 @@ MEM_CANCEL_OFFLINE
 MEM_OFFLINE
   Generated after offlining memory is complete.
 
-A callback routine can be registered by calling
+A callback routine can be registered by calling::
 
   hotplug_memory_notifier(callback_func, priority)
 
 Callback functions with higher values of priority are called before callback
 functions with lower values.
 
-A callback function must have the following prototype:
+A callback function must have the following prototype::
 
   int callback_func(
     struct notifier_block *self, unsigned long action, void *arg);
@@ -421,27 +454,28 @@ A callback function must have the following prototype:
 The first argument of the callback function (self) is a pointer to the block
 of the notifier chain that points to the callback function itself.
 The second argument (action) is one of the event types described above.
-The third argument (arg) passes a pointer of struct memory_notify.
+The third argument (arg) passes a pointer of struct memory_notify::
 
-struct memory_notify {
-       unsigned long start_pfn;
-       unsigned long nr_pages;
-       int status_change_nid_normal;
-       int status_change_nid_high;
-       int status_change_nid;
-}
+	struct memory_notify {
+		unsigned long start_pfn;
+		unsigned long nr_pages;
+		int status_change_nid_normal;
+		int status_change_nid_high;
+		int status_change_nid;
+	}
 
-start_pfn is start_pfn of online/offline memory.
-nr_pages is # of pages of online/offline memory.
-status_change_nid_normal is set node id when N_NORMAL_MEMORY of nodemask
-is (will be) set/clear, if this is -1, then nodemask status is not changed.
-status_change_nid_high is set node id when N_HIGH_MEMORY of nodemask
-is (will be) set/clear, if this is -1, then nodemask status is not changed.
-status_change_nid is set node id when N_MEMORY of nodemask is (will be)
-set/clear. It means a new(memoryless) node gets new memory by online and a
-node loses all memory. If this is -1, then nodemask status is not changed.
-If status_changed_nid* >= 0, callback should create/discard structures for the
-node if necessary.
+- start_pfn is start_pfn of online/offline memory.
+- nr_pages is # of pages of online/offline memory.
+- status_change_nid_normal is set node id when N_NORMAL_MEMORY of nodemask
+  is (will be) set/clear, if this is -1, then nodemask status is not changed.
+- status_change_nid_high is set node id when N_HIGH_MEMORY of nodemask
+  is (will be) set/clear, if this is -1, then nodemask status is not changed.
+- status_change_nid is set node id when N_MEMORY of nodemask is (will be)
+  set/clear. It means a new(memoryless) node gets new memory by online and a
+  node loses all memory. If this is -1, then nodemask status is not changed.
+
+  If status_changed_nid* >= 0, callback should create/discard structures for the
+  node if necessary.
 
 The callback routine shall return one of the values
 NOTIFY_DONE, NOTIFY_OK, NOTIFY_BAD, NOTIFY_STOP
@@ -455,9 +489,9 @@ further processing of the notification queue.
 
 NOTIFY_STOP stops further processing of the notification queue.
 
---------------
-9. Future Work
---------------
+Future Work
+===========
+
   - allowing memory hot-add to ZONE_MOVABLE. maybe we need some switch like
     sysctl or new control file.
   - showing memory block and physical device relationship.
@@ -465,4 +499,3 @@ NOTIFY_STOP stops further processing of the notification queue.
   - support HugeTLB page migration and offlining.
   - memmap removing at memory offline.
   - physical remove memory.
-
-- 
2.9.4

^ permalink raw reply related	[flat|nested] 35+ messages in thread
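
[Editorial aside: the memory-hotplug patch above documents that notifier callbacks registered via hotplug_memory_notifier() run in descending priority order and that NOTIFY_STOP halts the chain. The following is a hypothetical userspace model of that ordering, not the kernel implementation; all sim_* names are invented for illustration, and the NOTIFY_* values merely mirror include/linux/notifier.h.]

```c
#include <stddef.h>

/* Hypothetical userspace model of the priority-ordered notifier chain
 * described above; NOTIFY_* values mirror include/linux/notifier.h, but
 * this is an illustration, not the kernel code. */
#define NOTIFY_OK   0x0001
#define NOTIFY_STOP 0x8000

struct sim_notifier {
	int (*call)(unsigned long action, void *arg);
	int priority;
	struct sim_notifier *next;
};

static struct sim_notifier *sim_chain;

/* Keep the chain sorted by descending priority: callbacks with higher
 * priority values run first, as hotplug_memory_notifier() documents. */
static void sim_register(struct sim_notifier *n)
{
	struct sim_notifier **p = &sim_chain;

	while (*p && (*p)->priority >= n->priority)
		p = &(*p)->next;
	n->next = *p;
	*p = n;
}

static int sim_call_chain(unsigned long action, void *arg)
{
	int ret = NOTIFY_OK;
	struct sim_notifier *n;

	for (n = sim_chain; n; n = n->next) {
		ret = n->call(action, arg);
		if (ret & NOTIFY_STOP)	/* NOTIFY_STOP halts the queue */
			break;
	}
	return ret;
}

static int seen[2], nseen;

static int low_cb(unsigned long a, void *arg)
{
	(void)a; (void)arg;
	seen[nseen++] = 1;
	return NOTIFY_OK;
}

static int high_cb(unsigned long a, void *arg)
{
	(void)a; (void)arg;
	seen[nseen++] = 2;
	return NOTIFY_OK;
}

/* Returns 1 when the priority-10 callback observably ran before the
 * priority-0 one; 0 stands in for an action such as MEM_GOING_ONLINE. */
static int demo_priority_order(void)
{
	struct sim_notifier low  = { .call = low_cb,  .priority = 0 };
	struct sim_notifier high = { .call = high_cb, .priority = 10 };

	sim_register(&low);
	sim_register(&high);
	sim_call_chain(0, NULL);

	return nseen == 2 && seen[0] == 2 && seen[1] == 1;
}
```

To try it, compile this translation unit together with a trivial main() that calls demo_priority_order().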

* [PATCH v2 20/29] men-chameleon-bus.txt: standardize document format
  2017-06-17 15:26 [PATCH v2 01/29] IPMI.txt: standardize document format Mauro Carvalho Chehab
                   ` (17 preceding siblings ...)
  2017-06-17 15:26 ` [PATCH v2 19/29] memory-hotplug.txt: standardize document format Mauro Carvalho Chehab
@ 2017-06-17 15:26 ` Mauro Carvalho Chehab
  2017-06-17 15:26 ` [PATCH v2 21/29] nommu-mmap.txt: " Mauro Carvalho Chehab
                   ` (9 subsequent siblings)
  28 siblings, 0 replies; 35+ messages in thread
From: Mauro Carvalho Chehab @ 2017-06-17 15:26 UTC (permalink / raw)
  To: Linux Doc Mailing List
  Cc: Mauro Carvalho Chehab, Mauro Carvalho Chehab, linux-kernel,
	Jonathan Corbet, Johannes Thumshirn

Each text file under Documentation follows a different
format. Some don't even have titles!

Change its representation to follow the adopted standard,
using ReST markups for it to be parseable by Sphinx:

- Adjust indentations;
- Remove title numbering;
- mark literal blocks;
- comment its TOC.

Acked-by: Johannes Thumshirn <jthumshirn@suse.de>
Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
---
 Documentation/men-chameleon-bus.txt | 298 +++++++++++++++++++-----------------
 1 file changed, 155 insertions(+), 143 deletions(-)

diff --git a/Documentation/men-chameleon-bus.txt b/Documentation/men-chameleon-bus.txt
index 30ded732027e..1b1f048aa748 100644
--- a/Documentation/men-chameleon-bus.txt
+++ b/Documentation/men-chameleon-bus.txt
@@ -1,163 +1,175 @@
-                               MEN Chameleon Bus
-                               =================
-
-Table of Contents
 =================
-1 Introduction
-    1.1 Scope of this Document
-    1.2 Limitations of the current implementation
-2 Architecture
-    2.1 MEN Chameleon Bus
-    2.2 Carrier Devices
-    2.3 Parser
-3 Resource handling
-    3.1 Memory Resources
-    3.2 IRQs
-4 Writing an MCB driver
-    4.1 The driver structure
-    4.2 Probing and attaching
-    4.3 Initializing the driver
-
+MEN Chameleon Bus
+=================
 
-1 Introduction
-===============
-  This document describes the architecture and implementation of the MEN
-  Chameleon Bus (called MCB throughout this document).
+.. Table of Contents
+   =================
+   1 Introduction
+       1.1 Scope of this Document
+       1.2 Limitations of the current implementation
+   2 Architecture
+       2.1 MEN Chameleon Bus
+       2.2 Carrier Devices
+       2.3 Parser
+   3 Resource handling
+       3.1 Memory Resources
+       3.2 IRQs
+   4 Writing an MCB driver
+       4.1 The driver structure
+       4.2 Probing and attaching
+       4.3 Initializing the driver
 
-1.1 Scope of this Document
----------------------------
-  This document is intended to be a short overview of the current
-  implementation and does by no means describe the complete possibilities of MCB
-  based devices.
 
-1.2 Limitations of the current implementation
-----------------------------------------------
-  The current implementation is limited to PCI and PCIe based carrier devices
-  that only use a single memory resource and share the PCI legacy IRQ.  Not
-  implemented are:
-  - Multi-resource MCB devices like the VME Controller or M-Module carrier.
-  - MCB devices that need another MCB device, like SRAM for a DMA Controller's
-    buffer descriptors or a video controller's video memory.
-  - A per-carrier IRQ domain for carrier devices that have one (or more) IRQs
-    per MCB device like PCIe based carriers with MSI or MSI-X support.
+Introduction
+============
 
-2 Architecture
-===============
-  MCB is divided into 3 functional blocks:
-  - The MEN Chameleon Bus itself,
-  - drivers for MCB Carrier Devices and
-  - the parser for the Chameleon table.
+This document describes the architecture and implementation of the MEN
+Chameleon Bus (called MCB throughout this document).
 
-2.1 MEN Chameleon Bus
+Scope of this Document
 ----------------------
-   The MEN Chameleon Bus is an artificial bus system that attaches to a so
-   called Chameleon FPGA device found on some hardware produced my MEN Mikro
-   Elektronik GmbH. These devices are multi-function devices implemented in a
-   single FPGA and usually attached via some sort of PCI or PCIe link. Each
-   FPGA contains a header section describing the content of the FPGA. The
-   header lists the device id, PCI BAR, offset from the beginning of the PCI
-   BAR, size in the FPGA, interrupt number and some other properties currently
-   not handled by the MCB implementation.
-
-2.2 Carrier Devices
+
+This document is intended to be a short overview of the current
+implementation and does by no means describe the complete possibilities of MCB
+based devices.
+
+Limitations of the current implementation
+-----------------------------------------
+
+The current implementation is limited to PCI and PCIe based carrier devices
+that only use a single memory resource and share the PCI legacy IRQ.  Not
+implemented are:
+
+- Multi-resource MCB devices like the VME Controller or M-Module carrier.
+- MCB devices that need another MCB device, like SRAM for a DMA Controller's
+  buffer descriptors or a video controller's video memory.
+- A per-carrier IRQ domain for carrier devices that have one (or more) IRQs
+  per MCB device like PCIe based carriers with MSI or MSI-X support.
+
+Architecture
+============
+
+MCB is divided into 3 functional blocks:
+
+- The MEN Chameleon Bus itself,
+- drivers for MCB Carrier Devices and
+- the parser for the Chameleon table.
+
+MEN Chameleon Bus
+-----------------
+
+The MEN Chameleon Bus is an artificial bus system that attaches to a so
+called Chameleon FPGA device found on some hardware produced my MEN Mikro
+Elektronik GmbH. These devices are multi-function devices implemented in a
+single FPGA and usually attached via some sort of PCI or PCIe link. Each
+FPGA contains a header section describing the content of the FPGA. The
+header lists the device id, PCI BAR, offset from the beginning of the PCI
+BAR, size in the FPGA, interrupt number and some other properties currently
+not handled by the MCB implementation.
+
+Carrier Devices
+---------------
+
+A carrier device is just an abstraction for the real world physical bus the
+Chameleon FPGA is attached to. Some IP Core drivers may need to interact with
+properties of the carrier device (like querying the IRQ number of a PCI
+device). To provide abstraction from the real hardware bus, an MCB carrier
+device provides callback methods to translate the driver's MCB function calls
+to hardware related function calls. For example a carrier device may
+implement the get_irq() method which can be translated into a hardware bus
+query for the IRQ number the device should use.
+
+Parser
+------
+
+The parser reads the first 512 bytes of a Chameleon device and parses the
+Chameleon table. Currently the parser only supports the Chameleon v2 variant
+of the Chameleon table but can easily be adopted to support an older or
+possible future variant. While parsing the table's entries new MCB devices
+are allocated and their resources are assigned according to the resource
+assignment in the Chameleon table. After resource assignment is finished, the
+MCB devices are registered at the MCB and thus at the driver core of the
+Linux kernel.
+
+Resource handling
+=================
+
+The current implementation assigns exactly one memory and one IRQ resource
+per MCB device. But this is likely going to change in the future.
+
+Memory Resources
+----------------
+
+Each MCB device has exactly one memory resource, which can be requested from
+the MCB bus. This memory resource is the physical address of the MCB device
+inside the carrier and is intended to be passed to ioremap() and friends. It
+is already requested from the kernel by calling request_mem_region().
+
+IRQs
+----
+
+Each MCB device has exactly one IRQ resource, which can be requested from the
+MCB bus. If a carrier device driver implements the ->get_irq() callback
+method, the IRQ number assigned by the carrier device will be returned,
+otherwise the IRQ number inside the Chameleon table will be returned. This
+number is suitable to be passed to request_irq().
+
+Writing an MCB driver
+=====================
+
+The driver structure
 --------------------
-   A carrier device is just an abstraction for the real world physical bus the
-   Chameleon FPGA is attached to. Some IP Core drivers may need to interact with
-   properties of the carrier device (like querying the IRQ number of a PCI
-   device). To provide abstraction from the real hardware bus, an MCB carrier
-   device provides callback methods to translate the driver's MCB function calls
-   to hardware related function calls. For example a carrier device may
-   implement the get_irq() method which can be translated into a hardware bus
-   query for the IRQ number the device should use.
 
-2.3 Parser
------------
-   The parser reads the first 512 bytes of a Chameleon device and parses the
-   Chameleon table. Currently the parser only supports the Chameleon v2 variant
-   of the Chameleon table but can easily be adopted to support an older or
-   possible future variant. While parsing the table's entries new MCB devices
-   are allocated and their resources are assigned according to the resource
-   assignment in the Chameleon table. After resource assignment is finished, the
-   MCB devices are registered at the MCB and thus at the driver core of the
-   Linux kernel.
+Each MCB driver has a structure to identify the device driver as well as
+device ids which identify the IP Core inside the FPGA. The driver structure
+also contains callback methods which get executed on driver probe and
+removal from the system::
 
-3 Resource handling
-====================
-  The current implementation assigns exactly one memory and one IRQ resource
-  per MCB device. But this is likely going to change in the future.
+	static const struct mcb_device_id foo_ids[] = {
+		{ .device = 0x123 },
+		{ }
+	};
+	MODULE_DEVICE_TABLE(mcb, foo_ids);
 
-3.1 Memory Resources
+	static struct mcb_driver foo_driver = {
+	driver = {
+		.name = "foo-bar",
+		.owner = THIS_MODULE,
+	},
+		.probe = foo_probe,
+		.remove = foo_remove,
+		.id_table = foo_ids,
+	};
+
+Probing and attaching
 ---------------------
-   Each MCB device has exactly one memory resource, which can be requested from
-   the MCB bus. This memory resource is the physical address of the MCB device
-   inside the carrier and is intended to be passed to ioremap() and friends. It
-   is already requested from the kernel by calling request_mem_region().
 
-3.2 IRQs
----------
-   Each MCB device has exactly one IRQ resource, which can be requested from the
-   MCB bus. If a carrier device driver implements the ->get_irq() callback
-   method, the IRQ number assigned by the carrier device will be returned,
-   otherwise the IRQ number inside the Chameleon table will be returned. This
-   number is suitable to be passed to request_irq().
+When a driver is loaded and the MCB devices it services are found, the MCB
+core will call the driver's probe callback method. When the driver is removed
+from the system, the MCB core will call the driver's remove callback method::
 
-4 Writing an MCB driver
-=======================
+	static init foo_probe(struct mcb_device *mdev, const struct mcb_device_id *id);
+	static void foo_remove(struct mcb_device *mdev);
 
-4.1 The driver structure
--------------------------
-    Each MCB driver has a structure to identify the device driver as well as
-    device ids which identify the IP Core inside the FPGA. The driver structure
-    also contains callback methods which get executed on driver probe and
-    removal from the system.
+Initializing the driver
+-----------------------
 
+When the kernel is booted or your foo driver module is inserted, you have to
+perform driver initialization. Usually it is enough to register your driver
+module at the MCB core::
 
-  static const struct mcb_device_id foo_ids[] = {
-          { .device = 0x123 },
-          { }
-  };
-  MODULE_DEVICE_TABLE(mcb, foo_ids);
+	static int __init foo_init(void)
+	{
+		return mcb_register_driver(&foo_driver);
+	}
+	module_init(foo_init);
 
-  static struct mcb_driver foo_driver = {
-          driver = {
-                  .name = "foo-bar",
-                  .owner = THIS_MODULE,
-          },
-          .probe = foo_probe,
-          .remove = foo_remove,
-          .id_table = foo_ids,
-  };
+	static void __exit foo_exit(void)
+	{
+		mcb_unregister_driver(&foo_driver);
+	}
+	module_exit(foo_exit);
 
-4.2 Probing and attaching
---------------------------
-   When a driver is loaded and the MCB devices it services are found, the MCB
-   core will call the driver's probe callback method. When the driver is removed
-   from the system, the MCB core will call the driver's remove callback method.
+The module_mcb_driver() macro can be used to reduce the above code::
 
-
-  static init foo_probe(struct mcb_device *mdev, const struct mcb_device_id *id);
-  static void foo_remove(struct mcb_device *mdev);
-
-4.3 Initializing the driver
-----------------------------
-   When the kernel is booted or your foo driver module is inserted, you have to
-   perform driver initialization. Usually it is enough to register your driver
-   module at the MCB core.
-
-
-  static int __init foo_init(void)
-  {
-          return mcb_register_driver(&foo_driver);
-  }
-  module_init(foo_init);
-
-  static void __exit foo_exit(void)
-  {
-          mcb_unregister_driver(&foo_driver);
-  }
-  module_exit(foo_exit);
-
-   The module_mcb_driver() macro can be used to reduce the above code.
-
-
-  module_mcb_driver(foo_driver);
+	module_mcb_driver(foo_driver);
-- 
2.9.4

^ permalink raw reply related	[flat|nested] 35+ messages in thread
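
[Editorial aside: the MCB patch above shows a driver structure with a zero-terminated id table plus probe/remove callbacks, and says the core calls probe when a matching device is found. Here is a hypothetical userspace sketch of that match-then-probe pattern; the sim_* and foo_* names imitate the document's example but are not the real mcb_* API.]

```c
#include <stddef.h>

/* Hypothetical userspace sketch of the probe/remove pattern described in
 * the MCB section above; sim_* types imitate, not reproduce, the API. */
struct sim_device_id { unsigned int device; };

struct sim_device {
	unsigned int device;
	int probed;
};

struct sim_driver {
	const struct sim_device_id *id_table;	/* zero-terminated, like foo_ids */
	int  (*probe)(struct sim_device *dev, const struct sim_device_id *id);
	void (*remove)(struct sim_device *dev);
};

/* The core walks the id table; on a match it invokes the driver's probe,
 * mirroring what the MCB core does when serviced devices are found. */
static int sim_attach(struct sim_driver *drv, struct sim_device *dev)
{
	const struct sim_device_id *id;

	for (id = drv->id_table; id->device; id++) {
		if (id->device == dev->device)
			return drv->probe(dev, id);
	}
	return -1;	/* no matching id: driver is not attached */
}

static int foo_probe(struct sim_device *dev, const struct sim_device_id *id)
{
	(void)id;
	dev->probed = 1;
	return 0;
}

static void foo_remove(struct sim_device *dev)
{
	dev->probed = 0;
}

static const struct sim_device_id foo_ids[] = {
	{ .device = 0x123 },
	{ }
};

static struct sim_driver foo_driver = {
	.id_table = foo_ids,
	.probe    = foo_probe,
	.remove   = foo_remove,
};

/* Returns 1 when a 0x123 device is probed, a 0x456 device is rejected,
 * and remove() undoes the probe. */
static int demo_mcb_match(void)
{
	struct sim_device hit  = { .device = 0x123, .probed = 0 };
	struct sim_device miss = { .device = 0x456, .probed = 0 };
	int matched, rejected;

	matched  = (sim_attach(&foo_driver, &hit)  == 0)  && hit.probed;
	rejected = (sim_attach(&foo_driver, &miss) == -1) && !miss.probed;

	foo_driver.remove(&hit);
	return matched && rejected && !hit.probed;
}
```

As with a real bus core, the driver never scans for devices itself; it only supplies the table and callbacks and lets the core drive the matching.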

* [PATCH v2 21/29] nommu-mmap.txt: standardize document format
  2017-06-17 15:26 [PATCH v2 01/29] IPMI.txt: standardize document format Mauro Carvalho Chehab
                   ` (18 preceding siblings ...)
  2017-06-17 15:26 ` [PATCH v2 20/29] men-chameleon-bus.txt: " Mauro Carvalho Chehab
@ 2017-06-17 15:26 ` Mauro Carvalho Chehab
  2017-06-17 15:26 ` [PATCH v2 22/29] nommu-mmap.txt: don't use all upper case on titles Mauro Carvalho Chehab
                   ` (8 subsequent siblings)
  28 siblings, 0 replies; 35+ messages in thread
From: Mauro Carvalho Chehab @ 2017-06-17 15:26 UTC (permalink / raw)
  To: Linux Doc Mailing List
  Cc: Mauro Carvalho Chehab, Mauro Carvalho Chehab, linux-kernel,
	Jonathan Corbet

Each text file under Documentation follows a different
format. Some don't even have titles!

Change its representation to follow the adopted standard,
using ReST markups for it to be parseable by Sphinx:

- Adjust indentation for main title;
- fix level for chapter titles;
- use ".. important::" tag for an important note;
- use the right notation for paragraph auto-numbering "(#)";
- Fix footnotes syntax;
- fix one literal var to use the right ReST tag.

Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
---
 Documentation/nommu-mmap.txt | 52 +++++++++++++++++++-------------------------
 1 file changed, 22 insertions(+), 30 deletions(-)

diff --git a/Documentation/nommu-mmap.txt b/Documentation/nommu-mmap.txt
index ae57b9ea0d41..39a62ab0f50a 100644
--- a/Documentation/nommu-mmap.txt
+++ b/Documentation/nommu-mmap.txt
@@ -1,6 +1,6 @@
-			 =============================
-			 NO-MMU MEMORY MAPPING SUPPORT
-			 =============================
+=============================
+NO-MMU MEMORY MAPPING SUPPORT
+=============================
 
 The kernel has limited support for memory mapping under no-MMU conditions, such
 as are used in uClinux environments. From the userspace point of view, memory
@@ -16,7 +16,7 @@ the CLONE_VM flag.
 The behaviour is similar between the MMU and no-MMU cases, but not identical;
 and it's also much more restricted in the latter case:
 
- (*) Anonymous mapping, MAP_PRIVATE
+ (#) Anonymous mapping, MAP_PRIVATE
 
 	In the MMU case: VM regions backed by arbitrary pages; copy-on-write
 	across fork.
@@ -24,14 +24,14 @@ and it's also much more restricted in the latter case:
 	In the no-MMU case: VM regions backed by arbitrary contiguous runs of
 	pages.
 
- (*) Anonymous mapping, MAP_SHARED
+ (#) Anonymous mapping, MAP_SHARED
 
 	These behave very much like private mappings, except that they're
 	shared across fork() or clone() without CLONE_VM in the MMU case. Since
 	the no-MMU case doesn't support these, behaviour is identical to
 	MAP_PRIVATE there.
 
- (*) File, MAP_PRIVATE, PROT_READ / PROT_EXEC, !PROT_WRITE
+ (#) File, MAP_PRIVATE, PROT_READ / PROT_EXEC, !PROT_WRITE
 
 	In the MMU case: VM regions backed by pages read from file; changes to
 	the underlying file are reflected in the mapping; copied across fork.
@@ -56,7 +56,7 @@ and it's also much more restricted in the latter case:
 	   are visible in other processes (no MMU protection), but should not
 	   happen.
 
- (*) File, MAP_PRIVATE, PROT_READ / PROT_EXEC, PROT_WRITE
+ (#) File, MAP_PRIVATE, PROT_READ / PROT_EXEC, PROT_WRITE
 
 	In the MMU case: like the non-PROT_WRITE case, except that the pages in
 	question get copied before the write actually happens. From that point
@@ -66,7 +66,7 @@ and it's also much more restricted in the latter case:
 	In the no-MMU case: works much like the non-PROT_WRITE case, except
 	that a copy is always taken and never shared.
 
- (*) Regular file / blockdev, MAP_SHARED, PROT_READ / PROT_EXEC / PROT_WRITE
+ (#) Regular file / blockdev, MAP_SHARED, PROT_READ / PROT_EXEC / PROT_WRITE
 
 	In the MMU case: VM regions backed by pages read from file; changes to
 	pages written back to file; writes to file reflected into pages backing
@@ -74,7 +74,7 @@ and it's also much more restricted in the latter case:
 
 	In the no-MMU case: not supported.
 
- (*) Memory backed regular file, MAP_SHARED, PROT_READ / PROT_EXEC / PROT_WRITE
+ (#) Memory backed regular file, MAP_SHARED, PROT_READ / PROT_EXEC / PROT_WRITE
 
 	In the MMU case: As for ordinary regular files.
 
@@ -85,7 +85,7 @@ and it's also much more restricted in the latter case:
 	as for the MMU case. If the filesystem does not provide any such
 	support, then the mapping request will be denied.
 
- (*) Memory backed blockdev, MAP_SHARED, PROT_READ / PROT_EXEC / PROT_WRITE
+ (#) Memory backed blockdev, MAP_SHARED, PROT_READ / PROT_EXEC / PROT_WRITE
 
 	In the MMU case: As for ordinary regular files.
 
@@ -94,7 +94,7 @@ and it's also much more restricted in the latter case:
 	truncate being called. The ramdisk driver could do this if it allocated
 	all its memory as a contiguous array upfront.
 
- (*) Memory backed chardev, MAP_SHARED, PROT_READ / PROT_EXEC / PROT_WRITE
+ (#) Memory backed chardev, MAP_SHARED, PROT_READ / PROT_EXEC / PROT_WRITE
 
 	In the MMU case: As for ordinary regular files.
 
@@ -105,21 +105,20 @@ and it's also much more restricted in the latter case:
 	provide any such support, then the mapping request will be denied.
 
 
-============================
 FURTHER NOTES ON NO-MMU MMAP
 ============================
 
- (*) A request for a private mapping of a file may return a buffer that is not
+ (#) A request for a private mapping of a file may return a buffer that is not
      page-aligned.  This is because XIP may take place, and the data may not be
      paged aligned in the backing store.
 
- (*) A request for an anonymous mapping will always be page aligned.  If
+ (#) A request for an anonymous mapping will always be page aligned.  If
      possible the size of the request should be a power of two otherwise some
      of the space may be wasted as the kernel must allocate a power-of-2
      granule but will only discard the excess if appropriately configured as
      this has an effect on fragmentation.
 
- (*) The memory allocated by a request for an anonymous mapping will normally
+ (#) The memory allocated by a request for an anonymous mapping will normally
      be cleared by the kernel before being returned in accordance with the
      Linux man pages (ver 2.22 or later).
 
@@ -145,23 +144,22 @@ FURTHER NOTES ON NO-MMU MMAP
      uClibc uses this to speed up malloc(), and the ELF-FDPIC binfmt uses this
      to allocate the brk and stack region.
 
- (*) A list of all the private copy and anonymous mappings on the system is
+ (#) A list of all the private copy and anonymous mappings on the system is
      visible through /proc/maps in no-MMU mode.
 
- (*) A list of all the mappings in use by a process is visible through
+ (#) A list of all the mappings in use by a process is visible through
      /proc/<pid>/maps in no-MMU mode.
 
- (*) Supplying MAP_FIXED or a requesting a particular mapping address will
+ (#) Supplying MAP_FIXED or a requesting a particular mapping address will
      result in an error.
 
- (*) Files mapped privately usually have to have a read method provided by the
+ (#) Files mapped privately usually have to have a read method provided by the
      driver or filesystem so that the contents can be read into the memory
      allocated if mmap() chooses not to map the backing device directly. An
      error will result if they don't. This is most likely to be encountered
      with character device files, pipes, fifos and sockets.
 
 
-==========================
 INTERPROCESS SHARED MEMORY
 ==========================
 
@@ -170,7 +168,6 @@ mode.  The former through the usual mechanism, the latter through files created
 on ramfs or tmpfs mounts.
 
 
-=======
 FUTEXES
 =======
 
@@ -180,12 +177,11 @@ mappings made by a process or if the mapping in which the address lies does not
 support futexes (such as an I/O chardev mapping).
 
 
-=============
 NO-MMU MREMAP
 =============
 
 The mremap() function is partially supported.  It may change the size of a
-mapping, and may move it[*] if MREMAP_MAYMOVE is specified and if the new size
+mapping, and may move it [#]_ if MREMAP_MAYMOVE is specified and if the new size
 of the mapping exceeds the size of the slab object currently occupied by the
 memory to which the mapping refers, or if a smaller slab object could be used.
 
@@ -200,10 +196,9 @@ a previously mapped object.  It may not be used to create holes in existing
 mappings, move parts of existing mappings or resize parts of mappings.  It must
 act on a complete mapping.
 
-[*] Not currently supported.
+.. [#] Not currently supported.
 
 
-============================================
 PROVIDING SHAREABLE CHARACTER DEVICE SUPPORT
 ============================================
 
@@ -235,7 +230,7 @@ direct the call to the device-specific driver. Under such circumstances, the
 mapping request will be rejected if NOMMU_MAP_COPY is not specified, and a
 copy mapped otherwise.
 
-IMPORTANT NOTE:
+.. important::
 
 	Some types of device may present a different appearance to anyone
 	looking at them in certain modes. Flash chips can be like this; for
@@ -249,7 +244,6 @@ IMPORTANT NOTE:
 	circumstances!
 
 
-==============================================
 PROVIDING SHAREABLE MEMORY-BACKED FILE SUPPORT
 ==============================================
 
@@ -267,7 +261,6 @@ Memory backed devices are indicated by the mapping's backing device info having
 the memory_backed flag set.
 
 
-========================================
 PROVIDING SHAREABLE BLOCK DEVICE SUPPORT
 ========================================
 
@@ -276,7 +269,6 @@ character devices. If there isn't a real device underneath, then the driver
 should allocate sufficient contiguous memory to honour any supported mapping.
 
 
-=================================
 ADJUSTING PAGE TRIMMING BEHAVIOUR
 =================================
 
@@ -288,4 +280,4 @@ allocator.  In order to retain finer-grained control over fragmentation, this
 behaviour can either be disabled completely, or bumped up to a higher page
 watermark where trimming begins.
 
-Page trimming behaviour is configurable via the sysctl `vm.nr_trim_pages'.
+Page trimming behaviour is configurable via the sysctl ``vm.nr_trim_pages``.
-- 
2.9.4

^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [PATCH v2 22/29] nommu-mmap.txt: don't use all upper case on titles
  2017-06-17 15:26 [PATCH v2 01/29] IPMI.txt: standardize document format Mauro Carvalho Chehab
                   ` (19 preceding siblings ...)
  2017-06-17 15:26 ` [PATCH v2 21/29] nommu-mmap.txt: " Mauro Carvalho Chehab
@ 2017-06-17 15:26 ` Mauro Carvalho Chehab
  2017-06-17 15:26 ` [PATCH v2 23/29] ntb.txt: standardize document format Mauro Carvalho Chehab
                   ` (7 subsequent siblings)
  28 siblings, 0 replies; 35+ messages in thread
From: Mauro Carvalho Chehab @ 2017-06-17 15:26 UTC (permalink / raw)
  To: Linux Doc Mailing List
  Cc: Mauro Carvalho Chehab, Mauro Carvalho Chehab, linux-kernel,
	Jonathan Corbet

This file is almost in the standard format we're adopting for
other documentation text files. Yet, it uses upper case on
titles.

So, in order to make chapter names uniform, adjust caps on
titles.

Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
---
 Documentation/nommu-mmap.txt | 18 +++++++++---------
 1 file changed, 9 insertions(+), 9 deletions(-)

diff --git a/Documentation/nommu-mmap.txt b/Documentation/nommu-mmap.txt
index 39a62ab0f50a..69556f0d494b 100644
--- a/Documentation/nommu-mmap.txt
+++ b/Documentation/nommu-mmap.txt
@@ -1,5 +1,5 @@
 =============================
-NO-MMU MEMORY MAPPING SUPPORT
+No-MMU memory mapping support
 =============================
 
 The kernel has limited support for memory mapping under no-MMU conditions, such
@@ -105,7 +105,7 @@ and it's also much more restricted in the latter case:
 	provide any such support, then the mapping request will be denied.
 
 
-FURTHER NOTES ON NO-MMU MMAP
+Further notes on no-MMU MMAP
 ============================
 
  (#) A request for a private mapping of a file may return a buffer that is not
@@ -160,7 +160,7 @@ FURTHER NOTES ON NO-MMU MMAP
      with character device files, pipes, fifos and sockets.
 
 
-INTERPROCESS SHARED MEMORY
+Interprocess shared memory
 ==========================
 
 Both SYSV IPC SHM shared memory and POSIX shared memory is supported in NOMMU
@@ -168,7 +168,7 @@ mode.  The former through the usual mechanism, the latter through files created
 on ramfs or tmpfs mounts.
 
 
-FUTEXES
+Futexes
 =======
 
 Futexes are supported in NOMMU mode if the arch supports them.  An error will
@@ -177,7 +177,7 @@ mappings made by a process or if the mapping in which the address lies does not
 support futexes (such as an I/O chardev mapping).
 
 
-NO-MMU MREMAP
+No-MMU mremap
 =============
 
 The mremap() function is partially supported.  It may change the size of a
@@ -199,7 +199,7 @@ act on a complete mapping.
 .. [#] Not currently supported.
 
 
-PROVIDING SHAREABLE CHARACTER DEVICE SUPPORT
+Providing shareable character device support
 ============================================
 
 To provide shareable character device support, a driver must provide a
@@ -244,7 +244,7 @@ copy mapped otherwise.
 	circumstances!
 
 
-PROVIDING SHAREABLE MEMORY-BACKED FILE SUPPORT
+Providing shareable memory-backed file support
 ==============================================
 
 Provision of shared mappings on memory backed files is similar to the provision
@@ -261,7 +261,7 @@ Memory backed devices are indicated by the mapping's backing device info having
 the memory_backed flag set.
 
 
-PROVIDING SHAREABLE BLOCK DEVICE SUPPORT
+Providing shareable block device support
 ========================================
 
 Provision of shared mappings on block device files is exactly the same as for
@@ -269,7 +269,7 @@ character devices. If there isn't a real device underneath, then the driver
 should allocate sufficient contiguous memory to honour any supported mapping.
 
 
-ADJUSTING PAGE TRIMMING BEHAVIOUR
+Adjusting page trimming behaviour
 =================================
 
 NOMMU mmap automatically rounds up to the nearest power-of-2 number of pages
-- 
2.9.4

^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [PATCH v2 23/29] ntb.txt: standardize document format
  2017-06-17 15:26 [PATCH v2 01/29] IPMI.txt: standardize document format Mauro Carvalho Chehab
                   ` (20 preceding siblings ...)
  2017-06-17 15:26 ` [PATCH v2 22/29] nommu-mmap.txt: don't use all upper case on titles Mauro Carvalho Chehab
@ 2017-06-17 15:26 ` Mauro Carvalho Chehab
  2017-06-17 15:26 ` [PATCH v2 24/29] numastat.txt: " Mauro Carvalho Chehab
                   ` (6 subsequent siblings)
  28 siblings, 0 replies; 35+ messages in thread
From: Mauro Carvalho Chehab @ 2017-06-17 15:26 UTC (permalink / raw)
  To: Linux Doc Mailing List
  Cc: Mauro Carvalho Chehab, Mauro Carvalho Chehab, linux-kernel,
	Jonathan Corbet

Each text file under Documentation follows a different
format. Some don't even have titles!

Change its representation to follow the adopted standard,
using ReST markups for it to be parseable by Sphinx.

This file is using some other markup notation (likely, markdown).
Convert it to the adopted standard:

  - Adjust the header level markup;
  - Adjust indentation for debugfs files and module parameters.

Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
---
 Documentation/ntb.txt | 55 ++++++++++++++++++++++++++++++++++-----------------
 1 file changed, 37 insertions(+), 18 deletions(-)

diff --git a/Documentation/ntb.txt b/Documentation/ntb.txt
index 1d9bbabb6c79..e4771e5c2ad7 100644
--- a/Documentation/ntb.txt
+++ b/Documentation/ntb.txt
@@ -1,4 +1,6 @@
-# NTB Drivers
+===========
+NTB Drivers
+===========
 
 NTB (Non-Transparent Bridge) is a type of PCI-Express bridge chip that connects
 the separate memory systems of two computers to the same PCI-Express fabric.
@@ -10,7 +12,8 @@ fixed address.  Doorbell registers provide a way for peers to send interrupt
 events.  Memory windows allow translated read and write access to the peer
 memory.
 
-## NTB Core Driver (ntb)
+NTB Core Driver (ntb)
+=====================
 
 The NTB core driver defines an api wrapping the common feature set, and allows
 clients interested in NTB features to discover NTB the devices supported by
@@ -18,7 +21,8 @@ hardware drivers.  The term "client" is used here to mean an upper layer
 component making use of the NTB api.  The term "driver," or "hardware driver,"
 is used here to mean a driver for a specific vendor and model of NTB hardware.
 
-## NTB Client Drivers
+NTB Client Drivers
+==================
 
 NTB client drivers should register with the NTB core driver.  After
 registering, the client probe and remove functions will be called appropriately
@@ -26,7 +30,8 @@ as ntb hardware, or hardware drivers, are inserted and removed.  The
 registration uses the Linux Device framework, so it should feel familiar to
 anyone who has written a pci driver.
 
-### NTB Transport Client (ntb\_transport) and NTB Netdev (ntb\_netdev)
+NTB Transport Client (ntb\_transport) and NTB Netdev (ntb\_netdev)
+------------------------------------------------------------------
 
 The primary client for NTB is the Transport client, used in tandem with NTB
 Netdev.  These drivers function together to create a logical link to the peer,
@@ -37,7 +42,8 @@ Transport queue pair.  Network data is copied between socket buffers and the
 Transport queue pair buffer.  The Transport client may be used for other things
 besides Netdev, however no other applications have yet been written.
 
-### NTB Ping Pong Test Client (ntb\_pingpong)
+NTB Ping Pong Test Client (ntb\_pingpong)
+-----------------------------------------
 
 The Ping Pong test client serves as a demonstration to exercise the doorbell
 and scratchpad registers of NTB hardware, and as an example simple NTB client.
@@ -64,7 +70,8 @@ Module Parameters:
 * dyndbg - It is suggested to specify dyndbg=+p when loading this module, and
 	then to observe debugging output on the console.
 
-### NTB Tool Test Client (ntb\_tool)
+NTB Tool Test Client (ntb\_tool)
+--------------------------------
 
 The Tool test client serves for debugging, primarily, ntb hardware and drivers.
 The Tool provides access through debugfs for reading, setting, and clearing the
@@ -74,48 +81,60 @@ The Tool does not currently have any module parameters.
 
 Debugfs Files:
 
-* *debugfs*/ntb\_tool/*hw*/ - A directory in debugfs will be created for each
+* *debugfs*/ntb\_tool/*hw*/
+	A directory in debugfs will be created for each
 	NTB device probed by the tool.  This directory is shortened to *hw*
 	below.
-* *hw*/db - This file is used to read, set, and clear the local doorbell.  Not
+* *hw*/db
+	This file is used to read, set, and clear the local doorbell.  Not
 	all operations may be supported by all hardware.  To read the doorbell,
 	read the file.  To set the doorbell, write `s` followed by the bits to
 	set (eg: `echo 's 0x0101' > db`).  To clear the doorbell, write `c`
 	followed by the bits to clear.
-* *hw*/mask - This file is used to read, set, and clear the local doorbell mask.
+* *hw*/mask
+	This file is used to read, set, and clear the local doorbell mask.
 	See *db* for details.
-* *hw*/peer\_db - This file is used to read, set, and clear the peer doorbell.
+* *hw*/peer\_db
+	This file is used to read, set, and clear the peer doorbell.
 	See *db* for details.
-* *hw*/peer\_mask - This file is used to read, set, and clear the peer doorbell
+* *hw*/peer\_mask
+	This file is used to read, set, and clear the peer doorbell
 	mask.  See *db* for details.
-* *hw*/spad - This file is used to read and write local scratchpads.  To read
+* *hw*/spad
+	This file is used to read and write local scratchpads.  To read
 	the values of all scratchpads, read the file.  To write values, write a
 	series of pairs of scratchpad number and value
 	(eg: `echo '4 0x123 7 0xabc' > spad`
 	# to set scratchpads `4` and `7` to `0x123` and `0xabc`, respectively).
-* *hw*/peer\_spad - This file is used to read and write peer scratchpads.  See
+* *hw*/peer\_spad
+	This file is used to read and write peer scratchpads.  See
 	*spad* for details.
 
-## NTB Hardware Drivers
+NTB Hardware Drivers
+====================
 
 NTB hardware drivers should register devices with the NTB core driver.  After
 registering, clients probe and remove functions will be called.
 
-### NTB Intel Hardware Driver (ntb\_hw\_intel)
+NTB Intel Hardware Driver (ntb\_hw\_intel)
+------------------------------------------
 
 The Intel hardware driver supports NTB on Xeon and Atom CPUs.
 
 Module Parameters:
 
-* b2b\_mw\_idx - If the peer ntb is to be accessed via a memory window, then use
+* b2b\_mw\_idx
+	If the peer ntb is to be accessed via a memory window, then use
 	this memory window to access the peer ntb.  A value of zero or positive
 	starts from the first mw idx, and a negative value starts from the last
 	mw idx.  Both sides MUST set the same value here!  The default value is
 	`-1`.
-* b2b\_mw\_share - If the peer ntb is to be accessed via a memory window, and if
+* b2b\_mw\_share
+	If the peer ntb is to be accessed via a memory window, and if
 	the memory window is large enough, still allow the client to use the
 	second half of the memory window for address translation to the peer.
-* xeon\_b2b\_usd\_bar2\_addr64 - If using B2B topology on Xeon hardware, use
+* xeon\_b2b\_usd\_bar2\_addr64
+	If using B2B topology on Xeon hardware, use
 	this 64 bit address on the bus between the NTB devices for the window
 	at BAR2, on the upstream side of the link.
 * xeon\_b2b\_usd\_bar4\_addr64 - See *xeon\_b2b\_bar2\_addr64*.
-- 
2.9.4

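[Editorial note: the ntb_tool debugfs usage described in the patch above can be sketched from the shell. The commands below use a temporary directory as a stand-in for the real *debugfs*/ntb_tool/*hw*/ directory, since the actual files only exist with NTB hardware and the ntb_tool module loaded; the file names and the `s`/`c` command syntax are taken from the document, while the mock directory is purely illustrative. On real hardware the driver interprets the written string; in this mock it is simply stored.]

```shell
# Stand-in for <debugfs>/ntb_tool/<hw>/ -- hypothetical mock, no NTB hardware needed.
hw=$(mktemp -d)

# Set doorbell bits 0 and 8 (on real hardware, 's' sets the given bits).
echo 's 0x0101' > "$hw/db"
cat "$hw/db"

# Write scratchpads 4 and 7 as number/value pairs, as the document shows.
echo '4 0x123 7 0xabc' > "$hw/spad"
cat "$hw/spad"

rm -r "$hw"
```

Against the real debugfs files, reading `db` back would report the hardware doorbell state rather than echoing the command string.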

* [PATCH v2 24/29] numastat.txt: standardize document format
  2017-06-17 15:26 [PATCH v2 01/29] IPMI.txt: standardize document format Mauro Carvalho Chehab
                   ` (21 preceding siblings ...)
  2017-06-17 15:26 ` [PATCH v2 23/29] ntb.txt: standardize document format Mauro Carvalho Chehab
@ 2017-06-17 15:26 ` Mauro Carvalho Chehab
  2017-06-17 15:26 ` [PATCH v2 25/29] padata.txt: " Mauro Carvalho Chehab
                   ` (5 subsequent siblings)
  28 siblings, 0 replies; 35+ messages in thread
From: Mauro Carvalho Chehab @ 2017-06-17 15:26 UTC (permalink / raw)
  To: Linux Doc Mailing List
  Cc: Mauro Carvalho Chehab, Mauro Carvalho Chehab, linux-kernel,
	Jonathan Corbet

Each text file under Documentation follows a different
format. Some don't even have titles!

Change its representation to follow the adopted standard,
using ReST markups for it to be parseable by Sphinx:

- mark the document title;
- mark the table as such.

Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
---
 Documentation/numastat.txt | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/Documentation/numastat.txt b/Documentation/numastat.txt
index 520327790d54..aaf1667489f8 100644
--- a/Documentation/numastat.txt
+++ b/Documentation/numastat.txt
@@ -1,10 +1,12 @@
-
+===============================
 Numa policy hit/miss statistics
+===============================
 
 /sys/devices/system/node/node*/numastat
 
 All units are pages. Hugepages have separate counters.
 
+=============== ============================================================
 numa_hit	A process wanted to allocate memory from this node,
 		and succeeded.
 
@@ -20,6 +22,7 @@ other_node	A process ran on this node and got memory from another node.
 
 interleave_hit 	Interleaving wanted to allocate from this node
 		and succeeded.
+=============== ============================================================
 
 For easier reading you can use the numastat utility from the numactl package
 (http://oss.sgi.com/projects/libnuma/). Note that it only works
-- 
2.9.4

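[Editorial note: the counters tabulated in the patch above can be summed directly from /sys/devices/system/node/node*/numastat when the numastat utility is not installed. The sketch below uses a here-document as a stand-in for one node's file, with made-up counter values, so the awk command shape can be shown on any machine; only the counter names come from the document.]

```shell
# Stand-in for /sys/devices/system/node/node0/numastat with illustrative values.
f=$(mktemp)
cat > "$f" <<'EOF'
numa_hit 1000
numa_miss 20
numa_foreign 20
interleave_hit 5
local_node 990
other_node 30
EOF

# Total pages this node was asked to allocate (hits plus misses).
awk '$1 == "numa_hit" || $1 == "numa_miss" { total += $2 }
     END { print "hit+miss pages:", total }' "$f"

rm -f "$f"
```

All units are pages, as the document notes, so the sum is a page count, not bytes.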

* [PATCH v2 25/29] padata.txt: standardize document format
  2017-06-17 15:26 [PATCH v2 01/29] IPMI.txt: standardize document format Mauro Carvalho Chehab
                   ` (22 preceding siblings ...)
  2017-06-17 15:26 ` [PATCH v2 24/29] numastat.txt: " Mauro Carvalho Chehab
@ 2017-06-17 15:26 ` Mauro Carvalho Chehab
  2017-06-17 15:26 ` [PATCH v2 26/29] parport-lowlevel.txt: " Mauro Carvalho Chehab
                   ` (4 subsequent siblings)
  28 siblings, 0 replies; 35+ messages in thread
From: Mauro Carvalho Chehab @ 2017-06-17 15:26 UTC (permalink / raw)
  To: Linux Doc Mailing List
  Cc: Mauro Carvalho Chehab, Mauro Carvalho Chehab, linux-kernel,
	Jonathan Corbet, Steffen Klassert, linux-crypto

Each text file under Documentation follows a different
format. Some don't even have titles!

Change its representation to follow the adopted standard,
using ReST markups for it to be parseable by Sphinx:

- mark document title;
- mark literal blocks.

Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
---
 Documentation/padata.txt | 27 +++++++++++++++------------
 1 file changed, 15 insertions(+), 12 deletions(-)

diff --git a/Documentation/padata.txt b/Documentation/padata.txt
index 7ddfe216a0aa..b103d0c82000 100644
--- a/Documentation/padata.txt
+++ b/Documentation/padata.txt
@@ -1,5 +1,8 @@
+=======================================
 The padata parallel execution mechanism
-Last updated for 2.6.36
+=======================================
+
+:Last updated: for 2.6.36
 
 Padata is a mechanism by which the kernel can farm work out to be done in
 parallel on multiple CPUs while retaining the ordering of tasks.  It was
@@ -9,7 +12,7 @@ those packets.  The crypto developers made a point of writing padata in a
 sufficiently general fashion that it could be put to other uses as well.
 
 The first step in using padata is to set up a padata_instance structure for
-overall control of how tasks are to be run:
+overall control of how tasks are to be run::
 
     #include <linux/padata.h>
 
@@ -24,7 +27,7 @@ The workqueue wq is where the work will actually be done; it should be
 a multithreaded queue, naturally.
 
 To allocate a padata instance with the cpu_possible_mask for both
-cpumasks this helper function can be used:
+cpumasks this helper function can be used::
 
     struct padata_instance *padata_alloc_possible(struct workqueue_struct *wq);
 
@@ -36,7 +39,7 @@ it is legal to supply a cpumask to padata that contains offline CPUs.
 Once an offline CPU in the user supplied cpumask comes online, padata
 is going to use it.
 
-There are functions for enabling and disabling the instance:
+There are functions for enabling and disabling the instance::
 
     int padata_start(struct padata_instance *pinst);
     void padata_stop(struct padata_instance *pinst);
@@ -48,7 +51,7 @@ padata cpumask contains no active CPU (flag not set).
 padata_stop clears the flag and blocks until the padata instance
 is unused.
 
-The list of CPUs to be used can be adjusted with these functions:
+The list of CPUs to be used can be adjusted with these functions::
 
     int padata_set_cpumasks(struct padata_instance *pinst,
 			    cpumask_var_t pcpumask,
@@ -71,12 +74,12 @@ padata_add_cpu/padata_remove_cpu are used. cpu specifies the CPU to add or
 remove and mask is one of PADATA_CPU_SERIAL, PADATA_CPU_PARALLEL.
 
 If a user is interested in padata cpumask changes, he can register to
-the padata cpumask change notifier:
+the padata cpumask change notifier::
 
     int padata_register_cpumask_notifier(struct padata_instance *pinst,
 					 struct notifier_block *nblock);
 
-To unregister from that notifier:
+To unregister from that notifier::
 
     int padata_unregister_cpumask_notifier(struct padata_instance *pinst,
 					   struct notifier_block *nblock);
@@ -84,7 +87,7 @@ To unregister from that notifier:
 The padata cpumask change notifier notifies about changes of the usable
 cpumasks, i.e. the subset of active CPUs in the user supplied cpumask.
 
-Padata calls the notifier chain with:
+Padata calls the notifier chain with::
 
     blocking_notifier_call_chain(&pinst->cpumask_change_notifier,
 				 notification_mask,
@@ -95,7 +98,7 @@ is one of PADATA_CPU_SERIAL, PADATA_CPU_PARALLEL and cpumask is a pointer
 to a struct padata_cpumask that contains the new cpumask information.
 
 Actually submitting work to the padata instance requires the creation of a
-padata_priv structure:
+padata_priv structure::
 
     struct padata_priv {
         /* Other stuff here... */
@@ -110,7 +113,7 @@ parallel() and serial() functions should be provided.  Those functions will
 be called in the process of getting the work done as we will see
 momentarily.
 
-The submission of work is done with:
+The submission of work is done with::
 
     int padata_do_parallel(struct padata_instance *pinst,
 		           struct padata_priv *padata, int cb_cpu);
@@ -138,7 +141,7 @@ need not be completed during this call, but, if parallel() leaves work
 outstanding, it should be prepared to be called again with a new job before
 the previous one completes.  When a task does complete, parallel() (or
 whatever function actually finishes the job) should inform padata of the
-fact with a call to:
+fact with a call to::
 
     void padata_do_serial(struct padata_priv *padata);
 
@@ -151,7 +154,7 @@ pains to ensure that tasks are completed in the order in which they were
 submitted.
 
 The one remaining function in the padata API should be called to clean up
-when a padata instance is no longer needed:
+when a padata instance is no longer needed::
 
     void padata_free(struct padata_instance *pinst);
 
-- 
2.9.4


* [PATCH v2 26/29] parport-lowlevel.txt: standardize document format
  2017-06-17 15:26 [PATCH v2 01/29] IPMI.txt: standardize document format Mauro Carvalho Chehab
                   ` (23 preceding siblings ...)
  2017-06-17 15:26 ` [PATCH v2 25/29] padata.txt: " Mauro Carvalho Chehab
@ 2017-06-17 15:26 ` Mauro Carvalho Chehab
  2017-06-17 15:26 ` [PATCH v2 27/29] percpu-rw-semaphore.txt: " Mauro Carvalho Chehab
                   ` (3 subsequent siblings)
  28 siblings, 0 replies; 35+ messages in thread
From: Mauro Carvalho Chehab @ 2017-06-17 15:26 UTC (permalink / raw)
  To: Linux Doc Mailing List
  Cc: Mauro Carvalho Chehab, Mauro Carvalho Chehab, linux-kernel,
	Jonathan Corbet, Sudip Mukherjee

Each text file under Documentation follows a different
format. This one uses a man-page-like approach.

Change its representation to be closer to the adopted standard,
using ReST markups for it to be parseable by Sphinx:

- Mark titles;
- Mark literals and literal blocks;
- Adjust indentation.

Still, the best would be to move its contents to kernel-docs.

Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
---
 Documentation/parport-lowlevel.txt | 1303 +++++++++++++++++++++++-------------
 1 file changed, 832 insertions(+), 471 deletions(-)

diff --git a/Documentation/parport-lowlevel.txt b/Documentation/parport-lowlevel.txt
index 120eb20dbb09..0633d70ffda7 100644
--- a/Documentation/parport-lowlevel.txt
+++ b/Documentation/parport-lowlevel.txt
@@ -1,11 +1,12 @@
+===============================
 PARPORT interface documentation
--------------------------------
+===============================
 
-Time-stamp: <2000-02-24 13:30:20 twaugh>
+:Time-stamp: <2000-02-24 13:30:20 twaugh>
 
 Described here are the following functions:
 
-Global functions:
+Global functions::
   parport_register_driver
   parport_unregister_driver
   parport_enumerate
@@ -31,7 +32,8 @@ Global functions:
   parport_set_timeout
 
 Port functions (can be overridden by low-level drivers):
-  SPP:
+
+  SPP::
     port->ops->read_data
     port->ops->write_data
     port->ops->read_status
@@ -43,23 +45,23 @@ Port functions (can be overridden by low-level drivers):
     port->ops->data_forward
     port->ops->data_reverse
 
-  EPP:
+  EPP::
     port->ops->epp_write_data
     port->ops->epp_read_data
     port->ops->epp_write_addr
     port->ops->epp_read_addr
 
-  ECP:
+  ECP::
     port->ops->ecp_write_data
     port->ops->ecp_read_data
     port->ops->ecp_write_addr
 
-  Other:
+  Other::
     port->ops->nibble_read_data
     port->ops->byte_read_data
     port->ops->compat_write_data
 
-The parport subsystem comprises 'parport' (the core port-sharing
+The parport subsystem comprises ``parport`` (the core port-sharing
 code), and a variety of low-level drivers that actually do the port
 accesses.  Each low-level driver handles a particular style of port
 (PC, Amiga, and so on).
@@ -70,14 +72,14 @@ into global functions and port functions.
 The global functions are mostly for communicating between the device
 driver and the parport subsystem: acquiring a list of available ports,
 claiming a port for exclusive use, and so on.  They also include
-'generic' functions for doing standard things that will work on any
+``generic`` functions for doing standard things that will work on any
 IEEE 1284-capable architecture.
 
 The port functions are provided by the low-level drivers, although the
-core parport module provides generic 'defaults' for some routines.
+core parport module provides generic ``defaults`` for some routines.
 The port functions can be split into three groups: SPP, EPP, and ECP.
 
-SPP (Standard Parallel Port) functions modify so-called 'SPP'
+SPP (Standard Parallel Port) functions modify so-called ``SPP``
 registers: data, status, and control.  The hardware may not actually
 have registers exactly like that, but the PC does and this interface is
 modelled after common PC implementations.  Other low-level drivers may
@@ -95,58 +97,63 @@ to cope with peripherals that only tenuously support IEEE 1284, a
 low-level driver specific function is provided, for altering 'fudge
 factors'.
 \f
-GLOBAL FUNCTIONS
-----------------
+Global functions
+================
 
 parport_register_driver - register a device driver with parport
------------------------
+---------------------------------------------------------------
 
 SYNOPSIS
+^^^^^^^^
 
-#include <linux/parport.h>
+::
 
-struct parport_driver {
-	const char *name;
-	void (*attach) (struct parport *);
-	void (*detach) (struct parport *);
-	struct parport_driver *next;
-};
-int parport_register_driver (struct parport_driver *driver);
+	#include <linux/parport.h>
+
+	struct parport_driver {
+		const char *name;
+		void (*attach) (struct parport *);
+		void (*detach) (struct parport *);
+		struct parport_driver *next;
+	};
+	int parport_register_driver (struct parport_driver *driver);
 
 DESCRIPTION
+^^^^^^^^^^^
 
 In order to be notified about parallel ports when they are detected,
 parport_register_driver should be called.  Your driver will
 immediately be notified of all ports that have already been detected,
 and of each new port as low-level drivers are loaded.
 
-A 'struct parport_driver' contains the textual name of your driver,
+A ``struct parport_driver`` contains the textual name of your driver,
 a pointer to a function to handle new ports, and a pointer to a
 function to handle ports going away due to a low-level driver
 unloading.  Ports will only be detached if they are not being used
 (i.e. there are no devices registered on them).
 
-The visible parts of the 'struct parport *' argument given to
-attach/detach are:
+The visible parts of the ``struct parport *`` argument given to
+attach/detach are::
 
-struct parport
-{
-	struct parport *next; /* next parport in list */
-	const char *name;     /* port's name */
-	unsigned int modes;   /* bitfield of hardware modes */
-	struct parport_device_info probe_info;
-			      /* IEEE1284 info */
-	int number;           /* parport index */
-	struct parport_operations *ops;
-	...
-};
+	struct parport
+	{
+		struct parport *next; /* next parport in list */
+		const char *name;     /* port's name */
+		unsigned int modes;   /* bitfield of hardware modes */
+		struct parport_device_info probe_info;
+				/* IEEE1284 info */
+		int number;           /* parport index */
+		struct parport_operations *ops;
+		...
+	};
 
 There are other members of the structure, but they should not be
 touched.
 
-The 'modes' member summarises the capabilities of the underlying
+The ``modes`` member summarises the capabilities of the underlying
 hardware.  It consists of flags which may be bitwise-ored together:
 
+  ============================= ===============================================
   PARPORT_MODE_PCSPP		IBM PC registers are available,
 				i.e. functions that act on data,
 				control and status registers are
@@ -169,297 +176,351 @@ hardware.  It consists of flags which may be bitwise-ored together:
 				GFP_DMA flag with kmalloc) to the
 				low-level driver in order to take
 				advantage of it.
+  ============================= ===============================================
 
-There may be other flags in 'modes' as well.
+There may be other flags in ``modes`` as well.
 
-The contents of 'modes' is advisory only.  For example, if the
-hardware is capable of DMA, and PARPORT_MODE_DMA is in 'modes', it
+The contents of ``modes`` is advisory only.  For example, if the
+hardware is capable of DMA, and PARPORT_MODE_DMA is in ``modes``, it
 doesn't necessarily mean that DMA will always be used when possible.
 Similarly, hardware that is capable of assisting ECP transfers won't
 necessarily be used.
 
 RETURN VALUE
+^^^^^^^^^^^^
 
 Zero on success, otherwise an error code.
 
 ERRORS
+^^^^^^
 
 None. (Can it fail? Why return int?)
 
 EXAMPLE
+^^^^^^^
 
-static void lp_attach (struct parport *port)
-{
-	...
-	private = kmalloc (...);
-	dev[count++] = parport_register_device (...);
-	...
-}
+::
 
-static void lp_detach (struct parport *port)
-{
-	...
-}
+	static void lp_attach (struct parport *port)
+	{
+		...
+		private = kmalloc (...);
+		dev[count++] = parport_register_device (...);
+		...
+	}
+
+	static void lp_detach (struct parport *port)
+	{
+		...
+	}
 
-static struct parport_driver lp_driver = {
-	"lp",
-	lp_attach,
-	lp_detach,
-	NULL /* always put NULL here */
-};
+	static struct parport_driver lp_driver = {
+		"lp",
+		lp_attach,
+		lp_detach,
+		NULL /* always put NULL here */
+	};
 
-int lp_init (void)
-{
-	...
-	if (parport_register_driver (&lp_driver)) {
-		/* Failed; nothing we can do. */
-		return -EIO;
+	int lp_init (void)
+	{
+		...
+		if (parport_register_driver (&lp_driver)) {
+			/* Failed; nothing we can do. */
+			return -EIO;
+		}
+		...
 	}
-	...
-}
+
 
 SEE ALSO
+^^^^^^^^
 
 parport_unregister_driver, parport_register_device, parport_enumerate
-\f
+
+
+
 parport_unregister_driver - tell parport to forget about this driver
--------------------------
+--------------------------------------------------------------------
 
 SYNOPSIS
+^^^^^^^^
 
-#include <linux/parport.h>
+::
 
-struct parport_driver {
-	const char *name;
-	void (*attach) (struct parport *);
-	void (*detach) (struct parport *);
-	struct parport_driver *next;
-};
-void parport_unregister_driver (struct parport_driver *driver);
+	#include <linux/parport.h>
+
+	struct parport_driver {
+		const char *name;
+		void (*attach) (struct parport *);
+		void (*detach) (struct parport *);
+		struct parport_driver *next;
+	};
+	void parport_unregister_driver (struct parport_driver *driver);
 
 DESCRIPTION
+^^^^^^^^^^^
 
 This tells parport not to notify the device driver of new ports or of
 ports going away.  Registered devices belonging to that driver are NOT
 unregistered: parport_unregister_device must be used for each one.
 
 EXAMPLE
+^^^^^^^
 
-void cleanup_module (void)
-{
-	...
-	/* Stop notifications. */
-	parport_unregister_driver (&lp_driver);
+::
 
-	/* Unregister devices. */
-	for (i = 0; i < NUM_DEVS; i++)
-		parport_unregister_device (dev[i]);
-	...
-}
+	void cleanup_module (void)
+	{
+		...
+		/* Stop notifications. */
+		parport_unregister_driver (&lp_driver);
+
+		/* Unregister devices. */
+		for (i = 0; i < NUM_DEVS; i++)
+			parport_unregister_device (dev[i]);
+		...
+	}
 
 SEE ALSO
+^^^^^^^^
 
 parport_register_driver, parport_enumerate
-\f
+
+
+
 parport_enumerate - retrieve a list of parallel ports (DEPRECATED)
------------------
+------------------------------------------------------------------
 
 SYNOPSIS
+^^^^^^^^
 
-#include <linux/parport.h>
+::
 
-struct parport *parport_enumerate (void);
+	#include <linux/parport.h>
+
+	struct parport *parport_enumerate (void);
 
 DESCRIPTION
+^^^^^^^^^^^
 
 Retrieve the first of a list of valid parallel ports for this machine.
-Successive parallel ports can be found using the 'struct parport
-*next' element of the 'struct parport *' that is returned.  If 'next'
+Successive parallel ports can be found using the ``struct parport
+*next`` element of the ``struct parport *`` that is returned.  If ``next``
 is NULL, there are no more parallel ports in the list.  The number of
 ports in the list will not exceed PARPORT_MAX.
 
 RETURN VALUE
+^^^^^^^^^^^^
 
-A 'struct parport *' describing a valid parallel port for the machine,
+A ``struct parport *`` describing a valid parallel port for the machine,
 or NULL if there are none.
 
 ERRORS
+^^^^^^
 
 This function can return NULL to indicate that there are no parallel
 ports to use.
 
 EXAMPLE
+^^^^^^^
 
-int detect_device (void)
-{
-	struct parport *port;
+::
+
+	int detect_device (void)
+	{
+		struct parport *port;
+
+		for (port = parport_enumerate ();
+		port != NULL;
+		port = port->next) {
+			/* Try to detect a device on the port... */
+			...
+		}
+		}
 
-	for (port = parport_enumerate ();
-	     port != NULL;
-	     port = port->next) {
-		/* Try to detect a device on the port... */
 		...
-             }
 	}
 
-	...
-}
-
 NOTES
+^^^^^
 
 parport_enumerate is deprecated; parport_register_driver should be
 used instead.
 
 SEE ALSO
+^^^^^^^^
 
 parport_register_driver, parport_unregister_driver
-\f
+
+
+
 parport_register_device - register to use a port
------------------------
+------------------------------------------------
 
 SYNOPSIS
+^^^^^^^^
 
-#include <linux/parport.h>
+::
 
-typedef int (*preempt_func) (void *handle);
-typedef void (*wakeup_func) (void *handle);
-typedef int (*irq_func) (int irq, void *handle, struct pt_regs *);
+	#include <linux/parport.h>
 
-struct pardevice *parport_register_device(struct parport *port,
-                                          const char *name,
-                                          preempt_func preempt,
-                                          wakeup_func wakeup,
-                                          irq_func irq,
-                                          int flags,
-                                          void *handle);
+	typedef int (*preempt_func) (void *handle);
+	typedef void (*wakeup_func) (void *handle);
+	typedef int (*irq_func) (int irq, void *handle, struct pt_regs *);
+
+	struct pardevice *parport_register_device(struct parport *port,
+						  const char *name,
+						  preempt_func preempt,
+						  wakeup_func wakeup,
+						  irq_func irq,
+						  int flags,
+						  void *handle);
 
 DESCRIPTION
+^^^^^^^^^^^
 
 Use this function to register your device driver on a parallel port
-('port').  Once you have done that, you will be able to use
+(``port``).  Once you have done that, you will be able to use
 parport_claim and parport_release in order to use the port.
 
-The ('name') argument is the name of the device that appears in /proc
+The (``name``) argument is the name of the device that appears in /proc
 filesystem. The string must be valid for the whole lifetime of the
 device (until parport_unregister_device is called).
 
 This function will register three callbacks into your driver:
-'preempt', 'wakeup' and 'irq'.  Each of these may be NULL in order to
+``preempt``, ``wakeup`` and ``irq``.  Each of these may be NULL in order to
 indicate that you do not want a callback.
 
-When the 'preempt' function is called, it is because another driver
-wishes to use the parallel port.  The 'preempt' function should return
+When the ``preempt`` function is called, it is because another driver
+wishes to use the parallel port.  The ``preempt`` function should return
 non-zero if the parallel port cannot be released yet -- if zero is
 returned, the port is lost to another driver and the port must be
 re-claimed before use.
 
-The 'wakeup' function is called once another driver has released the
+The ``wakeup`` function is called once another driver has released the
 port and no other driver has yet claimed it.  You can claim the
-parallel port from within the 'wakeup' function (in which case the
+parallel port from within the ``wakeup`` function (in which case the
 claim is guaranteed to succeed), or choose not to if you don't need it
 now.
 
 If an interrupt occurs on the parallel port your driver has claimed,
-the 'irq' function will be called. (Write something about shared
+the ``irq`` function will be called. (Write something about shared
 interrupts here.)
 
-The 'handle' is a pointer to driver-specific data, and is passed to
+The ``handle`` is a pointer to driver-specific data, and is passed to
 the callback functions.
 
-'flags' may be a bitwise combination of the following flags:
+``flags`` may be a bitwise combination of the following flags:
 
+  ===================== =================================================
         Flag            Meaning
+  ===================== =================================================
   PARPORT_DEV_EXCL	The device cannot share the parallel port at all.
 			Use this only when absolutely necessary.
+  ===================== =================================================
 
 The typedefs are not actually defined -- they are only shown in order
 to make the function prototype more readable.
 
-The visible parts of the returned 'struct pardevice' are:
+The visible parts of the returned ``struct pardevice`` are::
 
-struct pardevice {
-	struct parport *port;	/* Associated port */
-	void *private;		/* Device driver's 'handle' */
-	...
-};
+	struct pardevice {
+		struct parport *port;	/* Associated port */
+		void *private;		/* Device driver's 'handle' */
+		...
+	};
 
 RETURN VALUE
+^^^^^^^^^^^^
 
-A 'struct pardevice *': a handle to the registered parallel port
+A ``struct pardevice *``: a handle to the registered parallel port
 device that can be used for parport_claim, parport_release, etc.
 
 ERRORS
+^^^^^^
 
 A return value of NULL indicates that there was a problem registering
 a device on that port.
 
 EXAMPLE
+^^^^^^^
 
-static int preempt (void *handle)
-{
-	if (busy_right_now)
-		return 1;
+::
 
-	must_reclaim_port = 1;
-	return 0;
-}
+	static int preempt (void *handle)
+	{
+		if (busy_right_now)
+			return 1;
 
-static void wakeup (void *handle)
-{
-	struct toaster *private = handle;
-	struct pardevice *dev = private->dev;
-	if (!dev) return; /* avoid races */
+		must_reclaim_port = 1;
+		return 0;
+	}
 
-	if (want_port)
-		parport_claim (dev);
-}
+	static void wakeup (void *handle)
+	{
+		struct toaster *private = handle;
+		struct pardevice *dev = private->dev;
+		if (!dev) return; /* avoid races */
 
-static int toaster_detect (struct toaster *private, struct parport *port)
-{
-	private->dev = parport_register_device (port, "toaster", preempt,
-					        wakeup, NULL, 0,
-						private);
-	if (!private->dev)
-		/* Couldn't register with parport. */
-		return -EIO;
+		if (want_port)
+			parport_claim (dev);
+	}
+
+	static int toaster_detect (struct toaster *private, struct parport *port)
+	{
+		private->dev = parport_register_device (port, "toaster", preempt,
+							wakeup, NULL, 0,
+							private);
+		if (!private->dev)
+			/* Couldn't register with parport. */
+			return -EIO;
 
-	must_reclaim_port = 0;
-	busy_right_now = 1;
-	parport_claim_or_block (private->dev);
-	...
-	/* Don't need the port while the toaster warms up. */
-	busy_right_now = 0;
-	...
-	busy_right_now = 1;
-	if (must_reclaim_port) {
-		parport_claim_or_block (private->dev);
 		must_reclaim_port = 0;
+		busy_right_now = 1;
+		parport_claim_or_block (private->dev);
+		...
+		/* Don't need the port while the toaster warms up. */
+		busy_right_now = 0;
+		...
+		busy_right_now = 1;
+		if (must_reclaim_port) {
+			parport_claim_or_block (private->dev);
+			must_reclaim_port = 0;
+		}
+		...
 	}
-	...
-}
 
 SEE ALSO
+^^^^^^^^
 
 parport_unregister_device, parport_claim
+
+
 \f
 parport_unregister_device - finish using a port
--------------------------
+-----------------------------------------------
 
-SYNPOPSIS
+SYNOPSIS
+^^^^^^^^
 
-#include <linux/parport.h>
+::
 
-void parport_unregister_device (struct pardevice *dev);
+	#include <linux/parport.h>
+
+	void parport_unregister_device (struct pardevice *dev);
 
 DESCRIPTION
+^^^^^^^^^^^
 
 This function is the opposite of parport_register_device.  After using
-parport_unregister_device, 'dev' is no longer a valid device handle.
+parport_unregister_device, ``dev`` is no longer a valid device handle.
 
 You should not unregister a device that is currently claimed, although
 if you do it will be released automatically.
 
 EXAMPLE
+^^^^^^^
+
+::
 
 	...
 	kfree (dev->private); /* before we lose the pointer */
@@ -467,460 +528,602 @@ EXAMPLE
 	...
 
 SEE ALSO
+^^^^^^^^
+
 
 parport_unregister_driver
 \f
 parport_claim, parport_claim_or_block - claim the parallel port for a device
--------------------------------------
+----------------------------------------------------------------------------
 
 SYNOPSIS
+^^^^^^^^
 
-#include <linux/parport.h>
+::
 
-int parport_claim (struct pardevice *dev);
-int parport_claim_or_block (struct pardevice *dev);
+	#include <linux/parport.h>
+
+	int parport_claim (struct pardevice *dev);
+	int parport_claim_or_block (struct pardevice *dev);
 
 DESCRIPTION
+^^^^^^^^^^^
 
 These functions attempt to gain control of the parallel port on which
-'dev' is registered.  'parport_claim' does not block, but
-'parport_claim_or_block' may do. (Put something here about blocking
+``dev`` is registered.  ``parport_claim`` does not block, but
+``parport_claim_or_block`` may do. (Put something here about blocking
 interruptibly or non-interruptibly.)
 
 You should not try to claim a port that you have already claimed.
 
 RETURN VALUE
+^^^^^^^^^^^^
 
 A return value of zero indicates that the port was successfully
 claimed, and the caller now has possession of the parallel port.
 
-If 'parport_claim_or_block' blocks before returning successfully, the
+If ``parport_claim_or_block`` blocks before returning successfully, the
 return value is positive.
 
 ERRORS
+^^^^^^
 
+========== ==========================================================
   -EAGAIN  The port is unavailable at the moment, but another attempt
            to claim it may succeed.
+========== ==========================================================
 
 SEE ALSO
+^^^^^^^^
+
 
 parport_release
 \f
 parport_release - release the parallel port
----------------
+-------------------------------------------
 
 SYNOPSIS
+^^^^^^^^
 
-#include <linux/parport.h>
+::
 
-void parport_release (struct pardevice *dev);
+	#include <linux/parport.h>
+
+	void parport_release (struct pardevice *dev);
 
 DESCRIPTION
+^^^^^^^^^^^
 
 Once a parallel port device has been claimed, it can be released using
-'parport_release'.  It cannot fail, but you should not release a
+``parport_release``.  It cannot fail, but you should not release a
 device that you do not have possession of.
 
 EXAMPLE
+^^^^^^^
 
-static size_t write (struct pardevice *dev, const void *buf,
-		     size_t len)
-{
-	...
-	written = dev->port->ops->write_ecp_data (dev->port, buf,
-						  len);
-	parport_release (dev);
-	...
-}
+::
+
+	static size_t write (struct pardevice *dev, const void *buf,
+			size_t len)
+	{
+		...
+		written = dev->port->ops->write_ecp_data (dev->port, buf,
+							len);
+		parport_release (dev);
+		...
+	}
 
 
 SEE ALSO
+^^^^^^^^
 
 change_mode, parport_claim, parport_claim_or_block, parport_yield
-\f
+
+
+
 parport_yield, parport_yield_blocking - temporarily release a parallel port
--------------------------------------
+---------------------------------------------------------------------------
 
 SYNOPSIS
+^^^^^^^^
 
-#include <linux/parport.h>
+::
 
-int parport_yield (struct pardevice *dev)
-int parport_yield_blocking (struct pardevice *dev);
+	#include <linux/parport.h>
+
+	int parport_yield (struct pardevice *dev)
+	int parport_yield_blocking (struct pardevice *dev);
 
 DESCRIPTION
+^^^^^^^^^^^
 
 When a driver has control of a parallel port, it may allow another
-driver to temporarily 'borrow' it.  'parport_yield' does not block;
-'parport_yield_blocking' may do.
+driver to temporarily 'borrow' it.  ``parport_yield`` does not block;
+``parport_yield_blocking`` may do.
 
 RETURN VALUE
+^^^^^^^^^^^^
 
 A return value of zero indicates that the caller still owns the port
 and the call did not block.
 
-A positive return value from 'parport_yield_blocking' indicates that
+A positive return value from ``parport_yield_blocking`` indicates that
 the caller still owns the port and the call blocked.
 
 A return value of -EAGAIN indicates that the caller no longer owns the
 port, and it must be re-claimed before use.
 
 ERRORS
+^^^^^^
 
+========= ==========================================================
   -EAGAIN  Ownership of the parallel port was given away.
+========= ==========================================================
 
 SEE ALSO
+^^^^^^^^
 
 parport_release
+
+
 \f
 parport_wait_peripheral - wait for status lines, up to 35ms
------------------------
+-----------------------------------------------------------
 
 SYNOPSIS
+^^^^^^^^
 
-#include <linux/parport.h>
+::
 
-int parport_wait_peripheral (struct parport *port,
-			     unsigned char mask,
-			     unsigned char val);
+	#include <linux/parport.h>
+
+	int parport_wait_peripheral (struct parport *port,
+				     unsigned char mask,
+				     unsigned char val);
 
 DESCRIPTION
+^^^^^^^^^^^
 
 Wait for the status lines in mask to match the values in val.
 
 RETURN VALUE
+^^^^^^^^^^^^
 
+======== ==========================================================
  -EINTR  a signal is pending
       0  the status lines in mask have values in val
       1  timed out while waiting (35ms elapsed)
+======== ==========================================================
 
 SEE ALSO
+^^^^^^^^
 
 parport_poll_peripheral
+
+
 \f
 parport_poll_peripheral - wait for status lines, in usec
------------------------
+--------------------------------------------------------
 
 SYNOPSIS
+^^^^^^^^
 
-#include <linux/parport.h>
+::
 
-int parport_poll_peripheral (struct parport *port,
-			     unsigned char mask,
-			     unsigned char val,
-			     int usec);
+	#include <linux/parport.h>
+
+	int parport_poll_peripheral (struct parport *port,
+				     unsigned char mask,
+				     unsigned char val,
+				     int usec);
 
 DESCRIPTION
+^^^^^^^^^^^
 
 Wait for the status lines in mask to match the values in val.
 
 RETURN VALUE
+^^^^^^^^^^^^
 
+======== ==========================================================
  -EINTR  a signal is pending
       0  the status lines in mask have values in val
       1  timed out while waiting (usec microseconds have elapsed)
+======== ==========================================================
 
 SEE ALSO
+^^^^^^^^
 
 parport_wait_peripheral
-\f
+
+
+
 parport_wait_event - wait for an event on a port
-------------------
+------------------------------------------------
 
 SYNOPSIS
+^^^^^^^^
 
-#include <linux/parport.h>
+::
 
-int parport_wait_event (struct parport *port, signed long timeout)
+	#include <linux/parport.h>
+
+	int parport_wait_event (struct parport *port, signed long timeout)
 
 DESCRIPTION
+^^^^^^^^^^^
 
 Wait for an event (e.g. interrupt) on a port.  The timeout is in
 jiffies.
 
 RETURN VALUE
+^^^^^^^^^^^^
 
+======= ==========================================================
       0  success
      <0  error (exit as soon as possible)
      >0  timed out
-\f
+======= ==========================================================
+
 parport_negotiate - perform IEEE 1284 negotiation
------------------
+-------------------------------------------------
 
 SYNOPSIS
+^^^^^^^^
 
-#include <linux/parport.h>
+::
 
-int parport_negotiate (struct parport *, int mode);
+	#include <linux/parport.h>
+
+	int parport_negotiate (struct parport *, int mode);
 
 DESCRIPTION
+^^^^^^^^^^^
 
 Perform IEEE 1284 negotiation.
 
 RETURN VALUE
+^^^^^^^^^^^^
 
+======= ==========================================================
      0  handshake OK; IEEE 1284 peripheral and mode available
     -1  handshake failed; peripheral not compliant (or none present)
      1  handshake OK; IEEE 1284 peripheral present but mode not
         available
+======= ==========================================================
 
 SEE ALSO
+^^^^^^^^
 
 parport_read, parport_write
-\f
+
+
+
 parport_read - read data from device
-------------
+------------------------------------
 
 SYNOPSIS
+^^^^^^^^
 
-#include <linux/parport.h>
+::
 
-ssize_t parport_read (struct parport *, void *buf, size_t len);
+	#include <linux/parport.h>
+
+	ssize_t parport_read (struct parport *, void *buf, size_t len);
 
 DESCRIPTION
+^^^^^^^^^^^
 
 Read data from device in current IEEE 1284 transfer mode.  This only
 works for modes that support reverse data transfer.
 
 RETURN VALUE
+^^^^^^^^^^^^
 
 If negative, an error code; otherwise the number of bytes transferred.
 
 SEE ALSO
+^^^^^^^^
 
 parport_write, parport_negotiate
-\f
+
+
+
 parport_write - write data to device
--------------
+------------------------------------
 
 SYNOPSIS
+^^^^^^^^
 
-#include <linux/parport.h>
+::
 
-ssize_t parport_write (struct parport *, const void *buf, size_t len);
+	#include <linux/parport.h>
+
+	ssize_t parport_write (struct parport *, const void *buf, size_t len);
 
 DESCRIPTION
+^^^^^^^^^^^
 
 Write data to device in current IEEE 1284 transfer mode.  This only
 works for modes that support forward data transfer.
 
 RETURN VALUE
+^^^^^^^^^^^^
 
 If negative, an error code; otherwise the number of bytes transferred.
 
 SEE ALSO
+^^^^^^^^
 
 parport_read, parport_negotiate
+
+
 \f
 parport_open - register device for particular device number
-------------
+-----------------------------------------------------------
 
 SYNOPSIS
+^^^^^^^^
 
-#include <linux/parport.h>
+::
 
-struct pardevice *parport_open (int devnum, const char *name,
-			        int (*pf) (void *),
-				void (*kf) (void *),
-				void (*irqf) (int, void *,
-					      struct pt_regs *),
-				int flags, void *handle);
+	#include <linux/parport.h>
+
+	struct pardevice *parport_open (int devnum, const char *name,
+				        int (*pf) (void *),
+					void (*kf) (void *),
+					void (*irqf) (int, void *,
+						      struct pt_regs *),
+					int flags, void *handle);
 
 DESCRIPTION
+^^^^^^^^^^^
 
 This is like parport_register_device but takes a device number instead
 of a pointer to a struct parport.
 
 RETURN VALUE
+^^^^^^^^^^^^
 
 See parport_register_device.  If no device is associated with devnum,
 NULL is returned.
 
 SEE ALSO
+^^^^^^^^
 
 parport_register_device
-\f
+
+
+
 parport_close - unregister device for particular device number
--------------
+--------------------------------------------------------------
 
 SYNOPSIS
+^^^^^^^^
 
-#include <linux/parport.h>
+::
 
-void parport_close (struct pardevice *dev);
+	#include <linux/parport.h>
+
+	void parport_close (struct pardevice *dev);
 
 DESCRIPTION
+^^^^^^^^^^^
 
 This is the equivalent of parport_unregister_device for parport_open.
 
 SEE ALSO
+^^^^^^^^
 
 parport_unregister_device, parport_open
-\f
+
+
+
 parport_device_id - obtain IEEE 1284 Device ID
------------------
+----------------------------------------------
 
 SYNOPSIS
+^^^^^^^^
 
-#include <linux/parport.h>
+::
 
-ssize_t parport_device_id (int devnum, char *buffer, size_t len);
+	#include <linux/parport.h>
+
+	ssize_t parport_device_id (int devnum, char *buffer, size_t len);
 
 DESCRIPTION
+^^^^^^^^^^^
 
 Obtains the IEEE 1284 Device ID associated with a given device.
 
 RETURN VALUE
+^^^^^^^^^^^^
 
 If negative, an error code; otherwise, the number of bytes of buffer
 that contain the device ID.  The format of the device ID is as
-follows:
+follows::
 
-[length][ID]
+	[length][ID]
 
 The first two bytes indicate the inclusive length of the entire Device
 ID, and are in big-endian order.  The ID is a sequence of pairs of the
-form:
+form::
 
-key:value;
+	key:value;
 
 NOTES
+^^^^^
 
 Many devices have ill-formed IEEE 1284 Device IDs.
 
 SEE ALSO
+^^^^^^^^
 
 parport_find_class, parport_find_device
-\f
+
+
+
 parport_device_coords - convert device number to device coordinates
-------------------
+-------------------------------------------------------------------
 
 SYNOPSIS
+^^^^^^^^
 
-#include <linux/parport.h>
+::
 
-int parport_device_coords (int devnum, int *parport, int *mux,
-			   int *daisy);
+	#include <linux/parport.h>
+
+	int parport_device_coords (int devnum, int *parport, int *mux,
+				   int *daisy);
 
 DESCRIPTION
+^^^^^^^^^^^
 
 Convert between device number (zero-based) and device coordinates
 (port, multiplexor, daisy chain address).
 
 RETURN VALUE
+^^^^^^^^^^^^
 
-Zero on success, in which case the coordinates are (*parport, *mux,
-*daisy).
+Zero on success, in which case the coordinates are (``*parport``, ``*mux``,
+``*daisy``).
 
 SEE ALSO
+^^^^^^^^
 
 parport_open, parport_device_id
-\f
+
+
+
 parport_find_class - find a device by its class
-------------------
+-----------------------------------------------
 
 SYNOPSIS
+^^^^^^^^
 
-#include <linux/parport.h>
+::
 
-typedef enum {
-	PARPORT_CLASS_LEGACY = 0,       /* Non-IEEE1284 device */
-	PARPORT_CLASS_PRINTER,
-	PARPORT_CLASS_MODEM,
-	PARPORT_CLASS_NET,
-	PARPORT_CLASS_HDC,              /* Hard disk controller */
-	PARPORT_CLASS_PCMCIA,
-	PARPORT_CLASS_MEDIA,            /* Multimedia device */
-	PARPORT_CLASS_FDC,              /* Floppy disk controller */
-	PARPORT_CLASS_PORTS,
-	PARPORT_CLASS_SCANNER,
-	PARPORT_CLASS_DIGCAM,
-	PARPORT_CLASS_OTHER,            /* Anything else */
-	PARPORT_CLASS_UNSPEC,           /* No CLS field in ID */
-	PARPORT_CLASS_SCSIADAPTER
-} parport_device_class;
+	#include <linux/parport.h>
 
-int parport_find_class (parport_device_class cls, int from);
+	typedef enum {
+		PARPORT_CLASS_LEGACY = 0,       /* Non-IEEE1284 device */
+		PARPORT_CLASS_PRINTER,
+		PARPORT_CLASS_MODEM,
+		PARPORT_CLASS_NET,
+		PARPORT_CLASS_HDC,              /* Hard disk controller */
+		PARPORT_CLASS_PCMCIA,
+		PARPORT_CLASS_MEDIA,            /* Multimedia device */
+		PARPORT_CLASS_FDC,              /* Floppy disk controller */
+		PARPORT_CLASS_PORTS,
+		PARPORT_CLASS_SCANNER,
+		PARPORT_CLASS_DIGCAM,
+		PARPORT_CLASS_OTHER,            /* Anything else */
+		PARPORT_CLASS_UNSPEC,           /* No CLS field in ID */
+		PARPORT_CLASS_SCSIADAPTER
+	} parport_device_class;
+
+	int parport_find_class (parport_device_class cls, int from);
 
 DESCRIPTION
+^^^^^^^^^^^
 
 Find a device by class.  The search starts from device number from+1.
 
 RETURN VALUE
+^^^^^^^^^^^^
 
 The device number of the next device in that class, or -1 if no such
 device exists.
 
 NOTES
+^^^^^
 
-Example usage:
+Example usage::
 
-int devnum = -1;
-while ((devnum = parport_find_class (PARPORT_CLASS_DIGCAM, devnum)) != -1) {
-    struct pardevice *dev = parport_open (devnum, ...);
-    ...
-}
+	int devnum = -1;
+	while ((devnum = parport_find_class (PARPORT_CLASS_DIGCAM, devnum)) != -1) {
+		struct pardevice *dev = parport_open (devnum, ...);
+		...
+	}
 
 SEE ALSO
+^^^^^^^^
 
 parport_find_device, parport_open, parport_device_id
-\f
+
+
+
-parport_find_device - find a device by its class
-------------------
+parport_find_device - find a device by vendor and model
+-------------------------------------------------------
 
 SYNOPSIS
+^^^^^^^^
 
-#include <linux/parport.h>
+::
 
-int parport_find_device (const char *mfg, const char *mdl, int from);
+	#include <linux/parport.h>
+
+	int parport_find_device (const char *mfg, const char *mdl, int from);
 
 DESCRIPTION
+^^^^^^^^^^^
 
 Find a device by vendor and model.  The search starts from device
 number from+1.
 
 RETURN VALUE
+^^^^^^^^^^^^
 
 The device number of the next device matching the specifications, or
 -1 if no such device exists.
 
 NOTES
+^^^^^
 
-Example usage:
+Example usage::
 
-int devnum = -1;
-while ((devnum = parport_find_device ("IOMEGA", "ZIP+", devnum)) != -1) {
-    struct pardevice *dev = parport_open (devnum, ...);
-    ...
-}
+	int devnum = -1;
+	while ((devnum = parport_find_device ("IOMEGA", "ZIP+", devnum)) != -1) {
+		struct pardevice *dev = parport_open (devnum, ...);
+		...
+	}
 
 SEE ALSO
+^^^^^^^^
 
 parport_find_class, parport_open, parport_device_id
+
+
 \f
 parport_set_timeout - set the inactivity timeout
--------------------
+------------------------------------------------
 
 SYNOPSIS
+^^^^^^^^
 
-#include <linux/parport.h>
+::
 
-long parport_set_timeout (struct pardevice *dev, long inactivity);
+	#include <linux/parport.h>
+
+	long parport_set_timeout (struct pardevice *dev, long inactivity);
 
 DESCRIPTION
+^^^^^^^^^^^
 
 Set the inactivity timeout, in jiffies, for a registered device.  The
 previous timeout is returned.
 
 RETURN VALUE
+^^^^^^^^^^^^
 
 The previous timeout, in jiffies.
 
 NOTES
+^^^^^
 
 Some of the port->ops functions for a parport may take time, owing to
 delays at the peripheral.  After the peripheral has not responded for
-'inactivity' jiffies, a timeout will occur and the blocking function
+``inactivity`` jiffies, a timeout will occur and the blocking function
 will return.
 
 A timeout of 0 jiffies is a special case: the function must do as much
@@ -932,29 +1135,37 @@ Once set for a registered device, the timeout will remain at the set
 value until set again.
 
 SEE ALSO
+^^^^^^^^
 
 port->ops->xxx_read/write_yyy
-\f
+
+
+
+
 PORT FUNCTIONS
---------------
+==============
 
 The functions in the port->ops structure (struct parport_operations)
 are provided by the low-level driver responsible for that port.
 
 port->ops->read_data - read the data register
---------------------
+---------------------------------------------
 
 SYNOPSIS
+^^^^^^^^
 
-#include <linux/parport.h>
+::
 
-struct parport_operations {
-	...
-	unsigned char (*read_data) (struct parport *port);
-	...
-};
+	#include <linux/parport.h>
+
+	struct parport_operations {
+		...
+		unsigned char (*read_data) (struct parport *port);
+		...
+	};
 
 DESCRIPTION
+^^^^^^^^^^^
 
 If port->modes contains the PARPORT_MODE_TRISTATE flag and the
 PARPORT_CONTROL_DIRECTION bit in the control register is set, this
@@ -964,45 +1175,59 @@ not set, the return value _may_ be the last value written to the data
 register.  Otherwise the return value is undefined.
 
 SEE ALSO
+^^^^^^^^
 
 write_data, read_status, write_control
+
+
 \f
 port->ops->write_data - write the data register
----------------------
+-----------------------------------------------
 
 SYNOPSIS
+^^^^^^^^
 
-#include <linux/parport.h>
+::
 
-struct parport_operations {
-	...
-	void (*write_data) (struct parport *port, unsigned char d);
-	...
-};
+	#include <linux/parport.h>
+
+	struct parport_operations {
+		...
+		void (*write_data) (struct parport *port, unsigned char d);
+		...
+	};
 
 DESCRIPTION
+^^^^^^^^^^^
 
 Writes to the data register.  May have side-effects (a STROBE pulse,
 for instance).
 
 SEE ALSO
+^^^^^^^^
 
 read_data, read_status, write_control
+
+
 \f
 port->ops->read_status - read the status register
-----------------------
+-------------------------------------------------
 
 SYNOPSIS
+^^^^^^^^
 
-#include <linux/parport.h>
+::
 
-struct parport_operations {
-	...
-	unsigned char (*read_status) (struct parport *port);
-	...
-};
+	#include <linux/parport.h>
+
+	struct parport_operations {
+		...
+		unsigned char (*read_status) (struct parport *port);
+		...
+	};
 
 DESCRIPTION
+^^^^^^^^^^^
 
 Reads from the status register.  This is a bitmask:
 
@@ -1015,76 +1240,98 @@ Reads from the status register.  This is a bitmask:
 There may be other bits set.
 
 SEE ALSO
+^^^^^^^^
 
 read_data, write_data, write_control
+
+
 \f
 port->ops->read_control - read the control register
------------------------
+---------------------------------------------------
 
 SYNOPSIS
+^^^^^^^^
 
-#include <linux/parport.h>
+::
 
-struct parport_operations {
-	...
-	unsigned char (*read_control) (struct parport *port);
-	...
-};
+	#include <linux/parport.h>
+
+	struct parport_operations {
+		...
+		unsigned char (*read_control) (struct parport *port);
+		...
+	};
 
 DESCRIPTION
+^^^^^^^^^^^
 
 Returns the last value written to the control register (either from
 write_control or frob_control).  No port access is performed.
 
 SEE ALSO
+^^^^^^^^
 
 read_data, write_data, read_status, write_control
+
+
 \f
 port->ops->write_control - write the control register
-------------------------
+-----------------------------------------------------
 
 SYNOPSIS
+^^^^^^^^
 
-#include <linux/parport.h>
+::
 
-struct parport_operations {
-	...
-	void (*write_control) (struct parport *port, unsigned char s);
-	...
-};
+	#include <linux/parport.h>
+
+	struct parport_operations {
+		...
+		void (*write_control) (struct parport *port, unsigned char s);
+		...
+	};
 
 DESCRIPTION
+^^^^^^^^^^^
 
-Writes to the control register. This is a bitmask:
-                          _______
-- PARPORT_CONTROL_STROBE (nStrobe)
-                          _______
-- PARPORT_CONTROL_AUTOFD (nAutoFd)
-                        _____
-- PARPORT_CONTROL_INIT (nInit)
-                          _________
-- PARPORT_CONTROL_SELECT (nSelectIn)
+Writes to the control register. This is a bitmask::
+
+				  _______
+	- PARPORT_CONTROL_STROBE (nStrobe)
+				  _______
+	- PARPORT_CONTROL_AUTOFD (nAutoFd)
+				_____
+	- PARPORT_CONTROL_INIT (nInit)
+				  _________
+	- PARPORT_CONTROL_SELECT (nSelectIn)
 
 SEE ALSO
+^^^^^^^^
 
 read_data, write_data, read_status, frob_control
+
+
 \f
 port->ops->frob_control - write control register bits
------------------------
+-----------------------------------------------------
 
 SYNOPSIS
+^^^^^^^^
 
-#include <linux/parport.h>
+::
 
-struct parport_operations {
-	...
-	unsigned char (*frob_control) (struct parport *port,
-				       unsigned char mask,
-				       unsigned char val);
-	...
-};
+	#include <linux/parport.h>
+
+	struct parport_operations {
+		...
+		unsigned char (*frob_control) (struct parport *port,
+					unsigned char mask,
+					unsigned char val);
+		...
+	};
 
 DESCRIPTION
+^^^^^^^^^^^
 
 This is equivalent to reading from the control register, masking out
 the bits in mask, exclusive-or'ing with the bits in val, and writing
@@ -1095,23 +1342,30 @@ of its contents is maintained, so frob_control is in fact only one
 port access.
 
 SEE ALSO
+^^^^^^^^
 
 read_data, write_data, read_status, write_control
+
+
 \f
 port->ops->enable_irq - enable interrupt generation
----------------------
+---------------------------------------------------
 
 SYNOPSIS
+^^^^^^^^
 
-#include <linux/parport.h>
+::
 
-struct parport_operations {
-	...
-	void (*enable_irq) (struct parport *port);
-	...
-};
+	#include <linux/parport.h>
+
+	struct parport_operations {
+		...
+		void (*enable_irq) (struct parport *port);
+		...
+	};
 
 DESCRIPTION
+^^^^^^^^^^^
 
 The parallel port hardware is instructed to generate interrupts at
 appropriate moments, although those moments are
@@ -1119,353 +1373,460 @@ architecture-specific.  For the PC architecture, interrupts are
 commonly generated on the rising edge of nAck.
 
 SEE ALSO
+^^^^^^^^
 
 disable_irq
+
+
 \f
 port->ops->disable_irq - disable interrupt generation
-----------------------
+-----------------------------------------------------
 
 SYNOPSIS
+^^^^^^^^
 
-#include <linux/parport.h>
+::
 
-struct parport_operations {
-	...
-	void (*disable_irq) (struct parport *port);
-	...
-};
+	#include <linux/parport.h>
+
+	struct parport_operations {
+		...
+		void (*disable_irq) (struct parport *port);
+		...
+	};
 
 DESCRIPTION
+^^^^^^^^^^^
 
 The parallel port hardware is instructed not to generate interrupts.
 The interrupt itself is not masked.
 
 SEE ALSO
+^^^^^^^^
 
 enable_irq
 \f
+
+
 port->ops->data_forward - enable data drivers
------------------------
+---------------------------------------------
 
 SYNOPSIS
+^^^^^^^^
 
-#include <linux/parport.h>
+::
 
-struct parport_operations {
-	...
-	void (*data_forward) (struct parport *port);
-	...
-};
+	#include <linux/parport.h>
+
+	struct parport_operations {
+		...
+		void (*data_forward) (struct parport *port);
+		...
+	};
 
 DESCRIPTION
+^^^^^^^^^^^
 
 Enables the data line drivers, for 8-bit host-to-peripheral
 communications.
 
 SEE ALSO
+^^^^^^^^
 
 data_reverse
+
+
 \f
 port->ops->data_reverse - tristate the buffer
------------------------
+---------------------------------------------
 
 SYNOPSIS
+^^^^^^^^
 
-#include <linux/parport.h>
+::
 
-struct parport_operations {
-	...
-	void (*data_reverse) (struct parport *port);
-	...
-};
+	#include <linux/parport.h>
+
+	struct parport_operations {
+		...
+		void (*data_reverse) (struct parport *port);
+		...
+	};
 
 DESCRIPTION
+^^^^^^^^^^^
 
 Places the data bus in a high impedance state, if port->modes has the
 PARPORT_MODE_TRISTATE bit set.
 
 SEE ALSO
+^^^^^^^^
 
 data_forward
-\f
+
+
+
 port->ops->epp_write_data - write EPP data
--------------------------
+------------------------------------------
 
 SYNOPSIS
+^^^^^^^^
 
-#include <linux/parport.h>
+::
 
-struct parport_operations {
-	...
-	size_t (*epp_write_data) (struct parport *port, const void *buf,
-				  size_t len, int flags);
-	...
-};
+	#include <linux/parport.h>
+
+	struct parport_operations {
+		...
+		size_t (*epp_write_data) (struct parport *port, const void *buf,
+					size_t len, int flags);
+		...
+	};
 
 DESCRIPTION
+^^^^^^^^^^^
 
 Writes data in EPP mode, and returns the number of bytes written.
 
-The 'flags' parameter may be one or more of the following,
+The ``flags`` parameter may be one or more of the following,
 bitwise-or'ed together:
 
+======================= =================================================
 PARPORT_EPP_FAST	Use fast transfers. Some chips provide 16-bit and
 			32-bit registers.  However, if a transfer
 			times out, the return value may be unreliable.
+======================= =================================================
 
 SEE ALSO
+^^^^^^^^
 
 epp_read_data, epp_write_addr, epp_read_addr
+
+
 \f
 port->ops->epp_read_data - read EPP data
-------------------------
+----------------------------------------
 
 SYNOPSIS
+^^^^^^^^
 
-#include <linux/parport.h>
+::
 
-struct parport_operations {
-	...
-	size_t (*epp_read_data) (struct parport *port, void *buf,
-				 size_t len, int flags);
-	...
-};
+	#include <linux/parport.h>
+
+	struct parport_operations {
+		...
+		size_t (*epp_read_data) (struct parport *port, void *buf,
+					size_t len, int flags);
+		...
+	};
 
 DESCRIPTION
+^^^^^^^^^^^
 
 Reads data in EPP mode, and returns the number of bytes read.
 
-The 'flags' parameter may be one or more of the following,
+The ``flags`` parameter may be one or more of the following,
 bitwise-or'ed together:
 
+======================= =================================================
 PARPORT_EPP_FAST	Use fast transfers. Some chips provide 16-bit and
 			32-bit registers.  However, if a transfer
 			times out, the return value may be unreliable.
+======================= =================================================
 
 SEE ALSO
+^^^^^^^^
 
 epp_write_data, epp_write_addr, epp_read_addr
-\f
+
+
+
 port->ops->epp_write_addr - write EPP address
--------------------------
+---------------------------------------------
 
 SYNOPSIS
+^^^^^^^^
 
-#include <linux/parport.h>
+::
 
-struct parport_operations {
-	...
-	size_t (*epp_write_addr) (struct parport *port,
-				  const void *buf, size_t len, int flags);
-	...
-};
+	#include <linux/parport.h>
+
+	struct parport_operations {
+		...
+		size_t (*epp_write_addr) (struct parport *port,
+					const void *buf, size_t len, int flags);
+		...
+	};
 
 DESCRIPTION
+^^^^^^^^^^^
 
 Writes EPP addresses (8 bits each), and returns the number written.
 
-The 'flags' parameter may be one or more of the following,
+The ``flags`` parameter may be one or more of the following,
 bitwise-or'ed together:
 
+======================= =================================================
 PARPORT_EPP_FAST	Use fast transfers. Some chips provide 16-bit and
 			32-bit registers.  However, if a transfer
 			times out, the return value may be unreliable.
+======================= =================================================
 
 (Does PARPORT_EPP_FAST make sense for this function?)
 
 SEE ALSO
+^^^^^^^^
 
 epp_write_data, epp_read_data, epp_read_addr
+
+
 \f
 port->ops->epp_read_addr - read EPP address
-------------------------
+-------------------------------------------
 
 SYNOPSIS
+^^^^^^^^
 
-#include <linux/parport.h>
+::
 
-struct parport_operations {
-	...
-	size_t (*epp_read_addr) (struct parport *port, void *buf,
-				 size_t len, int flags);
-	...
-};
+	#include <linux/parport.h>
+
+	struct parport_operations {
+		...
+		size_t (*epp_read_addr) (struct parport *port, void *buf,
+					size_t len, int flags);
+		...
+	};
 
 DESCRIPTION
+^^^^^^^^^^^
 
 Reads EPP addresses (8 bits each), and returns the number read.
 
-The 'flags' parameter may be one or more of the following,
+The ``flags`` parameter may be one or more of the following,
 bitwise-or'ed together:
 
+======================= =================================================
 PARPORT_EPP_FAST	Use fast transfers. Some chips provide 16-bit and
 			32-bit registers.  However, if a transfer
 			times out, the return value may be unreliable.
+======================= =================================================
 
 (Does PARPORT_EPP_FAST make sense for this function?)
 
 SEE ALSO
+^^^^^^^^
 
 epp_write_data, epp_read_data, epp_write_addr
+
+
 \f
 port->ops->ecp_write_data - write a block of ECP data
--------------------------
+-----------------------------------------------------
 
 SYNOPSIS
+^^^^^^^^
 
-#include <linux/parport.h>
+::
 
-struct parport_operations {
-	...
-	size_t (*ecp_write_data) (struct parport *port,
-				  const void *buf, size_t len, int flags);
-	...
-};
+	#include <linux/parport.h>
+
+	struct parport_operations {
+		...
+		size_t (*ecp_write_data) (struct parport *port,
+					const void *buf, size_t len, int flags);
+		...
+	};
 
 DESCRIPTION
+^^^^^^^^^^^
 
-Writes a block of ECP data.  The 'flags' parameter is ignored.
+Writes a block of ECP data.  The ``flags`` parameter is ignored.
 
 RETURN VALUE
+^^^^^^^^^^^^
 
 The number of bytes written.
 
 SEE ALSO
+^^^^^^^^
 
 ecp_read_data, ecp_write_addr
 \f
+
+
 port->ops->ecp_read_data - read a block of ECP data
-------------------------
+---------------------------------------------------
 
 SYNOPSIS
+^^^^^^^^
 
-#include <linux/parport.h>
+::
 
-struct parport_operations {
-	...
-	size_t (*ecp_read_data) (struct parport *port,
-				 void *buf, size_t len, int flags);
-	...
-};
+	#include <linux/parport.h>
+
+	struct parport_operations {
+		...
+		size_t (*ecp_read_data) (struct parport *port,
+					void *buf, size_t len, int flags);
+		...
+	};
 
 DESCRIPTION
+^^^^^^^^^^^
 
-Reads a block of ECP data.  The 'flags' parameter is ignored.
+Reads a block of ECP data.  The ``flags`` parameter is ignored.
 
 RETURN VALUE
+^^^^^^^^^^^^
 
 The number of bytes read.  NB. There may be more unread data in a
 FIFO.  Is there a way of stunning the FIFO to prevent this?
 
 SEE ALSO
+^^^^^^^^
 
 ecp_write_block, ecp_write_addr
-\f
+
+
+
 port->ops->ecp_write_addr - write a block of ECP addresses
--------------------------
+----------------------------------------------------------
 
 SYNOPSIS
+^^^^^^^^
 
-#include <linux/parport.h>
+::
 
-struct parport_operations {
-	...
-	size_t (*ecp_write_addr) (struct parport *port,
-				  const void *buf, size_t len, int flags);
-	...
-};
+	#include <linux/parport.h>
+
+	struct parport_operations {
+		...
+		size_t (*ecp_write_addr) (struct parport *port,
+					const void *buf, size_t len, int flags);
+		...
+	};
 
 DESCRIPTION
+^^^^^^^^^^^
 
-Writes a block of ECP addresses.  The 'flags' parameter is ignored.
+Writes a block of ECP addresses.  The ``flags`` parameter is ignored.
 
 RETURN VALUE
+^^^^^^^^^^^^
 
 The number of bytes written.
 
 NOTES
+^^^^^
 
 This may use a FIFO, and if so shall not return until the FIFO is empty.
 
 SEE ALSO
+^^^^^^^^
 
 ecp_read_data, ecp_write_data
-\f
+
+
+
 port->ops->nibble_read_data - read a block of data in nibble mode
----------------------------
+-----------------------------------------------------------------
 
 SYNOPSIS
+^^^^^^^^
 
-#include <linux/parport.h>
+::
 
-struct parport_operations {
-	...
-	size_t (*nibble_read_data) (struct parport *port,
-				    void *buf, size_t len, int flags);
-	...
-};
+	#include <linux/parport.h>
+
+	struct parport_operations {
+		...
+		size_t (*nibble_read_data) (struct parport *port,
+					void *buf, size_t len, int flags);
+		...
+	};
 
 DESCRIPTION
+^^^^^^^^^^^
 
-Reads a block of data in nibble mode.  The 'flags' parameter is ignored.
+Reads a block of data in nibble mode.  The ``flags`` parameter is ignored.
 
 RETURN VALUE
+^^^^^^^^^^^^
 
 The number of whole bytes read.
 
 SEE ALSO
+^^^^^^^^
 
 byte_read_data, compat_write_data
+
+
 \f
 port->ops->byte_read_data - read a block of data in byte mode
--------------------------
+-------------------------------------------------------------
 
 SYNOPSIS
+^^^^^^^^
 
-#include <linux/parport.h>
+::
 
-struct parport_operations {
-	...
-	size_t (*byte_read_data) (struct parport *port,
-				  void *buf, size_t len, int flags);
-	...
-};
+	#include <linux/parport.h>
+
+	struct parport_operations {
+		...
+		size_t (*byte_read_data) (struct parport *port,
+					void *buf, size_t len, int flags);
+		...
+	};
 
 DESCRIPTION
+^^^^^^^^^^^
 
-Reads a block of data in byte mode.  The 'flags' parameter is ignored.
+Reads a block of data in byte mode.  The ``flags`` parameter is ignored.
 
 RETURN VALUE
+^^^^^^^^^^^^
 
 The number of bytes read.
 
 SEE ALSO
+^^^^^^^^
 
 nibble_read_data, compat_write_data
+
+
 \f
 port->ops->compat_write_data - write a block of data in compatibility mode
-----------------------------
+--------------------------------------------------------------------------
 
 SYNOPSIS
+^^^^^^^^
 
-#include <linux/parport.h>
+::
 
-struct parport_operations {
-	...
-	size_t (*compat_write_data) (struct parport *port,
-				     const void *buf, size_t len, int flags);
-	...
-};
+	#include <linux/parport.h>
+
+	struct parport_operations {
+		...
+		size_t (*compat_write_data) (struct parport *port,
+					const void *buf, size_t len, int flags);
+		...
+	};
 
 DESCRIPTION
+^^^^^^^^^^^
 
-Writes a block of data in compatibility mode.  The 'flags' parameter
+Writes a block of data in compatibility mode.  The ``flags`` parameter
 is ignored.
 
 RETURN VALUE
+^^^^^^^^^^^^
 
 The number of bytes written.
 
 SEE ALSO
+^^^^^^^^
 
 nibble_read_data, byte_read_data
-- 
2.9.4

^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [PATCH v2 27/29] percpu-rw-semaphore.txt: standardize document format
  2017-06-17 15:26 [PATCH v2 01/29] IPMI.txt: standardize document format Mauro Carvalho Chehab
                   ` (24 preceding siblings ...)
  2017-06-17 15:26 ` [PATCH v2 26/29] parport-lowlevel.txt: " Mauro Carvalho Chehab
@ 2017-06-17 15:26 ` Mauro Carvalho Chehab
  2017-06-17 15:26 ` [PATCH v2 28/29] phy.txt: " Mauro Carvalho Chehab
                   ` (2 subsequent siblings)
  28 siblings, 0 replies; 35+ messages in thread
From: Mauro Carvalho Chehab @ 2017-06-17 15:26 UTC (permalink / raw)
  To: Linux Doc Mailing List
  Cc: Mauro Carvalho Chehab, Mauro Carvalho Chehab, linux-kernel,
	Jonathan Corbet

Each text file under Documentation follows a different
format. Some don't even have titles!

Change its representation to follow the adopted standard,
using ReST markups for it to be parseable by Sphinx:

This document already adopts the standard format,
with a single exception: we use this convention
for the document title:
	===
	foo
	===

So, adjust the title of this document to follow the
standard.

Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
---
 Documentation/percpu-rw-semaphore.txt | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/Documentation/percpu-rw-semaphore.txt b/Documentation/percpu-rw-semaphore.txt
index 7d3c82431909..247de6410855 100644
--- a/Documentation/percpu-rw-semaphore.txt
+++ b/Documentation/percpu-rw-semaphore.txt
@@ -1,5 +1,6 @@
+====================
 Percpu rw semaphores
---------------------
+====================
 
 Percpu rw semaphores is a new read-write semaphore design that is
 optimized for locking for reading.
-- 
2.9.4

^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [PATCH v2 28/29] phy.txt: standardize document format
  2017-06-17 15:26 [PATCH v2 01/29] IPMI.txt: standardize document format Mauro Carvalho Chehab
                   ` (25 preceding siblings ...)
  2017-06-17 15:26 ` [PATCH v2 27/29] percpu-rw-semaphore.txt: " Mauro Carvalho Chehab
@ 2017-06-17 15:26 ` Mauro Carvalho Chehab
  2017-06-17 15:26 ` [PATCH v2 29/29] pi-futex.txt: " Mauro Carvalho Chehab
  2017-06-17 15:46 ` [PATCH v2 01/29] IPMI.txt: " Corey Minyard
  28 siblings, 0 replies; 35+ messages in thread
From: Mauro Carvalho Chehab @ 2017-06-17 15:26 UTC (permalink / raw)
  To: Linux Doc Mailing List
  Cc: Mauro Carvalho Chehab, Mauro Carvalho Chehab, linux-kernel,
	Jonathan Corbet

Each text file under Documentation follows a different
format. Some don't even have titles!

Change its representation to follow the adopted standard,
using ReST markups for it to be parseable by Sphinx:

- mark titles;
- use :Author: for authorship;
- mark literal blocks.

Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
---
 Documentation/phy.txt | 106 +++++++++++++++++++++++++++++++-------------------
 1 file changed, 67 insertions(+), 39 deletions(-)

diff --git a/Documentation/phy.txt b/Documentation/phy.txt
index 383cdd863f08..457c3e0f86d6 100644
--- a/Documentation/phy.txt
+++ b/Documentation/phy.txt
@@ -1,10 +1,14 @@
-			    PHY SUBSYSTEM
-		  Kishon Vijay Abraham I <kishon@ti.com>
+=============
+PHY subsystem
+=============
+
+:Author: Kishon Vijay Abraham I <kishon@ti.com>
 
 This document explains the Generic PHY Framework along with the APIs provided,
 and how-to-use.
 
-1. Introduction
+Introduction
+============
 
 *PHY* is the abbreviation for physical layer. It is used to connect a device
 to the physical medium e.g., the USB controller has a PHY to provide functions
@@ -21,7 +25,8 @@ better code maintainability.
 This framework will be of use only to devices that use external PHY (PHY
 functionality is not embedded within the controller).
 
-2. Registering/Unregistering the PHY provider
+Registering/Unregistering the PHY provider
+==========================================
 
 PHY provider refers to an entity that implements one or more PHY instances.
 For the simple case where the PHY provider implements only a single instance of
@@ -30,11 +35,14 @@ of_phy_simple_xlate. If the PHY provider implements multiple instances, it
 should provide its own implementation of of_xlate. of_xlate is used only for
 dt boot case.
 
-#define of_phy_provider_register(dev, xlate)    \
-        __of_phy_provider_register((dev), NULL, THIS_MODULE, (xlate))
+::
 
-#define devm_of_phy_provider_register(dev, xlate)       \
-        __devm_of_phy_provider_register((dev), NULL, THIS_MODULE, (xlate))
+	#define of_phy_provider_register(dev, xlate)    \
+		__of_phy_provider_register((dev), NULL, THIS_MODULE, (xlate))
+
+	#define devm_of_phy_provider_register(dev, xlate)       \
+		__devm_of_phy_provider_register((dev), NULL, THIS_MODULE,
+						(xlate))
 
 of_phy_provider_register and devm_of_phy_provider_register macros can be used to
 register the phy_provider and it takes device and of_xlate as
@@ -47,28 +55,35 @@ nodes within extra levels for context and extensibility, in which case the low
 level of_phy_provider_register_full() and devm_of_phy_provider_register_full()
 macros can be used to override the node containing the children.
 
-#define of_phy_provider_register_full(dev, children, xlate) \
-	__of_phy_provider_register(dev, children, THIS_MODULE, xlate)
+::
 
-#define devm_of_phy_provider_register_full(dev, children, xlate) \
-	__devm_of_phy_provider_register_full(dev, children, THIS_MODULE, xlate)
+	#define of_phy_provider_register_full(dev, children, xlate) \
+		__of_phy_provider_register(dev, children, THIS_MODULE, xlate)
 
-void devm_of_phy_provider_unregister(struct device *dev,
-	struct phy_provider *phy_provider);
-void of_phy_provider_unregister(struct phy_provider *phy_provider);
+	#define devm_of_phy_provider_register_full(dev, children, xlate) \
+		__devm_of_phy_provider_register_full(dev, children,
+						     THIS_MODULE, xlate)
+
+	void devm_of_phy_provider_unregister(struct device *dev,
+		struct phy_provider *phy_provider);
+	void of_phy_provider_unregister(struct phy_provider *phy_provider);
 
 devm_of_phy_provider_unregister and of_phy_provider_unregister can be used to
 unregister the PHY.
 
-3. Creating the PHY
+Creating the PHY
+================
 
 The PHY driver should create the PHY in order for other peripheral controllers
 to make use of it. The PHY framework provides 2 APIs to create the PHY.
 
-struct phy *phy_create(struct device *dev, struct device_node *node,
-		       const struct phy_ops *ops);
-struct phy *devm_phy_create(struct device *dev, struct device_node *node,
-			    const struct phy_ops *ops);
+::
+
+	struct phy *phy_create(struct device *dev, struct device_node *node,
+			       const struct phy_ops *ops);
+	struct phy *devm_phy_create(struct device *dev,
+				    struct device_node *node,
+				    const struct phy_ops *ops);
 
 The PHY drivers can use one of the above 2 APIs to create the PHY by passing
 the device pointer and phy ops.
@@ -84,12 +99,16 @@ phy_ops to get back the private data.
 Before the controller can make use of the PHY, it has to get a reference to
 it. This framework provides the following APIs to get a reference to the PHY.
 
-struct phy *phy_get(struct device *dev, const char *string);
-struct phy *phy_optional_get(struct device *dev, const char *string);
-struct phy *devm_phy_get(struct device *dev, const char *string);
-struct phy *devm_phy_optional_get(struct device *dev, const char *string);
-struct phy *devm_of_phy_get_by_index(struct device *dev, struct device_node *np,
-				     int index);
+::
+
+	struct phy *phy_get(struct device *dev, const char *string);
+	struct phy *phy_optional_get(struct device *dev, const char *string);
+	struct phy *devm_phy_get(struct device *dev, const char *string);
+	struct phy *devm_phy_optional_get(struct device *dev,
+					  const char *string);
+	struct phy *devm_of_phy_get_by_index(struct device *dev,
+					     struct device_node *np,
+					     int index);
 
 phy_get, phy_optional_get, devm_phy_get and devm_phy_optional_get can
 be used to get the PHY. In the case of dt boot, the string arguments
@@ -111,30 +130,35 @@ the phy_init() and phy_exit() calls, and phy_power_on() and
 phy_power_off() calls are all NOP when applied to a NULL phy. The NULL
 phy is useful in devices for handling optional phy devices.
 
-5. Releasing a reference to the PHY
+Releasing a reference to the PHY
+================================
 
 When the controller no longer needs the PHY, it has to release the reference
 to the PHY it has obtained using the APIs mentioned in the above section. The
 PHY framework provides 2 APIs to release a reference to the PHY.
 
-void phy_put(struct phy *phy);
-void devm_phy_put(struct device *dev, struct phy *phy);
+::
+
+	void phy_put(struct phy *phy);
+	void devm_phy_put(struct device *dev, struct phy *phy);
 
 Both these APIs are used to release a reference to the PHY and devm_phy_put
 destroys the devres associated with this PHY.
 
-6. Destroying the PHY
+Destroying the PHY
+==================
 
 When the driver that created the PHY is unloaded, it should destroy the PHY it
-created using one of the following 2 APIs.
+created using one of the following 2 APIs::
 
-void phy_destroy(struct phy *phy);
-void devm_phy_destroy(struct device *dev, struct phy *phy);
+	void phy_destroy(struct phy *phy);
+	void devm_phy_destroy(struct device *dev, struct phy *phy);
 
 Both these APIs destroy the PHY and devm_phy_destroy destroys the devres
 associated with this PHY.
 
-7. PM Runtime
+PM Runtime
+==========
 
 This subsystem is pm runtime enabled. So while creating the PHY,
 pm_runtime_enable of the phy device created by this subsystem is called and
@@ -150,7 +174,8 @@ There are exported APIs like phy_pm_runtime_get, phy_pm_runtime_get_sync,
 phy_pm_runtime_put, phy_pm_runtime_put_sync, phy_pm_runtime_allow and
 phy_pm_runtime_forbid for performing PM operations.
 
-8. PHY Mappings
+PHY Mappings
+============
 
 In order to get reference to a PHY without help from DeviceTree, the framework
 offers lookups which can be compared to clkdev that allow clk structures to be
@@ -158,12 +183,15 @@ bound to devices. A lookup can be made be made during runtime when a handle to
 the struct phy already exists.
 
 The framework offers the following API for registering and unregistering the
-lookups.
+lookups::
 
-int phy_create_lookup(struct phy *phy, const char *con_id, const char *dev_id);
-void phy_remove_lookup(struct phy *phy, const char *con_id, const char *dev_id);
+	int phy_create_lookup(struct phy *phy, const char *con_id,
+			      const char *dev_id);
+	void phy_remove_lookup(struct phy *phy, const char *con_id,
+			       const char *dev_id);
 
-9. DeviceTree Binding
+DeviceTree Binding
+==================
 
 The documentation for PHY dt binding can be found @
 Documentation/devicetree/bindings/phy/phy-bindings.txt
-- 
2.9.4

^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [PATCH v2 29/29] pi-futex.txt: standardize document format
  2017-06-17 15:26 [PATCH v2 01/29] IPMI.txt: standardize document format Mauro Carvalho Chehab
                   ` (26 preceding siblings ...)
  2017-06-17 15:26 ` [PATCH v2 28/29] phy.txt: " Mauro Carvalho Chehab
@ 2017-06-17 15:26 ` Mauro Carvalho Chehab
  2017-06-17 15:46 ` [PATCH v2 01/29] IPMI.txt: " Corey Minyard
  28 siblings, 0 replies; 35+ messages in thread
From: Mauro Carvalho Chehab @ 2017-06-17 15:26 UTC (permalink / raw)
  To: Linux Doc Mailing List
  Cc: Mauro Carvalho Chehab, Mauro Carvalho Chehab, linux-kernel,
	Jonathan Corbet, Thomas Gleixner, Ingo Molnar, Peter Zijlstra,
	Darren Hart

Each text file under Documentation follows a different
format. Some don't even have titles!

Change its representation to follow the adopted standard,
using ReST markups for it to be parseable by Sphinx.

This document requires just minor adjustments to match
the standard documentation style:

- promote document name;
- remove extra colons from some chapter titles;
- use "-" for a bulleted list.

Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
---
 Documentation/pi-futex.txt | 15 ++++++++-------
 1 file changed, 8 insertions(+), 7 deletions(-)

diff --git a/Documentation/pi-futex.txt b/Documentation/pi-futex.txt
index 9a5bc8651c29..aafddbee7377 100644
--- a/Documentation/pi-futex.txt
+++ b/Documentation/pi-futex.txt
@@ -1,5 +1,6 @@
+======================
 Lightweight PI-futexes
-----------------------
+======================
 
 We are calling them lightweight for 3 reasons:
 
@@ -25,8 +26,8 @@ determinism and well-bound latencies. Even in the worst-case, PI will
 improve the statistical distribution of locking related application
 delays.
 
-The longer reply:
------------------
+The longer reply
+----------------
 
 Firstly, sharing locks between multiple tasks is a common programming
 technique that often cannot be replaced with lockless algorithms. As we
@@ -71,8 +72,8 @@ deterministic execution of the high-prio task: any medium-priority task
 could preempt the low-prio task while it holds the shared lock and
 executes the critical section, and could delay it indefinitely.
 
-Implementation:
----------------
+Implementation
+--------------
 
 As mentioned before, the userspace fastpath of PI-enabled pthread
 mutexes involves no kernel work at all - they behave quite similarly to
@@ -83,8 +84,8 @@ entering the kernel.
 
 To handle the slowpath, we have added two new futex ops:
 
-  FUTEX_LOCK_PI
-  FUTEX_UNLOCK_PI
+  - FUTEX_LOCK_PI
+  - FUTEX_UNLOCK_PI
 
 If the lock-acquire fastpath fails, [i.e. an atomic transition from 0 to
 TID fails], then FUTEX_LOCK_PI is called. The kernel does all the
-- 
2.9.4

^ permalink raw reply related	[flat|nested] 35+ messages in thread

* Re: [PATCH v2 01/29] IPMI.txt: standardize document format
  2017-06-17 15:26 [PATCH v2 01/29] IPMI.txt: standardize document format Mauro Carvalho Chehab
                   ` (27 preceding siblings ...)
  2017-06-17 15:26 ` [PATCH v2 29/29] pi-futex.txt: " Mauro Carvalho Chehab
@ 2017-06-17 15:46 ` Corey Minyard
  28 siblings, 0 replies; 35+ messages in thread
From: Corey Minyard @ 2017-06-17 15:46 UTC (permalink / raw)
  To: Mauro Carvalho Chehab, Linux Doc Mailing List
  Cc: Mauro Carvalho Chehab, linux-kernel, Jonathan Corbet, openipmi-developer

On 06/17/2017 10:26 AM, Mauro Carvalho Chehab wrote:
> Each text file under Documentation follows a different
> format. Some doesn't even have titles!
>
> Change its representation to follow the adopted standard,
> using ReST markups for it to be parseable by Sphinx:
>
> - fix document type;
> - add missing markups for subitems;
> - mark literal blocks;
> - add whitespaces and blank lines where needed;
> - use bulleted list markups where neded.
>
> Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
> ---
>   Documentation/IPMI.txt | 76 +++++++++++++++++++++++++++++---------------------
>   1 file changed, 44 insertions(+), 32 deletions(-)

This is ok by me.  Nice to have a standard format for this.

Reviewed-by: Corey Minyard <cminyard@mvista.com>

> diff --git a/Documentation/IPMI.txt b/Documentation/IPMI.txt
> index 6962cab997ef..aa77a25a0940 100644
> --- a/Documentation/IPMI.txt
> +++ b/Documentation/IPMI.txt
> @@ -1,9 +1,8 @@
> +=====================
> +The Linux IPMI Driver
> +=====================
>   
> -                          The Linux IPMI Driver
> -			  ---------------------
> -			      Corey Minyard
> -			  <minyard@mvista.com>
> -			    <minyard@acm.org>
> +:Author: Corey Minyard <minyard@mvista.com> / <minyard@acm.org>
>   
>   The Intelligent Platform Management Interface, or IPMI, is a
>   standard for controlling intelligent devices that monitor a system.
> @@ -141,7 +140,7 @@ Addressing
>   ----------
>   
>   The IPMI addressing works much like IP addresses, you have an overlay
> -to handle the different address types.  The overlay is:
> +to handle the different address types.  The overlay is::
>   
>     struct ipmi_addr
>     {
> @@ -153,7 +152,7 @@ to handle the different address types.  The overlay is:
>   The addr_type determines what the address really is.  The driver
>   currently understands two different types of addresses.
>   
> -"System Interface" addresses are defined as:
> +"System Interface" addresses are defined as::
>   
>     struct ipmi_system_interface_addr
>     {
> @@ -166,7 +165,7 @@ straight to the BMC on the current card.  The channel must be
>   IPMI_BMC_CHANNEL.
>   
>   Messages that are destined to go out on the IPMB bus use the
> -IPMI_IPMB_ADDR_TYPE address type.  The format is
> +IPMI_IPMB_ADDR_TYPE address type.  The format is::
>   
>     struct ipmi_ipmb_addr
>     {
> @@ -184,16 +183,16 @@ spec.
>   Messages
>   --------
>   
> -Messages are defined as:
> +Messages are defined as::
>   
> -struct ipmi_msg
> -{
> +  struct ipmi_msg
> +  {
>   	unsigned char netfn;
>   	unsigned char lun;
>   	unsigned char cmd;
>   	unsigned char *data;
>   	int           data_len;
> -};
> +  };
>   
>   The driver takes care of adding/stripping the header information.  The
>   data portion is just the data to be send (do NOT put addressing info
> @@ -208,7 +207,7 @@ block of data, even when receiving messages.  Otherwise the driver
>   will have no place to put the message.
>   
>   Messages coming up from the message handler in kernelland will come in
> -as:
> +as::
>   
>     struct ipmi_recv_msg
>     {
> @@ -246,6 +245,7 @@ and the user should not have to care what type of SMI is below them.
>   
>   
>   Watching For Interfaces
> +^^^^^^^^^^^^^^^^^^^^^^^
>   
>   When your code comes up, the IPMI driver may or may not have detected
>   if IPMI devices exist.  So you might have to defer your setup until
> @@ -256,6 +256,7 @@ and tell you when they come and go.
>   
>   
>   Creating the User
> +^^^^^^^^^^^^^^^^^
>   
>   To use the message handler, you must first create a user using
>   ipmi_create_user.  The interface number specifies which SMI you want
> @@ -272,6 +273,7 @@ closing the device automatically destroys the user.
>   
>   
>   Messaging
> +^^^^^^^^^
>   
>   To send a message from kernel-land, the ipmi_request_settime() call does
>   pretty much all message handling.  Most of the parameter are
> @@ -321,6 +323,7 @@ though, since it is tricky to manage your own buffers.
>   
>   
>   Events and Incoming Commands
> +^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>   
>   The driver takes care of polling for IPMI events and receiving
>   commands (commands are messages that are not responses, they are
> @@ -367,7 +370,7 @@ in the system.  It discovers interfaces through a host of different
>   methods, depending on the system.
>   
>   You can specify up to four interfaces on the module load line and
> -control some module parameters:
> +control some module parameters::
>   
>     modprobe ipmi_si.o type=<type1>,<type2>....
>          ports=<port1>,<port2>... addrs=<addr1>,<addr2>...
> @@ -437,7 +440,7 @@ default is one.  Setting to 0 is useful with the hotmod, but is
>   obviously only useful for modules.
>   
>   When compiled into the kernel, the parameters can be specified on the
> -kernel command line as:
> +kernel command line as::
>   
>     ipmi_si.type=<type1>,<type2>...
>          ipmi_si.ports=<port1>,<port2>... ipmi_si.addrs=<addr1>,<addr2>...
> @@ -474,16 +477,22 @@ The driver supports a hot add and remove of interfaces.  This way,
>   interfaces can be added or removed after the kernel is up and running.
>   This is done using /sys/modules/ipmi_si/parameters/hotmod, which is a
>   write-only parameter.  You write a string to this interface.  The string
> -has the format:
> +has the format::
> +
>      <op1>[:op2[:op3...]]
> -The "op"s are:
> +
> +The "op"s are::
> +
>      add|remove,kcs|bt|smic,mem|i/o,<address>[,<opt1>[,<opt2>[,...]]]
> -You can specify more than one interface on the line.  The "opt"s are:
> +
> +You can specify more than one interface on the line.  The "opt"s are::
> +
>      rsp=<regspacing>
>      rsi=<regsize>
>      rsh=<regshift>
>      irq=<irq>
>      ipmb=<ipmb slave addr>
> +
>   and these have the same meanings as discussed above.  Note that you
>   can also use this on the kernel command line for a more compact format
>   for specifying an interface.  Note that when removing an interface,
> @@ -496,7 +505,7 @@ The SMBus Driver (SSIF)
>   The SMBus driver allows up to 4 SMBus devices to be configured in the
>   system.  By default, the driver will only register with something it
>   finds in DMI or ACPI tables.  You can change this
> -at module load time (for a module) with:
> +at module load time (for a module) with::
>   
>     modprobe ipmi_ssif.o
>   	addr=<i2caddr1>[,<i2caddr2>[,...]]
> @@ -535,7 +544,7 @@ the smb_addr parameter unless you have DMI or ACPI data to tell the
>   driver what to use.
>   
>   When compiled into the kernel, the addresses can be specified on the
> -kernel command line as:
> +kernel command line as::
>   
>     ipmb_ssif.addr=<i2caddr1>[,<i2caddr2>[...]]
>   	ipmi_ssif.adapter=<adapter1>[,<adapter2>[...]]
> @@ -565,9 +574,9 @@ Some users need more detailed information about a device, like where
>   the address came from or the raw base device for the IPMI interface.
>   You can use the IPMI smi_watcher to catch the IPMI interfaces as they
>   come or go, and to grab the information, you can use the function
> -ipmi_get_smi_info(), which returns the following structure:
> +ipmi_get_smi_info(), which returns the following structure::
>   
> -struct ipmi_smi_info {
> +  struct ipmi_smi_info {
>   	enum ipmi_addr_src addr_src;
>   	struct device *dev;
>   	union {
> @@ -575,7 +584,7 @@ struct ipmi_smi_info {
>   			void *acpi_handle;
>   		} acpi_info;
>   	} addr_info;
> -};
> +  };
>   
>   Currently special info for only for SI_ACPI address sources is
>   returned.  Others may be added as necessary.
> @@ -590,7 +599,7 @@ Watchdog
>   
>   A watchdog timer is provided that implements the Linux-standard
>   watchdog timer interface.  It has three module parameters that can be
> -used to control it:
> +used to control it::
>   
>     modprobe ipmi_watchdog timeout=<t> pretimeout=<t> action=<action type>
>         preaction=<preaction type> preop=<preop type> start_now=x
> @@ -635,7 +644,7 @@ watchdog device is closed.  The default value of nowayout is true
>   if the CONFIG_WATCHDOG_NOWAYOUT option is enabled, or false if not.
>   
>   When compiled into the kernel, the kernel command line is available
> -for configuring the watchdog:
> +for configuring the watchdog::
>   
>     ipmi_watchdog.timeout=<t> ipmi_watchdog.pretimeout=<t>
>   	ipmi_watchdog.action=<action type>
> @@ -675,6 +684,7 @@ also get a bunch of OEM events holding the panic string.
>   
>   
>   The field settings of the events are:
> +
>   * Generator ID: 0x21 (kernel)
>   * EvM Rev: 0x03 (this event is formatting in IPMI 1.0 format)
>   * Sensor Type: 0x20 (OS critical stop sensor)
> @@ -683,18 +693,20 @@ The field settings of the events are:
>   * Event Data 1: 0xa1 (Runtime stop in OEM bytes 2 and 3)
>   * Event data 2: second byte of panic string
>   * Event data 3: third byte of panic string
> +
>   See the IPMI spec for the details of the event layout.  This event is
>   always sent to the local management controller.  It will handle routing
>   the message to the right place
>   
>   Other OEM events have the following format:
> -Record ID (bytes 0-1): Set by the SEL.
> -Record type (byte 2): 0xf0 (OEM non-timestamped)
> -byte 3: The slave address of the card saving the panic
> -byte 4: A sequence number (starting at zero)
> -The rest of the bytes (11 bytes) are the panic string.  If the panic string
> -is longer than 11 bytes, multiple messages will be sent with increasing
> -sequence numbers.
> +
> +* Record ID (bytes 0-1): Set by the SEL.
> +* Record type (byte 2): 0xf0 (OEM non-timestamped)
> +* byte 3: The slave address of the card saving the panic
> +* byte 4: A sequence number (starting at zero)
> +  The rest of the bytes (11 bytes) are the panic string.  If the panic string
> +  is longer than 11 bytes, multiple messages will be sent with increasing
> +  sequence numbers.
>   
>   Because you cannot send OEM events using the standard interface, this
>   function will attempt to find an SEL and add the events there.  It

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v2 12/29] kselftest.rst: do some adjustments after ReST conversion
  2017-06-17 15:26 ` [PATCH v2 12/29] kselftest.rst: do some adjustments after ReST conversion Mauro Carvalho Chehab
@ 2017-06-23 14:04   ` Shuah Khan
  2017-06-23 21:32     ` Mauro Carvalho Chehab
  0 siblings, 1 reply; 35+ messages in thread
From: Shuah Khan @ 2017-06-23 14:04 UTC (permalink / raw)
  To: Mauro Carvalho Chehab, Linux Doc Mailing List, Jonathan Corbet
  Cc: Mauro Carvalho Chehab, linux-kernel, linux-kselftest, Shuah Khan,
	Shuah Khan

Hi Mauro,

On 06/17/2017 09:26 AM, Mauro Carvalho Chehab wrote:
> Do some minor adjustments after ReST conversion:
> 
> - On most documents, we use prepend a "$ " before
>   command line arguments;
> 
> - Prefer to use :: on the preceding line;
> 
> - Split a multi-paragraph description as such.
> 
> Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>

Looks good to me. I can take this through linux-kselftest or here is
my Ack for it to go through the doc tree with the rest in this series.

Acked-by: Shuah Khan <shuahkh@osg.samsung.com>

thanks,
-- Shuah

> ---
>  Documentation/dev-tools/kselftest.rst | 32 ++++++++++++++++----------------
>  1 file changed, 16 insertions(+), 16 deletions(-)
> 
> diff --git a/Documentation/dev-tools/kselftest.rst b/Documentation/dev-tools/kselftest.rst
> index b3861500c42d..ebd03d11d2c2 100644
> --- a/Documentation/dev-tools/kselftest.rst
> +++ b/Documentation/dev-tools/kselftest.rst
> @@ -19,15 +19,15 @@ Running the selftests (hotplug tests are run in limited mode)
>  
>  To build the tests::
>  
> -    make -C tools/testing/selftests
> +  $ make -C tools/testing/selftests
>  
>  To run the tests::
>  
> -    make -C tools/testing/selftests run_tests
> +  $ make -C tools/testing/selftests run_tests
>  
>  To build and run the tests with a single command, use::
>  
> -    make kselftest
> +  $ make kselftest
>  
>  Note that some tests will require root privileges.
>  
> @@ -40,11 +40,11 @@ single test to run, or a list of tests to run.
>  
>  To run only tests targeted for a single subsystem::
>  
> -    make -C tools/testing/selftests TARGETS=ptrace run_tests
> +  $ make -C tools/testing/selftests TARGETS=ptrace run_tests
>  
>  You can specify multiple tests to build and run::
>  
> -    make TARGETS="size timers" kselftest
> +  $  make TARGETS="size timers" kselftest
>  
>  See the top-level tools/testing/selftests/Makefile for the list of all
>  possible targets.
> @@ -55,11 +55,11 @@ Running the full range hotplug selftests
>  
>  To build the hotplug tests::
>  
> -    make -C tools/testing/selftests hotplug
> +  $ make -C tools/testing/selftests hotplug
>  
>  To run the hotplug tests::
>  
> -    make -C tools/testing/selftests run_hotplug
> +  $ make -C tools/testing/selftests run_hotplug
>  
>  Note that some tests will require root privileges.
>  
> @@ -73,13 +73,13 @@ location.
>  
>  To install selftests in default location::
>  
> -    cd tools/testing/selftests
> -    ./kselftest_install.sh
> +   $ cd tools/testing/selftests
> +   $ ./kselftest_install.sh
>  
>  To install selftests in a user specified location::
>  
> -    cd tools/testing/selftests
> -    ./kselftest_install.sh install_dir
> +   $ cd tools/testing/selftests
> +   $ ./kselftest_install.sh install_dir
>  
>  Running installed selftests
>  ===========================
> @@ -88,12 +88,10 @@ Kselftest install as well as the Kselftest tarball provide a script
>  named "run_kselftest.sh" to run the tests.
>  
>  You can simply do the following to run the installed Kselftests. Please
> -note some tests will require root privileges.
> +note some tests will require root privileges::
>  
> -::
> -
> -    cd kselftest
> -    ./run_kselftest.sh
> +   $ cd kselftest
> +   $ ./run_kselftest.sh
>  
>  Contributing new tests
>  ======================
> @@ -114,8 +112,10 @@ Contributing new tests (details)
>  
>   * Use TEST_GEN_XXX if such binaries or files are generated during
>     compiling.
> +
>     TEST_PROGS, TEST_GEN_PROGS mean it is the excutable tested by
>     default.
> +
>     TEST_PROGS_EXTENDED, TEST_GEN_PROGS_EXTENDED mean it is the
>     executable which is not tested by default.
>     TEST_FILES, TEST_GEN_FILES mean it is the file which is used by
> 
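For quick reference, the invocations the hunks above standardize can be collected into a small dry-run script. This is only a sketch: it echoes each command instead of invoking make, since actually running them requires a full kernel source tree with tools/testing/selftests present.

```shell
#!/bin/sh
# Dry-run sketch of the kselftest invocations standardized by the patch
# above.  Each command is echoed with the "$ " prefix the documentation
# now uses, rather than executed, because a kernel source tree is needed
# to run them for real.
run() { echo "\$ $*"; }

run make -C tools/testing/selftests TARGETS=ptrace run_tests
run make TARGETS="size timers" kselftest
run make -C tools/testing/selftests hotplug
run make -C tools/testing/selftests run_hotplug
```

Note that the quotes around "size timers" are consumed by the shell when echoing; when typing the command for real they are required so both targets reach make in one TARGETS variable.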

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v2 12/29] kselftest.rst: do some adjustments after ReST conversion
  2017-06-23 14:04   ` Shuah Khan
@ 2017-06-23 21:32     ` Mauro Carvalho Chehab
  2017-06-24  0:47       ` Shuah Khan
  0 siblings, 1 reply; 35+ messages in thread
From: Mauro Carvalho Chehab @ 2017-06-23 21:32 UTC (permalink / raw)
  To: Shuah Khan, Jonathan Corbet
  Cc: Linux Doc Mailing List, Mauro Carvalho Chehab, linux-kernel,
	linux-kselftest, Shuah Khan

On Fri, 23 Jun 2017 08:04:02 -0600
Shuah Khan <shuah@kernel.org> wrote:

> Hi Mauro,
> 
> On 06/17/2017 09:26 AM, Mauro Carvalho Chehab wrote:
> > Do some minor adjustments after ReST conversion:
> > 
> > - On most documents, we prepend a "$ " before
> >   command line arguments;
> > 
> > - Prefer to use :: on the preceding line;
> > 
> > - Split a multi-paragraph description as such.
> > 
> > Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>  
> 
> Looks good to me. I can take this through linux-kselftest or here is
> my Ack for it to go through the doc tree with the rest in this series.
> 
> Acked-by: Shuah Khan <shuahkh@osg.samsung.com>

Shuah,

Either way works for me. Whatever makes easier for you and Jon.

Regards,
Mauro


* Re: [PATCH v2 12/29] kselftest.rst: do some adjustments after ReST conversion
  2017-06-23 21:32     ` Mauro Carvalho Chehab
@ 2017-06-24  0:47       ` Shuah Khan
  2017-06-25 19:35         ` Jonathan Corbet
  0 siblings, 1 reply; 35+ messages in thread
From: Shuah Khan @ 2017-06-24  0:47 UTC (permalink / raw)
  To: Mauro Carvalho Chehab, Jonathan Corbet
  Cc: Linux Doc Mailing List, Mauro Carvalho Chehab, linux-kernel,
	linux-kselftest, Shuah Khan, Shuah Khan

On 06/23/2017 03:32 PM, Mauro Carvalho Chehab wrote:
> On Fri, 23 Jun 2017 08:04:02 -0600
> Shuah Khan <shuah@kernel.org> wrote:
> 
>> Hi Mauro,
>>
>> On 06/17/2017 09:26 AM, Mauro Carvalho Chehab wrote:
>>> Do some minor adjustments after ReST conversion:
>>>
>>> - On most documents, we prepend a "$ " before
>>>   command line arguments;
>>>
>>> - Prefer to use :: on the preceding line;
>>>
>>> - Split a multi-paragraph description as such.
>>>
>>> Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>  
>>
>> Looks good to me. I can take this through linux-kselftest or here is
>> my Ack for it to go through the doc tree with the rest in this series.
>>
>> Acked-by: Shuah Khan <shuahkh@osg.samsung.com>
> 
> Shuah,
> 
> Either way works for me. Whatever makes easier for you and Jon.
> 
> Regards,
> Mauro
> 
> 

Hi Jon,

Please let me know if you want me to take this through linux-kselftest
In which case, Ack the patch. If not, you already have my Ack.

thanks,
-- Shuah


* Re: [PATCH v2 12/29] kselftest.rst: do some adjustments after ReST conversion
  2017-06-24  0:47       ` Shuah Khan
@ 2017-06-25 19:35         ` Jonathan Corbet
  2017-06-26 16:30           ` Shuah Khan
  0 siblings, 1 reply; 35+ messages in thread
From: Jonathan Corbet @ 2017-06-25 19:35 UTC (permalink / raw)
  To: Shuah Khan
  Cc: Mauro Carvalho Chehab, Linux Doc Mailing List,
	Mauro Carvalho Chehab, linux-kernel, linux-kselftest, Shuah Khan

On Fri, 23 Jun 2017 18:47:51 -0600
Shuah Khan <shuah@kernel.org> wrote:

> Please let me know if you want me to take this through linux-kselftest
> In which case, Ack the patch. If not, you already have my Ack.

Go ahead and take it if you want:

	Acked-by: Jonathan Corbet <corbet@lwn.net>

I haven't quite figured out how I'm going to manage this mountain of
patches yet; making it a little smaller can only help :)

jon


* Re: [PATCH v2 12/29] kselftest.rst: do some adjustments after ReST conversion
  2017-06-25 19:35         ` Jonathan Corbet
@ 2017-06-26 16:30           ` Shuah Khan
  0 siblings, 0 replies; 35+ messages in thread
From: Shuah Khan @ 2017-06-26 16:30 UTC (permalink / raw)
  To: Jonathan Corbet
  Cc: Mauro Carvalho Chehab, Linux Doc Mailing List,
	Mauro Carvalho Chehab, linux-kernel, linux-kselftest, Shuah Khan,
	Shuah Khan

On 06/25/2017 01:35 PM, Jonathan Corbet wrote:
> On Fri, 23 Jun 2017 18:47:51 -0600
> Shuah Khan <shuah@kernel.org> wrote:
> 
>> Please let me know if you want me to take this through linux-kselftest
>> In which case, Ack the patch. If not, you already have my Ack.
> 
> Go ahead and take it if you want:
> 
> 	Acked-by: Jonathan Corbet <corbet@lwn.net>
> 
> I haven't quite figured out how I'm going to manage this mountain of
> patches yet; making it a little smaller can only help :)

Good luck. :)

> 
> jon
> 
> 

Thanks. Applied to linux-kselftest next for 4.13-rc1

-- Shuah


end of thread, other threads:[~2017-06-26 16:30 UTC | newest]

Thread overview: 35+ messages
2017-06-17 15:26 [PATCH v2 01/29] IPMI.txt: standardize document format Mauro Carvalho Chehab
2017-06-17 15:26 ` [PATCH v2 02/29] IRQ-affinity.txt: " Mauro Carvalho Chehab
2017-06-17 15:26 ` [PATCH v2 03/29] IRQ-domain.txt: " Mauro Carvalho Chehab
2017-06-17 15:26 ` [PATCH v2 04/29] irqflags-tracing.txt: " Mauro Carvalho Chehab
2017-06-17 15:26 ` [PATCH v2 05/29] IRQ.txt: add a markup for its title Mauro Carvalho Chehab
2017-06-17 15:26 ` [PATCH v2 06/29] isapnp.txt: promote title level Mauro Carvalho Chehab
2017-06-17 15:26 ` [PATCH v2 07/29] isa.txt: standardize document format Mauro Carvalho Chehab
2017-06-17 15:26 ` [PATCH v2 08/29] kernel-per-CPU-kthreads.txt: " Mauro Carvalho Chehab
2017-06-17 15:26 ` [PATCH v2 09/29] kobject.txt: " Mauro Carvalho Chehab
2017-06-17 15:26 ` [PATCH v2 10/29] kprobes.txt: " Mauro Carvalho Chehab
2017-06-17 15:26 ` [PATCH v2 11/29] kref.txt: " Mauro Carvalho Chehab
2017-06-17 15:26 ` [PATCH v2 12/29] kselftest.rst: do some adjustments after ReST conversion Mauro Carvalho Chehab
2017-06-23 14:04   ` Shuah Khan
2017-06-23 21:32     ` Mauro Carvalho Chehab
2017-06-24  0:47       ` Shuah Khan
2017-06-25 19:35         ` Jonathan Corbet
2017-06-26 16:30           ` Shuah Khan
2017-06-17 15:26 ` [PATCH v2 13/29] ldm.txt: standardize document format Mauro Carvalho Chehab
2017-06-17 15:26 ` [PATCH v2 14/29] lockup-watchdogs.txt: " Mauro Carvalho Chehab
2017-06-17 15:26 ` [PATCH v2 15/29] lzo.txt: " Mauro Carvalho Chehab
2017-06-17 15:26 ` [PATCH v2 16/29] mailbox.txt: " Mauro Carvalho Chehab
2017-06-17 15:26 ` [PATCH v2 17/29] memory-barriers.txt: " Mauro Carvalho Chehab
2017-06-17 15:26 ` [PATCH v2 18/29] memory-barriers.txt: use literals for variables Mauro Carvalho Chehab
2017-06-17 15:26 ` [PATCH v2 19/29] memory-hotplug.txt: standardize document format Mauro Carvalho Chehab
2017-06-17 15:26 ` [PATCH v2 20/29] men-chameleon-bus.txt: " Mauro Carvalho Chehab
2017-06-17 15:26 ` [PATCH v2 21/29] nommu-mmap.txt: " Mauro Carvalho Chehab
2017-06-17 15:26 ` [PATCH v2 22/29] nommu-mmap.txt: don't use all upper case on titles Mauro Carvalho Chehab
2017-06-17 15:26 ` [PATCH v2 23/29] ntb.txt: standardize document format Mauro Carvalho Chehab
2017-06-17 15:26 ` [PATCH v2 24/29] numastat.txt: " Mauro Carvalho Chehab
2017-06-17 15:26 ` [PATCH v2 25/29] padata.txt: " Mauro Carvalho Chehab
2017-06-17 15:26 ` [PATCH v2 26/29] parport-lowlevel.txt: " Mauro Carvalho Chehab
2017-06-17 15:26 ` [PATCH v2 27/29] percpu-rw-semaphore.txt: " Mauro Carvalho Chehab
2017-06-17 15:26 ` [PATCH v2 28/29] phy.txt: " Mauro Carvalho Chehab
2017-06-17 15:26 ` [PATCH v2 29/29] pi-futex.txt: " Mauro Carvalho Chehab
2017-06-17 15:46 ` [PATCH v2 01/29] IPMI.txt: " Corey Minyard
