All of lore.kernel.org
 help / color / mirror / Atom feed
* [RFC PATCH v1 0/6] pure python kernel-doc parser and more
@ 2017-01-24 19:52 Markus Heiser
  2017-01-24 19:52 ` [RFC PATCH v1 1/6] kernel-doc: pure python kernel-doc parser (preparation) Markus Heiser
                   ` (5 more replies)
  0 siblings, 6 replies; 23+ messages in thread
From: Markus Heiser @ 2017-01-24 19:52 UTC (permalink / raw)
  To: Jonathan Corbet, Mauro Carvalho Chehab, Jani Nikula,
	Daniel Vetter, Matthew Wilcox
  Cc: Markus Heiser, linux-doc @ vger . kernel . org List,
	linux-kernel @ vger . kernel . org List

Hi Jon,

here is my RFC, replacing the kernel-doc parser perl script with a python
implementation. The parser is implemented as module and is used by several
kernel-doc applications:

* kerneldoc         : the parser
* kerneldoc-lint    : liniting
* kerneldoc-src2rst : autodoc source tree
* manKernelDoc.py   : a builder generating man-pages

All this is mainly merged 1:1 from my POC at:

  https://github.com/return42/linuxdoc  commit 3991d3c

Since it is merged 1:1, you will notice it's CodingStyle is (ATM) not kernel
compliant and it lacks a user doc ('Documentation/doc-guide').  Take this as a
starting point to play around and gain some experience with the parser and its
applications. CodingStyle and user documentation will be patched when the
community agreed about functionalities.

Thanks

  -- Markus --


[1] https://www.mail-archive.com/linux-doc@vger.kernel.org/msg09002.html

Markus Heiser (6):
  kernel-doc: pure python kernel-doc parser (preparation)
  kernel-doc: replace kernel-doc perl parser with a pure python one (WIP)
  kernel-doc: add kerneldoc-lint command
  kernel-doc: insert TODOs on kernel-doc errors
  kernel-doc: add kerneldoc-src2rst command
  kernel-doc: add man page builder (target mandocs)

 Documentation/Makefile.sphinx        |    8 +-
 Documentation/admin-guide/conf.py    |    2 +
 Documentation/admin-guide/index.rst  |    2 +
 Documentation/conf.py                |    8 +-
 Documentation/core-api/conf.py       |    2 +
 Documentation/core-api/index.rst     |    2 +
 Documentation/dev-tools/conf.py      |    2 +
 Documentation/dev-tools/index.rst    |    2 +
 Documentation/doc-guide/conf.py      |    2 +
 Documentation/doc-guide/index.rst    |    2 +
 Documentation/driver-api/conf.py     |    2 +
 Documentation/driver-api/index.rst   |    2 +
 Documentation/gpu/conf.py            |    2 +
 Documentation/gpu/index.rst          |    2 +
 Documentation/media/Makefile         |    1 +
 Documentation/media/conf.py          |    2 +
 Documentation/media/index.rst        |    2 +
 Documentation/process/conf.py        |    2 +
 Documentation/process/index.rst      |    2 +
 Documentation/security/conf.py       |    2 +
 Documentation/security/index.rst     |    9 +
 Documentation/sphinx/fspath.py       |  435 +++++
 Documentation/sphinx/kernel_doc.py   | 2908 ++++++++++++++++++++++++++++++++++
 Documentation/sphinx/kerneldoc.py    |  149 --
 Documentation/sphinx/lint.py         |  121 ++
 Documentation/sphinx/manKernelDoc.py |  408 +++++
 Documentation/sphinx/rstKernelDoc.py |  560 +++++++
 Documentation/sphinx/src2rst.py      |  229 +++
 scripts/kerneldoc                    |   11 +
 scripts/kerneldoc-lint               |   11 +
 scripts/kerneldoc-src2rst            |   11 +
 31 files changed, 4748 insertions(+), 155 deletions(-)
 create mode 100644 Documentation/sphinx/fspath.py
 create mode 100755 Documentation/sphinx/kernel_doc.py
 delete mode 100644 Documentation/sphinx/kerneldoc.py
 create mode 100755 Documentation/sphinx/lint.py
 create mode 100755 Documentation/sphinx/manKernelDoc.py
 create mode 100755 Documentation/sphinx/rstKernelDoc.py
 create mode 100755 Documentation/sphinx/src2rst.py
 create mode 100755 scripts/kerneldoc
 create mode 100755 scripts/kerneldoc-lint
 create mode 100755 scripts/kerneldoc-src2rst

-- 
2.7.4

^ permalink raw reply	[flat|nested] 23+ messages in thread

* [RFC PATCH v1 1/6] kernel-doc: pure python kernel-doc parser (preparation)
  2017-01-24 19:52 [RFC PATCH v1 0/6] pure python kernel-doc parser and more Markus Heiser
@ 2017-01-24 19:52 ` Markus Heiser
  2017-01-24 19:52 ` [RFC PATCH v1 2/6] kernel-doc: replace kernel-doc perl parser with a pure python one (WIP) Markus Heiser
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 23+ messages in thread
From: Markus Heiser @ 2017-01-24 19:52 UTC (permalink / raw)
  To: Jonathan Corbet, Mauro Carvalho Chehab, Jani Nikula,
	Daniel Vetter, Matthew Wilcox
  Cc: Markus Heiser, linux-doc @ vger . kernel . org List,
	linux-kernel @ vger . kernel . org List

This is the first patch of a series which merges a pure python
implementation of the kernel-doc parser. It adds the prerequisites which
are needed by the pure python implementation (which comes later in the
series).

The fspath module in this patch is a reduced implementation of pypi's
fspath [1]. As an alternative to this patch we can add an external
dependence which can be installed from PyPi with 'pip install fspath'
see [2] (but I guess we won't more external dependencies).

[1] https://pypi.python.org/pypi/fspath/
{2] https://return42.github.io/fspath/

Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
---
 Documentation/sphinx/fspath.py | 435 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 435 insertions(+)
 create mode 100644 Documentation/sphinx/fspath.py

diff --git a/Documentation/sphinx/fspath.py b/Documentation/sphinx/fspath.py
new file mode 100644
index 0000000..554b8a2
--- /dev/null
+++ b/Documentation/sphinx/fspath.py
@@ -0,0 +1,435 @@
+# -*- coding: utf-8; mode: python -*-
+u"""
+Handling path names and executables more comfortable.
+"""
+
+# ==============================================================================
+# imports
+# ==============================================================================
+
+import io
+import os
+from os import path
+import re
+import shutil
+import subprocess
+import sys
+from glob import iglob
+
+import six
+
+# ==============================================================================
+class FSPath(six.text_type):
+# ==============================================================================
+
+    u"""
+    A path name to a file or folder.
+
+    Handling path names more comfortable, e.g.:
+
+    * concatenate path names with the division operator ``/``
+    * call functions like *mkdir* on path names
+    * get properties like *EXISTS*
+
+    .. code-block:: python
+
+       >>> folder = fspath.FSPath("/tmp")
+       >>> folder.EXISTS
+       True
+       >>> (folder / "subfolder").makedirs()
+       >>> list(folder.reMatchFind(".*sub"))
+       ['/tmp/subfolder']
+       >>> (folder / "test.txt").FILENAME
+       'test'
+       >>> (folder / "test.txt").DIRNAME
+       '/tmp'
+       >>> six.print_("topfolder" / folder)
+       topfolder/temp
+       >>> six.print_(folder + "addedstr")
+       tmpaddedstr
+       >>> six.print_((folder/"foo"/"bar.txt").splitpath())
+       (u'/tmp/foo', u'bar.txt')
+       >>> six.print_(folder / "foo" / "../bar.txt")
+       tmp/bar.txt
+    """
+
+    def __new__(cls, pathname):
+        u"""Constructor of a path name object.
+
+        Regardless of how the encoding of the file system is, the ``pathname``
+        is converted to unicode. The conversion of byte strings is based the
+        default encodings.
+
+        To issue "File-system Encoding" See also:
+
+        * https://docs.python.org/3.5/howto/unicode.html#unicode-filenames
+        """
+        pathname = path.normpath(path.expanduser(six.text_type(pathname)))
+        return super(FSPath, cls).__new__(cls, pathname)
+
+    @property
+    def VALUE(self):
+        u"""string of the path name"""
+        return six.text_type(self)
+
+    @property
+    def EXISTS(self):
+        u"""True if file/pathname exist"""
+        return path.exists(self)
+
+    @property
+    def SIZE(self):
+        u"""Size in bytes"""
+        return path.getsize(self)
+
+    @property
+    def isReadable(self):
+        u"""True if file/path is readable"""
+        return os.access(self, os.R_OK)
+
+    @property
+    def isWriteable(self):
+        u"""True if file/path is writeable"""
+        return os.access(self, os.W_OK)
+
+    @property
+    def isExecutable(self):
+        u"""True if file is executable"""
+        return os.access(self, os.X_OK)
+
+    @property
+    def ISDIR(self):
+        u"""True if path is a folder"""
+        return path.isdir(self)
+
+    @property
+    def ISFILE(self):
+        u"""True if path is a file"""
+        return path.isfile(self)
+
+    @property
+    def ISLINK(self):
+        u"""True if path is a symbolic link"""
+        return path.islink(self)
+
+    @property
+    def DIRNAME(self):
+        u"""The path name of the folder, where the file is located
+
+        E.g.: ``/path/to/folder/filename.ext`` --> ``/path/to/folder``
+        """
+        return self.__class__(path.dirname(self))
+
+    @property
+    def BASENAME(self):
+        u"""The path name with suffix, but without the folder name.
+
+        E.g.: ``/path/to/folder/filename.ext`` --> ``filename.ext``
+        """
+        return self.__class__(path.basename(self))
+
+    @property
+    def FILENAME(self):
+        u"""The path name without folder and suffix.
+
+        E.g.: ``/path/to/folder/filename.ext`` --> ``filename``
+
+        """
+        return self.__class__(path.splitext(path.basename(self))[0])
+
+    @property
+    def SUFFIX(self):
+        u"""The filename suffix
+
+        E.g.: ``/path/to/folder/filename.ext`` --> ``.ext``
+
+        """
+        return self.__class__(path.splitext(self)[1])
+
+    @property
+    def SKIPSUFFIX(self):
+        u"""The complete file name without suffix.
+
+        E.g.: ``/path/to/folder/filename.ext`` --> ``/path/to/folder/filename``
+        """
+        return self.__class__(path.splitext(self)[0])
+
+    @property
+    def ABSPATH(self):
+        u"""The absolute pathname
+
+        E.g: ``../to/../to/folder/filename.ext`` --> ``/path/to/folder/filename.ext``
+
+        """
+        return self.__class__(path.abspath(self))
+
+    @property
+    def REALPATH(self):
+        u"""The real pathname without symbolic links."""
+        return self.__class__(path.realpath(self))
+
+    @property
+    def POSIXPATH(self):
+        u"""The path name in *POSIX* notation.
+
+        Helpfull if you are on MS-Windows and need the POSIX name.
+        """
+        if os.sep == "/":
+            return six.text_type(self)
+        else:
+            p = six.text_type(self)
+            if p[1] == ":":
+                p = "/" + p.replace(":", "", 1)
+            return p.replace(os.sep, "/")
+
+    @property
+    def NTPATH(self):
+        u"""The path name in the Windows (NT) notation.
+        """
+        if os.sep == "\\":
+            return six.text_type(self)
+        else:
+            return six.text_type(self).replace(os.sep, "\\")
+
+    @property
+    def EXPANDVARS(self):
+        u"""Path with environment variables expanded."""
+        return self.__class__(path.expandvars(self))
+
+    @property
+    def EXPANDUSER(self):
+        u"""Path with an initial component of ~ or ~user replaced by that user's home."""
+        return self.__class__(path.expanduser(self))
+
+    @classmethod
+    def getHOME(cls):
+        u"""User's home folder."""
+        return cls(path.expanduser("~"))
+
+    def makedirs(self, mode=0o775):
+        u"""Recursive directory creation, default mode is 0o775 (octal)."""
+        if not self.ISDIR:
+            return os.makedirs(self, mode)
+
+    def __div__(self, pathname):
+        return self.__class__(self.VALUE + os.sep + six.text_type(pathname))
+    __truediv__ = __div__
+
+    def __rdiv__(self, pathname):
+        return self.__class__(six.text_type(pathname) + os.sep + self.VALUE)
+
+    def __add__(self, other):
+        return self.__class__(self.VALUE + six.text_type(other))
+
+    def __radd__(self, other):
+        return self.__class__(six.text_type(other) + self.VALUE)
+
+    def relpath(self, start):
+        return self.__class__(path.relpath(self, start))
+
+    def splitpath(self):
+        head, tail = path.split(self)
+        return (self.__class__(head), self.__class__(tail))
+
+    def listdir(self):
+        for name in os.listdir(self):
+            yield self.__class__(name)
+
+    def glob(self, pattern):
+        for name in  iglob(self / pattern ):
+            yield self.__class__(name)
+
+    def walk(self, topdown=True, onerror=None, followlinks=False):
+        for dirpath, dirnames, filenames in os.walk(self, topdown, onerror, followlinks):
+            yield (self.__class__(dirpath),
+                   [self.__class__(x) for x in dirnames],
+                   [self.__class__(x) for x in filenames] )
+
+    def reMatchFind(self, name, isFile=True, isDir=True, followlinks=False):
+
+        # find first e.g: next(myFolder.reMatchFind(r".*name.*"), None)
+
+        name_re = re.compile(name)
+        for folder, dirnames, filenames in self.walk(followlinks=followlinks):
+            if isDir:
+                for d_name in [x for x in dirnames if name_re.match(x)]:
+                    yield folder / d_name
+            if isFile:
+                for f_name in [x for x in filenames if name_re.match(x)]:
+                    yield folder / f_name
+
+    def suffix(self, newSuffix):
+        return self.__class__(self.SKIPSUFFIX + newSuffix)
+
+    def copyfile(self, dest, preserve=False):
+        u"""Copy the file src to the file or directory dest.
+
+        Argument preserve copies permission bits.
+        """
+        if preserve:
+            shutil.copy2(self, dest)
+        else:
+            shutil.copy(self, dest)
+
+    def copytree(self, dest, symlinks=False, ignore=None):
+        u"""Recursively copy the entire directory tree"""
+        shutil.copytree(self, dest, symlinks, ignore)
+
+    def move(self, dest):
+        u"""Move path to another location (dest)"""
+        shutil.move(self, dest)
+
+    def delete(self):
+        u"""remove file/folder"""
+        if self.ISDIR:
+            self.rmtree()
+        else:
+            os.remove(self)
+
+    def rmtree(self, ignore_errors=False, onerror=None):
+        u"""remove tree"""
+        shutil.rmtree(self, ignore_errors, onerror)
+
+    def openTextFile(
+            self, mode = 'r', encoding = 'utf-8'
+            , errors = 'strict', buffering = 1
+            , newline = None):
+        u"""Open file as text file"""
+        return io.open(
+            self, mode=mode, encoding=encoding
+            , errors=errors, buffering=buffering
+            , newline=newline)
+
+    def readFile(self, encoding='utf-8', errors='strict'):
+        u"""read entire file"""
+        with self.openTextFile(encoding=encoding, errors=errors) as f:
+            return f.read()
+
+    def Popen(self, *args, **kwargs):
+        u"""Get a ``subprocess.Popen`` object (``proc``).
+
+        The path name of the self-object is the programm to call. The program
+        arguments are given py ``*args`` and the ``*kwargs`` are passed to the
+        ``subprocess.Popen`` constructor. The ``universal_newlines=True`` is
+        true by default.
+
+        see https://docs.python.org/3/library/subprocess.html#popen-constructor
+
+        .. code-block:: python
+
+           import six
+           from fspath import FSPath
+           proc = FSPath("arp").Popen("-a",)
+           stdout, stderr = proc.communicate()
+           retVal = proc.returncode
+           six.print_("stdout: %s" % stdout)
+           six.print_("stderr: %s" % stderr)
+           six.print_("exit code = %d" % retVal)
+
+        """
+
+        defaults = {
+            'stdout'             : subprocess.PIPE,
+            'stderr'             : subprocess.PIPE,
+            'stdin'              : subprocess.PIPE,
+            'universal_newlines' : True
+            }
+        defaults.update(kwargs)
+        return subprocess.Popen([self,] + list(args), **defaults)
+
+
+# ==============================================================================
+def which(fname, findall=True):
+# ==============================================================================
+    u"""
+    Searches the fname in the enviroment ``PATH``.
+
+    This *which* is not POSIX conform, it searches the fname (without extension)
+    and the fname with one of the ".exe", ".cmd", ".bat" extension. If nothing
+    is found, ``None` is returned if something matches, a list (``set``) is
+    returned. With option ``findall=False`` the first match is returned or
+    ``None``, if nothing is found.
+    """
+    exe = ["", ".exe", ".cmd", ".bat"]
+    if sys.platform != 'win32':
+        exe = [""]
+    envpath = os.environ.get('PATH', None) or os.defpath
+
+    locations = set()
+    for folder in envpath.split(os.pathsep):
+        for ext in exe:
+            fullname = FSPath(folder + os.sep + fname + ext)
+            if fullname.ISFILE:
+                if not findall:
+                    return fullname
+                locations.add(fullname)
+    return locations or None
+
+# ==============================================================================
+def callEXE(cmd, *args, **kwargs):
+# ==============================================================================
+
+    u"""
+    Synchronous command call ``cmd`` with arguments ``*args`` .
+
+    The ``*kwargs`` are passed to the ``subprocess.Popen`` constructor. The
+    return value is a three-digit tuple ``(stdout, stderr, rc)``.
+
+    .. code-block:: python
+
+       import six
+       from fspath import callEXE
+       out, err, rc = callEXE("arp", "-a")
+
+       six.print_("stdout: %s" % out)
+       six.print_("stderr: %s" % err)
+       six.print_("exit code = %d" % rc)
+    """
+
+    exe = which(cmd, findall=False)
+    if exe is None:
+        raise IOError('command "%s" not availble!' % cmd)
+    proc = exe.Popen(*args, **kwargs)
+    stdout, stderr = proc.communicate()
+    rc = proc.returncode
+    return (stdout, stderr, rc)
+
+
+# ==============================================================================
+class DevNull(object):  # pylint: disable=R0903
+# ==============================================================================
+
+    """A dev/null file descriptor."""
+    def write(self, *args, **kwargs):
+        pass
+
+DevNull = DevNull()
+
+# ==============================================================================
+class OS_ENV(dict):
+# ==============================================================================
+
+    u"""
+    Environment object (singleton).
+
+    .. code-block:: python
+
+       >>> if OS_ENV.get("SHELL") is None:
+               OS_ENV.SHELL = "/bin/bash"
+       >>> OS_ENV.MY_NAME
+       '/bin/bash'
+    """
+    @property
+    def __dict__(self):
+        return os.environ
+
+    def __getattr__(self, attr):
+        return os.environ[attr]
+
+    def __setattr__(self, attr, val):
+        os.environ[attr] = val
+
+    def get(self, attr, default=None):
+        return os.environ.get(attr, default)
+
+OS_ENV = OS_ENV()
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [RFC PATCH v1 2/6] kernel-doc: replace kernel-doc perl parser with a pure python one (WIP)
  2017-01-24 19:52 [RFC PATCH v1 0/6] pure python kernel-doc parser and more Markus Heiser
  2017-01-24 19:52 ` [RFC PATCH v1 1/6] kernel-doc: pure python kernel-doc parser (preparation) Markus Heiser
@ 2017-01-24 19:52 ` Markus Heiser
  2017-01-25  0:13   ` Jonathan Corbet
  2017-01-24 19:52 ` [RFC PATCH v1 3/6] kernel-doc: add kerneldoc-lint command Markus Heiser
                   ` (3 subsequent siblings)
  5 siblings, 1 reply; 23+ messages in thread
From: Markus Heiser @ 2017-01-24 19:52 UTC (permalink / raw)
  To: Jonathan Corbet, Mauro Carvalho Chehab, Jani Nikula,
	Daniel Vetter, Matthew Wilcox
  Cc: Markus Heiser, linux-doc @ vger . kernel . org List,
	linux-kernel @ vger . kernel . org List

This patch is the initial merge of a pure python implementation
to parse kernel-doc comments and generate reST from.

It consist mainly of to parts, the parser module (kerneldoc.py) and the
sphinx-doc extension (rstKernelDoc.py). For the command line, there is
also a 'scripts/kerneldoc' added.::

   scripts/kerneldoc --help

The main two parts are merged 1:1 from

  https://github.com/return42/linuxdoc  commit 3991d3c

Take this as a starting point, there is a lot of work to do (WIP).
Since it is merged 1:1, you will also notice it's CodingStyle is (ATM)
not kernel compliant and it lacks a user doc ('Documentation/doc-guide').

I will send patches for this when the community agreed about
functionalities. I guess there are a lot of topics we have to agree
about. E.g. the py-implementation is more strict the perl one.  When you
build doc with the py-module you will see a lot of additional errors and
warnings compared to the sloppy perl one.

I also guess that kernel_doc.py module needs some clean up, but first
follow the patch series to see what additional functionalities it brings.

I few words about it's history: I started this implementation very early
as a part of a POC in May 2016:

  https://github.com/return42/sphkerneldoc

Later I implemented the 'linuxdoc' suite mentioned above. Since
beginning, all bug fixes has been merged which has been applied to the
perl one from the kernel's source tree.  The test was always to run the
kernel-doc parser against the whole source tree and see if there are
changes which might be a regression.  A partial history of reST output
changes can be found here:

  https://github.com/return42/sphkerneldoc/tree/master/linux_src_doc

At the beginning I spent weeks/month in testing, reviewing the logs and
the produced output. At that time, while testing against whole source
tree, I found several bugs in the perl version I ported. Sorry that I
have not mailed all those bugs to the kernel-doc ML back, but my first
aim was to implement a replacement and not to merge improvements
downwards. This was also the time I implemented several improvements in
linting. To sum, thats why I hope, a switch to py-version brings
progress without any regression.

Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
---
 Documentation/Makefile.sphinx        |    3 +-
 Documentation/conf.py                |    6 +-
 Documentation/sphinx/kernel_doc.py   | 2908 ++++++++++++++++++++++++++++++++++
 Documentation/sphinx/kerneldoc.py    |  149 --
 Documentation/sphinx/rstKernelDoc.py |  560 +++++++
 scripts/kerneldoc                    |   11 +
 6 files changed, 3483 insertions(+), 154 deletions(-)
 create mode 100755 Documentation/sphinx/kernel_doc.py
 delete mode 100644 Documentation/sphinx/kerneldoc.py
 create mode 100755 Documentation/sphinx/rstKernelDoc.py
 create mode 100755 scripts/kerneldoc

diff --git a/Documentation/Makefile.sphinx b/Documentation/Makefile.sphinx
index 707c653..626dfd0 100644
--- a/Documentation/Makefile.sphinx
+++ b/Documentation/Makefile.sphinx
@@ -37,8 +37,7 @@ HAVE_PDFLATEX := $(shell if which $(PDFLATEX) >/dev/null 2>&1; then echo 1; else
 PAPEROPT_a4     = -D latex_paper_size=a4
 PAPEROPT_letter = -D latex_paper_size=letter
 KERNELDOC       = $(srctree)/scripts/kernel-doc
-KERNELDOC_CONF  = -D kerneldoc_srctree=$(srctree) -D kerneldoc_bin=$(KERNELDOC)
-ALLSPHINXOPTS   =  $(KERNELDOC_CONF) $(PAPEROPT_$(PAPER)) $(SPHINXOPTS)
+ALLSPHINXOPTS   =  $(PAPEROPT_$(PAPER)) $(SPHINXOPTS)
 # the i18n builder cannot share the environment and doctrees with the others
 I18NSPHINXOPTS  = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
 
diff --git a/Documentation/conf.py b/Documentation/conf.py
index 1ac958c..4843903 100644
--- a/Documentation/conf.py
+++ b/Documentation/conf.py
@@ -34,7 +34,7 @@ from load_config import loadConfig
 # Add any Sphinx extension module names here, as strings. They can be
 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
 # ones.
-extensions = ['kerneldoc', 'rstFlatTable', 'kernel_include', 'cdomain']
+extensions = ['rstKernelDoc', 'rstFlatTable', 'kernel_include', 'cdomain' ]
 
 # The name of the math extension changed on Sphinx 1.4
 if major == 1 and minor > 3:
@@ -505,8 +505,8 @@ pdf_documents = [
 # kernel-doc extension configuration for running Sphinx directly (e.g. by Read
 # the Docs). In a normal build, these are supplied from the Makefile via command
 # line arguments.
-kerneldoc_bin = '../scripts/kernel-doc'
-kerneldoc_srctree = '..'
+kernel_doc_verbose_warn = False
+kernel_doc_raise_error = False
 
 # ------------------------------------------------------------------------------
 # Since loadConfig overwrites settings from the global namespace, it has to be
diff --git a/Documentation/sphinx/kernel_doc.py b/Documentation/sphinx/kernel_doc.py
new file mode 100755
index 0000000..2abea3a
--- /dev/null
+++ b/Documentation/sphinx/kernel_doc.py
@@ -0,0 +1,2908 @@
+#!/usr/bin/env python3
+# -*- coding: utf-8; mode: python -*-
+# pylint: disable=C0103,C0302,R0912,R0914,R0915
+
+u"""
+    kerneldoc
+    ~~~~~~~~~
+
+    Implementation of the ``kernel-doc`` parser.
+
+    :copyright:  Copyright (C) 2016  Markus Heiser
+    :license:    GPL Version 2, June 1991 see Linux/COPYING for details.
+
+    The kernel-doc parser extracts documentation from Linux kernel's source code
+    comments. This implements the :ref:`kernel-doc:kernel-doc-HOWTO`.
+
+    This module provides an API -- which could be used by a sphinx-doc generator
+    extension -- and a command-line interface, see ``--help``::
+
+        $ kernel-doc --help
+
+    But, the command-line is only for test, normally you don't need it.
+
+    Compared with the Perl kernel-doc script, this implementation has additional
+    features like *parse options* for a smooth integration of reStructuredText
+    (reST) markup in the kernel's source code comments. In addition, this
+    rewrite brings the functionalities, which has been spread in *docproc* and
+    make files (e.g. working with *EXPORTED_SYMBOLS*) back to the kernel-doc
+    parse process. In combination with a (separate) *kernel-doc* reST directive
+    (which uses this module), the documentation generation becomes more clear
+    and flexible.
+
+    The architecture is simple and consists of three types of objects (three
+    classes).
+
+    * class Parser: The parser parses the source-file and dumps extracted
+      kernel-doc data.
+
+    * subclasses of class TranslatorAPI: to translate the dumped kernel-doc data
+      into output formats. There exists two implementations:
+
+      - class NullTranslator: translates nothing, just parse
+
+      - class ReSTTranslator(TranslatorAPI): translates dumped kernel-doc data
+        to reST markup.
+
+    * class ParseOptions: a container full with options to control parsing an
+      translation.
+
+    With the NullTranslator a source file is parsed only once while different
+    output could be generated (multiple times) just by changing the Translator
+    (e.g. with the ReSTTranslator) and the option container.
+
+    With parsing the source files only once, the building time is reduced n-times.
+"""
+
+# ==============================================================================
+# imports
+# ==============================================================================
+
+import argparse
+import codecs
+import collections
+import copy
+import os
+import re
+import sys
+import textwrap
+
+import six
+
+from fspath import OS_ENV
+
+# ==============================================================================
+# common globals
+# ==============================================================================
+
+# The version numbering follows numbering of the specification
+# (Documentation/books/kernel-doc-HOWTO).
+__version__  = '1.0'
+
+# ==============================================================================
+# regular expresssions and helper used by the parser and the translator
+# ==============================================================================
+
+class RE(object):
+    u"""regular expression that stores last match (like Perl's ``=~`` operator)"""
+
+    def __init__(self, *args, **kwargs):
+        self.re = re.compile(*args, **kwargs)
+        self.last_match = None
+
+    def match(self, *args, **kwargs):
+        self.last_match = self.re.match(*args, **kwargs)
+        return self.last_match
+
+    def search(self, *args, **kwargs):
+        self.last_match = self.re.search(*args, **kwargs)
+        return self.last_match
+
+    def __getattr__(self, attr):
+        return getattr(self.re, attr)
+
+    def __getitem__(self, group):
+        if group < 0 or group > self.groups - 1:
+            raise IndexError("group index out of range (max %s groups)" % self.groups )
+        if self.last_match is None:
+            raise IndexError("nothing hase matched / no groups")
+        return self.last_match.group(group + 1)
+
+# these regular expresions has been *stolen* from the kernel-doc perl script.
+
+doc_start        = RE(r"^/\*\*\s*$")  # Allow whitespace at end of comment start.
+doc_end          = RE(r"\s*\*+/")
+doc_com          = RE(r"\s*\*\s*")
+doc_com_section  = RE(r"\s*\*\s{1,8}") # more than 8 spaces (one tab) as prefix is not a new section comment
+doc_com_body     = RE(r"\s*\* ?")
+doc_decl         = RE(doc_com.pattern + r"(\w+)")
+#doc_decl_ident   = RE(r"\s*([\w\s]+?)\s*[\(\)]\s*[-:]")
+doc_decl_ident   = RE(doc_com.pattern + r"(struct|union|enum|typedef|function)\s*(\w+)")
+doc_decl_purpose = RE(r"[-:](.*)$")
+
+# except pattern like "http://", a whitespace is required after the colon
+doc_sect_except  = RE(doc_com.pattern + r"[^\s@](.*)?:[^\s]")
+
+#doc_sect = RE(doc_com.pattern + r"([" + doc_special.pattern + r"]?[\w\s]+):(.*)")
+# "section header:" names must be unique per function (or struct,union, typedef,
+# enum). Additional condition: the header name should have 3 characters at least!
+doc_sect  = RE(doc_com_section.pattern
+               + r"("
+               + r"@\w[^:]*"                                 # "@foo: lorem" or
+               + r"|" + r"@\w[.\w]+[^:]*"                    # "@foo.bar: lorem" or
+               + r"|" + r"\@\.\.\."                          # ellipsis "@...: lorem" or
+               + r"|" + r"\w[\w\s]+\w"                       # e.g. "Return: lorem"
+               + r")"
+               + r":(.*?)\s*$")   # this matches also strings like "http://..." (doc_sect_except)
+
+doc_sect_reST = RE(doc_com_section.pattern
+               + r"("
+               + r"@\w[^:]*"                                 # "@foo: lorem" or
+               + r"|" + r"@\w[.\w]+[^:]*"                    # "@foo.bar: lorem" or
+               + r"|" + r"\@\.\.\."                          # ellipsis "@...: lorem" or
+               # a tribute to vintage markups, when in reST mode ...
+               + r"|description|context|returns?|notes?|examples?|introduction|intro"
+               + r")"
+               + r":(.*?)\s*$"    # this matches also strings like "http://..." (doc_sect_except)
+               , flags = re.IGNORECASE)
+
+reST_sect = RE(doc_com_section.pattern
+              + r"("
+              r"\w[\w\s]+\w"
+              + r")"
+              + r":\s*$")
+
+doc_content      = RE(doc_com_body.pattern + r"(.*)")
+doc_block        = RE(doc_com.pattern + r"DOC:\s*(.*)?")
+
+# state: 5 - gathering documentation outside main block
+doc_state5_start = RE(r"^\s*/\*\*\s*$")
+doc_state5_sect  = RE(r"\s*\*\s*(@[\w\s]+):(.*)")
+doc_state5_end   = RE(r"^\s*\*/\s*$")
+doc_state5_oneline = RE(r"^\s*/\*\*\s*(@[\w\s]+):\s*(.*)\s*\*/\s*$");
+
+# match expressions used to find embedded type information
+type_enum_full    = RE(r"(?<=\s)\&(enum)\s*([_\w]+)")
+type_struct_full  = RE(r"(?<=\s)\&(struct)\s*([_\w]+)")
+type_typedef_full = RE(r"(?<=\s)\&(typedef)\s*([_\w]+)")
+type_union_full   = RE(r"(?<=\s)\&(union)\s*([_\w]+)")
+type_member       = RE(r"(?<=\s)\&([_\w]+)((\.|->)[_\w]+)")
+type_member_func  = RE(type_member.pattern + r"\(\)")
+type_func         = RE(r"(?<=\s)(\w+)(?<!\\)\(\)")
+type_constant     = RE(r"(?<=\s)\%([-_\w]+)")
+type_param        = RE(r"(?<=\s)\@(\w+)")
+type_env          = RE(r"(?<=\s)(\$\w+)")
+type_struct       = RE(r"(?<=\s)\&((struct\s*)*[_\w]+)")
+
+esc_type_prefix  = RE(r"\\([\@\%\&\$\(])")
+
+CR_NL            = RE(r"[\r\n]")
+C99_comments     = RE(r"//.*$")
+C89_comments     = RE(r"/\*.*?\*/")
+
+C_STRUCT         = RE(r"struct\s+(\w+)\s*{(.*)}")
+C_UNION          = RE(r"union\s+(\w+)\s*{(.*)}")
+C_STRUCT_UNION   = RE(r"(struct|union)\s+(\w+)\s*{(.*)}")
+C_ENUM           = RE(r"enum\s+(\w+)\s*{(.*)}")
+C_TYPEDEF        = RE(r"typedef.*\s+(\w+)\s*;")
+
+# typedef of a function pointer
+C_FUNC_TYPEDEF   = RE(r"typedef\s+(\w+)\s*\(\*\s*(\w\S+)\s*\)\s*\((.*)\);")
+C_FUNC_TYPEDEF_2 = RE(r"typedef\s+(\w+)\s+(\w\S+)\s*\((.*)\);")
+
+MACRO            = RE(r"^#")
+MACRO_define     = RE(r"^#\s*define\s+")
+
+SYSCALL_DEFINE   = RE(r"^\s*SYSCALL_DEFINE.*\(")
+SYSCALL_DEFINE0  = RE(r"^\s*SYSCALL_DEFINE0")
+
+TP_PROTO                 = RE(r"TP_PROTO\((.*?)\)")
+TRACE_EVENT              = RE(r"TRACE_EVENT")
+TRACE_EVENT_name         = RE(r"TRACE_EVENT\((.*?),")
+DEFINE_EVENT             = RE(r"DEFINE_EVENT")
+DEFINE_EVENT_name        = RE(r"DEFINE_EVENT\((.*?),(.*?),")
+DEFINE_SINGLE_EVENT      = RE(r"DEFINE_SINGLE_EVENT")
+DEFINE_SINGLE_EVENT_name = RE(r"DEFINE_SINGLE_EVENT\((.*?),")
+
+FUNC_PROTOTYPES = [
+    # RE(r"^(\w+)\s+\(\*([a-zA-Z0-9_]+)\)\s*\(([^\(]*)\)") # match: void (*foo) (int bar);
+    RE(r"^()([a-zA-Z0-9_~:]+)\s*\(([^\(]*)\)")
+    , RE(r"^(\w+)\s+([a-zA-Z0-9_~:]+)\s*\(([^\(]*)\)")
+    , RE(r"^(\w+\s*\*+)\s*([a-zA-Z0-9_~:]+)\s*\(([^\(]*)\)")
+    , RE(r"^(\w+\s+\w+)\s+([a-zA-Z0-9_~:]+)\s*\(([^\(]*)\)")
+    , RE(r"^(\w+\s+\w+\s*\*+)\s*([a-zA-Z0-9_~:]+)\s*\(([^\(]*)\)")
+    , RE(r"^(\w+\s+\w+\s+\w+)\s+([a-zA-Z0-9_~:]+)\s*\(([^\(]*)\)")
+    , RE(r"^(\w+\s+\w+\s+\w+\s*\*+)\s*([a-zA-Z0-9_~:]+)\s*\(([^\(]*)\)")
+    , RE(r"^()([a-zA-Z0-9_~:]+)\s*\(([^\{]*)\)")
+    , RE(r"^(\w+)\s+([a-zA-Z0-9_~:]+)\s*\(([^\{]*)\)")
+    , RE(r"^(\w+\s*\*+)\s*([a-zA-Z0-9_~:]+)\s*\(([^\{]*)\)")
+    , RE(r"^(\w+\s+\w+)\s+([a-zA-Z0-9_~:]+)\s*\(([^\{]*)\)")
+    , RE(r"^(\w+\s+\w+\s*\*+)\s*([a-zA-Z0-9_~:]+)\s*\(([^\{]*)\)")
+    , RE(r"^(\w+\s+\w+\s+\w+)\s+([a-zA-Z0-9_~:]+)\s*\(([^\{]*)\)")
+    , RE(r"^(\w+\s+\w+\s+\w+\s*\*+)\s*([a-zA-Z0-9_~:]+)\s*\(([^\{]*)\)")
+    , RE(r"^(\w+\s+\w+\s+\w+\s+\w+)\s+([a-zA-Z0-9_~:]+)\s*\(([^\{]*)\)")
+    , RE(r"^(\w+\s+\w+\s+\w+\s+\w+\s*\*+)\s*([a-zA-Z0-9_~:]+)\s*\(([^\{]*)\)")
+    , RE(r"^(\w+\s+\w+\s*\*\s*\w+\s*\*+\s*)\s*([a-zA-Z0-9_~:]+)\s*\(([^\{]*)\)")
+]
+
+EXPORTED_SYMBOLS = RE(
+    r"^\s*(EXPORT_SYMBOL)(_GPL)?(_FUTURE)?\s*\(\s*(\w*)\s*\)\s*", flags=re.M)
+
+# MODULE_AUTHOR("..."); /  MODULE_DESCRIPTION("..."); / MODULE_LICENSE("...");
+#
+MODULE_INFO = RE(r'^\s*(MODULE_)(AUTHOR|DESCRIPTION|LICENSE)\s*\(\s*"([^"]+)"', flags=re.M)
+
+WHITESPACE = RE(r"\s+", flags=re.UNICODE)
+
+def normalize_ws(string):
+    u"""strip needles whitespaces.
+
+    Substitute consecutive whitespaces with one single space and strip
+    trailing/leading whitespaces"""
+
+    string = WHITESPACE.sub(" ", string)
+    return string.strip()
+
+ID_CHARS = RE(r"[^A-Za-z0-9\._]")
+
+def normalize_id(ID):
+    u"""substitude invalid chars of the ID with ``-`` and mak it lowercase"""
+    return ID_CHARS.sub("-", ID).lower()
+
+def map_text(text, map_table):
+    for regexpr, substitute in map_table:
+        if substitute is not None:
+            text = regexpr.sub(substitute, text)
+    return text
+
+# ==============================================================================
+# helper
+# ==============================================================================
+
+def openTextFile(fname, mode="r", encoding="utf-8", errors="strict", buffering=1):
+    return codecs.open(
+        fname, mode=mode, encoding=encoding
+        , errors=errors, buffering=buffering)
+
+def readFile(fname, encoding="utf-8", errors="strict"):
+    with openTextFile(fname, encoding=encoding, errors=errors) as f:
+        return f.read()
+
+class Container(dict):
+    @property
+    def __dict__(self):
+        return self
+    def __getattr__(self, attr):
+        return self[attr]
+    def __setattr__(self, attr, val):
+        self[attr] = val
+
+class DevNull(object):
+    """A dev/null file descriptor."""
+    def write(self, *args, **kwargs):
+        pass
+DevNull = DevNull()
+
+KBUILD_VERBOSE = int(OS_ENV.get("KBUILD_VERBOSE", "0"))
+KERNELVERSION  = OS_ENV.get("KERNELVERSION", "unknown kernel version")
+SRCTREE        = OS_ENV.get("srctree", "")
+GIT_REF        = ("Linux kernel source tree:"
+                  " `%(rel_fname)s <https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/"
+                  "%(rel_fname)s>`__")
+
+# ==============================================================================
+# Logging stuff
+# ==============================================================================
+
+STREAM = Container(
+    # pipes used by the application & logger
+    appl_out   = sys.__stdout__
+    , log_out  = sys.__stderr__
+    , )
+
+VERBOSE = bool(KBUILD_VERBOSE)
+DEBUG   = False
+INSPECT = False
+QUIET   = False
+
+class SimpleLog(object):
+
+    LOG_FORMAT = "%(logclass)s: %(message)s\n"
+
+    def error(self, message, **replace):
+        message = message % replace
+        replace.update(dict(message = message, logclass = "ERROR"))
+        STREAM.log_out.write(self.LOG_FORMAT % replace)
+
+    def warn(self, message, **replace):
+        if QUIET:
+            return
+        message = message % replace
+        replace.update(dict(message = message, logclass = "WARN"))
+        STREAM.log_out.write(self.LOG_FORMAT % replace)
+
+    def info(self, message, **replace):
+        if not VERBOSE:
+            return
+        message = message % replace
+        replace.update(dict(message = message, logclass = "INFO"))
+        STREAM.log_out.write(self.LOG_FORMAT % replace)
+
+    def debug(self, message, **replace):
+        if not DEBUG:
+            return
+        message = message % replace
+        replace.update(dict(message = message, logclass = "DEBUG"))
+        STREAM.log_out.write(self.LOG_FORMAT % replace)
+
+LOG = SimpleLog()
+
+# ==============================================================================
+def main():
+# ==============================================================================
+
+    global VERBOSE, DEBUG # pylint: disable=W0603
+
+    epilog = (u"This implementation of uses the kernel-doc parser"
+              " from the linuxdoc extension, for detail informations read"
+              " http://return42.github.io/sphkerneldoc/books/kernel-doc-HOWTO")
+
+    CLI = argparse.ArgumentParser(
+        description = (
+            "Parse *kernel-doc* comments from source code"
+            " and print them (with reST markup) to stdout." )
+        , epilog = epilog
+        , formatter_class=argparse.ArgumentDefaultsHelpFormatter)
+
+    CLI.add_argument(
+        "files"
+        , nargs   = "+"
+        , help    = "source file(s) to parse.")
+
+    CLI.add_argument(
+        "--id-prefix"
+        , default = ""
+        , help    = (
+            "A prefix for generated IDs. IDs are used as anchors by cross"
+            " references. They are automaticly gernerated based on the declaration"
+            " and / or section names. Declartions like 'open' or section names"
+            " like 'Intro' are very common, to make them unique, a prefix is needed."
+            " Mostly you will choose the *module* or *include filename* as prefix." ))
+
+    CLI.add_argument(
+        "--verbose", "-v"
+        , action  = "store_true"
+        , help    = "verbose output with log messages to stderr" )
+
+    CLI.add_argument(
+        "--sloppy"
+        , action  = "store_true"
+        , help    = "Sloppy linting, reports only severe errors.")
+
+    CLI.add_argument(
+        "--debug"
+        , action  = "store_true"
+        , help    = "debug messages to stderr" )
+
+    CLI.add_argument(
+        "--quiet", "-q"
+        , action  = "store_true"
+        , help    = "no messages to stderr" )
+
+    CLI.add_argument(
+        "--skip-preamble"
+        , action  = "store_true"
+        , help    = "skip preamble in the output" )
+
+    CLI.add_argument(
+        "--skip-epilog"
+        , action  = "store_true"
+        , help    = "skip epilog in the output" )
+
+    CLI.add_argument(
+        "--list-internals"
+        , choices = Parser.DOC_TYPES + ["all"]
+        , nargs   = "+"
+        , help    = "list symbols, titles or whatever is documented, but *not* exported" )
+
+    CLI.add_argument(
+        "--list-exports"
+        , action  = "store_true"
+        , help    = "list all symbols exported using EXPORT_SYMBOL" )
+
+    CLI.add_argument(
+        "--use-names"
+        , nargs   = "+"
+        , help    = "print documentation of functions, structs or whatever title/object")
+
+    CLI.add_argument(
+        "--exported"
+        , action  = "store_true"
+        , help    = ("print documentation of all symbols exported"
+                     " using EXPORT_SYMBOL macros" ))
+
+    CLI.add_argument(
+         "--internal"
+        , action  = "store_true"
+        , help    = ("print documentation of all symbols that are documented,"
+                     " but not exported" ))
+
+    CLI.add_argument(
+        "--markup"
+        , choices = ["reST", "kernel-doc"]
+        , default = "reST"
+        , help    = (
+            "Markup of the comments. Change this option only if you know"
+            " what you do. New comments must be marked up with reST!"))
+
+    CMD     = CLI.parse_args()
+    VERBOSE = CMD.verbose
+    DEBUG   = CMD.debug
+
+    if CMD.quiet:
+        STREAM.log_out = DevNull
+
+    LOG.debug(u"CMD: %(CMD)s", CMD=CMD)
+
+    retVal     = 0
+
+    for fname in CMD.files:
+        translator = ReSTTranslator()
+        opts = ParseOptions(
+            fname           = fname
+            , id_prefix     = CMD.id_prefix
+            , skip_preamble = CMD.skip_preamble
+            , skip_epilog   = CMD.skip_epilog
+            , out           = STREAM.appl_out
+            , markup        = CMD.markup
+            , verbose_warn  = not (CMD.sloppy)
+            ,)
+        opts.set_defaults()
+
+        if CMD.list_exports or CMD.list_internals:
+            # pylint: disable=R0204
+            translator = ListTranslator(CMD.list_exports, CMD.list_internals)
+            opts.gather_context = True
+
+        elif CMD.use_names:
+            opts.use_names  = CMD.use_names
+
+        elif CMD.exported or CMD.internal:
+            # gather exported symbols ...
+            src   = readFile(opts.fname)
+            ctx   = ParserContext()
+            Parser.gather_context(src, ctx)
+
+            opts.error_missing = False
+            opts.use_names     = ctx.exported_symbols
+            opts.skip_names    = []
+
+            if CMD.internal:
+                opts.use_names  = []
+                opts.skip_names = ctx.exported_symbols
+        else:
+            # if non section is choosen by use-name, internal or exclude, then
+            # use all DOC: sections
+            opts.use_all_docs = True
+
+        parser = Parser(opts, translator)
+        parser.parse()
+        parser.close()
+        if parser.errors:
+            retVal = 1
+
+    return retVal
+
+# ==============================================================================
+# API
+# ==============================================================================
+
+# ------------------------------------------------------------------------------
+class TranslatorAPI(object):
+# ------------------------------------------------------------------------------
+    u"""
+    Abstract kernel-doc translator.
+
+    :cvar list cls.HIGHLIGHT_MAP:  highlight mapping
+    :cvar tuple cls.LINE_COMMENT:  tuple with start-/end- comment tags
+    """
+
+    HIGHLIGHT_MAP = [
+        ( type_constant      , None )
+        , ( type_func        , None )
+        , ( type_param       , None )
+        , ( type_struct_full , None )
+        , ( type_struct      , None )
+        , ( type_enum_full   , None )
+        , ( type_env         , None )
+        , ( type_member_func , None )
+        , ( type_member      , None )
+        , ]
+
+    LINE_COMMENT = ("# ", "")
+
+    def __init__(self):
+        self.options = None
+        self.parser  = None
+        self.dumped_names = []
+        self.translated_names = set()
+
+    def setParser(self, parser):
+        self.parser = parser
+        self.dumped_names = []
+
+    def setOptions(self, options):
+        self.options = options
+
+    def highlight(self, cont):
+        u"""returns *highlighted* text"""
+        if self.options.highlight:
+            return map_text(cont, self.HIGHLIGHT_MAP)
+        return cont
+
+    def get_preamble(self):
+        retVal = ""
+        if self.options.preamble == "":
+            retVal = self.comment("src-file: %s" % (self.options.rel_fname or self.options.fname))
+        elif self.options.preamble:
+            retVal = self.options.preamble % self
+        return retVal
+
+    def get_epilog(self):
+        retVal = ""
+        if self.options.epilog == "":
+            retVal = self.comment(
+                "\nThis file was automatic generated / don't edit.")
+        elif self.options.epilog:
+            retVal = self.options.epilog % self
+        return retVal
+
+    @classmethod
+    def comment(cls, cont):
+        u"""returns *commented* text"""
+
+        start, end = cls.LINE_COMMENT
+        if not start and not end:
+            return cont
+
+        retVal = []
+        for line in cont.split("\n"):
+            if line.strip():
+                retVal.append(start + line + end)
+            else:
+                retVal.append("")
+        return "\n".join(retVal)
+
+    def write(self, *objects):
+        u"""Write *objects* to stream.
+
+        Write Unicode-values of the *objects* to :py:attr:``self.options.out``.
+
+        :param objects: The positional arguments are the objects with the
+            content to write.
+        """
+        for obj in objects:
+            cont = six.text_type(obj)
+            self.options.out.write(cont)
+
+    def write_comment(self, *objects):
+        u"""Write *objects* as comments to stream."""
+        for obj in objects:
+            cont = six.text_type(obj)
+            self.write(self.comment(cont))
+
+    def eof(self):
+        if self.options.eof_newline:
+            self.write("\n")
+
+    # API
+    # ---
+
+    def output_preamble(self):
+        raise NotImplementedError
+
+    def output_epilog(self):
+        raise NotImplementedError
+
+    def output_DOC(
+            self
+            , sections         = None # ctx.sections
+            , ):
+        raise NotImplementedError
+
+    def output_function_decl(
+            self
+            , function         = None # ctx.decl_name
+            , return_type      = None # ctx.return_type
+            , parameterlist    = None # ctx.parameterlist
+            , parameterdescs   = None # ctx.parameterdescs
+            , parametertypes   = None # ctx.parametertypes
+            , sections         = None # ctx.sections
+            , purpose          = None # ctx.decl_purpose
+            , ):
+        raise NotImplementedError
+
+    def output_struct_decl(
+            self
+            , decl_name        = None # ctx.decl_name
+            , decl_type        = None # ctx.decl_type
+            , parameterlist    = None # ctx.parameterlist
+            , parameterdescs   = None # ctx.parameterdescs
+            , parametertypes   = None # ctx.parametertypes
+            , sections         = None # ctx.sections
+            , purpose          = None # ctx.decl_purpose
+            , ):
+        raise NotImplementedError
+
+    def output_union_decl(self, *args, **kwargs):
+        self.output_struct_decl(*args, **kwargs)
+
+    def output_enum_decl(
+            self
+            , enum             = None # ctx.decl_name
+            , parameterlist    = None # ctx.parameterlist
+            , parameterdescs   = None # ctx.parameterdescs
+            , sections         = None # ctx.sections
+            , purpose          = None # ctx.decl_purpose
+            , ):
+        raise NotImplementedError
+
+    def output_typedef_decl(
+            self
+            , typedef          = None # ctx.decl_name
+            , sections         = None # ctx.sections
+            , purpose          = None # ctx.decl_purpose
+            , ):
+        raise NotImplementedError
+
+# ------------------------------------------------------------------------------
+class NullTranslator(TranslatorAPI):
+# ------------------------------------------------------------------------------
+    u"""
+    Null translator, translates nothing, just parse.
+    """
+    HIGHLIGHT_MAP = []
+    LINE_COMMENT = ("", "")
+
+    def output_preamble(self, *args, **kwargs):
+        pass
+    def output_epilog(self, *args, **kwargs):
+        pass
+    def output_DOC(self, *args, **kwargs):
+        pass
+    def output_function_decl(self, *args, **kwargs):
+        pass
+    def output_struct_decl(self, *args, **kwargs):
+        pass
+    def output_union_decl(self, *args, **kwargs):
+        pass
+    def output_enum_decl(self, *args, **kwargs):
+        pass
+    def output_typedef_decl(self, *args, **kwargs):
+        pass
+    def eof(self):
+        pass
+
+# ------------------------------------------------------------------------------
+class ListTranslator(TranslatorAPI):
+# ------------------------------------------------------------------------------
+
+    u"""
+    Generates a list of kernel-doc symbols.
+    """
+
+    def __init__(self, list_exported, list_internal_types
+                 , *args, **kwargs):
+        super(ListTranslator, self).__init__(*args, **kwargs)
+
+        self.list_exported       = list_exported
+        self.list_internal_types = list_internal_types
+
+        self.names = dict()
+        for t in Parser.DOC_TYPES:
+            self.names[t] = []
+
+    def get_type(self, name):
+        for t, l in self.names.items():
+            if name in l:
+                return t
+        return None
+
+    def output_preamble(self):
+        pass
+
+    def output_epilog(self):
+        pass
+
+    def output_DOC(self, sections = None):
+        for header in sections.keys():
+            self.names["DOC"].append(header)
+
+    def output_function_decl(self, **kwargs):
+        self.names["function"].append(kwargs["function"])
+
+    def output_struct_decl(self, **kwargs):
+        self.names["struct"].append(kwargs["decl_name"])
+
+    def output_union_decl(self, **kwargs):
+        self.names["union"].append(kwargs["decl_name"])
+
+    def output_enum_decl(self, **kwargs):
+        self.names["enum"].append(kwargs["enum"])
+
+    def output_typedef_decl(self, **kwargs):
+        self.names["typedef"].append(kwargs["typedef"])
+
+    def eof(self):
+
+        if self.list_exported:
+            self.parser.info("list exported symbols")
+            for name in self.parser.ctx.exported_symbols:
+                t = self.get_type(name)
+                if t is None:
+                    self.parser.warn("exported symbol '%(name)s' is undocumented"
+                                     , name = name)
+                    t = "undocumented"
+                self.write("[exported %-14s] %s \n" % (t, name))
+
+        if self.list_internal_types:
+            self.parser.info("list internal names")
+            for t, l in self.names.items():
+                if not ("all" in self.list_internal_types
+                        or t in self.list_internal_types):
+                    continue
+                for name in l:
+                    if name not in self.parser.ctx.exported_symbols:
+                        self.write("[internal %-10s] %s \n" % (t, name))
+
+# ------------------------------------------------------------------------------
+class ReSTTranslator(TranslatorAPI):
+# ------------------------------------------------------------------------------
+
+    u"""
+    Translate kernel-doc to reST markup.
+
+    :cvar list HIGHLIGHT_map: Escape common reST (in-line) markups.  Classic
+        kernel-doc comments contain characters and strings like ``*`` or
+        trailing ``_``, which are in-line markups in reST. These special strings
+        has to be masked in reST.
+
+    """
+    INDENT       = "    "
+    LINE_COMMENT = (".. ", "")
+
+    HIGHLIGHT_MAP = [
+        # the regexpr are partial *overlapping*, mind the order!
+        (   type_enum_full   , r"\ :c:type:`\1 \2 <\2>`\ " )
+        , ( type_struct_full , r"\ :c:type:`\1 \2 <\2>`\ " )
+        , ( type_typedef_full, r"\ :c:type:`\1 \2 <\2>`\ " )
+        , ( type_union_full  , r"\ :c:type:`\1 \2 <\2>`\ " )
+        , ( type_member_func , r"\ :c:type:`\1\2() <\1>`\ " )
+        , ( type_member      , r"\ :c:type:`\1\2 <\1>`\ " )
+        , ( type_func        , r"\ :c:func:`\1`\ ")
+        , ( type_constant    , r"\ ``\1``\ " )
+        , ( type_param       , r"\ ``\1``\ " )
+        , ( type_env         , r"\ ``\1``\ " )
+        , ( type_struct      , r"\ :c:type:`struct \1 <\1>`\ ")
+        # at least replace escaped %, & and $
+        , ( esc_type_prefix  , r"\1")
+        , ]
+
+    MASK_REST_INLINES = [
+        (RE(r"(\w)_([\s\*])")  , r"\1\\_\2")  # trailing underline
+        , (RE(r"([\s\*])_(\w)"), r"\1\\_\2")  # leading underline
+        , (RE(r"(\*)")   , r"\\\1")  # emphasis
+        , (RE(r"(`)")    , r"\\\1")  # interpreted text & inline literals
+        , (RE(r"(\|)")   , r"\\\1")  # substitution references
+        , ]
+
+    FUNC_PTR = RE(r"([^\(]*\(\*)\s*\)\s*\(([^\)]*)\)")
+    BITFIELD = RE(r"^(.*?)\s*(:.*)")
+
+    def highlight(self, text):
+        if self.options.markup == "kernel-doc":
+            text = map_text(text, self.MASK_REST_INLINES + self.HIGHLIGHT_MAP )
+        elif self.options.markup == "reST":
+            text = map_text(text, self.HIGHLIGHT_MAP )
+        return text
+
+    def format_block(self, content):
+        u"""format the content (string)"""
+        lines = []
+        if self.options.markup == "kernel-doc":
+            lines = [ l.strip() for l in content.split("\n")]
+        elif self.options.markup == "reST":
+            lines = [ l.rstrip() for l in content.split("\n")]
+        return "\n".join(lines)
+
+    def write_anchor(self, refname):
+        ID = refname
+        if self.options.id_prefix:
+            ID = self.options.id_prefix + "." + ID
+        ID = normalize_id(ID)
+        self.write("\n.. _`%s`:\n" % ID)
+
+    HEADER_TAGS = (
+        "#"   # level 0 / part with overline
+        "="   # level 1 / chapter with overline
+        "="   # level 2 / sec
+        "-"   # level 3 / subsec
+        "-"   # level 4 / subsubsec
+        '"' ) # level 5 / para
+
+    def write_header(self, header, sec_level=2):
+        header = self.highlight(header)
+        sectag = self.HEADER_TAGS[sec_level]
+        if sec_level < 2:
+            self.write("\n", (sectag * len(header)))
+        self.write("\n%s" % header)
+        self.write("\n", (sectag * len(header)), "\n")
+
+    def write_section(self, header, content, sec_level=2, ID=None):
+        if not self.options.no_header:
+            if ID:
+                self.write_anchor(ID)
+            self.write_header(header, sec_level=sec_level)
+        if header.lower() == "example":
+            self.write("\n.. code-block:: c\n\n")
+            for l in textwrap.dedent(content).split("\n"):
+                if not l.strip():
+                    self.write("\n")
+                else:
+                    self.write(self.INDENT, l, "\n")
+        else:
+            content = self.format_block(content)
+            content = self.highlight(content)
+            self.write("\n" + content)
+
+        self.write("\n")
+
+    def write_definition(self, term, definition, prefix=""):
+        term  = normalize_ws(term) # term has to be a "one-liner"
+        term  = self.highlight(term)
+        if definition != Parser.undescribed:
+            definition = self.format_block(definition)
+            definition = self.highlight(definition)
+        self.write("\n", prefix, term)
+        for l in textwrap.dedent(definition).split("\n"):
+            self.write("\n", prefix)
+            if l.strip():
+                self.write(self.INDENT, l)
+        self.write("\n")
+
+    def write_func_param(self, param, descr):
+        param = param.replace("*", r"\*")
+        self.write("\n", self.INDENT, param)
+
+        if descr != Parser.undescribed:
+            descr = self.format_block(descr)
+            descr = self.highlight(descr)
+        for l in textwrap.dedent(descr).split("\n"):
+            self.write("\n")
+            if l.strip():
+                self.write(self.INDENT * 2, l)
+        self.write("\n")
+
+    def output_preamble(self):
+        self.parser.ctx.offset = 0
+        if self.options.mode_line:
+            self.write_comment(
+                "-*- coding: %s; mode: rst -*-\n"
+                % (getattr(self.options.out, "encoding", "utf-8") or "utf-8").lower())
+
+        preamble = self.get_preamble()
+        if preamble:
+            self.write(preamble, "\n")
+
+        if self.options.top_title:
+            self.write_anchor(self.options.top_title)
+            self.write_header(self.options.top_title, 0)
+            if self.options.top_link:
+                self.write("\n", self.options.top_link % self.options, "\n")
+
+    def output_epilog(self):
+        self.parser.ctx.offset = 0
+        epilog = self.get_epilog()
+        if epilog:
+            self.write(epilog, "\n")
+
+    def output_DOC(self, sections = None):
+        self.parser.ctx.offset = self.parser.ctx.decl_offset
+        for header, content in sections.items():
+            self.write_section(header, content, sec_level=2, ID=header)
+
+    def output_function_decl(
+            self
+            , function         = None # ctx.decl_name
+            , return_type      = None # ctx.return_type
+            , parameterlist    = None # ctx.parameterlist
+            , parameterdescs   = None # ctx.parameterdescs
+            , parametertypes   = None # ctx.parametertypes
+            , sections         = None # ctx.sections
+            , purpose          = None # ctx.decl_purpose
+            , ):
+        self.parser.ctx.offset = self.parser.ctx.decl_offset
+        self.write_anchor(function)
+        self.write_header(function, sec_level=2)
+
+        if self.options.man_sect:
+            self.write("\n.. kernel-doc-man:: %s.%s\n" % (function, self.options.man_sect) )
+
+        # write function definition
+
+        self.write("\n.. c:function:: ")
+        if return_type and return_type.endswith("*"):
+            self.write(return_type, function, "(")
+        else:
+            self.write(return_type, " ", function, "(")
+
+        p_list = []
+
+        for p_name in parameterlist:
+            p_type = parametertypes[p_name]
+
+            if self.FUNC_PTR.search(p_type):
+                # pointer to function
+                p_list.append("%s%s)(%s)"
+                              % (self.FUNC_PTR[0], p_name, self.FUNC_PTR[1]))
+            elif p_type.endswith("*"):
+                # pointer
+                p_list.append("%s%s" % (p_type, p_name))
+            else:
+                p_list.append("%s %s" % (p_type, p_name))
+
+        p_line = ", ".join(p_list)
+        self.write(p_line, ")\n")
+
+        # purpose
+
+        if purpose:
+            self.write("\n", self.INDENT, self.highlight(purpose), "\n")
+
+        # parameter descriptions
+
+        for p_name in parameterlist:
+
+            p_type = parametertypes[p_name]
+            p_name = re.sub(r"\[.*", "", p_name)
+            p_desc = parameterdescs[p_name]
+
+            param = ""
+            if self.FUNC_PTR.search(p_type):
+                # pointer to function
+                param = ":param %s%s)(%s):" % (self.FUNC_PTR[0], p_name, self.FUNC_PTR[1])
+            elif p_type.endswith("*"):
+                # pointer & pointer to pointer
+                param = ":param %s%s:" % (p_type, p_name)
+            elif p_name == "...":
+                param = ":param %s :" % (p_name)
+            else:
+                param = ":param %s %s:" % (p_type, p_name)
+
+            self.parser.ctx.offset = parameterdescs.offsets.get(
+                p_name, self.parser.ctx.offset)
+
+            self.write_func_param(param, p_desc)
+
+            # print all the @foo.bar sub-descriptions
+            sub_descr = [x for x in parameterdescs.keys() if x.startswith(p_name + ".")]
+            for p_name in sub_descr:
+                p_desc = parameterdescs.get(p_name, None)
+                self.parser.ctx.offset = parameterdescs.offsets.get(
+                    p_name, self.parser.ctx.offset)
+                self.write_definition(p_name, p_desc)
+
+        # sections
+
+        for header, content in sections.items():
+            self.parser.ctx.offset = sections.offsets[header]
+            self.write_section(
+                header
+                , content
+                , sec_level = 3
+                , ID = function + "." + header)
+
+    def output_struct_decl(
+            self
+            , decl_name        = None # ctx.decl_name
+            , decl_type        = None # ctx.decl_type
+            , parameterlist    = None # ctx.parameterlist
+            , parameterdescs   = None # ctx.parameterdescs
+            , parametertypes   = None # ctx.parametertypes
+            , sections         = None # ctx.sections
+            , purpose          = None # ctx.decl_purpose
+            , ):
+        self.parser.ctx.offset = self.parser.ctx.decl_offset
+        self.write_anchor(decl_name)
+        self.write_header("%s %s" % (decl_type, decl_name), sec_level=2)
+
+        if self.options.man_sect:
+            self.write("\n.. kernel-doc-man:: %s.%s\n" % (decl_name, self.options.man_sect) )
+
+        # write struct definition
+        # see https://github.com/sphinx-doc/sphinx/issues/2713
+        self.write("\n.. c:type:: struct %s\n\n" % decl_name)
+
+        # purpose
+
+        if purpose:
+            self.write(self.INDENT, self.highlight(purpose), "\n")
+
+        # definition
+
+        self.write_anchor(decl_name + "." + Parser.section_def)
+        self.write_header(Parser.section_def, sec_level=3)
+        self.write("\n.. code-block:: c\n\n")
+        self.write(self.INDENT, decl_type, " ", decl_name, " {\n")
+
+        for p_name in parameterlist:
+            p_type = parametertypes[p_name]
+
+            if MACRO.match(p_name):
+                self.write(self.INDENT, "%s\n" % p_name)
+
+            elif self.FUNC_PTR.search(p_type):
+                # pointer to function
+                self.write(
+                    self.INDENT * 2
+                    , "%s%s)(%s);\n" % (self.FUNC_PTR[0], p_name, self.FUNC_PTR[1]))
+
+            elif self.BITFIELD.match(p_type):
+                self.write(
+                    self.INDENT * 2
+                    , "%s %s%s;\n" % (self.BITFIELD[0], p_name, self.BITFIELD[1]))
+            elif p_type.endswith("*"):
+                # pointer
+                self.write(
+                    self.INDENT * 2
+                    , "%s%s;\n" % (p_type, p_name))
+
+            else:
+                self.write(
+                    self.INDENT * 2
+                    , "%s %s;\n" % (p_type, p_name))
+
+        self.write(self.INDENT, "}\n")
+
+        # member description
+
+        self.write_anchor(decl_name + "." + Parser.section_members)
+        self.write_header(Parser.section_members, sec_level=3)
+
+        for p_name in parameterlist:
+            if MACRO.match(p_name):
+                continue
+            p_name = re.sub(r"\[.*", "", p_name)
+            p_desc = parameterdescs.get(p_name, None)
+
+            if p_desc is not None:
+                self.parser.ctx.offset = parameterdescs.offsets.get(
+                    p_name, self.parser.ctx.offset)
+                self.write_definition(p_name, p_desc)
+
+            # print all the @foo.bar sub-descriptions
+            sub_descr = [x for x in parameterdescs.keys() if x.startswith(p_name + ".")]
+            for p_name in sub_descr:
+                p_desc = parameterdescs.get(p_name, None)
+                self.parser.ctx.offset = parameterdescs.offsets.get(
+                    p_name, self.parser.ctx.offset)
+                self.write_definition(p_name, p_desc)
+
+        # sections
+
+        for header, content in sections.items():
+            self.parser.ctx.offset = sections.offsets[header]
+            self.write_section(
+                header
+                , content
+                , sec_level = 3
+                , ID = decl_name + "." + header)
+
+    def output_enum_decl(
+            self
+            , enum             = None # ctx.decl_name
+            , parameterlist    = None # ctx.parameterlist
+            , parameterdescs   = None # ctx.parameterdescs
+            , sections         = None # ctx.sections
+            , purpose          = None # ctx.decl_purpose
+            , ):
+        self.parser.ctx.offset = self.parser.ctx.decl_offset
+        self.write_anchor(enum)
+        self.write_header("enum %s" % enum, sec_level=2)
+
+        if self.options.man_sect:
+            self.write("\n.. kernel-doc-man:: %s.%s\n" % (enum, self.options.man_sect) )
+
+        # write union definition
+        # see https://github.com/sphinx-doc/sphinx/issues/2713
+        self.write("\n.. c:type:: enum %s\n\n" % enum)
+
+        # purpose
+
+        if purpose:
+            self.write(self.INDENT, self.highlight(purpose), "\n")
+
+        # definition
+
+        self.write_anchor(enum + "." + Parser.section_def)
+        self.write_header(Parser.section_def, sec_level=3)
+        self.write("\n.. code-block:: c\n\n")
+        self.write(self.INDENT, "enum ", enum, " {")
+
+        e_list = parameterlist[:]
+        while e_list:
+            e = e_list.pop(0)
+            if MACRO.match(e):
+                self.write("\n", self.INDENT, e)
+            else:
+                self.write("\n", self.INDENT * 2, e)
+            if e_list:
+                self.write(",")
+        self.write("\n", self.INDENT, "};\n")
+
+        # constants description
+
+        self.write_anchor(enum + "." + Parser.section_constants)
+        self.write_header(Parser.section_constants, sec_level=3)
+
+        for p_name in parameterlist:
+            p_desc = parameterdescs.get(p_name, None)
+            self.parser.ctx.offset = parameterdescs.offsets.get(
+                p_name, self.parser.ctx.offset)
+            if p_desc is None:
+                continue
+            self.write_definition(p_name, p_desc)
+
+        # sections
+
+        for header, content in sections.items():
+            self.parser.ctx.offset = sections.offsets[header]
+            self.write_section(
+                header
+                , content or "???"
+                , sec_level = 3
+                , ID = enum + "." + header)
+
+    def output_typedef_decl(
+            self
+            , typedef          = None # ctx.decl_name
+            , sections         = None # ctx.sections
+            , purpose          = None # ctx.decl_purpose
+            , ):
+        self.parser.ctx.offset = self.parser.ctx.decl_offset
+        self.write_anchor(typedef)
+        self.write_header("typedef %s" % typedef, sec_level=2)
+
+        if self.options.man_sect:
+            self.write("\n.. kernel-doc-man:: %s.%s\n" % (typedef, self.options.man_sect) )
+
+        # write typdef definition
+        # see https://github.com/sphinx-doc/sphinx/issues/2713
+        self.write("\n.. c:type:: typedef %s\n\n" % typedef)
+        if purpose:
+            self.write(self.INDENT, self.highlight(purpose), "\n")
+
+        for header, content in sections.items():
+            self.parser.ctx.offset = sections.offsets[header]
+            self.write_section(
+                header
+                , content or "???"
+                , sec_level = 3
+                , ID = typedef + "." + header)
+
+# ------------------------------------------------------------------------------
+class ParseOptions(Container):
+# ------------------------------------------------------------------------------
+
+    PARSE_OPTION_RE = r"^/\*+\s*parse-%s:\s*([a-zA-Z0-9_-]*?)\s*\*/+\s*$"
+    PARSE_OPTIONS   = [
+        ("highlight", ["on","off"], "setOnOff")
+        , ("INSPECT", ["on","off"], "setINSPECT")
+        , ("markup",  ["reST", "kernel-doc"], "setVal")
+        , ("SNIP",    [], "setVal")
+        , ("SNAP",    [], "snap")
+        , ]
+
+    def dumpOptions(self):
+        # dumps options which are variable from parsing source-code
+        return dict(
+            highlight = self.highlight
+            , markup  = self.markup )
+
+    def __init__(self, *args, **kwargs):
+
+        self.id_prefix      = None  # A prefix for generated IDs.
+        self.out            = None  # File descriptor for output.
+        self.eof_newline    = True  # write newline on end of file
+
+        self.src_tree       = SRCTREE # root of the kernel sources
+        self.rel_fname      = ""      # pathname relative to src_tree
+        self.fname          = ""      # absolute pathname
+
+        # self.encoding: the input encoding (encoding of the parsed source
+        # file), the output encoding could be seek from the file-descriptor at
+        # self.out.
+
+        self.encoding       = "utf-8"
+        self.tab_width      = 8  # tab-stops every n chars
+
+        # control which content to print
+
+        self.use_names     = []    # positive list of names to print / empty list means "print all"
+        self.skip_names    = []    # negative list of names (not to print)
+        self.use_all_docs  = False # True/False print all "DOC:" sections
+        self.no_header     = False # skip section header
+        self.error_missing = True  # report missing names as errors / else warning
+        self.verbose_warn  = True  # more warn messages
+
+        # self.gather_context: [True/False] Scan additional context from the
+        # parsed source. E.g.: The list of exported symbols is a part of the
+        # parser's context. If the context of exported symbols is needed, we
+        # have to parse twice. First to find exported symbols, store them in the
+        # context and a second once for *normal* parsing within this modified
+        # *context*.
+
+        self.gather_context    = False
+
+        # epilog / preamble
+
+        self.skip_preamble  = False
+        self.skip_epilog    = False
+        self.mode_line      = True  # write mode-line in the very first line
+        self.top_title      = ""    # write a title on top of the preamble
+        self.top_link       = ""    # if top_title, add link to the *top* of the preamble
+        self.preamble       = ""    # additional text placed into the preamble
+        self.epilog         = ""    # text placed into the epilog
+
+        # default's of filtered PARSE_OPTIONS
+
+        self.opt_filters    = dict()
+        self.markup         = "reST"
+        self.highlight      = True  # switch highlighting on/off
+        self.man_sect       = None  # insert ".. kernel-doc-man:" directive, section no self.man_sect
+        self.add_filters(self.PARSE_OPTIONS)
+
+        # SNIP / SNAP
+        self.SNIP = None
+
+        super(ParseOptions, self).__init__(self, *args, **kwargs)
+
+        # absolute and relativ filename
+
+        if self.src_tree and self.fname and os.path.isabs(self.fname):
+            # if SCRTREE and abspath fname is given, determine relativ pathname
+            self.rel_fname = os.path.relpath(self.fname, self.src_tree)
+
+        if self.src_tree and self.fname and not os.path.isabs(self.fname):
+            # if SCRTREE and relative fname is given, drop fname and set rel_fname
+            self.rel_fname = self.fname
+            self.fname = ""
+
+        if self.src_tree and self.rel_fname:
+            self.fname = os.path.join(self.src_tree, self.rel_fname)
+        else:
+            LOG.warn("no relative pathname given / no SRCTREE: "
+                     " features based on these settings might not work"
+                     " as expected!")
+        if not self.fname:
+            LOG.error("no source file given!")
+        else:
+            self.fname = os.path.abspath(self.fname)
+
+    def set_defaults(self):
+
+        # default top title and top link
+
+        if self.fname and self.top_title == "":
+            self.top_title = os.path.basename(self.fname)
+        if self.top_title:
+            self.top_title = self.top_title % self
+
+        if self.top_link == "":
+            if self.rel_fname:
+                self.top_link  = GIT_REF % self
+            else:
+                LOG.warn("missing SRCTREE, can't set *top_link* option")
+        if self.top_link:
+            self.top_link = self.top_link % self
+
+    def add_filters(self, parse_options):
+
+        def setINSPECT(name, val): # pylint: disable=W0613
+            global INSPECT         # pylint: disable=W0603
+            INSPECT = bool(val == "on")
+
+        _actions = dict(
+            setOnOff     = lambda name, val: ( name, bool(val == "on") )
+            , setVal     = lambda name, val: ( name, val )
+            , snap       = lambda name, val: ( "SNIP", "" )
+            , setINSPECT = setINSPECT
+            , )
+
+        for option, val_list, action in parse_options:
+            self.opt_filters[option] = (
+                RE(self.PARSE_OPTION_RE % option), val_list, _actions[action])
+
+    def filter_opt(self, line, parser):
+
+        for name, (regexpr, val_list, action) in self.opt_filters.items():
+            if regexpr.match(line):
+                line  = None
+                value = regexpr[0]
+                if val_list and value not in val_list:
+                    parser.error("unknown parse-%(name)s value: '%(value)s'"
+                               , name=name, value=value)
+                else:
+                    opt_val = action(name, value)
+                    if opt_val  is not None:
+                        name, value = opt_val
+                        self[name]  = value
+                    parser.info(
+                        "set parse-option: %(name)s = '%(value)s'"
+                        , name=name, value=value)
+                break
+        return line
+
+# ------------------------------------------------------------------------------
+class ParserContext(Container):
+# ------------------------------------------------------------------------------
+
+    def dumpCtx(self):
+        # dumps options which are variable from parsing source-code
+        return dict(
+            decl_offset = self.decl_offset )
+
+    def __init__(self, *args, **kwargs):
+        self.line_no           = 0
+        self.contents          = ""
+        self.section           = Parser.section_default
+
+        # self.sections: ordered dictionary (list) of sections as they appear in
+        # the source. The sections are set by Parser.dump_section
+        self.sections          = collections.OrderedDict()
+        self.sectcheck         = []
+
+        self.prototype         = ""
+        self.last_identifier   = ""
+
+        # self.parameterlist: ordered list of the parameters as they appear in
+        # the source. The parameter-list is set by Parser.push_parameter and
+        # Parser.dump_enum
+        self.parameterlist     = []
+
+        # self.parametertypes: dictionary of <parameter-name>:<type>
+        # key/values of the parameters. Set by Parser.push_parameter
+        self.parametertypes    = dict()
+
+        # self.parameterdescs: dictionary of <'@parameter'>:<description>
+        # key/values of the parameters. Set by Parser.dump_section
+        self.parameterdescs    = collections.OrderedDict()
+
+        # self.constants: dictionary of <'%CONST'>:<description>
+        # key/values. Set by Parser.dump_section
+        self.constants         = dict()
+
+        self.decl_name         = ""
+        self.decl_type         = ""  # [struct|union|enum|typedef|function]
+        self.decl_purpose      = ""
+        self.return_type       = ""
+
+        #self.struct_actual     = ""
+
+        # Additional context from the parsed source
+
+        # self.exported: list of exported symbols
+        self.exported_symbols  = []
+
+        # self.mod_xxx: Module informations
+        self.mod_authors       = []
+        self.mod_descr         = ""
+        self.mod_license       = ""
+
+        # SNIP / SNAP
+        self.snippets  = collections.OrderedDict()
+
+        # the place, where type dumps are stored
+        self.dump_storage = []
+
+        # memo line numbers
+        self.offset = 0
+        self.last_offset = 0
+        self.decl_offset = 0
+        self.sections.offsets = dict()
+        self.parameterdescs.offsets = dict()
+
+        super(ParserContext, self).__init__(self, *args, **kwargs)
+
+    def new(self):
+        return self.__class__(
+            line_no            = self.line_no
+            , exported_symbols = self.exported_symbols
+            , snippets         = self.snippets
+            , dump_storage     = self.dump_storage )
+
+
+class ParserBuggy(RuntimeError):
+    u"""Exception raised when the parser implementation seems buggy.
+
+    The parser implementation perform some integrity tests at runtime.  This
+    exception type mainly exists to improve the regular expressions which are
+    used to parse and analyze the kernels source code.
+
+    In the exception message the last position the parser parsed is stored, this
+    position may, but does not need to be related with the exception (it is only
+    an additional information which might help).
+
+    Under normal circumstances, exceptions of this type should never arise,
+    unless the implementation of the parser is buggy."""
+
+    def __init__(self, parserObj, message):
+
+        message = ("last parse position %s:%s\n"
+                   % (parserObj.ctx.line_no, parserObj.options.fname)
+                   + message)
+        super(ParserBuggy, self).__init__(message)
+        self.parserObj = parserObj
+
+# ------------------------------------------------------------------------------
+class Parser(SimpleLog):
+# ------------------------------------------------------------------------------
+
+    u"""
+    kernel-doc comments parser
+
+    States:
+
+    * 0 - normal code
+    * 1 - looking for function name
+    * 2 - scanning field start.
+    * 3 - scanning prototype.
+    * 4 - documentation block
+    * 5 - gathering documentation outside main block (see Split Doc State)
+
+    Split Doc States:
+
+    * 0 - Invalid (Before start or after finish)
+    * 1 - Is started (the /\\*\\* was found inside a struct)
+    * 2 - The @parameter header was found, start accepting multi paragraph text.
+    * 3 - Finished (the \\*/ was found)
+    * 4 - Error: Comment without header was found. Spit a error as it's not
+          proper kernel-doc and ignore the rest.
+    """
+
+    LOG_FORMAT = "%(fname)s:%(line_no)s: :%(logclass)s: %(message)s\n"
+
+    # DOC_TYPES: types of documentation gathered by the parser
+    DOC_TYPES      = ["DOC", "function", "struct", "union", "enum", "typedef"]
+
+    undescribed      = "*undescribed*"
+
+    section_descr     = "Description"
+    section_def       = "Definition"
+    section_members   = "Members"
+    section_constants = "Constants"
+    section_intro     = "Introduction"
+    section_context   = "Context"
+    section_return    = "Return"
+    section_default   = section_descr
+
+    special_sections  = [ section_descr
+                          , section_def
+                          , section_members
+                          , section_constants
+                          , section_context
+                          , section_return ]
+
+    def __init__(self, options, translator):
+        super(Parser, self).__init__()
+
+        # raw data akku
+        self.rawdata    = ""
+
+        # flags:
+        self.state = 0
+        self.split_doc_state   = 0
+        self.in_doc_sect       = False
+        self.in_purpose        = False
+        self.brcount           = 0
+        self.warnings          = 0
+        self.errors            = 0
+        self.anon_struct_union = False
+
+        self.options    = None
+        self.translator = None
+        self.ctx        = ParserContext()
+
+        self.setTranslator(translator)
+        self.setOptions(options)
+
+    def setTranslator(self, translator):
+        self.translator = translator
+        self.translator.setParser(self)
+        self.translator.setOptions(self.options)
+
+    def setOptions(self, options):
+        self.options = options
+        self.translator.setOptions(options)
+
+    def reset_state(self):
+        self.ctx = self.ctx.new()
+        self.state             = 0
+        self.split_doc_state   = 0
+        self.in_doc_sect       = False
+        self.in_purpose        = False
+        self.brcount           = 0
+        self.anon_struct_union = False
+
+    # ------------------------------------------------------------
+    # Log
+    # ------------------------------------------------------------
+
+    def error(self, message, _line_no=None, **replace):
+        replace["fname"]   = self.options.fname
+        replace["line_no"] = replace.get("line_no", self.ctx.line_no)
+        self.errors += 1
+        super(Parser, self).error(message, **replace)
+
+    def warn(self, message, _line_no=None, **replace):
+        replace["fname"]   = self.options.fname
+        replace["line_no"] = replace.get("line_no", self.ctx.line_no)
+        self.warnings += 1
+        super(Parser, self).warn(message, **replace)
+
+    def info(self, message, _line_no=None, **replace):
+        replace["fname"]   = self.options.fname
+        replace["line_no"] = replace.get("line_no", self.ctx.line_no)
+        super(Parser, self).info(message, **replace)
+
+    def debug(self, message, _line_no=None, **replace):
+        replace["fname"]   = self.options.fname
+        replace["line_no"] = replace.get("line_no", self.ctx.line_no)
+        super(Parser, self).debug(message, **replace)
+
+    # ------------------------------------------------------------
+    # state parser
+    # ------------------------------------------------------------
+
+    @classmethod
+    def gather_context(cls, src, ctx):
+        u"""Scan source about context informations.
+
+        Scans *whole* source (e.g. :py:attr:`Parser.rawdata`) about data relevant
+        for the context (e.g. exported symbols).
+
+        Names of exported symbols gathered in :py:attr:`ParserContext.exported`.
+        The list contains names (symbols) which are exported using the
+        EXPORT_SYMBOL macro.
+
+        * ``EXPORT_SYMBOL(<name>)``
+        * ``EXPORT_SYMBOL_GPL(<name>)``
+        * ``EXPORT_SYMBOL_GPL_FUTURE(<name>)``)
+
+        .. hint::
+
+          A exported symbol does not necessarily have a corresponding source code
+          comment with a documentation.
+
+        Module information comes from the ``MODULE_xxx`` macros.  Module
+        informations are gathered in ``ParserContext.module_xxx``:
+
+        * ``MODULE_AUTHOR("...")``: Author entries are collected in a list in
+          :py:attr:`ParserContext.mod_authors`
+
+        * ``MODULE_DESCRIPTION("...")``: A concatenated string in
+          :py:attr:`ParserContext.mod_descr`
+
+        * ``MODULE_LICENSE("...")``: String with comma separated licenses in
+          :py:attr:`ParserContext.mod_license`.
+
+        .. hint::
+
+           While parsing header files, about kernel-doc, you will not find the
+           ``MODULE_xxx`` macros, because they are commonly used in the ".c"
+           files.
+        """
+
+        LOG.debug("gather_context() regExp: %(pattern)s", pattern=EXPORTED_SYMBOLS.pattern)
+        for match in EXPORTED_SYMBOLS.findall(src):
+            name = match[3]
+            LOG.info("exported symbol: %(name)s", name = name)
+            ctx.exported_symbols.append(name)
+
+        LOG.debug("gather_context() regExp: %(pattern)s", pattern=MODULE_INFO.pattern)
+
+        for match in MODULE_INFO.findall(src):
+            info_type = match[1]
+            content   = match[2]
+            if info_type == "AUTHOR":
+                ctx.mod_authors.append(content)
+            elif info_type == "DESCRIPTION":
+                ctx.mod_descr   += content + " "
+            elif info_type == "LICENSE":
+                ctx.mod_license += content + ", "
+
+        LOG.info("mod_authors: %(x)s",  x = ctx.mod_authors)
+        LOG.info("mod_descr: %(x)s",    x = ctx.mod_descr)
+        LOG.info("mod_license : %(x)s", x = ctx.mod_license)
+
+    def parse(self, src=None): # start parsing
+        self.dump_preamble()
+        if src is not None:
+            for line in src:
+                self.feed(line)
+        else:
+            with openTextFile(self.options.fname, encoding=self.options.encoding) as src:
+                for line in src:
+                    self.feed(line)
+        self.dump_epilog()
+        self.translator.eof()
+
+    def parse_dump_storage(self, translator=None, options=None):
+        if options is not None:
+            self.setOptions(options)
+        if translator is not None:
+            self.setTranslator(translator)
+        self.dump_preamble()
+        for name, out_type, opts, ctx, kwargs in self.ctx.dump_storage:
+            self.options.update(opts)
+            self.ctx.update(ctx)
+            self.output_decl(name, out_type, **kwargs)
+        self.dump_epilog()
+        self.translator.eof()
+
+    def close(self):           # end parsing
+        self.feed("", eof=True)
+        # log requested but missed documentation
+        log_missed = self.error
+        if not self.options.error_missing:
+            log_missed = self.warn
+
+        if isinstance(self.translator, NullTranslator):
+            # the NullTranslator does not translate / translated_names is
+            # empty
+            pass
+        elif isinstance(self.translator, ListTranslator):
+            self.parse_dump_storage()
+        else:
+            for name in self.options.use_names:
+                if name not in self.translator.translated_names:
+                    log_missed("no documentation for '%(name)s' found", name=name)
+
+        if self.errors or self.warnings:
+            self.warn("total errors: %(errors)s / total warnings: %(warnings)s"
+                      , errors=self.errors, warnings=self.warnings)
+            self.warnings -= 1
+        global INSPECT # pylint: disable=W0603
+        INSPECT = False
+
+    def feed(self, data, eof=False):
+        self.rawdata = self.rawdata + data
+
+        if self.options.gather_context:
+            # Scan additional context from the parsed source. For this, collect
+            # all lines in self.rawdata until EOF. On EOF, scan rawdata about
+            # (e.g.) exported symbols and after this, continue with the *normal*
+            # parsing.
+            if not eof:
+                return
+            else:
+                self.gather_context(self.rawdata, self.ctx)
+
+        lines = self.rawdata.split("\n")
+
+        if not eof:
+            # keep last line, until EOF
+            self.rawdata = lines[-1]
+            lines = lines[:-1]
+
+        for l in lines:
+            l = l.expandtabs(self.options.tab_width)
+            self.ctx.line_no += 1
+            l = self.options.filter_opt(l, self)
+            if l is None:
+                continue
+
+            if self.options.SNIP:
+                # record snippet
+                val = self.ctx.snippets.get(self.options.SNIP, "")
+                if val or l:
+                    self.ctx.snippets[self.options.SNIP] = val + l + "\n"
+
+            state = getattr(self, "state_%s" % self.state)
+            try:
+                state(l)
+            except Exception as _exc:
+                self.warn("total errors: %(errors)s / warnings: %(warnings)s"
+                           , errors=self.errors, warnings=self.warnings)
+                self.warnings -= 1
+                self.error("unhandled exception in line: %(l)s", l=l)
+                raise
+
+    def output_decl(self, name, out_type, **kwargs):
+        self.ctx.offset = self.ctx.decl_offset
+
+        if name in self.translator.dumped_names:
+            self.error("name '%s' used several times" % name)
+        self.translator.dumped_names.append(name)
+
+        if isinstance(self.translator, NullTranslator):
+            self.ctx.dump_storage.append(
+                ( name
+                  , out_type
+                  , self.options.dumpOptions()
+                  , self.ctx.dumpCtx()
+                  , copy.deepcopy(kwargs) ) )
+            return
+
+        do_translate = False
+        if name in self.options.skip_names:
+            do_translate = False
+        elif name in self.options.use_names:
+            do_translate = True
+        elif out_type != "DOC"  and not self.options.use_names:
+            do_translate = True
+        elif out_type == "DOC" and self.options.use_all_docs:
+            do_translate = True
+        if do_translate:
+            self.translator.translated_names.add(name)
+            out_func = getattr(self.translator, "output_%s" % out_type)
+            out_func(**kwargs)
+        else:
+            self.debug("skip translation of %(t)s: '%(n)s'", t=out_type, n=name)
+
+    def state_0(self, line):
+        u"""state: 0 - normal code"""
+
+        if doc_start.match(line):
+            self.debug("START: kernel-doc comment / switch state 0 --> 1")
+            self.ctx.decl_offset = self.ctx.line_no + 1
+            self.state = 1
+            self.in_doc_sect = False
+
+    def state_1(self, line):
+        u"""state: 1 - looking for function name"""
+
+        if doc_block.match(line):
+            self.debug("START: DOC block / switch state 1 --> 4")
+            self.ctx.last_offset = self.ctx.line_no + 1
+            self.state = 4
+            self.ctx.contents = ""
+            self.ctx.section =  self.section_intro
+            if doc_block[0].strip():
+                self.ctx.section = self.sect_title(doc_block[0])
+            self.info("DOC: %(sect)s", sect=self.ctx.section)
+
+        elif doc_decl.match(line):
+            self.debug("START: declaration / switch state 1 --> 2")
+            self.ctx.last_offset = self.ctx.line_no + 1
+            self.state = 2
+
+            identifier = doc_decl[0].strip()
+            self.ctx.decl_type = "function"
+            if doc_decl_ident.match(line):
+                identifier = doc_decl_ident[1]
+                self.ctx.decl_type = doc_decl_ident[0]
+            self.ctx.last_identifier = identifier.strip()
+
+            self.debug("FLAG: in_purpose=True")
+            self.in_purpose = True
+            self.info("scanning doc for: %(t)s '%(i)s'", t=self.ctx.decl_type, i = identifier)
+
+            self.ctx.decl_purpose = ""
+            if doc_decl_purpose.search(line):
+                self.ctx.decl_purpose = doc_decl_purpose[0].strip()
+
+            if not self.ctx.decl_purpose:
+                self.warn("missing initial short description of '%(i)s'"
+                          , i=self.ctx.last_identifier)
+
+        else:
+            self.warn("can't understand: -->|%(line)s|<--"
+                      " - I thought it was a doc line" , line=line)
+            self.state = 0
+
+    def sect_title(self, title):
+        u"""Normalize common section titles"""
+        # fix varius notations for the "Return:" section
+
+        retVal = title
+
+        if title.lower() in ["description", ]:
+            retVal = self.section_descr
+
+        elif title.lower() in ["introduction", "intro"]:
+            retVal = self.section_intro
+
+        elif title.lower() in ["context", ]:
+            retVal = self.section_context
+
+        elif title.lower() in ["return", "returns"]:
+            retVal = self.section_return
+
+        return retVal
+
+    def state_2(self, line):
+        u"""state: 2 - scanning field start. """
+
+        new_sect = ""
+        new_cont = ""
+
+        if not doc_sect_except.match(line):
+
+            # probe different sect start pattern ...
+
+            if self.options.markup == "reST":
+                if doc_sect_reST.match(line):
+                    # this is a line with a parameter definition or vintage
+                    # section "Context: lorem", "Return: lorem" etc.
+                    new_sect = self.sect_title(doc_sect_reST[0].strip())
+                    new_cont = doc_sect_reST[1].strip()
+                elif reST_sect.match(line):
+                    # this is a line with a section definition "Section name:\n"
+                    new_sect = self.sect_title(reST_sect[0].strip())
+                    new_cont = ""
+
+                # Sub-sections in parameter descriptions are not provided,
+                # with the exception of special_sections names. To allow
+                # comments like:
+                #   * @arg: lorem
+                #   * Return: foo
+                if (new_sect
+                    and self.ctx.section.startswith("@")
+                    and not new_sect.startswith("@")
+                    and not new_sect in self.special_sections ):
+                    new_sect = ""
+                    new_cont = ""
+
+            else:  # kernel-doc vintage mode
+                if doc_sect.match(line):
+                    # this is a line with a parameter or section definition
+                    new_sect = self.sect_title(doc_sect[0].strip())
+                    new_cont = doc_sect[1].strip()
+
+        if new_sect:
+
+            # a new section starts *here*
+
+            self.debug("found new section --> %(sect)s", sect=new_sect)
+
+            if self.ctx.contents.strip():
+                if not self.in_doc_sect:
+                    self.warn("contents before sections '%(c)s'" , c=self.ctx.contents.strip())
+                self.dump_section(self.ctx.section, self.ctx.contents)
+                self.ctx.section  = self.section_default
+                self.ctx.contents = ""
+
+            self.debug("new_sect: '%(sec)s' / desc: '%(desc)s'", sec = new_sect, desc = new_cont)
+            self.ctx.last_offset = self.ctx.line_no
+
+            self.in_doc_sect = True
+            self.in_purpose  = False
+            self.debug("FLAGs: in_doc_sect=%(s)s / in_purpose=%(p)s", s=self.in_doc_sect, p=self.in_purpose)
+
+            self.ctx.section  = new_sect
+            if new_cont:
+                self.ctx.contents = new_cont + "\n"
+            self.info("section: %(sec)s" , sec=self.ctx.section)
+
+        elif doc_end.search(line):
+
+            # end of the comment-block
+
+            if self.ctx.contents:
+                self.dump_section(self.ctx.section, self.ctx.contents)
+                self.ctx.section  = self.section_default
+                self.ctx.contents = ""
+
+            # look for doc_com + <text> + doc_end:
+            if RE(doc_com.pattern + r"[a-zA-Z_0-9:\.]+" + doc_end.pattern).match(line):
+                self.warn("suspicious ending line")
+
+            self.ctx.prototype = ""
+            self.debug("END doc block / switch state 2 --> 3")
+            self.debug("end of doc comment, looking for prototype")
+            self.state   = 3
+            self.brcount = 0
+
+        elif doc_content.match(line):
+
+            # a comment line with *content* of a section or a *purpose*
+
+            cont_line = doc_content[0]
+
+            if not cont_line.strip():
+                # it's a empty line
+
+                if self.in_purpose:
+
+                    # empty line after short description (*purpose*) introduce the
+                    # "Description" section
+
+                    self.debug("found empty line in *purpose* --> start 'Description' section")
+                    if self.ctx.contents.strip():
+                        if not self.in_doc_sect:
+                            self.warn("contents before sections '%(c)s'" , c=self.ctx.contents.strip())
+                        self.dump_section(self.ctx.section, self.ctx.contents)
+
+                    self.ctx.section  = self.section_descr
+                    self.ctx.contents = ""
+                    self.in_doc_sect  = True
+                    self.in_purpose   = False
+                    self.debug("FLAGs: in_doc_sect=%(s)s / in_purpose=%(p)s", s=self.in_doc_sect, p=self.in_purpose)
+
+                elif (self.ctx.section.startswith("@")
+                      or self.ctx.section == self.section_context):
+
+                    # miguel-style comment kludge, look for blank lines after @parameter
+                    # line to signify start of description
+
+                    self.debug("blank lines after @parameter --> start 'Description' section")
+                    self.dump_section(self.ctx.section, self.ctx.contents)
+                    self.ctx.last_offset = self.ctx.line_no
+                    self.ctx.section  = self.section_descr
+                    self.ctx.contents = ""
+                    self.in_doc_sect  = True
+                    self.debug("FLAGs: in_doc_sect=%(s)s / in_purpose=%(p)s", s=self.in_doc_sect, p=self.in_purpose)
+
+                else:
+                    self.ctx.contents += "\n"
+
+            elif self.in_purpose:
+                # Continued declaration purpose, dismiss leading whitespace
+                if self.ctx.decl_purpose:
+                    self.ctx.decl_purpose += " " + cont_line.strip()
+                else:
+                    self.ctx.decl_purpose = cont_line.strip()
+            else:
+                if (self.options.markup == "reST"
+                    and self.ctx.section.startswith("@")):
+                    # FIXME: I doubt if it is a good idea to strip leading
+                    # whitespaces in parameter description, but *over all* we
+                    # get better reST output.
+                    cont_line = cont_line.strip()
+                    # Sub-sections in parameter descriptions are not provided,
+                    # but if this is a "lorem:\n" line create a new paragraph.
+                    if reST_sect.match(line) and not doc_sect_except.match(line):
+                        cont_line = "\n" + cont_line + "\n"
+
+                self.ctx.contents += cont_line + "\n"
+
+        else:
+            # i dont know - bad line?  ignore.
+            self.warn("bad line: '%(line)s'", line = line.strip())
+
+    def state_3(self, line):
+        u"""state: 3 - scanning prototype."""
+
+        if line.startswith('typedef'):
+            if not self.ctx.decl_type == 'typedef':
+                self.warn(
+                    "typedef of function pointer not marked"
+                    " as typdef, use: 'typedef %s' in the comment."
+                    % (self.ctx.last_identifier)
+                    , line_no = self.ctx.decl_offset)
+            self.ctx.decl_type = 'typedef'
+
+        if doc_state5_oneline.match(line):
+            sect = doc_state5_oneline[0].strip()
+            cont = doc_state5_oneline[1].strip()
+            if cont and sect:
+                self.ctx.section  = self.sect_title(sect)
+                self.ctx.contents = cont
+                self.dump_section(self.ctx.section, self.ctx.contents)
+                self.ctx.section  = self.section_default
+                self.ctx.contents = ""
+
+        elif doc_state5_start.match(line):
+            self.debug("FLAG: split_doc_state=1 / switch state 3 --> 5")
+            self.state = 5
+            self.split_doc_state = 1
+            if self.ctx.decl_type == 'function':
+                self.error("odd construct, gathering documentation of a function"
+                           " outside of the main block?!?")
+
+        elif (self.ctx.decl_type == 'function'):
+            self.process_state3_function(line)
+        else:
+            self.process_state3_type(line)
+
+    def state_4(self, line):
+        u"""state: 4 - documentation block"""
+
+        if doc_block.match(line):
+            # a new DOC block arrived, dump the last section and pass the new
+            # DOC block to state 1.
+            self.dump_DOC(self.ctx.section, self.ctx.contents)
+            self.ctx = self.ctx.new()
+            self.debug("END & START: DOC block / switch state 4 --> 1")
+            self.state = 1
+            self.state_1(line)
+
+        elif doc_end.match(line):
+            # the DOC block ends here, dump it and reset to state 0
+            self.debug("END: DOC block / dump doc section / switch state 4 --> 0")
+            self.dump_DOC(self.ctx.section, self.ctx.contents)
+            self.ctx = self.ctx.new()
+            self.state = 0
+
+        elif doc_content.match(line):
+            cont = doc_content[0]
+            if (not cont.strip() # dismiss leading newlines
+                and not self.ctx.contents):
+                pass
+            else:
+                self.ctx.contents += doc_content[0] + "\n"
+
+    def state_5(self, line):
+        u"""state: 5 - gathering documentation outside main block"""
+
+        if (self.split_doc_state == 1
+            and doc_state5_sect.match(line)):
+
+            # First line (split_doc_state 1) needs to be a @parameter
+            self.ctx.section  = self.sect_title(doc_state5_sect[0].strip())
+            self.ctx.contents = doc_state5_sect[1].strip()
+            self.split_doc_state = 2
+            self.debug("SPLIT-DOC-START: '%(param)s' / split-state 1 --> 2"
+                       , param = self.ctx.section)
+            self.ctx.last_offset = self.ctx.line_no
+            self.info("section: %(sec)s" , sec=self.ctx.section)
+
+        elif doc_state5_end.match(line):
+            # Documentation block end
+            self.debug("SPLIT-DOC-END: ...")
+
+            if not self.ctx.contents.strip():
+                self.debug("SPLIT-DOC-END: ... no description to dump")
+
+            else:
+                self.dump_section(self.ctx.section, self.ctx.contents)
+                self.ctx.section  = self.section_default
+                self.ctx.contents = ""
+
+            self.debug("SPLIT-DOC-END: ... split-state --> 0  / state = 3")
+            self.state = 3
+            self.split_doc_state = 0
+
+        elif doc_content.match(line):
+            # Regular text
+            if self.split_doc_state == 2:
+                self.ctx.contents += doc_content[0] + "\n"
+
+            elif self.split_doc_state == 1:
+                self.split_doc_state = 4
+                self.error("Comment without header was found split-state --> 4")
+                self.warn("Incorrect use of kernel-doc format: %(line)s"
+                          , line = line)
+
+    # ------------------------------------------------------------
+    # helper to parse special objects
+    # ------------------------------------------------------------
+
+    def process_state3_function(self, line):
+
+        self.debug("PROCESS-FUNCTION: %(line)s", line=line)
+        line = C99_comments.sub("", line) # strip C99-style comments to end of line
+        line = line.strip()
+
+        stripProto = RE(r"([^\{]*)")
+
+        # ?!?!? MACDOC does not exists (any more)?
+        # if ($x =~ m#\s*/\*\s+MACDOC\s*#io || ($x =~ /^#/ && $x !~ /^#\s*define/)) {
+        #   do nothing
+        # }
+
+        if line.startswith("#") and not MACRO_define.search(line):
+            # do nothing
+            pass
+        elif stripProto.match(line):
+            self.ctx.prototype += " " + stripProto[0]
+
+        if (MACRO_define.search(line)
+            or "{" in line
+            or ";" in line ):
+
+            # strip cr&nl, strip C89 comments, strip leading whitespaces
+            self.ctx.prototype = C89_comments.sub(
+                "", CR_NL.sub(" ", self.ctx.prototype)).lstrip()
+
+            if SYSCALL_DEFINE.search(self.ctx.prototype):
+                self.ctx.prototype = self.syscall_munge(self.ctx.prototype)
+
+            if (TRACE_EVENT.search(self.ctx.prototype)
+                or DEFINE_EVENT.search(self.ctx.prototype)
+                or DEFINE_SINGLE_EVENT.search(self.ctx.prototype) ):
+                self.ctx.prototype = self.tracepoint_munge(self.ctx.prototype)
+
+            self.ctx.prototype = self.ctx.prototype.strip()
+            self.info("prototype --> '%(proto)s'", proto=self.ctx.prototype)
+            self.dump_function(self.ctx.prototype)
+            self.reset_state()
+
+    def syscall_munge(self, prototype):
+        self.debug("syscall munge: '%(prototype)s'" , prototype=prototype)
+        void = False
+
+        # strip needles whitespaces
+        prototype = normalize_ws(prototype)
+
+        if SYSCALL_DEFINE0.search(prototype):
+            void = True
+        prototype = SYSCALL_DEFINE.sub("long sys_", prototype)
+        if not self.ctx.last_identifier.startswith("sys_"):
+            self.ctx.last_identifier = "sys_%s" % self.ctx.last_identifier
+
+        if re.search(r"long (sys_.*?),", prototype):
+            prototype = prototype.replace(",", "(", 1)
+        elif void:
+            prototype = prototype.replace(")","(void)",1)
+
+        # now delete all of the odd-number commas in $prototype
+        # so that arg types & arg names don't have a comma between them
+
+        retVal = prototype
+        if not void:
+            x = prototype.split(",")
+            y = []
+            while x:
+                y.append(x.pop(0) + x.pop(0))
+            retVal = ",".join(y)
+        self.debug("syscall munge: retVal '%(retVal)s'" , retVal=retVal)
+        return retVal
+
+    def tracepoint_munge(self, prototype):
+        self.debug("tracepoint munge: %(prototype)s" , prototype=prototype)
+
+        retVal  = prototype
+        tp_name = ""
+        tp_args = ""
+
+        if TRACE_EVENT_name.match(prototype):
+            tp_name = TRACE_EVENT_name[0]
+
+        elif DEFINE_SINGLE_EVENT_name.match(prototype):
+            tp_name = DEFINE_SINGLE_EVENT_name[0]
+
+        elif DEFINE_EVENT_name.match(prototype):
+            tp_name = DEFINE_EVENT_name[1]
+
+        tp_name = tp_name.lstrip()
+
+        if TP_PROTO.search(prototype):
+            tp_args = TP_PROTO[0]
+
+        if not tp_name.strip() or not tp_args.strip():
+            self.warn("Unrecognized tracepoint format: %(prototype)s"
+                      , prototype=prototype)
+        else:
+            if not self.ctx.last_identifier.startswith("trace_"):
+                self.ctx.last_identifier = "trace_%s" % self.ctx.last_identifier
+            retVal = ("static inline void trace_%s(%s)"
+                      % (tp_name, tp_args))
+        return retVal
+
+    def process_state3_type(self, line):
+        self.debug("PROCESS-TYPE: %(line)s", line=line)
+
+        # strip cr&nl, strip C99 comments, strip leading&trailing whitespaces
+        line = C99_comments.sub("", CR_NL.sub(" ", line)).strip()
+
+        if MACRO.match(line):
+            # To distinguish preprocessor directive from regular declaration
+            # later.
+            line += ";"
+
+        m = RE(r"([^{};]*)([{};])(.*)")
+
+        while True:
+            if m.search(line):
+                self.ctx.prototype += m[0] + m[1]
+                if m[1] == "{":
+                    self.brcount += 1
+                if m[1] == "}":
+                    self.brcount -= 1
+                if m[1] == ";" and self.brcount == 0:
+                    self.info("prototype --> '%(proto)s'", proto=self.ctx.prototype)
+                    self.debug("decl_type: %(decl_type)s", decl_type=self.ctx.decl_type)
+                    if self.ctx.decl_type == "union":
+                        self.dump_union(self.ctx.prototype)
+                    elif self.ctx.decl_type == "struct":
+                        self.dump_struct(self.ctx.prototype)
+                    elif self.ctx.decl_type == "enum":
+                        self.dump_enum(self.ctx.prototype)
+                    elif self.ctx.decl_type == "typedef":
+                        self.dump_typedef(self.ctx.prototype)
+                    else:
+                        raise ParserBuggy(
+                            self, "unknown decl_type: %s" % self.ctx.decl_type)
+
+                    self.reset_state()
+                    break
+                line = m[2]
+            else:
+                self.ctx.prototype += line
+                break
+
+    # ------------------------------------------------------------
+    # dump objects
+    # ------------------------------------------------------------
+
+    def dump_preamble(self):
+        if not self.options.skip_preamble:
+            self.translator.output_preamble()
+
+    def dump_epilog(self):
+        if not self.options.skip_epilog:
+            self.translator.output_epilog()
+
+    def dump_section(self, name, cont):
+        u"""Store section's *content* under it's name.
+
+        :param str name: name of the section
+        :param str cont: content of the section
+
+        Stores the *content* under section's *name* in one of the *container*. A
+        container is a hash object, the section name is the *key* and the
+        content is the *value*.
+
+        Container:
+
+        * self.ctx.constants:       holds constant's descriptions
+        * self.ctx.parameterdescs:  holds parameter's descriptions
+        * self.ctx.sections:        holds common sections like "Return:"
+
+        There are the following contai
+        """
+        self.debug("dump_section(): %(name)s", name = name)
+        name = name.strip()
+        cont = cont.rstrip() # dismiss trailing whitespace
+
+        # FIXME: sections with '%CONST' prefix no longer exists
+        # _type_constant     = RE(r"\%([-_\w]+)")
+        #if _type_constant.match(name):  # '%CONST' - name of a constant.
+        #    name = _type_constant[0]
+        #    self.debug("constant section '%(name)s'",  name = name)
+        #    if self.ctx.constants.get(name, None):
+        #        self.error("duplicate constant definition '%(name)s'"
+        #                   , name = name)
+        #    self.ctx.constants[name] = cont
+
+        _type_param  = RE(r"\@(\w[.\w]*)")  # match @foo and @foo.bar
+        if _type_param.match(name):   # '@parameter' - name of a parameter
+            name = _type_param[0]
+            self.debug("parameter definition '%(name)s'", name = name)
+            if self.ctx.parameterdescs.get(name, None):
+                self.error("duplicate parameter definition '%(name)s'"
+                           , name = name, line_no = self.ctx.last_offset )
+            self.ctx.parameterdescs[name] = cont
+            self.ctx.parameterdescs.offsets[name] = self.ctx.last_offset
+            self.ctx.sectcheck.append(name)
+
+        elif name == "@...":
+            self.debug("parameter definiton '...'")
+            name = "..."
+            if self.ctx.parameterdescs.get(name, None):
+                self.error("parameter definiton '...'"
+                           , line_no = self.ctx.last_offset )
+            self.ctx.parameterdescs[name] = cont
+            self.ctx.parameterdescs.offsets[name] = self.ctx.last_offset
+            self.ctx.sectcheck.append(name)
+        else:
+            self.debug("other section '%(name)s'", name = name)
+            if self.ctx.sections.get(name, None):
+                self.warn("duplicate section name '%(name)s'"
+                          , name = name, line_no = self.ctx.last_offset )
+                self.ctx.sections[name] += "\n\n" + cont
+            else:
+                self.ctx.sections[name] = cont
+            self.ctx.sections.offsets[name] = self.ctx.last_offset
+
+    def dump_function(self, proto):
+        self.debug("dump_function(): (1) '%(proto)s'", proto=proto)
+        hasRetVal = True
+        proto = re.sub( r"^static +"         , "", proto )
+        proto = re.sub( r"^extern +"         , "", proto )
+        proto = re.sub( r"^asmlinkage +"     , "", proto )
+        proto = re.sub( r"^inline +"         , "", proto )
+        proto = re.sub( r"^__inline__ +"     , "", proto )
+        proto = re.sub( r"^__inline +"       , "", proto )
+        proto = re.sub( r"^__always_inline +", "", proto )
+        proto = re.sub( r"^noinline +"       , "", proto )
+        proto = re.sub( r"__init +"          , "", proto )
+        proto = re.sub( r"__init_or_module +", "", proto )
+        proto = re.sub( r"__meminit +"       , "", proto )
+        proto = re.sub( r"__must_check +"    , "", proto )
+        proto = re.sub( r"__weak +"          , "", proto )
+
+        define = bool(MACRO_define.match(proto))
+        proto = MACRO_define.sub("", proto )
+
+        proto = re.sub( r"__attribute__\s*\(\("
+                        r"(?:"
+                        r"[\w\s]+"          # attribute name
+                        r"(?:\([^)]*\))?"   # attribute arguments
+                        r"\s*,?"            # optional comma at the end
+                        r")+"
+                        r"\)\)\s+"
+                        , ""
+                        , proto)
+
+        # Yes, this truly is vile.  We are looking for:
+        # 1. Return type (may be nothing if we're looking at a macro)
+        # 2. Function name
+        # 3. Function parameters.
+        #
+        # All the while we have to watch out for function pointer parameters
+        # (which IIRC is what the two sections are for), C types (these
+        # regexps don't even start to express all the possibilities), and
+        # so on.
+        #
+        # If you mess with these regexps, it's a good idea to check that
+        # the following functions' documentation still comes out right:
+        # - parport_register_device (function pointer parameters)
+        # - atomic_set (macro)
+        # - pci_match_device, __copy_to_user (long return type)
+
+        self.debug("dump_function(): (2) '%(proto)s'", proto=proto)
+
+        x = RE(r"^()([a-zA-Z0-9_~:]+)\s+")
+
+        if define and x.match(proto):
+            # This is an object-like macro, it has no return type and no
+            # parameter list.  Function-like macros are not allowed to have
+            # spaces between decl_name and opening parenthesis (notice
+            # the \s+).
+            self.ctx.return_type = x[0]
+            self.ctx.decl_name   = x[1]
+            hasRetVal = False
+            self.debug("dump_function(): (hasRetVal = False) '%(proto)s'"
+                       , proto=proto)
+        else:
+            matchExpr = None
+            for regexp in FUNC_PROTOTYPES:
+                if regexp.match(proto):
+                    matchExpr = regexp
+                    self.debug("dump_function(): matchExpr = '%(pattern)s' // '%(proto)s'"
+                               , pattern = matchExpr.pattern, proto=proto)
+                    break
+
+            if matchExpr is not None:
+                self.debug("dump_function(): return_type='%(x)s'", x=matchExpr[0])
+                self.ctx.return_type = matchExpr[0]
+                self.debug("dump_function(): decl_name='%(x)s'", x=matchExpr[1])
+                self.ctx.decl_name   = matchExpr[1]
+                self.create_parameterlist(matchExpr[2], ",")
+            else:
+                self.warn("can't understand function proto: '%(prototype)s'"
+                          , prototype = self.ctx.prototype
+                          , line_no = self.ctx.decl_offset)
+                return
+
+            if self.ctx.last_identifier != self.ctx.decl_name:
+                self.warn("function name from comment differs:  %s <--> %s"
+                          % (self.ctx.last_identifier, self.ctx.decl_name)
+                          , line_no = self.ctx.decl_offset)
+
+        self.check_sections(self.ctx.decl_name
+                            , self.ctx.decl_type
+                            , self.ctx.sectcheck
+                            , self.ctx.parameterlist
+                            , "")
+        if hasRetVal:
+            self.check_return_section(self.ctx.decl_name, self.ctx.return_type)
+
+        self.output_decl(
+            self.ctx.decl_name, "function_decl"
+            , function         = self.ctx.decl_name
+            , return_type      = self.ctx.return_type
+            , parameterlist    = self.ctx.parameterlist
+            , parameterdescs   = self.ctx.parameterdescs
+            , parametertypes   = self.ctx.parametertypes
+            , sections         = self.ctx.sections
+            , purpose          = self.ctx.decl_purpose )
+
+    def dump_DOC(self, name, cont):
+        self.dump_section(name, cont)
+        self.output_decl(name, "DOC"
+                         , sections = self.ctx.sections )
+
+    def dump_union(self, proto):
+
+        if not self.prepare_struct_union(proto):
+            self.error("can't parse union!")
+            return
+
+        if self.ctx.last_identifier != self.ctx.decl_name:
+            self.warn("struct name from comment differs:  %s <--> %s"
+                      % (self.ctx.last_identifier, self.ctx.decl_name)
+                      , line_no = self.ctx.decl_offset)
+
+        self.output_decl(
+            self.ctx.decl_name, "union_decl"
+            , decl_name        = self.ctx.decl_name
+            , decl_type        = self.ctx.decl_type
+            , parameterlist    = self.ctx.parameterlist
+            , parameterdescs   = self.ctx.parameterdescs
+            , parametertypes   = self.ctx.parametertypes
+            , sections         = self.ctx.sections
+            , purpose          = self.ctx.decl_purpose )
+
+    def dump_struct(self, proto):
+
+        if not self.prepare_struct_union(proto):
+            self.error("can't parse struct!")
+            return
+
+        if self.ctx.last_identifier != self.ctx.decl_name:
+            self.warn("struct name from comment differs:  %s <--> %s"
+                      % (self.ctx.last_identifier, self.ctx.decl_name)
+                      , line_no = self.ctx.decl_offset)
+
+        self.output_decl(
+            self.ctx.decl_name, "struct_decl"
+            , decl_name        = self.ctx.decl_name
+            , decl_type        = self.ctx.decl_type
+            , parameterlist    = self.ctx.parameterlist
+            , parameterdescs   = self.ctx.parameterdescs
+            , parametertypes   = self.ctx.parametertypes
+            , sections         = self.ctx.sections
+            , purpose          = self.ctx.decl_purpose )
+
+    def prepare_struct_union(self, proto):
+        self.debug("prepare_struct_union(): '%(proto)s'", proto=proto)
+
+        retVal  = False
+        members = ""
+        nested  = ""
+
+        if C_STRUCT_UNION.match(proto):
+
+            if C_STRUCT_UNION[0] != self.ctx.decl_type:
+                self.error("determine of decl_type is inconsistent: '%s' <--> '%s'"
+                           "\nprototype: %s"
+                           % (C_STRUCT_UNION[0], self.ctx.decl_type, proto))
+                return False
+
+            self.ctx.decl_name = C_STRUCT_UNION[1]
+            members = C_STRUCT_UNION[2]
+
+            # ignore embedded structs or unions
+            embeded_re = RE(r"({.*})")
+            if embeded_re.search(proto):
+                nested  = embeded_re[0]
+                members = embeded_re.sub("", members)
+
+            # ignore members marked private:
+            members = re.sub(r"/\*\s*private:.*?/\*\s*public:.*?\*/", "", members, flags=re.I)
+            members = re.sub(r"/\*\s*private:.*", "", members, flags=re.I)
+
+            # strip comments:
+            members = C89_comments.sub("", members)
+            nested  = C89_comments.sub("", nested)
+
+            # strip kmemcheck_bitfield_{begin,end}.*;
+            members =  re.sub(r"kmemcheck_bitfield_.*?;", "", members)
+
+            # strip attributes
+            members = re.sub(r"__attribute__\s*\(\([a-z,_\*\s\(\)]*\)\)", "", members, flags=re.I)
+            members = re.sub(r"__aligned\s*\([^;]*\)", "", members)
+            members = re.sub(r"\s*CRYPTO_MINALIGN_ATTR", "", members)
+
+            # replace DECLARE_BITMAP
+            members = re.sub(r"DECLARE_BITMAP\s*\(([^,)]+), ([^,)]+)\)"
+                             , r"unsigned long \1[BITS_TO_LONGS(\2)]"
+                             , members )
+
+            self.create_parameterlist(members, ';')
+            self.check_sections(self.ctx.decl_name
+                                , self.ctx.decl_type
+                                , self.ctx.sectcheck
+                                , self.ctx.parameterlist # self.ctx.struct_actual.split(" ")
+                                , nested)
+            retVal = True
+
+        else:
+            retVal = False
+
+        return retVal
+
+    def dump_enum(self, proto):
+        self.debug("dump_enum(): '%(proto)s'", proto=proto)
+
+        proto = C89_comments.sub("", proto)
+        # strip #define macros inside enums
+        proto = re.sub(r"#\s*((define|ifdef)\s+|endif)[^;]*;", "", proto)
+
+        splitchar = ","
+        RE_NAME = RE(r"^\s*(\w+).*")
+
+        if C_ENUM.search(proto):
+            self.ctx.decl_name = C_ENUM[0]
+            members = normalize_ws(C_ENUM[1])
+
+            # drop trailing splitchar, if extists
+            if members.endswith(splitchar):
+                members = members[:-1]
+
+            for member in members.split(splitchar):
+                name = RE_NAME.sub(r"\1", member)
+                self.ctx.parameterlist.append(name)
+                if not self.ctx.parameterdescs.get(name, None):
+                    self.warn(
+                        "Enum value '%(name)s' not described"
+                        " in enum '%(decl_name)s'"
+                        , name = name,  decl_name=self.ctx.decl_name )
+                    self.ctx.parameterdescs[name] = Parser.undescribed
+
+            if self.ctx.last_identifier != self.ctx.decl_name:
+                self.warn("enum name from comment differs:  %s <--> %s"
+                          % (self.ctx.last_identifier, self.ctx.decl_name)
+                          , line_no = self.ctx.decl_offset)
+
+            self.check_sections(self.ctx.decl_name
+                                , self.ctx.decl_type
+                                , self.ctx.sectcheck
+                                , self.ctx.parameterlist
+                                , "")
+
+            self.output_decl(
+                self.ctx.decl_name, "enum_decl"
+                , enum             = self.ctx.decl_name
+                , parameterlist    = self.ctx.parameterlist
+                , parameterdescs   = self.ctx.parameterdescs
+                , sections         = self.ctx.sections
+                , purpose          = self.ctx.decl_purpose )
+
+        else:
+            self.error("can't parse enum!")
+
+    def dump_typedef(self, proto):
+        self.debug("dump_typedef(): '%(proto)s'", proto=proto)
+
+        proto = C89_comments.sub("", proto)
+
+        matchExpr = None
+        if C_FUNC_TYPEDEF.search(proto):
+            matchExpr = C_FUNC_TYPEDEF
+        elif C_FUNC_TYPEDEF_2.search(proto):
+            self.warn("typedef of function pointer used uncommon code style: '%s'" % proto)
+            matchExpr = C_FUNC_TYPEDEF_2
+
+        if matchExpr:
+            # Parse function prototypes
+
+            self.ctx.return_type = matchExpr[0]
+            self.ctx.decl_name   = matchExpr[1]
+            self.check_return_section(self.ctx.decl_name, self.ctx.return_type)
+
+            f_args = matchExpr[2]
+            self.create_parameterlist(f_args, ',')
+
+            if self.ctx.last_identifier != self.ctx.decl_name:
+                self.warn("function name from comment differs:  %s <--> %s"
+                          % (self.ctx.last_identifier, self.ctx.decl_name)
+                          , line_no = self.ctx.decl_offset)
+
+            self.check_sections(self.ctx.decl_name
+                                , self.ctx.decl_type
+                                , self.ctx.sectcheck
+                                , self.ctx.parameterlist
+                                , "")
+            self.output_decl(
+                self.ctx.decl_name, "function_decl"
+                , function         = self.ctx.decl_name
+                , return_type      = self.ctx.return_type
+                , parameterlist    = self.ctx.parameterlist
+                , parameterdescs   = self.ctx.parameterdescs
+                , parametertypes   = self.ctx.parametertypes
+                , sections         = self.ctx.sections
+                , purpose          = self.ctx.decl_purpose )
+
+        else:
+            self.debug("dump_typedef(): '%(proto)s'", proto=proto)
+            x1 = RE(r"\(*.\)\s*;$")
+            x2 = RE(r"\[*.\]\s*;$")
+
+            while x1.search(proto) or x2.search(proto):
+                proto = x1.sub(";", proto)
+                proto = x2.sub(";", proto)
+
+            self.debug("dump_typedef(): '%(proto)s'", proto=proto)
+
+            if C_TYPEDEF.match(proto):
+                self.ctx.decl_name = C_TYPEDEF[0]
+                if self.ctx.last_identifier != self.ctx.decl_name:
+                    self.warn("typedef name from comment differs:  %s <--> %s"
+                              % (self.ctx.last_identifier, self.ctx.decl_name)
+                              , line_no = self.ctx.decl_offset)
+
+                self.check_sections(self.ctx.decl_name
+                                    , self.ctx.decl_type
+                                    , self.ctx.sectcheck
+                                    , self.ctx.parameterlist
+                                    , "")
+                self.output_decl(
+                    self.ctx.decl_name, "typedef_decl"
+                        , typedef   = self.ctx.decl_name
+                        , sections  = self.ctx.sections
+                        , purpose   = self.ctx.decl_purpose )
+            else:
+                self.error("can't parse typedef!")
+
+    def create_parameterlist(self, parameter, splitchar):
+        self.debug("create_parameterlist(): splitchar='%(x)s' params='%(y)s'"
+                   , x=splitchar, y=parameter)
+        parameter = normalize_ws(parameter)
+        pointer_to_func = RE(r"\(.+\)\s*\(")
+
+        # temporarily replace commas inside function pointer definition
+        m = RE(r"(\([^\),]+),")
+
+        while m.search(parameter):
+            parameter = m.sub(r"\1#", parameter)
+        # drop trailing splitchar, if extists
+        if parameter.endswith(splitchar):
+            parameter = parameter[:-1]
+
+        self.debug("create_parameterlist(): params='%(y)s'", y=parameter)
+        for c, p in enumerate(parameter.split(splitchar)):
+
+            p = C99_comments.sub("", p)
+            p = p.strip()
+
+            self.debug("  parameter#%(c)s: %(p)s", c=c, p=p)
+            p_type = None
+            p_name = None
+
+            if MACRO.match(p):
+
+                # Treat preprocessor directive as a typeless variable just to
+                # fill corresponding data structures "correctly". Catch it later
+                # in output_* subs.
+                self.debug("  parameter#%(c)s: (MACRO) %(p)s=''" , c=c, p=p)
+                self.push_parameter(p, "")
+
+            elif pointer_to_func.search(p):
+
+                # pointer-to-function
+                p = p.replace("#", ",") # reinsert temporarily removed commas
+                self.debug("  parameter#%(c)s: (pointer to function) %(p)s", c=c, p=p)
+                m = RE(r"[^\(]+\(\*?\s*(\w*)\s*\)")
+                m.match(p)
+                p_name = m[0]
+                p_type  = p
+                p_type = re.sub(r"([^\(]+\(\*?)\s*"+p_name, r"\1", p_type)
+                #self.save_struct_actual(p_name)
+                self.push_parameter(p_name, p_type)
+
+            else:
+                p = re.sub(r"\s*:\s*", ":", p)
+                p = re.sub(r"\s*\["  , "[", p)
+                self.debug("  parameter#%(c)s: (common) %(p)s", c=c, p=p)
+
+                p_args = re.split(r"\s*,\s*", p)
+                if re.match(r"\s*,\s*", p_args[0]):
+                    p_args[0] = re.sub(r"(\*+)\s*", r" \1", p_args[0])
+
+                self.debug("  parameter#%(c)s : (1) p_args = %(p_args)s"
+                           , c=c, p_args=repr(p_args))
+
+                first_arg = []
+                m = RE(r"^(.*\s+)(.*?\[.*\].*)$")
+                if m.match(p_args[0]):
+                    p_args.pop(0)
+                    first_arg.extend(re.split(r"\s+", m[0]))
+                    first_arg.append(m[1])
+                else:
+                    first_arg.extend(re.split(r"\s+", p_args.pop(0)))
+
+                p_args = [first_arg.pop() ] + p_args
+                self.debug("  parameter#%(c)s : (2) p_args=%(p_args)s"
+                           , c=c, p_args=repr(p_args))
+                p_type = " ".join(first_arg)
+
+                ma = RE(r"^(\*+)\s*(.*)")
+                mb = RE(r"(.*?):(\d+)")
+
+                for p_name in p_args:
+                    self.debug("  parameter#%(c)s : (3) p_name='%(p_name)s'"
+                               , c=c, p_name=p_name)
+
+                    if ma.match(p_name):
+                        p_type = "%s %s" % (p_type, ma[0])
+                        p_name = ma[1]
+
+                    elif mb.match(p_name):
+                        if p_type:
+                            p_name = mb[0]
+                            p_type = "%s:%s" % (p_type, mb[1])
+                        else:
+                            # skip unnamed bit-fields
+                            continue
+
+                    self.debug("  parameter#%(c)s : (4) p_name='%(p_name)s' / p_type='%(p_type)s'"
+                               , c=c, p_name=p_name, p_type=p_type)
+                    #self.save_struct_actual(p_name)
+                    self.push_parameter(p_name, p_type)
+
+    def push_parameter(self, p_name, p_type):
+        self.debug(
+            "push_parameter(): p_name='%(p_name)s' / p_type='%(p_type)s'"
+            , p_name=p_name, p_type=p_type)
+
+        p_name  = p_name.strip()
+        p_type  = p_type.strip()
+
+        if (self.anon_struct_union
+            and not p_type
+            and p_name == "}"):
+            # ignore the ending }; from anon. struct/union
+            return
+
+        self.anon_struct_union = False
+
+        self.debug(
+            "push_parameter(): (1) p_name='%(p_name)s' / p_type='%(p_type)s'"
+            , p_name=p_name, p_type=p_type)
+
+        if not p_type and re.search(r"\.\.\.$", p_name):
+            if not self.ctx.parameterdescs.get(p_name, None):
+                self.ctx.parameterdescs[p_name] = "variable arguments"
+
+        elif not p_type and (not p_name or p_name == "void"):
+            p_name = "void"
+            self.ctx.parameterdescs[p_name] = "no arguments"
+
+        elif not p_type and (p_name == "struct" or p_name == "union"):
+            # handle unnamed (anonymous) union or struct:
+            p_type  = p_name
+            p_name = "{unnamed_" + p_name + "}"
+            self.ctx.parameterdescs[p_name] = "anonymous\n"
+            self.anon_struct_union = True
+
+        self.debug(
+            "push_parameter(): (2) p_name='%(p_name)s' / p_type='%(p_type)s'"
+            , p_name=p_name, p_type=p_type)
+
+        # strip array from paramater name / e.g. p_name is "modes[]" from a
+        # parmeter defined by: "const char * const modes[]"
+
+        p_name_doc = re.sub(r"\[.*", "", p_name)
+
+        # warn if parameter has no description (but ignore ones starting with
+        # '#' as these are not parameters but inline preprocessor statements);
+        # also ignore unnamed structs/unions;
+
+        if not self.anon_struct_union:
+
+            if (not self.ctx.parameterdescs.get(p_name_doc, None)
+                and not p_name.startswith("#")):
+
+                if p_type == "function" or p_type == "enum":
+                    self.warn("Function parameter or member '%(p_name)s' not "
+                              "described in '%(decl_name)s'."
+                              , p_name = p_name
+                              , decl_name = self.ctx.decl_name
+                              , line_no = self.ctx.last_offset)
+                else:
+                    self.warn("no description found for parameter '%(p_name)s'"
+                              , p_name = p_name, line_no = self.ctx.decl_offset)
+                self.ctx.parameterdescs[p_name] = Parser.undescribed
+
+        self.debug(
+            "push_parameter(): (3) p_name='%(p_name)s' / p_type='%(p_type)s'"
+            , p_name=p_name, p_type=p_type)
+
+        self.ctx.parameterlist.append(p_name)
+        self.ctx.parametertypes[p_name] = p_type.strip()
+
+    # def save_struct_actual(self, actual):
+    #     # strip all spaces from the actual param so that it looks like one
+    #     # string item
+    #     self.debug("save_struct_actual(): actual='%(a)s'", a=actual)
+    #     actual = WHITESPACE.sub("", actual)
+    #     self.ctx.struct_actual += actual + " "
+    #     self.debug("save_struct_actual: '%(a)s'", a=self.ctx.struct_actual)
+
+
+    def check_sections(self, decl_name, decl_type
+                       , sectcheck, parameterlist, nested):
+        self.debug("check_sections(): decl_name='%(n)s' / decl_type='%(t)s' /"
+                   " sectcheck=%(sc)s / parameterlist=%(pl)s / nested='%(nested)s'"
+                   , n=decl_name, t=decl_type, sc=sectcheck, pl=parameterlist, nested=nested)
+
+        for sect in sectcheck:
+            sub_sect = re.sub(r"\..*", "", sect) # take @foo.bar sections as "foo" sub-section
+            err = True
+            for para in parameterlist:
+                para = re.sub(r"\[.*\]", "", para)
+                #para = re.sub(r"/__attribute__\s*\(\([A-Za-z,_\*\s\(\)]*\)\)/", "", para)
+                if para == sub_sect or para == sect:
+                    err = False
+                    break
+            if err:
+                if decl_type == "function":
+                    self.warn(
+                        "excess function parameter '%(sect)s' description in '%(decl_name)s'"
+                        , sect = sect, decl_name = decl_name
+                        , line_no = self.ctx.decl_offset )
+                elif not re.search(r"\b(" + sect + ")[^a-zA-Z0-9]", nested):
+                    self.warn(
+                        "excess %(decl_type)s member '%(sect)s' description in '%(decl_name)s'"
+                        , decl_type = decl_type, decl_name = decl_name, sect = sect
+                        , line_no = self.ctx.decl_offset )
+            else:
+                self.debug("check_sections(): parameter '%(sect)s': description exists / OK"
+                           , sect=sect)
+
+    def check_return_section(self, decl_name, return_type):
+        self.debug("check_return_section(): decl_name='%(n)s', return_type='%(t)s"
+                   , n=decl_name, t=return_type)
+        # Ignore an empty return type (It's a macro) and ignore functions with a
+        # "void" return type. (But don't ignore "void *")
+
+        if (not return_type
+            or re.match(r"void\s*\w*\s*$", return_type)):
+            self.debug("check_return_section(): ignore void")
+            return
+
+        if self.options.verbose_warn and not self.ctx.sections.get(self.section_return, None):
+            self.warn("no description found for return-value of function '%(func)s()'"
+                      , func = decl_name, line_no = self.ctx.decl_offset)
+        else:
+            self.debug("check_return_section(): return-value of %(func)s() OK"
+                      , func = decl_name)
+
+# ==============================================================================
+# 2cent debugging & introspection
+# ==============================================================================
+
+def CONSOLE(arround=5, frame=None):
+    # pylint: disable=C0321,C0410
+    import inspect, code, linecache
+    sys.stderr.flush()
+    sys.stdout.flush()
+
+    frame  = frame or inspect.currentframe().f_back
+    fName  = frame.f_code.co_filename
+    lineNo = frame.f_lineno
+
+    ns = dict(**frame.f_globals)
+    ns.update(**frame.f_locals)
+
+    histfile = os.path.join(os.path.expanduser("~"), ".kernel-doc-history")
+    try:
+        import readline, rlcompleter  # pylint: disable=W0612
+        readline.set_completer(rlcompleter.Completer(namespace=ns).complete)
+        readline.parse_and_bind("tab: complete")
+        readline.set_history_length(1000)
+        if os.path.exists(histfile):
+            readline.read_history_file(histfile)
+    except ImportError:
+        readline = None
+    lines  = []
+    for c in range(lineNo - arround, lineNo + arround):
+        if c > 0:
+            prefix = "%-04s|" % c
+            if c == lineNo:   prefix = "---->"
+            line = linecache.getline(fName, c, frame.f_globals)
+            if line != '':    lines.append(prefix + line)
+            else:
+                if lines: lines[-1] = lines[-1] + "<EOF>\n"
+                break
+    banner =  "".join(lines) + "file: %s:%s\n" % (fName, lineNo)
+    try:
+        code.interact(banner=banner, local=ns)
+    finally:
+        if readline is not None:
+            readline.write_history_file(histfile)
+
+# ==============================================================================
+# run ...
+# ==============================================================================
+
+if __name__ == "__main__":
+    sys.exit(main())
+else:
+    # FIXME: just for testing
+    __builtins__["CONSOLE"] = CONSOLE
diff --git a/Documentation/sphinx/kerneldoc.py b/Documentation/sphinx/kerneldoc.py
deleted file mode 100644
index d15e07f..0000000
--- a/Documentation/sphinx/kerneldoc.py
+++ /dev/null
@@ -1,149 +0,0 @@
-# coding=utf-8
-#
-# Copyright © 2016 Intel Corporation
-#
-# Permission is hereby granted, free of charge, to any person obtaining a
-# copy of this software and associated documentation files (the "Software"),
-# to deal in the Software without restriction, including without limitation
-# the rights to use, copy, modify, merge, publish, distribute, sublicense,
-# and/or sell copies of the Software, and to permit persons to whom the
-# Software is furnished to do so, subject to the following conditions:
-#
-# The above copyright notice and this permission notice (including the next
-# paragraph) shall be included in all copies or substantial portions of the
-# Software.
-#
-# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
-# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
-# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
-# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
-# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
-# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
-# IN THE SOFTWARE.
-#
-# Authors:
-#    Jani Nikula <jani.nikula@intel.com>
-#
-# Please make sure this works on both python2 and python3.
-#
-
-import os
-import subprocess
-import sys
-import re
-import glob
-
-from docutils import nodes, statemachine
-from docutils.statemachine import ViewList
-from docutils.parsers.rst import directives
-from sphinx.util.compat import Directive
-from sphinx.ext.autodoc import AutodocReporter
-
-__version__  = '1.0'
-
-class KernelDocDirective(Directive):
-    """Extract kernel-doc comments from the specified file"""
-    required_argument = 1
-    optional_arguments = 4
-    option_spec = {
-        'doc': directives.unchanged_required,
-        'functions': directives.unchanged_required,
-        'export': directives.unchanged,
-        'internal': directives.unchanged,
-    }
-    has_content = False
-
-    def run(self):
-        env = self.state.document.settings.env
-        cmd = [env.config.kerneldoc_bin, '-rst', '-enable-lineno']
-
-        filename = env.config.kerneldoc_srctree + '/' + self.arguments[0]
-        export_file_patterns = []
-
-        # Tell sphinx of the dependency
-        env.note_dependency(os.path.abspath(filename))
-
-        tab_width = self.options.get('tab-width', self.state.document.settings.tab_width)
-
-        # FIXME: make this nicer and more robust against errors
-        if 'export' in self.options:
-            cmd += ['-export']
-            export_file_patterns = str(self.options.get('export')).split()
-        elif 'internal' in self.options:
-            cmd += ['-internal']
-            export_file_patterns = str(self.options.get('internal')).split()
-        elif 'doc' in self.options:
-            cmd += ['-function', str(self.options.get('doc'))]
-        elif 'functions' in self.options:
-            for f in str(self.options.get('functions')).split():
-                cmd += ['-function', f]
-
-        for pattern in export_file_patterns:
-            for f in glob.glob(env.config.kerneldoc_srctree + '/' + pattern):
-                env.note_dependency(os.path.abspath(f))
-                cmd += ['-export-file', f]
-
-        cmd += [filename]
-
-        try:
-            env.app.verbose('calling kernel-doc \'%s\'' % (" ".join(cmd)))
-
-            p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, universal_newlines=True)
-            out, err = p.communicate()
-
-            # python2 needs conversion to unicode.
-            # python3 with universal_newlines=True returns strings.
-            if sys.version_info.major < 3:
-                out, err = unicode(out, 'utf-8'), unicode(err, 'utf-8')
-
-            if p.returncode != 0:
-                sys.stderr.write(err)
-
-                env.app.warn('kernel-doc \'%s\' failed with return code %d' % (" ".join(cmd), p.returncode))
-                return [nodes.error(None, nodes.paragraph(text = "kernel-doc missing"))]
-            elif env.config.kerneldoc_verbosity > 0:
-                sys.stderr.write(err)
-
-            lines = statemachine.string2lines(out, tab_width, convert_whitespace=True)
-            result = ViewList()
-
-            lineoffset = 0;
-            line_regex = re.compile("^#define LINENO ([0-9]+)$")
-            for line in lines:
-                match = line_regex.search(line)
-                if match:
-                    # sphinx counts lines from 0
-                    lineoffset = int(match.group(1)) - 1
-                    # we must eat our comments since the upset the markup
-                else:
-                    result.append(line, filename, lineoffset)
-                    lineoffset += 1
-
-            node = nodes.section()
-            buf = self.state.memo.title_styles, self.state.memo.section_level, self.state.memo.reporter
-            self.state.memo.reporter = AutodocReporter(result, self.state.memo.reporter)
-            self.state.memo.title_styles, self.state.memo.section_level = [], 0
-            try:
-                self.state.nested_parse(result, 0, node, match_titles=1)
-            finally:
-                self.state.memo.title_styles, self.state.memo.section_level, self.state.memo.reporter = buf
-
-            return node.children
-
-        except Exception as e:  # pylint: disable=W0703
-            env.app.warn('kernel-doc \'%s\' processing failed with: %s' %
-                         (" ".join(cmd), str(e)))
-            return [nodes.error(None, nodes.paragraph(text = "kernel-doc missing"))]
-
-def setup(app):
-    app.add_config_value('kerneldoc_bin', None, 'env')
-    app.add_config_value('kerneldoc_srctree', None, 'env')
-    app.add_config_value('kerneldoc_verbosity', 1, 'env')
-
-    app.add_directive('kernel-doc', KernelDocDirective)
-
-    return dict(
-        version = __version__,
-        parallel_read_safe = True,
-        parallel_write_safe = True
-    )
diff --git a/Documentation/sphinx/rstKernelDoc.py b/Documentation/sphinx/rstKernelDoc.py
new file mode 100755
index 0000000..c41d692
--- /dev/null
+++ b/Documentation/sphinx/rstKernelDoc.py
@@ -0,0 +1,560 @@
+# -*- coding: utf-8; mode: python -*-
+# pylint: disable=R0912,R0914,R0915
+
+u"""
+    rstKernelDoc
+    ~~~~~~~~~~~~
+
+    Implementation of the ``kernel-doc`` reST-directive.
+
+    :copyright:  Copyright (C) 2016  Markus Heiser
+    :license:    GPL Version 2, June 1991 see Linux/COPYING for details.
+
+    The ``kernel-doc`` (:py:class:`KernelDoc`) directive includes contend from
+    linux kernel source code comments.
+
+    Here is a short overview of the options:
+
+    .. code-block:: rst
+
+        .. kernel-doc:: <src-filename>
+            :doc: <section title>
+            :no-header:
+            :export:
+            :internal:
+            :functions: <function [, functions [, ...]]>
+            :module:    <prefix-id>
+            :man-sect:  <man sect-no>
+            :snippets:  <snippet [, snippets [, ...]]>
+            :language:  <snippet-lang>
+            :linenos:
+            :debug:
+
+    The argument ``<src-filename>`` is required, it points to a source file in the
+    kernel source tree. The pathname is relative to kernel's root folder.  The
+    options have the following meaning, but be aware that not all combinations of
+    these options make sense:
+
+    ``doc <section title>``
+        Include content of the ``DOC:`` section titled ``<section title>``.  Spaces
+        are allowed in ``<section title>``; do not quote the ``<section title>``.
+
+        The next option make only sense in conjunction with option ``doc``:
+
+        ``no-header``
+            Do not output DOC: section's title. Useful, if the surrounding context
+            already has a heading, and the DOC: section title is only used as an
+            identifier. Take in mind, that this option will not suppress any native
+            reST heading markup in the comment (:ref:`reST-section-structure`).
+
+    ``export [<src-fname-pattern> [, ...]]``
+        Include documentation for all function, struct or whatever definition in
+        ``<src-filename>``, exported using EXPORT_SYMBOL macro (``EXPORT_SYMBOL``,
+        ``EXPORT_SYMBOL_GPL`` & ``EXPORT_SYMBOL_GPL_FUTURE``) either in
+        ``<src-filename>`` or in any of the files specified by
+        ``<src-fname-pattern>``.
+
+        The ``<src-fname-pattern>`` (glob) is useful when the kernel-doc comments
+        have been placed in header files, while EXPORT_SYMBOL are next to the
+        function definitions.
+
+    ``internal [<src-fname-pattern> [, ...]]``
+        Include documentation for all documented definitions, **not** exported using
+        EXPORT_SYMBOL macro either in ``<src-filename>`` or in any of the files
+        specified by ``<src-fname-pattern>``.
+
+    ``functions <name [, names [, ...]]>``
+        Include documentation for each named definition.
+
+    ``module <prefix-id>``
+        The option ``:module: <id-prefix>`` sets a module-name. The module-name is
+        used as a prefix for automatic generated IDs (reference anchors).
+
+    ``man-sect <sect-no>``
+
+      Section number of the manual pages (see man man-pages). The man-pages are build
+      by the ``kernel-doc-man`` builder.
+
+    ``snippets <name [, names [, ...]]>``
+        Inserts the source-code passage(s) marked with the snippet ``name``. The
+        snippet is inserted with a `code-block:: <http://www.sphinx-doc.org/en/stable/markup/code.html>`_
+        directive.
+
+        The next options make only sense in conjunction with option ``snippets``:
+
+        ``language <highlighter>``
+            Set highlighting language of the snippet code-block.
+
+        ``linenos``
+            Set line numbers in the snippet code-block.
+
+    ``debug``
+        Inserts a code-block with the generated reST source. This might sometimes
+        helpful to see how the kernel-doc parser transforms the kernel-doc markup to
+        reST markup.
+
+    The following example shows how to insert documentation from the source file
+    ``/drivers/gpu/drm/drm_drv.c``. In this example the documentation from the
+    ``DOC:`` section with the title "driver instance overview" and the
+    documentation of all exported symbols (EXPORT_SYMBOL) is included in the
+    reST tree.
+
+    .. code-block:: rst
+
+        .. kernel-doc::  drivers/gpu/drm/drm_drv.c
+            :export:
+            :doc:        driver instance overview
+
+    An other example is to use only one function description.
+
+    .. code-block:: rst
+
+        .. kernel-doc::  include/media/i2c/tvp7002.h
+            :functions:  tvp7002_config
+            :module:     tvp7002
+
+    This will produce the following reST markup to include:
+
+    .. code-block:: rst
+
+        .. _`tvp514x.tvp514x_platform_data`:
+
+        struct tvp514x_platform_data
+        ============================
+
+        .. c:type:: tvp514x_platform_data
+
+
+        .. _`tvp514x.tvp514x_platform_data.definition`:
+
+        Definition
+        ----------
+
+        .. code-block:: c
+
+            struct tvp514x_platform_data {
+                bool clk_polarity;
+                bool hs_polarity;
+                bool vs_polarity;
+            }
+
+        .. _`tvp514x.tvp514x_platform_data.members`:
+
+        Members
+        -------
+
+        clk_polarity
+            Clock polarity of the current interface.
+
+        hs_polarity
+            HSYNC Polarity configuration for current interface.
+
+        vs_polarity
+            VSYNC Polarity configuration for current interface.
+
+    The last example illustrates, that the option ``:module: tvp514x`` is used
+    as a prefix for anchors. E.g. ```ref:`tvp514x.tvp514x_platform_data.members¸```
+    refers to the to the member description of ``struct tvp514x_platform_data``.
+"""
+
+# ==============================================================================
+# imports
+# ==============================================================================
+
+import glob
+from os import path
+from io import StringIO
+
+import six
+
+from docutils import nodes
+from docutils.parsers.rst import Directive, directives
+from docutils.utils import SystemMessage
+from docutils.statemachine import ViewList
+
+from sphinx.ext.autodoc import AutodocReporter
+
+import kernel_doc as kerneldoc
+
+# ==============================================================================
+# common globals
+# ==============================================================================
+
+# The version numbering follows numbering of the specification
+# (Documentation/books/kernel-doc-HOWTO).
+__version__  = '1.0'
+
+PARSER_CACHE = dict()
+
+# ==============================================================================
+def setup(app):
+# ==============================================================================
+
+    app.add_config_value('kernel_doc_raise_error', False, 'env')
+    app.add_config_value('kernel_doc_verbose_warn', True, 'env')
+    app.add_config_value('kernel_doc_mode', "reST", 'env')
+    app.add_config_value('kernel_doc_mansect', None, 'env')
+    app.add_directive("kernel-doc", KernelDoc)
+
+    return dict(
+        version = __version__
+        , parallel_read_safe = True
+        , parallel_write_safe = True
+    )
+
+# ==============================================================================
+class KernelDocParser(kerneldoc.Parser):
+# ==============================================================================
+
+    def __init__(self, app, *args, **kwargs):
+        super(KernelDocParser, self).__init__(*args, **kwargs)
+        self.app = app
+
+    # -------------------------------------------------
+    # bind the parser logging to the sphinx application
+    # -------------------------------------------------
+
+    def error(self, message, **replace):
+        replace["fname"]   = self.options.fname
+        replace["line_no"] = replace.get("line_no", self.ctx.line_no)
+        self.errors += 1
+        message = ("%(fname)s:%(line_no)s: [kernel-doc ERROR] : " + message) % replace
+        self.app.warn(message, prefix="")
+
+    def warn(self, message, **replace):
+        replace["fname"]   = self.options.fname
+        replace["line_no"] = replace.get("line_no", self.ctx.line_no)
+        self.warnings += 1
+        message = ("%(fname)s:%(line_no)s: [kernel-doc WARN] : " + message) % replace
+        self.app.warn(message, prefix="")
+
+    def info(self, message, **replace):
+        replace["fname"]   = self.options.fname
+        replace["line_no"] = replace.get("line_no", self.ctx.line_no)
+        message = ("%(fname)s:%(line_no)s: [kernel-doc INFO] : " + message) % replace
+        self.app.verbose(message, prefix="")
+
+    def debug(self, message, **replace):
+        if self.app.verbosity < 2:
+            return
+        replace["fname"]   = self.options.fname
+        replace["line_no"] = replace.get("line_no", self.ctx.line_no)
+        message = ("%(fname)s:%(line_no)s: [kernel-doc DEBUG] : " + message) % replace
+        self.app.debug(message, prefix="")
+
+
+# ==============================================================================
+class FaultyOption(Exception):
+    pass
+
+class KernelDoc(Directive):
+# ==============================================================================
+
+    u"""KernelDoc (``kernel-doc``) directive"""
+
+    required_arguments = 1
+    optional_arguments = 0
+    final_argument_whitespace = True
+
+    option_spec = {
+        "doc"          : directives.unchanged_required # aka lines containing !P
+        , "no-header"  : directives.flag
+
+        , "export"     : directives.unchanged          # aka lines containing !E
+        , "internal"   : directives.unchanged          # aka lines containing !I
+        , "functions"  : directives.unchanged_required # aka lines containing !F
+
+        , "debug"      : directives.flag               # insert generated reST as code-block
+
+        , "snippets"   : directives.unchanged_required
+        , "language"   : directives.unchanged_required
+        , "linenos"    : directives.flag
+
+        # module name / used as id-prefix
+        , "module"     : directives.unchanged_required
+        , "man-sect"   : directives.nonnegative_int
+
+        # The encoding of the source file with the kernel-doc comments. The
+        # default is the config.source_encoding from sphinx configuration and
+        # this default is utf-8-sig
+        , "encoding"   : directives.encoding
+
+    }
+
+    def getParserOptions(self):
+
+        fname     = self.arguments[0]
+        src_tree  = kerneldoc.SRCTREE
+        exp_files = []  # file pattern to search for EXPORT_SYMBOL
+
+        if self.arguments[0].startswith("./"):
+            # the prefix "./" indicates a relative pathname
+            fname = self.arguments[0][2:]
+            src_tree = path.dirname(path.normpath(self.doc.current_source))
+
+        if "internal" in self.options and "export" in self.options:
+            raise FaultyOption(
+                "Options 'export' and 'internal' are orthogonal,"
+                " can't use them togehter")
+
+        if "snippets" in self.options:
+            rest = set(self.options.keys()) - set(["snippets", "linenos", "language", "debug"])
+            if rest:
+                raise FaultyOption(
+                    "kernel-doc 'snippets' has non of these options: %s"
+                    % ",".join(rest))
+
+        if self.env.config.kernel_doc_mode not in ["reST", "kernel-doc"]:
+            raise FaultyOption(
+                "unknow kernel-doc mode: %s" % self.env.config.kernel_doc_mode)
+
+        # set parse adjustments
+
+        ctx  = kerneldoc.ParserContext()
+        opts = kerneldoc.ParseOptions(
+            rel_fname       = fname
+            , src_tree      = src_tree
+            , id_prefix     = self.options.get("module", "").strip()
+            , encoding      = self.options.get("encoding", self.env.config.source_encoding)
+            , verbose_warn  = self.env.config.kernel_doc_verbose_warn
+            , markup        = self.env.config.kernel_doc_mode
+            , man_sect      = self.options.get("man-sect", None)
+            ,)
+
+        if ("doc" not in self.options
+            and opts.man_sect is None):
+            opts.man_sect = self.env.config.kernel_doc_mansect
+
+        opts.set_defaults()
+
+        if not path.exists(opts.fname):
+            raise FaultyOption(
+                "kernel-doc refers to nonexisting document %s" % opts.fname)
+
+        # always skip preamble and epilog in kernel-doc directives
+        opts.skip_preamble = True
+        opts.skip_epilog   = True
+
+        if ("doc" not in self.options
+            and "export" not in self.options
+            and "internal" not in self.options
+            and "functions" not in self.options
+            and "snippets" not in self.options):
+            # if no explicit content is selected, then print all, including all
+            # DOC sections
+            opts.use_all_docs = True
+
+        if "doc" in self.options:
+            opts.no_header = bool("no-header" in self.options)
+            opts.use_names.append(self.options.get("doc"))
+
+        if "export" in self.options:
+            # gather exported symbols and add them to the list of names
+            kerneldoc.Parser.gather_context(kerneldoc.readFile(opts.fname), ctx)
+            exp_files.extend((self.options.get('export') or "").replace(","," ").split())
+            opts.error_missing = True
+
+        elif "internal" in self.options:
+            # gather exported symbols and add them to the ignore-list of names
+            kerneldoc.Parser.gather_context(kerneldoc.readFile(opts.fname), ctx)
+            exp_files.extend((self.options.get('internal') or "").replace(","," ").split())
+
+        if "functions" in self.options:
+            opts.error_missing = True
+            opts.use_names.extend(
+                self.options["functions"].replace(","," ").split())
+
+        for pattern in exp_files:
+            if pattern.startswith("./"): # "./" indicates a relative pathname
+                pattern = path.join(
+                    path.dirname(path.normpath(self.doc.current_source))
+                    , pattern[2:])
+            else:
+                pattern = path.join(kerneldoc.SRCTREE, pattern)
+
+            if (not glob.has_magic(pattern)
+                and not path.lexists(pattern)):
+                # if pattern is a filename (is not a glob pattern) and this file
+                # does not exists, an error is raised.
+                raise FaultyOption("file not found: %s" % pattern)
+
+            for fname in glob.glob(pattern):
+                self.env.note_dependency(path.abspath(fname))
+                kerneldoc.Parser.gather_context(kerneldoc.readFile(fname), ctx)
+
+        if "export" in self.options:
+            if not ctx.exported_symbols:
+                raise FaultyOption("using option :export: but there are no exported symbols")
+            opts.use_names.extend(ctx.exported_symbols)
+
+        if "internal" in self.options:
+            opts.skip_names.extend(ctx.exported_symbols)
+
+        return opts
+
+    def errMsg(self, msg):
+        msg = six.text_type(msg)
+        error = self.state_machine.reporter.error(
+            msg
+            , nodes.literal_block(self.block_text, self.block_text)
+            , line = self.lineno )
+
+        # raise exception on error?
+        if self.env.config.kernel_doc_raise_error:
+            raise SystemMessage(error, 4)
+
+        # insert oops/todo admonition, this is the replacement of the escape
+        # sequences "!C<filename> " formerly used in the DocBook-SGML template
+        # files.
+        todo = ("\n\n.. todo::"
+                "\n\n    Oops: Document generation inconsistency."
+                "\n\n    The template for this document tried to insert"
+                " structured comment at this point, but an error occoured."
+                " This dummy section is inserted to allow generation to continue.::"
+                "\n\n")
+
+        for l in error.astext().split("\n"):
+            todo +=  "        " + l + "\n"
+        todo += "\n\n"
+        self.state_machine.insert_input(todo.split("\n"), self.arguments[0] )
+
+    def parseSource(self, opts):
+        parser = PARSER_CACHE.get(opts.fname, None)
+
+        if parser is None:
+            self.env.note_dependency(opts.fname)
+            #self.env.app.info("parse kernel-doc comments from: %s" % opts.fname)
+            parser = KernelDocParser(self.env.app, opts, kerneldoc.NullTranslator())
+            parser.parse()
+            PARSER_CACHE[opts.fname] = parser
+        else:
+            parser.setOptions(opts)
+
+        return parser
+
+    def run(self):
+
+        # pylint: disable=W0201
+        self.parser = None
+        self.doc    = self.state.document
+        self.env    = self.doc.settings.env
+        self.nodes  = []
+
+        try:
+            if not self.doc.settings.file_insertion_enabled:
+                raise FaultyOption('docutils: file insertion disabled')
+            opts = self.getParserOptions()
+            self.parser = self.parseSource(opts)
+            self.nodes.extend(self.getNodes())
+
+        except FaultyOption as exc:
+            self.errMsg(exc)
+
+        return self.nodes
+
+
+    def getNodes(self):
+
+        translator = kerneldoc.ReSTTranslator()
+        lines      = ""
+        content    = WriterList(self.parser)
+        node       = nodes.section()
+
+        # translate
+
+        if "debug" in self.options:
+            rstout = StringIO()
+            self.parser.options.out = rstout
+            self.parser.parse_dump_storage(translator=translator)
+            code_block = "\n\n.. code-block:: rst\n    :linenos:\n"
+            for l in rstout.getvalue().split("\n"):
+                code_block += "\n    " + l
+            lines = code_block + "\n\n"
+
+        elif "snippets" in self.options:
+            selected  = self.options["snippets"].replace(","," ").split()
+            names     = self.parser.ctx.snippets.keys()
+            not_found = [ s for s in selected if s not in names]
+            found     = [ s for s in selected if s in names]
+            if not_found:
+                self.errMsg("selected snippets(s) not found:\n    %s"
+                            % "\n    ,".join(not_found))
+
+            if found:
+                code_block = "\n\n.. code-block:: %s\n" % self.options.get("language", "c")
+                if "linenos" in self.options:
+                    code_block += "    :linenos:\n"
+                snipsnap = ""
+                while found :
+                    snipsnap += self.parser.ctx.snippets[found.pop(0)] + "\n\n"
+                for l in snipsnap.split("\n"):
+                    code_block += "\n    " + l
+                lines = code_block + "\n\n"
+
+        else:
+            self.parser.options.out = content  # pylint: disable=R0204
+            self.parser.parse_dump_storage(translator=translator)
+
+        # check translation
+
+        if "functions" in self.options:
+            selected  = self.options["functions"].replace(","," ").split()
+            names     = translator.translated_names
+            not_found = [ s for s in selected if s not in names]
+            if not_found:
+                self.errMsg(
+                    "selected section(s) not found:\n    %s"
+                    % "\n    ,".join(not_found))
+
+        if "export" in self.options:
+            selected  = self.parser.options.use_names
+            names     = translator.translated_names
+            not_found = [ s for s in selected if s not in names]
+            if not_found:
+                self.errMsg(
+                    "exported definitions not found:\n    %s"
+                    % "\n    ,".join(not_found))
+
+        # add lines to content list
+        reSTfname = self.state.document.current_source
+
+        content.flush()
+        if lines:
+            for l in lines.split("\n"):
+                content.append(l, reSTfname, self.lineno)
+
+        buf = self.state.memo.title_styles, self.state.memo.section_level, self.state.memo.reporter
+        self.state.memo.reporter = AutodocReporter(content, self.state.memo.reporter)
+        self.state.memo.title_styles, self.state.memo.section_level = [], 0
+        try:
+            self.state.nested_parse(content, 0, node, match_titles=1)
+        finally:
+            self.state.memo.title_styles, self.state.memo.section_level, self.state.memo.reporter = buf
+
+        return node.children
+
+
+# ==============================================================================
+class WriterList(ViewList):
+# ==============================================================================
+    u"""docutils ViewList with write method."""
+
+    def __init__(self, parser, *args, **kwargs):
+        ViewList.__init__(self, *args, **kwargs)
+        self.parser = parser
+
+        self.last_offset = -1
+        self.line_buffer = ""
+
+    def write(self, cont):
+        if self.last_offset != self.parser.ctx.offset:
+            self.flush()
+            self.line_buffer = ""
+            self.last_offset = self.parser.ctx.offset
+
+        self.line_buffer += cont
+
+    def flush(self):
+        for _i, l in enumerate(self.line_buffer.split("\n")):
+            self.append(l, self.parser.options.fname, self.last_offset)
+        self.line_buffer = ""
diff --git a/scripts/kerneldoc b/scripts/kerneldoc
new file mode 100755
index 0000000..d633822
--- /dev/null
+++ b/scripts/kerneldoc
@@ -0,0 +1,11 @@
+#!/usr/bin/python
+
+import sys
+from os import path
+
+linuxdoc = path.abspath(path.join(path.dirname(__file__), '..'))
+linuxdoc = path.join(linuxdoc, 'Documentation', 'sphinx')
+sys.path.insert(0, linuxdoc)
+
+import kernel_doc
+kernel_doc.main()
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [RFC PATCH v1 3/6] kernel-doc: add kerneldoc-lint command
  2017-01-24 19:52 [RFC PATCH v1 0/6] pure python kernel-doc parser and more Markus Heiser
  2017-01-24 19:52 ` [RFC PATCH v1 1/6] kernel-doc: pure python kernel-doc parser (preparation) Markus Heiser
  2017-01-24 19:52 ` [RFC PATCH v1 2/6] kernel-doc: replace kernel-doc perl parser with a pure python one (WIP) Markus Heiser
@ 2017-01-24 19:52 ` Markus Heiser
  2017-01-25  6:38   ` Daniel Vetter
  2017-01-24 19:52 ` [RFC PATCH v1 4/6] kernel-doc: insert TODOs on kernel-doc errors Markus Heiser
                   ` (2 subsequent siblings)
  5 siblings, 1 reply; 23+ messages in thread
From: Markus Heiser @ 2017-01-24 19:52 UTC (permalink / raw)
  To: Jonathan Corbet, Mauro Carvalho Chehab, Jani Nikula,
	Daniel Vetter, Matthew Wilcox
  Cc: Markus Heiser, linux-doc @ vger . kernel . org List,
	linux-kernel @ vger . kernel . org List

this patch adds a command to lint kernel-doc comments.::

  scripts/kerneldoc-lint --help

The lint check include (only) kernel-doc rules described at [1]. It
does not check against reST (sphinx-doc) markup used in the kernel-doc
comments.  Since reST markups could include depencies to the build-
context (e.g. open/closed refs) only a sphinx-doc build can check the
reST markup in the context of the document it builds.

With 'kerneldoc-lint' command you can check a single file or a whole
folder, e.g:

  scripts/kerneldoc-lint include/drm
  ...
  scripts/kerneldoc-lint include/media/media-device.h

The lint-implementation is a part of the parser module (kernel_doc.py).
The comandline implementation consist only of a argument parser ('opts')
which calls the kernel-doc parser with a 'NullTranslator'.::

   parser = kerneldoc.Parser(opts, kerneldoc.NullTranslator())

Latter is also a small example of how-to implement kernel-doc
applications with the kernel-doc parser architecture.

[1] https://www.kernel.org/doc/html/latest/doc-guide/kernel-doc.html#writing-kernel-doc-comments

Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
---
 Documentation/sphinx/lint.py | 121 +++++++++++++++++++++++++++++++++++++++++++
 scripts/kerneldoc-lint       |  11 ++++
 2 files changed, 132 insertions(+)
 create mode 100755 Documentation/sphinx/lint.py
 create mode 100755 scripts/kerneldoc-lint

diff --git a/Documentation/sphinx/lint.py b/Documentation/sphinx/lint.py
new file mode 100755
index 0000000..5a0128f
--- /dev/null
+++ b/Documentation/sphinx/lint.py
@@ -0,0 +1,121 @@
+#!/usr/bin/env python3
+# -*- coding: utf-8; mode: python -*-
+# pylint: disable=C0103
+
+u"""
+    lint
+    ~~~~
+
+    Implementation of the ``kerneldoc-lint`` command.
+
+    :copyright:  Copyright (C) 2016  Markus Heiser
+    :license:    GPL Version 2, June 1991 see Linux/COPYING for details.
+
+    The ``kernel-doclint`` command *lints* documentation from Linux kernel's
+    source code comments, see ``--help``::
+
+        $ kernel-lintdoc --help
+
+    .. note::
+
+       The kernel-lintdoc command is under construction, no stable release
+       yet. The command-line arguments might be changed/extended in the near
+       future."""
+
+# ------------------------------------------------------------------------------
+# imports
+# ------------------------------------------------------------------------------
+
+import sys
+import argparse
+
+#import six
+
+from fspath import FSPath
+import kernel_doc as kerneldoc
+
+# ------------------------------------------------------------------------------
+# config
+# ------------------------------------------------------------------------------
+
+MSG    = lambda msg: sys.__stderr__.write("INFO : %s\n" % msg)
+ERR    = lambda msg: sys.__stderr__.write("ERROR: %s\n" % msg)
+FATAL  = lambda msg: sys.__stderr__.write("FATAL: %s\n" % msg)
+
+epilog = u"""This implementation of uses the kernel-doc parser
+from the linuxdoc extension, for detail informations read
+http://return42.github.io/sphkerneldoc/books/kernel-doc-HOWTO"""
+
+# ------------------------------------------------------------------------------
+def main():
+# ------------------------------------------------------------------------------
+
+    CLI = argparse.ArgumentParser(
+        description = ("Lint *kernel-doc* comments from source code")
+        , epilog = epilog
+        , formatter_class=argparse.ArgumentDefaultsHelpFormatter)
+
+    CLI.add_argument(
+        "srctree"
+        , help    = "File or folder of source code."
+        , type    = lambda x: FSPath(x).ABSPATH)
+
+    CLI.add_argument(
+        "--sloppy"
+        , action  = "store_true"
+        , help    = "Sloppy linting, reports only severe errors.")
+
+    CLI.add_argument(
+        "--markup"
+        , choices = ["reST", "kernel-doc"]
+        , default = "reST"
+        , help    = (
+            "Markup of the comments. Change this option only if you know"
+            " what you do. New comments must be marked up with reST!"))
+
+    CLI.add_argument(
+        "--verbose", "-v"
+        , action  = "store_true"
+        , help    = "verbose output with log messages to stderr" )
+
+    CLI.add_argument(
+        "--debug"
+        , action  = "store_true"
+        , help    = "debug messages to stderr" )
+
+    CMD = CLI.parse_args()
+    kerneldoc.DEBUG = CMD.debug
+    kerneldoc.VERBOSE = CMD.verbose
+
+    if not CMD.srctree.EXISTS:
+        ERR("%s does not exists or is not a folder" % CMD.srctree)
+        sys.exit(42)
+
+    if CMD.srctree.ISDIR:
+        for fname in CMD.srctree.reMatchFind(r"^.*\.[ch]$"):
+            if fname.startswith(CMD.srctree/"Documentation"):
+                continue
+            lintdoc_file(fname, CMD)
+    else:
+        fname = CMD.srctree
+        CMD.srctree = CMD.srctree.DIRNAME
+        lintdoc_file(fname, CMD)
+
+# ------------------------------------------------------------------------------
+def lintdoc_file(fname, CMD):
+# ------------------------------------------------------------------------------
+
+    fname = fname.relpath(CMD.srctree)
+    opts = kerneldoc.ParseOptions(
+        rel_fname       = fname
+        , src_tree      = CMD.srctree
+        , verbose_warn  = not (CMD.sloppy)
+        , markup        = CMD.markup )
+
+    parser = kerneldoc.Parser(opts, kerneldoc.NullTranslator())
+    try:
+        parser.parse()
+    except Exception: # pylint: disable=W0703
+        FATAL("kernel-doc comments markup of %s seems buggy / can't parse" % opts.fname)
+        return
+
diff --git a/scripts/kerneldoc-lint b/scripts/kerneldoc-lint
new file mode 100755
index 0000000..5109f7b
--- /dev/null
+++ b/scripts/kerneldoc-lint
@@ -0,0 +1,11 @@
+#!/usr/bin/python
+
+import sys
+from os import path
+
+linuxdoc = path.abspath(path.join(path.dirname(__file__), '..'))
+linuxdoc = path.join(linuxdoc, 'Documentation', 'sphinx')
+sys.path.insert(0, linuxdoc)
+
+import lint
+lint.main()
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [RFC PATCH v1 4/6] kernel-doc: insert TODOs on kernel-doc errors
  2017-01-24 19:52 [RFC PATCH v1 0/6] pure python kernel-doc parser and more Markus Heiser
                   ` (2 preceding siblings ...)
  2017-01-24 19:52 ` [RFC PATCH v1 3/6] kernel-doc: add kerneldoc-lint command Markus Heiser
@ 2017-01-24 19:52 ` Markus Heiser
  2017-01-24 19:52 ` [RFC PATCH v1 5/6] kernel-doc: add kerneldoc-src2rst command Markus Heiser
  2017-01-24 19:52 ` [RFC PATCH v1 6/6] kernel-doc: add man page builder (target mandocs) Markus Heiser
  5 siblings, 0 replies; 23+ messages in thread
From: Markus Heiser @ 2017-01-24 19:52 UTC (permalink / raw)
  To: Jonathan Corbet, Mauro Carvalho Chehab, Jani Nikula,
	Daniel Vetter, Matthew Wilcox
  Cc: Markus Heiser, linux-doc @ vger . kernel . org List,
	linux-kernel @ vger . kernel . org List

The rstKernelDoc.py sphinx-extensions inserts '.. todo::' on
errors. With this patch sphinx.ext.todo extension [1] is activated.
This is similar to what we know from DocBook's *Oops* functionality of
the kernel-doc perl script [2].

I added this functionality (only) to the subproject, to left the main
build untouched. E.g. run::

  make SPHINXDIRS="driver-api" htmldocs

open the HTML output and scroll down to see those *Oops* (TODO) boxes.

The *Oops* in the HTML output helps authors to find rough errors and
increase the quality of the documentation. ATM there are to many
errors (false positives) and it needs some discussion. Take this as a
starting point.

[1] http://www.sphinx-doc.org/en/stable/ext/todo.html
[2] http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/scripts/kernel-doc#n3073

Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
---
 Documentation/admin-guide/conf.py   | 2 ++
 Documentation/admin-guide/index.rst | 2 ++
 Documentation/conf.py               | 3 ++-
 Documentation/core-api/conf.py      | 2 ++
 Documentation/core-api/index.rst    | 2 ++
 Documentation/dev-tools/conf.py     | 2 ++
 Documentation/dev-tools/index.rst   | 2 ++
 Documentation/doc-guide/conf.py     | 2 ++
 Documentation/doc-guide/index.rst   | 2 ++
 Documentation/driver-api/conf.py    | 2 ++
 Documentation/driver-api/index.rst  | 2 ++
 Documentation/gpu/conf.py           | 2 ++
 Documentation/gpu/index.rst         | 2 ++
 Documentation/media/conf.py         | 2 ++
 Documentation/media/index.rst       | 2 ++
 Documentation/process/conf.py       | 2 ++
 Documentation/process/index.rst     | 2 ++
 Documentation/security/conf.py      | 2 ++
 Documentation/security/index.rst    | 9 +++++++++
 19 files changed, 45 insertions(+), 1 deletion(-)

diff --git a/Documentation/admin-guide/conf.py b/Documentation/admin-guide/conf.py
index 86f7389..28993c4 100644
--- a/Documentation/admin-guide/conf.py
+++ b/Documentation/admin-guide/conf.py
@@ -8,3 +8,5 @@ latex_documents = [
     ('index', 'linux-user.tex', 'Linux Kernel User Documentation',
      'The kernel development community', 'manual'),
 ]
+
+todo_include_todos = True
diff --git a/Documentation/admin-guide/index.rst b/Documentation/admin-guide/index.rst
index 8ddae4e..8c9245a 100644
--- a/Documentation/admin-guide/index.rst
+++ b/Documentation/admin-guide/index.rst
@@ -67,3 +67,5 @@ configure specific aspects of kernel behavior to your liking.
    =======
 
    * :ref:`genindex`
+
+   .. todolist::
diff --git a/Documentation/conf.py b/Documentation/conf.py
index 4843903..013af9a 100644
--- a/Documentation/conf.py
+++ b/Documentation/conf.py
@@ -34,7 +34,8 @@ from load_config import loadConfig
 # Add any Sphinx extension module names here, as strings. They can be
 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
 # ones.
-extensions = ['rstKernelDoc', 'rstFlatTable', 'kernel_include', 'cdomain' ]
+extensions = ['rstKernelDoc', 'rstFlatTable', 'kernel_include', 'cdomain',
+              'sphinx.ext.todo' ]
 
 # The name of the math extension changed on Sphinx 1.4
 if major == 1 and minor > 3:
diff --git a/Documentation/core-api/conf.py b/Documentation/core-api/conf.py
index db1f765..55e1a0e 100644
--- a/Documentation/core-api/conf.py
+++ b/Documentation/core-api/conf.py
@@ -8,3 +8,5 @@ latex_documents = [
     ('index', 'core-api.tex', project,
      'The kernel development community', 'manual'),
 ]
+
+todo_include_todos = True
diff --git a/Documentation/core-api/index.rst b/Documentation/core-api/index.rst
index 0d93d80..e958124 100644
--- a/Documentation/core-api/index.rst
+++ b/Documentation/core-api/index.rst
@@ -32,3 +32,5 @@ Interfaces for kernel debugging
    =======
 
    * :ref:`genindex`
+
+   .. todolist::
diff --git a/Documentation/dev-tools/conf.py b/Documentation/dev-tools/conf.py
index 7faafa3..f9d394b 100644
--- a/Documentation/dev-tools/conf.py
+++ b/Documentation/dev-tools/conf.py
@@ -8,3 +8,5 @@ latex_documents = [
     ('index', 'dev-tools.tex', project,
      'The kernel development community', 'manual'),
 ]
+
+todo_include_todos = True
diff --git a/Documentation/dev-tools/index.rst b/Documentation/dev-tools/index.rst
index 07d8811..48ffb70 100644
--- a/Documentation/dev-tools/index.rst
+++ b/Documentation/dev-tools/index.rst
@@ -31,3 +31,5 @@ whole; patches welcome!
    =======
 
    * :ref:`genindex`
+
+   .. todolist::
diff --git a/Documentation/doc-guide/conf.py b/Documentation/doc-guide/conf.py
index fd37311..8f2af2e 100644
--- a/Documentation/doc-guide/conf.py
+++ b/Documentation/doc-guide/conf.py
@@ -8,3 +8,5 @@ latex_documents = [
     ('index', 'kernel-doc-guide.tex', 'Linux Kernel Documentation Guide',
      'The kernel development community', 'manual'),
 ]
+
+todo_include_todos = True
diff --git a/Documentation/doc-guide/index.rst b/Documentation/doc-guide/index.rst
index 6fff402..b177a4e 100644
--- a/Documentation/doc-guide/index.rst
+++ b/Documentation/doc-guide/index.rst
@@ -18,3 +18,5 @@ How to write kernel documentation
    =======
 
    * :ref:`genindex`
+
+   .. todolist::
diff --git a/Documentation/driver-api/conf.py b/Documentation/driver-api/conf.py
index 202726d..b682e5c 100644
--- a/Documentation/driver-api/conf.py
+++ b/Documentation/driver-api/conf.py
@@ -8,3 +8,5 @@ latex_documents = [
     ('index', 'driver-api.tex', project,
      'The kernel development community', 'manual'),
 ]
+
+todo_include_todos = True
diff --git a/Documentation/driver-api/index.rst b/Documentation/driver-api/index.rst
index a2e5db0..ebc86cb 100644
--- a/Documentation/driver-api/index.rst
+++ b/Documentation/driver-api/index.rst
@@ -38,3 +38,5 @@ available subsections can be seen below.
    =======
 
    * :ref:`genindex`
+
+   .. todolist::
diff --git a/Documentation/gpu/conf.py b/Documentation/gpu/conf.py
index 1757b04..44dc0c2 100644
--- a/Documentation/gpu/conf.py
+++ b/Documentation/gpu/conf.py
@@ -8,3 +8,5 @@ latex_documents = [
     ('index', 'gpu.tex', project,
      'The kernel development community', 'manual'),
 ]
+
+todo_include_todos = True
diff --git a/Documentation/gpu/index.rst b/Documentation/gpu/index.rst
index 367d7c3..fee1857 100644
--- a/Documentation/gpu/index.rst
+++ b/Documentation/gpu/index.rst
@@ -20,3 +20,5 @@ Linux GPU Driver Developer's Guide
    =======
 
    * :ref:`genindex`
+
+   .. todolist::
diff --git a/Documentation/media/conf.py b/Documentation/media/conf.py
index bef927b..09755f9 100644
--- a/Documentation/media/conf.py
+++ b/Documentation/media/conf.py
@@ -8,3 +8,5 @@ latex_documents = [
     ('index', 'media.tex', 'Linux Media Subsystem Documentation',
      'The kernel development community', 'manual'),
 ]
+
+todo_include_todos = True
diff --git a/Documentation/media/index.rst b/Documentation/media/index.rst
index 7f8f0af..eb2754d 100644
--- a/Documentation/media/index.rst
+++ b/Documentation/media/index.rst
@@ -17,3 +17,5 @@ Contents:
    =======
 
    * :ref:`genindex`
+
+   .. todolist::
diff --git a/Documentation/process/conf.py b/Documentation/process/conf.py
index 1b01a80..a322314 100644
--- a/Documentation/process/conf.py
+++ b/Documentation/process/conf.py
@@ -8,3 +8,5 @@ latex_documents = [
     ('index', 'process.tex', 'Linux Kernel Development Documentation',
      'The kernel development community', 'manual'),
 ]
+
+todo_include_todos = True
diff --git a/Documentation/process/index.rst b/Documentation/process/index.rst
index 10aa692..acf13bb 100644
--- a/Documentation/process/index.rst
+++ b/Documentation/process/index.rst
@@ -55,3 +55,5 @@ lack of a better place.
    =======
 
    * :ref:`genindex`
+
+   .. todolist::
diff --git a/Documentation/security/conf.py b/Documentation/security/conf.py
index 472fc9a..78b30ab 100644
--- a/Documentation/security/conf.py
+++ b/Documentation/security/conf.py
@@ -6,3 +6,5 @@ latex_documents = [
     ('index', 'security.tex', project,
      'The kernel development community', 'manual'),
 ]
+
+todo_include_todos = True
diff --git a/Documentation/security/index.rst b/Documentation/security/index.rst
index 9bae6bb..b85944c 100644
--- a/Documentation/security/index.rst
+++ b/Documentation/security/index.rst
@@ -5,3 +5,12 @@ Security documentation
 .. toctree::
 
    tpm/index
+
+.. only::  subproject and html
+
+   Indices
+   =======
+
+   * :ref:`genindex`
+
+   .. todolist::
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [RFC PATCH v1 5/6] kernel-doc: add kerneldoc-src2rst command
  2017-01-24 19:52 [RFC PATCH v1 0/6] pure python kernel-doc parser and more Markus Heiser
                   ` (3 preceding siblings ...)
  2017-01-24 19:52 ` [RFC PATCH v1 4/6] kernel-doc: insert TODOs on kernel-doc errors Markus Heiser
@ 2017-01-24 19:52 ` Markus Heiser
  2017-01-24 19:52 ` [RFC PATCH v1 6/6] kernel-doc: add man page builder (target mandocs) Markus Heiser
  5 siblings, 0 replies; 23+ messages in thread
From: Markus Heiser @ 2017-01-24 19:52 UTC (permalink / raw)
  To: Jonathan Corbet, Mauro Carvalho Chehab, Jani Nikula,
	Daniel Vetter, Matthew Wilcox
  Cc: Markus Heiser, linux-doc @ vger . kernel . org List,
	linux-kernel @ vger . kernel . org List

this patch adds a command to auto-generate documentation from kernel's
source tree.::

  scripts/kerneldoc-src2rst --help

E.g. to autodoc the kernel's ./include folder use::

  scripts/kerneldoc-src2rst ./include /tmp/test123

>From the resulting reST-doctree you can build HTML rendered output like
the one in [1]. I use it to see, if patches on the kernel_doc.py parser
cause a regression (compare reST before/after the patch). Autodoc the
whole source tree takes a long time. To speed up, the src2rst module
uses multiprocessing from [2]. This means, it consumes all your CPUs. If
you don't want this, use option '--threads=n'.

[1] https://h2626237.stratoserver.net/kernel/linux_src_doc/index.html
[2] https://docs.python.org/3.6/library/multiprocessing.html

Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
---
 Documentation/sphinx/src2rst.py | 229 ++++++++++++++++++++++++++++++++++++++++
 scripts/kerneldoc-src2rst       |  11 ++
 2 files changed, 240 insertions(+)
 create mode 100755 Documentation/sphinx/src2rst.py
 create mode 100755 scripts/kerneldoc-src2rst

diff --git a/Documentation/sphinx/src2rst.py b/Documentation/sphinx/src2rst.py
new file mode 100755
index 0000000..d8d6e7b
--- /dev/null
+++ b/Documentation/sphinx/src2rst.py
@@ -0,0 +1,229 @@
+#!/usr/bin/env python3
+# -*- coding: utf-8; mode: python -*-
+# pylint: disable=C0103
+
+u"""
+    src2rst
+    ~~~~~~~
+
+    Implementation of the ``kerneldoc-src2rst`` command.
+
+    :copyright:  Copyright (C) 2016  Markus Heiser
+    :license:    GPL Version 2, June 1991 see Linux/COPYING for details.
+
+    The ``kerneldoc-src2rst`` command extracts documentation from Linux kernel's
+    source code comments, see ``--help``::
+
+        $ kerneldoc-src2rst --help
+
+"""
+
+# ------------------------------------------------------------------------------
+# imports
+# ------------------------------------------------------------------------------
+
+import sys
+import argparse
+import re
+import multiprocessing
+
+import six
+
+from fspath import FSPath, OS_ENV
+import kernel_doc as kerneldoc
+
+# ------------------------------------------------------------------------------
+# config
+# ------------------------------------------------------------------------------
+
+MARKUP = "kernel-doc" # "reST"
+MSG    = lambda msg: sys.__stderr__.write("INFO : %s\n" % msg)
+ERR    = lambda msg: sys.__stderr__.write("ERROR: %s\n" % msg)
+FATAL  = lambda msg: sys.__stderr__.write("FATAL: %s\n" % msg)
+IGNORE = ['kernel-doc.rst']
+
+TEMPLATE_INDEX=u"""\
+.. -*- coding: utf-8; mode: rst -*-
+
+================================================================================
+%(title)s
+================================================================================
+
+.. toctree::
+    :maxdepth: 1
+
+"""
+
+CMD     = None # global used by multiprocessing
+SRCTREE = FSPath(OS_ENV.get("srctree", ""))
+
+# ------------------------------------------------------------------------------
+def main():
+# ------------------------------------------------------------------------------
+
+    global CMD # pylint: disable=W0603, W0621
+
+    CLI = argparse.ArgumentParser(
+        description = ("Parse *kernel-doc* comments from source code")
+        , formatter_class=argparse.ArgumentDefaultsHelpFormatter)
+
+    CLI.add_argument(
+        "srctree"
+        , help    = "Folder of source code."
+        , type    = lambda x: FSPath(x).ABSPATH)
+
+    CLI.add_argument(
+        "doctree"
+        , help    = "Folder to place reST documentation."
+        , type    = lambda x: FSPath(x).ABSPATH)
+
+    CLI.add_argument(
+        "--sloppy"
+        , action  = "store_true"
+        , help    = "Sloppy comment check, reports only severe errors.")
+
+    CLI.add_argument(
+        "--force"
+        , action  = "store_true"
+        , help    = "Don't stop if doctree exists.")
+
+    CLI.add_argument(
+        "--threads"
+        , type    =  int
+        , default = multiprocessing.cpu_count()
+        , help    = "Use up to n threads.")
+
+    CLI.add_argument(
+        "--markup"
+        , choices = ["reST", "kernel-doc", "auto"]
+        , default = "auto"
+        , help    = (
+            "Markup of the comments. Change this option only if you know"
+            " what you do. New comments must be marked up with reST!"))
+
+    CMD = CLI.parse_args()
+
+    if not CMD.srctree.EXISTS:
+        ERR("%s does not exists." % CMD.srctree)
+        sys.exit(42)
+
+    if not CMD.srctree.ISDIR:
+        ERR("%s is not a folder." % CMD.srctree)
+        sys.exit(42)
+
+    if not CMD.force and CMD.doctree.EXISTS:
+        ERR("%s is in the way, remove it first" % CMD.doctree)
+        sys.exit(42)
+
+    CMD.rst_files = set()
+    if CMD.markup == "auto":
+        CMD.rst_files = docgrep(SRCTREE/"Documentation")
+
+    pool = multiprocessing.Pool(CMD.threads)
+    pool.map(autodoc_file, gather_filenames(CMD))
+    pool.close()
+    pool.join()
+
+    insert_index_files(CMD.doctree)
+
+# ------------------------------------------------------------------------------
+def gather_filenames(cmd):
+# ------------------------------------------------------------------------------
+    MSG("gather files ...")
+
+    for fname in cmd.srctree.reMatchFind(r"^.*\.[ch]$"):
+        if fname.startswith(CMD.srctree/"Documentation"):
+            continue
+        yield fname
+
+# ------------------------------------------------------------------------------
+def autodoc_file(fname):
+# ------------------------------------------------------------------------------
+
+    fname  = fname.relpath(CMD.srctree)
+    markup = CMD.markup
+
+    if CMD.markup == "kernel-doc" and fname in CMD.rst_files:
+        markup = "reST"
+
+    opts = kerneldoc.ParseOptions(
+        rel_fname       = fname
+        , src_tree      = CMD.srctree
+        , verbose_warn  = not (CMD.sloppy)
+        , markup        = markup )
+
+    parser = kerneldoc.Parser(opts, kerneldoc.NullTranslator())
+    try:
+        parser.parse()
+    except Exception: # pylint: disable=W0703
+        FATAL("kernel-doc markup of %s seems buggy / can't parse" % opts.fname)
+        return
+
+    if not parser.ctx.dump_storage:
+        # no kernel-doc comments found
+        MSG("parsed: NONE comments: %s" % opts.fname)
+        return
+
+    MSG("parsed: %4d comments: %s" % (len(parser.ctx.dump_storage), opts.fname))
+
+    try:
+        rst = six.StringIO()
+        translator = kerneldoc.ReSTTranslator()
+        opts.out   = rst
+
+        # First try to output reST, this might fail, because the kernel-doc
+        # parser part is to tollerant ("bad lines", "function name and function
+        # declaration are different", etc ...).
+        parser.parse_dump_storage(translator=translator)
+
+        outFile = CMD.doctree / fname.replace(".","_") + ".rst"
+        outFile.DIRNAME.makedirs()
+        with outFile.openTextFile(mode="w") as out:
+            out.write(rst.getvalue())
+
+    except Exception: # pylint: disable=W0703
+        FATAL("kernel-doc markup of %s seems buggy / can't parse" % opts.fname)
+        return
+
+# ------------------------------------------------------------------------------
+def insert_index_files(folder):
+# ------------------------------------------------------------------------------
+
+    for folder, dirnames, filenames in folder.walk():
+        ctx = kerneldoc.Container( title = folder.FILENAME )
+        dirnames.sort()
+        filenames.sort()
+        indexFile = folder / "index.rst"
+        MSG("create index: %s" % indexFile)
+        with indexFile.openTextFile(mode="w") as index:
+            index.write(TEMPLATE_INDEX % ctx)
+            for d in dirnames:
+                index.write("    %s/index\n" % d.FILENAME)
+            for f in filenames:
+                if f.FILENAME == "index":
+                    continue
+                index.write("    %s\n" % f.FILENAME)
+
+# ------------------------------------------------------------------------------
+def docgrep(folder):
+# ------------------------------------------------------------------------------
+
+    # Hackisch script to grep all '.. kernel-doc::' directives. The assumption
+    # is, that comments from the source files used in those directives are
+    # allready migratetd to the reST format. I guess that (ATM) 95-99% of the
+    # comments are not migrated, those will be parsed with the old kernel-doc
+    # comment style introduced by the old DocBook toolchain.
+
+    pat = re.compile(r"^\s*\.\.\s+kernel-doc::\s*([^\s]+)\s*$")
+    out = set()
+    for rstFile in folder.reMatchFind(r".*\.rst"):
+        if rstFile.BASENAME in IGNORE:
+            continue
+        #print(rstFile)
+        with rstFile.openTextFile() as f:
+            for l in f:
+                match = pat.search(l)
+                if match:
+                    #print(match.group(1))
+                    out.add(match.group(1))
+    return sorted(out)
diff --git a/scripts/kerneldoc-src2rst b/scripts/kerneldoc-src2rst
new file mode 100755
index 0000000..6c5e8b1
--- /dev/null
+++ b/scripts/kerneldoc-src2rst
@@ -0,0 +1,11 @@
+#!/usr/bin/python
+
+import sys
+from os import path
+
+linuxdoc = path.abspath(path.join(path.dirname(__file__), '..'))
+linuxdoc = path.join(linuxdoc, 'Documentation', 'sphinx')
+sys.path.insert(0, linuxdoc)
+
+import src2rst
+src2rst.main()
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [RFC PATCH v1 6/6] kernel-doc: add man page builder (target mandocs)
  2017-01-24 19:52 [RFC PATCH v1 0/6] pure python kernel-doc parser and more Markus Heiser
                   ` (4 preceding siblings ...)
  2017-01-24 19:52 ` [RFC PATCH v1 5/6] kernel-doc: add kerneldoc-src2rst command Markus Heiser
@ 2017-01-24 19:52 ` Markus Heiser
  5 siblings, 0 replies; 23+ messages in thread
From: Markus Heiser @ 2017-01-24 19:52 UTC (permalink / raw)
  To: Jonathan Corbet, Mauro Carvalho Chehab, Jani Nikula,
	Daniel Vetter, Matthew Wilcox
  Cc: Markus Heiser, linux-doc @ vger . kernel . org List,
	linux-kernel @ vger . kernel . org List

This patch brings man page build we already know from the DocBook
toolchain. It adds a sphinx extension 'manKernelDoc' which consists of:

* '.. kernel-doc-man::' : directive implemented in class KernelDocMan
* 'kernel_doc_man'      : *invisible* node
* 'kernel-doc-man'      : an alternative man builder (class KernelDocManBuilder)

The 'kernel-doc-man' builder produces manual pages in the groff
format. It is a *man* page sphinx-builder mainly written to generate
manual pages from kernel-doc comments by:

* scanning the master doc-tree for sections marked with the
  '.. kernel-doc-man::' directive and build manual pages for theses
  sections.

* reorder / rename (sub-) sections according to the conventions that
  should be employed when writing man pages for the Linux man-pages
  project, see man-pages(7) ...  (has to be discussed if we really need
  such a reordering).

A few words about How the 'kernel-doc-man' builder an the kernel-doc
parser work together:

It starts with the kernel-doc parser which builds small reST-doctrees
from the comments of functions, structs and so on. The kernel-doc parser
has an option (see 'kernel_doc.ParseOptions.man_sect=n') which can be
activated e.g. on a *per* kernel-doc directive::

  .. kernel-doc:: include/linux/debugobjects.h
     :man-sect: 9

or alternative it can be activated globally with::

  kernel_doc_mansect = 9

in the conf.py (this is what this patch does).

With option 'man_sect=n' the kernel-doc parser inserts *invisible*
'kernel_doc_man' nodes in the reST-doctree. Here is a view on such a
doctree where the <kernel_doc_man/> is a child of the <section>
describing 'get_sd_load_idx' function::

  <section docname="basics" dupnames="get_sd_load_idx"
           ids="get-sd-load-idx id86" names="get_sd_load_idx">
    <title>get_sd_load_idx</title>
    <kernel_doc_man manpage="get_sd_load_idx.9"/>
    ...
    <desc desctype="function" domain="c" noindex="False" objtype="function">
      <desc_signature first="False" ids="c.get_sd_load_idx" names="c.get_sd_load_idx">
	<desc_type>int</desc_type>
        ...

After the whole doctree is build by sphinx-build it is stored in the env
and the builder takes place. These *invisible* 'kernel_doc_man' nodes
will be ignored by all builders (html, pdf etc.) except the
'kernel-doc-man' builder.  The later picks up those nodes from the
doctree and builds a manpage from the surrounding <section>.

Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
---
 Documentation/Makefile.sphinx        |   5 +-
 Documentation/conf.py                |   3 +-
 Documentation/media/Makefile         |   1 +
 Documentation/sphinx/manKernelDoc.py | 408 +++++++++++++++++++++++++++++++++++
 4 files changed, 415 insertions(+), 2 deletions(-)
 create mode 100755 Documentation/sphinx/manKernelDoc.py

diff --git a/Documentation/Makefile.sphinx b/Documentation/Makefile.sphinx
index 626dfd0..73bd71b 100644
--- a/Documentation/Makefile.sphinx
+++ b/Documentation/Makefile.sphinx
@@ -89,10 +89,12 @@ epubdocs:
 xmldocs:
 	@$(foreach var,$(SPHINXDIRS),$(call loop_cmd,sphinx,xml,$(var),xml,$(var)))
 
+mandocs:
+	@$(foreach var,$(SPHINXDIRS),$(call loop_cmd,sphinx,kernel-doc-man,$(var),man,$(var)))
+
 # no-ops for the Sphinx toolchain
 sgmldocs:
 psdocs:
-mandocs:
 installmandocs:
 
 cleandocs:
@@ -106,6 +108,7 @@ dochelp:
 	@echo  '  htmldocs        - HTML'
 	@echo  '  latexdocs       - LaTeX'
 	@echo  '  pdfdocs         - PDF'
+	@echo  '  mandocs         - man pages'
 	@echo  '  epubdocs        - EPUB'
 	@echo  '  xmldocs         - XML'
 	@echo  '  cleandocs       - clean all generated files'
diff --git a/Documentation/conf.py b/Documentation/conf.py
index 013af9a..97b826b 100644
--- a/Documentation/conf.py
+++ b/Documentation/conf.py
@@ -35,7 +35,7 @@ from load_config import loadConfig
 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
 # ones.
 extensions = ['rstKernelDoc', 'rstFlatTable', 'kernel_include', 'cdomain',
-              'sphinx.ext.todo' ]
+              'manKernelDoc', 'sphinx.ext.todo' ]
 
 # The name of the math extension changed on Sphinx 1.4
 if major == 1 and minor > 3:
@@ -508,6 +508,7 @@ pdf_documents = [
 # line arguments.
 kernel_doc_verbose_warn = False
 kernel_doc_raise_error = False
+kernel_doc_mansect = 9
 
 # ------------------------------------------------------------------------------
 # Since loadConfig overwrites settings from the global namespace, it has to be
diff --git a/Documentation/media/Makefile b/Documentation/media/Makefile
index 3266360..289ca6d 100644
--- a/Documentation/media/Makefile
+++ b/Documentation/media/Makefile
@@ -103,6 +103,7 @@ html: all
 epub: all
 xml: all
 latex: $(IMGPDF) all
+kernel-doc-man: $(BUILDDIR) ${TARGETS}
 
 clean:
 	-rm -f $(DOTTGT) $(IMGTGT) ${TARGETS} 2>/dev/null
diff --git a/Documentation/sphinx/manKernelDoc.py b/Documentation/sphinx/manKernelDoc.py
new file mode 100755
index 0000000..3e27983
--- /dev/null
+++ b/Documentation/sphinx/manKernelDoc.py
@@ -0,0 +1,408 @@
+#!/usr/bin/env python3
+# -*- coding: utf-8; mode: python -*-
+
+u"""
+    kernel-doc-man
+    ~~~~~~~~~~~~~~
+
+    Implementation of the sphinx builder ``kernel-doc-man``.
+
+    :copyright:  Copyright (C) 2016  Markus Heiser
+    :license:    GPL Version 2, June 1991 see Linux/COPYING for details.
+
+    The ``kernel-doc-man`` (:py:class:`KernelDocManBuilder`) produces manual
+    pages in the groff format. It is a *man* page sphinx-builder mainly written
+    to generate manual pages from kernel-doc comments by:
+
+    * scanning the master doc-tree for sections marked with the
+      ``.. kernel-doc-man::`` directive and build manual pages for theses
+      sections.
+
+    * reorder / rename (sub-) sections according to the conventions that should
+      be employed when writing man pages for the Linux man-pages project, see
+      man-pages(7)
+
+    Usage::
+
+        $ sphinx-build -b kernel-doc-man
+
+    rest-Markup entry (e.g)::
+
+        .. kernel-doc-man::  manpage-name.9
+
+    Since the ``kernel-doc-man`` is an extension of the common `sphinx *man*
+    builder
+    <http://www.sphinx-doc.org/en/stable/config.html#confval-man_pages>`_, it is
+    also a full replacement, building booth, the common sphinx man-pages and
+    those marked with the ``.. kernel-doc-man::`` directive.
+
+    Mostly authors will use this feature in their reST documents in conjunction
+    with the ``.. kernel-doc::`` :ref:`directive
+    <kernel-doc:kernel-doc-directive>`, to create man pages from kernel-doc
+    comments.  This could be done, by setting the man section number with the
+    option ``man-sect``, e.g.::
+
+      .. kernel-doc:: include/linux/debugobjects.h
+          :man-sect: 9
+          :internal:
+
+    With this ``:man-sect: 9`` option, the kernel-doc parser will insert a
+    ``.. kernel-doc-man:: <declaration-name>.<man-sect no>`` directive in the
+    reST output, for every section describing a function, union etc.
+
+"""
+
+# ==============================================================================
+# imports
+# ==============================================================================
+
+import re
+import collections
+from os import path
+
+from docutils.io import FileOutput
+from docutils.frontend import OptionParser
+from docutils import nodes
+from docutils.utils import new_document
+from docutils.parsers.rst import Directive
+from docutils.transforms import Transform
+
+from sphinx import addnodes
+from sphinx.util.nodes import inline_all_toctrees
+from sphinx.util.console import bold, darkgreen     # pylint: disable=E0611
+from sphinx.writers.manpage import ManualPageWriter
+
+from sphinx.builders.manpage import ManualPageBuilder
+
+from kernel_doc import Container
+
+# ==============================================================================
+# common globals
+# ==============================================================================
+
+DEFAULT_MAN_SECT  = 9
+
+# The version numbering follows numbering of the specification
+# (Documentation/books/kernel-doc-HOWTO).
+__version__  = '1.0'
+
+# ==============================================================================
+def setup(app):
+# ==============================================================================
+
+    app.add_builder(KernelDocManBuilder)
+    app.add_directive("kernel-doc-man", KernelDocMan)
+    app.add_config_value('author', "", 'env')
+    app.add_node(kernel_doc_man
+                 , html    = (skip_kernel_doc_man, None)
+                 , latex   = (skip_kernel_doc_man, None)
+                 , texinfo = (skip_kernel_doc_man, None)
+                 , text    = (skip_kernel_doc_man, None)
+                 , man     = (skip_kernel_doc_man, None) )
+
+    return dict(
+        version = __version__
+        , parallel_read_safe = True
+        , parallel_write_safe = True
+    )
+
+# ==============================================================================
+class kernel_doc_man(nodes.Invisible, nodes.Element):    # pylint: disable=C0103
+# ==============================================================================
+    """Node to mark a section as *manpage*"""
+
+def skip_kernel_doc_man(self, node):                     # pylint: disable=W0613
+    raise nodes.SkipNode
+
+
+# ==============================================================================
+class KernelDocMan(Directive):
+# ==============================================================================
+
+    required_arguments = 1
+    optional_arguments = 0
+
+    def run(self):
+        man_node = kernel_doc_man()
+        man_node["manpage"] = self.arguments[0]
+        return [man_node]
+
+# ==============================================================================
+class Section2Manpage(Transform):
+# ==============================================================================
+    u"""Transforms a *section* tree into an *manpage* tree.
+
+    The structural layout of a man-page differs from the one produced, by the
+    kernel-doc parser. The kernel-doc parser produce reST which fits to *normal*
+    documentation, e.g. the declaration of a function in reST is like.
+
+    .. code-block:: rst
+
+        user_function
+        =============
+
+        .. c:function:: int user_function(int a)
+
+           The *purpose* description.
+
+           :param int a:
+               Parameter a description
+
+       Description
+       ===========
+
+       lorem ipsum ..
+
+       Return
+       ======
+
+       Returns first argument
+
+    On the other side, in man-pages it is common (see ``man man-pages``) to
+    print the *purpose* line in the "NAME" section, function's prototype in the
+    "SYNOPSIS" section and the parameter description in the "OPTIONS" section::
+
+       NAME
+              user_function -- The *purpose* description.
+
+       SYNOPSIS
+               int user_function(int a)
+
+       OPTIONS
+               a
+
+       DESCRIPTION
+               lorem ipsum
+
+       RETURN VALUE
+               Returns first argument
+
+    """
+    # The common section order is:
+    manTitles = [
+        (re.compile(r"^SYNOPSIS|^DEFINITION"
+                    , flags=re.I), "SYNOPSIS")
+        , (re.compile(r"^CONFIG",     flags=re.I), "CONFIGURATION")
+        , (re.compile(r"^DESCR",      flags=re.I), "DESCRIPTION")
+        , (re.compile(r"^OPTION",     flags=re.I), "OPTIONS")
+        , (re.compile(r"^EXIT",       flags=re.I), "EXIT STATUS")
+        , (re.compile(r"^RETURN",     flags=re.I), "RETURN VALUE")
+        , (re.compile(r"^ERROR",      flags=re.I), "ERRORS")
+        , (re.compile(r"^ENVIRON",    flags=re.I), "ENVIRONMENT")
+        , (re.compile(r"^FILE",       flags=re.I), "FILES")
+        , (re.compile(r"^VER",        flags=re.I), "VERSIONS")
+        , (re.compile(r"^ATTR",       flags=re.I), "ATTRIBUTES")
+        , (re.compile(r"^CONFOR",     flags=re.I), "CONFORMING TO")
+        , (re.compile(r"^NOTE",       flags=re.I), "NOTES")
+        , (re.compile(r"^BUG",        flags=re.I), "BUGS")
+        , (re.compile(r"^EXAMPLE",    flags=re.I), "EXAMPLE")
+        , (re.compile(r"^SEE",        flags=re.I), "SEE ALSO")
+        , ]
+
+    manTitleOrder = [t for r,t in manTitles]
+
+    @classmethod
+    def getFirstChild(cls, subtree, *classes):
+        for c in classes:
+            if subtree is None:
+                break
+            idx = subtree.first_child_matching_class(c)
+            if idx is None:
+                subtree = None
+                break
+            subtree = subtree[idx]
+        return subtree
+
+    def strip_man_info(self):
+        section  = self.document[0]
+        man_info = Container(authors=[])
+        man_node = self.getFirstChild(section, kernel_doc_man)
+        name, sect = (man_node["manpage"].split(".", -1) + [DEFAULT_MAN_SECT])[:2]
+        man_info["manpage"] = name
+        man_info["mansect"] = sect
+
+        # strip field list
+        field_list = self.getFirstChild(section, nodes.field_list)
+        if field_list:
+            field_list.parent.remove(field_list)
+            for field in field_list:
+                name  = field[0].astext().lower()
+                value = field[1].astext()
+                man_info[name] = man_info.get(name, []) + [value,]
+
+            # normalize authors
+            for auth, adr in zip(man_info.get("author", [])
+                                 , man_info.get("address", [])):
+                man_info["authors"].append("%s <%s>" % (auth, adr))
+
+        # strip *purpose*
+        desc_content = self.getFirstChild(
+            section, addnodes.desc, addnodes.desc_content)
+        if not len(desc_content):
+            # missing initial short description in kernel-doc comment
+            man_info.subtitle = ""
+        else:
+            man_info.subtitle = desc_content[0].astext()
+            del desc_content[0]
+
+        # remove section title
+        old_title = self.getFirstChild(section, nodes.title)
+        old_title.parent.remove(old_title)
+
+        # gather type of the declaration
+        decl_type = self.getFirstChild(
+            section, addnodes.desc, addnodes.desc_signature, addnodes.desc_type)
+        if decl_type is not None:
+            decl_type = decl_type.astext().strip()
+        man_info.decl_type = decl_type
+
+        # complete infos
+        man_info.title    = man_info["manpage"]
+        man_info.section  = man_info["mansect"]
+
+        return man_info
+
+    def isolateSections(self, sec_by_title):
+        section = self.document[0]
+        while True:
+            sect = self.getFirstChild(section, nodes.section)
+            if not sect:
+                break
+            sec_parent = sect.parent
+            target_idx = sect.parent.index(sect) - 1
+            sect.parent.remove(sect)
+            if isinstance(sec_parent[target_idx], nodes.target):
+                # drop target / is useless in man-pages
+                del sec_parent[target_idx]
+            title = sect[0].astext().upper()
+            for r, man_title in self.manTitles:
+                if r.search(title):
+                    title = man_title
+                    sect[0].replace_self(nodes.title(text = title))
+                    break
+            # we dont know if there are sections with the same title
+            sec_by_title[title] = sec_by_title.get(title, []) + [sect]
+
+        return sec_by_title
+
+    def isolateSynopsis(self, sec_by_title):
+        synopsis = None
+        c_desc = self.getFirstChild(self.document[0], addnodes.desc)
+        if c_desc is not None:
+            c_desc.parent.remove(c_desc)
+            synopsis = nodes.section()
+            synopsis += nodes.title(text = 'synopsis')
+            synopsis += c_desc
+            sec_by_title["SYNOPSIS"] = sec_by_title.get("SYNOPSIS", []) + [synopsis]
+        return sec_by_title
+
+    def apply(self):
+        self.document.man_info = self.strip_man_info()
+        sec_by_title = collections.OrderedDict()
+
+        self.isolateSections(sec_by_title)
+        # On struct, enum, union, typedef, the SYNOPSIS is taken from the
+        # DEFINITION section.
+        if self.document.man_info.decl_type not in [
+                "struct", "enum", "union", "typedef"]:
+            self.isolateSynopsis(sec_by_title)
+
+        for sec_name in self.manTitleOrder:
+            sec_list = sec_by_title.pop(sec_name,[])
+            self.document[0] += sec_list
+
+        for sec_list in sec_by_title.values():
+            self.document[0] += sec_list
+
+# ==============================================================================
+class KernelDocManBuilder(ManualPageBuilder):
+# ==============================================================================
+
+    """
+    Builds groff output in manual page format.
+    """
+    name = 'kernel-doc-man'
+    format = 'man'
+    supported_image_types = []
+
+    def init(self):
+        pass
+
+    def is_manpage(self, node):               # pylint: disable=R0201
+        if isinstance(node, nodes.section):
+            return bool(Section2Manpage.getFirstChild(
+            node, kernel_doc_man) is not None)
+        else:
+            return False
+
+    def prepare_writing(self, docnames):
+        """A place where you can add logic before :meth:`write_doc` is run"""
+        pass
+
+    def write_doc(self, docname, doctree):
+        """Where you actually write something to the filesystem."""
+        pass
+
+    def get_partial_document(self, children): # pylint: disable=R0201
+        doc_tree =  new_document('<output>')
+        doc_tree += children
+        return doc_tree
+
+    def write(self, *ignored):
+        if self.config.man_pages:
+            # build manpages from config.man_pages as usual
+            ManualPageBuilder.write(self, *ignored)
+            # FIXME:
+
+        self.info(bold("scan master tree for kernel-doc man-pages ... ") + darkgreen("{"), nonl=True)
+
+        master_tree = self.env.get_doctree(self.config.master_doc)
+        master_tree = inline_all_toctrees(
+            self, set(), self.config.master_doc, master_tree, darkgreen, [self.config.master_doc])
+        self.info(darkgreen("}"))
+        man_nodes   = master_tree.traverse(condition=self.is_manpage)
+        if not man_nodes and not self.config.man_pages:
+            self.warn('no "man_pages" config value nor manual section found; no manual pages '
+                      'will be written')
+            return
+
+        self.info(bold('writing man pages ... '), nonl=True)
+
+        for man_parent in man_nodes:
+
+            doc_tree = self.get_partial_document(man_parent)
+            Section2Manpage(doc_tree).apply()
+
+            if not doc_tree.man_info["authors"] and self.config.author:
+                doc_tree.man_info["authors"].append(self.config.author)
+
+            doc_writer   = ManualPageWriter(self)
+            doc_settings = OptionParser(
+                defaults            = self.env.settings
+                , components        = (doc_writer,)
+                , read_config_files = True
+                , ).get_default_values()
+
+            doc_settings.__dict__.update(doc_tree.man_info)
+            doc_tree.settings = doc_settings
+            targetname  = '%s.%s' % (doc_tree.man_info.title, doc_tree.man_info.section)
+            if doc_tree.man_info.decl_type in [
+                    "struct", "enum", "union", "typedef"]:
+                targetname = "%s_%s" % (doc_tree.man_info.decl_type, targetname)
+
+            destination = FileOutput(
+                destination_path = path.join(self.outdir, targetname)
+                , encoding='utf-8')
+
+            self.info(darkgreen(targetname) + " ", nonl=True)
+            self.env.resolve_references(doc_tree, doc_tree.man_info.manpage, self)
+
+            # remove pending_xref nodes
+            for pendingnode in doc_tree.traverse(addnodes.pending_xref):
+                pendingnode.replace_self(pendingnode.children)
+            doc_writer.write(doc_tree, destination)
+        self.info()
+
+
+    def finish(self):
+        pass
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 23+ messages in thread

* Re: [RFC PATCH v1 2/6] kernel-doc: replace kernel-doc perl parser with a pure python one (WIP)
  2017-01-24 19:52 ` [RFC PATCH v1 2/6] kernel-doc: replace kernel-doc perl parser with a pure python one (WIP) Markus Heiser
@ 2017-01-25  0:13   ` Jonathan Corbet
  2017-01-25  6:37     ` Daniel Vetter
  2017-01-25 10:24     ` Jani Nikula
  0 siblings, 2 replies; 23+ messages in thread
From: Jonathan Corbet @ 2017-01-25  0:13 UTC (permalink / raw)
  To: Markus Heiser
  Cc: Mauro Carvalho Chehab, Jani Nikula, Daniel Vetter,
	Matthew Wilcox, linux-doc @ vger . kernel . org List,
	linux-kernel @ vger . kernel . org List

On Tue, 24 Jan 2017 20:52:40 +0100
Markus Heiser <markus.heiser@darmarit.de> wrote:

> This patch is the initial merge of a pure python implementation
> to parse kernel-doc comments and generate reST from.
> 
> It consist mainly of to parts, the parser module (kerneldoc.py) and the
> sphinx-doc extension (rstKernelDoc.py). For the command line, there is
> also a 'scripts/kerneldoc' added.::
> 
>    scripts/kerneldoc --help
> 
> The main two parts are merged 1:1 from
> 
>   https://github.com/return42/linuxdoc  commit 3991d3c
> 
> Take this as a starting point, there is a lot of work to do (WIP).
> Since it is merged 1:1, you will also notice it's CodingStyle is (ATM)
> not kernel compliant and it lacks a user doc ('Documentation/doc-guide').
> 
> I will send patches for this when the community agreed about
> functionalities. I guess there are a lot of topics we have to agree
> about. E.g. the py-implementation is more strict the perl one.  When you
> build doc with the py-module you will see a lot of additional errors and
> warnings compared to the sloppy perl one.

Again, quick comments...

 - I would *much* rather evolve our existing Sphinx extension in the
   direction we want it to go than to just replace it wholesale.
   Replacement is the wrong approach for a few reasons, including the need
   to minimize change and preserve credit for Jani's work.  Can we work on
   that basis, please?

   Ideally at the time of merging, we would be able to build the docs with
   *either* kerneldoc.

 - I'll have to try it out to see how noisy it is.  I'm not opposed to
   stricter checks; indeed, they could be a good thing.  But we might want
   to have an option so we can cut back on the noise by default.

Thanks,

jon

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [RFC PATCH v1 2/6] kernel-doc: replace kernel-doc perl parser with a pure python one (WIP)
  2017-01-25  0:13   ` Jonathan Corbet
@ 2017-01-25  6:37     ` Daniel Vetter
  2017-01-25  7:37       ` Markus Heiser
  2017-01-25 10:24     ` Jani Nikula
  1 sibling, 1 reply; 23+ messages in thread
From: Daniel Vetter @ 2017-01-25  6:37 UTC (permalink / raw)
  To: Jonathan Corbet
  Cc: Markus Heiser, Mauro Carvalho Chehab, Jani Nikula, Daniel Vetter,
	Matthew Wilcox, linux-doc @ vger . kernel . org List,
	linux-kernel @ vger . kernel . org List

On Tue, Jan 24, 2017 at 05:13:14PM -0700, Jonathan Corbet wrote:
> On Tue, 24 Jan 2017 20:52:40 +0100
> Markus Heiser <markus.heiser@darmarit.de> wrote:
> 
> > This patch is the initial merge of a pure python implementation
> > to parse kernel-doc comments and generate reST from.
> > 
> > It consist mainly of to parts, the parser module (kerneldoc.py) and the
> > sphinx-doc extension (rstKernelDoc.py). For the command line, there is
> > also a 'scripts/kerneldoc' added.::
> > 
> >    scripts/kerneldoc --help
> > 
> > The main two parts are merged 1:1 from
> > 
> >   https://github.com/return42/linuxdoc  commit 3991d3c
> > 
> > Take this as a starting point, there is a lot of work to do (WIP).
> > Since it is merged 1:1, you will also notice it's CodingStyle is (ATM)
> > not kernel compliant and it lacks a user doc ('Documentation/doc-guide').
> > 
> > I will send patches for this when the community agreed about
> > functionalities. I guess there are a lot of topics we have to agree
> > about. E.g. the py-implementation is more strict the perl one.  When you
> > build doc with the py-module you will see a lot of additional errors and
> > warnings compared to the sloppy perl one.
> 
> Again, quick comments...
> 
>  - I would *much* rather evolve our existing Sphinx extension in the
>    direction we want it to go than to just replace it wholesale.
>    Replacement is the wrong approach for a few reasons, including the need
>    to minimize change and preserve credit for Jani's work.  Can we work on
>    that basis, please?
> 
>    Ideally at the time of merging, we would be able to build the docs with
>    *either* kerneldoc.

Seconded, I think renaming the extension string like this is just fairly
pointless busy-work. Kernel-doc isn't interacting perfectly with rst, but
now we already have a sizeable amount of stuff converted and going through
all that once more needs imo som really clear benefits. I think
bug-for-bug compatibility would be much better. Later on we could do
changes, on a change-by-change basis.
-Daniel


>  - I'll have to try it out to see how noisy it is.  I'm not opposed to
>    stricter checks; indeed, they could be a good thing.  But we might want
>    to have an option so we can cut back on the noise by default.


> 
> Thanks,
> 
> jon

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [RFC PATCH v1 3/6] kernel-doc: add kerneldoc-lint command
  2017-01-24 19:52 ` [RFC PATCH v1 3/6] kernel-doc: add kerneldoc-lint command Markus Heiser
@ 2017-01-25  6:38   ` Daniel Vetter
  2017-01-25  8:21     ` Jani Nikula
  0 siblings, 1 reply; 23+ messages in thread
From: Daniel Vetter @ 2017-01-25  6:38 UTC (permalink / raw)
  To: Markus Heiser
  Cc: Jonathan Corbet, Mauro Carvalho Chehab, Jani Nikula,
	Daniel Vetter, Matthew Wilcox,
	linux-doc @ vger . kernel . org List,
	linux-kernel @ vger . kernel . org List

On Tue, Jan 24, 2017 at 08:52:41PM +0100, Markus Heiser wrote:
> this patch adds a command to lint kernel-doc comments.::
> 
>   scripts/kerneldoc-lint --help
> 
> The lint check include (only) kernel-doc rules described at [1]. It
> does not check against reST (sphinx-doc) markup used in the kernel-doc
> comments.  Since reST markups could include depencies to the build-
> context (e.g. open/closed refs) only a sphinx-doc build can check the
> reST markup in the context of the document it builds.
> 
> With 'kerneldoc-lint' command you can check a single file or a whole
> folder, e.g:
> 
>   scripts/kerneldoc-lint include/drm
>   ...
>   scripts/kerneldoc-lint include/media/media-device.h
> 
> The lint-implementation is a part of the parser module (kernel_doc.py).
> The comandline implementation consist only of a argument parser ('opts')
> which calls the kernel-doc parser with a 'NullTranslator'.::
> 
>    parser = kerneldoc.Parser(opts, kerneldoc.NullTranslator())
> 
> Latter is also a small example of how-to implement kernel-doc
> applications with the kernel-doc parser architecture.
> 
> [1] https://www.kernel.org/doc/html/latest/doc-guide/kernel-doc.html#writing-kernel-doc-comments
> 
> Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>

Didn't we have a patch from Jani to gives us a make target doing this? I
think that'd be even neater ...
-Daniel

> ---
>  Documentation/sphinx/lint.py | 121 +++++++++++++++++++++++++++++++++++++++++++
>  scripts/kerneldoc-lint       |  11 ++++
>  2 files changed, 132 insertions(+)
>  create mode 100755 Documentation/sphinx/lint.py
>  create mode 100755 scripts/kerneldoc-lint
> 
> diff --git a/Documentation/sphinx/lint.py b/Documentation/sphinx/lint.py
> new file mode 100755
> index 0000000..5a0128f
> --- /dev/null
> +++ b/Documentation/sphinx/lint.py
> @@ -0,0 +1,121 @@
> +#!/usr/bin/env python3
> +# -*- coding: utf-8; mode: python -*-
> +# pylint: disable=C0103
> +
> +u"""
> +    lint
> +    ~~~~
> +
> +    Implementation of the ``kerneldoc-lint`` command.
> +
> +    :copyright:  Copyright (C) 2016  Markus Heiser
> +    :license:    GPL Version 2, June 1991 see Linux/COPYING for details.
> +
> +    The ``kernel-doclint`` command *lints* documentation from Linux kernel's
> +    source code comments, see ``--help``::
> +
> +        $ kernel-lintdoc --help
> +
> +    .. note::
> +
> +       The kernel-lintdoc command is under construction, no stable release
> +       yet. The command-line arguments might be changed/extended in the near
> +       future."""
> +
> +# ------------------------------------------------------------------------------
> +# imports
> +# ------------------------------------------------------------------------------
> +
> +import sys
> +import argparse
> +
> +#import six
> +
> +from fspath import FSPath
> +import kernel_doc as kerneldoc
> +
> +# ------------------------------------------------------------------------------
> +# config
> +# ------------------------------------------------------------------------------
> +
> +MSG    = lambda msg: sys.__stderr__.write("INFO : %s\n" % msg)
> +ERR    = lambda msg: sys.__stderr__.write("ERROR: %s\n" % msg)
> +FATAL  = lambda msg: sys.__stderr__.write("FATAL: %s\n" % msg)
> +
> +epilog = u"""This implementation of uses the kernel-doc parser
> +from the linuxdoc extension, for detail informations read
> +http://return42.github.io/sphkerneldoc/books/kernel-doc-HOWTO"""
> +
> +# ------------------------------------------------------------------------------
> +def main():
> +# ------------------------------------------------------------------------------
> +
> +    CLI = argparse.ArgumentParser(
> +        description = ("Lint *kernel-doc* comments from source code")
> +        , epilog = epilog
> +        , formatter_class=argparse.ArgumentDefaultsHelpFormatter)
> +
> +    CLI.add_argument(
> +        "srctree"
> +        , help    = "File or folder of source code."
> +        , type    = lambda x: FSPath(x).ABSPATH)
> +
> +    CLI.add_argument(
> +        "--sloppy"
> +        , action  = "store_true"
> +        , help    = "Sloppy linting, reports only severe errors.")
> +
> +    CLI.add_argument(
> +        "--markup"
> +        , choices = ["reST", "kernel-doc"]
> +        , default = "reST"
> +        , help    = (
> +            "Markup of the comments. Change this option only if you know"
> +            " what you do. New comments must be marked up with reST!"))
> +
> +    CLI.add_argument(
> +        "--verbose", "-v"
> +        , action  = "store_true"
> +        , help    = "verbose output with log messages to stderr" )
> +
> +    CLI.add_argument(
> +        "--debug"
> +        , action  = "store_true"
> +        , help    = "debug messages to stderr" )
> +
> +    CMD = CLI.parse_args()
> +    kerneldoc.DEBUG = CMD.debug
> +    kerneldoc.VERBOSE = CMD.verbose
> +
> +    if not CMD.srctree.EXISTS:
> +        ERR("%s does not exists or is not a folder" % CMD.srctree)
> +        sys.exit(42)
> +
> +    if CMD.srctree.ISDIR:
> +        for fname in CMD.srctree.reMatchFind(r"^.*\.[ch]$"):
> +            if fname.startswith(CMD.srctree/"Documentation"):
> +                continue
> +            lintdoc_file(fname, CMD)
> +    else:
> +        fname = CMD.srctree
> +        CMD.srctree = CMD.srctree.DIRNAME
> +        lintdoc_file(fname, CMD)
> +
> +# ------------------------------------------------------------------------------
> +def lintdoc_file(fname, CMD):
> +# ------------------------------------------------------------------------------
> +
> +    fname = fname.relpath(CMD.srctree)
> +    opts = kerneldoc.ParseOptions(
> +        rel_fname       = fname
> +        , src_tree      = CMD.srctree
> +        , verbose_warn  = not (CMD.sloppy)
> +        , markup        = CMD.markup )
> +
> +    parser = kerneldoc.Parser(opts, kerneldoc.NullTranslator())
> +    try:
> +        parser.parse()
> +    except Exception: # pylint: disable=W0703
> +        FATAL("kernel-doc comments markup of %s seems buggy / can't parse" % opts.fname)
> +        return
> +
> diff --git a/scripts/kerneldoc-lint b/scripts/kerneldoc-lint
> new file mode 100755
> index 0000000..5109f7b
> --- /dev/null
> +++ b/scripts/kerneldoc-lint
> @@ -0,0 +1,11 @@
> +#!/usr/bin/python
> +
> +import sys
> +from os import path
> +
> +linuxdoc = path.abspath(path.join(path.dirname(__file__), '..'))
> +linuxdoc = path.join(linuxdoc, 'Documentation', 'sphinx')
> +sys.path.insert(0, linuxdoc)
> +
> +import lint
> +lint.main()
> -- 
> 2.7.4
> 

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [RFC PATCH v1 2/6] kernel-doc: replace kernel-doc perl parser with a pure python one (WIP)
  2017-01-25  6:37     ` Daniel Vetter
@ 2017-01-25  7:37       ` Markus Heiser
  0 siblings, 0 replies; 23+ messages in thread
From: Markus Heiser @ 2017-01-25  7:37 UTC (permalink / raw)
  To: Daniel Vetter, Jonathan Corbet
  Cc: Mauro Carvalho Chehab, Jani Nikula, Daniel Vetter,
	Matthew Wilcox, linux-doc @ vger . kernel . org List,
	linux-kernel @ vger . kernel . org List

Hi Jon, hi Daniel !

Am 25.01.2017 um 07:37 schrieb Daniel Vetter <daniel@ffwll.ch>:

>> Again, quick comments...
>> 
>> - I would *much* rather evolve our existing Sphinx extension in the
>>   direction we want it to go than to just replace it wholesale.
>>   Replacement is the wrong approach for a few reasons, including the need
>>   to minimize change and preserve credit for Jani's work.  Can we work on
>>   that basis, please?

Sure. But I fear I haven't understood you right .... last post was:

> Markus, would you consider sending out a new patch set for review?  What I
> would like to do see is something adding the new script for the Sphinx
> toolchain, while leaving the DocBook build unchanged, using the old
> script.  We could then delete it once the last template file has moved
> over. 

talking about DocBook and now I read ...

>>   Ideally at the time of merging, we would be able to build the docs with
>>   *either* kerneldoc.

Now I'am totally confused ... it's no about you, but I do not understand
you clearly ... can you help a conceptual man?

> Seconded, I think renaming the extension string like this is just fairly
> pointless busy-work.

Hi Daniel, please help me, what did you mean with "renaming" the extension
string and "busy-work"?

There is a renaming of module's name but there should no work outside this
patch ... 

> Kernel-doc isn't interacting perfectly with rst, but
> now we already have a sizeable amount of stuff converted and going through
> all that once more needs imo som really clear benefits.

from authors POV nothing has changed.

> I think bug-for-bug compatibility would be much better. Later on we could do
> changes, on a change-by-change basis.

Both sphinx-extensions (the one we have and the one in the series) are
adapter to a "parser backend". 

1. Documentation/sphinx/kerneldoc.py    <--> scripts/kerneldoc -rst
2. Documentation/sphinx/rstKernelDoc.py <--> import module Documentation/sphinx/kernel_doc.py

Maintain two adapters for the two backends is possible. But one adapter
for two complete different backends .. is this what you mean?

>> - I'll have to try it out to see how noisy it is.  I'm not opposed to
>>   stricter checks; indeed, they could be a good thing.  But we might want
>>   to have an option so we can cut back on the noise by default.

As said, I'am willing to go communities way, it seems just a communication
problem (on my side) to understand what this way would be.

I try to sum what I guess ... e.g. to build output as usual with (1.)

  $ make htmldocs

to build with the py-parser and its sphinx-extension (see 2. above)::

  $ USE_PY_PARSER=1 make htmldocs

this should be easy and I can realize it in v2, but is this what you want?

Please give me some more hints / Thanks a lot!

--Markus--

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [RFC PATCH v1 3/6] kernel-doc: add kerneldoc-lint command
  2017-01-25  6:38   ` Daniel Vetter
@ 2017-01-25  8:21     ` Jani Nikula
  2017-01-25  9:34       ` Markus Heiser
  0 siblings, 1 reply; 23+ messages in thread
From: Jani Nikula @ 2017-01-25  8:21 UTC (permalink / raw)
  To: Daniel Vetter, Markus Heiser
  Cc: Jonathan Corbet, Mauro Carvalho Chehab, Daniel Vetter,
	Matthew Wilcox, linux-doc @ vger . kernel . org List,
	linux-kernel @ vger . kernel . org List

On Wed, 25 Jan 2017, Daniel Vetter <daniel@ffwll.ch> wrote:
> On Tue, Jan 24, 2017 at 08:52:41PM +0100, Markus Heiser wrote:
>> this patch adds a command to lint kernel-doc comments.::
>> 
>>   scripts/kerneldoc-lint --help
>> 
>> The lint check include (only) kernel-doc rules described at [1]. It
>> does not check against reST (sphinx-doc) markup used in the kernel-doc
>> comments.  Since reST markups could include depencies to the build-
>> context (e.g. open/closed refs) only a sphinx-doc build can check the
>> reST markup in the context of the document it builds.
>> 
>> With 'kerneldoc-lint' command you can check a single file or a whole
>> folder, e.g:
>> 
>>   scripts/kerneldoc-lint include/drm
>>   ...
>>   scripts/kerneldoc-lint include/media/media-device.h
>> 
>> The lint-implementation is a part of the parser module (kernel_doc.py).
>> The comandline implementation consist only of a argument parser ('opts')
>> which calls the kernel-doc parser with a 'NullTranslator'.::
>> 
>>    parser = kerneldoc.Parser(opts, kerneldoc.NullTranslator())
>> 
>> Latter is also a small example of how-to implement kernel-doc
>> applications with the kernel-doc parser architecture.
>> 
>> [1] https://www.kernel.org/doc/html/latest/doc-guide/kernel-doc.html#writing-kernel-doc-comments
>> 
>> Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
>
> Didn't we have a patch from Jani to gives us a make target doing this? I
> think that'd be even neater ...

Yes, see below. It's simplistic and it has an external dependency, but
it got the job done. And it does not depend on Sphinx; it's just a
kernel-doc and rst lint, not Sphinx lint. Whether that's a good or a bad
thing is debatable.

Anyway, I do think the approach of making 'make CHECK=the-tool C=1' work
is what we should aim at. Markus' patch could probably be made to do
that by accepting the same arguments that are passed to compilers.

BR,
Jani.


>From 96e780dea5fe0cafcb500d7e16a16f85416dea6d Mon Sep 17 00:00:00 2001
From: Jani Nikula <jani.nikula@intel.com>
Date: Tue, 31 May 2016 18:11:33 +0300
Subject: [PATCH] kernel-doc-rst-lint: add tool to check kernel-doc and rst
 correctness
Organization: Intel Finland Oy - BIC 0357606-4 - Westendinkatu 7, 02160 Espoo
Cc: Jani Nikula <jani.nikula@intel.com>

Simple kernel-doc and reStructuredText lint tool that can be used
independently and as a kernel build CHECK tool to validate kernel-doc
comments.

Independent usage:
$ kernel-doc-rst-lint FILE

Kernel CHECK usage:
$ make CHECK=scripts/kernel-doc-rst-lint C=1		# (or C=2)

Depends on docutils and the rst-lint package
https://pypi.python.org/pypi/restructuredtext_lint

Signed-off-by: Jani Nikula <jani.nikula@intel.com>
---
 scripts/kernel-doc-rst-lint | 106 ++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 106 insertions(+)
 create mode 100755 scripts/kernel-doc-rst-lint

diff --git a/scripts/kernel-doc-rst-lint b/scripts/kernel-doc-rst-lint
new file mode 100755
index 000000000000..7e0157679f83
--- /dev/null
+++ b/scripts/kernel-doc-rst-lint
@@ -0,0 +1,106 @@
+#!/usr/bin/env python
+# coding=utf-8
+#
+# Copyright © 2016 Intel Corporation
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the "Software"),
+# to deal in the Software without restriction, including without limitation
+# the rights to use, copy, modify, merge, publish, distribute, sublicense,
+# and/or sell copies of the Software, and to permit persons to whom the
+# Software is furnished to do so, subject to the following conditions:
+#
+# The above copyright notice and this permission notice (including the next
+# paragraph) shall be included in all copies or substantial portions of the
+# Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+#
+# Authors:
+#    Jani Nikula <jani.nikula@intel.com>
+#
+# Simple kernel-doc and reStructuredText lint tool that can be used
+# independently and as a kernel build CHECK tool to validate kernel-doc
+# comments.
+#
+# Independent usage:
+# $ kernel-doc-rst-lint FILE
+#
+# Kernel CHECK usage:
+# $ make CHECK=scripts/kernel-doc-rst-lint C=1		# (or C=2)
+#
+# Depends on docutils and the rst-lint package
+# https://pypi.python.org/pypi/restructuredtext_lint
+#
+
+import os
+import subprocess
+import sys
+
+from docutils.parsers.rst import directives
+from docutils.parsers.rst import Directive
+from docutils.parsers.rst import roles
+from docutils import nodes, statemachine
+import restructuredtext_lint
+
+class DummyDirective(Directive):
+    required_argument = 1
+    optional_arguments = 0
+    option_spec = { }
+    has_content = True
+
+    def run(self):
+        return []
+
+# Fake the Sphinx C Domain directives and roles
+directives.register_directive('c:function', DummyDirective)
+directives.register_directive('c:type', DummyDirective)
+roles.register_generic_role('c:func', nodes.emphasis)
+roles.register_generic_role('c:type', nodes.emphasis)
+
+# We accept but ignore parameters to be compatible with how the kernel build
+# invokes CHECK.
+if len(sys.argv) < 2:
+    sys.stderr.write('usage: kernel-doc-rst-lint [IGNORED OPTIONS] FILE\n');
+    sys.exit(1)
+
+infile = sys.argv[len(sys.argv) - 1]
+cmd = ['scripts/kernel-doc', '-rst', infile]
+
+try:
+    p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, universal_newlines=True)
+    out, err = p.communicate()
+
+    # python2 needs conversion to unicode.
+    # python3 with universal_newlines=True returns strings.
+    if sys.version_info.major < 3:
+        out, err = unicode(out, 'utf-8'), unicode(err, 'utf-8')
+
+    # kernel-doc errors
+    sys.stderr.write(err)
+    if p.returncode != 0:
+        sys.exit(p.returncode)
+
+    # restructured text errors
+    lines = statemachine.string2lines(out, 8, convert_whitespace=True)
+    lint_errors = restructuredtext_lint.lint(out, infile)
+    for error in lint_errors:
+        # Ignore INFO
+        if error.level <= 1:
+            continue
+
+        print(error.source + ': ' + error.type + ': ' + error.full_message)
+        if error.line is not None:
+            print('Context:')
+            print('\t' + lines[error.line - 1])
+            print('\t' + lines[error.line])
+
+except Exception as e:
+    sys.stderr.write(str(e) + '\n')
+    sys.exit(1)
-- 
2.1.4





-- 
Jani Nikula, Intel Open Source Technology Center

^ permalink raw reply related	[flat|nested] 23+ messages in thread

* Re: [RFC PATCH v1 3/6] kernel-doc: add kerneldoc-lint command
  2017-01-25  8:21     ` Jani Nikula
@ 2017-01-25  9:34       ` Markus Heiser
  2017-01-25 10:08         ` Jani Nikula
  0 siblings, 1 reply; 23+ messages in thread
From: Markus Heiser @ 2017-01-25  9:34 UTC (permalink / raw)
  To: Jani Nikula
  Cc: Daniel Vetter, Jonathan Corbet, Mauro Carvalho Chehab,
	Daniel Vetter, Matthew Wilcox,
	linux-doc @ vger . kernel . org List,
	linux-kernel @ vger . kernel . org List


Am 25.01.2017 um 09:21 schrieb Jani Nikula <jani.nikula@intel.com>:
> Yes, see below. It's simplistic and it has an external dependency, but
> it got the job done. And it does not depend on Sphinx; it's just a
> kernel-doc and rst lint, not Sphinx lint. Whether that's a good or a bad
> thing is debatable.
> 
> Anyway, I do think the approach of making 'make CHECK=the-tool C=1' work
> is what we should aim at.

Ah, cool ... didn't know C=1 before .. I will consider it in v2.

> Markus' patch could probably be made to do
> that by accepting the same arguments that are passed to compilers.

Is this what you mean?

  make W=n   [targets] Enable extra gcc checks, n=1,2,3 where
		1: warnings which may be relevant and do not occur too often
		2: warnings which occur quite often but may still be relevant
		3: more obscure warnings, can most likely be ignored
		Multiple levels can be combined with W=12 or W=123

Thanks!

 --Markus-- 

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [RFC PATCH v1 3/6] kernel-doc: add kerneldoc-lint command
  2017-01-25  9:34       ` Markus Heiser
@ 2017-01-25 10:08         ` Jani Nikula
  0 siblings, 0 replies; 23+ messages in thread
From: Jani Nikula @ 2017-01-25 10:08 UTC (permalink / raw)
  To: Markus Heiser
  Cc: Daniel Vetter, Jonathan Corbet, Mauro Carvalho Chehab,
	Daniel Vetter, Matthew Wilcox,
	linux-doc @ vger . kernel . org List,
	linux-kernel @ vger . kernel . org List

On Wed, 25 Jan 2017, Markus Heiser <markus.heiser@darmarit.de> wrote:
> Am 25.01.2017 um 09:21 schrieb Jani Nikula <jani.nikula@intel.com>:
>> Yes, see below. It's simplistic and it has an external dependency, but
>> it got the job done. And it does not depend on Sphinx; it's just a
>> kernel-doc and rst lint, not Sphinx lint. Whether that's a good or a bad
>> thing is debatable.
>> 
>> Anyway, I do think the approach of making 'make CHECK=the-tool C=1' work
>> is what we should aim at.
>
> Ah, cool ... didn't know C=1 before .. I will consider it in v2.
>
>> Markus' patch could probably be made to do
>> that by accepting the same arguments that are passed to compilers.
>
> Is this what you mean?

No. The build system passes the same (or roughly the same) arguments to
the CHECK tool as it passes to the compiler. You need to handle them in
your tool, possibly just ignoring them if they're not relevant.

BR,
Jani.


>
>   make W=n   [targets] Enable extra gcc checks, n=1,2,3 where
> 		1: warnings which may be relevant and do not occur too often
> 		2: warnings which occur quite often but may still be relevant
> 		3: more obscure warnings, can most likely be ignored
> 		Multiple levels can be combined with W=12 or W=123
>
> Thanks!
>
>  --Markus-- 
>

-- 
Jani Nikula, Intel Open Source Technology Center

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [RFC PATCH v1 2/6] kernel-doc: replace kernel-doc perl parser with a pure python one (WIP)
  2017-01-25  0:13   ` Jonathan Corbet
  2017-01-25  6:37     ` Daniel Vetter
@ 2017-01-25 10:24     ` Jani Nikula
  2017-01-25 10:35       ` Daniel Vetter
  2017-01-25 19:07       ` Markus Heiser
  1 sibling, 2 replies; 23+ messages in thread
From: Jani Nikula @ 2017-01-25 10:24 UTC (permalink / raw)
  To: Jonathan Corbet, Markus Heiser
  Cc: Mauro Carvalho Chehab, Daniel Vetter, Matthew Wilcox,
	linux-doc @ vger . kernel . org List,
	linux-kernel @ vger . kernel . org List

On Wed, 25 Jan 2017, Jonathan Corbet <corbet@lwn.net> wrote:
> On Tue, 24 Jan 2017 20:52:40 +0100
> Markus Heiser <markus.heiser@darmarit.de> wrote:
>
>> This patch is the initial merge of a pure python implementation
>> to parse kernel-doc comments and generate reST from.
>> 
>> It consist mainly of to parts, the parser module (kerneldoc.py) and the
>> sphinx-doc extension (rstKernelDoc.py). For the command line, there is
>> also a 'scripts/kerneldoc' added.::
>> 
>>    scripts/kerneldoc --help
>> 
>> The main two parts are merged 1:1 from
>> 
>>   https://github.com/return42/linuxdoc  commit 3991d3c
>> 
>> Take this as a starting point, there is a lot of work to do (WIP).
>> Since it is merged 1:1, you will also notice it's CodingStyle is (ATM)
>> not kernel compliant and it lacks a user doc ('Documentation/doc-guide').
>> 
>> I will send patches for this when the community agreed about
>> functionalities. I guess there are a lot of topics we have to agree
>> about. E.g. the py-implementation is more strict the perl one.  When you
>> build doc with the py-module you will see a lot of additional errors and
>> warnings compared to the sloppy perl one.

Markus, thanks for your work on this.

> Again, quick comments...
>
>  - I would *much* rather evolve our existing Sphinx extension in the
>    direction we want it to go than to just replace it wholesale.
>    Replacement is the wrong approach for a few reasons, including the need
>    to minimize change and preserve credit for Jani's work.  Can we work on
>    that basis, please?

I would grossly downplay the role of preserving credit for what I've
done, and put much more emphasis on the need to create a patch series
that gradually, step by step, evolves the current approach into
something better.

Excuse me for my bluntness, but I think changing everything in a single
commit, or even a few commits, is strictly not acceptable.

When I changed *small* things in scripts/kernel-doc, I would make
htmldocs before and after the change, and recursively diff the produced
output to ensure there were no surprises. We already have enough
documentation that a manual eyeballing of the output is simply not
sufficient to ensure things don't break.

The diff in output between before and after this series? 160k lines of
unified diff without context ('diff -u0 -r old new | wc -l').

Many of the changes are improvements on the result, such as using proper
<div> tags for function parameter lists etc., but clearly changing the
output should be independent of changing the parser, so we have some
chance of validating the parser.

>    Ideally at the time of merging, we would be able to build the docs with
>    *either* kerneldoc.

I'd be fine with switching over in a single commit that doesn't
drastically change the output. A drop-in replacement. But that's not the
case here.

>  - I'll have to try it out to see how noisy it is.  I'm not opposed to
>    stricter checks; indeed, they could be a good thing.  But we might want
>    to have an option so we can cut back on the noise by default.

The increase in 'make htmldocs' build log was from 1521 to 2791 lines in
my tree. Arguably there was useful extra diagnosis, but some of it was
the printouts of long lists of definitions that were not found, one per
line. So it could be condensed without losing info too.

On to performance. With the default build options the new system was
noticeably slower than the current one, with a 50% increase on my
machine. But what really caught me by surprise was that passing
SPHINXOPTS=-j5 to parallelize worked better on the current system,
making the new one a whopping 70% slower. Of course, the argument is
that the proposed parser does more and is better, but due to the
monolithic change it's impossible to pinpoint the culprit or do a proper
cost/benefit analysis on this. Again, this calls for a more broken down
series of patches to make the changes.

Finally, while I'd love to see scripts/kernel-doc go, I do have to ask
if changing roughly 3k lines of Perl to roughly 3k lines of Python (*)
really makes everything better? They both still parse everything using a
large pile of regular expressions and a clunky state machine. When I
look at the code, I'm afraid I do not get that liberating feeling of
throwing out old junk in favor of something small or elegant or even
obviously more maintainable than the old one. The new one offers more
features, but repeatedly we face the problem that it's all lumped in
together with the parser change. We should be able to look at the parser
change and the other improvements separately.

That said, perhaps having an elegant parser (perhaps based on a compiler
plugin) is incompatible with the idea of making it a bug-for-bug drop-in
replacement of the old one, and it's something we need to think about.

All in all I think the message should be clear: this needs to be split
into small, incremental changes. Just like we do everything in the
kernel.


BR,
Jani.


(*) Please do not get hung up on these numbers. The Python version does
    more in some ways, but adds more deps such as fspath that's not
    included in the figures, and the Perl version outputs more
    formats. It's not an apples to apples comparison. Let's just say
    they are somewhere in the same ballpark.

-- 
Jani Nikula, Intel Open Source Technology Center

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [RFC PATCH v1 2/6] kernel-doc: replace kernel-doc perl parser with a pure python one (WIP)
  2017-01-25 10:24     ` Jani Nikula
@ 2017-01-25 10:35       ` Daniel Vetter
  2017-01-25 19:07       ` Markus Heiser
  1 sibling, 0 replies; 23+ messages in thread
From: Daniel Vetter @ 2017-01-25 10:35 UTC (permalink / raw)
  To: Jani Nikula
  Cc: Jonathan Corbet, Markus Heiser, Mauro Carvalho Chehab,
	Daniel Vetter, Matthew Wilcox,
	linux-doc @ vger . kernel . org List,
	linux-kernel @ vger . kernel . org List

On Wed, Jan 25, 2017 at 12:24:31PM +0200, Jani Nikula wrote:
> Finally, while I'd love to see scripts/kernel-doc go, I do have to ask
> if changing roughly 3k lines of Perl to roughly 3k lines of Python (*)
> really makes everything better? They both still parse everything using a
> large pile of regular expressions and a clunky state machine. When I
> look at the code, I'm afraid I do not get that liberating feeling of
> throwing out old junk in favor of something small or elegant or even
> obviously more maintainable than the old one. The new one offers more
> features, but repeatedly we face the problem that it's all lumped in
> together with the parser change. We should be able to look at the parser
> change and the other improvements separately.

I share this concern a lot. The kernel-doc perl is a horror show, but it's
a horror show that 3-4 people now somewhat understand. Simply translating
the entire script into python leaves us with the same horror show, but in
a different language. And personally I'm not versed at all in either of
them (and I think that applies to many kernel hackers), so seems a wash.

If the new script would implement the state machinery in some
parser-combinator library to make it much easier to maintain, while still
being bug-for-bug compatible, then I'd be much, much more in favour of
doing this. And once we go to that amount of effort, then rewriting it in
python for more consistency with sphinx is definitely a good idea.

> That said, perhaps having an elegant parser (perhaps based on a compiler
> plugin) is incompatible with the idea of making it a bug-for-bug drop-in
> replacement of the old one, and it's something we need to think about.

Yeah, I fear we'll always need our own parser to avoid breaking the world.
But there's definitely better ways out there to write parsers than
cobbling together regexes in a state machine that uses globals :-)
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [RFC PATCH v1 2/6] kernel-doc: replace kernel-doc perl parser with a pure python one (WIP)
  2017-01-25 10:24     ` Jani Nikula
  2017-01-25 10:35       ` Daniel Vetter
@ 2017-01-25 19:07       ` Markus Heiser
  2017-01-25 20:59         ` Jani Nikula
  2017-01-26 18:50         ` Jonathan Corbet
  1 sibling, 2 replies; 23+ messages in thread
From: Markus Heiser @ 2017-01-25 19:07 UTC (permalink / raw)
  To: Jani Nikula
  Cc: Jonathan Corbet, Mauro Carvalho Chehab, Daniel Vetter,
	Matthew Wilcox, linux-doc @ vger . kernel . org List,
	linux-kernel @ vger . kernel . org List


Am 25.01.2017 um 11:24 schrieb Jani Nikula <jani.nikula@intel.com>:

> Markus, thanks for your work on this.

Thanks for your comments!

> Excuse me for my bluntness, but I think changing everything in a single
> commit, or even a few commits, is strictly not acceptable.

OK, I understand.

> When I changed *small* things in scripts/kernel-doc, I would make
> htmldocs before and after the change, and recursively diff the produced
> output to ensure there were no surprises. We already have enough
> documentation that a manual eyeballing of the output is simply not
> sufficient to ensure things don't break.

> The diff in output between before and after this series? 160k lines of
> unified diff without context ('diff -u0 -r old new | wc -l').
> 
> Many of the changes are improvements on the result, such as using proper
> <div> tags for function parameter lists etc., but clearly changing the
> output should be independent of changing the parser, so we have some
> chance of validating the parser.


Hmm ... I try to sort my thoughts on this:

The both parser are generating reST output. We have tested reST output
so it should be enough to compare the reST from the old one with the
new one ... at least theoretical.

But the problem I see here is, that the perl script generates a
reST output which I can't use. As an example we can take a look at
the man-page builder I shipped in the series.

 https://www.mail-archive.com/linux-doc@vger.kernel.org/msg09017.html

In the commit message there is a small example:

  <section docname="basics ...>
    <title>get_sd_load_idx</title>
    <kernel_doc_man manpage="get_sd_load_idx.9"/>
    ...
    <desc desctype="function" domain="c"....>
      <desc_signature ....>
        <desc_type>int</desc_type>
        ...

You see that it has <section> tag with childs <title/>, <kernel_doc_man/> 
and so on. This structured markup is used by builders, they navigate through
the structured tree picking up nodes and spit out some man-page html, or
whatever builder it is.

ATM the perl parser generates a reST output which does not have such
a structured tree, so the builder can't navigate in.

So, what I mean is, the new parser has to generate a complete different reST
output and thats why we can't compare the perl parser with python one on a reST
basis ... and if reST is different, HTML is different :(

So we do not have any chance to track regression when switching from
the old to the new parser.

Thats are my thoughts on this topic, may be you have a solution for this?

> 
>>   Ideally at the time of merging, we would be able to build the docs with
>>   *either* kerneldoc.
> 
> I'd be fine with switching over in a single commit that doesn't
> drastically change the output.

One solution might be to improve the reST output of the perl script
first, so that it produce something which has a structure and we
all can agree on (short: reST output is the reference, ATM the reference
need some improvements)

If this is a way we like to go, I can send a patch for the perl script,
so that we can commit one a reST reference.

> A drop-in replacement. But that's not the
> case here.
> 
>> - I'll have to try it out to see how noisy it is.  I'm not opposed to
>>   stricter checks; indeed, they could be a good thing.  But we might want
>>   to have an option so we can cut back on the noise by default.
> 
> The increase in 'make htmldocs' build log was from 1521 to 2791 lines in
> my tree. Arguably there was useful extra diagnosis, but some of it was
> the printouts of long lists of definitions that were not found, one per
> line. So it could be condensed without losing info too.

Yes, this was just a 1:1 merge from my POC, there are a lot of things
which could be meld down. ATM, for me it is important to get a feedback 
on the functionalities and concepts of kernel-doc apps (RFC).

> On to performance. With the default build options the new system was
> noticeably slower than the current one, with a 50% increase on my
> machine. But what really caught me by surprise was that passing
> SPHINXOPTS=-j5 to parallelize worked better on the current system,
> making the new one a whopping 70% slower. Of course, the argument is
> that the proposed parser does more and is better, but due to the
> monolithic change it's impossible to pinpoint the culprit or do a proper
> cost/benefit analysis on this. Again, this calls for a more broken down
> series of patches to make the changes.

Ups, I have to look closer ... I thought the py-solution is faster 
since it does not for processes and does some caching.

> Finally, while I'd love to see scripts/kernel-doc go, I do have to ask
> if changing roughly 3k lines of Perl to roughly 3k lines of Python (*)
> really makes everything better? They both still parse everything using a
> large pile of regular expressions and a clunky state machine. When I
> look at the code, I'm afraid I do not get that liberating feeling of
> throwing out old junk in favor of something small or elegant or even
> obviously more maintainable than the old one. The new one offers more
> features, but repeatedly we face the problem that it's all lumped in
> together with the parser change. We should be able to look at the parser
> change and the other improvements separately.
> 
> That said, perhaps having an elegant parser (perhaps based on a compiler
> plugin) is incompatible with the idea of making it a bug-for-bug drop-in
> replacement of the old one, and it's something we need to think about.

Before I started implementing the parser I thought about separating
parsing from generating reST. I played a bit with pycparser

  https://github.com/eliben/pycparser

but I realized that the coverage of those parser might be not
enough for the kernel sources. At this time you mentioned sparse.
I haven't had time to at sparse but I guess that this is the
tool.

-- Markus --


> All in all I think the message should be clear: this needs to be split
> into small, incremental changes. Just like we do everything in the
> kernel.
> 
> 
> BR,
> Jani.
> 
> 
> (*) Please do not get hung up on these numbers. The Python version does
>    more in some ways, but adds more deps such as fspath that's not
>    included in the figures, and the Perl version outputs more
>    formats. It's not an apples to apples comparison. Let's just say
>    they are somewhere in the same ballpark.
> 
> -- 
> Jani Nikula, Intel Open Source Technology Center

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [RFC PATCH v1 2/6] kernel-doc: replace kernel-doc perl parser with a pure python one (WIP)
  2017-01-25 19:07       ` Markus Heiser
@ 2017-01-25 20:59         ` Jani Nikula
  2017-01-26  9:54           ` Markus Heiser
  2017-01-26 18:50         ` Jonathan Corbet
  1 sibling, 1 reply; 23+ messages in thread
From: Jani Nikula @ 2017-01-25 20:59 UTC (permalink / raw)
  To: Markus Heiser
  Cc: Jonathan Corbet, Mauro Carvalho Chehab, Daniel Vetter,
	Matthew Wilcox, linux-doc @ vger . kernel . org List,
	linux-kernel @ vger . kernel . org List

On Wed, 25 Jan 2017, Markus Heiser <markus.heiser@darmarit.de> wrote:
> Am 25.01.2017 um 11:24 schrieb Jani Nikula <jani.nikula@intel.com>:
>
>> Markus, thanks for your work on this.
>
> Thanks for your comments!
>
>> Excuse me for my bluntness, but I think changing everything in a single
>> commit, or even a few commits, is strictly not acceptable.
>
> OK, I understand.
>
>> When I changed *small* things in scripts/kernel-doc, I would make
>> htmldocs before and after the change, and recursively diff the produced
>> output to ensure there were no surprises. We already have enough
>> documentation that a manual eyeballing of the output is simply not
>> sufficient to ensure things don't break.
>
>> The diff in output between before and after this series? 160k lines of
>> unified diff without context ('diff -u0 -r old new | wc -l').
>> 
>> Many of the changes are improvements on the result, such as using proper
>> <div> tags for function parameter lists etc., but clearly changing the
>> output should be independent of changing the parser, so we have some
>> chance of validating the parser.
>
>
> Hmm ... I try to sort my thoughts on this:
>
> The both parser are generating reST output. We have tested reST output
> so it should be enough to compare the reST from the old one with the
> new one ... at least theoretical.
>
> But the problem I see here is, that the perl script generates a
> reST output which I can't use. As an example we can take a look at
> the man-page builder I shipped in the series.

Sorry, I still don't understand *why* you can't use the same rst. Your
explanation seems to relate to man pages, but man pages come
*afterwards*, and are a separate improvement. I know you talk about lack
of proper structure and all that, but *why* can it strictly not be used,
if the *current* rst clearly can be used?

BR,
Jani.


>
>  https://www.mail-archive.com/linux-doc@vger.kernel.org/msg09017.html
>
> In the commit message there is a small example:
>
>   <section docname="basics ...>
>     <title>get_sd_load_idx</title>
>     <kernel_doc_man manpage="get_sd_load_idx.9"/>
>     ...
>     <desc desctype="function" domain="c"....>
>       <desc_signature ....>
>         <desc_type>int</desc_type>
>         ...
>
> You see that it has <section> tag with childs <title/>, <kernel_doc_man/> 
> and so on. This structured markup is used by builders, they navigate through
> the structured tree picking up nodes and spit out some man-page html, or
> whatever builder it is.
>
> ATM the perl parser generates a reST output which does not have such
> a structured tree, so the builder can't navigate in.
>
> So, what I mean is, the new parser has to generate a complete different reST
> output and thats why we can't compare the perl parser with python one on a reST
> basis ... and if reST is different, HTML is different :(
>
> So we do not have any chance to track regression when switching from
> the old to the new parser.
>
> Thats are my thoughts on this topic, may be you have a solution for this?
>
>> 
>>>   Ideally at the time of merging, we would be able to build the docs with
>>>   *either* kerneldoc.
>> 
>> I'd be fine with switching over in a single commit that doesn't
>> drastically change the output.
>
> One solution might be to improve the reST output of the perl script
> first, so that it produce something which has a structure and we
> all can agree on (short: reST output is the reference, ATM the reference
> need some improvements)
>
> If this is a way we like to go, I can send a patch for the perl script,
> so that we can commit one a reST reference.
>
>> A drop-in replacement. But that's not the
>> case here.
>> 
>>> - I'll have to try it out to see how noisy it is.  I'm not opposed to
>>>   stricter checks; indeed, they could be a good thing.  But we might want
>>>   to have an option so we can cut back on the noise by default.
>> 
>> The increase in 'make htmldocs' build log was from 1521 to 2791 lines in
>> my tree. Arguably there was useful extra diagnosis, but some of it was
>> the printouts of long lists of definitions that were not found, one per
>> line. So it could be condensed without losing info too.
>
> Yes, this was just a 1:1 merge from my POC, there are a lot of things
> which could be meld down. ATM, for me it is important to get a feedback 
> on the functionalities and concepts of kernel-doc apps (RFC).
>
>> On to performance. With the default build options the new system was
>> noticeably slower than the current one, with a 50% increase on my
>> machine. But what really caught me by surprise was that passing
>> SPHINXOPTS=-j5 to parallelize worked better on the current system,
>> making the new one a whopping 70% slower. Of course, the argument is
>> that the proposed parser does more and is better, but due to the
>> monolithic change it's impossible to pinpoint the culprit or do a proper
>> cost/benefit analysis on this. Again, this calls for a more broken down
>> series of patches to make the changes.
>
> Ups, I have to look closer ... I thought the py-solution is faster 
> since it does not for processes and does some caching.
>
>> Finally, while I'd love to see scripts/kernel-doc go, I do have to ask
>> if changing roughly 3k lines of Perl to roughly 3k lines of Python (*)
>> really makes everything better? They both still parse everything using a
>> large pile of regular expressions and a clunky state machine. When I
>> look at the code, I'm afraid I do not get that liberating feeling of
>> throwing out old junk in favor of something small or elegant or even
>> obviously more maintainable than the old one. The new one offers more
>> features, but repeatedly we face the problem that it's all lumped in
>> together with the parser change. We should be able to look at the parser
>> change and the other improvements separately.
>> 
>> That said, perhaps having an elegant parser (perhaps based on a compiler
>> plugin) is incompatible with the idea of making it a bug-for-bug drop-in
>> replacement of the old one, and it's something we need to think about.
>
> Before I started implementing the parser I thought about separating
> parsing from generating reST. I played a bit with pycparser
>
>   https://github.com/eliben/pycparser
>
> but I realized that the coverage of those parser might be not
> enough for the kernel sources. At this time you mentioned sparse.
> I haven't had time to at sparse but I guess that this is the
> tool.
>
> -- Markus --
>
>
>> All in all I think the message should be clear: this needs to be split
>> into small, incremental changes. Just like we do everything in the
>> kernel.
>> 
>> 
>> BR,
>> Jani.
>> 
>> 
>> (*) Please do not get hung up on these numbers. The Python version does
>>    more in some ways, but adds more deps such as fspath that's not
>>    included in the figures, and the Perl version outputs more
>>    formats. It's not an apples to apples comparison. Let's just say
>>    they are somewhere in the same ballpark.
>> 
>> -- 
>> Jani Nikula, Intel Open Source Technology Center
>

-- 
Jani Nikula, Intel Open Source Technology Center

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [RFC PATCH v1 2/6] kernel-doc: replace kernel-doc perl parser with a pure python one (WIP)
  2017-01-25 20:59         ` Jani Nikula
@ 2017-01-26  9:54           ` Markus Heiser
  2017-01-26 10:16             ` Jani Nikula
  0 siblings, 1 reply; 23+ messages in thread
From: Markus Heiser @ 2017-01-26  9:54 UTC (permalink / raw)
  To: Jani Nikula
  Cc: Jonathan Corbet, Mauro Carvalho Chehab, Daniel Vetter,
	Matthew Wilcox, linux-doc @ vger . kernel . org List,
	linux-kernel @ vger . kernel . org List


Am 25.01.2017 um 21:59 schrieb Jani Nikula <jani.nikula@intel.com>:

>> But the problem I see here is, that the perl script generates a
>> reST output which I can't use. As an example we can take a look at
>> the man-page builder I shipped in the series.
> 
> Sorry, I still don't understand *why* you can't use the same rst. Your
> explanation seems to relate to man pages, but man pages come
> *afterwards*, and are a separate improvement. I know you talk about lack
> of proper structure and all that, but *why* can it strictly not be used,
> if the *current* rst clearly can be used?

"afterwards" is the word, that lets me slowly realize, that I have to
stop solving the world's problems with one patch. Now I guess how my
next patch series has to look like. Thanks! ... for being patient with
me.

Before I start, I want to hear your thoughts about the parsing
aspect ...

>>> That said, perhaps having an elegant parser (perhaps based on a compiler
>>> plugin) is incompatible with the idea of making it a bug-for-bug drop-in
>>> replacement of the old one, and it's something we need to think about.

Did you have any suggestions?

-- Markus --

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [RFC PATCH v1 2/6] kernel-doc: replace kernel-doc perl parser with a pure python one (WIP)
  2017-01-26  9:54           ` Markus Heiser
@ 2017-01-26 10:16             ` Jani Nikula
  0 siblings, 0 replies; 23+ messages in thread
From: Jani Nikula @ 2017-01-26 10:16 UTC (permalink / raw)
  To: Markus Heiser
  Cc: Jonathan Corbet, Mauro Carvalho Chehab, Daniel Vetter,
	Matthew Wilcox, linux-doc @ vger . kernel . org List,
	linux-kernel @ vger . kernel . org List

On Thu, 26 Jan 2017, Markus Heiser <markus.heiser@darmarit.de> wrote:
> Am 25.01.2017 um 21:59 schrieb Jani Nikula <jani.nikula@intel.com>:
>
>>> But the problem I see here is, that the perl script generates a
>>> reST output which I can't use. As an example we can take a look at
>>> the man-page builder I shipped in the series.
>> 
>> Sorry, I still don't understand *why* you can't use the same rst. Your
>> explanation seems to relate to man pages, but man pages come
>> *afterwards*, and are a separate improvement. I know you talk about lack
>> of proper structure and all that, but *why* can it strictly not be used,
>> if the *current* rst clearly can be used?
>
> "afterwards" is the word, that lets me slowly realize, that I have to
> stop solving the world's problems with one patch. Now I guess how my
> next patch series has to look like. Thanks! ... for being patient with
> me.

Indeed, we change the world, one small incremental patch at a time. ;)

> Before I start, I want to hear your thoughts about the parsing
> aspect ...
>
>>>> That said, perhaps having an elegant parser (perhaps based on a
>>>> compiler plugin) is incompatible with the idea of making it a
>>>> bug-for-bug drop-in replacement of the old one, and it's something
>>>> we need to think about.
>
> Did you have any suggestions?

The perfect is the enemy of the good... If we see that the current Perl
parser just rewritten in Python really is an improvement, we should
consider it. But as I wrote, there are still issues there, like
performance, that we need to understand. I'll mostly defer to Jon on
this.

But before we plunge on with this, I would like to see at least some
research into reusing existing parsers which I would expect are
plentiful. We may end up deciding regexps are the way to go after all,
but I'd like it to be based on a decision rather than a lack of one. And
we might decide to look at this as a later improvement instead as well.

I've looked at python-clang myself, but it's a huge dependency, and it's
not trivial to cover all the things that the current one does with
that. I'd dismiss that.


BR,
Jani.


-- 
Jani Nikula, Intel Open Source Technology Center

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [RFC PATCH v1 2/6] kernel-doc: replace kernel-doc perl parser with a pure python one (WIP)
  2017-01-25 19:07       ` Markus Heiser
  2017-01-25 20:59         ` Jani Nikula
@ 2017-01-26 18:50         ` Jonathan Corbet
  2017-01-26 19:26           ` Jani Nikula
  1 sibling, 1 reply; 23+ messages in thread
From: Jonathan Corbet @ 2017-01-26 18:50 UTC (permalink / raw)
  To: Markus Heiser
  Cc: Jani Nikula, Mauro Carvalho Chehab, Daniel Vetter,
	Matthew Wilcox, linux-doc @ vger . kernel . org List,
	linux-kernel @ vger . kernel . org List

On Wed, 25 Jan 2017 20:07:47 +0100
Markus Heiser <markus.heiser@darmarit.de> wrote:

> So, what I mean is, the new parser has to generate a complete different reST
> output and thats why we can't compare the perl parser with python one on a reST
> basis ... and if reST is different, HTML is different :(
> 
> So we do not have any chance to track regression when switching from
> the old to the new parser.
> 
> Thats are my thoughts on this topic, may be you have a solution for this?

The solution, I think, is as has been described by others in the thread.
I'll make a try at it now :)

The objectives in a patch set are something like this:

 - Replace the kernel-doc utility with one that is easier to maintain and
   enhance.

 - Add various enhancements (man pages, linting, better output, better
   parsing) to the docs build system.

What everybody is complaining about here is that all of that stuff is
being thrown in together into a single patch set.  We don't do things that
way because long experience says we'll create a mess that takes a long
time to straighten out again.

As I said before, I'm very much amenable to the idea of replacing
kernel-doc with one that is easier to work with.  I haven't yet had the
time to look closely enough at yours to have an opinion on whether it does
that or not.  But, assuming it does, the proper way to make this change is
to provide a new kerneldoc that behaves as closely to the old one as
possible, with an absolute minimum of output changes.

Doing it that way probably seems like a pretty annoying request.  But it
lets us validate its basic mechanics and be confident that we won't break
the docs build in weird ways.  It also lets us evaluate the question of
whether the replacement has merit in its own right, independent of any
other change we want to make.  Give me a new kerneldoc that passes those
tests, and I'll happily merge it.  (I have some sympathy with the idea
that we should look into other parsers, but I would not hold up a new
kerneldoc that passed those tests on this basis alone.)

*Then* we can start adding the other stuff, which, from a first look,
appears to be stuff that we very much want to have.  Each one of those,
too, should stand alone and pass muster on its own merits.  Changes
presented in this way could be merged in the same development cycle if
they are ready, but we need to be able to evaluate each one separately.

Does this make sense?  We all really appreciate the work you're doing
here, we're just asking that it be done in an evolutionary manner so we
can evaluate it properly.

Thanks,

jon

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [RFC PATCH v1 2/6] kernel-doc: replace kernel-doc perl parser with a pure python one (WIP)
  2017-01-26 18:50         ` Jonathan Corbet
@ 2017-01-26 19:26           ` Jani Nikula
  2017-01-27  9:46             ` Markus Heiser
  0 siblings, 1 reply; 23+ messages in thread
From: Jani Nikula @ 2017-01-26 19:26 UTC (permalink / raw)
  To: Jonathan Corbet, Markus Heiser
  Cc: Mauro Carvalho Chehab, Daniel Vetter, Matthew Wilcox,
	linux-doc @ vger . kernel . org List,
	linux-kernel @ vger . kernel . org List

On Thu, 26 Jan 2017, Jonathan Corbet <corbet@lwn.net> wrote:
> Give me a new kerneldoc that passes those tests, and I'll happily
> merge it.  (I have some sympathy with the idea that we should look
> into other parsers, but I would not hold up a new kerneldoc that
> passed those tests on this basis alone.)

I'll just note in passing that having another parser that actually works
for our needs might be a pink unicorn pony. It might exist, it might
not, and someone would have to put in the hours to try to find it, tame
it, and bring it to the kernel. But it would be awesome to
have. Switching to a homebrew Python parser first does not preclude a
unicorn hunt later.

BR,
Jani.


-- 
Jani Nikula, Intel Open Source Technology Center

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [RFC PATCH v1 2/6] kernel-doc: replace kernel-doc perl parser with a pure python one (WIP)
  2017-01-26 19:26           ` Jani Nikula
@ 2017-01-27  9:46             ` Markus Heiser
  0 siblings, 0 replies; 23+ messages in thread
From: Markus Heiser @ 2017-01-27  9:46 UTC (permalink / raw)
  To: Jani Nikula
  Cc: Jonathan Corbet, Mauro Carvalho Chehab, Daniel Vetter,
	Matthew Wilcox, linux-doc @ vger . kernel . org List,
	linux-kernel @ vger . kernel . org List


Am 26.01.2017 um 20:26 schrieb Jani Nikula <jani.nikula@intel.com>:

> On Thu, 26 Jan 2017, Jonathan Corbet <corbet@lwn.net> wrote:
>> Give me a new kerneldoc that passes those tests, and I'll happily
>> merge it.  (I have some sympathy with the idea that we should look
>> into other parsers, but I would not hold up a new kerneldoc that
>> passed those tests on this basis alone.)
> 
> I'll just note in passing that having another parser that actually works
> for our needs might be a pink unicorn pony. It might exist, it might
> not, and someone would have to put in the hours to try to find it, tame
> it, and bring it to the kernel. But it would be awesome to
> have. Switching to a homebrew Python parser first does not preclude a
> unicorn hunt later.

Here are my experience about parsing C code and kernel-doc comments.

The reg-expressions divide into two parts:

a.) those parsing "C sources", catching up function prototypes, structs etc. and

b.) those parsing "kernel-doc comments", catching up attribute descriptions,
   cross references etc.

When I developed the py-version in my POC I realized that the reg-expressions
parsing C sources (a.) aren't so bad. They have a long history and are well
tested against kernel' sources (As far as I remember, I added only one regexp
more to match function prototypes).

 This was the time where I looked at some other parsing tool and
 after a day I throw away the idea of using a external parser
 tool, first.

Most problems I have had, was parsing the kernel-doc markup itself. E.g. the
ambiguous attribute markup "* @foo: lorem" and its cross-ref "@foo". The latter
syntax is ambiguous, it fails mostly on new-lines and with strings like
"me@foo.bar".

When I looked at the whole sources, I also realized that we have two flavors of
kernel-doc markups.

b.1) Those from traditional DocBook where whitespaces aren't markups and

b.2) those which has been rewritten with reST markup in, where whitespaces are a
    part of the reST.

But this was only the half truth of b.2) : the 'new' markup did not only
consists of pure reST markup. For convince it is a mix of kernel-doc markup and
reST markup (e.g. remember the cross-ref mentioned above).

I suppose that we will never completely get rid off traditional (b.1), since
this means; changing the whole kernel source ;)

At that time I wanted to implement a parser which has the ability to handle both
flavors. A (undocumented) 'vintage' mode an the user-documented 'reST' mode.
But what is the criteria to switch from one mode to the other?  For this I made
a primitive assumption: every C source file which is used in a ".. kernel-doc::"
directive has to be marked up with the modern reST flavor.

ATM, the py-version of kernel-doc implements the same state machine as the perl
one and the modes are implemented in the same state machine (not perfect but it
worked for me first, suppose we can make it better).

I remember about a very early discussion we had about those modes and I know
that it doesn't find friends in the community (at that time).  May be today we
have more experience and new ideas.

  I really like to see (to work on) a parser with we can parse the
  whole kernel source and generate reST from.

What do you think, is it a bloody idea?

Thanks!

-- Markus --

^ permalink raw reply	[flat|nested] 23+ messages in thread

end of thread, other threads:[~2017-01-27  9:47 UTC | newest]

Thread overview: 23+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-01-24 19:52 [RFC PATCH v1 0/6] pure python kernel-doc parser and more Markus Heiser
2017-01-24 19:52 ` [RFC PATCH v1 1/6] kernel-doc: pure python kernel-doc parser (preparation) Markus Heiser
2017-01-24 19:52 ` [RFC PATCH v1 2/6] kernel-doc: replace kernel-doc perl parser with a pure python one (WIP) Markus Heiser
2017-01-25  0:13   ` Jonathan Corbet
2017-01-25  6:37     ` Daniel Vetter
2017-01-25  7:37       ` Markus Heiser
2017-01-25 10:24     ` Jani Nikula
2017-01-25 10:35       ` Daniel Vetter
2017-01-25 19:07       ` Markus Heiser
2017-01-25 20:59         ` Jani Nikula
2017-01-26  9:54           ` Markus Heiser
2017-01-26 10:16             ` Jani Nikula
2017-01-26 18:50         ` Jonathan Corbet
2017-01-26 19:26           ` Jani Nikula
2017-01-27  9:46             ` Markus Heiser
2017-01-24 19:52 ` [RFC PATCH v1 3/6] kernel-doc: add kerneldoc-lint command Markus Heiser
2017-01-25  6:38   ` Daniel Vetter
2017-01-25  8:21     ` Jani Nikula
2017-01-25  9:34       ` Markus Heiser
2017-01-25 10:08         ` Jani Nikula
2017-01-24 19:52 ` [RFC PATCH v1 4/6] kernel-doc: insert TODOs on kernel-doc errors Markus Heiser
2017-01-24 19:52 ` [RFC PATCH v1 5/6] kernel-doc: add kerneldoc-src2rst command Markus Heiser
2017-01-24 19:52 ` [RFC PATCH v1 6/6] kernel-doc: add man page builder (target mandocs) Markus Heiser

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.