* [LTP] [PATCH 00/11] Test metadata extraction
@ 2020-10-05 13:30 Cyril Hrubis
  2020-10-05 13:30 ` [LTP] [PATCH 01/11] make: Support compiling native build tools Cyril Hrubis
                   ` (12 more replies)
  0 siblings, 13 replies; 21+ messages in thread
From: Cyril Hrubis @ 2020-10-05 13:30 UTC (permalink / raw)
  To: ltp

This patchset adds test metadata extraction to LTP along with a
documentation generator that produces browsable HTML documentation from
the exported metadata. For a detailed description of the idea and the
implementation see the patch that adds README.md.

While the idea is quite new, the code is mature enough to be included in
the upstream repository, and I'm also worried that we will not get any
feedback or users for the metadata unless it's included in the upstream
git.

The next step is to use the extracted metadata in runltp-ng, in the
proof-of-concept parallel executor that was written by Ritchie and
posted to this mailing list as well.

Cyril Hrubis (4):
  docparse: Add test documentation parser
  docparse: Add README
  syscalls: Add a few documentation comments
  syscalls: Move needs_drivers inside of the tst_test struct

Petr Vorel (7):
  make: Support compiling native build tools
  travis: Add git
  make: Allow {INSTALL,MAKE}_TARGETS be a directory
  make: Allow CLEAN_TARGETS to remove directories
  travis: Install docparse dependencies
  docparse: Add configure options
  docparse: Generate html and pdf using asciidoc{,tor}

 Makefile                                      |   4 +
 configure.ac                                  |  32 +-
 docparse/.gitignore                           |   7 +
 docparse/Makefile                             |  77 ++++
 docparse/README.md                            | 248 ++++++++++
 docparse/data_storage.h                       | 299 ++++++++++++
 docparse/docparse.c                           | 415 +++++++++++++++++
 docparse/parse.sh                             |  41 ++
 docparse/testinfo.pl                          | 424 ++++++++++++++++++
 include/mk/config.mk.in                       |  21 +
 include/mk/env_post.mk                        |   3 +-
 include/mk/features.mk.in                     |   5 +
 include/mk/functions.mk                       |   3 +-
 include/mk/generic_leaf_target.inc            |  16 +-
 include/mk/rules.mk                           |   8 +
 m4/ax_compare_version.m4                      | 177 ++++++++
 m4/ax_prog_perl_modules.m4                    |  77 ++++
 m4/ltp-docparse.m4                            | 112 +++++
 testcases/kernel/syscalls/abort/abort01.c     |  16 +-
 testcases/kernel/syscalls/accept/accept01.c   |   8 +-
 testcases/kernel/syscalls/accept/accept02.c   |   7 +-
 testcases/kernel/syscalls/acct/acct01.c       |   5 +
 testcases/kernel/syscalls/acct/acct02.c       |   6 +-
 .../kernel/syscalls/fsetxattr/fsetxattr02.c   |  10 +-
 testcases/kernel/syscalls/ioctl/ioctl08.c     |   9 +-
 travis/alpine.sh                              |   4 +
 travis/debian.minimal.sh                      |   8 +-
 travis/debian.sh                              |  10 +-
 travis/fedora.sh                              |  12 +-
 travis/tumbleweed.sh                          |   9 +-
 30 files changed, 2034 insertions(+), 39 deletions(-)
 create mode 100644 docparse/.gitignore
 create mode 100644 docparse/Makefile
 create mode 100644 docparse/README.md
 create mode 100644 docparse/data_storage.h
 create mode 100644 docparse/docparse.c
 create mode 100755 docparse/parse.sh
 create mode 100755 docparse/testinfo.pl
 create mode 100644 m4/ax_compare_version.m4
 create mode 100644 m4/ax_prog_perl_modules.m4
 create mode 100644 m4/ltp-docparse.m4

-- 
2.26.2


^ permalink raw reply	[flat|nested] 21+ messages in thread

* [LTP] [PATCH 01/11] make: Support compiling native build tools
  2020-10-05 13:30 [LTP] [PATCH 00/11] Test metadata extraction Cyril Hrubis
@ 2020-10-05 13:30 ` Cyril Hrubis
  2020-10-05 13:30 ` [LTP] [PATCH 02/11] travis: Add git Cyril Hrubis
                   ` (11 subsequent siblings)
  12 siblings, 0 replies; 21+ messages in thread
From: Cyril Hrubis @ 2020-10-05 13:30 UTC (permalink / raw)
  To: ltp

From: Petr Vorel <pvorel@suse.cz>

Add the HOST_MAKE_TARGETS make target and the HOSTCC and
HOST_{CFLAGS,LDFLAGS} make variables.

Needed for cross-compilation.

NOTE: cross compilation is detected by comparing $(build) and $(host)
instead of using the $cross_compiling configure variable, which would
require moving the detection into an m4 macro.

Signed-off-by: Petr Vorel <pvorel@suse.cz>
Reviewed-by: Cyril Hrubis <chrubis@suse.cz>
---
 configure.ac                       |  2 ++
 include/mk/config.mk.in            | 21 +++++++++++++++++++++
 include/mk/env_post.mk             |  3 +--
 include/mk/generic_leaf_target.inc |  6 ++++++
 include/mk/rules.mk                |  8 ++++++++
 5 files changed, 38 insertions(+), 2 deletions(-)

diff --git a/configure.ac b/configure.ac
index 03e4e09c9..05672f8f6 100644
--- a/configure.ac
+++ b/configure.ac
@@ -13,6 +13,8 @@ AC_CONFIG_FILES([ \
     execltp \
 ])
 
+AC_ARG_VAR(HOSTCC, [The C compiler on the host])
+
 AM_MAINTAINER_MODE([enable])
 
 AC_CANONICAL_HOST
diff --git a/include/mk/config.mk.in b/include/mk/config.mk.in
index 427608a17..8f73a6a34 100644
--- a/include/mk/config.mk.in
+++ b/include/mk/config.mk.in
@@ -31,6 +31,19 @@ RANLIB			:= @RANLIB@
 STRIP			:= @STRIP@
 YACC			:= @YACC@
 
+HOSTCC  = @HOSTCC@
+build := @build@
+host := @host@
+ifeq ($(strip $(HOSTCC)),)
+# native build, respect CC
+ifeq ($(build),$(host))
+HOSTCC := $(CC)
+else
+# cross compilation
+HOSTCC := cc
+endif
+endif
+
 AIO_LIBS		:= @AIO_LIBS@
 CAP_LIBS		:= @CAP_LIBS@
 ACL_LIBS		:= @ACL_LIBS@
@@ -70,6 +83,14 @@ WCFLAGS			?= -Wall -W @GCC_WARN_OLDSTYLE@
 LDFLAGS			+= $(WLDFLAGS)
 CFLAGS			+= $(DEBUG_CFLAGS) $(OPT_CFLAGS) $(WCFLAGS)
 
+ifeq ($(strip $(HOST_CFLAGS)),)
+HOST_CFLAGS := $(CFLAGS)
+endif
+
+ifeq ($(strip $(HOST_LDFLAGS)),)
+HOST_LDFLAGS := $(LDFLAGS)
+endif
+
 LINUX_VERSION		:= @LINUX_VERSION@
 LINUX_DIR		:= @LINUX_DIR@
 LINUX_VERSION_MAJOR	:= @LINUX_VERSION_MAJOR@
diff --git a/include/mk/env_post.mk b/include/mk/env_post.mk
index 44a333198..d52ad9f0b 100644
--- a/include/mk/env_post.mk
+++ b/include/mk/env_post.mk
@@ -48,11 +48,10 @@ LDFLAGS				+= -L$(top_builddir)/lib/android_librt
 endif
 
 MAKE_TARGETS			?= $(notdir $(patsubst %.c,%,$(wildcard $(abs_srcdir)/*.c)))
-
 MAKE_TARGETS			:= $(filter-out $(FILTER_OUT_MAKE_TARGETS),$(MAKE_TARGETS))
 
 # with only *.dwo, .[0-9]+.dwo can not be cleaned
-CLEAN_TARGETS			+= $(MAKE_TARGETS) *.o *.pyc .cache.mk *.dwo .*.dwo
+CLEAN_TARGETS			+= $(MAKE_TARGETS) $(HOST_MAKE_TARGETS) *.o *.pyc .cache.mk *.dwo .*.dwo
 
 # Majority of the files end up in testcases/bin...
 INSTALL_DIR			?= testcases/bin
diff --git a/include/mk/generic_leaf_target.inc b/include/mk/generic_leaf_target.inc
index dd54d05e9..e6fa107d1 100644
--- a/include/mk/generic_leaf_target.inc
+++ b/include/mk/generic_leaf_target.inc
@@ -57,6 +57,8 @@
 #				     rope to hang one's self in the event of
 #				     unwanted behavior.
 #
+# $(HOST_MAKE_TARGETS)	: Host tools which use $HOSTCC.
+#
 # $(CLEAN_TARGETS)		: What targets should be cleaned (must be
 #				  real files). This will automatically append
 #				  adds the .o suffix to all files referenced
@@ -92,6 +94,10 @@
 
 .PHONY: all clean install
 
+ifneq ($(strip $(HOST_MAKE_TARGETS)),)
+MAKE_TARGETS += $(HOST_MAKE_TARGETS)
+endif
+
 $(MAKE_TARGETS): | $(MAKE_DEPS)
 
 all: $(MAKE_TARGETS)
diff --git a/include/mk/rules.mk b/include/mk/rules.mk
index 6a22e43af..c8f4bbbbe 100644
--- a/include/mk/rules.mk
+++ b/include/mk/rules.mk
@@ -22,6 +22,14 @@ else
 	@echo LD $(target_rel_dir)$@
 endif
 
+$(HOST_MAKE_TARGETS): %: %.c
+ifdef VERBOSE
+	$(HOSTCC) $(HOST_CFLAGS) $(HOST_LDFLAGS) $< $(HOST_LDLIBS) -o $@
+else
+	@$(HOSTCC) $(HOST_CFLAGS) $(HOST_LDFLAGS) $< $(HOST_LDLIBS) -o $@
+	@echo HOSTCC $(target_rel_dir)$@
+endif
+
 %: %.c
 ifdef VERBOSE
 	$(CC) $(CPPFLAGS) $(CFLAGS) $(LDFLAGS) $^ $(LTPLDLIBS) $(LDLIBS) -o $@
-- 
2.26.2



* [LTP] [PATCH 02/11] travis: Add git
  2020-10-05 13:30 [LTP] [PATCH 00/11] Test metadata extraction Cyril Hrubis
  2020-10-05 13:30 ` [LTP] [PATCH 01/11] make: Support compiling native build tools Cyril Hrubis
@ 2020-10-05 13:30 ` Cyril Hrubis
  2020-10-05 13:30 ` [LTP] [PATCH 03/11] docparse: Add test documentation parser Cyril Hrubis
                   ` (10 subsequent siblings)
  12 siblings, 0 replies; 21+ messages in thread
From: Cyril Hrubis @ 2020-10-05 13:30 UTC (permalink / raw)
  To: ltp

From: Petr Vorel <pvorel@suse.cz>

Needed by the parse.sh script in the next commit.

Signed-off-by: Petr Vorel <pvorel@suse.cz>
---
 travis/alpine.sh     | 1 +
 travis/debian.sh     | 1 +
 travis/fedora.sh     | 1 +
 travis/tumbleweed.sh | 1 +
 4 files changed, 4 insertions(+)

diff --git a/travis/alpine.sh b/travis/alpine.sh
index 61ef144a8..f8960bed0 100755
--- a/travis/alpine.sh
+++ b/travis/alpine.sh
@@ -10,6 +10,7 @@ apk add \
 	automake \
 	clang \
 	gcc \
+	git \
 	keyutils-dev \
 	libaio-dev \
 	libacl \
diff --git a/travis/debian.sh b/travis/debian.sh
index b759a9576..28685f4d3 100755
--- a/travis/debian.sh
+++ b/travis/debian.sh
@@ -17,6 +17,7 @@ apt install -y --no-install-recommends \
 	devscripts \
 	clang \
 	gcc \
+	git \
 	libacl1 \
 	libacl1-dev \
 	libaio-dev \
diff --git a/travis/fedora.sh b/travis/fedora.sh
index 990a84daf..3c224f71e 100755
--- a/travis/fedora.sh
+++ b/travis/fedora.sh
@@ -8,6 +8,7 @@ yum -y install \
 	make \
 	clang \
 	gcc \
+	git \
 	findutils \
 	libtirpc \
 	libtirpc-devel \
diff --git a/travis/tumbleweed.sh b/travis/tumbleweed.sh
index 4d5e9da79..6247daa98 100755
--- a/travis/tumbleweed.sh
+++ b/travis/tumbleweed.sh
@@ -8,6 +8,7 @@ zypper --non-interactive install --force-resolution --no-recommends \
 	clang \
 	findutils \
 	gcc \
+	git \
 	gzip \
 	make \
 	kernel-default-devel \
-- 
2.26.2



* [LTP] [PATCH 03/11] docparse: Add test documentation parser
  2020-10-05 13:30 [LTP] [PATCH 00/11] Test metadata extraction Cyril Hrubis
  2020-10-05 13:30 ` [LTP] [PATCH 01/11] make: Support compiling native build tools Cyril Hrubis
  2020-10-05 13:30 ` [LTP] [PATCH 02/11] travis: Add git Cyril Hrubis
@ 2020-10-05 13:30 ` Cyril Hrubis
  2020-10-23  7:01   ` Li Wang
  2020-10-05 13:30 ` [LTP] [PATCH 04/11] docparse: Add README Cyril Hrubis
                   ` (9 subsequent siblings)
  12 siblings, 1 reply; 21+ messages in thread
From: Cyril Hrubis @ 2020-10-05 13:30 UTC (permalink / raw)
  To: ltp

From: Cyril Hrubis <metan@ucw.cz>

This commit implements a simple parser to pick up metadata that is
nicely structured in the tst_test structure. In order to do that it
implements a very simple and basic C tokenizer.

The result of the metadata extraction is one big JSON file that contains
data extracted from all new-library testcases plus some information
about the LTP version, etc.

The tokenizer also extracts special documentation comments that are then
used for the test description. Asciidoc has been chosen as the
documentation format.

Special documentation comments start with the string "/*\\n" and are, by
convention, ended with the mirrored "\*/", so they look like:

/*\
 * [DOCUMENTATION]
 *  - foo
 *  - bar
 *
 * ...
 *
\*/

[ pvorel: fixes
- parse.sh: git related fixes, out-of-tree build
- docparse: Eat also space following *
- Makefile: remove header rule (broken for clang)
]
Signed-off-by: Petr Vorel <pvorel@suse.cz>
Signed-off-by: Cyril Hrubis <chrubis@suse.cz>
---
 Makefile                |   2 +-
 docparse/.gitignore     |   2 +
 docparse/Makefile       |  19 ++
 docparse/data_storage.h | 299 +++++++++++++++++++++++++++++
 docparse/docparse.c     | 415 ++++++++++++++++++++++++++++++++++++++++
 docparse/parse.sh       |  41 ++++
 docparse/testinfo.pl    |  40 ++++
 7 files changed, 817 insertions(+), 1 deletion(-)
 create mode 100644 docparse/.gitignore
 create mode 100644 docparse/Makefile
 create mode 100644 docparse/data_storage.h
 create mode 100644 docparse/docparse.c
 create mode 100755 docparse/parse.sh
 create mode 100755 docparse/testinfo.pl

diff --git a/Makefile b/Makefile
index bf5077231..3830fb6d4 100644
--- a/Makefile
+++ b/Makefile
@@ -62,7 +62,7 @@ $(1):: | $$(abs_top_builddir)/$$(basename $$(subst -,.,$(1)))
 endif
 endef
 
-COMMON_TARGETS		+= testcases tools
+COMMON_TARGETS		+= testcases tools docparse
 # Don't want to nuke the original files if we're installing in-build-tree.
 ifneq ($(BUILD_TREE_STATE),$(BUILD_TREE_SRCDIR_INSTALL))
 INSTALL_TARGETS		+= runtest scenario_groups testscripts
diff --git a/docparse/.gitignore b/docparse/.gitignore
new file mode 100644
index 000000000..f636ed847
--- /dev/null
+++ b/docparse/.gitignore
@@ -0,0 +1,2 @@
+/docparse
+/metadata.json
diff --git a/docparse/Makefile b/docparse/Makefile
new file mode 100644
index 000000000..94ba83ffe
--- /dev/null
+++ b/docparse/Makefile
@@ -0,0 +1,19 @@
+# SPDX-License-Identifier: GPL-2.0-or-later
+# Copyright (c) 2019 Cyril Hrubis <chrubis@suse.cz>
+
+top_srcdir		?= ..
+
+include $(top_srcdir)/include/mk/env_pre.mk
+include $(top_srcdir)/include/mk/functions.mk
+
+MAKE_TARGETS		:= metadata.json
+HOST_MAKE_TARGETS	:= docparse
+
+INSTALL_DIR = metadata
+
+.PHONY: metadata.json
+
+metadata.json: docparse
+	$(abs_srcdir)/parse.sh > metadata.json
+
+include $(top_srcdir)/include/mk/generic_leaf_target.mk
diff --git a/docparse/data_storage.h b/docparse/data_storage.h
new file mode 100644
index 000000000..1a9265f92
--- /dev/null
+++ b/docparse/data_storage.h
@@ -0,0 +1,299 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+/*
+ * Copyright (c) 2019 Cyril Hrubis <chrubis@suse.cz>
+ */
+
+#ifndef DATA_STORAGE_H__
+#define DATA_STORAGE_H__
+
+#include <stdarg.h>
+#include <stdio.h>
+#include <string.h>
+#include <stdlib.h>
+
+enum data_type {
+	DATA_ARRAY,
+	DATA_HASH,
+	DATA_STRING,
+};
+
+struct data_node_array {
+	enum data_type type;
+	unsigned int array_len;
+	unsigned int array_used;
+	struct data_node *array[];
+};
+
+struct data_hash_elem {
+	struct data_node *node;
+	char *id;
+};
+
+struct data_node_hash {
+	enum data_type type;
+	unsigned int elems_len;
+	unsigned int elems_used;
+	struct data_hash_elem elems[];
+};
+
+struct data_node_string {
+	enum data_type type;
+	char val[];
+};
+
+struct data_node {
+	union {
+		enum data_type type;
+		struct data_node_hash hash;
+		struct data_node_array array;
+		struct data_node_string string;
+	};
+};
+
+static inline struct data_node *data_node_string(const char *string)
+{
+	size_t size = sizeof(struct data_node_string) + strlen(string) + 1;
+	struct data_node *node = malloc(size);
+
+	if (!node)
+		return NULL;
+
+	node->type = DATA_STRING;
+	strcpy(node->string.val, string);
+
+	return node;
+}
+
+#define MAX_ELEMS 20
+
+static inline struct data_node *data_node_hash(void)
+{
+	size_t size = sizeof(struct data_node_hash)
+	              + MAX_ELEMS * sizeof(struct data_hash_elem);
+	struct data_node *node = malloc(size);
+
+	if (!node)
+		return NULL;
+
+	node->type = DATA_HASH;
+	node->hash.elems_len = MAX_ELEMS;
+	node->hash.elems_used = 0;
+
+	return node;
+}
+
+static inline struct data_node *data_node_array(void)
+{
+	size_t size = sizeof(struct data_node_array)
+	              + MAX_ELEMS * sizeof(struct data_node *);
+	struct data_node *node = malloc(size);
+
+	if (!node)
+		return NULL;
+
+	node->type = DATA_ARRAY;
+	node->array.array_len = MAX_ELEMS;
+	node->array.array_used = 0;
+
+	return node;
+}
+
+static inline int data_node_hash_add(struct data_node *self, const char *id, struct data_node *payload)
+{
+	if (self->type != DATA_HASH)
+		return 1;
+
+	struct data_node_hash *hash = &self->hash;
+
+	if (hash->elems_used == hash->elems_len)
+		return 1;
+
+	struct data_hash_elem *elem = &hash->elems[hash->elems_used++];
+
+	elem->node = payload;
+	elem->id = strdup(id);
+
+	return 0;
+}
+
+static inline void data_node_free(struct data_node *self)
+{
+	unsigned int i;
+
+	switch (self->type) {
+	case DATA_STRING:
+	break;
+	case DATA_HASH:
+		for (i = 0; i < self->hash.elems_used; i++) {
+			data_node_free(self->hash.elems[i].node);
+			free(self->hash.elems[i].id);
+		}
+	break;
+	case DATA_ARRAY:
+		for (i = 0; i < self->array.array_used; i++)
+			data_node_free(self->array.array[i]);
+	break;
+	}
+
+	free(self);
+}
+
+static inline int data_node_hash_del(struct data_node *self, const char *id)
+{
+	unsigned int i;
+	struct data_node_hash *hash = &self->hash;
+
+	for (i = 0; i < hash->elems_used; i++) {
+		if (!strcmp(hash->elems[i].id, id))
+			break;
+	}
+
+	if (i >= hash->elems_used)
+		return 0;
+
+	data_node_free(hash->elems[i].node);
+	free(hash->elems[i].id);
+
+	hash->elems[i] = hash->elems[--hash->elems_used];
+
+	return 1;
+}
+
+static struct data_node *data_node_hash_get(struct data_node *self, const char *id)
+{
+	unsigned int i;
+	struct data_node_hash *hash = &self->hash;
+
+	for (i = 0; i < hash->elems_used; i++) {
+		if (!strcmp(hash->elems[i].id, id))
+			break;
+	}
+
+	if (i >= hash->elems_used)
+		return NULL;
+
+	return hash->elems[i].node;
+}
+
+static inline int data_node_array_add(struct data_node *self, struct data_node *payload)
+{
+	if (self->type != DATA_ARRAY)
+		return 1;
+
+	struct data_node_array *array = &self->array;
+
+	if (array->array_used == array->array_len)
+		return 1;
+
+	array->array[array->array_used++] = payload;
+
+	return 0;
+}
+
+static inline unsigned int data_node_array_len(struct data_node *self)
+{
+	if (self->type != DATA_ARRAY)
+		return 0;
+
+	return self->array.array_used;
+}
+
+static inline void data_print_padd(unsigned int i)
+{
+	while (i-- > 0)
+		putchar(' ');
+}
+
+static inline void data_node_print_(struct data_node *self, unsigned int padd)
+{
+	unsigned int i;
+
+	switch (self->type) {
+	case DATA_STRING:
+		data_print_padd(padd);
+		printf("'%s'\n", self->string.val);
+	break;
+	case DATA_HASH:
+		for (i = 0; i < self->hash.elems_used; i++) {
+			data_print_padd(padd);
+			printf("%s = {\n", self->hash.elems[i].id);
+			data_node_print_(self->hash.elems[i].node, padd+1);
+			data_print_padd(padd);
+			printf("},\n");
+		}
+	break;
+	case DATA_ARRAY:
+		for (i = 0; i < self->array.array_used; i++) {
+			data_print_padd(padd);
+			printf("{\n");
+			data_node_print_(self->array.array[i], padd+1);
+			data_print_padd(padd);
+			printf("},\n");
+		}
+	break;
+	}
+}
+
+static inline void data_node_print(struct data_node *self)
+{
+	printf("{\n");
+	data_node_print_(self, 1);
+	printf("}\n");
+}
+
+static inline void data_fprintf(FILE *f, unsigned int padd, const char *fmt, ...)
+                   __attribute__((format (printf, 3, 4)));
+
+static inline void data_fprintf(FILE *f, unsigned int padd, const char *fmt, ...)
+{
+	va_list va;
+
+	while (padd-- > 0)
+		putc(' ', f);
+
+	va_start(va, fmt);
+	vfprintf(f, fmt, va);
+	va_end(va);
+}
+
+static inline void data_to_json_(struct data_node *self, FILE *f, unsigned int padd, int do_padd)
+{
+	unsigned int i;
+
+	switch (self->type) {
+	case DATA_STRING:
+		padd = do_padd ? padd : 0;
+		data_fprintf(f, padd, "\"%s\"", self->string.val);
+	break;
+	case DATA_HASH:
+		for (i = 0; i < self->hash.elems_used; i++) {
+			data_fprintf(f, padd, "\"%s\": ", self->hash.elems[i].id);
+			data_to_json_(self->hash.elems[i].node, f, padd+1, 0);
+			if (i < self->hash.elems_used - 1)
+				fprintf(f, ",\n");
+			else
+				fprintf(f, "\n");
+		}
+	break;
+	case DATA_ARRAY:
+		data_fprintf(f, do_padd ? padd : 0, "[\n");
+		for (i = 0; i < self->array.array_used; i++) {
+			data_to_json_(self->array.array[i], f, padd+1, 1);
+			if (i < self->array.array_used - 1)
+				fprintf(f, ",\n");
+			else
+				fprintf(f, "\n");
+		}
+		data_fprintf(f, padd, "]");
+	break;
+	}
+}
+
+static inline void data_to_json(struct data_node *self, FILE *f, unsigned int padd)
+{
+	fprintf(f, "{\n");
+	data_to_json_(self, f, padd + 1, 1);
+	data_fprintf(f, padd, "}");
+}
+
+#endif /* DATA_STORAGE_H__ */
diff --git a/docparse/docparse.c b/docparse/docparse.c
new file mode 100644
index 000000000..242bbd1dc
--- /dev/null
+++ b/docparse/docparse.c
@@ -0,0 +1,415 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+/*
+ * Copyright (c) 2019 Cyril Hrubis <chrubis@suse.cz>
+ * Copyright (c) 2020 Petr Vorel <pvorel@suse.cz>
+ */
+
+#include <stdio.h>
+#include <string.h>
+#include <libgen.h>
+#include <ctype.h>
+
+#include "data_storage.h"
+
+static void oneline_comment(FILE *f)
+{
+	int c;
+
+	do {
+		c = getc(f);
+	} while (c != '\n' && c != EOF);
+}
+
+static const char *eat_asterisk_space(const char *c)
+{
+	unsigned int i = 0;
+
+	while (isspace(c[i]))
+		i++;
+
+	if (c[i] == '*') {
+		while (isspace(c[i+1]))
+			i++;
+		return &c[i+1];
+	}
+
+	return c;
+}
+
+static void multiline_comment(FILE *f, struct data_node *doc)
+{
+	int c;
+	int state = 0;
+	char buf[4096];
+	unsigned int bufp = 0;
+
+	for (;;) {
+		c = getc(f);
+
+		if (c == EOF)
+			return;
+
+		if (doc) {
+			if (c == '\n') {
+				struct data_node *line;
+				buf[bufp] = 0;
+				line = data_node_string(eat_asterisk_space(buf));
+				data_node_array_add(doc, line);
+				bufp = 0;
+				continue;
+			}
+
+			if (bufp + 1 >= sizeof(buf))
+				continue;
+
+			buf[bufp++] = c;
+		}
+
+		switch (state) {
+		case 0:
+			if (c == '*')
+				state = 1;
+		break;
+		case 1:
+			switch (c) {
+			case '/':
+				return;
+			case '*':
+				continue;
+			default:
+				state = 0;
+			break;
+			}
+		break;
+		}
+	}
+
+}
+
+static const char doc_prefix[] = "\\\n";
+
+static void maybe_doc_comment(FILE *f, struct data_node *doc)
+{
+	int c, i;
+
+	for (i = 0; doc_prefix[i]; i++) {
+		c = getc(f);
+
+		if (c == doc_prefix[i])
+			continue;
+
+		if (c == '*')
+			ungetc(c, f);
+
+		multiline_comment(f, NULL);
+		return;
+	}
+
+	multiline_comment(f, doc);
+}
+
+static void maybe_comment(FILE *f, struct data_node *doc)
+{
+	int c = getc(f);
+
+	switch (c) {
+	case '/':
+		oneline_comment(f);
+	break;
+	case '*':
+		maybe_doc_comment(f, doc);
+	break;
+	default:
+		ungetc(c, f);
+	break;
+	}
+}
+
+const char *next_token(FILE *f, struct data_node *doc)
+{
+	size_t i = 0;
+	static char buf[4096];
+	int c;
+	int in_str = 0;
+
+	for (;;) {
+		c = fgetc(f);
+
+		if (c == EOF)
+			goto exit;
+
+		if (in_str) {
+			if (c == '"') {
+				if (i == 0 || buf[i-1] != '\\')
+					goto exit;
+			}
+
+			buf[i++] = c;
+			continue;
+		}
+
+		switch (c) {
+		case '{':
+		case '}':
+		case ';':
+		case '(':
+		case ')':
+		case '=':
+		case ',':
+		case '[':
+		case ']':
+			if (i) {
+				ungetc(c, f);
+				goto exit;
+			}
+
+			buf[i++] = c;
+			goto exit;
+		case '0' ... '9':
+		case 'a' ... 'z':
+		case 'A' ... 'Z':
+		case '.':
+		case '_':
+		case '-':
+			buf[i++] = c;
+		break;
+		case '/':
+			maybe_comment(f, doc);
+		break;
+		case '"':
+			in_str = 1;
+		break;
+		case ' ':
+		case '\n':
+		case '\t':
+			if (i)
+				goto exit;
+		break;
+		}
+	}
+
+exit:
+	if (i == 0)
+		return NULL;
+
+	buf[i] = 0;
+	return buf;
+}
+
+#define WARN(str) fprintf(stderr, str "\n")
+
+static int parse_array(FILE *f, struct data_node *node)
+{
+	const char *token;
+
+	for (;;) {
+		if (!(token = next_token(f, NULL)))
+			return 1;
+
+		if (!strcmp(token, "{")) {
+			struct data_node *ret = data_node_array();
+			parse_array(f, ret);
+
+			if (data_node_array_len(ret))
+				data_node_array_add(node, ret);
+			else
+				data_node_free(ret);
+
+			continue;
+		}
+
+		if (!strcmp(token, "}"))
+			return 0;
+
+		if (!strcmp(token, ","))
+			continue;
+
+		if (!strcmp(token, "NULL"))
+			continue;
+
+		struct data_node *str = data_node_string(token);
+
+		data_node_array_add(node, str);
+	}
+
+	return 0;
+}
+
+static int parse_test_struct(FILE *f, struct data_node *doc, struct data_node *node)
+{
+	const char *token;
+	char *id = NULL;
+	int state = 0;
+	struct data_node *ret;
+
+	for (;;) {
+		if (!(token = next_token(f, doc)))
+			return 1;
+
+		if (!strcmp(token, "}"))
+			return 0;
+
+		switch (state) {
+		case 0:
+			id = strdup(token);
+			state = 1;
+			continue;
+		case 1:
+			if (!strcmp(token, "="))
+				state = 2;
+			else
+				WARN("Expected '='");
+			continue;
+		case 2:
+			if (!strcmp(token, "(")) {
+				state = 3;
+				continue;
+			}
+		break;
+		case 3:
+			if (!strcmp(token, ")"))
+				state = 2;
+			continue;
+
+		case 4:
+			if (!strcmp(token, ","))
+				state = 0;
+			continue;
+		}
+
+		if (!strcmp(token, "{")) {
+			ret = data_node_array();
+			parse_array(f, ret);
+		} else {
+			ret = data_node_string(token);
+		}
+
+		const char *key = id;
+		if (key[0] == '.')
+			key++;
+
+		data_node_hash_add(node, key, ret);
+		free(id);
+		state = 4;
+	}
+}
+
+static const char *tokens[] = {
+	"static",
+	"struct",
+	"tst_test",
+	"test",
+	"=",
+	"{",
+};
+
+static struct data_node *parse_file(const char *fname)
+{
+	int state = 0, found = 0;
+	const char *token;
+
+	FILE *f = fopen(fname, "r");
+
+	if (!f)
+		return NULL;
+
+	struct data_node *res = data_node_hash();
+	struct data_node *doc = data_node_array();
+
+	while ((token = next_token(f, doc))) {
+		if (state < 6 && !strcmp(tokens[state], token))
+			state++;
+		else
+			state = 0;
+
+		if (state < 6)
+			continue;
+
+		found = 1;
+		parse_test_struct(f, doc, res);
+	}
+
+	if (data_node_array_len(doc)) {
+		data_node_hash_add(res, "doc", doc);
+		found = 1;
+	} else {
+		data_node_free(doc);
+	}
+
+	fclose(f);
+
+	if (!found) {
+		data_node_free(res);
+		return NULL;
+	}
+
+	return res;
+}
+
+static const char *filter_out[] = {
+	"test",
+	"test_all",
+	"setup",
+	"cleanup",
+	"tcnt",
+	"mntpoint",
+	"bufs",
+	NULL
+};
+
+static struct implies {
+	const char *flag;
+	const char *implies;
+} implies[] = {
+	{"format_device", "needs_device"},
+	{"mount_device", "needs_device"},
+	{"mount_device", "format_device"},
+	{"all_filesystems", "needs_device"},
+	{"needs_device", "needs_tmpdir"},
+	{NULL, NULL}
+};
+
+const char *strip_name(char *path)
+{
+	char *name = basename(path);
+	size_t len = strlen(name);
+
+	if (len > 2 && name[len-1] == 'c' && name[len-2] == '.')
+		name[len-2] = '\0';
+
+	return name;
+}
+
+int main(int argc, char *argv[])
+{
+	unsigned int i;
+	struct data_node *res;
+
+	if (argc != 2) {
+		fprintf(stderr, "Usage: docparse filename.c\n");
+		return 1;
+	}
+
+	res = parse_file(argv[1]);
+	if (!res)
+		return 0;
+
+	/* Filter out useless data */
+	for (i = 0; filter_out[i]; i++)
+		data_node_hash_del(res, filter_out[i]);
+
+	/* Normalize the result */
+	for (i = 0; implies[i].flag; i++) {
+		if (!data_node_hash_get(res, implies[i].flag))
+			continue;
+
+		if (data_node_hash_get(res, implies[i].implies)) {
+			fprintf(stderr, "%s: useless tag: %s\n", argv[1], implies[i].implies);
+			continue;
+		}
+
+		data_node_hash_add(res, implies[i].implies, data_node_string("1"));
+	}
+
+	data_node_hash_add(res, "fname", data_node_string(argv[1]));
+	printf("  \"%s\": ", strip_name(argv[1]));
+	data_to_json(res, stdout, 2);
+	data_node_free(res);
+
+	return 0;
+}
diff --git a/docparse/parse.sh b/docparse/parse.sh
new file mode 100755
index 000000000..4ae0c42b2
--- /dev/null
+++ b/docparse/parse.sh
@@ -0,0 +1,41 @@
+#!/bin/sh
+# SPDX-License-Identifier: GPL-2.0-or-later
+# Copyright (c) 2019 Cyril Hrubis <chrubis@suse.cz>
+# Copyright (c) 2020 Petr Vorel <pvorel@suse.cz>
+set -e
+
+top_builddir=$PWD/..
+top_srcdir="$(dirname $0)/.."
+
+cd $top_srcdir
+
+version=$(cat $top_srcdir/VERSION)
+if [ -d .git ]; then
+	version=$(git describe 2>/dev/null) || version=$(cat $top_srcdir/VERSION).GIT-UNKNOWN
+fi
+
+echo '{'
+echo ' "testsuite": "Linux Test Project",'
+echo ' "testsuite_short": "LTP",'
+echo ' "url": "https://github.com/linux-test-project/ltp/",'
+echo ' "scm_url_base": "https://github.com/linux-test-project/ltp/tree/master/",'
+echo ' "timeout": 300,'
+echo " \"version\": \"$version\","
+echo ' "tests": {'
+
+first=1
+
+for test in `find testcases/ -name '*.c'`; do
+	a=$($top_builddir/docparse/docparse "$test")
+	if [ -n "$a" ]; then
+		if [ -z "$first" ]; then
+			echo ','
+		fi
+		first=
+		echo -n "$a"
+	fi
+done
+
+echo
+echo ' }'
+echo '}'
diff --git a/docparse/testinfo.pl b/docparse/testinfo.pl
new file mode 100755
index 000000000..d93d7d701
--- /dev/null
+++ b/docparse/testinfo.pl
@@ -0,0 +1,40 @@
+#!/usr/bin/perl
+# SPDX-License-Identifier: GPL-2.0-or-later
+# Copyright (c) 2019 Cyril Hrubis <chrubis@suse.cz>
+
+use strict;
+use warnings;
+
+use JSON;
+use Data::Dumper;
+
+sub load_json
+{
+	my ($fname) = @_;
+	local $/;
+
+	open(my $fh, '<', $fname) or die("Can't open $fname $!");
+
+	return <$fh>;
+}
+
+sub query_flag
+{
+	my ($json, $flag) = @_;
+
+	my $tests = $json->{'tests'};
+
+	foreach my $key (sort(keys %$tests)) {
+		if ($tests->{$key}->{$flag}) {
+			if ($tests->{$key}->{$flag} eq "1") {
+				print("$key\n");
+			} else {
+				print("$key:\n" . Dumper($tests->{$key}->{$flag}) . "\n");
+			}
+		}
+	}
+}
+
+my $json = decode_json(load_json($ARGV[0]));
+
+query_flag($json, $ARGV[1]);
-- 
2.26.2



* [LTP] [PATCH 04/11] docparse: Add README
  2020-10-05 13:30 [LTP] [PATCH 00/11] Test metadata extraction Cyril Hrubis
                   ` (2 preceding siblings ...)
  2020-10-05 13:30 ` [LTP] [PATCH 03/11] docparse: Add test documentation parser Cyril Hrubis
@ 2020-10-05 13:30 ` Cyril Hrubis
  2020-10-05 14:15   ` Jan Stancek
  2020-10-05 13:30 ` [LTP] [PATCH 05/11] syscalls: Add a few documentation comments Cyril Hrubis
                   ` (8 subsequent siblings)
  12 siblings, 1 reply; 21+ messages in thread
From: Cyril Hrubis @ 2020-10-05 13:30 UTC (permalink / raw)
  To: ltp

From: Cyril Hrubis <metan@ucw.cz>

* example of C source and JSON
* note about exporting timeouts

Signed-off-by: Tim Bird <tim.bird@sony.com>
[ Tim: fix typos and clean up grammar ]
Signed-off-by: Cyril Hrubis <metan@ucw.cz>
---
 docparse/README.md | 248 +++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 248 insertions(+)
 create mode 100644 docparse/README.md

diff --git a/docparse/README.md b/docparse/README.md
new file mode 100644
index 000000000..7e4847ba2
--- /dev/null
+++ b/docparse/README.md
@@ -0,0 +1,248 @@
+Motivation for metadata extraction
+==================================
+
+Exporting documentation
+-----------------------
+
+This allows us to build browsable documentation for the testcases, e.g. a
+catalogue of test information that would be searchable, etc. At this point
+there is a single page generated from the extracted data that tries to
+outline the intent.
+
+
+Propagating test requirements
+-----------------------------
+
+Some subtests require different hardware resources/software versions/etc.
+The test execution framework needs to consume these so that it can locate
+proper hardware, install proper software, etc.
+
+Some examples of requirements are:
+
+* Test needs at least 1GB of RAM.
+
+* Test needs a block device at least 512MB in size
+
+* Test needs a NUMA machine with two memory nodes and at least 300 free pages on each node
+
+* Test needs i2c eeprom connected on a i2c bus
+
+* Test needs two serial ports connected via null-modem cable
+
+
+With this information extracted from the tests the testrunner can then map
+the requirements onto the available machines in a lab and select a proper
+machine for the particular (sub)set of testcases, as well as supply a
+particular test with additional information needed for the test, such as the
+address of the i2c device, paths to the serial devices, etc. In the case of
+virtual machines the testrunner could also dynamically prepare the correct
+environment for the test on demand.
+
+
+Parallel test execution
+-----------------------
+
+An LTP test run on modern hardware wastes most of the machine's resources
+because the testcases run sequentially. However, in order to execute tests
+in parallel we need to know which system resources are utilized by a given
+test, as obviously we cannot run two tests that monopolize the same
+resource. In some cases we would also need to partition system resources
+accordingly, e.g. if we have two memory stress tests running at the same
+time we will need to cap each of these tests at half of the available
+memory, or make sure that the sum of the memory used by the two tests is
+not greater than the available memory.
+
+Examples of such tests are:
+
+* Tests that mess with global system state
+   - system time (e.g. settimeofday() test and leap second test)
+   - SysV SHM
+   - ...
+
+* Tests that use a block device
+
+* Tests that work with a particular hardware resource
+  - i2c eeprom test
+  - serial port tests
+  - ...
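
A scheduler could use such annotations to decide which tests may run concurrently. A minimal sketch, where the `resources` lists are hypothetical keys derived from the metadata:

```python
# Sketch: two tests can run in parallel only if they do not claim the
# same exclusive resource. The "resources" key is hypothetical.
def can_run_in_parallel(test_a, test_b):
    shared = set(test_a.get("resources", [])) & set(test_b.get("resources", []))
    return not shared

settime_test = {"resources": ["system_time"]}  # e.g. settimeofday() test
leap_test = {"resources": ["system_time"]}     # e.g. leap second test
eeprom_test = {"resources": ["i2c_eeprom"]}

print(can_run_in_parallel(settime_test, leap_test))    # -> False
print(can_run_in_parallel(settime_test, eeprom_test))  # -> True
```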
+
+Exporting test runtime/timeout to the testrunner
+------------------------------------------------
+
+Currently most testrunners do not know how long a test is supposed to run,
+which means that we have to guess some upper limit. The value is usually
+twice the maximal runtime of all testcases, or of the whole suite, or even
+larger. This means that we waste time when a test ends up stuck, since in
+most cases we could have failed it much sooner. This becomes quite important
+for kernel regression tests that crash the host: if the information that the
+test is supposed to crash the kernel under a minute is exported to the
+testrunner, we can reboot the machine much faster in the event of a crash.
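
A runner consuming a per-test limit could look like this sketch. The `max_runtime` key is an assumption; the 300s fallback mirrors the suite-wide `timeout` field in the metadata header.

```python
# Sketch: kill a stuck test after its own exported limit instead of a
# generic suite-wide guess. "max_runtime" is a hypothetical key.
import subprocess

def run_test(cmd, meta, suite_timeout=300):
    limit = int(meta.get("max_runtime", suite_timeout))
    try:
        return subprocess.run(cmd, timeout=limit).returncode
    except subprocess.TimeoutExpired:
        return "timeout"  # test exceeded its exported runtime

print(run_test(["sleep", "3"], {"max_runtime": 1}))  # -> timeout
```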
+
+Getting rid of runtest files
+----------------------------
+
+This would also allow us to get rid of the inflexible and hard to maintain
+runtest files. Once this system is in place we will have a list of all tests
+along with their respective metadata - which means that we will be able to
+generate subsets of the tests easily on the fly.
+
+In order to achieve this we need two things:
+
+First, each test will describe which syscall/functionality it tests in its
+metadata. Then we can define groups of tests based on that, i.e. instead of
+having a syscall runtest file we would ask the testrunner to run all tests
+that declare which syscall they test, or whose filename matches a particular
+syscall name.
+
+Secondly, we will have to store the test variants in the test metadata
+instead of putting them in a file that is unrelated to the test.
+
+For example:
+
+* To run CVE related tests we would select testcases with a CVE tag.
+
+* To run IPC tests we would define a list of IPC syscalls and run all
+  syscall tests that are in the list.
+
+* And many more...
+
+
+Implementation
+==============
+
+The docparser is implemented as a minimal C tokenizer that can parse and
+extract code comments and C structures. The docparser runs over all C
+sources in the testcases directory and if the tst\_test structure is present
+in a source it's parsed and the result is included in the resulting metadata.
+
+During parsing the metadata is stored in a simple key/value storage that
+more or less follows the C structure layout, i.e. it can include hashes,
+arrays, and strings. Once the parsing is finished the result is filtered so
+that only interesting fields of the tst\_test structure are included and
+then converted into JSON output.
+
+This process produces one big JSON file with metadata for all tests, which
+is then installed along with the testcases and consumed by the testrunner.
+
+The test requirements are stored in the tst\_test structure either as
+bit flags, integers, or arrays of strings:
+
+```c
+struct tst_test test = {
+	...
+	/* test needs to run as root (UID=0) */
+	.needs_root = 1,
+
+	/*
+	 * Test needs a block device at least 1024MB in size and also
+	 * mkfs.ext4 installed.
+	 */
+	.needs_device = 1,
+	.dev_min_size = 1024,
+	.dev_fs_type = "ext4",
+
+	/* Indicates that the test is messing with the system wall clock */
+	.restore_wallclock = 1,
+
+	/* Test needs the uinput driver either compiled in or loaded as a module */
+	.needs_drivers = (const char *[]) {
+		"uinput",
+		NULL
+	},
+
+	/* Test needs these kernel config options enabled */
+	.needs_kconfigs = (const char *[]) {
+		"CONFIG_X86_INTEL_UMIP=y",
+		NULL
+	},
+
+	/* Additional array of key/value pairs */
+	.tags = (const struct tst_tag[]) {
+		{"linux-git", "43a6684519ab"},
+		{"CVE", "2017-2671"},
+		{NULL, NULL}
+	}
+};
+```
+
+The test documentation is stored in a special comment such as:
+
+```
+/*\
+ * Test description
+ *
+ * This is a test description.
+ * Consisting of several lines.
+\*/
+```
+
+This will yield the following JSON output:
+
+```json
+ "testcaseXY": {
+  "needs_root": "1",
+  "needs_device": "1",
+  "dev_min_size": "1024",
+  "dev_fs_type": "ext4",
+  "restore_wallclock": "1",
+  "needs_drivers": [
+    "uinput",
+  ],
+  "needs_kconfigs": [
+    "CONFIG_X86_INTEL_UMIP=y",
+  ],
+  "tags": [
+    [
+     "linux-git",
+     "43a6684519ab"
+    ],
+    [
+     "CVE",
+     "2017-2671"
+    ],
+   ],
+  "doc": [
+    "Test description",
+    "",
+    "This is a test description.",
+    "Consisting of several lines."
+  ],
+  "fname": "testcases/kernel/syscalls/foo/testcaseXY.c"
+ },
+```
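
The extraction of the special comment can be illustrated with a short Python sketch. This is not LTP code - the real docparser is plain C - but it shows how the comment body maps onto the `doc` array above:

```python
# Sketch (not LTP code): extract the special /*\ ... \*/ documentation
# comment into the list of lines seen in the "doc" array above.
import re

def extract_doc(source):
    match = re.search(r'/\*\\\n(.*?)\\\*/', source, re.DOTALL)
    if not match:
        return []
    # strip the leading " * " decoration from each comment line
    return [re.sub(r'^ \* ?', '', line) for line in match.group(1).splitlines()]

src = """/*\\
 * Test description
 *
 * This is a test description.
 * Consisting of several lines.
\\*/"""

print(extract_doc(src))
```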
+
+The final JSON file is a JSON object of test descriptions indexed by test
+name, with a header describing the testsuite:
+
+```json
+{
+ "testsuite": "Linux Test Project",
+ "testsuite_short": "LTP",
+ "url": "https://github.com/linux-test-project/ltp/",
+ "scm_url_base": "https://github.com/linux-test-project/ltp/tree/master/",
+ "timeout": 300,
+ "version": "20200930",
+ "tests": {
+  "testcaseXY": {
+   ...
+  },
+  ...
+ }
+}
+```
+
+Open Points
+===========
+
+There are still some loose ends. Mostly it's not well defined where to put
+things and how to format them.
+
+* Some of the hardware requirements are already listed in the tst\_test. Should
+  we put all of them there?
+
+* What would be the format for test documentation and how to store things such
+  as test variants there?
+
+So far this proof of concept generates a metadata file. I guess that we need
+actual consumers to help settle things down; I will try to look into making
+use of this in runltp-ng at least as a reference implementation.
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [LTP] [PATCH 05/11] syscalls: Add a few documentation comments
  2020-10-05 13:30 [LTP] [PATCH 00/11] Test metadata extraction Cyril Hrubis
                   ` (3 preceding siblings ...)
  2020-10-05 13:30 ` [LTP] [PATCH 04/11] docparse: Add README Cyril Hrubis
@ 2020-10-05 13:30 ` Cyril Hrubis
  2020-10-05 13:30 ` [LTP] [PATCH 06/11] syscalls: Move needs_drivers inside of the tst_test struct Cyril Hrubis
                   ` (7 subsequent siblings)
  12 siblings, 0 replies; 21+ messages in thread
From: Cyril Hrubis @ 2020-10-05 13:30 UTC (permalink / raw)
  To: ltp

From: Cyril Hrubis <metan@ucw.cz>

So that it shows up in the resulting JSON file.

Signed-off-by: Cyril Hrubis <metan@ucw.cz>
---
 testcases/kernel/syscalls/abort/abort01.c   | 16 ++++++++++------
 testcases/kernel/syscalls/accept/accept01.c |  8 +++++---
 testcases/kernel/syscalls/accept/accept02.c |  7 +++++--
 testcases/kernel/syscalls/acct/acct01.c     |  5 +++++
 testcases/kernel/syscalls/acct/acct02.c     |  6 ++++--
 5 files changed, 29 insertions(+), 13 deletions(-)

diff --git a/testcases/kernel/syscalls/abort/abort01.c b/testcases/kernel/syscalls/abort/abort01.c
index 9505a5eec..b93324b34 100644
--- a/testcases/kernel/syscalls/abort/abort01.c
+++ b/testcases/kernel/syscalls/abort/abort01.c
@@ -5,14 +5,18 @@
  *   01/02/2003	Port to LTP	avenkat@us.ibm.com
  *   11/11/2002: Ported to LTP Suite by Ananda
  *   06/30/2001	Port to Linux	nsharoff@us.ibm.com
- *
- * ALGORITHM
- *	Fork child.  Have child abort, check return status.
- *
- * RESTRICTIONS
- *      The ulimit for core file size must be greater than 0.
  */
 
+/*\
+ * [DESCRIPTION]
+ *  Checks that a process which calls abort() gets killed by SIGIOT and dumps core.
+ *
+ * [ALGORITHM]
+ *  - Fork child.
+ *  - Child calls abort.
+ *  - Parent checks return status.
+\*/
+
 #include <sys/types.h>
 #include <sys/wait.h>
 #include <errno.h>
diff --git a/testcases/kernel/syscalls/accept/accept01.c b/testcases/kernel/syscalls/accept/accept01.c
index 4e30906f2..01d6db84c 100644
--- a/testcases/kernel/syscalls/accept/accept01.c
+++ b/testcases/kernel/syscalls/accept/accept01.c
@@ -3,11 +3,13 @@
 /*
  *   Copyright (c) International Business Machines  Corp., 2001
  *   07/2001 Ported by Wayne Boyer
- *
- *   Description:
- *     Verify that accept() returns the proper errno for various failure cases
  */
 
+/*\
+ * [DESCRIPTION]
+ * Verify that accept() returns the proper errno for various failure cases.
+\*/
+
 #include <stdio.h>
 #include <unistd.h>
 #include <errno.h>
diff --git a/testcases/kernel/syscalls/accept/accept02.c b/testcases/kernel/syscalls/accept/accept02.c
index 37ab8b64f..7fb6a494a 100644
--- a/testcases/kernel/syscalls/accept/accept02.c
+++ b/testcases/kernel/syscalls/accept/accept02.c
@@ -3,7 +3,10 @@
  * Copyright (c) 2019 SUSE LLC
  * Author: Christian Amann <camann@suse.com>
  */
-/* Test for CVE-2017-8890
+/*\
+ * [DESCRIPTION]
+ *
+ * Test for CVE-2017-8890
  *
  * In Kernels up to 4.10.15 missing commit 657831ff the multicast
  * group information of a socket gets copied over to a newly created
@@ -16,7 +19,7 @@
  *
  * For more information about this CVE see:
  * https://www.suse.com/security/cve/CVE-2017-8890/
- */
+\*/
 
 #include <errno.h>
 #include <sys/socket.h>
diff --git a/testcases/kernel/syscalls/acct/acct01.c b/testcases/kernel/syscalls/acct/acct01.c
index c161d2a2c..60e81bfad 100644
--- a/testcases/kernel/syscalls/acct/acct01.c
+++ b/testcases/kernel/syscalls/acct/acct01.c
@@ -7,6 +7,11 @@
 /* 12/03/2002	Port to LTP     robbiew@us.ibm.com */
 /* 06/30/2001	Port to Linux	nsharoff@us.ibm.com */
 
+/*\
+ * [DOCUMENTATION]
+ *  Verify that acct() returns proper errno on failure.
+\*/
+
 #include <sys/types.h>
 #include <sys/stat.h>
 #include <errno.h>
diff --git a/testcases/kernel/syscalls/acct/acct02.c b/testcases/kernel/syscalls/acct/acct02.c
index 8ee1bfcf8..e718e7df4 100644
--- a/testcases/kernel/syscalls/acct/acct02.c
+++ b/testcases/kernel/syscalls/acct/acct02.c
@@ -3,7 +3,9 @@
  *  Copyright (c) SUSE LLC, 2019
  *  Author: Christian Amann <camann@suse.com>
  */
-/*
+/*\
+ * [DOCUMENTATION]
+ *
  * This tests if the kernel writes correct data to the
  * process accounting file.
  *
@@ -19,7 +21,7 @@
  *
  * This is also accidental regression test for:
  * 4d9570158b626 kernel/acct.c: fix the acct->needcheck check in check_free_space()
- */
+\*/
 
 #include <sys/stat.h>
 #include <errno.h>
-- 
2.26.2



* [LTP] [PATCH 06/11] syscalls: Move needs_drivers inside of the tst_test struct
  2020-10-05 13:30 [LTP] [PATCH 00/11] Test metadata extraction Cyril Hrubis
                   ` (4 preceding siblings ...)
  2020-10-05 13:30 ` [LTP] [PATCH 05/11] syscalls: Add a few documentation comments Cyril Hrubis
@ 2020-10-05 13:30 ` Cyril Hrubis
  2020-10-05 13:30 ` [LTP] [PATCH 07/11] make: Allow {INSTALL, MAKE}_TARGETS be a directory Cyril Hrubis
                   ` (6 subsequent siblings)
  12 siblings, 0 replies; 21+ messages in thread
From: Cyril Hrubis @ 2020-10-05 13:30 UTC (permalink / raw)
  To: ltp

Signed-off-by: Cyril Hrubis <chrubis@suse.cz>
---
 testcases/kernel/syscalls/fsetxattr/fsetxattr02.c | 10 ++++------
 testcases/kernel/syscalls/ioctl/ioctl08.c         |  9 ++++-----
 2 files changed, 8 insertions(+), 11 deletions(-)

diff --git a/testcases/kernel/syscalls/fsetxattr/fsetxattr02.c b/testcases/kernel/syscalls/fsetxattr/fsetxattr02.c
index 205e80c95..3aea4b59e 100644
--- a/testcases/kernel/syscalls/fsetxattr/fsetxattr02.c
+++ b/testcases/kernel/syscalls/fsetxattr/fsetxattr02.c
@@ -241,11 +241,6 @@ static void cleanup(void)
 	}
 }
 
-static const char *const needed_drivers[] = {
-	"brd",
-	NULL,
-};
-
 static struct tst_test test = {
 	.setup = setup,
 	.test = verify_fsetxattr,
@@ -254,7 +249,10 @@ static struct tst_test test = {
 	.needs_devfs = 1,
 	.mntpoint = MNTPOINT,
 	.needs_root = 1,
-	.needs_drivers = needed_drivers,
+	.needs_drivers = (const char *const[]) {
+		"brd",
+		NULL,
+	},
 };
 
 #else /* HAVE_SYS_XATTR_H */
diff --git a/testcases/kernel/syscalls/ioctl/ioctl08.c b/testcases/kernel/syscalls/ioctl/ioctl08.c
index dca898a65..f7d11815d 100644
--- a/testcases/kernel/syscalls/ioctl/ioctl08.c
+++ b/testcases/kernel/syscalls/ioctl/ioctl08.c
@@ -112,10 +112,6 @@ static void setup(void)
 			sizeof(struct file_dedupe_range_info));
 }
 
-static const char *const needed_drivers[] = {
-	"btrfs",
-	NULL,
-};
 
 static struct tst_test test = {
 	.test = verify_ioctl,
@@ -127,7 +123,10 @@ static struct tst_test test = {
 	.mount_device = 1,
 	.mntpoint = MNTPOINT,
 	.dev_fs_type = "btrfs",
-	.needs_drivers = needed_drivers,
+	.needs_drivers = (const char *const[]) {
+		"btrfs",
+		NULL,
+	},
 };
 #else
 	TST_TEST_TCONF(
-- 
2.26.2



* [LTP] [PATCH 07/11] make: Allow {INSTALL, MAKE}_TARGETS be a directory
  2020-10-05 13:30 [LTP] [PATCH 00/11] Test metadata extraction Cyril Hrubis
                   ` (5 preceding siblings ...)
  2020-10-05 13:30 ` [LTP] [PATCH 06/11] syscalls: Move needs_drivers inside of the tst_test struct Cyril Hrubis
@ 2020-10-05 13:30 ` Cyril Hrubis
  2020-10-05 13:30 ` [LTP] [PATCH 08/11] make: Allow CLEAN_TARGETS to remove directories Cyril Hrubis
                   ` (5 subsequent siblings)
  12 siblings, 0 replies; 21+ messages in thread
From: Cyril Hrubis @ 2020-10-05 13:30 UTC (permalink / raw)
  To: ltp

From: Petr Vorel <pvorel@suse.cz>

by detecting it, adding the required -d parameter for install,
and installing the whole directory with -t.

This will be needed for the metadata.chunked target.
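
Outside of make, the intended behaviour corresponds to something like this shell sketch (temporary paths and modes chosen here for illustration):

```shell
# Emulate the rule: if the source is a directory, create it at the
# destination first (-d), then install its contents into it (-t).
src=$(mktemp -d); dst=$(mktemp -d)
mkdir "$src/metadata.chunked"
echo '<html></html>' > "$src/metadata.chunked/index.html"
t="metadata.chunked"
if test -d "$src/$t"; then
	install -d -m 0755 "$dst/$t"              # create the directory itself
	install -m 0644 -t "$dst/$t" "$src/$t"/*  # then its contents
else
	install -m 0755 "$src/$t" "$dst/$t"       # plain file, as before
fi
ls "$dst/$t"    # -> index.html
```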

Signed-off-by: Petr Vorel <pvorel@suse.cz>
---
 include/mk/functions.mk | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/include/mk/functions.mk b/include/mk/functions.mk
index 79c6193ca..e86dbccdc 100644
--- a/include/mk/functions.mk
+++ b/include/mk/functions.mk
@@ -35,7 +35,8 @@ INSTALL_FILES		+= $$(abspath $$(DESTDIR)/$(3)/$(1))
 
 $$(abspath $$(DESTDIR)/$(3)/$(1)): \
     $$(abspath $$(dir $$(DESTDIR)/$(3)/$(1)))
-	install -m $$(INSTALL_MODE) "$(2)/$(1)" "$$@"
+	install -m $$(INSTALL_MODE) $(shell test -d "$(2)/$(1)" && echo "-d") $(PARAM) "$(2)/$(1)" $$@
+	$(shell test -d "$(2)/$(1)" && echo "install -m "'$$(INSTALL_MODE) $(PARAM)' "$(2)/$(1)/*" -t '$$@')
 endef
 
 #
-- 
2.26.2



* [LTP] [PATCH 08/11] make: Allow CLEAN_TARGETS to remove directories
  2020-10-05 13:30 [LTP] [PATCH 00/11] Test metadata extraction Cyril Hrubis
                   ` (6 preceding siblings ...)
  2020-10-05 13:30 ` [LTP] [PATCH 07/11] make: Allow {INSTALL, MAKE}_TARGETS be a directory Cyril Hrubis
@ 2020-10-05 13:30 ` Cyril Hrubis
  2020-10-05 13:30 ` [LTP] [PATCH 09/11] travis: Install docparse dependencies Cyril Hrubis
                   ` (4 subsequent siblings)
  12 siblings, 0 replies; 21+ messages in thread
From: Cyril Hrubis @ 2020-10-05 13:30 UTC (permalink / raw)
  To: ltp

From: Petr Vorel <pvorel@suse.cz>

CLEAN_TARGETS of a leaf target can also contain directories.

Changing plain delete to recursive delete (-r parameter for rm) should be
safe, thus detection of whether it is actually required is not implemented.

This will be needed for the metadata.chunked target.

Signed-off-by: Petr Vorel <pvorel@suse.cz>
---
 include/mk/generic_leaf_target.inc | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/include/mk/generic_leaf_target.inc b/include/mk/generic_leaf_target.inc
index e6fa107d1..64953f89a 100644
--- a/include/mk/generic_leaf_target.inc
+++ b/include/mk/generic_leaf_target.inc
@@ -60,10 +60,10 @@
 # $(HOST_MAKE_TARGETS)	: Host tools which use $HOSTCC.
 #
 # $(CLEAN_TARGETS)		: What targets should be cleaned (must be
-#				  real files). This will automatically append
-#				  adds the .o suffix to all files referenced
-#				  by $(MAKE_TARGETS)) to CLEAN_TARGETS, if
-#				  MAKE_TARGETS wasn't defined (see
+#				  real files or directories). This will automatically
+#				  append the .o suffix to all files referenced by
+#				  $(MAKE_TARGETS) to CLEAN_TARGETS, if MAKE_TARGETS wasn't
+#				  defined (see
 #				  $(MAKE_TARGETS)).
 # $(INSTALL_MODE)		: What mode should we using when calling
 # 				  install(1)?
@@ -103,7 +103,7 @@ $(MAKE_TARGETS): | $(MAKE_DEPS)
 all: $(MAKE_TARGETS)
 
 clean:: $(CLEAN_DEPS)
-	-$(RM) -f $(CLEAN_TARGETS)
+	-$(RM) -f -r $(CLEAN_TARGETS)
 
 $(INSTALL_FILES): | $(INSTALL_DEPS)
 
-- 
2.26.2



* [LTP] [PATCH 09/11] travis: Install docparse dependencies
  2020-10-05 13:30 [LTP] [PATCH 00/11] Test metadata extraction Cyril Hrubis
                   ` (7 preceding siblings ...)
  2020-10-05 13:30 ` [LTP] [PATCH 08/11] make: Allow CLEAN_TARGETS to remove directories Cyril Hrubis
@ 2020-10-05 13:30 ` Cyril Hrubis
  2020-10-21 10:38   ` Li Wang
  2020-10-05 13:30 ` [LTP] [PATCH 10/11] docparse: Add configure options Cyril Hrubis
                   ` (3 subsequent siblings)
  12 siblings, 1 reply; 21+ messages in thread
From: Cyril Hrubis @ 2020-10-05 13:30 UTC (permalink / raw)
  To: ltp

From: Petr Vorel <pvorel@suse.cz>

Signed-off-by: Petr Vorel <pvorel@suse.cz>
---
 travis/alpine.sh         |  3 +++
 travis/debian.minimal.sh |  8 +++++++-
 travis/debian.sh         |  9 ++++++++-
 travis/fedora.sh         | 11 ++++++++---
 travis/tumbleweed.sh     |  8 +++++++-
 5 files changed, 33 insertions(+), 6 deletions(-)

diff --git a/travis/alpine.sh b/travis/alpine.sh
index f8960bed0..b793a9fbe 100755
--- a/travis/alpine.sh
+++ b/travis/alpine.sh
@@ -6,6 +6,8 @@ apk update
 
 apk add \
 	acl-dev \
+	asciidoc \
+	asciidoctor \
 	autoconf \
 	automake \
 	clang \
@@ -23,6 +25,7 @@ apk add \
 	musl-dev \
 	numactl-dev \
 	openssl-dev \
+	perl-json \
 	pkgconfig
 
 cat /etc/os-release
diff --git a/travis/debian.minimal.sh b/travis/debian.minimal.sh
index 3f1941969..5e6ba8662 100755
--- a/travis/debian.minimal.sh
+++ b/travis/debian.minimal.sh
@@ -2,7 +2,11 @@
 # Copyright (c) 2018-2020 Petr Vorel <pvorel@suse.cz>
 set -ex
 
-apt remove -y \
+apt="apt remove -y"
+
+$apt \
+	asciidoc \
+	asciidoctor \
 	libacl1-dev \
 	libaio-dev \
 	libaio1 \
@@ -17,3 +21,5 @@ apt remove -y \
 	libsepol1-dev \
 	libssl-dev \
 	libtirpc-dev
+
+$apt asciidoc-base ruby-asciidoctor || true
diff --git a/travis/debian.sh b/travis/debian.sh
index 28685f4d3..743b79001 100755
--- a/travis/debian.sh
+++ b/travis/debian.sh
@@ -8,8 +8,12 @@ grep -v oldstable-updates /etc/apt/sources.list > /tmp/sources.list && mv /tmp/s
 
 apt update
 
-apt install -y --no-install-recommends \
+apt="apt install -y --no-install-recommends"
+
+$apt \
 	acl-dev \
+	asciidoc \
+	asciidoctor \
 	autoconf \
 	automake \
 	build-essential \
@@ -26,6 +30,7 @@ apt install -y --no-install-recommends \
 	libcap2 \
 	libc6 \
 	libc6-dev \
+	libjson-perl \
 	libkeyutils-dev \
 	libkeyutils1 \
 	libmm-dev \
@@ -40,4 +45,6 @@ apt install -y --no-install-recommends \
 	lsb-release \
 	pkg-config
 
+$apt ruby-asciidoctor-pdf || true
+
 df -hT
diff --git a/travis/fedora.sh b/travis/fedora.sh
index 3c224f71e..6127d367d 100755
--- a/travis/fedora.sh
+++ b/travis/fedora.sh
@@ -2,7 +2,10 @@
 # Copyright (c) 2018-2020 Petr Vorel <pvorel@suse.cz>
 set -ex
 
-yum -y install \
+yum="yum -y install"
+
+$yum \
+	asciidoc \
 	autoconf \
 	automake \
 	make \
@@ -12,8 +15,10 @@ yum -y install \
 	findutils \
 	libtirpc \
 	libtirpc-devel \
+	perl-JSON \
 	pkg-config \
 	redhat-lsb-core
 
-# CentOS 8 doesn't have libmnl-devel
-yum -y install libmnl-devel || yum -y install libmnl
+# CentOS 8 fixes
+$yum libmnl-devel || $yum libmnl
+$yum rubygem-asciidoctor || true
diff --git a/travis/tumbleweed.sh b/travis/tumbleweed.sh
index 6247daa98..43ca3252a 100755
--- a/travis/tumbleweed.sh
+++ b/travis/tumbleweed.sh
@@ -2,7 +2,10 @@
 # Copyright (c) 2018-2020 Petr Vorel <pvorel@suse.cz>
 set -ex
 
-zypper --non-interactive install --force-resolution --no-recommends \
+zyp="zypper --non-interactive install --force-resolution --no-recommends"
+
+$zyp \
+	asciidoc \
 	autoconf \
 	automake \
 	clang \
@@ -23,4 +26,7 @@ zypper --non-interactive install --force-resolution --no-recommends \
 	libtirpc-devel \
 	linux-glibc-devel \
 	lsb-release \
+	perl-JSON \
 	pkg-config
+
+$zyp ruby2.7-rubygem-asciidoctor || $zyp ruby2.5-rubygem-asciidoctor
-- 
2.26.2



* [LTP] [PATCH 10/11] docparse: Add configure options
  2020-10-05 13:30 [LTP] [PATCH 00/11] Test metadata extraction Cyril Hrubis
                   ` (8 preceding siblings ...)
  2020-10-05 13:30 ` [LTP] [PATCH 09/11] travis: Install docparse dependencies Cyril Hrubis
@ 2020-10-05 13:30 ` Cyril Hrubis
  2020-10-05 13:30 ` [LTP] [PATCH 11/11] docparse: Generate html and pdf using asciidoc{, tor} Cyril Hrubis
                   ` (2 subsequent siblings)
  12 siblings, 0 replies; 21+ messages in thread
From: Cyril Hrubis @ 2020-10-05 13:30 UTC (permalink / raw)
  To: ltp

From: Petr Vorel <pvorel@suse.cz>

We build HTML and PDF docs with:
 - asciidoctor (the default as it produces better output, any version)
 - asciidoc (version >= 9 required)

For asciidoctor we support only the native asciidoctor-pdf plugin
(we don't support DocBook, Apache FO, or LaTeX intermediary formats).

The goal is to have a smart default: try to generate HTML (and also PDF
if enabled) docs if a suitable generator is available.

But allow the user to choose:
1) disable the whole metadata generation (--disable-metadata)
2) disable metadata HTML generation (--disable-metadata-html)
3) enable metadata PDF generation (--enable-metadata-pdf)
4) generator (--with-metadata-generator=asciidoc|asciidoctor)

Two ax_* (third-party) macros from autoconf-archive are used.

Signed-off-by: Petr Vorel <pvorel@suse.cz>
---
 Makefile                   |   6 +-
 configure.ac               |  30 ++++++-
 include/mk/features.mk.in  |   5 ++
 m4/ax_compare_version.m4   | 177 +++++++++++++++++++++++++++++++++++++
 m4/ax_prog_perl_modules.m4 |  77 ++++++++++++++++
 m4/ltp-docparse.m4         | 112 +++++++++++++++++++++++
 6 files changed, 405 insertions(+), 2 deletions(-)
 create mode 100644 m4/ax_compare_version.m4
 create mode 100644 m4/ax_prog_perl_modules.m4
 create mode 100644 m4/ltp-docparse.m4

diff --git a/Makefile b/Makefile
index 3830fb6d4..56812d77b 100644
--- a/Makefile
+++ b/Makefile
@@ -62,7 +62,11 @@ $(1):: | $$(abs_top_builddir)/$$(basename $$(subst -,.,$(1)))
 endif
 endef
 
-COMMON_TARGETS		+= testcases tools docparse
+COMMON_TARGETS		+= testcases tools
+ifeq ($(WITH_METADATA),yes)
+COMMON_TARGETS		+= docparse
+endif
+
 # Don't want to nuke the original files if we're installing in-build-tree.
 ifneq ($(BUILD_TREE_STATE),$(BUILD_TREE_SRCDIR_INSTALL))
 INSTALL_TARGETS		+= runtest scenario_groups testscripts
diff --git a/configure.ac b/configure.ac
index 05672f8f6..06be1c094 100644
--- a/configure.ac
+++ b/configure.ac
@@ -189,7 +189,7 @@ AC_CHECK_TYPES([struct xt_entry_match, struct xt_entry_target],,,[
 
 # Tools knobs
 
-# Expect
+# Bash
 AC_ARG_WITH([bash],
   [AC_HELP_STRING([--with-bash],
     [have the Bourne Again Shell interpreter])],
@@ -202,6 +202,34 @@ else
     AC_SUBST([WITH_BASH],["no"])
 fi
 
+# metadata
+AC_ARG_ENABLE([metadata],
+  [AC_HELP_STRING([--disable-metadata],
+	[Disable metadata generation (both HTML and PDF, default no)])],
+  [], [enable_metadata=yes]
+)
+AC_ARG_ENABLE([metadata_html],
+  [AC_HELP_STRING([--disable-metadata-html],
+	[Disable metadata HTML generation (default no)])],
+  [], [enable_metadata_html=yes]
+)
+
+AC_ARG_ENABLE([metadata_pdf],
+  [AC_HELP_STRING([--enable-metadata-pdf],
+	[Enable metadata PDF generation (default no)])],
+  [], [enable_metadata_pdf=no]
+)
+
+AC_ARG_WITH([metadata_generator],
+  [AC_HELP_STRING([--with-metadata-generator=asciidoc|asciidoctor],
+	[Specify metadata generator to use (default autodetect)])],
+  [with_metadata_generator=$withval],
+  [with_metadata_generator=detect]
+)
+
+LTP_DOCPARSE
+
+# Expect
 AC_ARG_WITH([expect],
   [AC_HELP_STRING([--with-expect],
     [have the Tcl/expect library])],
diff --git a/include/mk/features.mk.in b/include/mk/features.mk.in
index 8e561b738..ecb15a0f7 100644
--- a/include/mk/features.mk.in
+++ b/include/mk/features.mk.in
@@ -27,6 +27,11 @@ WITH_PERL			:= @WITH_PERL@
 
 WITH_PYTHON			:= @WITH_PYTHON@
 
+METADATA_GENERATOR		:= @METADATA_GENERATOR@
+WITH_METADATA			:= @WITH_METADATA@
+WITH_METADATA_HTML		:= @WITH_METADATA_HTML@
+WITH_METADATA_PDF		:= @WITH_METADATA_PDF@
+
 # Features knobs
 
 # Test suite knobs
diff --git a/m4/ax_compare_version.m4 b/m4/ax_compare_version.m4
new file mode 100644
index 000000000..ffb4997e8
--- /dev/null
+++ b/m4/ax_compare_version.m4
@@ -0,0 +1,177 @@
+# ===========================================================================
+#    https://www.gnu.org/software/autoconf-archive/ax_compare_version.html
+# ===========================================================================
+#
+# SYNOPSIS
+#
+#   AX_COMPARE_VERSION(VERSION_A, OP, VERSION_B, [ACTION-IF-TRUE], [ACTION-IF-FALSE])
+#
+# DESCRIPTION
+#
+#   This macro compares two version strings. Due to the various number of
+#   minor-version numbers that can exist, and the fact that string
+#   comparisons are not compatible with numeric comparisons, this is not
+#   necessarily trivial to do in a autoconf script. This macro makes doing
+#   these comparisons easy.
+#
+#   The six basic comparisons are available, as well as checking equality
+#   limited to a certain number of minor-version levels.
+#
+#   The operator OP determines what type of comparison to do, and can be one
+#   of:
+#
+#    eq  - equal (test A == B)
+#    ne  - not equal (test A != B)
+#    le  - less than or equal (test A <= B)
+#    ge  - greater than or equal (test A >= B)
+#    lt  - less than (test A < B)
+#    gt  - greater than (test A > B)
+#
+#   Additionally, the eq and ne operator can have a number after it to limit
+#   the test to that number of minor versions.
+#
+#    eq0 - equal up to the length of the shorter version
+#    ne0 - not equal up to the length of the shorter version
+#    eqN - equal up to N sub-version levels
+#    neN - not equal up to N sub-version levels
+#
+#   When the condition is true, shell commands ACTION-IF-TRUE are run,
+#   otherwise shell commands ACTION-IF-FALSE are run. The environment
+#   variable 'ax_compare_version' is always set to either 'true' or 'false'
+#   as well.
+#
+#   Examples:
+#
+#     AX_COMPARE_VERSION([3.15.7],[lt],[3.15.8])
+#     AX_COMPARE_VERSION([3.15],[lt],[3.15.8])
+#
+#   would both be true.
+#
+#     AX_COMPARE_VERSION([3.15.7],[eq],[3.15.8])
+#     AX_COMPARE_VERSION([3.15],[gt],[3.15.8])
+#
+#   would both be false.
+#
+#     AX_COMPARE_VERSION([3.15.7],[eq2],[3.15.8])
+#
+#   would be true because it is only comparing two minor versions.
+#
+#     AX_COMPARE_VERSION([3.15.7],[eq0],[3.15])
+#
+#   would be true because it is only comparing the lesser number of minor
+#   versions of the two values.
+#
+#   Note: The characters that separate the version numbers do not matter. An
+#   empty string is the same as version 0. OP is evaluated by autoconf, not
+#   configure, so must be a string, not a variable.
+#
+#   The author would like to acknowledge Guido Draheim whose advice about
+#   the m4_case and m4_ifvaln functions make this macro only include the
+#   portions necessary to perform the specific comparison specified by the
+#   OP argument in the final configure script.
+#
+# LICENSE
+#
+#   Copyright (c) 2008 Tim Toolan <toolan@ele.uri.edu>
+#
+#   Copying and distribution of this file, with or without modification, are
+#   permitted in any medium without royalty provided the copyright notice
+#   and this notice are preserved. This file is offered as-is, without any
+#   warranty.
+
+#serial 13
+
+dnl #########################################################################
+AC_DEFUN([AX_COMPARE_VERSION], [
+  AC_REQUIRE([AC_PROG_AWK])
+
+  # Used to indicate true or false condition
+  ax_compare_version=false
+
+  # Convert the two version strings to be compared into a format that
+  # allows a simple string comparison.  The end result is that a version
+  # string of the form 1.12.5-r617 will be converted to the form
+  # 0001001200050617.  In other words, each number is zero padded to four
+  # digits, and non digits are removed.
+  AS_VAR_PUSHDEF([A],[ax_compare_version_A])
+  A=`echo "$1" | sed -e 's/\([[0-9]]*\)/Z\1Z/g' \
+                     -e 's/Z\([[0-9]]\)Z/Z0\1Z/g' \
+                     -e 's/Z\([[0-9]][[0-9]]\)Z/Z0\1Z/g' \
+                     -e 's/Z\([[0-9]][[0-9]][[0-9]]\)Z/Z0\1Z/g' \
+                     -e 's/[[^0-9]]//g'`
+
+  AS_VAR_PUSHDEF([B],[ax_compare_version_B])
+  B=`echo "$3" | sed -e 's/\([[0-9]]*\)/Z\1Z/g' \
+                     -e 's/Z\([[0-9]]\)Z/Z0\1Z/g' \
+                     -e 's/Z\([[0-9]][[0-9]]\)Z/Z0\1Z/g' \
+                     -e 's/Z\([[0-9]][[0-9]][[0-9]]\)Z/Z0\1Z/g' \
+                     -e 's/[[^0-9]]//g'`
+
+  dnl # In the case of le, ge, lt, and gt, the strings are sorted as necessary
+  dnl # then the first line is used to determine if the condition is true.
+  dnl # The sed right after the echo is to remove any indented white space.
+  m4_case(m4_tolower($2),
+  [lt],[
+    ax_compare_version=`echo "x$A
+x$B" | sed 's/^ *//' | sort -r | sed "s/x${A}/false/;s/x${B}/true/;1q"`
+  ],
+  [gt],[
+    ax_compare_version=`echo "x$A
+x$B" | sed 's/^ *//' | sort | sed "s/x${A}/false/;s/x${B}/true/;1q"`
+  ],
+  [le],[
+    ax_compare_version=`echo "x$A
+x$B" | sed 's/^ *//' | sort | sed "s/x${A}/true/;s/x${B}/false/;1q"`
+  ],
+  [ge],[
+    ax_compare_version=`echo "x$A
+x$B" | sed 's/^ *//' | sort -r | sed "s/x${A}/true/;s/x${B}/false/;1q"`
+  ],[
+    dnl Split the operator from the subversion count if present.
+    m4_bmatch(m4_substr($2,2),
+    [0],[
+      # A count of zero means use the length of the shorter version.
+      # Determine the number of characters in A and B.
+      ax_compare_version_len_A=`echo "$A" | $AWK '{print(length)}'`
+      ax_compare_version_len_B=`echo "$B" | $AWK '{print(length)}'`
+
+      # Set A to no more than B's length and B to no more than A's length.
+      A=`echo "$A" | sed "s/\(.\{$ax_compare_version_len_B\}\).*/\1/"`
+      B=`echo "$B" | sed "s/\(.\{$ax_compare_version_len_A\}\).*/\1/"`
+    ],
+    [[0-9]+],[
+      # A count greater than zero means use only that many subversions
+      A=`echo "$A" | sed "s/\(\([[0-9]]\{4\}\)\{m4_substr($2,2)\}\).*/\1/"`
+      B=`echo "$B" | sed "s/\(\([[0-9]]\{4\}\)\{m4_substr($2,2)\}\).*/\1/"`
+    ],
+    [.+],[
+      AC_WARNING(
+        [invalid OP numeric parameter: $2])
+    ],[])
+
+    # Pad zeros at end of numbers to make same length.
+    ax_compare_version_tmp_A="$A`echo $B | sed 's/./0/g'`"
+    B="$B`echo $A | sed 's/./0/g'`"
+    A="$ax_compare_version_tmp_A"
+
+    # Check for equality or inequality as necessary.
+    m4_case(m4_tolower(m4_substr($2,0,2)),
+    [eq],[
+      test "x$A" = "x$B" && ax_compare_version=true
+    ],
+    [ne],[
+      test "x$A" != "x$B" && ax_compare_version=true
+    ],[
+      AC_WARNING([invalid OP parameter: $2])
+    ])
+  ])
+
+  AS_VAR_POPDEF([A])dnl
+  AS_VAR_POPDEF([B])dnl
+
+  dnl # Execute ACTION-IF-TRUE / ACTION-IF-FALSE.
+  if test "$ax_compare_version" = "true" ; then
+    m4_ifvaln([$4],[$4],[:])dnl
+    m4_ifvaln([$5],[else $5])dnl
+  fi
+]) dnl AX_COMPARE_VERSION
diff --git a/m4/ax_prog_perl_modules.m4 b/m4/ax_prog_perl_modules.m4
new file mode 100644
index 000000000..70b3230eb
--- /dev/null
+++ b/m4/ax_prog_perl_modules.m4
@@ -0,0 +1,77 @@
+# ===========================================================================
+#   https://www.gnu.org/software/autoconf-archive/ax_prog_perl_modules.html
+# ===========================================================================
+#
+# SYNOPSIS
+#
+#   AX_PROG_PERL_MODULES([MODULES], [ACTION-IF-TRUE], [ACTION-IF-FALSE])
+#
+# DESCRIPTION
+#
+#   Checks to see if the given perl modules are available. If true the shell
+#   commands in ACTION-IF-TRUE are executed. If not the shell commands in
+#   ACTION-IF-FALSE are run. Note if $PERL is not set (for example by
+#   calling AC_CHECK_PROG, or AC_PATH_PROG), AC_CHECK_PROG(PERL, perl, perl)
+#   will be run.
+#
+#   MODULES is a space separated list of module names. To check for a
+#   minimum version of a module, append the version number to the module
+#   name, separated by an equals sign.
+#
+#   Example:
+#
+#     AX_PROG_PERL_MODULES( Text::Wrap Net::LDAP=1.0.3, ,
+#                           AC_MSG_WARN(Need some Perl modules))
+#
+# LICENSE
+#
+#   Copyright (c) 2009 Dean Povey <povey@wedgetail.com>
+#
+#   Copying and distribution of this file, with or without modification, are
+#   permitted in any medium without royalty provided the copyright notice
+#   and this notice are preserved. This file is offered as-is, without any
+#   warranty.
+
+#serial 8
+
+AU_ALIAS([AC_PROG_PERL_MODULES], [AX_PROG_PERL_MODULES])
+AC_DEFUN([AX_PROG_PERL_MODULES],[dnl
+
+m4_define([ax_perl_modules])
+m4_foreach([ax_perl_module], m4_split(m4_normalize([$1])),
+	  [
+	   m4_append([ax_perl_modules],
+		     [']m4_bpatsubst(ax_perl_module,=,[ ])[' ])
+          ])
+
+# Make sure we have perl
+if test -z "$PERL"; then
+AC_CHECK_PROG(PERL,perl,perl)
+fi
+
+if test "x$PERL" != x; then
+  ax_perl_modules_failed=0
+  for ax_perl_module in ax_perl_modules; do
+    AC_MSG_CHECKING(for perl module $ax_perl_module)
+
+    # Would be nice to log result here, but can't rely on autoconf internals
+    $PERL -e "use $ax_perl_module; exit" > /dev/null 2>&1
+    if test $? -ne 0; then
+      AC_MSG_RESULT(no);
+      ax_perl_modules_failed=1
+   else
+      AC_MSG_RESULT(ok);
+    fi
+  done
+
+  # Run optional shell commands
+  if test "$ax_perl_modules_failed" = 0; then
+    :
+    $2
+  else
+    :
+    $3
+  fi
+else
+  AC_MSG_WARN(could not find perl)
+fi])dnl
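The per-module probe the macro runs boils down to one perl invocation per module; a plain-shell sketch (the core modules strict and warnings stand in for the real JSON/LWP::Simple list, and perl is assumed to be in $PATH):

```shell
# Skip gracefully when perl itself is missing.
command -v perl >/dev/null 2>&1 || { echo "failed=0"; exit 0; }

failed=0
for module in strict warnings; do
	# Same probe as AX_PROG_PERL_MODULES: import the module and exit.
	if perl -e "use $module; exit" >/dev/null 2>&1; then
		echo "perl module $module: ok"
	else
		echo "perl module $module: no"
		failed=1
	fi
done
echo "failed=$failed"
```

A non-zero exit from `perl -e "use $module"` is the only signal used; the module's version check (`Module=1.2`) works the same way because `use Module 1.2` dies when the installed version is older.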
diff --git a/m4/ltp-docparse.m4 b/m4/ltp-docparse.m4
new file mode 100644
index 000000000..88d2e08e4
--- /dev/null
+++ b/m4/ltp-docparse.m4
@@ -0,0 +1,112 @@
+dnl SPDX-License-Identifier: GPL-2.0-or-later
+dnl Copyright (c) 2020 Petr Vorel <pvorel@suse.cz>
+
+AC_DEFUN([LTP_CHECK_METADATA_GENERATOR_ASCIIDOCTOR], [
+	AC_MSG_NOTICE(checking asciidoctor as metadata generator)
+	AC_PATH_TOOL(asciidoctor, "asciidoctor")
+	metadata_generator_html=$asciidoctor
+	# pdf requires both asciidoctor and asciidoctor-pdf
+	if test "x$metadata_generator_html" != x; then
+		AC_PATH_TOOL(asciidoctor_pdf, "asciidoctor-pdf")
+		metadata_generator_pdf=$asciidoctor_pdf
+	fi
+])
+
+AC_DEFUN([LTP_CHECK_METADATA_GENERATOR_ASCIIDOC], [
+	AC_MSG_NOTICE(checking asciidoc as metadata generator)
+	AC_PATH_TOOL(a2x, "a2x")
+	if test "x$a2x" != x; then
+		version="`$a2x --version | cut -d ' ' -f2 `"
+		AX_COMPARE_VERSION([$version], lt, 9, [
+		AC_MSG_WARN([a2x unsupported version: $version. Use a2x >= 9])
+		a2x=
+		])
+	fi
+	metadata_generator_html=$a2x
+	# pdf requires both asciidoc and dblatex
+	if test "x$metadata_generator_html" != x; then
+		AC_PATH_TOOL(dblatex, "dblatex")
+		metadata_generator_pdf=$dblatex
+	fi
+])
+
+AC_DEFUN([LTP_DOCPARSE], [
+with_metadata=no
+with_metadata_html=no
+with_metadata_pdf=no
+
+if test "x$enable_metadata" = xyes && test "x$enable_metadata_html" = xyes -o "x$enable_metadata_pdf" = xyes; then
+	AX_PROG_PERL_MODULES(Cwd File::Basename JSON LWP::Simple)
+fi
+
+if test "x$ax_perl_modules_failed" = x0; then
+	if test "x$with_metadata_generator" = xasciidoctor -o "x$with_metadata_generator" = xdetect; then
+		LTP_CHECK_METADATA_GENERATOR_ASCIIDOCTOR
+	elif test "x$with_metadata_generator" = xasciidoc; then
+		LTP_CHECK_METADATA_GENERATOR_ASCIIDOC
+	else
+		AC_MSG_ERROR([invalid metadata generator '$with_metadata_generator', use --with-metadata-generator=asciidoc|asciidoctor])
+	fi
+
+	# autodetection: check also Asciidoc
+	if test "x$with_metadata_generator" = xdetect; then
+		with_metadata_generator='asciidoctor'
+		# problems with Asciidoctor: (html enabled && not found) || (pdf enabled && not found) => try Asciidoc
+		if test "x$enable_metadata_html" = xyes -a "x$metadata_generator_html" = x ||
+			test "x$enable_metadata_pdf" = xyes -a "x$metadata_generator_pdf" = x; then
+			backup_html="$metadata_generator_html"
+			backup_pdf="$metadata_generator_pdf"
+			AC_MSG_NOTICE(missing some dependencies for Asciidoctor => trying Asciidoc)
+			with_metadata_generator='asciidoc'
+			LTP_CHECK_METADATA_GENERATOR_ASCIIDOC
+			# prefer Asciidoctor if it's not worse than Asciidoc
+			# (html not enabled || asciidoctor html found || asciidoc html not found) && (pdf ...)
+			if test "x$enable_metadata_html" != xyes -o "x$backup_html" != x -o "x$metadata_generator_html" = x &&
+				test "x$enable_metadata_pdf" != xyes -o "x$backup_pdf" != x -o "x$metadata_generator_pdf" = x; then
+				with_metadata_generator='asciidoctor'
+				metadata_generator_html="$backup_html"
+				metadata_generator_pdf="$backup_pdf"
+			fi
+		fi
+		if test "x$metadata_generator_html" != x -o "x$metadata_generator_pdf" != x; then
+			AC_MSG_NOTICE(choosing $with_metadata_generator for metadata generation)
+		fi
+	fi
+
+	if test "x$enable_metadata_html" = xno; then
+		AC_MSG_NOTICE(HTML metadata generation disabled)
+	elif test "x$metadata_generator_html" != x; then
+		with_metadata_html=yes
+	fi
+
+	if test "x$enable_metadata_pdf" = xno; then
+		AC_MSG_NOTICE(PDF metadata generation disabled)
+	elif test "x$metadata_generator_pdf" != x; then
+		with_metadata_pdf=yes
+	fi
+fi
+
+reason="metadata generation skipped due to missing suitable generator"
+hint="specify correct generator with --with-metadata-generator=asciidoc|asciidoctor or use --disable-metadata|--disable-metadata-html|--disable-metadata-pdf"
+
+if test -z "$ax_perl_modules_failed"; then
+	AC_MSG_NOTICE(metadata generation disabled)
+elif test "x$ax_perl_modules_failed" = x1; then
+	AC_MSG_WARN(metadata generation skipped due to missing required Perl modules)
+elif test "x$with_metadata_html" = xno -a "x$with_metadata_pdf" = xno; then
+	AC_MSG_WARN([$reason, $hint])
+else
+	with_metadata=yes
+	AC_SUBST(METADATA_GENERATOR, $with_metadata_generator)
+	if test "x$with_metadata_html" = xno -a "x$enable_metadata_html" = xyes; then
+		AC_MSG_WARN([HTML $reason, $hint])
+	fi
+	if test "x$with_metadata_pdf" = xno -a "x$enable_metadata_pdf" = xyes; then
+		AC_MSG_WARN([PDF $reason, $hint])
+	fi
+fi
+
+AC_SUBST(WITH_METADATA, $with_metadata)
+AC_SUBST(WITH_METADATA_HTML, $with_metadata_html)
+AC_SUBST(WITH_METADATA_PDF, $with_metadata_pdf)
+])
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [LTP] [PATCH 11/11] docparse: Generate html and pdf using asciidoc{, tor}
  2020-10-05 13:30 [LTP] [PATCH 00/11] Test metadata extraction Cyril Hrubis
                   ` (9 preceding siblings ...)
  2020-10-05 13:30 ` [LTP] [PATCH 10/11] docparse: Add configure options Cyril Hrubis
@ 2020-10-05 13:30 ` Cyril Hrubis
  2020-10-23  6:18   ` Li Wang
  2020-10-05 13:35 ` [LTP] [PATCH 00/11] Test metadata extraction Cyril Hrubis
  2020-10-12  8:53 ` Petr Vorel
  12 siblings, 1 reply; 21+ messages in thread
From: Cyril Hrubis @ 2020-10-05 13:30 UTC (permalink / raw)
  To: ltp

From: Petr Vorel <pvorel@suse.cz>

Rewrite testinfo.pl to generate *.txt pages in asciidoc format, which
are then converted to HTML (and PDF if enabled) using asciidoc{,tor}.

Get Linux kernel git commit messages from a local git repository
instead of a web API (needed after having all tests on a single page,
because the API has access limits; it's also better to generate
everything at once and not depend on a network connection).

Signed-off-by: Petr Vorel <pvorel@suse.cz>
---
 docparse/.gitignore  |   5 +
 docparse/Makefile    |  58 +++++++
 docparse/testinfo.pl | 406 +++++++++++++++++++++++++++++++++++++++++--
 3 files changed, 458 insertions(+), 11 deletions(-)

diff --git a/docparse/.gitignore b/docparse/.gitignore
index f636ed847..7a87b4234 100644
--- a/docparse/.gitignore
+++ b/docparse/.gitignore
@@ -1,2 +1,7 @@
+/*.txt
+/docbook-xsl.css
 /docparse
 /metadata.json
+/metadata.html
+/metadata.pdf
+/metadata.chunked/
diff --git a/docparse/Makefile b/docparse/Makefile
index 94ba83ffe..a0ae9d965 100644
--- a/docparse/Makefile
+++ b/docparse/Makefile
@@ -1,19 +1,77 @@
 # SPDX-License-Identifier: GPL-2.0-or-later
 # Copyright (c) 2019 Cyril Hrubis <chrubis@suse.cz>
+# Copyright (c) 2020 Petr Vorel <pvorel@suse.cz>
 
 top_srcdir		?= ..
 
 include $(top_srcdir)/include/mk/env_pre.mk
 include $(top_srcdir)/include/mk/functions.mk
 
+ifeq ($(METADATA_GENERATOR),asciidoctor)
+METADATA_GENERATOR_CMD := asciidoctor
+METADATA_GENERATOR_PARAMS := -d book metadata.txt
+METADATA_GENERATOR_PARAMS_HTML := -b xhtml
+METADATA_GENERATOR_PARAMS_PDF := -b pdf -r asciidoctor-pdf
+else ifeq ($(METADATA_GENERATOR),asciidoc)
+METADATA_GENERATOR_CMD := a2x
+METADATA_GENERATOR_PARAMS := --xsltproc-opts "--stringparam toc.section.depth 1" -d book -L  --resource="$(PWD)" metadata.txt
+METADATA_GENERATOR_PARAMS_HTML := -f xhtml
+METADATA_GENERATOR_PARAMS_PDF := -f pdf
+METADATA_GENERATOR_PARAMS_HTML_CHUNKED := -f chunked
+else ifeq ($(METADATA_GENERATOR),)
+$(error 'METADATA_GENERATOR' not configured, run ./configure in the root directory)
+else
+$(error '$(METADATA_GENERATOR)' not supported, only asciidoctor and asciidoc are supported)
+endif
+
+ifdef VERBOSE
+METADATA_GENERATOR_PARAMS += -v
+endif
+
+CLEAN_TARGETS		:= *.txt
 MAKE_TARGETS		:= metadata.json
+
+ifeq ($(WITH_METADATA_HTML),yes)
+MAKE_TARGETS		+= metadata.html
+ifneq ($(METADATA_GENERATOR_PARAMS_HTML_CHUNKED),)
+MAKE_TARGETS		+= metadata.chunked
+endif
+endif
+
+ifeq ($(WITH_METADATA_PDF),yes)
+MAKE_TARGETS		+= metadata.pdf
+endif
+
 HOST_MAKE_TARGETS	:= docparse
 
 INSTALL_DIR = metadata
+INSTALL_TARGETS = *.css *.js
+
+ifndef METADATA_GENERATOR
+METADATA_GENERATOR := asciidoctor
+endif
 
 .PHONY: metadata.json
 
 metadata.json: docparse
 	$(abs_srcdir)/parse.sh > metadata.json
 
+txt: metadata.json
+	$(abs_srcdir)/testinfo.pl metadata.json
+
+ifeq ($(WITH_METADATA_HTML),yes)
+metadata.html: txt
+	$(METADATA_GENERATOR_CMD) $(METADATA_GENERATOR_PARAMS) $(METADATA_GENERATOR_PARAMS_HTML)
+
+ifneq ($(METADATA_GENERATOR_PARAMS_HTML_CHUNKED),)
+metadata.chunked: txt
+	$(METADATA_GENERATOR_CMD) $(METADATA_GENERATOR_PARAMS) $(METADATA_GENERATOR_PARAMS_HTML_CHUNKED)
+endif
+endif
+
+ifeq ($(WITH_METADATA_PDF),yes)
+metadata.pdf: txt
+	$(METADATA_GENERATOR_CMD) $(METADATA_GENERATOR_PARAMS) $(METADATA_GENERATOR_PARAMS_PDF)
+endif
+
 include $(top_srcdir)/include/mk/generic_leaf_target.mk
diff --git a/docparse/testinfo.pl b/docparse/testinfo.pl
index d93d7d701..d8d9ea663 100755
--- a/docparse/testinfo.pl
+++ b/docparse/testinfo.pl
@@ -1,16 +1,21 @@
 #!/usr/bin/perl
 # SPDX-License-Identifier: GPL-2.0-or-later
 # Copyright (c) 2019 Cyril Hrubis <chrubis@suse.cz>
+# Copyright (c) 2020 Petr Vorel <pvorel@suse.cz>
 
 use strict;
 use warnings;
 
 use JSON;
-use Data::Dumper;
+use LWP::Simple;
+use Cwd qw(abs_path);
+use File::Basename qw(dirname);
+
+use constant OUTDIR => dirname(abs_path($0));
 
 sub load_json
 {
-	my ($fname) = @_;
+	my ($fname, $mode) = @_;
 	local $/;
 
 	open(my $fh, '<', $fname) or die("Can't open $fname $!");
@@ -18,23 +23,402 @@ sub load_json
 	return <$fh>;
 }
 
-sub query_flag
+sub log_info
+{
+	my $msg = shift;
+	print STDERR "INFO: $msg\n";
+}
+
+sub log_warn
+{
+	my $msg = shift;
+	print STDERR "WARN: $msg\n";
+}
+
+sub print_asciidoc_page
+{
+	my ($fh, $json, $title, $content) = @_;
+
+	print $fh <<EOL;
+// -*- mode:doc; -*-
+// vim: set syntax=asciidoc:
+
+$title
+
+$content
+EOL
+}
+
+sub tag_url {
+	my ($tag, $value, $scm_url_base) = @_;
+
+    if ($tag eq "CVE") {
+        return "https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-" . $value;
+	}
+    if ($tag eq "linux-git") {
+        return "https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=" . $value;
+	}
+    if ($tag eq "fname") {
+        return $scm_url_base . $value;
+	}
+}
+
+sub bold
+{
+	return "*$_[0]*";
+}
+
+sub code
+{
+	return "+$_[0]+";
+}
+
+sub hr
+{
+	return "\n\n'''\n\n";
+}
+
+sub html_a
+{
+	my ($url, $text) = @_;
+	return "$url\[$text\]";
+}
+
+sub h1
+{
+	return "== $_[0]\n";
+}
+
+sub h2
+{
+	return "=== $_[0]\n";
+}
+
+sub h3
+{
+	return "==== $_[0]\n";
+}
+
+sub label
+{
+	return "[[$_[0]]]\n";
+}
+
+sub paragraph
+{
+	return "$_[0]\n\n";
+}
+
+sub reference
+{
+	return "xref:$_[0]\[$_[0]\]" . (defined($_[1]) ? $_[1] : "") . "\n";
+}
+
+sub table
+{
+	return "|===\n";
+}
+
+sub print_defined
+{
+	my ($key, $val, $val2) = @_;
+
+	if (defined($val)) {
+		return paragraph(bold($key) . ": " . $val . (defined($val2) ? " $val2" : ""));
+	}
+}
+
+sub content_about
+{
+	my $json = shift;
+	my $content;
+
+	$content .= print_defined("URL", $json->{'url'});
+	$content .= print_defined("Version", $json->{'version'});
+	$content .= print_defined("Default timeout", $json->{'timeout'}, "seconds");
+
+	return $content;
+}
+
+sub uniq {
+	my %seen;
+	grep !$seen{$_}++, @_;
+}
+
+sub get_test_names
+{
+	my @names = @{$_[0]};
+	my ($letter, $prev_letter);
+	my $content;
+
+	for my $name (sort @names) {
+		$letter = substr($name, 0, 1);
+		if (defined($prev_letter) && $letter ne $prev_letter) {
+			$content .= "\n";
+		}
+
+		$content .= reference($name, " ");
+		$prev_letter = $letter;
+	}
+	$content .= "\n";
+
+	return $content;
+}
+
+sub get_test_letters
+{
+	my @names = @{$_[0]};
+	my $letter;
+	my $prev_letter = "";
+	my $content;
+
+	for (@names) {
+		$_ = substr($_, 0, 1);
+	}
+	@names = uniq(@names);
+
+	for my $letter (@names) {
+		$content .= reference($letter);
+	}
+	$content .= "\n";
+
+	return $content;
+}
+
+sub tag2title
 {
-	my ($json, $flag) = @_;
+	my $tag = shift;
+	return code(".$tag");
+}
+
+sub get_filters
+{
+	my $json = shift;
+	my %data;
+	while (my ($k, $v) = each %{$json->{'tests'}}) {
+		for my $j (keys %{$v}) {
+
+			next if ($j eq 'fname' || $j eq 'doc');
+
+			$data{$j} = () unless (defined($data{$j}));
+			push @{$data{$j}}, $k;
+		}
+	}
+	return \%data;
+}
+
+# TODO: Handle better .tags (and anything else which contains array)
+# e.g. for .tags there could be separate list for CVE and linux-git
+# (now it's together in single list).
+sub content_filters
+{
+	my $json = shift;
+	my $data = get_filters($json);
+	my %h = %$data;
+	my $content;
+
+	for my $k (sort keys %$data) {
+		my $tag = tag2title($k);
+		my ($letter, $prev_letter);
+		$content .= h2($tag);
+		$content .= paragraph("Tests containing $tag flag.");
+		$content .= get_test_names(\@{$h{$k}});
+	}
+
+	return $content;
+}
+
+sub detect_git
+{
+	unless (defined $ENV{'LINUX_GIT'} && $ENV{'LINUX_GIT'}) {
+		log_warn("kernel git repository not defined. Define it in \$LINUX_GIT");
+		return 0;
+	}
+
+	unless (-d $ENV{'LINUX_GIT'}) {
+		log_warn("\$LINUX_GIT does not exist ('$ENV{'LINUX_GIT'}')");
+		return 0;
+	}
+
+	my $ret = 0;
+	if (system("which git >/dev/null")) {
+		log_warn("git not in \$PATH ('$ENV{'PATH'}')");
+		return 0;
+	}
+
+	chdir($ENV{'LINUX_GIT'});
+	if (!system("git log -1 > /dev/null")) {
+		log_info("using '$ENV{'LINUX_GIT'}' as kernel git repository");
+		$ret = 1;
+	} else {
+		log_warn("git failed, git not installed or \$LINUX_GIT is not a git repository? ('$ENV{'LINUX_GIT'}')");
+	}
+	chdir(OUTDIR);
+
+	return $ret;
+}
+
+sub content_all_tests
+{
+	my $json = shift;
+	my @names = sort keys %{$json->{'tests'}};
+	my $letters = paragraph(get_test_letters(\@names));
+	my $has_kernel_git = detect_git();
+	my $tmp = undef;
+	my $printed = "";
+	my $content;
+
+	unless ($has_kernel_git) {
+		log_info("Parsing git messages from linux git repository skipped due to previous error");
+	}
+
+	$content .= paragraph("Total " . scalar(@names) . " tests.");
+	$content .= $letters;
+	$content .= get_test_names(\@names);
+
+	for my $name (@names) {
+		my $letter = substr($name, 0, 1);
 
-	my $tests = $json->{'tests'};
+		if ($printed ne $letter) {
+			$content .= label($letter);
+			$content .= h2($letter);
+			$printed = $letter;
+		}
+
+		$content .= hr() if (defined($tmp));
+		$content .= label($name);
+		$content .= h3($name);
+		$content .= $letters;
+
+		if (defined($json->{'scm_url_base'}) &&
+			defined($json->{'tests'}{$name}{fname})) {
+			$content .= paragraph(html_a(tag_url("fname", $json->{'tests'}{$name}{fname},
+					$json->{'scm_url_base'}), "source"));
+		}
+
+		if (defined $json->{'tests'}{$name}{doc}) {
+			for my $doc (@{$json->{'tests'}{$name}{doc}}) {
 
-	foreach my $key (sort(keys %$tests)) {
-		if ($tests->{$key}->{$flag}) {
-			if ($tests->{$key}->{$flag} eq "1") {
-				print("$key\n");
+				# fix formatting for asciidoc [DOCUMENTATION] => *DOCUMENTATION*
+				if ($doc =~ s/^\[(.*)\]$/$1/) {
+					$doc = paragraph(bold($doc));
+				}
+
+				$content .= "$doc\n";
+			}
+			$content .= "\n";
+		}
+
+		if ($json->{'tests'}{$name}{timeout}) {
+			if ($json->{'tests'}{$name}{timeout} eq -1) {
+				$content .= paragraph("Test timeout is disabled");
 			} else {
-				print("$key:\n" . Dumper($tests->{$key}->{$flag}) . "\n");
+				$content .= paragraph("Test timeout is $json->{'tests'}{$name}{timeout} seconds");
 			}
+		} else {
+			$content .= paragraph("Test timeout defaults to $json->{'timeout'} seconds");
 		}
+
+		my $tmp2 = undef;
+		for my $k (sort keys %{$json->{'tests'}{$name}}) {
+			my $v = $json->{'tests'}{$name}{$k};
+			next if ($k eq "tags" || $k eq "fname" || $k eq "doc");
+			if (!defined($tmp2)) {
+				$content .= table . "|Key|Value\n\n"
+			}
+
+			$content .= "|" . tag2title($k) . "\n|";
+			if (ref($v) eq 'ARRAY') {
+				$content .= join(', ', @$v),
+			} else {
+				$content .= $v;
+			}
+			$content .= "\n";
+
+			$tmp2 = 1;
+		}
+		if (defined($tmp2)) {
+			$content .= table . "\n";
+		}
+
+		$tmp2 = undef;
+		my %commits;
+
+		for my $tag (@{$json->{'tests'}{$name}{tags}}) {
+			if (!defined($tmp2)) {
+				$content .= table . "|Tags|Info\n"
+			}
+			my $k = @$tag[0];
+			my $v = @$tag[1];
+			my $text = $k;
+
+            if ($has_kernel_git && $k eq "linux-git") {
+				$text .= "-$v";
+				unless (defined($commits{$v})) {
+					chdir($ENV{'LINUX_GIT'});
+					$commits{$v} = `git log --pretty=format:'%s' -1 $v`;
+					chdir(OUTDIR);
+				}
+				$v = $commits{$v};
+			}
+			my $a = html_a(tag_url($k, @$tag[1]), $text);
+			$content .= "\n|$a\n|$v\n";
+			$tmp2 = 1;
+		}
+		if (defined($tmp2)) {
+			$content .= table . "\n";
+		}
+
+		$tmp = 1;
 	}
+
+	return $content;
 }
 
+
 my $json = decode_json(load_json($ARGV[0]));
 
-query_flag($json, $ARGV[1]);
+my $config = [
+    {
+		file => "about.txt",
+		title => h2("About $json->{'testsuite'}"),
+		content => \&content_about,
+    },
+    {
+		file => "filters.txt",
+		title => h1("Test filtered by used flags"),
+		content => \&content_filters,
+    },
+    {
+		file => "all-tests.txt",
+		title => h1("All tests"),
+		content => \&content_all_tests,
+    },
+];
+
+sub print_asciidoc_main
+{
+	my $config = shift;
+	my $file = "metadata.txt";
+	my $content;
+
+	open(my $fh, '>', $file) or die("Can't open $file $!");
+
+	$content = <<EOL;
+:doctype: inline
+:sectanchors:
+:toc:
+
+EOL
+	for my $c (@{$config}) {
+		$content .= "include::$c->{'file'}\[\]\n";
+	}
+	print_asciidoc_page($fh, $json, h1($json->{'testsuite_short'} . " test catalog"), $content);
+}
+
+for my $c (@{$config}) {
+	open(my $fh, '>', $c->{'file'}) or die("Can't open $c->{'file'} $!");
+	print_asciidoc_page($fh, $json, $c->{'title'}, $c->{'content'}->($json));
+}
+
+print_asciidoc_main($config);
-- 
2.26.2



* [LTP] [PATCH 00/11] Test metadata extraction
  2020-10-05 13:30 [LTP] [PATCH 00/11] Test metadata extraction Cyril Hrubis
                   ` (10 preceding siblings ...)
  2020-10-05 13:30 ` [LTP] [PATCH 11/11] docparse: Generate html and pdf using asciidoc{, tor} Cyril Hrubis
@ 2020-10-05 13:35 ` Cyril Hrubis
  2020-10-12  8:53 ` Petr Vorel
  12 siblings, 0 replies; 21+ messages in thread
From: Cyril Hrubis @ 2020-10-05 13:35 UTC (permalink / raw)
  To: ltp

Hi!
The applied patchset can also be seen at:

https://github.com/metan-ucw/ltp/tree/master/docparse

Also please ignore the metan@ucw.cz address in the sign-offs; that should
have been chrubis@suse.cz. I will fix that before applying if we decide
to go with this version.

-- 
Cyril Hrubis
chrubis@suse.cz


* [LTP] [PATCH 04/11] docparse: Add README
  2020-10-05 13:30 ` [LTP] [PATCH 04/11] docparse: Add README Cyril Hrubis
@ 2020-10-05 14:15   ` Jan Stancek
  2020-10-13 11:59     ` Cyril Hrubis
  0 siblings, 1 reply; 21+ messages in thread
From: Jan Stancek @ 2020-10-05 14:15 UTC (permalink / raw)
  To: ltp



----- Original Message -----
> +Open Points
> +===========
> +
> +There are still some loose ends. Mostly it's not well defined where to put
> +things and how to format them.
> +
> +* Some of the hardware requirements are already listed in the tst\_test.
> Should
> +  we put all of them there?
> +
> +* What would be the format for test documentation and how to store things
> such
> +  as test variants there?

I'm assuming you don't mean ".test_variants" here, but runtest entries
using the same binary with different parameters. Currently we have a "tag"
that can refer to a binary+parameters pair, which is also useful for
skipping tests - e.g. they run long, they are broken, or an older kernel
is known to crash - we have a list of checks that modify runtest files
before the actual run to avoid running into known issues (or save time).

> +
> +So far this proof of concept generates a metadata file. I guess that we need
> +actual consumers which will help to settle things down, I will try to look
> into
> +making use of this in the runltp-ng at least as a reference implementation.
> --
> 2.26.2
> 
> 
> --
> Mailing list info: https://lists.linux.it/listinfo/ltp
> 
> 



* [LTP] [PATCH 00/11] Test metadata extraction
  2020-10-05 13:30 [LTP] [PATCH 00/11] Test metadata extraction Cyril Hrubis
                   ` (11 preceding siblings ...)
  2020-10-05 13:35 ` [LTP] [PATCH 00/11] Test metadata extraction Cyril Hrubis
@ 2020-10-12  8:53 ` Petr Vorel
  12 siblings, 0 replies; 21+ messages in thread
From: Petr Vorel @ 2020-10-12  8:53 UTC (permalink / raw)
  To: ltp

Hi,

> This patchset adds a test metadata extraction into LTP and also
> documentation generator that produces browseable HTML documentation from
> the exported metadata. For detailed description of the idea and
> implementation see the patch that adds README.md.

> While the idea is quite new the code is mature enough to be included in
> the upstream repository and I'm also worried that we will not get any
> feedback or users of the metadata unless it's included in the upstream
> git.

Example of the output:
(generally I prefer asciidoctor, but PDF support is not always available)
HTML could have custom JavaScript (probably jQuery) based search/filtering.

= asciidoctor
* PDF
https://pevik.github.io/asciidoctor/metadata.pdf
The TOC is missing (:toc: does not work); adding '-a toc' and generating with:
asciidoctor -d book metadata.txt -b pdf -r asciidoctor-pdf -a toc

puts the TOC at the top.

* HTML single page
https://pevik.github.io/asciidoctor/metadata.html

= asciidoc
* PDF
https://pevik.github.io/asciidoc/metadata.pdf
There is an ugly revision history at the top.

* HTML single page
https://pevik.github.io/asciidoc/metadata.html

* HTML chunked
https://pevik.github.io/asciidoc/metadata.chunked/index.html
https://pevik.github.io/asciidoc/metadata.chunked/ch01.html
https://pevik.github.io/asciidoc/metadata.chunked/ch02.html
https://pevik.github.io/asciidoc/metadata.chunked/ch03.html

> The next step is to use the extracted metadata in runltp-ng in the proof
> of concept parallel executor that has been written by Ritchie and posted
> to this mailing list as well.

Kind regards,
Petr


* [LTP] [PATCH 04/11] docparse: Add README
  2020-10-05 14:15   ` Jan Stancek
@ 2020-10-13 11:59     ` Cyril Hrubis
  0 siblings, 0 replies; 21+ messages in thread
From: Cyril Hrubis @ 2020-10-13 11:59 UTC (permalink / raw)
  To: ltp

Hi!
> > +Open Points
> > +===========
> > +
> > +There are still some loose ends. Mostly it's not well defined where to put
> > +things and how to format them.
> > +
> > +* Some of the hardware requirements are already listed in the tst\_test.
> > Should
> > +  we put all of them there?
> > +
> > +* What would be the format for test documentation and how to store things
> > such
> > +  as test variants there?
> 
> I'm assuming you don't mean ".test_variants" here, but runtest entries
> using the same binary with different parameters. Currently we have a "tag"
> that can refer to a binary+parameters pair, which is also useful for
> skipping tests - e.g. they run long, they are broken, or an older kernel
> is known to crash - we have a list of checks that modify runtest files
> before the actual run to avoid running into known issues (or save time).

Yes, I do mean binary parameters here. And yes, we will need a way to
express skiplists for this as well.

-- 
Cyril Hrubis
chrubis@suse.cz


* [LTP] [PATCH 09/11] travis: Install docparse dependencies
  2020-10-05 13:30 ` [LTP] [PATCH 09/11] travis: Install docparse dependencies Cyril Hrubis
@ 2020-10-21 10:38   ` Li Wang
  0 siblings, 0 replies; 21+ messages in thread
From: Li Wang @ 2020-10-21 10:38 UTC (permalink / raw)
  To: ltp

Cyril Hrubis <chrubis@suse.cz> wrote:
...
> --- a/travis/fedora.sh
> +++ b/travis/fedora.sh
> @@ -2,7 +2,10 @@
>  # Copyright (c) 2018-2020 Petr Vorel <pvorel@suse.cz>
>  set -ex
>
> -yum -y install \
> +yum="yum -y install"
> +
> +$yum \
> +       asciidoc \
>         autoconf \
>         automake \
>         make \
> @@ -12,8 +15,10 @@ yum -y install \
>         findutils \
>         libtirpc \
>         libtirpc-devel \
> +       perl-JSON \

Package perl-libwww-perl is needed for fedora/centos as well.

-- 
Regards,
Li Wang



* [LTP] [PATCH 11/11] docparse: Generate html and pdf using asciidoc{, tor}
  2020-10-05 13:30 ` [LTP] [PATCH 11/11] docparse: Generate html and pdf using asciidoc{, tor} Cyril Hrubis
@ 2020-10-23  6:18   ` Li Wang
  2020-10-23  7:19     ` Petr Vorel
  0 siblings, 1 reply; 21+ messages in thread
From: Li Wang @ 2020-10-23  6:18 UTC (permalink / raw)
  To: ltp

Cyril Hrubis <chrubis@suse.cz> wrote:

> --- a/docparse/Makefile
> +++ b/docparse/Makefile
> @@ -1,19 +1,77 @@
>  # SPDX-License-Identifier: GPL-2.0-or-later
>  # Copyright (c) 2019 Cyril Hrubis <chrubis@suse.cz>
> +# Copyright (c) 2020 Petr Vorel <pvorel@suse.cz>
>
>  top_srcdir             ?= ..
>
>  include $(top_srcdir)/include/mk/env_pre.mk
>  include $(top_srcdir)/include/mk/functions.mk
>
> +ifeq ($(METADATA_GENERATOR),asciidoctor)
> +METADATA_GENERATOR_CMD := asciidoctor
> +METADATA_GENERATOR_PARAMS := -d book metadata.txt
> +METADATA_GENERATOR_PARAMS_HTML := -b xhtml
> +METADATA_GENERATOR_PARAMS_PDF := -b pdf -r asciidoctor-pdf
> +else ifeq ($(METADATA_GENERATOR),asciidoc)
> +METADATA_GENERATOR_CMD := a2x
> +METADATA_GENERATOR_PARAMS := --xsltproc-opts "--stringparam toc.section.depth 1" -d book -L  --resource="$(PWD)" metadata.txt
> +METADATA_GENERATOR_PARAMS_HTML := -f xhtml
> +METADATA_GENERATOR_PARAMS_PDF := -f pdf
> +METADATA_GENERATOR_PARAMS_HTML_CHUNKED := -f chunked
> +else ifeq ($(METADATA_GENERATOR),)
> +$(error 'METADATA_GENERATOR' not configured, run ./configure in the root directory)
> +else
> +$(error '$(METADATA_GENERATOR)' not supported, only asciidoctor and asciidoc are supported)
> +endif
> +
> +ifdef VERBOSE
> +METADATA_GENERATOR_PARAMS += -v
> +endif
> +
> +CLEAN_TARGETS          := *.txt

I guess the generated  *.css *.js files should be deleted as well.

--
Regards,
Li Wang



* [LTP] [PATCH 03/11] docparse: Add test documentation parser
  2020-10-05 13:30 ` [LTP] [PATCH 03/11] docparse: Add test documentation parser Cyril Hrubis
@ 2020-10-23  7:01   ` Li Wang
  2020-10-23  9:36     ` Li Wang
  0 siblings, 1 reply; 21+ messages in thread
From: Li Wang @ 2020-10-23  7:01 UTC (permalink / raw)
  To: ltp

Cyril Hrubis <chrubis@suse.cz> wrote:

> +const char *next_token(FILE *f, struct data_node *doc)
> +{
> +       size_t i = 0;
> +       static char buf[4096];
> +       int c;
> +       int in_str = 0;
> +
> +       for (;;) {
> +               c = fgetc(f);
> +
> +               if (c == EOF)
> +                       goto exit;
> +
> +               if (in_str) {
> +                       if (c == '"') {
> +                               if (i == 0 || buf[i-1] != '\\')
> +                                       goto exit;
> +                       }

There is a problem in handling string tokens here:
an empty string literal "" cannot be parsed correctly in many test cases.

e.g.

# ./docparse ../testcases/kernel/syscalls/fsopen/fsopen01.c
# ./docparse ../testcases/kernel/fs/ftest/ftest02.c
....

We get no output from parsing the above two tests because they
contain "" in their source, which makes next_token() exit too early.

     TEST(move_mount(fsmfd, "", AT_FDCWD, MNTPOINT,
            MOVE_MOUNT_F_EMPTY_PATH));


> +
> +                       buf[i++] = c;
> +                       continue;
> +               }
> +
> +               switch (c) {
> +               case '{':
> +               case '}':
> +               case ';':
> +               case '(':
> +               case ')':
> +               case '=':
> +               case ',':
> +               case '[':
> +               case ']':
> +                       if (i) {
> +                               ungetc(c, f);
> +                               goto exit;
> +                       }
> +
> +                       buf[i++]=c;
> +                       goto exit;
> +               case '0' ... '9':
> +               case 'a' ... 'z':
> +               case 'A' ... 'Z':
> +               case '.':
> +               case '_':
> +               case '-':
> +                       buf[i++]=c;
> +               break;
> +               case '/':
> +                       maybe_comment(f, doc);
> +               break;
> +               case '"':
> +                       in_str = 1;
> +               break;
> +               case ' ':
> +               case '\n':
> +               case '\t':
> +                       if (i)
> +                               goto exit;
> +               break;
> +               }
> +       }
> +
> +exit:
> +       if (i == 0)
> +               return NULL;
> +
> +       buf[i] = 0;
> +       return buf;
> +}
> +
> +#define WARN(str) fprintf(stderr, str "\n")
> +
> +static int parse_array(FILE *f, struct data_node *node)
> +{
> +       const char *token;
> +
> +       for (;;) {
> +               if (!(token = next_token(f, NULL)))
> +                       return 1;
> +
> +               if (!strcmp(token, "{")) {
> +                       struct data_node *ret = data_node_array();
> +                       parse_array(f, ret);
> +
> +                       if (data_node_array_len(ret))
> +                               data_node_array_add(node, ret);
> +                       else
> +                               data_node_free(ret);
> +
> +                       continue;
> +               }
> +
> +               if (!strcmp(token, "}"))
> +                       return 0;
> +
> +               if (!strcmp(token, ","))
> +                       continue;
> +
> +               if (!strcmp(token, "NULL"))
> +                       continue;
> +
> +               struct data_node *str = data_node_string(token);
> +
> +               data_node_array_add(node, str);
> +       }
> +
> +       return 0;
> +}
> +
> +static const char *tokens[] = {
> +       "static",
> +       "struct",
> +       "tst_test",
> +       "test",
> +       "=",
> +       "{",
> +};
> +
> +static struct data_node *parse_file(const char *fname)
> +{
> +       int state = 0, found = 0;
> +       const char *token;
> +

It seems we'd better check that fname is valid before opening it:

       if (access(fname, F_OK)) {
               fprintf(stderr, "file %s does not exist\n", fname);
               return NULL;
       }

> +static const char *filter_out[] = {
> +       "test",
> +       "test_all",
> +       "setup",
> +       "cleanup",
> +       "tcnt",
> +       "mntpoint",
> +       "bufs",

I guess "options" should also be filtered out here?

--
Regards,
Li Wang

* [LTP] [PATCH 11/11] docparse: Generate html and pdf using asciidoc{, tor}
  2020-10-23  6:18   ` Li Wang
@ 2020-10-23  7:19     ` Petr Vorel
  0 siblings, 0 replies; 21+ messages in thread
From: Petr Vorel @ 2020-10-23  7:19 UTC (permalink / raw)
  To: ltp

Hi,

> Cyril Hrubis <chrubis@suse.cz> wrote:

> > +
> > +CLEAN_TARGETS          := *.txt

> I guess the generated *.css and *.js files should be deleted as well.
+1

Kind regards,
Petr

* [LTP] [PATCH 03/11] docparse: Add test documentation parser
  2020-10-23  7:01   ` Li Wang
@ 2020-10-23  9:36     ` Li Wang
  0 siblings, 0 replies; 21+ messages in thread
From: Li Wang @ 2020-10-23  9:36 UTC (permalink / raw)
  To: ltp

On Fri, Oct 23, 2020 at 3:01 PM Li Wang <liwang@redhat.com> wrote:

>
> Cyril Hrubis <chrubis@suse.cz> wrote:
>
> > +const char *next_token(FILE *f, struct data_node *doc)
> > +{
> > +       size_t i = 0;
> > +       static char buf[4096];
> > +       int c;
> > +       int in_str = 0;
> > +
> > +       for (;;) {
> > +               c = fgetc(f);
> > +
> > +               if (c == EOF)
> > +                       goto exit;
> > +
> > +               if (in_str) {
> > +                       if (c == '"') {
> > +                               if (i == 0 || buf[i-1] != '\\')
> > +                                       goto exit;
> > +                       }
>
> There is a problem in handling a special string token here:
> it cannot parse the empty string "" correctly in many test cases.
>
> e.g.
>
> # ./docparse ../testcases/kernel/syscalls/fsopen/fsopen01.c
> # ./docparse ../testcases/kernel/fs/ftest/ftest02.c
>

ftest02.c has not been converted to the new API, so please ignore it.

To fix this problem I propose to add a simple line as:

@@ -137,8 +138,10 @@ const char *next_token(FILE *f, struct data_node *doc)

                if (in_str) {
                        if (c == '"') {
-                               if (i == 0 || buf[i-1] != '\\')
+                               if (i == 0 || buf[i-1] != '\\') {
+                                       buf[i++] = c;
                                        goto exit;
+                               }
                        }



> ....
>
> We get no output when parsing the above two tests because they
> contain "" in their source, which makes next_token() exit too early.
>
>      TEST(move_mount(fsmfd, "", AT_FDCWD, MNTPOINT,
>             MOVE_MOUNT_F_EMPTY_PATH));
>


-- 
Regards,
Li Wang

Thread overview: 21+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-10-05 13:30 [LTP] [PATCH 00/11] Test metadata extraction Cyril Hrubis
2020-10-05 13:30 ` [LTP] [PATCH 01/11] make: Support compiling native build tools Cyril Hrubis
2020-10-05 13:30 ` [LTP] [PATCH 02/11] travis: Add git Cyril Hrubis
2020-10-05 13:30 ` [LTP] [PATCH 03/11] docparse: Add test documentation parser Cyril Hrubis
2020-10-23  7:01   ` Li Wang
2020-10-23  9:36     ` Li Wang
2020-10-05 13:30 ` [LTP] [PATCH 04/11] docparse: Add README Cyril Hrubis
2020-10-05 14:15   ` Jan Stancek
2020-10-13 11:59     ` Cyril Hrubis
2020-10-05 13:30 ` [LTP] [PATCH 05/11] syscalls: Add a few documentation comments Cyril Hrubis
2020-10-05 13:30 ` [LTP] [PATCH 06/11] syscalls: Move needs_drivers inside of the tst_test struct Cyril Hrubis
2020-10-05 13:30 ` [LTP] [PATCH 07/11] make: Allow {INSTALL, MAKE}_TARGETS be a directory Cyril Hrubis
2020-10-05 13:30 ` [LTP] [PATCH 08/11] make: Allow CLEAN_TARGETS to remove directories Cyril Hrubis
2020-10-05 13:30 ` [LTP] [PATCH 09/11] travis: Install docparse dependencies Cyril Hrubis
2020-10-21 10:38   ` Li Wang
2020-10-05 13:30 ` [LTP] [PATCH 10/11] docparse: Add configure options Cyril Hrubis
2020-10-05 13:30 ` [LTP] [PATCH 11/11] docparse: Generate html and pdf using asciidoc{, tor} Cyril Hrubis
2020-10-23  6:18   ` Li Wang
2020-10-23  7:19     ` Petr Vorel
2020-10-05 13:35 ` [LTP] [PATCH 00/11] Test metadata extraction Cyril Hrubis
2020-10-12  8:53 ` Petr Vorel
