* [PULL 00/22] Build system + misc changes for 2020-10-16
@ 2020-10-16 11:47 Paolo Bonzini
  2020-10-16 11:47 ` [PULL 01/22] submodules: bump meson to 0.55.3 Paolo Bonzini
                   ` (21 more replies)
  0 siblings, 22 replies; 25+ messages in thread
From: Paolo Bonzini @ 2020-10-16 11:47 UTC (permalink / raw)
  To: qemu-devel

The following changes since commit 57c98ea9acdcef5021f5671efa6475a5794a51c4:

  Merge remote-tracking branch 'remotes/kraxel/tags/ui-20201014-pull-request' into staging (2020-10-14 13:56:06 +0100)

are available in the Git repository at:

  https://gitlab.com/bonzini/qemu.git tags/for-upstream

for you to fetch changes up to 2a2f0924537993510e8d24d60ec2a43e7b4a72a9:

  ci: include configure and meson logs in all jobs if configure fails (2020-10-16 07:44:38 -0400)

----------------------------------------------------------------
* Drop ninjatool and just require ninja (Paolo)
* Fix docs build under msys2 (Yonggang)
* HAX snafu fix (Claudio)
* Disable signal handlers during fuzzing (Alex)
* Miscellaneous fixes (Bruce, Greg)

----------------------------------------------------------------
Alexander Bulekov (1):
      fuzz: Disable QEMU's SIG{INT,HUP,TERM} handlers

Bruce Rogers (3):
      meson.build: don't condition iconv detection on library detection
      configure: fix handling of --docdir parameter
      meson: Only install icons and qemu.desktop if have_system

Claudio Fontana (1):
      hax: unbreak accelerator cpu code after cpus.c split

Greg Kurz (1):
      Makefile: Ensure cscope.out/tags/TAGS are generated in the source tree

Paolo Bonzini (13):
      submodules: bump meson to 0.55.3
      tests/Makefile.include: unbreak non-tcg builds
      make: run shell with pipefail
      tests: add missing generated sources to testqapi
      configure: move QEMU_INCLUDES to meson
      dockerfiles: enable Centos 8 PowerTools
      add ninja to dockerfiles, CI configurations and test VMs
      build: cleanups to Makefile
      build: replace ninjatool with ninja
      build: add --enable/--disable-libudev
      meson: cleanup curses/iconv test
      meson: move SPHINX_ARGS references within "if build_docs"
      ci: include configure and meson logs in all jobs if configure fails

Yonggang Luo (3):
      docs: Fix Sphinx configuration for msys2/mingw
      meson: Move the detection logic for sphinx to meson
      cirrus: Enable doc build on msys2/mingw

 .cirrus.yml                                |   21 +-
 .gitlab-ci.yml                             |    6 +-
 .travis.yml                                |   21 +-
 Makefile                                   |  134 ++--
 configure                                  |   99 +--
 docs/conf.py                               |    2 +-
 docs/devel/build-system.rst                |    6 +-
 docs/meson.build                           |   46 ++
 docs/sphinx/kerneldoc.py                   |    2 +-
 meson                                      |    2 +-
 meson.build                                |  185 ++---
 meson_options.txt                          |    6 +
 scripts/mtest2make.py                      |    3 +-
 scripts/ninjatool.py                       | 1008 ----------------------------
 target/i386/hax-cpus.c                     |    1 +
 tests/Makefile.include                     |    2 +-
 tests/docker/dockerfiles/centos7.docker    |    1 +
 tests/docker/dockerfiles/centos8.docker    |    5 +-
 tests/docker/dockerfiles/debian10.docker   |    1 +
 tests/docker/dockerfiles/fedora.docker     |    1 +
 tests/docker/dockerfiles/travis.docker     |    2 +-
 tests/docker/dockerfiles/ubuntu.docker     |    1 +
 tests/docker/dockerfiles/ubuntu1804.docker |    1 +
 tests/docker/dockerfiles/ubuntu2004.docker |    1 +
 tests/include/meson.build                  |    8 +-
 tests/meson.build                          |   14 +-
 tests/qapi-schema/meson.build              |   88 +--
 tests/qtest/fuzz/fuzz.c                    |    8 +
 tests/vm/centos                            |    2 +-
 tests/vm/centos.aarch64                    |    2 +-
 tests/vm/fedora                            |    2 +-
 tests/vm/freebsd                           |    1 +
 tests/vm/netbsd                            |    1 +
 tests/vm/openbsd                           |    1 +
 tests/vm/ubuntu.aarch64                    |    2 +-
 tests/vm/ubuntu.i386                       |    2 +-
 ui/meson.build                             |    7 +-
 37 files changed, 387 insertions(+), 1308 deletions(-)
 delete mode 100755 scripts/ninjatool.py
-- 
2.26.2




* [PULL 01/22] submodules: bump meson to 0.55.3
  2020-10-16 11:47 [PULL 00/22] Build system + misc changes for 2020-10-16 Paolo Bonzini
@ 2020-10-16 11:47 ` Paolo Bonzini
  2020-10-16 11:47 ` [PULL 02/22] Makefile: Ensure cscope.out/tags/TAGS are generated in the source tree Paolo Bonzini
                   ` (20 subsequent siblings)
  21 siblings, 0 replies; 25+ messages in thread
From: Paolo Bonzini @ 2020-10-16 11:47 UTC (permalink / raw)
  To: qemu-devel

This brings in some bug fixes and allows MSYS2 to configure
without "--ninja=ninja".
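
For illustration (the invocation mirrors the .cirrus.yml change below),
an MSYS2 build no longer has to point configure at ninja explicitly:

    $ cd build && ../configure --python=python3 --ninja=ninja   # before
    $ cd build && ../configure --python=python3                 # after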

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 .cirrus.yml | 3 +--
 meson       | 2 +-
 2 files changed, 2 insertions(+), 3 deletions(-)

diff --git a/.cirrus.yml b/.cirrus.yml
index 99d118239c..0f46cb5eaf 100644
--- a/.cirrus.yml
+++ b/.cirrus.yml
@@ -123,8 +123,7 @@ windows_msys2_task:
 
   script:
     - C:\tools\msys64\usr\bin\bash.exe -lc "mkdir build"
-    - C:\tools\msys64\usr\bin\bash.exe -lc "cd build && ../configure
-      --python=python3 --ninja=ninja"
+    - C:\tools\msys64\usr\bin\bash.exe -lc "cd build && ../configure --python=python3"
     - C:\tools\msys64\usr\bin\bash.exe -lc "cd build && make -j8"
   test_script:
     - C:\tools\msys64\usr\bin\bash.exe -lc "cd build && make V=1 check"
diff --git a/meson b/meson
index 68ed748f84..776acd2a80 160000
--- a/meson
+++ b/meson
@@ -1 +1 @@
-Subproject commit 68ed748f84f14c2d4e62dcbd123816e5898eb04c
+Subproject commit 776acd2a805c9b42b4f0375150977df42130317f
-- 
2.26.2





* [PULL 02/22] Makefile: Ensure cscope.out/tags/TAGS are generated in the source tree
  2020-10-16 11:47 [PULL 00/22] Build system + misc changes for 2020-10-16 Paolo Bonzini
  2020-10-16 11:47 ` [PULL 01/22] submodules: bump meson to 0.55.3 Paolo Bonzini
@ 2020-10-16 11:47 ` Paolo Bonzini
  2020-10-16 11:47 ` [PULL 03/22] tests/Makefile.include: unbreak non-tcg builds Paolo Bonzini
                   ` (19 subsequent siblings)
  21 siblings, 0 replies; 25+ messages in thread
From: Paolo Bonzini @ 2020-10-16 11:47 UTC (permalink / raw)
  To: qemu-devel; +Cc: Greg Kurz

From: Greg Kurz <groug@kaod.org>

Tools such as emacs usually expect the index files to be in the source
tree. This is already the case when doing out-of-tree builds, but with
in-tree builds they end up in the build directory.

Force cscope, ctags and etags to put them in the source tree.
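
As a usage sketch (paths are illustrative), running the targets from a
separate build directory now leaves the index files next to the sources:

    $ cd build && make ctags     # writes $(SRC_PATH)/tags
    $ cd build && make cscope    # writes $(SRC_PATH)/cscope.out and cscope.files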

Signed-off-by: Greg Kurz <groug@kaod.org>
Message-Id: <160277334665.1754102.10921580280105870386.stgit@bahia.lan>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 Makefile | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/Makefile b/Makefile
index c37e513431..d20c7a3f80 100644
--- a/Makefile
+++ b/Makefile
@@ -194,19 +194,19 @@ find-src-path = find "$(SRC_PATH)/" -path "$(SRC_PATH)/meson" -prune -o -name "*
 
 .PHONY: ctags
 ctags:
-	rm -f tags
-	$(find-src-path) -exec ctags --append {} +
+	rm -f "$(SRC_PATH)/"tags
+	$(find-src-path) -exec ctags -f "$(SRC_PATH)/"tags --append {} +
 
 .PHONY: TAGS
 TAGS:
-	rm -f TAGS
-	$(find-src-path) -exec etags --append {} +
+	rm -f "$(SRC_PATH)/"TAGS
+	$(find-src-path) -exec etags -f "$(SRC_PATH)/"TAGS --append {} +
 
 .PHONY: cscope
 cscope:
 	rm -f "$(SRC_PATH)"/cscope.*
 	$(find-src-path) -print | sed -e 's,^\./,,' > "$(SRC_PATH)/cscope.files"
-	cscope -b -i"$(SRC_PATH)/cscope.files"
+	cscope -b -i"$(SRC_PATH)/cscope.files" -f"$(SRC_PATH)"/cscope.out
 
 # Needed by "meson install"
 export DESTDIR
-- 
2.26.2





* [PULL 03/22] tests/Makefile.include: unbreak non-tcg builds
  2020-10-16 11:47 [PULL 00/22] Build system + misc changes for 2020-10-16 Paolo Bonzini
  2020-10-16 11:47 ` [PULL 01/22] submodules: bump meson to 0.55.3 Paolo Bonzini
  2020-10-16 11:47 ` [PULL 02/22] Makefile: Ensure cscope.out/tags/TAGS are generated in the source tree Paolo Bonzini
@ 2020-10-16 11:47 ` Paolo Bonzini
  2020-10-16 11:47 ` [PULL 04/22] make: run shell with pipefail Paolo Bonzini
                   ` (18 subsequent siblings)
  21 siblings, 0 replies; 25+ messages in thread
From: Paolo Bonzini @ 2020-10-16 11:47 UTC (permalink / raw)
  To: qemu-devel; +Cc: Daniel P . Berrangé

Remove from check-block the requirement that all TARGET_DIRS are built;
instead, depend only on the qemu-system-* binaries that the build is
configured to link.

Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 tests/Makefile.include | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tests/Makefile.include b/tests/Makefile.include
index 5aca98e60c..4037490b69 100644
--- a/tests/Makefile.include
+++ b/tests/Makefile.include
@@ -140,7 +140,7 @@ QEMU_IOTESTS_HELPERS-$(CONFIG_LINUX) = tests/qemu-iotests/socket_scm_helper$(EXE
 check: check-block
 check-block: $(SRC_PATH)/tests/check-block.sh qemu-img$(EXESUF) \
 		qemu-io$(EXESUF) qemu-nbd$(EXESUF) $(QEMU_IOTESTS_HELPERS-y) \
-		$(patsubst %-softmmu,qemu-system-%,$(filter %-softmmu,$(TARGET_DIRS)))
+		$(filter qemu-system-%, $(ninja-targets-c_LINKER) $(ninja-targets-cpp_LINKER))
 	@$<
 endif
 
-- 
2.26.2





* [PULL 04/22] make: run shell with pipefail
  2020-10-16 11:47 [PULL 00/22] Build system + misc changes for 2020-10-16 Paolo Bonzini
                   ` (2 preceding siblings ...)
  2020-10-16 11:47 ` [PULL 03/22] tests/Makefile.include: unbreak non-tcg builds Paolo Bonzini
@ 2020-10-16 11:47 ` Paolo Bonzini
  2020-10-16 11:47 ` [PULL 05/22] tests: add missing generated sources to testqapi Paolo Bonzini
                   ` (17 subsequent siblings)
  21 siblings, 0 replies; 25+ messages in thread
From: Paolo Bonzini @ 2020-10-16 11:47 UTC (permalink / raw)
  To: qemu-devel; +Cc: Daniel P . Berrangé

Without pipefail, it is possible to miss failures if the recipes
include pipes.
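
For example (plain bash behavior), only the exit status of the last
command in a pipeline is checked unless pipefail is set, so a failure
on the left-hand side goes unnoticed:

    $ bash -c 'false | cat; echo $?'
    0
    $ bash -o pipefail -c 'false | cat; echo $?'
    1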

Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 Makefile | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/Makefile b/Makefile
index d20c7a3f80..91c62a26c8 100644
--- a/Makefile
+++ b/Makefile
@@ -14,6 +14,8 @@ SRC_PATH=.
 # we have explicit rules for everything
 MAKEFLAGS += -rR
 
+SHELL = /usr/bin/env bash -o pipefail
+
 # Usage: $(call quiet-command,command and args,"NAME","args to print")
 # This will run "command and args", and either:
 #  if V=1 just print the whole command and args
-- 
2.26.2





* [PULL 05/22] tests: add missing generated sources to testqapi
  2020-10-16 11:47 [PULL 00/22] Build system + misc changes for 2020-10-16 Paolo Bonzini
                   ` (3 preceding siblings ...)
  2020-10-16 11:47 ` [PULL 04/22] make: run shell with pipefail Paolo Bonzini
@ 2020-10-16 11:47 ` Paolo Bonzini
  2020-10-16 11:47 ` [PULL 06/22] configure: move QEMU_INCLUDES to meson Paolo Bonzini
                   ` (16 subsequent siblings)
  21 siblings, 0 replies; 25+ messages in thread
From: Paolo Bonzini @ 2020-10-16 11:47 UTC (permalink / raw)
  To: qemu-devel; +Cc: Daniel P . Berrangé

Ninja notices the missing generated sources because it visits the build
graph in a different order.

Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 tests/include/meson.build |  8 ++++----
 tests/meson.build         | 14 ++++++++++++--
 2 files changed, 16 insertions(+), 6 deletions(-)

diff --git a/tests/include/meson.build b/tests/include/meson.build
index fea3a6342f..9abba308fa 100644
--- a/tests/include/meson.build
+++ b/tests/include/meson.build
@@ -10,7 +10,7 @@ test_qapi_outputs_extra = [
   'test-qapi-visit-sub-module.h',
 ]
 
-test_qapi_outputs_extra = custom_target('QAPI test (include)',
-                                        output: test_qapi_outputs_extra,
-                                        input: test_qapi_files,
-                                        command: 'true')
+test_qapi_files_extra = custom_target('QAPI test (include)',
+                                      output: test_qapi_outputs_extra,
+                                      input: test_qapi_files,
+                                      command: 'true')
diff --git a/tests/meson.build b/tests/meson.build
index bf47a38c74..afeb6be689 100644
--- a/tests/meson.build
+++ b/tests/meson.build
@@ -56,8 +56,18 @@ test_qapi_files = custom_target('Test QAPI files',
 # perhaps change qapi_gen to replace / with _, like Meson itself does?
 subdir('include')
 
-libtestqapi = static_library('testqapi', sources: [test_qapi_files, genh, test_qapi_outputs_extra])
-testqapi = declare_dependency(link_with: libtestqapi)
+test_qapi_sources = []
+test_qapi_headers = []
+i = 0
+foreach o: test_qapi_files.to_list() + test_qapi_files_extra.to_list()
+  if o.full_path().endswith('.h')
+    test_qapi_headers += o
+  endif
+  test_qapi_sources += o
+endforeach
+
+libtestqapi = static_library('testqapi', sources: [genh, test_qapi_sources])
+testqapi = declare_dependency(link_with: libtestqapi, sources: [genh, test_qapi_headers])
 
 testblock = declare_dependency(dependencies: [block], sources: 'iothread.c')
 
-- 
2.26.2





* [PULL 06/22] configure: move QEMU_INCLUDES to meson
  2020-10-16 11:47 [PULL 00/22] Build system + misc changes for 2020-10-16 Paolo Bonzini
                   ` (4 preceding siblings ...)
  2020-10-16 11:47 ` [PULL 05/22] tests: add missing generated sources to testqapi Paolo Bonzini
@ 2020-10-16 11:47 ` Paolo Bonzini
  2020-10-16 11:47 ` [PULL 07/22] dockerfiles: enable Centos 8 PowerTools Paolo Bonzini
                   ` (15 subsequent siblings)
  21 siblings, 0 replies; 25+ messages in thread
From: Paolo Bonzini @ 2020-10-16 11:47 UTC (permalink / raw)
  To: qemu-devel; +Cc: Daniel P . Berrangé

Confusingly, QEMU_INCLUDES is not used by configure tests.  Moving
it to meson.build ensures that Windows paths are specified instead of
the msys paths like /c/Users/...
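
For instance, the same directory can appear in either form under msys2
(illustrative paths); generating the flags from meson.build makes sure
the second, native form is used:

    /c/Users/me/qemu/include     # msys-style path
    C:/Users/me/qemu/include     # Windows-style path, as meson emits it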

Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 configure   | 20 --------------------
 meson.build | 30 ++++++++++++++++++++++++++++--
 2 files changed, 28 insertions(+), 22 deletions(-)

diff --git a/configure b/configure
index f839c2a557..8aa03876d4 100755
--- a/configure
+++ b/configure
@@ -537,8 +537,6 @@ QEMU_CFLAGS="-fno-strict-aliasing -fno-common -fwrapv $QEMU_CFLAGS"
 QEMU_CFLAGS="-Wundef -Wwrite-strings -Wmissing-prototypes $QEMU_CFLAGS"
 QEMU_CFLAGS="-Wstrict-prototypes -Wredundant-decls $QEMU_CFLAGS"
 QEMU_CFLAGS="-D_GNU_SOURCE -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE $QEMU_CFLAGS"
-QEMU_INCLUDES="-iquote . -iquote ${source_path} -iquote ${source_path}/accel/tcg -iquote ${source_path}/include"
-QEMU_INCLUDES="$QEMU_INCLUDES -iquote ${source_path}/disas/libvixl"
 
 # Flags that are needed during configure but later taken care of by Meson
 CONFIGURE_CFLAGS="-std=gnu99 -Wall"
@@ -796,7 +794,6 @@ Linux)
   audio_possible_drivers="oss alsa sdl pa"
   linux="yes"
   linux_user="yes"
-  QEMU_INCLUDES="-isystem ${source_path}/linux-headers -Ilinux-headers $QEMU_INCLUDES"
 ;;
 esac
 
@@ -6776,22 +6773,6 @@ if test "$secret_keyring" = "yes" ; then
   echo "CONFIG_SECRET_KEYRING=y" >> $config_host_mak
 fi
 
-if test "$tcg_interpreter" = "yes"; then
-  QEMU_INCLUDES="-iquote ${source_path}/tcg/tci $QEMU_INCLUDES"
-elif test "$ARCH" = "sparc64" ; then
-  QEMU_INCLUDES="-iquote ${source_path}/tcg/sparc $QEMU_INCLUDES"
-elif test "$ARCH" = "s390x" ; then
-  QEMU_INCLUDES="-iquote ${source_path}/tcg/s390 $QEMU_INCLUDES"
-elif test "$ARCH" = "x86_64" || test "$ARCH" = "x32" ; then
-  QEMU_INCLUDES="-iquote ${source_path}/tcg/i386 $QEMU_INCLUDES"
-elif test "$ARCH" = "ppc64" ; then
-  QEMU_INCLUDES="-iquote ${source_path}/tcg/ppc $QEMU_INCLUDES"
-elif test "$ARCH" = "riscv32" || test "$ARCH" = "riscv64" ; then
-  QEMU_INCLUDES="-I${source_path}/tcg/riscv $QEMU_INCLUDES"
-else
-  QEMU_INCLUDES="-iquote ${source_path}/tcg/${ARCH} $QEMU_INCLUDES"
-fi
-
 echo "ROMS=$roms" >> $config_host_mak
 echo "MAKE=$make" >> $config_host_mak
 echo "PYTHON=$python" >> $config_host_mak
@@ -6818,7 +6799,6 @@ echo "WINDRES=$windres" >> $config_host_mak
 echo "CFLAGS_NOPIE=$CFLAGS_NOPIE" >> $config_host_mak
 echo "QEMU_CFLAGS=$QEMU_CFLAGS" >> $config_host_mak
 echo "QEMU_CXXFLAGS=$QEMU_CXXFLAGS" >> $config_host_mak
-echo "QEMU_INCLUDES=$QEMU_INCLUDES" >> $config_host_mak
 echo "GLIB_CFLAGS=$glib_cflags" >> $config_host_mak
 echo "GLIB_LIBS=$glib_libs" >> $config_host_mak
 echo "QEMU_LDFLAGS=$QEMU_LDFLAGS" >> $config_host_mak
diff --git a/meson.build b/meson.build
index 1a4a482492..88f757eac9 100644
--- a/meson.build
+++ b/meson.build
@@ -93,9 +93,35 @@ add_project_arguments(config_host['QEMU_CXXFLAGS'].split(),
                       native: false, language: 'cpp')
 add_project_link_arguments(config_host['QEMU_LDFLAGS'].split(),
                            native: false, language: ['c', 'cpp', 'objc'])
-add_project_arguments(config_host['QEMU_INCLUDES'].split(),
-                      language: ['c', 'cpp', 'objc'])
 
+if targetos == 'linux'
+  add_project_arguments('-isystem', meson.current_source_dir() / 'linux-headers',
+                        '-isystem', 'linux-headers',
+                        language: ['c', 'cpp'])
+endif
+
+if 'CONFIG_TCG_INTERPRETER' in config_host
+  tcg_arch = 'tci'
+elif config_host['ARCH'] == 'sparc64'
+  tcg_arch = 'sparc'
+elif config_host['ARCH'] == 's390x'
+  tcg_arch = 's390'
+elif config_host['ARCH'] in ['x86_64', 'x32']
+  tcg_arch = 'i386'
+elif config_host['ARCH'] == 'ppc64'
+  tcg_arch = 'ppc'
+elif config_host['ARCH'] in ['riscv32', 'riscv64']
+  tcg_arch = 'riscv'
+else
+  tcg_arch = config_host['ARCH']
+endif
+add_project_arguments('-iquote', meson.current_source_dir() / 'tcg' / tcg_arch,
+                      '-iquote', '.',
+                      '-iquote', meson.current_source_dir(),
+                      '-iquote', meson.current_source_dir() / 'accel/tcg',
+                      '-iquote', meson.current_source_dir() / 'include',
+                      '-iquote', meson.current_source_dir() / 'disas/libvixl',
+                      language: ['c', 'cpp', 'objc'])
 
 link_language = meson.get_external_property('link_language', 'cpp')
 if link_language == 'cpp'
-- 
2.26.2





* [PULL 07/22] dockerfiles: enable Centos 8 PowerTools
  2020-10-16 11:47 [PULL 00/22] Build system + misc changes for 2020-10-16 Paolo Bonzini
                   ` (5 preceding siblings ...)
  2020-10-16 11:47 ` [PULL 06/22] configure: move QEMU_INCLUDES to meson Paolo Bonzini
@ 2020-10-16 11:47 ` Paolo Bonzini
  2020-10-16 11:48 ` [PULL 08/22] add ninja to dockerfiles, CI configurations and test VMs Paolo Bonzini
                   ` (14 subsequent siblings)
  21 siblings, 0 replies; 25+ messages in thread
From: Paolo Bonzini @ 2020-10-16 11:47 UTC (permalink / raw)
  To: qemu-devel

ninja is included in the CentOS PowerTools repository.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 tests/docker/dockerfiles/centos8.docker | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/tests/docker/dockerfiles/centos8.docker b/tests/docker/dockerfiles/centos8.docker
index 0fc2697491..e29e9657fb 100644
--- a/tests/docker/dockerfiles/centos8.docker
+++ b/tests/docker/dockerfiles/centos8.docker
@@ -28,5 +28,7 @@ ENV PACKAGES \
     tar \
     zlib-devel
 
-RUN dnf install -y $PACKAGES
+RUN dnf install -y dnf-plugins-core && \
+  dnf config-manager --set-enabled PowerTools && \
+  dnf install -y $PACKAGES
 RUN rpm -q $PACKAGES | sort > /packages.txt
-- 
2.26.2





* [PULL 08/22] add ninja to dockerfiles, CI configurations and test VMs
  2020-10-16 11:47 [PULL 00/22] Build system + misc changes for 2020-10-16 Paolo Bonzini
                   ` (6 preceding siblings ...)
  2020-10-16 11:47 ` [PULL 07/22] dockerfiles: enable Centos 8 PowerTools Paolo Bonzini
@ 2020-10-16 11:48 ` Paolo Bonzini
  2020-10-16 11:48 ` [PULL 09/22] build: cleanups to Makefile Paolo Bonzini
                   ` (13 subsequent siblings)
  21 siblings, 0 replies; 25+ messages in thread
From: Paolo Bonzini @ 2020-10-16 11:48 UTC (permalink / raw)
  To: qemu-devel; +Cc: Alex Bennée, Daniel P . Berrangé

Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Acked-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 .cirrus.yml                                |  6 +++---
 .travis.yml                                | 13 +++++++++++++
 tests/docker/dockerfiles/centos7.docker    |  1 +
 tests/docker/dockerfiles/centos8.docker    |  1 +
 tests/docker/dockerfiles/debian10.docker   |  1 +
 tests/docker/dockerfiles/fedora.docker     |  1 +
 tests/docker/dockerfiles/travis.docker     |  2 +-
 tests/docker/dockerfiles/ubuntu.docker     |  1 +
 tests/docker/dockerfiles/ubuntu1804.docker |  1 +
 tests/docker/dockerfiles/ubuntu2004.docker |  1 +
 tests/vm/centos                            |  2 +-
 tests/vm/centos.aarch64                    |  2 +-
 tests/vm/fedora                            |  2 +-
 tests/vm/freebsd                           |  1 +
 tests/vm/netbsd                            |  1 +
 tests/vm/openbsd                           |  1 +
 tests/vm/ubuntu.aarch64                    |  2 +-
 tests/vm/ubuntu.i386                       |  2 +-
 18 files changed, 32 insertions(+), 9 deletions(-)

diff --git a/.cirrus.yml b/.cirrus.yml
index 0f46cb5eaf..396888fbd3 100644
--- a/.cirrus.yml
+++ b/.cirrus.yml
@@ -9,7 +9,7 @@ freebsd_12_task:
   install_script:
     - ASSUME_ALWAYS_YES=yes pkg bootstrap -f ;
     - pkg install -y bash curl cyrus-sasl git glib gmake gnutls gsed
-          nettle perl5 pixman pkgconf png usbredir
+          nettle perl5 pixman pkgconf png usbredir ninja
   script:
     - mkdir build
     - cd build
@@ -21,7 +21,7 @@ macos_task:
   osx_instance:
     image: catalina-base
   install_script:
-    - brew install pkg-config python gnu-sed glib pixman make sdl2 bash
+    - brew install pkg-config python gnu-sed glib pixman make sdl2 bash ninja
   script:
     - mkdir build
     - cd build
@@ -36,7 +36,7 @@ macos_xcode_task:
     # this is an alias for the latest Xcode
     image: catalina-xcode
   install_script:
-    - brew install pkg-config gnu-sed glib pixman make sdl2 bash
+    - brew install pkg-config gnu-sed glib pixman make sdl2 bash ninja
   script:
     - mkdir build
     - cd build
diff --git a/.travis.yml b/.travis.yml
index 1054ec5d29..d7bfbb8bfe 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -49,6 +49,7 @@ addons:
       - libvdeplug-dev
       - libvte-2.91-dev
       - libzstd-dev
+      - ninja-build
       - sparse
       - uuid-dev
       - gcovr
@@ -177,6 +178,7 @@ jobs:
       addons:
         apt:
           packages:
+            - ninja-build
             - python3-sphinx
             - perl
 
@@ -211,6 +213,10 @@ jobs:
     # gprof/gcov are GCC features
     - name: "GCC gprof/gcov"
       dist: bionic
+      addons:
+        apt:
+          packages:
+            - ninja-build
       env:
         - CONFIG="--enable-gprof --enable-gcov --disable-libssh
                   --target-list=${MAIN_SOFTMMU_TARGETS}"
@@ -281,6 +287,7 @@ jobs:
             - liburcu-dev
             - libusb-1.0-0-dev
             - libvte-2.91-dev
+            - ninja-build
             - sparse
             - uuid-dev
       language: generic
@@ -346,6 +353,7 @@ jobs:
           - libusb-1.0-0-dev
           - libvdeplug-dev
           - libvte-2.91-dev
+          - ninja-build
           # Tests dependencies
           - genisoimage
       env:
@@ -379,6 +387,7 @@ jobs:
           - libusb-1.0-0-dev
           - libvdeplug-dev
           - libvte-2.91-dev
+          - ninja-build
           # Tests dependencies
           - genisoimage
       env:
@@ -411,6 +420,7 @@ jobs:
           - libusb-1.0-0-dev
           - libvdeplug-dev
           - libvte-2.91-dev
+          - ninja-build
           # Tests dependencies
           - genisoimage
       env:
@@ -450,6 +460,7 @@ jobs:
           - libzstd-dev
           - nettle-dev
           - xfslibs-dev
+          - ninja-build
           # Tests dependencies
           - genisoimage
       env:
@@ -463,6 +474,7 @@ jobs:
         apt_packages:
           - libgcrypt20-dev
           - libgnutls28-dev
+          - ninja-build
       env:
         - CONFIG="--disable-containers --disable-system"
 
@@ -493,6 +505,7 @@ jobs:
           - libusb-1.0-0-dev
           - libvdeplug-dev
           - libvte-2.91-dev
+          - ninja-build
       env:
         - TEST_CMD="make check-unit"
         - CONFIG="--disable-containers --disable-tcg --enable-kvm
diff --git a/tests/docker/dockerfiles/centos7.docker b/tests/docker/dockerfiles/centos7.docker
index 46277773bf..8b273725ee 100644
--- a/tests/docker/dockerfiles/centos7.docker
+++ b/tests/docker/dockerfiles/centos7.docker
@@ -27,6 +27,7 @@ ENV PACKAGES \
     mesa-libEGL-devel \
     mesa-libgbm-devel \
     nettle-devel \
+    ninja-build \
     perl-Test-Harness \
     pixman-devel \
     python3 \
diff --git a/tests/docker/dockerfiles/centos8.docker b/tests/docker/dockerfiles/centos8.docker
index e29e9657fb..585dfad9be 100644
--- a/tests/docker/dockerfiles/centos8.docker
+++ b/tests/docker/dockerfiles/centos8.docker
@@ -19,6 +19,7 @@ ENV PACKAGES \
     make \
     mesa-libEGL-devel \
     nettle-devel \
+    ninja-build \
     perl-Test-Harness \
     pixman-devel \
     python36 \
diff --git a/tests/docker/dockerfiles/debian10.docker b/tests/docker/dockerfiles/debian10.docker
index 1e4188ba22..21cc671d71 100644
--- a/tests/docker/dockerfiles/debian10.docker
+++ b/tests/docker/dockerfiles/debian10.docker
@@ -26,6 +26,7 @@ RUN apt update && \
         gettext \
         git \
         libncurses5-dev \
+        ninja-build \
         pkg-config \
         psmisc \
         python3 \
diff --git a/tests/docker/dockerfiles/fedora.docker b/tests/docker/dockerfiles/fedora.docker
index 85c975543d..ac79d95418 100644
--- a/tests/docker/dockerfiles/fedora.docker
+++ b/tests/docker/dockerfiles/fedora.docker
@@ -75,6 +75,7 @@ ENV PACKAGES \
     mingw64-SDL2 \
     ncurses-devel \
     nettle-devel \
+    ninja-build \
     nss-devel \
     numactl-devel \
     perl \
diff --git a/tests/docker/dockerfiles/travis.docker b/tests/docker/dockerfiles/travis.docker
index 591282561b..cd1435a7e9 100644
--- a/tests/docker/dockerfiles/travis.docker
+++ b/tests/docker/dockerfiles/travis.docker
@@ -9,7 +9,7 @@ ENV LC_ALL en_US.UTF-8
 RUN sed -i "s/# deb-src/deb-src/" /etc/apt/sources.list
 RUN apt-get update
 RUN apt-get -y build-dep qemu
-RUN apt-get -y install device-tree-compiler python3 python3-yaml dh-autoreconf gdb strace lsof net-tools gcovr
+RUN apt-get -y install device-tree-compiler python3 python3-yaml dh-autoreconf gdb strace lsof net-tools gcovr ninja-build
 # Travis tools require PhantomJS / Neo4j / Maven accessible
 # in their PATH (QEMU build won't access them).
 ENV PATH /usr/local/phantomjs/bin:/usr/local/phantomjs:/usr/local/neo4j-3.2.7/bin:/usr/local/maven-3.5.2/bin:/usr/local/cmake-3.9.2/bin:/usr/local/clang-5.0.0/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
diff --git a/tests/docker/dockerfiles/ubuntu.docker b/tests/docker/dockerfiles/ubuntu.docker
index b556ed17d2..b5ef7a8198 100644
--- a/tests/docker/dockerfiles/ubuntu.docker
+++ b/tests/docker/dockerfiles/ubuntu.docker
@@ -60,6 +60,7 @@ ENV PACKAGES \
     libxen-dev \
     libzstd-dev \
     make \
+    ninja-build \
     python3-yaml \
     python3-sphinx \
     sparse \
diff --git a/tests/docker/dockerfiles/ubuntu1804.docker b/tests/docker/dockerfiles/ubuntu1804.docker
index a6a7617da6..9b0a19ba5e 100644
--- a/tests/docker/dockerfiles/ubuntu1804.docker
+++ b/tests/docker/dockerfiles/ubuntu1804.docker
@@ -48,6 +48,7 @@ ENV PACKAGES \
     make \
     python3-yaml \
     python3-sphinx \
+    ninja-build \
     sparse \
     xfslibs-dev
 RUN apt-get update && \
diff --git a/tests/docker/dockerfiles/ubuntu2004.docker b/tests/docker/dockerfiles/ubuntu2004.docker
index f4b9556b9e..17b37cda38 100644
--- a/tests/docker/dockerfiles/ubuntu2004.docker
+++ b/tests/docker/dockerfiles/ubuntu2004.docker
@@ -47,6 +47,7 @@ ENV PACKAGES flex bison \
     libxen-dev \
     libzstd-dev \
     make \
+    ninja-build \
     python3-numpy \
     python3-opencv \
     python3-pil \
diff --git a/tests/vm/centos b/tests/vm/centos
index 0ad4ecf419..efe3dbbb36 100755
--- a/tests/vm/centos
+++ b/tests/vm/centos
@@ -42,7 +42,7 @@ class CentosVM(basevm.BaseVM):
         self.wait_ssh()
         self.ssh_root_check("touch /etc/cloud/cloud-init.disabled")
         self.ssh_root_check("yum update -y")
-        self.ssh_root_check("yum install -y docker make git python3")
+        self.ssh_root_check("yum install -y docker make ninja-build git python3")
         self.ssh_root_check("systemctl enable docker")
         self.ssh_root("poweroff")
         self.wait()
diff --git a/tests/vm/centos.aarch64 b/tests/vm/centos.aarch64
index d5232ecdb8..e687b93e52 100755
--- a/tests/vm/centos.aarch64
+++ b/tests/vm/centos.aarch64
@@ -23,7 +23,7 @@ import aarch64vm
 DEFAULT_CONFIG = {
     'cpu'          : "max",
     'machine'      : "virt,gic-version=max",
-    'install_cmds' : "yum install -y make git python3 gcc gcc-c++ flex bison, "\
+    'install_cmds' : "yum install -y make ninja-build git python3 gcc gcc-c++ flex bison, "\
         "yum install -y glib2-devel pixman-devel zlib-devel, "\
         "yum install -y perl-Test-Harness, "\
         "alternatives --set python /usr/bin/python3, "\
diff --git a/tests/vm/fedora b/tests/vm/fedora
index b2b478fdbc..b977efe4a2 100755
--- a/tests/vm/fedora
+++ b/tests/vm/fedora
@@ -32,7 +32,7 @@ class FedoraVM(basevm.BaseVM):
     pkgs = [
         # tools
         'git-core',
-        'gcc', 'binutils', 'make',
+        'gcc', 'binutils', 'make', 'ninja-build',
 
         # perl
         'perl-Test-Harness',
diff --git a/tests/vm/freebsd b/tests/vm/freebsd
index 5f866e09c4..04ee793381 100755
--- a/tests/vm/freebsd
+++ b/tests/vm/freebsd
@@ -34,6 +34,7 @@ class FreeBSDVM(basevm.BaseVM):
         "bzip2",
         "python37",
         "py37-setuptools",
+        "ninja",
 
         # gnu tools
         "bash",
diff --git a/tests/vm/netbsd b/tests/vm/netbsd
index ffb65a89be..a9da255c5a 100755
--- a/tests/vm/netbsd
+++ b/tests/vm/netbsd
@@ -32,6 +32,7 @@ class NetBSDVM(basevm.BaseVM):
         "xz",
         "python37",
         "py37-setuptools",
+        "ninja",
 
         # gnu tools
         "bash",
diff --git a/tests/vm/openbsd b/tests/vm/openbsd
index 8356646f21..ad882a76a2 100755
--- a/tests/vm/openbsd
+++ b/tests/vm/openbsd
@@ -31,6 +31,7 @@ class OpenBSDVM(basevm.BaseVM):
         "pkgconf",
         "bzip2", "xz",
         "py3-setuptools",
+        "ninja",
 
         # gnu tools
         "bash",
diff --git a/tests/vm/ubuntu.aarch64 b/tests/vm/ubuntu.aarch64
index 21d454c27f..b291945a7e 100755
--- a/tests/vm/ubuntu.aarch64
+++ b/tests/vm/ubuntu.aarch64
@@ -22,7 +22,7 @@ DEFAULT_CONFIG = {
     'machine'      : "virt,gic-version=3",
     'install_cmds' : "apt-get update,"\
                      "apt-get build-dep -y --arch-only qemu,"\
-                     "apt-get install -y libfdt-dev pkg-config language-pack-en",
+                     "apt-get install -y libfdt-dev pkg-config language-pack-en ninja-build",
     # We increase beyond the default time since during boot
     # it can take some time (many seconds) to log into the VM
     # especially using softmmu.
diff --git a/tests/vm/ubuntu.i386 b/tests/vm/ubuntu.i386
index 5ce72610a6..47681b6f87 100755
--- a/tests/vm/ubuntu.i386
+++ b/tests/vm/ubuntu.i386
@@ -18,7 +18,7 @@ import ubuntuvm
 DEFAULT_CONFIG = {
     'install_cmds' : "apt-get update,"\
                      "apt-get build-dep -y qemu,"\
-                     "apt-get install -y libfdt-dev language-pack-en",
+                     "apt-get install -y libfdt-dev language-pack-en ninja-build",
 }
 
 class UbuntuX86VM(ubuntuvm.UbuntuVM):
-- 
2.26.2





* [PULL 09/22] build: cleanups to Makefile
  2020-10-16 11:47 [PULL 00/22] Build system + misc changes for 2020-10-16 Paolo Bonzini
                   ` (7 preceding siblings ...)
  2020-10-16 11:48 ` [PULL 08/22] add ninja to dockerfiles, CI configurations and test VMs Paolo Bonzini
@ 2020-10-16 11:48 ` Paolo Bonzini
  2020-10-16 11:48 ` [PULL 10/22] build: replace ninjatool with ninja Paolo Bonzini
                   ` (12 subsequent siblings)
  21 siblings, 0 replies; 25+ messages in thread
From: Paolo Bonzini @ 2020-10-16 11:48 UTC (permalink / raw)
  To: qemu-devel

Group similar rules, add comments to "else" and "endif" lines, and
detect a too-old or corrupted config-host.mak before it can mess
things up.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 Makefile | 45 ++++++++++++++++++++++++++++-----------------
 1 file changed, 28 insertions(+), 17 deletions(-)

diff --git a/Makefile b/Makefile
index 91c62a26c8..53d624a0be 100644
--- a/Makefile
+++ b/Makefile
@@ -30,13 +30,21 @@ UNCHECKED_GOALS := %clean TAGS cscope ctags dist \
     help check-help print-% \
     docker docker-% vm-help vm-test vm-build-%
 
+all:
+.PHONY: all clean distclean recurse-all dist msi FORCE
+
+# Don't try to regenerate Makefile or configure
+# We don't generate any of them
+Makefile: ;
+configure: ;
+
 # All following code might depend on configuration variables
 ifneq ($(wildcard config-host.mak),)
-# Put the all: rule here so that config-host.mak can contain dependencies.
-all:
 include config-host.mak
 
 git-submodule-update:
+.git-submodule-status: git-submodule-update config-host.mak
+Makefile: .git-submodule-status
 
 .PHONY: git-submodule-update
 
@@ -84,9 +92,7 @@ Makefile.mtest: build.ninja scripts/mtest2make.py
 -include Makefile.mtest
 endif
 
-Makefile: .git-submodule-status
-.git-submodule-status: git-submodule-update config-host.mak
-
+# Ensure the build tree is okay.
 # Check that we're not trying to do an out-of-tree build from
 # a tree that's been used for an in-tree build.
 ifneq ($(realpath $(SRC_PATH)),$(realpath .))
@@ -97,6 +103,20 @@ seems to have been used for an in-tree build. You can fix this by running \
 endif
 endif
 
+# force a rerun of configure if config-host.mak is too old or corrupted
+ifeq ($(MESON),)
+.PHONY: config-host.mak
+x := $(shell rm -rf meson-private meson-info meson-logs)
+endif
+ifeq ($(NINJA),)
+.PHONY: config-host.mak
+x := $(shell rm -rf meson-private meson-info meson-logs)
+endif
+ifeq ($(wildcard build.ninja),)
+.PHONY: config-host.mak
+x := $(shell rm -rf meson-private meson-info meson-logs)
+endif
+
 config-host.mak: $(SRC_PATH)/configure $(SRC_PATH)/pc-bios $(SRC_PATH)/VERSION
 	@echo $@ is out-of-date, running configure
 	@if test -f meson-private/coredata.dat; then \
@@ -114,15 +134,15 @@ plugins:
 	$(call quiet-command,\
 		$(MAKE) $(SUBDIR_MAKEFLAGS) -C contrib/plugins V="$(V)", \
 		"BUILD", "example plugins")
-endif
+endif # $(CONFIG_PLUGIN)
 
-else
+else # config-host.mak does not exist
 config-host.mak:
 ifneq ($(filter-out $(UNCHECKED_GOALS),$(MAKECMDGOALS)),$(if $(MAKECMDGOALS),,fail))
 	@echo "Please call configure before running make!"
 	@exit 1
 endif
-endif
+endif # config-host.mak does not exist
 
 # Only needed in case Makefile.ninja does not exist.
 .PHONY: ninja-clean ninja-distclean clean-ctlist
@@ -131,20 +151,11 @@ ninja-clean::
 ninja-distclean::
 build.ninja: config-host.mak
 
-# Don't try to regenerate Makefile or configure
-# We don't generate any of them
-Makefile: ;
-configure: ;
-
-.PHONY: all clean distclean install \
-	recurse-all dist msi FORCE
-
 SUBDIR_MAKEFLAGS=$(if $(V),,--no-print-directory --quiet)
 
 include $(SRC_PATH)/tests/Makefile.include
 
 all: recurse-all
-Makefile:
 
 ROM_DIRS = $(addprefix pc-bios/, $(ROMS))
 ROM_DIRS_RULES=$(foreach t, all clean, $(addsuffix /$(t), $(ROM_DIRS)))
-- 
2.26.2





* [PULL 10/22] build: replace ninjatool with ninja
  2020-10-16 11:47 [PULL 00/22] Build system + misc changes for 2020-10-16 Paolo Bonzini
                   ` (8 preceding siblings ...)
  2020-10-16 11:48 ` [PULL 09/22] build: cleanups to Makefile Paolo Bonzini
@ 2020-10-16 11:48 ` Paolo Bonzini
  2020-10-16 11:48 ` [PULL 11/22] build: add --enable/--disable-libudev Paolo Bonzini
                   ` (11 subsequent siblings)
  21 siblings, 0 replies; 25+ messages in thread
From: Paolo Bonzini @ 2020-10-16 11:48 UTC (permalink / raw)
  To: qemu-devel

Now that the build is done entirely by Meson, there is no need
to keep the Makefile conversion.  Instead, we can ask Ninja about
the targets it exposes and forward them.

The main advantages are, from smallest to largest:

- reducing the possible namespace pollution within the Makefile

- removal of a relatively large Python program

- faster builds, because make is slower at parsing Makefile.ninja than
ninja is at parsing build.ninja, and because Makefile.ninja no longer
has to be regenerated after Meson runs.

- tracking of command lines, which provides more accurate rebuilds

In addition, the change removes the requirement for GNU make 3.82, which
was annoying on macOS, and avoids bugs on Windows caused by ninjatool not
knowing how to convert Windows escapes to POSIX escapes.
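
As a rough sketch of the bridge (target names are hypothetical and the
output is truncated), the Makefile asks ninja for the targets it knows
about and turns each of them into a goal that just re-invokes ninja:

    $ ninja -t targets all | sed 's/:.*//'
    qemu-img
    qemu-system-x86_64
    [...]
    $ make qemu-img        # forwarded to: ninja qemu-img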

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 Makefile                    |   79 +--
 configure                   |    9 +-
 docs/devel/build-system.rst |    6 +-
 meson.build                 |    4 -
 scripts/mtest2make.py       |    3 +-
 scripts/ninjatool.py        | 1008 -----------------------------------
 tests/Makefile.include      |    2 +-
 7 files changed, 58 insertions(+), 1053 deletions(-)
 delete mode 100755 scripts/ninjatool.py

diff --git a/Makefile b/Makefile
index 53d624a0be..9196b31733 100644
--- a/Makefile
+++ b/Makefile
@@ -72,27 +72,8 @@ git-submodule-update:
 endif
 endif
 
-export NINJA=./ninjatool
+# 0. ensure the build tree is okay
 
-# Running meson regenerates both build.ninja and ninjatool, and that is
-# enough to prime the rest of the build.
-ninjatool: build.ninja
-
-Makefile.ninja: build.ninja ninjatool
-	./ninjatool -t ninja2make --omit clean dist uninstall cscope TAGS ctags < $< > $@
--include Makefile.ninja
-
-${ninja-targets-c_COMPILER} ${ninja-targets-cpp_COMPILER}: .var.command += -MP
-
-# If MESON is empty, the rule will be re-evaluated after Makefiles are
-# reread (and MESON won't be empty anymore).
-ifneq ($(MESON),)
-Makefile.mtest: build.ninja scripts/mtest2make.py
-	$(MESON) introspect --targets --tests --benchmarks | $(PYTHON) scripts/mtest2make.py > $@
--include Makefile.mtest
-endif
-
-# Ensure the build tree is okay.
 # Check that we're not trying to do an out-of-tree build from
 # a tree that's been used for an in-tree build.
 ifneq ($(realpath $(SRC_PATH)),$(realpath .))
@@ -117,6 +98,7 @@ ifeq ($(wildcard build.ninja),)
 x := $(shell rm -rf meson-private meson-info meson-logs)
 endif
 
+# 1. ensure config-host.mak is up-to-date
 config-host.mak: $(SRC_PATH)/configure $(SRC_PATH)/pc-bios $(SRC_PATH)/VERSION
 	@echo $@ is out-of-date, running configure
 	@if test -f meson-private/coredata.dat; then \
@@ -125,6 +107,45 @@ config-host.mak: $(SRC_PATH)/configure $(SRC_PATH)/pc-bios $(SRC_PATH)/VERSION
 	  ./config.status; \
 	fi
 
+# 2. ensure generated build files are up-to-date
+
+ifneq ($(NINJA),)
+# A separate rule is needed for Makefile dependencies to avoid -n
+export NINJA
+Makefile.ninja: build.ninja
+	$(quiet-@){ echo 'ninja-targets = \'; $(NINJA) -t targets all | sed 's/:.*//; $$!s/$$/ \\/'; } > $@
+-include Makefile.ninja
+endif
+
+ifneq ($(MESON),)
+Makefile.mtest: build.ninja scripts/mtest2make.py
+	$(MESON) introspect --targets --tests --benchmarks | $(PYTHON) scripts/mtest2make.py > $@
+-include Makefile.mtest
+endif
+
+# 3. Rules to bridge to other makefiles
+
+ifneq ($(NINJA),)
+NINJAFLAGS = $(if $V,-v,) \
+        $(filter-out -j, $(lastword -j1 $(filter -l% -j%, $(MAKEFLAGS)))) \
+        $(subst -k, -k0, $(filter -n -k,$(MAKEFLAGS)))
+
+ninja-cmd-goals = $(or $(MAKECMDGOALS), all)
+ninja-cmd-goals += $(foreach t, $(.tests), $(.test.deps.$t))
+
+makefile-targets := build.ninja ctags TAGS cscope dist clean uninstall
+ninja-targets := $(filter-out $(makefile-targets), $(ninja-targets))
+.PHONY: $(ninja-targets) run-ninja
+$(ninja-targets): run-ninja
+
+# Use "| cat" to give Ninja a more "make-y" output.  Use "+" to bypass the
+# --output-sync line.
+run-ninja: config-host.mak
+ifneq ($(filter $(ninja-targets), $(ninja-cmd-goals)),)
+	+@$(NINJA) $(NINJAFLAGS) $(sort $(filter $(ninja-targets), $(ninja-cmd-goals))) | cat
+endif
+endif
+
 # Force configure to re-run if the API symbols are updated
 ifeq ($(CONFIG_PLUGIN),y)
 config-host.mak: $(SRC_PATH)/plugins/qemu-plugins.symbols
@@ -144,13 +165,6 @@ ifneq ($(filter-out $(UNCHECKED_GOALS),$(MAKECMDGOALS)),$(if $(MAKECMDGOALS),,fa
 endif
 endif # config-host.mak does not exist
 
-# Only needed in case Makefile.ninja does not exist.
-.PHONY: ninja-clean ninja-distclean clean-ctlist
-clean-ctlist:
-ninja-clean::
-ninja-distclean::
-build.ninja: config-host.mak
-
 SUBDIR_MAKEFLAGS=$(if $(V),,--no-print-directory --quiet)
 
 include $(SRC_PATH)/tests/Makefile.include
@@ -170,8 +184,9 @@ recurse-clean: $(addsuffix /clean, $(ROM_DIRS))
 
 ######################################################################
 
-clean: recurse-clean ninja-clean clean-ctlist
-	if test -f ninjatool; then ./ninjatool $(if $(V),-v,) -t clean; fi
+clean: recurse-clean
+	-@test -f build.ninja && $(quiet-@)$(NINJA) $(NINJAFLAGS) -t clean || :
+	-@test -f build.ninja && $(NINJA) $(NINJAFLAGS) clean-ctlist || :
 # avoid old build problems by removing potentially incorrect old files
 	rm -f config.mak op-i386.h opc-i386.h gen-op-i386.h op-arm.h opc-arm.h gen-op-arm.h
 	find . \( -name '*.so' -o -name '*.dll' -o -name '*.[oda]' \) -type f \
@@ -188,8 +203,8 @@ dist: qemu-$(VERSION).tar.bz2
 qemu-%.tar.bz2:
 	$(SRC_PATH)/scripts/make-release "$(SRC_PATH)" "$(patsubst qemu-%.tar.bz2,%,$@)"
 
-distclean: clean ninja-distclean
-	-test -f ninjatool && ./ninjatool $(if $(V),-v,) -t clean -g
+distclean: clean
+	-@test -f build.ninja && $(quiet-@)$(NINJA) $(NINJAFLAGS) -t clean -g || :
 	rm -f config-host.mak config-host.h*
 	rm -f tests/tcg/config-*.mak
 	rm -f config-all-disas.mak config.status
@@ -198,7 +213,7 @@ distclean: clean ninja-distclean
 	rm -f qemu-plugins-ld.symbols qemu-plugins-ld64.symbols
 	rm -f *-config-target.h *-config-devices.mak *-config-devices.h
 	rm -rf meson-private meson-logs meson-info compile_commands.json
-	rm -f Makefile.ninja ninjatool ninjatool.stamp Makefile.mtest
+	rm -f Makefile.ninja Makefile.mtest
 	rm -f config.log
 	rm -f linux-headers/asm
 	rm -Rf .sdk
diff --git a/configure b/configure
index 8aa03876d4..9317349044 100755
--- a/configure
+++ b/configure
@@ -1905,7 +1905,7 @@ case "$meson" in
     *) meson=$(command -v "$meson") ;;
 esac
 
-# Probe for ninja (used for compdb)
+# Probe for ninja
 
 if test -z "$ninja"; then
     for c in ninja ninja-build samu; do
@@ -1914,6 +1914,9 @@ if test -z "$ninja"; then
             break
         fi
     done
+    if test -z "$ninja"; then
+      error_exit "Cannot find Ninja"
+    fi
 fi
 
 # Check that the C compiler works. Doing this here before testing
@@ -6779,6 +6782,7 @@ echo "PYTHON=$python" >> $config_host_mak
 echo "SPHINX_BUILD=$sphinx_build" >> $config_host_mak
 echo "GENISOIMAGE=$genisoimage" >> $config_host_mak
 echo "MESON=$meson" >> $config_host_mak
+echo "NINJA=$ninja" >> $config_host_mak
 echo "CC=$cc" >> $config_host_mak
 if $iasl -h > /dev/null 2>&1; then
   echo "CONFIG_IASL=$iasl" >> $config_host_mak
@@ -7030,7 +7034,7 @@ fi
 mv $cross config-meson.cross
 
 rm -rf meson-private meson-info meson-logs
-NINJA=${ninja:-$PWD/ninjatool} $meson setup \
+NINJA=$ninja $meson setup \
         --prefix "$prefix" \
         --libdir "$libdir" \
         --libexecdir "$libexecdir" \
@@ -7063,7 +7067,6 @@ NINJA=${ninja:-$PWD/ninjatool} $meson setup \
 if test "$?" -ne 0 ; then
     error_exit "meson setup failed"
 fi
-touch ninjatool.stamp
 fi
 
 if test -n "${deprecated_features}"; then
diff --git a/docs/devel/build-system.rst b/docs/devel/build-system.rst
index 2ee368fad6..6fcf8854b7 100644
--- a/docs/devel/build-system.rst
+++ b/docs/devel/build-system.rst
@@ -404,10 +404,8 @@ Built by Meson:
 Built by Makefile:
 
 `Makefile.ninja`
-  A Makefile conversion of the build rules in build.ninja.  The conversion
-  is straightforward and, were it necessary to debug the rules produced
-  by Meson, it should be enough to look at build.ninja.  The conversion
-  is performed by scripts/ninjatool.py.
+  A Makefile include that bridges to ninja for the actual build.  The
+  Makefile is mostly a list of targets that Meson included in build.ninja.
 
 `Makefile.mtest`
   The Makefile definitions that let "make check" run tests defined in
diff --git a/meson.build b/meson.build
index 88f757eac9..2c93e22382 100644
--- a/meson.build
+++ b/meson.build
@@ -47,10 +47,6 @@ supported_cpus = ['ppc', 'ppc64', 's390x', 'riscv32', 'riscv64', 'x86', 'x86_64'
 cpu = host_machine.cpu_family()
 targetos = host_machine.system()
 
-configure_file(input: files('scripts/ninjatool.py'),
-               output: 'ninjatool',
-               configuration: config_host)
-
 if cpu in ['x86', 'x86_64']
   kvm_targets = ['i386-softmmu', 'x86_64-softmmu']
 elif cpu == 'aarch64'
diff --git a/scripts/mtest2make.py b/scripts/mtest2make.py
index c3489a4605..25ee6887cf 100644
--- a/scripts/mtest2make.py
+++ b/scripts/mtest2make.py
@@ -70,8 +70,9 @@ def process_tests(test, targets, suites):
     print('.test.driver.%d := %s' % (i, driver))
     print('.test.env.%d := $(.test.env) %s' % (i, env))
     print('.test.cmd.%d := %s' % (i, cmd))
+    print('.test.deps.%d := %s' % (i, ' '.join(deps)))
     print('.PHONY: run-test-%d' % (i,))
-    print('run-test-%d: %s' % (i, ' '.join(deps)))
+    print('run-test-%d: $(.test.deps.%d)' % (i,i))
     print('\t@$(call .test.run,%d,$(.test.output-format))' % (i,))
 
     test_suites = test['suite'] or ['default']
diff --git a/scripts/ninjatool.py b/scripts/ninjatool.py
deleted file mode 100755
index 6f0e35c727..0000000000
--- a/scripts/ninjatool.py
+++ /dev/null
@@ -1,1008 +0,0 @@
-#! /bin/sh
-
-# Python module for parsing and processing .ninja files.
-#
-# Author: Paolo Bonzini
-#
-# Copyright (C) 2019 Red Hat, Inc.
-
-
-# We don't want to put "#! @PYTHON@" as the shebang and
-# make the file executable, so instead we make this a
-# Python/shell polyglot.  The first line below starts a
-# multiline string literal for Python, while it is just
-# ":" for bash.  The closing of the multiline string literal
-# is never parsed by bash since it exits before.
-
-'''':
-case "$0" in
-  /*) me=$0 ;;
-  *) me=$(command -v "$0") ;;
-esac
-python="@PYTHON@"
-case $python in
-  @*) python=python3 ;;
-esac
-exec $python "$me" "$@"
-exit 1
-'''
-
-
-from collections import namedtuple, defaultdict
-import sys
-import os
-import re
-import json
-import argparse
-import hashlib
-import shutil
-
-
-class InvalidArgumentError(Exception):
-    pass
-
-# faster version of os.path.normpath: do nothing unless there is a double
-# slash or a "." or ".." component.  The filter does not have to be super
-# precise, but it has to be fast.  os.path.normpath is the hottest function
-# for ninja2make without this optimization!
-if os.path.sep == '/':
-    def normpath(path, _slow_re=re.compile('/[./]')):
-        return os.path.normpath(path) if _slow_re.search(path) or path[0] == '.' else path
-else:
-    normpath = os.path.normpath
-
-
-def sha1_text(text):
-    return hashlib.sha1(text.encode()).hexdigest()
-
-# ---- lexer and parser ----
-
-PATH_RE = r"[^$\s:|]+|\$[$ :]|\$[a-zA-Z0-9_-]+|\$\{[a-zA-Z0-9_.-]+\}"
-
-SIMPLE_PATH_RE = re.compile(r"^[^$\s:|]+$")
-IDENT_RE = re.compile(r"[a-zA-Z0-9_.-]+$")
-STRING_RE = re.compile(r"(" + PATH_RE + r"|[\s:|])(?:\r?\n)?|.")
-TOPLEVEL_RE = re.compile(r"([=:#]|\|\|?|^ +|(?:" + PATH_RE + r")+)\s*|.")
-VAR_RE=re.compile(r'\$\$|\$\{([^}]*)\}')
-
-BUILD = 1
-POOL = 2
-RULE = 3
-DEFAULT = 4
-EQUALS = 5
-COLON = 6
-PIPE = 7
-PIPE2 = 8
-IDENT = 9
-INCLUDE = 10
-INDENT = 11
-EOL = 12
-
-
-class LexerError(Exception):
-    pass
-
-
-class ParseError(Exception):
-    pass
-
-
-class NinjaParserEvents(object):
-    def __init__(self, parser):
-        self.parser = parser
-
-    def dollar_token(self, word, in_path=False):
-        return '$$' if word == '$' else word
-
-    def variable_expansion_token(self, varname):
-        return '${%s}' % varname
-
-    def variable(self, name, arg):
-        pass
-
-    def begin_file(self):
-        pass
-
-    def end_file(self):
-        pass
-
-    def end_scope(self):
-        pass
-
-    def begin_pool(self, name):
-        pass
-
-    def begin_rule(self, name):
-        pass
-
-    def begin_build(self, out, iout, rule, in_, iin, orderdep):
-        pass
-
-    def default(self, targets):
-        pass
-
-
-class NinjaParser(object):
-
-    InputFile = namedtuple('InputFile', 'filename iter lineno')
-
-    def __init__(self, filename, input):
-        self.stack = []
-        self.top = None
-        self.iter = None
-        self.lineno = None
-        self.match_keyword = False
-        self.push(filename, input)
-
-    def file_changed(self):
-        self.iter = self.top.iter
-        self.lineno = self.top.lineno
-        if self.top.filename is not None:
-            os.chdir(os.path.dirname(self.top.filename) or '.')
-
-    def push(self, filename, input):
-        if self.top:
-            self.top.lineno = self.lineno
-            self.top.iter = self.iter
-            self.stack.append(self.top)
-        self.top = self.InputFile(filename=filename or 'stdin',
-                                  iter=self._tokens(input), lineno=0)
-        self.file_changed()
-
-    def pop(self):
-        if len(self.stack):
-            self.top = self.stack[-1]
-            self.stack.pop()
-            self.file_changed()
-        else:
-            self.top = self.iter = None
-
-    def next_line(self, input):
-        line = next(input).rstrip()
-        self.lineno += 1
-        while len(line) and line[-1] == '$':
-            line = line[0:-1] + next(input).strip()
-            self.lineno += 1
-        return line
-
-    def print_token(self, tok):
-        if tok == EOL:
-            return "end of line"
-        if tok == BUILD:
-            return '"build"'
-        if tok == POOL:
-            return '"pool"'
-        if tok == RULE:
-            return '"rule"'
-        if tok == DEFAULT:
-            return '"default"'
-        if tok == EQUALS:
-            return '"="'
-        if tok == COLON:
-            return '":"'
-        if tok == PIPE:
-            return '"|"'
-        if tok == PIPE2:
-            return '"||"'
-        if tok == INCLUDE:
-            return '"include"'
-        if tok == IDENT:
-            return 'identifier'
-        return '"%s"' % tok
-
-    def error(self, msg):
-        raise LexerError("%s:%d: %s" % (self.stack[-1].filename, self.lineno, msg))
-
-    def parse_error(self, msg):
-        raise ParseError("%s:%d: %s" % (self.stack[-1].filename, self.lineno, msg))
-
-    def expected(self, expected, tok):
-        msg = "found %s, expected " % (self.print_token(tok), )
-        for i, exp_tok in enumerate(expected):
-            if i > 0:
-                msg = msg + (' or ' if i == len(expected) - 1 else ', ')
-            msg = msg + self.print_token(exp_tok)
-        self.parse_error(msg)
-
-    def _variable_tokens(self, value):
-        for m in STRING_RE.finditer(value):
-            match = m.group(1)
-            if not match:
-                self.error("unexpected '%s'" % (m.group(0), ))
-            yield match
-
-    def _tokens(self, input):
-        while True:
-            try:
-                line = self.next_line(input)
-            except StopIteration:
-                return
-            for m in TOPLEVEL_RE.finditer(line):
-                match = m.group(1)
-                if not match:
-                    self.error("unexpected '%s'" % (m.group(0), ))
-                if match == ':':
-                    yield COLON
-                    continue
-                if match == '|':
-                    yield PIPE
-                    continue
-                if match == '||':
-                    yield PIPE2
-                    continue
-                if match[0] == ' ':
-                    yield INDENT
-                    continue
-                if match[0] == '=':
-                    yield EQUALS
-                    value = line[m.start() + 1:].lstrip()
-                    yield from self._variable_tokens(value)
-                    break
-                if match[0] == '#':
-                    break
-
-                # identifier
-                if self.match_keyword:
-                    if match == 'build':
-                        yield BUILD
-                        continue
-                    if match == 'pool':
-                        yield POOL
-                        continue
-                    if match == 'rule':
-                        yield RULE
-                        continue
-                    if match == 'default':
-                        yield DEFAULT
-                        continue
-                    if match == 'include':
-                        filename = line[m.start() + 8:].strip()
-                        self.push(filename, open(filename, 'r'))
-                        break
-                    if match == 'subninja':
-                        self.error('subninja is not supported')
-                yield match
-            yield EOL
-
-    def parse(self, events):
-        global_var = True
-
-        def look_for(*expected):
-            # The last token in the token stream is always EOL.  This
-            # is exploited to avoid catching StopIteration everywhere.
-            tok = next(self.iter)
-            if tok not in expected:
-                self.expected(expected, tok)
-            return tok
-
-        def look_for_ident(*expected):
-            tok = next(self.iter)
-            if isinstance(tok, str):
-                if not IDENT_RE.match(tok):
-                    self.parse_error('variable expansion not allowed')
-            elif tok not in expected:
-                self.expected(expected + (IDENT,), tok)
-            return tok
-
-        def parse_assignment_rhs(gen, expected, in_path):
-            tokens = []
-            for tok in gen:
-                if not isinstance(tok, str):
-                    if tok in expected:
-                        break
-                    self.expected(expected + (IDENT,), tok)
-                if tok[0] != '$':
-                    tokens.append(tok)
-                elif tok == '$ ' or tok == '$$' or tok == '$:':
-                    tokens.append(events.dollar_token(tok[1], in_path))
-                else:
-                    var = tok[2:-1] if tok[1] == '{' else tok[1:]
-                    tokens.append(events.variable_expansion_token(var))
-            else:
-                # gen must have raised StopIteration
-                tok = None
-
-            if tokens:
-                # Fast path avoiding str.join()
-                value = tokens[0] if len(tokens) == 1 else ''.join(tokens)
-            else:
-                value = None
-            return value, tok
-
-        def look_for_path(*expected):
-            # paths in build rules are parsed one space-separated token
-            # at a time and expanded
-            token = next(self.iter)
-            if not isinstance(token, str):
-                return None, token
-            # Fast path if there are no dollar and variable expansion
-            if SIMPLE_PATH_RE.match(token):
-                return token, None
-            gen = self._variable_tokens(token)
-            return parse_assignment_rhs(gen, expected, True)
-
-        def parse_assignment(tok):
-            name = tok
-            assert isinstance(name, str)
-            look_for(EQUALS)
-            value, tok = parse_assignment_rhs(self.iter, (EOL,), False)
-            assert tok == EOL
-            events.variable(name, value)
-
-        def parse_build():
-            # parse outputs
-            out = []
-            iout = []
-            while True:
-                value, tok = look_for_path(COLON, PIPE)
-                if value is None:
-                    break
-                out.append(value)
-            if tok == PIPE:
-                while True:
-                    value, tok = look_for_path(COLON)
-                    if value is None:
-                        break
-                    iout.append(value)
-
-            # parse rule
-            assert tok == COLON
-            rule = look_for_ident()
-
-            # parse inputs and dependencies
-            in_ = []
-            iin = []
-            orderdep = []
-            while True:
-                value, tok = look_for_path(PIPE, PIPE2, EOL)
-                if value is None:
-                    break
-                in_.append(value)
-            if tok == PIPE:
-                while True:
-                    value, tok = look_for_path(PIPE2, EOL)
-                    if value is None:
-                        break
-                    iin.append(value)
-            if tok == PIPE2:
-                while True:
-                    value, tok = look_for_path(EOL)
-                    if value is None:
-                        break
-                    orderdep.append(value)
-            assert tok == EOL
-            events.begin_build(out, iout, rule, in_, iin, orderdep)
-            nonlocal global_var
-            global_var = False
-
-        def parse_pool():
-            # pool declarations are ignored.  Just gobble all the variables
-            ident = look_for_ident()
-            look_for(EOL)
-            events.begin_pool(ident)
-            nonlocal global_var
-            global_var = False
-
-        def parse_rule():
-            ident = look_for_ident()
-            look_for(EOL)
-            events.begin_rule(ident)
-            nonlocal global_var
-            global_var = False
-
-        def parse_default():
-            idents = []
-            while True:
-                ident = look_for_ident(EOL)
-                if ident == EOL:
-                    break
-                idents.append(ident)
-            events.default(idents)
-
-        def parse_declaration(tok):
-            if tok == EOL:
-                return
-
-            nonlocal global_var
-            if tok == INDENT:
-                if global_var:
-                    self.parse_error('indented line outside rule or edge')
-                tok = look_for_ident(EOL)
-                if tok == EOL:
-                    return
-                parse_assignment(tok)
-                return
-
-            if not global_var:
-                events.end_scope()
-                global_var = True
-            if tok == POOL:
-                parse_pool()
-            elif tok == BUILD:
-                parse_build()
-            elif tok == RULE:
-                parse_rule()
-            elif tok == DEFAULT:
-                parse_default()
-            elif isinstance(tok, str):
-                parse_assignment(tok)
-            else:
-                self.expected((POOL, BUILD, RULE, INCLUDE, DEFAULT, IDENT), tok)
-
-        events.begin_file()
-        while self.iter:
-            try:
-                self.match_keyword = True
-                token = next(self.iter)
-                self.match_keyword = False
-                parse_declaration(token)
-            except StopIteration:
-                self.pop()
-        events.end_file()
-
-
-# ---- variable handling ----
-
-def expand(x, rule_vars=None, build_vars=None, global_vars=None):
-    if x is None:
-        return None
-    changed = True
-    have_dollar_replacement = False
-    while changed:
-        changed = False
-        matches = list(VAR_RE.finditer(x))
-        if not matches:
-            break
-
-        # Reverse the match so that expanding later matches does not
-        # invalidate m.start()/m.end() for earlier ones.  Do not reduce $$ to $
-        # until all variables are dealt with.
-        for m in reversed(matches):
-            name = m.group(1)
-            if not name:
-                have_dollar_replacement = True
-                continue
-            changed = True
-            if build_vars and name in build_vars:
-                value = build_vars[name]
-            elif rule_vars and name in rule_vars:
-                value = rule_vars[name]
-            elif name in global_vars:
-                value = global_vars[name]
-            else:
-                value = ''
-            x = x[:m.start()] + value + x[m.end():]
-    return x.replace('$$', '$') if have_dollar_replacement else x
-
-
-class Scope(object):
-    def __init__(self, events):
-        self.events = events
-
-    def on_left_scope(self):
-        pass
-
-    def on_variable(self, key, value):
-        pass
-
-
-class BuildScope(Scope):
-    def __init__(self, events, out, iout, rule, in_, iin, orderdep, rule_vars):
-        super().__init__(events)
-        self.rule = rule
-        self.out = [events.expand_and_normalize(x) for x in out]
-        self.in_ = [events.expand_and_normalize(x) for x in in_]
-        self.iin = [events.expand_and_normalize(x) for x in iin]
-        self.orderdep = [events.expand_and_normalize(x) for x in orderdep]
-        self.iout = [events.expand_and_normalize(x) for x in iout]
-        self.rule_vars = rule_vars
-        self.build_vars = dict()
-        self._define_variable('out', ' '.join(self.out))
-        self._define_variable('in', ' '.join(self.in_))
-
-    def expand(self, x):
-        return self.events.expand(x, self.rule_vars, self.build_vars)
-
-    def on_left_scope(self):
-        self.events.variable('out', self.build_vars['out'])
-        self.events.variable('in', self.build_vars['in'])
-        self.events.end_build(self, self.out, self.iout, self.rule, self.in_,
-                              self.iin, self.orderdep)
-
-    def _define_variable(self, key, value):
-        # The value has been expanded already, quote it for further
-        # expansion from rule variables
-        value = value.replace('$', '$$')
-        self.build_vars[key] = value
-
-    def on_variable(self, key, value):
-        # in and out are at the top of the lookup order and cannot
-        # be overridden.  Also, unlike what the manual says, build
-        # variables only lookup global variables.  They never lookup
-        # rule variables, earlier build variables, or in/out.
-        if key not in ('in', 'in_newline', 'out'):
-            self._define_variable(key, self.events.expand(value))
-
-
-class RuleScope(Scope):
-    def __init__(self, events, name, vars_dict):
-        super().__init__(events)
-        self.name = name
-        self.vars_dict = vars_dict
-        self.generator = False
-
-    def on_left_scope(self):
-        self.events.end_rule(self, self.name)
-
-    def on_variable(self, key, value):
-        self.vars_dict[key] = value
-        if key == 'generator':
-            self.generator = True
-
-
-class NinjaParserEventsWithVars(NinjaParserEvents):
-    def __init__(self, parser):
-        super().__init__(parser)
-        self.rule_vars = defaultdict(lambda: dict())
-        self.global_vars = dict()
-        self.scope = None
-
-    def variable(self, name, value):
-        if self.scope:
-            self.scope.on_variable(name, value)
-        else:
-            self.global_vars[name] = self.expand(value)
-
-    def begin_build(self, out, iout, rule, in_, iin, orderdep):
-        if rule != 'phony' and rule not in self.rule_vars:
-            self.parser.parse_error("undefined rule '%s'" % rule)
-
-        self.scope = BuildScope(self, out, iout, rule, in_, iin, orderdep, self.rule_vars[rule])
-
-    def begin_pool(self, name):
-        # pool declarations are ignored.  Just gobble all the variables
-        self.scope = Scope(self)
-
-    def begin_rule(self, name):
-        if name in self.rule_vars:
-            self.parser.parse_error("duplicate rule '%s'" % name)
-        self.scope = RuleScope(self, name, self.rule_vars[name])
-
-    def end_scope(self):
-        self.scope.on_left_scope()
-        self.scope = None
-
-    # utility functions:
-
-    def expand(self, x, rule_vars=None, build_vars=None):
-        return expand(x, rule_vars, build_vars, self.global_vars)
-
-    def expand_and_normalize(self, x):
-        return normpath(self.expand(x))
-
-    # extra events not present in the superclass:
-
-    def end_build(self, scope, out, iout, rule, in_, iin, orderdep):
-        pass
-
-    def end_rule(self, scope, name):
-        pass
-
-
-# ---- test client that just prints back whatever it parsed  ----
-
-class Writer(NinjaParserEvents):
-    ARGS = argparse.ArgumentParser(description='Rewrite input build.ninja to stdout.')
-
-    def __init__(self, output, parser, args):
-        super().__init__(parser)
-        self.output = output
-        self.indent = ''
-        self.had_vars = False
-
-    def dollar_token(self, word, in_path=False):
-        return '$' + word
-
-    def print(self, *args, **kwargs):
-        if len(args):
-            self.output.write(self.indent)
-        print(*args, **kwargs, file=self.output)
-
-    def variable(self, name, value):
-        self.print('%s = %s' % (name, value))
-        self.had_vars = True
-
-    def begin_scope(self):
-        self.indent = '  '
-        self.had_vars = False
-
-    def end_scope(self):
-        if self.had_vars:
-            self.print()
-        self.indent = ''
-        self.had_vars = False
-
-    def begin_pool(self, name):
-        self.print('pool %s' % name)
-        self.begin_scope()
-
-    def begin_rule(self, name):
-        self.print('rule %s' % name)
-        self.begin_scope()
-
-    def begin_build(self, outputs, implicit_outputs, rule, inputs, implicit, order_only):
-        all_outputs = list(outputs)
-        all_inputs = list(inputs)
-
-        if implicit:
-            all_inputs.append('|')
-            all_inputs.extend(implicit)
-        if order_only:
-            all_inputs.append('||')
-            all_inputs.extend(order_only)
-        if implicit_outputs:
-            all_outputs.append('|')
-            all_outputs.extend(implicit_outputs)
-
-        self.print('build %s: %s' % (' '.join(all_outputs),
-                                     ' '.join([rule] + all_inputs)))
-        self.begin_scope()
-
-    def default(self, targets):
-        self.print('default %s' % ' '.join(targets))
-
-
-# ---- emit compile_commands.json ----
-
-class Compdb(NinjaParserEventsWithVars):
-    ARGS = argparse.ArgumentParser(description='Emit compile_commands.json.')
-    ARGS.add_argument('rules', nargs='*',
-                      help='The ninja rules to emit compilation commands for.')
-
-    def __init__(self, output, parser, args):
-        super().__init__(parser)
-        self.output = output
-        self.rules = args.rules
-        self.sep = ''
-
-    def begin_file(self):
-        self.output.write('[')
-        self.directory = os.getcwd()
-
-    def print_entry(self, **entry):
-        entry['directory'] = self.directory
-        self.output.write(self.sep + json.dumps(entry))
-        self.sep = ',\n'
-
-    def begin_build(self, out, iout, rule, in_, iin, orderdep):
-        if in_ and rule in self.rules:
-            super().begin_build(out, iout, rule, in_, iin, orderdep)
-        else:
-            self.scope = Scope(self)
-
-    def end_build(self, scope, out, iout, rule, in_, iin, orderdep):
-        self.print_entry(command=scope.expand('${command}'), file=in_[0])
-
-    def end_file(self):
-        self.output.write(']\n')
-
-
-# ---- clean output files ----
-
-class Clean(NinjaParserEventsWithVars):
-    ARGS = argparse.ArgumentParser(description='Remove output build files.')
-    ARGS.add_argument('-g', dest='generator', action='store_true',
-                      help='clean generated files too')
-
-    def __init__(self, output, parser, args):
-        super().__init__(parser)
-        self.dry_run = args.dry_run
-        self.verbose = args.verbose or args.dry_run
-        self.generator = args.generator
-
-    def begin_file(self):
-        print('Cleaning... ', end=(None if self.verbose else ''), flush=True)
-        self.cnt = 0
-
-    def end_file(self):
-        print('%d files' % self.cnt)
-
-    def do_clean(self, *files):
-        for f in files:
-            if self.dry_run:
-                if os.path.exists(f):
-                    self.cnt += 1
-                    print('Would remove ' + f)
-                    continue
-            else:
-                try:
-                    if os.path.isdir(f):
-                        shutil.rmtree(f)
-                    else:
-                        os.unlink(f)
-                    self.cnt += 1
-                    if self.verbose:
-                        print('Removed ' + f)
-                except FileNotFoundError:
-                    pass
-
-    def end_build(self, scope, out, iout, rule, in_, iin, orderdep):
-        if rule == 'phony':
-            return
-        if self.generator:
-            rspfile = scope.expand('${rspfile}')
-            if rspfile:
-                self.do_clean(rspfile)
-        if self.generator or not scope.expand('${generator}'):
-            self.do_clean(*out, *iout)
-            depfile = scope.expand('${depfile}')
-            if depfile:
-                self.do_clean(depfile)
-
-
-# ---- convert build.ninja to makefile ----
-
-class Ninja2Make(NinjaParserEventsWithVars):
-    ARGS = argparse.ArgumentParser(description='Convert build.ninja to a Makefile.')
-    ARGS.add_argument('--clean', dest='emit_clean', action='store_true',
-                      help='Emit clean/distclean rules.')
-    ARGS.add_argument('--doublecolon', action='store_true',
-                      help='Emit double-colon rules for phony targets.')
-    ARGS.add_argument('--omit', metavar='TARGET', nargs='+',
-                      help='Targets to omit.')
-
-    def __init__(self, output, parser, args):
-        super().__init__(parser)
-        self.output = output
-
-        self.emit_clean = args.emit_clean
-        self.doublecolon = args.doublecolon
-        self.omit = set(args.omit)
-
-        if self.emit_clean:
-            self.omit.update(['clean', 'distclean'])
-
-        # Lists of targets are kept in memory and emitted only at the
-        # end because appending is really inefficient in GNU make.
-        # We only do it when it's O(#rules) or O(#variables), but
-        # never when it could be O(#targets).
-        self.depfiles = list()
-        self.rspfiles = list()
-        self.build_vars = defaultdict(lambda: dict())
-        self.rule_targets = defaultdict(lambda: list())
-        self.stamp_targets = defaultdict(lambda: list())
-        self.all_outs = set()
-        self.all_ins = set()
-        self.all_phony = set()
-        self.seen_default = False
-
-    def print(self, *args, **kwargs):
-        print(*args, **kwargs, file=self.output)
-
-    def dollar_token(self, word, in_path=False):
-        if in_path and word == ' ':
-            self.parser.parse_error('Make does not support spaces in filenames')
-        return '$$' if word == '$' else word
-
-    def print_phony(self, outs, ins):
-        targets = ' '.join(outs).replace('$', '$$')
-        deps = ' '.join(ins).replace('$', '$$')
-        deps = deps.strip()
-        if self.doublecolon:
-            self.print(targets + '::' + (' ' if deps else '') + deps + ';@:')
-        else:
-            self.print(targets + ':' + (' ' if deps else '') + deps)
-        self.all_phony.update(outs)
-
-    def begin_file(self):
-        self.print(r'# This is an automatically generated file, and it shows.')
-        self.print(r'ninja-default:')
-        self.print(r'.PHONY: ninja-default ninja-clean ninja-distclean')
-        if self.emit_clean:
-            self.print(r'ninja-clean:: ninja-clean-start; $(if $V,,@)rm -f ${ninja-depfiles}')
-            self.print(r'ninja-clean-start:; $(if $V,,@echo Cleaning...)')
-            self.print(r'ninja-distclean:: clean; $(if $V,,@)rm -f ${ninja-rspfiles}')
-            self.print(r'.PHONY: ninja-clean-start')
-            self.print_phony(['clean'], ['ninja-clean'])
-            self.print_phony(['distclean'], ['ninja-distclean'])
-        self.print(r'vpath')
-        self.print(r'NULL :=')
-        self.print(r'SPACE := ${NULL} #')
-        self.print(r'MAKEFLAGS += -rR')
-        self.print(r'define NEWLINE')
-        self.print(r'')
-        self.print(r'endef')
-        self.print(r'.var.in_newline = $(subst $(SPACE),$(NEWLINE),${.var.in})')
-        self.print(r"ninja-command = $(if $V,,$(if ${.var.description},@printf '%s\n' '$(subst ','\'',${.var.description})' && ))${.var.command}")
-        self.print(r"ninja-command-restat = $(if $V,,$(if ${.var.description},@printf '%s\n' '$(subst ','\'',${.var.description})' && ))${.var.command} && if test -e $(firstword ${.var.out}); then printf '%s\n' ${.var.out} > $@; fi")
-
-    def end_file(self):
-        def natural_sort_key(s, _nsre=re.compile('([0-9]+)')):
-            return [int(text) if text.isdigit() else text.lower()
-                    for text in _nsre.split(s)]
-
-        self.print()
-        self.print('ninja-outputdirs :=')
-        for rule in self.rule_vars:
-            if rule == 'phony':
-                continue
-            self.print('ninja-targets-%s := %s' % (rule, ' '.join(self.rule_targets[rule])))
-            self.print('ninja-stamp-%s := %s' % (rule, ' '.join(self.stamp_targets[rule])))
-            self.print('ninja-outputdirs += $(sort $(dir ${ninja-targets-%s}))' % rule)
-            self.print()
-        self.print('dummy := $(shell mkdir -p . $(sort $(ninja-outputdirs)))')
-        self.print('ninja-depfiles :=' + ' '.join(self.depfiles))
-        self.print('ninja-rspfiles :=' + ' '.join(self.rspfiles))
-        self.print('-include ${ninja-depfiles}')
-        self.print()
-        for targets in self.build_vars:
-            for name, value in self.build_vars[targets].items():
-                self.print('%s: private .var.%s := %s' %
-                           (targets, name, value.replace('$', '$$')))
-            self.print()
-        if not self.seen_default:
-            default_targets = sorted(self.all_outs - self.all_ins, key=natural_sort_key)
-            self.print('ninja-default: ' + ' '.join(default_targets))
-
-        # This is a hack...  Meson declares input meson.build files as
-        # phony, because Ninja does not have an equivalent of Make's
-        # "path/to/file:" declaration that ignores "path/to/file" even
-        # if it is absent.  However, Makefile.ninja wants to depend on
-        # build.ninja, which in turn depends on these phony targets which
-        # would cause Makefile.ninja to be rebuilt in a loop.
-        phony_targets = sorted(self.all_phony - self.all_ins, key=natural_sort_key)
-        self.print('.PHONY: ' + ' '.join(phony_targets))
-
-    def variable(self, name, value):
-        super().variable(name, value)
-        if self.scope is None:
-            self.global_vars[name] = self.expand(value)
-            self.print('.var.%s := %s' % (name, self.global_vars[name]))
-
-    def begin_build(self, out, iout, rule, in_, iin, orderdep):
-        if any(x in self.omit for x in out):
-            self.scope = Scope(self)
-            return
-
-        super().begin_build(out, iout, rule, in_, iin, orderdep)
-        self.current_targets = ' '.join(self.scope.out + self.scope.iout).replace('$', '$$')
-
-    def end_build(self, scope, out, iout, rule, in_, iin, orderdep):
-        self.rule_targets[rule] += self.scope.out
-        self.rule_targets[rule] += self.scope.iout
-
-        self.all_outs.update(self.scope.iout)
-        self.all_outs.update(self.scope.out)
-        self.all_ins.update(self.scope.in_)
-        self.all_ins.update(self.scope.iin)
-
-        targets = self.current_targets
-        self.current_targets = None
-        if rule == 'phony':
-            # Phony rules treat order-only dependencies as normal deps
-            self.print_phony(out + iout, in_ + iin + orderdep)
-            return
-
-        inputs = ' '.join(in_ + iin).replace('$', '$$')
-        orderonly = ' '.join(orderdep).replace('$', '$$')
-
-        rspfile = scope.expand('${rspfile}')
-        if rspfile:
-            rspfile_content = scope.expand('${rspfile_content}')
-            with open(rspfile, 'w') as f:
-                f.write(rspfile_content)
-            inputs += ' ' + rspfile
-            self.rspfiles.append(rspfile)
-
-        restat = 'restat' in self.scope.build_vars or 'restat' in self.rule_vars[rule]
-        depfile = scope.expand('${depfile}')
-        build_vars = {
-            'command': scope.expand('${command}'),
-            'description': scope.expand('${description}'),
-            'out': scope.expand('${out}')
-        }
-
-        if restat and not depfile:
-            if len(out) == 1:
-                stamp = out[0] + '.stamp'
-            else:
-                stamp = '%s@%s.stamp' % (rule, sha1_text(targets)[0:11])
-            self.print('%s: %s; @:' % (targets, stamp))
-            self.print('ifneq (%s, $(wildcard %s))' % (targets, targets))
-            self.print('.PHONY: %s' % (stamp, ))
-            self.print('endif')
-            self.print('%s: %s | %s; ${ninja-command-restat}' % (stamp, inputs, orderonly))
-            self.rule_targets[rule].append(stamp)
-            self.stamp_targets[rule].append(stamp)
-            self.build_vars[stamp] = build_vars
-        else:
-            self.print('%s: %s | %s; ${ninja-command}' % (targets, inputs, orderonly))
-            self.build_vars[targets] = build_vars
-            if depfile:
-                self.depfiles.append(depfile)
-
-    def end_rule(self, scope, name):
-        # Note that the generator pseudo-variable could also be attached
-        # to a build block rather than a rule.  This is not handled here
-        # in order to reduce the number of "rm" invocations.  However,
-        # "ninjatool.py -t clean" does that correctly.
-        target = 'distclean' if scope.generator else 'clean'
-        self.print('ninja-%s:: ; $(if $V,,@)rm -f ${ninja-stamp-%s}' % (target, name))
-        if self.emit_clean:
-            self.print('ninja-%s:: ; $(if $V,,@)rm -rf ${ninja-targets-%s}' % (target, name))
-
-    def default(self, targets):
-        self.print("ninja-default: " + ' '.join(targets))
-        self.seen_default = True
-
-
-# ---- command line parsing ----
-
-# we cannot use subparsers because tools are chosen through the "-t"
-# option.
-
-class ToolAction(argparse.Action):
-    def __init__(self, option_strings, dest, choices, metavar='TOOL', nargs=None, **kwargs):
-        if nargs is not None:
-            raise ValueError("nargs not allowed")
-        super().__init__(option_strings, dest, required=True, choices=choices,
-                         metavar=metavar, **kwargs)
-
-    def __call__(self, parser, namespace, value, option_string):
-        tool = self.choices[value]
-        setattr(namespace, self.dest, tool)
-        tool.ARGS.prog = '%s %s %s' % (parser.prog, option_string, value)
-
-
-class ToolHelpAction(argparse.Action):
-    def __init__(self, option_strings, dest, nargs=None, **kwargs):
-        if nargs is not None:
-            raise ValueError("nargs not allowed")
-        super().__init__(option_strings, dest, nargs=0, **kwargs)
-
-    def __call__(self, parser, namespace, values, option_string=None):
-        if namespace.tool:
-            namespace.tool.ARGS.print_help()
-        else:
-            parser.print_help()
-        parser.exit()
-
-
-tools = {
-    'test': Writer,
-    'ninja2make': Ninja2Make,
-    'compdb': Compdb,
-    'clean': Clean,
-}
-
-parser = argparse.ArgumentParser(description='Process and transform build.ninja files.',
-                                 add_help=False)
-parser.add_argument('-C', metavar='DIR', dest='dir', default='.',
-                    help='change to DIR before doing anything else')
-parser.add_argument('-f', metavar='FILE', dest='file', default='build.ninja',
-                    help='specify input build file [default=build.ninja]')
-parser.add_argument('-n', dest='dry_run', action='store_true',
-                    help='do not actually do anything')
-parser.add_argument('-v', dest='verbose', action='store_true',
-                    help='be more verbose')
-
-parser.add_argument('-t', dest='tool', choices=tools, action=ToolAction,
-                    help='choose the tool to run')
-parser.add_argument('-h', '--help', action=ToolHelpAction,
-                    help='show this help message and exit')
-
-if len(sys.argv) >= 2 and sys.argv[1] == '--version':
-    print('1.8')
-    sys.exit(0)
-
-args, tool_args = parser.parse_known_args()
-args.tool.ARGS.parse_args(tool_args, args)
-
-os.chdir(args.dir)
-with open(args.file, 'r') as f:
-    parser = NinjaParser(args.file, f)
-    try:
-        events = args.tool(sys.stdout, parser, args)
-    except InvalidArgumentError as e:
-        parser.error(str(e))
-    parser.parse(events)
diff --git a/tests/Makefile.include b/tests/Makefile.include
index 4037490b69..3a0524ce74 100644
--- a/tests/Makefile.include
+++ b/tests/Makefile.include
@@ -140,7 +140,7 @@ QEMU_IOTESTS_HELPERS-$(CONFIG_LINUX) = tests/qemu-iotests/socket_scm_helper$(EXE
 check: check-block
 check-block: $(SRC_PATH)/tests/check-block.sh qemu-img$(EXESUF) \
 		qemu-io$(EXESUF) qemu-nbd$(EXESUF) $(QEMU_IOTESTS_HELPERS-y) \
-		$(filter qemu-system-%, $(ninja-targets-c_LINKER) $(ninja-targets-cpp_LINKER))
+		$(filter qemu-system-%, $(ninja-targets))
 	@$<
 endif
 
-- 
2.26.2




^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [PULL 11/22] build: add --enable/--disable-libudev
  2020-10-16 11:47 [PULL 00/22] Build system + misc changes for 2020-10-16 Paolo Bonzini
                   ` (9 preceding siblings ...)
  2020-10-16 11:48 ` [PULL 10/22] build: replace ninjatool with ninja Paolo Bonzini
@ 2020-10-16 11:48 ` Paolo Bonzini
  2020-10-16 11:48 ` [PULL 12/22] meson.build: don't condition iconv detection on library detection Paolo Bonzini
                   ` (10 subsequent siblings)
  21 siblings, 0 replies; 25+ messages in thread
From: Paolo Bonzini @ 2020-10-16 11:48 UTC (permalink / raw)
  To: qemu-devel; +Cc: Peter Maydell

Initially, libudev detection was bundled with --enable-mpath because
qemu-pr-helper was the only user of libudev.  Recently, however, the USB
U2F emulation has also started using libudev, so add a separate
option.  This also allows 1) disabling libudev for static builds if
desired, and 2) requiring libudev for non-static builds even if
multipath support is undesirable.

The multipath test is adjusted, because it is now possible to enter it
with configurations that should fail, such as --static --enable-mpath
--disable-libudev.
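
For illustration, the combinations that a separate option makes possible
look roughly like this (the invocations are sketches, not taken from the
patch itself):

  # static build: ask for multipath but leave libudev out -- one of the
  # configurations that is now expected to fail, as noted above
  ./configure --static --enable-mpath --disable-libudev

  # non-static build: require libudev (e.g. for USB U2F emulation) while
  # leaving multipath support out
  ./configure --enable-libudev --disable-mpath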

Reported-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 configure         |  8 +++++++-
 meson.build       | 50 ++++++++++++++++++++++++++---------------------
 meson_options.txt |  2 ++
 3 files changed, 37 insertions(+), 23 deletions(-)

diff --git a/configure b/configure
index 9317349044..c83a2eeb9d 100755
--- a/configure
+++ b/configure
@@ -303,6 +303,7 @@ netmap="no"
 sdl="auto"
 sdl_image="auto"
 virtfs=""
+libudev="auto"
 mpath="auto"
 vnc="enabled"
 sparse="auto"
@@ -1002,6 +1003,10 @@ for opt do
   ;;
   --enable-virtfs) virtfs="yes"
   ;;
+  --disable-libudev) libudev="disabled"
+  ;;
+  --enable-libudev) libudev="enabled"
+  ;;
   --disable-mpath) mpath="disabled"
   ;;
   --enable-mpath) mpath="enabled"
@@ -1759,6 +1764,7 @@ disabled with --disable-FEATURE, default is enabled if available:
   vnc-png         PNG compression for VNC server
   cocoa           Cocoa UI (Mac OS X only)
   virtfs          VirtFS
+  libudev         Use libudev to enumerate host devices
   mpath           Multipath persistent reservation passthrough
   xen             xen backend driver support
   xen-pci-passthrough    PCI passthrough support for Xen
@@ -7060,7 +7066,7 @@ NINJA=$ninja $meson setup \
         -Dvnc=$vnc -Dvnc_sasl=$vnc_sasl -Dvnc_jpeg=$vnc_jpeg -Dvnc_png=$vnc_png \
         -Dgettext=$gettext -Dxkbcommon=$xkbcommon -Du2f=$u2f \
         -Dcapstone=$capstone -Dslirp=$slirp -Dfdt=$fdt \
-        -Diconv=$iconv -Dcurses=$curses \
+        -Diconv=$iconv -Dcurses=$curses -Dlibudev=$libudev\
         $cross_arg \
         "$PWD" "$source_path"
 
diff --git a/meson.build b/meson.build
index 2c93e22382..0c0f4f9fd8 100644
--- a/meson.build
+++ b/meson.build
@@ -380,10 +380,11 @@ endif
 libudev = not_found
 if targetos == 'linux' and (have_system or have_tools)
   libudev = dependency('libudev',
-                       required: get_option('mpath').enabled(),
+                       required: get_option('libudev'),
                        static: enable_static)
 endif
 
+mpathlibs = [libudev]
 mpathpersist = not_found
 mpathpersist_new_api = false
 if targetos == 'linux' and have_tools and not get_option('mpath').disabled()
@@ -414,35 +415,40 @@ if targetos == 'linux' and have_tools and not get_option('mpath').disabled()
           mpath_lib_init(udev);
           return 0;
       }'''
-  mpathlibs = [libudev]
-  if enable_static
-    mpathlibs += cc.find_library('devmapper',
-                                   required: get_option('mpath'),
-                                   static: enable_static)
-  endif
-  mpathlibs += cc.find_library('multipath',
-                               required: get_option('mpath'),
-                               static: enable_static)
-  mpathlibs += cc.find_library('mpathpersist',
-                               required: get_option('mpath'),
-                               static: enable_static)
-  foreach lib: mpathlibs
-    if not lib.found()
-      mpathlibs = []
-      break
+  libmpathpersist = cc.find_library('mpathpersist',
+                                    required: get_option('mpath'),
+                                    static: enable_static)
+  if libmpathpersist.found()
+    mpathlibs += libmpathpersist
+    if enable_static
+      mpathlibs += cc.find_library('devmapper',
+                                     required: get_option('mpath'),
+                                     static: enable_static)
     endif
-  endforeach
-  if mpathlibs.length() > 0
-    if cc.links(mpath_test_source_new, dependencies: mpathlibs)
+    mpathlibs += cc.find_library('multipath',
+                                 required: get_option('mpath'),
+                                 static: enable_static)
+    foreach lib: mpathlibs
+      if not lib.found()
+        mpathlibs = []
+        break
+      endif
+    endforeach
+    if mpathlibs.length() == 0
+      msg = 'Dependencies missing for libmpathpersist'
+    elif cc.links(mpath_test_source_new, dependencies: mpathlibs)
       mpathpersist = declare_dependency(dependencies: mpathlibs)
       mpathpersist_new_api = true
     elif cc.links(mpath_test_source_old, dependencies: mpathlibs)
       mpathpersist = declare_dependency(dependencies: mpathlibs)
     else
+      msg = 'Cannot detect libmpathpersist API'
+    endif
+    if not mpathpersist.found()
       if get_option('mpath').enabled()
-        error('Cannot detect libmpathpersist API')
+        error(msg)
       else
-        warning('Cannot detect libmpathpersist API, disabling')
+        warning(msg + ', disabling')
       endif
     endif
   endif
diff --git a/meson_options.txt b/meson_options.txt
index e6cb1e589b..77b3fabd00 100644
--- a/meson_options.txt
+++ b/meson_options.txt
@@ -36,6 +36,8 @@ option('iconv', type : 'feature', value : 'auto',
        description: 'Font glyph conversion support')
 option('curses', type : 'feature', value : 'auto',
        description: 'curses UI')
+option('libudev', type : 'feature', value : 'auto',
+       description: 'Use libudev to enumerate host devices')
 option('sdl', type : 'feature', value : 'auto',
        description: 'SDL user interface')
 option('sdl_image', type : 'feature', value : 'auto',
-- 
2.26.2




^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [PULL 12/22] meson.build: don't condition iconv detection on library detection
  2020-10-16 11:47 [PULL 00/22] Build system + misc changes for 2020-10-16 Paolo Bonzini
                   ` (10 preceding siblings ...)
  2020-10-16 11:48 ` [PULL 11/22] build: add --enable/--disable-libudev Paolo Bonzini
@ 2020-10-16 11:48 ` Paolo Bonzini
  2020-10-16 11:48 ` [PULL 13/22] meson: cleanup curses/iconv test Paolo Bonzini
                   ` (9 subsequent siblings)
  21 siblings, 0 replies; 25+ messages in thread
From: Paolo Bonzini @ 2020-10-16 11:48 UTC (permalink / raw)
  To: qemu-devel; +Cc: Yonggang Luo, Bruce Rogers

From: Bruce Rogers <brogers@suse.com>

It isn't necessarily the case that use of iconv requires an additional
library. For that reason we shouldn't conditionalize iconv detection on
libiconv.found.
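
The typical case is a libc (glibc, for instance) that provides iconv in
libc itself: cc.find_library('iconv') then has nothing extra to report,
yet the link test still succeeds.  A rough command-line equivalent of the
probe, with the test program from the hunk below saved as iconv-test.c
(file names are placeholders):

  cc iconv-test.c -o iconv-test            # links as-is where libc provides iconv
  cc iconv-test.c -o iconv-test -liconv    # separate libiconv needed, e.g. on msys2/mingw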

Fixes: 5285e593c33 (configure: Fixes ncursesw detection under msys2/mingw by convert them to meson)

Signed-off-by: Bruce Rogers <brogers@suse.com>
Reviewed-by: Yonggang Luo <luoyonggang@gmail.com>
Message-Id: <20201014221939.196958-1-brogers@suse.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 meson.build | 16 +++++++---------
 1 file changed, 7 insertions(+), 9 deletions(-)

diff --git a/meson.build b/meson.build
index 0c0f4f9fd8..c1c45e9845 100644
--- a/meson.build
+++ b/meson.build
@@ -459,15 +459,13 @@ if not get_option('iconv').disabled()
   libiconv = cc.find_library('iconv',
                              required: false,
                              static: enable_static)
-  if libiconv.found()
-    if cc.links('''
-      #include <iconv.h>
-      int main(void) {
-        iconv_t conv = iconv_open("WCHAR_T", "UCS-2");
-        return conv != (iconv_t) -1;
-      }''', dependencies: [libiconv])
-      iconv = declare_dependency(dependencies: [libiconv])
-    endif
+  if cc.links('''
+    #include <iconv.h>
+    int main(void) {
+      iconv_t conv = iconv_open("WCHAR_T", "UCS-2");
+      return conv != (iconv_t) -1;
+    }''', dependencies: [libiconv])
+    iconv = declare_dependency(dependencies: [libiconv])
   endif
 endif
 if get_option('iconv').enabled() and not iconv.found()
-- 
2.26.2




^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [PULL 13/22] meson: cleanup curses/iconv test
  2020-10-16 11:47 [PULL 00/22] Build system + misc changes for 2020-10-16 Paolo Bonzini
                   ` (11 preceding siblings ...)
  2020-10-16 11:48 ` [PULL 12/22] meson.build: don't condition iconv detection on library detection Paolo Bonzini
@ 2020-10-16 11:48 ` Paolo Bonzini
  2020-10-16 11:48 ` [PULL 14/22] configure: fix handling of --docdir parameter Paolo Bonzini
                   ` (8 subsequent siblings)
  21 siblings, 0 replies; 25+ messages in thread
From: Paolo Bonzini @ 2020-10-16 11:48 UTC (permalink / raw)
  To: qemu-devel

Skip the test if system emulation is not requested, and
differentiate the errors for lack of iconv and lack of curses.
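
With this change the possible outcomes look roughly like the following
(the error texts come from the hunk below; the invocations are only
sketches):

  ./configure --enable-curses ...                  # iconv probe fails:
                                                   #   iconv required for curses UI but not available
  ./configure --enable-curses ...                  # iconv found, no curses library:
                                                   #   Cannot find curses
  ./configure --target-list=x86_64-linux-user ...  # no system emulation: the test is skipped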

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 meson.build | 85 ++++++++++++++++++++++++++++-------------------------
 1 file changed, 45 insertions(+), 40 deletions(-)

diff --git a/meson.build b/meson.build
index c1c45e9845..15732f4701 100644
--- a/meson.build
+++ b/meson.build
@@ -455,40 +455,40 @@ if targetos == 'linux' and have_tools and not get_option('mpath').disabled()
 endif
 
 iconv = not_found
-if not get_option('iconv').disabled()
-  libiconv = cc.find_library('iconv',
-                             required: false,
-                             static: enable_static)
-  if cc.links('''
-    #include <iconv.h>
-    int main(void) {
-      iconv_t conv = iconv_open("WCHAR_T", "UCS-2");
-      return conv != (iconv_t) -1;
-    }''', dependencies: [libiconv])
-    iconv = declare_dependency(dependencies: [libiconv])
-  endif
-endif
-if get_option('iconv').enabled() and not iconv.found()
-  error('Cannot detect iconv API')
-endif
-
 curses = not_found
-if iconv.found() and not get_option('curses').disabled()
-  curses_libname_list = ['ncursesw', 'ncurses', 'cursesw', 'pdcurses']
-  curses_test = '''
-    #include <locale.h>
-    #include <curses.h>
-    #include <wchar.h>
-    int main(void) {
-      wchar_t wch = L'w';
-      setlocale(LC_ALL, "");
-      resize_term(0, 0);
-      addwstr(L"wide chars\n");
-      addnwstr(&wch, 1);
-      add_wch(WACS_DEGREE);
-      return 0;
-    }'''
-  foreach curses_libname : curses_libname_list
+if have_system and not get_option('curses').disabled()
+  if not get_option('iconv').disabled()
+    libiconv = cc.find_library('iconv',
+                               required: false,
+                               static: enable_static)
+    if cc.links('''
+      #include <iconv.h>
+      int main(void) {
+        iconv_t conv = iconv_open("WCHAR_T", "UCS-2");
+        return conv != (iconv_t) -1;
+      }''', dependencies: [libiconv])
+      iconv = declare_dependency(dependencies: [libiconv])
+    endif
+  endif
+  if get_option('iconv').enabled() and not iconv.found()
+    error('Cannot detect iconv API')
+  endif
+  if iconv.found()
+    curses_libname_list = ['ncursesw', 'ncurses', 'cursesw', 'pdcurses']
+    curses_test = '''
+      #include <locale.h>
+      #include <curses.h>
+      #include <wchar.h>
+      int main(void) {
+        wchar_t wch = L'w';
+        setlocale(LC_ALL, "");
+        resize_term(0, 0);
+        addwstr(L"wide chars\n");
+        addnwstr(&wch, 1);
+        add_wch(WACS_DEGREE);
+        return 0;
+      }'''
+    foreach curses_libname : curses_libname_list
       libcurses = dependency(curses_libname,
                              required: false,
                              method: 'pkg-config',
@@ -510,13 +510,18 @@ if iconv.found() and not get_option('curses').disabled()
           break
         endif
       endif
-  endforeach
-endif
-if get_option('curses').enabled() and not curses.found()
-  if not iconv.found()
-    error('Cannot detect iconv API')
-  else
-    error('Cannot detect curses API')
+    endforeach
+  endif
+  if not curses.found()
+    if iconv.found()
+      if get_option('curses').enabled()
+        error('Cannot find curses')
+      endif
+    elif get_option('curses').enabled()
+      error('iconv required for curses UI but not available')
+    else
+      warning('iconv required for curses UI but not available, disabling')
+    endif
   endif
 endif
 
-- 
2.26.2




^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [PULL 14/22] configure: fix handling of --docdir parameter
  2020-10-16 11:47 [PULL 00/22] Build system + misc changes for 2020-10-16 Paolo Bonzini
                   ` (12 preceding siblings ...)
  2020-10-16 11:48 ` [PULL 13/22] meson: cleanup curses/iconv test Paolo Bonzini
@ 2020-10-16 11:48 ` Paolo Bonzini
  2020-10-16 11:48 ` [PULL 15/22] meson: Only install icons and qemu.desktop if have_system Paolo Bonzini
                   ` (7 subsequent siblings)
  21 siblings, 0 replies; 25+ messages in thread
From: Paolo Bonzini @ 2020-10-16 11:48 UTC (permalink / raw)
  To: qemu-devel; +Cc: Philippe Mathieu-Daudé, Bruce Rogers

From: Bruce Rogers <brogers@suse.com>

Commit ca8c0909f01 changed qemu_docdir to docdir, keeping the qemu_docdir
name only for the final assignment. Unfortunately, one instance of
qemu_docdir was missed: the one set from the --docdir parameter.
This patch restores the proper handling of the --docdir parameter.

Fixes: ca8c0909f01 ("configure: build docdir like other suffixed
directories")

Signed-off-by: Bruce Rogers <brogers@suse.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20201015190742.270629-1-brogers@suse.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 configure | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/configure b/configure
index c83a2eeb9d..3edbdd2a24 100755
--- a/configure
+++ b/configure
@@ -969,7 +969,7 @@ for opt do
   ;;
   --with-suffix=*) qemu_suffix="$optarg"
   ;;
-  --docdir=*) qemu_docdir="$optarg"
+  --docdir=*) docdir="$optarg"
   ;;
   --sysconfdir=*) sysconfdir="$optarg"
   ;;
@@ -5776,7 +5776,6 @@ fi
 qemu_confdir="$sysconfdir/$qemu_suffix"
 qemu_moddir="$libdir/$qemu_suffix"
 qemu_datadir="$datadir/$qemu_suffix"
-qemu_docdir="$docdir/$qemu_suffix"
 qemu_localedir="$datadir/locale"
 qemu_icondir="$datadir/icons"
 qemu_desktopdir="$datadir/applications"
-- 
2.26.2




^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [PULL 15/22] meson: Only install icons and qemu.desktop if have_system
  2020-10-16 11:47 [PULL 00/22] Build system + misc changes for 2020-10-16 Paolo Bonzini
                   ` (13 preceding siblings ...)
  2020-10-16 11:48 ` [PULL 14/22] configure: fix handling of --docdir parameter Paolo Bonzini
@ 2020-10-16 11:48 ` Paolo Bonzini
  2020-10-16 11:48 ` [PULL 16/22] docs: Fix Sphinx configuration for msys2/mingw Paolo Bonzini
                   ` (6 subsequent siblings)
  21 siblings, 0 replies; 25+ messages in thread
From: Paolo Bonzini @ 2020-10-16 11:48 UTC (permalink / raw)
  To: qemu-devel; +Cc: Bruce Rogers

From: Bruce Rogers <brogers@suse.com>

These files are not needed for a linux-user-only install.

Signed-off-by: Bruce Rogers <brogers@suse.com>
Message-Id: <20201015201840.282956-1-brogers@suse.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 ui/meson.build | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/ui/meson.build b/ui/meson.build
index 78ad792ffb..fb36d305ca 100644
--- a/ui/meson.build
+++ b/ui/meson.build
@@ -113,8 +113,11 @@ if have_system or xkbcommon.found()
 endif
 
 subdir('shader')
-subdir('icons')
 
-install_data('qemu.desktop', install_dir: config_host['qemu_desktopdir'])
+if have_system
+  subdir('icons')
+
+  install_data('qemu.desktop', install_dir: config_host['qemu_desktopdir'])
+endif
 
 modules += {'ui': ui_modules}
-- 
2.26.2




^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [PULL 16/22] docs: Fix Sphinx configuration for msys2/mingw
  2020-10-16 11:47 [PULL 00/22] Build system + misc changes for 2020-10-16 Paolo Bonzini
                   ` (14 preceding siblings ...)
  2020-10-16 11:48 ` [PULL 15/22] meson: Only install icons and qemu.desktop if have_system Paolo Bonzini
@ 2020-10-16 11:48 ` Paolo Bonzini
  2020-10-16 11:48 ` [PULL 17/22] meson: move SPHINX_ARGS references within "if build_docs" Paolo Bonzini
                   ` (5 subsequent siblings)
  21 siblings, 0 replies; 25+ messages in thread
From: Paolo Bonzini @ 2020-10-16 11:48 UTC (permalink / raw)
  To: qemu-devel; +Cc: Yonggang Luo

From: Yonggang Luo <luoyonggang@gmail.com>

Python doesn't support running ../scripts/kernel-doc directly.
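
Presumably this is because Windows does not interpret the #! line, so a
subprocess cannot execute the Perl script on its own; spelling out the
interpreter works everywhere.  A sketch of the difference (file arguments
elided):

  ./scripts/kernel-doc -rst -enable-lineno ...        # relies on the shebang
  perl ./scripts/kernel-doc -rst -enable-lineno ...   # explicit interpreter, as used below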

Signed-off-by: Yonggang Luo <luoyonggang@gmail.com>
Message-Id: <20201015220626.418-2-luoyonggang@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 docs/conf.py             | 2 +-
 docs/sphinx/kerneldoc.py | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/conf.py b/docs/conf.py
index 00e1b750e2..e584f68393 100644
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -241,7 +241,7 @@ texinfo_documents = [
 # We use paths starting from qemu_docdir here so that you can run
 # sphinx-build from anywhere and the kerneldoc extension can still
 # find everything.
-kerneldoc_bin = os.path.join(qemu_docdir, '../scripts/kernel-doc')
+kerneldoc_bin = ['perl', os.path.join(qemu_docdir, '../scripts/kernel-doc')]
 kerneldoc_srctree = os.path.join(qemu_docdir, '..')
 hxtool_srctree = os.path.join(qemu_docdir, '..')
 qapidoc_srctree = os.path.join(qemu_docdir, '..')
diff --git a/docs/sphinx/kerneldoc.py b/docs/sphinx/kerneldoc.py
index 3e87940206..3ac277d162 100644
--- a/docs/sphinx/kerneldoc.py
+++ b/docs/sphinx/kerneldoc.py
@@ -67,7 +67,7 @@ class KernelDocDirective(Directive):
 
     def run(self):
         env = self.state.document.settings.env
-        cmd = [env.config.kerneldoc_bin, '-rst', '-enable-lineno']
+        cmd = env.config.kerneldoc_bin + ['-rst', '-enable-lineno']
 
         filename = env.config.kerneldoc_srctree + '/' + self.arguments[0]
         export_file_patterns = []
-- 
2.26.2




^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [PULL 17/22] meson: move SPHINX_ARGS references within "if build_docs"
  2020-10-16 11:47 [PULL 00/22] Build system + misc changes for 2020-10-16 Paolo Bonzini
                   ` (15 preceding siblings ...)
  2020-10-16 11:48 ` [PULL 16/22] docs: Fix Sphinx configuration for msys2/mingw Paolo Bonzini
@ 2020-10-16 11:48 ` Paolo Bonzini
  2020-10-16 11:48 ` [PULL 18/22] meson: Move the detection logic for sphinx to meson Paolo Bonzini
                   ` (4 subsequent siblings)
  21 siblings, 0 replies; 25+ messages in thread
From: Paolo Bonzini @ 2020-10-16 11:48 UTC (permalink / raw)
  To: qemu-devel

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 tests/qapi-schema/meson.build | 88 +++++++++++++++++------------------
 1 file changed, 44 insertions(+), 44 deletions(-)

diff --git a/tests/qapi-schema/meson.build b/tests/qapi-schema/meson.build
index 1f222a7a13..304ef939bd 100644
--- a/tests/qapi-schema/meson.build
+++ b/tests/qapi-schema/meson.build
@@ -219,53 +219,53 @@ qapi_doc = custom_target('QAPI doc',
                                     '-p', 'doc-good-', '@INPUT0@' ],
                          depend_files: qapi_gen_depends)
 
-# Test the document-comment document generation code by running a test schema
-# file through Sphinx's plain-text builder and comparing the result against
-# a golden reference. This is in theory susceptible to failures if Sphinx
-# changes its output, but the text output has historically been very stable
-# (no changes between Sphinx 1.6 and 3.0), so it is a better bet than
-# texinfo or HTML generation, both of which have had changes. We might
-# need to add more sophisticated logic here in future for some sort of
-# fuzzy comparison if future Sphinx versions produce different text,
-# but for now the simple comparison suffices.
-qapi_doc_out = custom_target('QAPI rST doc',
-                             output: ['doc-good.txt'],
-                             input: files('doc-good.json', 'doc-good.rst'),
-                             build_by_default: build_docs,
-                             depend_files: sphinx_extn_depends,
-                             # We use -E to suppress Sphinx's caching, because
-                             # we want it to always really run the QAPI doc
-                             # generation code. It also means we don't
-                             # clutter up the build dir with the cache.
-                             command: [SPHINX_ARGS,
-                                       '-b', 'text', '-E',
-                                       '-c', meson.source_root() / 'docs',
-                                       '-D', 'master_doc=doc-good',
-                                       meson.current_source_dir(),
-                                       meson.current_build_dir()])
+if build_docs
+  # Test the document-comment document generation code by running a test schema
+  # file through Sphinx's plain-text builder and comparing the result against
+  # a golden reference. This is in theory susceptible to failures if Sphinx
+  # changes its output, but the text output has historically been very stable
+  # (no changes between Sphinx 1.6 and 3.0), so it is a better bet than
+  # texinfo or HTML generation, both of which have had changes. We might
+  # need to add more sophisticated logic here in future for some sort of
+  # fuzzy comparison if future Sphinx versions produce different text,
+  # but for now the simple comparison suffices.
+  qapi_doc_out = custom_target('QAPI rST doc',
+                               output: ['doc-good.txt'],
+                               input: files('doc-good.json', 'doc-good.rst'),
+                               build_by_default: true,
+                               depend_files: sphinx_extn_depends,
+                               # We use -E to suppress Sphinx's caching, because
+                               # we want it to always really run the QAPI doc
+                               # generation code. It also means we don't
+                               # clutter up the build dir with the cache.
+                               command: [SPHINX_ARGS,
+                                         '-b', 'text', '-E',
+                                         '-c', meson.source_root() / 'docs',
+                                         '-D', 'master_doc=doc-good',
+                                         meson.current_source_dir(),
+                                         meson.current_build_dir()])
 
-# Fix possible inconsistency in line endings in generated output and
-# in the golden reference (which could otherwise cause test failures
-# on Windows hosts). Unfortunately diff --strip-trailing-cr
-# is GNU-diff only. The odd-looking perl is because we must avoid
-# using an explicit '\' character in the command arguments to
-# a custom_target(), as Meson will unhelpfully replace it with a '/'
-# (https://github.com/mesonbuild/meson/issues/1564)
-qapi_doc_out_nocr = custom_target('QAPI rST doc newline-sanitized',
-                                  output: ['doc-good.txt.nocr'],
-                                  input: qapi_doc_out[0],
-                                  build_by_default: build_docs,
-                                  command: ['perl', '-pe', '$x = chr 13; s/$x$//', '@INPUT@'],
-                                  capture: true)
+  # Fix possible inconsistency in line endings in generated output and
+  # in the golden reference (which could otherwise cause test failures
+  # on Windows hosts). Unfortunately diff --strip-trailing-cr
+  # is GNU-diff only. The odd-looking perl is because we must avoid
+  # using an explicit '\' character in the command arguments to
+  # a custom_target(), as Meson will unhelpfully replace it with a '/'
+  # (https://github.com/mesonbuild/meson/issues/1564)
+  qapi_doc_out_nocr = custom_target('QAPI rST doc newline-sanitized',
+                                    output: ['doc-good.txt.nocr'],
+                                    input: qapi_doc_out[0],
+                                    build_by_default: true,
+                                    command: ['perl', '-pe', '$x = chr 13; s/$x$//', '@INPUT@'],
+                                    capture: true)
 
-qapi_doc_ref_nocr = custom_target('QAPI rST doc reference newline-sanitized',
-                                  output: ['doc-good.ref.nocr'],
-                                  input: files('doc-good.txt'),
-                                  build_by_default: build_docs,
-                                  command: ['perl', '-pe', '$x = chr 13; s/$x$//', '@INPUT@'],
-                                  capture: true)
+  qapi_doc_ref_nocr = custom_target('QAPI rST doc reference newline-sanitized',
+                                    output: ['doc-good.ref.nocr'],
+                                    input: files('doc-good.txt'),
+                                    build_by_default: true,
+                                    command: ['perl', '-pe', '$x = chr 13; s/$x$//', '@INPUT@'],
+                                    capture: true)
 
-if build_docs
   # "full_path()" needed here to work around
   # https://github.com/mesonbuild/meson/issues/7585
   test('QAPI rST doc', diff, args: ['-u', qapi_doc_ref_nocr[0].full_path(),
-- 
2.26.2




^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [PULL 18/22] meson: Move the detection logic for sphinx to meson
  2020-10-16 11:47 [PULL 00/22] Build system + misc changes for 2020-10-16 Paolo Bonzini
                   ` (16 preceding siblings ...)
  2020-10-16 11:48 ` [PULL 17/22] meson: move SPHINX_ARGS references within "if build_docs" Paolo Bonzini
@ 2020-10-16 11:48 ` Paolo Bonzini
  2020-10-16 13:27   ` 罗勇刚(Yonggang Luo)
  2020-10-16 11:48 ` [PULL 19/22] cirrus: Enable doc build on msys2/mingw Paolo Bonzini
                   ` (3 subsequent siblings)
  21 siblings, 1 reply; 25+ messages in thread
From: Paolo Bonzini @ 2020-10-16 11:48 UTC (permalink / raw)
  To: qemu-devel; +Cc: Yonggang Luo

From: Yonggang Luo <luoyonggang@gmail.com>

Signed-off-by: Yonggang Luo <luoyonggang@gmail.com>
Message-Id: <20201015220626.418-4-luoyonggang@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 configure         | 59 ++++-------------------------------------------
 docs/meson.build  | 46 ++++++++++++++++++++++++++++++++++++
 meson.build       | 30 ++++++++----------------
 meson_options.txt |  4 ++++
 4 files changed, 64 insertions(+), 75 deletions(-)

diff --git a/configure b/configure
index 3edbdd2a24..68f097861d 100755
--- a/configure
+++ b/configure
@@ -297,7 +297,7 @@ brlapi=""
 curl=""
 iconv="auto"
 curses="auto"
-docs=""
+docs="auto"
 fdt="auto"
 netmap="no"
 sdl="auto"
@@ -820,15 +820,6 @@ do
     fi
 done
 
-sphinx_build=
-for binary in sphinx-build-3 sphinx-build
-do
-    if has "$binary"
-    then
-        sphinx_build=$(command -v "$binary")
-        break
-    fi
-done
 
 # Check for ancillary tools used in testing
 genisoimage=
@@ -1228,9 +1219,9 @@ for opt do
   ;;
   --disable-crypto-afalg) crypto_afalg="no"
   ;;
-  --disable-docs) docs="no"
+  --disable-docs) docs="disabled"
   ;;
-  --enable-docs) docs="yes"
+  --enable-docs) docs="enabled"
   ;;
   --disable-vhost-net) vhost_net="no"
   ;;
@@ -4419,45 +4410,6 @@ if check_include linux/btrfs.h ; then
     btrfs=yes
 fi
 
-# If we're making warnings fatal, apply this to Sphinx runs as well
-sphinx_werror=""
-if test "$werror" = "yes"; then
-    sphinx_werror="-W"
-fi
-
-# Check we have a new enough version of sphinx-build
-has_sphinx_build() {
-    # This is a bit awkward but works: create a trivial document and
-    # try to run it with our configuration file (which enforces a
-    # version requirement). This will fail if either
-    # sphinx-build doesn't exist at all or if it is too old.
-    mkdir -p "$TMPDIR1/sphinx"
-    touch "$TMPDIR1/sphinx/index.rst"
-    "$sphinx_build" $sphinx_werror -c "$source_path/docs" \
-                    -b html "$TMPDIR1/sphinx" \
-                    "$TMPDIR1/sphinx/out"  >> config.log 2>&1
-}
-
-# Check if tools are available to build documentation.
-if test "$docs" != "no" ; then
-  if has_sphinx_build; then
-    sphinx_ok=yes
-  else
-    sphinx_ok=no
-  fi
-  if test "$sphinx_ok" = "yes"; then
-    docs=yes
-  else
-    if test "$docs" = "yes" ; then
-      if has $sphinx_build && test "$sphinx_ok" != "yes"; then
-        echo "Warning: $sphinx_build exists but it is either too old or uses too old a Python version" >&2
-      fi
-      feature_not_found "docs" "Install a Python 3 version of python-sphinx"
-    fi
-    docs=no
-  fi
-fi
-
 # Search for bswap_32 function
 byteswap_h=no
 cat > $TMPC << EOF
@@ -6093,9 +6045,6 @@ qemu_version=$(head $source_path/VERSION)
 echo "PKGVERSION=$pkgversion" >>$config_host_mak
 echo "SRC_PATH=$source_path" >> $config_host_mak
 echo "TARGET_DIRS=$target_list" >> $config_host_mak
-if [ "$docs" = "yes" ] ; then
-  echo "BUILD_DOCS=yes" >> $config_host_mak
-fi
 if test "$modules" = "yes"; then
   # $shacmd can generate a hash started with digit, which the compiler doesn't
   # like as an symbol. So prefix it with an underscore
@@ -6784,7 +6733,6 @@ fi
 echo "ROMS=$roms" >> $config_host_mak
 echo "MAKE=$make" >> $config_host_mak
 echo "PYTHON=$python" >> $config_host_mak
-echo "SPHINX_BUILD=$sphinx_build" >> $config_host_mak
 echo "GENISOIMAGE=$genisoimage" >> $config_host_mak
 echo "MESON=$meson" >> $config_host_mak
 echo "NINJA=$ninja" >> $config_host_mak
@@ -7066,6 +7014,7 @@ NINJA=$ninja $meson setup \
         -Dgettext=$gettext -Dxkbcommon=$xkbcommon -Du2f=$u2f \
         -Dcapstone=$capstone -Dslirp=$slirp -Dfdt=$fdt \
         -Diconv=$iconv -Dcurses=$curses -Dlibudev=$libudev\
+        -Ddocs=$docs -Dsphinx_build=$sphinx_build \
         $cross_arg \
         "$PWD" "$source_path"
 
diff --git a/docs/meson.build b/docs/meson.build
index 0340d489ac..789dca8cc0 100644
--- a/docs/meson.build
+++ b/docs/meson.build
@@ -1,4 +1,50 @@
+if get_option('sphinx_build') == ''
+  sphinx_build = find_program(['sphinx-build-3', 'sphinx-build'],
+                              required: get_option('docs'))
+else
+  sphinx_build = find_program(get_option('sphinx_build'),
+                              required: get_option('docs'))
+endif
+
+# Check if tools are available to build documentation.
+build_docs = false
+if sphinx_build.found()
+  SPHINX_ARGS = [sphinx_build]
+  # If we're making warnings fatal, apply this to Sphinx runs as well
+  if get_option('werror')
+    SPHINX_ARGS += [ '-W' ]
+  endif
+
+  # This is a bit awkward but works: create a trivial document and
+  # try to run it with our configuration file (which enforces a
+  # version requirement). This will fail if sphinx-build is too old.
+  run_command('mkdir', ['-p', tmpdir / 'sphinx'])
+  run_command('touch', [tmpdir / 'sphinx/index.rst'])
+  sphinx_build_test_out = run_command(SPHINX_ARGS + [
+    '-c', meson.current_source_dir(),
+    '-b', 'html', tmpdir / 'sphinx',
+    tmpdir / 'sphinx/out'])
+  build_docs = (sphinx_build_test_out.returncode() == 0)
+
+  if not build_docs
+    warning('@0@ exists but it is either too old or uses too old a Python version'.format(sphinx_build_option))
+    if get_option('docs').enabled()
+      error('Install a Python 3 version of python-sphinx')
+    endif
+  endif
+endif
+
 if build_docs
+  SPHINX_ARGS += ['-Dversion=' + meson.project_version(), '-Drelease=' + config_host['PKGVERSION']]
+
+  sphinx_extn_depends = [ meson.source_root() / 'docs/sphinx/depfile.py',
+                          meson.source_root() / 'docs/sphinx/hxtool.py',
+                          meson.source_root() / 'docs/sphinx/kerneldoc.py',
+                          meson.source_root() / 'docs/sphinx/kernellog.py',
+                          meson.source_root() / 'docs/sphinx/qapidoc.py',
+                          meson.source_root() / 'docs/sphinx/qmp_lexer.py',
+                          qapi_gen_depends ]
+
   configure_file(output: 'index.html',
                  input: files('index.html.in'),
                  configuration: {'VERSION': meson.project_version()},
diff --git a/meson.build b/meson.build
index 15732f4701..05fb59a00b 100644
--- a/meson.build
+++ b/meson.build
@@ -17,7 +17,13 @@ cc = meson.get_compiler('c')
 config_host = keyval.load(meson.current_build_dir() / 'config-host.mak')
 enable_modules = 'CONFIG_MODULES' in config_host
 enable_static = 'CONFIG_STATIC' in config_host
-build_docs = 'BUILD_DOCS' in config_host
+
+# Temporary directory used for files created while
+# configure runs. Since it is in the build directory
+# we can safely blow away any previous version of it
+# (and we need not jump through hoops to try to delete
+# it when configure exits.)
+tmpdir = meson.current_build_dir() / 'meson-private/temp'
 
 if get_option('qemu_suffix').startswith('/')
   error('qemu_suffix cannot start with a /')
@@ -1266,22 +1272,6 @@ foreach d : hx_headers
 endforeach
 genh += hxdep
 
-SPHINX_ARGS = [config_host['SPHINX_BUILD'],
-               '-Dversion=' + meson.project_version(),
-               '-Drelease=' + config_host['PKGVERSION']]
-
-if get_option('werror')
-  SPHINX_ARGS += [ '-W' ]
-endif
-
-sphinx_extn_depends = [ meson.source_root() / 'docs/sphinx/depfile.py',
-                        meson.source_root() / 'docs/sphinx/hxtool.py',
-                        meson.source_root() / 'docs/sphinx/kerneldoc.py',
-                        meson.source_root() / 'docs/sphinx/kernellog.py',
-                        meson.source_root() / 'docs/sphinx/qapidoc.py',
-                        meson.source_root() / 'docs/sphinx/qmp_lexer.py',
-                        qapi_gen_depends ]
-
 ###################
 # Collect sources #
 ###################
@@ -1866,8 +1856,8 @@ endif
 subdir('scripts')
 subdir('tools')
 subdir('pc-bios')
-subdir('tests')
 subdir('docs')
+subdir('tests')
 if 'CONFIG_GTK' in config_host
   subdir('po')
 endif
@@ -1949,7 +1939,7 @@ summary_info += {'QEMU_CFLAGS':       config_host['QEMU_CFLAGS']}
 summary_info += {'QEMU_LDFLAGS':      config_host['QEMU_LDFLAGS']}
 summary_info += {'make':              config_host['MAKE']}
 summary_info += {'python':            '@0@ (version: @1@)'.format(python.full_path(), python.language_version())}
-summary_info += {'sphinx-build':      config_host['SPHINX_BUILD']}
+summary_info += {'sphinx-build':      sphinx_build.found()}
 summary_info += {'genisoimage':       config_host['GENISOIMAGE']}
 # TODO: add back version
 summary_info += {'slirp support':     slirp_opt == 'disabled' ? false : slirp_opt}
@@ -2017,7 +2007,7 @@ if config_host.has_key('CONFIG_XEN_BACKEND')
   summary_info += {'xen ctrl version':  config_host['CONFIG_XEN_CTRL_INTERFACE_VERSION']}
 endif
 summary_info += {'brlapi support':    config_host.has_key('CONFIG_BRLAPI')}
-summary_info += {'Documentation':     config_host.has_key('BUILD_DOCS')}
+summary_info += {'Documentation':     build_docs}
 summary_info += {'PIE':               get_option('b_pie')}
 summary_info += {'vde support':       config_host.has_key('CONFIG_VDE')}
 summary_info += {'netmap support':    config_host.has_key('CONFIG_NETMAP')}
diff --git a/meson_options.txt b/meson_options.txt
index 77b3fabd00..967229b66e 100644
--- a/meson_options.txt
+++ b/meson_options.txt
@@ -2,7 +2,11 @@ option('qemu_suffix', type : 'string', value: 'qemu',
        description: 'Suffix for QEMU data/modules/config directories (can be empty)')
 option('docdir', type : 'string', value : 'doc',
        description: 'Base directory for documentation installation (can be empty)')
+option('sphinx_build', type : 'string', value : '',
+       description: 'Use specified sphinx-build [$sphinx_build] for building document (default to be empty)')
 
+option('docs', type : 'feature', value : 'auto',
+       description: 'Documentations build support')
 option('gettext', type : 'boolean', value : true,
        description: 'Localization of the GTK+ user interface')
 option('sparse', type : 'feature', value : 'auto',
-- 
2.26.2
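
As a quick usage sketch (the build directory name and the sphinx-build
path below are only examples, not part of this patch), the two new meson
options can be toggled on an existing build tree with:

    # enable the docs build and point meson at a specific sphinx-build
    meson configure build -Ddocs=enabled -Dsphinx_build=/usr/bin/sphinx-build-3

    # or disable the documentation build entirely
    meson configure build -Ddocs=disabled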




^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [PULL 19/22] cirrus: Enable doc build on msys2/mingw
  2020-10-16 11:47 [PULL 00/22] Build system + misc changes for 2020-10-16 Paolo Bonzini
                   ` (17 preceding siblings ...)
  2020-10-16 11:48 ` [PULL 18/22] meson: Move the detection logic for sphinx to meson Paolo Bonzini
@ 2020-10-16 11:48 ` Paolo Bonzini
  2020-10-16 11:48 ` [PULL 20/22] fuzz: Disable QEMU's SIG{INT,HUP,TERM} handlers Paolo Bonzini
                   ` (2 subsequent siblings)
  21 siblings, 0 replies; 25+ messages in thread
From: Paolo Bonzini @ 2020-10-16 11:48 UTC (permalink / raw)
  To: qemu-devel; +Cc: Yonggang Luo

From: Yonggang Luo <luoyonggang@gmail.com>

The rST docs currently depend on the older sphinx 2.x series, so
install that version by downloading the package directly.
Also drop the university mirror workaround, since the main repo has
recovered.

Signed-off-by: Yonggang Luo <luoyonggang@gmail.com>
Message-Id: <20201015220626.418-5-luoyonggang@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 .cirrus.yml | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/.cirrus.yml b/.cirrus.yml
index 396888fbd3..e099da0fec 100644
--- a/.cirrus.yml
+++ b/.cirrus.yml
@@ -76,7 +76,6 @@ windows_msys2_task:
         ((Get-Content -path C:\tools\msys64\etc\\post-install\\07-pacman-key.post -Raw) -replace '--refresh-keys', '--version') | Set-Content -Path C:\tools\msys64\etc\\post-install\\07-pacman-key.post
         C:\tools\msys64\usr\bin\bash.exe -lc "sed -i 's/^CheckSpace/#CheckSpace/g' /etc/pacman.conf"
         C:\tools\msys64\usr\bin\bash.exe -lc "export"
-        C:\tools\msys64\usr\bin\bash.exe -lc "grep -rl 'repo.msys2.org/' /etc/pacman.d/mirrorlist.* | xargs sed -i 's/repo.msys2.org\//mirrors.tuna.tsinghua.edu.cn\/msys2\//g'"
         C:\tools\msys64\usr\bin\pacman.exe --noconfirm -Sy
         echo Y | C:\tools\msys64\usr\bin\pacman.exe --noconfirm -Suu --overwrite=*
         taskkill /F /FI "MODULES eq msys-2.0.dll"
@@ -111,6 +110,11 @@ windows_msys2_task:
           mingw-w64-x86_64-curl \
           mingw-w64-x86_64-gnutls \
           "
+        bitsadmin /transfer msys_download /dynamic /download /priority FOREGROUND `
+          https://repo.msys2.org/mingw/x86_64/mingw-w64-x86_64-python-sphinx-2.3.1-1-any.pkg.tar.xz `
+          C:\tools\mingw-w64-x86_64-python-sphinx-2.3.1-1-any.pkg.tar.xz
+        C:\tools\msys64\usr\bin\bash.exe -lc "pacman --noconfirm -U /c/tools/mingw-w64-x86_64-python-sphinx-2.3.1-1-any.pkg.tar.xz"
+        del C:\tools\mingw-w64-x86_64-python-sphinx-2.3.1-1-any.pkg.tar.xz
         C:\tools\msys64\usr\bin\bash.exe -lc "rm -rf /var/cache/pacman/pkg/*"
         cd C:\tools\msys64
         echo "Start archive"
-- 
2.26.2
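
For reference, the same install can be reproduced by hand from an MSYS2
shell; this is only a sketch, assuming wget is available (the package URL
and the pacman -U step are the ones used in the patch above):

    wget https://repo.msys2.org/mingw/x86_64/mingw-w64-x86_64-python-sphinx-2.3.1-1-any.pkg.tar.xz
    pacman --noconfirm -U mingw-w64-x86_64-python-sphinx-2.3.1-1-any.pkg.tar.xz
    rm mingw-w64-x86_64-python-sphinx-2.3.1-1-any.pkg.tar.xz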




^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [PULL 20/22] fuzz: Disable QEMU's SIG{INT,HUP,TERM} handlers
  2020-10-16 11:47 [PULL 00/22] Build system + misc changes for 2020-10-16 Paolo Bonzini
                   ` (18 preceding siblings ...)
  2020-10-16 11:48 ` [PULL 19/22] cirrus: Enable doc build on msys2/mingw Paolo Bonzini
@ 2020-10-16 11:48 ` Paolo Bonzini
  2020-10-16 11:48 ` [PULL 21/22] hax: unbreak accelerator cpu code after cpus.c split Paolo Bonzini
  2020-10-16 11:48 ` [PULL 22/22] ci: include configure and meson logs in all jobs if configure fails Paolo Bonzini
  21 siblings, 0 replies; 25+ messages in thread
From: Paolo Bonzini @ 2020-10-16 11:48 UTC (permalink / raw)
  To: qemu-devel; +Cc: Alexander Bulekov, Darren Kenny

From: Alexander Bulekov <alxndr@bu.edu>

Prior to this patch, the only way I found to terminate the fuzzer was
either to:
 1. Explicitly specify the number of fuzzer runs with the -runs= flag
 2. SIGKILL the process with "pkill -9 qemu-fuzz-*" or similar

In addition to being annoying to deal with, SIGKILLing the process skips
over any exit handlers (e.g. those registered with atexit()). This is bad,
since some fuzzers might create temporary files that should ideally be
removed on exit using an exit handler. The only way to achieve a clean
exit now is to specify -runs=N, but the desired "N" is tricky to
identify prior to fuzzing.

Why doesn't the process exit with the standard SIGINT, SIGHUP and SIGTERM
signals? QEMU installs its own handlers for these signals in
os-posix.c:os_setup_signal_handling, which notify the main loop that an
exit was requested. The fuzzer, however, does not run qemu_main_loop,
which performs the main_loop_should_exit() check.  This means that the
fuzzer effectively ignores these signals. As we don't really care about
cleanly stopping the disposable fuzzer "VM", this patch uninstalls
QEMU's signal handlers. Thus, we can stop the fuzzer with
SIG{INT,HUP,TERM} and the fuzzing code can optionally use atexit() to
clean up temporary files/resources.

Reviewed-by: Darren Kenny <darren.kenny@oracle.com>
Signed-off-by: Alexander Bulekov <alxndr@bu.edu>
Message-Id: <20201014142157.46028-1-alxndr@bu.edu>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 tests/qtest/fuzz/fuzz.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/tests/qtest/fuzz/fuzz.c b/tests/qtest/fuzz/fuzz.c
index d926c490c5..eb0070437f 100644
--- a/tests/qtest/fuzz/fuzz.c
+++ b/tests/qtest/fuzz/fuzz.c
@@ -217,5 +217,13 @@ int LLVMFuzzerInitialize(int *argc, char ***argv, char ***envp)
     /* re-enable the rcu atfork, which was previously disabled in qemu_init */
     rcu_enable_atfork();
 
+    /*
+     * Disable QEMU's signal handlers, since we manually control the main_loop,
+     * and don't check for main_loop_should_exit
+     */
+    signal(SIGINT, SIG_DFL);
+    signal(SIGHUP, SIG_DFL);
+    signal(SIGTERM, SIG_DFL);
+
     return 0;
 }
-- 
2.26.2
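
With the default handlers back in place, the fuzzer can now be stopped
cleanly from another terminal. A small sketch, reusing the process name
pattern from the description above:

    # SIGTERM (or SIGINT/SIGHUP) now terminates the fuzzer, so any
    # atexit() cleanup handlers registered by the fuzz target get to run
    pkill -TERM 'qemu-fuzz-*'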




^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [PULL 21/22] hax: unbreak accelerator cpu code after cpus.c split
  2020-10-16 11:47 [PULL 00/22] Build system + misc changes for 2020-10-16 Paolo Bonzini
                   ` (19 preceding siblings ...)
  2020-10-16 11:48 ` [PULL 20/22] fuzz: Disable QEMU's SIG{INT,HUP,TERM} handlers Paolo Bonzini
@ 2020-10-16 11:48 ` Paolo Bonzini
  2020-10-16 11:48 ` [PULL 22/22] ci: include configure and meson logs in all jobs if configure fails Paolo Bonzini
  21 siblings, 0 replies; 25+ messages in thread
From: Paolo Bonzini @ 2020-10-16 11:48 UTC (permalink / raw)
  To: qemu-devel; +Cc: Volker Rümelin, Claudio Fontana

From: Claudio Fontana <cfontana@suse.de>

During my split of cpus.c, the line
"current_cpu = cpu"
was removed by mistake, causing HAX to break.

This commit fixes the situation by restoring it.

Reported-by: Volker Rümelin <vr_qemu@t-online.de>
Fixes: e92558e4bf8059ce4f0b310afe218802b72766bc
Signed-off-by: Claudio Fontana <cfontana@suse.de>
Message-Id: <20201016080032.13914-1-cfontana@suse.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 target/i386/hax-cpus.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/target/i386/hax-cpus.c b/target/i386/hax-cpus.c
index 99770e590c..f72c85bd49 100644
--- a/target/i386/hax-cpus.c
+++ b/target/i386/hax-cpus.c
@@ -38,6 +38,7 @@ static void *hax_cpu_thread_fn(void *arg)
     qemu_thread_get_self(cpu->thread);
 
     cpu->thread_id = qemu_get_thread_id();
+    current_cpu = cpu;
     hax_init_vcpu(cpu);
     cpu_thread_signal_created(cpu);
     qemu_guest_random_seed_thread_part2(cpu->random_seed);
-- 
2.26.2




^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [PULL 22/22] ci: include configure and meson logs in all jobs if configure fails
  2020-10-16 11:47 [PULL 00/22] Build system + misc changes for 2020-10-16 Paolo Bonzini
                   ` (20 preceding siblings ...)
  2020-10-16 11:48 ` [PULL 21/22] hax: unbreak accelerator cpu code after cpus.c split Paolo Bonzini
@ 2020-10-16 11:48 ` Paolo Bonzini
  21 siblings, 0 replies; 25+ messages in thread
From: Paolo Bonzini @ 2020-10-16 11:48 UTC (permalink / raw)
  To: qemu-devel; +Cc: Philippe Mathieu-Daudé

Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 .cirrus.yml    | 6 +++---
 .gitlab-ci.yml | 6 +++---
 .travis.yml    | 8 ++++----
 3 files changed, 10 insertions(+), 10 deletions(-)

diff --git a/.cirrus.yml b/.cirrus.yml
index e099da0fec..81a2960b1a 100644
--- a/.cirrus.yml
+++ b/.cirrus.yml
@@ -13,7 +13,7 @@ freebsd_12_task:
   script:
     - mkdir build
     - cd build
-    - ../configure --enable-werror || { cat config.log; exit 1; }
+    - ../configure --enable-werror || { cat config.log meson-logs/meson-log.txt; exit 1; }
     - gmake -j$(sysctl -n hw.ncpu)
     - gmake -j$(sysctl -n hw.ncpu) check V=1
 
@@ -27,7 +27,7 @@ macos_task:
     - cd build
     - ../configure --python=/usr/local/bin/python3 --enable-werror
                    --extra-cflags='-Wno-error=deprecated-declarations'
-                   || { cat config.log; exit 1; }
+                   || { cat config.log meson-logs/meson-log.txt; exit 1; }
     - gmake -j$(sysctl -n hw.ncpu)
     - gmake check V=1
 
@@ -41,7 +41,7 @@ macos_xcode_task:
     - mkdir build
     - cd build
     - ../configure --extra-cflags='-Wno-error=deprecated-declarations'
-                   --enable-werror --cc=clang || { cat config.log; exit 1; }
+                   --enable-werror --cc=clang || { cat config.log meson-logs/meson-log.txt; exit 1; }
     - gmake -j$(sysctl -n hw.ncpu)
     - gmake check V=1
 
diff --git a/.gitlab-ci.yml b/.gitlab-ci.yml
index 8ffd415ca5..66ad7aa5c2 100644
--- a/.gitlab-ci.yml
+++ b/.gitlab-ci.yml
@@ -32,7 +32,7 @@ include:
         ../configure --enable-werror $CONFIGURE_ARGS --target-list="$TARGETS" ;
       else
         ../configure --enable-werror $CONFIGURE_ARGS ;
-      fi
+      fi || { cat config.log meson-logs/meson-log.txt && exit 1; }
     - make -j"$JOBS"
     - if test -n "$MAKE_CHECK_ARGS";
       then
@@ -229,7 +229,7 @@ build-tcg-disabled:
   script:
     - mkdir build
     - cd build
-    - ../configure --disable-tcg --audio-drv-list=""
+    - ../configure --disable-tcg --audio-drv-list="" || { cat config.log meson-logs/meson-log.txt && exit 1; }
     - make -j"$JOBS"
     - make check-unit
     - make check-qapi-schema
@@ -322,7 +322,7 @@ build-tci:
     - mkdir build
     - cd build
     - ../configure --enable-tcg-interpreter
-        --target-list="$(for tg in $TARGETS; do echo -n ${tg}'-softmmu '; done)"
+        --target-list="$(for tg in $TARGETS; do echo -n ${tg}'-softmmu '; done)" || { cat config.log meson-logs/meson-log.txt && exit 1; }
     - make -j"$JOBS"
     - make run-tcg-tests-x86_64-softmmu
     - make tests/qtest/boot-serial-test tests/qtest/cdrom-test tests/qtest/pxe-test
diff --git a/.travis.yml b/.travis.yml
index d7bfbb8bfe..a3d78171ca 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -95,7 +95,7 @@ before_install:
 # Configure step - may be overridden
 before_script:
   - mkdir -p ${BUILD_DIR} && cd ${BUILD_DIR}
-  - ${SRC_DIR}/configure ${BASE_CONFIG} ${CONFIG} || { cat config.log && exit 1; }
+  - ${SRC_DIR}/configure ${BASE_CONFIG} ${CONFIG} || { cat config.log meson-logs/meson-log.txt && exit 1; }
 
 # Main build & test - rarely overridden - controlled by TEST_CMD
 script:
@@ -199,7 +199,7 @@ jobs:
       compiler: clang
       before_script:
         - mkdir -p ${BUILD_DIR} && cd ${BUILD_DIR}
-        - ${SRC_DIR}/configure ${CONFIG} --extra-cflags="-fsanitize=undefined -Werror" || { cat config.log && exit 1; }
+        - ${SRC_DIR}/configure ${CONFIG} --extra-cflags="-fsanitize=undefined -Werror" || { cat config.log meson-logs/meson-log.txt && exit 1; }
 
 
     - name: "Clang (other-softmmu)"
@@ -298,7 +298,7 @@ jobs:
         - TEST_CMD=""
       before_script:
         - mkdir -p ${BUILD_DIR} && cd ${BUILD_DIR}
-        - ${SRC_DIR}/configure ${CONFIG} --extra-cflags="-g3 -O0 -fsanitize=thread" || { cat config.log && exit 1; }
+        - ${SRC_DIR}/configure ${CONFIG} --extra-cflags="-g3 -O0 -fsanitize=thread" || { cat config.log meson-logs/meson-log.txt && exit 1; }
 
 
     # Run check-tcg against linux-user
@@ -530,7 +530,7 @@ jobs:
         - ls -l ${SRC_DIR}/qemu-${QEMU_VERSION}.tar.bz2
         - tar -xf ${SRC_DIR}/qemu-${QEMU_VERSION}.tar.bz2 && cd qemu-${QEMU_VERSION}
         - mkdir -p release-build && cd release-build
-        - ../configure ${BASE_CONFIG} ${CONFIG} || { cat config.log && exit 1; }
+        - ../configure ${BASE_CONFIG} ${CONFIG} || { cat config.log meson-logs/meson-log.txt && exit 1; }
         - make install
   allow_failures:
     - env: UNRELIABLE=true
-- 
2.26.2



^ permalink raw reply related	[flat|nested] 25+ messages in thread

* Re: [PULL 18/22] meson: Move the detection logic for sphinx to meson
  2020-10-16 11:48 ` [PULL 18/22] meson: Move the detection logic for sphinx to meson Paolo Bonzini
@ 2020-10-16 13:27   ` 罗勇刚(Yonggang Luo)
  2020-10-19 19:40     ` Eric Blake
  0 siblings, 1 reply; 25+ messages in thread
From: 罗勇刚(Yonggang Luo) @ 2020-10-16 13:27 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: qemu-level

[-- Attachment #1: Type: text/plain, Size: 11024 bytes --]

On Fri, Oct 16, 2020 at 7:48 PM Paolo Bonzini <pbonzini@redhat.com> wrote:
>
> From: Yonggang Luo <luoyonggang@gmail.com>
>
> Signed-off-by: Yonggang Luo <luoyonggang@gmail.com>
> Message-Id: <20201015220626.418-4-luoyonggang@gmail.com>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
>  configure         | 59 ++++-------------------------------------------
>  docs/meson.build  | 46 ++++++++++++++++++++++++++++++++++++
>  meson.build       | 30 ++++++++----------------
>  meson_options.txt |  4 ++++
>  4 files changed, 64 insertions(+), 75 deletions(-)
>
> diff --git a/configure b/configure
> index 3edbdd2a24..68f097861d 100755
> --- a/configure
> +++ b/configure
> @@ -297,7 +297,7 @@ brlapi=""
>  curl=""
>  iconv="auto"
>  curses="auto"
> -docs=""
> +docs="auto"
>  fdt="auto"
>  netmap="no"
>  sdl="auto"
> @@ -820,15 +820,6 @@ do
>      fi
>  done
>
> -sphinx_build=
> -for binary in sphinx-build-3 sphinx-build
> -do
> -    if has "$binary"
> -    then
> -        sphinx_build=$(command -v "$binary")
> -        break
> -    fi
> -done
>
>  # Check for ancillary tools used in testing
>  genisoimage=
> @@ -1228,9 +1219,9 @@ for opt do
>    ;;
>    --disable-crypto-afalg) crypto_afalg="no"
>    ;;
> -  --disable-docs) docs="no"
> +  --disable-docs) docs="disabled"
>    ;;
> -  --enable-docs) docs="yes"
> +  --enable-docs) docs="enabled"
>    ;;
>    --disable-vhost-net) vhost_net="no"
>    ;;
> @@ -4419,45 +4410,6 @@ if check_include linux/btrfs.h ; then
>      btrfs=yes
>  fi
>
> -# If we're making warnings fatal, apply this to Sphinx runs as well
> -sphinx_werror=""
> -if test "$werror" = "yes"; then
> -    sphinx_werror="-W"
> -fi
> -
> -# Check we have a new enough version of sphinx-build
> -has_sphinx_build() {
> -    # This is a bit awkward but works: create a trivial document and
> -    # try to run it with our configuration file (which enforces a
> -    # version requirement). This will fail if either
> -    # sphinx-build doesn't exist at all or if it is too old.
> -    mkdir -p "$TMPDIR1/sphinx"
> -    touch "$TMPDIR1/sphinx/index.rst"
> -    "$sphinx_build" $sphinx_werror -c "$source_path/docs" \
> -                    -b html "$TMPDIR1/sphinx" \
> -                    "$TMPDIR1/sphinx/out"  >> config.log 2>&1
> -}
> -
> -# Check if tools are available to build documentation.
> -if test "$docs" != "no" ; then
> -  if has_sphinx_build; then
> -    sphinx_ok=yes
> -  else
> -    sphinx_ok=no
> -  fi
> -  if test "$sphinx_ok" = "yes"; then
> -    docs=yes
> -  else
> -    if test "$docs" = "yes" ; then
> -      if has $sphinx_build && test "$sphinx_ok" != "yes"; then
> -        echo "Warning: $sphinx_build exists but it is either too old or
uses too old a Python version" >&2
> -      fi
> -      feature_not_found "docs" "Install a Python 3 version of
python-sphinx"
> -    fi
> -    docs=no
> -  fi
> -fi
> -
>  # Search for bswap_32 function
>  byteswap_h=no
>  cat > $TMPC << EOF
> @@ -6093,9 +6045,6 @@ qemu_version=$(head $source_path/VERSION)
>  echo "PKGVERSION=$pkgversion" >>$config_host_mak
>  echo "SRC_PATH=$source_path" >> $config_host_mak
>  echo "TARGET_DIRS=$target_list" >> $config_host_mak
> -if [ "$docs" = "yes" ] ; then
> -  echo "BUILD_DOCS=yes" >> $config_host_mak
> -fi
>  if test "$modules" = "yes"; then
>    # $shacmd can generate a hash started with digit, which the compiler
doesn't
>    # like as an symbol. So prefix it with an underscore
> @@ -6784,7 +6733,6 @@ fi
>  echo "ROMS=$roms" >> $config_host_mak
>  echo "MAKE=$make" >> $config_host_mak
>  echo "PYTHON=$python" >> $config_host_mak
> -echo "SPHINX_BUILD=$sphinx_build" >> $config_host_mak
>  echo "GENISOIMAGE=$genisoimage" >> $config_host_mak
>  echo "MESON=$meson" >> $config_host_mak
>  echo "NINJA=$ninja" >> $config_host_mak
> @@ -7066,6 +7014,7 @@ NINJA=$ninja $meson setup \
>          -Dgettext=$gettext -Dxkbcommon=$xkbcommon -Du2f=$u2f \
>          -Dcapstone=$capstone -Dslirp=$slirp -Dfdt=$fdt \
>          -Diconv=$iconv -Dcurses=$curses -Dlibudev=$libudev\
> +        -Ddocs=$docs -Dsphinx_build=$sphinx_build \
>          $cross_arg \
>          "$PWD" "$source_path"
>
> diff --git a/docs/meson.build b/docs/meson.build
> index 0340d489ac..789dca8cc0 100644
> --- a/docs/meson.build
> +++ b/docs/meson.build
> @@ -1,4 +1,50 @@

> +if get_option('sphinx_build') == ''
> +  sphinx_build = find_program(['sphinx-build-3', 'sphinx-build'],
> +                              required: get_option('docs'))
> +else
> +  sphinx_build = find_program(get_option('sphinx_build'),
> +                              required: get_option('docs'))
> +endif
> +
> +# Check if tools are available to build documentation.
> +build_docs = false
> +if sphinx_build.found()
> +  SPHINX_ARGS = [sphinx_build]
> +  # If we're making warnings fatal, apply this to Sphinx runs as well
> +  if get_option('werror')
> +    SPHINX_ARGS += [ '-W' ]
> +  endif
> +
> +  # This is a bit awkward but works: create a trivial document and
> +  # try to run it with our configuration file (which enforces a
> +  # version requirement). This will fail if sphinx-build is too old.
> +  run_command('mkdir', ['-p', tmpdir / 'sphinx'])
> +  run_command('touch', [tmpdir / 'sphinx/index.rst'])
> +  sphinx_build_test_out = run_command(SPHINX_ARGS + [
> +    '-c', meson.current_source_dir(),
> +    '-b', 'html', tmpdir / 'sphinx',
> +    tmpdir / 'sphinx/out'])
> +  build_docs = (sphinx_build_test_out.returncode() == 0)
> +
> +  if not build_docs
> +    warning('@0@ exists but it is either too old or uses too old a
Python version'.format(sphinx_build_option))
Here need to be get_option('sphinx_build')




> +    if get_option('docs').enabled()
> +      error('Install a Python 3 version of python-sphinx')
> +    endif
> +  endif
> +endif
> +
>  if build_docs
> +  SPHINX_ARGS += ['-Dversion=' + meson.project_version(), '-Drelease=' +
config_host['PKGVERSION']]
> +
> +  sphinx_extn_depends = [ meson.source_root() / 'docs/sphinx/depfile.py',
> +                          meson.source_root() / 'docs/sphinx/hxtool.py',
> +                          meson.source_root() /
'docs/sphinx/kerneldoc.py',
> +                          meson.source_root() /
'docs/sphinx/kernellog.py',
> +                          meson.source_root() / 'docs/sphinx/qapidoc.py',
> +                          meson.source_root() /
'docs/sphinx/qmp_lexer.py',
> +                          qapi_gen_depends ]
> +
>    configure_file(output: 'index.html',
>                   input: files('index.html.in'),
>                   configuration: {'VERSION': meson.project_version()},
> diff --git a/meson.build b/meson.build
> index 15732f4701..05fb59a00b 100644
> --- a/meson.build
> +++ b/meson.build
> @@ -17,7 +17,13 @@ cc = meson.get_compiler('c')
>  config_host = keyval.load(meson.current_build_dir() / 'config-host.mak')
>  enable_modules = 'CONFIG_MODULES' in config_host
>  enable_static = 'CONFIG_STATIC' in config_host
> -build_docs = 'BUILD_DOCS' in config_host
> +
> +# Temporary directory used for files created while
> +# configure runs. Since it is in the build directory
> +# we can safely blow away any previous version of it
> +# (and we need not jump through hoops to try to delete
> +# it when configure exits.)
> +tmpdir = meson.current_build_dir() / 'meson-private/temp'
>
>  if get_option('qemu_suffix').startswith('/')
>    error('qemu_suffix cannot start with a /')
> @@ -1266,22 +1272,6 @@ foreach d : hx_headers
>  endforeach
>  genh += hxdep
>
> -SPHINX_ARGS = [config_host['SPHINX_BUILD'],
> -               '-Dversion=' + meson.project_version(),
> -               '-Drelease=' + config_host['PKGVERSION']]
> -
> -if get_option('werror')
> -  SPHINX_ARGS += [ '-W' ]
> -endif
> -
> -sphinx_extn_depends = [ meson.source_root() / 'docs/sphinx/depfile.py',
> -                        meson.source_root() / 'docs/sphinx/hxtool.py',
> -                        meson.source_root() / 'docs/sphinx/kerneldoc.py',
> -                        meson.source_root() / 'docs/sphinx/kernellog.py',
> -                        meson.source_root() / 'docs/sphinx/qapidoc.py',
> -                        meson.source_root() / 'docs/sphinx/qmp_lexer.py',
> -                        qapi_gen_depends ]
> -
>  ###################
>  # Collect sources #
>  ###################
> @@ -1866,8 +1856,8 @@ endif
>  subdir('scripts')
>  subdir('tools')
>  subdir('pc-bios')
> -subdir('tests')
>  subdir('docs')
> +subdir('tests')
>  if 'CONFIG_GTK' in config_host
>    subdir('po')
>  endif
> @@ -1949,7 +1939,7 @@ summary_info += {'QEMU_CFLAGS':
config_host['QEMU_CFLAGS']}
>  summary_info += {'QEMU_LDFLAGS':      config_host['QEMU_LDFLAGS']}
>  summary_info += {'make':              config_host['MAKE']}
>  summary_info += {'python':            '@0@ (version: @1@)'.format(python.full_path(),
python.language_version())}
> -summary_info += {'sphinx-build':      config_host['SPHINX_BUILD']}
> +summary_info += {'sphinx-build':      sphinx_build.found()}
>  summary_info += {'genisoimage':       config_host['GENISOIMAGE']}
>  # TODO: add back version
>  summary_info += {'slirp support':     slirp_opt == 'disabled' ? false :
slirp_opt}
> @@ -2017,7 +2007,7 @@ if config_host.has_key('CONFIG_XEN_BACKEND')
>    summary_info += {'xen ctrl version':
 config_host['CONFIG_XEN_CTRL_INTERFACE_VERSION']}
>  endif
>  summary_info += {'brlapi support':
 config_host.has_key('CONFIG_BRLAPI')}
> -summary_info += {'Documentation':     config_host.has_key('BUILD_DOCS')}
> +summary_info += {'Documentation':     build_docs}
>  summary_info += {'PIE':               get_option('b_pie')}
>  summary_info += {'vde support':       config_host.has_key('CONFIG_VDE')}
>  summary_info += {'netmap support':
 config_host.has_key('CONFIG_NETMAP')}
> diff --git a/meson_options.txt b/meson_options.txt
> index 77b3fabd00..967229b66e 100644
> --- a/meson_options.txt
> +++ b/meson_options.txt
> @@ -2,7 +2,11 @@ option('qemu_suffix', type : 'string', value: 'qemu',
>         description: 'Suffix for QEMU data/modules/config directories
(can be empty)')
>  option('docdir', type : 'string', value : 'doc',
>         description: 'Base directory for documentation installation (can
be empty)')
> +option('sphinx_build', type : 'string', value : '',
> +       description: 'Use specified sphinx-build [$sphinx_build] for
building document (default to be empty)')
>
> +option('docs', type : 'feature', value : 'auto',
> +       description: 'Documentations build support')
>  option('gettext', type : 'boolean', value : true,
>         description: 'Localization of the GTK+ user interface')
>  option('sparse', type : 'feature', value : 'auto',
> --
> 2.26.2
>
>


--
         此致
礼
罗勇刚
Yours
    sincerely,
Yonggang Luo


^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PULL 18/22] meson: Move the detection logic for sphinx to meson
  2020-10-16 13:27   ` 罗勇刚(Yonggang Luo)
@ 2020-10-19 19:40     ` Eric Blake
  0 siblings, 0 replies; 25+ messages in thread
From: Eric Blake @ 2020-10-19 19:40 UTC (permalink / raw)
  To: luoyonggang, Paolo Bonzini; +Cc: qemu-level

On 10/16/20 8:27 AM, 罗勇刚(Yonggang Luo) wrote:
> On Fri, Oct 16, 2020 at 7:48 PM Paolo Bonzini <pbonzini@redhat.com> wrote:
>>

Meta-comment.  Your quoting style leaves a lot to be desired:


>> -    if test "$docs" = "yes" ; then
>> -      if has $sphinx_build && test "$sphinx_ok" != "yes"; then
>> -        echo "Warning: $sphinx_build exists but it is either too old or
> uses too old a Python version" >&2
>> -      fi
>> -      feature_not_found "docs" "Install a Python 3 version of
> python-sphinx"
>> -    fi

Here, your mailer wrapped lines but did not quote the wrapped portion, 
while...

>> +  if not build_docs
>> +    warning('@0@ exists but it is either too old or uses too old a
> Python version'.format(sphinx_build_option))
> Here need to be get_option('sphinx_build')

...here you added new content without any newline separator, right after 
another case of a mailer wrapping a line.  It makes it very difficult to 
decipher which text you are quoting and which text you are adding.

You may want to consider using a better mail engine that does not split 
quoted lines incorrectly, as well as using a blank line both before and 
after every block of your inline replies, to call more visual attention 
to what you are adding to the conversation.

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3226
Virtualization:  qemu.org | libvirt.org



^ permalink raw reply	[flat|nested] 25+ messages in thread

end of thread, other threads:[~2020-10-19 19:42 UTC | newest]

Thread overview: 25+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-10-16 11:47 [PULL 00/22] Build system + misc changes for 2020-10-16 Paolo Bonzini
2020-10-16 11:47 ` [PULL 01/22] submodules: bump meson to 0.55.3 Paolo Bonzini
2020-10-16 11:47 ` [PULL 02/22] Makefile: Ensure cscope.out/tags/TAGS are generated in the source tree Paolo Bonzini
2020-10-16 11:47 ` [PULL 03/22] tests/Makefile.include: unbreak non-tcg builds Paolo Bonzini
2020-10-16 11:47 ` [PULL 04/22] make: run shell with pipefail Paolo Bonzini
2020-10-16 11:47 ` [PULL 05/22] tests: add missing generated sources to testqapi Paolo Bonzini
2020-10-16 11:47 ` [PULL 06/22] configure: move QEMU_INCLUDES to meson Paolo Bonzini
2020-10-16 11:47 ` [PULL 07/22] dockerfiles: enable Centos 8 PowerTools Paolo Bonzini
2020-10-16 11:48 ` [PULL 08/22] add ninja to dockerfiles, CI configurations and test VMs Paolo Bonzini
2020-10-16 11:48 ` [PULL 09/22] build: cleanups to Makefile Paolo Bonzini
2020-10-16 11:48 ` [PULL 10/22] build: replace ninjatool with ninja Paolo Bonzini
2020-10-16 11:48 ` [PULL 11/22] build: add --enable/--disable-libudev Paolo Bonzini
2020-10-16 11:48 ` [PULL 12/22] meson.build: don't condition iconv detection on library detection Paolo Bonzini
2020-10-16 11:48 ` [PULL 13/22] meson: cleanup curses/iconv test Paolo Bonzini
2020-10-16 11:48 ` [PULL 14/22] configure: fix handling of --docdir parameter Paolo Bonzini
2020-10-16 11:48 ` [PULL 15/22] meson: Only install icons and qemu.desktop if have_system Paolo Bonzini
2020-10-16 11:48 ` [PULL 16/22] docs: Fix Sphinx configuration for msys2/mingw Paolo Bonzini
2020-10-16 11:48 ` [PULL 17/22] meson: move SPHINX_ARGS references within "if build_docs" Paolo Bonzini
2020-10-16 11:48 ` [PULL 18/22] meson: Move the detection logic for sphinx to meson Paolo Bonzini
2020-10-16 13:27   ` 罗勇刚(Yonggang Luo)
2020-10-19 19:40     ` Eric Blake
2020-10-16 11:48 ` [PULL 19/22] cirrus: Enable doc build on msys2/mingw Paolo Bonzini
2020-10-16 11:48 ` [PULL 20/22] fuzz: Disable QEMU's SIG{INT,HUP,TERM} handlers Paolo Bonzini
2020-10-16 11:48 ` [PULL 21/22] hax: unbreak accelerator cpu code after cpus.c split Paolo Bonzini
2020-10-16 11:48 ` [PULL 22/22] ci: include configure and meson logs in all jobs if configure fails Paolo Bonzini

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).