* [PATCH 0/2] Restore performance of rpm in Docker containers
@ 2018-05-10 22:20 Peter Kjellerstedt
  2018-05-10 22:20 ` [PATCH 1/2] Revert "rpm: add a patch to help with Docker performance issues" Peter Kjellerstedt
  2018-05-10 22:20 ` [PATCH 2/2] rpm: Assume a max limit of 1024 open files Peter Kjellerstedt
  0 siblings, 2 replies; 6+ messages in thread
From: Peter Kjellerstedt @ 2018-05-10 22:20 UTC (permalink / raw)
  To: openembedded-core; +Cc: Alexander Kanavin

As recently discussed on the Yocto list in the "ROOTFS_RPM_DEBUG
undocumented" thread, there are performance issues with dnf/rpm when
building in a Docker container. The problem is that the Docker
container reports the maximum number of open files as unlimited, and
rpm contains code that loops over all potentially open file
descriptors. Alexander Kanavin supplied a patch for rpm (see commit
6f1822e5 in meta) that was supposed to correct this, but unfortunately
it only solved part of the problem.
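
For reference, the problematic pattern looks roughly like this (a
simplified, standalone sketch of the loop rpm runs before exec'ing a
scriptlet; see the patches below for the actual code):

    /* Mimics rpm's pre-exec loop that forces FD_CLOEXEC on all
     * potentially open file descriptors. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        long open_max = sysconf(_SC_OPEN_MAX); /* "unlimited" in Docker */
        if (open_max == -1)
            open_max = 1024;

        /* One fcntl() system call per possible descriptor: with a
         * limit of 100000000 this means 100 million system calls
         * every time a scriptlet is exec'ed. */
        for (long fdno = 3; fdno < open_max; fdno++) {
            int flag = fcntl(fdno, F_GETFD);
            if (flag == -1 || (flag & FD_CLOEXEC))
                continue;
            fcntl(fdno, F_SETFD, FD_CLOEXEC);
        }

        printf("scanned %ld potential file descriptors\n", open_max - 3);
        return 0;
    }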

To really solve the problem, I have reverted Alexander's patch and
applied a new one that addresses both the code path identified by
Alexander and an identical code path in another function.

It is possible to reproduce the problem without Docker by increasing 
the maximum number of open files. This can be done by editing 
/etc/security/limits.conf and adding:

* - nofile  100000000

where 100000000 is the maximum number of open files. You will most
likely also need to increase the global limit by running:

sudo sysctl -w fs.nr_open=100000000

After that, ssh to localhost to get a new session where the new
settings are active. Then use `ulimit -n <number>` to set the maximum
number of open files you want to test with.
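
To double-check which limits the new session actually sees before
measuring, a small helper like this can be used (an illustrative test
program only, not part of the patches):

    #include <stdio.h>
    #include <sys/resource.h>
    #include <unistd.h>

    int main(void)
    {
        struct rlimit rl;

        /* The soft limit is what sysconf(_SC_OPEN_MAX) reports. */
        if (getrlimit(RLIMIT_NOFILE, &rl) == 0)
            printf("RLIMIT_NOFILE: soft=%llu hard=%llu\n",
                   (unsigned long long)rl.rlim_cur,
                   (unsigned long long)rl.rlim_max);
        printf("sysconf(_SC_OPEN_MAX) = %ld\n", sysconf(_SC_OPEN_MAX));
        return 0;
    }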

I also did some measurements when building core-image-minimal. Here
are the times I got for the do_rootfs task with different maximums:

Max files  Time 
---------  -----
1024       30 seconds
1048576    50 seconds
10000000   3 minutes 40 seconds
100000000  31 minutes

//Peter

The following changes since commit f4c938c47424424b89cde2269bd92cebc9a6ac1a:

  packagegroup: Do not add libssp to SDK (2018-05-09 10:47:51 +0100)

are available in the git repository at:

  git://push.yoctoproject.org/poky-contrib pkj/rpm-performance

Peter Kjellerstedt (2):
  Revert "rpm: add a patch to help with Docker performance issues"
  rpm: Assume a max limit of 1024 open files

 ...001-Assume-a-max-limit-of-1024-open-files.patch | 61 ++++++++++++++++++++++
 ...FD_CLOEXEC-on-opened-files-before-exec-fr.patch | 49 -----------------
 meta/recipes-devtools/rpm/rpm_4.14.1.bb            |  3 +-
 3 files changed, 63 insertions(+), 50 deletions(-)
 create mode 100644 meta/recipes-devtools/rpm/files/0001-Assume-a-max-limit-of-1024-open-files.patch
 delete mode 100644 meta/recipes-devtools/rpm/files/0001-Revert-Set-FD_CLOEXEC-on-opened-files-before-exec-fr.patch

-- 
2.12.0




* [PATCH 1/2] Revert "rpm: add a patch to help with Docker performance issues"
  2018-05-10 22:20 [PATCH 0/2] Restore performance of rpm in Docker containers Peter Kjellerstedt
@ 2018-05-10 22:20 ` Peter Kjellerstedt
  2018-05-10 22:20 ` [PATCH 2/2] rpm: Assume a max limit of 1024 open files Peter Kjellerstedt
  1 sibling, 0 replies; 6+ messages in thread
From: Peter Kjellerstedt @ 2018-05-10 22:20 UTC (permalink / raw)
  To: openembedded-core; +Cc: Alexander Kanavin

This reverts commit 6f1822e5f1eaafd8bc46e999de730c1fcca77f3a.

The reverted patch only solved part of the problem.

Signed-off-by: Peter Kjellerstedt <peter.kjellerstedt@axis.com>
---
 ...FD_CLOEXEC-on-opened-files-before-exec-fr.patch | 49 ----------------------
 meta/recipes-devtools/rpm/rpm_4.14.1.bb            |  1 -
 2 files changed, 50 deletions(-)
 delete mode 100644 meta/recipes-devtools/rpm/files/0001-Revert-Set-FD_CLOEXEC-on-opened-files-before-exec-fr.patch

diff --git a/meta/recipes-devtools/rpm/files/0001-Revert-Set-FD_CLOEXEC-on-opened-files-before-exec-fr.patch b/meta/recipes-devtools/rpm/files/0001-Revert-Set-FD_CLOEXEC-on-opened-files-before-exec-fr.patch
deleted file mode 100644
index 4651409a65..0000000000
--- a/meta/recipes-devtools/rpm/files/0001-Revert-Set-FD_CLOEXEC-on-opened-files-before-exec-fr.patch
+++ /dev/null
@@ -1,49 +0,0 @@
-From 982e47df7b82c5ffe3c414cf5641f08dba0f0e64 Mon Sep 17 00:00:00 2001
-From: Alexander Kanavin <alex.kanavin@gmail.com>
-Date: Fri, 26 Jan 2018 16:32:04 +0200
-Subject: [PATCH] Revert "Set FD_CLOEXEC on opened files before exec from lua
- script is called"
-
-This reverts commit 7a7c31f551ff167f8718aea6d5048f6288d60205.
-The reason is that when _SC_OPEN_MAX is much higher than the usual 1024
-(for example inside docker), the performance drops sharply.
-
-Upstream has been notified:
-https://bugzilla.redhat.com/show_bug.cgi?id=1537564
-
-Upstream-Status: Inappropriate [upstream needs to come up with a better fix]
-Signed-off-by: Alexander Kanavin <alex.kanavin@gmail.com>
----
- luaext/lposix.c | 12 ------------
- 1 file changed, 12 deletions(-)
-
-diff --git a/luaext/lposix.c b/luaext/lposix.c
-index 0a7c26c71..c578c5a11 100644
---- a/luaext/lposix.c
-+++ b/luaext/lposix.c
-@@ -335,22 +335,10 @@ static int Pexec(lua_State *L)			/** exec(path,[args]) */
- 	const char *path = luaL_checkstring(L, 1);
- 	int i,n=lua_gettop(L);
- 	char **argv;
--	int flag, fdno, open_max;
- 
- 	if (!have_forked)
- 	    return luaL_error(L, "exec not permitted in this context");
- 
--	open_max = sysconf(_SC_OPEN_MAX);
--	if (open_max == -1) {
--	    open_max = 1024;
--	}
--	for (fdno = 3; fdno < open_max; fdno++) {
--	    flag = fcntl(fdno, F_GETFD);
--	    if (flag == -1 || (flag & FD_CLOEXEC))
--		continue;
--	    fcntl(fdno, F_SETFD, FD_CLOEXEC);
--	}
--
- 	argv = malloc((n+1)*sizeof(char*));
- 	if (argv==NULL) return luaL_error(L,"not enough memory");
- 	argv[0] = (char*)path;
--- 
-2.15.1
-
diff --git a/meta/recipes-devtools/rpm/rpm_4.14.1.bb b/meta/recipes-devtools/rpm/rpm_4.14.1.bb
index 818d3ef9e3..85d791ce5c 100644
--- a/meta/recipes-devtools/rpm/rpm_4.14.1.bb
+++ b/meta/recipes-devtools/rpm/rpm_4.14.1.bb
@@ -39,7 +39,6 @@ SRC_URI = "git://github.com/rpm-software-management/rpm;branch=rpm-4.14.x \
            file://0003-rpmstrpool.c-make-operations-over-string-pools-threa.patch \
            file://0004-build-pack.c-remove-static-local-variables-from-buil.patch \
            file://0001-perl-disable-auto-reqs.patch \
-           file://0001-Revert-Set-FD_CLOEXEC-on-opened-files-before-exec-fr.patch \
            file://0001-configure.ac-add-option-for-dbus.patch \
            "
 
-- 
2.12.0




* [PATCH 2/2] rpm: Assume a max limit of 1024 open files
  2018-05-10 22:20 [PATCH 0/2] Restore performance of rpm in Docker containers Peter Kjellerstedt
  2018-05-10 22:20 ` [PATCH 1/2] Revert "rpm: add a patch to help with Docker performance issues" Peter Kjellerstedt
@ 2018-05-10 22:20 ` Peter Kjellerstedt
  2018-05-11  5:43   ` Alexander Kanavin
  1 sibling, 1 reply; 6+ messages in thread
From: Peter Kjellerstedt @ 2018-05-10 22:20 UTC (permalink / raw)
  To: openembedded-core; +Cc: Alexander Kanavin

If sysconf(_SC_OPEN_MAX) is much greater than the usual 1024 (for
example inside a Docker container), the performance drops sharply.

Signed-off-by: Peter Kjellerstedt <peter.kjellerstedt@axis.com>
---
 ...001-Assume-a-max-limit-of-1024-open-files.patch | 61 ++++++++++++++++++++++
 meta/recipes-devtools/rpm/rpm_4.14.1.bb            |  2 +
 2 files changed, 63 insertions(+)
 create mode 100644 meta/recipes-devtools/rpm/files/0001-Assume-a-max-limit-of-1024-open-files.patch

diff --git a/meta/recipes-devtools/rpm/files/0001-Assume-a-max-limit-of-1024-open-files.patch b/meta/recipes-devtools/rpm/files/0001-Assume-a-max-limit-of-1024-open-files.patch
new file mode 100644
index 0000000000..c5ca892aa2
--- /dev/null
+++ b/meta/recipes-devtools/rpm/files/0001-Assume-a-max-limit-of-1024-open-files.patch
@@ -0,0 +1,61 @@
+From 0d40bae6c02799ea35c95acfaa8991ddf92ab232 Mon Sep 17 00:00:00 2001
+From: Peter Kjellerstedt <pkj@axis.com>
+Date: Fri, 4 May 2018 22:47:11 +0200
+Subject: [PATCH] Assume a max limit of 1024 open files
+
+If sysconf(_SC_OPEN_MAX) is much greater than the usual 1024 (for
+example inside a Docker container), the performance drops sharply.
+
+Upstream-Status: Inappropriate [upstream needs to come up with a better fix]
+Signed-off-by: Peter Kjellerstedt <peter.kjellerstedt@axis.com>
+---
+ lib/rpmscript.c | 7 +------
+ luaext/lposix.c | 8 ++------
+ 2 files changed, 3 insertions(+), 12 deletions(-)
+
+diff --git a/lib/rpmscript.c b/lib/rpmscript.c
+index 19a28fb75..ca7838835 100644
+--- a/lib/rpmscript.c
++++ b/lib/rpmscript.c
+@@ -160,17 +160,12 @@ static void doScriptExec(ARGV_const_t argv, ARGV_const_t prefixes,
+     int flag;
+     int fdno;
+     int xx;
+-    int open_max;
+ 
+     /* SIGPIPE is ignored in rpm, reset to default for the scriptlet */
+     (void) signal(SIGPIPE, SIG_DFL);
+ 
+     /* XXX Force FD_CLOEXEC on all inherited fdno's. */
+-    open_max = sysconf(_SC_OPEN_MAX);
+-    if (open_max == -1) {
+-	open_max = 1024;
+-    }
+-    for (fdno = 3; fdno < open_max; fdno++) {
++    for (fdno = 3; fdno < 1024; fdno++) {
+ 	flag = fcntl(fdno, F_GETFD);
+ 	if (flag == -1 || (flag & FD_CLOEXEC))
+ 	    continue;
+diff --git a/luaext/lposix.c b/luaext/lposix.c
+index 0a7c26c71..2640db51c 100644
+--- a/luaext/lposix.c
++++ b/luaext/lposix.c
+@@ -335,16 +335,12 @@ static int Pexec(lua_State *L)			/** exec(path,[args]) */
+ 	const char *path = luaL_checkstring(L, 1);
+ 	int i,n=lua_gettop(L);
+ 	char **argv;
+-	int flag, fdno, open_max;
++	int flag, fdno;
+ 
+ 	if (!have_forked)
+ 	    return luaL_error(L, "exec not permitted in this context");
+ 
+-	open_max = sysconf(_SC_OPEN_MAX);
+-	if (open_max == -1) {
+-	    open_max = 1024;
+-	}
+-	for (fdno = 3; fdno < open_max; fdno++) {
++	for (fdno = 3; fdno < 1024; fdno++) {
+ 	    flag = fcntl(fdno, F_GETFD);
+ 	    if (flag == -1 || (flag & FD_CLOEXEC))
+ 		continue;
diff --git a/meta/recipes-devtools/rpm/rpm_4.14.1.bb b/meta/recipes-devtools/rpm/rpm_4.14.1.bb
index 85d791ce5c..55b0396cd8 100644
--- a/meta/recipes-devtools/rpm/rpm_4.14.1.bb
+++ b/meta/recipes-devtools/rpm/rpm_4.14.1.bb
@@ -41,6 +41,8 @@ SRC_URI = "git://github.com/rpm-software-management/rpm;branch=rpm-4.14.x \
            file://0001-perl-disable-auto-reqs.patch \
            file://0001-configure.ac-add-option-for-dbus.patch \
            "
+SRC_URI_append_class-native = " file://0001-Assume-a-max-limit-of-1024-open-files.patch"
+SRC_URI_append_class-nativesdk = " file://0001-Assume-a-max-limit-of-1024-open-files.patch"
 
 PE = "1"
 SRCREV = "bfee1410af51c1cc9724791fb8d985260a62102b"
-- 
2.12.0




* Re: [PATCH 2/2] rpm: Assume a max limit of 1024 open files
  2018-05-10 22:20 ` [PATCH 2/2] rpm: Assume a max limit of 1024 open files Peter Kjellerstedt
@ 2018-05-11  5:43   ` Alexander Kanavin
  2018-05-11 10:39     ` Peter Kjellerstedt
  0 siblings, 1 reply; 6+ messages in thread
From: Alexander Kanavin @ 2018-05-11  5:43 UTC (permalink / raw)
  To: Peter Kjellerstedt, openembedded-core

On 05/11/2018 01:20 AM, Peter Kjellerstedt wrote:
> If sysconf(_SC_OPEN_MAX) is much greater than the usual 1024 (for
> example inside a Docker container), the performance drops sharply.

Please do not drop the link to the upstream bug when replacing one patch 
with another. You can also simply amend my patch and add your SOB.

Also, why assume specifically 1024? We either need to close all the open 
files, or none, and your patch creates a situation where only some of 
the files may be closed. I'd say we should drop those two code snippets 
altogether instead of hardcoding 1024 into them.

Oh, and: please do comment in the upstream bug (e.g. with your 
benchmarks from the cover letter), otherwise upstream may do nothing 
because they're not using Docker.

Alex



* Re: [PATCH 2/2] rpm: Assume a max limit of 1024 open files
  2018-05-11  5:43   ` Alexander Kanavin
@ 2018-05-11 10:39     ` Peter Kjellerstedt
  2018-05-11 15:27       ` Alexander Kanavin
  0 siblings, 1 reply; 6+ messages in thread
From: Peter Kjellerstedt @ 2018-05-11 10:39 UTC (permalink / raw)
  To: Alexander Kanavin, openembedded-core

> -----Original Message-----
> From: Alexander Kanavin [mailto:alexander.kanavin@linux.intel.com]
> Sent: den 11 maj 2018 07:44
> To: Peter Kjellerstedt <peter.kjellerstedt@axis.com>; openembedded-
> core@lists.openembedded.org
> Subject: Re: [OE-core] [PATCH 2/2] rpm: Assume a max limit of 1024 open
> files
> 
> On 05/11/2018 01:20 AM, Peter Kjellerstedt wrote:
> > If sysconf(_SC_OPEN_MAX) is much greater than the usual 1024 (for
> > example inside a Docker container), the performance drops sharply.
> 
> Please do not drop the link to the upstream bug when replacing one
> patch with another.

Right, forgot about that.

> You can also simply amend my patch and add your SOB.

Well, I thought it was clearer this way, since your patch removed the
functionality that closes the files, whereas mine modifies it.

> Also, why assume specifically 1024? We either need to close all the
> open files, or none, and your patch creates a situation where only 
> some of the files may be closed. I'd say we should drop those two 
> code snippets altogether instead of hardcoding 1024 into them.

Well, my expectation was that there is a reason they want to make sure
that all open file descriptors have FD_CLOEXEC set, so that they are
closed when exec'ing. At the same time, I do not expect rpm to actually
have more than 1024 files open at any one time, so even if using
sysconf(_SC_OPEN_MAX) is the right thing to do, using 1024 should work
in practice.

However, I gave this some more thought, and maybe it would be better to 
let bitbake set the soft limit for max open files to, e.g., 1024. That 
way we would not need to modify the code in rpm as it would adapt itself 
automatically. I tried adding the following to bitbake_main and it worked:

    import resource

    # Set the maximum number of open files as there are performance problems
    # with, e.g., rpm if this is unlimited (which it may be if running inside
    # a Docker container).
    (soft, hard) = resource.getrlimit(resource.RLIMIT_NOFILE)
    soft = int(os.environ.get("BB_LIMIT_NOFILE", "1024"))
    resource.setrlimit(resource.RLIMIT_NOFILE, (min(soft, hard), hard))

I do not know if bitbake_main is the right place for this, or if there is 
some more appropriate location for it. Richard?

I can send a new patch to add this, once I know where it should be added.

> Oh, and: please do comment in the upstream bug (e.g. with your
> benchmarks from the cover letter), otherwise upstream may do nothing
> because they're not using Docker.

Will do. I think suggesting that rpm should set the limit itself, as per 
my suggestion for bitbake above, might be an appropriate solution that 
they can accept.
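
Concretely, something along these lines is what I have in mind for rpm
itself (only a sketch of the idea with a placeholder function name, not
something I have tested against the rpm code base):

    #include <stdio.h>
    #include <sys/resource.h>
    #include <unistd.h>

    /* Lower our own soft limit for open files so that loops bounded by
     * sysconf(_SC_OPEN_MAX) stay cheap even when the limit is reported
     * as effectively unlimited, e.g. inside a Docker container. */
    static void cap_nofile_limit(rlim_t cap)
    {
        struct rlimit rl;

        if (getrlimit(RLIMIT_NOFILE, &rl) != 0)
            return;
        if (rl.rlim_cur > cap) {
            rl.rlim_cur = cap; /* lowering the soft limit needs no privileges */
            setrlimit(RLIMIT_NOFILE, &rl);
        }
    }

    int main(void)
    {
        cap_nofile_limit(1024);
        printf("_SC_OPEN_MAX is now %ld\n", sysconf(_SC_OPEN_MAX));
        return 0;
    }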

> Alex

//Peter



* Re: [PATCH 2/2] rpm: Assume a max limit of 1024 open files
  2018-05-11 10:39     ` Peter Kjellerstedt
@ 2018-05-11 15:27       ` Alexander Kanavin
  0 siblings, 0 replies; 6+ messages in thread
From: Alexander Kanavin @ 2018-05-11 15:27 UTC (permalink / raw)
  To: Peter Kjellerstedt, openembedded-core

On 05/11/2018 01:39 PM, Peter Kjellerstedt wrote:
> However, I gave this some more thought, and maybe it would be better to
> let bitbake set the soft limit for max open files to, e.g., 1024. That
> way we would not need to modify the code in rpm as it would adapt itself
> automatically. I tried adding the following to bitbake_main and it worked:
> 
>      import resource
> 
>      # Set the maximum number of open files as there are performance problems
>      # with, e.g., rpm if this is unlimited (which it may be if running inside
>      # a Docker container).
>      (soft, hard) = resource.getrlimit(resource.RLIMIT_NOFILE)
>      soft = int(os.environ.get("BB_LIMIT_NOFILE", "1024"))
>      resource.setrlimit(resource.RLIMIT_NOFILE, (min(soft, hard), hard))

That's right, this is the best solution.

Alex

