* [Buildroot] [PATCH 1/5] download/git: fix code-style
  2018-08-22 21:10 [Buildroot] [PATCH 0/5] download: add a timeout and use a more fine-grained locking heuristic Yann E. MORIN
@ 2018-08-22 21:10 ` Yann E. MORIN
  2018-09-10 20:44   ` Thomas Petazzoni
  2018-08-22 21:10 ` [Buildroot] [PATCH 2/5] download/git: re-run the backend with a lock to the git tree Yann E. MORIN
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 15+ messages in thread
From: Yann E. MORIN @ 2018-08-22 21:10 UTC (permalink / raw)
  To: buildroot

This file uses leading spaces, not TABs.

Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Cc: Maxime Hadjinlian <maxime.hadjinlian@gmail.com>

---
;-)
---
 support/download/git | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/support/download/git b/support/download/git
index 11bb52c1e1..17ca04eb98 100755
--- a/support/download/git
+++ b/support/download/git
@@ -185,7 +185,7 @@ fi
 # the state of the remote server. It also would generate large tarballs
 # (gigabytes for some linux trees) when a full clone took place.
 find . -not -type d \
-	-and -not -path "./.git/*" >"${output}.list"
+       -and -not -path "./.git/*" >"${output}.list"
 LC_ALL=C sort <"${output}.list" >"${output}.list.sorted"
 
 # Create GNU-format tarballs, since that's the format of the tarballs on
-- 
2.14.1


* [Buildroot] [PATCH 0/5] download: add a timeout and use a more fine-grained locking heuristic
@ 2018-08-22 21:10 Yann E. MORIN
  2018-08-22 21:10 ` [Buildroot] [PATCH 1/5] download/git: fix code-style Yann E. MORIN
                   ` (4 more replies)
  0 siblings, 5 replies; 15+ messages in thread
From: Yann E. MORIN @ 2018-08-22 21:10 UTC (permalink / raw)
  To: buildroot

Hello All!

Currently, the dl-wrapper locks the package download directory,
effectively serialising access to that directory.

This is done to protect the git cache directory from being accessed by
two concurrent Buildroot builds, because the access pattern to the git
cache is not atomic.

However, all the other download backends are atomic, as the download
wrapper ensures they only ever work on temporary files and directories,
with only the final rename being atomic by itself.

Thus, the current locking policy effectively prevents downloads of that
package from another Buildroot run from proceeding in parallel, even
when they could perfectly well do so without issue.

Furthermore, if the remote site is really damn slow, or blocks for
whatever reason, or we can't acquire the lock for whatever other reason,
the whole download can be stuck, potentially forever.

This series moves the locking into the git backend and the download
wrapper, to protect just what needs to be protected. Finally, a timeout
is added, so that the next download method, if any, is attempted as a
fallback.
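
As a rough sketch of the intended behavior (with illustrative names,
not the actual dl-wrapper code): each URI is tried in turn, under an
optional timeout, falling back to the next one on failure or timeout:

    for uri in "${uris[@]}"; do
        # ${timeout_cmd} expands to nothing when no timeout is set
        if ${timeout_cmd} "support/download/${backend}" -u "${uri}"; then
            break
        fi
    done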


Regards,
Yann E. MORIN.


The following changes since commit b3c1f0869645b136b9399d7a93acca65957e0bfe

  DEVELOPERS: add mender to Mirza Krak (2018-08-22 13:14:26 +0200)


are available in the git repository at:

  git://git.buildroot.org/~ymorin/git/buildroot.git

for you to fetch changes up to 17a71c0c569e1b66a46ef40f14f07db8adf847b9

  utils/genrandconfig: add a 30-minute timeout (2018-08-22 22:10:33 +0200)


----------------------------------------------------------------
Yann E. MORIN (5):
      download/git: fix code-style
      download/git: re-run the backend with a lock to the git tree
      download: move locking into download wrapper
      core/download: add per-download timeout
      utils/genrandconfig: add a 30-minute timeout

 Config.in                   | 14 ++++++++++++++
 package/pkg-download.mk     |  4 ++--
 support/download/dl-wrapper | 31 +++++++++++++++++++++----------
 support/download/git        | 32 ++++++++++++++++++++++++++++++--
 utils/genrandconfig         |  3 +++
 5 files changed, 70 insertions(+), 14 deletions(-)

-- 
.-----------------.--------------------.------------------.--------------------.
|  Yann E. MORIN  | Real-Time Embedded | /"\ ASCII RIBBON | Erics' conspiracy: |
| +33 662 376 056 | Software  Designer | \ / CAMPAIGN     |  ___               |
| +33 223 225 172 `------------.-------:  X  AGAINST      |  \e/  There is no  |
| http://ymorin.is-a-geek.org/ | _/*\_ | / \ HTML MAIL    |   v   conspiracy.  |
'------------------------------^-------^------------------^--------------------'


* [Buildroot] [PATCH 2/5] download/git: re-run the backend with a lock to the git tree
  2018-08-22 21:10 [Buildroot] [PATCH 0/5] download: add a timeout and use a more fine-grained locking heuristic Yann E. MORIN
  2018-08-22 21:10 ` [Buildroot] [PATCH 1/5] download/git: fix code-style Yann E. MORIN
@ 2018-08-22 21:10 ` Yann E. MORIN
  2018-09-10 20:52   ` Thomas Petazzoni
  2018-08-22 21:10 ` [Buildroot] [PATCH 3/5] download: move locking into download wrapper Yann E. MORIN
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 15+ messages in thread
From: Yann E. MORIN @ 2018-08-22 21:10 UTC (permalink / raw)
  To: buildroot

The access-pattern to the git repository must be entirely atomic.

This atomicity is currently guaranteed by the caller, from a higher
level. But this is not optimal, and it is better that the backend does
the locking it needs by itself, so that other backends from other builds
can still download for the same package (e.g. another build uses a wget
download for that package).

One would think that, ideally, we'd have three atomic sections, and that
we could release the lock in-between those sections:
  - setting the remote URL and fetching,
  - checking out, updating sub-modules, generating the archive,
  - compressing the archive.

However, this is not true, because another backend may acquire the lock
right after our first section, and decide to entirely remove the
repository (in the _on_error() trap), and thus, in the second section,
the commits we had fetched in the first section may now have disappeared.

So, in fact, the first two sections are really only one.

Now, once we have generated the tarball, we could indeed release the
lock, as we no longer need access to the repository. But releasing a
lock acquired by flock(1) is not trivial: we don't know the
file descriptor that was used to open the directory, so we can't close
it to release the lock...
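
(For comparison, a minimal sketch of the case where the script itself
controls the file descriptor, and thus could release the lock early;
the choice of fd 9 is purely illustrative:

    exec 9<"${git_cache}"   # open the directory on a known fd
    flock --exclusive 9     # take the lock through that fd
    # ... critical section ...
    exec 9<&-               # closing the fd releases the lock

With the re-exec trick used below, flock(1) itself opens the directory
before exec'ing the backend, so the fd number is not known to the
script.)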

So, we just lock the whole backend: once we reach compressing the
archive, we're almost done anyway.

We do implement this locking policy by locking the git cache directory
right after it got created, and before we do any git command in there.
We do so by re-running the git backend under flock(1), using a trick
explained in the manpage (except we don't use the script as lockfile,
but the git cache directory). Even though using an exclusive lock is the
default, we do request it, to be sure.

When running under the lock, we then know we have exclusive access to
the repository, and we can do whatever we need in it.

When the backend ends, its file descriptors are closed (by the kernel),
thus freeing the lock for another to acquire.

Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Cc: Maxime Hadjinlian <maxime.hadjinlian@gmail.com>
---
 support/download/git | 30 +++++++++++++++++++++++++++++-
 1 file changed, 29 insertions(+), 1 deletion(-)

diff --git a/support/download/git b/support/download/git
index 17ca04eb98..1a3d562605 100755
--- a/support/download/git
+++ b/support/download/git
@@ -25,6 +25,10 @@ declare -a OPTS=("${@}")
 # case an unexpected and unsupported situation arises with submodules
 # or uncommitted stuff (e.g. if the user manually mucked around in the
 # git cache).
+#
+# To be noted: this function is only called while we have the lock on
+# the git cache directory, so we can't remove it.
+#
 _on_error() {
     local ret=${?}
 
@@ -38,7 +42,10 @@ _on_error() {
     printf "Removing it and starting afresh.\n" >&2
 
     popd >/dev/null
-    rm -rf "${git_cache}"
+
+    # Remove everything in the git cache except the directory itself,
+    # so as not to have to play tricks with the lock
+    find "${git_cache}" -mindepth 1 -maxdepth 1 -exec rm -rf {} \;
 
     exec "${myname}" "${OPTS[@]}" || exit ${ret}
 }
@@ -64,6 +71,21 @@ shift $((OPTIND-1)) # Get rid of our options
 # Create and cd into the directory that will contain the local git cache
 git_cache="${dl_dir}/git"
 mkdir -p "${git_cache}"
+
+# From now on, we need exclusive access to the repository. But this
+# git backend is not concurrent-safe; other instances may need to
+# touch the same git repository. So, we re-run under flock if that is
+# not already the case, but only after we've created the cache dir,
+# so that we only lock the cache, not the full dl-dir.
+#
+# Note that if this is our second-chance (see _on_error(), above), then
+# we already have the lock, and won't try to reacquire it (BR_GIT_FLOCK
+# is already set to the correct value).
+if [ "${BR_GIT_FLOCK}" != "${git_cache}" ]; then
+    export BR_GIT_FLOCK="${git_cache}"
+    exec flock  --exclusive --no-fork "${git_cache}" "${0}" "${OPTS[@]}"
+fi
+
 pushd "${git_cache}" >/dev/null
 
 # Any error now should try to recover
@@ -193,6 +215,12 @@ LC_ALL=C sort <"${output}.list" >"${output}.list.sorted"
 tar cf - --transform="s#^\./#${basename}/#" \
          --numeric-owner --owner=0 --group=0 --mtime="${date}" --format=gnu \
          -T "${output}.list.sorted" >"${output}.tar"
+
+# From now on, we could very well release the lock we have on the git
+# cache directory, but there is no simple solution for that (we don't
+# know its file descriptor). So, just go on with our business, we're
+# almost done anyway...
+
 gzip -6 -n <"${output}.tar" >"${output}"
 
 rm -f "${output}.list"
-- 
2.14.1


* [Buildroot] [PATCH 3/5] download: move locking into download wrapper
  2018-08-22 21:10 [Buildroot] [PATCH 0/5] download: add a timeout and use a more fine-grained locking heuristic Yann E. MORIN
  2018-08-22 21:10 ` [Buildroot] [PATCH 1/5] download/git: fix code-style Yann E. MORIN
  2018-08-22 21:10 ` [Buildroot] [PATCH 2/5] download/git: re-run the backend with a lock to the git tree Yann E. MORIN
@ 2018-08-22 21:10 ` Yann E. MORIN
  2018-08-22 21:10 ` [Buildroot] [PATCH 4/5] core/download: add per-download timeout Yann E. MORIN
  2018-08-22 21:10 ` [Buildroot] [PATCH 5/5] utils/genrandconfig: add a 30-minute timeout Yann E. MORIN
  4 siblings, 0 replies; 15+ messages in thread
From: Yann E. MORIN @ 2018-08-22 21:10 UTC (permalink / raw)
  To: buildroot

The script as a whole does not need to be atomic.

The only section that needs to be atomic is the one that checks for,
and grabs, a download from the legacy global directory, and
links/copies it to the new location.

The rest of the script goes to great lengths to ensure that every
operation is concurrent-safe, by using temporary filenames and
directories, with only the last operation being atomic to put the
download in its final location.

As for the backends, all but the git backend are concurrent-safe, as
they work on the aforementioned temporary files and directories. The
git backend, for its part, internally locks whatever needs to be
atomic, so it is concurrent-safe too.

Note: the trick to execute only a portion of the shell script is
taken directly from the manpage for flock(1).

Even though using an exclusive lock is the default, we do request
it, to be extra sure.

Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Cc: Maxime Hadjinlian <maxime.hadjinlian@gmail.com>
---
 package/pkg-download.mk     |  3 +--
 support/download/dl-wrapper | 16 +++++++++-------
 2 files changed, 10 insertions(+), 9 deletions(-)

diff --git a/package/pkg-download.mk b/package/pkg-download.mk
index bf93b9a08e..cd77ca5394 100644
--- a/package/pkg-download.mk
+++ b/package/pkg-download.mk
@@ -19,7 +19,6 @@ SSH := $(call qstrip,$(BR2_SSH))
 export LOCALFILES := $(call qstrip,$(BR2_LOCALFILES))
 
 DL_WRAPPER = support/download/dl-wrapper
-FLOCK = flock $($(PKG)_DL_DIR)/
 
 # DL_DIR may have been set already from the environment
 ifeq ($(origin DL_DIR),undefined)
@@ -92,7 +91,7 @@ endif
 
 define DOWNLOAD
 	$(Q)mkdir -p $($(PKG)_DL_DIR)
-	$(Q)$(EXTRA_ENV) $(FLOCK) $(DL_WRAPPER) \
+	$(Q)$(EXTRA_ENV) $(DL_WRAPPER) \
 		-c '$($(PKG)_DL_VERSION)' \
 		-d '$($(PKG)_DL_DIR)' \
 		-D '$(DL_DIR)' \
diff --git a/support/download/dl-wrapper b/support/download/dl-wrapper
index 490335c859..723c89b7d5 100755
--- a/support/download/dl-wrapper
+++ b/support/download/dl-wrapper
@@ -57,13 +57,15 @@ main() {
     # download directory. If it does, hard-link it. If it turns out it
     # was an incorrect download, we'd still check it below anyway.
     # If we can neither link nor copy, fallback to doing a download.
-    # NOTE! This is not atomic, is subject to TOCTTOU, but the whole
-    # dl-wrapper runs under an flock, so we're safe.
-    if [ ! -e "${output}" -a -e "${old_dl_dir}/${filename}" ]; then
-        ln "${old_dl_dir}/${filename}" "${output}" || \
-        cp "${old_dl_dir}/${filename}" "${output}" || \
-        true
-    fi
+    (
+        flock --exclusive 42 || exit 1
+        if [ ! -e "${output}" -a -e "${old_dl_dir}/${filename}" ]; then
+            ln "${old_dl_dir}/${filename}" "${output}" || \
+            cp "${old_dl_dir}/${filename}" "${output}" || \
+            true
+        fi
+    # Yes, we can open a directory for reading!
+    ) 42<"${dl_dir}" || exit ${?}
 
     # If the output file already exists and:
     # - there's no .hash file: do not download it again and exit promptly
-- 
2.14.1


* [Buildroot] [PATCH 4/5] core/download: add per-download timeout
  2018-08-22 21:10 [Buildroot] [PATCH 0/5] download: add a timeout and use a more fine-grained locking heuristic Yann E. MORIN
                   ` (2 preceding siblings ...)
  2018-08-22 21:10 ` [Buildroot] [PATCH 3/5] download: move locking into download wrapper Yann E. MORIN
@ 2018-08-22 21:10 ` Yann E. MORIN
  2018-09-10 20:55   ` Thomas Petazzoni
  2018-10-20 21:49   ` Arnout Vandecappelle
  2018-08-22 21:10 ` [Buildroot] [PATCH 5/5] utils/genrandconfig: add a 30-minute timeout Yann E. MORIN
  4 siblings, 2 replies; 15+ messages in thread
From: Yann E. MORIN @ 2018-08-22 21:10 UTC (permalink / raw)
  To: buildroot

In case an remote sitre is slow, or hangs for whatever reasons, of the
backend is somehow stuck (e.g. locally waiting on a lock that is never
released), the whole build can be stuck forever.

But in such circumstances, it may happen that another download location
(e.g. a mirror on the LAN) may already have the required tarball, and
downloading from there would be faster and would succeed.

Add an optional, configurable, per-backend timeout.

Note: all the FDs of a backend will be forcibly closed by the kernel
when the backend 15-terminates or is 9-killed; any lock held on those
FDs would be automatically released by the kernel, thus unblocking any
concurrent download.
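
A behavior sketch for GNU timeout(1), with an illustrative duration:
the backend first gets a SIGTERM, then a SIGKILL if it has not exited
within the --kill-after grace period:

    timeout --kill-after=30s 30m ./support/download/wget ...
    # exit 124: the 30m limit expired and SIGTERM stopped the backend
    # exit 137 (128+9): SIGKILL was needed after the 30s grace period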

Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Cc: Hollis Blanchard <hollis_blanchard@mentor.com>
Cc: Maxime Hadjinlian <maxime.hadjinlian@gmail.com>

---
Note: with the follow-up patch, this would probably fix build issues
like the ones on Hollis' autobuilder:
    http://autobuild.buildroot.org/results/ddb/ddbc96b24017f2a2b06c6091dea3e19520bf2dd1/
---
 Config.in                   | 14 ++++++++++++++
 package/pkg-download.mk     |  1 +
 support/download/dl-wrapper | 15 ++++++++++++---
 3 files changed, 27 insertions(+), 3 deletions(-)

diff --git a/Config.in b/Config.in
index 6b5b2b043c..654734855a 100644
--- a/Config.in
+++ b/Config.in
@@ -312,6 +312,20 @@ endif
 
 endmenu
 
+config BR2_DL_TIMEOUT
+	string "Download timeout"
+	help
+	  Timeout after which a non-completed download will be considered
+	  failed. Suffix with 's' (or nothing), 'm', 'h', or 'd' for,
+	  respectively, seconds, minutes, hours, or days.
+
+	  When a download times out, the next download location, if any, is
+	  attempted, which means that, if the primary site (if any) is too
+	  slow, then upstream will be used; if that is then still too slow,
+	  then the backup mirror, if any, is used.
+
+	  Leave empty for no timeout (the default).
+
 config BR2_JLEVEL
 	int "Number of jobs to run simultaneously (0 for auto)"
 	default "0"
diff --git a/package/pkg-download.mk b/package/pkg-download.mk
index cd77ca5394..832346d3d5 100644
--- a/package/pkg-download.mk
+++ b/package/pkg-download.mk
@@ -92,6 +92,7 @@ endif
 define DOWNLOAD
 	$(Q)mkdir -p $($(PKG)_DL_DIR)
 	$(Q)$(EXTRA_ENV) $(DL_WRAPPER) \
+		-t '$(call qstrip,$(BR2_DL_TIMEOUT))' \
 		-c '$($(PKG)_DL_VERSION)' \
 		-d '$($(PKG)_DL_DIR)' \
 		-D '$(DL_DIR)' \
diff --git a/support/download/dl-wrapper b/support/download/dl-wrapper
index 723c89b7d5..fff27db497 100755
--- a/support/download/dl-wrapper
+++ b/support/download/dl-wrapper
@@ -23,11 +23,11 @@ export BR_BACKEND_DL_GETOPTS=":hc:d:o:n:N:H:ru:qf:e"
 
 main() {
     local OPT OPTARG
-    local backend output hfile recurse quiet rc
+    local backend output hfile recurse quiet rc timeout timeout_cmd
     local -a uris
 
     # Parse our options; anything after '--' is for the backend
-    while getopts ":hc:d:D:o:n:N:H:rf:u:q" OPT; do
+    while getopts ":hc:d:D:o:n:N:H:rf:u:t:q" OPT; do
         case "${OPT}" in
         h)  help; exit 0;;
         c)  cset="${OPTARG}";;
@@ -40,6 +40,7 @@ main() {
         r)  recurse="-r";;
         f)  filename="${OPTARG}";;
         u)  uris+=( "${OPTARG}" );;
+        t)  timeout="${OPTARG}";;
         q)  quiet="-q";;
         :)  error "option '%s' expects a mandatory argument\n" "${OPTARG}";;
         \?) error "unknown option '%s'\n" "${OPTARG}";;
@@ -53,6 +54,14 @@ main() {
         error "no output specified, use -o\n"
     fi
 
+    if [ -n "${timeout}" ]; then
+        # Timeout after the specified delay; additionally, leave
+        # 30 more seconds for the backend to properly terminate
+        # (e.g. to cleanup behind itself), after which forcibly
+        # kill the backend.
+        timeout_cmd="timeout --kill-after=30s ${timeout}"
+    fi
+
     # Legacy handling: check if the file already exists in the global
     # download directory. If it does, hard-link it. If it turns out it
     # was an incorrect download, we'd still check it below anyway.
@@ -123,7 +132,7 @@ main() {
         # directory to remove all the cruft it may have left behind, and try
         # the next URI until it succeeds. Once out of URI to try, we need to
         # cleanup and exit.
-        if ! "${OLDPWD}/support/download/${backend}" \
+        if ! ${timeout_cmd} "${OLDPWD}/support/download/${backend}" \
                 $([ -n "${urlencode}" ] && printf %s '-e') \
                 -c "${cset}" \
                 -d "${dl_dir}" \
-- 
2.14.1


* [Buildroot] [PATCH 5/5] utils/genrandconfig: add a 30-minute timeout
  2018-08-22 21:10 [Buildroot] [PATCH 0/5] download: add a timeout and use a more fine-grained locking heuristic Yann E. MORIN
                   ` (3 preceding siblings ...)
  2018-08-22 21:10 ` [Buildroot] [PATCH 4/5] core/download: add per-download timeout Yann E. MORIN
@ 2018-08-22 21:10 ` Yann E. MORIN
  4 siblings, 0 replies; 15+ messages in thread
From: Yann E. MORIN @ 2018-08-22 21:10 UTC (permalink / raw)
  To: buildroot

Some autobuilders seem to have difficulties downloading big git trees.
Since those are autobuilders, they only test known git sha1s or tags, so
they eventually get cached in our primary mirror, and thus we can expect
that slow autobuilders can eventually fetch them from there.

Add a relatively high timeout, so that those autobuilders are not stuck
forever in downloads.

Should fix autobuild issues like:
    http://autobuild.buildroot.org/results/ddb/ddbc96b24017f2a2b06c6091dea3e19520bf2dd1/

Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Cc: Hollis Blanchard <hollis_blanchard@mentor.com>
Cc: Maxime Hadjinlian <maxime.hadjinlian@gmail.com>
---
 utils/genrandconfig | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/utils/genrandconfig b/utils/genrandconfig
index 27f84ea555..2dbdc2ccf6 100755
--- a/utils/genrandconfig
+++ b/utils/genrandconfig
@@ -353,6 +353,9 @@ def gen_config(args):
     # Allow hosts with old certificates to download over https
     configlines.append("BR2_WGET=\"wget --passive-ftp -nd -t 3 --no-check-certificate\"")
 
+    # Don't be stuck forever downloading big git trees, timeout after 30 min
+    configlines.append("BR2_DL_TIMEOUT=\"30m\"")
+
     # Amend the configuration with a few things.
     if randint(0, 20) == 0:
         configlines.append("BR2_ENABLE_DEBUG=y\n")
-- 
2.14.1


* [Buildroot] [PATCH 1/5] download/git: fix code-style
  2018-08-22 21:10 ` [Buildroot] [PATCH 1/5] download/git: fix code-style Yann E. MORIN
@ 2018-09-10 20:44   ` Thomas Petazzoni
  0 siblings, 0 replies; 15+ messages in thread
From: Thomas Petazzoni @ 2018-09-10 20:44 UTC (permalink / raw)
  To: buildroot

Hello,

On Wed, 22 Aug 2018 23:10:54 +0200, Yann E. MORIN wrote:
> This file uses leading spaces, not TABs.
> 
> Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
> Cc: Maxime Hadjinlian <maxime.hadjinlian@gmail.com>
> 
> ---
> ;-)
> ---
>  support/download/git | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)

Applied to master, thanks.

Thomas
-- 
Thomas Petazzoni, CTO, Bootlin (formerly Free Electrons)
Embedded Linux and Kernel engineering
https://bootlin.com


* [Buildroot] [PATCH 2/5] download/git: re-run the backend with a lock to the git tree
  2018-08-22 21:10 ` [Buildroot] [PATCH 2/5] download/git: re-run the backend with a lock to the git tree Yann E. MORIN
@ 2018-09-10 20:52   ` Thomas Petazzoni
  2018-10-14 12:54     ` Yann E. MORIN
  0 siblings, 1 reply; 15+ messages in thread
From: Thomas Petazzoni @ 2018-09-10 20:52 UTC (permalink / raw)
  To: buildroot

Hello,

+Arnout.

On Wed, 22 Aug 2018 23:10:55 +0200, Yann E. MORIN wrote:
> The access-pattern to the git repository must be entirely atomic.
> 
> This atomicity is currently guaranteed by the caller, from a higher
> level. But this is not optimal, and it is better that the backend does
> the locking it needs by itself, so that other backends from other builds
> can still download for the same package (e.g. another build uses a wget
> download for that package).

I'd like to challenge the complexity vs. benefit of this patch and
PATCH 3/5. Basically, you replace a single $(FLOCK) call in dl-wrapper,
by 30 additional lines in the git-specific wrapper (including a
non-trivial trick that consists in calling itself), for a benefit that
is very, very limited. A download of the exact same package, from
another download method, happening at the same time. Can occur yes, but
is it really worth the additional complexity? I doubt it.

So at this point, I am not really enthusiastic about 2/5 and 3/5, but
perhaps you'll have some convincing arguments ?

Best regards,

Thomas
-- 
Thomas Petazzoni, CTO, Bootlin
Embedded Linux and Kernel engineering
https://bootlin.com


* [Buildroot] [PATCH 4/5] core/download: add per-download timeout
  2018-08-22 21:10 ` [Buildroot] [PATCH 4/5] core/download: add per-download timeout Yann E. MORIN
@ 2018-09-10 20:55   ` Thomas Petazzoni
  2018-10-14 13:07     ` Yann E. MORIN
  2018-10-20 21:49   ` Arnout Vandecappelle
  1 sibling, 1 reply; 15+ messages in thread
From: Thomas Petazzoni @ 2018-09-10 20:55 UTC (permalink / raw)
  To: buildroot

Hello,

On Wed, 22 Aug 2018 23:10:57 +0200, Yann E. MORIN wrote:
> In case an remote sitre is slow, or hangs for whatever reasons, of the

an remote sitre -> a remote site


> +    if [ -n "${timeout}" ]; then
> +        # Timeout after the specified delay; additionally, leave
> +        # 30 more seconds for the backend to properly terminate
> +        # (e.g. to cleanup behind itself), after which forcibly
> +        # kill the backend.
> +        timeout_cmd="timeout --kill-after=30s ${timeout}"

What happens if 30 seconds are not enough for the cleanup ? I suppose
we already handle that (as we can already interrupt the build at any
point), and the next build will already clean up whatever mess was left
behind. If that's indeed the case, then the --kill-after=30s looks a
bit useless, we should just abort the download and move on with the
next step. Indeed, saying "30 seconds should be enough" sounds like
saying "640 KB of memory should be enough" :-)

Best regards,

Thomas
-- 
Thomas Petazzoni, CTO, Bootlin
Embedded Linux and Kernel engineering
https://bootlin.com


* [Buildroot] [PATCH 2/5] download/git: re-run the backend with a lock to the git tree
  2018-09-10 20:52   ` Thomas Petazzoni
@ 2018-10-14 12:54     ` Yann E. MORIN
  2018-10-20 22:11       ` Arnout Vandecappelle
  0 siblings, 1 reply; 15+ messages in thread
From: Yann E. MORIN @ 2018-10-14 12:54 UTC (permalink / raw)
  To: buildroot

Thomas, All,

On 2018-09-10 22:52 +0200, Thomas Petazzoni spake thusly:
> On Wed, 22 Aug 2018 23:10:55 +0200, Yann E. MORIN wrote:
> > The access-pattern to the git repository must be entirely atomic.
> > 
> > This atomicity is currently guaranteed by the caller, from a higher
> > level. But this is not optimal, and it is better that the backend does
> > the locking it needs by itself, so that other backends from other builds
> > can still download for the same package (e.g. another build uses a wget
> > download for that package).
> 
> I'd like to challenge the complexity vs. benefit of this patch and
> PATCH 3/5. Basically, you replace a single $(FLOCK) call in dl-wrapper,
> by 30 additional lines in the git-specific wrapper (including a
> non-trivial trick that consists in calling itself), for a benefit that
> is very, very limited. A download of the exact same package, from
> another download method, happening at the same time. Can occur yes, but
> is it really worth the additional complexity? I doubt it.
> 
> So at this point, I am not really enthusiastic about 2/5 and 3/5, but
> perhaps you'll have some convincing arguments ?

So, the use-case I have is that I get 9 parallel jobs on the same
machine, with 6 of them retrieving the kernel from a very fast http/1.1
server, and the three others getting theirs from a rather slow and big
git repository.

Downloading the 6 http in parallel takes under 2 minutes, while each git
download takes about 20 to 30 minutes.

Scheduling (not) helping, it happens that, when a git one gets scheduled
first, the 6 http/1.1 ones are stalled, when they could in fact proceed,
and even finish before the first git download is done. And if scheduling
is really mad at you, then it will schedule the two other git downloads
before any of the http/1.1 ones.

So yes, it helps tremendously: some builds finish earlier and can go to
CI earlier.

Note that one may argue that BR2_DL_DIR should be set in the environment,
so that it points to the non-default location, so that it gets reused
every time, thus alleviating the need for a big download. Yes, right, but
there are use-cases where this is not wanted.

Regards,
Yann E. MORIN.

> Best regards,
> 
> Thomas
> -- 
> Thomas Petazzoni, CTO, Bootlin
> Embedded Linux and Kernel engineering
> https://bootlin.com

-- 
.-----------------.--------------------.------------------.--------------------.
|  Yann E. MORIN  | Real-Time Embedded | /"\ ASCII RIBBON | Erics' conspiracy: |
| +33 662 376 056 | Software  Designer | \ / CAMPAIGN     |  ___               |
| +33 223 225 172 `------------.-------:  X  AGAINST      |  \e/  There is no  |
| http://ymorin.is-a-geek.org/ | _/*\_ | / \ HTML MAIL    |   v   conspiracy.  |
'------------------------------^-------^------------------^--------------------'


* [Buildroot] [PATCH 4/5] core/download: add per-download timeout
  2018-09-10 20:55   ` Thomas Petazzoni
@ 2018-10-14 13:07     ` Yann E. MORIN
  0 siblings, 0 replies; 15+ messages in thread
From: Yann E. MORIN @ 2018-10-14 13:07 UTC (permalink / raw)
  To: buildroot

Thomas, All,

On 2018-09-10 22:55 +0200, Thomas Petazzoni spake thusly:
> On Wed, 22 Aug 2018 23:10:57 +0200, Yann E. MORIN wrote:
> > +    if [ -n "${timeout}" ]; then
> > +        # Timeout after the specified delay; additionally, leave
> > +        # 30 more seconds for the backend to properly terminate
> > +        # (e.g. to cleanup behind itself), after which forcibly
> > +        # kill the backend.
> > +        timeout_cmd="timeout --kill-after=30s ${timeout}"
> 
> What happens if 30 seconds are not enough for the cleanup ? I suppose
> we already handle that (as we can already interrupt the build at any
> point), and the next build will already clean up whatever mess was left
> behind. If that's indeed the case, then the --kill-after=30s looks a
> bit useless, we should just abort the download and move on with the
> next step. Indeed, saying "30 seconds should be enough" sounds like
> saying "640 KB of memory should be enough" :-)

Yeah, I do understand what you mean. So, that's either that, or trust
the download backend will always finish upon receiving a SIGTERM.

I'm OK with dropping the kill-after option.

Regards,
Yann E. MORIN.

-- 
.-----------------.--------------------.------------------.--------------------.
|  Yann E. MORIN  | Real-Time Embedded | /"\ ASCII RIBBON | Erics' conspiracy: |
| +33 662 376 056 | Software  Designer | \ / CAMPAIGN     |  ___               |
| +33 223 225 172 `------------.-------:  X  AGAINST      |  \e/  There is no  |
| http://ymorin.is-a-geek.org/ | _/*\_ | / \ HTML MAIL    |   v   conspiracy.  |
'------------------------------^-------^------------------^--------------------'


* [Buildroot] [PATCH 4/5] core/download: add per-download timeout
  2018-08-22 21:10 ` [Buildroot] [PATCH 4/5] core/download: add per-download timeout Yann E. MORIN
  2018-09-10 20:55   ` Thomas Petazzoni
@ 2018-10-20 21:49   ` Arnout Vandecappelle
  1 sibling, 0 replies; 15+ messages in thread
From: Arnout Vandecappelle @ 2018-10-20 21:49 UTC (permalink / raw)
  To: buildroot



On 22/08/2018 22:10, Yann E. MORIN wrote:
> In case an remote sitre is slow, or hangs for whatever reasons, of the
> backend is somehow stuck (e.g. locally waiting on a lock that is never
> released), the whole build can be stuck forever.
> 
> But in such circumstances, it may happen that another download location
> (e.g. a mirror on the LAN) may already have the required tarball, and
> downloading from there would be faster and would succeed.
> 
> Add an optional, configurable, per-backend timeout.
> 
> Note: all the FDs of a backend will be forcibly closed by the kernel
> when the backend 15-terminates or is 9-killed; any lock held on those
> FDs would be automatically released by the kernel, thus unblocking any
> concurrent download.
> 
> Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
> Cc: Hollis Blanchard <hollis_blanchard@mentor.com>
> Cc: Maxime Hadjinlian <maxime.hadjinlian@gmail.com>
>
> ---
> Note: with the follow-up patch, this would probably fix build issues
> like the ones on Hollis' autobuilder:
>     http://autobuild.buildroot.org/results/ddb/ddbc96b24017f2a2b06c6091dea3e19520bf2dd1/

 As discussed in real life, I've marked this patch and the following one as
rejected.

 Indeed, this patch would not fix the build issue on Hollis' autobuilder; it
would just work around it and sweep the problem under the carpet. If you
encounter this kind of problem (be it in an autobuilder or on some personal CI),
you should be made aware of it and not hide it.

 In addition, any timeout risks introducing false positives and breaking
functional situations.

 Regards,
 Arnout

> ---
>  Config.in                   | 14 ++++++++++++++
>  package/pkg-download.mk     |  1 +
>  support/download/dl-wrapper | 15 ++++++++++++---
>  3 files changed, 27 insertions(+), 3 deletions(-)
> 
> diff --git a/Config.in b/Config.in
> index 6b5b2b043c..654734855a 100644
> --- a/Config.in
> +++ b/Config.in
> @@ -312,6 +312,20 @@ endif
>  
>  endmenu
>  
> +config BR2_DL_TIMEOUT
> +	string "Download timeout"
> +	help
> +	  Timeout after which a non-completed download will be considered
> +	  failed. Suffix with 's' (or nothing), 'm', 'h', or 'd' for,
> +	  respectively, seconds, minutes, hours, or days.
> +
> +	  When a download times out, the next download location, if any, is
> +	  attempted, which means that, if the primary site (if any) is too
> +	  slow, then upstream will be used; if that is then still too slow,
> +	  then the backup mirror, if any, is used.
> +
> +	  Leave empty for no timeout (the default).
> +
>  config BR2_JLEVEL
>  	int "Number of jobs to run simultaneously (0 for auto)"
>  	default "0"
> diff --git a/package/pkg-download.mk b/package/pkg-download.mk
> index cd77ca5394..832346d3d5 100644
> --- a/package/pkg-download.mk
> +++ b/package/pkg-download.mk
> @@ -92,6 +92,7 @@ endif
>  define DOWNLOAD
>  	$(Q)mkdir -p $($(PKG)_DL_DIR)
>  	$(Q)$(EXTRA_ENV) $(DL_WRAPPER) \
> +		-t '$(call qstrip,$(BR2_DL_TIMEOUT))' \
>  		-c '$($(PKG)_DL_VERSION)' \
>  		-d '$($(PKG)_DL_DIR)' \
>  		-D '$(DL_DIR)' \
> diff --git a/support/download/dl-wrapper b/support/download/dl-wrapper
> index 723c89b7d5..fff27db497 100755
> --- a/support/download/dl-wrapper
> +++ b/support/download/dl-wrapper
> @@ -23,11 +23,11 @@ export BR_BACKEND_DL_GETOPTS=":hc:d:o:n:N:H:ru:qf:e"
>  
>  main() {
>      local OPT OPTARG
> -    local backend output hfile recurse quiet rc
> +    local backend output hfile recurse quiet rc timeout timeout_cmd
>      local -a uris
>  
>      # Parse our options; anything after '--' is for the backend
> -    while getopts ":hc:d:D:o:n:N:H:rf:u:q" OPT; do
> +    while getopts ":hc:d:D:o:n:N:H:rf:u:t:q" OPT; do
>          case "${OPT}" in
>          h)  help; exit 0;;
>          c)  cset="${OPTARG}";;
> @@ -40,6 +40,7 @@ main() {
>          r)  recurse="-r";;
>          f)  filename="${OPTARG}";;
>          u)  uris+=( "${OPTARG}" );;
> +        t)  timeout="${OPTARG}";;
>          q)  quiet="-q";;
>          :)  error "option '%s' expects a mandatory argument\n" "${OPTARG}";;
>          \?) error "unknown option '%s'\n" "${OPTARG}";;
> @@ -53,6 +54,14 @@ main() {
>          error "no output specified, use -o\n"
>      fi
>  
> +    if [ -n "${timeout}" ]; then
> +        # Timeout after the specified delay; additionally, leave
> +        # 30 more seconds for the backend to properly terminate
> +        # (e.g. to cleanup behind itself), after which forcibly
> +        # kill the backend.
> +        timeout_cmd="timeout --kill-after=30s ${timeout}"
> +    fi
> +
>      # Legacy handling: check if the file already exists in the global
>      # download directory. If it does, hard-link it. If it turns out it
>      # was an incorrect download, we'd still check it below anyway.
> @@ -123,7 +132,7 @@ main() {
>          # directory to remove all the cruft it may have left behind, and try
>          # the next URI until it succeeds. Once out of URI to try, we need to
>          # cleanup and exit.
> -        if ! "${OLDPWD}/support/download/${backend}" \
> +        if ! ${timeout_cmd} "${OLDPWD}/support/download/${backend}" \
>                  $([ -n "${urlencode}" ] && printf %s '-e') \
>                  -c "${cset}" \
>                  -d "${dl_dir}" \
> 


* [Buildroot] [PATCH 2/5] download/git: re-run the backend with a lock to the git tree
  2018-10-14 12:54     ` Yann E. MORIN
@ 2018-10-20 22:11       ` Arnout Vandecappelle
  2018-10-22 21:12         ` Yann E. MORIN
  0 siblings, 1 reply; 15+ messages in thread
From: Arnout Vandecappelle @ 2018-10-20 22:11 UTC (permalink / raw)
  To: buildroot



On 14/10/2018 13:54, Yann E. MORIN wrote:
> Thomas, All,
> 
> On 2018-09-10 22:52 +0200, Thomas Petazzoni spake thusly:
>> On Wed, 22 Aug 2018 23:10:55 +0200, Yann E. MORIN wrote:
>>> The access-pattern to the git repository must be entirely atomic.
>>>
>>> This atomicity is currently guaranteed by the caller, from a higher
>>> level. But this is not optimal, and it is better that the backend does
>>> the locking it needs by itself, so that other backends from other builds
>>> can still download for the same package (e.g. another build uses a wget
>>> download for that package).
>>
>> I'd like to challenge the complexity vs. benefit of this patch and
>> PATCH 3/5. Basically, you replace a single $(FLOCK) call in dl-wrapper,
>> by 30 additional lines in the git-specific wrapper (including a
>> non-trivial trick that consists in calling itself), for a benefit that
>> is very, very limited. A download of the exact same package, from
>> another download method, happening at the same time. Can occur yes, but
>> is it really worth the additional complexity? I doubt it.

 As discussed at the Buildroot meeting, the change can be made a lot simpler by
just doing the flock from within the dl-wrapper, around the git call. Then you
don't need that weird re-exec.

>>
>> So at this point, I am not really enthusiastic about 2/5 and 3/5, but
>> perhaps you'll have some convincing arguments ?
> 
> So, the use-case I have is that I get 9 parallel jobs on the same
> machine, with 6 of them retrieving the kernel from a very fast http/1.1
> server, and the three others getting theirs from a rather slow and big
> git repository.

 This particular use case is a bit crazy.

 However, it *does* make sense to do more fine-grained locking of downloads,
because now we serialize all the downloads of a package. Downloads of different
versions of a package really can happen in parallel without problem. Except for
git downloads, because they use the shared git directory, but since it is
git-specific, a git-specific lock should be used.

 Somewhat related to that: because of the locking we currently do (since the
introduction of the per-package download dir), there is actually no reason
any longer to do the
downloads to temporary files and move them to the correct place afterwards. We
can rely on the lock to serialize things, and download immediately to the
correct place.


 So, my proposal would be:

1. Remove the redundant temporary files from the download infra.
2. Replace the flock on the per-package download dir with a flock of a temporary
file named ${filename}.lock (see the sketch after this list). That way, we
serialize just what needs to be serialized.
3. Add a git-specific "global" lock, but do it directly from the dl-wrapper to
keep the code simple.

(in terms of patch order, obviously 2 and 3 have to be swapped).
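
(A minimal sketch of what item 2 above could look like; names and the
fd number are illustrative, this is not actual dl-wrapper code:

    (
        flock --exclusive 9 || exit 1
        # check for, and download to, "${dl_dir}/${filename}" here
    ) 9>"${dl_dir}/.${filename}.lock"

Opening the lock file with 9> creates it if it does not exist; the
lock is advisory, and the file itself simply stays around afterwards.)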


 With this in mind, I've marked patches 2 and 3 as Changes Requested.

 Regards,
 Arnout

> 
> Downloading the 6 http in parallel takes under 2 minutes, while each git
> download takes about 20 to 30 minutes.
> 
> Scheduling (not) helping, it happens that, when a git one gets scheduled
> first, the 6 http/1.1 ones are stalled, when they could in fact proceed,
> and even finish before the first git download is done. And if scheduling
> is really mad at you, then it will schedule the two other git downloads
> before any of the http/1.1 ones.
> 
> So yes, it helps tremendously: some builds finish earlier and can go to
> CI earlier.
> 
> Note that one may argue that BR2_DL_DIR should be set in the environment,
> so that it points to the non-default location, so that it gets reused
> every time, thus alleviating the need for a big download. Yes, right, but
> there are use-cases where this is not wanted.
> 
> Regards,
> Yann E. MORIN.
> 
>> Best regards,
>>
>> Thomas
>> -- 
>> Thomas Petazzoni, CTO, Bootlin
>> Embedded Linux and Kernel engineering
>> https://bootlin.com
> 


* [Buildroot] [PATCH 2/5] download/git: re-run the backend with a lock to the git tree
  2018-10-20 22:11       ` Arnout Vandecappelle
@ 2018-10-22 21:12         ` Yann E. MORIN
  2018-10-22 22:11           ` Arnout Vandecappelle
  0 siblings, 1 reply; 15+ messages in thread
From: Yann E. MORIN @ 2018-10-22 21:12 UTC (permalink / raw)
  To: buildroot

On 2018-10-20 23:11 +0100, Arnout Vandecappelle spake thusly:
[--SNIP--]
>  So, my proposal would be:

As we discussed IRL, this is a good suggestion, but now that it has had
time to bounce a bit between my two still-working neurons, here are my
comments on it...

> 1. Remove the redundant temporary files from the download infra.

> 2. Replace the flock on the per-package download dir with a flock of a temporary
> file named ${filename}.lock. That way, we serialize just what needs to be
> serialized.

And who would be responsible to remove those lock files?

When the download runs OK, I can see it pretty easy, even though it is
not nice... But when the download gets interrupted (think Ctrl-C or
SIGTERM/SIGKILL from a CI...)?

I'm afraid we could end up with dozens of such stale lock files cluttering
the (per-package) directory... Locking the per-package directory itself
was nice and dandy because it was expected to live on and stay.

I could see an alternative, which would be to lock the final file,
directly. After all, those locks are just advisory locks; they don't
actually prevent another process from writing to the locked files at all.

However, that does not work, because then it would be an empty file, so
would not match its hashes, and the download wrapper would remove it,
thus leaving a competing build free to create that lock file again while
the first one is still trying to download it...

So we need to lock a file other than the destination file, and back to
square one...

> 3. Add a git-specific "global" lock, but do it directly from the dl-wrapper to
> keep the code simple.

And then, what should be locked?

If we lock a (say) 'git.lock' file in the per-package directory, but the
download is interrupted, the lock file would linger around. Since it would
only ever be the one git-related lock file, that is not too bad. Even if
we had more per-backend locks, that is still just a few of them, and we
could make them hidden files (.git.lock).

If we rely on the fact that the backend would create a directory named
after itself (e.g. the git backend creates a directory named 'git'),
then the dl-wrapper could create that directory and hand it over to
the backend. But then, the backend should never ever remove that
directory (which we currently do in case the git tree is borked), or a
competing download could grab the lock while the first would still be
attempting the download.

So, I am not fond of all those new lock files, which have the potential
to eventually clutter the download directory... :-(

(Note: I am not saying that the mere presence of lock files would
prevent future downloads; not at all, as we use flock(2) on them. I'm
just saying that stale lock files are ugly...)

> (in terms of patch order, obviously 2 and 3 have to be swapped).

Regards,
Yann E. MORIN.

-- 
.-----------------.--------------------.------------------.--------------------.
|  Yann E. MORIN  | Real-Time Embedded | /"\ ASCII RIBBON | Erics' conspiracy: |
| +33 662 376 056 | Software  Designer | \ / CAMPAIGN     |  ___               |
| +33 223 225 172 `------------.-------:  X  AGAINST      |  \e/  There is no  |
| http://ymorin.is-a-geek.org/ | _/*\_ | / \ HTML MAIL    |   v   conspiracy.  |
'------------------------------^-------^------------------^--------------------'


* [Buildroot] [PATCH 2/5] download/git: re-run the backend with a lock to the git tree
  2018-10-22 21:12         ` Yann E. MORIN
@ 2018-10-22 22:11           ` Arnout Vandecappelle
  0 siblings, 0 replies; 15+ messages in thread
From: Arnout Vandecappelle @ 2018-10-22 22:11 UTC (permalink / raw)
  To: buildroot



On 10/22/18 10:12 PM, Yann E. MORIN wrote:
> On 2018-10-20 23:11 +0100, Arnout Vandecappelle spake thusly:
> [--SNIP--]
>>  So, my proposal would be:
> 
> As we discussed IRL, this is a good suggestion, but now that it has had
> time to bounce a bit between my two still-working neurons, here are my
> comments on it...
> 
>> 1. Remove the redundant temporary files from the download infra.
> 
>> 2. Replace the flock on the per-package download dir with a flock of a temporary
>> file named ${filename}.lock. That way, we serialize just what needs to be
>> serialized.
> 
> And who would be responsible to remove those lock files?

 Nobody. You don't remove it. It's just one extra inode. It gets cleaned up when
you remove the package download directory, the same way that you would clean
up old tarballs.

> When the download runs OK, I can see it pretty easy, even though it is
> not nice... But when the download gets interrupted (think Ctrl-C or
> SIGTERM/SIGKILL from a CI...)?
> 
> I'm afraid we could end up with dozens of such stale lock files cluttering
> the (per-package) directory... Locking the per-package directory itself
> was nice and dandy because it was expected to live on and stay.
> 
> I could see an alternative, which would be to lock the final file,
> directly. After all, those locks are just advisory locks; they don't
> actually prevent another process from writing to the locked files at all.
> 
> However, that does not work, because then it would be an empty file, so
> would not match its hashes, and the download wrapper would remove it,
> thus leaving a competing build free to create that lock file again while
> the first one is still trying to download it...
> 
> So we need to lock a file other than the destination file, and back to
> square one...
> 
>> 3. Add a git-specific "global" lock, but do it directly from the dl-wrapper to
>> keep the code simple.
> 
> And then, what should be locked?

 Hm, good point. I thought "the git directory of course", but that only gets
created by the git download script and it would be weird to create it in the
dl-wrapper...

> If we lock a (say) 'git.lock' file in the per-package directory, but the

 So yeah, git.lock I guess.

 On second thought, though, I don't find the re-exec-under-lock construct that
ugly. It's a pretty standard way of doing things, similar to e.g.
re-exec-under-sudo. Then we can simply put the lock on the git dir.

 Oh, but then the lock is taken after mkdir -p, which is racy by itself... Not a
big deal, a race will just result in an error message, not an exit, because the git
download helper doesn't do 'set -e'. Hm, it does 'set -E' instead, is that
really the intention? You changed it from -e to -E in b7efb43e8, but -E is
(AFAIK) meaningless without -e - perhaps you meant to enable both -E and -e?
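
(A minimal bash sketch of those semantics, for reference; the function
name is illustrative:

    set -E                  # inherit the ERR trap in shell functions
    trap 'echo "ERR trap fired"' ERR
    f() { false; true; }    # with -E, 'false' fires the trap in here
    f
    echo "still running"    # no -e, so execution simply continues

Without -E, the trap would not fire inside f; and since -e is not set,
a failing command never aborts the script either way.)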

 But I digress...

 Oh, but now I realize you also need that whole complexity of not removing the
git/ directory but instead removing all files in it, so you actually do keep the
lock on the directory... That bit is actually pretty horrible. So then, a
separate git.lock file would in fact be a whole lot better.

> download is interrupted, the lock file would linger around. Since it would
> only ever be the one git-related lock file, that is not too bad. Even if
> we had more per-backend locks, that is still just a few of them, and we
> could make them hidden files (.git.lock).
> 
> If we rely on the fact that the backend would create a directory named
> after itself (e.g. the git backend creates a directory named 'git'),
> then the dl-wrapper could create that directory and hand it over to
> the backend. But then, the backend should never ever remove that
> directory (which we currently do in case the git tree is borked), or a
> competing download could grab the lock while the first would still be
> attempting the download.
>
> So, I am not fond of all those new lock files, which have the potential
> to eventually clutter the download directory... :-(

 Bollocks. What about the clutter of all those stamp files we create in the
build directories?

 You probably don't want to see those files, indeed, so put a . in front
of them.

 What I find a *lot* more annoying is that my download directory is cluttered
with files which have been downloaded ages ago and whose version has been bumped
so they never get used anymore. Maybe I should add a tiny script to utils/ that
does the correct 'find -atime +$1 -delete' command. But I really want only files
for which a newer version exists to be deleted. Oh well, different story.
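
(That tiny script could be as small as this sketch, where the age in
days comes in as $1; the download dir location is an assumption here:

    #!/bin/sh
    # Delete downloaded files not accessed in the last $1 days.
    find "${BR2_DL_DIR:-dl}" -type f -atime "+${1}" -delete

As noted, this deletes by age only, not by whether a newer version of
a file exists.)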


 Regards,
 Arnout


> (Note: I am not saying that the mere presence of lock files would
> prevent future downloads; not at all, as we use flock(2) on them. I'm
> just saying that stale lock files are ugly...)
> 
>> (in terms of patch order, obviously 2 and 3 have to be swapped).
> 
> Regards,
> Yann E. MORIN.
> 

