* [PATCH 0/14] resumable network bundles
@ 2011-11-10  7:43 Jeff King
  2011-11-10  7:45 ` Jeff King
                   ` (16 more replies)
  0 siblings, 17 replies; 23+ messages in thread
From: Jeff King @ 2011-11-10  7:43 UTC
  To: git

One option for resumable clones that has been discussed is letting the
server point the client, via http, to a static bundle containing most of
the history, followed by a fetch from the actual git repo (which should
be much cheaper once the client has all of the bundled history). This
series implements "step 0" of that plan: just letting bundles be fetched
across the network in the first place.
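
For illustration (the URLs are hypothetical), the end state can already
be driven by hand once this series is in place:

  # grab the bulk of history from a static, resumable bundle
  git clone http://example.com/repo.bundle repo
  cd repo

  # then point at the real repo and fetch the small remainder
  git remote set-url origin git://example.com/repo.git
  git fetch origin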

Shawn raised some issues about using bundles for this (as opposed to
accessing the packfiles themselves); specifically, it increases the I/O
footprint of a repository that has to serve both the bundled version of
the pack and the regular packfile.

So it may be that we don't follow this plan all the way through.
However, even if we don't, fetching bundles over http is still a useful
thing to be able to do, which makes this first step worth doing either
way.

  [01/14]: t/lib-httpd: check for NO_CURL
  [02/14]: http: turn off curl signals
  [03/14]: http: refactor http_request function
  [04/14]: http: add a public function for arbitrary-callback request
  [05/14]: remote-curl: use http callback for requesting refs
  [06/14]: transport: factor out bundle to ref list conversion
  [07/14]: bundle: add is_bundle_buf helper
  [08/14]: remote-curl: free "discovery" object
  [09/14]: remote-curl: auto-detect bundles when fetching refs
  [10/14]: remote-curl: try base $URL after $URL/info/refs
  [11/14]: progress: allow pure-throughput progress meters
  [12/14]: remote-curl: show progress for bundle downloads
  [13/14]: remote-curl: resume interrupted bundle transfers
  [14/14]: clone: give advice on how to resume a failed clone

-Peff

* Re: [PATCH 0/14] resumable network bundles
  2011-11-10  7:43 [PATCH 0/14] resumable network bundles Jeff King
@ 2011-11-10  7:45 ` Jeff King
  2011-11-10  7:46 ` [PATCH 01/14] t/lib-httpd: check for NO_CURL Jeff King
                   ` (15 subsequent siblings)
  16 siblings, 0 replies; 23+ messages in thread
From: Jeff King @ 2011-11-10  7:45 UTC
  To: git

On Thu, Nov 10, 2011 at 02:43:30AM -0500, Jeff King wrote:

>   [01/14]: t/lib-httpd: check for NO_CURL
>   [02/14]: http: turn off curl signals
>   [03/14]: http: refactor http_request function
>   [04/14]: http: add a public function for arbitrary-callback request
>   [05/14]: remote-curl: use http callback for requesting refs
>   [06/14]: transport: factor out bundle to ref list conversion
>   [07/14]: bundle: add is_bundle_buf helper
>   [08/14]: remote-curl: free "discovery" object
>   [09/14]: remote-curl: auto-detect bundles when fetching refs
>   [10/14]: remote-curl: try base $URL after $URL/info/refs
>   [11/14]: progress: allow pure-throughput progress meters
>   [12/14]: remote-curl: show progress for bundle downloads
>   [13/14]: remote-curl: resume interrupted bundle transfers
>   [14/14]: clone: give advice on how to resume a failed clone

I forgot to mention: this goes on top of mf/curl-select-fdset. It's only
in next now, but some of my http cleanups build semantically on the
cleanups in that topic.

-Peff

* [PATCH 01/14] t/lib-httpd: check for NO_CURL
  2011-11-10  7:43 [PATCH 0/14] resumable network bundles Jeff King
  2011-11-10  7:45 ` Jeff King
@ 2011-11-10  7:46 ` Jeff King
  2011-11-10  7:48 ` [PATCH 02/14] http: turn off curl signals Jeff King
                   ` (14 subsequent siblings)
  16 siblings, 0 replies; 23+ messages in thread
From: Jeff King @ 2011-11-10  7:46 UTC
  To: git

We already do this check individually in each of the tests
that include lib-httpd. Let's factor it out.

There is one test (t5540) that uses lib-httpd but does not
currently do this check. However, it has a stricter check
that is a superset of this one (it requires everything
needed to build git-http-push, which in turn requires that
NO_CURL is unset), so adding this extra check won't hurt
anything.

Signed-off-by: Jeff King <peff@peff.net>
---
 t/lib-httpd.sh          |    5 +++++
 t/t5541-http-push.sh    |    5 -----
 t/t5550-http-fetch.sh   |    5 -----
 t/t5551-http-fetch.sh   |    5 -----
 t/t5561-http-backend.sh |    5 -----
 5 files changed, 5 insertions(+), 20 deletions(-)

diff --git a/t/lib-httpd.sh b/t/lib-httpd.sh
index f7dc078..8331527 100644
--- a/t/lib-httpd.sh
+++ b/t/lib-httpd.sh
@@ -3,6 +3,11 @@
 # Copyright (c) 2008 Clemens Buchacher <drizzd@aon.at>
 #
 
+if test -n "$NO_CURL"; then
+	skip_all='skipping test, git built without http support'
+	test_done
+fi
+
 if test -z "$GIT_TEST_HTTPD"
 then
 	skip_all="Network testing disabled (define GIT_TEST_HTTPD to enable)"
diff --git a/t/t5541-http-push.sh b/t/t5541-http-push.sh
index a73c826..a326ee0 100755
--- a/t/t5541-http-push.sh
+++ b/t/t5541-http-push.sh
@@ -6,11 +6,6 @@
 test_description='test smart pushing over http via http-backend'
 . ./test-lib.sh
 
-if test -n "$NO_CURL"; then
-	skip_all='skipping test, git built without http support'
-	test_done
-fi
-
 ROOT_PATH="$PWD"
 LIB_HTTPD_PORT=${LIB_HTTPD_PORT-'5541'}
 . "$TEST_DIRECTORY"/lib-httpd.sh
diff --git a/t/t5550-http-fetch.sh b/t/t5550-http-fetch.sh
index 311a33c..e9282c5 100755
--- a/t/t5550-http-fetch.sh
+++ b/t/t5550-http-fetch.sh
@@ -3,11 +3,6 @@
 test_description='test dumb fetching over http via static file'
 . ./test-lib.sh
 
-if test -n "$NO_CURL"; then
-	skip_all='skipping test, git built without http support'
-	test_done
-fi
-
 LIB_HTTPD_PORT=${LIB_HTTPD_PORT-'5550'}
 . "$TEST_DIRECTORY"/lib-httpd.sh
 start_httpd
diff --git a/t/t5551-http-fetch.sh b/t/t5551-http-fetch.sh
index 26d3557..3557f2e 100755
--- a/t/t5551-http-fetch.sh
+++ b/t/t5551-http-fetch.sh
@@ -3,11 +3,6 @@
 test_description='test smart fetching over http via http-backend'
 . ./test-lib.sh
 
-if test -n "$NO_CURL"; then
-	skip_all='skipping test, git built without http support'
-	test_done
-fi
-
 LIB_HTTPD_PORT=${LIB_HTTPD_PORT-'5551'}
 . "$TEST_DIRECTORY"/lib-httpd.sh
 start_httpd
diff --git a/t/t5561-http-backend.sh b/t/t5561-http-backend.sh
index b5d7fbc..974be7c 100755
--- a/t/t5561-http-backend.sh
+++ b/t/t5561-http-backend.sh
@@ -3,11 +3,6 @@
 test_description='test git-http-backend'
 . ./test-lib.sh
 
-if test -n "$NO_CURL"; then
-	skip_all='skipping test, git built without http support'
-	test_done
-fi
-
 LIB_HTTPD_PORT=${LIB_HTTPD_PORT-'5561'}
 . "$TEST_DIRECTORY"/lib-httpd.sh
 start_httpd
-- 
1.7.7.2.7.g9f96f

* [PATCH 02/14] http: turn off curl signals
  2011-11-10  7:43 [PATCH 0/14] resumable network bundles Jeff King
  2011-11-10  7:45 ` Jeff King
  2011-11-10  7:46 ` [PATCH 01/14] t/lib-httpd: check for NO_CURL Jeff King
@ 2011-11-10  7:48 ` Jeff King
  2011-11-10  8:43   ` Daniel Stenberg
  2011-11-10  7:48 ` [PATCH 03/14] http: refactor http_request function Jeff King
                   ` (13 subsequent siblings)
  16 siblings, 1 reply; 23+ messages in thread
From: Jeff King @ 2011-11-10  7:48 UTC
  To: git

Curl sets and clears the handler for SIGALRM, which makes it
incompatible with git's progress code. However, we can ask
curl not to do this.

Signed-off-by: Jeff King <peff@peff.net>
---
I'm a little iffy on this one. If I understand correctly, depending on
the build and configuration, curl may not be able to time out during DNS
lookups. But I'm not sure that ever happens anyway, since we don't set
any timeouts.

An alternate plan would be to give the progress code a mode where it
gets poked by curl every second or so (curl has a PROGRESSFUNCTION
option for doing this).
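
For reference, a rough sketch of that alternative (the callback name is
made up; the options are stock libcurl):

  static int poke_progress(void *clientp, double dltotal, double dlnow,
                           double ultotal, double ulnow)
  {
          struct progress *progress = clientp;
          /* let git's throughput meter redraw with the bytes so far */
          display_throughput(progress, (off_t)dlnow);
          return 0; /* returning non-zero would abort the transfer */
  }

  /* ... and when setting up the handle: */
  curl_easy_setopt(curl, CURLOPT_NOPROGRESS, 0);
  curl_easy_setopt(curl, CURLOPT_PROGRESSFUNCTION, poke_progress);
  curl_easy_setopt(curl, CURLOPT_PROGRESSDATA, progress);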

 http.c |    2 ++
 1 files changed, 2 insertions(+), 0 deletions(-)

diff --git a/http.c b/http.c
index 95c2dfd..4f9e004 100644
--- a/http.c
+++ b/http.c
@@ -318,6 +318,8 @@ static int has_cert_password(void)
 	if (curl_http_proxy)
 		curl_easy_setopt(result, CURLOPT_PROXY, curl_http_proxy);
 
+	curl_easy_setopt(result, CURLOPT_NOSIGNAL, 1);
+
 	return result;
 }
 
-- 
1.7.7.2.7.g9f96f

* [PATCH 03/14] http: refactor http_request function
  2011-11-10  7:43 [PATCH 0/14] resumable network bundles Jeff King
                   ` (2 preceding siblings ...)
  2011-11-10  7:48 ` [PATCH 02/14] http: turn off curl signals Jeff King
@ 2011-11-10  7:48 ` Jeff King
  2011-11-10  7:49 ` [PATCH 04/14] http: add a public function for arbitrary-callback request Jeff King
                   ` (12 subsequent siblings)
  16 siblings, 0 replies; 23+ messages in thread
From: Jeff King @ 2011-11-10  7:48 UTC
  To: git

This function takes a flag to indicate where the output
should go (either to a file or to a strbuf). This flag is
mostly used to set the callback function we hand to curl.

This isn't very flexible for adding new output types.
Instead, let's just let callers pass in the callback
function directly. This results in shorter, more readable,
and more flexible code.

The only other thing the flag was used for was to set a
"Range" header when we already have a partial file (by using
the results of ftell). This patch also adds an "offset"
parameter, which can be used by callers to specify this
feature separately (which is also more flexible, as non-FILE
callers can now resume partial transfers).

Signed-off-by: Jeff King <peff@peff.net>
---
 http.c |   37 ++++++++++++++-----------------------
 1 files changed, 14 insertions(+), 23 deletions(-)

diff --git a/http.c b/http.c
index 4f9e004..9ffd894 100644
--- a/http.c
+++ b/http.c
@@ -797,11 +797,8 @@ void append_remote_object_url(struct strbuf *buf, const char *url,
 	return strbuf_detach(&buf, NULL);
 }
 
-/* http_request() targets */
-#define HTTP_REQUEST_STRBUF	0
-#define HTTP_REQUEST_FILE	1
-
-static int http_request(const char *url, void *result, int target, int options)
+static int http_request(const char *url, curl_write_callback cb, void *result,
+			long offset, int options)
 {
 	struct active_request_slot *slot;
 	struct slot_results results;
@@ -818,19 +815,13 @@ static int http_request(const char *url, void *result, int target, int options)
 	} else {
 		curl_easy_setopt(slot->curl, CURLOPT_NOBODY, 0);
 		curl_easy_setopt(slot->curl, CURLOPT_FILE, result);
+		curl_easy_setopt(slot->curl, CURLOPT_WRITEFUNCTION, cb);
+	}
 
-		if (target == HTTP_REQUEST_FILE) {
-			long posn = ftell(result);
-			curl_easy_setopt(slot->curl, CURLOPT_WRITEFUNCTION,
-					 fwrite);
-			if (posn > 0) {
-				strbuf_addf(&buf, "Range: bytes=%ld-", posn);
-				headers = curl_slist_append(headers, buf.buf);
-				strbuf_reset(&buf);
-			}
-		} else
-			curl_easy_setopt(slot->curl, CURLOPT_WRITEFUNCTION,
-					 fwrite_buffer);
+	if (offset > 0) {
+		strbuf_addf(&buf, "Range: bytes=%ld-", offset);
+		headers = curl_slist_append(headers, buf.buf);
+		strbuf_reset(&buf);
 	}
 
 	strbuf_addstr(&buf, "Pragma:");
@@ -881,18 +872,18 @@ static int http_request(const char *url, void *result, int target, int options)
 	return ret;
 }
 
-static int http_request_reauth(const char *url, void *result, int target,
-			       int options)
+static int http_request_reauth(const char *url, curl_write_callback cb,
+			       void *result, long offset, int options)
 {
-	int ret = http_request(url, result, target, options);
+	int ret = http_request(url, cb, result, offset, options);
 	if (ret != HTTP_REAUTH)
 		return ret;
-	return http_request(url, result, target, options);
+	return http_request(url, cb, result, offset, options);
 }
 
 int http_get_strbuf(const char *url, struct strbuf *result, int options)
 {
-	return http_request_reauth(url, result, HTTP_REQUEST_STRBUF, options);
+	return http_request_reauth(url, fwrite_buffer, result, 0, options);
 }
 
 /*
@@ -915,7 +906,7 @@ static int http_get_file(const char *url, const char *filename, int options)
 		goto cleanup;
 	}
 
-	ret = http_request_reauth(url, result, HTTP_REQUEST_FILE, options);
+	ret = http_request_reauth(url, NULL, result, ftell(result), options);
 	fclose(result);
 
 	if ((ret == HTTP_OK) && move_temp_to_file(tmpfile.buf, filename))
-- 
1.7.7.2.7.g9f96f

* [PATCH 04/14] http: add a public function for arbitrary-callback request
  2011-11-10  7:43 [PATCH 0/14] resumable network bundles Jeff King
                   ` (3 preceding siblings ...)
  2011-11-10  7:48 ` [PATCH 03/14] http: refactor http_request function Jeff King
@ 2011-11-10  7:49 ` Jeff King
  2011-11-10  7:49 ` [PATCH 05/14] remote-curl: use http callback for requesting refs Jeff King
                   ` (11 subsequent siblings)
  16 siblings, 0 replies; 23+ messages in thread
From: Jeff King @ 2011-11-10  7:49 UTC
  To: git

The http_request function recently learned to take arbitrary
callbacks; let's expose this functionality to callers.

Signed-off-by: Jeff King <peff@peff.net>
---
 http.c |    6 ++++++
 http.h |    3 +++
 2 files changed, 9 insertions(+), 0 deletions(-)

diff --git a/http.c b/http.c
index 9ffd894..91451e9 100644
--- a/http.c
+++ b/http.c
@@ -886,6 +886,12 @@ int http_get_strbuf(const char *url, struct strbuf *result, int options)
 	return http_request_reauth(url, fwrite_buffer, result, 0, options);
 }
 
+int http_get_callback(const char *url, curl_write_callback cb,
+		      void *data, long offset, int options)
+{
+	return http_request_reauth(url, cb, data, offset, options);
+}
+
 /*
  * Downloads an url and stores the result in the given file.
  *
diff --git a/http.h b/http.h
index ee16069..4977bde 100644
--- a/http.h
+++ b/http.h
@@ -132,6 +132,9 @@ extern void append_remote_object_url(struct strbuf *buf, const char *url,
  */
 int http_get_strbuf(const char *url, struct strbuf *result, int options);
 
+int http_get_callback(const char *url, curl_write_callback cb, void *data,
+		      long offset, int options);
+
 /*
  * Prints an error message using error() containing url and curl_errorstr,
  * and returns ret.
-- 
1.7.7.2.7.g9f96f

* [PATCH 05/14] remote-curl: use http callback for requesting refs
  2011-11-10  7:43 [PATCH 0/14] resumable network bundles Jeff King
                   ` (4 preceding siblings ...)
  2011-11-10  7:49 ` [PATCH 04/14] http: add a public function for arbitrary-callback request Jeff King
@ 2011-11-10  7:49 ` Jeff King
  2011-11-10  7:49 ` [PATCH 06/14] transport: factor out bundle to ref list conversion Jeff King
                   ` (10 subsequent siblings)
  16 siblings, 0 replies; 23+ messages in thread
From: Jeff King @ 2011-11-10  7:49 UTC
  To: git

This should behave identically to the current strbuf code,
but opens up room for us to do more clever things with
bundles in a future patch.

Signed-off-by: Jeff King <peff@peff.net>
---
Obviously it's way more code for the same thing, but future patches will
make the design more clear.

 remote-curl.c |   22 ++++++++++++++++++++--
 1 files changed, 20 insertions(+), 2 deletions(-)

diff --git a/remote-curl.c b/remote-curl.c
index 0e720ee..fb4d853 100644
--- a/remote-curl.c
+++ b/remote-curl.c
@@ -90,6 +90,24 @@ static void free_discovery(struct discovery *d)
 	}
 }
 
+struct get_refs_cb_data {
+	struct strbuf *out;
+};
+
+static size_t get_refs_callback(char *buf, size_t sz, size_t n, void *vdata)
+{
+	struct get_refs_cb_data *data = vdata;
+	strbuf_add(data->out, buf, sz * n);
+	return sz * n;
+}
+
+static int get_refs_from_url(const char *url, struct strbuf *out, int options)
+{
+	struct get_refs_cb_data data;
+	data.out = out;
+	return http_get_callback(url, get_refs_callback, &data, 0, options);
+}
+
 static struct discovery* discover_refs(const char *service)
 {
 	struct strbuf buffer = STRBUF_INIT;
@@ -112,7 +130,7 @@ static void free_discovery(struct discovery *d)
 	}
 	refs_url = strbuf_detach(&buffer, NULL);
 
-	http_ret = http_get_strbuf(refs_url, &buffer, HTTP_NO_CACHE);
+	http_ret = get_refs_from_url(refs_url, &buffer, HTTP_NO_CACHE);
 
 	/* try again with "plain" url (no ? or & appended) */
 	if (http_ret != HTTP_OK && http_ret != HTTP_NOAUTH) {
@@ -123,7 +141,7 @@ static void free_discovery(struct discovery *d)
 		strbuf_addf(&buffer, "%sinfo/refs", url);
 		refs_url = strbuf_detach(&buffer, NULL);
 
-		http_ret = http_get_strbuf(refs_url, &buffer, HTTP_NO_CACHE);
+		http_ret = get_refs_from_url(refs_url, &buffer, HTTP_NO_CACHE);
 	}
 
 	switch (http_ret) {
-- 
1.7.7.2.7.g9f96f

* [PATCH 06/14] transport: factor out bundle to ref list conversion
  2011-11-10  7:43 [PATCH 0/14] resumable network bundles Jeff King
                   ` (5 preceding siblings ...)
  2011-11-10  7:49 ` [PATCH 05/14] remote-curl: use http callback for requesting refs Jeff King
@ 2011-11-10  7:49 ` Jeff King
  2011-11-10  7:50 ` [PATCH 07/14] bundle: add is_bundle_buf helper Jeff King
                   ` (9 subsequent siblings)
  16 siblings, 0 replies; 23+ messages in thread
From: Jeff King @ 2011-11-10  7:49 UTC
  To: git

There is a data structure mismatch between what the
transport code wants (a linked list of "struct ref") and
what the bundle header provides (an array of ref names and
sha1s), so the transport code has to convert.

Let's factor out this conversion to make it useful to other
transport-ish callers (like remote-curl).

Signed-off-by: Jeff King <peff@peff.net>
---
 bundle.c    |   16 ++++++++++++++++
 bundle.h    |    2 ++
 transport.c |   11 +----------
 3 files changed, 19 insertions(+), 10 deletions(-)

diff --git a/bundle.c b/bundle.c
index 08020bc..e48fe2f 100644
--- a/bundle.c
+++ b/bundle.c
@@ -7,6 +7,7 @@
 #include "list-objects.h"
 #include "run-command.h"
 #include "refs.h"
+#include "remote.h"
 
 static const char bundle_signature[] = "# v2 git bundle\n";
 
@@ -449,3 +450,18 @@ int unbundle(struct bundle_header *header, int bundle_fd, int flags)
 		return error("index-pack died");
 	return 0;
 }
+
+struct ref *bundle_header_to_refs(const struct bundle_header *header)
+{
+	struct ref *result = NULL;
+	int i;
+
+	for (i = 0; i < header->references.nr; i++) {
+		struct ref_list_entry *e = header->references.list + i;
+		struct ref *ref = alloc_ref(e->name);
+		hashcpy(ref->old_sha1, e->sha1);
+		ref->next = result;
+		result = ref;
+	}
+	return result;
+}
diff --git a/bundle.h b/bundle.h
index 1584e4d..675cc97 100644
--- a/bundle.h
+++ b/bundle.h
@@ -24,4 +24,6 @@ int create_bundle(struct bundle_header *header, const char *path,
 int list_bundle_refs(struct bundle_header *header,
 		int argc, const char **argv);
 
+struct ref *bundle_header_to_refs(const struct bundle_header *header);
+
 #endif
diff --git a/transport.c b/transport.c
index 51814b5..5020bbb 100644
--- a/transport.c
+++ b/transport.c
@@ -407,8 +407,6 @@ struct bundle_transport_data {
 static struct ref *get_refs_from_bundle(struct transport *transport, int for_push)
 {
 	struct bundle_transport_data *data = transport->data;
-	struct ref *result = NULL;
-	int i;
 
 	if (for_push)
 		return NULL;
@@ -418,14 +416,7 @@ struct bundle_transport_data {
 	data->fd = read_bundle_header(transport->url, &data->header);
 	if (data->fd < 0)
 		die ("Could not read bundle '%s'.", transport->url);
-	for (i = 0; i < data->header.references.nr; i++) {
-		struct ref_list_entry *e = data->header.references.list + i;
-		struct ref *ref = alloc_ref(e->name);
-		hashcpy(ref->old_sha1, e->sha1);
-		ref->next = result;
-		result = ref;
-	}
-	return result;
+	return bundle_header_to_refs(&data->header);
 }
 
 static int fetch_refs_from_bundle(struct transport *transport,
-- 
1.7.7.2.7.g9f96f

* [PATCH 07/14] bundle: add is_bundle_buf helper
  2011-11-10  7:43 [PATCH 0/14] resumable network bundles Jeff King
                   ` (6 preceding siblings ...)
  2011-11-10  7:49 ` [PATCH 06/14] transport: factor out bundle to ref list conversion Jeff King
@ 2011-11-10  7:50 ` Jeff King
  2011-11-10  7:50 ` [PATCH 08/14] remote-curl: free "discovery" object Jeff King
                   ` (8 subsequent siblings)
  16 siblings, 0 replies; 23+ messages in thread
From: Jeff King @ 2011-11-10  7:50 UTC
  To: git

This is similar to is_bundle, but checks an in-memory buffer
rather than a file. It also works on a partial buffer,
checking only the bundle signature. This means the result is
a three-way conditional: yes, no, or "we do not have enough
data yet".
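
A streaming caller would handle the three-way result along
these lines (just a sketch; the helpers are hypothetical):

  switch (is_bundle_buf(buf.buf, buf.len)) {
  case 1:  /* definitely a bundle */
          spool_bundle_to_disk(&buf);
          break;
  case 0:  /* definitely not a bundle */
          parse_ref_advertisement(&buf);
          break;
  default: /* -1: signature incomplete; read more, check again */
          break;
  }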

Signed-off-by: Jeff King <peff@peff.net>
---
 bundle.c |   14 ++++++++++++++
 bundle.h |    1 +
 2 files changed, 15 insertions(+), 0 deletions(-)

diff --git a/bundle.c b/bundle.c
index e48fe2f..6f911bc 100644
--- a/bundle.c
+++ b/bundle.c
@@ -122,6 +122,20 @@ int is_bundle(const char *path, int quiet)
 	return (fd >= 0);
 }
 
+int is_bundle_buf(const char *s, int len)
+{
+	if (len > strlen(bundle_signature))
+		len = strlen(bundle_signature);
+	/* If we don't match what we already have, then definitely not. */
+	if (memcmp(s, bundle_signature, len))
+		return 0;
+	/* If we have enough bytes, we can say yes */
+	if (len == strlen(bundle_signature))
+		return 1;
+	/* otherwise, we can only say "maybe" */
+	return -1;
+}
+
 static int list_refs(struct ref_list *r, int argc, const char **argv)
 {
 	int i;
diff --git a/bundle.h b/bundle.h
index 675cc97..8bec44d 100644
--- a/bundle.h
+++ b/bundle.h
@@ -15,6 +15,7 @@ struct bundle_header {
 };
 
 int is_bundle(const char *path, int quiet);
+int is_bundle_buf(const char *s, int len);
 int read_bundle_header(const char *path, struct bundle_header *header);
 int create_bundle(struct bundle_header *header, const char *path,
 		int argc, const char **argv);
-- 
1.7.7.2.7.g9f96f

* [PATCH 08/14] remote-curl: free "discovery" object
  2011-11-10  7:43 [PATCH 0/14] resumable network bundles Jeff King
                   ` (7 preceding siblings ...)
  2011-11-10  7:50 ` [PATCH 07/14] bundle: add is_bundle_buf helper Jeff King
@ 2011-11-10  7:50 ` Jeff King
  2011-11-10  7:50 ` [PATCH 09/14] remote-curl: auto-detect bundles when fetching refs Jeff King
                   ` (7 subsequent siblings)
  16 siblings, 0 replies; 23+ messages in thread
From: Jeff King @ 2011-11-10  7:50 UTC
  To: git

We cache the last-used "discovery" object, which contains
the data we pulled from the remote about which refs it has;
this saves us an HTTP round-trip when somebody does
something like "list" followed by "fetch".

We don't bother free()ing it at the end of the program,
because it is just memory that will be reclaimed by the OS
anyway. However, cleaning up explicitly will future-proof us
against later changes that add external storage (like
temporary files).

Signed-off-by: Jeff King <peff@peff.net>
---
 remote-curl.c |    1 +
 1 files changed, 1 insertions(+), 0 deletions(-)

diff --git a/remote-curl.c b/remote-curl.c
index fb4d853..014d413 100644
--- a/remote-curl.c
+++ b/remote-curl.c
@@ -933,6 +933,7 @@ int main(int argc, const char **argv)
 		strbuf_reset(&buf);
 	} while (1);
 
+	free_discovery(last_discovery);
 	http_cleanup();
 
 	return 0;
-- 
1.7.7.2.7.g9f96f

* [PATCH 09/14] remote-curl: auto-detect bundles when fetching refs
  2011-11-10  7:43 [PATCH 0/14] resumable network bundles Jeff King
                   ` (8 preceding siblings ...)
  2011-11-10  7:50 ` [PATCH 08/14] remote-curl: free "discovery" object Jeff King
@ 2011-11-10  7:50 ` Jeff King
  2011-11-10  7:51 ` [PATCH 10/14] remote-curl: try base $URL after $URL/info/refs Jeff King
                   ` (6 subsequent siblings)
  16 siblings, 0 replies; 23+ messages in thread
From: Jeff King @ 2011-11-10  7:50 UTC
  To: git

You can't currently fetch from a network bundle, like:

  git fetch http://example.com/foo.bundle

This patch takes the first (and biggest) step towards that
working: it auto-detects when fetching refs results in a
bundle, and automatically spools the bundle to disk and
fetches from it.

There are a few important design decisions to note:

  1. We auto-detect the bundle based on content, not based
     on a special token in the URL (like ending in
     ".bundle"). This lets the server side be flexible with
     its URLs (e.g., "http://example.com/bundle?repo=foo").

  2. When fetching refs, we don't actually fetch $URL, but
     start with $URL/info/refs, looking for smart or dumb
     http. Some servers, when the file "foo.bundle" exists,
     will serve it to the client when "foo.bundle/info/refs"
     is requested. Therefore we may be "surprised" to receive
     a bundle when we thought we were just getting the list
     of refs, and need to handle it appropriately.

  3. We spool the bundle to disk, and then run "index-pack
     --fix-thin" to create a packfile. That means we will
     momentarily use twice the size of the bundle in local
     disk space. Avoiding this would mean piping directly to
     "index-pack --fix-thin".  However, if we want to be
     able to resume the transfer of the bundle after an
     interruption, then we need to save the bundle's pack.

     In theory a smart index-pack that was interrupted could
     write out its partial results along with a count of how
     many bytes it actually consumed (i.e., where to resume
     next time), and then pick up where it left off when fed
     the rest of the data. But index-pack isn't that smart
     yet, so let's start off with spooling.

No tests yet, as apache is not one of the "surprising"
servers from (2), and our test harness is built around
apache (though even with just this patch, you can fetch from
surprising servers like lighttpd).

Signed-off-by: Jeff King <peff@peff.net>
---
This is really the big, interesting one.

 remote-curl.c |  124 ++++++++++++++++++++++++++++++++++++++++++++++++++++++--
 1 files changed, 119 insertions(+), 5 deletions(-)

diff --git a/remote-curl.c b/remote-curl.c
index 014d413..84586e0 100644
--- a/remote-curl.c
+++ b/remote-curl.c
@@ -7,6 +7,7 @@
 #include "run-command.h"
 #include "pkt-line.h"
 #include "sideband.h"
+#include "bundle.h"
 
 static struct remote *remote;
 static const char *url; /* always ends with a trailing slash */
@@ -77,6 +78,10 @@ struct discovery {
 	char *buf;
 	size_t len;
 	unsigned proto_git : 1;
+
+	char *bundle_filename;
+	int bundle_fd;
+	struct bundle_header bundle_header;
 };
 static struct discovery *last_discovery;
 
@@ -86,26 +91,93 @@ static void free_discovery(struct discovery *d)
 		if (d == last_discovery)
 			last_discovery = NULL;
 		free(d->buf_alloc);
+		if (d->bundle_fd >= 0)
+			close(d->bundle_fd);
+		if (d->bundle_filename) {
+			unlink(d->bundle_filename);
+			free(d->bundle_filename);
+		}
 		free(d);
 	}
 }
 
 struct get_refs_cb_data {
 	struct strbuf *out;
+
+	int is_bundle;
+	const char *tmpname;
+	FILE *fh;
 };
 
 static size_t get_refs_callback(char *buf, size_t sz, size_t n, void *vdata)
 {
 	struct get_refs_cb_data *data = vdata;
-	strbuf_add(data->out, buf, sz * n);
+	struct strbuf *out = data->out;
+
+	if (data->is_bundle > 0)
+		return fwrite(buf, sz, n, data->fh);
+
+	strbuf_add(out, buf, sz * n);
+
+	if (data->is_bundle == 0)
+		return sz * n;
+
+	data->is_bundle = is_bundle_buf(out->buf, out->len);
+	if (data->is_bundle > 0) {
+		data->fh = fopen(data->tmpname, "wb");
+		if (!data->fh)
+			die_errno("unable to open %s", data->tmpname);
+		if (fwrite(out->buf, 1, out->len, data->fh) < out->len)
+			die_errno("unable to write to %s", data->tmpname);
+	}
 	return sz * n;
 }
 
-static int get_refs_from_url(const char *url, struct strbuf *out, int options)
+static int get_refs_from_url(const char *url, struct strbuf *out, int options,
+			     const char *tmpname, int *is_bundle)
 {
 	struct get_refs_cb_data data;
+	int ret;
+
 	data.out = out;
-	return http_get_callback(url, get_refs_callback, &data, 0, options);
+	data.is_bundle = -1;
+	data.tmpname = tmpname;
+	data.fh = NULL;
+
+	ret = http_get_callback(url, get_refs_callback, &data, 0, options);
+
+	if (data.fh) {
+		if (fclose(data.fh))
+			die_errno("unable to write to %s", data.tmpname);
+	}
+
+	*is_bundle = data.is_bundle > 0;
+	return ret;
+}
+
+static const char *url_to_bundle_tmpfile(const char *url)
+{
+	struct strbuf buf = STRBUF_INIT;
+	int last_was_quoted = 1;
+	const char *ret;
+
+	strbuf_addstr(&buf, "tmp_bundle_");
+	for (; *url; url++) {
+		if (isalpha(*url) || isdigit(*url)) {
+			strbuf_addch(&buf, *url);
+			last_was_quoted = 0;
+		}
+		else if (!last_was_quoted) {
+			strbuf_addch(&buf, '_');
+			last_was_quoted = 1;
+		}
+	}
+	if (last_was_quoted)
+		strbuf_setlen(&buf, buf.len - 1);
+
+	ret = git_path("objects/%s", buf.buf);
+	strbuf_release(&buf);
+	return ret;
 }
 
 static struct discovery* discover_refs(const char *service)
@@ -114,11 +186,15 @@ static int get_refs_from_url(const char *url, struct strbuf *out, int options)
 	struct discovery *last = last_discovery;
 	char *refs_url;
 	int http_ret, is_http = 0, proto_git_candidate = 1;
+	const char *filename;
+	int is_bundle;
 
 	if (last && !strcmp(service, last->service))
 		return last;
 	free_discovery(last);
 
+	filename = url_to_bundle_tmpfile(url);
+
 	strbuf_addf(&buffer, "%sinfo/refs", url);
 	if (!prefixcmp(url, "http://") || !prefixcmp(url, "https://")) {
 		is_http = 1;
@@ -130,7 +206,8 @@ static int get_refs_from_url(const char *url, struct strbuf *out, int options)
 	}
 	refs_url = strbuf_detach(&buffer, NULL);
 
-	http_ret = get_refs_from_url(refs_url, &buffer, HTTP_NO_CACHE);
+	http_ret = get_refs_from_url(refs_url, &buffer, HTTP_NO_CACHE,
+				     filename, &is_bundle);
 
 	/* try again with "plain" url (no ? or & appended) */
 	if (http_ret != HTTP_OK && http_ret != HTTP_NOAUTH) {
@@ -141,7 +218,8 @@ static int get_refs_from_url(const char *url, struct strbuf *out, int options)
 		strbuf_addf(&buffer, "%sinfo/refs", url);
 		refs_url = strbuf_detach(&buffer, NULL);
 
-		http_ret = get_refs_from_url(refs_url, &buffer, HTTP_NO_CACHE);
+		http_ret = get_refs_from_url(refs_url, &buffer, HTTP_NO_CACHE,
+					     filename, &is_bundle);
 	}
 
 	switch (http_ret) {
@@ -161,6 +239,7 @@ static int get_refs_from_url(const char *url, struct strbuf *out, int options)
 	last->service = service;
 	last->buf_alloc = strbuf_detach(&buffer, &last->len);
 	last->buf = last->buf_alloc;
+	last->bundle_fd = -1;
 
 	if (is_http && proto_git_candidate
 		&& 5 <= last->len && last->buf[4] == '#') {
@@ -190,6 +269,10 @@ static int get_refs_from_url(const char *url, struct strbuf *out, int options)
 		last->proto_git = 1;
 	}
 
+	else if (is_bundle) {
+		last->bundle_filename = xstrdup(filename);
+	}
+
 	free(refs_url);
 	strbuf_release(&buffer);
 	last_discovery = last;
@@ -276,6 +359,22 @@ static int write_discovery(int in, int out, void *data)
 	return refs;
 }
 
+static void ensure_bundle_open(struct discovery *heads)
+{
+	if (heads->bundle_fd >= 0)
+		return;
+	heads->bundle_fd = read_bundle_header(heads->bundle_filename,
+					      &heads->bundle_header);
+	if (heads->bundle_fd < 0)
+		die("could not read bundle from %s", url);
+}
+
+static struct ref *parse_bundle_refs(struct discovery *heads)
+{
+	ensure_bundle_open(heads);
+	return bundle_header_to_refs(&heads->bundle_header);
+}
+
 static struct ref *get_refs(int for_push)
 {
 	struct discovery *heads;
@@ -287,6 +386,11 @@ static int write_discovery(int in, int out, void *data)
 
 	if (heads->proto_git)
 		return parse_git_refs(heads);
+	if (heads->bundle_filename) {
+		if (for_push)
+			die("cannot push into a remote bundle");
+		return parse_bundle_refs(heads);
+	}
 	return parse_info_refs(heads);
 }
 
@@ -690,11 +794,21 @@ static int fetch_git(struct discovery *heads,
 	return err;
 }
 
+static int fetch_bundle(struct discovery *d,
+			int nr_heads, struct ref **to_fetch)
+{
+	ensure_bundle_open(d);
+	return unbundle(&d->bundle_header, d->bundle_fd,
+			options.progress ? BUNDLE_VERBOSE : 0);
+}
+
 static int fetch(int nr_heads, struct ref **to_fetch)
 {
 	struct discovery *d = discover_refs("git-upload-pack");
 	if (d->proto_git)
 		return fetch_git(d, nr_heads, to_fetch);
+	else if (d->bundle_filename)
+		return fetch_bundle(d, nr_heads, to_fetch);
 	else
 		return fetch_dumb(nr_heads, to_fetch);
 }
-- 
1.7.7.2.7.g9f96f

* [PATCH 10/14] remote-curl: try base $URL after $URL/info/refs
  2011-11-10  7:43 [PATCH 0/14] resumable network bundles Jeff King
                   ` (9 preceding siblings ...)
  2011-11-10  7:50 ` [PATCH 09/14] remote-curl: auto-detect bundles when fetching refs Jeff King
@ 2011-11-10  7:51 ` Jeff King
  2011-11-10  7:53 ` [PATCH 11/14] progress: allow pure-throughput progress meters Jeff King
                   ` (5 subsequent siblings)
  16 siblings, 0 replies; 23+ messages in thread
From: Jeff King @ 2011-11-10  7:51 UTC
  To: git

When fetching via http, we will try "$URL/info/refs" to get
the list of refs. We may get an unexpected bundle from that
transfer, and we already handle that case. But we should
also check just "$URL" to see if it's a bundle.
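
With this patch, the probe order for a hypothetical URL becomes roughly:

  $ git fetch http://example.com/foo.bundle
  # tries, in order:
  #   http://example.com/foo.bundle/info/refs?service=git-upload-pack
  #   http://example.com/foo.bundle/info/refs
  #   http://example.com/foo.bundle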

Signed-off-by: Jeff King <peff@peff.net>
---
And now we can actually test with apache.

 remote-curl.c          |   19 +++++++++++++++++++
 t/t5552-http-bundle.sh |   36 ++++++++++++++++++++++++++++++++++++
 2 files changed, 55 insertions(+), 0 deletions(-)
 create mode 100755 t/t5552-http-bundle.sh

diff --git a/remote-curl.c b/remote-curl.c
index 84586e0..7734495 100644
--- a/remote-curl.c
+++ b/remote-curl.c
@@ -222,6 +222,25 @@ static int get_refs_from_url(const char *url, struct strbuf *out, int options,
 					     filename, &is_bundle);
 	}
 
+	/* try the straight URL for a bundle, but don't impact the
+	 * error reporting that happens below. */
+	if (http_ret != HTTP_OK && http_ret != HTTP_NOAUTH) {
+		struct strbuf trimmed = STRBUF_INIT;
+		int r;
+
+		strbuf_reset(&buffer);
+
+		strbuf_addstr(&trimmed, url);
+		while (trimmed.len > 0 && trimmed.buf[trimmed.len-1] == '/')
+			strbuf_setlen(&trimmed, trimmed.len - 1);
+
+		r = get_refs_from_url(trimmed.buf, &buffer, 0, filename,
+				      &is_bundle);
+		if (r == HTTP_OK && is_bundle)
+			http_ret = r;
+		strbuf_release(&trimmed);
+	}
+
 	switch (http_ret) {
 	case HTTP_OK:
 		break;
diff --git a/t/t5552-http-bundle.sh b/t/t5552-http-bundle.sh
new file mode 100755
index 0000000..8527403
--- /dev/null
+++ b/t/t5552-http-bundle.sh
@@ -0,0 +1,36 @@
+#!/bin/sh
+
+test_description='test fetching from http-accessible bundles'
+. ./test-lib.sh
+
+LIB_HTTPD_PORT=${LIB_HTTPD_PORT-'5552'}
+. "$TEST_DIRECTORY"/lib-httpd.sh
+start_httpd
+
+test_expect_success 'create bundles' '
+	test_commit one &&
+	git bundle create "$HTTPD_DOCUMENT_ROOT_PATH/one.bundle" --all &&
+	test_commit two &&
+	git bundle create "$HTTPD_DOCUMENT_ROOT_PATH/two.bundle" --all ^one
+'
+
+test_expect_success 'clone from bundle' '
+	git clone --bare $HTTPD_URL/one.bundle clone &&
+	echo one >expect &&
+	git --git-dir=clone log -1 --format=%s >actual &&
+	test_cmp expect actual
+'
+
+test_expect_success 'fetch from bundle' '
+	git --git-dir=clone fetch $HTTPD_URL/two.bundle "refs/*:refs/*" &&
+	echo two >expect &&
+	git --git-dir=clone log -1 --format=%s >actual &&
+	test_cmp expect actual
+'
+
+test_expect_success 'cannot clone from partial bundle' '
+	test_must_fail git clone $HTTPD_URL/two.bundle
+'
+
+stop_httpd
+test_done
-- 
1.7.7.2.7.g9f96f

* [PATCH 11/14] progress: allow pure-throughput progress meters
  2011-11-10  7:43 [PATCH 0/14] resumable network bundles Jeff King
                   ` (10 preceding siblings ...)
  2011-11-10  7:51 ` [PATCH 10/14] remote-curl: try base $URL after $URL/info/refs Jeff King
@ 2011-11-10  7:53 ` Jeff King
  2011-11-10  7:53 ` [PATCH 12/14] remote-curl: show progress for bundle downloads Jeff King
                   ` (4 subsequent siblings)
  16 siblings, 0 replies; 23+ messages in thread
From: Jeff King @ 2011-11-10  7:53 UTC
  To: git

The progress code assumes we are counting something (usually
objects), even if we are measuring throughput. This works
for fetching packfiles, since they show us the object count
alongside the throughput, like:

  Receiving objects:   2% (301/11968), 22.00 MiB | 10.97 MiB/s

You can also tell the progress code you don't know how many
items you have (by specifying a total of 0), and it looks
like:

  Counting objects: 34957

However, if you're fetching a single large item, you want
throughput but you might not have a meaningful count. You
can say you are getting item 0 or 1 out of 1 total, but then
the percent meter is misleading:

  Downloading:   0% (0/1), 22.00 MiB | 10.97 MiB/s

or

  Downloading: 100% (0/1), 22.00 MiB | 10.97 MiB/s

Neither of those is accurate. You are probably somewhere
between zero and 100 percent through the operation, but you
don't know how far.

Telling it you don't know how many items is even uglier:

  Downloading: 1, 22.00 MiB | 10.97 MiB/s

Instead, this patch will omit the count entirely if you are
on the zero-th item of an unknown number of items. It looks
like:

  Downloading: 22.00 MiB | 10.97 MiB/s

Signed-off-by: Jeff King <peff@peff.net>
---
This was the last bit of work needed to massage the progress code into doing
what I wanted. It might be nicer if it could show a percentage (if we
know the total size), but there's even more surgery required for that.

 progress.c |   20 ++++++++++++--------
 1 files changed, 12 insertions(+), 8 deletions(-)

diff --git a/progress.c b/progress.c
index 3971f49..92fc3c5 100644
--- a/progress.c
+++ b/progress.c
@@ -103,7 +103,10 @@ static int display(struct progress *progress, unsigned n, const char *done)
 			return 1;
 		}
 	} else if (progress_update) {
-		fprintf(stderr, "%s: %u%s%s", progress->title, n, tp, eol);
+		fprintf(stderr, "%s: ", progress->title);
+		if (n)
+			fprintf(stderr, "%u", n);
+		fprintf(stderr, "%s%s", tp, eol);
 		fflush(stderr);
 		progress_update = 0;
 		return 1;
@@ -113,23 +116,24 @@ static int display(struct progress *progress, unsigned n, const char *done)
 }
 
 static void throughput_string(struct throughput *tp, off_t total,
-			      unsigned int rate)
+			      unsigned int rate, struct progress *p)
 {
+	const char *delim = p->total || p->last_value > 0 ? ", " : "";
 	int l = sizeof(tp->display);
 	if (total > 1 << 30) {
-		l -= snprintf(tp->display, l, ", %u.%2.2u GiB",
+		l -= snprintf(tp->display, l, "%s%u.%2.2u GiB", delim,
 			      (int)(total >> 30),
 			      (int)(total & ((1 << 30) - 1)) / 10737419);
 	} else if (total > 1 << 20) {
 		int x = total + 5243;  /* for rounding */
-		l -= snprintf(tp->display, l, ", %u.%2.2u MiB",
+		l -= snprintf(tp->display, l, "%s%u.%2.2u MiB", delim,
 			      x >> 20, ((x & ((1 << 20) - 1)) * 100) >> 20);
 	} else if (total > 1 << 10) {
 		int x = total + 5;  /* for rounding */
-		l -= snprintf(tp->display, l, ", %u.%2.2u KiB",
+		l -= snprintf(tp->display, l, "%s%u.%2.2u KiB", delim,
 			      x >> 10, ((x & ((1 << 10) - 1)) * 100) >> 10);
 	} else {
-		l -= snprintf(tp->display, l, ", %u bytes", (int)total);
+		l -= snprintf(tp->display, l, "%s%u bytes", delim, (int)total);
 	}
 
 	if (rate > 1 << 10) {
@@ -197,7 +201,7 @@ void display_throughput(struct progress *progress, off_t total)
 		tp->last_misecs[tp->idx] = misecs;
 		tp->idx = (tp->idx + 1) % TP_IDX_MAX;
 
-		throughput_string(tp, total, rate);
+		throughput_string(tp, total, rate, progress);
 		if (progress->last_value != -1 && progress_update)
 			display(progress, progress->last_value, NULL);
 	}
@@ -255,7 +259,7 @@ void stop_progress_msg(struct progress **p_progress, const char *msg)
 		if (tp) {
 			unsigned int rate = !tp->avg_misecs ? 0 :
 					tp->avg_bytes / tp->avg_misecs;
-			throughput_string(tp, tp->curr_total, rate);
+			throughput_string(tp, tp->curr_total, rate, progress);
 		}
 		progress_update = 1;
 		sprintf(bufp, ", %s.\n", msg);
-- 
1.7.7.2.7.g9f96f

* [PATCH 12/14] remote-curl: show progress for bundle downloads
  2011-11-10  7:43 [PATCH 0/14] resumable network bundles Jeff King
                   ` (11 preceding siblings ...)
  2011-11-10  7:53 ` [PATCH 11/14] progress: allow pure-throughput progress meters Jeff King
@ 2011-11-10  7:53 ` Jeff King
  2011-11-10  7:53 ` [PATCH 13/14] remote-curl: resume interrupted bundle transfers Jeff King
                   ` (3 subsequent siblings)
  16 siblings, 0 replies; 23+ messages in thread
From: Jeff King @ 2011-11-10  7:53 UTC
  To: git

Generally, the point of fetching from a bundle is that it's
big. Without a progress meter, git will appear to hang
during the long download.

This patch adds a throughput meter (i.e., just the bytes
transferred and the rate). In the long run, we should look
for a content-length header from the server so we can show a
total size and completion percentage. However, displaying
that properly will require some surgery to the progress
code, so let's leave it as a future enhancement.

Signed-off-by: Jeff King <peff@peff.net>
---
 remote-curl.c |   18 +++++++++++++++++-
 1 files changed, 17 insertions(+), 1 deletions(-)

diff --git a/remote-curl.c b/remote-curl.c
index 7734495..6b0820e 100644
--- a/remote-curl.c
+++ b/remote-curl.c
@@ -8,6 +8,7 @@
 #include "pkt-line.h"
 #include "sideband.h"
 #include "bundle.h"
+#include "progress.h"
 
 static struct remote *remote;
 static const char *url; /* always ends with a trailing slash */
@@ -107,6 +108,9 @@ struct get_refs_cb_data {
 	int is_bundle;
 	const char *tmpname;
 	FILE *fh;
+
+	struct progress *progress;
+	off_t total;
 };
 
 static size_t get_refs_callback(char *buf, size_t sz, size_t n, void *vdata)
@@ -114,8 +118,11 @@ static size_t get_refs_callback(char *buf, size_t sz, size_t n, void *vdata)
 	struct get_refs_cb_data *data = vdata;
 	struct strbuf *out = data->out;
 
-	if (data->is_bundle > 0)
+	if (data->is_bundle > 0) {
+		data->total += sz * n;
+		display_throughput(data->progress, data->total);
 		return fwrite(buf, sz, n, data->fh);
+	}
 
 	strbuf_add(out, buf, sz * n);
 
@@ -129,6 +136,12 @@ static size_t get_refs_callback(char *buf, size_t sz, size_t n, void *vdata)
 			die_errno("unable to open %s", data->tmpname);
 		if (fwrite(out->buf, 1, out->len, data->fh) < out->len)
 			die_errno("unable to write to %s", data->tmpname);
+		if (options.progress) {
+			data->total = out->len;
+			data->progress = start_progress("Downloading bundle", 0);
+			display_progress(data->progress, 0);
+			display_throughput(data->progress, data->total);
+		}
 	}
 	return sz * n;
 }
@@ -143,6 +156,8 @@ static int get_refs_from_url(const char *url, struct strbuf *out, int options,
 	data.is_bundle = -1;
 	data.tmpname = tmpname;
 	data.fh = NULL;
+	data.progress = NULL;
+	data.total = 0;
 
 	ret = http_get_callback(url, get_refs_callback, &data, 0, options);
 
@@ -150,6 +165,7 @@ static int get_refs_from_url(const char *url, struct strbuf *out, int options,
 		if (fclose(data.fh))
 			die_errno("unable to write to %s", data.tmpname);
 	}
+	stop_progress(&data.progress);
 
 	*is_bundle = data.is_bundle > 0;
 	return ret;
-- 
1.7.7.2.7.g9f96f

* [PATCH 13/14] remote-curl: resume interrupted bundle transfers
  2011-11-10  7:43 [PATCH 0/14] resumable network bundles Jeff King
                   ` (12 preceding siblings ...)
  2011-11-10  7:53 ` [PATCH 12/14] remote-curl: show progress for bundle downloads Jeff King
@ 2011-11-10  7:53 ` Jeff King
  2011-11-10  7:56 ` [PATCH 14/14] clone: give advice on how to resume a failed clone Jeff King
                   ` (2 subsequent siblings)
  16 siblings, 0 replies; 23+ messages in thread
From: Jeff King @ 2011-11-10  7:53 UTC
  To: git

If we have a bundle file from a previous fetch that matches
this URL, then we should resume the transfer where we left
off.
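
On the wire, resuming is just a ranged request that picks up
at the size of the spooled file, e.g. (with a made-up offset):

  GET /foo.bundle HTTP/1.1
  Range: bytes=1048576-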

Signed-off-by: Jeff King <peff@peff.net>
---
The second half of the diff is hard to read because I re-indent a big
chunk. It's much easier to see what's going on with "diff -b".

 remote-curl.c |   64 +++++++++++++++++++++++++++++++++++++++++++++++---------
 1 files changed, 53 insertions(+), 11 deletions(-)

diff --git a/remote-curl.c b/remote-curl.c
index 6b0820e..43ad1b6 100644
--- a/remote-curl.c
+++ b/remote-curl.c
@@ -9,6 +9,7 @@
 #include "sideband.h"
 #include "bundle.h"
 #include "progress.h"
+#include "dir.h"
 
 static struct remote *remote;
 static const char *url; /* always ends with a trailing slash */
@@ -171,6 +172,32 @@ static int get_refs_from_url(const char *url, struct strbuf *out, int options,
 	return ret;
 }
 
+static int resume_bundle(const char *url, const char *tmpname)
+{
+	struct get_refs_cb_data data = { NULL };
+	int ret;
+
+	data.fh = fopen(tmpname, "ab");
+	if (!data.fh)
+		die_errno("unable to open %s", tmpname);
+
+	data.is_bundle = 1;
+	data.total = ftell(data.fh);
+	if (options.progress) {
+		data.progress = start_progress("Resuming bundle", 0);
+		display_progress(data.progress, 0);
+		display_throughput(data.progress, data.total);
+	}
+
+	ret = http_get_callback(url, get_refs_callback, &data, data.total, 0);
+
+	if (fclose(data.fh))
+		die_errno("unable to write to %s", tmpname);
+	stop_progress(&data.progress);
+
+	return ret;
+}
+
 static const char *url_to_bundle_tmpfile(const char *url)
 {
 	struct strbuf buf = STRBUF_INIT;
@@ -210,20 +237,35 @@ static int get_refs_from_url(const char *url, struct strbuf *out, int options,
 	free_discovery(last);
 
 	filename = url_to_bundle_tmpfile(url);
+	if (file_exists(filename)) {
+		struct strbuf trimmed = STRBUF_INIT;
 
-	strbuf_addf(&buffer, "%sinfo/refs", url);
-	if (!prefixcmp(url, "http://") || !prefixcmp(url, "https://")) {
-		is_http = 1;
-		if (!strchr(url, '?'))
-			strbuf_addch(&buffer, '?');
-		else
-			strbuf_addch(&buffer, '&');
-		strbuf_addf(&buffer, "service=%s", service);
+		strbuf_addstr(&trimmed, url);
+		while (trimmed.len > 0 && trimmed.buf[trimmed.len-1] == '/')
+			strbuf_setlen(&trimmed, trimmed.len - 1);
+		refs_url = strbuf_detach(&trimmed, NULL);
+
+		http_ret = resume_bundle(refs_url, filename);
+		is_bundle = 1;
 	}
-	refs_url = strbuf_detach(&buffer, NULL);
+	else
+		http_ret = HTTP_MISSING_TARGET;
+
+	if (http_ret != HTTP_OK && http_ret != HTTP_NOAUTH) {
+		strbuf_addf(&buffer, "%sinfo/refs", url);
+		if (!prefixcmp(url, "http://") || !prefixcmp(url, "https://")) {
+			is_http = 1;
+			if (!strchr(url, '?'))
+				strbuf_addch(&buffer, '?');
+			else
+				strbuf_addch(&buffer, '&');
+			strbuf_addf(&buffer, "service=%s", service);
+		}
+		refs_url = strbuf_detach(&buffer, NULL);
 
-	http_ret = get_refs_from_url(refs_url, &buffer, HTTP_NO_CACHE,
-				     filename, &is_bundle);
+		http_ret = get_refs_from_url(refs_url, &buffer, HTTP_NO_CACHE,
+					     filename, &is_bundle);
+	}
 
 	/* try again with "plain" url (no ? or & appended) */
 	if (http_ret != HTTP_OK && http_ret != HTTP_NOAUTH) {
-- 
1.7.7.2.7.g9f96f

* [PATCH 14/14] clone: give advice on how to resume a failed clone
  2011-11-10  7:43 [PATCH 0/14] resumable network bundles Jeff King
                   ` (13 preceding siblings ...)
  2011-11-10  7:53 ` [PATCH 13/14] remote-curl: resume interrupted bundle transfers Jeff King
@ 2011-11-10  7:56 ` Jeff King
  2011-11-10 21:21   ` Junio C Hamano
  2011-11-11 13:13 ` [PATCH 0/14] resumable network bundles David Michael Barr
  2011-11-12 16:11 ` Tay Ray Chuan
  16 siblings, 1 reply; 23+ messages in thread
From: Jeff King @ 2011-11-10  7:56 UTC
  To: git

When clone fails, we usually delete the partial directory.
However, if the clone was fetching a bundle, the transfer is
resumable, and we should consider the partial results
precious.

This patch detects when a partial bundle is present,
preserves the directory, and gives the user some advice
about how to resume.
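
The advice after an aborted clone then looks something like
this (assuming the default branch):

  Cloning failed, but partial results were saved.
  You can resume the fetch with:
    git fetch
    git checkout master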

Signed-off-by: Jeff King <peff@peff.net>
---
We could make "git clone ..." automatically resume, but I'm a little
nervous about that. I wrote a patch that did so, and it did work, but
there are a lot of little hiccups as we violate the assumption that the
directory didn't already exist (e.g., it writes multiple fetch refspec
lines to the config).

But more importantly, I really worry about destroying the safety valve
of not overwriting an existing directory. Yes, we can check to see that
it is a git directory and that it has a partially-downloaded bundle
file. But that could also describe an existing git repo with unstaged
changes. And you sure don't want to "checkout -f" over them.

So I'd rather at least start with giving the user some advice and
having them explicitly say "yeah, I do want to resume this".

 builtin/clone.c |   34 ++++++++++++++++++++++++++++++++++
 1 files changed, 34 insertions(+), 0 deletions(-)

diff --git a/builtin/clone.c b/builtin/clone.c
index efe8b6c..c242e20 100644
--- a/builtin/clone.c
+++ b/builtin/clone.c
@@ -392,6 +392,36 @@ static void copy_or_link_directory(struct strbuf *src, struct strbuf *dest,
 	return ret;
 }
 
+static int git_dir_is_resumable(const char *dir)
+{
+	const char *objects = mkpath("%s/objects", dir);
+	DIR *dh = opendir(objects);
+	struct dirent *de;
+
+	if (!dh)
+		return 0;
+
+	while ((de = readdir(dh))) {
+		if (!prefixcmp(de->d_name, "tmp_bundle_")) {
+			closedir(dh);
+			return 1;
+		}
+	}
+
+	closedir(dh);
+	return 0;
+}
+
+static void give_resume_advice(void)
+{
+	advise("Cloning failed, but partial results were saved.");
+	advise("You can resume the fetch with:");
+	advise("  git fetch");
+	if (!option_bare)
+		advise("  git checkout %s",
+		       option_branch ? option_branch : "master");
+}
+
 static const char *junk_work_tree;
 static const char *junk_git_dir;
 static pid_t junk_pid;
@@ -402,6 +432,10 @@ static void remove_junk(void)
 	if (getpid() != junk_pid)
 		return;
 	if (junk_git_dir) {
+		if (git_dir_is_resumable(junk_git_dir)) {
+			give_resume_advice();
+			return;
+		}
 		strbuf_addstr(&sb, junk_git_dir);
 		remove_dir_recursively(&sb, 0);
 		strbuf_reset(&sb);
-- 
1.7.7.2.7.g9f96f

* Re: [PATCH 02/14] http: turn off curl signals
  2011-11-10  7:48 ` [PATCH 02/14] http: turn off curl signals Jeff King
@ 2011-11-10  8:43   ` Daniel Stenberg
  2011-11-11 20:54     ` Jeff King
  0 siblings, 1 reply; 23+ messages in thread
From: Daniel Stenberg @ 2011-11-10  8:43 UTC
  To: Jeff King; +Cc: git

On Thu, 10 Nov 2011, Jeff King wrote:

> I'm a little iffy on this one. If I understand correctly, depending on
> the build and configuration, curl may not be able to time out during DNS
> lookups. But I'm not sure that ever happens anyway, since we don't set
> any timeouts.

Right, without a timeout set, libcurl won't try to time out name resolves.

To clarify: when libcurl is built to use the standard synchronous name
resolver functions, it can only abort them after a specified time by
using signals (on POSIX systems).
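
In other words (a sketch; the options are stock libcurl), a combination
like this cannot interrupt a blocking resolve, because honoring the
timeout during name resolution would require SIGALRM:

  curl_easy_setopt(curl, CURLOPT_NOSIGNAL, 1L);
  curl_easy_setopt(curl, CURLOPT_CONNECTTIMEOUT, 30L);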

-- 

  / daniel.haxx.se

* Re: [PATCH 14/14] clone: give advice on how to resume a failed clone
  2011-11-10  7:56 ` [PATCH 14/14] clone: give advice on how to resume a failed clone Jeff King
@ 2011-11-10 21:21   ` Junio C Hamano
  2011-11-11 20:52     ` Jeff King
  0 siblings, 1 reply; 23+ messages in thread
From: Junio C Hamano @ 2011-11-10 21:21 UTC
  To: Jeff King; +Cc: git

Jeff King <peff@peff.net> writes:

> We could make "git clone ..." automatically resume, but I'm a little
> nervous about that. I wrote a patch that did so, and it did work, but
> there are a lot of little hiccups as we violate the assumption that the
> directory didn't already exist (e.g., it writes multiple fetch refspec
> lines to the config).

Sorry, but I do not think the worry is quite justified.

The "assumption that directory didn't already exist" becomes an issue only
if you implement your "git clone" that automatically resumes as a thin
wrapper around the current "git clone" in the form of

    until git clone ...
    do
	echo retrying...
    done

Stepping back a bit, I think there are two different situations where
resumable clone is beneficial: either the "git clone" process died (the
machine crashed, or the user hit \C-c), or the connection between the
server and the "git clone" got severed for some reason.

Right now, the "got disconnected" case results in "git clone" voluntarily
dying, and as a result the symptom appears the same.  But it does not have
to be that way if we know the underlying transport is resumable, e.g. when
you are fetching a prepared bundle over the wire.

I have a suspicion that in practice the "got disconnected" case is the
majority. If "git clone" does not die upon disconnect while fetching a
bundle, but instead resumes the transfer internally by reconnecting to
the server and requesting a range transfer, there is no risk of writing
multiple fetch refspec lines, etc., no?

Of course, it would _also_ be beneficial if we made "git clone" resumable after
you purposefully kill it (maybe you thought it will clone within minutes,
but it turns out that it may take hours and you have to turn off the
machine in the next five minutes before leaving the work, or something).
A solution for that case _could_ be used for the "got disconnected" case
by letting it voluntarily die as we currently do, but I do not think that
is an optimal solution to the "got disconnected" case.

* Re: [PATCH 0/14] resumable network bundles
  2011-11-10  7:43 [PATCH 0/14] resumable network bundles Jeff King
                   ` (14 preceding siblings ...)
  2011-11-10  7:56 ` [PATCH 14/14] clone: give advice on how to resume a failed clone Jeff King
@ 2011-11-11 13:13 ` David Michael Barr
  2011-11-12 16:11 ` Tay Ray Chuan
  16 siblings, 0 replies; 23+ messages in thread
From: David Michael Barr @ 2011-11-11 13:13 UTC
  To: Jeff King; +Cc: git

On Thu, Nov 10, 2011 at 6:43 PM, Jeff King <peff@peff.net> wrote:
> One option for resumable clones that has been discussed is letting the
> server point the client, via http, to a static bundle containing most of
> the history, followed by a fetch from the actual git repo (which should
> be much cheaper once the client has all of the bundled history). This
> series implements "step 0" of that plan: just letting bundles be fetched
> across the network in the first place.
>
> Shawn raised some issues about using bundles for this (as opposed to
> accessing the packfiles themselves); specifically, it increases the I/O
> footprint of a repository that has to serve both the bundled version of
> the pack and the regular packfile.
>
> So it may be that we don't follow this plan all the way through.
> However, even if we don't, fetching bundles over http is still a useful
> thing to be able to do, which makes this first step worth doing either
> way.
>
>  [01/14]: t/lib-httpd: check for NO_CURL
>  [02/14]: http: turn off curl signals
>  [03/14]: http: refactor http_request function
>  [04/14]: http: add a public function for arbitrary-callback request
>  [05/14]: remote-curl: use http callback for requesting refs
>  [06/14]: transport: factor out bundle to ref list conversion
>  [07/14]: bundle: add is_bundle_buf helper
>  [08/14]: remote-curl: free "discovery" object
>  [09/14]: remote-curl: auto-detect bundles when fetching refs
>  [10/14]: remote-curl: try base $URL after $URL/info/refs
>  [11/14]: progress: allow pure-throughput progress meters
>  [12/14]: remote-curl: show progress for bundle downloads
>  [13/14]: remote-curl: resume interrupted bundle transfers
>  [14/14]: clone: give advice on how to resume a failed clone
>
> -Peff

I just want to say thank you for doing this.

-- 
David Barr

* Re: [PATCH 14/14] clone: give advice on how to resume a failed clone
  2011-11-10 21:21   ` Junio C Hamano
@ 2011-11-11 20:52     ` Jeff King
  0 siblings, 0 replies; 23+ messages in thread
From: Jeff King @ 2011-11-11 20:52 UTC (permalink / raw)
  To: Junio C Hamano; +Cc: git

On Thu, Nov 10, 2011 at 01:21:38PM -0800, Junio C Hamano wrote:

> Jeff King <peff@peff.net> writes:
> 
> > We could make "git clone ..." automatically resume, but I'm a little
> > nervous about that. I wrote a patch that did so, and it did work, but
> > there are a lot of little hiccups as we violate the assumption that the
> > directory didn't already exist (e.g., it writes multiple fetch refspec
> > lines to the config).
> 
> Sorry, but I do not think the worry is quite justified.
> 
> The "assumption that directory didn't already exist" becomes an issue only
> if you implement your "git clone" that automatically resumes as a thin
> wrapper around the current "git clone" in the form of
> 
>     until git clone ...
>     do
> 	echo retrying...
>     done

That was sort of what my patch looked like. It didn't do the wrapper
bit; you would have to run "git clone" again to resume. I.e.:

  $ git clone http://...
  Downloading bundle: ...
  ^C
  $ git clone http://...
  Resuming bundle: ...

and the implementation was very minimal. Basically, instead of checking
that the directory exists and dying, it said "the directory exists, but
there is a resumable bundle in it, so let's keep going anyway".
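
For the curious, the guard amounted to roughly this (both helper names
below are invented for illustration, not the actual code from my patch;
die() is just our usual helper from git-compat-util.h):

  #include <sys/stat.h>

  /* invented helper: does <dir> hold a partial, resumable bundle? */
  extern int resumable_bundle_exists(const char *dir);

  static void check_dest_dir(const char *dir)
  {
  	struct stat st;

  	if (stat(dir, &st) < 0)
  		return; /* destination not there yet; plain clone */
  	if (!resumable_bundle_exists(dir))
  		die("destination path '%s' already exists", dir);
  	/* otherwise fall through and resume the bundle download */
  }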

You _could_ do that right, with clone saying "I got to the fetch stage"
and picking up there. But that means picking apart clone into its
various stages, not repeating earlier stages (or making sure they're
idempotent), and making sure that later stages don't depend on
in-memory state left behind by earlier stages.

And then what happens when you have different parameters? If I say "git
clone foo", and then resume with "git clone --mirror foo", what should
happen?

None of these problems are insurmountable. But it means looking over the
clone code carefully, figuring out what should happen, and then probably
breaking apart the various stages to see where we can resume from.

I wanted to start simply, and just tell the user "this is approximately
what clone would have done from here". And then the fancier automatic
bits can come later.

> Stepping back a bit, I think there are two different situations where
> resumable clone is beneficial. The "git clone" process died either by the
> machine crashing or the user hitting a \C-c, or the connection between the
> server and the "git clone" got severed for some reason.
> 
> Right now, the "got disconnected" case results in "git clone" voluntarily
> dying, and as a result the symptom appears the same.  But it does not
> have to be that way if we know the underlying transport is resumable,
> e.g. when you are fetching a prepared bundle over the wire.
>
> I have this suspicion that in practice the "got disconnected" case is the
> majority. If "git clone" does not die upon disconnect while fetching a
> bundle, but instead the fetching of the bundle is resumed internally by
> reconnecting to the server and requesting a range transfer, there is no
> risk of "writes multiple fetch refspec lines" etc, no?

I don't think it is the majority. And there are even variants of "got
disconnected" that an internal retry wouldn't handle. Here are just a
few cases that I think are common:

  1. the client machine crashes or loses power; you'd like to resume
     after rebooting.

  2. the network or server goes down and does not come back
     immediately. The immediate retry fails, but you could succeed if
     you restarted the fetch a few minutes (or hours, or days) later.

  3. the user hits ^C not because they want to abort the clone entirely,
     but because they know they cannot complete the clone right now
     (e.g., they are taking their laptop off the network, or it is
     consuming too much bandwidth, or they would rather wait until later
     when they are on a faster network).

All of those mean we need some on-disk state that records how far we
got, so that a totally separate process can figure out from that state
where to resume.
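
To make "where to resume" concrete: the partial bundle on disk _is_
that state, and its size is the offset we ask the server for. A sketch
of the idea (not the actual code from patch 13):

  #include <sys/stat.h>
  #include <curl/curl.h>

  static void set_resume_point(CURL *h, const char *bundle_path)
  {
  	struct stat st;

  	/* whatever already made it to disk does not need re-fetching */
  	if (!stat(bundle_path, &st) && st.st_size > 0)
  		curl_easy_setopt(h, CURLOPT_RESUME_FROM_LARGE,
  				 (curl_off_t)st.st_size);
  }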

> Of course, it would _also_ be beneficial if we made "git clone" resumable
> after you purposefully kill it (maybe you thought it would clone within
> minutes, but it turns out that it may take hours and you have to turn off
> the machine in the next five minutes before leaving work, or something).
> A solution for that case _could_ be used for the "got disconnected" case
> by letting it voluntarily die as we currently do, but I do not think that
> is an optimal solution to the "got disconnected" case.

I see "got disconnected, and we can immediately resume" as a minority
subset of the larger problem. If we really want to, we can implement
that while waiting for a larger solution, but I don't think it will
serve most people's needs, and it will eventually become obsolete
anyway.

-Peff

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH 02/14] http: turn off curl signals
  2011-11-10  8:43   ` Daniel Stenberg
@ 2011-11-11 20:54     ` Jeff King
  0 siblings, 0 replies; 23+ messages in thread
From: Jeff King @ 2011-11-11 20:54 UTC (permalink / raw)
  To: Daniel Stenberg; +Cc: git

On Thu, Nov 10, 2011 at 09:43:40AM +0100, Daniel Stenberg wrote:

> >I'm a little iffy on this one. If I understand correctly, depending
> >on the build and configuration, curl may not be able to timeout
> >during DNS lookups. But I'm not sure if it does, anyway, since we
> >don't set any timeouts.
> 
> Right, without a timeout set, libcurl won't try to time out name resolves.
> 
> To clarify: when libcurl is built to use the standard synchronous
> name resolver functions it can only abort them after a specified time
> by using signals (on posix systems).

OK, that matches my understanding. I think this patch is a fine thing
for us to do, then. If we ever do start caring about timing out on name
lookups, we can rework it.
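
For the archives: the relevant libcurl knob is CURLOPT_NOSIGNAL, set
once per handle. A minimal sketch (error checking omitted):

  #include <curl/curl.h>

  CURL *h = curl_easy_init();
  /*
   * Keep libcurl from installing signal handlers; the trade-off is
   * that the synchronous resolver then has no way to time out.
   */
  curl_easy_setopt(h, CURLOPT_NOSIGNAL, 1L);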

Thanks.

-Peff

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH 0/14] resumable network bundles
  2011-11-10  7:43 [PATCH 0/14] resumable network bundles Jeff King
                   ` (15 preceding siblings ...)
  2011-11-11 13:13 ` [PATCH 0/14] resumable network bundles David Michael Barr
@ 2011-11-12 16:11 ` Tay Ray Chuan
  2011-11-12 17:58   ` Jeff King
  16 siblings, 1 reply; 23+ messages in thread
From: Tay Ray Chuan @ 2011-11-12 16:11 UTC (permalink / raw)
  To: Jeff King; +Cc: git

On Thu, Nov 10, 2011 at 3:43 PM, Jeff King <peff@peff.net> wrote:
> One possible option for resumable clones that has been discussed is
> letting the server point the client by http to a static bundle
> containing most of history, followed by a fetch from the actual git repo
> (which should be much cheaper now that we have all of the bundled
> history).  This series implements "step 0" of this plan: just letting
> bundles be fetched across the network in the first place.
>
> Shawn raised some issues about using bundles for this (as opposed to
> accessing the packfiles themselves); specifically, this raises the I/O
> footprint of a repository that has to serve both the bundled version of
> the pack and the regular packfile.
>
> So it may be that we don't follow this plan all the way through.
> However, even if we don't, fetching bundles over http is still a useful
> thing to be able to do. Which makes this first step worth doing either
> way.

Jeff, this is a great series. I think the cleanups and refactors should
be integrated independently of the bundle-cloning stuff.

One thing I'm not comfortable with is the "flexibility" allowed in
bundle fetching - servers are allowed to send bundles if they see fit,
and we have to detect it when they do (if I'm reading the "surprised"
scenario in patch 9 correctly).

Perhaps we can expose bundle fetching through /objects/info/bundles?
It could possibly contain information about what bundles are available
and what revs they contain. If bundles are found, fetch them;
otherwise, go through the usual ref advertisement and other steps of
the pack protocol.

That way, we take out the "surprise" factor in the fetching protocol.
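
For example, the file could look something like this (format invented
on the spot; every value is a placeholder):

  # objects/info/bundles: <tip sha1> <ref> <path to bundle>
  <sha1> refs/heads/master bundles/full.bundle
  <sha1> refs/heads/master bundles/recent.bundle

Clients that understand the file would fetch a bundle first; older
clients would never ask for it and go through the normal protocol
untouched.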

-- 
Cheers,
Ray Chuan

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH 0/14] resumable network bundles
  2011-11-12 16:11 ` Tay Ray Chuan
@ 2011-11-12 17:58   ` Jeff King
  0 siblings, 0 replies; 23+ messages in thread
From: Jeff King @ 2011-11-12 17:58 UTC (permalink / raw)
  To: Tay Ray Chuan; +Cc: git

On Sun, Nov 13, 2011 at 12:11:31AM +0800, Tay Ray Chuan wrote:

> One thing I'm not comfortable with is the "flexibility" allowed in
> bundle fetching - servers are allowed to send bundles if they see fit,
> and we have to detect it when they do (if I'm reading the "surprised"
> scenario in patch 9 correctly).

Right.

> Perhaps we can expose bundle fetching through /objects/info/bundles?

But what if the server you are hitting doesn't have a git repo at all?
In the simplest case, a bundle provider should just be able to put a
file somewhere http-accessible, without having any special directory
structure or other meta files.

Which means that we have to be prepared for the URL the user gave us to
be a bundle, not a git repo that contains bundles.
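
That is, the dumbest possible provider setup should work, something
like (URL invented for the example):

  $ git clone http://example.com/project.bundle

with no info/refs, no extra metadata, nothing but the bundle itself.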

> It could possibly contain information about what bundles are available
> and what revs they contain. If bundles are found, fetch them;
> otherwise, go through the usual ref advertisement and other steps of
> the pack protocol.

This is "step 2" of my plan: hitting a git repo will provide a way of
redirecting to other, static storage. But I think it's important that
the other storage not just be a path in the existing repo, for two
reasons:

  1. You might want to redirect the client off-server to a
     higher-bandwidth static service like S3, or something backed by a
     CDN.

  2. The client might not be hitting you through http, so you can't
     expect them to look at arbitrary repo files (like
     objects/info/bundles). We need to provide the information over the
     git protocol (my plan is to use a special ref name, like
     "refs/mirrors", to encode the information; see the sketch below).

> That way, we take out the "surprise" factor in the fetching protocol.

I don't think it's that big a deal. It influenced the way that patches 9
and 10 were written (patch 9 handles "surprise" bundles when fetching
info/refs, and then patch 10 falls back to fetching $URL without
info/refs). But even if we didn't have the "surprise" case, most of the
code in patch 9 would have just ended up in patch 10. That is, the
surprise case doesn't take much code, and doesn't have a negative impact
on the non-surprise case (i.e., until we see a bundle header, the
behavior is identical, just putting the refs into a memory buffer).
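
The sniffing itself is cheap; conceptually it is no more than this (a
stand-in for the is_bundle_buf helper from patch 7, whose real
signature may differ):

  #include <string.h>

  static int looks_like_bundle(const char *buf, size_t len)
  {
  	/* a v2 bundle always begins with this signature line */
  	static const char sig[] = "# v2 git bundle\n";
  	const size_t siglen = sizeof(sig) - 1;

  	if (len < siglen)
  		return 0; /* not enough data to decide yet */
  	return !memcmp(buf, sig, siglen);
  }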

-Peff

^ permalink raw reply	[flat|nested] 23+ messages in thread

Thread overview: 23+ messages
2011-11-10  7:43 [PATCH 0/14] resumable network bundles Jeff King
2011-11-10  7:45 ` Jeff King
2011-11-10  7:46 ` [PATCH 01/14] t/lib-httpd: check for NO_CURL Jeff King
2011-11-10  7:48 ` [PATCH 02/14] http: turn off curl signals Jeff King
2011-11-10  8:43   ` Daniel Stenberg
2011-11-11 20:54     ` Jeff King
2011-11-10  7:48 ` [PATCH 03/14] http: refactor http_request function Jeff King
2011-11-10  7:49 ` [PATCH 04/14] http: add a public function for arbitrary-callback request Jeff King
2011-11-10  7:49 ` [PATCH 05/14] remote-curl: use http callback for requesting refs Jeff King
2011-11-10  7:49 ` [PATCH 06/14] transport: factor out bundle to ref list conversion Jeff King
2011-11-10  7:50 ` [PATCH 07/14] bundle: add is_bundle_buf helper Jeff King
2011-11-10  7:50 ` [PATCH 08/14] remote-curl: free "discovery" object Jeff King
2011-11-10  7:50 ` [PATCH 09/14] remote-curl: auto-detect bundles when fetching refs Jeff King
2011-11-10  7:51 ` [PATCH 10/14] remote-curl: try base $URL after $URL/info/refs Jeff King
2011-11-10  7:53 ` [PATCH 11/14] progress: allow pure-throughput progress meters Jeff King
2011-11-10  7:53 ` [PATCH 12/14] remote-curl: show progress for bundle downloads Jeff King
2011-11-10  7:53 ` [PATCH 13/14] remote-curl: resume interrupted bundle transfers Jeff King
2011-11-10  7:56 ` [PATCH 14/14] clone: give advice on how to resume a failed clone Jeff King
2011-11-10 21:21   ` Junio C Hamano
2011-11-11 20:52     ` Jeff King
2011-11-11 13:13 ` [PATCH 0/14] resumable network bundles David Michael Barr
2011-11-12 16:11 ` Tay Ray Chuan
2011-11-12 17:58   ` Jeff King
