git.vger.kernel.org archive mirror
* [PATCH 00/10] [RFC] Simple IPC Mechanism
@ 2021-01-12 15:31 Jeff Hostetler via GitGitGadget
  2021-01-12 15:31 ` [PATCH 01/10] pkt-line: use stack rather than static buffer in packet_write_gently() Jeff Hostetler via GitGitGadget
                   ` (12 more replies)
  0 siblings, 13 replies; 178+ messages in thread
From: Jeff Hostetler via GitGitGadget @ 2021-01-12 15:31 UTC (permalink / raw)
  To: git; +Cc: Jeff Hostetler

This series introduces a multi-threaded IPC mechanism called "Simple IPC".
This is a library-layer feature that makes it easy to create very
long-running daemon/service applications and lets otherwise unrelated Git
commands communicate with them. Communication uses pkt-line messaging over
a Windows named pipe or Unix domain socket.

On the server side, Simple IPC implements a (platform-specific) connection
listener and worker thread-pool to accept and handle a series of client
connections. The server functionality is completely hidden behind the
ipc_server_run() and ipc_server_run_async() APIs. The daemon/service
application only needs to define an application-specific callback to handle
client requests.
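
As a rough illustration, a daemon built on this API might look something
like the sketch below. The callback body, thread count, and rendezvous
path are placeholders; the types and entry points are the ones introduced
in simple-ipc.h later in this series.

    static int my_app_cb(void *data, const char *request,
                         ipc_server_reply_cb *reply_cb,
                         struct ipc_server_reply_data *reply_data)
    {
            if (!strcmp(request, "quit"))
                    return SIMPLE_IPC_QUIT; /* ask the pool to shut down */

            /* compose an application-specific response */
            return reply_cb(reply_data, "ok", 2);
    }

    static int run_my_daemon(const char *rendezvous_path)
    {
            struct ipc_server_opts opts = { .nr_threads = 4 };

            /* blocks until the server shuts down or hits a hard error */
            return ipc_server_run(rendezvous_path, &opts, my_app_cb, NULL);
    }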

Note that Simple IPC is completely unrelated to the long-running process
feature (described in sub-process.h), where the lifetime of a "sub-process"
child is bound to that of the invoking parent process and communication
occurs over the child's stdin/stdout.

Simple IPC will serve as a basis for a future builtin FSMonitor daemon
feature.

Jeff Hostetler (7):
  pkt-line: use stack rather than static buffer in packet_write_gently()
  simple-ipc: design documentation for new IPC mechanism
  simple-ipc: add win32 implementation
  unix-socket: create gentle version of unix_stream_listen()
  unix-socket: add no-chdir option to unix_stream_listen_gently()
  simple-ipc: add t/helper/test-simple-ipc and t0052
  simple-ipc: add Unix domain socket implementation

Johannes Schindelin (3):
  pkt-line: (optionally) libify the packet readers
  pkt-line: optionally skip the flush packet in
    write_packetized_from_buf()
  pkt-line: accept additional options in read_packetized_to_strbuf()

 Documentation/technical/api-simple-ipc.txt |   31 +
 Makefile                                   |    8 +
 compat/simple-ipc/ipc-shared.c             |   28 +
 compat/simple-ipc/ipc-unix-socket.c        | 1093 ++++++++++++++++++++
 compat/simple-ipc/ipc-win32.c              |  723 +++++++++++++
 config.mak.uname                           |    2 +
 contrib/buildsystems/CMakeLists.txt        |    6 +
 convert.c                                  |    4 +-
 pkt-line.c                                 |   30 +-
 pkt-line.h                                 |   13 +-
 simple-ipc.h                               |  221 ++++
 t/helper/test-simple-ipc.c                 |  485 +++++++++
 t/helper/test-tool.c                       |    1 +
 t/helper/test-tool.h                       |    1 +
 t/t0052-simple-ipc.sh                      |  129 +++
 unix-socket.c                              |   58 +-
 unix-socket.h                              |    9 +
 17 files changed, 2828 insertions(+), 14 deletions(-)
 create mode 100644 Documentation/technical/api-simple-ipc.txt
 create mode 100644 compat/simple-ipc/ipc-shared.c
 create mode 100644 compat/simple-ipc/ipc-unix-socket.c
 create mode 100644 compat/simple-ipc/ipc-win32.c
 create mode 100644 simple-ipc.h
 create mode 100644 t/helper/test-simple-ipc.c
 create mode 100755 t/t0052-simple-ipc.sh


base-commit: 71ca53e8125e36efbda17293c50027d31681a41f
Published-As: https://github.com/gitgitgadget/git/releases/tag/pr-766%2Fjeffhostetler%2Fsimple-ipc-v1
Fetch-It-Via: git fetch https://github.com/gitgitgadget/git pr-766/jeffhostetler/simple-ipc-v1
Pull-Request: https://github.com/gitgitgadget/git/pull/766
-- 
gitgitgadget

^ permalink raw reply	[flat|nested] 178+ messages in thread

* [PATCH 01/10] pkt-line: use stack rather than static buffer in packet_write_gently()
  2021-01-12 15:31 [PATCH 00/10] [RFC] Simple IPC Mechanism Jeff Hostetler via GitGitGadget
@ 2021-01-12 15:31 ` Jeff Hostetler via GitGitGadget
  2021-01-13 13:29   ` Jeff King
  2021-01-12 15:31 ` [PATCH 02/10] pkt-line: (optionally) libify the packet readers Johannes Schindelin via GitGitGadget
                   ` (11 subsequent siblings)
  12 siblings, 1 reply; 178+ messages in thread
From: Jeff Hostetler via GitGitGadget @ 2021-01-12 15:31 UTC (permalink / raw)
  To: git; +Cc: Jeff Hostetler, Jeff Hostetler

From: Jeff Hostetler <jeffhost@microsoft.com>

Teach packet_write_gently() to use a stack buffer rather than a static
buffer when composing the packet line message.  This prepares the
routine for safe use by multiple threads.

Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
---
 pkt-line.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/pkt-line.c b/pkt-line.c
index d633005ef74..98439a2fed0 100644
--- a/pkt-line.c
+++ b/pkt-line.c
@@ -196,7 +196,7 @@ int packet_write_fmt_gently(int fd, const char *fmt, ...)
 
 static int packet_write_gently(const int fd_out, const char *buf, size_t size)
 {
-	static char packet_write_buffer[LARGE_PACKET_MAX];
+	char packet_write_buffer[LARGE_PACKET_MAX];
 	size_t packet_size;
 
 	if (size > sizeof(packet_write_buffer) - 4)
-- 
gitgitgadget


^ permalink raw reply related	[flat|nested] 178+ messages in thread

* [PATCH 02/10] pkt-line: (optionally) libify the packet readers
  2021-01-12 15:31 [PATCH 00/10] [RFC] Simple IPC Mechanism Jeff Hostetler via GitGitGadget
  2021-01-12 15:31 ` [PATCH 01/10] pkt-line: use stack rather than static buffer in packet_write_gently() Jeff Hostetler via GitGitGadget
@ 2021-01-12 15:31 ` Johannes Schindelin via GitGitGadget
  2021-01-12 15:31 ` [PATCH 03/10] pkt-line: optionally skip the flush packet in write_packetized_from_buf() Johannes Schindelin via GitGitGadget
                   ` (10 subsequent siblings)
  12 siblings, 0 replies; 178+ messages in thread
From: Johannes Schindelin via GitGitGadget @ 2021-01-12 15:31 UTC (permalink / raw)
  To: git; +Cc: Jeff Hostetler, Johannes Schindelin

From: Johannes Schindelin <johannes.schindelin@gmx.de>

So far, the (possibly indirect) callers of `get_packet_data()` can ask
that function to return an error instead of `die()`ing upon end-of-file.
However, random read errors will still cause the process to die.

So let's introduce an explicit option to tell the packet reader
machinery to please be nice and only return an error.

This change prepares pkt-line for use by long-running daemon processes.
Such processes should be able to serve multiple concurrent clients and
survive random IO errors.  If there is an error on one connection,
a daemon should be able to drop that connection and continue serving
existing and future connections.

This ability will be used by a Git-aware "Internal FSMonitor" feature
in a later patch series.
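
For example (illustrative only; "client_fd" is a placeholder descriptor
for an accepted connection), a daemon-side reader could combine the new
flag with the existing gentle-EOF option:

    char buffer[LARGE_PACKET_MAX];
    int len = packet_read(client_fd, NULL, NULL,
                          buffer, sizeof(buffer),
                          PACKET_READ_GENTLE_ON_EOF |
                          PACKET_READ_NEVER_DIE);
    if (len < 0)
            close(client_fd); /* drop this client, keep serving others */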

Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
---
 pkt-line.c | 19 +++++++++++++++++--
 pkt-line.h |  4 ++++
 2 files changed, 21 insertions(+), 2 deletions(-)

diff --git a/pkt-line.c b/pkt-line.c
index 98439a2fed0..5c2d86a2f60 100644
--- a/pkt-line.c
+++ b/pkt-line.c
@@ -298,8 +298,11 @@ static int get_packet_data(int fd, char **src_buf, size_t *src_size,
 		*src_size -= ret;
 	} else {
 		ret = read_in_full(fd, dst, size);
-		if (ret < 0)
+		if (ret < 0) {
+			if (options & PACKET_READ_NEVER_DIE)
+				return error_errno(_("read error"));
 			die_errno(_("read error"));
+		}
 	}
 
 	/* And complain if we didn't get enough bytes to satisfy the read. */
@@ -307,6 +310,8 @@ static int get_packet_data(int fd, char **src_buf, size_t *src_size,
 		if (options & PACKET_READ_GENTLE_ON_EOF)
 			return -1;
 
+		if (options & PACKET_READ_NEVER_DIE)
+			return error(_("the remote end hung up unexpectedly"));
 		die(_("the remote end hung up unexpectedly"));
 	}
 
@@ -335,6 +340,9 @@ enum packet_read_status packet_read_with_status(int fd, char **src_buffer,
 	len = packet_length(linelen);
 
 	if (len < 0) {
+		if (options & PACKET_READ_NEVER_DIE)
+			return error(_("protocol error: bad line length "
+				       "character: %.4s"), linelen);
 		die(_("protocol error: bad line length character: %.4s"), linelen);
 	} else if (!len) {
 		packet_trace("0000", 4, 0);
@@ -349,12 +357,19 @@ enum packet_read_status packet_read_with_status(int fd, char **src_buffer,
 		*pktlen = 0;
 		return PACKET_READ_RESPONSE_END;
 	} else if (len < 4) {
+		if (options & PACKET_READ_NEVER_DIE)
+			return error(_("protocol error: bad line length %d"),
+				     len);
 		die(_("protocol error: bad line length %d"), len);
 	}
 
 	len -= 4;
-	if ((unsigned)len >= size)
+	if ((unsigned)len >= size) {
+		if (options & PACKET_READ_NEVER_DIE)
+			return error(_("protocol error: bad line length %d"),
+				     len);
 		die(_("protocol error: bad line length %d"), len);
+	}
 
 	if (get_packet_data(fd, src_buffer, src_len, buffer, len, options) < 0) {
 		*pktlen = -1;
diff --git a/pkt-line.h b/pkt-line.h
index 8c90daa59ef..c1fa245faf8 100644
--- a/pkt-line.h
+++ b/pkt-line.h
@@ -68,10 +68,14 @@ int write_packetized_from_buf(const char *src_in, size_t len, int fd_out);
  *
  * If options contains PACKET_READ_DIE_ON_ERR_PACKET, it dies when it sees an
  * ERR packet.
+ *
+ * With `PACKET_READ_NEVER_DIE`, no errors are allowed to trigger die() (except
+ * an ERR packet, when `PACKET_READ_DIE_ON_ERR_PACKET` is in effect).
  */
 #define PACKET_READ_GENTLE_ON_EOF     (1u<<0)
 #define PACKET_READ_CHOMP_NEWLINE     (1u<<1)
 #define PACKET_READ_DIE_ON_ERR_PACKET (1u<<2)
+#define PACKET_READ_NEVER_DIE         (1u<<3)
 int packet_read(int fd, char **src_buffer, size_t *src_len, char
 		*buffer, unsigned size, int options);
 
-- 
gitgitgadget


^ permalink raw reply related	[flat|nested] 178+ messages in thread

* [PATCH 03/10] pkt-line: optionally skip the flush packet in write_packetized_from_buf()
  2021-01-12 15:31 [PATCH 00/10] [RFC] Simple IPC Mechanism Jeff Hostetler via GitGitGadget
  2021-01-12 15:31 ` [PATCH 01/10] pkt-line: use stack rather than static buffer in packet_write_gently() Jeff Hostetler via GitGitGadget
  2021-01-12 15:31 ` [PATCH 02/10] pkt-line: (optionally) libify the packet readers Johannes Schindelin via GitGitGadget
@ 2021-01-12 15:31 ` Johannes Schindelin via GitGitGadget
  2021-01-12 15:31 ` [PATCH 04/10] pkt-line: accept additional options in read_packetized_to_strbuf() Johannes Schindelin via GitGitGadget
                   ` (9 subsequent siblings)
  12 siblings, 0 replies; 178+ messages in thread
From: Johannes Schindelin via GitGitGadget @ 2021-01-12 15:31 UTC (permalink / raw)
  To: git; +Cc: Jeff Hostetler, Johannes Schindelin

From: Johannes Schindelin <johannes.schindelin@gmx.de>

This function currently has only one caller: `apply_multi_file_filter()`
in `convert.c`. That caller wants a flush packet to be written after
writing the payload.

However, we are about to introduce a user that wants to write many
packets before a final flush packet, so let's extend this function to
prepare for that scenario.
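
For example (the buffers and file descriptor are illustrative), a caller
that streams several payloads over the same descriptor could suppress the
flush for all but the last chunk:

    if (write_packetized_from_buf(part1, len1, fd, 0) ||
        write_packetized_from_buf(part2, len2, fd, 1))
            return error(_("could not send payload"));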

Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
---
 convert.c  | 2 +-
 pkt-line.c | 5 +++--
 pkt-line.h | 3 ++-
 3 files changed, 6 insertions(+), 4 deletions(-)

diff --git a/convert.c b/convert.c
index ee360c2f07c..3f396a9b288 100644
--- a/convert.c
+++ b/convert.c
@@ -886,7 +886,7 @@ static int apply_multi_file_filter(const char *path, const char *src, size_t len
 	if (fd >= 0)
 		err = write_packetized_from_fd(fd, process->in);
 	else
-		err = write_packetized_from_buf(src, len, process->in);
+		err = write_packetized_from_buf(src, len, process->in, 1);
 	if (err)
 		goto done;
 
diff --git a/pkt-line.c b/pkt-line.c
index 5c2d86a2f60..ef83439b9ee 100644
--- a/pkt-line.c
+++ b/pkt-line.c
@@ -261,7 +261,8 @@ int write_packetized_from_fd(int fd_in, int fd_out)
 	return err;
 }
 
-int write_packetized_from_buf(const char *src_in, size_t len, int fd_out)
+int write_packetized_from_buf(const char *src_in, size_t len, int fd_out,
+			      int flush_at_end)
 {
 	int err = 0;
 	size_t bytes_written = 0;
@@ -277,7 +278,7 @@ int write_packetized_from_buf(const char *src_in, size_t len, int fd_out)
 		err = packet_write_gently(fd_out, src_in + bytes_written, bytes_to_write);
 		bytes_written += bytes_to_write;
 	}
-	if (!err)
+	if (!err && flush_at_end)
 		err = packet_flush_gently(fd_out);
 	return err;
 }
diff --git a/pkt-line.h b/pkt-line.h
index c1fa245faf8..5b7a0fb8510 100644
--- a/pkt-line.h
+++ b/pkt-line.h
@@ -33,7 +33,8 @@ void packet_buf_write_len(struct strbuf *buf, const char *data, size_t len);
 int packet_flush_gently(int fd);
 int packet_write_fmt_gently(int fd, const char *fmt, ...) __attribute__((format (printf, 2, 3)));
 int write_packetized_from_fd(int fd_in, int fd_out);
-int write_packetized_from_buf(const char *src_in, size_t len, int fd_out);
+int write_packetized_from_buf(const char *src_in, size_t len, int fd_out,
+			      int flush_at_end);
 
 /*
  * Read a packetized line into the buffer, which must be at least size bytes
-- 
gitgitgadget


^ permalink raw reply related	[flat|nested] 178+ messages in thread

* [PATCH 04/10] pkt-line: accept additional options in read_packetized_to_strbuf()
  2021-01-12 15:31 [PATCH 00/10] [RFC] Simple IPC Mechanism Jeff Hostetler via GitGitGadget
                   ` (2 preceding siblings ...)
  2021-01-12 15:31 ` [PATCH 03/10] pkt-line: optionally skip the flush packet in write_packetized_from_buf() Johannes Schindelin via GitGitGadget
@ 2021-01-12 15:31 ` Johannes Schindelin via GitGitGadget
  2021-01-12 15:31 ` [PATCH 05/10] simple-ipc: design documentation for new IPC mechanism Jeff Hostetler via GitGitGadget
                   ` (8 subsequent siblings)
  12 siblings, 0 replies; 178+ messages in thread
From: Johannes Schindelin via GitGitGadget @ 2021-01-12 15:31 UTC (permalink / raw)
  To: git; +Cc: Jeff Hostetler, Johannes Schindelin

From: Johannes Schindelin <johannes.schindelin@gmx.de>

The `read_packetized_to_strbuf()` function reads packets into a strbuf
until a flush packet has been received. So far, it has only one caller:
`apply_multi_file_filter()` in `convert.c`. This caller really only
needs the `PACKET_READ_GENTLE_ON_EOF` option to be passed to
`packet_read()` (which makes sense in the scenario where packets should
be read until a flush packet is received).

We are about to introduce a caller that wants to pass other options
through to `packet_read()`, so let's extend the function signature
accordingly.
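
For example, mirroring how the Windows simple-ipc code later in this
series uses it (the file descriptor is illustrative):

    struct strbuf answer = STRBUF_INIT;

    if (read_packetized_to_strbuf(fd, &answer,
                                  PACKET_READ_NEVER_DIE) < 0)
            return error(_("could not read response"));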

Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
---
 convert.c  | 2 +-
 pkt-line.c | 4 ++--
 pkt-line.h | 6 +++++-
 3 files changed, 8 insertions(+), 4 deletions(-)

diff --git a/convert.c b/convert.c
index 3f396a9b288..175c5cd51d5 100644
--- a/convert.c
+++ b/convert.c
@@ -903,7 +903,7 @@ static int apply_multi_file_filter(const char *path, const char *src, size_t len
 		if (err)
 			goto done;
 
-		err = read_packetized_to_strbuf(process->out, &nbuf) < 0;
+		err = read_packetized_to_strbuf(process->out, &nbuf, 0) < 0;
 		if (err)
 			goto done;
 
diff --git a/pkt-line.c b/pkt-line.c
index ef83439b9ee..615211819cd 100644
--- a/pkt-line.c
+++ b/pkt-line.c
@@ -437,7 +437,7 @@ char *packet_read_line_buf(char **src, size_t *src_len, int *dst_len)
 	return packet_read_line_generic(-1, src, src_len, dst_len);
 }
 
-ssize_t read_packetized_to_strbuf(int fd_in, struct strbuf *sb_out)
+ssize_t read_packetized_to_strbuf(int fd_in, struct strbuf *sb_out, int options)
 {
 	int packet_len;
 
@@ -453,7 +453,7 @@ ssize_t read_packetized_to_strbuf(int fd_in, struct strbuf *sb_out)
 			 * that there is already room for the extra byte.
 			 */
 			sb_out->buf + sb_out->len, LARGE_PACKET_DATA_MAX+1,
-			PACKET_READ_GENTLE_ON_EOF);
+			options | PACKET_READ_GENTLE_ON_EOF);
 		if (packet_len <= 0)
 			break;
 		sb_out->len += packet_len;
diff --git a/pkt-line.h b/pkt-line.h
index 5b7a0fb8510..02554a20a6c 100644
--- a/pkt-line.h
+++ b/pkt-line.h
@@ -135,8 +135,12 @@ char *packet_read_line_buf(char **src_buf, size_t *src_len, int *size);
 
 /*
  * Reads a stream of variable sized packets until a flush packet is detected.
+ *
+ * The options are augmented by PACKET_READ_GENTLE_ON_EOF and passed to
+ * packet_read.
  */
-ssize_t read_packetized_to_strbuf(int fd_in, struct strbuf *sb_out);
+ssize_t read_packetized_to_strbuf(int fd_in, struct strbuf *sb_out,
+				  int options);
 
 /*
  * Receive multiplexed output stream over git native protocol.
-- 
gitgitgadget


^ permalink raw reply related	[flat|nested] 178+ messages in thread

* [PATCH 05/10] simple-ipc: design documentation for new IPC mechanism
  2021-01-12 15:31 [PATCH 00/10] [RFC] Simple IPC Mechanism Jeff Hostetler via GitGitGadget
                   ` (3 preceding siblings ...)
  2021-01-12 15:31 ` [PATCH 04/10] pkt-line: accept additional options in read_packetized_to_strbuf() Johannes Schindelin via GitGitGadget
@ 2021-01-12 15:31 ` Jeff Hostetler via GitGitGadget
  2021-01-12 16:40   ` Ævar Arnfjörð Bjarmason
  2021-01-12 15:31 ` [PATCH 06/10] simple-ipc: add win32 implementation Jeff Hostetler via GitGitGadget
                   ` (7 subsequent siblings)
  12 siblings, 1 reply; 178+ messages in thread
From: Jeff Hostetler via GitGitGadget @ 2021-01-12 15:31 UTC (permalink / raw)
  To: git; +Cc: Jeff Hostetler, Jeff Hostetler

From: Jeff Hostetler <jeffhost@microsoft.com>

Add brief design documentation for the new IPC mechanism that allows a
foreground Git client to talk with an existing daemon process at a
known location over a named pipe or Unix domain socket.
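
As a sketch of the client side of this design (the rendezvous path and
request string are placeholders; the API itself is added later in this
series):

    struct ipc_client_connect_options options =
            IPC_CLIENT_CONNECT_OPTIONS_INIT;
    struct strbuf answer = STRBUF_INIT;

    options.wait_if_busy = 1;

    if (!ipc_client_send_command("/path/to/rendezvous", &options,
                                 "my-request", &answer))
            printf("server answered: %s\n", answer.buf);

    strbuf_release(&answer);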

Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
---
 Documentation/technical/api-simple-ipc.txt | 31 ++++++++++++++++++++++
 1 file changed, 31 insertions(+)
 create mode 100644 Documentation/technical/api-simple-ipc.txt

diff --git a/Documentation/technical/api-simple-ipc.txt b/Documentation/technical/api-simple-ipc.txt
new file mode 100644
index 00000000000..920994a69d3
--- /dev/null
+++ b/Documentation/technical/api-simple-ipc.txt
@@ -0,0 +1,31 @@
+simple-ipc API
+==============
+
+The simple-ipc API is used to send an IPC message and response between
+a (presumably) foreground Git client process and a background server or
+daemon process.  The server process must already be running.  Multiple
+client processes can simultaneously communicate with the server
+process.
+
+Communication occurs over a named pipe on Windows and a Unix domain
+socket on other platforms.  Clients and the server rendezvous at a
+previously agreed-to application-specific pathname (which is outside
+the scope of this design).
+
+This IPC mechanism differs from the existing `sub-process.c` model
+(Documentation/technical/long-running-process-protocol.txt) used by
+applications like Git-LFS, because here the server is assumed to be a
+very long-running system service.  In contrast, a "sub-process model
+process" is started with the foreground process and exits when the
+foreground process terminates.  How the server is started is also
+outside the scope of the IPC mechanism.
+
+The IPC protocol consists of a single request message from the client and
+an optional response message from the server.  For simplicity, pkt-line
+routines (Documentation/technical/protocol-common.txt) are used to hide
+chunking and buffering concerns.  Each side terminates its message with
+a flush packet.
+
+The actual format of the client and server messages is application
+specific.  The IPC layer transmits and receives an opaque buffer without
+any concern for the content within.
-- 
gitgitgadget


^ permalink raw reply related	[flat|nested] 178+ messages in thread

* [PATCH 06/10] simple-ipc: add win32 implementation
  2021-01-12 15:31 [PATCH 00/10] [RFC] Simple IPC Mechanism Jeff Hostetler via GitGitGadget
                   ` (4 preceding siblings ...)
  2021-01-12 15:31 ` [PATCH 05/10] simple-ipc: design documentation for new IPC mechanism Jeff Hostetler via GitGitGadget
@ 2021-01-12 15:31 ` Jeff Hostetler via GitGitGadget
  2021-01-12 15:31 ` [PATCH 07/10] unix-socket: create gentle version of unix_stream_listen() Jeff Hostetler via GitGitGadget
                   ` (6 subsequent siblings)
  12 siblings, 0 replies; 178+ messages in thread
From: Jeff Hostetler via GitGitGadget @ 2021-01-12 15:31 UTC (permalink / raw)
  To: git; +Cc: Jeff Hostetler, Jeff Hostetler

From: Jeff Hostetler <jeffhost@microsoft.com>

Create a Windows implementation of "simple-ipc" using named pipes.
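
As a rough sketch of the server lifecycle that this implementation backs
(the callback "my_app_cb" and the rendezvous path are placeholders):

    struct ipc_server_data *server = NULL;
    struct ipc_server_opts opts = { .nr_threads = 4 };

    if (ipc_server_run_async(&server, "/path/to/rendezvous", &opts,
                             my_app_cb, NULL))
            return -1;

    /* ... do other foreground work while the pool serves clients ... */

    ipc_server_stop_async(server); /* stop accepting new connections */
    ipc_server_await(server);      /* join the worker threads */
    ipc_server_free(server);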

Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
---
 Makefile                            |   5 +
 compat/simple-ipc/ipc-shared.c      |  28 ++
 compat/simple-ipc/ipc-win32.c       | 723 ++++++++++++++++++++++++++++
 config.mak.uname                    |   2 +
 contrib/buildsystems/CMakeLists.txt |   4 +
 simple-ipc.h                        | 216 +++++++++
 6 files changed, 978 insertions(+)
 create mode 100644 compat/simple-ipc/ipc-shared.c
 create mode 100644 compat/simple-ipc/ipc-win32.c
 create mode 100644 simple-ipc.h

diff --git a/Makefile b/Makefile
index 7b64106930a..c94d5847919 100644
--- a/Makefile
+++ b/Makefile
@@ -1682,6 +1682,11 @@ else
 	LIB_OBJS += unix-socket.o
 endif
 
+ifdef USE_WIN32_IPC
+	LIB_OBJS += compat/simple-ipc/ipc-shared.o
+	LIB_OBJS += compat/simple-ipc/ipc-win32.o
+endif
+
 ifdef NO_ICONV
 	BASIC_CFLAGS += -DNO_ICONV
 endif
diff --git a/compat/simple-ipc/ipc-shared.c b/compat/simple-ipc/ipc-shared.c
new file mode 100644
index 00000000000..1edec815953
--- /dev/null
+++ b/compat/simple-ipc/ipc-shared.c
@@ -0,0 +1,28 @@
+#include "cache.h"
+#include "simple-ipc.h"
+#include "strbuf.h"
+#include "pkt-line.h"
+#include "thread-utils.h"
+
+#ifdef SUPPORTS_SIMPLE_IPC
+
+int ipc_server_run(const char *path, const struct ipc_server_opts *opts,
+		   ipc_server_application_cb *application_cb,
+		   void *application_data)
+{
+	struct ipc_server_data *server_data = NULL;
+	int ret;
+
+	ret = ipc_server_run_async(&server_data, path, opts,
+				   application_cb, application_data);
+	if (ret)
+		return ret;
+
+	ret = ipc_server_await(server_data);
+
+	ipc_server_free(server_data);
+
+	return ret;
+}
+
+#endif /* SUPPORTS_SIMPLE_IPC */
diff --git a/compat/simple-ipc/ipc-win32.c b/compat/simple-ipc/ipc-win32.c
new file mode 100644
index 00000000000..475d9f02ff6
--- /dev/null
+++ b/compat/simple-ipc/ipc-win32.c
@@ -0,0 +1,723 @@
+#include "cache.h"
+#include "simple-ipc.h"
+#include "strbuf.h"
+#include "pkt-line.h"
+#include "thread-utils.h"
+
+#ifndef GIT_WINDOWS_NATIVE
+#error This file can only be compiled on Windows
+#endif
+
+static int initialize_pipe_name(const char *path, wchar_t *wpath, size_t alloc)
+{
+	int off = 0;
+	struct strbuf realpath = STRBUF_INIT;
+
+	if (!strbuf_realpath(&realpath, path, 0))
+		return -1;
+
+	off = swprintf(wpath, alloc, L"\\\\.\\pipe\\");
+	if (xutftowcs(wpath + off, realpath.buf, alloc - off) < 0)
+		return -1;
+
+	/* Handle drive prefix */
+	if (wpath[off] && wpath[off + 1] == L':') {
+		wpath[off + 1] = L'_';
+		off += 2;
+	}
+
+	for (; wpath[off]; off++)
+		if (wpath[off] == L'/')
+			wpath[off] = L'\\';
+
+	strbuf_release(&realpath);
+	return 0;
+}
+
+static enum ipc_active_state get_active_state(wchar_t *pipe_path)
+{
+	if (WaitNamedPipeW(pipe_path, NMPWAIT_USE_DEFAULT_WAIT))
+		return IPC_STATE__LISTENING;
+
+	if (GetLastError() == ERROR_SEM_TIMEOUT)
+		return IPC_STATE__NOT_LISTENING;
+
+	if (GetLastError() == ERROR_FILE_NOT_FOUND)
+		return IPC_STATE__PATH_NOT_FOUND;
+
+	return IPC_STATE__OTHER_ERROR;
+}
+
+enum ipc_active_state ipc_get_active_state(const char *path)
+{
+	wchar_t pipe_path[MAX_PATH];
+
+	if (initialize_pipe_name(path, pipe_path, ARRAY_SIZE(pipe_path)) < 0)
+		return IPC_STATE__INVALID_PATH;
+
+	return get_active_state(pipe_path);
+}
+
+#define WAIT_STEP_MS (50)
+
+static enum ipc_active_state connect_to_server(
+	const wchar_t *wpath,
+	DWORD timeout_ms,
+	const struct ipc_client_connect_options *options,
+	int *pfd)
+{
+	DWORD t_start_ms, t_waited_ms;
+	DWORD step_ms;
+	HANDLE hPipe = INVALID_HANDLE_VALUE;
+	DWORD mode = PIPE_READMODE_BYTE;
+	DWORD gle;
+
+	*pfd = -1;
+
+	for (;;) {
+		hPipe = CreateFileW(wpath, GENERIC_READ | GENERIC_WRITE,
+				    0, NULL, OPEN_EXISTING, 0, NULL);
+		if (hPipe != INVALID_HANDLE_VALUE)
+			break;
+
+		gle = GetLastError();
+
+		switch (gle) {
+		case ERROR_FILE_NOT_FOUND:
+			if (!options->wait_if_not_found)
+				return IPC_STATE__PATH_NOT_FOUND;
+			if (!timeout_ms)
+				return IPC_STATE__PATH_NOT_FOUND;
+
+			step_ms = (timeout_ms < WAIT_STEP_MS) ?
+				timeout_ms : WAIT_STEP_MS;
+			sleep_millisec(step_ms);
+
+			timeout_ms -= step_ms;
+			break; /* try again */
+
+		case ERROR_PIPE_BUSY:
+			if (!options->wait_if_busy)
+				return IPC_STATE__NOT_LISTENING;
+			if (!timeout_ms)
+				return IPC_STATE__NOT_LISTENING;
+
+			t_start_ms = (DWORD)(getnanotime() / 1000000);
+
+			if (!WaitNamedPipeW(wpath, timeout_ms)) {
+				if (GetLastError() == ERROR_SEM_TIMEOUT)
+					return IPC_STATE__NOT_LISTENING;
+
+				return IPC_STATE__OTHER_ERROR;
+			}
+
+			/*
+			 * A pipe server instance became available.
+			 * Race other client processes to connect to
+			 * it.
+			 *
+			 * But first decrement our overall timeout so
+			 * that we don't starve if we keep losing the
+			 * race.  But also guard against special
+			 * NMPWAIT_ values (0 and -1).
+			 */
+			t_waited_ms = (DWORD)(getnanotime() / 1000000) - t_start_ms;
+			if (t_waited_ms < timeout_ms)
+				timeout_ms -= t_waited_ms;
+			else
+				timeout_ms = 1;
+			break; /* try again */
+
+		default:
+			return IPC_STATE__OTHER_ERROR;
+		}
+	}
+
+	if (!SetNamedPipeHandleState(hPipe, &mode, NULL, NULL)) {
+		CloseHandle(hPipe);
+		return IPC_STATE__OTHER_ERROR;
+	}
+
+	*pfd = _open_osfhandle((intptr_t)hPipe, O_RDWR|O_BINARY);
+	if (*pfd < 0) {
+		CloseHandle(hPipe);
+		return IPC_STATE__OTHER_ERROR;
+	}
+
+	/* fd now owns hPipe */
+
+	return IPC_STATE__LISTENING;
+}
+
+/*
+ * The default connection timeout for Windows clients.
+ *
+ * This is not currently part of the ipc_ API (nor the config settings)
+ * because of differences between Windows and other platforms.
+ *
+ * This value was chosen at random.
+ */
+#define WINDOWS_CONNECTION_TIMEOUT_MS (30000)
+
+enum ipc_active_state ipc_client_try_connect(
+	const char *path,
+	const struct ipc_client_connect_options *options,
+	int *pfd)
+{
+	wchar_t wpath[MAX_PATH];
+	enum ipc_active_state state = IPC_STATE__OTHER_ERROR;
+
+	*pfd = -1;
+
+	trace2_region_enter("ipc-client", "try-connect", NULL);
+	trace2_data_string("ipc-client", NULL, "try-connect/path", path);
+
+	if (initialize_pipe_name(path, wpath, ARRAY_SIZE(wpath)) < 0)
+		state = IPC_STATE__INVALID_PATH;
+	else
+		state = connect_to_server(wpath, WINDOWS_CONNECTION_TIMEOUT_MS,
+					  options, pfd);
+
+	trace2_data_intmax("ipc-client", NULL, "try-connect/state",
+			   (intmax_t)state);
+	trace2_region_leave("ipc-client", "try-connect", NULL);
+	return state;
+}
+
+int ipc_client_send_command_to_fd(int fd, const char *message,
+				  struct strbuf *answer)
+{
+	int ret = 0;
+
+	strbuf_setlen(answer, 0);
+
+	trace2_region_enter("ipc-client", "send-command", NULL);
+
+	if (write_packetized_from_buf(message, strlen(message), fd, 1) < 0) {
+		ret = error(_("could not send IPC command"));
+		goto done;
+	}
+
+	FlushFileBuffers((HANDLE)_get_osfhandle(fd));
+
+	if (read_packetized_to_strbuf(fd, answer, PACKET_READ_NEVER_DIE) < 0) {
+		ret = error(_("could not read IPC response"));
+		goto done;
+	}
+
+done:
+	trace2_region_leave("ipc-client", "send-command", NULL);
+	return ret;
+}
+
+int ipc_client_send_command(const char *path,
+			    const struct ipc_client_connect_options *options,
+			    const char *message, struct strbuf *response)
+{
+	int fd;
+	int ret = -1;
+	enum ipc_active_state state;
+
+	state = ipc_client_try_connect(path, options, &fd);
+
+	if (state != IPC_STATE__LISTENING)
+		return ret;
+
+	ret = ipc_client_send_command_to_fd(fd, message, response);
+	close(fd);
+	return ret;
+}
+
+/*
+ * Duplicate the given pipe handle and wrap it in a file descriptor so
+ * that we can use pkt-line on it.
+ */
+static int dup_fd_from_pipe(const HANDLE pipe)
+{
+	HANDLE process = GetCurrentProcess();
+	HANDLE handle;
+	int fd;
+
+	if (!DuplicateHandle(process, pipe, process, &handle, 0, FALSE,
+			     DUPLICATE_SAME_ACCESS)) {
+		errno = err_win_to_posix(GetLastError());
+		return -1;
+	}
+
+	fd = _open_osfhandle((intptr_t)handle, O_RDWR|O_BINARY);
+	if (fd < 0) {
+		errno = err_win_to_posix(GetLastError());
+		CloseHandle(handle);
+		return -1;
+	}
+
+	/*
+	 * `handle` is now owned by `fd` and will be automatically closed
+	 * when the descriptor is closed.
+	 */
+
+	return fd;
+}
+
+/*
+ * Magic numbers used to annotate callback instance data.
+ * These are used to help guard against accidentally passing the
+ * wrong instance data across multiple levels of callbacks (which
+ * is easy to do if there are `void*` arguments).
+ */
+enum magic {
+	MAGIC_SERVER_REPLY_DATA,
+	MAGIC_SERVER_THREAD_DATA,
+	MAGIC_SERVER_DATA,
+};
+
+struct ipc_server_reply_data {
+	enum magic magic;
+	int fd;
+	struct ipc_server_thread_data *server_thread_data;
+};
+
+struct ipc_server_thread_data {
+	enum magic magic;
+	struct ipc_server_thread_data *next_thread;
+	struct ipc_server_data *server_data;
+	pthread_t pthread_id;
+	HANDLE hPipe;
+};
+
+/*
+ * On Windows, the conceptual "ipc-server" is implemented as a pool of
+ * n identical/peer "server-thread" threads.  That is, there is no
+ * hierarchy of threads and therefore no controller thread managing
+ * the pool.  Each thread has an independent handle to the named pipe,
+ * receives incoming connections, processes the client, and re-uses
+ * the pipe for the next client connection.
+ *
+ * Therefore, the "ipc-server" only needs to maintain a list of the
+ * spawned threads for eventual "join" purposes.
+ *
+ * A single "stop-event" is visible to all of the server threads to
+ * tell them to shut down (when idle).
+ */
+struct ipc_server_data {
+	enum magic magic;
+	ipc_server_application_cb *application_cb;
+	void *application_data;
+	struct strbuf buf_path;
+	wchar_t wpath[MAX_PATH];
+
+	HANDLE hEventStopRequested;
+	struct ipc_server_thread_data *thread_list;
+	int is_stopped;
+};
+
+enum connect_result {
+	CR_CONNECTED = 0,
+	CR_CONNECT_PENDING,
+	CR_CONNECT_ERROR,
+	CR_WAIT_ERROR,
+	CR_SHUTDOWN,
+};
+
+static enum connect_result queue_overlapped_connect(
+	struct ipc_server_thread_data *server_thread_data,
+	OVERLAPPED *lpo)
+{
+	if (ConnectNamedPipe(server_thread_data->hPipe, lpo))
+		goto failed;
+
+	switch (GetLastError()) {
+	case ERROR_IO_PENDING:
+		return CR_CONNECT_PENDING;
+
+	case ERROR_PIPE_CONNECTED:
+		SetEvent(lpo->hEvent);
+		return CR_CONNECTED;
+
+	default:
+		break;
+	}
+
+failed:
+	error(_("ConnectNamedPipe failed for '%s' (%lu)"),
+	      server_thread_data->server_data->buf_path.buf,
+	      GetLastError());
+	return CR_CONNECT_ERROR;
+}
+
+/*
+ * Use Windows Overlapped IO to wait for a connection or for our event
+ * to be signalled.
+ */
+static enum connect_result wait_for_connection(
+	struct ipc_server_thread_data *server_thread_data,
+	OVERLAPPED *lpo)
+{
+	enum connect_result r;
+	HANDLE waitHandles[2];
+	DWORD dwWaitResult;
+
+	r = queue_overlapped_connect(server_thread_data, lpo);
+	if (r != CR_CONNECT_PENDING)
+		return r;
+
+	waitHandles[0] = server_thread_data->server_data->hEventStopRequested;
+	waitHandles[1] = lpo->hEvent;
+
+	dwWaitResult = WaitForMultipleObjects(2, waitHandles, FALSE, INFINITE);
+	switch (dwWaitResult) {
+	case WAIT_OBJECT_0 + 0:
+		return CR_SHUTDOWN;
+
+	case WAIT_OBJECT_0 + 1:
+		ResetEvent(lpo->hEvent);
+		return CR_CONNECTED;
+
+	default:
+		return CR_WAIT_ERROR;
+	}
+}
+
+/*
+ * Forward declare our reply callback function so that any compiler
+ * errors are reported when we actually define the function (in addition
+ * to any errors reported when we try to pass this callback function as
+ * a parameter in a function call).  The former are easier to understand.
+ */
+static ipc_server_reply_cb do_io_reply_callback;
+
+/*
+ * Relay application's response message to the client process.
+ * (We do not flush at this point because we allow the caller
+ * to chunk data to the client through us.)
+ */
+static int do_io_reply_callback(struct ipc_server_reply_data *reply_data,
+		       const char *response, size_t response_len)
+{
+	if (reply_data->magic != MAGIC_SERVER_REPLY_DATA)
+		BUG("reply_cb called with wrong instance data");
+
+	return write_packetized_from_buf(response, response_len,
+					 reply_data->fd, 0);
+}
+
+/*
+ * Receive the request/command from the client and pass it to the
+ * registered request-callback.  The request-callback will compose
+ * a response and call our reply-callback to send it to the client.
+ *
+ * Simple-IPC only contains one round trip, so we flush and close
+ * here after the response.
+ */
+static int do_io(struct ipc_server_thread_data *server_thread_data)
+{
+	struct strbuf buf = STRBUF_INIT;
+	struct ipc_server_reply_data reply_data;
+	int ret = 0;
+
+	reply_data.magic = MAGIC_SERVER_REPLY_DATA;
+	reply_data.server_thread_data = server_thread_data;
+
+	reply_data.fd = dup_fd_from_pipe(server_thread_data->hPipe);
+	if (reply_data.fd < 0)
+		return error(_("could not create fd from pipe for '%s'"),
+			     server_thread_data->server_data->buf_path.buf);
+
+	ret = read_packetized_to_strbuf(reply_data.fd, &buf,
+					PACKET_READ_NEVER_DIE);
+	if (ret >= 0) {
+		ret = server_thread_data->server_data->application_cb(
+			server_thread_data->server_data->application_data,
+			buf.buf, do_io_reply_callback, &reply_data);
+
+		packet_flush_gently(reply_data.fd);
+
+		FlushFileBuffers((HANDLE)_get_osfhandle((reply_data.fd)));
+	}
+	else {
+		/*
+		 * The client probably disconnected/shutdown before it
+		 * could send a well-formed message.  Ignore it.
+		 */
+	}
+
+	strbuf_release(&buf);
+	close(reply_data.fd);
+
+	return ret;
+}
+
+/*
+ * Handle IPC request and response with this connected client.  And reset
+ * the pipe to prepare for the next client.
+ */
+static int use_connection(struct ipc_server_thread_data *server_thread_data)
+{
+	int ret;
+
+	ret = do_io(server_thread_data);
+
+	FlushFileBuffers(server_thread_data->hPipe);
+	DisconnectNamedPipe(server_thread_data->hPipe);
+
+	return ret;
+}
+
+/*
+ * Thread proc for an IPC server worker thread.  It handles a series of
+ * connections from clients.  It cleans and reuses the hPipe between each
+ * client.
+ */
+static void *server_thread_proc(void *_server_thread_data)
+{
+	struct ipc_server_thread_data *server_thread_data = _server_thread_data;
+	HANDLE hEventConnected = INVALID_HANDLE_VALUE;
+	OVERLAPPED oConnect;
+	enum connect_result cr;
+	int ret;
+
+	assert(server_thread_data->hPipe != INVALID_HANDLE_VALUE);
+
+	trace2_thread_start("ipc-server");
+	trace2_data_string("ipc-server", NULL, "pipe",
+			   server_thread_data->server_data->buf_path.buf);
+
+	hEventConnected = CreateEventW(NULL, TRUE, FALSE, NULL);
+
+	memset(&oConnect, 0, sizeof(oConnect));
+	oConnect.hEvent = hEventConnected;
+
+	for (;;) {
+		cr = wait_for_connection(server_thread_data, &oConnect);
+
+		switch (cr) {
+		case CR_SHUTDOWN:
+			goto finished;
+
+		case CR_CONNECTED:
+			ret = use_connection(server_thread_data);
+			if (ret == SIMPLE_IPC_QUIT) {
+				ipc_server_stop_async(
+					server_thread_data->server_data);
+				goto finished;
+			}
+			if (ret > 0) {
+				/*
+				 * Ignore (transient) IO errors with this
+				 * client and reset for the next client.
+				 */
+			}
+			break;
+
+		case CR_CONNECT_PENDING:
+			/* By construction, this should not happen. */
+			BUG("ipc-server[%s]: unexpected CR_CONNECT_PENDING",
+			    server_thread_data->server_data->buf_path.buf);
+
+		case CR_CONNECT_ERROR:
+		case CR_WAIT_ERROR:
+			/*
+			 * Ignore these theoretical errors.
+			 */
+			DisconnectNamedPipe(server_thread_data->hPipe);
+			break;
+
+		default:
+			BUG("unhandled case after wait_for_connection");
+		}
+	}
+
+finished:
+	CloseHandle(server_thread_data->hPipe);
+	CloseHandle(hEventConnected);
+
+	trace2_thread_exit();
+	return NULL;
+}
+
+static HANDLE create_new_pipe(wchar_t *wpath, int is_first)
+{
+	HANDLE hPipe;
+	DWORD dwOpenMode, dwPipeMode;
+	LPSECURITY_ATTRIBUTES lpsa = NULL;
+
+	dwOpenMode = PIPE_ACCESS_INBOUND | PIPE_ACCESS_OUTBOUND |
+		FILE_FLAG_OVERLAPPED;
+
+	dwPipeMode = PIPE_TYPE_MESSAGE | PIPE_READMODE_BYTE | PIPE_WAIT |
+		PIPE_REJECT_REMOTE_CLIENTS;
+
+	if (is_first) {
+		dwOpenMode |= FILE_FLAG_FIRST_PIPE_INSTANCE;
+
+		/*
+		 * On Windows, the first server pipe instance gets to
+		 * set the ACL / Security Attributes on the named
+		 * pipe; subsequent instances inherit and cannot
+		 * change them.
+		 *
+		 * TODO Should we allow the application layer to
+		 * specify security attributes, such as `LocalService`
+		 * or `LocalSystem`, when we create the named pipe?
+		 * This question is probably not important when the
+		 * daemon is started by a foreground user process and
+		 * only needs to talk to the current user, but may be
+		 * if the daemon is run via the Control Panel as a
+		 * System Service.
+		 */
+	}
+
+	hPipe = CreateNamedPipeW(wpath, dwOpenMode, dwPipeMode,
+				 PIPE_UNLIMITED_INSTANCES, 1024, 1024, 0, lpsa);
+
+	return hPipe;
+}
+
+int ipc_server_run_async(struct ipc_server_data **returned_server_data,
+			 const char *path, const struct ipc_server_opts *opts,
+			 ipc_server_application_cb *application_cb,
+			 void *application_data)
+{
+	struct ipc_server_data *server_data;
+	wchar_t wpath[MAX_PATH];
+	HANDLE hPipeFirst = INVALID_HANDLE_VALUE;
+	int k;
+	int ret = 0;
+	int nr_threads = opts->nr_threads;
+
+	*returned_server_data = NULL;
+
+	ret = initialize_pipe_name(path, wpath, ARRAY_SIZE(wpath));
+	if (ret < 0)
+		return error(
+			_("could not create normalized wchar_t path for '%s'"),
+			path);
+
+	hPipeFirst = create_new_pipe(wpath, 1);
+	if (hPipeFirst == INVALID_HANDLE_VALUE)
+		return error(_("IPC server already running on '%s'"), path);
+
+	server_data = xcalloc(1, sizeof(*server_data));
+	server_data->magic = MAGIC_SERVER_DATA;
+	server_data->application_cb = application_cb;
+	server_data->application_data = application_data;
+	server_data->hEventStopRequested = CreateEvent(NULL, TRUE, FALSE, NULL);
+	strbuf_init(&server_data->buf_path, 0);
+	strbuf_addstr(&server_data->buf_path, path);
+	wcscpy(server_data->wpath, wpath);
+
+	if (nr_threads < 1)
+		nr_threads = 1;
+
+	for (k = 0; k < nr_threads; k++) {
+		struct ipc_server_thread_data *std;
+
+		std = xcalloc(1, sizeof(*std));
+		std->magic = MAGIC_SERVER_THREAD_DATA;
+		std->server_data = server_data;
+		std->hPipe = INVALID_HANDLE_VALUE;
+
+		std->hPipe = (k == 0)
+			? hPipeFirst
+			: create_new_pipe(server_data->wpath, 0);
+
+		if (std->hPipe == INVALID_HANDLE_VALUE) {
+			/*
+			 * If we've reached a pipe instance limit for
+			 * this path, just use fewer threads.
+			 */
+			free(std);
+			break;
+		}
+
+		if (pthread_create(&std->pthread_id, NULL,
+				   server_thread_proc, std)) {
+			/*
+			 * Likewise, if we're out of threads, just use
+			 * fewer threads than requested.
+			 *
+			 * However, we just give up if we can't even get
+			 * one thread.  This should not happen.
+			 */
+			if (k == 0)
+				die(_("could not start thread[0] for '%s'"),
+				    path);
+
+			CloseHandle(std->hPipe);
+			free(std);
+			break;
+		}
+
+		std->next_thread = server_data->thread_list;
+		server_data->thread_list = std;
+	}
+
+	*returned_server_data = server_data;
+	return 0;
+}
+
+int ipc_server_stop_async(struct ipc_server_data *server_data)
+{
+	if (!server_data)
+		return 0;
+
+	/*
+	 * Gently tell all of the ipc_server threads to shut down.
+	 * This will be seen the next time they are idle (and waiting
+	 * for a connection).
+	 *
+	 * We DO NOT attempt to force them to drop an active connection.
+	 */
+	SetEvent(server_data->hEventStopRequested);
+	return 0;
+}
+
+int ipc_server_await(struct ipc_server_data *server_data)
+{
+	DWORD dwWaitResult;
+
+	if (!server_data)
+		return 0;
+
+	dwWaitResult = WaitForSingleObject(server_data->hEventStopRequested, INFINITE);
+	if (dwWaitResult != WAIT_OBJECT_0)
+		return error(_("wait for hEvent failed for '%s'"),
+			     server_data->buf_path.buf);
+
+	while (server_data->thread_list) {
+		struct ipc_server_thread_data *std = server_data->thread_list;
+
+		pthread_join(std->pthread_id, NULL);
+
+		server_data->thread_list = std->next_thread;
+		free(std);
+	}
+
+	server_data->is_stopped = 1;
+
+	return 0;
+}
+
+void ipc_server_free(struct ipc_server_data *server_data)
+{
+	if (!server_data)
+		return;
+
+	if (!server_data->is_stopped)
+		BUG("cannot free ipc-server while running for '%s'",
+		    server_data->buf_path.buf);
+
+	strbuf_release(&server_data->buf_path);
+
+	if (server_data->hEventStopRequested != INVALID_HANDLE_VALUE)
+		CloseHandle(server_data->hEventStopRequested);
+
+	while (server_data->thread_list) {
+		struct ipc_server_thread_data *std = server_data->thread_list;
+
+		server_data->thread_list = std->next_thread;
+		free(std);
+	}
+
+	free(server_data);
+}
diff --git a/config.mak.uname b/config.mak.uname
index 198ab1e58f8..76087cff678 100644
--- a/config.mak.uname
+++ b/config.mak.uname
@@ -421,6 +421,7 @@ ifeq ($(uname_S),Windows)
 	RUNTIME_PREFIX = YesPlease
 	HAVE_WPGMPTR = YesWeDo
 	NO_ST_BLOCKS_IN_STRUCT_STAT = YesPlease
+	USE_WIN32_IPC = YesPlease
 	USE_WIN32_MMAP = YesPlease
 	MMAP_PREVENTS_DELETE = UnfortunatelyYes
 	# USE_NED_ALLOCATOR = YesPlease
@@ -597,6 +598,7 @@ ifneq (,$(findstring MINGW,$(uname_S)))
 	RUNTIME_PREFIX = YesPlease
 	HAVE_WPGMPTR = YesWeDo
 	NO_ST_BLOCKS_IN_STRUCT_STAT = YesPlease
+	USE_WIN32_IPC = YesPlease
 	USE_WIN32_MMAP = YesPlease
 	MMAP_PREVENTS_DELETE = UnfortunatelyYes
 	USE_NED_ALLOCATOR = YesPlease
diff --git a/contrib/buildsystems/CMakeLists.txt b/contrib/buildsystems/CMakeLists.txt
index c151dd7257f..4bd41054ee7 100644
--- a/contrib/buildsystems/CMakeLists.txt
+++ b/contrib/buildsystems/CMakeLists.txt
@@ -246,6 +246,10 @@ elseif(CMAKE_SYSTEM_NAME STREQUAL "Linux")
 	list(APPEND compat_SOURCES unix-socket.c)
 endif()
 
+if(CMAKE_SYSTEM_NAME STREQUAL "Windows")
+	list(APPEND compat_SOURCES compat/simple-ipc/ipc-shared.c compat/simple-ipc/ipc-win32.c)
+endif()
+
 set(EXE_EXTENSION ${CMAKE_EXECUTABLE_SUFFIX})
 
 #header checks
diff --git a/simple-ipc.h b/simple-ipc.h
new file mode 100644
index 00000000000..cd525e711bd
--- /dev/null
+++ b/simple-ipc.h
@@ -0,0 +1,216 @@
+#ifndef GIT_SIMPLE_IPC_H
+#define GIT_SIMPLE_IPC_H
+
+/*
+ * See Documentation/technical/api-simple-ipc.txt
+ */
+
+#if defined(GIT_WINDOWS_NATIVE)
+#define SUPPORTS_SIMPLE_IPC
+#endif
+
+#ifdef SUPPORTS_SIMPLE_IPC
+
+/*
+ * Simple IPC Client Side API.
+ */
+
+enum ipc_active_state {
+	/*
+	 * The pipe/socket exists and the daemon is waiting for connections.
+	 */
+	IPC_STATE__LISTENING = 0,
+
+	/*
+	 * The pipe/socket exists, but the daemon is not listening.
+	 * Perhaps it is very busy.
+	 * Perhaps the daemon died without deleting the path.
+	 * Perhaps it is shutting down and draining existing clients.
+	 * Perhaps it is dead, but other clients are lingering and
+	 * still holding a reference to the pathname.
+	 */
+	IPC_STATE__NOT_LISTENING,
+
+	/*
+	 * The requested pathname is bogus and no amount of retries
+	 * will fix that.
+	 */
+	IPC_STATE__INVALID_PATH,
+
+	/*
+	 * The requested pathname is not found.  This usually means
+	 * that there is no daemon present.
+	 */
+	IPC_STATE__PATH_NOT_FOUND,
+
+	IPC_STATE__OTHER_ERROR,
+};
+
+struct ipc_client_connect_options {
+	/*
+	 * Spin under timeout if the server is running but can't
+	 * accept our connection yet.  This should always be set
+	 * unless you just want to poke the server and see if it
+	 * is alive.
+	 */
+	unsigned int wait_if_busy:1;
+
+	/*
+	 * Spin under timeout if the pipe/socket is not yet present
+	 * on the file system.  This is useful if we just started
+	 * the service and need to wait for it to become ready.
+	 */
+	unsigned int wait_if_not_found:1;
+};
+
+#define IPC_CLIENT_CONNECT_OPTIONS_INIT { \
+	.wait_if_busy = 0, \
+	.wait_if_not_found = 0, \
+}
+
+/*
+ * Determine if a server is listening on this named pipe or socket using
+ * platform-specific logic.  This might just probe the filesystem or it
+ * might make a trivial connection to the server using this pathname.
+ */
+enum ipc_active_state ipc_get_active_state(const char *path);
+
+/*
+ * Try to connect to the daemon on the named pipe or socket.
+ *
+ * Returns IPC_STATE__LISTENING (and an fd) when connected.
+ *
+ * Otherwise, returns info to help decide whether to retry or to
+ * spawn/respawn the server.
+ */
+enum ipc_active_state ipc_client_try_connect(
+	const char *path,
+	const struct ipc_client_connect_options *options,
+	int *pfd);
+
+/*
+ * Used by the client to synchronously send and receive a message with
+ * the server on the provided fd.
+ *
+ * Returns 0 when successful.
+ *
+ * Calls error() and returns non-zero otherwise.
+ */
+int ipc_client_send_command_to_fd(int fd, const char *message,
+				  struct strbuf *answer);
+
+/*
+ * Used by the client to synchronously connect and send and receive a
+ * message to the server listening at the given path.
+ *
+ * Returns 0 when successful.
+ *
+ * Calls error() and returns non-zero otherwise.
+ */
+int ipc_client_send_command(const char *path,
+			    const struct ipc_client_connect_options *options,
+			    const char *message, struct strbuf *answer);
+
+/*
+ * Simple IPC Server Side API.
+ */
+
+struct ipc_server_reply_data;
+
+typedef int (ipc_server_reply_cb)(struct ipc_server_reply_data *,
+				  const char *response,
+				  size_t response_len);
+
+/*
+ * Prototype for an application-supplied callback to process incoming
+ * client IPC messages and compose a reply.  The `application_cb` should
+ * use the provided `reply_cb` and `reply_data` to send an IPC response
+ * back to the client.  The `reply_cb` callback can be called multiple
+ * times for chunking purposes.  A reply message is optional and may be
+ * omitted if not necessary for the application.
+ *
+ * The return value from the application callback is ignored.
+ * The value `SIMPLE_IPC_QUIT` can be used to shut down the server.
+ */
+typedef int (ipc_server_application_cb)(void *application_data,
+					const char *request,
+					ipc_server_reply_cb *reply_cb,
+					struct ipc_server_reply_data *reply_data);
+
+#define SIMPLE_IPC_QUIT -2
+
+/*
+ * Opaque instance data to represent an IPC server instance.
+ */
+struct ipc_server_data;
+
+/*
+ * Control parameters for the IPC server instance.
+ * Use this to hide platform-specific settings.
+ */
+struct ipc_server_opts
+{
+	int nr_threads;
+};
+
+/*
+ * Start an IPC server instance in one or more background threads
+ * and return a handle to the pool.
+ *
+ * Returns 0 if the asynchronous server pool was started successfully.
+ * Returns -1 if not.
+ *
+ * When a client IPC message is received, the `application_cb` will be
+ * called (possibly on a random thread) to handle the message and
+ * optionally compose a reply message.
+ */
+int ipc_server_run_async(struct ipc_server_data **returned_server_data,
+			 const char *path, const struct ipc_server_opts *opts,
+			 ipc_server_application_cb *application_cb,
+			 void *application_data);
+
+/*
+ * Gently signal the IPC server pool to shut down.  No new client
+ * connections will be accepted, but existing connections will be
+ * allowed to complete.
+ */
+int ipc_server_stop_async(struct ipc_server_data *server_data);
+
+/*
+ * Block the calling thread until all threads in the IPC server pool
+ * have completed and been joined.
+ */
+int ipc_server_await(struct ipc_server_data *server_data);
+
+/*
+ * Close and free all resource handles associated with the IPC server
+ * pool.
+ */
+void ipc_server_free(struct ipc_server_data *server_data);
+
+/*
+ * Run an IPC server instance and block the calling thread of the
+ * current process.  It does not return until the IPC server has
+ * either shut down or had an unrecoverable error.
+ *
+ * The IPC server handles incoming IPC messages from client processes
+ * and may use one or more background threads as necessary.
+ *
+ * Returns 0 after the server has completed successfully.
+ * Returns -1 if the server cannot be started.
+ *
+ * When a client IPC message is received, the `application_cb` will be
+ * called (possibly on a random thread) to handle the message and
+ * optionally compose a reply message.
+ *
+ * Note that `ipc_server_run()` is a synchronous wrapper around the
+ * above asynchronous routines.  It effectively hides all of the
+ * server state and thread details from the caller and presents a
+ * simple synchronous interface.
+ */
+int ipc_server_run(const char *path, const struct ipc_server_opts *opts,
+		   ipc_server_application_cb *application_cb,
+		   void *application_data);
+
+#endif /* SUPPORTS_SIMPLE_IPC */
+#endif /* GIT_SIMPLE_IPC_H */
-- 
gitgitgadget


^ permalink raw reply related	[flat|nested] 178+ messages in thread

* [PATCH 07/10] unix-socket: create gentle version of unix_stream_listen()
  2021-01-12 15:31 [PATCH 00/10] [RFC] Simple IPC Mechanism Jeff Hostetler via GitGitGadget
                   ` (5 preceding siblings ...)
  2021-01-12 15:31 ` [PATCH 06/10] simple-ipc: add win32 implementation Jeff Hostetler via GitGitGadget
@ 2021-01-12 15:31 ` Jeff Hostetler via GitGitGadget
  2021-01-13 14:06   ` Jeff King
  2021-01-12 15:31 ` [PATCH 08/10] unix-socket: add no-chdir option to unix_stream_listen_gently() Jeff Hostetler via GitGitGadget
                   ` (5 subsequent siblings)
  12 siblings, 1 reply; 178+ messages in thread
From: Jeff Hostetler via GitGitGadget @ 2021-01-12 15:31 UTC (permalink / raw)
  To: git; +Cc: Jeff Hostetler, Jeff Hostetler

From: Jeff Hostetler <jeffhost@microsoft.com>

Create a gentle version of `unix_stream_listen()`.  This version does
not call `die()` if a socket-fd cannot be created and does not assume
that it is safe to `unlink()` an existing socket-inode.

`unix_stream_listen()` uses `unix_stream_socket()` helper function to
create the socket-fd.  Avoid that helper because it calls `die()` on
errors.

`unix_stream_listen()` always tries to `unlink()` the socket-path before
calling `bind()`.  If there is an existing server/daemon already bound
and listening on that socket-path, our `unlink()` would have the effect
of disassociating the existing server's bound-socket-fd from the socket-path
without notifying the existing server.  The existing server could continue
to service existing connections (accepted-socket-fd's), but would not
receive any further new connections (since clients rendezvous via the
socket-path).  The existing server would effectively be offline but yet
appear to be active.

Furthermore, `unix_stream_listen()` creates an opportunity for a brief
race condition for connecting clients if they try to connect in the
interval between the forced `unlink()` and the subsequent `bind()` (which
recreates the socket-path that is bound to a new socket-fd in the current
process).
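
For example (path and backlog are illustrative), a daemon that does not
want to steal the pathname from a possibly live peer could call:

    struct unix_stream_listen_opts opts = {
            .listen_backlog_size = 5,
            .force_unlink_before_bind = 0,
    };
    int fd = unix_stream_listen_gently("/path/to/socket", &opts);

    if (fd < 0)
            return error_errno(_("could not listen on socket"));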

Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
---
 unix-socket.c | 39 +++++++++++++++++++++++++++++++++++++++
 unix-socket.h |  8 ++++++++
 2 files changed, 47 insertions(+)

diff --git a/unix-socket.c b/unix-socket.c
index 19ed48be990..3a9ffc32268 100644
--- a/unix-socket.c
+++ b/unix-socket.c
@@ -121,3 +121,42 @@ int unix_stream_listen(const char *path)
 	errno = saved_errno;
 	return -1;
 }
+
+int unix_stream_listen_gently(const char *path,
+			      const struct unix_stream_listen_opts *opts)
+{
+	int fd = -1;
+	int bind_successful = 0;
+	int saved_errno;
+	struct sockaddr_un sa;
+	struct unix_sockaddr_context ctx;
+
+	if (unix_sockaddr_init(&sa, path, &ctx) < 0)
+		goto fail;
+
+	fd = socket(AF_UNIX, SOCK_STREAM, 0);
+	if (fd < 0)
+		goto fail;
+
+	if (opts->force_unlink_before_bind)
+		unlink(path);
+
+	if (bind(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0)
+		goto fail;
+	bind_successful = 1;
+
+	if (listen(fd, opts->listen_backlog_size) < 0)
+		goto fail;
+
+	unix_sockaddr_cleanup(&ctx);
+	return fd;
+
+fail:
+	saved_errno = errno;
+	unix_sockaddr_cleanup(&ctx);
+	close(fd);
+	if (bind_successful)
+		unlink(path);
+	errno = saved_errno;
+	return -1;
+}
diff --git a/unix-socket.h b/unix-socket.h
index e271aeec5a0..253f579f087 100644
--- a/unix-socket.h
+++ b/unix-socket.h
@@ -4,4 +4,12 @@
 int unix_stream_connect(const char *path);
 int unix_stream_listen(const char *path);
 
+struct unix_stream_listen_opts {
+	int listen_backlog_size;
+	unsigned int force_unlink_before_bind:1;
+};
+
+int unix_stream_listen_gently(const char *path,
+			      const struct unix_stream_listen_opts *opts);
+
 #endif /* UNIX_SOCKET_H */
-- 
gitgitgadget


^ permalink raw reply related	[flat|nested] 178+ messages in thread

* [PATCH 08/10] unix-socket: add no-chdir option to unix_stream_listen_gently()
  2021-01-12 15:31 [PATCH 00/10] [RFC] Simple IPC Mechanism Jeff Hostetler via GitGitGadget
                   ` (6 preceding siblings ...)
  2021-01-12 15:31 ` [PATCH 07/10] unix-socket: create gentle version of unix_stream_listen() Jeff Hostetler via GitGitGadget
@ 2021-01-12 15:31 ` Jeff Hostetler via GitGitGadget
  2021-01-12 15:31 ` [PATCH 09/10] simple-ipc: add t/helper/test-simple-ipc and t0052 Jeff Hostetler via GitGitGadget
                   ` (4 subsequent siblings)
  12 siblings, 0 replies; 178+ messages in thread
From: Jeff Hostetler via GitGitGadget @ 2021-01-12 15:31 UTC (permalink / raw)
  To: git; +Cc: Jeff Hostetler, Jeff Hostetler

From: Jeff Hostetler <jeffhost@microsoft.com>

Calls to `chdir()` are dangerous in a multi-threaded context.  If
`unix_stream_listen()` is given a socket pathname that is too big to
fit in a `sockaddr_un` structure, it will `chdir()` to the parent
directory of the requested socket pathname, create the socket using a
relative pathname, and then `chdir()` back.  This is not thread-safe.

Add a `disallow_chdir` flag to `struct unix_sockaddr_context` and change
all callers to pass an initialized context structure.

Teach `unix_sockaddr_init()` not to call `chdir()` when the flag is set.

Extend the public interface to `unix_stream_listen_gently()` to also
expose this new flag.
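
For example (the path is illustrative), a multi-threaded server can now
refuse the chdir() workaround entirely and surface the problem instead:

    struct unix_stream_listen_opts opts = {
            .listen_backlog_size = 5,
            .disallow_chdir = 1,
    };
    int fd = unix_stream_listen_gently("/long/path/to/socket", &opts);

    if (fd < 0 && errno == ENAMETOOLONG)
            return error(_("socket path too long"));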

Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
---
 unix-socket.c | 21 +++++++++++++++++----
 unix-socket.h |  1 +
 2 files changed, 18 insertions(+), 4 deletions(-)

diff --git a/unix-socket.c b/unix-socket.c
index 3a9ffc32268..f66987261e6 100644
--- a/unix-socket.c
+++ b/unix-socket.c
@@ -19,8 +19,15 @@ static int chdir_len(const char *orig, int len)
 
 struct unix_sockaddr_context {
 	char *orig_dir;
+	unsigned int disallow_chdir:1;
 };
 
+#define UNIX_SOCKADDR_CONTEXT_INIT \
+{ \
+	.orig_dir = NULL, \
+	.disallow_chdir = 0, \
+}
+
 static void unix_sockaddr_cleanup(struct unix_sockaddr_context *ctx)
 {
 	if (!ctx->orig_dir)
@@ -40,7 +47,11 @@ static int unix_sockaddr_init(struct sockaddr_un *sa, const char *path,
 {
 	int size = strlen(path) + 1;
 
-	ctx->orig_dir = NULL;
+	if (ctx->disallow_chdir && size > sizeof(sa->sun_path)) {
+		errno = ENAMETOOLONG;
+		return -1;
+	}
+
 	if (size > sizeof(sa->sun_path)) {
 		const char *slash = find_last_dir_sep(path);
 		const char *dir;
@@ -75,7 +86,7 @@ int unix_stream_connect(const char *path)
 {
 	int fd, saved_errno;
 	struct sockaddr_un sa;
-	struct unix_sockaddr_context ctx;
+	struct unix_sockaddr_context ctx = UNIX_SOCKADDR_CONTEXT_INIT;
 
 	if (unix_sockaddr_init(&sa, path, &ctx) < 0)
 		return -1;
@@ -97,7 +108,7 @@ int unix_stream_listen(const char *path)
 {
 	int fd, saved_errno;
 	struct sockaddr_un sa;
-	struct unix_sockaddr_context ctx;
+	struct unix_sockaddr_context ctx = UNIX_SOCKADDR_CONTEXT_INIT;
 
 	unlink(path);
 
@@ -129,7 +140,9 @@ int unix_stream_listen_gently(const char *path,
 	int bind_successful = 0;
 	int saved_errno;
 	struct sockaddr_un sa;
-	struct unix_sockaddr_context ctx;
+	struct unix_sockaddr_context ctx = UNIX_SOCKADDR_CONTEXT_INIT;
+
+	ctx.disallow_chdir = opts->disallow_chdir;
 
 	if (unix_sockaddr_init(&sa, path, &ctx) < 0)
 		goto fail;
diff --git a/unix-socket.h b/unix-socket.h
index 253f579f087..08d3d822111 100644
--- a/unix-socket.h
+++ b/unix-socket.h
@@ -7,6 +7,7 @@ int unix_stream_listen(const char *path);
 struct unix_stream_listen_opts {
 	int listen_backlog_size;
 	unsigned int force_unlink_before_bind:1;
+	unsigned int disallow_chdir:1;
 };
 
 int unix_stream_listen_gently(const char *path,
-- 
gitgitgadget


^ permalink raw reply related	[flat|nested] 178+ messages in thread

* [PATCH 09/10] simple-ipc: add t/helper/test-simple-ipc and t0052
  2021-01-12 15:31 [PATCH 00/10] [RFC] Simple IPC Mechanism Jeff Hostetler via GitGitGadget
                   ` (7 preceding siblings ...)
  2021-01-12 15:31 ` [PATCH 08/10] unix-socket: add no-chdir option to unix_stream_listen_gently() Jeff Hostetler via GitGitGadget
@ 2021-01-12 15:31 ` Jeff Hostetler via GitGitGadget
  2021-01-12 15:31 ` [PATCH 10/10] simple-ipc: add Unix domain socket implementation Jeff Hostetler via GitGitGadget
                   ` (3 subsequent siblings)
  12 siblings, 0 replies; 178+ messages in thread
From: Jeff Hostetler via GitGitGadget @ 2021-01-12 15:31 UTC (permalink / raw)
  To: git; +Cc: Jeff Hostetler, Jeff Hostetler

From: Jeff Hostetler <jeffhost@microsoft.com>

Create unit tests for "simple-ipc".  These are currently only enabled
on Windows.
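
For context, the client-side call that the helper's `send` subcommand
wraps looks roughly like the following sketch (the "ipc-test" path and
the "ping" verb are simply the values used by the helper below, not a
fixed API contract):

	struct strbuf answer = STRBUF_INIT;
	struct ipc_client_connect_options options =
		IPC_CLIENT_CONNECT_OPTIONS_INIT;

	options.wait_if_busy = 1;

	/* returns 0 on success and fills `answer` with the daemon's reply */
	if (!ipc_client_send_command("ipc-test", &options, "ping", &answer))
		printf("%s\n", answer.buf);

	strbuf_release(&answer);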

Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
---
 Makefile                   |   1 +
 t/helper/test-simple-ipc.c | 485 +++++++++++++++++++++++++++++++++++++
 t/helper/test-tool.c       |   1 +
 t/helper/test-tool.h       |   1 +
 t/t0052-simple-ipc.sh      | 129 ++++++++++
 5 files changed, 617 insertions(+)
 create mode 100644 t/helper/test-simple-ipc.c
 create mode 100755 t/t0052-simple-ipc.sh

diff --git a/Makefile b/Makefile
index c94d5847919..e7ba8853ea6 100644
--- a/Makefile
+++ b/Makefile
@@ -740,6 +740,7 @@ TEST_BUILTINS_OBJS += test-serve-v2.o
 TEST_BUILTINS_OBJS += test-sha1.o
 TEST_BUILTINS_OBJS += test-sha256.o
 TEST_BUILTINS_OBJS += test-sigchain.o
+TEST_BUILTINS_OBJS += test-simple-ipc.o
 TEST_BUILTINS_OBJS += test-strcmp-offset.o
 TEST_BUILTINS_OBJS += test-string-list.o
 TEST_BUILTINS_OBJS += test-submodule-config.o
diff --git a/t/helper/test-simple-ipc.c b/t/helper/test-simple-ipc.c
new file mode 100644
index 00000000000..4960e79cf18
--- /dev/null
+++ b/t/helper/test-simple-ipc.c
@@ -0,0 +1,485 @@
+/*
+ * test-simple-ipc.c: verify that the Inter-Process Communication works.
+ */
+
+#include "test-tool.h"
+#include "cache.h"
+#include "strbuf.h"
+#include "simple-ipc.h"
+#include "parse-options.h"
+#include "thread-utils.h"
+
+#ifndef SUPPORTS_SIMPLE_IPC
+int cmd__simple_ipc(int argc, const char **argv)
+{
+	die("simple IPC not available on this platform");
+}
+#else
+
+/*
+ * The test daemon defines an "application callback" that supports a
+ * series of commands (see `test_app_cb()`).
+ *
+ * Unknown commands are caught here and we send an error message back
+ * to the client process.
+ */
+static int app__unhandled_command(const char *command,
+				  ipc_server_reply_cb *reply_cb,
+				  struct ipc_server_reply_data *reply_data)
+{
+	struct strbuf buf = STRBUF_INIT;
+	int ret;
+
+	strbuf_addf(&buf, "unhandled command: %s", command);
+	ret = reply_cb(reply_data, buf.buf, buf.len);
+	strbuf_release(&buf);
+
+	return ret;
+}
+
+/*
+ * Reply with a single very large buffer.  This is to ensure that
+ * long responses are properly handled -- whether the chunking occurs
+ * in the kernel or in the (probably pkt-line) layer.
+ */
+#define BIG_ROWS (10000)
+static int app__big_command(ipc_server_reply_cb *reply_cb,
+			    struct ipc_server_reply_data *reply_data)
+{
+	struct strbuf buf = STRBUF_INIT;
+	int row;
+	int ret;
+
+	for (row = 0; row < BIG_ROWS; row++)
+		strbuf_addf(&buf, "big: %.75d\n", row);
+
+	ret = reply_cb(reply_data, buf.buf, buf.len);
+	strbuf_release(&buf);
+
+	return ret;
+}
+
+/*
+ * Reply with a series of lines.  This is to ensure that we can incrementally
+ * compute the response and chunk it to the client.
+ */
+#define CHUNK_ROWS (10000)
+static int app__chunk_command(ipc_server_reply_cb *reply_cb,
+			      struct ipc_server_reply_data *reply_data)
+{
+	struct strbuf buf = STRBUF_INIT;
+	int row;
+	int ret;
+
+	for (row = 0; row < CHUNK_ROWS; row++) {
+		strbuf_setlen(&buf, 0);
+		strbuf_addf(&buf, "big: %.75d\n", row);
+		ret = reply_cb(reply_data, buf.buf, buf.len);
+	}
+
+	strbuf_release(&buf);
+
+	return ret;
+}
+
+/*
+ * Slowly reply with a series of lines.  This models an expensive-to-compute
+ * chunked response (which might happen if this callback is running
+ * in a thread and is fighting for a lock with other threads).
+ */
+#define SLOW_ROWS     (1000)
+#define SLOW_DELAY_MS (10)
+static int app__slow_command(ipc_server_reply_cb *reply_cb,
+			     struct ipc_server_reply_data *reply_data)
+{
+	struct strbuf buf = STRBUF_INIT;
+	int row;
+	int ret;
+
+	for (row = 0; row < SLOW_ROWS; row++) {
+		strbuf_setlen(&buf, 0);
+		strbuf_addf(&buf, "big: %.75d\n", row);
+		ret = reply_cb(reply_data, buf.buf, buf.len);
+		sleep_millisec(SLOW_DELAY_MS);
+	}
+
+	strbuf_release(&buf);
+
+	return ret;
+}
+
+/*
+ * The client sent a command followed by a (possibly very) large buffer.
+ */
+static int app__sendbytes_command(const char *received,
+				  ipc_server_reply_cb *reply_cb,
+				  struct ipc_server_reply_data *reply_data)
+{
+	struct strbuf buf_resp = STRBUF_INIT;
+	const char *p = "?";
+	int len_ballast = 0;
+	int k;
+	int errs = 0;
+	int ret;
+
+	if (skip_prefix(received, "sendbytes ", &p))
+		len_ballast = strlen(p);
+
+	/*
+	 * Verify that the ballast is n copies of a single letter.
+	 * And that the multi-threaded IO layer didn't cross the streams.
+	 */
+	for (k = 1; k < len_ballast; k++)
+		if (p[k] != p[0])
+			errs++;
+
+	if (errs)
+		strbuf_addf(&buf_resp, "errs:%d\n", errs);
+	else
+		strbuf_addf(&buf_resp, "rcvd:%c%08d\n", p[0], len_ballast);
+
+	ret = reply_cb(reply_data, buf_resp.buf, buf_resp.len);
+
+	strbuf_release(&buf_resp);
+
+	return ret;
+}
+
+/*
+ * An arbitrary fixed address to verify that the application instance
+ * data is handled properly.
+ */
+static int my_app_data = 42;
+
+static ipc_server_application_cb test_app_cb;
+
+/*
+ * This is "application callback" that sits on top of the "ipc-server".
+ * It completely defines the set of command verbs supported by this
+ * application.
+ */
+static int test_app_cb(void *application_data,
+		       const char *command,
+		       ipc_server_reply_cb *reply_cb,
+		       struct ipc_server_reply_data *reply_data)
+{
+	/*
+	 * Verify that we received the application-data that we passed
+	 * when we started the ipc-server.  (We have several layers of
+	 * callbacks calling callbacks and it's easy to get things mixed
+	 * up (especially when some are "void*").)
+	 */
+	if (application_data != (void*)&my_app_data)
+		BUG("application_cb: application_data pointer wrong");
+
+	if (!strcmp(command, "quit")) {
+		/*
+		 * Tell ipc-server to hangup with an empty reply.
+		 */
+		return SIMPLE_IPC_QUIT;
+	}
+
+	if (!strcmp(command, "ping")) {
+		const char *answer = "pong";
+		return reply_cb(reply_data, answer, strlen(answer));
+	}
+
+	if (!strcmp(command, "big"))
+		return app__big_command(reply_cb, reply_data);
+
+	if (!strcmp(command, "chunk"))
+		return app__chunk_command(reply_cb, reply_data);
+
+	if (!strcmp(command, "slow"))
+		return app__slow_command(reply_cb, reply_data);
+
+	if (starts_with(command, "sendbytes "))
+		return app__sendbytes_command(command, reply_cb, reply_data);
+
+	return app__unhandled_command(command, reply_cb, reply_data);
+}
+
+/*
+ * This process will run as a simple-ipc server and listen for IPC commands
+ * from client processes.
+ */
+static int daemon__run_server(const char *path, int argc, const char **argv)
+{
+	struct ipc_server_opts opts = {
+		.nr_threads = 5
+	};
+
+	const char * const daemon_usage[] = {
+		N_("test-helper simple-ipc daemon [<options>]"),
+		NULL
+	};
+	struct option daemon_options[] = {
+		OPT_INTEGER(0, "threads", &opts.nr_threads,
+			    N_("number of threads in server thread pool")),
+		OPT_END()
+	};
+
+	argc = parse_options(argc, argv, NULL, daemon_options, daemon_usage, 0);
+
+	if (opts.nr_threads < 1)
+		opts.nr_threads = 1;
+
+	/*
+	 * Synchronously run the ipc-server.  We don't need any application
+	 * instance data, so pass an arbitrary pointer (that we'll later
+	 * verify made the round trip).
+	 */
+	return ipc_server_run(path, &opts, test_app_cb, (void*)&my_app_data);
+}
+
+/*
+ * This process will run a quick probe to see if a simple-ipc server
+ * is active on this path.
+ *
+ * Returns 0 if the server is alive.
+ */
+static int client__probe_server(const char *path)
+{
+	enum ipc_active_state s;
+
+	s = ipc_get_active_state(path);
+	switch (s) {
+	case IPC_STATE__LISTENING:
+		return 0;
+
+	case IPC_STATE__NOT_LISTENING:
+		return error("no server listening at '%s'", path);
+
+	case IPC_STATE__PATH_NOT_FOUND:
+		return error("path not found '%s'", path);
+
+	case IPC_STATE__INVALID_PATH:
+		return error("invalid pipe/socket name '%s'", path);
+
+	case IPC_STATE__OTHER_ERROR:
+	default:
+		return error("other error for '%s'", path);
+	}
+}
+
+/*
+ * Send an IPC command to an already-running server daemon and print the
+ * response.
+ *
+ * argv[2] contains a simple (1 word) command verb that `test_app_cb()`
+ * (in the daemon process) will understand.
+ */
+static int client__send_ipc(int argc, const char **argv, const char *path)
+{
+	const char *command = argc > 2 ? argv[2] : "(no command)";
+	struct strbuf buf = STRBUF_INIT;
+	struct ipc_client_connect_options options
+		= IPC_CLIENT_CONNECT_OPTIONS_INIT;
+
+	options.wait_if_busy = 1;
+	options.wait_if_not_found = 0;
+
+	if (!ipc_client_send_command(path, &options, command, &buf)) {
+		printf("%s\n", buf.buf);
+		fflush(stdout);
+		strbuf_release(&buf);
+
+		return 0;
+	}
+
+	return error("failed to send '%s' to '%s'", command, path);
+}
+
+/*
+ * Send an IPC command followed by ballast to confirm that a large
+ * message can be sent and that the kernel or pkt-line layers will
+ * properly chunk it and that the daemon receives the entire message.
+ */
+static int do_sendbytes(int bytecount, char byte, const char *path)
+{
+	struct strbuf buf_send = STRBUF_INIT;
+	struct strbuf buf_resp = STRBUF_INIT;
+	struct ipc_client_connect_options options
+		= IPC_CLIENT_CONNECT_OPTIONS_INIT;
+
+	options.wait_if_busy = 1;
+	options.wait_if_not_found = 0;
+
+	strbuf_addstr(&buf_send, "sendbytes ");
+	strbuf_addchars(&buf_send, byte, bytecount);
+
+	if (!ipc_client_send_command(path, &options, buf_send.buf, &buf_resp)) {
+		strbuf_rtrim(&buf_resp);
+		printf("sent:%c%08d %s\n", byte, bytecount, buf_resp.buf);
+		fflush(stdout);
+		strbuf_release(&buf_send);
+		strbuf_release(&buf_resp);
+
+		return 0;
+	}
+
+	return error("client failed to sendbytes(%d, '%c') to '%s'",
+		     bytecount, byte, path);
+}
+
+/*
+ * Send an IPC command with ballast to an already-running server daemon.
+ */
+static int client__sendbytes(int argc, const char **argv, const char *path)
+{
+	int bytecount = 1024;
+	const char *string = "x";
+	const char * const sendbytes_usage[] = {
+		N_("test-helper simple-ipc sendbytes [<options>]"),
+		NULL
+	};
+	struct option sendbytes_options[] = {
+		OPT_INTEGER(0, "bytecount", &bytecount, N_("number of bytes")),
+		OPT_STRING(0, "byte", &string, N_("byte"), N_("ballast")),
+		OPT_END()
+	};
+
+	argc = parse_options(argc, argv, NULL, sendbytes_options, sendbytes_usage, 0);
+
+	return do_sendbytes(bytecount, string[0], path);
+}
+
+struct multiple_thread_data {
+	pthread_t pthread_id;
+	struct multiple_thread_data *next;
+	const char *path;
+	int bytecount;
+	int batchsize;
+	int sum_errors;
+	int sum_good;
+	char letter;
+};
+
+static void *multiple_thread_proc(void *_multiple_thread_data)
+{
+	struct multiple_thread_data *d = _multiple_thread_data;
+	int k;
+
+	trace2_thread_start("multiple");
+
+	for (k = 0; k < d->batchsize; k++) {
+		if (do_sendbytes(d->bytecount + k, d->letter, d->path))
+			d->sum_errors++;
+		else
+			d->sum_good++;
+	}
+
+	trace2_thread_exit();
+	return NULL;
+}
+
+/*
+ * Start a client-side thread pool.  Each thread sends a series of
+ * IPC requests.  Each request is on a new connection to the server.
+ */
+static int client__multiple(int argc, const char **argv, const char *path)
+{
+	struct multiple_thread_data *list = NULL;
+	int k;
+	int nr_threads = 5;
+	int bytecount = 1;
+	int batchsize = 10;
+	int sum_join_errors = 0;
+	int sum_thread_errors = 0;
+	int sum_good = 0;
+
+	const char * const multiple_usage[] = {
+		N_("test-helper simple-ipc multiple [<options>]"),
+		NULL
+	};
+	struct option multiple_options[] = {
+		OPT_INTEGER(0, "bytecount", &bytecount, N_("number of bytes")),
+		OPT_INTEGER(0, "threads", &nr_threads, N_("number of threads")),
+		OPT_INTEGER(0, "batchsize", &batchsize, N_("number of requests per thread")),
+		OPT_END()
+	};
+
+	argc = parse_options(argc, argv, NULL, multiple_options, multiple_usage, 0);
+
+	if (bytecount < 1)
+		bytecount = 1;
+	if (nr_threads < 1)
+		nr_threads = 1;
+	if (batchsize < 1)
+		batchsize = 1;
+
+	for (k = 0; k < nr_threads; k++) {
+		struct multiple_thread_data *d = xcalloc(1, sizeof(*d));
+		d->next = list;
+		d->path = path;
+		d->bytecount = bytecount + batchsize*(k/26);
+		d->batchsize = batchsize;
+		d->sum_errors = 0;
+		d->sum_good = 0;
+		d->letter = 'A' + (k % 26);
+
+		if (pthread_create(&d->pthread_id, NULL, multiple_thread_proc, d)) {
+			warning("failed to create thread[%d], skipping remainder", k);
+			free(d);
+			break;
+		}
+
+		list = d;
+	}
+
+	while (list) {
+		struct multiple_thread_data *d = list;
+
+		if (pthread_join(d->pthread_id, NULL))
+			sum_join_errors++;
+
+		sum_thread_errors += d->sum_errors;
+		sum_good += d->sum_good;
+
+		list = d->next;
+		free(d);
+	}
+
+	printf("client (good %d) (join %d), (errors %d)\n",
+	       sum_good, sum_join_errors, sum_thread_errors);
+
+	return (sum_join_errors + sum_thread_errors) ? 1 : 0;
+}
+
+int cmd__simple_ipc(int argc, const char **argv)
+{
+	const char *path = "ipc-test";
+
+	if (argc == 2 && !strcmp(argv[1], "SUPPORTS_SIMPLE_IPC"))
+		return 0;
+
+	/* Use '!!' on all dispatch functions to map from `error()` style
+	 * (returns -1) to `test_must_fail` style (expects 1) and
+	 * get less confusing shell error messages.
+	 */
+
+	if (argc == 2 && !strcmp(argv[1], "is-active"))
+		return !!client__probe_server(path);
+
+	if (argc >= 2 && !strcmp(argv[1], "daemon"))
+		return !!daemon__run_server(path, argc, argv);
+
+	/*
+	 * Client commands follow.  Ensure a server is running before
+	 * going any further.
+	 */
+	if (client__probe_server(path))
+		return 1;
+
+	if ((argc == 2 || argc == 3) && !strcmp(argv[1], "send"))
+		return !!client__send_ipc(argc, argv, path);
+
+	if (argc >= 2 && !strcmp(argv[1], "sendbytes"))
+		return !!client__sendbytes(argc, argv, path);
+
+	if (argc >= 2 && !strcmp(argv[1], "multiple"))
+		return !!client__multiple(argc, argv, path);
+
+	die("Unhandled argv[1]: '%s'", argv[1]);
+}
+#endif
diff --git a/t/helper/test-tool.c b/t/helper/test-tool.c
index 9d6d14d9293..a409655f03b 100644
--- a/t/helper/test-tool.c
+++ b/t/helper/test-tool.c
@@ -64,6 +64,7 @@ static struct test_cmd cmds[] = {
 	{ "sha1", cmd__sha1 },
 	{ "sha256", cmd__sha256 },
 	{ "sigchain", cmd__sigchain },
+	{ "simple-ipc", cmd__simple_ipc },
 	{ "strcmp-offset", cmd__strcmp_offset },
 	{ "string-list", cmd__string_list },
 	{ "submodule-config", cmd__submodule_config },
diff --git a/t/helper/test-tool.h b/t/helper/test-tool.h
index a6470ff62c4..564eb3c8e91 100644
--- a/t/helper/test-tool.h
+++ b/t/helper/test-tool.h
@@ -54,6 +54,7 @@ int cmd__sha1(int argc, const char **argv);
 int cmd__oid_array(int argc, const char **argv);
 int cmd__sha256(int argc, const char **argv);
 int cmd__sigchain(int argc, const char **argv);
+int cmd__simple_ipc(int argc, const char **argv);
 int cmd__strcmp_offset(int argc, const char **argv);
 int cmd__string_list(int argc, const char **argv);
 int cmd__submodule_config(int argc, const char **argv);
diff --git a/t/t0052-simple-ipc.sh b/t/t0052-simple-ipc.sh
new file mode 100755
index 00000000000..69588354545
--- /dev/null
+++ b/t/t0052-simple-ipc.sh
@@ -0,0 +1,129 @@
+#!/bin/sh
+
+test_description='simple command server'
+
+. ./test-lib.sh
+
+test-tool simple-ipc SUPPORTS_SIMPLE_IPC || {
+	skip_all='simple IPC not supported on this platform'
+	test_done
+}
+
+stop_simple_IPC_server () {
+	test -n "$SIMPLE_IPC_PID" || return 0
+
+	kill "$SIMPLE_IPC_PID" &&
+	SIMPLE_IPC_PID=
+}
+
+test_expect_success 'start simple command server' '
+	{ test-tool simple-ipc daemon --threads=8 & } &&
+	SIMPLE_IPC_PID=$! &&
+	test_atexit stop_simple_IPC_server &&
+
+	sleep 1 &&
+
+	test-tool simple-ipc is-active
+'
+
+test_expect_success 'simple command server' '
+	test-tool simple-ipc send ping >actual &&
+	echo pong >expect &&
+	test_cmp expect actual
+'
+
+test_expect_success 'servers cannot share the same path' '
+	test_must_fail test-tool simple-ipc daemon &&
+	test-tool simple-ipc is-active
+'
+
+test_expect_success 'big response' '
+	test-tool simple-ipc send big >actual &&
+	test_line_count -ge 10000 actual &&
+	grep -q "big: [0]*9999\$" actual
+'
+
+test_expect_success 'chunk response' '
+	test-tool simple-ipc send chunk >actual &&
+	test_line_count -ge 10000 actual &&
+	grep -q "big: [0]*9999\$" actual
+'
+
+test_expect_success 'slow response' '
+	test-tool simple-ipc send slow >actual &&
+	test_line_count -ge 100 actual &&
+	grep -q "big: [0]*99\$" actual
+'
+
+# Send an IPC with n=100,000 bytes of ballast.  This should be large enough
+# to force both the kernel and the pkt-line layer to chunk the message to the
+# daemon and for the daemon to receive it in chunks.
+#
+test_expect_success 'sendbytes' '
+	test-tool simple-ipc sendbytes --bytecount=100000 --byte=A >actual &&
+	grep "sent:A00100000 rcvd:A00100000" actual
+'
+
+# Start a series of <threads> client threads that each make <batchsize>
+# IPC requests to the server.  Each (<threads> * <batchsize>) request
+# will open a new connection to the server and randomly bind to a server
+# thread.  Each client thread exits after completing its batch.  So the
+# total number of live client threads will be smaller than the total number of requests.
+# Each request will send a message containing at least <bytecount> bytes
+# of ballast.  (Responses are small.)
+#
+# The purpose here is to test threading in the server and responding to
+# many concurrent client requests (regardless of whether they come from
+# 1 client process or many).  And to test that the server side of the
+# named pipe/socket is stable.  (On Windows this means that the server
+# pipe is properly recycled.)
+#
+# On Windows it also lets us adjust the connection timeout in the
+# `ipc_client_send_command()`.
+#
+# Note it is easy to drive the system into failure by requesting an
+# insane number of threads on client or server and/or increasing the
+# per-thread batchsize or the per-request bytecount (ballast).
+# On Windows these failures look like "pipe is busy" errors.
+# So I've chosen fairly conservative values for now.
+#
+# We expect output of the form "sent:<letter><length> ..."
+# With terms (7, 19, 13) we expect:
+#   <letter> in [A-G]
+#   <length> in [19+0 .. 19+(13-1)]
+# and (7 * 13) successful responses.
+#
+test_expect_success 'stress test threads' '
+	test-tool simple-ipc multiple \
+		--threads=7 \
+		--bytecount=19 \
+		--batchsize=13 \
+		>actual &&
+	test_line_count = 92 actual &&
+	grep "good 91" actual &&
+	grep "sent:A" <actual >actual_a &&
+	cat >expect_a <<-EOF &&
+		sent:A00000019 rcvd:A00000019
+		sent:A00000020 rcvd:A00000020
+		sent:A00000021 rcvd:A00000021
+		sent:A00000022 rcvd:A00000022
+		sent:A00000023 rcvd:A00000023
+		sent:A00000024 rcvd:A00000024
+		sent:A00000025 rcvd:A00000025
+		sent:A00000026 rcvd:A00000026
+		sent:A00000027 rcvd:A00000027
+		sent:A00000028 rcvd:A00000028
+		sent:A00000029 rcvd:A00000029
+		sent:A00000030 rcvd:A00000030
+		sent:A00000031 rcvd:A00000031
+	EOF
+	test_cmp expect_a actual_a
+'
+
+test_expect_success '`quit` works' '
+	test-tool simple-ipc send quit &&
+	test_must_fail test-tool simple-ipc is-active &&
+	test_must_fail test-tool simple-ipc send ping
+'
+
+test_done
-- 
gitgitgadget


^ permalink raw reply related	[flat|nested] 178+ messages in thread

* [PATCH 10/10] simple-ipc: add Unix domain socket implementation
  2021-01-12 15:31 [PATCH 00/10] [RFC] Simple IPC Mechanism Jeff Hostetler via GitGitGadget
                   ` (8 preceding siblings ...)
  2021-01-12 15:31 ` [PATCH 09/10] simple-ipc: add t/helper/test-simple-ipc and t0052 Jeff Hostetler via GitGitGadget
@ 2021-01-12 15:31 ` Jeff Hostetler via GitGitGadget
  2021-01-12 16:50 ` [PATCH 00/10] [RFC] Simple IPC Mechanism Ævar Arnfjörð Bjarmason
                   ` (2 subsequent siblings)
  12 siblings, 0 replies; 178+ messages in thread
From: Jeff Hostetler via GitGitGadget @ 2021-01-12 15:31 UTC (permalink / raw)
  To: git; +Cc: Jeff Hostetler, Jeff Hostetler

From: Jeff Hostetler <jeffhost@microsoft.com>

Create a Unix domain socket based implementation of "simple-ipc".
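
As a rough sketch of how a daemon is expected to use this
implementation (mirroring the test helper from the previous patch; the
callback body, thread count, and socket path below are only examples):

	static int my_app_cb(void *data, const char *command,
			     ipc_server_reply_cb *reply_cb,
			     struct ipc_server_reply_data *reply_data)
	{
		if (!strcmp(command, "quit"))
			return SIMPLE_IPC_QUIT; /* empty reply, then shut the server down */

		return reply_cb(reply_data, "ok", 2);
	}

	/* then, e.g. in the daemon's cmd_main(): */
	struct ipc_server_opts opts = { .nr_threads = 4 };

	/* runs synchronously until the application requests shutdown */
	return ipc_server_run("/path/to/socket", &opts, my_app_cb, NULL);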

Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
---
 Makefile                            |    2 +
 compat/simple-ipc/ipc-unix-socket.c | 1093 +++++++++++++++++++++++++++
 contrib/buildsystems/CMakeLists.txt |    2 +
 simple-ipc.h                        |    7 +-
 4 files changed, 1103 insertions(+), 1 deletion(-)
 create mode 100644 compat/simple-ipc/ipc-unix-socket.c

diff --git a/Makefile b/Makefile
index e7ba8853ea6..f2524c02ff0 100644
--- a/Makefile
+++ b/Makefile
@@ -1681,6 +1681,8 @@ ifdef NO_UNIX_SOCKETS
 	BASIC_CFLAGS += -DNO_UNIX_SOCKETS
 else
 	LIB_OBJS += unix-socket.o
+	LIB_OBJS += compat/simple-ipc/ipc-shared.o
+	LIB_OBJS += compat/simple-ipc/ipc-unix-socket.o
 endif
 
 ifdef USE_WIN32_IPC
diff --git a/compat/simple-ipc/ipc-unix-socket.c b/compat/simple-ipc/ipc-unix-socket.c
new file mode 100644
index 00000000000..be100049e4b
--- /dev/null
+++ b/compat/simple-ipc/ipc-unix-socket.c
@@ -0,0 +1,1093 @@
+#include "cache.h"
+#include "simple-ipc.h"
+#include "strbuf.h"
+#include "pkt-line.h"
+#include "thread-utils.h"
+#include "unix-socket.h"
+
+#ifdef NO_UNIX_SOCKETS
+#error compat/simple-ipc/ipc-unix-socket.c requires Unix sockets
+#endif
+
+enum ipc_active_state ipc_get_active_state(const char *path)
+{
+	enum ipc_active_state state = IPC_STATE__OTHER_ERROR;
+	struct ipc_client_connect_options options
+		= IPC_CLIENT_CONNECT_OPTIONS_INIT;
+	struct stat st;
+	int fd_test = -1;
+
+	options.wait_if_busy = 0;
+	options.wait_if_not_found = 0;
+
+	if (lstat(path, &st) == -1) {
+		switch (errno) {
+		case ENOENT:
+		case ENOTDIR:
+			return IPC_STATE__NOT_LISTENING;
+		default:
+			return IPC_STATE__INVALID_PATH;
+		}
+	}
+
+	/* also complain if a plain file is in the way */
+	if ((st.st_mode & S_IFMT) != S_IFSOCK)
+		return IPC_STATE__INVALID_PATH;
+
+	/*
+	 * Just because the filesystem has a S_IFSOCK type inode
+	 * at `path`, doesn't mean that there is a server listening.
+	 * Ping it to be sure.
+	 */
+	state = ipc_client_try_connect(path, &options, &fd_test);
+	close(fd_test);
+
+	return state;
+}
+
+/*
+ * This value was chosen at random.
+ */
+#define WAIT_STEP_MS (50)
+
+/*
+ * Try to connect to the server.  If the server is just starting up or
+ * is very busy, we may not get a connection the first time.
+ */
+static enum ipc_active_state connect_to_server(
+	const char *path,
+	int timeout_ms,
+	const struct ipc_client_connect_options *options,
+	int *pfd)
+{
+	int wait_ms = WAIT_STEP_MS;
+	int k;
+
+	*pfd = -1;
+
+	for (k = 0; k < timeout_ms; k += wait_ms) {
+		int fd = unix_stream_connect(path);
+
+		if (fd != -1) {
+			*pfd = fd;
+			return IPC_STATE__LISTENING;
+		}
+
+		if (errno == ENOENT) {
+			if (!options->wait_if_not_found)
+				return IPC_STATE__PATH_NOT_FOUND;
+
+			goto sleep_and_try_again;
+		}
+
+		if (errno == ETIMEDOUT) {
+			if (!options->wait_if_busy)
+				return IPC_STATE__NOT_LISTENING;
+
+			goto sleep_and_try_again;
+		}
+
+		if (errno == ECONNREFUSED) {
+			if (!options->wait_if_busy)
+				return IPC_STATE__NOT_LISTENING;
+
+			goto sleep_and_try_again;
+		}
+
+		return IPC_STATE__OTHER_ERROR;
+
+	sleep_and_try_again:
+		sleep_millisec(wait_ms);
+	}
+
+	return IPC_STATE__NOT_LISTENING;
+}
+
+/*
+ * A randomly chosen timeout value.
+ */
+#define MY_CONNECTION_TIMEOUT_MS (1000)
+
+enum ipc_active_state ipc_client_try_connect(
+	const char *path,
+	const struct ipc_client_connect_options *options,
+	int *pfd)
+{
+	enum ipc_active_state state = IPC_STATE__OTHER_ERROR;
+
+	*pfd = -1;
+
+	trace2_region_enter("ipc-client", "try-connect", NULL);
+	trace2_data_string("ipc-client", NULL, "try-connect/path", path);
+
+	state = connect_to_server(path, MY_CONNECTION_TIMEOUT_MS,
+				  options, pfd);
+
+	trace2_data_intmax("ipc-client", NULL, "try-connect/state",
+			   (intmax_t)state);
+	trace2_region_leave("ipc-client", "try-connect", NULL);
+	return state;
+}
+
+int ipc_client_send_command_to_fd(int fd, const char *message,
+				  struct strbuf *answer)
+{
+	int ret = 0;
+
+	strbuf_setlen(answer, 0);
+
+	trace2_region_enter("ipc-client", "send-command", NULL);
+
+	if (write_packetized_from_buf(message, strlen(message), fd, 1) < 0) {
+		ret = error(_("could not send IPC command"));
+		goto done;
+	}
+
+	if (read_packetized_to_strbuf(fd, answer, PACKET_READ_NEVER_DIE) < 0) {
+		ret = error(_("could not read IPC response"));
+		goto done;
+	}
+
+done:
+	trace2_region_leave("ipc-client", "send-command", NULL);
+	return ret;
+}
+
+int ipc_client_send_command(const char *path,
+			    const struct ipc_client_connect_options *options,
+			    const char *message, struct strbuf *answer)
+{
+	int fd;
+	int ret = -1;
+	enum ipc_active_state state;
+
+	state = ipc_client_try_connect(path, options, &fd);
+
+	if (state != IPC_STATE__LISTENING)
+		return ret;
+
+	ret = ipc_client_send_command_to_fd(fd, message, answer);
+	close(fd);
+	return ret;
+}
+
+static int set_socket_blocking_flag(int fd, int make_nonblocking)
+{
+	int flags;
+
+	flags = fcntl(fd, F_GETFL, NULL);
+
+	if (flags < 0)
+		return -1;
+
+	if (make_nonblocking)
+		flags |= O_NONBLOCK;
+	else
+		flags &= ~O_NONBLOCK;
+
+	return fcntl(fd, F_SETFL, flags);
+}
+
+/*
+ * Magic numbers used to annotate callback instance data.
+ * These are used to help guard against accidentally passing the
+ * wrong instance data across multiple levels of callbacks (which
+ * is easy to do if there are `void*` arguments).
+ */
+enum magic {
+	MAGIC_SERVER_REPLY_DATA,
+	MAGIC_WORKER_THREAD_DATA,
+	MAGIC_ACCEPT_THREAD_DATA,
+	MAGIC_SERVER_DATA,
+};
+
+struct ipc_server_reply_data {
+	enum magic magic;
+	int fd;
+	struct ipc_worker_thread_data *worker_thread_data;
+};
+
+struct ipc_worker_thread_data {
+	enum magic magic;
+	struct ipc_worker_thread_data *next_thread;
+	struct ipc_server_data *server_data;
+	pthread_t pthread_id;
+};
+
+struct ipc_accept_thread_data {
+	enum magic magic;
+	struct ipc_server_data *server_data;
+	int fd_listen;
+	ino_t inode_listen;
+	int fd_send_shutdown;
+	int fd_wait_shutdown;
+	pthread_t pthread_id;
+};
+
+/*
+ * With unix-sockets, the conceptual "ipc-server" is implemented as a single
+ * controller "accept-thread" thread and a pool of "worker-thread" threads.
+ * The former does the usual `accept()` loop and dispatches connections
+ * to an idle worker thread.  The worker threads wait in an idle loop for
+ * a new connection, communicate with the client and relay data to/from
+ * the `application_cb` and then wait for another connection from the
+ * server thread.  This avoids the overhead of constantly creating and
+ * destroying threads.
+ */
+struct ipc_server_data {
+	enum magic magic;
+	ipc_server_application_cb *application_cb;
+	void *application_data;
+	struct strbuf buf_path;
+
+	struct ipc_accept_thread_data *accept_thread;
+	struct ipc_worker_thread_data *worker_thread_list;
+
+	pthread_mutex_t work_available_mutex;
+	pthread_cond_t work_available_cond;
+
+	/*
+	 * Accepted but not yet processed client connections are kept
+	 * in a circular buffer FIFO.  The queue is empty when the
+	 * positions are equal.
+	 */
+	int *fifo_fds;
+	int queue_size;
+	int back_pos;
+	int front_pos;
+
+	int shutdown_requested;
+	int is_stopped;
+};
+
+/*
+ * Remove and return the oldest queued connection.
+ *
+ * Returns -1 if empty.
+ */
+static int fifo_dequeue(struct ipc_server_data *server_data)
+{
+	/* ASSERT holding mutex */
+
+	int fd;
+
+	if (server_data->back_pos == server_data->front_pos)
+		return -1;
+
+	fd = server_data->fifo_fds[server_data->front_pos];
+	server_data->fifo_fds[server_data->front_pos] = -1;
+
+	server_data->front_pos++;
+	if (server_data->front_pos == server_data->queue_size)
+		server_data->front_pos = 0;
+
+	return fd;
+}
+
+/*
+ * Push a new fd onto the back of the queue.
+ *
+ * Drop it and return -1 if queue is already full.
+ */
+static int fifo_enqueue(struct ipc_server_data *server_data, int fd)
+{
+	/* ASSERT holding mutex */
+
+	int next_back_pos;
+
+	next_back_pos = server_data->back_pos + 1;
+	if (next_back_pos == server_data->queue_size)
+		next_back_pos = 0;
+
+	if (next_back_pos == server_data->front_pos) {
+		/* Queue is full. Just drop it. */
+		close(fd);
+		return -1;
+	}
+
+	server_data->fifo_fds[server_data->back_pos] = fd;
+	server_data->back_pos = next_back_pos;
+
+	return fd;
+}
+
+/*
+ * Wait for a connection to be queued to the FIFO and return it.
+ *
+ * Returns -1 if someone has already requested a shutdown.
+ */
+static int worker_thread__wait_for_connection(
+	struct ipc_worker_thread_data *worker_thread_data)
+{
+	/* ASSERT NOT holding mutex */
+
+	struct ipc_server_data *server_data = worker_thread_data->server_data;
+	int fd = -1;
+
+	pthread_mutex_lock(&server_data->work_available_mutex);
+	for (;;) {
+		if (server_data->shutdown_requested)
+			break;
+
+		fd = fifo_dequeue(server_data);
+		if (fd >= 0)
+			break;
+
+		pthread_cond_wait(&server_data->work_available_cond,
+				  &server_data->work_available_mutex);
+	}
+	pthread_mutex_unlock(&server_data->work_available_mutex);
+
+	return fd;
+}
+
+/*
+ * Forward declare our reply callback function so that any compiler
+ * errors are reported when we actually define the function (in addition
+ * to any errors reported when we try to pass this callback function as
+ * a parameter in a function call).  The former are easier to understand.
+ */
+static ipc_server_reply_cb do_io_reply_callback;
+
+/*
+ * Relay application's response message to the client process.
+ * (We do not flush at this point because we allow the caller
+ * to chunk data to the client through us.)
+ */
+static int do_io_reply_callback(struct ipc_server_reply_data *reply_data,
+		       const char *response, size_t response_len)
+{
+	if (reply_data->magic != MAGIC_SERVER_REPLY_DATA)
+		BUG("reply_cb called with wrong instance data");
+
+	return write_packetized_from_buf(response, response_len,
+					 reply_data->fd, 0);
+}
+
+/* A randomly chosen value. */
+#define MY_WAIT_POLL_TIMEOUT_MS (10)
+
+/*
+ * If the client hangs up without sending any data on the wire, just
+ * quietly close the socket and ignore this client.
+ *
+ * This worker thread is committed to reading the IPC request data
+ * from the client at the other end of this fd.  Wait here for the
+ * client to actually put something on the wire -- because if the
+ * client just does a ping (connect and hangup without sending any
+ * data), our use of the pkt-line read routines will spew an error
+ * message.
+ *
+ * Return -1 if the client hung up.
+ * Return 0 if data (possibly incomplete) is ready.
+ */
+static int worker_thread__wait_for_io_start(
+	struct ipc_worker_thread_data *worker_thread_data,
+	int fd)
+{
+	struct ipc_server_data *server_data = worker_thread_data->server_data;
+	struct pollfd pollfd[1];
+	int result;
+
+	for (;;) {
+		pollfd[0].fd = fd;
+		pollfd[0].events = POLLIN;
+
+		result = poll(pollfd, 1, MY_WAIT_POLL_TIMEOUT_MS);
+		if (result < 0) {
+			if (errno == EINTR)
+				continue;
+			goto cleanup;
+		}
+
+		if (result == 0) {
+			/* a timeout */
+
+			int in_shutdown;
+
+			pthread_mutex_lock(&server_data->work_available_mutex);
+			in_shutdown = server_data->shutdown_requested;
+			pthread_mutex_unlock(&server_data->work_available_mutex);
+
+			/*
+			 * If a shutdown is already in progress and this
+			 * client has not started talking yet, just drop it.
+			 */
+			if (in_shutdown)
+				goto cleanup;
+			continue;
+		}
+
+		if (pollfd[0].revents & POLLHUP)
+			goto cleanup;
+
+		if (pollfd[0].revents & POLLIN)
+			return 0;
+
+		goto cleanup;
+	}
+
+cleanup:
+	close(fd);
+	return -1;
+}
+
+/*
+ * Receive the request/command from the client and pass it to the
+ * registered request-callback.  The request-callback will compose
+ * a response and call our reply-callback to send it to the client.
+ */
+static int worker_thread__do_io(
+	struct ipc_worker_thread_data *worker_thread_data,
+	int fd)
+{
+	/* ASSERT NOT holding lock */
+
+	struct strbuf buf = STRBUF_INIT;
+	struct ipc_server_reply_data reply_data;
+	int ret = 0;
+
+	reply_data.magic = MAGIC_SERVER_REPLY_DATA;
+	reply_data.worker_thread_data = worker_thread_data;
+
+	reply_data.fd = fd;
+
+	ret = read_packetized_to_strbuf(reply_data.fd, &buf,
+					PACKET_READ_NEVER_DIE);
+	if (ret >= 0) {
+		ret = worker_thread_data->server_data->application_cb(
+			worker_thread_data->server_data->application_data,
+			buf.buf, do_io_reply_callback, &reply_data);
+
+		packet_flush_gently(reply_data.fd);
+	}
+	else {
+		/*
+		 * The client probably disconnected/shutdown before it
+		 * could send a well-formed message.  Ignore it.
+		 */
+	}
+
+	strbuf_release(&buf);
+	close(reply_data.fd);
+
+	return ret;
+}
+
+/*
+ * Block SIGPIPE on the current thread (so that we get EPIPE from
+ * write() rather than an actual signal).
+ *
+ * Note that using sigchain_push() and _pop() to control SIGPIPE
+ * around our IO calls is not thread safe:
+ * [] It uses a global stack of handler frames.
+ * [] It uses ALLOC_GROW() to resize it.
+ * [] Finally, according to the `signal(2)` man-page:
+ *    "The effects of `signal()` in a multithreaded process are unspecified."
+ */
+static void thread_block_sigpipe(sigset_t *old_set)
+{
+	sigset_t new_set;
+
+	sigemptyset(&new_set);
+	sigaddset(&new_set, SIGPIPE);
+
+	sigemptyset(old_set);
+	pthread_sigmask(SIG_BLOCK, &new_set, old_set);
+}
+
+/*
+ * Thread proc for an IPC worker thread.  It handles a series of
+ * connections from clients.  It pulls the next fd from the queue,
+ * processes it, and then waits for the next client.
+ *
+ * Block SIGPIPE in this worker thread for the life of the thread.
+ * This avoids stray (and sometimes delayed) SIGPIPE signals caused
+ * by client errors and/or when we are under extremely heavy IO load.
+ *
+ * This means that the application callback will have SIGPIPE blocked.
+ * The callback should not change it.
+ */
+static void *worker_thread_proc(void *_worker_thread_data)
+{
+	struct ipc_worker_thread_data *worker_thread_data = _worker_thread_data;
+	struct ipc_server_data *server_data = worker_thread_data->server_data;
+	sigset_t old_set;
+	int fd, io;
+	int ret;
+
+	trace2_thread_start("ipc-worker");
+
+	thread_block_sigpipe(&old_set);
+
+	for (;;) {
+		fd = worker_thread__wait_for_connection(worker_thread_data);
+		if (fd == -1)
+			break; /* in shutdown */
+
+		io = worker_thread__wait_for_io_start(worker_thread_data, fd);
+		if (io == -1)
+			continue; /* client hung up without sending anything */
+
+		ret = worker_thread__do_io(worker_thread_data, fd);
+
+		if (ret == SIMPLE_IPC_QUIT) {
+			trace2_data_string("ipc-worker", NULL, "queue_stop_async",
+					   "application_quit");
+			/* The application told us to shutdown. */
+			ipc_server_stop_async(server_data);
+			break;
+		}
+	}
+
+	trace2_thread_exit();
+	return NULL;
+}
+
+/*
+ * Return 1 if someone deleted or stole the on-disk socket from us.
+ */
+static int socket_was_stolen(struct ipc_accept_thread_data *accept_thread_data)
+{
+	struct stat st;
+
+	if (lstat(accept_thread_data->server_data->buf_path.buf, &st) == -1)
+		return 1;
+
+	if (st.st_ino != accept_thread_data->inode_listen)
+		return 1;
+
+	return 0;
+}
+
+/* A randomly chosen value. */
+#define MY_ACCEPT_POLL_TIMEOUT_MS (60 * 1000)
+
+/*
+ * Accept a new client connection on our socket.  This uses non-blocking
+ * IO so that we can also wait for shutdown requests on our socket-pair
+ * without actually spinning on a fast timeout.
+ */
+static int accept_thread__wait_for_connection(
+	struct ipc_accept_thread_data *accept_thread_data)
+{
+	struct pollfd pollfd[2];
+	int result;
+
+	for (;;) {
+		pollfd[0].fd = accept_thread_data->fd_wait_shutdown;
+		pollfd[0].events = POLLIN;
+
+		pollfd[1].fd = accept_thread_data->fd_listen;
+		pollfd[1].events = POLLIN;
+
+		result = poll(pollfd, 2, MY_ACCEPT_POLL_TIMEOUT_MS);
+		if (result < 0) {
+			if (errno == EINTR)
+				continue;
+			return result;
+		}
+
+		if (result == 0) {
+			/* a timeout */
+
+			/*
+			 * If someone deletes or force-creates a new unix
+			 * domain socket at our path, all future clients
+			 * will be routed elsewhere and we silently starve.
+			 * If that happens, just queue a shutdown.
+			 */
+			if (socket_was_stolen(
+				    accept_thread_data)) {
+				trace2_data_string("ipc-accept", NULL,
+						   "queue_stop_async",
+						   "socket_stolen");
+				ipc_server_stop_async(
+					accept_thread_data->server_data);
+			}
+			continue;
+		}
+
+		if (pollfd[0].revents & POLLIN) {
+			/* shutdown message queued to socketpair */
+			return -1;
+		}
+
+		if (pollfd[1].revents & POLLIN) {
+			/* a connection is available on fd_listen */
+
+			int client_fd = accept(accept_thread_data->fd_listen,
+					       NULL, NULL);
+			if (client_fd >= 0)
+				return client_fd;
+
+			/*
+			 * An error here is unlikely -- it probably
+			 * indicates that the connecting process has
+			 * already dropped the connection.
+			 */
+			continue;
+		}
+
+		BUG("unhandled poll result errno=%d r[0]=%d r[1]=%d",
+		    errno, pollfd[0].revents, pollfd[1].revents);
+	}
+}
+
+/*
+ * Thread proc for the IPC server "accept thread".  This waits for
+ * an incoming socket connection, appends it to the queue of available
+ * connections, and notifies a worker thread to process it.
+ *
+ * Block SIGPIPE in this thread for the life of the thread.  This
+ * avoids any stray SIGPIPE signals when closing pipe fds under
+ * extremely heavy loads (such as when the fifo queue is full and we
+ * drop incoming connections).
+ */
+static void *accept_thread_proc(void *_accept_thread_data)
+{
+	struct ipc_accept_thread_data *accept_thread_data = _accept_thread_data;
+	struct ipc_server_data *server_data = accept_thread_data->server_data;
+	sigset_t old_set;
+
+	trace2_thread_start("ipc-accept");
+
+	thread_block_sigpipe(&old_set);
+
+	for (;;) {
+		int client_fd = accept_thread__wait_for_connection(
+			accept_thread_data);
+
+		pthread_mutex_lock(&server_data->work_available_mutex);
+		if (server_data->shutdown_requested) {
+			pthread_mutex_unlock(&server_data->work_available_mutex);
+			if (client_fd >= 0)
+				close(client_fd);
+			break;
+		}
+
+		if (client_fd < 0) {
+			/* ignore transient accept() errors */
+		}
+		else {
+			fifo_enqueue(server_data, client_fd);
+			pthread_cond_broadcast(&server_data->work_available_cond);
+		}
+		pthread_mutex_unlock(&server_data->work_available_mutex);
+	}
+
+	trace2_thread_exit();
+	return NULL;
+}
+
+/*
+ * We can't predict the connection arrival rate relative to the worker
+ * processing rate, therefore we allow the "accept-thread" to queue up
+ * a generous number of connections, since we'd rather have the client
+ * not unnecessarily timeout if we can avoid it.  (The assumption is
+ * that this will be used for FSMonitor and a few second wait on a
+ * connection is better than having the client timeout and do the full
+ * computation itself.)
+ *
+ * The FIFO queue size is set to a multiple of the worker pool size.
+ * This value chosen at random.
+ */
+#define FIFO_SCALE (100)
+
+/*
+ * The backlog value for `listen(2)`.  This doesn't need to be huge,
+ * rather just large enough for our "accept-thread" to wake up and
+ * queue incoming connections onto the FIFO without the kernel
+ * dropping any.
+ *
+ * This value was chosen at random.
+ */
+#define LISTEN_BACKLOG (50)
+
+/*
+ * Create a unix domain socket at the given path to listen for
+ * client connections.  The resulting socket will then appear
+ * in the filesystem as an inode with S_IFSOCK.  The inode is
+ * itself created as part of the `bind(2)` operation.
+ *
+ * The term "socket" is ambiguous in this context.  We want to open a
+ * "socket-fd" that is bound to a "socket-inode" (path) on disk.  We
+ * listen on "socket-fd" for new connections and clients try to
+ * open/connect using the "socket-inode" pathname.
+ *
+ * Unix domain sockets have a fundamental design flaw because the
+ * "socket-inode" persists until the pathname is deleted; closing the listening
+ * "socket-fd" only closes the socket handle/descriptor, it does not delete
+ * the inode/pathname.
+ *
+ * Well-behaving service daemons are expected to also delete the inode
+ * before shutdown.  If a service crashes (or forgets) it can leave
+ * the (now stale) inode in the filesystem.  This behaves like a stale
+ * ".lock" file and may prevent future service instances from starting
+ * up correctly.  (Because they won't be able to bind.)
+ *
+ * When future service instances try to create the listener socket,
+ * `bind(2)` will fail with EADDRINUSE -- because the inode already
+ * exists.  However, the new instance cannot tell if it is a stale
+ * inode *or* another service instance is already running.
+ *
+ * One possible solution is to blindly unlink the inode before
+ * attempting to bind a new socket-fd (and thus create a new
+ * socket-inode).  Then `bind(2)` should always succeed.  However, if
+ * there is an existing service instance, it would be orphaned --
+ * it would still be listening on a socket-fd that is still bound
+ * to an (unlinked) socket-inode, but that socket-inode is no longer
+ * associated with the pathname.  New client connections will arrive
+ * at our new socket-inode and not the existing server's.  (It is up to
+ * the existing server to detect that its socket-inode has been
+ * stolen and shutdown.)
+ *
+ * Since this is rather obscure and infrequent, we try to "gently"
+ * create the socket-inode without disturbing an existing service.
+ */
+static int create_listener_socket(const char *path,
+				  const struct ipc_server_opts *ipc_opts)
+{
+	int fd_listen;
+	int fd_client;
+	struct unix_stream_listen_opts uslg_opts = {
+		.listen_backlog_size = LISTEN_BACKLOG,
+		.force_unlink_before_bind = 0,
+		.disallow_chdir = ipc_opts->uds_disallow_chdir
+	};
+
+	trace2_data_string("ipc-server", NULL, "try-listen-gently", path);
+
+	/*
+	 * Assume socket-inode does not exist and try to (gently)
+	 * create a new socket-inode on disk at pathname and bind
+	 * socket-fd to it.
+	 */
+	fd_listen = unix_stream_listen_gently(path, &uslg_opts);
+	if (fd_listen >= 0)
+		return fd_listen;
+
+	if (errno != EADDRINUSE)
+		return error_errno(_("could not create socket '%s'"),
+				   path);
+
+	trace2_data_string("ipc-server", NULL, "try-detect-server", path);
+
+	/*
+	 * A socket-inode at pathname exists on disk, but we don't
+	 * know if a server is using it or if it is a stale inode.
+	 *
+	 * Poke it with a trivial connection to try to find out.
+	 */
+	fd_client = unix_stream_connect(path);
+	if (fd_client >= 0) {
+		/*
+		 * An existing service process is alive and accepted our
+		 * connection.
+		 */
+		close(fd_client);
+
+		/*
+		 * We cannot create a new socket-inode here, so we cannot
+		 * startup a new server on this pathname.
+		 */
+		errno = EADDRINUSE;
+		return error_errno(_("socket already in use '%s'"),
+				   path);
+	}
+
+	trace2_data_string("ipc-server", NULL, "try-listen-force", path);
+
+	/*
+	 * A socket-inode at pathname exists on disk, but we were not
+	 * able to connect to it, so we believe that this is a stale
+	 * socket-inode that a previous server forgot to delete.  Use
+	 * the traditional solution: force unlink it and create a new
+	 * one.
+	 *
+	 * TODO Note that it is possible that another server is
+	 * listening, but is either just starting up and not yet
+	 * responsive or is stuck somehow.  For now, I'm OK with
+	 * stealing the socket-inode from it in this case.
+	 */
+	uslg_opts.force_unlink_before_bind = 1;
+	fd_listen = unix_stream_listen_gently(path, &uslg_opts);
+	if (fd_listen >= 0)
+		return fd_listen;
+
+	return error_errno(_("could not force create socket '%s'"), path);
+}
+
+static int setup_listener_socket(const char *path, ino_t *inode,
+				 const struct ipc_server_opts *ipc_opts)
+{
+	int fd_listen;
+	struct stat st;
+
+	trace2_region_enter("ipc-server", "create-listener_socket", NULL);
+	fd_listen = create_listener_socket(path, ipc_opts);
+	trace2_region_leave("ipc-server", "create-listener_socket", NULL);
+
+	if (fd_listen < 0)
+		return fd_listen;
+
+	/*
+	 * We just bound a socket (descriptor) to a newly created unix
+	 * domain socket in the filesystem.  Capture the inode number
+	 * so we can later detect if/when someone else force-creates a
+	 * new socket and effectively steals the path from us.  (Which
+	 * would leave us listening to a socket that no client could
+	 * reach.)
+	 */
+	if (lstat(path, &st) < 0) {
+		int saved_errno = errno;
+
+		close(fd_listen);
+		unlink(path);
+
+		errno = saved_errno;
+		return error_errno(_("could not lstat listener socket '%s'"),
+				   path);
+	}
+
+	if (set_socket_blocking_flag(fd_listen, 1)) {
+		int saved_errno = errno;
+
+		close(fd_listen);
+		unlink(path);
+
+		errno = saved_errno;
+		return error_errno(_("making listener socket nonblocking '%s'"),
+				   path);
+	}
+
+	*inode = st.st_ino;
+
+	return fd_listen;
+}
+
+/*
+ * Start IPC server in a pool of background threads.
+ */
+int ipc_server_run_async(struct ipc_server_data **returned_server_data,
+			 const char *path, const struct ipc_server_opts *opts,
+			 ipc_server_application_cb *application_cb,
+			 void *application_data)
+{
+	struct ipc_server_data *server_data;
+	int fd_listen;
+	ino_t inode_listen;
+	int sv[2];
+	int k;
+	int nr_threads = opts->nr_threads;
+
+	*returned_server_data = NULL;
+
+	/*
+	 * Create a socketpair and set sv[1] to non-blocking.  This
+	 * will be used to send a shutdown message to the accept-thread
+	 * and allows the accept-thread to wait on EITHER a client
+	 * connection or a shutdown request without spinning.
+	 */
+	if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0)
+		return error_errno(_("could not create socketpair for '%s'"),
+				   path);
+
+	if (set_socket_blocking_flag(sv[1], 1)) {
+		int saved_errno = errno;
+		close(sv[0]);
+		close(sv[1]);
+		errno = saved_errno;
+		return error_errno(_("making socketpair nonblocking '%s'"),
+				   path);
+	}
+
+	fd_listen = setup_listener_socket(path, &inode_listen, opts);
+	if (fd_listen < 0) {
+		int saved_errno = errno;
+		close(sv[0]);
+		close(sv[1]);
+		errno = saved_errno;
+		return -1;
+	}
+
+	server_data = xcalloc(1, sizeof(*server_data));
+	server_data->magic = MAGIC_SERVER_DATA;
+	server_data->application_cb = application_cb;
+	server_data->application_data = application_data;
+	strbuf_init(&server_data->buf_path, 0);
+	strbuf_addstr(&server_data->buf_path, path);
+
+	if (nr_threads < 1)
+		nr_threads = 1;
+
+	pthread_mutex_init(&server_data->work_available_mutex, NULL);
+	pthread_cond_init(&server_data->work_available_cond, NULL);
+
+	server_data->queue_size = nr_threads * FIFO_SCALE;
+	server_data->fifo_fds = xcalloc(server_data->queue_size,
+					sizeof(*server_data->fifo_fds));
+
+	server_data->accept_thread =
+		xcalloc(1, sizeof(*server_data->accept_thread));
+	server_data->accept_thread->magic = MAGIC_ACCEPT_THREAD_DATA;
+	server_data->accept_thread->server_data = server_data;
+	server_data->accept_thread->fd_listen = fd_listen;
+	server_data->accept_thread->inode_listen = inode_listen;
+	server_data->accept_thread->fd_send_shutdown = sv[0];
+	server_data->accept_thread->fd_wait_shutdown = sv[1];
+
+	if (pthread_create(&server_data->accept_thread->pthread_id, NULL,
+			   accept_thread_proc, server_data->accept_thread))
+		die_errno(_("could not start accept_thread '%s'"), path);
+
+	for (k = 0; k < nr_threads; k++) {
+		struct ipc_worker_thread_data *wtd;
+
+		wtd = xcalloc(1, sizeof(*wtd));
+		wtd->magic = MAGIC_WORKER_THREAD_DATA;
+		wtd->server_data = server_data;
+
+		if (pthread_create(&wtd->pthread_id, NULL, worker_thread_proc,
+				   wtd)) {
+			if (k == 0)
+				die(_("could not start worker[0] for '%s'"),
+				    path);
+			/*
+			 * Limp along with the thread pool that we have.
+			 */
+			break;
+		}
+
+		wtd->next_thread = server_data->worker_thread_list;
+		server_data->worker_thread_list = wtd;
+	}
+
+	*returned_server_data = server_data;
+	return 0;
+}
+
+/*
+ * Gently tell the IPC server threads to shut down.
+ * Can be run on any thread.
+ */
+int ipc_server_stop_async(struct ipc_server_data *server_data)
+{
+	/* ASSERT NOT holding mutex */
+
+	int fd;
+
+	if (!server_data)
+		return 0;
+
+	trace2_region_enter("ipc-server", "server-stop-async", NULL);
+
+	pthread_mutex_lock(&server_data->work_available_mutex);
+
+	server_data->shutdown_requested = 1;
+
+	/*
+	 * Write a byte to the shutdown socket pair to wake up the
+	 * accept-thread.
+	 */
+	if (write(server_data->accept_thread->fd_send_shutdown, "Q", 1) < 0)
+		error_errno("could not write to fd_send_shutdown");
+
+	/*
+	 * Drain the queue of existing connections.
+	 */
+	while ((fd = fifo_dequeue(server_data)) != -1)
+		close(fd);
+
+	/*
+	 * Gently tell worker threads to stop processing new connections
+	 * and exit.  (This does not abort in-progress conversations.)
+	 */
+	pthread_cond_broadcast(&server_data->work_available_cond);
+
+	pthread_mutex_unlock(&server_data->work_available_mutex);
+
+	trace2_region_leave("ipc-server", "server-stop-async", NULL);
+
+	return 0;
+}
+
+/*
+ * Wait for all IPC server threads to stop.
+ */
+int ipc_server_await(struct ipc_server_data *server_data)
+{
+	pthread_join(server_data->accept_thread->pthread_id, NULL);
+
+	if (!server_data->shutdown_requested)
+		BUG("ipc-server: accept-thread stopped for '%s'",
+		    server_data->buf_path.buf);
+
+	while (server_data->worker_thread_list) {
+		struct ipc_worker_thread_data *wtd =
+			server_data->worker_thread_list;
+
+		pthread_join(wtd->pthread_id, NULL);
+
+		server_data->worker_thread_list = wtd->next_thread;
+		free(wtd);
+	}
+
+	server_data->is_stopped = 1;
+
+	return 0;
+}
+
+void ipc_server_free(struct ipc_server_data *server_data)
+{
+	struct ipc_accept_thread_data *accept_thread_data;
+
+	if (!server_data)
+		return;
+
+	if (!server_data->is_stopped)
+		BUG("cannot free ipc-server while running for '%s'",
+		    server_data->buf_path.buf);
+
+	accept_thread_data = server_data->accept_thread;
+	if (accept_thread_data) {
+		if (accept_thread_data->fd_listen != -1) {
+			/*
+			 * Only unlink the unix domain socket if we
+			 * created it.  That is, if another daemon
+			 * process force-created a new socket at this
+			 * path, and effectively steals our path
+			 * (which prevents us from receiving any
+			 * future clients), we don't want to do the
+			 * same thing to them.
+			 */
+			if (!socket_was_stolen(
+				    accept_thread_data))
+				unlink(server_data->buf_path.buf);
+
+			close(accept_thread_data->fd_listen);
+		}
+		if (accept_thread_data->fd_send_shutdown != -1)
+			close(accept_thread_data->fd_send_shutdown);
+		if (accept_thread_data->fd_wait_shutdown != -1)
+			close(accept_thread_data->fd_wait_shutdown);
+
+		free(server_data->accept_thread);
+	}
+
+	while (server_data->worker_thread_list) {
+		struct ipc_worker_thread_data *wtd =
+			server_data->worker_thread_list;
+
+		server_data->worker_thread_list = wtd->next_thread;
+		free(wtd);
+	}
+
+	pthread_cond_destroy(&server_data->work_available_cond);
+	pthread_mutex_destroy(&server_data->work_available_mutex);
+
+	strbuf_release(&server_data->buf_path);
+
+	free(server_data->fifo_fds);
+	free(server_data);
+}
diff --git a/contrib/buildsystems/CMakeLists.txt b/contrib/buildsystems/CMakeLists.txt
index 4bd41054ee7..4c27a373414 100644
--- a/contrib/buildsystems/CMakeLists.txt
+++ b/contrib/buildsystems/CMakeLists.txt
@@ -248,6 +248,8 @@ endif()
 
 if(CMAKE_SYSTEM_NAME STREQUAL "Windows")
 	list(APPEND compat_SOURCES compat/simple-ipc/ipc-shared.c compat/simple-ipc/ipc-win32.c)
+else()
+	list(APPEND compat_SOURCES compat/simple-ipc/ipc-shared.c compat/simple-ipc/ipc-unix-socket.c)
 endif()
 
 set(EXE_EXTENSION ${CMAKE_EXECUTABLE_SUFFIX})
diff --git a/simple-ipc.h b/simple-ipc.h
index cd525e711bd..8a6dfc72c83 100644
--- a/simple-ipc.h
+++ b/simple-ipc.h
@@ -5,7 +5,7 @@
  * See Documentation/technical/api-simple-ipc.txt
  */
 
-#if defined(GIT_WINDOWS_NATIVE)
+#if defined(GIT_WINDOWS_NATIVE) || !defined(NO_UNIX_SOCKETS)
 #define SUPPORTS_SIMPLE_IPC
 #endif
 
@@ -151,6 +151,11 @@ struct ipc_server_data;
 struct ipc_server_opts
 {
 	int nr_threads;
+
+	/*
+	 * Disallow chdir() when creating a Unix domain socket.
+	 */
+	unsigned int uds_disallow_chdir:1;
 };
 
 /*
-- 
gitgitgadget

^ permalink raw reply related	[flat|nested] 178+ messages in thread

* Re: [PATCH 05/10] simple-ipc: design documentation for new IPC mechanism
  2021-01-12 15:31 ` [PATCH 05/10] simple-ipc: design documentation for new IPC mechanism Jeff Hostetler via GitGitGadget
@ 2021-01-12 16:40   ` Ævar Arnfjörð Bjarmason
  0 siblings, 0 replies; 178+ messages in thread
From: Ævar Arnfjörð Bjarmason @ 2021-01-12 16:40 UTC (permalink / raw)
  To: Jeff Hostetler via GitGitGadget; +Cc: git, Jeff Hostetler


On Tue, Jan 12 2021, Jeff Hostetler via GitGitGadget wrote:

> From: Jeff Hostetler <jeffhost@microsoft.com>
>
> Brief design documentation for new IPC mechanism allowing
> foreground Git client to talk with an existing daemon process
> at a known location using a named pipe or unix domain socket.
>
> Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
> Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
> ---
>  Documentation/technical/api-simple-ipc.txt | 31 ++++++++++++++++++++++
>  1 file changed, 31 insertions(+)
>  create mode 100644 Documentation/technical/api-simple-ipc.txt
>
> diff --git a/Documentation/technical/api-simple-ipc.txt b/Documentation/technical/api-simple-ipc.txt
> new file mode 100644
> index 00000000000..920994a69d3
> --- /dev/null
> +++ b/Documentation/technical/api-simple-ipc.txt
> @@ -0,0 +1,31 @@
> +simple-ipc API
> +==============
> +
> +The simple-ipc API is used to send an IPC message and response between
> +a (presumably) foreground Git client process to a background server or
> +daemon process.  The server process must already be running.  Multiple
> +client processes can simultaneously communicate with the server
> +process.
> +
> +Communication occurs over a named pipe on Windows and a Unix domain
> +socket on other platforms.  Clients and the server rendezvous at a
> +previously agreed-to application-specific pathname (which is outside
> +the scope of this design).
> +
> +This IPC mechanism differs from the existing `sub-process.c` model
> +(Documentation/technical/long-running-process-protocol.txt) and used
> +by applications like Git-LFS because the server is assumed to be very

s/to be very long running/to be a long running/, or at least "s/to be
very/to be a very/".

> +long running system service.  In contrast, a "sub-process model process"
> +is started with the foreground process and exits when the foreground
> +process terminates.  How the server is started is also outside the
> +scope of the IPC mechanism.
> +
> +The IPC protocol consists of a single request message from the client and
> +an optional request message from the server.  For simplicity, pkt-line
> +routines are used to hide chunking and buffering concerns.  Each side
> +terminates their message with a flush packet.
> +(Documentation/technical/protocol-common.txt)
> +
> +The actual format of the client and server messages is application
> +specific.  The IPC layer transmits and receives an opaque buffer without
> +any concern for the content within.


^ permalink raw reply	[flat|nested] 178+ messages in thread

* Re: [PATCH 00/10] [RFC] Simple IPC Mechanism
  2021-01-12 15:31 [PATCH 00/10] [RFC] Simple IPC Mechanism Jeff Hostetler via GitGitGadget
                   ` (9 preceding siblings ...)
  2021-01-12 15:31 ` [PATCH 10/10] simple-ipc: add Unix domain socket implementation Jeff Hostetler via GitGitGadget
@ 2021-01-12 16:50 ` Ævar Arnfjörð Bjarmason
  2021-01-12 18:25   ` Jeff Hostetler
  2021-01-12 20:01 ` Junio C Hamano
  2021-02-01 19:45 ` [PATCH v2 00/14] " Jeff Hostetler via GitGitGadget
  12 siblings, 1 reply; 178+ messages in thread
From: Ævar Arnfjörð Bjarmason @ 2021-01-12 16:50 UTC (permalink / raw)
  To: Jeff Hostetler via GitGitGadget
  Cc: git, Jeff Hostetler, Nguyễn Thái Ngọc Duy, Ben Peart


On Tue, Jan 12 2021, Jeff Hostetler via GitGitGadget wrote:

> This series introduces a multi-threaded IPC mechanism called "Simple IPC".
> This is a library-layer feature to make it easy to create very long running
> daemon/service applications and for unrelated Git commands to communicate
> with them. Communication uses pkt-line messaging over a Windows named pipe
> or Unix domain socket.
>
> On the server side, Simple IPC implements a (platform-specific) connection
> listener and worker thread-pool to accept and handle a series of client
> connections. The server functionality is completely hidden behind the
> ipc_server_run() and ipc_server_run_async() APIs. The daemon/service
> application only needs to define an application-specific callback to handle
> client requests.
>
> Note that Simple IPC is completely unrelated to the long running process
> feature (described in sub-process.h) where the lifetime of a "sub-process"
> child is bound to that of the invoking parent process and communication
> occurs over the child's stdin/stdout.
>
> Simple IPC will serve as a basis for a future builtin FSMonitor daemon
> feature.

I only skimmed this so far. In the past we had a git-read-cache--daemon
-> git-index-helper[1] -> watchman. The last iteration of that seems to
be the [3] re-roll from Ben Peart in 2017. I used/tested that for a
while and had some near-production use-cases of it.

How does this new series relate to that past work (if at all), and (not
having re-read the old threads) were there reasons those old patch
series weren't merged in that are addressed or mitigated here?

1. https://lore.kernel.org/git/1402406665-27988-1-git-send-email-pclouds@gmail.com/
2. https://lore.kernel.org/git/1457548582-28302-1-git-send-email-dturner@twopensource.com/
3. https://lore.kernel.org/git/20170518201333.13088-1-benpeart@microsoft.com/
4. https://lore.kernel.org/git/87bmhfwmqa.fsf@evledraar.gmail.com/

^ permalink raw reply	[flat|nested] 178+ messages in thread

* Re: [PATCH 00/10] [RFC] Simple IPC Mechanism
  2021-01-12 16:50 ` [PATCH 00/10] [RFC] Simple IPC Mechanism Ævar Arnfjörð Bjarmason
@ 2021-01-12 18:25   ` Jeff Hostetler
  0 siblings, 0 replies; 178+ messages in thread
From: Jeff Hostetler @ 2021-01-12 18:25 UTC (permalink / raw)
  To: Ævar Arnfjörð Bjarmason, Jeff Hostetler via GitGitGadget
  Cc: git, Jeff Hostetler, Nguyễn Thái Ngọc Duy, Ben Peart



On 1/12/21 11:50 AM, Ævar Arnfjörð Bjarmason wrote:
> 
> On Tue, Jan 12 2021, Jeff Hostetler via GitGitGadget wrote:
> 
>> This series introduces a multi-threaded IPC mechanism called "Simple IPC".
>> This is a library-layer feature to make it easy to create very long running
>> daemon/service applications and for unrelated Git commands to communicate
>> with them. Communication uses pkt-line messaging over a Windows named pipe
>> or Unix domain socket.
>>
>> On the server side, Simple IPC implements a (platform-specific) connection
>> listener and worker thread-pool to accept and handle a series of client
>> connections. The server functionality is completely hidden behind the
>> ipc_server_run() and ipc_server_run_async() APIs. The daemon/service
>> application only needs to define an application-specific callback to handle
>> client requests.
>>
>> Note that Simple IPC is completely unrelated to the long running process
>> feature (described in sub-process.h) where the lifetime of a "sub-process"
>> child is bound to that of the invoking parent process and communication
>> occurs over the child's stdin/stdout.
>>
>> Simple IPC will serve as a basis for a future builtin FSMonitor daemon
>> feature.
> 
> I only skimmed this so far. In the past we had a git-read-cache--daemon
> -> git-index-helper[1] -> watchman. The last iteration of that seems to
> be the [3] re-roll from Ben Peart in 2017. I used/tested that for a
> while and had some near-production use-cases of it.
> 
> How does this new series relate to that past work (if at all), and (not
> having re-read the old threads) were there reasons those old patch
> serieses weren't merged in that are addressed here, mitigated etc?
> 
> 1. https://lore.kernel.org/git/1402406665-27988-1-git-send-email-pclouds@gmail.com/
> 2. https://lore.kernel.org/git/1457548582-28302-1-git-send-email-dturner@twopensource.com/
> 3. https://lore.kernel.org/git/20170518201333.13088-1-benpeart@microsoft.com/
> 4. https://lore.kernel.org/git/87bmhfwmqa.fsf@evledraar.gmail.com/
> 

I'm starting with the model used by the existing FSMonitor feature
that Ben Peart and Kevin Willford added to Git.

Item [3] looks to be an earlier draft of that effort.  The idea there
was to add the fsmonitor hook that could talk to a daemon like Watchman
and quickly update the in-memory cache-entry flags without the need to
lstat() and similarly update the untracked-cache.  An index extension
was added to remember the last fsmonitor response processed.

Currently in Git, we have an fsmonitor hook (usually a Perl script) that
talks to Watchman and translates the Watchman response back into
something that the Git client can understand.  This comes back as a
list of files that have changed since some timestamp (or in V2, relative
to some daemon-specific token).

Items [1,2] are not related to that.  That was a different effort to
quickly fetch a read-only copy of an already-parsed index via shared
memory.  In the last version I saw, there were 2 daemons.  index-helper
kept a fresh view of the index in shared memory and could give it to
the Git client.  The client could just mmap the pre-parsed index and
avoid calling `read_index()`.  Index-helper would drive Watchman to
keep track of cache-entries as they changed and handle the lstat's.

I'm not familiar with [4] (and I only quickly scanned it).  There are
several ideas for finding slow spots while reading the index.  I don't
want to go into all of them, but several are obsolete now.  They didn't
contribute to the current effort.


The Simple IPC series (and a soon-to-be-submitted fsmonitor--daemon
series) is intended to be a follow-on to the FSMonitor effort that is
currently in Git.

1. Build a git-native daemon to watch the file system and avoid needing
a third-party tool.  This doesn't preclude the use of Watchman, but
having a builtin tool might simplify engineering support costs when
deploying to a large team.

2. Use direct IPC between the Git command and the daemon to avoid the
expense of the Hook API (which is expensive on Windows).

3. Make the daemon Git-aware.  For example, it might want to pre-filter
ignored files.  (This might not be present in V1.  And we might extend
the daemon to do more of this as we improve performance.)
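
To make (2) a bit more concrete, the shape of the round trip is nothing
more than the following.  (This is a plain POSIX sketch for illustration
only -- it is not the simple-ipc API from this series, and it omits the
pkt-line framing and most of the error handling that the real code has.)

    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    /* Hypothetical sketch: one synchronous request/response over the
     * daemon's Unix domain socket.  The series wraps this pattern in
     * the ipc_client_*() API and frames the payload with pkt-line. */
    static int send_request(const char *socket_path, const char *request,
                            char *reply, size_t reply_len)
    {
            struct sockaddr_un sa = { .sun_family = AF_UNIX };
            ssize_t n;
            int fd = socket(AF_UNIX, SOCK_STREAM, 0);

            if (fd < 0)
                    return -1;
            strncpy(sa.sun_path, socket_path, sizeof(sa.sun_path) - 1);
            if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
                    close(fd);
                    return -1;      /* no daemon; caller falls back */
            }
            if (write(fd, request, strlen(request)) < 0) {
                    close(fd);
                    return -1;
            }
            shutdown(fd, SHUT_WR);  /* crude end-of-request marker; the
                                     * real code sends a flush packet */
            n = read(fd, reply, reply_len - 1);
            close(fd);
            if (n < 0)
                    return -1;
            reply[n] = '\0';
            return 0;
    }

No hook process is spawned per query, which is where the savings over
the hook API come from (especially on Windows).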

Jeff

^ permalink raw reply	[flat|nested] 178+ messages in thread

* Re: [PATCH 00/10] [RFC] Simple IPC Mechanism
  2021-01-12 15:31 [PATCH 00/10] [RFC] Simple IPC Mechanism Jeff Hostetler via GitGitGadget
                   ` (10 preceding siblings ...)
  2021-01-12 16:50 ` [PATCH 00/10] [RFC] Simple IPC Mechanism Ævar Arnfjörð Bjarmason
@ 2021-01-12 20:01 ` Junio C Hamano
  2021-01-12 23:25   ` Jeff Hostetler
  2021-02-01 19:45 ` [PATCH v2 00/14] " Jeff Hostetler via GitGitGadget
  12 siblings, 1 reply; 178+ messages in thread
From: Junio C Hamano @ 2021-01-12 20:01 UTC (permalink / raw)
  To: Jeff Hostetler via GitGitGadget; +Cc: git, Jeff Hostetler

"Jeff Hostetler via GitGitGadget" <gitgitgadget@gmail.com> writes:

> This series introduces a multi-threaded IPC mechanism called "Simple IPC".
> This is a library-layer feature to make it easy to create very long running
> daemon/service applications and for unrelated Git commands to communicate
> with them. Communication uses pkt-line messaging over a Windows named pipe
> or Unix domain socket.
>
> On the server side, Simple IPC implements a (platform-specific) connection
> listener and worker thread-pool to accept and handle a series of client
> connections. The server functionality is completely hidden behind the
> ipc_server_run() and ipc_server_run_async() APIs. The daemon/service
> application only needs to define an application-specific callback to handle
> client requests.
>
> Note that Simple IPC is completely unrelated to the long running process
> feature (described in sub-process.h) where the lifetime of a "sub-process"
> child is bound to that of the invoking parent process and communication
> occurs over the child's stdin/stdout.
>
> Simple IPC will serve as a basis for a future builtin FSMonitor daemon
> feature.

What kind of security implications does this bring into the system?

Can a Simple IPC daemon be connected by any client?  How does the
daemon know that the other side is authorized to make requests?
When a git binary acting as a client connects to whatever happens to be
listening to the well-known location, how does it know if the other
side is the daemon it wanted to talk to and not a malicious MITM or
other impersonator?

Or is this to be only used for "this is meant to be used to give
read-only access to data that is public anyway" kind of daemon,
e.g. "git://" transport that serves clones and fetches?

Or is this meant to be used on client machines where all the
processes are assumed to be working for the end user, so it is OK to
declare that anything will go (good old DOS mental model?)

I know at the Mechanism level we do not yet know how it will be
used, but we cannot retrofit sufficient security, so it would be
necessary to know answers to these questions.

Thanks.

^ permalink raw reply	[flat|nested] 178+ messages in thread

* Re: [PATCH 00/10] [RFC] Simple IPC Mechanism
  2021-01-12 20:01 ` Junio C Hamano
@ 2021-01-12 23:25   ` Jeff Hostetler
  2021-01-13  0:13     ` Junio C Hamano
  2021-01-13 13:46     ` Jeff King
  0 siblings, 2 replies; 178+ messages in thread
From: Jeff Hostetler @ 2021-01-12 23:25 UTC (permalink / raw)
  To: Junio C Hamano, Jeff Hostetler via GitGitGadget; +Cc: git, Jeff Hostetler



On 1/12/21 3:01 PM, Junio C Hamano wrote:
> "Jeff Hostetler via GitGitGadget" <gitgitgadget@gmail.com> writes:
> 
>> This series introduces a multi-threaded IPC mechanism called "Simple IPC".
>> This is a library-layer feature to make it easy to create very long running
>> daemon/service applications and for unrelated Git commands to communicate
>> with them. Communication uses pkt-line messaging over a Windows named pipe
>> or Unix domain socket.
>>
>> On the server side, Simple IPC implements a (platform-specific) connection
>> listener and worker thread-pool to accept and handle a series of client
>> connections. The server functionality is completely hidden behind the
>> ipc_server_run() and ipc_server_run_async() APIs. The daemon/service
>> application only needs to define an application-specific callback to handle
>> client requests.
>>
>> Note that Simple IPC is completely unrelated to the long running process
>> feature (described in sub-process.h) where the lifetime of a "sub-process"
>> child is bound to that of the invoking parent process and communication
>> occurs over the child's stdin/stdout.
>>
>> Simple IPC will serve as a basis for a future builtin FSMonitor daemon
>> feature.
> 
> What kind of security implications does this bring into the system?
> 
> Can a Simple IPC daemon be connected by any client?  How does the
> daemon know that the other side is authorized to make requests?
> When a git binary acting as a client connects to whatever happens to be
> listening to the well-known location, how does it know if the other
> side is the daemon it wanted to talk to and not a malicious MITM or
> other impersonator?
> 
> Or is this to be only used for "this is meant to be used to give
> read-only access to data that is public anyway" kind of daemon,
> e.g. "git://" transport that serves clones and fetches?
> 
> Or is this meant to be used on client machines where all the
> processes are assumed to be working for the end user, so it is OK to
> declare that anything will go (good old DOS mental model?)
> 
> I know at the Mechanism level we do not yet know how it will be
> used, but we cannot retrofit sufficient security, so it would be
> necessary to know answers to these questions.
> 
> Thanks.
> 

Good questions.

Yes, this is a local-only mechanism.  A local-only named pipe on
Windows and a Unix domain socket on Unix.  In both cases the daemon
creates the pipe/socket as the foreground user (since the daemon
process will be implicitly started by the first Git command that
needs to talk to it).  Later client processes try to open the pipe/socket
with RW access if they can.

On Windows a local named pipe is created by the server side.  It rejects
remote connections.  I did not put an ACL, so it should inherit the
system default which grants the user RW access (since the daemon is
implicitly started by the first foreground client command that needs
to talk to it.)  Other users in the user's group and the anonymous
user should have R but not W access to it, so they should not be able
to connect.  The named pipe is kept in the local Named Pipe File System
(NPFS) as `\\.\pipe\<unique-path>` so it is globally visible on the 
system, but I don't think it is a problem.

On the Unix side, the socket is created inside the .git directory
by the daemon.  Potential clients would have to have access to the
working directory and the .git directory to connect to the socket,
so in normal circumstances they would be able to read everything in
the WD anyway.  So again, I don't think it is a problem.


Alternatively, if a malicious server is started and holds the named
pipe or socket:  the client might be able to talk to it (assuming
that bad server grants access or impersonates the rightful user).
The client might not be able to tell they've been tricked, but at
that point the system is already compromised.

So unless I'm missing something, I think we're OK either way.

Jeff

^ permalink raw reply	[flat|nested] 178+ messages in thread

* Re: [PATCH 00/10] [RFC] Simple IPC Mechanism
  2021-01-12 23:25   ` Jeff Hostetler
@ 2021-01-13  0:13     ` Junio C Hamano
  2021-01-13  0:32       ` Jeff Hostetler
  2021-01-13 13:46     ` Jeff King
  1 sibling, 1 reply; 178+ messages in thread
From: Junio C Hamano @ 2021-01-13  0:13 UTC (permalink / raw)
  To: Jeff Hostetler; +Cc: Jeff Hostetler via GitGitGadget, git, Jeff Hostetler

Jeff Hostetler <git@jeffhostetler.com> writes:

> On Windows a local named pipe is created by the server side.  It rejects
> remote connections.  I did not put an ACL, so it should inherit the
> system default which grants the user RW access (since the daemon is
> implicitly started by the first foreground client command that needs
> to talk to it.)  Other users in the user's group and the anonymous
> user should have R but not W access to it, so they should not be able
> to connect.  The named pipe is kept in the local Named Pipe File System
> (NPFS) as `\\.\pipe\<unique-path>` so it is globally visible on the
> system, but I don't think it is a problem.

It is not intuitively obvious to me why a globally visible thing is
OK, but I'll take your word for it on stuff about Windows.

> On the Unix side, the socket is created inside the .git directory
> by the daemon.  Potential clients would have to have access to the
> working directory and the .git directory to connect to the socket,
> so in normal circumstances they would be able to read everything in
> the WD anyway.  So again, I don't think it is a problem.

OK, yes, writability to .git would automatically mean that
everything is a fair game to those who can talk to the daemon, so
there is no new issue here, as long as the first process that
creates the socket is careful not to loosen the permission.

Thanks.

^ permalink raw reply	[flat|nested] 178+ messages in thread

* Re: [PATCH 00/10] [RFC] Simple IPC Mechanism
  2021-01-13  0:13     ` Junio C Hamano
@ 2021-01-13  0:32       ` Jeff Hostetler
  0 siblings, 0 replies; 178+ messages in thread
From: Jeff Hostetler @ 2021-01-13  0:32 UTC (permalink / raw)
  To: Junio C Hamano; +Cc: Jeff Hostetler via GitGitGadget, git, Jeff Hostetler



On 1/12/21 7:13 PM, Junio C Hamano wrote:
> Jeff Hostetler <git@jeffhostetler.com> writes:
> 
>> On Windows a local named pipe is created by the server side.  It rejects
>> remote connections.  I did not put an ACL, so it should inherit the
>> system default which grants the user RW access (since the daemon is
>> implicitly started by the first foreground client command that needs
>> to talk to it.)  Other users in the user's group and the anonymous
>> user should have R but not W access to it, so they should not be able
>> to connect.  The named pipe is kept in the local Named Pipe File System
>> (NPFS) as `\\.\pipe\<unique-path>` so it is globally visible on the
>> system, but I don't think it is a problem.
> 
> It is not intuitively obvious to me why a globally visible thing is
> OK, but I'll take your word for it on stuff about Windows.

Sorry, that's a quirk of Windows.  Windows has a funky virtual drive
where named pipes are stored -- kind of like a magic directory in /proc
on Linux.  All local named pipes have the "\\.\pipe\" path prefix.

So they are globally visible as a side-effect of that "namespace"
restriction.
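
If it helps to picture it, creating one of these boils down to roughly
the following (a Windows-only sketch, not the code from the series; the
pipe name and buffer sizes here are made up):

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
            /* All local named pipes live under \\.\pipe\ in NPFS. */
            HANDLE h = CreateNamedPipeW(
                    L"\\\\.\\pipe\\demo-pipe",
                    PIPE_ACCESS_DUPLEX,
                    PIPE_TYPE_BYTE | PIPE_READMODE_BYTE | PIPE_WAIT |
                    PIPE_REJECT_REMOTE_CLIENTS,
                    PIPE_UNLIMITED_INSTANCES,
                    4096, 4096, 0,
                    NULL);  /* NULL => default security descriptor */

            if (h == INVALID_HANDLE_VALUE) {
                    fprintf(stderr, "CreateNamedPipeW failed: %lu\n",
                            GetLastError());
                    return 1;
            }
            /* Every process on this machine can see the name, but the
             * security descriptor decides who may open it for R/W. */
            CloseHandle(h);
            return 0;
    }

So the global visibility is just a property of the NPFS namespace; the
ACL (default or otherwise) is still what gates actual access.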

> 
>> On the Unix side, the socket is created inside the .git directory
>> by the daemon.  Potential clients would have to have access to the
>> working directory and the .git directory to connect to the socket,
>> so in normal circumstances they would be able to read everything in
>> the WD anyway.  So again, I don't think it is a problem.
> 
> OK, yes, writability to .git would automatically mean that
> everything is a fair game to those who can talk to the daemon, so
> there is no new issue here, as long as the first process that
> creates the socket is careful not to loosen the permission.
> 
> Thanks.
> 

^ permalink raw reply	[flat|nested] 178+ messages in thread

* Re: [PATCH 01/10] pkt-line: use stack rather than static buffer in packet_write_gently()
  2021-01-12 15:31 ` [PATCH 01/10] pkt-line: use stack rather than static buffer in packet_write_gently() Jeff Hostetler via GitGitGadget
@ 2021-01-13 13:29   ` Jeff King
  2021-01-25 19:34     ` Jeff Hostetler
  0 siblings, 1 reply; 178+ messages in thread
From: Jeff King @ 2021-01-13 13:29 UTC (permalink / raw)
  To: Jeff Hostetler via GitGitGadget; +Cc: git, Jeff Hostetler

On Tue, Jan 12, 2021 at 03:31:23PM +0000, Jeff Hostetler via GitGitGadget wrote:

> Teach packet_write_gently() to use a stack buffer rather than a static
> buffer when composing the packet line message.  This helps get us ready
> for threaded operations.

Sounds like a good goal, but...

>  static int packet_write_gently(const int fd_out, const char *buf, size_t size)
>  {
> -	static char packet_write_buffer[LARGE_PACKET_MAX];
> +	char packet_write_buffer[LARGE_PACKET_MAX];
>  	size_t packet_size;

64k is awfully big for the stack, especially if you are thinking about
having threads. I know we've run into issues around that size before
(though I don't offhand recall whether there was any recursion
involved).

We might need to use thread-local storage here. Heap would also
obviously work, but I don't think we'd want a new allocation per write
(or maybe it wouldn't matter; we're making a syscall, so a malloc() may
not be that big a deal in terms of performance).

-Peff

^ permalink raw reply	[flat|nested] 178+ messages in thread

* Re: [PATCH 00/10] [RFC] Simple IPC Mechanism
  2021-01-12 23:25   ` Jeff Hostetler
  2021-01-13  0:13     ` Junio C Hamano
@ 2021-01-13 13:46     ` Jeff King
  2021-01-13 15:48       ` Ævar Arnfjörð Bjarmason
  1 sibling, 1 reply; 178+ messages in thread
From: Jeff King @ 2021-01-13 13:46 UTC (permalink / raw)
  To: Jeff Hostetler
  Cc: Junio C Hamano, Jeff Hostetler via GitGitGadget, git, Jeff Hostetler

On Tue, Jan 12, 2021 at 06:25:20PM -0500, Jeff Hostetler wrote:

> On the Unix side, the socket is created inside the .git directory
> by the daemon.  Potential clients would have to have access to the
> working directory and the .git directory to connect to the socket,
> so in normal circumstances they would be able to read everything in
> the WD anyway.  So again, I don't think it is a problem.

Just thinking out loud, here are two potential issues with putting it in
.git that we may have to deal with later:

  - fsmonitor is conceptually a read-only thing (i.e., it would speed up
    "git status", etc). And not knowing much about how it will work, I'd
    guess that is carried through (i.e., even though you may open the
    socket R/W so that you can write requests and read them back, there
    is no operation you can request that will overwrite data). But the
    running user may not have write access to .git.

    As long as we cleanly bail to the non-fsmonitor code paths, I don't
    think it's the end of the world. Those read-only users just won't
    get to use the speedup (and it may even be desirable). They may
    complain, but it is open source so the onus is on them to improve
    it. You will not have made anything worse. :)

  - repositories may be on network filesystems that do not support unix
    sockets.

So it would be nice if there was some way to specify an alternate path
to be used for the socket. Possibly one or both of:

  - a config option to give a root path for sockets, where Git would
    then canonicalize the $GIT_DIR name and use $root/$GIT_DIR for the
    socket. That solves the problem for a given user once for all repos.

  - a config option to say "use this path for the socket". This would be
    per-repo, but is more flexible and possibly less confusing.
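
Something like the following, say (key names invented purely for
illustration; this is not proposing final names):

    # one socket root for all repos (first option)
    [fsmonitor]
            socketDir = /var/run/me/git-sockets

    # or an explicit per-repo location (second option)
    [fsmonitor]
            socketPath = /tmp/me/my-big-repo.ipc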

One final note: on some systems[1] the permissions on the socket file
itself are ignored. The safe way to protect it is to make sure the
permissions on the surrounding directory are what you want. See
credential-cache's init_socket_directory() for an example.
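
In code, the usual dance looks roughly like this (untested sketch, not
the init_socket_directory() code itself):

    #include <errno.h>
    #include <stdio.h>
    #include <sys/socket.h>
    #include <sys/stat.h>
    #include <sys/un.h>
    #include <unistd.h>

    /* Put the socket inside a directory only the owner can enter, so
     * even systems that ignore socket-file modes are still covered. */
    static int listen_in_private_dir(const char *dir, const char *name)
    {
            struct sockaddr_un sa = { .sun_family = AF_UNIX };
            int fd;

            if (mkdir(dir, 0700) < 0 && errno != EEXIST)
                    return -1;
            if (chmod(dir, 0700) < 0)   /* re-tighten if it existed */
                    return -1;
            /* NB: a real version must check the path fits in sun_path */
            snprintf(sa.sun_path, sizeof(sa.sun_path), "%s/%s", dir, name);

            fd = socket(AF_UNIX, SOCK_STREAM, 0);
            if (fd < 0)
                    return -1;
            /* EADDRINUSE from bind() (stale vs. live socket) is a
             * separate can of worms, discussed elsewhere in the thread */
            if (bind(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0 ||
                listen(fd, 5) < 0) {
                    close(fd);
                    return -1;
            }
            return fd;
    }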

-Peff

[1] Sorry, I don't remember which systems. This is one of those random
    bits of Unix lore I've carried around for 20 years, and it's
    entirely possible it is simply obsolete at this point.

^ permalink raw reply	[flat|nested] 178+ messages in thread

* Re: [PATCH 07/10] unix-socket: create gentle version of unix_stream_listen()
  2021-01-12 15:31 ` [PATCH 07/10] unix-socket: create gentle version of unix_stream_listen() Jeff Hostetler via GitGitGadget
@ 2021-01-13 14:06   ` Jeff King
  2021-01-14  1:19     ` Chris Torek
  0 siblings, 1 reply; 178+ messages in thread
From: Jeff King @ 2021-01-13 14:06 UTC (permalink / raw)
  To: Jeff Hostetler via GitGitGadget; +Cc: git, Jeff Hostetler

On Tue, Jan 12, 2021 at 03:31:29PM +0000, Jeff Hostetler via GitGitGadget wrote:

> From: Jeff Hostetler <jeffhost@microsoft.com>
> 
> Create a gentle version of `unix_stream_listen()`.  This version does
> not call `die()` if a socket-fd cannot be created and does not assume
> that it is safe to `unlink()` an existing socket-inode.

The existing one is meant to be gentle. Maybe it is worth fixing it
instead.

> `unix_stream_listen()` uses `unix_stream_socket()` helper function to
> create the socket-fd.  Avoid that helper because it calls `die()` on
> errors.

Yeah, I think this is just a bug. My thinking in the original was that
socket() would basically never fail. And it generally wouldn't, but
things like EMFILE do happen. There are only two callers, and both would
be one-liners to propagate the error up the stack.

> `unix_stream_listen()` always tries to `unlink()` the socket-path before
> calling `bind()`.  If there is an existing server/daemon already bound
> and listening on that socket-path, our `unlink()` would have the effect
> of disassociating the existing server's bound-socket-fd from the socket-path
> without notifying the existing server.  The existing server could continue
> to service existing connections (accepted-socket-fd's), but would not
> receive any futher new connections (since clients rendezvous via the
> socket-path).  The existing server would effectively be offline but yet
> appear to be active.

The trouble here is that one cannot tell if the existing file is active,
and you are orphaning an existing server, or if there is leftover cruft
from an exited server that did not clean up after itself (you will get
EADDRINUSE either way).

Handling those cases (and especially doing so in a non-racy way) is
probably outside the scope of unix_stream_listen(), but it makes sense
for this to be an option. And it looks like you even made it so here,
so unix_stream_listen() could just become a wrapper that sets the
option. Or since there is only one caller in the whole code-base,
perhaps it could just learn to pass the option struct. :)

Likewise for the no-chdir option added in the follow-on patch.

> Furthermore, `unix_stream_listen()` creates an opportunity for a brief
> race condition for connecting clients if they try to connect in the
> interval between the forced `unlink()` and the subsequent `bind()` (which
> recreates the socket-path that is bound to a new socket-fd in the current
> process).

I'll be curious to see how you do this atomically. From my skim of patch
10, you will connect to see if it's active, and unlink if it's not. But
then two simultaneous new processes could both see an inactive one and
race to forcefully create the new one. One of them will lose and be
orphaned with a socket that has no filesystem name.

There might be a solution using link() to have an atomic winner, but it
gets tricky around unlinking the old name out of the way. You might need
a separate dot-lock to make sure only one process does the
unlink-and-create process at a time.
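
In other words, something like the sketch below (completely untested,
and only meant to illustrate the dot-lock shape; the "connect to probe
for liveness" part is what I understand patch 10 to be doing already):

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    static int is_someone_listening(const char *path)
    {
            struct sockaddr_un sa = { .sun_family = AF_UNIX };
            int connected;
            int fd = socket(AF_UNIX, SOCK_STREAM, 0);

            if (fd < 0)
                    return 0;
            strncpy(sa.sun_path, path, sizeof(sa.sun_path) - 1);
            connected = !connect(fd, (struct sockaddr *)&sa, sizeof(sa));
            close(fd);
            return connected;
    }

    /* Returns a listening fd, or -1 if we lost the race, found a live
     * server, or hit an error.  The ".lock" file serializes the whole
     * probe / unlink / bind sequence between competing servers. */
    static int bind_or_back_off(const char *path, int backlog)
    {
            struct sockaddr_un sa = { .sun_family = AF_UNIX };
            char lock[sizeof(sa.sun_path) + 8];
            int lock_fd, fd = -1;

            snprintf(lock, sizeof(lock), "%s.lock", path);
            lock_fd = open(lock, O_CREAT | O_EXCL | O_WRONLY, 0600);
            if (lock_fd < 0)
                    return -1;      /* somebody else is mid-takeover */

            if (is_someone_listening(path))
                    goto done;      /* live server; do not steal it */

            strncpy(sa.sun_path, path, sizeof(sa.sun_path) - 1);
            unlink(path);           /* dead leftover; safe to remove */
            fd = socket(AF_UNIX, SOCK_STREAM, 0);
            if (fd >= 0 &&
                (bind(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0 ||
                 listen(fd, backlog) < 0)) {
                    close(fd);
                    fd = -1;
            }
    done:
            close(lock_fd);
            unlink(lock);
            return fd;
    }

Clients that try to connect between the unlink() and the bind() still
see a brief outage, but at least two servers can no longer both decide
to take over the same path.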

-Peff

^ permalink raw reply	[flat|nested] 178+ messages in thread

* Re: [PATCH 00/10] [RFC] Simple IPC Mechanism
  2021-01-13 13:46     ` Jeff King
@ 2021-01-13 15:48       ` Ævar Arnfjörð Bjarmason
  0 siblings, 0 replies; 178+ messages in thread
From: Ævar Arnfjörð Bjarmason @ 2021-01-13 15:48 UTC (permalink / raw)
  To: Jeff King
  Cc: Jeff Hostetler, Junio C Hamano, Jeff Hostetler via GitGitGadget,
	git, Jeff Hostetler


On Wed, Jan 13 2021, Jeff King wrote:

> On Tue, Jan 12, 2021 at 06:25:20PM -0500, Jeff Hostetler wrote:
>
>> On the Unix side, the socket is created inside the .git directory
>> by the daemon.  Potential clients would have to have access to the
>> working directory and the .git directory to connect to the socket,
>> so in normal circumstances they would be able to read everything in
>> the WD anyway.  So again, I don't think it is a problem.
>
> Just thinking out loud, here are two potential issues with putting it in
> .git that we may have to deal with later:
>
>   - fsmonitor is conceptually a read-only thing (i.e., it would speed up
>     "git status", etc). And not knowing much about how it will work, I'd
>     guess that is carried through (i.e., even though you may open the
>     socket R/W so that you can write requests and read them back, there
>     is no operation you can request that will overwrite data). But the
>     running user may not have write access to .git.
>
>     As long as we cleanly bail to the non-fsmonitor code paths, I don't
>     think it's the end of the world. Those read-only users just won't
>     get to use the speedup (and it may even be desirable). They may
>     complain, but it is open source so the onus is on them to improve
>     it. You will not have made anything worse. :)
>
>   - repositories may be on network filesystems that do not support unix
>     sockets.
>
> So it would be nice if there was some way to specify an alternate path
> to be used for the socket. Possibly one or both of:
>
>   - a config option to give a root path for sockets, where Git would
>     then canonicalize the $GIT_DIR name and use $root/$GIT_DIR for the
>     socket. That solves the problem for a given user once for all repos.
>
>   - a config option to say "use this path for the socket". This would be
>     per-repo, but is more flexible and possibly less confusing.
>
> One final note: on some systems[1] the permissions on the socket file
> itself are ignored. The safe way to protect it is to make sure the
> permissions on the surrounding directory are what you want. See
> credential-cache's init_socket_directory() for an example.
>
> -Peff
>
> [1] Sorry, I don't remember which systems. This is one of those random
>     bits of Unix lore I've carried around for 20 years, and it's
>     entirely possible it is simply obsolete at this point.

According to StackExchange lore this seems to have been the case with
4.2 BSD & maybe something obscure like HP/UX:
https://unix.stackexchange.com/questions/83032/which-systems-do-not-honor-socket-read-write-permissions

I'd say it's probably safe to ignore this as a concern for new features
in git in general, and certainly for something intended to work with a
watchman-like program that users are likely to run only on modern OSes.

^ permalink raw reply	[flat|nested] 178+ messages in thread

* Re: [PATCH 07/10] unix-socket: create gentle version of unix_stream_listen()
  2021-01-13 14:06   ` Jeff King
@ 2021-01-14  1:19     ` Chris Torek
  0 siblings, 0 replies; 178+ messages in thread
From: Chris Torek @ 2021-01-14  1:19 UTC (permalink / raw)
  To: Jeff King; +Cc: Jeff Hostetler via GitGitGadget, Git List, Jeff Hostetler

I had saved this to comment on, but Peff beat me to it :-)

On Wed, Jan 13, 2021 at 6:07 AM Jeff King <peff@peff.net> wrote:
> There might be a solution using link() to have an atomic winner, but it
> gets tricky around unlinking the old name out of the way.

You definitely should be able to do this atomically with link(), but
the cleanup is indeed messy, and there's already existing locking
code, so it's probably better to press that into service here.

Chris

^ permalink raw reply	[flat|nested] 178+ messages in thread

* Re: [PATCH 01/10] pkt-line: use stack rather than static buffer in packet_write_gently()
  2021-01-13 13:29   ` Jeff King
@ 2021-01-25 19:34     ` Jeff Hostetler
  0 siblings, 0 replies; 178+ messages in thread
From: Jeff Hostetler @ 2021-01-25 19:34 UTC (permalink / raw)
  To: Jeff King, Jeff Hostetler via GitGitGadget; +Cc: git, Jeff Hostetler



On 1/13/21 8:29 AM, Jeff King wrote:
> On Tue, Jan 12, 2021 at 03:31:23PM +0000, Jeff Hostetler via GitGitGadget wrote:
> 
>> Teach packet_write_gently() to use a stack buffer rather than a static
>> buffer when composing the packet line message.  This helps get us ready
>> for threaded operations.
> 
> Sounds like a good goal, but...
> 
>>   static int packet_write_gently(const int fd_out, const char *buf, size_t size)
>>   {
>> -	static char packet_write_buffer[LARGE_PACKET_MAX];
>> +	char packet_write_buffer[LARGE_PACKET_MAX];
>>   	size_t packet_size;
> 
> 64k is awfully big for the stack, especially if you are thinking about
> having threads. I know we've run into issues around that size before
> (though I don't offhand recall whether there was any recursion
> involved).
> 
> We might need to use thread-local storage here. Heap would also
> obviously work, but I don't think we'd want a new allocation per write
> (or maybe it wouldn't matter; we're making a syscall, so a malloc() may
> not be that big a deal in terms of performance).
> 
> -Peff
> 

Good point.

I'll look at the callers and see if I can do something safer.
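
Perhaps something along the lines of letting each caller own the scratch
space, so nothing large lives on the stack and nothing is shared between
threads.  Rough, untested sketch (tracing etc. omitted):

    struct packet_scratch_space {
            char buffer[LARGE_PACKET_MAX];
    };

    static int packet_write_gently(const int fd_out, const char *buf,
                                   size_t size,
                                   struct packet_scratch_space *scratch)
    {
            static const char hex[] = "0123456789abcdef";
            size_t packet_size;

            if (size > sizeof(scratch->buffer) - 4)
                    return error(_("packet write failed - data exceeds max packet size"));

            packet_size = size + 4;
            /* four-hex-digit length header, as pkt-line requires */
            scratch->buffer[0] = hex[(packet_size >> 12) & 0xf];
            scratch->buffer[1] = hex[(packet_size >> 8) & 0xf];
            scratch->buffer[2] = hex[(packet_size >> 4) & 0xf];
            scratch->buffer[3] = hex[packet_size & 0xf];
            memcpy(scratch->buffer + 4, buf, size);

            if (write_in_full(fd_out, scratch->buffer, packet_size) < 0)
                    return error(_("packet write failed"));
            return 0;
    }

A server thread (or a plain single-threaded caller) would then keep one
scratch struct per thread and pass it down through the writer entry
points.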

Jeff

^ permalink raw reply	[flat|nested] 178+ messages in thread

* [PATCH v2 00/14] Simple IPC Mechanism
  2021-01-12 15:31 [PATCH 00/10] [RFC] Simple IPC Mechanism Jeff Hostetler via GitGitGadget
                   ` (11 preceding siblings ...)
  2021-01-12 20:01 ` Junio C Hamano
@ 2021-02-01 19:45 ` Jeff Hostetler via GitGitGadget
  2021-02-01 19:45   ` [PATCH v2 01/14] ci/install-depends: attempt to fix "brew cask" stuff Junio C Hamano via GitGitGadget
                     ` (15 more replies)
  12 siblings, 16 replies; 178+ messages in thread
From: Jeff Hostetler via GitGitGadget @ 2021-02-01 19:45 UTC (permalink / raw)
  To: git
  Cc: Ævar Arnfjörð Bjarmason, Jeff Hostetler,
	Jeff King, Chris Torek, Jeff Hostetler

Here is version 2 of my "Simple IPC" series; it addresses the following
review comments:

[1] Redo packet_write_gently() to take a scratch buffer argument and fix up
callers to avoid potential thread-stack problems caused by a very large
stack buffer when used by multi-threaded callers. This turned out to be a
little more involved than anticipated because the pkt-line code knows
nothing about its callers' threading and has no opportunity to initialize
per-thread state.

[2] Deleted the unix_stream_socket() helper function, inlined it into the
few callers, and let those call sites decide whether or not to call die().

[3] Refactor unix_stream_listen() to take an "options" structure and to
incorporate the changes I described in my earlier
unix_stream_listen_gently().

[4] Update unix_stream_connect() to return errors rather than calling die().

[5] Update the simple-ipc server startup to detect dead or in-use Unix
domain sockets and to create a new socket in a race-aware way. I now use a
variation of the atomic lock-and-rename trick when creating the socket
(details are in a large comment in the code).
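
For example, with [3] a caller now looks roughly like this (sketch only;
the credential-cache hunk in the range-diff below shows the actual
change, and `socket_path` here is just whatever rendezvous path the
daemon uses):

    struct unix_stream_listen_opts opts = UNIX_STREAM_LISTEN_OPTS_INIT;
    int fd;

    opts.listen_backlog_size = 128;     /* busier daemons may want more */
    opts.force_unlink_before_bind = 0;  /* do not blindly steal an existing socket */
    opts.disallow_chdir = 1;            /* chdir() is dangerous in a threaded server */

    fd = unix_stream_listen(socket_path, &opts);
    if (fd < 0)
            die_errno("unable to bind to '%s'", socket_path);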

Jeff Hostetler (10):
  pkt-line: promote static buffer in packet_write_gently() to callers
  pkt-line: add write_packetized_from_buf2() that takes scratch buffer
  simple-ipc: design documentation for new IPC mechanism
  simple-ipc: add win32 implementation
  simple-ipc: add t/helper/test-simple-ipc and t0052
  unix-socket: elimiate static unix_stream_socket() helper function
  unix-socket: add options to unix_stream_listen()
  unix-socket: add no-chdir option to unix_stream_listen()
  unix-socket: do not call die in unix_stream_connect()
  simple-ipc: add Unix domain socket implementation

Johannes Schindelin (3):
  pkt-line: optionally skip the flush packet in
    write_packetized_from_buf()
  pkt-line: (optionally) libify the packet readers
  pkt-line: accept additional options in read_packetized_to_strbuf()

Junio C Hamano (1):
  ci/install-depends: attempt to fix "brew cask" stuff

 Documentation/technical/api-simple-ipc.txt |   34 +
 Makefile                                   |    8 +
 builtin/credential-cache--daemon.c         |    3 +-
 ci/install-dependencies.sh                 |    8 +-
 compat/simple-ipc/ipc-shared.c             |   28 +
 compat/simple-ipc/ipc-unix-socket.c        | 1127 ++++++++++++++++++++
 compat/simple-ipc/ipc-win32.c              |  751 +++++++++++++
 config.mak.uname                           |    2 +
 contrib/buildsystems/CMakeLists.txt        |    6 +
 convert.c                                  |    4 +-
 pkt-line.c                                 |   70 +-
 pkt-line.h                                 |   26 +-
 simple-ipc.h                               |  230 ++++
 t/helper/test-simple-ipc.c                 |  485 +++++++++
 t/helper/test-tool.c                       |    1 +
 t/helper/test-tool.h                       |    1 +
 t/t0052-simple-ipc.sh                      |  129 +++
 unix-socket.c                              |   67 +-
 unix-socket.h                              |   16 +-
 19 files changed, 2949 insertions(+), 47 deletions(-)
 create mode 100644 Documentation/technical/api-simple-ipc.txt
 create mode 100644 compat/simple-ipc/ipc-shared.c
 create mode 100644 compat/simple-ipc/ipc-unix-socket.c
 create mode 100644 compat/simple-ipc/ipc-win32.c
 create mode 100644 simple-ipc.h
 create mode 100644 t/helper/test-simple-ipc.c
 create mode 100755 t/t0052-simple-ipc.sh


base-commit: 71ca53e8125e36efbda17293c50027d31681a41f
Published-As: https://github.com/gitgitgadget/git/releases/tag/pr-766%2Fjeffhostetler%2Fsimple-ipc-v2
Fetch-It-Via: git fetch https://github.com/gitgitgadget/git pr-766/jeffhostetler/simple-ipc-v2
Pull-Request: https://github.com/gitgitgadget/git/pull/766

Range-diff vs v1:

  1:  1155a45cf64 <  -:  ----------- pkt-line: use stack rather than static buffer in packet_write_gently()
  -:  ----------- >  1:  4c6766d4183 ci/install-depends: attempt to fix "brew cask" stuff
  -:  ----------- >  2:  3b03a8ff7a7 pkt-line: promote static buffer in packet_write_gently() to callers
  -:  ----------- >  3:  e671894b4c0 pkt-line: add write_packetized_from_buf2() that takes scratch buffer
  3:  edf5ac95d66 !  4:  0832f7d324d pkt-line: optionally skip the flush packet in write_packetized_from_buf()
     @@ Commit message
          packets before a final flush packet, so let's extend this function to
          prepare for that scenario.
      
     +    Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
          Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
      
       ## convert.c ##
     @@ pkt-line.c: int write_packetized_from_fd(int fd_in, int fd_out)
      -int write_packetized_from_buf(const char *src_in, size_t len, int fd_out)
      +int write_packetized_from_buf(const char *src_in, size_t len, int fd_out,
      +			      int flush_at_end)
     + {
     + 	static struct packet_scratch_space scratch;
     + 
     +-	return write_packetized_from_buf2(src_in, len, fd_out, &scratch);
     ++	return write_packetized_from_buf2(src_in, len, fd_out,
     ++					  flush_at_end, &scratch);
     + }
     + 
     + int write_packetized_from_buf2(const char *src_in, size_t len, int fd_out,
     ++			       int flush_at_end,
     + 			       struct packet_scratch_space *scratch)
       {
       	int err = 0;
     - 	size_t bytes_written = 0;
     -@@ pkt-line.c: int write_packetized_from_buf(const char *src_in, size_t len, int fd_out)
     - 		err = packet_write_gently(fd_out, src_in + bytes_written, bytes_to_write);
     +@@ pkt-line.c: int write_packetized_from_buf2(const char *src_in, size_t len, int fd_out,
     + 		err = packet_write_gently(fd_out, src_in + bytes_written, bytes_to_write, scratch);
       		bytes_written += bytes_to_write;
       	}
      -	if (!err)
     @@ pkt-line.h: void packet_buf_write_len(struct strbuf *buf, const char *data, size
      -int write_packetized_from_buf(const char *src_in, size_t len, int fd_out);
      +int write_packetized_from_buf(const char *src_in, size_t len, int fd_out,
      +			      int flush_at_end);
     + int write_packetized_from_buf2(const char *src_in, size_t len, int fd_out,
     ++			       int flush_at_end,
     + 			       struct packet_scratch_space *scratch);
       
       /*
     -  * Read a packetized line into the buffer, which must be at least size bytes
  2:  b7d678bc918 !  5:  43bc4a26b79 pkt-line: (optionally) libify the packet readers
     @@ pkt-line.c: enum packet_read_status packet_read_with_status(int fd, char **src_b
       		*pktlen = -1;
      
       ## pkt-line.h ##
     -@@ pkt-line.h: int write_packetized_from_buf(const char *src_in, size_t len, int fd_out);
     +@@ pkt-line.h: int write_packetized_from_buf2(const char *src_in, size_t len, int fd_out,
        *
        * If options contains PACKET_READ_DIE_ON_ERR_PACKET, it dies when it sees an
        * ERR packet.
  4:  2f399ac107c =  6:  6a389a35335 pkt-line: accept additional options in read_packetized_to_strbuf()
  5:  7064c5e9ffa !  7:  a7275b4bdc2 simple-ipc: design documentation for new IPC mechanism
     @@ Documentation/technical/api-simple-ipc.txt (new)
      +
      +This IPC mechanism differs from the existing `sub-process.c` model
      +(Documentation/technical/long-running-process-protocol.txt) and used
     -+by applications like Git-LFS because the server is assumed to be very
     -+long running system service.  In contrast, a "sub-process model process"
     -+is started with the foreground process and exits when the foreground
     -+process terminates.  How the server is started is also outside the
     -+scope of the IPC mechanism.
     ++by applications like Git-LFS.  In the simple-ipc model the server is
     ++assumed to be a very long-running system service.  In contrast, in the
     ++LFS-style sub-process model the helper is started with the foreground
     ++process and exits when the foreground process terminates.
     ++
     ++How the simple-ipc server is started is also outside the scope of the
     ++IPC mechanism.  For example, the server might be started during
     ++maintenance operations.
      +
      +The IPC protocol consists of a single request message from the client and
      +an optional request message from the server.  For simplicity, pkt-line
  6:  9e27c07d785 !  8:  388366913d4 simple-ipc: add win32 implementation
     @@ compat/simple-ipc/ipc-win32.c (new)
      +enum ipc_active_state ipc_client_try_connect(
      +	const char *path,
      +	const struct ipc_client_connect_options *options,
     -+	int *pfd)
     ++	struct ipc_client_connection **p_connection)
      +{
      +	wchar_t wpath[MAX_PATH];
      +	enum ipc_active_state state = IPC_STATE__OTHER_ERROR;
     ++	int fd = -1;
      +
     -+	*pfd = -1;
     ++	*p_connection = NULL;
      +
      +	trace2_region_enter("ipc-client", "try-connect", NULL);
      +	trace2_data_string("ipc-client", NULL, "try-connect/path", path);
     @@ compat/simple-ipc/ipc-win32.c (new)
      +		state = IPC_STATE__INVALID_PATH;
      +	else
      +		state = connect_to_server(wpath, WINDOWS_CONNECTION_TIMEOUT_MS,
     -+					  options, pfd);
     ++					  options, &fd);
      +
      +	trace2_data_intmax("ipc-client", NULL, "try-connect/state",
      +			   (intmax_t)state);
      +	trace2_region_leave("ipc-client", "try-connect", NULL);
     ++
     ++	if (state == IPC_STATE__LISTENING) {
     ++		(*p_connection) = xcalloc(1, sizeof(struct ipc_client_connection));
     ++		(*p_connection)->fd = fd;
     ++	}
     ++
      +	return state;
      +}
      +
     -+int ipc_client_send_command_to_fd(int fd, const char *message,
     -+				  struct strbuf *answer)
     ++void ipc_client_close_connection(struct ipc_client_connection *connection)
     ++{
     ++	if (!connection)
     ++		return;
     ++
     ++	if (connection->fd != -1)
     ++		close(connection->fd);
     ++
     ++	free(connection);
     ++}
     ++
     ++int ipc_client_send_command_to_connection(
     ++	struct ipc_client_connection *connection,
     ++	const char *message, struct strbuf *answer)
      +{
      +	int ret = 0;
      +
     @@ compat/simple-ipc/ipc-win32.c (new)
      +
      +	trace2_region_enter("ipc-client", "send-command", NULL);
      +
     -+	if (write_packetized_from_buf(message, strlen(message), fd, 1) < 0) {
     ++	if (write_packetized_from_buf2(message, strlen(message),
     ++				       connection->fd, 1,
     ++				       &connection->scratch_write_buffer) < 0) {
      +		ret = error(_("could not send IPC command"));
      +		goto done;
      +	}
      +
     -+	FlushFileBuffers((HANDLE)_get_osfhandle(fd));
     ++	FlushFileBuffers((HANDLE)_get_osfhandle(connection->fd));
      +
     -+	if (read_packetized_to_strbuf(fd, answer, PACKET_READ_NEVER_DIE) < 0) {
     ++	if (read_packetized_to_strbuf(connection->fd, answer,
     ++				      PACKET_READ_NEVER_DIE) < 0) {
      +		ret = error(_("could not read IPC response"));
      +		goto done;
      +	}
     @@ compat/simple-ipc/ipc-win32.c (new)
      +			    const struct ipc_client_connect_options *options,
      +			    const char *message, struct strbuf *response)
      +{
     -+	int fd;
      +	int ret = -1;
      +	enum ipc_active_state state;
     ++	struct ipc_client_connection *connection = NULL;
      +
     -+	state = ipc_client_try_connect(path, options, &fd);
     ++	state = ipc_client_try_connect(path, options, &connection);
      +
      +	if (state != IPC_STATE__LISTENING)
      +		return ret;
      +
     -+	ret = ipc_client_send_command_to_fd(fd, message, response);
     -+	close(fd);
     ++	ret = ipc_client_send_command_to_connection(connection, message, response);
     ++
     ++	ipc_client_close_connection(connection);
     ++
      +	return ret;
      +}
      +
     @@ compat/simple-ipc/ipc-win32.c (new)
      +	struct ipc_server_data *server_data;
      +	pthread_t pthread_id;
      +	HANDLE hPipe;
     ++	struct packet_scratch_space scratch_write_buffer;
      +};
      +
      +/*
     @@ compat/simple-ipc/ipc-win32.c (new)
      +static int do_io_reply_callback(struct ipc_server_reply_data *reply_data,
      +		       const char *response, size_t response_len)
      +{
     ++	struct packet_scratch_space *scratch =
     ++		&reply_data->server_thread_data->scratch_write_buffer;
     ++
      +	if (reply_data->magic != MAGIC_SERVER_REPLY_DATA)
      +		BUG("reply_cb called with wrong instance data");
      +
     -+	return write_packetized_from_buf(response, response_len,
     -+					 reply_data->fd, 0);
     ++	return write_packetized_from_buf2(response, response_len,
     ++					  reply_data->fd, 0, scratch);
      +}
      +
      +/*
     @@ simple-ipc.h (new)
      +#endif
      +
      +#ifdef SUPPORTS_SIMPLE_IPC
     ++#include "pkt-line.h"
      +
      +/*
      + * Simple IPC Client Side API.
     @@ simple-ipc.h (new)
      + */
      +enum ipc_active_state ipc_get_active_state(const char *path);
      +
     ++struct ipc_client_connection {
     ++	int fd;
     ++	struct packet_scratch_space scratch_write_buffer;
     ++};
     ++
      +/*
      + * Try to connect to the daemon on the named pipe or socket.
      + *
     -+ * Returns IPC_STATE__LISTENING (and an fd) when connected.
     ++ * Returns IPC_STATE__LISTENING and a connection handle.
      + *
      + * Otherwise, returns info to help decide whether to retry or to
      + * spawn/respawn the server.
     @@ simple-ipc.h (new)
      +enum ipc_active_state ipc_client_try_connect(
      +	const char *path,
      +	const struct ipc_client_connect_options *options,
     -+	int *pfd);
     ++	struct ipc_client_connection **p_connection);
     ++
     ++void ipc_client_close_connection(struct ipc_client_connection *connection);
      +
      +/*
      + * Used by the client to synchronously send and receive a message with
     -+ * the server on the provided fd.
     ++ * the server on the provided client connection.
      + *
      + * Returns 0 when successful.
      + *
      + * Calls error() and returns non-zero otherwise.
      + */
     -+int ipc_client_send_command_to_fd(int fd, const char *message,
     -+				  struct strbuf *answer);
     ++int ipc_client_send_command_to_connection(
     ++	struct ipc_client_connection *connection,
     ++	const char *message, struct strbuf *answer);
      +
      +/*
      + * Used by the client to synchronously connect and send and receive a
  9:  69969c2b8d3 =  9:  f0bebf1cdb3 simple-ipc: add t/helper/test-simple-ipc and t0052
  -:  ----------- > 10:  f5d5445cf42 unix-socket: elimiate static unix_stream_socket() helper function
  7:  96268351ac6 ! 11:  7a6a69dfc20 unix-socket: create gentle version of unix_stream_listen()
     @@ Metadata
      Author: Jeff Hostetler <jeffhost@microsoft.com>
      
       ## Commit message ##
     -    unix-socket: create gentle version of unix_stream_listen()
     +    unix-socket: add options to unix_stream_listen()
      
     -    Create a gentle version of `unix_stream_listen()`.  This version does
     -    not call `die()` if a socket-fd cannot be created and does not assume
     -    that it is safe to `unlink()` an existing socket-inode.
     +    Update `unix_stream_listen()` to take an options structure to override
     +    default behaviors.  This includes the size of the `listen()` backlog
     +    and whether it should always unlink the socket file before trying to
     +    create a new one.  Also eliminate calls to `die()` if it cannot create
     +    a socket.
      
     -    `unix_stream_listen()` uses `unix_stream_socket()` helper function to
     -    create the socket-fd.  Avoid that helper because it calls `die()` on
     -    errors.
     -
     -    `unix_stream_listen()` always tries to `unlink()` the socket-path before
     -    calling `bind()`.  If there is an existing server/daemon already bound
     -    and listening on that socket-path, our `unlink()` would have the effect
     -    of disassociating the existing server's bound-socket-fd from the socket-path
     -    without notifying the existing server.  The existing server could continue
     -    to service existing connections (accepted-socket-fd's), but would not
     -    receive any futher new connections (since clients rendezvous via the
     -    socket-path).  The existing server would effectively be offline but yet
     -    appear to be active.
     +    Normally, `unix_stream_listen()` always tries to `unlink()` the
     +    socket-path before calling `bind()`.  If there is an existing
     +    server/daemon already bound and listening on that socket-path, our
     +    `unlink()` would have the effect of disassociating the existing
     +    server's bound-socket-fd from the socket-path without notifying the
     +    existing server.  The existing server could continue to service
     +    existing connections (accepted-socket-fd's), but would not receive any
     +    futher new connections (since clients rendezvous via the socket-path).
     +    The existing server would effectively be offline but yet appear to be
     +    active.
      
          Furthermore, `unix_stream_listen()` creates an opportunity for a brief
          race condition for connecting clients if they try to connect in the
     @@ Commit message
      
          Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
      
     + ## builtin/credential-cache--daemon.c ##
     +@@ builtin/credential-cache--daemon.c: static int serve_cache_loop(int fd)
     + 
     + static void serve_cache(const char *socket_path, int debug)
     + {
     ++	struct unix_stream_listen_opts opts = UNIX_STREAM_LISTEN_OPTS_INIT;
     + 	int fd;
     + 
     +-	fd = unix_stream_listen(socket_path);
     ++	fd = unix_stream_listen(socket_path, &opts);
     + 	if (fd < 0)
     + 		die_errno("unable to bind to '%s'", socket_path);
     + 
     +
       ## unix-socket.c ##
     -@@ unix-socket.c: int unix_stream_listen(const char *path)
     - 	errno = saved_errno;
     +@@ unix-socket.c: int unix_stream_connect(const char *path)
       	return -1;
       }
     -+
     -+int unix_stream_listen_gently(const char *path,
     -+			      const struct unix_stream_listen_opts *opts)
     -+{
     + 
     +-int unix_stream_listen(const char *path)
     ++int unix_stream_listen(const char *path,
     ++		       const struct unix_stream_listen_opts *opts)
     + {
     +-	int fd, saved_errno;
      +	int fd = -1;
     -+	int bind_successful = 0;
      +	int saved_errno;
     -+	struct sockaddr_un sa;
     -+	struct unix_sockaddr_context ctx;
     -+
     -+	if (unix_sockaddr_init(&sa, path, &ctx) < 0)
     -+		goto fail;
     ++	int bind_successful = 0;
     ++	int backlog;
     + 	struct sockaddr_un sa;
     + 	struct unix_sockaddr_context ctx;
     + 
     +-	unlink(path);
     +-
     + 	if (unix_sockaddr_init(&sa, path, &ctx) < 0)
     + 		return -1;
      +
     -+	fd = socket(AF_UNIX, SOCK_STREAM, 0);
     -+	if (fd < 0)
     + 	fd = socket(AF_UNIX, SOCK_STREAM, 0);
     + 	if (fd < 0)
     +-		die_errno("unable to create socket");
      +		goto fail;
      +
      +	if (opts->force_unlink_before_bind)
      +		unlink(path);
     -+
     -+	if (bind(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0)
     -+		goto fail;
     + 
     + 	if (bind(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0)
     + 		goto fail;
      +	bind_successful = 1;
     -+
     -+	if (listen(fd, opts->listen_backlog_size) < 0)
     -+		goto fail;
     -+
     -+	unix_sockaddr_cleanup(&ctx);
     -+	return fd;
     -+
     -+fail:
     -+	saved_errno = errno;
     -+	unix_sockaddr_cleanup(&ctx);
     -+	close(fd);
     + 
     +-	if (listen(fd, 5) < 0)
     ++	if (opts->listen_backlog_size > 0)
     ++		backlog = opts->listen_backlog_size;
     ++	else
     ++		backlog = 5;
     ++	if (listen(fd, backlog) < 0)
     + 		goto fail;
     + 
     + 	unix_sockaddr_cleanup(&ctx);
     +@@ unix-socket.c: int unix_stream_listen(const char *path)
     + fail:
     + 	saved_errno = errno;
     + 	unix_sockaddr_cleanup(&ctx);
     +-	close(fd);
     ++	if (fd != -1)
     ++		close(fd);
      +	if (bind_successful)
      +		unlink(path);
     -+	errno = saved_errno;
     -+	return -1;
     -+}
     + 	errno = saved_errno;
     + 	return -1;
     + }
      
       ## unix-socket.h ##
      @@
     - int unix_stream_connect(const char *path);
     - int unix_stream_listen(const char *path);
     + #ifndef UNIX_SOCKET_H
     + #define UNIX_SOCKET_H
       
      +struct unix_stream_listen_opts {
      +	int listen_backlog_size;
      +	unsigned int force_unlink_before_bind:1;
      +};
      +
     -+int unix_stream_listen_gently(const char *path,
     -+			      const struct unix_stream_listen_opts *opts);
     ++#define UNIX_STREAM_LISTEN_OPTS_INIT \
     ++{ \
     ++	.listen_backlog_size = 5, \
     ++	.force_unlink_before_bind = 1, \
     ++}
      +
     + int unix_stream_connect(const char *path);
     +-int unix_stream_listen(const char *path);
     ++int unix_stream_listen(const char *path,
     ++		       const struct unix_stream_listen_opts *opts);
     + 
       #endif /* UNIX_SOCKET_H */
  8:  383a9755669 ! 12:  745b6d5fb74 unix-socket: add no-chdir option to unix_stream_listen_gently()
     @@ Metadata
      Author: Jeff Hostetler <jeffhost@microsoft.com>
      
       ## Commit message ##
     -    unix-socket: add no-chdir option to unix_stream_listen_gently()
     +    unix-socket: add no-chdir option to unix_stream_listen()
      
          Calls to `chdir()` are dangerous in a multi-threaded context.  If
          `unix_stream_listen()` is given a socket pathname that is too big to
     @@ Commit message
          Teach `unix_sockaddr_init()` to not allow calls to `chdir()` when flag
          is set.
      
     -    Extend the public interface to `unix_stream_listen_gently()` to also
     -    expose this new flag.
     -
          Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
      
       ## unix-socket.c ##
     @@ unix-socket.c: int unix_stream_connect(const char *path)
       
       	if (unix_sockaddr_init(&sa, path, &ctx) < 0)
       		return -1;
     -@@ unix-socket.c: int unix_stream_listen(const char *path)
     - {
     - 	int fd, saved_errno;
     - 	struct sockaddr_un sa;
     --	struct unix_sockaddr_context ctx;
     -+	struct unix_sockaddr_context ctx = UNIX_SOCKADDR_CONTEXT_INIT;
     - 
     - 	unlink(path);
     - 
     -@@ unix-socket.c: int unix_stream_listen_gently(const char *path,
     +@@ unix-socket.c: int unix_stream_listen(const char *path,
       	int bind_successful = 0;
     - 	int saved_errno;
     + 	int backlog;
       	struct sockaddr_un sa;
      -	struct unix_sockaddr_context ctx;
      +	struct unix_sockaddr_context ctx = UNIX_SOCKADDR_CONTEXT_INIT;
     @@ unix-socket.c: int unix_stream_listen_gently(const char *path,
      +	ctx.disallow_chdir = opts->disallow_chdir;
       
       	if (unix_sockaddr_init(&sa, path, &ctx) < 0)
     - 		goto fail;
     + 		return -1;
      
       ## unix-socket.h ##
     -@@ unix-socket.h: int unix_stream_listen(const char *path);
     +@@
       struct unix_stream_listen_opts {
       	int listen_backlog_size;
       	unsigned int force_unlink_before_bind:1;
      +	unsigned int disallow_chdir:1;
       };
       
     - int unix_stream_listen_gently(const char *path,
     + #define UNIX_STREAM_LISTEN_OPTS_INIT \
     + { \
     + 	.listen_backlog_size = 5, \
     + 	.force_unlink_before_bind = 1, \
     ++	.disallow_chdir = 0, \
     + }
     + 
     + int unix_stream_connect(const char *path);
  -:  ----------- > 13:  2cca15a10ec unix-socket: do not call die in unix_stream_connect()
 10:  a1b15fb5cb0 ! 14:  72c1c209c38 simple-ipc: add Unix domain socket implementation
     @@ Commit message
      
          Create Unix domain socket based implementation of "simple-ipc".
      
     +    A set of `ipc_client` routines implement a client library to connect
     +    to an `ipc_server` over a Unix domain socket, send a simple request,
     +    and receive a single response.  Clients use blocking IO on the socket.
     +
     +    A set of `ipc_server` routines implement a thread pool to listen for
     +    and concurrently service client connections.
     +
     +    The server creates a new Unix domain socket at a known location.  If a
     +    socket already exists with that name, the server tries to determine if
     +    another server is already listening on the socket or if the socket is
     +    dead.  If socket is busy, the server exits with an error rather than
     +    stealing the socket.  If the socket is dead, the server creates a new
     +    one and starts up.
     +
     +    If while running, the server detects that its socket has been stolen
     +    by another server, it automatically exits.
     +
          Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
      
       ## Makefile ##
     @@ compat/simple-ipc/ipc-unix-socket.c (new)
      +	struct ipc_client_connect_options options
      +		= IPC_CLIENT_CONNECT_OPTIONS_INIT;
      +	struct stat st;
     -+	int fd_test = -1;
     ++	struct ipc_client_connection *connection_test = NULL;
      +
      +	options.wait_if_busy = 0;
      +	options.wait_if_not_found = 0;
     @@ compat/simple-ipc/ipc-unix-socket.c (new)
       +	 * at `path`, doesn't mean that there is a server listening.
      +	 * Ping it to be sure.
      +	 */
     -+	state = ipc_client_try_connect(path, &options, &fd_test);
     -+	close(fd_test);
     ++	state = ipc_client_try_connect(path, &options, &connection_test);
     ++	ipc_client_close_connection(connection_test);
      +
      +	return state;
      +}
     @@ compat/simple-ipc/ipc-unix-socket.c (new)
      +enum ipc_active_state ipc_client_try_connect(
      +	const char *path,
      +	const struct ipc_client_connect_options *options,
     -+	int *pfd)
     ++	struct ipc_client_connection **p_connection)
      +{
      +	enum ipc_active_state state = IPC_STATE__OTHER_ERROR;
     ++	int fd = -1;
      +
     -+	*pfd = -1;
     ++	*p_connection = NULL;
      +
      +	trace2_region_enter("ipc-client", "try-connect", NULL);
      +	trace2_data_string("ipc-client", NULL, "try-connect/path", path);
      +
      +	state = connect_to_server(path, MY_CONNECTION_TIMEOUT_MS,
     -+				  options, pfd);
     ++				  options, &fd);
      +
      +	trace2_data_intmax("ipc-client", NULL, "try-connect/state",
      +			   (intmax_t)state);
      +	trace2_region_leave("ipc-client", "try-connect", NULL);
     ++
     ++	if (state == IPC_STATE__LISTENING) {
     ++		(*p_connection) = xcalloc(1, sizeof(struct ipc_client_connection));
     ++		(*p_connection)->fd = fd;
     ++	}
     ++
      +	return state;
      +}
      +
     -+int ipc_client_send_command_to_fd(int fd, const char *message,
     -+				  struct strbuf *answer)
     ++void ipc_client_close_connection(struct ipc_client_connection *connection)
     ++{
     ++	if (!connection)
     ++		return;
     ++
     ++	if (connection->fd != -1)
     ++		close(connection->fd);
     ++
     ++	free(connection);
     ++}
     ++
     ++int ipc_client_send_command_to_connection(
     ++	struct ipc_client_connection *connection,
     ++	const char *message, struct strbuf *answer)
      +{
      +	int ret = 0;
      +
     @@ compat/simple-ipc/ipc-unix-socket.c (new)
      +
      +	trace2_region_enter("ipc-client", "send-command", NULL);
      +
     -+	if (write_packetized_from_buf(message, strlen(message), fd, 1) < 0) {
     ++	if (write_packetized_from_buf2(message, strlen(message),
     ++				       connection->fd, 1,
     ++				       &connection->scratch_write_buffer) < 0) {
      +		ret = error(_("could not send IPC command"));
      +		goto done;
      +	}
      +
     -+	if (read_packetized_to_strbuf(fd, answer, PACKET_READ_NEVER_DIE) < 0) {
     ++	if (read_packetized_to_strbuf(connection->fd, answer,
     ++				      PACKET_READ_NEVER_DIE) < 0) {
      +		ret = error(_("could not read IPC response"));
      +		goto done;
      +	}
     @@ compat/simple-ipc/ipc-unix-socket.c (new)
      +			    const struct ipc_client_connect_options *options,
      +			    const char *message, struct strbuf *answer)
      +{
     -+	int fd;
      +	int ret = -1;
      +	enum ipc_active_state state;
     ++	struct ipc_client_connection *connection = NULL;
      +
     -+	state = ipc_client_try_connect(path, options, &fd);
     ++	state = ipc_client_try_connect(path, options, &connection);
      +
      +	if (state != IPC_STATE__LISTENING)
      +		return ret;
      +
     -+	ret = ipc_client_send_command_to_fd(fd, message, answer);
     -+	close(fd);
     ++	ret = ipc_client_send_command_to_connection(connection, message, answer);
     ++
     ++	ipc_client_close_connection(connection);
     ++
      +	return ret;
      +}
      +
     @@ compat/simple-ipc/ipc-unix-socket.c (new)
      +	struct ipc_worker_thread_data *next_thread;
      +	struct ipc_server_data *server_data;
      +	pthread_t pthread_id;
     ++	struct packet_scratch_space scratch_write_buffer;
      +};
      +
      +struct ipc_accept_thread_data {
      +	enum magic magic;
      +	struct ipc_server_data *server_data;
     ++
      +	int fd_listen;
     -+	ino_t inode_listen;
     ++	struct stat st_listen;
     ++
      +	int fd_send_shutdown;
      +	int fd_wait_shutdown;
      +	pthread_t pthread_id;
     @@ compat/simple-ipc/ipc-unix-socket.c (new)
      +static int do_io_reply_callback(struct ipc_server_reply_data *reply_data,
      +		       const char *response, size_t response_len)
      +{
     ++	struct packet_scratch_space *scratch =
     ++		&reply_data->worker_thread_data->scratch_write_buffer;
     ++
      +	if (reply_data->magic != MAGIC_SERVER_REPLY_DATA)
      +		BUG("reply_cb called with wrong instance data");
      +
     -+	return write_packetized_from_buf(response, response_len,
     -+					 reply_data->fd, 0);
     ++	return write_packetized_from_buf2(response, response_len,
     ++					  reply_data->fd, 0, scratch);
      +}
      +
      +/* A randomly chosen value. */
     @@ compat/simple-ipc/ipc-unix-socket.c (new)
      +static int socket_was_stolen(struct ipc_accept_thread_data *accept_thread_data)
      +{
      +	struct stat st;
     ++	struct stat *ref_st = &accept_thread_data->st_listen;
      +
      +	if (lstat(accept_thread_data->server_data->buf_path.buf, &st) == -1)
      +		return 1;
      +
     -+	if (st.st_ino != accept_thread_data->inode_listen)
     ++	if (st.st_ino != ref_st->st_ino)
      +		return 1;
      +
     ++	/* We might also consider the creation time on some platforms. */
     ++
      +	return 0;
      +}
      +
     @@ compat/simple-ipc/ipc-unix-socket.c (new)
      + * open/connect using the "socket-inode" pathname.
      + *
      + * Unix domain sockets have a fundamental design flaw because the
     -+ * "socket-inode" persists until the pathname is deleted; closing the listening
     -+ * "socket-fd" only closes the socket handle/descriptor, it does not delete
     -+ * the inode/pathname.
     ++ * "socket-inode" persists until the pathname is deleted; closing the
     ++ * listening "socket-fd" only closes the socket handle/descriptor, it
     ++ * does not delete the inode/pathname.
      + *
      + * Well-behaving service daemons are expected to also delete the inode
      + * before shutdown.  If a service crashes (or forgets) it can leave
     @@ compat/simple-ipc/ipc-unix-socket.c (new)
      + * inode *or* another service instance is already running.
      + *
      + * One possible solution is to blindly unlink the inode before
     -+ * attempting to bind a new socket-fd (and thus create) a new
     ++ * attempting to bind a new socket-fd and thus create a new
      + * socket-inode.  Then `bind(2)` should always succeed.  However, if
     -+ * there is an existing service instance, it would be orphaned --
     -+ * it would still be listening on a socket-fd that is still bound
     -+ * to an (unlinked) socket-inode, but that socket-inode is no longer
     ++ * there is an existing service instance, it would be orphaned -- it
     ++ * would still be listening on a socket-fd that is still bound to an
     ++ * (unlinked) socket-inode, but that socket-inode is no longer
      + * associated with the pathname.  New client connections will arrive
     -+ * at our new socket-inode and not the existing server's.  (It is upto
     -+ * the existing server to detect that its socket-inode has been
     -+ * stolen and shutdown.)
     ++ * at OUR new socket-inode -- rather than the existing server's
     ++ * socket.  (I suppose it is up to the existing server to detect that
     ++ * its socket-inode has been stolen and shutdown.)
     ++ *
     ++ * Another possible solution is to try to use the ".lock" trick, but
      ++ * bind() does not have an exclusive-create bit like open() does,
      ++ * so we cannot have multiple servers fighting/racing to create the
      ++ * same file name without the losers losing silently, never knowing
      ++ * that they lost.
     ++ *
     ++ * We try to avoid such stealing and would rather fail to run than
     ++ * steal an existing socket-inode (because we assume that the
     ++ * existing server has more context and value to the clients than a
     ++ * freshly started server).  However, if multiple servers are racing
     ++ * to start, we don't care which one wins -- none of them have any
     ++ * state information yet worth fighting for.
     ++ *
      ++ * Create a "unique" socket-inode with our PID in it (and assume that
      ++ * we can force-delete an existing socket with that name).  Stat it
     ++ * to get the inode number and ctime -- so that we can identify it as
     ++ * the one we created.  Then use the atomic-rename trick to install it
     ++ * in the real location.  (This will unlink an existing socket with
     ++ * that pathname -- and thereby steal the real socket-inode from an
     ++ * existing server.)
      + *
     -+ * Since this is rather obscure and infrequent, we try to "gently"
     -+ * create the socket-inode without disturbing an existing service.
     ++ * Elsewhere, our thread will periodically poll the socket-inode to
     ++ * see if someone else steals ours.
      + */
      +static int create_listener_socket(const char *path,
     -+				  const struct ipc_server_opts *ipc_opts)
     ++				  const struct ipc_server_opts *ipc_opts,
     ++				  struct stat *st_socket)
      +{
     ++	struct stat st;
     ++	struct strbuf buf_uniq = STRBUF_INIT;
      +	int fd_listen;
     -+	int fd_client;
     -+	struct unix_stream_listen_opts uslg_opts = {
     -+		.listen_backlog_size = LISTEN_BACKLOG,
     -+		.force_unlink_before_bind = 0,
     -+		.disallow_chdir = ipc_opts->uds_disallow_chdir
     -+	};
     -+
     -+	trace2_data_string("ipc-server", NULL, "try-listen-gently", path);
     -+
     -+	/*
     -+	 * Assume socket-inode does not exist and try to (gently)
     -+	 * create a new socket-inode on disk at pathname and bind
     -+	 * socket-fd to it.
     -+	 */
     -+	fd_listen = unix_stream_listen_gently(path, &uslg_opts);
     -+	if (fd_listen >= 0)
     -+		return fd_listen;
     ++	struct unix_stream_listen_opts uslg_opts = UNIX_STREAM_LISTEN_OPTS_INIT;
      +
     -+	if (errno != EADDRINUSE)
     -+		return error_errno(_("could not create socket '%s'"),
     -+				   path);
     -+
     -+	trace2_data_string("ipc-server", NULL, "try-detect-server", path);
     -+
     -+	/*
     -+	 * A socket-inode at pathname exists on disk, but we don't
      -+	 * know if a server is using it or if it is a stale inode.
     -+	 *
     -+	 * poke it with a trivial connection to try to find out.
     -+	 */
     -+	fd_client = unix_stream_connect(path);
     -+	if (fd_client >= 0) {
     ++	if (!lstat(path, &st) && S_ISSOCK(st.st_mode)) {
     ++		int fd_client;
      +		/*
     -+		 * An existing service process is alive and accepted our
     -+		 * connection.
     ++		 * A socket-inode at `path` exists on disk, but we
     ++		 * don't know whether it belongs to an active server
     ++		 * or if the last server died without cleaning up.
     ++		 *
     ++		 * Poke it with a trivial connection to try to find out.
      +		 */
     -+		close(fd_client);
     -+
     -+		/*
     -+		 * We cannot create a new socket-inode here, so we cannot
     -+		 * startup a new server on this pathname.
     -+		 */
     -+		errno = EADDRINUSE;
     -+		return error_errno(_("socket already in use '%s'"),
     ++		trace2_data_string("ipc-server", NULL, "try-detect-server",
      +				   path);
     ++		fd_client = unix_stream_connect(path);
     ++		if (fd_client >= 0) {
     ++			close(fd_client);
     ++			errno = EADDRINUSE;
     ++			return error_errno(_("socket already in use '%s'"),
     ++					   path);
     ++		}
      +	}
      +
     -+	trace2_data_string("ipc-server", NULL, "try-listen-force", path);
     -+
      +	/*
     -+	 * A socket-inode at pathname exists on disk, but we were not
     -+	 * able to connect to it, so we believe that this is a stale
     -+	 * socket-inode that a previous server forgot to delete.  Use
      -+	 * the traditional solution: force unlink it and create a new
     -+	 * one.
     -+	 *
     -+	 * TODO Note that it is possible that another server is
     -+	 * listening, but is either just starting up and not yet
     -+	 * responsive or is stuck somehow.  For now, I'm OK with
     -+	 * stealing the socket-inode from it in this case.
     ++	 * Create pathname to our "unique" socket and set it up for
     ++	 * business.
      +	 */
     -+	uslg_opts.force_unlink_before_bind = 1;
     -+	fd_listen = unix_stream_listen_gently(path, &uslg_opts);
     -+	if (fd_listen >= 0)
     -+		return fd_listen;
     -+
     -+	return error_errno(_("could not force create socket '%s'"), path);
     -+}
     -+
     -+static int setup_listener_socket(const char *path, ino_t *inode,
     -+				 const struct ipc_server_opts *ipc_opts)
     -+{
     -+	int fd_listen;
     -+	struct stat st;
     -+
     -+	trace2_region_enter("ipc-server", "create-listener_socket", NULL);
     -+	fd_listen = create_listener_socket(path, ipc_opts);
     -+	trace2_region_leave("ipc-server", "create-listener_socket", NULL);
     ++	strbuf_addf(&buf_uniq, "%s.%d", path, getpid());
      +
     -+	if (fd_listen < 0)
     -+		return fd_listen;
     -+
     -+	/*
     -+	 * We just bound a socket (descriptor) to a newly created unix
     -+	 * domain socket in the filesystem.  Capture the inode number
     -+	 * so we can later detect if/when someone else force-creates a
     -+	 * new socket and effectively steals the path from us.  (Which
     -+	 * would leave us listening to a socket that no client could
     -+	 * reach.)
     -+	 */
     -+	if (lstat(path, &st) < 0) {
     ++	uslg_opts.listen_backlog_size = LISTEN_BACKLOG;
     ++	uslg_opts.force_unlink_before_bind = 1;
     ++	uslg_opts.disallow_chdir = ipc_opts->uds_disallow_chdir;
     ++	fd_listen = unix_stream_listen(buf_uniq.buf, &uslg_opts);
     ++	if (fd_listen < 0) {
      +		int saved_errno = errno;
     ++		error_errno(_("could not create listener socket '%s'"),
     ++			    buf_uniq.buf);
     ++		strbuf_release(&buf_uniq);
     ++		errno = saved_errno;
     ++		return -1;
     ++	}
      +
     ++	if (lstat(buf_uniq.buf, st_socket)) {
     ++		int saved_errno = errno;
     ++		error_errno(_("could not stat listener socket '%s'"),
     ++			    buf_uniq.buf);
      +		close(fd_listen);
     -+		unlink(path);
     -+
     ++		unlink(buf_uniq.buf);
     ++		strbuf_release(&buf_uniq);
      +		errno = saved_errno;
     -+		return error_errno(_("could not lstat listener socket '%s'"),
     -+				   path);
     ++		return -1;
      +	}
      +
      +	if (set_socket_blocking_flag(fd_listen, 1)) {
      +		int saved_errno = errno;
     -+
     ++		error_errno(_("could not set listener socket nonblocking '%s'"),
     ++			    buf_uniq.buf);
      +		close(fd_listen);
     -+		unlink(path);
     ++		unlink(buf_uniq.buf);
     ++		strbuf_release(&buf_uniq);
     ++		errno = saved_errno;
     ++		return -1;
     ++	}
      +
     ++	/*
      ++	 * Install it as the "real" socket so that clients will start
     ++	 * connecting to our socket.
     ++	 */
     ++	if (rename(buf_uniq.buf, path)) {
     ++		int saved_errno = errno;
     ++		error_errno(_("could not create listener socket '%s'"), path);
     ++		close(fd_listen);
     ++		unlink(buf_uniq.buf);
     ++		strbuf_release(&buf_uniq);
      +		errno = saved_errno;
     -+		return error_errno(_("making listener socket nonblocking '%s'"),
     -+				   path);
     ++		return -1;
      +	}
      +
     -+	*inode = st.st_ino;
     ++	strbuf_release(&buf_uniq);
     ++	trace2_data_string("ipc-server", NULL, "try-listen", path);
     ++	return fd_listen;
     ++}
     ++
     ++static int setup_listener_socket(const char *path, struct stat *st_socket,
     ++				 const struct ipc_server_opts *ipc_opts)
     ++{
     ++	int fd_listen;
     ++
     ++	trace2_region_enter("ipc-server", "create-listener_socket", NULL);
     ++	fd_listen = create_listener_socket(path, ipc_opts, st_socket);
     ++	trace2_region_leave("ipc-server", "create-listener_socket", NULL);
      +
      +	return fd_listen;
      +}
     @@ compat/simple-ipc/ipc-unix-socket.c (new)
      +{
      +	struct ipc_server_data *server_data;
      +	int fd_listen;
     -+	ino_t inode_listen;
     ++	struct stat st_listen;
      +	int sv[2];
      +	int k;
      +	int nr_threads = opts->nr_threads;
     @@ compat/simple-ipc/ipc-unix-socket.c (new)
      +				   path);
      +	}
      +
     -+	fd_listen = setup_listener_socket(path, &inode_listen, opts);
     ++	fd_listen = setup_listener_socket(path, &st_listen, opts);
      +	if (fd_listen < 0) {
      +		int saved_errno = errno;
      +		close(sv[0]);
     @@ compat/simple-ipc/ipc-unix-socket.c (new)
      +	server_data->accept_thread->magic = MAGIC_ACCEPT_THREAD_DATA;
      +	server_data->accept_thread->server_data = server_data;
      +	server_data->accept_thread->fd_listen = fd_listen;
     -+	server_data->accept_thread->inode_listen = inode_listen;
     ++	server_data->accept_thread->st_listen = st_listen;
      +	server_data->accept_thread->fd_send_shutdown = sv[0];
      +	server_data->accept_thread->fd_wait_shutdown = sv[1];
      +

-- 
gitgitgadget

^ permalink raw reply	[flat|nested] 178+ messages in thread
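
As an aside on the range-diff above: the new approach binds the listener to a
uniquely named socket, stats it, and then rename()s it over the public
pathname, later polling the inode number to detect theft.  Below is a minimal,
self-contained sketch of that pattern in plain POSIX C -- it is not git code,
and the helper names listen_with_rename() and socket_still_ours() are made up
for illustration.

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/stat.h>
#include <sys/un.h>

/* Bind a listener to "<path>.<pid>", then atomically rename it to <path>. */
static int listen_with_rename(const char *path, struct stat *st_out)
{
	struct sockaddr_un sa;
	char uniq[sizeof(sa.sun_path)];
	int fd;

	snprintf(uniq, sizeof(uniq), "%s.%d", path, (int)getpid());

	fd = socket(AF_UNIX, SOCK_STREAM, 0);
	if (fd < 0)
		return -1;

	memset(&sa, 0, sizeof(sa));
	sa.sun_family = AF_UNIX;
	strncpy(sa.sun_path, uniq, sizeof(sa.sun_path) - 1);

	unlink(uniq);			/* we own the PID-suffixed name */
	if (bind(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0 ||
	    listen(fd, 5) < 0 ||
	    lstat(uniq, st_out) < 0 ||	/* remember the inode we created */
	    rename(uniq, path) < 0) {	/* atomically become the real socket */
		close(fd);
		unlink(uniq);
		return -1;
	}
	return fd;
}

/* A poll loop can later detect theft by comparing inode numbers. */
static int socket_still_ours(const char *path, const struct stat *st_ours)
{
	struct stat st;

	return !lstat(path, &st) && st.st_ino == st_ours->st_ino;
}

int main(void)
{
	struct stat st;
	int fd = listen_with_rename("/tmp/example.sock", &st);

	if (fd < 0)
		return 1;
	/* ... accept() clients here; periodically check socket_still_ours() ... */
	return socket_still_ours("/tmp/example.sock", &st) ? 0 : 1;
}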

* [PATCH v2 01/14] ci/install-depends: attempt to fix "brew cask" stuff
  2021-02-01 19:45 ` [PATCH v2 00/14] " Jeff Hostetler via GitGitGadget
@ 2021-02-01 19:45   ` Junio C Hamano via GitGitGadget
  2021-02-01 19:45   ` [PATCH v2 02/14] pkt-line: promote static buffer in packet_write_gently() to callers Jeff Hostetler via GitGitGadget
                     ` (14 subsequent siblings)
  15 siblings, 0 replies; 178+ messages in thread
From: Junio C Hamano via GitGitGadget @ 2021-02-01 19:45 UTC (permalink / raw)
  To: git
  Cc: Ævar Arnfjörð Bjarmason, Jeff Hostetler,
	Jeff King, Chris Torek, Jeff Hostetler, Junio C Hamano

From: Junio C Hamano <gitster@pobox.com>

We run "git pull" against "$cask_repo"; clarify that we are
expecting not to have any of our own modifications and running "git
pull" to merely update, by passing "--ff-only" on the command line.

Also, the "brew cask install" command line triggers an error message
that says:

    Error: Calling brew cask install is disabled! Use brew install
    [--cask] instead.

In addition, "brew install caskroom/cask/perforce" step triggers an
error that says:

    Error: caskroom/cask was moved. Tap homebrew/cask instead.

Attempt to see if blindly following the suggestion in these error
messages gets us into a better shape.

Signed-off-by: Junio C Hamano <gitster@pobox.com>
---
 ci/install-dependencies.sh | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/ci/install-dependencies.sh b/ci/install-dependencies.sh
index 0229a77f7d2..0b1184e04ad 100755
--- a/ci/install-dependencies.sh
+++ b/ci/install-dependencies.sh
@@ -44,13 +44,13 @@ osx-clang|osx-gcc)
 	test -z "$BREW_INSTALL_PACKAGES" ||
 	brew install $BREW_INSTALL_PACKAGES
 	brew link --force gettext
-	brew cask install --no-quarantine perforce || {
+	brew install --cask --no-quarantine perforce || {
 		# Update the definitions and try again
 		cask_repo="$(brew --repository)"/Library/Taps/homebrew/homebrew-cask &&
-		git -C "$cask_repo" pull --no-stat &&
-		brew cask install --no-quarantine perforce
+		git -C "$cask_repo" pull --no-stat --ff-only &&
+		brew install --cask --no-quarantine perforce
 	} ||
-	brew install caskroom/cask/perforce
+	brew install homebrew/cask/perforce
 	case "$jobname" in
 	osx-gcc)
 		brew install gcc@9
-- 
gitgitgadget


^ permalink raw reply related	[flat|nested] 178+ messages in thread

* [PATCH v2 02/14] pkt-line: promote static buffer in packet_write_gently() to callers
  2021-02-01 19:45 ` [PATCH v2 00/14] " Jeff Hostetler via GitGitGadget
  2021-02-01 19:45   ` [PATCH v2 01/14] ci/install-depends: attempt to fix "brew cask" stuff Junio C Hamano via GitGitGadget
@ 2021-02-01 19:45   ` Jeff Hostetler via GitGitGadget
  2021-02-02  9:41     ` Jeff King
  2021-02-01 19:45   ` [PATCH v2 03/14] pkt-line: add write_packetized_from_buf2() that takes scratch buffer Jeff Hostetler via GitGitGadget
                     ` (13 subsequent siblings)
  15 siblings, 1 reply; 178+ messages in thread
From: Jeff Hostetler via GitGitGadget @ 2021-02-01 19:45 UTC (permalink / raw)
  To: git
  Cc: Ævar Arnfjörð Bjarmason, Jeff Hostetler,
	Jeff King, Chris Torek, Jeff Hostetler, Jeff Hostetler

From: Jeff Hostetler <jeffhost@microsoft.com>

Move the static buffer used in `packet_write_gently()` to its callers.
This is a first step to make packet writing more thread-safe.

Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
---
 pkt-line.c | 33 ++++++++++++++++++++++++---------
 pkt-line.h | 10 ++++++++--
 2 files changed, 32 insertions(+), 11 deletions(-)

diff --git a/pkt-line.c b/pkt-line.c
index d633005ef74..14af049cd9c 100644
--- a/pkt-line.c
+++ b/pkt-line.c
@@ -194,26 +194,34 @@ int packet_write_fmt_gently(int fd, const char *fmt, ...)
 	return status;
 }
 
-static int packet_write_gently(const int fd_out, const char *buf, size_t size)
+/*
+ * Use the provided scratch space to build a combined <hdr><buf> buffer
+ * and write it to the file descriptor (in one write if possible).
+ */
+static int packet_write_gently(const int fd_out, const char *buf, size_t size,
+			       struct packet_scratch_space *scratch)
 {
-	static char packet_write_buffer[LARGE_PACKET_MAX];
 	size_t packet_size;
 
-	if (size > sizeof(packet_write_buffer) - 4)
+	if (size > sizeof(scratch->buffer) - 4)
 		return error(_("packet write failed - data exceeds max packet size"));
 
 	packet_trace(buf, size, 1);
 	packet_size = size + 4;
-	set_packet_header(packet_write_buffer, packet_size);
-	memcpy(packet_write_buffer + 4, buf, size);
-	if (write_in_full(fd_out, packet_write_buffer, packet_size) < 0)
+
+	set_packet_header(scratch->buffer, packet_size);
+	memcpy(scratch->buffer + 4, buf, size);
+
+	if (write_in_full(fd_out, scratch->buffer, packet_size) < 0)
 		return error(_("packet write failed"));
 	return 0;
 }
 
 void packet_write(int fd_out, const char *buf, size_t size)
 {
-	if (packet_write_gently(fd_out, buf, size))
+	static struct packet_scratch_space scratch;
+
+	if (packet_write_gently(fd_out, buf, size, &scratch))
 		die_errno(_("packet write failed"));
 }
 
@@ -244,6 +252,12 @@ void packet_buf_write_len(struct strbuf *buf, const char *data, size_t len)
 
 int write_packetized_from_fd(int fd_in, int fd_out)
 {
+	/*
+	 * TODO We could save a memcpy() if we essentially inline
+	 * TODO packet_write_gently() here and change the xread()
+	 * TODO to pass &buf[4].
+	 */
+	static struct packet_scratch_space scratch;
 	static char buf[LARGE_PACKET_DATA_MAX];
 	int err = 0;
 	ssize_t bytes_to_write;
@@ -254,7 +268,7 @@ int write_packetized_from_fd(int fd_in, int fd_out)
 			return COPY_READ_ERROR;
 		if (bytes_to_write == 0)
 			break;
-		err = packet_write_gently(fd_out, buf, bytes_to_write);
+		err = packet_write_gently(fd_out, buf, bytes_to_write, &scratch);
 	}
 	if (!err)
 		err = packet_flush_gently(fd_out);
@@ -263,6 +277,7 @@ int write_packetized_from_fd(int fd_in, int fd_out)
 
 int write_packetized_from_buf(const char *src_in, size_t len, int fd_out)
 {
+	static struct packet_scratch_space scratch;
 	int err = 0;
 	size_t bytes_written = 0;
 	size_t bytes_to_write;
@@ -274,7 +289,7 @@ int write_packetized_from_buf(const char *src_in, size_t len, int fd_out)
 			bytes_to_write = len - bytes_written;
 		if (bytes_to_write == 0)
 			break;
-		err = packet_write_gently(fd_out, src_in + bytes_written, bytes_to_write);
+		err = packet_write_gently(fd_out, src_in + bytes_written, bytes_to_write, &scratch);
 		bytes_written += bytes_to_write;
 	}
 	if (!err)
diff --git a/pkt-line.h b/pkt-line.h
index 8c90daa59ef..4ccd6f88926 100644
--- a/pkt-line.h
+++ b/pkt-line.h
@@ -5,6 +5,13 @@
 #include "strbuf.h"
 #include "sideband.h"
 
+#define LARGE_PACKET_MAX 65520
+#define LARGE_PACKET_DATA_MAX (LARGE_PACKET_MAX - 4)
+
+struct packet_scratch_space {
+	char buffer[LARGE_PACKET_MAX];
+};
+
 /*
  * Write a packetized stream, where each line is preceded by
  * its length (including the header) as a 4-byte hex number.
@@ -213,8 +220,7 @@ enum packet_read_status packet_reader_read(struct packet_reader *reader);
 enum packet_read_status packet_reader_peek(struct packet_reader *reader);
 
 #define DEFAULT_PACKET_MAX 1000
-#define LARGE_PACKET_MAX 65520
-#define LARGE_PACKET_DATA_MAX (LARGE_PACKET_MAX - 4)
+
 extern char packet_buffer[LARGE_PACKET_MAX];
 
 struct packet_writer {
-- 
gitgitgadget


^ permalink raw reply related	[flat|nested] 178+ messages in thread
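
The reason the buffer has to move is easiest to see with a threaded caller.
Here is a small, hypothetical illustration in plain C with pthreads (not git
code; scratch_space and the helper names are stand-ins): each worker embeds
its own scratch space in its thread data, so two threads framing packets at
the same time never overwrite each other, which a single shared static buffer
cannot guarantee.

#include <pthread.h>
#include <stdio.h>
#include <string.h>

#define SCRATCH_MAX 65520

struct scratch_space {
	char buffer[SCRATCH_MAX];	/* stands in for packet_scratch_space */
};

struct worker {
	pthread_t tid;
	int id;
	struct scratch_space scratch;	/* private to this worker */
};

/* Build "<4-hex-len><payload>" in the caller-owned buffer. */
static void frame_message(struct scratch_space *scratch, const char *msg)
{
	size_t len = strlen(msg) + 4;

	snprintf(scratch->buffer, sizeof(scratch->buffer), "%04zx%s", len, msg);
}

static void *worker_proc(void *arg)
{
	struct worker *w = arg;
	char msg[32];

	snprintf(msg, sizeof(msg), "hello from worker %d", w->id);
	frame_message(&w->scratch, msg);	/* no shared buffer, no race */
	printf("%s\n", w->scratch.buffer);
	return NULL;
}

int main(void)
{
	struct worker workers[4];
	int i;

	for (i = 0; i < 4; i++) {
		workers[i].id = i;
		pthread_create(&workers[i].tid, NULL, worker_proc, &workers[i]);
	}
	for (i = 0; i < 4; i++)
		pthread_join(workers[i].tid, NULL);
	return 0;
}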

* [PATCH v2 03/14] pkt-line: add write_packetized_from_buf2() that takes scratch buffer
  2021-02-01 19:45 ` [PATCH v2 00/14] " Jeff Hostetler via GitGitGadget
  2021-02-01 19:45   ` [PATCH v2 01/14] ci/install-depends: attempt to fix "brew cask" stuff Junio C Hamano via GitGitGadget
  2021-02-01 19:45   ` [PATCH v2 02/14] pkt-line: promote static buffer in packet_write_gently() to callers Jeff Hostetler via GitGitGadget
@ 2021-02-01 19:45   ` Jeff Hostetler via GitGitGadget
  2021-02-02  9:44     ` Jeff King
  2021-02-01 19:45   ` [PATCH v2 04/14] pkt-line: optionally skip the flush packet in write_packetized_from_buf() Johannes Schindelin via GitGitGadget
                     ` (12 subsequent siblings)
  15 siblings, 1 reply; 178+ messages in thread
From: Jeff Hostetler via GitGitGadget @ 2021-02-01 19:45 UTC (permalink / raw)
  To: git
  Cc: Ævar Arnfjörð Bjarmason, Jeff Hostetler,
	Jeff King, Chris Torek, Jeff Hostetler, Jeff Hostetler

From: Jeff Hostetler <jeffhost@microsoft.com>

Create version of `write_packetized_from_buf()` that takes a scratch buffer
argument rather than assuming a static buffer.  This will be used later as
we make packet-line writing more thread-safe.

Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
---
 pkt-line.c | 9 ++++++++-
 pkt-line.h | 2 ++
 2 files changed, 10 insertions(+), 1 deletion(-)

diff --git a/pkt-line.c b/pkt-line.c
index 14af049cd9c..5d86354cbeb 100644
--- a/pkt-line.c
+++ b/pkt-line.c
@@ -278,6 +278,13 @@ int write_packetized_from_fd(int fd_in, int fd_out)
 int write_packetized_from_buf(const char *src_in, size_t len, int fd_out)
 {
 	static struct packet_scratch_space scratch;
+
+	return write_packetized_from_buf2(src_in, len, fd_out, &scratch);
+}
+
+int write_packetized_from_buf2(const char *src_in, size_t len, int fd_out,
+			       struct packet_scratch_space *scratch)
+{
 	int err = 0;
 	size_t bytes_written = 0;
 	size_t bytes_to_write;
@@ -289,7 +296,7 @@ int write_packetized_from_buf(const char *src_in, size_t len, int fd_out)
 			bytes_to_write = len - bytes_written;
 		if (bytes_to_write == 0)
 			break;
-		err = packet_write_gently(fd_out, src_in + bytes_written, bytes_to_write, &scratch);
+		err = packet_write_gently(fd_out, src_in + bytes_written, bytes_to_write, scratch);
 		bytes_written += bytes_to_write;
 	}
 	if (!err)
diff --git a/pkt-line.h b/pkt-line.h
index 4ccd6f88926..f1d5625e91f 100644
--- a/pkt-line.h
+++ b/pkt-line.h
@@ -41,6 +41,8 @@ int packet_flush_gently(int fd);
 int packet_write_fmt_gently(int fd, const char *fmt, ...) __attribute__((format (printf, 2, 3)));
 int write_packetized_from_fd(int fd_in, int fd_out);
 int write_packetized_from_buf(const char *src_in, size_t len, int fd_out);
+int write_packetized_from_buf2(const char *src_in, size_t len, int fd_out,
+			       struct packet_scratch_space *scratch);
 
 /*
  * Read a packetized line into the buffer, which must be at least size bytes
-- 
gitgitgadget


^ permalink raw reply related	[flat|nested] 178+ messages in thread
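
The shape of this change -- keep the old entry point, add a variant that
takes caller-owned state -- is a common libification step.  A hypothetical
sketch of the pattern in plain C (not git code; emit() and emit_with_scratch()
are made-up names): existing single-threaded callers keep the old signature
and the old static buffer, while new callers pass a scratch space they own.

#include <stddef.h>
#include <string.h>
#include <unistd.h>

struct scratch_space { char buffer[65520]; };

/* New form: caller supplies the scratch space, so threads don't collide. */
static int emit_with_scratch(const char *src, size_t len, int fd,
			     struct scratch_space *scratch)
{
	if (len > sizeof(scratch->buffer))
		return -1;
	memcpy(scratch->buffer, src, len);	/* stand-in for packet framing */
	return write(fd, scratch->buffer, len) == (ssize_t)len ? 0 : -1;
}

/* Old form: unchanged signature, still uses one static buffer. */
static int emit(const char *src, size_t len, int fd)
{
	static struct scratch_space scratch;

	return emit_with_scratch(src, len, fd, &scratch);
}

int main(void)
{
	return emit("ping", 4, 1 /* stdout */);
}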

* [PATCH v2 04/14] pkt-line: optionally skip the flush packet in write_packetized_from_buf()
  2021-02-01 19:45 ` [PATCH v2 00/14] " Jeff Hostetler via GitGitGadget
                     ` (2 preceding siblings ...)
  2021-02-01 19:45   ` [PATCH v2 03/14] pkt-line: add write_packetized_from_buf2() that takes scratch buffer Jeff Hostetler via GitGitGadget
@ 2021-02-01 19:45   ` Johannes Schindelin via GitGitGadget
  2021-02-02  9:48     ` Jeff King
  2021-02-01 19:45   ` [PATCH v2 05/14] pkt-line: (optionally) libify the packet readers Johannes Schindelin via GitGitGadget
                     ` (11 subsequent siblings)
  15 siblings, 1 reply; 178+ messages in thread
From: Johannes Schindelin via GitGitGadget @ 2021-02-01 19:45 UTC (permalink / raw)
  To: git
  Cc: Ævar Arnfjörð Bjarmason, Jeff Hostetler,
	Jeff King, Chris Torek, Jeff Hostetler, Johannes Schindelin

From: Johannes Schindelin <johannes.schindelin@gmx.de>

This function currently has only one caller: `apply_multi_file_filter()`
in `convert.c`. That caller wants a flush packet to be written after
writing the payload.

However, we are about to introduce a user that wants to write many
packets before a final flush packet, so let's extend this function to
prepare for that scenario.

Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
---
 convert.c  | 2 +-
 pkt-line.c | 9 ++++++---
 pkt-line.h | 4 +++-
 3 files changed, 10 insertions(+), 5 deletions(-)

diff --git a/convert.c b/convert.c
index ee360c2f07c..3f396a9b288 100644
--- a/convert.c
+++ b/convert.c
@@ -886,7 +886,7 @@ static int apply_multi_file_filter(const char *path, const char *src, size_t len
 	if (fd >= 0)
 		err = write_packetized_from_fd(fd, process->in);
 	else
-		err = write_packetized_from_buf(src, len, process->in);
+		err = write_packetized_from_buf(src, len, process->in, 1);
 	if (err)
 		goto done;
 
diff --git a/pkt-line.c b/pkt-line.c
index 5d86354cbeb..d91a1deda95 100644
--- a/pkt-line.c
+++ b/pkt-line.c
@@ -275,14 +275,17 @@ int write_packetized_from_fd(int fd_in, int fd_out)
 	return err;
 }
 
-int write_packetized_from_buf(const char *src_in, size_t len, int fd_out)
+int write_packetized_from_buf(const char *src_in, size_t len, int fd_out,
+			      int flush_at_end)
 {
 	static struct packet_scratch_space scratch;
 
-	return write_packetized_from_buf2(src_in, len, fd_out, &scratch);
+	return write_packetized_from_buf2(src_in, len, fd_out,
+					  flush_at_end, &scratch);
 }
 
 int write_packetized_from_buf2(const char *src_in, size_t len, int fd_out,
+			       int flush_at_end,
 			       struct packet_scratch_space *scratch)
 {
 	int err = 0;
@@ -299,7 +302,7 @@ int write_packetized_from_buf2(const char *src_in, size_t len, int fd_out,
 		err = packet_write_gently(fd_out, src_in + bytes_written, bytes_to_write, scratch);
 		bytes_written += bytes_to_write;
 	}
-	if (!err)
+	if (!err && flush_at_end)
 		err = packet_flush_gently(fd_out);
 	return err;
 }
diff --git a/pkt-line.h b/pkt-line.h
index f1d5625e91f..ccf27549227 100644
--- a/pkt-line.h
+++ b/pkt-line.h
@@ -40,8 +40,10 @@ void packet_buf_write_len(struct strbuf *buf, const char *data, size_t len);
 int packet_flush_gently(int fd);
 int packet_write_fmt_gently(int fd, const char *fmt, ...) __attribute__((format (printf, 2, 3)));
 int write_packetized_from_fd(int fd_in, int fd_out);
-int write_packetized_from_buf(const char *src_in, size_t len, int fd_out);
+int write_packetized_from_buf(const char *src_in, size_t len, int fd_out,
+			      int flush_at_end);
 int write_packetized_from_buf2(const char *src_in, size_t len, int fd_out,
+			       int flush_at_end,
 			       struct packet_scratch_space *scratch);
 
 /*
-- 
gitgitgadget


^ permalink raw reply related	[flat|nested] 178+ messages in thread
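
The motivation is easier to see with a caller that streams its reply in
several pieces and wants exactly one flush packet at the end.  A minimal
sketch, assuming made-up stand-ins send_chunk() and send_flush() for the
pkt-line routines (this is not git code):

#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Stand-in for packet_write_gently(): 4-hex-digit length header, then data. */
static int send_chunk(int fd, const char *buf, size_t len)
{
	char hdr[5];

	snprintf(hdr, sizeof(hdr), "%04zx", len + 4);
	if (write(fd, hdr, 4) != 4 || write(fd, buf, len) != (ssize_t)len)
		return -1;
	return 0;
}

/* Stand-in for packet_flush_gently(): the "0000" flush packet. */
static int send_flush(int fd)
{
	return write(fd, "0000", 4) == 4 ? 0 : -1;
}

/* Stream a reply as several packets, flushing exactly once at the end. */
static int send_reply(int fd, const char **chunks, size_t n)
{
	size_t i;

	for (i = 0; i < n; i++)
		if (send_chunk(fd, chunks[i], strlen(chunks[i])) < 0)
			return -1;
	return send_flush(fd);
}

int main(void)
{
	const char *chunks[] = { "part one\n", "part two\n" };

	return send_reply(1, chunks, 2);
}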

* [PATCH v2 05/14] pkt-line: (optionally) libify the packet readers
  2021-02-01 19:45 ` [PATCH v2 00/14] " Jeff Hostetler via GitGitGadget
                     ` (3 preceding siblings ...)
  2021-02-01 19:45   ` [PATCH v2 04/14] pkt-line: optionally skip the flush packet in write_packetized_from_buf() Johannes Schindelin via GitGitGadget
@ 2021-02-01 19:45   ` Johannes Schindelin via GitGitGadget
  2021-02-01 19:45   ` [PATCH v2 06/14] pkt-line: accept additional options in read_packetized_to_strbuf() Johannes Schindelin via GitGitGadget
                     ` (10 subsequent siblings)
  15 siblings, 0 replies; 178+ messages in thread
From: Johannes Schindelin via GitGitGadget @ 2021-02-01 19:45 UTC (permalink / raw)
  To: git
  Cc: Ævar Arnfjörð Bjarmason, Jeff Hostetler,
	Jeff King, Chris Torek, Jeff Hostetler, Johannes Schindelin

From: Johannes Schindelin <johannes.schindelin@gmx.de>

So far, the (possibly indirect) callers of `get_packet_data()` can ask
that function to return an error instead of `die()`ing upon end-of-file.
However, random read errors will still cause the process to die.

So let's introduce an explicit option to tell the packet reader
machinery to please be nice and only return an error.

This change prepares pkt-line for use by long-running daemon processes.
Such processes should be able to serve multiple concurrent clients and
survive random IO errors.  If there is an error on one connection,
a daemon should be able to drop that connection and continue serving
existing and future connections.

This ability will be used by a Git-aware "Internal FSMonitor" feature
in a later patch series.

Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
---
 pkt-line.c | 19 +++++++++++++++++--
 pkt-line.h |  4 ++++
 2 files changed, 21 insertions(+), 2 deletions(-)

diff --git a/pkt-line.c b/pkt-line.c
index d91a1deda95..528493bca21 100644
--- a/pkt-line.c
+++ b/pkt-line.c
@@ -323,8 +323,11 @@ static int get_packet_data(int fd, char **src_buf, size_t *src_size,
 		*src_size -= ret;
 	} else {
 		ret = read_in_full(fd, dst, size);
-		if (ret < 0)
+		if (ret < 0) {
+			if (options & PACKET_READ_NEVER_DIE)
+				return error_errno(_("read error"));
 			die_errno(_("read error"));
+		}
 	}
 
 	/* And complain if we didn't get enough bytes to satisfy the read. */
@@ -332,6 +335,8 @@ static int get_packet_data(int fd, char **src_buf, size_t *src_size,
 		if (options & PACKET_READ_GENTLE_ON_EOF)
 			return -1;
 
+		if (options & PACKET_READ_NEVER_DIE)
+			return error(_("the remote end hung up unexpectedly"));
 		die(_("the remote end hung up unexpectedly"));
 	}
 
@@ -360,6 +365,9 @@ enum packet_read_status packet_read_with_status(int fd, char **src_buffer,
 	len = packet_length(linelen);
 
 	if (len < 0) {
+		if (options & PACKET_READ_NEVER_DIE)
+			return error(_("protocol error: bad line length "
+				       "character: %.4s"), linelen);
 		die(_("protocol error: bad line length character: %.4s"), linelen);
 	} else if (!len) {
 		packet_trace("0000", 4, 0);
@@ -374,12 +382,19 @@ enum packet_read_status packet_read_with_status(int fd, char **src_buffer,
 		*pktlen = 0;
 		return PACKET_READ_RESPONSE_END;
 	} else if (len < 4) {
+		if (options & PACKET_READ_NEVER_DIE)
+			return error(_("protocol error: bad line length %d"),
+				     len);
 		die(_("protocol error: bad line length %d"), len);
 	}
 
 	len -= 4;
-	if ((unsigned)len >= size)
+	if ((unsigned)len >= size) {
+		if (options & PACKET_READ_NEVER_DIE)
+			return error(_("protocol error: bad line length %d"),
+				     len);
 		die(_("protocol error: bad line length %d"), len);
+	}
 
 	if (get_packet_data(fd, src_buffer, src_len, buffer, len, options) < 0) {
 		*pktlen = -1;
diff --git a/pkt-line.h b/pkt-line.h
index ccf27549227..7f31c892165 100644
--- a/pkt-line.h
+++ b/pkt-line.h
@@ -79,10 +79,14 @@ int write_packetized_from_buf2(const char *src_in, size_t len, int fd_out,
  *
  * If options contains PACKET_READ_DIE_ON_ERR_PACKET, it dies when it sees an
  * ERR packet.
+ *
+ * With `PACKET_READ_NEVER_DIE`, no errors are allowed to trigger die() (except
+ * an ERR packet, when `PACKET_READ_DIE_ON_ERR_PACKET` is in effect).
  */
 #define PACKET_READ_GENTLE_ON_EOF     (1u<<0)
 #define PACKET_READ_CHOMP_NEWLINE     (1u<<1)
 #define PACKET_READ_DIE_ON_ERR_PACKET (1u<<2)
+#define PACKET_READ_NEVER_DIE         (1u<<3)
 int packet_read(int fd, char **src_buffer, size_t *src_len, char
 		*buffer, unsigned size, int options);
 
-- 
gitgitgadget


^ permalink raw reply related	[flat|nested] 178+ messages in thread
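
The behaviour this enables, in sketch form: a long-running server treats a
read failure as a per-connection problem rather than a fatal one.  The snippet
below is a hypothetical illustration in plain C; read_request() merely stands
in for the libified packet reader and is not the real API.

#include <errno.h>
#include <stdio.h>
#include <unistd.h>

/* Stand-in reader: returns request length, 0 on EOF, -1 on error. */
static int read_request(int fd, char *buf, size_t size)
{
	ssize_t n = read(fd, buf, size);

	return n < 0 ? -1 : (int)n;
}

static void serve_one_client(int fd)
{
	char buf[4096];
	int len = read_request(fd, buf, sizeof(buf));

	if (len < 0) {
		/* With a die()ing reader, the whole daemon would exit here. */
		fprintf(stderr, "dropping bad connection (errno %d)\n", errno);
		close(fd);
		return;
	}
	/* ... handle the request, send a reply, then close ... */
	close(fd);
}

int main(void)
{
	serve_one_client(0);	/* e.g. a socket accepted elsewhere */
	return 0;
}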

* [PATCH v2 06/14] pkt-line: accept additional options in read_packetized_to_strbuf()
  2021-02-01 19:45 ` [PATCH v2 00/14] " Jeff Hostetler via GitGitGadget
                     ` (4 preceding siblings ...)
  2021-02-01 19:45   ` [PATCH v2 05/14] pkt-line: (optionally) libify the packet readers Johannes Schindelin via GitGitGadget
@ 2021-02-01 19:45   ` Johannes Schindelin via GitGitGadget
  2021-02-11  1:52     ` Taylor Blau
  2021-02-01 19:45   ` [PATCH v2 07/14] simple-ipc: design documentation for new IPC mechanism Jeff Hostetler via GitGitGadget
                     ` (9 subsequent siblings)
  15 siblings, 1 reply; 178+ messages in thread
From: Johannes Schindelin via GitGitGadget @ 2021-02-01 19:45 UTC (permalink / raw)
  To: git
  Cc: Ævar Arnfjörð Bjarmason, Jeff Hostetler,
	Jeff King, Chris Torek, Jeff Hostetler, Johannes Schindelin

From: Johannes Schindelin <johannes.schindelin@gmx.de>

The `read_packetized_to_strbuf()` function reads packets into a strbuf
until a flush packet has been received. So far, it has only one caller:
`apply_multi_file_filter()` in `convert.c`. This caller really only
needs the `PACKET_READ_GENTLE_ON_EOF` option to be passed to
`packet_read()` (which makes sense in the scenario where packets should
be read until a flush packet is received).

We are about to introduce a caller that wants to pass other options
through to `packet_read()`, so let's extend the function signature
accordingly.

Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
---
 convert.c  | 2 +-
 pkt-line.c | 4 ++--
 pkt-line.h | 6 +++++-
 3 files changed, 8 insertions(+), 4 deletions(-)

diff --git a/convert.c b/convert.c
index 3f396a9b288..175c5cd51d5 100644
--- a/convert.c
+++ b/convert.c
@@ -903,7 +903,7 @@ static int apply_multi_file_filter(const char *path, const char *src, size_t len
 		if (err)
 			goto done;
 
-		err = read_packetized_to_strbuf(process->out, &nbuf) < 0;
+		err = read_packetized_to_strbuf(process->out, &nbuf, 0) < 0;
 		if (err)
 			goto done;
 
diff --git a/pkt-line.c b/pkt-line.c
index 528493bca21..f090fc56eef 100644
--- a/pkt-line.c
+++ b/pkt-line.c
@@ -461,7 +461,7 @@ char *packet_read_line_buf(char **src, size_t *src_len, int *dst_len)
 	return packet_read_line_generic(-1, src, src_len, dst_len);
 }
 
-ssize_t read_packetized_to_strbuf(int fd_in, struct strbuf *sb_out)
+ssize_t read_packetized_to_strbuf(int fd_in, struct strbuf *sb_out, int options)
 {
 	int packet_len;
 
@@ -477,7 +477,7 @@ ssize_t read_packetized_to_strbuf(int fd_in, struct strbuf *sb_out)
 			 * that there is already room for the extra byte.
 			 */
 			sb_out->buf + sb_out->len, LARGE_PACKET_DATA_MAX+1,
-			PACKET_READ_GENTLE_ON_EOF);
+			options | PACKET_READ_GENTLE_ON_EOF);
 		if (packet_len <= 0)
 			break;
 		sb_out->len += packet_len;
diff --git a/pkt-line.h b/pkt-line.h
index 7f31c892165..150319a6f00 100644
--- a/pkt-line.h
+++ b/pkt-line.h
@@ -145,8 +145,12 @@ char *packet_read_line_buf(char **src_buf, size_t *src_len, int *size);
 
 /*
  * Reads a stream of variable sized packets until a flush packet is detected.
+ *
+ * The options are augmented by PACKET_READ_GENTLE_ON_EOF and passed to
+ * packet_read.
  */
-ssize_t read_packetized_to_strbuf(int fd_in, struct strbuf *sb_out);
+ssize_t read_packetized_to_strbuf(int fd_in, struct strbuf *sb_out,
+				  int options);
 
 /*
  * Receive multiplexed output stream over git native protocol.
-- 
gitgitgadget


^ permalink raw reply related	[flat|nested] 178+ messages in thread

* [PATCH v2 07/14] simple-ipc: design documentation for new IPC mechanism
  2021-02-01 19:45 ` [PATCH v2 00/14] " Jeff Hostetler via GitGitGadget
                     ` (5 preceding siblings ...)
  2021-02-01 19:45   ` [PATCH v2 06/14] pkt-line: accept additional options in read_packetized_to_strbuf() Johannes Schindelin via GitGitGadget
@ 2021-02-01 19:45   ` Jeff Hostetler via GitGitGadget
  2021-02-01 19:45   ` [PATCH v2 08/14] simple-ipc: add win32 implementation Jeff Hostetler via GitGitGadget
                     ` (8 subsequent siblings)
  15 siblings, 0 replies; 178+ messages in thread
From: Jeff Hostetler via GitGitGadget @ 2021-02-01 19:45 UTC (permalink / raw)
  To: git
  Cc: Ævar Arnfjörð Bjarmason, Jeff Hostetler,
	Jeff King, Chris Torek, Jeff Hostetler, Jeff Hostetler

From: Jeff Hostetler <jeffhost@microsoft.com>

Brief design documentation for the new IPC mechanism allowing a
foreground Git client to talk with an existing daemon process
at a known location using a named pipe or unix domain socket.

Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
---
 Documentation/technical/api-simple-ipc.txt | 34 ++++++++++++++++++++++
 1 file changed, 34 insertions(+)
 create mode 100644 Documentation/technical/api-simple-ipc.txt

diff --git a/Documentation/technical/api-simple-ipc.txt b/Documentation/technical/api-simple-ipc.txt
new file mode 100644
index 00000000000..670a5c163e3
--- /dev/null
+++ b/Documentation/technical/api-simple-ipc.txt
@@ -0,0 +1,34 @@
+simple-ipc API
+==============
+
+The simple-ipc API is used to send an IPC message and response between
+a (presumably) foreground Git client process to a background server or
+daemon process.  The server process must already be running.  Multiple
+client processes can simultaneously communicate with the server
+process.
+
+Communication occurs over a named pipe on Windows and a Unix domain
+socket on other platforms.  Clients and the server rendezvous at a
+previously agreed-to application-specific pathname (which is outside
+the scope of this design).
+
+This IPC mechanism differs from the existing `sub-process.c` model
+(Documentation/technical/long-running-process-protocol.txt) that is used
+by applications like Git-LFS.  In the simple-ipc model the server is
+assumed to be a very long-running system service.  In contrast, in the
+LFS-style sub-process model the helper is started with the foreground
+process and exits when the foreground process terminates.
+
+How the simple-ipc server is started is also outside the scope of the
+IPC mechanism.  For example, the server might be started during
+maintenance operations.
+
+The IPC protocol consists of a single request message from the client and
+an optional request message from the server.  For simplicity, pkt-line
+routines are used to hide chunking and buffering concerns.  Each side
+terminates their message with a flush packet.
+(Documentation/technical/protocol-common.txt)
+
+The actual format of the client and server messages is application
+specific.  The IPC layer transmits and receives an opaque buffer without
+any concern for the content within.
-- 
gitgitgadget


^ permalink raw reply related	[flat|nested] 178+ messages in thread
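
For a concrete sense of the client side of this round trip, here is a hedged
sketch of how a caller might drive one request/response exchange with the API
described above.  It assumes it is compiled inside git's tree where
SUPPORTS_SIMPLE_IPC is defined and simple-ipc.h is available; the socket path
and the ping_daemon() wrapper are made up for illustration.

#include "cache.h"
#include "simple-ipc.h"

int ping_daemon(void)
{
	struct ipc_client_connect_options options =
		IPC_CLIENT_CONNECT_OPTIONS_INIT;
	struct strbuf answer = STRBUF_INIT;
	int ret;

	options.wait_if_busy = 1;	/* retry while the server is busy */
	options.wait_if_not_found = 0;	/* but fail fast if it is not running */

	/* One request, one response; both are opaque to the IPC layer. */
	ret = ipc_client_send_command("/tmp/example-ipc-socket", &options,
				      "ping", &answer);
	if (!ret)
		printf("daemon said: %s\n", answer.buf);

	strbuf_release(&answer);
	return ret;
}

The request string and the answer buffer are application-defined, matching the
last paragraph of the design document above.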

* [PATCH v2 08/14] simple-ipc: add win32 implementation
  2021-02-01 19:45 ` [PATCH v2 00/14] " Jeff Hostetler via GitGitGadget
                     ` (6 preceding siblings ...)
  2021-02-01 19:45   ` [PATCH v2 07/14] simple-ipc: design documentation for new IPC mechanism Jeff Hostetler via GitGitGadget
@ 2021-02-01 19:45   ` Jeff Hostetler via GitGitGadget
  2021-02-01 19:45   ` [PATCH v2 09/14] simple-ipc: add t/helper/test-simple-ipc and t0052 Jeff Hostetler via GitGitGadget
                     ` (7 subsequent siblings)
  15 siblings, 0 replies; 178+ messages in thread
From: Jeff Hostetler via GitGitGadget @ 2021-02-01 19:45 UTC (permalink / raw)
  To: git
  Cc: Ævar Arnfjörð Bjarmason, Jeff Hostetler,
	Jeff King, Chris Torek, Jeff Hostetler, Jeff Hostetler

From: Jeff Hostetler <jeffhost@microsoft.com>

Create Windows implementation of "simple-ipc" using named pipes.

Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
---
 Makefile                            |   5 +
 compat/simple-ipc/ipc-shared.c      |  28 ++
 compat/simple-ipc/ipc-win32.c       | 751 ++++++++++++++++++++++++++++
 config.mak.uname                    |   2 +
 contrib/buildsystems/CMakeLists.txt |   4 +
 simple-ipc.h                        | 225 +++++++++
 6 files changed, 1015 insertions(+)
 create mode 100644 compat/simple-ipc/ipc-shared.c
 create mode 100644 compat/simple-ipc/ipc-win32.c
 create mode 100644 simple-ipc.h

diff --git a/Makefile b/Makefile
index 7b64106930a..c94d5847919 100644
--- a/Makefile
+++ b/Makefile
@@ -1682,6 +1682,11 @@ else
 	LIB_OBJS += unix-socket.o
 endif
 
+ifdef USE_WIN32_IPC
+	LIB_OBJS += compat/simple-ipc/ipc-shared.o
+	LIB_OBJS += compat/simple-ipc/ipc-win32.o
+endif
+
 ifdef NO_ICONV
 	BASIC_CFLAGS += -DNO_ICONV
 endif
diff --git a/compat/simple-ipc/ipc-shared.c b/compat/simple-ipc/ipc-shared.c
new file mode 100644
index 00000000000..1edec815953
--- /dev/null
+++ b/compat/simple-ipc/ipc-shared.c
@@ -0,0 +1,28 @@
+#include "cache.h"
+#include "simple-ipc.h"
+#include "strbuf.h"
+#include "pkt-line.h"
+#include "thread-utils.h"
+
+#ifdef SUPPORTS_SIMPLE_IPC
+
+int ipc_server_run(const char *path, const struct ipc_server_opts *opts,
+		   ipc_server_application_cb *application_cb,
+		   void *application_data)
+{
+	struct ipc_server_data *server_data = NULL;
+	int ret;
+
+	ret = ipc_server_run_async(&server_data, path, opts,
+				   application_cb, application_data);
+	if (ret)
+		return ret;
+
+	ret = ipc_server_await(server_data);
+
+	ipc_server_free(server_data);
+
+	return ret;
+}
+
+#endif /* SUPPORTS_SIMPLE_IPC */
diff --git a/compat/simple-ipc/ipc-win32.c b/compat/simple-ipc/ipc-win32.c
new file mode 100644
index 00000000000..7871c9d8527
--- /dev/null
+++ b/compat/simple-ipc/ipc-win32.c
@@ -0,0 +1,751 @@
+#include "cache.h"
+#include "simple-ipc.h"
+#include "strbuf.h"
+#include "pkt-line.h"
+#include "thread-utils.h"
+
+#ifndef GIT_WINDOWS_NATIVE
+#error This file can only be compiled on Windows
+#endif
+
+static int initialize_pipe_name(const char *path, wchar_t *wpath, size_t alloc)
+{
+	int off = 0;
+	struct strbuf realpath = STRBUF_INIT;
+
+	if (!strbuf_realpath(&realpath, path, 0))
+		return -1;
+
+	off = swprintf(wpath, alloc, L"\\\\.\\pipe\\");
+	if (xutftowcs(wpath + off, realpath.buf, alloc - off) < 0)
+		return -1;
+
+	/* Handle drive prefix */
+	if (wpath[off] && wpath[off + 1] == L':') {
+		wpath[off + 1] = L'_';
+		off += 2;
+	}
+
+	for (; wpath[off]; off++)
+		if (wpath[off] == L'/')
+			wpath[off] = L'\\';
+
+	strbuf_release(&realpath);
+	return 0;
+}
+
+static enum ipc_active_state get_active_state(wchar_t *pipe_path)
+{
+	if (WaitNamedPipeW(pipe_path, NMPWAIT_USE_DEFAULT_WAIT))
+		return IPC_STATE__LISTENING;
+
+	if (GetLastError() == ERROR_SEM_TIMEOUT)
+		return IPC_STATE__NOT_LISTENING;
+
+	if (GetLastError() == ERROR_FILE_NOT_FOUND)
+		return IPC_STATE__PATH_NOT_FOUND;
+
+	return IPC_STATE__OTHER_ERROR;
+}
+
+enum ipc_active_state ipc_get_active_state(const char *path)
+{
+	wchar_t pipe_path[MAX_PATH];
+
+	if (initialize_pipe_name(path, pipe_path, ARRAY_SIZE(pipe_path)) < 0)
+		return IPC_STATE__INVALID_PATH;
+
+	return get_active_state(pipe_path);
+}
+
+#define WAIT_STEP_MS (50)
+
+static enum ipc_active_state connect_to_server(
+	const wchar_t *wpath,
+	DWORD timeout_ms,
+	const struct ipc_client_connect_options *options,
+	int *pfd)
+{
+	DWORD t_start_ms, t_waited_ms;
+	DWORD step_ms;
+	HANDLE hPipe = INVALID_HANDLE_VALUE;
+	DWORD mode = PIPE_READMODE_BYTE;
+	DWORD gle;
+
+	*pfd = -1;
+
+	for (;;) {
+		hPipe = CreateFileW(wpath, GENERIC_READ | GENERIC_WRITE,
+				    0, NULL, OPEN_EXISTING, 0, NULL);
+		if (hPipe != INVALID_HANDLE_VALUE)
+			break;
+
+		gle = GetLastError();
+
+		switch (gle) {
+		case ERROR_FILE_NOT_FOUND:
+			if (!options->wait_if_not_found)
+				return IPC_STATE__PATH_NOT_FOUND;
+			if (!timeout_ms)
+				return IPC_STATE__PATH_NOT_FOUND;
+
+			step_ms = (timeout_ms < WAIT_STEP_MS) ?
+				timeout_ms : WAIT_STEP_MS;
+			sleep_millisec(step_ms);
+
+			timeout_ms -= step_ms;
+			break; /* try again */
+
+		case ERROR_PIPE_BUSY:
+			if (!options->wait_if_busy)
+				return IPC_STATE__NOT_LISTENING;
+			if (!timeout_ms)
+				return IPC_STATE__NOT_LISTENING;
+
+			t_start_ms = (DWORD)(getnanotime() / 1000000);
+
+			if (!WaitNamedPipeW(wpath, timeout_ms)) {
+				if (GetLastError() == ERROR_SEM_TIMEOUT)
+					return IPC_STATE__NOT_LISTENING;
+
+				return IPC_STATE__OTHER_ERROR;
+			}
+
+			/*
+			 * A pipe server instance became available.
+			 * Race other client processes to connect to
+			 * it.
+			 *
+			 * But first decrement our overall timeout so
+			 * that we don't starve if we keep losing the
+			 * race.  But also guard against special
+			 * NMPWAIT_ values (0 and -1).
+			 */
+			t_waited_ms = (DWORD)(getnanotime() / 1000000) - t_start_ms;
+			if (t_waited_ms < timeout_ms)
+				timeout_ms -= t_waited_ms;
+			else
+				timeout_ms = 1;
+			break; /* try again */
+
+		default:
+			return IPC_STATE__OTHER_ERROR;
+		}
+	}
+
+	if (!SetNamedPipeHandleState(hPipe, &mode, NULL, NULL)) {
+		CloseHandle(hPipe);
+		return IPC_STATE__OTHER_ERROR;
+	}
+
+	*pfd = _open_osfhandle((intptr_t)hPipe, O_RDWR|O_BINARY);
+	if (*pfd < 0) {
+		CloseHandle(hPipe);
+		return IPC_STATE__OTHER_ERROR;
+	}
+
+	/* fd now owns hPipe */
+
+	return IPC_STATE__LISTENING;
+}
+
+/*
+ * The default connection timeout for Windows clients.
+ *
+ * This is not currently part of the ipc_ API (nor the config settings)
+ * because of differences between Windows and other platforms.
+ *
+ * This value was chosen at random.
+ */
+#define WINDOWS_CONNECTION_TIMEOUT_MS (30000)
+
+enum ipc_active_state ipc_client_try_connect(
+	const char *path,
+	const struct ipc_client_connect_options *options,
+	struct ipc_client_connection **p_connection)
+{
+	wchar_t wpath[MAX_PATH];
+	enum ipc_active_state state = IPC_STATE__OTHER_ERROR;
+	int fd = -1;
+
+	*p_connection = NULL;
+
+	trace2_region_enter("ipc-client", "try-connect", NULL);
+	trace2_data_string("ipc-client", NULL, "try-connect/path", path);
+
+	if (initialize_pipe_name(path, wpath, ARRAY_SIZE(wpath)) < 0)
+		state = IPC_STATE__INVALID_PATH;
+	else
+		state = connect_to_server(wpath, WINDOWS_CONNECTION_TIMEOUT_MS,
+					  options, &fd);
+
+	trace2_data_intmax("ipc-client", NULL, "try-connect/state",
+			   (intmax_t)state);
+	trace2_region_leave("ipc-client", "try-connect", NULL);
+
+	if (state == IPC_STATE__LISTENING) {
+		(*p_connection) = xcalloc(1, sizeof(struct ipc_client_connection));
+		(*p_connection)->fd = fd;
+	}
+
+	return state;
+}
+
+void ipc_client_close_connection(struct ipc_client_connection *connection)
+{
+	if (!connection)
+		return;
+
+	if (connection->fd != -1)
+		close(connection->fd);
+
+	free(connection);
+}
+
+int ipc_client_send_command_to_connection(
+	struct ipc_client_connection *connection,
+	const char *message, struct strbuf *answer)
+{
+	int ret = 0;
+
+	strbuf_setlen(answer, 0);
+
+	trace2_region_enter("ipc-client", "send-command", NULL);
+
+	if (write_packetized_from_buf2(message, strlen(message),
+				       connection->fd, 1,
+				       &connection->scratch_write_buffer) < 0) {
+		ret = error(_("could not send IPC command"));
+		goto done;
+	}
+
+	FlushFileBuffers((HANDLE)_get_osfhandle(connection->fd));
+
+	if (read_packetized_to_strbuf(connection->fd, answer,
+				      PACKET_READ_NEVER_DIE) < 0) {
+		ret = error(_("could not read IPC response"));
+		goto done;
+	}
+
+done:
+	trace2_region_leave("ipc-client", "send-command", NULL);
+	return ret;
+}
+
+int ipc_client_send_command(const char *path,
+			    const struct ipc_client_connect_options *options,
+			    const char *message, struct strbuf *response)
+{
+	int ret = -1;
+	enum ipc_active_state state;
+	struct ipc_client_connection *connection = NULL;
+
+	state = ipc_client_try_connect(path, options, &connection);
+
+	if (state != IPC_STATE__LISTENING)
+		return ret;
+
+	ret = ipc_client_send_command_to_connection(connection, message, response);
+
+	ipc_client_close_connection(connection);
+
+	return ret;
+}
+
+/*
+ * Duplicate the given pipe handle and wrap it in a file descriptor so
+ * that we can use pkt-line on it.
+ */
+static int dup_fd_from_pipe(const HANDLE pipe)
+{
+	HANDLE process = GetCurrentProcess();
+	HANDLE handle;
+	int fd;
+
+	if (!DuplicateHandle(process, pipe, process, &handle, 0, FALSE,
+			     DUPLICATE_SAME_ACCESS)) {
+		errno = err_win_to_posix(GetLastError());
+		return -1;
+	}
+
+	fd = _open_osfhandle((intptr_t)handle, O_RDWR|O_BINARY);
+	if (fd < 0) {
+		errno = err_win_to_posix(GetLastError());
+		CloseHandle(handle);
+		return -1;
+	}
+
+	/*
+	 * `handle` is now owned by `fd` and will be automatically closed
+	 * when the descriptor is closed.
+	 */
+
+	return fd;
+}
+
+/*
+ * Magic numbers used to annotate callback instance data.
+ * These are used to help guard against accidentally passing the
+ * wrong instance data across multiple levels of callbacks (which
+ * is easy to do if there are `void*` arguments).
+ */
+enum magic {
+	MAGIC_SERVER_REPLY_DATA,
+	MAGIC_SERVER_THREAD_DATA,
+	MAGIC_SERVER_DATA,
+};
+
+struct ipc_server_reply_data {
+	enum magic magic;
+	int fd;
+	struct ipc_server_thread_data *server_thread_data;
+};
+
+struct ipc_server_thread_data {
+	enum magic magic;
+	struct ipc_server_thread_data *next_thread;
+	struct ipc_server_data *server_data;
+	pthread_t pthread_id;
+	HANDLE hPipe;
+	struct packet_scratch_space scratch_write_buffer;
+};
+
+/*
+ * On Windows, the conceptual "ipc-server" is implemented as a pool of
+ * n idential/peer "server-thread" threads.  That is, there is no
+ * hierarchy of threads; and therefore no controller thread managing
+ * the pool.  Each thread has an independent handle to the named pipe,
+ * receives incoming connections, processes the client, and re-uses
+ * the pipe for the next client connection.
+ *
+ * Therefore, the "ipc-server" only needs to maintain a list of the
+ * spawned threads for eventual "join" purposes.
+ *
+ * A single "stop-event" is visible to all of the server threads to
+ * tell them to shutdown (when idle).
+ */
+struct ipc_server_data {
+	enum magic magic;
+	ipc_server_application_cb *application_cb;
+	void *application_data;
+	struct strbuf buf_path;
+	wchar_t wpath[MAX_PATH];
+
+	HANDLE hEventStopRequested;
+	struct ipc_server_thread_data *thread_list;
+	int is_stopped;
+};
+
+enum connect_result {
+	CR_CONNECTED = 0,
+	CR_CONNECT_PENDING,
+	CR_CONNECT_ERROR,
+	CR_WAIT_ERROR,
+	CR_SHUTDOWN,
+};
+
+static enum connect_result queue_overlapped_connect(
+	struct ipc_server_thread_data *server_thread_data,
+	OVERLAPPED *lpo)
+{
+	if (ConnectNamedPipe(server_thread_data->hPipe, lpo))
+		goto failed;
+
+	switch (GetLastError()) {
+	case ERROR_IO_PENDING:
+		return CR_CONNECT_PENDING;
+
+	case ERROR_PIPE_CONNECTED:
+		SetEvent(lpo->hEvent);
+		return CR_CONNECTED;
+
+	default:
+		break;
+	}
+
+failed:
+	error(_("ConnectNamedPipe failed for '%s' (%lu)"),
+	      server_thread_data->server_data->buf_path.buf,
+	      GetLastError());
+	return CR_CONNECT_ERROR;
+}
+
+/*
+ * Use Windows Overlapped IO to wait for a connection or for our event
+ * to be signalled.
+ */
+static enum connect_result wait_for_connection(
+	struct ipc_server_thread_data *server_thread_data,
+	OVERLAPPED *lpo)
+{
+	enum connect_result r;
+	HANDLE waitHandles[2];
+	DWORD dwWaitResult;
+
+	r = queue_overlapped_connect(server_thread_data, lpo);
+	if (r != CR_CONNECT_PENDING)
+		return r;
+
+	waitHandles[0] = server_thread_data->server_data->hEventStopRequested;
+	waitHandles[1] = lpo->hEvent;
+
+	dwWaitResult = WaitForMultipleObjects(2, waitHandles, FALSE, INFINITE);
+	switch (dwWaitResult) {
+	case WAIT_OBJECT_0 + 0:
+		return CR_SHUTDOWN;
+
+	case WAIT_OBJECT_0 + 1:
+		ResetEvent(lpo->hEvent);
+		return CR_CONNECTED;
+
+	default:
+		return CR_WAIT_ERROR;
+	}
+}
+
+/*
+ * Forward declare our reply callback function so that any compiler
+ * errors are reported when we actually define the function (in addition
+ * to any errors reported when we try to pass this callback function as
+ * a parameter in a function call).  The former are easier to understand.
+ */
+static ipc_server_reply_cb do_io_reply_callback;
+
+/*
+ * Relay application's response message to the client process.
+ * (We do not flush at this point because we allow the caller
+ * to chunk data to the client thru us.)
+ */
+static int do_io_reply_callback(struct ipc_server_reply_data *reply_data,
+		       const char *response, size_t response_len)
+{
+	struct packet_scratch_space *scratch =
+		&reply_data->server_thread_data->scratch_write_buffer;
+
+	if (reply_data->magic != MAGIC_SERVER_REPLY_DATA)
+		BUG("reply_cb called with wrong instance data");
+
+	return write_packetized_from_buf2(response, response_len,
+					  reply_data->fd, 0, scratch);
+}
+
+/*
+ * Receive the request/command from the client and pass it to the
+ * registered request-callback.  The request-callback will compose
+ * a response and call our reply-callback to send it to the client.
+ *
+ * Simple-IPC only contains one round trip, so we flush and close
+ * here after the response.
+ */
+static int do_io(struct ipc_server_thread_data *server_thread_data)
+{
+	struct strbuf buf = STRBUF_INIT;
+	struct ipc_server_reply_data reply_data;
+	int ret = 0;
+
+	reply_data.magic = MAGIC_SERVER_REPLY_DATA;
+	reply_data.server_thread_data = server_thread_data;
+
+	reply_data.fd = dup_fd_from_pipe(server_thread_data->hPipe);
+	if (reply_data.fd < 0)
+		return error(_("could not create fd from pipe for '%s'"),
+			     server_thread_data->server_data->buf_path.buf);
+
+	ret = read_packetized_to_strbuf(reply_data.fd, &buf,
+					PACKET_READ_NEVER_DIE);
+	if (ret >= 0) {
+		ret = server_thread_data->server_data->application_cb(
+			server_thread_data->server_data->application_data,
+			buf.buf, do_io_reply_callback, &reply_data);
+
+		packet_flush_gently(reply_data.fd);
+
+		FlushFileBuffers((HANDLE)_get_osfhandle((reply_data.fd)));
+	}
+	else {
+		/*
+		 * The client probably disconnected/shutdown before it
+		 * could send a well-formed message.  Ignore it.
+		 */
+	}
+
+	strbuf_release(&buf);
+	close(reply_data.fd);
+
+	return ret;
+}
+
+/*
+ * Handle IPC request and response with this connected client.  And reset
+ * the pipe to prepare for the next client.
+ */
+static int use_connection(struct ipc_server_thread_data *server_thread_data)
+{
+	int ret;
+
+	ret = do_io(server_thread_data);
+
+	FlushFileBuffers(server_thread_data->hPipe);
+	DisconnectNamedPipe(server_thread_data->hPipe);
+
+	return ret;
+}
+
+/*
+ * Thread proc for an IPC server worker thread.  It handles a series of
+ * connections from clients.  It cleans and reuses the hPipe between each
+ * client.
+ */
+static void *server_thread_proc(void *_server_thread_data)
+{
+	struct ipc_server_thread_data *server_thread_data = _server_thread_data;
+	HANDLE hEventConnected = INVALID_HANDLE_VALUE;
+	OVERLAPPED oConnect;
+	enum connect_result cr;
+	int ret;
+
+	assert(server_thread_data->hPipe != INVALID_HANDLE_VALUE);
+
+	trace2_thread_start("ipc-server");
+	trace2_data_string("ipc-server", NULL, "pipe",
+			   server_thread_data->server_data->buf_path.buf);
+
+	hEventConnected = CreateEventW(NULL, TRUE, FALSE, NULL);
+
+	memset(&oConnect, 0, sizeof(oConnect));
+	oConnect.hEvent = hEventConnected;
+
+	for (;;) {
+		cr = wait_for_connection(server_thread_data, &oConnect);
+
+		switch (cr) {
+		case CR_SHUTDOWN:
+			goto finished;
+
+		case CR_CONNECTED:
+			ret = use_connection(server_thread_data);
+			if (ret == SIMPLE_IPC_QUIT) {
+				ipc_server_stop_async(
+					server_thread_data->server_data);
+				goto finished;
+			}
+			if (ret > 0) {
+				/*
+				 * Ignore (transient) IO errors with this
+				 * client and reset for the next client.
+				 */
+			}
+			break;
+
+		case CR_CONNECT_PENDING:
+			/* By construction, this should not happen. */
+			BUG("ipc-server[%s]: unexpeced CR_CONNECT_PENDING",
+			    server_thread_data->server_data->buf_path.buf);
+
+		case CR_CONNECT_ERROR:
+		case CR_WAIT_ERROR:
+			/*
+			 * Ignore these theoretical errors.
+			 */
+			DisconnectNamedPipe(server_thread_data->hPipe);
+			break;
+
+		default:
+			BUG("unandled case after wait_for_connection");
+		}
+	}
+
+finished:
+	CloseHandle(server_thread_data->hPipe);
+	CloseHandle(hEventConnected);
+
+	trace2_thread_exit();
+	return NULL;
+}
+
+static HANDLE create_new_pipe(wchar_t *wpath, int is_first)
+{
+	HANDLE hPipe;
+	DWORD dwOpenMode, dwPipeMode;
+	LPSECURITY_ATTRIBUTES lpsa = NULL;
+
+	dwOpenMode = PIPE_ACCESS_INBOUND | PIPE_ACCESS_OUTBOUND |
+		FILE_FLAG_OVERLAPPED;
+
+	dwPipeMode = PIPE_TYPE_MESSAGE | PIPE_READMODE_BYTE | PIPE_WAIT |
+		PIPE_REJECT_REMOTE_CLIENTS;
+
+	if (is_first) {
+		dwOpenMode |= FILE_FLAG_FIRST_PIPE_INSTANCE;
+
+		/*
+		 * On Windows, the first server pipe instance gets to
+		 * set the ACL / Security Attributes on the named
+		 * pipe; subsequent instances inherit and cannot
+		 * change them.
+		 *
+		 * TODO Should we allow the application layer to
+		 * specify security attributes, such as `LocalService`
+		 * or `LocalSystem`, when we create the named pipe?
+		 * This question is probably not important when the
+		 * daemon is started by a foreground user process and
+		 * only needs to talk to the current user, but may
+		 * matter if the daemon is run via the Control Panel as a
+		 * System Service.
+		 */
+	}
+
+	hPipe = CreateNamedPipeW(wpath, dwOpenMode, dwPipeMode,
+				 PIPE_UNLIMITED_INSTANCES, 1024, 1024, 0, lpsa);
+
+	return hPipe;
+}
+
+int ipc_server_run_async(struct ipc_server_data **returned_server_data,
+			 const char *path, const struct ipc_server_opts *opts,
+			 ipc_server_application_cb *application_cb,
+			 void *application_data)
+{
+	struct ipc_server_data *server_data;
+	wchar_t wpath[MAX_PATH];
+	HANDLE hPipeFirst = INVALID_HANDLE_VALUE;
+	int k;
+	int ret = 0;
+	int nr_threads = opts->nr_threads;
+
+	*returned_server_data = NULL;
+
+	ret = initialize_pipe_name(path, wpath, ARRAY_SIZE(wpath));
+	if (ret < 0)
+		return error(
+			_("could not create normalized wchar_t path for '%s'"),
+			path);
+
+	hPipeFirst = create_new_pipe(wpath, 1);
+	if (hPipeFirst == INVALID_HANDLE_VALUE)
+		return error(_("IPC server already running on '%s'"), path);
+
+	server_data = xcalloc(1, sizeof(*server_data));
+	server_data->magic = MAGIC_SERVER_DATA;
+	server_data->application_cb = application_cb;
+	server_data->application_data = application_data;
+	server_data->hEventStopRequested = CreateEvent(NULL, TRUE, FALSE, NULL);
+	strbuf_init(&server_data->buf_path, 0);
+	strbuf_addstr(&server_data->buf_path, path);
+	wcscpy(server_data->wpath, wpath);
+
+	if (nr_threads < 1)
+		nr_threads = 1;
+
+	for (k = 0; k < nr_threads; k++) {
+		struct ipc_server_thread_data *std;
+
+		std = xcalloc(1, sizeof(*std));
+		std->magic = MAGIC_SERVER_THREAD_DATA;
+		std->server_data = server_data;
+		std->hPipe = INVALID_HANDLE_VALUE;
+
+		std->hPipe = (k == 0)
+			? hPipeFirst
+			: create_new_pipe(server_data->wpath, 0);
+
+		if (std->hPipe == INVALID_HANDLE_VALUE) {
+			/*
+			 * If we've reached a pipe instance limit for
+			 * this path, just use fewer threads.
+			 */
+			free(std);
+			break;
+		}
+
+		if (pthread_create(&std->pthread_id, NULL,
+				   server_thread_proc, std)) {
+			/*
+			 * Likewise, if we're out of threads, just use
+			 * fewer threads than requested.
+			 *
+			 * However, we just give up if we can't even get
+			 * one thread.  This should not happen.
+			 */
+			if (k == 0)
+				die(_("could not start thread[0] for '%s'"),
+				    path);
+
+			CloseHandle(std->hPipe);
+			free(std);
+			break;
+		}
+
+		std->next_thread = server_data->thread_list;
+		server_data->thread_list = std;
+	}
+
+	*returned_server_data = server_data;
+	return 0;
+}
+
+int ipc_server_stop_async(struct ipc_server_data *server_data)
+{
+	if (!server_data)
+		return 0;
+
+	/*
+	 * Gently tell all of the ipc_server threads to shut down.
+	 * This will be seen the next time they are idle (and waiting
+	 * for a connection).
+	 *
+	 * We DO NOT attempt to force them to drop an active connection.
+	 */
+	SetEvent(server_data->hEventStopRequested);
+	return 0;
+}
+
+int ipc_server_await(struct ipc_server_data *server_data)
+{
+	DWORD dwWaitResult;
+
+	if (!server_data)
+		return 0;
+
+	dwWaitResult = WaitForSingleObject(server_data->hEventStopRequested, INFINITE);
+	if (dwWaitResult != WAIT_OBJECT_0)
+		return error(_("wait for hEvent failed for '%s'"),
+			     server_data->buf_path.buf);
+
+	while (server_data->thread_list) {
+		struct ipc_server_thread_data *std = server_data->thread_list;
+
+		pthread_join(std->pthread_id, NULL);
+
+		server_data->thread_list = std->next_thread;
+		free(std);
+	}
+
+	server_data->is_stopped = 1;
+
+	return 0;
+}
+
+void ipc_server_free(struct ipc_server_data *server_data)
+{
+	if (!server_data)
+		return;
+
+	if (!server_data->is_stopped)
+		BUG("cannot free ipc-server while running for '%s'",
+		    server_data->buf_path.buf);
+
+	strbuf_release(&server_data->buf_path);
+
+	if (server_data->hEventStopRequested != INVALID_HANDLE_VALUE)
+		CloseHandle(server_data->hEventStopRequested);
+
+	while (server_data->thread_list) {
+		struct ipc_server_thread_data *std = server_data->thread_list;
+
+		server_data->thread_list = std->next_thread;
+		free(std);
+	}
+
+	free(server_data);
+}
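
(Not part of the patch, but for orientation: a daemon built on this file
would typically drive the asynchronous API above roughly as follows.
This is a minimal sketch only; `my_app_cb` and `my_app_data` stand in
for the application-supplied callback and its instance data.)

	static int run_my_daemon(const char *path)
	{
		struct ipc_server_data *server = NULL;
		struct ipc_server_opts opts = { .nr_threads = 4 };

		if (ipc_server_run_async(&server, path, &opts,
					 my_app_cb, &my_app_data))
			return -1;

		/*
		 * ... do other work; a request answered with SIMPLE_IPC_QUIT
		 * or an explicit ipc_server_stop_async(server) ends the pool.
		 */

		ipc_server_await(server);	/* join all worker threads */
		ipc_server_free(server);
		return 0;
	}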
diff --git a/config.mak.uname b/config.mak.uname
index 198ab1e58f8..76087cff678 100644
--- a/config.mak.uname
+++ b/config.mak.uname
@@ -421,6 +421,7 @@ ifeq ($(uname_S),Windows)
 	RUNTIME_PREFIX = YesPlease
 	HAVE_WPGMPTR = YesWeDo
 	NO_ST_BLOCKS_IN_STRUCT_STAT = YesPlease
+	USE_WIN32_IPC = YesPlease
 	USE_WIN32_MMAP = YesPlease
 	MMAP_PREVENTS_DELETE = UnfortunatelyYes
 	# USE_NED_ALLOCATOR = YesPlease
@@ -597,6 +598,7 @@ ifneq (,$(findstring MINGW,$(uname_S)))
 	RUNTIME_PREFIX = YesPlease
 	HAVE_WPGMPTR = YesWeDo
 	NO_ST_BLOCKS_IN_STRUCT_STAT = YesPlease
+	USE_WIN32_IPC = YesPlease
 	USE_WIN32_MMAP = YesPlease
 	MMAP_PREVENTS_DELETE = UnfortunatelyYes
 	USE_NED_ALLOCATOR = YesPlease
diff --git a/contrib/buildsystems/CMakeLists.txt b/contrib/buildsystems/CMakeLists.txt
index c151dd7257f..4bd41054ee7 100644
--- a/contrib/buildsystems/CMakeLists.txt
+++ b/contrib/buildsystems/CMakeLists.txt
@@ -246,6 +246,10 @@ elseif(CMAKE_SYSTEM_NAME STREQUAL "Linux")
 	list(APPEND compat_SOURCES unix-socket.c)
 endif()
 
+if(CMAKE_SYSTEM_NAME STREQUAL "Windows")
+	list(APPEND compat_SOURCES compat/simple-ipc/ipc-shared.c compat/simple-ipc/ipc-win32.c)
+endif()
+
 set(EXE_EXTENSION ${CMAKE_EXECUTABLE_SUFFIX})
 
 #header checks
diff --git a/simple-ipc.h b/simple-ipc.h
new file mode 100644
index 00000000000..eb19b5da8b1
--- /dev/null
+++ b/simple-ipc.h
@@ -0,0 +1,225 @@
+#ifndef GIT_SIMPLE_IPC_H
+#define GIT_SIMPLE_IPC_H
+
+/*
+ * See Documentation/technical/api-simple-ipc.txt
+ */
+
+#if defined(GIT_WINDOWS_NATIVE)
+#define SUPPORTS_SIMPLE_IPC
+#endif
+
+#ifdef SUPPORTS_SIMPLE_IPC
+#include "pkt-line.h"
+
+/*
+ * Simple IPC Client Side API.
+ */
+
+enum ipc_active_state {
+	/*
+	 * The pipe/socket exists and the daemon is waiting for connections.
+	 */
+	IPC_STATE__LISTENING = 0,
+
+	/*
+	 * The pipe/socket exists, but the daemon is not listening.
+	 * Perhaps it is very busy.
+	 * Perhaps the daemon died without deleting the path.
+	 * Perhaps it is shutting down and draining existing clients.
+	 * Perhaps it is dead, but other clients are lingering and
+	 * still holding a reference to the pathname.
+	 */
+	IPC_STATE__NOT_LISTENING,
+
+	/*
+	 * The requested pathname is bogus and no amount of retries
+	 * will fix that.
+	 */
+	IPC_STATE__INVALID_PATH,
+
+	/*
+	 * The requested pathname is not found.  This usually means
+	 * that there is no daemon present.
+	 */
+	IPC_STATE__PATH_NOT_FOUND,
+
+	IPC_STATE__OTHER_ERROR,
+};
+
+struct ipc_client_connect_options {
+	/*
+	 * Spin under timeout if the server is running but can't
+	 * accept our connection yet.  This should always be set
+	 * unless you just want to poke the server and see if it
+	 * is alive.
+	 */
+	unsigned int wait_if_busy:1;
+
+	/*
+	 * Spin under timeout if the pipe/socket is not yet present
+	 * on the file system.  This is useful if we just started
+	 * the service and need to wait for it to become ready.
+	 */
+	unsigned int wait_if_not_found:1;
+};
+
+#define IPC_CLIENT_CONNECT_OPTIONS_INIT { \
+	.wait_if_busy = 0, \
+	.wait_if_not_found = 0, \
+}
+
+/*
+ * Determine if a server is listening on this named pipe or socket using
+ * platform-specific logic.  This might just probe the filesystem or it
+ * might make a trivial connection to the server using this pathname.
+ */
+enum ipc_active_state ipc_get_active_state(const char *path);
+
+struct ipc_client_connection {
+	int fd;
+	struct packet_scratch_space scratch_write_buffer;
+};
+
+/*
+ * Try to connect to the daemon on the named pipe or socket.
+ *
+ * Returns IPC_STATE__LISTENING and a connection handle.
+ *
+ * Otherwise, returns info to help decide whether to retry or to
+ * spawn/respawn the server.
+ */
+enum ipc_active_state ipc_client_try_connect(
+	const char *path,
+	const struct ipc_client_connect_options *options,
+	struct ipc_client_connection **p_connection);
+
+void ipc_client_close_connection(struct ipc_client_connection *connection);
+
+/*
+ * Used by the client to synchronously send and receive a message with
+ * the server on the provided client connection.
+ *
+ * Returns 0 when successful.
+ *
+ * Calls error() and returns non-zero otherwise.
+ */
+int ipc_client_send_command_to_connection(
+	struct ipc_client_connection *connection,
+	const char *message, struct strbuf *answer);
+
+/*
+ * Used by the client to synchronously connect to, send a message to,
+ * and receive a response from the server listening at the given path.
+ *
+ * Returns 0 when successful.
+ *
+ * Calls error() and returns non-zero otherwise.
+ */
+int ipc_client_send_command(const char *path,
+			    const struct ipc_client_connect_options *options,
+			    const char *message, struct strbuf *answer);
+
+/*
+ * Simple IPC Server Side API.
+ */
+
+struct ipc_server_reply_data;
+
+typedef int (ipc_server_reply_cb)(struct ipc_server_reply_data *,
+				  const char *response,
+				  size_t response_len);
+
+/*
+ * Prototype for an application-supplied callback to process incoming
+ * client IPC messages and compose a reply.  The `application_cb` should
+ * use the provided `reply_cb` and `reply_data` to send an IPC response
+ * back to the client.  The `reply_cb` callback can be called multiple
+ * times for chunking purposes.  A reply message is optional and may be
+ * omitted if not necessary for the application.
+ *
+ * The return value from the application callback is ignored, except
+ * for `SIMPLE_IPC_QUIT`, which tells the ipc-server to shut down.
+ */
+typedef int (ipc_server_application_cb)(void *application_data,
+					const char *request,
+					ipc_server_reply_cb *reply_cb,
+					struct ipc_server_reply_data *reply_data);
+
+#define SIMPLE_IPC_QUIT -2
+
+/*
+ * Opaque instance data to represent an IPC server instance.
+ */
+struct ipc_server_data;
+
+/*
+ * Control parameters for the IPC server instance.
+ * Use this to hide platform-specific settings.
+ */
+struct ipc_server_opts
+{
+	int nr_threads;
+};
+
+/*
+ * Start an IPC server instance in one or more background threads
+ * and return a handle to the pool.
+ *
+ * Returns 0 if the asynchronous server pool was started successfully.
+ * Returns -1 if not.
+ *
+ * When a client IPC message is received, the `application_cb` will be
+ * called (possibly on a random thread) to handle the message and
+ * optionally compose a reply message.
+ */
+int ipc_server_run_async(struct ipc_server_data **returned_server_data,
+			 const char *path, const struct ipc_server_opts *opts,
+			 ipc_server_application_cb *application_cb,
+			 void *application_data);
+
+/*
+ * Gently signal the IPC server pool to shut down.  No new client
+ * connections will be accepted, but existing connections will be
+ * allowed to complete.
+ */
+int ipc_server_stop_async(struct ipc_server_data *server_data);
+
+/*
+ * Block the calling thread until all threads in the IPC server pool
+ * have completed and been joined.
+ */
+int ipc_server_await(struct ipc_server_data *server_data);
+
+/*
+ * Close and free all resource handles associated with the IPC server
+ * pool.
+ */
+void ipc_server_free(struct ipc_server_data *server_data);
+
+/*
+ * Run an IPC server instance and block the calling thread of the
+ * current process.  It does not return until the IPC server has
+ * either shut down or had an unrecoverable error.
+ *
+ * The IPC server handles incoming IPC messages from client processes
+ * and may use one or more background threads as necessary.
+ *
+ * Returns 0 after the server has completed successfully.
+ * Returns -1 if the server cannot be started.
+ *
+ * When a client IPC message is received, the `application_cb` will be
+ * called (possibly on a random thread) to handle the message and
+ * optionally compose a reply message.
+ *
+ * Note that `ipc_server_run()` is a synchronous wrapper around the
+ * above asynchronous routines.  It effectively hides all of the
+ * server state and thread details from the caller and presents a
+ * simple synchronous interface.
+ */
+int ipc_server_run(const char *path, const struct ipc_server_opts *opts,
+		   ipc_server_application_cb *application_cb,
+		   void *application_data);
+
+#endif /* SUPPORTS_SIMPLE_IPC */
+#endif /* GIT_SIMPLE_IPC_H */
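
To make the intended call pattern concrete, a one-shot client request
would look something like the sketch below (illustrative only, not part
of the patch; the test helper added later in this series exercises
essentially the same pattern):

	static int send_ping(const char *path)
	{
		struct strbuf answer = STRBUF_INIT;
		struct ipc_client_connect_options options =
			IPC_CLIENT_CONNECT_OPTIONS_INIT;
		int ret;

		options.wait_if_busy = 1;

		/* optional: cheap liveness probe before sending anything */
		if (ipc_get_active_state(path) != IPC_STATE__LISTENING)
			return error("no server listening at '%s'", path);

		/* connect, send one request, and collect the entire reply */
		ret = ipc_client_send_command(path, &options, "ping", &answer);
		if (!ret)
			printf("%s\n", answer.buf);

		strbuf_release(&answer);
		return ret;
	}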
-- 
gitgitgadget



* [PATCH v2 09/14] simple-ipc: add t/helper/test-simple-ipc and t0052
  2021-02-01 19:45 ` [PATCH v2 00/14] " Jeff Hostetler via GitGitGadget
                     ` (7 preceding siblings ...)
  2021-02-01 19:45   ` [PATCH v2 08/14] simple-ipc: add win32 implementation Jeff Hostetler via GitGitGadget
@ 2021-02-01 19:45   ` Jeff Hostetler via GitGitGadget
  2021-02-02 21:35     ` SZEDER Gábor
  2021-02-05 19:38     ` SZEDER Gábor
  2021-02-01 19:45   ` [PATCH v2 10/14] unix-socket: eliminate static unix_stream_socket() helper function Jeff Hostetler via GitGitGadget
                     ` (6 subsequent siblings)
  15 siblings, 2 replies; 178+ messages in thread
From: Jeff Hostetler via GitGitGadget @ 2021-02-01 19:45 UTC (permalink / raw)
  To: git
  Cc: Ævar Arnfjörð Bjarmason, Jeff Hostetler,
	Jeff King, Chris Torek, Jeff Hostetler, Jeff Hostetler

From: Jeff Hostetler <jeffhost@microsoft.com>

Create unit tests for "simple-ipc".  These are currently only enabled
on Windows.

Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
---
 Makefile                   |   1 +
 t/helper/test-simple-ipc.c | 485 +++++++++++++++++++++++++++++++++++++
 t/helper/test-tool.c       |   1 +
 t/helper/test-tool.h       |   1 +
 t/t0052-simple-ipc.sh      | 129 ++++++++++
 5 files changed, 617 insertions(+)
 create mode 100644 t/helper/test-simple-ipc.c
 create mode 100755 t/t0052-simple-ipc.sh

diff --git a/Makefile b/Makefile
index c94d5847919..e7ba8853ea6 100644
--- a/Makefile
+++ b/Makefile
@@ -740,6 +740,7 @@ TEST_BUILTINS_OBJS += test-serve-v2.o
 TEST_BUILTINS_OBJS += test-sha1.o
 TEST_BUILTINS_OBJS += test-sha256.o
 TEST_BUILTINS_OBJS += test-sigchain.o
+TEST_BUILTINS_OBJS += test-simple-ipc.o
 TEST_BUILTINS_OBJS += test-strcmp-offset.o
 TEST_BUILTINS_OBJS += test-string-list.o
 TEST_BUILTINS_OBJS += test-submodule-config.o
diff --git a/t/helper/test-simple-ipc.c b/t/helper/test-simple-ipc.c
new file mode 100644
index 00000000000..4960e79cf18
--- /dev/null
+++ b/t/helper/test-simple-ipc.c
@@ -0,0 +1,485 @@
+/*
+ * test-simple-ipc.c: verify that the Inter-Process Communication works.
+ */
+
+#include "test-tool.h"
+#include "cache.h"
+#include "strbuf.h"
+#include "simple-ipc.h"
+#include "parse-options.h"
+#include "thread-utils.h"
+
+#ifndef SUPPORTS_SIMPLE_IPC
+int cmd__simple_ipc(int argc, const char **argv)
+{
+	die("simple IPC not available on this platform");
+}
+#else
+
+/*
+ * The test daemon defines an "application callback" that supports a
+ * series of commands (see `test_app_cb()`).
+ *
+ * Unknown commands are caught here and we send an error message back
+ * to the client process.
+ */
+static int app__unhandled_command(const char *command,
+				  ipc_server_reply_cb *reply_cb,
+				  struct ipc_server_reply_data *reply_data)
+{
+	struct strbuf buf = STRBUF_INIT;
+	int ret;
+
+	strbuf_addf(&buf, "unhandled command: %s", command);
+	ret = reply_cb(reply_data, buf.buf, buf.len);
+	strbuf_release(&buf);
+
+	return ret;
+}
+
+/*
+ * Reply with a single very large buffer.  This is to ensure that
+ * long responses are properly handled -- whether the chunking occurs
+ * in the kernel or in the (probably pkt-line) layer.
+ */
+#define BIG_ROWS (10000)
+static int app__big_command(ipc_server_reply_cb *reply_cb,
+			    struct ipc_server_reply_data *reply_data)
+{
+	struct strbuf buf = STRBUF_INIT;
+	int row;
+	int ret;
+
+	for (row = 0; row < BIG_ROWS; row++)
+		strbuf_addf(&buf, "big: %.75d\n", row);
+
+	ret = reply_cb(reply_data, buf.buf, buf.len);
+	strbuf_release(&buf);
+
+	return ret;
+}
+
+/*
+ * Reply with a series of lines.  This is to ensure that we can incrementally
+ * compute the response and chunk it to the client.
+ */
+#define CHUNK_ROWS (10000)
+static int app__chunk_command(ipc_server_reply_cb *reply_cb,
+			      struct ipc_server_reply_data *reply_data)
+{
+	struct strbuf buf = STRBUF_INIT;
+	int row;
+	int ret;
+
+	for (row = 0; row < CHUNK_ROWS; row++) {
+		strbuf_setlen(&buf, 0);
+		strbuf_addf(&buf, "big: %.75d\n", row);
+		ret = reply_cb(reply_data, buf.buf, buf.len);
+	}
+
+	strbuf_release(&buf);
+
+	return ret;
+}
+
+/*
+ * Slowly reply with a series of lines.  This is to model an expensive to
+ * compute chunked response (which might happen if this callback is running
+ * in a thread and is fighting for a lock with other threads).
+ */
+#define SLOW_ROWS     (1000)
+#define SLOW_DELAY_MS (10)
+static int app__slow_command(ipc_server_reply_cb *reply_cb,
+			     struct ipc_server_reply_data *reply_data)
+{
+	struct strbuf buf = STRBUF_INIT;
+	int row;
+	int ret;
+
+	for (row = 0; row < SLOW_ROWS; row++) {
+		strbuf_setlen(&buf, 0);
+		strbuf_addf(&buf, "big: %.75d\n", row);
+		ret = reply_cb(reply_data, buf.buf, buf.len);
+		sleep_millisec(SLOW_DELAY_MS);
+	}
+
+	strbuf_release(&buf);
+
+	return ret;
+}
+
+/*
+ * The client sent a command followed by a (possibly very) large buffer.
+ */
+static int app__sendbytes_command(const char *received,
+				  ipc_server_reply_cb *reply_cb,
+				  struct ipc_server_reply_data *reply_data)
+{
+	struct strbuf buf_resp = STRBUF_INIT;
+	const char *p = "?";
+	int len_ballast = 0;
+	int k;
+	int errs = 0;
+	int ret;
+
+	if (skip_prefix(received, "sendbytes ", &p))
+		len_ballast = strlen(p);
+
+	/*
+	 * Verify that the ballast is n copies of a single letter.
+	 * And that the multi-threaded IO layer didn't cross the streams.
+	 */
+	for (k = 1; k < len_ballast; k++)
+		if (p[k] != p[0])
+			errs++;
+
+	if (errs)
+		strbuf_addf(&buf_resp, "errs:%d\n", errs);
+	else
+		strbuf_addf(&buf_resp, "rcvd:%c%08d\n", p[0], len_ballast);
+
+	ret = reply_cb(reply_data, buf_resp.buf, buf_resp.len);
+
+	strbuf_release(&buf_resp);
+
+	return ret;
+}
+
+/*
+ * An arbitrary fixed address to verify that the application instance
+ * data is handled properly.
+ */
+static int my_app_data = 42;
+
+static ipc_server_application_cb test_app_cb;
+
+/*
+ * This is "application callback" that sits on top of the "ipc-server".
+ * It completely defines the set of command verbs supported by this
+ * application.
+ */
+static int test_app_cb(void *application_data,
+		       const char *command,
+		       ipc_server_reply_cb *reply_cb,
+		       struct ipc_server_reply_data *reply_data)
+{
+	/*
+	 * Verify that we received the application-data that we passed
+	 * when we started the ipc-server.  (We have several layers of
+	 * callbacks calling callbacks and it's easy to get things mixed
+	 * up (especially when some are "void*").)
+	 */
+	if (application_data != (void*)&my_app_data)
+		BUG("application_cb: application_data pointer wrong");
+
+	if (!strcmp(command, "quit")) {
+		/*
+		 * Tell ipc-server to hangup with an empty reply.
+		 */
+		return SIMPLE_IPC_QUIT;
+	}
+
+	if (!strcmp(command, "ping")) {
+		const char *answer = "pong";
+		return reply_cb(reply_data, answer, strlen(answer));
+	}
+
+	if (!strcmp(command, "big"))
+		return app__big_command(reply_cb, reply_data);
+
+	if (!strcmp(command, "chunk"))
+		return app__chunk_command(reply_cb, reply_data);
+
+	if (!strcmp(command, "slow"))
+		return app__slow_command(reply_cb, reply_data);
+
+	if (starts_with(command, "sendbytes "))
+		return app__sendbytes_command(command, reply_cb, reply_data);
+
+	return app__unhandled_command(command, reply_cb, reply_data);
+}
+
+/*
+ * This process will run as a simple-ipc server and listen for IPC commands
+ * from client processes.
+ */
+static int daemon__run_server(const char *path, int argc, const char **argv)
+{
+	struct ipc_server_opts opts = {
+		.nr_threads = 5
+	};
+
+	const char * const daemon_usage[] = {
+		N_("test-helper simple-ipc daemon [<options>"),
+		NULL
+	};
+	struct option daemon_options[] = {
+		OPT_INTEGER(0, "threads", &opts.nr_threads,
+			    N_("number of threads in server thread pool")),
+		OPT_END()
+	};
+
+	argc = parse_options(argc, argv, NULL, daemon_options, daemon_usage, 0);
+
+	if (opts.nr_threads < 1)
+		opts.nr_threads = 1;
+
+	/*
+	 * Synchronously run the ipc-server.  We don't need any application
+	 * instance data, so pass an arbitrary pointer (that we'll later
+	 * verify made the round trip).
+	 */
+	return ipc_server_run(path, &opts, test_app_cb, (void*)&my_app_data);
+}
+
+/*
+ * This process will run a quick probe to see if a simple-ipc server
+ * is active on this path.
+ *
+ * Returns 0 if the server is alive.
+ */
+static int client__probe_server(const char *path)
+{
+	enum ipc_active_state s;
+
+	s = ipc_get_active_state(path);
+	switch (s) {
+	case IPC_STATE__LISTENING:
+		return 0;
+
+	case IPC_STATE__NOT_LISTENING:
+		return error("no server listening at '%s'", path);
+
+	case IPC_STATE__PATH_NOT_FOUND:
+		return error("path not found '%s'", path);
+
+	case IPC_STATE__INVALID_PATH:
+		return error("invalid pipe/socket name '%s'", path);
+
+	case IPC_STATE__OTHER_ERROR:
+	default:
+		return error("other error for '%s'", path);
+	}
+}
+
+/*
+ * Send an IPC command to an already-running server daemon and print the
+ * response.
+ *
+ * argv[2] contains a simple (1 word) command verb that `test_app_cb()`
+ * (in the daemon process) will understand.
+ */
+static int client__send_ipc(int argc, const char **argv, const char *path)
+{
+	const char *command = argc > 2 ? argv[2] : "(no command)";
+	struct strbuf buf = STRBUF_INIT;
+	struct ipc_client_connect_options options
+		= IPC_CLIENT_CONNECT_OPTIONS_INIT;
+
+	options.wait_if_busy = 1;
+	options.wait_if_not_found = 0;
+
+	if (!ipc_client_send_command(path, &options, command, &buf)) {
+		printf("%s\n", buf.buf);
+		fflush(stdout);
+		strbuf_release(&buf);
+
+		return 0;
+	}
+
+	return error("failed to send '%s' to '%s'", command, path);
+}
+
+/*
+ * Send an IPC command followed by ballast to confirm that a large
+ * message can be sent and that the kernel or pkt-line layers will
+ * properly chunk it and that the daemon receives the entire message.
+ */
+static int do_sendbytes(int bytecount, char byte, const char *path)
+{
+	struct strbuf buf_send = STRBUF_INIT;
+	struct strbuf buf_resp = STRBUF_INIT;
+	struct ipc_client_connect_options options
+		= IPC_CLIENT_CONNECT_OPTIONS_INIT;
+
+	options.wait_if_busy = 1;
+	options.wait_if_not_found = 0;
+
+	strbuf_addstr(&buf_send, "sendbytes ");
+	strbuf_addchars(&buf_send, byte, bytecount);
+
+	if (!ipc_client_send_command(path, &options, buf_send.buf, &buf_resp)) {
+		strbuf_rtrim(&buf_resp);
+		printf("sent:%c%08d %s\n", byte, bytecount, buf_resp.buf);
+		fflush(stdout);
+		strbuf_release(&buf_send);
+		strbuf_release(&buf_resp);
+
+		return 0;
+	}
+
+	return error("client failed to sendbytes(%d, '%c') to '%s'",
+		     bytecount, byte, path);
+}
+
+/*
+ * Send an IPC command with ballast to an already-running server daemon.
+ */
+static int client__sendbytes(int argc, const char **argv, const char *path)
+{
+	int bytecount = 1024;
+	char *string = "x";
+	const char * const sendbytes_usage[] = {
+		N_("test-helper simple-ipc sendbytes [<options>]"),
+		NULL
+	};
+	struct option sendbytes_options[] = {
+		OPT_INTEGER(0, "bytecount", &bytecount, N_("number of bytes")),
+		OPT_STRING(0, "byte", &string, N_("byte"), N_("ballast")),
+		OPT_END()
+	};
+
+	argc = parse_options(argc, argv, NULL, sendbytes_options, sendbytes_usage, 0);
+
+	return do_sendbytes(bytecount, string[0], path);
+}
+
+struct multiple_thread_data {
+	pthread_t pthread_id;
+	struct multiple_thread_data *next;
+	const char *path;
+	int bytecount;
+	int batchsize;
+	int sum_errors;
+	int sum_good;
+	char letter;
+};
+
+static void *multiple_thread_proc(void *_multiple_thread_data)
+{
+	struct multiple_thread_data *d = _multiple_thread_data;
+	int k;
+
+	trace2_thread_start("multiple");
+
+	for (k = 0; k < d->batchsize; k++) {
+		if (do_sendbytes(d->bytecount + k, d->letter, d->path))
+			d->sum_errors++;
+		else
+			d->sum_good++;
+	}
+
+	trace2_thread_exit();
+	return NULL;
+}
+
+/*
+ * Start a client-side thread pool.  Each thread sends a series of
+ * IPC requests.  Each request is on a new connection to the server.
+ */
+static int client__multiple(int argc, const char **argv, const char *path)
+{
+	struct multiple_thread_data *list = NULL;
+	int k;
+	int nr_threads = 5;
+	int bytecount = 1;
+	int batchsize = 10;
+	int sum_join_errors = 0;
+	int sum_thread_errors = 0;
+	int sum_good = 0;
+
+	const char * const multiple_usage[] = {
+		N_("test-helper simple-ipc multiple [<options>]"),
+		NULL
+	};
+	struct option multiple_options[] = {
+		OPT_INTEGER(0, "bytecount", &bytecount, N_("number of bytes")),
+		OPT_INTEGER(0, "threads", &nr_threads, N_("number of threads")),
+		OPT_INTEGER(0, "batchsize", &batchsize, N_("number of requests per thread")),
+		OPT_END()
+	};
+
+	argc = parse_options(argc, argv, NULL, multiple_options, multiple_usage, 0);
+
+	if (bytecount < 1)
+		bytecount = 1;
+	if (nr_threads < 1)
+		nr_threads = 1;
+	if (batchsize < 1)
+		batchsize = 1;
+
+	for (k = 0; k < nr_threads; k++) {
+		struct multiple_thread_data *d = xcalloc(1, sizeof(*d));
+		d->next = list;
+		d->path = path;
+		d->bytecount = bytecount + batchsize*(k/26);
+		d->batchsize = batchsize;
+		d->sum_errors = 0;
+		d->sum_good = 0;
+		d->letter = 'A' + (k % 26);
+
+		if (pthread_create(&d->pthread_id, NULL, multiple_thread_proc, d)) {
+			warning("failed to create thread[%d] skipping remainder", k);
+			free(d);
+			break;
+		}
+
+		list = d;
+	}
+
+	while (list) {
+		struct multiple_thread_data *d = list;
+
+		if (pthread_join(d->pthread_id, NULL))
+			sum_join_errors++;
+
+		sum_thread_errors += d->sum_errors;
+		sum_good += d->sum_good;
+
+		list = d->next;
+		free(d);
+	}
+
+	printf("client (good %d) (join %d), (errors %d)\n",
+	       sum_good, sum_join_errors, sum_thread_errors);
+
+	return (sum_join_errors + sum_thread_errors) ? 1 : 0;
+}
+
+int cmd__simple_ipc(int argc, const char **argv)
+{
+	const char *path = "ipc-test";
+
+	if (argc == 2 && !strcmp(argv[1], "SUPPORTS_SIMPLE_IPC"))
+		return 0;
+
+	/* Use '!!' on all dispatch functions to map from `error()` style
+	 * (returns -1) to `test_must_fail` style (expects 1) and
+	 * get less confusing shell error messages.
+	 */
+
+	if (argc == 2 && !strcmp(argv[1], "is-active"))
+		return !!client__probe_server(path);
+
+	if (argc >= 2 && !strcmp(argv[1], "daemon"))
+		return !!daemon__run_server(path, argc, argv);
+
+	/*
+	 * Client commands follow.  Ensure a server is running before
+	 * going any further.
+	 */
+	if (client__probe_server(path))
+		return 1;
+
+	if ((argc == 2 || argc == 3) && !strcmp(argv[1], "send"))
+		return !!client__send_ipc(argc, argv, path);
+
+	if (argc >= 2 && !strcmp(argv[1], "sendbytes"))
+		return !!client__sendbytes(argc, argv, path);
+
+	if (argc >= 2 && !strcmp(argv[1], "multiple"))
+		return !!client__multiple(argc, argv, path);
+
+	die("Unhandled argv[1]: '%s'", argv[1]);
+}
+#endif
diff --git a/t/helper/test-tool.c b/t/helper/test-tool.c
index 9d6d14d9293..a409655f03b 100644
--- a/t/helper/test-tool.c
+++ b/t/helper/test-tool.c
@@ -64,6 +64,7 @@ static struct test_cmd cmds[] = {
 	{ "sha1", cmd__sha1 },
 	{ "sha256", cmd__sha256 },
 	{ "sigchain", cmd__sigchain },
+	{ "simple-ipc", cmd__simple_ipc },
 	{ "strcmp-offset", cmd__strcmp_offset },
 	{ "string-list", cmd__string_list },
 	{ "submodule-config", cmd__submodule_config },
diff --git a/t/helper/test-tool.h b/t/helper/test-tool.h
index a6470ff62c4..564eb3c8e91 100644
--- a/t/helper/test-tool.h
+++ b/t/helper/test-tool.h
@@ -54,6 +54,7 @@ int cmd__sha1(int argc, const char **argv);
 int cmd__oid_array(int argc, const char **argv);
 int cmd__sha256(int argc, const char **argv);
 int cmd__sigchain(int argc, const char **argv);
+int cmd__simple_ipc(int argc, const char **argv);
 int cmd__strcmp_offset(int argc, const char **argv);
 int cmd__string_list(int argc, const char **argv);
 int cmd__submodule_config(int argc, const char **argv);
diff --git a/t/t0052-simple-ipc.sh b/t/t0052-simple-ipc.sh
new file mode 100755
index 00000000000..69588354545
--- /dev/null
+++ b/t/t0052-simple-ipc.sh
@@ -0,0 +1,129 @@
+#!/bin/sh
+
+test_description='simple command server'
+
+. ./test-lib.sh
+
+test-tool simple-ipc SUPPORTS_SIMPLE_IPC || {
+	skip_all='simple IPC not supported on this platform'
+	test_done
+}
+
+stop_simple_IPC_server () {
+	test -n "$SIMPLE_IPC_PID" || return 0
+
+	kill "$SIMPLE_IPC_PID" &&
+	SIMPLE_IPC_PID=
+}
+
+test_expect_success 'start simple command server' '
+	{ test-tool simple-ipc daemon --threads=8 & } &&
+	SIMPLE_IPC_PID=$! &&
+	test_atexit stop_simple_IPC_server &&
+
+	sleep 1 &&
+
+	test-tool simple-ipc is-active
+'
+
+test_expect_success 'simple command server' '
+	test-tool simple-ipc send ping >actual &&
+	echo pong >expect &&
+	test_cmp expect actual
+'
+
+test_expect_success 'servers cannot share the same path' '
+	test_must_fail test-tool simple-ipc daemon &&
+	test-tool simple-ipc is-active
+'
+
+test_expect_success 'big response' '
+	test-tool simple-ipc send big >actual &&
+	test_line_count -ge 10000 actual &&
+	grep -q "big: [0]*9999\$" actual
+'
+
+test_expect_success 'chunk response' '
+	test-tool simple-ipc send chunk >actual &&
+	test_line_count -ge 10000 actual &&
+	grep -q "big: [0]*9999\$" actual
+'
+
+test_expect_success 'slow response' '
+	test-tool simple-ipc send slow >actual &&
+	test_line_count -ge 100 actual &&
+	grep -q "big: [0]*99\$" actual
+'
+
+# Send an IPC with n=100,000 bytes of ballast.  This should be large enough
+# to force both the kernel and the pkt-line layer to chunk the message to the
+# daemon and for the daemon to receive it in chunks.
+#
+test_expect_success 'sendbytes' '
+	test-tool simple-ipc sendbytes --bytecount=100000 --byte=A >actual &&
+	grep "sent:A00100000 rcvd:A00100000" actual
+'
+
+# Start a series of <threads> client threads that each make <batchsize>
+# IPC requests to the server.  Each of the (<threads> * <batchsize>)
+# requests will open a new connection to the server and randomly bind
+# to a server thread.  Each client thread exits after completing its
+# batch, so at most <threads> client threads are alive at any one time.
+# Each request will send a message containing at least <bytecount> bytes
+# of ballast.  (Responses are small.)
+#
+# The purpose here is to test threading in the server and responding to
+# many concurrent client requests (regardless of whether they come from
+# 1 client process or many).  And to test that the server side of the
+# named pipe/socket is stable.  (On Windows this means that the server
+# pipe is properly recycled.)
+#
+# On Windows it also lets us adjust the connection timeout in the
+# `ipc_client_send_command()`.
+#
+# Note it is easy to drive the system into failure by requesting an
+# insane number of threads on client or server and/or increasing the
+# per-thread batchsize or the per-request bytecount (ballast).
+# On Windows these failures look like "pipe is busy" errors.
+# So I've chosen fairly conservative values for now.
+#
+# We expect output of the form "sent:<letter><length> ..."
+# With terms (7, 19, 13) we expect:
+#   <letter> in [A-G]
+#   <length> in [19+0 .. 19+(13-1)]
+# and (7 * 13) successful responses.
+#
+test_expect_success 'stress test threads' '
+	test-tool simple-ipc multiple \
+		--threads=7 \
+		--bytecount=19 \
+		--batchsize=13 \
+		>actual &&
+	test_line_count = 92 actual &&
+	grep "good 91" actual &&
+	grep "sent:A" <actual >actual_a &&
+	cat >expect_a <<-EOF &&
+		sent:A00000019 rcvd:A00000019
+		sent:A00000020 rcvd:A00000020
+		sent:A00000021 rcvd:A00000021
+		sent:A00000022 rcvd:A00000022
+		sent:A00000023 rcvd:A00000023
+		sent:A00000024 rcvd:A00000024
+		sent:A00000025 rcvd:A00000025
+		sent:A00000026 rcvd:A00000026
+		sent:A00000027 rcvd:A00000027
+		sent:A00000028 rcvd:A00000028
+		sent:A00000029 rcvd:A00000029
+		sent:A00000030 rcvd:A00000030
+		sent:A00000031 rcvd:A00000031
+	EOF
+	test_cmp expect_a actual_a
+'
+
+test_expect_success '`quit` works' '
+	test-tool simple-ipc send quit &&
+	test_must_fail test-tool simple-ipc is-active &&
+	test_must_fail test-tool simple-ipc send ping
+'
+
+test_done
-- 
gitgitgadget



* [PATCH v2 10/14] unix-socket: eliminate static unix_stream_socket() helper function
  2021-02-01 19:45 ` [PATCH v2 00/14] " Jeff Hostetler via GitGitGadget
                     ` (8 preceding siblings ...)
  2021-02-01 19:45   ` [PATCH v2 09/14] simple-ipc: add t/helper/test-simple-ipc and t0052 Jeff Hostetler via GitGitGadget
@ 2021-02-01 19:45   ` Jeff Hostetler via GitGitGadget
  2021-02-02  9:54     ` Jeff King
  2021-02-02  9:58     ` Jeff King
  2021-02-01 19:45   ` [PATCH v2 11/14] unix-socket: add options to unix_stream_listen() Jeff Hostetler via GitGitGadget
                     ` (5 subsequent siblings)
  15 siblings, 2 replies; 178+ messages in thread
From: Jeff Hostetler via GitGitGadget @ 2021-02-01 19:45 UTC (permalink / raw)
  To: git
  Cc: Ævar Arnfjörð Bjarmason, Jeff Hostetler,
	Jeff King, Chris Torek, Jeff Hostetler, Jeff Hostetler

From: Jeff Hostetler <jeffhost@microsoft.com>

The static helper function `unix_stream_socket()` calls `die()`.  This is not
appropriate for all callers.  Eliminate the wrapper function and move the
existing error handling to the callers in preparation for adapting specific
callers.

Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
---
 unix-socket.c | 17 +++++++----------
 1 file changed, 7 insertions(+), 10 deletions(-)

diff --git a/unix-socket.c b/unix-socket.c
index 19ed48be990..ef2aeb46bcd 100644
--- a/unix-socket.c
+++ b/unix-socket.c
@@ -1,14 +1,6 @@
 #include "cache.h"
 #include "unix-socket.h"
 
-static int unix_stream_socket(void)
-{
-	int fd = socket(AF_UNIX, SOCK_STREAM, 0);
-	if (fd < 0)
-		die_errno("unable to create socket");
-	return fd;
-}
-
 static int chdir_len(const char *orig, int len)
 {
 	char *path = xmemdupz(orig, len);
@@ -79,7 +71,10 @@ int unix_stream_connect(const char *path)
 
 	if (unix_sockaddr_init(&sa, path, &ctx) < 0)
 		return -1;
-	fd = unix_stream_socket();
+	fd = socket(AF_UNIX, SOCK_STREAM, 0);
+	if (fd < 0)
+		die_errno("unable to create socket");
+
 	if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0)
 		goto fail;
 	unix_sockaddr_cleanup(&ctx);
@@ -103,7 +98,9 @@ int unix_stream_listen(const char *path)
 
 	if (unix_sockaddr_init(&sa, path, &ctx) < 0)
 		return -1;
-	fd = unix_stream_socket();
+	fd = socket(AF_UNIX, SOCK_STREAM, 0);
+	if (fd < 0)
+		die_errno("unable to create socket");
 
 	if (bind(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0)
 		goto fail;
-- 
gitgitgadget



* [PATCH v2 11/14] unix-socket: add options to unix_stream_listen()
  2021-02-01 19:45 ` [PATCH v2 00/14] " Jeff Hostetler via GitGitGadget
                     ` (9 preceding siblings ...)
  2021-02-01 19:45   ` [PATCH v2 10/14] unix-socket: eliminate static unix_stream_socket() helper function Jeff Hostetler via GitGitGadget
@ 2021-02-01 19:45   ` Jeff Hostetler via GitGitGadget
  2021-02-02 10:14     ` Jeff King
  2021-02-01 19:45   ` [PATCH v2 12/14] unix-socket: add no-chdir option " Jeff Hostetler via GitGitGadget
                     ` (4 subsequent siblings)
  15 siblings, 1 reply; 178+ messages in thread
From: Jeff Hostetler via GitGitGadget @ 2021-02-01 19:45 UTC (permalink / raw)
  To: git
  Cc: Ævar Arnfjörð Bjarmason, Jeff Hostetler,
	Jeff King, Chris Torek, Jeff Hostetler, Jeff Hostetler

From: Jeff Hostetler <jeffhost@microsoft.com>

Update `unix_stream_listen()` to take an options structure to override
default behaviors.  This includes the size of the `listen()` backlog
and whether it should always unlink the socket file before trying to
create a new one.  Also eliminate calls to `die()` if it cannot create
a socket.

Normally, `unix_stream_listen()` always tries to `unlink()` the
socket-path before calling `bind()`.  If there is an existing
server/daemon already bound and listening on that socket-path, our
`unlink()` would have the effect of disassociating the existing
server's bound-socket-fd from the socket-path without notifying the
existing server.  The existing server could continue to service
existing connections (accepted-socket-fd's), but would not receive any
further new connections (since clients rendezvous via the socket-path).
The existing server would effectively be offline, yet appear to be
active.

Furthermore, `unix_stream_listen()` creates an opportunity for a brief
race condition for connecting clients if they try to connect in the
interval between the forced `unlink()` and the subsequent `bind()` (which
recreates the socket-path that is bound to a new socket-fd in the current
process).

Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
---
 builtin/credential-cache--daemon.c |  3 ++-
 unix-socket.c                      | 28 +++++++++++++++++++++-------
 unix-socket.h                      | 14 +++++++++++++-
 3 files changed, 36 insertions(+), 9 deletions(-)

diff --git a/builtin/credential-cache--daemon.c b/builtin/credential-cache--daemon.c
index c61f123a3b8..4c6c89ab0de 100644
--- a/builtin/credential-cache--daemon.c
+++ b/builtin/credential-cache--daemon.c
@@ -203,9 +203,10 @@ static int serve_cache_loop(int fd)
 
 static void serve_cache(const char *socket_path, int debug)
 {
+	struct unix_stream_listen_opts opts = UNIX_STREAM_LISTEN_OPTS_INIT;
 	int fd;
 
-	fd = unix_stream_listen(socket_path);
+	fd = unix_stream_listen(socket_path, &opts);
 	if (fd < 0)
 		die_errno("unable to bind to '%s'", socket_path);
 
diff --git a/unix-socket.c b/unix-socket.c
index ef2aeb46bcd..8bcef18ea55 100644
--- a/unix-socket.c
+++ b/unix-socket.c
@@ -88,24 +88,35 @@ int unix_stream_connect(const char *path)
 	return -1;
 }
 
-int unix_stream_listen(const char *path)
+int unix_stream_listen(const char *path,
+		       const struct unix_stream_listen_opts *opts)
 {
-	int fd, saved_errno;
+	int fd = -1;
+	int saved_errno;
+	int bind_successful = 0;
+	int backlog;
 	struct sockaddr_un sa;
 	struct unix_sockaddr_context ctx;
 
-	unlink(path);
-
 	if (unix_sockaddr_init(&sa, path, &ctx) < 0)
 		return -1;
+
 	fd = socket(AF_UNIX, SOCK_STREAM, 0);
 	if (fd < 0)
-		die_errno("unable to create socket");
+		goto fail;
+
+	if (opts->force_unlink_before_bind)
+		unlink(path);
 
 	if (bind(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0)
 		goto fail;
+	bind_successful = 1;
 
-	if (listen(fd, 5) < 0)
+	if (opts->listen_backlog_size > 0)
+		backlog = opts->listen_backlog_size;
+	else
+		backlog = 5;
+	if (listen(fd, backlog) < 0)
 		goto fail;
 
 	unix_sockaddr_cleanup(&ctx);
@@ -114,7 +125,10 @@ int unix_stream_listen(const char *path)
 fail:
 	saved_errno = errno;
 	unix_sockaddr_cleanup(&ctx);
-	close(fd);
+	if (fd != -1)
+		close(fd);
+	if (bind_successful)
+		unlink(path);
 	errno = saved_errno;
 	return -1;
 }
diff --git a/unix-socket.h b/unix-socket.h
index e271aeec5a0..c28372ef48e 100644
--- a/unix-socket.h
+++ b/unix-socket.h
@@ -1,7 +1,19 @@
 #ifndef UNIX_SOCKET_H
 #define UNIX_SOCKET_H
 
+struct unix_stream_listen_opts {
+	int listen_backlog_size;
+	unsigned int force_unlink_before_bind:1;
+};
+
+#define UNIX_STREAM_LISTEN_OPTS_INIT \
+{ \
+	.listen_backlog_size = 5, \
+	.force_unlink_before_bind = 1, \
+}
+
 int unix_stream_connect(const char *path);
-int unix_stream_listen(const char *path);
+int unix_stream_listen(const char *path,
+		       const struct unix_stream_listen_opts *opts);
 
 #endif /* UNIX_SOCKET_H */
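
For illustration (not part of the patch), a caller that wants
non-default behavior would now do something like the following.  With
`force_unlink_before_bind` turned off, an existing live server keeps its
socket and our `bind()` simply fails with EADDRINUSE:

	struct unix_stream_listen_opts opts = UNIX_STREAM_LISTEN_OPTS_INIT;
	int fd;

	opts.listen_backlog_size = 128;		/* deeper accept(2) backlog */
	opts.force_unlink_before_bind = 0;	/* do not steal an existing socket */

	fd = unix_stream_listen(path, &opts);
	if (fd < 0)
		return error_errno("could not listen on '%s'", path);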
-- 
gitgitgadget



* [PATCH v2 12/14] unix-socket: add no-chdir option to unix_stream_listen()
  2021-02-01 19:45 ` [PATCH v2 00/14] " Jeff Hostetler via GitGitGadget
                     ` (10 preceding siblings ...)
  2021-02-01 19:45   ` [PATCH v2 11/14] unix-socket: add options to unix_stream_listen() Jeff Hostetler via GitGitGadget
@ 2021-02-01 19:45   ` Jeff Hostetler via GitGitGadget
  2021-02-02 10:26     ` Jeff King
  2021-02-01 19:45   ` [PATCH v2 13/14] unix-socket: do not call die in unix_stream_connect() Jeff Hostetler via GitGitGadget
                     ` (3 subsequent siblings)
  15 siblings, 1 reply; 178+ messages in thread
From: Jeff Hostetler via GitGitGadget @ 2021-02-01 19:45 UTC (permalink / raw)
  To: git
  Cc: Ævar Arnfjörð Bjarmason, Jeff Hostetler,
	Jeff King, Chris Torek, Jeff Hostetler, Jeff Hostetler

From: Jeff Hostetler <jeffhost@microsoft.com>

Calls to `chdir()` are dangerous in a multi-threaded context.  If
`unix_stream_listen()` is given a socket pathname that is too big to
fit in a `sockaddr_un` structure, it will `chdir()` to the parent
directory of the requested socket pathname, create the socket using a
relative pathname, and then `chdir()` back.  This is not thread-safe.

Add `disallow_chdir` flag to `struct unix_sockaddr_context` and change
all callers to pass an initialized context structure.

Teach `unix_sockaddr_init()` to not allow calls to `chdir()` when flag
is set.

Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
---
 unix-socket.c | 19 ++++++++++++++++---
 unix-socket.h |  2 ++
 2 files changed, 18 insertions(+), 3 deletions(-)

diff --git a/unix-socket.c b/unix-socket.c
index 8bcef18ea55..9726992f276 100644
--- a/unix-socket.c
+++ b/unix-socket.c
@@ -11,8 +11,15 @@ static int chdir_len(const char *orig, int len)
 
 struct unix_sockaddr_context {
 	char *orig_dir;
+	unsigned int disallow_chdir:1;
 };
 
+#define UNIX_SOCKADDR_CONTEXT_INIT \
+{ \
+	.orig_dir = NULL, \
+	.disallow_chdir = 0, \
+}
+
 static void unix_sockaddr_cleanup(struct unix_sockaddr_context *ctx)
 {
 	if (!ctx->orig_dir)
@@ -32,7 +39,11 @@ static int unix_sockaddr_init(struct sockaddr_un *sa, const char *path,
 {
 	int size = strlen(path) + 1;
 
-	ctx->orig_dir = NULL;
+	if (ctx->disallow_chdir && size > sizeof(sa->sun_path)) {
+		errno = ENAMETOOLONG;
+		return -1;
+	}
+
 	if (size > sizeof(sa->sun_path)) {
 		const char *slash = find_last_dir_sep(path);
 		const char *dir;
@@ -67,7 +78,7 @@ int unix_stream_connect(const char *path)
 {
 	int fd, saved_errno;
 	struct sockaddr_un sa;
-	struct unix_sockaddr_context ctx;
+	struct unix_sockaddr_context ctx = UNIX_SOCKADDR_CONTEXT_INIT;
 
 	if (unix_sockaddr_init(&sa, path, &ctx) < 0)
 		return -1;
@@ -96,7 +107,9 @@ int unix_stream_listen(const char *path,
 	int bind_successful = 0;
 	int backlog;
 	struct sockaddr_un sa;
-	struct unix_sockaddr_context ctx;
+	struct unix_sockaddr_context ctx = UNIX_SOCKADDR_CONTEXT_INIT;
+
+	ctx.disallow_chdir = opts->disallow_chdir;
 
 	if (unix_sockaddr_init(&sa, path, &ctx) < 0)
 		return -1;
diff --git a/unix-socket.h b/unix-socket.h
index c28372ef48e..5b0e8ccef10 100644
--- a/unix-socket.h
+++ b/unix-socket.h
@@ -4,12 +4,14 @@
 struct unix_stream_listen_opts {
 	int listen_backlog_size;
 	unsigned int force_unlink_before_bind:1;
+	unsigned int disallow_chdir:1;
 };
 
 #define UNIX_STREAM_LISTEN_OPTS_INIT \
 { \
 	.listen_backlog_size = 5, \
 	.force_unlink_before_bind = 1, \
+	.disallow_chdir = 0, \
 }
 
 int unix_stream_connect(const char *path);
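
So a multi-threaded caller can now opt out of the chdir() workaround
entirely, e.g. (sketch only, not part of the patch):

	struct unix_stream_listen_opts opts = UNIX_STREAM_LISTEN_OPTS_INIT;
	int fd;

	opts.disallow_chdir = 1;	/* overlong paths fail with ENAMETOOLONG */
	fd = unix_stream_listen(path, &opts);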
-- 
gitgitgadget



* [PATCH v2 13/14] unix-socket: do not call die in unix_stream_connect()
  2021-02-01 19:45 ` [PATCH v2 00/14] " Jeff Hostetler via GitGitGadget
                     ` (11 preceding siblings ...)
  2021-02-01 19:45   ` [PATCH v2 12/14] unix-socket: add no-chdir option " Jeff Hostetler via GitGitGadget
@ 2021-02-01 19:45   ` Jeff Hostetler via GitGitGadget
  2021-02-01 19:45   ` [PATCH v2 14/14] simple-ipc: add Unix domain socket implementation Jeff Hostetler via GitGitGadget
                     ` (2 subsequent siblings)
  15 siblings, 0 replies; 178+ messages in thread
From: Jeff Hostetler via GitGitGadget @ 2021-02-01 19:45 UTC (permalink / raw)
  To: git
  Cc: Ævar Arnfjörð Bjarmason, Jeff Hostetler,
	Jeff King, Chris Torek, Jeff Hostetler, Jeff Hostetler

From: Jeff Hostetler <jeffhost@microsoft.com>

Teach `unix_stream_connect()` to return error rather than calling `die()`
when a socket cannot be created.

Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
---
 unix-socket.c | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/unix-socket.c b/unix-socket.c
index 9726992f276..c7573df56a6 100644
--- a/unix-socket.c
+++ b/unix-socket.c
@@ -76,15 +76,17 @@ static int unix_sockaddr_init(struct sockaddr_un *sa, const char *path,
 
 int unix_stream_connect(const char *path)
 {
-	int fd, saved_errno;
+	int fd = -1;
+	int saved_errno;
 	struct sockaddr_un sa;
 	struct unix_sockaddr_context ctx = UNIX_SOCKADDR_CONTEXT_INIT;
 
 	if (unix_sockaddr_init(&sa, path, &ctx) < 0)
 		return -1;
+
 	fd = socket(AF_UNIX, SOCK_STREAM, 0);
 	if (fd < 0)
-		die_errno("unable to create socket");
+		goto fail;
 
 	if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0)
 		goto fail;
@@ -94,7 +96,8 @@ int unix_stream_connect(const char *path)
 fail:
 	saved_errno = errno;
 	unix_sockaddr_cleanup(&ctx);
-	close(fd);
+	if (fd != -1)
+		close(fd);
 	errno = saved_errno;
 	return -1;
 }
-- 
gitgitgadget



* [PATCH v2 14/14] simple-ipc: add Unix domain socket implementation
  2021-02-01 19:45 ` [PATCH v2 00/14] " Jeff Hostetler via GitGitGadget
                     ` (12 preceding siblings ...)
  2021-02-01 19:45   ` [PATCH v2 13/14] unix-socket: do not call die in unix_stream_connect() Jeff Hostetler via GitGitGadget
@ 2021-02-01 19:45   ` Jeff Hostetler via GitGitGadget
  2021-02-01 22:20   ` [PATCH v2 00/14] Simple IPC Mechanism Junio C Hamano
  2021-02-13  0:09   ` [PATCH v3 00/12] " Jeff Hostetler via GitGitGadget
  15 siblings, 0 replies; 178+ messages in thread
From: Jeff Hostetler via GitGitGadget @ 2021-02-01 19:45 UTC (permalink / raw)
  To: git
  Cc: Ævar Arnfjörð Bjarmason, Jeff Hostetler,
	Jeff King, Chris Torek, Jeff Hostetler, Jeff Hostetler

From: Jeff Hostetler <jeffhost@microsoft.com>

Create Unix domain socket based implementation of "simple-ipc".

A set of `ipc_client` routines implement a client library to connect
to an `ipc_server` over a Unix domain socket, send a simple request,
and receive a single response.  Clients use blocking IO on the socket.

A set of `ipc_server` routines implement a thread pool to listen for
and concurrently service client connections.

The server creates a new Unix domain socket at a known location.  If a
socket already exists with that name, the server tries to determine if
another server is already listening on the socket or if the socket is
dead.  If the socket is busy, the server exits with an error rather than
stealing the socket.  If the socket is dead, the server creates a new
one and starts up.

If while running, the server detects that its socket has been stolen
by another server, it automatically exits.
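
(The "stolen socket" detection mentioned above amounts to remembering
the inode of the socket we created and re-checking it while idle.  A
rough sketch of the idea, not the literal code from this patch;
`st_original` would be captured right after the listening socket is
created:)

	static int socket_was_stolen(const char *path,
				     const struct stat *st_original)
	{
		struct stat st_now;

		if (lstat(path, &st_now) == -1)
			return 1;	/* path is gone: treat as stolen/dead */

		return st_now.st_ino != st_original->st_ino ||
		       st_now.st_dev != st_original->st_dev;
	}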

Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
---
 Makefile                            |    2 +
 compat/simple-ipc/ipc-unix-socket.c | 1127 +++++++++++++++++++++++++++
 contrib/buildsystems/CMakeLists.txt |    2 +
 simple-ipc.h                        |    7 +-
 4 files changed, 1137 insertions(+), 1 deletion(-)
 create mode 100644 compat/simple-ipc/ipc-unix-socket.c

diff --git a/Makefile b/Makefile
index e7ba8853ea6..f2524c02ff0 100644
--- a/Makefile
+++ b/Makefile
@@ -1681,6 +1681,8 @@ ifdef NO_UNIX_SOCKETS
 	BASIC_CFLAGS += -DNO_UNIX_SOCKETS
 else
 	LIB_OBJS += unix-socket.o
+	LIB_OBJS += compat/simple-ipc/ipc-shared.o
+	LIB_OBJS += compat/simple-ipc/ipc-unix-socket.o
 endif
 
 ifdef USE_WIN32_IPC
diff --git a/compat/simple-ipc/ipc-unix-socket.c b/compat/simple-ipc/ipc-unix-socket.c
new file mode 100644
index 00000000000..844906d1af5
--- /dev/null
+++ b/compat/simple-ipc/ipc-unix-socket.c
@@ -0,0 +1,1127 @@
+#include "cache.h"
+#include "simple-ipc.h"
+#include "strbuf.h"
+#include "pkt-line.h"
+#include "thread-utils.h"
+#include "unix-socket.h"
+
+#ifdef NO_UNIX_SOCKETS
+#error compat/simple-ipc/ipc-unix-socket.c requires Unix sockets
+#endif
+
+enum ipc_active_state ipc_get_active_state(const char *path)
+{
+	enum ipc_active_state state = IPC_STATE__OTHER_ERROR;
+	struct ipc_client_connect_options options
+		= IPC_CLIENT_CONNECT_OPTIONS_INIT;
+	struct stat st;
+	struct ipc_client_connection *connection_test = NULL;
+
+	options.wait_if_busy = 0;
+	options.wait_if_not_found = 0;
+
+	if (lstat(path, &st) == -1) {
+		switch (errno) {
+		case ENOENT:
+		case ENOTDIR:
+			return IPC_STATE__NOT_LISTENING;
+		default:
+			return IPC_STATE__INVALID_PATH;
+		}
+	}
+
+	/* also complain if a plain file is in the way */
+	if ((st.st_mode & S_IFMT) != S_IFSOCK)
+		return IPC_STATE__INVALID_PATH;
+
+	/*
+	 * Just because the filesystem has an S_IFSOCK type inode
+	 * at `path` doesn't mean that there is a server listening.
+	 * Ping it to be sure.
+	 */
+	state = ipc_client_try_connect(path, &options, &connection_test);
+	ipc_client_close_connection(connection_test);
+
+	return state;
+}
+
+/*
+ * This value was chosen at random.
+ */
+#define WAIT_STEP_MS (50)
+
+/*
+ * Try to connect to the server.  If the server is just starting up or
+ * is very busy, we may not get a connection the first time.
+ */
+static enum ipc_active_state connect_to_server(
+	const char *path,
+	int timeout_ms,
+	const struct ipc_client_connect_options *options,
+	int *pfd)
+{
+	int wait_ms = WAIT_STEP_MS;
+	int k;
+
+	*pfd = -1;
+
+	for (k = 0; k < timeout_ms; k += wait_ms) {
+		int fd = unix_stream_connect(path);
+
+		if (fd != -1) {
+			*pfd = fd;
+			return IPC_STATE__LISTENING;
+		}
+
+		if (errno == ENOENT) {
+			if (!options->wait_if_not_found)
+				return IPC_STATE__PATH_NOT_FOUND;
+
+			goto sleep_and_try_again;
+		}
+
+		if (errno == ETIMEDOUT) {
+			if (!options->wait_if_busy)
+				return IPC_STATE__NOT_LISTENING;
+
+			goto sleep_and_try_again;
+		}
+
+		if (errno == ECONNREFUSED) {
+			if (!options->wait_if_busy)
+				return IPC_STATE__NOT_LISTENING;
+
+			goto sleep_and_try_again;
+		}
+
+		return IPC_STATE__OTHER_ERROR;
+
+	sleep_and_try_again:
+		sleep_millisec(wait_ms);
+	}
+
+	return IPC_STATE__NOT_LISTENING;
+}
+
+/*
+ * A randomly chosen timeout value.
+ */
+#define MY_CONNECTION_TIMEOUT_MS (1000)
+
+enum ipc_active_state ipc_client_try_connect(
+	const char *path,
+	const struct ipc_client_connect_options *options,
+	struct ipc_client_connection **p_connection)
+{
+	enum ipc_active_state state = IPC_STATE__OTHER_ERROR;
+	int fd = -1;
+
+	*p_connection = NULL;
+
+	trace2_region_enter("ipc-client", "try-connect", NULL);
+	trace2_data_string("ipc-client", NULL, "try-connect/path", path);
+
+	state = connect_to_server(path, MY_CONNECTION_TIMEOUT_MS,
+				  options, &fd);
+
+	trace2_data_intmax("ipc-client", NULL, "try-connect/state",
+			   (intmax_t)state);
+	trace2_region_leave("ipc-client", "try-connect", NULL);
+
+	if (state == IPC_STATE__LISTENING) {
+		(*p_connection) = xcalloc(1, sizeof(struct ipc_client_connection));
+		(*p_connection)->fd = fd;
+	}
+
+	return state;
+}
+
+void ipc_client_close_connection(struct ipc_client_connection *connection)
+{
+	if (!connection)
+		return;
+
+	if (connection->fd != -1)
+		close(connection->fd);
+
+	free(connection);
+}
+
+int ipc_client_send_command_to_connection(
+	struct ipc_client_connection *connection,
+	const char *message, struct strbuf *answer)
+{
+	int ret = 0;
+
+	strbuf_setlen(answer, 0);
+
+	trace2_region_enter("ipc-client", "send-command", NULL);
+
+	if (write_packetized_from_buf2(message, strlen(message),
+				       connection->fd, 1,
+				       &connection->scratch_write_buffer) < 0) {
+		ret = error(_("could not send IPC command"));
+		goto done;
+	}
+
+	if (read_packetized_to_strbuf(connection->fd, answer,
+				      PACKET_READ_NEVER_DIE) < 0) {
+		ret = error(_("could not read IPC response"));
+		goto done;
+	}
+
+done:
+	trace2_region_leave("ipc-client", "send-command", NULL);
+	return ret;
+}
+
+int ipc_client_send_command(const char *path,
+			    const struct ipc_client_connect_options *options,
+			    const char *message, struct strbuf *answer)
+{
+	int ret = -1;
+	enum ipc_active_state state;
+	struct ipc_client_connection *connection = NULL;
+
+	state = ipc_client_try_connect(path, options, &connection);
+
+	if (state != IPC_STATE__LISTENING)
+		return ret;
+
+	ret = ipc_client_send_command_to_connection(connection, message, answer);
+
+	ipc_client_close_connection(connection);
+
+	return ret;
+}
+
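+/*
+ * Set or clear O_NONBLOCK on the given fd.
+ * Returns -1 (with errno set by fcntl()) on error.
+ */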
+static int set_socket_blocking_flag(int fd, int make_nonblocking)
+{
+	int flags;
+
+	flags = fcntl(fd, F_GETFL, NULL);
+
+	if (flags < 0)
+		return -1;
+
+	if (make_nonblocking)
+		flags |= O_NONBLOCK;
+	else
+		flags &= ~O_NONBLOCK;
+
+	return fcntl(fd, F_SETFL, flags);
+}
+
+/*
+ * Magic numbers used to annotate callback instance data.
+ * These are used to help guard against accidentally passing the
+ * wrong instance data across multiple levels of callbacks (which
+ * is easy to do if there are `void*` arguments).
+ */
+enum magic {
+	MAGIC_SERVER_REPLY_DATA,
+	MAGIC_WORKER_THREAD_DATA,
+	MAGIC_ACCEPT_THREAD_DATA,
+	MAGIC_SERVER_DATA,
+};
+
+struct ipc_server_reply_data {
+	enum magic magic;
+	int fd;
+	struct ipc_worker_thread_data *worker_thread_data;
+};
+
+struct ipc_worker_thread_data {
+	enum magic magic;
+	struct ipc_worker_thread_data *next_thread;
+	struct ipc_server_data *server_data;
+	pthread_t pthread_id;
+	struct packet_scratch_space scratch_write_buffer;
+};
+
+struct ipc_accept_thread_data {
+	enum magic magic;
+	struct ipc_server_data *server_data;
+
+	int fd_listen;
+	struct stat st_listen;
+
+	int fd_send_shutdown;
+	int fd_wait_shutdown;
+	pthread_t pthread_id;
+};
+
+/*
+ * With unix-sockets, the conceptual "ipc-server" is implemented as a single
+ * controller "accept-thread" thread and a pool of "worker-thread" threads.
+ * The former does the usual `accept()` loop and dispatches connections
+ * to an idle worker thread.  The worker threads wait in an idle loop for
+ * a new connection, communicate with the client and relay data to/from
+ * the `application_cb`, and then wait for another connection from the
+ * accept-thread.  This avoids the overhead of constantly creating and
+ * destroying threads.
+ */
+struct ipc_server_data {
+	enum magic magic;
+	ipc_server_application_cb *application_cb;
+	void *application_data;
+	struct strbuf buf_path;
+
+	struct ipc_accept_thread_data *accept_thread;
+	struct ipc_worker_thread_data *worker_thread_list;
+
+	pthread_mutex_t work_available_mutex;
+	pthread_cond_t work_available_cond;
+
+	/*
+	 * Accepted but not yet processed client connections are kept
+	 * in a circular buffer FIFO.  The queue is empty when the
+	 * positions are equal.
+	 */
+	int *fifo_fds;
+	int queue_size;
+	int back_pos;
+	int front_pos;
+
+	int shutdown_requested;
+	int is_stopped;
+};
+
+/*
+ * Remove and return the oldest queued connection.
+ *
+ * Returns -1 if empty.
+ */
+static int fifo_dequeue(struct ipc_server_data *server_data)
+{
+	/* ASSERT holding mutex */
+
+	int fd;
+
+	if (server_data->back_pos == server_data->front_pos)
+		return -1;
+
+	fd = server_data->fifo_fds[server_data->front_pos];
+	server_data->fifo_fds[server_data->front_pos] = -1;
+
+	server_data->front_pos++;
+	if (server_data->front_pos == server_data->queue_size)
+		server_data->front_pos = 0;
+
+	return fd;
+}
+
+/*
+ * Push a new fd onto the back of the queue.
+ *
+ * Drop it and return -1 if the queue is already full.
+ */
+static int fifo_enqueue(struct ipc_server_data *server_data, int fd)
+{
+	/* ASSERT holding mutex */
+
+	int next_back_pos;
+
+	next_back_pos = server_data->back_pos + 1;
+	if (next_back_pos == server_data->queue_size)
+		next_back_pos = 0;
+
+	if (next_back_pos == server_data->front_pos) {
+		/* Queue is full. Just drop it. */
+		close(fd);
+		return -1;
+	}
+
+	server_data->fifo_fds[server_data->back_pos] = fd;
+	server_data->back_pos = next_back_pos;
+
+	return fd;
+}
+
+/*
+ * Wait for a connection to be queued to the FIFO and return it.
+ *
+ * Returns -1 if someone has already requested a shutdown.
+ */
+static int worker_thread__wait_for_connection(
+	struct ipc_worker_thread_data *worker_thread_data)
+{
+	/* ASSERT NOT holding mutex */
+
+	struct ipc_server_data *server_data = worker_thread_data->server_data;
+	int fd = -1;
+
+	pthread_mutex_lock(&server_data->work_available_mutex);
+	for (;;) {
+		if (server_data->shutdown_requested)
+			break;
+
+		fd = fifo_dequeue(server_data);
+		if (fd >= 0)
+			break;
+
+		pthread_cond_wait(&server_data->work_available_cond,
+				  &server_data->work_available_mutex);
+	}
+	pthread_mutex_unlock(&server_data->work_available_mutex);
+
+	return fd;
+}
+
+/*
+ * Forward declare our reply callback function so that any compiler
+ * errors are reported when we actually define the function (in addition
+ * to any errors reported when we try to pass this callback function as
+ * a parameter in a function call).  The former are easier to understand.
+ */
+static ipc_server_reply_cb do_io_reply_callback;
+
+/*
+ * Relay application's response message to the client process.
+ * (We do not flush at this point because we allow the caller
+ * to chunk data to the client through us.)
+ */
+static int do_io_reply_callback(struct ipc_server_reply_data *reply_data,
+		       const char *response, size_t response_len)
+{
+	struct packet_scratch_space *scratch =
+		&reply_data->worker_thread_data->scratch_write_buffer;
+
+	if (reply_data->magic != MAGIC_SERVER_REPLY_DATA)
+		BUG("reply_cb called with wrong instance data");
+
+	return write_packetized_from_buf2(response, response_len,
+					  reply_data->fd, 0, scratch);
+}
+
+/* A randomly chosen value. */
+#define MY_WAIT_POLL_TIMEOUT_MS (10)
+
+/*
+ * If the client hangs up without sending any data on the wire, just
+ * quietly close the socket and ignore this client.
+ *
+ * This worker thread is committed to reading the IPC request data
+ * from the client at the other end of this fd.  Wait here for the
+ * client to actually put something on the wire -- because if the
+ * client just does a ping (connect and hangup without sending any
+ * data), our use of the pkt-line read routines will spew an error
+ * message.
+ *
+ * Return -1 if the client hung up.
+ * Return 0 if data (possibly incomplete) is ready.
+ */
+static int worker_thread__wait_for_io_start(
+	struct ipc_worker_thread_data *worker_thread_data,
+	int fd)
+{
+	struct ipc_server_data *server_data = worker_thread_data->server_data;
+	struct pollfd pollfd[1];
+	int result;
+
+	for (;;) {
+		pollfd[0].fd = fd;
+		pollfd[0].events = POLLIN;
+
+		result = poll(pollfd, 1, MY_WAIT_POLL_TIMEOUT_MS);
+		if (result < 0) {
+			if (errno == EINTR)
+				continue;
+			goto cleanup;
+		}
+
+		if (result == 0) {
+			/* a timeout */
+
+			int in_shutdown;
+
+			pthread_mutex_lock(&server_data->work_available_mutex);
+			in_shutdown = server_data->shutdown_requested;
+			pthread_mutex_unlock(&server_data->work_available_mutex);
+
+			/*
+			 * If a shutdown is already in progress and this
+			 * client has not started talking yet, just drop it.
+			 */
+			if (in_shutdown)
+				goto cleanup;
+			continue;
+		}
+
+		if (pollfd[0].revents & POLLHUP)
+			goto cleanup;
+
+		if (pollfd[0].revents & POLLIN)
+			return 0;
+
+		goto cleanup;
+	}
+
+cleanup:
+	close(fd);
+	return -1;
+}
+
+/*
+ * Receive the request/command from the client and pass it to the
+ * registered request-callback.  The request-callback will compose
+ * a response and call our reply-callback to send it to the client.
+ */
+static int worker_thread__do_io(
+	struct ipc_worker_thread_data *worker_thread_data,
+	int fd)
+{
+	/* ASSERT NOT holding lock */
+
+	struct strbuf buf = STRBUF_INIT;
+	struct ipc_server_reply_data reply_data;
+	int ret = 0;
+
+	reply_data.magic = MAGIC_SERVER_REPLY_DATA;
+	reply_data.worker_thread_data = worker_thread_data;
+
+	reply_data.fd = fd;
+
+	ret = read_packetized_to_strbuf(reply_data.fd, &buf,
+					PACKET_READ_NEVER_DIE);
+	if (ret >= 0) {
+		ret = worker_thread_data->server_data->application_cb(
+			worker_thread_data->server_data->application_data,
+			buf.buf, do_io_reply_callback, &reply_data);
+
+		packet_flush_gently(reply_data.fd);
+	}
+	else {
+		/*
+		 * The client probably disconnected/shutdown before it
+		 * could send a well-formed message.  Ignore it.
+		 */
+	}
+
+	strbuf_release(&buf);
+	close(reply_data.fd);
+
+	return ret;
+}
+
+/*
+ * Block SIGPIPE on the current thread (so that we get EPIPE from
+ * write() rather than an actual signal).
+ *
+ * Note that using sigchain_push() and _pop() to control SIGPIPE
+ * around our IO calls is not thread safe:
+ * [] It uses a global stack of handler frames.
+ * [] It uses ALLOC_GROW() to resize it.
+ * [] Finally, according to the `signal(2)` man-page:
+ *    "The effects of `signal()` in a multithreaded process are unspecified."
+ */
+static void thread_block_sigpipe(sigset_t *old_set)
+{
+	sigset_t new_set;
+
+	sigemptyset(&new_set);
+	sigaddset(&new_set, SIGPIPE);
+
+	sigemptyset(old_set);
+	pthread_sigmask(SIG_BLOCK, &new_set, old_set);
+}
+
+/*
+ * Thread proc for an IPC worker thread.  It handles a series of
+ * connections from clients.  It pulls the next fd from the queue,
+ * processes it, and then waits for the next client.
+ *
+ * Block SIGPIPE in this worker thread for the life of the thread.
+ * This avoids stray (and sometimes delayed) SIGPIPE signals caused
+ * by client errors and/or when we are under extremely heavy IO load.
+ *
+ * This means that the application callback will have SIGPIPE blocked.
+ * The callback should not change it.
+ */
+static void *worker_thread_proc(void *_worker_thread_data)
+{
+	struct ipc_worker_thread_data *worker_thread_data = _worker_thread_data;
+	struct ipc_server_data *server_data = worker_thread_data->server_data;
+	sigset_t old_set;
+	int fd, io;
+	int ret;
+
+	trace2_thread_start("ipc-worker");
+
+	thread_block_sigpipe(&old_set);
+
+	for (;;) {
+		fd = worker_thread__wait_for_connection(worker_thread_data);
+		if (fd == -1)
+			break; /* in shutdown */
+
+		io = worker_thread__wait_for_io_start(worker_thread_data, fd);
+		if (io == -1)
+			continue; /* client hung up without sending anything */
+
+		ret = worker_thread__do_io(worker_thread_data, fd);
+
+		if (ret == SIMPLE_IPC_QUIT) {
+			trace2_data_string("ipc-worker", NULL, "queue_stop_async",
+					   "application_quit");
+			/* The application told us to shutdown. */
+			ipc_server_stop_async(server_data);
+			break;
+		}
+	}
+
+	trace2_thread_exit();
+	return NULL;
+}
+
+/*
+ * Return 1 if someone deleted or stole the on-disk socket from us.
+ */
+static int socket_was_stolen(struct ipc_accept_thread_data *accept_thread_data)
+{
+	struct stat st;
+	struct stat *ref_st = &accept_thread_data->st_listen;
+
+	if (lstat(accept_thread_data->server_data->buf_path.buf, &st) == -1)
+		return 1;
+
+	if (st.st_ino != ref_st->st_ino)
+		return 1;
+
+	/* We might also consider the creation time on some platforms. */
+
+	return 0;
+}
+
+/* A randomly chosen value. */
+#define MY_ACCEPT_POLL_TIMEOUT_MS (60 * 1000)
+
+/*
+ * Accept a new client connection on our socket.  This uses non-blocking
+ * IO so that we can also wait for shutdown requests on our socket-pair
+ * without actually spinning on a fast timeout.
+ */
+static int accept_thread__wait_for_connection(
+	struct ipc_accept_thread_data *accept_thread_data)
+{
+	struct pollfd pollfd[2];
+	int result;
+
+	for (;;) {
+		pollfd[0].fd = accept_thread_data->fd_wait_shutdown;
+		pollfd[0].events = POLLIN;
+
+		pollfd[1].fd = accept_thread_data->fd_listen;
+		pollfd[1].events = POLLIN;
+
+		result = poll(pollfd, 2, MY_ACCEPT_POLL_TIMEOUT_MS);
+		if (result < 0) {
+			if (errno == EINTR)
+				continue;
+			return result;
+		}
+
+		if (result == 0) {
+			/* a timeout */
+
+			/*
+			 * If someone deletes or force-creates a new unix
+			 * domain socket at our path, all future clients
+			 * will be routed elsewhere and we silently starve.
+			 * If that happens, just queue a shutdown.
+			 */
+			if (socket_was_stolen(
+				    accept_thread_data)) {
+				trace2_data_string("ipc-accept", NULL,
+						   "queue_stop_async",
+						   "socket_stolen");
+				ipc_server_stop_async(
+					accept_thread_data->server_data);
+			}
+			continue;
+		}
+
+		if (pollfd[0].revents & POLLIN) {
+			/* shutdown message queued to socketpair */
+			return -1;
+		}
+
+		if (pollfd[1].revents & POLLIN) {
+			/* a connection is available on fd_listen */
+
+			int client_fd = accept(accept_thread_data->fd_listen,
+					       NULL, NULL);
+			if (client_fd >= 0)
+				return client_fd;
+
+			/*
+			 * An error here is unlikely -- it probably
+			 * indicates that the connecting process has
+			 * already dropped the connection.
+			 */
+			continue;
+		}
+
+		BUG("unandled poll result errno=%d r[0]=%d r[1]=%d",
+		    errno, pollfd[0].revents, pollfd[1].revents);
+	}
+}
+
+/*
+ * Thread proc for the IPC server "accept thread".  This waits for
+ * an incoming socket connection, appends it to the queue of available
+ * connections, and notifies a worker thread to process it.
+ *
+ * Block SIGPIPE in this thread for the life of the thread.  This
+ * avoids any stray SIGPIPE signals when closing pipe fds under
+ * extremely heavy loads (such as when the fifo queue is full and we
+ * drop incoming connections).
+ */
+static void *accept_thread_proc(void *_accept_thread_data)
+{
+	struct ipc_accept_thread_data *accept_thread_data = _accept_thread_data;
+	struct ipc_server_data *server_data = accept_thread_data->server_data;
+	sigset_t old_set;
+
+	trace2_thread_start("ipc-accept");
+
+	thread_block_sigpipe(&old_set);
+
+	for (;;) {
+		int client_fd = accept_thread__wait_for_connection(
+			accept_thread_data);
+
+		pthread_mutex_lock(&server_data->work_available_mutex);
+		if (server_data->shutdown_requested) {
+			pthread_mutex_unlock(&server_data->work_available_mutex);
+			if (client_fd >= 0)
+				close(client_fd);
+			break;
+		}
+
+		if (client_fd < 0) {
+			/* ignore transient accept() errors */
+		}
+		else {
+			fifo_enqueue(server_data, client_fd);
+			pthread_cond_broadcast(&server_data->work_available_cond);
+		}
+		pthread_mutex_unlock(&server_data->work_available_mutex);
+	}
+
+	trace2_thread_exit();
+	return NULL;
+}
+
+/*
+ * We can't predict the connection arrival rate relative to the worker
+ * processing rate, so we allow the "accept-thread" to queue up a
+ * generous number of connections; we'd rather not have the client
+ * time out unnecessarily if we can avoid it.  (The assumption is
+ * that this will be used for FSMonitor, and a few seconds of waiting
+ * for a connection is better than having the client time out and do
+ * the full computation itself.)
+ *
+ * The FIFO queue size is set to a multiple of the worker pool size.
+ * This value was chosen at random.
+ */
+#define FIFO_SCALE (100)
+
+/*
+ * The backlog value for `listen(2)`.  This doesn't need to be huge;
+ * just large enough for our "accept-thread" to wake up and
+ * queue incoming connections onto the FIFO without the kernel
+ * dropping any.
+ *
+ * This value was chosen at random.
+ */
+#define LISTEN_BACKLOG (50)
+
+/*
+ * Create a unix domain socket at the given path to listen for
+ * client connections.  The resulting socket will then appear
+ * in the filesystem as an inode with S_IFSOCK.  The inode is
+ * itself created as part of the `bind(2)` operation.
+ *
+ * The term "socket" is ambiguous in this context.  We want to open a
+ * "socket-fd" that is bound to a "socket-inode" (path) on disk.  We
+ * listen on "socket-fd" for new connections and clients try to
+ * open/connect using the "socket-inode" pathname.
+ *
+ * Unix domain sockets have a fundamental design flaw because the
+ * "socket-inode" persists until the pathname is deleted; closing the
+ * listening "socket-fd" only closes the socket handle/descriptor, it
+ * does not delete the inode/pathname.
+ *
+ * Well-behaving service daemons are expected to also delete the inode
+ * before shutdown.  If a service crashes (or forgets) it can leave
+ * the (now stale) inode in the filesystem.  This behaves like a stale
+ * ".lock" file and may prevent future service instances from starting
+ * up correctly, because they won't be able to bind.
+ *
+ * When future service instances try to create the listener socket,
+ * `bind(2)` will fail with EADDRINUSE -- because the inode already
+ * exists.  However, the new instance cannot tell if it is a stale
+ * inode *or* another service instance is already running.
+ *
+ * One possible solution is to blindly unlink the inode before
+ * attempting to bind a new socket-fd and thus create a new
+ * socket-inode.  Then `bind(2)` should always succeed.  However, if
+ * there is an existing service instance, it would be orphaned -- it
+ * would still be listening on a socket-fd that is still bound to an
+ * (unlinked) socket-inode, but that socket-inode is no longer
+ * associated with the pathname.  New client connections will arrive
+ * at OUR new socket-inode -- rather than the existing server's
+ * socket.  (I suppose it is up to the existing server to detect that
+ * its socket-inode has been stolen and shut down.)
+ *
+ * Another possible solution is to try the ".lock" trick, but
+ * bind() does not have an exclusive-create bit like open() does,
+ * so when multiple servers race to create the same filename, the
+ * losers cannot even tell that they lost.
+ *
+ * We try to avoid such stealing and would rather fail to run than
+ * steal an existing socket-inode (because we assume that the
+ * existing server has more context and value to the clients than a
+ * freshly started server).  However, if multiple servers are racing
+ * to start, we don't care which one wins -- none of them have any
+ * state information yet worth fighting for.
+ *
+ * Create a "unique" socket-inode (with our PID in it (and assume that
+ * we can force-delete an existing socket with that name)).  Stat it
+ * to get the inode number and ctime -- so that we can identify it as
+ * the one we created.  Then use the atomic-rename trick to install it
+ * in the real location.  (This will unlink an existing socket with
+ * that pathname -- and thereby steal the real socket-inode from an
+ * existing server.)
+ *
+ * Elsewhere, our thread will periodically poll the socket-inode to
+ * see if someone else steals ours.
+ */
+static int create_listener_socket(const char *path,
+				  const struct ipc_server_opts *ipc_opts,
+				  struct stat *st_socket)
+{
+	struct stat st;
+	struct strbuf buf_uniq = STRBUF_INIT;
+	int fd_listen;
+	struct unix_stream_listen_opts uslg_opts = UNIX_STREAM_LISTEN_OPTS_INIT;
+
+	if (!lstat(path, &st) && S_ISSOCK(st.st_mode)) {
+		int fd_client;
+		/*
+		 * A socket-inode at `path` exists on disk, but we
+		 * don't know whether it belongs to an active server
+		 * or if the last server died without cleaning up.
+		 *
+		 * Poke it with a trivial connection to try to find out.
+		 */
+		trace2_data_string("ipc-server", NULL, "try-detect-server",
+				   path);
+		fd_client = unix_stream_connect(path);
+		if (fd_client >= 0) {
+			close(fd_client);
+			errno = EADDRINUSE;
+			return error_errno(_("socket already in use '%s'"),
+					   path);
+		}
+	}
+
+	/*
+	 * Create pathname to our "unique" socket and set it up for
+	 * business.
+	 */
+	strbuf_addf(&buf_uniq, "%s.%d", path, getpid());
+
+	uslg_opts.listen_backlog_size = LISTEN_BACKLOG;
+	uslg_opts.force_unlink_before_bind = 1;
+	uslg_opts.disallow_chdir = ipc_opts->uds_disallow_chdir;
+	fd_listen = unix_stream_listen(buf_uniq.buf, &uslg_opts);
+	if (fd_listen < 0) {
+		int saved_errno = errno;
+		error_errno(_("could not create listener socket '%s'"),
+			    buf_uniq.buf);
+		strbuf_release(&buf_uniq);
+		errno = saved_errno;
+		return -1;
+	}
+
+	if (lstat(buf_uniq.buf, st_socket)) {
+		int saved_errno = errno;
+		error_errno(_("could not stat listener socket '%s'"),
+			    buf_uniq.buf);
+		close(fd_listen);
+		unlink(buf_uniq.buf);
+		strbuf_release(&buf_uniq);
+		errno = saved_errno;
+		return -1;
+	}
+
+	if (set_socket_blocking_flag(fd_listen, 1)) {
+		int saved_errno = errno;
+		error_errno(_("could not set listener socket nonblocking '%s'"),
+			    buf_uniq.buf);
+		close(fd_listen);
+		unlink(buf_uniq.buf);
+		strbuf_release(&buf_uniq);
+		errno = saved_errno;
+		return -1;
+	}
+
+	/*
+	 * Install it as the "real" socket so that clients will start
+	 * connecting to our socket.
+	 */
+	if (rename(buf_uniq.buf, path)) {
+		int saved_errno = errno;
+		error_errno(_("could not create listener socket '%s'"), path);
+		close(fd_listen);
+		unlink(buf_uniq.buf);
+		strbuf_release(&buf_uniq);
+		errno = saved_errno;
+		return -1;
+	}
+
+	strbuf_release(&buf_uniq);
+	trace2_data_string("ipc-server", NULL, "try-listen", path);
+	return fd_listen;
+}
+
+static int setup_listener_socket(const char *path, struct stat *st_socket,
+				 const struct ipc_server_opts *ipc_opts)
+{
+	int fd_listen;
+
+	trace2_region_enter("ipc-server", "create-listener_socket", NULL);
+	fd_listen = create_listener_socket(path, ipc_opts, st_socket);
+	trace2_region_leave("ipc-server", "create-listener_socket", NULL);
+
+	return fd_listen;
+}
+
+/*
+ * Start IPC server in a pool of background threads.
+ */
+int ipc_server_run_async(struct ipc_server_data **returned_server_data,
+			 const char *path, const struct ipc_server_opts *opts,
+			 ipc_server_application_cb *application_cb,
+			 void *application_data)
+{
+	struct ipc_server_data *server_data;
+	int fd_listen;
+	struct stat st_listen;
+	int sv[2];
+	int k;
+	int nr_threads = opts->nr_threads;
+
+	*returned_server_data = NULL;
+
+	/*
+	 * Create a socketpair and set sv[1] to non-blocking.  This
+	 * will be used to send a shutdown message to the accept-thread
+	 * and allows the accept-thread to wait on EITHER a client
+	 * connection or a shutdown request without spinning.
+	 */
+	if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0)
+		return error_errno(_("could not create socketpair for '%s'"),
+				   path);
+
+	if (set_socket_blocking_flag(sv[1], 1)) {
+		int saved_errno = errno;
+		close(sv[0]);
+		close(sv[1]);
+		errno = saved_errno;
+		return error_errno(_("making socketpair nonblocking '%s'"),
+				   path);
+	}
+
+	fd_listen = setup_listener_socket(path, &st_listen, opts);
+	if (fd_listen < 0) {
+		int saved_errno = errno;
+		close(sv[0]);
+		close(sv[1]);
+		errno = saved_errno;
+		return -1;
+	}
+
+	server_data = xcalloc(1, sizeof(*server_data));
+	server_data->magic = MAGIC_SERVER_DATA;
+	server_data->application_cb = application_cb;
+	server_data->application_data = application_data;
+	strbuf_init(&server_data->buf_path, 0);
+	strbuf_addstr(&server_data->buf_path, path);
+
+	if (nr_threads < 1)
+		nr_threads = 1;
+
+	pthread_mutex_init(&server_data->work_available_mutex, NULL);
+	pthread_cond_init(&server_data->work_available_cond, NULL);
+
+	server_data->queue_size = nr_threads * FIFO_SCALE;
+	server_data->fifo_fds = xcalloc(server_data->queue_size,
+					sizeof(*server_data->fifo_fds));
+
+	server_data->accept_thread =
+		xcalloc(1, sizeof(*server_data->accept_thread));
+	server_data->accept_thread->magic = MAGIC_ACCEPT_THREAD_DATA;
+	server_data->accept_thread->server_data = server_data;
+	server_data->accept_thread->fd_listen = fd_listen;
+	server_data->accept_thread->st_listen = st_listen;
+	server_data->accept_thread->fd_send_shutdown = sv[0];
+	server_data->accept_thread->fd_wait_shutdown = sv[1];
+
+	if (pthread_create(&server_data->accept_thread->pthread_id, NULL,
+			   accept_thread_proc, server_data->accept_thread))
+		die_errno(_("could not start accept_thread '%s'"), path);
+
+	for (k = 0; k < nr_threads; k++) {
+		struct ipc_worker_thread_data *wtd;
+
+		wtd = xcalloc(1, sizeof(*wtd));
+		wtd->magic = MAGIC_WORKER_THREAD_DATA;
+		wtd->server_data = server_data;
+
+		if (pthread_create(&wtd->pthread_id, NULL, worker_thread_proc,
+				   wtd)) {
+			if (k == 0)
+				die(_("could not start worker[0] for '%s'"),
+				    path);
+			/*
+			 * Limp along with the thread pool that we have.
+			 */
+			break;
+		}
+
+		wtd->next_thread = server_data->worker_thread_list;
+		server_data->worker_thread_list = wtd;
+	}
+
+	*returned_server_data = server_data;
+	return 0;
+}
+
+/*
+ * Gently tell the IPC server threads to shut down.
+ * Can be run on any thread.
+ */
+int ipc_server_stop_async(struct ipc_server_data *server_data)
+{
+	/* ASSERT NOT holding mutex */
+
+	int fd;
+
+	if (!server_data)
+		return 0;
+
+	trace2_region_enter("ipc-server", "server-stop-async", NULL);
+
+	pthread_mutex_lock(&server_data->work_available_mutex);
+
+	server_data->shutdown_requested = 1;
+
+	/*
+	 * Write a byte to the shutdown socket pair to wake up the
+	 * accept-thread.
+	 */
+	if (write(server_data->accept_thread->fd_send_shutdown, "Q", 1) < 0)
+		error_errno("could not write to fd_send_shutdown");
+
+	/*
+	 * Drain the queue of existing connections.
+	 */
+	while ((fd = fifo_dequeue(server_data)) != -1)
+		close(fd);
+
+	/*
+	 * Gently tell worker threads to stop processing new connections
+	 * and exit.  (This does not abort in-progress conversations.)
+	 */
+	pthread_cond_broadcast(&server_data->work_available_cond);
+
+	pthread_mutex_unlock(&server_data->work_available_mutex);
+
+	trace2_region_leave("ipc-server", "server-stop-async", NULL);
+
+	return 0;
+}
+
+/*
+ * Wait for all IPC server threads to stop.
+ */
+int ipc_server_await(struct ipc_server_data *server_data)
+{
+	pthread_join(server_data->accept_thread->pthread_id, NULL);
+
+	if (!server_data->shutdown_requested)
+		BUG("ipc-server: accept-thread stopped for '%s'",
+		    server_data->buf_path.buf);
+
+	while (server_data->worker_thread_list) {
+		struct ipc_worker_thread_data *wtd =
+			server_data->worker_thread_list;
+
+		pthread_join(wtd->pthread_id, NULL);
+
+		server_data->worker_thread_list = wtd->next_thread;
+		free(wtd);
+	}
+
+	server_data->is_stopped = 1;
+
+	return 0;
+}
+
+void ipc_server_free(struct ipc_server_data *server_data)
+{
+	struct ipc_accept_thread_data *accept_thread_data;
+
+	if (!server_data)
+		return;
+
+	if (!server_data->is_stopped)
+		BUG("cannot free ipc-server while running for '%s'",
+		    server_data->buf_path.buf);
+
+	accept_thread_data = server_data->accept_thread;
+	if (accept_thread_data) {
+		if (accept_thread_data->fd_listen != -1) {
+			/*
+			 * Only unlink the unix domain socket if we
+			 * created it.  That is, if another daemon
+			 * process force-created a new socket at this
+			 * path, and effectively steals our path
+			 * (which prevents us from receiving any
+			 * future clients), we don't want to do the
+			 * same thing to them.
+			 */
+			if (!socket_was_stolen(
+				    accept_thread_data))
+				unlink(server_data->buf_path.buf);
+
+			close(accept_thread_data->fd_listen);
+		}
+		if (accept_thread_data->fd_send_shutdown != -1)
+			close(accept_thread_data->fd_send_shutdown);
+		if (accept_thread_data->fd_wait_shutdown != -1)
+			close(accept_thread_data->fd_wait_shutdown);
+
+		free(server_data->accept_thread);
+	}
+
+	while (server_data->worker_thread_list) {
+		struct ipc_worker_thread_data *wtd =
+			server_data->worker_thread_list;
+
+		server_data->worker_thread_list = wtd->next_thread;
+		free(wtd);
+	}
+
+	pthread_cond_destroy(&server_data->work_available_cond);
+	pthread_mutex_destroy(&server_data->work_available_mutex);
+
+	strbuf_release(&server_data->buf_path);
+
+	free(server_data->fifo_fds);
+	free(server_data);
+}
diff --git a/contrib/buildsystems/CMakeLists.txt b/contrib/buildsystems/CMakeLists.txt
index 4bd41054ee7..4c27a373414 100644
--- a/contrib/buildsystems/CMakeLists.txt
+++ b/contrib/buildsystems/CMakeLists.txt
@@ -248,6 +248,8 @@ endif()
 
 if(CMAKE_SYSTEM_NAME STREQUAL "Windows")
 	list(APPEND compat_SOURCES compat/simple-ipc/ipc-shared.c compat/simple-ipc/ipc-win32.c)
+else()
+	list(APPEND compat_SOURCES compat/simple-ipc/ipc-shared.c compat/simple-ipc/ipc-unix-socket.c)
 endif()
 
 set(EXE_EXTENSION ${CMAKE_EXECUTABLE_SUFFIX})
diff --git a/simple-ipc.h b/simple-ipc.h
index eb19b5da8b1..17b28bc1f83 100644
--- a/simple-ipc.h
+++ b/simple-ipc.h
@@ -5,7 +5,7 @@
  * See Documentation/technical/api-simple-ipc.txt
  */
 
-#if defined(GIT_WINDOWS_NATIVE)
+#if defined(GIT_WINDOWS_NATIVE) || !defined(NO_UNIX_SOCKETS)
 #define SUPPORTS_SIMPLE_IPC
 #endif
 
@@ -160,6 +160,11 @@ struct ipc_server_data;
 struct ipc_server_opts
 {
 	int nr_threads;
+
+	/*
+	 * Disallow chdir() when creating a Unix domain socket.
+	 */
+	unsigned int uds_disallow_chdir:1;
 };
 
 /*
-- 
gitgitgadget

^ permalink raw reply related	[flat|nested] 178+ messages in thread

* Re: [PATCH v2 00/14] Simple IPC Mechanism
  2021-02-01 19:45 ` [PATCH v2 00/14] " Jeff Hostetler via GitGitGadget
                     ` (13 preceding siblings ...)
  2021-02-01 19:45   ` [PATCH v2 14/14] simple-ipc: add Unix domain socket implementation Jeff Hostetler via GitGitGadget
@ 2021-02-01 22:20   ` Junio C Hamano
  2021-02-01 23:26     ` Jeff Hostetler
  2021-02-13  0:09   ` [PATCH v3 00/12] " Jeff Hostetler via GitGitGadget
  15 siblings, 1 reply; 178+ messages in thread
From: Junio C Hamano @ 2021-02-01 22:20 UTC (permalink / raw)
  To: Jeff Hostetler via GitGitGadget
  Cc: git, Ævar Arnfjörð Bjarmason, Jeff Hostetler,
	Jeff King, Chris Torek, Jeff Hostetler

"Jeff Hostetler via GitGitGadget" <gitgitgadget@gmail.com> writes:

> Here is version 2 of my "Simple IPC" series and addresses the following
> review comments:
> ...
> Junio C Hamano (1):
>   ci/install-depends: attempt to fix "brew cask" stuff

Huh?

^ permalink raw reply	[flat|nested] 178+ messages in thread

* Re: [PATCH v2 00/14] Simple IPC Mechanism
  2021-02-01 22:20   ` [PATCH v2 00/14] Simple IPC Mechanism Junio C Hamano
@ 2021-02-01 23:26     ` Jeff Hostetler
  2021-02-02 23:07       ` Johannes Schindelin
  0 siblings, 1 reply; 178+ messages in thread
From: Jeff Hostetler @ 2021-02-01 23:26 UTC (permalink / raw)
  To: Junio C Hamano, Jeff Hostetler via GitGitGadget
  Cc: git, Ævar Arnfjörð Bjarmason, Jeff King,
	Chris Torek, Jeff Hostetler



On 2/1/21 5:20 PM, Junio C Hamano wrote:
> "Jeff Hostetler via GitGitGadget" <gitgitgadget@gmail.com> writes:
> 
>> Here is version 2 of my "Simple IPC" series and addresses the following
>> review comments:
>> ...
>> Junio C Hamano (1):
>>    ci/install-depends: attempt to fix "brew cask" stuff
> 
> Huh?
> 

Sorry.  I had to prepend that one to the patch series to get the
CI builds to run.  I've been working rebased against "v2.30.0" and
GitGitGadget references "master".

Jeff

^ permalink raw reply	[flat|nested] 178+ messages in thread

* Re: [PATCH v2 02/14] pkt-line: promote static buffer in packet_write_gently() to callers
  2021-02-01 19:45   ` [PATCH v2 02/14] pkt-line: promote static buffer in packet_write_gently() to callers Jeff Hostetler via GitGitGadget
@ 2021-02-02  9:41     ` Jeff King
  2021-02-02 20:33       ` Jeff Hostetler
  2021-02-02 22:54       ` Johannes Schindelin
  0 siblings, 2 replies; 178+ messages in thread
From: Jeff King @ 2021-02-02  9:41 UTC (permalink / raw)
  To: Jeff Hostetler via GitGitGadget
  Cc: git, Ævar Arnfjörð Bjarmason, Jeff Hostetler,
	Chris Torek, Jeff Hostetler

On Mon, Feb 01, 2021 at 07:45:35PM +0000, Jeff Hostetler via GitGitGadget wrote:

> -static int packet_write_gently(const int fd_out, const char *buf, size_t size)
> +/*
> + * Use the provided scratch space to build a combined <hdr><buf> buffer
> + * and write it to the file descriptor (in one write if possible).
> + */
> +static int packet_write_gently(const int fd_out, const char *buf, size_t size,
> +			       struct packet_scratch_space *scratch)

Thanks for addressing my stack space concern.

This solution does work (and I like wrapping it in a struct like this),
though I have to wonder if we're not just punting on the thread issues
in an ever-so-slight way with things like this:

>  void packet_write(int fd_out, const char *buf, size_t size)
>  {
> -	if (packet_write_gently(fd_out, buf, size))
> +	static struct packet_scratch_space scratch;
> +
> +	if (packet_write_gently(fd_out, buf, size, &scratch))
>  		die_errno(_("packet write failed"));
>  }

Where we just moved it one step up the call stack.

>  int write_packetized_from_fd(int fd_in, int fd_out)
>  {
> +	/*
> +	 * TODO We could save a memcpy() if we essentially inline
> +	 * TODO packet_write_gently() here and change the xread()
> +	 * TODO to pass &buf[4].
> +	 */

And comments like this make me wonder if the current crop of pktline
functions are just mis-designed in the first place. There are two
obvious directions here.

One, we can observe that the only reason we need the scratch space is to
ship out the whole thing in a single write():

> [in packet_write_gently]
> -	set_packet_header(packet_write_buffer, packet_size);
> -	memcpy(packet_write_buffer + 4, buf, size);
> -	if (write_in_full(fd_out, packet_write_buffer, packet_size) < 0)
> +
> +	set_packet_header(scratch->buffer, packet_size);
> +	memcpy(scratch->buffer + 4, buf, size);
> +
> +	if (write_in_full(fd_out, scratch->buffer, packet_size) < 0)
>  		return error(_("packet write failed"));

Would it really be so bad to do:

  char header[4];
  set_packet_header(header, packet_size);
  if (write_in_full(fd_out, header, 4) < 0 ||
      write_in_full(fd_out, buf, size) < 0)
          return error(...);

I doubt that two syscalls is breaking the bank here, but if people are
really concerned, using writev() would be a much better solution.
Obviously we can't rely on it being available everywhere, but it's quite
easy to emulate with a wrapper (and I'd be happy to punt on any writev
stuff until somebody actually measures a difference).
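
For illustration only, a fallback wrapper might look roughly like the
sketch below (untested; the HAVE_WRITEV guard is made up rather than an
existing Makefile knob, it would need <sys/uio.h>, and a real version
would have to retry short writes from writev()):

  static int write_packet_vec(int fd_out, const char *header, size_t header_len,
                              const char *buf, size_t size)
  {
  #ifdef HAVE_WRITEV
          struct iovec iov[2];
          iov[0].iov_base = (void *)header;
          iov[0].iov_len = header_len;
          iov[1].iov_base = (void *)buf;
          iov[1].iov_len = size;
          /* NEEDSWORK: loop on short writes instead of giving up */
          if (writev(fd_out, iov, 2) != (ssize_t)(header_len + size))
                  return error(_("packet write failed"));
          return 0;
  #else
          /* portable fallback: two write(2)s instead of one */
          if (write_in_full(fd_out, header, header_len) < 0 ||
              write_in_full(fd_out, buf, size) < 0)
                  return error(_("packet write failed"));
          return 0;
  #endif
  }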

The other direction is that callers could be using a correctly-sized
buffer in the first place. I.e., something like:

  struct packet_buffer {
          char full_packet[LARGE_PACKET_MAX];
  };
  static inline char *packet_data(struct packet_buffer *pb)
  {
	return pb->full_packet + 4;
  }

That lets people work with the oversized buffer in a natural-ish way
that would be hard to get wrong, like:

  memcpy(packet_data(pb), some_other_buf, len);

(though if we wanted to go even further, we could provide accessors that
actually do the writing and sanity-check the lengths; the downside is
that I'm not sure how callers typically get the bytes into these bufs in
the first place).
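
For what it's worth, a copying setter might be all the accessor we need.
A rough sketch (the function name is invented, and it assumes it sits
next to set_packet_header() in pkt-line.c):

  static inline int packet_buffer_set(struct packet_buffer *pb,
                                      const void *data, size_t len)
  {
          if (len > LARGE_PACKET_DATA_MAX)
                  return -1;
          set_packet_header(pb->full_packet, len + 4);
          memcpy(packet_data(pb), data, len);
          return 0;
  }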

That's a much bigger change, of course, and I'd guess you'd much prefer
to focus on the actual point of your series. ;)

-Peff

^ permalink raw reply	[flat|nested] 178+ messages in thread

* Re: [PATCH v2 03/14] pkt-line: add write_packetized_from_buf2() that takes scratch buffer
  2021-02-01 19:45   ` [PATCH v2 03/14] pkt-line: add write_packetized_from_buf2() that takes scratch buffer Jeff Hostetler via GitGitGadget
@ 2021-02-02  9:44     ` Jeff King
  0 siblings, 0 replies; 178+ messages in thread
From: Jeff King @ 2021-02-02  9:44 UTC (permalink / raw)
  To: Jeff Hostetler via GitGitGadget
  Cc: git, Ævar Arnfjörð Bjarmason, Jeff Hostetler,
	Chris Torek, Jeff Hostetler

On Mon, Feb 01, 2021 at 07:45:36PM +0000, Jeff Hostetler via GitGitGadget wrote:

> From: Jeff Hostetler <jeffhost@microsoft.com>
> 
> Create version of `write_packetized_from_buf()` that takes a scratch buffer
> argument rather than assuming a static buffer.  This will be used later as
> we make packet-line writing more thread-safe.

OK, this is extending the changes from the first patch...

>  int write_packetized_from_buf(const char *src_in, size_t len, int fd_out)
>  {
>  	static struct packet_scratch_space scratch;
> +
> +	return write_packetized_from_buf2(src_in, len, fd_out, &scratch);
> +}
> +
> +int write_packetized_from_buf2(const char *src_in, size_t len, int fd_out,
> +			       struct packet_scratch_space *scratch)

Oof, that name. I know we are guilty of a lot of "foo_1()" helpers for
foo(), but they are usually internal static functions that don't get
spread around. This one is a public function.

Something like "_with_scratch" might be a bit more descriptive. Though
given that there is exactly one caller of the original currently, I'd be
tempted to say that it should just learn the scratch-space argument.

(All of this is moot, of course, if you follow either of my suggestions
from the earlier patch to drop the need for this scratch space
entirely).

-Peff

^ permalink raw reply	[flat|nested] 178+ messages in thread

* Re: [PATCH v2 04/14] pkt-line: optionally skip the flush packet in write_packetized_from_buf()
  2021-02-01 19:45   ` [PATCH v2 04/14] pkt-line: optionally skip the flush packet in write_packetized_from_buf() Johannes Schindelin via GitGitGadget
@ 2021-02-02  9:48     ` Jeff King
  2021-02-02 22:56       ` Johannes Schindelin
  2021-02-05 18:30       ` Jeff Hostetler
  0 siblings, 2 replies; 178+ messages in thread
From: Jeff King @ 2021-02-02  9:48 UTC (permalink / raw)
  To: Johannes Schindelin via GitGitGadget
  Cc: git, Ævar Arnfjörð Bjarmason, Jeff Hostetler,
	Chris Torek, Jeff Hostetler, Johannes Schindelin

On Mon, Feb 01, 2021 at 07:45:37PM +0000, Johannes Schindelin via GitGitGadget wrote:

> From: Johannes Schindelin <johannes.schindelin@gmx.de>
> 
> This function currently has only one caller: `apply_multi_file_filter()`
> in `convert.c`. That caller wants a flush packet to be written after
> writing the payload.
> 
> However, we are about to introduce a user that wants to write many
> packets before a final flush packet, so let's extend this function to
> prepare for that scenario.

I think this is a sign that the function is not very well-designed in
the first place. It seems like the code would be easier to understand
overall if that caller just explicitly did the flush itself. It even
already does so in other cases!

Something like (untested):

 convert.c  | 4 ++++
 pkt-line.c | 4 ----
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/convert.c b/convert.c
index ee360c2f07..3968ac37b9 100644
--- a/convert.c
+++ b/convert.c
@@ -890,6 +890,10 @@ static int apply_multi_file_filter(const char *path, const char *src, size_t len
 	if (err)
 		goto done;
 
+	err = packet_flush_gently(process->in);
+	if (err)
+		goto done;
+
 	err = subprocess_read_status(process->out, &filter_status);
 	if (err)
 		goto done;
diff --git a/pkt-line.c b/pkt-line.c
index d633005ef7..014520a9c2 100644
--- a/pkt-line.c
+++ b/pkt-line.c
@@ -256,8 +256,6 @@ int write_packetized_from_fd(int fd_in, int fd_out)
 			break;
 		err = packet_write_gently(fd_out, buf, bytes_to_write);
 	}
-	if (!err)
-		err = packet_flush_gently(fd_out);
 	return err;
 }
 
@@ -277,8 +275,6 @@ int write_packetized_from_buf(const char *src_in, size_t len, int fd_out)
 		err = packet_write_gently(fd_out, src_in + bytes_written, bytes_to_write);
 		bytes_written += bytes_to_write;
 	}
-	if (!err)
-		err = packet_flush_gently(fd_out);
 	return err;
 }
 

-Peff

^ permalink raw reply related	[flat|nested] 178+ messages in thread

* Re: [PATCH v2 10/14] unix-socket: elimiate static unix_stream_socket() helper function
  2021-02-01 19:45   ` [PATCH v2 10/14] unix-socket: elimiate static unix_stream_socket() helper function Jeff Hostetler via GitGitGadget
@ 2021-02-02  9:54     ` Jeff King
  2021-02-02  9:58     ` Jeff King
  1 sibling, 0 replies; 178+ messages in thread
From: Jeff King @ 2021-02-02  9:54 UTC (permalink / raw)
  To: Jeff Hostetler via GitGitGadget
  Cc: git, Ævar Arnfjörð Bjarmason, Jeff Hostetler,
	Chris Torek, Jeff Hostetler

On Mon, Feb 01, 2021 at 07:45:43PM +0000, Jeff Hostetler via GitGitGadget wrote:

> From: Jeff Hostetler <jeffhost@microsoft.com>
> 
> The static helper function `unix_stream_socket()` calls `die()`.  This is not
> appropriate for all callers.  Eliminate the wrapper function and move the
> existing error handling to the callers in preparation for adapting specific
> callers.

Thanks, this looks good.

> -static int unix_stream_socket(void)
> -{
> -	int fd = socket(AF_UNIX, SOCK_STREAM, 0);
> -	if (fd < 0)
> -		die_errno("unable to create socket");
> -	return fd;
> -}

This could become a one-liner:

  return socket(AF_UNIX, SOCK_STREAM, 0);

to keep the details abstracted. But it's local to this file, the callers
are already necessarily full of bsd-socket arcana, and it's not like the
magic words there have ever changed in 30+ years. Putting it inline
seems quite reasonable. :)

-Peff

^ permalink raw reply	[flat|nested] 178+ messages in thread

* Re: [PATCH v2 10/14] unix-socket: elimiate static unix_stream_socket() helper function
  2021-02-01 19:45   ` [PATCH v2 10/14] unix-socket: elimiate static unix_stream_socket() helper function Jeff Hostetler via GitGitGadget
  2021-02-02  9:54     ` Jeff King
@ 2021-02-02  9:58     ` Jeff King
  1 sibling, 0 replies; 178+ messages in thread
From: Jeff King @ 2021-02-02  9:58 UTC (permalink / raw)
  To: Jeff Hostetler via GitGitGadget
  Cc: git, Ævar Arnfjörð Bjarmason, Jeff Hostetler,
	Chris Torek, Jeff Hostetler

On Mon, Feb 01, 2021 at 07:45:43PM +0000, Jeff Hostetler via GitGitGadget wrote:

>  static int chdir_len(const char *orig, int len)
>  {
>  	char *path = xmemdupz(orig, len);
> @@ -79,7 +71,10 @@ int unix_stream_connect(const char *path)
>  
>  	if (unix_sockaddr_init(&sa, path, &ctx) < 0)
>  		return -1;
> -	fd = unix_stream_socket();
> +	fd = socket(AF_UNIX, SOCK_STREAM, 0);
> +	if (fd < 0)
> +		die_errno("unable to create socket");
> +

Reading the next patch, I suddenly realized that these are die calls,
and not just passing along the error (which you then fix in the next
patch). It seems like that should be happening here in this patch.
Callers must already be ready to handle an error (we return -1 in the
context above).

> @@ -103,7 +98,9 @@ int unix_stream_listen(const char *path)
>  
>  	if (unix_sockaddr_init(&sa, path, &ctx) < 0)
>  		return -1;
> -	fd = unix_stream_socket();
> +	fd = socket(AF_UNIX, SOCK_STREAM, 0);
> +	if (fd < 0)
> +		die_errno("unable to create socket");

Ditto here.

-Peff

^ permalink raw reply	[flat|nested] 178+ messages in thread

* Re: [PATCH v2 11/14] unix-socket: add options to unix_stream_listen()
  2021-02-01 19:45   ` [PATCH v2 11/14] unix-socket: add options to unix_stream_listen() Jeff Hostetler via GitGitGadget
@ 2021-02-02 10:14     ` Jeff King
  2021-02-05 23:28       ` Jeff Hostetler
  0 siblings, 1 reply; 178+ messages in thread
From: Jeff King @ 2021-02-02 10:14 UTC (permalink / raw)
  To: Jeff Hostetler via GitGitGadget
  Cc: git, Ævar Arnfjörð Bjarmason, Jeff Hostetler,
	Chris Torek, Jeff Hostetler

On Mon, Feb 01, 2021 at 07:45:44PM +0000, Jeff Hostetler via GitGitGadget wrote:

> From: Jeff Hostetler <jeffhost@microsoft.com>
> 
> Update `unix_stream_listen()` to take an options structure to override
> default behaviors.  This includes the size of the `listen()` backlog
> and whether it should always unlink the socket file before trying to
> create a new one.  Also eliminate calls to `die()` if it cannot create
> a socket.

I sent a follow-up on the previous patch, but I think this part about
the die() should be folded in there.

Likewise I think it would probably be easier to follow if we added the
backlog parameter and the unlink options in separate patches. The
backlog thing is small, but the unlink part is subtle and requires
explanation. That's a good sign it might do better in its own commit.

> Normally, `unix_stream_listen()` always tries to `unlink()` the
> socket-path before calling `bind()`.  If there is an existing
> server/daemon already bound and listening on that socket-path, our
> `unlink()` would have the effect of disassociating the existing
> server's bound-socket-fd from the socket-path without notifying the
> existing server.  The existing server could continue to service
> existing connections (accepted-socket-fd's), but would not receive any
> futher new connections (since clients rendezvous via the socket-path).
> The existing server would effectively be offline but yet appear to be
> active.
> 
> Furthermore, `unix_stream_listen()` creates an opportunity for a brief
> race condition for connecting clients if they try to connect in the
> interval between the forced `unlink()` and the subsequent `bind()` (which
> recreates the socket-path that is bound to a new socket-fd in the current
> process).

OK. I'm still not sure of the endgame here for writing non-racy code to
establish the socket (which is going to require either some atomic
renaming or some dot-locking in the caller).  But it's plausible to me
that this option will be a useful primitive.
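
Just to check my understanding of the intended use: a caller that does
not want to steal an existing socket would presumably do something like
this (sketch, using only the options added in this series):

  struct unix_stream_listen_opts opts = UNIX_STREAM_LISTEN_OPTS_INIT;
  int fd;

  opts.force_unlink_before_bind = 0; /* do not steal an existing path */
  fd = unix_stream_listen(path, &opts);
  if (fd < 0 && errno == EADDRINUSE) {
          /* somebody (alive or dead) already owns the path; back off */
  }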

The implementation looks correct, though here are a few small
observations/questions/nits:

> -int unix_stream_listen(const char *path)
> +int unix_stream_listen(const char *path,
> +		       const struct unix_stream_listen_opts *opts)
>  {
> -	int fd, saved_errno;
> +	int fd = -1;
> +	int saved_errno;
> +	int bind_successful = 0;
> +	int backlog;
>  	struct sockaddr_un sa;
>  	struct unix_sockaddr_context ctx;
>  
> -	unlink(path);
> -
>  	if (unix_sockaddr_init(&sa, path, &ctx) < 0)
>  		return -1;

We can return directly here, because we know there is nothing to clean
up. Which I thought mean that here...

> +
>  	fd = socket(AF_UNIX, SOCK_STREAM, 0);
>  	if (fd < 0)
> -		die_errno("unable to create socket");
> +		goto fail;

...we are in the same boat. We did not create a socket, so we can just
return. That makes our cleanup code a bit simpler. But we can't do that,
because unix_sockaddr_init() may have done things that need cleaning up
(like chdir). So what you have here is correct.

IMHO that is all the more reason to push this (and the similar code in
unix_stream_connect() added in patch 13) into the previous patch.

> +	if (opts->force_unlink_before_bind)
> +		unlink(path);
>  
>  	if (bind(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0)
>  		goto fail;
> +	bind_successful = 1;

And this one needs to mark a flag explicitly, because we have no other
visible way of knowing we need to do the unlink. Makes sense.

> -	if (listen(fd, 5) < 0)
> +	if (opts->listen_backlog_size > 0)
> +		backlog = opts->listen_backlog_size;
> +	else
> +		backlog = 5;
> +	if (listen(fd, backlog) < 0)

The default-to-5 is a bit funny here. We already set the default to 5 in
UNIX_STREAM_LISTEN_OPTS_INIT. Should it be "0" there, so callers can
treat that as "use the default", which we fill in here? It probably
doesn't matter much in practice, but it seems cleaner to have only one
spot with the magic number.
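
I.e., something along these lines (sketch only; the macro and helper
names are made up, and UNIX_STREAM_LISTEN_OPTS_INIT would then set
listen_backlog_size to 0):

  #define DEFAULT_UNIX_STREAM_LISTEN_BACKLOG 5

  static int resolve_listen_backlog(const struct unix_stream_listen_opts *opts)
  {
          /* 0 (the INIT default) means "use our built-in default" */
          return opts->listen_backlog_size > 0 ?
                  opts->listen_backlog_size :
                  DEFAULT_UNIX_STREAM_LISTEN_BACKLOG;
  }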

> @@ -114,7 +125,10 @@ int unix_stream_listen(const char *path)
>  fail:
>  	saved_errno = errno;
>  	unix_sockaddr_cleanup(&ctx);
> -	close(fd);
> +	if (fd != -1)
> +		close(fd);
> +	if (bind_successful)
> +		unlink(path);
>  	errno = saved_errno;
>  	return -1;
>  }

Should we unlink before closing? I usually try to undo actions in the
reverse order that they were done. I thought at first it might even
matter here, such that we'd atomically relinquish the name without
having a moment where it still points to a closed socket (which might be
less confusing to somebody else trying to connect). But I guess there
will always be such a moment, because it's not like we would ever
accept() or service a request.

-Peff

^ permalink raw reply	[flat|nested] 178+ messages in thread

* Re: [PATCH v2 12/14] unix-socket: add no-chdir option to unix_stream_listen()
  2021-02-01 19:45   ` [PATCH v2 12/14] unix-socket: add no-chdir option " Jeff Hostetler via GitGitGadget
@ 2021-02-02 10:26     ` Jeff King
  0 siblings, 0 replies; 178+ messages in thread
From: Jeff King @ 2021-02-02 10:26 UTC (permalink / raw)
  To: Jeff Hostetler via GitGitGadget
  Cc: git, Ævar Arnfjörð Bjarmason, Jeff Hostetler,
	Chris Torek, Jeff Hostetler

On Mon, Feb 01, 2021 at 07:45:45PM +0000, Jeff Hostetler via GitGitGadget wrote:

> From: Jeff Hostetler <jeffhost@microsoft.com>
> 
> Calls to `chdir()` are dangerous in a multi-threaded context.  If
> `unix_stream_listen()` is given a socket pathname that is too big to
> fit in a `sockaddr_un` structure, it will `chdir()` to the parent
> directory of the requested socket pathname, create the socket using a
> relative pathname, and then `chdir()` back.  This is not thread-safe.
> 
> Add `disallow_chdir` flag to `struct unix_sockaddr_context` and change
> all callers to pass an initialized context structure.
> 
> Teach `unix_sockaddr_init()` to not allow calls to `chdir()` when flag
> is set.

Makes sense, and it fits nicely into the options pattern you set up in
the earlier patch.

>  struct unix_sockaddr_context {
>  	char *orig_dir;
> +	unsigned int disallow_chdir:1;
>  };
>  
> +#define UNIX_SOCKADDR_CONTEXT_INIT \
> +{ \
> +	.orig_dir=NULL, \
> +	.disallow_chdir=0, \
> +}

It is really just zero-initializing, so "{ 0 }" would be OK (I think we
are relaxed about allowing 0 as NULL in initializers). But I don't mind
it being written out (but do mind whitespace around the "=").

However, the point of unix_sockaddr_init() is that it's supposed to
initialize the struct. And I don't think we need to carry disallow_chdir
around; the cleanup function knows from orig_dir whether it's supposed
to do any cleanup, so only the init function has to care. So would:

diff --git a/unix-socket.c b/unix-socket.c
index 19ed48be99..0eb14faf54 100644
--- a/unix-socket.c
+++ b/unix-socket.c
@@ -36,16 +36,23 @@ static void unix_sockaddr_cleanup(struct unix_sockaddr_context *ctx)
 }
 
 static int unix_sockaddr_init(struct sockaddr_un *sa, const char *path,
-			      struct unix_sockaddr_context *ctx)
+			      struct unix_sockaddr_context *ctx,
+			      int disallow_chdir)
 {
 	int size = strlen(path) + 1;
 
 	ctx->orig_dir = NULL;
 	if (size > sizeof(sa->sun_path)) {
-		const char *slash = find_last_dir_sep(path);
+		const char *slash;
 		const char *dir;
 		struct strbuf cwd = STRBUF_INIT;
 
+		if (disallow_chdir) {
+			errno = ENAMETOOLONG;
+			return -1;
+		}
+
+		slash = find_last_dir_sep(path);
 		if (!slash) {
 			errno = ENAMETOOLONG;
 			return -1;

make it more obvious? There are only two callers, and this is all
file-local, so I don't mind adding the extra parameter there. And you
would not need an initializer at all.

>  #define UNIX_STREAM_LISTEN_OPTS_INIT \
>  { \
>  	.listen_backlog_size = 5, \
>  	.force_unlink_before_bind = 1, \
> +	.disallow_chdir = 0, \
>  }

I don't know if we care, but some options are positive "do this unlink"
and some are negative "do not do this chdir". Those could be made
consistent (and flip the initializer value to keep the same defaults).

There is actually value in making struct defaults generally "0" unless
we have reason not to, because callers sometimes zero-initialize without
thinking about it. I doubt that would happen for this particular struct,
and I'm deep into bike-shedding anyway, so I'm OK either way. But
something like:

  struct unix_stream_listen_opts_init {
	int listen_backlog_size;
	int disallow_unlink;
	int disallow_chdir;
  };

would work with just a "{ 0 }" zero-initializer. :)

-Peff

^ permalink raw reply related	[flat|nested] 178+ messages in thread

* Re: [PATCH v2 02/14] pkt-line: promote static buffer in packet_write_gently() to callers
  2021-02-02  9:41     ` Jeff King
@ 2021-02-02 20:33       ` Jeff Hostetler
  2021-02-02 22:54       ` Johannes Schindelin
  1 sibling, 0 replies; 178+ messages in thread
From: Jeff Hostetler @ 2021-02-02 20:33 UTC (permalink / raw)
  To: Jeff King, Jeff Hostetler via GitGitGadget
  Cc: git, Ævar Arnfjörð Bjarmason, Chris Torek, Jeff Hostetler



On 2/2/21 4:41 AM, Jeff King wrote:
> On Mon, Feb 01, 2021 at 07:45:35PM +0000, Jeff Hostetler via GitGitGadget wrote:
> 
>> -static int packet_write_gently(const int fd_out, const char *buf, size_t size)
>> +/*
>> + * Use the provided scratch space to build a combined <hdr><buf> buffer
>> + * and write it to the file descriptor (in one write if possible).
>> + */
>> +static int packet_write_gently(const int fd_out, const char *buf, size_t size,
>> +			       struct packet_scratch_space *scratch)
> 
> Thanks for addressing my stack space concern.
> 
> This solution does work (and I like wrapping it in a struct like this),
> though I have to wonder if we're not just punting on the thread issues
> in an ever-so-slight way with things like this:
> 
>>   void packet_write(int fd_out, const char *buf, size_t size)
>>   {
>> -	if (packet_write_gently(fd_out, buf, size))
>> +	static struct packet_scratch_space scratch;
>> +
>> +	if (packet_write_gently(fd_out, buf, size, &scratch))
>>   		die_errno(_("packet write failed"));
>>   }
> 
> Where we just moved it one step up the call stack.
> 
>>   int write_packetized_from_fd(int fd_in, int fd_out)
>>   {
>> +	/*
>> +	 * TODO We could save a memcpy() if we essentially inline
>> +	 * TODO packet_write_gently() here and change the xread()
>> +	 * TODO to pass &buf[4].
>> +	 */
> 
> And comments like this make me wonder if the current crop of pktline
> functions are just mis-designed in the first place. There are two
> obvious directions here.
> 
> One, we can observe that the only reason we need the scratch space is to
> ship out the whole thing in a single write():
> 
>> [in packet_write_gently]
>> -	set_packet_header(packet_write_buffer, packet_size);
>> -	memcpy(packet_write_buffer + 4, buf, size);
>> -	if (write_in_full(fd_out, packet_write_buffer, packet_size) < 0)
>> +
>> +	set_packet_header(scratch->buffer, packet_size);
>> +	memcpy(scratch->buffer + 4, buf, size);
>> +
>> +	if (write_in_full(fd_out, scratch->buffer, packet_size) < 0)
>>   		return error(_("packet write failed"));
> 
> Would it really be so bad to do:
> 
>    char header[4];
>    set_packet_header(header, packet_size);
>    if (write_in_full(fd_out, header, 4) < 0 ||
>        write_in_full(fd_out, buf, size) < 0)
>            return error(...);
> 
> I doubt that two syscalls is breaking the bank here, but if people are
> really concerned, using writev() would be a much better solution.
> Obviously we can't rely on it being available everywhere, but it's quite
> easy to emulate with a wrapper (and I'd be happy punt on any writev
> stuff until somebody actually measures a difference).
> 
> The other direction is that callers could be using a correctly-sized
> buffer in the first place. I.e., something like:
> 
>    struct packet_buffer {
>            char full_packet[LARGE_PACKET_MAX];
>    };
>    static inline char *packet_data(struct packet_buffer *pb)
>    {
> 	return pb->full_packet + 4;
>    }
> 
> That lets people work with the oversized buffer in a natural-ish way
> that would be hard to get wrong, like:
> 
>    memcpy(packet_data(pb), some_other_buf, len);
> 
> (though if we wanted to go even further, we could provide accessors that
> actually do the writing and sanity-check the lengths; the downside is
> that I'm not sure how callers typically get the bytes into these bufs in
> the first place).
> 
> That's a much bigger change, of course, and I'd guess you'd much prefer
> to focus on the actual point of your series. ;)
> 
> -Peff
> 

Yeah, I had all of those thoughts and debates in my head.  I'm not sure
there is a clear winner here.  And I was trying to prevent this change
from having a massive footprint and all that.  The FSMonitor stuff is
enough to worry about...

Personally, I like the 2 syscall model (for now at least and not mess
with writev()).  There are only 3 calls to packet_write_gently() and
this fixes 2 of them without any local buffers.  I might as well update
the 1 caller of write_packetized_from_fd() to pass a buffer rather than
have a static buffer while we're at it.  Then all of those routines
are fixed.

Let me see what that looks like.

Thanks
Jeff

^ permalink raw reply	[flat|nested] 178+ messages in thread

* Re: [PATCH v2 09/14] simple-ipc: add t/helper/test-simple-ipc and t0052
  2021-02-01 19:45   ` [PATCH v2 09/14] simple-ipc: add t/helper/test-simple-ipc and t0052 Jeff Hostetler via GitGitGadget
@ 2021-02-02 21:35     ` SZEDER Gábor
  2021-02-03  4:36       ` Jeff King
  2021-02-09 15:45       ` Jeff Hostetler
  2021-02-05 19:38     ` SZEDER Gábor
  1 sibling, 2 replies; 178+ messages in thread
From: SZEDER Gábor @ 2021-02-02 21:35 UTC (permalink / raw)
  To: Jeff Hostetler via GitGitGadget
  Cc: git, Ævar Arnfjörð Bjarmason, Jeff Hostetler,
	Jeff King, Chris Torek, Jeff Hostetler

On Mon, Feb 01, 2021 at 07:45:42PM +0000, Jeff Hostetler via GitGitGadget wrote:
> diff --git a/t/t0052-simple-ipc.sh b/t/t0052-simple-ipc.sh
> new file mode 100755
> index 00000000000..69588354545
> --- /dev/null
> +++ b/t/t0052-simple-ipc.sh
> @@ -0,0 +1,129 @@
> +#!/bin/sh
> +
> +test_description='simple command server'
> +
> +. ./test-lib.sh
> +
> +test-tool simple-ipc SUPPORTS_SIMPLE_IPC || {
> +	skip_all='simple IPC not supported on this platform'
> +	test_done
> +}
> +
> +stop_simple_IPC_server () {
> +	test -n "$SIMPLE_IPC_PID" || return 0
> +
> +	kill "$SIMPLE_IPC_PID" &&
> +	SIMPLE_IPC_PID=
> +}
> +
> +test_expect_success 'start simple command server' '
> +	{ test-tool simple-ipc daemon --threads=8 & } &&
> +	SIMPLE_IPC_PID=$! &&
> +	test_atexit stop_simple_IPC_server &&
> +
> +	sleep 1 &&

This will certainly lead to occasional failures when the daemon takes
longer than that mere 1 second delay under heavy load or in CI jobs.

> +
> +	test-tool simple-ipc is-active
> +'

^ permalink raw reply	[flat|nested] 178+ messages in thread

* Re: [PATCH v2 02/14] pkt-line: promote static buffer in packet_write_gently() to callers
  2021-02-02  9:41     ` Jeff King
  2021-02-02 20:33       ` Jeff Hostetler
@ 2021-02-02 22:54       ` Johannes Schindelin
  2021-02-03  4:52         ` Jeff King
  1 sibling, 1 reply; 178+ messages in thread
From: Johannes Schindelin @ 2021-02-02 22:54 UTC (permalink / raw)
  To: Jeff King
  Cc: Jeff Hostetler via GitGitGadget, git,
	Ævar Arnfjörð Bjarmason, Jeff Hostetler,
	Chris Torek, Jeff Hostetler

Hi Peff,

On Tue, 2 Feb 2021, Jeff King wrote:

> On Mon, Feb 01, 2021 at 07:45:35PM +0000, Jeff Hostetler via GitGitGadget wrote:
>
> > [in packet_write_gently]
> > -	set_packet_header(packet_write_buffer, packet_size);
> > -	memcpy(packet_write_buffer + 4, buf, size);
> > -	if (write_in_full(fd_out, packet_write_buffer, packet_size) < 0)
> > +
> > +	set_packet_header(scratch->buffer, packet_size);
> > +	memcpy(scratch->buffer + 4, buf, size);
> > +
> > +	if (write_in_full(fd_out, scratch->buffer, packet_size) < 0)
> >  		return error(_("packet write failed"));
>
> Would it really be so bad to do:
>
>   char header[4];
>   set_packet_header(header, packet_size);
>   if (write_in_full(fd_out, header, 4) < 0 ||
>       write_in_full(fd_out, buf, size) < 0)
>           return error(...);

There must have been a reason why the original code went out of its way to
copy the data. At least that's what I _assume_.

I could see, for example, that these extra round-trips just for the
header, really have a negative impact on network operations.

> I doubt that two syscalls is breaking the bank here, but if people are
> really concerned, using writev() would be a much better solution.

No, because there is no equivalent for that on Windows. And since Windows
is the primary target of our Simple IPC/FSMonitor work, that would break
the bank.

> Obviously we can't rely on it being available everywhere, but it's quite
> easy to emulate with a wrapper (and I'd be happy to punt on any writev
> stuff until somebody actually measures a difference).
>
> The other direction is that callers could be using a correctly-sized
> buffer in the first place. I.e., something like:
>
>   struct packet_buffer {
>           char full_packet[LARGE_PACKET_MAX];
>   };
>   static inline char *packet_data(struct packet_buffer *pb)
>   {
> 	return pb->full_packet + 4;
>   }

Or we change it to

	struct packet_buffer {
		char count[4];
		char payload[LARGE_PACKET_MAX - 4];
	};

and then ask the callers to allocate one of those beauties. I'm not sure
how well we can guarantee that the compiler won't pad this, though.
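
If we went that way, a compile-time check could at least make the
no-padding assumption explicit. Rough sketch only (it assumes a C11
compiler and <stddef.h> for offsetof; none of this is actual series
code):

	struct packet_buffer {
		char count[4];
		char payload[LARGE_PACKET_MAX - 4];
	};

	/*
	 * Both members are char arrays (alignment 1), so no padding
	 * should ever be inserted, but this makes the assumption
	 * explicit and breaks the build if some ABI disagrees.
	 */
	_Static_assert(offsetof(struct packet_buffer, payload) == 4,
		       "payload must start right after the 4-byte header");
	_Static_assert(sizeof(struct packet_buffer) == LARGE_PACKET_MAX,
		       "packet_buffer must have no padding");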

And then there is `write_packetized_from_buf()` whose `src` parameter can
come from `convert_to_git()` that _definitely_ would not be of the desired
form.

So I guess if we can get away with the 2-syscall version, that's kind of
better than that.

Ciao,
Dscho

>
> That lets people work with the oversized buffer in a natural-ish way
> that would be hard to get wrong, like:
>
>   memcpy(packet_data(pb), some_other_buf, len);
>
> (though if we wanted to go even further, we could provide accessors that
> actually do the writing and sanity-check the lengths; the downside is
> that I'm not sure how callers typically get the bytes into these bufs in
> the first place).
>
> That's a much bigger change, of course, and I'd guess you'd much prefer
> to focus on the actual point of your series. ;)
>
> -Peff
>

^ permalink raw reply	[flat|nested] 178+ messages in thread

* Re: [PATCH v2 04/14] pkt-line: optionally skip the flush packet in write_packetized_from_buf()
  2021-02-02  9:48     ` Jeff King
@ 2021-02-02 22:56       ` Johannes Schindelin
  2021-02-05 18:30       ` Jeff Hostetler
  1 sibling, 0 replies; 178+ messages in thread
From: Johannes Schindelin @ 2021-02-02 22:56 UTC (permalink / raw)
  To: Jeff King
  Cc: Johannes Schindelin via GitGitGadget, git,
	Ævar Arnfjörð Bjarmason, Jeff Hostetler,
	Chris Torek, Jeff Hostetler

Hi Peff,


On Tue, 2 Feb 2021, Jeff King wrote:

> On Mon, Feb 01, 2021 at 07:45:37PM +0000, Johannes Schindelin via GitGitGadget wrote:
>
> > From: Johannes Schindelin <johannes.schindelin@gmx.de>
> >
> > This function currently has only one caller: `apply_multi_file_filter()`
> > in `convert.c`. That caller wants a flush packet to be written after
> > writing the payload.
> >
> > However, we are about to introduce a user that wants to write many
> > packets before a final flush packet, so let's extend this function to
> > prepare for that scenario.
>
> I think this is a sign that the function is not very well-designed in
> the first place. It seems like the code would be easier to understand
> overall if that caller just explicitly did the flush itself. It even
> already does so in other cases!
>
> Something like (untested):

Fine by me.

Thanks,
Dscho

>
>  convert.c  | 4 ++++
>  pkt-line.c | 4 ----
>  2 files changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/convert.c b/convert.c
> index ee360c2f07..3968ac37b9 100644
> --- a/convert.c
> +++ b/convert.c
> @@ -890,6 +890,10 @@ static int apply_multi_file_filter(const char *path, const char *src, size_t len
>  	if (err)
>  		goto done;
>
> +	err = packet_flush_gently(process->in);
> +	if (err)
> +		goto done;
> +
>  	err = subprocess_read_status(process->out, &filter_status);
>  	if (err)
>  		goto done;
> diff --git a/pkt-line.c b/pkt-line.c
> index d633005ef7..014520a9c2 100644
> --- a/pkt-line.c
> +++ b/pkt-line.c
> @@ -256,8 +256,6 @@ int write_packetized_from_fd(int fd_in, int fd_out)
>  			break;
>  		err = packet_write_gently(fd_out, buf, bytes_to_write);
>  	}
> -	if (!err)
> -		err = packet_flush_gently(fd_out);
>  	return err;
>  }
>
> @@ -277,8 +275,6 @@ int write_packetized_from_buf(const char *src_in, size_t len, int fd_out)
>  		err = packet_write_gently(fd_out, src_in + bytes_written, bytes_to_write);
>  		bytes_written += bytes_to_write;
>  	}
> -	if (!err)
> -		err = packet_flush_gently(fd_out);
>  	return err;
>  }
>
>
> -Peff
>

^ permalink raw reply	[flat|nested] 178+ messages in thread

* Re: [PATCH v2 00/14] Simple IPC Mechanism
  2021-02-01 23:26     ` Jeff Hostetler
@ 2021-02-02 23:07       ` Johannes Schindelin
  2021-02-04 19:08         ` Junio C Hamano
  0 siblings, 1 reply; 178+ messages in thread
From: Johannes Schindelin @ 2021-02-02 23:07 UTC (permalink / raw)
  To: Jeff Hostetler
  Cc: Junio C Hamano, Jeff Hostetler via GitGitGadget, git,
	Ævar Arnfjörð Bjarmason, Jeff King, Chris Torek,
	Jeff Hostetler

Hi Junio & Jeff,

On Mon, 1 Feb 2021, Jeff Hostetler wrote:

> On 2/1/21 5:20 PM, Junio C Hamano wrote:
> > "Jeff Hostetler via GitGitGadget" <gitgitgadget@gmail.com> writes:
> >
> > > Here is version 2 of my "Simple IPC" series and addresses the following
> > > review comments:
> > > ...
> > > Junio C Hamano (1):
> > >    ci/install-depends: attempt to fix "brew cask" stuff
> >
> > Huh?
> >
>
> Sorry.  I had to prepend that one to the patch series to get the
> CI builds to run.  I've been working rebased against "v2.30.0" and
> GitGitGadget references "master".

The idea being that we want to be able to merge this branch as-is into Git
for Windows (and also into microsoft/git), and therefore do not want to
base it on a later commit that is not reachable from git-for-windows/git's
`main` branch.

Maybe it is time to merge `jc/macos-install-dependencies-fix` down to
`maint`? Then we could base Simple IPC/FSMonitor on `maint` instead, and
would still have the benefit we want.

Ciao,
Dscho

^ permalink raw reply	[flat|nested] 178+ messages in thread

* Re: [PATCH v2 09/14] simple-ipc: add t/helper/test-simple-ipc and t0052
  2021-02-02 21:35     ` SZEDER Gábor
@ 2021-02-03  4:36       ` Jeff King
  2021-02-09 15:45       ` Jeff Hostetler
  1 sibling, 0 replies; 178+ messages in thread
From: Jeff King @ 2021-02-03  4:36 UTC (permalink / raw)
  To: SZEDER Gábor
  Cc: Jeff Hostetler via GitGitGadget, git,
	Ævar Arnfjörð Bjarmason, Jeff Hostetler,
	Chris Torek, Jeff Hostetler

On Tue, Feb 02, 2021 at 10:35:23PM +0100, SZEDER Gábor wrote:

> > +test_expect_success 'start simple command server' '
> > +	{ test-tool simple-ipc daemon --threads=8 & } &&
> > +	SIMPLE_IPC_PID=$! &&
> > +	test_atexit stop_simple_IPC_server &&
> > +
> > +	sleep 1 &&
> 
> This will certainly lead to occasional failures when the daemon takes
> longer than that mere 1 second delay under heavy load or in CI jobs.

Yeah. The robust thing is to have the server indicate when it's ready to
receive requests. There's some prior art in t/lib-git-daemon.sh using a
fifo to get a line to the caller. It's ugly, but AFAIK pretty
bulletproof.
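
Roughly, that handshake looks like this (a sketch only, not the actual
lib-git-daemon.sh code; the --ready-fifo option is hypothetical and the
test helper would have to learn some way to announce readiness):

	mkfifo daemon_ready &&
	{
		# --ready-fifo is made up: "write one line to this path
		# once the socket is actually listening"
		test-tool simple-ipc daemon --threads=8 \
			--ready-fifo=daemon_ready &
	} &&
	SIMPLE_IPC_PID=$! &&
	test_atexit stop_simple_IPC_server &&
	# opening the fifo for reading blocks until the daemon writes
	read ready_message <daemon_ready &&
	test-tool simple-ipc is-active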

-Peff

^ permalink raw reply	[flat|nested] 178+ messages in thread

* Re: [PATCH v2 02/14] pkt-line: promote static buffer in packet_write_gently() to callers
  2021-02-02 22:54       ` Johannes Schindelin
@ 2021-02-03  4:52         ` Jeff King
  0 siblings, 0 replies; 178+ messages in thread
From: Jeff King @ 2021-02-03  4:52 UTC (permalink / raw)
  To: Johannes Schindelin
  Cc: Jeff Hostetler via GitGitGadget, git,
	Ævar Arnfjörð Bjarmason, Jeff Hostetler,
	Chris Torek, Jeff Hostetler

On Tue, Feb 02, 2021 at 11:54:43PM +0100, Johannes Schindelin wrote:

> > Would it really be so bad to do:
> >
> >   char header[4];
> >   set_packet_header(header, packet_size);
> >   if (write_in_full(fd_out, header, 4) < 0 ||
> >       write_in_full(fd_out, buf, size) < 0)
> >           return error(...);
> 
> There must have been a reason why the original code went out of its way to
> copy the data. At least that's what I _assume_.

Having looked at the history, including the original mailing list
threads, there doesn't seem to have been one.

> I could see, for example, that these extra round-trips just for the
> header, really have a negative impact on network operations.

Keep in mind these won't be network round-trips. They're just syscall
round-trips. The OS would keep writing without an ACK while filling a
TCP window. The worst case may be an extra packet on the wire, though
the OS may end up coalescing the writes into a single packet anyway.

> > I doubt that two syscalls is breaking the bank here, but if people are
> > really concerned, using writev() would be a much better solution.
> 
> No, because there is no equivalent for that on Windows. And since Windows
> is the primary target of our Simple IPC/FSMonitor work, that would break
> the bank.

Are you concerned about the performance implications, or just
portability? Falling back to two writes (and wrapping that in a
function) would be easy for the latter. For the former, there's WSASend,
but I have no idea what kind of difficulties/caveats we might run into.

> > The other direction is that callers could be using a correctly-sized
> > buffer in the first place. I.e., something like:
> >
> >   struct packet_buffer {
> >           char full_packet[LARGE_PACKET_MAX];
> >   };
> >   static inline char *packet_data(struct packet_buffer *pb)
> >   {
> > 	return pb->full_packet + 4;
> >   }
> 
> Or we change it to
> 
> 	struct packet_buffer {
> 		char count[4];
> 		char payload[LARGE_PACKET_MAX - 4];
> 	};
> 
> and then ask the callers to allocate one of those beauties. I'm not sure
> how well we can guarantee that the compiler won't pad this, though.

Yeah, I almost suggested the same, but wasn't sure about padding. I
think the standard allows there to be arbitrary padding between the two,
so it's really up to the ABI to define. I'd be surprised if this struct
is a problem in practice (we already have some structs which assume
4-byte alignment, and nobody seems to have complained).

> And then there is `write_packetized_from_buf()` whose `src` parameter can
> come from `convert_to_git()` that _definitely_ would not be of the desired
> form.

Yep. It really does need to either use two writes or to copy, because
it's slicing up a much larger buffer (it wouldn't be the end of the
world for it to allocate a single LARGE_PACKET_MAX heap buffer for the
duration of its run, though).

> So I guess if we can get away with the 2-syscall version, that's kind of
> better than that.

I do prefer it, because then the whole thing just becomes an
implementation detail that callers don't need to care about.

-Peff

^ permalink raw reply	[flat|nested] 178+ messages in thread

* Re: [PATCH v2 00/14] Simple IPC Mechanism
  2021-02-02 23:07       ` Johannes Schindelin
@ 2021-02-04 19:08         ` Junio C Hamano
  2021-02-05 13:19           ` candidate branches for `maint`, was " Johannes Schindelin
  0 siblings, 1 reply; 178+ messages in thread
From: Junio C Hamano @ 2021-02-04 19:08 UTC (permalink / raw)
  To: Johannes Schindelin
  Cc: Jeff Hostetler, Jeff Hostetler via GitGitGadget, git,
	Ævar Arnfjörð Bjarmason, Jeff King, Chris Torek,
	Jeff Hostetler

Johannes Schindelin <Johannes.Schindelin@gmx.de> writes:

> The idea being that we want to be able to merge this branch as-is into Git
> for Windows (and also into microsoft/git), and therefore do not want to
> base it on a later commit that is not reachable from git-for-windows/git's
> `main` branch.
>
> Maybe it is time to merge `jc/macos-install-dependencies-fix` down to
> `maint`? Then we could base Simple IPC/FSMonitor on `maint` instead, and
> would still have the benefit we want.

Now you mention it, if we ever have an update to the current 'maint'
branch, we'd trigger this known failure in macOS build without a
good reason, even though we have a fix that has been in use on the
'master' and above.

I definitely should merge the jc/macos-install-dependencies-fix
topic down to 'maint' now.

Are there other topics that deserve to be in 'maint' that are
"obviously correct" people can think of?

Thanks.


^ permalink raw reply	[flat|nested] 178+ messages in thread

* candidate branches for `maint`, was Re: [PATCH v2 00/14] Simple IPC Mechanism
  2021-02-04 19:08         ` Junio C Hamano
@ 2021-02-05 13:19           ` Johannes Schindelin
  2021-02-05 19:55             ` Junio C Hamano
  0 siblings, 1 reply; 178+ messages in thread
From: Johannes Schindelin @ 2021-02-05 13:19 UTC (permalink / raw)
  To: Junio C Hamano
  Cc: Jeff Hostetler, Jeff Hostetler via GitGitGadget, git,
	Ævar Arnfjörð Bjarmason, Jeff King, Chris Torek,
	Jeff Hostetler

Hi Junio,

On Thu, 4 Feb 2021, Junio C Hamano wrote:

> Johannes Schindelin <Johannes.Schindelin@gmx.de> writes:
>
> > The idea being that we want to be able to merge this branch as-is into Git
> > for Windows (and also into microsoft/git), and therefore do not want to
> > base it on a later commit that is not reachable from git-for-windows/git's
> > `main` branch.
> >
> > Maybe it is time to merge `jc/macos-install-dependencies-fix` down to
> > `maint`? Then we could base Simple IPC/FSMonitor on `maint` instead, and
> > would still have the benefit we want.
>
> Now you mention it, if we ever have an update to the current 'maint'
> branch, we'd trigger this known failure in macOS build without a
> good reason, even though we have a fix that has been in use on the
> 'master' and above.
>
> I definitely should merge the jc/macos-install-dependencies-fix
> topic down to 'maint' now.
>
> Are there other topics that deserve to be in 'maint' that are
> "obviously correct" people can think of?

I looked over the branches merged into `master` that are not in `maint`,
and from a cursory look, these seem to be good candidates:

- pk/subsub-fetch-fix-take-2
- rs/rebase-commit-validation
- en/stash-apply-sparse-checkout
- ds/for-each-repo-noopfix
- pb/mergetool-tool-help-fix
- jk/t5516-deflake
- jc/macos-install-dependencies-fix
- jk/forbid-lf-in-git-url
- jk/log-cherry-pick-duplicate-patches
- js/skip-dashed-built-ins-from-config-mak

From a cursory look, it might seem as if ds/maintenance-prefetch-cleanup
would also be a good candidate, but it is not based on `maint`, although
it _does_ appear to fix an issue introduced in v2.30.0-rc0~23^2~8.

There is another candidate that I am not _quite_ sure about:
dl/p4-encode-after-kw-expansion. It _seems_ as if it would be good to
apply on the maintenance train, but I am uncertain how important a bug fix
it is.

There are also a couple test updates that might be nice to have in
`maint`:

- nk/perf-fsmonitor-cleanup
- mt/t4129-with-setgid-dir
- ad/t4129-setfacl-target-fix

Finally, there are documentation updates that I would probably merge, if I
was tasked with updating `maint`:

- ta/doc-typofix
- pb/doc-modules-git-work-tree-typofix
- jc/sign-off
- vv/send-email-with-less-secure-apps-access
- ug/doc-lose-dircache
- ab/gettext-charset-comment-fix
- bc/doc-status-short
- tb/local-clone-race-doc
- ab/fsck-doc-fix
- jt/packfile-as-uri-doc

Ciao,
Dscho

^ permalink raw reply	[flat|nested] 178+ messages in thread

* Re: [PATCH v2 04/14] pkt-line: optionally skip the flush packet in write_packetized_from_buf()
  2021-02-02  9:48     ` Jeff King
  2021-02-02 22:56       ` Johannes Schindelin
@ 2021-02-05 18:30       ` Jeff Hostetler
  1 sibling, 0 replies; 178+ messages in thread
From: Jeff Hostetler @ 2021-02-05 18:30 UTC (permalink / raw)
  To: Jeff King, Johannes Schindelin via GitGitGadget
  Cc: git, Ævar Arnfjörð Bjarmason, Chris Torek,
	Jeff Hostetler, Johannes Schindelin



On 2/2/21 4:48 AM, Jeff King wrote:
> On Mon, Feb 01, 2021 at 07:45:37PM +0000, Johannes Schindelin via GitGitGadget wrote:
> 
>> From: Johannes Schindelin <johannes.schindelin@gmx.de>
>>
>> This function currently has only one caller: `apply_multi_file_filter()`
>> in `convert.c`. That caller wants a flush packet to be written after
>> writing the payload.
>>
>> However, we are about to introduce a user that wants to write many
>> packets before a final flush packet, so let's extend this function to
>> prepare for that scenario.
> 
> I think this is a sign that the function is not very well-designed in
> the first place. It seems like the code would be easier to understand
> overall if that caller just explicitly did the flush itself. It even
> already does so in other cases!
> 

I agree.  I'll move flush to the caller and rename the write packetized
function slightly to guard against new callers assuming the old behavior
during the transition.
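
The convert.c caller would then end up looking roughly like this (a
sketch; "_no_flush" is just one possible name for the rename):

	err = write_packetized_from_buf_no_flush(src, len, process->in);
	if (err)
		goto done;

	/* the flush is now explicitly the caller's job */
	err = packet_flush_gently(process->in);
	if (err)
		goto done;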

Jeff


> Something like (untested):
> 
>   convert.c  | 4 ++++
>   pkt-line.c | 4 ----
>   2 files changed, 4 insertions(+), 4 deletions(-)
> 
> diff --git a/convert.c b/convert.c
> index ee360c2f07..3968ac37b9 100644
> --- a/convert.c
> +++ b/convert.c
> @@ -890,6 +890,10 @@ static int apply_multi_file_filter(const char *path, const char *src, size_t len
>   	if (err)
>   		goto done;
>   
> +	err = packet_flush_gently(process->in);
> +	if (err)
> +		goto done;
> +
>   	err = subprocess_read_status(process->out, &filter_status);
>   	if (err)
>   		goto done;
> diff --git a/pkt-line.c b/pkt-line.c
> index d633005ef7..014520a9c2 100644
> --- a/pkt-line.c
> +++ b/pkt-line.c
> @@ -256,8 +256,6 @@ int write_packetized_from_fd(int fd_in, int fd_out)
>   			break;
>   		err = packet_write_gently(fd_out, buf, bytes_to_write);
>   	}
> -	if (!err)
> -		err = packet_flush_gently(fd_out);
>   	return err;
>   }
>   
> @@ -277,8 +275,6 @@ int write_packetized_from_buf(const char *src_in, size_t len, int fd_out)
>   		err = packet_write_gently(fd_out, src_in + bytes_written, bytes_to_write);
>   		bytes_written += bytes_to_write;
>   	}
> -	if (!err)
> -		err = packet_flush_gently(fd_out);
>   	return err;
>   }
>   
> 
> -Peff
> 

^ permalink raw reply	[flat|nested] 178+ messages in thread

* Re: [PATCH v2 09/14] simple-ipc: add t/helper/test-simple-ipc and t0052
  2021-02-01 19:45   ` [PATCH v2 09/14] simple-ipc: add t/helper/test-simple-ipc and t0052 Jeff Hostetler via GitGitGadget
  2021-02-02 21:35     ` SZEDER Gábor
@ 2021-02-05 19:38     ` SZEDER Gábor
  1 sibling, 0 replies; 178+ messages in thread
From: SZEDER Gábor @ 2021-02-05 19:38 UTC (permalink / raw)
  To: Jeff Hostetler via GitGitGadget
  Cc: git, Ævar Arnfjörð Bjarmason, Jeff Hostetler,
	Jeff King, Chris Torek, Jeff Hostetler

On Mon, Feb 01, 2021 at 07:45:42PM +0000, Jeff Hostetler via GitGitGadget wrote:
> Create unit tests for "simple-ipc".  These are currently only enabled
> on Windows.

> diff --git a/t/t0052-simple-ipc.sh b/t/t0052-simple-ipc.sh

> +test_expect_success '`quit` works' '
> +	test-tool simple-ipc send quit &&
> +	test_must_fail test-tool simple-ipc is-active &&
> +	test_must_fail test-tool simple-ipc send ping
> +'

This test is flaky as well, and it did actually fail in CI:

  expecting success of 0052.9 '`quit` works': 
  	test-tool simple-ipc send quit &&
  	test_must_fail test-tool simple-ipc is-active &&
  	test_must_fail test-tool simple-ipc send ping
  
  +test-tool simple-ipc send quit
  +test_must_fail test-tool simple-ipc is-active
  test_must_fail: command succeeded: test-tool simple-ipc is-active
  error: last command exited with $?=1
  not ok 9 - `quit` works

> +
> +test_done
> -- 
> gitgitgadget
> 

^ permalink raw reply	[flat|nested] 178+ messages in thread

* Re: candidate branches for `maint`, was Re: [PATCH v2 00/14] Simple IPC Mechanism
  2021-02-05 13:19           ` candidate branches for `maint`, was " Johannes Schindelin
@ 2021-02-05 19:55             ` Junio C Hamano
  0 siblings, 0 replies; 178+ messages in thread
From: Junio C Hamano @ 2021-02-05 19:55 UTC (permalink / raw)
  To: Johannes Schindelin
  Cc: Jeff Hostetler, Jeff Hostetler via GitGitGadget, git,
	Ævar Arnfjörð Bjarmason, Jeff King, Chris Torek,
	Jeff Hostetler

Johannes Schindelin <Johannes.Schindelin@gmx.de> writes:

>> Are there other topics that deserve to be in 'maint' that are
>> "obviously correct" people can think of?
>
> I looked over the branches merged into `master` that are not in `maint`,
> and from a cursory look, these seem to be good candidates:
> ...
> There are also a couple test updates that might be nice to have in
> `maint`:
> ...
> Finally, there are documentation updates that I would probably merge, if I
> was tasked with updating `maint`:
> ...


Your list more or less matches what the ML (merge later) script on
the todo/ branch produces when it is fed the RelNotes (the script
just greps for "merge later to maint" comments and shows the result
in a way that is a bit easier for me to use).

The ones that are in RelNotes but not in your list are 

    ab/branch-sort # 7 (11 days ago) 
    ar/t6016-modernise # 1 (3 weeks ago) 
    dl/p4-encode-after-kw-expansion # 1 (3 weeks ago) 
    fc/t6030-bisect-reset-removes-auxiliary-files # 1 (4 weeks ago) 
    ma/doc-pack-format-varint-for-sizes # 1 (3 weeks ago) 
    ma/more-opaque-lock-file # 5 (11 days ago) 
    ma/t1300-cleanup # 3 (3 weeks ago) 
    zh/arg-help-format # 2 (3 weeks ago) 

and I think all of them are safe to merge down.

Thanks for being an independent source I can rely on to sanity check
what is in RelNotes.  Very much appreciated.



^ permalink raw reply	[flat|nested] 178+ messages in thread

* Re: [PATCH v2 11/14] unix-socket: add options to unix_stream_listen()
  2021-02-02 10:14     ` Jeff King
@ 2021-02-05 23:28       ` Jeff Hostetler
  2021-02-09 16:32         ` Jeff King
  0 siblings, 1 reply; 178+ messages in thread
From: Jeff Hostetler @ 2021-02-05 23:28 UTC (permalink / raw)
  To: Jeff King, Jeff Hostetler via GitGitGadget
  Cc: git, Ævar Arnfjörð Bjarmason, Chris Torek, Jeff Hostetler



On 2/2/21 5:14 AM, Jeff King wrote:
> On Mon, Feb 01, 2021 at 07:45:44PM +0000, Jeff Hostetler via GitGitGadget wrote:
> 
>> From: Jeff Hostetler <jeffhost@microsoft.com>
>>
>> Update `unix_stream_listen()` to take an options structure to override
>> default behaviors.  This includes the size of the `listen()` backlog
>> and whether it should always unlink the socket file before trying to
>> create a new one.  Also eliminate calls to `die()` if it cannot create
>> a socket.
> 
> I sent a follow-up on the previous patch, but I think this part about
> the die() should be folded in there.
> 
> Likewise I think it would probably be easier to follow if we added the
> backlog parameter and the unlink options in separate patches. The
> backlog thing is small, but the unlink part is subtle and requires
> explanation. That's a good sign it might do better in its own commit.

Yes, that helped having them in 2 patches each with 1 concern.

> 
>> Normally, `unix_stream_listen()` always tries to `unlink()` the
>> socket-path before calling `bind()`.  If there is an existing
>> server/daemon already bound and listening on that socket-path, our
>> `unlink()` would have the effect of disassociating the existing
>> server's bound-socket-fd from the socket-path without notifying the
>> existing server.  The existing server could continue to service
>> existing connections (accepted-socket-fd's), but would not receive any
>> further new connections (since clients rendezvous via the socket-path).
>> The existing server would effectively be offline but yet appear to be
>> active.
>>
>> Furthermore, `unix_stream_listen()` creates an opportunity for a brief
>> race condition for connecting clients if they try to connect in the
>> interval between the forced `unlink()` and the subsequent `bind()` (which
>> recreates the socket-path that is bound to a new socket-fd in the current
>> process).
> 
> OK. I'm still not sure of the endgame here for writing non-racy code to
> establish the socket (which is going to require either some atomic
> renaming or some dot-locking in the caller).  But it's plausible to me
> that this option will be a useful primitive.

In part 14/14 in `ipc-unix-sockets.c:create_listener_socket()` I have
code in the calling layer to (try to) handle both the startup races
and basic collisions with existing long-running servers already using
the socket.

But you're right, it might be good to revisit that as a primitive at
this layer.  We only have 1 other caller right now and I don't know
enough about `credential-cache--daemon` to know if it would benefit
from this or not.

> 
> The implementation looks correct, though here are a few small
> observations/questions/nits:
> 
>> -int unix_stream_listen(const char *path)
>> +int unix_stream_listen(const char *path,
>> +		       const struct unix_stream_listen_opts *opts)
>>   {
>> -	int fd, saved_errno;
>> +	int fd = -1;
>> +	int saved_errno;
>> +	int bind_successful = 0;
>> +	int backlog;
>>   	struct sockaddr_un sa;
>>   	struct unix_sockaddr_context ctx;
>>   
>> -	unlink(path);
>> -
>>   	if (unix_sockaddr_init(&sa, path, &ctx) < 0)
>>   		return -1;
> 
> We can return directly here, because we know there is nothing to clean
> up. Which I thought mean that here...
> 
>> +
>>   	fd = socket(AF_UNIX, SOCK_STREAM, 0);
>>   	if (fd < 0)
>> -		die_errno("unable to create socket");
>> +		goto fail;
> 
> ...we are in the same boat. We did not create a socket, so we can just
> return. That makes our cleanup code a bit simpler. But we can't do that,
> because unix_sockaddr_init() may have done things that need cleaning up
> (like chdir). So what you have here is correct.
> 
> IMHO that is all the more reason to push this (and the similar code in
> unix_stream_connect() added in patch 13) into the previous patch.

Agreed.

> 
>> +	if (opts->force_unlink_before_bind)
>> +		unlink(path);
>>   
>>   	if (bind(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0)
>>   		goto fail;
>> +	bind_successful = 1;
> 
> And this one needs to mark a flag explicitly, because we have no other
> visible way of knowing we need to do the unlink. Makes sense.
> 
>> -	if (listen(fd, 5) < 0)
>> +	if (opts->listen_backlog_size > 0)
>> +		backlog = opts->listen_backlog_size;
>> +	else
>> +		backlog = 5;
>> +	if (listen(fd, backlog) < 0)
> 
> The default-to-5 is a bit funny here. We already set the default to 5 in
> UNIX_STREAM_LISTEN_OPTS_INIT. Should it be "0" there, so callers can
> treat that as "use the default", which we fill in here? It probably
> doesn't matter much in practice, but it seems cleaner to have only one
> spot with the magic number.

I'll refactor this a bit.
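
Perhaps something along these lines (a sketch; the constant name is
made up):

	/* one place for the magic number */
	#define DEFAULT_UNIX_STREAM_LISTEN_BACKLOG 5

	/* 0 in the opts means "use the default" */
	#define UNIX_STREAM_LISTEN_OPTS_INIT { 0 }

	/* ...and in unix_stream_listen(): */
	backlog = opts->listen_backlog_size > 0 ?
		  opts->listen_backlog_size :
		  DEFAULT_UNIX_STREAM_LISTEN_BACKLOG;
	if (listen(fd, backlog) < 0)
		goto fail;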

> 
>> @@ -114,7 +125,10 @@ int unix_stream_listen(const char *path)
>>   fail:
>>   	saved_errno = errno;
>>   	unix_sockaddr_cleanup(&ctx);
>> -	close(fd);
>> +	if (fd != -1)
>> +		close(fd);
>> +	if (bind_successful)
>> +		unlink(path);
>>   	errno = saved_errno;
>>   	return -1;
>>   }
> 
> Should we unlink before closing? I usually try to undo actions in the
> reverse order that they were done. I thought at first it might even
> matter here, such that we'd atomically relinquish the name without
> having a moment where it still points to a closed socket (which might be
> less confusing to somebody else trying to connect). But I guess there
> will always be such a moment, because it's not like we would ever
> accept() or service a request.

I'm not sure it matters, but it does look better to unwind things
in reverse order.  And yes, unlinking first is a little bit safer.
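
That is, the fail path could undo things strictly in the reverse order
they were set up, something like (sketch):

	fail:
		saved_errno = errno;
		if (bind_successful)
			unlink(path);		/* undo bind() first */
		if (fd != -1)
			close(fd);		/* then the socket() fd */
		unix_sockaddr_cleanup(&ctx);	/* then the ctx/chdir setup */
		errno = saved_errno;
		return -1;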

> 
> -Peff
> 

^ permalink raw reply	[flat|nested] 178+ messages in thread

* Re: [PATCH v2 09/14] simple-ipc: add t/helper/test-simple-ipc and t0052
  2021-02-02 21:35     ` SZEDER Gábor
  2021-02-03  4:36       ` Jeff King
@ 2021-02-09 15:45       ` Jeff Hostetler
  1 sibling, 0 replies; 178+ messages in thread
From: Jeff Hostetler @ 2021-02-09 15:45 UTC (permalink / raw)
  To: SZEDER Gábor, Jeff Hostetler via GitGitGadget
  Cc: git, Ævar Arnfjörð Bjarmason, Jeff King,
	Chris Torek, Jeff Hostetler



On 2/2/21 4:35 PM, SZEDER Gábor wrote:
> On Mon, Feb 01, 2021 at 07:45:42PM +0000, Jeff Hostetler via GitGitGadget wrote:
>> diff --git a/t/t0052-simple-ipc.sh b/t/t0052-simple-ipc.sh
>> new file mode 100755
>> index 00000000000..69588354545
>> --- /dev/null
>> +++ b/t/t0052-simple-ipc.sh
>> @@ -0,0 +1,129 @@
>> +#!/bin/sh
>> +
>> +test_description='simple command server'
>> +
>> +. ./test-lib.sh
>> +
>> +test-tool simple-ipc SUPPORTS_SIMPLE_IPC || {
>> +	skip_all='simple IPC not supported on this platform'
>> +	test_done
>> +}
>> +
>> +stop_simple_IPC_server () {
>> +	test -n "$SIMPLE_IPC_PID" || return 0
>> +
>> +	kill "$SIMPLE_IPC_PID" &&
>> +	SIMPLE_IPC_PID=
>> +}
>> +
>> +test_expect_success 'start simple command server' '
>> +	{ test-tool simple-ipc daemon --threads=8 & } &&
>> +	SIMPLE_IPC_PID=$! &&
>> +	test_atexit stop_simple_IPC_server &&
>> +
>> +	sleep 1 &&
> 
> This will certainly lead to occasional failures when the daemon takes
> longer than that mere 1 second delay under heavy load or in CI jobs.


Good point.  Thanks!


> 
>> +
>> +	test-tool simple-ipc is-active
>> +'

^ permalink raw reply	[flat|nested] 178+ messages in thread

* Re: [PATCH v2 11/14] unix-socket: add options to unix_stream_listen()
  2021-02-05 23:28       ` Jeff Hostetler
@ 2021-02-09 16:32         ` Jeff King
  2021-02-09 17:39           ` Jeff Hostetler
  0 siblings, 1 reply; 178+ messages in thread
From: Jeff King @ 2021-02-09 16:32 UTC (permalink / raw)
  To: Jeff Hostetler
  Cc: Jeff Hostetler via GitGitGadget, git,
	Ævar Arnfjörð Bjarmason, Chris Torek,
	Jeff Hostetler

On Fri, Feb 05, 2021 at 06:28:13PM -0500, Jeff Hostetler wrote:

> > OK. I'm still not sure of the endgame here for writing non-racy code to
> > establish the socket (which is going to require either some atomic
> > renaming or some dot-locking in the caller).  But it's plausible to me
> > that this option will be a useful primitive.
> 
> In part 14/14 in `ipc-unix-sockets.c:create_listener_socket()` I have
> code in the calling layer to (try to) handle both the startup races
> and basic collisions with existing long-running servers already using
> the socket.

There you make a temp socket and then try to rename it into place.  But
because rename() overwrites the destination, it still seems like two
creating processes can race each other. Something like:

  0. There's no "foo" socket (or maybe there is a stale one that
     nobody's listening on).

  1. Process A wants to become the listener. So it creates foo.A.

  2. Process B likewise. It creates foo.B.

  3. Process A renames foo.A to foo. It believes it will now service
     clients.

  4. Process B renames foo.B to foo. Now process A is stranded but
     doesn't realize it.

I.e., I don't think this is much different than an unlink+create
strategy. You've eliminated the window where a process C shows up during
steps 3 and 4 and sees no socket (because somebody else is in the midst
of a non-atomic unlink+create operation). But there's no atomicity
between the "ping the socket" and "create the socket" steps.

> But you're right, it might be good to revisit that as a primitive at
> this layer.  We only have 1 other caller right now and I don't know
> enough about `credential-cache--daemon` to know if it would benefit
> from this or not.

Yeah, having seen patch 14, it looks like your only new caller always
sets the new unlink option to 1. So it might not be worth making it
optional if you don't need it (especially because the rename trick,
assuming it's portable, is superior to unlink+create; and you'd always
be fine with an unlink on the temp socket).

The call in credential-cache--daemon is definitely racy. It's pretty
much the same thing: it pings the socket to see if it's alive, but is
still susceptible to the problem above. I was never too concerned
about it, since the whole point of the daemon is to hang around until
its contents expire. If it loses the race and nobody contacts it, the
worst case is it waits 30 seconds for somebody to give it data before
exiting. It would benefit slightly from switching to the rename
strategy, but the bigger race would remain.

-Peff

^ permalink raw reply	[flat|nested] 178+ messages in thread

* Re: [PATCH v2 11/14] unix-socket: add options to unix_stream_listen()
  2021-02-09 16:32         ` Jeff King
@ 2021-02-09 17:39           ` Jeff Hostetler
  2021-02-10 15:55             ` Jeff King
  0 siblings, 1 reply; 178+ messages in thread
From: Jeff Hostetler @ 2021-02-09 17:39 UTC (permalink / raw)
  To: Jeff King
  Cc: Jeff Hostetler via GitGitGadget, git,
	Ævar Arnfjörð Bjarmason, Chris Torek,
	Jeff Hostetler



On 2/9/21 11:32 AM, Jeff King wrote:
> On Fri, Feb 05, 2021 at 06:28:13PM -0500, Jeff Hostetler wrote:
> 
>>> OK. I'm still not sure of the endgame here for writing non-racy code to
>>> establish the socket (which is going to require either some atomic
>>> renaming or some dot-locking in the caller).  But it's plausible to me
>>> that this option will be a useful primitive.
>>
>> In part 14/14 in `ipc-unix-sockets.c:create_listener_socket()` I have
>> code in the calling layer to (try to) handle both the startup races
>> and basic collisions with existing long-running servers already using
>> the socket.
> 
> There you make a temp socket and then try to rename it into place.  But
> because rename() overwrites the destination, it still seems like two
> creating processes can race each other. Something like:
> 
>    0. There's no "foo" socket (or maybe there is a stale one that
>       nobody's listening on).
> 
>    1. Process A wants to become the listener. So it creates foo.A.
> 
>    2. Process B likewise. It creates foo.B.
> 
>    3. Process A renames foo.A to foo. It believes it will now service
>       clients.
> 
>    4. Process B renames foo.B to foo. Now process A is stranded but
>       doesn't realize it.
> 

Yeah, in my version two processes could still create uniquely named
sockets and then do the rename trick.  But they capture the inode
number of the socket before they do that.  They periodically lstat
the socket to see if the inode number has changed and if so, assume
it has been stolen from them.  (A bit of a hack, I admit.)
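
Roughly, the check amounts to something like this (a sketch; the
function name and where it gets called from are not the series' actual
code):

	static int socket_was_stolen(const char *path, ino_t inode_at_creation)
	{
		struct stat st;

		if (lstat(path, &st) == -1)
			return 1;	/* path is gone: not ours anymore */
		return st.st_ino != inode_at_creation;
	}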

And I was assuming that 2 servers starting at about the same time
are effectively equivalent -- it doesn't matter which one dies, since
they both should have the same amount of cached state.  Unlike the
case where a long-running server (with lots of state) is replaced by
a newcomer.


> I.e., I don't think this is much different than an unlink+create
> strategy. You've eliminated the window where a process C shows up during
> steps 3 and 4 and sees no socket (because somebody else is in the midst
> of a non-atomic unlink+create operation). But there's no atomicity
> between the "ping the socket" and "create the socket" steps.
> 
>> But you're right, it might be good to revisit that as a primitive at
>> this layer.  We only have 1 other caller right now and I don't know
>> enough about `credential-cache--daemon` to know if it would benefit
>> from this or not.
> 
> Yeah, having seen patch 14, it looks like your only new caller always
> sets the new unlink option to 1. So it might not be worth making it
> optional if you don't need it (especially because the rename trick,
> assuming it's portable, is superior to unlink+create; and you'd always
> be fine with an unlink on the temp socket).


I am wondering if we can use the .LOCK file magic to our advantage
here (in sort of an off-label use).  If we have the server create a
lockfile "<path>.LOCK" and if successful leave it open/locked for the
life of the server (rather than immediately renaming it onto <path>)
and let the normal shutdown code rollback/delete the lockfile in the
cleanup/atexit.

If the server successfully creates the lockfile, then unlink+create
the socket at <path>.

That would give us the unique/exclusive creation (on the lock) that
we need.  Then wrap that with all the edge case cleanup code to
create/delete/manage the peer socket.  Basically if the lock exists,
there should be a live server listening to the socket (unless there
was a crash...).

And yes, then I don't think I need the `preserve_existing` bit in the
opts struct.

> 
> The call in credential-cache--daemon is definitely racy. It's pretty
> much the same thing: it pings the socket to see if it's alive, but is
> still susceptible to the problem above. I was never too concerned
> about it, since the whole point of the daemon is to hang around until
> its contents expire. If it loses the race and nobody contacts it, the
> worst case is it waits 30 seconds for somebody to give it data before
> exiting. It would benefit slightly from switching to the rename
> strategy, but the bigger race would remain.
> 
> -Peff
> 

^ permalink raw reply	[flat|nested] 178+ messages in thread

* Re: [PATCH v2 11/14] unix-socket: add options to unix_stream_listen()
  2021-02-09 17:39           ` Jeff Hostetler
@ 2021-02-10 15:55             ` Jeff King
  2021-02-10 21:31               ` Jeff Hostetler
  0 siblings, 1 reply; 178+ messages in thread
From: Jeff King @ 2021-02-10 15:55 UTC (permalink / raw)
  To: Jeff Hostetler
  Cc: Jeff Hostetler via GitGitGadget, git,
	Ævar Arnfjörð Bjarmason, Chris Torek,
	Jeff Hostetler

On Tue, Feb 09, 2021 at 12:39:22PM -0500, Jeff Hostetler wrote:

> Yeah, in my version two processes could still create uniquely named
> sockets and then do the rename trick.  But they capture the inode
> number of the socket before they do that.  They periodically lstat
> the socket to see if the inode number has changed and if so, assume
> it has been stolen from them.  (A bit of a hack, I admit.)

OK, that makes more sense. I saw the mention of the inode stuff in a
comment, but I didn't see it in the code (I guess if it's a periodic
check it's not in that initial socket creation function).

> And I was assuming that 2 servers starting at about the same time
> are effectively equivalent -- it doesn't matter which one dies, since
> they both should have the same amount of cached state.  Unlike the
> case where a long-running server (with lots of state) is replaced by
> a newcomer.

Yeah, I agree with that notion in general. I do think it would be easier
to reason about if the creation were truly race-proof (probably with a
dot-lock; see below), rather than the later "check if we got replaced"
thing.  OTOH, that "check" strategy covers a variety of cases (including
that somebody tried to ping us, decided we weren't alive due to a
timeout or some other system reason, and then replaced our socket).

Another strategy there could be having the daemon just decide to quit if
nobody contacts it for N time units. It is, after all, a cache. Even if
nobody replaces the socket, it probably makes sense to eventually decide
that the memory we're holding isn't going to good use.

> I am wondering if we can use the .LOCK file magic to our advantage
> here (in sort of an off-label use).  If we have the server create a
> lockfile "<path>.LOCK" and if successful leave it open/locked for the
> life of the server (rather than immediately renaming it onto <path>)
> and let the normal shutdown code rollback/delete the lockfile in the
> cleanup/atexit.
> 
> If the server successfully creates the lockfile, then unlink+create
> the socket at <path>.

I don't even think this is off-label. Though the normal use is for the
.lock file to get renamed into place as the official file, there are a
few other places where we use it solely for mutual exclusion. You just
always end with rollback_lock_file(), and never "commit" it.

So something like:

  1. Optimistically see if socket "foo" is present and accepting
     connections.

  2. If not, then take "foo.lock". If somebody else is holding it, loop
     with a timeout waiting for them to come alive.

  3. Assuming we got the lock, then either unlink+create the socket as
     "foo", or rename-into-place. I don't think it matters that much
     which.

  4. Rollback "foo.lock", unlinking it.

Then one process wins the lock and creates the socket, while any
simultaneous creators spin in step 2, and eventually connect to the
winner.
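
In terms of Git's lockfile API, that could look roughly like this (a
sketch; the wrapper name and timeout are made up, and the initial
"ping the socket" step is omitted for brevity):

	static int listen_with_lock(const char *path,
				    const struct unix_stream_listen_opts *opts)
	{
		struct lock_file lk = LOCK_INIT;
		int fd;

		/* step 2: take "<path>.lock" (the API appends ".lock") */
		if (hold_lock_file_for_update_timeout(&lk, path,
						      LOCK_REPORT_ON_ERROR,
						      1000) < 0)
			return -1;

		/* step 3: we hold the lock, so unlink+create is safe now */
		unlink(path);
		fd = unix_stream_listen(path, opts);

		/* step 4: never commit; just drop "<path>.lock" */
		rollback_lock_file(&lk);

		return fd;
	}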

> That would give us the unique/exclusive creation (on the lock) that
> we need.  Then wrap that with all the edge case cleanup code to
> create/delete/manage the peer socket.  Basically if the lock exists,
> there should be a live server listening to the socket (unless there
> was a crash...).

I think you'd want to delete the lock as soon as you're done with the
setup. That reduces the chances that a dead server (e.g., killed by a
power outage without the chance to clean up after itself) leaves a stale
lock sitting around.

-Peff

^ permalink raw reply	[flat|nested] 178+ messages in thread

* Re: [PATCH v2 11/14] unix-socket: add options to unix_stream_listen()
  2021-02-10 15:55             ` Jeff King
@ 2021-02-10 21:31               ` Jeff Hostetler
  0 siblings, 0 replies; 178+ messages in thread
From: Jeff Hostetler @ 2021-02-10 21:31 UTC (permalink / raw)
  To: Jeff King
  Cc: Jeff Hostetler via GitGitGadget, git,
	Ævar Arnfjörð Bjarmason, Chris Torek,
	Jeff Hostetler



On 2/10/21 10:55 AM, Jeff King wrote:
> On Tue, Feb 09, 2021 at 12:39:22PM -0500, Jeff Hostetler wrote:
> 
>> Yeah, in my version two processes could still create uniquely named
>> sockets and then do the rename trick.  But they capture the inode
>> number of the socket before they do that.  They periodically lstat
>> the socket to see if the inode number has changed and if so, assume
>> it has been stolen from them.  (A bit of a hack, I admit.)
> 
> OK, that makes more sense. I saw the mention of the inode stuff in a
> comment, but I didn't see it in the code (I guess if it's a periodic
> check it's not in that initial socket creation function).
> 

Yeah, there's a very slow poll(2) loop in the listen/accept thread
that watches for that and new connections (and quit messages).

>> And I was assuming that 2 servers starting at about the same time
>> are effectively equivalent -- it doesn't matter which one dies, since
>> they both should have the same amount of cached state.  Unlike the
>> case where a long-running server (with lots of state) is replaced by
>> a newcomer.
> 
> Yeah, I agree with that notion in general. I do think it would be easier
> to reason about if the creation were truly race-proof (probably with a
> dot-lock; see below), rather than the later "check if we got replaced"
> thing.  OTOH, that "check" strategy covers a variety of cases (including
> that somebody tried to ping us, decided we weren't alive due to a
> timeout or some other system reason, and then replaced our socket).
> 
> Another strategy there could be having the daemon just decide to quit if
> nobody contacts it for N time units. It is, after all, a cache. Even if
> nobody replaces the socket, it probably makes sense to eventually decide
> that the memory we're holding isn't going to good use.

I have the poll(2) loop set to recheck the inode for theft every 60
seconds (randomly chosen).

Assuming the socket isn't stolen, I want to leave any thoughts of
an auto-shutdown to the application layer above it.  My next patch
series will use this ipc mechanism to build a FSMonitor daemon that
will watch the filesystem for changes and then be able to quickly
respond to a `git status`, so it is important that it be allowed to
run without any clients for a while (such as during a build).  Yes,
memory concerns are important, so I do want it to auto-shutdown if
the socket is stolen (or the workdir is deleted).

> 
>> I am wondering if we can use the .LOCK file magic to our advantage
>> here (in sort of an off-label use).  If we have the server create a
>> lockfile "<path>.LOCK" and if successful leave it open/locked for the
>> life of the server (rather than immediately renaming it onto <path>)
>> and let the normal shutdown code rollback/delete the lockfile in the
>> cleanup/atexit.
>>
>> If the server successfully creates the lockfile, then unlink+create
>> the socket at <path>.
> 
> I don't even think this is off-label. Though the normal use is for the
> .lock file to get renamed into place as the official file, there are a
> few other places where we use it solely for mutual exclusion. You just
> always end with rollback_lock_file(), and never "commit" it.
> 
> So something like:
> 
>    1. Optimistically see if socket "foo" is present and accepting
>       connections.
> 
>    2. If not, then take "foo.lock". If somebody else is holding it, loop
>       with a timeout waiting for them to come alive.
> 
>    3. Assuming we got the lock, then either unlink+create the socket as
>       "foo", or rename-into-place. I don't think it matters that much
>       which.
> 
>    4. Rollback "foo.lock", unlinking it.
> 
> Then one process wins the lock and creates the socket, while any
> simultaneous creators spin in step 2, and eventually connect to the
> winner.
> 
>> That would give us the unique/exclusive creation (on the lock) that
>> we need.  Then wrap that with all the edge case cleanup code to
>> create/delete/manage the peer socket.  Basically if the lock exists,
>> there should be a live server listening to the socket (unless there
>> was a crash...).
> 
> I think you'd want to delete the lock as soon as you're done with the
> setup. That reduces the chances that a dead server (e.g., killed by a
> power outage without the chance to clean up after itself) leaves a stale
> lock sitting around.

Thanks, this helps.  I've got a version now that is a slight variation on
what you have here that seems to work nicely and has the short-lived
lock file.  I'll post this shortly.

Jeff


^ permalink raw reply	[flat|nested] 178+ messages in thread

* Re: [PATCH v2 06/14] pkt-line: accept additional options in read_packetized_to_strbuf()
  2021-02-01 19:45   ` [PATCH v2 06/14] pkt-line: accept additional options in read_packetized_to_strbuf() Johannes Schindelin via GitGitGadget
@ 2021-02-11  1:52     ` Taylor Blau
  0 siblings, 0 replies; 178+ messages in thread
From: Taylor Blau @ 2021-02-11  1:52 UTC (permalink / raw)
  To: Johannes Schindelin via GitGitGadget
  Cc: git, Ævar Arnfjörð Bjarmason, Jeff Hostetler,
	Jeff King, Chris Torek, Jeff Hostetler, Johannes Schindelin

On Mon, Feb 01, 2021 at 07:45:39PM +0000, Johannes Schindelin via GitGitGadget wrote:
> diff --git a/pkt-line.c b/pkt-line.c
> index 528493bca21..f090fc56eef 100644
> --- a/pkt-line.c
> +++ b/pkt-line.c
> @@ -461,7 +461,7 @@ char *packet_read_line_buf(char **src, size_t *src_len, int *dst_len)
>  	return packet_read_line_generic(-1, src, src_len, dst_len);
>  }
>
> -ssize_t read_packetized_to_strbuf(int fd_in, struct strbuf *sb_out)
> +ssize_t read_packetized_to_strbuf(int fd_in, struct strbuf *sb_out, int options)
>  {
>  	int packet_len;
>
> @@ -477,7 +477,7 @@ ssize_t read_packetized_to_strbuf(int fd_in, struct strbuf *sb_out)
>  			 * that there is already room for the extra byte.
>  			 */
>  			sb_out->buf + sb_out->len, LARGE_PACKET_DATA_MAX+1,
> -			PACKET_READ_GENTLE_ON_EOF);
> +			options | PACKET_READ_GENTLE_ON_EOF);

This feels a little magical to me. Since read_packetized_to_strbuf only
has the one caller you mention, why not have the caller pass all of the
options (including PACKET_READ_GENTLE_ON_EOF)?

>  		if (packet_len <= 0)
>  			break;
>  		sb_out->len += packet_len;
> diff --git a/pkt-line.h b/pkt-line.h
> index 7f31c892165..150319a6f00 100644
> --- a/pkt-line.h
> +++ b/pkt-line.h
> @@ -145,8 +145,12 @@ char *packet_read_line_buf(char **src_buf, size_t *src_len, int *size);
>
>  /*
>   * Reads a stream of variable sized packets until a flush packet is detected.
> + *
> + * The options are augmented by PACKET_READ_GENTLE_ON_EOF and passed to
> + * packet_read.

Obviously this comment will need updating if you take my suggestion.

Thanks,
Taylor

^ permalink raw reply	[flat|nested] 178+ messages in thread

* [PATCH v3 00/12] Simple IPC Mechanism
  2021-02-01 19:45 ` [PATCH v2 00/14] " Jeff Hostetler via GitGitGadget
                     ` (14 preceding siblings ...)
  2021-02-01 22:20   ` [PATCH v2 00/14] Simple IPC Mechanism Junio C Hamano
@ 2021-02-13  0:09   ` Jeff Hostetler via GitGitGadget
  2021-02-13  0:09     ` [PATCH v3 01/12] pkt-line: eliminate the need for static buffer in packet_write_gently() Jeff Hostetler via GitGitGadget
                       ` (12 more replies)
  15 siblings, 13 replies; 178+ messages in thread
From: Jeff Hostetler via GitGitGadget @ 2021-02-13  0:09 UTC (permalink / raw)
  To: git
  Cc: Jeff Hostetler, Jeff King, SZEDER Gábor,
	Johannes Schindelin, Jeff Hostetler

Here is version 3 of my "Simple IPC" series. It addresses the following
review comments from V2:

[1] Convert packet_write_gently() to write the pkt-line header and then the
actual buffer using 2 syscalls, avoiding the need for a static or stack
buffer, and update the callers.

[2] Added a buffer argument to write_packetized_from_fd() to force its one
caller to provide a buffer and avoid the same threading issues discussed
earlier.

[3] Remove the implicit pkt-flush from write_packetized_from_buf(). (V2
added a flag to make it optional and I removed that too.) Updated the
existing callers to call packet_flush_gently() as desired. Renamed
write_packetized_*() functions to have ..._no_flush() suffix to prevent
future accidents with new (more limited) functionality.

[4] Removed the "force_unlink" flag to the unix-socket options that I added
in V1/V2.

[5] Created a new unix_stream_server__listen_with_lock() wrapper function to
safely create a Unix domain socket while holding a lockfile and (hopefully)
eliminate the previously discussed race conditions. Added a little helper
struct and related routines to help manage the life of the socket.

[6] Added test-tool simple-ipc start-daemon to launch a background instance
of test-tool simple-ipc run-daemon and wait for the server to become ready
before exiting. And updated t0052 to use it and avoid the problematic sleep
1 in V1/V2. (There was discussion on the mailing list about using a FIFO in
the test like lib-git-daemon.sh, but there are issues with FIFO support on
Windows that I didn't want to step into.) (And I want to use the same "run"
and "start" technique with the FSMonitor layer, so this lets me explore that
here.)

[7] Rebased onto v2.30.1 to get rid of a copy of Junio's "brew cask" commit
(3831132ace) that I included in earlier versions of this series.

[8] In response to Gábor's comments about a CI test failure on "quit works"
(https://lore.kernel.org/git/20210205193847.GG2091@szeder.dev/) I added a
generous sleep and comments. I'm not completely happy with this solution,
but I'm not sure of a better solution right now.

[9] In response to Taylor's comment on read_packetized_to_strbuf()
(https://lore.kernel.org/git/YCSN260gqNV+DyTI@nand.local/), I've moved
PACKET_READ_GENTLE_ON_EOF flag to all the callers as suggested.

cc: Ævar Arnfjörð Bjarmason avarab@gmail.com
cc: Jeff Hostetler git@jeffhostetler.com
cc: Jeff King peff@peff.net
cc: Chris Torek chris.torek@gmail.com

Jeff Hostetler (9):
  pkt-line: eliminate the need for static buffer in
    packet_write_gently()
  simple-ipc: design documentation for new IPC mechanism
  simple-ipc: add win32 implementation
  unix-socket: eliminate static unix_stream_socket() helper function
  unix-socket: add backlog size option to unix_stream_listen()
  unix-socket: disallow chdir() when creating unix domain sockets
  unix-socket: create `unix_stream_server__listen_with_lock()`
  simple-ipc: add Unix domain socket implementation
  t0052: add simple-ipc tests and t/helper/test-simple-ipc tool

Johannes Schindelin (3):
  pkt-line: do not issue flush packets in write_packetized_*()
  pkt-line: (optionally) libify the packet readers
  pkt-line: add options argument to read_packetized_to_strbuf()

 Documentation/technical/api-simple-ipc.txt |  34 +
 Makefile                                   |   8 +
 builtin/credential-cache--daemon.c         |   3 +-
 builtin/credential-cache.c                 |   2 +-
 compat/simple-ipc/ipc-shared.c             |  28 +
 compat/simple-ipc/ipc-unix-socket.c        | 979 +++++++++++++++++++++
 compat/simple-ipc/ipc-win32.c              | 749 ++++++++++++++++
 config.mak.uname                           |   2 +
 contrib/buildsystems/CMakeLists.txt        |   6 +
 convert.c                                  |  16 +-
 pkt-line.c                                 |  57 +-
 pkt-line.h                                 |  20 +-
 simple-ipc.h                               | 235 +++++
 t/helper/test-simple-ipc.c                 | 713 +++++++++++++++
 t/helper/test-tool.c                       |   1 +
 t/helper/test-tool.h                       |   1 +
 t/t0052-simple-ipc.sh                      | 134 +++
 unix-socket.c                              | 168 +++-
 unix-socket.h                              |  47 +-
 19 files changed, 3150 insertions(+), 53 deletions(-)
 create mode 100644 Documentation/technical/api-simple-ipc.txt
 create mode 100644 compat/simple-ipc/ipc-shared.c
 create mode 100644 compat/simple-ipc/ipc-unix-socket.c
 create mode 100644 compat/simple-ipc/ipc-win32.c
 create mode 100644 simple-ipc.h
 create mode 100644 t/helper/test-simple-ipc.c
 create mode 100755 t/t0052-simple-ipc.sh


base-commit: 773e25afc41b1b6533fa9ae2cd825d0b4a697fad
Published-As: https://github.com/gitgitgadget/git/releases/tag/pr-766%2Fjeffhostetler%2Fsimple-ipc-v3
Fetch-It-Via: git fetch https://github.com/gitgitgadget/git pr-766/jeffhostetler/simple-ipc-v3
Pull-Request: https://github.com/gitgitgadget/git/pull/766

Range-diff vs v2:

  1:  4c6766d41834 <  -:  ------------ ci/install-depends: attempt to fix "brew cask" stuff
  2:  3b03a8ff7a72 !  1:  2d6858b1625a pkt-line: promote static buffer in packet_write_gently() to callers
     @@ Metadata
      Author: Jeff Hostetler <jeffhost@microsoft.com>
      
       ## Commit message ##
     -    pkt-line: promote static buffer in packet_write_gently() to callers
     +    pkt-line: eliminate the need for static buffer in packet_write_gently()
      
     -    Move the static buffer used in `packet_write_gently()` to its callers.
     -    This is a first step to make packet writing more thread-safe.
     +    Teach `packet_write_gently()` to write the pkt-line header and the actual
     +    buffer in 2 separate calls to `write_in_full()` and avoid the need for a
     +    static buffer, thread-safe scratch space, or an excessively large stack
     +    buffer.
     +
     +    Change the API of `write_packetized_from_fd()` to accept a scratch space
     +    argument from its caller to avoid similar issues here.
     +
     +    These changes are intended to make it easier to use pkt-line routines in
     +    a multi-threaded context with multiple concurrent writers writing to
     +    different streams.
      
          Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
      
     + ## convert.c ##
     +@@ convert.c: static int apply_multi_file_filter(const char *path, const char *src, size_t len
     + 	if (err)
     + 		goto done;
     + 
     +-	if (fd >= 0)
     +-		err = write_packetized_from_fd(fd, process->in);
     +-	else
     ++	if (fd >= 0) {
     ++		struct packet_scratch_space scratch;
     ++		err = write_packetized_from_fd(fd, process->in, &scratch);
     ++	} else
     + 		err = write_packetized_from_buf(src, len, process->in);
     + 	if (err)
     + 		goto done;
     +
       ## pkt-line.c ##
      @@ pkt-line.c: int packet_write_fmt_gently(int fd, const char *fmt, ...)
     - 	return status;
     - }
       
     --static int packet_write_gently(const int fd_out, const char *buf, size_t size)
     -+/*
     -+ * Use the provided scratch space to build a combined <hdr><buf> buffer
     -+ * and write it to the file descriptor (in one write if possible).
     -+ */
     -+static int packet_write_gently(const int fd_out, const char *buf, size_t size,
     -+			       struct packet_scratch_space *scratch)
     + static int packet_write_gently(const int fd_out, const char *buf, size_t size)
       {
      -	static char packet_write_buffer[LARGE_PACKET_MAX];
     ++	char header[4];
       	size_t packet_size;
       
      -	if (size > sizeof(packet_write_buffer) - 4)
     -+	if (size > sizeof(scratch->buffer) - 4)
     ++	if (size > LARGE_PACKET_DATA_MAX)
       		return error(_("packet write failed - data exceeds max packet size"));
       
       	packet_trace(buf, size, 1);
     @@ pkt-line.c: int packet_write_fmt_gently(int fd, const char *fmt, ...)
      -	memcpy(packet_write_buffer + 4, buf, size);
      -	if (write_in_full(fd_out, packet_write_buffer, packet_size) < 0)
      +
     -+	set_packet_header(scratch->buffer, packet_size);
     -+	memcpy(scratch->buffer + 4, buf, size);
     ++	set_packet_header(header, packet_size);
      +
     -+	if (write_in_full(fd_out, scratch->buffer, packet_size) < 0)
     ++	/*
     ++	 * Write the header and the buffer in 2 parts so that we do not need
     ++	 * to allocate a buffer or rely on a static buffer.  This avoids perf
     ++	 * and multi-threading issues.
     ++	 */
     ++
     ++	if (write_in_full(fd_out, header, 4) < 0 ||
     ++	    write_in_full(fd_out, buf, size) < 0)
       		return error(_("packet write failed"));
       	return 0;
       }
     - 
     - void packet_write(int fd_out, const char *buf, size_t size)
     - {
     --	if (packet_write_gently(fd_out, buf, size))
     -+	static struct packet_scratch_space scratch;
     -+
     -+	if (packet_write_gently(fd_out, buf, size, &scratch))
     - 		die_errno(_("packet write failed"));
     - }
     - 
      @@ pkt-line.c: void packet_buf_write_len(struct strbuf *buf, const char *data, size_t len)
     + 	packet_trace(data, len, 1);
     + }
       
     - int write_packetized_from_fd(int fd_in, int fd_out)
     +-int write_packetized_from_fd(int fd_in, int fd_out)
     ++int write_packetized_from_fd(int fd_in, int fd_out,
     ++			     struct packet_scratch_space *scratch)
       {
     -+	/*
     -+	 * TODO We could save a memcpy() if we essentially inline
     -+	 * TODO packet_write_gently() here and change the xread()
     -+	 * TODO to pass &buf[4].
     -+	 */
     -+	static struct packet_scratch_space scratch;
     - 	static char buf[LARGE_PACKET_DATA_MAX];
     +-	static char buf[LARGE_PACKET_DATA_MAX];
       	int err = 0;
       	ssize_t bytes_to_write;
     -@@ pkt-line.c: int write_packetized_from_fd(int fd_in, int fd_out)
     + 
     + 	while (!err) {
     +-		bytes_to_write = xread(fd_in, buf, sizeof(buf));
     ++		bytes_to_write = xread(fd_in, scratch->buffer,
     ++				       sizeof(scratch->buffer));
     + 		if (bytes_to_write < 0)
       			return COPY_READ_ERROR;
       		if (bytes_to_write == 0)
       			break;
      -		err = packet_write_gently(fd_out, buf, bytes_to_write);
     -+		err = packet_write_gently(fd_out, buf, bytes_to_write, &scratch);
     ++		err = packet_write_gently(fd_out, scratch->buffer,
     ++					  bytes_to_write);
       	}
       	if (!err)
       		err = packet_flush_gently(fd_out);
     -@@ pkt-line.c: int write_packetized_from_fd(int fd_in, int fd_out)
     - 
     - int write_packetized_from_buf(const char *src_in, size_t len, int fd_out)
     - {
     -+	static struct packet_scratch_space scratch;
     - 	int err = 0;
     - 	size_t bytes_written = 0;
     - 	size_t bytes_to_write;
     -@@ pkt-line.c: int write_packetized_from_buf(const char *src_in, size_t len, int fd_out)
     - 			bytes_to_write = len - bytes_written;
     - 		if (bytes_to_write == 0)
     - 			break;
     --		err = packet_write_gently(fd_out, src_in + bytes_written, bytes_to_write);
     -+		err = packet_write_gently(fd_out, src_in + bytes_written, bytes_to_write, &scratch);
     - 		bytes_written += bytes_to_write;
     - 	}
     - 	if (!err)
      
       ## pkt-line.h ##
      @@
     @@ pkt-line.h
      +#define LARGE_PACKET_DATA_MAX (LARGE_PACKET_MAX - 4)
      +
      +struct packet_scratch_space {
     -+	char buffer[LARGE_PACKET_MAX];
     ++	char buffer[LARGE_PACKET_DATA_MAX]; /* does not include header bytes */
      +};
      +
       /*
        * Write a packetized stream, where each line is preceded by
        * its length (including the header) as a 4-byte hex number.
     +@@ pkt-line.h: void packet_buf_write(struct strbuf *buf, const char *fmt, ...) __attribute__((f
     + void packet_buf_write_len(struct strbuf *buf, const char *data, size_t len);
     + int packet_flush_gently(int fd);
     + int packet_write_fmt_gently(int fd, const char *fmt, ...) __attribute__((format (printf, 2, 3)));
     +-int write_packetized_from_fd(int fd_in, int fd_out);
     ++int write_packetized_from_fd(int fd_in, int fd_out, struct packet_scratch_space *scratch);
     + int write_packetized_from_buf(const char *src_in, size_t len, int fd_out);
     + 
     + /*
      @@ pkt-line.h: enum packet_read_status packet_reader_read(struct packet_reader *reader);
       enum packet_read_status packet_reader_peek(struct packet_reader *reader);
       
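
    A quick illustration, not itself part of any patch: with the scratch
    space supplied by the caller, each writer owns its own buffer and
    concurrent writers on different streams share no static pkt-line state.
    The `struct copy_job` and `copy_thread()` names below are invented for
    the example; only the pkt-line names come from the patch.

	struct copy_job {
		int fd_in;
		int fd_out;
		int error;
	};

	static void *copy_thread(void *data)
	{
		struct copy_job *job = data;
		struct packet_scratch_space scratch;	/* per-thread, on the stack */

		job->error = write_packetized_from_fd(job->fd_in, job->fd_out,
						      &scratch);
		return NULL;
	}
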
  3:  e671894b4c04 <  -:  ------------ pkt-line: add write_packetized_from_buf2() that takes scratch buffer
  4:  0832f7d324da !  2:  91a9f63d6692 pkt-line: optionally skip the flush packet in write_packetized_from_buf()
     @@ Metadata
      Author: Johannes Schindelin <Johannes.Schindelin@gmx.de>
      
       ## Commit message ##
     -    pkt-line: optionally skip the flush packet in write_packetized_from_buf()
     +    pkt-line: do not issue flush packets in write_packetized_*()
      
     -    This function currently has only one caller: `apply_multi_file_filter()`
     -    in `convert.c`. That caller wants a flush packet to be written after
     -    writing the payload.
      +    Remove the `packet_flush_gently()` call in `write_packetized_from_buf()` and
     +    `write_packetized_from_fd()` and require the caller to call it if desired.
     +    Rename both functions to `write_packetized_from_*_no_flush()` to prevent
     +    later merge accidents.
      
     -    However, we are about to introduce a user that wants to write many
     -    packets before a final flush packet, so let's extend this function to
     -    prepare for that scenario.
     +    `write_packetized_from_buf()` currently only has one caller:
     +    `apply_multi_file_filter()` in `convert.c`.  It always wants a flush packet
     +    to be written after writing the payload.
     +
     +    However, we are about to introduce a caller that wants to write many
     +    packets before a final flush packet, so let's make the caller responsible
     +    for emitting the flush packet.
      
          Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
          Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
      
       ## convert.c ##
      @@ convert.c: static int apply_multi_file_filter(const char *path, const char *src, size_t len
     - 	if (fd >= 0)
     - 		err = write_packetized_from_fd(fd, process->in);
     - 	else
     + 
     + 	if (fd >= 0) {
     + 		struct packet_scratch_space scratch;
     +-		err = write_packetized_from_fd(fd, process->in, &scratch);
     ++		err = write_packetized_from_fd_no_flush(fd, process->in, &scratch);
     + 	} else
      -		err = write_packetized_from_buf(src, len, process->in);
     -+		err = write_packetized_from_buf(src, len, process->in, 1);
     ++		err = write_packetized_from_buf_no_flush(src, len, process->in);
     ++	if (err)
     ++		goto done;
     ++
     ++	err = packet_flush_gently(process->in);
       	if (err)
       		goto done;
       
      
       ## pkt-line.c ##
     -@@ pkt-line.c: int write_packetized_from_fd(int fd_in, int fd_out)
     - 	return err;
     +@@ pkt-line.c: void packet_buf_write_len(struct strbuf *buf, const char *data, size_t len)
     + 	packet_trace(data, len, 1);
       }
       
     --int write_packetized_from_buf(const char *src_in, size_t len, int fd_out)
     -+int write_packetized_from_buf(const char *src_in, size_t len, int fd_out,
     -+			      int flush_at_end)
     +-int write_packetized_from_fd(int fd_in, int fd_out,
     +-			     struct packet_scratch_space *scratch)
     ++int write_packetized_from_fd_no_flush(int fd_in, int fd_out,
     ++				      struct packet_scratch_space *scratch)
       {
     - 	static struct packet_scratch_space scratch;
     - 
     --	return write_packetized_from_buf2(src_in, len, fd_out, &scratch);
     -+	return write_packetized_from_buf2(src_in, len, fd_out,
     -+					  flush_at_end, &scratch);
     + 	int err = 0;
     + 	ssize_t bytes_to_write;
     +@@ pkt-line.c: int write_packetized_from_fd(int fd_in, int fd_out,
     + 		err = packet_write_gently(fd_out, scratch->buffer,
     + 					  bytes_to_write);
     + 	}
     +-	if (!err)
     +-		err = packet_flush_gently(fd_out);
     + 	return err;
       }
       
     - int write_packetized_from_buf2(const char *src_in, size_t len, int fd_out,
     -+			       int flush_at_end,
     - 			       struct packet_scratch_space *scratch)
     +-int write_packetized_from_buf(const char *src_in, size_t len, int fd_out)
     ++int write_packetized_from_buf_no_flush(const char *src_in, size_t len, int fd_out)
       {
       	int err = 0;
     -@@ pkt-line.c: int write_packetized_from_buf2(const char *src_in, size_t len, int fd_out,
     - 		err = packet_write_gently(fd_out, src_in + bytes_written, bytes_to_write, scratch);
     + 	size_t bytes_written = 0;
     +@@ pkt-line.c: int write_packetized_from_buf(const char *src_in, size_t len, int fd_out)
     + 		err = packet_write_gently(fd_out, src_in + bytes_written, bytes_to_write);
       		bytes_written += bytes_to_write;
       	}
      -	if (!err)
     -+	if (!err && flush_at_end)
     - 		err = packet_flush_gently(fd_out);
     +-		err = packet_flush_gently(fd_out);
       	return err;
       }
     + 
      
       ## pkt-line.h ##
     -@@ pkt-line.h: void packet_buf_write_len(struct strbuf *buf, const char *data, size_t len);
     +@@ pkt-line.h: void packet_buf_write(struct strbuf *buf, const char *fmt, ...) __attribute__((f
     + void packet_buf_write_len(struct strbuf *buf, const char *data, size_t len);
       int packet_flush_gently(int fd);
       int packet_write_fmt_gently(int fd, const char *fmt, ...) __attribute__((format (printf, 2, 3)));
     - int write_packetized_from_fd(int fd_in, int fd_out);
     +-int write_packetized_from_fd(int fd_in, int fd_out, struct packet_scratch_space *scratch);
      -int write_packetized_from_buf(const char *src_in, size_t len, int fd_out);
     -+int write_packetized_from_buf(const char *src_in, size_t len, int fd_out,
     -+			      int flush_at_end);
     - int write_packetized_from_buf2(const char *src_in, size_t len, int fd_out,
     -+			       int flush_at_end,
     - 			       struct packet_scratch_space *scratch);
     ++int write_packetized_from_fd_no_flush(int fd_in, int fd_out, struct packet_scratch_space *scratch);
     ++int write_packetized_from_buf_no_flush(const char *src_in, size_t len, int fd_out);
       
       /*
     +  * Read a packetized line into the buffer, which must be at least size bytes
  5:  43bc4a26b790 !  3:  e05467def4e1 pkt-line: (optionally) libify the packet readers
     @@ pkt-line.c: enum packet_read_status packet_read_with_status(int fd, char **src_b
       		*pktlen = -1;
      
       ## pkt-line.h ##
     -@@ pkt-line.h: int write_packetized_from_buf2(const char *src_in, size_t len, int fd_out,
     +@@ pkt-line.h: int write_packetized_from_buf_no_flush(const char *src_in, size_t len, int fd_ou
        *
        * If options contains PACKET_READ_DIE_ON_ERR_PACKET, it dies when it sees an
        * ERR packet.
  6:  6a389a353351 !  4:  81e14bed955c pkt-line: accept additional options in read_packetized_to_strbuf()
     @@ Metadata
      Author: Johannes Schindelin <Johannes.Schindelin@gmx.de>
      
       ## Commit message ##
     -    pkt-line: accept additional options in read_packetized_to_strbuf()
     +    pkt-line: add options argument to read_packetized_to_strbuf()
      
     -    The `read_packetized_to_strbuf()` function reads packets into a strbuf
     -    until a flush packet has been received. So far, it has only one caller:
     -    `apply_multi_file_filter()` in `convert.c`. This caller really only
     -    needs the `PACKET_READ_GENTLE_ON_EOF` option to be passed to
     -    `packet_read()` (which makes sense in the scenario where packets should
     -    be read until a flush packet is received).
     +    Update the calling sequence of `read_packetized_to_strbuf()` to take
     +    an options argument and not assume a fixed set of options.  Update the
     +    only existing caller accordingly to explicitly pass the
     +    formerly-assumed flags.
      
     -    We are about to introduce a caller that wants to pass other options
     -    through to `packet_read()`, so let's extend the function signature
     -    accordingly.
     +    The `read_packetized_to_strbuf()` function calls `packet_read()` with
     +    a fixed set of assumed options (`PACKET_READ_GENTLE_ON_EOF`).  This
     +    assumption has been fine for the single existing caller
     +    `apply_multi_file_filter()` in `convert.c`.
     +
     +    In a later commit we would like to add other callers to
     +    `read_packetized_to_strbuf()` that need a different set of options.
      
          Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
     +    Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
      
       ## convert.c ##
      @@ convert.c: static int apply_multi_file_filter(const char *path, const char *src, size_t len
     @@ convert.c: static int apply_multi_file_filter(const char *path, const char *src,
       			goto done;
       
      -		err = read_packetized_to_strbuf(process->out, &nbuf) < 0;
     -+		err = read_packetized_to_strbuf(process->out, &nbuf, 0) < 0;
     ++		err = read_packetized_to_strbuf(process->out, &nbuf,
     ++						PACKET_READ_GENTLE_ON_EOF) < 0;
       		if (err)
       			goto done;
       
     @@ pkt-line.c: ssize_t read_packetized_to_strbuf(int fd_in, struct strbuf *sb_out)
       			 */
       			sb_out->buf + sb_out->len, LARGE_PACKET_DATA_MAX+1,
      -			PACKET_READ_GENTLE_ON_EOF);
     -+			options | PACKET_READ_GENTLE_ON_EOF);
     ++			options);
       		if (packet_len <= 0)
       			break;
       		sb_out->len += packet_len;
      
       ## pkt-line.h ##
      @@ pkt-line.h: char *packet_read_line_buf(char **src_buf, size_t *src_len, int *size);
     - 
       /*
        * Reads a stream of variable sized packets until a flush packet is detected.
     -+ *
     -+ * The options are augmented by PACKET_READ_GENTLE_ON_EOF and passed to
     -+ * packet_read.
        */
      -ssize_t read_packetized_to_strbuf(int fd_in, struct strbuf *sb_out);
     -+ssize_t read_packetized_to_strbuf(int fd_in, struct strbuf *sb_out,
     -+				  int options);
     ++ssize_t read_packetized_to_strbuf(int fd_in, struct strbuf *sb_out, int options);
       
       /*
        * Receive multiplexed output stream over git native protocol.
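
    Taken together with the flush-related patch above, the pkt-line caller
    contract is now roughly the following (an illustrative fragment, not a
    patch; `buf`, `len`, `fd` and the strbuf `answer` are placeholder
    locals): write the packets without an implicit flush, emit the flush
    packet explicitly, and spell out the read options.

	if (write_packetized_from_buf_no_flush(buf, len, fd) < 0 ||
	    packet_flush_gently(fd) < 0)
		return error(_("could not send request"));

	if (read_packetized_to_strbuf(fd, &answer,
				      PACKET_READ_GENTLE_ON_EOF) < 0)
		return error(_("could not read response"));
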
  7:  a7275b4bdc2a =  5:  22eec60761a8 simple-ipc: design documentation for new IPC mechanism
  8:  388366913d41 !  6:  171ec43ecfa4 simple-ipc: add win32 implementation
     @@ compat/simple-ipc/ipc-win32.c (new)
      +
      +	trace2_region_enter("ipc-client", "send-command", NULL);
      +
     -+	if (write_packetized_from_buf2(message, strlen(message),
     -+				       connection->fd, 1,
     -+				       &connection->scratch_write_buffer) < 0) {
     ++	if (write_packetized_from_buf_no_flush(message, strlen(message),
     ++					       connection->fd) < 0 ||
     ++	    packet_flush_gently(connection->fd) < 0) {
      +		ret = error(_("could not send IPC command"));
      +		goto done;
      +	}
      +
      +	FlushFileBuffers((HANDLE)_get_osfhandle(connection->fd));
      +
     -+	if (read_packetized_to_strbuf(connection->fd, answer,
     -+				      PACKET_READ_NEVER_DIE) < 0) {
     ++	if (read_packetized_to_strbuf(
     ++		    connection->fd, answer,
     ++		    PACKET_READ_GENTLE_ON_EOF | PACKET_READ_NEVER_DIE) < 0) {
      +		ret = error(_("could not read IPC response"));
      +		goto done;
      +	}
     @@ compat/simple-ipc/ipc-win32.c (new)
      +	struct ipc_server_data *server_data;
      +	pthread_t pthread_id;
      +	HANDLE hPipe;
     -+	struct packet_scratch_space scratch_write_buffer;
      +};
      +
      +/*
     @@ compat/simple-ipc/ipc-win32.c (new)
      +static int do_io_reply_callback(struct ipc_server_reply_data *reply_data,
      +		       const char *response, size_t response_len)
      +{
     -+	struct packet_scratch_space *scratch =
     -+		&reply_data->server_thread_data->scratch_write_buffer;
     -+
      +	if (reply_data->magic != MAGIC_SERVER_REPLY_DATA)
      +		BUG("reply_cb called with wrong instance data");
      +
     -+	return write_packetized_from_buf2(response, response_len,
     -+					  reply_data->fd, 0, scratch);
     ++	return write_packetized_from_buf_no_flush(response, response_len,
     ++						  reply_data->fd);
      +}
      +
      +/*
     @@ compat/simple-ipc/ipc-win32.c (new)
      +		return error(_("could not create fd from pipe for '%s'"),
      +			     server_thread_data->server_data->buf_path.buf);
      +
     -+	ret = read_packetized_to_strbuf(reply_data.fd, &buf,
     -+					PACKET_READ_NEVER_DIE);
     ++	ret = read_packetized_to_strbuf(
     ++		reply_data.fd, &buf,
     ++		PACKET_READ_GENTLE_ON_EOF | PACKET_READ_NEVER_DIE);
      +	if (ret >= 0) {
      +		ret = server_thread_data->server_data->application_cb(
      +			server_thread_data->server_data->application_data,
     @@ simple-ipc.h (new)
      +
      +struct ipc_client_connection {
      +	int fd;
     -+	struct packet_scratch_space scratch_write_buffer;
      +};
      +
      +/*
 10:  f5d5445cf42e !  7:  b368318e6a23 unix-socket: eliminate static unix_stream_socket() helper function
     @@ Metadata
       ## Commit message ##
           unix-socket: eliminate static unix_stream_socket() helper function
      
     -    The static helper function `unix_stream_socket()` calls `die()`.  This is not
     -    appropriate for all callers.  Eliminate the wrapper function and move the
     -    existing error handling to the callers in preparation for adapting specific
     -    callers.
     +    The static helper function `unix_stream_socket()` calls `die()`.  This
     +    is not appropriate for all callers.  Eliminate the wrapper function
     +    and make the callers propagate the error.
      
          Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
      
     @@ unix-socket.c
       static int chdir_len(const char *orig, int len)
       {
       	char *path = xmemdupz(orig, len);
     -@@ unix-socket.c: int unix_stream_connect(const char *path)
     +@@ unix-socket.c: static int unix_sockaddr_init(struct sockaddr_un *sa, const char *path,
     + 
     + int unix_stream_connect(const char *path)
     + {
     +-	int fd, saved_errno;
     ++	int fd = -1, saved_errno;
     + 	struct sockaddr_un sa;
     + 	struct unix_sockaddr_context ctx;
       
       	if (unix_sockaddr_init(&sa, path, &ctx) < 0)
       		return -1;
      -	fd = unix_stream_socket();
      +	fd = socket(AF_UNIX, SOCK_STREAM, 0);
      +	if (fd < 0)
     -+		die_errno("unable to create socket");
     ++		goto fail;
      +
       	if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0)
       		goto fail;
       	unix_sockaddr_cleanup(&ctx);
     +@@ unix-socket.c: int unix_stream_connect(const char *path)
     + 
     + fail:
     + 	saved_errno = errno;
     ++	if (fd != -1)
     ++		close(fd);
     + 	unix_sockaddr_cleanup(&ctx);
     +-	close(fd);
     + 	errno = saved_errno;
     + 	return -1;
     + }
     + 
     + int unix_stream_listen(const char *path)
     + {
     +-	int fd, saved_errno;
     ++	int fd = -1, saved_errno;
     + 	struct sockaddr_un sa;
     + 	struct unix_sockaddr_context ctx;
     + 
      @@ unix-socket.c: int unix_stream_listen(const char *path)
       
       	if (unix_sockaddr_init(&sa, path, &ctx) < 0)
     @@ unix-socket.c: int unix_stream_listen(const char *path)
      -	fd = unix_stream_socket();
      +	fd = socket(AF_UNIX, SOCK_STREAM, 0);
      +	if (fd < 0)
     -+		die_errno("unable to create socket");
     ++		goto fail;
       
       	if (bind(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0)
       		goto fail;
     +@@ unix-socket.c: int unix_stream_listen(const char *path)
     + 
     + fail:
     + 	saved_errno = errno;
     ++	if (fd != -1)
     ++		close(fd);
     + 	unix_sockaddr_cleanup(&ctx);
     +-	close(fd);
     + 	errno = saved_errno;
     + 	return -1;
     + }
 11:  7a6a69dfc20c !  8:  985b2e02b2df unix-socket: add options to unix_stream_listen()
     @@ Metadata
      Author: Jeff Hostetler <jeffhost@microsoft.com>
      
       ## Commit message ##
     -    unix-socket: add options to unix_stream_listen()
     +    unix-socket: add backlog size option to unix_stream_listen()
      
          Update `unix_stream_listen()` to take an options structure to override
     -    default behaviors.  This includes the size of the `listen()` backlog
     -    and whether it should always unlink the socket file before trying to
     -    create a new one.  Also eliminate calls to `die()` if it cannot create
     -    a socket.
     -
     -    Normally, `unix_stream_listen()` always tries to `unlink()` the
     -    socket-path before calling `bind()`.  If there is an existing
     -    server/daemon already bound and listening on that socket-path, our
     -    `unlink()` would have the effect of disassociating the existing
     -    server's bound-socket-fd from the socket-path without notifying the
     -    existing server.  The existing server could continue to service
     -    existing connections (accepted-socket-fd's), but would not receive any
     -    futher new connections (since clients rendezvous via the socket-path).
     -    The existing server would effectively be offline but yet appear to be
     -    active.
     -
     -    Furthermore, `unix_stream_listen()` creates an opportunity for a brief
     -    race condition for connecting clients if they try to connect in the
     -    interval between the forced `unlink()` and the subsequent `bind()` (which
     -    recreates the socket-path that is bound to a new socket-fd in the current
     -    process).
     +    default behaviors.  This commit includes the size of the `listen()` backlog.
      
          Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
      
     @@ unix-socket.c: int unix_stream_connect(const char *path)
      +int unix_stream_listen(const char *path,
      +		       const struct unix_stream_listen_opts *opts)
       {
     --	int fd, saved_errno;
     -+	int fd = -1;
     -+	int saved_errno;
     -+	int bind_successful = 0;
     + 	int fd = -1, saved_errno;
      +	int backlog;
       	struct sockaddr_un sa;
       	struct unix_sockaddr_context ctx;
       
     --	unlink(path);
     --
     - 	if (unix_sockaddr_init(&sa, path, &ctx) < 0)
     - 		return -1;
     -+
     - 	fd = socket(AF_UNIX, SOCK_STREAM, 0);
     - 	if (fd < 0)
     --		die_errno("unable to create socket");
     -+		goto fail;
     -+
     -+	if (opts->force_unlink_before_bind)
     -+		unlink(path);
     - 
     +@@ unix-socket.c: int unix_stream_listen(const char *path)
       	if (bind(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0)
       		goto fail;
     -+	bind_successful = 1;
       
      -	if (listen(fd, 5) < 0)
     -+	if (opts->listen_backlog_size > 0)
     -+		backlog = opts->listen_backlog_size;
     -+	else
     -+		backlog = 5;
     ++	backlog = opts->listen_backlog_size;
     ++	if (backlog <= 0)
     ++		backlog = DEFAULT_UNIX_STREAM_LISTEN_BACKLOG;
      +	if (listen(fd, backlog) < 0)
       		goto fail;
       
       	unix_sockaddr_cleanup(&ctx);
     -@@ unix-socket.c: int unix_stream_listen(const char *path)
     - fail:
     - 	saved_errno = errno;
     - 	unix_sockaddr_cleanup(&ctx);
     --	close(fd);
     -+	if (fd != -1)
     -+		close(fd);
     -+	if (bind_successful)
     -+		unlink(path);
     - 	errno = saved_errno;
     - 	return -1;
     - }
      
       ## unix-socket.h ##
      @@
     @@ unix-socket.h
       
      +struct unix_stream_listen_opts {
      +	int listen_backlog_size;
     -+	unsigned int force_unlink_before_bind:1;
      +};
      +
     ++#define DEFAULT_UNIX_STREAM_LISTEN_BACKLOG (5)
     ++
      +#define UNIX_STREAM_LISTEN_OPTS_INIT \
      +{ \
     -+	.listen_backlog_size = 5, \
     -+	.force_unlink_before_bind = 1, \
     ++	.listen_backlog_size = DEFAULT_UNIX_STREAM_LISTEN_BACKLOG, \
      +}
      +
       int unix_stream_connect(const char *path);
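
    For illustration only (not part of the patch), a caller that wants a
    larger backlog would now do something like the following; `path` is a
    placeholder, and a `listen_backlog_size` of zero or less falls back to
    DEFAULT_UNIX_STREAM_LISTEN_BACKLOG (5):

	struct unix_stream_listen_opts opts = UNIX_STREAM_LISTEN_OPTS_INIT;
	int fd;

	opts.listen_backlog_size = 50;

	fd = unix_stream_listen(path, &opts);
	if (fd < 0)
		return error_errno(_("could not listen on '%s'"), path);
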
 12:  745b6d5fb746 !  9:  1bfa36409d07 unix-socket: add no-chdir option to unix_stream_listen()
     @@ Metadata
      Author: Jeff Hostetler <jeffhost@microsoft.com>
      
       ## Commit message ##
     -    unix-socket: add no-chdir option to unix_stream_listen()
     +    unix-socket: disallow chdir() when creating unix domain sockets
      
          Calls to `chdir()` are dangerous in a multi-threaded context.  If
     -    `unix_stream_listen()` is given a socket pathname that is too big to
     -    fit in a `sockaddr_un` structure, it will `chdir()` to the parent
     -    directory of the requested socket pathname, create the socket using a
     -    relative pathname, and then `chdir()` back.  This is not thread-safe.
     +    `unix_stream_listen()` or `unix_stream_connect()` is given a socket
     +    pathname that is too long to fit in a `sockaddr_un` structure, it will
     +    `chdir()` to the parent directory of the requested socket pathname,
     +    create the socket using a relative pathname, and then `chdir()` back.
     +    This is not thread-safe.
      
     -    Add `disallow_chdir` flag to `struct unix_sockaddr_context` and change
     -    all callers to pass an initialized context structure.
     -
     -    Teach `unix_sockaddr_init()` to not allow calls to `chdir()` when flag
     -    is set.
      +    Teach `unix_sockaddr_init()` to not allow calls to `chdir()` when its
      +    new `disallow_chdir` argument is set.
      
          Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
      
     - ## unix-socket.c ##
     -@@ unix-socket.c: static int chdir_len(const char *orig, int len)
     + ## builtin/credential-cache.c ##
     +@@
     + static int send_request(const char *socket, const struct strbuf *out)
     + {
     + 	int got_data = 0;
     +-	int fd = unix_stream_connect(socket);
     ++	int fd = unix_stream_connect(socket, 0);
       
     - struct unix_sockaddr_context {
     - 	char *orig_dir;
     -+	unsigned int disallow_chdir:1;
     - };
     + 	if (fd < 0)
     + 		return -1;
     +
     + ## unix-socket.c ##
     +@@ unix-socket.c: static void unix_sockaddr_cleanup(struct unix_sockaddr_context *ctx)
     + }
       
     -+#define UNIX_SOCKADDR_CONTEXT_INIT \
     -+{ \
     -+	.orig_dir=NULL, \
     -+	.disallow_chdir=0, \
     -+}
     -+
     - static void unix_sockaddr_cleanup(struct unix_sockaddr_context *ctx)
     - {
     - 	if (!ctx->orig_dir)
     -@@ unix-socket.c: static int unix_sockaddr_init(struct sockaddr_un *sa, const char *path,
     + static int unix_sockaddr_init(struct sockaddr_un *sa, const char *path,
     +-			      struct unix_sockaddr_context *ctx)
     ++			      struct unix_sockaddr_context *ctx,
     ++			      int disallow_chdir)
       {
       	int size = strlen(path) + 1;
       
     --	ctx->orig_dir = NULL;
     -+	if (ctx->disallow_chdir && size > sizeof(sa->sun_path)) {
     -+		errno = ENAMETOOLONG;
     -+		return -1;
     -+	}
     -+
     + 	ctx->orig_dir = NULL;
       	if (size > sizeof(sa->sun_path)) {
     - 		const char *slash = find_last_dir_sep(path);
     +-		const char *slash = find_last_dir_sep(path);
     ++		const char *slash;
       		const char *dir;
     -@@ unix-socket.c: int unix_stream_connect(const char *path)
     + 		struct strbuf cwd = STRBUF_INIT;
     + 
     ++		if (disallow_chdir) {
     ++			errno = ENAMETOOLONG;
     ++			return -1;
     ++		}
     ++
     ++		slash = find_last_dir_sep(path);
     + 		if (!slash) {
     + 			errno = ENAMETOOLONG;
     + 			return -1;
     +@@ unix-socket.c: static int unix_sockaddr_init(struct sockaddr_un *sa, const char *path,
     + 	return 0;
     + }
     + 
     +-int unix_stream_connect(const char *path)
     ++int unix_stream_connect(const char *path, int disallow_chdir)
       {
     - 	int fd, saved_errno;
     + 	int fd = -1, saved_errno;
       	struct sockaddr_un sa;
     --	struct unix_sockaddr_context ctx;
     -+	struct unix_sockaddr_context ctx = UNIX_SOCKADDR_CONTEXT_INIT;
     + 	struct unix_sockaddr_context ctx;
       
     - 	if (unix_sockaddr_init(&sa, path, &ctx) < 0)
     +-	if (unix_sockaddr_init(&sa, path, &ctx) < 0)
     ++	if (unix_sockaddr_init(&sa, path, &ctx, disallow_chdir) < 0)
       		return -1;
     + 	fd = socket(AF_UNIX, SOCK_STREAM, 0);
     + 	if (fd < 0)
      @@ unix-socket.c: int unix_stream_listen(const char *path,
     - 	int bind_successful = 0;
     - 	int backlog;
     - 	struct sockaddr_un sa;
     --	struct unix_sockaddr_context ctx;
     -+	struct unix_sockaddr_context ctx = UNIX_SOCKADDR_CONTEXT_INIT;
     -+
     -+	ctx.disallow_chdir = opts->disallow_chdir;
       
     - 	if (unix_sockaddr_init(&sa, path, &ctx) < 0)
     + 	unlink(path);
     + 
     +-	if (unix_sockaddr_init(&sa, path, &ctx) < 0)
     ++	if (unix_sockaddr_init(&sa, path, &ctx, opts->disallow_chdir) < 0)
       		return -1;
     + 	fd = socket(AF_UNIX, SOCK_STREAM, 0);
     + 	if (fd < 0)
      
       ## unix-socket.h ##
      @@
     + 
       struct unix_stream_listen_opts {
       	int listen_backlog_size;
     - 	unsigned int force_unlink_before_bind:1;
      +	unsigned int disallow_chdir:1;
       };
       
     + #define DEFAULT_UNIX_STREAM_LISTEN_BACKLOG (5)
     +@@ unix-socket.h: struct unix_stream_listen_opts {
       #define UNIX_STREAM_LISTEN_OPTS_INIT \
       { \
     - 	.listen_backlog_size = 5, \
     - 	.force_unlink_before_bind = 1, \
     + 	.listen_backlog_size = DEFAULT_UNIX_STREAM_LISTEN_BACKLOG, \
      +	.disallow_chdir = 0, \
       }
       
     - int unix_stream_connect(const char *path);
     +-int unix_stream_connect(const char *path);
     ++int unix_stream_connect(const char *path, int disallow_chdir);
     + int unix_stream_listen(const char *path,
     + 		       const struct unix_stream_listen_opts *opts);
     + 
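
    Illustrative fragment (not part of the patch): a multi-threaded caller
    that cannot tolerate a temporary chdir() passes 1 for the new
    `disallow_chdir` argument and handles the ENAMETOOLONG failure itself
    (`path` is a placeholder):

	int fd = unix_stream_connect(path, 1 /* disallow_chdir */);

	if (fd < 0 && errno == ENAMETOOLONG)
		return error(_("socket path '%s' is too long"), path);
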
  -:  ------------ > 10:  b443e11ac32f unix-socket: create `unix_stream_server__listen_with_lock()`
 14:  72c1c209c380 ! 11:  43c8db9a4468 simple-ipc: add Unix domain socket implementation
     @@ compat/simple-ipc/ipc-unix-socket.c (new)
      +	*pfd = -1;
      +
      +	for (k = 0; k < timeout_ms; k += wait_ms) {
     -+		int fd = unix_stream_connect(path);
     ++		int fd = unix_stream_connect(path, options->uds_disallow_chdir);
      +
      +		if (fd != -1) {
      +			*pfd = fd;
     @@ compat/simple-ipc/ipc-unix-socket.c (new)
      +
      +	trace2_region_enter("ipc-client", "send-command", NULL);
      +
     -+	if (write_packetized_from_buf2(message, strlen(message),
     -+				       connection->fd, 1,
     -+				       &connection->scratch_write_buffer) < 0) {
     ++	if (write_packetized_from_buf_no_flush(message, strlen(message),
     ++					       connection->fd) < 0 ||
     ++	    packet_flush_gently(connection->fd) < 0) {
      +		ret = error(_("could not send IPC command"));
      +		goto done;
      +	}
      +
     -+	if (read_packetized_to_strbuf(connection->fd, answer,
     -+				      PACKET_READ_NEVER_DIE) < 0) {
     ++	if (read_packetized_to_strbuf(
     ++		    connection->fd, answer,
     ++		    PACKET_READ_GENTLE_ON_EOF | PACKET_READ_NEVER_DIE) < 0) {
      +		ret = error(_("could not read IPC response"));
      +		goto done;
      +	}
     @@ compat/simple-ipc/ipc-unix-socket.c (new)
      +	struct ipc_worker_thread_data *next_thread;
      +	struct ipc_server_data *server_data;
      +	pthread_t pthread_id;
     -+	struct packet_scratch_space scratch_write_buffer;
      +};
      +
      +struct ipc_accept_thread_data {
      +	enum magic magic;
      +	struct ipc_server_data *server_data;
      +
     -+	int fd_listen;
     -+	struct stat st_listen;
     ++	struct unix_stream_server_socket *server_socket;
      +
      +	int fd_send_shutdown;
      +	int fd_wait_shutdown;
     @@ compat/simple-ipc/ipc-unix-socket.c (new)
      +static int do_io_reply_callback(struct ipc_server_reply_data *reply_data,
      +		       const char *response, size_t response_len)
      +{
     -+	struct packet_scratch_space *scratch =
     -+		&reply_data->worker_thread_data->scratch_write_buffer;
     -+
      +	if (reply_data->magic != MAGIC_SERVER_REPLY_DATA)
      +		BUG("reply_cb called with wrong instance data");
      +
     -+	return write_packetized_from_buf2(response, response_len,
     -+					  reply_data->fd, 0, scratch);
     ++	return write_packetized_from_buf_no_flush(response, response_len,
     ++						  reply_data->fd);
      +}
      +
      +/* A randomly chosen value. */
     @@ compat/simple-ipc/ipc-unix-socket.c (new)
      +
      +	reply_data.fd = fd;
      +
     -+	ret = read_packetized_to_strbuf(reply_data.fd, &buf,
     -+					PACKET_READ_NEVER_DIE);
     ++	ret = read_packetized_to_strbuf(
     ++		reply_data.fd, &buf,
     ++		PACKET_READ_GENTLE_ON_EOF | PACKET_READ_NEVER_DIE);
      +	if (ret >= 0) {
      +		ret = worker_thread_data->server_data->application_cb(
      +			worker_thread_data->server_data->application_data,
     @@ compat/simple-ipc/ipc-unix-socket.c (new)
      +		if (ret == SIMPLE_IPC_QUIT) {
      +			trace2_data_string("ipc-worker", NULL, "queue_stop_async",
      +					   "application_quit");
     -+			/* The application told us to shutdown. */
     ++			/*
     ++			 * The application layer is telling the ipc-server
     ++			 * layer to shutdown.
     ++			 *
     ++			 * We DO NOT have a response to send to the client.
     ++			 *
     ++			 * Queue an async stop (to stop the other threads) and
     ++			 * allow this worker thread to exit now (no sense waiting
     ++			 * for the thread-pool shutdown signal).
     ++			 *
     ++			 * Other non-idle worker threads are allowed to finish
     ++			 * responding to their current clients.
     ++			 */
      +			ipc_server_stop_async(server_data);
      +			break;
      +		}
     @@ compat/simple-ipc/ipc-unix-socket.c (new)
      +	return NULL;
      +}
      +
     -+/*
     -+ * Return 1 if someone deleted or stole the on-disk socket from us.
     -+ */
     -+static int socket_was_stolen(struct ipc_accept_thread_data *accept_thread_data)
     -+{
     -+	struct stat st;
     -+	struct stat *ref_st = &accept_thread_data->st_listen;
     -+
     -+	if (lstat(accept_thread_data->server_data->buf_path.buf, &st) == -1)
     -+		return 1;
     -+
     -+	if (st.st_ino != ref_st->st_ino)
     -+		return 1;
     -+
     -+	/* We might also consider the creation time on some platforms. */
     -+
     -+	return 0;
     -+}
     -+
      +/* A randomly chosen value. */
      +#define MY_ACCEPT_POLL_TIMEOUT_MS (60 * 1000)
      +
     @@ compat/simple-ipc/ipc-unix-socket.c (new)
      +		pollfd[0].fd = accept_thread_data->fd_wait_shutdown;
      +		pollfd[0].events = POLLIN;
      +
     -+		pollfd[1].fd = accept_thread_data->fd_listen;
     ++		pollfd[1].fd = accept_thread_data->server_socket->fd_socket;
      +		pollfd[1].events = POLLIN;
      +
      +		result = poll(pollfd, 2, MY_ACCEPT_POLL_TIMEOUT_MS);
     @@ compat/simple-ipc/ipc-unix-socket.c (new)
      +
      +			/*
      +			 * If someone deletes or force-creates a new unix
     -+			 * domain socket at out path, all future clients
     ++			 * domain socket at our path, all future clients
      +			 * will be routed elsewhere and we silently starve.
      +			 * If that happens, just queue a shutdown.
      +			 */
     -+			if (socket_was_stolen(
     -+				    accept_thread_data)) {
     ++			if (unix_stream_server__was_stolen(
     ++				    accept_thread_data->server_socket)) {
      +				trace2_data_string("ipc-accept", NULL,
      +						   "queue_stop_async",
      +						   "socket_stolen");
     @@ compat/simple-ipc/ipc-unix-socket.c (new)
      +		}
      +
      +		if (pollfd[1].revents & POLLIN) {
     -+			/* a connection is available on fd_listen */
     ++			/* a connection is available on server_socket */
      +
     -+			int client_fd = accept(accept_thread_data->fd_listen,
     -+					       NULL, NULL);
     ++			int client_fd =
     ++				accept(accept_thread_data->server_socket->fd_socket,
     ++				       NULL, NULL);
      +			if (client_fd >= 0)
      +				return client_fd;
      +
     @@ compat/simple-ipc/ipc-unix-socket.c (new)
      + */
      +#define LISTEN_BACKLOG (50)
      +
     -+/*
     -+ * Create a unix domain socket at the given path to listen for
     -+ * client connections.  The resulting socket will then appear
     -+ * in the filesystem as an inode with S_IFSOCK.  The inode is
     -+ * itself created as part of the `bind(2)` operation.
     -+ *
     -+ * The term "socket" is ambiguous in this context.  We want to open a
     -+ * "socket-fd" that is bound to a "socket-inode" (path) on disk.  We
     -+ * listen on "socket-fd" for new connections and clients try to
     -+ * open/connect using the "socket-inode" pathname.
     -+ *
     -+ * Unix domain sockets have a fundamental design flaw because the
     -+ * "socket-inode" persists until the pathname is deleted; closing the
     -+ * listening "socket-fd" only closes the socket handle/descriptor, it
     -+ * does not delete the inode/pathname.
     -+ *
     -+ * Well-behaving service daemons are expected to also delete the inode
     -+ * before shutdown.  If a service crashes (or forgets) it can leave
     -+ * the (now stale) inode in the filesystem.  This behaves like a stale
     -+ * ".lock" file and may prevent future service instances from starting
     -+ * up correctly.  (Because they won't be able to bind.)
     -+ *
     -+ * When future service instances try to create the listener socket,
     -+ * `bind(2)` will fail with EADDRINUSE -- because the inode already
     -+ * exists.  However, the new instance cannot tell if it is a stale
     -+ * inode *or* another service instance is already running.
     -+ *
     -+ * One possible solution is to blindly unlink the inode before
     -+ * attempting to bind a new socket-fd and thus create a new
     -+ * socket-inode.  Then `bind(2)` should always succeed.  However, if
     -+ * there is an existing service instance, it would be orphaned -- it
     -+ * would still be listening on a socket-fd that is still bound to an
     -+ * (unlinked) socket-inode, but that socket-inode is no longer
     -+ * associated with the pathname.  New client connections will arrive
     -+ * at OUR new socket-inode -- rather than the existing server's
     -+ * socket.  (I suppose it is up to the existing server to detect that
     -+ * its socket-inode has been stolen and shutdown.)
     -+ *
     -+ * Another possible solution is to try to use the ".lock" trick, but
     -+ * bind() does not have a exclusive-create use bit like open() does,
     -+ * so we cannot have multiple servers fighting/racing to create the
     -+ * same file name without having losers lose without knowing that they
     -+ * lost.
     -+ *
     -+ * We try to avoid such stealing and would rather fail to run than
     -+ * steal an existing socket-inode (because we assume that the
     -+ * existing server has more context and value to the clients than a
     -+ * freshly started server).  However, if multiple servers are racing
     -+ * to start, we don't care which one wins -- none of them have any
     -+ * state information yet worth fighting for.
     -+ *
     -+ * Create a "unique" socket-inode (with our PID in it (and assume that
     -+ * we can force-delete an existing socket with that name)).  Stat it
     -+ * to get the inode number and ctime -- so that we can identify it as
     -+ * the one we created.  Then use the atomic-rename trick to install it
     -+ * in the real location.  (This will unlink an existing socket with
     -+ * that pathname -- and thereby steal the real socket-inode from an
     -+ * existing server.)
     -+ *
     -+ * Elsewhere, our thread will periodically poll the socket-inode to
     -+ * see if someone else steals ours.
     -+ */
     -+static int create_listener_socket(const char *path,
     -+				  const struct ipc_server_opts *ipc_opts,
     -+				  struct stat *st_socket)
     ++static struct unix_stream_server_socket *create_listener_socket(
     ++	const char *path,
     ++	const struct ipc_server_opts *ipc_opts)
      +{
     -+	struct stat st;
     -+	struct strbuf buf_uniq = STRBUF_INIT;
     -+	int fd_listen;
     ++	struct unix_stream_server_socket *server_socket = NULL;
      +	struct unix_stream_listen_opts uslg_opts = UNIX_STREAM_LISTEN_OPTS_INIT;
      +
     -+	if (!lstat(path, &st) && S_ISSOCK(st.st_mode)) {
     -+		int fd_client;
     -+		/*
     -+		 * A socket-inode at `path` exists on disk, but we
     -+		 * don't know whether it belongs to an active server
     -+		 * or if the last server died without cleaning up.
     -+		 *
     -+		 * Poke it with a trivial connection to try to find out.
     -+		 */
     -+		trace2_data_string("ipc-server", NULL, "try-detect-server",
     -+				   path);
     -+		fd_client = unix_stream_connect(path);
     -+		if (fd_client >= 0) {
     -+			close(fd_client);
     -+			errno = EADDRINUSE;
     -+			return error_errno(_("socket already in use '%s'"),
     -+					   path);
     -+		}
     -+	}
     -+
     -+	/*
     -+	 * Create pathname to our "unique" socket and set it up for
     -+	 * business.
     -+	 */
     -+	strbuf_addf(&buf_uniq, "%s.%d", path, getpid());
     -+
      +	uslg_opts.listen_backlog_size = LISTEN_BACKLOG;
     -+	uslg_opts.force_unlink_before_bind = 1;
      +	uslg_opts.disallow_chdir = ipc_opts->uds_disallow_chdir;
     -+	fd_listen = unix_stream_listen(buf_uniq.buf, &uslg_opts);
     -+	if (fd_listen < 0) {
     -+		int saved_errno = errno;
     -+		error_errno(_("could not create listener socket '%s'"),
     -+			    buf_uniq.buf);
     -+		strbuf_release(&buf_uniq);
     -+		errno = saved_errno;
     -+		return -1;
     -+	}
      +
     -+	if (lstat(buf_uniq.buf, st_socket)) {
     -+		int saved_errno = errno;
     -+		error_errno(_("could not stat listener socket '%s'"),
     -+			    buf_uniq.buf);
     -+		close(fd_listen);
     -+		unlink(buf_uniq.buf);
     -+		strbuf_release(&buf_uniq);
     -+		errno = saved_errno;
     -+		return -1;
     -+	}
     ++	server_socket = unix_stream_server__listen_with_lock(path, &uslg_opts);
     ++	if (!server_socket)
     ++		return NULL;
      +
     -+	if (set_socket_blocking_flag(fd_listen, 1)) {
     ++	if (set_socket_blocking_flag(server_socket->fd_socket, 1)) {
      +		int saved_errno = errno;
      +		error_errno(_("could not set listener socket nonblocking '%s'"),
     -+			    buf_uniq.buf);
     -+		close(fd_listen);
     -+		unlink(buf_uniq.buf);
     -+		strbuf_release(&buf_uniq);
     -+		errno = saved_errno;
     -+		return -1;
     -+	}
     -+
     -+	/*
     -+	 * Install it as the "real" socket so that clients will starting
     -+	 * connecting to our socket.
     -+	 */
     -+	if (rename(buf_uniq.buf, path)) {
     -+		int saved_errno = errno;
     -+		error_errno(_("could not create listener socket '%s'"), path);
     -+		close(fd_listen);
     -+		unlink(buf_uniq.buf);
     -+		strbuf_release(&buf_uniq);
     ++			    path);
     ++		unix_stream_server__free(server_socket);
      +		errno = saved_errno;
     -+		return -1;
     ++		return NULL;
      +	}
      +
     -+	strbuf_release(&buf_uniq);
     -+	trace2_data_string("ipc-server", NULL, "try-listen", path);
     -+	return fd_listen;
     ++	trace2_data_string("ipc-server", NULL, "listen-with-lock", path);
     ++	return server_socket;
      +}
      +
     -+static int setup_listener_socket(const char *path, struct stat *st_socket,
     -+				 const struct ipc_server_opts *ipc_opts)
     ++static struct unix_stream_server_socket *setup_listener_socket(
     ++	const char *path,
     ++	const struct ipc_server_opts *ipc_opts)
      +{
     -+	int fd_listen;
     ++	struct unix_stream_server_socket *server_socket;
      +
      +	trace2_region_enter("ipc-server", "create-listener_socket", NULL);
     -+	fd_listen = create_listener_socket(path, ipc_opts, st_socket);
     ++	server_socket = create_listener_socket(path, ipc_opts);
      +	trace2_region_leave("ipc-server", "create-listener_socket", NULL);
      +
     -+	return fd_listen;
     ++	return server_socket;
      +}
      +
      +/*
     @@ compat/simple-ipc/ipc-unix-socket.c (new)
      +			 ipc_server_application_cb *application_cb,
      +			 void *application_data)
      +{
     ++	struct unix_stream_server_socket *server_socket = NULL;
      +	struct ipc_server_data *server_data;
     -+	int fd_listen;
     -+	struct stat st_listen;
      +	int sv[2];
      +	int k;
      +	int nr_threads = opts->nr_threads;
     @@ compat/simple-ipc/ipc-unix-socket.c (new)
      +				   path);
      +	}
      +
     -+	fd_listen = setup_listener_socket(path, &st_listen, opts);
     -+	if (fd_listen < 0) {
     ++	server_socket = setup_listener_socket(path, opts);
     ++	if (!server_socket) {
      +		int saved_errno = errno;
      +		close(sv[0]);
      +		close(sv[1]);
     @@ compat/simple-ipc/ipc-unix-socket.c (new)
      +		xcalloc(1, sizeof(*server_data->accept_thread));
      +	server_data->accept_thread->magic = MAGIC_ACCEPT_THREAD_DATA;
      +	server_data->accept_thread->server_data = server_data;
     -+	server_data->accept_thread->fd_listen = fd_listen;
     -+	server_data->accept_thread->st_listen = st_listen;
     ++	server_data->accept_thread->server_socket = server_socket;
      +	server_data->accept_thread->fd_send_shutdown = sv[0];
      +	server_data->accept_thread->fd_wait_shutdown = sv[1];
      +
     @@ compat/simple-ipc/ipc-unix-socket.c (new)
      +
      +	accept_thread_data = server_data->accept_thread;
      +	if (accept_thread_data) {
     -+		if (accept_thread_data->fd_listen != -1) {
     -+			/*
     -+			 * Only unlink the unix domain socket if we
     -+			 * created it.  That is, if another daemon
     -+			 * process force-created a new socket at this
     -+			 * path, and effectively steals our path
     -+			 * (which prevents us from receiving any
     -+			 * future clients), we don't want to do the
     -+			 * same thing to them.
     -+			 */
     -+			if (!socket_was_stolen(
     -+				    accept_thread_data))
     -+				unlink(server_data->buf_path.buf);
     ++		unix_stream_server__free(accept_thread_data->server_socket);
      +
     -+			close(accept_thread_data->fd_listen);
     -+		}
      +		if (accept_thread_data->fd_send_shutdown != -1)
      +			close(accept_thread_data->fd_send_shutdown);
      +		if (accept_thread_data->fd_wait_shutdown != -1)
     @@ simple-ipc.h
       #define SUPPORTS_SIMPLE_IPC
       #endif
       
     +@@ simple-ipc.h: struct ipc_client_connect_options {
     + 	 * the service and need to wait for it to become ready.
     + 	 */
     + 	unsigned int wait_if_not_found:1;
     ++
     ++	/*
     ++	 * Disallow chdir() when creating a Unix domain socket.
     ++	 */
     ++	unsigned int uds_disallow_chdir:1;
     + };
     + 
     + #define IPC_CLIENT_CONNECT_OPTIONS_INIT { \
     + 	.wait_if_busy = 0, \
     + 	.wait_if_not_found = 0, \
     ++	.uds_disallow_chdir = 0, \
     + }
     + 
     + /*
      @@ simple-ipc.h: struct ipc_server_data;
       struct ipc_server_opts
       {
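
    To help reviewers see the pieces together, here is a rough client-side
    sketch (not part of the series) of the simple-ipc API as it appears in
    the hunks above and as the test helper in the next patch exercises it.
    The wrapper `send_one_request()`, its arguments and the includes are
    assumptions made for the example; the simple-ipc names come from the
    patch.

	#include "cache.h"
	#include "simple-ipc.h"

	static int send_one_request(const char *path, const char *request)
	{
		struct ipc_client_connect_options options =
			IPC_CLIENT_CONNECT_OPTIONS_INIT;
		struct strbuf answer = STRBUF_INIT;
		int ret;

		options.wait_if_busy = 1;
		options.uds_disallow_chdir = 0;

		if (ipc_get_active_state(path) != IPC_STATE__LISTENING)
			return error(_("no simple-ipc server listening at '%s'"),
				     path);

		ret = ipc_client_send_command(path, &options, request, &answer);
		if (!ret)
			printf("%s\n", answer.buf);

		strbuf_release(&answer);
		return ret;
	}
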
  9:  f0bebf1cdb31 ! 12:  1e5c856ade85 simple-ipc: add t/helper/test-simple-ipc and t0052
     @@ Metadata
      Author: Jeff Hostetler <jeffhost@microsoft.com>
      
       ## Commit message ##
     -    simple-ipc: add t/helper/test-simple-ipc and t0052
     +    t0052: add simple-ipc tests and t/helper/test-simple-ipc tool
      
     -    Create unit tests for "simple-ipc".  These are currently only enabled
     -    on Windows.
     +    Create t0052-simple-ipc.sh with unit tests for the "simple-ipc" mechanism.
     +
     +    Create t/helper/test-simple-ipc test tool to exercise the "simple-ipc"
     +    functions.
     +
     +    When the tool is invoked with "run-daemon", it runs a server to listen
     +    for "simple-ipc" connections on a test socket or named pipe and
     +    responds to a set of commands to exercise/stress the communication
     +    setup.
     +
     +    When the tool is invoked with "start-daemon", it spawns a "run-daemon"
     +    command in the background and waits for the server to become ready
     +    before exiting.  (This helps make unit tests in t0052 more predictable
     +    and avoids the need for arbitrary sleeps in the test script.)
     +
     +    The tool also has a series of client "send" commands to send commands
     +    and data to a server instance.
      
          Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
      
     @@ t/helper/test-simple-ipc.c (new)
      +#include "simple-ipc.h"
      +#include "parse-options.h"
      +#include "thread-utils.h"
     ++#include "strvec.h"
      +
      +#ifndef SUPPORTS_SIMPLE_IPC
      +int cmd__simple_ipc(int argc, const char **argv)
     @@ t/helper/test-simple-ipc.c (new)
      +
      +	if (!strcmp(command, "quit")) {
      +		/*
     -+		 * Tell ipc-server to hangup with an empty reply.
     ++		 * The client sent a "quit" command.  This is an async
     ++		 * request for the server to shutdown.
     ++		 *
     ++		 * We DO NOT send the client a response message
     ++		 * (because we have nothing to say and the other
     ++		 * server threads have not yet stopped).
     ++		 *
     ++		 * Tell the ipc-server layer to start shutting down.
     ++		 * This includes: stop listening for new connections
     ++		 * on the socket/pipe and telling all worker threads
     ++		 * to finish/drain their outgoing responses to other
     ++		 * clients.
     ++		 *
     ++		 * This DOES NOT force an immediate sync shutdown.
      +		 */
      +		return SIMPLE_IPC_QUIT;
      +	}
     @@ t/helper/test-simple-ipc.c (new)
      +	};
      +
      +	const char * const daemon_usage[] = {
     -+		N_("test-helper simple-ipc daemon [<options>"),
      ++		N_("test-helper simple-ipc run-daemon [<options>]"),
      +		NULL
      +	};
      +	struct option daemon_options[] = {
     @@ t/helper/test-simple-ipc.c (new)
      +	return ipc_server_run(path, &opts, test_app_cb, (void*)&my_app_data);
      +}
      +
     ++#ifndef GIT_WINDOWS_NATIVE
     ++/*
     ++ * This is adapted from `daemonize()`.  Use `fork()` to directly create and
     ++ * run the daemon in a child process.
     ++ */
     ++static int spawn_server(const char *path,
     ++			const struct ipc_server_opts *opts,
     ++			pid_t *pid)
     ++{
     ++	*pid = fork();
     ++
     ++	switch (*pid) {
     ++	case 0:
     ++		if (setsid() == -1)
     ++			error_errno(_("setsid failed"));
     ++		close(0);
     ++		close(1);
     ++		close(2);
     ++		sanitize_stdfds();
     ++
     ++		return ipc_server_run(path, opts, test_app_cb, (void*)&my_app_data);
     ++
     ++	case -1:
     ++		return error_errno(_("could not spawn daemon in the background"));
     ++
     ++	default:
     ++		return 0;
     ++	}
     ++}
     ++#else
     ++/*
     ++ * Conceptually like `daemonize()` but different because Windows does not
     ++ * have `fork(2)`.  Spawn a normal Windows child process but without the
     ++ * limitations of `start_command()` and `finish_command()`.
     ++ */
     ++static int spawn_server(const char *path,
     ++			const struct ipc_server_opts *opts,
     ++			pid_t *pid)
     ++{
     ++	char test_tool_exe[MAX_PATH];
     ++	struct strvec args = STRVEC_INIT;
     ++	int in, out;
     ++
     ++	GetModuleFileNameA(NULL, test_tool_exe, MAX_PATH);
     ++
     ++	in = open("/dev/null", O_RDONLY);
     ++	out = open("/dev/null", O_WRONLY);
     ++
     ++	strvec_push(&args, test_tool_exe);
     ++	strvec_push(&args, "simple-ipc");
     ++	strvec_push(&args, "run-daemon");
     ++	strvec_pushf(&args, "--threads=%d", opts->nr_threads);
     ++
     ++	*pid = mingw_spawnvpe(args.v[0], args.v, NULL, NULL, in, out, out);
     ++	close(in);
     ++	close(out);
     ++
     ++	strvec_clear(&args);
     ++
     ++	if (*pid < 0)
     ++		return error(_("could not spawn daemon in the background"));
     ++
     ++	return 0;
     ++}
     ++#endif
     ++
     ++/*
     ++ * This is adapted from `wait_or_whine()`.  Watch the child process and
     ++ * let it get started and begin listening for requests on the socket
     ++ * before reporting our success.
     ++ */
     ++static int wait_for_server_startup(const char * path, pid_t pid_child,
     ++				   int max_wait_sec)
     ++{
     ++	int status;
     ++	pid_t pid_seen;
     ++	enum ipc_active_state s;
     ++	time_t time_limit, now;
     ++
     ++	time(&time_limit);
     ++	time_limit += max_wait_sec;
     ++
     ++	for (;;) {
     ++		pid_seen = waitpid(pid_child, &status, WNOHANG);
     ++
     ++		if (pid_seen == -1)
     ++			return error_errno(_("waitpid failed"));
     ++
     ++		else if (pid_seen == 0) {
     ++			/*
     ++			 * The child is still running (this should be
     ++			 * the normal case).  Try to connect to it on
     ++			 * the socket and see if it is ready for
     ++			 * business.
     ++			 *
     ++			 * If there is another daemon already running,
     ++			 * our child will fail to start (possibly
     ++			 * after a timeout on the lock), but we don't
     ++			 * care (who responds) if the socket is live.
     ++			 */
     ++			s = ipc_get_active_state(path);
     ++			if (s == IPC_STATE__LISTENING)
     ++				return 0;
     ++
     ++			time(&now);
     ++			if (now > time_limit)
     ++				return error(_("daemon not online yet"));
     ++
     ++			continue;
     ++		}
     ++
     ++		else if (pid_seen == pid_child) {
     ++			/*
     ++			 * The new child daemon process shutdown while
     ++			 * it was starting up, so it is not listening
     ++			 * on the socket.
     ++			 *
     ++			 * Try to ping the socket in the odd chance
     ++			 * that another daemon started (or was already
     ++			 * running) while our child was starting.
     ++			 *
     ++			 * Again, we don't care who services the socket.
     ++			 */
     ++			s = ipc_get_active_state(path);
     ++			if (s == IPC_STATE__LISTENING)
     ++				return 0;
     ++
     ++			/*
     ++			 * We don't care about the WEXITSTATUS() nor
     ++			 * any of the WIF*(status) values because
     ++			 * `cmd__simple_ipc()` does the `!!result`
     ++			 * trick on all function return values.
     ++			 *
     ++			 * So it is sufficient to just report the
     ++			 * early shutdown as an error.
     ++			 */
     ++			return error(_("daemon failed to start"));
     ++		}
     ++
     ++		else
     ++			return error(_("waitpid is confused"));
     ++	}
     ++}
     ++
     ++/*
     ++ * This process will start a simple-ipc server in a background process and
     ++ * wait for it to become ready.  This is like `daemonize()` but gives us
     ++ * more control and better error reporting (and makes it easier to write
     ++ * unit tests).
     ++ */
     ++static int daemon__start_server(const char *path, int argc, const char **argv)
     ++{
     ++	pid_t pid_child;
     ++	int ret;
     ++	int max_wait_sec = 60;
     ++	struct ipc_server_opts opts = {
     ++		.nr_threads = 5
     ++	};
     ++
     ++	const char * const daemon_usage[] = {
     ++		N_("test-tool simple-ipc start-daemon [<options>]"),
     ++		NULL
     ++	};
     ++
     ++	struct option daemon_options[] = {
     ++		OPT_INTEGER(0, "max-wait", &max_wait_sec,
     ++			    N_("seconds to wait for daemon to startup")),
     ++		OPT_INTEGER(0, "threads", &opts.nr_threads,
     ++			    N_("number of threads in server thread pool")),
     ++		OPT_END()
     ++	};
     ++
     ++	argc = parse_options(argc, argv, NULL, daemon_options, daemon_usage, 0);
     ++
     ++	if (max_wait_sec < 0)
     ++		max_wait_sec = 0;
     ++	if (opts.nr_threads < 1)
     ++		opts.nr_threads = 1;
     ++
     ++	/*
     ++	 * Run the actual daemon in a background process.
     ++	 */
     ++	ret = spawn_server(path, &opts, &pid_child);
     ++	if (pid_child <= 0)
     ++		return ret;
     ++
     ++	/*
     ++	 * Let the parent wait for the child process to get started
     ++	 * and begin listening for requests on the socket.
     ++	 */
     ++	ret = wait_for_server_startup(path, pid_child, max_wait_sec);
     ++
     ++	return ret;
     ++}
     ++
      +/*
      + * This process will run a quick probe to see if a simple-ipc server
      + * is active on this path.
     @@ t/helper/test-simple-ipc.c (new)
      +	options.wait_if_not_found = 0;
      +
      +	if (!ipc_client_send_command(path, &options, command, &buf)) {
     -+		printf("%s\n", buf.buf);
     -+		fflush(stdout);
     ++		if (buf.len) {
     ++			printf("%s\n", buf.buf);
     ++			fflush(stdout);
     ++		}
      +		strbuf_release(&buf);
      +
      +		return 0;
     @@ t/helper/test-simple-ipc.c (new)
      + * message can be sent and that the kernel or pkt-line layers will
      + * properly chunk it and that the daemon receives the entire message.
      + */
     -+static int do_sendbytes(int bytecount, char byte, const char *path)
     ++static int do_sendbytes(int bytecount, char byte, const char *path,
     ++			const struct ipc_client_connect_options *options)
      +{
      +	struct strbuf buf_send = STRBUF_INIT;
      +	struct strbuf buf_resp = STRBUF_INIT;
     -+	struct ipc_client_connect_options options
     -+		= IPC_CLIENT_CONNECT_OPTIONS_INIT;
     -+
     -+	options.wait_if_busy = 1;
     -+	options.wait_if_not_found = 0;
      +
      +	strbuf_addstr(&buf_send, "sendbytes ");
      +	strbuf_addchars(&buf_send, byte, bytecount);
      +
     -+	if (!ipc_client_send_command(path, &options, buf_send.buf, &buf_resp)) {
     ++	if (!ipc_client_send_command(path, options, buf_send.buf, &buf_resp)) {
      +		strbuf_rtrim(&buf_resp);
      +		printf("sent:%c%08d %s\n", byte, bytecount, buf_resp.buf);
      +		fflush(stdout);
     @@ t/helper/test-simple-ipc.c (new)
      +		OPT_STRING(0, "byte", &string, N_("byte"), N_("ballast")),
      +		OPT_END()
      +	};
     ++	struct ipc_client_connect_options options
     ++		= IPC_CLIENT_CONNECT_OPTIONS_INIT;
     ++
     ++	options.wait_if_busy = 1;
     ++	options.wait_if_not_found = 0;
     ++	options.uds_disallow_chdir = 0;
      +
      +	argc = parse_options(argc, argv, NULL, sendbytes_options, sendbytes_usage, 0);
      +
     -+	return do_sendbytes(bytecount, string[0], path);
     ++	return do_sendbytes(bytecount, string[0], path, &options);
      +}
      +
      +struct multiple_thread_data {
     @@ t/helper/test-simple-ipc.c (new)
      +{
      +	struct multiple_thread_data *d = _multiple_thread_data;
      +	int k;
     ++	struct ipc_client_connect_options options
     ++		= IPC_CLIENT_CONNECT_OPTIONS_INIT;
     ++
     ++	options.wait_if_busy = 1;
     ++	options.wait_if_not_found = 0;
     ++	/*
     ++	 * A multi-threaded client should not be randomly calling chdir().
     ++	 * The test will pass without this restriction because the test is
     ++	 * not otherwise accessing the filesystem, but it makes us honest.
     ++	 */
     ++	options.uds_disallow_chdir = 1;
      +
      +	trace2_thread_start("multiple");
      +
      +	for (k = 0; k < d->batchsize; k++) {
     -+		if (do_sendbytes(d->bytecount + k, d->letter, d->path))
     ++		if (do_sendbytes(d->bytecount + k, d->letter, d->path, &options))
      +			d->sum_errors++;
      +		else
      +			d->sum_good++;
     @@ t/helper/test-simple-ipc.c (new)
      +	if (argc == 2 && !strcmp(argv[1], "SUPPORTS_SIMPLE_IPC"))
      +		return 0;
      +
     -+	/* Use '!!' on all dispatch functions to map from `error()` style
     -+	 * (returns -1) style to `test_must_fail` style (expects 1) and
     -+	 * get less confusing shell error messages.
     ++	/*
     ++	 * Use '!!' on all dispatch functions to map from `error()` style
     ++	 * (returns -1) to `test_must_fail` style (expects 1).  This
     ++	 * makes shell error messages less confusing.
      +	 */
      +
      +	if (argc == 2 && !strcmp(argv[1], "is-active"))
      +		return !!client__probe_server(path);
      +
     -+	if (argc >= 2 && !strcmp(argv[1], "daemon"))
     ++	if (argc >= 2 && !strcmp(argv[1], "run-daemon"))
      +		return !!daemon__run_server(path, argc, argv);
      +
     ++	if (argc >= 2 && !strcmp(argv[1], "start-daemon"))
     ++		return !!daemon__start_server(path, argc, argv);
     ++
      +	/*
      +	 * Client commands follow.  Ensure a server is running before
      +	 * going any further.
     @@ t/t0052-simple-ipc.sh (new)
      +}
      +
      +stop_simple_IPC_server () {
     -+	test -n "$SIMPLE_IPC_PID" || return 0
     -+
     -+	kill "$SIMPLE_IPC_PID" &&
     -+	SIMPLE_IPC_PID=
     ++	test-tool simple-ipc send quit
      +}
      +
      +test_expect_success 'start simple command server' '
     -+	{ test-tool simple-ipc daemon --threads=8 & } &&
     -+	SIMPLE_IPC_PID=$! &&
      +	test_atexit stop_simple_IPC_server &&
     -+
     -+	sleep 1 &&
     -+
     ++	test-tool simple-ipc start-daemon --threads=8 &&
      +	test-tool simple-ipc is-active
      +'
      +
     @@ t/t0052-simple-ipc.sh (new)
      +'
      +
      +test_expect_success 'servers cannot share the same path' '
     -+	test_must_fail test-tool simple-ipc daemon &&
     ++	test_must_fail test-tool simple-ipc run-daemon &&
      +	test-tool simple-ipc is-active
      +'
      +
     @@ t/t0052-simple-ipc.sh (new)
      +	test_cmp expect_a actual_a
      +'
      +
     ++# Sending a "quit" message to the server causes it to start an "async
     ++# shutdown" -- queuing shutdown events to all socket/pipe thread-pool
     ++# threads.  Each thread will process that event after finishing
     ++# (draining) any in-progress IO with other clients.  So when the "send
     ++# quit" client command exits, the ipc-server may still be running (but
     ++# it should be cleaning up).
     ++#
     ++# So, insert a generous sleep here to give the server time to shut down.
     ++#
      +test_expect_success '`quit` works' '
      +	test-tool simple-ipc send quit &&
     ++
     ++	sleep 5 &&
     ++
      +	test_must_fail test-tool simple-ipc is-active &&
      +	test_must_fail test-tool simple-ipc send ping
      +'
 13:  2cca15a10ece <  -:  ------------ unix-socket: do not call die in unix_stream_connect()

-- 
gitgitgadget

^ permalink raw reply	[flat|nested] 178+ messages in thread

* [PATCH v3 01/12] pkt-line: eliminate the need for static buffer in packet_write_gently()
  2021-02-13  0:09   ` [PATCH v3 00/12] " Jeff Hostetler via GitGitGadget
@ 2021-02-13  0:09     ` Jeff Hostetler via GitGitGadget
  2021-02-13  0:09     ` [PATCH v3 02/12] pkt-line: do not issue flush packets in write_packetized_*() Johannes Schindelin via GitGitGadget
                       ` (11 subsequent siblings)
  12 siblings, 0 replies; 178+ messages in thread
From: Jeff Hostetler via GitGitGadget @ 2021-02-13  0:09 UTC (permalink / raw)
  To: git
  Cc: Jeff Hostetler, Jeff King, SZEDER Gábor,
	Johannes Schindelin, Jeff Hostetler, Jeff Hostetler

From: Jeff Hostetler <jeffhost@microsoft.com>

Teach `packet_write_gently()` to write the pkt-line header and the actual
buffer in 2 separate calls to `write_in_full()` and avoid the need for a
static buffer, thread-safe scratch space, or an excessively large stack
buffer.

Change the API of `write_packetized_from_fd()` to accept a scratch space
argument from its caller to avoid similar issues here.

These changes are intended to make it easier to use pkt-line routines in
a multi-threaded context with multiple concurrent writers writing to
different streams.
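
Not part of the patch itself, but as a minimal sketch of the new
calling convention (mirroring the convert.c hunk below; the wrapper
name and the already-open fd_in/fd_out descriptors are assumptions
made only for illustration):

#include "cache.h"
#include "pkt-line.h"

/*
 * Stream the contents of `fd_in` to `fd_out` as pkt-lines using a
 * caller-owned scratch buffer (here on the stack, as convert.c does)
 * instead of a function-local static buffer.
 */
static int copy_fd_as_packets(int fd_in, int fd_out)
{
	struct packet_scratch_space scratch;

	return write_packetized_from_fd(fd_in, fd_out, &scratch);
}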

Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
---
 convert.c  |  7 ++++---
 pkt-line.c | 28 +++++++++++++++++++---------
 pkt-line.h | 12 +++++++++---
 3 files changed, 32 insertions(+), 15 deletions(-)

diff --git a/convert.c b/convert.c
index ee360c2f07ce..41012c2d301c 100644
--- a/convert.c
+++ b/convert.c
@@ -883,9 +883,10 @@ static int apply_multi_file_filter(const char *path, const char *src, size_t len
 	if (err)
 		goto done;
 
-	if (fd >= 0)
-		err = write_packetized_from_fd(fd, process->in);
-	else
+	if (fd >= 0) {
+		struct packet_scratch_space scratch;
+		err = write_packetized_from_fd(fd, process->in, &scratch);
+	} else
 		err = write_packetized_from_buf(src, len, process->in);
 	if (err)
 		goto done;
diff --git a/pkt-line.c b/pkt-line.c
index d633005ef746..4cff2f7a68a5 100644
--- a/pkt-line.c
+++ b/pkt-line.c
@@ -196,17 +196,25 @@ int packet_write_fmt_gently(int fd, const char *fmt, ...)
 
 static int packet_write_gently(const int fd_out, const char *buf, size_t size)
 {
-	static char packet_write_buffer[LARGE_PACKET_MAX];
+	char header[4];
 	size_t packet_size;
 
-	if (size > sizeof(packet_write_buffer) - 4)
+	if (size > LARGE_PACKET_DATA_MAX)
 		return error(_("packet write failed - data exceeds max packet size"));
 
 	packet_trace(buf, size, 1);
 	packet_size = size + 4;
-	set_packet_header(packet_write_buffer, packet_size);
-	memcpy(packet_write_buffer + 4, buf, size);
-	if (write_in_full(fd_out, packet_write_buffer, packet_size) < 0)
+
+	set_packet_header(header, packet_size);
+
+	/*
+	 * Write the header and the buffer in 2 parts so that we do not need
+	 * to allocate a buffer or rely on a static buffer.  This avoids perf
+	 * and multi-threading issues.
+	 */
+
+	if (write_in_full(fd_out, header, 4) < 0 ||
+	    write_in_full(fd_out, buf, size) < 0)
 		return error(_("packet write failed"));
 	return 0;
 }
@@ -242,19 +250,21 @@ void packet_buf_write_len(struct strbuf *buf, const char *data, size_t len)
 	packet_trace(data, len, 1);
 }
 
-int write_packetized_from_fd(int fd_in, int fd_out)
+int write_packetized_from_fd(int fd_in, int fd_out,
+			     struct packet_scratch_space *scratch)
 {
-	static char buf[LARGE_PACKET_DATA_MAX];
 	int err = 0;
 	ssize_t bytes_to_write;
 
 	while (!err) {
-		bytes_to_write = xread(fd_in, buf, sizeof(buf));
+		bytes_to_write = xread(fd_in, scratch->buffer,
+				       sizeof(scratch->buffer));
 		if (bytes_to_write < 0)
 			return COPY_READ_ERROR;
 		if (bytes_to_write == 0)
 			break;
-		err = packet_write_gently(fd_out, buf, bytes_to_write);
+		err = packet_write_gently(fd_out, scratch->buffer,
+					  bytes_to_write);
 	}
 	if (!err)
 		err = packet_flush_gently(fd_out);
diff --git a/pkt-line.h b/pkt-line.h
index 8c90daa59ef0..c0722aefe638 100644
--- a/pkt-line.h
+++ b/pkt-line.h
@@ -5,6 +5,13 @@
 #include "strbuf.h"
 #include "sideband.h"
 
+#define LARGE_PACKET_MAX 65520
+#define LARGE_PACKET_DATA_MAX (LARGE_PACKET_MAX - 4)
+
+struct packet_scratch_space {
+	char buffer[LARGE_PACKET_DATA_MAX]; /* does not include header bytes */
+};
+
 /*
  * Write a packetized stream, where each line is preceded by
  * its length (including the header) as a 4-byte hex number.
@@ -32,7 +39,7 @@ void packet_buf_write(struct strbuf *buf, const char *fmt, ...) __attribute__((f
 void packet_buf_write_len(struct strbuf *buf, const char *data, size_t len);
 int packet_flush_gently(int fd);
 int packet_write_fmt_gently(int fd, const char *fmt, ...) __attribute__((format (printf, 2, 3)));
-int write_packetized_from_fd(int fd_in, int fd_out);
+int write_packetized_from_fd(int fd_in, int fd_out, struct packet_scratch_space *scratch);
 int write_packetized_from_buf(const char *src_in, size_t len, int fd_out);
 
 /*
@@ -213,8 +220,7 @@ enum packet_read_status packet_reader_read(struct packet_reader *reader);
 enum packet_read_status packet_reader_peek(struct packet_reader *reader);
 
 #define DEFAULT_PACKET_MAX 1000
-#define LARGE_PACKET_MAX 65520
-#define LARGE_PACKET_DATA_MAX (LARGE_PACKET_MAX - 4)
+
 extern char packet_buffer[LARGE_PACKET_MAX];
 
 struct packet_writer {
-- 
gitgitgadget


^ permalink raw reply related	[flat|nested] 178+ messages in thread

* [PATCH v3 02/12] pkt-line: do not issue flush packets in write_packetized_*()
  2021-02-13  0:09   ` [PATCH v3 00/12] " Jeff Hostetler via GitGitGadget
  2021-02-13  0:09     ` [PATCH v3 01/12] pkt-line: eliminate the need for static buffer in packet_write_gently() Jeff Hostetler via GitGitGadget
@ 2021-02-13  0:09     ` Johannes Schindelin via GitGitGadget
  2021-02-13  0:09     ` [PATCH v3 03/12] pkt-line: (optionally) libify the packet readers Johannes Schindelin via GitGitGadget
                       ` (10 subsequent siblings)
  12 siblings, 0 replies; 178+ messages in thread
From: Johannes Schindelin via GitGitGadget @ 2021-02-13  0:09 UTC (permalink / raw)
  To: git
  Cc: Jeff Hostetler, Jeff King, SZEDER Gábor,
	Johannes Schindelin, Jeff Hostetler, Johannes Schindelin

From: Johannes Schindelin <johannes.schindelin@gmx.de>

Remove the `packet_flush_gently()` call in `write_packetized_from_buf()` and
`write_packetized_from_fd()` and require the caller to call it if desired.
Rename both functions to `write_packetized_from_*_no_flush()` to prevent
later merge accidents.

`write_packetized_from_buf()` currently only has one caller:
`apply_multi_file_filter()` in `convert.c`.  It always wants a flush packet
to be written after writing the payload.

However, we are about to introduce a caller that wants to write many
packets before a final flush packet, so let's make the caller responsible
for emitting the flush packet.
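
Purely as an illustrative sketch of the kind of caller this enables
(the function name, fd, and message array are assumptions, not part
of this series): write several payloads back to back and emit the
single terminating flush from the caller:

#include "cache.h"
#include "pkt-line.h"

/* Send `nr` messages as packets, then terminate the stream with one flush. */
static int send_batch(int fd, const char **msgs, int nr)
{
	int i;

	for (i = 0; i < nr; i++)
		if (write_packetized_from_buf_no_flush(msgs[i],
						       strlen(msgs[i]), fd) < 0)
			return -1;

	return packet_flush_gently(fd);
}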

Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
---
 convert.c  |  8 ++++++--
 pkt-line.c | 10 +++-------
 pkt-line.h |  4 ++--
 3 files changed, 11 insertions(+), 11 deletions(-)

diff --git a/convert.c b/convert.c
index 41012c2d301c..bccf7afa8797 100644
--- a/convert.c
+++ b/convert.c
@@ -885,9 +885,13 @@ static int apply_multi_file_filter(const char *path, const char *src, size_t len
 
 	if (fd >= 0) {
 		struct packet_scratch_space scratch;
-		err = write_packetized_from_fd(fd, process->in, &scratch);
+		err = write_packetized_from_fd_no_flush(fd, process->in, &scratch);
 	} else
-		err = write_packetized_from_buf(src, len, process->in);
+		err = write_packetized_from_buf_no_flush(src, len, process->in);
+	if (err)
+		goto done;
+
+	err = packet_flush_gently(process->in);
 	if (err)
 		goto done;
 
diff --git a/pkt-line.c b/pkt-line.c
index 4cff2f7a68a5..3602b0d37092 100644
--- a/pkt-line.c
+++ b/pkt-line.c
@@ -250,8 +250,8 @@ void packet_buf_write_len(struct strbuf *buf, const char *data, size_t len)
 	packet_trace(data, len, 1);
 }
 
-int write_packetized_from_fd(int fd_in, int fd_out,
-			     struct packet_scratch_space *scratch)
+int write_packetized_from_fd_no_flush(int fd_in, int fd_out,
+				      struct packet_scratch_space *scratch)
 {
 	int err = 0;
 	ssize_t bytes_to_write;
@@ -266,12 +266,10 @@ int write_packetized_from_fd(int fd_in, int fd_out,
 		err = packet_write_gently(fd_out, scratch->buffer,
 					  bytes_to_write);
 	}
-	if (!err)
-		err = packet_flush_gently(fd_out);
 	return err;
 }
 
-int write_packetized_from_buf(const char *src_in, size_t len, int fd_out)
+int write_packetized_from_buf_no_flush(const char *src_in, size_t len, int fd_out)
 {
 	int err = 0;
 	size_t bytes_written = 0;
@@ -287,8 +285,6 @@ int write_packetized_from_buf(const char *src_in, size_t len, int fd_out)
 		err = packet_write_gently(fd_out, src_in + bytes_written, bytes_to_write);
 		bytes_written += bytes_to_write;
 	}
-	if (!err)
-		err = packet_flush_gently(fd_out);
 	return err;
 }
 
diff --git a/pkt-line.h b/pkt-line.h
index c0722aefe638..a7149429ac35 100644
--- a/pkt-line.h
+++ b/pkt-line.h
@@ -39,8 +39,8 @@ void packet_buf_write(struct strbuf *buf, const char *fmt, ...) __attribute__((f
 void packet_buf_write_len(struct strbuf *buf, const char *data, size_t len);
 int packet_flush_gently(int fd);
 int packet_write_fmt_gently(int fd, const char *fmt, ...) __attribute__((format (printf, 2, 3)));
-int write_packetized_from_fd(int fd_in, int fd_out, struct packet_scratch_space *scratch);
-int write_packetized_from_buf(const char *src_in, size_t len, int fd_out);
+int write_packetized_from_fd_no_flush(int fd_in, int fd_out, struct packet_scratch_space *scratch);
+int write_packetized_from_buf_no_flush(const char *src_in, size_t len, int fd_out);
 
 /*
  * Read a packetized line into the buffer, which must be at least size bytes
-- 
gitgitgadget


^ permalink raw reply related	[flat|nested] 178+ messages in thread

* [PATCH v3 03/12] pkt-line: (optionally) libify the packet readers
  2021-02-13  0:09   ` [PATCH v3 00/12] " Jeff Hostetler via GitGitGadget
  2021-02-13  0:09     ` [PATCH v3 01/12] pkt-line: eliminate the need for static buffer in packet_write_gently() Jeff Hostetler via GitGitGadget
  2021-02-13  0:09     ` [PATCH v3 02/12] pkt-line: do not issue flush packets in write_packetized_*() Johannes Schindelin via GitGitGadget
@ 2021-02-13  0:09     ` Johannes Schindelin via GitGitGadget
  2021-02-13  0:09     ` [PATCH v3 04/12] pkt-line: add options argument to read_packetized_to_strbuf() Johannes Schindelin via GitGitGadget
                       ` (9 subsequent siblings)
  12 siblings, 0 replies; 178+ messages in thread
From: Johannes Schindelin via GitGitGadget @ 2021-02-13  0:09 UTC (permalink / raw)
  To: git
  Cc: Jeff Hostetler, Jeff King, SZEDER Gábor,
	Johannes Schindelin, Jeff Hostetler, Johannes Schindelin

From: Johannes Schindelin <johannes.schindelin@gmx.de>

So far, the (possibly indirect) callers of `get_packet_data()` can ask
that function to return an error instead of `die()`ing upon end-of-file.
However, random read errors will still cause the process to die.

So let's introduce an explicit option to tell the packet reader
machinery to please be nice and only return an error.

This change prepares pkt-line for use by long-running daemon processes.
Such processes should be able to serve multiple concurrent clients and
survive random IO errors.  If there is an error on one connection,
a daemon should be able to drop that connection and continue serving
existing and future connections.

This ability will be used by a Git-aware "Internal FSMonitor" feature
in a later patch series.
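
As a hedged sketch of the intended usage (the daemon itself arrives in
a later series; the function shape and fd are assumptions for the
example), a per-client read loop can now treat any read problem as
"drop this client" instead of dying:

#include "cache.h"
#include "pkt-line.h"

/* Read requests from one client; never die() on a broken connection. */
static void serve_one_client(int fd)
{
	char buffer[LARGE_PACKET_MAX];
	int len;

	for (;;) {
		len = packet_read(fd, NULL, NULL, buffer, sizeof(buffer),
				  PACKET_READ_GENTLE_ON_EOF |
				  PACKET_READ_NEVER_DIE);
		if (len <= 0)
			break; /* EOF, flush, or error: stop serving this client */
		/* ... handle `len` bytes of request in `buffer` ... */
	}
	close(fd);
}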

Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
---
 pkt-line.c | 19 +++++++++++++++++--
 pkt-line.h |  4 ++++
 2 files changed, 21 insertions(+), 2 deletions(-)

diff --git a/pkt-line.c b/pkt-line.c
index 3602b0d37092..83c46e6b46ee 100644
--- a/pkt-line.c
+++ b/pkt-line.c
@@ -304,8 +304,11 @@ static int get_packet_data(int fd, char **src_buf, size_t *src_size,
 		*src_size -= ret;
 	} else {
 		ret = read_in_full(fd, dst, size);
-		if (ret < 0)
+		if (ret < 0) {
+			if (options & PACKET_READ_NEVER_DIE)
+				return error_errno(_("read error"));
 			die_errno(_("read error"));
+		}
 	}
 
 	/* And complain if we didn't get enough bytes to satisfy the read. */
@@ -313,6 +316,8 @@ static int get_packet_data(int fd, char **src_buf, size_t *src_size,
 		if (options & PACKET_READ_GENTLE_ON_EOF)
 			return -1;
 
+		if (options & PACKET_READ_NEVER_DIE)
+			return error(_("the remote end hung up unexpectedly"));
 		die(_("the remote end hung up unexpectedly"));
 	}
 
@@ -341,6 +346,9 @@ enum packet_read_status packet_read_with_status(int fd, char **src_buffer,
 	len = packet_length(linelen);
 
 	if (len < 0) {
+		if (options & PACKET_READ_NEVER_DIE)
+			return error(_("protocol error: bad line length "
+				       "character: %.4s"), linelen);
 		die(_("protocol error: bad line length character: %.4s"), linelen);
 	} else if (!len) {
 		packet_trace("0000", 4, 0);
@@ -355,12 +363,19 @@ enum packet_read_status packet_read_with_status(int fd, char **src_buffer,
 		*pktlen = 0;
 		return PACKET_READ_RESPONSE_END;
 	} else if (len < 4) {
+		if (options & PACKET_READ_NEVER_DIE)
+			return error(_("protocol error: bad line length %d"),
+				     len);
 		die(_("protocol error: bad line length %d"), len);
 	}
 
 	len -= 4;
-	if ((unsigned)len >= size)
+	if ((unsigned)len >= size) {
+		if (options & PACKET_READ_NEVER_DIE)
+			return error(_("protocol error: bad line length %d"),
+				     len);
 		die(_("protocol error: bad line length %d"), len);
+	}
 
 	if (get_packet_data(fd, src_buffer, src_len, buffer, len, options) < 0) {
 		*pktlen = -1;
diff --git a/pkt-line.h b/pkt-line.h
index a7149429ac35..2e472efaf2c5 100644
--- a/pkt-line.h
+++ b/pkt-line.h
@@ -75,10 +75,14 @@ int write_packetized_from_buf_no_flush(const char *src_in, size_t len, int fd_ou
  *
  * If options contains PACKET_READ_DIE_ON_ERR_PACKET, it dies when it sees an
  * ERR packet.
+ *
+ * With `PACKET_READ_NEVER_DIE`, no errors are allowed to trigger die() (except
+ * an ERR packet, when `PACKET_READ_DIE_ON_ERR_PACKET` is in effect).
  */
 #define PACKET_READ_GENTLE_ON_EOF     (1u<<0)
 #define PACKET_READ_CHOMP_NEWLINE     (1u<<1)
 #define PACKET_READ_DIE_ON_ERR_PACKET (1u<<2)
+#define PACKET_READ_NEVER_DIE         (1u<<3)
 int packet_read(int fd, char **src_buffer, size_t *src_len, char
 		*buffer, unsigned size, int options);
 
-- 
gitgitgadget


^ permalink raw reply related	[flat|nested] 178+ messages in thread

* [PATCH v3 04/12] pkt-line: add options argument to read_packetized_to_strbuf()
  2021-02-13  0:09   ` [PATCH v3 00/12] " Jeff Hostetler via GitGitGadget
                       ` (2 preceding siblings ...)
  2021-02-13  0:09     ` [PATCH v3 03/12] pkt-line: (optionally) libify the packet readers Johannes Schindelin via GitGitGadget
@ 2021-02-13  0:09     ` Johannes Schindelin via GitGitGadget
  2021-02-13  0:09     ` [PATCH v3 05/12] simple-ipc: design documentation for new IPC mechanism Jeff Hostetler via GitGitGadget
                       ` (8 subsequent siblings)
  12 siblings, 0 replies; 178+ messages in thread
From: Johannes Schindelin via GitGitGadget @ 2021-02-13  0:09 UTC (permalink / raw)
  To: git
  Cc: Jeff Hostetler, Jeff King, SZEDER Gábor,
	Johannes Schindelin, Jeff Hostetler, Johannes Schindelin

From: Johannes Schindelin <johannes.schindelin@gmx.de>

Update the calling sequence of `read_packetized_to_strbuf()` to take
an options argument and not assume a fixed set of options.  Update the
only existing caller accordingly to explicitly pass the
formerly-assumed flags.

The `read_packetized_to_strbuf()` function calls `packet_read()` with
a fixed set of assumed options (`PACKET_READ_GENTLE_ON_EOF`).  This
assumption has been fine for the single existing caller
`apply_multi_file_filter()` in `convert.c`.

In a later commit we would like to add other callers to
`read_packetized_to_strbuf()` that need a different set of options.
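
As a sketch of one such future caller (the helper name and fd are
illustrative assumptions), the new argument combines naturally with
the PACKET_READ_NEVER_DIE flag from the previous patch:

#include "cache.h"
#include "pkt-line.h"

/* Collect a whole packetized response into `answer`, reporting errors gently. */
static int read_response(int fd, struct strbuf *answer)
{
	if (read_packetized_to_strbuf(fd, answer,
				      PACKET_READ_GENTLE_ON_EOF |
				      PACKET_READ_NEVER_DIE) < 0)
		return error(_("could not read packetized response"));
	return 0;
}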

Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
---
 convert.c  | 3 ++-
 pkt-line.c | 4 ++--
 pkt-line.h | 2 +-
 3 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/convert.c b/convert.c
index bccf7afa8797..9f44f00d841f 100644
--- a/convert.c
+++ b/convert.c
@@ -908,7 +908,8 @@ static int apply_multi_file_filter(const char *path, const char *src, size_t len
 		if (err)
 			goto done;
 
-		err = read_packetized_to_strbuf(process->out, &nbuf) < 0;
+		err = read_packetized_to_strbuf(process->out, &nbuf,
+						PACKET_READ_GENTLE_ON_EOF) < 0;
 		if (err)
 			goto done;
 
diff --git a/pkt-line.c b/pkt-line.c
index 83c46e6b46ee..18ecad65e08c 100644
--- a/pkt-line.c
+++ b/pkt-line.c
@@ -442,7 +442,7 @@ char *packet_read_line_buf(char **src, size_t *src_len, int *dst_len)
 	return packet_read_line_generic(-1, src, src_len, dst_len);
 }
 
-ssize_t read_packetized_to_strbuf(int fd_in, struct strbuf *sb_out)
+ssize_t read_packetized_to_strbuf(int fd_in, struct strbuf *sb_out, int options)
 {
 	int packet_len;
 
@@ -458,7 +458,7 @@ ssize_t read_packetized_to_strbuf(int fd_in, struct strbuf *sb_out)
 			 * that there is already room for the extra byte.
 			 */
 			sb_out->buf + sb_out->len, LARGE_PACKET_DATA_MAX+1,
-			PACKET_READ_GENTLE_ON_EOF);
+			options);
 		if (packet_len <= 0)
 			break;
 		sb_out->len += packet_len;
diff --git a/pkt-line.h b/pkt-line.h
index 2e472efaf2c5..e347fe46832a 100644
--- a/pkt-line.h
+++ b/pkt-line.h
@@ -142,7 +142,7 @@ char *packet_read_line_buf(char **src_buf, size_t *src_len, int *size);
 /*
  * Reads a stream of variable sized packets until a flush packet is detected.
  */
-ssize_t read_packetized_to_strbuf(int fd_in, struct strbuf *sb_out);
+ssize_t read_packetized_to_strbuf(int fd_in, struct strbuf *sb_out, int options);
 
 /*
  * Receive multiplexed output stream over git native protocol.
-- 
gitgitgadget


^ permalink raw reply related	[flat|nested] 178+ messages in thread

* [PATCH v3 05/12] simple-ipc: design documentation for new IPC mechanism
  2021-02-13  0:09   ` [PATCH v3 00/12] " Jeff Hostetler via GitGitGadget
                       ` (3 preceding siblings ...)
  2021-02-13  0:09     ` [PATCH v3 04/12] pkt-line: add options argument to read_packetized_to_strbuf() Johannes Schindelin via GitGitGadget
@ 2021-02-13  0:09     ` Jeff Hostetler via GitGitGadget
  2021-02-13  0:09     ` [PATCH v3 06/12] simple-ipc: add win32 implementation Jeff Hostetler via GitGitGadget
                       ` (7 subsequent siblings)
  12 siblings, 0 replies; 178+ messages in thread
From: Jeff Hostetler via GitGitGadget @ 2021-02-13  0:09 UTC (permalink / raw)
  To: git
  Cc: Jeff Hostetler, Jeff King, SZEDER Gábor,
	Johannes Schindelin, Jeff Hostetler, Jeff Hostetler

From: Jeff Hostetler <jeffhost@microsoft.com>

Brief design documentation for new IPC mechanism allowing
foreground Git client to talk with an existing daemon process
at a known location using a named pipe or unix domain socket.
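
To make the round trip concrete, here is a hedged sketch of the client
side using the API added later in this series (see simple-ipc.h); the
"ping" payload and the helper name are placeholders:

#include "cache.h"
#include "simple-ipc.h"

/* Connect to the daemon at `path`, send one request, print the reply. */
static int ping_daemon(const char *path)
{
	struct ipc_client_connect_options options =
		IPC_CLIENT_CONNECT_OPTIONS_INIT;
	struct strbuf answer = STRBUF_INIT;
	int ret;

	options.wait_if_busy = 1;

	ret = ipc_client_send_command(path, &options, "ping", &answer);
	if (!ret)
		printf("daemon said: %s\n", answer.buf);

	strbuf_release(&answer);
	return ret;
}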

Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
---
 Documentation/technical/api-simple-ipc.txt | 34 ++++++++++++++++++++++
 1 file changed, 34 insertions(+)
 create mode 100644 Documentation/technical/api-simple-ipc.txt

diff --git a/Documentation/technical/api-simple-ipc.txt b/Documentation/technical/api-simple-ipc.txt
new file mode 100644
index 000000000000..670a5c163e39
--- /dev/null
+++ b/Documentation/technical/api-simple-ipc.txt
@@ -0,0 +1,34 @@
+simple-ipc API
+==============
+
+The simple-ipc API is used to send an IPC message and response between
+a (presumably) foreground Git client process and a background server or
+daemon process.  The server process must already be running.  Multiple
+client processes can simultaneously communicate with the server
+process.
+
+Communication occurs over a named pipe on Windows and a Unix domain
+socket on other platforms.  Clients and the server rendezvous at a
+previously agreed-to application-specific pathname (which is outside
+the scope of this design).
+
+This IPC mechanism differs from the existing `sub-process.c` model
+(Documentation/technical/long-running-process-protocol.txt) used
+by applications like Git-LFS.  In the simple-ipc model the server is
+assumed to be a very long-running system service.  In contrast, in the
+LFS-style sub-process model the helper is started with the foreground
+process and exits when the foreground process terminates.
+
+How the simple-ipc server is started is also outside the scope of the
+IPC mechanism.  For example, the server might be started during
+maintenance operations.
+
+The IPC protocol consists of a single request message from the client and
+an optional response message from the server.  For simplicity, pkt-line
+routines are used to hide chunking and buffering concerns.  Each side
+terminates its message with a flush packet.
+(Documentation/technical/protocol-common.txt)
+
+The actual format of the client and server messages is application
+specific.  The IPC layer transmits and receives an opaque buffer without
+any concern for the content within.
-- 
gitgitgadget


^ permalink raw reply related	[flat|nested] 178+ messages in thread

* [PATCH v3 06/12] simple-ipc: add win32 implementation
  2021-02-13  0:09   ` [PATCH v3 00/12] " Jeff Hostetler via GitGitGadget
                       ` (4 preceding siblings ...)
  2021-02-13  0:09     ` [PATCH v3 05/12] simple-ipc: design documentation for new IPC mechanism Jeff Hostetler via GitGitGadget
@ 2021-02-13  0:09     ` Jeff Hostetler via GitGitGadget
  2021-02-13  0:09     ` [PATCH v3 07/12] unix-socket: eliminate static unix_stream_socket() helper function Jeff Hostetler via GitGitGadget
                       ` (6 subsequent siblings)
  12 siblings, 0 replies; 178+ messages in thread
From: Jeff Hostetler via GitGitGadget @ 2021-02-13  0:09 UTC (permalink / raw)
  To: git
  Cc: Jeff Hostetler, Jeff King, SZEDER Gábor,
	Johannes Schindelin, Jeff Hostetler, Jeff Hostetler

From: Jeff Hostetler <jeffhost@microsoft.com>

Create Windows implementation of "simple-ipc" using named pipes.
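
As a hedged sketch of how an application might sit on top of the API
declared in simple-ipc.h below (the callback body, names, and thread
count are placeholders, not part of the patch), a daemon registers a
request callback and lets ipc_server_run() block until a client asks
it to quit:

#include "cache.h"
#include "simple-ipc.h"

/* Echo each request back to the client; "quit" shuts the server down. */
static int echo_cb(void *data, const char *request,
		   ipc_server_reply_cb *reply_cb,
		   struct ipc_server_reply_data *reply_data)
{
	if (!strcmp(request, "quit"))
		return SIMPLE_IPC_QUIT;

	return reply_cb(reply_data, request, strlen(request));
}

static int run_echo_daemon(const char *path)
{
	struct ipc_server_opts opts = { .nr_threads = 4 };

	/* Blocks until a client sends "quit" or startup fails. */
	return ipc_server_run(path, &opts, echo_cb, NULL);
}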

Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
---
 Makefile                            |   5 +
 compat/simple-ipc/ipc-shared.c      |  28 ++
 compat/simple-ipc/ipc-win32.c       | 749 ++++++++++++++++++++++++++++
 config.mak.uname                    |   2 +
 contrib/buildsystems/CMakeLists.txt |   4 +
 simple-ipc.h                        | 224 +++++++++
 6 files changed, 1012 insertions(+)
 create mode 100644 compat/simple-ipc/ipc-shared.c
 create mode 100644 compat/simple-ipc/ipc-win32.c
 create mode 100644 simple-ipc.h

diff --git a/Makefile b/Makefile
index 4128b457e14b..40d5cab78d3f 100644
--- a/Makefile
+++ b/Makefile
@@ -1679,6 +1679,11 @@ else
 	LIB_OBJS += unix-socket.o
 endif
 
+ifdef USE_WIN32_IPC
+	LIB_OBJS += compat/simple-ipc/ipc-shared.o
+	LIB_OBJS += compat/simple-ipc/ipc-win32.o
+endif
+
 ifdef NO_ICONV
 	BASIC_CFLAGS += -DNO_ICONV
 endif
diff --git a/compat/simple-ipc/ipc-shared.c b/compat/simple-ipc/ipc-shared.c
new file mode 100644
index 000000000000..1edec8159532
--- /dev/null
+++ b/compat/simple-ipc/ipc-shared.c
@@ -0,0 +1,28 @@
+#include "cache.h"
+#include "simple-ipc.h"
+#include "strbuf.h"
+#include "pkt-line.h"
+#include "thread-utils.h"
+
+#ifdef SUPPORTS_SIMPLE_IPC
+
+int ipc_server_run(const char *path, const struct ipc_server_opts *opts,
+		   ipc_server_application_cb *application_cb,
+		   void *application_data)
+{
+	struct ipc_server_data *server_data = NULL;
+	int ret;
+
+	ret = ipc_server_run_async(&server_data, path, opts,
+				   application_cb, application_data);
+	if (ret)
+		return ret;
+
+	ret = ipc_server_await(server_data);
+
+	ipc_server_free(server_data);
+
+	return ret;
+}
+
+#endif /* SUPPORTS_SIMPLE_IPC */
diff --git a/compat/simple-ipc/ipc-win32.c b/compat/simple-ipc/ipc-win32.c
new file mode 100644
index 000000000000..f0cfbf9d15c3
--- /dev/null
+++ b/compat/simple-ipc/ipc-win32.c
@@ -0,0 +1,749 @@
+#include "cache.h"
+#include "simple-ipc.h"
+#include "strbuf.h"
+#include "pkt-line.h"
+#include "thread-utils.h"
+
+#ifndef GIT_WINDOWS_NATIVE
+#error This file can only be compiled on Windows
+#endif
+
+static int initialize_pipe_name(const char *path, wchar_t *wpath, size_t alloc)
+{
+	int off = 0;
+	struct strbuf realpath = STRBUF_INIT;
+
+	if (!strbuf_realpath(&realpath, path, 0))
+		return -1;
+
+	off = swprintf(wpath, alloc, L"\\\\.\\pipe\\");
+	if (xutftowcs(wpath + off, realpath.buf, alloc - off) < 0)
+		return -1;
+
+	/* Handle drive prefix */
+	if (wpath[off] && wpath[off + 1] == L':') {
+		wpath[off + 1] = L'_';
+		off += 2;
+	}
+
+	for (; wpath[off]; off++)
+		if (wpath[off] == L'/')
+			wpath[off] = L'\\';
+
+	strbuf_release(&realpath);
+	return 0;
+}
+
+static enum ipc_active_state get_active_state(wchar_t *pipe_path)
+{
+	if (WaitNamedPipeW(pipe_path, NMPWAIT_USE_DEFAULT_WAIT))
+		return IPC_STATE__LISTENING;
+
+	if (GetLastError() == ERROR_SEM_TIMEOUT)
+		return IPC_STATE__NOT_LISTENING;
+
+	if (GetLastError() == ERROR_FILE_NOT_FOUND)
+		return IPC_STATE__PATH_NOT_FOUND;
+
+	return IPC_STATE__OTHER_ERROR;
+}
+
+enum ipc_active_state ipc_get_active_state(const char *path)
+{
+	wchar_t pipe_path[MAX_PATH];
+
+	if (initialize_pipe_name(path, pipe_path, ARRAY_SIZE(pipe_path)) < 0)
+		return IPC_STATE__INVALID_PATH;
+
+	return get_active_state(pipe_path);
+}
+
+#define WAIT_STEP_MS (50)
+
+static enum ipc_active_state connect_to_server(
+	const wchar_t *wpath,
+	DWORD timeout_ms,
+	const struct ipc_client_connect_options *options,
+	int *pfd)
+{
+	DWORD t_start_ms, t_waited_ms;
+	DWORD step_ms;
+	HANDLE hPipe = INVALID_HANDLE_VALUE;
+	DWORD mode = PIPE_READMODE_BYTE;
+	DWORD gle;
+
+	*pfd = -1;
+
+	for (;;) {
+		hPipe = CreateFileW(wpath, GENERIC_READ | GENERIC_WRITE,
+				    0, NULL, OPEN_EXISTING, 0, NULL);
+		if (hPipe != INVALID_HANDLE_VALUE)
+			break;
+
+		gle = GetLastError();
+
+		switch (gle) {
+		case ERROR_FILE_NOT_FOUND:
+			if (!options->wait_if_not_found)
+				return IPC_STATE__PATH_NOT_FOUND;
+			if (!timeout_ms)
+				return IPC_STATE__PATH_NOT_FOUND;
+
+			step_ms = (timeout_ms < WAIT_STEP_MS) ?
+				timeout_ms : WAIT_STEP_MS;
+			sleep_millisec(step_ms);
+
+			timeout_ms -= step_ms;
+			break; /* try again */
+
+		case ERROR_PIPE_BUSY:
+			if (!options->wait_if_busy)
+				return IPC_STATE__NOT_LISTENING;
+			if (!timeout_ms)
+				return IPC_STATE__NOT_LISTENING;
+
+			t_start_ms = (DWORD)(getnanotime() / 1000000);
+
+			if (!WaitNamedPipeW(wpath, timeout_ms)) {
+				if (GetLastError() == ERROR_SEM_TIMEOUT)
+					return IPC_STATE__NOT_LISTENING;
+
+				return IPC_STATE__OTHER_ERROR;
+			}
+
+			/*
+			 * A pipe server instance became available.
+			 * Race other client processes to connect to
+			 * it.
+			 *
+			 * But first decrement our overall timeout so
+			 * that we don't starve if we keep losing the
+			 * race.  But also guard against special
+			 * NMPWAIT_ values (0 and -1).
+			 */
+			t_waited_ms = (DWORD)(getnanotime() / 1000000) - t_start_ms;
+			if (t_waited_ms < timeout_ms)
+				timeout_ms -= t_waited_ms;
+			else
+				timeout_ms = 1;
+			break; /* try again */
+
+		default:
+			return IPC_STATE__OTHER_ERROR;
+		}
+	}
+
+	if (!SetNamedPipeHandleState(hPipe, &mode, NULL, NULL)) {
+		CloseHandle(hPipe);
+		return IPC_STATE__OTHER_ERROR;
+	}
+
+	*pfd = _open_osfhandle((intptr_t)hPipe, O_RDWR|O_BINARY);
+	if (*pfd < 0) {
+		CloseHandle(hPipe);
+		return IPC_STATE__OTHER_ERROR;
+	}
+
+	/* fd now owns hPipe */
+
+	return IPC_STATE__LISTENING;
+}
+
+/*
+ * The default connection timeout for Windows clients.
+ *
+ * This is not currently part of the ipc_ API (nor the config settings)
+ * because of differences between Windows and other platforms.
+ *
+ * This value was chosen at random.
+ */
+#define WINDOWS_CONNECTION_TIMEOUT_MS (30000)
+
+enum ipc_active_state ipc_client_try_connect(
+	const char *path,
+	const struct ipc_client_connect_options *options,
+	struct ipc_client_connection **p_connection)
+{
+	wchar_t wpath[MAX_PATH];
+	enum ipc_active_state state = IPC_STATE__OTHER_ERROR;
+	int fd = -1;
+
+	*p_connection = NULL;
+
+	trace2_region_enter("ipc-client", "try-connect", NULL);
+	trace2_data_string("ipc-client", NULL, "try-connect/path", path);
+
+	if (initialize_pipe_name(path, wpath, ARRAY_SIZE(wpath)) < 0)
+		state = IPC_STATE__INVALID_PATH;
+	else
+		state = connect_to_server(wpath, WINDOWS_CONNECTION_TIMEOUT_MS,
+					  options, &fd);
+
+	trace2_data_intmax("ipc-client", NULL, "try-connect/state",
+			   (intmax_t)state);
+	trace2_region_leave("ipc-client", "try-connect", NULL);
+
+	if (state == IPC_STATE__LISTENING) {
+		(*p_connection) = xcalloc(1, sizeof(struct ipc_client_connection));
+		(*p_connection)->fd = fd;
+	}
+
+	return state;
+}
+
+void ipc_client_close_connection(struct ipc_client_connection *connection)
+{
+	if (!connection)
+		return;
+
+	if (connection->fd != -1)
+		close(connection->fd);
+
+	free(connection);
+}
+
+int ipc_client_send_command_to_connection(
+	struct ipc_client_connection *connection,
+	const char *message, struct strbuf *answer)
+{
+	int ret = 0;
+
+	strbuf_setlen(answer, 0);
+
+	trace2_region_enter("ipc-client", "send-command", NULL);
+
+	if (write_packetized_from_buf_no_flush(message, strlen(message),
+					       connection->fd) < 0 ||
+	    packet_flush_gently(connection->fd) < 0) {
+		ret = error(_("could not send IPC command"));
+		goto done;
+	}
+
+	FlushFileBuffers((HANDLE)_get_osfhandle(connection->fd));
+
+	if (read_packetized_to_strbuf(
+		    connection->fd, answer,
+		    PACKET_READ_GENTLE_ON_EOF | PACKET_READ_NEVER_DIE) < 0) {
+		ret = error(_("could not read IPC response"));
+		goto done;
+	}
+
+done:
+	trace2_region_leave("ipc-client", "send-command", NULL);
+	return ret;
+}
+
+int ipc_client_send_command(const char *path,
+			    const struct ipc_client_connect_options *options,
+			    const char *message, struct strbuf *response)
+{
+	int ret = -1;
+	enum ipc_active_state state;
+	struct ipc_client_connection *connection = NULL;
+
+	state = ipc_client_try_connect(path, options, &connection);
+
+	if (state != IPC_STATE__LISTENING)
+		return ret;
+
+	ret = ipc_client_send_command_to_connection(connection, message, response);
+
+	ipc_client_close_connection(connection);
+
+	return ret;
+}
+
+/*
+ * Duplicate the given pipe handle and wrap it in a file descriptor so
+ * that we can use pkt-line on it.
+ */
+static int dup_fd_from_pipe(const HANDLE pipe)
+{
+	HANDLE process = GetCurrentProcess();
+	HANDLE handle;
+	int fd;
+
+	if (!DuplicateHandle(process, pipe, process, &handle, 0, FALSE,
+			     DUPLICATE_SAME_ACCESS)) {
+		errno = err_win_to_posix(GetLastError());
+		return -1;
+	}
+
+	fd = _open_osfhandle((intptr_t)handle, O_RDWR|O_BINARY);
+	if (fd < 0) {
+		errno = err_win_to_posix(GetLastError());
+		CloseHandle(handle);
+		return -1;
+	}
+
+	/*
+	 * `handle` is now owned by `fd` and will be automatically closed
+	 * when the descriptor is closed.
+	 */
+
+	return fd;
+}
+
+/*
+ * Magic numbers used to annotate callback instance data.
+ * These are used to help guard against accidentally passing the
+ * wrong instance data across multiple levels of callbacks (which
+ * is easy to do if there are `void*` arguments).
+ */
+enum magic {
+	MAGIC_SERVER_REPLY_DATA,
+	MAGIC_SERVER_THREAD_DATA,
+	MAGIC_SERVER_DATA,
+};
+
+struct ipc_server_reply_data {
+	enum magic magic;
+	int fd;
+	struct ipc_server_thread_data *server_thread_data;
+};
+
+struct ipc_server_thread_data {
+	enum magic magic;
+	struct ipc_server_thread_data *next_thread;
+	struct ipc_server_data *server_data;
+	pthread_t pthread_id;
+	HANDLE hPipe;
+};
+
+/*
+ * On Windows, the conceptual "ipc-server" is implemented as a pool of
+ * n identical/peer "server-thread" threads.  That is, there is no
+ * hierarchy of threads, and therefore no controller thread managing
+ * the pool.  Each thread has an independent handle to the named pipe,
+ * receives incoming connections, processes the client, and re-uses
+ * the pipe for the next client connection.
+ *
+ * Therefore, the "ipc-server" only needs to maintain a list of the
+ * spawned threads for eventual "join" purposes.
+ *
+ * A single "stop-event" is visible to all of the server threads to
+ * tell them to shut down (when idle).
+ */
+struct ipc_server_data {
+	enum magic magic;
+	ipc_server_application_cb *application_cb;
+	void *application_data;
+	struct strbuf buf_path;
+	wchar_t wpath[MAX_PATH];
+
+	HANDLE hEventStopRequested;
+	struct ipc_server_thread_data *thread_list;
+	int is_stopped;
+};
+
+enum connect_result {
+	CR_CONNECTED = 0,
+	CR_CONNECT_PENDING,
+	CR_CONNECT_ERROR,
+	CR_WAIT_ERROR,
+	CR_SHUTDOWN,
+};
+
+static enum connect_result queue_overlapped_connect(
+	struct ipc_server_thread_data *server_thread_data,
+	OVERLAPPED *lpo)
+{
+	if (ConnectNamedPipe(server_thread_data->hPipe, lpo))
+		goto failed;
+
+	switch (GetLastError()) {
+	case ERROR_IO_PENDING:
+		return CR_CONNECT_PENDING;
+
+	case ERROR_PIPE_CONNECTED:
+		SetEvent(lpo->hEvent);
+		return CR_CONNECTED;
+
+	default:
+		break;
+	}
+
+failed:
+	error(_("ConnectNamedPipe failed for '%s' (%lu)"),
+	      server_thread_data->server_data->buf_path.buf,
+	      GetLastError());
+	return CR_CONNECT_ERROR;
+}
+
+/*
+ * Use Windows Overlapped IO to wait for a connection or for our event
+ * to be signalled.
+ */
+static enum connect_result wait_for_connection(
+	struct ipc_server_thread_data *server_thread_data,
+	OVERLAPPED *lpo)
+{
+	enum connect_result r;
+	HANDLE waitHandles[2];
+	DWORD dwWaitResult;
+
+	r = queue_overlapped_connect(server_thread_data, lpo);
+	if (r != CR_CONNECT_PENDING)
+		return r;
+
+	waitHandles[0] = server_thread_data->server_data->hEventStopRequested;
+	waitHandles[1] = lpo->hEvent;
+
+	dwWaitResult = WaitForMultipleObjects(2, waitHandles, FALSE, INFINITE);
+	switch (dwWaitResult) {
+	case WAIT_OBJECT_0 + 0:
+		return CR_SHUTDOWN;
+
+	case WAIT_OBJECT_0 + 1:
+		ResetEvent(lpo->hEvent);
+		return CR_CONNECTED;
+
+	default:
+		return CR_WAIT_ERROR;
+	}
+}
+
+/*
+ * Forward declare our reply callback function so that any compiler
+ * errors are reported when we actually define the function (in addition
+ * to any errors reported when we try to pass this callback function as
+ * a parameter in a function call).  The former are easier to understand.
+ */
+static ipc_server_reply_cb do_io_reply_callback;
+
+/*
+ * Relay application's response message to the client process.
+ * (We do not flush at this point because we allow the caller
+ * to chunk data to the client thru us.)
+ */
+static int do_io_reply_callback(struct ipc_server_reply_data *reply_data,
+		       const char *response, size_t response_len)
+{
+	if (reply_data->magic != MAGIC_SERVER_REPLY_DATA)
+		BUG("reply_cb called with wrong instance data");
+
+	return write_packetized_from_buf_no_flush(response, response_len,
+						  reply_data->fd);
+}
+
+/*
+ * Receive the request/command from the client and pass it to the
+ * registered request-callback.  The request-callback will compose
+ * a response and call our reply-callback to send it to the client.
+ *
+ * Simple-IPC only contains one round trip, so we flush and close
+ * here after the response.
+ */
+static int do_io(struct ipc_server_thread_data *server_thread_data)
+{
+	struct strbuf buf = STRBUF_INIT;
+	struct ipc_server_reply_data reply_data;
+	int ret = 0;
+
+	reply_data.magic = MAGIC_SERVER_REPLY_DATA;
+	reply_data.server_thread_data = server_thread_data;
+
+	reply_data.fd = dup_fd_from_pipe(server_thread_data->hPipe);
+	if (reply_data.fd < 0)
+		return error(_("could not create fd from pipe for '%s'"),
+			     server_thread_data->server_data->buf_path.buf);
+
+	ret = read_packetized_to_strbuf(
+		reply_data.fd, &buf,
+		PACKET_READ_GENTLE_ON_EOF | PACKET_READ_NEVER_DIE);
+	if (ret >= 0) {
+		ret = server_thread_data->server_data->application_cb(
+			server_thread_data->server_data->application_data,
+			buf.buf, do_io_reply_callback, &reply_data);
+
+		packet_flush_gently(reply_data.fd);
+
+		FlushFileBuffers((HANDLE)_get_osfhandle((reply_data.fd)));
+	}
+	else {
+		/*
+		 * The client probably disconnected/shutdown before it
+		 * could send a well-formed message.  Ignore it.
+		 */
+	}
+
+	strbuf_release(&buf);
+	close(reply_data.fd);
+
+	return ret;
+}
+
+/*
+ * Handle IPC request and response with this connected client.  And reset
+ * the pipe to prepare for the next client.
+ */
+static int use_connection(struct ipc_server_thread_data *server_thread_data)
+{
+	int ret;
+
+	ret = do_io(server_thread_data);
+
+	FlushFileBuffers(server_thread_data->hPipe);
+	DisconnectNamedPipe(server_thread_data->hPipe);
+
+	return ret;
+}
+
+/*
+ * Thread proc for an IPC server worker thread.  It handles a series of
+ * connections from clients.  It cleans and reuses the hPipe between each
+ * client.
+ */
+static void *server_thread_proc(void *_server_thread_data)
+{
+	struct ipc_server_thread_data *server_thread_data = _server_thread_data;
+	HANDLE hEventConnected = INVALID_HANDLE_VALUE;
+	OVERLAPPED oConnect;
+	enum connect_result cr;
+	int ret;
+
+	assert(server_thread_data->hPipe != INVALID_HANDLE_VALUE);
+
+	trace2_thread_start("ipc-server");
+	trace2_data_string("ipc-server", NULL, "pipe",
+			   server_thread_data->server_data->buf_path.buf);
+
+	hEventConnected = CreateEventW(NULL, TRUE, FALSE, NULL);
+
+	memset(&oConnect, 0, sizeof(oConnect));
+	oConnect.hEvent = hEventConnected;
+
+	for (;;) {
+		cr = wait_for_connection(server_thread_data, &oConnect);
+
+		switch (cr) {
+		case CR_SHUTDOWN:
+			goto finished;
+
+		case CR_CONNECTED:
+			ret = use_connection(server_thread_data);
+			if (ret == SIMPLE_IPC_QUIT) {
+				ipc_server_stop_async(
+					server_thread_data->server_data);
+				goto finished;
+			}
+			if (ret > 0) {
+				/*
+				 * Ignore (transient) IO errors with this
+				 * client and reset for the next client.
+				 */
+			}
+			break;
+
+		case CR_CONNECT_PENDING:
+			/* By construction, this should not happen. */
+			BUG("ipc-server[%s]: unexpected CR_CONNECT_PENDING",
+			    server_thread_data->server_data->buf_path.buf);
+
+		case CR_CONNECT_ERROR:
+		case CR_WAIT_ERROR:
+			/*
+			 * Ignore these theoretical errors.
+			 */
+			DisconnectNamedPipe(server_thread_data->hPipe);
+			break;
+
+		default:
+			BUG("unhandled case after wait_for_connection");
+		}
+	}
+
+finished:
+	CloseHandle(server_thread_data->hPipe);
+	CloseHandle(hEventConnected);
+
+	trace2_thread_exit();
+	return NULL;
+}
+
+static HANDLE create_new_pipe(wchar_t *wpath, int is_first)
+{
+	HANDLE hPipe;
+	DWORD dwOpenMode, dwPipeMode;
+	LPSECURITY_ATTRIBUTES lpsa = NULL;
+
+	dwOpenMode = PIPE_ACCESS_INBOUND | PIPE_ACCESS_OUTBOUND |
+		FILE_FLAG_OVERLAPPED;
+
+	dwPipeMode = PIPE_TYPE_MESSAGE | PIPE_READMODE_BYTE | PIPE_WAIT |
+		PIPE_REJECT_REMOTE_CLIENTS;
+
+	if (is_first) {
+		dwOpenMode |= FILE_FLAG_FIRST_PIPE_INSTANCE;
+
+		/*
+		 * On Windows, the first server pipe instance gets to
+		 * set the ACL / Security Attributes on the named
+		 * pipe; subsequent instances inherit and cannot
+		 * change them.
+		 *
+		 * TODO Should we allow the application layer to
+		 * specify security attributes, such as `LocalService`
+		 * or `LocalSystem`, when we create the named pipe?
+		 * This question is probably not important when the
+		 * daemon is started by a foreground user process and
+		 * only needs to talk to the current user, but may be
+		 * if the daemon is run via the Control Panel as a
+		 * System Service.
+		 */
+	}
+
+	hPipe = CreateNamedPipeW(wpath, dwOpenMode, dwPipeMode,
+				 PIPE_UNLIMITED_INSTANCES, 1024, 1024, 0, lpsa);
+
+	return hPipe;
+}
+
+int ipc_server_run_async(struct ipc_server_data **returned_server_data,
+			 const char *path, const struct ipc_server_opts *opts,
+			 ipc_server_application_cb *application_cb,
+			 void *application_data)
+{
+	struct ipc_server_data *server_data;
+	wchar_t wpath[MAX_PATH];
+	HANDLE hPipeFirst = INVALID_HANDLE_VALUE;
+	int k;
+	int ret = 0;
+	int nr_threads = opts->nr_threads;
+
+	*returned_server_data = NULL;
+
+	ret = initialize_pipe_name(path, wpath, ARRAY_SIZE(wpath));
+	if (ret < 0)
+		return error(
+			_("could not create normalized wchar_t path for '%s'"),
+			path);
+
+	hPipeFirst = create_new_pipe(wpath, 1);
+	if (hPipeFirst == INVALID_HANDLE_VALUE)
+		return error(_("IPC server already running on '%s'"), path);
+
+	server_data = xcalloc(1, sizeof(*server_data));
+	server_data->magic = MAGIC_SERVER_DATA;
+	server_data->application_cb = application_cb;
+	server_data->application_data = application_data;
+	server_data->hEventStopRequested = CreateEvent(NULL, TRUE, FALSE, NULL);
+	strbuf_init(&server_data->buf_path, 0);
+	strbuf_addstr(&server_data->buf_path, path);
+	wcscpy(server_data->wpath, wpath);
+
+	if (nr_threads < 1)
+		nr_threads = 1;
+
+	for (k = 0; k < nr_threads; k++) {
+		struct ipc_server_thread_data *std;
+
+		std = xcalloc(1, sizeof(*std));
+		std->magic = MAGIC_SERVER_THREAD_DATA;
+		std->server_data = server_data;
+		std->hPipe = INVALID_HANDLE_VALUE;
+
+		std->hPipe = (k == 0)
+			? hPipeFirst
+			: create_new_pipe(server_data->wpath, 0);
+
+		if (std->hPipe == INVALID_HANDLE_VALUE) {
+			/*
+			 * If we've reached a pipe instance limit for
+			 * this path, just use fewer threads.
+			 */
+			free(std);
+			break;
+		}
+
+		if (pthread_create(&std->pthread_id, NULL,
+				   server_thread_proc, std)) {
+			/*
+			 * Likewise, if we're out of threads, just use
+			 * fewer threads than requested.
+			 *
+			 * However, we just give up if we can't even get
+			 * one thread.  This should not happen.
+			 */
+			if (k == 0)
+				die(_("could not start thread[0] for '%s'"),
+				    path);
+
+			CloseHandle(std->hPipe);
+			free(std);
+			break;
+		}
+
+		std->next_thread = server_data->thread_list;
+		server_data->thread_list = std;
+	}
+
+	*returned_server_data = server_data;
+	return 0;
+}
+
+int ipc_server_stop_async(struct ipc_server_data *server_data)
+{
+	if (!server_data)
+		return 0;
+
+	/*
+	 * Gently tell all of the ipc_server threads to shutdown.
+	 * This will be seen the next time they are idle (and waiting
+	 * for a connection).
+	 *
+	 * We DO NOT attempt to force them to drop an active connection.
+	 */
+	SetEvent(server_data->hEventStopRequested);
+	return 0;
+}
+
+int ipc_server_await(struct ipc_server_data *server_data)
+{
+	DWORD dwWaitResult;
+
+	if (!server_data)
+		return 0;
+
+	dwWaitResult = WaitForSingleObject(server_data->hEventStopRequested, INFINITE);
+	if (dwWaitResult != WAIT_OBJECT_0)
+		return error(_("wait for hEvent failed for '%s'"),
+			     server_data->buf_path.buf);
+
+	while (server_data->thread_list) {
+		struct ipc_server_thread_data *std = server_data->thread_list;
+
+		pthread_join(std->pthread_id, NULL);
+
+		server_data->thread_list = std->next_thread;
+		free(std);
+	}
+
+	server_data->is_stopped = 1;
+
+	return 0;
+}
+
+void ipc_server_free(struct ipc_server_data *server_data)
+{
+	if (!server_data)
+		return;
+
+	if (!server_data->is_stopped)
+		BUG("cannot free ipc-server while running for '%s'",
+		    server_data->buf_path.buf);
+
+	strbuf_release(&server_data->buf_path);
+
+	if (server_data->hEventStopRequested != INVALID_HANDLE_VALUE)
+		CloseHandle(server_data->hEventStopRequested);
+
+	while (server_data->thread_list) {
+		struct ipc_server_thread_data *std = server_data->thread_list;
+
+		server_data->thread_list = std->next_thread;
+		free(std);
+	}
+
+	free(server_data);
+}
diff --git a/config.mak.uname b/config.mak.uname
index 198ab1e58f83..76087cff6789 100644
--- a/config.mak.uname
+++ b/config.mak.uname
@@ -421,6 +421,7 @@ ifeq ($(uname_S),Windows)
 	RUNTIME_PREFIX = YesPlease
 	HAVE_WPGMPTR = YesWeDo
 	NO_ST_BLOCKS_IN_STRUCT_STAT = YesPlease
+	USE_WIN32_IPC = YesPlease
 	USE_WIN32_MMAP = YesPlease
 	MMAP_PREVENTS_DELETE = UnfortunatelyYes
 	# USE_NED_ALLOCATOR = YesPlease
@@ -597,6 +598,7 @@ ifneq (,$(findstring MINGW,$(uname_S)))
 	RUNTIME_PREFIX = YesPlease
 	HAVE_WPGMPTR = YesWeDo
 	NO_ST_BLOCKS_IN_STRUCT_STAT = YesPlease
+	USE_WIN32_IPC = YesPlease
 	USE_WIN32_MMAP = YesPlease
 	MMAP_PREVENTS_DELETE = UnfortunatelyYes
 	USE_NED_ALLOCATOR = YesPlease
diff --git a/contrib/buildsystems/CMakeLists.txt b/contrib/buildsystems/CMakeLists.txt
index c151dd7257f3..4bd41054ee70 100644
--- a/contrib/buildsystems/CMakeLists.txt
+++ b/contrib/buildsystems/CMakeLists.txt
@@ -246,6 +246,10 @@ elseif(CMAKE_SYSTEM_NAME STREQUAL "Linux")
 	list(APPEND compat_SOURCES unix-socket.c)
 endif()
 
+if(CMAKE_SYSTEM_NAME STREQUAL "Windows")
+	list(APPEND compat_SOURCES compat/simple-ipc/ipc-shared.c compat/simple-ipc/ipc-win32.c)
+endif()
+
 set(EXE_EXTENSION ${CMAKE_EXECUTABLE_SUFFIX})
 
 #header checks
diff --git a/simple-ipc.h b/simple-ipc.h
new file mode 100644
index 000000000000..a3f96b42cca2
--- /dev/null
+++ b/simple-ipc.h
@@ -0,0 +1,224 @@
+#ifndef GIT_SIMPLE_IPC_H
+#define GIT_SIMPLE_IPC_H
+
+/*
+ * See Documentation/technical/api-simple-ipc.txt
+ */
+
+#if defined(GIT_WINDOWS_NATIVE)
+#define SUPPORTS_SIMPLE_IPC
+#endif
+
+#ifdef SUPPORTS_SIMPLE_IPC
+#include "pkt-line.h"
+
+/*
+ * Simple IPC Client Side API.
+ */
+
+enum ipc_active_state {
+	/*
+	 * The pipe/socket exists and the daemon is waiting for connections.
+	 */
+	IPC_STATE__LISTENING = 0,
+
+	/*
+	 * The pipe/socket exists, but the daemon is not listening.
+	 * Perhaps it is very busy.
+	 * Perhaps the daemon died without deleting the path.
+	 * Perhaps it is shutting down and draining existing clients.
+	 * Perhaps it is dead, but other clients are lingering and
+	 * still holding a reference to the pathname.
+	 */
+	IPC_STATE__NOT_LISTENING,
+
+	/*
+	 * The requested pathname is bogus and no amount of retries
+	 * will fix that.
+	 */
+	IPC_STATE__INVALID_PATH,
+
+	/*
+	 * The requested pathname is not found.  This usually means
+	 * that there is no daemon present.
+	 */
+	IPC_STATE__PATH_NOT_FOUND,
+
+	IPC_STATE__OTHER_ERROR,
+};
+
+struct ipc_client_connect_options {
+	/*
+	 * Spin under timeout if the server is running but can't
+	 * accept our connection yet.  This should always be set
+	 * unless you just want to poke the server and see if it
+	 * is alive.
+	 */
+	unsigned int wait_if_busy:1;
+
+	/*
+	 * Spin under timeout if the pipe/socket is not yet present
+	 * on the file system.  This is useful if we just started
+	 * the service and need to wait for it to become ready.
+	 */
+	unsigned int wait_if_not_found:1;
+};
+
+#define IPC_CLIENT_CONNECT_OPTIONS_INIT { \
+	.wait_if_busy = 0, \
+	.wait_if_not_found = 0, \
+}
+
+/*
+ * Determine if a server is listening on this named pipe or socket using
+ * platform-specific logic.  This might just probe the filesystem or it
+ * might make a trivial connection to the server using this pathname.
+ */
+enum ipc_active_state ipc_get_active_state(const char *path);
+
+struct ipc_client_connection {
+	int fd;
+};
+
+/*
+ * Try to connect to the daemon on the named pipe or socket.
+ *
+ * Returns IPC_STATE__LISTENING and a connection handle.
+ *
+ * Otherwise, returns info to help decide whether to retry or to
+ * spawn/respawn the server.
+ */
+enum ipc_active_state ipc_client_try_connect(
+	const char *path,
+	const struct ipc_client_connect_options *options,
+	struct ipc_client_connection **p_connection);
+
+void ipc_client_close_connection(struct ipc_client_connection *connection);
+
+/*
+ * Used by the client to synchronously send and receive a message with
+ * the server on the provided client connection.
+ *
+ * Returns 0 when successful.
+ *
+ * Calls error() and returns non-zero otherwise.
+ */
+int ipc_client_send_command_to_connection(
+	struct ipc_client_connection *connection,
+	const char *message, struct strbuf *answer);
+
+/*
+ * Used by the client to synchronously connect and send and receive a
+ * message to the server listening at the given path.
+ *
+ * Returns 0 when successful.
+ *
+ * Calls error() and returns non-zero otherwise.
+ */
+int ipc_client_send_command(const char *path,
+			    const struct ipc_client_connect_options *options,
+			    const char *message, struct strbuf *answer);
+
+/*
+ * Simple IPC Server Side API.
+ */
+
+struct ipc_server_reply_data;
+
+typedef int (ipc_server_reply_cb)(struct ipc_server_reply_data *,
+				  const char *response,
+				  size_t response_len);
+
+/*
+ * Prototype for an application-supplied callback to process incoming
+ * client IPC messages and compose a reply.  The `application_cb` should
+ * use the provided `reply_cb` and `reply_data` to send an IPC response
+ * back to the client.  The `reply_cb` callback can be called multiple
+ * times for chunking purposes.  A reply message is optional and may be
+ * omitted if not necessary for the application.
+ *
+ * The return value from the application callback is ignored, except that
+ * returning `SIMPLE_IPC_QUIT` causes the server to shut down.
+ */
+typedef int (ipc_server_application_cb)(void *application_data,
+					const char *request,
+					ipc_server_reply_cb *reply_cb,
+					struct ipc_server_reply_data *reply_data);
+
+#define SIMPLE_IPC_QUIT -2
+
+/*
+ * Opaque instance data to represent an IPC server instance.
+ */
+struct ipc_server_data;
+
+/*
+ * Control parameters for the IPC server instance.
+ * Use this to hide platform-specific settings.
+ */
+struct ipc_server_opts
+{
+	int nr_threads;
+};
+
+/*
+ * Start an IPC server instance in one or more background threads
+ * and return a handle to the pool.
+ *
+ * Returns 0 if the asynchronous server pool was started successfully.
+ * Returns -1 if not.
+ *
+ * When a client IPC message is received, the `application_cb` will be
+ * called (possibly on a random thread) to handle the message and
+ * optionally compose a reply message.
+ */
+int ipc_server_run_async(struct ipc_server_data **returned_server_data,
+			 const char *path, const struct ipc_server_opts *opts,
+			 ipc_server_application_cb *application_cb,
+			 void *application_data);
+
+/*
+ * Gently signal the IPC server pool to shut down.  No new client
+ * connections will be accepted, but existing connections will be
+ * allowed to complete.
+ */
+int ipc_server_stop_async(struct ipc_server_data *server_data);
+
+/*
+ * Block the calling thread until all threads in the IPC server pool
+ * have completed and been joined.
+ */
+int ipc_server_await(struct ipc_server_data *server_data);
+
+/*
+ * Close and free all resource handles associated with the IPC server
+ * pool.
+ */
+void ipc_server_free(struct ipc_server_data *server_data);
+
+/*
+ * Run an IPC server instance and block the calling thread of the
+ * current process.  It does not return until the IPC server has
+ * either shut down or hit an unrecoverable error.
+ *
+ * The IPC server handles incoming IPC messages from client processes
+ * and may use one or more background threads as necessary.
+ *
+ * Returns 0 after the server has completed successfully.
+ * Returns -1 if the server cannot be started.
+ *
+ * When a client IPC message is received, the `application_cb` will be
+ * called (possibly on a random thread) to handle the message and
+ * optionally compose a reply message.
+ *
+ * Note that `ipc_server_run()` is a synchronous wrapper around the
+ * above asynchronous routines.  It effectively hides all of the
+ * server state and thread details from the caller and presents a
+ * simple synchronous interface.
+ */
+int ipc_server_run(const char *path, const struct ipc_server_opts *opts,
+		   ipc_server_application_cb *application_cb,
+		   void *application_data);
+
+#endif /* SUPPORTS_SIMPLE_IPC */
+#endif /* GIT_SIMPLE_IPC_H */
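
To make the API above concrete, here is a minimal illustrative sketch
(not part of the patch; `echo_cb`, `run_echo_daemon` and `echo_once`
are made-up names) of a daemon that echoes each request back to the
client, plus a matching client call:

#include "cache.h"
#include "strbuf.h"
#include "simple-ipc.h"

/* Echo each request back to the client; "quit" asks the server to stop. */
static int echo_cb(void *application_data, const char *request,
		   ipc_server_reply_cb *reply_cb,
		   struct ipc_server_reply_data *reply_data)
{
	if (!strcmp(request, "quit"))
		return SIMPLE_IPC_QUIT;
	return reply_cb(reply_data, request, strlen(request));
}

static int run_echo_daemon(const char *path)
{
	struct ipc_server_opts opts = { .nr_threads = 4 };

	/* Blocks until the server shuts down or hits an unrecoverable error. */
	return ipc_server_run(path, &opts, echo_cb, NULL);
}

static int echo_once(const char *path, const char *message)
{
	struct ipc_client_connect_options options =
		IPC_CLIENT_CONNECT_OPTIONS_INIT;
	struct strbuf answer = STRBUF_INIT;
	int ret;

	options.wait_if_busy = 1;

	ret = ipc_client_send_command(path, &options, message, &answer);
	if (!ret)
		printf("%s\n", answer.buf);
	strbuf_release(&answer);
	return ret;
}

The client side sets only `wait_if_busy`, so a missing socket is
reported immediately instead of being waited for.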
-- 
gitgitgadget


^ permalink raw reply related	[flat|nested] 178+ messages in thread

* [PATCH v3 07/12] unix-socket: eliminate static unix_stream_socket() helper function
  2021-02-13  0:09   ` [PATCH v3 00/12] " Jeff Hostetler via GitGitGadget
                       ` (5 preceding siblings ...)
  2021-02-13  0:09     ` [PATCH v3 06/12] simple-ipc: add win32 implementation Jeff Hostetler via GitGitGadget
@ 2021-02-13  0:09     ` Jeff Hostetler via GitGitGadget
  2021-02-13  0:09     ` [PATCH v3 08/12] unix-socket: add backlog size option to unix_stream_listen() Jeff Hostetler via GitGitGadget
                       ` (5 subsequent siblings)
  12 siblings, 0 replies; 178+ messages in thread
From: Jeff Hostetler via GitGitGadget @ 2021-02-13  0:09 UTC (permalink / raw)
  To: git
  Cc: Jeff Hostetler, Jeff King, SZEDER Gábor,
	Johannes Schindelin, Jeff Hostetler, Jeff Hostetler

From: Jeff Hostetler <jeffhost@microsoft.com>

The static helper function `unix_stream_socket()` calls `die()`.  This
is not appropriate for all callers.  Eliminate the wrapper function
and make the callers propagate the error.

Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
---
 unix-socket.c | 27 +++++++++++++--------------
 1 file changed, 13 insertions(+), 14 deletions(-)

diff --git a/unix-socket.c b/unix-socket.c
index 19ed48be9902..69f81d64e9d5 100644
--- a/unix-socket.c
+++ b/unix-socket.c
@@ -1,14 +1,6 @@
 #include "cache.h"
 #include "unix-socket.h"
 
-static int unix_stream_socket(void)
-{
-	int fd = socket(AF_UNIX, SOCK_STREAM, 0);
-	if (fd < 0)
-		die_errno("unable to create socket");
-	return fd;
-}
-
 static int chdir_len(const char *orig, int len)
 {
 	char *path = xmemdupz(orig, len);
@@ -73,13 +65,16 @@ static int unix_sockaddr_init(struct sockaddr_un *sa, const char *path,
 
 int unix_stream_connect(const char *path)
 {
-	int fd, saved_errno;
+	int fd = -1, saved_errno;
 	struct sockaddr_un sa;
 	struct unix_sockaddr_context ctx;
 
 	if (unix_sockaddr_init(&sa, path, &ctx) < 0)
 		return -1;
-	fd = unix_stream_socket();
+	fd = socket(AF_UNIX, SOCK_STREAM, 0);
+	if (fd < 0)
+		goto fail;
+
 	if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0)
 		goto fail;
 	unix_sockaddr_cleanup(&ctx);
@@ -87,15 +82,16 @@ int unix_stream_connect(const char *path)
 
 fail:
 	saved_errno = errno;
+	if (fd != -1)
+		close(fd);
 	unix_sockaddr_cleanup(&ctx);
-	close(fd);
 	errno = saved_errno;
 	return -1;
 }
 
 int unix_stream_listen(const char *path)
 {
-	int fd, saved_errno;
+	int fd = -1, saved_errno;
 	struct sockaddr_un sa;
 	struct unix_sockaddr_context ctx;
 
@@ -103,7 +99,9 @@ int unix_stream_listen(const char *path)
 
 	if (unix_sockaddr_init(&sa, path, &ctx) < 0)
 		return -1;
-	fd = unix_stream_socket();
+	fd = socket(AF_UNIX, SOCK_STREAM, 0);
+	if (fd < 0)
+		goto fail;
 
 	if (bind(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0)
 		goto fail;
@@ -116,8 +114,9 @@ int unix_stream_listen(const char *path)
 
 fail:
 	saved_errno = errno;
+	if (fd != -1)
+		close(fd);
 	unix_sockaddr_cleanup(&ctx);
-	close(fd);
 	errno = saved_errno;
 	return -1;
 }
-- 
gitgitgadget


^ permalink raw reply related	[flat|nested] 178+ messages in thread

* [PATCH v3 08/12] unix-socket: add backlog size option to unix_stream_listen()
  2021-02-13  0:09   ` [PATCH v3 00/12] " Jeff Hostetler via GitGitGadget
                       ` (6 preceding siblings ...)
  2021-02-13  0:09     ` [PATCH v3 07/12] unix-socket: eliminate static unix_stream_socket() helper function Jeff Hostetler via GitGitGadget
@ 2021-02-13  0:09     ` Jeff Hostetler via GitGitGadget
  2021-02-13  0:09     ` [PATCH v3 09/12] unix-socket: disallow chdir() when creating unix domain sockets Jeff Hostetler via GitGitGadget
                       ` (4 subsequent siblings)
  12 siblings, 0 replies; 178+ messages in thread
From: Jeff Hostetler via GitGitGadget @ 2021-02-13  0:09 UTC (permalink / raw)
  To: git
  Cc: Jeff Hostetler, Jeff King, SZEDER Gábor,
	Johannes Schindelin, Jeff Hostetler, Jeff Hostetler

From: Jeff Hostetler <jeffhost@microsoft.com>

Update `unix_stream_listen()` to take an options structure so that
callers can override default behaviors.  For now, the only option is
the size of the `listen()` backlog.
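
For callers that want a non-default backlog, usage is expected to look
roughly like the following fragment (illustrative only; the value 64
and the `socket_path` variable are arbitrary):

	struct unix_stream_listen_opts opts = UNIX_STREAM_LISTEN_OPTS_INIT;
	int fd;

	/* override DEFAULT_UNIX_STREAM_LISTEN_BACKLOG (5) */
	opts.listen_backlog_size = 64;

	fd = unix_stream_listen(socket_path, &opts);
	if (fd < 0)
		die_errno("unable to bind to '%s'", socket_path);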

Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
---
 builtin/credential-cache--daemon.c |  3 ++-
 unix-socket.c                      |  9 +++++++--
 unix-socket.h                      | 14 +++++++++++++-
 3 files changed, 22 insertions(+), 4 deletions(-)

diff --git a/builtin/credential-cache--daemon.c b/builtin/credential-cache--daemon.c
index c61f123a3b81..4c6c89ab0de2 100644
--- a/builtin/credential-cache--daemon.c
+++ b/builtin/credential-cache--daemon.c
@@ -203,9 +203,10 @@ static int serve_cache_loop(int fd)
 
 static void serve_cache(const char *socket_path, int debug)
 {
+	struct unix_stream_listen_opts opts = UNIX_STREAM_LISTEN_OPTS_INIT;
 	int fd;
 
-	fd = unix_stream_listen(socket_path);
+	fd = unix_stream_listen(socket_path, &opts);
 	if (fd < 0)
 		die_errno("unable to bind to '%s'", socket_path);
 
diff --git a/unix-socket.c b/unix-socket.c
index 69f81d64e9d5..5ac7dafe9828 100644
--- a/unix-socket.c
+++ b/unix-socket.c
@@ -89,9 +89,11 @@ int unix_stream_connect(const char *path)
 	return -1;
 }
 
-int unix_stream_listen(const char *path)
+int unix_stream_listen(const char *path,
+		       const struct unix_stream_listen_opts *opts)
 {
 	int fd = -1, saved_errno;
+	int backlog;
 	struct sockaddr_un sa;
 	struct unix_sockaddr_context ctx;
 
@@ -106,7 +108,10 @@ int unix_stream_listen(const char *path)
 	if (bind(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0)
 		goto fail;
 
-	if (listen(fd, 5) < 0)
+	backlog = opts->listen_backlog_size;
+	if (backlog <= 0)
+		backlog = DEFAULT_UNIX_STREAM_LISTEN_BACKLOG;
+	if (listen(fd, backlog) < 0)
 		goto fail;
 
 	unix_sockaddr_cleanup(&ctx);
diff --git a/unix-socket.h b/unix-socket.h
index e271aeec5a07..06a5a05b03fe 100644
--- a/unix-socket.h
+++ b/unix-socket.h
@@ -1,7 +1,19 @@
 #ifndef UNIX_SOCKET_H
 #define UNIX_SOCKET_H
 
+struct unix_stream_listen_opts {
+	int listen_backlog_size;
+};
+
+#define DEFAULT_UNIX_STREAM_LISTEN_BACKLOG (5)
+
+#define UNIX_STREAM_LISTEN_OPTS_INIT \
+{ \
+	.listen_backlog_size = DEFAULT_UNIX_STREAM_LISTEN_BACKLOG, \
+}
+
 int unix_stream_connect(const char *path);
-int unix_stream_listen(const char *path);
+int unix_stream_listen(const char *path,
+		       const struct unix_stream_listen_opts *opts);
 
 #endif /* UNIX_SOCKET_H */
-- 
gitgitgadget


^ permalink raw reply related	[flat|nested] 178+ messages in thread

* [PATCH v3 09/12] unix-socket: disallow chdir() when creating unix domain sockets
  2021-02-13  0:09   ` [PATCH v3 00/12] " Jeff Hostetler via GitGitGadget
                       ` (7 preceding siblings ...)
  2021-02-13  0:09     ` [PATCH v3 08/12] unix-socket: add backlog size option to unix_stream_listen() Jeff Hostetler via GitGitGadget
@ 2021-02-13  0:09     ` Jeff Hostetler via GitGitGadget
  2021-02-13  0:09     ` [PATCH v3 10/12] unix-socket: create `unix_stream_server__listen_with_lock()` Jeff Hostetler via GitGitGadget
                       ` (3 subsequent siblings)
  12 siblings, 0 replies; 178+ messages in thread
From: Jeff Hostetler via GitGitGadget @ 2021-02-13  0:09 UTC (permalink / raw)
  To: git
  Cc: Jeff Hostetler, Jeff King, SZEDER Gábor,
	Johannes Schindelin, Jeff Hostetler, Jeff Hostetler

From: Jeff Hostetler <jeffhost@microsoft.com>

Calls to `chdir()` are dangerous in a multi-threaded context.  If
`unix_stream_listen()` or `unix_stream_connect()` is given a socket
pathname that is too long to fit in a `sockaddr_un` structure, it will
`chdir()` to the parent directory of the requested socket pathname,
create the socket using a relative pathname, and then `chdir()` back.
This is not thread-safe.

Add a `disallow_chdir` option and teach `unix_sockaddr_init()` to
refuse to call `chdir()` (failing with `ENAMETOOLONG` instead) when
that option is set.
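
A multi-threaded caller is then expected to pass the flag and treat an
over-long path as a hard error, roughly as in this sketch (illustrative
only; `connect_from_thread` is a made-up name):

static int connect_from_thread(const char *path)
{
	int fd = unix_stream_connect(path, 1 /* disallow_chdir */);

	if (fd < 0) {
		if (errno == ENAMETOOLONG)
			return error("socket path '%s' is too long", path);
		return error_errno("could not connect to '%s'", path);
	}

	return fd;
}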

Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
---
 builtin/credential-cache.c |  2 +-
 unix-socket.c              | 17 ++++++++++++-----
 unix-socket.h              |  4 +++-
 3 files changed, 16 insertions(+), 7 deletions(-)

diff --git a/builtin/credential-cache.c b/builtin/credential-cache.c
index 9b3f70990597..76a6ba37223f 100644
--- a/builtin/credential-cache.c
+++ b/builtin/credential-cache.c
@@ -14,7 +14,7 @@
 static int send_request(const char *socket, const struct strbuf *out)
 {
 	int got_data = 0;
-	int fd = unix_stream_connect(socket);
+	int fd = unix_stream_connect(socket, 0);
 
 	if (fd < 0)
 		return -1;
diff --git a/unix-socket.c b/unix-socket.c
index 5ac7dafe9828..1eaa8cf759c0 100644
--- a/unix-socket.c
+++ b/unix-socket.c
@@ -28,16 +28,23 @@ static void unix_sockaddr_cleanup(struct unix_sockaddr_context *ctx)
 }
 
 static int unix_sockaddr_init(struct sockaddr_un *sa, const char *path,
-			      struct unix_sockaddr_context *ctx)
+			      struct unix_sockaddr_context *ctx,
+			      int disallow_chdir)
 {
 	int size = strlen(path) + 1;
 
 	ctx->orig_dir = NULL;
 	if (size > sizeof(sa->sun_path)) {
-		const char *slash = find_last_dir_sep(path);
+		const char *slash;
 		const char *dir;
 		struct strbuf cwd = STRBUF_INIT;
 
+		if (disallow_chdir) {
+			errno = ENAMETOOLONG;
+			return -1;
+		}
+
+		slash = find_last_dir_sep(path);
 		if (!slash) {
 			errno = ENAMETOOLONG;
 			return -1;
@@ -63,13 +70,13 @@ static int unix_sockaddr_init(struct sockaddr_un *sa, const char *path,
 	return 0;
 }
 
-int unix_stream_connect(const char *path)
+int unix_stream_connect(const char *path, int disallow_chdir)
 {
 	int fd = -1, saved_errno;
 	struct sockaddr_un sa;
 	struct unix_sockaddr_context ctx;
 
-	if (unix_sockaddr_init(&sa, path, &ctx) < 0)
+	if (unix_sockaddr_init(&sa, path, &ctx, disallow_chdir) < 0)
 		return -1;
 	fd = socket(AF_UNIX, SOCK_STREAM, 0);
 	if (fd < 0)
@@ -99,7 +106,7 @@ int unix_stream_listen(const char *path,
 
 	unlink(path);
 
-	if (unix_sockaddr_init(&sa, path, &ctx) < 0)
+	if (unix_sockaddr_init(&sa, path, &ctx, opts->disallow_chdir) < 0)
 		return -1;
 	fd = socket(AF_UNIX, SOCK_STREAM, 0);
 	if (fd < 0)
diff --git a/unix-socket.h b/unix-socket.h
index 06a5a05b03fe..2c0b2e79d7b3 100644
--- a/unix-socket.h
+++ b/unix-socket.h
@@ -3,6 +3,7 @@
 
 struct unix_stream_listen_opts {
 	int listen_backlog_size;
+	unsigned int disallow_chdir:1;
 };
 
 #define DEFAULT_UNIX_STREAM_LISTEN_BACKLOG (5)
@@ -10,9 +11,10 @@ struct unix_stream_listen_opts {
 #define UNIX_STREAM_LISTEN_OPTS_INIT \
 { \
 	.listen_backlog_size = DEFAULT_UNIX_STREAM_LISTEN_BACKLOG, \
+	.disallow_chdir = 0, \
 }
 
-int unix_stream_connect(const char *path);
+int unix_stream_connect(const char *path, int disallow_chdir);
 int unix_stream_listen(const char *path,
 		       const struct unix_stream_listen_opts *opts);
 
-- 
gitgitgadget


^ permalink raw reply related	[flat|nested] 178+ messages in thread

* [PATCH v3 10/12] unix-socket: create `unix_stream_server__listen_with_lock()`
  2021-02-13  0:09   ` [PATCH v3 00/12] " Jeff Hostetler via GitGitGadget
                       ` (8 preceding siblings ...)
  2021-02-13  0:09     ` [PATCH v3 09/12] unix-socket: disallow chdir() when creating unix domain sockets Jeff Hostetler via GitGitGadget
@ 2021-02-13  0:09     ` Jeff Hostetler via GitGitGadget
  2021-02-13  0:09     ` [PATCH v3 11/12] simple-ipc: add Unix domain socket implementation Jeff Hostetler via GitGitGadget
                       ` (2 subsequent siblings)
  12 siblings, 0 replies; 178+ messages in thread
From: Jeff Hostetler via GitGitGadget @ 2021-02-13  0:09 UTC (permalink / raw)
  To: git
  Cc: Jeff Hostetler, Jeff King, SZEDER Gábor,
	Johannes Schindelin, Jeff Hostetler, Jeff Hostetler

From: Jeff Hostetler <jeffhost@microsoft.com>

Create a version of `unix_stream_listen()` that uses a ".lock" lockfile
to create the unix domain socket in a race-free manner.

Unix domain sockets have a fundamental problem on Unix systems because
they persist in the filesystem until they are deleted.  This is
independent of whether a server is actually listening for connections.
Well-behaved servers are expected to delete the socket when they
shutdown.  A new server cannot easily tell if a found socket is
attached to an active server or is leftover cruft from a dead server.
The traditional solution used by `unix_stream_listen()` is to
force-delete the socket pathname and then create a new socket.  This
handles the leftover-cruft case, but when the socket belongs to an
active server it orphans that server by stealing the pathname that
its socket is bound to.

We cannot directly use a .lock lockfile to create the socket because
the socket is created by `bind(2)` rather than the `open(2)` mechanism
used by `tempfile.c`.

As an alternative, we hold a plain lockfile ("<path>.lock") as a
mutual exclusion device.  Under the lock, we test whether an existing
socket ("<path>") has an active server behind it.  If not, we create
a new socket and begin listening.  Then we roll back (delete) the
lockfile in all cases.
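
A server built on this helper is expected to follow a pattern roughly
like the sketch below (illustrative only; `serve` is a made-up name and
the accept loop is elided):

static int serve(const char *path)
{
	struct unix_stream_listen_opts opts = UNIX_STREAM_LISTEN_OPTS_INIT;
	struct unix_stream_server_socket *server_socket;

	server_socket = unix_stream_server__listen_with_lock(path, &opts);
	if (!server_socket)
		return -1; /* lock timeout, socket in use, or listen error */

	for (;;) {
		/* accept() and service connections on server_socket->fd_socket */

		if (unix_stream_server__was_stolen(server_socket))
			break; /* "<path>" was re-created by someone else */
	}

	/* closes the fd and unlinks the socket only if it is still ours */
	unix_stream_server__free(server_socket);
	return 0;
}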

Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
---
 unix-socket.c | 115 ++++++++++++++++++++++++++++++++++++++++++++++++++
 unix-socket.h |  29 +++++++++++++
 2 files changed, 144 insertions(+)

diff --git a/unix-socket.c b/unix-socket.c
index 1eaa8cf759c0..647bbde37f97 100644
--- a/unix-socket.c
+++ b/unix-socket.c
@@ -1,4 +1,5 @@
 #include "cache.h"
+#include "lockfile.h"
 #include "unix-socket.h"
 
 static int chdir_len(const char *orig, int len)
@@ -132,3 +133,117 @@ int unix_stream_listen(const char *path,
 	errno = saved_errno;
 	return -1;
 }
+
+static int is_another_server_alive(const char *path,
+				   const struct unix_stream_listen_opts *opts)
+{
+	struct stat st;
+	int fd;
+
+	if (!lstat(path, &st) && S_ISSOCK(st.st_mode)) {
+		/*
+		 * A socket-inode exists on disk at `path`, but we
+		 * don't know whether it belongs to an active server
+		 * or whether the last server died without cleaning
+		 * up.
+		 *
+		 * Poke it with a trivial connection to try to find
+		 * out.
+		 */
+		fd = unix_stream_connect(path, opts->disallow_chdir);
+		if (fd >= 0) {
+			close(fd);
+			return 1;
+		}
+	}
+
+	return 0;
+}
+
+struct unix_stream_server_socket *unix_stream_server__listen_with_lock(
+	const char *path,
+	const struct unix_stream_listen_opts *opts)
+{
+	struct lock_file lock = LOCK_INIT;
+	int fd_socket;
+	struct unix_stream_server_socket *server_socket;
+
+	/*
+	 * Create a lock at "<path>.lock" if we can.
+	 */
+	if (hold_lock_file_for_update_timeout(&lock, path, 0,
+					      opts->timeout_ms) < 0) {
+		error_errno(_("could not lock listener socket '%s'"), path);
+		return NULL;
+	}
+
+	/*
+	 * If another server is listening on "<path>" give up.  We do not
+	 * want to create a socket and steal future connections from them.
+	 */
+	if (is_another_server_alive(path, opts)) {
+		errno = EADDRINUSE;
+		error_errno(_("listener socket already in use '%s'"), path);
+		rollback_lock_file(&lock);
+		return NULL;
+	}
+
+	/*
+	 * Create and bind to a Unix domain socket at "<path>".
+	 */
+	fd_socket = unix_stream_listen(path, opts);
+	if (fd_socket < 0) {
+		error_errno(_("could not create listener socket '%s'"), path);
+		rollback_lock_file(&lock);
+		return NULL;
+	}
+
+	server_socket = xcalloc(1, sizeof(*server_socket));
+	server_socket->path_socket = strdup(path);
+	server_socket->fd_socket = fd_socket;
+	lstat(path, &server_socket->st_socket);
+
+	/*
+	 * Always rollback (just delete) "<path>.lock" because we already created
+	 * "<path>" as a socket and do not want to commit_lock to do the atomic
+	 * rename trick.
+	 */
+	rollback_lock_file(&lock);
+
+	return server_socket;
+}
+
+void unix_stream_server__free(
+	struct unix_stream_server_socket *server_socket)
+{
+	if (!server_socket)
+		return;
+
+	if (server_socket->fd_socket >= 0) {
+		if (!unix_stream_server__was_stolen(server_socket))
+			unlink(server_socket->path_socket);
+		close(server_socket->fd_socket);
+	}
+
+	free(server_socket->path_socket);
+	free(server_socket);
+}
+
+int unix_stream_server__was_stolen(
+	struct unix_stream_server_socket *server_socket)
+{
+	struct stat st_now;
+
+	if (!server_socket)
+		return 0;
+
+	if (lstat(server_socket->path_socket, &st_now) == -1)
+		return 1;
+
+	if (st_now.st_ino != server_socket->st_socket.st_ino)
+		return 1;
+
+	/* We might also consider the ctime on some platforms. */
+
+	return 0;
+}
diff --git a/unix-socket.h b/unix-socket.h
index 2c0b2e79d7b3..8faf5b692f90 100644
--- a/unix-socket.h
+++ b/unix-socket.h
@@ -2,14 +2,17 @@
 #define UNIX_SOCKET_H
 
 struct unix_stream_listen_opts {
+	long timeout_ms;
 	int listen_backlog_size;
 	unsigned int disallow_chdir:1;
 };
 
+#define DEFAULT_UNIX_STREAM_LISTEN_TIMEOUT (100)
 #define DEFAULT_UNIX_STREAM_LISTEN_BACKLOG (5)
 
 #define UNIX_STREAM_LISTEN_OPTS_INIT \
 { \
+	.timeout_ms = DEFAULT_UNIX_STREAM_LISTEN_TIMEOUT, \
 	.listen_backlog_size = DEFAULT_UNIX_STREAM_LISTEN_BACKLOG, \
 	.disallow_chdir = 0, \
 }
@@ -18,4 +21,30 @@ int unix_stream_connect(const char *path, int disallow_chdir);
 int unix_stream_listen(const char *path,
 		       const struct unix_stream_listen_opts *opts);
 
+struct unix_stream_server_socket {
+	char *path_socket;
+	struct stat st_socket;
+	int fd_socket;
+};
+
+/*
+ * Create a Unix Domain Socket at the given path under the protection
+ * of a '.lock' lockfile.
+ */
+struct unix_stream_server_socket *unix_stream_server__listen_with_lock(
+	const char *path,
+	const struct unix_stream_listen_opts *opts);
+
+/*
+ * Close and delete the socket.
+ */
+void unix_stream_server__free(
+	struct unix_stream_server_socket *server_socket);
+
+/*
+ * Return 1 if the inode of the pathname to our socket changes.
+ */
+int unix_stream_server__was_stolen(
+	struct unix_stream_server_socket *server_socket);
+
 #endif /* UNIX_SOCKET_H */
-- 
gitgitgadget


^ permalink raw reply related	[flat|nested] 178+ messages in thread

* [PATCH v3 11/12] simple-ipc: add Unix domain socket implementation
  2021-02-13  0:09   ` [PATCH v3 00/12] " Jeff Hostetler via GitGitGadget
                       ` (9 preceding siblings ...)
  2021-02-13  0:09     ` [PATCH v3 10/12] unix-socket: create `unix_stream_server__listen_with_lock()` Jeff Hostetler via GitGitGadget
@ 2021-02-13  0:09     ` Jeff Hostetler via GitGitGadget
  2021-02-13  0:09     ` [PATCH v3 12/12] t0052: add simple-ipc tests and t/helper/test-simple-ipc tool Jeff Hostetler via GitGitGadget
  2021-02-17 21:48     ` [PATCH v4 00/12] Simple IPC Mechanism Jeff Hostetler via GitGitGadget
  12 siblings, 0 replies; 178+ messages in thread
From: Jeff Hostetler via GitGitGadget @ 2021-02-13  0:09 UTC (permalink / raw)
  To: git
  Cc: Jeff Hostetler, Jeff King, SZEDER Gábor,
	Johannes Schindelin, Jeff Hostetler, Jeff Hostetler

From: Jeff Hostetler <jeffhost@microsoft.com>

Create Unix domain socket based implementation of "simple-ipc".

A set of `ipc_client` routines implement a client library to connect
to an `ipc_server` over a Unix domain socket, send a simple request,
and receive a single response.  Clients use blocking IO on the socket.

A set of `ipc_server` routines implement a thread pool to listen for
and concurrently service client connections.

The server creates a new Unix domain socket at a known location.  If a
socket already exists with that name, the server tries to determine if
another server is already listening on the socket or if the socket is
dead.  If the socket is busy, the server exits with an error rather than
stealing the socket.  If the socket is dead, the server creates a new
one and starts up.

If, while running, the server detects that its socket has been stolen
by another server, it automatically exits.

Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
---
 Makefile                            |   2 +
 compat/simple-ipc/ipc-unix-socket.c | 979 ++++++++++++++++++++++++++++
 contrib/buildsystems/CMakeLists.txt |   2 +
 simple-ipc.h                        |  13 +-
 4 files changed, 995 insertions(+), 1 deletion(-)
 create mode 100644 compat/simple-ipc/ipc-unix-socket.c

diff --git a/Makefile b/Makefile
index 40d5cab78d3f..08a4c88b92f5 100644
--- a/Makefile
+++ b/Makefile
@@ -1677,6 +1677,8 @@ ifdef NO_UNIX_SOCKETS
 	BASIC_CFLAGS += -DNO_UNIX_SOCKETS
 else
 	LIB_OBJS += unix-socket.o
+	LIB_OBJS += compat/simple-ipc/ipc-shared.o
+	LIB_OBJS += compat/simple-ipc/ipc-unix-socket.o
 endif
 
 ifdef USE_WIN32_IPC
diff --git a/compat/simple-ipc/ipc-unix-socket.c b/compat/simple-ipc/ipc-unix-socket.c
new file mode 100644
index 000000000000..b7fd0b34329e
--- /dev/null
+++ b/compat/simple-ipc/ipc-unix-socket.c
@@ -0,0 +1,979 @@
+#include "cache.h"
+#include "simple-ipc.h"
+#include "strbuf.h"
+#include "pkt-line.h"
+#include "thread-utils.h"
+#include "unix-socket.h"
+
+#ifdef NO_UNIX_SOCKETS
+#error compat/simple-ipc/ipc-unix-socket.c requires Unix sockets
+#endif
+
+enum ipc_active_state ipc_get_active_state(const char *path)
+{
+	enum ipc_active_state state = IPC_STATE__OTHER_ERROR;
+	struct ipc_client_connect_options options
+		= IPC_CLIENT_CONNECT_OPTIONS_INIT;
+	struct stat st;
+	struct ipc_client_connection *connection_test = NULL;
+
+	options.wait_if_busy = 0;
+	options.wait_if_not_found = 0;
+
+	if (lstat(path, &st) == -1) {
+		switch (errno) {
+		case ENOENT:
+		case ENOTDIR:
+			return IPC_STATE__NOT_LISTENING;
+		default:
+			return IPC_STATE__INVALID_PATH;
+		}
+	}
+
+	/* also complain if a plain file is in the way */
+	if ((st.st_mode & S_IFMT) != S_IFSOCK)
+		return IPC_STATE__INVALID_PATH;
+
+	/*
+	 * Just because the filesystem has an S_IFSOCK type inode
+	 * at `path`, doesn't mean that there is a server listening.
+	 * Ping it to be sure.
+	 */
+	state = ipc_client_try_connect(path, &options, &connection_test);
+	ipc_client_close_connection(connection_test);
+
+	return state;
+}
+
+/*
+ * This value was chosen at random.
+ */
+#define WAIT_STEP_MS (50)
+
+/*
+ * Try to connect to the server.  If the server is just starting up or
+ * is very busy, we may not get a connection the first time.
+ */
+static enum ipc_active_state connect_to_server(
+	const char *path,
+	int timeout_ms,
+	const struct ipc_client_connect_options *options,
+	int *pfd)
+{
+	int wait_ms = WAIT_STEP_MS;
+	int k;
+
+	*pfd = -1;
+
+	for (k = 0; k < timeout_ms; k += wait_ms) {
+		int fd = unix_stream_connect(path, options->uds_disallow_chdir);
+
+		if (fd != -1) {
+			*pfd = fd;
+			return IPC_STATE__LISTENING;
+		}
+
+		if (errno == ENOENT) {
+			if (!options->wait_if_not_found)
+				return IPC_STATE__PATH_NOT_FOUND;
+
+			goto sleep_and_try_again;
+		}
+
+		if (errno == ETIMEDOUT) {
+			if (!options->wait_if_busy)
+				return IPC_STATE__NOT_LISTENING;
+
+			goto sleep_and_try_again;
+		}
+
+		if (errno == ECONNREFUSED) {
+			if (!options->wait_if_busy)
+				return IPC_STATE__NOT_LISTENING;
+
+			goto sleep_and_try_again;
+		}
+
+		return IPC_STATE__OTHER_ERROR;
+
+	sleep_and_try_again:
+		sleep_millisec(wait_ms);
+	}
+
+	return IPC_STATE__NOT_LISTENING;
+}
+
+/*
+ * A randomly chosen timeout value.
+ */
+#define MY_CONNECTION_TIMEOUT_MS (1000)
+
+enum ipc_active_state ipc_client_try_connect(
+	const char *path,
+	const struct ipc_client_connect_options *options,
+	struct ipc_client_connection **p_connection)
+{
+	enum ipc_active_state state = IPC_STATE__OTHER_ERROR;
+	int fd = -1;
+
+	*p_connection = NULL;
+
+	trace2_region_enter("ipc-client", "try-connect", NULL);
+	trace2_data_string("ipc-client", NULL, "try-connect/path", path);
+
+	state = connect_to_server(path, MY_CONNECTION_TIMEOUT_MS,
+				  options, &fd);
+
+	trace2_data_intmax("ipc-client", NULL, "try-connect/state",
+			   (intmax_t)state);
+	trace2_region_leave("ipc-client", "try-connect", NULL);
+
+	if (state == IPC_STATE__LISTENING) {
+		(*p_connection) = xcalloc(1, sizeof(struct ipc_client_connection));
+		(*p_connection)->fd = fd;
+	}
+
+	return state;
+}
+
+void ipc_client_close_connection(struct ipc_client_connection *connection)
+{
+	if (!connection)
+		return;
+
+	if (connection->fd != -1)
+		close(connection->fd);
+
+	free(connection);
+}
+
+int ipc_client_send_command_to_connection(
+	struct ipc_client_connection *connection,
+	const char *message, struct strbuf *answer)
+{
+	int ret = 0;
+
+	strbuf_setlen(answer, 0);
+
+	trace2_region_enter("ipc-client", "send-command", NULL);
+
+	if (write_packetized_from_buf_no_flush(message, strlen(message),
+					       connection->fd) < 0 ||
+	    packet_flush_gently(connection->fd) < 0) {
+		ret = error(_("could not send IPC command"));
+		goto done;
+	}
+
+	if (read_packetized_to_strbuf(
+		    connection->fd, answer,
+		    PACKET_READ_GENTLE_ON_EOF | PACKET_READ_NEVER_DIE) < 0) {
+		ret = error(_("could not read IPC response"));
+		goto done;
+	}
+
+done:
+	trace2_region_leave("ipc-client", "send-command", NULL);
+	return ret;
+}
+
+int ipc_client_send_command(const char *path,
+			    const struct ipc_client_connect_options *options,
+			    const char *message, struct strbuf *answer)
+{
+	int ret = -1;
+	enum ipc_active_state state;
+	struct ipc_client_connection *connection = NULL;
+
+	state = ipc_client_try_connect(path, options, &connection);
+
+	if (state != IPC_STATE__LISTENING)
+		return ret;
+
+	ret = ipc_client_send_command_to_connection(connection, message, answer);
+
+	ipc_client_close_connection(connection);
+
+	return ret;
+}
+
+static int set_socket_blocking_flag(int fd, int make_nonblocking)
+{
+	int flags;
+
+	flags = fcntl(fd, F_GETFL, NULL);
+
+	if (flags < 0)
+		return -1;
+
+	if (make_nonblocking)
+		flags |= O_NONBLOCK;
+	else
+		flags &= ~O_NONBLOCK;
+
+	return fcntl(fd, F_SETFL, flags);
+}
+
+/*
+ * Magic numbers used to annotate callback instance data.
+ * These are used to help guard against accidentally passing the
+ * wrong instance data across multiple levels of callbacks (which
+ * is easy to do if there are `void*` arguments).
+ */
+enum magic {
+	MAGIC_SERVER_REPLY_DATA,
+	MAGIC_WORKER_THREAD_DATA,
+	MAGIC_ACCEPT_THREAD_DATA,
+	MAGIC_SERVER_DATA,
+};
+
+struct ipc_server_reply_data {
+	enum magic magic;
+	int fd;
+	struct ipc_worker_thread_data *worker_thread_data;
+};
+
+struct ipc_worker_thread_data {
+	enum magic magic;
+	struct ipc_worker_thread_data *next_thread;
+	struct ipc_server_data *server_data;
+	pthread_t pthread_id;
+};
+
+struct ipc_accept_thread_data {
+	enum magic magic;
+	struct ipc_server_data *server_data;
+
+	struct unix_stream_server_socket *server_socket;
+
+	int fd_send_shutdown;
+	int fd_wait_shutdown;
+	pthread_t pthread_id;
+};
+
+/*
+ * With unix-sockets, the conceptual "ipc-server" is implemented as a single
+ * controller "accept-thread" thread and a pool of "worker-thread" threads.
+ * The former does the usual `accept()` loop and dispatches connections
+ * to an idle worker thread.  The worker threads wait in an idle loop for
+ * a new connection, communicate with the client and relay data to/from
+ * the `application_cb` and then wait for another connection from the
+ * server thread.  This avoids the overhead of constantly creating and
+ * destroying threads.
+ */
+struct ipc_server_data {
+	enum magic magic;
+	ipc_server_application_cb *application_cb;
+	void *application_data;
+	struct strbuf buf_path;
+
+	struct ipc_accept_thread_data *accept_thread;
+	struct ipc_worker_thread_data *worker_thread_list;
+
+	pthread_mutex_t work_available_mutex;
+	pthread_cond_t work_available_cond;
+
+	/*
+	 * Accepted but not yet processed client connections are kept
+	 * in a circular buffer FIFO.  The queue is empty when the
+	 * positions are equal.
+	 */
+	int *fifo_fds;
+	int queue_size;
+	int back_pos;
+	int front_pos;
+
+	int shutdown_requested;
+	int is_stopped;
+};
+
+/*
+ * Remove and return the oldest queued connection.
+ *
+ * Returns -1 if empty.
+ */
+static int fifo_dequeue(struct ipc_server_data *server_data)
+{
+	/* ASSERT holding mutex */
+
+	int fd;
+
+	if (server_data->back_pos == server_data->front_pos)
+		return -1;
+
+	fd = server_data->fifo_fds[server_data->front_pos];
+	server_data->fifo_fds[server_data->front_pos] = -1;
+
+	server_data->front_pos++;
+	if (server_data->front_pos == server_data->queue_size)
+		server_data->front_pos = 0;
+
+	return fd;
+}
+
+/*
+ * Push a new fd onto the back of the queue.
+ *
+ * Drop it and return -1 if queue is already full.
+ */
+static int fifo_enqueue(struct ipc_server_data *server_data, int fd)
+{
+	/* ASSERT holding mutex */
+
+	int next_back_pos;
+
+	next_back_pos = server_data->back_pos + 1;
+	if (next_back_pos == server_data->queue_size)
+		next_back_pos = 0;
+
+	if (next_back_pos == server_data->front_pos) {
+		/* Queue is full. Just drop it. */
+		close(fd);
+		return -1;
+	}
+
+	server_data->fifo_fds[server_data->back_pos] = fd;
+	server_data->back_pos = next_back_pos;
+
+	return fd;
+}
+
+/*
+ * Wait for a connection to be queued to the FIFO and return it.
+ *
+ * Returns -1 if someone has already requested a shutdown.
+ */
+static int worker_thread__wait_for_connection(
+	struct ipc_worker_thread_data *worker_thread_data)
+{
+	/* ASSERT NOT holding mutex */
+
+	struct ipc_server_data *server_data = worker_thread_data->server_data;
+	int fd = -1;
+
+	pthread_mutex_lock(&server_data->work_available_mutex);
+	for (;;) {
+		if (server_data->shutdown_requested)
+			break;
+
+		fd = fifo_dequeue(server_data);
+		if (fd >= 0)
+			break;
+
+		pthread_cond_wait(&server_data->work_available_cond,
+				  &server_data->work_available_mutex);
+	}
+	pthread_mutex_unlock(&server_data->work_available_mutex);
+
+	return fd;
+}
+
+/*
+ * Forward declare our reply callback function so that any compiler
+ * errors are reported when we actually define the function (in addition
+ * to any errors reported when we try to pass this callback function as
+ * a parameter in a function call).  The former are easier to understand.
+ */
+static ipc_server_reply_cb do_io_reply_callback;
+
+/*
+ * Relay application's response message to the client process.
+ * (We do not flush at this point because we allow the caller
+ * to chunk data to the client thru us.)
+ */
+static int do_io_reply_callback(struct ipc_server_reply_data *reply_data,
+		       const char *response, size_t response_len)
+{
+	if (reply_data->magic != MAGIC_SERVER_REPLY_DATA)
+		BUG("reply_cb called with wrong instance data");
+
+	return write_packetized_from_buf_no_flush(response, response_len,
+						  reply_data->fd);
+}
+
+/* A randomly chosen value. */
+#define MY_WAIT_POLL_TIMEOUT_MS (10)
+
+/*
+ * If the client hangs up without sending any data on the wire, just
+ * quietly close the socket and ignore this client.
+ *
+ * This worker thread is committed to reading the IPC request data
+ * from the client at the other end of this fd.  Wait here for the
+ * client to actually put something on the wire -- because if the
+ * client just does a ping (connect and hangup without sending any
+ * data), our use of the pkt-line read routines will spew an error
+ * message.
+ *
+ * Return -1 if the client hung up.
+ * Return 0 if data (possibly incomplete) is ready.
+ */
+static int worker_thread__wait_for_io_start(
+	struct ipc_worker_thread_data *worker_thread_data,
+	int fd)
+{
+	struct ipc_server_data *server_data = worker_thread_data->server_data;
+	struct pollfd pollfd[1];
+	int result;
+
+	for (;;) {
+		pollfd[0].fd = fd;
+		pollfd[0].events = POLLIN;
+
+		result = poll(pollfd, 1, MY_WAIT_POLL_TIMEOUT_MS);
+		if (result < 0) {
+			if (errno == EINTR)
+				continue;
+			goto cleanup;
+		}
+
+		if (result == 0) {
+			/* a timeout */
+
+			int in_shutdown;
+
+			pthread_mutex_lock(&server_data->work_available_mutex);
+			in_shutdown = server_data->shutdown_requested;
+			pthread_mutex_unlock(&server_data->work_available_mutex);
+
+			/*
+			 * If a shutdown is already in progress and this
+			 * client has not started talking yet, just drop it.
+			 */
+			if (in_shutdown)
+				goto cleanup;
+			continue;
+		}
+
+		if (pollfd[0].revents & POLLHUP)
+			goto cleanup;
+
+		if (pollfd[0].revents & POLLIN)
+			return 0;
+
+		goto cleanup;
+	}
+
+cleanup:
+	close(fd);
+	return -1;
+}
+
+/*
+ * Receive the request/command from the client and pass it to the
+ * registered request-callback.  The request-callback will compose
+ * a response and call our reply-callback to send it to the client.
+ */
+static int worker_thread__do_io(
+	struct ipc_worker_thread_data *worker_thread_data,
+	int fd)
+{
+	/* ASSERT NOT holding lock */
+
+	struct strbuf buf = STRBUF_INIT;
+	struct ipc_server_reply_data reply_data;
+	int ret = 0;
+
+	reply_data.magic = MAGIC_SERVER_REPLY_DATA;
+	reply_data.worker_thread_data = worker_thread_data;
+
+	reply_data.fd = fd;
+
+	ret = read_packetized_to_strbuf(
+		reply_data.fd, &buf,
+		PACKET_READ_GENTLE_ON_EOF | PACKET_READ_NEVER_DIE);
+	if (ret >= 0) {
+		ret = worker_thread_data->server_data->application_cb(
+			worker_thread_data->server_data->application_data,
+			buf.buf, do_io_reply_callback, &reply_data);
+
+		packet_flush_gently(reply_data.fd);
+	}
+	else {
+		/*
+		 * The client probably disconnected/shutdown before it
+		 * could send a well-formed message.  Ignore it.
+		 */
+	}
+
+	strbuf_release(&buf);
+	close(reply_data.fd);
+
+	return ret;
+}
+
+/*
+ * Block SIGPIPE on the current thread (so that we get EPIPE from
+ * write() rather than an actual signal).
+ *
+ * Note that using sigchain_push() and _pop() to control SIGPIPE
+ * around our IO calls is not thread safe:
+ * [] It uses a global stack of handler frames.
+ * [] It uses ALLOC_GROW() to resize it.
+ * [] Finally, according to the `signal(2)` man-page:
+ *    "The effects of `signal()` in a multithreaded process are unspecified."
+ */
+static void thread_block_sigpipe(sigset_t *old_set)
+{
+	sigset_t new_set;
+
+	sigemptyset(&new_set);
+	sigaddset(&new_set, SIGPIPE);
+
+	sigemptyset(old_set);
+	pthread_sigmask(SIG_BLOCK, &new_set, old_set);
+}
+
+/*
+ * Thread proc for an IPC worker thread.  It handles a series of
+ * connections from clients.  It pulls the next fd from the queue,
+ * processes it, and then waits for the next client.
+ *
+ * Block SIGPIPE in this worker thread for the life of the thread.
+ * This avoids stray (and sometimes delayed) SIGPIPE signals caused
+ * by client errors and/or when we are under extremely heavy IO load.
+ *
+ * This means that the application callback will have SIGPIPE blocked.
+ * The callback should not change it.
+ */
+static void *worker_thread_proc(void *_worker_thread_data)
+{
+	struct ipc_worker_thread_data *worker_thread_data = _worker_thread_data;
+	struct ipc_server_data *server_data = worker_thread_data->server_data;
+	sigset_t old_set;
+	int fd, io;
+	int ret;
+
+	trace2_thread_start("ipc-worker");
+
+	thread_block_sigpipe(&old_set);
+
+	for (;;) {
+		fd = worker_thread__wait_for_connection(worker_thread_data);
+		if (fd == -1)
+			break; /* in shutdown */
+
+		io = worker_thread__wait_for_io_start(worker_thread_data, fd);
+		if (io == -1)
+			continue; /* client hung up without sending anything */
+
+		ret = worker_thread__do_io(worker_thread_data, fd);
+
+		if (ret == SIMPLE_IPC_QUIT) {
+			trace2_data_string("ipc-worker", NULL, "queue_stop_async",
+					   "application_quit");
+			/*
+			 * The application layer is telling the ipc-server
+			 * layer to shutdown.
+			 *
+			 * We DO NOT have a response to send to the client.
+			 *
+			 * Queue an async stop (to stop the other threads) and
+			 * allow this worker thread to exit now (no sense waiting
+			 * for the thread-pool shutdown signal).
+			 *
+			 * Other non-idle worker threads are allowed to finish
+			 * responding to their current clients.
+			 */
+			ipc_server_stop_async(server_data);
+			break;
+		}
+	}
+
+	trace2_thread_exit();
+	return NULL;
+}
+
+/* A randomly chosen value. */
+#define MY_ACCEPT_POLL_TIMEOUT_MS (60 * 1000)
+
+/*
+ * Accept a new client connection on our socket.  This uses non-blocking
+ * IO so that we can also wait for shutdown requests on our socket-pair
+ * without actually spinning on a fast timeout.
+ */
+static int accept_thread__wait_for_connection(
+	struct ipc_accept_thread_data *accept_thread_data)
+{
+	struct pollfd pollfd[2];
+	int result;
+
+	for (;;) {
+		pollfd[0].fd = accept_thread_data->fd_wait_shutdown;
+		pollfd[0].events = POLLIN;
+
+		pollfd[1].fd = accept_thread_data->server_socket->fd_socket;
+		pollfd[1].events = POLLIN;
+
+		result = poll(pollfd, 2, MY_ACCEPT_POLL_TIMEOUT_MS);
+		if (result < 0) {
+			if (errno == EINTR)
+				continue;
+			return result;
+		}
+
+		if (result == 0) {
+			/* a timeout */
+
+			/*
+			 * If someone deletes or force-creates a new unix
+			 * domain socket at our path, all future clients
+			 * will be routed elsewhere and we silently starve.
+			 * If that happens, just queue a shutdown.
+			 */
+			if (unix_stream_server__was_stolen(
+				    accept_thread_data->server_socket)) {
+				trace2_data_string("ipc-accept", NULL,
+						   "queue_stop_async",
+						   "socket_stolen");
+				ipc_server_stop_async(
+					accept_thread_data->server_data);
+			}
+			continue;
+		}
+
+		if (pollfd[0].revents & POLLIN) {
+			/* shutdown message queued to socketpair */
+			return -1;
+		}
+
+		if (pollfd[1].revents & POLLIN) {
+			/* a connection is available on server_socket */
+
+			int client_fd =
+				accept(accept_thread_data->server_socket->fd_socket,
+				       NULL, NULL);
+			if (client_fd >= 0)
+				return client_fd;
+
+			/*
+			 * An error here is unlikely -- it probably
+			 * indicates that the connecting process has
+			 * already dropped the connection.
+			 */
+			continue;
+		}
+
+		BUG("unhandled poll result errno=%d r[0]=%d r[1]=%d",
+		    errno, pollfd[0].revents, pollfd[1].revents);
+	}
+}
+
+/*
+ * Thread proc for the IPC server "accept thread".  This waits for
+ * an incoming socket connection, appends it to the queue of available
+ * connections, and notifies a worker thread to process it.
+ *
+ * Block SIGPIPE in this thread for the life of the thread.  This
+ * avoids any stray SIGPIPE signals when closing pipe fds under
+ * extremely heavy loads (such as when the fifo queue is full and we
+ * drop incoming connections).
+ */
+static void *accept_thread_proc(void *_accept_thread_data)
+{
+	struct ipc_accept_thread_data *accept_thread_data = _accept_thread_data;
+	struct ipc_server_data *server_data = accept_thread_data->server_data;
+	sigset_t old_set;
+
+	trace2_thread_start("ipc-accept");
+
+	thread_block_sigpipe(&old_set);
+
+	for (;;) {
+		int client_fd = accept_thread__wait_for_connection(
+			accept_thread_data);
+
+		pthread_mutex_lock(&server_data->work_available_mutex);
+		if (server_data->shutdown_requested) {
+			pthread_mutex_unlock(&server_data->work_available_mutex);
+			if (client_fd >= 0)
+				close(client_fd);
+			break;
+		}
+
+		if (client_fd < 0) {
+			/* ignore transient accept() errors */
+		}
+		else {
+			fifo_enqueue(server_data, client_fd);
+			pthread_cond_broadcast(&server_data->work_available_cond);
+		}
+		pthread_mutex_unlock(&server_data->work_available_mutex);
+	}
+
+	trace2_thread_exit();
+	return NULL;
+}
+
+/*
+ * We can't predict the connection arrival rate relative to the worker
+ * processing rate, therefore we allow the "accept-thread" to queue up
+ * a generous number of connections, since we'd rather have the client
+ * not time out unnecessarily if we can avoid it.  (The assumption is
+ * that this will be used for FSMonitor and a few second wait on a
+ * connection is better than having the client timeout and do the full
+ * computation itself.)
+ *
+ * The FIFO queue size is set to a multiple of the worker pool size.
+ * This value was chosen at random.
+ */
+#define FIFO_SCALE (100)
+
+/*
+ * The backlog value for `listen(2)`.  This doesn't need to be huge,
+ * rather just large enough for our "accept-thread" to wake up and
+ * queue incoming connections onto the FIFO without the kernel
+ * dropping any.
+ *
+ * This value was chosen at random.
+ */
+#define LISTEN_BACKLOG (50)
+
+static struct unix_stream_server_socket *create_listener_socket(
+	const char *path,
+	const struct ipc_server_opts *ipc_opts)
+{
+	struct unix_stream_server_socket *server_socket = NULL;
+	struct unix_stream_listen_opts uslg_opts = UNIX_STREAM_LISTEN_OPTS_INIT;
+
+	uslg_opts.listen_backlog_size = LISTEN_BACKLOG;
+	uslg_opts.disallow_chdir = ipc_opts->uds_disallow_chdir;
+
+	server_socket = unix_stream_server__listen_with_lock(path, &uslg_opts);
+	if (!server_socket)
+		return NULL;
+
+	if (set_socket_blocking_flag(server_socket->fd_socket, 1)) {
+		int saved_errno = errno;
+		error_errno(_("could not set listener socket nonblocking '%s'"),
+			    path);
+		unix_stream_server__free(server_socket);
+		errno = saved_errno;
+		return NULL;
+	}
+
+	trace2_data_string("ipc-server", NULL, "listen-with-lock", path);
+	return server_socket;
+}
+
+static struct unix_stream_server_socket *setup_listener_socket(
+	const char *path,
+	const struct ipc_server_opts *ipc_opts)
+{
+	struct unix_stream_server_socket *server_socket;
+
+	trace2_region_enter("ipc-server", "create-listener_socket", NULL);
+	server_socket = create_listener_socket(path, ipc_opts);
+	trace2_region_leave("ipc-server", "create-listener_socket", NULL);
+
+	return server_socket;
+}
+
+/*
+ * Start IPC server in a pool of background threads.
+ */
+int ipc_server_run_async(struct ipc_server_data **returned_server_data,
+			 const char *path, const struct ipc_server_opts *opts,
+			 ipc_server_application_cb *application_cb,
+			 void *application_data)
+{
+	struct unix_stream_server_socket *server_socket = NULL;
+	struct ipc_server_data *server_data;
+	int sv[2];
+	int k;
+	int nr_threads = opts->nr_threads;
+
+	*returned_server_data = NULL;
+
+	/*
+	 * Create a socketpair and set sv[1] to non-blocking.  This
+	 * will be used to send a shutdown message to the accept-thread
+	 * and allows the accept-thread to wait on EITHER a client
+	 * connection or a shutdown request without spinning.
+	 */
+	if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0)
+		return error_errno(_("could not create socketpair for '%s'"),
+				   path);
+
+	if (set_socket_blocking_flag(sv[1], 1)) {
+		int saved_errno = errno;
+		close(sv[0]);
+		close(sv[1]);
+		errno = saved_errno;
+		return error_errno(_("making socketpair nonblocking '%s'"),
+				   path);
+	}
+
+	server_socket = setup_listener_socket(path, opts);
+	if (!server_socket) {
+		int saved_errno = errno;
+		close(sv[0]);
+		close(sv[1]);
+		errno = saved_errno;
+		return -1;
+	}
+
+	server_data = xcalloc(1, sizeof(*server_data));
+	server_data->magic = MAGIC_SERVER_DATA;
+	server_data->application_cb = application_cb;
+	server_data->application_data = application_data;
+	strbuf_init(&server_data->buf_path, 0);
+	strbuf_addstr(&server_data->buf_path, path);
+
+	if (nr_threads < 1)
+		nr_threads = 1;
+
+	pthread_mutex_init(&server_data->work_available_mutex, NULL);
+	pthread_cond_init(&server_data->work_available_cond, NULL);
+
+	server_data->queue_size = nr_threads * FIFO_SCALE;
+	server_data->fifo_fds = xcalloc(server_data->queue_size,
+					sizeof(*server_data->fifo_fds));
+
+	server_data->accept_thread =
+		xcalloc(1, sizeof(*server_data->accept_thread));
+	server_data->accept_thread->magic = MAGIC_ACCEPT_THREAD_DATA;
+	server_data->accept_thread->server_data = server_data;
+	server_data->accept_thread->server_socket = server_socket;
+	server_data->accept_thread->fd_send_shutdown = sv[0];
+	server_data->accept_thread->fd_wait_shutdown = sv[1];
+
+	if (pthread_create(&server_data->accept_thread->pthread_id, NULL,
+			   accept_thread_proc, server_data->accept_thread))
+		die_errno(_("could not start accept_thread '%s'"), path);
+
+	for (k = 0; k < nr_threads; k++) {
+		struct ipc_worker_thread_data *wtd;
+
+		wtd = xcalloc(1, sizeof(*wtd));
+		wtd->magic = MAGIC_WORKER_THREAD_DATA;
+		wtd->server_data = server_data;
+
+		if (pthread_create(&wtd->pthread_id, NULL, worker_thread_proc,
+				   wtd)) {
+			if (k == 0)
+				die(_("could not start worker[0] for '%s'"),
+				    path);
+			/*
+			 * Limp along with the thread pool that we have.
+			 */
+			break;
+		}
+
+		wtd->next_thread = server_data->worker_thread_list;
+		server_data->worker_thread_list = wtd;
+	}
+
+	*returned_server_data = server_data;
+	return 0;
+}
+
+/*
+ * Gently tell the IPC server threads to shut down.
+ * Can be run on any thread.
+ */
+int ipc_server_stop_async(struct ipc_server_data *server_data)
+{
+	/* ASSERT NOT holding mutex */
+
+	int fd;
+
+	if (!server_data)
+		return 0;
+
+	trace2_region_enter("ipc-server", "server-stop-async", NULL);
+
+	pthread_mutex_lock(&server_data->work_available_mutex);
+
+	server_data->shutdown_requested = 1;
+
+	/*
+	 * Write a byte to the shutdown socket pair to wake up the
+	 * accept-thread.
+	 */
+	if (write(server_data->accept_thread->fd_send_shutdown, "Q", 1) < 0)
+		error_errno("could not write to fd_send_shutdown");
+
+	/*
+	 * Drain the queue of existing connections.
+	 */
+	while ((fd = fifo_dequeue(server_data)) != -1)
+		close(fd);
+
+	/*
+	 * Gently tell worker threads to stop processing new connections
+	 * and exit.  (This does not abort in-progress conversations.)
+	 */
+	pthread_cond_broadcast(&server_data->work_available_cond);
+
+	pthread_mutex_unlock(&server_data->work_available_mutex);
+
+	trace2_region_leave("ipc-server", "server-stop-async", NULL);
+
+	return 0;
+}
+
+/*
+ * Wait for all IPC server threads to stop.
+ */
+int ipc_server_await(struct ipc_server_data *server_data)
+{
+	pthread_join(server_data->accept_thread->pthread_id, NULL);
+
+	if (!server_data->shutdown_requested)
+		BUG("ipc-server: accept-thread stopped for '%s'",
+		    server_data->buf_path.buf);
+
+	while (server_data->worker_thread_list) {
+		struct ipc_worker_thread_data *wtd =
+			server_data->worker_thread_list;
+
+		pthread_join(wtd->pthread_id, NULL);
+
+		server_data->worker_thread_list = wtd->next_thread;
+		free(wtd);
+	}
+
+	server_data->is_stopped = 1;
+
+	return 0;
+}
+
+void ipc_server_free(struct ipc_server_data *server_data)
+{
+	struct ipc_accept_thread_data * accept_thread_data;
+
+	if (!server_data)
+		return;
+
+	if (!server_data->is_stopped)
+		BUG("cannot free ipc-server while running for '%s'",
+		    server_data->buf_path.buf);
+
+	accept_thread_data = server_data->accept_thread;
+	if (accept_thread_data) {
+		unix_stream_server__free(accept_thread_data->server_socket);
+
+		if (accept_thread_data->fd_send_shutdown != -1)
+			close(accept_thread_data->fd_send_shutdown);
+		if (accept_thread_data->fd_wait_shutdown != -1)
+			close(accept_thread_data->fd_wait_shutdown);
+
+		free(server_data->accept_thread);
+	}
+
+	while (server_data->worker_thread_list) {
+		struct ipc_worker_thread_data *wtd =
+			server_data->worker_thread_list;
+
+		server_data->worker_thread_list = wtd->next_thread;
+		free(wtd);
+	}
+
+	pthread_cond_destroy(&server_data->work_available_cond);
+	pthread_mutex_destroy(&server_data->work_available_mutex);
+
+	strbuf_release(&server_data->buf_path);
+
+	free(server_data->fifo_fds);
+	free(server_data);
+}
diff --git a/contrib/buildsystems/CMakeLists.txt b/contrib/buildsystems/CMakeLists.txt
index 4bd41054ee70..4c27a373414a 100644
--- a/contrib/buildsystems/CMakeLists.txt
+++ b/contrib/buildsystems/CMakeLists.txt
@@ -248,6 +248,8 @@ endif()
 
 if(CMAKE_SYSTEM_NAME STREQUAL "Windows")
 	list(APPEND compat_SOURCES compat/simple-ipc/ipc-shared.c compat/simple-ipc/ipc-win32.c)
+else()
+	list(APPEND compat_SOURCES compat/simple-ipc/ipc-shared.c compat/simple-ipc/ipc-unix-socket.c)
 endif()
 
 set(EXE_EXTENSION ${CMAKE_EXECUTABLE_SUFFIX})
diff --git a/simple-ipc.h b/simple-ipc.h
index a3f96b42cca2..f7e72e966f9a 100644
--- a/simple-ipc.h
+++ b/simple-ipc.h
@@ -5,7 +5,7 @@
  * See Documentation/technical/api-simple-ipc.txt
  */
 
-#if defined(GIT_WINDOWS_NATIVE)
+#if defined(GIT_WINDOWS_NATIVE) || !defined(NO_UNIX_SOCKETS)
 #define SUPPORTS_SIMPLE_IPC
 #endif
 
@@ -62,11 +62,17 @@ struct ipc_client_connect_options {
 	 * the service and need to wait for it to become ready.
 	 */
 	unsigned int wait_if_not_found:1;
+
+	/*
+	 * Disallow chdir() when creating a Unix domain socket.
+	 */
+	unsigned int uds_disallow_chdir:1;
 };
 
 #define IPC_CLIENT_CONNECT_OPTIONS_INIT { \
 	.wait_if_busy = 0, \
 	.wait_if_not_found = 0, \
+	.uds_disallow_chdir = 0, \
 }
 
 /*
@@ -159,6 +165,11 @@ struct ipc_server_data;
 struct ipc_server_opts
 {
 	int nr_threads;
+
+	/*
+	 * Disallow chdir() when creating a Unix domain socket.
+	 */
+	unsigned int uds_disallow_chdir:1;
 };
 
 /*
-- 
gitgitgadget


^ permalink raw reply related	[flat|nested] 178+ messages in thread

* [PATCH v3 12/12] t0052: add simple-ipc tests and t/helper/test-simple-ipc tool
  2021-02-13  0:09   ` [PATCH v3 00/12] " Jeff Hostetler via GitGitGadget
                       ` (10 preceding siblings ...)
  2021-02-13  0:09     ` [PATCH v3 11/12] simple-ipc: add Unix domain socket implementation Jeff Hostetler via GitGitGadget
@ 2021-02-13  0:09     ` Jeff Hostetler via GitGitGadget
  2021-02-13  9:30       ` SZEDER Gábor
  2021-02-17 21:48     ` [PATCH v4 00/12] Simple IPC Mechanism Jeff Hostetler via GitGitGadget
  12 siblings, 1 reply; 178+ messages in thread
From: Jeff Hostetler via GitGitGadget @ 2021-02-13  0:09 UTC (permalink / raw)
  To: git
  Cc: Jeff Hostetler, Jeff King, SZEDER Gábor,
	Johannes Schindelin, Jeff Hostetler, Jeff Hostetler

From: Jeff Hostetler <jeffhost@microsoft.com>

Create t0052-simple-ipc.sh with unit tests for the "simple-ipc" mechanism.

Create t/helper/test-simple-ipc test tool to exercise the "simple-ipc"
functions.

When the tool is invoked with "run-daemon", it runs a server to listen
for "simple-ipc" connections on a test socket or named pipe and
responds to a set of commands to exercise/stress the communication
setup.

When the tool is invoked with "start-daemon", it spawns a "run-daemon"
command in the background and waits for the server to become ready
before exiting.  (This helps make unit tests in t0052 more predictable
and avoids the need for arbitrary sleeps in the test script.)

The tool also has a series of client "send" commands to send commands
and data to a server instance.

Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
---
 Makefile                   |   1 +
 t/helper/test-simple-ipc.c | 713 +++++++++++++++++++++++++++++++++++++
 t/helper/test-tool.c       |   1 +
 t/helper/test-tool.h       |   1 +
 t/t0052-simple-ipc.sh      | 134 +++++++
 5 files changed, 850 insertions(+)
 create mode 100644 t/helper/test-simple-ipc.c
 create mode 100755 t/t0052-simple-ipc.sh

diff --git a/Makefile b/Makefile
index 08a4c88b92f5..93f2e7ca9e1f 100644
--- a/Makefile
+++ b/Makefile
@@ -740,6 +740,7 @@ TEST_BUILTINS_OBJS += test-serve-v2.o
 TEST_BUILTINS_OBJS += test-sha1.o
 TEST_BUILTINS_OBJS += test-sha256.o
 TEST_BUILTINS_OBJS += test-sigchain.o
+TEST_BUILTINS_OBJS += test-simple-ipc.o
 TEST_BUILTINS_OBJS += test-strcmp-offset.o
 TEST_BUILTINS_OBJS += test-string-list.o
 TEST_BUILTINS_OBJS += test-submodule-config.o
diff --git a/t/helper/test-simple-ipc.c b/t/helper/test-simple-ipc.c
new file mode 100644
index 000000000000..92aa7f843cfa
--- /dev/null
+++ b/t/helper/test-simple-ipc.c
@@ -0,0 +1,713 @@
+/*
+ * test-simple-ipc.c: verify that the Inter-Process Communication works.
+ */
+
+#include "test-tool.h"
+#include "cache.h"
+#include "strbuf.h"
+#include "simple-ipc.h"
+#include "parse-options.h"
+#include "thread-utils.h"
+#include "strvec.h"
+
+#ifndef SUPPORTS_SIMPLE_IPC
+int cmd__simple_ipc(int argc, const char **argv)
+{
+	die("simple IPC not available on this platform");
+}
+#else
+
+/*
+ * The test daemon defines an "application callback" that supports a
+ * series of commands (see `test_app_cb()`).
+ *
+ * Unknown commands are caught here and we send an error message back
+ * to the client process.
+ */
+static int app__unhandled_command(const char *command,
+				  ipc_server_reply_cb *reply_cb,
+				  struct ipc_server_reply_data *reply_data)
+{
+	struct strbuf buf = STRBUF_INIT;
+	int ret;
+
+	strbuf_addf(&buf, "unhandled command: %s", command);
+	ret = reply_cb(reply_data, buf.buf, buf.len);
+	strbuf_release(&buf);
+
+	return ret;
+}
+
+/*
+ * Reply with a single very large buffer.  This is to ensure that
+ * long responses are properly handled -- whether the chunking occurs
+ * in the kernel or in the (probably pkt-line) layer.
+ */
+#define BIG_ROWS (10000)
+static int app__big_command(ipc_server_reply_cb *reply_cb,
+			    struct ipc_server_reply_data *reply_data)
+{
+	struct strbuf buf = STRBUF_INIT;
+	int row;
+	int ret;
+
+	for (row = 0; row < BIG_ROWS; row++)
+		strbuf_addf(&buf, "big: %.75d\n", row);
+
+	ret = reply_cb(reply_data, buf.buf, buf.len);
+	strbuf_release(&buf);
+
+	return ret;
+}
+
+/*
+ * Reply with a series of lines.  This is to ensure that we can incrementally
+ * compute the response and chunk it to the client.
+ */
+#define CHUNK_ROWS (10000)
+static int app__chunk_command(ipc_server_reply_cb *reply_cb,
+			      struct ipc_server_reply_data *reply_data)
+{
+	struct strbuf buf = STRBUF_INIT;
+	int row;
+	int ret;
+
+	for (row = 0; row < CHUNK_ROWS; row++) {
+		strbuf_setlen(&buf, 0);
+		strbuf_addf(&buf, "big: %.75d\n", row);
+		ret = reply_cb(reply_data, buf.buf, buf.len);
+	}
+
+	strbuf_release(&buf);
+
+	return ret;
+}
+
+/*
+ * Slowly reply with a series of lines.  This is to model an expensive to
+ * compute chunked response (which might happen if this callback is running
+ * in a thread and is fighting for a lock with other threads).
+ */
+#define SLOW_ROWS     (1000)
+#define SLOW_DELAY_MS (10)
+static int app__slow_command(ipc_server_reply_cb *reply_cb,
+			     struct ipc_server_reply_data *reply_data)
+{
+	struct strbuf buf = STRBUF_INIT;
+	int row;
+	int ret;
+
+	for (row = 0; row < SLOW_ROWS; row++) {
+		strbuf_setlen(&buf, 0);
+		strbuf_addf(&buf, "big: %.75d\n", row);
+		ret = reply_cb(reply_data, buf.buf, buf.len);
+		sleep_millisec(SLOW_DELAY_MS);
+	}
+
+	strbuf_release(&buf);
+
+	return ret;
+}
+
+/*
+ * The client sent a command followed by a (possibly very) large buffer.
+ */
+static int app__sendbytes_command(const char *received,
+				  ipc_server_reply_cb *reply_cb,
+				  struct ipc_server_reply_data *reply_data)
+{
+	struct strbuf buf_resp = STRBUF_INIT;
+	const char *p = "?";
+	int len_ballast = 0;
+	int k;
+	int errs = 0;
+	int ret;
+
+	if (skip_prefix(received, "sendbytes ", &p))
+		len_ballast = strlen(p);
+
+	/*
+	 * Verify that the ballast is n copies of a single letter.
+	 * And that the multi-threaded IO layer didn't cross the streams.
+	 */
+	for (k = 1; k < len_ballast; k++)
+		if (p[k] != p[0])
+			errs++;
+
+	if (errs)
+		strbuf_addf(&buf_resp, "errs:%d\n", errs);
+	else
+		strbuf_addf(&buf_resp, "rcvd:%c%08d\n", p[0], len_ballast);
+
+	ret = reply_cb(reply_data, buf_resp.buf, buf_resp.len);
+
+	strbuf_release(&buf_resp);
+
+	return ret;
+}
+
+/*
+ * An arbitrary fixed address to verify that the application instance
+ * data is handled properly.
+ */
+static int my_app_data = 42;
+
+static ipc_server_application_cb test_app_cb;
+
+/*
+ * This is "application callback" that sits on top of the "ipc-server".
+ * It completely defines the set of command verbs supported by this
+ * application.
+ */
+static int test_app_cb(void *application_data,
+		       const char *command,
+		       ipc_server_reply_cb *reply_cb,
+		       struct ipc_server_reply_data *reply_data)
+{
+	/*
+	 * Verify that we received the application-data that we passed
+	 * when we started the ipc-server.  (We have several layers of
+	 * callbacks calling callbacks and it's easy to get things mixed
+	 * up (especially when some are "void*").)
+	 */
+	if (application_data != (void*)&my_app_data)
+		BUG("application_cb: application_data pointer wrong");
+
+	if (!strcmp(command, "quit")) {
+		/*
+		 * The client sent a "quit" command.  This is an async
+		 * request for the server to shutdown.
+		 *
+		 * We DO NOT send the client a response message
+		 * (because we have nothing to say and the other
+		 * server threads have not yet stopped).
+		 *
+		 * Tell the ipc-server layer to start shutting down.
+		 * This includes: stop listening for new connections
+		 * on the socket/pipe and telling all worker threads
+		 * to finish/drain their outgoing responses to other
+		 * clients.
+		 *
+		 * This DOES NOT force an immediate sync shutdown.
+		 */
+		return SIMPLE_IPC_QUIT;
+	}
+
+	if (!strcmp(command, "ping")) {
+		const char *answer = "pong";
+		return reply_cb(reply_data, answer, strlen(answer));
+	}
+
+	if (!strcmp(command, "big"))
+		return app__big_command(reply_cb, reply_data);
+
+	if (!strcmp(command, "chunk"))
+		return app__chunk_command(reply_cb, reply_data);
+
+	if (!strcmp(command, "slow"))
+		return app__slow_command(reply_cb, reply_data);
+
+	if (starts_with(command, "sendbytes "))
+		return app__sendbytes_command(command, reply_cb, reply_data);
+
+	return app__unhandled_command(command, reply_cb, reply_data);
+}
+
+/*
+ * This process will run as a simple-ipc server and listen for IPC commands
+ * from client processes.
+ */
+static int daemon__run_server(const char *path, int argc, const char **argv)
+{
+	struct ipc_server_opts opts = {
+		.nr_threads = 5
+	};
+
+	const char * const daemon_usage[] = {
+		N_("test-helper simple-ipc run-daemon [<options>]"),
+		NULL
+	};
+	struct option daemon_options[] = {
+		OPT_INTEGER(0, "threads", &opts.nr_threads,
+			    N_("number of threads in server thread pool")),
+		OPT_END()
+	};
+
+	argc = parse_options(argc, argv, NULL, daemon_options, daemon_usage, 0);
+
+	if (opts.nr_threads < 1)
+		opts.nr_threads = 1;
+
+	/*
+	 * Synchronously run the ipc-server.  We don't need any application
+	 * instance data, so pass an arbitrary pointer (that we'll later
+	 * verify made the round trip).
+	 */
+	return ipc_server_run(path, &opts, test_app_cb, (void*)&my_app_data);
+}
+
+#ifndef GIT_WINDOWS_NATIVE
+/*
+ * This is adapted from `daemonize()`.  Use `fork()` to directly create and
+ * run the daemon in a child process.
+ */
+static int spawn_server(const char *path,
+			const struct ipc_server_opts *opts,
+			pid_t *pid)
+{
+	*pid = fork();
+
+	switch (*pid) {
+	case 0:
+		if (setsid() == -1)
+			error_errno(_("setsid failed"));
+		close(0);
+		close(1);
+		close(2);
+		sanitize_stdfds();
+
+		return ipc_server_run(path, opts, test_app_cb, (void*)&my_app_data);
+
+	case -1:
+		return error_errno(_("could not spawn daemon in the background"));
+
+	default:
+		return 0;
+	}
+}
+#else
+/*
+ * Conceptually like `daemonize()` but different because Windows does not
+ * have `fork(2)`.  Spawn a normal Windows child process but without the
+ * limitations of `start_command()` and `finish_command()`.
+ */
+static int spawn_server(const char *path,
+			const struct ipc_server_opts *opts,
+			pid_t *pid)
+{
+	char test_tool_exe[MAX_PATH];
+	struct strvec args = STRVEC_INIT;
+	int in, out;
+
+	GetModuleFileNameA(NULL, test_tool_exe, MAX_PATH);
+
+	in = open("/dev/null", O_RDONLY);
+	out = open("/dev/null", O_WRONLY);
+
+	strvec_push(&args, test_tool_exe);
+	strvec_push(&args, "simple-ipc");
+	strvec_push(&args, "run-daemon");
+	strvec_pushf(&args, "--threads=%d", opts->nr_threads);
+
+	*pid = mingw_spawnvpe(args.v[0], args.v, NULL, NULL, in, out, out);
+	close(in);
+	close(out);
+
+	strvec_clear(&args);
+
+	if (*pid < 0)
+		return error(_("could not spawn daemon in the background"));
+
+	return 0;
+}
+#endif
+
+/*
+ * This is adapted from `wait_or_whine()`.  Watch the child process and
+ * let it get started and begin listening for requests on the socket
+ * before reporting our success.
+ */
+static int wait_for_server_startup(const char *path, pid_t pid_child,
+				   int max_wait_sec)
+{
+	int status;
+	pid_t pid_seen;
+	enum ipc_active_state s;
+	time_t time_limit, now;
+
+	time(&time_limit);
+	time_limit += max_wait_sec;
+
+	for (;;) {
+		pid_seen = waitpid(pid_child, &status, WNOHANG);
+
+		if (pid_seen == -1)
+			return error_errno(_("waitpid failed"));
+
+		else if (pid_seen == 0) {
+			/*
+			 * The child is still running (this should be
+			 * the normal case).  Try to connect to it on
+			 * the socket and see if it is ready for
+			 * business.
+			 *
+			 * If there is another daemon already running,
+			 * our child will fail to start (possibly
+			 * after a timeout on the lock), but we don't
+			 * care who responds as long as the socket is live.
+			 */
+			s = ipc_get_active_state(path);
+			if (s == IPC_STATE__LISTENING)
+				return 0;
+
+			time(&now);
+			if (now > time_limit)
+				return error(_("daemon not online yet"));
+
+			continue;
+		}
+
+		else if (pid_seen == pid_child) {
+			/*
+			 * The new child daemon process shut down while
+			 * it was starting up, so it is not listening
+			 * on the socket.
+			 *
+			 * Try to ping the socket on the off chance
+			 * that another daemon started (or was already
+			 * running) while our child was starting.
+			 *
+			 * Again, we don't care who services the socket.
+			 */
+			s = ipc_get_active_state(path);
+			if (s == IPC_STATE__LISTENING)
+				return 0;
+
+			/*
+			 * We don't care about the WEXITSTATUS() nor
+			 * any of the WIF*(status) values because
+			 * `cmd__simple_ipc()` does the `!!result`
+			 * trick on all function return values.
+			 *
+			 * So it is sufficient to just report the
+			 * early shutdown as an error.
+			 */
+			return error(_("daemon failed to start"));
+		}
+
+		else
+			return error(_("waitpid is confused"));
+	}
+}
+
+/*
+ * This process will start a simple-ipc server in a background process and
+ * wait for it to become ready.  This is like `daemonize()` but gives us
+ * more control and better error reporting (and makes it easier to write
+ * unit tests).
+ */
+static int daemon__start_server(const char *path, int argc, const char **argv)
+{
+	pid_t pid_child;
+	int ret;
+	int max_wait_sec = 60;
+	struct ipc_server_opts opts = {
+		.nr_threads = 5
+	};
+
+	const char * const daemon_usage[] = {
+		N_("test-helper simple-ipc start-daemon [<options>]"),
+		NULL
+	};
+
+	struct option daemon_options[] = {
+		OPT_INTEGER(0, "max-wait", &max_wait_sec,
+			    N_("seconds to wait for daemon to start up")),
+		OPT_INTEGER(0, "threads", &opts.nr_threads,
+			    N_("number of threads in server thread pool")),
+		OPT_END()
+	};
+
+	argc = parse_options(argc, argv, NULL, daemon_options, daemon_usage, 0);
+
+	if (max_wait_sec < 0)
+		max_wait_sec = 0;
+	if (opts.nr_threads < 1)
+		opts.nr_threads = 1;
+
+	/*
+	 * Run the actual daemon in a background process.
+	 */
+	ret = spawn_server(path, &opts, &pid_child);
+	if (pid_child <= 0)
+		return ret;
+
+	/*
+	 * Let the parent wait for the child process to get started
+	 * and begin listening for requests on the socket.
+	 */
+	ret = wait_for_server_startup(path, pid_child, max_wait_sec);
+
+	return ret;
+}
+
+/*
+ * This process will run a quick probe to see if a simple-ipc server
+ * is active on this path.
+ *
+ * Returns 0 if the server is alive.
+ */
+static int client__probe_server(const char *path)
+{
+	enum ipc_active_state s;
+
+	s = ipc_get_active_state(path);
+	switch (s) {
+	case IPC_STATE__LISTENING:
+		return 0;
+
+	case IPC_STATE__NOT_LISTENING:
+		return error("no server listening at '%s'", path);
+
+	case IPC_STATE__PATH_NOT_FOUND:
+		return error("path not found '%s'", path);
+
+	case IPC_STATE__INVALID_PATH:
+		return error("invalid pipe/socket name '%s'", path);
+
+	case IPC_STATE__OTHER_ERROR:
+	default:
+		return error("other error for '%s'", path);
+	}
+}
+
+/*
+ * Send an IPC command to an already-running server daemon and print the
+ * response.
+ *
+ * argv[2] contains a simple (1 word) command verb that `test_app_cb()`
+ * (in the daemon process) will understand.
+ */
+static int client__send_ipc(int argc, const char **argv, const char *path)
+{
+	const char *command = argc > 2 ? argv[2] : "(no command)";
+	struct strbuf buf = STRBUF_INIT;
+	struct ipc_client_connect_options options
+		= IPC_CLIENT_CONNECT_OPTIONS_INIT;
+
+	options.wait_if_busy = 1;
+	options.wait_if_not_found = 0;
+
+	if (!ipc_client_send_command(path, &options, command, &buf)) {
+		if (buf.len) {
+			printf("%s\n", buf.buf);
+			fflush(stdout);
+		}
+		strbuf_release(&buf);
+
+		return 0;
+	}
+
+	return error("failed to send '%s' to '%s'", command, path);
+}
+
+/*
+ * Send an IPC command followed by ballast to confirm that a large
+ * message can be sent and that the kernel or pkt-line layers will
+ * properly chunk it and that the daemon receives the entire message.
+ */
+static int do_sendbytes(int bytecount, char byte, const char *path,
+			const struct ipc_client_connect_options *options)
+{
+	struct strbuf buf_send = STRBUF_INIT;
+	struct strbuf buf_resp = STRBUF_INIT;
+
+	strbuf_addstr(&buf_send, "sendbytes ");
+	strbuf_addchars(&buf_send, byte, bytecount);
+
+	if (!ipc_client_send_command(path, options, buf_send.buf, &buf_resp)) {
+		strbuf_rtrim(&buf_resp);
+		printf("sent:%c%08d %s\n", byte, bytecount, buf_resp.buf);
+		fflush(stdout);
+		strbuf_release(&buf_send);
+		strbuf_release(&buf_resp);
+
+		return 0;
+	}
+
+	return error("client failed to sendbytes(%d, '%c') to '%s'",
+		     bytecount, byte, path);
+}
+
+/*
+ * Send an IPC command with ballast to an already-running server daemon.
+ */
+static int client__sendbytes(int argc, const char **argv, const char *path)
+{
+	int bytecount = 1024;
+	const char *string = "x";
+	const char * const sendbytes_usage[] = {
+		N_("test-helper simple-ipc sendbytes [<options>]"),
+		NULL
+	};
+	struct option sendbytes_options[] = {
+		OPT_INTEGER(0, "bytecount", &bytecount, N_("number of bytes")),
+		OPT_STRING(0, "byte", &string, N_("byte"), N_("ballast")),
+		OPT_END()
+	};
+	struct ipc_client_connect_options options
+		= IPC_CLIENT_CONNECT_OPTIONS_INIT;
+
+	options.wait_if_busy = 1;
+	options.wait_if_not_found = 0;
+	options.uds_disallow_chdir = 0;
+
+	argc = parse_options(argc, argv, NULL, sendbytes_options, sendbytes_usage, 0);
+
+	return do_sendbytes(bytecount, string[0], path, &options);
+}
+
+struct multiple_thread_data {
+	pthread_t pthread_id;
+	struct multiple_thread_data *next;
+	const char *path;
+	int bytecount;
+	int batchsize;
+	int sum_errors;
+	int sum_good;
+	char letter;
+};
+
+static void *multiple_thread_proc(void *_multiple_thread_data)
+{
+	struct multiple_thread_data *d = _multiple_thread_data;
+	int k;
+	struct ipc_client_connect_options options
+		= IPC_CLIENT_CONNECT_OPTIONS_INIT;
+
+	options.wait_if_busy = 1;
+	options.wait_if_not_found = 0;
+	/*
+	 * A multi-threaded client should not be randomly calling chdir().
+	 * The test will pass without this restriction because the test is
+	 * not otherwise accessing the filesystem, but it makes us honest.
+	 */
+	options.uds_disallow_chdir = 1;
+
+	trace2_thread_start("multiple");
+
+	for (k = 0; k < d->batchsize; k++) {
+		if (do_sendbytes(d->bytecount + k, d->letter, d->path, &options))
+			d->sum_errors++;
+		else
+			d->sum_good++;
+	}
+
+	trace2_thread_exit();
+	return NULL;
+}
+
+/*
+ * Start a client-side thread pool.  Each thread sends a series of
+ * IPC requests.  Each request is on a new connection to the server.
+ */
+static int client__multiple(int argc, const char **argv, const char *path)
+{
+	struct multiple_thread_data *list = NULL;
+	int k;
+	int nr_threads = 5;
+	int bytecount = 1;
+	int batchsize = 10;
+	int sum_join_errors = 0;
+	int sum_thread_errors = 0;
+	int sum_good = 0;
+
+	const char * const multiple_usage[] = {
+		N_("test-helper simple-ipc multiple [<options>]"),
+		NULL
+	};
+	struct option multiple_options[] = {
+		OPT_INTEGER(0, "bytecount", &bytecount, N_("number of bytes")),
+		OPT_INTEGER(0, "threads", &nr_threads, N_("number of threads")),
+		OPT_INTEGER(0, "batchsize", &batchsize, N_("number of requests per thread")),
+		OPT_END()
+	};
+
+	argc = parse_options(argc, argv, NULL, multiple_options, multiple_usage, 0);
+
+	if (bytecount < 1)
+		bytecount = 1;
+	if (nr_threads < 1)
+		nr_threads = 1;
+	if (batchsize < 1)
+		batchsize = 1;
+
+	for (k = 0; k < nr_threads; k++) {
+		struct multiple_thread_data *d = xcalloc(1, sizeof(*d));
+		d->next = list;
+		d->path = path;
+		d->bytecount = bytecount + batchsize*(k/26);
+		d->batchsize = batchsize;
+		d->sum_errors = 0;
+		d->sum_good = 0;
+		d->letter = 'A' + (k % 26);
+
+		if (pthread_create(&d->pthread_id, NULL, multiple_thread_proc, d)) {
+			warning("failed to create thread[%d], skipping remainder", k);
+			free(d);
+			break;
+		}
+
+		list = d;
+	}
+
+	while (list) {
+		struct multiple_thread_data *d = list;
+
+		if (pthread_join(d->pthread_id, NULL))
+			sum_join_errors++;
+
+		sum_thread_errors += d->sum_errors;
+		sum_good += d->sum_good;
+
+		list = d->next;
+		free(d);
+	}
+
+	printf("client (good %d) (join %d) (errors %d)\n",
+	       sum_good, sum_join_errors, sum_thread_errors);
+
+	return (sum_join_errors + sum_thread_errors) ? 1 : 0;
+}
+
+int cmd__simple_ipc(int argc, const char **argv)
+{
+	const char *path = "ipc-test";
+
+	if (argc == 2 && !strcmp(argv[1], "SUPPORTS_SIMPLE_IPC"))
+		return 0;
+
+	/*
+	 * Use '!!' on all dispatch functions to map from `error()` style
+	 * (returns -1) style to `test_must_fail` style (expects 1).  This
+	 * (returns -1) to `test_must_fail` style (expects 1).  This
+	 */
+
+	if (argc == 2 && !strcmp(argv[1], "is-active"))
+		return !!client__probe_server(path);
+
+	if (argc >= 2 && !strcmp(argv[1], "run-daemon"))
+		return !!daemon__run_server(path, argc, argv);
+
+	if (argc >= 2 && !strcmp(argv[1], "start-daemon"))
+		return !!daemon__start_server(path, argc, argv);
+
+	/*
+	 * Client commands follow.  Ensure a server is running before
+	 * going any further.
+	 */
+	if (client__probe_server(path))
+		return 1;
+
+	if ((argc == 2 || argc == 3) && !strcmp(argv[1], "send"))
+		return !!client__send_ipc(argc, argv, path);
+
+	if (argc >= 2 && !strcmp(argv[1], "sendbytes"))
+		return !!client__sendbytes(argc, argv, path);
+
+	if (argc >= 2 && !strcmp(argv[1], "multiple"))
+		return !!client__multiple(argc, argv, path);
+
+	die("Unhandled argv[1]: '%s'", argv[1]);
+}
+#endif
diff --git a/t/helper/test-tool.c b/t/helper/test-tool.c
index 9d6d14d92937..a409655f03b5 100644
--- a/t/helper/test-tool.c
+++ b/t/helper/test-tool.c
@@ -64,6 +64,7 @@ static struct test_cmd cmds[] = {
 	{ "sha1", cmd__sha1 },
 	{ "sha256", cmd__sha256 },
 	{ "sigchain", cmd__sigchain },
+	{ "simple-ipc", cmd__simple_ipc },
 	{ "strcmp-offset", cmd__strcmp_offset },
 	{ "string-list", cmd__string_list },
 	{ "submodule-config", cmd__submodule_config },
diff --git a/t/helper/test-tool.h b/t/helper/test-tool.h
index a6470ff62c42..564eb3c8e911 100644
--- a/t/helper/test-tool.h
+++ b/t/helper/test-tool.h
@@ -54,6 +54,7 @@ int cmd__sha1(int argc, const char **argv);
 int cmd__oid_array(int argc, const char **argv);
 int cmd__sha256(int argc, const char **argv);
 int cmd__sigchain(int argc, const char **argv);
+int cmd__simple_ipc(int argc, const char **argv);
 int cmd__strcmp_offset(int argc, const char **argv);
 int cmd__string_list(int argc, const char **argv);
 int cmd__submodule_config(int argc, const char **argv);
diff --git a/t/t0052-simple-ipc.sh b/t/t0052-simple-ipc.sh
new file mode 100755
index 000000000000..e36b786709ec
--- /dev/null
+++ b/t/t0052-simple-ipc.sh
@@ -0,0 +1,134 @@
+#!/bin/sh
+
+test_description='simple command server'
+
+. ./test-lib.sh
+
+test-tool simple-ipc SUPPORTS_SIMPLE_IPC || {
+	skip_all='simple IPC not supported on this platform'
+	test_done
+}
+
+stop_simple_IPC_server () {
+	test-tool simple-ipc send quit
+}
+
+test_expect_success 'start simple command server' '
+	test_atexit stop_simple_IPC_server &&
+	test-tool simple-ipc start-daemon --threads=8 &&
+	test-tool simple-ipc is-active
+'
+
+test_expect_success 'simple command server' '
+	test-tool simple-ipc send ping >actual &&
+	echo pong >expect &&
+	test_cmp expect actual
+'
+
+test_expect_success 'servers cannot share the same path' '
+	test_must_fail test-tool simple-ipc run-daemon &&
+	test-tool simple-ipc is-active
+'
+
+test_expect_success 'big response' '
+	test-tool simple-ipc send big >actual &&
+	test_line_count -ge 10000 actual &&
+	grep -q "big: [0]*9999\$" actual
+'
+
+test_expect_success 'chunk response' '
+	test-tool simple-ipc send chunk >actual &&
+	test_line_count -ge 10000 actual &&
+	grep -q "big: [0]*9999\$" actual
+'
+
+test_expect_success 'slow response' '
+	test-tool simple-ipc send slow >actual &&
+	test_line_count -ge 100 actual &&
+	grep -q "big: [0]*99\$" actual
+'
+
+# Send an IPC message with n=100,000 bytes of ballast.  This should be large
+# enough
+# to force both the kernel and the pkt-line layer to chunk the message to the
+# daemon and for the daemon to receive it in chunks.
+#
+test_expect_success 'sendbytes' '
+	test-tool simple-ipc sendbytes --bytecount=100000 --byte=A >actual &&
+	grep "sent:A00100000 rcvd:A00100000" actual
+'
+
+# Start a series of <threads> client threads that each make <batchsize>
+# IPC requests to the server.  Each of the (<threads> * <batchsize>)
+# requests opens a new connection to the server and randomly binds to a
+# server thread.  Each client thread exits after completing its batch, so
+# the number of live client threads is smaller than the total request count.
+# Each request will send a message containing at least <bytecount> bytes
+# of ballast.  (Responses are small.)
+#
+# The purpose here is to test threading in the server and responding to
+# many concurrent client requests (regardless of whether they come from
+# 1 client process or many).  And to test that the server side of the
+# named pipe/socket is stable.  (On Windows this means that the server
+# pipe is properly recycled.)
+#
+# On Windows it also lets us adjust the connection timeout in
+# `ipc_client_send_command()`.
+#
+# Note it is easy to drive the system into failure by requesting an
+# insane number of threads on client or server and/or increasing the
+# per-thread batchsize or the per-request bytecount (ballast).
+# On Windows these failures look like "pipe is busy" errors.
+# So I've chosen fairly conservative values for now.
+#
+# We expect output of the form "sent:<letter><length> ..."
+# With terms (7, 19, 13) we expect:
+#   <letter> in [A-G]
+#   <length> in [19+0 .. 19+(13-1)]
+# and (7 * 13) successful responses.
+#
+test_expect_success 'stress test threads' '
+	test-tool simple-ipc multiple \
+		--threads=7 \
+		--bytecount=19 \
+		--batchsize=13 \
+		>actual &&
+	test_line_count = 92 actual &&
+	grep "good 91" actual &&
+	grep "sent:A" <actual >actual_a &&
+	cat >expect_a <<-EOF &&
+		sent:A00000019 rcvd:A00000019
+		sent:A00000020 rcvd:A00000020
+		sent:A00000021 rcvd:A00000021
+		sent:A00000022 rcvd:A00000022
+		sent:A00000023 rcvd:A00000023
+		sent:A00000024 rcvd:A00000024
+		sent:A00000025 rcvd:A00000025
+		sent:A00000026 rcvd:A00000026
+		sent:A00000027 rcvd:A00000027
+		sent:A00000028 rcvd:A00000028
+		sent:A00000029 rcvd:A00000029
+		sent:A00000030 rcvd:A00000030
+		sent:A00000031 rcvd:A00000031
+	EOF
+	test_cmp expect_a actual_a
+'
+
+# Sending a "quit" message to the server causes it to start an "async
+# shutdown" -- queuing shutdown events to all socket/pipe thread-pool
+# threads.  Each thread will process that event after finishing
+# (draining) any in-progress IO with other clients.  So when the "send
+# quit" client command exits, the ipc-server may still be running (but
+# it should be cleaning up).
+#
+# So, insert a generous sleep here to give the server time to shutdown.
+#
+test_expect_success '`quit` works' '
+	test-tool simple-ipc send quit &&
+
+	sleep 5 &&
+
+	test_must_fail test-tool simple-ipc is-active &&
+	test_must_fail test-tool simple-ipc send ping
+'
+
+test_done
-- 
gitgitgadget

^ permalink raw reply related	[flat|nested] 178+ messages in thread

* Re: [PATCH v3 12/12] t0052: add simple-ipc tests and t/helper/test-simple-ipc tool
  2021-02-13  0:09     ` [PATCH v3 12/12] t0052: add simple-ipc tests and t/helper/test-simple-ipc tool Jeff Hostetler via GitGitGadget
@ 2021-02-13  9:30       ` SZEDER Gábor
  2021-02-16 15:53         ` Jeff Hostetler
  0 siblings, 1 reply; 178+ messages in thread
From: SZEDER Gábor @ 2021-02-13  9:30 UTC (permalink / raw)
  To: Jeff Hostetler via GitGitGadget
  Cc: git, Jeff Hostetler, Jeff King, Johannes Schindelin, Jeff Hostetler

On Sat, Feb 13, 2021 at 12:09:13AM +0000, Jeff Hostetler via GitGitGadget wrote:
> From: Jeff Hostetler <jeffhost@microsoft.com>
> 
> Create t0052-simple-ipc.sh with unit tests for the "simple-ipc" mechanism.
> 
> Create t/helper/test-simple-ipc test tool to exercise the "simple-ipc"
> functions.
> 
> When the tool is invoked with "run-daemon", it runs a server to listen
> for "simple-ipc" connections on a test socket or named pipe and
> responds to a set of commands to exercise/stress the communication
> setup.
> 
> When the tool is invoked with "start-daemon", it spawns a "run-daemon"
> command in the background and waits for the server to become ready
> before exiting.  (This helps make unit tests in t0052 more predictable
> and avoids the need for arbitrary sleeps in the test script.)
> 
> The tool also has a series of client "send" commands to send commands
> and data to a server instance.
> 
> Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
> ---

> diff --git a/t/helper/test-simple-ipc.c b/t/helper/test-simple-ipc.c
> new file mode 100644
> index 000000000000..92aa7f843cfa
> --- /dev/null
> +++ b/t/helper/test-simple-ipc.c

[...]

> +/*
> + * This is "application callback" that sits on top of the "ipc-server".
> + * It completely defines the set of command verbs supported by this

Please avoid the noiseword "verbs" and just call them commands; a few
of these commands are not even verbs.

> + * application.
> + */
> +static int test_app_cb(void *application_data,
> +		       const char *command,
> +		       ipc_server_reply_cb *reply_cb,
> +		       struct ipc_server_reply_data *reply_data)
> +{
> +	/*
> +	 * Verify that we received the application-data that we passed
> +	 * when we started the ipc-server.  (We have several layers of
> +	 * callbacks calling callbacks and it's easy to get things mixed
> +	 * up (especially when some are "void*").)
> +	 */
> +	if (application_data != (void*)&my_app_data)
> +		BUG("application_cb: application_data pointer wrong");
> +
> +	if (!strcmp(command, "quit")) {
> +		/*
> +		 * The client sent a "quit" command.  This is an async
> +		 * request for the server to shutdown.
> +		 *
> +		 * We DO NOT send the client a response message
> +		 * (because we have nothing to say and the other
> +		 * server threads have not yet stopped).
> +		 *
> +		 * Tell the ipc-server layer to start shutting down.
> +		 * This includes: stop listening for new connections
> +		 * on the socket/pipe and telling all worker threads
> +		 * to finish/drain their outgoing responses to other
> +		 * clients.
> +		 *
> +		 * This DOES NOT force an immediate sync shutdown.
> +		 */
> +		return SIMPLE_IPC_QUIT;
> +	}
> +
> +	if (!strcmp(command, "ping")) {
> +		const char *answer = "pong";
> +		return reply_cb(reply_data, answer, strlen(answer));
> +	}
> +
> +	if (!strcmp(command, "big"))
> +		return app__big_command(reply_cb, reply_data);
> +
> +	if (!strcmp(command, "chunk"))
> +		return app__chunk_command(reply_cb, reply_data);
> +
> +	if (!strcmp(command, "slow"))
> +		return app__slow_command(reply_cb, reply_data);
> +
> +	if (starts_with(command, "sendbytes "))
> +		return app__sendbytes_command(command, reply_cb, reply_data);
> +
> +	return app__unhandled_command(command, reply_cb, reply_data);
> +}

[...]

> +int cmd__simple_ipc(int argc, const char **argv)
> +{
> +	const char *path = "ipc-test";

Since the path of the socket used in the tests is hardcoded, we could
use it in the tests as well to check its presence/absence.

[...]

> diff --git a/t/t0052-simple-ipc.sh b/t/t0052-simple-ipc.sh
> new file mode 100755
> index 000000000000..e36b786709ec
> --- /dev/null
> +++ b/t/t0052-simple-ipc.sh
> @@ -0,0 +1,134 @@

[...]

> +# Sending a "quit" message to the server causes it to start an "async
> +# shutdown" -- queuing shutdown events to all socket/pipe thread-pool
> +# threads.  Each thread will process that event after finishing
> +# (draining) any in-progress IO with other clients.  So when the "send
> +# quit" client command exits, the ipc-server may still be running (but
> +# it should be cleaning up).
> +#
> +# So, insert a generous sleep here to give the server time to shutdown.
> +#
> +test_expect_success '`quit` works' '
> +	test-tool simple-ipc send quit &&
> +
> +	sleep 5 &&

The server process is responsible for removing the socket, so instead
of a hard-coded 5-second delay the test could (semi-)busy wait in a
loop until the socket disappears like this:

diff --git a/t/t0052-simple-ipc.sh b/t/t0052-simple-ipc.sh
index 6958835454..609d8d4283 100755
--- a/t/t0052-simple-ipc.sh
+++ b/t/t0052-simple-ipc.sh
@@ -122,6 +122,13 @@ test_expect_success 'stress test threads' '
 
 test_expect_success '`quit` works' '
 	test-tool simple-ipc send quit &&
+	nr_tries_left=10 &&
+	while test -S ipc-test &&
+	      test $nr_tries_left -gt 0
+	do
+		sleep 1
+		nr_tries_left=$(($nr_tries_left - 1))
+	done &&
 	test_must_fail test-tool simple-ipc is-active &&
 	test_must_fail test-tool simple-ipc send ping
 '

This way we might get away without any delay or with only a single
one-second sleep in most cases, while we could bump the timeout a bit
higher for the sake of a CI system in a particularly bad mood.

Would this work on Windows, or at least could it be tweaked to work
there?

I think this is conceptually the same as what you did at startup,
except in this example the test script waits instead of the test-tool
subcommand.  Perhaps it would be worth incorporating this wait into
the test-tool as well; or perhaps it would be simpler to do the
waiting in the test script at startup as well.

> +	test_must_fail test-tool simple-ipc is-active &&
> +	test_must_fail test-tool simple-ipc send ping
> +'
> +
> +test_done
> -- 
> gitgitgadget

^ permalink raw reply related	[flat|nested] 178+ messages in thread

* Re: [PATCH v3 12/12] t0052: add simple-ipc tests and t/helper/test-simple-ipc tool
  2021-02-13  9:30       ` SZEDER Gábor
@ 2021-02-16 15:53         ` Jeff Hostetler
  0 siblings, 0 replies; 178+ messages in thread
From: Jeff Hostetler @ 2021-02-16 15:53 UTC (permalink / raw)
  To: SZEDER Gábor, Jeff Hostetler via GitGitGadget
  Cc: git, Jeff King, Johannes Schindelin, Jeff Hostetler



On 2/13/21 4:30 AM, SZEDER Gábor wrote:

[...]

> [...]
> 
>> +int cmd__simple_ipc(int argc, const char **argv)
>> +{
>> +	const char *path = "ipc-test";
> 
> Since the path of the socket used in the tests is hardcoded, we could
> use it in the tests as well to check its presence/absence.
> 
> [...]
> 
>> diff --git a/t/t0052-simple-ipc.sh b/t/t0052-simple-ipc.sh
>> new file mode 100755
>> index 000000000000..e36b786709ec
>> --- /dev/null
>> +++ b/t/t0052-simple-ipc.sh
>> @@ -0,0 +1,134 @@
> 
> [...]
> 
>> +# Sending a "quit" message to the server causes it to start an "async
>> +# shutdown" -- queuing shutdown events to all socket/pipe thread-pool
>> +# threads.  Each thread will process that event after finishing
>> +# (draining) any in-progress IO with other clients.  So when the "send
>> +# quit" client command exits, the ipc-server may still be running (but
>> +# it should be cleaning up).
>> +#
>> +# So, insert a generous sleep here to give the server time to shutdown.
>> +#
>> +test_expect_success '`quit` works' '
>> +	test-tool simple-ipc send quit &&
>> +
>> +	sleep 5 &&
> 
> The server process is responsible for removing the socket, so instead
> of a hard-coded 5 seconds delay the test could (semi-)busy wait in a
> loop until the socket disappears like this:
> 
> diff --git a/t/t0052-simple-ipc.sh b/t/t0052-simple-ipc.sh
> index 6958835454..609d8d4283 100755
> --- a/t/t0052-simple-ipc.sh
> +++ b/t/t0052-simple-ipc.sh
> @@ -122,6 +122,13 @@ test_expect_success 'stress test threads' '
>   
>   test_expect_success '`quit` works' '
>   	test-tool simple-ipc send quit &&
> +	nr_tries_left=10 &&
> +	while test -S ipc-test &&
> +	      test $nr_tries_left -gt 0
> +	do
> +		sleep 1
> +		nr_tries_left=$(($nr_tries_left - 1))
> +	done &&
>   	test_must_fail test-tool simple-ipc is-active &&
>   	test_must_fail test-tool simple-ipc send ping
>   '
> 
> This way we might get away without any delay or with only a single
> one-second sleep in most cases, while we could bump the timeout a bit
> higher for the sake of a CI system in a particularly bad mood.
> 
> Would this work on Windows, or at least could it be tweaked to work
> there?
> 
> I think this is conceptually the same as what you did at startup,
> except in this example the test script waits instead of the test-tool
> subcommand.  Perhaps it would be worth incorporating this wait into
> the test-tool as well; or perhaps it would be simpler to do the
> waiting in the test script at startup as well.

Thanks for the suggestions.  Let me take another pass at
it.  I think making the "send quit" command try to wait until
the server has shut down would make it easier for all concerned.
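
Roughly, something along these lines (an untested sketch; the
`wait_for_server_shutdown()` name is just a placeholder, but it could
sit next to `wait_for_server_startup()` in t/helper/test-simple-ipc.c
and reuse the existing `ipc_get_active_state()` and `sleep_millisec()`
helpers):

static int wait_for_server_shutdown(const char *path, int max_wait_sec)
{
	enum ipc_active_state s;
	time_t time_limit, now;

	time(&time_limit);
	time_limit += max_wait_sec;

	for (;;) {
		sleep_millisec(100);

		/*
		 * The socket/pipe is gone or no longer listening, so
		 * assume the daemon has exited (or soon will).
		 */
		s = ipc_get_active_state(path);
		if (s != IPC_STATE__LISTENING)
			return 0;

		time(&now);
		if (now > time_limit)
			return error(_("daemon has not shut down yet"));
	}
}
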

> 
>> +	test_must_fail test-tool simple-ipc is-active &&
>> +	test_must_fail test-tool simple-ipc send ping
>> +'
>> +
>> +test_done
>> -- 
>> gitgitgadget

^ permalink raw reply	[flat|nested] 178+ messages in thread

* [PATCH v4 00/12] Simple IPC Mechanism
  2021-02-13  0:09   ` [PATCH v3 00/12] " Jeff Hostetler via GitGitGadget
                       ` (11 preceding siblings ...)
  2021-02-13  0:09     ` [PATCH v3 12/12] t0052: add simple-ipc tests and t/helper/test-simple-ipc tool Jeff Hostetler via GitGitGadget
@ 2021-02-17 21:48     ` Jeff Hostetler via GitGitGadget
  2021-02-17 21:48       ` [PATCH v4 01/12] pkt-line: eliminate the need for static buffer in packet_write_gently() Jeff Hostetler via GitGitGadget
                         ` (13 more replies)
  12 siblings, 14 replies; 178+ messages in thread
From: Jeff Hostetler via GitGitGadget @ 2021-02-17 21:48 UTC (permalink / raw)
  To: git
  Cc: Jeff Hostetler, Jeff King, SZEDER Gábor,
	Johannes Schindelin, Jeff Hostetler

Here is V4 of my "Simple IPC" series. It addresses Gábor's comment WRT
shutting down the server to make unit tests more predictable on CI servers.
(https://lore.kernel.org/git/20210213093052.GJ1015009@szeder.dev)

Jeff

cc: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
cc: Jeff Hostetler <git@jeffhostetler.com>
cc: Jeff King <peff@peff.net>
cc: Chris Torek <chris.torek@gmail.com>

Jeff Hostetler (9):
  pkt-line: eliminate the need for static buffer in
    packet_write_gently()
  simple-ipc: design documentation for new IPC mechanism
  simple-ipc: add win32 implementation
  unix-socket: eliminate static unix_stream_socket() helper function
  unix-socket: add backlog size option to unix_stream_listen()
  unix-socket: disallow chdir() when creating unix domain sockets
  unix-socket: create `unix_stream_server__listen_with_lock()`
  simple-ipc: add Unix domain socket implementation
  t0052: add simple-ipc tests and t/helper/test-simple-ipc tool

Johannes Schindelin (3):
  pkt-line: do not issue flush packets in write_packetized_*()
  pkt-line: (optionally) libify the packet readers
  pkt-line: add options argument to read_packetized_to_strbuf()

 Documentation/technical/api-simple-ipc.txt |  34 +
 Makefile                                   |   8 +
 builtin/credential-cache--daemon.c         |   3 +-
 builtin/credential-cache.c                 |   2 +-
 compat/simple-ipc/ipc-shared.c             |  28 +
 compat/simple-ipc/ipc-unix-socket.c        | 979 +++++++++++++++++++++
 compat/simple-ipc/ipc-win32.c              | 749 ++++++++++++++++
 config.mak.uname                           |   2 +
 contrib/buildsystems/CMakeLists.txt        |   6 +
 convert.c                                  |  16 +-
 pkt-line.c                                 |  57 +-
 pkt-line.h                                 |  20 +-
 simple-ipc.h                               | 235 +++++
 t/helper/test-simple-ipc.c                 | 773 ++++++++++++++++
 t/helper/test-tool.c                       |   1 +
 t/helper/test-tool.h                       |   1 +
 t/t0052-simple-ipc.sh                      | 122 +++
 unix-socket.c                              | 168 +++-
 unix-socket.h                              |  47 +-
 19 files changed, 3198 insertions(+), 53 deletions(-)
 create mode 100644 Documentation/technical/api-simple-ipc.txt
 create mode 100644 compat/simple-ipc/ipc-shared.c
 create mode 100644 compat/simple-ipc/ipc-unix-socket.c
 create mode 100644 compat/simple-ipc/ipc-win32.c
 create mode 100644 simple-ipc.h
 create mode 100644 t/helper/test-simple-ipc.c
 create mode 100755 t/t0052-simple-ipc.sh


base-commit: 773e25afc41b1b6533fa9ae2cd825d0b4a697fad
Published-As: https://github.com/gitgitgadget/git/releases/tag/pr-766%2Fjeffhostetler%2Fsimple-ipc-v4
Fetch-It-Via: git fetch https://github.com/gitgitgadget/git pr-766/jeffhostetler/simple-ipc-v4
Pull-Request: https://github.com/gitgitgadget/git/pull/766

Range-diff vs v3:

  1:  2d6858b1625a =  1:  2d6858b1625a pkt-line: eliminate the need for static buffer in packet_write_gently()
  2:  91a9f63d6692 =  2:  91a9f63d6692 pkt-line: do not issue flush packets in write_packetized_*()
  3:  e05467def4e1 =  3:  e05467def4e1 pkt-line: (optionally) libify the packet readers
  4:  81e14bed955c =  4:  81e14bed955c pkt-line: add options argument to read_packetized_to_strbuf()
  5:  22eec60761a8 =  5:  22eec60761a8 simple-ipc: design documentation for new IPC mechanism
  6:  171ec43ecfa4 =  6:  171ec43ecfa4 simple-ipc: add win32 implementation
  7:  b368318e6a23 =  7:  b368318e6a23 unix-socket: eliminate static unix_stream_socket() helper function
  8:  985b2e02b2df =  8:  985b2e02b2df unix-socket: add backlog size option to unix_stream_listen()
  9:  1bfa36409d07 =  9:  1bfa36409d07 unix-socket: disallow chdir() when creating unix domain sockets
 10:  b443e11ac32f = 10:  b443e11ac32f unix-socket: create `unix_stream_server__listen_with_lock()`
 11:  43c8db9a4468 = 11:  43c8db9a4468 simple-ipc: add Unix domain socket implementation
 12:  1e5c856ade85 ! 12:  09568a6500dd t0052: add simple-ipc tests and t/helper/test-simple-ipc tool
     @@ t/helper/test-simple-ipc.c (new)
      +static ipc_server_application_cb test_app_cb;
      +
      +/*
     -+ * This is "application callback" that sits on top of the "ipc-server".
     -+ * It completely defines the set of command verbs supported by this
     -+ * application.
     ++ * This is the "application callback" that sits on top of the
     ++ * "ipc-server".  It completely defines the set of commands supported
     ++ * by this application.
      + */
      +static int test_app_cb(void *application_data,
      +		       const char *command,
     @@ t/helper/test-simple-ipc.c (new)
      + * Send an IPC command to an already-running server daemon and print the
      + * response.
      + *
     -+ * argv[2] contains a simple (1 word) command verb that `test_app_cb()`
     -+ * (in the daemon process) will understand.
     ++ * argv[2] contains a simple (1 word) command that `test_app_cb()` (in
     ++ * the daemon process) will understand.
      + */
      +static int client__send_ipc(int argc, const char **argv, const char *path)
      +{
     @@ t/helper/test-simple-ipc.c (new)
      +}
      +
      +/*
     ++ * Send an IPC command to an already-running server and ask it to
     ++ * shutdown.  "send quit" is an async request and queues a shutdown
     ++ * event in the server, so we spin and wait here for it to actually
     ++ * shutdown to make the unit tests a little easier to write.
     ++ */
     ++static int client__stop_server(int argc, const char **argv, const char *path)
     ++{
     ++	const char *send_quit[] = { argv[0], "send", "quit", NULL };
     ++	int max_wait_sec = 60;
     ++	int ret;
     ++	time_t time_limit, now;
     ++	enum ipc_active_state s;
     ++
     ++	const char * const stop_usage[] = {
     ++		N_("test-helper simple-ipc stop-daemon [<options>]"),
     ++		NULL
     ++	};
     ++
     ++	struct option stop_options[] = {
     ++		OPT_INTEGER(0, "max-wait", &max_wait_sec,
     ++			    N_("seconds to wait for daemon to stop")),
     ++		OPT_END()
     ++	};
     ++
     ++	argc = parse_options(argc, argv, NULL, stop_options, stop_usage, 0);
     ++
     ++	if (max_wait_sec < 0)
     ++		max_wait_sec = 0;
     ++
     ++	time(&time_limit);
     ++	time_limit += max_wait_sec;
     ++
     ++	ret = client__send_ipc(3, send_quit, path);
     ++	if (ret)
     ++		return ret;
     ++
     ++	for (;;) {
     ++		sleep_millisec(100);
     ++
     ++		s = ipc_get_active_state(path);
     ++
     ++		if (s != IPC_STATE__LISTENING) {
     ++			/*
     ++			 * The socket/pipe is gone and/or has stopped
     ++			 * responding.  Lets assume that the daemon
     ++			 * process has exited too.
     ++			 */
     ++			return 0;
     ++		}
     ++
     ++		time(&now);
     ++		if (now > time_limit)
     ++			return error(_("daemon has not shutdown yet"));
     ++	}
     ++}
     ++
     ++/*
      + * Send an IPC command followed by ballast to confirm that a large
      + * message can be sent and that the kernel or pkt-line layers will
      + * properly chunk it and that the daemon receives the entire message.
     @@ t/helper/test-simple-ipc.c (new)
      +	if (client__probe_server(path))
      +		return 1;
      +
     ++	if (argc >= 2 && !strcmp(argv[1], "stop-daemon"))
     ++		return !!client__stop_server(argc, argv, path);
     ++
      +	if ((argc == 2 || argc == 3) && !strcmp(argv[1], "send"))
      +		return !!client__send_ipc(argc, argv, path);
      +
     @@ t/t0052-simple-ipc.sh (new)
      +}
      +
      +stop_simple_IPC_server () {
     -+	test-tool simple-ipc send quit
     ++	test-tool simple-ipc stop-daemon
      +}
      +
      +test_expect_success 'start simple command server' '
     @@ t/t0052-simple-ipc.sh (new)
      +	test_cmp expect_a actual_a
      +'
      +
     -+# Sending a "quit" message to the server causes it to start an "async
     -+# shutdown" -- queuing shutdown events to all socket/pipe thread-pool
     -+# threads.  Each thread will process that event after finishing
     -+# (draining) any in-progress IO with other clients.  So when the "send
     -+# quit" client command exits, the ipc-server may still be running (but
     -+# it should be cleaning up).
     -+#
     -+# So, insert a generous sleep here to give the server time to shutdown.
     -+#
     -+test_expect_success '`quit` works' '
     -+	test-tool simple-ipc send quit &&
     -+
     -+	sleep 5 &&
     -+
     ++test_expect_success 'stop-daemon works' '
     ++	test-tool simple-ipc stop-daemon &&
      +	test_must_fail test-tool simple-ipc is-active &&
      +	test_must_fail test-tool simple-ipc send ping
      +'

-- 
gitgitgadget

^ permalink raw reply	[flat|nested] 178+ messages in thread

* [PATCH v4 01/12] pkt-line: eliminate the need for static buffer in packet_write_gently()
  2021-02-17 21:48     ` [PATCH v4 00/12] Simple IPC Mechanism Jeff Hostetler via GitGitGadget
@ 2021-02-17 21:48       ` Jeff Hostetler via GitGitGadget
  2021-02-26  7:21         ` Jeff King
  2021-02-17 21:48       ` [PATCH v4 02/12] pkt-line: do not issue flush packets in write_packetized_*() Johannes Schindelin via GitGitGadget
                         ` (12 subsequent siblings)
  13 siblings, 1 reply; 178+ messages in thread
From: Jeff Hostetler via GitGitGadget @ 2021-02-17 21:48 UTC (permalink / raw)
  To: git
  Cc: Jeff Hostetler, Jeff King, SZEDER Gábor,
	Johannes Schindelin, Jeff Hostetler, Jeff Hostetler

From: Jeff Hostetler <jeffhost@microsoft.com>

Teach `packet_write_gently()` to write the pkt-line header and the actual
buffer in 2 separate calls to `write_in_full()` and avoid the need for a
static buffer, thread-safe scratch space, or an excessively large stack
buffer.

Change the API of `write_packetized_from_fd()` to accept a scratch space
argument from its caller to avoid similar issues here.

These changes are intended to make it easier to use pkt-line routines in
a multi-threaded context with multiple concurrent writers writing to
different streams.

Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
---
 convert.c  |  7 ++++---
 pkt-line.c | 28 +++++++++++++++++++---------
 pkt-line.h | 12 +++++++++---
 3 files changed, 32 insertions(+), 15 deletions(-)

diff --git a/convert.c b/convert.c
index ee360c2f07ce..41012c2d301c 100644
--- a/convert.c
+++ b/convert.c
@@ -883,9 +883,10 @@ static int apply_multi_file_filter(const char *path, const char *src, size_t len
 	if (err)
 		goto done;
 
-	if (fd >= 0)
-		err = write_packetized_from_fd(fd, process->in);
-	else
+	if (fd >= 0) {
+		struct packet_scratch_space scratch;
+		err = write_packetized_from_fd(fd, process->in, &scratch);
+	} else
 		err = write_packetized_from_buf(src, len, process->in);
 	if (err)
 		goto done;
diff --git a/pkt-line.c b/pkt-line.c
index d633005ef746..4cff2f7a68a5 100644
--- a/pkt-line.c
+++ b/pkt-line.c
@@ -196,17 +196,25 @@ int packet_write_fmt_gently(int fd, const char *fmt, ...)
 
 static int packet_write_gently(const int fd_out, const char *buf, size_t size)
 {
-	static char packet_write_buffer[LARGE_PACKET_MAX];
+	char header[4];
 	size_t packet_size;
 
-	if (size > sizeof(packet_write_buffer) - 4)
+	if (size > LARGE_PACKET_DATA_MAX)
 		return error(_("packet write failed - data exceeds max packet size"));
 
 	packet_trace(buf, size, 1);
 	packet_size = size + 4;
-	set_packet_header(packet_write_buffer, packet_size);
-	memcpy(packet_write_buffer + 4, buf, size);
-	if (write_in_full(fd_out, packet_write_buffer, packet_size) < 0)
+
+	set_packet_header(header, packet_size);
+
+	/*
+	 * Write the header and the buffer in 2 parts so that we do not need
+	 * to allocate a buffer or rely on a static buffer.  This avoids perf
+	 * and multi-threading issues.
+	 */
+
+	if (write_in_full(fd_out, header, 4) < 0 ||
+	    write_in_full(fd_out, buf, size) < 0)
 		return error(_("packet write failed"));
 	return 0;
 }
@@ -242,19 +250,21 @@ void packet_buf_write_len(struct strbuf *buf, const char *data, size_t len)
 	packet_trace(data, len, 1);
 }
 
-int write_packetized_from_fd(int fd_in, int fd_out)
+int write_packetized_from_fd(int fd_in, int fd_out,
+			     struct packet_scratch_space *scratch)
 {
-	static char buf[LARGE_PACKET_DATA_MAX];
 	int err = 0;
 	ssize_t bytes_to_write;
 
 	while (!err) {
-		bytes_to_write = xread(fd_in, buf, sizeof(buf));
+		bytes_to_write = xread(fd_in, scratch->buffer,
+				       sizeof(scratch->buffer));
 		if (bytes_to_write < 0)
 			return COPY_READ_ERROR;
 		if (bytes_to_write == 0)
 			break;
-		err = packet_write_gently(fd_out, buf, bytes_to_write);
+		err = packet_write_gently(fd_out, scratch->buffer,
+					  bytes_to_write);
 	}
 	if (!err)
 		err = packet_flush_gently(fd_out);
diff --git a/pkt-line.h b/pkt-line.h
index 8c90daa59ef0..c0722aefe638 100644
--- a/pkt-line.h
+++ b/pkt-line.h
@@ -5,6 +5,13 @@
 #include "strbuf.h"
 #include "sideband.h"
 
+#define LARGE_PACKET_MAX 65520
+#define LARGE_PACKET_DATA_MAX (LARGE_PACKET_MAX - 4)
+
+struct packet_scratch_space {
+	char buffer[LARGE_PACKET_DATA_MAX]; /* does not include header bytes */
+};
+
 /*
  * Write a packetized stream, where each line is preceded by
  * its length (including the header) as a 4-byte hex number.
@@ -32,7 +39,7 @@ void packet_buf_write(struct strbuf *buf, const char *fmt, ...) __attribute__((f
 void packet_buf_write_len(struct strbuf *buf, const char *data, size_t len);
 int packet_flush_gently(int fd);
 int packet_write_fmt_gently(int fd, const char *fmt, ...) __attribute__((format (printf, 2, 3)));
-int write_packetized_from_fd(int fd_in, int fd_out);
+int write_packetized_from_fd(int fd_in, int fd_out, struct packet_scratch_space *scratch);
 int write_packetized_from_buf(const char *src_in, size_t len, int fd_out);
 
 /*
@@ -213,8 +220,7 @@ enum packet_read_status packet_reader_read(struct packet_reader *reader);
 enum packet_read_status packet_reader_peek(struct packet_reader *reader);
 
 #define DEFAULT_PACKET_MAX 1000
-#define LARGE_PACKET_MAX 65520
-#define LARGE_PACKET_DATA_MAX (LARGE_PACKET_MAX - 4)
+
 extern char packet_buffer[LARGE_PACKET_MAX];
 
 struct packet_writer {
-- 
gitgitgadget


^ permalink raw reply related	[flat|nested] 178+ messages in thread

* [PATCH v4 02/12] pkt-line: do not issue flush packets in write_packetized_*()
  2021-02-17 21:48     ` [PATCH v4 00/12] Simple IPC Mechanism Jeff Hostetler via GitGitGadget
  2021-02-17 21:48       ` [PATCH v4 01/12] pkt-line: eliminate the need for static buffer in packet_write_gently() Jeff Hostetler via GitGitGadget
@ 2021-02-17 21:48       ` Johannes Schindelin via GitGitGadget
  2021-02-17 21:48       ` [PATCH v4 03/12] pkt-line: (optionally) libify the packet readers Johannes Schindelin via GitGitGadget
                         ` (11 subsequent siblings)
  13 siblings, 0 replies; 178+ messages in thread
From: Johannes Schindelin via GitGitGadget @ 2021-02-17 21:48 UTC (permalink / raw)
  To: git
  Cc: Jeff Hostetler, Jeff King, SZEDER Gábor,
	Johannes Schindelin, Jeff Hostetler, Johannes Schindelin

From: Johannes Schindelin <johannes.schindelin@gmx.de>

Remove the `packet_flush_gently()` call in `write_packetized_from_buf()` and
`write_packetized_from_fd()` and require the caller to call it if desired.
Rename both functions to `write_packetized_from_*_no_flush()` to prevent
later merge accidents.

`write_packetized_from_buf()` currently only has one caller:
`apply_multi_file_filter()` in `convert.c`.  It always wants a flush packet
to be written after writing the payload.

However, we are about to introduce a caller that wants to write many
packets before a final flush packet, so let's make the caller responsible
for emitting the flush packet.

Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
---
 convert.c  |  8 ++++++--
 pkt-line.c | 10 +++-------
 pkt-line.h |  4 ++--
 3 files changed, 11 insertions(+), 11 deletions(-)

diff --git a/convert.c b/convert.c
index 41012c2d301c..bccf7afa8797 100644
--- a/convert.c
+++ b/convert.c
@@ -885,9 +885,13 @@ static int apply_multi_file_filter(const char *path, const char *src, size_t len
 
 	if (fd >= 0) {
 		struct packet_scratch_space scratch;
-		err = write_packetized_from_fd(fd, process->in, &scratch);
+		err = write_packetized_from_fd_no_flush(fd, process->in, &scratch);
 	} else
-		err = write_packetized_from_buf(src, len, process->in);
+		err = write_packetized_from_buf_no_flush(src, len, process->in);
+	if (err)
+		goto done;
+
+	err = packet_flush_gently(process->in);
 	if (err)
 		goto done;
 
diff --git a/pkt-line.c b/pkt-line.c
index 4cff2f7a68a5..3602b0d37092 100644
--- a/pkt-line.c
+++ b/pkt-line.c
@@ -250,8 +250,8 @@ void packet_buf_write_len(struct strbuf *buf, const char *data, size_t len)
 	packet_trace(data, len, 1);
 }
 
-int write_packetized_from_fd(int fd_in, int fd_out,
-			     struct packet_scratch_space *scratch)
+int write_packetized_from_fd_no_flush(int fd_in, int fd_out,
+				      struct packet_scratch_space *scratch)
 {
 	int err = 0;
 	ssize_t bytes_to_write;
@@ -266,12 +266,10 @@ int write_packetized_from_fd(int fd_in, int fd_out,
 		err = packet_write_gently(fd_out, scratch->buffer,
 					  bytes_to_write);
 	}
-	if (!err)
-		err = packet_flush_gently(fd_out);
 	return err;
 }
 
-int write_packetized_from_buf(const char *src_in, size_t len, int fd_out)
+int write_packetized_from_buf_no_flush(const char *src_in, size_t len, int fd_out)
 {
 	int err = 0;
 	size_t bytes_written = 0;
@@ -287,8 +285,6 @@ int write_packetized_from_buf(const char *src_in, size_t len, int fd_out)
 		err = packet_write_gently(fd_out, src_in + bytes_written, bytes_to_write);
 		bytes_written += bytes_to_write;
 	}
-	if (!err)
-		err = packet_flush_gently(fd_out);
 	return err;
 }
 
diff --git a/pkt-line.h b/pkt-line.h
index c0722aefe638..a7149429ac35 100644
--- a/pkt-line.h
+++ b/pkt-line.h
@@ -39,8 +39,8 @@ void packet_buf_write(struct strbuf *buf, const char *fmt, ...) __attribute__((f
 void packet_buf_write_len(struct strbuf *buf, const char *data, size_t len);
 int packet_flush_gently(int fd);
 int packet_write_fmt_gently(int fd, const char *fmt, ...) __attribute__((format (printf, 2, 3)));
-int write_packetized_from_fd(int fd_in, int fd_out, struct packet_scratch_space *scratch);
-int write_packetized_from_buf(const char *src_in, size_t len, int fd_out);
+int write_packetized_from_fd_no_flush(int fd_in, int fd_out, struct packet_scratch_space *scratch);
+int write_packetized_from_buf_no_flush(const char *src_in, size_t len, int fd_out);
 
 /*
  * Read a packetized line into the buffer, which must be at least size bytes
-- 
gitgitgadget


^ permalink raw reply related	[flat|nested] 178+ messages in thread

* [PATCH v4 03/12] pkt-line: (optionally) libify the packet readers
  2021-02-17 21:48     ` [PATCH v4 00/12] Simple IPC Mechanism Jeff Hostetler via GitGitGadget
  2021-02-17 21:48       ` [PATCH v4 01/12] pkt-line: eliminate the need for static buffer in packet_write_gently() Jeff Hostetler via GitGitGadget
  2021-02-17 21:48       ` [PATCH v4 02/12] pkt-line: do not issue flush packets in write_packetized_*() Johannes Schindelin via GitGitGadget
@ 2021-02-17 21:48       ` Johannes Schindelin via GitGitGadget
  2021-03-03 19:53         ` Junio C Hamano
  2021-02-17 21:48       ` [PATCH v4 04/12] pkt-line: add options argument to read_packetized_to_strbuf() Johannes Schindelin via GitGitGadget
                         ` (10 subsequent siblings)
  13 siblings, 1 reply; 178+ messages in thread
From: Johannes Schindelin via GitGitGadget @ 2021-02-17 21:48 UTC (permalink / raw)
  To: git
  Cc: Jeff Hostetler, Jeff King, SZEDER Gábor,
	Johannes Schindelin, Jeff Hostetler, Johannes Schindelin

From: Johannes Schindelin <johannes.schindelin@gmx.de>

So far, the (possibly indirect) callers of `get_packet_data()` can ask
that function to return an error instead of `die()`ing upon end-of-file.
However, random read errors will still cause the process to die.

So let's introduce an explicit option to tell the packet reader
machinery to please be nice and only return an error.

This change prepares pkt-line for use by long-running daemon processes.
Such processes should be able to serve multiple concurrent clients and
survive random IO errors.  If there is an error on one connection,
a daemon should be able to drop that connection and continue serving
existing and future connections.

This ability will be used by a Git-aware "Internal FSMonitor" feature
in a later patch series.

Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
---
 pkt-line.c | 19 +++++++++++++++++--
 pkt-line.h |  4 ++++
 2 files changed, 21 insertions(+), 2 deletions(-)

diff --git a/pkt-line.c b/pkt-line.c
index 3602b0d37092..83c46e6b46ee 100644
--- a/pkt-line.c
+++ b/pkt-line.c
@@ -304,8 +304,11 @@ static int get_packet_data(int fd, char **src_buf, size_t *src_size,
 		*src_size -= ret;
 	} else {
 		ret = read_in_full(fd, dst, size);
-		if (ret < 0)
+		if (ret < 0) {
+			if (options & PACKET_READ_NEVER_DIE)
+				return error_errno(_("read error"));
 			die_errno(_("read error"));
+		}
 	}
 
 	/* And complain if we didn't get enough bytes to satisfy the read. */
@@ -313,6 +316,8 @@ static int get_packet_data(int fd, char **src_buf, size_t *src_size,
 		if (options & PACKET_READ_GENTLE_ON_EOF)
 			return -1;
 
+		if (options & PACKET_READ_NEVER_DIE)
+			return error(_("the remote end hung up unexpectedly"));
 		die(_("the remote end hung up unexpectedly"));
 	}
 
@@ -341,6 +346,9 @@ enum packet_read_status packet_read_with_status(int fd, char **src_buffer,
 	len = packet_length(linelen);
 
 	if (len < 0) {
+		if (options & PACKET_READ_NEVER_DIE)
+			return error(_("protocol error: bad line length "
+				       "character: %.4s"), linelen);
 		die(_("protocol error: bad line length character: %.4s"), linelen);
 	} else if (!len) {
 		packet_trace("0000", 4, 0);
@@ -355,12 +363,19 @@ enum packet_read_status packet_read_with_status(int fd, char **src_buffer,
 		*pktlen = 0;
 		return PACKET_READ_RESPONSE_END;
 	} else if (len < 4) {
+		if (options & PACKET_READ_NEVER_DIE)
+			return error(_("protocol error: bad line length %d"),
+				     len);
 		die(_("protocol error: bad line length %d"), len);
 	}
 
 	len -= 4;
-	if ((unsigned)len >= size)
+	if ((unsigned)len >= size) {
+		if (options & PACKET_READ_NEVER_DIE)
+			return error(_("protocol error: bad line length %d"),
+				     len);
 		die(_("protocol error: bad line length %d"), len);
+	}
 
 	if (get_packet_data(fd, src_buffer, src_len, buffer, len, options) < 0) {
 		*pktlen = -1;
diff --git a/pkt-line.h b/pkt-line.h
index a7149429ac35..2e472efaf2c5 100644
--- a/pkt-line.h
+++ b/pkt-line.h
@@ -75,10 +75,14 @@ int write_packetized_from_buf_no_flush(const char *src_in, size_t len, int fd_ou
  *
  * If options contains PACKET_READ_DIE_ON_ERR_PACKET, it dies when it sees an
  * ERR packet.
+ *
+ * With `PACKET_READ_NEVER_DIE`, no errors are allowed to trigger die() (except
+ * an ERR packet, when `PACKET_READ_DIE_ON_ERR_PACKET` is in effect).
  */
 #define PACKET_READ_GENTLE_ON_EOF     (1u<<0)
 #define PACKET_READ_CHOMP_NEWLINE     (1u<<1)
 #define PACKET_READ_DIE_ON_ERR_PACKET (1u<<2)
+#define PACKET_READ_NEVER_DIE         (1u<<3)
 int packet_read(int fd, char **src_buffer, size_t *src_len, char
 		*buffer, unsigned size, int options);
 
-- 
gitgitgadget


^ permalink raw reply related	[flat|nested] 178+ messages in thread

* [PATCH v4 04/12] pkt-line: add options argument to read_packetized_to_strbuf()
  2021-02-17 21:48     ` [PATCH v4 00/12] Simple IPC Mechanism Jeff Hostetler via GitGitGadget
                         ` (2 preceding siblings ...)
  2021-02-17 21:48       ` [PATCH v4 03/12] pkt-line: (optionally) libify the packet readers Johannes Schindelin via GitGitGadget
@ 2021-02-17 21:48       ` Johannes Schindelin via GitGitGadget
  2021-02-17 21:48       ` [PATCH v4 05/12] simple-ipc: design documentation for new IPC mechanism Jeff Hostetler via GitGitGadget
                         ` (9 subsequent siblings)
  13 siblings, 0 replies; 178+ messages in thread
From: Johannes Schindelin via GitGitGadget @ 2021-02-17 21:48 UTC (permalink / raw)
  To: git
  Cc: Jeff Hostetler, Jeff King, SZEDER Gábor,
	Johannes Schindelin, Jeff Hostetler, Johannes Schindelin

From: Johannes Schindelin <johannes.schindelin@gmx.de>

Update the calling sequence of `read_packetized_to_strbuf()` to take
an options argument and not assume a fixed set of options.  Update the
only existing caller accordingly to explicitly pass the
formerly-assumed flags.

The `read_packetized_to_strbuf()` function calls `packet_read()` with
a fixed set of assumed options (`PACKET_READ_GENTLE_ON_EOF`).  This
assumption has been fine for the single existing caller
`apply_multi_file_filter()` in `convert.c`.

In a later commit we would like to add other callers to
`read_packetized_to_strbuf()` that need a different set of options.

Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
---
 convert.c  | 3 ++-
 pkt-line.c | 4 ++--
 pkt-line.h | 2 +-
 3 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/convert.c b/convert.c
index bccf7afa8797..9f44f00d841f 100644
--- a/convert.c
+++ b/convert.c
@@ -908,7 +908,8 @@ static int apply_multi_file_filter(const char *path, const char *src, size_t len
 		if (err)
 			goto done;
 
-		err = read_packetized_to_strbuf(process->out, &nbuf) < 0;
+		err = read_packetized_to_strbuf(process->out, &nbuf,
+						PACKET_READ_GENTLE_ON_EOF) < 0;
 		if (err)
 			goto done;
 
diff --git a/pkt-line.c b/pkt-line.c
index 83c46e6b46ee..18ecad65e08c 100644
--- a/pkt-line.c
+++ b/pkt-line.c
@@ -442,7 +442,7 @@ char *packet_read_line_buf(char **src, size_t *src_len, int *dst_len)
 	return packet_read_line_generic(-1, src, src_len, dst_len);
 }
 
-ssize_t read_packetized_to_strbuf(int fd_in, struct strbuf *sb_out)
+ssize_t read_packetized_to_strbuf(int fd_in, struct strbuf *sb_out, int options)
 {
 	int packet_len;
 
@@ -458,7 +458,7 @@ ssize_t read_packetized_to_strbuf(int fd_in, struct strbuf *sb_out)
 			 * that there is already room for the extra byte.
 			 */
 			sb_out->buf + sb_out->len, LARGE_PACKET_DATA_MAX+1,
-			PACKET_READ_GENTLE_ON_EOF);
+			options);
 		if (packet_len <= 0)
 			break;
 		sb_out->len += packet_len;
diff --git a/pkt-line.h b/pkt-line.h
index 2e472efaf2c5..e347fe46832a 100644
--- a/pkt-line.h
+++ b/pkt-line.h
@@ -142,7 +142,7 @@ char *packet_read_line_buf(char **src_buf, size_t *src_len, int *size);
 /*
  * Reads a stream of variable sized packets until a flush packet is detected.
  */
-ssize_t read_packetized_to_strbuf(int fd_in, struct strbuf *sb_out);
+ssize_t read_packetized_to_strbuf(int fd_in, struct strbuf *sb_out, int options);
 
 /*
  * Receive multiplexed output stream over git native protocol.
-- 
gitgitgadget


^ permalink raw reply related	[flat|nested] 178+ messages in thread

* [PATCH v4 05/12] simple-ipc: design documentation for new IPC mechanism
  2021-02-17 21:48     ` [PATCH v4 00/12] Simple IPC Mechanism Jeff Hostetler via GitGitGadget
                         ` (3 preceding siblings ...)
  2021-02-17 21:48       ` [PATCH v4 04/12] pkt-line: add options argument to read_packetized_to_strbuf() Johannes Schindelin via GitGitGadget
@ 2021-02-17 21:48       ` Jeff Hostetler via GitGitGadget
  2021-03-03 20:19         ` Junio C Hamano
  2021-02-17 21:48       ` [PATCH v4 06/12] simple-ipc: add win32 implementation Jeff Hostetler via GitGitGadget
                         ` (8 subsequent siblings)
  13 siblings, 1 reply; 178+ messages in thread
From: Jeff Hostetler via GitGitGadget @ 2021-02-17 21:48 UTC (permalink / raw)
  To: git
  Cc: Jeff Hostetler, Jeff King, SZEDER Gábor,
	Johannes Schindelin, Jeff Hostetler, Jeff Hostetler

From: Jeff Hostetler <jeffhost@microsoft.com>

Add brief design documentation for the new IPC mechanism that allows
a foreground Git client to talk with an existing daemon process
at a known location using a named pipe or Unix domain socket.
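
As a rough sketch of the intended use (relying on the
`ipc_client_send_command()` helper added later in this series; the
path and message below are placeholders):

    struct ipc_client_connect_options options =
        IPC_CLIENT_CONNECT_OPTIONS_INIT;
    struct strbuf answer = STRBUF_INIT;

    options.wait_if_busy = 1;

    /* send one request and read the (optional) response */
    if (!ipc_client_send_command("<pipe-or-socket-path>", &options,
                                 "example-request", &answer))
        printf("daemon replied: %s\n", answer.buf);

    strbuf_release(&answer);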

Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
---
 Documentation/technical/api-simple-ipc.txt | 34 ++++++++++++++++++++++
 1 file changed, 34 insertions(+)
 create mode 100644 Documentation/technical/api-simple-ipc.txt

diff --git a/Documentation/technical/api-simple-ipc.txt b/Documentation/technical/api-simple-ipc.txt
new file mode 100644
index 000000000000..670a5c163e39
--- /dev/null
+++ b/Documentation/technical/api-simple-ipc.txt
@@ -0,0 +1,34 @@
+simple-ipc API
+==============
+
+The simple-ipc API is used to send an IPC message and response between
+a (presumably) foreground Git client process to a background server or
+daemon process.  The server process must already be running.  Multiple
+client processes can simultaneously communicate with the server
+process.
+
+Communication occurs over a named pipe on Windows and a Unix domain
+socket on other platforms.  Clients and the server rendezvous at a
+previously agreed-to application-specific pathname (which is outside
+the scope of this design).
+
+This IPC mechanism differs from the existing `sub-process.c` model
+(Documentation/technical/long-running-process-protocol.txt) that is used
+by applications like Git-LFS.  In the simple-ipc model the server is
+assumed to be a very long-running system service.  In contrast, in the
+LFS-style sub-process model the helper is started with the foreground
+process and exits when the foreground process terminates.
+
+How the simple-ipc server is started is also outside the scope of the
+IPC mechanism.  For example, the server might be started during
+maintenance operations.
+
+The IPC protocol consists of a single request message from the client and
+an optional response message from the server.  For simplicity, pkt-line
+routines (Documentation/technical/protocol-common.txt) are used to hide
+chunking and buffering concerns.  Each side terminates its message with
+a flush packet.
+
+The actual format of the client and server messages is application
+specific.  The IPC layer transmits and receives an opaque buffer without
+any concern for the content within.
-- 
gitgitgadget


^ permalink raw reply related	[flat|nested] 178+ messages in thread

* [PATCH v4 06/12] simple-ipc: add win32 implementation
  2021-02-17 21:48     ` [PATCH v4 00/12] Simple IPC Mechanism Jeff Hostetler via GitGitGadget
                         ` (4 preceding siblings ...)
  2021-02-17 21:48       ` [PATCH v4 05/12] simple-ipc: design documentation for new IPC mechanism Jeff Hostetler via GitGitGadget
@ 2021-02-17 21:48       ` Jeff Hostetler via GitGitGadget
  2021-02-17 21:48       ` [PATCH v4 07/12] unix-socket: eliminate static unix_stream_socket() helper function Jeff Hostetler via GitGitGadget
                         ` (7 subsequent siblings)
  13 siblings, 0 replies; 178+ messages in thread
From: Jeff Hostetler via GitGitGadget @ 2021-02-17 21:48 UTC (permalink / raw)
  To: git
  Cc: Jeff Hostetler, Jeff King, SZEDER Gábor,
	Johannes Schindelin, Jeff Hostetler, Jeff Hostetler

From: Jeff Hostetler <jeffhost@microsoft.com>

Create Windows implementation of "simple-ipc" using named pipes.
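
For orientation, a daemon built on the API declared in `simple-ipc.h`
(added below) would look roughly like this sketch; the callback body,
thread count, and pathname are only placeholders:

    static int example_cb(void *data, const char *request,
                          ipc_server_reply_cb *reply_cb,
                          struct ipc_server_reply_data *reply_data)
    {
        if (!strcmp(request, "quit"))
            return SIMPLE_IPC_QUIT; /* ask the thread pool to shut down */

        /* the reply is optional and may be sent in multiple chunks */
        reply_cb(reply_data, "pong", 4);
        return 0;
    }

    static int run_example_daemon(const char *path)
    {
        struct ipc_server_opts opts = { .nr_threads = 4 };

        /* blocks until the server shuts down or hits a fatal error */
        return ipc_server_run(path, &opts, example_cb, NULL);
    }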

Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
---
 Makefile                            |   5 +
 compat/simple-ipc/ipc-shared.c      |  28 ++
 compat/simple-ipc/ipc-win32.c       | 749 ++++++++++++++++++++++++++++
 config.mak.uname                    |   2 +
 contrib/buildsystems/CMakeLists.txt |   4 +
 simple-ipc.h                        | 224 +++++++++
 6 files changed, 1012 insertions(+)
 create mode 100644 compat/simple-ipc/ipc-shared.c
 create mode 100644 compat/simple-ipc/ipc-win32.c
 create mode 100644 simple-ipc.h

diff --git a/Makefile b/Makefile
index 4128b457e14b..40d5cab78d3f 100644
--- a/Makefile
+++ b/Makefile
@@ -1679,6 +1679,11 @@ else
 	LIB_OBJS += unix-socket.o
 endif
 
+ifdef USE_WIN32_IPC
+	LIB_OBJS += compat/simple-ipc/ipc-shared.o
+	LIB_OBJS += compat/simple-ipc/ipc-win32.o
+endif
+
 ifdef NO_ICONV
 	BASIC_CFLAGS += -DNO_ICONV
 endif
diff --git a/compat/simple-ipc/ipc-shared.c b/compat/simple-ipc/ipc-shared.c
new file mode 100644
index 000000000000..1edec8159532
--- /dev/null
+++ b/compat/simple-ipc/ipc-shared.c
@@ -0,0 +1,28 @@
+#include "cache.h"
+#include "simple-ipc.h"
+#include "strbuf.h"
+#include "pkt-line.h"
+#include "thread-utils.h"
+
+#ifdef SUPPORTS_SIMPLE_IPC
+
+int ipc_server_run(const char *path, const struct ipc_server_opts *opts,
+		   ipc_server_application_cb *application_cb,
+		   void *application_data)
+{
+	struct ipc_server_data *server_data = NULL;
+	int ret;
+
+	ret = ipc_server_run_async(&server_data, path, opts,
+				   application_cb, application_data);
+	if (ret)
+		return ret;
+
+	ret = ipc_server_await(server_data);
+
+	ipc_server_free(server_data);
+
+	return ret;
+}
+
+#endif /* SUPPORTS_SIMPLE_IPC */
diff --git a/compat/simple-ipc/ipc-win32.c b/compat/simple-ipc/ipc-win32.c
new file mode 100644
index 000000000000..f0cfbf9d15c3
--- /dev/null
+++ b/compat/simple-ipc/ipc-win32.c
@@ -0,0 +1,749 @@
+#include "cache.h"
+#include "simple-ipc.h"
+#include "strbuf.h"
+#include "pkt-line.h"
+#include "thread-utils.h"
+
+#ifndef GIT_WINDOWS_NATIVE
+#error This file can only be compiled on Windows
+#endif
+
+static int initialize_pipe_name(const char *path, wchar_t *wpath, size_t alloc)
+{
+	int off = 0;
+	struct strbuf realpath = STRBUF_INIT;
+
+	if (!strbuf_realpath(&realpath, path, 0))
+		return -1;
+
+	off = swprintf(wpath, alloc, L"\\\\.\\pipe\\");
+	if (xutftowcs(wpath + off, realpath.buf, alloc - off) < 0)
+		return -1;
+
+	/* Handle drive prefix */
+	if (wpath[off] && wpath[off + 1] == L':') {
+		wpath[off + 1] = L'_';
+		off += 2;
+	}
+
+	for (; wpath[off]; off++)
+		if (wpath[off] == L'/')
+			wpath[off] = L'\\';
+
+	strbuf_release(&realpath);
+	return 0;
+}
+
+static enum ipc_active_state get_active_state(wchar_t *pipe_path)
+{
+	if (WaitNamedPipeW(pipe_path, NMPWAIT_USE_DEFAULT_WAIT))
+		return IPC_STATE__LISTENING;
+
+	if (GetLastError() == ERROR_SEM_TIMEOUT)
+		return IPC_STATE__NOT_LISTENING;
+
+	if (GetLastError() == ERROR_FILE_NOT_FOUND)
+		return IPC_STATE__PATH_NOT_FOUND;
+
+	return IPC_STATE__OTHER_ERROR;
+}
+
+enum ipc_active_state ipc_get_active_state(const char *path)
+{
+	wchar_t pipe_path[MAX_PATH];
+
+	if (initialize_pipe_name(path, pipe_path, ARRAY_SIZE(pipe_path)) < 0)
+		return IPC_STATE__INVALID_PATH;
+
+	return get_active_state(pipe_path);
+}
+
+#define WAIT_STEP_MS (50)
+
+static enum ipc_active_state connect_to_server(
+	const wchar_t *wpath,
+	DWORD timeout_ms,
+	const struct ipc_client_connect_options *options,
+	int *pfd)
+{
+	DWORD t_start_ms, t_waited_ms;
+	DWORD step_ms;
+	HANDLE hPipe = INVALID_HANDLE_VALUE;
+	DWORD mode = PIPE_READMODE_BYTE;
+	DWORD gle;
+
+	*pfd = -1;
+
+	for (;;) {
+		hPipe = CreateFileW(wpath, GENERIC_READ | GENERIC_WRITE,
+				    0, NULL, OPEN_EXISTING, 0, NULL);
+		if (hPipe != INVALID_HANDLE_VALUE)
+			break;
+
+		gle = GetLastError();
+
+		switch (gle) {
+		case ERROR_FILE_NOT_FOUND:
+			if (!options->wait_if_not_found)
+				return IPC_STATE__PATH_NOT_FOUND;
+			if (!timeout_ms)
+				return IPC_STATE__PATH_NOT_FOUND;
+
+			step_ms = (timeout_ms < WAIT_STEP_MS) ?
+				timeout_ms : WAIT_STEP_MS;
+			sleep_millisec(step_ms);
+
+			timeout_ms -= step_ms;
+			break; /* try again */
+
+		case ERROR_PIPE_BUSY:
+			if (!options->wait_if_busy)
+				return IPC_STATE__NOT_LISTENING;
+			if (!timeout_ms)
+				return IPC_STATE__NOT_LISTENING;
+
+			t_start_ms = (DWORD)(getnanotime() / 1000000);
+
+			if (!WaitNamedPipeW(wpath, timeout_ms)) {
+				if (GetLastError() == ERROR_SEM_TIMEOUT)
+					return IPC_STATE__NOT_LISTENING;
+
+				return IPC_STATE__OTHER_ERROR;
+			}
+
+			/*
+			 * A pipe server instance became available.
+			 * Race other client processes to connect to
+			 * it.
+			 *
+			 * But first decrement our overall timeout so
+			 * that we don't starve if we keep losing the
+			 * race.  But also guard against special
+			 * NMPWAIT_ values (0 and -1).
+			 */
+			t_waited_ms = (DWORD)(getnanotime() / 1000000) - t_start_ms;
+			if (t_waited_ms < timeout_ms)
+				timeout_ms -= t_waited_ms;
+			else
+				timeout_ms = 1;
+			break; /* try again */
+
+		default:
+			return IPC_STATE__OTHER_ERROR;
+		}
+	}
+
+	if (!SetNamedPipeHandleState(hPipe, &mode, NULL, NULL)) {
+		CloseHandle(hPipe);
+		return IPC_STATE__OTHER_ERROR;
+	}
+
+	*pfd = _open_osfhandle((intptr_t)hPipe, O_RDWR|O_BINARY);
+	if (*pfd < 0) {
+		CloseHandle(hPipe);
+		return IPC_STATE__OTHER_ERROR;
+	}
+
+	/* fd now owns hPipe */
+
+	return IPC_STATE__LISTENING;
+}
+
+/*
+ * The default connection timeout for Windows clients.
+ *
+ * This is not currently part of the ipc_ API (nor the config settings)
+ * because of differences between Windows and other platforms.
+ *
+ * This value was chosen at random.
+ */
+#define WINDOWS_CONNECTION_TIMEOUT_MS (30000)
+
+enum ipc_active_state ipc_client_try_connect(
+	const char *path,
+	const struct ipc_client_connect_options *options,
+	struct ipc_client_connection **p_connection)
+{
+	wchar_t wpath[MAX_PATH];
+	enum ipc_active_state state = IPC_STATE__OTHER_ERROR;
+	int fd = -1;
+
+	*p_connection = NULL;
+
+	trace2_region_enter("ipc-client", "try-connect", NULL);
+	trace2_data_string("ipc-client", NULL, "try-connect/path", path);
+
+	if (initialize_pipe_name(path, wpath, ARRAY_SIZE(wpath)) < 0)
+		state = IPC_STATE__INVALID_PATH;
+	else
+		state = connect_to_server(wpath, WINDOWS_CONNECTION_TIMEOUT_MS,
+					  options, &fd);
+
+	trace2_data_intmax("ipc-client", NULL, "try-connect/state",
+			   (intmax_t)state);
+	trace2_region_leave("ipc-client", "try-connect", NULL);
+
+	if (state == IPC_STATE__LISTENING) {
+		(*p_connection) = xcalloc(1, sizeof(struct ipc_client_connection));
+		(*p_connection)->fd = fd;
+	}
+
+	return state;
+}
+
+void ipc_client_close_connection(struct ipc_client_connection *connection)
+{
+	if (!connection)
+		return;
+
+	if (connection->fd != -1)
+		close(connection->fd);
+
+	free(connection);
+}
+
+int ipc_client_send_command_to_connection(
+	struct ipc_client_connection *connection,
+	const char *message, struct strbuf *answer)
+{
+	int ret = 0;
+
+	strbuf_setlen(answer, 0);
+
+	trace2_region_enter("ipc-client", "send-command", NULL);
+
+	if (write_packetized_from_buf_no_flush(message, strlen(message),
+					       connection->fd) < 0 ||
+	    packet_flush_gently(connection->fd) < 0) {
+		ret = error(_("could not send IPC command"));
+		goto done;
+	}
+
+	FlushFileBuffers((HANDLE)_get_osfhandle(connection->fd));
+
+	if (read_packetized_to_strbuf(
+		    connection->fd, answer,
+		    PACKET_READ_GENTLE_ON_EOF | PACKET_READ_NEVER_DIE) < 0) {
+		ret = error(_("could not read IPC response"));
+		goto done;
+	}
+
+done:
+	trace2_region_leave("ipc-client", "send-command", NULL);
+	return ret;
+}
+
+int ipc_client_send_command(const char *path,
+			    const struct ipc_client_connect_options *options,
+			    const char *message, struct strbuf *response)
+{
+	int ret = -1;
+	enum ipc_active_state state;
+	struct ipc_client_connection *connection = NULL;
+
+	state = ipc_client_try_connect(path, options, &connection);
+
+	if (state != IPC_STATE__LISTENING)
+		return ret;
+
+	ret = ipc_client_send_command_to_connection(connection, message, response);
+
+	ipc_client_close_connection(connection);
+
+	return ret;
+}
+
+/*
+ * Duplicate the given pipe handle and wrap it in a file descriptor so
+ * that we can use pkt-line on it.
+ */
+static int dup_fd_from_pipe(const HANDLE pipe)
+{
+	HANDLE process = GetCurrentProcess();
+	HANDLE handle;
+	int fd;
+
+	if (!DuplicateHandle(process, pipe, process, &handle, 0, FALSE,
+			     DUPLICATE_SAME_ACCESS)) {
+		errno = err_win_to_posix(GetLastError());
+		return -1;
+	}
+
+	fd = _open_osfhandle((intptr_t)handle, O_RDWR|O_BINARY);
+	if (fd < 0) {
+		errno = err_win_to_posix(GetLastError());
+		CloseHandle(handle);
+		return -1;
+	}
+
+	/*
+	 * `handle` is now owned by `fd` and will be automatically closed
+	 * when the descriptor is closed.
+	 */
+
+	return fd;
+}
+
+/*
+ * Magic numbers used to annotate callback instance data.
+ * These are used to help guard against accidentally passing the
+ * wrong instance data across multiple levels of callbacks (which
+ * is easy to do if there are `void*` arguments).
+ */
+enum magic {
+	MAGIC_SERVER_REPLY_DATA,
+	MAGIC_SERVER_THREAD_DATA,
+	MAGIC_SERVER_DATA,
+};
+
+struct ipc_server_reply_data {
+	enum magic magic;
+	int fd;
+	struct ipc_server_thread_data *server_thread_data;
+};
+
+struct ipc_server_thread_data {
+	enum magic magic;
+	struct ipc_server_thread_data *next_thread;
+	struct ipc_server_data *server_data;
+	pthread_t pthread_id;
+	HANDLE hPipe;
+};
+
+/*
+ * On Windows, the conceptual "ipc-server" is implemented as a pool of
+ * n identical/peer "server-thread" threads.  That is, there is no
+ * hierarchy of threads; and therefore no controller thread managing
+ * the pool.  Each thread has an independent handle to the named pipe,
+ * receives incoming connections, processes the client, and re-uses
+ * the pipe for the next client connection.
+ *
+ * Therefore, the "ipc-server" only needs to maintain a list of the
+ * spawned threads for eventual "join" purposes.
+ *
+ * A single "stop-event" is visible to all of the server threads to
+ * tell them to shutdown (when idle).
+ */
+struct ipc_server_data {
+	enum magic magic;
+	ipc_server_application_cb *application_cb;
+	void *application_data;
+	struct strbuf buf_path;
+	wchar_t wpath[MAX_PATH];
+
+	HANDLE hEventStopRequested;
+	struct ipc_server_thread_data *thread_list;
+	int is_stopped;
+};
+
+enum connect_result {
+	CR_CONNECTED = 0,
+	CR_CONNECT_PENDING,
+	CR_CONNECT_ERROR,
+	CR_WAIT_ERROR,
+	CR_SHUTDOWN,
+};
+
+static enum connect_result queue_overlapped_connect(
+	struct ipc_server_thread_data *server_thread_data,
+	OVERLAPPED *lpo)
+{
+	if (ConnectNamedPipe(server_thread_data->hPipe, lpo))
+		goto failed;
+
+	switch (GetLastError()) {
+	case ERROR_IO_PENDING:
+		return CR_CONNECT_PENDING;
+
+	case ERROR_PIPE_CONNECTED:
+		SetEvent(lpo->hEvent);
+		return CR_CONNECTED;
+
+	default:
+		break;
+	}
+
+failed:
+	error(_("ConnectNamedPipe failed for '%s' (%lu)"),
+	      server_thread_data->server_data->buf_path.buf,
+	      GetLastError());
+	return CR_CONNECT_ERROR;
+}
+
+/*
+ * Use Windows Overlapped IO to wait for a connection or for our event
+ * to be signalled.
+ */
+static enum connect_result wait_for_connection(
+	struct ipc_server_thread_data *server_thread_data,
+	OVERLAPPED *lpo)
+{
+	enum connect_result r;
+	HANDLE waitHandles[2];
+	DWORD dwWaitResult;
+
+	r = queue_overlapped_connect(server_thread_data, lpo);
+	if (r != CR_CONNECT_PENDING)
+		return r;
+
+	waitHandles[0] = server_thread_data->server_data->hEventStopRequested;
+	waitHandles[1] = lpo->hEvent;
+
+	dwWaitResult = WaitForMultipleObjects(2, waitHandles, FALSE, INFINITE);
+	switch (dwWaitResult) {
+	case WAIT_OBJECT_0 + 0:
+		return CR_SHUTDOWN;
+
+	case WAIT_OBJECT_0 + 1:
+		ResetEvent(lpo->hEvent);
+		return CR_CONNECTED;
+
+	default:
+		return CR_WAIT_ERROR;
+	}
+}
+
+/*
+ * Forward declare our reply callback function so that any compiler
+ * errors are reported when we actually define the function (in addition
+ * to any errors reported when we try to pass this callback function as
+ * a parameter in a function call).  The former are easier to understand.
+ */
+static ipc_server_reply_cb do_io_reply_callback;
+
+/*
+ * Relay application's response message to the client process.
+ * (We do not flush at this point because we allow the caller
+ * to chunk data to the client thru us.)
+ */
+static int do_io_reply_callback(struct ipc_server_reply_data *reply_data,
+		       const char *response, size_t response_len)
+{
+	if (reply_data->magic != MAGIC_SERVER_REPLY_DATA)
+		BUG("reply_cb called with wrong instance data");
+
+	return write_packetized_from_buf_no_flush(response, response_len,
+						  reply_data->fd);
+}
+
+/*
+ * Receive the request/command from the client and pass it to the
+ * registered request-callback.  The request-callback will compose
+ * a response and call our reply-callback to send it to the client.
+ *
+ * Simple-IPC only contains one round trip, so we flush and close
+ * here after the response.
+ */
+static int do_io(struct ipc_server_thread_data *server_thread_data)
+{
+	struct strbuf buf = STRBUF_INIT;
+	struct ipc_server_reply_data reply_data;
+	int ret = 0;
+
+	reply_data.magic = MAGIC_SERVER_REPLY_DATA;
+	reply_data.server_thread_data = server_thread_data;
+
+	reply_data.fd = dup_fd_from_pipe(server_thread_data->hPipe);
+	if (reply_data.fd < 0)
+		return error(_("could not create fd from pipe for '%s'"),
+			     server_thread_data->server_data->buf_path.buf);
+
+	ret = read_packetized_to_strbuf(
+		reply_data.fd, &buf,
+		PACKET_READ_GENTLE_ON_EOF | PACKET_READ_NEVER_DIE);
+	if (ret >= 0) {
+		ret = server_thread_data->server_data->application_cb(
+			server_thread_data->server_data->application_data,
+			buf.buf, do_io_reply_callback, &reply_data);
+
+		packet_flush_gently(reply_data.fd);
+
+		FlushFileBuffers((HANDLE)_get_osfhandle((reply_data.fd)));
+	}
+	else {
+		/*
+		 * The client probably disconnected/shutdown before it
+		 * could send a well-formed message.  Ignore it.
+		 */
+	}
+
+	strbuf_release(&buf);
+	close(reply_data.fd);
+
+	return ret;
+}
+
+/*
+ * Handle IPC request and response with this connected client.  And reset
+ * the pipe to prepare for the next client.
+ */
+static int use_connection(struct ipc_server_thread_data *server_thread_data)
+{
+	int ret;
+
+	ret = do_io(server_thread_data);
+
+	FlushFileBuffers(server_thread_data->hPipe);
+	DisconnectNamedPipe(server_thread_data->hPipe);
+
+	return ret;
+}
+
+/*
+ * Thread proc for an IPC server worker thread.  It handles a series of
+ * connections from clients.  It cleans and reuses the hPipe between each
+ * client.
+ */
+static void *server_thread_proc(void *_server_thread_data)
+{
+	struct ipc_server_thread_data *server_thread_data = _server_thread_data;
+	HANDLE hEventConnected = INVALID_HANDLE_VALUE;
+	OVERLAPPED oConnect;
+	enum connect_result cr;
+	int ret;
+
+	assert(server_thread_data->hPipe != INVALID_HANDLE_VALUE);
+
+	trace2_thread_start("ipc-server");
+	trace2_data_string("ipc-server", NULL, "pipe",
+			   server_thread_data->server_data->buf_path.buf);
+
+	hEventConnected = CreateEventW(NULL, TRUE, FALSE, NULL);
+
+	memset(&oConnect, 0, sizeof(oConnect));
+	oConnect.hEvent = hEventConnected;
+
+	for (;;) {
+		cr = wait_for_connection(server_thread_data, &oConnect);
+
+		switch (cr) {
+		case CR_SHUTDOWN:
+			goto finished;
+
+		case CR_CONNECTED:
+			ret = use_connection(server_thread_data);
+			if (ret == SIMPLE_IPC_QUIT) {
+				ipc_server_stop_async(
+					server_thread_data->server_data);
+				goto finished;
+			}
+			if (ret > 0) {
+				/*
+				 * Ignore (transient) IO errors with this
+				 * client and reset for the next client.
+				 */
+			}
+			break;
+
+		case CR_CONNECT_PENDING:
+			/* By construction, this should not happen. */
+			BUG("ipc-server[%s]: unexpected CR_CONNECT_PENDING",
+			    server_thread_data->server_data->buf_path.buf);
+
+		case CR_CONNECT_ERROR:
+		case CR_WAIT_ERROR:
+			/*
+			 * Ignore these theoretical errors.
+			 */
+			DisconnectNamedPipe(server_thread_data->hPipe);
+			break;
+
+		default:
+			BUG("unhandled case after wait_for_connection");
+		}
+	}
+
+finished:
+	CloseHandle(server_thread_data->hPipe);
+	CloseHandle(hEventConnected);
+
+	trace2_thread_exit();
+	return NULL;
+}
+
+static HANDLE create_new_pipe(wchar_t *wpath, int is_first)
+{
+	HANDLE hPipe;
+	DWORD dwOpenMode, dwPipeMode;
+	LPSECURITY_ATTRIBUTES lpsa = NULL;
+
+	dwOpenMode = PIPE_ACCESS_INBOUND | PIPE_ACCESS_OUTBOUND |
+		FILE_FLAG_OVERLAPPED;
+
+	dwPipeMode = PIPE_TYPE_MESSAGE | PIPE_READMODE_BYTE | PIPE_WAIT |
+		PIPE_REJECT_REMOTE_CLIENTS;
+
+	if (is_first) {
+		dwOpenMode |= FILE_FLAG_FIRST_PIPE_INSTANCE;
+
+		/*
+		 * On Windows, the first server pipe instance gets to
+		 * set the ACL / Security Attributes on the named
+		 * pipe; subsequent instances inherit and cannot
+		 * change them.
+		 *
+		 * TODO Should we allow the application layer to
+		 * specify security attributes, such as `LocalService`
+		 * or `LocalSystem`, when we create the named pipe?
+		 * This question is probably not important when the
+		 * daemon is started by a foreground user process and
+		 * only needs to talk to the current user, but may be
+		 * if the daemon is run via the Control Panel as a
+		 * System Service.
+		 */
+	}
+
+	hPipe = CreateNamedPipeW(wpath, dwOpenMode, dwPipeMode,
+				 PIPE_UNLIMITED_INSTANCES, 1024, 1024, 0, lpsa);
+
+	return hPipe;
+}
+
+int ipc_server_run_async(struct ipc_server_data **returned_server_data,
+			 const char *path, const struct ipc_server_opts *opts,
+			 ipc_server_application_cb *application_cb,
+			 void *application_data)
+{
+	struct ipc_server_data *server_data;
+	wchar_t wpath[MAX_PATH];
+	HANDLE hPipeFirst = INVALID_HANDLE_VALUE;
+	int k;
+	int ret = 0;
+	int nr_threads = opts->nr_threads;
+
+	*returned_server_data = NULL;
+
+	ret = initialize_pipe_name(path, wpath, ARRAY_SIZE(wpath));
+	if (ret < 0)
+		return error(
+			_("could not create normalized wchar_t path for '%s'"),
+			path);
+
+	hPipeFirst = create_new_pipe(wpath, 1);
+	if (hPipeFirst == INVALID_HANDLE_VALUE)
+		return error(_("IPC server already running on '%s'"), path);
+
+	server_data = xcalloc(1, sizeof(*server_data));
+	server_data->magic = MAGIC_SERVER_DATA;
+	server_data->application_cb = application_cb;
+	server_data->application_data = application_data;
+	server_data->hEventStopRequested = CreateEvent(NULL, TRUE, FALSE, NULL);
+	strbuf_init(&server_data->buf_path, 0);
+	strbuf_addstr(&server_data->buf_path, path);
+	wcscpy(server_data->wpath, wpath);
+
+	if (nr_threads < 1)
+		nr_threads = 1;
+
+	for (k = 0; k < nr_threads; k++) {
+		struct ipc_server_thread_data *std;
+
+		std = xcalloc(1, sizeof(*std));
+		std->magic = MAGIC_SERVER_THREAD_DATA;
+		std->server_data = server_data;
+		std->hPipe = INVALID_HANDLE_VALUE;
+
+		std->hPipe = (k == 0)
+			? hPipeFirst
+			: create_new_pipe(server_data->wpath, 0);
+
+		if (std->hPipe == INVALID_HANDLE_VALUE) {
+			/*
+			 * If we've reached a pipe instance limit for
+			 * this path, just use fewer threads.
+			 */
+			free(std);
+			break;
+		}
+
+		if (pthread_create(&std->pthread_id, NULL,
+				   server_thread_proc, std)) {
+			/*
+			 * Likewise, if we're out of threads, just use
+			 * fewer threads than requested.
+			 *
+			 * However, we just give up if we can't even get
+			 * one thread.  This should not happen.
+			 */
+			if (k == 0)
+				die(_("could not start thread[0] for '%s'"),
+				    path);
+
+			CloseHandle(std->hPipe);
+			free(std);
+			break;
+		}
+
+		std->next_thread = server_data->thread_list;
+		server_data->thread_list = std;
+	}
+
+	*returned_server_data = server_data;
+	return 0;
+}
+
+int ipc_server_stop_async(struct ipc_server_data *server_data)
+{
+	if (!server_data)
+		return 0;
+
+	/*
+	 * Gently tell all of the ipc_server threads to shutdown.
+	 * This will be seen the next time they are idle (and waiting
+	 * for a connection).
+	 *
+	 * We DO NOT attempt to force them to drop an active connection.
+	 */
+	SetEvent(server_data->hEventStopRequested);
+	return 0;
+}
+
+int ipc_server_await(struct ipc_server_data *server_data)
+{
+	DWORD dwWaitResult;
+
+	if (!server_data)
+		return 0;
+
+	dwWaitResult = WaitForSingleObject(server_data->hEventStopRequested, INFINITE);
+	if (dwWaitResult != WAIT_OBJECT_0)
+		return error(_("wait for hEvent failed for '%s'"),
+			     server_data->buf_path.buf);
+
+	while (server_data->thread_list) {
+		struct ipc_server_thread_data *std = server_data->thread_list;
+
+		pthread_join(std->pthread_id, NULL);
+
+		server_data->thread_list = std->next_thread;
+		free(std);
+	}
+
+	server_data->is_stopped = 1;
+
+	return 0;
+}
+
+void ipc_server_free(struct ipc_server_data *server_data)
+{
+	if (!server_data)
+		return;
+
+	if (!server_data->is_stopped)
+		BUG("cannot free ipc-server while running for '%s'",
+		    server_data->buf_path.buf);
+
+	strbuf_release(&server_data->buf_path);
+
+	if (server_data->hEventStopRequested != INVALID_HANDLE_VALUE)
+		CloseHandle(server_data->hEventStopRequested);
+
+	while (server_data->thread_list) {
+		struct ipc_server_thread_data *std = server_data->thread_list;
+
+		server_data->thread_list = std->next_thread;
+		free(std);
+	}
+
+	free(server_data);
+}
diff --git a/config.mak.uname b/config.mak.uname
index 198ab1e58f83..76087cff6789 100644
--- a/config.mak.uname
+++ b/config.mak.uname
@@ -421,6 +421,7 @@ ifeq ($(uname_S),Windows)
 	RUNTIME_PREFIX = YesPlease
 	HAVE_WPGMPTR = YesWeDo
 	NO_ST_BLOCKS_IN_STRUCT_STAT = YesPlease
+	USE_WIN32_IPC = YesPlease
 	USE_WIN32_MMAP = YesPlease
 	MMAP_PREVENTS_DELETE = UnfortunatelyYes
 	# USE_NED_ALLOCATOR = YesPlease
@@ -597,6 +598,7 @@ ifneq (,$(findstring MINGW,$(uname_S)))
 	RUNTIME_PREFIX = YesPlease
 	HAVE_WPGMPTR = YesWeDo
 	NO_ST_BLOCKS_IN_STRUCT_STAT = YesPlease
+	USE_WIN32_IPC = YesPlease
 	USE_WIN32_MMAP = YesPlease
 	MMAP_PREVENTS_DELETE = UnfortunatelyYes
 	USE_NED_ALLOCATOR = YesPlease
diff --git a/contrib/buildsystems/CMakeLists.txt b/contrib/buildsystems/CMakeLists.txt
index c151dd7257f3..4bd41054ee70 100644
--- a/contrib/buildsystems/CMakeLists.txt
+++ b/contrib/buildsystems/CMakeLists.txt
@@ -246,6 +246,10 @@ elseif(CMAKE_SYSTEM_NAME STREQUAL "Linux")
 	list(APPEND compat_SOURCES unix-socket.c)
 endif()
 
+if(CMAKE_SYSTEM_NAME STREQUAL "Windows")
+	list(APPEND compat_SOURCES compat/simple-ipc/ipc-shared.c compat/simple-ipc/ipc-win32.c)
+endif()
+
 set(EXE_EXTENSION ${CMAKE_EXECUTABLE_SUFFIX})
 
 #header checks
diff --git a/simple-ipc.h b/simple-ipc.h
new file mode 100644
index 000000000000..a3f96b42cca2
--- /dev/null
+++ b/simple-ipc.h
@@ -0,0 +1,224 @@
+#ifndef GIT_SIMPLE_IPC_H
+#define GIT_SIMPLE_IPC_H
+
+/*
+ * See Documentation/technical/api-simple-ipc.txt
+ */
+
+#if defined(GIT_WINDOWS_NATIVE)
+#define SUPPORTS_SIMPLE_IPC
+#endif
+
+#ifdef SUPPORTS_SIMPLE_IPC
+#include "pkt-line.h"
+
+/*
+ * Simple IPC Client Side API.
+ */
+
+enum ipc_active_state {
+	/*
+	 * The pipe/socket exists and the daemon is waiting for connections.
+	 */
+	IPC_STATE__LISTENING = 0,
+
+	/*
+	 * The pipe/socket exists, but the daemon is not listening.
+	 * Perhaps it is very busy.
+	 * Perhaps the daemon died without deleting the path.
+	 * Perhaps it is shutting down and draining existing clients.
+	 * Perhaps it is dead, but other clients are lingering and
+	 * still holding a reference to the pathname.
+	 */
+	IPC_STATE__NOT_LISTENING,
+
+	/*
+	 * The requested pathname is bogus and no amount of retries
+	 * will fix that.
+	 */
+	IPC_STATE__INVALID_PATH,
+
+	/*
+	 * The requested pathname is not found.  This usually means
+	 * that there is no daemon present.
+	 */
+	IPC_STATE__PATH_NOT_FOUND,
+
+	IPC_STATE__OTHER_ERROR,
+};
+
+struct ipc_client_connect_options {
+	/*
+	 * Spin under timeout if the server is running but can't
+	 * accept our connection yet.  This should always be set
+	 * unless you just want to poke the server and see if it
+	 * is alive.
+	 */
+	unsigned int wait_if_busy:1;
+
+	/*
+	 * Spin under timeout if the pipe/socket is not yet present
+	 * on the file system.  This is useful if we just started
+	 * the service and need to wait for it to become ready.
+	 */
+	unsigned int wait_if_not_found:1;
+};
+
+#define IPC_CLIENT_CONNECT_OPTIONS_INIT { \
+	.wait_if_busy = 0, \
+	.wait_if_not_found = 0, \
+}
+
+/*
+ * Determine if a server is listening on this named pipe or socket using
+ * platform-specific logic.  This might just probe the filesystem or it
+ * might make a trivial connection to the server using this pathname.
+ */
+enum ipc_active_state ipc_get_active_state(const char *path);
+
+struct ipc_client_connection {
+	int fd;
+};
+
+/*
+ * Try to connect to the daemon on the named pipe or socket.
+ *
+ * Returns IPC_STATE__LISTENING and a connection handle.
+ *
+ * Otherwise, returns info to help decide whether to retry or to
+ * spawn/respawn the server.
+ */
+enum ipc_active_state ipc_client_try_connect(
+	const char *path,
+	const struct ipc_client_connect_options *options,
+	struct ipc_client_connection **p_connection);
+
+void ipc_client_close_connection(struct ipc_client_connection *connection);
+
+/*
+ * Used by the client to synchronously send and receive a message with
+ * the server on the provided client connection.
+ *
+ * Returns 0 when successful.
+ *
+ * Calls error() and returns non-zero otherwise.
+ */
+int ipc_client_send_command_to_connection(
+	struct ipc_client_connection *connection,
+	const char *message, struct strbuf *answer);
+
+/*
+ * Used by the client to synchronously connect and send and receive a
+ * message to the server listening at the given path.
+ *
+ * Returns 0 when successful.
+ *
+ * Calls error() and returns non-zero otherwise.
+ */
+int ipc_client_send_command(const char *path,
+			    const struct ipc_client_connect_options *options,
+			    const char *message, struct strbuf *answer);
+
+/*
+ * Simple IPC Server Side API.
+ */
+
+struct ipc_server_reply_data;
+
+typedef int (ipc_server_reply_cb)(struct ipc_server_reply_data *,
+				  const char *response,
+				  size_t response_len);
+
+/*
+ * Prototype for an application-supplied callback to process incoming
+ * client IPC messages and compose a reply.  The `application_cb` should
+ * use the provided `reply_cb` and `reply_data` to send an IPC response
+ * back to the client.  The `reply_cb` callback can be called multiple
+ * times for chunking purposes.  A reply message is optional and may be
+ * omitted if not necessary for the application.
+ *
+ * The return value from the application callback is ignored.
+ * The value `SIMPLE_IPC_QUIT` can be used to shutdown the server.
+ */
+typedef int (ipc_server_application_cb)(void *application_data,
+					const char *request,
+					ipc_server_reply_cb *reply_cb,
+					struct ipc_server_reply_data *reply_data);
+
+#define SIMPLE_IPC_QUIT -2
+
+/*
+ * Opaque instance data to represent an IPC server instance.
+ */
+struct ipc_server_data;
+
+/*
+ * Control parameters for the IPC server instance.
+ * Use this to hide platform-specific settings.
+ */
+struct ipc_server_opts
+{
+	int nr_threads;
+};
+
+/*
+ * Start an IPC server instance in one or more background threads
+ * and return a handle to the pool.
+ *
+ * Returns 0 if the asynchronous server pool was started successfully.
+ * Returns -1 if not.
+ *
+ * When a client IPC message is received, the `application_cb` will be
+ * called (possibly on a random thread) to handle the message and
+ * optionally compose a reply message.
+ */
+int ipc_server_run_async(struct ipc_server_data **returned_server_data,
+			 const char *path, const struct ipc_server_opts *opts,
+			 ipc_server_application_cb *application_cb,
+			 void *application_data);
+
+/*
+ * Gently signal the IPC server pool to shutdown.  No new client
+ * connections will be accepted, but existing connections will be
+ * allowed to complete.
+ */
+int ipc_server_stop_async(struct ipc_server_data *server_data);
+
+/*
+ * Block the calling thread until all threads in the IPC server pool
+ * have completed and been joined.
+ */
+int ipc_server_await(struct ipc_server_data *server_data);
+
+/*
+ * Close and free all resource handles associated with the IPC server
+ * pool.
+ */
+void ipc_server_free(struct ipc_server_data *server_data);
+
+/*
+ * Run an IPC server instance and block the calling thread of the
+ * current process.  It does not return until the IPC server has
+ * either shutdown or had an unrecoverable error.
+ *
+ * The IPC server handles incoming IPC messages from client processes
+ * and may use one or more background threads as necessary.
+ *
+ * Returns 0 after the server has completed successfully.
+ * Returns -1 if the server cannot be started.
+ *
+ * When a client IPC message is received, the `application_cb` will be
+ * called (possibly on a random thread) to handle the message and
+ * optionally compose a reply message.
+ *
+ * Note that `ipc_server_run()` is a synchronous wrapper around the
+ * above asynchronous routines.  It effectively hides all of the
+ * server state and thread details from the caller and presents a
+ * simple synchronous interface.
+ */
+int ipc_server_run(const char *path, const struct ipc_server_opts *opts,
+		   ipc_server_application_cb *application_cb,
+		   void *application_data);
+
+#endif /* SUPPORTS_SIMPLE_IPC */
+#endif /* GIT_SIMPLE_IPC_H */
-- 
gitgitgadget


^ permalink raw reply related	[flat|nested] 178+ messages in thread

* [PATCH v4 07/12] unix-socket: eliminate static unix_stream_socket() helper function
  2021-02-17 21:48     ` [PATCH v4 00/12] Simple IPC Mechanism Jeff Hostetler via GitGitGadget
                         ` (5 preceding siblings ...)
  2021-02-17 21:48       ` [PATCH v4 06/12] simple-ipc: add win32 implementation Jeff Hostetler via GitGitGadget
@ 2021-02-17 21:48       ` Jeff Hostetler via GitGitGadget
  2021-02-26  7:25         ` Jeff King
  2021-03-03 20:41         ` Junio C Hamano
  2021-02-17 21:48       ` [PATCH v4 08/12] unix-socket: add backlog size option to unix_stream_listen() Jeff Hostetler via GitGitGadget
                         ` (6 subsequent siblings)
  13 siblings, 2 replies; 178+ messages in thread
From: Jeff Hostetler via GitGitGadget @ 2021-02-17 21:48 UTC (permalink / raw)
  To: git
  Cc: Jeff Hostetler, Jeff King, SZEDER Gábor,
	Johannes Schindelin, Jeff Hostetler, Jeff Hostetler

From: Jeff Hostetler <jeffhost@microsoft.com>

The static helper function `unix_stream_socket()` calls `die()`.  This
is not appropriate for all callers.  Eliminate the wrapper function
and make the callers propagate the error.
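
The caller-visible effect is that a failure now surfaces as an
ordinary error return instead of terminating the process; for example
(a sketch, not part of this patch):

    int fd = unix_stream_connect(path);
    if (fd < 0)
        return error_errno(_("could not connect to '%s'"), path);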

Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
---
 unix-socket.c | 27 +++++++++++++--------------
 1 file changed, 13 insertions(+), 14 deletions(-)

diff --git a/unix-socket.c b/unix-socket.c
index 19ed48be9902..69f81d64e9d5 100644
--- a/unix-socket.c
+++ b/unix-socket.c
@@ -1,14 +1,6 @@
 #include "cache.h"
 #include "unix-socket.h"
 
-static int unix_stream_socket(void)
-{
-	int fd = socket(AF_UNIX, SOCK_STREAM, 0);
-	if (fd < 0)
-		die_errno("unable to create socket");
-	return fd;
-}
-
 static int chdir_len(const char *orig, int len)
 {
 	char *path = xmemdupz(orig, len);
@@ -73,13 +65,16 @@ static int unix_sockaddr_init(struct sockaddr_un *sa, const char *path,
 
 int unix_stream_connect(const char *path)
 {
-	int fd, saved_errno;
+	int fd = -1, saved_errno;
 	struct sockaddr_un sa;
 	struct unix_sockaddr_context ctx;
 
 	if (unix_sockaddr_init(&sa, path, &ctx) < 0)
 		return -1;
-	fd = unix_stream_socket();
+	fd = socket(AF_UNIX, SOCK_STREAM, 0);
+	if (fd < 0)
+		goto fail;
+
 	if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0)
 		goto fail;
 	unix_sockaddr_cleanup(&ctx);
@@ -87,15 +82,16 @@ int unix_stream_connect(const char *path)
 
 fail:
 	saved_errno = errno;
+	if (fd != -1)
+		close(fd);
 	unix_sockaddr_cleanup(&ctx);
-	close(fd);
 	errno = saved_errno;
 	return -1;
 }
 
 int unix_stream_listen(const char *path)
 {
-	int fd, saved_errno;
+	int fd = -1, saved_errno;
 	struct sockaddr_un sa;
 	struct unix_sockaddr_context ctx;
 
@@ -103,7 +99,9 @@ int unix_stream_listen(const char *path)
 
 	if (unix_sockaddr_init(&sa, path, &ctx) < 0)
 		return -1;
-	fd = unix_stream_socket();
+	fd = socket(AF_UNIX, SOCK_STREAM, 0);
+	if (fd < 0)
+		goto fail;
 
 	if (bind(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0)
 		goto fail;
@@ -116,8 +114,9 @@ int unix_stream_listen(const char *path)
 
 fail:
 	saved_errno = errno;
+	if (fd != -1)
+		close(fd);
 	unix_sockaddr_cleanup(&ctx);
-	close(fd);
 	errno = saved_errno;
 	return -1;
 }
-- 
gitgitgadget


^ permalink raw reply related	[flat|nested] 178+ messages in thread

* [PATCH v4 08/12] unix-socket: add backlog size option to unix_stream_listen()
  2021-02-17 21:48     ` [PATCH v4 00/12] Simple IPC Mechanism Jeff Hostetler via GitGitGadget
                         ` (6 preceding siblings ...)
  2021-02-17 21:48       ` [PATCH v4 07/12] unix-socket: eliminate static unix_stream_socket() helper function Jeff Hostetler via GitGitGadget
@ 2021-02-17 21:48       ` Jeff Hostetler via GitGitGadget
  2021-02-26  7:30         ` Jeff King
  2021-02-17 21:48       ` [PATCH v4 09/12] unix-socket: disallow chdir() when creating unix domain sockets Jeff Hostetler via GitGitGadget
                         ` (5 subsequent siblings)
  13 siblings, 1 reply; 178+ messages in thread
From: Jeff Hostetler via GitGitGadget @ 2021-02-17 21:48 UTC (permalink / raw)
  To: git
  Cc: Jeff Hostetler, Jeff King, SZEDER Gábor,
	Johannes Schindelin, Jeff Hostetler, Jeff Hostetler

From: Jeff Hostetler <jeffhost@microsoft.com>

Update `unix_stream_listen()` to take an options structure to override
default behaviors.  For now, the only option is the size of the
`listen()` backlog.
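
For example, a caller that wants a deeper backlog can now write
something like this (a sketch; the value 128 is arbitrary):

    struct unix_stream_listen_opts opts = UNIX_STREAM_LISTEN_OPTS_INIT;
    int fd;

    opts.listen_backlog_size = 128; /* arbitrary example value */

    fd = unix_stream_listen(socket_path, &opts);
    if (fd < 0)
        die_errno("unable to bind to '%s'", socket_path);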

Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
---
 builtin/credential-cache--daemon.c |  3 ++-
 unix-socket.c                      |  9 +++++++--
 unix-socket.h                      | 14 +++++++++++++-
 3 files changed, 22 insertions(+), 4 deletions(-)

diff --git a/builtin/credential-cache--daemon.c b/builtin/credential-cache--daemon.c
index c61f123a3b81..4c6c89ab0de2 100644
--- a/builtin/credential-cache--daemon.c
+++ b/builtin/credential-cache--daemon.c
@@ -203,9 +203,10 @@ static int serve_cache_loop(int fd)
 
 static void serve_cache(const char *socket_path, int debug)
 {
+	struct unix_stream_listen_opts opts = UNIX_STREAM_LISTEN_OPTS_INIT;
 	int fd;
 
-	fd = unix_stream_listen(socket_path);
+	fd = unix_stream_listen(socket_path, &opts);
 	if (fd < 0)
 		die_errno("unable to bind to '%s'", socket_path);
 
diff --git a/unix-socket.c b/unix-socket.c
index 69f81d64e9d5..5ac7dafe9828 100644
--- a/unix-socket.c
+++ b/unix-socket.c
@@ -89,9 +89,11 @@ int unix_stream_connect(const char *path)
 	return -1;
 }
 
-int unix_stream_listen(const char *path)
+int unix_stream_listen(const char *path,
+		       const struct unix_stream_listen_opts *opts)
 {
 	int fd = -1, saved_errno;
+	int backlog;
 	struct sockaddr_un sa;
 	struct unix_sockaddr_context ctx;
 
@@ -106,7 +108,10 @@ int unix_stream_listen(const char *path)
 	if (bind(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0)
 		goto fail;
 
-	if (listen(fd, 5) < 0)
+	backlog = opts->listen_backlog_size;
+	if (backlog <= 0)
+		backlog = DEFAULT_UNIX_STREAM_LISTEN_BACKLOG;
+	if (listen(fd, backlog) < 0)
 		goto fail;
 
 	unix_sockaddr_cleanup(&ctx);
diff --git a/unix-socket.h b/unix-socket.h
index e271aeec5a07..06a5a05b03fe 100644
--- a/unix-socket.h
+++ b/unix-socket.h
@@ -1,7 +1,19 @@
 #ifndef UNIX_SOCKET_H
 #define UNIX_SOCKET_H
 
+struct unix_stream_listen_opts {
+	int listen_backlog_size;
+};
+
+#define DEFAULT_UNIX_STREAM_LISTEN_BACKLOG (5)
+
+#define UNIX_STREAM_LISTEN_OPTS_INIT \
+{ \
+	.listen_backlog_size = DEFAULT_UNIX_STREAM_LISTEN_BACKLOG, \
+}
+
 int unix_stream_connect(const char *path);
-int unix_stream_listen(const char *path);
+int unix_stream_listen(const char *path,
+		       const struct unix_stream_listen_opts *opts);
 
 #endif /* UNIX_SOCKET_H */
-- 
gitgitgadget


^ permalink raw reply related	[flat|nested] 178+ messages in thread

* [PATCH v4 09/12] unix-socket: disallow chdir() when creating unix domain sockets
  2021-02-17 21:48     ` [PATCH v4 00/12] Simple IPC Mechanism Jeff Hostetler via GitGitGadget
                         ` (7 preceding siblings ...)
  2021-02-17 21:48       ` [PATCH v4 08/12] unix-socket: add backlog size option to unix_stream_listen() Jeff Hostetler via GitGitGadget
@ 2021-02-17 21:48       ` Jeff Hostetler via GitGitGadget
  2021-03-03 22:53         ` Junio C Hamano
  2021-02-17 21:48       ` [PATCH v4 10/12] unix-socket: create `unix_stream_server__listen_with_lock()` Jeff Hostetler via GitGitGadget
                         ` (4 subsequent siblings)
  13 siblings, 1 reply; 178+ messages in thread
From: Jeff Hostetler via GitGitGadget @ 2021-02-17 21:48 UTC (permalink / raw)
  To: git
  Cc: Jeff Hostetler, Jeff King, SZEDER Gábor,
	Johannes Schindelin, Jeff Hostetler, Jeff Hostetler

From: Jeff Hostetler <jeffhost@microsoft.com>

Calls to `chdir()` are dangerous in a multi-threaded context.  If
`unix_stream_listen()` or `unix_stream_connect()` is given a socket
pathname that is too long to fit in a `sockaddr_un` structure, it will
`chdir()` to the parent directory of the requested socket pathname,
create the socket using a relative pathname, and then `chdir()` back.
This is not thread-safe.

Teach `unix_sockaddr_init()` to not allow calls to `chdir()` when the
new `disallow_chdir` flag is set.
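
For illustration, a multi-threaded caller can opt out of the chdir()
fallback like this (a sketch; overly long socket paths then simply
fail with ENAMETOOLONG):

    /* client side: pass a non-zero disallow_chdir argument */
    int fd = unix_stream_connect(path, 1);

    /* server side: the same policy via the options structure */
    struct unix_stream_listen_opts opts = UNIX_STREAM_LISTEN_OPTS_INIT;
    opts.disallow_chdir = 1;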

Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
---
 builtin/credential-cache.c |  2 +-
 unix-socket.c              | 17 ++++++++++++-----
 unix-socket.h              |  4 +++-
 3 files changed, 16 insertions(+), 7 deletions(-)

diff --git a/builtin/credential-cache.c b/builtin/credential-cache.c
index 9b3f70990597..76a6ba37223f 100644
--- a/builtin/credential-cache.c
+++ b/builtin/credential-cache.c
@@ -14,7 +14,7 @@
 static int send_request(const char *socket, const struct strbuf *out)
 {
 	int got_data = 0;
-	int fd = unix_stream_connect(socket);
+	int fd = unix_stream_connect(socket, 0);
 
 	if (fd < 0)
 		return -1;
diff --git a/unix-socket.c b/unix-socket.c
index 5ac7dafe9828..1eaa8cf759c0 100644
--- a/unix-socket.c
+++ b/unix-socket.c
@@ -28,16 +28,23 @@ static void unix_sockaddr_cleanup(struct unix_sockaddr_context *ctx)
 }
 
 static int unix_sockaddr_init(struct sockaddr_un *sa, const char *path,
-			      struct unix_sockaddr_context *ctx)
+			      struct unix_sockaddr_context *ctx,
+			      int disallow_chdir)
 {
 	int size = strlen(path) + 1;
 
 	ctx->orig_dir = NULL;
 	if (size > sizeof(sa->sun_path)) {
-		const char *slash = find_last_dir_sep(path);
+		const char *slash;
 		const char *dir;
 		struct strbuf cwd = STRBUF_INIT;
 
+		if (disallow_chdir) {
+			errno = ENAMETOOLONG;
+			return -1;
+		}
+
+		slash = find_last_dir_sep(path);
 		if (!slash) {
 			errno = ENAMETOOLONG;
 			return -1;
@@ -63,13 +70,13 @@ static int unix_sockaddr_init(struct sockaddr_un *sa, const char *path,
 	return 0;
 }
 
-int unix_stream_connect(const char *path)
+int unix_stream_connect(const char *path, int disallow_chdir)
 {
 	int fd = -1, saved_errno;
 	struct sockaddr_un sa;
 	struct unix_sockaddr_context ctx;
 
-	if (unix_sockaddr_init(&sa, path, &ctx) < 0)
+	if (unix_sockaddr_init(&sa, path, &ctx, disallow_chdir) < 0)
 		return -1;
 	fd = socket(AF_UNIX, SOCK_STREAM, 0);
 	if (fd < 0)
@@ -99,7 +106,7 @@ int unix_stream_listen(const char *path,
 
 	unlink(path);
 
-	if (unix_sockaddr_init(&sa, path, &ctx) < 0)
+	if (unix_sockaddr_init(&sa, path, &ctx, opts->disallow_chdir) < 0)
 		return -1;
 	fd = socket(AF_UNIX, SOCK_STREAM, 0);
 	if (fd < 0)
diff --git a/unix-socket.h b/unix-socket.h
index 06a5a05b03fe..2c0b2e79d7b3 100644
--- a/unix-socket.h
+++ b/unix-socket.h
@@ -3,6 +3,7 @@
 
 struct unix_stream_listen_opts {
 	int listen_backlog_size;
+	unsigned int disallow_chdir:1;
 };
 
 #define DEFAULT_UNIX_STREAM_LISTEN_BACKLOG (5)
@@ -10,9 +11,10 @@ struct unix_stream_listen_opts {
 #define UNIX_STREAM_LISTEN_OPTS_INIT \
 { \
 	.listen_backlog_size = DEFAULT_UNIX_STREAM_LISTEN_BACKLOG, \
+	.disallow_chdir = 0, \
 }
 
-int unix_stream_connect(const char *path);
+int unix_stream_connect(const char *path, int disallow_chdir);
 int unix_stream_listen(const char *path,
 		       const struct unix_stream_listen_opts *opts);
 
-- 
gitgitgadget


^ permalink raw reply related	[flat|nested] 178+ messages in thread

* [PATCH v4 10/12] unix-socket: create `unix_stream_server__listen_with_lock()`
  2021-02-17 21:48     ` [PATCH v4 00/12] Simple IPC Mechanism Jeff Hostetler via GitGitGadget
                         ` (8 preceding siblings ...)
  2021-02-17 21:48       ` [PATCH v4 09/12] unix-socket: disallow chdir() when creating unix domain sockets Jeff Hostetler via GitGitGadget
@ 2021-02-17 21:48       ` Jeff Hostetler via GitGitGadget
  2021-02-26  7:56         ` Jeff King
  2021-02-17 21:48       ` [PATCH v4 11/12] simple-ipc: add Unix domain socket implementation Jeff Hostetler via GitGitGadget
                         ` (3 subsequent siblings)
  13 siblings, 1 reply; 178+ messages in thread
From: Jeff Hostetler via GitGitGadget @ 2021-02-17 21:48 UTC (permalink / raw)
  To: git
  Cc: Jeff Hostetler, Jeff King, SZEDER Gábor,
	Johannes Schindelin, Jeff Hostetler, Jeff Hostetler

From: Jeff Hostetler <jeffhost@microsoft.com>

Create a version of `unix_stream_listen()` that uses a ".lock" lockfile
to create the unix domain socket in a race-free manner.

Unix domain sockets have a fundamental problem on Unix systems because
they persist in the filesystem until they are deleted.  This is
independent of whether a server is actually listening for connections.
Well-behaved servers are expected to delete the socket when they
shutdown.  A new server cannot easily tell if a found socket is
attached to an active server or is leftover cruft from a dead server.
The traditional solution used by `unix_stream_listen()` is to force
delete the socket pathname and then create a new socket.  This solves
the latter (cruft) problem, but in the case of the former, it orphans
the existing server (by stealing the pathname associated with the
socket it is listening on).

We cannot directly use a .lock lockfile to create the socket because
the socket is created by `bind(2)` rather than the `open(2)` mechanism
used by `tempfile.c`.

As an alternative, we hold a plain lockfile ("<path>.lock") as a
mutual exclusion device.  Under the lock, we test if an existing
socket ("<path>") has an active server.  If not, we create a new
socket and begin listening.  Then we roll back the lockfile in all
cases.
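
A rough usage sketch of the new helper (error handling trimmed):

    struct unix_stream_listen_opts opts = UNIX_STREAM_LISTEN_OPTS_INIT;
    struct unix_stream_server_socket *server_socket;

    server_socket = unix_stream_server__listen_with_lock(path, &opts);
    if (!server_socket)
        return error(_("could not set up listener on '%s'"), path);

    /* accept and service connections on server_socket->fd_socket */

    unix_stream_server__free(server_socket);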

Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
---
 unix-socket.c | 115 ++++++++++++++++++++++++++++++++++++++++++++++++++
 unix-socket.h |  29 +++++++++++++
 2 files changed, 144 insertions(+)

diff --git a/unix-socket.c b/unix-socket.c
index 1eaa8cf759c0..647bbde37f97 100644
--- a/unix-socket.c
+++ b/unix-socket.c
@@ -1,4 +1,5 @@
 #include "cache.h"
+#include "lockfile.h"
 #include "unix-socket.h"
 
 static int chdir_len(const char *orig, int len)
@@ -132,3 +133,117 @@ int unix_stream_listen(const char *path,
 	errno = saved_errno;
 	return -1;
 }
+
+static int is_another_server_alive(const char *path,
+				   const struct unix_stream_listen_opts *opts)
+{
+	struct stat st;
+	int fd;
+
+	if (!lstat(path, &st) && S_ISSOCK(st.st_mode)) {
+		/*
+		 * A socket-inode exists on disk at `path`, but we
+		 * don't know whether it belongs to an active server
+		 * or whether the last server died without cleaning
+		 * up.
+		 *
+		 * Poke it with a trivial connection to try to find
+		 * out.
+		 */
+		fd = unix_stream_connect(path, opts->disallow_chdir);
+		if (fd >= 0) {
+			close(fd);
+			return 1;
+		}
+	}
+
+	return 0;
+}
+
+struct unix_stream_server_socket *unix_stream_server__listen_with_lock(
+	const char *path,
+	const struct unix_stream_listen_opts *opts)
+{
+	struct lock_file lock = LOCK_INIT;
+	int fd_socket;
+	struct unix_stream_server_socket *server_socket;
+
+	/*
+	 * Create a lock at "<path>.lock" if we can.
+	 */
+	if (hold_lock_file_for_update_timeout(&lock, path, 0,
+					      opts->timeout_ms) < 0) {
+		error_errno(_("could not lock listener socket '%s'"), path);
+		return NULL;
+	}
+
+	/*
+	 * If another server is listening on "<path>" give up.  We do not
+	 * want to create a socket and steal future connections from them.
+	 */
+	if (is_another_server_alive(path, opts)) {
+		errno = EADDRINUSE;
+		error_errno(_("listener socket already in use '%s'"), path);
+		rollback_lock_file(&lock);
+		return NULL;
+	}
+
+	/*
+	 * Create and bind to a Unix domain socket at "<path>".
+	 */
+	fd_socket = unix_stream_listen(path, opts);
+	if (fd_socket < 0) {
+		error_errno(_("could not create listener socket '%s'"), path);
+		rollback_lock_file(&lock);
+		return NULL;
+	}
+
+	server_socket = xcalloc(1, sizeof(*server_socket));
+	server_socket->path_socket = strdup(path);
+	server_socket->fd_socket = fd_socket;
+	lstat(path, &server_socket->st_socket);
+
+	/*
+	 * Always rollback (just delete) "<path>.lock" because we already created
+	 * "<path>" as a socket and do not want to commit_lock to do the atomic
+	 * rename trick.
+	 */
+	rollback_lock_file(&lock);
+
+	return server_socket;
+}
+
+void unix_stream_server__free(
+	struct unix_stream_server_socket *server_socket)
+{
+	if (!server_socket)
+		return;
+
+	if (server_socket->fd_socket >= 0) {
+		if (!unix_stream_server__was_stolen(server_socket))
+			unlink(server_socket->path_socket);
+		close(server_socket->fd_socket);
+	}
+
+	free(server_socket->path_socket);
+	free(server_socket);
+}
+
+int unix_stream_server__was_stolen(
+	struct unix_stream_server_socket *server_socket)
+{
+	struct stat st_now;
+
+	if (!server_socket)
+		return 0;
+
+	if (lstat(server_socket->path_socket, &st_now) == -1)
+		return 1;
+
+	if (st_now.st_ino != server_socket->st_socket.st_ino)
+		return 1;
+
+	/* We might also consider the ctime on some platforms. */
+
+	return 0;
+}
diff --git a/unix-socket.h b/unix-socket.h
index 2c0b2e79d7b3..8faf5b692f90 100644
--- a/unix-socket.h
+++ b/unix-socket.h
@@ -2,14 +2,17 @@
 #define UNIX_SOCKET_H
 
 struct unix_stream_listen_opts {
+	long timeout_ms;
 	int listen_backlog_size;
 	unsigned int disallow_chdir:1;
 };
 
+#define DEFAULT_UNIX_STREAM_LISTEN_TIMEOUT (100)
 #define DEFAULT_UNIX_STREAM_LISTEN_BACKLOG (5)
 
 #define UNIX_STREAM_LISTEN_OPTS_INIT \
 { \
+	.timeout_ms = DEFAULT_UNIX_STREAM_LISTEN_TIMEOUT, \
 	.listen_backlog_size = DEFAULT_UNIX_STREAM_LISTEN_BACKLOG, \
 	.disallow_chdir = 0, \
 }
@@ -18,4 +21,30 @@ int unix_stream_connect(const char *path, int disallow_chdir);
 int unix_stream_listen(const char *path,
 		       const struct unix_stream_listen_opts *opts);
 
+struct unix_stream_server_socket {
+	char *path_socket;
+	struct stat st_socket;
+	int fd_socket;
+};
+
+/*
+ * Create a Unix Domain Socket at the given path under the protection
+ * of a '.lock' lockfile.
+ */
+struct unix_stream_server_socket *unix_stream_server__listen_with_lock(
+	const char *path,
+	const struct unix_stream_listen_opts *opts);
+
+/*
+ * Close and delete the socket.
+ */
+void unix_stream_server__free(
+	struct unix_stream_server_socket *server_socket);
+
+/*
+ * Return 1 if the inode of the pathname to our socket changes.
+ */
+int unix_stream_server__was_stolen(
+	struct unix_stream_server_socket *server_socket);
+
 #endif /* UNIX_SOCKET_H */
-- 
gitgitgadget


^ permalink raw reply related	[flat|nested] 178+ messages in thread

* [PATCH v4 11/12] simple-ipc: add Unix domain socket implementation
  2021-02-17 21:48     ` [PATCH v4 00/12] Simple IPC Mechanism Jeff Hostetler via GitGitGadget
                         ` (9 preceding siblings ...)
  2021-02-17 21:48       ` [PATCH v4 10/12] unix-socket: create `unix_stream_server__listen_with_lock()` Jeff Hostetler via GitGitGadget
@ 2021-02-17 21:48       ` Jeff Hostetler via GitGitGadget
  2021-02-17 21:48       ` [PATCH v4 12/12] t0052: add simple-ipc tests and t/helper/test-simple-ipc tool Jeff Hostetler via GitGitGadget
                         ` (2 subsequent siblings)
  13 siblings, 0 replies; 178+ messages in thread
From: Jeff Hostetler via GitGitGadget @ 2021-02-17 21:48 UTC (permalink / raw)
  To: git
  Cc: Jeff Hostetler, Jeff King, SZEDER Gábor,
	Johannes Schindelin, Jeff Hostetler, Jeff Hostetler

From: Jeff Hostetler <jeffhost@microsoft.com>

Create Unix domain socket based implementation of "simple-ipc".

A set of `ipc_client` routines implement a client library to connect
to an `ipc_server` over a Unix domain socket, send a simple request,
and receive a single response.  Clients use blocking IO on the socket.

A set of `ipc_server` routines implement a thread pool to listen for
and concurrently service client connections.

The server creates a new Unix domain socket at a known location.  If a
socket already exists with that name, the server tries to determine if
another server is already listening on the socket or if the socket is
dead.  If the socket is busy, the server exits with an error rather than
stealing the socket.  If the socket is dead, the server creates a new
one and starts up.

If while running, the server detects that its socket has been stolen
by another server, it automatically exits.
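
For illustration, a prospective daemon could use
`ipc_get_active_state()` to decide whether it is safe to start up
(sketch only; the error messages are placeholders):

    switch (ipc_get_active_state(path)) {
    case IPC_STATE__LISTENING:
        return error(_("server already running on '%s'"), path);
    case IPC_STATE__PATH_NOT_FOUND:
    case IPC_STATE__NOT_LISTENING:
        break; /* no live server; safe to create the socket */
    default:
        return error(_("cannot use socket path '%s'"), path);
    }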

Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
---
 Makefile                            |   2 +
 compat/simple-ipc/ipc-unix-socket.c | 979 ++++++++++++++++++++++++++++
 contrib/buildsystems/CMakeLists.txt |   2 +
 simple-ipc.h                        |  13 +-
 4 files changed, 995 insertions(+), 1 deletion(-)
 create mode 100644 compat/simple-ipc/ipc-unix-socket.c

diff --git a/Makefile b/Makefile
index 40d5cab78d3f..08a4c88b92f5 100644
--- a/Makefile
+++ b/Makefile
@@ -1677,6 +1677,8 @@ ifdef NO_UNIX_SOCKETS
 	BASIC_CFLAGS += -DNO_UNIX_SOCKETS
 else
 	LIB_OBJS += unix-socket.o
+	LIB_OBJS += compat/simple-ipc/ipc-shared.o
+	LIB_OBJS += compat/simple-ipc/ipc-unix-socket.o
 endif
 
 ifdef USE_WIN32_IPC
diff --git a/compat/simple-ipc/ipc-unix-socket.c b/compat/simple-ipc/ipc-unix-socket.c
new file mode 100644
index 000000000000..b7fd0b34329e
--- /dev/null
+++ b/compat/simple-ipc/ipc-unix-socket.c
@@ -0,0 +1,979 @@
+#include "cache.h"
+#include "simple-ipc.h"
+#include "strbuf.h"
+#include "pkt-line.h"
+#include "thread-utils.h"
+#include "unix-socket.h"
+
+#ifdef NO_UNIX_SOCKETS
+#error compat/simple-ipc/ipc-unix-socket.c requires Unix sockets
+#endif
+
+enum ipc_active_state ipc_get_active_state(const char *path)
+{
+	enum ipc_active_state state = IPC_STATE__OTHER_ERROR;
+	struct ipc_client_connect_options options
+		= IPC_CLIENT_CONNECT_OPTIONS_INIT;
+	struct stat st;
+	struct ipc_client_connection *connection_test = NULL;
+
+	options.wait_if_busy = 0;
+	options.wait_if_not_found = 0;
+
+	if (lstat(path, &st) == -1) {
+		switch (errno) {
+		case ENOENT:
+		case ENOTDIR:
+			return IPC_STATE__NOT_LISTENING;
+		default:
+			return IPC_STATE__INVALID_PATH;
+		}
+	}
+
+	/* also complain if a plain file is in the way */
+	if ((st.st_mode & S_IFMT) != S_IFSOCK)
+		return IPC_STATE__INVALID_PATH;
+
+	/*
+	 * Just because the filesystem has an S_IFSOCK type inode
+	 * at `path` doesn't mean that there is a server listening.
+	 * Ping it to be sure.
+	 */
+	state = ipc_client_try_connect(path, &options, &connection_test);
+	ipc_client_close_connection(connection_test);
+
+	return state;
+}
+
+/*
+ * This value was chosen at random.
+ */
+#define WAIT_STEP_MS (50)
+
+/*
+ * Try to connect to the server.  If the server is just starting up or
+ * is very busy, we may not get a connection the first time.
+ */
+static enum ipc_active_state connect_to_server(
+	const char *path,
+	int timeout_ms,
+	const struct ipc_client_connect_options *options,
+	int *pfd)
+{
+	int wait_ms = 50;
+	int k;
+
+	*pfd = -1;
+
+	for (k = 0; k < timeout_ms; k += wait_ms) {
+		int fd = unix_stream_connect(path, options->uds_disallow_chdir);
+
+		if (fd != -1) {
+			*pfd = fd;
+			return IPC_STATE__LISTENING;
+		}
+
+		if (errno == ENOENT) {
+			if (!options->wait_if_not_found)
+				return IPC_STATE__PATH_NOT_FOUND;
+
+			goto sleep_and_try_again;
+		}
+
+		if (errno == ETIMEDOUT) {
+			if (!options->wait_if_busy)
+				return IPC_STATE__NOT_LISTENING;
+
+			goto sleep_and_try_again;
+		}
+
+		if (errno == ECONNREFUSED) {
+			if (!options->wait_if_busy)
+				return IPC_STATE__NOT_LISTENING;
+
+			goto sleep_and_try_again;
+		}
+
+		return IPC_STATE__OTHER_ERROR;
+
+	sleep_and_try_again:
+		sleep_millisec(wait_ms);
+	}
+
+	return IPC_STATE__NOT_LISTENING;
+}
+
+/*
+ * A randomly chosen timeout value.
+ */
+#define MY_CONNECTION_TIMEOUT_MS (1000)
+
+enum ipc_active_state ipc_client_try_connect(
+	const char *path,
+	const struct ipc_client_connect_options *options,
+	struct ipc_client_connection **p_connection)
+{
+	enum ipc_active_state state = IPC_STATE__OTHER_ERROR;
+	int fd = -1;
+
+	*p_connection = NULL;
+
+	trace2_region_enter("ipc-client", "try-connect", NULL);
+	trace2_data_string("ipc-client", NULL, "try-connect/path", path);
+
+	state = connect_to_server(path, MY_CONNECTION_TIMEOUT_MS,
+				  options, &fd);
+
+	trace2_data_intmax("ipc-client", NULL, "try-connect/state",
+			   (intmax_t)state);
+	trace2_region_leave("ipc-client", "try-connect", NULL);
+
+	if (state == IPC_STATE__LISTENING) {
+		(*p_connection) = xcalloc(1, sizeof(struct ipc_client_connection));
+		(*p_connection)->fd = fd;
+	}
+
+	return state;
+}
+
+void ipc_client_close_connection(struct ipc_client_connection *connection)
+{
+	if (!connection)
+		return;
+
+	if (connection->fd != -1)
+		close(connection->fd);
+
+	free(connection);
+}
+
+int ipc_client_send_command_to_connection(
+	struct ipc_client_connection *connection,
+	const char *message, struct strbuf *answer)
+{
+	int ret = 0;
+
+	strbuf_setlen(answer, 0);
+
+	trace2_region_enter("ipc-client", "send-command", NULL);
+
+	if (write_packetized_from_buf_no_flush(message, strlen(message),
+					       connection->fd) < 0 ||
+	    packet_flush_gently(connection->fd) < 0) {
+		ret = error(_("could not send IPC command"));
+		goto done;
+	}
+
+	if (read_packetized_to_strbuf(
+		    connection->fd, answer,
+		    PACKET_READ_GENTLE_ON_EOF | PACKET_READ_NEVER_DIE) < 0) {
+		ret = error(_("could not read IPC response"));
+		goto done;
+	}
+
+done:
+	trace2_region_leave("ipc-client", "send-command", NULL);
+	return ret;
+}
+
+int ipc_client_send_command(const char *path,
+			    const struct ipc_client_connect_options *options,
+			    const char *message, struct strbuf *answer)
+{
+	int ret = -1;
+	enum ipc_active_state state;
+	struct ipc_client_connection *connection = NULL;
+
+	state = ipc_client_try_connect(path, options, &connection);
+
+	if (state != IPC_STATE__LISTENING)
+		return ret;
+
+	ret = ipc_client_send_command_to_connection(connection, message, answer);
+
+	ipc_client_close_connection(connection);
+
+	return ret;
+}
+
+static int set_socket_blocking_flag(int fd, int make_nonblocking)
+{
+	int flags;
+
+	flags = fcntl(fd, F_GETFL, NULL);
+
+	if (flags < 0)
+		return -1;
+
+	if (make_nonblocking)
+		flags |= O_NONBLOCK;
+	else
+		flags &= ~O_NONBLOCK;
+
+	return fcntl(fd, F_SETFL, flags);
+}
+
+/*
+ * Magic numbers used to annotate callback instance data.
+ * These are used to help guard against accidentally passing the
+ * wrong instance data across multiple levels of callbacks (which
+ * is easy to do if there are `void*` arguments).
+ */
+enum magic {
+	MAGIC_SERVER_REPLY_DATA,
+	MAGIC_WORKER_THREAD_DATA,
+	MAGIC_ACCEPT_THREAD_DATA,
+	MAGIC_SERVER_DATA,
+};
+
+struct ipc_server_reply_data {
+	enum magic magic;
+	int fd;
+	struct ipc_worker_thread_data *worker_thread_data;
+};
+
+struct ipc_worker_thread_data {
+	enum magic magic;
+	struct ipc_worker_thread_data *next_thread;
+	struct ipc_server_data *server_data;
+	pthread_t pthread_id;
+};
+
+struct ipc_accept_thread_data {
+	enum magic magic;
+	struct ipc_server_data *server_data;
+
+	struct unix_stream_server_socket *server_socket;
+
+	int fd_send_shutdown;
+	int fd_wait_shutdown;
+	pthread_t pthread_id;
+};
+
+/*
+ * With unix-sockets, the conceptual "ipc-server" is implemented as a single
+ * controller "accept-thread" thread and a pool of "worker-thread" threads.
+ * The former does the usual `accept()` loop and dispatches connections
+ * to an idle worker thread.  The worker threads wait in an idle loop for
+ * a new connection, communicate with the client and relay data to/from
+ * the `application_cb` and then wait for another connection from the
+ * server thread.  This avoids the overhead of constantly creating and
+ * destroying threads.
+ */
+struct ipc_server_data {
+	enum magic magic;
+	ipc_server_application_cb *application_cb;
+	void *application_data;
+	struct strbuf buf_path;
+
+	struct ipc_accept_thread_data *accept_thread;
+	struct ipc_worker_thread_data *worker_thread_list;
+
+	pthread_mutex_t work_available_mutex;
+	pthread_cond_t work_available_cond;
+
+	/*
+	 * Accepted but not yet processed client connections are kept
+	 * in a circular buffer FIFO.  The queue is empty when the
+	 * positions are equal.
+	 */
+	int *fifo_fds;
+	int queue_size;
+	int back_pos;
+	int front_pos;
+
+	int shutdown_requested;
+	int is_stopped;
+};
+
+/*
+ * Remove and return the oldest queued connection.
+ *
+ * Returns -1 if empty.
+ */
+static int fifo_dequeue(struct ipc_server_data *server_data)
+{
+	/* ASSERT holding mutex */
+
+	int fd;
+
+	if (server_data->back_pos == server_data->front_pos)
+		return -1;
+
+	fd = server_data->fifo_fds[server_data->front_pos];
+	server_data->fifo_fds[server_data->front_pos] = -1;
+
+	server_data->front_pos++;
+	if (server_data->front_pos == server_data->queue_size)
+		server_data->front_pos = 0;
+
+	return fd;
+}
+
+/*
+ * Push a new fd onto the back of the queue.
+ *
+ * Drop it and return -1 if queue is already full.
+ */
+static int fifo_enqueue(struct ipc_server_data *server_data, int fd)
+{
+	/* ASSERT holding mutex */
+
+	int next_back_pos;
+
+	next_back_pos = server_data->back_pos + 1;
+	if (next_back_pos == server_data->queue_size)
+		next_back_pos = 0;
+
+	if (next_back_pos == server_data->front_pos) {
+		/* Queue is full. Just drop it. */
+		close(fd);
+		return -1;
+	}
+
+	server_data->fifo_fds[server_data->back_pos] = fd;
+	server_data->back_pos = next_back_pos;
+
+	return fd;
+}
+
+/*
+ * Wait for a connection to be queued to the FIFO and return it.
+ *
+ * Returns -1 if someone has already requested a shutdown.
+ */
+static int worker_thread__wait_for_connection(
+	struct ipc_worker_thread_data *worker_thread_data)
+{
+	/* ASSERT NOT holding mutex */
+
+	struct ipc_server_data *server_data = worker_thread_data->server_data;
+	int fd = -1;
+
+	pthread_mutex_lock(&server_data->work_available_mutex);
+	for (;;) {
+		if (server_data->shutdown_requested)
+			break;
+
+		fd = fifo_dequeue(server_data);
+		if (fd >= 0)
+			break;
+
+		pthread_cond_wait(&server_data->work_available_cond,
+				  &server_data->work_available_mutex);
+	}
+	pthread_mutex_unlock(&server_data->work_available_mutex);
+
+	return fd;
+}
+
+/*
+ * Forward declare our reply callback function so that any compiler
+ * errors are reported when we actually define the function (in addition
+ * to any errors reported when we try to pass this callback function as
+ * a parameter in a function call).  The former are easier to understand.
+ */
+static ipc_server_reply_cb do_io_reply_callback;
+
+/*
+ * Relay the application's response message to the client process.
+ * (We do not flush at this point because we allow the caller
+ * to chunk data to the client through us.)
+ */
+static int do_io_reply_callback(struct ipc_server_reply_data *reply_data,
+		       const char *response, size_t response_len)
+{
+	if (reply_data->magic != MAGIC_SERVER_REPLY_DATA)
+		BUG("reply_cb called with wrong instance data");
+
+	return write_packetized_from_buf_no_flush(response, response_len,
+						  reply_data->fd);
+}
+
+/* A randomly chosen value. */
+#define MY_WAIT_POLL_TIMEOUT_MS (10)
+
+/*
+ * If the client hangs up without sending any data on the wire, just
+ * quietly close the socket and ignore this client.
+ *
+ * This worker thread is committed to reading the IPC request data
+ * from the client at the other end of this fd.  Wait here for the
+ * client to actually put something on the wire -- because if the
+ * client just does a ping (connect and hangup without sending any
+ * data), our use of the pkt-line read routines will spew an error
+ * message.
+ *
+ * Return -1 if the client hung up.
+ * Return 0 if data (possibly incomplete) is ready.
+ */
+static int worker_thread__wait_for_io_start(
+	struct ipc_worker_thread_data *worker_thread_data,
+	int fd)
+{
+	struct ipc_server_data *server_data = worker_thread_data->server_data;
+	struct pollfd pollfd[1];
+	int result;
+
+	for (;;) {
+		pollfd[0].fd = fd;
+		pollfd[0].events = POLLIN;
+
+		result = poll(pollfd, 1, MY_WAIT_POLL_TIMEOUT_MS);
+		if (result < 0) {
+			if (errno == EINTR)
+				continue;
+			goto cleanup;
+		}
+
+		if (result == 0) {
+			/* a timeout */
+
+			int in_shutdown;
+
+			pthread_mutex_lock(&server_data->work_available_mutex);
+			in_shutdown = server_data->shutdown_requested;
+			pthread_mutex_unlock(&server_data->work_available_mutex);
+
+			/*
+			 * If a shutdown is already in progress and this
+			 * client has not started talking yet, just drop it.
+			 */
+			if (in_shutdown)
+				goto cleanup;
+			continue;
+		}
+
+		if (pollfd[0].revents & POLLHUP)
+			goto cleanup;
+
+		if (pollfd[0].revents & POLLIN)
+			return 0;
+
+		goto cleanup;
+	}
+
+cleanup:
+	close(fd);
+	return -1;
+}
+
+/*
+ * Receive the request/command from the client and pass it to the
+ * registered request-callback.  The request-callback will compose
+ * a response and call our reply-callback to send it to the client.
+ */
+static int worker_thread__do_io(
+	struct ipc_worker_thread_data *worker_thread_data,
+	int fd)
+{
+	/* ASSERT NOT holding lock */
+
+	struct strbuf buf = STRBUF_INIT;
+	struct ipc_server_reply_data reply_data;
+	int ret = 0;
+
+	reply_data.magic = MAGIC_SERVER_REPLY_DATA;
+	reply_data.worker_thread_data = worker_thread_data;
+
+	reply_data.fd = fd;
+
+	ret = read_packetized_to_strbuf(
+		reply_data.fd, &buf,
+		PACKET_READ_GENTLE_ON_EOF | PACKET_READ_NEVER_DIE);
+	if (ret >= 0) {
+		ret = worker_thread_data->server_data->application_cb(
+			worker_thread_data->server_data->application_data,
+			buf.buf, do_io_reply_callback, &reply_data);
+
+		packet_flush_gently(reply_data.fd);
+	}
+	else {
+		/*
+		 * The client probably disconnected/shutdown before it
+		 * could send a well-formed message.  Ignore it.
+		 */
+	}
+
+	strbuf_release(&buf);
+	close(reply_data.fd);
+
+	return ret;
+}
+
+/*
+ * Block SIGPIPE on the current thread (so that we get EPIPE from
+ * write() rather than an actual signal).
+ *
+ * Note that using sigchain_push() and _pop() to control SIGPIPE
+ * around our IO calls is not thread safe:
+ * [] It uses a global stack of handler frames.
+ * [] It uses ALLOC_GROW() to resize it.
+ * [] Finally, according to the `signal(2)` man-page:
+ *    "The effects of `signal()` in a multithreaded process are unspecified."
+ */
+static void thread_block_sigpipe(sigset_t *old_set)
+{
+	sigset_t new_set;
+
+	sigemptyset(&new_set);
+	sigaddset(&new_set, SIGPIPE);
+
+	sigemptyset(old_set);
+	pthread_sigmask(SIG_BLOCK, &new_set, old_set);
+}
+
+/*
+ * Thread proc for an IPC worker thread.  It handles a series of
+ * connections from clients.  It pulls the next fd from the queue,
+ * processes it, and then waits for the next client.
+ *
+ * Block SIGPIPE in this worker thread for the life of the thread.
+ * This avoids stray (and sometimes delayed) SIGPIPE signals caused
+ * by client errors and/or when we are under extremely heavy IO load.
+ *
+ * This means that the application callback will have SIGPIPE blocked.
+ * The callback should not change it.
+ */
+static void *worker_thread_proc(void *_worker_thread_data)
+{
+	struct ipc_worker_thread_data *worker_thread_data = _worker_thread_data;
+	struct ipc_server_data *server_data = worker_thread_data->server_data;
+	sigset_t old_set;
+	int fd, io;
+	int ret;
+
+	trace2_thread_start("ipc-worker");
+
+	thread_block_sigpipe(&old_set);
+
+	for (;;) {
+		fd = worker_thread__wait_for_connection(worker_thread_data);
+		if (fd == -1)
+			break; /* in shutdown */
+
+		io = worker_thread__wait_for_io_start(worker_thread_data, fd);
+		if (io == -1)
+			continue; /* client hung up without sending anything */
+
+		ret = worker_thread__do_io(worker_thread_data, fd);
+
+		if (ret == SIMPLE_IPC_QUIT) {
+			trace2_data_string("ipc-worker", NULL, "queue_stop_async",
+					   "application_quit");
+			/*
+			 * The application layer is telling the ipc-server
+			 * layer to shut down.
+			 *
+			 * We DO NOT have a response to send to the client.
+			 *
+			 * Queue an async stop (to stop the other threads) and
+			 * allow this worker thread to exit now (no sense waiting
+			 * for the thread-pool shutdown signal).
+			 *
+			 * Other non-idle worker threads are allowed to finish
+			 * responding to their current clients.
+			 */
+			ipc_server_stop_async(server_data);
+			break;
+		}
+	}
+
+	trace2_thread_exit();
+	return NULL;
+}
+
+/* A randomly chosen value. */
+#define MY_ACCEPT_POLL_TIMEOUT_MS (60 * 1000)
+
+/*
+ * Accept a new client connection on our socket.  This uses non-blocking
+ * IO so that we can also wait for shutdown requests on our socket-pair
+ * without actually spinning on a fast timeout.
+ */
+static int accept_thread__wait_for_connection(
+	struct ipc_accept_thread_data *accept_thread_data)
+{
+	struct pollfd pollfd[2];
+	int result;
+
+	for (;;) {
+		pollfd[0].fd = accept_thread_data->fd_wait_shutdown;
+		pollfd[0].events = POLLIN;
+
+		pollfd[1].fd = accept_thread_data->server_socket->fd_socket;
+		pollfd[1].events = POLLIN;
+
+		result = poll(pollfd, 2, MY_ACCEPT_POLL_TIMEOUT_MS);
+		if (result < 0) {
+			if (errno == EINTR)
+				continue;
+			return result;
+		}
+
+		if (result == 0) {
+			/* a timeout */
+
+			/*
+			 * If someone deletes or force-creates a new unix
+			 * domain socket at our path, all future clients
+			 * will be routed elsewhere and we silently starve.
+			 * If that happens, just queue a shutdown.
+			 */
+			if (unix_stream_server__was_stolen(
+				    accept_thread_data->server_socket)) {
+				trace2_data_string("ipc-accept", NULL,
+						   "queue_stop_async",
+						   "socket_stolen");
+				ipc_server_stop_async(
+					accept_thread_data->server_data);
+			}
+			continue;
+		}
+
+		if (pollfd[0].revents & POLLIN) {
+			/* shutdown message queued to socketpair */
+			return -1;
+		}
+
+		if (pollfd[1].revents & POLLIN) {
+			/* a connection is available on server_socket */
+
+			int client_fd =
+				accept(accept_thread_data->server_socket->fd_socket,
+				       NULL, NULL);
+			if (client_fd >= 0)
+				return client_fd;
+
+			/*
+			 * An error here is unlikely -- it probably
+			 * indicates that the connecting process has
+			 * already dropped the connection.
+			 */
+			continue;
+		}
+
+		BUG("unandled poll result errno=%d r[0]=%d r[1]=%d",
+		    errno, pollfd[0].revents, pollfd[1].revents);
+	}
+}
+
+/*
+ * Thread proc for the IPC server "accept thread".  This waits for
+ * an incoming socket connection, appends it to the queue of available
+ * connections, and notifies a worker thread to process it.
+ *
+ * Block SIGPIPE in this thread for the life of the thread.  This
+ * avoids any stray SIGPIPE signals when closing pipe fds under
+ * extremely heavy loads (such as when the fifo queue is full and we
+ * drop incoming connections).
+ */
+static void *accept_thread_proc(void *_accept_thread_data)
+{
+	struct ipc_accept_thread_data *accept_thread_data = _accept_thread_data;
+	struct ipc_server_data *server_data = accept_thread_data->server_data;
+	sigset_t old_set;
+
+	trace2_thread_start("ipc-accept");
+
+	thread_block_sigpipe(&old_set);
+
+	for (;;) {
+		int client_fd = accept_thread__wait_for_connection(
+			accept_thread_data);
+
+		pthread_mutex_lock(&server_data->work_available_mutex);
+		if (server_data->shutdown_requested) {
+			pthread_mutex_unlock(&server_data->work_available_mutex);
+			if (client_fd >= 0)
+				close(client_fd);
+			break;
+		}
+
+		if (client_fd < 0) {
+			/* ignore transient accept() errors */
+		}
+		else {
+			fifo_enqueue(server_data, client_fd);
+			pthread_cond_broadcast(&server_data->work_available_cond);
+		}
+		pthread_mutex_unlock(&server_data->work_available_mutex);
+	}
+
+	trace2_thread_exit();
+	return NULL;
+}
+
+/*
+ * We can't predict the connection arrival rate relative to the worker
+ * processing rate, so we allow the "accept-thread" to queue up
+ * a generous number of connections, since we'd rather have the client
+ * not time out unnecessarily if we can avoid it.  (The assumption is
+ * that this will be used for FSMonitor and a few second wait on a
+ * connection is better than having the client timeout and do the full
+ * computation itself.)
+ *
+ * The FIFO queue size is set to a multiple of the worker pool size.
+ * This value was chosen at random.
+ */
+#define FIFO_SCALE (100)
+
+/*
+ * The backlog value for `listen(2)`.  This doesn't need to be huge,
+ * rather just large enough for our "accept-thread" to wake up and
+ * queue incoming connections onto the FIFO without the kernel
+ * dropping any.
+ *
+ * This value was chosen at random.
+ */
+#define LISTEN_BACKLOG (50)
+
+static struct unix_stream_server_socket *create_listener_socket(
+	const char *path,
+	const struct ipc_server_opts *ipc_opts)
+{
+	struct unix_stream_server_socket *server_socket = NULL;
+	struct unix_stream_listen_opts uslg_opts = UNIX_STREAM_LISTEN_OPTS_INIT;
+
+	uslg_opts.listen_backlog_size = LISTEN_BACKLOG;
+	uslg_opts.disallow_chdir = ipc_opts->uds_disallow_chdir;
+
+	server_socket = unix_stream_server__listen_with_lock(path, &uslg_opts);
+	if (!server_socket)
+		return NULL;
+
+	if (set_socket_blocking_flag(server_socket->fd_socket, 1)) {
+		int saved_errno = errno;
+		error_errno(_("could not set listener socket nonblocking '%s'"),
+			    path);
+		unix_stream_server__free(server_socket);
+		errno = saved_errno;
+		return NULL;
+	}
+
+	trace2_data_string("ipc-server", NULL, "listen-with-lock", path);
+	return server_socket;
+}
+
+static struct unix_stream_server_socket *setup_listener_socket(
+	const char *path,
+	const struct ipc_server_opts *ipc_opts)
+{
+	struct unix_stream_server_socket *server_socket;
+
+	trace2_region_enter("ipc-server", "create-listener_socket", NULL);
+	server_socket = create_listener_socket(path, ipc_opts);
+	trace2_region_leave("ipc-server", "create-listener_socket", NULL);
+
+	return server_socket;
+}
+
+/*
+ * Start IPC server in a pool of background threads.
+ */
+int ipc_server_run_async(struct ipc_server_data **returned_server_data,
+			 const char *path, const struct ipc_server_opts *opts,
+			 ipc_server_application_cb *application_cb,
+			 void *application_data)
+{
+	struct unix_stream_server_socket *server_socket = NULL;
+	struct ipc_server_data *server_data;
+	int sv[2];
+	int k;
+	int nr_threads = opts->nr_threads;
+
+	*returned_server_data = NULL;
+
+	/*
+	 * Create a socketpair and set sv[1] to non-blocking.  This
+	 * will be used to send a shutdown message to the accept-thread
+	 * and allows the accept-thread to wait on EITHER a client
+	 * connection or a shutdown request without spinning.
+	 */
+	if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0)
+		return error_errno(_("could not create socketpair for '%s'"),
+				   path);
+
+	if (set_socket_blocking_flag(sv[1], 1)) {
+		int saved_errno = errno;
+		close(sv[0]);
+		close(sv[1]);
+		errno = saved_errno;
+		return error_errno(_("making socketpair nonblocking '%s'"),
+				   path);
+	}
+
+	server_socket = setup_listener_socket(path, opts);
+	if (!server_socket) {
+		int saved_errno = errno;
+		close(sv[0]);
+		close(sv[1]);
+		errno = saved_errno;
+		return -1;
+	}
+
+	server_data = xcalloc(1, sizeof(*server_data));
+	server_data->magic = MAGIC_SERVER_DATA;
+	server_data->application_cb = application_cb;
+	server_data->application_data = application_data;
+	strbuf_init(&server_data->buf_path, 0);
+	strbuf_addstr(&server_data->buf_path, path);
+
+	if (nr_threads < 1)
+		nr_threads = 1;
+
+	pthread_mutex_init(&server_data->work_available_mutex, NULL);
+	pthread_cond_init(&server_data->work_available_cond, NULL);
+
+	server_data->queue_size = nr_threads * FIFO_SCALE;
+	server_data->fifo_fds = xcalloc(server_data->queue_size,
+					sizeof(*server_data->fifo_fds));
+
+	server_data->accept_thread =
+		xcalloc(1, sizeof(*server_data->accept_thread));
+	server_data->accept_thread->magic = MAGIC_ACCEPT_THREAD_DATA;
+	server_data->accept_thread->server_data = server_data;
+	server_data->accept_thread->server_socket = server_socket;
+	server_data->accept_thread->fd_send_shutdown = sv[0];
+	server_data->accept_thread->fd_wait_shutdown = sv[1];
+
+	if (pthread_create(&server_data->accept_thread->pthread_id, NULL,
+			   accept_thread_proc, server_data->accept_thread))
+		die_errno(_("could not start accept_thread '%s'"), path);
+
+	for (k = 0; k < nr_threads; k++) {
+		struct ipc_worker_thread_data *wtd;
+
+		wtd = xcalloc(1, sizeof(*wtd));
+		wtd->magic = MAGIC_WORKER_THREAD_DATA;
+		wtd->server_data = server_data;
+
+		if (pthread_create(&wtd->pthread_id, NULL, worker_thread_proc,
+				   wtd)) {
+			if (k == 0)
+				die(_("could not start worker[0] for '%s'"),
+				    path);
+			/*
+			 * Limp along with the thread pool that we have.
+			 */
+			break;
+		}
+
+		wtd->next_thread = server_data->worker_thread_list;
+		server_data->worker_thread_list = wtd;
+	}
+
+	*returned_server_data = server_data;
+	return 0;
+}
+
+/*
+ * Gently tell the IPC server threads to shut down.
+ * Can be run on any thread.
+ */
+int ipc_server_stop_async(struct ipc_server_data *server_data)
+{
+	/* ASSERT NOT holding mutex */
+
+	int fd;
+
+	if (!server_data)
+		return 0;
+
+	trace2_region_enter("ipc-server", "server-stop-async", NULL);
+
+	pthread_mutex_lock(&server_data->work_available_mutex);
+
+	server_data->shutdown_requested = 1;
+
+	/*
+	 * Write a byte to the shutdown socket pair to wake up the
+	 * accept-thread.
+	 */
+	if (write(server_data->accept_thread->fd_send_shutdown, "Q", 1) < 0)
+		error_errno("could not write to fd_send_shutdown");
+
+	/*
+	 * Drain the queue of existing connections.
+	 */
+	while ((fd = fifo_dequeue(server_data)) != -1)
+		close(fd);
+
+	/*
+	 * Gently tell worker threads to stop processing new connections
+	 * and exit.  (This does not abort in-process conversations.)
+	 */
+	pthread_cond_broadcast(&server_data->work_available_cond);
+
+	pthread_mutex_unlock(&server_data->work_available_mutex);
+
+	trace2_region_leave("ipc-server", "server-stop-async", NULL);
+
+	return 0;
+}
+
+/*
+ * Wait for all IPC server threads to stop.
+ */
+int ipc_server_await(struct ipc_server_data *server_data)
+{
+	pthread_join(server_data->accept_thread->pthread_id, NULL);
+
+	if (!server_data->shutdown_requested)
+		BUG("ipc-server: accept-thread stopped for '%s'",
+		    server_data->buf_path.buf);
+
+	while (server_data->worker_thread_list) {
+		struct ipc_worker_thread_data *wtd =
+			server_data->worker_thread_list;
+
+		pthread_join(wtd->pthread_id, NULL);
+
+		server_data->worker_thread_list = wtd->next_thread;
+		free(wtd);
+	}
+
+	server_data->is_stopped = 1;
+
+	return 0;
+}
+
+void ipc_server_free(struct ipc_server_data *server_data)
+{
+	struct ipc_accept_thread_data *accept_thread_data;
+
+	if (!server_data)
+		return;
+
+	if (!server_data->is_stopped)
+		BUG("cannot free ipc-server while running for '%s'",
+		    server_data->buf_path.buf);
+
+	accept_thread_data = server_data->accept_thread;
+	if (accept_thread_data) {
+		unix_stream_server__free(accept_thread_data->server_socket);
+
+		if (accept_thread_data->fd_send_shutdown != -1)
+			close(accept_thread_data->fd_send_shutdown);
+		if (accept_thread_data->fd_wait_shutdown != -1)
+			close(accept_thread_data->fd_wait_shutdown);
+
+		free(server_data->accept_thread);
+	}
+
+	while (server_data->worker_thread_list) {
+		struct ipc_worker_thread_data *wtd =
+			server_data->worker_thread_list;
+
+		server_data->worker_thread_list = wtd->next_thread;
+		free(wtd);
+	}
+
+	pthread_cond_destroy(&server_data->work_available_cond);
+	pthread_mutex_destroy(&server_data->work_available_mutex);
+
+	strbuf_release(&server_data->buf_path);
+
+	free(server_data->fifo_fds);
+	free(server_data);
+}
diff --git a/contrib/buildsystems/CMakeLists.txt b/contrib/buildsystems/CMakeLists.txt
index 4bd41054ee70..4c27a373414a 100644
--- a/contrib/buildsystems/CMakeLists.txt
+++ b/contrib/buildsystems/CMakeLists.txt
@@ -248,6 +248,8 @@ endif()
 
 if(CMAKE_SYSTEM_NAME STREQUAL "Windows")
 	list(APPEND compat_SOURCES compat/simple-ipc/ipc-shared.c compat/simple-ipc/ipc-win32.c)
+else()
+	list(APPEND compat_SOURCES compat/simple-ipc/ipc-shared.c compat/simple-ipc/ipc-unix-socket.c)
 endif()
 
 set(EXE_EXTENSION ${CMAKE_EXECUTABLE_SUFFIX})
diff --git a/simple-ipc.h b/simple-ipc.h
index a3f96b42cca2..f7e72e966f9a 100644
--- a/simple-ipc.h
+++ b/simple-ipc.h
@@ -5,7 +5,7 @@
  * See Documentation/technical/api-simple-ipc.txt
  */
 
-#if defined(GIT_WINDOWS_NATIVE)
+#if defined(GIT_WINDOWS_NATIVE) || !defined(NO_UNIX_SOCKETS)
 #define SUPPORTS_SIMPLE_IPC
 #endif
 
@@ -62,11 +62,17 @@ struct ipc_client_connect_options {
 	 * the service and need to wait for it to become ready.
 	 */
 	unsigned int wait_if_not_found:1;
+
+	/*
+	 * Disallow chdir() when creating a Unix domain socket.
+	 */
+	unsigned int uds_disallow_chdir:1;
 };
 
 #define IPC_CLIENT_CONNECT_OPTIONS_INIT { \
 	.wait_if_busy = 0, \
 	.wait_if_not_found = 0, \
+	.uds_disallow_chdir = 0, \
 }
 
 /*
@@ -159,6 +165,11 @@ struct ipc_server_data;
 struct ipc_server_opts
 {
 	int nr_threads;
+
+	/*
+	 * Disallow chdir() when creating a Unix domain socket.
+	 */
+	unsigned int uds_disallow_chdir:1;
 };
 
 /*
-- 
gitgitgadget


^ permalink raw reply related	[flat|nested] 178+ messages in thread

* [PATCH v4 12/12] t0052: add simple-ipc tests and t/helper/test-simple-ipc tool
  2021-02-17 21:48     ` [PATCH v4 00/12] Simple IPC Mechanism Jeff Hostetler via GitGitGadget
                         ` (10 preceding siblings ...)
  2021-02-17 21:48       ` [PATCH v4 11/12] simple-ipc: add Unix domain socket implementation Jeff Hostetler via GitGitGadget
@ 2021-02-17 21:48       ` Jeff Hostetler via GitGitGadget
  2021-03-02  9:44         ` Jeff King
  2021-02-25 19:39       ` [PATCH v4 00/12] Simple IPC Mechanism Junio C Hamano
  2021-03-09 15:02       ` [PATCH v5 " Jeff Hostetler via GitGitGadget
  13 siblings, 1 reply; 178+ messages in thread
From: Jeff Hostetler via GitGitGadget @ 2021-02-17 21:48 UTC (permalink / raw)
  To: git
  Cc: Jeff Hostetler, Jeff King, SZEDER Gábor,
	Johannes Schindelin, Jeff Hostetler, Jeff Hostetler

From: Jeff Hostetler <jeffhost@microsoft.com>

Create t0052-simple-ipc.sh with unit tests for the "simple-ipc" mechanism.

Create t/helper/test-simple-ipc test tool to exercise the "simple-ipc"
functions.

When the tool is invoked with "run-daemon", it runs a server to listen
for "simple-ipc" connections on a test socket or named pipe and
responds to a set of commands to exercise/stress the communication
setup.

When the tool is invoked with "start-daemon", it spawns a "run-daemon"
command in the background and waits for the server to become ready
before exiting.  (This helps make unit tests in t0052 more predictable
and avoids the need for arbitrary sleeps in the test script.)

The tool also has a series of client "send" commands to send commands
and data to a server instance.
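
For orientation, the server side of the helper boils down to
registering an application callback and calling `ipc_server_run()`.
A condensed, hypothetical sketch (the `echo_cb()` and
`run_echo_daemon()` names below are made up, and the callback is far
simpler than the `test_app_cb()` added by this patch):

#include "cache.h"
#include "simple-ipc.h"

static int echo_cb(void *data, const char *command,
		   ipc_server_reply_cb *reply_cb,
		   struct ipc_server_reply_data *reply_data)
{
	/* a "quit" command asks the ipc-server layer to shut down */
	if (!strcmp(command, "quit"))
		return SIMPLE_IPC_QUIT;

	/* otherwise, echo the command back to the client */
	return reply_cb(reply_data, command, strlen(command));
}

static int run_echo_daemon(const char *path, int nr_threads)
{
	struct ipc_server_opts opts = { .nr_threads = nr_threads };

	/* blocks until a client sends "quit" (or an error occurs) */
	return ipc_server_run(path, &opts, echo_cb, NULL);
}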

Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
---
 Makefile                   |   1 +
 t/helper/test-simple-ipc.c | 773 +++++++++++++++++++++++++++++++++++++
 t/helper/test-tool.c       |   1 +
 t/helper/test-tool.h       |   1 +
 t/t0052-simple-ipc.sh      | 122 ++++++
 5 files changed, 898 insertions(+)
 create mode 100644 t/helper/test-simple-ipc.c
 create mode 100755 t/t0052-simple-ipc.sh

diff --git a/Makefile b/Makefile
index 08a4c88b92f5..93f2e7ca9e1f 100644
--- a/Makefile
+++ b/Makefile
@@ -740,6 +740,7 @@ TEST_BUILTINS_OBJS += test-serve-v2.o
 TEST_BUILTINS_OBJS += test-sha1.o
 TEST_BUILTINS_OBJS += test-sha256.o
 TEST_BUILTINS_OBJS += test-sigchain.o
+TEST_BUILTINS_OBJS += test-simple-ipc.o
 TEST_BUILTINS_OBJS += test-strcmp-offset.o
 TEST_BUILTINS_OBJS += test-string-list.o
 TEST_BUILTINS_OBJS += test-submodule-config.o
diff --git a/t/helper/test-simple-ipc.c b/t/helper/test-simple-ipc.c
new file mode 100644
index 000000000000..d67eaa9a6ecc
--- /dev/null
+++ b/t/helper/test-simple-ipc.c
@@ -0,0 +1,773 @@
+/*
+ * test-simple-ipc.c: verify that the Inter-Process Communication works.
+ */
+
+#include "test-tool.h"
+#include "cache.h"
+#include "strbuf.h"
+#include "simple-ipc.h"
+#include "parse-options.h"
+#include "thread-utils.h"
+#include "strvec.h"
+
+#ifndef SUPPORTS_SIMPLE_IPC
+int cmd__simple_ipc(int argc, const char **argv)
+{
+	die("simple IPC not available on this platform");
+}
+#else
+
+/*
+ * The test daemon defines an "application callback" that supports a
+ * series of commands (see `test_app_cb()`).
+ *
+ * Unknown commands are caught here and we send an error message back
+ * to the client process.
+ */
+static int app__unhandled_command(const char *command,
+				  ipc_server_reply_cb *reply_cb,
+				  struct ipc_server_reply_data *reply_data)
+{
+	struct strbuf buf = STRBUF_INIT;
+	int ret;
+
+	strbuf_addf(&buf, "unhandled command: %s", command);
+	ret = reply_cb(reply_data, buf.buf, buf.len);
+	strbuf_release(&buf);
+
+	return ret;
+}
+
+/*
+ * Reply with a single very large buffer.  This is to ensure that
+ * long responses are properly handled -- whether the chunking occurs
+ * in the kernel or in the (probably pkt-line) layer.
+ */
+#define BIG_ROWS (10000)
+static int app__big_command(ipc_server_reply_cb *reply_cb,
+			    struct ipc_server_reply_data *reply_data)
+{
+	struct strbuf buf = STRBUF_INIT;
+	int row;
+	int ret;
+
+	for (row = 0; row < BIG_ROWS; row++)
+		strbuf_addf(&buf, "big: %.75d\n", row);
+
+	ret = reply_cb(reply_data, buf.buf, buf.len);
+	strbuf_release(&buf);
+
+	return ret;
+}
+
+/*
+ * Reply with a series of lines.  This is to ensure that we can incrementally
+ * compute the response and chunk it to the client.
+ */
+#define CHUNK_ROWS (10000)
+static int app__chunk_command(ipc_server_reply_cb *reply_cb,
+			      struct ipc_server_reply_data *reply_data)
+{
+	struct strbuf buf = STRBUF_INIT;
+	int row;
+	int ret;
+
+	for (row = 0; row < CHUNK_ROWS; row++) {
+		strbuf_setlen(&buf, 0);
+		strbuf_addf(&buf, "big: %.75d\n", row);
+		ret = reply_cb(reply_data, buf.buf, buf.len);
+	}
+
+	strbuf_release(&buf);
+
+	return ret;
+}
+
+/*
+ * Slowly reply with a series of lines.  This models an expensive-to-compute
+ * chunked response (which might happen if this callback is running
+ * in a thread and is fighting for a lock with other threads).
+ */
+#define SLOW_ROWS     (1000)
+#define SLOW_DELAY_MS (10)
+static int app__slow_command(ipc_server_reply_cb *reply_cb,
+			     struct ipc_server_reply_data *reply_data)
+{
+	struct strbuf buf = STRBUF_INIT;
+	int row;
+	int ret;
+
+	for (row = 0; row < SLOW_ROWS; row++) {
+		strbuf_setlen(&buf, 0);
+		strbuf_addf(&buf, "big: %.75d\n", row);
+		ret = reply_cb(reply_data, buf.buf, buf.len);
+		sleep_millisec(SLOW_DELAY_MS);
+	}
+
+	strbuf_release(&buf);
+
+	return ret;
+}
+
+/*
+ * The client sent a command followed by a (possibly very) large buffer.
+ */
+static int app__sendbytes_command(const char *received,
+				  ipc_server_reply_cb *reply_cb,
+				  struct ipc_server_reply_data *reply_data)
+{
+	struct strbuf buf_resp = STRBUF_INIT;
+	const char *p = "?";
+	int len_ballast = 0;
+	int k;
+	int errs = 0;
+	int ret;
+
+	if (skip_prefix(received, "sendbytes ", &p))
+		len_ballast = strlen(p);
+
+	/*
+	 * Verify that the ballast is n copies of a single letter.
+	 * And that the multi-threaded IO layer didn't cross the streams.
+	 */
+	for (k = 1; k < len_ballast; k++)
+		if (p[k] != p[0])
+			errs++;
+
+	if (errs)
+		strbuf_addf(&buf_resp, "errs:%d\n", errs);
+	else
+		strbuf_addf(&buf_resp, "rcvd:%c%08d\n", p[0], len_ballast);
+
+	ret = reply_cb(reply_data, buf_resp.buf, buf_resp.len);
+
+	strbuf_release(&buf_resp);
+
+	return ret;
+}
+
+/*
+ * An arbitrary fixed address to verify that the application instance
+ * data is handled properly.
+ */
+static int my_app_data = 42;
+
+static ipc_server_application_cb test_app_cb;
+
+/*
+ * This is the "application callback" that sits on top of the
+ * "ipc-server".  It completely defines the set of commands supported
+ * by this application.
+ */
+static int test_app_cb(void *application_data,
+		       const char *command,
+		       ipc_server_reply_cb *reply_cb,
+		       struct ipc_server_reply_data *reply_data)
+{
+	/*
+	 * Verify that we received the application-data that we passed
+	 * when we started the ipc-server.  (We have several layers of
+	 * callbacks calling callbacks and it's easy to get things mixed
+	 * up (especially when some are "void*").)
+	 */
+	if (application_data != (void*)&my_app_data)
+		BUG("application_cb: application_data pointer wrong");
+
+	if (!strcmp(command, "quit")) {
+		/*
+		 * The client sent a "quit" command.  This is an async
+		 * request for the server to shut down.
+		 *
+		 * We DO NOT send the client a response message
+		 * (because we have nothing to say and the other
+		 * server threads have not yet stopped).
+		 *
+		 * Tell the ipc-server layer to start shutting down.
+		 * This includes: stop listening for new connections
+		 * on the socket/pipe and telling all worker threads
+		 * to finish/drain their outgoing responses to other
+		 * clients.
+		 *
+		 * This DOES NOT force an immediate sync shutdown.
+		 */
+		return SIMPLE_IPC_QUIT;
+	}
+
+	if (!strcmp(command, "ping")) {
+		const char *answer = "pong";
+		return reply_cb(reply_data, answer, strlen(answer));
+	}
+
+	if (!strcmp(command, "big"))
+		return app__big_command(reply_cb, reply_data);
+
+	if (!strcmp(command, "chunk"))
+		return app__chunk_command(reply_cb, reply_data);
+
+	if (!strcmp(command, "slow"))
+		return app__slow_command(reply_cb, reply_data);
+
+	if (starts_with(command, "sendbytes "))
+		return app__sendbytes_command(command, reply_cb, reply_data);
+
+	return app__unhandled_command(command, reply_cb, reply_data);
+}
+
+/*
+ * This process will run as a simple-ipc server and listen for IPC commands
+ * from client processes.
+ */
+static int daemon__run_server(const char *path, int argc, const char **argv)
+{
+	struct ipc_server_opts opts = {
+		.nr_threads = 5
+	};
+
+	const char * const daemon_usage[] = {
+		N_("test-helper simple-ipc run-daemon [<options>"),
+		NULL
+	};
+	struct option daemon_options[] = {
+		OPT_INTEGER(0, "threads", &opts.nr_threads,
+			    N_("number of threads in server thread pool")),
+		OPT_END()
+	};
+
+	argc = parse_options(argc, argv, NULL, daemon_options, daemon_usage, 0);
+
+	if (opts.nr_threads < 1)
+		opts.nr_threads = 1;
+
+	/*
+	 * Synchronously run the ipc-server.  We don't need any application
+	 * instance data, so pass an arbitrary pointer (that we'll later
+	 * verify made the round trip).
+	 */
+	return ipc_server_run(path, &opts, test_app_cb, (void*)&my_app_data);
+}
+
+#ifndef GIT_WINDOWS_NATIVE
+/*
+ * This is adapted from `daemonize()`.  Use `fork()` to directly create and
+ * run the daemon in a child process.
+ */
+static int spawn_server(const char *path,
+			const struct ipc_server_opts *opts,
+			pid_t *pid)
+{
+	*pid = fork();
+
+	switch (*pid) {
+	case 0:
+		if (setsid() == -1)
+			error_errno(_("setsid failed"));
+		close(0);
+		close(1);
+		close(2);
+		sanitize_stdfds();
+
+		return ipc_server_run(path, opts, test_app_cb, (void*)&my_app_data);
+
+	case -1:
+		return error_errno(_("could not spawn daemon in the background"));
+
+	default:
+		return 0;
+	}
+}
+#else
+/*
+ * Conceptually like `daemonize()` but different because Windows does not
+ * have `fork(2)`.  Spawn a normal Windows child process but without the
+ * limitations of `start_command()` and `finish_command()`.
+ */
+static int spawn_server(const char *path,
+			const struct ipc_server_opts *opts,
+			pid_t *pid)
+{
+	char test_tool_exe[MAX_PATH];
+	struct strvec args = STRVEC_INIT;
+	int in, out;
+
+	GetModuleFileNameA(NULL, test_tool_exe, MAX_PATH);
+
+	in = open("/dev/null", O_RDONLY);
+	out = open("/dev/null", O_WRONLY);
+
+	strvec_push(&args, test_tool_exe);
+	strvec_push(&args, "simple-ipc");
+	strvec_push(&args, "run-daemon");
+	strvec_pushf(&args, "--threads=%d", opts->nr_threads);
+
+	*pid = mingw_spawnvpe(args.v[0], args.v, NULL, NULL, in, out, out);
+	close(in);
+	close(out);
+
+	strvec_clear(&args);
+
+	if (*pid < 0)
+		return error(_("could not spawn daemon in the background"));
+
+	return 0;
+}
+#endif
+
+/*
+ * This is adapted from `wait_or_whine()`.  Watch the child process and
+ * let it get started and begin listening for requests on the socket
+ * before reporting our success.
+ */
+static int wait_for_server_startup(const char * path, pid_t pid_child,
+				   int max_wait_sec)
+{
+	int status;
+	pid_t pid_seen;
+	enum ipc_active_state s;
+	time_t time_limit, now;
+
+	time(&time_limit);
+	time_limit += max_wait_sec;
+
+	for (;;) {
+		pid_seen = waitpid(pid_child, &status, WNOHANG);
+
+		if (pid_seen == -1)
+			return error_errno(_("waitpid failed"));
+
+		else if (pid_seen == 0) {
+			/*
+			 * The child is still running (this should be
+			 * the normal case).  Try to connect to it on
+			 * the socket and see if it is ready for
+			 * business.
+			 *
+			 * If there is another daemon already running,
+			 * our child will fail to start (possibly
+			 * after a timeout on the lock), but we don't
+			 * care who responds as long as the socket is live.
+			 */
+			s = ipc_get_active_state(path);
+			if (s == IPC_STATE__LISTENING)
+				return 0;
+
+			time(&now);
+			if (now > time_limit)
+				return error(_("daemon not online yet"));
+
+			continue;
+		}
+
+		else if (pid_seen == pid_child) {
+			/*
+			 * The new child daemon process shut down while
+			 * it was starting up, so it is not listening
+			 * on the socket.
+			 *
+			 * Try to ping the socket in the odd chance
+			 * that another daemon started (or was already
+			 * running) while our child was starting.
+			 *
+			 * Again, we don't care who services the socket.
+			 */
+			s = ipc_get_active_state(path);
+			if (s == IPC_STATE__LISTENING)
+				return 0;
+
+			/*
+			 * We don't care about the WEXITSTATUS() nor
+			 * any of the WIF*(status) values because
+			 * `cmd__simple_ipc()` does the `!!result`
+			 * trick on all function return values.
+			 *
+			 * So it is sufficient to just report the
+			 * early shutdown as an error.
+			 */
+			return error(_("daemon failed to start"));
+		}
+
+		else
+			return error(_("waitpid is confused"));
+	}
+}
+
+/*
+ * This process will start a simple-ipc server in a background process and
+ * wait for it to become ready.  This is like `daemonize()` but gives us
+ * more control and better error reporting (and makes it easier to write
+ * unit tests).
+ */
+static int daemon__start_server(const char *path, int argc, const char **argv)
+{
+	pid_t pid_child;
+	int ret;
+	int max_wait_sec = 60;
+	struct ipc_server_opts opts = {
+		.nr_threads = 5
+	};
+
+	const char * const daemon_usage[] = {
+		N_("test-helper simple-ipc start-daemon [<options>"),
+		NULL
+	};
+
+	struct option daemon_options[] = {
+		OPT_INTEGER(0, "max-wait", &max_wait_sec,
+			    N_("seconds to wait for daemon to startup")),
+		OPT_INTEGER(0, "threads", &opts.nr_threads,
+			    N_("number of threads in server thread pool")),
+		OPT_END()
+	};
+
+	argc = parse_options(argc, argv, NULL, daemon_options, daemon_usage, 0);
+
+	if (max_wait_sec < 0)
+		max_wait_sec = 0;
+	if (opts.nr_threads < 1)
+		opts.nr_threads = 1;
+
+	/*
+	 * Run the actual daemon in a background process.
+	 */
+	ret = spawn_server(path, &opts, &pid_child);
+	if (pid_child <= 0)
+		return ret;
+
+	/*
+	 * Let the parent wait for the child process to get started
+	 * and begin listening for requests on the socket.
+	 */
+	ret = wait_for_server_startup(path, pid_child, max_wait_sec);
+
+	return ret;
+}
+
+/*
+ * This process will run a quick probe to see if a simple-ipc server
+ * is active on this path.
+ *
+ * Returns 0 if the server is alive.
+ */
+static int client__probe_server(const char *path)
+{
+	enum ipc_active_state s;
+
+	s = ipc_get_active_state(path);
+	switch (s) {
+	case IPC_STATE__LISTENING:
+		return 0;
+
+	case IPC_STATE__NOT_LISTENING:
+		return error("no server listening at '%s'", path);
+
+	case IPC_STATE__PATH_NOT_FOUND:
+		return error("path not found '%s'", path);
+
+	case IPC_STATE__INVALID_PATH:
+		return error("invalid pipe/socket name '%s'", path);
+
+	case IPC_STATE__OTHER_ERROR:
+	default:
+		return error("other error for '%s'", path);
+	}
+}
+
+/*
+ * Send an IPC command to an already-running server daemon and print the
+ * response.
+ *
+ * argv[2] contains a simple (1 word) command that `test_app_cb()` (in
+ * the daemon process) will understand.
+ */
+static int client__send_ipc(int argc, const char **argv, const char *path)
+{
+	const char *command = argc > 2 ? argv[2] : "(no command)";
+	struct strbuf buf = STRBUF_INIT;
+	struct ipc_client_connect_options options
+		= IPC_CLIENT_CONNECT_OPTIONS_INIT;
+
+	options.wait_if_busy = 1;
+	options.wait_if_not_found = 0;
+
+	if (!ipc_client_send_command(path, &options, command, &buf)) {
+		if (buf.len) {
+			printf("%s\n", buf.buf);
+			fflush(stdout);
+		}
+		strbuf_release(&buf);
+
+		return 0;
+	}
+
+	return error("failed to send '%s' to '%s'", command, path);
+}
+
+/*
+ * Send an IPC command to an already-running server and ask it to
+ * shutdown.  "send quit" is an async request and queues a shutdown
+ * event in the server, so we spin and wait here for it to actually
+ * shutdown to make the unit tests a little easier to write.
+ */
+static int client__stop_server(int argc, const char **argv, const char *path)
+{
+	const char *send_quit[] = { argv[0], "send", "quit", NULL };
+	int max_wait_sec = 60;
+	int ret;
+	time_t time_limit, now;
+	enum ipc_active_state s;
+
+	const char * const stop_usage[] = {
+		N_("test-helper simple-ipc stop-daemon [<options>]"),
+		NULL
+	};
+
+	struct option stop_options[] = {
+		OPT_INTEGER(0, "max-wait", &max_wait_sec,
+			    N_("seconds to wait for daemon to stop")),
+		OPT_END()
+	};
+
+	argc = parse_options(argc, argv, NULL, stop_options, stop_usage, 0);
+
+	if (max_wait_sec < 0)
+		max_wait_sec = 0;
+
+	time(&time_limit);
+	time_limit += max_wait_sec;
+
+	ret = client__send_ipc(3, send_quit, path);
+	if (ret)
+		return ret;
+
+	for (;;) {
+		sleep_millisec(100);
+
+		s = ipc_get_active_state(path);
+
+		if (s != IPC_STATE__LISTENING) {
+			/*
+			 * The socket/pipe is gone and/or has stopped
+			 * responding.  Let's assume that the daemon
+			 * process has exited too.
+			 */
+			return 0;
+		}
+
+		time(&now);
+		if (now > time_limit)
+			return error(_("daemon has not shutdown yet"));
+	}
+}
+
+/*
+ * Send an IPC command followed by ballast to confirm that a large
+ * message can be sent and that the kernel or pkt-line layers will
+ * properly chunk it and that the daemon receives the entire message.
+ */
+static int do_sendbytes(int bytecount, char byte, const char *path,
+			const struct ipc_client_connect_options *options)
+{
+	struct strbuf buf_send = STRBUF_INIT;
+	struct strbuf buf_resp = STRBUF_INIT;
+
+	strbuf_addstr(&buf_send, "sendbytes ");
+	strbuf_addchars(&buf_send, byte, bytecount);
+
+	if (!ipc_client_send_command(path, options, buf_send.buf, &buf_resp)) {
+		strbuf_rtrim(&buf_resp);
+		printf("sent:%c%08d %s\n", byte, bytecount, buf_resp.buf);
+		fflush(stdout);
+		strbuf_release(&buf_send);
+		strbuf_release(&buf_resp);
+
+		return 0;
+	}
+
+	return error("client failed to sendbytes(%d, '%c') to '%s'",
+		     bytecount, byte, path);
+}
+
+/*
+ * Send an IPC command with ballast to an already-running server daemon.
+ */
+static int client__sendbytes(int argc, const char **argv, const char *path)
+{
+	int bytecount = 1024;
+	char *string = "x";
+	const char * const sendbytes_usage[] = {
+		N_("test-helper simple-ipc sendbytes [<options>]"),
+		NULL
+	};
+	struct option sendbytes_options[] = {
+		OPT_INTEGER(0, "bytecount", &bytecount, N_("number of bytes")),
+		OPT_STRING(0, "byte", &string, N_("byte"), N_("ballast")),
+		OPT_END()
+	};
+	struct ipc_client_connect_options options
+		= IPC_CLIENT_CONNECT_OPTIONS_INIT;
+
+	options.wait_if_busy = 1;
+	options.wait_if_not_found = 0;
+	options.uds_disallow_chdir = 0;
+
+	argc = parse_options(argc, argv, NULL, sendbytes_options, sendbytes_usage, 0);
+
+	return do_sendbytes(bytecount, string[0], path, &options);
+}
+
+struct multiple_thread_data {
+	pthread_t pthread_id;
+	struct multiple_thread_data *next;
+	const char *path;
+	int bytecount;
+	int batchsize;
+	int sum_errors;
+	int sum_good;
+	char letter;
+};
+
+static void *multiple_thread_proc(void *_multiple_thread_data)
+{
+	struct multiple_thread_data *d = _multiple_thread_data;
+	int k;
+	struct ipc_client_connect_options options
+		= IPC_CLIENT_CONNECT_OPTIONS_INIT;
+
+	options.wait_if_busy = 1;
+	options.wait_if_not_found = 0;
+	/*
+	 * A multi-threaded client should not be randomly calling chdir().
+	 * The test will pass without this restriction because the test is
+	 * not otherwise accessing the filesystem, but it makes us honest.
+	 */
+	options.uds_disallow_chdir = 1;
+
+	trace2_thread_start("multiple");
+
+	for (k = 0; k < d->batchsize; k++) {
+		if (do_sendbytes(d->bytecount + k, d->letter, d->path, &options))
+			d->sum_errors++;
+		else
+			d->sum_good++;
+	}
+
+	trace2_thread_exit();
+	return NULL;
+}
+
+/*
+ * Start a client-side thread pool.  Each thread sends a series of
+ * IPC requests.  Each request is on a new connection to the server.
+ */
+static int client__multiple(int argc, const char **argv, const char *path)
+{
+	struct multiple_thread_data *list = NULL;
+	int k;
+	int nr_threads = 5;
+	int bytecount = 1;
+	int batchsize = 10;
+	int sum_join_errors = 0;
+	int sum_thread_errors = 0;
+	int sum_good = 0;
+
+	const char * const multiple_usage[] = {
+		N_("test-helper simple-ipc multiple [<options>]"),
+		NULL
+	};
+	struct option multiple_options[] = {
+		OPT_INTEGER(0, "bytecount", &bytecount, N_("number of bytes")),
+		OPT_INTEGER(0, "threads", &nr_threads, N_("number of threads")),
+		OPT_INTEGER(0, "batchsize", &batchsize, N_("number of requests per thread")),
+		OPT_END()
+	};
+
+	argc = parse_options(argc, argv, NULL, multiple_options, multiple_usage, 0);
+
+	if (bytecount < 1)
+		bytecount = 1;
+	if (nr_threads < 1)
+		nr_threads = 1;
+	if (batchsize < 1)
+		batchsize = 1;
+
+	for (k = 0; k < nr_threads; k++) {
+		struct multiple_thread_data *d = xcalloc(1, sizeof(*d));
+		d->next = list;
+		d->path = path;
+		d->bytecount = bytecount + batchsize*(k/26);
+		d->batchsize = batchsize;
+		d->sum_errors = 0;
+		d->sum_good = 0;
+		d->letter = 'A' + (k % 26);
+
+		if (pthread_create(&d->pthread_id, NULL, multiple_thread_proc, d)) {
+			warning("failed to create thread[%d] skipping remainder", k);
+			free(d);
+			break;
+		}
+
+		list = d;
+	}
+
+	while (list) {
+		struct multiple_thread_data *d = list;
+
+		if (pthread_join(d->pthread_id, NULL))
+			sum_join_errors++;
+
+		sum_thread_errors += d->sum_errors;
+		sum_good += d->sum_good;
+
+		list = d->next;
+		free(d);
+	}
+
+	printf("client (good %d) (join %d), (errors %d)\n",
+	       sum_good, sum_join_errors, sum_thread_errors);
+
+	return (sum_join_errors + sum_thread_errors) ? 1 : 0;
+}
+
+int cmd__simple_ipc(int argc, const char **argv)
+{
+	const char *path = "ipc-test";
+
+	if (argc == 2 && !strcmp(argv[1], "SUPPORTS_SIMPLE_IPC"))
+		return 0;
+
+	/*
+	 * Use '!!' on all dispatch functions to map from `error()` style
+	 * (returns -1) to `test_must_fail` style (expects 1).  This
+	 * makes shell error messages less confusing.
+	 */
+
+	if (argc == 2 && !strcmp(argv[1], "is-active"))
+		return !!client__probe_server(path);
+
+	if (argc >= 2 && !strcmp(argv[1], "run-daemon"))
+		return !!daemon__run_server(path, argc, argv);
+
+	if (argc >= 2 && !strcmp(argv[1], "start-daemon"))
+		return !!daemon__start_server(path, argc, argv);
+
+	/*
+	 * Client commands follow.  Ensure a server is running before
+	 * going any further.
+	 */
+	if (client__probe_server(path))
+		return 1;
+
+	if (argc >= 2 && !strcmp(argv[1], "stop-daemon"))
+		return !!client__stop_server(argc, argv, path);
+
+	if ((argc == 2 || argc == 3) && !strcmp(argv[1], "send"))
+		return !!client__send_ipc(argc, argv, path);
+
+	if (argc >= 2 && !strcmp(argv[1], "sendbytes"))
+		return !!client__sendbytes(argc, argv, path);
+
+	if (argc >= 2 && !strcmp(argv[1], "multiple"))
+		return !!client__multiple(argc, argv, path);
+
+	die("Unhandled argv[1]: '%s'", argv[1]);
+}
+#endif
diff --git a/t/helper/test-tool.c b/t/helper/test-tool.c
index 9d6d14d92937..a409655f03b5 100644
--- a/t/helper/test-tool.c
+++ b/t/helper/test-tool.c
@@ -64,6 +64,7 @@ static struct test_cmd cmds[] = {
 	{ "sha1", cmd__sha1 },
 	{ "sha256", cmd__sha256 },
 	{ "sigchain", cmd__sigchain },
+	{ "simple-ipc", cmd__simple_ipc },
 	{ "strcmp-offset", cmd__strcmp_offset },
 	{ "string-list", cmd__string_list },
 	{ "submodule-config", cmd__submodule_config },
diff --git a/t/helper/test-tool.h b/t/helper/test-tool.h
index a6470ff62c42..564eb3c8e911 100644
--- a/t/helper/test-tool.h
+++ b/t/helper/test-tool.h
@@ -54,6 +54,7 @@ int cmd__sha1(int argc, const char **argv);
 int cmd__oid_array(int argc, const char **argv);
 int cmd__sha256(int argc, const char **argv);
 int cmd__sigchain(int argc, const char **argv);
+int cmd__simple_ipc(int argc, const char **argv);
 int cmd__strcmp_offset(int argc, const char **argv);
 int cmd__string_list(int argc, const char **argv);
 int cmd__submodule_config(int argc, const char **argv);
diff --git a/t/t0052-simple-ipc.sh b/t/t0052-simple-ipc.sh
new file mode 100755
index 000000000000..18dcc8130728
--- /dev/null
+++ b/t/t0052-simple-ipc.sh
@@ -0,0 +1,122 @@
+#!/bin/sh
+
+test_description='simple command server'
+
+. ./test-lib.sh
+
+test-tool simple-ipc SUPPORTS_SIMPLE_IPC || {
+	skip_all='simple IPC not supported on this platform'
+	test_done
+}
+
+stop_simple_IPC_server () {
+	test-tool simple-ipc stop-daemon
+}
+
+test_expect_success 'start simple command server' '
+	test_atexit stop_simple_IPC_server &&
+	test-tool simple-ipc start-daemon --threads=8 &&
+	test-tool simple-ipc is-active
+'
+
+test_expect_success 'simple command server' '
+	test-tool simple-ipc send ping >actual &&
+	echo pong >expect &&
+	test_cmp expect actual
+'
+
+test_expect_success 'servers cannot share the same path' '
+	test_must_fail test-tool simple-ipc run-daemon &&
+	test-tool simple-ipc is-active
+'
+
+test_expect_success 'big response' '
+	test-tool simple-ipc send big >actual &&
+	test_line_count -ge 10000 actual &&
+	grep -q "big: [0]*9999\$" actual
+'
+
+test_expect_success 'chunk response' '
+	test-tool simple-ipc send chunk >actual &&
+	test_line_count -ge 10000 actual &&
+	grep -q "big: [0]*9999\$" actual
+'
+
+test_expect_success 'slow response' '
+	test-tool simple-ipc send slow >actual &&
+	test_line_count -ge 100 actual &&
+	grep -q "big: [0]*99\$" actual
+'
+
+# Send an IPC with n=100,000 bytes of ballast.  This should be large enough
+# to force both the kernel and the pkt-line layer to chunk the message to the
+# daemon and for the daemon to receive it in chunks.
+#
+test_expect_success 'sendbytes' '
+	test-tool simple-ipc sendbytes --bytecount=100000 --byte=A >actual &&
+	grep "sent:A00100000 rcvd:A00100000" actual
+'
+
+# Start a series of <threads> client threads that each make <batchsize>
+# IPC requests to the server.  Each of the (<threads> * <batchsize>)
+# requests will open a new connection to the server and randomly bind to
+# a server thread.  Each client thread exits after completing its batch,
+# so the number of live client threads stays bounded by <threads>.
+# Each request will send a message containing at least <bytecount> bytes
+# of ballast.  (Responses are small.)
+#
+# The purpose here is to test threading in the server and responding to
+# many concurrent client requests (regardless of whether they come from
+# 1 client process or many).  And to test that the server side of the
+# named pipe/socket is stable.  (On Windows this means that the server
+# pipe is properly recycled.)
+#
+# On Windows it also lets us adjust the connection timeout in the
+# `ipc_client_send_command()`.
+#
+# Note it is easy to drive the system into failure by requesting an
+# insane number of threads on client or server and/or increasing the
+# per-thread batchsize or the per-request bytecount (ballast).
+# On Windows these failures look like "pipe is busy" errors.
+# So I've chosen fairly conservative values for now.
+#
+# We expect output of the form "sent:<letter><length> ..."
+# With terms (7, 19, 13) we expect:
+#   <letter> in [A-G]
+#   <length> in [19+0 .. 19+(13-1)]
+# and (7 * 13) successful responses.
+#
+test_expect_success 'stress test threads' '
+	test-tool simple-ipc multiple \
+		--threads=7 \
+		--bytecount=19 \
+		--batchsize=13 \
+		>actual &&
+	test_line_count = 92 actual &&
+	grep "good 91" actual &&
+	grep "sent:A" <actual >actual_a &&
+	cat >expect_a <<-EOF &&
+		sent:A00000019 rcvd:A00000019
+		sent:A00000020 rcvd:A00000020
+		sent:A00000021 rcvd:A00000021
+		sent:A00000022 rcvd:A00000022
+		sent:A00000023 rcvd:A00000023
+		sent:A00000024 rcvd:A00000024
+		sent:A00000025 rcvd:A00000025
+		sent:A00000026 rcvd:A00000026
+		sent:A00000027 rcvd:A00000027
+		sent:A00000028 rcvd:A00000028
+		sent:A00000029 rcvd:A00000029
+		sent:A00000030 rcvd:A00000030
+		sent:A00000031 rcvd:A00000031
+	EOF
+	test_cmp expect_a actual_a
+'
+
+test_expect_success 'stop-daemon works' '
+	test-tool simple-ipc stop-daemon &&
+	test_must_fail test-tool simple-ipc is-active &&
+	test_must_fail test-tool simple-ipc send ping
+'
+
+test_done
-- 
gitgitgadget

^ permalink raw reply related	[flat|nested] 178+ messages in thread

* Re: [PATCH v4 00/12] Simple IPC Mechanism
  2021-02-17 21:48     ` [PATCH v4 00/12] Simple IPC Mechanism Jeff Hostetler via GitGitGadget
                         ` (11 preceding siblings ...)
  2021-02-17 21:48       ` [PATCH v4 12/12] t0052: add simple-ipc tests and t/helper/test-simple-ipc tool Jeff Hostetler via GitGitGadget
@ 2021-02-25 19:39       ` Junio C Hamano
  2021-02-26  7:59         ` Jeff King
  2021-03-09 15:02       ` [PATCH v5 " Jeff Hostetler via GitGitGadget
  13 siblings, 1 reply; 178+ messages in thread
From: Junio C Hamano @ 2021-02-25 19:39 UTC (permalink / raw)
  To: Jeff Hostetler via GitGitGadget
  Cc: git, Jeff Hostetler, Jeff King, SZEDER Gábor,
	Johannes Schindelin, Jeff Hostetler

"Jeff Hostetler via GitGitGadget" <gitgitgadget@gmail.com> writes:

> Here is V4 of my "Simple IPC" series. It addresses Gábor's comment WRT
> shutting down the server to make unit tests more predictable on CI servers.
> (https://lore.kernel.org/git/20210213093052.GJ1015009@szeder.dev)
>
> Jeff
>
> cc: Ævar Arnfjörð Bjarmason avarab@gmail.com cc: Jeff Hostetler
> git@jeffhostetler.com cc: Jeff King peff@peff.net cc: Chris Torek
> chris.torek@gmail.com

It seems that the discussion around the topic was mostly done
during the v2 review, and it has quieted down since then.

Let's merge it down to 'next'?


^ permalink raw reply	[flat|nested] 178+ messages in thread

* Re: [PATCH v4 01/12] pkt-line: eliminate the need for static buffer in packet_write_gently()
  2021-02-17 21:48       ` [PATCH v4 01/12] pkt-line: eliminate the need for static buffer in packet_write_gently() Jeff Hostetler via GitGitGadget
@ 2021-02-26  7:21         ` Jeff King
  2021-02-26 19:52           ` Jeff Hostetler
  0 siblings, 1 reply; 178+ messages in thread
From: Jeff King @ 2021-02-26  7:21 UTC (permalink / raw)
  To: Jeff Hostetler via GitGitGadget
  Cc: git, Jeff Hostetler, SZEDER Gábor, Johannes Schindelin,
	Jeff Hostetler

On Wed, Feb 17, 2021 at 09:48:37PM +0000, Jeff Hostetler via GitGitGadget wrote:

> Change the API of `write_packetized_from_fd()` to accept a scratch space
> argument from its caller to avoid similar issues here.

OK, but...

> diff --git a/convert.c b/convert.c
> index ee360c2f07ce..41012c2d301c 100644
> --- a/convert.c
> +++ b/convert.c
> @@ -883,9 +883,10 @@ static int apply_multi_file_filter(const char *path, const char *src, size_t len
>  	if (err)
>  		goto done;
>  
> -	if (fd >= 0)
> -		err = write_packetized_from_fd(fd, process->in);
> -	else
> +	if (fd >= 0) {
> +		struct packet_scratch_space scratch;
> +		err = write_packetized_from_fd(fd, process->in, &scratch);
> +	} else
>  		err = write_packetized_from_buf(src, len, process->in);

Isn't this just putting the buffer onto the stack anyway? Your
scratch_space struct is really just a big array. You'd want to make
it static here, but then we haven't really solved anything. :)

I think instead that:

> -int write_packetized_from_fd(int fd_in, int fd_out)
> +int write_packetized_from_fd(int fd_in, int fd_out,
> +			     struct packet_scratch_space *scratch)
>  {
> -	static char buf[LARGE_PACKET_DATA_MAX];
>  	int err = 0;
>  	ssize_t bytes_to_write;
>  
>  	while (!err) {
> -		bytes_to_write = xread(fd_in, buf, sizeof(buf));
> +		bytes_to_write = xread(fd_in, scratch->buffer,
> +				       sizeof(scratch->buffer));
>  		if (bytes_to_write < 0)
>  			return COPY_READ_ERROR;
>  		if (bytes_to_write == 0)
>  			break;
> -		err = packet_write_gently(fd_out, buf, bytes_to_write);
> +		err = packet_write_gently(fd_out, scratch->buffer,
> +					  bytes_to_write);
>  	}

...just heap-allocating the buffer in this function would be fine. It's
one malloc for the whole sequence of pktlines, which is unlikely to be a
problem.
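
For illustration, a minimal sketch of that heap-allocating variant (not
the actual patch; the trailing flush and the libification changes from
the rest of the series are omitted):

int write_packetized_from_fd(int fd_in, int fd_out)
{
	/* one allocation for the whole sequence of pktlines */
	char *buf = xmalloc(LARGE_PACKET_DATA_MAX);
	int err = 0;
	ssize_t bytes_to_write;

	while (!err) {
		bytes_to_write = xread(fd_in, buf, LARGE_PACKET_DATA_MAX);
		if (bytes_to_write < 0) {
			free(buf);
			return COPY_READ_ERROR;
		}
		if (bytes_to_write == 0)
			break;
		err = packet_write_gently(fd_out, buf, bytes_to_write);
	}
	free(buf);
	return err;
}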

-Peff

^ permalink raw reply	[flat|nested] 178+ messages in thread

* Re: [PATCH v4 07/12] unix-socket: elimiate static unix_stream_socket() helper function
  2021-02-17 21:48       ` [PATCH v4 07/12] unix-socket: elimiate static unix_stream_socket() helper function Jeff Hostetler via GitGitGadget
@ 2021-02-26  7:25         ` Jeff King
  2021-03-03 20:41         ` Junio C Hamano
  1 sibling, 0 replies; 178+ messages in thread
From: Jeff King @ 2021-02-26  7:25 UTC (permalink / raw)
  To: Jeff Hostetler via GitGitGadget
  Cc: git, Jeff Hostetler, SZEDER Gábor, Johannes Schindelin,
	Jeff Hostetler

On Wed, Feb 17, 2021 at 09:48:43PM +0000, Jeff Hostetler via GitGitGadget wrote:

> From: Jeff Hostetler <jeffhost@microsoft.com>
> 
> The static helper function `unix_stream_socket()` calls `die()`.  This
> is not appropriate for all callers.  Eliminate the wrapper function
> and make the callers propagate the error.

Thanks for breaking it up this way. It's (IMHO) much easier to see the
motivation and impact of the changes now.

There's a small typo in the subject:

> Subject: unix-socket: elimiate static unix_stream_socket() helper function

-Peff

^ permalink raw reply	[flat|nested] 178+ messages in thread

* Re: [PATCH v4 08/12] unix-socket: add backlog size option to unix_stream_listen()
  2021-02-17 21:48       ` [PATCH v4 08/12] unix-socket: add backlog size option to unix_stream_listen() Jeff Hostetler via GitGitGadget
@ 2021-02-26  7:30         ` Jeff King
  2021-03-03 20:54           ` Junio C Hamano
  0 siblings, 1 reply; 178+ messages in thread
From: Jeff King @ 2021-02-26  7:30 UTC (permalink / raw)
  To: Jeff Hostetler via GitGitGadget
  Cc: git, Jeff Hostetler, SZEDER Gábor, Johannes Schindelin,
	Jeff Hostetler

On Wed, Feb 17, 2021 at 09:48:44PM +0000, Jeff Hostetler via GitGitGadget wrote:

> @@ -106,7 +108,10 @@ int unix_stream_listen(const char *path)
>  	if (bind(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0)
>  		goto fail;
>  
> -	if (listen(fd, 5) < 0)
> +	backlog = opts->listen_backlog_size;
> +	if (backlog <= 0)
> +		backlog = DEFAULT_UNIX_STREAM_LISTEN_BACKLOG;
> +	if (listen(fd, backlog) < 0)
>  		goto fail;

OK, so we still have the fallback-on-zero here, which is good...

> +struct unix_stream_listen_opts {
> +	int listen_backlog_size;
> +};
> +
> +#define DEFAULT_UNIX_STREAM_LISTEN_BACKLOG (5)
> +
> +#define UNIX_STREAM_LISTEN_OPTS_INIT \
> +{ \
> +	.listen_backlog_size = DEFAULT_UNIX_STREAM_LISTEN_BACKLOG, \
> +}

...but I thought the plan was to drop this initialization in favor of a
zero-initialization. What you have certainly wouldn't do the wrong
thing, but it just seems weirdly redundant. Unless some caller really
wants to know what the default will be?
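
To illustrate, a zero-initialized variant might look something like the
sketch below (not a concrete proposal; the unix_stream_listen() signature
taking an opts argument is assumed from this patch):

struct unix_stream_listen_opts {
	int listen_backlog_size; /* 0 means "use Git's built-in default" */
};

#define UNIX_STREAM_LISTEN_OPTS_INIT { 0 }

/* caller */
struct unix_stream_listen_opts opts = UNIX_STREAM_LISTEN_OPTS_INIT;
int fd = unix_stream_listen(path, &opts); /* falls back to the default backlog */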

-Peff

^ permalink raw reply	[flat|nested] 178+ messages in thread

* Re: [PATCH v4 10/12] unix-socket: create `unix_stream_server__listen_with_lock()`
  2021-02-17 21:48       ` [PATCH v4 10/12] unix-socket: create `unix_stream_server__listen_with_lock()` Jeff Hostetler via GitGitGadget
@ 2021-02-26  7:56         ` Jeff King
  2021-03-02 23:50           ` Jeff Hostetler
  0 siblings, 1 reply; 178+ messages in thread
From: Jeff King @ 2021-02-26  7:56 UTC (permalink / raw)
  To: Jeff Hostetler via GitGitGadget
  Cc: git, Jeff Hostetler, SZEDER Gábor, Johannes Schindelin,
	Jeff Hostetler

On Wed, Feb 17, 2021 at 09:48:46PM +0000, Jeff Hostetler via GitGitGadget wrote:

> From: Jeff Hostetler <jeffhost@microsoft.com>
> 
> Create a version of `unix_stream_listen()` that uses a ".lock" lockfile
> to create the unix domain socket in a race-free manner.

The "unix_stream_server__listen_with_lock" name is quite a mouthful.  My
first question was: don't we have an "options" struct that we can use to
tell it we're interested in using the locking strategy?

But I do find it a little weird for the feature to be at this layer at
all. I'd have thought it would make more sense in the simple-ipc layer
that implements the unix-socket backend, where app-level logic like
"it's OK to just connect to this socket and hang up in order to ping it"
might be more appropriate. We might even want to have a more robust
check (e.g., an actual "ping" that expects the server to say "yes, I'm
here").

(But also see below where I am less certain about this...)

> Unix domain sockets have a fundamental problem on Unix systems because
> they persist in the filesystem until they are deleted.  This is
> independent of whether a server is actually listening for connections.
> Well-behaved servers are expected to delete the socket when they
> shutdown.  A new server cannot easily tell if a found socket is
> attached to an active server or is leftover cruft from a dead server.
> The traditional solution used by `unix_stream_listen()` is to force
> delete the socket pathname and then create a new socket.  This solves
> the latter (cruft) problem, but in the case of the former, it orphans
> the existing server (by stealing the pathname associated with the
> socket it is listening on).

Nicely explained.

> We cannot directly use a .lock lockfile to create the socket because
> the socket is created by `bind(2)` rather than the `open(2)` mechanism
> used by `tempfile.c`.
> 
> As an alternative, we hold a plain lockfile ("<path>.lock") as a
> mutual exclusion device.  Under the lock, we test if an existing
> socket ("<path>") has an active server.  If not, create a new
> socket and begin listening.  Then we rollback the lockfile in all
> cases.

Make sense.

> +static int is_another_server_alive(const char *path,
> +				   const struct unix_stream_listen_opts *opts)
> +{
> +	struct stat st;
> +	int fd;
> +
> +	if (!lstat(path, &st) && S_ISSOCK(st.st_mode)) {
> +		/*
> +		 * A socket-inode exists on disk at `path`, but we
> +		 * don't know whether it belongs to an active server
> +		 * or whether the last server died without cleaning
> +		 * up.
> +		 *
> +		 * Poke it with a trivial connection to try to find
> +		 * out.
> +		 */
> +		fd = unix_stream_connect(path, opts->disallow_chdir);
> +		if (fd >= 0) {
> +			close(fd);
> +			return 1;
> +		}
> +	}

The lstat() seems redundant here. unix_stream_connect() will tell us
whether there is something to connect to or not. (It's also racy with
respect to the actual connect, but since you're doing this under lock, I
don't think that matters).
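
In other words, something like this sketch (reusing the names from the
patch) would be enough:

static int is_another_server_alive(const char *path,
				   const struct unix_stream_listen_opts *opts)
{
	/* we hold "<path>.lock", so just try to connect */
	int fd = unix_stream_connect(path, opts->disallow_chdir);

	if (fd >= 0) {
		close(fd);
		return 1;
	}
	return 0;
}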

> +struct unix_stream_server_socket *unix_stream_server__listen_with_lock(
> +	const char *path,
> +	const struct unix_stream_listen_opts *opts)
> +{
> +	struct lock_file lock = LOCK_INIT;
> +	int fd_socket;
> +	struct unix_stream_server_socket *server_socket;
> +
> +	/*
> +	 * Create a lock at "<path>.lock" if we can.
> +	 */
> +	if (hold_lock_file_for_update_timeout(&lock, path, 0,
> +					      opts->timeout_ms) < 0) {
> +		error_errno(_("could not lock listener socket '%s'"), path);
> +		return NULL;
> +	}

Would you want to ping to see if it's alive before creating the lock?
That would be the fast-path if we assume that a server will usually be
there once started. Or is that supposed to happen in the caller (in
which case I'd again wonder if this really should be happening in the
simple-ipc code).

> +	/*
> +	 * If another server is listening on "<path>" give up.  We do not
> +	 * want to create a socket and steal future connections from them.
> +	 */
> +	if (is_another_server_alive(path, opts)) {
> +		errno = EADDRINUSE;
> +		error_errno(_("listener socket already in use '%s'"), path);
> +		rollback_lock_file(&lock);
> +		return NULL;
> +	}

Wouldn't this be a "success" case for a caller? They did not open the
server themselves, but they are presumably happy that there is one there
now to talk to. So do we actually want to print an error to stderr?
Likewise, how do they tell the difference between this NULL and the NULL
we returned above because we couldn't take the lock? Or the NULL we
return below because there is some error creating a listening socket?

I'd think in those three cases you'd want:

  - if lock contention, pause a moment and wait for the winner to spin
    up and serve requests

  - if another server is live while we hold the lock, then we raced them
    and they won. Release the lock and start using them.

  - if we really tried to call unix_stream_listen() and that failed,
    give up now. There is some system error that is not likely to be
    fixed by trying anything more (e.g., ENAMETOOLONG).
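
Purely for illustration (the names below are made up, not something in
the patch), a return value along these lines would let callers tell the
cases apart:

enum unix_ss_listen_result {
	UNIX_SS_LISTEN_OK,           /* we created and now own the socket */
	UNIX_SS_LISTEN_LOCK_TIMEOUT, /* lost the lock race; wait for the winner */
	UNIX_SS_LISTEN_IN_USE,       /* a live server answered; just connect */
	UNIX_SS_LISTEN_ERROR         /* hard error (e.g. ENAMETOOLONG); give up */
};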

> +	server_socket = xcalloc(1, sizeof(*server_socket));
> +	server_socket->path_socket = strdup(path);
> +	server_socket->fd_socket = fd_socket;

What do we need this server_socket for? The caller already knows the
path; they fed it to us. We do need to return the descriptor, but we
could do that directly.

> +	lstat(path, &server_socket->st_socket);

This lstat I guess is part of your "periodically check to see if we're
still the one holding the socket" strategy. We _shouldn't_ need that
anymore, with the dotlocking, but I'm OK with it as a
belt-and-suspenders check. But why are we filling in the lstat here?
This seems like something that the unix-socket code doesn't really need
to know about (though you do at least provide the complementary
"was_stolen" function here, so that part makes sense).

Again, I guess I'd find it less weird if it were happening at a layer
above. Maybe I'm really just complaining that this is in unix-socket.c.
I guess it is a separate unix_stream_server data type. Arguably that
should go in a separate file, but I guess the whole conditional
compilation of unix-socket.c makes that awkward. So maybe this is the
least-bad thing.

> +	/*
> +	 * Always rollback (just delete) "<path>.lock" because we already created
> +	 * "<path>" as a socket and do not want to commit_lock to do the atomic
> +	 * rename trick.
> +	 */
> +	rollback_lock_file(&lock);
> +
> +	return server_socket;
> +}

OK, this part makes sense to me.

> +void unix_stream_server__free(
> +	struct unix_stream_server_socket *server_socket)
> +{
> +	if (!server_socket)
> +		return;
> +
> +	if (server_socket->fd_socket >= 0) {
> +		if (!unix_stream_server__was_stolen(server_socket))
> +			unlink(server_socket->path_socket);
> +		close(server_socket->fd_socket);
> +	}
> +
> +	free(server_socket->path_socket);
> +	free(server_socket);
> +}

OK, this makes sense. We only remove it if we're still the ones holding
it. That's not done under lock, though, so it's possibly racy (somebody
steals from us while _they_ hold the lock; we check and see "not stolen"
right before they steal it, and then we unlink their stolen copy).

> +int unix_stream_server__was_stolen(
> +	struct unix_stream_server_socket *server_socket)
> +{
> +	struct stat st_now;
> +
> +	if (!server_socket)
> +		return 0;
> +
> +	if (lstat(server_socket->path_socket, &st_now) == -1)
> +		return 1;
> +
> +	if (st_now.st_ino != server_socket->st_socket.st_ino)
> +		return 1;
> +
> +	/* We might also consider the ctime on some platforms. */
> +
> +	return 0;
> +}

You probably should confirm that st.dev matches, too, since that is the
namespace for st.ino. Maybe also double check that it's still a socket
with S_ISSOCK(st_mode)?
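
I.e., roughly (a sketch, not the actual patch):

	if (lstat(server_socket->path_socket, &st_now) == -1)
		return 1;
	if (st_now.st_dev != server_socket->st_socket.st_dev ||
	    st_now.st_ino != server_socket->st_socket.st_ino)
		return 1;
	if (!S_ISSOCK(st_now.st_mode))
		return 1;
	return 0;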

-Peff

^ permalink raw reply	[flat|nested] 178+ messages in thread

* Re: [PATCH v4 00/12] Simple IPC Mechanism
  2021-02-25 19:39       ` [PATCH v4 00/12] Simple IPC Mechanism Junio C Hamano
@ 2021-02-26  7:59         ` Jeff King
  2021-02-26 20:18           ` Jeff Hostetler
  0 siblings, 1 reply; 178+ messages in thread
From: Jeff King @ 2021-02-26  7:59 UTC (permalink / raw)
  To: Junio C Hamano
  Cc: Jeff Hostetler via GitGitGadget, git, Jeff Hostetler,
	SZEDER Gábor, Johannes Schindelin, Jeff Hostetler

On Thu, Feb 25, 2021 at 11:39:39AM -0800, Junio C Hamano wrote:

> "Jeff Hostetler via GitGitGadget" <gitgitgadget@gmail.com> writes:
> 
> > Here is V4 of my "Simple IPC" series. It addresses Gábor's comment WRT
> > shutting down the server to make unit tests more predictable on CI servers.
> > (https://lore.kernel.org/git/20210213093052.GJ1015009@szeder.dev)
> >
> > Jeff
> >
> > cc: Ævar Arnfjörð Bjarmason avarab@gmail.com cc: Jeff Hostetler
> > git@jeffhostetler.com cc: Jeff King peff@peff.net cc: Chris Torek
> > chris.torek@gmail.com
> 
> It seems that the discussions around the topic were mostly done
> during the v2 review, and have quieted down since then.
> 
> Let's merge it down to 'next'?

Sorry, I hadn't gotten around to looking at the latest version. I left
another round of comments. Some of them are arguably bikeshedding, but
there's at least one I think we'd want to address (the big stack buffer
in patch 1).

I also haven't carefully looked at the simple-ipc design at all; my
focus has just been on the details of socket and pktline code being
touched. Since there are no simple-ipc users yet, and since it's
internal and would be easy to change later, I'm mostly content for Jeff
to proceed as he sees fit and iterate on it as necessary.

-Peff

^ permalink raw reply	[flat|nested] 178+ messages in thread

* Re: [PATCH v4 01/12] pkt-line: eliminate the need for static buffer in packet_write_gently()
  2021-02-26  7:21         ` Jeff King
@ 2021-02-26 19:52           ` Jeff Hostetler
  2021-02-26 20:43             ` Jeff King
  2021-03-03 19:38             ` Junio C Hamano
  0 siblings, 2 replies; 178+ messages in thread
From: Jeff Hostetler @ 2021-02-26 19:52 UTC (permalink / raw)
  To: Jeff King, Jeff Hostetler via GitGitGadget
  Cc: git, SZEDER Gábor, Johannes Schindelin, Jeff Hostetler



On 2/26/21 2:21 AM, Jeff King wrote:
> On Wed, Feb 17, 2021 at 09:48:37PM +0000, Jeff Hostetler via GitGitGadget wrote:
> 
>> Change the API of `write_packetized_from_fd()` to accept a scratch space
>> argument from its caller to avoid similar issues here.
> 
> OK, but...
> 
>> diff --git a/convert.c b/convert.c
>> index ee360c2f07ce..41012c2d301c 100644
>> --- a/convert.c
>> +++ b/convert.c
>> @@ -883,9 +883,10 @@ static int apply_multi_file_filter(const char *path, const char *src, size_t len
>>   	if (err)
>>   		goto done;
>>   
>> -	if (fd >= 0)
>> -		err = write_packetized_from_fd(fd, process->in);
>> -	else
>> +	if (fd >= 0) {
>> +		struct packet_scratch_space scratch;
>> +		err = write_packetized_from_fd(fd, process->in, &scratch);
>> +	} else
>>   		err = write_packetized_from_buf(src, len, process->in);
> 
> Isn't this just putting the buffer onto the stack anyway? Your
> scratch_space struct is really just a big array. You'd want to make
> it static here, but then we haven't really solved anything. :)

Yeah, I was letting the caller decide how to provide the buffer.
They could put it on the stack or allocate it once across a whole
set of files or use a static buffer -- the caller has context for
what works best that we don't have here.  For example, the caller
may know that is not in threaded code at all, but we cannot assume
that here.

> 
> I think instead that:
> 
>> -int write_packetized_from_fd(int fd_in, int fd_out)
>> +int write_packetized_from_fd(int fd_in, int fd_out,
>> +			     struct packet_scratch_space *scratch)
>>   {
>> -	static char buf[LARGE_PACKET_DATA_MAX];
>>   	int err = 0;
>>   	ssize_t bytes_to_write;
>>   
>>   	while (!err) {
>> -		bytes_to_write = xread(fd_in, buf, sizeof(buf));
>> +		bytes_to_write = xread(fd_in, scratch->buffer,
>> +				       sizeof(scratch->buffer));
>>   		if (bytes_to_write < 0)
>>   			return COPY_READ_ERROR;
>>   		if (bytes_to_write == 0)
>>   			break;
>> -		err = packet_write_gently(fd_out, buf, bytes_to_write);
>> +		err = packet_write_gently(fd_out, scratch->buffer,
>> +					  bytes_to_write);
>>   	}
> 
> ...just heap-allocating the buffer in this function would be fine. It's
> one malloc for the whole sequence of pktlines, which is unlikely to be a
> problem.

Right, I think it would be fine to malloc it here, but I didn't
want to assume that everyone would think that.

I'll change it.

Thanks
Jeff


^ permalink raw reply	[flat|nested] 178+ messages in thread

* Re: [PATCH v4 00/12] Simple IPC Mechanism
  2021-02-26  7:59         ` Jeff King
@ 2021-02-26 20:18           ` Jeff Hostetler
  2021-02-26 20:50             ` Jeff King
  0 siblings, 1 reply; 178+ messages in thread
From: Jeff Hostetler @ 2021-02-26 20:18 UTC (permalink / raw)
  To: Jeff King, Junio C Hamano
  Cc: Jeff Hostetler via GitGitGadget, git, SZEDER Gábor,
	Johannes Schindelin, Jeff Hostetler



On 2/26/21 2:59 AM, Jeff King wrote:
> On Thu, Feb 25, 2021 at 11:39:39AM -0800, Junio C Hamano wrote:
> 
>> "Jeff Hostetler via GitGitGadget" <gitgitgadget@gmail.com> writes:
>>
>>> Here is V4 of my "Simple IPC" series. It addresses Gábor's comment WRT
>>> shutting down the server to make unit tests more predictable on CI servers.
>>> (https://lore.kernel.org/git/20210213093052.GJ1015009@szeder.dev)
>>>
>>> Jeff
>>>
>>> cc: Ævar Arnfjörð Bjarmason avarab@gmail.com cc: Jeff Hostetler
>>> git@jeffhostetler.com cc: Jeff King peff@peff.net cc: Chris Torek
>>> chris.torek@gmail.com
>>
>> It seems that the discussions around the topic were mostly done
>> during the v2 review, and have quieted down since then.
>>
>> Let's merge it down to 'next'?
> 
> Sorry, I hadn't gotten around to looking at the latest version. I left
> another round of comments. Some of them are arguably bikeshedding, but
> there's at least one I think we'd want to address (the big stack buffer
> in patch 1).
> 
> I also haven't carefully looked at the simple-ipc design at all; my
> focus has just been on the details of socket and pktline code being
> touched. Since there are no simple-ipc users yet, and since it's
> internal and would be easy to change later, I'm mostly content for Jeff
> to proceed as he sees fit and iterate on it as necessary.
> 
> -Peff
> 

We can wait until next week on moving this to 'next' if you want.
I'll attend to the buffer alloc in patch 1.  I'm still reading the
other comments and will see where that takes me.

I'm about ready to push an RFC for my fsmonitor--daemon series that
sits on top of this simple-ipc series, so you can see an actual use
case if that would help understand (my madness).

Thanks
Jeff


^ permalink raw reply	[flat|nested] 178+ messages in thread

* Re: [PATCH v4 01/12] pkt-line: eliminate the need for static buffer in packet_write_gently()
  2021-02-26 19:52           ` Jeff Hostetler
@ 2021-02-26 20:43             ` Jeff King
  2021-03-03 19:38             ` Junio C Hamano
  1 sibling, 0 replies; 178+ messages in thread
From: Jeff King @ 2021-02-26 20:43 UTC (permalink / raw)
  To: Jeff Hostetler
  Cc: Jeff Hostetler via GitGitGadget, git, SZEDER Gábor,
	Johannes Schindelin, Jeff Hostetler

On Fri, Feb 26, 2021 at 02:52:22PM -0500, Jeff Hostetler wrote:

> > > -	if (fd >= 0)
> > > -		err = write_packetized_from_fd(fd, process->in);
> > > -	else
> > > +	if (fd >= 0) {
> > > +		struct packet_scratch_space scratch;
> > > +		err = write_packetized_from_fd(fd, process->in, &scratch);
> > > +	} else
> > >   		err = write_packetized_from_buf(src, len, process->in);
> > 
> > Isn't this just putting the buffer onto the stack anyway? Your
> > scratch_space struct is really just a big array. You'd want to make
> > it static here, but then we haven't really solved anything. :)
> 
> Yeah, I was letting the caller decide how to provide the buffer.
> They could put it on the stack or allocate it once across a whole
> set of files or use a static buffer -- the caller has context for
> what works best that we don't have here.  For example, the caller
> may know that it is not in threaded code at all, but we cannot assume
> that here.

Yeah, I think it's successfully pushed the problem up to the caller. But
it introduced a _new_ problem in putting the large buffer on the stack.
So if this were "static struct packet_scratch_space scratch", I think
we'd be OK.

And perhaps that would meet your needs (if you just need to call
write_packed_from_fd() in a thread, and not this other caller).

But I do think the heap approach is nice in that it keeps the interface
clean, and I think the performance should be comparable.

> Right, I think it would be fine to malloc it here, but I didn't
> want to assume that everyone would think that.
> 
> I'll change it.

Thanks. :)

-Peff

^ permalink raw reply	[flat|nested] 178+ messages in thread

* Re: [PATCH v4 00/12] Simple IPC Mechanism
  2021-02-26 20:18           ` Jeff Hostetler
@ 2021-02-26 20:50             ` Jeff King
  2021-03-03 19:29               ` Junio C Hamano
  0 siblings, 1 reply; 178+ messages in thread
From: Jeff King @ 2021-02-26 20:50 UTC (permalink / raw)
  To: Jeff Hostetler
  Cc: Junio C Hamano, Jeff Hostetler via GitGitGadget, git,
	SZEDER Gábor, Johannes Schindelin, Jeff Hostetler

On Fri, Feb 26, 2021 at 03:18:26PM -0500, Jeff Hostetler wrote:

> > Sorry, I hadn't gotten around to looking at the latest version. I left
> > another round of comments. Some of them are arguably bikeshedding, but
> > there's at least one I think we'd want to address (the big stack buffer
> > in patch 1).
> > 
> > I also haven't carefully looked at the simple-ipc design at all; my
> > focus has just been on the details of socket and pktline code being
> > touched. Since there are no simple-ipc users yet, and since it's
> > internal and would be easy to change later, I'm mostly content for Jeff
> > to proceed as he sees fit and iterate on it as necessary.
> 
> We can wait until next week on moving this to 'next' if you want.
> I'll attend to the buffer alloc in patch 1.  I'm still reading the
> other comments and will see where that takes me.

I could have been a bit more clear here: modulo any response you have to
my latest round of comments, I'm mostly happy to let this proceed to
next. So I was thinking you'd have one more re-roll dealing with the
patch 1 problems plus anything else you think worth addressing from my
batch of comments, and then that result would probably be ready for
'next'.

> I'm about ready to push an RFC for my fsmonitor--daemon series that
> sits on top of this simple-ipc series, so you can see an actual use
> case if that would help understand (my madness).

I may have dug my own grave here. ;) I'm actually not incredibly
interested in the overall topic. So I wasn't saying so much "I'll
reserve judgement on simple-ipc until I see callers" so much as "I
expect you'll find any shortcomings in its design yourself as you build
on top of it".

And by "not interested" I don't mean that I think the topic is without
value. Far from it; I think this is an important area to be working in.
But it's complex and time-consuming to review. So I was hoping somebody
with more expertise and interest in the problem space would do that part
of the review, and I could continue to focus on other stuff. That may be
wishful thinking, though. :)

-Peff

^ permalink raw reply	[flat|nested] 178+ messages in thread

* Re: [PATCH v4 12/12] t0052: add simple-ipc tests and t/helper/test-simple-ipc tool
  2021-02-17 21:48       ` [PATCH v4 12/12] t0052: add simple-ipc tests and t/helper/test-simple-ipc tool Jeff Hostetler via GitGitGadget
@ 2021-03-02  9:44         ` Jeff King
  2021-03-03 15:25           ` Jeff Hostetler
  0 siblings, 1 reply; 178+ messages in thread
From: Jeff King @ 2021-03-02  9:44 UTC (permalink / raw)
  To: Jeff Hostetler via GitGitGadget
  Cc: git, Jeff Hostetler, SZEDER Gábor, Johannes Schindelin,
	Jeff Hostetler

On Wed, Feb 17, 2021 at 09:48:48PM +0000, Jeff Hostetler via GitGitGadget wrote:

> Create t/helper/test-simple-ipc test tool to exercise the "simple-ipc"
> functions.

BTW, one oddity I noticed in this (because of my -Wunused-parameters
branch):

> +#ifndef GIT_WINDOWS_NATIVE
> +/*
> + * This is adapted from `daemonize()`.  Use `fork()` to directly create and
> + * run the daemon in a child process.
> + */
> +static int spawn_server(const char *path,
> +			const struct ipc_server_opts *opts,
> +			pid_t *pid)
> +{
> +	*pid = fork();
> +
> +	switch (*pid) {
> +	case 0:
> +		if (setsid() == -1)
> +			error_errno(_("setsid failed"));
> +		close(0);
> +		close(1);
> +		close(2);
> +		sanitize_stdfds();
> +
> +		return ipc_server_run(path, opts, test_app_cb, (void*)&my_app_data);
> +
> +	case -1:
> +		return error_errno(_("could not spawn daemon in the background"));
> +
> +	default:
> +		return 0;
> +	}
> +}

In the non-Windows version, we spawn a server using the "path" parameter
we got from the caller.

But in the Windows version:

> +#else
> +/*
> + * Conceptually like `daemonize()` but different because Windows does not
> + * have `fork(2)`.  Spawn a normal Windows child process but without the
> + * limitations of `start_command()` and `finish_command()`.
> + */
> +static int spawn_server(const char *path,
> +			const struct ipc_server_opts *opts,
> +			pid_t *pid)
> +{
> +	char test_tool_exe[MAX_PATH];
> +	struct strvec args = STRVEC_INIT;
> +	int in, out;
> +
> +	GetModuleFileNameA(NULL, test_tool_exe, MAX_PATH);
> +
> +	in = open("/dev/null", O_RDONLY);
> +	out = open("/dev/null", O_WRONLY);
> +
> +	strvec_push(&args, test_tool_exe);
> +	strvec_push(&args, "simple-ipc");
> +	strvec_push(&args, "run-daemon");
> +	strvec_pushf(&args, "--threads=%d", opts->nr_threads);
> +
> +	*pid = mingw_spawnvpe(args.v[0], args.v, NULL, NULL, in, out, out);
> +	close(in);
> +	close(out);
> +
> +	strvec_clear(&args);
> +
> +	if (*pid < 0)
> +		return error(_("could not spawn daemon in the background"));
> +
> +	return 0;
> +}
> +#endif

We ignore the "path" parameter entirely. Should we be passing it along
as an option to the child process? I think it doesn't really matter at
this point because both the parent and child processes will use the
hard-coded string "ipc-test", but it seems like something the test
script might want to be able to specify.
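
For example (the "--path" option name is hypothetical, not something the
test helper currently has):

	strvec_pushf(&args, "--threads=%d", opts->nr_threads);
	strvec_pushf(&args, "--path=%s", path);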

-Peff

^ permalink raw reply	[flat|nested] 178+ messages in thread

* Re: [PATCH v4 10/12] unix-socket: create `unix_stream_server__listen_with_lock()`
  2021-02-26  7:56         ` Jeff King
@ 2021-03-02 23:50           ` Jeff Hostetler
  2021-03-04 15:13             ` Jeff King
  0 siblings, 1 reply; 178+ messages in thread
From: Jeff Hostetler @ 2021-03-02 23:50 UTC (permalink / raw)
  To: Jeff King, Jeff Hostetler via GitGitGadget
  Cc: git, SZEDER Gábor, Johannes Schindelin, Jeff Hostetler



On 2/26/21 2:56 AM, Jeff King wrote:
> On Wed, Feb 17, 2021 at 09:48:46PM +0000, Jeff Hostetler via GitGitGadget wrote:
> 
>> From: Jeff Hostetler <jeffhost@microsoft.com>
>>
>> Create a version of `unix_stream_listen()` that uses a ".lock" lockfile
>> to create the unix domain socket in a race-free manner.
> 
> The "unix_stream_server__listen_with_lock" name is quite a mouthful.  My
> first question was: don't we have an "options" struct that we can use to
> tell it we're interested in using the locking strategy?
> 
> But I do find it a little weird for the feature to be at this layer at
> all. I'd have thought it would make more sense in the simple-ipc layer
> that implements the unix-socket backend, where app-level logic like
> "it's OK to just connect to this socket and hang up in order to ping it"
> might be more appropriate. We might even want to have a more robust
> check (e.g., an actual "ping" that expects the server to say "yes, I'm
> here").

I think when I started this, the "safe listen" was much closer to the
original `unix_stream_listen()` and it made sense to keep it nearby,
but as it evolved (and we added lockfiles, etc.) it grew to be more
like its own level between the original socket code and the simple-ipc
layer.  Pulling it out into its own source file is probably a good idea
for clarity.

I was thinking that the "ping" is just to see if a server is listening
or not.  (And I viewed that as kind of a hack, but it works.)  If we
start sending data back and forth, we get into protocols and blocking
and stuff that this layer (even if we move it up a level) doesn't know
about.

I'll pull this out into a new file.


> 
> (But also see below where I am less certain about this...)
> 
>> Unix domain sockets have a fundamental problem on Unix systems because
>> they persist in the filesystem until they are deleted.  This is
>> independent of whether a server is actually listening for connections.
>> Well-behaved servers are expected to delete the socket when they
>> shutdown.  A new server cannot easily tell if a found socket is
>> attached to an active server or is leftover cruft from a dead server.
>> The traditional solution used by `unix_stream_listen()` is to force
>> delete the socket pathname and then create a new socket.  This solves
>> the latter (cruft) problem, but in the case of the former, it orphans
>> the existing server (by stealing the pathname associated with the
>> socket it is listening on).
> 
> Nicely explained.
> 
>> We cannot directly use a .lock lockfile to create the socket because
>> the socket is created by `bind(2)` rather than the `open(2)` mechanism
>> used by `tempfile.c`.
>>
>> As an alternative, we hold a plain lockfile ("<path>.lock") as a
>> mutual exclusion device.  Under the lock, we test if an existing
>> socket ("<path>") has an active server.  If not, create a new
>> socket and begin listening.  Then we rollback the lockfile in all
>> cases.
> 
> Make sense.
> 
>> +static int is_another_server_alive(const char *path,
>> +				   const struct unix_stream_listen_opts *opts)
>> +{
>> +	struct stat st;
>> +	int fd;
>> +
>> +	if (!lstat(path, &st) && S_ISSOCK(st.st_mode)) {
>> +		/*
>> +		 * A socket-inode exists on disk at `path`, but we
>> +		 * don't know whether it belongs to an active server
>> +		 * or whether the last server died without cleaning
>> +		 * up.
>> +		 *
>> +		 * Poke it with a trivial connection to try to find
>> +		 * out.
>> +		 */
>> +		fd = unix_stream_connect(path, opts->disallow_chdir);
>> +		if (fd >= 0) {
>> +			close(fd);
>> +			return 1;
>> +		}
>> +	}
> 
> The lstat() seems redundant here. unix_stream_connect() will tell us
> whether there is something to connect to or not. (It's also racy with
> respect to the actual connect, but since you're doing this under lock, I
> don't think that matters).

I agree.  I'll get rid of the lstat().


> 
>> +struct unix_stream_server_socket *unix_stream_server__listen_with_lock(
>> +	const char *path,
>> +	const struct unix_stream_listen_opts *opts)
>> +{
>> +	struct lock_file lock = LOCK_INIT;
>> +	int fd_socket;
>> +	struct unix_stream_server_socket *server_socket;
>> +
>> +	/*
>> +	 * Create a lock at "<path>.lock" if we can.
>> +	 */
>> +	if (hold_lock_file_for_update_timeout(&lock, path, 0,
>> +					      opts->timeout_ms) < 0) {
>> +		error_errno(_("could not lock listener socket '%s'"), path);
>> +		return NULL;
>> +	}
> 
> Would you want to ping to see if it's alive before creating the lock?
> That would be the fast-path if we assume that a server will usually be
> there once started. Or is that supposed to happen in the caller (in
> which case I'd again wonder if this really should be happening in the
> simple-ipc code).

Starting a server should not happen that often, so I'm not sure it
matters.  And yes, a server once started should run for a long time.
Pinging without the lock puts us back in another race, so we might as
well lock first.

> 
>> +	/*
>> +	 * If another server is listening on "<path>" give up.  We do not
>> +	 * want to create a socket and steal future connections from them.
>> +	 */
>> +	if (is_another_server_alive(path, opts)) {
>> +		errno = EADDRINUSE;
>> +		error_errno(_("listener socket already in use '%s'"), path);
>> +		rollback_lock_file(&lock);
>> +		return NULL;
>> +	}
> 
> Wouldn't this be a "success" case for a caller? They did not open the
> server themselves, but they are presumably happy that there is one there
> now to talk to. So do we actually want to print an error to stderr?
> Likewise, how do they tell the difference between this NULL and the NULL
> we returned above because we couldn't take the lock? Or the NULL we
> return below because there is some error creating a listening socket?
> 
> I'd think in those three cases you'd want:
> 
>    - if lock contention, pause a moment and wait for the winner to spin
>      up and serve requests
> 
>    - if another server is live while we hold the lock, then we raced them
>      and they won. Release the lock and start using them.
> 
>    - if we really tried to call unix_stream_listen() and that failed,
>      give up now. There is some system error that is not likely to be
>      fixed by trying anything more (e.g., ENAMETOOLONG).

Yes, I want to move the error messages out of these library layers.

And yes, if another server is running, our server instance should
shutdown gracefully.  Other client processes can just talk to them
rather than us.

> 
>> +	server_socket = xcalloc(1, sizeof(*server_socket));
>> +	server_socket->path_socket = strdup(path);
>> +	server_socket->fd_socket = fd_socket;
> 
> What do we need this server_socket for? The caller already knows the
> path; they fed it to us. We do need to return the descriptor, but we
> could do that directly.

I wanted a wrapper struct to persist a copy of the pathname near
the fd.  Later when we get ready to shut down, we can close and unlink
without worrying whether our caller kept their copy of the path buffer.

This also lets me have the pathname to poll and check for theft during
the accept thread's event loop.
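
Roughly, the wrapper holds (paraphrasing the fields quoted above, not a
verbatim copy of the patch):

struct unix_stream_server_socket {
	char *path_socket;     /* our own copy of the path, for unlink() */
	struct stat st_socket; /* identity of the inode we bound, for theft checks */
	int fd_socket;         /* the listening socket */
};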

> 
>> +	lstat(path, &server_socket->st_socket);
> 
> This lstat I guess is part of your "periodically check to see if we're
> still the one holding the socket" strategy. We _shouldn't_ need that
> anymore, with the dotlocking, but I'm OK with it as a
> belt-and-suspenders check. But why are we filling in the lstat here?
> This seems like something that the unix-socket code doesn't really need
> to know about (though you do at least provide the complementary
> "was_stolen" function here, so that part makes sense).

The dotlock is only on disk for the duration of the socket setup.
We do the rollback (to delete the lockfile) once we have the socket
open and ready for business.

The lstat gives me the inode of the socket on disk and we can watch
it with future lstat's in the event loop and see if it changes and
detect theft and auto-shutdown.
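
Roughly, the idea in the accept thread is (sketch only, not the actual
event loop):

	for (;;) {
		if (unix_stream_server__was_stolen(server_socket))
			break;	/* someone else owns "<path>" now; shut down */
		/* ... poll()/accept() with a timeout and dispatch clients ... */
	}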

> 
> Again, I guess I'd find it less weird if it were happening at a layer
> above. Maybe I'm really just complaining that this is in unix-socket.c.
> I guess it is a separate unix_stream_server data type. Arguably that
> should go in a separate file, but I guess the whole conditional
> compilation of unix-socket.c makes that awkward. So maybe this is the
> least-bad thing.

Yeah, I'll move it out.

And yes, the whole conditional compilation thing was something I was
hesitating on, but it really isn't that bad.  (But I should not brag
here until all of the build servers have had their say....)

> 
>> +	/*
>> +	 * Always rollback (just delete) "<path>.lock" because we already created
>> +	 * "<path>" as a socket and do not want to commit_lock to do the atomic
>> +	 * rename trick.
>> +	 */
>> +	rollback_lock_file(&lock);
>> +
>> +	return server_socket;
>> +}
> 
> OK, this part makes sense to me.
> 
>> +void unix_stream_server__free(
>> +	struct unix_stream_server_socket *server_socket)
>> +{
>> +	if (!server_socket)
>> +		return;
>> +
>> +	if (server_socket->fd_socket >= 0) {
>> +		if (!unix_stream_server__was_stolen(server_socket))
>> +			unlink(server_socket->path_socket);
>> +		close(server_socket->fd_socket);
>> +	}
>> +
>> +	free(server_socket->path_socket);
>> +	free(server_socket);
>> +}
> 
> OK, this makes sense. We only remove it if we're still the ones holding
> it. That's not done under lock, though, so it's possibly racy (somebody
> steals from us while _they_ hold the lock; we check and see "not stolen"
> right before they steal it, and then we unlink their stolen copy).

Right, I didn't bother with the lock here.  I don't think we need it.

We technically still have the socket open and are listening on it when
we lstat and unlink it.  The other process should create the lock and
try to connect.  That should hang in the kernel because of the accept()
grace period.  Then we close the socket and the client's connection
request fails because we didn't accept it.  They will take the error to
mean no one is listening and then create their own socket.

> 
>> +int unix_stream_server__was_stolen(
>> +	struct unix_stream_server_socket *server_socket)
>> +{
>> +	struct stat st_now;
>> +
>> +	if (!server_socket)
>> +		return 0;
>> +
>> +	if (lstat(server_socket->path_socket, &st_now) == -1)
>> +		return 1;
>> +
>> +	if (st_now.st_ino != server_socket->st_socket.st_ino)
>> +		return 1;
>> +
>> +	/* We might also consider the ctime on some platforms. */
>> +
>> +	return 0;
>> +}
> 
> You probably should confirm that st.dev matches, too, since that is the
> namespace for st.ino. Maybe also double check that it's still a socket
> with S_ISSOCK(st_mode)?

Good point.

> 
> -Peff
> 

Thanks for all the careful study.  I'll push up a new series to
address them shortly.

Jeff

^ permalink raw reply	[flat|nested] 178+ messages in thread

* Re: [PATCH v4 12/12] t0052: add simple-ipc tests and t/helper/test-simple-ipc tool
  2021-03-02  9:44         ` Jeff King
@ 2021-03-03 15:25           ` Jeff Hostetler
  0 siblings, 0 replies; 178+ messages in thread
From: Jeff Hostetler @ 2021-03-03 15:25 UTC (permalink / raw)
  To: Jeff King, Jeff Hostetler via GitGitGadget
  Cc: git, SZEDER Gábor, Johannes Schindelin, Jeff Hostetler



On 3/2/21 4:44 AM, Jeff King wrote:
> On Wed, Feb 17, 2021 at 09:48:48PM +0000, Jeff Hostetler via GitGitGadget wrote:
> 
>> Create t/helper/test-simple-ipc test tool to exercise the "simple-ipc"
>> functions.
> 
> BTW, one oddity I noticed in this (because of my -Wunused-parameters
> branch):
> 
>> +#ifndef GIT_WINDOWS_NATIVE
>> +/*
>> + * This is adapted from `daemonize()`.  Use `fork()` to directly create and
>> + * run the daemon in a child process.
>> + */
>> +static int spawn_server(const char *path,
>> +			const struct ipc_server_opts *opts,
>> +			pid_t *pid)
>> +{
>> +	*pid = fork();
>> +
>> +	switch (*pid) {
>> +	case 0:
>> +		if (setsid() == -1)
>> +			error_errno(_("setsid failed"));
>> +		close(0);
>> +		close(1);
>> +		close(2);
>> +		sanitize_stdfds();
>> +
>> +		return ipc_server_run(path, opts, test_app_cb, (void*)&my_app_data);
>> +
>> +	case -1:
>> +		return error_errno(_("could not spawn daemon in the background"));
>> +
>> +	default:
>> +		return 0;
>> +	}
>> +}
> 
> In the non-Windows version, we spawn a server using the "path" parameter
> we got from the caller.
> 
> But in the Windows version:
> 
>> +#else
>> +/*
>> + * Conceptually like `daemonize()` but different because Windows does not
>> + * have `fork(2)`.  Spawn a normal Windows child process but without the
>> + * limitations of `start_command()` and `finish_command()`.
>> + */
>> +static int spawn_server(const char *path,
>> +			const struct ipc_server_opts *opts,
>> +			pid_t *pid)
>> +{
>> +	char test_tool_exe[MAX_PATH];
>> +	struct strvec args = STRVEC_INIT;
>> +	int in, out;
>> +
>> +	GetModuleFileNameA(NULL, test_tool_exe, MAX_PATH);
>> +
>> +	in = open("/dev/null", O_RDONLY);
>> +	out = open("/dev/null", O_WRONLY);
>> +
>> +	strvec_push(&args, test_tool_exe);
>> +	strvec_push(&args, "simple-ipc");
>> +	strvec_push(&args, "run-daemon");
>> +	strvec_pushf(&args, "--threads=%d", opts->nr_threads);
>> +
>> +	*pid = mingw_spawnvpe(args.v[0], args.v, NULL, NULL, in, out, out);
>> +	close(in);
>> +	close(out);
>> +
>> +	strvec_clear(&args);
>> +
>> +	if (*pid < 0)
>> +		return error(_("could not spawn daemon in the background"));
>> +
>> +	return 0;
>> +}
>> +#endif
> 
> We ignore the "path" parameter entirely. Should we be passing it along
> as an option to the child process? I think it doesn't really matter at
> this point because both the parent and child processes will use the
> hard-coded string "ipc-test", but it seems like something the test
> script might want to be able to specify.
> 
> -Peff
> 

Yeah, since it was a test helper I hesitated to add a command line
arg to pass it to the child process (when all callers were right here
and using the same default value).  However, it would be good to do so
in case we want to write more complicated tests.

Jeff

^ permalink raw reply	[flat|nested] 178+ messages in thread

* Re: [PATCH v4 00/12] Simple IPC Mechanism
  2021-02-26 20:50             ` Jeff King
@ 2021-03-03 19:29               ` Junio C Hamano
  0 siblings, 0 replies; 178+ messages in thread
From: Junio C Hamano @ 2021-03-03 19:29 UTC (permalink / raw)
  To: Jeff King
  Cc: Jeff Hostetler, Jeff Hostetler via GitGitGadget, git,
	SZEDER Gábor, Johannes Schindelin, Jeff Hostetler

Jeff King <peff@peff.net> writes:

> And by "not interested" I don't mean that I think the topic is without
> value. Far from it; I think this is an important area to be working in.
> But it's complex and time-consuming to review. So I was hoping somebody
> with more expertise and interest in the problem space would do that part
> of the review, and I could continue to focus on other stuff. That may be
> wishful thinking, though. :)

I was not paying close attention to this series, and was planning to
visit it before merging it to 'next' but only to ensure that changes
to any existing code would not regress existing callers, so it seems
that the two of us have had pretty much the same attitude ;-)

^ permalink raw reply	[flat|nested] 178+ messages in thread

* Re: [PATCH v4 01/12] pkt-line: eliminate the need for static buffer in packet_write_gently()
  2021-02-26 19:52           ` Jeff Hostetler
  2021-02-26 20:43             ` Jeff King
@ 2021-03-03 19:38             ` Junio C Hamano
  2021-03-04 13:29               ` Jeff Hostetler
  1 sibling, 1 reply; 178+ messages in thread
From: Junio C Hamano @ 2021-03-03 19:38 UTC (permalink / raw)
  To: Jeff Hostetler
  Cc: Jeff King, Jeff Hostetler via GitGitGadget, git,
	SZEDER Gábor, Johannes Schindelin, Jeff Hostetler

Jeff Hostetler <git@jeffhostetler.com> writes:

> Right, I think it would be fine to malloc it here, but I didn't
> want to assume that everyone would think that.
>
> I'll change it.

I agree with both of you that the code is unnice in its stack usage
and we want to fix it with malloc(), or something like that, but sorry, I
think I merged this round by mistake to 'next'.

As we won't be merging the topic to the upcoming release anyway, I
am willing to revert the merge to 'next' and requeue an updated one,
when it appears (I am also OK to see an incremental update, "oops,
no, we realize we don't want to have it on the stack" fix-up, if
this is the only glitch in the series that needs to be fixed).

Thanks.

^ permalink raw reply	[flat|nested] 178+ messages in thread

* Re: [PATCH v4 03/12] pkt-line: (optionally) libify the packet readers
  2021-02-17 21:48       ` [PATCH v4 03/12] pkt-line: (optionally) libify the packet readers Johannes Schindelin via GitGitGadget
@ 2021-03-03 19:53         ` Junio C Hamano
  2021-03-04 14:17           ` Jeff Hostetler
  0 siblings, 1 reply; 178+ messages in thread
From: Junio C Hamano @ 2021-03-03 19:53 UTC (permalink / raw)
  To: Johannes Schindelin via GitGitGadget
  Cc: git, Jeff Hostetler, Jeff King, SZEDER Gábor,
	Johannes Schindelin, Jeff Hostetler

"Johannes Schindelin via GitGitGadget" <gitgitgadget@gmail.com>
writes:

> @@ -313,6 +316,8 @@ static int get_packet_data(int fd, char **src_buf, 
>  		if (options & PACKET_READ_GENTLE_ON_EOF)
>  			return -1;
>  
> +		if (options & PACKET_READ_NEVER_DIE)
> +			return error(_("the remote end hung up unexpectedly"));
>  		die(_("the remote end hung up unexpectedly"));
>  	}

This hunk treats READ_NEVER_DIE as a less quiet version of
GENTLE_ON_EOF, i.e. the new flag allows the caller to continue even
after the "hung up unexpectedly" condition that usually causes the
process to die.

> @@ -355,12 +363,19 @@ enum packet_read_status packet_read_with_status(i
> ...
> -	if ((unsigned)len >= size)
> +	if ((unsigned)len >= size) {
> +		if (options & PACKET_READ_NEVER_DIE)
> +			return error(_("protocol error: bad line length %d"),
> +				     len);
>  		die(_("protocol error: bad line length %d"), len);
> +	}
>  
>  	if (get_packet_data(fd, src_buffer, src_len, buffer, len, options) < 0) {
>  		*pktlen = -1;

In the post-context of this hunk, there is this code:

	if ((options & PACKET_READ_DIE_ON_ERR_PACKET) &&
	    starts_with(buffer, "ERR "))
		die(_("remote error: %s"), buffer + 4);

	*pktlen = len;
	return PACKET_READ_NORMAL;

But here, there is no way to override the DIE_ON_ERR with
READ_NEVER_DIE.

The asymmetry is somewhat annoying (i.e. if "if you do not want to
die upon ERR, don't pass DIE_ON_ERR" could be a valid suggestion to
the callers, then "if you do not want to die upon an unexpected
hang-up, pass GENTLE_ON_EOF" would be an equally valid suggestion),
but I'll let it pass.
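
(Just to illustrate what the symmetric treatment would look like, not a
request to change it, something along these lines:)

	if ((options & PACKET_READ_DIE_ON_ERR_PACKET) &&
	    starts_with(buffer, "ERR ")) {
		if (options & PACKET_READ_NEVER_DIE)
			return error(_("remote error: %s"), buffer + 4);
		die(_("remote error: %s"), buffer + 4);
	}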

> diff --git a/pkt-line.h b/pkt-line.h
> index a7149429ac35..2e472efaf2c5 100644
> --- a/pkt-line.h
> +++ b/pkt-line.h
> @@ -75,10 +75,14 @@ int write_packetized_from_buf_no_flush(const char *src_in, size_t len, int fd_ou
>   *
>   * If options contains PACKET_READ_DIE_ON_ERR_PACKET, it dies when it sees an
>   * ERR packet.
> + *
> + * With `PACKET_READ_NEVER_DIE`, no errors are allowed to trigger die() (except
> + * an ERR packet, when `PACKET_READ_DIE_ON_ERR_PACKET` is in effect).
>   */
>  #define PACKET_READ_GENTLE_ON_EOF     (1u<<0)
>  #define PACKET_READ_CHOMP_NEWLINE     (1u<<1)
>  #define PACKET_READ_DIE_ON_ERR_PACKET (1u<<2)
> +#define PACKET_READ_NEVER_DIE         (1u<<3)
>  int packet_read(int fd, char **src_buffer, size_t *src_len, char
>  		*buffer, unsigned size, int options);

^ permalink raw reply	[flat|nested] 178+ messages in thread

* Re: [PATCH v4 05/12] simple-ipc: design documentation for new IPC mechanism
  2021-02-17 21:48       ` [PATCH v4 05/12] simple-ipc: design documentation for new IPC mechanism Jeff Hostetler via GitGitGadget
@ 2021-03-03 20:19         ` Junio C Hamano
  0 siblings, 0 replies; 178+ messages in thread
From: Junio C Hamano @ 2021-03-03 20:19 UTC (permalink / raw)
  To: Jeff Hostetler via GitGitGadget
  Cc: git, Jeff Hostetler, Jeff King, SZEDER Gábor,
	Johannes Schindelin, Jeff Hostetler

"Jeff Hostetler via GitGitGadget" <gitgitgadget@gmail.com> writes:

> +How the simple-ipc server is started is also outside the scope of the
> +IPC mechanism.  For example, the server might be started during
> +maintenance operations.

Just a tiny nit.

I would expect to see "might be <re>started" if it is followed by
"during maintenance operations"; in other words, I expect "might be
started" to be followed by "as part of the boot-up sequence".

> +The IPC protocol consists of a single request message from the client and
> +an optional response message from the server.  For simplicity, pkt-line
> +routines are used to hide chunking and buffering concerns.  Each side
> +terminates their message with a flush packet.
> +(Documentation/technical/protocol-common.txt)

Hiding chunking and buffering concerns is good, but it introduces
some limitations, like the 64k chunk limit, which probably want to be
mentioned (if not explained or described) here.

Do we give any extra meaning over "here, a message ends" to the
flush packet?  The lack of "now it is your turn to speak" (aka
"delim") has long been a weakness of the over-the-wire protocol,
and we'd probably want to learn from the past experience.

> +The actual format of the client and server messages is application
> +specific.  The IPC layer transmits and receives an opaque buffer without
> +any concern for the content within.

Please sell why such a semantic-agnostic layer exists and what
benefit the callers would get out of it.  Perhaps you offer some
mechanism to allow them to send and receive without having to worry
about deadlocks[*]?

Thanks.

[Footnote]

*1* ...just an example benefit that may or may not exist.



^ permalink raw reply	[flat|nested] 178+ messages in thread

* Re: [PATCH v4 07/12] unix-socket: elimiate static unix_stream_socket() helper function
  2021-02-17 21:48       ` [PATCH v4 07/12] unix-socket: elimiate static unix_stream_socket() helper function Jeff Hostetler via GitGitGadget
  2021-02-26  7:25         ` Jeff King
@ 2021-03-03 20:41         ` Junio C Hamano
  1 sibling, 0 replies; 178+ messages in thread
From: Junio C Hamano @ 2021-03-03 20:41 UTC (permalink / raw)
  To: Jeff Hostetler via GitGitGadget
  Cc: git, Jeff Hostetler, Jeff King, SZEDER Gábor,
	Johannes Schindelin, Jeff Hostetler

"Jeff Hostetler via GitGitGadget" <gitgitgadget@gmail.com> writes:

>  int unix_stream_connect(const char *path)
>  {
> -	int fd, saved_errno;
> +	int fd = -1, saved_errno;
>  	struct sockaddr_un sa;
>  	struct unix_sockaddr_context ctx;
>  
>  	if (unix_sockaddr_init(&sa, path, &ctx) < 0)
>  		return -1;
> -	fd = unix_stream_socket();
> +	fd = socket(AF_UNIX, SOCK_STREAM, 0);
> +	if (fd < 0)
> +		goto fail;
> +
>  	if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0)
>  		goto fail;
>  	unix_sockaddr_cleanup(&ctx);
> @@ -87,15 +82,16 @@ int unix_stream_connect(const char *path)
>  
>  fail:
>  	saved_errno = errno;
> +	if (fd != -1)
> +		close(fd);
>  	unix_sockaddr_cleanup(&ctx);
> -	close(fd);
>  	errno = saved_errno;
>  	return -1;
>  }

So, the difference is that the caller must be prepared to see and
handle error return from this function when creating socket fails,
but existing callers must be prepared to handle error returns from
this function for different reasons (e.g. we may successfully make a
socket, but connect may fail) already anyway, so this should be a
fairly safe thing to do.  The sole caller send_request() in
credential-cache.c will relay the error return back to do_cache()
which cares what errno it got, and that code does seem to care what
kind of error caused unix_stream_connect() to fail.  And the new
error case introduced by this patch won't produce ENOENT or
ECONNREFUSED, so it won't cause the code to fall back to "if the thing
is not running, let's try starting it and try again".

OK.


>  int unix_stream_listen(const char *path)
>  {

This one's caller is simpler to vet: it immediately dies upon any
error return.

Thanks.

^ permalink raw reply	[flat|nested] 178+ messages in thread

* Re: [PATCH v4 08/12] unix-socket: add backlog size option to unix_stream_listen()
  2021-02-26  7:30         ` Jeff King
@ 2021-03-03 20:54           ` Junio C Hamano
  0 siblings, 0 replies; 178+ messages in thread
From: Junio C Hamano @ 2021-03-03 20:54 UTC (permalink / raw)
  To: Jeff King
  Cc: Jeff Hostetler via GitGitGadget, git, Jeff Hostetler,
	SZEDER Gábor, Johannes Schindelin, Jeff Hostetler

Jeff King <peff@peff.net> writes:

> On Wed, Feb 17, 2021 at 09:48:44PM +0000, Jeff Hostetler via GitGitGadget wrote:
>
>> @@ -106,7 +108,10 @@ int unix_stream_listen(const char *path)
>>  	if (bind(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0)
>>  		goto fail;
>>  
>> -	if (listen(fd, 5) < 0)
>> +	backlog = opts->listen_backlog_size;
>> +	if (backlog <= 0)
>> +		backlog = DEFAULT_UNIX_STREAM_LISTEN_BACKLOG;
>> +	if (listen(fd, backlog) < 0)
>>  		goto fail;


Luckily there is no "pass 0 and the platform will choose an
appropriate backlog value" convention, so "pass 0 to get the default
Git chooses" is OK, but do we even want to allow passing any negative
value?  Shouldn't it be diagnosed as an error instead?

> OK, so we still have the fallback-on-zero here, which is good...
>
>> +struct unix_stream_listen_opts {
>> +	int listen_backlog_size;
>> +};
>> +
>> +#define DEFAULT_UNIX_STREAM_LISTEN_BACKLOG (5)
>> +
>> +#define UNIX_STREAM_LISTEN_OPTS_INIT \
>> +{ \
>> +	.listen_backlog_size = DEFAULT_UNIX_STREAM_LISTEN_BACKLOG, \
>> +}
>
> ...but I thought the plan was to drop this initialization in favor of a
> zero-initialization. What you have certainly wouldn't do the wrong
> thing, but it just seems weirdly redundant. Unless some caller really
> wants to know what the default will be?

Very true.  The code knows the exact value that an input of 0 falls
back to; we shouldn't have to initialize to that same exact value, and
I do not offhand see why DEFAULT_UNIX_STREAM_LISTEN_BACKLOG needs to
be a public constant.

Thanks.

^ permalink raw reply	[flat|nested] 178+ messages in thread

* Re: [PATCH v4 09/12] unix-socket: disallow chdir() when creating unix domain sockets
  2021-02-17 21:48       ` [PATCH v4 09/12] unix-socket: disallow chdir() when creating unix domain sockets Jeff Hostetler via GitGitGadget
@ 2021-03-03 22:53         ` Junio C Hamano
  2021-03-04 14:56           ` Jeff King
  0 siblings, 1 reply; 178+ messages in thread
From: Junio C Hamano @ 2021-03-03 22:53 UTC (permalink / raw)
  To: Jeff Hostetler via GitGitGadget
  Cc: git, Jeff Hostetler, Jeff King, SZEDER Gábor,
	Johannes Schindelin, Jeff Hostetler

"Jeff Hostetler via GitGitGadget" <gitgitgadget@gmail.com> writes:

> From: Jeff Hostetler <jeffhost@microsoft.com>
>
> Calls to `chdir()` are dangerous in a multi-threaded context.  If
> `unix_stream_listen()` or `unix_stream_connect()` is given a socket
> pathname that is too long to fit in a `sockaddr_un` structure, it will
> `chdir()` to the parent directory of the requested socket pathname,
> create the socket using a relative pathname, and then `chdir()` back.
> This is not thread-safe.
>
> Teach `unix_sockaddr_init()` to not allow calls to `chdir()` when this
> flag is set.

While it is clear that this will not affect any existing callers, I
am not sure if this is a good direction to go in the longer term.

I have to wonder if somebody actually relies on this "feature",
though.  As long as ENAMETOOLONG is passed back to the caller so
that it can react to it, any caller that knows it is safe to chdir()
at the point of calling "send_request()" should be able to chdir()
itself and come back (or fork a child that chdirs and opens a unix
domain socket there, and then send the file descriptor back to the
parent process).
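
To illustrate the first option (a sketch only; "socket_dir" and
"socket_basename" are made-up names for whatever the caller derives
from the socket path):

	char *cwd = xgetcwd();

	if (chdir(socket_dir) < 0)
		die_errno("chdir");
	fd = unix_stream_connect(socket_basename, 1 /* disallow_chdir */);
	if (chdir(cwd) < 0)
		die_errno("chdir back");
	free(cwd);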

Thanks.

^ permalink raw reply	[flat|nested] 178+ messages in thread

* Re: [PATCH v4 01/12] pkt-line: eliminate the need for static buffer in packet_write_gently()
  2021-03-03 19:38             ` Junio C Hamano
@ 2021-03-04 13:29               ` Jeff Hostetler
  2021-03-04 20:26                 ` Junio C Hamano
  0 siblings, 1 reply; 178+ messages in thread
From: Jeff Hostetler @ 2021-03-04 13:29 UTC (permalink / raw)
  To: Junio C Hamano
  Cc: Jeff King, Jeff Hostetler via GitGitGadget, git,
	SZEDER Gábor, Johannes Schindelin, Jeff Hostetler



On 3/3/21 2:38 PM, Junio C Hamano wrote:
> Jeff Hostetler <git@jeffhostetler.com> writes:
> 
>> Right, I think it would be fine to malloc it here, but I didn't
>> want to assume that everyone would think that.
>>
>> I'll change it.
> 
> I agree with both of you that the code is unnice in its stack usage
> and we want to fix it with malloc(), or something like that, but sorry, I
> think I merged this round by mistake to 'next'.
> 
> As we won't be merging the topic to the upcoming release anyway, I
> am willing to revert the merge to 'next' and requeue an updated one,
> when it appears (I am also OK to see an incremental update, "oops,
> no, we realize we don't want to have it on the stack" fix-up, if
> this is the only glitch in the series that needs to be fixed).
> 
> Thanks.
> 

I'm preparing a follow-on patch series to address Peff's comments
from Friday/Monday and yours from yesterday.  I thought I'd send
it as a set of new changes to sit on top of what we have in "next"
if that would make things easier for you.

After the upcoming release we can talk about whether it would be
better for me to smash together the 2 series or not.

Jeff

^ permalink raw reply	[flat|nested] 178+ messages in thread

* Re: [PATCH v4 03/12] pkt-line: (optionally) libify the packet readers
  2021-03-03 19:53         ` Junio C Hamano
@ 2021-03-04 14:17           ` Jeff Hostetler
  2021-03-04 14:40             ` Jeff King
  0 siblings, 1 reply; 178+ messages in thread
From: Jeff Hostetler @ 2021-03-04 14:17 UTC (permalink / raw)
  To: Junio C Hamano, Johannes Schindelin via GitGitGadget
  Cc: git, Jeff King, SZEDER Gábor, Johannes Schindelin, Jeff Hostetler



On 3/3/21 2:53 PM, Junio C Hamano wrote:
> "Johannes Schindelin via GitGitGadget" <gitgitgadget@gmail.com>
> writes:
> 
>> @@ -313,6 +316,8 @@ static int get_packet_data(int fd, char **src_buf,
>>   		if (options & PACKET_READ_GENTLE_ON_EOF)
>>   			return -1;
>>   
>> +		if (options & PACKET_READ_NEVER_DIE)
>> +			return error(_("the remote end hung up unexpectedly"));
>>   		die(_("the remote end hung up unexpectedly"));
>>   	}
> 
> This hunk treats READ_NEVER_DIE as a less quiet version of
> GENTLE_ON_EOF, i.e. the new flag allows us to continue even after the
> "hung up unexpectedly" condition that usually causes the process to
> die.
> 
>> @@ -355,12 +363,19 @@ enum packet_read_status packet_read_with_status(i
>> ...
>> -	if ((unsigned)len >= size)
>> +	if ((unsigned)len >= size) {
>> +		if (options & PACKET_READ_NEVER_DIE)
>> +			return error(_("protocol error: bad line length %d"),
>> +				     len);
>>   		die(_("protocol error: bad line length %d"), len);
>> +	}
>>   
>>   	if (get_packet_data(fd, src_buffer, src_len, buffer, len, options) < 0) {
>>   		*pktlen = -1;
> 
> In the post-context of this hunk, there is this code:
> 
> 	if ((options & PACKET_READ_DIE_ON_ERR_PACKET) &&
> 	    starts_with(buffer, "ERR "))
> 		die(_("remote error: %s"), buffer + 4);
> 
> 	*pktlen = len;
> 	return PACKET_READ_NORMAL;
> 
> But here, there is no way to override the DIE_ON_ERR with
> READ_NEVER_DIE.
> 
> The asymmetry is somewhat annoying (i.e. if "if you do not want to
> die upon ERR, don't pass DIE_ON_ERR" could be a valid suggestion to
> the callers, then "if you do not want to die upon an unexpected
> hang-up, pass GENTLE_ON_EOF" would equally be a valid suggestion),
> but I'll let it pass.

I agree that there is something odd about all of these flags,
but I don't have the context on all the various caller combinations
to make a better suggestion at this time.  And I certainly don't
want to stir up a bigger mess than I already have. :-)

We did document in the .h that READ_NEVER_DIE excludes ERR packets
when READ_DIE_ON_ERR is set, so I think we're safe from unexpected
surprises.

> 
>> diff --git a/pkt-line.h b/pkt-line.h
>> index a7149429ac35..2e472efaf2c5 100644
>> --- a/pkt-line.h
>> +++ b/pkt-line.h
>> @@ -75,10 +75,14 @@ int write_packetized_from_buf_no_flush(const char *src_in, size_t len, int fd_ou
>>    *
>>    * If options contains PACKET_READ_DIE_ON_ERR_PACKET, it dies when it sees an
>>    * ERR packet.
>> + *
>> + * With `PACKET_READ_NEVER_DIE`, no errors are allowed to trigger die() (except
>> + * an ERR packet, when `PACKET_READ_DIE_ON_ERR_PACKET` is in effect).
>>    */
>>   #define PACKET_READ_GENTLE_ON_EOF     (1u<<0)
>>   #define PACKET_READ_CHOMP_NEWLINE     (1u<<1)
>>   #define PACKET_READ_DIE_ON_ERR_PACKET (1u<<2)
>> +#define PACKET_READ_NEVER_DIE         (1u<<3)
>>   int packet_read(int fd, char **src_buffer, size_t *src_len, char
>>   		*buffer, unsigned size, int options);

^ permalink raw reply	[flat|nested] 178+ messages in thread

* Re: [PATCH v4 03/12] pkt-line: (optionally) libify the packet readers
  2021-03-04 14:17           ` Jeff Hostetler
@ 2021-03-04 14:40             ` Jeff King
  2021-03-04 20:28               ` Junio C Hamano
  0 siblings, 1 reply; 178+ messages in thread
From: Jeff King @ 2021-03-04 14:40 UTC (permalink / raw)
  To: Jeff Hostetler
  Cc: Junio C Hamano, Johannes Schindelin via GitGitGadget, git,
	SZEDER Gábor, Johannes Schindelin, Jeff Hostetler

On Thu, Mar 04, 2021 at 09:17:41AM -0500, Jeff Hostetler wrote:

> > In the post-context of this hunk, there is this code:
> > 
> > 	if ((options & PACKET_READ_DIE_ON_ERR_PACKET) &&
> > 	    starts_with(buffer, "ERR "))
> > 		die(_("remote error: %s"), buffer + 4);
> > 
> > 	*pktlen = len;
> > 	return PACKET_READ_NORMAL;
> > 
> > But here, there is no way to override the DIE_ON_ERR with
> > READ_NEVER_DIE.
> > 
> > The asymmetry is somewhat annoying (i.e. if "if you do not want to
> > die upon ERR, don't pass DIE_ON_ERR" could be a valid suggestion to
> > the callers, then "if you do not want to die upon an unexpected
> > hang-up, pass GENTLE_ON_EOF" would equally be a valid suggestion),
> > but I'll let it pass.
> 
> I agree that there is something odd about all of these flags,
> but I don't have the context on all the various caller combinations
> to make a better suggestion at this time.  And I certainly don't
> want to stir up a bigger mess than I already have. :-)
> 
> We did document in the .h that READ_NEVER_DIE excludes ERR packets
> when READ_DIE_ON_ERR is set, so I think we're safe from unexpected
> surprises.

I think the flag is doing sensible things; it's just that the word
"never" in the name is confusing, since it is "never except this one
time".

Would PACKET_READ_GENTLE_ON_READ_ERROR be a better name, to match
GENTLE_ON_EOF? I was tempted to just call it "ON_ERROR", since it also
includes parsing errors, but maybe somebody would think that includes ERR
packets (that is more of a stretch, though, I think).

Likewise, I kind of wonder if callers would really prefer suppressing
the error() calls, too. Saying "error: the remote end hung up
unexpectedly" is not that helpful if the "remote end" we are talking
about is fsmonitor, and not the server side of a fetch.

-Peff

^ permalink raw reply	[flat|nested] 178+ messages in thread

* Re: [PATCH v4 09/12] unix-socket: disallow chdir() when creating unix domain sockets
  2021-03-03 22:53         ` Junio C Hamano
@ 2021-03-04 14:56           ` Jeff King
  2021-03-04 20:34             ` Junio C Hamano
  0 siblings, 1 reply; 178+ messages in thread
From: Jeff King @ 2021-03-04 14:56 UTC (permalink / raw)
  To: Junio C Hamano
  Cc: Jeff Hostetler via GitGitGadget, git, Jeff Hostetler,
	SZEDER Gábor, Johannes Schindelin, Jeff Hostetler

On Wed, Mar 03, 2021 at 02:53:23PM -0800, Junio C Hamano wrote:

> "Jeff Hostetler via GitGitGadget" <gitgitgadget@gmail.com> writes:
> 
> > From: Jeff Hostetler <jeffhost@microsoft.com>
> >
> > Calls to `chdir()` are dangerous in a multi-threaded context.  If
> > `unix_stream_listen()` or `unix_stream_connect()` is given a socket
> > pathname that is too long to fit in a `sockaddr_un` structure, it will
> > `chdir()` to the parent directory of the requested socket pathname,
> > create the socket using a relative pathname, and then `chdir()` back.
> > This is not thread-safe.
> >
> > Teach `unix_sockaddr_init()` to not allow calls to `chdir()` when this
> > flag is set.
> 
> While it is clear that this will not affect any existing callers, I
> am not sure if this is a good direction to go in the longer term.
> 
> I have to wonder if somebody actually relies on this "feature",
> though.  As long as ENAMETOOLONG is passed back to the caller so
> that it can react to it, any caller that knows it is safe to chdir()
> at the point of calling "send_request()" should be able to chdir()
> itself and come back (or fork a child that chdirs and opens a unix
> domain socket there, and then send the file descriptor back to the
> parent process).

The feature is definitely useful; I think I did 1eb10f4091 (unix-socket:
handle long socket pathnames, 2012-01-09) in response to a real problem.

Certainly callers could handle the error themselves. The reason I pushed
it down into the socket code was to avoid having to implement it in
multiple callers. There are only two, but we'd have needed it in both
sides (credential-cache--daemon as the listener, and credential-cache as
the client).

Ironically, the listening side now does a permanent chdir() to the
socket directory anyway, since 6e61449051 (credential-cache--daemon:
change to the socket dir on startup, 2016-02-23). So we could just do
that first, and then feed the basename to the socket code.

The client side would still need to handle it, though. It could probably
also chdir to the socket directory without any real downside (once
started, I don't think the helper program needs to access the filesystem
at all outside of the socket).
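
Roughly something like this on the client side (a sketch only; it
ignores corner cases like a socket directly under "/", and assumes
the two-argument unix_stream_connect() from this patch):

	static int connect_from_socket_dir(const char *path)
	{
		char *buf = xstrdup(path);
		char *slash = strrchr(buf, '/');
		int fd;

		if (slash) {
			*slash = '\0';
			if (chdir(buf) < 0)
				die_errno("unable to chdir to '%s'", buf);
			fd = unix_stream_connect(slash + 1, 1);
		} else {
			fd = unix_stream_connect(buf, 1);
		}
		free(buf);
		return fd;
	}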

So I dunno. I'd be OK to just rip the feature out in favor of doing
those chdir()s. But that seems like a non-zero amount of work versus
leaving, and the existing code has the benefit that if another caller
shows up, it could benefit from the feature.

-Peff

^ permalink raw reply	[flat|nested] 178+ messages in thread

* Re: [PATCH v4 10/12] unix-socket: create `unix_stream_server__listen_with_lock()`
  2021-03-02 23:50           ` Jeff Hostetler
@ 2021-03-04 15:13             ` Jeff King
  0 siblings, 0 replies; 178+ messages in thread
From: Jeff King @ 2021-03-04 15:13 UTC (permalink / raw)
  To: Jeff Hostetler
  Cc: Jeff Hostetler via GitGitGadget, git, SZEDER Gábor,
	Johannes Schindelin, Jeff Hostetler

On Tue, Mar 02, 2021 at 06:50:51PM -0500, Jeff Hostetler wrote:

> I was thinking that the "ping" is just to see if a server is listening
> or not.  (And I viewed that as kind of a hack, but it works.)  If we
> start sending data back and forth, we get into protocols and blocking
> and stuff that this layer (even if we move it up a level) doesn't know
> about.

Right. Definitely the higher up the stack the ping happens, the more
value it has. But I also see the appeal of keeping this as its own
layer.

> > > +	if (hold_lock_file_for_update_timeout(&lock, path, 0,
> > > +					      opts->timeout_ms) < 0) {
> > > +		error_errno(_("could not lock listener socket '%s'"), path);
> > > +		return NULL;
> > > +	}
> > 
> > Would you want to ping to see if it's alive before creating the lock?
> > That would be the fast-path if we assume that a server will usually be
> > there once started. Or is that supposed to happen in the caller (in
> > which case I'd again wonder if this really should be happening in the
> > simple-ipc code).
> 
> Starting a server should not happen that often, so I'm not sure it
> matters.  And yes, a server once started should run for a long time.
> Pinging without the lock puts us back in another race, so we might as
> well lock first.

Definitely you need to ping under lock to avoid races. But I was
thinking of an additional optimistic ping before we take the lock. I
agree that starting the server should be rare, which is why I think
there's value in seeing "is it up" before taking any lock.

But I suspect your thinking is that this ping happens in the caller
anyway, before we hit any of this unix_socket_listen() code at all.  And
that makes sense to me. In fact, I guess it has to happen that way,
because "try to connect" and "try to spin up a server" are likely
happening in two separate processes entirely (we only spawn the second
one if the first one failed its ping).
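
I.e. the client-side flow I have in mind is roughly this (ipc_connect()
and spawn_daemon() are stand-ins for whatever the real helpers end up
being called):

	int i, fd;

	fd = ipc_connect(path);			/* optimistic ping */
	if (fd < 0) {
		spawn_daemon(path);		/* winner takes the lock and listens */
		for (i = 0; i < 10 && fd < 0; i++) {
			sleep_millisec(50);	/* give the winner time to start listening */
			fd = ipc_connect(path);
		}
	}

Only if that whole dance fails do we report an error to the user.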

> > I'd think in those three cases you'd want:
> > 
> >    - if lock contention, pause a moment and wait for the winner to spin
> >      up and serve requests
> > 
> >    - if another server is live while we hold the lock, then we raced them
> >      and they won. Release the lock and start using them.
> > 
> >    - if we really tried to call unix_stream_listen() and that failed,
> >      give up now. There is some system error that is not likely to be
> >      fixed by trying anything more (e.g., ENAMETOOLONG).
> 
> Yes, I want to move the error messages out of these library layers.
> 
> And yes, if another server is running, our server instance should
> shutdown gracefully.  Other client processes can just talk to them
> rather than us.

Right, that makes sense. Again, I was thinking earlier of the whole "try
to connect, but spin up a server otherwise" thing happening in a single
process. But by the time we get to the listen code, we have probably
already spawned a server process, and have redirected its stderr
somewhere. And likewise the caller doesn't even care that much if the
server reports an error because somebody else won the race. It only
cares that after a few connect attempts it manages to talk to
_somebody_.

> > > +	lstat(path, &server_socket->st_socket);
> > 
> > This lstat I guess is part of your "periodically check to see if we're
> > still the one holding the socket" strategy. We _shouldn't_ need that
> > anymore, with the dotlocking, but I'm OK with it as a
> > belt-and-suspenders check. But why are we filling in the lstat here?
> > This seems like something that the unix-socket code doesn't really need
> > to know about (though you do at least provide the complementary
> > "was_stolen" function here, so that part makes sense).
> 
> The dotlock is only on disk for the duration of the socket setup.
> We do the rollback (to delete the lockfile) once we have the socket
> open and ready for business.
> 
> The lstat gives me the inode of the socket on disk and we can watch
> it with future lstat's in the event loop and see if it changes and
> detect theft and auto-shutdown.

Right, I gradually came to the understanding of what your extra layer
was trying to accomplish while reading (sometimes I'll go back and edit
earlier comments in my review before sending out the mail, but in this
case it seemed less confusing to leave my train of thought in place.
That might not have been correct, though. ;) ).

I think if everybody is abiding by the lock system to create the socket,
we probably don't strictly _need_ the theft detection. But it might not
hurt as a belt-and-suspenders, or for cases where somebody thinks the
socket is stale but it isn't (perhaps due to listen backlog or
something while trying to do the connect() ping).
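
(I'd expect the check itself to boil down to roughly this, comparing
against the lstat() taken right after the socket was created; the
function name here is just a placeholder:)

	static int socket_was_stolen(struct unix_stream_server_socket *s)
	{
		struct stat st_now;

		if (lstat(s->path_socket, &st_now) < 0)
			return 1;	/* gone entirely; somebody unlinked it */

		return st_now.st_ino != s->st_socket.st_ino ||
		       st_now.st_dev != s->st_socket.st_dev;
	}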

> > > +void unix_stream_server__free(
> > > +	struct unix_stream_server_socket *server_socket)
> > > +{
> > > +	if (!server_socket)
> > > +		return;
> > > +
> > > +	if (server_socket->fd_socket >= 0) {
> > > +		if (!unix_stream_server__was_stolen(server_socket))
> > > +			unlink(server_socket->path_socket);
> > > +		close(server_socket->fd_socket);
> > > +	}
> > > +
> > > +	free(server_socket->path_socket);
> > > +	free(server_socket);
> > > +}
> > 
> > OK, this makes sense. We only remove it if we're still the ones holding
> > it. That's not done under lock, though, so it's possibly racy (somebody
> > steals from us while _they_ hold the lock; we check and see "not stolen"
> > right before they steal it, and then we unlink their stolen copy).
> 
> Right, I didn't bother with the lock here.  I don't think we need it.
> 
> We technically still have the socket open and are listening on it when
> we lstat and unlink it.  The other process should create the lock and
> try to connect.  That should hang in the kernel because of the accept()
> grace period.  Then we close the socket and the client's connection
> request errors because we didn't accept it.  They will see the error
> as no one is listening and then create their own socket.

I think there are still some races (at least if we believe that anything
can be stolen in the first place). Something like:

  - process A holds the socket but plans to exit

  - process B takes the lock

  - process B tries to ping us, but it doesn't work for some reason
    (this part is vague, but it's also the thing that makes stealing
    possible at all)

  - process A calls was_stolen(), which says "no"

  - process B decides nobody is there, so it unlinks the socket and
    creates its own

  - process A calls unlink(), removing B's socket

A is OK with this; it was exiting anyway. But it just stranded B, who
_thinks_ it owns the socket, but doesn't.

Again, there's a vagueness to "B somehow doesn't see A as listening" in
the middle step. But without that step, I don't see how you'd really
have stealing in the first place.

-Peff

^ permalink raw reply	[flat|nested] 178+ messages in thread

* Re: [PATCH v4 01/12] pkt-line: eliminate the need for static buffer in packet_write_gently()
  2021-03-04 13:29               ` Jeff Hostetler
@ 2021-03-04 20:26                 ` Junio C Hamano
  0 siblings, 0 replies; 178+ messages in thread
From: Junio C Hamano @ 2021-03-04 20:26 UTC (permalink / raw)
  To: Jeff Hostetler
  Cc: Jeff King, Jeff Hostetler via GitGitGadget, git,
	SZEDER Gábor, Johannes Schindelin, Jeff Hostetler

Jeff Hostetler <git@jeffhostetler.com> writes:

> On 3/3/21 2:38 PM, Junio C Hamano wrote:
>
>> I agree with both of you that the code is unnice in its stack usage
>> and we want to fix it with malloc(), or something like that, but sorry, I
>> think I merged this round by mistake to 'next'.
>> As we won't be merging the topic to the upcoming release anyway, I
>> am willing to revert the merge to 'next' and requeue an updated one,
>> when it appears (I am also OK to see an incremental update, "oops,
>> no, we realize we don't want to have it on the stack" fix-up, if
>> this is the only glitch in the series that needs to be fixed).
>
> I'm preparing a follow-on patch series to address Peff's comments
> from Friday/Monday and yours from yesterday.  I thought I'd send
> it as a set of new changes to sit on top of what we have in "next"
> if that would make things easier for you.

Yeah, that is OK, too.  Sorry for the mistake of merging it too
early.


^ permalink raw reply	[flat|nested] 178+ messages in thread

* Re: [PATCH v4 03/12] pkt-line: (optionally) libify the packet readers
  2021-03-04 14:40             ` Jeff King
@ 2021-03-04 20:28               ` Junio C Hamano
  0 siblings, 0 replies; 178+ messages in thread
From: Junio C Hamano @ 2021-03-04 20:28 UTC (permalink / raw)
  To: Jeff King
  Cc: Jeff Hostetler, Johannes Schindelin via GitGitGadget, git,
	SZEDER Gábor, Johannes Schindelin, Jeff Hostetler

Jeff King <peff@peff.net> writes:

> I think the flag is doing sensible things; it's just that the word
> "never" in the name is confusing, since it is "never except this one
> time".
>
> Would PACKET_READ_GENTLE_ON_READ_ERROR be a better name, to match
> GENTLE_ON_EOF? I was tempted to just call it "ON_ERROR", since it also
> includes parsing errors, but maybe somebody would think that includes ERR
> packets (that is more of a stretch, though, I think).
>
> Likewise, I kind of wonder if callers would really prefer suppressing
> the error() calls, too. Saying "error: the remote end hung up
> unexpectedly" is not that helpful if the "remote end" we are talking
> about is fsmonitor, and not the server side of a fetch.

Both sounds sensible.

^ permalink raw reply	[flat|nested] 178+ messages in thread

* Re: [PATCH v4 09/12] unix-socket: disallow chdir() when creating unix domain sockets
  2021-03-04 14:56           ` Jeff King
@ 2021-03-04 20:34             ` Junio C Hamano
  2021-03-04 23:34               ` Junio C Hamano
  2021-03-05 21:30               ` Jeff Hostetler
  0 siblings, 2 replies; 178+ messages in thread
From: Junio C Hamano @ 2021-03-04 20:34 UTC (permalink / raw)
  To: Jeff King
  Cc: Jeff Hostetler via GitGitGadget, git, Jeff Hostetler,
	SZEDER Gábor, Johannes Schindelin, Jeff Hostetler

Jeff King <peff@peff.net> writes:

> The feature is definitely useful; I think I did 1eb10f4091 (unix-socket:
> handle long socket pathnames, 2012-01-09) in response to a real problem.
>
> Certainly callers could handle the error themselves. The reason I pushed
> it down into the socket code was to avoid having to implement it in
> multiple callers. There are only two, but we'd have needed it in both
> sides (credential-cache--daemon as the listener, and credential-cache as
> the client).
>
> Ironically, the listening side now does a permanent chdir() to the
> socket directory anyway, since 6e61449051 (credential-cache--daemon:
> change to the socket dir on startup, 2016-02-23). So we could just do
> that first, and then feed the basename to the socket code.
>
> The client side would still need to handle it, though. It could probably
> also chdir to the socket directory without any real downside (once
> started, I don't think the helper program needs to access the filesystem
> at all outside of the socket).
>
> So I dunno. I'd be OK to just rip the feature out in favor of doing
> those chdir()s. But that seems like a non-zero amount of work versus
> leaving, and the existing code has the benefit that if another caller
> shows up, it could benefit from the feature.

I am OK to keep the series as-is, and leave it to a possible future
work to remove the need for chdir even for long paths and not having
to return an error with ENAMETOOLONG; when such an update happens,
the "fail if need to chdir" feature this patch is adding will become
a no-op.


^ permalink raw reply	[flat|nested] 178+ messages in thread

* Re: [PATCH v4 09/12] unix-socket: disallow chdir() when creating unix domain sockets
  2021-03-04 20:34             ` Junio C Hamano
@ 2021-03-04 23:34               ` Junio C Hamano
  2021-03-05  9:02                 ` Jeff King
  2021-03-05 21:30               ` Jeff Hostetler
  1 sibling, 1 reply; 178+ messages in thread
From: Junio C Hamano @ 2021-03-04 23:34 UTC (permalink / raw)
  To: Jeff King
  Cc: Jeff Hostetler via GitGitGadget, git, Jeff Hostetler,
	SZEDER Gábor, Johannes Schindelin, Jeff Hostetler

Junio C Hamano <gitster@pobox.com> writes:

>> So I dunno. I'd be OK to just rip the feature out in favor of doing
>> those chdir()s. But that seems like a non-zero amount of work versus
>> leaving, and the existing code has the benefit that if another caller
>> shows up, it could benefit from the feature.
>
> I am OK to keep the series as-is, and leave it to a possible future
> work to remove the need for chdir even for long paths and not having
> to return an error with ENAMETOOLONG; when such an update happens,
> the "fail if need to chdir" feature this patch is adding will become
> a no-op.

For example, as this is a UNIX-only codepath, I wonder if something
like this would be a good way to avoid the chdir() that causes the
trouble.

    - obtain an fd from socket(2)
    - check if the path fits in sa->sun_path
      - if it does, bind(2) the fd to the address directly
      - if it does not, fork(2) a child and
        - in the child, chdir(2) there and use the shortened path
          to bind(2), and exit(3)
        - the parent just wait(2)s for the child to return.  By the
          time it dies, the fd would be successfully bound to the
          path.
    - now we have a file descriptor that is bound at that path.


^ permalink raw reply	[flat|nested] 178+ messages in thread

* Re: [PATCH v4 09/12] unix-socket: disallow chdir() when creating unix domain sockets
  2021-03-04 23:34               ` Junio C Hamano
@ 2021-03-05  9:02                 ` Jeff King
  2021-03-05  9:25                   ` Jeff King
  0 siblings, 1 reply; 178+ messages in thread
From: Jeff King @ 2021-03-05  9:02 UTC (permalink / raw)
  To: Junio C Hamano
  Cc: Jeff Hostetler via GitGitGadget, git, Jeff Hostetler,
	SZEDER Gábor, Johannes Schindelin, Jeff Hostetler

On Thu, Mar 04, 2021 at 03:34:07PM -0800, Junio C Hamano wrote:

> Junio C Hamano <gitster@pobox.com> writes:
> 
> >> So I dunno. I'd be OK to just rip the feature out in favor of doing
> >> those chdir()s. But that seems like a non-zero amount of work versus
> >> leaving, and the existing code has the benefit that if another caller
> >> shows up, it could benefit from the feature.
> >
> > I am OK to keep the series as-is, and leave it to a possible future
> > work to remove the need for chdir even for long paths and not having
> > to return an error with ENAMETOOLONG; when such an update happens,
> > the "fail if need to chdir" feature this patch is adding will become
> > a no-op.
> 
> For example, as this is a UNIX-only codepath, I wonder if something
> like this would be a good way to avoid the chdir() that causes the
> trouble.
> 
>     - obtain an fd from socket(2)
>     - check if the path fits in sa->sun_path
>       - if it does, bind(2) the fd to the address directly
>       - if it does not, fork(2) a child and
>         - in the child, chdir(2) there and use the shortened path
>           to bind(2), and exit(3)
>         - the parent just wait(2)s for the child to return.  By the
>           time it dies, the fd would be successfully bound to the
>           path.
>     - now we have a file descriptor that is bound at that path.

If the trouble is that chdir() isn't thread-safe, I wonder if fork()
creates its own headaches. :) I guess libc usually takes care of the
basics with pthread_atfork(), etc, and the child otherwise would not
need to access much data.

I don't know offhand if this trick actually works. I can imagine it
does, but it hinges on the subtlety between an integer descriptor and
the underlying "file description" (the term used in POSIX). Does binding
a socket operate on the former (like close(), which does not close the
parent's descriptor) or the latter (like lseek(), which impacts other
descriptors)?

I'd guess the latter, but I wasn't sure if you were suggesting this from
experience or if you just invented the technique. ;)

-Peff

^ permalink raw reply	[flat|nested] 178+ messages in thread

* Re: [PATCH v4 09/12] unix-socket: disallow chdir() when creating unix domain sockets
  2021-03-05  9:02                 ` Jeff King
@ 2021-03-05  9:25                   ` Jeff King
  2021-03-05 11:59                     ` Chris Torek
  0 siblings, 1 reply; 178+ messages in thread
From: Jeff King @ 2021-03-05  9:25 UTC (permalink / raw)
  To: Junio C Hamano
  Cc: Jeff Hostetler via GitGitGadget, git, Jeff Hostetler,
	SZEDER Gábor, Johannes Schindelin, Jeff Hostetler

On Fri, Mar 05, 2021 at 04:02:16AM -0500, Jeff King wrote:

> I don't know offhand if this trick actually works. I can imagine it
> does, but it hinges on the subtlety between an integer descriptor and
> the underlying "file description" (the term used in POSIX). Does binding
> a socket operate on the former (like close() does not close the parent's
> descriptor) or the latter (like lseek() impacts other descriptors).
> 
> I'd guess the latter, but I wasn't sure if you were suggesting this from
> experience or if you just invented the technique. ;)

I was curious, but this does indeed work:

-- >8 --
#include <sys/types.h>
#include <sys/socket.h>
#include <netdb.h>
#include <sys/wait.h>
#include <unistd.h>
#include <stdlib.h>

int main(void)
{
	int listen_fd, client_fd;
	struct addrinfo *ai;
	pid_t pid;

	getaddrinfo("127.0.0.1", "1234", NULL, &ai);
	listen_fd = socket(ai->ai_family, ai->ai_socktype, ai->ai_protocol);
	pid = fork();
	if (!pid) {
		/* child: bind() the descriptor it shares with the parent */
		bind(listen_fd, ai->ai_addr, ai->ai_addrlen);
		return 0;
	}
	waitpid(pid, NULL, 0);

	/* parent: the underlying socket is now bound; listen/accept as usual */
	listen(listen_fd, 5);
	client_fd = accept(listen_fd, NULL, NULL);
	write(client_fd, "foo\n", 4);
	return 0;
}
-- >8 --

-Peff

^ permalink raw reply	[flat|nested] 178+ messages in thread

* Re: [PATCH v4 09/12] unix-socket: disallow chdir() when creating unix domain sockets
  2021-03-05  9:25                   ` Jeff King
@ 2021-03-05 11:59                     ` Chris Torek
  2021-03-05 17:33                       ` Jeff Hostetler
  0 siblings, 1 reply; 178+ messages in thread
From: Chris Torek @ 2021-03-05 11:59 UTC (permalink / raw)
  To: Jeff King
  Cc: Junio C Hamano, Jeff Hostetler via GitGitGadget, Git List,
	Jeff Hostetler, SZEDER Gábor, Johannes Schindelin,
	Jeff Hostetler

> On Fri, Mar 05, 2021 at 04:02:16AM -0500, Jeff King wrote:
>
> > I don't know offhand if this [bind in a child] trick actually works. ...

On Fri, Mar 5, 2021 at 1:29 AM Jeff King <peff@peff.net> wrote:
> I was curious, but this does indeed work:
[working example snipped]

Yes, it definitely works.  The bind() call, on a Unix domain socket,
creates a file system entity linked to the underlying socket instance.
The file descriptors, in whatever processes have them, provide
read/write/send/recv/etc linkage to the underlying socket instance
(and also a refcount or other GC protection: with the ability to
send sockets over sockets, simple refcounts stop working and we
need real GC in the kernel...).

Of course, once all the file descriptor references are gone, the
socket (eventually, depending on GC) evaporates.  The file system
entity does not count for keeping the underlying socket alive.  At
this point the file system entity is "dead".  Unfortunately there's no
way to test and clean out the dead entity atomically.  The whole
thing is kind of a mess.

Chris

^ permalink raw reply	[flat|nested] 178+ messages in thread

* Re: [PATCH v4 09/12] unix-socket: disallow chdir() when creating unix domain sockets
  2021-03-05 11:59                     ` Chris Torek
@ 2021-03-05 17:33                       ` Jeff Hostetler
  2021-03-05 17:53                         ` Junio C Hamano
  0 siblings, 1 reply; 178+ messages in thread
From: Jeff Hostetler @ 2021-03-05 17:33 UTC (permalink / raw)
  To: Chris Torek, Jeff King
  Cc: Junio C Hamano, Jeff Hostetler via GitGitGadget, Git List,
	SZEDER Gábor, Johannes Schindelin, Jeff Hostetler



On 3/5/21 6:59 AM, Chris Torek wrote:
>> On Fri, Mar 05, 2021 at 04:02:16AM -0500, Jeff King wrote:
>>
>>> I don't know offhand if this [bind in a child] trick actually works. ...
> 
> On Fri, Mar 5, 2021 at 1:29 AM Jeff King <peff@peff.net> wrote:
>> I was curious, but this does indeed work:
> [working example snipped]
> 
> Yes, it definitely works.  The bind() call, on a Unix domain socket,
> creates a file system entity linked to the underlying socket instance.
> The file descriptors, in whatever processes have them, provide
> read/write/send/recv/etc linkage to the underlying socket instance
> (and also a refcount or other GC protection: with the ability to
> send sockets over sockets, simple refcounts stop working and we
> need real GC in the kernel...).
> 
> Of course, once all the file descriptor references are gone, the
> socket (eventually, depending on GC) evaporates.  The file system
> entity does not count for keeping the underlying socket alive.  At
> this point the file system entity is "dead".  Unfortunately there's no
> way to test and clean out the dead entity atomically.  The whole
> thing is kind of a mess.
> 
> Chris
> 

The original problem was that chdir() is not safe in a multi-threaded
process because one thread calling chdir() will affect any concurrent
file operations (open(), mkdir(), etc.) that use relative paths.

I think adding a fork() at this layer would just create new types of 
problems.  For example, if another thread was concurrently writing to
a socket while we were setting up this new socket, we would suddenly
have 1 thread in each process now writing to that socket and the
receiver would get a mixture of output from both processes.  Right?

Jeff



^ permalink raw reply	[flat|nested] 178+ messages in thread

* Re: [PATCH v4 09/12] unix-socket: disallow chdir() when creating unix domain sockets
  2021-03-05 17:33                       ` Jeff Hostetler
@ 2021-03-05 17:53                         ` Junio C Hamano
  0 siblings, 0 replies; 178+ messages in thread
From: Junio C Hamano @ 2021-03-05 17:53 UTC (permalink / raw)
  To: Jeff Hostetler
  Cc: Chris Torek, Jeff King, Jeff Hostetler via GitGitGadget,
	Git List, SZEDER Gábor, Johannes Schindelin, Jeff Hostetler

Jeff Hostetler <git@jeffhostetler.com> writes:

> The original problem was that chdir() is not safe in a multi-threaded
> process because one thread calling chdir() will affect any concurrent
> file operations (open(), mkdir(), etc.) that use relative paths.
>
> I think adding a fork() at this layer would just create new types of
> problems.  For example, if another thread was concurrently writing to
> a socket while we were setting up this new socket, we would suddenly
> have 1 thread in each process now writing to that socket and the
> receiver would get a mixture of output from both processes.  Right?

cf. https://pubs.opengroup.org/onlinepubs/9699919799/functions/fork.html

The fork() function shall create a new process. The new process
(child process) shall be an exact copy of the calling process
(parent process) except as detailed below:

...

 * A process shall be created with a single thread. If a
   multi-threaded process calls fork(), the new process shall
   contain a replica of the calling thread and its entire address
   space, possibly including the states of mutexes and other
   resources. Consequently, to avoid errors, the child process may
   only execute async-signal-safe operations until such time as one
   of the exec functions is called.

So, probably not.

^ permalink raw reply	[flat|nested] 178+ messages in thread

* Re: [PATCH v4 09/12] unix-socket: disallow chdir() when creating unix domain sockets
  2021-03-04 20:34             ` Junio C Hamano
  2021-03-04 23:34               ` Junio C Hamano
@ 2021-03-05 21:30               ` Jeff Hostetler
  2021-03-05 21:52                 ` Junio C Hamano
  1 sibling, 1 reply; 178+ messages in thread
From: Jeff Hostetler @ 2021-03-05 21:30 UTC (permalink / raw)
  To: Junio C Hamano, Jeff King
  Cc: Jeff Hostetler via GitGitGadget, git, SZEDER Gábor,
	Johannes Schindelin, Jeff Hostetler



On 3/4/21 3:34 PM, Junio C Hamano wrote:
> Jeff King <peff@peff.net> writes:
> 
>> The feature is definitely useful; I think I did 1eb10f4091 (unix-socket:
>> handle long socket pathnames, 2012-01-09) in response to a real problem.
>>
>> Certainly callers could handle the error themselves. The reason I pushed
>> it down into the socket code was to avoid having to implement it in
>> multiple callers. There are only two, but we'd have needed it in both
>> sides (credential-cache--daemon as the listener, and credential-cache as
>> the client).
>>
>> Ironically, the listening side now does a permanent chdir() to the
>> socket directory anyway, since 6e61449051 (credential-cache--daemon:
>> change to the socket dir on startup, 2016-02-23). So we could just do
>> that first, and then feed the basename to the socket code.
>>
>> The client side would still need to handle it, though. It could probably
>> also chdir to the socket directory without any real downside (once
>> started, I don't think the helper program needs to access the filesystem
>> at all outside of the socket).
>>
>> So I dunno. I'd be OK to just rip the feature out in favor of doing
>> those chdir()s. But that seems like a non-zero amount of work versus
>> leaving, and the existing code has the benefit that if another caller
>> shows up, it could benefit from the feature.
> 
> I am OK to keep the series as-is, and leave it to a possible future
> work to remove the need for chdir even for long paths and not having
> to return an error with ENAMETOOLONG; when such an update happens,
> the "fail if need to chdir" feature this patch is adding will become
> a no-op.
> 

I think I'd like to keep things as I have them now with the "disallow
chdir()" option bit and save the "fork() / bind()" solution for a
later patch series.  Simple IPC is large enough as it is and the new
ENAMETOOLONG error will only affect callers who set the bit.  A later
patch series can easily test and confirm the "fork() / bind()" solution
in isolation and test it on the other Unix hosts and then remove the
bit from those callers (if we want).

Jeff

^ permalink raw reply	[flat|nested] 178+ messages in thread

* Re: [PATCH v4 09/12] unix-socket: disallow chdir() when creating unix domain sockets
  2021-03-05 21:30               ` Jeff Hostetler
@ 2021-03-05 21:52                 ` Junio C Hamano
  0 siblings, 0 replies; 178+ messages in thread
From: Junio C Hamano @ 2021-03-05 21:52 UTC (permalink / raw)
  To: Jeff Hostetler
  Cc: Jeff King, Jeff Hostetler via GitGitGadget, git,
	SZEDER Gábor, Johannes Schindelin, Jeff Hostetler

Jeff Hostetler <git@jeffhostetler.com> writes:

>> I am OK to keep the series as-is, and leave it to a possible future
>> work to remove the need for chdir even for long paths and not having
>> to return an error with ENAMETOOLONG; when such an update happens,
>> the "fail if need to chdir" feature this patch is adding will become
>> a no-op.
>
> I think I'd like to keep things as I have them now with the "disallow
> chdir()" option bit

So we are on the same page.

> and save the "fork() / bind()" solution for a
> later patch series.  Simple IPC is large enough as it is and the new
> ENAMETOOLONG error will only affect callers who set the bit.  A later
> patch series can easily test and confirm the "fork() / bind() solution
> in isolation and test it on the other Unix hosts and then remove the
> bit from those callers (if we want).

The bit will then become an unused API relic, but that is OK (I do
not think fork/bind would be the best and/or only way to avoid
chdir, though, but it won't matter in the context of this
discussion).


^ permalink raw reply	[flat|nested] 178+ messages in thread

* [PATCH v5 00/12] Simple IPC Mechanism
  2021-02-17 21:48     ` [PATCH v4 00/12] Simple IPC Mechanism Jeff Hostetler via GitGitGadget
                         ` (12 preceding siblings ...)
  2021-02-25 19:39       ` [PATCH v4 00/12] Simple IPC Mechanism Junio C Hamano
@ 2021-03-09 15:02       ` Jeff Hostetler via GitGitGadget
  2021-03-09 15:02         ` [PATCH v5 01/12] pkt-line: eliminate the need for static buffer in packet_write_gently() Jeff Hostetler via GitGitGadget
                           ` (13 more replies)
  13 siblings, 14 replies; 178+ messages in thread
From: Jeff Hostetler via GitGitGadget @ 2021-03-09 15:02 UTC (permalink / raw)
  To: git
  Cc: Ævar Arnfjörð Bjarmason, Jeff Hostetler,
	Jeff King, SZEDER Gábor, Johannes Schindelin, Chris Torek,
	Jeff Hostetler

Here is V5 of my "Simple IPC" series. This version is a reiteration of the
series and combines the original "Simple IPC Mechanism V4" series [1] and
the "Simple IPC Cleanups V1" series [2]. It squashes them together and then
includes responses to last minute comments received on both.

These include: (a) Rename PACKET_READ_NEVER_DIE to
PACKET_READ_GENTLE_ON_READ_ERROR. (b) Accept zero length timeout in
unix_stream_socket__connect(). Only supply the default timeout when a
negative value is passed. Make the default timeout value private to the .c
file.
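
Schematically (the constant name below is just a placeholder), the
connect path now does:

	if (timeout_ms < 0)
		timeout_ms = DEFAULT_TIMEOUT_MS;	/* private to the .c file */
	/* a timeout_ms of 0 is now accepted as-is rather than replaced */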

I apologize for rerolling something that is in "next". I think the combined
result is better long term than preserving them as two sequential series.
Given where we are in the release cycle, I thought it best to have a cleaned
up series for consideration post v2.31.

[1] Upstream in jh/simple-ipc via
https://github.com/gitgitgadget/git/pull/766 (and in "next" relative to
v2.30.1)
https://lore.kernel.org/git/pull.766.v4.git.1613598529.gitgitgadget@gmail.com/T/#mbd1da5ff93ef273049090f697aeab68c74f698f1

[2] Upstream in `jh/simple-ipc-cleanups via
https://github.com/gitgitgadget/git/pull/893
https://lore.kernel.org/git/8ea6401c-6ee6-94cb-4e33-9dfffaf466e8@jeffhostetler.com/T/#t

Jeff Hostetler (9):
  pkt-line: eliminate the need for static buffer in
    packet_write_gently()
  simple-ipc: design documentation for new IPC mechanism
  simple-ipc: add win32 implementation
  unix-socket: eliminate static unix_stream_socket() helper function
  unix-socket: add backlog size option to unix_stream_listen()
  unix-socket: disallow chdir() when creating unix domain sockets
  unix-stream-server: create unix domain socket under lock
  simple-ipc: add Unix domain socket implementation
  t0052: add simple-ipc tests and t/helper/test-simple-ipc tool

Johannes Schindelin (3):
  pkt-line: do not issue flush packets in write_packetized_*()
  pkt-line: add PACKET_READ_GENTLE_ON_READ_ERROR option
  pkt-line: add options argument to read_packetized_to_strbuf()

 Documentation/technical/api-simple-ipc.txt | 105 +++
 Makefile                                   |   9 +
 builtin/credential-cache--daemon.c         |   3 +-
 builtin/credential-cache.c                 |   2 +-
 compat/simple-ipc/ipc-shared.c             |  28 +
 compat/simple-ipc/ipc-unix-socket.c        | 986 +++++++++++++++++++++
 compat/simple-ipc/ipc-win32.c              | 751 ++++++++++++++++
 config.mak.uname                           |   2 +
 contrib/buildsystems/CMakeLists.txt        |   8 +-
 convert.c                                  |  11 +-
 pkt-line.c                                 |  58 +-
 pkt-line.h                                 |  17 +-
 simple-ipc.h                               | 239 +++++
 t/helper/test-simple-ipc.c                 | 787 ++++++++++++++++
 t/helper/test-tool.c                       |   1 +
 t/helper/test-tool.h                       |   1 +
 t/t0052-simple-ipc.sh                      | 122 +++
 unix-socket.c                              |  53 +-
 unix-socket.h                              |  12 +-
 unix-stream-server.c                       | 128 +++
 unix-stream-server.h                       |  36 +
 21 files changed, 3307 insertions(+), 52 deletions(-)
 create mode 100644 Documentation/technical/api-simple-ipc.txt
 create mode 100644 compat/simple-ipc/ipc-shared.c
 create mode 100644 compat/simple-ipc/ipc-unix-socket.c
 create mode 100644 compat/simple-ipc/ipc-win32.c
 create mode 100644 simple-ipc.h
 create mode 100644 t/helper/test-simple-ipc.c
 create mode 100755 t/t0052-simple-ipc.sh
 create mode 100644 unix-stream-server.c
 create mode 100644 unix-stream-server.h


base-commit: f01623b2c9d14207e497b21ebc6b3ec4afaf4b46
Published-As: https://github.com/gitgitgadget/git/releases/tag/pr-766%2Fjeffhostetler%2Fsimple-ipc-v5
Fetch-It-Via: git fetch https://github.com/gitgitgadget/git pr-766/jeffhostetler/simple-ipc-v5
Pull-Request: https://github.com/gitgitgadget/git/pull/766

Range-diff vs v4:

  1:  2d6858b1625a !  1:  311ea4a5cd71 pkt-line: eliminate the need for static buffer in packet_write_gently()
     @@ Commit message
          static buffer, thread-safe scratch space, or an excessively large stack
          buffer.
      
     -    Change the API of `write_packetized_from_fd()` to accept a scratch space
     -    argument from its caller to avoid similar issues here.
     +    Change `write_packetized_from_fd()` to allocate a temporary buffer rather
     +    than using a static buffer to avoid similar issues here.
      
          These changes are intended to make it easier to use pkt-line routines in
          a multi-threaded context with multiple concurrent writers writing to
     @@ Commit message
      
          Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
      
     - ## convert.c ##
     -@@ convert.c: static int apply_multi_file_filter(const char *path, const char *src, size_t len
     - 	if (err)
     - 		goto done;
     - 
     --	if (fd >= 0)
     --		err = write_packetized_from_fd(fd, process->in);
     --	else
     -+	if (fd >= 0) {
     -+		struct packet_scratch_space scratch;
     -+		err = write_packetized_from_fd(fd, process->in, &scratch);
     -+	} else
     - 		err = write_packetized_from_buf(src, len, process->in);
     - 	if (err)
     - 		goto done;
     -
       ## pkt-line.c ##
      @@ pkt-line.c: int packet_write_fmt_gently(int fd, const char *fmt, ...)
       
     @@ pkt-line.c: int packet_write_fmt_gently(int fd, const char *fmt, ...)
       	return 0;
       }
      @@ pkt-line.c: void packet_buf_write_len(struct strbuf *buf, const char *data, size_t len)
     - 	packet_trace(data, len, 1);
     - }
       
     --int write_packetized_from_fd(int fd_in, int fd_out)
     -+int write_packetized_from_fd(int fd_in, int fd_out,
     -+			     struct packet_scratch_space *scratch)
     + int write_packetized_from_fd(int fd_in, int fd_out)
       {
      -	static char buf[LARGE_PACKET_DATA_MAX];
     ++	char *buf = xmalloc(LARGE_PACKET_DATA_MAX);
       	int err = 0;
       	ssize_t bytes_to_write;
       
       	while (!err) {
      -		bytes_to_write = xread(fd_in, buf, sizeof(buf));
     -+		bytes_to_write = xread(fd_in, scratch->buffer,
     -+				       sizeof(scratch->buffer));
     - 		if (bytes_to_write < 0)
     +-		if (bytes_to_write < 0)
     ++		bytes_to_write = xread(fd_in, buf, LARGE_PACKET_DATA_MAX);
     ++		if (bytes_to_write < 0) {
     ++			free(buf);
       			return COPY_READ_ERROR;
     ++		}
       		if (bytes_to_write == 0)
       			break;
     --		err = packet_write_gently(fd_out, buf, bytes_to_write);
     -+		err = packet_write_gently(fd_out, scratch->buffer,
     -+					  bytes_to_write);
     + 		err = packet_write_gently(fd_out, buf, bytes_to_write);
       	}
       	if (!err)
       		err = packet_flush_gently(fd_out);
     -
     - ## pkt-line.h ##
     -@@
     - #include "strbuf.h"
     - #include "sideband.h"
     - 
     -+#define LARGE_PACKET_MAX 65520
     -+#define LARGE_PACKET_DATA_MAX (LARGE_PACKET_MAX - 4)
     -+
     -+struct packet_scratch_space {
     -+	char buffer[LARGE_PACKET_DATA_MAX]; /* does not include header bytes */
     -+};
     -+
     - /*
     -  * Write a packetized stream, where each line is preceded by
     -  * its length (including the header) as a 4-byte hex number.
     -@@ pkt-line.h: void packet_buf_write(struct strbuf *buf, const char *fmt, ...) __attribute__((f
     - void packet_buf_write_len(struct strbuf *buf, const char *data, size_t len);
     - int packet_flush_gently(int fd);
     - int packet_write_fmt_gently(int fd, const char *fmt, ...) __attribute__((format (printf, 2, 3)));
     --int write_packetized_from_fd(int fd_in, int fd_out);
     -+int write_packetized_from_fd(int fd_in, int fd_out, struct packet_scratch_space *scratch);
     - int write_packetized_from_buf(const char *src_in, size_t len, int fd_out);
     - 
     - /*
     -@@ pkt-line.h: enum packet_read_status packet_reader_read(struct packet_reader *reader);
     - enum packet_read_status packet_reader_peek(struct packet_reader *reader);
     - 
     - #define DEFAULT_PACKET_MAX 1000
     --#define LARGE_PACKET_MAX 65520
     --#define LARGE_PACKET_DATA_MAX (LARGE_PACKET_MAX - 4)
     -+
     - extern char packet_buffer[LARGE_PACKET_MAX];
     ++	free(buf);
     + 	return err;
     + }
       
     - struct packet_writer {
  2:  91a9f63d6692 !  2:  25157c1f4873 pkt-line: do not issue flush packets in write_packetized_*()
     @@ Commit message
      
       ## convert.c ##
      @@ convert.c: static int apply_multi_file_filter(const char *path, const char *src, size_t len
     + 		goto done;
       
     - 	if (fd >= 0) {
     - 		struct packet_scratch_space scratch;
     --		err = write_packetized_from_fd(fd, process->in, &scratch);
     -+		err = write_packetized_from_fd_no_flush(fd, process->in, &scratch);
     - 	} else
     + 	if (fd >= 0)
     +-		err = write_packetized_from_fd(fd, process->in);
     ++		err = write_packetized_from_fd_no_flush(fd, process->in);
     + 	else
      -		err = write_packetized_from_buf(src, len, process->in);
      +		err = write_packetized_from_buf_no_flush(src, len, process->in);
      +	if (err)
     @@ pkt-line.c: void packet_buf_write_len(struct strbuf *buf, const char *data, size
       	packet_trace(data, len, 1);
       }
       
     --int write_packetized_from_fd(int fd_in, int fd_out,
     --			     struct packet_scratch_space *scratch)
     -+int write_packetized_from_fd_no_flush(int fd_in, int fd_out,
     -+				      struct packet_scratch_space *scratch)
     +-int write_packetized_from_fd(int fd_in, int fd_out)
     ++int write_packetized_from_fd_no_flush(int fd_in, int fd_out)
       {
     + 	char *buf = xmalloc(LARGE_PACKET_DATA_MAX);
       	int err = 0;
     - 	ssize_t bytes_to_write;
     -@@ pkt-line.c: int write_packetized_from_fd(int fd_in, int fd_out,
     - 		err = packet_write_gently(fd_out, scratch->buffer,
     - 					  bytes_to_write);
     +@@ pkt-line.c: int write_packetized_from_fd(int fd_in, int fd_out)
     + 			break;
     + 		err = packet_write_gently(fd_out, buf, bytes_to_write);
       	}
      -	if (!err)
      -		err = packet_flush_gently(fd_out);
     + 	free(buf);
       	return err;
       }
       
     @@ pkt-line.h: void packet_buf_write(struct strbuf *buf, const char *fmt, ...) __at
       void packet_buf_write_len(struct strbuf *buf, const char *data, size_t len);
       int packet_flush_gently(int fd);
       int packet_write_fmt_gently(int fd, const char *fmt, ...) __attribute__((format (printf, 2, 3)));
     --int write_packetized_from_fd(int fd_in, int fd_out, struct packet_scratch_space *scratch);
     +-int write_packetized_from_fd(int fd_in, int fd_out);
      -int write_packetized_from_buf(const char *src_in, size_t len, int fd_out);
     -+int write_packetized_from_fd_no_flush(int fd_in, int fd_out, struct packet_scratch_space *scratch);
     ++int write_packetized_from_fd_no_flush(int fd_in, int fd_out);
      +int write_packetized_from_buf_no_flush(const char *src_in, size_t len, int fd_out);
       
       /*
  3:  e05467def4e1 !  3:  af3d13113bc9 pkt-line: (optionally) libify the packet readers
     @@ Metadata
      Author: Johannes Schindelin <Johannes.Schindelin@gmx.de>
      
       ## Commit message ##
     -    pkt-line: (optionally) libify the packet readers
     +    pkt-line: add PACKET_READ_GENTLE_ON_READ_ERROR option
     +
     +    Introduce PACKET_READ_GENTLE_ON_READ_ERROR option to help libify the
     +    packet readers.
      
          So far, the (possibly indirect) callers of `get_packet_data()` can ask
          that function to return an error instead of `die()`ing upon end-of-file.
          However, random read errors will still cause the process to die.
      
          So let's introduce an explicit option to tell the packet reader
     -    machinery to please be nice and only return an error.
     +    machinery to please be nice and only return an error on read errors.
      
          This change prepares pkt-line for use by long-running daemon processes.
          Such processes should be able to serve multiple concurrent clients and
     @@ Commit message
          a daemon should be able to drop that connection and continue serving
          existing and future connections.
      
     -    This ability will be used by a Git-aware "Internal FSMonitor" feature
     +    This ability will be used by a Git-aware "Builtin FSMonitor" feature
          in a later patch series.
      
          Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
     +    Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
      
       ## pkt-line.c ##
      @@ pkt-line.c: static int get_packet_data(int fd, char **src_buf, size_t *src_size,
     @@ pkt-line.c: static int get_packet_data(int fd, char **src_buf, size_t *src_size,
       		ret = read_in_full(fd, dst, size);
      -		if (ret < 0)
      +		if (ret < 0) {
     -+			if (options & PACKET_READ_NEVER_DIE)
     ++			if (options & PACKET_READ_GENTLE_ON_READ_ERROR)
      +				return error_errno(_("read error"));
       			die_errno(_("read error"));
      +		}
     @@ pkt-line.c: static int get_packet_data(int fd, char **src_buf, size_t *src_size,
       		if (options & PACKET_READ_GENTLE_ON_EOF)
       			return -1;
       
     -+		if (options & PACKET_READ_NEVER_DIE)
     ++		if (options & PACKET_READ_GENTLE_ON_READ_ERROR)
      +			return error(_("the remote end hung up unexpectedly"));
       		die(_("the remote end hung up unexpectedly"));
       	}
     @@ pkt-line.c: enum packet_read_status packet_read_with_status(int fd, char **src_b
       	len = packet_length(linelen);
       
       	if (len < 0) {
     -+		if (options & PACKET_READ_NEVER_DIE)
     ++		if (options & PACKET_READ_GENTLE_ON_READ_ERROR)
      +			return error(_("protocol error: bad line length "
      +				       "character: %.4s"), linelen);
       		die(_("protocol error: bad line length character: %.4s"), linelen);
     @@ pkt-line.c: enum packet_read_status packet_read_with_status(int fd, char **src_b
       		*pktlen = 0;
       		return PACKET_READ_RESPONSE_END;
       	} else if (len < 4) {
     -+		if (options & PACKET_READ_NEVER_DIE)
     ++		if (options & PACKET_READ_GENTLE_ON_READ_ERROR)
      +			return error(_("protocol error: bad line length %d"),
      +				     len);
       		die(_("protocol error: bad line length %d"), len);
     @@ pkt-line.c: enum packet_read_status packet_read_with_status(int fd, char **src_b
       	len -= 4;
      -	if ((unsigned)len >= size)
      +	if ((unsigned)len >= size) {
     -+		if (options & PACKET_READ_NEVER_DIE)
     ++		if (options & PACKET_READ_GENTLE_ON_READ_ERROR)
      +			return error(_("protocol error: bad line length %d"),
      +				     len);
       		die(_("protocol error: bad line length %d"), len);
     @@ pkt-line.h: int write_packetized_from_buf_no_flush(const char *src_in, size_t le
        * If options contains PACKET_READ_DIE_ON_ERR_PACKET, it dies when it sees an
        * ERR packet.
      + *
     -+ * With `PACKET_READ_NEVER_DIE`, no errors are allowed to trigger die() (except
     -+ * an ERR packet, when `PACKET_READ_DIE_ON_ERR_PACKET` is in effect).
     ++ * If options contains PACKET_READ_GENTLE_ON_READ_ERROR, we will not die
     ++ * on read errors, but instead return -1.  However, we may still die on an
     ++ * ERR packet (if requested).
        */
     - #define PACKET_READ_GENTLE_ON_EOF     (1u<<0)
     - #define PACKET_READ_CHOMP_NEWLINE     (1u<<1)
     - #define PACKET_READ_DIE_ON_ERR_PACKET (1u<<2)
     -+#define PACKET_READ_NEVER_DIE         (1u<<3)
     +-#define PACKET_READ_GENTLE_ON_EOF     (1u<<0)
     +-#define PACKET_READ_CHOMP_NEWLINE     (1u<<1)
     +-#define PACKET_READ_DIE_ON_ERR_PACKET (1u<<2)
     ++#define PACKET_READ_GENTLE_ON_EOF        (1u<<0)
     ++#define PACKET_READ_CHOMP_NEWLINE        (1u<<1)
     ++#define PACKET_READ_DIE_ON_ERR_PACKET    (1u<<2)
     ++#define PACKET_READ_GENTLE_ON_READ_ERROR (1u<<3)
       int packet_read(int fd, char **src_buffer, size_t *src_len, char
       		*buffer, unsigned size, int options);
       
  4:  81e14bed955c =  4:  b73e66a69b61 pkt-line: add options argument to read_packetized_to_strbuf()
  5:  22eec60761a8 <  -:  ------------ simple-ipc: design documentation for new IPC mechanism
  -:  ------------ >  5:  1ae99d824a21 simple-ipc: design documentation for new IPC mechanism
  6:  171ec43ecfa4 !  6:  8b3ce40e4538 simple-ipc: add win32 implementation
     @@ compat/simple-ipc/ipc-win32.c (new)
      +
      +	if (read_packetized_to_strbuf(
      +		    connection->fd, answer,
     -+		    PACKET_READ_GENTLE_ON_EOF | PACKET_READ_NEVER_DIE) < 0) {
     ++		    PACKET_READ_GENTLE_ON_EOF | PACKET_READ_GENTLE_ON_READ_ERROR) < 0) {
      +		ret = error(_("could not read IPC response"));
      +		goto done;
      +	}
     @@ compat/simple-ipc/ipc-win32.c (new)
      +
      +	ret = read_packetized_to_strbuf(
      +		reply_data.fd, &buf,
     -+		PACKET_READ_GENTLE_ON_EOF | PACKET_READ_NEVER_DIE);
     ++		PACKET_READ_GENTLE_ON_EOF | PACKET_READ_GENTLE_ON_READ_ERROR);
      +	if (ret >= 0) {
      +		ret = server_thread_data->server_data->application_cb(
      +			server_thread_data->server_data->application_data,
     @@ compat/simple-ipc/ipc-win32.c (new)
      +	*returned_server_data = NULL;
      +
      +	ret = initialize_pipe_name(path, wpath, ARRAY_SIZE(wpath));
     -+	if (ret < 0)
     -+		return error(
     -+			_("could not create normalized wchar_t path for '%s'"),
     -+			path);
     ++	if (ret < 0) {
     ++		errno = EINVAL;
     ++		return -1;
     ++	}
      +
      +	hPipeFirst = create_new_pipe(wpath, 1);
     -+	if (hPipeFirst == INVALID_HANDLE_VALUE)
     -+		return error(_("IPC server already running on '%s'"), path);
     ++	if (hPipeFirst == INVALID_HANDLE_VALUE) {
     ++		errno = EADDRINUSE;
     ++		return -2;
     ++	}
      +
      +	server_data = xcalloc(1, sizeof(*server_data));
      +	server_data->magic = MAGIC_SERVER_DATA;
     @@ simple-ipc.h (new)
      + *
      + * Returns 0 if the asynchronous server pool was started successfully.
      + * Returns -1 if not.
     ++ * Returns -2 if we could not startup because another server is using
     ++ * the socket or named pipe.
      + *
      + * When a client IPC message is received, the `application_cb` will be
      + * called (possibly on a random thread) to handle the message and
     @@ simple-ipc.h (new)
      + *
      + * Returns 0 after the server has completed successfully.
      + * Returns -1 if the server cannot be started.
     ++ * Returns -2 if we could not startup because another server is using
     ++ * the socket or named pipe.
      + *
      + * When a client IPC message is received, the `application_cb` will be
      + * called (possibly on a random thread) to handle the message and
  7:  b368318e6a23 !  7:  34df1af98e5b unix-socket: elimiate static unix_stream_socket() helper function
     @@ Metadata
      Author: Jeff Hostetler <jeffhost@microsoft.com>
      
       ## Commit message ##
     -    unix-socket: elimiate static unix_stream_socket() helper function
     +    unix-socket: eliminate static unix_stream_socket() helper function
      
          The static helper function `unix_stream_socket()` calls `die()`.  This
          is not appropriate for all callers.  Eliminate the wrapper function
  8:  985b2e02b2df !  8:  d6ff6e0e050a unix-socket: add backlog size option to unix_stream_listen()
     @@ builtin/credential-cache--daemon.c: static int serve_cache_loop(int fd)
       
      
       ## unix-socket.c ##
     +@@
     + #include "cache.h"
     + #include "unix-socket.h"
     + 
     ++#define DEFAULT_UNIX_STREAM_LISTEN_BACKLOG (5)
     ++
     + static int chdir_len(const char *orig, int len)
     + {
     + 	char *path = xmemdupz(orig, len);
      @@ unix-socket.c: int unix_stream_connect(const char *path)
       	return -1;
       }
     @@ unix-socket.h
      +	int listen_backlog_size;
      +};
      +
     -+#define DEFAULT_UNIX_STREAM_LISTEN_BACKLOG (5)
     -+
     -+#define UNIX_STREAM_LISTEN_OPTS_INIT \
     -+{ \
     -+	.listen_backlog_size = DEFAULT_UNIX_STREAM_LISTEN_BACKLOG, \
     -+}
     ++#define UNIX_STREAM_LISTEN_OPTS_INIT { 0 }
      +
       int unix_stream_connect(const char *path);
      -int unix_stream_listen(const char *path);
  9:  1bfa36409d07 !  9:  21b8d3c63dbf unix-socket: disallow chdir() when creating unix domain sockets
     @@ unix-socket.h
      +	unsigned int disallow_chdir:1;
       };
       
     - #define DEFAULT_UNIX_STREAM_LISTEN_BACKLOG (5)
     -@@ unix-socket.h: struct unix_stream_listen_opts {
     - #define UNIX_STREAM_LISTEN_OPTS_INIT \
     - { \
     - 	.listen_backlog_size = DEFAULT_UNIX_STREAM_LISTEN_BACKLOG, \
     -+	.disallow_chdir = 0, \
     - }
     + #define UNIX_STREAM_LISTEN_OPTS_INIT { 0 }
       
      -int unix_stream_connect(const char *path);
      +int unix_stream_connect(const char *path, int disallow_chdir);
 10:  b443e11ac32f ! 10:  1ee9de55a106 unix-socket: create `unix_stream_server__listen_with_lock()`
     @@ Metadata
      Author: Jeff Hostetler <jeffhost@microsoft.com>
      
       ## Commit message ##
     -    unix-socket: create `unix_stream_server__listen_with_lock()`
     +    unix-stream-server: create unix domain socket under lock
      
     -    Create a version of `unix_stream_listen()` that uses a ".lock" lockfile
     -    to create the unix domain socket in a race-free manner.
     +    Create a wrapper class for `unix_stream_listen()` that uses a ".lock"
     +    lockfile to create the unix domain socket in a race-free manner.
      
          Unix domain sockets have a fundamental problem on Unix systems because
          they persist in the filesystem until they are deleted.  This is
     @@ Commit message
      
          As an alternative, we hold a plain lockfile ("<path>.lock") as a
          mutual exclusion device.  Under the lock, we test if an existing
     -    socket ("<path>") is has an active server.  If not, create a new
     -    socket and begin listening.  Then we rollback the lockfile in all
     -    cases.
     +    socket ("<path>") is has an active server.  If not, we create a new
     +    socket and begin listening.  Then we use "rollback" to delete the
     +    lockfile in all cases.
     +
     +    This wrapper code conceptually exists at a higher-level than the core
     +    unix_stream_connect() and unix_stream_listen() routines that it
     +    consumes.  It is isolated in a wrapper class for clarity.
      
          Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
      
     - ## unix-socket.c ##
     + ## Makefile ##
     +@@ Makefile: ifdef NO_UNIX_SOCKETS
     + 	BASIC_CFLAGS += -DNO_UNIX_SOCKETS
     + else
     + 	LIB_OBJS += unix-socket.o
     ++	LIB_OBJS += unix-stream-server.o
     + endif
     + 
     + ifdef USE_WIN32_IPC
     +
     + ## contrib/buildsystems/CMakeLists.txt ##
     +@@ contrib/buildsystems/CMakeLists.txt: if(CMAKE_SYSTEM_NAME STREQUAL "Windows")
     + 
     + elseif(CMAKE_SYSTEM_NAME STREQUAL "Linux")
     + 	add_compile_definitions(PROCFS_EXECUTABLE_PATH="/proc/self/exe" HAVE_DEV_TTY )
     +-	list(APPEND compat_SOURCES unix-socket.c)
     ++	list(APPEND compat_SOURCES unix-socket.c unix-stream-server.c)
     + endif()
     + 
     + if(CMAKE_SYSTEM_NAME STREQUAL "Windows")
     +
     + ## unix-stream-server.c (new) ##
      @@
     - #include "cache.h"
     ++#include "cache.h"
      +#include "lockfile.h"
     - #include "unix-socket.h"
     - 
     - static int chdir_len(const char *orig, int len)
     -@@ unix-socket.c: int unix_stream_listen(const char *path,
     - 	errno = saved_errno;
     - 	return -1;
     - }
     ++#include "unix-socket.h"
     ++#include "unix-stream-server.h"
      +
     ++#define DEFAULT_LOCK_TIMEOUT (100)
     ++
     ++/*
     ++ * Try to connect to a unix domain socket at `path` (if it exists) and
     ++ * see if there is a server listening.
     ++ *
     ++ * We don't know if the socket exists, whether a server died and
     ++ * failed to cleanup, or whether we have a live server listening, so
     ++ * we "poke" it.
     ++ *
     ++ * We immediately hangup without sending/receiving any data because we
     ++ * don't know anything about the protocol spoken and don't want to
     ++ * block while writing/reading data.  It is sufficient to just know
     ++ * that someone is listening.
     ++ */
      +static int is_another_server_alive(const char *path,
      +				   const struct unix_stream_listen_opts *opts)
      +{
     -+	struct stat st;
     -+	int fd;
     -+
     -+	if (!lstat(path, &st) && S_ISSOCK(st.st_mode)) {
     -+		/*
     -+		 * A socket-inode exists on disk at `path`, but we
     -+		 * don't know whether it belongs to an active server
     -+		 * or whether the last server died without cleaning
     -+		 * up.
     -+		 *
     -+		 * Poke it with a trivial connection to try to find
     -+		 * out.
     -+		 */
     -+		fd = unix_stream_connect(path, opts->disallow_chdir);
     -+		if (fd >= 0) {
     -+			close(fd);
     -+			return 1;
     -+		}
     ++	int fd = unix_stream_connect(path, opts->disallow_chdir);
     ++	if (fd >= 0) {
     ++		close(fd);
     ++		return 1;
      +	}
      +
      +	return 0;
      +}
      +
     -+struct unix_stream_server_socket *unix_stream_server__listen_with_lock(
     ++int unix_stream_server__create(
      +	const char *path,
     -+	const struct unix_stream_listen_opts *opts)
     ++	const struct unix_stream_listen_opts *opts,
     ++	long timeout_ms,
     ++	struct unix_stream_server_socket **new_server_socket)
      +{
      +	struct lock_file lock = LOCK_INIT;
      +	int fd_socket;
      +	struct unix_stream_server_socket *server_socket;
      +
     ++	*new_server_socket = NULL;
     ++
     ++	if (timeout_ms < 0)
     ++		timeout_ms = DEFAULT_LOCK_TIMEOUT;
     ++
      +	/*
      +	 * Create a lock at "<path>.lock" if we can.
      +	 */
     -+	if (hold_lock_file_for_update_timeout(&lock, path, 0,
     -+					      opts->timeout_ms) < 0) {
     -+		error_errno(_("could not lock listener socket '%s'"), path);
     -+		return NULL;
     -+	}
     ++	if (hold_lock_file_for_update_timeout(&lock, path, 0, timeout_ms) < 0)
     ++		return -1;
      +
      +	/*
      +	 * If another server is listening on "<path>" give up.  We do not
      +	 * want to create a socket and steal future connections from them.
      +	 */
      +	if (is_another_server_alive(path, opts)) {
     -+		errno = EADDRINUSE;
     -+		error_errno(_("listener socket already in use '%s'"), path);
      +		rollback_lock_file(&lock);
     -+		return NULL;
     ++		errno = EADDRINUSE;
     ++		return -2;
      +	}
      +
      +	/*
     @@ unix-socket.c: int unix_stream_listen(const char *path,
      +	 */
      +	fd_socket = unix_stream_listen(path, opts);
      +	if (fd_socket < 0) {
     -+		error_errno(_("could not create listener socket '%s'"), path);
     ++		int saved_errno = errno;
      +		rollback_lock_file(&lock);
     -+		return NULL;
     ++		errno = saved_errno;
     ++		return -1;
      +	}
      +
      +	server_socket = xcalloc(1, sizeof(*server_socket));
     @@ unix-socket.c: int unix_stream_listen(const char *path,
      +	server_socket->fd_socket = fd_socket;
      +	lstat(path, &server_socket->st_socket);
      +
     ++	*new_server_socket = server_socket;
     ++
      +	/*
      +	 * Always rollback (just delete) "<path>.lock" because we already created
      +	 * "<path>" as a socket and do not want to commit_lock to do the atomic
     @@ unix-socket.c: int unix_stream_listen(const char *path,
      +	 */
      +	rollback_lock_file(&lock);
      +
     -+	return server_socket;
     ++	return 0;
      +}
      +
      +void unix_stream_server__free(
     @@ unix-socket.c: int unix_stream_listen(const char *path,
      +
      +	if (st_now.st_ino != server_socket->st_socket.st_ino)
      +		return 1;
     ++	if (st_now.st_dev != server_socket->st_socket.st_dev)
     ++		return 1;
      +
     -+	/* We might also consider the ctime on some platforms. */
     ++	if (!S_ISSOCK(st_now.st_mode))
     ++		return 1;
      +
      +	return 0;
      +}
      
     - ## unix-socket.h ##
     + ## unix-stream-server.h (new) ##
      @@
     - #define UNIX_SOCKET_H
     - 
     - struct unix_stream_listen_opts {
     -+	long timeout_ms;
     - 	int listen_backlog_size;
     - 	unsigned int disallow_chdir:1;
     - };
     - 
     -+#define DEFAULT_UNIX_STREAM_LISTEN_TIMEOUT (100)
     - #define DEFAULT_UNIX_STREAM_LISTEN_BACKLOG (5)
     - 
     - #define UNIX_STREAM_LISTEN_OPTS_INIT \
     - { \
     -+	.timeout_ms = DEFAULT_UNIX_STREAM_LISTEN_TIMEOUT, \
     - 	.listen_backlog_size = DEFAULT_UNIX_STREAM_LISTEN_BACKLOG, \
     - 	.disallow_chdir = 0, \
     - }
     -@@ unix-socket.h: int unix_stream_connect(const char *path, int disallow_chdir);
     - int unix_stream_listen(const char *path,
     - 		       const struct unix_stream_listen_opts *opts);
     - 
     ++#ifndef UNIX_STREAM_SERVER_H
     ++#define UNIX_STREAM_SERVER_H
     ++
     ++#include "unix-socket.h"
     ++
      +struct unix_stream_server_socket {
      +	char *path_socket;
      +	struct stat st_socket;
     @@ unix-socket.h: int unix_stream_connect(const char *path, int disallow_chdir);
      +/*
      + * Create a Unix Domain Socket at the given path under the protection
      + * of a '.lock' lockfile.
     ++ *
     ++ * Returns 0 on success, -1 on error, -2 if socket is in use.
      + */
     -+struct unix_stream_server_socket *unix_stream_server__listen_with_lock(
     ++int unix_stream_server__create(
      +	const char *path,
     -+	const struct unix_stream_listen_opts *opts);
     ++	const struct unix_stream_listen_opts *opts,
     ++	long timeout_ms,
     ++	struct unix_stream_server_socket **server_socket);
      +
      +/*
      + * Close and delete the socket.
     @@ unix-socket.h: int unix_stream_connect(const char *path, int disallow_chdir);
      +int unix_stream_server__was_stolen(
      +	struct unix_stream_server_socket *server_socket);
      +
     - #endif /* UNIX_SOCKET_H */
     ++#endif /* UNIX_STREAM_SERVER_H */
 11:  43c8db9a4468 ! 11:  f2e3b046cc8f simple-ipc: add Unix domain socket implementation
     @@ Commit message
      
       ## Makefile ##
      @@ Makefile: ifdef NO_UNIX_SOCKETS
     - 	BASIC_CFLAGS += -DNO_UNIX_SOCKETS
       else
       	LIB_OBJS += unix-socket.o
     + 	LIB_OBJS += unix-stream-server.o
      +	LIB_OBJS += compat/simple-ipc/ipc-shared.o
      +	LIB_OBJS += compat/simple-ipc/ipc-unix-socket.o
       endif
     @@ compat/simple-ipc/ipc-unix-socket.c (new)
      +#include "pkt-line.h"
      +#include "thread-utils.h"
      +#include "unix-socket.h"
     ++#include "unix-stream-server.h"
      +
      +#ifdef NO_UNIX_SOCKETS
      +#error compat/simple-ipc/ipc-unix-socket.c requires Unix sockets
     @@ compat/simple-ipc/ipc-unix-socket.c (new)
      +
      +	if (read_packetized_to_strbuf(
      +		    connection->fd, answer,
     -+		    PACKET_READ_GENTLE_ON_EOF | PACKET_READ_NEVER_DIE) < 0) {
     ++		    PACKET_READ_GENTLE_ON_EOF | PACKET_READ_GENTLE_ON_READ_ERROR) < 0) {
      +		ret = error(_("could not read IPC response"));
      +		goto done;
      +	}
     @@ compat/simple-ipc/ipc-unix-socket.c (new)
      +
      +	ret = read_packetized_to_strbuf(
      +		reply_data.fd, &buf,
     -+		PACKET_READ_GENTLE_ON_EOF | PACKET_READ_NEVER_DIE);
     ++		PACKET_READ_GENTLE_ON_EOF | PACKET_READ_GENTLE_ON_READ_ERROR);
      +	if (ret >= 0) {
      +		ret = worker_thread_data->server_data->application_cb(
      +			worker_thread_data->server_data->application_data,
     @@ compat/simple-ipc/ipc-unix-socket.c (new)
      + */
      +#define LISTEN_BACKLOG (50)
      +
     -+static struct unix_stream_server_socket *create_listener_socket(
     ++static int create_listener_socket(
      +	const char *path,
     -+	const struct ipc_server_opts *ipc_opts)
     ++	const struct ipc_server_opts *ipc_opts,
     ++	struct unix_stream_server_socket **new_server_socket)
      +{
      +	struct unix_stream_server_socket *server_socket = NULL;
      +	struct unix_stream_listen_opts uslg_opts = UNIX_STREAM_LISTEN_OPTS_INIT;
     ++	int ret;
      +
      +	uslg_opts.listen_backlog_size = LISTEN_BACKLOG;
      +	uslg_opts.disallow_chdir = ipc_opts->uds_disallow_chdir;
      +
     -+	server_socket = unix_stream_server__listen_with_lock(path, &uslg_opts);
     -+	if (!server_socket)
     -+		return NULL;
     ++	ret = unix_stream_server__create(path, &uslg_opts, -1, &server_socket);
     ++	if (ret)
     ++		return ret;
      +
      +	if (set_socket_blocking_flag(server_socket->fd_socket, 1)) {
      +		int saved_errno = errno;
     -+		error_errno(_("could not set listener socket nonblocking '%s'"),
     -+			    path);
      +		unix_stream_server__free(server_socket);
      +		errno = saved_errno;
     -+		return NULL;
     ++		return -1;
      +	}
      +
     ++	*new_server_socket = server_socket;
     ++
      +	trace2_data_string("ipc-server", NULL, "listen-with-lock", path);
     -+	return server_socket;
     ++	return 0;
      +}
      +
     -+static struct unix_stream_server_socket *setup_listener_socket(
     ++static int setup_listener_socket(
      +	const char *path,
     -+	const struct ipc_server_opts *ipc_opts)
     ++	const struct ipc_server_opts *ipc_opts,
     ++	struct unix_stream_server_socket **new_server_socket)
      +{
     -+	struct unix_stream_server_socket *server_socket;
     ++	int ret, saved_errno;
      +
      +	trace2_region_enter("ipc-server", "create-listener_socket", NULL);
     -+	server_socket = create_listener_socket(path, ipc_opts);
     ++
     ++	ret = create_listener_socket(path, ipc_opts, new_server_socket);
     ++
     ++	saved_errno = errno;
      +	trace2_region_leave("ipc-server", "create-listener_socket", NULL);
     ++	errno = saved_errno;
      +
     -+	return server_socket;
     ++	return ret;
      +}
      +
      +/*
     @@ compat/simple-ipc/ipc-unix-socket.c (new)
      +	struct ipc_server_data *server_data;
      +	int sv[2];
      +	int k;
     ++	int ret;
      +	int nr_threads = opts->nr_threads;
      +
      +	*returned_server_data = NULL;
     @@ compat/simple-ipc/ipc-unix-socket.c (new)
      +	 * connection or a shutdown request without spinning.
      +	 */
      +	if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0)
     -+		return error_errno(_("could not create socketpair for '%s'"),
     -+				   path);
     ++		return -1;
      +
      +	if (set_socket_blocking_flag(sv[1], 1)) {
      +		int saved_errno = errno;
      +		close(sv[0]);
      +		close(sv[1]);
      +		errno = saved_errno;
     -+		return error_errno(_("making socketpair nonblocking '%s'"),
     -+				   path);
     ++		return -1;
      +	}
      +
     -+	server_socket = setup_listener_socket(path, opts);
     -+	if (!server_socket) {
     ++	ret = setup_listener_socket(path, opts, &server_socket);
     ++	if (ret) {
      +		int saved_errno = errno;
      +		close(sv[0]);
      +		close(sv[1]);
      +		errno = saved_errno;
     -+		return -1;
     ++		return ret;
      +	}
      +
      +	server_data = xcalloc(1, sizeof(*server_data));
 12:  09568a6500dd ! 12:  6ccc7472096f t0052: add simple-ipc tests and t/helper/test-simple-ipc tool
     @@ t/helper/test-simple-ipc.c (new)
      +	return app__unhandled_command(command, reply_cb, reply_data);
      +}
      +
     ++struct cl_args
     ++{
     ++	const char *subcommand;
     ++	const char *path;
     ++	const char *token;
     ++
     ++	int nr_threads;
     ++	int max_wait_sec;
     ++	int bytecount;
     ++	int batchsize;
     ++
     ++	char bytevalue;
     ++};
     ++
     ++static struct cl_args cl_args = {
     ++	.subcommand = NULL,
     ++	.path = "ipc-test",
     ++	.token = NULL,
     ++
     ++	.nr_threads = 5,
     ++	.max_wait_sec = 60,
     ++	.bytecount = 1024,
     ++	.batchsize = 10,
     ++
     ++	.bytevalue = 'x',
     ++};
     ++
      +/*
      + * This process will run as a simple-ipc server and listen for IPC commands
      + * from client processes.
      + */
     -+static int daemon__run_server(const char *path, int argc, const char **argv)
     ++static int daemon__run_server(void)
      +{
     -+	struct ipc_server_opts opts = {
     -+		.nr_threads = 5
     -+	};
     ++	int ret;
      +
     -+	const char * const daemon_usage[] = {
     -+		N_("test-helper simple-ipc run-daemon [<options>"),
     -+		NULL
     -+	};
     -+	struct option daemon_options[] = {
     -+		OPT_INTEGER(0, "threads", &opts.nr_threads,
     -+			    N_("number of threads in server thread pool")),
     -+		OPT_END()
     ++	struct ipc_server_opts opts = {
     ++		.nr_threads = cl_args.nr_threads,
      +	};
      +
     -+	argc = parse_options(argc, argv, NULL, daemon_options, daemon_usage, 0);
     -+
     -+	if (opts.nr_threads < 1)
     -+		opts.nr_threads = 1;
     -+
      +	/*
      +	 * Synchronously run the ipc-server.  We don't need any application
      +	 * instance data, so pass an arbitrary pointer (that we'll later
      +	 * verify made the round trip).
      +	 */
     -+	return ipc_server_run(path, &opts, test_app_cb, (void*)&my_app_data);
     ++	ret = ipc_server_run(cl_args.path, &opts, test_app_cb, (void*)&my_app_data);
     ++	if (ret == -2)
     ++		error(_("socket/pipe already in use: '%s'"), cl_args.path);
     ++	else if (ret == -1)
     ++		error_errno(_("could not start server on: '%s'"), cl_args.path);
     ++
     ++	return ret;
      +}
      +
      +#ifndef GIT_WINDOWS_NATIVE
     @@ t/helper/test-simple-ipc.c (new)
      + * This is adapted from `daemonize()`.  Use `fork()` to directly create and
      + * run the daemon in a child process.
      + */
     -+static int spawn_server(const char *path,
     -+			const struct ipc_server_opts *opts,
     -+			pid_t *pid)
     ++static int spawn_server(pid_t *pid)
      +{
     ++	struct ipc_server_opts opts = {
     ++		.nr_threads = cl_args.nr_threads,
     ++	};
     ++
      +	*pid = fork();
      +
      +	switch (*pid) {
     @@ t/helper/test-simple-ipc.c (new)
      +		close(2);
      +		sanitize_stdfds();
      +
     -+		return ipc_server_run(path, opts, test_app_cb, (void*)&my_app_data);
     ++		return ipc_server_run(cl_args.path, &opts, test_app_cb,
     ++				      (void*)&my_app_data);
      +
      +	case -1:
      +		return error_errno(_("could not spawn daemon in the background"));
     @@ t/helper/test-simple-ipc.c (new)
      + * have `fork(2)`.  Spawn a normal Windows child process but without the
      + * limitations of `start_command()` and `finish_command()`.
      + */
     -+static int spawn_server(const char *path,
     -+			const struct ipc_server_opts *opts,
     -+			pid_t *pid)
     ++static int spawn_server(pid_t *pid)
      +{
      +	char test_tool_exe[MAX_PATH];
      +	struct strvec args = STRVEC_INIT;
     @@ t/helper/test-simple-ipc.c (new)
      +	strvec_push(&args, test_tool_exe);
      +	strvec_push(&args, "simple-ipc");
      +	strvec_push(&args, "run-daemon");
     -+	strvec_pushf(&args, "--threads=%d", opts->nr_threads);
     ++	strvec_pushf(&args, "--name=%s", cl_args.path);
     ++	strvec_pushf(&args, "--threads=%d", cl_args.nr_threads);
      +
      +	*pid = mingw_spawnvpe(args.v[0], args.v, NULL, NULL, in, out, out);
      +	close(in);
     @@ t/helper/test-simple-ipc.c (new)
      + * let it get started and begin listening for requests on the socket
      + * before reporting our success.
      + */
     -+static int wait_for_server_startup(const char * path, pid_t pid_child,
     -+				   int max_wait_sec)
     ++static int wait_for_server_startup(pid_t pid_child)
      +{
      +	int status;
      +	pid_t pid_seen;
     @@ t/helper/test-simple-ipc.c (new)
      +	time_t time_limit, now;
      +
      +	time(&time_limit);
     -+	time_limit += max_wait_sec;
     ++	time_limit += cl_args.max_wait_sec;
      +
      +	for (;;) {
      +		pid_seen = waitpid(pid_child, &status, WNOHANG);
     @@ t/helper/test-simple-ipc.c (new)
      +			 * after a timeout on the lock), but we don't
      +			 * care (who responds) if the socket is live.
      +			 */
     -+			s = ipc_get_active_state(path);
     ++			s = ipc_get_active_state(cl_args.path);
      +			if (s == IPC_STATE__LISTENING)
      +				return 0;
      +
     @@ t/helper/test-simple-ipc.c (new)
      +			 *
      +			 * Again, we don't care who services the socket.
      +			 */
     -+			s = ipc_get_active_state(path);
     ++			s = ipc_get_active_state(cl_args.path);
      +			if (s == IPC_STATE__LISTENING)
      +				return 0;
      +
     @@ t/helper/test-simple-ipc.c (new)
      + * more control and better error reporting (and makes it easier to write
      + * unit tests).
      + */
     -+static int daemon__start_server(const char *path, int argc, const char **argv)
     ++static int daemon__start_server(void)
      +{
      +	pid_t pid_child;
      +	int ret;
     -+	int max_wait_sec = 60;
     -+	struct ipc_server_opts opts = {
     -+		.nr_threads = 5
     -+	};
     -+
     -+	const char * const daemon_usage[] = {
     -+		N_("test-helper simple-ipc start-daemon [<options>"),
     -+		NULL
     -+	};
     -+
     -+	struct option daemon_options[] = {
     -+		OPT_INTEGER(0, "max-wait", &max_wait_sec,
     -+			    N_("seconds to wait for daemon to startup")),
     -+		OPT_INTEGER(0, "threads", &opts.nr_threads,
     -+			    N_("number of threads in server thread pool")),
     -+		OPT_END()
     -+	};
     -+
     -+	argc = parse_options(argc, argv, NULL, daemon_options, daemon_usage, 0);
     -+
     -+	if (max_wait_sec < 0)
     -+		max_wait_sec = 0;
     -+	if (opts.nr_threads < 1)
     -+		opts.nr_threads = 1;
      +
      +	/*
      +	 * Run the actual daemon in a background process.
      +	 */
     -+	ret = spawn_server(path, &opts, &pid_child);
     ++	ret = spawn_server(&pid_child);
      +	if (pid_child <= 0)
      +		return ret;
      +
     @@ t/helper/test-simple-ipc.c (new)
      +	 * Let the parent wait for the child process to get started
      +	 * and begin listening for requests on the socket.
      +	 */
     -+	ret = wait_for_server_startup(path, pid_child, max_wait_sec);
     ++	ret = wait_for_server_startup(pid_child);
      +
      +	return ret;
      +}
     @@ t/helper/test-simple-ipc.c (new)
      + *
      + * Returns 0 if the server is alive.
      + */
     -+static int client__probe_server(const char *path)
     ++static int client__probe_server(void)
      +{
      +	enum ipc_active_state s;
      +
     -+	s = ipc_get_active_state(path);
     ++	s = ipc_get_active_state(cl_args.path);
      +	switch (s) {
      +	case IPC_STATE__LISTENING:
      +		return 0;
      +
      +	case IPC_STATE__NOT_LISTENING:
     -+		return error("no server listening at '%s'", path);
     ++		return error("no server listening at '%s'", cl_args.path);
      +
      +	case IPC_STATE__PATH_NOT_FOUND:
     -+		return error("path not found '%s'", path);
     ++		return error("path not found '%s'", cl_args.path);
      +
      +	case IPC_STATE__INVALID_PATH:
     -+		return error("invalid pipe/socket name '%s'", path);
     ++		return error("invalid pipe/socket name '%s'", cl_args.path);
      +
      +	case IPC_STATE__OTHER_ERROR:
      +	default:
     -+		return error("other error for '%s'", path);
     ++		return error("other error for '%s'", cl_args.path);
      +	}
      +}
      +
      +/*
     -+ * Send an IPC command to an already-running server daemon and print the
     -+ * response.
     ++ * Send an IPC command token to an already-running server daemon and
     ++ * print the response.
      + *
     -+ * argv[2] contains a simple (1 word) command that `test_app_cb()` (in
     -+ * the daemon process) will understand.
     ++ * This is a simple 1 word command/token that `test_app_cb()` (in the
     ++ * daemon process) will understand.
      + */
     -+static int client__send_ipc(int argc, const char **argv, const char *path)
     ++static int client__send_ipc(void)
      +{
     -+	const char *command = argc > 2 ? argv[2] : "(no command)";
     ++	const char *command = "(no-command)";
      +	struct strbuf buf = STRBUF_INIT;
      +	struct ipc_client_connect_options options
      +		= IPC_CLIENT_CONNECT_OPTIONS_INIT;
      +
     ++	if (cl_args.token && *cl_args.token)
     ++		command = cl_args.token;
     ++
      +	options.wait_if_busy = 1;
      +	options.wait_if_not_found = 0;
      +
     -+	if (!ipc_client_send_command(path, &options, command, &buf)) {
     ++	if (!ipc_client_send_command(cl_args.path, &options, command, &buf)) {
      +		if (buf.len) {
      +			printf("%s\n", buf.buf);
      +			fflush(stdout);
     @@ t/helper/test-simple-ipc.c (new)
      +		return 0;
      +	}
      +
     -+	return error("failed to send '%s' to '%s'", command, path);
     ++	return error("failed to send '%s' to '%s'", command, cl_args.path);
      +}
      +
      +/*
     @@ t/helper/test-simple-ipc.c (new)
      + * event in the server, so we spin and wait here for it to actually
      + * shutdown to make the unit tests a little easier to write.
      + */
     -+static int client__stop_server(int argc, const char **argv, const char *path)
     ++static int client__stop_server(void)
      +{
     -+	const char *send_quit[] = { argv[0], "send", "quit", NULL };
     -+	int max_wait_sec = 60;
      +	int ret;
      +	time_t time_limit, now;
      +	enum ipc_active_state s;
      +
     -+	const char * const stop_usage[] = {
     -+		N_("test-helper simple-ipc stop-daemon [<options>]"),
     -+		NULL
     -+	};
     -+
     -+	struct option stop_options[] = {
     -+		OPT_INTEGER(0, "max-wait", &max_wait_sec,
     -+			    N_("seconds to wait for daemon to stop")),
     -+		OPT_END()
     -+	};
     -+
     -+	argc = parse_options(argc, argv, NULL, stop_options, stop_usage, 0);
     -+
     -+	if (max_wait_sec < 0)
     -+		max_wait_sec = 0;
     -+
      +	time(&time_limit);
     -+	time_limit += max_wait_sec;
     ++	time_limit += cl_args.max_wait_sec;
     ++
     ++	cl_args.token = "quit";
      +
     -+	ret = client__send_ipc(3, send_quit, path);
     ++	ret = client__send_ipc();
      +	if (ret)
      +		return ret;
      +
      +	for (;;) {
      +		sleep_millisec(100);
      +
     -+		s = ipc_get_active_state(path);
     ++		s = ipc_get_active_state(cl_args.path);
      +
      +		if (s != IPC_STATE__LISTENING) {
      +			/*
     @@ t/helper/test-simple-ipc.c (new)
      +/*
      + * Send an IPC command with ballast to an already-running server daemon.
      + */
     -+static int client__sendbytes(int argc, const char **argv, const char *path)
     ++static int client__sendbytes(void)
      +{
     -+	int bytecount = 1024;
     -+	char *string = "x";
     -+	const char * const sendbytes_usage[] = {
     -+		N_("test-helper simple-ipc sendbytes [<options>]"),
     -+		NULL
     -+	};
     -+	struct option sendbytes_options[] = {
     -+		OPT_INTEGER(0, "bytecount", &bytecount, N_("number of bytes")),
     -+		OPT_STRING(0, "byte", &string, N_("byte"), N_("ballast")),
     -+		OPT_END()
     -+	};
      +	struct ipc_client_connect_options options
      +		= IPC_CLIENT_CONNECT_OPTIONS_INIT;
      +
     @@ t/helper/test-simple-ipc.c (new)
      +	options.wait_if_not_found = 0;
      +	options.uds_disallow_chdir = 0;
      +
     -+	argc = parse_options(argc, argv, NULL, sendbytes_options, sendbytes_usage, 0);
     -+
     -+	return do_sendbytes(bytecount, string[0], path, &options);
     ++	return do_sendbytes(cl_args.bytecount, cl_args.bytevalue, cl_args.path,
     ++			    &options);
      +}
      +
      +struct multiple_thread_data {
     @@ t/helper/test-simple-ipc.c (new)
      + * Start a client-side thread pool.  Each thread sends a series of
      + * IPC requests.  Each request is on a new connection to the server.
      + */
     -+static int client__multiple(int argc, const char **argv, const char *path)
     ++static int client__multiple(void)
      +{
      +	struct multiple_thread_data *list = NULL;
      +	int k;
     -+	int nr_threads = 5;
     -+	int bytecount = 1;
     -+	int batchsize = 10;
      +	int sum_join_errors = 0;
      +	int sum_thread_errors = 0;
      +	int sum_good = 0;
      +
     -+	const char * const multiple_usage[] = {
     -+		N_("test-helper simple-ipc multiple [<options>]"),
     -+		NULL
     -+	};
     -+	struct option multiple_options[] = {
     -+		OPT_INTEGER(0, "bytecount", &bytecount, N_("number of bytes")),
     -+		OPT_INTEGER(0, "threads", &nr_threads, N_("number of threads")),
     -+		OPT_INTEGER(0, "batchsize", &batchsize, N_("number of requests per thread")),
     -+		OPT_END()
     -+	};
     -+
     -+	argc = parse_options(argc, argv, NULL, multiple_options, multiple_usage, 0);
     -+
     -+	if (bytecount < 1)
     -+		bytecount = 1;
     -+	if (nr_threads < 1)
     -+		nr_threads = 1;
     -+	if (batchsize < 1)
     -+		batchsize = 1;
     -+
     -+	for (k = 0; k < nr_threads; k++) {
     ++	for (k = 0; k < cl_args.nr_threads; k++) {
      +		struct multiple_thread_data *d = xcalloc(1, sizeof(*d));
      +		d->next = list;
     -+		d->path = path;
     -+		d->bytecount = bytecount + batchsize*(k/26);
     -+		d->batchsize = batchsize;
     ++		d->path = cl_args.path;
     ++		d->bytecount = cl_args.bytecount + cl_args.batchsize*(k/26);
     ++		d->batchsize = cl_args.batchsize;
      +		d->sum_errors = 0;
      +		d->sum_good = 0;
      +		d->letter = 'A' + (k % 26);
     @@ t/helper/test-simple-ipc.c (new)
      +
      +int cmd__simple_ipc(int argc, const char **argv)
      +{
     -+	const char *path = "ipc-test";
     ++	const char * const simple_ipc_usage[] = {
     ++		N_("test-helper simple-ipc is-active    [<name>] [<options>]"),
     ++		N_("test-helper simple-ipc run-daemon   [<name>] [<threads>]"),
     ++		N_("test-helper simple-ipc start-daemon [<name>] [<threads>] [<max-wait>]"),
     ++		N_("test-helper simple-ipc stop-daemon  [<name>] [<max-wait>]"),
     ++		N_("test-helper simple-ipc send         [<name>] [<token>]"),
     ++		N_("test-helper simple-ipc sendbytes    [<name>] [<bytecount>] [<byte>]"),
     ++		N_("test-helper simple-ipc multiple     [<name>] [<threads>] [<bytecount>] [<batchsize>]"),
     ++		NULL
     ++	};
     ++
     ++	const char *bytevalue = NULL;
     ++
     ++	struct option options[] = {
     ++#ifndef GIT_WINDOWS_NATIVE
     ++		OPT_STRING(0, "name", &cl_args.path, N_("name"), N_("name or pathname of unix domain socket")),
     ++#else
     ++		OPT_STRING(0, "name", &cl_args.path, N_("name"), N_("named-pipe name")),
     ++#endif
     ++		OPT_INTEGER(0, "threads", &cl_args.nr_threads, N_("number of threads in server thread pool")),
     ++		OPT_INTEGER(0, "max-wait", &cl_args.max_wait_sec, N_("seconds to wait for daemon to start or stop")),
     ++
     ++		OPT_INTEGER(0, "bytecount", &cl_args.bytecount, N_("number of bytes")),
     ++		OPT_INTEGER(0, "batchsize", &cl_args.batchsize, N_("number of requests per thread")),
     ++
     ++		OPT_STRING(0, "byte", &bytevalue, N_("byte"), N_("ballast character")),
     ++		OPT_STRING(0, "token", &cl_args.token, N_("token"), N_("command token to send to the server")),
     ++
     ++		OPT_END()
     ++	};
     ++
     ++	if (argc < 2)
     ++		usage_with_options(simple_ipc_usage, options);
     ++
     ++	if (argc == 2 && !strcmp(argv[1], "-h"))
     ++		usage_with_options(simple_ipc_usage, options);
      +
      +	if (argc == 2 && !strcmp(argv[1], "SUPPORTS_SIMPLE_IPC"))
      +		return 0;
      +
     ++	cl_args.subcommand = argv[1];
     ++
     ++	argc--;
     ++	argv++;
     ++
     ++	argc = parse_options(argc, argv, NULL, options, simple_ipc_usage, 0);
     ++
     ++	if (cl_args.nr_threads < 1)
     ++		cl_args.nr_threads = 1;
     ++	if (cl_args.max_wait_sec < 0)
     ++		cl_args.max_wait_sec = 0;
     ++	if (cl_args.bytecount < 1)
     ++		cl_args.bytecount = 1;
     ++	if (cl_args.batchsize < 1)
     ++		cl_args.batchsize = 1;
     ++
     ++	if (bytevalue && *bytevalue)
     ++		cl_args.bytevalue = bytevalue[0];
     ++
      +	/*
      +	 * Use '!!' on all dispatch functions to map from `error()` style
      +	 * (returns -1) style to `test_must_fail` style (expects 1).  This
      +	 * makes shell error messages less confusing.
      +	 */
      +
     -+	if (argc == 2 && !strcmp(argv[1], "is-active"))
     -+		return !!client__probe_server(path);
     ++	if (!strcmp(cl_args.subcommand, "is-active"))
     ++		return !!client__probe_server();
      +
     -+	if (argc >= 2 && !strcmp(argv[1], "run-daemon"))
     -+		return !!daemon__run_server(path, argc, argv);
     ++	if (!strcmp(cl_args.subcommand, "run-daemon"))
     ++		return !!daemon__run_server();
      +
     -+	if (argc >= 2 && !strcmp(argv[1], "start-daemon"))
     -+		return !!daemon__start_server(path, argc, argv);
     ++	if (!strcmp(cl_args.subcommand, "start-daemon"))
     ++		return !!daemon__start_server();
      +
      +	/*
      +	 * Client commands follow.  Ensure a server is running before
     -+	 * going any further.
     ++	 * sending any data.  This might be overkill, but then again
     ++	 * this is a test harness.
      +	 */
     -+	if (client__probe_server(path))
     -+		return 1;
      +
     -+	if (argc >= 2 && !strcmp(argv[1], "stop-daemon"))
     -+		return !!client__stop_server(argc, argv, path);
     ++	if (!strcmp(cl_args.subcommand, "stop-daemon")) {
     ++		if (client__probe_server())
     ++			return 1;
     ++		return !!client__stop_server();
     ++	}
      +
     -+	if ((argc == 2 || argc == 3) && !strcmp(argv[1], "send"))
     -+		return !!client__send_ipc(argc, argv, path);
     ++	if (!strcmp(cl_args.subcommand, "send")) {
     ++		if (client__probe_server())
     ++			return 1;
     ++		return !!client__send_ipc();
     ++	}
      +
     -+	if (argc >= 2 && !strcmp(argv[1], "sendbytes"))
     -+		return !!client__sendbytes(argc, argv, path);
     ++	if (!strcmp(cl_args.subcommand, "sendbytes")) {
     ++		if (client__probe_server())
     ++			return 1;
     ++		return !!client__sendbytes();
     ++	}
      +
     -+	if (argc >= 2 && !strcmp(argv[1], "multiple"))
     -+		return !!client__multiple(argc, argv, path);
     ++	if (!strcmp(cl_args.subcommand, "multiple")) {
     ++		if (client__probe_server())
     ++			return 1;
     ++		return !!client__multiple();
     ++	}
      +
     -+	die("Unhandled argv[1]: '%s'", argv[1]);
     ++	die("Unhandled subcommand: '%s'", cl_args.subcommand);
      +}
      +#endif
      
     @@ t/t0052-simple-ipc.sh (new)
      +'
      +
      +test_expect_success 'simple command server' '
     -+	test-tool simple-ipc send ping >actual &&
     ++	test-tool simple-ipc send --token=ping >actual &&
      +	echo pong >expect &&
      +	test_cmp expect actual
      +'
     @@ t/t0052-simple-ipc.sh (new)
      +'
      +
      +test_expect_success 'big response' '
     -+	test-tool simple-ipc send big >actual &&
     ++	test-tool simple-ipc send --token=big >actual &&
      +	test_line_count -ge 10000 actual &&
      +	grep -q "big: [0]*9999\$" actual
      +'
      +
      +test_expect_success 'chunk response' '
     -+	test-tool simple-ipc send chunk >actual &&
     ++	test-tool simple-ipc send --token=chunk >actual &&
      +	test_line_count -ge 10000 actual &&
      +	grep -q "big: [0]*9999\$" actual
      +'
      +
      +test_expect_success 'slow response' '
     -+	test-tool simple-ipc send slow >actual &&
     ++	test-tool simple-ipc send --token=slow >actual &&
      +	test_line_count -ge 100 actual &&
      +	grep -q "big: [0]*99\$" actual
      +'
     @@ t/t0052-simple-ipc.sh (new)
      +test_expect_success 'stop-daemon works' '
      +	test-tool simple-ipc stop-daemon &&
      +	test_must_fail test-tool simple-ipc is-active &&
     -+	test_must_fail test-tool simple-ipc send ping
     ++	test_must_fail test-tool simple-ipc send --token=ping
      +'
      +
      +test_done

-- 
gitgitgadget

^ permalink raw reply	[flat|nested] 178+ messages in thread

* [PATCH v5 01/12] pkt-line: eliminate the need for static buffer in packet_write_gently()
  2021-03-09 15:02       ` [PATCH v5 " Jeff Hostetler via GitGitGadget
@ 2021-03-09 15:02         ` Jeff Hostetler via GitGitGadget
  2021-03-09 23:48           ` Junio C Hamano
  2021-03-09 15:02         ` [PATCH v5 02/12] pkt-line: do not issue flush packets in write_packetized_*() Johannes Schindelin via GitGitGadget
                           ` (12 subsequent siblings)
  13 siblings, 1 reply; 178+ messages in thread
From: Jeff Hostetler via GitGitGadget @ 2021-03-09 15:02 UTC (permalink / raw)
  To: git
  Cc: Ævar Arnfjörð Bjarmason, Jeff Hostetler,
	Jeff King, SZEDER Gábor, Johannes Schindelin, Chris Torek,
	Jeff Hostetler, Jeff Hostetler

From: Jeff Hostetler <jeffhost@microsoft.com>

Teach `packet_write_gently()` to write the pkt-line header and the actual
buffer in 2 separate calls to `write_in_full()` and avoid the need for a
static buffer, thread-safe scratch space, or an excessively large stack
buffer.
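
To illustrate the framing (a hedged sketch of the internals, not part of
the patch; `fd_out` and the literal payload are placeholders): a pkt-line
header is the total packet length, payload plus the 4 header bytes,
encoded as 4 hex digits, so the two writes for a 6-byte payload look
roughly like this:

    char header[4];
    const char *payload = "hello\n";               /* 6 payload bytes */

    set_packet_header(header, 6 + 4);              /* header becomes "000a" */
    if (write_in_full(fd_out, header, 4) < 0 ||    /* write "000a" */
        write_in_full(fd_out, payload, 6) < 0)     /* then "hello\n" */
        return error(_("packet write failed"));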

Change `write_packetized_from_fd()` to allocate a temporary buffer rather
than using a static buffer to avoid similar issues here.

These changes are intended to make it easier to use pkt-line routines in
a multi-threaded context with multiple concurrent writers writing to
different streams.

Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
---
 pkt-line.c | 27 +++++++++++++++++++--------
 1 file changed, 19 insertions(+), 8 deletions(-)

diff --git a/pkt-line.c b/pkt-line.c
index d633005ef746..8b3512190442 100644
--- a/pkt-line.c
+++ b/pkt-line.c
@@ -196,17 +196,25 @@ int packet_write_fmt_gently(int fd, const char *fmt, ...)
 
 static int packet_write_gently(const int fd_out, const char *buf, size_t size)
 {
-	static char packet_write_buffer[LARGE_PACKET_MAX];
+	char header[4];
 	size_t packet_size;
 
-	if (size > sizeof(packet_write_buffer) - 4)
+	if (size > LARGE_PACKET_DATA_MAX)
 		return error(_("packet write failed - data exceeds max packet size"));
 
 	packet_trace(buf, size, 1);
 	packet_size = size + 4;
-	set_packet_header(packet_write_buffer, packet_size);
-	memcpy(packet_write_buffer + 4, buf, size);
-	if (write_in_full(fd_out, packet_write_buffer, packet_size) < 0)
+
+	set_packet_header(header, packet_size);
+
+	/*
+	 * Write the header and the buffer in 2 parts so that we do not need
+	 * to allocate a buffer or rely on a static buffer.  This avoids perf
+	 * and multi-threading issues.
+	 */
+
+	if (write_in_full(fd_out, header, 4) < 0 ||
+	    write_in_full(fd_out, buf, size) < 0)
 		return error(_("packet write failed"));
 	return 0;
 }
@@ -244,20 +252,23 @@ void packet_buf_write_len(struct strbuf *buf, const char *data, size_t len)
 
 int write_packetized_from_fd(int fd_in, int fd_out)
 {
-	static char buf[LARGE_PACKET_DATA_MAX];
+	char *buf = xmalloc(LARGE_PACKET_DATA_MAX);
 	int err = 0;
 	ssize_t bytes_to_write;
 
 	while (!err) {
-		bytes_to_write = xread(fd_in, buf, sizeof(buf));
-		if (bytes_to_write < 0)
+		bytes_to_write = xread(fd_in, buf, LARGE_PACKET_DATA_MAX);
+		if (bytes_to_write < 0) {
+			free(buf);
 			return COPY_READ_ERROR;
+		}
 		if (bytes_to_write == 0)
 			break;
 		err = packet_write_gently(fd_out, buf, bytes_to_write);
 	}
 	if (!err)
 		err = packet_flush_gently(fd_out);
+	free(buf);
 	return err;
 }
 
-- 
gitgitgadget


^ permalink raw reply related	[flat|nested] 178+ messages in thread

* [PATCH v5 02/12] pkt-line: do not issue flush packets in write_packetized_*()
  2021-03-09 15:02       ` [PATCH v5 " Jeff Hostetler via GitGitGadget
  2021-03-09 15:02         ` [PATCH v5 01/12] pkt-line: eliminate the need for static buffer in packet_write_gently() Jeff Hostetler via GitGitGadget
@ 2021-03-09 15:02         ` Johannes Schindelin via GitGitGadget
  2021-03-09 15:02         ` [PATCH v5 03/12] pkt-line: add PACKET_READ_GENTLE_ON_READ_ERROR option Johannes Schindelin via GitGitGadget
                           ` (11 subsequent siblings)
  13 siblings, 0 replies; 178+ messages in thread
From: Johannes Schindelin via GitGitGadget @ 2021-03-09 15:02 UTC (permalink / raw)
  To: git
  Cc: Ævar Arnfjörð Bjarmason, Jeff Hostetler,
	Jeff King, SZEDER Gábor, Johannes Schindelin, Chris Torek,
	Jeff Hostetler, Johannes Schindelin

From: Johannes Schindelin <johannes.schindelin@gmx.de>

Remove the `packet_flush_gently()` call in `write_packetized_from_buf()` and
`write_packetized_from_fd()` and require the caller to call it if desired.
Rename both functions to `write_packetized_from_*_no_flush()` to prevent
later merge accidents.

`write_packetized_from_buf()` currently only has one caller:
`apply_multi_file_filter()` in `convert.c`.  It always wants a flush packet
to be written after writing the payload.

However, we are about to introduce a caller that wants to write many
packets before a final flush packet, so let's make the caller responsible
for emitting the flush packet.
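
As a hedged sketch of what such a caller looks like after this change
(the buffers, lengths, and `fd` here are placeholders, not code from
this series):

    if (write_packetized_from_buf_no_flush(buf1, len1, fd) ||
        write_packetized_from_buf_no_flush(buf2, len2, fd) ||
        packet_flush_gently(fd))    /* the caller now emits the flush */
        return error(_("request write failed"));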

Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
---
 convert.c  | 8 ++++++--
 pkt-line.c | 8 ++------
 pkt-line.h | 4 ++--
 3 files changed, 10 insertions(+), 10 deletions(-)

diff --git a/convert.c b/convert.c
index ee360c2f07ce..976d4905cb3a 100644
--- a/convert.c
+++ b/convert.c
@@ -884,9 +884,13 @@ static int apply_multi_file_filter(const char *path, const char *src, size_t len
 		goto done;
 
 	if (fd >= 0)
-		err = write_packetized_from_fd(fd, process->in);
+		err = write_packetized_from_fd_no_flush(fd, process->in);
 	else
-		err = write_packetized_from_buf(src, len, process->in);
+		err = write_packetized_from_buf_no_flush(src, len, process->in);
+	if (err)
+		goto done;
+
+	err = packet_flush_gently(process->in);
 	if (err)
 		goto done;
 
diff --git a/pkt-line.c b/pkt-line.c
index 8b3512190442..434da3a0c48d 100644
--- a/pkt-line.c
+++ b/pkt-line.c
@@ -250,7 +250,7 @@ void packet_buf_write_len(struct strbuf *buf, const char *data, size_t len)
 	packet_trace(data, len, 1);
 }
 
-int write_packetized_from_fd(int fd_in, int fd_out)
+int write_packetized_from_fd_no_flush(int fd_in, int fd_out)
 {
 	char *buf = xmalloc(LARGE_PACKET_DATA_MAX);
 	int err = 0;
@@ -266,13 +266,11 @@ int write_packetized_from_fd(int fd_in, int fd_out)
 			break;
 		err = packet_write_gently(fd_out, buf, bytes_to_write);
 	}
-	if (!err)
-		err = packet_flush_gently(fd_out);
 	free(buf);
 	return err;
 }
 
-int write_packetized_from_buf(const char *src_in, size_t len, int fd_out)
+int write_packetized_from_buf_no_flush(const char *src_in, size_t len, int fd_out)
 {
 	int err = 0;
 	size_t bytes_written = 0;
@@ -288,8 +286,6 @@ int write_packetized_from_buf(const char *src_in, size_t len, int fd_out)
 		err = packet_write_gently(fd_out, src_in + bytes_written, bytes_to_write);
 		bytes_written += bytes_to_write;
 	}
-	if (!err)
-		err = packet_flush_gently(fd_out);
 	return err;
 }
 
diff --git a/pkt-line.h b/pkt-line.h
index 8c90daa59ef0..31012b9943bf 100644
--- a/pkt-line.h
+++ b/pkt-line.h
@@ -32,8 +32,8 @@ void packet_buf_write(struct strbuf *buf, const char *fmt, ...) __attribute__((f
 void packet_buf_write_len(struct strbuf *buf, const char *data, size_t len);
 int packet_flush_gently(int fd);
 int packet_write_fmt_gently(int fd, const char *fmt, ...) __attribute__((format (printf, 2, 3)));
-int write_packetized_from_fd(int fd_in, int fd_out);
-int write_packetized_from_buf(const char *src_in, size_t len, int fd_out);
+int write_packetized_from_fd_no_flush(int fd_in, int fd_out);
+int write_packetized_from_buf_no_flush(const char *src_in, size_t len, int fd_out);
 
 /*
  * Read a packetized line into the buffer, which must be at least size bytes
-- 
gitgitgadget


^ permalink raw reply related	[flat|nested] 178+ messages in thread

* [PATCH v5 03/12] pkt-line: add PACKET_READ_GENTLE_ON_READ_ERROR option
  2021-03-09 15:02       ` [PATCH v5 " Jeff Hostetler via GitGitGadget
  2021-03-09 15:02         ` [PATCH v5 01/12] pkt-line: eliminate the need for static buffer in packet_write_gently() Jeff Hostetler via GitGitGadget
  2021-03-09 15:02         ` [PATCH v5 02/12] pkt-line: do not issue flush packets in write_packetized_*() Johannes Schindelin via GitGitGadget
@ 2021-03-09 15:02         ` Johannes Schindelin via GitGitGadget
  2021-03-09 15:02         ` [PATCH v5 04/12] pkt-line: add options argument to read_packetized_to_strbuf() Johannes Schindelin via GitGitGadget
                           ` (10 subsequent siblings)
  13 siblings, 0 replies; 178+ messages in thread
From: Johannes Schindelin via GitGitGadget @ 2021-03-09 15:02 UTC (permalink / raw)
  To: git
  Cc: Ævar Arnfjörð Bjarmason, Jeff Hostetler,
	Jeff King, SZEDER Gábor, Johannes Schindelin, Chris Torek,
	Jeff Hostetler, Johannes Schindelin

From: Johannes Schindelin <johannes.schindelin@gmx.de>

Introduce PACKET_READ_GENTLE_ON_READ_ERROR option to help libify the
packet readers.

So far, the (possibly indirect) callers of `get_packet_data()` can ask
that function to return an error instead of `die()`ing upon end-of-file.
However, random read errors will still cause the process to die.

So let's introduce an explicit option to tell the packet reader
machinery to please be nice and only return an error on read errors.

This change prepares pkt-line for use by long-running daemon processes.
Such processes should be able to serve multiple concurrent clients and
survive random IO errors.  If there is an error on one connection,
a daemon should be able to drop that connection and continue serving
existing and future connections.
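
As a hedged sketch (names are illustrative, not code from this series),
a per-connection worker in such a daemon could read one packet like
this and fail only that connection on an I/O error:

    char buffer[LARGE_PACKET_MAX];
    int len = packet_read(client_fd /* placeholder */, NULL, NULL,
                          buffer, sizeof(buffer),
                          PACKET_READ_GENTLE_ON_EOF |
                          PACKET_READ_GENTLE_ON_READ_ERROR);

    if (len < 0)
        return -1;    /* drop this client, keep serving the others */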

This ability will be used by a Git-aware "Builtin FSMonitor" feature
in a later patch series.

Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
---
 pkt-line.c | 19 +++++++++++++++++--
 pkt-line.h | 11 ++++++++---
 2 files changed, 25 insertions(+), 5 deletions(-)

diff --git a/pkt-line.c b/pkt-line.c
index 434da3a0c48d..22775e37a72b 100644
--- a/pkt-line.c
+++ b/pkt-line.c
@@ -305,8 +305,11 @@ static int get_packet_data(int fd, char **src_buf, size_t *src_size,
 		*src_size -= ret;
 	} else {
 		ret = read_in_full(fd, dst, size);
-		if (ret < 0)
+		if (ret < 0) {
+			if (options & PACKET_READ_GENTLE_ON_READ_ERROR)
+				return error_errno(_("read error"));
 			die_errno(_("read error"));
+		}
 	}
 
 	/* And complain if we didn't get enough bytes to satisfy the read. */
@@ -314,6 +317,8 @@ static int get_packet_data(int fd, char **src_buf, size_t *src_size,
 		if (options & PACKET_READ_GENTLE_ON_EOF)
 			return -1;
 
+		if (options & PACKET_READ_GENTLE_ON_READ_ERROR)
+			return error(_("the remote end hung up unexpectedly"));
 		die(_("the remote end hung up unexpectedly"));
 	}
 
@@ -342,6 +347,9 @@ enum packet_read_status packet_read_with_status(int fd, char **src_buffer,
 	len = packet_length(linelen);
 
 	if (len < 0) {
+		if (options & PACKET_READ_GENTLE_ON_READ_ERROR)
+			return error(_("protocol error: bad line length "
+				       "character: %.4s"), linelen);
 		die(_("protocol error: bad line length character: %.4s"), linelen);
 	} else if (!len) {
 		packet_trace("0000", 4, 0);
@@ -356,12 +364,19 @@ enum packet_read_status packet_read_with_status(int fd, char **src_buffer,
 		*pktlen = 0;
 		return PACKET_READ_RESPONSE_END;
 	} else if (len < 4) {
+		if (options & PACKET_READ_GENTLE_ON_READ_ERROR)
+			return error(_("protocol error: bad line length %d"),
+				     len);
 		die(_("protocol error: bad line length %d"), len);
 	}
 
 	len -= 4;
-	if ((unsigned)len >= size)
+	if ((unsigned)len >= size) {
+		if (options & PACKET_READ_GENTLE_ON_READ_ERROR)
+			return error(_("protocol error: bad line length %d"),
+				     len);
 		die(_("protocol error: bad line length %d"), len);
+	}
 
 	if (get_packet_data(fd, src_buffer, src_len, buffer, len, options) < 0) {
 		*pktlen = -1;
diff --git a/pkt-line.h b/pkt-line.h
index 31012b9943bf..80ce0187e2ea 100644
--- a/pkt-line.h
+++ b/pkt-line.h
@@ -68,10 +68,15 @@ int write_packetized_from_buf_no_flush(const char *src_in, size_t len, int fd_ou
  *
  * If options contains PACKET_READ_DIE_ON_ERR_PACKET, it dies when it sees an
  * ERR packet.
+ *
+ * If options contains PACKET_READ_GENTLE_ON_READ_ERROR, we will not die
+ * on read errors, but instead return -1.  However, we may still die on an
+ * ERR packet (if requested).
  */
-#define PACKET_READ_GENTLE_ON_EOF     (1u<<0)
-#define PACKET_READ_CHOMP_NEWLINE     (1u<<1)
-#define PACKET_READ_DIE_ON_ERR_PACKET (1u<<2)
+#define PACKET_READ_GENTLE_ON_EOF        (1u<<0)
+#define PACKET_READ_CHOMP_NEWLINE        (1u<<1)
+#define PACKET_READ_DIE_ON_ERR_PACKET    (1u<<2)
+#define PACKET_READ_GENTLE_ON_READ_ERROR (1u<<3)
 int packet_read(int fd, char **src_buffer, size_t *src_len, char
 		*buffer, unsigned size, int options);
 
-- 
gitgitgadget


^ permalink raw reply related	[flat|nested] 178+ messages in thread

* [PATCH v5 04/12] pkt-line: add options argument to read_packetized_to_strbuf()
  2021-03-09 15:02       ` [PATCH v5 " Jeff Hostetler via GitGitGadget
                           ` (2 preceding siblings ...)
  2021-03-09 15:02         ` [PATCH v5 03/12] pkt-line: add PACKET_READ_GENTLE_ON_READ_ERROR option Johannes Schindelin via GitGitGadget
@ 2021-03-09 15:02         ` Johannes Schindelin via GitGitGadget
  2021-03-09 15:02         ` [PATCH v5 05/12] simple-ipc: design documentation for new IPC mechanism Jeff Hostetler via GitGitGadget
                           ` (9 subsequent siblings)
  13 siblings, 0 replies; 178+ messages in thread
From: Johannes Schindelin via GitGitGadget @ 2021-03-09 15:02 UTC (permalink / raw)
  To: git
  Cc: Ævar Arnfjörð Bjarmason, Jeff Hostetler,
	Jeff King, SZEDER Gábor, Johannes Schindelin, Chris Torek,
	Jeff Hostetler, Johannes Schindelin

From: Johannes Schindelin <johannes.schindelin@gmx.de>

Update the calling sequence of `read_packetized_to_strbuf()` to take
an options argument and not assume a fixed set of options.  Update the
only existing caller accordingly to explicitly pass the
formerly-assumed flags.

The `read_packetized_to_strbuf()` function calls `packet_read()` with
a fixed set of assumed options (`PACKET_READ_GENTLE_ON_EOF`).  This
assumption has been fine for the single existing caller
`apply_multi_file_filter()` in `convert.c`.

In a later commit we would like to add other callers to
`read_packetized_to_strbuf()` that need a different set of options.
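
For example (a sketch modelled on the simple-ipc callers added later in
this series; `fd` is a placeholder), a caller that wants gentle handling
of both EOF and read errors can now pass both flags:

    struct strbuf answer = STRBUF_INIT;

    if (read_packetized_to_strbuf(fd, &answer,
                                  PACKET_READ_GENTLE_ON_EOF |
                                  PACKET_READ_GENTLE_ON_READ_ERROR) < 0)
        return error(_("could not read IPC response"));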

Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
---
 convert.c  | 3 ++-
 pkt-line.c | 4 ++--
 pkt-line.h | 2 +-
 3 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/convert.c b/convert.c
index 976d4905cb3a..516f1095b06e 100644
--- a/convert.c
+++ b/convert.c
@@ -907,7 +907,8 @@ static int apply_multi_file_filter(const char *path, const char *src, size_t len
 		if (err)
 			goto done;
 
-		err = read_packetized_to_strbuf(process->out, &nbuf) < 0;
+		err = read_packetized_to_strbuf(process->out, &nbuf,
+						PACKET_READ_GENTLE_ON_EOF) < 0;
 		if (err)
 			goto done;
 
diff --git a/pkt-line.c b/pkt-line.c
index 22775e37a72b..695ea37b9d30 100644
--- a/pkt-line.c
+++ b/pkt-line.c
@@ -443,7 +443,7 @@ char *packet_read_line_buf(char **src, size_t *src_len, int *dst_len)
 	return packet_read_line_generic(-1, src, src_len, dst_len);
 }
 
-ssize_t read_packetized_to_strbuf(int fd_in, struct strbuf *sb_out)
+ssize_t read_packetized_to_strbuf(int fd_in, struct strbuf *sb_out, int options)
 {
 	int packet_len;
 
@@ -459,7 +459,7 @@ ssize_t read_packetized_to_strbuf(int fd_in, struct strbuf *sb_out)
 			 * that there is already room for the extra byte.
 			 */
 			sb_out->buf + sb_out->len, LARGE_PACKET_DATA_MAX+1,
-			PACKET_READ_GENTLE_ON_EOF);
+			options);
 		if (packet_len <= 0)
 			break;
 		sb_out->len += packet_len;
diff --git a/pkt-line.h b/pkt-line.h
index 80ce0187e2ea..5af5f4568768 100644
--- a/pkt-line.h
+++ b/pkt-line.h
@@ -136,7 +136,7 @@ char *packet_read_line_buf(char **src_buf, size_t *src_len, int *size);
 /*
  * Reads a stream of variable sized packets until a flush packet is detected.
  */
-ssize_t read_packetized_to_strbuf(int fd_in, struct strbuf *sb_out);
+ssize_t read_packetized_to_strbuf(int fd_in, struct strbuf *sb_out, int options);
 
 /*
  * Receive multiplexed output stream over git native protocol.
-- 
gitgitgadget


^ permalink raw reply related	[flat|nested] 178+ messages in thread

* [PATCH v5 05/12] simple-ipc: design documentation for new IPC mechanism
  2021-03-09 15:02       ` [PATCH v5 " Jeff Hostetler via GitGitGadget
                           ` (3 preceding siblings ...)
  2021-03-09 15:02         ` [PATCH v5 04/12] pkt-line: add options argument to read_packetized_to_strbuf() Johannes Schindelin via GitGitGadget
@ 2021-03-09 15:02         ` Jeff Hostetler via GitGitGadget
  2021-03-09 15:02         ` [PATCH v5 06/12] simple-ipc: add win32 implementation Jeff Hostetler via GitGitGadget
                           ` (8 subsequent siblings)
  13 siblings, 0 replies; 178+ messages in thread
From: Jeff Hostetler via GitGitGadget @ 2021-03-09 15:02 UTC (permalink / raw)
  To: git
  Cc: Ævar Arnfjörð Bjarmason, Jeff Hostetler,
	Jeff King, SZEDER Gábor, Johannes Schindelin, Chris Torek,
	Jeff Hostetler, Jeff Hostetler

From: Jeff Hostetler <jeffhost@microsoft.com>

Brief design documentation for new IPC mechanism allowing
foreground Git client to talk with an existing daemon process
at a known location using a named pipe or unix domain socket.
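
For orientation, a hedged sketch of the client side (the rendezvous
path and the request string are application-specific placeholders):

    struct ipc_client_connect_options options =
        IPC_CLIENT_CONNECT_OPTIONS_INIT;
    struct strbuf response = STRBUF_INIT;

    options.wait_if_busy = 1;

    if (!ipc_client_send_command("<rendezvous-path>" /* placeholder */,
                                 &options,
                                 "<application-request>" /* placeholder */,
                                 &response))
        printf("%s\n", response.buf);

    strbuf_release(&response);

On the server side, the entry point used later in this series is
ipc_server_run(), which takes an application-specific callback that is
invoked for each incoming request.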

Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
---
 Documentation/technical/api-simple-ipc.txt | 105 +++++++++++++++++++++
 1 file changed, 105 insertions(+)
 create mode 100644 Documentation/technical/api-simple-ipc.txt

diff --git a/Documentation/technical/api-simple-ipc.txt b/Documentation/technical/api-simple-ipc.txt
new file mode 100644
index 000000000000..d79ad323e675
--- /dev/null
+++ b/Documentation/technical/api-simple-ipc.txt
@@ -0,0 +1,105 @@
+Simple-IPC API
+==============
+
+The Simple-IPC API is a collection of `ipc_` prefixed library routines
+and a basic communication protocol that allow an IPC-client process to
+send an application-specific IPC-request message to an IPC-server
+process and receive an application-specific IPC-response message.
+
+Communication occurs over a named pipe on Windows and a Unix domain
+socket on other platforms.  IPC-clients and IPC-servers rendezvous at
+a previously agreed-to application-specific pathname (which is outside
+the scope of this design) that is local to the computer system.
+
+The IPC-server routines within the server application process create a
+thread pool to listen for connections and receive request messages
+from multiple concurrent IPC-clients.  When received, these messages
+are dispatched up to the server application callbacks for handling.
+IPC-server routines then incrementally relay responses back to the
+IPC-client.
+
+The IPC-client routines within a client application process connect
+to the IPC-server, send a request message, and wait for a response.
+When received, the response is returned to the caller.
+
+For example, the `fsmonitor--daemon` feature will be built as a server
+application on top of the IPC-server library routines.  It will have
+threads watching for file system events and a thread pool waiting for
+client connections.  Clients, such as `git status`, will request a list
+of file system events since a point in time and the server will
+respond with a list of changed files and directories.  The formats of
+the request and response are application-specific; the IPC-client and
+IPC-server routines treat them as opaque byte streams.
+
+
+Comparison with sub-process model
+---------------------------------
+
+The Simple-IPC mechanism differs from the existing `sub-process.c`
+model (Documentation/technical/long-running-process-protocol.txt)
+used by applications like Git-LFS.  In the LFS-style sub-process model
+the helper is started by the foreground process, communication happens
+via a pair of file descriptors bound to the stdin/stdout of the
+sub-process, the sub-process only serves the current foreground
+process, and the sub-process exits when the foreground process
+terminates.
+
+In the Simple-IPC model the server is a very long-running service.  It
+can service many clients at the same time and has a private socket or
+named pipe connection to each active client.  It might be started
+(on-demand) by the current client process or it might have been
+started by a previous client or by the OS at boot time.  The server
+process is not associated with a terminal and it persists after
+clients terminate.  Clients do not have access to the stdin/stdout of
+the server process and therefore must communicate over sockets or
+named pipes.
+
+
+Server startup and shutdown
+---------------------------
+
+How an application server based upon IPC-server is started is also
+outside the scope of the Simple-IPC design and is a property of the
+application using it.  For example, the server might be started or
+restarted during routine maintenance operations, or it might be
+started as a system service during the system boot-up sequence, or it
+might be started on-demand by a foreground Git command when needed.
+
+Similarly, server shutdown is a property of the application using
+the simple-ipc routines.  For example, the server might decide to
+shut down when idle or only upon explicit request.
+
+
+Simple-IPC protocol
+-------------------
+
+The Simple-IPC protocol consists of a single request message from the
+client and an optional response message from the server.  Both the
+client and server messages are unlimited in length and are terminated
+with a flush packet.
+
+The pkt-line routines (Documentation/technical/protocol-common.txt)
+are used to simplify buffer management during message generation,
+transmission, and reception.  A flush packet is used to mark the end
+of the message.  This allows the sender to incrementally generate and
+transmit the message.  It allows the receiver to incrementally receive
+the message in chunks and to know when they have received the entire
+message.
+
+The actual byte format of the client request and server response
+messages is application-specific.  The IPC layer transmits and
+receives them as opaque byte buffers without any concern for the
+content within.  It is the job of the calling application layer to
+understand the contents of the request and response messages.
+
+
+Summary
+-------
+
+Conceptually, the Simple-IPC protocol is similar to an HTTP REST
+request.  Clients connect, make an application-specific and
+stateless request, receive an application-specific
+response, and disconnect.  It is a one-round-trip facility for
+querying the server.  The Simple-IPC routines hide the socket,
+named pipe, and thread pool details and allow the application
+layer to focus on the application at hand.
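
As a concrete (hypothetical) illustration of the round trip described above, a foreground command could use the `ipc_client_send_command()` routine added later in this series roughly as follows; the daemon path and request string are made up and the request/response formats remain application-defined:

#include "cache.h"
#include "simple-ipc.h"

/*
 * Sketch: send one application-defined request to a daemon listening
 * at `path` and print its (application-defined) reply.
 */
static int ask_daemon(const char *path, const char *request)
{
	struct ipc_client_connect_options options =
		IPC_CLIENT_CONNECT_OPTIONS_INIT;
	struct strbuf answer = STRBUF_INIT;
	int ret;

	options.wait_if_busy = 1;

	ret = ipc_client_send_command(path, &options, request, &answer);
	if (!ret)
		printf("%s\n", answer.buf);

	strbuf_release(&answer);
	return ret;
}
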
-- 
gitgitgadget


^ permalink raw reply related	[flat|nested] 178+ messages in thread

* [PATCH v5 06/12] simple-ipc: add win32 implementation
  2021-03-09 15:02       ` [PATCH v5 " Jeff Hostetler via GitGitGadget
                           ` (4 preceding siblings ...)
  2021-03-09 15:02         ` [PATCH v5 05/12] simple-ipc: design documentation for new IPC mechanism Jeff Hostetler via GitGitGadget
@ 2021-03-09 15:02         ` Jeff Hostetler via GitGitGadget
  2021-03-09 15:02         ` [PATCH v5 07/12] unix-socket: eliminate static unix_stream_socket() helper function Jeff Hostetler via GitGitGadget
                           ` (7 subsequent siblings)
  13 siblings, 0 replies; 178+ messages in thread
From: Jeff Hostetler via GitGitGadget @ 2021-03-09 15:02 UTC (permalink / raw)
  To: git
  Cc: Ævar Arnfjörð Bjarmason, Jeff Hostetler,
	Jeff King, SZEDER Gábor, Johannes Schindelin, Chris Torek,
	Jeff Hostetler, Jeff Hostetler

From: Jeff Hostetler <jeffhost@microsoft.com>

Create Windows implementation of "simple-ipc" using named pipes.

Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
---
 Makefile                            |   5 +
 compat/simple-ipc/ipc-shared.c      |  28 ++
 compat/simple-ipc/ipc-win32.c       | 751 ++++++++++++++++++++++++++++
 config.mak.uname                    |   2 +
 contrib/buildsystems/CMakeLists.txt |   4 +
 simple-ipc.h                        | 228 +++++++++
 6 files changed, 1018 insertions(+)
 create mode 100644 compat/simple-ipc/ipc-shared.c
 create mode 100644 compat/simple-ipc/ipc-win32.c
 create mode 100644 simple-ipc.h

diff --git a/Makefile b/Makefile
index dd08b4ced01c..d3c42d3f4f9f 100644
--- a/Makefile
+++ b/Makefile
@@ -1667,6 +1667,11 @@ else
 	LIB_OBJS += unix-socket.o
 endif
 
+ifdef USE_WIN32_IPC
+	LIB_OBJS += compat/simple-ipc/ipc-shared.o
+	LIB_OBJS += compat/simple-ipc/ipc-win32.o
+endif
+
 ifdef NO_ICONV
 	BASIC_CFLAGS += -DNO_ICONV
 endif
diff --git a/compat/simple-ipc/ipc-shared.c b/compat/simple-ipc/ipc-shared.c
new file mode 100644
index 000000000000..1edec8159532
--- /dev/null
+++ b/compat/simple-ipc/ipc-shared.c
@@ -0,0 +1,28 @@
+#include "cache.h"
+#include "simple-ipc.h"
+#include "strbuf.h"
+#include "pkt-line.h"
+#include "thread-utils.h"
+
+#ifdef SUPPORTS_SIMPLE_IPC
+
+int ipc_server_run(const char *path, const struct ipc_server_opts *opts,
+		   ipc_server_application_cb *application_cb,
+		   void *application_data)
+{
+	struct ipc_server_data *server_data = NULL;
+	int ret;
+
+	ret = ipc_server_run_async(&server_data, path, opts,
+				   application_cb, application_data);
+	if (ret)
+		return ret;
+
+	ret = ipc_server_await(server_data);
+
+	ipc_server_free(server_data);
+
+	return ret;
+}
+
+#endif /* SUPPORTS_SIMPLE_IPC */
diff --git a/compat/simple-ipc/ipc-win32.c b/compat/simple-ipc/ipc-win32.c
new file mode 100644
index 000000000000..8f89c02037e3
--- /dev/null
+++ b/compat/simple-ipc/ipc-win32.c
@@ -0,0 +1,751 @@
+#include "cache.h"
+#include "simple-ipc.h"
+#include "strbuf.h"
+#include "pkt-line.h"
+#include "thread-utils.h"
+
+#ifndef GIT_WINDOWS_NATIVE
+#error This file can only be compiled on Windows
+#endif
+
+static int initialize_pipe_name(const char *path, wchar_t *wpath, size_t alloc)
+{
+	int off = 0;
+	struct strbuf realpath = STRBUF_INIT;
+
+	if (!strbuf_realpath(&realpath, path, 0))
+		return -1;
+
+	off = swprintf(wpath, alloc, L"\\\\.\\pipe\\");
+	if (xutftowcs(wpath + off, realpath.buf, alloc - off) < 0)
+		return -1;
+
+	/* Handle drive prefix */
+	if (wpath[off] && wpath[off + 1] == L':') {
+		wpath[off + 1] = L'_';
+		off += 2;
+	}
+
+	for (; wpath[off]; off++)
+		if (wpath[off] == L'/')
+			wpath[off] = L'\\';
+
+	strbuf_release(&realpath);
+	return 0;
+}
+
+static enum ipc_active_state get_active_state(wchar_t *pipe_path)
+{
+	if (WaitNamedPipeW(pipe_path, NMPWAIT_USE_DEFAULT_WAIT))
+		return IPC_STATE__LISTENING;
+
+	if (GetLastError() == ERROR_SEM_TIMEOUT)
+		return IPC_STATE__NOT_LISTENING;
+
+	if (GetLastError() == ERROR_FILE_NOT_FOUND)
+		return IPC_STATE__PATH_NOT_FOUND;
+
+	return IPC_STATE__OTHER_ERROR;
+}
+
+enum ipc_active_state ipc_get_active_state(const char *path)
+{
+	wchar_t pipe_path[MAX_PATH];
+
+	if (initialize_pipe_name(path, pipe_path, ARRAY_SIZE(pipe_path)) < 0)
+		return IPC_STATE__INVALID_PATH;
+
+	return get_active_state(pipe_path);
+}
+
+#define WAIT_STEP_MS (50)
+
+static enum ipc_active_state connect_to_server(
+	const wchar_t *wpath,
+	DWORD timeout_ms,
+	const struct ipc_client_connect_options *options,
+	int *pfd)
+{
+	DWORD t_start_ms, t_waited_ms;
+	DWORD step_ms;
+	HANDLE hPipe = INVALID_HANDLE_VALUE;
+	DWORD mode = PIPE_READMODE_BYTE;
+	DWORD gle;
+
+	*pfd = -1;
+
+	for (;;) {
+		hPipe = CreateFileW(wpath, GENERIC_READ | GENERIC_WRITE,
+				    0, NULL, OPEN_EXISTING, 0, NULL);
+		if (hPipe != INVALID_HANDLE_VALUE)
+			break;
+
+		gle = GetLastError();
+
+		switch (gle) {
+		case ERROR_FILE_NOT_FOUND:
+			if (!options->wait_if_not_found)
+				return IPC_STATE__PATH_NOT_FOUND;
+			if (!timeout_ms)
+				return IPC_STATE__PATH_NOT_FOUND;
+
+			step_ms = (timeout_ms < WAIT_STEP_MS) ?
+				timeout_ms : WAIT_STEP_MS;
+			sleep_millisec(step_ms);
+
+			timeout_ms -= step_ms;
+			break; /* try again */
+
+		case ERROR_PIPE_BUSY:
+			if (!options->wait_if_busy)
+				return IPC_STATE__NOT_LISTENING;
+			if (!timeout_ms)
+				return IPC_STATE__NOT_LISTENING;
+
+			t_start_ms = (DWORD)(getnanotime() / 1000000);
+
+			if (!WaitNamedPipeW(wpath, timeout_ms)) {
+				if (GetLastError() == ERROR_SEM_TIMEOUT)
+					return IPC_STATE__NOT_LISTENING;
+
+				return IPC_STATE__OTHER_ERROR;
+			}
+
+			/*
+			 * A pipe server instance became available.
+			 * Race other client processes to connect to
+			 * it.
+			 *
+			 * But first decrement our overall timeout so
+			 * that we don't starve if we keep losing the
+			 * race.  But also guard against special
+			 * NMPWAIT_ values (0 and -1).
+			 */
+			t_waited_ms = (DWORD)(getnanotime() / 1000000) - t_start_ms;
+			if (t_waited_ms < timeout_ms)
+				timeout_ms -= t_waited_ms;
+			else
+				timeout_ms = 1;
+			break; /* try again */
+
+		default:
+			return IPC_STATE__OTHER_ERROR;
+		}
+	}
+
+	if (!SetNamedPipeHandleState(hPipe, &mode, NULL, NULL)) {
+		CloseHandle(hPipe);
+		return IPC_STATE__OTHER_ERROR;
+	}
+
+	*pfd = _open_osfhandle((intptr_t)hPipe, O_RDWR|O_BINARY);
+	if (*pfd < 0) {
+		CloseHandle(hPipe);
+		return IPC_STATE__OTHER_ERROR;
+	}
+
+	/* fd now owns hPipe */
+
+	return IPC_STATE__LISTENING;
+}
+
+/*
+ * The default connection timeout for Windows clients.
+ *
+ * This is not currently part of the ipc_ API (nor the config settings)
+ * because of differences between Windows and other platforms.
+ *
+ * This value was chosen at random.
+ */
+#define WINDOWS_CONNECTION_TIMEOUT_MS (30000)
+
+enum ipc_active_state ipc_client_try_connect(
+	const char *path,
+	const struct ipc_client_connect_options *options,
+	struct ipc_client_connection **p_connection)
+{
+	wchar_t wpath[MAX_PATH];
+	enum ipc_active_state state = IPC_STATE__OTHER_ERROR;
+	int fd = -1;
+
+	*p_connection = NULL;
+
+	trace2_region_enter("ipc-client", "try-connect", NULL);
+	trace2_data_string("ipc-client", NULL, "try-connect/path", path);
+
+	if (initialize_pipe_name(path, wpath, ARRAY_SIZE(wpath)) < 0)
+		state = IPC_STATE__INVALID_PATH;
+	else
+		state = connect_to_server(wpath, WINDOWS_CONNECTION_TIMEOUT_MS,
+					  options, &fd);
+
+	trace2_data_intmax("ipc-client", NULL, "try-connect/state",
+			   (intmax_t)state);
+	trace2_region_leave("ipc-client", "try-connect", NULL);
+
+	if (state == IPC_STATE__LISTENING) {
+		(*p_connection) = xcalloc(1, sizeof(struct ipc_client_connection));
+		(*p_connection)->fd = fd;
+	}
+
+	return state;
+}
+
+void ipc_client_close_connection(struct ipc_client_connection *connection)
+{
+	if (!connection)
+		return;
+
+	if (connection->fd != -1)
+		close(connection->fd);
+
+	free(connection);
+}
+
+int ipc_client_send_command_to_connection(
+	struct ipc_client_connection *connection,
+	const char *message, struct strbuf *answer)
+{
+	int ret = 0;
+
+	strbuf_setlen(answer, 0);
+
+	trace2_region_enter("ipc-client", "send-command", NULL);
+
+	if (write_packetized_from_buf_no_flush(message, strlen(message),
+					       connection->fd) < 0 ||
+	    packet_flush_gently(connection->fd) < 0) {
+		ret = error(_("could not send IPC command"));
+		goto done;
+	}
+
+	FlushFileBuffers((HANDLE)_get_osfhandle(connection->fd));
+
+	if (read_packetized_to_strbuf(
+		    connection->fd, answer,
+		    PACKET_READ_GENTLE_ON_EOF | PACKET_READ_GENTLE_ON_READ_ERROR) < 0) {
+		ret = error(_("could not read IPC response"));
+		goto done;
+	}
+
+done:
+	trace2_region_leave("ipc-client", "send-command", NULL);
+	return ret;
+}
+
+int ipc_client_send_command(const char *path,
+			    const struct ipc_client_connect_options *options,
+			    const char *message, struct strbuf *response)
+{
+	int ret = -1;
+	enum ipc_active_state state;
+	struct ipc_client_connection *connection = NULL;
+
+	state = ipc_client_try_connect(path, options, &connection);
+
+	if (state != IPC_STATE__LISTENING)
+		return ret;
+
+	ret = ipc_client_send_command_to_connection(connection, message, response);
+
+	ipc_client_close_connection(connection);
+
+	return ret;
+}
+
+/*
+ * Duplicate the given pipe handle and wrap it in a file descriptor so
+ * that we can use pkt-line on it.
+ */
+static int dup_fd_from_pipe(const HANDLE pipe)
+{
+	HANDLE process = GetCurrentProcess();
+	HANDLE handle;
+	int fd;
+
+	if (!DuplicateHandle(process, pipe, process, &handle, 0, FALSE,
+			     DUPLICATE_SAME_ACCESS)) {
+		errno = err_win_to_posix(GetLastError());
+		return -1;
+	}
+
+	fd = _open_osfhandle((intptr_t)handle, O_RDWR|O_BINARY);
+	if (fd < 0) {
+		errno = err_win_to_posix(GetLastError());
+		CloseHandle(handle);
+		return -1;
+	}
+
+	/*
+	 * `handle` is now owned by `fd` and will be automatically closed
+	 * when the descriptor is closed.
+	 */
+
+	return fd;
+}
+
+/*
+ * Magic numbers used to annotate callback instance data.
+ * These are used to help guard against accidentally passing the
+ * wrong instance data across multiple levels of callbacks (which
+ * is easy to do if there are `void*` arguments).
+ */
+enum magic {
+	MAGIC_SERVER_REPLY_DATA,
+	MAGIC_SERVER_THREAD_DATA,
+	MAGIC_SERVER_DATA,
+};
+
+struct ipc_server_reply_data {
+	enum magic magic;
+	int fd;
+	struct ipc_server_thread_data *server_thread_data;
+};
+
+struct ipc_server_thread_data {
+	enum magic magic;
+	struct ipc_server_thread_data *next_thread;
+	struct ipc_server_data *server_data;
+	pthread_t pthread_id;
+	HANDLE hPipe;
+};
+
+/*
+ * On Windows, the conceptual "ipc-server" is implemented as a pool of
+ * n identical/peer "server-thread" threads.  That is, there is no
+ * hierarchy of threads and therefore no controller thread managing
+ * the pool.  Each thread has an independent handle to the named pipe,
+ * receives incoming connections, processes the client, and re-uses
+ * the pipe for the next client connection.
+ *
+ * Therefore, the "ipc-server" only needs to maintain a list of the
+ * spawned threads for eventual "join" purposes.
+ *
+ * A single "stop-event" is visible to all of the server threads to
+ * tell them to shutdown (when idle).
+ */
+struct ipc_server_data {
+	enum magic magic;
+	ipc_server_application_cb *application_cb;
+	void *application_data;
+	struct strbuf buf_path;
+	wchar_t wpath[MAX_PATH];
+
+	HANDLE hEventStopRequested;
+	struct ipc_server_thread_data *thread_list;
+	int is_stopped;
+};
+
+enum connect_result {
+	CR_CONNECTED = 0,
+	CR_CONNECT_PENDING,
+	CR_CONNECT_ERROR,
+	CR_WAIT_ERROR,
+	CR_SHUTDOWN,
+};
+
+static enum connect_result queue_overlapped_connect(
+	struct ipc_server_thread_data *server_thread_data,
+	OVERLAPPED *lpo)
+{
+	if (ConnectNamedPipe(server_thread_data->hPipe, lpo))
+		goto failed;
+
+	switch (GetLastError()) {
+	case ERROR_IO_PENDING:
+		return CR_CONNECT_PENDING;
+
+	case ERROR_PIPE_CONNECTED:
+		SetEvent(lpo->hEvent);
+		return CR_CONNECTED;
+
+	default:
+		break;
+	}
+
+failed:
+	error(_("ConnectNamedPipe failed for '%s' (%lu)"),
+	      server_thread_data->server_data->buf_path.buf,
+	      GetLastError());
+	return CR_CONNECT_ERROR;
+}
+
+/*
+ * Use Windows Overlapped IO to wait for a connection or for our event
+ * to be signalled.
+ */
+static enum connect_result wait_for_connection(
+	struct ipc_server_thread_data *server_thread_data,
+	OVERLAPPED *lpo)
+{
+	enum connect_result r;
+	HANDLE waitHandles[2];
+	DWORD dwWaitResult;
+
+	r = queue_overlapped_connect(server_thread_data, lpo);
+	if (r != CR_CONNECT_PENDING)
+		return r;
+
+	waitHandles[0] = server_thread_data->server_data->hEventStopRequested;
+	waitHandles[1] = lpo->hEvent;
+
+	dwWaitResult = WaitForMultipleObjects(2, waitHandles, FALSE, INFINITE);
+	switch (dwWaitResult) {
+	case WAIT_OBJECT_0 + 0:
+		return CR_SHUTDOWN;
+
+	case WAIT_OBJECT_0 + 1:
+		ResetEvent(lpo->hEvent);
+		return CR_CONNECTED;
+
+	default:
+		return CR_WAIT_ERROR;
+	}
+}
+
+/*
+ * Forward declare our reply callback function so that any compiler
+ * errors are reported when we actually define the function (in addition
+ * to any errors reported when we try to pass this callback function as
+ * a parameter in a function call).  The former are easier to understand.
+ */
+static ipc_server_reply_cb do_io_reply_callback;
+
+/*
+ * Relay application's response message to the client process.
+ * (We do not flush at this point because we allow the caller
+ * to chunk data to the client through us.)
+ */
+static int do_io_reply_callback(struct ipc_server_reply_data *reply_data,
+		       const char *response, size_t response_len)
+{
+	if (reply_data->magic != MAGIC_SERVER_REPLY_DATA)
+		BUG("reply_cb called with wrong instance data");
+
+	return write_packetized_from_buf_no_flush(response, response_len,
+						  reply_data->fd);
+}
+
+/*
+ * Receive the request/command from the client and pass it to the
+ * registered request-callback.  The request-callback will compose
+ * a response and call our reply-callback to send it to the client.
+ *
+ * Simple-IPC only contains one round trip, so we flush and close
+ * here after the response.
+ */
+static int do_io(struct ipc_server_thread_data *server_thread_data)
+{
+	struct strbuf buf = STRBUF_INIT;
+	struct ipc_server_reply_data reply_data;
+	int ret = 0;
+
+	reply_data.magic = MAGIC_SERVER_REPLY_DATA;
+	reply_data.server_thread_data = server_thread_data;
+
+	reply_data.fd = dup_fd_from_pipe(server_thread_data->hPipe);
+	if (reply_data.fd < 0)
+		return error(_("could not create fd from pipe for '%s'"),
+			     server_thread_data->server_data->buf_path.buf);
+
+	ret = read_packetized_to_strbuf(
+		reply_data.fd, &buf,
+		PACKET_READ_GENTLE_ON_EOF | PACKET_READ_GENTLE_ON_READ_ERROR);
+	if (ret >= 0) {
+		ret = server_thread_data->server_data->application_cb(
+			server_thread_data->server_data->application_data,
+			buf.buf, do_io_reply_callback, &reply_data);
+
+		packet_flush_gently(reply_data.fd);
+
+		FlushFileBuffers((HANDLE)_get_osfhandle((reply_data.fd)));
+	}
+	else {
+		/*
+		 * The client probably disconnected/shutdown before it
+		 * could send a well-formed message.  Ignore it.
+		 */
+	}
+
+	strbuf_release(&buf);
+	close(reply_data.fd);
+
+	return ret;
+}
+
+/*
+ * Handle the IPC request and response with this connected client, and
+ * then reset the pipe to prepare for the next client.
+ */
+static int use_connection(struct ipc_server_thread_data *server_thread_data)
+{
+	int ret;
+
+	ret = do_io(server_thread_data);
+
+	FlushFileBuffers(server_thread_data->hPipe);
+	DisconnectNamedPipe(server_thread_data->hPipe);
+
+	return ret;
+}
+
+/*
+ * Thread proc for an IPC server worker thread.  It handles a series of
+ * connections from clients.  It cleans and reuses the hPipe between each
+ * client.
+ */
+static void *server_thread_proc(void *_server_thread_data)
+{
+	struct ipc_server_thread_data *server_thread_data = _server_thread_data;
+	HANDLE hEventConnected = INVALID_HANDLE_VALUE;
+	OVERLAPPED oConnect;
+	enum connect_result cr;
+	int ret;
+
+	assert(server_thread_data->hPipe != INVALID_HANDLE_VALUE);
+
+	trace2_thread_start("ipc-server");
+	trace2_data_string("ipc-server", NULL, "pipe",
+			   server_thread_data->server_data->buf_path.buf);
+
+	hEventConnected = CreateEventW(NULL, TRUE, FALSE, NULL);
+
+	memset(&oConnect, 0, sizeof(oConnect));
+	oConnect.hEvent = hEventConnected;
+
+	for (;;) {
+		cr = wait_for_connection(server_thread_data, &oConnect);
+
+		switch (cr) {
+		case CR_SHUTDOWN:
+			goto finished;
+
+		case CR_CONNECTED:
+			ret = use_connection(server_thread_data);
+			if (ret == SIMPLE_IPC_QUIT) {
+				ipc_server_stop_async(
+					server_thread_data->server_data);
+				goto finished;
+			}
+			if (ret > 0) {
+				/*
+				 * Ignore (transient) IO errors with this
+				 * client and reset for the next client.
+				 */
+			}
+			break;
+
+		case CR_CONNECT_PENDING:
+			/* By construction, this should not happen. */
+			BUG("ipc-server[%s]: unexpected CR_CONNECT_PENDING",
+			    server_thread_data->server_data->buf_path.buf);
+
+		case CR_CONNECT_ERROR:
+		case CR_WAIT_ERROR:
+			/*
+			 * Ignore these theoretical errors.
+			 */
+			DisconnectNamedPipe(server_thread_data->hPipe);
+			break;
+
+		default:
+			BUG("unhandled case after wait_for_connection");
+		}
+	}
+
+finished:
+	CloseHandle(server_thread_data->hPipe);
+	CloseHandle(hEventConnected);
+
+	trace2_thread_exit();
+	return NULL;
+}
+
+static HANDLE create_new_pipe(wchar_t *wpath, int is_first)
+{
+	HANDLE hPipe;
+	DWORD dwOpenMode, dwPipeMode;
+	LPSECURITY_ATTRIBUTES lpsa = NULL;
+
+	dwOpenMode = PIPE_ACCESS_INBOUND | PIPE_ACCESS_OUTBOUND |
+		FILE_FLAG_OVERLAPPED;
+
+	dwPipeMode = PIPE_TYPE_MESSAGE | PIPE_READMODE_BYTE | PIPE_WAIT |
+		PIPE_REJECT_REMOTE_CLIENTS;
+
+	if (is_first) {
+		dwOpenMode |= FILE_FLAG_FIRST_PIPE_INSTANCE;
+
+		/*
+		 * On Windows, the first server pipe instance gets to
+		 * set the ACL / Security Attributes on the named
+		 * pipe; subsequent instances inherit and cannot
+		 * change them.
+		 *
+		 * TODO Should we allow the application layer to
+		 * specify security attributes, such as `LocalService`
+		 * or `LocalSystem`, when we create the named pipe?
+		 * This question is probably not important when the
+		 * daemon is started by a foreground user process and
+		 * only needs to talk to the current user, but may be
+		 * only needs to talk to the current user, but it may be
+		 * important if the daemon is run via the Control Panel as a
+		 */
+	}
+
+	hPipe = CreateNamedPipeW(wpath, dwOpenMode, dwPipeMode,
+				 PIPE_UNLIMITED_INSTANCES, 1024, 1024, 0, lpsa);
+
+	return hPipe;
+}
+
+int ipc_server_run_async(struct ipc_server_data **returned_server_data,
+			 const char *path, const struct ipc_server_opts *opts,
+			 ipc_server_application_cb *application_cb,
+			 void *application_data)
+{
+	struct ipc_server_data *server_data;
+	wchar_t wpath[MAX_PATH];
+	HANDLE hPipeFirst = INVALID_HANDLE_VALUE;
+	int k;
+	int ret = 0;
+	int nr_threads = opts->nr_threads;
+
+	*returned_server_data = NULL;
+
+	ret = initialize_pipe_name(path, wpath, ARRAY_SIZE(wpath));
+	if (ret < 0) {
+		errno = EINVAL;
+		return -1;
+	}
+
+	hPipeFirst = create_new_pipe(wpath, 1);
+	if (hPipeFirst == INVALID_HANDLE_VALUE) {
+		errno = EADDRINUSE;
+		return -2;
+	}
+
+	server_data = xcalloc(1, sizeof(*server_data));
+	server_data->magic = MAGIC_SERVER_DATA;
+	server_data->application_cb = application_cb;
+	server_data->application_data = application_data;
+	server_data->hEventStopRequested = CreateEvent(NULL, TRUE, FALSE, NULL);
+	strbuf_init(&server_data->buf_path, 0);
+	strbuf_addstr(&server_data->buf_path, path);
+	wcscpy(server_data->wpath, wpath);
+
+	if (nr_threads < 1)
+		nr_threads = 1;
+
+	for (k = 0; k < nr_threads; k++) {
+		struct ipc_server_thread_data *std;
+
+		std = xcalloc(1, sizeof(*std));
+		std->magic = MAGIC_SERVER_THREAD_DATA;
+		std->server_data = server_data;
+		std->hPipe = INVALID_HANDLE_VALUE;
+
+		std->hPipe = (k == 0)
+			? hPipeFirst
+			: create_new_pipe(server_data->wpath, 0);
+
+		if (std->hPipe == INVALID_HANDLE_VALUE) {
+			/*
+			 * If we've reached a pipe instance limit for
+			 * this path, just use fewer threads.
+			 */
+			free(std);
+			break;
+		}
+
+		if (pthread_create(&std->pthread_id, NULL,
+				   server_thread_proc, std)) {
+			/*
+			 * Likewise, if we're out of threads, just use
+			 * fewer threads than requested.
+			 *
+			 * However, we just give up if we can't even get
+			 * one thread.  This should not happen.
+			 */
+			if (k == 0)
+				die(_("could not start thread[0] for '%s'"),
+				    path);
+
+			CloseHandle(std->hPipe);
+			free(std);
+			break;
+		}
+
+		std->next_thread = server_data->thread_list;
+		server_data->thread_list = std;
+	}
+
+	*returned_server_data = server_data;
+	return 0;
+}
+
+int ipc_server_stop_async(struct ipc_server_data *server_data)
+{
+	if (!server_data)
+		return 0;
+
+	/*
+	 * Gently tell all of the ipc_server threads to shutdown.
+	 * This will be seen the next time they are idle (and waiting
+	 * for a connection).
+	 *
+	 * We DO NOT attempt to force them to drop an active connection.
+	 */
+	SetEvent(server_data->hEventStopRequested);
+	return 0;
+}
+
+int ipc_server_await(struct ipc_server_data *server_data)
+{
+	DWORD dwWaitResult;
+
+	if (!server_data)
+		return 0;
+
+	dwWaitResult = WaitForSingleObject(server_data->hEventStopRequested, INFINITE);
+	if (dwWaitResult != WAIT_OBJECT_0)
+		return error(_("wait for hEvent failed for '%s'"),
+			     server_data->buf_path.buf);
+
+	while (server_data->thread_list) {
+		struct ipc_server_thread_data *std = server_data->thread_list;
+
+		pthread_join(std->pthread_id, NULL);
+
+		server_data->thread_list = std->next_thread;
+		free(std);
+	}
+
+	server_data->is_stopped = 1;
+
+	return 0;
+}
+
+void ipc_server_free(struct ipc_server_data *server_data)
+{
+	if (!server_data)
+		return;
+
+	if (!server_data->is_stopped)
+		BUG("cannot free ipc-server while running for '%s'",
+		    server_data->buf_path.buf);
+
+	strbuf_release(&server_data->buf_path);
+
+	if (server_data->hEventStopRequested != INVALID_HANDLE_VALUE)
+		CloseHandle(server_data->hEventStopRequested);
+
+	while (server_data->thread_list) {
+		struct ipc_server_thread_data *std = server_data->thread_list;
+
+		server_data->thread_list = std->next_thread;
+		free(std);
+	}
+
+	free(server_data);
+}
diff --git a/config.mak.uname b/config.mak.uname
index e22d4b6d67a3..2b3303f34be8 100644
--- a/config.mak.uname
+++ b/config.mak.uname
@@ -421,6 +421,7 @@ ifeq ($(uname_S),Windows)
 	RUNTIME_PREFIX = YesPlease
 	HAVE_WPGMPTR = YesWeDo
 	NO_ST_BLOCKS_IN_STRUCT_STAT = YesPlease
+	USE_WIN32_IPC = YesPlease
 	USE_WIN32_MMAP = YesPlease
 	MMAP_PREVENTS_DELETE = UnfortunatelyYes
 	# USE_NED_ALLOCATOR = YesPlease
@@ -597,6 +598,7 @@ ifneq (,$(findstring MINGW,$(uname_S)))
 	RUNTIME_PREFIX = YesPlease
 	HAVE_WPGMPTR = YesWeDo
 	NO_ST_BLOCKS_IN_STRUCT_STAT = YesPlease
+	USE_WIN32_IPC = YesPlease
 	USE_WIN32_MMAP = YesPlease
 	MMAP_PREVENTS_DELETE = UnfortunatelyYes
 	USE_NED_ALLOCATOR = YesPlease
diff --git a/contrib/buildsystems/CMakeLists.txt b/contrib/buildsystems/CMakeLists.txt
index ac3dbc079af8..40c9e8e3bd9d 100644
--- a/contrib/buildsystems/CMakeLists.txt
+++ b/contrib/buildsystems/CMakeLists.txt
@@ -246,6 +246,10 @@ elseif(CMAKE_SYSTEM_NAME STREQUAL "Linux")
 	list(APPEND compat_SOURCES unix-socket.c)
 endif()
 
+if(CMAKE_SYSTEM_NAME STREQUAL "Windows")
+	list(APPEND compat_SOURCES compat/simple-ipc/ipc-shared.c compat/simple-ipc/ipc-win32.c)
+endif()
+
 set(EXE_EXTENSION ${CMAKE_EXECUTABLE_SUFFIX})
 
 #header checks
diff --git a/simple-ipc.h b/simple-ipc.h
new file mode 100644
index 000000000000..ab5619e3d76f
--- /dev/null
+++ b/simple-ipc.h
@@ -0,0 +1,228 @@
+#ifndef GIT_SIMPLE_IPC_H
+#define GIT_SIMPLE_IPC_H
+
+/*
+ * See Documentation/technical/api-simple-ipc.txt
+ */
+
+#if defined(GIT_WINDOWS_NATIVE)
+#define SUPPORTS_SIMPLE_IPC
+#endif
+
+#ifdef SUPPORTS_SIMPLE_IPC
+#include "pkt-line.h"
+
+/*
+ * Simple IPC Client Side API.
+ */
+
+enum ipc_active_state {
+	/*
+	 * The pipe/socket exists and the daemon is waiting for connections.
+	 */
+	IPC_STATE__LISTENING = 0,
+
+	/*
+	 * The pipe/socket exists, but the daemon is not listening.
+	 * Perhaps it is very busy.
+	 * Perhaps the daemon died without deleting the path.
+	 * Perhaps it is shutting down and draining existing clients.
+	 * Perhaps it is dead, but other clients are lingering and
+	 * still holding a reference to the pathname.
+	 */
+	IPC_STATE__NOT_LISTENING,
+
+	/*
+	 * The requested pathname is bogus and no amount of retries
+	 * will fix that.
+	 */
+	IPC_STATE__INVALID_PATH,
+
+	/*
+	 * The requested pathname is not found.  This usually means
+	 * that there is no daemon present.
+	 */
+	IPC_STATE__PATH_NOT_FOUND,
+
+	IPC_STATE__OTHER_ERROR,
+};
+
+struct ipc_client_connect_options {
+	/*
+	 * Spin under timeout if the server is running but can't
+	 * accept our connection yet.  This should always be set
+	 * unless you just want to poke the server and see if it
+	 * is alive.
+	 */
+	unsigned int wait_if_busy:1;
+
+	/*
+	 * Spin under timeout if the pipe/socket is not yet present
+	 * on the file system.  This is useful if we just started
+	 * the service and need to wait for it to become ready.
+	 */
+	unsigned int wait_if_not_found:1;
+};
+
+#define IPC_CLIENT_CONNECT_OPTIONS_INIT { \
+	.wait_if_busy = 0, \
+	.wait_if_not_found = 0, \
+}
+
+/*
+ * Determine if a server is listening on this named pipe or socket using
+ * platform-specific logic.  This might just probe the filesystem or it
+ * might make a trivial connection to the server using this pathname.
+ */
+enum ipc_active_state ipc_get_active_state(const char *path);
+
+struct ipc_client_connection {
+	int fd;
+};
+
+/*
+ * Try to connect to the daemon on the named pipe or socket.
+ *
+ * Returns IPC_STATE__LISTENING and a connection handle.
+ *
+ * Otherwise, returns info to help decide whether to retry or to
+ * spawn/respawn the server.
+ */
+enum ipc_active_state ipc_client_try_connect(
+	const char *path,
+	const struct ipc_client_connect_options *options,
+	struct ipc_client_connection **p_connection);
+
+void ipc_client_close_connection(struct ipc_client_connection *connection);
+
+/*
+ * Used by the client to synchronously send and receive a message with
+ * the server on the provided client connection.
+ *
+ * Returns 0 when successful.
+ *
+ * Calls error() and returns non-zero otherwise.
+ */
+int ipc_client_send_command_to_connection(
+	struct ipc_client_connection *connection,
+	const char *message, struct strbuf *answer);
+
+/*
+ * Used by the client to synchronously connect to the server listening
+ * at the given path, send a message, and receive its response.
+ *
+ * Returns 0 when successful.
+ *
+ * Calls error() and returns non-zero otherwise.
+ */
+int ipc_client_send_command(const char *path,
+			    const struct ipc_client_connect_options *options,
+			    const char *message, struct strbuf *answer);
+
+/*
+ * Simple IPC Server Side API.
+ */
+
+struct ipc_server_reply_data;
+
+typedef int (ipc_server_reply_cb)(struct ipc_server_reply_data *,
+				  const char *response,
+				  size_t response_len);
+
+/*
+ * Prototype for an application-supplied callback to process incoming
+ * client IPC messages and compose a reply.  The `application_cb` should
+ * use the provided `reply_cb` and `reply_data` to send an IPC response
+ * back to the client.  The `reply_cb` callback can be called multiple
+ * times for chunking purposes.  A reply message is optional and may be
+ * omitted if not necessary for the application.
+ *
+ * The return value from the application callback is otherwise ignored,
+ * but returning `SIMPLE_IPC_QUIT` causes the server to shut down.
+ */
+typedef int (ipc_server_application_cb)(void *application_data,
+					const char *request,
+					ipc_server_reply_cb *reply_cb,
+					struct ipc_server_reply_data *reply_data);
+
+#define SIMPLE_IPC_QUIT -2
+
+/*
+ * Opaque instance data to represent an IPC server instance.
+ */
+struct ipc_server_data;
+
+/*
+ * Control parameters for the IPC server instance.
+ * Use this to hide platform-specific settings.
+ */
+struct ipc_server_opts
+{
+	int nr_threads;
+};
+
+/*
+ * Start an IPC server instance in one or more background threads
+ * and return a handle to the pool.
+ *
+ * Returns 0 if the asynchronous server pool was started successfully.
+ * Returns -1 if not.
+ * Returns -2 if we could not start up because another server is using
+ * the socket or named pipe.
+ *
+ * When a client IPC message is received, the `application_cb` will be
+ * called (possibly on a random thread) to handle the message and
+ * optionally compose a reply message.
+ */
+int ipc_server_run_async(struct ipc_server_data **returned_server_data,
+			 const char *path, const struct ipc_server_opts *opts,
+			 ipc_server_application_cb *application_cb,
+			 void *application_data);
+
+/*
+ * Gently signal the IPC server pool to shut down.  No new client
+ * connections will be accepted, but existing connections will be
+ * allowed to complete.
+ */
+int ipc_server_stop_async(struct ipc_server_data *server_data);
+
+/*
+ * Block the calling thread until all threads in the IPC server pool
+ * have completed and been joined.
+ */
+int ipc_server_await(struct ipc_server_data *server_data);
+
+/*
+ * Close and free all resource handles associated with the IPC server
+ * pool.
+ */
+void ipc_server_free(struct ipc_server_data *server_data);
+
+/*
+ * Run an IPC server instance and block the calling thread of the
+ * current process.  It does not return until the IPC server has
+ * either shut down or had an unrecoverable error.
+ *
+ * The IPC server handles incoming IPC messages from client processes
+ * and may use one or more background threads as necessary.
+ *
+ * Returns 0 after the server has completed successfully.
+ * Returns -1 if the server cannot be started.
+ * Returns -2 if we could not start up because another server is using
+ * the socket or named pipe.
+ *
+ * When a client IPC message is received, the `application_cb` will be
+ * called (possibly on a random thread) to handle the message and
+ * optionally compose a reply message.
+ *
+ * Note that `ipc_server_run()` is a synchronous wrapper around the
+ * above asynchronous routines.  It effectively hides all of the
+ * server state and thread details from the caller and presents a
+ * simple synchronous interface.
+ */
+int ipc_server_run(const char *path, const struct ipc_server_opts *opts,
+		   ipc_server_application_cb *application_cb,
+		   void *application_data);
+
+#endif /* SUPPORTS_SIMPLE_IPC */
+#endif /* GIT_SIMPLE_IPC_H */
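
To show how the server-side API above fits together (a sketch only, not part of the diff; the callback body, the "quit"/"ok" strings, and the thread count are invented for illustration), a daemon built on this header could look roughly like this:

#include "cache.h"
#include "simple-ipc.h"

/*
 * Sketch: the application supplies one callback; ipc_server_run()
 * hides the named-pipe and thread-pool details and blocks until the
 * pool shuts down.
 */
static int my_application_cb(void *application_data,
			     const char *request,
			     ipc_server_reply_cb *reply_cb,
			     struct ipc_server_reply_data *reply_data)
{
	/*
	 * This may run concurrently on several server threads, so any
	 * shared state in `application_data` needs its own locking.
	 */
	if (!strcmp(request, "quit"))
		return SIMPLE_IPC_QUIT; /* ask the thread pool to shut down */

	/* the reply may be sent in one or more chunks */
	return reply_cb(reply_data, "ok", 2);
}

static int run_my_daemon(const char *path)
{
	struct ipc_server_opts opts = { .nr_threads = 4 };

	return ipc_server_run(path, &opts, my_application_cb, NULL);
}
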
-- 
gitgitgadget


^ permalink raw reply related	[flat|nested] 178+ messages in thread

* [PATCH v5 07/12] unix-socket: eliminate static unix_stream_socket() helper function
  2021-03-09 15:02       ` [PATCH v5 " Jeff Hostetler via GitGitGadget
                           ` (5 preceding siblings ...)
  2021-03-09 15:02         ` [PATCH v5 06/12] simple-ipc: add win32 implementation Jeff Hostetler via GitGitGadget
@ 2021-03-09 15:02         ` Jeff Hostetler via GitGitGadget
  2021-03-09 15:02         ` [PATCH v5 08/12] unix-socket: add backlog size option to unix_stream_listen() Jeff Hostetler via GitGitGadget
                           ` (6 subsequent siblings)
  13 siblings, 0 replies; 178+ messages in thread
From: Jeff Hostetler via GitGitGadget @ 2021-03-09 15:02 UTC (permalink / raw)
  To: git
  Cc: Ævar Arnfjörð Bjarmason, Jeff Hostetler,
	Jeff King, SZEDER Gábor, Johannes Schindelin, Chris Torek,
	Jeff Hostetler, Jeff Hostetler

From: Jeff Hostetler <jeffhost@microsoft.com>

The static helper function `unix_stream_socket()` calls `die()`.  This
is not appropriate for all callers.  Eliminate the wrapper function
and make the callers propagate the error.

Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
---
 unix-socket.c | 27 +++++++++++++--------------
 1 file changed, 13 insertions(+), 14 deletions(-)

diff --git a/unix-socket.c b/unix-socket.c
index 19ed48be9902..69f81d64e9d5 100644
--- a/unix-socket.c
+++ b/unix-socket.c
@@ -1,14 +1,6 @@
 #include "cache.h"
 #include "unix-socket.h"
 
-static int unix_stream_socket(void)
-{
-	int fd = socket(AF_UNIX, SOCK_STREAM, 0);
-	if (fd < 0)
-		die_errno("unable to create socket");
-	return fd;
-}
-
 static int chdir_len(const char *orig, int len)
 {
 	char *path = xmemdupz(orig, len);
@@ -73,13 +65,16 @@ static int unix_sockaddr_init(struct sockaddr_un *sa, const char *path,
 
 int unix_stream_connect(const char *path)
 {
-	int fd, saved_errno;
+	int fd = -1, saved_errno;
 	struct sockaddr_un sa;
 	struct unix_sockaddr_context ctx;
 
 	if (unix_sockaddr_init(&sa, path, &ctx) < 0)
 		return -1;
-	fd = unix_stream_socket();
+	fd = socket(AF_UNIX, SOCK_STREAM, 0);
+	if (fd < 0)
+		goto fail;
+
 	if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0)
 		goto fail;
 	unix_sockaddr_cleanup(&ctx);
@@ -87,15 +82,16 @@ int unix_stream_connect(const char *path)
 
 fail:
 	saved_errno = errno;
+	if (fd != -1)
+		close(fd);
 	unix_sockaddr_cleanup(&ctx);
-	close(fd);
 	errno = saved_errno;
 	return -1;
 }
 
 int unix_stream_listen(const char *path)
 {
-	int fd, saved_errno;
+	int fd = -1, saved_errno;
 	struct sockaddr_un sa;
 	struct unix_sockaddr_context ctx;
 
@@ -103,7 +99,9 @@ int unix_stream_listen(const char *path)
 
 	if (unix_sockaddr_init(&sa, path, &ctx) < 0)
 		return -1;
-	fd = unix_stream_socket();
+	fd = socket(AF_UNIX, SOCK_STREAM, 0);
+	if (fd < 0)
+		goto fail;
 
 	if (bind(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0)
 		goto fail;
@@ -116,8 +114,9 @@ int unix_stream_listen(const char *path)
 
 fail:
 	saved_errno = errno;
+	if (fd != -1)
+		close(fd);
 	unix_sockaddr_cleanup(&ctx);
-	close(fd);
 	errno = saved_errno;
 	return -1;
 }
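
For illustration (not part of the diff; the wrapper name is invented), a caller that previously relied on the helper to die() now reports the failure itself:

#include "cache.h"
#include "unix-socket.h"

/*
 * Sketch: unix_stream_connect() now returns -1 with errno set
 * instead of dying inside the helper.
 */
static int connect_or_warn(const char *path)
{
	int fd = unix_stream_connect(path);

	if (fd < 0)
		return error_errno(_("could not connect to '%s'"), path);

	return fd;
}
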
-- 
gitgitgadget


^ permalink raw reply related	[flat|nested] 178+ messages in thread

* [PATCH v5 08/12] unix-socket: add backlog size option to unix_stream_listen()
  2021-03-09 15:02       ` [PATCH v5 " Jeff Hostetler via GitGitGadget
                           ` (6 preceding siblings ...)
  2021-03-09 15:02         ` [PATCH v5 07/12] unix-socket: eliminate static unix_stream_socket() helper function Jeff Hostetler via GitGitGadget
@ 2021-03-09 15:02         ` Jeff Hostetler via GitGitGadget
  2021-03-09 15:02         ` [PATCH v5 09/12] unix-socket: disallow chdir() when creating unix domain sockets Jeff Hostetler via GitGitGadget
                           ` (5 subsequent siblings)
  13 siblings, 0 replies; 178+ messages in thread
From: Jeff Hostetler via GitGitGadget @ 2021-03-09 15:02 UTC (permalink / raw)
  To: git
  Cc: Ævar Arnfjörð Bjarmason, Jeff Hostetler,
	Jeff King, SZEDER Gábor, Johannes Schindelin, Chris Torek,
	Jeff Hostetler, Jeff Hostetler

From: Jeff Hostetler <jeffhost@microsoft.com>

Update `unix_stream_listen()` to take an options structure to override
default behaviors.  In this commit, the only option is the size of the
`listen()` backlog.

Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
---
 builtin/credential-cache--daemon.c |  3 ++-
 unix-socket.c                      | 11 +++++++++--
 unix-socket.h                      |  9 ++++++++-
 3 files changed, 19 insertions(+), 4 deletions(-)

diff --git a/builtin/credential-cache--daemon.c b/builtin/credential-cache--daemon.c
index c61f123a3b81..4c6c89ab0de2 100644
--- a/builtin/credential-cache--daemon.c
+++ b/builtin/credential-cache--daemon.c
@@ -203,9 +203,10 @@ static int serve_cache_loop(int fd)
 
 static void serve_cache(const char *socket_path, int debug)
 {
+	struct unix_stream_listen_opts opts = UNIX_STREAM_LISTEN_OPTS_INIT;
 	int fd;
 
-	fd = unix_stream_listen(socket_path);
+	fd = unix_stream_listen(socket_path, &opts);
 	if (fd < 0)
 		die_errno("unable to bind to '%s'", socket_path);
 
diff --git a/unix-socket.c b/unix-socket.c
index 69f81d64e9d5..012becd93d57 100644
--- a/unix-socket.c
+++ b/unix-socket.c
@@ -1,6 +1,8 @@
 #include "cache.h"
 #include "unix-socket.h"
 
+#define DEFAULT_UNIX_STREAM_LISTEN_BACKLOG (5)
+
 static int chdir_len(const char *orig, int len)
 {
 	char *path = xmemdupz(orig, len);
@@ -89,9 +91,11 @@ int unix_stream_connect(const char *path)
 	return -1;
 }
 
-int unix_stream_listen(const char *path)
+int unix_stream_listen(const char *path,
+		       const struct unix_stream_listen_opts *opts)
 {
 	int fd = -1, saved_errno;
+	int backlog;
 	struct sockaddr_un sa;
 	struct unix_sockaddr_context ctx;
 
@@ -106,7 +110,10 @@ int unix_stream_listen(const char *path)
 	if (bind(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0)
 		goto fail;
 
-	if (listen(fd, 5) < 0)
+	backlog = opts->listen_backlog_size;
+	if (backlog <= 0)
+		backlog = DEFAULT_UNIX_STREAM_LISTEN_BACKLOG;
+	if (listen(fd, backlog) < 0)
 		goto fail;
 
 	unix_sockaddr_cleanup(&ctx);
diff --git a/unix-socket.h b/unix-socket.h
index e271aeec5a07..ec2fb3ea7267 100644
--- a/unix-socket.h
+++ b/unix-socket.h
@@ -1,7 +1,14 @@
 #ifndef UNIX_SOCKET_H
 #define UNIX_SOCKET_H
 
+struct unix_stream_listen_opts {
+	int listen_backlog_size;
+};
+
+#define UNIX_STREAM_LISTEN_OPTS_INIT { 0 }
+
 int unix_stream_connect(const char *path);
-int unix_stream_listen(const char *path);
+int unix_stream_listen(const char *path,
+		       const struct unix_stream_listen_opts *opts);
 
 #endif /* UNIX_SOCKET_H */
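
An illustrative caller (not part of the diff; the function name and backlog value are made up) that wants a deeper backlog than the built-in default of 5 would do:

#include "cache.h"
#include "unix-socket.h"

/*
 * Sketch: listen on `path` with a larger listen() backlog.
 * Leaving listen_backlog_size at 0 keeps the default.
 */
static int listen_with_backlog(const char *path)
{
	struct unix_stream_listen_opts opts = UNIX_STREAM_LISTEN_OPTS_INIT;

	opts.listen_backlog_size = 128;

	return unix_stream_listen(path, &opts);
}
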
-- 
gitgitgadget


^ permalink raw reply related	[flat|nested] 178+ messages in thread

* [PATCH v5 09/12] unix-socket: disallow chdir() when creating unix domain sockets
  2021-03-09 15:02       ` [PATCH v5 " Jeff Hostetler via GitGitGadget
                           ` (7 preceding siblings ...)
  2021-03-09 15:02         ` [PATCH v5 08/12] unix-socket: add backlog size option to unix_stream_listen() Jeff Hostetler via GitGitGadget
@ 2021-03-09 15:02         ` Jeff Hostetler via GitGitGadget
  2021-03-09 15:02         ` [PATCH v5 10/12] unix-stream-server: create unix domain socket under lock Jeff Hostetler via GitGitGadget
                           ` (4 subsequent siblings)
  13 siblings, 0 replies; 178+ messages in thread
From: Jeff Hostetler via GitGitGadget @ 2021-03-09 15:02 UTC (permalink / raw)
  To: git
  Cc: Ævar Arnfjörð Bjarmason, Jeff Hostetler,
	Jeff King, SZEDER Gábor, Johannes Schindelin, Chris Torek,
	Jeff Hostetler, Jeff Hostetler

From: Jeff Hostetler <jeffhost@microsoft.com>

Calls to `chdir()` are dangerous in a multi-threaded context.  If
`unix_stream_listen()` or `unix_stream_connect()` is given a socket
pathname that is too long to fit in a `sockaddr_un` structure, it will
`chdir()` to the parent directory of the requested socket pathname,
create the socket using a relative pathname, and then `chdir()` back.
This is not thread-safe.

Add a `disallow_chdir` flag and teach `unix_sockaddr_init()` to refuse
the `chdir()` fallback (failing with `ENAMETOOLONG`) when it is set.

Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
---
 builtin/credential-cache.c |  2 +-
 unix-socket.c              | 17 ++++++++++++-----
 unix-socket.h              |  3 ++-
 3 files changed, 15 insertions(+), 7 deletions(-)

diff --git a/builtin/credential-cache.c b/builtin/credential-cache.c
index 9b3f70990597..76a6ba37223f 100644
--- a/builtin/credential-cache.c
+++ b/builtin/credential-cache.c
@@ -14,7 +14,7 @@
 static int send_request(const char *socket, const struct strbuf *out)
 {
 	int got_data = 0;
-	int fd = unix_stream_connect(socket);
+	int fd = unix_stream_connect(socket, 0);
 
 	if (fd < 0)
 		return -1;
diff --git a/unix-socket.c b/unix-socket.c
index 012becd93d57..e0be1badb58d 100644
--- a/unix-socket.c
+++ b/unix-socket.c
@@ -30,16 +30,23 @@ static void unix_sockaddr_cleanup(struct unix_sockaddr_context *ctx)
 }
 
 static int unix_sockaddr_init(struct sockaddr_un *sa, const char *path,
-			      struct unix_sockaddr_context *ctx)
+			      struct unix_sockaddr_context *ctx,
+			      int disallow_chdir)
 {
 	int size = strlen(path) + 1;
 
 	ctx->orig_dir = NULL;
 	if (size > sizeof(sa->sun_path)) {
-		const char *slash = find_last_dir_sep(path);
+		const char *slash;
 		const char *dir;
 		struct strbuf cwd = STRBUF_INIT;
 
+		if (disallow_chdir) {
+			errno = ENAMETOOLONG;
+			return -1;
+		}
+
+		slash = find_last_dir_sep(path);
 		if (!slash) {
 			errno = ENAMETOOLONG;
 			return -1;
@@ -65,13 +72,13 @@ static int unix_sockaddr_init(struct sockaddr_un *sa, const char *path,
 	return 0;
 }
 
-int unix_stream_connect(const char *path)
+int unix_stream_connect(const char *path, int disallow_chdir)
 {
 	int fd = -1, saved_errno;
 	struct sockaddr_un sa;
 	struct unix_sockaddr_context ctx;
 
-	if (unix_sockaddr_init(&sa, path, &ctx) < 0)
+	if (unix_sockaddr_init(&sa, path, &ctx, disallow_chdir) < 0)
 		return -1;
 	fd = socket(AF_UNIX, SOCK_STREAM, 0);
 	if (fd < 0)
@@ -101,7 +108,7 @@ int unix_stream_listen(const char *path,
 
 	unlink(path);
 
-	if (unix_sockaddr_init(&sa, path, &ctx) < 0)
+	if (unix_sockaddr_init(&sa, path, &ctx, opts->disallow_chdir) < 0)
 		return -1;
 	fd = socket(AF_UNIX, SOCK_STREAM, 0);
 	if (fd < 0)
diff --git a/unix-socket.h b/unix-socket.h
index ec2fb3ea7267..8542cdd7995d 100644
--- a/unix-socket.h
+++ b/unix-socket.h
@@ -3,11 +3,12 @@
 
 struct unix_stream_listen_opts {
 	int listen_backlog_size;
+	unsigned int disallow_chdir:1;
 };
 
 #define UNIX_STREAM_LISTEN_OPTS_INIT { 0 }
 
-int unix_stream_connect(const char *path);
+int unix_stream_connect(const char *path, int disallow_chdir);
 int unix_stream_listen(const char *path,
 		       const struct unix_stream_listen_opts *opts);
 
-- 
gitgitgadget


^ permalink raw reply related	[flat|nested] 178+ messages in thread

* [PATCH v5 10/12] unix-stream-server: create unix domain socket under lock
  2021-03-09 15:02       ` [PATCH v5 " Jeff Hostetler via GitGitGadget
                           ` (8 preceding siblings ...)
  2021-03-09 15:02         ` [PATCH v5 09/12] unix-socket: disallow chdir() when creating unix domain sockets Jeff Hostetler via GitGitGadget
@ 2021-03-09 15:02         ` Jeff Hostetler via GitGitGadget
  2021-03-10  0:18           ` Junio C Hamano
  2021-03-09 15:02         ` [PATCH v5 11/12] simple-ipc: add Unix domain socket implementation Jeff Hostetler via GitGitGadget
                           ` (3 subsequent siblings)
  13 siblings, 1 reply; 178+ messages in thread
From: Jeff Hostetler via GitGitGadget @ 2021-03-09 15:02 UTC (permalink / raw)
  To: git
  Cc: Ævar Arnfjörð Bjarmason, Jeff Hostetler,
	Jeff King, SZEDER Gábor, Johannes Schindelin, Chris Torek,
	Jeff Hostetler, Jeff Hostetler

From: Jeff Hostetler <jeffhost@microsoft.com>

Create a wrapper class for `unix_stream_listen()` that uses a ".lock"
lockfile to create the unix domain socket in a race-free manner.

Unix domain sockets have a fundamental problem on Unix systems because
they persist in the filesystem until they are deleted.  This is
independent of whether a server is actually listening for connections.
Well-behaved servers are expected to delete the socket when they
shutdown.  A new server cannot easily tell if a found socket is
attached to an active server or is leftover cruft from a dead server.
The traditional solution used by `unix_stream_listen()` is to force
delete the socket pathname and then create a new socket.  This solves
the latter (cruft) problem, but in the case of the former, it orphans
the existing server (by stealing the pathname associated with the
socket it is listening on).

We cannot directly use a .lock lockfile to create the socket because
the socket is created by `bind(2)` rather than the `open(2)` mechanism
used by `tempfile.c`.

As an alternative, we hold a plain lockfile ("<path>.lock") as a
mutual exclusion device.  Under the lock, we test if an existing
socket ("<path>") has an active server.  If not, we create a new
socket and begin listening.  Then we use "rollback" to delete the
lockfile in all cases.

This wrapper code conceptually exists at a higher level than the core
unix_stream_connect() and unix_stream_listen() routines that it
consumes.  It is isolated in a wrapper class for clarity.

Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
---
 Makefile                            |   1 +
 contrib/buildsystems/CMakeLists.txt |   2 +-
 unix-stream-server.c                | 128 ++++++++++++++++++++++++++++
 unix-stream-server.h                |  36 ++++++++
 4 files changed, 166 insertions(+), 1 deletion(-)
 create mode 100644 unix-stream-server.c
 create mode 100644 unix-stream-server.h

diff --git a/Makefile b/Makefile
index d3c42d3f4f9f..012694276f6d 100644
--- a/Makefile
+++ b/Makefile
@@ -1665,6 +1665,7 @@ ifdef NO_UNIX_SOCKETS
 	BASIC_CFLAGS += -DNO_UNIX_SOCKETS
 else
 	LIB_OBJS += unix-socket.o
+	LIB_OBJS += unix-stream-server.o
 endif
 
 ifdef USE_WIN32_IPC
diff --git a/contrib/buildsystems/CMakeLists.txt b/contrib/buildsystems/CMakeLists.txt
index 40c9e8e3bd9d..c94011269ebb 100644
--- a/contrib/buildsystems/CMakeLists.txt
+++ b/contrib/buildsystems/CMakeLists.txt
@@ -243,7 +243,7 @@ if(CMAKE_SYSTEM_NAME STREQUAL "Windows")
 
 elseif(CMAKE_SYSTEM_NAME STREQUAL "Linux")
 	add_compile_definitions(PROCFS_EXECUTABLE_PATH="/proc/self/exe" HAVE_DEV_TTY )
-	list(APPEND compat_SOURCES unix-socket.c)
+	list(APPEND compat_SOURCES unix-socket.c unix-stream-server.c)
 endif()
 
 if(CMAKE_SYSTEM_NAME STREQUAL "Windows")
diff --git a/unix-stream-server.c b/unix-stream-server.c
new file mode 100644
index 000000000000..5dfe2a9ac2c0
--- /dev/null
+++ b/unix-stream-server.c
@@ -0,0 +1,128 @@
+#include "cache.h"
+#include "lockfile.h"
+#include "unix-socket.h"
+#include "unix-stream-server.h"
+
+#define DEFAULT_LOCK_TIMEOUT (100)
+
+/*
+ * Try to connect to a unix domain socket at `path` (if it exists) and
+ * see if there is a server listening.
+ *
+ * We don't know if the socket exists, whether a server died and
+ * failed to clean up, or whether we have a live server listening, so
+ * we "poke" it.
+ *
+ * We immediately hangup without sending/receiving any data because we
+ * don't know anything about the protocol spoken and don't want to
+ * block while writing/reading data.  It is sufficient to just know
+ * that someone is listening.
+ */
+static int is_another_server_alive(const char *path,
+				   const struct unix_stream_listen_opts *opts)
+{
+	int fd = unix_stream_connect(path, opts->disallow_chdir);
+	if (fd >= 0) {
+		close(fd);
+		return 1;
+	}
+
+	return 0;
+}
+
+int unix_stream_server__create(
+	const char *path,
+	const struct unix_stream_listen_opts *opts,
+	long timeout_ms,
+	struct unix_stream_server_socket **new_server_socket)
+{
+	struct lock_file lock = LOCK_INIT;
+	int fd_socket;
+	struct unix_stream_server_socket *server_socket;
+
+	*new_server_socket = NULL;
+
+	if (timeout_ms < 0)
+		timeout_ms = DEFAULT_LOCK_TIMEOUT;
+
+	/*
+	 * Create a lock at "<path>.lock" if we can.
+	 */
+	if (hold_lock_file_for_update_timeout(&lock, path, 0, timeout_ms) < 0)
+		return -1;
+
+	/*
+	 * If another server is listening on "<path>" give up.  We do not
+	 * want to create a socket and steal future connections from them.
+	 */
+	if (is_another_server_alive(path, opts)) {
+		rollback_lock_file(&lock);
+		errno = EADDRINUSE;
+		return -2;
+	}
+
+	/*
+	 * Create and bind to a Unix domain socket at "<path>".
+	 */
+	fd_socket = unix_stream_listen(path, opts);
+	if (fd_socket < 0) {
+		int saved_errno = errno;
+		rollback_lock_file(&lock);
+		errno = saved_errno;
+		return -1;
+	}
+
+	server_socket = xcalloc(1, sizeof(*server_socket));
+	server_socket->path_socket = strdup(path);
+	server_socket->fd_socket = fd_socket;
+	lstat(path, &server_socket->st_socket);
+
+	*new_server_socket = server_socket;
+
+	/*
+	 * Always roll back (just delete) "<path>.lock" because we already created
+	 * "<path>" as a socket and do not want commit_lock_file() to do the atomic
+	 * rename trick.
+	 */
+	rollback_lock_file(&lock);
+
+	return 0;
+}
+
+void unix_stream_server__free(
+	struct unix_stream_server_socket *server_socket)
+{
+	if (!server_socket)
+		return;
+
+	if (server_socket->fd_socket >= 0) {
+		if (!unix_stream_server__was_stolen(server_socket))
+			unlink(server_socket->path_socket);
+		close(server_socket->fd_socket);
+	}
+
+	free(server_socket->path_socket);
+	free(server_socket);
+}
+
+int unix_stream_server__was_stolen(
+	struct unix_stream_server_socket *server_socket)
+{
+	struct stat st_now;
+
+	if (!server_socket)
+		return 0;
+
+	if (lstat(server_socket->path_socket, &st_now) == -1)
+		return 1;
+
+	if (st_now.st_ino != server_socket->st_socket.st_ino)
+		return 1;
+	if (st_now.st_dev != server_socket->st_socket.st_dev)
+		return 1;
+
+	if (!S_ISSOCK(st_now.st_mode))
+		return 1;
+
+	return 0;
+}
diff --git a/unix-stream-server.h b/unix-stream-server.h
new file mode 100644
index 000000000000..ef9241d0ef70
--- /dev/null
+++ b/unix-stream-server.h
@@ -0,0 +1,36 @@
+#ifndef UNIX_STREAM_SERVER_H
+#define UNIX_STREAM_SERVER_H
+
+#include "unix-socket.h"
+
+struct unix_stream_server_socket {
+	char *path_socket;
+	struct stat st_socket;
+	int fd_socket;
+};
+
+/*
+ * Create a Unix Domain Socket at the given path under the protection
+ * of a '.lock' lockfile.
+ *
+ * Returns 0 on success, -1 on error, -2 if socket is in use.
+ */
+int unix_stream_server__create(
+	const char *path,
+	const struct unix_stream_listen_opts *opts,
+	long timeout_ms,
+	struct unix_stream_server_socket **server_socket);
+
+/*
+ * Close and delete the socket.
+ */
+void unix_stream_server__free(
+	struct unix_stream_server_socket *server_socket);
+
+/*
+ * Return 1 if the inode of the pathname to our socket changes.
+ */
+int unix_stream_server__was_stolen(
+	struct unix_stream_server_socket *server_socket);
+
+#endif /* UNIX_STREAM_SERVER_H */
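
A rough sketch of how a server might use this wrapper (not part of the diff; the accept loop and function name are invented for illustration):

#include "cache.h"
#include "unix-socket.h"
#include "unix-stream-server.h"

/*
 * Sketch: create the socket under the ".lock" protection and bail out
 * if another server already owns the path.  Between clients, check
 * whether the pathname has been stolen by another server.
 */
static int serve(const char *path)
{
	struct unix_stream_listen_opts opts = UNIX_STREAM_LISTEN_OPTS_INIT;
	struct unix_stream_server_socket *server_socket = NULL;
	int client_fd;

	if (unix_stream_server__create(path, &opts, -1, &server_socket))
		return error_errno(_("could not set up socket '%s'"), path);

	while (!unix_stream_server__was_stolen(server_socket)) {
		client_fd = accept(server_socket->fd_socket, NULL, NULL);
		if (client_fd < 0)
			continue;

		/* ... read the request and send a reply (elided) ... */

		close(client_fd);
	}

	unix_stream_server__free(server_socket);
	return 0;
}
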
-- 
gitgitgadget


^ permalink raw reply related	[flat|nested] 178+ messages in thread

* [PATCH v5 11/12] simple-ipc: add Unix domain socket implementation
  2021-03-09 15:02       ` [PATCH v5 " Jeff Hostetler via GitGitGadget
                           ` (9 preceding siblings ...)
  2021-03-09 15:02         ` [PATCH v5 10/12] unix-stream-server: create unix domain socket under lock Jeff Hostetler via GitGitGadget
@ 2021-03-09 15:02         ` Jeff Hostetler via GitGitGadget
  2021-03-10  0:08           ` Junio C Hamano
  2021-03-09 15:02         ` [PATCH v5 12/12] t0052: add simple-ipc tests and t/helper/test-simple-ipc tool Jeff Hostetler via GitGitGadget
                           ` (2 subsequent siblings)
  13 siblings, 1 reply; 178+ messages in thread
From: Jeff Hostetler via GitGitGadget @ 2021-03-09 15:02 UTC (permalink / raw)
  To: git
  Cc: Ævar Arnfjörð Bjarmason, Jeff Hostetler,
	Jeff King, SZEDER Gábor, Johannes Schindelin, Chris Torek,
	Jeff Hostetler, Jeff Hostetler

From: Jeff Hostetler <jeffhost@microsoft.com>

Create Unix domain socket based implementation of "simple-ipc".

A set of `ipc_client` routines implement a client library to connect
to an `ipc_server` over a Unix domain socket, send a simple request,
and receive a single response.  Clients use blocking IO on the socket.

A set of `ipc_server` routines implement a thread pool to listen for
and concurrently service client connections.

The server creates a new Unix domain socket at a known location.  If a
socket already exists with that name, the server tries to determine if
another server is already listening on the socket or if the socket is
dead.  If socket is busy, the server exits with an error rather than
stealing the socket.  If the socket is dead, the server creates a new
one and starts up.

If, while running, the server detects that its socket has been stolen
by another server, it automatically exits.

Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
---
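As an illustration of the connect options handled by this backend (a sketch only, not part of the commit; the helper name is made up), a client that has just spawned the daemon could ask the retry loop to wait for the socket to appear and for the listener to accept:

#include "cache.h"
#include "simple-ipc.h"

/*
 * Sketch: wait (within the backend's built-in timeout) for a freshly
 * started daemon to begin listening at `path`.
 */
static int connect_after_spawn(const char *path,
			       struct ipc_client_connection **p_conn)
{
	struct ipc_client_connect_options options =
		IPC_CLIENT_CONNECT_OPTIONS_INIT;

	options.wait_if_busy = 1;
	options.wait_if_not_found = 1;

	return ipc_client_try_connect(path, &options, p_conn) ==
		IPC_STATE__LISTENING;
}
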
 Makefile                            |   2 +
 compat/simple-ipc/ipc-unix-socket.c | 986 ++++++++++++++++++++++++++++
 contrib/buildsystems/CMakeLists.txt |   2 +
 simple-ipc.h                        |  13 +-
 4 files changed, 1002 insertions(+), 1 deletion(-)
 create mode 100644 compat/simple-ipc/ipc-unix-socket.c

diff --git a/Makefile b/Makefile
index 012694276f6d..20dd65d19658 100644
--- a/Makefile
+++ b/Makefile
@@ -1666,6 +1666,8 @@ ifdef NO_UNIX_SOCKETS
 else
 	LIB_OBJS += unix-socket.o
 	LIB_OBJS += unix-stream-server.o
+	LIB_OBJS += compat/simple-ipc/ipc-shared.o
+	LIB_OBJS += compat/simple-ipc/ipc-unix-socket.o
 endif
 
 ifdef USE_WIN32_IPC
diff --git a/compat/simple-ipc/ipc-unix-socket.c b/compat/simple-ipc/ipc-unix-socket.c
new file mode 100644
index 000000000000..6e381a9e030e
--- /dev/null
+++ b/compat/simple-ipc/ipc-unix-socket.c
@@ -0,0 +1,986 @@
+#include "cache.h"
+#include "simple-ipc.h"
+#include "strbuf.h"
+#include "pkt-line.h"
+#include "thread-utils.h"
+#include "unix-socket.h"
+#include "unix-stream-server.h"
+
+#ifdef NO_UNIX_SOCKETS
+#error compat/simple-ipc/ipc-unix-socket.c requires Unix sockets
+#endif
+
+enum ipc_active_state ipc_get_active_state(const char *path)
+{
+	enum ipc_active_state state = IPC_STATE__OTHER_ERROR;
+	struct ipc_client_connect_options options
+		= IPC_CLIENT_CONNECT_OPTIONS_INIT;
+	struct stat st;
+	struct ipc_client_connection *connection_test = NULL;
+
+	options.wait_if_busy = 0;
+	options.wait_if_not_found = 0;
+
+	if (lstat(path, &st) == -1) {
+		switch (errno) {
+		case ENOENT:
+		case ENOTDIR:
+			return IPC_STATE__NOT_LISTENING;
+		default:
+			return IPC_STATE__INVALID_PATH;
+		}
+	}
+
+	/* also complain if a plain file is in the way */
+	if ((st.st_mode & S_IFMT) != S_IFSOCK)
+		return IPC_STATE__INVALID_PATH;
+
+	/*
+	 * Just because the filesystem has a S_IFSOCK type inode
+	 * at `path`, doesn't mean that there is a server listening.
+	 * Ping it to be sure.
+	 */
+	state = ipc_client_try_connect(path, &options, &connection_test);
+	ipc_client_close_connection(connection_test);
+
+	return state;
+}
+
+/*
+ * This value was chosen at random.
+ */
+#define WAIT_STEP_MS (50)
+
+/*
+ * Try to connect to the server.  If the server is just starting up or
+ * is very busy, we may not get a connection the first time.
+ */
+static enum ipc_active_state connect_to_server(
+	const char *path,
+	int timeout_ms,
+	const struct ipc_client_connect_options *options,
+	int *pfd)
+{
+	int wait_ms = WAIT_STEP_MS;
+	int k;
+
+	*pfd = -1;
+
+	for (k = 0; k < timeout_ms; k += wait_ms) {
+		int fd = unix_stream_connect(path, options->uds_disallow_chdir);
+
+		if (fd != -1) {
+			*pfd = fd;
+			return IPC_STATE__LISTENING;
+		}
+
+		if (errno == ENOENT) {
+			if (!options->wait_if_not_found)
+				return IPC_STATE__PATH_NOT_FOUND;
+
+			goto sleep_and_try_again;
+		}
+
+		if (errno == ETIMEDOUT) {
+			if (!options->wait_if_busy)
+				return IPC_STATE__NOT_LISTENING;
+
+			goto sleep_and_try_again;
+		}
+
+		if (errno == ECONNREFUSED) {
+			if (!options->wait_if_busy)
+				return IPC_STATE__NOT_LISTENING;
+
+			goto sleep_and_try_again;
+		}
+
+		return IPC_STATE__OTHER_ERROR;
+
+	sleep_and_try_again:
+		sleep_millisec(wait_ms);
+	}
+
+	return IPC_STATE__NOT_LISTENING;
+}
+
+/*
+ * A randomly chosen timeout value.
+ */
+#define MY_CONNECTION_TIMEOUT_MS (1000)
+
+enum ipc_active_state ipc_client_try_connect(
+	const char *path,
+	const struct ipc_client_connect_options *options,
+	struct ipc_client_connection **p_connection)
+{
+	enum ipc_active_state state = IPC_STATE__OTHER_ERROR;
+	int fd = -1;
+
+	*p_connection = NULL;
+
+	trace2_region_enter("ipc-client", "try-connect", NULL);
+	trace2_data_string("ipc-client", NULL, "try-connect/path", path);
+
+	state = connect_to_server(path, MY_CONNECTION_TIMEOUT_MS,
+				  options, &fd);
+
+	trace2_data_intmax("ipc-client", NULL, "try-connect/state",
+			   (intmax_t)state);
+	trace2_region_leave("ipc-client", "try-connect", NULL);
+
+	if (state == IPC_STATE__LISTENING) {
+		(*p_connection) = xcalloc(1, sizeof(struct ipc_client_connection));
+		(*p_connection)->fd = fd;
+	}
+
+	return state;
+}
+
+void ipc_client_close_connection(struct ipc_client_connection *connection)
+{
+	if (!connection)
+		return;
+
+	if (connection->fd != -1)
+		close(connection->fd);
+
+	free(connection);
+}
+
+int ipc_client_send_command_to_connection(
+	struct ipc_client_connection *connection,
+	const char *message, struct strbuf *answer)
+{
+	int ret = 0;
+
+	strbuf_setlen(answer, 0);
+
+	trace2_region_enter("ipc-client", "send-command", NULL);
+
+	if (write_packetized_from_buf_no_flush(message, strlen(message),
+					       connection->fd) < 0 ||
+	    packet_flush_gently(connection->fd) < 0) {
+		ret = error(_("could not send IPC command"));
+		goto done;
+	}
+
+	if (read_packetized_to_strbuf(
+		    connection->fd, answer,
+		    PACKET_READ_GENTLE_ON_EOF | PACKET_READ_GENTLE_ON_READ_ERROR) < 0) {
+		ret = error(_("could not read IPC response"));
+		goto done;
+	}
+
+done:
+	trace2_region_leave("ipc-client", "send-command", NULL);
+	return ret;
+}
+
+int ipc_client_send_command(const char *path,
+			    const struct ipc_client_connect_options *options,
+			    const char *message, struct strbuf *answer)
+{
+	int ret = -1;
+	enum ipc_active_state state;
+	struct ipc_client_connection *connection = NULL;
+
+	state = ipc_client_try_connect(path, options, &connection);
+
+	if (state != IPC_STATE__LISTENING)
+		return ret;
+
+	ret = ipc_client_send_command_to_connection(connection, message, answer);
+
+	ipc_client_close_connection(connection);
+
+	return ret;
+}
+
+static int set_socket_blocking_flag(int fd, int make_nonblocking)
+{
+	int flags;
+
+	flags = fcntl(fd, F_GETFL, NULL);
+
+	if (flags < 0)
+		return -1;
+
+	if (make_nonblocking)
+		flags |= O_NONBLOCK;
+	else
+		flags &= ~O_NONBLOCK;
+
+	return fcntl(fd, F_SETFL, flags);
+}
+
+/*
+ * Magic numbers used to annotate callback instance data.
+ * These are used to help guard against accidentally passing the
+ * wrong instance data across multiple levels of callbacks (which
+ * is easy to do if there are `void*` arguments).
+ */
+enum magic {
+	MAGIC_SERVER_REPLY_DATA,
+	MAGIC_WORKER_THREAD_DATA,
+	MAGIC_ACCEPT_THREAD_DATA,
+	MAGIC_SERVER_DATA,
+};
+
+struct ipc_server_reply_data {
+	enum magic magic;
+	int fd;
+	struct ipc_worker_thread_data *worker_thread_data;
+};
+
+struct ipc_worker_thread_data {
+	enum magic magic;
+	struct ipc_worker_thread_data *next_thread;
+	struct ipc_server_data *server_data;
+	pthread_t pthread_id;
+};
+
+struct ipc_accept_thread_data {
+	enum magic magic;
+	struct ipc_server_data *server_data;
+
+	struct unix_stream_server_socket *server_socket;
+
+	int fd_send_shutdown;
+	int fd_wait_shutdown;
+	pthread_t pthread_id;
+};
+
+/*
+ * With unix-sockets, the conceptual "ipc-server" is implemented as a single
+ * controller "accept-thread" thread and a pool of "worker-thread" threads.
+ * The former does the usual `accept()` loop and dispatches connections
+ * to an idle worker thread.  The worker threads wait in an idle loop for
+ * a new connection, communicate with the client and relay data to/from
+ * the `application_cb`, and then wait for another connection from the
+ * accept-thread.  This avoids the overhead of constantly creating and
+ * destroying threads.
+ */
+struct ipc_server_data {
+	enum magic magic;
+	ipc_server_application_cb *application_cb;
+	void *application_data;
+	struct strbuf buf_path;
+
+	struct ipc_accept_thread_data *accept_thread;
+	struct ipc_worker_thread_data *worker_thread_list;
+
+	pthread_mutex_t work_available_mutex;
+	pthread_cond_t work_available_cond;
+
+	/*
+	 * Accepted but not yet processed client connections are kept
+	 * in a circular buffer FIFO.  The queue is empty when the
+	 * positions are equal.
+	 */
+	int *fifo_fds;
+	int queue_size;
+	int back_pos;
+	int front_pos;
+
+	int shutdown_requested;
+	int is_stopped;
+};
+
+/*
+ * Remove and return the oldest queued connection.
+ *
+ * Returns -1 if empty.
+ */
+static int fifo_dequeue(struct ipc_server_data *server_data)
+{
+	/* ASSERT holding mutex */
+
+	int fd;
+
+	if (server_data->back_pos == server_data->front_pos)
+		return -1;
+
+	fd = server_data->fifo_fds[server_data->front_pos];
+	server_data->fifo_fds[server_data->front_pos] = -1;
+
+	server_data->front_pos++;
+	if (server_data->front_pos == server_data->queue_size)
+		server_data->front_pos = 0;
+
+	return fd;
+}
+
+/*
+ * Push a new fd onto the back of the queue.
+ *
+ * Drop it and return -1 if the queue is already full.
+ */
+static int fifo_enqueue(struct ipc_server_data *server_data, int fd)
+{
+	/* ASSERT holding mutex */
+
+	int next_back_pos;
+
+	next_back_pos = server_data->back_pos + 1;
+	if (next_back_pos == server_data->queue_size)
+		next_back_pos = 0;
+
+	if (next_back_pos == server_data->front_pos) {
+		/* Queue is full. Just drop it. */
+		close(fd);
+		return -1;
+	}
+
+	server_data->fifo_fds[server_data->back_pos] = fd;
+	server_data->back_pos = next_back_pos;
+
+	return fd;
+}
+
+/*
+ * Wait for a connection to be queued to the FIFO and return it.
+ *
+ * Returns -1 if someone has already requested a shutdown.
+ */
+static int worker_thread__wait_for_connection(
+	struct ipc_worker_thread_data *worker_thread_data)
+{
+	/* ASSERT NOT holding mutex */
+
+	struct ipc_server_data *server_data = worker_thread_data->server_data;
+	int fd = -1;
+
+	pthread_mutex_lock(&server_data->work_available_mutex);
+	for (;;) {
+		if (server_data->shutdown_requested)
+			break;
+
+		fd = fifo_dequeue(server_data);
+		if (fd >= 0)
+			break;
+
+		pthread_cond_wait(&server_data->work_available_cond,
+				  &server_data->work_available_mutex);
+	}
+	pthread_mutex_unlock(&server_data->work_available_mutex);
+
+	return fd;
+}
+
+/*
+ * Forward declare our reply callback function so that any compiler
+ * errors are reported when we actually define the function (in addition
+ * to any errors reported when we try to pass this callback function as
+ * a parameter in a function call).  The former are easier to understand.
+ */
+static ipc_server_reply_cb do_io_reply_callback;
+
+/*
+ * Relay application's response message to the client process.
+ * (We do not flush at this point because we allow the caller
+ * to chunk data to the client thru us.)
+ */
+static int do_io_reply_callback(struct ipc_server_reply_data *reply_data,
+		       const char *response, size_t response_len)
+{
+	if (reply_data->magic != MAGIC_SERVER_REPLY_DATA)
+		BUG("reply_cb called with wrong instance data");
+
+	return write_packetized_from_buf_no_flush(response, response_len,
+						  reply_data->fd);
+}
+
+/* A randomly chosen value. */
+#define MY_WAIT_POLL_TIMEOUT_MS (10)
+
+/*
+ * If the client hangs up without sending any data on the wire, just
+ * quietly close the socket and ignore this client.
+ *
+ * This worker thread is committed to reading the IPC request data
+ * from the client at the other end of this fd.  Wait here for the
+ * client to actually put something on the wire -- because if the
+ * client just does a ping (connect and hangup without sending any
+ * data), our use of the pkt-line read routines will spew an error
+ * message.
+ *
+ * Return -1 if the client hung up.
+ * Return 0 if data (possibly incomplete) is ready.
+ */
+static int worker_thread__wait_for_io_start(
+	struct ipc_worker_thread_data *worker_thread_data,
+	int fd)
+{
+	struct ipc_server_data *server_data = worker_thread_data->server_data;
+	struct pollfd pollfd[1];
+	int result;
+
+	for (;;) {
+		pollfd[0].fd = fd;
+		pollfd[0].events = POLLIN;
+
+		result = poll(pollfd, 1, MY_WAIT_POLL_TIMEOUT_MS);
+		if (result < 0) {
+			if (errno == EINTR)
+				continue;
+			goto cleanup;
+		}
+
+		if (result == 0) {
+			/* a timeout */
+
+			int in_shutdown;
+
+			pthread_mutex_lock(&server_data->work_available_mutex);
+			in_shutdown = server_data->shutdown_requested;
+			pthread_mutex_unlock(&server_data->work_available_mutex);
+
+			/*
+			 * If a shutdown is already in progress and this
+			 * client has not started talking yet, just drop it.
+			 */
+			if (in_shutdown)
+				goto cleanup;
+			continue;
+		}
+
+		if (pollfd[0].revents & POLLHUP)
+			goto cleanup;
+
+		if (pollfd[0].revents & POLLIN)
+			return 0;
+
+		goto cleanup;
+	}
+
+cleanup:
+	close(fd);
+	return -1;
+}
+
+/*
+ * Receive the request/command from the client and pass it to the
+ * registered request-callback.  The request-callback will compose
+ * a response and call our reply-callback to send it to the client.
+ */
+static int worker_thread__do_io(
+	struct ipc_worker_thread_data *worker_thread_data,
+	int fd)
+{
+	/* ASSERT NOT holding lock */
+
+	struct strbuf buf = STRBUF_INIT;
+	struct ipc_server_reply_data reply_data;
+	int ret = 0;
+
+	reply_data.magic = MAGIC_SERVER_REPLY_DATA;
+	reply_data.worker_thread_data = worker_thread_data;
+
+	reply_data.fd = fd;
+
+	ret = read_packetized_to_strbuf(
+		reply_data.fd, &buf,
+		PACKET_READ_GENTLE_ON_EOF | PACKET_READ_GENTLE_ON_READ_ERROR);
+	if (ret >= 0) {
+		ret = worker_thread_data->server_data->application_cb(
+			worker_thread_data->server_data->application_data,
+			buf.buf, do_io_reply_callback, &reply_data);
+
+		packet_flush_gently(reply_data.fd);
+	}
+	else {
+		/*
+		 * The client probably disconnected/shut down before it
+		 * could send a well-formed message.  Ignore it.
+		 */
+	}
+
+	strbuf_release(&buf);
+	close(reply_data.fd);
+
+	return ret;
+}
+
+/*
+ * Block SIGPIPE on the current thread (so that we get EPIPE from
+ * write() rather than an actual signal).
+ *
+ * Note that using sigchain_push() and _pop() to control SIGPIPE
+ * around our IO calls is not thread safe:
+ * [] It uses a global stack of handler frames.
+ * [] It uses ALLOC_GROW() to resize it.
+ * [] Finally, according to the `signal(2)` man-page:
+ *    "The effects of `signal()` in a multithreaded process are unspecified."
+ */
+static void thread_block_sigpipe(sigset_t *old_set)
+{
+	sigset_t new_set;
+
+	sigemptyset(&new_set);
+	sigaddset(&new_set, SIGPIPE);
+
+	sigemptyset(old_set);
+	pthread_sigmask(SIG_BLOCK, &new_set, old_set);
+}
+
+/*
+ * Thread proc for an IPC worker thread.  It handles a series of
+ * connections from clients.  It pulls the next fd from the queue,
+ * processes it, and then waits for the next client.
+ *
+ * Block SIGPIPE in this worker thread for the life of the thread.
+ * This avoids stray (and sometimes delayed) SIGPIPE signals caused
+ * by client errors and/or when we are under extremely heavy IO load.
+ *
+ * This means that the application callback will have SIGPIPE blocked.
+ * The callback should not change it.
+ */
+static void *worker_thread_proc(void *_worker_thread_data)
+{
+	struct ipc_worker_thread_data *worker_thread_data = _worker_thread_data;
+	struct ipc_server_data *server_data = worker_thread_data->server_data;
+	sigset_t old_set;
+	int fd, io;
+	int ret;
+
+	trace2_thread_start("ipc-worker");
+
+	thread_block_sigpipe(&old_set);
+
+	for (;;) {
+		fd = worker_thread__wait_for_connection(worker_thread_data);
+		if (fd == -1)
+			break; /* in shutdown */
+
+		io = worker_thread__wait_for_io_start(worker_thread_data, fd);
+		if (io == -1)
+			continue; /* client hung up without sending anything */
+
+		ret = worker_thread__do_io(worker_thread_data, fd);
+
+		if (ret == SIMPLE_IPC_QUIT) {
+			trace2_data_string("ipc-worker", NULL, "queue_stop_async",
+					   "application_quit");
+			/*
+			 * The application layer is telling the ipc-server
+			 * layer to shut down.
+			 *
+			 * We DO NOT have a response to send to the client.
+			 *
+			 * Queue an async stop (to stop the other threads) and
+			 * allow this worker thread to exit now (no sense waiting
+			 * for the thread-pool shutdown signal).
+			 *
+			 * Other non-idle worker threads are allowed to finish
+			 * responding to their current clients.
+			 */
+			ipc_server_stop_async(server_data);
+			break;
+		}
+	}
+
+	trace2_thread_exit();
+	return NULL;
+}
+
+/* A randomly chosen value. */
+#define MY_ACCEPT_POLL_TIMEOUT_MS (60 * 1000)
+
+/*
+ * Accept a new client connection on our socket.  This uses non-blocking
+ * IO so that we can also wait for shutdown requests on our socket-pair
+ * without actually spinning on a fast timeout.
+ */
+static int accept_thread__wait_for_connection(
+	struct ipc_accept_thread_data *accept_thread_data)
+{
+	struct pollfd pollfd[2];
+	int result;
+
+	for (;;) {
+		pollfd[0].fd = accept_thread_data->fd_wait_shutdown;
+		pollfd[0].events = POLLIN;
+
+		pollfd[1].fd = accept_thread_data->server_socket->fd_socket;
+		pollfd[1].events = POLLIN;
+
+		result = poll(pollfd, 2, MY_ACCEPT_POLL_TIMEOUT_MS);
+		if (result < 0) {
+			if (errno == EINTR)
+				continue;
+			return result;
+		}
+
+		if (result == 0) {
+			/* a timeout */
+
+			/*
+			 * If someone deletes or force-creates a new unix
+			 * domain socket at our path, all future clients
+			 * will be routed elsewhere and we silently starve.
+			 * If that happens, just queue a shutdown.
+			 */
+			if (unix_stream_server__was_stolen(
+				    accept_thread_data->server_socket)) {
+				trace2_data_string("ipc-accept", NULL,
+						   "queue_stop_async",
+						   "socket_stolen");
+				ipc_server_stop_async(
+					accept_thread_data->server_data);
+			}
+			continue;
+		}
+
+		if (pollfd[0].revents & POLLIN) {
+			/* shutdown message queued to socketpair */
+			return -1;
+		}
+
+		if (pollfd[1].revents & POLLIN) {
+			/* a connection is available on server_socket */
+
+			int client_fd =
+				accept(accept_thread_data->server_socket->fd_socket,
+				       NULL, NULL);
+			if (client_fd >= 0)
+				return client_fd;
+
+			/*
+			 * An error here is unlikely -- it probably
+			 * indicates that the connecting process has
+			 * already dropped the connection.
+			 */
+			continue;
+		}
+
+		BUG("unhandled poll result errno=%d r[0]=%d r[1]=%d",
+		    errno, pollfd[0].revents, pollfd[1].revents);
+	}
+}
+
+/*
+ * Thread proc for the IPC server "accept thread".  This waits for
+ * an incoming socket connection, appends it to the queue of available
+ * connections, and notifies a worker thread to process it.
+ *
+ * Block SIGPIPE in this thread for the life of the thread.  This
+ * avoids any stray SIGPIPE signals when closing pipe fds under
+ * extremely heavy loads (such as when the fifo queue is full and we
+ * drop incoming connections).
+ */
+static void *accept_thread_proc(void *_accept_thread_data)
+{
+	struct ipc_accept_thread_data *accept_thread_data = _accept_thread_data;
+	struct ipc_server_data *server_data = accept_thread_data->server_data;
+	sigset_t old_set;
+
+	trace2_thread_start("ipc-accept");
+
+	thread_block_sigpipe(&old_set);
+
+	for (;;) {
+		int client_fd = accept_thread__wait_for_connection(
+			accept_thread_data);
+
+		pthread_mutex_lock(&server_data->work_available_mutex);
+		if (server_data->shutdown_requested) {
+			pthread_mutex_unlock(&server_data->work_available_mutex);
+			if (client_fd >= 0)
+				close(client_fd);
+			break;
+		}
+
+		if (client_fd < 0) {
+			/* ignore transient accept() errors */
+		}
+		else {
+			fifo_enqueue(server_data, client_fd);
+			pthread_cond_broadcast(&server_data->work_available_cond);
+		}
+		pthread_mutex_unlock(&server_data->work_available_mutex);
+	}
+
+	trace2_thread_exit();
+	return NULL;
+}
+
+/*
+ * We can't predict the connection arrival rate relative to the worker
+ * processing rate; therefore we allow the "accept-thread" to queue up
+ * a generous number of connections, since we'd rather have the client
+ * not time out unnecessarily if we can avoid it.  (The assumption is
+ * that this will be used for FSMonitor and a few-second wait on a
+ * connection is better than having the client time out and do the full
+ * computation itself.)
+ *
+ * The FIFO queue size is set to a multiple of the worker pool size.
+ * This value was chosen at random.
+ */
+#define FIFO_SCALE (100)
+
+/*
+ * The backlog value for `listen(2)`.  This doesn't need to be huge,
+ * rather just large enough for our "accept-thread" to wake up and
+ * queue incoming connections onto the FIFO without the kernel
+ * dropping any.
+ *
+ * This value was chosen at random.
+ */
+#define LISTEN_BACKLOG (50)
+
+static int create_listener_socket(
+	const char *path,
+	const struct ipc_server_opts *ipc_opts,
+	struct unix_stream_server_socket **new_server_socket)
+{
+	struct unix_stream_server_socket *server_socket = NULL;
+	struct unix_stream_listen_opts uslg_opts = UNIX_STREAM_LISTEN_OPTS_INIT;
+	int ret;
+
+	uslg_opts.listen_backlog_size = LISTEN_BACKLOG;
+	uslg_opts.disallow_chdir = ipc_opts->uds_disallow_chdir;
+
+	ret = unix_stream_server__create(path, &uslg_opts, -1, &server_socket);
+	if (ret)
+		return ret;
+
+	if (set_socket_blocking_flag(server_socket->fd_socket, 1)) {
+		int saved_errno = errno;
+		unix_stream_server__free(server_socket);
+		errno = saved_errno;
+		return -1;
+	}
+
+	*new_server_socket = server_socket;
+
+	trace2_data_string("ipc-server", NULL, "listen-with-lock", path);
+	return 0;
+}
+
+static int setup_listener_socket(
+	const char *path,
+	const struct ipc_server_opts *ipc_opts,
+	struct unix_stream_server_socket **new_server_socket)
+{
+	int ret, saved_errno;
+
+	trace2_region_enter("ipc-server", "create-listener_socket", NULL);
+
+	ret = create_listener_socket(path, ipc_opts, new_server_socket);
+
+	saved_errno = errno;
+	trace2_region_leave("ipc-server", "create-listener_socket", NULL);
+	errno = saved_errno;
+
+	return ret;
+}
+
+/*
+ * Start IPC server in a pool of background threads.
+ */
+int ipc_server_run_async(struct ipc_server_data **returned_server_data,
+			 const char *path, const struct ipc_server_opts *opts,
+			 ipc_server_application_cb *application_cb,
+			 void *application_data)
+{
+	struct unix_stream_server_socket *server_socket = NULL;
+	struct ipc_server_data *server_data;
+	int sv[2];
+	int k;
+	int ret;
+	int nr_threads = opts->nr_threads;
+
+	*returned_server_data = NULL;
+
+	/*
+	 * Create a socketpair and set sv[1] to non-blocking.  This
+	 * will be used to send a shutdown message to the accept-thread
+	 * and allows the accept-thread to wait on EITHER a client
+	 * connection or a shutdown request without spinning.
+	 */
+	if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0)
+		return -1;
+
+	if (set_socket_blocking_flag(sv[1], 1)) {
+		int saved_errno = errno;
+		close(sv[0]);
+		close(sv[1]);
+		errno = saved_errno;
+		return -1;
+	}
+
+	ret = setup_listener_socket(path, opts, &server_socket);
+	if (ret) {
+		int saved_errno = errno;
+		close(sv[0]);
+		close(sv[1]);
+		errno = saved_errno;
+		return ret;
+	}
+
+	server_data = xcalloc(1, sizeof(*server_data));
+	server_data->magic = MAGIC_SERVER_DATA;
+	server_data->application_cb = application_cb;
+	server_data->application_data = application_data;
+	strbuf_init(&server_data->buf_path, 0);
+	strbuf_addstr(&server_data->buf_path, path);
+
+	if (nr_threads < 1)
+		nr_threads = 1;
+
+	pthread_mutex_init(&server_data->work_available_mutex, NULL);
+	pthread_cond_init(&server_data->work_available_cond, NULL);
+
+	server_data->queue_size = nr_threads * FIFO_SCALE;
+	server_data->fifo_fds = xcalloc(server_data->queue_size,
+					sizeof(*server_data->fifo_fds));
+
+	server_data->accept_thread =
+		xcalloc(1, sizeof(*server_data->accept_thread));
+	server_data->accept_thread->magic = MAGIC_ACCEPT_THREAD_DATA;
+	server_data->accept_thread->server_data = server_data;
+	server_data->accept_thread->server_socket = server_socket;
+	server_data->accept_thread->fd_send_shutdown = sv[0];
+	server_data->accept_thread->fd_wait_shutdown = sv[1];
+
+	if (pthread_create(&server_data->accept_thread->pthread_id, NULL,
+			   accept_thread_proc, server_data->accept_thread))
+		die_errno(_("could not start accept_thread '%s'"), path);
+
+	for (k = 0; k < nr_threads; k++) {
+		struct ipc_worker_thread_data *wtd;
+
+		wtd = xcalloc(1, sizeof(*wtd));
+		wtd->magic = MAGIC_WORKER_THREAD_DATA;
+		wtd->server_data = server_data;
+
+		if (pthread_create(&wtd->pthread_id, NULL, worker_thread_proc,
+				   wtd)) {
+			if (k == 0)
+				die(_("could not start worker[0] for '%s'"),
+				    path);
+			/*
+			 * Limp along with the thread pool that we have.
+			 */
+			break;
+		}
+
+		wtd->next_thread = server_data->worker_thread_list;
+		server_data->worker_thread_list = wtd;
+	}
+
+	*returned_server_data = server_data;
+	return 0;
+}
+
+/*
+ * Gently tell the IPC server threads to shut down.
+ * Can be run on any thread.
+ */
+int ipc_server_stop_async(struct ipc_server_data *server_data)
+{
+	/* ASSERT NOT holding mutex */
+
+	int fd;
+
+	if (!server_data)
+		return 0;
+
+	trace2_region_enter("ipc-server", "server-stop-async", NULL);
+
+	pthread_mutex_lock(&server_data->work_available_mutex);
+
+	server_data->shutdown_requested = 1;
+
+	/*
+	 * Write a byte to the shutdown socket pair to wake up the
+	 * accept-thread.
+	 */
+	if (write(server_data->accept_thread->fd_send_shutdown, "Q", 1) < 0)
+		error_errno("could not write to fd_send_shutdown");
+
+	/*
+	 * Drain the queue of existing connections.
+	 */
+	while ((fd = fifo_dequeue(server_data)) != -1)
+		close(fd);
+
+	/*
+	 * Gently tell worker threads to stop processing new connections
+	 * and exit.  (This does not abort in-progress conversations.)
+	 */
+	pthread_cond_broadcast(&server_data->work_available_cond);
+
+	pthread_mutex_unlock(&server_data->work_available_mutex);
+
+	trace2_region_leave("ipc-server", "server-stop-async", NULL);
+
+	return 0;
+}
+
+/*
+ * Wait for all IPC server threads to stop.
+ */
+int ipc_server_await(struct ipc_server_data *server_data)
+{
+	pthread_join(server_data->accept_thread->pthread_id, NULL);
+
+	if (!server_data->shutdown_requested)
+		BUG("ipc-server: accept-thread stopped for '%s'",
+		    server_data->buf_path.buf);
+
+	while (server_data->worker_thread_list) {
+		struct ipc_worker_thread_data *wtd =
+			server_data->worker_thread_list;
+
+		pthread_join(wtd->pthread_id, NULL);
+
+		server_data->worker_thread_list = wtd->next_thread;
+		free(wtd);
+	}
+
+	server_data->is_stopped = 1;
+
+	return 0;
+}
+
+void ipc_server_free(struct ipc_server_data *server_data)
+{
+	struct ipc_accept_thread_data *accept_thread_data;
+
+	if (!server_data)
+		return;
+
+	if (!server_data->is_stopped)
+		BUG("cannot free ipc-server while running for '%s'",
+		    server_data->buf_path.buf);
+
+	accept_thread_data = server_data->accept_thread;
+	if (accept_thread_data) {
+		unix_stream_server__free(accept_thread_data->server_socket);
+
+		if (accept_thread_data->fd_send_shutdown != -1)
+			close(accept_thread_data->fd_send_shutdown);
+		if (accept_thread_data->fd_wait_shutdown != -1)
+			close(accept_thread_data->fd_wait_shutdown);
+
+		free(server_data->accept_thread);
+	}
+
+	while (server_data->worker_thread_list) {
+		struct ipc_worker_thread_data *wtd =
+			server_data->worker_thread_list;
+
+		server_data->worker_thread_list = wtd->next_thread;
+		free(wtd);
+	}
+
+	pthread_cond_destroy(&server_data->work_available_cond);
+	pthread_mutex_destroy(&server_data->work_available_mutex);
+
+	strbuf_release(&server_data->buf_path);
+
+	free(server_data->fifo_fds);
+	free(server_data);
+}
diff --git a/contrib/buildsystems/CMakeLists.txt b/contrib/buildsystems/CMakeLists.txt
index c94011269ebb..9897fcc8ea2a 100644
--- a/contrib/buildsystems/CMakeLists.txt
+++ b/contrib/buildsystems/CMakeLists.txt
@@ -248,6 +248,8 @@ endif()
 
 if(CMAKE_SYSTEM_NAME STREQUAL "Windows")
 	list(APPEND compat_SOURCES compat/simple-ipc/ipc-shared.c compat/simple-ipc/ipc-win32.c)
+else()
+	list(APPEND compat_SOURCES compat/simple-ipc/ipc-shared.c compat/simple-ipc/ipc-unix-socket.c)
 endif()
 
 set(EXE_EXTENSION ${CMAKE_EXECUTABLE_SUFFIX})
diff --git a/simple-ipc.h b/simple-ipc.h
index ab5619e3d76f..dc3606e30bd6 100644
--- a/simple-ipc.h
+++ b/simple-ipc.h
@@ -5,7 +5,7 @@
  * See Documentation/technical/api-simple-ipc.txt
  */
 
-#if defined(GIT_WINDOWS_NATIVE)
+#if defined(GIT_WINDOWS_NATIVE) || !defined(NO_UNIX_SOCKETS)
 #define SUPPORTS_SIMPLE_IPC
 #endif
 
@@ -62,11 +62,17 @@ struct ipc_client_connect_options {
 	 * the service and need to wait for it to become ready.
 	 */
 	unsigned int wait_if_not_found:1;
+
+	/*
+	 * Disallow chdir() when creating a Unix domain socket.
+	 */
+	unsigned int uds_disallow_chdir:1;
 };
 
 #define IPC_CLIENT_CONNECT_OPTIONS_INIT { \
 	.wait_if_busy = 0, \
 	.wait_if_not_found = 0, \
+	.uds_disallow_chdir = 0, \
 }
 
 /*
@@ -159,6 +165,11 @@ struct ipc_server_data;
 struct ipc_server_opts
 {
 	int nr_threads;
+
+	/*
+	 * Disallow chdir() when creating a Unix domain socket.
+	 */
+	unsigned int uds_disallow_chdir:1;
 };
 
 /*
-- 
gitgitgadget


^ permalink raw reply related	[flat|nested] 178+ messages in thread

* [PATCH v5 12/12] t0052: add simple-ipc tests and t/helper/test-simple-ipc tool
  2021-03-09 15:02       ` [PATCH v5 " Jeff Hostetler via GitGitGadget
                           ` (10 preceding siblings ...)
  2021-03-09 15:02         ` [PATCH v5 11/12] simple-ipc: add Unix domain socket implementation Jeff Hostetler via GitGitGadget
@ 2021-03-09 15:02         ` Jeff Hostetler via GitGitGadget
  2021-03-09 23:28         ` [PATCH v5 00/12] Simple IPC Mechanism Junio C Hamano
  2021-03-15 21:08         ` [PATCH v6 " Jeff Hostetler via GitGitGadget
  13 siblings, 0 replies; 178+ messages in thread
From: Jeff Hostetler via GitGitGadget @ 2021-03-09 15:02 UTC (permalink / raw)
  To: git
  Cc: Ævar Arnfjörð Bjarmason, Jeff Hostetler,
	Jeff King, SZEDER Gábor, Johannes Schindelin, Chris Torek,
	Jeff Hostetler, Jeff Hostetler

From: Jeff Hostetler <jeffhost@microsoft.com>

Create t0052-simple-ipc.sh with unit tests for the "simple-ipc" mechanism.

Create t/helper/test-simple-ipc test tool to exercise the "simple-ipc"
functions.

When the tool is invoked with "run-daemon", it runs a server to listen
for "simple-ipc" connections on a test socket or named pipe and
responds to a set of commands to exercise/stress the communication
setup.

When the tool is invoked with "start-daemon", it spawns a "run-daemon"
command in the background and waits for the server to become ready
before exiting.  (This helps make unit tests in t0052 more predictable
and avoids the need for arbitrary sleeps in the test script.)

The tool also has a series of client "send" commands to send commands
and data to a server instance.

Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
---
 Makefile                   |   1 +
 t/helper/test-simple-ipc.c | 787 +++++++++++++++++++++++++++++++++++++
 t/helper/test-tool.c       |   1 +
 t/helper/test-tool.h       |   1 +
 t/t0052-simple-ipc.sh      | 122 ++++++
 5 files changed, 912 insertions(+)
 create mode 100644 t/helper/test-simple-ipc.c
 create mode 100755 t/t0052-simple-ipc.sh

diff --git a/Makefile b/Makefile
index 20dd65d19658..e556388d28d0 100644
--- a/Makefile
+++ b/Makefile
@@ -734,6 +734,7 @@ TEST_BUILTINS_OBJS += test-serve-v2.o
 TEST_BUILTINS_OBJS += test-sha1.o
 TEST_BUILTINS_OBJS += test-sha256.o
 TEST_BUILTINS_OBJS += test-sigchain.o
+TEST_BUILTINS_OBJS += test-simple-ipc.o
 TEST_BUILTINS_OBJS += test-strcmp-offset.o
 TEST_BUILTINS_OBJS += test-string-list.o
 TEST_BUILTINS_OBJS += test-submodule-config.o
diff --git a/t/helper/test-simple-ipc.c b/t/helper/test-simple-ipc.c
new file mode 100644
index 000000000000..42040ef81b1e
--- /dev/null
+++ b/t/helper/test-simple-ipc.c
@@ -0,0 +1,787 @@
+/*
+ * test-simple-ipc.c: verify that the Inter-Process Communication works.
+ */
+
+#include "test-tool.h"
+#include "cache.h"
+#include "strbuf.h"
+#include "simple-ipc.h"
+#include "parse-options.h"
+#include "thread-utils.h"
+#include "strvec.h"
+
+#ifndef SUPPORTS_SIMPLE_IPC
+int cmd__simple_ipc(int argc, const char **argv)
+{
+	die("simple IPC not available on this platform");
+}
+#else
+
+/*
+ * The test daemon defines an "application callback" that supports a
+ * series of commands (see `test_app_cb()`).
+ *
+ * Unknown commands are caught here and we send an error message back
+ * to the client process.
+ */
+static int app__unhandled_command(const char *command,
+				  ipc_server_reply_cb *reply_cb,
+				  struct ipc_server_reply_data *reply_data)
+{
+	struct strbuf buf = STRBUF_INIT;
+	int ret;
+
+	strbuf_addf(&buf, "unhandled command: %s", command);
+	ret = reply_cb(reply_data, buf.buf, buf.len);
+	strbuf_release(&buf);
+
+	return ret;
+}
+
+/*
+ * Reply with a single very large buffer.  This is to ensure that
+ * long response are properly handled -- whether the chunking occurs
+ * in the kernel or in the (probably pkt-line) layer.
+ */
+#define BIG_ROWS (10000)
+static int app__big_command(ipc_server_reply_cb *reply_cb,
+			    struct ipc_server_reply_data *reply_data)
+{
+	struct strbuf buf = STRBUF_INIT;
+	int row;
+	int ret;
+
+	for (row = 0; row < BIG_ROWS; row++)
+		strbuf_addf(&buf, "big: %.75d\n", row);
+
+	ret = reply_cb(reply_data, buf.buf, buf.len);
+	strbuf_release(&buf);
+
+	return ret;
+}
+
+/*
+ * Reply with a series of lines.  This is to ensure that we can incrementally
+ * compute the response and chunk it to the client.
+ */
+#define CHUNK_ROWS (10000)
+static int app__chunk_command(ipc_server_reply_cb *reply_cb,
+			      struct ipc_server_reply_data *reply_data)
+{
+	struct strbuf buf = STRBUF_INIT;
+	int row;
+	int ret;
+
+	for (row = 0; row < CHUNK_ROWS; row++) {
+		strbuf_setlen(&buf, 0);
+		strbuf_addf(&buf, "big: %.75d\n", row);
+		ret = reply_cb(reply_data, buf.buf, buf.len);
+	}
+
+	strbuf_release(&buf);
+
+	return ret;
+}
+
+/*
+ * Slowly reply with a series of lines.  This models an expensive-to-compute
+ * chunked response (which might happen if this callback is running
+ * in a thread and is fighting for a lock with other threads).
+ */
+#define SLOW_ROWS     (1000)
+#define SLOW_DELAY_MS (10)
+static int app__slow_command(ipc_server_reply_cb *reply_cb,
+			     struct ipc_server_reply_data *reply_data)
+{
+	struct strbuf buf = STRBUF_INIT;
+	int row;
+	int ret;
+
+	for (row = 0; row < SLOW_ROWS; row++) {
+		strbuf_setlen(&buf, 0);
+		strbuf_addf(&buf, "big: %.75d\n", row);
+		ret = reply_cb(reply_data, buf.buf, buf.len);
+		sleep_millisec(SLOW_DELAY_MS);
+	}
+
+	strbuf_release(&buf);
+
+	return ret;
+}
+
+/*
+ * The client sent a command followed by a (possibly very) large buffer.
+ */
+static int app__sendbytes_command(const char *received,
+				  ipc_server_reply_cb *reply_cb,
+				  struct ipc_server_reply_data *reply_data)
+{
+	struct strbuf buf_resp = STRBUF_INIT;
+	const char *p = "?";
+	int len_ballast = 0;
+	int k;
+	int errs = 0;
+	int ret;
+
+	if (skip_prefix(received, "sendbytes ", &p))
+		len_ballast = strlen(p);
+
+	/*
+	 * Verify that the ballast is n copies of a single letter.
+	 * And that the multi-threaded IO layer didn't cross the streams.
+	 */
+	for (k = 1; k < len_ballast; k++)
+		if (p[k] != p[0])
+			errs++;
+
+	if (errs)
+		strbuf_addf(&buf_resp, "errs:%d\n", errs);
+	else
+		strbuf_addf(&buf_resp, "rcvd:%c%08d\n", p[0], len_ballast);
+
+	ret = reply_cb(reply_data, buf_resp.buf, buf_resp.len);
+
+	strbuf_release(&buf_resp);
+
+	return ret;
+}
+
+/*
+ * An arbitrary fixed address to verify that the application instance
+ * data is handled properly.
+ */
+static int my_app_data = 42;
+
+static ipc_server_application_cb test_app_cb;
+
+/*
+ * This is the "application callback" that sits on top of the
+ * "ipc-server".  It completely defines the set of commands supported
+ * by this application.
+ */
+static int test_app_cb(void *application_data,
+		       const char *command,
+		       ipc_server_reply_cb *reply_cb,
+		       struct ipc_server_reply_data *reply_data)
+{
+	/*
+	 * Verify that we received the application-data that we passed
+	 * when we started the ipc-server.  (We have several layers of
+	 * callbacks calling callbacks and it's easy to get things mixed
+	 * up (especially when some are "void*").)
+	 */
+	if (application_data != (void*)&my_app_data)
+		BUG("application_cb: application_data pointer wrong");
+
+	if (!strcmp(command, "quit")) {
+		/*
+		 * The client sent a "quit" command.  This is an async
+		 * request for the server to shut down.
+		 *
+		 * We DO NOT send the client a response message
+		 * (because we have nothing to say and the other
+		 * server threads have not yet stopped).
+		 *
+		 * Tell the ipc-server layer to start shutting down.
+		 * This includes: stop listening for new connections
+		 * on the socket/pipe and telling all worker threads
+		 * to finish/drain their outgoing responses to other
+		 * clients.
+		 *
+		 * This DOES NOT force an immediate sync shutdown.
+		 */
+		return SIMPLE_IPC_QUIT;
+	}
+
+	if (!strcmp(command, "ping")) {
+		const char *answer = "pong";
+		return reply_cb(reply_data, answer, strlen(answer));
+	}
+
+	if (!strcmp(command, "big"))
+		return app__big_command(reply_cb, reply_data);
+
+	if (!strcmp(command, "chunk"))
+		return app__chunk_command(reply_cb, reply_data);
+
+	if (!strcmp(command, "slow"))
+		return app__slow_command(reply_cb, reply_data);
+
+	if (starts_with(command, "sendbytes "))
+		return app__sendbytes_command(command, reply_cb, reply_data);
+
+	return app__unhandled_command(command, reply_cb, reply_data);
+}
+
+struct cl_args
+{
+	const char *subcommand;
+	const char *path;
+	const char *token;
+
+	int nr_threads;
+	int max_wait_sec;
+	int bytecount;
+	int batchsize;
+
+	char bytevalue;
+};
+
+static struct cl_args cl_args = {
+	.subcommand = NULL,
+	.path = "ipc-test",
+	.token = NULL,
+
+	.nr_threads = 5,
+	.max_wait_sec = 60,
+	.bytecount = 1024,
+	.batchsize = 10,
+
+	.bytevalue = 'x',
+};
+
+/*
+ * This process will run as a simple-ipc server and listen for IPC commands
+ * from client processes.
+ */
+static int daemon__run_server(void)
+{
+	int ret;
+
+	struct ipc_server_opts opts = {
+		.nr_threads = cl_args.nr_threads,
+	};
+
+	/*
+	 * Synchronously run the ipc-server.  We don't need any application
+	 * instance data, so pass an arbitrary pointer (that we'll later
+	 * verify made the round trip).
+	 */
+	ret = ipc_server_run(cl_args.path, &opts, test_app_cb, (void*)&my_app_data);
+	if (ret == -2)
+		error(_("socket/pipe already in use: '%s'"), cl_args.path);
+	else if (ret == -1)
+		error_errno(_("could not start server on: '%s'"), cl_args.path);
+
+	return ret;
+}
+
+#ifndef GIT_WINDOWS_NATIVE
+/*
+ * This is adapted from `daemonize()`.  Use `fork()` to directly create and
+ * run the daemon in a child process.
+ */
+static int spawn_server(pid_t *pid)
+{
+	struct ipc_server_opts opts = {
+		.nr_threads = cl_args.nr_threads,
+	};
+
+	*pid = fork();
+
+	switch (*pid) {
+	case 0:
+		if (setsid() == -1)
+			error_errno(_("setsid failed"));
+		close(0);
+		close(1);
+		close(2);
+		sanitize_stdfds();
+
+		return ipc_server_run(cl_args.path, &opts, test_app_cb,
+				      (void*)&my_app_data);
+
+	case -1:
+		return error_errno(_("could not spawn daemon in the background"));
+
+	default:
+		return 0;
+	}
+}
+#else
+/*
+ * Conceptually like `daemonize()` but different because Windows does not
+ * have `fork(2)`.  Spawn a normal Windows child process but without the
+ * limitations of `start_command()` and `finish_command()`.
+ */
+static int spawn_server(pid_t *pid)
+{
+	char test_tool_exe[MAX_PATH];
+	struct strvec args = STRVEC_INIT;
+	int in, out;
+
+	GetModuleFileNameA(NULL, test_tool_exe, MAX_PATH);
+
+	in = open("/dev/null", O_RDONLY);
+	out = open("/dev/null", O_WRONLY);
+
+	strvec_push(&args, test_tool_exe);
+	strvec_push(&args, "simple-ipc");
+	strvec_push(&args, "run-daemon");
+	strvec_pushf(&args, "--name=%s", cl_args.path);
+	strvec_pushf(&args, "--threads=%d", cl_args.nr_threads);
+
+	*pid = mingw_spawnvpe(args.v[0], args.v, NULL, NULL, in, out, out);
+	close(in);
+	close(out);
+
+	strvec_clear(&args);
+
+	if (*pid < 0)
+		return error(_("could not spawn daemon in the background"));
+
+	return 0;
+}
+#endif
+
+/*
+ * This is adapted from `wait_or_whine()`.  Watch the child process and
+ * let it get started and begin listening for requests on the socket
+ * before reporting our success.
+ */
+static int wait_for_server_startup(pid_t pid_child)
+{
+	int status;
+	pid_t pid_seen;
+	enum ipc_active_state s;
+	time_t time_limit, now;
+
+	time(&time_limit);
+	time_limit += cl_args.max_wait_sec;
+
+	for (;;) {
+		pid_seen = waitpid(pid_child, &status, WNOHANG);
+
+		if (pid_seen == -1)
+			return error_errno(_("waitpid failed"));
+
+		else if (pid_seen == 0) {
+			/*
+			 * The child is still running (this should be
+			 * the normal case).  Try to connect to it on
+			 * the socket and see if it is ready for
+			 * business.
+			 *
+			 * If there is another daemon already running,
+			 * our child will fail to start (possibly
+			 * after a timeout on the lock), but we don't
+			 * care who responds as long as the socket is live.
+			 */
+			s = ipc_get_active_state(cl_args.path);
+			if (s == IPC_STATE__LISTENING)
+				return 0;
+
+			time(&now);
+			if (now > time_limit)
+				return error(_("daemon not online yet"));
+
+			continue;
+		}
+
+		else if (pid_seen == pid_child) {
+			/*
+			 * The new child daemon process shut down while
+			 * it was starting up, so it is not listening
+			 * on the socket.
+			 *
+			 * Try to ping the socket on the off chance
+			 * that another daemon started (or was already
+			 * running) while our child was starting.
+			 *
+			 * Again, we don't care who services the socket.
+			 */
+			s = ipc_get_active_state(cl_args.path);
+			if (s == IPC_STATE__LISTENING)
+				return 0;
+
+			/*
+			 * We don't care about the WEXITSTATUS() nor
+			 * any of the WIF*(status) values because
+			 * `cmd__simple_ipc()` does the `!!result`
+			 * trick on all function return values.
+			 *
+			 * So it is sufficient to just report the
+			 * early shutdown as an error.
+			 */
+			return error(_("daemon failed to start"));
+		}
+
+		else
+			return error(_("waitpid is confused"));
+	}
+}
+
+/*
+ * This process will start a simple-ipc server in a background process and
+ * wait for it to become ready.  This is like `daemonize()` but gives us
+ * more control and better error reporting (and makes it easier to write
+ * unit tests).
+ */
+static int daemon__start_server(void)
+{
+	pid_t pid_child;
+	int ret;
+
+	/*
+	 * Run the actual daemon in a background process.
+	 */
+	ret = spawn_server(&pid_child);
+	if (pid_child <= 0)
+		return ret;
+
+	/*
+	 * Let the parent wait for the child process to get started
+	 * and begin listening for requests on the socket.
+	 */
+	ret = wait_for_server_startup(pid_child);
+
+	return ret;
+}
+
+/*
+ * This process will run a quick probe to see if a simple-ipc server
+ * is active on this path.
+ *
+ * Returns 0 if the server is alive.
+ */
+static int client__probe_server(void)
+{
+	enum ipc_active_state s;
+
+	s = ipc_get_active_state(cl_args.path);
+	switch (s) {
+	case IPC_STATE__LISTENING:
+		return 0;
+
+	case IPC_STATE__NOT_LISTENING:
+		return error("no server listening at '%s'", cl_args.path);
+
+	case IPC_STATE__PATH_NOT_FOUND:
+		return error("path not found '%s'", cl_args.path);
+
+	case IPC_STATE__INVALID_PATH:
+		return error("invalid pipe/socket name '%s'", cl_args.path);
+
+	case IPC_STATE__OTHER_ERROR:
+	default:
+		return error("other error for '%s'", cl_args.path);
+	}
+}
+
+/*
+ * Send an IPC command token to an already-running server daemon and
+ * print the response.
+ *
+ * This is a simple one-word command/token that `test_app_cb()` (in the
+ * daemon process) will understand.
+ */
+static int client__send_ipc(void)
+{
+	const char *command = "(no-command)";
+	struct strbuf buf = STRBUF_INIT;
+	struct ipc_client_connect_options options
+		= IPC_CLIENT_CONNECT_OPTIONS_INIT;
+
+	if (cl_args.token && *cl_args.token)
+		command = cl_args.token;
+
+	options.wait_if_busy = 1;
+	options.wait_if_not_found = 0;
+
+	if (!ipc_client_send_command(cl_args.path, &options, command, &buf)) {
+		if (buf.len) {
+			printf("%s\n", buf.buf);
+			fflush(stdout);
+		}
+		strbuf_release(&buf);
+
+		return 0;
+	}
+
+	return error("failed to send '%s' to '%s'", command, cl_args.path);
+}
+
+/*
+ * Send an IPC command to an already-running server and ask it to
+ * shut down.  "send quit" is an async request and queues a shutdown
+ * event in the server, so we spin and wait here for it to actually
+ * shut down, to make the unit tests a little easier to write.
+ */
+static int client__stop_server(void)
+{
+	int ret;
+	time_t time_limit, now;
+	enum ipc_active_state s;
+
+	time(&time_limit);
+	time_limit += cl_args.max_wait_sec;
+
+	cl_args.token = "quit";
+
+	ret = client__send_ipc();
+	if (ret)
+		return ret;
+
+	for (;;) {
+		sleep_millisec(100);
+
+		s = ipc_get_active_state(cl_args.path);
+
+		if (s != IPC_STATE__LISTENING) {
+			/*
+			 * The socket/pipe is gone and/or has stopped
+			 * responding.  Let's assume that the daemon
+			 * process has exited too.
+			 */
+			return 0;
+		}
+
+		time(&now);
+		if (now > time_limit)
+			return error(_("daemon has not shut down yet"));
+	}
+}
+
+/*
+ * Send an IPC command followed by ballast to confirm that a large
+ * message can be sent and that the kernel or pkt-line layers will
+ * properly chunk it and that the daemon receives the entire message.
+ */
+static int do_sendbytes(int bytecount, char byte, const char *path,
+			const struct ipc_client_connect_options *options)
+{
+	struct strbuf buf_send = STRBUF_INIT;
+	struct strbuf buf_resp = STRBUF_INIT;
+
+	strbuf_addstr(&buf_send, "sendbytes ");
+	strbuf_addchars(&buf_send, byte, bytecount);
+
+	if (!ipc_client_send_command(path, options, buf_send.buf, &buf_resp)) {
+		strbuf_rtrim(&buf_resp);
+		printf("sent:%c%08d %s\n", byte, bytecount, buf_resp.buf);
+		fflush(stdout);
+		strbuf_release(&buf_send);
+		strbuf_release(&buf_resp);
+
+		return 0;
+	}
+
+	return error("client failed to sendbytes(%d, '%c') to '%s'",
+		     bytecount, byte, path);
+}
+
+/*
+ * Send an IPC command with ballast to an already-running server daemon.
+ */
+static int client__sendbytes(void)
+{
+	struct ipc_client_connect_options options
+		= IPC_CLIENT_CONNECT_OPTIONS_INIT;
+
+	options.wait_if_busy = 1;
+	options.wait_if_not_found = 0;
+	options.uds_disallow_chdir = 0;
+
+	return do_sendbytes(cl_args.bytecount, cl_args.bytevalue, cl_args.path,
+			    &options);
+}
+
+struct multiple_thread_data {
+	pthread_t pthread_id;
+	struct multiple_thread_data *next;
+	const char *path;
+	int bytecount;
+	int batchsize;
+	int sum_errors;
+	int sum_good;
+	char letter;
+};
+
+static void *multiple_thread_proc(void *_multiple_thread_data)
+{
+	struct multiple_thread_data *d = _multiple_thread_data;
+	int k;
+	struct ipc_client_connect_options options
+		= IPC_CLIENT_CONNECT_OPTIONS_INIT;
+
+	options.wait_if_busy = 1;
+	options.wait_if_not_found = 0;
+	/*
+	 * A multi-threaded client should not be randomly calling chdir().
+	 * The test will pass without this restriction because the test is
+	 * not otherwise accessing the filesystem, but it makes us honest.
+	 */
+	options.uds_disallow_chdir = 1;
+
+	trace2_thread_start("multiple");
+
+	for (k = 0; k < d->batchsize; k++) {
+		if (do_sendbytes(d->bytecount + k, d->letter, d->path, &options))
+			d->sum_errors++;
+		else
+			d->sum_good++;
+	}
+
+	trace2_thread_exit();
+	return NULL;
+}
+
+/*
+ * Start a client-side thread pool.  Each thread sends a series of
+ * IPC requests.  Each request is on a new connection to the server.
+ */
+static int client__multiple(void)
+{
+	struct multiple_thread_data *list = NULL;
+	int k;
+	int sum_join_errors = 0;
+	int sum_thread_errors = 0;
+	int sum_good = 0;
+
+	for (k = 0; k < cl_args.nr_threads; k++) {
+		struct multiple_thread_data *d = xcalloc(1, sizeof(*d));
+		d->next = list;
+		d->path = cl_args.path;
+		d->bytecount = cl_args.bytecount + cl_args.batchsize*(k/26);
+		d->batchsize = cl_args.batchsize;
+		d->sum_errors = 0;
+		d->sum_good = 0;
+		d->letter = 'A' + (k % 26);
+
+		if (pthread_create(&d->pthread_id, NULL, multiple_thread_proc, d)) {
+			warning("failed to create thread[%d] skipping remainder", k);
+			free(d);
+			break;
+		}
+
+		list = d;
+	}
+
+	while (list) {
+		struct multiple_thread_data *d = list;
+
+		if (pthread_join(d->pthread_id, NULL))
+			sum_join_errors++;
+
+		sum_thread_errors += d->sum_errors;
+		sum_good += d->sum_good;
+
+		list = d->next;
+		free(d);
+	}
+
+	printf("client (good %d) (join %d), (errors %d)\n",
+	       sum_good, sum_join_errors, sum_thread_errors);
+
+	return (sum_join_errors + sum_thread_errors) ? 1 : 0;
+}
+
+int cmd__simple_ipc(int argc, const char **argv)
+{
+	const char * const simple_ipc_usage[] = {
+		N_("test-helper simple-ipc is-active    [<name>] [<options>]"),
+		N_("test-helper simple-ipc run-daemon   [<name>] [<threads>]"),
+		N_("test-helper simple-ipc start-daemon [<name>] [<threads>] [<max-wait>]"),
+		N_("test-helper simple-ipc stop-daemon  [<name>] [<max-wait>]"),
+		N_("test-helper simple-ipc send         [<name>] [<token>]"),
+		N_("test-helper simple-ipc sendbytes    [<name>] [<bytecount>] [<byte>]"),
+		N_("test-helper simple-ipc multiple     [<name>] [<threads>] [<bytecount>] [<batchsize>]"),
+		NULL
+	};
+
+	const char *bytevalue = NULL;
+
+	struct option options[] = {
+#ifndef GIT_WINDOWS_NATIVE
+		OPT_STRING(0, "name", &cl_args.path, N_("name"), N_("name or pathname of unix domain socket")),
+#else
+		OPT_STRING(0, "name", &cl_args.path, N_("name"), N_("named-pipe name")),
+#endif
+		OPT_INTEGER(0, "threads", &cl_args.nr_threads, N_("number of threads in server thread pool")),
+		OPT_INTEGER(0, "max-wait", &cl_args.max_wait_sec, N_("seconds to wait for daemon to start or stop")),
+
+		OPT_INTEGER(0, "bytecount", &cl_args.bytecount, N_("number of bytes")),
+		OPT_INTEGER(0, "batchsize", &cl_args.batchsize, N_("number of requests per thread")),
+
+		OPT_STRING(0, "byte", &bytevalue, N_("byte"), N_("ballast character")),
+		OPT_STRING(0, "token", &cl_args.token, N_("token"), N_("command token to send to the server")),
+
+		OPT_END()
+	};
+
+	if (argc < 2)
+		usage_with_options(simple_ipc_usage, options);
+
+	if (argc == 2 && !strcmp(argv[1], "-h"))
+		usage_with_options(simple_ipc_usage, options);
+
+	if (argc == 2 && !strcmp(argv[1], "SUPPORTS_SIMPLE_IPC"))
+		return 0;
+
+	cl_args.subcommand = argv[1];
+
+	argc--;
+	argv++;
+
+	argc = parse_options(argc, argv, NULL, options, simple_ipc_usage, 0);
+
+	if (cl_args.nr_threads < 1)
+		cl_args.nr_threads = 1;
+	if (cl_args.max_wait_sec < 0)
+		cl_args.max_wait_sec = 0;
+	if (cl_args.bytecount < 1)
+		cl_args.bytecount = 1;
+	if (cl_args.batchsize < 1)
+		cl_args.batchsize = 1;
+
+	if (bytevalue && *bytevalue)
+		cl_args.bytevalue = bytevalue[0];
+
+	/*
+	 * Use '!!' on all dispatch functions to map from `error()` style
+	 * (returns -1) to `test_must_fail` style (expects 1).  This
+	 * makes shell error messages less confusing.
+	 */
+
+	if (!strcmp(cl_args.subcommand, "is-active"))
+		return !!client__probe_server();
+
+	if (!strcmp(cl_args.subcommand, "run-daemon"))
+		return !!daemon__run_server();
+
+	if (!strcmp(cl_args.subcommand, "start-daemon"))
+		return !!daemon__start_server();
+
+	/*
+	 * Client commands follow.  Ensure a server is running before
+	 * sending any data.  This might be overkill, but then again
+	 * this is a test harness.
+	 */
+
+	if (!strcmp(cl_args.subcommand, "stop-daemon")) {
+		if (client__probe_server())
+			return 1;
+		return !!client__stop_server();
+	}
+
+	if (!strcmp(cl_args.subcommand, "send")) {
+		if (client__probe_server())
+			return 1;
+		return !!client__send_ipc();
+	}
+
+	if (!strcmp(cl_args.subcommand, "sendbytes")) {
+		if (client__probe_server())
+			return 1;
+		return !!client__sendbytes();
+	}
+
+	if (!strcmp(cl_args.subcommand, "multiple")) {
+		if (client__probe_server())
+			return 1;
+		return !!client__multiple();
+	}
+
+	die("Unhandled subcommand: '%s'", cl_args.subcommand);
+}
+#endif
diff --git a/t/helper/test-tool.c b/t/helper/test-tool.c
index f97cd9f48a69..287aa6002307 100644
--- a/t/helper/test-tool.c
+++ b/t/helper/test-tool.c
@@ -65,6 +65,7 @@ static struct test_cmd cmds[] = {
 	{ "sha1", cmd__sha1 },
 	{ "sha256", cmd__sha256 },
 	{ "sigchain", cmd__sigchain },
+	{ "simple-ipc", cmd__simple_ipc },
 	{ "strcmp-offset", cmd__strcmp_offset },
 	{ "string-list", cmd__string_list },
 	{ "submodule-config", cmd__submodule_config },
diff --git a/t/helper/test-tool.h b/t/helper/test-tool.h
index 28072c0ad5ab..9ea4b31011dd 100644
--- a/t/helper/test-tool.h
+++ b/t/helper/test-tool.h
@@ -55,6 +55,7 @@ int cmd__sha1(int argc, const char **argv);
 int cmd__oid_array(int argc, const char **argv);
 int cmd__sha256(int argc, const char **argv);
 int cmd__sigchain(int argc, const char **argv);
+int cmd__simple_ipc(int argc, const char **argv);
 int cmd__strcmp_offset(int argc, const char **argv);
 int cmd__string_list(int argc, const char **argv);
 int cmd__submodule_config(int argc, const char **argv);
diff --git a/t/t0052-simple-ipc.sh b/t/t0052-simple-ipc.sh
new file mode 100755
index 000000000000..ff98be31a51b
--- /dev/null
+++ b/t/t0052-simple-ipc.sh
@@ -0,0 +1,122 @@
+#!/bin/sh
+
+test_description='simple command server'
+
+. ./test-lib.sh
+
+test-tool simple-ipc SUPPORTS_SIMPLE_IPC || {
+	skip_all='simple IPC not supported on this platform'
+	test_done
+}
+
+stop_simple_IPC_server () {
+	test-tool simple-ipc stop-daemon
+}
+
+test_expect_success 'start simple command server' '
+	test_atexit stop_simple_IPC_server &&
+	test-tool simple-ipc start-daemon --threads=8 &&
+	test-tool simple-ipc is-active
+'
+
+test_expect_success 'simple command server' '
+	test-tool simple-ipc send --token=ping >actual &&
+	echo pong >expect &&
+	test_cmp expect actual
+'
+
+test_expect_success 'servers cannot share the same path' '
+	test_must_fail test-tool simple-ipc run-daemon &&
+	test-tool simple-ipc is-active
+'
+
+test_expect_success 'big response' '
+	test-tool simple-ipc send --token=big >actual &&
+	test_line_count -ge 10000 actual &&
+	grep -q "big: [0]*9999\$" actual
+'
+
+test_expect_success 'chunk response' '
+	test-tool simple-ipc send --token=chunk >actual &&
+	test_line_count -ge 10000 actual &&
+	grep -q "big: [0]*9999\$" actual
+'
+
+test_expect_success 'slow response' '
+	test-tool simple-ipc send --token=slow >actual &&
+	test_line_count -ge 100 actual &&
+	grep -q "big: [0]*99\$" actual
+'
+
+# Send an IPC with n=100,000 bytes of ballast.  This should be large enough
+# to force both the kernel and the pkt-line layer to chunk the message to the
+# daemon and for the daemon to receive it in chunks.
+#
+test_expect_success 'sendbytes' '
+	test-tool simple-ipc sendbytes --bytecount=100000 --byte=A >actual &&
+	grep "sent:A00100000 rcvd:A00100000" actual
+'
+
+# Start a series of <threads> client threads that each make <batchsize>
+# IPC requests to the server.  Each of the (<threads> * <batchsize>) requests
+# will open a new connection to the server and randomly bind to a server
+# thread.  Each client thread exits after completing its batch, so the
+# number of live client threads will be smaller than the total request count.
+# Each request will send a message containing at least <bytecount> bytes
+# of ballast.  (Responses are small.)
+#
+# The purpose here is to test threading in the server and responding to
+# many concurrent client requests (regardless of whether they come from
+# 1 client process or many).  And to test that the server side of the
+# named pipe/socket is stable.  (On Windows this means that the server
+# pipe is properly recycled.)
+#
+# On Windows it also lets us adjust the connection timeout in the
+# `ipc_client_send_command()`.
+#
+# Note it is easy to drive the system into failure by requesting an
+# insane number of threads on client or server and/or increasing the
+# per-thread batchsize or the per-request bytecount (ballast).
+# On Windows these failures look like "pipe is busy" errors.
+# So I've chosen fairly conservative values for now.
+#
+# We expect output of the form "sent:<letter><length> ..."
+# With the parameters (7, 19, 13) we expect:
+#   <letter> in [A-G]
+#   <length> in [19+0 .. 19+(13-1)]
+# and (7 * 13) successful responses.
+#
+test_expect_success 'stress test threads' '
+	test-tool simple-ipc multiple \
+		--threads=7 \
+		--bytecount=19 \
+		--batchsize=13 \
+		>actual &&
+	test_line_count = 92 actual &&
+	grep "good 91" actual &&
+	grep "sent:A" <actual >actual_a &&
+	cat >expect_a <<-EOF &&
+		sent:A00000019 rcvd:A00000019
+		sent:A00000020 rcvd:A00000020
+		sent:A00000021 rcvd:A00000021
+		sent:A00000022 rcvd:A00000022
+		sent:A00000023 rcvd:A00000023
+		sent:A00000024 rcvd:A00000024
+		sent:A00000025 rcvd:A00000025
+		sent:A00000026 rcvd:A00000026
+		sent:A00000027 rcvd:A00000027
+		sent:A00000028 rcvd:A00000028
+		sent:A00000029 rcvd:A00000029
+		sent:A00000030 rcvd:A00000030
+		sent:A00000031 rcvd:A00000031
+	EOF
+	test_cmp expect_a actual_a
+'
+
+test_expect_success 'stop-daemon works' '
+	test-tool simple-ipc stop-daemon &&
+	test_must_fail test-tool simple-ipc is-active &&
+	test_must_fail test-tool simple-ipc send --token=ping
+'
+
+test_done
-- 
gitgitgadget

^ permalink raw reply related	[flat|nested] 178+ messages in thread

* Re: [PATCH v5 00/12] Simple IPC Mechanism
  2021-03-09 15:02       ` [PATCH v5 " Jeff Hostetler via GitGitGadget
                           ` (11 preceding siblings ...)
  2021-03-09 15:02         ` [PATCH v5 12/12] t0052: add simple-ipc tests and t/helper/test-simple-ipc tool Jeff Hostetler via GitGitGadget
@ 2021-03-09 23:28         ` Junio C Hamano
  2021-03-15 21:08         ` [PATCH v6 " Jeff Hostetler via GitGitGadget
  13 siblings, 0 replies; 178+ messages in thread
From: Junio C Hamano @ 2021-03-09 23:28 UTC (permalink / raw)
  To: Jeff Hostetler via GitGitGadget
  Cc: git, Ævar Arnfjörð Bjarmason, Jeff Hostetler,
	Jeff King, SZEDER Gábor, Johannes Schindelin, Chris Torek,
	Jeff Hostetler

"Jeff Hostetler via GitGitGadget" <gitgitgadget@gmail.com> writes:

> .... I think the combined
> result is better long term than preserving them as two sequential series.

Yup, I think that is a sensible thing to do, too.  Just kick the one
in 'next' out by reverting them, and queue a cleaned-up series to be
merged to 'next' once the upcoming release is out.

Thanks.

^ permalink raw reply	[flat|nested] 178+ messages in thread

* Re: [PATCH v5 01/12] pkt-line: eliminate the need for static buffer in packet_write_gently()
  2021-03-09 15:02         ` [PATCH v5 01/12] pkt-line: eliminate the need for static buffer in packet_write_gently() Jeff Hostetler via GitGitGadget
@ 2021-03-09 23:48           ` Junio C Hamano
  2021-03-11 19:29             ` Jeff King
  0 siblings, 1 reply; 178+ messages in thread
From: Junio C Hamano @ 2021-03-09 23:48 UTC (permalink / raw)
  To: Jeff Hostetler via GitGitGadget
  Cc: git, Ævar Arnfjörð Bjarmason, Jeff Hostetler,
	Jeff King, SZEDER Gábor, Johannes Schindelin, Chris Torek,
	Jeff Hostetler

"Jeff Hostetler via GitGitGadget" <gitgitgadget@gmail.com> writes:

> +	/*
> +	 * Write the header and the buffer in 2 parts so that we do not need
> +	 * to allocate a buffer or rely on a static buffer.  This avoids perf
> +	 * and multi-threading issues.
> +	 */

I understand "multi-threading issues" (i.e. let's not have too much
stuff on the stack), but what issue around "perf" are we worried
about?

Even though we eliminate memcpy() from the original buffer to our
temporary, this doubles the number of write(2) system calls used to
write out packetised data, by the way.  I do not know if this results
in measurable performance degradation, but hopefully we can fix it
locally if it turns out to be a real problem later.

> +	if (write_in_full(fd_out, header, 4) < 0 ||
> +	    write_in_full(fd_out, buf, size) < 0)
>  		return error(_("packet write failed"));
>  	return 0;
>  }
> @@ -244,20 +252,23 @@ void packet_buf_write_len(struct strbuf *buf, const char *data, size_t len)
>  
>  int write_packetized_from_fd(int fd_in, int fd_out)
>  {
> -	static char buf[LARGE_PACKET_DATA_MAX];
> +	char *buf = xmalloc(LARGE_PACKET_DATA_MAX);
>  	int err = 0;
>  	ssize_t bytes_to_write;
>  
>  	while (!err) {
> -		bytes_to_write = xread(fd_in, buf, sizeof(buf));
> -		if (bytes_to_write < 0)
> +		bytes_to_write = xread(fd_in, buf, LARGE_PACKET_DATA_MAX);
> +		if (bytes_to_write < 0) {
> +			free(buf);
>  			return COPY_READ_ERROR;
> +		}
>  		if (bytes_to_write == 0)
>  			break;
>  		err = packet_write_gently(fd_out, buf, bytes_to_write);
>  	}
>  	if (!err)
>  		err = packet_flush_gently(fd_out);
> +	free(buf);
>  	return err;
>  }

^ permalink raw reply	[flat|nested] 178+ messages in thread

* Re: [PATCH v5 11/12] simple-ipc: add Unix domain socket implementation
  2021-03-09 15:02         ` [PATCH v5 11/12] simple-ipc: add Unix domain socket implementation Jeff Hostetler via GitGitGadget
@ 2021-03-10  0:08           ` Junio C Hamano
  2021-03-15 19:56             ` Jeff Hostetler
  0 siblings, 1 reply; 178+ messages in thread
From: Junio C Hamano @ 2021-03-10  0:08 UTC (permalink / raw)
  To: Jeff Hostetler via GitGitGadget
  Cc: git, Ævar Arnfjörð Bjarmason, Jeff Hostetler,
	Jeff King, SZEDER Gábor, Johannes Schindelin, Chris Torek,
	Jeff Hostetler

"Jeff Hostetler via GitGitGadget" <gitgitgadget@gmail.com> writes:

> +/*
> + * This value was chosen at random.
> + */
> +#define WAIT_STEP_MS (50)

... and never used.  Is this supposed to be used as the hardcoded
value 50 below ...

> +
> +/*
> + * Try to connect to the server.  If the server is just starting up or
> + * is very busy, we may not get a connection the first time.
> + */
> +static enum ipc_active_state connect_to_server(
> +	const char *path,
> +	int timeout_ms,
> +	const struct ipc_client_connect_options *options,
> +	int *pfd)
> +{
> +	int wait_ms = 50;

... here?

> +	int k;
> +
> +	*pfd = -1;
> +
> +	for (k = 0; k < timeout_ms; k += wait_ms) {
> +		int fd = unix_stream_connect(path, options->uds_disallow_chdir);
> +
> +		if (fd != -1) {
> +			*pfd = fd;
> +			return IPC_STATE__LISTENING;
> +		}
> +
> +		if (errno == ENOENT) {
> +			if (!options->wait_if_not_found)
> +				return IPC_STATE__PATH_NOT_FOUND;
> +
> +			goto sleep_and_try_again;
> +		}
> + ...
> +		return IPC_STATE__OTHER_ERROR;
> +
> +	sleep_and_try_again:
> +		sleep_millisec(wait_ms);

Or, since there is nothing like exponential back-off implemented
here which may want to modify wait_ms variable, perhaps use the
constant directly here and where k is incremented?

> +/*
> + * A randomly chosen timeout value.
> + */
> +#define MY_CONNECTION_TIMEOUT_MS (1000)

Even if it may have been "randomly chosen", there should be some
criteria to judge if the value is sensible, right?  IOW, I have a
suspicion that I would regret if I randomly chose 5 (or 3600000)
instead of 1000.  How would we figure that 1000 is acceptable but not
5?

Perhaps explain that criterion here, e.g. "... value that ought to
be long enough to establish connection locally as long as the box is
not loaded unusably heavily" or something?


^ permalink raw reply	[flat|nested] 178+ messages in thread

* Re: [PATCH v5 10/12] unix-stream-server: create unix domain socket under lock
  2021-03-09 15:02         ` [PATCH v5 10/12] unix-stream-server: create unix domain socket under lock Jeff Hostetler via GitGitGadget
@ 2021-03-10  0:18           ` Junio C Hamano
  0 siblings, 0 replies; 178+ messages in thread
From: Junio C Hamano @ 2021-03-10  0:18 UTC (permalink / raw)
  To: Jeff Hostetler via GitGitGadget
  Cc: git, Ævar Arnfjörð Bjarmason, Jeff Hostetler,
	Jeff King, SZEDER Gábor, Johannes Schindelin, Chris Torek,
	Jeff Hostetler

"Jeff Hostetler via GitGitGadget" <gitgitgadget@gmail.com> writes:

> +struct unix_stream_server_socket {
> +int unix_stream_server__create(
> +void unix_stream_server__free(
> +int unix_stream_server__was_stolen(

I think we reserve __ in our API for names of symbols that normal
callers never have to write (both data like git_attr__true[] and
functions like cmd_bisect__helper()).

It seems that list-objects-filter.h may have introduced the
"name_space" followed by "__" followed by "name" convention,
but I am not sure if that is a desirable convention to spread
throughout our codebase.

Also "unix_stream_server" is quite a mouthful.  Perhaps abbreviate
it to uss_ or something?  I dunno if that is too short and invites
confusion with other kinds of uss.




^ permalink raw reply	[flat|nested] 178+ messages in thread

* Re: [PATCH v5 01/12] pkt-line: eliminate the need for static buffer in packet_write_gently()
  2021-03-09 23:48           ` Junio C Hamano
@ 2021-03-11 19:29             ` Jeff King
  2021-03-11 20:32               ` Junio C Hamano
  0 siblings, 1 reply; 178+ messages in thread
From: Jeff King @ 2021-03-11 19:29 UTC (permalink / raw)
  To: Junio C Hamano
  Cc: Jeff Hostetler via GitGitGadget, git,
	Ævar Arnfjörð Bjarmason, Jeff Hostetler,
	SZEDER Gábor, Johannes Schindelin, Chris Torek,
	Jeff Hostetler

On Tue, Mar 09, 2021 at 03:48:40PM -0800, Junio C Hamano wrote:

> "Jeff Hostetler via GitGitGadget" <gitgitgadget@gmail.com> writes:
> 
> > +	/*
> > +	 * Write the header and the buffer in 2 parts so that we do not need
> > +	 * to allocate a buffer or rely on a static buffer.  This avoids perf
> > +	 * and multi-threading issues.
> > +	 */
> 
> I understand "multi-threading issues" (i.e. let's not have too much
> stuff on the stack), but what issue around "perf" are we worried
> about?
> 
> Even though we eliminate memcpy() from the original buffer to our
> temporary, this doubles the number of write(2) system calls used to
> write out packetised data, by the way.  I do not know if this results
> in measurable performance degradation, but hopefully we can fix it
> locally if it turns out to be a real problem later.

Yeah, this came from my suggestion. My gut feeling is that it isn't
likely to matter, but I'd much rather solve any performance problem we
find using writev(), which would be pretty easy to emulate with a
wrapper for systems that lack it.
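
For illustration, such a wrapper might look roughly like this (a sketch
only, not code from this series; HAVE_WRITEV is a hypothetical build
flag, and a real version would also retry short writes the way
write_in_full() does):

	#ifdef HAVE_WRITEV
	#include <sys/uio.h>
	#endif

	static int packet_write_gently_writev(int fd_out, const char *buf,
					      size_t size)
	{
		char header[4];

		if (size > LARGE_PACKET_DATA_MAX)
			return error(_("packet write failed - data exceeds max packet size"));

		packet_trace(buf, size, 1);
		set_packet_header(header, size + 4);

	#ifdef HAVE_WRITEV
		{
			struct iovec iov[2];

			iov[0].iov_base = header;
			iov[0].iov_len = sizeof(header);
			iov[1].iov_base = (void *)buf;
			iov[1].iov_len = size;

			/* header + payload in a single system call */
			if (writev(fd_out, iov, 2) < 0)
				return error(_("packet write failed"));
			return 0;
		}
	#else
		/* fall back to the two-write approach used in this series */
		if (write_in_full(fd_out, header, 4) < 0 ||
		    write_in_full(fd_out, buf, size) < 0)
			return error(_("packet write failed"));
		return 0;
	#endif
	}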

-Peff

^ permalink raw reply	[flat|nested] 178+ messages in thread

* Re: [PATCH v5 01/12] pkt-line: eliminate the need for static buffer in packet_write_gently()
  2021-03-11 19:29             ` Jeff King
@ 2021-03-11 20:32               ` Junio C Hamano
  2021-03-11 20:53                 ` Jeff King
  0 siblings, 1 reply; 178+ messages in thread
From: Junio C Hamano @ 2021-03-11 20:32 UTC (permalink / raw)
  To: Jeff King
  Cc: Jeff Hostetler via GitGitGadget, git,
	Ævar Arnfjörð Bjarmason, Jeff Hostetler,
	SZEDER Gábor, Johannes Schindelin, Chris Torek,
	Jeff Hostetler

Jeff King <peff@peff.net> writes:

> On Tue, Mar 09, 2021 at 03:48:40PM -0800, Junio C Hamano wrote:
>
>> "Jeff Hostetler via GitGitGadget" <gitgitgadget@gmail.com> writes:
>> 
>> > +	/*
>> > +	 * Write the header and the buffer in 2 parts so that we do not need
>> > +	 * to allocate a buffer or rely on a static buffer.  This avoids perf
>> > +	 * and multi-threading issues.
>> > +	 */
>> 
>> I understand "multi-threading issues" (i.e. let's not have too much
>> stuff on the stack), but what issue around "perf" are we worried
>> about?
>>  ...
> Yeah, this came from my suggestion. My gut feeling is that it isn't
> likely to matter, but I'd much rather solve any performance problem we
> find using writev(), which would be pretty easy to emulate with a
> wrapper for systems that lack it.

I too had writev() in mind when I said "can fix it locally", so we
are on the same page, which is good.

So "this avoid multi-threading issues" without mentioning "perf and"
would be more appropriate?

Thanks.

^ permalink raw reply	[flat|nested] 178+ messages in thread

* Re: [PATCH v5 01/12] pkt-line: eliminate the need for static buffer in packet_write_gently()
  2021-03-11 20:32               ` Junio C Hamano
@ 2021-03-11 20:53                 ` Jeff King
  0 siblings, 0 replies; 178+ messages in thread
From: Jeff King @ 2021-03-11 20:53 UTC (permalink / raw)
  To: Junio C Hamano
  Cc: Jeff Hostetler via GitGitGadget, git,
	Ævar Arnfjörð Bjarmason, Jeff Hostetler,
	SZEDER Gábor, Johannes Schindelin, Chris Torek,
	Jeff Hostetler

On Thu, Mar 11, 2021 at 12:32:29PM -0800, Junio C Hamano wrote:

> Jeff King <peff@peff.net> writes:
> 
> > On Tue, Mar 09, 2021 at 03:48:40PM -0800, Junio C Hamano wrote:
> >
> >> "Jeff Hostetler via GitGitGadget" <gitgitgadget@gmail.com> writes:
> >> 
> >> > +	/*
> >> > +	 * Write the header and the buffer in 2 parts so that we do not need
> >> > +	 * to allocate a buffer or rely on a static buffer.  This avoids perf
> >> > +	 * and multi-threading issues.
> >> > +	 */
> >> 
> >> I understand "multi-threading issues" (i.e. let's not have too much
> >> stuff on the stack), but what issue around "perf" are we worried
> >> about?
> >>  ...
> > Yeah, this came from my suggestion. My gut feeling is that it isn't
> > likely to matter, but I'd much rather solve any performance problem we
> > find using writev(), which would be pretty easy to emulate with a
> > wrapper for systems that lack it.
> 
> I too had writev() in mind when I said "can fix it locally", so we
> are on the same page, which is good.
> 
> So "this avoid multi-threading issues" without mentioning "perf and"
> would be more appropriate?

IMHO yes. I think "avoid perf issues" is probably answering the "why not
just heap-allocate the buffer" question. But that makes sense in the
commit message, not in a comment.

-Peff

^ permalink raw reply	[flat|nested] 178+ messages in thread

* Re: [PATCH v5 11/12] simple-ipc: add Unix domain socket implementation
  2021-03-10  0:08           ` Junio C Hamano
@ 2021-03-15 19:56             ` Jeff Hostetler
  0 siblings, 0 replies; 178+ messages in thread
From: Jeff Hostetler @ 2021-03-15 19:56 UTC (permalink / raw)
  To: Junio C Hamano, Jeff Hostetler via GitGitGadget
  Cc: git, Ævar Arnfjörð Bjarmason, Jeff King,
	SZEDER Gábor, Johannes Schindelin, Chris Torek,
	Jeff Hostetler



On 3/9/21 7:08 PM, Junio C Hamano wrote:
> "Jeff Hostetler via GitGitGadget" <gitgitgadget@gmail.com> writes:
> 
>> +/*
>> + * This value was chosen at random.
>> + */
>> +#define WAIT_STEP_MS (50)
> 
> ... and never used.  Is this supposed to be used as the hardcoded
> value 50 below ...
> 

Let me fix that before you move this to "next".
Looks like I just missed that one.


>> +
>> +/*
>> + * Try to connect to the server.  If the server is just starting up or
>> + * is very busy, we may not get a connection the first time.
>> + */
>> +static enum ipc_active_state connect_to_server(
>> +	const char *path,
>> +	int timeout_ms,
>> +	const struct ipc_client_connect_options *options,
>> +	int *pfd)
>> +{
>> +	int wait_ms = 50;
> 
> ... here?
> 
>> +	int k;
>> +
>> +	*pfd = -1;
>> +
>> +	for (k = 0; k < timeout_ms; k += wait_ms) {
>> +		int fd = unix_stream_connect(path, options->uds_disallow_chdir);
>> +
>> +		if (fd != -1) {
>> +			*pfd = fd;
>> +			return IPC_STATE__LISTENING;
>> +		}
>> +
>> +		if (errno == ENOENT) {
>> +			if (!options->wait_if_not_found)
>> +				return IPC_STATE__PATH_NOT_FOUND;
>> +
>> +			goto sleep_and_try_again;
>> +		}
>> + ...
>> +		return IPC_STATE__OTHER_ERROR;
>> +
>> +	sleep_and_try_again:
>> +		sleep_millisec(wait_ms);
> 
> Or, since there is nothing like exponential back-off implemented
> here which may want to modify wait_ms variable, perhaps use the
> constant directly here and where k is incremented?
> 
>> +/*
>> + * A randomly chosen timeout value.
>> + */
>> +#define MY_CONNECTION_TIMEOUT_MS (1000)
> 
> Even if it may have been "randomly chosen", there should be some
> criteria to judge if the value is sensible, right?  IOW, I have a
> suspicion that I would regret if I randomly chose 5 (or 3600000)
> instead of 1000.  How would we figure that 1000 is acceptable but not
> 5?
> 
> Perhaps explain that criterion here, e.g. "... value that ought to
> be long enough to establish connection locally as long as the box is
> not loaded unusably heavily" or something?
> 

Will do.  Thanks!
Jeff

^ permalink raw reply	[flat|nested] 178+ messages in thread

* [PATCH v6 00/12] Simple IPC Mechanism
  2021-03-09 15:02       ` [PATCH v5 " Jeff Hostetler via GitGitGadget
                           ` (12 preceding siblings ...)
  2021-03-09 23:28         ` [PATCH v5 00/12] Simple IPC Mechanism Junio C Hamano
@ 2021-03-15 21:08         ` Jeff Hostetler via GitGitGadget
  2021-03-15 21:08           ` [PATCH v6 01/12] pkt-line: eliminate the need for static buffer in packet_write_gently() Jeff Hostetler via GitGitGadget
                             ` (12 more replies)
  13 siblings, 13 replies; 178+ messages in thread
From: Jeff Hostetler via GitGitGadget @ 2021-03-15 21:08 UTC (permalink / raw)
  To: git
  Cc: Ævar Arnfjörð Bjarmason, Jeff Hostetler,
	Jeff King, SZEDER Gábor, Johannes Schindelin, Chris Torek,
	Jeff Hostetler

Here is V6 of my "Simple IPC" series. This version addresses comments from
last week on V5. This includes:

 1. Removing "perf and" in pkt-line.c
 2. Better comments to describe the various timeout #define's.
 3. Remove the double-underscore and shorten the "unix_stream_server"
    prefix.

Thanks Jeff

Jeff Hostetler (9):
  pkt-line: eliminate the need for static buffer in
    packet_write_gently()
  simple-ipc: design documentation for new IPC mechanism
  simple-ipc: add win32 implementation
  unix-socket: eliminate static unix_stream_socket() helper function
  unix-socket: add backlog size option to unix_stream_listen()
  unix-socket: disallow chdir() when creating unix domain sockets
  unix-stream-server: create unix domain socket under lock
  simple-ipc: add Unix domain socket implementation
  t0052: add simple-ipc tests and t/helper/test-simple-ipc tool

Johannes Schindelin (3):
  pkt-line: do not issue flush packets in write_packetized_*()
  pkt-line: add PACKET_READ_GENTLE_ON_READ_ERROR option
  pkt-line: add options argument to read_packetized_to_strbuf()

 Documentation/technical/api-simple-ipc.txt |  105 ++
 Makefile                                   |    9 +
 builtin/credential-cache--daemon.c         |    3 +-
 builtin/credential-cache.c                 |    2 +-
 compat/simple-ipc/ipc-shared.c             |   28 +
 compat/simple-ipc/ipc-unix-socket.c        | 1000 ++++++++++++++++++++
 compat/simple-ipc/ipc-win32.c              |  751 +++++++++++++++
 config.mak.uname                           |    2 +
 contrib/buildsystems/CMakeLists.txt        |    8 +-
 convert.c                                  |   11 +-
 pkt-line.c                                 |   59 +-
 pkt-line.h                                 |   17 +-
 simple-ipc.h                               |  239 +++++
 t/helper/test-simple-ipc.c                 |  787 +++++++++++++++
 t/helper/test-tool.c                       |    1 +
 t/helper/test-tool.h                       |    1 +
 t/t0052-simple-ipc.sh                      |  122 +++
 unix-socket.c                              |   53 +-
 unix-socket.h                              |   12 +-
 unix-stream-server.c                       |  125 +++
 unix-stream-server.h                       |   33 +
 21 files changed, 3316 insertions(+), 52 deletions(-)
 create mode 100644 Documentation/technical/api-simple-ipc.txt
 create mode 100644 compat/simple-ipc/ipc-shared.c
 create mode 100644 compat/simple-ipc/ipc-unix-socket.c
 create mode 100644 compat/simple-ipc/ipc-win32.c
 create mode 100644 simple-ipc.h
 create mode 100644 t/helper/test-simple-ipc.c
 create mode 100755 t/t0052-simple-ipc.sh
 create mode 100644 unix-stream-server.c
 create mode 100644 unix-stream-server.h


base-commit: f01623b2c9d14207e497b21ebc6b3ec4afaf4b46
Published-As: https://github.com/gitgitgadget/git/releases/tag/pr-766%2Fjeffhostetler%2Fsimple-ipc-v6
Fetch-It-Via: git fetch https://github.com/gitgitgadget/git pr-766/jeffhostetler/simple-ipc-v6
Pull-Request: https://github.com/gitgitgadget/git/pull/766

Range-diff vs v5:

  1:  311ea4a5cd71 !  1:  fe35dc3d292d pkt-line: eliminate the need for static buffer in packet_write_gently()
     @@ pkt-line.c: int packet_write_fmt_gently(int fd, const char *fmt, ...)
      +	set_packet_header(header, packet_size);
      +
      +	/*
     -+	 * Write the header and the buffer in 2 parts so that we do not need
     -+	 * to allocate a buffer or rely on a static buffer.  This avoids perf
     -+	 * and multi-threading issues.
     ++	 * Write the header and the buffer in 2 parts so that we do
     ++	 * not need to allocate a buffer or rely on a static buffer.
     ++	 * This also avoids putting a large buffer on the stack which
     ++	 * might have multi-threading issues.
      +	 */
      +
      +	if (write_in_full(fd_out, header, 4) < 0 ||
  2:  25157c1f4873 =  2:  de11b3036148 pkt-line: do not issue flush packets in write_packetized_*()
  3:  af3d13113bc9 =  3:  3718da39da30 pkt-line: add PACKET_READ_GENTLE_ON_READ_ERROR option
  4:  b73e66a69b61 =  4:  b43df7ad0b7a pkt-line: add options argument to read_packetized_to_strbuf()
  5:  1ae99d824a21 =  5:  f829feb2aa93 simple-ipc: design documentation for new IPC mechanism
  6:  8b3ce40e4538 =  6:  58c3fb7cd776 simple-ipc: add win32 implementation
  7:  34df1af98e5b =  7:  4e8c352fb366 unix-socket: eliminate static unix_stream_socket() helper function
  8:  d6ff6e0e050a =  8:  3b71f52d8628 unix-socket: add backlog size option to unix_stream_listen()
  9:  21b8d3c63dbf =  9:  5972a198361c unix-socket: disallow chdir() when creating unix domain sockets
 10:  1ee9de55a106 ! 10:  02c885fd623d unix-stream-server: create unix domain socket under lock
     @@ unix-stream-server.c (new)
      +	return 0;
      +}
      +
     -+int unix_stream_server__create(
     -+	const char *path,
     -+	const struct unix_stream_listen_opts *opts,
     -+	long timeout_ms,
     -+	struct unix_stream_server_socket **new_server_socket)
     ++int unix_ss_create(const char *path,
     ++		   const struct unix_stream_listen_opts *opts,
     ++		   long timeout_ms,
     ++		   struct unix_ss_socket **new_server_socket)
      +{
      +	struct lock_file lock = LOCK_INIT;
      +	int fd_socket;
     -+	struct unix_stream_server_socket *server_socket;
     ++	struct unix_ss_socket *server_socket;
      +
      +	*new_server_socket = NULL;
      +
     @@ unix-stream-server.c (new)
      +	return 0;
      +}
      +
     -+void unix_stream_server__free(
     -+	struct unix_stream_server_socket *server_socket)
     ++void unix_ss_free(struct unix_ss_socket *server_socket)
      +{
      +	if (!server_socket)
      +		return;
      +
      +	if (server_socket->fd_socket >= 0) {
     -+		if (!unix_stream_server__was_stolen(server_socket))
     ++		if (!unix_ss_was_stolen(server_socket))
      +			unlink(server_socket->path_socket);
      +		close(server_socket->fd_socket);
      +	}
     @@ unix-stream-server.c (new)
      +	free(server_socket);
      +}
      +
     -+int unix_stream_server__was_stolen(
     -+	struct unix_stream_server_socket *server_socket)
     ++int unix_ss_was_stolen(struct unix_ss_socket *server_socket)
      +{
      +	struct stat st_now;
      +
     @@ unix-stream-server.h (new)
      +
      +#include "unix-socket.h"
      +
     -+struct unix_stream_server_socket {
     ++struct unix_ss_socket {
      +	char *path_socket;
      +	struct stat st_socket;
      +	int fd_socket;
     @@ unix-stream-server.h (new)
      + *
      + * Returns 0 on success, -1 on error, -2 if socket is in use.
      + */
     -+int unix_stream_server__create(
     -+	const char *path,
     -+	const struct unix_stream_listen_opts *opts,
     -+	long timeout_ms,
     -+	struct unix_stream_server_socket **server_socket);
     ++int unix_ss_create(const char *path,
     ++		   const struct unix_stream_listen_opts *opts,
     ++		   long timeout_ms,
     ++		   struct unix_ss_socket **server_socket);
      +
      +/*
      + * Close and delete the socket.
      + */
     -+void unix_stream_server__free(
     -+	struct unix_stream_server_socket *server_socket);
     ++void unix_ss_free(struct unix_ss_socket *server_socket);
      +
      +/*
      + * Return 1 if the inode of the pathname to our socket changes.
      + */
     -+int unix_stream_server__was_stolen(
     -+	struct unix_stream_server_socket *server_socket);
     ++int unix_ss_was_stolen(struct unix_ss_socket *server_socket);
      +
      +#endif /* UNIX_STREAM_SERVER_H */
 11:  f2e3b046cc8f ! 11:  4c2199231d05 simple-ipc: add Unix domain socket implementation
     @@ compat/simple-ipc/ipc-unix-socket.c (new)
      +}
      +
      +/*
     -+ * This value was chosen at random.
     ++ * Retry frequency when trying to connect to a server.
     ++ *
     ++ * This value should be short enough that we don't seriously delay our
     ++ * caller, but not fast enough that our spinning puts pressure on the
     ++ * system.
      + */
      +#define WAIT_STEP_MS (50)
      +
     @@ compat/simple-ipc/ipc-unix-socket.c (new)
      +	const struct ipc_client_connect_options *options,
      +	int *pfd)
      +{
     -+	int wait_ms = 50;
      +	int k;
      +
      +	*pfd = -1;
      +
     -+	for (k = 0; k < timeout_ms; k += wait_ms) {
     ++	for (k = 0; k < timeout_ms; k += WAIT_STEP_MS) {
      +		int fd = unix_stream_connect(path, options->uds_disallow_chdir);
      +
      +		if (fd != -1) {
     @@ compat/simple-ipc/ipc-unix-socket.c (new)
      +		return IPC_STATE__OTHER_ERROR;
      +
      +	sleep_and_try_again:
     -+		sleep_millisec(wait_ms);
     ++		sleep_millisec(WAIT_STEP_MS);
      +	}
      +
      +	return IPC_STATE__NOT_LISTENING;
      +}
      +
      +/*
     -+ * A randomly chosen timeout value.
     ++ * The total amount of time that we are willing to wait when trying to
     ++ * connect to a server.
     ++ *
     ++ * When the server is first started, it might take a little while for
     ++ * it to become ready to service requests.  Likewise, the server may
     ++ * be very (temporarily) busy and not respond to our connections.
     ++ *
     ++ * We should gracefully and silently handle those conditions and try
     ++ * again for a reasonable time period.
     ++ *
     ++ * The value chosen here should be long enough for the server
     ++ * to reliably heal from the above conditions.
      + */
      +#define MY_CONNECTION_TIMEOUT_MS (1000)
      +
     @@ compat/simple-ipc/ipc-unix-socket.c (new)
      +	enum magic magic;
      +	struct ipc_server_data *server_data;
      +
     -+	struct unix_stream_server_socket *server_socket;
     ++	struct unix_ss_socket *server_socket;
      +
      +	int fd_send_shutdown;
      +	int fd_wait_shutdown;
     @@ compat/simple-ipc/ipc-unix-socket.c (new)
      +			 * will be routed elsewhere and we silently starve.
      +			 * If that happens, just queue a shutdown.
      +			 */
     -+			if (unix_stream_server__was_stolen(
     ++			if (unix_ss_was_stolen(
      +				    accept_thread_data->server_socket)) {
      +				trace2_data_string("ipc-accept", NULL,
      +						   "queue_stop_async",
     @@ compat/simple-ipc/ipc-unix-socket.c (new)
      +static int create_listener_socket(
      +	const char *path,
      +	const struct ipc_server_opts *ipc_opts,
     -+	struct unix_stream_server_socket **new_server_socket)
     ++	struct unix_ss_socket **new_server_socket)
      +{
     -+	struct unix_stream_server_socket *server_socket = NULL;
     ++	struct unix_ss_socket *server_socket = NULL;
      +	struct unix_stream_listen_opts uslg_opts = UNIX_STREAM_LISTEN_OPTS_INIT;
      +	int ret;
      +
      +	uslg_opts.listen_backlog_size = LISTEN_BACKLOG;
      +	uslg_opts.disallow_chdir = ipc_opts->uds_disallow_chdir;
      +
     -+	ret = unix_stream_server__create(path, &uslg_opts, -1, &server_socket);
     ++	ret = unix_ss_create(path, &uslg_opts, -1, &server_socket);
      +	if (ret)
      +		return ret;
      +
      +	if (set_socket_blocking_flag(server_socket->fd_socket, 1)) {
      +		int saved_errno = errno;
     -+		unix_stream_server__free(server_socket);
     ++		unix_ss_free(server_socket);
      +		errno = saved_errno;
      +		return -1;
      +	}
     @@ compat/simple-ipc/ipc-unix-socket.c (new)
      +static int setup_listener_socket(
      +	const char *path,
      +	const struct ipc_server_opts *ipc_opts,
     -+	struct unix_stream_server_socket **new_server_socket)
     ++	struct unix_ss_socket **new_server_socket)
      +{
      +	int ret, saved_errno;
      +
     @@ compat/simple-ipc/ipc-unix-socket.c (new)
      +			 ipc_server_application_cb *application_cb,
      +			 void *application_data)
      +{
     -+	struct unix_stream_server_socket *server_socket = NULL;
     ++	struct unix_ss_socket *server_socket = NULL;
      +	struct ipc_server_data *server_data;
      +	int sv[2];
      +	int k;
     @@ compat/simple-ipc/ipc-unix-socket.c (new)
      +
      +	accept_thread_data = server_data->accept_thread;
      +	if (accept_thread_data) {
     -+		unix_stream_server__free(accept_thread_data->server_socket);
     ++		unix_ss_free(accept_thread_data->server_socket);
      +
      +		if (accept_thread_data->fd_send_shutdown != -1)
      +			close(accept_thread_data->fd_send_shutdown);
 12:  6ccc7472096f = 12:  132b6f3271be t0052: add simple-ipc tests and t/helper/test-simple-ipc tool

-- 
gitgitgadget

^ permalink raw reply	[flat|nested] 178+ messages in thread

* [PATCH v6 01/12] pkt-line: eliminate the need for static buffer in packet_write_gently()
  2021-03-15 21:08         ` [PATCH v6 " Jeff Hostetler via GitGitGadget
@ 2021-03-15 21:08           ` Jeff Hostetler via GitGitGadget
  2021-03-15 21:08           ` [PATCH v6 02/12] pkt-line: do not issue flush packets in write_packetized_*() Johannes Schindelin via GitGitGadget
                             ` (11 subsequent siblings)
  12 siblings, 0 replies; 178+ messages in thread
From: Jeff Hostetler via GitGitGadget @ 2021-03-15 21:08 UTC (permalink / raw)
  To: git
  Cc: Ævar Arnfjörð Bjarmason, Jeff Hostetler,
	Jeff King, SZEDER Gábor, Johannes Schindelin, Chris Torek,
	Jeff Hostetler, Jeff Hostetler

From: Jeff Hostetler <jeffhost@microsoft.com>

Teach `packet_write_gently()` to write the pkt-line header and the actual
buffer in 2 separate calls to `write_in_full()` and avoid the need for a
static buffer, thread-safe scratch space, or an excessively large stack
buffer.

Change `write_packetized_from_fd()` to allocate a temporary buffer rather
than using a static buffer to avoid similar issues here.

These changes are intended to make it easier to use pkt-line routines in
a multi-threaded context with multiple concurrent writers writing to
different streams.

Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
---
 pkt-line.c | 28 ++++++++++++++++++++--------
 1 file changed, 20 insertions(+), 8 deletions(-)

diff --git a/pkt-line.c b/pkt-line.c
index d633005ef746..66bd0ddfd1d0 100644
--- a/pkt-line.c
+++ b/pkt-line.c
@@ -196,17 +196,26 @@ int packet_write_fmt_gently(int fd, const char *fmt, ...)
 
 static int packet_write_gently(const int fd_out, const char *buf, size_t size)
 {
-	static char packet_write_buffer[LARGE_PACKET_MAX];
+	char header[4];
 	size_t packet_size;
 
-	if (size > sizeof(packet_write_buffer) - 4)
+	if (size > LARGE_PACKET_DATA_MAX)
 		return error(_("packet write failed - data exceeds max packet size"));
 
 	packet_trace(buf, size, 1);
 	packet_size = size + 4;
-	set_packet_header(packet_write_buffer, packet_size);
-	memcpy(packet_write_buffer + 4, buf, size);
-	if (write_in_full(fd_out, packet_write_buffer, packet_size) < 0)
+
+	set_packet_header(header, packet_size);
+
+	/*
+	 * Write the header and the buffer in 2 parts so that we do
+	 * not need to allocate a buffer or rely on a static buffer.
+	 * This also avoids putting a large buffer on the stack which
+	 * might have multi-threading issues.
+	 */
+
+	if (write_in_full(fd_out, header, 4) < 0 ||
+	    write_in_full(fd_out, buf, size) < 0)
 		return error(_("packet write failed"));
 	return 0;
 }
@@ -244,20 +253,23 @@ void packet_buf_write_len(struct strbuf *buf, const char *data, size_t len)
 
 int write_packetized_from_fd(int fd_in, int fd_out)
 {
-	static char buf[LARGE_PACKET_DATA_MAX];
+	char *buf = xmalloc(LARGE_PACKET_DATA_MAX);
 	int err = 0;
 	ssize_t bytes_to_write;
 
 	while (!err) {
-		bytes_to_write = xread(fd_in, buf, sizeof(buf));
-		if (bytes_to_write < 0)
+		bytes_to_write = xread(fd_in, buf, LARGE_PACKET_DATA_MAX);
+		if (bytes_to_write < 0) {
+			free(buf);
 			return COPY_READ_ERROR;
+		}
 		if (bytes_to_write == 0)
 			break;
 		err = packet_write_gently(fd_out, buf, bytes_to_write);
 	}
 	if (!err)
 		err = packet_flush_gently(fd_out);
+	free(buf);
 	return err;
 }
 
-- 
gitgitgadget


^ permalink raw reply related	[flat|nested] 178+ messages in thread

* [PATCH v6 02/12] pkt-line: do not issue flush packets in write_packetized_*()
  2021-03-15 21:08         ` [PATCH v6 " Jeff Hostetler via GitGitGadget
  2021-03-15 21:08           ` [PATCH v6 01/12] pkt-line: eliminate the need for static buffer in packet_write_gently() Jeff Hostetler via GitGitGadget
@ 2021-03-15 21:08           ` Johannes Schindelin via GitGitGadget
  2021-03-15 21:08           ` [PATCH v6 03/12] pkt-line: add PACKET_READ_GENTLE_ON_READ_ERROR option Johannes Schindelin via GitGitGadget
                             ` (10 subsequent siblings)
  12 siblings, 0 replies; 178+ messages in thread
From: Johannes Schindelin via GitGitGadget @ 2021-03-15 21:08 UTC (permalink / raw)
  To: git
  Cc: Ævar Arnfjörð Bjarmason, Jeff Hostetler,
	Jeff King, SZEDER Gábor, Johannes Schindelin, Chris Torek,
	Jeff Hostetler, Johannes Schindelin

From: Johannes Schindelin <johannes.schindelin@gmx.de>

Remove the `packet_flush_gently()` call in `write_packetized_from_buf()` and
`write_packetized_from_fd()` and require the caller to call it if desired.
Rename both functions to `write_packetized_from_*_no_flush()` to prevent
later merge accidents.

`write_packetized_from_buf()` currently only has one caller:
`apply_multi_file_filter()` in `convert.c`.  It always wants a flush packet
to be written after writing the payload.

However, we are about to introduce a caller that wants to write many
packets before a final flush packet, so let's make the caller responsible
for emitting the flush packet.

Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
---
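For illustration, the kind of caller this change enables might look
something like this sketch (hypothetical; the helper and its arguments
are made up, the point is "many packets, then one explicit flush"):

	static int send_chunks(int fd, struct strbuf *chunks, size_t nr)
	{
		size_t i;

		for (i = 0; i < nr; i++)
			if (write_packetized_from_buf_no_flush(chunks[i].buf,
							       chunks[i].len,
							       fd) < 0)
				return -1;

		/* the flush packet is now the caller's responsibility */
		return packet_flush_gently(fd);
	}
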
 convert.c  | 8 ++++++--
 pkt-line.c | 8 ++------
 pkt-line.h | 4 ++--
 3 files changed, 10 insertions(+), 10 deletions(-)

diff --git a/convert.c b/convert.c
index ee360c2f07ce..976d4905cb3a 100644
--- a/convert.c
+++ b/convert.c
@@ -884,9 +884,13 @@ static int apply_multi_file_filter(const char *path, const char *src, size_t len
 		goto done;
 
 	if (fd >= 0)
-		err = write_packetized_from_fd(fd, process->in);
+		err = write_packetized_from_fd_no_flush(fd, process->in);
 	else
-		err = write_packetized_from_buf(src, len, process->in);
+		err = write_packetized_from_buf_no_flush(src, len, process->in);
+	if (err)
+		goto done;
+
+	err = packet_flush_gently(process->in);
 	if (err)
 		goto done;
 
diff --git a/pkt-line.c b/pkt-line.c
index 66bd0ddfd1d0..bb0fb0c3802c 100644
--- a/pkt-line.c
+++ b/pkt-line.c
@@ -251,7 +251,7 @@ void packet_buf_write_len(struct strbuf *buf, const char *data, size_t len)
 	packet_trace(data, len, 1);
 }
 
-int write_packetized_from_fd(int fd_in, int fd_out)
+int write_packetized_from_fd_no_flush(int fd_in, int fd_out)
 {
 	char *buf = xmalloc(LARGE_PACKET_DATA_MAX);
 	int err = 0;
@@ -267,13 +267,11 @@ int write_packetized_from_fd(int fd_in, int fd_out)
 			break;
 		err = packet_write_gently(fd_out, buf, bytes_to_write);
 	}
-	if (!err)
-		err = packet_flush_gently(fd_out);
 	free(buf);
 	return err;
 }
 
-int write_packetized_from_buf(const char *src_in, size_t len, int fd_out)
+int write_packetized_from_buf_no_flush(const char *src_in, size_t len, int fd_out)
 {
 	int err = 0;
 	size_t bytes_written = 0;
@@ -289,8 +287,6 @@ int write_packetized_from_buf(const char *src_in, size_t len, int fd_out)
 		err = packet_write_gently(fd_out, src_in + bytes_written, bytes_to_write);
 		bytes_written += bytes_to_write;
 	}
-	if (!err)
-		err = packet_flush_gently(fd_out);
 	return err;
 }
 
diff --git a/pkt-line.h b/pkt-line.h
index 8c90daa59ef0..31012b9943bf 100644
--- a/pkt-line.h
+++ b/pkt-line.h
@@ -32,8 +32,8 @@ void packet_buf_write(struct strbuf *buf, const char *fmt, ...) __attribute__((f
 void packet_buf_write_len(struct strbuf *buf, const char *data, size_t len);
 int packet_flush_gently(int fd);
 int packet_write_fmt_gently(int fd, const char *fmt, ...) __attribute__((format (printf, 2, 3)));
-int write_packetized_from_fd(int fd_in, int fd_out);
-int write_packetized_from_buf(const char *src_in, size_t len, int fd_out);
+int write_packetized_from_fd_no_flush(int fd_in, int fd_out);
+int write_packetized_from_buf_no_flush(const char *src_in, size_t len, int fd_out);
 
 /*
  * Read a packetized line into the buffer, which must be at least size bytes
-- 
gitgitgadget


^ permalink raw reply related	[flat|nested] 178+ messages in thread

* [PATCH v6 03/12] pkt-line: add PACKET_READ_GENTLE_ON_READ_ERROR option
  2021-03-15 21:08         ` [PATCH v6 " Jeff Hostetler via GitGitGadget
  2021-03-15 21:08           ` [PATCH v6 01/12] pkt-line: eliminate the need for static buffer in packet_write_gently() Jeff Hostetler via GitGitGadget
  2021-03-15 21:08           ` [PATCH v6 02/12] pkt-line: do not issue flush packets in write_packetized_*() Johannes Schindelin via GitGitGadget
@ 2021-03-15 21:08           ` Johannes Schindelin via GitGitGadget
  2021-03-15 21:08           ` [PATCH v6 04/12] pkt-line: add options argument to read_packetized_to_strbuf() Johannes Schindelin via GitGitGadget
                             ` (9 subsequent siblings)
  12 siblings, 0 replies; 178+ messages in thread
From: Johannes Schindelin via GitGitGadget @ 2021-03-15 21:08 UTC (permalink / raw)
  To: git
  Cc: Ævar Arnfjörð Bjarmason, Jeff Hostetler,
	Jeff King, SZEDER Gábor, Johannes Schindelin, Chris Torek,
	Jeff Hostetler, Johannes Schindelin

From: Johannes Schindelin <johannes.schindelin@gmx.de>

Introduce PACKET_READ_GENTLE_ON_READ_ERROR option to help libify the
packet readers.

So far, the (possibly indirect) callers of `get_packet_data()` can ask
that function to return an error instead of `die()`ing upon end-of-file.
However, random read errors will still cause the process to die.

So let's introduce an explicit option to tell the packet reader
machinery to please be nice and only return an error on read errors.

This change prepares pkt-line for use by long-running daemon processes.
Such processes should be able to serve multiple concurrent clients and
and survive random IO errors.  If there is an error on one connection,
a daemon should be able to drop that connection and continue serving
existing and future connections.

This ability will be used by a Git-aware "Builtin FSMonitor" feature
in a later patch series.

Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
---
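To illustrate the intended usage pattern (a sketch only, not part of
this patch; serve_one_client() and the dispatch step are hypothetical):

	static void serve_one_client(int fd)
	{
		struct strbuf request = STRBUF_INIT;
		char buffer[LARGE_PACKET_MAX];
		int len;

		for (;;) {
			len = packet_read(fd, NULL, NULL,
					  buffer, sizeof(buffer),
					  PACKET_READ_GENTLE_ON_EOF |
					  PACKET_READ_GENTLE_ON_READ_ERROR);
			if (len < 0)
				break;	/* EOF or read error: drop this client only */
			if (!len) {
				/* flush packet: one complete request received */
				/* ... dispatch request.buf to the application ... */
				strbuf_reset(&request);
				continue;
			}
			strbuf_add(&request, buffer, len);
		}

		strbuf_release(&request);
		close(fd);
	}
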
 pkt-line.c | 19 +++++++++++++++++--
 pkt-line.h | 11 ++++++++---
 2 files changed, 25 insertions(+), 5 deletions(-)

diff --git a/pkt-line.c b/pkt-line.c
index bb0fb0c3802c..457ac4e151bb 100644
--- a/pkt-line.c
+++ b/pkt-line.c
@@ -306,8 +306,11 @@ static int get_packet_data(int fd, char **src_buf, size_t *src_size,
 		*src_size -= ret;
 	} else {
 		ret = read_in_full(fd, dst, size);
-		if (ret < 0)
+		if (ret < 0) {
+			if (options & PACKET_READ_GENTLE_ON_READ_ERROR)
+				return error_errno(_("read error"));
 			die_errno(_("read error"));
+		}
 	}
 
 	/* And complain if we didn't get enough bytes to satisfy the read. */
@@ -315,6 +318,8 @@ static int get_packet_data(int fd, char **src_buf, size_t *src_size,
 		if (options & PACKET_READ_GENTLE_ON_EOF)
 			return -1;
 
+		if (options & PACKET_READ_GENTLE_ON_READ_ERROR)
+			return error(_("the remote end hung up unexpectedly"));
 		die(_("the remote end hung up unexpectedly"));
 	}
 
@@ -343,6 +348,9 @@ enum packet_read_status packet_read_with_status(int fd, char **src_buffer,
 	len = packet_length(linelen);
 
 	if (len < 0) {
+		if (options & PACKET_READ_GENTLE_ON_READ_ERROR)
+			return error(_("protocol error: bad line length "
+				       "character: %.4s"), linelen);
 		die(_("protocol error: bad line length character: %.4s"), linelen);
 	} else if (!len) {
 		packet_trace("0000", 4, 0);
@@ -357,12 +365,19 @@ enum packet_read_status packet_read_with_status(int fd, char **src_buffer,
 		*pktlen = 0;
 		return PACKET_READ_RESPONSE_END;
 	} else if (len < 4) {
+		if (options & PACKET_READ_GENTLE_ON_READ_ERROR)
+			return error(_("protocol error: bad line length %d"),
+				     len);
 		die(_("protocol error: bad line length %d"), len);
 	}
 
 	len -= 4;
-	if ((unsigned)len >= size)
+	if ((unsigned)len >= size) {
+		if (options & PACKET_READ_GENTLE_ON_READ_ERROR)
+			return error(_("protocol error: bad line length %d"),
+				     len);
 		die(_("protocol error: bad line length %d"), len);
+	}
 
 	if (get_packet_data(fd, src_buffer, src_len, buffer, len, options) < 0) {
 		*pktlen = -1;
diff --git a/pkt-line.h b/pkt-line.h
index 31012b9943bf..80ce0187e2ea 100644
--- a/pkt-line.h
+++ b/pkt-line.h
@@ -68,10 +68,15 @@ int write_packetized_from_buf_no_flush(const char *src_in, size_t len, int fd_ou
  *
  * If options contains PACKET_READ_DIE_ON_ERR_PACKET, it dies when it sees an
  * ERR packet.
+ *
+ * If options contains PACKET_READ_GENTLE_ON_READ_ERROR, we will not die
+ * on read errors, but instead return -1.  However, we may still die on an
+ * ERR packet (if requested).
  */
-#define PACKET_READ_GENTLE_ON_EOF     (1u<<0)
-#define PACKET_READ_CHOMP_NEWLINE     (1u<<1)
-#define PACKET_READ_DIE_ON_ERR_PACKET (1u<<2)
+#define PACKET_READ_GENTLE_ON_EOF        (1u<<0)
+#define PACKET_READ_CHOMP_NEWLINE        (1u<<1)
+#define PACKET_READ_DIE_ON_ERR_PACKET    (1u<<2)
+#define PACKET_READ_GENTLE_ON_READ_ERROR (1u<<3)
 int packet_read(int fd, char **src_buffer, size_t *src_len, char
 		*buffer, unsigned size, int options);
 
-- 
gitgitgadget


^ permalink raw reply related	[flat|nested] 178+ messages in thread

* [PATCH v6 04/12] pkt-line: add options argument to read_packetized_to_strbuf()
  2021-03-15 21:08         ` [PATCH v6 " Jeff Hostetler via GitGitGadget
                             ` (2 preceding siblings ...)
  2021-03-15 21:08           ` [PATCH v6 03/12] pkt-line: add PACKET_READ_GENTLE_ON_READ_ERROR option Johannes Schindelin via GitGitGadget
@ 2021-03-15 21:08           ` Johannes Schindelin via GitGitGadget
  2021-03-15 21:08           ` [PATCH v6 05/12] simple-ipc: design documentation for new IPC mechanism Jeff Hostetler via GitGitGadget
                             ` (8 subsequent siblings)
  12 siblings, 0 replies; 178+ messages in thread
From: Johannes Schindelin via GitGitGadget @ 2021-03-15 21:08 UTC (permalink / raw)
  To: git
  Cc: Ævar Arnfjörð Bjarmason, Jeff Hostetler,
	Jeff King, SZEDER Gábor, Johannes Schindelin, Chris Torek,
	Jeff Hostetler, Johannes Schindelin

From: Johannes Schindelin <johannes.schindelin@gmx.de>

Update the calling sequence of `read_packetized_to_strbuf()` to take
an options argument and not assume a fixed set of options.  Update the
only existing caller accordingly to explicitly pass the
formerly-assumed flags.

The `read_packetized_to_strbuf()` function calls `packet_read()` with
a fixed set of assumed options (`PACKET_READ_GENTLE_ON_EOF`).  This
assumption has been fine for the single existing caller
`apply_multi_file_filter()` in `convert.c`.

In a later commit we would like to add other callers to
`read_packetized_to_strbuf()` that need a different set of options.

Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
---
 convert.c  | 3 ++-
 pkt-line.c | 4 ++--
 pkt-line.h | 2 +-
 3 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/convert.c b/convert.c
index 976d4905cb3a..516f1095b06e 100644
--- a/convert.c
+++ b/convert.c
@@ -907,7 +907,8 @@ static int apply_multi_file_filter(const char *path, const char *src, size_t len
 		if (err)
 			goto done;
 
-		err = read_packetized_to_strbuf(process->out, &nbuf) < 0;
+		err = read_packetized_to_strbuf(process->out, &nbuf,
+						PACKET_READ_GENTLE_ON_EOF) < 0;
 		if (err)
 			goto done;
 
diff --git a/pkt-line.c b/pkt-line.c
index 457ac4e151bb..0194137528c3 100644
--- a/pkt-line.c
+++ b/pkt-line.c
@@ -444,7 +444,7 @@ char *packet_read_line_buf(char **src, size_t *src_len, int *dst_len)
 	return packet_read_line_generic(-1, src, src_len, dst_len);
 }
 
-ssize_t read_packetized_to_strbuf(int fd_in, struct strbuf *sb_out)
+ssize_t read_packetized_to_strbuf(int fd_in, struct strbuf *sb_out, int options)
 {
 	int packet_len;
 
@@ -460,7 +460,7 @@ ssize_t read_packetized_to_strbuf(int fd_in, struct strbuf *sb_out)
 			 * that there is already room for the extra byte.
 			 */
 			sb_out->buf + sb_out->len, LARGE_PACKET_DATA_MAX+1,
-			PACKET_READ_GENTLE_ON_EOF);
+			options);
 		if (packet_len <= 0)
 			break;
 		sb_out->len += packet_len;
diff --git a/pkt-line.h b/pkt-line.h
index 80ce0187e2ea..5af5f4568768 100644
--- a/pkt-line.h
+++ b/pkt-line.h
@@ -136,7 +136,7 @@ char *packet_read_line_buf(char **src_buf, size_t *src_len, int *size);
 /*
  * Reads a stream of variable sized packets until a flush packet is detected.
  */
-ssize_t read_packetized_to_strbuf(int fd_in, struct strbuf *sb_out);
+ssize_t read_packetized_to_strbuf(int fd_in, struct strbuf *sb_out, int options);
 
 /*
  * Receive multiplexed output stream over git native protocol.
-- 
gitgitgadget


^ permalink raw reply related	[flat|nested] 178+ messages in thread

* [PATCH v6 05/12] simple-ipc: design documentation for new IPC mechanism
  2021-03-15 21:08         ` [PATCH v6 " Jeff Hostetler via GitGitGadget
                             ` (3 preceding siblings ...)
  2021-03-15 21:08           ` [PATCH v6 04/12] pkt-line: add options argument to read_packetized_to_strbuf() Johannes Schindelin via GitGitGadget
@ 2021-03-15 21:08           ` Jeff Hostetler via GitGitGadget
  2021-03-15 21:08           ` [PATCH v6 06/12] simple-ipc: add win32 implementation Jeff Hostetler via GitGitGadget
                             ` (7 subsequent siblings)
  12 siblings, 0 replies; 178+ messages in thread
From: Jeff Hostetler via GitGitGadget @ 2021-03-15 21:08 UTC (permalink / raw)
  To: git
  Cc: Ævar Arnfjörð Bjarmason, Jeff Hostetler,
	Jeff King, SZEDER Gábor, Johannes Schindelin, Chris Torek,
	Jeff Hostetler, Jeff Hostetler

From: Jeff Hostetler <jeffhost@microsoft.com>

Brief design documentation for new IPC mechanism allowing
foreground Git client to talk with an existing daemon process
at a known location using a named pipe or unix domain socket.

Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
---
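To make the shape of the client side concrete, a single request/response
round trip might look like the following sketch (illustrative only and
not part of the patch; the rendezvous path and request string are
placeholders, and error handling is abbreviated):

	static int ask_daemon_example(void)
	{
		struct ipc_client_connect_options options = { 0 };
		struct strbuf answer = STRBUF_INIT;
		int ret;

		options.wait_if_busy = 1;	/* retry while the server is busy */
		options.wait_if_not_found = 0;	/* do not wait for the path to appear */

		ret = ipc_client_send_command("<rendezvous-path>", &options,
					      "example-request", &answer);
		if (!ret)
			printf("daemon replied: %s\n", answer.buf);

		strbuf_release(&answer);
		return ret;
	}
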
 Documentation/technical/api-simple-ipc.txt | 105 +++++++++++++++++++++
 1 file changed, 105 insertions(+)
 create mode 100644 Documentation/technical/api-simple-ipc.txt

diff --git a/Documentation/technical/api-simple-ipc.txt b/Documentation/technical/api-simple-ipc.txt
new file mode 100644
index 000000000000..d79ad323e675
--- /dev/null
+++ b/Documentation/technical/api-simple-ipc.txt
@@ -0,0 +1,105 @@
+Simple-IPC API
+==============
+
+The Simple-IPC API is a collection of `ipc_` prefixed library routines
+and a basic communication protocol that allow an IPC-client process to
+send an application-specific IPC-request message to an IPC-server
+process and receive an application-specific IPC-response message.
+
+Communication occurs over a named pipe on Windows and a Unix domain
+socket on other platforms.  IPC-clients and IPC-servers rendezvous at
+a previously agreed-to application-specific pathname (which is outside
+the scope of this design) that is local to the computer system.
+
+The IPC-server routines within the server application process create a
+thread pool to listen for connections and receive request messages
+from multiple concurrent IPC-clients.  When received, these messages
+are dispatched up to the server application callbacks for handling.
+IPC-server routines then incrementally relay responses back to the
+IPC-client.
+
+The IPC-client routines within a client application process connect
+to the IPC-server and send a request message and wait for a response.
+When received, the response is returned to the caller.
+
+For example, the `fsmonitor--daemon` feature will be built as a server
+application on top of the IPC-server library routines.  It will have
+threads watching for file system events and a thread pool waiting for
+client connections.  Clients, such as `git status` will request a list
+of file system events since a point in time and the server will
+respond with a list of changed files and directories.  The formats of
+the request and response are application-specific; the IPC-client and
+IPC-server routines treat them as opaque byte streams.
+
+
+Comparison with sub-process model
+---------------------------------
+
+The Simple-IPC mechanism differs from the existing `sub-process.c`
+model (Documentation/technical/long-running-process-protocol.txt)
+used by applications like Git-LFS.  In the LFS-style sub-process model
+the helper is started by the foreground process, communication happens
+via a pair of file descriptors bound to the stdin/stdout of the
+sub-process, the sub-process only serves the current foreground
+process, and the sub-process exits when the foreground process
+terminates.
+
+In the Simple-IPC model the server is a very long-running service.  It
+can service many clients at the same time and has a private socket or
+named pipe connection to each active client.  It might be started
+(on-demand) by the current client process or it might have been
+started by a previous client or by the OS at boot time.  The server
+process is not associated with a terminal and it persists after
+clients terminate.  Clients do not have access to the stdin/stdout of
+the server process and therefore must communicate over sockets or
+named pipes.
+
+
+Server startup and shutdown
+---------------------------
+
+How an application server based upon IPC-server is started is also
+outside the scope of the Simple-IPC design and is a property of the
+application using it.  For example, the server might be started or
+restarted during routine maintenance operations, or it might be
+started as a system service during the system boot-up sequence, or it
+might be started on-demand by a foreground Git command when needed.
+
+Similarly, server shutdown is a property of the application using
+the simple-ipc routines.  For example, the server might decide to
+shut down when idle or only upon explicit request.
+
+
+Simple-IPC protocol
+-------------------
+
+The Simple-IPC protocol consists of a single request message from the
+client and an optional response message from the server.  Both the
+client and server messages are unlimited in length and are terminated
+with a flush packet.
+
+The pkt-line routines (Documentation/technical/protocol-common.txt)
+are used to simplify buffer management during message generation,
+transmission, and reception.  A flush packet is used to mark the end
+of the message.  This allows the sender to incrementally generate and
+transmit the message.  It allows the receiver to incrementally receive
+the message in chunks and to know when it has received the entire
+message.
+
+The actual byte format of the client request and server response
+messages is application-specific.  The IPC layer transmits and
+receives them as opaque byte buffers without any concern for the
+content within.  It is the job of the calling application layer to
+understand the contents of the request and response messages.
+
+
+Summary
+-------
+
+Conceptually, the Simple-IPC protocol is similar to an HTTP REST
+request.  Clients connect, make an application-specific and
+stateless request, receive an application-specific
+response, and disconnect.  It is a one round trip facility for
+querying the server.  The Simple-IPC routines hide the socket,
+named pipe, and thread pool details and allow the application
+layer to focus on the application at hand.
-- 
gitgitgadget


^ permalink raw reply related	[flat|nested] 178+ messages in thread

* [PATCH v6 06/12] simple-ipc: add win32 implementation
  2021-03-15 21:08         ` [PATCH v6 " Jeff Hostetler via GitGitGadget
                             ` (4 preceding siblings ...)
  2021-03-15 21:08           ` [PATCH v6 05/12] simple-ipc: design documentation for new IPC mechanism Jeff Hostetler via GitGitGadget
@ 2021-03-15 21:08           ` Jeff Hostetler via GitGitGadget
  2021-03-15 21:08           ` [PATCH v6 07/12] unix-socket: eliminate static unix_stream_socket() helper function Jeff Hostetler via GitGitGadget
                             ` (6 subsequent siblings)
  12 siblings, 0 replies; 178+ messages in thread
From: Jeff Hostetler via GitGitGadget @ 2021-03-15 21:08 UTC (permalink / raw)
  To: git
  Cc: Ævar Arnfjörð Bjarmason, Jeff Hostetler,
	Jeff King, SZEDER Gábor, Johannes Schindelin, Chris Torek,
	Jeff Hostetler, Jeff Hostetler

From: Jeff Hostetler <jeffhost@microsoft.com>

Create Windows implementation of "simple-ipc" using named pipes.

Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
---
 Makefile                            |   5 +
 compat/simple-ipc/ipc-shared.c      |  28 ++
 compat/simple-ipc/ipc-win32.c       | 751 ++++++++++++++++++++++++++++
 config.mak.uname                    |   2 +
 contrib/buildsystems/CMakeLists.txt |   4 +
 simple-ipc.h                        | 228 +++++++++
 6 files changed, 1018 insertions(+)
 create mode 100644 compat/simple-ipc/ipc-shared.c
 create mode 100644 compat/simple-ipc/ipc-win32.c
 create mode 100644 simple-ipc.h

diff --git a/Makefile b/Makefile
index dd08b4ced01c..d3c42d3f4f9f 100644
--- a/Makefile
+++ b/Makefile
@@ -1667,6 +1667,11 @@ else
 	LIB_OBJS += unix-socket.o
 endif
 
+ifdef USE_WIN32_IPC
+	LIB_OBJS += compat/simple-ipc/ipc-shared.o
+	LIB_OBJS += compat/simple-ipc/ipc-win32.o
+endif
+
 ifdef NO_ICONV
 	BASIC_CFLAGS += -DNO_ICONV
 endif
diff --git a/compat/simple-ipc/ipc-shared.c b/compat/simple-ipc/ipc-shared.c
new file mode 100644
index 000000000000..1edec8159532
--- /dev/null
+++ b/compat/simple-ipc/ipc-shared.c
@@ -0,0 +1,28 @@
+#include "cache.h"
+#include "simple-ipc.h"
+#include "strbuf.h"
+#include "pkt-line.h"
+#include "thread-utils.h"
+
+#ifdef SUPPORTS_SIMPLE_IPC
+
+int ipc_server_run(const char *path, const struct ipc_server_opts *opts,
+		   ipc_server_application_cb *application_cb,
+		   void *application_data)
+{
+	struct ipc_server_data *server_data = NULL;
+	int ret;
+
+	ret = ipc_server_run_async(&server_data, path, opts,
+				   application_cb, application_data);
+	if (ret)
+		return ret;
+
+	ret = ipc_server_await(server_data);
+
+	ipc_server_free(server_data);
+
+	return ret;
+}
+
+#endif /* SUPPORTS_SIMPLE_IPC */
diff --git a/compat/simple-ipc/ipc-win32.c b/compat/simple-ipc/ipc-win32.c
new file mode 100644
index 000000000000..8f89c02037e3
--- /dev/null
+++ b/compat/simple-ipc/ipc-win32.c
@@ -0,0 +1,751 @@
+#include "cache.h"
+#include "simple-ipc.h"
+#include "strbuf.h"
+#include "pkt-line.h"
+#include "thread-utils.h"
+
+#ifndef GIT_WINDOWS_NATIVE
+#error This file can only be compiled on Windows
+#endif
+
+static int initialize_pipe_name(const char *path, wchar_t *wpath, size_t alloc)
+{
+	int off = 0;
+	struct strbuf realpath = STRBUF_INIT;
+
+	if (!strbuf_realpath(&realpath, path, 0))
+		return -1;
+
+	off = swprintf(wpath, alloc, L"\\\\.\\pipe\\");
+	if (xutftowcs(wpath + off, realpath.buf, alloc - off) < 0)
+		return -1;
+
+	/* Handle drive prefix */
+	if (wpath[off] && wpath[off + 1] == L':') {
+		wpath[off + 1] = L'_';
+		off += 2;
+	}
+
+	for (; wpath[off]; off++)
+		if (wpath[off] == L'/')
+			wpath[off] = L'\\';
+
+	strbuf_release(&realpath);
+	return 0;
+}
+
+static enum ipc_active_state get_active_state(wchar_t *pipe_path)
+{
+	if (WaitNamedPipeW(pipe_path, NMPWAIT_USE_DEFAULT_WAIT))
+		return IPC_STATE__LISTENING;
+
+	if (GetLastError() == ERROR_SEM_TIMEOUT)
+		return IPC_STATE__NOT_LISTENING;
+
+	if (GetLastError() == ERROR_FILE_NOT_FOUND)
+		return IPC_STATE__PATH_NOT_FOUND;
+
+	return IPC_STATE__OTHER_ERROR;
+}
+
+enum ipc_active_state ipc_get_active_state(const char *path)
+{
+	wchar_t pipe_path[MAX_PATH];
+
+	if (initialize_pipe_name(path, pipe_path, ARRAY_SIZE(pipe_path)) < 0)
+		return IPC_STATE__INVALID_PATH;
+
+	return get_active_state(pipe_path);
+}
+
+#define WAIT_STEP_MS (50)
+
+static enum ipc_active_state connect_to_server(
+	const wchar_t *wpath,
+	DWORD timeout_ms,
+	const struct ipc_client_connect_options *options,
+	int *pfd)
+{
+	DWORD t_start_ms, t_waited_ms;
+	DWORD step_ms;
+	HANDLE hPipe = INVALID_HANDLE_VALUE;
+	DWORD mode = PIPE_READMODE_BYTE;
+	DWORD gle;
+
+	*pfd = -1;
+
+	for (;;) {
+		hPipe = CreateFileW(wpath, GENERIC_READ | GENERIC_WRITE,
+				    0, NULL, OPEN_EXISTING, 0, NULL);
+		if (hPipe != INVALID_HANDLE_VALUE)
+			break;
+
+		gle = GetLastError();
+
+		switch (gle) {
+		case ERROR_FILE_NOT_FOUND:
+			if (!options->wait_if_not_found)
+				return IPC_STATE__PATH_NOT_FOUND;
+			if (!timeout_ms)
+				return IPC_STATE__PATH_NOT_FOUND;
+
+			step_ms = (timeout_ms < WAIT_STEP_MS) ?
+				timeout_ms : WAIT_STEP_MS;
+			sleep_millisec(step_ms);
+
+			timeout_ms -= step_ms;
+			break; /* try again */
+
+		case ERROR_PIPE_BUSY:
+			if (!options->wait_if_busy)
+				return IPC_STATE__NOT_LISTENING;
+			if (!timeout_ms)
+				return IPC_STATE__NOT_LISTENING;
+
+			t_start_ms = (DWORD)(getnanotime() / 1000000);
+
+			if (!WaitNamedPipeW(wpath, timeout_ms)) {
+				if (GetLastError() == ERROR_SEM_TIMEOUT)
+					return IPC_STATE__NOT_LISTENING;
+
+				return IPC_STATE__OTHER_ERROR;
+			}
+
+			/*
+			 * A pipe server instance became available.
+			 * Race other client processes to connect to
+			 * it.
+			 *
+			 * But first decrement our overall timeout so
+			 * that we don't starve if we keep losing the
+			 * race.  But also guard against special
+			 * NMPWAIT_ values (0 and -1).
+			 */
+			t_waited_ms = (DWORD)(getnanotime() / 1000000) - t_start_ms;
+			if (t_waited_ms < timeout_ms)
+				timeout_ms -= t_waited_ms;
+			else
+				timeout_ms = 1;
+			break; /* try again */
+
+		default:
+			return IPC_STATE__OTHER_ERROR;
+		}
+	}
+
+	if (!SetNamedPipeHandleState(hPipe, &mode, NULL, NULL)) {
+		CloseHandle(hPipe);
+		return IPC_STATE__OTHER_ERROR;
+	}
+
+	*pfd = _open_osfhandle((intptr_t)hPipe, O_RDWR|O_BINARY);
+	if (*pfd < 0) {
+		CloseHandle(hPipe);
+		return IPC_STATE__OTHER_ERROR;
+	}
+
+	/* fd now owns hPipe */
+
+	return IPC_STATE__LISTENING;
+}
+
+/*
+ * The default connection timeout for Windows clients.
+ *
+ * This is not currently part of the ipc_ API (nor the config settings)
+ * because of differences between Windows and other platforms.
+ *
+ * This value was chosen at random.
+ */
+#define WINDOWS_CONNECTION_TIMEOUT_MS (30000)
+
+enum ipc_active_state ipc_client_try_connect(
+	const char *path,
+	const struct ipc_client_connect_options *options,
+	struct ipc_client_connection **p_connection)
+{
+	wchar_t wpath[MAX_PATH];
+	enum ipc_active_state state = IPC_STATE__OTHER_ERROR;
+	int fd = -1;
+
+	*p_connection = NULL;
+
+	trace2_region_enter("ipc-client", "try-connect", NULL);
+	trace2_data_string("ipc-client", NULL, "try-connect/path", path);
+
+	if (initialize_pipe_name(path, wpath, ARRAY_SIZE(wpath)) < 0)
+		state = IPC_STATE__INVALID_PATH;
+	else
+		state = connect_to_server(wpath, WINDOWS_CONNECTION_TIMEOUT_MS,
+					  options, &fd);
+
+	trace2_data_intmax("ipc-client", NULL, "try-connect/state",
+			   (intmax_t)state);
+	trace2_region_leave("ipc-client", "try-connect", NULL);
+
+	if (state == IPC_STATE__LISTENING) {
+		(*p_connection) = xcalloc(1, sizeof(struct ipc_client_connection));
+		(*p_connection)->fd = fd;
+	}
+
+	return state;
+}
+
+void ipc_client_close_connection(struct ipc_client_connection *connection)
+{
+	if (!connection)
+		return;
+
+	if (connection->fd != -1)
+		close(connection->fd);
+
+	free(connection);
+}
+
+int ipc_client_send_command_to_connection(
+	struct ipc_client_connection *connection,
+	const char *message, struct strbuf *answer)
+{
+	int ret = 0;
+
+	strbuf_setlen(answer, 0);
+
+	trace2_region_enter("ipc-client", "send-command", NULL);
+
+	if (write_packetized_from_buf_no_flush(message, strlen(message),
+					       connection->fd) < 0 ||
+	    packet_flush_gently(connection->fd) < 0) {
+		ret = error(_("could not send IPC command"));
+		goto done;
+	}
+
+	FlushFileBuffers((HANDLE)_get_osfhandle(connection->fd));
+
+	if (read_packetized_to_strbuf(
+		    connection->fd, answer,
+		    PACKET_READ_GENTLE_ON_EOF | PACKET_READ_GENTLE_ON_READ_ERROR) < 0) {
+		ret = error(_("could not read IPC response"));
+		goto done;
+	}
+
+done:
+	trace2_region_leave("ipc-client", "send-command", NULL);
+	return ret;
+}
+
+int ipc_client_send_command(const char *path,
+			    const struct ipc_client_connect_options *options,
+			    const char *message, struct strbuf *response)
+{
+	int ret = -1;
+	enum ipc_active_state state;
+	struct ipc_client_connection *connection = NULL;
+
+	state = ipc_client_try_connect(path, options, &connection);
+
+	if (state != IPC_STATE__LISTENING)
+		return ret;
+
+	ret = ipc_client_send_command_to_connection(connection, message, response);
+
+	ipc_client_close_connection(connection);
+
+	return ret;
+}
+
+/*
+ * Duplicate the given pipe handle and wrap it in a file descriptor so
+ * that we can use pkt-line on it.
+ */
+static int dup_fd_from_pipe(const HANDLE pipe)
+{
+	HANDLE process = GetCurrentProcess();
+	HANDLE handle;
+	int fd;
+
+	if (!DuplicateHandle(process, pipe, process, &handle, 0, FALSE,
+			     DUPLICATE_SAME_ACCESS)) {
+		errno = err_win_to_posix(GetLastError());
+		return -1;
+	}
+
+	fd = _open_osfhandle((intptr_t)handle, O_RDWR|O_BINARY);
+	if (fd < 0) {
+		errno = err_win_to_posix(GetLastError());
+		CloseHandle(handle);
+		return -1;
+	}
+
+	/*
+	 * `handle` is now owned by `fd` and will be automatically closed
+	 * when the descriptor is closed.
+	 */
+
+	return fd;
+}
+
+/*
+ * Magic numbers used to annotate callback instance data.
+ * These are used to help guard against accidentally passing the
+ * wrong instance data across multiple levels of callbacks (which
+ * is easy to do if there are `void*` arguments).
+ */
+enum magic {
+	MAGIC_SERVER_REPLY_DATA,
+	MAGIC_SERVER_THREAD_DATA,
+	MAGIC_SERVER_DATA,
+};
+
+struct ipc_server_reply_data {
+	enum magic magic;
+	int fd;
+	struct ipc_server_thread_data *server_thread_data;
+};
+
+struct ipc_server_thread_data {
+	enum magic magic;
+	struct ipc_server_thread_data *next_thread;
+	struct ipc_server_data *server_data;
+	pthread_t pthread_id;
+	HANDLE hPipe;
+};
+
+/*
+ * On Windows, the conceptual "ipc-server" is implemented as a pool of
+ * n identical/peer "server-thread" threads.  That is, there is no
+ * hierarchy of threads, and therefore no controller thread managing
+ * the pool.  Each thread has an independent handle to the named pipe,
+ * receives incoming connections, processes the client, and re-uses
+ * the pipe for the next client connection.
+ *
+ * Therefore, the "ipc-server" only needs to maintain a list of the
+ * spawned threads for eventual "join" purposes.
+ *
+ * A single "stop-event" is visible to all of the server threads to
+ * tell them to shutdown (when idle).
+ */
+struct ipc_server_data {
+	enum magic magic;
+	ipc_server_application_cb *application_cb;
+	void *application_data;
+	struct strbuf buf_path;
+	wchar_t wpath[MAX_PATH];
+
+	HANDLE hEventStopRequested;
+	struct ipc_server_thread_data *thread_list;
+	int is_stopped;
+};
+
+enum connect_result {
+	CR_CONNECTED = 0,
+	CR_CONNECT_PENDING,
+	CR_CONNECT_ERROR,
+	CR_WAIT_ERROR,
+	CR_SHUTDOWN,
+};
+
+static enum connect_result queue_overlapped_connect(
+	struct ipc_server_thread_data *server_thread_data,
+	OVERLAPPED *lpo)
+{
+	if (ConnectNamedPipe(server_thread_data->hPipe, lpo))
+		goto failed;
+
+	switch (GetLastError()) {
+	case ERROR_IO_PENDING:
+		return CR_CONNECT_PENDING;
+
+	case ERROR_PIPE_CONNECTED:
+		SetEvent(lpo->hEvent);
+		return CR_CONNECTED;
+
+	default:
+		break;
+	}
+
+failed:
+	error(_("ConnectNamedPipe failed for '%s' (%lu)"),
+	      server_thread_data->server_data->buf_path.buf,
+	      GetLastError());
+	return CR_CONNECT_ERROR;
+}
+
+/*
+ * Use Windows Overlapped IO to wait for a connection or for our event
+ * to be signalled.
+ */
+static enum connect_result wait_for_connection(
+	struct ipc_server_thread_data *server_thread_data,
+	OVERLAPPED *lpo)
+{
+	enum connect_result r;
+	HANDLE waitHandles[2];
+	DWORD dwWaitResult;
+
+	r = queue_overlapped_connect(server_thread_data, lpo);
+	if (r != CR_CONNECT_PENDING)
+		return r;
+
+	waitHandles[0] = server_thread_data->server_data->hEventStopRequested;
+	waitHandles[1] = lpo->hEvent;
+
+	dwWaitResult = WaitForMultipleObjects(2, waitHandles, FALSE, INFINITE);
+	switch (dwWaitResult) {
+	case WAIT_OBJECT_0 + 0:
+		return CR_SHUTDOWN;
+
+	case WAIT_OBJECT_0 + 1:
+		ResetEvent(lpo->hEvent);
+		return CR_CONNECTED;
+
+	default:
+		return CR_WAIT_ERROR;
+	}
+}
+
+/*
+ * Forward declare our reply callback function so that any compiler
+ * errors are reported when we actually define the function (in addition
+ * to any errors reported when we try to pass this callback function as
+ * a parameter in a function call).  The former are easier to understand.
+ */
+static ipc_server_reply_cb do_io_reply_callback;
+
+/*
+ * Relay application's response message to the client process.
+ * (We do not flush at this point because we allow the caller
+ * to chunk data to the client thru us.)
+ */
+static int do_io_reply_callback(struct ipc_server_reply_data *reply_data,
+		       const char *response, size_t response_len)
+{
+	if (reply_data->magic != MAGIC_SERVER_REPLY_DATA)
+		BUG("reply_cb called with wrong instance data");
+
+	return write_packetized_from_buf_no_flush(response, response_len,
+						  reply_data->fd);
+}
+
+/*
+ * Receive the request/command from the client and pass it to the
+ * registered request-callback.  The request-callback will compose
+ * a response and call our reply-callback to send it to the client.
+ *
+ * Simple-IPC only contains one round trip, so we flush and close
+ * here after the response.
+ */
+static int do_io(struct ipc_server_thread_data *server_thread_data)
+{
+	struct strbuf buf = STRBUF_INIT;
+	struct ipc_server_reply_data reply_data;
+	int ret = 0;
+
+	reply_data.magic = MAGIC_SERVER_REPLY_DATA;
+	reply_data.server_thread_data = server_thread_data;
+
+	reply_data.fd = dup_fd_from_pipe(server_thread_data->hPipe);
+	if (reply_data.fd < 0)
+		return error(_("could not create fd from pipe for '%s'"),
+			     server_thread_data->server_data->buf_path.buf);
+
+	ret = read_packetized_to_strbuf(
+		reply_data.fd, &buf,
+		PACKET_READ_GENTLE_ON_EOF | PACKET_READ_GENTLE_ON_READ_ERROR);
+	if (ret >= 0) {
+		ret = server_thread_data->server_data->application_cb(
+			server_thread_data->server_data->application_data,
+			buf.buf, do_io_reply_callback, &reply_data);
+
+		packet_flush_gently(reply_data.fd);
+
+		FlushFileBuffers((HANDLE)_get_osfhandle((reply_data.fd)));
+	}
+	else {
+		/*
+		 * The client probably disconnected/shutdown before it
+		 * could send a well-formed message.  Ignore it.
+		 */
+	}
+
+	strbuf_release(&buf);
+	close(reply_data.fd);
+
+	return ret;
+}
+
+/*
+ * Handle the IPC request and response with this connected client, and
+ * reset the pipe to prepare for the next client.
+ */
+static int use_connection(struct ipc_server_thread_data *server_thread_data)
+{
+	int ret;
+
+	ret = do_io(server_thread_data);
+
+	FlushFileBuffers(server_thread_data->hPipe);
+	DisconnectNamedPipe(server_thread_data->hPipe);
+
+	return ret;
+}
+
+/*
+ * Thread proc for an IPC server worker thread.  It handles a series of
+ * connections from clients.  It cleans and reuses the hPipe between each
+ * client.
+ */
+static void *server_thread_proc(void *_server_thread_data)
+{
+	struct ipc_server_thread_data *server_thread_data = _server_thread_data;
+	HANDLE hEventConnected = INVALID_HANDLE_VALUE;
+	OVERLAPPED oConnect;
+	enum connect_result cr;
+	int ret;
+
+	assert(server_thread_data->hPipe != INVALID_HANDLE_VALUE);
+
+	trace2_thread_start("ipc-server");
+	trace2_data_string("ipc-server", NULL, "pipe",
+			   server_thread_data->server_data->buf_path.buf);
+
+	hEventConnected = CreateEventW(NULL, TRUE, FALSE, NULL);
+
+	memset(&oConnect, 0, sizeof(oConnect));
+	oConnect.hEvent = hEventConnected;
+
+	for (;;) {
+		cr = wait_for_connection(server_thread_data, &oConnect);
+
+		switch (cr) {
+		case CR_SHUTDOWN:
+			goto finished;
+
+		case CR_CONNECTED:
+			ret = use_connection(server_thread_data);
+			if (ret == SIMPLE_IPC_QUIT) {
+				ipc_server_stop_async(
+					server_thread_data->server_data);
+				goto finished;
+			}
+			if (ret > 0) {
+				/*
+				 * Ignore (transient) IO errors with this
+				 * client and reset for the next client.
+				 */
+			}
+			break;
+
+		case CR_CONNECT_PENDING:
+			/* By construction, this should not happen. */
+			BUG("ipc-server[%s]: unexpected CR_CONNECT_PENDING",
+			    server_thread_data->server_data->buf_path.buf);
+
+		case CR_CONNECT_ERROR:
+		case CR_WAIT_ERROR:
+			/*
+			 * Ignore these theoretical errors.
+			 */
+			DisconnectNamedPipe(server_thread_data->hPipe);
+			break;
+
+		default:
+			BUG("unhandled case after wait_for_connection");
+		}
+	}
+
+finished:
+	CloseHandle(server_thread_data->hPipe);
+	CloseHandle(hEventConnected);
+
+	trace2_thread_exit();
+	return NULL;
+}
+
+static HANDLE create_new_pipe(wchar_t *wpath, int is_first)
+{
+	HANDLE hPipe;
+	DWORD dwOpenMode, dwPipeMode;
+	LPSECURITY_ATTRIBUTES lpsa = NULL;
+
+	dwOpenMode = PIPE_ACCESS_INBOUND | PIPE_ACCESS_OUTBOUND |
+		FILE_FLAG_OVERLAPPED;
+
+	dwPipeMode = PIPE_TYPE_MESSAGE | PIPE_READMODE_BYTE | PIPE_WAIT |
+		PIPE_REJECT_REMOTE_CLIENTS;
+
+	if (is_first) {
+		dwOpenMode |= FILE_FLAG_FIRST_PIPE_INSTANCE;
+
+		/*
+		 * On Windows, the first server pipe instance gets to
+		 * set the ACL / Security Attributes on the named
+		 * pipe; subsequent instances inherit and cannot
+		 * change them.
+		 *
+		 * TODO Should we allow the application layer to
+		 * specify security attributes, such as `LocalService`
+		 * or `LocalSystem`, when we create the named pipe?
+		 * This question is probably not important when the
+		 * daemon is started by a foreground user process and
+		 * only needs to talk to the current user, but may be
+		 * if the daemon is run via the Control Panel as a
+		 * System Service.
+		 */
+	}
+
+	hPipe = CreateNamedPipeW(wpath, dwOpenMode, dwPipeMode,
+				 PIPE_UNLIMITED_INSTANCES, 1024, 1024, 0, lpsa);
+
+	return hPipe;
+}
+
+int ipc_server_run_async(struct ipc_server_data **returned_server_data,
+			 const char *path, const struct ipc_server_opts *opts,
+			 ipc_server_application_cb *application_cb,
+			 void *application_data)
+{
+	struct ipc_server_data *server_data;
+	wchar_t wpath[MAX_PATH];
+	HANDLE hPipeFirst = INVALID_HANDLE_VALUE;
+	int k;
+	int ret = 0;
+	int nr_threads = opts->nr_threads;
+
+	*returned_server_data = NULL;
+
+	ret = initialize_pipe_name(path, wpath, ARRAY_SIZE(wpath));
+	if (ret < 0) {
+		errno = EINVAL;
+		return -1;
+	}
+
+	hPipeFirst = create_new_pipe(wpath, 1);
+	if (hPipeFirst == INVALID_HANDLE_VALUE) {
+		errno = EADDRINUSE;
+		return -2;
+	}
+
+	server_data = xcalloc(1, sizeof(*server_data));
+	server_data->magic = MAGIC_SERVER_DATA;
+	server_data->application_cb = application_cb;
+	server_data->application_data = application_data;
+	server_data->hEventStopRequested = CreateEvent(NULL, TRUE, FALSE, NULL);
+	strbuf_init(&server_data->buf_path, 0);
+	strbuf_addstr(&server_data->buf_path, path);
+	wcscpy(server_data->wpath, wpath);
+
+	if (nr_threads < 1)
+		nr_threads = 1;
+
+	for (k = 0; k < nr_threads; k++) {
+		struct ipc_server_thread_data *std;
+
+		std = xcalloc(1, sizeof(*std));
+		std->magic = MAGIC_SERVER_THREAD_DATA;
+		std->server_data = server_data;
+		std->hPipe = INVALID_HANDLE_VALUE;
+
+		std->hPipe = (k == 0)
+			? hPipeFirst
+			: create_new_pipe(server_data->wpath, 0);
+
+		if (std->hPipe == INVALID_HANDLE_VALUE) {
+			/*
+			 * If we've reached a pipe instance limit for
+			 * this path, just use fewer threads.
+			 */
+			free(std);
+			break;
+		}
+
+		if (pthread_create(&std->pthread_id, NULL,
+				   server_thread_proc, std)) {
+			/*
+			 * Likewise, if we're out of threads, just use
+			 * fewer threads than requested.
+			 *
+			 * However, we just give up if we can't even get
+			 * one thread.  This should not happen.
+			 */
+			if (k == 0)
+				die(_("could not start thread[0] for '%s'"),
+				    path);
+
+			CloseHandle(std->hPipe);
+			free(std);
+			break;
+		}
+
+		std->next_thread = server_data->thread_list;
+		server_data->thread_list = std;
+	}
+
+	*returned_server_data = server_data;
+	return 0;
+}
+
+int ipc_server_stop_async(struct ipc_server_data *server_data)
+{
+	if (!server_data)
+		return 0;
+
+	/*
+	 * Gently tell all of the ipc_server threads to shut down.
+	 * This will be seen the next time they are idle (and waiting
+	 * for a connection).
+	 *
+	 * We DO NOT attempt to force them to drop an active connection.
+	 */
+	SetEvent(server_data->hEventStopRequested);
+	return 0;
+}
+
+int ipc_server_await(struct ipc_server_data *server_data)
+{
+	DWORD dwWaitResult;
+
+	if (!server_data)
+		return 0;
+
+	dwWaitResult = WaitForSingleObject(server_data->hEventStopRequested, INFINITE);
+	if (dwWaitResult != WAIT_OBJECT_0)
+		return error(_("wait for hEvent failed for '%s'"),
+			     server_data->buf_path.buf);
+
+	while (server_data->thread_list) {
+		struct ipc_server_thread_data *std = server_data->thread_list;
+
+		pthread_join(std->pthread_id, NULL);
+
+		server_data->thread_list = std->next_thread;
+		free(std);
+	}
+
+	server_data->is_stopped = 1;
+
+	return 0;
+}
+
+void ipc_server_free(struct ipc_server_data *server_data)
+{
+	if (!server_data)
+		return;
+
+	if (!server_data->is_stopped)
+		BUG("cannot free ipc-server while running for '%s'",
+		    server_data->buf_path.buf);
+
+	strbuf_release(&server_data->buf_path);
+
+	if (server_data->hEventStopRequested != INVALID_HANDLE_VALUE)
+		CloseHandle(server_data->hEventStopRequested);
+
+	while (server_data->thread_list) {
+		struct ipc_server_thread_data *std = server_data->thread_list;
+
+		server_data->thread_list = std->next_thread;
+		free(std);
+	}
+
+	free(server_data);
+}
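
To make the path mangling above concrete, here is an illustrative
translation performed by initialize_pipe_name() (the repository
location is made up):

    path given to the ipc_* APIs : C:/Users/me/repo/.git/my-ipc
    resulting named pipe         : \\.\pipe\C_\Users\me\repo\.git\my-ipc

That is, the realpath of the requested path is appended to the
"\\.\pipe\" namespace, the drive-letter colon becomes '_', and every
'/' becomes '\'.
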
diff --git a/config.mak.uname b/config.mak.uname
index e22d4b6d67a3..2b3303f34be8 100644
--- a/config.mak.uname
+++ b/config.mak.uname
@@ -421,6 +421,7 @@ ifeq ($(uname_S),Windows)
 	RUNTIME_PREFIX = YesPlease
 	HAVE_WPGMPTR = YesWeDo
 	NO_ST_BLOCKS_IN_STRUCT_STAT = YesPlease
+	USE_WIN32_IPC = YesPlease
 	USE_WIN32_MMAP = YesPlease
 	MMAP_PREVENTS_DELETE = UnfortunatelyYes
 	# USE_NED_ALLOCATOR = YesPlease
@@ -597,6 +598,7 @@ ifneq (,$(findstring MINGW,$(uname_S)))
 	RUNTIME_PREFIX = YesPlease
 	HAVE_WPGMPTR = YesWeDo
 	NO_ST_BLOCKS_IN_STRUCT_STAT = YesPlease
+	USE_WIN32_IPC = YesPlease
 	USE_WIN32_MMAP = YesPlease
 	MMAP_PREVENTS_DELETE = UnfortunatelyYes
 	USE_NED_ALLOCATOR = YesPlease
diff --git a/contrib/buildsystems/CMakeLists.txt b/contrib/buildsystems/CMakeLists.txt
index ac3dbc079af8..40c9e8e3bd9d 100644
--- a/contrib/buildsystems/CMakeLists.txt
+++ b/contrib/buildsystems/CMakeLists.txt
@@ -246,6 +246,10 @@ elseif(CMAKE_SYSTEM_NAME STREQUAL "Linux")
 	list(APPEND compat_SOURCES unix-socket.c)
 endif()
 
+if(CMAKE_SYSTEM_NAME STREQUAL "Windows")
+	list(APPEND compat_SOURCES compat/simple-ipc/ipc-shared.c compat/simple-ipc/ipc-win32.c)
+endif()
+
 set(EXE_EXTENSION ${CMAKE_EXECUTABLE_SUFFIX})
 
 #header checks
diff --git a/simple-ipc.h b/simple-ipc.h
new file mode 100644
index 000000000000..ab5619e3d76f
--- /dev/null
+++ b/simple-ipc.h
@@ -0,0 +1,228 @@
+#ifndef GIT_SIMPLE_IPC_H
+#define GIT_SIMPLE_IPC_H
+
+/*
+ * See Documentation/technical/api-simple-ipc.txt
+ */
+
+#if defined(GIT_WINDOWS_NATIVE)
+#define SUPPORTS_SIMPLE_IPC
+#endif
+
+#ifdef SUPPORTS_SIMPLE_IPC
+#include "pkt-line.h"
+
+/*
+ * Simple IPC Client Side API.
+ */
+
+enum ipc_active_state {
+	/*
+	 * The pipe/socket exists and the daemon is waiting for connections.
+	 */
+	IPC_STATE__LISTENING = 0,
+
+	/*
+	 * The pipe/socket exists, but the daemon is not listening.
+	 * Perhaps it is very busy.
+	 * Perhaps the daemon died without deleting the path.
+	 * Perhaps it is shutting down and draining existing clients.
+	 * Perhaps it is dead, but other clients are lingering and
+	 * still holding a reference to the pathname.
+	 */
+	IPC_STATE__NOT_LISTENING,
+
+	/*
+	 * The requested pathname is bogus and no amount of retries
+	 * will fix that.
+	 */
+	IPC_STATE__INVALID_PATH,
+
+	/*
+	 * The requested pathname is not found.  This usually means
+	 * that there is no daemon present.
+	 */
+	IPC_STATE__PATH_NOT_FOUND,
+
+	IPC_STATE__OTHER_ERROR,
+};
+
+struct ipc_client_connect_options {
+	/*
+	 * Spin under timeout if the server is running but can't
+	 * accept our connection yet.  This should always be set
+	 * unless you just want to poke the server and see if it
+	 * is alive.
+	 */
+	unsigned int wait_if_busy:1;
+
+	/*
+	 * Spin under timeout if the pipe/socket is not yet present
+	 * on the file system.  This is useful if we just started
+	 * the service and need to wait for it to become ready.
+	 */
+	unsigned int wait_if_not_found:1;
+};
+
+#define IPC_CLIENT_CONNECT_OPTIONS_INIT { \
+	.wait_if_busy = 0, \
+	.wait_if_not_found = 0, \
+}
+
+/*
+ * Determine if a server is listening on this named pipe or socket using
+ * platform-specific logic.  This might just probe the filesystem or it
+ * might make a trivial connection to the server using this pathname.
+ */
+enum ipc_active_state ipc_get_active_state(const char *path);
+
+struct ipc_client_connection {
+	int fd;
+};
+
+/*
+ * Try to connect to the daemon on the named pipe or socket.
+ *
+ * Returns IPC_STATE__LISTENING and a connection handle.
+ *
+ * Otherwise, returns info to help decide whether to retry or to
+ * spawn/respawn the server.
+ */
+enum ipc_active_state ipc_client_try_connect(
+	const char *path,
+	const struct ipc_client_connect_options *options,
+	struct ipc_client_connection **p_connection);
+
+void ipc_client_close_connection(struct ipc_client_connection *connection);
+
+/*
+ * Used by the client to synchronously send and receive a message with
+ * the server on the provided client connection.
+ *
+ * Returns 0 when successful.
+ *
+ * Calls error() and returns non-zero otherwise.
+ */
+int ipc_client_send_command_to_connection(
+	struct ipc_client_connection *connection,
+	const char *message, struct strbuf *answer);
+
+/*
+ * Used by the client to synchronously connect to the server listening
+ * at the given path, send a message, and receive its response.
+ *
+ * Returns 0 when successful.
+ *
+ * Calls error() and returns non-zero otherwise.
+ */
+int ipc_client_send_command(const char *path,
+			    const struct ipc_client_connect_options *options,
+			    const char *message, struct strbuf *answer);
+
+/*
+ * Simple IPC Server Side API.
+ */
+
+struct ipc_server_reply_data;
+
+typedef int (ipc_server_reply_cb)(struct ipc_server_reply_data *,
+				  const char *response,
+				  size_t response_len);
+
+/*
+ * Prototype for an application-supplied callback to process incoming
+ * client IPC messages and compose a reply.  The `application_cb` should
+ * use the provided `reply_cb` and `reply_data` to send an IPC response
+ * back to the client.  The `reply_cb` callback can be called multiple
+ * times for chunking purposes.  A reply message is optional and may be
+ * omitted if not necessary for the application.
+ *
+ * The return value from the application callback is generally ignored;
+ * the one exception is that returning `SIMPLE_IPC_QUIT` tells the
+ * server to shut down.
+ */
+typedef int (ipc_server_application_cb)(void *application_data,
+					const char *request,
+					ipc_server_reply_cb *reply_cb,
+					struct ipc_server_reply_data *reply_data);
+
+#define SIMPLE_IPC_QUIT -2
+
+/*
+ * Opaque instance data to represent an IPC server instance.
+ */
+struct ipc_server_data;
+
+/*
+ * Control parameters for the IPC server instance.
+ * Use this to hide platform-specific settings.
+ */
+struct ipc_server_opts
+{
+	int nr_threads;
+};
+
+/*
+ * Start an IPC server instance in one or more background threads
+ * and return a handle to the pool.
+ *
+ * Returns 0 if the asynchronous server pool was started successfully.
+ * Returns -1 if not.
+ * Returns -2 if we could not start up because another server is using
+ * the socket or named pipe.
+ *
+ * When a client IPC message is received, the `application_cb` will be
+ * called (possibly on a random thread) to handle the message and
+ * optionally compose a reply message.
+ */
+int ipc_server_run_async(struct ipc_server_data **returned_server_data,
+			 const char *path, const struct ipc_server_opts *opts,
+			 ipc_server_application_cb *application_cb,
+			 void *application_data);
+
+/*
+ * Gently signal the IPC server pool to shut down.  No new client
+ * connections will be accepted, but existing connections will be
+ * allowed to complete.
+ */
+int ipc_server_stop_async(struct ipc_server_data *server_data);
+
+/*
+ * Block the calling thread until all threads in the IPC server pool
+ * have completed and been joined.
+ */
+int ipc_server_await(struct ipc_server_data *server_data);
+
+/*
+ * Close and free all resource handles associated with the IPC server
+ * pool.
+ */
+void ipc_server_free(struct ipc_server_data *server_data);
+
+/*
+ * Run an IPC server instance and block the calling thread of the
+ * current process.  It does not return until the IPC server has
+ * either shut down or hit an unrecoverable error.
+ *
+ * The IPC server handles incoming IPC messages from client processes
+ * and may use one or more background threads as necessary.
+ *
+ * Returns 0 after the server has completed successfully.
+ * Returns -1 if the server cannot be started.
+ * Returns -2 if we could not start up because another server is using
+ * the socket or named pipe.
+ *
+ * When a client IPC message is received, the `application_cb` will be
+ * called (possibly on a random thread) to handle the message and
+ * optionally compose a reply message.
+ *
+ * Note that `ipc_server_run()` is a synchronous wrapper around the
+ * above asynchronous routines.  It effectively hides all of the
+ * server state and thread details from the caller and presents a
+ * simple synchronous interface.
+ */
+int ipc_server_run(const char *path, const struct ipc_server_opts *opts,
+		   ipc_server_application_cb *application_cb,
+		   void *application_data);
+
+#endif /* SUPPORTS_SIMPLE_IPC */
+#endif /* GIT_SIMPLE_IPC_H */
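
To make the shape of this API concrete, here is a minimal sketch of a
daemon and a client built only on the declarations above; the callback
name, thread count, and message strings are invented for the example,
and the usual includes ("cache.h", "simple-ipc.h") plus the
SUPPORTS_SIMPLE_IPC guard are assumed:

	/* daemon side: serve requests until some client sends "quit" */
	static int my_echo_cb(void *application_data, const char *request,
			      ipc_server_reply_cb *reply_cb,
			      struct ipc_server_reply_data *reply_data)
	{
		if (!strcmp(request, "quit"))
			return SIMPLE_IPC_QUIT;
		/* echo the request back; reply_cb may be called repeatedly to chunk */
		return reply_cb(reply_data, request, strlen(request));
	}

	static int run_daemon(const char *path)
	{
		struct ipc_server_opts opts = { .nr_threads = 4 };

		return ipc_server_run(path, &opts, my_echo_cb, NULL);
	}

	/* client side: one synchronous round trip */
	static int say_hello(const char *path)
	{
		struct ipc_client_connect_options options =
			IPC_CLIENT_CONNECT_OPTIONS_INIT;
		struct strbuf answer = STRBUF_INIT;
		int ret;

		options.wait_if_busy = 1;
		ret = ipc_client_send_command(path, &options, "hello", &answer);
		if (!ret)
			printf("daemon answered: %s\n", answer.buf);
		strbuf_release(&answer);
		return ret;
	}
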
-- 
gitgitgadget


^ permalink raw reply related	[flat|nested] 178+ messages in thread

* [PATCH v6 07/12] unix-socket: eliminate static unix_stream_socket() helper function
  2021-03-15 21:08         ` [PATCH v6 " Jeff Hostetler via GitGitGadget
                             ` (5 preceding siblings ...)
  2021-03-15 21:08           ` [PATCH v6 06/12] simple-ipc: add win32 implementation Jeff Hostetler via GitGitGadget
@ 2021-03-15 21:08           ` Jeff Hostetler via GitGitGadget
  2021-03-15 21:08           ` [PATCH v6 08/12] unix-socket: add backlog size option to unix_stream_listen() Jeff Hostetler via GitGitGadget
                             ` (5 subsequent siblings)
  12 siblings, 0 replies; 178+ messages in thread
From: Jeff Hostetler via GitGitGadget @ 2021-03-15 21:08 UTC (permalink / raw)
  To: git
  Cc: Ævar Arnfjörð Bjarmason, Jeff Hostetler,
	Jeff King, SZEDER Gábor, Johannes Schindelin, Chris Torek,
	Jeff Hostetler, Jeff Hostetler

From: Jeff Hostetler <jeffhost@microsoft.com>

The static helper function `unix_stream_socket()` calls `die()`.  This
is not appropriate for all callers.  Eliminate the wrapper function
and make the callers propagate the error.

Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
---
 unix-socket.c | 27 +++++++++++++--------------
 1 file changed, 13 insertions(+), 14 deletions(-)

diff --git a/unix-socket.c b/unix-socket.c
index 19ed48be9902..69f81d64e9d5 100644
--- a/unix-socket.c
+++ b/unix-socket.c
@@ -1,14 +1,6 @@
 #include "cache.h"
 #include "unix-socket.h"
 
-static int unix_stream_socket(void)
-{
-	int fd = socket(AF_UNIX, SOCK_STREAM, 0);
-	if (fd < 0)
-		die_errno("unable to create socket");
-	return fd;
-}
-
 static int chdir_len(const char *orig, int len)
 {
 	char *path = xmemdupz(orig, len);
@@ -73,13 +65,16 @@ static int unix_sockaddr_init(struct sockaddr_un *sa, const char *path,
 
 int unix_stream_connect(const char *path)
 {
-	int fd, saved_errno;
+	int fd = -1, saved_errno;
 	struct sockaddr_un sa;
 	struct unix_sockaddr_context ctx;
 
 	if (unix_sockaddr_init(&sa, path, &ctx) < 0)
 		return -1;
-	fd = unix_stream_socket();
+	fd = socket(AF_UNIX, SOCK_STREAM, 0);
+	if (fd < 0)
+		goto fail;
+
 	if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0)
 		goto fail;
 	unix_sockaddr_cleanup(&ctx);
@@ -87,15 +82,16 @@ int unix_stream_connect(const char *path)
 
 fail:
 	saved_errno = errno;
+	if (fd != -1)
+		close(fd);
 	unix_sockaddr_cleanup(&ctx);
-	close(fd);
 	errno = saved_errno;
 	return -1;
 }
 
 int unix_stream_listen(const char *path)
 {
-	int fd, saved_errno;
+	int fd = -1, saved_errno;
 	struct sockaddr_un sa;
 	struct unix_sockaddr_context ctx;
 
@@ -103,7 +99,9 @@ int unix_stream_listen(const char *path)
 
 	if (unix_sockaddr_init(&sa, path, &ctx) < 0)
 		return -1;
-	fd = unix_stream_socket();
+	fd = socket(AF_UNIX, SOCK_STREAM, 0);
+	if (fd < 0)
+		goto fail;
 
 	if (bind(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0)
 		goto fail;
@@ -116,8 +114,9 @@ int unix_stream_listen(const char *path)
 
 fail:
 	saved_errno = errno;
+	if (fd != -1)
+		close(fd);
 	unix_sockaddr_cleanup(&ctx);
-	close(fd);
 	errno = saved_errno;
 	return -1;
 }
-- 
gitgitgadget


^ permalink raw reply related	[flat|nested] 178+ messages in thread

* [PATCH v6 08/12] unix-socket: add backlog size option to unix_stream_listen()
  2021-03-15 21:08         ` [PATCH v6 " Jeff Hostetler via GitGitGadget
                             ` (6 preceding siblings ...)
  2021-03-15 21:08           ` [PATCH v6 07/12] unix-socket: eliminate static unix_stream_socket() helper function Jeff Hostetler via GitGitGadget
@ 2021-03-15 21:08           ` Jeff Hostetler via GitGitGadget
  2021-03-15 21:08           ` [PATCH v6 09/12] unix-socket: disallow chdir() when creating unix domain sockets Jeff Hostetler via GitGitGadget
                             ` (4 subsequent siblings)
  12 siblings, 0 replies; 178+ messages in thread
From: Jeff Hostetler via GitGitGadget @ 2021-03-15 21:08 UTC (permalink / raw)
  To: git
  Cc: Ævar Arnfjörð Bjarmason, Jeff Hostetler,
	Jeff King, SZEDER Gábor, Johannes Schindelin, Chris Torek,
	Jeff Hostetler, Jeff Hostetler

From: Jeff Hostetler <jeffhost@microsoft.com>

Update `unix_stream_listen()` to take an options structure to override
default behaviors.  The first such option is the size of the `listen()` backlog.
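
As a sketch of the new calling convention (the helper name, socket path,
and backlog value below are only examples), a caller that wants a deeper
backlog would do:

	static int start_listening(void)
	{
		const char *socket_path = "/path/to/socket"; /* example path */
		struct unix_stream_listen_opts opts = UNIX_STREAM_LISTEN_OPTS_INIT;
		int fd;

		opts.listen_backlog_size = 128;	/* <= 0 keeps the default of 5 */
		fd = unix_stream_listen(socket_path, &opts);
		if (fd < 0)
			die_errno("unable to bind to '%s'", socket_path);
		return fd;
	}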

Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
---
 builtin/credential-cache--daemon.c |  3 ++-
 unix-socket.c                      | 11 +++++++++--
 unix-socket.h                      |  9 ++++++++-
 3 files changed, 19 insertions(+), 4 deletions(-)

diff --git a/builtin/credential-cache--daemon.c b/builtin/credential-cache--daemon.c
index c61f123a3b81..4c6c89ab0de2 100644
--- a/builtin/credential-cache--daemon.c
+++ b/builtin/credential-cache--daemon.c
@@ -203,9 +203,10 @@ static int serve_cache_loop(int fd)
 
 static void serve_cache(const char *socket_path, int debug)
 {
+	struct unix_stream_listen_opts opts = UNIX_STREAM_LISTEN_OPTS_INIT;
 	int fd;
 
-	fd = unix_stream_listen(socket_path);
+	fd = unix_stream_listen(socket_path, &opts);
 	if (fd < 0)
 		die_errno("unable to bind to '%s'", socket_path);
 
diff --git a/unix-socket.c b/unix-socket.c
index 69f81d64e9d5..012becd93d57 100644
--- a/unix-socket.c
+++ b/unix-socket.c
@@ -1,6 +1,8 @@
 #include "cache.h"
 #include "unix-socket.h"
 
+#define DEFAULT_UNIX_STREAM_LISTEN_BACKLOG (5)
+
 static int chdir_len(const char *orig, int len)
 {
 	char *path = xmemdupz(orig, len);
@@ -89,9 +91,11 @@ int unix_stream_connect(const char *path)
 	return -1;
 }
 
-int unix_stream_listen(const char *path)
+int unix_stream_listen(const char *path,
+		       const struct unix_stream_listen_opts *opts)
 {
 	int fd = -1, saved_errno;
+	int backlog;
 	struct sockaddr_un sa;
 	struct unix_sockaddr_context ctx;
 
@@ -106,7 +110,10 @@ int unix_stream_listen(const char *path)
 	if (bind(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0)
 		goto fail;
 
-	if (listen(fd, 5) < 0)
+	backlog = opts->listen_backlog_size;
+	if (backlog <= 0)
+		backlog = DEFAULT_UNIX_STREAM_LISTEN_BACKLOG;
+	if (listen(fd, backlog) < 0)
 		goto fail;
 
 	unix_sockaddr_cleanup(&ctx);
diff --git a/unix-socket.h b/unix-socket.h
index e271aeec5a07..ec2fb3ea7267 100644
--- a/unix-socket.h
+++ b/unix-socket.h
@@ -1,7 +1,14 @@
 #ifndef UNIX_SOCKET_H
 #define UNIX_SOCKET_H
 
+struct unix_stream_listen_opts {
+	int listen_backlog_size;
+};
+
+#define UNIX_STREAM_LISTEN_OPTS_INIT { 0 }
+
 int unix_stream_connect(const char *path);
-int unix_stream_listen(const char *path);
+int unix_stream_listen(const char *path,
+		       const struct unix_stream_listen_opts *opts);
 
 #endif /* UNIX_SOCKET_H */
-- 
gitgitgadget


^ permalink raw reply related	[flat|nested] 178+ messages in thread

* [PATCH v6 09/12] unix-socket: disallow chdir() when creating unix domain sockets
  2021-03-15 21:08         ` [PATCH v6 " Jeff Hostetler via GitGitGadget
                             ` (7 preceding siblings ...)
  2021-03-15 21:08           ` [PATCH v6 08/12] unix-socket: add backlog size option to unix_stream_listen() Jeff Hostetler via GitGitGadget
@ 2021-03-15 21:08           ` Jeff Hostetler via GitGitGadget
  2021-03-15 21:08           ` [PATCH v6 10/12] unix-stream-server: create unix domain socket under lock Jeff Hostetler via GitGitGadget
                             ` (3 subsequent siblings)
  12 siblings, 0 replies; 178+ messages in thread
From: Jeff Hostetler via GitGitGadget @ 2021-03-15 21:08 UTC (permalink / raw)
  To: git
  Cc: Ævar Arnfjörð Bjarmason, Jeff Hostetler,
	Jeff King, SZEDER Gábor, Johannes Schindelin, Chris Torek,
	Jeff Hostetler, Jeff Hostetler

From: Jeff Hostetler <jeffhost@microsoft.com>

Calls to `chdir()` are dangerous in a multi-threaded context.  If
`unix_stream_listen()` or `unix_stream_connect()` is given a socket
pathname that is too long to fit in a `sockaddr_un` structure, it will
`chdir()` to the parent directory of the requested socket pathname,
create the socket using a relative pathname, and then `chdir()` back.
This is not thread-safe.

Add a `disallow_chdir` flag and teach `unix_sockaddr_init()` to refuse
to call `chdir()` when it is set.
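
As a sketch (the helper and variable names are invented), a
multi-threaded caller that cannot tolerate a `chdir()` would pass a
nonzero flag and handle the overlong-path case itself:

	static int connect_without_chdir(const char *socket_path)
	{
		int fd = unix_stream_connect(socket_path, 1 /* disallow_chdir */);

		if (fd < 0 && errno == ENAMETOOLONG)
			error(_("socket path too long: '%s'"), socket_path);
		return fd;
	}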

Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
---
 builtin/credential-cache.c |  2 +-
 unix-socket.c              | 17 ++++++++++++-----
 unix-socket.h              |  3 ++-
 3 files changed, 15 insertions(+), 7 deletions(-)

diff --git a/builtin/credential-cache.c b/builtin/credential-cache.c
index 9b3f70990597..76a6ba37223f 100644
--- a/builtin/credential-cache.c
+++ b/builtin/credential-cache.c
@@ -14,7 +14,7 @@
 static int send_request(const char *socket, const struct strbuf *out)
 {
 	int got_data = 0;
-	int fd = unix_stream_connect(socket);
+	int fd = unix_stream_connect(socket, 0);
 
 	if (fd < 0)
 		return -1;
diff --git a/unix-socket.c b/unix-socket.c
index 012becd93d57..e0be1badb58d 100644
--- a/unix-socket.c
+++ b/unix-socket.c
@@ -30,16 +30,23 @@ static void unix_sockaddr_cleanup(struct unix_sockaddr_context *ctx)
 }
 
 static int unix_sockaddr_init(struct sockaddr_un *sa, const char *path,
-			      struct unix_sockaddr_context *ctx)
+			      struct unix_sockaddr_context *ctx,
+			      int disallow_chdir)
 {
 	int size = strlen(path) + 1;
 
 	ctx->orig_dir = NULL;
 	if (size > sizeof(sa->sun_path)) {
-		const char *slash = find_last_dir_sep(path);
+		const char *slash;
 		const char *dir;
 		struct strbuf cwd = STRBUF_INIT;
 
+		if (disallow_chdir) {
+			errno = ENAMETOOLONG;
+			return -1;
+		}
+
+		slash = find_last_dir_sep(path);
 		if (!slash) {
 			errno = ENAMETOOLONG;
 			return -1;
@@ -65,13 +72,13 @@ static int unix_sockaddr_init(struct sockaddr_un *sa, const char *path,
 	return 0;
 }
 
-int unix_stream_connect(const char *path)
+int unix_stream_connect(const char *path, int disallow_chdir)
 {
 	int fd = -1, saved_errno;
 	struct sockaddr_un sa;
 	struct unix_sockaddr_context ctx;
 
-	if (unix_sockaddr_init(&sa, path, &ctx) < 0)
+	if (unix_sockaddr_init(&sa, path, &ctx, disallow_chdir) < 0)
 		return -1;
 	fd = socket(AF_UNIX, SOCK_STREAM, 0);
 	if (fd < 0)
@@ -101,7 +108,7 @@ int unix_stream_listen(const char *path,
 
 	unlink(path);
 
-	if (unix_sockaddr_init(&sa, path, &ctx) < 0)
+	if (unix_sockaddr_init(&sa, path, &ctx, opts->disallow_chdir) < 0)
 		return -1;
 	fd = socket(AF_UNIX, SOCK_STREAM, 0);
 	if (fd < 0)
diff --git a/unix-socket.h b/unix-socket.h
index ec2fb3ea7267..8542cdd7995d 100644
--- a/unix-socket.h
+++ b/unix-socket.h
@@ -3,11 +3,12 @@
 
 struct unix_stream_listen_opts {
 	int listen_backlog_size;
+	unsigned int disallow_chdir:1;
 };
 
 #define UNIX_STREAM_LISTEN_OPTS_INIT { 0 }
 
-int unix_stream_connect(const char *path);
+int unix_stream_connect(const char *path, int disallow_chdir);
 int unix_stream_listen(const char *path,
 		       const struct unix_stream_listen_opts *opts);
 
-- 
gitgitgadget


^ permalink raw reply related	[flat|nested] 178+ messages in thread

* [PATCH v6 10/12] unix-stream-server: create unix domain socket under lock
  2021-03-15 21:08         ` [PATCH v6 " Jeff Hostetler via GitGitGadget
                             ` (8 preceding siblings ...)
  2021-03-15 21:08           ` [PATCH v6 09/12] unix-socket: disallow chdir() when creating unix domain sockets Jeff Hostetler via GitGitGadget
@ 2021-03-15 21:08           ` Jeff Hostetler via GitGitGadget
  2021-03-15 21:08           ` [PATCH v6 11/12] simple-ipc: add Unix domain socket implementation Jeff Hostetler via GitGitGadget
                             ` (2 subsequent siblings)
  12 siblings, 0 replies; 178+ messages in thread
From: Jeff Hostetler via GitGitGadget @ 2021-03-15 21:08 UTC (permalink / raw)
  To: git
  Cc: Ævar Arnfjörð Bjarmason, Jeff Hostetler,
	Jeff King, SZEDER Gábor, Johannes Schindelin, Chris Torek,
	Jeff Hostetler, Jeff Hostetler

From: Jeff Hostetler <jeffhost@microsoft.com>

Create a wrapper class for `unix_stream_listen()` that uses a ".lock"
lockfile to create the unix domain socket in a race-free manner.

Unix domain sockets have a fundamental problem on Unix systems because
they persist in the filesystem until they are deleted.  This is
independent of whether a server is actually listening for connections.
Well-behaved servers are expected to delete the socket when they
shut down.  A new server cannot easily tell if a found socket is
attached to an active server or is leftover cruft from a dead server.
The traditional solution used by `unix_stream_listen()` is to force
delete the socket pathname and then create a new socket.  This solves
the latter (cruft) problem, but in the case of the former, it orphans
the existing server (by stealing the pathname associated with the
socket it is listening on).

We cannot directly use a .lock lockfile to create the socket because
the socket is created by `bind(2)` rather than the `open(2)` mechanism
used by `tempfile.c`.

As an alternative, we hold a plain lockfile ("<path>.lock") as a
mutual exclusion device.  Under the lock, we test if an existing
socket ("<path>") has an active server.  If not, we create a new
socket and begin listening.  Then we use "rollback" to delete the
lockfile in all cases.

This wrapper code conceptually exists at a higher-level than the core
unix_stream_connect() and unix_stream_listen() routines that it
consumes.  It is isolated in a wrapper class for clarity.

Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
---
 Makefile                            |   1 +
 contrib/buildsystems/CMakeLists.txt |   2 +-
 unix-stream-server.c                | 125 ++++++++++++++++++++++++++++
 unix-stream-server.h                |  33 ++++++++
 4 files changed, 160 insertions(+), 1 deletion(-)
 create mode 100644 unix-stream-server.c
 create mode 100644 unix-stream-server.h

diff --git a/Makefile b/Makefile
index d3c42d3f4f9f..012694276f6d 100644
--- a/Makefile
+++ b/Makefile
@@ -1665,6 +1665,7 @@ ifdef NO_UNIX_SOCKETS
 	BASIC_CFLAGS += -DNO_UNIX_SOCKETS
 else
 	LIB_OBJS += unix-socket.o
+	LIB_OBJS += unix-stream-server.o
 endif
 
 ifdef USE_WIN32_IPC
diff --git a/contrib/buildsystems/CMakeLists.txt b/contrib/buildsystems/CMakeLists.txt
index 40c9e8e3bd9d..c94011269ebb 100644
--- a/contrib/buildsystems/CMakeLists.txt
+++ b/contrib/buildsystems/CMakeLists.txt
@@ -243,7 +243,7 @@ if(CMAKE_SYSTEM_NAME STREQUAL "Windows")
 
 elseif(CMAKE_SYSTEM_NAME STREQUAL "Linux")
 	add_compile_definitions(PROCFS_EXECUTABLE_PATH="/proc/self/exe" HAVE_DEV_TTY )
-	list(APPEND compat_SOURCES unix-socket.c)
+	list(APPEND compat_SOURCES unix-socket.c unix-stream-server.c)
 endif()
 
 if(CMAKE_SYSTEM_NAME STREQUAL "Windows")
diff --git a/unix-stream-server.c b/unix-stream-server.c
new file mode 100644
index 000000000000..efa2a207abcd
--- /dev/null
+++ b/unix-stream-server.c
@@ -0,0 +1,125 @@
+#include "cache.h"
+#include "lockfile.h"
+#include "unix-socket.h"
+#include "unix-stream-server.h"
+
+#define DEFAULT_LOCK_TIMEOUT (100)
+
+/*
+ * Try to connect to a unix domain socket at `path` (if it exists) and
+ * see if there is a server listening.
+ *
+ * We don't know if the socket exists, whether a server died and
+ * failed to clean up, or whether we have a live server listening, so
+ * we "poke" it.
+ *
+ * We immediately hang up without sending/receiving any data because we
+ * don't know anything about the protocol spoken and don't want to
+ * block while writing/reading data.  It is sufficient to just know
+ * that someone is listening.
+ */
+static int is_another_server_alive(const char *path,
+				   const struct unix_stream_listen_opts *opts)
+{
+	int fd = unix_stream_connect(path, opts->disallow_chdir);
+	if (fd >= 0) {
+		close(fd);
+		return 1;
+	}
+
+	return 0;
+}
+
+int unix_ss_create(const char *path,
+		   const struct unix_stream_listen_opts *opts,
+		   long timeout_ms,
+		   struct unix_ss_socket **new_server_socket)
+{
+	struct lock_file lock = LOCK_INIT;
+	int fd_socket;
+	struct unix_ss_socket *server_socket;
+
+	*new_server_socket = NULL;
+
+	if (timeout_ms < 0)
+		timeout_ms = DEFAULT_LOCK_TIMEOUT;
+
+	/*
+	 * Create a lock at "<path>.lock" if we can.
+	 */
+	if (hold_lock_file_for_update_timeout(&lock, path, 0, timeout_ms) < 0)
+		return -1;
+
+	/*
+	 * If another server is listening on "<path>" give up.  We do not
+	 * want to create a socket and steal future connections from them.
+	 */
+	if (is_another_server_alive(path, opts)) {
+		rollback_lock_file(&lock);
+		errno = EADDRINUSE;
+		return -2;
+	}
+
+	/*
+	 * Create and bind to a Unix domain socket at "<path>".
+	 */
+	fd_socket = unix_stream_listen(path, opts);
+	if (fd_socket < 0) {
+		int saved_errno = errno;
+		rollback_lock_file(&lock);
+		errno = saved_errno;
+		return -1;
+	}
+
+	server_socket = xcalloc(1, sizeof(*server_socket));
+	server_socket->path_socket = strdup(path);
+	server_socket->fd_socket = fd_socket;
+	lstat(path, &server_socket->st_socket);
+
+	*new_server_socket = server_socket;
+
+	/*
+	 * Always roll back (just delete) "<path>.lock" because we already created
+	 * "<path>" as a socket and do not want commit_lock() to do the atomic
+	 * rename trick.
+	 */
+	rollback_lock_file(&lock);
+
+	return 0;
+}
+
+void unix_ss_free(struct unix_ss_socket *server_socket)
+{
+	if (!server_socket)
+		return;
+
+	if (server_socket->fd_socket >= 0) {
+		if (!unix_ss_was_stolen(server_socket))
+			unlink(server_socket->path_socket);
+		close(server_socket->fd_socket);
+	}
+
+	free(server_socket->path_socket);
+	free(server_socket);
+}
+
+int unix_ss_was_stolen(struct unix_ss_socket *server_socket)
+{
+	struct stat st_now;
+
+	if (!server_socket)
+		return 0;
+
+	if (lstat(server_socket->path_socket, &st_now) == -1)
+		return 1;
+
+	if (st_now.st_ino != server_socket->st_socket.st_ino)
+		return 1;
+	if (st_now.st_dev != server_socket->st_socket.st_dev)
+		return 1;
+
+	if (!S_ISSOCK(st_now.st_mode))
+		return 1;
+
+	return 0;
+}
diff --git a/unix-stream-server.h b/unix-stream-server.h
new file mode 100644
index 000000000000..ae2712ba39b1
--- /dev/null
+++ b/unix-stream-server.h
@@ -0,0 +1,33 @@
+#ifndef UNIX_STREAM_SERVER_H
+#define UNIX_STREAM_SERVER_H
+
+#include "unix-socket.h"
+
+struct unix_ss_socket {
+	char *path_socket;
+	struct stat st_socket;
+	int fd_socket;
+};
+
+/*
+ * Create a Unix Domain Socket at the given path under the protection
+ * of a '.lock' lockfile.
+ *
+ * Returns 0 on success, -1 on error, -2 if socket is in use.
+ */
+int unix_ss_create(const char *path,
+		   const struct unix_stream_listen_opts *opts,
+		   long timeout_ms,
+		   struct unix_ss_socket **server_socket);
+
+/*
+ * Close and delete the socket.
+ */
+void unix_ss_free(struct unix_ss_socket *server_socket);
+
+/*
+ * Return 1 if the inode of the pathname to our socket changes.
+ */
+int unix_ss_was_stolen(struct unix_ss_socket *server_socket);
+
+#endif /* UNIX_STREAM_SERVER_H */
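
As a usage sketch (the helper names, path, timeout, and control flow are
invented for the example), a server built on this wrapper creates the
socket under the lock, periodically checks whether the pathname has been
stolen, and cleans up on exit:

	static struct unix_ss_socket *start_server(const char *path)
	{
		struct unix_stream_listen_opts opts = UNIX_STREAM_LISTEN_OPTS_INIT;
		struct unix_ss_socket *server_socket = NULL;
		int ret;

		opts.disallow_chdir = 1;
		ret = unix_ss_create(path, &opts, -1, &server_socket);
		if (ret == -2)
			die(_("socket '%s' already in use"), path);
		if (ret < 0)
			die_errno(_("could not create socket '%s'"), path);
		return server_socket;
	}

	static void serve_and_shutdown(struct unix_ss_socket *server_socket)
	{
		/* ... accept(2) connections on server_socket->fd_socket ... */

		if (unix_ss_was_stolen(server_socket)) {
			/* another server owns the pathname now; quietly quit */
		}

		unix_ss_free(server_socket);
	}
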
-- 
gitgitgadget


^ permalink raw reply related	[flat|nested] 178+ messages in thread

* [PATCH v6 11/12] simple-ipc: add Unix domain socket implementation
  2021-03-15 21:08         ` [PATCH v6 " Jeff Hostetler via GitGitGadget
                             ` (9 preceding siblings ...)
  2021-03-15 21:08           ` [PATCH v6 10/12] unix-stream-server: create unix domain socket under lock Jeff Hostetler via GitGitGadget
@ 2021-03-15 21:08           ` Jeff Hostetler via GitGitGadget
  2021-03-15 21:08           ` [PATCH v6 12/12] t0052: add simple-ipc tests and t/helper/test-simple-ipc tool Jeff Hostetler via GitGitGadget
  2021-03-22 10:29           ` [PATCH v7 00/12] Simple IPC Mechanism Jeff Hostetler via GitGitGadget
  12 siblings, 0 replies; 178+ messages in thread
From: Jeff Hostetler via GitGitGadget @ 2021-03-15 21:08 UTC (permalink / raw)
  To: git
  Cc: Ævar Arnfjörð Bjarmason, Jeff Hostetler,
	Jeff King, SZEDER Gábor, Johannes Schindelin, Chris Torek,
	Jeff Hostetler, Jeff Hostetler

From: Jeff Hostetler <jeffhost@microsoft.com>

Create Unix domain socket based implementation of "simple-ipc".

A set of `ipc_client` routines implement a client library to connect
to an `ipc_server` over a Unix domain socket, send a simple request,
and receive a single response.  Clients use blocking IO on the socket.

A set of `ipc_server` routines implement a thread pool to listen for
and concurrently service client connections.

The server creates a new Unix domain socket at a known location.  If a
socket already exists with that name, the server tries to determine if
another server is already listening on the socket or if the socket is
dead.  If the socket is busy, the server exits with an error rather than
stealing the socket.  If the socket is dead, the server creates a new
one and starts up.

If while running, the server detects that its socket has been stolen
by another server, it automatically exits.
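
For illustration (the helper name and path are examples): a client that
expects the daemon may still be starting up, or may be briefly too busy
to accept, would set both wait flags so that the connect helper retries
for a short period before giving up:

	static struct ipc_client_connection *connect_to_daemon(const char *path)
	{
		struct ipc_client_connect_options options =
			IPC_CLIENT_CONNECT_OPTIONS_INIT;
		struct ipc_client_connection *connection = NULL;

		options.wait_if_not_found = 1;
		options.wait_if_busy = 1;

		if (ipc_client_try_connect(path, &options,
					   &connection) != IPC_STATE__LISTENING)
			return NULL; /* caller decides whether to (re)spawn */
		return connection;
	}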

Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
---
 Makefile                            |    2 +
 compat/simple-ipc/ipc-unix-socket.c | 1000 +++++++++++++++++++++++++++
 contrib/buildsystems/CMakeLists.txt |    2 +
 simple-ipc.h                        |   13 +-
 4 files changed, 1016 insertions(+), 1 deletion(-)
 create mode 100644 compat/simple-ipc/ipc-unix-socket.c

diff --git a/Makefile b/Makefile
index 012694276f6d..20dd65d19658 100644
--- a/Makefile
+++ b/Makefile
@@ -1666,6 +1666,8 @@ ifdef NO_UNIX_SOCKETS
 else
 	LIB_OBJS += unix-socket.o
 	LIB_OBJS += unix-stream-server.o
+	LIB_OBJS += compat/simple-ipc/ipc-shared.o
+	LIB_OBJS += compat/simple-ipc/ipc-unix-socket.o
 endif
 
 ifdef USE_WIN32_IPC
diff --git a/compat/simple-ipc/ipc-unix-socket.c b/compat/simple-ipc/ipc-unix-socket.c
new file mode 100644
index 000000000000..5e2e82a523a1
--- /dev/null
+++ b/compat/simple-ipc/ipc-unix-socket.c
@@ -0,0 +1,1000 @@
+#include "cache.h"
+#include "simple-ipc.h"
+#include "strbuf.h"
+#include "pkt-line.h"
+#include "thread-utils.h"
+#include "unix-socket.h"
+#include "unix-stream-server.h"
+
+#ifdef NO_UNIX_SOCKETS
+#error compat/simple-ipc/ipc-unix-socket.c requires Unix sockets
+#endif
+
+enum ipc_active_state ipc_get_active_state(const char *path)
+{
+	enum ipc_active_state state = IPC_STATE__OTHER_ERROR;
+	struct ipc_client_connect_options options
+		= IPC_CLIENT_CONNECT_OPTIONS_INIT;
+	struct stat st;
+	struct ipc_client_connection *connection_test = NULL;
+
+	options.wait_if_busy = 0;
+	options.wait_if_not_found = 0;
+
+	if (lstat(path, &st) == -1) {
+		switch (errno) {
+		case ENOENT:
+		case ENOTDIR:
+			return IPC_STATE__NOT_LISTENING;
+		default:
+			return IPC_STATE__INVALID_PATH;
+		}
+	}
+
+	/* also complain if a plain file is in the way */
+	if ((st.st_mode & S_IFMT) != S_IFSOCK)
+		return IPC_STATE__INVALID_PATH;
+
+	/*
+	 * Just because the filesystem has an S_IFSOCK type inode
+	 * at `path` doesn't mean that there is a server listening.
+	 * Ping it to be sure.
+	 */
+	state = ipc_client_try_connect(path, &options, &connection_test);
+	ipc_client_close_connection(connection_test);
+
+	return state;
+}
+
+/*
+ * Retry frequency when trying to connect to a server.
+ *
+ * This value should be short enough that we don't seriously delay our
+ * caller, but not so short that our spinning puts pressure on the
+ * system.
+ */
+#define WAIT_STEP_MS (50)
+
+/*
+ * Try to connect to the server.  If the server is just starting up or
+ * is very busy, we may not get a connection the first time.
+ */
+static enum ipc_active_state connect_to_server(
+	const char *path,
+	int timeout_ms,
+	const struct ipc_client_connect_options *options,
+	int *pfd)
+{
+	int k;
+
+	*pfd = -1;
+
+	for (k = 0; k < timeout_ms; k += WAIT_STEP_MS) {
+		int fd = unix_stream_connect(path, options->uds_disallow_chdir);
+
+		if (fd != -1) {
+			*pfd = fd;
+			return IPC_STATE__LISTENING;
+		}
+
+		if (errno == ENOENT) {
+			if (!options->wait_if_not_found)
+				return IPC_STATE__PATH_NOT_FOUND;
+
+			goto sleep_and_try_again;
+		}
+
+		if (errno == ETIMEDOUT) {
+			if (!options->wait_if_busy)
+				return IPC_STATE__NOT_LISTENING;
+
+			goto sleep_and_try_again;
+		}
+
+		if (errno == ECONNREFUSED) {
+			if (!options->wait_if_busy)
+				return IPC_STATE__NOT_LISTENING;
+
+			goto sleep_and_try_again;
+		}
+
+		return IPC_STATE__OTHER_ERROR;
+
+	sleep_and_try_again:
+		sleep_millisec(WAIT_STEP_MS);
+	}
+
+	return IPC_STATE__NOT_LISTENING;
+}
+
+/*
+ * The total amount of time that we are willing to wait when trying to
+ * connect to a server.
+ *
+ * When the server is first started, it might take a little while for
+ * it to become ready to service requests.  Likewise, the server may
+ * be very (temporarily) busy and not respond to our connections.
+ *
+ * We should gracefully and silently handle those conditions and try
+ * again for a reasonable time period.
+ *
+ * The value chosen here should be long enough for the server
+ * to reliably heal from the above conditions.
+ */
+#define MY_CONNECTION_TIMEOUT_MS (1000)
+
+enum ipc_active_state ipc_client_try_connect(
+	const char *path,
+	const struct ipc_client_connect_options *options,
+	struct ipc_client_connection **p_connection)
+{
+	enum ipc_active_state state = IPC_STATE__OTHER_ERROR;
+	int fd = -1;
+
+	*p_connection = NULL;
+
+	trace2_region_enter("ipc-client", "try-connect", NULL);
+	trace2_data_string("ipc-client", NULL, "try-connect/path", path);
+
+	state = connect_to_server(path, MY_CONNECTION_TIMEOUT_MS,
+				  options, &fd);
+
+	trace2_data_intmax("ipc-client", NULL, "try-connect/state",
+			   (intmax_t)state);
+	trace2_region_leave("ipc-client", "try-connect", NULL);
+
+	if (state == IPC_STATE__LISTENING) {
+		(*p_connection) = xcalloc(1, sizeof(struct ipc_client_connection));
+		(*p_connection)->fd = fd;
+	}
+
+	return state;
+}
+
+void ipc_client_close_connection(struct ipc_client_connection *connection)
+{
+	if (!connection)
+		return;
+
+	if (connection->fd != -1)
+		close(connection->fd);
+
+	free(connection);
+}
+
+int ipc_client_send_command_to_connection(
+	struct ipc_client_connection *connection,
+	const char *message, struct strbuf *answer)
+{
+	int ret = 0;
+
+	strbuf_setlen(answer, 0);
+
+	trace2_region_enter("ipc-client", "send-command", NULL);
+
+	if (write_packetized_from_buf_no_flush(message, strlen(message),
+					       connection->fd) < 0 ||
+	    packet_flush_gently(connection->fd) < 0) {
+		ret = error(_("could not send IPC command"));
+		goto done;
+	}
+
+	if (read_packetized_to_strbuf(
+		    connection->fd, answer,
+		    PACKET_READ_GENTLE_ON_EOF | PACKET_READ_GENTLE_ON_READ_ERROR) < 0) {
+		ret = error(_("could not read IPC response"));
+		goto done;
+	}
+
+done:
+	trace2_region_leave("ipc-client", "send-command", NULL);
+	return ret;
+}
+
+int ipc_client_send_command(const char *path,
+			    const struct ipc_client_connect_options *options,
+			    const char *message, struct strbuf *answer)
+{
+	int ret = -1;
+	enum ipc_active_state state;
+	struct ipc_client_connection *connection = NULL;
+
+	state = ipc_client_try_connect(path, options, &connection);
+
+	if (state != IPC_STATE__LISTENING)
+		return ret;
+
+	ret = ipc_client_send_command_to_connection(connection, message, answer);
+
+	ipc_client_close_connection(connection);
+
+	return ret;
+}
+
+static int set_socket_blocking_flag(int fd, int make_nonblocking)
+{
+	int flags;
+
+	flags = fcntl(fd, F_GETFL, NULL);
+
+	if (flags < 0)
+		return -1;
+
+	if (make_nonblocking)
+		flags |= O_NONBLOCK;
+	else
+		flags &= ~O_NONBLOCK;
+
+	return fcntl(fd, F_SETFL, flags);
+}
+
+/*
+ * Magic numbers used to annotate callback instance data.
+ * These are used to help guard against accidentally passing the
+ * wrong instance data across multiple levels of callbacks (which
+ * is easy to do if there are `void*` arguments).
+ */
+enum magic {
+	MAGIC_SERVER_REPLY_DATA,
+	MAGIC_WORKER_THREAD_DATA,
+	MAGIC_ACCEPT_THREAD_DATA,
+	MAGIC_SERVER_DATA,
+};
+
+struct ipc_server_reply_data {
+	enum magic magic;
+	int fd;
+	struct ipc_worker_thread_data *worker_thread_data;
+};
+
+struct ipc_worker_thread_data {
+	enum magic magic;
+	struct ipc_worker_thread_data *next_thread;
+	struct ipc_server_data *server_data;
+	pthread_t pthread_id;
+};
+
+struct ipc_accept_thread_data {
+	enum magic magic;
+	struct ipc_server_data *server_data;
+
+	struct unix_ss_socket *server_socket;
+
+	int fd_send_shutdown;
+	int fd_wait_shutdown;
+	pthread_t pthread_id;
+};
+
+/*
+ * With unix-sockets, the conceptual "ipc-server" is implemented as a single
+ * controller "accept-thread" thread and a pool of "worker-thread" threads.
+ * The former does the usual `accept()` loop and dispatches connections
+ * to an idle worker thread.  The worker threads wait in an idle loop for
+ * a new connection, communicate with the client and relay data to/from
+ * the `application_cb`, and then wait for the accept-thread to dispatch
+ * another connection.  This avoids the overhead of constantly creating and
+ * destroying threads.
+ */
+struct ipc_server_data {
+	enum magic magic;
+	ipc_server_application_cb *application_cb;
+	void *application_data;
+	struct strbuf buf_path;
+
+	struct ipc_accept_thread_data *accept_thread;
+	struct ipc_worker_thread_data *worker_thread_list;
+
+	pthread_mutex_t work_available_mutex;
+	pthread_cond_t work_available_cond;
+
+	/*
+	 * Accepted but not yet processed client connections are kept
+	 * in a circular buffer FIFO.  The queue is empty when the
+	 * positions are equal.
+	 */
+	int *fifo_fds;
+	int queue_size;
+	int back_pos;
+	int front_pos;
+
+	int shutdown_requested;
+	int is_stopped;
+};
+
+/*
+ * Remove and return the oldest queued connection.
+ *
+ * Returns -1 if empty.
+ */
+static int fifo_dequeue(struct ipc_server_data *server_data)
+{
+	/* ASSERT holding mutex */
+
+	int fd;
+
+	if (server_data->back_pos == server_data->front_pos)
+		return -1;
+
+	fd = server_data->fifo_fds[server_data->front_pos];
+	server_data->fifo_fds[server_data->front_pos] = -1;
+
+	server_data->front_pos++;
+	if (server_data->front_pos == server_data->queue_size)
+		server_data->front_pos = 0;
+
+	return fd;
+}
+
+/*
+ * Push a new fd onto the back of the queue.
+ *
+ * Drop it and return -1 if queue is already full.
+ */
+static int fifo_enqueue(struct ipc_server_data *server_data, int fd)
+{
+	/* ASSERT holding mutex */
+
+	int next_back_pos;
+
+	next_back_pos = server_data->back_pos + 1;
+	if (next_back_pos == server_data->queue_size)
+		next_back_pos = 0;
+
+	if (next_back_pos == server_data->front_pos) {
+		/* Queue is full. Just drop it. */
+		close(fd);
+		return -1;
+	}
+
+	server_data->fifo_fds[server_data->back_pos] = fd;
+	server_data->back_pos = next_back_pos;
+
+	return fd;
+}
+
+/*
+ * Wait for a connection to be queued to the FIFO and return it.
+ *
+ * Returns -1 if someone has already requested a shutdown.
+ */
+static int worker_thread__wait_for_connection(
+	struct ipc_worker_thread_data *worker_thread_data)
+{
+	/* ASSERT NOT holding mutex */
+
+	struct ipc_server_data *server_data = worker_thread_data->server_data;
+	int fd = -1;
+
+	pthread_mutex_lock(&server_data->work_available_mutex);
+	for (;;) {
+		if (server_data->shutdown_requested)
+			break;
+
+		fd = fifo_dequeue(server_data);
+		if (fd >= 0)
+			break;
+
+		pthread_cond_wait(&server_data->work_available_cond,
+				  &server_data->work_available_mutex);
+	}
+	pthread_mutex_unlock(&server_data->work_available_mutex);
+
+	return fd;
+}
+
+/*
+ * Forward declare our reply callback function so that any compiler
+ * errors are reported when we actually define the function (in addition
+ * to any errors reported when we try to pass this callback function as
+ * a parameter in a function call).  The former are easier to understand.
+ */
+static ipc_server_reply_cb do_io_reply_callback;
+
+/*
+ * Relay application's response message to the client process.
+ * (We do not flush at this point because we allow the caller
+ * to chunk data to the client thru us.)
+ */
+static int do_io_reply_callback(struct ipc_server_reply_data *reply_data,
+		       const char *response, size_t response_len)
+{
+	if (reply_data->magic != MAGIC_SERVER_REPLY_DATA)
+		BUG("reply_cb called with wrong instance data");
+
+	return write_packetized_from_buf_no_flush(response, response_len,
+						  reply_data->fd);
+}
+
+/* A randomly chosen value. */
+#define MY_WAIT_POLL_TIMEOUT_MS (10)
+
+/*
+ * If the client hangs up without sending any data on the wire, just
+ * quietly close the socket and ignore this client.
+ *
+ * This worker thread is committed to reading the IPC request data
+ * from the client at the other end of this fd.  Wait here for the
+ * client to actually put something on the wire -- because if the
+ * client just does a ping (connects and hangs up without sending any
+ * data), our use of the pkt-line read routines will spew an error
+ * message.
+ *
+ * Return -1 if the client hung up.
+ * Return 0 if data (possibly incomplete) is ready.
+ */
+static int worker_thread__wait_for_io_start(
+	struct ipc_worker_thread_data *worker_thread_data,
+	int fd)
+{
+	struct ipc_server_data *server_data = worker_thread_data->server_data;
+	struct pollfd pollfd[1];
+	int result;
+
+	for (;;) {
+		pollfd[0].fd = fd;
+		pollfd[0].events = POLLIN;
+
+		result = poll(pollfd, 1, MY_WAIT_POLL_TIMEOUT_MS);
+		if (result < 0) {
+			if (errno == EINTR)
+				continue;
+			goto cleanup;
+		}
+
+		if (result == 0) {
+			/* a timeout */
+
+			int in_shutdown;
+
+			pthread_mutex_lock(&server_data->work_available_mutex);
+			in_shutdown = server_data->shutdown_requested;
+			pthread_mutex_unlock(&server_data->work_available_mutex);
+
+			/*
+			 * If a shutdown is already in progress and this
+			 * client has not started talking yet, just drop it.
+			 */
+			if (in_shutdown)
+				goto cleanup;
+			continue;
+		}
+
+		if (pollfd[0].revents & POLLHUP)
+			goto cleanup;
+
+		if (pollfd[0].revents & POLLIN)
+			return 0;
+
+		goto cleanup;
+	}
+
+cleanup:
+	close(fd);
+	return -1;
+}
+
+/*
+ * Receive the request/command from the client and pass it to the
+ * registered request-callback.  The request-callback will compose
+ * a response and call our reply-callback to send it to the client.
+ */
+static int worker_thread__do_io(
+	struct ipc_worker_thread_data *worker_thread_data,
+	int fd)
+{
+	/* ASSERT NOT holding lock */
+
+	struct strbuf buf = STRBUF_INIT;
+	struct ipc_server_reply_data reply_data;
+	int ret = 0;
+
+	reply_data.magic = MAGIC_SERVER_REPLY_DATA;
+	reply_data.worker_thread_data = worker_thread_data;
+
+	reply_data.fd = fd;
+
+	ret = read_packetized_to_strbuf(
+		reply_data.fd, &buf,
+		PACKET_READ_GENTLE_ON_EOF | PACKET_READ_GENTLE_ON_READ_ERROR);
+	if (ret >= 0) {
+		ret = worker_thread_data->server_data->application_cb(
+			worker_thread_data->server_data->application_data,
+			buf.buf, do_io_reply_callback, &reply_data);
+
+		packet_flush_gently(reply_data.fd);
+	}
+	else {
+		/*
+		 * The client probably disconnected/shutdown before it
+		 * could send a well-formed message.  Ignore it.
+		 */
+	}
+
+	strbuf_release(&buf);
+	close(reply_data.fd);
+
+	return ret;
+}
+
+/*
+ * Block SIGPIPE on the current thread (so that we get EPIPE from
+ * write() rather than an actual signal).
+ *
+ * Note that using sigchain_push() and _pop() to control SIGPIPE
+ * around our IO calls is not thread safe:
+ * [] It uses a global stack of handler frames.
+ * [] It uses ALLOC_GROW() to resize it.
+ * [] Finally, according to the `signal(2)` man-page:
+ *    "The effects of `signal()` in a multithreaded process are unspecified."
+ */
+static void thread_block_sigpipe(sigset_t *old_set)
+{
+	sigset_t new_set;
+
+	sigemptyset(&new_set);
+	sigaddset(&new_set, SIGPIPE);
+
+	sigemptyset(old_set);
+	pthread_sigmask(SIG_BLOCK, &new_set, old_set);
+}
+
+/*
+ * Thread proc for an IPC worker thread.  It handles a series of
+ * connections from clients.  It pulls the next fd from the queue,
+ * processes it, and then waits for the next client.
+ *
+ * Block SIGPIPE in this worker thread for the life of the thread.
+ * This avoids stray (and sometimes delayed) SIGPIPE signals caused
+ * by client errors and/or when we are under extremely heavy IO load.
+ *
+ * This means that the application callback will have SIGPIPE blocked.
+ * The callback should not change it.
+ */
+static void *worker_thread_proc(void *_worker_thread_data)
+{
+	struct ipc_worker_thread_data *worker_thread_data = _worker_thread_data;
+	struct ipc_server_data *server_data = worker_thread_data->server_data;
+	sigset_t old_set;
+	int fd, io;
+	int ret;
+
+	trace2_thread_start("ipc-worker");
+
+	thread_block_sigpipe(&old_set);
+
+	for (;;) {
+		fd = worker_thread__wait_for_connection(worker_thread_data);
+		if (fd == -1)
+			break; /* in shutdown */
+
+		io = worker_thread__wait_for_io_start(worker_thread_data, fd);
+		if (io == -1)
+			continue; /* client hung up without sending anything */
+
+		ret = worker_thread__do_io(worker_thread_data, fd);
+
+		if (ret == SIMPLE_IPC_QUIT) {
+			trace2_data_string("ipc-worker", NULL, "queue_stop_async",
+					   "application_quit");
+			/*
+			 * The application layer is telling the ipc-server
+			 * layer to shutdown.
+			 *
+			 * We DO NOT have a response to send to the client.
+			 *
+			 * Queue an async stop (to stop the other threads) and
+			 * allow this worker thread to exit now (no sense waiting
+			 * for the thread-pool shutdown signal).
+			 *
+			 * Other non-idle worker threads are allowed to finish
+			 * responding to their current clients.
+			 */
+			ipc_server_stop_async(server_data);
+			break;
+		}
+	}
+
+	trace2_thread_exit();
+	return NULL;
+}
+
+/* A randomly chosen value. */
+#define MY_ACCEPT_POLL_TIMEOUT_MS (60 * 1000)
+
+/*
+ * Accept a new client connection on our socket.  This uses non-blocking
+ * IO so that we can also wait for shutdown requests on our socket-pair
+ * without actually spinning on a fast timeout.
+ */
+static int accept_thread__wait_for_connection(
+	struct ipc_accept_thread_data *accept_thread_data)
+{
+	struct pollfd pollfd[2];
+	int result;
+
+	for (;;) {
+		pollfd[0].fd = accept_thread_data->fd_wait_shutdown;
+		pollfd[0].events = POLLIN;
+
+		pollfd[1].fd = accept_thread_data->server_socket->fd_socket;
+		pollfd[1].events = POLLIN;
+
+		result = poll(pollfd, 2, MY_ACCEPT_POLL_TIMEOUT_MS);
+		if (result < 0) {
+			if (errno == EINTR)
+				continue;
+			return result;
+		}
+
+		if (result == 0) {
+			/* a timeout */
+
+			/*
+			 * If someone deletes or force-creates a new unix
+			 * domain socket at our path, all future clients
+			 * will be routed elsewhere and we silently starve.
+			 * If that happens, just queue a shutdown.
+			 */
+			if (unix_ss_was_stolen(
+				    accept_thread_data->server_socket)) {
+				trace2_data_string("ipc-accept", NULL,
+						   "queue_stop_async",
+						   "socket_stolen");
+				ipc_server_stop_async(
+					accept_thread_data->server_data);
+			}
+			continue;
+		}
+
+		if (pollfd[0].revents & POLLIN) {
+			/* shutdown message queued to socketpair */
+			return -1;
+		}
+
+		if (pollfd[1].revents & POLLIN) {
+			/* a connection is available on server_socket */
+
+			int client_fd =
+				accept(accept_thread_data->server_socket->fd_socket,
+				       NULL, NULL);
+			if (client_fd >= 0)
+				return client_fd;
+
+			/*
+			 * An error here is unlikely -- it probably
+			 * indicates that the connecting process has
+			 * already dropped the connection.
+			 */
+			continue;
+		}
+
+		BUG("unhandled poll result errno=%d r[0]=%d r[1]=%d",
+		    errno, pollfd[0].revents, pollfd[1].revents);
+	}
+}
+
+/*
+ * Thread proc for the IPC server "accept thread".  This waits for
+ * an incoming socket connection, appends it to the queue of available
+ * connections, and notifies a worker thread to process it.
+ *
+ * Block SIGPIPE in this thread for the life of the thread.  This
+ * avoids any stray SIGPIPE signals when closing pipe fds under
+ * extremely heavy loads (such as when the fifo queue is full and we
+ * drop incoming connections).
+ */
+static void *accept_thread_proc(void *_accept_thread_data)
+{
+	struct ipc_accept_thread_data *accept_thread_data = _accept_thread_data;
+	struct ipc_server_data *server_data = accept_thread_data->server_data;
+	sigset_t old_set;
+
+	trace2_thread_start("ipc-accept");
+
+	thread_block_sigpipe(&old_set);
+
+	for (;;) {
+		int client_fd = accept_thread__wait_for_connection(
+			accept_thread_data);
+
+		pthread_mutex_lock(&server_data->work_available_mutex);
+		if (server_data->shutdown_requested) {
+			pthread_mutex_unlock(&server_data->work_available_mutex);
+			if (client_fd >= 0)
+				close(client_fd);
+			break;
+		}
+
+		if (client_fd < 0) {
+			/* ignore transient accept() errors */
+		}
+		else {
+			fifo_enqueue(server_data, client_fd);
+			pthread_cond_broadcast(&server_data->work_available_cond);
+		}
+		pthread_mutex_unlock(&server_data->work_available_mutex);
+	}
+
+	trace2_thread_exit();
+	return NULL;
+}
+
+/*
+ * We can't predict the connection arrival rate relative to the worker
+ * processing rate, so we allow the "accept-thread" to queue up a
+ * generous number of connections, since we'd rather have the client
+ * not time out unnecessarily if we can avoid it.  (The assumption is
+ * that this will be used for FSMonitor and a few seconds of waiting on
+ * a connection is better than having the client time out and do the
+ * full computation itself.)
+ *
+ * The FIFO queue size is set to a multiple of the worker pool size.
+ * This value was chosen arbitrarily.
+ */
+#define FIFO_SCALE (100)
+
+/*
+ * The backlog value for `listen(2)`.  This doesn't need to be huge;
+ * it just needs to be large enough for our "accept-thread" to wake up
+ * and queue incoming connections onto the FIFO without the kernel
+ * dropping any.
+ *
+ * This value was chosen arbitrarily.
+ */
+#define LISTEN_BACKLOG (50)
+
+static int create_listener_socket(
+	const char *path,
+	const struct ipc_server_opts *ipc_opts,
+	struct unix_ss_socket **new_server_socket)
+{
+	struct unix_ss_socket *server_socket = NULL;
+	struct unix_stream_listen_opts uslg_opts = UNIX_STREAM_LISTEN_OPTS_INIT;
+	int ret;
+
+	uslg_opts.listen_backlog_size = LISTEN_BACKLOG;
+	uslg_opts.disallow_chdir = ipc_opts->uds_disallow_chdir;
+
+	ret = unix_ss_create(path, &uslg_opts, -1, &server_socket);
+	if (ret)
+		return ret;
+
+	if (set_socket_blocking_flag(server_socket->fd_socket, 1)) {
+		int saved_errno = errno;
+		unix_ss_free(server_socket);
+		errno = saved_errno;
+		return -1;
+	}
+
+	*new_server_socket = server_socket;
+
+	trace2_data_string("ipc-server", NULL, "listen-with-lock", path);
+	return 0;
+}
+
+static int setup_listener_socket(
+	const char *path,
+	const struct ipc_server_opts *ipc_opts,
+	struct unix_ss_socket **new_server_socket)
+{
+	int ret, saved_errno;
+
+	trace2_region_enter("ipc-server", "create-listener_socket", NULL);
+
+	ret = create_listener_socket(path, ipc_opts, new_server_socket);
+
+	saved_errno = errno;
+	trace2_region_leave("ipc-server", "create-listener_socket", NULL);
+	errno = saved_errno;
+
+	return ret;
+}
+
+/*
+ * Start IPC server in a pool of background threads.
+ */
+int ipc_server_run_async(struct ipc_server_data **returned_server_data,
+			 const char *path, const struct ipc_server_opts *opts,
+			 ipc_server_application_cb *application_cb,
+			 void *application_data)
+{
+	struct unix_ss_socket *server_socket = NULL;
+	struct ipc_server_data *server_data;
+	int sv[2];
+	int k;
+	int ret;
+	int nr_threads = opts->nr_threads;
+
+	*returned_server_data = NULL;
+
+	/*
+	 * Create a socketpair and set sv[1] to non-blocking.  This
+	 * will be used to send a shutdown message to the accept-thread
+	 * and allows the accept-thread to wait on EITHER a client
+	 * connection or a shutdown request without spinning.
+	 */
+	if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0)
+		return -1;
+
+	if (set_socket_blocking_flag(sv[1], 1)) {
+		int saved_errno = errno;
+		close(sv[0]);
+		close(sv[1]);
+		errno = saved_errno;
+		return -1;
+	}
+
+	ret = setup_listener_socket(path, opts, &server_socket);
+	if (ret) {
+		int saved_errno = errno;
+		close(sv[0]);
+		close(sv[1]);
+		errno = saved_errno;
+		return ret;
+	}
+
+	server_data = xcalloc(1, sizeof(*server_data));
+	server_data->magic = MAGIC_SERVER_DATA;
+	server_data->application_cb = application_cb;
+	server_data->application_data = application_data;
+	strbuf_init(&server_data->buf_path, 0);
+	strbuf_addstr(&server_data->buf_path, path);
+
+	if (nr_threads < 1)
+		nr_threads = 1;
+
+	pthread_mutex_init(&server_data->work_available_mutex, NULL);
+	pthread_cond_init(&server_data->work_available_cond, NULL);
+
+	server_data->queue_size = nr_threads * FIFO_SCALE;
+	server_data->fifo_fds = xcalloc(server_data->queue_size,
+					sizeof(*server_data->fifo_fds));
+
+	server_data->accept_thread =
+		xcalloc(1, sizeof(*server_data->accept_thread));
+	server_data->accept_thread->magic = MAGIC_ACCEPT_THREAD_DATA;
+	server_data->accept_thread->server_data = server_data;
+	server_data->accept_thread->server_socket = server_socket;
+	server_data->accept_thread->fd_send_shutdown = sv[0];
+	server_data->accept_thread->fd_wait_shutdown = sv[1];
+
+	if (pthread_create(&server_data->accept_thread->pthread_id, NULL,
+			   accept_thread_proc, server_data->accept_thread))
+		die_errno(_("could not start accept_thread '%s'"), path);
+
+	for (k = 0; k < nr_threads; k++) {
+		struct ipc_worker_thread_data *wtd;
+
+		wtd = xcalloc(1, sizeof(*wtd));
+		wtd->magic = MAGIC_WORKER_THREAD_DATA;
+		wtd->server_data = server_data;
+
+		if (pthread_create(&wtd->pthread_id, NULL, worker_thread_proc,
+				   wtd)) {
+			if (k == 0)
+				die(_("could not start worker[0] for '%s'"),
+				    path);
+			/*
+			 * Limp along with the thread pool that we have.
+			 */
+			break;
+		}
+
+		wtd->next_thread = server_data->worker_thread_list;
+		server_data->worker_thread_list = wtd;
+	}
+
+	*returned_server_data = server_data;
+	return 0;
+}
+
+/*
+ * Gently tell the IPC server threads to shut down.
+ * Can be run on any thread.
+ */
+int ipc_server_stop_async(struct ipc_server_data *server_data)
+{
+	/* ASSERT NOT holding mutex */
+
+	int fd;
+
+	if (!server_data)
+		return 0;
+
+	trace2_region_enter("ipc-server", "server-stop-async", NULL);
+
+	pthread_mutex_lock(&server_data->work_available_mutex);
+
+	server_data->shutdown_requested = 1;
+
+	/*
+	 * Write a byte to the shutdown socket pair to wake up the
+	 * accept-thread.
+	 */
+	if (write(server_data->accept_thread->fd_send_shutdown, "Q", 1) < 0)
+		error_errno("could not write to fd_send_shutdown");
+
+	/*
+	 * Drain the queue of existing connections.
+	 */
+	while ((fd = fifo_dequeue(server_data)) != -1)
+		close(fd);
+
+	/*
+	 * Gently tell worker threads to stop processing new connections
+	 * and exit.  (This does not abort in-progress conversations.)
+	 */
+	pthread_cond_broadcast(&server_data->work_available_cond);
+
+	pthread_mutex_unlock(&server_data->work_available_mutex);
+
+	trace2_region_leave("ipc-server", "server-stop-async", NULL);
+
+	return 0;
+}
+
+/*
+ * Wait for all IPC server threads to stop.
+ */
+int ipc_server_await(struct ipc_server_data *server_data)
+{
+	pthread_join(server_data->accept_thread->pthread_id, NULL);
+
+	if (!server_data->shutdown_requested)
+		BUG("ipc-server: accept-thread stopped for '%s'",
+		    server_data->buf_path.buf);
+
+	while (server_data->worker_thread_list) {
+		struct ipc_worker_thread_data *wtd =
+			server_data->worker_thread_list;
+
+		pthread_join(wtd->pthread_id, NULL);
+
+		server_data->worker_thread_list = wtd->next_thread;
+		free(wtd);
+	}
+
+	server_data->is_stopped = 1;
+
+	return 0;
+}
+
+void ipc_server_free(struct ipc_server_data *server_data)
+{
+	struct ipc_accept_thread_data *accept_thread_data;
+
+	if (!server_data)
+		return;
+
+	if (!server_data->is_stopped)
+		BUG("cannot free ipc-server while running for '%s'",
+		    server_data->buf_path.buf);
+
+	accept_thread_data = server_data->accept_thread;
+	if (accept_thread_data) {
+		unix_ss_free(accept_thread_data->server_socket);
+
+		if (accept_thread_data->fd_send_shutdown != -1)
+			close(accept_thread_data->fd_send_shutdown);
+		if (accept_thread_data->fd_wait_shutdown != -1)
+			close(accept_thread_data->fd_wait_shutdown);
+
+		free(server_data->accept_thread);
+	}
+
+	while (server_data->worker_thread_list) {
+		struct ipc_worker_thread_data *wtd =
+			server_data->worker_thread_list;
+
+		server_data->worker_thread_list = wtd->next_thread;
+		free(wtd);
+	}
+
+	pthread_cond_destroy(&server_data->work_available_cond);
+	pthread_mutex_destroy(&server_data->work_available_mutex);
+
+	strbuf_release(&server_data->buf_path);
+
+	free(server_data->fifo_fds);
+	free(server_data);
+}
diff --git a/contrib/buildsystems/CMakeLists.txt b/contrib/buildsystems/CMakeLists.txt
index c94011269ebb..9897fcc8ea2a 100644
--- a/contrib/buildsystems/CMakeLists.txt
+++ b/contrib/buildsystems/CMakeLists.txt
@@ -248,6 +248,8 @@ endif()
 
 if(CMAKE_SYSTEM_NAME STREQUAL "Windows")
 	list(APPEND compat_SOURCES compat/simple-ipc/ipc-shared.c compat/simple-ipc/ipc-win32.c)
+else()
+	list(APPEND compat_SOURCES compat/simple-ipc/ipc-shared.c compat/simple-ipc/ipc-unix-socket.c)
 endif()
 
 set(EXE_EXTENSION ${CMAKE_EXECUTABLE_SUFFIX})
diff --git a/simple-ipc.h b/simple-ipc.h
index ab5619e3d76f..dc3606e30bd6 100644
--- a/simple-ipc.h
+++ b/simple-ipc.h
@@ -5,7 +5,7 @@
  * See Documentation/technical/api-simple-ipc.txt
  */
 
-#if defined(GIT_WINDOWS_NATIVE)
+#if defined(GIT_WINDOWS_NATIVE) || !defined(NO_UNIX_SOCKETS)
 #define SUPPORTS_SIMPLE_IPC
 #endif
 
@@ -62,11 +62,17 @@ struct ipc_client_connect_options {
 	 * the service and need to wait for it to become ready.
 	 */
 	unsigned int wait_if_not_found:1;
+
+	/*
+	 * Disallow chdir() when creating a Unix domain socket.
+	 */
+	unsigned int uds_disallow_chdir:1;
 };
 
 #define IPC_CLIENT_CONNECT_OPTIONS_INIT { \
 	.wait_if_busy = 0, \
 	.wait_if_not_found = 0, \
+	.uds_disallow_chdir = 0, \
 }
 
 /*
@@ -159,6 +165,11 @@ struct ipc_server_data;
 struct ipc_server_opts
 {
 	int nr_threads;
+
+	/*
+	 * Disallow chdir() when creating a Unix domain socket.
+	 */
+	unsigned int uds_disallow_chdir:1;
 };
 
 /*
-- 
gitgitgadget
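
For context, a daemon built on the server half of this API would drive it
roughly as follows.  This is a minimal sketch rather than code from the
series; the callback, socket path, and thread count are made up, and error
handling is trimmed:

	/* hypothetical application callback; "quit" asks the server to stop */
	static int my_app_cb(void *data, const char *command,
			     ipc_server_reply_cb *reply_cb,
			     struct ipc_server_reply_data *reply_data)
	{
		if (!strcmp(command, "quit"))
			return SIMPLE_IPC_QUIT;

		/* echo a tiny reply back to the client */
		return reply_cb(reply_data, "ok", 2);
	}

	static int run_my_daemon(void)
	{
		struct ipc_server_data *server = NULL;
		struct ipc_server_opts opts = { .nr_threads = 4 };

		if (ipc_server_run_async(&server, "my-daemon-socket", &opts,
					 my_app_cb, NULL))
			return -1;

		/* ... foreground work while worker threads serve clients ... */

		ipc_server_stop_async(server);	/* request a shutdown */
		ipc_server_await(server);	/* join accept + worker threads */
		ipc_server_free(server);
		return 0;
	}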


^ permalink raw reply related	[flat|nested] 178+ messages in thread

* [PATCH v6 12/12] t0052: add simple-ipc tests and t/helper/test-simple-ipc tool
  2021-03-15 21:08         ` [PATCH v6 " Jeff Hostetler via GitGitGadget
                             ` (10 preceding siblings ...)
  2021-03-15 21:08           ` [PATCH v6 11/12] simple-ipc: add Unix domain socket implementation Jeff Hostetler via GitGitGadget
@ 2021-03-15 21:08           ` Jeff Hostetler via GitGitGadget
  2021-03-22 10:29           ` [PATCH v7 00/12] Simple IPC Mechanism Jeff Hostetler via GitGitGadget
  12 siblings, 0 replies; 178+ messages in thread
From: Jeff Hostetler via GitGitGadget @ 2021-03-15 21:08 UTC (permalink / raw)
  To: git
  Cc: Ævar Arnfjörð Bjarmason, Jeff Hostetler,
	Jeff King, SZEDER Gábor, Johannes Schindelin, Chris Torek,
	Jeff Hostetler, Jeff Hostetler

From: Jeff Hostetler <jeffhost@microsoft.com>

Create t0052-simple-ipc.sh with unit tests for the "simple-ipc" mechanism.

Create t/helper/test-simple-ipc test tool to exercise the "simple-ipc"
functions.

When the tool is invoked with "run-daemon", it runs a server to listen
for "simple-ipc" connections on a test socket or named pipe and
responds to a set of commands to exercise/stress the communication
setup.

When the tool is invoked with "start-daemon", it spawns a "run-daemon"
command in the background and waits for the server to become ready
before exiting.  (This helps make unit tests in t0052 more predictable
and avoids the need for arbitrary sleeps in the test script.)
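
Under the hood, that readiness wait boils down to polling the IPC state of
the socket/pipe path until it reports a listener.  A simplified sketch
(here `path` is just the socket or pipe name; the real helper additionally
watches the child pid and enforces the --max-wait timeout):

	enum ipc_active_state s;

	for (;;) {
		s = ipc_get_active_state(path);
		if (s == IPC_STATE__LISTENING)
			break;			/* daemon is ready */

		sleep_millisec(100);		/* not ready yet; keep polling */
	}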

The tool also has a series of client "send" commands to send commands
and data to a server instance.
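
At the C level, each such "send" is a single call into the simple-ipc
client API.  A minimal sketch (assuming a server is already listening on
the tool's default "ipc-test" path):

	struct strbuf answer = STRBUF_INIT;
	struct ipc_client_connect_options options =
		IPC_CLIENT_CONNECT_OPTIONS_INIT;

	options.wait_if_busy = 1;

	/* connect, send one command token, and collect the whole reply */
	if (!ipc_client_send_command("ipc-test", &options, "ping", &answer))
		printf("%s\n", answer.buf);

	strbuf_release(&answer);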

Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
---
 Makefile                   |   1 +
 t/helper/test-simple-ipc.c | 787 +++++++++++++++++++++++++++++++++++++
 t/helper/test-tool.c       |   1 +
 t/helper/test-tool.h       |   1 +
 t/t0052-simple-ipc.sh      | 122 ++++++
 5 files changed, 912 insertions(+)
 create mode 100644 t/helper/test-simple-ipc.c
 create mode 100755 t/t0052-simple-ipc.sh

diff --git a/Makefile b/Makefile
index 20dd65d19658..e556388d28d0 100644
--- a/Makefile
+++ b/Makefile
@@ -734,6 +734,7 @@ TEST_BUILTINS_OBJS += test-serve-v2.o
 TEST_BUILTINS_OBJS += test-sha1.o
 TEST_BUILTINS_OBJS += test-sha256.o
 TEST_BUILTINS_OBJS += test-sigchain.o
+TEST_BUILTINS_OBJS += test-simple-ipc.o
 TEST_BUILTINS_OBJS += test-strcmp-offset.o
 TEST_BUILTINS_OBJS += test-string-list.o
 TEST_BUILTINS_OBJS += test-submodule-config.o
diff --git a/t/helper/test-simple-ipc.c b/t/helper/test-simple-ipc.c
new file mode 100644
index 000000000000..42040ef81b1e
--- /dev/null
+++ b/t/helper/test-simple-ipc.c
@@ -0,0 +1,787 @@
+/*
+ * test-simple-ipc.c: verify that the Inter-Process Communication works.
+ */
+
+#include "test-tool.h"
+#include "cache.h"
+#include "strbuf.h"
+#include "simple-ipc.h"
+#include "parse-options.h"
+#include "thread-utils.h"
+#include "strvec.h"
+
+#ifndef SUPPORTS_SIMPLE_IPC
+int cmd__simple_ipc(int argc, const char **argv)
+{
+	die("simple IPC not available on this platform");
+}
+#else
+
+/*
+ * The test daemon defines an "application callback" that supports a
+ * series of commands (see `test_app_cb()`).
+ *
+ * Unknown commands are caught here and we send an error message back
+ * to the client process.
+ */
+static int app__unhandled_command(const char *command,
+				  ipc_server_reply_cb *reply_cb,
+				  struct ipc_server_reply_data *reply_data)
+{
+	struct strbuf buf = STRBUF_INIT;
+	int ret;
+
+	strbuf_addf(&buf, "unhandled command: %s", command);
+	ret = reply_cb(reply_data, buf.buf, buf.len);
+	strbuf_release(&buf);
+
+	return ret;
+}
+
+/*
+ * Reply with a single very large buffer.  This is to ensure that
+ * long responses are properly handled -- whether the chunking occurs
+ * in the kernel or in the (probably pkt-line) layer.
+ */
+#define BIG_ROWS (10000)
+static int app__big_command(ipc_server_reply_cb *reply_cb,
+			    struct ipc_server_reply_data *reply_data)
+{
+	struct strbuf buf = STRBUF_INIT;
+	int row;
+	int ret;
+
+	for (row = 0; row < BIG_ROWS; row++)
+		strbuf_addf(&buf, "big: %.75d\n", row);
+
+	ret = reply_cb(reply_data, buf.buf, buf.len);
+	strbuf_release(&buf);
+
+	return ret;
+}
+
+/*
+ * Reply with a series of lines.  This is to ensure that we can incrementally
+ * compute the response and chunk it to the client.
+ */
+#define CHUNK_ROWS (10000)
+static int app__chunk_command(ipc_server_reply_cb *reply_cb,
+			      struct ipc_server_reply_data *reply_data)
+{
+	struct strbuf buf = STRBUF_INIT;
+	int row;
+	int ret;
+
+	for (row = 0; row < CHUNK_ROWS; row++) {
+		strbuf_setlen(&buf, 0);
+		strbuf_addf(&buf, "big: %.75d\n", row);
+		ret = reply_cb(reply_data, buf.buf, buf.len);
+	}
+
+	strbuf_release(&buf);
+
+	return ret;
+}
+
+/*
+ * Slowly reply with a series of lines.  This models a chunked response
+ * that is expensive to compute (which might happen if this callback is
+ * running in a thread and is fighting for a lock with other threads).
+ */
+#define SLOW_ROWS     (1000)
+#define SLOW_DELAY_MS (10)
+static int app__slow_command(ipc_server_reply_cb *reply_cb,
+			     struct ipc_server_reply_data *reply_data)
+{
+	struct strbuf buf = STRBUF_INIT;
+	int row;
+	int ret;
+
+	for (row = 0; row < SLOW_ROWS; row++) {
+		strbuf_setlen(&buf, 0);
+		strbuf_addf(&buf, "big: %.75d\n", row);
+		ret = reply_cb(reply_data, buf.buf, buf.len);
+		sleep_millisec(SLOW_DELAY_MS);
+	}
+
+	strbuf_release(&buf);
+
+	return ret;
+}
+
+/*
+ * The client sent a command followed by a (possibly very) large buffer.
+ */
+static int app__sendbytes_command(const char *received,
+				  ipc_server_reply_cb *reply_cb,
+				  struct ipc_server_reply_data *reply_data)
+{
+	struct strbuf buf_resp = STRBUF_INIT;
+	const char *p = "?";
+	int len_ballast = 0;
+	int k;
+	int errs = 0;
+	int ret;
+
+	if (skip_prefix(received, "sendbytes ", &p))
+		len_ballast = strlen(p);
+
+	/*
+	 * Verify that the ballast is n copies of a single letter.
+	 * And that the multi-threaded IO layer didn't cross the streams.
+	 */
+	for (k = 1; k < len_ballast; k++)
+		if (p[k] != p[0])
+			errs++;
+
+	if (errs)
+		strbuf_addf(&buf_resp, "errs:%d\n", errs);
+	else
+		strbuf_addf(&buf_resp, "rcvd:%c%08d\n", p[0], len_ballast);
+
+	ret = reply_cb(reply_data, buf_resp.buf, buf_resp.len);
+
+	strbuf_release(&buf_resp);
+
+	return ret;
+}
+
+/*
+ * An arbitrary fixed address to verify that the application instance
+ * data is handled properly.
+ */
+static int my_app_data = 42;
+
+static ipc_server_application_cb test_app_cb;
+
+/*
+ * This is the "application callback" that sits on top of the
+ * "ipc-server".  It completely defines the set of commands supported
+ * by this application.
+ */
+static int test_app_cb(void *application_data,
+		       const char *command,
+		       ipc_server_reply_cb *reply_cb,
+		       struct ipc_server_reply_data *reply_data)
+{
+	/*
+	 * Verify that we received the application-data that we passed
+	 * when we started the ipc-server.  (We have several layers of
+	 * callbacks calling callbacks and it's easy to get things mixed
+	 * up (especially when some are "void*").)
+	 */
+	if (application_data != (void*)&my_app_data)
+		BUG("application_cb: application_data pointer wrong");
+
+	if (!strcmp(command, "quit")) {
+		/*
+		 * The client sent a "quit" command.  This is an async
+		 * request for the server to shutdown.
+		 *
+		 * We DO NOT send the client a response message
+		 * (because we have nothing to say and the other
+		 * server threads have not yet stopped).
+		 *
+		 * Tell the ipc-server layer to start shutting down.
+		 * This includes: stop listening for new connections
+		 * on the socket/pipe and telling all worker threads
+		 * to finish/drain their outgoing responses to other
+		 * clients.
+		 *
+		 * This DOES NOT force an immediate sync shutdown.
+		 */
+		return SIMPLE_IPC_QUIT;
+	}
+
+	if (!strcmp(command, "ping")) {
+		const char *answer = "pong";
+		return reply_cb(reply_data, answer, strlen(answer));
+	}
+
+	if (!strcmp(command, "big"))
+		return app__big_command(reply_cb, reply_data);
+
+	if (!strcmp(command, "chunk"))
+		return app__chunk_command(reply_cb, reply_data);
+
+	if (!strcmp(command, "slow"))
+		return app__slow_command(reply_cb, reply_data);
+
+	if (starts_with(command, "sendbytes "))
+		return app__sendbytes_command(command, reply_cb, reply_data);
+
+	return app__unhandled_command(command, reply_cb, reply_data);
+}
+
+struct cl_args
+{
+	const char *subcommand;
+	const char *path;
+	const char *token;
+
+	int nr_threads;
+	int max_wait_sec;
+	int bytecount;
+	int batchsize;
+
+	char bytevalue;
+};
+
+static struct cl_args cl_args = {
+	.subcommand = NULL,
+	.path = "ipc-test",
+	.token = NULL,
+
+	.nr_threads = 5,
+	.max_wait_sec = 60,
+	.bytecount = 1024,
+	.batchsize = 10,
+
+	.bytevalue = 'x',
+};
+
+/*
+ * This process will run as a simple-ipc server and listen for IPC commands
+ * from client processes.
+ */
+static int daemon__run_server(void)
+{
+	int ret;
+
+	struct ipc_server_opts opts = {
+		.nr_threads = cl_args.nr_threads,
+	};
+
+	/*
+	 * Synchronously run the ipc-server.  We don't need any application
+	 * instance data, so pass an arbitrary pointer (that we'll later
+	 * verify made the round trip).
+	 */
+	ret = ipc_server_run(cl_args.path, &opts, test_app_cb, (void*)&my_app_data);
+	if (ret == -2)
+		error(_("socket/pipe already in use: '%s'"), cl_args.path);
+	else if (ret == -1)
+		error_errno(_("could not start server on: '%s'"), cl_args.path);
+
+	return ret;
+}
+
+#ifndef GIT_WINDOWS_NATIVE
+/*
+ * This is adapted from `daemonize()`.  Use `fork()` to directly create and
+ * run the daemon in a child process.
+ */
+static int spawn_server(pid_t *pid)
+{
+	struct ipc_server_opts opts = {
+		.nr_threads = cl_args.nr_threads,
+	};
+
+	*pid = fork();
+
+	switch (*pid) {
+	case 0:
+		if (setsid() == -1)
+			error_errno(_("setsid failed"));
+		close(0);
+		close(1);
+		close(2);
+		sanitize_stdfds();
+
+		return ipc_server_run(cl_args.path, &opts, test_app_cb,
+				      (void*)&my_app_data);
+
+	case -1:
+		return error_errno(_("could not spawn daemon in the background"));
+
+	default:
+		return 0;
+	}
+}
+#else
+/*
+ * Conceptually like `daemonize()` but different because Windows does not
+ * have `fork(2)`.  Spawn a normal Windows child process but without the
+ * limitations of `start_command()` and `finish_command()`.
+ */
+static int spawn_server(pid_t *pid)
+{
+	char test_tool_exe[MAX_PATH];
+	struct strvec args = STRVEC_INIT;
+	int in, out;
+
+	GetModuleFileNameA(NULL, test_tool_exe, MAX_PATH);
+
+	in = open("/dev/null", O_RDONLY);
+	out = open("/dev/null", O_WRONLY);
+
+	strvec_push(&args, test_tool_exe);
+	strvec_push(&args, "simple-ipc");
+	strvec_push(&args, "run-daemon");
+	strvec_pushf(&args, "--name=%s", cl_args.path);
+	strvec_pushf(&args, "--threads=%d", cl_args.nr_threads);
+
+	*pid = mingw_spawnvpe(args.v[0], args.v, NULL, NULL, in, out, out);
+	close(in);
+	close(out);
+
+	strvec_clear(&args);
+
+	if (*pid < 0)
+		return error(_("could not spawn daemon in the background"));
+
+	return 0;
+}
+#endif
+
+/*
+ * This is adapted from `wait_or_whine()`.  Watch the child process and
+ * let it get started and begin listening for requests on the socket
+ * before reporting our success.
+ */
+static int wait_for_server_startup(pid_t pid_child)
+{
+	int status;
+	pid_t pid_seen;
+	enum ipc_active_state s;
+	time_t time_limit, now;
+
+	time(&time_limit);
+	time_limit += cl_args.max_wait_sec;
+
+	for (;;) {
+		pid_seen = waitpid(pid_child, &status, WNOHANG);
+
+		if (pid_seen == -1)
+			return error_errno(_("waitpid failed"));
+
+		else if (pid_seen == 0) {
+			/*
+			 * The child is still running (this should be
+			 * the normal case).  Try to connect to it on
+			 * the socket and see if it is ready for
+			 * business.
+			 *
+			 * If there is another daemon already running,
+			 * our child will fail to start (possibly
+			 * after a timeout on the lock), but we don't
+			 * care who responds as long as the socket is live.
+			 */
+			s = ipc_get_active_state(cl_args.path);
+			if (s == IPC_STATE__LISTENING)
+				return 0;
+
+			time(&now);
+			if (now > time_limit)
+				return error(_("daemon not online yet"));
+
+			continue;
+		}
+
+		else if (pid_seen == pid_child) {
+			/*
+			 * The new child daemon process shut down while
+			 * it was starting up, so it is not listening
+			 * on the socket.
+			 *
+			 * Try to ping the socket in the odd chance
+			 * that another daemon started (or was already
+			 * running) while our child was starting.
+			 *
+			 * Again, we don't care who services the socket.
+			 */
+			s = ipc_get_active_state(cl_args.path);
+			if (s == IPC_STATE__LISTENING)
+				return 0;
+
+			/*
+			 * We don't care about the WEXITSTATUS() nor
+			 * any of the WIF*(status) values because
+			 * `cmd__simple_ipc()` does the `!!result`
+			 * trick on all function return values.
+			 *
+			 * So it is sufficient to just report the
+			 * early shutdown as an error.
+			 */
+			return error(_("daemon failed to start"));
+		}
+
+		else
+			return error(_("waitpid is confused"));
+	}
+}
+
+/*
+ * This process will start a simple-ipc server in a background process and
+ * wait for it to become ready.  This is like `daemonize()` but gives us
+ * more control and better error reporting (and makes it easier to write
+ * unit tests).
+ */
+static int daemon__start_server(void)
+{
+	pid_t pid_child;
+	int ret;
+
+	/*
+	 * Run the actual daemon in a background process.
+	 */
+	ret = spawn_server(&pid_child);
+	if (pid_child <= 0)
+		return ret;
+
+	/*
+	 * Let the parent wait for the child process to get started
+	 * and begin listening for requests on the socket.
+	 */
+	ret = wait_for_server_startup(pid_child);
+
+	return ret;
+}
+
+/*
+ * This process will run a quick probe to see if a simple-ipc server
+ * is active on this path.
+ *
+ * Returns 0 if the server is alive.
+ */
+static int client__probe_server(void)
+{
+	enum ipc_active_state s;
+
+	s = ipc_get_active_state(cl_args.path);
+	switch (s) {
+	case IPC_STATE__LISTENING:
+		return 0;
+
+	case IPC_STATE__NOT_LISTENING:
+		return error("no server listening at '%s'", cl_args.path);
+
+	case IPC_STATE__PATH_NOT_FOUND:
+		return error("path not found '%s'", cl_args.path);
+
+	case IPC_STATE__INVALID_PATH:
+		return error("invalid pipe/socket name '%s'", cl_args.path);
+
+	case IPC_STATE__OTHER_ERROR:
+	default:
+		return error("other error for '%s'", cl_args.path);
+	}
+}
+
+/*
+ * Send an IPC command token to an already-running server daemon and
+ * print the response.
+ *
+ * This is a simple 1 word command/token that `test_app_cb()` (in the
+ * daemon process) will understand.
+ */
+static int client__send_ipc(void)
+{
+	const char *command = "(no-command)";
+	struct strbuf buf = STRBUF_INIT;
+	struct ipc_client_connect_options options
+		= IPC_CLIENT_CONNECT_OPTIONS_INIT;
+
+	if (cl_args.token && *cl_args.token)
+		command = cl_args.token;
+
+	options.wait_if_busy = 1;
+	options.wait_if_not_found = 0;
+
+	if (!ipc_client_send_command(cl_args.path, &options, command, &buf)) {
+		if (buf.len) {
+			printf("%s\n", buf.buf);
+			fflush(stdout);
+		}
+		strbuf_release(&buf);
+
+		return 0;
+	}
+
+	return error("failed to send '%s' to '%s'", command, cl_args.path);
+}
+
+/*
+ * Send an IPC command to an already-running server and ask it to
+ * shut down.  "send quit" is an async request that queues a shutdown
+ * event in the server, so we spin and wait here for it to actually
+ * shut down, to make the unit tests a little easier to write.
+ */
+static int client__stop_server(void)
+{
+	int ret;
+	time_t time_limit, now;
+	enum ipc_active_state s;
+
+	time(&time_limit);
+	time_limit += cl_args.max_wait_sec;
+
+	cl_args.token = "quit";
+
+	ret = client__send_ipc();
+	if (ret)
+		return ret;
+
+	for (;;) {
+		sleep_millisec(100);
+
+		s = ipc_get_active_state(cl_args.path);
+
+		if (s != IPC_STATE__LISTENING) {
+			/*
+			 * The socket/pipe is gone and/or has stopped
+			 * responding.  Let's assume that the daemon
+			 * process has exited too.
+			 */
+			return 0;
+		}
+
+		time(&now);
+		if (now > time_limit)
+			return error(_("daemon has not shutdown yet"));
+	}
+}
+
+/*
+ * Send an IPC command followed by ballast to confirm that a large
+ * message can be sent and that the kernel or pkt-line layers will
+ * properly chunk it and that the daemon receives the entire message.
+ */
+static int do_sendbytes(int bytecount, char byte, const char *path,
+			const struct ipc_client_connect_options *options)
+{
+	struct strbuf buf_send = STRBUF_INIT;
+	struct strbuf buf_resp = STRBUF_INIT;
+
+	strbuf_addstr(&buf_send, "sendbytes ");
+	strbuf_addchars(&buf_send, byte, bytecount);
+
+	if (!ipc_client_send_command(path, options, buf_send.buf, &buf_resp)) {
+		strbuf_rtrim(&buf_resp);
+		printf("sent:%c%08d %s\n", byte, bytecount, buf_resp.buf);
+		fflush(stdout);
+		strbuf_release(&buf_send);
+		strbuf_release(&buf_resp);
+
+		return 0;
+	}
+
+	return error("client failed to sendbytes(%d, '%c') to '%s'",
+		     bytecount, byte, path);
+}
+
+/*
+ * Send an IPC command with ballast to an already-running server daemon.
+ */
+static int client__sendbytes(void)
+{
+	struct ipc_client_connect_options options
+		= IPC_CLIENT_CONNECT_OPTIONS_INIT;
+
+	options.wait_if_busy = 1;
+	options.wait_if_not_found = 0;
+	options.uds_disallow_chdir = 0;
+
+	return do_sendbytes(cl_args.bytecount, cl_args.bytevalue, cl_args.path,
+			    &options);
+}
+
+struct multiple_thread_data {
+	pthread_t pthread_id;
+	struct multiple_thread_data *next;
+	const char *path;
+	int bytecount;
+	int batchsize;
+	int sum_errors;
+	int sum_good;
+	char letter;
+};
+
+static void *multiple_thread_proc(void *_multiple_thread_data)
+{
+	struct multiple_thread_data *d = _multiple_thread_data;
+	int k;
+	struct ipc_client_connect_options options
+		= IPC_CLIENT_CONNECT_OPTIONS_INIT;
+
+	options.wait_if_busy = 1;
+	options.wait_if_not_found = 0;
+	/*
+	 * A multi-threaded client should not be randomly calling chdir().
+	 * The test will pass without this restriction because the test is
+	 * not otherwise accessing the filesystem, but it makes us honest.
+	 */
+	options.uds_disallow_chdir = 1;
+
+	trace2_thread_start("multiple");
+
+	for (k = 0; k < d->batchsize; k++) {
+		if (do_sendbytes(d->bytecount + k, d->letter, d->path, &options))
+			d->sum_errors++;
+		else
+			d->sum_good++;
+	}
+
+	trace2_thread_exit();
+	return NULL;
+}
+
+/*
+ * Start a client-side thread pool.  Each thread sends a series of
+ * IPC requests.  Each request is on a new connection to the server.
+ */
+static int client__multiple(void)
+{
+	struct multiple_thread_data *list = NULL;
+	int k;
+	int sum_join_errors = 0;
+	int sum_thread_errors = 0;
+	int sum_good = 0;
+
+	for (k = 0; k < cl_args.nr_threads; k++) {
+		struct multiple_thread_data *d = xcalloc(1, sizeof(*d));
+		d->next = list;
+		d->path = cl_args.path;
+		d->bytecount = cl_args.bytecount + cl_args.batchsize*(k/26);
+		d->batchsize = cl_args.batchsize;
+		d->sum_errors = 0;
+		d->sum_good = 0;
+		d->letter = 'A' + (k % 26);
+
+		if (pthread_create(&d->pthread_id, NULL, multiple_thread_proc, d)) {
+			warning("failed to create thread[%d] skipping remainder", k);
+			free(d);
+			break;
+		}
+
+		list = d;
+	}
+
+	while (list) {
+		struct multiple_thread_data *d = list;
+
+		if (pthread_join(d->pthread_id, NULL))
+			sum_join_errors++;
+
+		sum_thread_errors += d->sum_errors;
+		sum_good += d->sum_good;
+
+		list = d->next;
+		free(d);
+	}
+
+	printf("client (good %d) (join %d), (errors %d)\n",
+	       sum_good, sum_join_errors, sum_thread_errors);
+
+	return (sum_join_errors + sum_thread_errors) ? 1 : 0;
+}
+
+int cmd__simple_ipc(int argc, const char **argv)
+{
+	const char * const simple_ipc_usage[] = {
+		N_("test-helper simple-ipc is-active    [<name>] [<options>]"),
+		N_("test-helper simple-ipc run-daemon   [<name>] [<threads>]"),
+		N_("test-helper simple-ipc start-daemon [<name>] [<threads>] [<max-wait>]"),
+		N_("test-helper simple-ipc stop-daemon  [<name>] [<max-wait>]"),
+		N_("test-helper simple-ipc send         [<name>] [<token>]"),
+		N_("test-helper simple-ipc sendbytes    [<name>] [<bytecount>] [<byte>]"),
+		N_("test-helper simple-ipc multiple     [<name>] [<threads>] [<bytecount>] [<batchsize>]"),
+		NULL
+	};
+
+	const char *bytevalue = NULL;
+
+	struct option options[] = {
+#ifndef GIT_WINDOWS_NATIVE
+		OPT_STRING(0, "name", &cl_args.path, N_("name"), N_("name or pathname of unix domain socket")),
+#else
+		OPT_STRING(0, "name", &cl_args.path, N_("name"), N_("named-pipe name")),
+#endif
+		OPT_INTEGER(0, "threads", &cl_args.nr_threads, N_("number of threads in server thread pool")),
+		OPT_INTEGER(0, "max-wait", &cl_args.max_wait_sec, N_("seconds to wait for daemon to start or stop")),
+
+		OPT_INTEGER(0, "bytecount", &cl_args.bytecount, N_("number of bytes")),
+		OPT_INTEGER(0, "batchsize", &cl_args.batchsize, N_("number of requests per thread")),
+
+		OPT_STRING(0, "byte", &bytevalue, N_("byte"), N_("ballast character")),
+		OPT_STRING(0, "token", &cl_args.token, N_("token"), N_("command token to send to the server")),
+
+		OPT_END()
+	};
+
+	if (argc < 2)
+		usage_with_options(simple_ipc_usage, options);
+
+	if (argc == 2 && !strcmp(argv[1], "-h"))
+		usage_with_options(simple_ipc_usage, options);
+
+	if (argc == 2 && !strcmp(argv[1], "SUPPORTS_SIMPLE_IPC"))
+		return 0;
+
+	cl_args.subcommand = argv[1];
+
+	argc--;
+	argv++;
+
+	argc = parse_options(argc, argv, NULL, options, simple_ipc_usage, 0);
+
+	if (cl_args.nr_threads < 1)
+		cl_args.nr_threads = 1;
+	if (cl_args.max_wait_sec < 0)
+		cl_args.max_wait_sec = 0;
+	if (cl_args.bytecount < 1)
+		cl_args.bytecount = 1;
+	if (cl_args.batchsize < 1)
+		cl_args.batchsize = 1;
+
+	if (bytevalue && *bytevalue)
+		cl_args.bytevalue = bytevalue[0];
+
+	/*
+	 * Use '!!' on all dispatch functions to map from the `error()`
+	 * style (returns -1) to the `test_must_fail` style (expects 1).  This
+	 * makes shell error messages less confusing.
+	 */
+
+	if (!strcmp(cl_args.subcommand, "is-active"))
+		return !!client__probe_server();
+
+	if (!strcmp(cl_args.subcommand, "run-daemon"))
+		return !!daemon__run_server();
+
+	if (!strcmp(cl_args.subcommand, "start-daemon"))
+		return !!daemon__start_server();
+
+	/*
+	 * Client commands follow.  Ensure a server is running before
+	 * sending any data.  This might be overkill, but then again
+	 * this is a test harness.
+	 */
+
+	if (!strcmp(cl_args.subcommand, "stop-daemon")) {
+		if (client__probe_server())
+			return 1;
+		return !!client__stop_server();
+	}
+
+	if (!strcmp(cl_args.subcommand, "send")) {
+		if (client__probe_server())
+			return 1;
+		return !!client__send_ipc();
+	}
+
+	if (!strcmp(cl_args.subcommand, "sendbytes")) {
+		if (client__probe_server())
+			return 1;
+		return !!client__sendbytes();
+	}
+
+	if (!strcmp(cl_args.subcommand, "multiple")) {
+		if (client__probe_server())
+			return 1;
+		return !!client__multiple();
+	}
+
+	die("Unhandled subcommand: '%s'", cl_args.subcommand);
+}
+#endif
diff --git a/t/helper/test-tool.c b/t/helper/test-tool.c
index f97cd9f48a69..287aa6002307 100644
--- a/t/helper/test-tool.c
+++ b/t/helper/test-tool.c
@@ -65,6 +65,7 @@ static struct test_cmd cmds[] = {
 	{ "sha1", cmd__sha1 },
 	{ "sha256", cmd__sha256 },
 	{ "sigchain", cmd__sigchain },
+	{ "simple-ipc", cmd__simple_ipc },
 	{ "strcmp-offset", cmd__strcmp_offset },
 	{ "string-list", cmd__string_list },
 	{ "submodule-config", cmd__submodule_config },
diff --git a/t/helper/test-tool.h b/t/helper/test-tool.h
index 28072c0ad5ab..9ea4b31011dd 100644
--- a/t/helper/test-tool.h
+++ b/t/helper/test-tool.h
@@ -55,6 +55,7 @@ int cmd__sha1(int argc, const char **argv);
 int cmd__oid_array(int argc, const char **argv);
 int cmd__sha256(int argc, const char **argv);
 int cmd__sigchain(int argc, const char **argv);
+int cmd__simple_ipc(int argc, const char **argv);
 int cmd__strcmp_offset(int argc, const char **argv);
 int cmd__string_list(int argc, const char **argv);
 int cmd__submodule_config(int argc, const char **argv);
diff --git a/t/t0052-simple-ipc.sh b/t/t0052-simple-ipc.sh
new file mode 100755
index 000000000000..ff98be31a51b
--- /dev/null
+++ b/t/t0052-simple-ipc.sh
@@ -0,0 +1,122 @@
+#!/bin/sh
+
+test_description='simple command server'
+
+. ./test-lib.sh
+
+test-tool simple-ipc SUPPORTS_SIMPLE_IPC || {
+	skip_all='simple IPC not supported on this platform'
+	test_done
+}
+
+stop_simple_IPC_server () {
+	test-tool simple-ipc stop-daemon
+}
+
+test_expect_success 'start simple command server' '
+	test_atexit stop_simple_IPC_server &&
+	test-tool simple-ipc start-daemon --threads=8 &&
+	test-tool simple-ipc is-active
+'
+
+test_expect_success 'simple command server' '
+	test-tool simple-ipc send --token=ping >actual &&
+	echo pong >expect &&
+	test_cmp expect actual
+'
+
+test_expect_success 'servers cannot share the same path' '
+	test_must_fail test-tool simple-ipc run-daemon &&
+	test-tool simple-ipc is-active
+'
+
+test_expect_success 'big response' '
+	test-tool simple-ipc send --token=big >actual &&
+	test_line_count -ge 10000 actual &&
+	grep -q "big: [0]*9999\$" actual
+'
+
+test_expect_success 'chunk response' '
+	test-tool simple-ipc send --token=chunk >actual &&
+	test_line_count -ge 10000 actual &&
+	grep -q "big: [0]*9999\$" actual
+'
+
+test_expect_success 'slow response' '
+	test-tool simple-ipc send --token=slow >actual &&
+	test_line_count -ge 100 actual &&
+	grep -q "big: [0]*99\$" actual
+'
+
+# Send an IPC with n=100,000 bytes of ballast.  This should be large enough
+# to force both the kernel and the pkt-line layer to chunk the message to the
+# daemon and for the daemon to receive it in chunks.
+#
+test_expect_success 'sendbytes' '
+	test-tool simple-ipc sendbytes --bytecount=100000 --byte=A >actual &&
+	grep "sent:A00100000 rcvd:A00100000" actual
+'
+
+# Start a series of <threads> client threads that each make <batchsize>
+# IPC requests to the server.  Each of the (<threads> * <batchsize>)
+# requests opens a new connection to the server and randomly binds to a
+# server thread.  Each client thread exits after completing its batch,
+# so the number of live client threads stays at or below <threads>.
+# Each request will send a message containing at least <bytecount> bytes
+# of ballast.  (Responses are small.)
+#
+# The purpose here is to test threading in the server and responding to
+# many concurrent client requests (regardless of whether they come from
+# 1 client process or many).  And to test that the server side of the
+# named pipe/socket is stable.  (On Windows this means that the server
+# pipe is properly recycled.)
+#
+# On Windows it also lets us adjust the connection timeout in the
+# `ipc_client_send_command()`.
+#
+# Note it is easy to drive the system into failure by requesting an
+# insane number of threads on client or server and/or increasing the
+# per-thread batchsize or the per-request bytecount (ballast).
+# On Windows these failures look like "pipe is busy" errors.
+# So I've chosen fairly conservative values for now.
+#
+# We expect output of the form "sent:<letter><length> ..."
+# With terms (7, 19, 13) we expect:
+#   <letter> in [A-G]
+#   <length> in [19+0 .. 19+(13-1)]
+# and (7 * 13) successful responses.
+#
+test_expect_success 'stress test threads' '
+	test-tool simple-ipc multiple \
+		--threads=7 \
+		--bytecount=19 \
+		--batchsize=13 \
+		>actual &&
+	test_line_count = 92 actual &&
+	grep "good 91" actual &&
+	grep "sent:A" <actual >actual_a &&
+	cat >expect_a <<-EOF &&
+		sent:A00000019 rcvd:A00000019
+		sent:A00000020 rcvd:A00000020
+		sent:A00000021 rcvd:A00000021
+		sent:A00000022 rcvd:A00000022
+		sent:A00000023 rcvd:A00000023
+		sent:A00000024 rcvd:A00000024
+		sent:A00000025 rcvd:A00000025
+		sent:A00000026 rcvd:A00000026
+		sent:A00000027 rcvd:A00000027
+		sent:A00000028 rcvd:A00000028
+		sent:A00000029 rcvd:A00000029
+		sent:A00000030 rcvd:A00000030
+		sent:A00000031 rcvd:A00000031
+	EOF
+	test_cmp expect_a actual_a
+'
+
+test_expect_success 'stop-daemon works' '
+	test-tool simple-ipc stop-daemon &&
+	test_must_fail test-tool simple-ipc is-active &&
+	test_must_fail test-tool simple-ipc send --token=ping
+'
+
+test_done
-- 
gitgitgadget

^ permalink raw reply related	[flat|nested] 178+ messages in thread

* [PATCH v7 00/12] Simple IPC Mechanism
  2021-03-15 21:08         ` [PATCH v6 " Jeff Hostetler via GitGitGadget
                             ` (11 preceding siblings ...)
  2021-03-15 21:08           ` [PATCH v6 12/12] t0052: add simple-ipc tests and t/helper/test-simple-ipc tool Jeff Hostetler via GitGitGadget
@ 2021-03-22 10:29           ` Jeff Hostetler via GitGitGadget
  2021-03-22 10:29             ` [PATCH v7 01/12] pkt-line: eliminate the need for static buffer in packet_write_gently() Jeff Hostetler via GitGitGadget
                               ` (11 more replies)
  12 siblings, 12 replies; 178+ messages in thread
From: Jeff Hostetler via GitGitGadget @ 2021-03-22 10:29 UTC (permalink / raw)
  To: git
  Cc: Ævar Arnfjörð Bjarmason, Jeff Hostetler,
	Jeff King, SZEDER Gábor, Johannes Schindelin, Chris Torek,
	Jeff Hostetler

Here is version V7 of my simple-ipc series. The only change from V6 is to
squash in the CALLOC_ARRAY() suggestion.

$ git range-diff v2.31.0-rc1..pr-766/jeffhostetler/simple-ipc-v6 v2.31.0-rc1..HEAD
 1:  fe35dc3d29 =  1:  fe35dc3d29 pkt-line: eliminate the need for static buffer in packet_write_gently()
 2:  de11b30361 =  2:  de11b30361 pkt-line: do not issue flush packets in write_packetized_*()
 3:  3718da39da =  3:  3718da39da pkt-line: add PACKET_READ_GENTLE_ON_READ_ERROR option
 4:  b43df7ad0b =  4:  b43df7ad0b pkt-line: add options argument to read_packetized_to_strbuf()
 5:  f829feb2aa =  5:  f829feb2aa simple-ipc: design documentation for new IPC mechanism
 6:  58c3fb7cd7 =  6:  58c3fb7cd7 simple-ipc: add win32 implementation
 7:  4e8c352fb3 =  7:  4e8c352fb3 unix-socket: eliminate static unix_stream_socket() helper function
 8:  3b71f52d86 =  8:  3b71f52d86 unix-socket: add backlog size option to unix_stream_listen()
 9:  5972a19836 =  9:  5972a19836 unix-socket: disallow chdir() when creating unix domain sockets
10:  02c885fd62 = 10:  02c885fd62 unix-stream-server: create unix domain socket under lock
11:  4c2199231d ! 11:  eee5f4796d simple-ipc: add Unix domain socket implementation
    @@ compat/simple-ipc/ipc-unix-socket.c (new)
     +	pthread_cond_init(&server_data->work_available_cond, NULL);
     +
     +	server_data->queue_size = nr_threads * FIFO_SCALE;
    -+	server_data->fifo_fds = xcalloc(server_data->queue_size,
    -+					sizeof(*server_data->fifo_fds));
    ++	CALLOC_ARRAY(server_data->fifo_fds, server_data->queue_size);
     +
     +	server_data->accept_thread =
     +		xcalloc(1, sizeof(*server_data->accept_thread));
12:  132b6f3271 = 12:  8b5dcca684 t0052: add simple-ipc tests and t/helper/test-simple-ipc tool

Jeff

Jeff Hostetler (9):
  pkt-line: eliminate the need for static buffer in
    packet_write_gently()
  simple-ipc: design documentation for new IPC mechanism
  simple-ipc: add win32 implementation
  unix-socket: eliminate static unix_stream_socket() helper function
  unix-socket: add backlog size option to unix_stream_listen()
  unix-socket: disallow chdir() when creating unix domain sockets
  unix-stream-server: create unix domain socket under lock
  simple-ipc: add Unix domain socket implementation
  t0052: add simple-ipc tests and t/helper/test-simple-ipc tool

Johannes Schindelin (3):
  pkt-line: do not issue flush packets in write_packetized_*()
  pkt-line: add PACKET_READ_GENTLE_ON_READ_ERROR option
  pkt-line: add options argument to read_packetized_to_strbuf()

 Documentation/technical/api-simple-ipc.txt | 105 +++
 Makefile                                   |   9 +
 builtin/credential-cache--daemon.c         |   3 +-
 builtin/credential-cache.c                 |   2 +-
 compat/simple-ipc/ipc-shared.c             |  28 +
 compat/simple-ipc/ipc-unix-socket.c        | 999 +++++++++++++++++++++
 compat/simple-ipc/ipc-win32.c              | 751 ++++++++++++++++
 config.mak.uname                           |   2 +
 contrib/buildsystems/CMakeLists.txt        |   8 +-
 convert.c                                  |  11 +-
 pkt-line.c                                 |  59 +-
 pkt-line.h                                 |  17 +-
 simple-ipc.h                               | 239 +++++
 t/helper/test-simple-ipc.c                 | 787 ++++++++++++++++
 t/helper/test-tool.c                       |   1 +
 t/helper/test-tool.h                       |   1 +
 t/t0052-simple-ipc.sh                      | 122 +++
 unix-socket.c                              |  53 +-
 unix-socket.h                              |  12 +-
 unix-stream-server.c                       | 125 +++
 unix-stream-server.h                       |  33 +
 21 files changed, 3315 insertions(+), 52 deletions(-)
 create mode 100644 Documentation/technical/api-simple-ipc.txt
 create mode 100644 compat/simple-ipc/ipc-shared.c
 create mode 100644 compat/simple-ipc/ipc-unix-socket.c
 create mode 100644 compat/simple-ipc/ipc-win32.c
 create mode 100644 simple-ipc.h
 create mode 100644 t/helper/test-simple-ipc.c
 create mode 100755 t/t0052-simple-ipc.sh
 create mode 100644 unix-stream-server.c
 create mode 100644 unix-stream-server.h


base-commit: f01623b2c9d14207e497b21ebc6b3ec4afaf4b46
Published-As: https://github.com/gitgitgadget/git/releases/tag/pr-766%2Fjeffhostetler%2Fsimple-ipc-v7
Fetch-It-Via: git fetch https://github.com/gitgitgadget/git pr-766/jeffhostetler/simple-ipc-v7
Pull-Request: https://github.com/gitgitgadget/git/pull/766

Range-diff vs v6:

  1:  fe35dc3d292d =  1:  fe35dc3d292d pkt-line: eliminate the need for static buffer in packet_write_gently()
  2:  de11b3036148 =  2:  de11b3036148 pkt-line: do not issue flush packets in write_packetized_*()
  3:  3718da39da30 =  3:  3718da39da30 pkt-line: add PACKET_READ_GENTLE_ON_READ_ERROR option
  4:  b43df7ad0b7a =  4:  b43df7ad0b7a pkt-line: add options argument to read_packetized_to_strbuf()
  5:  f829feb2aa93 =  5:  f829feb2aa93 simple-ipc: design documentation for new IPC mechanism
  6:  58c3fb7cd776 =  6:  58c3fb7cd776 simple-ipc: add win32 implementation
  7:  4e8c352fb366 =  7:  4e8c352fb366 unix-socket: eliminate static unix_stream_socket() helper function
  8:  3b71f52d8628 =  8:  3b71f52d8628 unix-socket: add backlog size option to unix_stream_listen()
  9:  5972a198361c =  9:  5972a198361c unix-socket: disallow chdir() when creating unix domain sockets
 10:  02c885fd623d = 10:  02c885fd623d unix-stream-server: create unix domain socket under lock
 11:  4c2199231d05 ! 11:  eee5f4796d37 simple-ipc: add Unix domain socket implementation
     @@ compat/simple-ipc/ipc-unix-socket.c (new)
      +	pthread_cond_init(&server_data->work_available_cond, NULL);
      +
      +	server_data->queue_size = nr_threads * FIFO_SCALE;
     -+	server_data->fifo_fds = xcalloc(server_data->queue_size,
     -+					sizeof(*server_data->fifo_fds));
     ++	CALLOC_ARRAY(server_data->fifo_fds, server_data->queue_size);
      +
      +	server_data->accept_thread =
      +		xcalloc(1, sizeof(*server_data->accept_thread));
 12:  132b6f3271be = 12:  8b5dcca68440 t0052: add simple-ipc tests and t/helper/test-simple-ipc tool

-- 
gitgitgadget

^ permalink raw reply	[flat|nested] 178+ messages in thread

* [PATCH v7 01/12] pkt-line: eliminate the need for static buffer in packet_write_gently()
  2021-03-22 10:29           ` [PATCH v7 00/12] Simple IPC Mechanism Jeff Hostetler via GitGitGadget
@ 2021-03-22 10:29             ` Jeff Hostetler via GitGitGadget
  2021-03-22 10:29             ` [PATCH v7 02/12] pkt-line: do not issue flush packets in write_packetized_*() Johannes Schindelin via GitGitGadget
                               ` (10 subsequent siblings)
  11 siblings, 0 replies; 178+ messages in thread
From: Jeff Hostetler via GitGitGadget @ 2021-03-22 10:29 UTC (permalink / raw)
  To: git
  Cc: Ævar Arnfjörð Bjarmason, Jeff Hostetler,
	Jeff King, SZEDER Gábor, Johannes Schindelin, Chris Torek,
	Jeff Hostetler, Jeff Hostetler

From: Jeff Hostetler <jeffhost@microsoft.com>

Teach `packet_write_gently()` to write the pkt-line header and the actual
buffer in 2 separate calls to `write_in_full()` and avoid the need for a
static buffer, thread-safe scratch space, or an excessively large stack
buffer.

Change `write_packetized_from_fd()` to allocate a temporary buffer rather
than using a static buffer to avoid similar issues here.

These changes are intended to make it easier to use pkt-line routines in
a multi-threaded context with multiple concurrent writers writing to
different streams.
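
To illustrate (not part of the patch; `fd_out` is a placeholder and error
checking is omitted): sending the 4-byte payload "ping" now results in two
writes, a 4-byte ascii-hex length header followed by the payload, i.e.
"0008ping" on the wire:

	char header[4];

	set_packet_header(header, 4 + 4);	/* total size incl. header => "0008" */
	write_in_full(fd_out, header, 4);
	write_in_full(fd_out, "ping", 4);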

Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
---
 pkt-line.c | 28 ++++++++++++++++++++--------
 1 file changed, 20 insertions(+), 8 deletions(-)

diff --git a/pkt-line.c b/pkt-line.c
index d633005ef746..66bd0ddfd1d0 100644
--- a/pkt-line.c
+++ b/pkt-line.c
@@ -196,17 +196,26 @@ int packet_write_fmt_gently(int fd, const char *fmt, ...)
 
 static int packet_write_gently(const int fd_out, const char *buf, size_t size)
 {
-	static char packet_write_buffer[LARGE_PACKET_MAX];
+	char header[4];
 	size_t packet_size;
 
-	if (size > sizeof(packet_write_buffer) - 4)
+	if (size > LARGE_PACKET_DATA_MAX)
 		return error(_("packet write failed - data exceeds max packet size"));
 
 	packet_trace(buf, size, 1);
 	packet_size = size + 4;
-	set_packet_header(packet_write_buffer, packet_size);
-	memcpy(packet_write_buffer + 4, buf, size);
-	if (write_in_full(fd_out, packet_write_buffer, packet_size) < 0)
+
+	set_packet_header(header, packet_size);
+
+	/*
+	 * Write the header and the buffer in 2 parts so that we do
+	 * not need to allocate a buffer or rely on a static buffer.
+	 * This also avoids putting a large buffer on the stack which
+	 * might have multi-threading issues.
+	 */
+
+	if (write_in_full(fd_out, header, 4) < 0 ||
+	    write_in_full(fd_out, buf, size) < 0)
 		return error(_("packet write failed"));
 	return 0;
 }
@@ -244,20 +253,23 @@ void packet_buf_write_len(struct strbuf *buf, const char *data, size_t len)
 
 int write_packetized_from_fd(int fd_in, int fd_out)
 {
-	static char buf[LARGE_PACKET_DATA_MAX];
+	char *buf = xmalloc(LARGE_PACKET_DATA_MAX);
 	int err = 0;
 	ssize_t bytes_to_write;
 
 	while (!err) {
-		bytes_to_write = xread(fd_in, buf, sizeof(buf));
-		if (bytes_to_write < 0)
+		bytes_to_write = xread(fd_in, buf, LARGE_PACKET_DATA_MAX);
+		if (bytes_to_write < 0) {
+			free(buf);
 			return COPY_READ_ERROR;
+		}
 		if (bytes_to_write == 0)
 			break;
 		err = packet_write_gently(fd_out, buf, bytes_to_write);
 	}
 	if (!err)
 		err = packet_flush_gently(fd_out);
+	free(buf);
 	return err;
 }
 
-- 
gitgitgadget


^ permalink raw reply related	[flat|nested] 178+ messages in thread

* [PATCH v7 02/12] pkt-line: do not issue flush packets in write_packetized_*()
  2021-03-22 10:29           ` [PATCH v7 00/12] Simple IPC Mechanism Jeff Hostetler via GitGitGadget
  2021-03-22 10:29             ` [PATCH v7 01/12] pkt-line: eliminate the need for static buffer in packet_write_gently() Jeff Hostetler via GitGitGadget
@ 2021-03-22 10:29             ` Johannes Schindelin via GitGitGadget
  2021-03-22 10:29             ` [PATCH v7 03/12] pkt-line: add PACKET_READ_GENTLE_ON_READ_ERROR option Johannes Schindelin via GitGitGadget
                               ` (9 subsequent siblings)
  11 siblings, 0 replies; 178+ messages in thread
From: Johannes Schindelin via GitGitGadget @ 2021-03-22 10:29 UTC (permalink / raw)
  To: git
  Cc: Ævar Arnfjörð Bjarmason, Jeff Hostetler,
	Jeff King, SZEDER Gábor, Johannes Schindelin, Chris Torek,
	Jeff Hostetler, Johannes Schindelin

From: Johannes Schindelin <johannes.schindelin@gmx.de>

Remove the `packet_flush_gently()` call in `write_packetized_from_buf()` and
`write_packetized_from_fd()` and require the caller to call it if desired.
Rename both functions to `write_packetized_from_*_no_flush()` to prevent
later merge accidents.

`write_packetized_from_buf()` currently only has one caller:
`apply_multi_file_filter()` in `convert.c`.  It always wants a flush packet
to be written after writing the payload.

However, we are about to introduce a caller that wants to write many
packets before a final flush packet, so let's make the caller responsible
for emitting the flush packet.

Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
---
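A sketch of the calling pattern this enables (illustration only; the
function name `send_chunks` and its arguments are invented):

static int send_chunks(int fd, const char **chunks, size_t *lens, int nr)
{
	int i;

	for (i = 0; i < nr; i++)
		if (write_packetized_from_buf_no_flush(chunks[i], lens[i], fd) < 0)
			return -1;

	/* the caller, not the helper, decides when the message ends */
	return packet_flush_gently(fd);
}
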
 convert.c  | 8 ++++++--
 pkt-line.c | 8 ++------
 pkt-line.h | 4 ++--
 3 files changed, 10 insertions(+), 10 deletions(-)

diff --git a/convert.c b/convert.c
index ee360c2f07ce..976d4905cb3a 100644
--- a/convert.c
+++ b/convert.c
@@ -884,9 +884,13 @@ static int apply_multi_file_filter(const char *path, const char *src, size_t len
 		goto done;
 
 	if (fd >= 0)
-		err = write_packetized_from_fd(fd, process->in);
+		err = write_packetized_from_fd_no_flush(fd, process->in);
 	else
-		err = write_packetized_from_buf(src, len, process->in);
+		err = write_packetized_from_buf_no_flush(src, len, process->in);
+	if (err)
+		goto done;
+
+	err = packet_flush_gently(process->in);
 	if (err)
 		goto done;
 
diff --git a/pkt-line.c b/pkt-line.c
index 66bd0ddfd1d0..bb0fb0c3802c 100644
--- a/pkt-line.c
+++ b/pkt-line.c
@@ -251,7 +251,7 @@ void packet_buf_write_len(struct strbuf *buf, const char *data, size_t len)
 	packet_trace(data, len, 1);
 }
 
-int write_packetized_from_fd(int fd_in, int fd_out)
+int write_packetized_from_fd_no_flush(int fd_in, int fd_out)
 {
 	char *buf = xmalloc(LARGE_PACKET_DATA_MAX);
 	int err = 0;
@@ -267,13 +267,11 @@ int write_packetized_from_fd(int fd_in, int fd_out)
 			break;
 		err = packet_write_gently(fd_out, buf, bytes_to_write);
 	}
-	if (!err)
-		err = packet_flush_gently(fd_out);
 	free(buf);
 	return err;
 }
 
-int write_packetized_from_buf(const char *src_in, size_t len, int fd_out)
+int write_packetized_from_buf_no_flush(const char *src_in, size_t len, int fd_out)
 {
 	int err = 0;
 	size_t bytes_written = 0;
@@ -289,8 +287,6 @@ int write_packetized_from_buf(const char *src_in, size_t len, int fd_out)
 		err = packet_write_gently(fd_out, src_in + bytes_written, bytes_to_write);
 		bytes_written += bytes_to_write;
 	}
-	if (!err)
-		err = packet_flush_gently(fd_out);
 	return err;
 }
 
diff --git a/pkt-line.h b/pkt-line.h
index 8c90daa59ef0..31012b9943bf 100644
--- a/pkt-line.h
+++ b/pkt-line.h
@@ -32,8 +32,8 @@ void packet_buf_write(struct strbuf *buf, const char *fmt, ...) __attribute__((f
 void packet_buf_write_len(struct strbuf *buf, const char *data, size_t len);
 int packet_flush_gently(int fd);
 int packet_write_fmt_gently(int fd, const char *fmt, ...) __attribute__((format (printf, 2, 3)));
-int write_packetized_from_fd(int fd_in, int fd_out);
-int write_packetized_from_buf(const char *src_in, size_t len, int fd_out);
+int write_packetized_from_fd_no_flush(int fd_in, int fd_out);
+int write_packetized_from_buf_no_flush(const char *src_in, size_t len, int fd_out);
 
 /*
  * Read a packetized line into the buffer, which must be at least size bytes
-- 
gitgitgadget


^ permalink raw reply related	[flat|nested] 178+ messages in thread

* [PATCH v7 03/12] pkt-line: add PACKET_READ_GENTLE_ON_READ_ERROR option
  2021-03-22 10:29           ` [PATCH v7 00/12] Simple IPC Mechanism Jeff Hostetler via GitGitGadget
  2021-03-22 10:29             ` [PATCH v7 01/12] pkt-line: eliminate the need for static buffer in packet_write_gently() Jeff Hostetler via GitGitGadget
  2021-03-22 10:29             ` [PATCH v7 02/12] pkt-line: do not issue flush packets in write_packetized_*() Johannes Schindelin via GitGitGadget
@ 2021-03-22 10:29             ` Johannes Schindelin via GitGitGadget
  2021-03-22 10:29             ` [PATCH v7 04/12] pkt-line: add options argument to read_packetized_to_strbuf() Johannes Schindelin via GitGitGadget
                               ` (8 subsequent siblings)
  11 siblings, 0 replies; 178+ messages in thread
From: Johannes Schindelin via GitGitGadget @ 2021-03-22 10:29 UTC (permalink / raw)
  To: git
  Cc: Ævar Arnfjörð Bjarmason, Jeff Hostetler,
	Jeff King, SZEDER Gábor, Johannes Schindelin, Chris Torek,
	Jeff Hostetler, Johannes Schindelin

From: Johannes Schindelin <johannes.schindelin@gmx.de>

Introduce a PACKET_READ_GENTLE_ON_READ_ERROR option to help libify the
packet readers.

So far, the (possibly indirect) callers of `get_packet_data()` can ask
that function to return an error instead of `die()`ing upon end-of-file.
However, random read errors will still cause the process to die.

So let's introduce an explicit option to tell the packet reader
machinery to please be nice and only return an error on read errors.

This change prepares pkt-line for use by long-running daemon processes.
Such processes should be able to serve multiple concurrent clients and
survive random IO errors.  If there is an error on one connection,
a daemon should be able to drop that connection and continue serving
existing and future connections.

This ability will be used by a Git-aware "Builtin FSMonitor" feature
in a later patch series.

Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
---
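A sketch of the kind of daemon read loop this option is meant to enable
(illustration only; `serve_one_client()` is a hypothetical caller, not
part of this patch):

static void serve_one_client(int client_fd)
{
	char buf[LARGE_PACKET_MAX];
	int len;

	for (;;) {
		len = packet_read(client_fd, NULL, NULL, buf, sizeof(buf),
				  PACKET_READ_GENTLE_ON_EOF |
				  PACKET_READ_GENTLE_ON_READ_ERROR);
		if (len < 0)
			break;	/* EOF or read error: drop only this client */
		if (!len)
			break;	/* flush packet: request is complete */
		/* ... hand buf/len to the application layer ... */
	}
	close(client_fd);
}
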
 pkt-line.c | 19 +++++++++++++++++--
 pkt-line.h | 11 ++++++++---
 2 files changed, 25 insertions(+), 5 deletions(-)

diff --git a/pkt-line.c b/pkt-line.c
index bb0fb0c3802c..457ac4e151bb 100644
--- a/pkt-line.c
+++ b/pkt-line.c
@@ -306,8 +306,11 @@ static int get_packet_data(int fd, char **src_buf, size_t *src_size,
 		*src_size -= ret;
 	} else {
 		ret = read_in_full(fd, dst, size);
-		if (ret < 0)
+		if (ret < 0) {
+			if (options & PACKET_READ_GENTLE_ON_READ_ERROR)
+				return error_errno(_("read error"));
 			die_errno(_("read error"));
+		}
 	}
 
 	/* And complain if we didn't get enough bytes to satisfy the read. */
@@ -315,6 +318,8 @@ static int get_packet_data(int fd, char **src_buf, size_t *src_size,
 		if (options & PACKET_READ_GENTLE_ON_EOF)
 			return -1;
 
+		if (options & PACKET_READ_GENTLE_ON_READ_ERROR)
+			return error(_("the remote end hung up unexpectedly"));
 		die(_("the remote end hung up unexpectedly"));
 	}
 
@@ -343,6 +348,9 @@ enum packet_read_status packet_read_with_status(int fd, char **src_buffer,
 	len = packet_length(linelen);
 
 	if (len < 0) {
+		if (options & PACKET_READ_GENTLE_ON_READ_ERROR)
+			return error(_("protocol error: bad line length "
+				       "character: %.4s"), linelen);
 		die(_("protocol error: bad line length character: %.4s"), linelen);
 	} else if (!len) {
 		packet_trace("0000", 4, 0);
@@ -357,12 +365,19 @@ enum packet_read_status packet_read_with_status(int fd, char **src_buffer,
 		*pktlen = 0;
 		return PACKET_READ_RESPONSE_END;
 	} else if (len < 4) {
+		if (options & PACKET_READ_GENTLE_ON_READ_ERROR)
+			return error(_("protocol error: bad line length %d"),
+				     len);
 		die(_("protocol error: bad line length %d"), len);
 	}
 
 	len -= 4;
-	if ((unsigned)len >= size)
+	if ((unsigned)len >= size) {
+		if (options & PACKET_READ_GENTLE_ON_READ_ERROR)
+			return error(_("protocol error: bad line length %d"),
+				     len);
 		die(_("protocol error: bad line length %d"), len);
+	}
 
 	if (get_packet_data(fd, src_buffer, src_len, buffer, len, options) < 0) {
 		*pktlen = -1;
diff --git a/pkt-line.h b/pkt-line.h
index 31012b9943bf..80ce0187e2ea 100644
--- a/pkt-line.h
+++ b/pkt-line.h
@@ -68,10 +68,15 @@ int write_packetized_from_buf_no_flush(const char *src_in, size_t len, int fd_ou
  *
  * If options contains PACKET_READ_DIE_ON_ERR_PACKET, it dies when it sees an
  * ERR packet.
+ *
+ * If options contains PACKET_READ_GENTLE_ON_READ_ERROR, we will not die
+ * on read errors, but instead return -1.  However, we may still die on an
+ * ERR packet (if requested).
  */
-#define PACKET_READ_GENTLE_ON_EOF     (1u<<0)
-#define PACKET_READ_CHOMP_NEWLINE     (1u<<1)
-#define PACKET_READ_DIE_ON_ERR_PACKET (1u<<2)
+#define PACKET_READ_GENTLE_ON_EOF        (1u<<0)
+#define PACKET_READ_CHOMP_NEWLINE        (1u<<1)
+#define PACKET_READ_DIE_ON_ERR_PACKET    (1u<<2)
+#define PACKET_READ_GENTLE_ON_READ_ERROR (1u<<3)
 int packet_read(int fd, char **src_buffer, size_t *src_len, char
 		*buffer, unsigned size, int options);
 
-- 
gitgitgadget


^ permalink raw reply related	[flat|nested] 178+ messages in thread

* [PATCH v7 04/12] pkt-line: add options argument to read_packetized_to_strbuf()
  2021-03-22 10:29           ` [PATCH v7 00/12] Simple IPC Mechanism Jeff Hostetler via GitGitGadget
                               ` (2 preceding siblings ...)
  2021-03-22 10:29             ` [PATCH v7 03/12] pkt-line: add PACKET_READ_GENTLE_ON_READ_ERROR option Johannes Schindelin via GitGitGadget
@ 2021-03-22 10:29             ` Johannes Schindelin via GitGitGadget
  2021-03-22 10:29             ` [PATCH v7 05/12] simple-ipc: design documentation for new IPC mechanism Jeff Hostetler via GitGitGadget
                               ` (7 subsequent siblings)
  11 siblings, 0 replies; 178+ messages in thread
From: Johannes Schindelin via GitGitGadget @ 2021-03-22 10:29 UTC (permalink / raw)
  To: git
  Cc: Ævar Arnfjörð Bjarmason, Jeff Hostetler,
	Jeff King, SZEDER Gábor, Johannes Schindelin, Chris Torek,
	Jeff Hostetler, Johannes Schindelin

From: Johannes Schindelin <johannes.schindelin@gmx.de>

Update the calling sequence of `read_packetized_to_strbuf()` to take
an options argument and not assume a fixed set of options.  Update the
only existing caller accordingly to explicitly pass the
formerly-assumed flags.

The `read_packetized_to_strbuf()` function calls `packet_read()` with
a fixed set of assumed options (`PACKET_READ_GENTLE_ON_EOF`).  This
assumption has been fine for the single existing caller
`apply_multi_file_filter()` in `convert.c`.

In a later commit we would like to add other callers to
`read_packetized_to_strbuf()` that need a different set of options.

Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
---
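A sketch of a future caller passing its own option set (illustration
only; `read_response()` is a hypothetical helper, not part of this
patch):

static int read_response(int fd, struct strbuf *answer)
{
	if (read_packetized_to_strbuf(fd, answer,
				      PACKET_READ_GENTLE_ON_EOF |
				      PACKET_READ_GENTLE_ON_READ_ERROR) < 0)
		return error(_("could not read packetized response"));
	return 0;
}
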
 convert.c  | 3 ++-
 pkt-line.c | 4 ++--
 pkt-line.h | 2 +-
 3 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/convert.c b/convert.c
index 976d4905cb3a..516f1095b06e 100644
--- a/convert.c
+++ b/convert.c
@@ -907,7 +907,8 @@ static int apply_multi_file_filter(const char *path, const char *src, size_t len
 		if (err)
 			goto done;
 
-		err = read_packetized_to_strbuf(process->out, &nbuf) < 0;
+		err = read_packetized_to_strbuf(process->out, &nbuf,
+						PACKET_READ_GENTLE_ON_EOF) < 0;
 		if (err)
 			goto done;
 
diff --git a/pkt-line.c b/pkt-line.c
index 457ac4e151bb..0194137528c3 100644
--- a/pkt-line.c
+++ b/pkt-line.c
@@ -444,7 +444,7 @@ char *packet_read_line_buf(char **src, size_t *src_len, int *dst_len)
 	return packet_read_line_generic(-1, src, src_len, dst_len);
 }
 
-ssize_t read_packetized_to_strbuf(int fd_in, struct strbuf *sb_out)
+ssize_t read_packetized_to_strbuf(int fd_in, struct strbuf *sb_out, int options)
 {
 	int packet_len;
 
@@ -460,7 +460,7 @@ ssize_t read_packetized_to_strbuf(int fd_in, struct strbuf *sb_out)
 			 * that there is already room for the extra byte.
 			 */
 			sb_out->buf + sb_out->len, LARGE_PACKET_DATA_MAX+1,
-			PACKET_READ_GENTLE_ON_EOF);
+			options);
 		if (packet_len <= 0)
 			break;
 		sb_out->len += packet_len;
diff --git a/pkt-line.h b/pkt-line.h
index 80ce0187e2ea..5af5f4568768 100644
--- a/pkt-line.h
+++ b/pkt-line.h
@@ -136,7 +136,7 @@ char *packet_read_line_buf(char **src_buf, size_t *src_len, int *size);
 /*
  * Reads a stream of variable sized packets until a flush packet is detected.
  */
-ssize_t read_packetized_to_strbuf(int fd_in, struct strbuf *sb_out);
+ssize_t read_packetized_to_strbuf(int fd_in, struct strbuf *sb_out, int options);
 
 /*
  * Receive multiplexed output stream over git native protocol.
-- 
gitgitgadget


^ permalink raw reply related	[flat|nested] 178+ messages in thread

* [PATCH v7 05/12] simple-ipc: design documentation for new IPC mechanism
  2021-03-22 10:29           ` [PATCH v7 00/12] Simple IPC Mechanism Jeff Hostetler via GitGitGadget
                               ` (3 preceding siblings ...)
  2021-03-22 10:29             ` [PATCH v7 04/12] pkt-line: add options argument to read_packetized_to_strbuf() Johannes Schindelin via GitGitGadget
@ 2021-03-22 10:29             ` Jeff Hostetler via GitGitGadget
  2021-03-22 10:29             ` [PATCH v7 06/12] simple-ipc: add win32 implementation Jeff Hostetler via GitGitGadget
                               ` (6 subsequent siblings)
  11 siblings, 0 replies; 178+ messages in thread
From: Jeff Hostetler via GitGitGadget @ 2021-03-22 10:29 UTC (permalink / raw)
  To: git
  Cc: Ævar Arnfjörð Bjarmason, Jeff Hostetler,
	Jeff King, SZEDER Gábor, Johannes Schindelin, Chris Torek,
	Jeff Hostetler, Jeff Hostetler

From: Jeff Hostetler <jeffhost@microsoft.com>

Brief design documentation for a new IPC mechanism that allows a
foreground Git client to talk with an existing daemon process at a
known location using a named pipe or Unix domain socket.

Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
---
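A sketch of the intended client-side round trip, using the `ipc_*`
client API that a later patch in this series adds (illustration only;
`query_daemon()` is a hypothetical caller):

static int query_daemon(const char *path, const char *request,
			struct strbuf *answer)
{
	struct ipc_client_connect_options options =
		IPC_CLIENT_CONNECT_OPTIONS_INIT;

	options.wait_if_busy = 1;
	options.wait_if_not_found = 0;

	/* connect, send one request, read one response, disconnect */
	return ipc_client_send_command(path, &options, request, answer);
}
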
 Documentation/technical/api-simple-ipc.txt | 105 +++++++++++++++++++++
 1 file changed, 105 insertions(+)
 create mode 100644 Documentation/technical/api-simple-ipc.txt

diff --git a/Documentation/technical/api-simple-ipc.txt b/Documentation/technical/api-simple-ipc.txt
new file mode 100644
index 000000000000..d79ad323e675
--- /dev/null
+++ b/Documentation/technical/api-simple-ipc.txt
@@ -0,0 +1,105 @@
+Simple-IPC API
+==============
+
+The Simple-IPC API is a collection of `ipc_` prefixed library routines
+and a basic communication protocol that allow an IPC-client process to
+send an application-specific IPC-request message to an IPC-server
+process and receive an application-specific IPC-response message.
+
+Communication occurs over a named pipe on Windows and a Unix domain
+socket on other platforms.  IPC-clients and IPC-servers rendezvous at
+a previously agreed-to application-specific pathname (which is outside
+the scope of this design) that is local to the computer system.
+
+The IPC-server routines within the server application process create a
+thread pool to listen for connections and receive request messages
+from multiple concurrent IPC-clients.  When received, these messages
+are dispatched up to the server application callbacks for handling.
+IPC-server routines then incrementally relay responses back to the
+IPC-client.
+
+The IPC-client routines within a client application process connect
+to the IPC-server and send a request message and wait for a response.
+When received, the response is returned to the caller.
+
+For example, the `fsmonitor--daemon` feature will be built as a server
+application on top of the IPC-server library routines.  It will have
+threads watching for file system events and a thread pool waiting for
+client connections.  Clients, such as `git status` will request a list
+of file system events since a point in time and the server will
+respond with a list of changed files and directories.  The formats of
+the request and response are application-specific; the IPC-client and
+IPC-server routines treat them as opaque byte streams.
+
+
+Comparison with sub-process model
+---------------------------------
+
+The Simple-IPC mechanism differs from the existing `sub-process.c`
+model (Documentation/technical/long-running-process-protocol.txt) that is
+used by applications like Git-LFS.  In the LFS-style sub-process model
+the helper is started by the foreground process, communication happens
+via a pair of file descriptors bound to the stdin/stdout of the
+sub-process, the sub-process only serves the current foreground
+process, and the sub-process exits when the foreground process
+terminates.
+
+In the Simple-IPC model the server is a very long-running service.  It
+can service many clients at the same time and has a private socket or
+named pipe connection to each active client.  It might be started
+(on-demand) by the current client process or it might have been
+started by a previous client or by the OS at boot time.  The server
+process is not associated with a terminal and it persists after
+clients terminate.  Clients do not have access to the stdin/stdout of
+the server process and therefore must communicate over sockets or
+named pipes.
+
+
+Server startup and shutdown
+---------------------------
+
+How an application server based upon IPC-server is started is also
+outside the scope of the Simple-IPC design and is a property of the
+application using it.  For example, the server might be started or
+restarted during routine maintenance operations, or it might be
+started as a system service during the system boot-up sequence, or it
+might be started on-demand by a foreground Git command when needed.
+
+Similarly, server shutdown is a property of the application using
+the simple-ipc routines.  For example, the server might decide to
+shut down when idle or only upon explicit request.
+
+
+Simple-IPC protocol
+-------------------
+
+The Simple-IPC protocol consists of a single request message from the
+client and an optional response message from the server.  Both the
+client and server messages are unlimited in length and are terminated
+with a flush packet.
+
+The pkt-line routines (Documentation/technical/protocol-common.txt)
+are used to simplify buffer management during message generation,
+transmission, and reception.  A flush packet is used to mark the end
+of the message.  This allows the sender to incrementally generate and
+transmit the message.  It allows the receiver to incrementally receive
+the message in chunks and to know when they have received the entire
+message.
+
+The actual byte format of the client request and server response
+messages is application-specific.  The IPC layer transmits and
+receives them as opaque byte buffers without any concern for the
+content within.  It is the job of the calling application layer to
+understand the contents of the request and response messages.
+
+
+Summary
+-------
+
+Conceptually, the Simple-IPC protocol is similar to an HTTP REST
+request.  Clients connect, make an application-specific and
+stateless request, receive an application-specific
+response, and disconnect.  It is a one round trip facility for
+querying the server.  The Simple-IPC routines hide the socket,
+named pipe, and thread pool details and allow the application
+layer to focus on the application at hand.
-- 
gitgitgadget


^ permalink raw reply related	[flat|nested] 178+ messages in thread

* [PATCH v7 06/12] simple-ipc: add win32 implementation
  2021-03-22 10:29           ` [PATCH v7 00/12] Simple IPC Mechanism Jeff Hostetler via GitGitGadget
                               ` (4 preceding siblings ...)
  2021-03-22 10:29             ` [PATCH v7 05/12] simple-ipc: design documentation for new IPC mechanism Jeff Hostetler via GitGitGadget
@ 2021-03-22 10:29             ` Jeff Hostetler via GitGitGadget
  2021-03-22 10:29             ` [PATCH v7 07/12] unix-socket: eliminate static unix_stream_socket() helper function Jeff Hostetler via GitGitGadget
                               ` (5 subsequent siblings)
  11 siblings, 0 replies; 178+ messages in thread
From: Jeff Hostetler via GitGitGadget @ 2021-03-22 10:29 UTC (permalink / raw)
  To: git
  Cc: Ævar Arnfjörð Bjarmason, Jeff Hostetler,
	Jeff King, SZEDER Gábor, Johannes Schindelin, Chris Torek,
	Jeff Hostetler, Jeff Hostetler

From: Jeff Hostetler <jeffhost@microsoft.com>

Create Windows implementation of "simple-ipc" using named pipes.

Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
---
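A sketch of how an application sits on top of this API (illustration
only; the echo callback and the thread count are invented):

static int echo_cb(void *application_data, const char *request,
		   ipc_server_reply_cb *reply_cb,
		   struct ipc_server_reply_data *reply_data)
{
	if (!strcmp(request, "quit"))
		return SIMPLE_IPC_QUIT; /* ask the thread pool to shut down */

	/* reply_cb may be called several times to chunk the response */
	return reply_cb(reply_data, request, strlen(request));
}

static int run_echo_server(const char *path)
{
	struct ipc_server_opts opts = { .nr_threads = 3 };

	/* blocks until the server shuts down or hits a hard error */
	return ipc_server_run(path, &opts, echo_cb, NULL);
}
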
 Makefile                            |   5 +
 compat/simple-ipc/ipc-shared.c      |  28 ++
 compat/simple-ipc/ipc-win32.c       | 751 ++++++++++++++++++++++++++++
 config.mak.uname                    |   2 +
 contrib/buildsystems/CMakeLists.txt |   4 +
 simple-ipc.h                        | 228 +++++++++
 6 files changed, 1018 insertions(+)
 create mode 100644 compat/simple-ipc/ipc-shared.c
 create mode 100644 compat/simple-ipc/ipc-win32.c
 create mode 100644 simple-ipc.h

diff --git a/Makefile b/Makefile
index dd08b4ced01c..d3c42d3f4f9f 100644
--- a/Makefile
+++ b/Makefile
@@ -1667,6 +1667,11 @@ else
 	LIB_OBJS += unix-socket.o
 endif
 
+ifdef USE_WIN32_IPC
+	LIB_OBJS += compat/simple-ipc/ipc-shared.o
+	LIB_OBJS += compat/simple-ipc/ipc-win32.o
+endif
+
 ifdef NO_ICONV
 	BASIC_CFLAGS += -DNO_ICONV
 endif
diff --git a/compat/simple-ipc/ipc-shared.c b/compat/simple-ipc/ipc-shared.c
new file mode 100644
index 000000000000..1edec8159532
--- /dev/null
+++ b/compat/simple-ipc/ipc-shared.c
@@ -0,0 +1,28 @@
+#include "cache.h"
+#include "simple-ipc.h"
+#include "strbuf.h"
+#include "pkt-line.h"
+#include "thread-utils.h"
+
+#ifdef SUPPORTS_SIMPLE_IPC
+
+int ipc_server_run(const char *path, const struct ipc_server_opts *opts,
+		   ipc_server_application_cb *application_cb,
+		   void *application_data)
+{
+	struct ipc_server_data *server_data = NULL;
+	int ret;
+
+	ret = ipc_server_run_async(&server_data, path, opts,
+				   application_cb, application_data);
+	if (ret)
+		return ret;
+
+	ret = ipc_server_await(server_data);
+
+	ipc_server_free(server_data);
+
+	return ret;
+}
+
+#endif /* SUPPORTS_SIMPLE_IPC */
diff --git a/compat/simple-ipc/ipc-win32.c b/compat/simple-ipc/ipc-win32.c
new file mode 100644
index 000000000000..8f89c02037e3
--- /dev/null
+++ b/compat/simple-ipc/ipc-win32.c
@@ -0,0 +1,751 @@
+#include "cache.h"
+#include "simple-ipc.h"
+#include "strbuf.h"
+#include "pkt-line.h"
+#include "thread-utils.h"
+
+#ifndef GIT_WINDOWS_NATIVE
+#error This file can only be compiled on Windows
+#endif
+
+static int initialize_pipe_name(const char *path, wchar_t *wpath, size_t alloc)
+{
+	int off = 0;
+	struct strbuf realpath = STRBUF_INIT;
+
+	if (!strbuf_realpath(&realpath, path, 0))
+		return -1;
+
+	off = swprintf(wpath, alloc, L"\\\\.\\pipe\\");
+	if (xutftowcs(wpath + off, realpath.buf, alloc - off) < 0)
+		return -1;
+
+	/* Handle drive prefix */
+	if (wpath[off] && wpath[off + 1] == L':') {
+		wpath[off + 1] = L'_';
+		off += 2;
+	}
+
+	for (; wpath[off]; off++)
+		if (wpath[off] == L'/')
+			wpath[off] = L'\\';
+
+	strbuf_release(&realpath);
+	return 0;
+}
+
+static enum ipc_active_state get_active_state(wchar_t *pipe_path)
+{
+	if (WaitNamedPipeW(pipe_path, NMPWAIT_USE_DEFAULT_WAIT))
+		return IPC_STATE__LISTENING;
+
+	if (GetLastError() == ERROR_SEM_TIMEOUT)
+		return IPC_STATE__NOT_LISTENING;
+
+	if (GetLastError() == ERROR_FILE_NOT_FOUND)
+		return IPC_STATE__PATH_NOT_FOUND;
+
+	return IPC_STATE__OTHER_ERROR;
+}
+
+enum ipc_active_state ipc_get_active_state(const char *path)
+{
+	wchar_t pipe_path[MAX_PATH];
+
+	if (initialize_pipe_name(path, pipe_path, ARRAY_SIZE(pipe_path)) < 0)
+		return IPC_STATE__INVALID_PATH;
+
+	return get_active_state(pipe_path);
+}
+
+#define WAIT_STEP_MS (50)
+
+static enum ipc_active_state connect_to_server(
+	const wchar_t *wpath,
+	DWORD timeout_ms,
+	const struct ipc_client_connect_options *options,
+	int *pfd)
+{
+	DWORD t_start_ms, t_waited_ms;
+	DWORD step_ms;
+	HANDLE hPipe = INVALID_HANDLE_VALUE;
+	DWORD mode = PIPE_READMODE_BYTE;
+	DWORD gle;
+
+	*pfd = -1;
+
+	for (;;) {
+		hPipe = CreateFileW(wpath, GENERIC_READ | GENERIC_WRITE,
+				    0, NULL, OPEN_EXISTING, 0, NULL);
+		if (hPipe != INVALID_HANDLE_VALUE)
+			break;
+
+		gle = GetLastError();
+
+		switch (gle) {
+		case ERROR_FILE_NOT_FOUND:
+			if (!options->wait_if_not_found)
+				return IPC_STATE__PATH_NOT_FOUND;
+			if (!timeout_ms)
+				return IPC_STATE__PATH_NOT_FOUND;
+
+			step_ms = (timeout_ms < WAIT_STEP_MS) ?
+				timeout_ms : WAIT_STEP_MS;
+			sleep_millisec(step_ms);
+
+			timeout_ms -= step_ms;
+			break; /* try again */
+
+		case ERROR_PIPE_BUSY:
+			if (!options->wait_if_busy)
+				return IPC_STATE__NOT_LISTENING;
+			if (!timeout_ms)
+				return IPC_STATE__NOT_LISTENING;
+
+			t_start_ms = (DWORD)(getnanotime() / 1000000);
+
+			if (!WaitNamedPipeW(wpath, timeout_ms)) {
+				if (GetLastError() == ERROR_SEM_TIMEOUT)
+					return IPC_STATE__NOT_LISTENING;
+
+				return IPC_STATE__OTHER_ERROR;
+			}
+
+			/*
+			 * A pipe server instance became available.
+			 * Race other client processes to connect to
+			 * it.
+			 *
+			 * But first decrement our overall timeout so
+			 * that we don't starve if we keep losing the
+			 * race.  But also guard against special
+			 * NMPWAIT_ values (0 and -1).
+			 */
+			t_waited_ms = (DWORD)(getnanotime() / 1000000) - t_start_ms;
+			if (t_waited_ms < timeout_ms)
+				timeout_ms -= t_waited_ms;
+			else
+				timeout_ms = 1;
+			break; /* try again */
+
+		default:
+			return IPC_STATE__OTHER_ERROR;
+		}
+	}
+
+	if (!SetNamedPipeHandleState(hPipe, &mode, NULL, NULL)) {
+		CloseHandle(hPipe);
+		return IPC_STATE__OTHER_ERROR;
+	}
+
+	*pfd = _open_osfhandle((intptr_t)hPipe, O_RDWR|O_BINARY);
+	if (*pfd < 0) {
+		CloseHandle(hPipe);
+		return IPC_STATE__OTHER_ERROR;
+	}
+
+	/* fd now owns hPipe */
+
+	return IPC_STATE__LISTENING;
+}
+
+/*
+ * The default connection timeout for Windows clients.
+ *
+ * This is not currently part of the ipc_ API (nor the config settings)
+ * because of differences between Windows and other platforms.
+ *
+ * This value was chosen at random.
+ */
+#define WINDOWS_CONNECTION_TIMEOUT_MS (30000)
+
+enum ipc_active_state ipc_client_try_connect(
+	const char *path,
+	const struct ipc_client_connect_options *options,
+	struct ipc_client_connection **p_connection)
+{
+	wchar_t wpath[MAX_PATH];
+	enum ipc_active_state state = IPC_STATE__OTHER_ERROR;
+	int fd = -1;
+
+	*p_connection = NULL;
+
+	trace2_region_enter("ipc-client", "try-connect", NULL);
+	trace2_data_string("ipc-client", NULL, "try-connect/path", path);
+
+	if (initialize_pipe_name(path, wpath, ARRAY_SIZE(wpath)) < 0)
+		state = IPC_STATE__INVALID_PATH;
+	else
+		state = connect_to_server(wpath, WINDOWS_CONNECTION_TIMEOUT_MS,
+					  options, &fd);
+
+	trace2_data_intmax("ipc-client", NULL, "try-connect/state",
+			   (intmax_t)state);
+	trace2_region_leave("ipc-client", "try-connect", NULL);
+
+	if (state == IPC_STATE__LISTENING) {
+		(*p_connection) = xcalloc(1, sizeof(struct ipc_client_connection));
+		(*p_connection)->fd = fd;
+	}
+
+	return state;
+}
+
+void ipc_client_close_connection(struct ipc_client_connection *connection)
+{
+	if (!connection)
+		return;
+
+	if (connection->fd != -1)
+		close(connection->fd);
+
+	free(connection);
+}
+
+int ipc_client_send_command_to_connection(
+	struct ipc_client_connection *connection,
+	const char *message, struct strbuf *answer)
+{
+	int ret = 0;
+
+	strbuf_setlen(answer, 0);
+
+	trace2_region_enter("ipc-client", "send-command", NULL);
+
+	if (write_packetized_from_buf_no_flush(message, strlen(message),
+					       connection->fd) < 0 ||
+	    packet_flush_gently(connection->fd) < 0) {
+		ret = error(_("could not send IPC command"));
+		goto done;
+	}
+
+	FlushFileBuffers((HANDLE)_get_osfhandle(connection->fd));
+
+	if (read_packetized_to_strbuf(
+		    connection->fd, answer,
+		    PACKET_READ_GENTLE_ON_EOF | PACKET_READ_GENTLE_ON_READ_ERROR) < 0) {
+		ret = error(_("could not read IPC response"));
+		goto done;
+	}
+
+done:
+	trace2_region_leave("ipc-client", "send-command", NULL);
+	return ret;
+}
+
+int ipc_client_send_command(const char *path,
+			    const struct ipc_client_connect_options *options,
+			    const char *message, struct strbuf *response)
+{
+	int ret = -1;
+	enum ipc_active_state state;
+	struct ipc_client_connection *connection = NULL;
+
+	state = ipc_client_try_connect(path, options, &connection);
+
+	if (state != IPC_STATE__LISTENING)
+		return ret;
+
+	ret = ipc_client_send_command_to_connection(connection, message, response);
+
+	ipc_client_close_connection(connection);
+
+	return ret;
+}
+
+/*
+ * Duplicate the given pipe handle and wrap it in a file descriptor so
+ * that we can use pkt-line on it.
+ */
+static int dup_fd_from_pipe(const HANDLE pipe)
+{
+	HANDLE process = GetCurrentProcess();
+	HANDLE handle;
+	int fd;
+
+	if (!DuplicateHandle(process, pipe, process, &handle, 0, FALSE,
+			     DUPLICATE_SAME_ACCESS)) {
+		errno = err_win_to_posix(GetLastError());
+		return -1;
+	}
+
+	fd = _open_osfhandle((intptr_t)handle, O_RDWR|O_BINARY);
+	if (fd < 0) {
+		errno = err_win_to_posix(GetLastError());
+		CloseHandle(handle);
+		return -1;
+	}
+
+	/*
+	 * `handle` is now owned by `fd` and will be automatically closed
+	 * when the descriptor is closed.
+	 */
+
+	return fd;
+}
+
+/*
+ * Magic numbers used to annotate callback instance data.
+ * These are used to help guard against accidentally passing the
+ * wrong instance data across multiple levels of callbacks (which
+ * is easy to do if there are `void*` arguments).
+ */
+enum magic {
+	MAGIC_SERVER_REPLY_DATA,
+	MAGIC_SERVER_THREAD_DATA,
+	MAGIC_SERVER_DATA,
+};
+
+struct ipc_server_reply_data {
+	enum magic magic;
+	int fd;
+	struct ipc_server_thread_data *server_thread_data;
+};
+
+struct ipc_server_thread_data {
+	enum magic magic;
+	struct ipc_server_thread_data *next_thread;
+	struct ipc_server_data *server_data;
+	pthread_t pthread_id;
+	HANDLE hPipe;
+};
+
+/*
+ * On Windows, the conceptual "ipc-server" is implemented as a pool of
+ * n identical/peer "server-thread" threads.  That is, there is no
+ * hierarchy of threads; and therefore no controller thread managing
+ * the pool.  Each thread has an independent handle to the named pipe,
+ * receives incoming connections, processes the client, and re-uses
+ * the pipe for the next client connection.
+ *
+ * Therefore, the "ipc-server" only needs to maintain a list of the
+ * spawned threads for eventual "join" purposes.
+ *
+ * A single "stop-event" is visible to all of the server threads to
+ * tell them to shutdown (when idle).
+ */
+struct ipc_server_data {
+	enum magic magic;
+	ipc_server_application_cb *application_cb;
+	void *application_data;
+	struct strbuf buf_path;
+	wchar_t wpath[MAX_PATH];
+
+	HANDLE hEventStopRequested;
+	struct ipc_server_thread_data *thread_list;
+	int is_stopped;
+};
+
+enum connect_result {
+	CR_CONNECTED = 0,
+	CR_CONNECT_PENDING,
+	CR_CONNECT_ERROR,
+	CR_WAIT_ERROR,
+	CR_SHUTDOWN,
+};
+
+static enum connect_result queue_overlapped_connect(
+	struct ipc_server_thread_data *server_thread_data,
+	OVERLAPPED *lpo)
+{
+	if (ConnectNamedPipe(server_thread_data->hPipe, lpo))
+		goto failed;
+
+	switch (GetLastError()) {
+	case ERROR_IO_PENDING:
+		return CR_CONNECT_PENDING;
+
+	case ERROR_PIPE_CONNECTED:
+		SetEvent(lpo->hEvent);
+		return CR_CONNECTED;
+
+	default:
+		break;
+	}
+
+failed:
+	error(_("ConnectNamedPipe failed for '%s' (%lu)"),
+	      server_thread_data->server_data->buf_path.buf,
+	      GetLastError());
+	return CR_CONNECT_ERROR;
+}
+
+/*
+ * Use Windows Overlapped IO to wait for a connection or for our event
+ * to be signalled.
+ */
+static enum connect_result wait_for_connection(
+	struct ipc_server_thread_data *server_thread_data,
+	OVERLAPPED *lpo)
+{
+	enum connect_result r;
+	HANDLE waitHandles[2];
+	DWORD dwWaitResult;
+
+	r = queue_overlapped_connect(server_thread_data, lpo);
+	if (r != CR_CONNECT_PENDING)
+		return r;
+
+	waitHandles[0] = server_thread_data->server_data->hEventStopRequested;
+	waitHandles[1] = lpo->hEvent;
+
+	dwWaitResult = WaitForMultipleObjects(2, waitHandles, FALSE, INFINITE);
+	switch (dwWaitResult) {
+	case WAIT_OBJECT_0 + 0:
+		return CR_SHUTDOWN;
+
+	case WAIT_OBJECT_0 + 1:
+		ResetEvent(lpo->hEvent);
+		return CR_CONNECTED;
+
+	default:
+		return CR_WAIT_ERROR;
+	}
+}
+
+/*
+ * Forward declare our reply callback function so that any compiler
+ * errors are reported when we actually define the function (in addition
+ * to any errors reported when we try to pass this callback function as
+ * a parameter in a function call).  The former are easier to understand.
+ */
+static ipc_server_reply_cb do_io_reply_callback;
+
+/*
+ * Relay application's response message to the client process.
+ * (We do not flush at this point because we allow the caller
+ * to chunk data to the client thru us.)
+ */
+static int do_io_reply_callback(struct ipc_server_reply_data *reply_data,
+		       const char *response, size_t response_len)
+{
+	if (reply_data->magic != MAGIC_SERVER_REPLY_DATA)
+		BUG("reply_cb called with wrong instance data");
+
+	return write_packetized_from_buf_no_flush(response, response_len,
+						  reply_data->fd);
+}
+
+/*
+ * Receive the request/command from the client and pass it to the
+ * registered request-callback.  The request-callback will compose
+ * a response and call our reply-callback to send it to the client.
+ *
+ * Simple-IPC only contains one round trip, so we flush and close
+ * here after the response.
+ */
+static int do_io(struct ipc_server_thread_data *server_thread_data)
+{
+	struct strbuf buf = STRBUF_INIT;
+	struct ipc_server_reply_data reply_data;
+	int ret = 0;
+
+	reply_data.magic = MAGIC_SERVER_REPLY_DATA;
+	reply_data.server_thread_data = server_thread_data;
+
+	reply_data.fd = dup_fd_from_pipe(server_thread_data->hPipe);
+	if (reply_data.fd < 0)
+		return error(_("could not create fd from pipe for '%s'"),
+			     server_thread_data->server_data->buf_path.buf);
+
+	ret = read_packetized_to_strbuf(
+		reply_data.fd, &buf,
+		PACKET_READ_GENTLE_ON_EOF | PACKET_READ_GENTLE_ON_READ_ERROR);
+	if (ret >= 0) {
+		ret = server_thread_data->server_data->application_cb(
+			server_thread_data->server_data->application_data,
+			buf.buf, do_io_reply_callback, &reply_data);
+
+		packet_flush_gently(reply_data.fd);
+
+		FlushFileBuffers((HANDLE)_get_osfhandle((reply_data.fd)));
+	}
+	else {
+		/*
+		 * The client probably disconnected/shutdown before it
+		 * could send a well-formed message.  Ignore it.
+		 */
+	}
+
+	strbuf_release(&buf);
+	close(reply_data.fd);
+
+	return ret;
+}
+
+/*
+ * Handle IPC request and response with this connected client.  And reset
+ * the pipe to prepare for the next client.
+ */
+static int use_connection(struct ipc_server_thread_data *server_thread_data)
+{
+	int ret;
+
+	ret = do_io(server_thread_data);
+
+	FlushFileBuffers(server_thread_data->hPipe);
+	DisconnectNamedPipe(server_thread_data->hPipe);
+
+	return ret;
+}
+
+/*
+ * Thread proc for an IPC server worker thread.  It handles a series of
+ * connections from clients.  It cleans and reuses the hPipe between each
+ * client.
+ */
+static void *server_thread_proc(void *_server_thread_data)
+{
+	struct ipc_server_thread_data *server_thread_data = _server_thread_data;
+	HANDLE hEventConnected = INVALID_HANDLE_VALUE;
+	OVERLAPPED oConnect;
+	enum connect_result cr;
+	int ret;
+
+	assert(server_thread_data->hPipe != INVALID_HANDLE_VALUE);
+
+	trace2_thread_start("ipc-server");
+	trace2_data_string("ipc-server", NULL, "pipe",
+			   server_thread_data->server_data->buf_path.buf);
+
+	hEventConnected = CreateEventW(NULL, TRUE, FALSE, NULL);
+
+	memset(&oConnect, 0, sizeof(oConnect));
+	oConnect.hEvent = hEventConnected;
+
+	for (;;) {
+		cr = wait_for_connection(server_thread_data, &oConnect);
+
+		switch (cr) {
+		case CR_SHUTDOWN:
+			goto finished;
+
+		case CR_CONNECTED:
+			ret = use_connection(server_thread_data);
+			if (ret == SIMPLE_IPC_QUIT) {
+				ipc_server_stop_async(
+					server_thread_data->server_data);
+				goto finished;
+			}
+			if (ret > 0) {
+				/*
+				 * Ignore (transient) IO errors with this
+				 * client and reset for the next client.
+				 */
+			}
+			break;
+
+		case CR_CONNECT_PENDING:
+			/* By construction, this should not happen. */
+			BUG("ipc-server[%s]: unexpected CR_CONNECT_PENDING",
+			    server_thread_data->server_data->buf_path.buf);
+
+		case CR_CONNECT_ERROR:
+		case CR_WAIT_ERROR:
+			/*
+			 * Ignore these theoretical errors.
+			 */
+			DisconnectNamedPipe(server_thread_data->hPipe);
+			break;
+
+		default:
+			BUG("unhandled case after wait_for_connection");
+		}
+	}
+
+finished:
+	CloseHandle(server_thread_data->hPipe);
+	CloseHandle(hEventConnected);
+
+	trace2_thread_exit();
+	return NULL;
+}
+
+static HANDLE create_new_pipe(wchar_t *wpath, int is_first)
+{
+	HANDLE hPipe;
+	DWORD dwOpenMode, dwPipeMode;
+	LPSECURITY_ATTRIBUTES lpsa = NULL;
+
+	dwOpenMode = PIPE_ACCESS_INBOUND | PIPE_ACCESS_OUTBOUND |
+		FILE_FLAG_OVERLAPPED;
+
+	dwPipeMode = PIPE_TYPE_MESSAGE | PIPE_READMODE_BYTE | PIPE_WAIT |
+		PIPE_REJECT_REMOTE_CLIENTS;
+
+	if (is_first) {
+		dwOpenMode |= FILE_FLAG_FIRST_PIPE_INSTANCE;
+
+		/*
+		 * On Windows, the first server pipe instance gets to
+		 * set the ACL / Security Attributes on the named
+		 * pipe; subsequent instances inherit and cannot
+		 * change them.
+		 *
+		 * TODO Should we allow the application layer to
+		 * specify security attributes, such as `LocalService`
+		 * or `LocalSystem`, when we create the named pipe?
+		 * This question is probably not important when the
+		 * daemon is started by a foreground user process and
+		 * only needs to talk to the current user, but may be
+		 * if the daemon is run via the Control Panel as a
+		 * System Service.
+		 */
+	}
+
+	hPipe = CreateNamedPipeW(wpath, dwOpenMode, dwPipeMode,
+				 PIPE_UNLIMITED_INSTANCES, 1024, 1024, 0, lpsa);
+
+	return hPipe;
+}
+
+int ipc_server_run_async(struct ipc_server_data **returned_server_data,
+			 const char *path, const struct ipc_server_opts *opts,
+			 ipc_server_application_cb *application_cb,
+			 void *application_data)
+{
+	struct ipc_server_data *server_data;
+	wchar_t wpath[MAX_PATH];
+	HANDLE hPipeFirst = INVALID_HANDLE_VALUE;
+	int k;
+	int ret = 0;
+	int nr_threads = opts->nr_threads;
+
+	*returned_server_data = NULL;
+
+	ret = initialize_pipe_name(path, wpath, ARRAY_SIZE(wpath));
+	if (ret < 0) {
+		errno = EINVAL;
+		return -1;
+	}
+
+	hPipeFirst = create_new_pipe(wpath, 1);
+	if (hPipeFirst == INVALID_HANDLE_VALUE) {
+		errno = EADDRINUSE;
+		return -2;
+	}
+
+	server_data = xcalloc(1, sizeof(*server_data));
+	server_data->magic = MAGIC_SERVER_DATA;
+	server_data->application_cb = application_cb;
+	server_data->application_data = application_data;
+	server_data->hEventStopRequested = CreateEvent(NULL, TRUE, FALSE, NULL);
+	strbuf_init(&server_data->buf_path, 0);
+	strbuf_addstr(&server_data->buf_path, path);
+	wcscpy(server_data->wpath, wpath);
+
+	if (nr_threads < 1)
+		nr_threads = 1;
+
+	for (k = 0; k < nr_threads; k++) {
+		struct ipc_server_thread_data *std;
+
+		std = xcalloc(1, sizeof(*std));
+		std->magic = MAGIC_SERVER_THREAD_DATA;
+		std->server_data = server_data;
+		std->hPipe = INVALID_HANDLE_VALUE;
+
+		std->hPipe = (k == 0)
+			? hPipeFirst
+			: create_new_pipe(server_data->wpath, 0);
+
+		if (std->hPipe == INVALID_HANDLE_VALUE) {
+			/*
+			 * If we've reached a pipe instance limit for
+			 * this path, just use fewer threads.
+			 */
+			free(std);
+			break;
+		}
+
+		if (pthread_create(&std->pthread_id, NULL,
+				   server_thread_proc, std)) {
+			/*
+			 * Likewise, if we're out of threads, just use
+			 * fewer threads than requested.
+			 *
+			 * However, we just give up if we can't even get
+			 * one thread.  This should not happen.
+			 */
+			if (k == 0)
+				die(_("could not start thread[0] for '%s'"),
+				    path);
+
+			CloseHandle(std->hPipe);
+			free(std);
+			break;
+		}
+
+		std->next_thread = server_data->thread_list;
+		server_data->thread_list = std;
+	}
+
+	*returned_server_data = server_data;
+	return 0;
+}
+
+int ipc_server_stop_async(struct ipc_server_data *server_data)
+{
+	if (!server_data)
+		return 0;
+
+	/*
+	 * Gently tell all of the ipc_server threads to shutdown.
+	 * This will be seen the next time they are idle (and waiting
+	 * for a connection).
+	 *
+	 * We DO NOT attempt to force them to drop an active connection.
+	 */
+	SetEvent(server_data->hEventStopRequested);
+	return 0;
+}
+
+int ipc_server_await(struct ipc_server_data *server_data)
+{
+	DWORD dwWaitResult;
+
+	if (!server_data)
+		return 0;
+
+	dwWaitResult = WaitForSingleObject(server_data->hEventStopRequested, INFINITE);
+	if (dwWaitResult != WAIT_OBJECT_0)
+		return error(_("wait for hEvent failed for '%s'"),
+			     server_data->buf_path.buf);
+
+	while (server_data->thread_list) {
+		struct ipc_server_thread_data *std = server_data->thread_list;
+
+		pthread_join(std->pthread_id, NULL);
+
+		server_data->thread_list = std->next_thread;
+		free(std);
+	}
+
+	server_data->is_stopped = 1;
+
+	return 0;
+}
+
+void ipc_server_free(struct ipc_server_data *server_data)
+{
+	if (!server_data)
+		return;
+
+	if (!server_data->is_stopped)
+		BUG("cannot free ipc-server while running for '%s'",
+		    server_data->buf_path.buf);
+
+	strbuf_release(&server_data->buf_path);
+
+	if (server_data->hEventStopRequested != INVALID_HANDLE_VALUE)
+		CloseHandle(server_data->hEventStopRequested);
+
+	while (server_data->thread_list) {
+		struct ipc_server_thread_data *std = server_data->thread_list;
+
+		server_data->thread_list = std->next_thread;
+		free(std);
+	}
+
+	free(server_data);
+}
diff --git a/config.mak.uname b/config.mak.uname
index e22d4b6d67a3..2b3303f34be8 100644
--- a/config.mak.uname
+++ b/config.mak.uname
@@ -421,6 +421,7 @@ ifeq ($(uname_S),Windows)
 	RUNTIME_PREFIX = YesPlease
 	HAVE_WPGMPTR = YesWeDo
 	NO_ST_BLOCKS_IN_STRUCT_STAT = YesPlease
+	USE_WIN32_IPC = YesPlease
 	USE_WIN32_MMAP = YesPlease
 	MMAP_PREVENTS_DELETE = UnfortunatelyYes
 	# USE_NED_ALLOCATOR = YesPlease
@@ -597,6 +598,7 @@ ifneq (,$(findstring MINGW,$(uname_S)))
 	RUNTIME_PREFIX = YesPlease
 	HAVE_WPGMPTR = YesWeDo
 	NO_ST_BLOCKS_IN_STRUCT_STAT = YesPlease
+	USE_WIN32_IPC = YesPlease
 	USE_WIN32_MMAP = YesPlease
 	MMAP_PREVENTS_DELETE = UnfortunatelyYes
 	USE_NED_ALLOCATOR = YesPlease
diff --git a/contrib/buildsystems/CMakeLists.txt b/contrib/buildsystems/CMakeLists.txt
index ac3dbc079af8..40c9e8e3bd9d 100644
--- a/contrib/buildsystems/CMakeLists.txt
+++ b/contrib/buildsystems/CMakeLists.txt
@@ -246,6 +246,10 @@ elseif(CMAKE_SYSTEM_NAME STREQUAL "Linux")
 	list(APPEND compat_SOURCES unix-socket.c)
 endif()
 
+if(CMAKE_SYSTEM_NAME STREQUAL "Windows")
+	list(APPEND compat_SOURCES compat/simple-ipc/ipc-shared.c compat/simple-ipc/ipc-win32.c)
+endif()
+
 set(EXE_EXTENSION ${CMAKE_EXECUTABLE_SUFFIX})
 
 #header checks
diff --git a/simple-ipc.h b/simple-ipc.h
new file mode 100644
index 000000000000..ab5619e3d76f
--- /dev/null
+++ b/simple-ipc.h
@@ -0,0 +1,228 @@
+#ifndef GIT_SIMPLE_IPC_H
+#define GIT_SIMPLE_IPC_H
+
+/*
+ * See Documentation/technical/api-simple-ipc.txt
+ */
+
+#if defined(GIT_WINDOWS_NATIVE)
+#define SUPPORTS_SIMPLE_IPC
+#endif
+
+#ifdef SUPPORTS_SIMPLE_IPC
+#include "pkt-line.h"
+
+/*
+ * Simple IPC Client Side API.
+ */
+
+enum ipc_active_state {
+	/*
+	 * The pipe/socket exists and the daemon is waiting for connections.
+	 */
+	IPC_STATE__LISTENING = 0,
+
+	/*
+	 * The pipe/socket exists, but the daemon is not listening.
+	 * Perhaps it is very busy.
+	 * Perhaps the daemon died without deleting the path.
+	 * Perhaps it is shutting down and draining existing clients.
+	 * Perhaps it is dead, but other clients are lingering and
+	 * still holding a reference to the pathname.
+	 */
+	IPC_STATE__NOT_LISTENING,
+
+	/*
+	 * The requested pathname is bogus and no amount of retries
+	 * will fix that.
+	 */
+	IPC_STATE__INVALID_PATH,
+
+	/*
+	 * The requested pathname is not found.  This usually means
+	 * that there is no daemon present.
+	 */
+	IPC_STATE__PATH_NOT_FOUND,
+
+	IPC_STATE__OTHER_ERROR,
+};
+
+struct ipc_client_connect_options {
+	/*
+	 * Spin under timeout if the server is running but can't
+	 * accept our connection yet.  This should always be set
+	 * unless you just want to poke the server and see if it
+	 * is alive.
+	 */
+	unsigned int wait_if_busy:1;
+
+	/*
+	 * Spin under timeout if the pipe/socket is not yet present
+	 * on the file system.  This is useful if we just started
+	 * the service and need to wait for it to become ready.
+	 */
+	unsigned int wait_if_not_found:1;
+};
+
+#define IPC_CLIENT_CONNECT_OPTIONS_INIT { \
+	.wait_if_busy = 0, \
+	.wait_if_not_found = 0, \
+}
+
+/*
+ * Determine if a server is listening on this named pipe or socket using
+ * platform-specific logic.  This might just probe the filesystem or it
+ * might make a trivial connection to the server using this pathname.
+ */
+enum ipc_active_state ipc_get_active_state(const char *path);
+
+struct ipc_client_connection {
+	int fd;
+};
+
+/*
+ * Try to connect to the daemon on the named pipe or socket.
+ *
+ * Returns IPC_STATE__LISTENING and a connection handle.
+ *
+ * Otherwise, returns info to help decide whether to retry or to
+ * spawn/respawn the server.
+ */
+enum ipc_active_state ipc_client_try_connect(
+	const char *path,
+	const struct ipc_client_connect_options *options,
+	struct ipc_client_connection **p_connection);
+
+void ipc_client_close_connection(struct ipc_client_connection *connection);
+
+/*
+ * Used by the client to synchronously send and receive a message with
+ * the server on the provided client connection.
+ *
+ * Returns 0 when successful.
+ *
+ * Calls error() and returns non-zero otherwise.
+ */
+int ipc_client_send_command_to_connection(
+	struct ipc_client_connection *connection,
+	const char *message, struct strbuf *answer);
+
+/*
+ * Used by the client to synchronously connect and send and receive a
+ * message to the server listening at the given path.
+ *
+ * Returns 0 when successful.
+ *
+ * Calls error() and returns non-zero otherwise.
+ */
+int ipc_client_send_command(const char *path,
+			    const struct ipc_client_connect_options *options,
+			    const char *message, struct strbuf *answer);
+
+/*
+ * Simple IPC Server Side API.
+ */
+
+struct ipc_server_reply_data;
+
+typedef int (ipc_server_reply_cb)(struct ipc_server_reply_data *,
+				  const char *response,
+				  size_t response_len);
+
+/*
+ * Prototype for an application-supplied callback to process incoming
+ * client IPC messages and compose a reply.  The `application_cb` should
+ * use the provided `reply_cb` and `reply_data` to send an IPC response
+ * back to the client.  The `reply_cb` callback can be called multiple
+ * times for chunking purposes.  A reply message is optional and may be
+ * omitted if not necessary for the application.
+ *
+ * The return value from the application callback is ignored.
+ * The value `SIMPLE_IPC_QUIT` can be used to shutdown the server.
+ */
+typedef int (ipc_server_application_cb)(void *application_data,
+					const char *request,
+					ipc_server_reply_cb *reply_cb,
+					struct ipc_server_reply_data *reply_data);
+
+#define SIMPLE_IPC_QUIT -2
+
+/*
+ * Opaque instance data to represent an IPC server instance.
+ */
+struct ipc_server_data;
+
+/*
+ * Control parameters for the IPC server instance.
+ * Use this to hide platform-specific settings.
+ */
+struct ipc_server_opts
+{
+	int nr_threads;
+};
+
+/*
+ * Start an IPC server instance in one or more background threads
+ * and return a handle to the pool.
+ *
+ * Returns 0 if the asynchronous server pool was started successfully.
+ * Returns -1 if not.
+ * Returns -2 if we could not startup because another server is using
+ * the socket or named pipe.
+ *
+ * When a client IPC message is received, the `application_cb` will be
+ * called (possibly on a random thread) to handle the message and
+ * optionally compose a reply message.
+ */
+int ipc_server_run_async(struct ipc_server_data **returned_server_data,
+			 const char *path, const struct ipc_server_opts *opts,
+			 ipc_server_application_cb *application_cb,
+			 void *application_data);
+
+/*
+ * Gently signal the IPC server pool to shutdown.  No new client
+ * connections will be accepted, but existing connections will be
+ * allowed to complete.
+ */
+int ipc_server_stop_async(struct ipc_server_data *server_data);
+
+/*
+ * Block the calling thread until all threads in the IPC server pool
+ * have completed and been joined.
+ */
+int ipc_server_await(struct ipc_server_data *server_data);
+
+/*
+ * Close and free all resource handles associated with the IPC server
+ * pool.
+ */
+void ipc_server_free(struct ipc_server_data *server_data);
+
+/*
+ * Run an IPC server instance and block the calling thread of the
+ * current process.  It does not return until the IPC server has
+ * either shutdown or had an unrecoverable error.
+ *
+ * The IPC server handles incoming IPC messages from client processes
+ * and may use one or more background threads as necessary.
+ *
+ * Returns 0 after the server has completed successfully.
+ * Returns -1 if the server cannot be started.
+ * Returns -2 if we could not startup because another server is using
+ * the socket or named pipe.
+ *
+ * When a client IPC message is received, the `application_cb` will be
+ * called (possibly on a random thread) to handle the message and
+ * optionally compose a reply message.
+ *
+ * Note that `ipc_server_run()` is a synchronous wrapper around the
+ * above asynchronous routines.  It effectively hides all of the
+ * server state and thread details from the caller and presents a
+ * simple synchronous interface.
+ */
+int ipc_server_run(const char *path, const struct ipc_server_opts *opts,
+		   ipc_server_application_cb *application_cb,
+		   void *application_data);
+
+#endif /* SUPPORTS_SIMPLE_IPC */
+#endif /* GIT_SIMPLE_IPC_H */
-- 
gitgitgadget


^ permalink raw reply related	[flat|nested] 178+ messages in thread

* [PATCH v7 07/12] unix-socket: eliminate static unix_stream_socket() helper function
  2021-03-22 10:29           ` [PATCH v7 00/12] Simple IPC Mechanism Jeff Hostetler via GitGitGadget
                               ` (5 preceding siblings ...)
  2021-03-22 10:29             ` [PATCH v7 06/12] simple-ipc: add win32 implementation Jeff Hostetler via GitGitGadget
@ 2021-03-22 10:29             ` Jeff Hostetler via GitGitGadget
  2021-03-22 10:29             ` [PATCH v7 08/12] unix-socket: add backlog size option to unix_stream_listen() Jeff Hostetler via GitGitGadget
                               ` (4 subsequent siblings)
  11 siblings, 0 replies; 178+ messages in thread
From: Jeff Hostetler via GitGitGadget @ 2021-03-22 10:29 UTC (permalink / raw)
  To: git
  Cc: Ævar Arnfjörð Bjarmason, Jeff Hostetler,
	Jeff King, SZEDER Gábor, Johannes Schindelin, Chris Torek,
	Jeff Hostetler, Jeff Hostetler

From: Jeff Hostetler <jeffhost@microsoft.com>

The static helper function `unix_stream_socket()` calls `die()`.  This
is not appropriate for all callers.  Eliminate the wrapper function
and make the callers propagate the error.

Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
---
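A sketch of the error handling this allows in library code (illustration
only; `try_connect_gently()` is a hypothetical caller):

static int try_connect_gently(const char *path)
{
	int fd = unix_stream_connect(path);

	if (fd < 0)
		return error_errno(_("could not connect to '%s'"), path);
	return fd;
}
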
 unix-socket.c | 27 +++++++++++++--------------
 1 file changed, 13 insertions(+), 14 deletions(-)

diff --git a/unix-socket.c b/unix-socket.c
index 19ed48be9902..69f81d64e9d5 100644
--- a/unix-socket.c
+++ b/unix-socket.c
@@ -1,14 +1,6 @@
 #include "cache.h"
 #include "unix-socket.h"
 
-static int unix_stream_socket(void)
-{
-	int fd = socket(AF_UNIX, SOCK_STREAM, 0);
-	if (fd < 0)
-		die_errno("unable to create socket");
-	return fd;
-}
-
 static int chdir_len(const char *orig, int len)
 {
 	char *path = xmemdupz(orig, len);
@@ -73,13 +65,16 @@ static int unix_sockaddr_init(struct sockaddr_un *sa, const char *path,
 
 int unix_stream_connect(const char *path)
 {
-	int fd, saved_errno;
+	int fd = -1, saved_errno;
 	struct sockaddr_un sa;
 	struct unix_sockaddr_context ctx;
 
 	if (unix_sockaddr_init(&sa, path, &ctx) < 0)
 		return -1;
-	fd = unix_stream_socket();
+	fd = socket(AF_UNIX, SOCK_STREAM, 0);
+	if (fd < 0)
+		goto fail;
+
 	if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0)
 		goto fail;
 	unix_sockaddr_cleanup(&ctx);
@@ -87,15 +82,16 @@ int unix_stream_connect(const char *path)
 
 fail:
 	saved_errno = errno;
+	if (fd != -1)
+		close(fd);
 	unix_sockaddr_cleanup(&ctx);
-	close(fd);
 	errno = saved_errno;
 	return -1;
 }
 
 int unix_stream_listen(const char *path)
 {
-	int fd, saved_errno;
+	int fd = -1, saved_errno;
 	struct sockaddr_un sa;
 	struct unix_sockaddr_context ctx;
 
@@ -103,7 +99,9 @@ int unix_stream_listen(const char *path)
 
 	if (unix_sockaddr_init(&sa, path, &ctx) < 0)
 		return -1;
-	fd = unix_stream_socket();
+	fd = socket(AF_UNIX, SOCK_STREAM, 0);
+	if (fd < 0)
+		goto fail;
 
 	if (bind(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0)
 		goto fail;
@@ -116,8 +114,9 @@ int unix_stream_listen(const char *path)
 
 fail:
 	saved_errno = errno;
+	if (fd != -1)
+		close(fd);
 	unix_sockaddr_cleanup(&ctx);
-	close(fd);
 	errno = saved_errno;
 	return -1;
 }
-- 
gitgitgadget


^ permalink raw reply related	[flat|nested] 178+ messages in thread

* [PATCH v7 08/12] unix-socket: add backlog size option to unix_stream_listen()
  2021-03-22 10:29           ` [PATCH v7 00/12] Simple IPC Mechanism Jeff Hostetler via GitGitGadget
                               ` (6 preceding siblings ...)
  2021-03-22 10:29             ` [PATCH v7 07/12] unix-socket: eliminate static unix_stream_socket() helper function Jeff Hostetler via GitGitGadget
@ 2021-03-22 10:29             ` Jeff Hostetler via GitGitGadget
  2021-03-22 10:29             ` [PATCH v7 09/12] unix-socket: disallow chdir() when creating unix domain sockets Jeff Hostetler via GitGitGadget
                               ` (3 subsequent siblings)
  11 siblings, 0 replies; 178+ messages in thread
From: Jeff Hostetler via GitGitGadget @ 2021-03-22 10:29 UTC (permalink / raw)
  To: git
  Cc: Ævar Arnfjörð Bjarmason, Jeff Hostetler,
	Jeff King, SZEDER Gábor, Johannes Schindelin, Chris Torek,
	Jeff Hostetler, Jeff Hostetler

From: Jeff Hostetler <jeffhost@microsoft.com>

Update `unix_stream_listen()` to take an options structure to override
default behaviors.  This commit includes the size of the `listen()` backlog.

Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
---
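A sketch of a caller overriding the default backlog (illustration only;
the helper name and the backlog value are invented):

static int listen_with_deep_backlog(const char *path)
{
	struct unix_stream_listen_opts opts = UNIX_STREAM_LISTEN_OPTS_INIT;

	opts.listen_backlog_size = 64;	/* <= 0 keeps the built-in default of 5 */

	return unix_stream_listen(path, &opts);
}
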
 builtin/credential-cache--daemon.c |  3 ++-
 unix-socket.c                      | 11 +++++++++--
 unix-socket.h                      |  9 ++++++++-
 3 files changed, 19 insertions(+), 4 deletions(-)

diff --git a/builtin/credential-cache--daemon.c b/builtin/credential-cache--daemon.c
index c61f123a3b81..4c6c89ab0de2 100644
--- a/builtin/credential-cache--daemon.c
+++ b/builtin/credential-cache--daemon.c
@@ -203,9 +203,10 @@ static int serve_cache_loop(int fd)
 
 static void serve_cache(const char *socket_path, int debug)
 {
+	struct unix_stream_listen_opts opts = UNIX_STREAM_LISTEN_OPTS_INIT;
 	int fd;
 
-	fd = unix_stream_listen(socket_path);
+	fd = unix_stream_listen(socket_path, &opts);
 	if (fd < 0)
 		die_errno("unable to bind to '%s'", socket_path);
 
diff --git a/unix-socket.c b/unix-socket.c
index 69f81d64e9d5..012becd93d57 100644
--- a/unix-socket.c
+++ b/unix-socket.c
@@ -1,6 +1,8 @@
 #include "cache.h"
 #include "unix-socket.h"
 
+#define DEFAULT_UNIX_STREAM_LISTEN_BACKLOG (5)
+
 static int chdir_len(const char *orig, int len)
 {
 	char *path = xmemdupz(orig, len);
@@ -89,9 +91,11 @@ int unix_stream_connect(const char *path)
 	return -1;
 }
 
-int unix_stream_listen(const char *path)
+int unix_stream_listen(const char *path,
+		       const struct unix_stream_listen_opts *opts)
 {
 	int fd = -1, saved_errno;
+	int backlog;
 	struct sockaddr_un sa;
 	struct unix_sockaddr_context ctx;
 
@@ -106,7 +110,10 @@ int unix_stream_listen(const char *path)
 	if (bind(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0)
 		goto fail;
 
-	if (listen(fd, 5) < 0)
+	backlog = opts->listen_backlog_size;
+	if (backlog <= 0)
+		backlog = DEFAULT_UNIX_STREAM_LISTEN_BACKLOG;
+	if (listen(fd, backlog) < 0)
 		goto fail;
 
 	unix_sockaddr_cleanup(&ctx);
diff --git a/unix-socket.h b/unix-socket.h
index e271aeec5a07..ec2fb3ea7267 100644
--- a/unix-socket.h
+++ b/unix-socket.h
@@ -1,7 +1,14 @@
 #ifndef UNIX_SOCKET_H
 #define UNIX_SOCKET_H
 
+struct unix_stream_listen_opts {
+	int listen_backlog_size;
+};
+
+#define UNIX_STREAM_LISTEN_OPTS_INIT { 0 }
+
 int unix_stream_connect(const char *path);
-int unix_stream_listen(const char *path);
+int unix_stream_listen(const char *path,
+		       const struct unix_stream_listen_opts *opts);
 
 #endif /* UNIX_SOCKET_H */
-- 
gitgitgadget


^ permalink raw reply related	[flat|nested] 178+ messages in thread

* [PATCH v7 09/12] unix-socket: disallow chdir() when creating unix domain sockets
  2021-03-22 10:29           ` [PATCH v7 00/12] Simple IPC Mechanism Jeff Hostetler via GitGitGadget
                               ` (7 preceding siblings ...)
  2021-03-22 10:29             ` [PATCH v7 08/12] unix-socket: add backlog size option to unix_stream_listen() Jeff Hostetler via GitGitGadget
@ 2021-03-22 10:29             ` Jeff Hostetler via GitGitGadget
  2021-03-22 10:29             ` [PATCH v7 10/12] unix-stream-server: create unix domain socket under lock Jeff Hostetler via GitGitGadget
                               ` (2 subsequent siblings)
  11 siblings, 0 replies; 178+ messages in thread
From: Jeff Hostetler via GitGitGadget @ 2021-03-22 10:29 UTC (permalink / raw)
  To: git
  Cc: Ævar Arnfjörð Bjarmason, Jeff Hostetler,
	Jeff King, SZEDER Gábor, Johannes Schindelin, Chris Torek,
	Jeff Hostetler, Jeff Hostetler

From: Jeff Hostetler <jeffhost@microsoft.com>

Calls to `chdir()` are dangerous in a multi-threaded context.  If
`unix_stream_listen()` or `unix_stream_connect()` is given a socket
pathname that is too long to fit in a `sockaddr_un` structure, it will
`chdir()` to the parent directory of the requested socket pathname,
create the socket using a relative pathname, and then `chdir()` back.
This is not thread-safe.

Add a `disallow_chdir` flag and teach `unix_sockaddr_init()` to refuse
to call `chdir()` (failing with `ENAMETOOLONG` instead) when the flag
is set.
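
For illustration only (not part of this patch), a multi-threaded caller
could opt out of the chdir() fallback on both the listening and the
connecting side roughly like this (helper names are made up):

#include "cache.h"
#include "unix-socket.h"

/*
 * Illustration only: refuse the chdir() fallback so that an overlong
 * socket pathname fails with ENAMETOOLONG instead of chdir()-ing.
 */
static int listen_without_chdir(const char *path)
{
	struct unix_stream_listen_opts opts = UNIX_STREAM_LISTEN_OPTS_INIT;

	opts.disallow_chdir = 1;

	return unix_stream_listen(path, &opts);
}

static int connect_without_chdir(const char *path)
{
	return unix_stream_connect(path, 1 /* disallow_chdir */);
}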

Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
---
 builtin/credential-cache.c |  2 +-
 unix-socket.c              | 17 ++++++++++++-----
 unix-socket.h              |  3 ++-
 3 files changed, 15 insertions(+), 7 deletions(-)

diff --git a/builtin/credential-cache.c b/builtin/credential-cache.c
index 9b3f70990597..76a6ba37223f 100644
--- a/builtin/credential-cache.c
+++ b/builtin/credential-cache.c
@@ -14,7 +14,7 @@
 static int send_request(const char *socket, const struct strbuf *out)
 {
 	int got_data = 0;
-	int fd = unix_stream_connect(socket);
+	int fd = unix_stream_connect(socket, 0);
 
 	if (fd < 0)
 		return -1;
diff --git a/unix-socket.c b/unix-socket.c
index 012becd93d57..e0be1badb58d 100644
--- a/unix-socket.c
+++ b/unix-socket.c
@@ -30,16 +30,23 @@ static void unix_sockaddr_cleanup(struct unix_sockaddr_context *ctx)
 }
 
 static int unix_sockaddr_init(struct sockaddr_un *sa, const char *path,
-			      struct unix_sockaddr_context *ctx)
+			      struct unix_sockaddr_context *ctx,
+			      int disallow_chdir)
 {
 	int size = strlen(path) + 1;
 
 	ctx->orig_dir = NULL;
 	if (size > sizeof(sa->sun_path)) {
-		const char *slash = find_last_dir_sep(path);
+		const char *slash;
 		const char *dir;
 		struct strbuf cwd = STRBUF_INIT;
 
+		if (disallow_chdir) {
+			errno = ENAMETOOLONG;
+			return -1;
+		}
+
+		slash = find_last_dir_sep(path);
 		if (!slash) {
 			errno = ENAMETOOLONG;
 			return -1;
@@ -65,13 +72,13 @@ static int unix_sockaddr_init(struct sockaddr_un *sa, const char *path,
 	return 0;
 }
 
-int unix_stream_connect(const char *path)
+int unix_stream_connect(const char *path, int disallow_chdir)
 {
 	int fd = -1, saved_errno;
 	struct sockaddr_un sa;
 	struct unix_sockaddr_context ctx;
 
-	if (unix_sockaddr_init(&sa, path, &ctx) < 0)
+	if (unix_sockaddr_init(&sa, path, &ctx, disallow_chdir) < 0)
 		return -1;
 	fd = socket(AF_UNIX, SOCK_STREAM, 0);
 	if (fd < 0)
@@ -101,7 +108,7 @@ int unix_stream_listen(const char *path,
 
 	unlink(path);
 
-	if (unix_sockaddr_init(&sa, path, &ctx) < 0)
+	if (unix_sockaddr_init(&sa, path, &ctx, opts->disallow_chdir) < 0)
 		return -1;
 	fd = socket(AF_UNIX, SOCK_STREAM, 0);
 	if (fd < 0)
diff --git a/unix-socket.h b/unix-socket.h
index ec2fb3ea7267..8542cdd7995d 100644
--- a/unix-socket.h
+++ b/unix-socket.h
@@ -3,11 +3,12 @@
 
 struct unix_stream_listen_opts {
 	int listen_backlog_size;
+	unsigned int disallow_chdir:1;
 };
 
 #define UNIX_STREAM_LISTEN_OPTS_INIT { 0 }
 
-int unix_stream_connect(const char *path);
+int unix_stream_connect(const char *path, int disallow_chdir);
 int unix_stream_listen(const char *path,
 		       const struct unix_stream_listen_opts *opts);
 
-- 
gitgitgadget


^ permalink raw reply related	[flat|nested] 178+ messages in thread

* [PATCH v7 10/12] unix-stream-server: create unix domain socket under lock
  2021-03-22 10:29           ` [PATCH v7 00/12] Simple IPC Mechanism Jeff Hostetler via GitGitGadget
                               ` (8 preceding siblings ...)
  2021-03-22 10:29             ` [PATCH v7 09/12] unix-socket: disallow chdir() when creating unix domain sockets Jeff Hostetler via GitGitGadget
@ 2021-03-22 10:29             ` Jeff Hostetler via GitGitGadget
  2021-03-22 10:29             ` [PATCH v7 11/12] simple-ipc: add Unix domain socket implementation Jeff Hostetler via GitGitGadget
  2021-03-22 10:29             ` [PATCH v7 12/12] t0052: add simple-ipc tests and t/helper/test-simple-ipc tool Jeff Hostetler via GitGitGadget
  11 siblings, 0 replies; 178+ messages in thread
From: Jeff Hostetler via GitGitGadget @ 2021-03-22 10:29 UTC (permalink / raw)
  To: git
  Cc: Ævar Arnfjörð Bjarmason, Jeff Hostetler,
	Jeff King, SZEDER Gábor, Johannes Schindelin, Chris Torek,
	Jeff Hostetler, Jeff Hostetler

From: Jeff Hostetler <jeffhost@microsoft.com>

Create a wrapper class for `unix_stream_listen()` that uses a ".lock"
lockfile to create the unix domain socket in a race-free manner.

Unix domain sockets have a fundamental problem on Unix systems because
they persist in the filesystem until they are deleted.  This is
independent of whether a server is actually listening for connections.
Well-behaved servers are expected to delete the socket when they
shut down.  A new server cannot easily tell whether an existing socket
is attached to an active server or is leftover cruft from a dead server.
The traditional solution used by `unix_stream_listen()` is to force
delete the socket pathname and then create a new socket.  This solves
the latter (cruft) problem, but in the case of the former, it orphans
the existing server (by stealing the pathname associated with the
socket it is listening on).

We cannot directly use a .lock lockfile to create the socket because
the socket is created by `bind(2)` rather than the `open(2)` mechanism
used by `tempfile.c`.

As an alternative, we hold a plain lockfile ("<path>.lock") as a
mutual exclusion device.  Under the lock, we test if an existing
socket ("<path>") is has an active server.  If not, we create a new
socket and begin listening.  Then we use "rollback" to delete the
lockfile in all cases.

This wrapper code conceptually exists at a higher level than the core
unix_stream_connect() and unix_stream_listen() routines that it
consumes.  It is isolated in a wrapper class for clarity.
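
As a rough usage sketch (illustration only, not part of this patch, and
with error handling reduced to die()), a server might use the wrapper
like this:

#include "cache.h"
#include "unix-socket.h"
#include "unix-stream-server.h"

/*
 * Illustration only: create the socket under the "<path>.lock"
 * protocol described above.
 */
static struct unix_ss_socket *open_server_socket(const char *path)
{
	struct unix_stream_listen_opts opts = UNIX_STREAM_LISTEN_OPTS_INIT;
	struct unix_ss_socket *server_socket = NULL;
	int ret;

	ret = unix_ss_create(path, &opts, -1 /* default lock timeout */,
			     &server_socket);
	if (ret == -2)
		die(_("socket '%s' is in use by another server"), path);
	if (ret < 0)
		die_errno(_("could not create socket '%s'"), path);

	return server_socket;
}

/*
 * Illustration only: later, for example on an idle timeout in the
 * accept loop, check whether the socket was stolen; unix_ss_free()
 * closes the fd and only unlinks the path if it was not stolen.
 */
static void stop_if_socket_stolen(struct unix_ss_socket *server_socket)
{
	if (unix_ss_was_stolen(server_socket))
		unix_ss_free(server_socket);
}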

Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
---
 Makefile                            |   1 +
 contrib/buildsystems/CMakeLists.txt |   2 +-
 unix-stream-server.c                | 125 ++++++++++++++++++++++++++++
 unix-stream-server.h                |  33 ++++++++
 4 files changed, 160 insertions(+), 1 deletion(-)
 create mode 100644 unix-stream-server.c
 create mode 100644 unix-stream-server.h

diff --git a/Makefile b/Makefile
index d3c42d3f4f9f..012694276f6d 100644
--- a/Makefile
+++ b/Makefile
@@ -1665,6 +1665,7 @@ ifdef NO_UNIX_SOCKETS
 	BASIC_CFLAGS += -DNO_UNIX_SOCKETS
 else
 	LIB_OBJS += unix-socket.o
+	LIB_OBJS += unix-stream-server.o
 endif
 
 ifdef USE_WIN32_IPC
diff --git a/contrib/buildsystems/CMakeLists.txt b/contrib/buildsystems/CMakeLists.txt
index 40c9e8e3bd9d..c94011269ebb 100644
--- a/contrib/buildsystems/CMakeLists.txt
+++ b/contrib/buildsystems/CMakeLists.txt
@@ -243,7 +243,7 @@ if(CMAKE_SYSTEM_NAME STREQUAL "Windows")
 
 elseif(CMAKE_SYSTEM_NAME STREQUAL "Linux")
 	add_compile_definitions(PROCFS_EXECUTABLE_PATH="/proc/self/exe" HAVE_DEV_TTY )
-	list(APPEND compat_SOURCES unix-socket.c)
+	list(APPEND compat_SOURCES unix-socket.c unix-stream-server.c)
 endif()
 
 if(CMAKE_SYSTEM_NAME STREQUAL "Windows")
diff --git a/unix-stream-server.c b/unix-stream-server.c
new file mode 100644
index 000000000000..efa2a207abcd
--- /dev/null
+++ b/unix-stream-server.c
@@ -0,0 +1,125 @@
+#include "cache.h"
+#include "lockfile.h"
+#include "unix-socket.h"
+#include "unix-stream-server.h"
+
+#define DEFAULT_LOCK_TIMEOUT (100)
+
+/*
+ * Try to connect to a unix domain socket at `path` (if it exists) and
+ * see if there is a server listening.
+ *
+ * We don't know if the socket exists, whether a server died and
+ * failed to clean up, or whether we have a live server listening, so
+ * we "poke" it.
+ *
+ * We immediately hangup without sending/receiving any data because we
+ * don't know anything about the protocol spoken and don't want to
+ * block while writing/reading data.  It is sufficient to just know
+ * that someone is listening.
+ */
+static int is_another_server_alive(const char *path,
+				   const struct unix_stream_listen_opts *opts)
+{
+	int fd = unix_stream_connect(path, opts->disallow_chdir);
+	if (fd >= 0) {
+		close(fd);
+		return 1;
+	}
+
+	return 0;
+}
+
+int unix_ss_create(const char *path,
+		   const struct unix_stream_listen_opts *opts,
+		   long timeout_ms,
+		   struct unix_ss_socket **new_server_socket)
+{
+	struct lock_file lock = LOCK_INIT;
+	int fd_socket;
+	struct unix_ss_socket *server_socket;
+
+	*new_server_socket = NULL;
+
+	if (timeout_ms < 0)
+		timeout_ms = DEFAULT_LOCK_TIMEOUT;
+
+	/*
+	 * Create a lock at "<path>.lock" if we can.
+	 */
+	if (hold_lock_file_for_update_timeout(&lock, path, 0, timeout_ms) < 0)
+		return -1;
+
+	/*
+	 * If another server is listening on "<path>" give up.  We do not
+	 * want to create a socket and steal future connections from them.
+	 */
+	if (is_another_server_alive(path, opts)) {
+		rollback_lock_file(&lock);
+		errno = EADDRINUSE;
+		return -2;
+	}
+
+	/*
+	 * Create and bind to a Unix domain socket at "<path>".
+	 */
+	fd_socket = unix_stream_listen(path, opts);
+	if (fd_socket < 0) {
+		int saved_errno = errno;
+		rollback_lock_file(&lock);
+		errno = saved_errno;
+		return -1;
+	}
+
+	server_socket = xcalloc(1, sizeof(*server_socket));
+	server_socket->path_socket = strdup(path);
+	server_socket->fd_socket = fd_socket;
+	lstat(path, &server_socket->st_socket);
+
+	*new_server_socket = server_socket;
+
+	/*
+	 * Always rollback (just delete) "<path>.lock" because we already created
+	 * "<path>" as a socket and do not want to commit_lock to do the atomic
+	 * rename trick.
+	 */
+	rollback_lock_file(&lock);
+
+	return 0;
+}
+
+void unix_ss_free(struct unix_ss_socket *server_socket)
+{
+	if (!server_socket)
+		return;
+
+	if (server_socket->fd_socket >= 0) {
+		if (!unix_ss_was_stolen(server_socket))
+			unlink(server_socket->path_socket);
+		close(server_socket->fd_socket);
+	}
+
+	free(server_socket->path_socket);
+	free(server_socket);
+}
+
+int unix_ss_was_stolen(struct unix_ss_socket *server_socket)
+{
+	struct stat st_now;
+
+	if (!server_socket)
+		return 0;
+
+	if (lstat(server_socket->path_socket, &st_now) == -1)
+		return 1;
+
+	if (st_now.st_ino != server_socket->st_socket.st_ino)
+		return 1;
+	if (st_now.st_dev != server_socket->st_socket.st_dev)
+		return 1;
+
+	if (!S_ISSOCK(st_now.st_mode))
+		return 1;
+
+	return 0;
+}
diff --git a/unix-stream-server.h b/unix-stream-server.h
new file mode 100644
index 000000000000..ae2712ba39b1
--- /dev/null
+++ b/unix-stream-server.h
@@ -0,0 +1,33 @@
+#ifndef UNIX_STREAM_SERVER_H
+#define UNIX_STREAM_SERVER_H
+
+#include "unix-socket.h"
+
+struct unix_ss_socket {
+	char *path_socket;
+	struct stat st_socket;
+	int fd_socket;
+};
+
+/*
+ * Create a Unix Domain Socket at the given path under the protection
+ * of a '.lock' lockfile.
+ *
+ * Returns 0 on success, -1 on error, -2 if socket is in use.
+ */
+int unix_ss_create(const char *path,
+		   const struct unix_stream_listen_opts *opts,
+		   long timeout_ms,
+		   struct unix_ss_socket **server_socket);
+
+/*
+ * Close and delete the socket.
+ */
+void unix_ss_free(struct unix_ss_socket *server_socket);
+
+/*
+ * Return 1 if the inode of the pathname to our socket changes.
+ */
+int unix_ss_was_stolen(struct unix_ss_socket *server_socket);
+
+#endif /* UNIX_STREAM_SERVER_H */
-- 
gitgitgadget


^ permalink raw reply related	[flat|nested] 178+ messages in thread

* [PATCH v7 11/12] simple-ipc: add Unix domain socket implementation
  2021-03-22 10:29           ` [PATCH v7 00/12] Simple IPC Mechanism Jeff Hostetler via GitGitGadget
                               ` (9 preceding siblings ...)
  2021-03-22 10:29             ` [PATCH v7 10/12] unix-stream-server: create unix domain socket under lock Jeff Hostetler via GitGitGadget
@ 2021-03-22 10:29             ` Jeff Hostetler via GitGitGadget
  2021-03-22 10:29             ` [PATCH v7 12/12] t0052: add simple-ipc tests and t/helper/test-simple-ipc tool Jeff Hostetler via GitGitGadget
  11 siblings, 0 replies; 178+ messages in thread
From: Jeff Hostetler via GitGitGadget @ 2021-03-22 10:29 UTC (permalink / raw)
  To: git
  Cc: Ævar Arnfjörð Bjarmason, Jeff Hostetler,
	Jeff King, SZEDER Gábor, Johannes Schindelin, Chris Torek,
	Jeff Hostetler, Jeff Hostetler

From: Jeff Hostetler <jeffhost@microsoft.com>

Create a Unix domain socket based implementation of "simple-ipc".

A set of `ipc_client` routines implement a client library to connect
to an `ipc_server` over a Unix domain socket, send a simple request,
and receive a single response.  Clients use blocking IO on the socket.

A set of `ipc_server` routines implement a thread pool to listen for
and concurrently service client connections.

The server creates a new Unix domain socket at a known location.  If a
socket already exists with that name, the server tries to determine if
another server is already listening on the socket or if the socket is
dead.  If the socket is busy, the server exits with an error rather than
stealing the socket.  If the socket is dead, the server creates a new
one and starts up.

If, while running, the server detects that its socket has been stolen
by another server, it automatically exits.
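
As a client-side sketch (illustration only, not part of this patch),
sending a single request and reading the reply looks roughly like this:

#include "cache.h"
#include "simple-ipc.h"
#include "strbuf.h"

/*
 * Illustration only: one-shot client that sends "ping" to the server
 * listening at `path` and prints whatever it answers.
 */
static int ping_server(const char *path)
{
	struct ipc_client_connect_options options =
		IPC_CLIENT_CONNECT_OPTIONS_INIT;
	struct strbuf answer = STRBUF_INIT;
	int ret;

	options.wait_if_busy = 1;	/* retry briefly if the server is busy */

	ret = ipc_client_send_command(path, &options, "ping", &answer);
	if (!ret)
		printf("%s\n", answer.buf);

	strbuf_release(&answer);
	return ret;
}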

Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
---
 Makefile                            |   2 +
 compat/simple-ipc/ipc-unix-socket.c | 999 ++++++++++++++++++++++++++++
 contrib/buildsystems/CMakeLists.txt |   2 +
 simple-ipc.h                        |  13 +-
 4 files changed, 1015 insertions(+), 1 deletion(-)
 create mode 100644 compat/simple-ipc/ipc-unix-socket.c

diff --git a/Makefile b/Makefile
index 012694276f6d..20dd65d19658 100644
--- a/Makefile
+++ b/Makefile
@@ -1666,6 +1666,8 @@ ifdef NO_UNIX_SOCKETS
 else
 	LIB_OBJS += unix-socket.o
 	LIB_OBJS += unix-stream-server.o
+	LIB_OBJS += compat/simple-ipc/ipc-shared.o
+	LIB_OBJS += compat/simple-ipc/ipc-unix-socket.o
 endif
 
 ifdef USE_WIN32_IPC
diff --git a/compat/simple-ipc/ipc-unix-socket.c b/compat/simple-ipc/ipc-unix-socket.c
new file mode 100644
index 000000000000..38689b278df3
--- /dev/null
+++ b/compat/simple-ipc/ipc-unix-socket.c
@@ -0,0 +1,999 @@
+#include "cache.h"
+#include "simple-ipc.h"
+#include "strbuf.h"
+#include "pkt-line.h"
+#include "thread-utils.h"
+#include "unix-socket.h"
+#include "unix-stream-server.h"
+
+#ifdef NO_UNIX_SOCKETS
+#error compat/simple-ipc/ipc-unix-socket.c requires Unix sockets
+#endif
+
+enum ipc_active_state ipc_get_active_state(const char *path)
+{
+	enum ipc_active_state state = IPC_STATE__OTHER_ERROR;
+	struct ipc_client_connect_options options
+		= IPC_CLIENT_CONNECT_OPTIONS_INIT;
+	struct stat st;
+	struct ipc_client_connection *connection_test = NULL;
+
+	options.wait_if_busy = 0;
+	options.wait_if_not_found = 0;
+
+	if (lstat(path, &st) == -1) {
+		switch (errno) {
+		case ENOENT:
+		case ENOTDIR:
+			return IPC_STATE__NOT_LISTENING;
+		default:
+			return IPC_STATE__INVALID_PATH;
+		}
+	}
+
+	/* also complain if a plain file is in the way */
+	if ((st.st_mode & S_IFMT) != S_IFSOCK)
+		return IPC_STATE__INVALID_PATH;
+
+	/*
+	 * Just because the filesystem has an S_IFSOCK type inode
+	 * at `path`, doesn't mean that there is a server listening.
+	 * Ping it to be sure.
+	 */
+	state = ipc_client_try_connect(path, &options, &connection_test);
+	ipc_client_close_connection(connection_test);
+
+	return state;
+}
+
+/*
+ * Retry frequency when trying to connect to a server.
+ *
+ * This value should be short enough that we don't seriously delay our
+ * caller, but not so short that our spinning puts pressure on the
+ * system.
+ */
+#define WAIT_STEP_MS (50)
+
+/*
+ * Try to connect to the server.  If the server is just starting up or
+ * is very busy, we may not get a connection the first time.
+ */
+static enum ipc_active_state connect_to_server(
+	const char *path,
+	int timeout_ms,
+	const struct ipc_client_connect_options *options,
+	int *pfd)
+{
+	int k;
+
+	*pfd = -1;
+
+	for (k = 0; k < timeout_ms; k += WAIT_STEP_MS) {
+		int fd = unix_stream_connect(path, options->uds_disallow_chdir);
+
+		if (fd != -1) {
+			*pfd = fd;
+			return IPC_STATE__LISTENING;
+		}
+
+		if (errno == ENOENT) {
+			if (!options->wait_if_not_found)
+				return IPC_STATE__PATH_NOT_FOUND;
+
+			goto sleep_and_try_again;
+		}
+
+		if (errno == ETIMEDOUT) {
+			if (!options->wait_if_busy)
+				return IPC_STATE__NOT_LISTENING;
+
+			goto sleep_and_try_again;
+		}
+
+		if (errno == ECONNREFUSED) {
+			if (!options->wait_if_busy)
+				return IPC_STATE__NOT_LISTENING;
+
+			goto sleep_and_try_again;
+		}
+
+		return IPC_STATE__OTHER_ERROR;
+
+	sleep_and_try_again:
+		sleep_millisec(WAIT_STEP_MS);
+	}
+
+	return IPC_STATE__NOT_LISTENING;
+}
+
+/*
+ * The total amount of time that we are willing to wait when trying to
+ * connect to a server.
+ *
+ * When the server is first started, it might take a little while for
+ * it to become ready to service requests.  Likewise, the server may
+ * be very (temporarily) busy and not respond to our connections.
+ *
+ * We should gracefully and silently handle those conditions and try
+ * again for a reasonable time period.
+ *
+ * The value chosen here should be long enough for the server
+ * to reliably heal from the above conditions.
+ */
+#define MY_CONNECTION_TIMEOUT_MS (1000)
+
+enum ipc_active_state ipc_client_try_connect(
+	const char *path,
+	const struct ipc_client_connect_options *options,
+	struct ipc_client_connection **p_connection)
+{
+	enum ipc_active_state state = IPC_STATE__OTHER_ERROR;
+	int fd = -1;
+
+	*p_connection = NULL;
+
+	trace2_region_enter("ipc-client", "try-connect", NULL);
+	trace2_data_string("ipc-client", NULL, "try-connect/path", path);
+
+	state = connect_to_server(path, MY_CONNECTION_TIMEOUT_MS,
+				  options, &fd);
+
+	trace2_data_intmax("ipc-client", NULL, "try-connect/state",
+			   (intmax_t)state);
+	trace2_region_leave("ipc-client", "try-connect", NULL);
+
+	if (state == IPC_STATE__LISTENING) {
+		(*p_connection) = xcalloc(1, sizeof(struct ipc_client_connection));
+		(*p_connection)->fd = fd;
+	}
+
+	return state;
+}
+
+void ipc_client_close_connection(struct ipc_client_connection *connection)
+{
+	if (!connection)
+		return;
+
+	if (connection->fd != -1)
+		close(connection->fd);
+
+	free(connection);
+}
+
+int ipc_client_send_command_to_connection(
+	struct ipc_client_connection *connection,
+	const char *message, struct strbuf *answer)
+{
+	int ret = 0;
+
+	strbuf_setlen(answer, 0);
+
+	trace2_region_enter("ipc-client", "send-command", NULL);
+
+	if (write_packetized_from_buf_no_flush(message, strlen(message),
+					       connection->fd) < 0 ||
+	    packet_flush_gently(connection->fd) < 0) {
+		ret = error(_("could not send IPC command"));
+		goto done;
+	}
+
+	if (read_packetized_to_strbuf(
+		    connection->fd, answer,
+		    PACKET_READ_GENTLE_ON_EOF | PACKET_READ_GENTLE_ON_READ_ERROR) < 0) {
+		ret = error(_("could not read IPC response"));
+		goto done;
+	}
+
+done:
+	trace2_region_leave("ipc-client", "send-command", NULL);
+	return ret;
+}
+
+int ipc_client_send_command(const char *path,
+			    const struct ipc_client_connect_options *options,
+			    const char *message, struct strbuf *answer)
+{
+	int ret = -1;
+	enum ipc_active_state state;
+	struct ipc_client_connection *connection = NULL;
+
+	state = ipc_client_try_connect(path, options, &connection);
+
+	if (state != IPC_STATE__LISTENING)
+		return ret;
+
+	ret = ipc_client_send_command_to_connection(connection, message, answer);
+
+	ipc_client_close_connection(connection);
+
+	return ret;
+}
+
+static int set_socket_blocking_flag(int fd, int make_nonblocking)
+{
+	int flags;
+
+	flags = fcntl(fd, F_GETFL, NULL);
+
+	if (flags < 0)
+		return -1;
+
+	if (make_nonblocking)
+		flags |= O_NONBLOCK;
+	else
+		flags &= ~O_NONBLOCK;
+
+	return fcntl(fd, F_SETFL, flags);
+}
+
+/*
+ * Magic numbers used to annotate callback instance data.
+ * These are used to help guard against accidentally passing the
+ * wrong instance data across multiple levels of callbacks (which
+ * is easy to do if there are `void*` arguments).
+ */
+enum magic {
+	MAGIC_SERVER_REPLY_DATA,
+	MAGIC_WORKER_THREAD_DATA,
+	MAGIC_ACCEPT_THREAD_DATA,
+	MAGIC_SERVER_DATA,
+};
+
+struct ipc_server_reply_data {
+	enum magic magic;
+	int fd;
+	struct ipc_worker_thread_data *worker_thread_data;
+};
+
+struct ipc_worker_thread_data {
+	enum magic magic;
+	struct ipc_worker_thread_data *next_thread;
+	struct ipc_server_data *server_data;
+	pthread_t pthread_id;
+};
+
+struct ipc_accept_thread_data {
+	enum magic magic;
+	struct ipc_server_data *server_data;
+
+	struct unix_ss_socket *server_socket;
+
+	int fd_send_shutdown;
+	int fd_wait_shutdown;
+	pthread_t pthread_id;
+};
+
+/*
+ * With unix-sockets, the conceptual "ipc-server" is implemented as a single
+ * controller "accept-thread" thread and a pool of "worker-thread" threads.
+ * The former does the usual `accept()` loop and dispatches connections
+ * to an idle worker thread.  The worker threads wait in an idle loop for
+ * a new connection, communicate with the client and relay data to/from
+ * the `application_cb` and then wait for another connection from the
+ * server thread.  This avoids the overhead of constantly creating and
+ * destroying threads.
+ */
+struct ipc_server_data {
+	enum magic magic;
+	ipc_server_application_cb *application_cb;
+	void *application_data;
+	struct strbuf buf_path;
+
+	struct ipc_accept_thread_data *accept_thread;
+	struct ipc_worker_thread_data *worker_thread_list;
+
+	pthread_mutex_t work_available_mutex;
+	pthread_cond_t work_available_cond;
+
+	/*
+	 * Accepted but not yet processed client connections are kept
+	 * in a circular buffer FIFO.  The queue is empty when the
+	 * positions are equal.
+	 */
+	int *fifo_fds;
+	int queue_size;
+	int back_pos;
+	int front_pos;
+
+	int shutdown_requested;
+	int is_stopped;
+};
+
+/*
+ * Remove and return the oldest queued connection.
+ *
+ * Returns -1 if empty.
+ */
+static int fifo_dequeue(struct ipc_server_data *server_data)
+{
+	/* ASSERT holding mutex */
+
+	int fd;
+
+	if (server_data->back_pos == server_data->front_pos)
+		return -1;
+
+	fd = server_data->fifo_fds[server_data->front_pos];
+	server_data->fifo_fds[server_data->front_pos] = -1;
+
+	server_data->front_pos++;
+	if (server_data->front_pos == server_data->queue_size)
+		server_data->front_pos = 0;
+
+	return fd;
+}
+
+/*
+ * Push a new fd onto the back of the queue.
+ *
+ * Drop it and return -1 if queue is already full.
+ */
+static int fifo_enqueue(struct ipc_server_data *server_data, int fd)
+{
+	/* ASSERT holding mutex */
+
+	int next_back_pos;
+
+	next_back_pos = server_data->back_pos + 1;
+	if (next_back_pos == server_data->queue_size)
+		next_back_pos = 0;
+
+	if (next_back_pos == server_data->front_pos) {
+		/* Queue is full. Just drop it. */
+		close(fd);
+		return -1;
+	}
+
+	server_data->fifo_fds[server_data->back_pos] = fd;
+	server_data->back_pos = next_back_pos;
+
+	return fd;
+}
+
+/*
+ * Wait for a connection to be queued to the FIFO and return it.
+ *
+ * Returns -1 if someone has already requested a shutdown.
+ */
+static int worker_thread__wait_for_connection(
+	struct ipc_worker_thread_data *worker_thread_data)
+{
+	/* ASSERT NOT holding mutex */
+
+	struct ipc_server_data *server_data = worker_thread_data->server_data;
+	int fd = -1;
+
+	pthread_mutex_lock(&server_data->work_available_mutex);
+	for (;;) {
+		if (server_data->shutdown_requested)
+			break;
+
+		fd = fifo_dequeue(server_data);
+		if (fd >= 0)
+			break;
+
+		pthread_cond_wait(&server_data->work_available_cond,
+				  &server_data->work_available_mutex);
+	}
+	pthread_mutex_unlock(&server_data->work_available_mutex);
+
+	return fd;
+}
+
+/*
+ * Forward declare our reply callback function so that any compiler
+ * errors are reported when we actually define the function (in addition
+ * to any errors reported when we try to pass this callback function as
+ * a parameter in a function call).  The former are easier to understand.
+ */
+static ipc_server_reply_cb do_io_reply_callback;
+
+/*
+ * Relay application's response message to the client process.
+ * (We do not flush at this point because we allow the caller
+ * to chunk data to the client through us.)
+ */
+static int do_io_reply_callback(struct ipc_server_reply_data *reply_data,
+		       const char *response, size_t response_len)
+{
+	if (reply_data->magic != MAGIC_SERVER_REPLY_DATA)
+		BUG("reply_cb called with wrong instance data");
+
+	return write_packetized_from_buf_no_flush(response, response_len,
+						  reply_data->fd);
+}
+
+/* A randomly chosen value. */
+#define MY_WAIT_POLL_TIMEOUT_MS (10)
+
+/*
+ * If the client hangs up without sending any data on the wire, just
+ * quietly close the socket and ignore this client.
+ *
+ * This worker thread is committed to reading the IPC request data
+ * from the client at the other end of this fd.  Wait here for the
+ * client to actually put something on the wire -- because if the
+ * client just does a ping (connect and hangup without sending any
+ * data), our use of the pkt-line read routines will spew an error
+ * message.
+ *
+ * Return -1 if the client hung up.
+ * Return 0 if data (possibly incomplete) is ready.
+ */
+static int worker_thread__wait_for_io_start(
+	struct ipc_worker_thread_data *worker_thread_data,
+	int fd)
+{
+	struct ipc_server_data *server_data = worker_thread_data->server_data;
+	struct pollfd pollfd[1];
+	int result;
+
+	for (;;) {
+		pollfd[0].fd = fd;
+		pollfd[0].events = POLLIN;
+
+		result = poll(pollfd, 1, MY_WAIT_POLL_TIMEOUT_MS);
+		if (result < 0) {
+			if (errno == EINTR)
+				continue;
+			goto cleanup;
+		}
+
+		if (result == 0) {
+			/* a timeout */
+
+			int in_shutdown;
+
+			pthread_mutex_lock(&server_data->work_available_mutex);
+			in_shutdown = server_data->shutdown_requested;
+			pthread_mutex_unlock(&server_data->work_available_mutex);
+
+			/*
+			 * If a shutdown is already in progress and this
+			 * client has not started talking yet, just drop it.
+			 */
+			if (in_shutdown)
+				goto cleanup;
+			continue;
+		}
+
+		if (pollfd[0].revents & POLLHUP)
+			goto cleanup;
+
+		if (pollfd[0].revents & POLLIN)
+			return 0;
+
+		goto cleanup;
+	}
+
+cleanup:
+	close(fd);
+	return -1;
+}
+
+/*
+ * Receive the request/command from the client and pass it to the
+ * registered request-callback.  The request-callback will compose
+ * a response and call our reply-callback to send it to the client.
+ */
+static int worker_thread__do_io(
+	struct ipc_worker_thread_data *worker_thread_data,
+	int fd)
+{
+	/* ASSERT NOT holding lock */
+
+	struct strbuf buf = STRBUF_INIT;
+	struct ipc_server_reply_data reply_data;
+	int ret = 0;
+
+	reply_data.magic = MAGIC_SERVER_REPLY_DATA;
+	reply_data.worker_thread_data = worker_thread_data;
+
+	reply_data.fd = fd;
+
+	ret = read_packetized_to_strbuf(
+		reply_data.fd, &buf,
+		PACKET_READ_GENTLE_ON_EOF | PACKET_READ_GENTLE_ON_READ_ERROR);
+	if (ret >= 0) {
+		ret = worker_thread_data->server_data->application_cb(
+			worker_thread_data->server_data->application_data,
+			buf.buf, do_io_reply_callback, &reply_data);
+
+		packet_flush_gently(reply_data.fd);
+	}
+	else {
+		/*
+		 * The client probably disconnected/shutdown before it
+		 * could send a well-formed message.  Ignore it.
+		 */
+	}
+
+	strbuf_release(&buf);
+	close(reply_data.fd);
+
+	return ret;
+}
+
+/*
+ * Block SIGPIPE on the current thread (so that we get EPIPE from
+ * write() rather than an actual signal).
+ *
+ * Note that using sigchain_push() and _pop() to control SIGPIPE
+ * around our IO calls is not thread safe:
+ * [] It uses a global stack of handler frames.
+ * [] It uses ALLOC_GROW() to resize it.
+ * [] Finally, according to the `signal(2)` man-page:
+ *    "The effects of `signal()` in a multithreaded process are unspecified."
+ */
+static void thread_block_sigpipe(sigset_t *old_set)
+{
+	sigset_t new_set;
+
+	sigemptyset(&new_set);
+	sigaddset(&new_set, SIGPIPE);
+
+	sigemptyset(old_set);
+	pthread_sigmask(SIG_BLOCK, &new_set, old_set);
+}
+
+/*
+ * Thread proc for an IPC worker thread.  It handles a series of
+ * connections from clients.  It pulls the next fd from the queue,
+ * processes it, and then waits for the next client.
+ *
+ * Block SIGPIPE in this worker thread for the life of the thread.
+ * This avoids stray (and sometimes delayed) SIGPIPE signals caused
+ * by client errors and/or when we are under extremely heavy IO load.
+ *
+ * This means that the application callback will have SIGPIPE blocked.
+ * The callback should not change it.
+ */
+static void *worker_thread_proc(void *_worker_thread_data)
+{
+	struct ipc_worker_thread_data *worker_thread_data = _worker_thread_data;
+	struct ipc_server_data *server_data = worker_thread_data->server_data;
+	sigset_t old_set;
+	int fd, io;
+	int ret;
+
+	trace2_thread_start("ipc-worker");
+
+	thread_block_sigpipe(&old_set);
+
+	for (;;) {
+		fd = worker_thread__wait_for_connection(worker_thread_data);
+		if (fd == -1)
+			break; /* in shutdown */
+
+		io = worker_thread__wait_for_io_start(worker_thread_data, fd);
+		if (io == -1)
+			continue; /* client hung up without sending anything */
+
+		ret = worker_thread__do_io(worker_thread_data, fd);
+
+		if (ret == SIMPLE_IPC_QUIT) {
+			trace2_data_string("ipc-worker", NULL, "queue_stop_async",
+					   "application_quit");
+			/*
+			 * The application layer is telling the ipc-server
+			 * layer to shutdown.
+			 *
+			 * We DO NOT have a response to send to the client.
+			 *
+			 * Queue an async stop (to stop the other threads) and
+			 * allow this worker thread to exit now (no sense waiting
+			 * for the thread-pool shutdown signal).
+			 *
+			 * Other non-idle worker threads are allowed to finish
+			 * responding to their current clients.
+			 */
+			ipc_server_stop_async(server_data);
+			break;
+		}
+	}
+
+	trace2_thread_exit();
+	return NULL;
+}
+
+/* A randomly chosen value. */
+#define MY_ACCEPT_POLL_TIMEOUT_MS (60 * 1000)
+
+/*
+ * Accept a new client connection on our socket.  This uses non-blocking
+ * IO so that we can also wait for shutdown requests on our socket-pair
+ * without actually spinning on a fast timeout.
+ */
+static int accept_thread__wait_for_connection(
+	struct ipc_accept_thread_data *accept_thread_data)
+{
+	struct pollfd pollfd[2];
+	int result;
+
+	for (;;) {
+		pollfd[0].fd = accept_thread_data->fd_wait_shutdown;
+		pollfd[0].events = POLLIN;
+
+		pollfd[1].fd = accept_thread_data->server_socket->fd_socket;
+		pollfd[1].events = POLLIN;
+
+		result = poll(pollfd, 2, MY_ACCEPT_POLL_TIMEOUT_MS);
+		if (result < 0) {
+			if (errno == EINTR)
+				continue;
+			return result;
+		}
+
+		if (result == 0) {
+			/* a timeout */
+
+			/*
+			 * If someone deletes or force-creates a new unix
+			 * domain socket at our path, all future clients
+			 * will be routed elsewhere and we silently starve.
+			 * If that happens, just queue a shutdown.
+			 */
+			if (unix_ss_was_stolen(
+				    accept_thread_data->server_socket)) {
+				trace2_data_string("ipc-accept", NULL,
+						   "queue_stop_async",
+						   "socket_stolen");
+				ipc_server_stop_async(
+					accept_thread_data->server_data);
+			}
+			continue;
+		}
+
+		if (pollfd[0].revents & POLLIN) {
+			/* shutdown message queued to socketpair */
+			return -1;
+		}
+
+		if (pollfd[1].revents & POLLIN) {
+			/* a connection is available on server_socket */
+
+			int client_fd =
+				accept(accept_thread_data->server_socket->fd_socket,
+				       NULL, NULL);
+			if (client_fd >= 0)
+				return client_fd;
+
+			/*
+			 * An error here is unlikely -- it probably
+			 * indicates that the connecting process has
+			 * already dropped the connection.
+			 */
+			continue;
+		}
+
+		BUG("unandled poll result errno=%d r[0]=%d r[1]=%d",
+		    errno, pollfd[0].revents, pollfd[1].revents);
+	}
+}
+
+/*
+ * Thread proc for the IPC server "accept thread".  This waits for
+ * an incoming socket connection, appends it to the queue of available
+ * connections, and notifies a worker thread to process it.
+ *
+ * Block SIGPIPE in this thread for the life of the thread.  This
+ * avoids any stray SIGPIPE signals when closing pipe fds under
+ * extremely heavy loads (such as when the fifo queue is full and we
+ * drop incoming connections).
+ */
+static void *accept_thread_proc(void *_accept_thread_data)
+{
+	struct ipc_accept_thread_data *accept_thread_data = _accept_thread_data;
+	struct ipc_server_data *server_data = accept_thread_data->server_data;
+	sigset_t old_set;
+
+	trace2_thread_start("ipc-accept");
+
+	thread_block_sigpipe(&old_set);
+
+	for (;;) {
+		int client_fd = accept_thread__wait_for_connection(
+			accept_thread_data);
+
+		pthread_mutex_lock(&server_data->work_available_mutex);
+		if (server_data->shutdown_requested) {
+			pthread_mutex_unlock(&server_data->work_available_mutex);
+			if (client_fd >= 0)
+				close(client_fd);
+			break;
+		}
+
+		if (client_fd < 0) {
+			/* ignore transient accept() errors */
+		}
+		else {
+			fifo_enqueue(server_data, client_fd);
+			pthread_cond_broadcast(&server_data->work_available_cond);
+		}
+		pthread_mutex_unlock(&server_data->work_available_mutex);
+	}
+
+	trace2_thread_exit();
+	return NULL;
+}
+
+/*
+ * We can't predict the connection arrival rate relative to the worker
+ * processing rate, so we allow the "accept-thread" to queue up
+ * a generous number of connections, since we'd rather have the client
+ * not unnecessarily time out if we can avoid it.  (The assumption is
+ * that this will be used for FSMonitor and a few-second wait on a
+ * connection is better than having the client time out and do the full
+ * computation itself.)
+ *
+ * The FIFO queue size is set to a multiple of the worker pool size.
+ * This value chosen at random.
+ */
+#define FIFO_SCALE (100)
+
+/*
+ * The backlog value for `listen(2)`.  This doesn't need to be huge,
+ * rather just large enough for our "accept-thread" to wake up and
+ * queue incoming connections onto the FIFO without the kernel
+ * dropping any.
+ *
+ * This value chosen at random.
+ */
+#define LISTEN_BACKLOG (50)
+
+static int create_listener_socket(
+	const char *path,
+	const struct ipc_server_opts *ipc_opts,
+	struct unix_ss_socket **new_server_socket)
+{
+	struct unix_ss_socket *server_socket = NULL;
+	struct unix_stream_listen_opts uslg_opts = UNIX_STREAM_LISTEN_OPTS_INIT;
+	int ret;
+
+	uslg_opts.listen_backlog_size = LISTEN_BACKLOG;
+	uslg_opts.disallow_chdir = ipc_opts->uds_disallow_chdir;
+
+	ret = unix_ss_create(path, &uslg_opts, -1, &server_socket);
+	if (ret)
+		return ret;
+
+	if (set_socket_blocking_flag(server_socket->fd_socket, 1)) {
+		int saved_errno = errno;
+		unix_ss_free(server_socket);
+		errno = saved_errno;
+		return -1;
+	}
+
+	*new_server_socket = server_socket;
+
+	trace2_data_string("ipc-server", NULL, "listen-with-lock", path);
+	return 0;
+}
+
+static int setup_listener_socket(
+	const char *path,
+	const struct ipc_server_opts *ipc_opts,
+	struct unix_ss_socket **new_server_socket)
+{
+	int ret, saved_errno;
+
+	trace2_region_enter("ipc-server", "create-listener_socket", NULL);
+
+	ret = create_listener_socket(path, ipc_opts, new_server_socket);
+
+	saved_errno = errno;
+	trace2_region_leave("ipc-server", "create-listener_socket", NULL);
+	errno = saved_errno;
+
+	return ret;
+}
+
+/*
+ * Start IPC server in a pool of background threads.
+ */
+int ipc_server_run_async(struct ipc_server_data **returned_server_data,
+			 const char *path, const struct ipc_server_opts *opts,
+			 ipc_server_application_cb *application_cb,
+			 void *application_data)
+{
+	struct unix_ss_socket *server_socket = NULL;
+	struct ipc_server_data *server_data;
+	int sv[2];
+	int k;
+	int ret;
+	int nr_threads = opts->nr_threads;
+
+	*returned_server_data = NULL;
+
+	/*
+	 * Create a socketpair and set sv[1] to non-blocking.  This
+	 * will be used to send a shutdown message to the accept-thread
+	 * and allows the accept-thread to wait on EITHER a client
+	 * connection or a shutdown request without spinning.
+	 */
+	if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0)
+		return -1;
+
+	if (set_socket_blocking_flag(sv[1], 1)) {
+		int saved_errno = errno;
+		close(sv[0]);
+		close(sv[1]);
+		errno = saved_errno;
+		return -1;
+	}
+
+	ret = setup_listener_socket(path, opts, &server_socket);
+	if (ret) {
+		int saved_errno = errno;
+		close(sv[0]);
+		close(sv[1]);
+		errno = saved_errno;
+		return ret;
+	}
+
+	server_data = xcalloc(1, sizeof(*server_data));
+	server_data->magic = MAGIC_SERVER_DATA;
+	server_data->application_cb = application_cb;
+	server_data->application_data = application_data;
+	strbuf_init(&server_data->buf_path, 0);
+	strbuf_addstr(&server_data->buf_path, path);
+
+	if (nr_threads < 1)
+		nr_threads = 1;
+
+	pthread_mutex_init(&server_data->work_available_mutex, NULL);
+	pthread_cond_init(&server_data->work_available_cond, NULL);
+
+	server_data->queue_size = nr_threads * FIFO_SCALE;
+	CALLOC_ARRAY(server_data->fifo_fds, server_data->queue_size);
+
+	server_data->accept_thread =
+		xcalloc(1, sizeof(*server_data->accept_thread));
+	server_data->accept_thread->magic = MAGIC_ACCEPT_THREAD_DATA;
+	server_data->accept_thread->server_data = server_data;
+	server_data->accept_thread->server_socket = server_socket;
+	server_data->accept_thread->fd_send_shutdown = sv[0];
+	server_data->accept_thread->fd_wait_shutdown = sv[1];
+
+	if (pthread_create(&server_data->accept_thread->pthread_id, NULL,
+			   accept_thread_proc, server_data->accept_thread))
+		die_errno(_("could not start accept_thread '%s'"), path);
+
+	for (k = 0; k < nr_threads; k++) {
+		struct ipc_worker_thread_data *wtd;
+
+		wtd = xcalloc(1, sizeof(*wtd));
+		wtd->magic = MAGIC_WORKER_THREAD_DATA;
+		wtd->server_data = server_data;
+
+		if (pthread_create(&wtd->pthread_id, NULL, worker_thread_proc,
+				   wtd)) {
+			if (k == 0)
+				die(_("could not start worker[0] for '%s'"),
+				    path);
+			/*
+			 * Limp along with the thread pool that we have.
+			 */
+			break;
+		}
+
+		wtd->next_thread = server_data->worker_thread_list;
+		server_data->worker_thread_list = wtd;
+	}
+
+	*returned_server_data = server_data;
+	return 0;
+}
+
+/*
+ * Gently tell the IPC server threads to shut down.
+ * Can be run on any thread.
+ */
+int ipc_server_stop_async(struct ipc_server_data *server_data)
+{
+	/* ASSERT NOT holding mutex */
+
+	int fd;
+
+	if (!server_data)
+		return 0;
+
+	trace2_region_enter("ipc-server", "server-stop-async", NULL);
+
+	pthread_mutex_lock(&server_data->work_available_mutex);
+
+	server_data->shutdown_requested = 1;
+
+	/*
+	 * Write a byte to the shutdown socket pair to wake up the
+	 * accept-thread.
+	 */
+	if (write(server_data->accept_thread->fd_send_shutdown, "Q", 1) < 0)
+		error_errno("could not write to fd_send_shutdown");
+
+	/*
+	 * Drain the queue of existing connections.
+	 */
+	while ((fd = fifo_dequeue(server_data)) != -1)
+		close(fd);
+
+	/*
+	 * Gently tell worker threads to stop processing new connections
+	 * and exit.  (This does not abort in-progress conversations.)
+	 */
+	pthread_cond_broadcast(&server_data->work_available_cond);
+
+	pthread_mutex_unlock(&server_data->work_available_mutex);
+
+	trace2_region_leave("ipc-server", "server-stop-async", NULL);
+
+	return 0;
+}
+
+/*
+ * Wait for all IPC server threads to stop.
+ */
+int ipc_server_await(struct ipc_server_data *server_data)
+{
+	pthread_join(server_data->accept_thread->pthread_id, NULL);
+
+	if (!server_data->shutdown_requested)
+		BUG("ipc-server: accept-thread stopped for '%s'",
+		    server_data->buf_path.buf);
+
+	while (server_data->worker_thread_list) {
+		struct ipc_worker_thread_data *wtd =
+			server_data->worker_thread_list;
+
+		pthread_join(wtd->pthread_id, NULL);
+
+		server_data->worker_thread_list = wtd->next_thread;
+		free(wtd);
+	}
+
+	server_data->is_stopped = 1;
+
+	return 0;
+}
+
+void ipc_server_free(struct ipc_server_data *server_data)
+{
+	struct ipc_accept_thread_data *accept_thread_data;
+
+	if (!server_data)
+		return;
+
+	if (!server_data->is_stopped)
+		BUG("cannot free ipc-server while running for '%s'",
+		    server_data->buf_path.buf);
+
+	accept_thread_data = server_data->accept_thread;
+	if (accept_thread_data) {
+		unix_ss_free(accept_thread_data->server_socket);
+
+		if (accept_thread_data->fd_send_shutdown != -1)
+			close(accept_thread_data->fd_send_shutdown);
+		if (accept_thread_data->fd_wait_shutdown != -1)
+			close(accept_thread_data->fd_wait_shutdown);
+
+		free(server_data->accept_thread);
+	}
+
+	while (server_data->worker_thread_list) {
+		struct ipc_worker_thread_data *wtd =
+			server_data->worker_thread_list;
+
+		server_data->worker_thread_list = wtd->next_thread;
+		free(wtd);
+	}
+
+	pthread_cond_destroy(&server_data->work_available_cond);
+	pthread_mutex_destroy(&server_data->work_available_mutex);
+
+	strbuf_release(&server_data->buf_path);
+
+	free(server_data->fifo_fds);
+	free(server_data);
+}
diff --git a/contrib/buildsystems/CMakeLists.txt b/contrib/buildsystems/CMakeLists.txt
index c94011269ebb..9897fcc8ea2a 100644
--- a/contrib/buildsystems/CMakeLists.txt
+++ b/contrib/buildsystems/CMakeLists.txt
@@ -248,6 +248,8 @@ endif()
 
 if(CMAKE_SYSTEM_NAME STREQUAL "Windows")
 	list(APPEND compat_SOURCES compat/simple-ipc/ipc-shared.c compat/simple-ipc/ipc-win32.c)
+else()
+	list(APPEND compat_SOURCES compat/simple-ipc/ipc-shared.c compat/simple-ipc/ipc-unix-socket.c)
 endif()
 
 set(EXE_EXTENSION ${CMAKE_EXECUTABLE_SUFFIX})
diff --git a/simple-ipc.h b/simple-ipc.h
index ab5619e3d76f..dc3606e30bd6 100644
--- a/simple-ipc.h
+++ b/simple-ipc.h
@@ -5,7 +5,7 @@
  * See Documentation/technical/api-simple-ipc.txt
  */
 
-#if defined(GIT_WINDOWS_NATIVE)
+#if defined(GIT_WINDOWS_NATIVE) || !defined(NO_UNIX_SOCKETS)
 #define SUPPORTS_SIMPLE_IPC
 #endif
 
@@ -62,11 +62,17 @@ struct ipc_client_connect_options {
 	 * the service and need to wait for it to become ready.
 	 */
 	unsigned int wait_if_not_found:1;
+
+	/*
+	 * Disallow chdir() when creating a Unix domain socket.
+	 */
+	unsigned int uds_disallow_chdir:1;
 };
 
 #define IPC_CLIENT_CONNECT_OPTIONS_INIT { \
 	.wait_if_busy = 0, \
 	.wait_if_not_found = 0, \
+	.uds_disallow_chdir = 0, \
 }
 
 /*
@@ -159,6 +165,11 @@ struct ipc_server_data;
 struct ipc_server_opts
 {
 	int nr_threads;
+
+	/*
+	 * Disallow chdir() when creating a Unix domain socket.
+	 */
+	unsigned int uds_disallow_chdir:1;
 };
 
 /*
-- 
gitgitgadget


^ permalink raw reply related	[flat|nested] 178+ messages in thread

* [PATCH v7 12/12] t0052: add simple-ipc tests and t/helper/test-simple-ipc tool
  2021-03-22 10:29           ` [PATCH v7 00/12] Simple IPC Mechanism Jeff Hostetler via GitGitGadget
                               ` (10 preceding siblings ...)
  2021-03-22 10:29             ` [PATCH v7 11/12] simple-ipc: add Unix domain socket implementation Jeff Hostetler via GitGitGadget
@ 2021-03-22 10:29             ` Jeff Hostetler via GitGitGadget
  11 siblings, 0 replies; 178+ messages in thread
From: Jeff Hostetler via GitGitGadget @ 2021-03-22 10:29 UTC (permalink / raw)
  To: git
  Cc: Ævar Arnfjörð Bjarmason, Jeff Hostetler,
	Jeff King, SZEDER Gábor, Johannes Schindelin, Chris Torek,
	Jeff Hostetler, Jeff Hostetler

From: Jeff Hostetler <jeffhost@microsoft.com>

Create t0052-simple-ipc.sh with unit tests for the "simple-ipc" mechanism.

Create t/helper/test-simple-ipc test tool to exercise the "simple-ipc"
functions.

When the tool is invoked with "run-daemon", it runs a server to listen
for "simple-ipc" connections on a test socket or named pipe and
responds to a set of commands to exercise/stress the communication
setup.

When the tool is invoked with "start-daemon", it spawns a "run-daemon"
command in the background and waits for the server to become ready
before exiting.  (This helps make unit tests in t0052 more predictable
and avoids the need for arbitrary sleeps in the test script.)

The tool also has a series of client "send" commands to send commands
and data to a server instance.

Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
---
 Makefile                   |   1 +
 t/helper/test-simple-ipc.c | 787 +++++++++++++++++++++++++++++++++++++
 t/helper/test-tool.c       |   1 +
 t/helper/test-tool.h       |   1 +
 t/t0052-simple-ipc.sh      | 122 ++++++
 5 files changed, 912 insertions(+)
 create mode 100644 t/helper/test-simple-ipc.c
 create mode 100755 t/t0052-simple-ipc.sh

diff --git a/Makefile b/Makefile
index 20dd65d19658..e556388d28d0 100644
--- a/Makefile
+++ b/Makefile
@@ -734,6 +734,7 @@ TEST_BUILTINS_OBJS += test-serve-v2.o
 TEST_BUILTINS_OBJS += test-sha1.o
 TEST_BUILTINS_OBJS += test-sha256.o
 TEST_BUILTINS_OBJS += test-sigchain.o
+TEST_BUILTINS_OBJS += test-simple-ipc.o
 TEST_BUILTINS_OBJS += test-strcmp-offset.o
 TEST_BUILTINS_OBJS += test-string-list.o
 TEST_BUILTINS_OBJS += test-submodule-config.o
diff --git a/t/helper/test-simple-ipc.c b/t/helper/test-simple-ipc.c
new file mode 100644
index 000000000000..42040ef81b1e
--- /dev/null
+++ b/t/helper/test-simple-ipc.c
@@ -0,0 +1,787 @@
+/*
+ * test-simple-ipc.c: verify that the Inter-Process Communication works.
+ */
+
+#include "test-tool.h"
+#include "cache.h"
+#include "strbuf.h"
+#include "simple-ipc.h"
+#include "parse-options.h"
+#include "thread-utils.h"
+#include "strvec.h"
+
+#ifndef SUPPORTS_SIMPLE_IPC
+int cmd__simple_ipc(int argc, const char **argv)
+{
+	die("simple IPC not available on this platform");
+}
+#else
+
+/*
+ * The test daemon defines an "application callback" that supports a
+ * series of commands (see `test_app_cb()`).
+ *
+ * Unknown commands are caught here and we send an error message back
+ * to the client process.
+ */
+static int app__unhandled_command(const char *command,
+				  ipc_server_reply_cb *reply_cb,
+				  struct ipc_server_reply_data *reply_data)
+{
+	struct strbuf buf = STRBUF_INIT;
+	int ret;
+
+	strbuf_addf(&buf, "unhandled command: %s", command);
+	ret = reply_cb(reply_data, buf.buf, buf.len);
+	strbuf_release(&buf);
+
+	return ret;
+}
+
+/*
+ * Reply with a single very large buffer.  This is to ensure that
+ * long responses are properly handled -- whether the chunking occurs
+ * in the kernel or in the (probably pkt-line) layer.
+ */
+#define BIG_ROWS (10000)
+static int app__big_command(ipc_server_reply_cb *reply_cb,
+			    struct ipc_server_reply_data *reply_data)
+{
+	struct strbuf buf = STRBUF_INIT;
+	int row;
+	int ret;
+
+	for (row = 0; row < BIG_ROWS; row++)
+		strbuf_addf(&buf, "big: %.75d\n", row);
+
+	ret = reply_cb(reply_data, buf.buf, buf.len);
+	strbuf_release(&buf);
+
+	return ret;
+}
+
+/*
+ * Reply with a series of lines.  This is to ensure that we can incrementally
+ * compute the response and chunk it to the client.
+ */
+#define CHUNK_ROWS (10000)
+static int app__chunk_command(ipc_server_reply_cb *reply_cb,
+			      struct ipc_server_reply_data *reply_data)
+{
+	struct strbuf buf = STRBUF_INIT;
+	int row;
+	int ret;
+
+	for (row = 0; row < CHUNK_ROWS; row++) {
+		strbuf_setlen(&buf, 0);
+		strbuf_addf(&buf, "big: %.75d\n", row);
+		ret = reply_cb(reply_data, buf.buf, buf.len);
+	}
+
+	strbuf_release(&buf);
+
+	return ret;
+}
+
+/*
+ * Slowly reply with a series of lines.  This models a chunked response
+ * that is expensive to compute (which might happen if this callback is
+ * running in a thread and is fighting for a lock with other threads).
+ */
+#define SLOW_ROWS     (1000)
+#define SLOW_DELAY_MS (10)
+static int app__slow_command(ipc_server_reply_cb *reply_cb,
+			     struct ipc_server_reply_data *reply_data)
+{
+	struct strbuf buf = STRBUF_INIT;
+	int row;
+	int ret;
+
+	for (row = 0; row < SLOW_ROWS; row++) {
+		strbuf_setlen(&buf, 0);
+		strbuf_addf(&buf, "big: %.75d\n", row);
+		ret = reply_cb(reply_data, buf.buf, buf.len);
+		sleep_millisec(SLOW_DELAY_MS);
+	}
+
+	strbuf_release(&buf);
+
+	return ret;
+}
+
+/*
+ * The client sent a command followed by a (possibly very) large buffer.
+ */
+static int app__sendbytes_command(const char *received,
+				  ipc_server_reply_cb *reply_cb,
+				  struct ipc_server_reply_data *reply_data)
+{
+	struct strbuf buf_resp = STRBUF_INIT;
+	const char *p = "?";
+	int len_ballast = 0;
+	int k;
+	int errs = 0;
+	int ret;
+
+	if (skip_prefix(received, "sendbytes ", &p))
+		len_ballast = strlen(p);
+
+	/*
+	 * Verify that the ballast is n copies of a single letter.
+	 * And that the multi-threaded IO layer didn't cross the streams.
+	 */
+	for (k = 1; k < len_ballast; k++)
+		if (p[k] != p[0])
+			errs++;
+
+	if (errs)
+		strbuf_addf(&buf_resp, "errs:%d\n", errs);
+	else
+		strbuf_addf(&buf_resp, "rcvd:%c%08d\n", p[0], len_ballast);
+
+	ret = reply_cb(reply_data, buf_resp.buf, buf_resp.len);
+
+	strbuf_release(&buf_resp);
+
+	return ret;
+}
+
+/*
+ * An arbitrary fixed address to verify that the application instance
+ * data is handled properly.
+ */
+static int my_app_data = 42;
+
+static ipc_server_application_cb test_app_cb;
+
+/*
+ * This is the "application callback" that sits on top of the
+ * "ipc-server".  It completely defines the set of commands supported
+ * by this application.
+ */
+static int test_app_cb(void *application_data,
+		       const char *command,
+		       ipc_server_reply_cb *reply_cb,
+		       struct ipc_server_reply_data *reply_data)
+{
+	/*
+	 * Verify that we received the application-data that we passed
+	 * when we started the ipc-server.  (We have several layers of
+	 * callbacks calling callbacks and it's easy to get things mixed
+	 * up (especially when some are "void*").)
+	 */
+	if (application_data != (void*)&my_app_data)
+		BUG("application_cb: application_data pointer wrong");
+
+	if (!strcmp(command, "quit")) {
+		/*
+		 * The client sent a "quit" command.  This is an async
+		 * request for the server to shut down.
+		 *
+		 * We DO NOT send the client a response message
+		 * (because we have nothing to say and the other
+		 * server threads have not yet stopped).
+		 *
+		 * Tell the ipc-server layer to start shutting down.
+		 * This includes: stop listening for new connections
+		 * on the socket/pipe and telling all worker threads
+		 * to finish/drain their outgoing responses to other
+		 * clients.
+		 *
+		 * This DOES NOT force an immediate sync shutdown.
+		 */
+		return SIMPLE_IPC_QUIT;
+	}
+
+	if (!strcmp(command, "ping")) {
+		const char *answer = "pong";
+		return reply_cb(reply_data, answer, strlen(answer));
+	}
+
+	if (!strcmp(command, "big"))
+		return app__big_command(reply_cb, reply_data);
+
+	if (!strcmp(command, "chunk"))
+		return app__chunk_command(reply_cb, reply_data);
+
+	if (!strcmp(command, "slow"))
+		return app__slow_command(reply_cb, reply_data);
+
+	if (starts_with(command, "sendbytes "))
+		return app__sendbytes_command(command, reply_cb, reply_data);
+
+	return app__unhandled_command(command, reply_cb, reply_data);
+}
+
+struct cl_args
+{
+	const char *subcommand;
+	const char *path;
+	const char *token;
+
+	int nr_threads;
+	int max_wait_sec;
+	int bytecount;
+	int batchsize;
+
+	char bytevalue;
+};
+
+static struct cl_args cl_args = {
+	.subcommand = NULL,
+	.path = "ipc-test",
+	.token = NULL,
+
+	.nr_threads = 5,
+	.max_wait_sec = 60,
+	.bytecount = 1024,
+	.batchsize = 10,
+
+	.bytevalue = 'x',
+};
+
+/*
+ * This process will run as a simple-ipc server and listen for IPC commands
+ * from client processes.
+ */
+static int daemon__run_server(void)
+{
+	int ret;
+
+	struct ipc_server_opts opts = {
+		.nr_threads = cl_args.nr_threads,
+	};
+
+	/*
+	 * Synchronously run the ipc-server.  We don't need any application
+	 * instance data, so pass an arbitrary pointer (that we'll later
+	 * verify made the round trip).
+	 */
+	ret = ipc_server_run(cl_args.path, &opts, test_app_cb, (void*)&my_app_data);
+	if (ret == -2)
+		error(_("socket/pipe already in use: '%s'"), cl_args.path);
+	else if (ret == -1)
+		error_errno(_("could not start server on: '%s'"), cl_args.path);
+
+	return ret;
+}
+
+#ifndef GIT_WINDOWS_NATIVE
+/*
+ * This is adapted from `daemonize()`.  Use `fork()` to directly create and
+ * run the daemon in a child process.
+ */
+static int spawn_server(pid_t *pid)
+{
+	struct ipc_server_opts opts = {
+		.nr_threads = cl_args.nr_threads,
+	};
+
+	*pid = fork();
+
+	switch (*pid) {
+	case 0:
+		if (setsid() == -1)
+			error_errno(_("setsid failed"));
+		close(0);
+		close(1);
+		close(2);
+		sanitize_stdfds();
+
+		return ipc_server_run(cl_args.path, &opts, test_app_cb,
+				      (void*)&my_app_data);
+
+	case -1:
+		return error_errno(_("could not spawn daemon in the background"));
+
+	default:
+		return 0;
+	}
+}
+#else
+/*
+ * Conceptually like `daemonize()` but different because Windows does not
+ * have `fork(2)`.  Spawn a normal Windows child process but without the
+ * limitations of `start_command()` and `finish_command()`.
+ */
+static int spawn_server(pid_t *pid)
+{
+	char test_tool_exe[MAX_PATH];
+	struct strvec args = STRVEC_INIT;
+	int in, out;
+
+	GetModuleFileNameA(NULL, test_tool_exe, MAX_PATH);
+
+	in = open("/dev/null", O_RDONLY);
+	out = open("/dev/null", O_WRONLY);
+
+	strvec_push(&args, test_tool_exe);
+	strvec_push(&args, "simple-ipc");
+	strvec_push(&args, "run-daemon");
+	strvec_pushf(&args, "--name=%s", cl_args.path);
+	strvec_pushf(&args, "--threads=%d", cl_args.nr_threads);
+
+	*pid = mingw_spawnvpe(args.v[0], args.v, NULL, NULL, in, out, out);
+	close(in);
+	close(out);
+
+	strvec_clear(&args);
+
+	if (*pid < 0)
+		return error(_("could not spawn daemon in the background"));
+
+	return 0;
+}
+#endif
+
+/*
+ * This is adapted from `wait_or_whine()`.  Watch the child process and
+ * let it get started and begin listening for requests on the socket
+ * before reporting our success.
+ */
+static int wait_for_server_startup(pid_t pid_child)
+{
+	int status;
+	pid_t pid_seen;
+	enum ipc_active_state s;
+	time_t time_limit, now;
+
+	time(&time_limit);
+	time_limit += cl_args.max_wait_sec;
+
+	for (;;) {
+		pid_seen = waitpid(pid_child, &status, WNOHANG);
+
+		if (pid_seen == -1)
+			return error_errno(_("waitpid failed"));
+
+		else if (pid_seen == 0) {
+			/*
+			 * The child is still running (this should be
+			 * the normal case).  Try to connect to it on
+			 * the socket and see if it is ready for
+			 * business.
+			 *
+			 * If there is another daemon already running,
+			 * our child will fail to start (possibly
+			 * after a timeout on the lock), but we don't
+			 * care (who responds) if the socket is live.
+			 */
+			s = ipc_get_active_state(cl_args.path);
+			if (s == IPC_STATE__LISTENING)
+				return 0;
+
+			time(&now);
+			if (now > time_limit)
+				return error(_("daemon not online yet"));
+
+			continue;
+		}
+
+		else if (pid_seen == pid_child) {
+			/*
+			 * The new child daemon process shut down while
+			 * it was starting up, so it is not listening
+			 * on the socket.
+			 *
+			 * Try to ping the socket in the odd chance
+			 * that another daemon started (or was already
+			 * running) while our child was starting.
+			 *
+			 * Again, we don't care who services the socket.
+			 */
+			s = ipc_get_active_state(cl_args.path);
+			if (s == IPC_STATE__LISTENING)
+				return 0;
+
+			/*
+			 * We don't care about the WEXITSTATUS() nor
+			 * any of the WIF*(status) values because
+			 * `cmd__simple_ipc()` does the `!!result`
+			 * trick on all function return values.
+			 *
+			 * So it is sufficient to just report the
+			 * early shutdown as an error.
+			 */
+			return error(_("daemon failed to start"));
+		}
+
+		else
+			return error(_("waitpid is confused"));
+	}
+}
+
+/*
+ * This process will start a simple-ipc server in a background process and
+ * wait for it to become ready.  This is like `daemonize()` but gives us
+ * more control and better error reporting (and makes it easier to write
+ * unit tests).
+ */
+static int daemon__start_server(void)
+{
+	pid_t pid_child;
+	int ret;
+
+	/*
+	 * Run the actual daemon in a background process.
+	 */
+	ret = spawn_server(&pid_child);
+	if (pid_child <= 0)
+		return ret;
+
+	/*
+	 * Let the parent wait for the child process to get started
+	 * and begin listening for requests on the socket.
+	 */
+	ret = wait_for_server_startup(pid_child);
+
+	return ret;
+}
+
+/*
+ * This process will run a quick probe to see if a simple-ipc server
+ * is active on this path.
+ *
+ * Returns 0 if the server is alive.
+ */
+static int client__probe_server(void)
+{
+	enum ipc_active_state s;
+
+	s = ipc_get_active_state(cl_args.path);
+	switch (s) {
+	case IPC_STATE__LISTENING:
+		return 0;
+
+	case IPC_STATE__NOT_LISTENING:
+		return error("no server listening at '%s'", cl_args.path);
+
+	case IPC_STATE__PATH_NOT_FOUND:
+		return error("path not found '%s'", cl_args.path);
+
+	case IPC_STATE__INVALID_PATH:
+		return error("invalid pipe/socket name '%s'", cl_args.path);
+
+	case IPC_STATE__OTHER_ERROR:
+	default:
+		return error("other error for '%s'", cl_args.path);
+	}
+}
+
+/*
+ * Send an IPC command token to an already-running server daemon and
+ * print the response.
+ *
+ * This is a simple one-word command/token that `test_app_cb()` (in the
+ * daemon process) will understand.
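+ *
+ * For example (as exercised by t0052 below),
+ * `test-tool simple-ipc send --token=ping` prints "pong" while the
+ * daemon is running.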
+ */
+static int client__send_ipc(void)
+{
+	const char *command = "(no-command)";
+	struct strbuf buf = STRBUF_INIT;
+	struct ipc_client_connect_options options
+		= IPC_CLIENT_CONNECT_OPTIONS_INIT;
+
+	if (cl_args.token && *cl_args.token)
+		command = cl_args.token;
+
+	options.wait_if_busy = 1;
+	options.wait_if_not_found = 0;
+
+	if (!ipc_client_send_command(cl_args.path, &options, command, &buf)) {
+		if (buf.len) {
+			printf("%s\n", buf.buf);
+			fflush(stdout);
+		}
+		strbuf_release(&buf);
+
+		return 0;
+	}
+
+	return error("failed to send '%s' to '%s'", command, cl_args.path);
+}
+
+/*
+ * Send an IPC command to an already-running server and ask it to
+ * shut down.  "send quit" is an async request that queues a shutdown
+ * event in the server, so we spin and wait here for it to actually
+ * shut down, which makes the unit tests a little easier to write.
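+ *
+ * This backs `test-tool simple-ipc stop-daemon` (used by the t0052
+ * cleanup below): it sends "quit" and then polls until the socket/pipe
+ * stops answering or --max-wait expires.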
+ */
+static int client__stop_server(void)
+{
+	int ret;
+	time_t time_limit, now;
+	enum ipc_active_state s;
+
+	time(&time_limit);
+	time_limit += cl_args.max_wait_sec;
+
+	cl_args.token = "quit";
+
+	ret = client__send_ipc();
+	if (ret)
+		return ret;
+
+	for (;;) {
+		sleep_millisec(100);
+
+		s = ipc_get_active_state(cl_args.path);
+
+		if (s != IPC_STATE__LISTENING) {
+			/*
+			 * The socket/pipe is gone and/or has stopped
+			 * responding.  Let's assume that the daemon
+			 * process has exited too.
+			 */
+			return 0;
+		}
+
+		time(&now);
+		if (now > time_limit)
+			return error(_("daemon has not shut down yet"));
+	}
+}
+
+/*
+ * Send an IPC command followed by ballast to confirm that a large
+ * message can be sent, that the kernel and pkt-line layers will
+ * properly chunk it, and that the daemon receives the entire message.
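+ *
+ * The request built below is the token "sendbytes " followed by
+ * <bytecount> copies of <byte>, e.g. "sendbytes AAAA...A".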
+ */
+static int do_sendbytes(int bytecount, char byte, const char *path,
+			const struct ipc_client_connect_options *options)
+{
+	struct strbuf buf_send = STRBUF_INIT;
+	struct strbuf buf_resp = STRBUF_INIT;
+
+	strbuf_addstr(&buf_send, "sendbytes ");
+	strbuf_addchars(&buf_send, byte, bytecount);
+
+	if (!ipc_client_send_command(path, options, buf_send.buf, &buf_resp)) {
+		strbuf_rtrim(&buf_resp);
+		printf("sent:%c%08d %s\n", byte, bytecount, buf_resp.buf);
+		fflush(stdout);
+		strbuf_release(&buf_send);
+		strbuf_release(&buf_resp);
+
+		return 0;
+	}
+
+	return error("client failed to sendbytes(%d, '%c') to '%s'",
+		     bytecount, byte, path);
+}
+
+/*
+ * Send an IPC command with ballast to an already-running server daemon.
+ */
+static int client__sendbytes(void)
+{
+	struct ipc_client_connect_options options
+		= IPC_CLIENT_CONNECT_OPTIONS_INIT;
+
+	options.wait_if_busy = 1;
+	options.wait_if_not_found = 0;
+	options.uds_disallow_chdir = 0;
+
+	return do_sendbytes(cl_args.bytecount, cl_args.bytevalue, cl_args.path,
+			    &options);
+}
+
+struct multiple_thread_data {
+	pthread_t pthread_id;
+	struct multiple_thread_data *next;
+	const char *path;
+	int bytecount;
+	int batchsize;
+	int sum_errors;
+	int sum_good;
+	char letter;
+};
+
+static void *multiple_thread_proc(void *_multiple_thread_data)
+{
+	struct multiple_thread_data *d = _multiple_thread_data;
+	int k;
+	struct ipc_client_connect_options options
+		= IPC_CLIENT_CONNECT_OPTIONS_INIT;
+
+	options.wait_if_busy = 1;
+	options.wait_if_not_found = 0;
+	/*
+	 * A multi-threaded client should not be randomly calling chdir().
+	 * The test will pass without this restriction because the test is
+	 * not otherwise accessing the filesystem, but it makes us honest.
+	 */
+	options.uds_disallow_chdir = 1;
+
+	trace2_thread_start("multiple");
+
+	for (k = 0; k < d->batchsize; k++) {
+		if (do_sendbytes(d->bytecount + k, d->letter, d->path, &options))
+			d->sum_errors++;
+		else
+			d->sum_good++;
+	}
+
+	trace2_thread_exit();
+	return NULL;
+}
+
+/*
+ * Start a client-side thread pool.  Each thread sends a series of
+ * IPC requests.  Each request is on a new connection to the server.
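+ *
+ * Thread k uses letter ('A' + k % 26) and byte counts starting at
+ * (--bytecount + --batchsize * (k / 26)), so the (letter, length)
+ * pairs of the individual requests never collide (see the t0052
+ * "stress test threads" expectations below).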
+ */
+static int client__multiple(void)
+{
+	struct multiple_thread_data *list = NULL;
+	int k;
+	int sum_join_errors = 0;
+	int sum_thread_errors = 0;
+	int sum_good = 0;
+
+	for (k = 0; k < cl_args.nr_threads; k++) {
+		struct multiple_thread_data *d = xcalloc(1, sizeof(*d));
+		d->next = list;
+		d->path = cl_args.path;
+		d->bytecount = cl_args.bytecount + cl_args.batchsize*(k/26);
+		d->batchsize = cl_args.batchsize;
+		d->sum_errors = 0;
+		d->sum_good = 0;
+		d->letter = 'A' + (k % 26);
+
+		if (pthread_create(&d->pthread_id, NULL, multiple_thread_proc, d)) {
+			warning("failed to create thread[%d], skipping remainder", k);
+			free(d);
+			break;
+		}
+
+		list = d;
+	}
+
+	while (list) {
+		struct multiple_thread_data *d = list;
+
+		if (pthread_join(d->pthread_id, NULL))
+			sum_join_errors++;
+
+		sum_thread_errors += d->sum_errors;
+		sum_good += d->sum_good;
+
+		list = d->next;
+		free(d);
+	}
+
+	printf("client (good %d) (join %d), (errors %d)\n",
+	       sum_good, sum_join_errors, sum_thread_errors);
+
+	return (sum_join_errors + sum_thread_errors) ? 1 : 0;
+}
+
+int cmd__simple_ipc(int argc, const char **argv)
+{
+	const char * const simple_ipc_usage[] = {
+		N_("test-tool simple-ipc is-active    [<name>] [<options>]"),
+		N_("test-tool simple-ipc run-daemon   [<name>] [<threads>]"),
+		N_("test-tool simple-ipc start-daemon [<name>] [<threads>] [<max-wait>]"),
+		N_("test-tool simple-ipc stop-daemon  [<name>] [<max-wait>]"),
+		N_("test-tool simple-ipc send         [<name>] [<token>]"),
+		N_("test-tool simple-ipc sendbytes    [<name>] [<bytecount>] [<byte>]"),
+		N_("test-tool simple-ipc multiple     [<name>] [<threads>] [<bytecount>] [<batchsize>]"),
+		NULL
+	};
+
+	const char *bytevalue = NULL;
+
+	struct option options[] = {
+#ifndef GIT_WINDOWS_NATIVE
+		OPT_STRING(0, "name", &cl_args.path, N_("name"), N_("name or pathname of unix domain socket")),
+#else
+		OPT_STRING(0, "name", &cl_args.path, N_("name"), N_("named-pipe name")),
+#endif
+		OPT_INTEGER(0, "threads", &cl_args.nr_threads, N_("number of threads in server thread pool")),
+		OPT_INTEGER(0, "max-wait", &cl_args.max_wait_sec, N_("seconds to wait for daemon to start or stop")),
+
+		OPT_INTEGER(0, "bytecount", &cl_args.bytecount, N_("number of bytes")),
+		OPT_INTEGER(0, "batchsize", &cl_args.batchsize, N_("number of requests per thread")),
+
+		OPT_STRING(0, "byte", &bytevalue, N_("byte"), N_("ballast character")),
+		OPT_STRING(0, "token", &cl_args.token, N_("token"), N_("command token to send to the server")),
+
+		OPT_END()
+	};
+
+	if (argc < 2)
+		usage_with_options(simple_ipc_usage, options);
+
+	if (argc == 2 && !strcmp(argv[1], "-h"))
+		usage_with_options(simple_ipc_usage, options);
+
+	if (argc == 2 && !strcmp(argv[1], "SUPPORTS_SIMPLE_IPC"))
+		return 0;
+
+	cl_args.subcommand = argv[1];
+
+	argc--;
+	argv++;
+
+	argc = parse_options(argc, argv, NULL, options, simple_ipc_usage, 0);
+
+	if (cl_args.nr_threads < 1)
+		cl_args.nr_threads = 1;
+	if (cl_args.max_wait_sec < 0)
+		cl_args.max_wait_sec = 0;
+	if (cl_args.bytecount < 1)
+		cl_args.bytecount = 1;
+	if (cl_args.batchsize < 1)
+		cl_args.batchsize = 1;
+
+	if (bytevalue && *bytevalue)
+		cl_args.bytevalue = bytevalue[0];
+
+	/*
+	 * Use '!!' on all dispatch functions to map from `error()` style
+	 * (returns -1) to `test_must_fail` style (expects 1).  This
+	 * makes shell error messages less confusing.
+	 */
+
+	if (!strcmp(cl_args.subcommand, "is-active"))
+		return !!client__probe_server();
+
+	if (!strcmp(cl_args.subcommand, "run-daemon"))
+		return !!daemon__run_server();
+
+	if (!strcmp(cl_args.subcommand, "start-daemon"))
+		return !!daemon__start_server();
+
+	/*
+	 * Client commands follow.  Ensure a server is running before
+	 * sending any data.  This might be overkill, but then again
+	 * this is a test harness.
+	 */
+
+	if (!strcmp(cl_args.subcommand, "stop-daemon")) {
+		if (client__probe_server())
+			return 1;
+		return !!client__stop_server();
+	}
+
+	if (!strcmp(cl_args.subcommand, "send")) {
+		if (client__probe_server())
+			return 1;
+		return !!client__send_ipc();
+	}
+
+	if (!strcmp(cl_args.subcommand, "sendbytes")) {
+		if (client__probe_server())
+			return 1;
+		return !!client__sendbytes();
+	}
+
+	if (!strcmp(cl_args.subcommand, "multiple")) {
+		if (client__probe_server())
+			return 1;
+		return !!client__multiple();
+	}
+
+	die("Unhandled subcommand: '%s'", cl_args.subcommand);
+}
+#endif
diff --git a/t/helper/test-tool.c b/t/helper/test-tool.c
index f97cd9f48a69..287aa6002307 100644
--- a/t/helper/test-tool.c
+++ b/t/helper/test-tool.c
@@ -65,6 +65,7 @@ static struct test_cmd cmds[] = {
 	{ "sha1", cmd__sha1 },
 	{ "sha256", cmd__sha256 },
 	{ "sigchain", cmd__sigchain },
+	{ "simple-ipc", cmd__simple_ipc },
 	{ "strcmp-offset", cmd__strcmp_offset },
 	{ "string-list", cmd__string_list },
 	{ "submodule-config", cmd__submodule_config },
diff --git a/t/helper/test-tool.h b/t/helper/test-tool.h
index 28072c0ad5ab..9ea4b31011dd 100644
--- a/t/helper/test-tool.h
+++ b/t/helper/test-tool.h
@@ -55,6 +55,7 @@ int cmd__sha1(int argc, const char **argv);
 int cmd__oid_array(int argc, const char **argv);
 int cmd__sha256(int argc, const char **argv);
 int cmd__sigchain(int argc, const char **argv);
+int cmd__simple_ipc(int argc, const char **argv);
 int cmd__strcmp_offset(int argc, const char **argv);
 int cmd__string_list(int argc, const char **argv);
 int cmd__submodule_config(int argc, const char **argv);
diff --git a/t/t0052-simple-ipc.sh b/t/t0052-simple-ipc.sh
new file mode 100755
index 000000000000..ff98be31a51b
--- /dev/null
+++ b/t/t0052-simple-ipc.sh
@@ -0,0 +1,122 @@
+#!/bin/sh
+
+test_description='simple command server'
+
+. ./test-lib.sh
+
+test-tool simple-ipc SUPPORTS_SIMPLE_IPC || {
+	skip_all='simple IPC not supported on this platform'
+	test_done
+}
+
+stop_simple_IPC_server () {
+	test-tool simple-ipc stop-daemon
+}
+
+test_expect_success 'start simple command server' '
+	test_atexit stop_simple_IPC_server &&
+	test-tool simple-ipc start-daemon --threads=8 &&
+	test-tool simple-ipc is-active
+'
+
+test_expect_success 'simple command server' '
+	test-tool simple-ipc send --token=ping >actual &&
+	echo pong >expect &&
+	test_cmp expect actual
+'
+
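+# A second daemon cannot be started on the same path: the already
+# running daemon owns the socket/pipe (and keeps answering on it), so
+# `run-daemon` must fail here while `is-active` still succeeds.
+#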
+test_expect_success 'servers cannot share the same path' '
+	test_must_fail test-tool simple-ipc run-daemon &&
+	test-tool simple-ipc is-active
+'
+
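+# The "big", "chunk", and "slow" tokens ask the daemon's test_app_cb()
+# for large multi-line responses (presumably sent in one piece, in
+# chunks, and slowly, respectively); each response counts up to
+# "big: 9999" ("big: 99" for the slow case).
+#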
+test_expect_success 'big response' '
+	test-tool simple-ipc send --token=big >actual &&
+	test_line_count -ge 10000 actual &&
+	grep -q "big: [0]*9999\$" actual
+'
+
+test_expect_success 'chunk response' '
+	test-tool simple-ipc send --token=chunk >actual &&
+	test_line_count -ge 10000 actual &&
+	grep -q "big: [0]*9999\$" actual
+'
+
+test_expect_success 'slow response' '
+	test-tool simple-ipc send --token=slow >actual &&
+	test_line_count -ge 100 actual &&
+	grep -q "big: [0]*99\$" actual
+'
+
+# Send an IPC message with 100,000 bytes of ballast.  This should be large enough
+# to force both the kernel and the pkt-line layer to chunk the message to the
+# daemon and for the daemon to receive it in chunks.
+#
+test_expect_success 'sendbytes' '
+	test-tool simple-ipc sendbytes --bytecount=100000 --byte=A >actual &&
+	grep "sent:A00100000 rcvd:A00100000" actual
+'
+
+# Start a series of <threads> client threads that each make <batchsize>
+# IPC requests to the server.  Each of the (<threads> * <batchsize>)
+# requests opens a new connection to the server and randomly binds to a
+# server thread.  Each client thread exits after completing its batch,
+# so the number of concurrently live client threads is at most <threads>
+# (much smaller than the total number of requests).  Each request sends
+# a message containing at least <bytecount> bytes of ballast.
+# (Responses are small.)
+#
+# The purpose here is to test threading in the server and its handling
+# of many concurrent client requests (regardless of whether they come
+# from 1 client process or many), and to test that the server side of
+# the named pipe/socket is stable.  (On Windows this means that the
+# server pipe is properly recycled.)
+#
+# On Windows it also lets us adjust the connection timeout in
+# `ipc_client_send_command()`.
+#
+# Note that it is easy to drive the system into failure by requesting an
+# insane number of threads on the client or server and/or increasing the
+# per-thread batchsize or the per-request bytecount (ballast).
+# On Windows these failures look like "pipe is busy" errors.
+# So I've chosen fairly conservative values for now.
+#
+# We expect output of the form "sent:<letter><length> ..."
+# With the terms (7, 19, 13) used below we expect:
+#   <letter> in [A-G]
+#   <length> in [19+0 .. 19+(13-1)]
+# and (7 * 13) = 91 successful responses.
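+#
+# Note that test_line_count expects 92 lines because the client also
+# prints one summary line ("client (good 91) ...") after the 91
+# responses.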
+#
+test_expect_success 'stress test threads' '
+	test-tool simple-ipc multiple \
+		--threads=7 \
+		--bytecount=19 \
+		--batchsize=13 \
+		>actual &&
+	test_line_count = 92 actual &&
+	grep "good 91" actual &&
+	grep "sent:A" <actual >actual_a &&
+	cat >expect_a <<-EOF &&
+		sent:A00000019 rcvd:A00000019
+		sent:A00000020 rcvd:A00000020
+		sent:A00000021 rcvd:A00000021
+		sent:A00000022 rcvd:A00000022
+		sent:A00000023 rcvd:A00000023
+		sent:A00000024 rcvd:A00000024
+		sent:A00000025 rcvd:A00000025
+		sent:A00000026 rcvd:A00000026
+		sent:A00000027 rcvd:A00000027
+		sent:A00000028 rcvd:A00000028
+		sent:A00000029 rcvd:A00000029
+		sent:A00000030 rcvd:A00000030
+		sent:A00000031 rcvd:A00000031
+	EOF
+	test_cmp expect_a actual_a
+'
+
+test_expect_success 'stop-daemon works' '
+	test-tool simple-ipc stop-daemon &&
+	test_must_fail test-tool simple-ipc is-active &&
+	test_must_fail test-tool simple-ipc send --token=ping
+'
+
+test_done
-- 
gitgitgadget

end of thread, other threads:[~2021-03-22 10:31 UTC | newest]

Thread overview: 178+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-01-12 15:31 [PATCH 00/10] [RFC] Simple IPC Mechanism Jeff Hostetler via GitGitGadget
2021-01-12 15:31 ` [PATCH 01/10] pkt-line: use stack rather than static buffer in packet_write_gently() Jeff Hostetler via GitGitGadget
2021-01-13 13:29   ` Jeff King
2021-01-25 19:34     ` Jeff Hostetler
2021-01-12 15:31 ` [PATCH 02/10] pkt-line: (optionally) libify the packet readers Johannes Schindelin via GitGitGadget
2021-01-12 15:31 ` [PATCH 03/10] pkt-line: optionally skip the flush packet in write_packetized_from_buf() Johannes Schindelin via GitGitGadget
2021-01-12 15:31 ` [PATCH 04/10] pkt-line: accept additional options in read_packetized_to_strbuf() Johannes Schindelin via GitGitGadget
2021-01-12 15:31 ` [PATCH 05/10] simple-ipc: design documentation for new IPC mechanism Jeff Hostetler via GitGitGadget
2021-01-12 16:40   ` Ævar Arnfjörð Bjarmason
2021-01-12 15:31 ` [PATCH 06/10] simple-ipc: add win32 implementation Jeff Hostetler via GitGitGadget
2021-01-12 15:31 ` [PATCH 07/10] unix-socket: create gentle version of unix_stream_listen() Jeff Hostetler via GitGitGadget
2021-01-13 14:06   ` Jeff King
2021-01-14  1:19     ` Chris Torek
2021-01-12 15:31 ` [PATCH 08/10] unix-socket: add no-chdir option to unix_stream_listen_gently() Jeff Hostetler via GitGitGadget
2021-01-12 15:31 ` [PATCH 09/10] simple-ipc: add t/helper/test-simple-ipc and t0052 Jeff Hostetler via GitGitGadget
2021-01-12 15:31 ` [PATCH 10/10] simple-ipc: add Unix domain socket implementation Jeff Hostetler via GitGitGadget
2021-01-12 16:50 ` [PATCH 00/10] [RFC] Simple IPC Mechanism Ævar Arnfjörð Bjarmason
2021-01-12 18:25   ` Jeff Hostetler
2021-01-12 20:01 ` Junio C Hamano
2021-01-12 23:25   ` Jeff Hostetler
2021-01-13  0:13     ` Junio C Hamano
2021-01-13  0:32       ` Jeff Hostetler
2021-01-13 13:46     ` Jeff King
2021-01-13 15:48       ` Ævar Arnfjörð Bjarmason
2021-02-01 19:45 ` [PATCH v2 00/14] " Jeff Hostetler via GitGitGadget
2021-02-01 19:45   ` [PATCH v2 01/14] ci/install-depends: attempt to fix "brew cask" stuff Junio C Hamano via GitGitGadget
2021-02-01 19:45   ` [PATCH v2 02/14] pkt-line: promote static buffer in packet_write_gently() to callers Jeff Hostetler via GitGitGadget
2021-02-02  9:41     ` Jeff King
2021-02-02 20:33       ` Jeff Hostetler
2021-02-02 22:54       ` Johannes Schindelin
2021-02-03  4:52         ` Jeff King
2021-02-01 19:45   ` [PATCH v2 03/14] pkt-line: add write_packetized_from_buf2() that takes scratch buffer Jeff Hostetler via GitGitGadget
2021-02-02  9:44     ` Jeff King
2021-02-01 19:45   ` [PATCH v2 04/14] pkt-line: optionally skip the flush packet in write_packetized_from_buf() Johannes Schindelin via GitGitGadget
2021-02-02  9:48     ` Jeff King
2021-02-02 22:56       ` Johannes Schindelin
2021-02-05 18:30       ` Jeff Hostetler
2021-02-01 19:45   ` [PATCH v2 05/14] pkt-line: (optionally) libify the packet readers Johannes Schindelin via GitGitGadget
2021-02-01 19:45   ` [PATCH v2 06/14] pkt-line: accept additional options in read_packetized_to_strbuf() Johannes Schindelin via GitGitGadget
2021-02-11  1:52     ` Taylor Blau
2021-02-01 19:45   ` [PATCH v2 07/14] simple-ipc: design documentation for new IPC mechanism Jeff Hostetler via GitGitGadget
2021-02-01 19:45   ` [PATCH v2 08/14] simple-ipc: add win32 implementation Jeff Hostetler via GitGitGadget
2021-02-01 19:45   ` [PATCH v2 09/14] simple-ipc: add t/helper/test-simple-ipc and t0052 Jeff Hostetler via GitGitGadget
2021-02-02 21:35     ` SZEDER Gábor
2021-02-03  4:36       ` Jeff King
2021-02-09 15:45       ` Jeff Hostetler
2021-02-05 19:38     ` SZEDER Gábor
2021-02-01 19:45   ` [PATCH v2 10/14] unix-socket: elimiate static unix_stream_socket() helper function Jeff Hostetler via GitGitGadget
2021-02-02  9:54     ` Jeff King
2021-02-02  9:58     ` Jeff King
2021-02-01 19:45   ` [PATCH v2 11/14] unix-socket: add options to unix_stream_listen() Jeff Hostetler via GitGitGadget
2021-02-02 10:14     ` Jeff King
2021-02-05 23:28       ` Jeff Hostetler
2021-02-09 16:32         ` Jeff King
2021-02-09 17:39           ` Jeff Hostetler
2021-02-10 15:55             ` Jeff King
2021-02-10 21:31               ` Jeff Hostetler
2021-02-01 19:45   ` [PATCH v2 12/14] unix-socket: add no-chdir option " Jeff Hostetler via GitGitGadget
2021-02-02 10:26     ` Jeff King
2021-02-01 19:45   ` [PATCH v2 13/14] unix-socket: do not call die in unix_stream_connect() Jeff Hostetler via GitGitGadget
2021-02-01 19:45   ` [PATCH v2 14/14] simple-ipc: add Unix domain socket implementation Jeff Hostetler via GitGitGadget
2021-02-01 22:20   ` [PATCH v2 00/14] Simple IPC Mechanism Junio C Hamano
2021-02-01 23:26     ` Jeff Hostetler
2021-02-02 23:07       ` Johannes Schindelin
2021-02-04 19:08         ` Junio C Hamano
2021-02-05 13:19           ` candidate branches for `maint`, was " Johannes Schindelin
2021-02-05 19:55             ` Junio C Hamano
2021-02-13  0:09   ` [PATCH v3 00/12] " Jeff Hostetler via GitGitGadget
2021-02-13  0:09     ` [PATCH v3 01/12] pkt-line: eliminate the need for static buffer in packet_write_gently() Jeff Hostetler via GitGitGadget
2021-02-13  0:09     ` [PATCH v3 02/12] pkt-line: do not issue flush packets in write_packetized_*() Johannes Schindelin via GitGitGadget
2021-02-13  0:09     ` [PATCH v3 03/12] pkt-line: (optionally) libify the packet readers Johannes Schindelin via GitGitGadget
2021-02-13  0:09     ` [PATCH v3 04/12] pkt-line: add options argument to read_packetized_to_strbuf() Johannes Schindelin via GitGitGadget
2021-02-13  0:09     ` [PATCH v3 05/12] simple-ipc: design documentation for new IPC mechanism Jeff Hostetler via GitGitGadget
2021-02-13  0:09     ` [PATCH v3 06/12] simple-ipc: add win32 implementation Jeff Hostetler via GitGitGadget
2021-02-13  0:09     ` [PATCH v3 07/12] unix-socket: elimiate static unix_stream_socket() helper function Jeff Hostetler via GitGitGadget
2021-02-13  0:09     ` [PATCH v3 08/12] unix-socket: add backlog size option to unix_stream_listen() Jeff Hostetler via GitGitGadget
2021-02-13  0:09     ` [PATCH v3 09/12] unix-socket: disallow chdir() when creating unix domain sockets Jeff Hostetler via GitGitGadget
2021-02-13  0:09     ` [PATCH v3 10/12] unix-socket: create `unix_stream_server__listen_with_lock()` Jeff Hostetler via GitGitGadget
2021-02-13  0:09     ` [PATCH v3 11/12] simple-ipc: add Unix domain socket implementation Jeff Hostetler via GitGitGadget
2021-02-13  0:09     ` [PATCH v3 12/12] t0052: add simple-ipc tests and t/helper/test-simple-ipc tool Jeff Hostetler via GitGitGadget
2021-02-13  9:30       ` SZEDER Gábor
2021-02-16 15:53         ` Jeff Hostetler
2021-02-17 21:48     ` [PATCH v4 00/12] Simple IPC Mechanism Jeff Hostetler via GitGitGadget
2021-02-17 21:48       ` [PATCH v4 01/12] pkt-line: eliminate the need for static buffer in packet_write_gently() Jeff Hostetler via GitGitGadget
2021-02-26  7:21         ` Jeff King
2021-02-26 19:52           ` Jeff Hostetler
2021-02-26 20:43             ` Jeff King
2021-03-03 19:38             ` Junio C Hamano
2021-03-04 13:29               ` Jeff Hostetler
2021-03-04 20:26                 ` Junio C Hamano
2021-02-17 21:48       ` [PATCH v4 02/12] pkt-line: do not issue flush packets in write_packetized_*() Johannes Schindelin via GitGitGadget
2021-02-17 21:48       ` [PATCH v4 03/12] pkt-line: (optionally) libify the packet readers Johannes Schindelin via GitGitGadget
2021-03-03 19:53         ` Junio C Hamano
2021-03-04 14:17           ` Jeff Hostetler
2021-03-04 14:40             ` Jeff King
2021-03-04 20:28               ` Junio C Hamano
2021-02-17 21:48       ` [PATCH v4 04/12] pkt-line: add options argument to read_packetized_to_strbuf() Johannes Schindelin via GitGitGadget
2021-02-17 21:48       ` [PATCH v4 05/12] simple-ipc: design documentation for new IPC mechanism Jeff Hostetler via GitGitGadget
2021-03-03 20:19         ` Junio C Hamano
2021-02-17 21:48       ` [PATCH v4 06/12] simple-ipc: add win32 implementation Jeff Hostetler via GitGitGadget
2021-02-17 21:48       ` [PATCH v4 07/12] unix-socket: elimiate static unix_stream_socket() helper function Jeff Hostetler via GitGitGadget
2021-02-26  7:25         ` Jeff King
2021-03-03 20:41         ` Junio C Hamano
2021-02-17 21:48       ` [PATCH v4 08/12] unix-socket: add backlog size option to unix_stream_listen() Jeff Hostetler via GitGitGadget
2021-02-26  7:30         ` Jeff King
2021-03-03 20:54           ` Junio C Hamano
2021-02-17 21:48       ` [PATCH v4 09/12] unix-socket: disallow chdir() when creating unix domain sockets Jeff Hostetler via GitGitGadget
2021-03-03 22:53         ` Junio C Hamano
2021-03-04 14:56           ` Jeff King
2021-03-04 20:34             ` Junio C Hamano
2021-03-04 23:34               ` Junio C Hamano
2021-03-05  9:02                 ` Jeff King
2021-03-05  9:25                   ` Jeff King
2021-03-05 11:59                     ` Chris Torek
2021-03-05 17:33                       ` Jeff Hostetler
2021-03-05 17:53                         ` Junio C Hamano
2021-03-05 21:30               ` Jeff Hostetler
2021-03-05 21:52                 ` Junio C Hamano
2021-02-17 21:48       ` [PATCH v4 10/12] unix-socket: create `unix_stream_server__listen_with_lock()` Jeff Hostetler via GitGitGadget
2021-02-26  7:56         ` Jeff King
2021-03-02 23:50           ` Jeff Hostetler
2021-03-04 15:13             ` Jeff King
2021-02-17 21:48       ` [PATCH v4 11/12] simple-ipc: add Unix domain socket implementation Jeff Hostetler via GitGitGadget
2021-02-17 21:48       ` [PATCH v4 12/12] t0052: add simple-ipc tests and t/helper/test-simple-ipc tool Jeff Hostetler via GitGitGadget
2021-03-02  9:44         ` Jeff King
2021-03-03 15:25           ` Jeff Hostetler
2021-02-25 19:39       ` [PATCH v4 00/12] Simple IPC Mechanism Junio C Hamano
2021-02-26  7:59         ` Jeff King
2021-02-26 20:18           ` Jeff Hostetler
2021-02-26 20:50             ` Jeff King
2021-03-03 19:29               ` Junio C Hamano
2021-03-09 15:02       ` [PATCH v5 " Jeff Hostetler via GitGitGadget
2021-03-09 15:02         ` [PATCH v5 01/12] pkt-line: eliminate the need for static buffer in packet_write_gently() Jeff Hostetler via GitGitGadget
2021-03-09 23:48           ` Junio C Hamano
2021-03-11 19:29             ` Jeff King
2021-03-11 20:32               ` Junio C Hamano
2021-03-11 20:53                 ` Jeff King
2021-03-09 15:02         ` [PATCH v5 02/12] pkt-line: do not issue flush packets in write_packetized_*() Johannes Schindelin via GitGitGadget
2021-03-09 15:02         ` [PATCH v5 03/12] pkt-line: add PACKET_READ_GENTLE_ON_READ_ERROR option Johannes Schindelin via GitGitGadget
2021-03-09 15:02         ` [PATCH v5 04/12] pkt-line: add options argument to read_packetized_to_strbuf() Johannes Schindelin via GitGitGadget
2021-03-09 15:02         ` [PATCH v5 05/12] simple-ipc: design documentation for new IPC mechanism Jeff Hostetler via GitGitGadget
2021-03-09 15:02         ` [PATCH v5 06/12] simple-ipc: add win32 implementation Jeff Hostetler via GitGitGadget
2021-03-09 15:02         ` [PATCH v5 07/12] unix-socket: eliminate static unix_stream_socket() helper function Jeff Hostetler via GitGitGadget
2021-03-09 15:02         ` [PATCH v5 08/12] unix-socket: add backlog size option to unix_stream_listen() Jeff Hostetler via GitGitGadget
2021-03-09 15:02         ` [PATCH v5 09/12] unix-socket: disallow chdir() when creating unix domain sockets Jeff Hostetler via GitGitGadget
2021-03-09 15:02         ` [PATCH v5 10/12] unix-stream-server: create unix domain socket under lock Jeff Hostetler via GitGitGadget
2021-03-10  0:18           ` Junio C Hamano
2021-03-09 15:02         ` [PATCH v5 11/12] simple-ipc: add Unix domain socket implementation Jeff Hostetler via GitGitGadget
2021-03-10  0:08           ` Junio C Hamano
2021-03-15 19:56             ` Jeff Hostetler
2021-03-09 15:02         ` [PATCH v5 12/12] t0052: add simple-ipc tests and t/helper/test-simple-ipc tool Jeff Hostetler via GitGitGadget
2021-03-09 23:28         ` [PATCH v5 00/12] Simple IPC Mechanism Junio C Hamano
2021-03-15 21:08         ` [PATCH v6 " Jeff Hostetler via GitGitGadget
2021-03-15 21:08           ` [PATCH v6 01/12] pkt-line: eliminate the need for static buffer in packet_write_gently() Jeff Hostetler via GitGitGadget
2021-03-15 21:08           ` [PATCH v6 02/12] pkt-line: do not issue flush packets in write_packetized_*() Johannes Schindelin via GitGitGadget
2021-03-15 21:08           ` [PATCH v6 03/12] pkt-line: add PACKET_READ_GENTLE_ON_READ_ERROR option Johannes Schindelin via GitGitGadget
2021-03-15 21:08           ` [PATCH v6 04/12] pkt-line: add options argument to read_packetized_to_strbuf() Johannes Schindelin via GitGitGadget
2021-03-15 21:08           ` [PATCH v6 05/12] simple-ipc: design documentation for new IPC mechanism Jeff Hostetler via GitGitGadget
2021-03-15 21:08           ` [PATCH v6 06/12] simple-ipc: add win32 implementation Jeff Hostetler via GitGitGadget
2021-03-15 21:08           ` [PATCH v6 07/12] unix-socket: eliminate static unix_stream_socket() helper function Jeff Hostetler via GitGitGadget
2021-03-15 21:08           ` [PATCH v6 08/12] unix-socket: add backlog size option to unix_stream_listen() Jeff Hostetler via GitGitGadget
2021-03-15 21:08           ` [PATCH v6 09/12] unix-socket: disallow chdir() when creating unix domain sockets Jeff Hostetler via GitGitGadget
2021-03-15 21:08           ` [PATCH v6 10/12] unix-stream-server: create unix domain socket under lock Jeff Hostetler via GitGitGadget
2021-03-15 21:08           ` [PATCH v6 11/12] simple-ipc: add Unix domain socket implementation Jeff Hostetler via GitGitGadget
2021-03-15 21:08           ` [PATCH v6 12/12] t0052: add simple-ipc tests and t/helper/test-simple-ipc tool Jeff Hostetler via GitGitGadget
2021-03-22 10:29           ` [PATCH v7 00/12] Simple IPC Mechanism Jeff Hostetler via GitGitGadget
2021-03-22 10:29             ` [PATCH v7 01/12] pkt-line: eliminate the need for static buffer in packet_write_gently() Jeff Hostetler via GitGitGadget
2021-03-22 10:29             ` [PATCH v7 02/12] pkt-line: do not issue flush packets in write_packetized_*() Johannes Schindelin via GitGitGadget
2021-03-22 10:29             ` [PATCH v7 03/12] pkt-line: add PACKET_READ_GENTLE_ON_READ_ERROR option Johannes Schindelin via GitGitGadget
2021-03-22 10:29             ` [PATCH v7 04/12] pkt-line: add options argument to read_packetized_to_strbuf() Johannes Schindelin via GitGitGadget
2021-03-22 10:29             ` [PATCH v7 05/12] simple-ipc: design documentation for new IPC mechanism Jeff Hostetler via GitGitGadget
2021-03-22 10:29             ` [PATCH v7 06/12] simple-ipc: add win32 implementation Jeff Hostetler via GitGitGadget
2021-03-22 10:29             ` [PATCH v7 07/12] unix-socket: eliminate static unix_stream_socket() helper function Jeff Hostetler via GitGitGadget
2021-03-22 10:29             ` [PATCH v7 08/12] unix-socket: add backlog size option to unix_stream_listen() Jeff Hostetler via GitGitGadget
2021-03-22 10:29             ` [PATCH v7 09/12] unix-socket: disallow chdir() when creating unix domain sockets Jeff Hostetler via GitGitGadget
2021-03-22 10:29             ` [PATCH v7 10/12] unix-stream-server: create unix domain socket under lock Jeff Hostetler via GitGitGadget
2021-03-22 10:29             ` [PATCH v7 11/12] simple-ipc: add Unix domain socket implementation Jeff Hostetler via GitGitGadget
2021-03-22 10:29             ` [PATCH v7 12/12] t0052: add simple-ipc tests and t/helper/test-simple-ipc tool Jeff Hostetler via GitGitGadget
