* happy io_uring_prep_accept_direct() submissions go hiding!
@ 2021-12-08 19:07 jrun
  2021-12-08 19:16 ` [oops!] " jrun
  2021-12-09 15:02 ` Pavel Begunkov
  0 siblings, 2 replies; 6+ messages in thread
From: jrun @ 2021-12-08 19:07 UTC (permalink / raw)
  To: io-uring; +Cc: Pavel Begunkov

hello,

- this may very well be something simple i'm missing, so apologies in advance. -

_some_ calls to io_uring_prep_accept_direct() never make it back from the
kernel! or so it seems... since io_uring_prep_accept_direct() is a new
addition to io_uring, i thought i'd check with you first and get some help if
possible.


---------
TEST_PROG:
---------

bundled to this msg is a git repo with the crap i've put together, where i
encounter this. to compile/run it, save the bundle somewhere, say under
`/tmp/`, and then do:

```
cd /tmp/
git clone wsub.git wsub
cd wsub
# maybe have a look at build.sh before running the following
# it will install a single binary under ~/.local/bin
# also it will fire up the binary, the server part, wsub, right away
sh build.sh

# then from a different terminal
cd /tmp/wsub/client
# the brace range below works in zsh; in bash use: for i in $(seq 0 $MAX_CONNECTIONS)
MAX_CONNECTIONS=4; for i in {0..$MAX_CONNECTIONS}; do ./client foo; done
```

srv starts listening on an *abstract* unix socket, named after the binary, which
should turn up in the output of this, if you have ss(8) installed:

`ss -l -x --socket=unix_seqpacket`
it will be called `@wsub` if you don't change anything.
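
in case it helps, this is roughly how the abstract address gets built (a minimal
sketch, not the exact code from the repo; `abstract_addr` is just an
illustrative name):

```
#include <stddef.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>

/* abstract namespace: sun_path[0] stays '\0', the name follows it, and the
 * returned addrlen covers only the bytes actually used */
static socklen_t abstract_addr(struct sockaddr_un *sa, const char *name)
{
	memset(sa, 0, sizeof(*sa));
	sa->sun_family = AF_UNIX;
	strncpy(sa->sun_path + 1, name, sizeof(sa->sun_path) - 2);
	return offsetof(struct sockaddr_un, sun_path) + 1 + strlen(name);
}
```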

the client bit just sends its first arg, "foo" in this case, to the server, and
srv prints it out to its stderr.


--------
PROBLEM:
--------

every call to io_uring_prep_accept_direct() via q_accept() made before entering
event_loop() (main.c:587) gets properly completed, but subsequent calls to
io_uring_prep_accept_direct() made after entering event_loop()
(main.c:487, `case ACCEPT:`)
never turn up on the ring's cq! you will notice that all other submissions inside
event_loop(), to the same ring, get completed fine.

note also that the io_uring_prep_accept_direct() completions do make it back
once there is a new connection!

running the client bit one-by-one might illustrate the point better.
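
for context, q_accept() boils down to roughly this (a sketch only, not the exact
code from the repo; `slot` is whatever index was registered with
io_uring_register_files(), and `conn_state` stands in for the per-connection
struct):

```
#include <sys/socket.h>
#include <liburing.h>

/* sketch: queue one direct accept; the accepted socket goes into the
 * registered file table at `slot`, not into the process fd table */
static void queue_accept_direct(struct io_uring *ring, int listen_fd,
				struct sockaddr_storage *peer,
				socklen_t *peer_len, unsigned slot,
				void *conn_state)
{
	struct io_uring_sqe *sqe = io_uring_get_sqe(ring);

	io_uring_prep_accept_direct(sqe, listen_fd, (struct sockaddr *)peer,
				    peer_len, 0, slot);
	io_uring_sqe_set_data(sqe, conn_state);
}
```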

i also experimented with IORING_SETUP_SQPOLL, following different articles, but
got the same result for the io_uring_prep_accept_direct() submissions.

thoughts?


	- jrun


* Re: [oops!] happy io_uring_prep_accept_direct() submissions go hiding!
  2021-12-08 19:07 happy io_uring_prep_accept_direct() submissions go hiding! jrun
@ 2021-12-08 19:16 ` jrun
  2021-12-09 15:02 ` Pavel Begunkov
  1 sibling, 0 replies; 6+ messages in thread
From: jrun @ 2021-12-08 19:16 UTC (permalink / raw)
  To: io-uring; +Cc: Pavel Begunkov

[-- Attachment #1: Type: text/plain, Size: 24 bytes --]


forgot the attachment!

[-- Attachment #2: wsub.git --]
[-- Type: application/octet-stream, Size: 11263 bytes --]


* Re: happy io_uring_prep_accept_direct() submissions go hiding!
  2021-12-08 19:07 happy io_uring_prep_accept_direct() submissions go hiding! jrun
  2021-12-08 19:16 ` [oops!] " jrun
@ 2021-12-09 15:02 ` Pavel Begunkov
  2021-12-09 17:56   ` jrun
  1 sibling, 1 reply; 6+ messages in thread
From: Pavel Begunkov @ 2021-12-09 15:02 UTC (permalink / raw)
  To: jrun, io-uring

On 12/8/21 19:07, jrun wrote:
> hello,
> 
> - this may very well be something simple i'm missing, so apologies in advance. -
> 
> _some_ calls to io_uring_prep_accept_direct() never make it back from the
> kernel! or so it seems... since io_uring_prep_accept_direct() is a new
> addition to io_uring, i thought i'd check with you first and get some help if
> possible.

I don't see how a CQE could go missing, so let me ask a bunch of questions:

First, let me check my understanding of your problem. At the beginning you
submit MAX_CONNECTIONS/2 accept requests and _all_ of them complete. In the main
loop you add another bunch of accepts, but you're not getting CQEs from them.
Right?

1) Anything in dmesg? When it gets stuck (or whatever the symptoms are), please
don't kill it but wait for 3 minutes and check dmesg again.

Or, to reduce the waiting time:
"echo 10 > /proc/sys/kernel/hung_task_timeout_secs"

Then, if anything is wrong, it should appear in dmesg within 20-30 secs at most.

2) What kernel version are you running?

3) Have you tried normal accept (non-direct)?

4) Can you try increasing the max number of io-wq workers so it exceeds the max
number of inflight requests? Increase RLIMIT_NPROC, e.g. set
RLIMIT_NPROC = nr_threads + max inflight requests (a sketch follows the list).

5) Do you get CQEs when you shutdown listening sockets?

6) Do you check return values of io_uring_submit()?

7) Any variability during execution? E.g. a different number of
sockets get accepted.
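
For 4), something along these lines (just a sketch; nr_threads and max_inflight
stand in for your actual numbers):

```
#include <stdio.h>
#include <sys/resource.h>

/* sketch: let io-wq spawn at least nr_threads + max_inflight tasks */
static void bump_nproc(rlim_t nr_threads, rlim_t max_inflight)
{
	struct rlimit rl;

	getrlimit(RLIMIT_NPROC, &rl);
	rl.rlim_cur = nr_threads + max_inflight;
	if (rl.rlim_cur > rl.rlim_max)
		rl.rlim_max = rl.rlim_cur;	/* raising the hard limit needs privilege */
	if (setrlimit(RLIMIT_NPROC, &rl) < 0)
		perror("setrlimit(RLIMIT_NPROC)");
}
```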


> ---------
> TEST_PROG:
> ---------
> 
> bundled to this msg is a git repo with the crap i've put together, where i
> encounter this. to compile/run it, save the bundle somewhere, say under
> `/tmp/`, and then do:
> 
> ```
> cd /tmp/
> git clone wsub.git wsub
> cd wsub
> # maybe have a look at build.sh before running the following
> # it will install a single binary under ~/.local/bin
> # also it will fire up the binary, the server part, wsub, right away
> sh build.sh
> 
> # then from a different terminal
> cd /tmp/wsub/client
> # the brace range below works in zsh; in bash use: for i in $(seq 0 $MAX_CONNECTIONS)
> MAX_CONNECTIONS=4; for i in {0..$MAX_CONNECTIONS}; do ./client foo; done
> ```
> 
> srv starts listening on an *abstract* unix socket, named after the binary, which
> should turn up in the output of this, if you have ss(8) installed:
> 
> `ss -l -x --socket=unix_seqpacket`
> it will be called `@wsub` if you don't change anything.
> 
> the client bit just sends its first arg, "foo" in this case, to the server, and
> srv prints it out to its stderr.
> 
> 
> --------
> PROBLEM:
> --------
> 
> every call to io_uring_prep_accept_direct() via q_accept() made before entering
> event_loop() (main.c:587) gets properly completed, but subsequent calls to
> io_uring_prep_accept_direct() made after entering event_loop()
> (main.c:487, `case ACCEPT:`)
> never turn up on the ring's cq! you will notice that all other submissions inside
> event_loop(), to the same ring, get completed fine.
> 
> note also that the io_uring_prep_accept_direct() completions do make it back
> once there is a new connection!
> 
> running the client bit one-by-one might illustrate the point better.
> 
> i also experimented with IORING_SETUP_SQPOLL, following different articles, but
> got the same result for the io_uring_prep_accept_direct() submissions.
> 
> thoughts?
> 
> 
> 	- jrun
> 

-- 
Pavel Begunkov


* Re: happy io_uring_prep_accept_direct() submissions go hiding!
  2021-12-09 15:02 ` Pavel Begunkov
@ 2021-12-09 17:56   ` jrun
  2021-12-09 19:34     ` possible bug with unix sockets jrun
  2021-12-09 20:34     ` jrun
  0 siblings, 2 replies; 6+ messages in thread
From: jrun @ 2021-12-09 17:56 UTC (permalink / raw)
  To: Pavel Begunkov; +Cc: io-uring

On Thu, Dec 09, 2021 at 03:02:12PM +0000, Pavel Begunkov wrote:
> I don't see how a CQE could go missing, so let me ask a bunch of questions:
> 
> First, let me check my understanding of your problem. At the beginning you
> submit MAX_CONNECTIONS/2 accept requests and _all_ of them complete.

correct.

> In the main loop you add another bunch of accepts, but you're not getting CQEs
> from them. Right?

yes, io_uring_prep_accept_direct() submissions before entering the main loop
complete. any io_uring_prep_accept_direct() submitted from within the main loop
goes missing.

> 1) Anything in dmesg? When it gets stuck (or whatever the symptoms are), please
> don't kill it but wait for 3 minutes and check dmesg again.
>

nothing in dmesg!

> Or, to reduce the waiting time:
> "echo 10 > /proc/sys/kernel/hung_task_timeout_secs"

oh, my kernel[mek] is missing that; rebuilding right now with
`CONFIG_DETECT_HUNG_TASK=y`; will report back after reboot.

btw, enabled CONFIG_WQ_WATCHDOG=y for workqueue.watchdog_thresh; don't know if
that would help too. let me know.

also any magic with bpftrace you would suggest?

> Then, if anything is wrong, it should appear in dmesg within 20-30 secs at most.
>
> 2) What kernel version are you running?

[mek]: Linux 5.15.6-gentoo-p51 #5 SMP PREEMPT x86_64 i7-7700HQ

> 3) Have you tried normal accept (non-direct)?

no, will try, but accept_direct worked for me before introducing pthread into
the code. don't know if it matters.

> 4) Can you try increasing the max number of io-wq workers so it exceeds the max
> number of inflight requests? Increase RLIMIT_NPROC, e.g. set
> RLIMIT_NPROC = nr_threads + max inflight requests.

i only have 1 thread atm but will try this with the new kernel and report back.

> 5) Do you get CQEs when you shutdown listening sockets?

yes! the io_uring_prep_close_direct() call (there is only one, inside dq_msg())
comes in on the subsequent arrival of connect() requests from the client.
tested with and without IOSQE_ASYNC set.
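
(for reference, the close here is the direct variant, i.e. it releases the
fixed-file slot rather than a normal fd; just a sketch, `slot` and `conn` are
placeholders:)

```
#include <liburing.h>

/* sketch: close the registered-file slot that accept_direct filled */
static void queue_close_direct(struct io_uring *ring, unsigned slot, void *conn)
{
	struct io_uring_sqe *sqe = io_uring_get_sqe(ring);

	io_uring_prep_close_direct(sqe, slot);
	io_uring_sqe_set_data(sqe, conn);
}
```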

> 6) Do you check return values of io_uring_submit()?
> 
> 7) Any variability during execution? E.g. a different number of
> sockets get accepted.

with IORING_SETUP_SQPOLL, i was getting different numbers for
pending (= io_uring_sq_ready(ring)) vs submitted (= io_uring_submit(ring)),
according to the commented block at the beginning of the event loop. don't
know if that's the way to check what you're
asking. let me know please.
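
fwiw the commented block is basically this (sketch of what i'm comparing, not
verbatim from the repo):

```
#include <stdio.h>
#include <liburing.h>

/* sketch: compare what was queued vs what actually got submitted */
static void submit_and_report(struct io_uring *ring)
{
	unsigned pending = io_uring_sq_ready(ring);
	int submitted = io_uring_submit(ring);

	fprintf(stderr, "pending=%u submitted=%d\n", pending, submitted);
}
```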


thanks for the help,
	- jrun


* possible bug with unix sockets
  2021-12-09 17:56   ` jrun
@ 2021-12-09 19:34     ` jrun
  2021-12-09 20:34     ` jrun
  1 sibling, 0 replies; 6+ messages in thread
From: jrun @ 2021-12-09 19:34 UTC (permalink / raw)
  To: io-uring; +Cc: Pavel Begunkov

[-- Attachment #1: Type: text/plain, Size: 1798 bytes --]

On Thu, Dec 09, 2021 at 12:56:36PM -0500, jrun wrote:
> On Thu, Dec 09, 2021 at 03:02:12PM +0000, Pavel Begunkov wrote:
> > 1) Anything in dmesg? When it gets stuck (or whatever the symptoms are), please
> > don't kill it but wait for 3 minutes and check dmesg again.
> >
> 
> nothing in dmesg!
> 
> > Or, to reduce the waiting time:
> > "echo 10 > /proc/sys/kernel/hung_task_timeout_secs"
> 
> oh, my kernel[mek] is missing that; rebuilding right now with
> `CONFIG_DETECT_HUNG_TASK=y`; will report back after reboot.
> 
> btw, enabled CONFIG_WQ_WATCHDOG=y for workqueue.watchdog_thresh; don't know if
> that would help too. let me know.

nothing!

> > 3) Have you tried normal accept (non-direct)?

hum, io_uring_prep_accept() also goes out for lunch.

wait a minute, i see something (BUG?):
all things equal, unix sockets fail but a tcp socket works. i can investigate
further to see if it has to do with _abstract_ unix sockets. let me know.

to test, apply the attached patch to the original repo in this thread.

> no, will try, but accept_direct worked for me before introducing pthread into
> the code. don't know if it matters.
> 
> > 4) Can you try increasing the max number of io-wq workers so it exceeds the max
> > number of inflight requests? Increase RLIMIT_NPROC, e.g. set
> > RLIMIT_NPROC = nr_threads + max inflight requests.

i'm maxed out i think; i'm doing this at the top of main() anyway:

```
struct rlimit rlim = {0};
getrlimit(RLIMIT_NPROC, &rlim);
if (rlim.rlim_cur != RLIM_INFINITY || rlim.rlim_max != RLIM_INFINITY) {
	fprintf(stderr, "rlim.rlim_cur=%lu rlim.rlim_max=%lu\n",
		rlim.rlim_cur, rlim.rlim_max);
	rlim.rlim_cur = RLIM_INFINITY;
	rlim.rlim_max = RLIM_INFINITY;
	/* ret is declared earlier in main(); check that setrlimit succeeded */
	ret = setrlimit(RLIMIT_NPROC, &rlim);
	perror("setrlimit");
	if (ret)
		exit(EX_SOFTWARE);
}
```


	- jrun

[-- Attachment #2: 0001-try-with-tcp.patch --]
[-- Type: text/x-diff, Size: 6781 bytes --]

From fd6c8c353c28a2c3de39847957661014b6142470 Mon Sep 17 00:00:00 2001
From: Paymon MARANDI <darwinskernel@gmail.com>
Date: Thu, 9 Dec 2021 14:30:19 -0500
Subject: [PATCH] try with tcp

---
 main.c | 106 ++++++++++++++++++++++++++++++++++++---------------------
 1 file changed, 68 insertions(+), 38 deletions(-)

diff --git a/main.c b/main.c
index cc14a89..bc2f21f 100644
--- a/main.c
+++ b/main.c
@@ -16,6 +16,7 @@
 #include <arpa/inet.h>
 #include <net/if.h>
 #include <pthread.h>
+#include <sys/resource.h>
 
 #include <liburing.h>
 
@@ -156,9 +157,16 @@ static void q_accept(struct io_uring *ring, conn_info *conn_i)
 		sqe, conn_i->skf.mama,
 		(struct sockaddr *)&conn_i->skf.child.peer.addr,
 		&conn_i->skf.child.peer.addrlen, 0, conn_i->gbid);
+	/*
+	io_uring_prep_accept(
+		sqe, conn_i->skf.mama,
+		(struct sockaddr *)&conn_i->skf.child.peer.addr,
+		&conn_i->skf.child.peer.addrlen, 0);
+	*/
 	conn_i->state = ACCEPT;
 	io_uring_sqe_set_data(sqe, conn_i);
 
+	// sqe->flags |= IOSQE_IO_DRAIN;
 	// sqe->flags |= IOSQE_IO_LINK | IOSQE_IO_HARDLINK;
 	/* if (conn_i->reacts->recv.q) */
 	/*	conn_i->reacts->recv.q(ring, conn_i); */
@@ -377,8 +385,12 @@ static int conn_factory(conn_info **conns, struct io_uring *ring,
 		if ((conns[i]->provides_buf = tmpl.provides_buf))
 			conns[i]->reacts->buf.q(ring, conns[i]);
 	}
-	if (!tmpl.provides_buf)
+	if (!tmpl.provides_buf) {
+		for (int i = 0; i < max_conn; i++)
+			conns[i]->state = READY;
+
 		return ret;
+	}
 
 	unsigned head;
 	unsigned count = 0;
@@ -543,11 +555,11 @@ void *ctrlsock(void *name)
 		exit(EX_OSERR);
 
 	struct io_uring ring = { 0 };
-	/* struct io_uring_params p = { */
-	/*	.flags = IORING_SETUP_SQPOLL, */
-	/*	.sq_thread_idle = 2000, */
-	/* }; */
-	struct io_uring_params p = { 0 };
+	// struct io_uring_params p = { 0 };
+	 struct io_uring_params p = {
+		.flags = IORING_SETUP_SQPOLL,
+		.sq_thread_idle = 2000,
+	 };
 
 	ret = io_uring_queue_init_params(MAX_CONNECTIONS * 2, &ring, &p);
 	perrork(ret, "io_uring_queue_init_params::unix");
@@ -565,12 +577,13 @@ void *ctrlsock(void *name)
 	conn_info tmpl = {
 		.buf_len = 1024,
 		.buf_num_seg = 1,
-		.reacts = &reacts,
 		.provides_buf = true,
+		.reacts = &reacts,
 	};
 	conns[0] = &tmpl;
 	ret = conn_factory(conns, &ring, children_pool, unix_addr,
 			   MAX_CONNECTIONS);
+	perrork(ret, "conn_factory(unix)");
 	if (ret) {
 		fprintf(stderr, "%s::%s %s\n", "conn_factory", __func__,
 			strerror(ret));
@@ -584,8 +597,7 @@ void *ctrlsock(void *name)
 		exit(EX_OSERR);
 	}
 
-	for (int i=0; i<MAX_CONNECTIONS/2; i++)
-		conns[i]->reacts->accept.q(&ring, conns[i]);
+	conns[0]->reacts->accept.q(&ring, conns[0]);
 
 	fprintf(stderr, "accepting connections to @%s\n", (char *)name);
 
@@ -624,33 +636,29 @@ static void *wsub(void *data)
 	children_pool[0] = socket(args->addrstore.ss_family, SOCK_STREAM, 0);
 	/* shorthand */
 	__s32 mama = children_pool[0];
-	if (mama == -1) {
-		perror("sock");
+	perror("sock");
+	if (mama == -1)
 		exit(EX_OSERR);
-	}
 
 	int val = 1;
 	int ret = setsockopt(mama, SOL_SOCKET, SO_REUSEPORT | SO_REUSEADDR,
 			     &val, sizeof(val));
-	if (ret == -1) {
-		perror("setsockopt(wsub)");
+	perror("setsockopt(wsub)");
+	if (ret == -1)
 		exit(EX_OSERR);
-	}
 
 	int tcp_f = TCP_NODELAY | TCP_DEFER_ACCEPT;
 	ret = setsockopt(mama, IPPROTO_TCP, tcp_f, &val, sizeof(val));
-	if (ret == -1) {
-		perror("setsockopt(tcp_f)");
+	perror("setsockopt(tcp_f)");
+	if (ret == -1)
 		exit(EX_OSERR);
-	}
 
 	if (args->ipv6) {
 		ret = setsockopt(mama, IPPROTO_IP, IPV6_RECVPKTINFO, &val,
 				 sizeof(val));
-		if (ret == -1) {
-			perror("setsockopt(ipv6)");
+		perror("setsockopt(ipv6)");
+		if (ret == -1)
 			exit(EX_OSERR);
-		}
 		// "fe80::5b3e:1bc6:ac47:c5c4",
 		// wsub_addr->sin6.sin6_scope_id = if_nametoindex("enp0s31f6");
 		// wsub_addr.sin6_addr = inet_pton(in6addr_loopback);
@@ -667,27 +675,30 @@ static void *wsub(void *data)
 	}
 
 	// bind and listen
-	if (bind(mama, (struct sockaddr *)args->addr, sizeof(*args->addr)) <
-	    0) {
-		perror("bind(wsub)");
+	ret = bind(mama, (struct sockaddr *)args->addr, sizeof(*args->addr));
+	perror("bind(wsub)");
+	if (ret)
 		exit(EX_OSERR);
-	}
-	if (listen(mama, BACKLOG) < 0) {
-		perror("listen(wsub)");
+	ret = listen(mama, BACKLOG);
+	perror("listen(wsub)");
+	if (ret)
 		exit(EX_OSERR);
-	}
 	fprintf(stderr, "wsub listening for connections on port: %d\n",
 		args->port);
 
 	struct io_uring_params p;
 	struct io_uring ring;
 
-	memset(&p, 0, sizeof(p));
+	// struct io_uring_params p = { 0 };
+	 struct io_uring_params param = {
+		.flags = IORING_SETUP_SQPOLL,
+		.sq_thread_idle = 2000,
+	 };
 
-	if (io_uring_queue_init_params(2048, &ring, &p) < 0) {
-		perror("io_uring_queue_init_params(wsub)");
+	ret = io_uring_queue_init_params(MAX_CONNECTIONS * 2, &ring, &param);
+	perrork(ret, "io_uring_queue_init_params::unix");
+	if (ret < 0)
 		exit(EX_OSERR);
-	}
 
 	reactions reacts = {
 		.accept = { .dq = dq_accept, .q = q_accept },
@@ -702,16 +713,23 @@ static void *wsub(void *data)
 		.reacts = &reacts,
 	};
 	conns[0] = &tmpl;
-	// io_uring_prep_provide_buffers(sqe_wsub, bufs, MAX_MESSAGE_LEN,
-	// BUFFERS_COUNT, group_id, 0);
-
 	ret = conn_factory(conns, &ring, children_pool, args->addr,
 			   MAX_CONNECTIONS);
+	perrork(ret, "conn_factory(wsub)");
 	if (ret) {
 		fprintf(stderr, "ret = %d\n", ret);
-		perror("conn_factory(wsub)");
 		exit(EX_OSERR);
 	}
+	ret = io_uring_register_files(&ring, children_pool,
+				      MAX_CONNECTIONS + 1);
+	if (ret) {
+		fprintf(stderr, "%s::%s %s\n", "io_uring_register_files",
+			__func__, strerror(-ret));
+		exit(EX_OSERR);
+	}
+
+	conns[0]->reacts->accept.q(&ring, conns[0]);
+
 	event_loop(&ring, conns);
 
 	close(mama);
@@ -732,6 +750,18 @@ static void *wsub(void *data)
 int main(int argc, char **argv)
 {
 	rval_t ret = EX_OK;
+	struct rlimit rlim = {0};
+	getrlimit(RLIMIT_NPROC, &rlim);
+	if (!(rlim.rlim_cur == RLIM_INFINITY) || !(rlim.rlim_max == RLIM_INFINITY)) {
+		fprintf(stderr, "rlim.rlim_cur=%lu rlim.rlim_max=%lu\n",
+			rlim.rlim_cur, rlim.rlim_max);
+		rlim.rlim_cur = RLIM_INFINITY;
+		rlim.rlim_max = RLIM_INFINITY;
+		ret = setrlimit(RLIMIT_NPROC, &rlim);
+		perror("setrlimit");
+		if (ret)
+			exit(EX_SOFTWARE);
+	}
 
 	struct wsub_args wsub_args = {
 		.port = 8002,
@@ -767,19 +797,19 @@ int main(int argc, char **argv)
 	if (!exec_name)
 		return EX_OSERR;
 
+	/*
 	ret = pthread_create(&ctrl_thread, NULL, ctrlsock, (void *)(exec_name));
 	if (ret) {
 		perror("pthread_create(ctrlsock)");
 		return EX_OSERR;
 	}
-	/*
+	*/
 	pthread_t wsub_thread;
 	ret = pthread_create(&wsub_thread, NULL, wsub, (void *)(&wsub_args));
 	if (ret) {
 		perror("pthread_create(wsub)");
 		return EX_OSERR;
 	}
-	*/
 
 	pthread_exit(0);
 	return ret;
-- 
2.34.1



* possible bug with unix sockets
  2021-12-09 17:56   ` jrun
  2021-12-09 19:34     ` possible bug with unix sockets jrun
@ 2021-12-09 20:34     ` jrun
  1 sibling, 0 replies; 6+ messages in thread
From: jrun @ 2021-12-09 20:34 UTC (permalink / raw)
  To: Pavel Begunkov; +Cc: io-uring

[-- Attachment #1: Type: text/plain, Size: 419 bytes --]

On Thu, Dec 09, 2021 at 12:56:36PM -0500, jrun wrote:
> also any magic with bpftrace you would suggest?

see the bt file attached; i just took an example from the bpftrace guys and
grepped for anything returning a socket in net/unix/af_unix.c ... suggestions
are welcome.

it seems to me that inet accepts are handled by the iou-sqp thread whereas unix
ones stay attached to the userspace program. is that correct/expected?


	- jrun

[-- Attachment #2: tcpacceptq.bt --]
[-- Type: text/plain, Size: 1858 bytes --]

#!/usr/bin/env bpftrace
/*
 * tcpaccept.bt Trace TCP accept()s
 *              For Linux, uses bpftrace and eBPF.
 *
 * USAGE: tcpaccept.bt
 *
 * This is a bpftrace version of the bcc tool of the same name.
 *
 * This uses dynamic tracing of the kernel inet_csk_accept() socket function
 * (from tcp_prot.accept), and will need to be modified to match kernel changes.

 * Copyright (c) 2018 Dale Hamel.
 * Licensed under the Apache License, Version 2.0 (the "License")

 * 23-Nov-2018	Dale Hamel	created this.
 */

#include <linux/socket.h>
#include <net/sock.h>

BEGIN
{
	printf("Tracing TCP accepts. Hit Ctrl-C to end.\n");
	printf("%-8s %-6s %-14s ", "TIME", "PID", "COMM");
	printf("%-39s %-5s %-39s %-5s %s\n", "RADDR", "RPORT", "LADDR",
	    "LPORT", "BL");
}

// static
kretprobe:unix_find_socket_byinode,
kretprobe:unix_create1,
// non-static
kretprobe:unix_peer_get,
kretprobe:inet_csk_accept
{
	$sk = (struct sock *)retval;
	$inet_family = $sk->__sk_common.skc_family;

	if ($inet_family == AF_INET || $inet_family == AF_INET6 || $inet_family == AF_UNIX) {
		// initialize variable type:
		$daddr = ntop(0);
		$saddr = ntop(0);
		if ($inet_family == AF_INET) {
			$daddr = ntop($sk->__sk_common.skc_daddr);
			$saddr = ntop($sk->__sk_common.skc_rcv_saddr);
		} else {
			$daddr = ntop(
			    $sk->__sk_common.skc_v6_daddr.in6_u.u6_addr8);
			$saddr = ntop(
			    $sk->__sk_common.skc_v6_rcv_saddr.in6_u.u6_addr8);
		}
		$lport = $sk->__sk_common.skc_num;
		$dport = $sk->__sk_common.skc_dport;
		$qlen  = $sk->sk_ack_backlog;
		$qmax  = $sk->sk_max_ack_backlog;

		// Destination port is big endian, it must be flipped
		$dport = ($dport >> 8) | (($dport << 8) & 0x00FF00);

		time("%H:%M:%S ");
		printf("%-6d %-14s ", pid, comm);
		printf("%-39s %-5d %-39s %-5d ", $daddr, $dport, $saddr,
		    $lport);
		printf("%d/%d\n", $qlen, $qmax);
	}
}

