linux-kernel.vger.kernel.org archive mirror
* [PATCH v7 0/3] implement getrandom() in vDSO
@ 2022-11-24 16:55 Jason A. Donenfeld
  2022-11-24 16:55 ` [PATCH v7 1/3] random: add vgetrandom_alloc() syscall Jason A. Donenfeld
                   ` (2 more replies)
  0 siblings, 3 replies; 16+ messages in thread
From: Jason A. Donenfeld @ 2022-11-24 16:55 UTC (permalink / raw)
  To: linux-kernel, patches, tglx
  Cc: Jason A. Donenfeld, linux-crypto, linux-api, x86,
	Greg Kroah-Hartman, Adhemerval Zanella Netto,
	Carlos O'Donell, Florian Weimer, Arnd Bergmann,
	Christian Brauner

Changes v6->v7:
--------------
- VERY EXCITING! There is now a rudimentary glibc implementation for
  this from one of the glibc maintainers, Adhemerval Zanella (CC'd). A
  commit that works with this latest v7 revision is here:

  https://github.com/bminor/glibc/commit/247ec6dd77ec2a047163fe3a1b60e57880464b39

- Pass an `unsigned int *` instead of an `unsigned long *` for the
  syscall, to avoid having to add a compat syscall.
- Use ordinary function framing in assembly, rather than kernel-specific
  framing.
- Don't hardcode the number '2', but derive it at compile time with the
  expression `sizeof(state->batch_key) / CHACHA_BLOCK_SIZE`, as well as
  adding a BUILD_BUG_ON() in case that doesn't divide cleanly.

Changes v5->v6:
--------------
- Fix various build errors for odd configurations.
- Do not leak any secrets onto the stack at all, to account for the possibility
  of fork()ing in a multithreaded scenario, which would ruin forward secrecy.
  Instead provide an arch-specific implementation that doesn't need stack
  space.
- Prevent page alignment from overflowing variable, and clamp to acceptable
  limits.
- Read/write unaligned bytes using get/put_unaligned.
- Add extensive comments to vDSO function explaining subtle aspects.
- Account for fork() races when writing generation counter.

Changes v4->v5:
--------------
- Add example code to vDSO addition commit showing intended use and
  interaction with allocations.
- Reset buffer to beginning when retrying.
- Rely on generation counter never being zero for fork detection, rather than
  adding extra boolean.
- Make use of __ARCH_WANT_VGETRANDOM_ALLOC macro around new syscall so that
  it's conditional on archs that actually choose to add this and don't forget
  to bump __NR_syscalls.
- Separate __cvdso_getrandom() into __cvdso_getrandom() and
  __cvdso_getrandom_data() so that powerpc can make a more efficient call.

Changes v3->v4:
--------------
- Split up into small series rather than one big patch.
- Use proper ordering in generation counter reads.
- Make properly generic, not just a hairball with x86, by moving symbols into
  correct files.

Changes v2->v3:
--------------

Big changes:

Thomas' previous objection was two-fold: 1) vgetrandom
should really have the same function signature as getrandom, in
addition to all of the same behavior, and 2) having vgetrandom_alloc
be a vDSO function doesn't make sense, because it doesn't actually
need anything from the VDSO data page and it doesn't correspond to an
existing syscall.

After a discussion at Plumbers this last week, we devised the following
ways to fix these: 1) we make the opaque state argument be the last
argument of vgetrandom, rather than the first one, since the real
syscall ignores the additional argument, and that way all the registers
are the same, and no behavior changes; and 2) we make vgetrandom_alloc a
syscall, rather than a vDSO function, which also gives it added
flexibility for the future.

Making those changes also reduced the size of this patch a bit.

Smaller changes:
- Properly add buffer offset position.
- Don't EXPORT_SYMBOL for vDSO code.
- Account for timens and vvar being in swapped pages.

--------------

Two statements:

  1) Userspace wants faster cryptographically secure random numbers of
     arbitrary size, big or small.

  2) Userspace is currently unable to safely roll its own RNG with the
     same security profile as getrandom().

Statement (1) has been debated for years, with arguments ranging from
"we need faster cryptographically secure card shuffling!" to "the only
things that actually need good randomness are keys, which are few and
far between" to "actually, TLS CBC nonces are frequent" and so on. I
don't intend to wade into that debate substantially, except to note that
recently glibc added arc4random(), whose goal is to return a
cryptographically secure uint32_t, and there are real user reports of it
being too slow. So here we are.

Statement (2) is more interesting. The kernel is the nexus of all
entropic inputs that influence the RNG. It is in the best position, and
probably the only position, to decide anything at all about the current
state of the RNG and of its entropy. One of the things it uniquely knows
about is when reseeding is necessary.

For example, when a virtual machine is forked, restored, or duplicated,
it's imperative that the RNG doesn't generate the same outputs. For this
reason, there's a small protocol between hypervisors and the kernel that
indicates this has happened, alongside some ID, which the RNG uses to
immediately reseed, so as not to return the same numbers. Were userspace
to expand a getrandom() seed from time T1 for the next hour, and at some
point T2 < hour, the virtual machine forked, userspace would continue to
provide the same numbers to two (or more) different virtual machines,
resulting in potential cryptographic catastrophe. Something similar
happens on resuming from hibernation (or even suspend), with various
compromise scenarios there in mind.

There's a more general reason why userspace rolling its own RNG from a
getrandom() seed is fraught. There's a lot of attention paid to this
particular Linuxism we have of the RNG being initialized and thus
non-blocking or uninitialized and thus blocking until it is initialized.
These are our Two Big States that many hold to be the holy
differentiating factor between safe and not safe, between
cryptographically secure and garbage. The fact is, however, that the
distinction between these two states is a hand-wavy wishy-washy inexact
approximation. Outside of a few exceptional cases (e.g. a HW RNG is
available), we actually don't really ever know with any rigor at all
when the RNG is safe and ready (nor when it's compromised). We do the
best we can to "estimate" it, but entropy estimation is fundamentally
impossible in the general case. So really, we're just doing guess work,
and hoping it's good and conservative enough. Let's then assume that
there's always some potential error involved in this differentiator.

In fact, under the surface, the RNG is engineered around a different
principle, and that is trying to *use* new entropic inputs regularly and
at the right specific moments in time. For example, close to boot time,
the RNG reseeds itself more often than later. At certain events, like VM
fork, the RNG reseeds itself immediately. The various heuristics for
when the RNG will use new entropy, and how often, are really a core aspect
of what the RNG can actually do decently enough (and something
that will probably continue to improve in the future from random.c's
present set of algorithms). So in your mind, put away the mental
attachment to the Two Big States, which represent an approximation with
a potential margin of error. Instead keep in mind that the RNG's primary
operating heuristic is how often and exactly when it's going to reseed.

So, if userspace takes a seed from getrandom() at point T1, and uses it
for the next hour (or N megabytes or some other meaningless metric),
during that time, potential errors in the Two Big States approximation
are amplified. During that time potential reseeds are being lost,
forgotten, not reflected in the output stream. That's not good.

The simplest statement you could make is that userspace RNGs that expand
a getrandom() seed at some point T1 are nearly always *worse*, in some
way, than just calling getrandom() every time a random number is
desired.

For those reasons, after some discussion on libc-alpha, glibc's
arc4random() now just calls getrandom() on each invocation. That's
trivially safe, and gives us latitude to then make the safe thing faster
without becoming unsafe, at our leisure. Card shuffling isn't
particularly fast, however.

How do we rectify this? By putting a safe implementation of getrandom()
in the vDSO, which has access to whatever information a
particular iteration of random.c is using to make its decisions. I use
that careful language of "particular iteration of random.c", because the
set of things that a vDSO getrandom() implementation might need for making
decisions as good as the kernel's will likely change over time. This
isn't just a matter of exporting certain *data* to userspace. We're not
going to commit to a "data API" where the various heuristics used are
exposed, locking in how the kernel works for decades to come, and then
leave it to various userspaces to roll something on top and shoot
themselves in the foot and have all sorts of complexity disasters.
Rather, vDSO getrandom() is supposed to be the *same exact algorithm*
that runs in the kernel, except it's been hoisted into userspace as
much as possible. And so vDSO getrandom() and kernel getrandom() will
always mirror each other hermetically.

API-wise, the vDSO gains this function:

  ssize_t vgetrandom(void *buffer, size_t len, unsigned int flags, void *opaque_state);

The return value and the first 3 arguments are the same as ordinary
getrandom(), while the last argument is a pointer to some state
allocated with vgetrandom_alloc(), explained below. Were all four
arguments passed to the getrandom syscall, nothing different would
happen, and the functions would have the exact same behavior.

Then, we introduce a new syscall:

  void *vgetrandom_alloc([inout] unsigned int *num, [out] unsigned int *size_per_each, unsigned int flags);

This takes the desired number of opaque states in `num`, and returns a
pointer to an array of opaque states, the number actually allocated back
in `num`, and the size in bytes of each one in `size_per_each`, enabling
a libc to slice up the returned array into a state per each thread. (The
`flags` argument is always zero for now.) We very intentionally do *not*
leave state allocation up to the caller of vgetrandom, but provide
vgetrandom_alloc for that allocation. There are too many weird things
that can go wrong, and it's important that vDSO does not provide too
generic of a mechanism. It's not going to store its state in just any
old memory address. It'll do it only in ones it allocates.

Right now this means it's a mlock'd page with WIPEONFORK set. In the
future maybe there will be other interesting page flags or
anti-heartbleed measures, or other platform-specific kernel-specific
things that can be set from the syscall. Again, it's important that the
kernel has a say in how this works rather than agreeing to operate on
any old address; memory isn't neutral.

The syscall currently accomplishes this with a call to vm_mmap() and
then a call to do_madvise(). It'd be nice to do this all at once, but
I'm not sure that a helper function exists for that now, and it seems a
bit premature to add one, at least for now.

The interesting meat of the implementation is in lib/vdso/getrandom.c,
as generic C code, and it aims to mainly follow random.c's buffered fast
key erasure logic. Before the RNG is initialized, it falls back to the
syscall. Right now it uses a simple generation counter to make its decisions
on reseeding (though this could be made more extensive over time).

The actual place that has the most work to do is in all of the other
files. Most of the vDSO shared page infrastructure is centered around
gettimeofday, and so the main structs are all in arrays for different
timestamp types, and attached to time namespaces, and so forth. I've
done the best I could to add onto this in an unintrusive way.

In my test results, performance is pretty stellar (around 15x for uint32_t
generation), and it seems to be working. There's an extended example in the
second commit of this series, showing how the syscall and the vDSO function
are meant to be used together.

Cc: linux-crypto@vger.kernel.org
Cc: linux-api@vger.kernel.org
Cc: x86@kernel.org
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Adhemerval Zanella Netto <adhemerval.zanella@linaro.org>
Cc: Carlos O'Donell <carlos@redhat.com>
Cc: Florian Weimer <fweimer@redhat.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Christian Brauner <brauner@kernel.org>

Jason A. Donenfeld (3):
  random: add vgetrandom_alloc() syscall
  random: introduce generic vDSO getrandom() implementation
  x86: vdso: Wire up getrandom() vDSO implementation

 MAINTAINERS                             |   2 +
 arch/x86/Kconfig                        |   2 +
 arch/x86/entry/syscalls/syscall_64.tbl  |   1 +
 arch/x86/entry/vdso/Makefile            |   3 +-
 arch/x86/entry/vdso/vdso.lds.S          |   2 +
 arch/x86/entry/vdso/vgetrandom-chacha.S | 179 ++++++++++++++++++++++++
 arch/x86/entry/vdso/vgetrandom.c        |  18 +++
 arch/x86/include/asm/unistd.h           |   1 +
 arch/x86/include/asm/vdso/getrandom.h   |  49 +++++++
 arch/x86/include/asm/vdso/vsyscall.h    |   2 +
 arch/x86/include/asm/vvar.h             |  16 +++
 drivers/char/random.c                   |  68 +++++++++
 include/uapi/asm-generic/unistd.h       |   7 +-
 include/vdso/datapage.h                 |   6 +
 kernel/sys_ni.c                         |   3 +
 lib/vdso/Kconfig                        |   5 +
 lib/vdso/getrandom.c                    | 114 +++++++++++++++
 lib/vdso/getrandom.h                    |  23 +++
 scripts/checksyscalls.sh                |   4 +
 tools/include/uapi/asm-generic/unistd.h |   7 +-
 20 files changed, 509 insertions(+), 3 deletions(-)
 create mode 100644 arch/x86/entry/vdso/vgetrandom-chacha.S
 create mode 100644 arch/x86/entry/vdso/vgetrandom.c
 create mode 100644 arch/x86/include/asm/vdso/getrandom.h
 create mode 100644 lib/vdso/getrandom.c
 create mode 100644 lib/vdso/getrandom.h

-- 
2.38.1


^ permalink raw reply	[flat|nested] 16+ messages in thread

* [PATCH v7 1/3] random: add vgetrandom_alloc() syscall
  2022-11-24 16:55 [PATCH v7 0/3] implement getrandom() in vDSO Jason A. Donenfeld
@ 2022-11-24 16:55 ` Jason A. Donenfeld
  2022-11-25 20:45   ` Thomas Gleixner
  2022-11-24 16:55 ` [PATCH v7 2/3] random: introduce generic vDSO getrandom() implementation Jason A. Donenfeld
  2022-11-24 16:55 ` [PATCH v7 3/3] x86: vdso: Wire up getrandom() vDSO implementation Jason A. Donenfeld
  2 siblings, 1 reply; 16+ messages in thread
From: Jason A. Donenfeld @ 2022-11-24 16:55 UTC (permalink / raw)
  To: linux-kernel, patches, tglx
  Cc: Jason A. Donenfeld, linux-crypto, linux-api, x86,
	Greg Kroah-Hartman, Adhemerval Zanella Netto,
	Carlos O'Donell, Florian Weimer, Arnd Bergmann,
	Christian Brauner

The vDSO getrandom() works over an opaque per-thread state of an
unexported size, which must be marked as MADV_WIPEONFORK and be
mlock()'d for proper operation. Over time, the nuances of these
allocations may change or grow or even differ based on architectural
features.

The syscall has the signature:

  void *vgetrandom_alloc([inout] unsigned int *num,
                         [out] unsigned int *size_per_each,
                         unsigned int flags);

This takes the desired number of opaque states in `num`, and returns a
pointer to an array of opaque states, the number actually allocated back
in `num`, and the size in bytes of each one in `size_per_each`, enabling
a libc to slice up the returned array into a state per each thread. (The
`flags` argument is always zero for now.) Libc is expected to allocate a
chunk of these on first use, and then dole them out to threads as
they're created, allocating more when needed. The following commit shows
an example of this, being used in conjunction with the getrandom() vDSO
function.

We very intentionally do *not* leave state allocation for vDSO
getrandom() up to userspace itself, but rather provide this new syscall
for such allocations. vDSO getrandom() must not store its state in just
any old memory address, but rather just ones that the kernel specially
allocates for it, leaving the particularities of those allocations up to
the kernel.

Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
---
 MAINTAINERS                             |  1 +
 arch/x86/Kconfig                        |  1 +
 arch/x86/entry/syscalls/syscall_64.tbl  |  1 +
 arch/x86/include/asm/unistd.h           |  1 +
 drivers/char/random.c                   | 59 +++++++++++++++++++++++++
 include/uapi/asm-generic/unistd.h       |  7 ++-
 kernel/sys_ni.c                         |  3 ++
 lib/vdso/getrandom.h                    | 23 ++++++++++
 scripts/checksyscalls.sh                |  4 ++
 tools/include/uapi/asm-generic/unistd.h |  7 ++-
 10 files changed, 105 insertions(+), 2 deletions(-)
 create mode 100644 lib/vdso/getrandom.h

diff --git a/MAINTAINERS b/MAINTAINERS
index 256f03904987..843dd6a49538 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -17287,6 +17287,7 @@ T:	git https://git.kernel.org/pub/scm/linux/kernel/git/crng/random.git
 S:	Maintained
 F:	drivers/char/random.c
 F:	drivers/virt/vmgenid.c
+F:	lib/vdso/getrandom.h
 
 RAPIDIO SUBSYSTEM
 M:	Matt Porter <mporter@kernel.crashing.org>
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 67745ceab0db..331e21ba961a 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -59,6 +59,7 @@ config X86
 	#
 	select ACPI_LEGACY_TABLES_LOOKUP	if ACPI
 	select ACPI_SYSTEM_POWER_STATES_SUPPORT	if ACPI
+	select ADVISE_SYSCALLS			if X86_64
 	select ARCH_32BIT_OFF_T			if X86_32
 	select ARCH_CLOCKSOURCE_INIT
 	select ARCH_CORRECT_STACKTRACE_ON_KRETPROBE
diff --git a/arch/x86/entry/syscalls/syscall_64.tbl b/arch/x86/entry/syscalls/syscall_64.tbl
index c84d12608cd2..0186f173f0e8 100644
--- a/arch/x86/entry/syscalls/syscall_64.tbl
+++ b/arch/x86/entry/syscalls/syscall_64.tbl
@@ -372,6 +372,7 @@
 448	common	process_mrelease	sys_process_mrelease
 449	common	futex_waitv		sys_futex_waitv
 450	common	set_mempolicy_home_node	sys_set_mempolicy_home_node
+451	common	vgetrandom_alloc	sys_vgetrandom_alloc
 
 #
 # Due to a historical design error, certain syscalls are numbered differently
diff --git a/arch/x86/include/asm/unistd.h b/arch/x86/include/asm/unistd.h
index 761173ccc33c..1bf509eaeff1 100644
--- a/arch/x86/include/asm/unistd.h
+++ b/arch/x86/include/asm/unistd.h
@@ -27,6 +27,7 @@
 #  define __ARCH_WANT_COMPAT_SYS_PWRITEV64
 #  define __ARCH_WANT_COMPAT_SYS_PREADV64V2
 #  define __ARCH_WANT_COMPAT_SYS_PWRITEV64V2
+#  define __ARCH_WANT_VGETRANDOM_ALLOC
 #  define X32_NR_syscalls (__NR_x32_syscalls)
 #  define IA32_NR_syscalls (__NR_ia32_syscalls)
 
diff --git a/drivers/char/random.c b/drivers/char/random.c
index a2a18bd3d7d7..71db7b787a60 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -8,6 +8,7 @@
  * into roughly six sections, each with a section header:
  *
  *   - Initialization and readiness waiting.
+ *   - vDSO support helpers.
  *   - Fast key erasure RNG, the "crng".
  *   - Entropy accumulation and extraction routines.
  *   - Entropy collection routines.
@@ -39,6 +40,7 @@
 #include <linux/blkdev.h>
 #include <linux/interrupt.h>
 #include <linux/mm.h>
+#include <linux/mman.h>
 #include <linux/nodemask.h>
 #include <linux/spinlock.h>
 #include <linux/kthread.h>
@@ -59,6 +61,7 @@
 #include <asm/irq.h>
 #include <asm/irq_regs.h>
 #include <asm/io.h>
+#include "../../lib/vdso/getrandom.h"
 
 /*********************************************************************
  *
@@ -167,6 +170,62 @@ int __cold execute_with_initialized_rng(struct notifier_block *nb)
 				__func__, (void *)_RET_IP_, crng_init)
 
 
+
+/********************************************************************
+ *
+ * vDSO support helpers.
+ *
+ * The actual vDSO function is defined over in lib/vdso/getrandom.c,
+ * but this section contains the kernel-mode helpers to support that.
+ *
+ ********************************************************************/
+
+#ifdef __ARCH_WANT_VGETRANDOM_ALLOC
+/*
+ * The vgetrandom() function in userspace requires an opaque state, which this
+ * function provides to userspace, by mapping a certain number of special pages
+ * into the calling process. It takes a hint as to the number of opaque states
+ * desired, and returns the number of opaque states actually allocated, the
+ * size of each one in bytes, and the address of the first state.
+ */
+SYSCALL_DEFINE3(vgetrandom_alloc, unsigned int __user *, num,
+		unsigned int __user *, size_per_each, unsigned int, flags)
+{
+	size_t alloc_size, num_states;
+	unsigned long pages_addr;
+	unsigned int num_hint;
+	int ret;
+
+	if (flags)
+		return -EINVAL;
+
+	if (get_user(num_hint, num))
+		return -EFAULT;
+
+	num_states = clamp_t(size_t, num_hint, 1, (SIZE_MAX & PAGE_MASK) / sizeof(struct vgetrandom_state));
+	alloc_size = PAGE_ALIGN(num_states * sizeof(struct vgetrandom_state));
+
+	if (put_user(alloc_size / sizeof(struct vgetrandom_state), num) ||
+	    put_user(sizeof(struct vgetrandom_state), size_per_each))
+		return -EFAULT;
+
+	pages_addr = vm_mmap(NULL, 0, alloc_size, PROT_READ | PROT_WRITE,
+			     MAP_PRIVATE | MAP_ANONYMOUS | MAP_LOCKED, 0);
+	if (IS_ERR_VALUE(pages_addr))
+		return pages_addr;
+
+	ret = do_madvise(current->mm, pages_addr, alloc_size, MADV_WIPEONFORK);
+	if (ret < 0)
+		goto err_unmap;
+
+	return pages_addr;
+
+err_unmap:
+	vm_munmap(pages_addr, alloc_size);
+	return ret;
+}
+#endif
+
 /*********************************************************************
  *
  * Fast key erasure RNG, the "crng".
diff --git a/include/uapi/asm-generic/unistd.h b/include/uapi/asm-generic/unistd.h
index 45fa180cc56a..77b6debe7e18 100644
--- a/include/uapi/asm-generic/unistd.h
+++ b/include/uapi/asm-generic/unistd.h
@@ -886,8 +886,13 @@ __SYSCALL(__NR_futex_waitv, sys_futex_waitv)
 #define __NR_set_mempolicy_home_node 450
 __SYSCALL(__NR_set_mempolicy_home_node, sys_set_mempolicy_home_node)
 
+#ifdef __ARCH_WANT_VGETRANDOM_ALLOC
+#define __NR_vgetrandom_alloc 451
+__SYSCALL(__NR_vgetrandom_alloc, sys_vgetrandom_alloc)
+#endif
+
 #undef __NR_syscalls
-#define __NR_syscalls 451
+#define __NR_syscalls 452
 
 /*
  * 32 bit systems traditionally used different
diff --git a/kernel/sys_ni.c b/kernel/sys_ni.c
index 860b2dcf3ac4..f28196cb919b 100644
--- a/kernel/sys_ni.c
+++ b/kernel/sys_ni.c
@@ -360,6 +360,9 @@ COND_SYSCALL(pkey_free);
 /* memfd_secret */
 COND_SYSCALL(memfd_secret);
 
+/* random */
+COND_SYSCALL(vgetrandom_alloc);
+
 /*
  * Architecture specific weak syscall entries.
  */
diff --git a/lib/vdso/getrandom.h b/lib/vdso/getrandom.h
new file mode 100644
index 000000000000..c7f727db2aaa
--- /dev/null
+++ b/lib/vdso/getrandom.h
@@ -0,0 +1,23 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2022 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
+ */
+
+#ifndef _VDSO_LIB_GETRANDOM_H
+#define _VDSO_LIB_GETRANDOM_H
+
+#include <crypto/chacha.h>
+
+struct vgetrandom_state {
+	union {
+		struct {
+			u8 batch[CHACHA_BLOCK_SIZE * 3 / 2];
+			u32 key[CHACHA_KEY_SIZE / sizeof(u32)];
+		};
+		u8 batch_key[CHACHA_BLOCK_SIZE * 2];
+	};
+	unsigned long generation;
+	u8 pos;
+};
+
+#endif /* _VDSO_LIB_GETRANDOM_H */
diff --git a/scripts/checksyscalls.sh b/scripts/checksyscalls.sh
index f33e61aca93d..7f7928c6487f 100755
--- a/scripts/checksyscalls.sh
+++ b/scripts/checksyscalls.sh
@@ -44,6 +44,10 @@ cat << EOF
 #define __IGNORE_memfd_secret
 #endif
 
+#ifndef __ARCH_WANT_VGETRANDOM_ALLOC
+#define __IGNORE_vgetrandom_alloc
+#endif
+
 /* Missing flags argument */
 #define __IGNORE_renameat	/* renameat2 */
 
diff --git a/tools/include/uapi/asm-generic/unistd.h b/tools/include/uapi/asm-generic/unistd.h
index 45fa180cc56a..77b6debe7e18 100644
--- a/tools/include/uapi/asm-generic/unistd.h
+++ b/tools/include/uapi/asm-generic/unistd.h
@@ -886,8 +886,13 @@ __SYSCALL(__NR_futex_waitv, sys_futex_waitv)
 #define __NR_set_mempolicy_home_node 450
 __SYSCALL(__NR_set_mempolicy_home_node, sys_set_mempolicy_home_node)
 
+#ifdef __ARCH_WANT_VGETRANDOM_ALLOC
+#define __NR_vgetrandom_alloc 451
+__SYSCALL(__NR_vgetrandom_alloc, sys_vgetrandom_alloc)
+#endif
+
 #undef __NR_syscalls
-#define __NR_syscalls 451
+#define __NR_syscalls 452
 
 /*
  * 32 bit systems traditionally used different
-- 
2.38.1



* [PATCH v7 2/3] random: introduce generic vDSO getrandom() implementation
  2022-11-24 16:55 [PATCH v7 0/3] implement getrandom() in vDSO Jason A. Donenfeld
  2022-11-24 16:55 ` [PATCH v7 1/3] random: add vgetrandom_alloc() syscall Jason A. Donenfeld
@ 2022-11-24 16:55 ` Jason A. Donenfeld
  2022-11-25 22:39   ` Thomas Gleixner
  2022-11-24 16:55 ` [PATCH v7 3/3] x86: vdso: Wire up getrandom() vDSO implementation Jason A. Donenfeld
  2 siblings, 1 reply; 16+ messages in thread
From: Jason A. Donenfeld @ 2022-11-24 16:55 UTC (permalink / raw)
  To: linux-kernel, patches, tglx
  Cc: Jason A. Donenfeld, linux-crypto, linux-api, x86,
	Greg Kroah-Hartman, Adhemerval Zanella Netto,
	Carlos O'Donell, Florian Weimer, Arnd Bergmann,
	Christian Brauner

Provide a generic C vDSO getrandom() implementation, which operates on
an opaque state returned by vgetrandom_alloc() and produces random bytes
the same way as getrandom(). This has the API signature:

  ssize_t vgetrandom(void *buffer, size_t len, unsigned int flags, void *opaque_state);

The return value and the first 3 arguments are the same as ordinary
getrandom(), while the last argument is a pointer to the opaque
allocated state. Were all four arguments passed to the getrandom()
syscall, nothing different would happen, and the functions would have
the exact same behavior.

The actual vDSO RNG algorithm implemented is the same one implemented by
drivers/char/random.c, using the same fast-erasure techniques as that.
Should the in-kernel implementation change, so too will the vDSO one.

It requires an implementation of ChaCha20 that does not use any stack,
in order to maintain forward secrecy, so this is left as an
architecture-specific fill-in. Stack-less ChaCha20 is an easy algorithm
to implement on a variety of architectures, so this shouldn't be too
onerous.

Initially, the state is keyless, and so the first call makes a
getrandom() syscall to generate that key, and then uses it for
subsequent calls. By keeping track of a generation counter, it knows
when its key is invalidated and it should fetch a new one using the
syscall. Later, more than just a generation counter might be used.

Since MADV_WIPEONFORK is set on the opaque state, the key and related
state is wiped during a fork(), so secrets don't roll over into new
processes, and the same state doesn't accidentally generate the same
random stream. The generation counter, as well, is always >0, so that
the 0 counter is a useful indication of a fork() or otherwise
uninitialized state.

If the kernel RNG is not yet initialized, then the vDSO always calls the
syscall, because that behavior cannot be emulated in userspace, but
fortunately that state is short lived and only during early boot. If it
has been initialized, then there is no need to inspect the `flags`
argument, because the behavior does not change post-initialization
regardless of the `flags` value.

Since the opaque state passed to it is mutated, vDSO getrandom() is not
reentrant when used with the same opaque state, which libc should be
mindful of.

Together with the previous commit that introduces vgetrandom_alloc(),
this functionality is intended to be integrated into libc's thread
management. As an illustrative example, the following code might be used
to do the same outside of libc. All of the static functions are to be
considered implementation private, including the vgetrandom_alloc()
syscall wrapper, which generally shouldn't be exposed outside of libc,
with the non-static vgetrandom() function at the end being the exported
interface. The various pthread-isms are expected to be elided into libc
internals. This per-thread allocation scheme is very naive and does not
shrink; other implementations may choose to be more complex.

  static void *vgetrandom_alloc(unsigned int *num, unsigned int *size_per_each, unsigned int flags)
  {
    long ret = syscall(__NR_vgetrandom_alloc, num, size_per_each, flags);
    return ret == -1 ? NULL : (void *)ret;
  }

  static struct {
    pthread_mutex_t lock;
    void **states;
    size_t len, cap;
  } grnd_allocator = {
    .lock = PTHREAD_MUTEX_INITIALIZER
  };

  static void *vgetrandom_get_state(void)
  {
    void *state = NULL;

    pthread_mutex_lock(&grnd_allocator.lock);
    if (!grnd_allocator.len) {
      size_t new_cap;
      unsigned int size_per_each, num = 16; /* Just a hint. Could also be nr_cpus. */
      void *new_block = vgetrandom_alloc(&num, &size_per_each, 0), *new_states;

      if (!new_block)
        goto out;
      new_cap = grnd_allocator.cap + num;
      new_states = reallocarray(grnd_allocator.states, new_cap, sizeof(*grnd_allocator.states));
      if (!new_states) {
        munmap(new_block, num * size_per_each);
        goto out;
      }
      grnd_allocator.cap = new_cap;
      grnd_allocator.states = new_states;

      for (size_t i = 0; i < num; ++i) {
        grnd_allocator.states[i] = new_block;
        new_block += size_per_each;
      }
      grnd_allocator.len = num;
    }
    state = grnd_allocator.states[--grnd_allocator.len];

  out:
    pthread_mutex_unlock(&grnd_allocator.lock);
    return state;
  }

  static void vgetrandom_put_state(void *state)
  {
    if (!state)
      return;
    pthread_mutex_lock(&grnd_allocator.lock);
    grnd_allocator.states[grnd_allocator.len++] = state;
    pthread_mutex_unlock(&grnd_allocator.lock);
  }

  static struct {
    ssize_t(*fn)(void *buf, size_t len, unsigned int flags, void *state);
    pthread_key_t key;
    pthread_once_t initialized;
  } grnd_ctx = {
    .initialized = PTHREAD_ONCE_INIT
  };

  static void vgetrandom_init(void)
  {
    if (pthread_key_create(&grnd_ctx.key, vgetrandom_put_state) != 0)
      return;
    grnd_ctx.fn = __vdsosym("LINUX_2.6", "__vdso_getrandom");
  }

  ssize_t vgetrandom(void *buf, size_t len, unsigned int flags)
  {
    void *state;

    pthread_once(&grnd_ctx.initialized, vgetrandom_init);
    if (!grnd_ctx.fn)
      return getrandom(buf, len, flags);
    state = pthread_getspecific(grnd_ctx.key);
    if (!state) {
      state = vgetrandom_get_state();
      if (pthread_setspecific(grnd_ctx.key, state) != 0) {
        vgetrandom_put_state(state);
        state = NULL;
      }
      if (!state)
        return getrandom(buf, len, flags);
    }
    return grnd_ctx.fn(buf, len, flags, state);
  }

Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
---
 MAINTAINERS             |   1 +
 drivers/char/random.c   |   9 ++++
 include/vdso/datapage.h |   6 +++
 lib/vdso/Kconfig        |   5 ++
 lib/vdso/getrandom.c    | 114 ++++++++++++++++++++++++++++++++++++++++
 5 files changed, 135 insertions(+)
 create mode 100644 lib/vdso/getrandom.c

diff --git a/MAINTAINERS b/MAINTAINERS
index 843dd6a49538..e0aa33f54c57 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -17287,6 +17287,7 @@ T:	git https://git.kernel.org/pub/scm/linux/kernel/git/crng/random.git
 S:	Maintained
 F:	drivers/char/random.c
 F:	drivers/virt/vmgenid.c
+F:	lib/vdso/getrandom.c
 F:	lib/vdso/getrandom.h
 
 RAPIDIO SUBSYSTEM
diff --git a/drivers/char/random.c b/drivers/char/random.c
index 71db7b787a60..35ac2d4d0726 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -61,6 +61,9 @@
 #include <asm/irq.h>
 #include <asm/irq_regs.h>
 #include <asm/io.h>
+#ifdef CONFIG_HAVE_VDSO_GETRANDOM
+#include <vdso/datapage.h>
+#endif
 #include "../../lib/vdso/getrandom.h"
 
 /*********************************************************************
@@ -328,6 +331,9 @@ static void crng_reseed(struct work_struct *work)
 	if (next_gen == ULONG_MAX)
 		++next_gen;
 	WRITE_ONCE(base_crng.generation, next_gen);
+#ifdef CONFIG_HAVE_VDSO_GETRANDOM
+	smp_store_release(&_vdso_rng_data.generation, next_gen + 1);
+#endif
 	if (!static_branch_likely(&crng_is_ready))
 		crng_init = CRNG_READY;
 	spin_unlock_irqrestore(&base_crng.lock, flags);
@@ -778,6 +784,9 @@ static void __cold _credit_init_bits(size_t bits)
 		if (static_key_initialized)
 			execute_in_process_context(crng_set_ready, &set_ready);
 		atomic_notifier_call_chain(&random_ready_notifier, 0, NULL);
+#ifdef CONFIG_HAVE_VDSO_GETRANDOM
+		smp_store_release(&_vdso_rng_data.is_ready, true);
+#endif
 		wake_up_interruptible(&crng_init_wait);
 		kill_fasync(&fasync, SIGIO, POLL_IN);
 		pr_notice("crng init done\n");
diff --git a/include/vdso/datapage.h b/include/vdso/datapage.h
index 73eb622e7663..cbacfd923a5c 100644
--- a/include/vdso/datapage.h
+++ b/include/vdso/datapage.h
@@ -109,6 +109,11 @@ struct vdso_data {
 	struct arch_vdso_data	arch_data;
 };
 
+struct vdso_rng_data {
+	unsigned long generation;
+	bool is_ready;
+};
+
 /*
  * We use the hidden visibility to prevent the compiler from generating a GOT
  * relocation. Not only is going through a GOT useless (the entry couldn't and
@@ -120,6 +125,7 @@ struct vdso_data {
  */
 extern struct vdso_data _vdso_data[CS_BASES] __attribute__((visibility("hidden")));
 extern struct vdso_data _timens_data[CS_BASES] __attribute__((visibility("hidden")));
+extern struct vdso_rng_data _vdso_rng_data __attribute__((visibility("hidden")));
 
 /*
  * The generic vDSO implementation requires that gettimeofday.h
diff --git a/lib/vdso/Kconfig b/lib/vdso/Kconfig
index d883ac299508..c35fac664574 100644
--- a/lib/vdso/Kconfig
+++ b/lib/vdso/Kconfig
@@ -30,4 +30,9 @@ config GENERIC_VDSO_TIME_NS
 	  Selected by architectures which support time namespaces in the
 	  VDSO
 
+config HAVE_VDSO_GETRANDOM
+	bool
+	help
+	  Selected by architectures that support vDSO getrandom().
+
 endif
diff --git a/lib/vdso/getrandom.c b/lib/vdso/getrandom.c
new file mode 100644
index 000000000000..2c4ef5ef212c
--- /dev/null
+++ b/lib/vdso/getrandom.c
@@ -0,0 +1,114 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2022 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
+ */
+
+#include <linux/kernel.h>
+#include <linux/atomic.h>
+#include <linux/fs.h>
+#include <vdso/datapage.h>
+#include <asm/vdso/getrandom.h>
+#include <asm/vdso/vsyscall.h>
+#include "getrandom.h"
+
+static void memcpy_and_zero(void *dst, void *src, size_t len)
+{
+#define CASCADE(type) \
+	while (len >= sizeof(type)) { \
+		__put_unaligned_t(type, __get_unaligned_t(type, src), dst); \
+		__put_unaligned_t(type, 0, src); \
+		dst += sizeof(type); \
+		src += sizeof(type); \
+		len -= sizeof(type); \
+	}
+#if IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS)
+#if BITS_PER_LONG == 64
+	CASCADE(u64);
+#endif
+	CASCADE(u32);
+	CASCADE(u16);
+#endif
+	CASCADE(u8);
+#undef CASCADE
+}
+
+static __always_inline ssize_t
+__cvdso_getrandom_data(const struct vdso_rng_data *rng_info, void *buffer, size_t len,
+		       unsigned int flags, void *opaque_state)
+{
+	ssize_t ret = min_t(size_t, MAX_RW_COUNT, len);
+	struct vgetrandom_state *state = opaque_state;
+	size_t batch_len, nblocks, orig_len = len;
+	unsigned long current_generation;
+	void *orig_buffer = buffer;
+	u32 counter[2] = { 0 };
+
+	/*
+	 * If the kernel isn't yet initialized, then the various flags might have some effect
+	 * that we can't emulate in userspace, so use the syscall.  Otherwise, the flags have
+	 * no effect, and can continue.
+	 */
+	if (unlikely(!rng_info->is_ready))
+		return getrandom_syscall(orig_buffer, orig_len, flags);
+
+	if (unlikely(!len))
+		return 0;
+
+retry_generation:
+	current_generation = READ_ONCE(rng_info->generation);
+	if (unlikely(state->generation != current_generation)) {
+		/* Write the generation before filling the key, in case there's a fork before. */
+		WRITE_ONCE(state->generation, current_generation);
+		/* If the generation is wrong, the kernel has reseeded, so we should too. */
+		if (getrandom_syscall(state->key, sizeof(state->key), 0) != sizeof(state->key))
+			return getrandom_syscall(orig_buffer, orig_len, flags);
+		/* Set state->pos so that the batch is considered emptied. */
+		state->pos = sizeof(state->batch);
+	}
+
+	len = ret;
+more_batch:
+	/* First use whatever is left from the last call. */
+	batch_len = min_t(size_t, sizeof(state->batch) - state->pos, len);
+	if (batch_len) {
+		/* Zero out bytes as they're copied out, to preserve forward secrecy. */
+		memcpy_and_zero(buffer, state->batch + state->pos, batch_len);
+		state->pos += batch_len;
+		buffer += batch_len;
+		len -= batch_len;
+	}
+	if (!len) {
+		/*
+		 * Since rng_info->generation will never be 0, we re-read state->generation,
+		 * rather than using the local current_generation variable, to learn whether
+		 * we forked. Primarily, though, this indicates whether the rng itself has
+		 * reseeded, in which case we should generate a new key and start over.
+		 */
+		if (unlikely(READ_ONCE(state->generation) != READ_ONCE(rng_info->generation))) {
+			buffer = orig_buffer;
+			goto retry_generation;
+		}
+		return ret;
+	}
+
+	/* Generate blocks of rng output directly into the buffer while there's enough left. */
+	nblocks = len / CHACHA_BLOCK_SIZE;
+	if (nblocks) {
+		__arch_chacha20_blocks_nostack(buffer, state->key, counter, nblocks);
+		buffer += nblocks * CHACHA_BLOCK_SIZE;
+		len -= nblocks * CHACHA_BLOCK_SIZE;
+	}
+
+	/* Refill the batch and then overwrite the key, in order to preserve forward secrecy. */
+	BUILD_BUG_ON(sizeof(state->batch_key) % CHACHA_BLOCK_SIZE != 0);
+	__arch_chacha20_blocks_nostack(state->batch_key, state->key, counter,
+				       sizeof(state->batch_key) / CHACHA_BLOCK_SIZE);
+	state->pos = 0;
+	goto more_batch;
+}
+
+static __always_inline ssize_t
+__cvdso_getrandom(void *buffer, size_t len, unsigned int flags, void *opaque_state)
+{
+	return __cvdso_getrandom_data(__arch_get_vdso_rng_data(), buffer, len, flags, opaque_state);
+}
-- 
2.38.1


^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [PATCH v7 3/3] x86: vdso: Wire up getrandom() vDSO implementation
  2022-11-24 16:55 [PATCH v7 0/3] implement getrandom() in vDSO Jason A. Donenfeld
  2022-11-24 16:55 ` [PATCH v7 1/3] random: add vgetrandom_alloc() syscall Jason A. Donenfeld
  2022-11-24 16:55 ` [PATCH v7 2/3] random: introduce generic vDSO getrandom() implementation Jason A. Donenfeld
@ 2022-11-24 16:55 ` Jason A. Donenfeld
  2022-11-25 23:08   ` Thomas Gleixner
  2 siblings, 1 reply; 16+ messages in thread
From: Jason A. Donenfeld @ 2022-11-24 16:55 UTC (permalink / raw)
  To: linux-kernel, patches, tglx
  Cc: Jason A. Donenfeld, linux-crypto, linux-api, x86,
	Greg Kroah-Hartman, Adhemerval Zanella Netto,
	Carlos O'Donell, Florian Weimer, Arnd Bergmann,
	Christian Brauner

Hook up the generic vDSO implementation to the x86 vDSO data page. Since
the existing vDSO infrastructure is heavily based on the timekeeping
functionality, which works over arrays of bases, a new macro is
introduced for vvars that are not arrays.

The vDSO function requires a ChaCha20 implementation that does not write
to the stack, yet can still do an entire ChaCha20 permutation, so
provide this using SSE2, since this is userland code that must work on
all x86-64 processors.
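For readers auditing the assembly, a portable C sketch of a single ChaCha20
block in the standard RFC 8439 state layout may be helpful. This is not the
vDSO implementation: the vDSO variant fixes the nonce to zero, carries an
8-byte counter, and avoids all stack spills, none of which this sketch
attempts.

```c
#include <stdint.h>
#include <string.h>

static uint32_t rotl32(uint32_t x, int n)
{
	return (x << n) | (x >> (32 - n));
}

#define QR(a, b, c, d) do {			\
	a += b; d ^= a; d = rotl32(d, 16);	\
	c += d; b ^= c; b = rotl32(b, 12);	\
	a += b; d ^= a; d = rotl32(d, 8);	\
	c += d; b ^= c; b = rotl32(b, 7);	\
} while (0)

/* One 64-byte ChaCha20 block: constants, key, counter, nonce. */
static void chacha20_block(uint8_t out[64], const uint32_t key[8],
			   uint32_t counter, const uint32_t nonce[3])
{
	uint32_t init[16] = {
		/* "expand 32-byte k" */
		0x61707865, 0x3320646e, 0x79622d32, 0x6b206574,
		key[0], key[1], key[2], key[3],
		key[4], key[5], key[6], key[7],
		counter, nonce[0], nonce[1], nonce[2],
	};
	uint32_t s[16];
	int i;

	memcpy(s, init, sizeof(init));
	for (i = 0; i < 10; i++) {	/* 10 double rounds = 20 rounds */
		QR(s[0], s[4], s[8],  s[12]);	/* column rounds */
		QR(s[1], s[5], s[9],  s[13]);
		QR(s[2], s[6], s[10], s[14]);
		QR(s[3], s[7], s[11], s[15]);
		QR(s[0], s[5], s[10], s[15]);	/* diagonal rounds */
		QR(s[1], s[6], s[11], s[12]);
		QR(s[2], s[7], s[8],  s[13]);
		QR(s[3], s[4], s[9],  s[14]);
	}
	/* Feed-forward addition, then serialize little-endian. */
	for (i = 0; i < 16; i++) {
		uint32_t v = s[i] + init[i];
		out[4 * i + 0] = (uint8_t)v;
		out[4 * i + 1] = (uint8_t)(v >> 8);
		out[4 * i + 2] = (uint8_t)(v >> 16);
		out[4 * i + 3] = (uint8_t)(v >> 24);
	}
}
```

The double-round structure (four column quarter-rounds followed by four
diagonal quarter-rounds, repeated ten times) is exactly what the SSE2 loop in
the patch computes four lanes at a time.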

Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
---
 arch/x86/Kconfig                        |   1 +
 arch/x86/entry/vdso/Makefile            |   3 +-
 arch/x86/entry/vdso/vdso.lds.S          |   2 +
 arch/x86/entry/vdso/vgetrandom-chacha.S | 179 ++++++++++++++++++++++++
 arch/x86/entry/vdso/vgetrandom.c        |  18 +++
 arch/x86/include/asm/vdso/getrandom.h   |  49 +++++++
 arch/x86/include/asm/vdso/vsyscall.h    |   2 +
 arch/x86/include/asm/vvar.h             |  16 +++
 8 files changed, 269 insertions(+), 1 deletion(-)
 create mode 100644 arch/x86/entry/vdso/vgetrandom-chacha.S
 create mode 100644 arch/x86/entry/vdso/vgetrandom.c
 create mode 100644 arch/x86/include/asm/vdso/getrandom.h

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 331e21ba961a..b64b1b1274ae 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -270,6 +270,7 @@ config X86
 	select HAVE_UNSTABLE_SCHED_CLOCK
 	select HAVE_USER_RETURN_NOTIFIER
 	select HAVE_GENERIC_VDSO
+	select HAVE_VDSO_GETRANDOM		if X86_64
 	select HOTPLUG_SMT			if SMP
 	select IRQ_FORCED_THREADING
 	select NEED_PER_CPU_EMBED_FIRST_CHUNK
diff --git a/arch/x86/entry/vdso/Makefile b/arch/x86/entry/vdso/Makefile
index 3e88b9df8c8f..2de64e52236a 100644
--- a/arch/x86/entry/vdso/Makefile
+++ b/arch/x86/entry/vdso/Makefile
@@ -27,7 +27,7 @@ VDSO32-$(CONFIG_X86_32)		:= y
 VDSO32-$(CONFIG_IA32_EMULATION)	:= y
 
 # files to link into the vdso
-vobjs-y := vdso-note.o vclock_gettime.o vgetcpu.o
+vobjs-y := vdso-note.o vclock_gettime.o vgetcpu.o vgetrandom.o vgetrandom-chacha.o
 vobjs32-y := vdso32/note.o vdso32/system_call.o vdso32/sigreturn.o
 vobjs32-y += vdso32/vclock_gettime.o
 vobjs-$(CONFIG_X86_SGX)	+= vsgx.o
@@ -104,6 +104,7 @@ CFLAGS_REMOVE_vclock_gettime.o = -pg
 CFLAGS_REMOVE_vdso32/vclock_gettime.o = -pg
 CFLAGS_REMOVE_vgetcpu.o = -pg
 CFLAGS_REMOVE_vsgx.o = -pg
+CFLAGS_REMOVE_vgetrandom.o = -pg
 
 #
 # X32 processes use x32 vDSO to access 64bit kernel data.
diff --git a/arch/x86/entry/vdso/vdso.lds.S b/arch/x86/entry/vdso/vdso.lds.S
index 4bf48462fca7..1919cc39277e 100644
--- a/arch/x86/entry/vdso/vdso.lds.S
+++ b/arch/x86/entry/vdso/vdso.lds.S
@@ -28,6 +28,8 @@ VERSION {
 		clock_getres;
 		__vdso_clock_getres;
 		__vdso_sgx_enter_enclave;
+		getrandom;
+		__vdso_getrandom;
 	local: *;
 	};
 }
diff --git a/arch/x86/entry/vdso/vgetrandom-chacha.S b/arch/x86/entry/vdso/vgetrandom-chacha.S
new file mode 100644
index 000000000000..d1b986be3aa4
--- /dev/null
+++ b/arch/x86/entry/vdso/vgetrandom-chacha.S
@@ -0,0 +1,179 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2022 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
+ */
+
+#include <linux/linkage.h>
+#include <asm/frame.h>
+
+.section	.rodata.cst16.CONSTANTS, "aM", @progbits, 16
+.align 16
+CONSTANTS:	.octa 0x6b20657479622d323320646e61707865
+.text
+
+/*
+ * Very basic SSE2 implementation of ChaCha20. Produces a given positive number
+ * of blocks of output with a nonce of 0, taking an input key and 8-byte
+ * counter. Importantly does not spill to the stack. Its arguments are:
+ *
+ *	rdi: output bytes
+ *	rsi: 32-byte key input
+ *	rdx: 8-byte counter input/output
+ *	rcx: number of 64-byte blocks to write to output
+ */
+SYM_FUNC_START(chacha20_blocks_nostack)
+
+#define output  %rdi
+#define key     %rsi
+#define counter %rdx
+#define nblocks %rcx
+#define i       %al
+#define state0  %xmm0
+#define state1  %xmm1
+#define state2  %xmm2
+#define state3  %xmm3
+#define copy0   %xmm4
+#define copy1   %xmm5
+#define copy2   %xmm6
+#define copy3   %xmm7
+#define temp    %xmm8
+#define one     %xmm9
+
+	/* copy0 = "expand 32-byte k" */
+	movaps		CONSTANTS(%rip),copy0
+	/* copy1,copy2 = key */
+	movdqu		0x00(key),copy1
+	movdqu		0x10(key),copy2
+	/* copy3 = counter || zero nonce */
+	movq		0x00(counter),copy3
+	/* one = 1 || 0 */
+	movq		$1,%rax
+	movq		%rax,one
+
+.Lblock:
+	/* state0,state1,state2,state3 = copy0,copy1,copy2,copy3 */
+	movdqa		copy0,state0
+	movdqa		copy1,state1
+	movdqa		copy2,state2
+	movdqa		copy3,state3
+
+	movb		$10,i
+.Lpermute:
+	/* state0 += state1, state3 = rotl32(state3 ^ state0, 16) */
+	paddd		state1,state0
+	pxor		state0,state3
+	movdqa		state3,temp
+	pslld		$16,temp
+	psrld		$16,state3
+	por		temp,state3
+
+	/* state2 += state3, state1 = rotl32(state1 ^ state2, 12) */
+	paddd		state3,state2
+	pxor		state2,state1
+	movdqa		state1,temp
+	pslld		$12,temp
+	psrld		$20,state1
+	por		temp,state1
+
+	/* state0 += state1, state3 = rotl32(state3 ^ state0, 8) */
+	paddd		state1,state0
+	pxor		state0,state3
+	movdqa		state3,temp
+	pslld		$8,temp
+	psrld		$24,state3
+	por		temp,state3
+
+	/* state2 += state3, state1 = rotl32(state1 ^ state2, 7) */
+	paddd		state3,state2
+	pxor		state2,state1
+	movdqa		state1,temp
+	pslld		$7,temp
+	psrld		$25,state1
+	por		temp,state1
+
+	/* state1 = shuffle32(state1, MASK(0, 3, 2, 1)) */
+	pshufd		$0x39,state1,state1
+	/* state2 = shuffle32(state2, MASK(1, 0, 3, 2)) */
+	pshufd		$0x4e,state2,state2
+	/* state3 = shuffle32(state3, MASK(2, 1, 0, 3)) */
+	pshufd		$0x93,state3,state3
+
+	/* state0 += state1, state3 = rotl32(state3 ^ state0, 16) */
+	paddd		state1,state0
+	pxor		state0,state3
+	movdqa		state3,temp
+	pslld		$16,temp
+	psrld		$16,state3
+	por		temp,state3
+
+	/* state2 += state3, state1 = rotl32(state1 ^ state2, 12) */
+	paddd		state3,state2
+	pxor		state2,state1
+	movdqa		state1,temp
+	pslld		$12,temp
+	psrld		$20,state1
+	por		temp,state1
+
+	/* state0 += state1, state3 = rotl32(state3 ^ state0, 8) */
+	paddd		state1,state0
+	pxor		state0,state3
+	movdqa		state3,temp
+	pslld		$8,temp
+	psrld		$24,state3
+	por		temp,state3
+
+	/* state2 += state3, state1 = rotl32(state1 ^ state2, 7) */
+	paddd		state3,state2
+	pxor		state2,state1
+	movdqa		state1,temp
+	pslld		$7,temp
+	psrld		$25,state1
+	por		temp,state1
+
+	/* state1 = shuffle32(state1, MASK(2, 1, 0, 3)) */
+	pshufd		$0x93,state1,state1
+	/* state2 = shuffle32(state2, MASK(1, 0, 3, 2)) */
+	pshufd		$0x4e,state2,state2
+	/* state3 = shuffle32(state3, MASK(0, 3, 2, 1)) */
+	pshufd		$0x39,state3,state3
+
+	decb		i
+	jnz		.Lpermute
+
+	/* output0 = state0 + copy0 */
+	paddd		copy0,state0
+	movdqu		state0,0x00(output)
+	/* output1 = state1 + copy1 */
+	paddd		copy1,state1
+	movdqu		state1,0x10(output)
+	/* output2 = state2 + copy2 */
+	paddd		copy2,state2
+	movdqu		state2,0x20(output)
+	/* output3 = state3 + copy3 */
+	paddd		copy3,state3
+	movdqu		state3,0x30(output)
+
+	/* ++copy3.counter */
+	paddq		one,copy3
+
+	/* output += 64, --nblocks */
+	addq		$64,output
+	decq		nblocks
+	jnz		.Lblock
+
+	/* counter = copy3.counter */
+	movq		copy3,0x00(counter)
+
+	/* Zero out all the regs, in case nothing uses these again. */
+	pxor		state0,state0
+	pxor		state1,state1
+	pxor		state2,state2
+	pxor		state3,state3
+	pxor		copy0,copy0
+	pxor		copy1,copy1
+	pxor		copy2,copy2
+	pxor		copy3,copy3
+	pxor		temp,temp
+
+	ret
+SYM_FUNC_END(chacha20_blocks_nostack)
diff --git a/arch/x86/entry/vdso/vgetrandom.c b/arch/x86/entry/vdso/vgetrandom.c
new file mode 100644
index 000000000000..c7a2476d5d8a
--- /dev/null
+++ b/arch/x86/entry/vdso/vgetrandom.c
@@ -0,0 +1,18 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2022 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
+ */
+#include <linux/kernel.h>
+#include <linux/types.h>
+
+#include "../../../../lib/vdso/getrandom.c"
+
+ssize_t __vdso_getrandom(void *buffer, size_t len, unsigned int flags, void *state);
+
+ssize_t __vdso_getrandom(void *buffer, size_t len, unsigned int flags, void *state)
+{
+	return __cvdso_getrandom(buffer, len, flags, state);
+}
+
+ssize_t getrandom(void *, size_t, unsigned int, void *)
+	__attribute__((weak, alias("__vdso_getrandom")));
diff --git a/arch/x86/include/asm/vdso/getrandom.h b/arch/x86/include/asm/vdso/getrandom.h
new file mode 100644
index 000000000000..099aca58ef20
--- /dev/null
+++ b/arch/x86/include/asm/vdso/getrandom.h
@@ -0,0 +1,49 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2022 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
+ */
+#ifndef __ASM_VDSO_GETRANDOM_H
+#define __ASM_VDSO_GETRANDOM_H
+
+#ifndef __ASSEMBLY__
+
+#include <asm/unistd.h>
+#include <asm/vvar.h>
+
+static __always_inline ssize_t
+getrandom_syscall(void *buffer, size_t len, unsigned int flags)
+{
+	long ret;
+
+	asm ("syscall" : "=a" (ret) :
+	     "0" (__NR_getrandom), "D" (buffer), "S" (len), "d" (flags) :
+	     "rcx", "r11", "memory");
+
+	return ret;
+}
+
+#define __vdso_rng_data (VVAR(_vdso_rng_data))
+
+static __always_inline const struct vdso_rng_data *__arch_get_vdso_rng_data(void)
+{
+	if (__vdso_data->clock_mode == VDSO_CLOCKMODE_TIMENS)
+		return (void *)&__vdso_rng_data +
+		       ((void *)&__timens_vdso_data - (void *)&__vdso_data);
+	return &__vdso_rng_data;
+}
+
+/*
+ * Generates a given positive number of blocks of ChaCha20 output with nonce=0,
+ * and does not write to any stack or memory outside of the parameters passed
+ * to it. This way, we don't need to worry about stack data leaking into forked
+ * child processes.
+ */
+static __always_inline void __arch_chacha20_blocks_nostack(u8 *dst_bytes, const u32 *key, u32 *counter, size_t nblocks)
+{
+	extern void chacha20_blocks_nostack(u8 *dst_bytes, const u32 *key, u32 *counter, size_t nblocks);
+	return chacha20_blocks_nostack(dst_bytes, key, counter, nblocks);
+}
+
+#endif /* !__ASSEMBLY__ */
+
+#endif /* __ASM_VDSO_GETRANDOM_H */
diff --git a/arch/x86/include/asm/vdso/vsyscall.h b/arch/x86/include/asm/vdso/vsyscall.h
index be199a9b2676..71c56586a22f 100644
--- a/arch/x86/include/asm/vdso/vsyscall.h
+++ b/arch/x86/include/asm/vdso/vsyscall.h
@@ -11,6 +11,8 @@
 #include <asm/vvar.h>
 
 DEFINE_VVAR(struct vdso_data, _vdso_data);
+DEFINE_VVAR_SINGLE(struct vdso_rng_data, _vdso_rng_data);
+
 /*
  * Update the vDSO data page to keep in sync with kernel timekeeping.
  */
diff --git a/arch/x86/include/asm/vvar.h b/arch/x86/include/asm/vvar.h
index 183e98e49ab9..9d9af37f7cab 100644
--- a/arch/x86/include/asm/vvar.h
+++ b/arch/x86/include/asm/vvar.h
@@ -26,6 +26,8 @@
  */
 #define DECLARE_VVAR(offset, type, name) \
 	EMIT_VVAR(name, offset)
+#define DECLARE_VVAR_SINGLE(offset, type, name) \
+	EMIT_VVAR(name, offset)
 
 #else
 
@@ -37,6 +39,10 @@ extern char __vvar_page;
 	extern type timens_ ## name[CS_BASES]				\
 	__attribute__((visibility("hidden")));				\
 
+#define DECLARE_VVAR_SINGLE(offset, type, name)				\
+	extern type vvar_ ## name					\
+	__attribute__((visibility("hidden")));				\
+
 #define VVAR(name) (vvar_ ## name)
 #define TIMENS(name) (timens_ ## name)
 
@@ -44,12 +50,22 @@ extern char __vvar_page;
 	type name[CS_BASES]						\
 	__attribute__((section(".vvar_" #name), aligned(16))) __visible
 
+#define DEFINE_VVAR_SINGLE(type, name)					\
+	type name							\
+	__attribute__((section(".vvar_" #name), aligned(16))) __visible
+
 #endif
 
 /* DECLARE_VVAR(offset, type, name) */
 
 DECLARE_VVAR(128, struct vdso_data, _vdso_data)
 
+#if !defined(_SINGLE_DATA)
+#define _SINGLE_DATA
+DECLARE_VVAR_SINGLE(640, struct vdso_rng_data, _vdso_rng_data)
+#endif
+
 #undef DECLARE_VVAR
+#undef DECLARE_VVAR_SINGLE
 
 #endif
-- 
2.38.1


^ permalink raw reply related	[flat|nested] 16+ messages in thread

* Re: [PATCH v7 1/3] random: add vgetrandom_alloc() syscall
  2022-11-24 16:55 ` [PATCH v7 1/3] random: add vgetrandom_alloc() syscall Jason A. Donenfeld
@ 2022-11-25 20:45   ` Thomas Gleixner
  2022-11-27 20:18     ` Jason A. Donenfeld
  0 siblings, 1 reply; 16+ messages in thread
From: Thomas Gleixner @ 2022-11-25 20:45 UTC (permalink / raw)
  To: Jason A. Donenfeld, linux-kernel, patches
  Cc: Jason A. Donenfeld, linux-crypto, linux-api, x86,
	Greg Kroah-Hartman, Adhemerval Zanella Netto,
	Carlos O'Donell, Florian Weimer, Arnd Bergmann,
	Christian Brauner

On Thu, Nov 24 2022 at 17:55, Jason A. Donenfeld wrote:
> ---
>  MAINTAINERS                             |  1 +
>  arch/x86/Kconfig                        |  1 +
>  arch/x86/entry/syscalls/syscall_64.tbl  |  1 +
>  arch/x86/include/asm/unistd.h           |  1 +
>  drivers/char/random.c                   | 59 +++++++++++++++++++++++++
>  include/uapi/asm-generic/unistd.h       |  7 ++-
>  kernel/sys_ni.c                         |  3 ++
>  lib/vdso/getrandom.h                    | 23 ++++++++++
>  scripts/checksyscalls.sh                |  4 ++
>  tools/include/uapi/asm-generic/unistd.h |  7 ++-
>  10 files changed, 105 insertions(+), 2 deletions(-)
>  create mode 100644 lib/vdso/getrandom.h

I think I asked for this before:

Please split these things properly up. Provide the syscall and then wire
it up.

> diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
> index 67745ceab0db..331e21ba961a 100644
> --- a/arch/x86/Kconfig
> +++ b/arch/x86/Kconfig
> @@ -59,6 +59,7 @@ config X86
>  	#
>  	select ACPI_LEGACY_TABLES_LOOKUP	if ACPI
>  	select ACPI_SYSTEM_POWER_STATES_SUPPORT	if ACPI
> +	select ADVISE_SYSCALLS			if X86_64

Why is this x86_64 specific?

> --- a/arch/x86/include/asm/unistd.h
> +++ b/arch/x86/include/asm/unistd.h
> @@ -27,6 +27,7 @@
>  #  define __ARCH_WANT_COMPAT_SYS_PWRITEV64
>  #  define __ARCH_WANT_COMPAT_SYS_PREADV64V2
>  #  define __ARCH_WANT_COMPAT_SYS_PWRITEV64V2
> +#  define __ARCH_WANT_VGETRANDOM_ALLOC

So instead of this define, why can't you do:

config VGETRANDOM_ALLOC
       bool
       select ADVISE_SYSCALLS

and then have

config GENERIC_VDSO_RANDOM_WHATEVER
       bool
       select VGETRANDOM_ALLOC

This gives a clear Kconfig dependency instead of the random
ADVISE_SYSCALLS select.

>--- a/drivers/char/random.c
> +++ b/drivers/char/random.c

> +#include "../../lib/vdso/getrandom.h"

Seriously?

include/vdso/ exists for a reason.

> +#ifdef __ARCH_WANT_VGETRANDOM_ALLOC
> +/*
> + * The vgetrandom() function in userspace requires an opaque state, which this
> + * function provides to userspace, by mapping a certain number of special pages
> + * into the calling process. It takes a hint as to the number of opaque states
> + * desired, and returns the number of opaque states actually allocated, the
> + * size of each one in bytes, and the address of the first state.

As this is a syscall which can be invoked outside of the VDSO, can you
please provide proper kernel-doc which explains the arguments, the
functionality and the return value?

> + */
> +SYSCALL_DEFINE3(vgetrandom_alloc, unsigned int __user *, num,
> +		unsigned int __user *, size_per_each, unsigned int, flags)
> +{
> +	size_t alloc_size, num_states;
> +	unsigned long pages_addr;
> +	unsigned int num_hint;
> +	int ret;
> +
> +	if (flags)
> +		return -EINVAL;
> +
> +	if (get_user(num_hint, num))
> +		return -EFAULT;
> +
> +	num_states = clamp_t(size_t, num_hint, 1, (SIZE_MAX & PAGE_MASK) / sizeof(struct vgetrandom_state));
> +	alloc_size = PAGE_ALIGN(num_states * sizeof(struct vgetrandom_state));
> +
> +	if (put_user(alloc_size / sizeof(struct vgetrandom_state), num) ||
> +	    put_user(sizeof(struct vgetrandom_state), size_per_each))
> +		return -EFAULT;

That's a total of four sizeof(struct vgetrandom_state) usage sites.

       size_t state_size = sizeof(struct vgetrandom_state);

perhaps?

> diff --git a/lib/vdso/getrandom.h b/lib/vdso/getrandom.h
> new file mode 100644
> index 000000000000..c7f727db2aaa
> --- /dev/null
> +++ b/lib/vdso/getrandom.h

Wrong place. See above.

> @@ -0,0 +1,23 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2022 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
> + */
> +
> +#ifndef _VDSO_LIB_GETRANDOM_H
> +#define _VDSO_LIB_GETRANDOM_H
> +
> +#include <crypto/chacha.h>
> +
> +struct vgetrandom_state {
> +	union {
> +		struct {
> +			u8 batch[CHACHA_BLOCK_SIZE * 3 / 2];
> +			u32 key[CHACHA_KEY_SIZE / sizeof(u32)];
> +		};
> +		u8 batch_key[CHACHA_BLOCK_SIZE * 2];
> +	};
> +	unsigned long generation;
> +	u8 pos;
> +};

Thanks,

        tglx

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH v7 2/3] random: introduce generic vDSO getrandom() implementation
  2022-11-24 16:55 ` [PATCH v7 2/3] random: introduce generic vDSO getrandom() implementation Jason A. Donenfeld
@ 2022-11-25 22:39   ` Thomas Gleixner
  2022-11-27 21:52     ` Jason A. Donenfeld
  0 siblings, 1 reply; 16+ messages in thread
From: Thomas Gleixner @ 2022-11-25 22:39 UTC (permalink / raw)
  To: Jason A. Donenfeld, linux-kernel, patches
  Cc: Jason A. Donenfeld, linux-crypto, linux-api, x86,
	Greg Kroah-Hartman, Adhemerval Zanella Netto,
	Carlos O'Donell, Florian Weimer, Arnd Bergmann,
	Christian Brauner

Jason!

On Thu, Nov 24 2022 at 17:55, Jason A. Donenfeld wrote:
>
> Together with the previous commit that introduces vgetrandom_alloc(),

"Together with the previous commit" is just completely useless
information.

> this functionality is intended to be integrated into libc's thread
> management.

vdso_getrandom() and sys_vgetrandom_alloc() provide ..... which is
intended to be integrated ....

> diff --git a/include/vdso/datapage.h b/include/vdso/datapage.h
> index 73eb622e7663..cbacfd923a5c 100644
> --- a/include/vdso/datapage.h
> +++ b/include/vdso/datapage.h
> @@ -109,6 +109,11 @@ struct vdso_data {
>  	struct arch_vdso_data	arch_data;
>  };
>  
> +struct vdso_rng_data {
> +	unsigned long generation;
> +	bool is_ready;
> +};

Please follow the coding style in this header:

       - make the struct definition tabular aligned
       - provide kernel doc which explains the struct members

> +config HAVE_VDSO_GETRANDOM
> +	bool
> +	help
> +	  Selected by architectures that support vDSO getrandom().

See ?

>  endif
> diff --git a/lib/vdso/getrandom.c b/lib/vdso/getrandom.c
> new file mode 100644
> index 000000000000..2c4ef5ef212c
> --- /dev/null
> +++ b/lib/vdso/getrandom.c
> @@ -0,0 +1,114 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Copyright (C) 2022 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
> + */
> +
> +#include <linux/kernel.h>
> +#include <linux/atomic.h>
> +#include <linux/fs.h>

No. This is VDSO and builds a userspace library. You cannot include
random kernel headers, which in turn include the world and some more. It
might build today, but any legitimate change to fs.h or one of the
headers included by it will make it explode.

I just fixed up some other instance which included world and I have zero
interest to get more of this "works by chance" things.

If you really need anything from fs.h then please isolate it out into a
separate header file which is included by fs.h and here.

> +#include <vdso/datapage.h>
> +#include <asm/vdso/getrandom.h>
> +#include <asm/vdso/vsyscall.h>
> +#include "getrandom.h"
> +
> +static void memcpy_and_zero(void *dst, void *src, size_t len)
> +{
> +#define CASCADE(type) \
> +	while (len >= sizeof(type)) { \
> +		__put_unaligned_t(type, __get_unaligned_t(type, src), dst); \
> +		__put_unaligned_t(type, 0, src); \
> +		dst += sizeof(type); \
> +		src += sizeof(type); \
> +		len -= sizeof(type); \
> +	}

No. Defines inside of functions are a horrible habit. This is not the
obfuscated C-code contest.

> +#if IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS)
> +#if BITS_PER_LONG == 64
> +	CASCADE(u64);
> +#endif
> +	CASCADE(u32);
> +	CASCADE(u16);
> +#endif
> +	CASCADE(u8);
> +#undef CASCADE

This is equally unreadable. I had to reread it 4 times to grok it.

#define MEMCPY_AND_ZERO_SRC(type, dst, src, len)				\
	while (len >= sizeof(type)) {                                           \
		__put_unaligned_t(type, __get_unaligned_t(type, src), dst);     \
		__put_unaligned_t(type, 0, src);                                \
		dst += sizeof(type);                                            \
		src += sizeof(type);                                            \
		len -= sizeof(type);                                            \
	}

static void memcpy_and_zero_src(void *dst, void *src, size_t len)
{
	if (IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS)) {
		if (IS_ENABLED(CONFIG_64BIT))
			MEMCPY_AND_ZERO_SRC(u64, dst, src, len);

		MEMCPY_AND_ZERO_SRC(u32, dst, src, len);
		MEMCPY_AND_ZERO_SRC(u16, dst, src, len);
	}
	MEMCPY_AND_ZERO_SRC(u8, dst, src, len);
}

Can you spot the difference in readability and formatting?
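The intended semantics are easy to pin down with a portable userspace sketch.
Here memcpy()/memset() stand in for the word-sized cascade; this is
illustration only, since a real implementation avoids libc calls and a
memset() of a buffer that is never read again may be optimized away by the
compiler.

```c
#include <string.h>

/* Copy len bytes from src to dst, wiping src behind the copy so the
 * consumed batch bytes cannot leak into a fork()ed child. */
static void memcpy_and_zero_src(void *dst, void *src, size_t len)
{
	memcpy(dst, src, len);
	memset(src, 0, len); /* sketch only: the cascade zeroes as it copies */
}
```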

> +
> +static __always_inline ssize_t
> +__cvdso_getrandom_data(const struct vdso_rng_data *rng_info, void *buffer, size_t len,
> +		       unsigned int flags, void *opaque_state)

Lacks kernel-doc explaining the arguments and the functionality.

> +{
> +	ssize_t ret = min_t(size_t, MAX_RW_COUNT, len);
> +	struct vgetrandom_state *state = opaque_state;
> +	size_t batch_len, nblocks, orig_len = len;
> +	unsigned long current_generation;
> +	void *orig_buffer = buffer;
> +	u32 counter[2] = { 0 };
> +
> +	/*
> +	 * If the kernel isn't yet initialized, then the various flags might have some effect
> +	 * that we can't emulate in userspace, so use the syscall.  Otherwise, the flags have
> +	 * no effect, and can continue.

Sorry, this is word salad which raises a -ENOPARSE here.

@rng_info is a user supplied pointer, but there is zero information how
rng_info->is_ready is set to true because obviously getrandom_syscall()
does not know about it.

I know you described it in the changelog with an example to some extent,
but that does not help the casual reader of this code at all.

> +	 */
> +	if (unlikely(!rng_info->is_ready))
> +		return getrandom_syscall(orig_buffer, orig_len, flags);

What's the point of using orig_buffer and orig_len at this place? I can
understand it below, but here it does not make sense.

> +
> +	if (unlikely(!len))
> +		return 0;

Why go into the kernel above when len == 0?

> +
> +retry_generation:
> +	current_generation = READ_ONCE(rng_info->generation);

READ_ONCE() and WRITE_ONCE() require comments explaining why they are
used and what the actual counterpart of each operation is.

> +	if (unlikely(state->generation != current_generation)) {
> +		/* Write the generation before filling the key, in case there's a fork before. */
> +		WRITE_ONCE(state->generation, current_generation);
> +		/* If the generation is wrong, the kernel has reseeded, so we should too. */
> +		if (getrandom_syscall(state->key, sizeof(state->key), 0) != sizeof(state->key))
> +			return getrandom_syscall(orig_buffer, orig_len, flags);
> +		/* Set state->pos so that the batch is considered emptied. */
> +		state->pos = sizeof(state->batch);

Can you please add newlines into this so the various steps are visually
separated?

	if (unlikely(state->generation != current_generation)) {
		/* Write the generation before filling the key, in case there's a fork before. */
		WRITE_ONCE(state->generation, current_generation);

		/* If the generation is wrong, the kernel has reseeded, so we should too. */
		if (getrandom_syscall(state->key, sizeof(state->key), 0) != sizeof(state->key))
			return getrandom_syscall(orig_buffer, orig_len, flags);

		/* Set state->pos so that the batch is considered emptied. */
		state->pos = sizeof(state->batch);
	}

Again: Can you spot the difference? Please fix that up all over the place.

Now let me comment on the content of this:

> +	if (unlikely(state->generation != current_generation)) {
> +		/* Write the generation before filling the key, in case there's a fork before. */

There is no space restriction which requires to write comments in a way
that they need crystal balls to decode.

                /*
                 * Update @state->generation before invoking the syscall, 
                 * which fills the key, to protect against a fork FOR
                 * WHATEVER EXPLICIT REASON
                 */

This might be completely obvious to you today, but it's not obvious to
me or anyone else and I'm sure that you would curse these comments six
month down the road yourself.

> +		WRITE_ONCE(state->generation, current_generation);
> +		/* If the generation is wrong, the kernel has reseeded, so we should too. */

Which generation is wrong? Your's or mine? Please spell things
out. There is enough space to do so.

Also please refrain from 'we should ...'. 'We' has no place here.

I know it is a common habit to impersonate code and code execution, but
it's a horrible habit. Aside from being non-factual, there are people
from other cultures who really have a hard time understanding this.

Neither is 'should' a proper term here. 'should' is not mandatory.

  "If there is a state generation mismatch, which means the kernel has
   reseeded the random generator, then it is _required_ to reseed the
   vdso buffers too."

Sorry, I really fail to understand this sloppy wording coming from
someone who educates everyone else about the importance of correctness.

If you can't be bothered to express yourself in correct terms
consistently then why do you expect that anyone else understands
randomness or cryptography correctly?

I really appreciate your efforts to make all of this more accessible to
the average user, but please get your act together.

> +		if (getrandom_syscall(state->key, sizeof(state->key), 0) != sizeof(state->key))
> +			return getrandom_syscall(orig_buffer, orig_len, flags);

This lacks an explanation of why this might invoke the syscall twice.

> +		/* Set state->pos so that the batch is considered emptied. */

Considered? There is nothing to consider here, right?

                /*
                 * Advance state->pos beyond the end of the batch buffer to
                 * signal that the batch needs to be refilled.
                 */

Hmm?

> +		state->pos = sizeof(state->batch);

> +	}
> +
> +	len = ret;
> +more_batch:
> +	/* First use whatever is left from the last call. */

Last call of what?

> +	batch_len = min_t(size_t, sizeof(state->batch) - state->pos, len);

Where is the sanity check for state->pos <= sizeof(state->batch)?

> +	if (batch_len) {
> +		/* Zero out bytes as they're copied out, to preserve forward secrecy. */

Please name the function so it is self-explaining and add a
comprehensive comment to it so you can spare the comment here.

> +		memcpy_and_zero(buffer, state->batch + state->pos, batch_len);
> +		state->pos += batch_len;
> +		buffer += batch_len;
> +		len -= batch_len;

buffer and state->pos could be updated by that non-fail copy function to
aid the compiler, but I might be wrong about the ability of compilers to
optimize code like that.
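Something along these lines, as a plain userspace sketch (the name and the
pointer-updating signature are just a suggestion here, not what the patch
actually does):

```c
#include <string.h>

/*
 * Copy @len bytes from *@src to *@dst and zero the source bytes as they
 * are copied, to preserve forward secrecy. Advance both pointers so the
 * call site does not have to repeat that bookkeeping.
 */
static inline void memcpy_and_zero_src(unsigned char **dst, unsigned char **src, size_t len)
{
	memcpy(*dst, *src, len);
	memset(*src, 0, len);
	*dst += len;
	*src += len;
}
```

Whether the compiler folds this as well as open-coded pointer updates is
exactly the open question above.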

> +	}
> +	if (!len) {
> +		/*
> +		 * Since rng_info->generation will never be 0, we re-read state->generation,

s/we// ....

> +		 * rather than using the local current_generation variable, to learn whether
> +		 * we forked. Primarily, though, this indicates whether the rng itself has
> +		 * reseeded, in which case we should generate a new key and start over.
> +		 */
> +		if (unlikely(READ_ONCE(state->generation) != READ_ONCE(rng_info->generation))) {
> +			buffer = orig_buffer;
> +			goto retry_generation;
> +		}
> +		return ret;
> +	}
> +
> +	/* Generate blocks of rng output directly into the buffer while there's enough left. */
> +	nblocks = len / CHACHA_BLOCK_SIZE;
> +	if (nblocks) {
> +		__arch_chacha20_blocks_nostack(buffer, state->key, counter, nblocks);
> +		buffer += nblocks * CHACHA_BLOCK_SIZE;
> +		len -= nblocks * CHACHA_BLOCK_SIZE;
> +	}
> +
> +	/* Refill the batch and then overwrite the key, in order to preserve forward secrecy. */
> +	BUILD_BUG_ON(sizeof(state->batch_key) % CHACHA_BLOCK_SIZE != 0);

Does this build bug really need to be glued in between the comment
explaining the function call and the function call itself?

> +	__arch_chacha20_blocks_nostack(state->batch_key, state->key, counter,
> +				       sizeof(state->batch_key) / CHACHA_BLOCK_SIZE);
> +	state->pos = 0;

This reset of state->pos also lacks a comment for the casual reader...

Thanks,

        tglx

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH v7 3/3] x86: vdso: Wire up getrandom() vDSO implementation
  2022-11-24 16:55 ` [PATCH v7 3/3] x86: vdso: Wire up getrandom() vDSO implementation Jason A. Donenfeld
@ 2022-11-25 23:08   ` Thomas Gleixner
  2022-11-27 22:07     ` Jason A. Donenfeld
  0 siblings, 1 reply; 16+ messages in thread
From: Thomas Gleixner @ 2022-11-25 23:08 UTC (permalink / raw)
  To: Jason A. Donenfeld, linux-kernel, patches
  Cc: Jason A. Donenfeld, linux-crypto, linux-api, x86,
	Greg Kroah-Hartman, Adhemerval Zanella Netto,
	Carlos O'Donell, Florian Weimer, Arnd Bergmann,
	Christian Brauner

Jason!

On Thu, Nov 24 2022 at 17:55, Jason A. Donenfeld wrote:
> +++ b/arch/x86/entry/vdso/vgetrandom-chacha.S
> +/*
> + * Very basic SSE2 implementation of ChaCha20. Produces a given positive number
> + * of blocks of output with a nonce of 0, taking an input key and 8-byte
> + * counter. Importantly does not spill to the stack. Its arguments are:

Basic or not. This needs a Reviewed-by from someone who understands SSE2
and ChaCha20 before this can go anywhere near the x86 tree.

> +++ b/arch/x86/entry/vdso/vgetrandom.c
> @@ -0,0 +1,18 @@
> +// SPDX-License-Identifier: GPL-2.0-only
> +/*
> + * Copyright (C) 2022 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
> + */
> +#include <linux/kernel.h>

Why do you need kernel.h here?

> +#include <linux/types.h>
> +
> +#include "../../../../lib/vdso/getrandom.c"
> +
> +ssize_t __vdso_getrandom(void *buffer, size_t len, unsigned int flags, void *state);
> +
> +ssize_t __vdso_getrandom(void *buffer, size_t len, unsigned int flags, void *state)
> +{
> +	return __cvdso_getrandom(buffer, len, flags, state);
> +}
> +
> +ssize_t getrandom(void *, size_t, unsigned int, void *)
> +	__attribute__((weak, alias("__vdso_getrandom")));
> diff --git a/arch/x86/include/asm/vdso/getrandom.h b/arch/x86/include/asm/vdso/getrandom.h
> new file mode 100644
> index 000000000000..099aca58ef20
> --- /dev/null
> +++ b/arch/x86/include/asm/vdso/getrandom.h
> @@ -0,0 +1,49 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2022 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
> + */
> +#ifndef __ASM_VDSO_GETRANDOM_H
> +#define __ASM_VDSO_GETRANDOM_H
> +
> +#ifndef __ASSEMBLY__
> +
> +#include <asm/unistd.h>
> +#include <asm/vvar.h>
> +
> +static __always_inline ssize_t
> +getrandom_syscall(void *buffer, size_t len, unsigned int flags)

static __always_inline ssize_t getrandom_syscall(void *buffer, size_t len, unsigned int flags)

please. We expanded to 100 quite some time ago.

Some kernel-doc compliant comment for this would be appreciated as well.

> +{
> +	long ret;
> +
> +	asm ("syscall" : "=a" (ret) :
> +	     "0" (__NR_getrandom), "D" (buffer), "S" (len), "d" (flags) :
> +	     "rcx", "r11", "memory");
> +
> +	return ret;
> +}
> +
> +#define __vdso_rng_data (VVAR(_vdso_rng_data))
> +
> +static __always_inline const struct vdso_rng_data *__arch_get_vdso_rng_data(void)
> +{
> +	if (__vdso_data->clock_mode == VDSO_CLOCKMODE_TIMENS)
> +		return (void *)&__vdso_rng_data +
> +		       ((void *)&__timens_vdso_data - (void *)&__vdso_data);
> +	return &__vdso_rng_data;

So either bite the bullet and write it:

	if (__vdso_data->clock_mode == VDSO_CLOCKMODE_TIMENS)
		return (void *)&__vdso_rng_data + ((void *)&__timens_vdso_data - (void *)&__vdso_data);

        return &__vdso_rng_data;

or comply to the well documented rules of the tip tree:

   https://www.kernel.org/doc/html/latest/process/maintainer-tip.html#bracket-rules

> +/*
> + * Generates a given positive number of blocks of ChaCha20 output with nonce=0,
> + * and does not write to any stack or memory outside of the parameters passed
> + * to it. This way, we don't need to worry about stack data leaking into forked
> + * child processes.

Please use proper kernel-doc

> + */
> +static __always_inline void __arch_chacha20_blocks_nostack(u8 *dst_bytes, const u32 *key, u32 *counter, size_t nblocks)
> +{
> +	extern void chacha20_blocks_nostack(u8 *dst_bytes, const u32 *key, u32 *counter, size_t nblocks);
> +	return chacha20_blocks_nostack(dst_bytes, key, counter, nblocks);

You surely have an issue with your newline key...

The above aside, can you please explain the value of this __arch_()
wrapper?

It's just voodoo for no value because it hands through the arguments
1:1. So where are you expecting that the __arch...() version of this is
any different than invoking the architecture specific version of
chacha20_blocks_nostack().

Can you spot the irony of your naming choices?

    __arch_chacha20_blocks_nostack() {
      	return chacha20_blocks_nostack()
    };

Thanks,

        tglx


* Re: [PATCH v7 1/3] random: add vgetrandom_alloc() syscall
  2022-11-25 20:45   ` Thomas Gleixner
@ 2022-11-27 20:18     ` Jason A. Donenfeld
  2022-11-28  9:12       ` Thomas Gleixner
  2022-11-28 13:54       ` Arnd Bergmann
  0 siblings, 2 replies; 16+ messages in thread
From: Jason A. Donenfeld @ 2022-11-27 20:18 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: linux-kernel, patches, linux-crypto, linux-api, x86,
	Greg Kroah-Hartman, Adhemerval Zanella Netto,
	Carlos O'Donell, Florian Weimer, Arnd Bergmann,
	Christian Brauner

Hi Thomas,

Thanks a lot for the thorough review, here, and in the other two emails.
I appreciate you taking the time to look at it, and my apologies for
parts that are unclear or sloppy or otherwise unpolished. I'll try to
make v8 a lot better.

Comments inline below:

On Fri, Nov 25, 2022 at 09:45:31PM +0100, Thomas Gleixner wrote:
> On Thu, Nov 24 2022 at 17:55, Jason A. Donenfeld wrote:
> > ---
> >  MAINTAINERS                             |  1 +
> >  arch/x86/Kconfig                        |  1 +
> >  arch/x86/entry/syscalls/syscall_64.tbl  |  1 +
> >  arch/x86/include/asm/unistd.h           |  1 +
> >  drivers/char/random.c                   | 59 +++++++++++++++++++++++++
> >  include/uapi/asm-generic/unistd.h       |  7 ++-
> >  kernel/sys_ni.c                         |  3 ++
> >  lib/vdso/getrandom.h                    | 23 ++++++++++
> >  scripts/checksyscalls.sh                |  4 ++
> >  tools/include/uapi/asm-generic/unistd.h |  7 ++-
> >  10 files changed, 105 insertions(+), 2 deletions(-)
> >  create mode 100644 lib/vdso/getrandom.h
> 
> I think I asked for this before:
> 
> Please split these things properly up. Provide the syscall and then wire
> it up.

Before, I split it into "syscall, generic vdso, x86 vdso", as that's how
I interpreted your email. Next, I'll split it up into "generic syscall,
generic vdso, x86 vdso & syscall", since enabling the syscall without
the vdso function, or vice-versa, doesn't make sense, and having that
last step be all at once there will provide an easy thing for other
archs to look at.

> > diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
> > index 67745ceab0db..331e21ba961a 100644
> > --- a/arch/x86/Kconfig
> > +++ b/arch/x86/Kconfig
> > @@ -59,6 +59,7 @@ config X86
> >  	#
> >  	select ACPI_LEGACY_TABLES_LOOKUP	if ACPI
> >  	select ACPI_SYSTEM_POWER_STATES_SUPPORT	if ACPI
> > +	select ADVISE_SYSCALLS			if X86_64
> 
> Why is this x86_64 specific?
> 
> > --- a/arch/x86/include/asm/unistd.h
> > +++ b/arch/x86/include/asm/unistd.h
> > @@ -27,6 +27,7 @@
> >  #  define __ARCH_WANT_COMPAT_SYS_PWRITEV64
> >  #  define __ARCH_WANT_COMPAT_SYS_PREADV64V2
> >  #  define __ARCH_WANT_COMPAT_SYS_PWRITEV64V2
> > +#  define __ARCH_WANT_VGETRANDOM_ALLOC
> 
> So instead of this define, why can't you do:
> 
> config VGETRANDOM_ALLOC
>        bool
>        select ADVISE_SYSCALLS
> 
> and then have
> 
> config GENERIC_VDSO_RANDOM_WHATEVER
>        bool
>        select VGETRANDOM_ALLOC
> 
> This gives a clear Kconfig dependency instead of the random
> ADVISE_SYSCALLS select.

That's much better indeed. I was trying to straddle the two conventions
of `#define __ARCH_...` for syscalls and a Kconfig for vDSO functions,
but doing it all together as you've suggested is nicer.

I'll try to figure this out, though so far futzing around suggests there
might have to be both, because of unistd.h being a userspace header.
That is, include/uapi/asm-generic/unistd.h typically needs a `#if
__ARCH_WANT..., #define ...` in it. I'll give it a spin and you'll see
for v8. At the very least it should get rid of the more awkward
`select ADVISE_SYSCALLS if X86_64` part, and will better separate the
arch code from non-arch code.

> 
> >--- a/drivers/char/random.c
> > +++ b/drivers/char/random.c
> 
> > +#include "../../lib/vdso/getrandom.h"
> 
> Seriously?
> 
> include/vdso/ exists for a reason.

Er, yes, thanks.

> 
> > +#ifdef __ARCH_WANT_VGETRANDOM_ALLOC
> > +/*
> > + * The vgetrandom() function in userspace requires an opaque state, which this
> > + * function provides to userspace, by mapping a certain number of special pages
> > + * into the calling process. It takes a hint as to the number of opaque states
> > + * desired, and returns the number of opaque states actually allocated, the
> > + * size of each one in bytes, and the address of the first state.
> 
> As this is a syscall which can be invoked outside of the VDSO, can you
> please provide proper kernel-doc which explains the arguments, the
> functionality and the return value?

Yes, will do.

> 
> > + */
> > +SYSCALL_DEFINE3(vgetrandom_alloc, unsigned int __user *, num,
> > +		unsigned int __user *, size_per_each, unsigned int, flags)
> > +{
> > +	size_t alloc_size, num_states;
> > +	unsigned long pages_addr;
> > +	unsigned int num_hint;
> > +	int ret;
> > +
> > +	if (flags)
> > +		return -EINVAL;
> > +
> > +	if (get_user(num_hint, num))
> > +		return -EFAULT;
> > +
> > +	num_states = clamp_t(size_t, num_hint, 1, (SIZE_MAX & PAGE_MASK) / sizeof(struct vgetrandom_state));
> > +	alloc_size = PAGE_ALIGN(num_states * sizeof(struct vgetrandom_state));
> > +
> > +	if (put_user(alloc_size / sizeof(struct vgetrandom_state), num) ||
> > +	    put_user(sizeof(struct vgetrandom_state), size_per_each))
> > +		return -EFAULT;
> 
> That's a total of four sizeof(struct vgetrandom_state) usage sites.
> 
>        size_t state_size = sizeof(struct vgetrandom_state);
> 
> perhaps?

Not my style -- I like to have the constant expression at the usage site
so I don't have to remember the variable -- but I'm fine going with your
suggestion, so I'll do that for v8.

Jason


* Re: [PATCH v7 2/3] random: introduce generic vDSO getrandom() implementation
  2022-11-25 22:39   ` Thomas Gleixner
@ 2022-11-27 21:52     ` Jason A. Donenfeld
  2022-11-28  9:25       ` Thomas Gleixner
  0 siblings, 1 reply; 16+ messages in thread
From: Jason A. Donenfeld @ 2022-11-27 21:52 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: linux-kernel, patches, linux-crypto, linux-api, x86,
	Greg Kroah-Hartman, Adhemerval Zanella Netto,
	Carlos O'Donell, Florian Weimer, Arnd Bergmann,
	Christian Brauner

> Jason!
Thomas!

On Fri, Nov 25, 2022 at 11:39:15PM +0100, Thomas Gleixner wrote:
> > +struct vdso_rng_data {
> > +	unsigned long generation;
> > +	bool is_ready;
> > +};
> 
> Please follow the coding style in this header:
> 
>        - make the struct definition tabular aligned
>        - provide kernel doc which explains the struct members

Will do.

> > +
> > +#include <linux/kernel.h>
> > +#include <linux/atomic.h>
> > +#include <linux/fs.h>
> 
> No. This is VDSO and builds a userspace library. You cannot include
> random kernel headers, which in turn include the world and some more. It
> might build today, but any legitimate change to fs.h or one of the
> headers included by it will make it explode.
> 
> I just fixed up some other instance which included world and I have zero
> interest to get more of this "works by chance" things.
> 
> If you really need anything from fs.h then please isolate it out into a
> separate header file which is included by fs.h and here.

Hm. I need MAX_RW_COUNT from linux/fs.h. I could just hardcode `(INT_MAX
& PAGE_MASK)`, though, if you'd prefer, and leave a comment. I'll do
that. Or I could move MAX_RW_COUNT into linux/kernel.h? But maybe that's
undesirable.

So:

    ssize_t ret = min_t(size_t, INT_MAX & PAGE_MASK /* = MAX_RW_COUNT */, len);

I'll do that, if it's okay with you. Or tell me if you want me to
instead move MAX_RW_COUNT into linux/kernel.h.

Also, if I remove linux/fs.h, I need to include linux/time.h in its
place, because vdso/datapage.h implicitly depends on it. Alternatively,
I could add linux/time.h to vdso/datapage.h, but I don't want to touch
too many files uninvited.
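Spelled out as a self-contained userspace model (a PAGE_SIZE of 4096 is
assumed here purely for illustration; it is arch-dependent in reality):

```c
#include <limits.h>
#include <stddef.h>

#define ILLUSTRATIVE_PAGE_SIZE 4096UL	/* assumed, arch-dependent */
#define ILLUSTRATIVE_PAGE_MASK (~(ILLUSTRATIVE_PAGE_SIZE - 1))

/* Model of: ssize_t ret = min_t(size_t, INT_MAX & PAGE_MASK, len); */
static size_t clamp_getrandom_len(size_t len)
{
	/* With 4 KiB pages this is 0x7ffff000, i.e. MAX_RW_COUNT. */
	const size_t max_rw_count = INT_MAX & ILLUSTRATIVE_PAGE_MASK;

	return len < max_rw_count ? len : max_rw_count;
}
```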


> 	MEMCPY_AND_ZERO_SRC(u8, dst, src, len);
> }
> 
> Can you spot the difference in readability and formatting?

Nice suggestion, will do.

> > +
> > +static __always_inline ssize_t
> > +__cvdso_getrandom_data(const struct vdso_rng_data *rng_info, void *buffer, size_t len,
> > +		       unsigned int flags, void *opaque_state)
> 
> Lacks kernel-doc explaining the arguments and the functionality.

I'll add one.

> > +
> > +	if (unlikely(!len))
> > +		return 0;
> 
> Why go into the kernel above when len == 0?

If the kernel's RNG isn't ready, then the behavior here can't easily be
emulated by userspace alone. For the len == 0 case, if flags == 0, then
the syscall will block until it's ready, for example, and that blocking
behavior is something we want to retain.

However, I recognize that the fact you had to ask this question is
simply indicative of the fact that this function is under-documented or
otherwise incomprehensible, per your other comments, so I'll try to make
this more clear for v8.
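To be concrete about what userspace observes (illustrative only; on a
long-booted machine the pool is already initialized, so nothing blocks
and the call returns immediately):

```c
#include <sys/random.h>
#include <sys/types.h>

/*
 * With flags == 0, getrandom() blocks until the kernel RNG is
 * initialized, even for a zero-length request, and then returns the
 * number of bytes written -- 0 here. The vDSO path has to preserve
 * exactly that semantic, which is why len == 0 still falls through
 * to the syscall when the RNG is not ready.
 */
static ssize_t zero_len_getrandom(void)
{
	char unused;

	return getrandom(&unused, 0, 0);
}
```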

> Sorry, this is word salad which raises a -ENOPARSE here.
> Sorry, I really fail to understand this sloppy wording coming from
> someone who educates everyone else about the importance of correctness.
> If you can't be bothered to express yourself in correct terms
> consistently then why do you expect that anyone else understands
> randomness or cryptography correctly?
> I really appreciate your efforts to make all of this more accessible to
> the average user, but please get your act together.

Your individual specific points as well as the overarching one are well
taken, and I'll overhaul all of the documentation for v8, in hopes of
making this function a lot more clear.

> > +	batch_len = min_t(size_t, sizeof(state->batch) - state->pos, len);
> 
> Where is the sanity check for state->pos <= sizeof(state->batch)?

That condition cannot happen. "Does the compiler or some other checker
prove that as part of the development cycle?" No, unfortunately. So what
would you like to do here? Per Linus' email on an unrelated topic [1],
"We don't test for things that can't happen." And there's no
WARN_ON/BUG_ON primitive that'd be wise to use here -- nobody wants to
emit a ud2 into vDSO code I assume. So what would you like? For me to
add that check and bail out of the function if it's wrong, even if that
should normally never happen? Or adhere to the [1] more strictly and do
nothing, as is the case now? I'll do what you want here.

And please don't tell me, "if you even have to ask what to do, you
shouldn't be anywhere near... bla bla bla", because I *am* cognisant of
defensive coding, yet at the same time I'm trying to respect kernel norms,
and balancing the two tends to be more subjective than, say, the results
of a verifier. So, anyway, let me know what behavior you want here --
the sanity check-->fallback path, or doing nothing, or something else.

[1] https://lore.kernel.org/all/CAHk-=wheoU5mkht1xk_4Tyw78oa-8iGvGE9nBdUmGqCykgo1xw@mail.gmail.com/

> 
> > +	if (batch_len) {
> > +		/* Zero out bytes as they're copied out, to preserve forward secrecy. */
> 
> Please name the function so it is self explaining and add an
> comprehensive comment to it so you can spare the comment here.

I'll take your memcpy_and_zero_*src* suggestion from earlier.

> > +		memcpy_and_zero(buffer, state->batch + state->pos, batch_len);
> > +		state->pos += batch_len;
> > +		buffer += batch_len;
> > +		len -= batch_len;
> 
> buffer and state->pos could be updated by that non-fail copy function to
> aid the compiler, but I might be wrong about the ability of compilers to
> optimize code like that.

memcpy_and_zero(_src) is inlined, and the codegen looks okay enough I
think.

> > +	/* Refill the batch and then overwrite the key, in order to preserve forward secrecy. */
> > +	BUILD_BUG_ON(sizeof(state->batch_key) % CHACHA_BLOCK_SIZE != 0);
> 
> Does this build bug really need to be glued in between the comment
> explaining the function call and the function call itself?

I thought of these two lines as sort of one thing together. But I'll
reverse the order of the comment and the BUILD_BUG_ON, per your
suggestion.

Thanks again for the extremely thorough review.

Jason


* Re: [PATCH v7 3/3] x86: vdso: Wire up getrandom() vDSO implementation
  2022-11-25 23:08   ` Thomas Gleixner
@ 2022-11-27 22:07     ` Jason A. Donenfeld
  2022-11-27 22:39       ` Samuel Neves
  0 siblings, 1 reply; 16+ messages in thread
From: Jason A. Donenfeld @ 2022-11-27 22:07 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: linux-kernel, patches, linux-crypto, linux-api, x86,
	Greg Kroah-Hartman, Adhemerval Zanella Netto,
	Carlos O'Donell, Florian Weimer, Arnd Bergmann,
	Christian Brauner

Hi Thomas,

On Sat, Nov 26, 2022 at 12:08:41AM +0100, Thomas Gleixner wrote:
> Jason!
> 
> On Thu, Nov 24 2022 at 17:55, Jason A. Donenfeld wrote:
> > +++ b/arch/x86/entry/vdso/vgetrandom-chacha.S
> > +/*
> > + * Very basic SSE2 implementation of ChaCha20. Produces a given positive number
> > + * of blocks of output with a nonce of 0, taking an input key and 8-byte
> > + * counter. Importantly does not spill to the stack. Its arguments are:
> 
> Basic or not.

Heh, FYI I didn't mean "basic" here as in "doesn't need a review", but
just that it's a straightforward technique and doesn't do any
complicated multiblock pyrotechnics (which frankly aren't really
needed).

> This needs a Reviewed-by from someone who understands SSE2
> and ChaCha20 before this can go anywhere near the x86 tree.

No problem. I'll see to it that somebody qualified gives this a review.

> > +#include <linux/kernel.h>
> 
> Why do you need kernel.h here?

Turns out I don't, thanks.

> > +static __always_inline ssize_t
> > +getrandom_syscall(void *buffer, size_t len, unsigned int flags)
> 
> static __always_inline ssize_t getrandom_syscall(void *buffer, size_t len, unsigned int flags)
> 
> please. We expanded to 100 quite some time ago.
> 
> Some kernel-doc compliant comment for this would be appreciated as well.

Will do.

> 
> > +{
> > +	long ret;
> > +
> > +	asm ("syscall" : "=a" (ret) :
> > +	     "0" (__NR_getrandom), "D" (buffer), "S" (len), "d" (flags) :
> > +	     "rcx", "r11", "memory");
> > +
> > +	return ret;
> > +}
> > +
> > +#define __vdso_rng_data (VVAR(_vdso_rng_data))
> > +
> > +static __always_inline const struct vdso_rng_data *__arch_get_vdso_rng_data(void)
> > +{
> > +	if (__vdso_data->clock_mode == VDSO_CLOCKMODE_TIMENS)
> > +		return (void *)&__vdso_rng_data +
> > +		       ((void *)&__timens_vdso_data - (void *)&__vdso_data);
> > +	return &__vdso_rng_data;
> 
> So either bite the bullet and write it:
> 
> 	if (__vdso_data->clock_mode == VDSO_CLOCKMODE_TIMENS)
> 		return (void *)&__vdso_rng_data + ((void *)&__timens_vdso_data - (void *)&__vdso_data);

Seems fine to me. I'll write it like that.

> > +/*
> > + * Generates a given positive number of blocks of ChaCha20 output with nonce=0,
> > + * and does not write to any stack or memory outside of the parameters passed
> > + * to it. This way, we don't need to worry about stack data leaking into forked
> > + * child processes.
> 
> Please use proper kernel-doc
> 
> > + */
> > +static __always_inline void __arch_chacha20_blocks_nostack(u8 *dst_bytes, const u32 *key, u32 *counter, size_t nblocks)
> > +{
> > +	extern void chacha20_blocks_nostack(u8 *dst_bytes, const u32 *key, u32 *counter, size_t nblocks);
> > +	return chacha20_blocks_nostack(dst_bytes, key, counter, nblocks);
> 
> The above aside, can you please explain the value of this __arch_()
> wrapper?
> 
> It's just voodoo for no value because it hands through the arguments
> 1:1. So where are you expecting that the __arch...() version of this is
> any different than invoking the architecture specific version of
> chacha20_blocks_nostack().

I'll just name the assembly function with __arch...(). The idea behind
the wrapper was just to keep all of the non-generic code called from the
generic code prefixed with __arch_, but there's no reason I need to name
it like that from C alone. Will fix for v8.

Thanks again,
Jason


* Re: [PATCH v7 3/3] x86: vdso: Wire up getrandom() vDSO implementation
  2022-11-27 22:07     ` Jason A. Donenfeld
@ 2022-11-27 22:39       ` Samuel Neves
  2022-11-28  0:19         ` Jason A. Donenfeld
  0 siblings, 1 reply; 16+ messages in thread
From: Samuel Neves @ 2022-11-27 22:39 UTC (permalink / raw)
  To: Jason A. Donenfeld
  Cc: Thomas Gleixner, linux-kernel, patches, linux-crypto, linux-api,
	x86, Greg Kroah-Hartman, Adhemerval Zanella Netto,
	Carlos O'Donell, Florian Weimer, Arnd Bergmann,
	Christian Brauner

On Sun, Nov 27, 2022 at 10:13 PM Jason A. Donenfeld <Jason@zx2c4.com> wrote:
>
> Hi Thomas,
>
> On Sat, Nov 26, 2022 at 12:08:41AM +0100, Thomas Gleixner wrote:
> > Jason!
> >
> > On Thu, Nov 24 2022 at 17:55, Jason A. Donenfeld wrote:
> > > +++ b/arch/x86/entry/vdso/vgetrandom-chacha.S
> > > +/*
> > > + * Very basic SSE2 implementation of ChaCha20. Produces a given positive number
> > > + * of blocks of output with a nonce of 0, taking an input key and 8-byte
> > > + * counter. Importantly does not spill to the stack. Its arguments are:
> >
> > Basic or not.
>
> Heh, FYI I didn't mean "basic" here as in "doesn't need a review", but
> just that it's a straightforward technique and doesn't do any
> complicated multiblock pyrotechnics (which frankly aren't really
> needed).
>
> > This needs a Reviewed-by from someone who understands SSE2
> > and ChaCha20 before this can go anywhere near the x86 tree.
>
> No problem. I'll see to it that somebody qualified gives this a review.
>

I did look at this earlier. It looks fine. I would recommend changing

+ /* copy1,copy2 = key */
+ movdqu 0x00(key),copy1
+ movdqu 0x10(key),copy2

to

+ /* copy1,copy2 = key */
+ movups 0x00(key),copy1
+ movups 0x10(key),copy2

which has the same semantics, but saves a couple of code bytes. Likewise for

+ movdqu state0,0x00(output)
+ movdqu state1,0x10(output)
+ movdqu state2,0x20(output)
+ movdqu state3,0x30(output)

Otherwise,

Reviewed-by: Samuel Neves <sneves@dei.uc.pt> # for vgetrandom-chacha.S


* Re: [PATCH v7 3/3] x86: vdso: Wire up getrandom() vDSO implementation
  2022-11-27 22:39       ` Samuel Neves
@ 2022-11-28  0:19         ` Jason A. Donenfeld
  0 siblings, 0 replies; 16+ messages in thread
From: Jason A. Donenfeld @ 2022-11-28  0:19 UTC (permalink / raw)
  To: Samuel Neves
  Cc: Thomas Gleixner, linux-kernel, patches, linux-crypto, linux-api,
	x86, Greg Kroah-Hartman, Adhemerval Zanella Netto,
	Carlos O'Donell, Florian Weimer, Arnd Bergmann,
	Christian Brauner

On Sun, Nov 27, 2022 at 10:39:27PM +0000, Samuel Neves wrote:
> On Sun, Nov 27, 2022 at 10:13 PM Jason A. Donenfeld <Jason@zx2c4.com> wrote:
> >
> > Hi Thomas,
> >
> > On Sat, Nov 26, 2022 at 12:08:41AM +0100, Thomas Gleixner wrote:
> > > Jason!
> > >
> > > On Thu, Nov 24 2022 at 17:55, Jason A. Donenfeld wrote:
> > > > +++ b/arch/x86/entry/vdso/vgetrandom-chacha.S
> > > > +/*
> > > > + * Very basic SSE2 implementation of ChaCha20. Produces a given positive number
> > > > + * of blocks of output with a nonce of 0, taking an input key and 8-byte
> > > > + * counter. Importantly does not spill to the stack. Its arguments are:
> > >
> > > Basic or not.
> >
> > Heh, FYI I didn't mean "basic" here as in "doesn't need a review", but
> > just that it's a straightforward technique and doesn't do any
> > complicated multiblock pyrotechnics (which frankly aren't really
> > needed).
> >
> > > This needs a Reviewed-by from someone who understands SSE2
> > > and ChaCha20 before this can go anywhere near the x86 tree.
> >
> > No problem. I'll see to it that somebody qualified gives this a review.
> >
> 
> I did look at this earlier. It looks fine. I would recommend changing
> 
> + /* copy1,copy2 = key */
> + movdqu 0x00(key),copy1
> + movdqu 0x10(key),copy2
> 
> to
> 
> + /* copy1,copy2 = key */
> + movups 0x00(key),copy1
> + movups 0x10(key),copy2
> 
> which has the same semantics, but saves a couple of code bytes. Likewise for
> 
> + movdqu state0,0x00(output)
> + movdqu state1,0x10(output)
> + movdqu state2,0x20(output)
> + movdqu state3,0x30(output)
> 
> Otherwise,
> 
> Reviewed-by: Samuel Neves <sneves@dei.uc.pt> # for vgetrandom-chacha.S

Thanks for the review and for the suggestion. Will do.

Jason


* Re: [PATCH v7 1/3] random: add vgetrandom_alloc() syscall
  2022-11-27 20:18     ` Jason A. Donenfeld
@ 2022-11-28  9:12       ` Thomas Gleixner
  2022-11-28 13:54       ` Arnd Bergmann
  1 sibling, 0 replies; 16+ messages in thread
From: Thomas Gleixner @ 2022-11-28  9:12 UTC (permalink / raw)
  To: Jason A. Donenfeld
  Cc: linux-kernel, patches, linux-crypto, linux-api, x86,
	Greg Kroah-Hartman, Adhemerval Zanella Netto,
	Carlos O'Donell, Florian Weimer, Arnd Bergmann,
	Christian Brauner

On Sun, Nov 27 2022 at 21:18, Jason A. Donenfeld wrote:
> On Fri, Nov 25, 2022 at 09:45:31PM +0100, Thomas Gleixner wrote:
>> > --- a/arch/x86/include/asm/unistd.h
>> > +++ b/arch/x86/include/asm/unistd.h
>> > @@ -27,6 +27,7 @@
>> >  #  define __ARCH_WANT_COMPAT_SYS_PWRITEV64
>> >  #  define __ARCH_WANT_COMPAT_SYS_PREADV64V2
>> >  #  define __ARCH_WANT_COMPAT_SYS_PWRITEV64V2
>> > +#  define __ARCH_WANT_VGETRANDOM_ALLOC
>> 
>> So instead of this define, why can't you do:
>> 
>> config VGETRANDOM_ALLOC
>>        bool
>>        select ADVISE_SYSCALLS
>> 
>> and then have
>> 
>> config GENERIC_VDSO_RANDOM_WHATEVER
>>        bool
>>        select VGETRANDOM_ALLOC
>> 
>> This gives a clear Kconfig dependency instead of the random
>> ADVISE_SYSCALLS select.
>
> That's much better indeed. I was trying to straddle the two conventions
> of `#define __ARCH_...` for syscalls and a Kconfig for vDSO functions,
> but doing it all together as you've suggested is nicer.
>
> I'll try to figure this out, though so far futzing around suggests there
> might have to be both, because of unistd.h being a userspace header.
> That is, include/uapi/asm-generic/unistd.h typically needs a `#if
> __ARCH_WANT..., #define ...` in it. I'll give it a spin and you'll see

Bah. Did not think about that user space part...

Thanks,

        tglx


* Re: [PATCH v7 2/3] random: introduce generic vDSO getrandom() implementation
  2022-11-27 21:52     ` Jason A. Donenfeld
@ 2022-11-28  9:25       ` Thomas Gleixner
  0 siblings, 0 replies; 16+ messages in thread
From: Thomas Gleixner @ 2022-11-28  9:25 UTC (permalink / raw)
  To: Jason A. Donenfeld
  Cc: linux-kernel, patches, linux-crypto, linux-api, x86,
	Greg Kroah-Hartman, Adhemerval Zanella Netto,
	Carlos O'Donell, Florian Weimer, Arnd Bergmann,
	Christian Brauner

Jason!

On Sun, Nov 27 2022 at 22:52, Jason A. Donenfeld wrote:
> On Fri, Nov 25, 2022 at 11:39:15PM +0100, Thomas Gleixner wrote:
>> If you really need anything from fs.h then please isolate it out into a
>> separate header file which is included by fs.h and here.
>
> Hm. I need MAX_RW_COUNT from linux/fs.h. I could just hardcode `(INT_MAX
> & PAGE_MASK)`, though, if you'd prefer, and leave a comment. I'll do
> that. Or I could move MAX_RW_COUNT into linux/kernel.h? But maybe that's
> undesirable.
>
> So:
>
>     ssize_t ret = min_t(size_t, INT_MAX & PAGE_MASK /* = MAX_RW_COUNT */, len);
>
> I'll do that, if it's okay with you. Or tell me if you want me to
> instead move MAX_RW_COUNT into linux/kernel.h.
>
> Also, if I remove linux/fs.h, I need to include linux/time.h in its
> place, because vdso/datapage.h implicitly depends on it. Alternatively,
> I could add linux/time.h to vdso/datapage.h, but I don't want to touch
> too many files uninvited.

Actually the minimal includes are those:

--- a/arch/x86/entry/vdso/vclock_gettime.c
+++ b/arch/x86/entry/vdso/vclock_gettime.c
@@ -8,8 +8,9 @@
  * 32 Bit compat layer by Stefani Seibold <stefani@seibold.net>
  *  sponsored by Rohde & Schwarz GmbH & Co. KG Munich/Germany
  */
-#include <linux/time.h>
+#include <linux/cache.h>
 #include <linux/kernel.h>
+#include <linux/time64.h>
 #include <linux/types.h>
 
 #include "../../../../lib/vdso/gettimeofday.c"

>> > +	batch_len = min_t(size_t, sizeof(state->batch) - state->pos, len);
>> 
>> Where is the sanity check for state->pos <= sizeof(state->batch)?
>
> That condition cannot happen. "Does the compiler or some other checker
> prove that as part of the development cycle?" No, unfortunately. So what
> would you like to do here? Per Linus' email on an unrelated topic [1],
> "We don't test for things that can't happen." And there's no
> WARN_ON/BUG_ON primitive that'd be wise to use here -- nobody wants to
> emit a ud2 into vDSO code I assume. So what would you like? For me to
> add that check and bail out of the function if it's wrong, even if that
> should normally never happen? Or adhere to the [1] more strictly and do
> nothing, as is the case now? I'll do what you want here.

I think we can do without any further checks. If the callsite fiddles
with state then the resulting memcpy will go into lala land and the
process can keep the pieces.

Thanks,

        tglx


* Re: [PATCH v7 1/3] random: add vgetrandom_alloc() syscall
  2022-11-27 20:18     ` Jason A. Donenfeld
  2022-11-28  9:12       ` Thomas Gleixner
@ 2022-11-28 13:54       ` Arnd Bergmann
  2022-11-28 17:17         ` Jason A. Donenfeld
  1 sibling, 1 reply; 16+ messages in thread
From: Arnd Bergmann @ 2022-11-28 13:54 UTC (permalink / raw)
  To: Jason A . Donenfeld, Thomas Gleixner
  Cc: linux-kernel, patches, linux-crypto, linux-api, x86,
	Greg Kroah-Hartman, Adhemerval Zanella Netto,
	Carlos O'Donell, Florian Weimer, Christian Brauner

On Sun, Nov 27, 2022, at 21:18, Jason A. Donenfeld wrote:
>> 
>> config GENERIC_VDSO_RANDOM_WHATEVER
>>        bool
>>        select VGETRANDOM_ALLOC
>> 
>> This gives a clear Kconfig dependency instead of the random
>> ADVISE_SYSCALLS select.
>
> That's much better indeed. I was trying to straddle the two conventions
> of `#define __ARCH_...` for syscalls and a Kconfig for vDSO functions,
> but doing it all together as you've suggested is nicer.
>
> I'll try to figure this out, though so far futzing around suggests there
> might have to be both, because of unistd.h being a userspace header.
> That is, include/uapi/asm-generic/unistd.h typically needs a `#if
> __ARCH_WANT..., #define ...` in it. I'll give it a spin and you'll see
> for v8. At the very least it should get rid of the more awkward
> `select ADVISE_SYSCALLS if X86_64` part, and will better separate the
> arch code from non-arch code.

I think you should not need an __ARCH_WANT_SYS_* symbol for this,
the only place we actually need them for is the asm-generic/unistd.h
header which is still used on a couple of architectures (I have
an experimental series for replacing it with a generic syscall.tbl
file, but it's not ready for 6.2). In most cases, the __ARCH_WANT_SYS_*
symbols are only used for syscalls that are part of the table for
old architectures but get skipped on newer targets that always had
a replacement syscall (e.g. getrlimit getting replaced by prlimit64).

I think we should just reserve the syscall number for all architectures
right away and #define the __NR_* macro. libc will generally need
a runtime check anyway, and defining it now avoids the problem of
the tables getting out of sync.

The Kconfig symbol is fine in this case.

      Arnd


* Re: [PATCH v7 1/3] random: add vgetrandom_alloc() syscall
  2022-11-28 13:54       ` Arnd Bergmann
@ 2022-11-28 17:17         ` Jason A. Donenfeld
  0 siblings, 0 replies; 16+ messages in thread
From: Jason A. Donenfeld @ 2022-11-28 17:17 UTC (permalink / raw)
  To: Arnd Bergmann
  Cc: Thomas Gleixner, linux-kernel, patches, linux-crypto, linux-api,
	x86, Greg Kroah-Hartman, Adhemerval Zanella Netto,
	Carlos O'Donell, Florian Weimer, Christian Brauner

Hi Arnd,

On Mon, Nov 28, 2022 at 02:54:39PM +0100, Arnd Bergmann wrote:
> On Sun, Nov 27, 2022, at 21:18, Jason A. Donenfeld wrote:
> >> 
> >> config GENERIC_VDSO_RANDOM_WHATEVER
> >>        bool
> >>        select VGETRANDOM_ALLOC
> >> 
> >> This gives a clear Kconfig dependency instead of the random
> >> ADVISE_SYSCALLS select.
> >
> > That's much better indeed. I was trying to straddle the two conventions
> > of `#define __ARCH_...` for syscalls and a Kconfig for vDSO functions,
> > but doing it all together as you've suggested is nicer.
> >
> > I'll try to figure this out, though so far futzing around suggests there
> > might have to be both, because of unistd.h being a userspace header.
> > That is, include/uapi/asm-generic/unistd.h typically needs a `#if
> > __ARCH_WANT..., #define ...` in it. I'll give it a spin and you'll see
> > for v8. At the very least it should get rid of the more awkward
> > `select ADVISE_SYSCALLS if X86_64` part, and will better separate the
> > arch code from non-arch code.
> 
> I think you should not need an __ARCH_WANT_SYS_* symbol for this,
> the only place we actually need them for is the asm-generic/unistd.h
> header which is still used on a couple of architectures (I have
> an experimental series for replacing it with a generic syscall.tbl
> file, but it's not ready for 6.2). In most cases, the __ARCH_WANT_SYS_*
> symbols are only used for syscalls that are part of the table for
> old architectures but get skipped on newer targets that always had
> a replacement syscall (e.g. getrlimit getting replaced by prlimit64).
> 
> I think we should just reserve the syscall number for all architectures
> right away and #define the __NR_* macro. libc will generally need
> a runtime check anyway, and defining it now avoids the problem of
> the tables getting out of sync.
> 
> The Kconfig symbol is fine in this case.

Oh, great, okay. I'll get rid of the __ARCH stuff entirely then. I
jumped the gun and posted v8 earlier today, but I'll include this in a
v9, whenever it makes sense to send that. So when reading v8, just
assume all the __ARCH_WANT_SYS_* business has been removed.

Jason


end of thread, other threads:[~2022-11-28 17:17 UTC | newest]

Thread overview: 16+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-11-24 16:55 [PATCH v7 0/3] implement getrandom() in vDSO Jason A. Donenfeld
2022-11-24 16:55 ` [PATCH v7 1/3] random: add vgetrandom_alloc() syscall Jason A. Donenfeld
2022-11-25 20:45   ` Thomas Gleixner
2022-11-27 20:18     ` Jason A. Donenfeld
2022-11-28  9:12       ` Thomas Gleixner
2022-11-28 13:54       ` Arnd Bergmann
2022-11-28 17:17         ` Jason A. Donenfeld
2022-11-24 16:55 ` [PATCH v7 2/3] random: introduce generic vDSO getrandom() implementation Jason A. Donenfeld
2022-11-25 22:39   ` Thomas Gleixner
2022-11-27 21:52     ` Jason A. Donenfeld
2022-11-28  9:25       ` Thomas Gleixner
2022-11-24 16:55 ` [PATCH v7 3/3] x86: vdso: Wire up getrandom() vDSO implementation Jason A. Donenfeld
2022-11-25 23:08   ` Thomas Gleixner
2022-11-27 22:07     ` Jason A. Donenfeld
2022-11-27 22:39       ` Samuel Neves
2022-11-28  0:19         ` Jason A. Donenfeld
