* [PATCH v10 0/4] implement getrandom() in vDSO
@ 2022-11-29 21:06 Jason A. Donenfeld
  2022-11-29 21:06 ` [PATCH v10 1/4] random: add vgetrandom_alloc() syscall Jason A. Donenfeld
                   ` (3 more replies)
  0 siblings, 4 replies; 37+ messages in thread
From: Jason A. Donenfeld @ 2022-11-29 21:06 UTC (permalink / raw)
  To: linux-kernel, patches, tglx
  Cc: Jason A. Donenfeld, linux-crypto, linux-api, x86,
	Greg Kroah-Hartman, Adhemerval Zanella Netto,
	Carlos O'Donell, Florian Weimer, Arnd Bergmann,
	Christian Brauner

Changes v9->v10:
---------------
- Actually include all 4 patches.

Changes v8->v9:
--------------
- Allocate system call number on all architectures, and split that off
  into a separate commit (2/4).
- Declare missing prototype in syscalls.h.

Changes v7->v8:
--------------
- Move lib/vdso/getrandom.h to include/vdso/getrandom.h in order to
  avoid #include "../../../../../../../../../../../......".
- Make use of two Kconfig symbols, VDSO_GETRANDOM and
  VGETRANDOM_ALLOC_SYSCALL, to handle selecting dependencies and
  conditionalizing code.
- Rename chacha20_blocks_nostack assembly function to
  __arch_chacha20_blocks_nostack, which allows removing the awkward C
  inline wrapper function.
- Save a byte per instruction by using movups instead of movdqu, and
  don't bother clearing registers that hold public constants.
- Add basic signal handling reentrancy protection to vDSO function.
- Invalidate RNG generation counter if key-refresh syscall fails.
- Reduce the defines in getrandom.c, which in turn requires using
  `INT_MAX & PAGE_MASK` explicitly rather than the `MAX_RW_COUNT` macro.
- Make use of 100 columns when it makes sense, and reformat various bits
  of code for clarity.
- Thoroughly document functions and add kernel-doc comments to several
  key functions.
- Hoist out repeated `sizeof(struct vgetrandom_state)` into variable.
- Rename `memcpy_and_zero` to `memcpy_and_zero_src`, and define helper
  macro outside of function.
- Separate all x86 work, including syscall wiring, into the x86 commit,
  so that the first two commits of this series are purely generic.

Changes v6->v7:
--------------
- VERY EXCITING! There is now a rudimentary glibc implementation for
  this from one of the glibc maintainers, Adhemerval Zanella (CC'd). A
  commit that works with this latest v7 revision is here:

  https://github.com/bminor/glibc/commit/247ec6dd77ec2a047163fe3a1b60e57880464b39

- Pass an `unsigned int *` instead of an `unsigned long *` for the
  syscall, to avoid having to add a compat syscall.
- Use ordinary function framing in assembly, rather than kernel-specific
  framing.
- Don't hardcode the number '2', but derive it at compile time with the
  expression `sizeof(state->batch_key) / CHACHA_BLOCK_SIZE`, as well as
  adding a BUILD_BUG_ON() in case that doesn't divide cleanly.

Changes v5->v6:
--------------
- Fix various build errors for odd configurations.
- Do not leak any secrets onto the stack at all, to account for the possibility of
  fork()ing in a multithreaded scenario, which would ruin forward secrecy.
  Instead provide an arch-specific implementation that doesn't need stack
  space.
- Prevent page alignment from overflowing variable, and clamp to acceptable
  limits.
- Read/write unaligned bytes using get/put_unaligned.
- Add extensive comments to vDSO function explaining subtle aspects.
- Account for fork() races when writing generation counter.

Changes v4->v5:
--------------
- Add example code to vDSO addition commit showing intended use and
  interaction with allocations.
- Reset buffer to beginning when retrying.
- Rely on generation counter never being zero for fork detection, rather than
  adding extra boolean.
- Make use of __ARCH_WANT_VGETRANDOM_ALLOC macro around new syscall so that
  it's conditionalized on archs that actually choose to add this and don't
  forget to bump __NR_syscalls.
- Separate __cvdso_getrandom() into __cvdso_getrandom() and
  __cvdso_getrandom_data() so that powerpc can make a more efficient call.

Changes v3->v4:
--------------
- Split up into small series rather than one big patch.
- Use proper ordering in generation counter reads.
- Make properly generic, not just a hairball with x86, by moving symbols into
  correct files.

Changes v2->v3:
--------------

Big changes:

Thomas' previous objection was two-fold: 1) vgetrandom
should really have the same function signature as getrandom, in
addition to all of the same behavior, and 2) having vgetrandom_alloc
be a vDSO function doesn't make sense, because it doesn't actually
need anything from the VDSO data page and it doesn't correspond to an
existing syscall.

After a discussion at Plumbers this last week, we devised the following
ways to fix these: 1) we make the opaque state argument be the last
argument of vgetrandom, rather than the first one, since the real
syscall ignores the additional argument, and that way all the registers
are the same, and no behavior changes; and 2) we make vgetrandom_alloc a
syscall, rather than a vDSO function, which also gives it added
flexibility for the future, which is good.

Making those changes also reduced the size of this patch a bit.

Smaller changes:
- Properly add buffer offset position.
- Don't EXPORT_SYMBOL for vDSO code.
- Account for timens and vvar being in swapped pages.

--------------

Two statements:

  1) Userspace wants faster cryptographically secure random numbers of
     arbitrary size, big or small.

  2) Userspace is currently unable to safely roll its own RNG with the
     same security profile as getrandom().

Statement (1) has been debated for years, with arguments ranging from
"we need faster cryptographically secure card shuffling!" to "the only
things that actually need good randomness are keys, which are few and
far between" to "actually, TLS CBC nonces are frequent" and so on. I
don't intend to wade into that debate substantially, except to note that
recently glibc added arc4random(), whose goal is to return a
cryptographically secure uint32_t, and there are real user reports of it
being too slow. So here we are.

Statement (2) is more interesting. The kernel is the nexus of all
entropic inputs that influence the RNG. It is in the best position, and
probably the only position, to decide anything at all about the current
state of the RNG and of its entropy. One of the things it uniquely knows
about is when reseeding is necessary.

For example, when a virtual machine is forked, restored, or duplicated,
it's imperative that the RNG doesn't generate the same outputs. For this
reason, there's a small protocol between hypervisors and the kernel that
indicates this has happened, alongside some ID, which the RNG uses to
immediately reseed, so as not to return the same numbers. Were userspace
to expand a getrandom() seed from time T1 for the next hour, and at some
point T2 within that hour the virtual machine forked, userspace would continue to
provide the same numbers to two (or more) different virtual machines,
resulting in potential cryptographic catastrophe. Something similar
happens on resuming from hibernation (or even suspend), with various
compromise scenarios there in mind.

There's a more general reason why userspace rolling its own RNG from a
getrandom() seed is fraught. There's a lot of attention paid to this
particular Linuxism we have of the RNG being initialized and thus
non-blocking or uninitialized and thus blocking until it is initialized.
These are our Two Big States that many hold to be the holy
differentiating factor between safe and not safe, between
cryptographically secure and garbage. The fact is, however, that the
distinction between these two states is a hand-wavy wishy-washy inexact
approximation. Outside of a few exceptional cases (e.g. a HW RNG is
available), we actually don't really ever know with any rigor at all
when the RNG is safe and ready (nor when it's compromised). We do the
best we can to "estimate" it, but entropy estimation is fundamentally
impossible in the general case. So really, we're just doing guesswork,
and hoping it's good and conservative enough. Let's then assume that
there's always some potential error involved in this differentiator.

In fact, under the surface, the RNG is engineered around a different
principle, and that is trying to *use* new entropic inputs regularly and
at the right specific moments in time. For example, close to boot time,
the RNG reseeds itself more often than later. At certain events, like VM
fork, the RNG reseeds itself immediately. The various heuristics for
when the RNG will use new entropy, and how often, are really a core aspect
of what the RNG has some potential to do decently enough (and something
that will probably continue to improve in the future from random.c's
present set of algorithms). So in your mind, put away the mental
attachment to the Two Big States, which represent an approximation with
a potential margin of error. Instead keep in mind that the RNG's primary
operating heuristic is how often and exactly when it's going to reseed.

So, if userspace takes a seed from getrandom() at point T1, and uses it
for the next hour (or N megabytes or some other meaningless metric),
during that time, potential errors in the Two Big States approximation
are amplified. During that time potential reseeds are being lost,
forgotten, not reflected in the output stream. That's not good.

The simplest statement you could make is that userspace RNGs that expand
a getrandom() seed at some point T1 are nearly always *worse*, in some
way, than just calling getrandom() every time a random number is
desired.

For those reasons, after some discussion on libc-alpha, glibc's
arc4random() now just calls getrandom() on each invocation. That's
trivially safe, and gives us latitude to then make the safe thing faster
without becoming unsafe at our leisure. Card shuffling isn't
particularly fast, however.

How do we rectify this? By putting a safe implementation of getrandom()
in the vDSO, which has access to whatever information a
particular iteration of random.c is using to make its decisions. I use
that careful language of "particular iteration of random.c", because the
set of things that a vDSO getrandom() implementation might need for making
decisions as good as the kernel's will likely change over time. This
isn't just a matter of exporting certain *data* to userspace. We're not
going to commit to a "data API" where the various heuristics used are
exposed, locking in how the kernel works for decades to come, and then
leave it to various userspaces to roll something on top and shoot
themselves in the foot and have all sorts of complexity disasters.
Rather, vDSO getrandom() is supposed to be the *same exact algorithm*
that runs in the kernel, except it's been hoisted into userspace as
much as possible. And so vDSO getrandom() and kernel getrandom() will
always mirror each other hermetically.

API-wise, the vDSO gains this function:

  ssize_t vgetrandom(void *buffer, size_t len, unsigned int flags, void *opaque_state);

The return value and the first 3 arguments are the same as ordinary
getrandom(), while the last argument is a pointer to some state
allocated with vgetrandom_alloc(), explained below. Were all four
arguments passed to the getrandom syscall, nothing different would
happen, and the functions would have the exact same behavior.

Then, we introduce a new syscall:

  void *vgetrandom_alloc([inout] unsigned int *num, [out] unsigned int *size_per_each, unsigned int flags);

This takes the desired number of opaque states in `num`, and returns a
pointer to an array of opaque states, the number actually allocated back
in `num`, and the size in bytes of each one in `size_per_each`, enabling
a libc to slice up the returned array into a state per each thread. (The
`flags` argument is always zero for now.) We very intentionally do *not*
leave state allocation up to the caller of vgetrandom, but provide
vgetrandom_alloc for that allocation. There are too many weird things
that can go wrong, and it's important that vDSO does not provide too
generic of a mechanism. It's not going to store its state in just any
old memory address. It'll do it only in ones it allocates.

Right now this means it's a mlock'd page with WIPEONFORK set. In the
future maybe there will be other interesting page flags or
anti-heartbleed measures, or other platform-specific kernel-specific
things that can be set from the syscall. Again, it's important that the
kernel has a say in how this works rather than agreeing to operate on
any old address; memory isn't neutral.

The syscall currently accomplishes this with a call to vm_mmap() and
then a call to do_madvise(). It'd be nice to do this all at once, but
I'm not sure that a helper function exists for that now, and it seems a
bit premature to add one, at least for now.

The interesting meat of the implementation is in lib/vdso/getrandom.c,
as generic C code, and it aims to mainly follow random.c's buffered fast
key erasure logic. Before the RNG is initialized, it falls back to the
syscall. Right now it uses a simple generation counter to make its decisions
on reseeding (though this could be made more extensive over time).

The actual place that has the most work to do is in all of the other
files. Most of the vDSO shared page infrastructure is centered around
gettimeofday, and so the main structs are all in arrays for different
timestamp types, and attached to time namespaces, and so forth. I've
done the best I could to add onto this in an unintrusive way.

In my test results, performance is pretty stellar (around 15x for uint32_t
generation), and it seems to be working. There's an extended example in the
second commit of this series, showing how the syscall and the vDSO function
are meant to be used together.

Cc: linux-crypto@vger.kernel.org
Cc: linux-api@vger.kernel.org
Cc: x86@kernel.org
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Adhemerval Zanella Netto <adhemerval.zanella@linaro.org>
Cc: Carlos O'Donell <carlos@redhat.com>
Cc: Florian Weimer <fweimer@redhat.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Christian Brauner <brauner@kernel.org>

Jason A. Donenfeld (4):
  random: add vgetrandom_alloc() syscall
  arch: allocate vgetrandom_alloc() syscall number
  random: introduce generic vDSO getrandom() implementation
  x86: vdso: Wire up getrandom() vDSO implementation

 MAINTAINERS                                   |   2 +
 arch/alpha/kernel/syscalls/syscall.tbl        |   1 +
 arch/arm/tools/syscall.tbl                    |   1 +
 arch/arm64/include/asm/unistd.h               |   2 +-
 arch/arm64/include/asm/unistd32.h             |   2 +
 arch/ia64/kernel/syscalls/syscall.tbl         |   1 +
 arch/m68k/kernel/syscalls/syscall.tbl         |   1 +
 arch/microblaze/kernel/syscalls/syscall.tbl   |   1 +
 arch/mips/kernel/syscalls/syscall_n32.tbl     |   1 +
 arch/mips/kernel/syscalls/syscall_n64.tbl     |   1 +
 arch/mips/kernel/syscalls/syscall_o32.tbl     |   1 +
 arch/parisc/kernel/syscalls/syscall.tbl       |   1 +
 arch/powerpc/kernel/syscalls/syscall.tbl      |   1 +
 arch/s390/kernel/syscalls/syscall.tbl         |   1 +
 arch/sh/kernel/syscalls/syscall.tbl           |   1 +
 arch/sparc/kernel/syscalls/syscall.tbl        |   1 +
 arch/x86/Kconfig                              |   1 +
 arch/x86/entry/syscalls/syscall_32.tbl        |   1 +
 arch/x86/entry/syscalls/syscall_64.tbl        |   1 +
 arch/x86/entry/vdso/Makefile                  |   3 +-
 arch/x86/entry/vdso/vdso.lds.S                |   2 +
 arch/x86/entry/vdso/vgetrandom-chacha.S       | 177 +++++++++++++++
 arch/x86/entry/vdso/vgetrandom.c              |  17 ++
 arch/x86/include/asm/vdso/getrandom.h         |  55 +++++
 arch/x86/include/asm/vdso/vsyscall.h          |   2 +
 arch/x86/include/asm/vvar.h                   |  16 ++
 arch/xtensa/kernel/syscalls/syscall.tbl       |   1 +
 drivers/char/random.c                         |  84 ++++++++
 include/linux/syscalls.h                      |   3 +
 include/uapi/asm-generic/unistd.h             |   5 +-
 include/vdso/datapage.h                       |  11 +
 include/vdso/getrandom.h                      |  24 +++
 kernel/sys_ni.c                               |   3 +
 lib/vdso/Kconfig                              |  14 +-
 lib/vdso/getrandom.c                          | 204 ++++++++++++++++++
 tools/include/uapi/asm-generic/unistd.h       |   5 +-
 .../arch/mips/entry/syscalls/syscall_n64.tbl  |   1 +
 .../arch/powerpc/entry/syscalls/syscall.tbl   |   1 +
 .../perf/arch/s390/entry/syscalls/syscall.tbl |   1 +
 .../arch/x86/entry/syscalls/syscall_64.tbl    |   1 +
 40 files changed, 647 insertions(+), 5 deletions(-)
 create mode 100644 arch/x86/entry/vdso/vgetrandom-chacha.S
 create mode 100644 arch/x86/entry/vdso/vgetrandom.c
 create mode 100644 arch/x86/include/asm/vdso/getrandom.h
 create mode 100644 include/vdso/getrandom.h
 create mode 100644 lib/vdso/getrandom.c

-- 
2.38.1



* [PATCH v10 1/4] random: add vgetrandom_alloc() syscall
  2022-11-29 21:06 [PATCH v10 0/4] implement getrandom() in vDSO Jason A. Donenfeld
@ 2022-11-29 21:06 ` Jason A. Donenfeld
  2022-11-29 22:02   ` Thomas Gleixner
  2022-11-30 10:51   ` Florian Weimer
  2022-11-29 21:06 ` [PATCH v10 2/4] arch: allocate vgetrandom_alloc() syscall number Jason A. Donenfeld
                   ` (2 subsequent siblings)
  3 siblings, 2 replies; 37+ messages in thread
From: Jason A. Donenfeld @ 2022-11-29 21:06 UTC (permalink / raw)
  To: linux-kernel, patches, tglx
  Cc: Jason A. Donenfeld, linux-crypto, linux-api, x86,
	Greg Kroah-Hartman, Adhemerval Zanella Netto,
	Carlos O'Donell, Florian Weimer, Arnd Bergmann,
	Christian Brauner

The vDSO getrandom() works over an opaque per-thread state of an
unexported size, which must be marked as MADV_WIPEONFORK and be
mlock()'d for proper operation. Over time, the nuances of these
allocations may change or grow or even differ based on architectural
features.

The syscall has the signature:

  void *vgetrandom_alloc([inout] unsigned int *num,
                         [out] unsigned int *size_per_each,
                         unsigned int flags);

This takes the desired number of opaque states in `num`, and returns a
pointer to an array of opaque states, the number actually allocated back
in `num`, and the size in bytes of each one in `size_per_each`, enabling
a libc to slice up the returned array into a state per each thread. (The
`flags` argument is always zero for now.) Libc is expected to allocate a
chunk of these on first use, and then dole them out to threads as
they're created, allocating more when needed. The following commit shows
an example of this, being used in conjunction with the getrandom() vDSO
function.

We very intentionally do *not* leave state allocation for vDSO
getrandom() up to userspace itself, but rather provide this new syscall
for such allocations. vDSO getrandom() must not store its state in just
any old memory address, but rather just ones that the kernel specially
allocates for it, leaving the particularities of those allocations up to
the kernel.

Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
---
 MAINTAINERS              |  1 +
 drivers/char/random.c    | 75 ++++++++++++++++++++++++++++++++++++++++
 include/linux/syscalls.h |  3 ++
 include/vdso/getrandom.h | 24 +++++++++++++
 kernel/sys_ni.c          |  3 ++
 lib/vdso/Kconfig         |  7 ++++
 6 files changed, 113 insertions(+)
 create mode 100644 include/vdso/getrandom.h

diff --git a/MAINTAINERS b/MAINTAINERS
index 256f03904987..3894f947a507 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -17287,6 +17287,7 @@ T:	git https://git.kernel.org/pub/scm/linux/kernel/git/crng/random.git
 S:	Maintained
 F:	drivers/char/random.c
 F:	drivers/virt/vmgenid.c
+F:	include/vdso/getrandom.h
 
 RAPIDIO SUBSYSTEM
 M:	Matt Porter <mporter@kernel.crashing.org>
diff --git a/drivers/char/random.c b/drivers/char/random.c
index 67558b95d531..b81d67f3ebab 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -8,6 +8,7 @@
  * into roughly six sections, each with a section header:
  *
  *   - Initialization and readiness waiting.
+ *   - vDSO support helpers.
  *   - Fast key erasure RNG, the "crng".
  *   - Entropy accumulation and extraction routines.
  *   - Entropy collection routines.
@@ -39,6 +40,7 @@
 #include <linux/blkdev.h>
 #include <linux/interrupt.h>
 #include <linux/mm.h>
+#include <linux/mman.h>
 #include <linux/nodemask.h>
 #include <linux/spinlock.h>
 #include <linux/kthread.h>
@@ -55,6 +57,9 @@
 #include <linux/siphash.h>
 #include <crypto/chacha.h>
 #include <crypto/blake2s.h>
+#ifdef CONFIG_VGETRANDOM_ALLOC_SYSCALL
+#include <vdso/getrandom.h>
+#endif
 #include <asm/processor.h>
 #include <asm/irq.h>
 #include <asm/irq_regs.h>
@@ -167,6 +172,76 @@ int __cold execute_with_initialized_rng(struct notifier_block *nb)
 				__func__, (void *)_RET_IP_, crng_init)
 
 
+
+/********************************************************************
+ *
+ * vDSO support helpers.
+ *
+ * The actual vDSO function is defined over in lib/vdso/getrandom.c,
+ * but this section contains the kernel-mode helpers to support that.
+ *
+ ********************************************************************/
+
+#ifdef CONFIG_VGETRANDOM_ALLOC_SYSCALL
+/**
+ * vgetrandom_alloc - allocate opaque states for use with vDSO getrandom().
+ *
+ * @num: on input, a pointer to a suggested hint of how many states to
+ * allocate, and on output the number of states actually allocated.
+ *
+ * @size_per_each: the size of each state allocated, so that the caller can
+ * split up the returned allocation into individual states.
+ *
+ * @flags: currently always zero.
+ *
+ * The getrandom() vDSO function in userspace requires an opaque state, which
+ * this function allocates by mapping a certain number of special pages into
+ * the calling process. It takes a hint as to the number of opaque states
+ * desired, and provides the caller with the number of opaque states actually
+ * allocated, the size of each one in bytes, and the address of the first
+ * state.
+ *
+ * Returns a pointer to the first state in the allocation.
+ *
+ */
+SYSCALL_DEFINE3(vgetrandom_alloc, unsigned int __user *, num,
+		unsigned int __user *, size_per_each, unsigned int, flags)
+{
+	const size_t state_size = sizeof(struct vgetrandom_state);
+	size_t alloc_size, num_states;
+	unsigned long pages_addr;
+	unsigned int num_hint;
+	int ret;
+
+	if (flags)
+		return -EINVAL;
+
+	if (get_user(num_hint, num))
+		return -EFAULT;
+
+	num_states = clamp_t(size_t, num_hint, 1, (SIZE_MAX & PAGE_MASK) / state_size);
+	alloc_size = PAGE_ALIGN(num_states * state_size);
+
+	if (put_user(alloc_size / state_size, num) || put_user(state_size, size_per_each))
+		return -EFAULT;
+
+	pages_addr = vm_mmap(NULL, 0, alloc_size, PROT_READ | PROT_WRITE,
+			     MAP_PRIVATE | MAP_ANONYMOUS | MAP_LOCKED, 0);
+	if (IS_ERR_VALUE(pages_addr))
+		return pages_addr;
+
+	ret = do_madvise(current->mm, pages_addr, alloc_size, MADV_WIPEONFORK);
+	if (ret < 0)
+		goto err_unmap;
+
+	return pages_addr;
+
+err_unmap:
+	vm_munmap(pages_addr, alloc_size);
+	return ret;
+}
+#endif
+
 /*********************************************************************
  *
  * Fast key erasure RNG, the "crng".
diff --git a/include/linux/syscalls.h b/include/linux/syscalls.h
index a34b0f9a9972..7741dc94f10c 100644
--- a/include/linux/syscalls.h
+++ b/include/linux/syscalls.h
@@ -1006,6 +1006,9 @@ asmlinkage long sys_seccomp(unsigned int op, unsigned int flags,
 			    void __user *uargs);
 asmlinkage long sys_getrandom(char __user *buf, size_t count,
 			      unsigned int flags);
+asmlinkage long sys_vgetrandom_alloc(unsigned int __user *num,
+				     unsigned int __user *size_per_each,
+				     unsigned int flags);
 asmlinkage long sys_memfd_create(const char __user *uname_ptr, unsigned int flags);
 asmlinkage long sys_bpf(int cmd, union bpf_attr *attr, unsigned int size);
 asmlinkage long sys_execveat(int dfd, const char __user *filename,
diff --git a/include/vdso/getrandom.h b/include/vdso/getrandom.h
new file mode 100644
index 000000000000..5f04c8bf4bd4
--- /dev/null
+++ b/include/vdso/getrandom.h
@@ -0,0 +1,24 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2022 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
+ */
+
+#ifndef _VDSO_GETRANDOM_H
+#define _VDSO_GETRANDOM_H
+
+#include <crypto/chacha.h>
+
+struct vgetrandom_state {
+	union {
+		struct {
+			u8 batch[CHACHA_BLOCK_SIZE * 3 / 2];
+			u32 key[CHACHA_KEY_SIZE / sizeof(u32)];
+		};
+		u8 batch_key[CHACHA_BLOCK_SIZE * 2];
+	};
+	unsigned long generation;
+	u8 pos;
+	bool in_use;
+};
+
+#endif /* _VDSO_GETRANDOM_H */
diff --git a/kernel/sys_ni.c b/kernel/sys_ni.c
index 860b2dcf3ac4..f28196cb919b 100644
--- a/kernel/sys_ni.c
+++ b/kernel/sys_ni.c
@@ -360,6 +360,9 @@ COND_SYSCALL(pkey_free);
 /* memfd_secret */
 COND_SYSCALL(memfd_secret);
 
+/* random */
+COND_SYSCALL(vgetrandom_alloc);
+
 /*
  * Architecture specific weak syscall entries.
  */
diff --git a/lib/vdso/Kconfig b/lib/vdso/Kconfig
index d883ac299508..b22584f8da03 100644
--- a/lib/vdso/Kconfig
+++ b/lib/vdso/Kconfig
@@ -31,3 +31,10 @@ config GENERIC_VDSO_TIME_NS
 	  VDSO
 
 endif
+
+config VGETRANDOM_ALLOC_SYSCALL
+	bool
+	select ADVISE_SYSCALLS
+	help
+	  Selected by the getrandom() vDSO function, which requires this
+	  for state allocation.
-- 
2.38.1



* [PATCH v10 2/4] arch: allocate vgetrandom_alloc() syscall number
  2022-11-29 21:06 [PATCH v10 0/4] implement getrandom() in vDSO Jason A. Donenfeld
  2022-11-29 21:06 ` [PATCH v10 1/4] random: add vgetrandom_alloc() syscall Jason A. Donenfeld
@ 2022-11-29 21:06 ` Jason A. Donenfeld
  2022-11-30  8:56   ` Geert Uytterhoeven
  2022-11-29 21:06 ` [PATCH v10 3/4] random: introduce generic vDSO getrandom() implementation Jason A. Donenfeld
  2022-11-29 21:06 ` [PATCH v10 4/4] x86: vdso: Wire up getrandom() vDSO implementation Jason A. Donenfeld
  3 siblings, 1 reply; 37+ messages in thread
From: Jason A. Donenfeld @ 2022-11-29 21:06 UTC (permalink / raw)
  To: linux-kernel, patches, tglx
  Cc: Jason A. Donenfeld, linux-crypto, linux-api, x86,
	Greg Kroah-Hartman, Adhemerval Zanella Netto,
	Carlos O'Donell, Florian Weimer, Arnd Bergmann,
	Christian Brauner

Add vgetrandom_alloc() as syscall 451 (or 561 on alpha) by adding it to
all of the various syscall.tbl and unistd.h files.

Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
---
 arch/alpha/kernel/syscalls/syscall.tbl              | 1 +
 arch/arm/tools/syscall.tbl                          | 1 +
 arch/arm64/include/asm/unistd.h                     | 2 +-
 arch/arm64/include/asm/unistd32.h                   | 2 ++
 arch/ia64/kernel/syscalls/syscall.tbl               | 1 +
 arch/m68k/kernel/syscalls/syscall.tbl               | 1 +
 arch/microblaze/kernel/syscalls/syscall.tbl         | 1 +
 arch/mips/kernel/syscalls/syscall_n32.tbl           | 1 +
 arch/mips/kernel/syscalls/syscall_n64.tbl           | 1 +
 arch/mips/kernel/syscalls/syscall_o32.tbl           | 1 +
 arch/parisc/kernel/syscalls/syscall.tbl             | 1 +
 arch/powerpc/kernel/syscalls/syscall.tbl            | 1 +
 arch/s390/kernel/syscalls/syscall.tbl               | 1 +
 arch/sh/kernel/syscalls/syscall.tbl                 | 1 +
 arch/sparc/kernel/syscalls/syscall.tbl              | 1 +
 arch/x86/entry/syscalls/syscall_32.tbl              | 1 +
 arch/x86/entry/syscalls/syscall_64.tbl              | 1 +
 arch/xtensa/kernel/syscalls/syscall.tbl             | 1 +
 include/uapi/asm-generic/unistd.h                   | 5 ++++-
 tools/include/uapi/asm-generic/unistd.h             | 5 ++++-
 tools/perf/arch/mips/entry/syscalls/syscall_n64.tbl | 1 +
 tools/perf/arch/powerpc/entry/syscalls/syscall.tbl  | 1 +
 tools/perf/arch/s390/entry/syscalls/syscall.tbl     | 1 +
 tools/perf/arch/x86/entry/syscalls/syscall_64.tbl   | 1 +
 24 files changed, 31 insertions(+), 3 deletions(-)

diff --git a/arch/alpha/kernel/syscalls/syscall.tbl b/arch/alpha/kernel/syscalls/syscall.tbl
index 8ebacf37a8cf..a4bfd7b53d6f 100644
--- a/arch/alpha/kernel/syscalls/syscall.tbl
+++ b/arch/alpha/kernel/syscalls/syscall.tbl
@@ -490,3 +490,4 @@
 558	common	process_mrelease		sys_process_mrelease
 559	common  futex_waitv                     sys_futex_waitv
 560	common	set_mempolicy_home_node		sys_ni_syscall
+561	common	vgetrandom_alloc		sys_vgetrandom_alloc
diff --git a/arch/arm/tools/syscall.tbl b/arch/arm/tools/syscall.tbl
index ac964612d8b0..e10319cc6c3e 100644
--- a/arch/arm/tools/syscall.tbl
+++ b/arch/arm/tools/syscall.tbl
@@ -464,3 +464,4 @@
 448	common	process_mrelease		sys_process_mrelease
 449	common	futex_waitv			sys_futex_waitv
 450	common	set_mempolicy_home_node		sys_set_mempolicy_home_node
+451	common	vgetrandom_alloc		sys_vgetrandom_alloc
diff --git a/arch/arm64/include/asm/unistd.h b/arch/arm64/include/asm/unistd.h
index 037feba03a51..64a514f90131 100644
--- a/arch/arm64/include/asm/unistd.h
+++ b/arch/arm64/include/asm/unistd.h
@@ -39,7 +39,7 @@
 #define __ARM_NR_compat_set_tls		(__ARM_NR_COMPAT_BASE + 5)
 #define __ARM_NR_COMPAT_END		(__ARM_NR_COMPAT_BASE + 0x800)
 
-#define __NR_compat_syscalls		451
+#define __NR_compat_syscalls		452
 #endif
 
 #define __ARCH_WANT_SYS_CLONE
diff --git a/arch/arm64/include/asm/unistd32.h b/arch/arm64/include/asm/unistd32.h
index 604a2053d006..7285b5a830cc 100644
--- a/arch/arm64/include/asm/unistd32.h
+++ b/arch/arm64/include/asm/unistd32.h
@@ -907,6 +907,8 @@ __SYSCALL(__NR_process_mrelease, sys_process_mrelease)
 __SYSCALL(__NR_futex_waitv, sys_futex_waitv)
 #define __NR_set_mempolicy_home_node 450
 __SYSCALL(__NR_set_mempolicy_home_node, sys_set_mempolicy_home_node)
+#define __NR_vgetrandom_alloc 451
+__SYSCALL(__NR_vgetrandom_alloc, sys_vgetrandom_alloc)
 
 /*
  * Please add new compat syscalls above this comment and update
diff --git a/arch/ia64/kernel/syscalls/syscall.tbl b/arch/ia64/kernel/syscalls/syscall.tbl
index 72c929d9902b..5ed9667051fc 100644
--- a/arch/ia64/kernel/syscalls/syscall.tbl
+++ b/arch/ia64/kernel/syscalls/syscall.tbl
@@ -371,3 +371,4 @@
 448	common	process_mrelease		sys_process_mrelease
 449	common  futex_waitv                     sys_futex_waitv
 450	common	set_mempolicy_home_node		sys_set_mempolicy_home_node
+451	common	vgetrandom_alloc		sys_vgetrandom_alloc
diff --git a/arch/m68k/kernel/syscalls/syscall.tbl b/arch/m68k/kernel/syscalls/syscall.tbl
index b1f3940bc298..d9e7ea26dd26 100644
--- a/arch/m68k/kernel/syscalls/syscall.tbl
+++ b/arch/m68k/kernel/syscalls/syscall.tbl
@@ -450,3 +450,4 @@
 448	common	process_mrelease		sys_process_mrelease
 449	common  futex_waitv                     sys_futex_waitv
 450	common	set_mempolicy_home_node		sys_set_mempolicy_home_node
+451	common	vgetrandom_alloc		sys_vgetrandom_alloc
diff --git a/arch/microblaze/kernel/syscalls/syscall.tbl b/arch/microblaze/kernel/syscalls/syscall.tbl
index 820145e47350..c109e307a37b 100644
--- a/arch/microblaze/kernel/syscalls/syscall.tbl
+++ b/arch/microblaze/kernel/syscalls/syscall.tbl
@@ -456,3 +456,4 @@
 448	common	process_mrelease		sys_process_mrelease
 449	common  futex_waitv                     sys_futex_waitv
 450	common	set_mempolicy_home_node		sys_set_mempolicy_home_node
+451	common	vgetrandom_alloc		sys_vgetrandom_alloc
diff --git a/arch/mips/kernel/syscalls/syscall_n32.tbl b/arch/mips/kernel/syscalls/syscall_n32.tbl
index 253ff994ed2e..6d47d8231f7d 100644
--- a/arch/mips/kernel/syscalls/syscall_n32.tbl
+++ b/arch/mips/kernel/syscalls/syscall_n32.tbl
@@ -389,3 +389,4 @@
 448	n32	process_mrelease		sys_process_mrelease
 449	n32	futex_waitv			sys_futex_waitv
 450	n32	set_mempolicy_home_node		sys_set_mempolicy_home_node
+451	n32	vgetrandom_alloc		sys_vgetrandom_alloc
diff --git a/arch/mips/kernel/syscalls/syscall_n64.tbl b/arch/mips/kernel/syscalls/syscall_n64.tbl
index 3f1886ad9d80..890e5b51e1fc 100644
--- a/arch/mips/kernel/syscalls/syscall_n64.tbl
+++ b/arch/mips/kernel/syscalls/syscall_n64.tbl
@@ -365,3 +365,4 @@
 448	n64	process_mrelease		sys_process_mrelease
 449	n64	futex_waitv			sys_futex_waitv
 450	common	set_mempolicy_home_node		sys_set_mempolicy_home_node
+451	n64	vgetrandom_alloc		sys_vgetrandom_alloc
diff --git a/arch/mips/kernel/syscalls/syscall_o32.tbl b/arch/mips/kernel/syscalls/syscall_o32.tbl
index 8f243e35a7b2..de512de148f5 100644
--- a/arch/mips/kernel/syscalls/syscall_o32.tbl
+++ b/arch/mips/kernel/syscalls/syscall_o32.tbl
@@ -438,3 +438,4 @@
 448	o32	process_mrelease		sys_process_mrelease
 449	o32	futex_waitv			sys_futex_waitv
 450	o32	set_mempolicy_home_node		sys_set_mempolicy_home_node
+451	o32	vgetrandom_alloc		sys_vgetrandom_alloc
diff --git a/arch/parisc/kernel/syscalls/syscall.tbl b/arch/parisc/kernel/syscalls/syscall.tbl
index 8a99c998da9b..bab1cee627e3 100644
--- a/arch/parisc/kernel/syscalls/syscall.tbl
+++ b/arch/parisc/kernel/syscalls/syscall.tbl
@@ -448,3 +448,4 @@
 448	common	process_mrelease		sys_process_mrelease
 449	common	futex_waitv			sys_futex_waitv
 450	common	set_mempolicy_home_node		sys_set_mempolicy_home_node
+451	common	vgetrandom_alloc		sys_vgetrandom_alloc
diff --git a/arch/powerpc/kernel/syscalls/syscall.tbl b/arch/powerpc/kernel/syscalls/syscall.tbl
index a0be127475b1..e6c04eda2363 100644
--- a/arch/powerpc/kernel/syscalls/syscall.tbl
+++ b/arch/powerpc/kernel/syscalls/syscall.tbl
@@ -537,3 +537,4 @@
 448	common	process_mrelease		sys_process_mrelease
 449	common  futex_waitv                     sys_futex_waitv
 450 	nospu	set_mempolicy_home_node		sys_set_mempolicy_home_node
+451	common	vgetrandom_alloc		sys_vgetrandom_alloc
diff --git a/arch/s390/kernel/syscalls/syscall.tbl b/arch/s390/kernel/syscalls/syscall.tbl
index 799147658dee..5b0b2bea46da 100644
--- a/arch/s390/kernel/syscalls/syscall.tbl
+++ b/arch/s390/kernel/syscalls/syscall.tbl
@@ -453,3 +453,4 @@
 448  common	process_mrelease	sys_process_mrelease		sys_process_mrelease
 449  common	futex_waitv		sys_futex_waitv			sys_futex_waitv
 450  common	set_mempolicy_home_node	sys_set_mempolicy_home_node	sys_set_mempolicy_home_node
+451  common	vgetrandom_alloc	sys_vgetrandom_alloc		sys_vgetrandom_alloc
diff --git a/arch/sh/kernel/syscalls/syscall.tbl b/arch/sh/kernel/syscalls/syscall.tbl
index 2de85c977f54..631f0bac0e9a 100644
--- a/arch/sh/kernel/syscalls/syscall.tbl
+++ b/arch/sh/kernel/syscalls/syscall.tbl
@@ -453,3 +453,4 @@
 448	common	process_mrelease		sys_process_mrelease
 449	common  futex_waitv                     sys_futex_waitv
 450	common	set_mempolicy_home_node		sys_set_mempolicy_home_node
+451	common	vgetrandom_alloc		sys_vgetrandom_alloc
diff --git a/arch/sparc/kernel/syscalls/syscall.tbl b/arch/sparc/kernel/syscalls/syscall.tbl
index 4398cc6fb68d..b4925978adea 100644
--- a/arch/sparc/kernel/syscalls/syscall.tbl
+++ b/arch/sparc/kernel/syscalls/syscall.tbl
@@ -496,3 +496,4 @@
 448	common	process_mrelease		sys_process_mrelease
 449	common  futex_waitv                     sys_futex_waitv
 450	common	set_mempolicy_home_node		sys_set_mempolicy_home_node
+451	common	vgetrandom_alloc		sys_vgetrandom_alloc
diff --git a/arch/x86/entry/syscalls/syscall_32.tbl b/arch/x86/entry/syscalls/syscall_32.tbl
index 320480a8db4f..f5f863a33824 100644
--- a/arch/x86/entry/syscalls/syscall_32.tbl
+++ b/arch/x86/entry/syscalls/syscall_32.tbl
@@ -455,3 +455,4 @@
 448	i386	process_mrelease	sys_process_mrelease
 449	i386	futex_waitv		sys_futex_waitv
 450	i386	set_mempolicy_home_node		sys_set_mempolicy_home_node
+451	i386	vgetrandom_alloc		sys_vgetrandom_alloc
diff --git a/arch/x86/entry/syscalls/syscall_64.tbl b/arch/x86/entry/syscalls/syscall_64.tbl
index c84d12608cd2..0186f173f0e8 100644
--- a/arch/x86/entry/syscalls/syscall_64.tbl
+++ b/arch/x86/entry/syscalls/syscall_64.tbl
@@ -372,6 +372,7 @@
 448	common	process_mrelease	sys_process_mrelease
 449	common	futex_waitv		sys_futex_waitv
 450	common	set_mempolicy_home_node	sys_set_mempolicy_home_node
+451	common	vgetrandom_alloc	sys_vgetrandom_alloc
 
 #
 # Due to a historical design error, certain syscalls are numbered differently
diff --git a/arch/xtensa/kernel/syscalls/syscall.tbl b/arch/xtensa/kernel/syscalls/syscall.tbl
index 52c94ab5c205..14d63a119cc2 100644
--- a/arch/xtensa/kernel/syscalls/syscall.tbl
+++ b/arch/xtensa/kernel/syscalls/syscall.tbl
@@ -421,3 +421,4 @@
 448	common	process_mrelease		sys_process_mrelease
 449	common  futex_waitv                     sys_futex_waitv
 450	common	set_mempolicy_home_node		sys_set_mempolicy_home_node
+451	common	vgetrandom_alloc		sys_vgetrandom_alloc
diff --git a/include/uapi/asm-generic/unistd.h b/include/uapi/asm-generic/unistd.h
index 45fa180cc56a..9d2e299f3e8a 100644
--- a/include/uapi/asm-generic/unistd.h
+++ b/include/uapi/asm-generic/unistd.h
@@ -886,8 +886,11 @@ __SYSCALL(__NR_futex_waitv, sys_futex_waitv)
 #define __NR_set_mempolicy_home_node 450
 __SYSCALL(__NR_set_mempolicy_home_node, sys_set_mempolicy_home_node)
 
+#define __NR_vgetrandom_alloc 451
+__SYSCALL(__NR_vgetrandom_alloc, sys_vgetrandom_alloc)
+
 #undef __NR_syscalls
-#define __NR_syscalls 451
+#define __NR_syscalls 452
 
 /*
  * 32 bit systems traditionally used different
diff --git a/tools/include/uapi/asm-generic/unistd.h b/tools/include/uapi/asm-generic/unistd.h
index 45fa180cc56a..9d2e299f3e8a 100644
--- a/tools/include/uapi/asm-generic/unistd.h
+++ b/tools/include/uapi/asm-generic/unistd.h
@@ -886,8 +886,11 @@ __SYSCALL(__NR_futex_waitv, sys_futex_waitv)
 #define __NR_set_mempolicy_home_node 450
 __SYSCALL(__NR_set_mempolicy_home_node, sys_set_mempolicy_home_node)
 
+#define __NR_vgetrandom_alloc 451
+__SYSCALL(__NR_vgetrandom_alloc, sys_vgetrandom_alloc)
+
 #undef __NR_syscalls
-#define __NR_syscalls 451
+#define __NR_syscalls 452
 
 /*
  * 32 bit systems traditionally used different
diff --git a/tools/perf/arch/mips/entry/syscalls/syscall_n64.tbl b/tools/perf/arch/mips/entry/syscalls/syscall_n64.tbl
index 3f1886ad9d80..890e5b51e1fc 100644
--- a/tools/perf/arch/mips/entry/syscalls/syscall_n64.tbl
+++ b/tools/perf/arch/mips/entry/syscalls/syscall_n64.tbl
@@ -365,3 +365,4 @@
 448	n64	process_mrelease		sys_process_mrelease
 449	n64	futex_waitv			sys_futex_waitv
 450	common	set_mempolicy_home_node		sys_set_mempolicy_home_node
+451	n64	vgetrandom_alloc		sys_vgetrandom_alloc
diff --git a/tools/perf/arch/powerpc/entry/syscalls/syscall.tbl b/tools/perf/arch/powerpc/entry/syscalls/syscall.tbl
index e9e0df4f9a61..d58da67a9766 100644
--- a/tools/perf/arch/powerpc/entry/syscalls/syscall.tbl
+++ b/tools/perf/arch/powerpc/entry/syscalls/syscall.tbl
@@ -534,3 +534,4 @@
 448	common	process_mrelease		sys_process_mrelease
 449	common  futex_waitv                     sys_futex_waitv
 450 	nospu	set_mempolicy_home_node		sys_set_mempolicy_home_node
+451	common	vgetrandom_alloc		sys_vgetrandom_alloc
diff --git a/tools/perf/arch/s390/entry/syscalls/syscall.tbl b/tools/perf/arch/s390/entry/syscalls/syscall.tbl
index 799147658dee..5b0b2bea46da 100644
--- a/tools/perf/arch/s390/entry/syscalls/syscall.tbl
+++ b/tools/perf/arch/s390/entry/syscalls/syscall.tbl
@@ -453,3 +453,4 @@
 448  common	process_mrelease	sys_process_mrelease		sys_process_mrelease
 449  common	futex_waitv		sys_futex_waitv			sys_futex_waitv
 450  common	set_mempolicy_home_node	sys_set_mempolicy_home_node	sys_set_mempolicy_home_node
+451  common	vgetrandom_alloc	sys_vgetrandom_alloc		sys_vgetrandom_alloc
diff --git a/tools/perf/arch/x86/entry/syscalls/syscall_64.tbl b/tools/perf/arch/x86/entry/syscalls/syscall_64.tbl
index c84d12608cd2..0186f173f0e8 100644
--- a/tools/perf/arch/x86/entry/syscalls/syscall_64.tbl
+++ b/tools/perf/arch/x86/entry/syscalls/syscall_64.tbl
@@ -372,6 +372,7 @@
 448	common	process_mrelease	sys_process_mrelease
 449	common	futex_waitv		sys_futex_waitv
 450	common	set_mempolicy_home_node	sys_set_mempolicy_home_node
+451	common	vgetrandom_alloc	sys_vgetrandom_alloc
 
 #
 # Due to a historical design error, certain syscalls are numbered differently
-- 
2.38.1


^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [PATCH v10 3/4] random: introduce generic vDSO getrandom() implementation
  2022-11-29 21:06 [PATCH v10 0/4] implement getrandom() in vDSO Jason A. Donenfeld
  2022-11-29 21:06 ` [PATCH v10 1/4] random: add vgetrandom_alloc() syscall Jason A. Donenfeld
  2022-11-29 21:06 ` [PATCH v10 2/4] arch: allocate vgetrandom_alloc() syscall number Jason A. Donenfeld
@ 2022-11-29 21:06 ` Jason A. Donenfeld
  2022-11-29 22:42   ` Thomas Gleixner
  2022-11-30 10:44   ` Florian Weimer
  2022-11-29 21:06 ` [PATCH v10 4/4] x86: vdso: Wire up getrandom() vDSO implementation Jason A. Donenfeld
  3 siblings, 2 replies; 37+ messages in thread
From: Jason A. Donenfeld @ 2022-11-29 21:06 UTC (permalink / raw)
  To: linux-kernel, patches, tglx
  Cc: Jason A. Donenfeld, linux-crypto, linux-api, x86,
	Greg Kroah-Hartman, Adhemerval Zanella Netto,
	Carlos O'Donell, Florian Weimer, Arnd Bergmann,
	Christian Brauner

Provide a generic C vDSO getrandom() implementation, which operates on
an opaque state returned by vgetrandom_alloc() and produces random bytes
the same way as getrandom(). It has the API signature:

  ssize_t vgetrandom(void *buffer, size_t len, unsigned int flags, void *opaque_state);

The return value and the first three arguments are the same as ordinary
getrandom(), while the last argument is a pointer to the opaque
allocated state. Were all four arguments passed to the getrandom()
syscall, nothing different would happen, and the functions would have
the exact same behavior.
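For reference, the ordinary syscall path whose return convention the vDSO
function mirrors can be wrapped as follows. This is only an illustrative
sketch, not part of the patch; it uses glibc's getrandom(3) wrapper
(available since glibc 2.25), and the fill_random() name is hypothetical:

```c
#include <errno.h>
#include <sys/random.h>
#include <sys/types.h>

/*
 * Fill @buf with @len random bytes using the ordinary getrandom() syscall,
 * retrying if interrupted by a signal before any bytes were produced. The
 * vDSO function follows the same return convention, just without a kernel
 * transition on the fast path.
 */
static ssize_t fill_random(void *buf, size_t len)
{
	ssize_t ret;

	do {
		ret = getrandom(buf, len, 0);
	} while (ret < 0 && errno == EINTR);
	return ret;
}
```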

The vDSO RNG implements the same algorithm as drivers/char/random.c,
using the same fast key erasure technique.
Should the in-kernel implementation change, so too will the vDSO one.

It requires an implementation of ChaCha20 that does not use any stack,
in order to maintain forward secrecy if a multi-threaded program forks
(though this does not account for a similar issue with SA_SIGINFO
copying registers to the stack), so this is left as an
architecture-specific fill-in. Stack-less ChaCha20 is an easy algorithm
to implement on a variety of architectures, so this shouldn't be too
onerous.

Initially, the state is keyless, and so the first call makes a
getrandom() syscall to generate that key, and then uses it for
subsequent calls. By keeping track of a generation counter, it knows
when its key is invalidated and it should fetch a new one using the
syscall. Later, more than just a generation counter might be used.

Since MADV_WIPEONFORK is set on the opaque state, the key and related
state are wiped during a fork(), so secrets don't roll over into new
processes, and the same state doesn't accidentally generate the same
random stream. The generation counter, likewise, is always >0, so that
a counter of 0 is a useful indication of a fork() or an otherwise
uninitialized state.

If the kernel RNG is not yet initialized, then the vDSO always calls the
syscall, because that behavior cannot be emulated in userspace, but
fortunately that state is short lived and only during early boot. If it
has been initialized, then there is no need to inspect the `flags`
argument, because the behavior does not change post-initialization
regardless of the `flags` value.

Since vDSO getrandom() mutates the opaque state passed to it, it is not
reentrant when used with the same opaque state, which libc should be
mindful of.

vgetrandom_alloc() and vDSO getrandom() together provide the ability for
userspace to generate random bytes quickly and safely, and are intended
to be integrated into libc's thread management. As an illustrative
example, the following code might be used to do the same outside of
libc. All of the static functions are to be considered implementation
private, including the vgetrandom_alloc() syscall wrapper, which
generally shouldn't be exposed outside of libc, with the non-static
vgetrandom() function at the end being the exported interface. The
various pthread-isms are expected to be elided into libc internals. This
per-thread allocation scheme is very naive and does not shrink; other
implementations may choose to be more complex.

  static void *vgetrandom_alloc(unsigned int *num, unsigned int *size_per_each, unsigned int flags)
  {
    long ret = syscall(__NR_vgetrandom_alloc, num, size_per_each, flags);
    return ret == -1 ? NULL : (void *)ret;
  }

  static struct {
    pthread_mutex_t lock;
    void **states;
    size_t len, cap;
  } grnd_allocator = {
    .lock = PTHREAD_MUTEX_INITIALIZER
  };

  static void *vgetrandom_get_state(void)
  {
    void *state = NULL;

    pthread_mutex_lock(&grnd_allocator.lock);
    if (!grnd_allocator.len) {
      size_t new_cap;
      unsigned int size_per_each, num = 16; /* Just a hint. Could also be nr_cpus. */
      void *new_block = vgetrandom_alloc(&num, &size_per_each, 0), *new_states;

      if (!new_block)
        goto out;
      new_cap = grnd_allocator.cap + num;
      new_states = reallocarray(grnd_allocator.states, new_cap, sizeof(*grnd_allocator.states));
      if (!new_states) {
        munmap(new_block, num * size_per_each);
        goto out;
      }
      grnd_allocator.cap = new_cap;
      grnd_allocator.states = new_states;

      for (size_t i = 0; i < num; ++i) {
        grnd_allocator.states[i] = new_block;
        new_block += size_per_each;
      }
      grnd_allocator.len = num;
    }
    state = grnd_allocator.states[--grnd_allocator.len];

  out:
    pthread_mutex_unlock(&grnd_allocator.lock);
    return state;
  }

  static void vgetrandom_put_state(void *state)
  {
    if (!state)
      return;
    pthread_mutex_lock(&grnd_allocator.lock);
    grnd_allocator.states[grnd_allocator.len++] = state;
    pthread_mutex_unlock(&grnd_allocator.lock);
  }

  static struct {
    ssize_t (*fn)(void *buf, size_t len, unsigned int flags, void *state);
    pthread_key_t key;
    pthread_once_t initialized;
  } grnd_ctx = {
    .initialized = PTHREAD_ONCE_INIT
  };

  static void vgetrandom_init(void)
  {
    if (pthread_key_create(&grnd_ctx.key, vgetrandom_put_state) != 0)
      return;
    grnd_ctx.fn = __vdsosym("LINUX_2.6", "__vdso_getrandom");
  }

  ssize_t vgetrandom(void *buf, size_t len, unsigned int flags)
  {
    void *state;

    pthread_once(&grnd_ctx.initialized, vgetrandom_init);
    if (!grnd_ctx.fn)
      return getrandom(buf, len, flags);
    state = pthread_getspecific(grnd_ctx.key);
    if (!state) {
      state = vgetrandom_get_state();
      if (pthread_setspecific(grnd_ctx.key, state) != 0) {
        vgetrandom_put_state(state);
        state = NULL;
      }
      if (!state)
        return getrandom(buf, len, flags);
    }
    return grnd_ctx.fn(buf, len, flags, state);
  }

Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
---
 MAINTAINERS             |   1 +
 drivers/char/random.c   |   9 ++
 include/vdso/datapage.h |  11 +++
 lib/vdso/Kconfig        |   7 +-
 lib/vdso/getrandom.c    | 204 ++++++++++++++++++++++++++++++++++++++++
 5 files changed, 231 insertions(+), 1 deletion(-)
 create mode 100644 lib/vdso/getrandom.c

diff --git a/MAINTAINERS b/MAINTAINERS
index 3894f947a507..70dff39fcff9 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -17288,6 +17288,7 @@ S:	Maintained
 F:	drivers/char/random.c
 F:	drivers/virt/vmgenid.c
 F:	include/vdso/getrandom.h
+F:	lib/vdso/getrandom.c
 
 RAPIDIO SUBSYSTEM
 M:	Matt Porter <mporter@kernel.crashing.org>
diff --git a/drivers/char/random.c b/drivers/char/random.c
index b81d67f3ebab..b37a3f9367a9 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -60,6 +60,9 @@
 #ifdef CONFIG_VGETRANDOM_ALLOC_SYSCALL
 #include <vdso/getrandom.h>
 #endif
+#ifdef CONFIG_VDSO_GETRANDOM
+#include <vdso/datapage.h>
+#endif
 #include <asm/processor.h>
 #include <asm/irq.h>
 #include <asm/irq_regs.h>
@@ -344,6 +347,9 @@ static void crng_reseed(struct work_struct *work)
 	if (next_gen == ULONG_MAX)
 		++next_gen;
 	WRITE_ONCE(base_crng.generation, next_gen);
+#ifdef CONFIG_VDSO_GETRANDOM
+	smp_store_release(&_vdso_rng_data.generation, next_gen + 1);
+#endif
 	if (!static_branch_likely(&crng_is_ready))
 		crng_init = CRNG_READY;
 	spin_unlock_irqrestore(&base_crng.lock, flags);
@@ -794,6 +800,9 @@ static void __cold _credit_init_bits(size_t bits)
 		if (static_key_initialized)
 			execute_in_process_context(crng_set_ready, &set_ready);
 		atomic_notifier_call_chain(&random_ready_notifier, 0, NULL);
+#ifdef CONFIG_VDSO_GETRANDOM
+		smp_store_release(&_vdso_rng_data.is_ready, true);
+#endif
 		wake_up_interruptible(&crng_init_wait);
 		kill_fasync(&fasync, SIGIO, POLL_IN);
 		pr_notice("crng init done\n");
diff --git a/include/vdso/datapage.h b/include/vdso/datapage.h
index 73eb622e7663..9ae4d76b36c7 100644
--- a/include/vdso/datapage.h
+++ b/include/vdso/datapage.h
@@ -109,6 +109,16 @@ struct vdso_data {
 	struct arch_vdso_data	arch_data;
 };
 
+/**
+ * struct vdso_rng_data - vdso RNG state information
+ * @generation:	a counter representing the number of RNG reseeds
+ * @is_ready:	whether the RNG is initialized
+ */
+struct vdso_rng_data {
+	unsigned long	generation;
+	bool		is_ready;
+};
+
 /*
  * We use the hidden visibility to prevent the compiler from generating a GOT
  * relocation. Not only is going through a GOT useless (the entry couldn't and
@@ -120,6 +130,7 @@ struct vdso_data {
  */
 extern struct vdso_data _vdso_data[CS_BASES] __attribute__((visibility("hidden")));
 extern struct vdso_data _timens_data[CS_BASES] __attribute__((visibility("hidden")));
+extern struct vdso_rng_data _vdso_rng_data __attribute__((visibility("hidden")));
 
 /*
  * The generic vDSO implementation requires that gettimeofday.h
diff --git a/lib/vdso/Kconfig b/lib/vdso/Kconfig
index b22584f8da03..f12b76642921 100644
--- a/lib/vdso/Kconfig
+++ b/lib/vdso/Kconfig
@@ -29,7 +29,6 @@ config GENERIC_VDSO_TIME_NS
 	help
 	  Selected by architectures which support time namespaces in the
 	  VDSO
-
 endif
 
 config VGETRANDOM_ALLOC_SYSCALL
@@ -38,3 +37,9 @@ config VGETRANDOM_ALLOC_SYSCALL
 	help
 	  Selected by the getrandom() vDSO function, which requires this
 	  for state allocation.
+
+config VDSO_GETRANDOM
+	bool
+	select VGETRANDOM_ALLOC_SYSCALL
+	help
+	  Selected by architectures that support vDSO getrandom().
diff --git a/lib/vdso/getrandom.c b/lib/vdso/getrandom.c
new file mode 100644
index 000000000000..1c51e24a7f24
--- /dev/null
+++ b/lib/vdso/getrandom.c
@@ -0,0 +1,204 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2022 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
+ */
+
+#include <linux/cache.h>
+#include <linux/kernel.h>
+#include <linux/time64.h>
+#include <vdso/datapage.h>
+#include <vdso/getrandom.h>
+#include <asm/vdso/getrandom.h>
+#include <asm/vdso/vsyscall.h>
+
+#define MEMCPY_AND_ZERO_SRC(type, dst, src, len) do { \
+	while (len >= sizeof(type)) { \
+		__put_unaligned_t(type, __get_unaligned_t(type, src), dst); \
+		__put_unaligned_t(type, 0, src); \
+		dst += sizeof(type); \
+		src += sizeof(type); \
+		len -= sizeof(type); \
+	} \
+} while (0)
+
+static void memcpy_and_zero_src(void *dst, void *src, size_t len)
+{
+	if (IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS)) {
+		if (IS_ENABLED(CONFIG_64BIT))
+			MEMCPY_AND_ZERO_SRC(u64, dst, src, len);
+		MEMCPY_AND_ZERO_SRC(u32, dst, src, len);
+		MEMCPY_AND_ZERO_SRC(u16, dst, src, len);
+	}
+	MEMCPY_AND_ZERO_SRC(u8, dst, src, len);
+}
+
+/**
+ * __cvdso_getrandom_data - generic vDSO implementation of getrandom() syscall
+ * @rng_info:		describes state of kernel RNG, memory shared with kernel
+ * @buffer:		destination buffer to fill with random bytes
+ * @len:		size of @buffer in bytes
+ * @flags:		zero or more GRND_* flags
+ * @opaque_state:	a pointer to an opaque state area
+ *
+ * This implements a "fast key erasure" RNG using ChaCha20, in the same way that the kernel's
+ * getrandom() syscall does. It periodically reseeds its key from the kernel's RNG, at the same
+ * schedule that the kernel's RNG is reseeded. If the kernel's RNG is not ready, then this always
+ * calls into the syscall.
+ *
+ * @opaque_state *must* be allocated using the vgetrandom_alloc() syscall.  Unless external locking
+ * is used, one state must be allocated per thread, as it is not safe to call this function
+ * concurrently with the same @opaque_state. However, it is safe to call this using the same
+ * @opaque_state that is shared between main code and signal handling code, within the same thread.
+ *
+ * Returns the number of random bytes written to @buffer, or a negative value indicating an error.
+ */
+static __always_inline ssize_t
+__cvdso_getrandom_data(const struct vdso_rng_data *rng_info, void *buffer, size_t len,
+		       unsigned int flags, void *opaque_state)
+{
+	ssize_t ret = min_t(size_t, INT_MAX & PAGE_MASK /* = MAX_RW_COUNT */, len);
+	struct vgetrandom_state *state = opaque_state;
+	size_t batch_len, nblocks, orig_len = len;
+	unsigned long current_generation;
+	void *orig_buffer = buffer;
+	u32 counter[2] = { 0 };
+	bool in_use;
+
+	/*
+	 * If the kernel's RNG is not yet ready, then it's not possible to provide random bytes from
+	 * userspace, because A) the various @flags require this to block, or not, depending on
+	 * various factors unavailable to userspace, and B) the kernel's behavior before the RNG is
+	 * ready is to reseed from the entropy pool at every invocation.
+	 */
+	if (unlikely(!READ_ONCE(rng_info->is_ready)))
+		goto fallback_syscall;
+
+	/*
+	 * This condition is checked after @rng_info->is_ready, because before the kernel's RNG is
+	 * initialized, the @flags parameter may require this to block or return an error, even when
+	 * len is zero.
+	 */
+	if (unlikely(!len))
+		return 0;
+
+	/*
+	 * @state->in_use is basic reentrancy protection against this running in a signal handler
+	 * with the same @opaque_state, but obviously not atomic wrt multiple CPUs or more than one
+	 * level of reentrancy. If a signal interrupts this after reading @state->in_use, but before
+	 * writing @state->in_use, there is still no race, because the signal handler will run to
+	 * its completion before returning execution.
+	 */
+	in_use = READ_ONCE(state->in_use);
+	if (unlikely(in_use))
+		goto fallback_syscall;
+	WRITE_ONCE(state->in_use, true);
+
+retry_generation:
+	/*
+	 * @rng_info->generation must always be read here, as it serializes @state->key with the
+	 * kernel's RNG reseeding schedule.
+	 */
+	current_generation = READ_ONCE(rng_info->generation);
+
+	/*
+	 * If @state->generation doesn't match the kernel RNG's generation, then it means the
+	 * kernel's RNG has reseeded, and so @state->key is reseeded as well.
+	 */
+	if (unlikely(state->generation != current_generation)) {
+		/*
+		 * Write the generation before filling the key, in case of fork. If there is a fork
+		 * just after this line, the two forks will get different random bytes from the
+		 * syscall, which is good. However, were this line to occur after the getrandom
+		 * syscall, then both child and parent could have the same bytes and the same
+		 * generation counter, so the fork would not be detected. Therefore, write
+		 * @state->generation before the call to the getrandom syscall.
+		 */
+		WRITE_ONCE(state->generation, current_generation);
+
+		/* Reseed @state->key using fresh bytes from the kernel. */
+		if (getrandom_syscall(state->key, sizeof(state->key), 0) != sizeof(state->key)) {
+			/*
+			 * If the syscall failed to refresh the key, then @state->key is now
+			 * invalid, so invalidate the generation so that it is not used again, and
+			 * fallback to using the syscall entirely.
+			 */
+			WRITE_ONCE(state->generation, 0);
+
+			/*
+			 * Set @state->in_use to false only after the last write to @state in the
+			 * line above.
+			 */
+			WRITE_ONCE(state->in_use, false);
+
+			goto fallback_syscall;
+		}
+
+		/*
+		 * Set @state->pos to beyond the end of the batch, so that the batch is refilled
+		 * using the new key.
+		 */
+		state->pos = sizeof(state->batch);
+	}
+
+	len = ret;
+more_batch:
+	/*
+	 * First use bytes out of @state->batch, which may have been filled by the last call to this
+	 * function.
+	 */
+	batch_len = min_t(size_t, sizeof(state->batch) - state->pos, len);
+	if (batch_len) {
+		/* Zeroing at the same time as memcpying helps preserve forward secrecy. */
+		memcpy_and_zero_src(buffer, state->batch + state->pos, batch_len);
+		state->pos += batch_len;
+		buffer += batch_len;
+		len -= batch_len;
+	}
+
+	if (!len) {
+		/*
+		 * Since @rng_info->generation will never be 0, re-read @state->generation, rather
+		 * than using the local current_generation variable, to learn whether a fork
+		 * occurred. Primarily, though, this indicates whether the kernel's RNG has
+		 * reseeded, in which case generate a new key and start over.
+		 */
+		if (unlikely(READ_ONCE(state->generation) != READ_ONCE(rng_info->generation))) {
+			buffer = orig_buffer;
+			goto retry_generation;
+		}
+
+		/*
+		 * Set @state->in_use to false only when there will be no more reads or writes of
+		 * @state.
+		 */
+		WRITE_ONCE(state->in_use, false);
+		return ret;
+	}
+
+	/* Generate blocks of RNG output directly into @buffer while there's enough room left. */
+	nblocks = len / CHACHA_BLOCK_SIZE;
+	if (nblocks) {
+		__arch_chacha20_blocks_nostack(buffer, state->key, counter, nblocks);
+		buffer += nblocks * CHACHA_BLOCK_SIZE;
+		len -= nblocks * CHACHA_BLOCK_SIZE;
+	}
+
+	BUILD_BUG_ON(sizeof(state->batch_key) % CHACHA_BLOCK_SIZE != 0);
+
+	/* Refill the batch and then overwrite the key, in order to preserve forward secrecy. */
+	__arch_chacha20_blocks_nostack(state->batch_key, state->key, counter,
+				       sizeof(state->batch_key) / CHACHA_BLOCK_SIZE);
+
+	/* Since the batch was just refilled, set the position back to 0 to indicate a full batch. */
+	state->pos = 0;
+	goto more_batch;
+
+fallback_syscall:
+	return getrandom_syscall(orig_buffer, orig_len, flags);
+}
+
+static __always_inline ssize_t
+__cvdso_getrandom(void *buffer, size_t len, unsigned int flags, void *opaque_state)
+{
+	return __cvdso_getrandom_data(__arch_get_vdso_rng_data(), buffer, len, flags, opaque_state);
+}
-- 
2.38.1



* [PATCH v10 4/4] x86: vdso: Wire up getrandom() vDSO implementation
  2022-11-29 21:06 [PATCH v10 0/4] implement getrandom() in vDSO Jason A. Donenfeld
                   ` (2 preceding siblings ...)
  2022-11-29 21:06 ` [PATCH v10 3/4] random: introduce generic vDSO getrandom() implementation Jason A. Donenfeld
@ 2022-11-29 21:06 ` Jason A. Donenfeld
  2022-11-29 22:52   ` Thomas Gleixner
  2022-11-30  5:22   ` Eric Biggers
  3 siblings, 2 replies; 37+ messages in thread
From: Jason A. Donenfeld @ 2022-11-29 21:06 UTC (permalink / raw)
  To: linux-kernel, patches, tglx
  Cc: Jason A. Donenfeld, linux-crypto, linux-api, x86,
	Greg Kroah-Hartman, Adhemerval Zanella Netto,
	Carlos O'Donell, Florian Weimer, Arnd Bergmann,
	Christian Brauner, Samuel Neves

Hook up the generic vDSO implementation to the x86 vDSO data page. Since
the existing vDSO infrastructure is heavily based on the timekeeping
functionality, which works over arrays of bases, a new macro is
introduced for vvars that are not arrays.

The vDSO function requires a ChaCha20 implementation that does not write
to the stack, yet can still do an entire ChaCha20 permutation, so
provide this using SSE2, since this is userland code that must work on
all x86-64 processors.
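As a plain C model of what the assembly computes, the block function might
look like the sketch below. This is illustrative only: it freely uses the
stack, which the real vDSO implementation must not, it assumes a
little-endian host, and the function name is made up.

```c
#include <stdint.h>
#include <string.h>

static uint32_t rotl32(uint32_t v, int n)
{
	return (v << n) | (v >> (32 - n));
}

/* One ChaCha quarter round, as in each paddd/pxor/rotate group of the asm. */
#define QR(a, b, c, d) do {			\
	a += b; d = rotl32(d ^ a, 16);		\
	c += d; b = rotl32(b ^ c, 12);		\
	a += b; d = rotl32(d ^ a, 8);		\
	c += d; b = rotl32(b ^ c, 7);		\
} while (0)

/*
 * Produce @nblocks 64-byte blocks of ChaCha20 keystream from @key and an
 * 8-byte block @counter, with a zero nonce, advancing the counter -- the
 * same contract as __arch_chacha20_blocks_nostack().
 */
static void chacha20_blocks(uint8_t *out, const uint32_t key[8],
			    uint32_t counter[2], size_t nblocks)
{
	while (nblocks--) {
		uint32_t x[16] = {
			/* "expand 32-byte k" */
			0x61707865, 0x3320646e, 0x79622d32, 0x6b206574,
			key[0], key[1], key[2], key[3],
			key[4], key[5], key[6], key[7],
			counter[0], counter[1], 0, 0,
		};
		uint32_t init[16];
		int i;

		memcpy(init, x, sizeof(init));
		for (i = 0; i < 20; i += 2) {
			/* Column round. */
			QR(x[0], x[4], x[8],  x[12]);
			QR(x[1], x[5], x[9],  x[13]);
			QR(x[2], x[6], x[10], x[14]);
			QR(x[3], x[7], x[11], x[15]);
			/* Diagonal round. */
			QR(x[0], x[5], x[10], x[15]);
			QR(x[1], x[6], x[11], x[12]);
			QR(x[2], x[7], x[8],  x[13]);
			QR(x[3], x[4], x[9],  x[14]);
		}
		for (i = 0; i < 16; ++i) {
			uint32_t v = x[i] + init[i];
			memcpy(out + 4 * i, &v, 4); /* little-endian host */
		}
		out += 64;
		if (!++counter[0])	/* 64-bit counter increment. */
			++counter[1];
	}
}
```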

Reviewed-by: Samuel Neves <sneves@dei.uc.pt> # for vgetrandom-chacha.S
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
---
 arch/x86/Kconfig                        |   1 +
 arch/x86/entry/vdso/Makefile            |   3 +-
 arch/x86/entry/vdso/vdso.lds.S          |   2 +
 arch/x86/entry/vdso/vgetrandom-chacha.S | 177 ++++++++++++++++++++++++
 arch/x86/entry/vdso/vgetrandom.c        |  17 +++
 arch/x86/include/asm/vdso/getrandom.h   |  55 ++++++++
 arch/x86/include/asm/vdso/vsyscall.h    |   2 +
 arch/x86/include/asm/vvar.h             |  16 +++
 8 files changed, 272 insertions(+), 1 deletion(-)
 create mode 100644 arch/x86/entry/vdso/vgetrandom-chacha.S
 create mode 100644 arch/x86/entry/vdso/vgetrandom.c
 create mode 100644 arch/x86/include/asm/vdso/getrandom.h

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 67745ceab0db..357148c4a3a4 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -269,6 +269,7 @@ config X86
 	select HAVE_UNSTABLE_SCHED_CLOCK
 	select HAVE_USER_RETURN_NOTIFIER
 	select HAVE_GENERIC_VDSO
+	select VDSO_GETRANDOM			if X86_64
 	select HOTPLUG_SMT			if SMP
 	select IRQ_FORCED_THREADING
 	select NEED_PER_CPU_EMBED_FIRST_CHUNK
diff --git a/arch/x86/entry/vdso/Makefile b/arch/x86/entry/vdso/Makefile
index 3e88b9df8c8f..2de64e52236a 100644
--- a/arch/x86/entry/vdso/Makefile
+++ b/arch/x86/entry/vdso/Makefile
@@ -27,7 +27,7 @@ VDSO32-$(CONFIG_X86_32)		:= y
 VDSO32-$(CONFIG_IA32_EMULATION)	:= y
 
 # files to link into the vdso
-vobjs-y := vdso-note.o vclock_gettime.o vgetcpu.o
+vobjs-y := vdso-note.o vclock_gettime.o vgetcpu.o vgetrandom.o vgetrandom-chacha.o
 vobjs32-y := vdso32/note.o vdso32/system_call.o vdso32/sigreturn.o
 vobjs32-y += vdso32/vclock_gettime.o
 vobjs-$(CONFIG_X86_SGX)	+= vsgx.o
@@ -104,6 +104,7 @@ CFLAGS_REMOVE_vclock_gettime.o = -pg
 CFLAGS_REMOVE_vdso32/vclock_gettime.o = -pg
 CFLAGS_REMOVE_vgetcpu.o = -pg
 CFLAGS_REMOVE_vsgx.o = -pg
+CFLAGS_REMOVE_vgetrandom.o = -pg
 
 #
 # X32 processes use x32 vDSO to access 64bit kernel data.
diff --git a/arch/x86/entry/vdso/vdso.lds.S b/arch/x86/entry/vdso/vdso.lds.S
index 4bf48462fca7..1919cc39277e 100644
--- a/arch/x86/entry/vdso/vdso.lds.S
+++ b/arch/x86/entry/vdso/vdso.lds.S
@@ -28,6 +28,8 @@ VERSION {
 		clock_getres;
 		__vdso_clock_getres;
 		__vdso_sgx_enter_enclave;
+		getrandom;
+		__vdso_getrandom;
 	local: *;
 	};
 }
diff --git a/arch/x86/entry/vdso/vgetrandom-chacha.S b/arch/x86/entry/vdso/vgetrandom-chacha.S
new file mode 100644
index 000000000000..91fbb7ac7af4
--- /dev/null
+++ b/arch/x86/entry/vdso/vgetrandom-chacha.S
@@ -0,0 +1,177 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2022 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
+ */
+
+#include <linux/linkage.h>
+#include <asm/frame.h>
+
+.section	.rodata.cst16.CONSTANTS, "aM", @progbits, 16
+.align 16
+CONSTANTS:	.octa 0x6b20657479622d323320646e61707865
+.text
+
+/*
+ * Very basic SSE2 implementation of ChaCha20. Produces a given positive number
+ * of blocks of output with a nonce of 0, taking an input key and 8-byte
+ * counter. Importantly does not spill to the stack. Its arguments are:
+ *
+ *	rdi: output bytes
+ *	rsi: 32-byte key input
+ *	rdx: 8-byte counter input/output
+ *	rcx: number of 64-byte blocks to write to output
+ */
+SYM_FUNC_START(__arch_chacha20_blocks_nostack)
+
+#define output  %rdi
+#define key     %rsi
+#define counter %rdx
+#define nblocks %rcx
+#define i       %al
+#define state0  %xmm0
+#define state1  %xmm1
+#define state2  %xmm2
+#define state3  %xmm3
+#define copy0   %xmm4
+#define copy1   %xmm5
+#define copy2   %xmm6
+#define copy3   %xmm7
+#define temp    %xmm8
+#define one     %xmm9
+
+	/* copy0 = "expand 32-byte k" */
+	movaps		CONSTANTS(%rip),copy0
+	/* copy1,copy2 = key */
+	movups		0x00(key),copy1
+	movups		0x10(key),copy2
+	/* copy3 = counter || zero nonce */
+	movq		0x00(counter),copy3
+	/* one = 1 || 0 */
+	movq		$1,%rax
+	movq		%rax,one
+
+.Lblock:
+	/* state0,state1,state2,state3 = copy0,copy1,copy2,copy3 */
+	movdqa		copy0,state0
+	movdqa		copy1,state1
+	movdqa		copy2,state2
+	movdqa		copy3,state3
+
+	movb		$10,i
+.Lpermute:
+	/* state0 += state1, state3 = rotl32(state3 ^ state0, 16) */
+	paddd		state1,state0
+	pxor		state0,state3
+	movdqa		state3,temp
+	pslld		$16,temp
+	psrld		$16,state3
+	por		temp,state3
+
+	/* state2 += state3, state1 = rotl32(state1 ^ state2, 12) */
+	paddd		state3,state2
+	pxor		state2,state1
+	movdqa		state1,temp
+	pslld		$12,temp
+	psrld		$20,state1
+	por		temp,state1
+
+	/* state0 += state1, state3 = rotl32(state3 ^ state0, 8) */
+	paddd		state1,state0
+	pxor		state0,state3
+	movdqa		state3,temp
+	pslld		$8,temp
+	psrld		$24,state3
+	por		temp,state3
+
+	/* state2 += state3, state1 = rotl32(state1 ^ state2, 7) */
+	paddd		state3,state2
+	pxor		state2,state1
+	movdqa		state1,temp
+	pslld		$7,temp
+	psrld		$25,state1
+	por		temp,state1
+
+	/* state1[0,1,2,3] = state1[0,3,2,1] */
+	pshufd		$0x39,state1,state1
+	/* state2[0,1,2,3] = state2[1,0,3,2] */
+	pshufd		$0x4e,state2,state2
+	/* state3[0,1,2,3] = state3[2,1,0,3] */
+	pshufd		$0x93,state3,state3
+
+	/* state0 += state1, state3 = rotl32(state3 ^ state0, 16) */
+	paddd		state1,state0
+	pxor		state0,state3
+	movdqa		state3,temp
+	pslld		$16,temp
+	psrld		$16,state3
+	por		temp,state3
+
+	/* state2 += state3, state1 = rotl32(state1 ^ state2, 12) */
+	paddd		state3,state2
+	pxor		state2,state1
+	movdqa		state1,temp
+	pslld		$12,temp
+	psrld		$20,state1
+	por		temp,state1
+
+	/* state0 += state1, state3 = rotl32(state3 ^ state0, 8) */
+	paddd		state1,state0
+	pxor		state0,state3
+	movdqa		state3,temp
+	pslld		$8,temp
+	psrld		$24,state3
+	por		temp,state3
+
+	/* state2 += state3, state1 = rotl32(state1 ^ state2, 7) */
+	paddd		state3,state2
+	pxor		state2,state1
+	movdqa		state1,temp
+	pslld		$7,temp
+	psrld		$25,state1
+	por		temp,state1
+
+	/* state1[0,1,2,3] = state1[2,1,0,3] */
+	pshufd		$0x93,state1,state1
+	/* state2[0,1,2,3] = state2[1,0,3,2] */
+	pshufd		$0x4e,state2,state2
+	/* state3[0,1,2,3] = state3[0,3,2,1] */
+	pshufd		$0x39,state3,state3
+
+	decb		i
+	jnz		.Lpermute
+
+	/* output0 = state0 + copy0 */
+	paddd		copy0,state0
+	movups		state0,0x00(output)
+	/* output1 = state1 + copy1 */
+	paddd		copy1,state1
+	movups		state1,0x10(output)
+	/* output2 = state2 + copy2 */
+	paddd		copy2,state2
+	movups		state2,0x20(output)
+	/* output3 = state3 + copy3 */
+	paddd		copy3,state3
+	movups		state3,0x30(output)
+
+	/* ++copy3.counter */
+	paddq		one,copy3
+
+	/* output += 64, --nblocks */
+	addq		$64,output
+	decq		nblocks
+	jnz		.Lblock
+
+	/* counter = copy3.counter */
+	movq		copy3,0x00(counter)
+
+	/* Zero out the potentially sensitive regs, in case nothing uses these again. */
+	pxor		state0,state0
+	pxor		state1,state1
+	pxor		state2,state2
+	pxor		state3,state3
+	pxor		copy1,copy1
+	pxor		copy2,copy2
+	pxor		temp,temp
+
+	ret
+SYM_FUNC_END(__arch_chacha20_blocks_nostack)
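[Editorial note: the SSE2 routine above computes standard ChaCha20 blocks with a zero nonce and a 64-bit counter in state words 12-13. A plain-C sketch of the same construction, useful for checking the rotation amounts and round structure against the assembly, is below. This is illustrative only and not part of the patch; it assumes a little-endian host, as on x86.]

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* One ChaCha quarter round: the same add/xor/rotate sequence the SSE2
 * code performs columnwise with paddd/pxor/pslld/psrld/por. */
#define QR(a, b, c, d) do { \
	a += b; d ^= a; d = (d << 16) | (d >> 16); \
	c += d; b ^= c; b = (b << 12) | (b >> 20); \
	a += b; d ^= a; d = (d <<  8) | (d >> 24); \
	c += d; b ^= c; b = (b <<  7) | (b >> 25); \
} while (0)

/* Produce nblocks 64-byte ChaCha20 blocks with nonce = 0, reading and
 * updating an 8-byte counter, mirroring the assembly's contract. */
static void chacha20_blocks(uint8_t *out, const uint32_t key[8],
			    uint64_t *counter, size_t nblocks)
{
	while (nblocks--) {
		uint32_t x[16], s[16];
		int i;

		/* "expand 32-byte k" || key || counter || zero nonce */
		s[0] = 0x61707865; s[1] = 0x3320646e;
		s[2] = 0x79622d32; s[3] = 0x6b206574;
		memcpy(&s[4], key, 32);
		s[12] = (uint32_t)*counter;
		s[13] = (uint32_t)(*counter >> 32);
		s[14] = 0;
		s[15] = 0;
		memcpy(x, s, sizeof(x));

		for (i = 0; i < 10; i++) {
			/* Column rounds */
			QR(x[0], x[4], x[8],  x[12]);
			QR(x[1], x[5], x[9],  x[13]);
			QR(x[2], x[6], x[10], x[14]);
			QR(x[3], x[7], x[11], x[15]);
			/* Diagonal rounds */
			QR(x[0], x[5], x[10], x[15]);
			QR(x[1], x[6], x[11], x[12]);
			QR(x[2], x[7], x[8],  x[13]);
			QR(x[3], x[4], x[9],  x[14]);
		}

		/* Feed-forward and serialize (little-endian assumed). */
		for (i = 0; i < 16; i++) {
			uint32_t v = x[i] + s[i];
			memcpy(out + 4 * i, &v, 4);
		}
		out += 64;
		++*counter;
	}
}
```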
diff --git a/arch/x86/entry/vdso/vgetrandom.c b/arch/x86/entry/vdso/vgetrandom.c
new file mode 100644
index 000000000000..6045ded5da90
--- /dev/null
+++ b/arch/x86/entry/vdso/vgetrandom.c
@@ -0,0 +1,17 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2022 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
+ */
+#include <linux/types.h>
+
+#include "../../../../lib/vdso/getrandom.c"
+
+ssize_t __vdso_getrandom(void *buffer, size_t len, unsigned int flags, void *state);
+
+ssize_t __vdso_getrandom(void *buffer, size_t len, unsigned int flags, void *state)
+{
+	return __cvdso_getrandom(buffer, len, flags, state);
+}
+
+ssize_t getrandom(void *, size_t, unsigned int, void *)
+	__attribute__((weak, alias("__vdso_getrandom")));
diff --git a/arch/x86/include/asm/vdso/getrandom.h b/arch/x86/include/asm/vdso/getrandom.h
new file mode 100644
index 000000000000..a2bb2dc4443e
--- /dev/null
+++ b/arch/x86/include/asm/vdso/getrandom.h
@@ -0,0 +1,55 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2022 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
+ */
+#ifndef __ASM_VDSO_GETRANDOM_H
+#define __ASM_VDSO_GETRANDOM_H
+
+#ifndef __ASSEMBLY__
+
+#include <asm/unistd.h>
+#include <asm/vvar.h>
+
+/**
+ * getrandom_syscall - invoke the getrandom() syscall
+ * @buffer:	destination buffer to fill with random bytes
+ * @len:	size of @buffer in bytes
+ * @flags:	zero or more GRND_* flags
+ * Returns the number of random bytes written to @buffer, or a negative value indicating an error.
+ */
+static __always_inline ssize_t getrandom_syscall(void *buffer, size_t len, unsigned int flags)
+{
+	long ret;
+
+	asm ("syscall" : "=a" (ret) :
+	     "0" (__NR_getrandom), "D" (buffer), "S" (len), "d" (flags) :
+	     "rcx", "r11", "memory");
+
+	return ret;
+}
+
+#define __vdso_rng_data (VVAR(_vdso_rng_data))
+
+static __always_inline const struct vdso_rng_data *__arch_get_vdso_rng_data(void)
+{
+	if (__vdso_data->clock_mode == VDSO_CLOCKMODE_TIMENS)
+		return (void *)&__vdso_rng_data + ((void *)&__timens_vdso_data - (void *)&__vdso_data);
+	return &__vdso_rng_data;
+}
+
+/**
+ * __arch_chacha20_blocks_nostack - generate ChaCha20 stream without using the stack
+ * @dst_bytes:	a destination buffer to hold @nblocks * 64 bytes of output
+ * @key:	32-byte input key
+ * @counter:	8-byte counter, read on input and updated on return
+ * @nblocks:	the number of blocks to generate
+ *
+ * Generates a given positive number of blocks of ChaCha20 output with nonce=0, and does not write to
+ * any stack or memory outside of the parameters passed to it. This way, there's no concern about
+ * stack data leaking into forked child processes.
+ */
+extern void __arch_chacha20_blocks_nostack(u8 *dst_bytes, const u32 *key, u32 *counter, size_t nblocks);
+
+#endif /* !__ASSEMBLY__ */
+
+#endif /* __ASM_VDSO_GETRANDOM_H */
diff --git a/arch/x86/include/asm/vdso/vsyscall.h b/arch/x86/include/asm/vdso/vsyscall.h
index be199a9b2676..71c56586a22f 100644
--- a/arch/x86/include/asm/vdso/vsyscall.h
+++ b/arch/x86/include/asm/vdso/vsyscall.h
@@ -11,6 +11,8 @@
 #include <asm/vvar.h>
 
 DEFINE_VVAR(struct vdso_data, _vdso_data);
+DEFINE_VVAR_SINGLE(struct vdso_rng_data, _vdso_rng_data);
+
 /*
  * Update the vDSO data page to keep in sync with kernel timekeeping.
  */
diff --git a/arch/x86/include/asm/vvar.h b/arch/x86/include/asm/vvar.h
index 183e98e49ab9..9d9af37f7cab 100644
--- a/arch/x86/include/asm/vvar.h
+++ b/arch/x86/include/asm/vvar.h
@@ -26,6 +26,8 @@
  */
 #define DECLARE_VVAR(offset, type, name) \
 	EMIT_VVAR(name, offset)
+#define DECLARE_VVAR_SINGLE(offset, type, name) \
+	EMIT_VVAR(name, offset)
 
 #else
 
@@ -37,6 +39,10 @@ extern char __vvar_page;
 	extern type timens_ ## name[CS_BASES]				\
 	__attribute__((visibility("hidden")));				\
 
+#define DECLARE_VVAR_SINGLE(offset, type, name)				\
+	extern type vvar_ ## name					\
+	__attribute__((visibility("hidden")));				\
+
 #define VVAR(name) (vvar_ ## name)
 #define TIMENS(name) (timens_ ## name)
 
@@ -44,12 +50,22 @@ extern char __vvar_page;
 	type name[CS_BASES]						\
 	__attribute__((section(".vvar_" #name), aligned(16))) __visible
 
+#define DEFINE_VVAR_SINGLE(type, name)					\
+	type name							\
+	__attribute__((section(".vvar_" #name), aligned(16))) __visible
+
 #endif
 
 /* DECLARE_VVAR(offset, type, name) */
 
 DECLARE_VVAR(128, struct vdso_data, _vdso_data)
 
+#if !defined(_SINGLE_DATA)
+#define _SINGLE_DATA
+DECLARE_VVAR_SINGLE(640, struct vdso_rng_data, _vdso_rng_data)
+#endif
+
 #undef DECLARE_VVAR
+#undef DECLARE_VVAR_SINGLE
 
 #endif
-- 
2.38.1
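[Editorial note: the getrandom_syscall() inline-asm helper in the header above issues the same system call that the vDSO function falls back to. From ordinary userspace, the equivalent raw call can be made through glibc's syscall(2) wrapper, as in this minimal sketch (illustrative only; SYS_getrandom has been available since Linux 3.17):]

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>
#include <sys/syscall.h>
#include <sys/types.h>
#include <unistd.h>

/* Fill buf with len random bytes via the raw getrandom() syscall,
 * bypassing any libc getrandom() wrapper. Returns the number of bytes
 * written, or -1 with errno set on failure. */
static ssize_t getrandom_raw(void *buf, size_t len, unsigned int flags)
{
	return syscall(SYS_getrandom, buf, len, flags);
}
```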


^ permalink raw reply related	[flat|nested] 37+ messages in thread

* Re: [PATCH v10 1/4] random: add vgetrandom_alloc() syscall
  2022-11-29 21:06 ` [PATCH v10 1/4] random: add vgetrandom_alloc() syscall Jason A. Donenfeld
@ 2022-11-29 22:02   ` Thomas Gleixner
  2022-11-30  0:59     ` Jason A. Donenfeld
  2022-11-30 22:39     ` David Laight
  2022-11-30 10:51   ` Florian Weimer
  1 sibling, 2 replies; 37+ messages in thread
From: Thomas Gleixner @ 2022-11-29 22:02 UTC (permalink / raw)
  To: Jason A. Donenfeld, linux-kernel, patches
  Cc: Jason A. Donenfeld, linux-crypto, linux-api, x86,
	Greg Kroah-Hartman, Adhemerval Zanella Netto,
	Carlos O'Donell, Florian Weimer, Arnd Bergmann,
	Christian Brauner

Jason!

On Tue, Nov 29 2022 at 22:06, Jason A. Donenfeld wrote:
> +
> +/********************************************************************
> + *
> + * vDSO support helpers.
> + *
> + * The actual vDSO function is defined over in lib/vdso/getrandom.c,
> + * but this section contains the kernel-mode helpers to support that.
> + *
> + ********************************************************************/
> +
> +#ifdef CONFIG_VGETRANDOM_ALLOC_SYSCALL
> +/**
> + * vgetrandom_alloc - allocate opaque states for use with vDSO getrandom().
> + *
> + * @num: on input, a pointer to a suggested hint of how many states to
> + * allocate, and on output the number of states actually allocated.
> + *
> + * @size_per_each: the size of each state allocated, so that the caller can
> + * split up the returned allocation into individual states.
> + *
> + * @flags: currently always zero.

NIT!

I personally prefer and ask for it in stuff I maintain:

 * @num:		On input, a pointer to a suggested hint of how many states to
 *			allocate, and on output the number of states actually allocated.
 *
 * @size_per_each: 	The size of each state allocated, so that the caller can
 * 			split up the returned allocation into individual states.
 *
 * @flags: 		Currently always zero.

But your turf :)

> + *
> + * The getrandom() vDSO function in userspace requires an opaque state, which
> + * this function allocates by mapping a certain number of special pages into
> + * the calling process. It takes a hint as to the number of opaque states
> + * desired, and provides the caller with the number of opaque states actually
> + * allocated, the size of each one in bytes, and the address of the first
> + * state.

make W=1 rightfully complains about:

> +

drivers/char/random.c:182: warning: bad line: 

> + * Returns a pointer to the first state in the allocation.

I have serious doubts that this statement is correct.

Think about this comment and documentation as a boiler plate for the
mandatory man page for a new syscall (hint...)

> + *
> + */

and W=1 also complains rightfully here:

> +SYSCALL_DEFINE3(vgetrandom_alloc, unsigned int __user *, num,
> +		unsigned int __user *, size_per_each, unsigned int, flags)

drivers/char/random.c:188: warning: expecting prototype for vgetrandom_alloc(). Prototype was for sys_vgetrandom_alloc() instead

> +{
> diff --git a/include/vdso/getrandom.h b/include/vdso/getrandom.h
> new file mode 100644
> index 000000000000..5f04c8bf4bd4
> --- /dev/null
> +++ b/include/vdso/getrandom.h
> @@ -0,0 +1,24 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2022 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
> + */
> +
> +#ifndef _VDSO_GETRANDOM_H
> +#define _VDSO_GETRANDOM_H
> +
> +#include <crypto/chacha.h>
> +
> +struct vgetrandom_state {
> +	union {
> +		struct {
> +			u8 batch[CHACHA_BLOCK_SIZE * 3 / 2];
> +			u32 key[CHACHA_KEY_SIZE / sizeof(u32)];
> +		};
> +		u8 batch_key[CHACHA_BLOCK_SIZE * 2];
> +	};
> +	unsigned long generation;
> +	u8 pos;
> +	bool in_use;
> +};

Again, please make this properly tabular:

struct vgetrandom_state {
	union {
		struct {
			u8	batch[CHACHA_BLOCK_SIZE * 3 / 2];
			u32	key[CHACHA_KEY_SIZE / sizeof(u32)];
		};
		u8	batch_key[CHACHA_BLOCK_SIZE * 2];
	};
	unsigned long	generation;
	u8		pos;
	bool		in_use;
};

Plus some kernel doc which explains what this is about.

Thanks,

        tglx

* Re: [PATCH v10 3/4] random: introduce generic vDSO getrandom() implementation
  2022-11-29 21:06 ` [PATCH v10 3/4] random: introduce generic vDSO getrandom() implementation Jason A. Donenfeld
@ 2022-11-29 22:42   ` Thomas Gleixner
  2022-11-30  1:09     ` Jason A. Donenfeld
  2022-11-30 10:44   ` Florian Weimer
  1 sibling, 1 reply; 37+ messages in thread
From: Thomas Gleixner @ 2022-11-29 22:42 UTC (permalink / raw)
  To: Jason A. Donenfeld, linux-kernel, patches
  Cc: Jason A. Donenfeld, linux-crypto, linux-api, x86,
	Greg Kroah-Hartman, Adhemerval Zanella Netto,
	Carlos O'Donell, Florian Weimer, Arnd Bergmann,
	Christian Brauner

Jason!

On Tue, Nov 29 2022 at 22:06, Jason A. Donenfeld wrote:
> +/**
> + * struct vdso_rng_data - vdso RNG state information
> + * @generation:	a counter representing the number of RNG reseeds

A counter

> + * @is_ready:	whether the RNG is initialized

Signals whether ...

> + */
> +struct vdso_rng_data {
> +	unsigned long	generation;
> +	bool		is_ready;
> +};
> +
> +
> +#define MEMCPY_AND_ZERO_SRC(type, dst, src, len) do { \
> +	while (len >= sizeof(type)) { \
> +		__put_unaligned_t(type, __get_unaligned_t(type, src), dst); \
> +		__put_unaligned_t(type, 0, src); \
> +		dst += sizeof(type); \
> +		src += sizeof(type); \
> +		len -= sizeof(type); \
> +	} \
> +} while (0)

I'd appreciate it if you go back to the code I suggested to you and
compare and contrast it in terms of readability.

> +
> +static void memcpy_and_zero_src(void *dst, void *src, size_t len)
> +{
> +	if (IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS)) {
> +		if (IS_ENABLED(CONFIG_64BIT))
> +			MEMCPY_AND_ZERO_SRC(u64, dst, src, len);
> +		MEMCPY_AND_ZERO_SRC(u32, dst, src, len);
> +		MEMCPY_AND_ZERO_SRC(u16, dst, src, len);
> +	}
> +	MEMCPY_AND_ZERO_SRC(u8, dst, src, len);
> +}
> +
> +/**
> + * __cvdso_getrandom_data - generic vDSO implementation of getrandom() syscall
> + * @rng_info:		describes state of kernel RNG, memory shared with kernel
> + * @buffer:		destination buffer to fill with random bytes
> + * @len:		size of @buffer in bytes
> + * @flags:		zero or more GRND_* flags
> + * @opaque_state:	a pointer to an opaque state area

NIT. Please start the explanations with an uppercase letter

> +		/*
> +		 * Set @state->pos to beyond the end of the batch, so that the batch is refilled
> +		 * using the new key.
> +		 */
> +		state->pos = sizeof(state->batch);
> +	}
> +

This one is odd:

> +	len = ret;

@ret is not modified after the initialization at the top of the
function:

> +	ssize_t ret = min_t(size_t, INT_MAX & PAGE_MASK /* = MAX_RW_COUNT */, len);

so I really had to go up a page and figure out what the story is.

> +
> +	/* Since the batch was just refilled, set the position back to 0 to indicate a full batch. */
> +	state->pos = 0;
> +	goto more_batch;

Aside of the nitpicks above, thank you very much for making this
comprehensible.

The comments are well done and appreciated and I'm pretty sure that this
part:

> +	in_use = READ_ONCE(state->in_use);
> +	if (unlikely(in_use))
> +		goto fallback_syscall;
> +	WRITE_ONCE(state->in_use, true);

was very much induced by writing those comments :)

Thanks,

        tglx
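[Editorial note: the MEMCPY_AND_ZERO_SRC helper under discussion copies bytes out of the vDSO state while simultaneously erasing the source, so consumed random bytes do not linger in the long-lived state buffer. Ignoring the word-size fast paths, its effect is equivalent to this byte-granularity C sketch (illustrative only, not the patch's code):]

```c
#include <stddef.h>

/* Copy len bytes from src to dst, zeroing src as it goes, so that
 * already-consumed batch bytes are destroyed immediately after use. */
static void memcpy_and_zero_src(void *dst, void *src, size_t len)
{
	unsigned char *d = dst, *s = src;

	while (len--) {
		*d++ = *s;
		*s++ = 0;
	}
}
```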

* Re: [PATCH v10 4/4] x86: vdso: Wire up getrandom() vDSO implementation
  2022-11-29 21:06 ` [PATCH v10 4/4] x86: vdso: Wire up getrandom() vDSO implementation Jason A. Donenfeld
@ 2022-11-29 22:52   ` Thomas Gleixner
  2022-11-30  1:11     ` Jason A. Donenfeld
  2022-11-30  5:22   ` Eric Biggers
  1 sibling, 1 reply; 37+ messages in thread
From: Thomas Gleixner @ 2022-11-29 22:52 UTC (permalink / raw)
  To: Jason A. Donenfeld, linux-kernel, patches
  Cc: Jason A. Donenfeld, linux-crypto, linux-api, x86,
	Greg Kroah-Hartman, Adhemerval Zanella Netto,
	Carlos O'Donell, Florian Weimer, Arnd Bergmann,
	Christian Brauner, Samuel Neves

On Tue, Nov 29 2022 at 22:06, Jason A. Donenfeld wrote:
> Hook up the generic vDSO implementation to the x86 vDSO data page. Since
> the existing vDSO infrastructure is heavily based on the timekeeping
> functionality, which works over arrays of bases, a new macro is
> introduced for vvars that are not arrays.
>
> The vDSO function requires a ChaCha20 implementation that does not write
> to the stack, yet can still do an entire ChaCha20 permutation, so
> provide this using SSE2, since this is userland code that must work on
> all x86-64 processors.

Way more consumable and looks about right. Please take your time and
give others a chance to look at this lot before rushing out v11.

Thanks,

        tglx

* Re: [PATCH v10 1/4] random: add vgetrandom_alloc() syscall
  2022-11-29 22:02   ` Thomas Gleixner
@ 2022-11-30  0:59     ` Jason A. Donenfeld
  2022-11-30  1:37       ` Thomas Gleixner
  2022-11-30 22:39     ` David Laight
  1 sibling, 1 reply; 37+ messages in thread
From: Jason A. Donenfeld @ 2022-11-30  0:59 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: linux-kernel, patches, linux-crypto, linux-api, x86,
	Greg Kroah-Hartman, Adhemerval Zanella Netto,
	Carlos O'Donell, Florian Weimer, Arnd Bergmann,
	Christian Brauner

Hi Thomas,

Thanks again for the big review. Comments inline below.

On Tue, Nov 29, 2022 at 11:02:29PM +0100, Thomas Gleixner wrote:
> > +/**
> > + * vgetrandom_alloc - allocate opaque states for use with vDSO getrandom().
> > + *
> > + * @num: on input, a pointer to a suggested hint of how many states to
> > + * allocate, and on output the number of states actually allocated.
> > + *
> > + * @size_per_each: the size of each state allocated, so that the caller can
> > + * split up the returned allocation into individual states.
> > + *
> > + * @flags: currently always zero.
> 
> NIT!
> 
> I personally prefer and ask for it in stuff I maintain:
> 
>  * @num:		On input, a pointer to a suggested hint of how many states to
>  *			allocate, and on output the number of states actually allocated.
>  *
>  * @size_per_each: 	The size of each state allocated, so that the caller can
>  * 			split up the returned allocation into individual states.
>  *
>  * @flags: 		Currently always zero.
> 
> But your turf :)

Hm. Caps and punctuation seem mostly missing in kernel/time/, though it
is that way in some places, so I'll do it with caps and punctuation.
Presumably that's the "newer" style you prefer, though I didn't look at
the dates in git-blame to confirm that supposition.

> 
> > + *
> > + * The getrandom() vDSO function in userspace requires an opaque state, which
> > + * this function allocates by mapping a certain number of special pages into
> > + * the calling process. It takes a hint as to the number of opaque states
> > + * desired, and provides the caller with the number of opaque states actually
> > + * allocated, the size of each one in bytes, and the address of the first
> > + * state.
> 
> make W=1 rightfully complains about:
> 
> > +
> 
> drivers/char/random.c:182: warning: bad line: 
> 
> > + * Returns a pointer to the first state in the allocation.
> 
> I have serious doubts that this statement is correct.

"Returns the address of the first state in the allocation" is better I
guess.

> and W=1 also complains rightfully here:
> 
> > +SYSCALL_DEFINE3(vgetrandom_alloc, unsigned int __user *, num,
> > +		unsigned int __user *, size_per_each, unsigned int, flags)
> 
> drivers/char/random.c:188: warning: expecting prototype for vgetrandom_alloc(). Prototype was for sys_vgetrandom_alloc() instead

Squinted at a lot of headers before realizing that's a kernel-doc
warning. Fixed, thanks.

> > +#ifndef _VDSO_GETRANDOM_H
> > +#define _VDSO_GETRANDOM_H
> > +
> > +#include <crypto/chacha.h>
> > +
> > +struct vgetrandom_state {
> > +	union {
> > +		struct {
> > +			u8 batch[CHACHA_BLOCK_SIZE * 3 / 2];
> > +			u32 key[CHACHA_KEY_SIZE / sizeof(u32)];
> > +		};
> > +		u8 batch_key[CHACHA_BLOCK_SIZE * 2];
> > +	};
> > +	unsigned long generation;
> > +	u8 pos;
> > +	bool in_use;
> > +};
> 
> Again, please make this properly tabular:
> 
> struct vgetrandom_state {
> 	union {
> 		struct {
> 			u8	batch[CHACHA_BLOCK_SIZE * 3 / 2];
> 			u32	key[CHACHA_KEY_SIZE / sizeof(u32)];
> 		};
> 		u8	batch_key[CHACHA_BLOCK_SIZE * 2];
> 	};
> 	unsigned long	generation;
> 	u8		pos;
> 	bool		in_use;
> };
> 
> Plus some kernel doc which explains what this is about.

Will do. Though, I'm going to move this to the vDSO commit, and for the
syscall commit, which needs the struct to merely exist, I'll have no
members in it. This should make review a bit easier.

Jason

* Re: [PATCH v10 3/4] random: introduce generic vDSO getrandom() implementation
  2022-11-29 22:42   ` Thomas Gleixner
@ 2022-11-30  1:09     ` Jason A. Donenfeld
  0 siblings, 0 replies; 37+ messages in thread
From: Jason A. Donenfeld @ 2022-11-30  1:09 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: linux-kernel, patches, linux-crypto, linux-api, x86,
	Greg Kroah-Hartman, Adhemerval Zanella Netto,
	Carlos O'Donell, Florian Weimer, Arnd Bergmann,
	Christian Brauner

Hi Thomas,

On Tue, Nov 29, 2022 at 11:42:16PM +0100, Thomas Gleixner wrote:
> Jason!
> 
> On Tue, Nov 29 2022 at 22:06, Jason A. Donenfeld wrote:
> > +/**
> > + * struct vdso_rng_data - vdso RNG state information
> > + * @generation:	a counter representing the number of RNG reseeds

FYI, every other struct in this file uses lower case and no punctuation,
so I'll follow that for this one.

> A counter
> 
> > + * @is_ready:	whether the RNG is initialized
> 
> Signals whether ...

Ack.

> > + */
> > +struct vdso_rng_data {
> > +	unsigned long	generation;
> > +	bool		is_ready;
> > +};
> > +
> > +
> > +#define MEMCPY_AND_ZERO_SRC(type, dst, src, len) do { \
> > +	while (len >= sizeof(type)) { \
> > +		__put_unaligned_t(type, __get_unaligned_t(type, src), dst); \
> > +		__put_unaligned_t(type, 0, src); \
> > +		dst += sizeof(type); \
> > +		src += sizeof(type); \
> > +		len -= sizeof(type); \
> > +	} \
> > +} while (0)
> 
> I'd appreciate it if you go back to the code I suggested to you and
> compare and contrast it in terms of readability.

Ahh, you like to align your \. Okay, I'll do that. I also added a do {
... } while (0) wrapper around it, but I think it makes sense to keep
that so that there aren't stray semicolons.

> > +/**
> > + * __cvdso_getrandom_data - generic vDSO implementation of getrandom() syscall
> > + * @rng_info:		describes state of kernel RNG, memory shared with kernel
> > + * @buffer:		destination buffer to fill with random bytes
> > + * @len:		size of @buffer in bytes
> > + * @flags:		zero or more GRND_* flags
> > + * @opaque_state:	a pointer to an opaque state area
> 
> NIT. Please start the explanations with an uppercase letter

Okay. Will do everywhere in this patchset except for in vdso/datapage.h.

 
> This one is odd:
> 
> > +	len = ret;
> 
> @ret is not modified after the initialization at the top of the
> function:
> 
> > +	ssize_t ret = min_t(size_t, INT_MAX & PAGE_MASK /* = MAX_RW_COUNT */, len);
> 
> so I really had to go up a page and figure out what the story is.

If the generation changes, and it's tried again, the whole random buffer
is filled again, so that has to be reset. I'll leave a comment.

> > +
> > +	/* Since the batch was just refilled, set the position back to 0 to indicate a full batch. */
> > +	state->pos = 0;
> > +	goto more_batch;
> 
> Aside of the nitpicks above, thank you very much for making this
> comprehensible.

Thanks for nudging me in the right direction.

> 
> The comments are well done and appreciated and I'm pretty sure that this
> part:
> 
> > +	in_use = READ_ONCE(state->in_use);
> > +	if (unlikely(in_use))
> > +		goto fallback_syscall;
> > +	WRITE_ONCE(state->in_use, true);
> 
> was very much induced by writing those comments :)

Well, not exactly, unfortunately. Adhemerval -- the glibc maintainer
working on the libc side of this -- and I have been discussing signal
handling craziness and lots of different schemes over the last week+,
and this rather simple thing is the result of those conversations.

Jason

* Re: [PATCH v10 4/4] x86: vdso: Wire up getrandom() vDSO implementation
  2022-11-29 22:52   ` Thomas Gleixner
@ 2022-11-30  1:11     ` Jason A. Donenfeld
  0 siblings, 0 replies; 37+ messages in thread
From: Jason A. Donenfeld @ 2022-11-30  1:11 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: linux-kernel, patches, linux-crypto, linux-api, x86,
	Greg Kroah-Hartman, Adhemerval Zanella Netto,
	Carlos O'Donell, Florian Weimer, Arnd Bergmann,
	Christian Brauner, Samuel Neves

Hi Thomas,

On Tue, Nov 29, 2022 at 11:52:05PM +0100, Thomas Gleixner wrote:
> On Tue, Nov 29 2022 at 22:06, Jason A. Donenfeld wrote:
> > Hook up the generic vDSO implementation to the x86 vDSO data page. Since
> > the existing vDSO infrastructure is heavily based on the timekeeping
> > functionality, which works over arrays of bases, a new macro is
> > introduced for vvars that are not arrays.
> >
> > The vDSO function requires a ChaCha20 implementation that does not write
> > to the stack, yet can still do an entire ChaCha20 permutation, so
> > provide this using SSE2, since this is userland code that must work on
> > all x86-64 processors.
> 
> Way more consumable and looks about right.

Good.

> Please take your time and
> give others a chance to look at this lot before rushing out v11.

That's my plan indeed. Now that the patch is reviewable, I'll let it sit
for a while. In between v10 and v11, my scratch work will be in
<https://git.zx2c4.com/linux-rng/log/?h=vdso>, which is rebased often.

Thanks again for looking this over.

Jason

* Re: [PATCH v10 1/4] random: add vgetrandom_alloc() syscall
  2022-11-30  0:59     ` Jason A. Donenfeld
@ 2022-11-30  1:37       ` Thomas Gleixner
  2022-11-30  1:42         ` Jason A. Donenfeld
  0 siblings, 1 reply; 37+ messages in thread
From: Thomas Gleixner @ 2022-11-30  1:37 UTC (permalink / raw)
  To: Jason A. Donenfeld
  Cc: linux-kernel, patches, linux-crypto, linux-api, x86,
	Greg Kroah-Hartman, Adhemerval Zanella Netto,
	Carlos O'Donell, Florian Weimer, Arnd Bergmann,
	Christian Brauner

On Wed, Nov 30 2022 at 01:59, Jason A. Donenfeld wrote:
> On Tue, Nov 29, 2022 at 11:02:29PM +0100, Thomas Gleixner wrote:
>> > + * Returns a pointer to the first state in the allocation.
>> 
>> I have serious doubts that this statement is correct.
>
> "Returns the address of the first state in the allocation" is better I
> guess.

Does not even come close to correct.

As my previous hint of 'using this as template for the (hint:missing)
man page' did not work well, may I suggest that you look at the various
return statements in that function and validate whether your proposed
return value documentation is valid for all of them?

Thanks,

        tglx

* Re: [PATCH v10 1/4] random: add vgetrandom_alloc() syscall
  2022-11-30  1:37       ` Thomas Gleixner
@ 2022-11-30  1:42         ` Jason A. Donenfeld
  0 siblings, 0 replies; 37+ messages in thread
From: Jason A. Donenfeld @ 2022-11-30  1:42 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: linux-kernel, patches, linux-crypto, linux-api, x86,
	Greg Kroah-Hartman, Adhemerval Zanella Netto,
	Carlos O'Donell, Florian Weimer, Arnd Bergmann,
	Christian Brauner

On Wed, Nov 30, 2022 at 02:37:32AM +0100, Thomas Gleixner wrote:
> On Wed, Nov 30 2022 at 01:59, Jason A. Donenfeld wrote:
> > On Tue, Nov 29, 2022 at 11:02:29PM +0100, Thomas Gleixner wrote:
> >> > + * Returns a pointer to the first state in the allocation.
> >> 
> >> I have serious doubts that this statement is correct.
> >
> > "Returns the address of the first state in the allocation" is better I
> > guess.
> 
> Does not even come close to correct.
> 
> As my previous hint of 'using this as template for the (hint:missing)
> man page' did not work well, may I suggest that you look at the various
> return statements in that function and validate whether your proposed
> return value documentation is valid for all of them?

Ahh, the error values and such. Righto. Will do. I'll match the style of
similar functions.

Jason

* Re: [PATCH v10 4/4] x86: vdso: Wire up getrandom() vDSO implementation
  2022-11-29 21:06 ` [PATCH v10 4/4] x86: vdso: Wire up getrandom() vDSO implementation Jason A. Donenfeld
  2022-11-29 22:52   ` Thomas Gleixner
@ 2022-11-30  5:22   ` Eric Biggers
  2022-11-30 10:12     ` Jason A. Donenfeld
  1 sibling, 1 reply; 37+ messages in thread
From: Eric Biggers @ 2022-11-30  5:22 UTC (permalink / raw)
  To: Jason A. Donenfeld
  Cc: linux-kernel, patches, tglx, linux-crypto, linux-api, x86,
	Greg Kroah-Hartman, Adhemerval Zanella Netto,
	Carlos O'Donell, Florian Weimer, Arnd Bergmann,
	Christian Brauner, Samuel Neves

On Tue, Nov 29, 2022 at 10:06:39PM +0100, Jason A. Donenfeld wrote:
> diff --git a/arch/x86/entry/vdso/vgetrandom-chacha.S b/arch/x86/entry/vdso/vgetrandom-chacha.S
> new file mode 100644
> index 000000000000..91fbb7ac7af4
> --- /dev/null
> +++ b/arch/x86/entry/vdso/vgetrandom-chacha.S
> @@ -0,0 +1,177 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Copyright (C) 2022 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
> + */
> +
> +#include <linux/linkage.h>
> +#include <asm/frame.h>
> +
> +.section	.rodata.cst16.CONSTANTS, "aM", @progbits, 16
> +.align 16
> +CONSTANTS:	.octa 0x6b20657479622d323320646e61707865
> +.text
> +
> +/*
> + * Very basic SSE2 implementation of ChaCha20. Produces a given positive number
> + * of blocks of output with a nonce of 0, taking an input key and 8-byte
> + * counter. Importantly does not spill to the stack. Its arguments are:
> + *
> + *	rdi: output bytes
> + *	rsi: 32-byte key input
> + *	rdx: 8-byte counter input/output
> + *	rcx: number of 64-byte blocks to write to output
> + */
> +SYM_FUNC_START(__arch_chacha20_blocks_nostack)

How was this ChaCha20 implementation tested?

It really ought to have some sort of test.

Wouldn't this be a good candidate for a KUnit test?

- Eric

* Re: [PATCH v10 2/4] arch: allocate vgetrandom_alloc() syscall number
  2022-11-29 21:06 ` [PATCH v10 2/4] arch: allocate vgetrandom_alloc() syscall number Jason A. Donenfeld
@ 2022-11-30  8:56   ` Geert Uytterhoeven
  2022-11-30 10:06     ` Jason A. Donenfeld
  0 siblings, 1 reply; 37+ messages in thread
From: Geert Uytterhoeven @ 2022-11-30  8:56 UTC (permalink / raw)
  To: Jason A. Donenfeld
  Cc: linux-kernel, patches, tglx, linux-crypto, linux-api, x86,
	Greg Kroah-Hartman, Adhemerval Zanella Netto,
	Carlos O'Donell, Florian Weimer, Arnd Bergmann,
	Christian Brauner

Hi Jason,

On Tue, Nov 29, 2022 at 10:09 PM Jason A. Donenfeld <Jason@zx2c4.com> wrote:
> Add vgetrandom_alloc() as syscall 451 (or 561 on alpha) by adding it to
> all of the various syscall.tbl and unistd.h files.
>
> Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>

Thanks for your patch!

What's the policy regarding adding syscall numbers for VDSO-related
syscalls on architectures that do not support VDSOs yet?

>  arch/m68k/kernel/syscalls/syscall.tbl               | 1 +

Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>

Gr{oetje,eeting}s,

                        Geert

--
Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- geert@linux-m68k.org

In personal conversations with technical people, I call myself a hacker. But
when I'm talking to journalists I just say "programmer" or something like that.
                                -- Linus Torvalds

* Re: [PATCH v10 2/4] arch: allocate vgetrandom_alloc() syscall number
  2022-11-30  8:56   ` Geert Uytterhoeven
@ 2022-11-30 10:06     ` Jason A. Donenfeld
  2022-11-30 10:51       ` Arnd Bergmann
  0 siblings, 1 reply; 37+ messages in thread
From: Jason A. Donenfeld @ 2022-11-30 10:06 UTC (permalink / raw)
  To: Geert Uytterhoeven
  Cc: linux-kernel, patches, tglx, linux-crypto, linux-api, x86,
	Greg Kroah-Hartman, Adhemerval Zanella Netto,
	Carlos O'Donell, Florian Weimer, Arnd Bergmann,
	Christian Brauner

Hi Geert,

On Wed, Nov 30, 2022 at 09:56:06AM +0100, Geert Uytterhoeven wrote:
> Hi Jason,
> 
> On Tue, Nov 29, 2022 at 10:09 PM Jason A. Donenfeld <Jason@zx2c4.com> wrote:
> > Add vgetrandom_alloc() as syscall 451 (or 561 on alpha) by adding it to
> > all of the various syscall.tbl and unistd.h files.
> >
> > Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
> 
> Thanks for your patch!
> 
> What's the policy regarding adding syscall numbers for VDSO-related
> syscalls on architectures that do not support VDSOs yet?

I don't know exactly what the /policy/ is, but not wanting to rock the
boat, the first iterations of this series only added it to x86. But then
Arnd joined the thread and said I should add it to all of them all at
once and separate that out into this commit, so that's what we have
here. I think the idea is to keep syscall numbers synchronized these
days between archs if possible.

Jason


* Re: [PATCH v10 4/4] x86: vdso: Wire up getrandom() vDSO implementation
  2022-11-30  5:22   ` Eric Biggers
@ 2022-11-30 10:12     ` Jason A. Donenfeld
  0 siblings, 0 replies; 37+ messages in thread
From: Jason A. Donenfeld @ 2022-11-30 10:12 UTC (permalink / raw)
  To: Eric Biggers
  Cc: linux-kernel, patches, tglx, linux-crypto, linux-api, x86,
	Greg Kroah-Hartman, Adhemerval Zanella Netto,
	Carlos O'Donell, Florian Weimer, Arnd Bergmann,
	Christian Brauner, Samuel Neves

Hi Eric,

On Tue, Nov 29, 2022 at 09:22:11PM -0800, Eric Biggers wrote:
> On Tue, Nov 29, 2022 at 10:06:39PM +0100, Jason A. Donenfeld wrote:
> > diff --git a/arch/x86/entry/vdso/vgetrandom-chacha.S b/arch/x86/entry/vdso/vgetrandom-chacha.S
> > new file mode 100644
> > index 000000000000..91fbb7ac7af4
> > --- /dev/null
> > +++ b/arch/x86/entry/vdso/vgetrandom-chacha.S
> > @@ -0,0 +1,177 @@
> > +// SPDX-License-Identifier: GPL-2.0
> > +/*
> > + * Copyright (C) 2022 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
> > + */
> > +
> > +#include <linux/linkage.h>
> > +#include <asm/frame.h>
> > +
> > +.section	.rodata.cst16.CONSTANTS, "aM", @progbits, 16
> > +.align 16
> > +CONSTANTS:	.octa 0x6b20657479622d323320646e61707865
> > +.text
> > +
> > +/*
> > + * Very basic SSE2 implementation of ChaCha20. Produces a given positive number
> > + * of blocks of output with a nonce of 0, taking an input key and 8-byte
> > + * counter. Importantly does not spill to the stack. Its arguments are:
> > + *
> > + *	rdi: output bytes
> > + *	rsi: 32-byte key input
> > + *	rdx: 8-byte counter input/output
> > + *	rcx: number of 64-byte blocks to write to output
> > + */
> > +SYM_FUNC_START(__arch_chacha20_blocks_nostack)
> 
> How was this ChaCha20 implementation tested?
> 
> It really ought to have some sort of test.

I've been comparing different output lengths with what libsodium
produces. ARX, so no bigint stuff with carry bugs or whatever. I'll see
if I can make a good test to add to one of the various suites for v11.

Jason


* Re: [PATCH v10 3/4] random: introduce generic vDSO getrandom() implementation
  2022-11-29 21:06 ` [PATCH v10 3/4] random: introduce generic vDSO getrandom() implementation Jason A. Donenfeld
  2022-11-29 22:42   ` Thomas Gleixner
@ 2022-11-30 10:44   ` Florian Weimer
  2022-11-30 14:51     ` Jason A. Donenfeld
  1 sibling, 1 reply; 37+ messages in thread
From: Florian Weimer @ 2022-11-30 10:44 UTC (permalink / raw)
  To: Jason A. Donenfeld
  Cc: linux-kernel, patches, tglx, linux-crypto, linux-api, x86,
	Greg Kroah-Hartman, Adhemerval Zanella Netto,
	Carlos O'Donell, Arnd Bergmann, Christian Brauner

* Jason A. Donenfeld:

> diff --git a/include/vdso/datapage.h b/include/vdso/datapage.h
> index 73eb622e7663..9ae4d76b36c7 100644
> --- a/include/vdso/datapage.h
> +++ b/include/vdso/datapage.h
> @@ -109,6 +109,16 @@ struct vdso_data {
>  	struct arch_vdso_data	arch_data;
>  };
>  
> +/**
> + * struct vdso_rng_data - vdso RNG state information
> + * @generation:	a counter representing the number of RNG reseeds
> + * @is_ready:	whether the RNG is initialized
> + */
> +struct vdso_rng_data {
> +	unsigned long	generation;
> +	bool		is_ready;
> +};
> +

I don't think you can use a type like long here.  The header says this:

 * vdso_data will be accessed by 64 bit and compat code at the same time
 * so we should be careful before modifying this structure.

So the ABI must be same for 32-bit and 64-bit mode, and long isn't.

Thanks,
Florian



* Re: [PATCH v10 2/4] arch: allocate vgetrandom_alloc() syscall number
  2022-11-30 10:06     ` Jason A. Donenfeld
@ 2022-11-30 10:51       ` Arnd Bergmann
  0 siblings, 0 replies; 37+ messages in thread
From: Arnd Bergmann @ 2022-11-30 10:51 UTC (permalink / raw)
  To: Jason A . Donenfeld, Geert Uytterhoeven
  Cc: linux-kernel, patches, Thomas Gleixner, linux-crypto, linux-api,
	x86, Greg Kroah-Hartman, Adhemerval Zanella Netto,
	Carlos O'Donell, Florian Weimer, Christian Brauner

On Wed, Nov 30, 2022, at 11:06, Jason A. Donenfeld wrote:
> On Wed, Nov 30, 2022 at 09:56:06AM +0100, Geert Uytterhoeven wrote:
>> Hi Jason,
>> 
>> On Tue, Nov 29, 2022 at 10:09 PM Jason A. Donenfeld <Jason@zx2c4.com> wrote:
>> > Add vgetrandom_alloc() as syscall 451 (or 561 on alpha) by adding it to
>> > all of the various syscall.tbl and unistd.h files.
>> >
>> > Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
>> 
>> Thanks for your patch!
>> 
>> What's the policy regarding adding syscall numbers for VDSO-related
>> syscalls on architectures that do not support VDSOs yet?
>
> I don't know exactly what the /policy/ is, but not wanting to rock the
> boat, the first iterations of this series only added it to x86. But then
> Arnd joined the thread and said I should add it to all of them all at
> once and separate that out into this commit, so that's what we have
> here. I think the idea is to keep syscall numbers synchronized these
> days between archs if possible.

Right, it shouldn't matter if the syscall has anything to do with vdso
or some other feature; the important bit is that this is an optional
syscall that may or may not exist in a kernel.

Adding every new syscall number to all architectures helps avoid
merge conflicts and keeps the numbers synchronized. It's entirely
plausible that this one gets enabled on an architecture that starts
out with the default -ENOSYS implementation, and has that
backported to 6.2 (or even older kernels in principle) for a
distro release, so it also makes sense to have it in the uapi
table as soon as we have fixed the number.

     Arnd


* Re: [PATCH v10 1/4] random: add vgetrandom_alloc() syscall
  2022-11-29 21:06 ` [PATCH v10 1/4] random: add vgetrandom_alloc() syscall Jason A. Donenfeld
  2022-11-29 22:02   ` Thomas Gleixner
@ 2022-11-30 10:51   ` Florian Weimer
  2022-11-30 15:39     ` Jason A. Donenfeld
  1 sibling, 1 reply; 37+ messages in thread
From: Florian Weimer @ 2022-11-30 10:51 UTC (permalink / raw)
  To: Jason A. Donenfeld
  Cc: linux-kernel, patches, tglx, linux-crypto, linux-api, x86,
	Greg Kroah-Hartman, Adhemerval Zanella Netto,
	Carlos O'Donell, Arnd Bergmann, Christian Brauner

* Jason A. Donenfeld:

> +#ifdef CONFIG_VGETRANDOM_ALLOC_SYSCALL
> +/**
> + * vgetrandom_alloc - allocate opaque states for use with vDSO getrandom().
> + *
> + * @num: on input, a pointer to a suggested hint of how many states to
> + * allocate, and on output the number of states actually allocated.

Should userspace call this system call again if it needs more states?
The interface description doesn't make this clear.

> + * @size_per_each: the size of each state allocated, so that the caller can
> + * split up the returned allocation into individual states.
> + *
> + * @flags: currently always zero.
> + *
> + * The getrandom() vDSO function in userspace requires an opaque state, which
> + * this function allocates by mapping a certain number of special pages into
> + * the calling process. It takes a hint as to the number of opaque states
> + * desired, and provides the caller with the number of opaque states actually
> + * allocated, the size of each one in bytes, and the address of the first
> + * state.
> +
> + * Returns a pointer to the first state in the allocation.
> + *
> + */

How do we deallocate this memory?  Must it remain permanently allocated?

Can userspace use the memory for something else if it's not passed to
getrandom?  The separate system call strongly suggests that the
allocation is completely owned by the kernel, but there isn't
documentation here on what the allocation life-cycle is supposed to look
like.  In particular, it is not clear if vgetrandom_alloc or getrandom
could retain a reference to the allocation in a future implementation of
these interfaces.

Some users might want to zap the memory for extra hardening after use,
and it's not clear if that's allowed, either.

> +SYSCALL_DEFINE3(vgetrandom_alloc, unsigned int __user *, num,
> +		unsigned int __user *, size_per_each, unsigned int, flags)
> +{

ABI-wise, that should work.

> +	const size_t state_size = sizeof(struct vgetrandom_state);
> +	size_t alloc_size, num_states;
> +	unsigned long pages_addr;
> +	unsigned int num_hint;
> +	int ret;
> +
> +	if (flags)
> +		return -EINVAL;
> +
> +	if (get_user(num_hint, num))
> +		return -EFAULT;
> +
> +	num_states = clamp_t(size_t, num_hint, 1, (SIZE_MAX & PAGE_MASK) / state_size);
> +	alloc_size = PAGE_ALIGN(num_states * state_size);

Doesn't this waste space for one state if state_size happens to be a
power of 2?  Why do this SIZE_MAX & PAGE_MASK thing at all?  Shouldn't
it be PAGE_SIZE / state_size?

> +	if (put_user(alloc_size / state_size, num) || put_user(state_size, size_per_each))
> +		return -EFAULT;
> +
> +	pages_addr = vm_mmap(NULL, 0, alloc_size, PROT_READ | PROT_WRITE,
> +			     MAP_PRIVATE | MAP_ANONYMOUS | MAP_LOCKED, 0);

I think Rasmus has already raised questions about MAP_LOCKED.

I think the kernel cannot rely on it because userspace could call
munlock on the allocation.

> +	if (IS_ERR_VALUE(pages_addr))
> +		return pages_addr;
> +
> +	ret = do_madvise(current->mm, pages_addr, alloc_size, MADV_WIPEONFORK);
> +	if (ret < 0)
> +		goto err_unmap;
> +
> +	return pages_addr;
> +
> +err_unmap:
> +	vm_munmap(pages_addr, alloc_size);
> +	return ret;
> +}
> +#endif

If there's no registration of the allocation, it's not clear why we need
a separate system call for this.  From a documentation perspective, it
may be easier to describe proper use of the getrandom vDSO call if
ownership resides with userspace.  But it will constrain future
evolution of the implementation because you can't add registration
(retaining a reference to the passed-in area in getrandom) after the
fact.  But I'm not sure if this is possible with the current interface,
either.  Userspace has to make some assumptions about the life-cycle to
avoid a memory leak on thread exit.

Thanks,
Florian



* Re: [PATCH v10 3/4] random: introduce generic vDSO getrandom() implementation
  2022-11-30 10:44   ` Florian Weimer
@ 2022-11-30 14:51     ` Jason A. Donenfeld
  2022-11-30 14:59       ` Jason A. Donenfeld
  0 siblings, 1 reply; 37+ messages in thread
From: Jason A. Donenfeld @ 2022-11-30 14:51 UTC (permalink / raw)
  To: Florian Weimer
  Cc: linux-kernel, patches, tglx, linux-crypto, linux-api, x86,
	Greg Kroah-Hartman, Adhemerval Zanella Netto,
	Carlos O'Donell, Arnd Bergmann, Christian Brauner

Hi Florian,

On Wed, Nov 30, 2022 at 11:44:30AM +0100, Florian Weimer wrote:
> * Jason A. Donenfeld:
> 
> > diff --git a/include/vdso/datapage.h b/include/vdso/datapage.h
> > index 73eb622e7663..9ae4d76b36c7 100644
> > --- a/include/vdso/datapage.h
> > +++ b/include/vdso/datapage.h
> > @@ -109,6 +109,16 @@ struct vdso_data {
> >  	struct arch_vdso_data	arch_data;
> >  };
> >  
> > +/**
> > + * struct vdso_rng_data - vdso RNG state information
> > + * @generation:	a counter representing the number of RNG reseeds
> > + * @is_ready:	whether the RNG is initialized
> > + */
> > +struct vdso_rng_data {
> > +	unsigned long	generation;
> > +	bool		is_ready;
> > +};
> > +
> 
> I don't think you can use a type like long here.  The header says this:
> 
>  * vdso_data will be accessed by 64 bit and compat code at the same time
>  * so we should be careful before modifying this structure.
> 
> So the ABI must be same for 32-bit and 64-bit mode, and long isn't.

Excellent point. The size of the type needs to be explicit. Will fix.

Jason


* Re: [PATCH v10 3/4] random: introduce generic vDSO getrandom() implementation
  2022-11-30 14:51     ` Jason A. Donenfeld
@ 2022-11-30 14:59       ` Jason A. Donenfeld
  2022-11-30 15:07         ` Arnd Bergmann
  0 siblings, 1 reply; 37+ messages in thread
From: Jason A. Donenfeld @ 2022-11-30 14:59 UTC (permalink / raw)
  To: Florian Weimer
  Cc: linux-kernel, patches, tglx, linux-crypto, linux-api, x86,
	Greg Kroah-Hartman, Adhemerval Zanella Netto,
	Carlos O'Donell, Arnd Bergmann, Christian Brauner

On Wed, Nov 30, 2022 at 03:51:01PM +0100, Jason A. Donenfeld wrote:
> Hi Florian,
> 
> On Wed, Nov 30, 2022 at 11:44:30AM +0100, Florian Weimer wrote:
> > * Jason A. Donenfeld:
> > 
> > > diff --git a/include/vdso/datapage.h b/include/vdso/datapage.h
> > > index 73eb622e7663..9ae4d76b36c7 100644
> > > --- a/include/vdso/datapage.h
> > > +++ b/include/vdso/datapage.h
> > > @@ -109,6 +109,16 @@ struct vdso_data {
> > >  	struct arch_vdso_data	arch_data;
> > >  };
> > >  
> > > +/**
> > > + * struct vdso_rng_data - vdso RNG state information
> > > + * @generation:	a counter representing the number of RNG reseeds
> > > + * @is_ready:	whether the RNG is initialized
> > > + */
> > > +struct vdso_rng_data {
> > > +	unsigned long	generation;
> > > +	bool		is_ready;
> > > +};
> > > +
> > 
> > I don't think you can use a type like long here.  The header says this:
> > 
> >  * vdso_data will be accessed by 64 bit and compat code at the same time
> >  * so we should be careful before modifying this structure.
> > 
> > So the ABI must be same for 32-bit and 64-bit mode, and long isn't.
> 
> Excellent point. The size of the type needs to be explicit. Will fix.

I'll do something like this:

diff --git a/include/vdso/datapage.h b/include/vdso/datapage.h
index 80e25cdcdb1c..218bbeac5613 100644
--- a/include/vdso/datapage.h
+++ b/include/vdso/datapage.h
@@ -19,6 +19,19 @@
 #include <vdso/time32.h>
 #include <vdso/time64.h>

+/**
+ * type vdso_kernel_ulong - unsigned long type that matches kernel's unsigned long
+ *
+ * The structs in this file must operate the same way in both 64-bit code and
+ * in 32-bit compat code, over the same potentially 64-bit kernel. So, this
+ * type represents the size of an unsigned long as used by kernel-space code.
+ */
+#ifdef CONFIG_64BIT
+typedef u64 vdso_kernel_ulong;
+#else
+typedef u32 vdso_kernel_ulong;
+#endif
+
 #ifdef CONFIG_ARCH_HAS_VDSO_DATA
 #include <asm/vdso/data.h>
 #else
@@ -115,8 +128,8 @@ struct vdso_data {
  * @is_ready:	signals whether the RNG is initialized
  */
 struct vdso_rng_data {
-	unsigned long	generation;
-	bool		is_ready;
+	vdso_kernel_ulong	generation;
+	bool			is_ready;
 };

 /*



* Re: [PATCH v10 3/4] random: introduce generic vDSO getrandom() implementation
  2022-11-30 14:59       ` Jason A. Donenfeld
@ 2022-11-30 15:07         ` Arnd Bergmann
  2022-11-30 15:12           ` Jason A. Donenfeld
  0 siblings, 1 reply; 37+ messages in thread
From: Arnd Bergmann @ 2022-11-30 15:07 UTC (permalink / raw)
  To: Jason A . Donenfeld, Florian Weimer
  Cc: linux-kernel, patches, Thomas Gleixner, linux-crypto, linux-api,
	x86, Greg Kroah-Hartman, Adhemerval Zanella Netto,
	Carlos O'Donell, Christian Brauner

On Wed, Nov 30, 2022, at 15:59, Jason A. Donenfeld wrote:
> On Wed, Nov 30, 2022 at 03:51:01PM +0100, Jason A. Donenfeld wrote:
>> On Wed, Nov 30, 2022 at 11:44:30AM +0100, Florian Weimer wrote:
>> > 
>> >  * vdso_data will be accessed by 64 bit and compat code at the same time
>> >  * so we should be careful before modifying this structure.
>> > 
>> > So the ABI must be same for 32-bit and 64-bit mode, and long isn't.

> I'll do something like this:
>
> 
> +#ifdef CONFIG_64BIT
> +typedef u64 vdso_kernel_ulong;
> +#else
> +typedef u32 vdso_kernel_ulong;
> +#endif

This does not address the ABI concern: to allow 32-bit and 64-bit
tasks to share the same data page, it has to be the same width on
both, either u32 or u64, but not depending on a configuration
option.

> struct vdso_rng_data {
>	vdso_kernel_ulong	generation;
>	bool			is_ready;
> };

There is another problem with this: you have implicit padding
in the structure because the two members have different size
and alignment requirements. The easiest fix is to make them
both u64, or you could have a u32 is_ready and an explicit u32
for the padding.

      Arnd


* Re: [PATCH v10 3/4] random: introduce generic vDSO getrandom() implementation
  2022-11-30 15:07         ` Arnd Bergmann
@ 2022-11-30 15:12           ` Jason A. Donenfeld
  2022-11-30 15:29             ` Arnd Bergmann
  0 siblings, 1 reply; 37+ messages in thread
From: Jason A. Donenfeld @ 2022-11-30 15:12 UTC (permalink / raw)
  To: Arnd Bergmann
  Cc: Florian Weimer, linux-kernel, patches, Thomas Gleixner,
	linux-crypto, linux-api, x86, Greg Kroah-Hartman,
	Adhemerval Zanella Netto, Carlos O'Donell, Christian Brauner

Hi Arnd,

On Wed, Nov 30, 2022 at 4:07 PM Arnd Bergmann <arnd@arndb.de> wrote:
> > +#ifdef CONFIG_64BIT
> > +typedef u64 vdso_kernel_ulong;
> > +#else
> > +typedef u32 vdso_kernel_ulong;
> > +#endif
>
> This does not address the ABI concern: to allow 32-bit and 64-bit
> tasks to share the same data page, it has to be the same width on
> both, either u32 or u64, but not depending on a configuration
> option.

I think it does address the issue. CONFIG_64BIT is a .config setting,
not a compiler-derived setting. So a 64-bit kernel will get a u64 in
kernel mode, and then it will get a u64 for the 64-bit vdso usermode
compile, and finally it will get a u64 for the 32-bit vdso usermode
compile. So in all three cases, the size is the same.

> > struct vdso_rng_data {
> >       vdso_kernel_ulong       generation;
> >       bool                    is_ready;
> > };
>
> There is another problem with this: you have implicit padding
> in the structure because the two members have different size
> and alignment requirements. The easiest fix is to make them
> both u64, or you could have a u32 is_ready and an explit u32
> for the padding.

There's padding at the end of the structure, yes. But both
`generation` and `is_ready` will be at the same offset. If the
structure grows, then sure, that'll have to be taken into account. But
that's not a problem because this is a private implementation detail
between the vdso code and the kernel.

Jason


* Re: [PATCH v10 3/4] random: introduce generic vDSO getrandom() implementation
  2022-11-30 15:12           ` Jason A. Donenfeld
@ 2022-11-30 15:29             ` Arnd Bergmann
  2022-11-30 15:47               ` Jason A. Donenfeld
  0 siblings, 1 reply; 37+ messages in thread
From: Arnd Bergmann @ 2022-11-30 15:29 UTC (permalink / raw)
  To: Jason A . Donenfeld
  Cc: Florian Weimer, linux-kernel, patches, Thomas Gleixner,
	linux-crypto, linux-api, x86, Greg Kroah-Hartman,
	Adhemerval Zanella Netto, Carlos O'Donell, Christian Brauner

On Wed, Nov 30, 2022, at 16:12, Jason A. Donenfeld wrote:
> Hi Arnd,
>
> On Wed, Nov 30, 2022 at 4:07 PM Arnd Bergmann <arnd@arndb.de> wrote:
>> > +#ifdef CONFIG_64BIT
>> > +typedef u64 vdso_kernel_ulong;
>> > +#else
>> > +typedef u32 vdso_kernel_ulong;
>> > +#endif
>>
>> This does not address the ABI concern: to allow 32-bit and 64-bit
>> tasks to share the same data page, it has to be the same width on
>> both, either u32 or u64, but not depending on a configuration
>> option.
>
> I think it does address the issue. CONFIG_64BIT is a .config setting,
> not a compiler-derived setting. So a 64-bit kernel will get a u64 in
> kernel mode, and then it will get a u64 for the 64-bit vdso usermode
> compile, and finally it will get a u64 for the 32-bit vdso usermode
> compile. So in all three cases, the size is the same.

I see what you mean now. However, this means your vdso32 copies
are different between 32-bit and 64-bit kernels. If you need to
access one of the fields from assembler, it even ends up
different at source level, which adds a bit of complexity.

Making the interface configuration-independent makes it obvious
to the reader that none of these problems can happen.

>> > struct vdso_rng_data {
>> >       vdso_kernel_ulong       generation;
>> >       bool                    is_ready;
>> > };
>>
>> There is another problem with this: you have implicit padding
>> in the structure because the two members have different size
>> and alignment requirements. The easiest fix is to make them
> >> both u64, or you could have a u32 is_ready and an explicit u32
>> for the padding.
>
> There's padding at the end of the structure, yes. But both
> `generation` and `is_ready` will be at the same offset. If the
> structure grows, then sure, that'll have to be taken into account. But
> that's not a problem because this is a private implementation detail
> between the vdso code and the kernel.

I was not concerned about incompatibility here, but rather about
possibly leaking kernel data to the vdso page. Again, this probably
doesn't happen if your code is written correctly, but the rule for
kernel-user ABIs is to avoid implicit padding to ensure that
the padding bytes can never leak any information. Using structures
without padding at the minimum helps avoid having to think about
whether this can become a problem when the code is inspected for
possible issues, both by humans and by automated tools.

     Arnd


* Re: [PATCH v10 1/4] random: add vgetrandom_alloc() syscall
  2022-11-30 10:51   ` Florian Weimer
@ 2022-11-30 15:39     ` Jason A. Donenfeld
  2022-11-30 16:38       ` Jason A. Donenfeld
                         ` (2 more replies)
  0 siblings, 3 replies; 37+ messages in thread
From: Jason A. Donenfeld @ 2022-11-30 15:39 UTC (permalink / raw)
  To: Florian Weimer
  Cc: linux-kernel, patches, tglx, linux-crypto, linux-api, x86,
	Greg Kroah-Hartman, Adhemerval Zanella Netto,
	Carlos O'Donell, Arnd Bergmann, Christian Brauner

Hi Florian,

On Wed, Nov 30, 2022 at 11:51:59AM +0100, Florian Weimer wrote:
> * Jason A. Donenfeld:
> 
> > +#ifdef CONFIG_VGETRANDOM_ALLOC_SYSCALL
> > +/**
> > + * vgetrandom_alloc - allocate opaque states for use with vDSO getrandom().
> > + *
> > + * @num: on input, a pointer to a suggested hint of how many states to
> > + * allocate, and on output the number of states actually allocated.
> 
> Should userspace call this system call again if it needs more states?
> The interface description doesn't make this clear.

Yes. And indeed that's what Adhemerval's patch does.

> 
> > + * @size_per_each: the size of each state allocated, so that the caller can
> > + * split up the returned allocation into individual states.
> > + *
> > + * @flags: currently always zero.
> > + *
> > + * The getrandom() vDSO function in userspace requires an opaque state, which
> > + * this function allocates by mapping a certain number of special pages into
> > + * the calling process. It takes a hint as to the number of opaque states
> > + * desired, and provides the caller with the number of opaque states actually
> > + * allocated, the size of each one in bytes, and the address of the first
> > + * state.
> > +
> > + * Returns a pointer to the first state in the allocation.
> > + *
> > + */
> 
> How do we deallocate this memory?  Must it remain permanently allocated?

It can be deallocated with munmap.

> Can userspace use the memory for something else if it's not passed to
> getrandom?

I suspect the documentation answer here is, "no", even if technically it
might happen to work on this kernel or that kernel. I suppose this could
even be quasi-enforced by xoring the top bits with some vdso
compile-time constant, so you can't rely on being able to dereference
it yourself.

> The separate system call strongly suggests that the
> allocation is completely owned by the kernel, but there isn't
> documentation here how the allocation life-cycle is supposed to look
> like.  In particular, it is not clear if vgetrandom_alloc or getrandom
> could retain a reference to the allocation in a future implementation of
> these interfaces.
> 
> Some users might want to zap the memory for extra hardening after use,
> and it's not clear if that's allowed, either.

I don't think zapping that memory is supported, or even a sensible thing
to do. In the first place, I don't think we should suggest that the user
can dereference that pointer, at all. In that sense, maybe it's best to
call it a "handle" or something similar (a "HANDLE"! a "HWND"? a "HRNG"?
just kidding). In the second place, the fast erasure aspect of this
means that such hardening would have no effect -- the key is overwritten
after use for forward secrecy anyway, and batched bytes are zeroed.
(There is a corner case that might make it interesting to wipe in the
parent, not just the child, on fork, but that's sort of a separate
matter and would ideally be handled by kernel space anyway.)

> If there's no registration of the allocation, it's not clear why we need
> a separate system call for this.  From a documentation perspective, it
> may be easier to describe proper use of the getrandom vDSO call if
> ownership resides with userspace.

No, absolutely not, for the multiple reasons already listed in the
commit messages and cover letter and previous emails. But you seem
aware of this:

> But it will constrain future
> evolution of the implementation because you can't add registration
> (retaining a reference to the passed-in area in getrandom) after the
> fact.  But I'm not sure if this is possible with the current interface,
> either.  Userspace has to make some assumptions about the life-cycle to
> avoid a memory leak on thread exit.

It sounds like this is sort of a different angle on Rasmus' earlier
comment about how munmap leaks implementation details. Maybe there's
something to that after all? Or not? I see two approaches:

1) Keep munmap as the deallocation function. If later on we do fancy
   registration and in-kernel state tracking, or add fancy protection
   flags, or whatever else, munmap should be able to identify these
   pages and carry out whatever special treatment is necessary.

2) Convert vgetrandom_alloc() into a clone3-style syscall, as Christian
   suggested earlier, which might allow for a bit more overloading
   capability. That would be a struct that looks like:

      struct vgetrandom_alloc_args {
          __aligned_u64 flags;
          __aligned_u64 states;
          __aligned_u64 num;
          __aligned_u64 size_of_each;
      };

  - If flags is VGRA_ALLOCATE, states and size_of_each must be zero on
    input, while num is the hint, as is the case now. On output, states,
    size_of_each, and num are filled in.

  - If flags is VGRA_DEALLOCATE, states, size_of_each, and num must be as
    they were originally, and then it deallocates.

I suppose (2) would alleviate your concerns entirely, without future
uncertainty over what it'd be like to add special cases to munmap(). And
it'd add a bit more future proofing to the syscall, depending on what we
do.

So maybe I'm warming up to that approach a bit.

> > +	num_states = clamp_t(size_t, num_hint, 1, (SIZE_MAX & PAGE_MASK) / state_size);
> > +	alloc_size = PAGE_ALIGN(num_states * state_size);
> 
> Doesn't this waste space for one state if state_size happens to be a
> power of 2?  Why do this SIZE_MAX & PAGE_MASK thing at all?  Shouldn't
> it be PAGE_SIZE / state_size?

The first line is a clamp. It bounds num_hint between 1 and the largest
number that, when multiplied by the state size and rounded up to a page
boundary, won't overflow.

So, if state_size is a power of two, let's say 256, and there's only one
state, here's what that looks like:

    num_states = clamp(1, 1, (0xffffffff & (~(4096 - 1))) / 256 = 16777200) = 1
    alloc_size = PAGE_ALIGN(1 * 256) = 4096

So that seems like it's working as intended, right? Or if not, maybe
it'd help to write out the digits you're concerned about?

> > +	if (put_user(alloc_size / state_size, num) || put_user(state_size, size_per_each))
> > +		return -EFAULT;
> > +
> > +	pages_addr = vm_mmap(NULL, 0, alloc_size, PROT_READ | PROT_WRITE,
> > +			     MAP_PRIVATE | MAP_ANONYMOUS | MAP_LOCKED, 0);
> 
> I think Rasmus has already raised questions about MAP_LOCKED.
> 
> I think the kernel cannot rely on it because userspace could call
> munlock on the allocation.

Then they're caught holding the bag? This doesn't seem much different
from userspace shooting themselves in general, like writing garbage into
the allocated states and then trying to use them. If this is something
you really, really are concerned about, then maybe my cheesy dumb xor
thing mentioned above would be a low effort mitigation here.

Jason


* Re: [PATCH v10 3/4] random: introduce generic vDSO getrandom() implementation
  2022-11-30 15:29             ` Arnd Bergmann
@ 2022-11-30 15:47               ` Jason A. Donenfeld
  2022-11-30 16:13                 ` Arnd Bergmann
  2022-11-30 17:00                 ` Thomas Gleixner
  0 siblings, 2 replies; 37+ messages in thread
From: Jason A. Donenfeld @ 2022-11-30 15:47 UTC (permalink / raw)
  To: Arnd Bergmann
  Cc: Florian Weimer, linux-kernel, patches, Thomas Gleixner,
	linux-crypto, linux-api, x86, Greg Kroah-Hartman,
	Adhemerval Zanella Netto, Carlos O'Donell, Christian Brauner

Hi Arnd,

On Wed, Nov 30, 2022 at 4:29 PM Arnd Bergmann <arnd@arndb.de> wrote:
> > I think it does address the issue. CONFIG_64BIT is a .config setting,
> > not a compiler-derived setting. So a 64-bit kernel will get a u64 in
> > kernel mode, and then it will get a u64 for the 64-bit vdso usermode
> > compile, and finally it will get a u64 for the 32-bit vdso usermode
> > compile. So in all three cases, the size is the same.
>
> I see what you mean now. However this means your vdso32 copies
> are different between 32-bit and 64-bit kernels. If you need to
> access one of the fields from assembler, it even ends up
> different at source level, which adds a bit of complexity.
>
> Making the interface configuration-independent makes it obvious
> to the reader that none of these problems can happen.

Except ideally, these are word-sized accesses (where only compat code
has to suffer I suppose).

> >> > struct vdso_rng_data {
> >> >       vdso_kernel_ulong       generation;
> >> >       bool                    is_ready;
> >> > };
> >>
> >> There is another problem with this: you have implicit padding
> >> in the structure because the two members have different size
> >> and alignment requirements. The easiest fix is to make them
> >> both u64, or you could have a u32 is_ready and an explicit u32
> >> for the padding.
> >
> > There's padding at the end of the structure, yes. But both
> > `generation` and `is_ready` will be at the same offset. If the
> > structure grows, then sure, that'll have to be taken into account. But
> > that's not a problem because this is a private implementation detail
> > between the vdso code and the kernel.
>
> I was not concerned about incompatibility here, but rather about
> possibly leaking kernel data to the vdso page.

The vvar page starts out zeroed, no?

Jason


* Re: [PATCH v10 3/4] random: introduce generic vDSO getrandom() implementation
  2022-11-30 15:47               ` Jason A. Donenfeld
@ 2022-11-30 16:13                 ` Arnd Bergmann
  2022-11-30 16:40                   ` Jason A. Donenfeld
  2022-11-30 17:00                 ` Thomas Gleixner
  1 sibling, 1 reply; 37+ messages in thread
From: Arnd Bergmann @ 2022-11-30 16:13 UTC (permalink / raw)
  To: Jason A . Donenfeld
  Cc: Florian Weimer, linux-kernel, patches, Thomas Gleixner,
	linux-crypto, linux-api, x86, Greg Kroah-Hartman,
	Adhemerval Zanella Netto, Carlos O'Donell, Christian Brauner

On Wed, Nov 30, 2022, at 16:47, Jason A. Donenfeld wrote:

>> > There's padding at the end of the structure, yes. But both
>> > `generation` and `is_ready` will be at the same offset. If the
>> > structure grows, then sure, that'll have to be taken into account. But
>> > that's not a problem because this is a private implementation detail
>> > between the vdso code and the kernel.
>>
>> I was not concerned about incompatibility here, but rather about
>> possibly leaking kernel data to the vdso page.
>
> The vvar page starts out zeroed, no?

The typical problem is someone doing a copy_to_user() of an in-kernel
structure into the userspace side, which would then copy the
padding as well. If the source is on the stack, a malicious caller
can trick another syscall into leaving sensitive data at this
exact stack location. Again, I'm not saying that your code is
vulnerable to that type of attack, just that making all ABI
structures not have holes is useful for auditing.

    Arnd

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH v10 1/4] random: add vgetrandom_alloc() syscall
  2022-11-30 15:39     ` Jason A. Donenfeld
@ 2022-11-30 16:38       ` Jason A. Donenfeld
  2022-12-02 14:38         ` Jason A. Donenfeld
  2022-12-01  2:16       ` Jason A. Donenfeld
  2022-12-02 17:17       ` Florian Weimer
  2 siblings, 1 reply; 37+ messages in thread
From: Jason A. Donenfeld @ 2022-11-30 16:38 UTC (permalink / raw)
  To: Florian Weimer
  Cc: linux-kernel, patches, tglx, linux-crypto, linux-api, x86,
	Greg Kroah-Hartman, Adhemerval Zanella Netto,
	Carlos O'Donell, Arnd Bergmann, Christian Brauner

On Wed, Nov 30, 2022 at 04:39:55PM +0100, Jason A. Donenfeld wrote:
> 2) Convert vgetrandom_alloc() into a clone3-style syscall, as Christian
>    suggested earlier, which might allow for a bit more overloading
>    capability. That would be a struct that looks like:
> 
>       struct vgetrandom_alloc_args {
> 	  __aligned_u64 flags;
>           __aligned_u64 states;
> 	  __aligned_u64 num;
> 	  __aligned_u64 size_of_each;
>       }
> 
>   - If flags is VGRA_ALLOCATE, states and size_of_each must be zero on
>     input, while num is the hint, as is the case now. On output, states,
>     size_of_each, and num are filled in.
> 
>   - If flags is VGRA_DEALLOCATE, states, size_of_each, and num must be as
>     they were originally, and then it deallocates.
> 
> I suppose (2) would alleviate your concerns entirely, without future
> uncertainty over what it'd be like to add special cases to munmap(). And
> it'd add a bit more future proofing to the syscall, depending on what we
> do.
> 
> So maybe I'm warming up to that approach a bit.

So I just did a little quick implementation to see what it'd feel like,
and actually, it's quite simple, and might address a lot of concerns all
at once. What do you think of the below? Documentation and such still
needs work obviously, but the bones should be there.

diff --git a/drivers/char/random.c b/drivers/char/random.c
index 4341c6a91207..dae6095b937d 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -189,44 +189,53 @@ int __cold execute_with_initialized_rng(struct notifier_block *nb)
 /**
  * sys_vgetrandom_alloc - Allocate opaque states for use with vDSO getrandom().
  *
- * @num:	   On input, a pointer to a suggested hint of how many states to
- * 		   allocate, and on output the number of states actually allocated.
- *
- * @size_per_each: The size of each state allocated, so that the caller can
- *		   split up the returned allocation into individual states.
- *
- * @flags:	   Currently always zero.
+ * @uargs:	On input, a vgetrandom_alloc_args struct describing the
+ * 		request, and on output the same struct with results filled in.
+ * @usize:	The size of @uargs, which determines the version of the struct used.
  *
  * The getrandom() vDSO function in userspace requires an opaque state, which
  * this function allocates by mapping a certain number of special pages into
  * the calling process. It takes a hint as to the number of opaque states
  * desired, and provides the caller with the number of opaque states actually
  * allocated, the size of each one in bytes, and the address of the first
- * state.
+ * state. Alternatively, if the VGRA_DEALLOCATE flag is specified, the provided
+ * states parameter is unmapped.
  *
- * Returns the address of the first state in the allocation on success, or a
- * negative error value on failure.
+ * Returns 0 on success and an error value otherwise.
  */
-SYSCALL_DEFINE3(vgetrandom_alloc, unsigned int __user *, num,
-		unsigned int __user *, size_per_each, unsigned int, flags)
+SYSCALL_DEFINE2(vgetrandom_alloc, struct vgetrandom_alloc_args __user *, uargs, size_t, usize)
 {
 	const size_t state_size = sizeof(struct vgetrandom_state);
+	const size_t max_states = (SIZE_MAX & PAGE_MASK) / state_size;
+	struct vgetrandom_alloc_args args;
 	size_t alloc_size, num_states;
 	unsigned long pages_addr;
-	unsigned int num_hint;
 	int ret;

-	if (flags)
+	if (usize > PAGE_SIZE)
+		return -E2BIG;
+	if (usize < VGETRANDOM_ALLOC_ARGS_SIZE_VER0)
 		return -EINVAL;
+	ret = copy_struct_from_user(&args, sizeof(args), uargs, usize);
+	if (ret)
+		return ret;

-	if (get_user(num_hint, num))
-		return -EFAULT;
+	/* Currently only VGRA_DEALLOCATE is defined. */
+	if (args.flags & ~VGRA_DEALLOCATE)
+		return -EINVAL;

-	num_states = clamp_t(size_t, num_hint, 1, (SIZE_MAX & PAGE_MASK) / state_size);
-	alloc_size = PAGE_ALIGN(num_states * state_size);
+	if (args.flags & VGRA_DEALLOCATE) {
+		if (args.size_per_each != state_size || args.num > max_states || !args.states)
+			return -EINVAL;
+		return vm_munmap(args.states, args.num * state_size);
+	}

-	if (put_user(alloc_size / state_size, num) || put_user(state_size, size_per_each))
-		return -EFAULT;
+	/* These don't make sense as input values if allocating, so reject them. */
+	if (args.size_per_each || args.states)
+		return -EINVAL;
+
+	num_states = clamp_t(size_t, args.num, 1, max_states);
+	alloc_size = PAGE_ALIGN(num_states * state_size);

 	pages_addr = vm_mmap(NULL, 0, alloc_size, PROT_READ | PROT_WRITE,
 			     MAP_PRIVATE | MAP_ANONYMOUS | MAP_LOCKED, 0);
@@ -237,7 +246,14 @@ SYSCALL_DEFINE3(vgetrandom_alloc, unsigned int __user *, num,
 	if (ret < 0)
 		goto err_unmap;

-	return pages_addr;
+	args.num = num_states;
+	args.size_per_each = state_size;
+	args.states = pages_addr;
+
+	ret = -EFAULT;
+	if (copy_to_user(uargs, &args, sizeof(args)))
+		goto err_unmap;
+	return 0;

 err_unmap:
 	vm_munmap(pages_addr, alloc_size);
diff --git a/include/linux/syscalls.h b/include/linux/syscalls.h
index 7741dc94f10c..de4338e26db0 100644
--- a/include/linux/syscalls.h
+++ b/include/linux/syscalls.h
@@ -72,6 +72,7 @@ struct open_how;
 struct mount_attr;
 struct landlock_ruleset_attr;
 enum landlock_rule_type;
+struct vgetrandom_alloc_args;

 #include <linux/types.h>
 #include <linux/aio_abi.h>
@@ -1006,9 +1007,8 @@ asmlinkage long sys_seccomp(unsigned int op, unsigned int flags,
 			    void __user *uargs);
 asmlinkage long sys_getrandom(char __user *buf, size_t count,
 			      unsigned int flags);
-asmlinkage long sys_vgetrandom_alloc(unsigned int __user *num,
-				     unsigned int __user *size_per_each,
-				     unsigned int flags);
+asmlinkage long sys_vgetrandom_alloc(struct vgetrandom_alloc_args __user *uargs,
+				     size_t size);
 asmlinkage long sys_memfd_create(const char __user *uname_ptr, unsigned int flags);
 asmlinkage long sys_bpf(int cmd, union bpf_attr *attr, unsigned int size);
 asmlinkage long sys_execveat(int dfd, const char __user *filename,
diff --git a/include/uapi/linux/random.h b/include/uapi/linux/random.h
index e744c23582eb..49911ea2c343 100644
--- a/include/uapi/linux/random.h
+++ b/include/uapi/linux/random.h
@@ -55,4 +55,30 @@ struct rand_pool_info {
 #define GRND_RANDOM	0x0002
 #define GRND_INSECURE	0x0004

+/*
+ * Flags for vgetrandom_alloc(2)
+ *
+ * VGRA_DEALLOCATE	Deallocate supplied states.
+ */
+#define VGRA_DEALLOCATE	0x0001ULL
+
+/**
+ * struct vgetrandom_alloc_args - Arguments for the vgetrandom_alloc(2) syscall.
+ *
+ * @flags:	   Zero or more VGRA_* flags.
+ * @states:	   Zero on input if allocating, and filled in on successful
+ *		   return. An existing allocation, if deallocating.
+ * @num:	   A hint as to the desired number of states, if allocating. The
+ *		   number of existing states in @states, if deallocating.
+ * @size_per_each: The size of each state in @states.
+ */
+struct vgetrandom_alloc_args {
+	__aligned_u64 flags;
+	__aligned_u64 states;
+	__aligned_u64 num;
+	__aligned_u64 size_per_each;
+};
+
+#define VGETRANDOM_ALLOC_ARGS_SIZE_VER0 32 /* sizeof first published struct */
+
 #endif /* _UAPI_LINUX_RANDOM_H */


^ permalink raw reply related	[flat|nested] 37+ messages in thread

* Re: [PATCH v10 3/4] random: introduce generic vDSO getrandom() implementation
  2022-11-30 16:13                 ` Arnd Bergmann
@ 2022-11-30 16:40                   ` Jason A. Donenfeld
  0 siblings, 0 replies; 37+ messages in thread
From: Jason A. Donenfeld @ 2022-11-30 16:40 UTC (permalink / raw)
  To: Arnd Bergmann
  Cc: Florian Weimer, linux-kernel, patches, Thomas Gleixner,
	linux-crypto, linux-api, x86, Greg Kroah-Hartman,
	Adhemerval Zanella Netto, Carlos O'Donell, Christian Brauner

On Wed, Nov 30, 2022 at 05:13:18PM +0100, Arnd Bergmann wrote:
> On Wed, Nov 30, 2022, at 16:47, Jason A. Donenfeld wrote:
> 
> >> > There's padding at the end of the structure, yes. But both
> >> > `generation` and `is_ready` will be at the same offset. If the
> >> > structure grows, then sure, that'll have to be taken into account. But
> >> > that's not a problem because this is a private implementation detail
> >> > between the vdso code and the kernel.
> >>
> >> I was not concerned about incompatibility here, but rather about
> >> possibly leaking kernel data to the vdso page.
> >
> > The vvar page starts out zeroed, no?
> 
> The typical problem is someone doing a copy_to_user() of an in-kernel
> structure into the userspace side, which would then copy the
> padding as well. If the source is on the stack, a malicious caller
> can trick another syscall into leaving sensitive data at this
> exact stack location.

I'm quite aware of this infoleak, having made use of it countless times
over the years. It just doesn't seem relevant to the vvar page.

Jason

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH v10 3/4] random: introduce generic vDSO getrandom() implementation
  2022-11-30 15:47               ` Jason A. Donenfeld
  2022-11-30 16:13                 ` Arnd Bergmann
@ 2022-11-30 17:00                 ` Thomas Gleixner
  1 sibling, 0 replies; 37+ messages in thread
From: Thomas Gleixner @ 2022-11-30 17:00 UTC (permalink / raw)
  To: Jason A. Donenfeld, Arnd Bergmann
  Cc: Florian Weimer, linux-kernel, patches, linux-crypto, linux-api,
	x86, Greg Kroah-Hartman, Adhemerval Zanella Netto,
	Carlos O'Donell, Christian Brauner

On Wed, Nov 30 2022 at 16:47, Jason A. Donenfeld wrote:
> On Wed, Nov 30, 2022 at 4:29 PM Arnd Bergmann <arnd@arndb.de> wrote:
>> I see what you mean now. However this means your vdso32 copies
>> are different between 32-bit and 64-bit kernels. If you need to
>> access one of the fields from assembler, it even ends up
>> different at source level, which adds a bit of complexity.
>>
>> Making the interface configuration-independent makes it obvious
>> to the reader that none of these problems can happen.
>
> Except ideally, these are word-sized accesses (where only compat code
> has to suffer I suppose).

While I hate it with a passion, there is actually a valid reason to use
this ugly typedef.

On 32bit architectures whose ISA limitations force loads and stores of
64bit variables to tear into two 32bit accesses, a concurrent write and
read results in undefined behaviour. Neither READ_ONCE() nor
WRITE_ONCE() helps there.

Though that raises the question of whether we need a 64bit generation counter
for the VDSO at all.

Thanks,

        tglx

^ permalink raw reply	[flat|nested] 37+ messages in thread

* RE: [PATCH v10 1/4] random: add vgetrandom_alloc() syscall
  2022-11-29 22:02   ` Thomas Gleixner
  2022-11-30  0:59     ` Jason A. Donenfeld
@ 2022-11-30 22:39     ` David Laight
  2022-12-01  0:14       ` Jason A. Donenfeld
  1 sibling, 1 reply; 37+ messages in thread
From: David Laight @ 2022-11-30 22:39 UTC (permalink / raw)
  To: 'Thomas Gleixner', Jason A. Donenfeld, linux-kernel, patches
  Cc: Jason A. Donenfeld, linux-crypto, linux-api, x86,
	Greg Kroah-Hartman, Adhemerval Zanella Netto,
	Carlos O'Donell, Florian Weimer, Arnd Bergmann,
	Christian Brauner

From: Thomas Gleixner
> Sent: 29 November 2022 22:02
> 
> Jason!
> 
> On Tue, Nov 29 2022 at 22:06, Jason A. Donenfeld wrote:
> > +
> > +/********************************************************************
> > + *
> > + * vDSO support helpers.
> > + *
> > + * The actual vDSO function is defined over in lib/vdso/getrandom.c,
> > + * but this section contains the kernel-mode helpers to support that.
> > + *
> > + ********************************************************************/
> > +
> > +#ifdef CONFIG_VGETRANDOM_ALLOC_SYSCALL
> > +/**
> > + * vgetrandom_alloc - allocate opaque states for use with vDSO getrandom().
> > + *
> > + * @num: on input, a pointer to a suggested hint of how many states to
> > + * allocate, and on output the number of states actually allocated.
> > + *
> > + * @size_per_each: the size of each state allocated, so that the caller can
> > + * split up the returned allocation into individual states.
> > + *
> > + * @flags: currently always zero.
> 
> NIT!
> 
> I personally prefer and ask for it in stuff I maintain:
> 
>  * @num:		On input, a pointer to a suggested hint of how many states to
>  *			allocate, and on output the number of states actually allocated.
>  *
>  * @size_per_each: 	The size of each state allocated, so that the caller can
>  * 			split up the returned allocation into individual states.
>  *
>  * @flags: 		Currently always zero.
> 
> But your turf :)
> 
> > + *
> > + * The getrandom() vDSO function in userspace requires an opaque state, which
> > + * this function allocates by mapping a certain number of special pages into
> > + * the calling process. It takes a hint as to the number of opaque states
> > + * desired, and provides the caller with the number of opaque states actually
> > + * allocated, the size of each one in bytes, and the address of the first
> > + * state.
> 
> make W=1 rightfully complains about:
> 
> > +
> 
> drivers/char/random.c:182: warning: bad line:
> 
> > + * Returns a pointer to the first state in the allocation.
> 
> I have serious doubts that this statement is correct.
> 
> Think about this comment and documentation as a boiler plate for the
> mandatory man page for a new syscall (hint...)
> 
> > + *
> > + */
> 
> and W=1 also complains rightfully here:
> 
> > +SYSCALL_DEFINE3(vgetrandom_alloc, unsigned int __user *, num,
> > +		unsigned int __user *, size_per_each, unsigned int, flags)
> 
> drivers/char/random.c:188: warning: expecting prototype for vgetrandom_alloc(). Prototype was for
> sys_vgetrandom_alloc() instead
> 
> > +{
> > diff --git a/include/vdso/getrandom.h b/include/vdso/getrandom.h
> > new file mode 100644
> > index 000000000000..5f04c8bf4bd4
> > --- /dev/null
> > +++ b/include/vdso/getrandom.h
> > @@ -0,0 +1,24 @@
> > +/* SPDX-License-Identifier: GPL-2.0 */
> > +/*
> > + * Copyright (C) 2022 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
> > + */
> > +
> > +#ifndef _VDSO_GETRANDOM_H
> > +#define _VDSO_GETRANDOM_H
> > +
> > +#include <crypto/chacha.h>
> > +
> > +struct vgetrandom_state {
> > +	union {
> > +		struct {
> > +			u8 batch[CHACHA_BLOCK_SIZE * 3 / 2];
> > +			u32 key[CHACHA_KEY_SIZE / sizeof(u32)];
> > +		};
> > +		u8 batch_key[CHACHA_BLOCK_SIZE * 2];
> > +	};
> > +	unsigned long generation;
> > +	u8 pos;
> > +	bool in_use;
> > +};
> 
> Again, please make this properly tabular:
> 
> struct vgetrandom_state {
> 	union {
> 		struct {
> 			u8	batch[CHACHA_BLOCK_SIZE * 3 / 2];
> 			u32	key[CHACHA_KEY_SIZE / sizeof(u32)];
> 		};
> 		u8	batch_key[CHACHA_BLOCK_SIZE * 2];
> 	};
> 	unsigned long	generation;
> 	u8		pos;
> 	bool		in_use;
> };
> 
> Plus some kernel doc which explains what this is about.

That structure looks horrid - especially for something shared
between entities.
The 'unsigned long' should be either u32 or u64.
There is 'hidden' padding at the end.
If 'pos' is an index into something (longer name would be
better) then there is no reason to squeeze the value into
1 byte - it doesn't save anything and might make things bigger.

(I think Jason might have blocked my emails; he doesn't like
criticism/feedback.)

	David

-
Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT, UK
Registration No: 1397386 (Wales)


^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH v10 1/4] random: add vgetrandom_alloc() syscall
  2022-11-30 22:39     ` David Laight
@ 2022-12-01  0:14       ` Jason A. Donenfeld
  0 siblings, 0 replies; 37+ messages in thread
From: Jason A. Donenfeld @ 2022-12-01  0:14 UTC (permalink / raw)
  To: David Laight
  Cc: 'Thomas Gleixner',
	linux-kernel, patches, linux-crypto, linux-api, x86,
	Greg Kroah-Hartman, Adhemerval Zanella Netto,
	Carlos O'Donell, Florian Weimer, Arnd Bergmann,
	Christian Brauner

On Wed, Nov 30, 2022 at 10:39:38PM +0000, David Laight wrote:
> > struct vgetrandom_state {
> > 	union {
> > 		struct {
> > 			u8	batch[CHACHA_BLOCK_SIZE * 3 / 2];
> > 			u32	key[CHACHA_KEY_SIZE / sizeof(u32)];
> > 		};
> > 		u8	batch_key[CHACHA_BLOCK_SIZE * 2];
> > 	};
> > 	unsigned long	generation;
> > 	u8		pos;
> > 	bool		in_use;
> > };
> > 
> > Plus some kernel doc which explains what this is about.
> 
> That structure looks horrid - especially for something shared
> between entities.
> The 'unsigned long' should be either u32 or u64.

This struct isn't shared. It's used only by user mode code.
There may well be other issues with that long, though.

Jason

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH v10 1/4] random: add vgetrandom_alloc() syscall
  2022-11-30 15:39     ` Jason A. Donenfeld
  2022-11-30 16:38       ` Jason A. Donenfeld
@ 2022-12-01  2:16       ` Jason A. Donenfeld
  2022-12-02 17:17       ` Florian Weimer
  2 siblings, 0 replies; 37+ messages in thread
From: Jason A. Donenfeld @ 2022-12-01  2:16 UTC (permalink / raw)
  To: Florian Weimer
  Cc: linux-kernel, patches, tglx, linux-crypto, linux-api, x86,
	Greg Kroah-Hartman, Adhemerval Zanella Netto,
	Carlos O'Donell, Arnd Bergmann, Christian Brauner

On Wed, Nov 30, 2022 at 04:39:55PM +0100, Jason A. Donenfeld wrote:
> > Can userspace use the memory for something else if it's not passed to
> > getrandom?
> 
> I suspect the documentation answer here is, "no", even if technically it
> might happen to work on this kernel or that kernel. I suppose this could
> even be quasi-enforced by xoring the top bits with some vdso
> compile-time constant, so you can't rely on being able to dereference
> it yourself.
> [...]
> Then they're caught holding the bag? This doesn't seem much different
> from userspace shooting themselves in general, like writing garbage into
> the allocated states and then trying to use them. If this is something
> you really, really are concerned about, then maybe my cheesy dumb xor
> thing mentioned above would be a low effort mitigation here.

I implemented a sample of this, below. I think this is a bit silly,
though, and making this fully robust could take some effort. Overall, I
don't think we should do this.

However, the more I think about the args thing from the last email,
the more I like *that* idea. So I think I'll roll with that.

But this cheesy pointer obfuscation thing here, meh. But here's what it
could look like anyway:

diff --git a/drivers/char/random.c b/drivers/char/random.c
index 2aaeb48d11be..7aff45165ce5 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -228,7 +228,7 @@ SYSCALL_DEFINE2(vgetrandom_alloc, struct vgetrandom_alloc_args __user *, uargs,
 	if (args.flags & VGRA_DEALLOCATE) {
 		if (args.size_per_each != state_size || args.num > max_states || !args.states)
 			return -EINVAL;
-		return vm_munmap(args.states, args.num * state_size);
+		return vm_munmap(args.states ^ VGETRANDOM_STATE_HI_TAINT, args.num * state_size);
 	}

 	/* These don't make sense as input values if allocating, so reject them. */
@@ -249,7 +249,7 @@ SYSCALL_DEFINE2(vgetrandom_alloc, struct vgetrandom_alloc_args __user *, uargs,

 	args.num = num_states;
 	args.size_per_each = state_size;
-	args.states = pages_addr;
+	args.states = pages_addr ^ VGETRANDOM_STATE_HI_TAINT;

 	ret = -EFAULT;
 	if (copy_to_user(uargs, &args, sizeof(args)))
diff --git a/include/vdso/getrandom.h b/include/vdso/getrandom.h
index cb624799a8e7..9a6aaf4d99d4 100644
--- a/include/vdso/getrandom.h
+++ b/include/vdso/getrandom.h
@@ -8,6 +8,7 @@

 #include <crypto/chacha.h>
 #include <vdso/limits.h>
+#include <linux/version.h>

 /**
  * struct vgetrandom_state - State used by vDSO getrandom() and allocated by vgetrandom_alloc().
@@ -41,4 +42,10 @@ struct vgetrandom_state {
 	bool 			in_use;
 };

+/* Be annoying by changing frequently enough. */
+#define VGETRANDOM_STATE_HI_TAINT ((unsigned long)(((LINUX_VERSION_CODE >> 16) + \
+				    (LINUX_VERSION_CODE >>  8) + (LINUX_VERSION_CODE >>  0) + \
+				    __GNUC__ + __GNUC_MINOR__ + __GNUC_PATCHLEVEL__) \
+				   & 0xff) << (BITS_PER_LONG - 8))
+
 #endif /* _VDSO_GETRANDOM_H */
diff --git a/lib/vdso/getrandom.c b/lib/vdso/getrandom.c
index 9ca624756432..14cbd349186c 100644
--- a/lib/vdso/getrandom.c
+++ b/lib/vdso/getrandom.c
@@ -57,7 +57,7 @@ __cvdso_getrandom_data(const struct vdso_rng_data *rng_info, void *buffer, size_
 		       unsigned int flags, void *opaque_state)
 {
 	ssize_t ret = min_t(size_t, INT_MAX & PAGE_MASK /* = MAX_RW_COUNT */, len);
-	struct vgetrandom_state *state = opaque_state;
+	struct vgetrandom_state *state = (void *)((unsigned long)opaque_state ^ VGETRANDOM_STATE_HI_TAINT);
 	size_t batch_len, nblocks, orig_len = len;
 	unsigned long current_generation;
 	void *orig_buffer = buffer;

^ permalink raw reply related	[flat|nested] 37+ messages in thread

* Re: [PATCH v10 1/4] random: add vgetrandom_alloc() syscall
  2022-11-30 16:38       ` Jason A. Donenfeld
@ 2022-12-02 14:38         ` Jason A. Donenfeld
  0 siblings, 0 replies; 37+ messages in thread
From: Jason A. Donenfeld @ 2022-12-02 14:38 UTC (permalink / raw)
  To: Florian Weimer
  Cc: linux-kernel, patches, tglx, linux-crypto, linux-api, x86,
	Greg Kroah-Hartman, Adhemerval Zanella Netto,
	Carlos O'Donell, Arnd Bergmann, Christian Brauner

On Wed, Nov 30, 2022 at 05:38:13PM +0100, Jason A. Donenfeld wrote:
> On Wed, Nov 30, 2022 at 04:39:55PM +0100, Jason A. Donenfeld wrote:
> > 2) Convert vgetrandom_alloc() into a clone3-style syscall, as Christian
> >    suggested earlier, which might allow for a bit more overloading
> >    capability. That would be a struct that looks like:
> > 
> >       struct vgetrandom_alloc_args {
> > 	  __aligned_u64 flags;
> >           __aligned_u64 states;
> > 	  __aligned_u64 num;
> > 	  __aligned_u64 size_of_each;
> >       }
> > 
> >   - If flags is VGRA_ALLOCATE, states and size_of_each must be zero on
> >     input, while num is the hint, as is the case now. On output, states,
> >     size_of_each, and num are filled in.
> > 
> >   - If flags is VGRA_DEALLOCATE, states, size_of_each, and num must be as
> >     they were originally, and then it deallocates.
> > 
> > I suppose (2) would alleviate your concerns entirely, without future
> > uncertainty over what it'd be like to add special cases to munmap(). And
> > it'd add a bit more future proofing to the syscall, depending on what we
> > do.
> > 
> > So maybe I'm warming up to that approach a bit.
> 
> So I just did a little quick implementation to see what it'd feel like,
> and actually, it's quite simple, and might address a lot of concerns all
> at once. What do you think of the below? Documentation and such still
> needs work obviously, but the bones should be there.

Well, despite writing into the ether here, I continue to chase my tail
around in circles over this. After Adhemerval expressed a sort of "meh"
opinion to me on IRC around doing the clone3-like thing, I went down a
mm rabbit hole and started looking at all the various ways memory is
allocated in userspace and under what conditions and for what and why.
Turns out there are a few drivers doing interesting things in this
space.

The long and short of it is that:
- All addresses involve maps and page tables.
- Allocating is mapping, deallocating is unmapping, and there's no way
  around that.
- Memory that's "special" usually comes with special attributes or
  operations on its vma.

So, this makes me think that `munmap` is a fine *and correct* API for
deallocation. It's what everything else uses, even "special" things. And
it doesn't constrain us in the future in case this gets "registered"
somehow, as Florian described it, because it's still attached to
current->mm and will still always go through the same mapping APIs
anyway.

In light of that, I'm going to stick with the original API design, and
not do the clone3() args struct thing and the VGRA_DEALLOCATE flag.
However, I think it'd be a good idea to add an additional parameter of
"unsigned long addr", which is enforced/reserved to be always 0 for now.
This might prove useful for something together with the currently unused
flags argument, sometime in the future.

Jason

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH v10 1/4] random: add vgetrandom_alloc() syscall
  2022-11-30 15:39     ` Jason A. Donenfeld
  2022-11-30 16:38       ` Jason A. Donenfeld
  2022-12-01  2:16       ` Jason A. Donenfeld
@ 2022-12-02 17:17       ` Florian Weimer
  2022-12-02 18:29         ` Jason A. Donenfeld
  2 siblings, 1 reply; 37+ messages in thread
From: Florian Weimer @ 2022-12-02 17:17 UTC (permalink / raw)
  To: Jason A. Donenfeld
  Cc: linux-kernel, patches, tglx, linux-crypto, linux-api, x86,
	Greg Kroah-Hartman, Adhemerval Zanella Netto,
	Carlos O'Donell, Arnd Bergmann, Christian Brauner

* Jason A. Donenfeld:

> I don't think zapping that memory is supported, or even a sensible thing
> to do. In the first place, I don't think we should suggest that the user
> can dereference that pointer, at all. In that sense, maybe it's best to
> call it a "handle" or something similar (a "HANDLE"! a "HWND"? a "HRNG"?

Surely the caller has to carve up the allocation, so the returned
pointer is not opaque at all.  From Adhemerval's glibc patch:

      grnd_allocator.cap = new_cap;
      grnd_allocator.states = new_states;

      for (size_t i = 0; i < num; ++i)
	{
	  grnd_allocator.states[i] = new_block;
	  new_block += size_per_each;
	}
      grnd_allocator.len = num;
    }

That's the opposite of a handle, really.

>> But it will constrain future
>> evolution of the implementation because you can't add registration
>> (retaining a reference to the passed-in area in getrandom) after the
>> fact.  But I'm not sure if this is possible with the current interface,
>> either.  Userspace has to make some assumptions about the life-cycle to
>> avoid a memory leak on thread exit.
>
> It sounds like this is sort of a different angle on Rasmus' earlier
> comment about how munmap leaks implementation details. Maybe there's
> something to that after all? Or not? I see two approaches:
>
> 1) Keep munmap as the deallocation function. If later on we do fancy
>    registration and in-kernel state tracking, or add fancy protection
>    flags, or whatever else, munmap should be able to identify these
>    pages and carry out whatever special treatment is necessary.

munmap is fine, but the interface needs to say how to use it, and what
length to pass.

>> > +	num_states = clamp_t(size_t, num_hint, 1, (SIZE_MAX & PAGE_MASK) / state_size);
>> > +	alloc_size = PAGE_ALIGN(num_states * state_size);
>> 
>> Doesn't this waste space for one state if state_size happens to be a
>> power of 2?  Why do this SIZE_MAX & PAGE_MASK thing at all?  Shouldn't
>> it be PAGE_SIZE / state_size?
>
> The first line is a clamp. That fixes num_hint between 1 and the largest
> number that when multiplied and rounded up won't overflow.
>
> So, if state_size is a power of two, let's say 256, and there's only one
> state, here's what that looks like:
>
>     num_states = clamp(1, 1, (0xffffffff & (~(4096 - 1))) / 256 = 16777200) = 1
>     alloc_size = PAGE_ALIGN(1 * 256) = 4096
>
> So that seems like it's working as intended, right? Or if not, maybe
> it'd help to write out the digits you're concerned about?

I think I was just confused.

>> > +	if (put_user(alloc_size / state_size, num) || put_user(state_size, size_per_each))
>> > +		return -EFAULT;
>> > +
>> > +	pages_addr = vm_mmap(NULL, 0, alloc_size, PROT_READ | PROT_WRITE,
>> > +			     MAP_PRIVATE | MAP_ANONYMOUS | MAP_LOCKED, 0);
>> 
>> I think Rasmus has already raised questions about MAP_LOCKED.
>> 
>> I think the kernel cannot rely on it because userspace could call
>> munlock on the allocation.
>
> Then they're caught holding the bag? This doesn't seem much different
> from userspace shooting themselves in general, like writing garbage into
> the allocated states and then trying to use them. If this is something
> you really, really are concerned about, then maybe my cheesy dumb xor
> thing mentioned above would be a low effort mitigation here.

So the MAP_LOCKED is just there to prevent leakage to swap?

Thanks,
Florian


^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH v10 1/4] random: add vgetrandom_alloc() syscall
  2022-12-02 17:17       ` Florian Weimer
@ 2022-12-02 18:29         ` Jason A. Donenfeld
  0 siblings, 0 replies; 37+ messages in thread
From: Jason A. Donenfeld @ 2022-12-02 18:29 UTC (permalink / raw)
  To: Florian Weimer
  Cc: linux-kernel, patches, tglx, linux-crypto, linux-api, x86,
	Greg Kroah-Hartman, Adhemerval Zanella Netto,
	Carlos O'Donell, Arnd Bergmann, Christian Brauner

Hi Florian,

On Fri, Dec 02, 2022 at 06:17:17PM +0100, Florian Weimer wrote:
> * Jason A. Donenfeld:
> 
> > I don't think zapping that memory is supported, or even a sensible thing
> > to do. In the first place, I don't think we should suggest that the user
> > can dereference that pointer, at all. In that sense, maybe it's best to
> > call it a "handle" or something similar (a "HANDLE"! a "HWND"? a "HRNG"?
> 
> Surely the caller has to carve up the allocation, so the returned
> pointer is not opaque at all.  From Adhemerval's glibc patch:
> 
>       grnd_allocator.cap = new_cap;
>       grnd_allocator.states = new_states;
> 
>       for (size_t i = 0; i < num; ++i)
> 	{
> 	  grnd_allocator.states[i] = new_block;
> 	  new_block += size_per_each;
> 	}
>       grnd_allocator.len = num;
>     }
> 
> That's the opposite of a handle, really.

Right. (And the same code is in the commit message example too.)

> 
> >> But it will constrain future
> >> evolution of the implementation because you can't add registration
> >> (retaining a reference to the passed-in area in getrandom) after the
> >> fact.  But I'm not sure if this is possible with the current interface,
> >> either.  Userspace has to make some assumptions about the life-cycle to
> >> avoid a memory leak on thread exit.
> >
> > It sounds like this is sort of a different angle on Rasmus' earlier
> > comment about how munmap leaks implementation details. Maybe there's
> > something to that after all? Or not? I see two approaches:
> >
> > 1) Keep munmap as the deallocation function. If later on we do fancy
> >    registration and in-kernel state tracking, or add fancy protection
> >    flags, or whatever else, munmap should be able to identify these
> >    pages and carry out whatever special treatment is necessary.
> 
> munmap is fine, but the interface needs to say how to use it, and what
> length to pass.

Glad we're on the same page. Indeed I've now documented this for my
in-progress v11. A blurb like:

+ * sys_vgetrandom_alloc - Allocate opaque states for use with vDSO getrandom().
+ *
+ * @num:          On input, a pointer to a suggested hint of how many states to
+ *                allocate, and on return the number of states actually allocated.
+ *
+ * @size_per_each: On input, must be zero. On return, the size of each state allocated,
+ *                so that the caller can split up the returned allocation into
+ *                individual states.
+ *
+ * @addr:         Reserved, must be zero.
+ *
+ * @flags:        Reserved, must be zero.
+ *
+ * The getrandom() vDSO function in userspace requires an opaque state, which
+ * this function allocates by mapping a certain number of special pages into
+ * the calling process. It takes a hint as to the number of opaque states
+ * desired, and provides the caller with the number of opaque states actually
+ * allocated, the size of each one in bytes, and the address of the first
+ * state, which may be split up into @num states of @size_per_each bytes each,
+ * by adding @size_per_each to the returned first state @num times.
+ *
+ * Returns the address of the first state in the allocation on success, or a
+ * negative error value on failure.
+ *
+ * The returned address of the first state may be passed to munmap(2) with a
+ * length of `(size_t)num * (size_t)size_per_each`, in order to deallocate the
+ * memory, after which it is invalid to pass it to vDSO getrandom().

What do you think of that text?

> > Then they're caught holding the bag? This doesn't seem much different
> > from userspace shooting themselves in general, like writing garbage into
> > the allocated states and then trying to use them. If this is something
> > you really, really are concerned about, then maybe my cheesy dumb xor
> > thing mentioned above would be a low effort mitigation here.
> 
> So the MAP_LOCKED is just there to prevent leakage to swap?

Right. I can combine that with MLOCK_ONFAULT and MAP_NORESERVE to avoid
having to commit the memory immediately. I've got this in my tree for
v11.

In case you're curious to see the WIP, it's in here:
https://git.zx2c4.com/linux-rng/log/?h=vdso

Jason


end of thread, other threads:[~2022-12-02 18:29 UTC | newest]

Thread overview: 37+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-11-29 21:06 [PATCH v10 0/4] implement getrandom() in vDSO Jason A. Donenfeld
2022-11-29 21:06 ` [PATCH v10 1/4] random: add vgetrandom_alloc() syscall Jason A. Donenfeld
2022-11-29 22:02   ` Thomas Gleixner
2022-11-30  0:59     ` Jason A. Donenfeld
2022-11-30  1:37       ` Thomas Gleixner
2022-11-30  1:42         ` Jason A. Donenfeld
2022-11-30 22:39     ` David Laight
2022-12-01  0:14       ` Jason A. Donenfeld
2022-11-30 10:51   ` Florian Weimer
2022-11-30 15:39     ` Jason A. Donenfeld
2022-11-30 16:38       ` Jason A. Donenfeld
2022-12-02 14:38         ` Jason A. Donenfeld
2022-12-01  2:16       ` Jason A. Donenfeld
2022-12-02 17:17       ` Florian Weimer
2022-12-02 18:29         ` Jason A. Donenfeld
2022-11-29 21:06 ` [PATCH v10 2/4] arch: allocate vgetrandom_alloc() syscall number Jason A. Donenfeld
2022-11-30  8:56   ` Geert Uytterhoeven
2022-11-30 10:06     ` Jason A. Donenfeld
2022-11-30 10:51       ` Arnd Bergmann
2022-11-29 21:06 ` [PATCH v10 3/4] random: introduce generic vDSO getrandom() implementation Jason A. Donenfeld
2022-11-29 22:42   ` Thomas Gleixner
2022-11-30  1:09     ` Jason A. Donenfeld
2022-11-30 10:44   ` Florian Weimer
2022-11-30 14:51     ` Jason A. Donenfeld
2022-11-30 14:59       ` Jason A. Donenfeld
2022-11-30 15:07         ` Arnd Bergmann
2022-11-30 15:12           ` Jason A. Donenfeld
2022-11-30 15:29             ` Arnd Bergmann
2022-11-30 15:47               ` Jason A. Donenfeld
2022-11-30 16:13                 ` Arnd Bergmann
2022-11-30 16:40                   ` Jason A. Donenfeld
2022-11-30 17:00                 ` Thomas Gleixner
2022-11-29 21:06 ` [PATCH v10 4/4] x86: vdso: Wire up getrandom() vDSO implementation Jason A. Donenfeld
2022-11-29 22:52   ` Thomas Gleixner
2022-11-30  1:11     ` Jason A. Donenfeld
2022-11-30  5:22   ` Eric Biggers
2022-11-30 10:12     ` Jason A. Donenfeld
