* [PATCH v2 bpf-next 0/5] Make BPF CGROUP_STORAGE map usable by different programs at once
@ 2020-07-09 22:54 YiFei Zhu
  2020-07-09 22:54 ` [PATCH v2 bpf-next 1/5] selftests/bpf: Add test for CGROUP_STORAGE map on multiple attaches YiFei Zhu
                   ` (4 more replies)
  0 siblings, 5 replies; 9+ messages in thread
From: YiFei Zhu @ 2020-07-09 22:54 UTC (permalink / raw)
  To: bpf
  Cc: Alexei Starovoitov, Daniel Borkmann, Stanislav Fomichev,
	Mahesh Bandewar, Roman Gushchin, Andrii Nakryiko, YiFei Zhu

From: YiFei Zhu <zhuyifei@google.com>

To access the storage in a CGROUP_STORAGE map, one uses the
bpf_get_local_storage helper, which is extremely fast due to its
use of per-CPU variables. However, its code is built entirely on
the assumption that one map can only be used by one program at any
time, which prohibits any sharing of data between multiple
programs using these maps and rules out many use cases, such as
per-cgroup configuration storage written to by a setsockopt
program and read by a cg_sock_addr program.
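
As an illustration, here is a minimal sketch of the use case this
series enables; the map name, value layout, and policy logic are
invented for this example and not taken from the patches:

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct {
	__uint(type, BPF_MAP_TYPE_CGROUP_STORAGE);
	__type(key, struct bpf_cgroup_storage_key);
	__type(value, __u32);
} cgroup_cfg SEC(".maps");

SEC("cgroup/setsockopt")
int write_cfg(struct bpf_sockopt *ctx)
{
	__u32 *cfg = bpf_get_local_storage(&cgroup_cfg, 0);

	/* Record a per-cgroup flag; the kernel selects the storage
	 * belonging to the cgroup this program is attached to.
	 */
	*cfg = 1;
	return 1;
}

SEC("cgroup/connect4")
int read_cfg(struct bpf_sock_addr *ctx)
{
	__u32 *cfg = bpf_get_local_storage(&cgroup_cfg, 0);

	/* With shared storages, *cfg is the value written by the
	 * setsockopt program on the same cgroup; as an example
	 * policy, allow the connect only if the flag is set.
	 */
	return *cfg ? 1 : 0;
}

char _license[] SEC("license") = "GPL";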

Why not use other map types? The great advantage of a
CGROUP_STORAGE map is that it is isolated per cgroup it is
attached to: when a program calls bpf_get_local_storage, even on
the same map, it gets a different storage depending on which
cgroup it was attached to when it ran. The kernel manages the
storages, which simplifies both the BPF program and userspace. In
theory, one could use other map types such as array or hash to do
the same thing, but that would add major overhead and complexity;
userspace would, for example, need to know when a cgroup is freed
in order to free the corresponding space in the replacement map.

This patch set introduces a significant change to the semantics of
the CGROUP_STORAGE map type. Instead of each storage being tied to
a single attachment, the storage is shared across different
attachments to the same cgroup, and persists until either the map
or the cgroup it is attached to is freed.

How could this break existing users?
* Users that use detach & reattach / program replacement as a
  shortcut to zeroing the storage. Since we need sharing between
  programs, we cannot zero the storage on attach. Users that
  expect this behavior should either attach a program with a new
  map, or explicitly zero the map with a syscall, as sketched
  after this list.
* Programs that expect isolation between different attach types.
  In reality, attaching the same program to different attach
  types, relying on expected_attach_type not being enforced,
  should rarely happen, if at all.
Both cases depend on undocumented implementation details, so the
impact should be very minimal.
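
For the last option, a hedged userspace sketch of zeroing the
storage with the map-update syscall; zero_cgroup_storage is an
invented helper and the 4-byte value size is an assumption, but
the libbpf wrappers bpf_map_get_next_key and bpf_map_update_elem
are real:

#include <linux/bpf.h>
#include <bpf/bpf.h>

/* Walk every (cgroup id, attach type) key in the storage map and
 * overwrite its value with zeroes. Assumes the map was created
 * with a 4-byte value; a real caller would match the map's actual
 * value size.
 */
static int zero_cgroup_storage(int map_fd)
{
	struct bpf_cgroup_storage_key key, next_key;
	__u32 zero = 0;
	void *cur = NULL;

	while (!bpf_map_get_next_key(map_fd, cur, &next_key)) {
		if (bpf_map_update_elem(map_fd, &next_key, &zero, 0))
			return -1;
		key = next_key;
		cur = &key;
	}
	return 0;
}

Note that userspace cannot create or delete entries in this map
type; the update only overwrites storages the kernel has already
allocated.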

Patch 1 introduces a test of the map type's previously expected
behavior.

Patch 2 introduces a test showing that two programs cannot share
one such map.

Patch 3 implements the change of semantics to the map.

Patch 4 amends the new test so that it verifies the behavior we
expect after the change.

Patch 5 documents the map type.

Changes since RFC:
* Clarified the commit message in patch 3 so that it says the
  lifetime of the storage ends when the cgroup_bpf is freed,
  rather than the cgroup itself.
* Restored an -ENOMEM check in __cgroup_bpf_attach.
* Updated selftests for a recent change in the network_helpers API.

Changes since v1:
* s/CHECK_FAIL/CHECK/
* s/bpf_prog_attach/bpf_program__attach_cgroup/
* Moved test__start_subtest to test_cg_storage_multi.
* Removed some redundant CHECK_FAILs where the conditions are
  already CHECK-ed.

YiFei Zhu (5):
  selftests/bpf: Add test for CGROUP_STORAGE map on multiple attaches
  selftests/bpf: Test CGROUP_STORAGE map can't be used by multiple progs
  bpf: Make cgroup storages shared across attaches on the same cgroup
  selftests/bpf: Test CGROUP_STORAGE behavior on shared egress + ingress
  Documentation/bpf: Document CGROUP_STORAGE map type

 Documentation/bpf/index.rst                   |   9 +
 Documentation/bpf/map_cgroup_storage.rst      |  95 +++++++
 include/linux/bpf-cgroup.h                    |  15 +-
 include/uapi/linux/bpf.h                      |   2 +-
 kernel/bpf/cgroup.c                           |  42 ++-
 kernel/bpf/core.c                             |  12 -
 kernel/bpf/local_storage.c                    |  77 +++--
 tools/include/uapi/linux/bpf.h                |   2 +-
 .../bpf/prog_tests/cg_storage_multi.c         | 265 ++++++++++++++++++
 .../selftests/bpf/progs/cg_storage_multi.h    |  13 +
 .../progs/cg_storage_multi_egress_ingress.c   |  45 +++
 .../bpf/progs/cg_storage_multi_egress_only.c  |  33 +++
 12 files changed, 520 insertions(+), 90 deletions(-)
 create mode 100644 Documentation/bpf/map_cgroup_storage.rst
 create mode 100644 tools/testing/selftests/bpf/prog_tests/cg_storage_multi.c
 create mode 100644 tools/testing/selftests/bpf/progs/cg_storage_multi.h
 create mode 100644 tools/testing/selftests/bpf/progs/cg_storage_multi_egress_ingress.c
 create mode 100644 tools/testing/selftests/bpf/progs/cg_storage_multi_egress_only.c

-- 
2.27.0


