From: Junyeong Jeong <esrse.jeong@gmail.com>
To: kernelnewbies@kernelnewbies.org
Subject: /sys/devices/system/cpu/possible is immutable?
Date: Thu, 18 Mar 2021 23:10:18 +0900
Message-ID: <87a6r01pfs.fsf@gmail.com>

Hello everyone :)

I hope the kernelnewbies mailing list is a suitable place for asking my
question.

I am wondering whether the possible CPU mask
(/sys/devices/system/cpu/possible) can be changed after boot in some
way or other. I have read that it is fixed at boot time
(https://elixir.bootlin.com/linux/v5.8/source/include/linux/cpumask.h#L50),
but I am not convinced that it is really immutable even when some
cgroup or virtualization magic is used.

Let me explain why I am curious about this.
I am currently developing a BPF library written in Rust.
In order to call `bpf_lookup_elem()` to get values from a
BPF_MAP_TYPE_PERCPU_ARRAY map in userspace, we need to know the correct
number of per-CPU areas before making the call, because an output
buffer large enough to hold every per-CPU value has to be allocated and
passed to `bpf_lookup_elem()`. This process rests entirely on the
assumption that the number of per-CPU areas is immutable.
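
If it helps to make this concrete, here is a minimal sketch of that
lookup path, written in C against libbpf for brevity (map_fd, key and
value_size stand in for a map created elsewhere):

#include <stdlib.h>
#include <bpf/bpf.h>
#include <bpf/libbpf.h>

/* Sketch: read one entry of a BPF_MAP_TYPE_PERCPU_ARRAY from
 * userspace. The kernel fills one slot per *possible* CPU, and
 * rounds each per-CPU slot up to 8 bytes when copying values out,
 * so the buffer must be sized accordingly. */
static void *lookup_percpu(int map_fd, __u32 key, size_t value_size)
{
	int ncpus = libbpf_num_possible_cpus();
	size_t slot = (value_size + 7) & ~(size_t)7; /* 8-byte rounding */
	void *values;

	if (ncpus < 0)
		return NULL;
	values = calloc(ncpus, slot);
	if (!values)
		return NULL;
	if (bpf_map_lookup_elem(map_fd, &key, values)) {
		free(values);
		return NULL;
	}
	return values; /* values + i * slot holds CPU i's copy */
}

Note that libbpf_num_possible_cpus() itself gets the count by reading
/sys/devices/system/cpu/possible, which is exactly the dependency I am
asking about.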

I am reading /sys/devices/system/cpu/possible to find out the number
of per-CPU areas, and I don't know of a better way to figure it out.
What I am anxious about is that the number of per-CPU areas might vary
over time under some circumstances involving cgroup or virtualization
magic.
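
For reference, the file holds a comma-separated range list such as
"0-7" or "0,2-5", so counting the possible CPUs amounts to summing the
range widths. Roughly how I interpret it (a sketch that assumes the
usual well-formed sysfs output):

#include <stdio.h>

/* Count CPUs in a range list such as "0-7" or "0,2-5". */
static int count_possible_cpus(void)
{
	FILE *f = fopen("/sys/devices/system/cpu/possible", "r");
	int n = 0, lo, hi, rc;

	if (!f)
		return -1;
	while ((rc = fscanf(f, "%d-%d", &lo, &hi)) >= 1) {
		n += (rc == 2) ? hi - lo + 1 : 1; /* range or single CPU */
		if (fgetc(f) != ',')              /* ',' separates ranges */
			break;
	}
	fclose(f);
	return n;
}

If the mask could ever be sparse, indexing the per-CPU slots by
position would also need more care than this.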

So I checked a couple of ordinary cgroup and virtualization use cases,
neither of which affected the possible CPU mask:

--
1.
$ docker run --cpuset-cpus=0-3 -it ubuntu:20.10 bash

This does not affect /sys/devices/system/cpu/possible at all; the
value it contains is the same as on the host machine. (A one-line
check is shown after this list.)

2.
$ virsh setvcpus --current ubuntu20.10 5

Before starting the guest OS, the maximum number of vCPUs was set to 8
and the current number to 4. While the guest OS was running, I raised
the number of vCPUs to 5 and, _inside the guest OS_, enabled the new
CPU by writing 1 to /sys/devices/system/cpu/cpu4/online. But
/sys/devices/system/cpu/possible in the guest OS did not change, as
expected. (The exact commands are listed below.)
--
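
For completeness, the two checks boil down to these commands (the
guest's mask stays at 0-7 here because 8 is the configured vCPU
maximum):

$ docker run --rm --cpuset-cpus=0-3 ubuntu:20.10 \
      cat /sys/devices/system/cpu/possible   # prints the host's value

# on the host:
$ virsh setvcpus --current ubuntu20.10 5

# inside the guest:
$ echo 1 | sudo tee /sys/devices/system/cpu/cpu4/online
$ cat /sys/devices/system/cpu/possible
0-7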

While conducting these tests, I realized that it is not possible to
prove the immutability of the possible CPU mask by testing cases one
by one, because there must be corner cases that I can never imagine.


Can anyone confirm that the possible CPU mask, and therefore the
number of per-CPU areas, never changes after boot time, even under
cgroup magic or tricks applied from outside the hypervisor?

Thanks,
    Junyeong

_______________________________________________
Kernelnewbies mailing list
Kernelnewbies@kernelnewbies.org
https://lists.kernelnewbies.org/mailman/listinfo/kernelnewbies
