From: Ofir Gal <ofir.gal@volumez.com>
To: ming.lei@redhat.com
Cc: axboe@kernel.dk, cgroups@vger.kernel.org,
	linux-block@vger.kernel.org, linux-kernel@vger.kernel.org,
	tj@kernel.org, yi.zhang@huawei.com, yukuai3@huawei.com,
	Ofir Gal <ofir@gal.software>
Subject: [PATCH -next] blk-throttle: enable io throttle for root in cgroup v2
Date: Sun,  5 Feb 2023 17:55:41 +0200	[thread overview]
Message-ID: <20230205155541.1320485-1-ofir.gal@volumez.com> (raw)
In-Reply-To: <YgMxjyVjMjmkMQU5@T590>

From: Ofir Gal <ofir@gal.software>

Hello Ming Lei,

I am trying to use cgroup v2 to throttle a media disk that is served by an NVMe target.
Unfortunately, this cannot be done without setting the limit in the root cgroup,
which cgroup v2 does not allow; with cgroup v1 it works. Yu Kuai's patch makes this possible.

My setup consists of 3 servers.
Server #1:
    a. SSD media disk (needs to be throttled to 100K IOPS)
    b. NVMe target exporting the SSD (1.a)

Server #2:
    a. NVMe initiator connected to Server #1's NVMe target (1.b)

Server #3:
    a. NVMe initiator connected to Server #1's NVMe target (1.b)

My setup accesses this media from multiple servers using NVMe over TCP,
but the initiator servers' workloads are unknown and can change dynamically,
so I need to limit the media disk to 100K IOPS on the target side.

I have tried to limit the SSD on Server #1, but the NVMe target kworkers are not affected unless I apply Yu Kuai's patch.
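
As far as I understand, this is because the nvmet I/O is issued from kernel worker threads, which remain in the root cgroup. A kernel thread's /proc/<pid>/cgroup entry on cgroup v2 typically reads "0::/", i.e. the root cgroup, where no io.max file exists (the cgroup_path helper below is just for illustration):

```shell
# parse the cgroup path out of a /proc/<pid>/cgroup entry;
# the entry format is hierarchy-ID:controllers:path
cgroup_path() {
    echo "$1" | cut -d: -f3
}

cgroup_path "0::/"          # kernel worker -> "/" (root cgroup)
cgroup_path "0::/limited"   # a task in a child cgroup -> "/limited"
```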

Can you elaborate on the issues with this patch, or on how the scenario described above can be handled with cgroup v2?

Best regards,
Ofir Gal

Thread overview: 19+ messages
2022-01-14  9:30 [PATCH -next] blk-throttle: enable io throttle for root in cgroup v2 Yu Kuai
2022-01-26 17:29 ` Tejun Heo
2022-01-27  2:36   ` yukuai (C)
2022-02-01 17:20     ` Tejun Heo
2022-02-08  1:38       ` yukuai (C)
2022-02-08 18:49         ` Tejun Heo
2022-02-09  1:22           ` yukuai (C)
2023-02-06 15:10             ` Michal Koutný
2022-02-09  3:14 ` Ming Lei
2023-02-05 15:55   ` Ofir Gal [this message]
2023-02-06 15:00     ` Michal Koutný

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=20230205155541.1320485-1-ofir.gal@volumez.com \
    --to=ofir.gal@volumez.com \
    --cc=axboe@kernel.dk \
    --cc=cgroups@vger.kernel.org \
    --cc=linux-block@vger.kernel.org \
    --cc=linux-kernel@vger.kernel.org \
    --cc=ming.lei@redhat.com \
    --cc=ofir@gal.software \
    --cc=tj@kernel.org \
    --cc=yi.zhang@huawei.com \
    --cc=yukuai3@huawei.com \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link

  Be sure your reply has a Subject: header at the top and a
  blank line before the message body.
This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.