From: Avi Kivity
To: Jens Axboe
Cc: linux-kernel@vger.kernel.org, linux-block@vger.kernel.org
Subject: [PATCH v1] io_uring: reserve word at cqring tail+4 for the user
Date: Tue, 17 Sep 2019 12:13:58 +0300
Message-Id: <20190917091358.3652-1-avi@scylladb.com>

In some applications, a thread waits for I/O events generated by the
kernel, and also for events generated by other threads in the same
application. Typically, events from other threads are passed using
in-memory queues that are not known to the kernel.
As long as the thread is active, it polls for both kernel completions
and inter-thread completions; when it is idle, it tells the other
threads to use an I/O event to wake it up (e.g. an eventfd or a pipe)
and then enters the kernel, waiting for such an event or an ordinary
I/O completion.

When such a thread goes idle, it typically spins for a while to avoid
the kernel entry/exit cost in case an event is forthcoming shortly.
While it spins, it polls both I/O completions and inter-thread queues.

The x86 instruction pair UMONITOR/UMWAIT allows waiting for a cache
line to be written to. This can be used with io_uring to wait for a
wakeup without spinning (and without wasting power and slowing down
the other hyperthread). Other threads can also wake up the waiter by
doing a safe write to the tail word (which triggers the wakeup), but
safe writes are slow as they require an atomic instruction. To speed
up those wakeups, reserve a word after the tail for user writes.

A thread consuming an io_uring completion queue can then use the
following sequences:

 - while busy:
   - pick up work from the completion queue and from other threads,
     and process it
 - while idle:
   - use UMONITOR/UMWAIT to wait on completions and notifications
     from other threads for a short period
   - if no work is picked up, let other threads know you will need
     a kernel wakeup, and use io_uring_enter to wait indefinitely

Signed-off-by: Avi Kivity
---
 fs/io_uring.c                 | 5 +++--
 include/uapi/linux/io_uring.h | 4 ++++
 2 files changed, 7 insertions(+), 2 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index cfb48bd088e1..4bd7905cee1d 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -77,12 +77,13 @@
 #define IORING_MAX_ENTRIES	4096
 #define IORING_MAX_FIXED_FILES	1024
 
 struct io_uring {
-	u32 head ____cacheline_aligned_in_smp;
-	u32 tail ____cacheline_aligned_in_smp;
+	u32 head ____cacheline_aligned;
+	u32 tail ____cacheline_aligned;
+	u32 reserved_for_user; // for cq ring and UMONITOR/UMWAIT (or similar) wakeups
 };
 
 /*
 * This data is shared with the application through the mmap at offset
 * IORING_OFF_SQ_RING.

diff --git a/include/uapi/linux/io_uring.h b/include/uapi/linux/io_uring.h
index 1e1652f25cc1..1a6a826a66f3 100644
--- a/include/uapi/linux/io_uring.h
+++ b/include/uapi/linux/io_uring.h
@@ -103,10 +103,14 @@ struct io_sqring_offsets {
  */
 #define IORING_SQ_NEED_WAKEUP	(1U << 0) /* needs io_uring_enter wakeup */
 
 struct io_cqring_offsets {
 	__u32 head;
+	// tail is guaranteed to be aligned on a cache line, and to have the
+	// following __u32 free for user use. This allows using e.g.
+	// UMONITOR/UMWAIT to wait on both writes to head and writes from
+	// other threads to the following word.
 	__u32 tail;
 	__u32 ring_mask;
 	__u32 ring_entries;
 	__u32 overflow;
 	__u32 cqes;
-- 
2.21.0