Subject: Re: [PATCH 05/19] Add io_uring IO interface
From: Jens Axboe
To: Jann Horn
Cc: linux-aio@kvack.org, linux-block@vger.kernel.org, Linux API,
 hch@lst.de, jmoyer@redhat.com, Avi Kivity, Al Viro
Date: Tue, 12 Feb 2019 15:06:16 -0700
References: <20190208173423.27014-1-axboe@kernel.dk>
 <20190208173423.27014-6-axboe@kernel.dk>
 <42eea00c-81fb-2e28-d884-03be5bb229c8@kernel.dk>
 <1ca9f039-c6f0-cae7-8484-7db0a4e4e213@kernel.dk>
In-Reply-To: <1ca9f039-c6f0-cae7-8484-7db0a4e4e213@kernel.dk>
X-Mailing-List: linux-block@vger.kernel.org

On 2/12/19 3:03 PM, Jens Axboe wrote:
> On 2/12/19 2:42 PM, Jann Horn wrote:
>> On Sat, Feb 9, 2019 at 5:15 AM Jens Axboe wrote:
>>> On 2/8/19 3:12 PM, Jann Horn wrote:
>>>> On Fri, Feb 8, 2019 at 6:34 PM Jens Axboe wrote:
>>>>> The submission queue (SQ) and completion queue (CQ) rings are shared
>>>>> between the application and the kernel. This eliminates the need to
>>>>> copy data back and forth to submit and complete IO.
>>>>>
>>>>> IO submissions use the io_uring_sqe data structure, and completions
>>>>> are generated in the form of io_uring_cqe data structures. The SQ
>>>>> ring is an index into the io_uring_sqe array, which makes it possible
>>>>> to submit a batch of IOs without them being contiguous in the ring.
>>>>> The CQ ring is always contiguous, as completion events are inherently
>>>>> unordered, and hence any io_uring_cqe entry can point back to an
>>>>> arbitrary submission.
>>>>>
>>>>> Two new system calls are added for this:
>>>>>
>>>>> io_uring_setup(entries, params)
>>>>>         Sets up an io_uring instance for doing async IO. On success,
>>>>>         returns a file descriptor that the application can mmap to
>>>>>         gain access to the SQ ring, CQ ring, and io_uring_sqes.
>>>>>
>>>>> io_uring_enter(fd, to_submit, min_complete, flags, sigset, sigsetsize)
>>>>>         Initiates IO against the rings mapped to this fd, or waits for
>>>>>         them to complete, or both. The behavior is controlled by the
>>>>>         parameters passed in. If 'to_submit' is non-zero, then we'll
>>>>>         try and submit new IO. If IORING_ENTER_GETEVENTS is set, the
>>>>>         kernel will wait for 'min_complete' events, if they aren't
>>>>>         already available. It's valid to set IORING_ENTER_GETEVENTS
>>>>>         and 'min_complete' == 0 at the same time, this allows the
>>>>>         kernel to return already completed events without waiting
>>>>>         for them. This is useful only for polling, as for IRQ
>>>>>         driven IO, the application can just check the CQ ring
>>>>>         without entering the kernel.
>>>>>
>>>>> With this setup, it's possible to do async IO with a single system
>>>>> call. Future developments will enable polled IO with this interface,
>>>>> and polled submission as well. The latter will enable an application
>>>>> to do IO without doing ANY system calls at all.
>>>>>
>>>>> For IRQ driven IO, an application only needs to enter the kernel for
>>>>> completions if it wants to wait for them to occur.
>>>>>
>>>>> Each io_uring is backed by a workqueue, to support buffered async IO
>>>>> as well.
>>>>> We will only punt to an async context if the command would
>>>>> need to wait for IO on the device side. Any data that can be accessed
>>>>> directly in the page cache is done inline. This avoids the slowness
>>>>> issue of usual threadpools, since cached data is accessed as quickly
>>>>> as a sync interface.
>> [...]
>>>>> +static int io_submit_sqe(struct io_ring_ctx *ctx, const struct sqe_submit *s)
>>>>> +{
>>>>> +        struct io_kiocb *req;
>>>>> +        ssize_t ret;
>>>>> +
>>>>> +        /* enforce forwards compatibility on users */
>>>>> +        if (unlikely(s->sqe->flags))
>>>>> +                return -EINVAL;
>>>>> +
>>>>> +        req = io_get_req(ctx);
>>>>> +        if (unlikely(!req))
>>>>> +                return -EAGAIN;
>>>>> +
>>>>> +        req->rw.ki_filp = NULL;
>>>>> +
>>>>> +        ret = __io_submit_sqe(ctx, req, s, true);
>>>>> +        if (ret == -EAGAIN) {
>>>>> +                memcpy(&req->submit, s, sizeof(*s));
>>>>> +                INIT_WORK(&req->work, io_sq_wq_submit_work);
>>>>> +                queue_work(ctx->sqo_wq, &req->work);
>>>>> +                ret = 0;
>>>>> +        }
>>>>> +        if (ret)
>>>>> +                io_free_req(req);
>>>>> +
>>>>> +        return ret;
>>>>> +}
>>>>> +
>>>>> +static void io_commit_sqring(struct io_ring_ctx *ctx)
>>>>> +{
>>>>> +        struct io_sq_ring *ring = ctx->sq_ring;
>>>>> +
>>>>> +        if (ctx->cached_sq_head != ring->r.head) {
>>>>> +                WRITE_ONCE(ring->r.head, ctx->cached_sq_head);
>>>>> +                /* write side barrier of head update, app has read side */
>>>>> +                smp_wmb();
>>>>
>>>> Can you elaborate on what this memory barrier is doing? Don't you need
>>>> some sort of memory barrier *before* the WRITE_ONCE(), to ensure that
>>>> nobody sees the updated head before you're done reading the submission
>>>> queue entry? Or is that barrier elsewhere?
>>>
>>> The matching read barrier is in the application, it must do that before
>>> reading ->head for the SQ ring.
>>>
>>> For the other barrier, since the ring->r.head now has a READ_ONCE(),
>>> that should be all we need to ensure that loads are done.
>>
>> READ_ONCE() / WRITE_ONCE() are not hardware memory barriers that enforce
>> ordering with regard to concurrent execution on other cores. They are
>> only compiler barriers, influencing the order in which the compiler
>> emits things. (Well, unless you're on alpha, where READ_ONCE() implies
>> a memory barrier that prevents reordering of dependent reads.)
>>
>> As far as I can tell, between the READ_ONCE(ring->array[...]) in
>> io_get_sqring() and the WRITE_ONCE() in io_commit_sqring(), you have
>> no *hardware* memory barrier that prevents reordering against
>> concurrently running userspace code. As far as I can tell, the
>> following could happen:
>>
>> - The kernel reads from ring->array in io_get_sqring(), then updates
>> the head in io_commit_sqring(). The CPU reorders the memory accesses
>> such that the write to the head becomes visible before the read from
>> ring->array has completed.
>> - Userspace observes the write to the head and reuses the array slots
>> the kernel has freed with the write, clobbering ring->array before the
>> kernel reads from ring->array.
>
> I'd say this is highly theoretical for the normal use case, as we
> will have submitted IO in between. Hence the load must have been done.
> The only case that needs it is the sq thread case, since we bundle
> those up. This should do it:

Actually, I take that back, as in this particular case the sq thread is
the only one that reads it. Hence it'll have done a full submission of
the read SQE entries before reading a new round. Not that it matters for
that case, as a preempt would have implied a full barrier anyway.
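To make the sq thread case concrete, here is a rough sketch (purely
illustrative, reusing the io_ring_ctx/io_sq_ring field names from the
patch above; it is not the actual fix posted in this thread) of how
io_commit_sqring() could publish the new head with release semantics,
so the SQE loads are ordered before the head store becomes visible:

static void io_commit_sqring(struct io_ring_ctx *ctx)
{
        struct io_sq_ring *ring = ctx->sq_ring;

        if (ctx->cached_sq_head != ring->r.head) {
                /*
                 * Release semantics: every SQE load done while consuming
                 * the ring is ordered before the head store becomes
                 * visible, so userspace can't reuse a slot before the
                 * kernel is done reading it. This also subsumes the
                 * WRITE_ONCE() + smp_wmb() pair in the patch.
                 */
                smp_store_release(&ring->r.head, ctx->cached_sq_head);
        }
}

The application side would then pair this with an acquire load of ->head
(for example a C11 atomic load with memory_order_acquire) before reusing
a ring slot.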
The non-sq thread case does not need the store-vs-load ordering barrier,
as SQEs are either discarded or submitted before we commit the sqring.
Since that's the case, by definition all loads are done.

-- 
Jens Axboe