Subject: Re: [PATCH v4 6/6] io_uring: add support for zone-append
To: Pavel Begunkov, Kanchan Joshi
Cc: Kanchan Joshi, viro@zeniv.linux.org.uk, bcrl@kvack.org, Matthew Wilcox,
    Christoph Hellwig, Damien Le Moal, linux-fsdevel@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-aio@kvack.org, io-uring@vger.kernel.org,
    linux-block@vger.kernel.org, linux-api@vger.kernel.org, SelvaKumar S,
    Nitesh Shetty, Javier Gonzalez
References: <1595605762-17010-1-git-send-email-joshi.k@samsung.com>
 <1595605762-17010-7-git-send-email-joshi.k@samsung.com>
From: Jens Axboe
Message-ID: <80d27717-080a-1ced-50d5-a3a06cf06cd3@kernel.dk>
Date: Thu, 30 Jul 2020 10:13:57 -0600

On 7/30/20 10:08 AM, Pavel Begunkov wrote:
> On 27/07/2020 23:34, Jens Axboe wrote:
>> On 7/27/20 1:16 PM, Kanchan Joshi wrote:
>>> On Fri, Jul 24, 2020 at 10:00 PM Jens Axboe wrote:
>>>>
>>>> On 7/24/20 9:49 AM, Kanchan Joshi wrote:
>>>>> diff --git a/fs/io_uring.c b/fs/io_uring.c
>>>>> index 7809ab2..6510cf5 100644
>>>>> --- a/fs/io_uring.c
>>>>> +++ b/fs/io_uring.c
>>>>> @@ -1284,8 +1301,15 @@ static void __io_cqring_fill_event(struct io_kiocb *req, long res, long cflags)
>>>>>  	cqe = io_get_cqring(ctx);
>>>>>  	if (likely(cqe)) {
>>>>>  		WRITE_ONCE(cqe->user_data, req->user_data);
>>>>> -		WRITE_ONCE(cqe->res, res);
>>>>> -		WRITE_ONCE(cqe->flags, cflags);
>>>>> +		if (unlikely(req->flags & REQ_F_ZONE_APPEND)) {
>>>>> +			if (likely(res > 0))
>>>>> +				WRITE_ONCE(cqe->res64, req->rw.append_offset);
>>>>> +			else
>>>>> +				WRITE_ONCE(cqe->res64, res);
>>>>> +		} else {
>>>>> +			WRITE_ONCE(cqe->res, res);
>>>>> +			WRITE_ONCE(cqe->flags, cflags);
>>>>> +		}
>>>>
>>>> This would be nice to keep out of the fast path, if possible.
>>>
>>> I was thinking of keeping a function-pointer (in io_kiocb) during
>>> submission. That would have avoided this check, but the argument count
>>> differs, so it did not add up.
>>
>> But that'd grow the io_kiocb just for this use case, which is arguably
>> even worse. Unless you can keep it in the per-request private data,
>> but there's no more room there for the regular read/write side.
>>
>>>>> diff --git a/include/uapi/linux/io_uring.h b/include/uapi/linux/io_uring.h
>>>>> index 92c2269..2580d93 100644
>>>>> --- a/include/uapi/linux/io_uring.h
>>>>> +++ b/include/uapi/linux/io_uring.h
>>>>> @@ -156,8 +156,13 @@ enum {
>>>>>   */
>>>>>  struct io_uring_cqe {
>>>>>  	__u64	user_data;	/* sqe->data submission passed back */
>>>>> -	__s32	res;		/* result code for this event */
>>>>> -	__u32	flags;
>>>>> +	union {
>>>>> +		struct {
>>>>> +			__s32	res;	/* result code for this event */
>>>>> +			__u32	flags;
>>>>> +		};
>>>>> +		__s64	res64;	/* appending offset for zone append */
>>>>> +	};
>>>>>  };
>>>>
>>>> Is this a compatible change, both for now but also going forward? You
>>>> could randomly have IORING_CQE_F_BUFFER set, or any other future flags.
>>>
>>> Sorry, I didn't quite understand the concern. CQE_F_BUFFER is not
>>> used/set for write currently, so it looked compatible at this point.
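[A minimal userspace sketch, not part of the posted patch, of how a consumer
might read a zone-append completion under the proposed layout above: res64
overlays res/flags, a negative value is the errno-style result and a
non-negative value is the offset the data landed at. The res64 member only
exists with the patched uapi header; the liburing calls are standard.]

#include <liburing.h>
#include <stdio.h>

static int reap_zone_append(struct io_uring *ring)
{
	struct io_uring_cqe *cqe;
	int ret = io_uring_wait_cqe(ring, &cqe);

	if (ret < 0)
		return ret;

	if (cqe->res64 < 0)	/* negative: errno-style failure */
		fprintf(stderr, "zone append failed: %lld\n", (long long)cqe->res64);
	else			/* non-negative: where the write was appended */
		printf("appended at offset %lld\n", (long long)cqe->res64);

	io_uring_cqe_seen(ring, cqe);
	return 0;
}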
>>
>> Not worried about that, since we won't ever use that for writes. But it
>> is a potential headache down the line for other flags, if they apply to
>> normal writes.
>>
>>> Yes, no room for future flags for this operation.
>>> Do you see any other way to enable this support in io-uring?
>>
>> Honestly I think the only viable option is as we discussed previously,
>> pass in a pointer to a 64-bit type where we can copy the additional
>> completion information to.
> 
> TBH, I hate the idea of such overhead/latency at times when SSDs can
> serve writes in less than 10ms. Any chance you measured how long does it

10us? :-)

> take to drag through task_work?

A 64-bit value copy is really not a lot of overhead... But yes, we'd need
to push the completion through task_work at that point, as we can't do it
from the completion side. That's not a lot of overhead, and most notably,
it's overhead that only affects this particular type.

That's not a bad starting point, and something that can always be
optimized later if need be. But I seriously doubt it'd be anything to
worry about.

-- 
Jens Axboe
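[For illustration only, a rough kernel-side sketch of the alternative
discussed above: the submission carries a user pointer to a 64-bit value,
and the append offset is copied out from task context via task_work, since
copy_to_user() cannot run from the block completion path. The
io_uring-internal names used here (append_uaddr, append_offset, the
io_cqring_add_event call) are assumptions for the sketch, not the posted
patch.]

/* Sketch: assumes the SQE supplied a user address (append_uaddr) for the
 * 64-bit append offset, recorded alongside append_offset when the write
 * completes. This callback would be queued with task_work_add() so the
 * copy_to_user() runs in the submitting task's context.
 */
static void io_zone_append_task_work(struct callback_head *cb)
{
	struct io_kiocb *req = container_of(cb, struct io_kiocb, task_work);
	u64 off = req->rw.append_offset;

	/* copy the extra 64-bit completion info to the user-supplied pointer */
	if (copy_to_user(u64_to_user_ptr(req->rw.append_uaddr), &off, sizeof(off)))
		req->result = -EFAULT;

	/* post a regular CQE; res/flags keep their existing 32-bit layout */
	io_cqring_add_event(req, req->result);
}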