Date: Mon, 19 Sep 2022 20:39:26 +0800
From: Ming Lei
To: Ziyang Zhang
Cc: axboe@kernel.dk, xiaoguang.wang@linux.alibaba.com,
    linux-block@vger.kernel.org, linux-kernel@vger.kernel.org,
    joseph.qi@linux.alibaba.com, ming.lei@redhat.com
Subject: Re: [PATCH V3 4/7] ublk_drv: requeue rqs with recovery feature enabled
References: <20220913041707.197334-1-ZiyangZhang@linux.alibaba.com>
 <20220913041707.197334-5-ZiyangZhang@linux.alibaba.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline

On Mon, Sep 19, 2022 at 05:12:21PM +0800, Ziyang Zhang wrote:
> On 2022/9/19 11:55, Ming Lei wrote:
> > On Tue, Sep 13, 2022 at 12:17:04PM +0800, ZiyangZhang wrote:
> >> With the recovery feature enabled, in ublk_queue_rq or task work
> >> (in exit_task_work or fallback wq), we requeue rqs instead of
> >> ending (aborting) them. Besides, no matter whether the recovery
> >> feature is enabled or disabled, we schedule monitor_work
> >> immediately.
> >>
> >> Signed-off-by: ZiyangZhang
> >> ---
> >>  drivers/block/ublk_drv.c | 34 ++++++++++++++++++++++++++++++++--
> >>  1 file changed, 32 insertions(+), 2 deletions(-)
> >>
> >> diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
> >> index 23337bd7c105..b067f33a1913 100644
> >> --- a/drivers/block/ublk_drv.c
> >> +++ b/drivers/block/ublk_drv.c
> >> @@ -682,6 +682,21 @@ static void ubq_complete_io_cmd(struct ublk_io *io, int res)
> >>
> >>  #define UBLK_REQUEUE_DELAY_MS 3
> >>
> >> +static inline void __ublk_abort_rq_in_task_work(struct ublk_queue *ubq,
> >> +		struct request *rq)
> >> +{
> >> +	pr_devel("%s: %s q_id %d tag %d io_flags %x.\n", __func__,
> >> +			(ublk_queue_can_use_recovery(ubq)) ? "requeue" : "abort",
> >> +			ubq->q_id, rq->tag, ubq->ios[rq->tag].flags);
> >> +	/* We cannot process this rq so just requeue it. */
> >> +	if (ublk_queue_can_use_recovery(ubq)) {
> >> +		blk_mq_requeue_request(rq, false);
> >> +		blk_mq_delay_kick_requeue_list(rq->q, UBLK_REQUEUE_DELAY_MS);
> >
> > You needn't kick the requeue list here, since we know it can't make
> > progress. And you can do that once before deleting the gendisk, or
> > once the queue is recovered.
>
> No, kicking the requeue list here is necessary.
>
> Consider the case where USER_RECOVERY is enabled and everything goes
> well: the user sends STOP_DEV, we have kicked the requeue list in
> ublk_stop_dev() and are about to call del_gendisk(). However, a crash
> happens right now. Then rqs may still be requeued by ublk_queue_rq()
> because it sees a dying ubq_daemon. So del_gendisk() will hang,
> because there are rqs left in the requeue list and no one kicks them.

Why can't you kick the requeue list before calling del_gendisk()?
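
Something like the following untested sketch is what I mean (the
helper name is made up, and ub->ub_disk is from my reading of
ublk_drv.c, so take it as an illustration only):

/*
 * Kick the requeue list one more time right before deleting the
 * gendisk, so rqs requeued by ublk_queue_rq() after the daemon died
 * are re-dispatched instead of sitting in the requeue list and
 * blocking del_gendisk().
 */
static void ublk_flush_requeue_and_del_gendisk(struct ublk_device *ub)
{
	blk_mq_kick_requeue_list(ub->ub_disk->queue);
	del_gendisk(ub->ub_disk);
}

Then the io path itself doesn't have to kick the list at all.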

>
> BTW, kicking the requeue list after requeueing rqs is really
> harmless, since we schedule quiesce_work immediately after finding a
> dying ubq_daemon, so few rqs have a chance to be re-dispatched.

Do you think it makes sense to kick the requeue list when the queue
can't handle any request?

Thanks,
Ming