From mboxrd@z Thu Jan 1 00:00:00 1970
References: <20210422120459.447350175@infradead.org>
 <20210422123308.196692074@infradead.org>
 <5c289c5a-a120-a1d0-ca89-2724a1445fe8@linux.intel.com>
From: Don Hiatt
Date: Fri, 30 Apr 2021 09:18:40 -0700
Subject: Re: [PATCH 04/19] sched: Prepare for Core-wide rq->lock
To: Josh Don
Cc: Aubrey Li, Aubrey Li, Peter Zijlstra, Joel Fernandes, "Hyser,Chris",
 Ingo Molnar, Vincent Guittot, Valentin Schneider, Mel Gorman,
 linux-kernel, Thomas Gleixner
X-Mailing-List: linux-kernel@vger.kernel.org

On Thu, Apr 29, 2021 at 4:22 PM Josh Don wrote:
>
> On Thu, Apr 29, 2021 at 2:09 PM Don Hiatt wrote:
> >
> > On Thu, Apr 29, 2021 at 1:48 PM Josh Don wrote:
> > >
> > > On Wed, Apr 28, 2021 at 9:41 AM Don Hiatt wrote:
> > > >
> > > > I'm still seeing hard lockups while repeatedly setting cookies on qemu
> > > > processes even with the updated patch. If there is any debug you'd
> > > > like me to turn on, just let me know.
> > > >
> > > > Thanks!
> > > >
> > > > don
> > >
> > > Thanks for the added context on your repro configuration.
> > > In addition to the updated patch from earlier, could you try the
> > > modification to double_rq_lock() from
> > > https://lkml.kernel.org/r/CABk29NuS-B3n4sbmavo0NDA1OCCsz6Zf2VDjjFQvAxBMQoJ_Lg@mail.gmail.com
> > > ? I have a feeling this is what's causing your lockup.
> > >
> > > Best,
> > > Josh
> >
> > Hi Josh,
> >
> > I've been running Aubrey+Peter's patch (attached) for almost 5 hours
> > and haven't had a single issue. :)
> >
> > I'm running a set-cookie script every 5 seconds on the two VMs (each
> > VM is running 'sysbench --threads=1 --time=0 cpu run' to generate some
> > load in the VM), and I'm running two of the same sysbench runs on the
> > hypervisor while setting cookies every 5 seconds.
> >
> > Unless I jinxed us, it looks like a great fix. :)
> >
> > Let me know if there is anything else you'd like me to try. I'm going
> > to leave the tests running and see what happens. I'll update with what
> > I find.
> >
> > Thanks!
> >
> > don
>
> That's awesome news, thanks for validating. Note that with Aubrey's
> patch there is still a race window if sched core is being
> enabled/disabled (i.e. if you alternate between there being some
> cookies in the system and no cookies). In my reply I posted an
> alternative version to avoid that. If your script were to do the
> on-off flipping with the old patch, you might eventually see another
> lockup.

My tests have been running for 24 hours now and all is good. I'll do
another test with your changes next, as well as continue to test the
core-sched queue. This and all the other patches in Peter's repo:

Tested-by: Don Hiatt

Have a great day and thanks again.

don