From: Chris Murphy
Date: Wed, 19 Aug 2020 16:58:05 -0600
Subject: Re: Linux RAID with btrfs stuck and consume 100 % CPU
To: Vojtech Myslivec
Cc: Chris Murphy, Michal Moravec, Btrfs BTRFS, Linux-RAID, Song Liu
X-Mailing-List: linux-raid@vger.kernel.org

On Wed, Aug 19, 2020 at 11:29 AM Vojtech Myslivec wrote:
>
> Linux backup1 5.7.0-0.bpo.2-amd64 #1 SMP Debian 5.7.10-1~bpo10+1

That should be new enough; I don't see RAID-related md changes between 5.7.10 and 5.7.16.
I haven't looked at 5.8, but 5.7 is probably recent enough to tell whether there are relevant changes in 5.8 worth testing.

> - `5.7_profs.txt`
> - dump of the whole /proc when the issue happened

The problem here, I think, is that /proc/pid/stack is empty. You might have to hammer on it a bunch of times to get a stack.

I can't tell whether the sysrq+w output is enough to conclusively say this is strictly an md problem, or whether something else is going on. But I do see in the sysrq+w output evidence of a Btrfs snapshot happening, which results in a flush of the file system. The mdadm RAID journal is on two SSDs, which should be fast enough to accept the metadata changes before the flush actually happens.

--
Chris Murphy
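P.S. A minimal sketch of what I mean by hammering on it, in case it helps (the PID of the stuck task is passed as an argument; the retry count and interval are just placeholders, and reading a task's kernel stack generally requires root):

```shell
#!/bin/sh
# Sketch: repeatedly sample a task's kernel stack.
# /proc/<pid>/stack is only populated while the task is blocked in the
# kernel, so a single read often comes back empty; sampling in a loop
# improves the odds of catching a useful trace.

sample_stack() {
    pid=$1
    tries=${2:-50}
    i=0
    while [ "$i" -lt "$tries" ]; do
        i=$((i + 1))
        # Errors (missing file, permissions) are suppressed; an empty
        # result just means the task wasn't blocked at that instant.
        stack=$(cat "/proc/$pid/stack" 2>/dev/null)
        if [ -n "$stack" ]; then
            printf '=== sample %d ===\n%s\n' "$i" "$stack"
        fi
        sleep 0.1
    done
}

if [ -n "${1:-}" ]; then
    sample_stack "$1"
fi
```

Run it as, say, `sudo sh sample_stack.sh <pid>` against the stuck md/btrfs task. If nothing ever shows up even after many samples, the task may be spinning rather than blocked, which is itself a data point.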