From mboxrd@z Thu Jan  1 00:00:00 1970
From: Robert LeBlanc
Date: Thu, 8 Apr 2021 21:48:48 -0600
Subject: Re: [ceph-users] Nautilus 14.2.19 mon 100% CPU
To: Zizon Qiu
Cc: Stefan Kooman, ceph-devel, ceph-users
Content-Type: text/plain; charset="UTF-8"
List-ID: ceph-devel@vger.kernel.org

Good thought. The storage for the monitor data is a RAID-0 across
three NVMe devices. Watching iostat, they are essentially idle, maybe
0.8% to 1.4% utilization for a second every minute or so.
----------------
Robert LeBlanc
PGP Fingerprint 79A2 9CA4 6CC4 45DD A904 C70E E654 3BB2 FA62 B9F1

On Thu, Apr 8, 2021 at 7:48 PM Zizon Qiu wrote:
>
> Could it be related to some kind of disk issue on the node that mon
> is located on, which may occasionally slow down IO and, in turn,
> rocksdb?
>
>
> On Fri, Apr 9, 2021 at 4:29 AM Robert LeBlanc wrote:
>>
>> I found this thread that matches a lot of what I'm seeing.
>> I see the ms_dispatch thread going to 100%, but I'm at a single
>> MON, the recovery is done, and the rocksdb MON database is ~300MB.
>> I've tried all the settings mentioned in that thread with no
>> noticeable improvement. I was hoping that once the recovery was done
>> (backfills to reformatted OSDs) it would clear up, but not yet. So
>> any other ideas would be really helpful. Our MDS is functioning, but
>> stalls a lot because the mons miss heartbeats.
>>
>> mon_compact_on_start = true
>> rocksdb_cache_size = 1342177280
>> mon_lease = 30
>> mon_osd_cache_size = 200000
>> mon_sync_max_payload_size = 4096
>>
>> ----------------
>> Robert LeBlanc
>> PGP Fingerprint 79A2 9CA4 6CC4 45DD A904 C70E E654 3BB2 FA62 B9F1
>>
>> On Thu, Apr 8, 2021 at 1:11 PM Stefan Kooman wrote:
>> >
>> > On 4/8/21 6:22 PM, Robert LeBlanc wrote:
>> > > I upgraded our Luminous cluster to Nautilus a couple of weeks
>> > > ago and converted the last batch of FileStore OSDs to BlueStore
>> > > about 36 hours ago. Yesterday our monitor cluster went nuts and
>> > > started constantly calling elections because monitor nodes were
>> > > at 100% CPU and wouldn't respond to heartbeats. I reduced the
>> > > monitor cluster to one to prevent the constant elections, and
>> > > that let the system limp along until the backfills finished.
>> > > There are long stretches of time where ceph commands hang while
>> > > the CPU is at 100%; when the CPU drops I see a lot of work
>> > > getting done in the monitor logs, which stops as soon as the CPU
>> > > is back at 100%.
>> >
>> >
>> > Try reducing mon_sync_max_payload_size to 4096. I have seen Frank
>> > Schilder advise this several times because of monitor issues, also
>> > recently for a cluster that got upgraded from Luminous -> Mimic ->
>> > Nautilus.
>> >
>> > Worth a shot.
>> >
>> > Otherwise I'll try to look in depth and see if I can come up with
>> > something smart (for now I need to go catch some sleep).
>> >
>> > Gr. Stefan
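
[Editor's note: the settings quoted above can also be applied to a
running cluster rather than only via ceph.conf. A minimal sketch,
assuming a monitor named `mon.a` on a cluster with a working admin
keyring; the mon ID is illustrative, not taken from the thread:]

```shell
# Persist the smaller sync payload cap in the central config database
# (available on Nautilus), so it survives mon restarts:
ceph config set mon mon_sync_max_payload_size 4096

# Or inject it into the running monitor without a restart:
ceph tell mon.a injectargs '--mon_sync_max_payload_size=4096'

# Trigger an immediate compaction of the mon's rocksdb store --
# the runtime counterpart of mon_compact_on_start = true:
ceph tell mon.a compact
```

These commands only take effect against a live cluster; on an idle
mon the compaction should return quickly for a ~300MB store.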