ceph-devel.vger.kernel.org archive mirror
* Re: [ceph-users] [ Ceph MDS MON Config Variables ] Failover Delay issue
       [not found] <CAJm6b-741TRptPWOqoqEJG6m00auekTkcWUD+z3sxH1-34THgA@mail.gmail.com>
@ 2021-05-03 13:48 ` Olivier AUDRY
  2021-05-03 15:19 ` Patrick Donnelly
  1 sibling, 0 replies; 3+ messages in thread
From: Olivier AUDRY @ 2021-05-03 13:48 UTC (permalink / raw)
  To: Lokendra Rathour, ceph-devel, dev, ceph-users

hello

perhaps you should have more than one MDS active.

mds: cephfs:3 {0=cephfs-d=up:active,1=cephfs-e=up:active,2=cephfs-
a=up:active} 1 up:standby-replay

I got 3 active mds and one standby.

I'm using rook in kubernetes for this setup.
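
For reference, a multi-active layout like the one above can be configured on a vanilla (non-rook) cluster with something like the following; the fs name "cephfs" is an assumption:

```shell
# Allow up to 3 active MDS ranks for the filesystem
ceph fs set cephfs max_mds 3

# Keep standby daemons tailing the active MDS journal for faster takeover
ceph fs set cephfs allow_standby_replay true
```

Rook exposes equivalent settings through its CephFilesystem resource.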

oau

Le lundi 03 mai 2021 à 19:06 +0530, Lokendra Rathour a écrit :
> Hi Team,
> I was setting up the ceph cluster with
> 
>    - Node Details:3 Mon,2 MDS, 2 Mgr, 2 RGW
>    - Deployment Type: Active Standby
>    - Testing Mode: Failover of MDS Node
>    - Setup : Octopus (15.2.7)
>    - OS: centos 8.3
>    - hardware: HP
>    - Ram:  128 GB on each Node
>    - OSD: 2 ( 1 tb each)
>    - Operation: Normal I/O with mkdir on every 1 second.
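
A minimal probe matching this mkdir-per-second workload, useful for measuring how long metadata operations stall during failover, could look like the sketch below (the /mnt/cephfs mount path in the comment is a hypothetical example):

```shell
# mkdir_probe DIR COUNT: create one directory per second under DIR and
# print a line for any mkdir call that stalls longer than 5 seconds;
# during an MDS failover the blocked mkdir approximates the outage.
mkdir_probe() {
    root=$1; count=$2; i=0
    mkdir -p "$root"
    while [ "$i" -lt "$count" ]; do
        start=$(date +%s)
        mkdir "$root/probe-$i-$start"
        end=$(date +%s)
        [ $((end - start)) -gt 5 ] && echo "stall at iteration $i: $((end - start))s"
        i=$((i + 1))
        sleep 1
    done
}

# On a real cluster, e.g.: mkdir_probe /mnt/cephfs/failover-test 600
```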
> 
> *Test Case: Power off any active MDS Node so that failover happens*
> 
> *Observation:*
> We have observed that whenever the active MDS Node goes down, it takes
> around *40 seconds* to activate the standby MDS Node.
> On further checking the logs of the newly promoted MDS Node, we have
> traced the delay to the following contributions:
> 
>    1. 10-second delay after which the Mon calls a new Monitor election
>       1. [log] 0 log_channel(cluster) log [INF] : mon.cephnode1
>       calling monitor election
>    2. 5-second delay in which the new Monitor leader is elected
>       1. [log] 0 log_channel(cluster) log [INF] : mon.cephnode1 is
>       new leader, mons cephnode1,cephnode3 in quorum (ranks 0,2)
>    3. the additional beacon grace time for which the system waits
>    before it activates the standby MDS node (approx. delay of 19
>    seconds)
>       1. defaults: sudo ceph config get mon mds_beacon_grace
>       15.000000
>       2. sudo ceph config get mon mds_beacon_interval
>       5.000000
>       3. [log] 2021-04-30T18:23:10.136+0530 7f4e3925c700  1
>       mon.cephnode2@1(leader).mds e776 no beacon from mds.0.771 (gid:
>       639443 addr: [v2:
>       10.0.4.10:6800/2172152716,v1:10.0.4.10:6801/2172152716] state:
>       up:active) *since 18.7951*
>    4. *In total it takes around 40 seconds to hand over and activate
>    the passive standby node.*
> 
> *Query:*
> 
>    1. Can these variables be configured? We have tried tuning them,
>    but are not aware of the overall impact of these changes on the
>    ceph cluster.
>       1. By tuning these values we could reach a minimum time of 12
>       seconds in which the active node comes up.
>       2. Values used to get the said time:
>          1. *mon_election_timeout* (default 5) - configured as 1
>          2. *mon_lease* (default 5) - configured as 2
>          3. *mds_beacon_grace* (default 15) - configured as 5
>          4. *mds_beacon_interval* (default 5) - configured as 1
> 
> We need to tune this setup to get the failover duration as low as 5-7
> seconds.
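
The values listed in the quoted message can be applied at runtime through the centralized config store; a sketch (same values as in the test above, and the stability impact of such aggressive timeouts should be validated before production use):

```shell
ceph config set mon mon_election_timeout 1
ceph config set mon mon_lease 2
ceph config set mon mds_beacon_grace 5
ceph config set mon mds_beacon_interval 1

# Verify the change took effect
ceph config get mon mds_beacon_grace
```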
> 
> Please suggest/support and share your inputs; my setup is ready and
> we are already testing multiple scenarios so that we can achieve the
> minimum failover duration.
> 


^ permalink raw reply	[flat|nested] 3+ messages in thread

* Re: [ceph-users] [ Ceph MDS MON Config Variables ] Failover Delay issue
       [not found] <CAJm6b-741TRptPWOqoqEJG6m00auekTkcWUD+z3sxH1-34THgA@mail.gmail.com>
  2021-05-03 13:48 ` [ceph-users] [ Ceph MDS MON Config Variables ] Failover Delay issue Olivier AUDRY
@ 2021-05-03 15:19 ` Patrick Donnelly
  2021-05-03 17:00   ` [ceph-users] " Frank Schilder
  1 sibling, 1 reply; 3+ messages in thread
From: Patrick Donnelly @ 2021-05-03 15:19 UTC (permalink / raw)
  To: Lokendra Rathour; +Cc: Ceph Development, dev, ceph-users

On Mon, May 3, 2021 at 6:36 AM Lokendra Rathour
<lokendrarathour@gmail.com> wrote:
>
> Hi Team,
> I was setting up the ceph cluster with
>
>    - Node Details:3 Mon,2 MDS, 2 Mgr, 2 RGW
>    - Deployment Type: Active Standby
>    - Testing Mode: Failover of MDS Node
>    - Setup : Octopus (15.2.7)
>    - OS: centos 8.3
>    - hardware: HP
>    - Ram:  128 GB on each Node
>    - OSD: 2 ( 1 tb each)
>    - Operation: Normal I/O with mkdir on every 1 second.
>
> *Test Case: Power off any active MDS Node so that failover happens*
>
> *Observation:*
> We have observed that whenever the active MDS Node goes down, it takes
> around *40 seconds* to activate the standby MDS Node.
> On further checking the logs of the newly promoted MDS Node, we have
> traced the delay to the following contributions:
>
>    1. 10 second delay after which Mon calls for new Monitor election
>       1.  [log]  0 log_channel(cluster) log [INF] : mon.cephnode1 calling
>       monitor election

In the process of killing the active MDS, are you also killing a monitor?

-- 
Patrick Donnelly, Ph.D.
He / Him / His
Principal Software Engineer
Red Hat Sunnyvale, CA
GPG: 19F28A586F808C2402351B93C3301A3E258DD79D



* Re: [ceph-users] Re: [ Ceph MDS MON Config Variables ] Failover Delay issue
  2021-05-03 15:19 ` Patrick Donnelly
@ 2021-05-03 17:00   ` Frank Schilder
  0 siblings, 0 replies; 3+ messages in thread
From: Frank Schilder @ 2021-05-03 17:00 UTC (permalink / raw)
  To: Patrick Donnelly, Lokendra Rathour; +Cc: Ceph Development, dev, ceph-users

Following up on this and other comments: there are 2 different time delays. One (1) is the time it takes from killing an MDS until a stand-by is made an active rank, and the other (2) is the time it takes for the new active rank to restore all client sessions. My experience is that (1) takes close to 0 seconds, while (2) can take between 20 and 30 seconds depending on how busy the clients are; the MDS will go through various states before reaching active. We usually have ca. 1600 client connections to our FS. With fewer clients, MDS fail-over is practically instantaneous. We are using the latest mimic.

From what you write, you seem to have a 40-second window for (1), which points to a problem different from the MON config values. This is supported by your description including a MON election (??? this should never happen). Do you have services co-located? Which of the times (1) or (2) are you referring to? How many FS clients do you have?
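
A quick way to gather those answers on the cluster (command names as in recent Ceph releases; output formats vary between versions):

```shell
# Per-rank MDS state and per-rank client counts
ceph fs status

# Daemons grouped by host, to spot MON/MDS co-location
ceph node ls
```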

Best regards,
=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14

________________________________________
From: Patrick Donnelly <pdonnell@redhat.com>
Sent: 03 May 2021 17:19:37
To: Lokendra Rathour
Cc: Ceph Development; dev; ceph-users
Subject: [ceph-users] Re: [ Ceph MDS MON Config Variables ] Failover Delay issue

On Mon, May 3, 2021 at 6:36 AM Lokendra Rathour
<lokendrarathour@gmail.com> wrote:
>
> Hi Team,
> I was setting up the ceph cluster with
>
>    - Node Details:3 Mon,2 MDS, 2 Mgr, 2 RGW
>    - Deployment Type: Active Standby
>    - Testing Mode: Failover of MDS Node
>    - Setup : Octopus (15.2.7)
>    - OS: centos 8.3
>    - hardware: HP
>    - Ram:  128 GB on each Node
>    - OSD: 2 ( 1 tb each)
>    - Operation: Normal I/O with mkdir on every 1 second.
>
> *Test Case: Power off any active MDS Node so that failover happens*
>
> *Observation:*
> We have observed that whenever the active MDS Node goes down, it takes
> around *40 seconds* to activate the standby MDS Node.
> On further checking the logs of the newly promoted MDS Node, we have
> traced the delay to the following contributions:
>
>    1. 10 second delay after which Mon calls for new Monitor election
>       1.  [log]  0 log_channel(cluster) log [INF] : mon.cephnode1 calling
>       monitor election

In the process of killing the active MDS, are you also killing a monitor?

--
Patrick Donnelly, Ph.D.
He / Him / His
Principal Software Engineer
Red Hat Sunnyvale, CA
GPG: 19F28A586F808C2402351B93C3301A3E258DD79D
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-leave@ceph.io

