* pg rebalancing after taking osds out
@ 2016-01-26 22:26 Deneau, Tom
  2016-01-26 22:44 ` Sage Weil
  0 siblings, 1 reply; 3+ messages in thread
From: Deneau, Tom @ 2016-01-26 22:26 UTC (permalink / raw)
  To: ceph-devel

I have a replicated x2 pool with the crush step of host.
When the pool was created there were 3 hosts with 7 osds each,
and looking at the pg-by-pool for that pool I can see that
every pg has copies on two different hosts.

Now I want to take 2 osds out of each node, which I did using
the osd out command. (So there are then 5 osds per host node).

Now I rerun ceph pg ls-by-pool for that pool and it shows that
some pgs have both their copies on the same node.

Is this normal?  My expectation was that each pg still
had its two copies on two different hosts.
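
For reference, roughly what this looks like as commands (the pool name,
pg count, and osd ids below are placeholders, not taken from my actual
cluster):

  # replicated pool, size 2, placed by the default rule (failure domain = host)
  ceph osd pool create testpool 256 256 replicated
  ceph osd pool set testpool size 2

  # mark two osds per host out (reweight 0); their crush map entries remain
  ceph osd out 5 6
  ceph osd out 12 13
  ceph osd out 19 20

  # then re-check placement
  ceph pg ls-by-pool testpool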

-- Tom Deneau



* Re: pg rebalancing after taking osds out
  2016-01-26 22:26 pg rebalancing after taking osds out Deneau, Tom
@ 2016-01-26 22:44 ` Sage Weil
  2016-01-26 23:37   ` Deneau, Tom
  0 siblings, 1 reply; 3+ messages in thread
From: Sage Weil @ 2016-01-26 22:44 UTC (permalink / raw)
  To: Deneau, Tom; +Cc: ceph-devel

On Tue, 26 Jan 2016, Deneau, Tom wrote:
> I have a replicated x2 pool with the crush step of host.
> When the pool was created there were 3 hosts with 7 osds each,
> and looking at the pg-by-pool for that pool I can see that
> every pg has  copies on two different hosts.
> 
> Now I want to take 2 osds out of each node, which I did using
> the osd out command. (So there are then 5 osds per host node).
> 
> Now I rerun ceph pg ls-by-pool for that pool and it shows that
> some pgs have both their copies on the same node.
> 
> Is this normal?  My expectation was that each pg still
> had its two copies on two different hosts.

Are those PGs in the 'remapped' state?  What does the tree look like 
('ceph osd tree')?
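
e.g., something like this (the pool name is a placeholder):

  ceph pg ls-by-pool <poolname> | grep remapped   # which of those pgs are in the remapped state
  ceph osd tree                                   # crush hierarchy plus up/down and reweight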

sage


* RE: pg rebalancing after taking osds out
  2016-01-26 22:44 ` Sage Weil
@ 2016-01-26 23:37   ` Deneau, Tom
  0 siblings, 0 replies; 3+ messages in thread
From: Deneau, Tom @ 2016-01-26 23:37 UTC (permalink / raw)
  To: Sage Weil; +Cc: ceph-devel

I noticed that if I actually remove the osds from the crush map (after
using ceph osd out), everything works as I would expect.
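
i.e., something like this for each of the out osds (osd.5/6, 12/13,
19/20 in the tree below):

  for id in 5 6 12 13 19 20; do
      ceph osd crush remove osd.$id   # drop the out osd from the crush map entirely
  done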

So at the time of the behavior mentioned below (without removing anything
from the crush map), the tree looked something like the following. Sorry, I
don't have the pg state saved from that time; I could recreate it if needed.

ID WEIGHT   TYPE NAME                         UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 37.98688 root default
-2 12.66229     host node-01
 0  1.80890         osd.0                          up  1.00000          1.00000
 1  1.80890         osd.1                          up  1.00000          1.00000
 2  1.80890         osd.2                          up  1.00000          1.00000
 3  1.80890         osd.3                          up  1.00000          1.00000
 4  1.80890         osd.4                          up  1.00000          1.00000
 5  1.80890         osd.5                          up        0          1.00000
 6  1.80890         osd.6                          up        0          1.00000
-3 12.66229     host node-02
 7  1.80890         osd.7                          up  1.00000          1.00000
 8  1.80890         osd.8                          up  1.00000          1.00000
 9  1.80890         osd.9                          up  1.00000          1.00000
10  1.80890         osd.10                         up  1.00000          1.00000
11  1.80890         osd.11                         up  1.00000          1.00000
12  1.80890         osd.12                         up        0          1.00000
13  1.80890         osd.13                         up        0          1.00000
-4 12.66229     host node-03
14  1.80890         osd.14                         up  1.00000          1.00000
15  1.80890         osd.15                         up  1.00000          1.00000
16  1.80890         osd.16                         up  1.00000          1.00000
17  1.80890         osd.17                         up  1.00000          1.00000
18  1.80890         osd.18                         up  1.00000          1.00000
19  1.80890         osd.19                         up        0          1.00000
20  1.80890         osd.20                         up        0          1.00000
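
In case it is useful, the rule for that pool can be checked by decompiling
the crush map, something like (file names are arbitrary):

  ceph osd getcrushmap -o crushmap.bin       # grab the compiled crush map
  crushtool -d crushmap.bin -o crushmap.txt  # decompile it to text
  # the pool's rule should contain a step like:
  #   step chooseleaf firstn 0 type host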

-- Tom


> -----Original Message-----
> From: Sage Weil [mailto:sage@newdream.net]
> Sent: Tuesday, January 26, 2016 4:44 PM
> To: Deneau, Tom
> Cc: ceph-devel@vger.kernel.org
> Subject: Re: pg rebalancing after taking osds out
> 
> On Tue, 26 Jan 2016, Deneau, Tom wrote:
> > I have a replicated x2 pool with the crush step of host.
> > When the pool was created there were 3 hosts with 7 osds each, and
> > looking at the pg-by-pool for that pool I can see that every pg has
> > copies on two different hosts.
> >
> > Now I want to take 2 osds out of each node, which I did using the osd
> > out command. (So there are then 5 osds per host node).
> >
> > Now I rerun ceph pg ls-by-pool for that pool and it shows that some
> > pgs have both their copies on the same node.
> >
> > Is this normal?  My expectation was that each pg still had its two
> > copies on two different hosts.
> 
> Are those PGs in the 'remapped' state?  What does the tree look like
> ('ceph osd tree')?
> 
> sage

