* scrub errors on rgw data pool
@ 2019-11-25 10:04 M Ranga Swami Reddy
       [not found] ` <CANA9Uk4kmPqe6z17BUCnUuJyNcQkOqEcwXv+RLOsbKS-bmsyGw-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
  0 siblings, 1 reply; 4+ messages in thread
From: M Ranga Swami Reddy @ 2019-11-25 10:04 UTC (permalink / raw)
  To: ceph-users, ceph-devel



Hello - We are using Ceph 12.2.11 (upgraded from Jewel 10.2.12 to
12.2.11). In this cluster we have a mix of filestore and bluestore
OSD backends.
Recently we have been seeing scrub errors on the rgw buckets.data pool every
day, after Ceph performs its scrub operation. If we run a PG repair, the
errors go away.

Has anyone seen the above issue?
Does the filestore backend have a bug/issue with version 12.2.11 (i.e.
Luminous)?
Could the mix of filestore and bluestore OSDs cause this type of issue?

Thanks
Swami
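PS: For reference, the inspect-and-repair sequence referred to above looks
roughly like this on Luminous (the PG id 11.2f5 is only a placeholder):

```shell
# List which PGs failed scrub (shown under HEALTH_ERR as "pgs inconsistent")
ceph health detail | grep inconsistent

# Inspect the inconsistent objects recorded for one PG (example id 11.2f5)
rados list-inconsistent-obj 11.2f5 --format=json-pretty

# Ask the primary OSD to repair that PG
ceph pg repair 11.2f5
```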


_______________________________________________
ceph-users mailing list
ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

^ permalink raw reply	[flat|nested] 4+ messages in thread

* Re: scrub errors on rgw data pool
       [not found] ` <CANA9Uk4kmPqe6z17BUCnUuJyNcQkOqEcwXv+RLOsbKS-bmsyGw-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
@ 2019-11-26  2:32   ` Fyodor Ustinov
       [not found]     ` <1067131621.54855.1574735565028.JavaMail.zimbra-TAHq1US7lqU@public.gmane.org>
  2019-11-29 10:15   ` M Ranga Swami Reddy
  1 sibling, 1 reply; 4+ messages in thread
From: Fyodor Ustinov @ 2019-11-26  2:32 UTC (permalink / raw)
  To: M Ranga Swami Reddy; +Cc: ceph-users, ceph-devel

Hi!

I had similar errors in pools on SSDs until I upgraded to Nautilus (a clean bluestore installation).

----- Original Message -----
> From: "M Ranga Swami Reddy" <swamireddy-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
> To: "ceph-users" <ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org>, "ceph-devel" <ceph-devel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org>
> Sent: Monday, 25 November, 2019 12:04:46
> Subject: [ceph-users] scrub errors on rgw data pool


^ permalink raw reply	[flat|nested] 4+ messages in thread

* Re: scrub errors on rgw data pool
       [not found]     ` <1067131621.54855.1574735565028.JavaMail.zimbra-TAHq1US7lqU@public.gmane.org>
@ 2019-11-26  6:30       ` M Ranga Swami Reddy
  0 siblings, 0 replies; 4+ messages in thread
From: M Ranga Swami Reddy @ 2019-11-26  6:30 UTC (permalink / raw)
  To: Fyodor Ustinov; +Cc: ceph-users, ceph-devel



Thanks for the reply.
Did you migrate all of your OSDs from the filestore backend to the bluestore
backend?
Or
did you upgrade from Luminous 12.2.11 to 14.x?

Which of these helped here?
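PS: In case migration is the answer - converting a filestore OSD to bluestore
in place is normally done by draining and re-creating it. A rough sketch with
ceph-volume (the OSD id 12 and device /dev/sdX are placeholders):

```shell
# Drain the filestore OSD and wait for backfill to finish (HEALTH_OK)
ceph osd out 12
systemctl stop ceph-osd@12

# Wipe the device and mark the OSD destroyed, keeping its id in the CRUSH map
ceph-volume lvm zap /dev/sdX --destroy
ceph osd destroy 12 --yes-i-really-mean-it

# Re-create the OSD on the same device as bluestore, reusing the same id
ceph-volume lvm create --bluestore --data /dev/sdX --osd-id 12
```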


On Tue, Nov 26, 2019 at 8:03 AM Fyodor Ustinov <ufm-TAHq1US7lqU@public.gmane.org> wrote:

> Hi!
>
> I had similar errors in pools on SSD until I upgraded to nautilus (clean
> bluestore installation)
>


^ permalink raw reply	[flat|nested] 4+ messages in thread

* Re: scrub errors on rgw data pool
       [not found] ` <CANA9Uk4kmPqe6z17BUCnUuJyNcQkOqEcwXv+RLOsbKS-bmsyGw-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
  2019-11-26  2:32   ` Fyodor Ustinov
@ 2019-11-29 10:15   ` M Ranga Swami Reddy
  1 sibling, 0 replies; 4+ messages in thread
From: M Ranga Swami Reddy @ 2019-11-29 10:15 UTC (permalink / raw)
  To: ceph-users, ceph-devel



The primary OSD crashes with the assert below:

12.2.11/src/osd/ReplicatedBackend.cc:1445 assert(peer_missing.count(fromshard))

Here, the affected PG has 2 OSDs with the bluestore backend and 1 OSD with the
filestore backend.
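For anyone wanting to confirm the same backend mix on their own PG, something
like this works (the PG id 11.2f5 and OSD ids 21, 34, 57 are placeholders):

```shell
# Show the acting set of OSDs for the PG (example id 11.2f5)
ceph pg map 11.2f5

# For each OSD id in the acting set, report its object store backend
for id in 21 34 57; do
  ceph osd metadata $id | grep osd_objectstore
done
```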

On Mon, Nov 25, 2019 at 3:34 PM M Ranga Swami Reddy <swamireddy-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
wrote:



^ permalink raw reply	[flat|nested] 4+ messages in thread

end of thread, other threads:[~2019-11-29 10:15 UTC | newest]

Thread overview: 4+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-11-25 10:04 scrub errors on rgw data pool M Ranga Swami Reddy
     [not found] ` <CANA9Uk4kmPqe6z17BUCnUuJyNcQkOqEcwXv+RLOsbKS-bmsyGw-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2019-11-26  2:32   ` Fyodor Ustinov
     [not found]     ` <1067131621.54855.1574735565028.JavaMail.zimbra-TAHq1US7lqU@public.gmane.org>
2019-11-26  6:30       ` M Ranga Swami Reddy
2019-11-29 10:15   ` M Ranga Swami Reddy

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.