From: M Ranga Swami Reddy
Subject: Re: scrub errors on rgw data pool
Date: Tue, 26 Nov 2019 12:00:43 +0530
To: Fyodor Ustinov
Cc: ceph-users, ceph-devel
List-Id: ceph-devel.vger.kernel.org

Thanks for the reply.

Have you migrated all OSDs from the filestore backend to the bluestore
backend, or have you upgraded from Luminous 12.2.11 to 14.x? Which of
the two helped here?

On Tue, Nov 26, 2019 at 8:03 AM Fyodor Ustinov wrote:
> Hi!
>
> I had similar errors in pools on SSD until I upgraded to Nautilus (a clean
> bluestore installation).
>
> ----- Original Message -----
> > From: "M Ranga Swami Reddy"
> > To: "ceph-users", "ceph-devel"
> > Sent: Monday, 25 November 2019 12:04:46
> > Subject: [ceph-users] scrub errors on rgw data pool
>
> > Hello - We are using Ceph 12.2.11 (upgraded from Jewel 10.2.12 to
> > 12.2.11). In this cluster we have a mix of filestore and bluestore
> > OSD backends.
> > Recently we have been seeing scrub errors on the rgw buckets.data pool
> > every day, after the scrub operation performed by Ceph. If we run a PG
> > repair, the errors go away.
> >
> > Has anyone seen this issue?
> > Does the filestore backend have a bug/issue in 12.2.11 (i.e. Luminous)?
> > Could the mix of filestore and bluestore OSDs cause this type of issue?
> >
> > Thanks
> > Swami
> >
> > _______________________________________________
> > ceph-users mailing list
> > ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
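For readers hitting the same daily scrub errors: the PGs flagged inconsistent can be pulled out of `ceph health detail` before running `ceph pg repair <pgid>` on each one. Below is a minimal sketch of that extraction step; the sample output is a typical Luminous-era `HEALTH_ERR` report and is only an assumption here, not taken from the cluster discussed in this thread.

```python
import re

def inconsistent_pgs(health_detail: str) -> list:
    """Extract PG ids reported inconsistent by `ceph health detail`.

    Matches lines of the form:
        pg 11.2ab is active+clean+inconsistent, acting [12,3,7]
    """
    pattern = re.compile(r"pg (\S+) is [^\s,]*inconsistent")
    return [m.group(1) for m in pattern.finditer(health_detail)]

# Hypothetical sample output, shaped like a Luminous `ceph health detail` report.
sample = """\
HEALTH_ERR 2 scrub errors; Possible data damage: 2 pgs inconsistent
OSD_SCRUB_ERRORS 2 scrub errors
PG_DAMAGED Possible data damage: 2 pgs inconsistent
    pg 11.2ab is active+clean+inconsistent, acting [12,3,7]
    pg 11.3cd is active+clean+inconsistent, acting [5,9,1]
"""
print(inconsistent_pgs(sample))  # → ['11.2ab', '11.3cd']
```

Each extracted id could then be fed to `rados list-inconsistent-obj <pgid>` to inspect the damage before deciding whether `ceph pg repair <pgid>` is safe to run.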