Subject: Re: BTRFS RAID filesystem unmountable
From: Qu Wenruo
To: Michael Wade
Cc: linux-btrfs@vger.kernel.org
Date: Sat, 5 May 2018 08:43:08 +0800
Message-ID: <31f77b2f-4110-9868-2c6b-abf40ccef316@gmx.com>
References: <54d2f70a-adae-98cc-581f-2e4786783b26@gmx.com>

On 2018-05-05 00:18, Michael Wade wrote:
> Hi Qu,
>
> The tool is still running and the log file is now ~300mb. I guess it
> shouldn't normally take this long. Is there anything else worth
> trying?

I'm afraid not much.

There is a possibility of modifying btrfs-find-root to do a much faster
but limited search.
But from the result, it looks like underlying device corruption, and
there is not much we can do right now.

Thanks,
Qu

>
> Kind regards
> Michael
>
> On 2 May 2018 at 06:29, Michael Wade wrote:
>> Thanks Qu,
>>
>> I actually aborted the run with the old btrfs tools once I saw its
>> output.
>> The new btrfs tools are still running and have produced a log
>> file of ~85mb filled with that content so far.
>>
>> Kind regards
>> Michael
>>
>> On 2 May 2018 at 02:31, Qu Wenruo wrote:
>>>
>>>
>>> On 2018-05-01 23:50, Michael Wade wrote:
>>>> Hi Qu,
>>>>
>>>> Oh dear, that is not good news!
>>>>
>>>> I have been running the find root command since yesterday but it only
>>>> seems to be outputting the following message:
>>>>
>>>> ERROR: tree block bytenr 0 is not aligned to sectorsize 4096
>>>
>>> It's mostly fine, as find-root will go through all tree blocks and try
>>> to read them as tree blocks.
>>> Although btrfs-find-root suppresses csum error output, such basic
>>> tree validation checks are not suppressed, thus you get such messages.
>>>
>>>> ERROR: tree block bytenr 0 is not aligned to sectorsize 4096
>>>> ERROR: tree block bytenr 0 is not aligned to sectorsize 4096
>>>> ERROR: tree block bytenr 0 is not aligned to sectorsize 4096
>>>> ERROR: tree block bytenr 0 is not aligned to sectorsize 4096
>>>> ERROR: tree block bytenr 0 is not aligned to sectorsize 4096
>>>> ERROR: tree block bytenr 0 is not aligned to sectorsize 4096
>>>> ERROR: tree block bytenr 0 is not aligned to sectorsize 4096
>>>> ERROR: tree block bytenr 0 is not aligned to sectorsize 4096
>>>> ERROR: tree block bytenr 0 is not aligned to sectorsize 4096
>>>>
>>>> I tried with the latest btrfs tools compiled from source and the ones
>>>> I have installed, with the same result. Is there a CLI utility I could
>>>> use to determine if the log contains any other content?
>>>
>>> Did it report any useful info at the end?
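One way to answer the CLI-utility question above is to filter out the repeated alignment error and de-duplicate whatever remains in the log. A minimal sketch; the sample log lines here are illustrative stand-ins, and real btrfs-find-root output may differ:

```shell
# Build a small sample log standing in for the ~300mb btrfs-find-root log
# (assumption: the real log is dominated by the repeated alignment error).
log=$(mktemp)
cat > "$log" <<'EOF'
ERROR: tree block bytenr 0 is not aligned to sectorsize 4096
ERROR: tree block bytenr 0 is not aligned to sectorsize 4096
ERROR: tree block bytenr 0 is not aligned to sectorsize 4096
Superblock thinks the generation is 151800
Well block 20971520(gen: 5 level: 0) seems good
EOF
# Drop the repeated error and de-duplicate whatever else the log contains.
grep -v 'is not aligned to sectorsize' "$log" | sort -u
```

On the real log, `grep -cv 'is not aligned to sectorsize' find-root.log` would first show how many non-error lines there are before printing them.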
>>>
>>> Thanks,
>>> Qu
>>>
>>>>
>>>> Kind regards
>>>> Michael
>>>>
>>>>
>>>> On 30 April 2018 at 04:02, Qu Wenruo wrote:
>>>>>
>>>>>
>>>>> On 2018-04-29 22:08, Michael Wade wrote:
>>>>>> Hi Qu,
>>>>>>
>>>>>> Got this error message:
>>>>>>
>>>>>> ./btrfs inspect dump-tree -b 20800943685632 /dev/md127
>>>>>> btrfs-progs v4.16.1
>>>>>> bytenr mismatch, want=20800943685632, have=3118598835113619663
>>>>>> ERROR: cannot read chunk root
>>>>>> ERROR: unable to open /dev/md127
>>>>>>
>>>>>> I have attached the dumps for:
>>>>>>
>>>>>> dd if=/dev/md127 of=/tmp/chunk_root.copy1 bs=1 count=32K skip=266325721088
>>>>>> dd if=/dev/md127 of=/tmp/chunk_root.copy2 bs=1 count=32K skip=266359275520
>>>>>
>>>>> Unfortunately, both dumps are corrupted and contain mostly garbage.
>>>>> I think the underlying stack (mdraid) has something wrong or failed
>>>>> to recover its data.
>>>>>
>>>>> This means your last chance will be btrfs-find-root.
>>>>>
>>>>> Please try:
>>>>> # btrfs-find-root -o 3
>>>>>
>>>>> And provide all the output.
>>>>>
>>>>> But please keep in mind, the chunk root is a critical tree, and so far it's
>>>>> already heavily damaged.
>>>>> Although I could still continue trying to recover, there is a pretty low
>>>>> chance now.
>>>>>
>>>>> Thanks,
>>>>> Qu
>>>>>>
>>>>>> Kind regards
>>>>>> Michael
>>>>>>
>>>>>>
>>>>>> On 29 April 2018 at 10:33, Qu Wenruo wrote:
>>>>>>>
>>>>>>>
>>>>>>> On 2018-04-29 16:59, Michael Wade wrote:
>>>>>>>> Ok, will it be possible for me to install the new version of the tools
>>>>>>>> on my current kernel without overriding the existing install? Hesitant
>>>>>>>> to update kernel/btrfs as it might break the ReadyNAS interface /
>>>>>>>> future firmware upgrades.
>>>>>>>>
>>>>>>>> Perhaps I could grab this:
>>>>>>>> https://github.com/kdave/btrfs-progs/releases/tag/v4.16.1 and
>>>>>>>> hopefully build from source and then run the binaries directly?
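A side note on the dd commands in the thread: with `bs=1` they issue one system call per byte, which is very slow at a 32 KiB read. Both offsets used here are multiples of 4096 (266325721088 = 65020928 × 4096), so an equivalent but much faster read can use `bs=4096`. A sketch against a scratch file standing in for /dev/md127, with a small illustrative offset:

```shell
# Scratch 1 MiB file in place of /dev/md127 (assumption: demo device only).
img=$(mktemp)
head -c 1048576 /dev/urandom > "$img"
skip=266240                      # illustrative 4096-aligned byte offset
# Slow variant, as in the thread: one byte per block.
dd if="$img" of=/tmp/dump_slow bs=1 count=32768 skip="$skip" 2>/dev/null
# Fast equivalent: same bytes, 4 KiB blocks, offset expressed in blocks.
dd if="$img" of=/tmp/dump_fast bs=4096 count=8 skip=$((skip / 4096)) 2>/dev/null
cmp /tmp/dump_slow /tmp/dump_fast && echo "dumps are identical"
```

For the thread's first dump, the fast form would be `dd if=/dev/md127 of=/tmp/chunk_root.copy1 bs=4096 count=8 skip=65020928`.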
>>>>>>>
>>>>>>> Of course, that's how most of us test btrfs-progs builds.
>>>>>>>
>>>>>>> Thanks,
>>>>>>> Qu
>>>>>>>
>>>>>>>>
>>>>>>>> Kind regards
>>>>>>>>
>>>>>>>> On 29 April 2018 at 09:33, Qu Wenruo wrote:
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On 2018-04-29 16:11, Michael Wade wrote:
>>>>>>>>>> Thanks Qu,
>>>>>>>>>>
>>>>>>>>>> Please find attached the log file for the chunk recover command.
>>>>>>>>>
>>>>>>>>> Strangely, btrfs chunk recovery found no extra chunk beyond the current
>>>>>>>>> system chunk range.
>>>>>>>>>
>>>>>>>>> Which means it's the chunk tree that is corrupted.
>>>>>>>>>
>>>>>>>>> Please dump the chunk tree with the latest btrfs-progs (which provides the
>>>>>>>>> new --follow option).
>>>>>>>>>
>>>>>>>>> # btrfs inspect dump-tree -b 20800943685632
>>>>>>>>>
>>>>>>>>> If it doesn't work, please provide the following binary dump:
>>>>>>>>>
>>>>>>>>> # dd if= of=/tmp/chunk_root.copy1 bs=1 count=32K skip=266325721088
>>>>>>>>> # dd if= of=/tmp/chunk_root.copy2 bs=1 count=32K skip=266359275520
>>>>>>>>> (And we will need to repeat a similar dump several times according to
>>>>>>>>> the above dump)
>>>>>>>>>
>>>>>>>>> Thanks,
>>>>>>>>> Qu
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> Kind regards
>>>>>>>>>> Michael
>>>>>>>>>>
>>>>>>>>>> On 28 April 2018 at 12:38, Qu Wenruo wrote:
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> On 2018-04-28 17:37, Michael Wade wrote:
>>>>>>>>>>>> Hi Qu,
>>>>>>>>>>>>
>>>>>>>>>>>> Thanks for your reply. I will investigate upgrading the kernel;
>>>>>>>>>>>> however, I worry that future ReadyNAS firmware upgrades would fail on a
>>>>>>>>>>>> newer kernel version (I don't have much linux experience, so maybe my
>>>>>>>>>>>> concerns are unfounded!?).
>>>>>>>>>>>>
>>>>>>>>>>>> I have attached the output of the dump super command.
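The "bytenr mismatch, want=..., have=..." error earlier in the thread is btrfs comparing the bytenr field stored inside a tree-block header with the address it asked for. The same sanity check can be run by hand on one of the dd dumps: in a btrfs_header the bytenr is a little-endian u64 at byte offset 48, after the 32-byte csum and 16-byte fsid. A sketch against a synthetic dump (on the real system the input would be /tmp/chunk_root.copy1; a little-endian host is assumed for `od`):

```shell
# Fabricate a 32 KiB "dump" whose header bytenr field holds the chunk-root
# address from the thread, 20800943685632 (0x12EB18DC0000), little-endian.
dump=$(mktemp)
head -c 32768 /dev/zero > "$dump"
printf '\000\000\334\030\353\022\000\000' |
    dd of="$dump" bs=1 seek=48 conv=notrunc 2>/dev/null
want=20800943685632
# Read the u64 at offset 48; od -t u8 decodes it in host (little-endian) order.
have=$(od -A n -t u8 -j 48 -N 8 "$dump" | tr -d ' ')
if [ "$have" = "$want" ]; then
    echo "header bytenr matches"
else
    echo "bytenr mismatch: want=$want have=$have"
fi
```

A garbage dump, like the ones Qu examined, would print the mismatch line instead, with an essentially random `have` value.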
>>>>>>>>>>>>
>>>>>>>>>>>> I did actually run chunk recover before, without the verbose option;
>>>>>>>>>>>> it took around 24 hours to finish but did not resolve my issue. Happy
>>>>>>>>>>>> to start that again if you need its output.
>>>>>>>>>>>
>>>>>>>>>>> The system chunk only contains the following chunks:
>>>>>>>>>>> [0, 4194304]: Initial temporary chunk, not used at all
>>>>>>>>>>> [20971520, 29360128]: System chunk created by mkfs, should be fully
>>>>>>>>>>> used up
>>>>>>>>>>> [20800943685632, 20800977240064]:
>>>>>>>>>>> The newly created large system chunk.
>>>>>>>>>>>
>>>>>>>>>>> The chunk root is still in the 2nd chunk and thus valid, but some of its
>>>>>>>>>>> leaves are out of the range.
>>>>>>>>>>>
>>>>>>>>>>> If you can't wait 24h for chunk recovery to run, my advice would be to move
>>>>>>>>>>> the disk to some other computer, and use the latest btrfs-progs to execute
>>>>>>>>>>> the following command:
>>>>>>>>>>>
>>>>>>>>>>> # btrfs inspect dump-tree -b 20800943685632 --follow
>>>>>>>>>>>
>>>>>>>>>>> If we're lucky enough, we may read out the tree leaf containing the new
>>>>>>>>>>> system chunk and save the day.
>>>>>>>>>>>
>>>>>>>>>>> Thanks,
>>>>>>>>>>> Qu
>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> Thanks so much for your help.
>>>>>>>>>>>>
>>>>>>>>>>>> Kind regards
>>>>>>>>>>>> Michael
>>>>>>>>>>>>
>>>>>>>>>>>> On 28 April 2018 at 09:45, Qu Wenruo wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> On 2018-04-28 16:30, Michael Wade wrote:
>>>>>>>>>>>>>> Hi all,
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> I was hoping that someone would be able to help me resolve the issues
>>>>>>>>>>>>>> I am having with my ReadyNAS BTRFS volume. Basically my trouble
>>>>>>>>>>>>>> started after a power cut; subsequently the volume would not mount.
>>>>>>>>>>>>>> Here are the details of my setup as it is at the moment:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> uname -a
>>>>>>>>>>>>>> Linux QAI 4.4.116.alpine.1 #1 SMP Mon Feb 19 21:58:38 PST 2018 armv7l GNU/Linux
>>>>>>>>>>>>>
>>>>>>>>>>>>> The kernel is pretty old for btrfs.
>>>>>>>>>>>>> Upgrading is strongly recommended.
>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> btrfs --version
>>>>>>>>>>>>>> btrfs-progs v4.12
>>>>>>>>>>>>>
>>>>>>>>>>>>> So are the user tools.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Although I think it won't be a big problem, as the needed tools should be there.
>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> btrfs fi show
>>>>>>>>>>>>>> Label: '11baed92:data'  uuid: 20628cda-d98f-4f85-955c-932a367f8821
>>>>>>>>>>>>>> Total devices 1 FS bytes used 5.12TiB
>>>>>>>>>>>>>> devid 1 size 7.27TiB used 6.24TiB path /dev/md127
>>>>>>>>>>>>>
>>>>>>>>>>>>> So, it's btrfs on mdraid.
>>>>>>>>>>>>> That normally makes things harder to debug, so I can only provide
>>>>>>>>>>>>> advice from the btrfs side.
>>>>>>>>>>>>> For the mdraid part, I can't ensure anything.
>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Here are the relevant dmesg logs for the current state of the device:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> [   19.119391] md: md127 stopped.
>>>>>>>>>>>>>> [   19.120841] md: bind
>>>>>>>>>>>>>> [   19.121120] md: bind
>>>>>>>>>>>>>> [   19.121380] md: bind
>>>>>>>>>>>>>> [   19.125535] md/raid:md127: device sda3 operational as raid disk 0
>>>>>>>>>>>>>> [   19.125547] md/raid:md127: device sdc3 operational as raid disk 2
>>>>>>>>>>>>>> [   19.125554] md/raid:md127: device sdb3 operational as raid disk 1
>>>>>>>>>>>>>> [   19.126712] md/raid:md127: allocated 3240kB
>>>>>>>>>>>>>> [   19.126778] md/raid:md127: raid level 5 active with 3 out of 3
>>>>>>>>>>>>>> devices, algorithm 2
>>>>>>>>>>>>>> [   19.126784] RAID conf printout:
>>>>>>>>>>>>>> [   19.126789]  --- level:5 rd:3 wd:3
>>>>>>>>>>>>>> [   19.126794]  disk 0, o:1, dev:sda3
>>>>>>>>>>>>>> [   19.126799]  disk 1, o:1, dev:sdb3
>>>>>>>>>>>>>> [   19.126804]  disk 2, o:1, dev:sdc3
>>>>>>>>>>>>>> [   19.128118] md127: detected capacity change from 0 to 7991637573632
>>>>>>>>>>>>>> [   19.395112] Adding 523708k swap on /dev/md1.  Priority:-1 extents:1
>>>>>>>>>>>>>> across:523708k
>>>>>>>>>>>>>> [   19.434956] BTRFS: device label 11baed92:data devid 1 transid
>>>>>>>>>>>>>> 151800 /dev/md127
>>>>>>>>>>>>>> [   19.739276] BTRFS info (device md127): setting nodatasum
>>>>>>>>>>>>>> [   19.740440] BTRFS critical (device md127): unable to find logical
>>>>>>>>>>>>>> 3208757641216 len 4096
>>>>>>>>>>>>>> [   19.740450] BTRFS critical (device md127): unable to find logical
>>>>>>>>>>>>>> 3208757641216 len 4096
>>>>>>>>>>>>>> [   19.740498] BTRFS critical (device md127): unable to find logical
>>>>>>>>>>>>>> 3208757641216 len 4096
>>>>>>>>>>>>>> [   19.740512] BTRFS critical (device md127): unable to find logical
>>>>>>>>>>>>>> 3208757641216 len 4096
>>>>>>>>>>>>>> [   19.740552] BTRFS critical (device md127): unable to find logical
>>>>>>>>>>>>>> 3208757641216 len 4096
>>>>>>>>>>>>>> [   19.740560] BTRFS critical (device md127): unable to find logical
>>>>>>>>>>>>>> 3208757641216 len 4096
>>>>>>>>>>>>>> [   19.740576] BTRFS error (device md127): failed to read chunk root
>>>>>>>>>>>>>
>>>>>>>>>>>>> This shows it pretty clearly: btrfs fails to read the chunk root.
>>>>>>>>>>>>> And according to your "len 4096" above, it's a pretty old fs, as it's still
>>>>>>>>>>>>> using 4K nodesize rather than 16K nodesize.
>>>>>>>>>>>>>
>>>>>>>>>>>>> According to the above output, your superblock somehow lacks the
>>>>>>>>>>>>> needed system chunk mapping, which is used to initialize the chunk mapping.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Please provide the following command output:
>>>>>>>>>>>>>
>>>>>>>>>>>>> # btrfs inspect dump-super -fFa /dev/md127
>>>>>>>>>>>>>
>>>>>>>>>>>>> Also, please consider running the following command and dumping all its output:
>>>>>>>>>>>>>
>>>>>>>>>>>>> # btrfs rescue chunk-recover -v /dev/md127
>>>>>>>>>>>>>
>>>>>>>>>>>>> Please note that the above command can take a long time to finish, and if
>>>>>>>>>>>>> it works without problem, it may solve your problem.
>>>>>>>>>>>>> But if it doesn't work, the output could help me to manually craft a fix
>>>>>>>>>>>>> for your super block.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Thanks,
>>>>>>>>>>>>> Qu
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>>> [   19.783975] BTRFS error (device md127): open_ctree failed
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> In an attempt to recover the volume myself I ran a few BTRFS commands,
>>>>>>>>>>>>>> mostly using advice from here:
>>>>>>>>>>>>>> https://lists.opensuse.org/opensuse/2017-02/msg00930.html. However
>>>>>>>>>>>>>> that actually seems to have made things worse, as I can no longer mount
>>>>>>>>>>>>>> the file system, not even in readonly mode.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> So starting from the beginning, here is a list of things I have done so
>>>>>>>>>>>>>> far (hopefully I remembered the order in which I ran them!)
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> 1. Noticed that my backups to the NAS were not running (didn't get
>>>>>>>>>>>>>> notified that the volume had basically "died")
>>>>>>>>>>>>>> 2. ReadyNAS UI indicated that the volume was inactive.
>>>>>>>>>>>>>> 3. SSHed onto the box and found that the first drive was not marked as
>>>>>>>>>>>>>> operational (log showed I/O errors / UNKNOWN (0x2003)), so I replaced
>>>>>>>>>>>>>> the disk and let the array resync.
>>>>>>>>>>>>>> 4. After the resync the volume was still inaccessible, so I looked at the
>>>>>>>>>>>>>> logs once more and saw something like the following, which seemed to
>>>>>>>>>>>>>> indicate that the replay log had been corrupted when the power went
>>>>>>>>>>>>>> out:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> BTRFS critical (device md127): corrupt leaf, non-root leaf's nritems
>>>>>>>>>>>>>> is 0: block=232292352, root=7, slot=0
>>>>>>>>>>>>>> BTRFS critical (device md127): corrupt leaf, non-root leaf's nritems
>>>>>>>>>>>>>> is 0: block=232292352, root=7, slot=0
>>>>>>>>>>>>>> BTRFS: error (device md127) in btrfs_replay_log:2524: errno=-5 IO
>>>>>>>>>>>>>> failure (Failed to recover log tree)
>>>>>>>>>>>>>> BTRFS error (device md127): pending csums is 155648
>>>>>>>>>>>>>> BTRFS error (device md127): cleaner transaction attach returned -30
>>>>>>>>>>>>>> BTRFS critical (device md127): corrupt leaf, non-root leaf's nritems
>>>>>>>>>>>>>> is 0: block=232292352, root=7, slot=0
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> 5. Then:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> btrfs rescue zero-log
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> 6. Was then able to mount the volume in readonly mode.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> btrfs scrub start
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Which fixed some errors but not all:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> scrub status for 20628cda-d98f-4f85-955c-932a367f8821
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> scrub started at Tue Apr 24 17:27:44 2018, running for 04:00:34
>>>>>>>>>>>>>> total bytes scrubbed: 224.26GiB with 6 errors
>>>>>>>>>>>>>> error details: csum=6
>>>>>>>>>>>>>> corrected errors: 0, uncorrectable errors: 6, unverified errors: 0
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> scrub status for 20628cda-d98f-4f85-955c-932a367f8821
>>>>>>>>>>>>>> scrub started at Tue Apr 24 17:27:44 2018, running for 04:34:43
>>>>>>>>>>>>>> total bytes scrubbed: 224.26GiB with 6 errors
>>>>>>>>>>>>>> error details: csum=6
>>>>>>>>>>>>>> corrected errors: 0, uncorrectable errors: 6, unverified errors: 0
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> 7. Seeing this hanging, I rebooted the NAS.
>>>>>>>>>>>>>> 8. Think this is when the volume would not mount at all.
>>>>>>>>>>>>>> 9. Seeing log entries like these:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> BTRFS warning (device md127): checksum error at logical 20800943685632
>>>>>>>>>>>>>> on dev /dev/md127, sector 520167424: metadata node (level 1) in tree 3
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> I ran
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> btrfs check --fix-crc
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> And that brings us to where I am now: some seemingly corrupted BTRFS
>>>>>>>>>>>>>> metadata, and unable to mount the drive even with the recovery option.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Any help you can give is much appreciated!
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Kind regards
>>>>>>>>>>>>>> Michael
>>>>>>>>>>>>>> --
>>>>>>>>>>>>>> To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
>>>>>>>>>>>>>> the body of a message to majordomo@vger.kernel.org
>>>>>>>>>>>>>> More majordomo info at http://vger.kernel.org/majordomo-info.html