From mboxrd@z Thu Jan  1 00:00:00 1970
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1754345Ab0DZBtS (ORCPT );
	Sun, 25 Apr 2010 21:49:18 -0400
Received: from bld-mail13.adl6.internode.on.net ([150.101.137.98]:51803
	"EHLO mail.internode.on.net" rhost-flags-OK-OK-OK-FAIL)
	by vger.kernel.org with ESMTP id S1753387Ab0DZBtQ (ORCPT );
	Sun, 25 Apr 2010 21:49:16 -0400
Date: Mon, 26 Apr 2010 11:49:08 +1000
From: Dave Chinner <david@fromorbit.com>
To: tytso@mit.edu, linux-fsdevel@vger.kernel.org,
	linux-kernel@vger.kernel.org, xfs@oss.sgi.com
Subject: Re: [PATCH 3/4] writeback: pay attention to wbc->nr_to_write in
	write_cache_pages
Message-ID: <20100426014908.GD11437@dastard>
References: <1271731314-5893-1-git-send-email-david@fromorbit.com>
	<1271731314-5893-4-git-send-email-david@fromorbit.com>
	<20100425033315.GC667@thunk.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20100425033315.GC667@thunk.org>
User-Agent: Mutt/1.5.20 (2009-06-14)
Sender: linux-kernel-owner@vger.kernel.org
X-Mailing-List: linux-kernel@vger.kernel.org

On Sat, Apr 24, 2010 at 11:33:15PM -0400, tytso@mit.edu wrote:
> On Tue, Apr 20, 2010 at 12:41:53PM +1000, Dave Chinner wrote:
> > From: Dave Chinner <dchinner@redhat.com>
> > 
> > If a filesystem writes more than one page in ->writepage,
> > write_cache_pages fails to notice this and continues to attempt
> > writeback when wbc->nr_to_write has gone negative - this trace was
> > captured from XFS:
> > 
> >     wbc_writeback_start: towrt=1024
> >     wbc_writepage: towrt=1024
> >     wbc_writepage: towrt=0
> >     wbc_writepage: towrt=-1
> >     wbc_writepage: towrt=-5
> >     wbc_writepage: towrt=-21
> >     wbc_writepage: towrt=-85
> > 
> > This has adverse effects on filesystem writeback behaviour.
> > write_cache_pages() needs to terminate after a certain number of
> > pages are written, not after a certain number of calls to
> > ->writepage are made. Make it observe the current value of
> > wbc->nr_to_write and treat a value of <= 0 as either a termination
> > condition or a trigger to reset to MAX_WRITEBACK_PAGES for data
> > integrity syncs.
> 
> Be careful here.  If you are going to write more pages than what the
> writeback code has requested (the stupid no more than 1024 pages
> restriction in the writeback code before it jumps to start writing
> some other inode), you actually need to let the returned
> wbc->nr_to_write go negative, so that wb_writeback() knows how many
> pages it has written.
> 
> In other words, the writeback code assumes that
> 
>   <original value of nr_to_write> - <returned wbc->nr_to_write>
> 
> is
> 
>   <number of pages actually written>

Yes, but that does not require a negative value to get right. None
of the code relies on negative nr_to_write values to do anything
correctly, and all the termination checks are for wbc->nr_to_write
<= 0. And the tracing shows it behaves correctly when
wbc->nr_to_write = 0 on return. Requiring a negative number is not
documented in any of the comments, write_cache_pages() does not
return a negative number, etc, so I can't see why you think this is
necessary....
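To make that concrete, the accounting reduces to something like the
sketch below (a simplified rendering of the wb_writeback() chunk
loop of this era; writeback_inodes() and the exact field names are
approximations, not the verbatim mainline code). The caller only
ever uses the difference, so a returned nr_to_write of exactly zero
is accounted just as correctly as a negative one:

	long wrote = 0;

	for (;;) {
		struct writeback_control wbc = {
			.sync_mode	= WB_SYNC_NONE,
			.nr_to_write	= MAX_WRITEBACK_PAGES,
		};

		writeback_inodes(&wbc);

		/* subtraction works for 0 and for negative returns */
		wrote += MAX_WRITEBACK_PAGES - wbc.nr_to_write;

		/* every termination check is <= 0, never < 0 */
		if (wbc.nr_to_write > 0)
			break;	/* chunk not used up: no more work */
	}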
> If you don't let wbc->nr_to_write go negative, the writeback code will
> be confused about how many pages were _actually_ written, and the
> writeback code ends up writing too much.  See commit 2faf2e1.

ext4 added a "bump" to wbc->nr_to_write, then in some cases forgot
to remove it, so nr_to_write never returned to <= 0. Well, of course
this causes writeback to write too much! But that's an ext4 bug of
not allowing nr_to_write to reach zero (not negative, but zero), not
a general writeback bug....
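For reference, the check the patch adds to the write_cache_pages()
loop amounts to the following (a sketch of the idea described in the
patch, not the verbatim diff):

	ret = (*writepage)(page, wbc, data);
	...
	/*
	 * ->writepage may have written more than one page and
	 * decremented wbc->nr_to_write by more than one, so test
	 * the current value rather than counting calls.
	 */
	if (wbc->nr_to_write <= 0) {
		if (wbc->sync_mode == WB_SYNC_NONE) {
			/* background writeback: chunk done */
			done = 1;
			break;
		}
		/*
		 * Data integrity sync: we must write every dirty
		 * page, so reset the chunk and keep scanning.
		 */
		wbc->nr_to_write = MAX_WRITEBACK_PAGES;
	}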
> All of this is a crock of course.  The file system shouldn't be
> second-guessing the writeback code.  Instead the writeback code
> should be adaptively measuring how long it takes to write out N
> pages to a particular block device, and then decide what's the
> appropriate setting for nr_to_write.  What makes sense for a USB
> stick, or a 4200 RPM laptop drive, may not make sense for a
> massive RAID array....

Why? Writeback should just keep pushing pages down until it congests
the block device. Then it throttles itself in get_request(), and so
writeback already adapts to the load on the device. Multiple passes
of 1024 pages per dirty inode are fine for this - a larger
nr_to_write doesn't get the block device to congestion any faster or
slower, nor does it change the behaviour once at congestion....
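Schematically, that feedback loop looks like this (illustrative
pseudo-code only - the real get_request() path in block/blk-core.c
is more involved, and the helper shown here is a simplification):

	struct request *get_request_wait(struct request_queue *q, int rw)
	{
		struct request *rq;

		/*
		 * While the device's request queue is full, the
		 * submitter (the writeback thread) sleeps until
		 * in-flight IO completes.  This is what adapts
		 * writeback to device speed - a bigger nr_to_write
		 * cannot push the device any harder than this allows.
		 */
		while (!(rq = get_request(q, rw)))
			io_schedule();
		return rq;
	}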
> But since we don't have that, both XFS and ext4 have workarounds for
> brain-damaged writeback behaviour.  (I did some testing, and even for
> standard laptop drives the cap of 1024 pages is just Way Too Small;
> that limit was set something like a decade ago, and everyone has been
> afraid to change it, even though disks have gotten a wee bit faster
> since those days.)

XFS put a workaround in for a different reason than ext4 did. ext4
put it in to improve delayed allocation by working with larger
chunks of pages. XFS put it in to get large IOs issued through
submit_bio(), not to help the allocator...

And to be the nasty person to shoot down your modern hardware
theory: nr_to_write = 1024 pages works just fine on my laptop (XFS
on an Indilinx SSD) as well as on my big test server (XFS on a
12-disk RAID0). The server gets 1.5GB/s with pretty much perfect IO
patterns with the fixes I posted, unlike the mess of single-page
IOs that occurs without them....

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com