* OVMF very slow on AMD
@ 2016-07-14 15:53 Anthony PERARD
2016-07-15 13:48 ` Konrad Rzeszutek Wilk
0 siblings, 1 reply; 19+ messages in thread
From: Anthony PERARD @ 2016-07-14 15:53 UTC (permalink / raw)
To: xen-devel
Hi,
I've been investigating why OVMF is very slow in a Xen guest on an AMD
host. This, I think, is the current failure that osstest is having.
I've only looked at a specific part of OVMF where the slowdown is very
obvious on AMD vs Intel: the decompression.
This is what I get on AMD, via the Xen serial console port:
Invoking OVMF ...
SecCoreStartupWithStack(0xFFFCC000, 0x818000)
then nothing for almost 1 minute, then the rest of the boot process.
With the same binary on Intel, the output does not get "stuck" here.
I could pinpoint which part of the boot process takes a long time, but
there is nothing obvious in there, just a loop that decompresses the
OVMF binary, with plenty of iterations.
I tried `xentrace', but the trace does not show anything wrong; there is
just an interrupt from time to time. I tried to add some tracepoints
inside this decompression function in OVMF, but that did not reveal
anything either; maybe they were not in the right place.
Anyway, the function is: LzmaDec_DecodeReal() from the file
IntelFrameworkModulePkg/Library/LzmaCustomDecompressLib/Sdk/C/LzmaDec.c
you can get the assembly from this object:
Build/OvmfX64/DEBUG_GCC49/X64/IntelFrameworkModulePkg/Library/LzmaCustomDecompressLib/LzmaCustomDecompressLib/OUTPUT/Sdk/C/LzmaDec.obj
This is with OVMF upstream (https://github.com/tianocore/edk2).
I can send the assembly if needed.
So, this loop takes about 1 minute on my AMD machine (AMD Opteron(tm)
Processor 4284), and less than 1 second on an Intel machine.
If I compile OVMF as a 32-bit binary, the loop is faster, but still takes
about 30s on AMD. (That is true for both OvmfIa32 and OvmfIa32X64, which
has a 32-bit bootstrap but can start a 64-bit OS.)
Another thing, I tried the same binary (64bit) with KVM, and OVMF seems
fast.
So, any idea of what I could investigate?
Thanks,
--
Anthony PERARD
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel
* Re: OVMF very slow on AMD
2016-07-14 15:53 OVMF very slow on AMD Anthony PERARD
@ 2016-07-15 13:48 ` Konrad Rzeszutek Wilk
2016-07-15 15:22 ` Boris Ostrovsky
` (2 more replies)
0 siblings, 3 replies; 19+ messages in thread
From: Konrad Rzeszutek Wilk @ 2016-07-15 13:48 UTC (permalink / raw)
To: Anthony PERARD; +Cc: xen-devel
On Thu, Jul 14, 2016 at 04:53:07PM +0100, Anthony PERARD wrote:
> Hi,
>
> I've been investigating why OVMF is very slow in a Xen guest on an AMD
> host. This, I think, is the current failure that osstest is having.
>
> I've only looked at a specific part of OVMF where the slowdown is very
> obvious on AMD vs Intel: the decompression.
>
> This is what I get on AMD, via the Xen serial console port:
> Invoking OVMF ...
> SecCoreStartupWithStack(0xFFFCC000, 0x818000)
> then nothing for almost 1 minute, then the rest of the boot process.
> With the same binary on Intel, the output does not get "stuck" here.
>
> I could pinpoint which part of the boot process takes a long time, but
> there is nothing obvious in there, just a loop that decompresses the
> OVMF binary, with plenty of iterations.
> I tried `xentrace', but the trace does not show anything wrong; there is
> just an interrupt from time to time. I tried to add some tracepoints
> inside this decompression function in OVMF, but that did not reveal
> anything either; maybe they were not in the right place.
>
> Anyway, the function is: LzmaDec_DecodeReal() from the file
> IntelFrameworkModulePkg/Library/LzmaCustomDecompressLib/Sdk/C/LzmaDec.c
> you can get the assembly from this object:
> Build/OvmfX64/DEBUG_GCC49/X64/IntelFrameworkModulePkg/Library/LzmaCustomDecompressLib/LzmaCustomDecompressLib/OUTPUT/Sdk/C/LzmaDec.obj
> This is with OVMF upstream (https://github.com/tianocore/edk2).
> I can send the assembly if needed.
Pls. The full file if possible. Perhaps there is also an .S file somewhere there?
>
> So, this loop takes about 1 minute on my AMD machine (AMD Opteron(tm)
> Processor 4284), and less than 1 second on an Intel machine.
> If I compile OVMF as a 32-bit binary, the loop is faster, but still takes
> about 30s on AMD. (That is true for both OvmfIa32 and OvmfIa32X64, which
> has a 32-bit bootstrap but can start a 64-bit OS.)
> Another thing, I tried the same binary (64bit) with KVM, and OVMF seems
> fast.
>
>
> So, any idea of what I could investigate?
I presume we are emulating some operation on AMD but not on Intel.
However, you say xentrace shows nothing - which would imply we are not
incurring VMEXITs to deal with this. Hmm.. Could it be what we
expose to the guest (the CPUID flags)? Perhaps we are missing one on AMD
and it takes a slower route?
>
> Thanks,
>
> --
> Anthony PERARD
>
* Re: OVMF very slow on AMD
2016-07-15 13:48 ` Konrad Rzeszutek Wilk
@ 2016-07-15 15:22 ` Boris Ostrovsky
2016-07-27 11:08 ` Anthony PERARD
2016-07-18 14:10 ` Anthony PERARD
2016-07-18 15:09 ` Anthony PERARD
2 siblings, 1 reply; 19+ messages in thread
From: Boris Ostrovsky @ 2016-07-15 15:22 UTC (permalink / raw)
To: Konrad Rzeszutek Wilk, Anthony PERARD; +Cc: xen-devel
On 07/15/2016 09:48 AM, Konrad Rzeszutek Wilk wrote:
> On Thu, Jul 14, 2016 at 04:53:07PM +0100, Anthony PERARD wrote:
>> Hi,
>>
>> I've been investigating why OVMF is very slow in a Xen guest on an AMD
>> host. This, I think, is the current failure that osstest is having.
>>
>> I've only looked at a specific part of OVMF where the slowdown is very
>> obvious on AMD vs Intel: the decompression.
>>
>> This is what I get on AMD, via the Xen serial console port:
>> Invoking OVMF ...
>> SecCoreStartupWithStack(0xFFFCC000, 0x818000)
>> then nothing for almost 1 minute, then the rest of the boot process.
>> With the same binary on Intel, the output does not get "stuck" here.
>>
>> I could pinpoint which part of the boot process takes a long time, but
>> there is nothing obvious in there, just a loop that decompresses the
>> OVMF binary, with plenty of iterations.
>> I tried `xentrace', but the trace does not show anything wrong; there is
>> just an interrupt from time to time. I tried to add some tracepoints
>> inside this decompression function in OVMF, but that did not reveal
>> anything either; maybe they were not in the right place.
>>
>> Anyway, the function is: LzmaDec_DecodeReal() from the file
>> IntelFrameworkModulePkg/Library/LzmaCustomDecompressLib/Sdk/C/LzmaDec.c
>> you can get the assembly from this object:
>> Build/OvmfX64/DEBUG_GCC49/X64/IntelFrameworkModulePkg/Library/LzmaCustomDecompressLib/LzmaCustomDecompressLib/OUTPUT/Sdk/C/LzmaDec.obj
>> This is with OVMF upstream (https://github.com/tianocore/edk2).
>> I can send the assembly if needed.
> Pls. The full file if possible. Perhaps there is also an .S file somewhere there?
>
>> So, this loop takes about 1 minute on my AMD machine (AMD Opteron(tm)
>> Processor 4284), and less than 1 second on an Intel machine.
>> If I compile OVMF as a 32-bit binary, the loop is faster, but still takes
>> about 30s on AMD. (That is true for both OvmfIa32 and OvmfIa32X64, which
>> has a 32-bit bootstrap but can start a 64-bit OS.)
>> Another thing, I tried the same binary (64bit) with KVM, and OVMF seems
>> fast.
>>
>>
>> So, any idea of what I could investigate?
> I presume we are emulating some operation on AMD but not on Intel.
>
> However, you say xentrace shows nothing - which would imply we are not
> incurring VMEXITs to deal with this. Hmm.. Could it be what we
> expose to the guest (the CPUID flags)? Perhaps we are missing one on AMD
> and it takes a slower route?
I don't know whether it's possible, but can you extract this loop somehow
and run it on bare metal? Or run the whole thing on bare metal.
Also, a newer compiler might potentially make a difference (if you are
running an older one).
-boris
* Re: OVMF very slow on AMD
2016-07-15 13:48 ` Konrad Rzeszutek Wilk
2016-07-15 15:22 ` Boris Ostrovsky
@ 2016-07-18 14:10 ` Anthony PERARD
2016-07-18 15:09 ` Anthony PERARD
2 siblings, 0 replies; 19+ messages in thread
From: Anthony PERARD @ 2016-07-18 14:10 UTC (permalink / raw)
To: Konrad Rzeszutek Wilk; +Cc: xen-devel
[-- Attachment #1: Type: text/plain, Size: 2188 bytes --]
On Fri, Jul 15, 2016 at 09:48:31AM -0400, Konrad Rzeszutek Wilk wrote:
> On Thu, Jul 14, 2016 at 04:53:07PM +0100, Anthony PERARD wrote:
> > Hi,
> >
> > I've been investigating why OVMF is very slow in a Xen guest on an AMD
> > host. This, I think, is the current failure that osstest is having.
> >
> > I've only looked at a specific part of OVMF where the slowdown is very
> > obvious on AMD vs Intel: the decompression.
> >
> > This is what I get on AMD, via the Xen serial console port:
> > Invoking OVMF ...
> > SecCoreStartupWithStack(0xFFFCC000, 0x818000)
> > then nothing for almost 1 minute, then the rest of the boot process.
> > With the same binary on Intel, the output does not get "stuck" here.
> >
> > I could pinpoint which part of the boot process takes a long time, but
> > there is nothing obvious in there, just a loop that decompresses the
> > OVMF binary, with plenty of iterations.
> > I tried `xentrace', but the trace does not show anything wrong; there is
> > just an interrupt from time to time. I tried to add some tracepoints
> > inside this decompression function in OVMF, but that did not reveal
> > anything either; maybe they were not in the right place.
> >
> > Anyway, the function is: LzmaDec_DecodeReal() from the file
> > IntelFrameworkModulePkg/Library/LzmaCustomDecompressLib/Sdk/C/LzmaDec.c
> > you can get the assembly from this object:
> > Build/OvmfX64/DEBUG_GCC49/X64/IntelFrameworkModulePkg/Library/LzmaCustomDecompressLib/LzmaCustomDecompressLib/OUTPUT/Sdk/C/LzmaDec.obj
> > This is with OVMF upstream (https://github.com/tianocore/edk2).
> > I can send the assembly if needed.
>
> Pls. The full file if possible. Perhaps there is also an .S file somewhere there?
I've attached the output of:
objdump -d Build/OvmfX64/DEBUG_GCC49/X64/IntelFrameworkModulePkg/Library/LzmaCustomDecompressLib/LzmaCustomDecompressLib/OUTPUT/Sdk/C/LzmaDec.obj
This is the C file:
https://github.com/tianocore/edk2/blob/2bfd84ed45b2b66bdabac059df9db3404912dd28/IntelFrameworkModulePkg/Library/LzmaCustomDecompressLib/Sdk/C/LzmaDec.c
As far as I can tell, LzmaDec_DecodeReal() does not call anything else.
And there is no .S file.
--
Anthony PERARD
[-- Attachment #2: lzmadec.obj.disas --]
[-- Type: text/plain, Size: 185781 bytes --]
Build/OvmfX64/DEBUG_GCC49/X64/IntelFrameworkModulePkg/Library/LzmaCustomDecompressLib/LzmaCustomDecompressLib/OUTPUT/Sdk/C/LzmaDec.obj: file format elf64-x86-64
Disassembly of section .text.LzmaDec_DecodeReal:
0000000000000000 <LzmaDec_DecodeReal>:
0: 55 push %rbp
1: 48 89 e5 mov %rsp,%rbp
4: 48 81 ec f8 00 00 00 sub $0xf8,%rsp
b: 48 89 bd 18 ff ff ff mov %rdi,-0xe8(%rbp)
12: 89 b5 14 ff ff ff mov %esi,-0xec(%rbp)
18: 48 89 95 08 ff ff ff mov %rdx,-0xf8(%rbp)
1f: 48 8b 85 18 ff ff ff mov -0xe8(%rbp),%rax
26: 48 8b 40 10 mov 0x10(%rax),%rax
2a: 48 89 85 78 ff ff ff mov %rax,-0x88(%rbp)
31: 48 8b 85 18 ff ff ff mov -0xe8(%rbp),%rax
38: 8b 40 40 mov 0x40(%rax),%eax
3b: 89 45 fc mov %eax,-0x4(%rbp)
3e: 48 8b 85 18 ff ff ff mov -0xe8(%rbp),%rax
45: 8b 40 44 mov 0x44(%rax),%eax
48: 89 45 f8 mov %eax,-0x8(%rbp)
4b: 48 8b 85 18 ff ff ff mov -0xe8(%rbp),%rax
52: 8b 40 48 mov 0x48(%rax),%eax
55: 89 45 f4 mov %eax,-0xc(%rbp)
58: 48 8b 85 18 ff ff ff mov -0xe8(%rbp),%rax
5f: 8b 40 4c mov 0x4c(%rax),%eax
62: 89 45 f0 mov %eax,-0x10(%rbp)
65: 48 8b 85 18 ff ff ff mov -0xe8(%rbp),%rax
6c: 8b 40 50 mov 0x50(%rax),%eax
6f: 89 45 ec mov %eax,-0x14(%rbp)
72: 48 8b 85 18 ff ff ff mov -0xe8(%rbp),%rax
79: 8b 40 08 mov 0x8(%rax),%eax
7c: ba 01 00 00 00 mov $0x1,%edx
81: 89 c1 mov %eax,%ecx
83: d3 e2 shl %cl,%edx
85: 89 d0 mov %edx,%eax
87: 83 e8 01 sub $0x1,%eax
8a: 89 85 74 ff ff ff mov %eax,-0x8c(%rbp)
90: 48 8b 85 18 ff ff ff mov -0xe8(%rbp),%rax
97: 8b 40 04 mov 0x4(%rax),%eax
9a: ba 01 00 00 00 mov $0x1,%edx
9f: 89 c1 mov %eax,%ecx
a1: d3 e2 shl %cl,%edx
a3: 89 d0 mov %edx,%eax
a5: 83 e8 01 sub $0x1,%eax
a8: 89 85 70 ff ff ff mov %eax,-0x90(%rbp)
ae: 48 8b 85 18 ff ff ff mov -0xe8(%rbp),%rax
b5: 8b 00 mov (%rax),%eax
b7: 89 85 6c ff ff ff mov %eax,-0x94(%rbp)
bd: 48 8b 85 18 ff ff ff mov -0xe8(%rbp),%rax
c4: 48 8b 40 18 mov 0x18(%rax),%rax
c8: 48 89 85 60 ff ff ff mov %rax,-0xa0(%rbp)
cf: 48 8b 85 18 ff ff ff mov -0xe8(%rbp),%rax
d6: 8b 40 34 mov 0x34(%rax),%eax
d9: 89 85 5c ff ff ff mov %eax,-0xa4(%rbp)
df: 48 8b 85 18 ff ff ff mov -0xe8(%rbp),%rax
e6: 8b 40 30 mov 0x30(%rax),%eax
e9: 89 45 e8 mov %eax,-0x18(%rbp)
ec: 48 8b 85 18 ff ff ff mov -0xe8(%rbp),%rax
f3: 8b 40 38 mov 0x38(%rax),%eax
f6: 89 45 e4 mov %eax,-0x1c(%rbp)
f9: 48 8b 85 18 ff ff ff mov -0xe8(%rbp),%rax
100: 8b 40 3c mov 0x3c(%rax),%eax
103: 89 85 58 ff ff ff mov %eax,-0xa8(%rbp)
109: c7 45 e0 00 00 00 00 movl $0x0,-0x20(%rbp)
110: 48 8b 85 18 ff ff ff mov -0xe8(%rbp),%rax
117: 48 8b 40 20 mov 0x20(%rax),%rax
11b: 48 89 45 d8 mov %rax,-0x28(%rbp)
11f: 48 8b 85 18 ff ff ff mov -0xe8(%rbp),%rax
126: 8b 40 28 mov 0x28(%rax),%eax
129: 89 45 d4 mov %eax,-0x2c(%rbp)
12c: 48 8b 85 18 ff ff ff mov -0xe8(%rbp),%rax
133: 8b 40 2c mov 0x2c(%rax),%eax
136: 89 45 d0 mov %eax,-0x30(%rbp)
139: 8b 45 e4 mov -0x1c(%rbp),%eax
13c: 23 85 74 ff ff ff and -0x8c(%rbp),%eax
142: 89 85 54 ff ff ff mov %eax,-0xac(%rbp)
148: 8b 45 fc mov -0x4(%rbp),%eax
14b: c1 e0 04 shl $0x4,%eax
14e: 89 c2 mov %eax,%edx
150: 8b 85 54 ff ff ff mov -0xac(%rbp),%eax
156: 48 01 d0 add %rdx,%rax
159: 48 8d 14 00 lea (%rax,%rax,1),%rdx
15d: 48 8b 85 78 ff ff ff mov -0x88(%rbp),%rax
164: 48 01 d0 add %rdx,%rax
167: 48 89 45 c8 mov %rax,-0x38(%rbp)
16b: 48 8b 45 c8 mov -0x38(%rbp),%rax
16f: 0f b7 00 movzwl (%rax),%eax
172: 0f b7 c0 movzwl %ax,%eax
175: 89 85 50 ff ff ff mov %eax,-0xb0(%rbp)
17b: 81 7d d4 ff ff ff 00 cmpl $0xffffff,-0x2c(%rbp)
182: 77 23 ja 1a7 <LzmaDec_DecodeReal+0x1a7>
184: c1 65 d4 08 shll $0x8,-0x2c(%rbp)
188: 8b 45 d0 mov -0x30(%rbp),%eax
18b: c1 e0 08 shl $0x8,%eax
18e: 89 c1 mov %eax,%ecx
190: 48 8b 45 d8 mov -0x28(%rbp),%rax
194: 48 8d 50 01 lea 0x1(%rax),%rdx
198: 48 89 55 d8 mov %rdx,-0x28(%rbp)
19c: 0f b6 00 movzbl (%rax),%eax
19f: 0f b6 c0 movzbl %al,%eax
1a2: 09 c8 or %ecx,%eax
1a4: 89 45 d0 mov %eax,-0x30(%rbp)
1a7: 8b 45 d4 mov -0x2c(%rbp),%eax
1aa: c1 e8 0b shr $0xb,%eax
1ad: 0f af 85 50 ff ff ff imul -0xb0(%rbp),%eax
1b4: 89 85 4c ff ff ff mov %eax,-0xb4(%rbp)
1ba: 8b 45 d0 mov -0x30(%rbp),%eax
1bd: 3b 85 4c ff ff ff cmp -0xb4(%rbp),%eax
1c3: 0f 83 56 03 00 00 jae 51f <LzmaDec_DecodeReal+0x51f>
1c9: 8b 85 4c ff ff ff mov -0xb4(%rbp),%eax
1cf: 89 45 d4 mov %eax,-0x2c(%rbp)
1d2: 8b 85 50 ff ff ff mov -0xb0(%rbp),%eax
1d8: 89 c2 mov %eax,%edx
1da: b8 00 08 00 00 mov $0x800,%eax
1df: 2b 85 50 ff ff ff sub -0xb0(%rbp),%eax
1e5: c1 e8 05 shr $0x5,%eax
1e8: 01 c2 add %eax,%edx
1ea: 48 8b 45 c8 mov -0x38(%rbp),%rax
1ee: 66 89 10 mov %dx,(%rax)
1f1: 48 8b 85 78 ff ff ff mov -0x88(%rbp),%rax
1f8: 48 05 6c 0e 00 00 add $0xe6c,%rax
1fe: 48 89 45 c8 mov %rax,-0x38(%rbp)
202: 83 bd 58 ff ff ff 00 cmpl $0x0,-0xa8(%rbp)
209: 75 06 jne 211 <LzmaDec_DecodeReal+0x211>
20b: 83 7d e4 00 cmpl $0x0,-0x1c(%rbp)
20f: 74 68 je 279 <LzmaDec_DecodeReal+0x279>
211: 8b 45 e4 mov -0x1c(%rbp),%eax
214: 23 85 70 ff ff ff and -0x90(%rbp),%eax
21a: 89 c2 mov %eax,%edx
21c: 8b 85 6c ff ff ff mov -0x94(%rbp),%eax
222: 89 d6 mov %edx,%esi
224: 89 c1 mov %eax,%ecx
226: d3 e6 shl %cl,%esi
228: 83 7d e8 00 cmpl $0x0,-0x18(%rbp)
22c: 75 0d jne 23b <LzmaDec_DecodeReal+0x23b>
22e: 8b 85 5c ff ff ff mov -0xa4(%rbp),%eax
234: 83 e8 01 sub $0x1,%eax
237: 89 c2 mov %eax,%edx
239: eb 08 jmp 243 <LzmaDec_DecodeReal+0x243>
23b: 8b 45 e8 mov -0x18(%rbp),%eax
23e: 83 e8 01 sub $0x1,%eax
241: 89 c2 mov %eax,%edx
243: 48 8b 85 60 ff ff ff mov -0xa0(%rbp),%rax
24a: 48 01 d0 add %rdx,%rax
24d: 0f b6 00 movzbl (%rax),%eax
250: 0f b6 d0 movzbl %al,%edx
253: b8 08 00 00 00 mov $0x8,%eax
258: 2b 85 6c ff ff ff sub -0x94(%rbp),%eax
25e: 89 c1 mov %eax,%ecx
260: d3 fa sar %cl,%edx
262: 89 d0 mov %edx,%eax
264: 8d 14 06 lea (%rsi,%rax,1),%edx
267: 89 d0 mov %edx,%eax
269: 01 c0 add %eax,%eax
26b: 01 d0 add %edx,%eax
26d: c1 e0 08 shl $0x8,%eax
270: 89 c0 mov %eax,%eax
272: 48 01 c0 add %rax,%rax
275: 48 01 45 c8 add %rax,-0x38(%rbp)
279: 83 7d fc 06 cmpl $0x6,-0x4(%rbp)
27d: 0f 87 fc 00 00 00 ja 37f <LzmaDec_DecodeReal+0x37f>
283: c7 45 c4 01 00 00 00 movl $0x1,-0x3c(%rbp)
28a: 8b 45 c4 mov -0x3c(%rbp),%eax
28d: 48 8d 14 00 lea (%rax,%rax,1),%rdx
291: 48 8b 45 c8 mov -0x38(%rbp),%rax
295: 48 01 d0 add %rdx,%rax
298: 0f b7 00 movzwl (%rax),%eax
29b: 0f b7 c0 movzwl %ax,%eax
29e: 89 85 50 ff ff ff mov %eax,-0xb0(%rbp)
2a4: 81 7d d4 ff ff ff 00 cmpl $0xffffff,-0x2c(%rbp)
2ab: 77 23 ja 2d0 <LzmaDec_DecodeReal+0x2d0>
2ad: c1 65 d4 08 shll $0x8,-0x2c(%rbp)
2b1: 8b 45 d0 mov -0x30(%rbp),%eax
2b4: c1 e0 08 shl $0x8,%eax
2b7: 89 c1 mov %eax,%ecx
2b9: 48 8b 45 d8 mov -0x28(%rbp),%rax
2bd: 48 8d 50 01 lea 0x1(%rax),%rdx
2c1: 48 89 55 d8 mov %rdx,-0x28(%rbp)
2c5: 0f b6 00 movzbl (%rax),%eax
2c8: 0f b6 c0 movzbl %al,%eax
2cb: 09 c8 or %ecx,%eax
2cd: 89 45 d0 mov %eax,-0x30(%rbp)
2d0: 8b 45 d4 mov -0x2c(%rbp),%eax
2d3: c1 e8 0b shr $0xb,%eax
2d6: 0f af 85 50 ff ff ff imul -0xb0(%rbp),%eax
2dd: 89 85 4c ff ff ff mov %eax,-0xb4(%rbp)
2e3: 8b 45 d0 mov -0x30(%rbp),%eax
2e6: 3b 85 4c ff ff ff cmp -0xb4(%rbp),%eax
2ec: 73 3c jae 32a <LzmaDec_DecodeReal+0x32a>
2ee: 8b 85 4c ff ff ff mov -0xb4(%rbp),%eax
2f4: 89 45 d4 mov %eax,-0x2c(%rbp)
2f7: 8b 45 c4 mov -0x3c(%rbp),%eax
2fa: 48 8d 14 00 lea (%rax,%rax,1),%rdx
2fe: 48 8b 45 c8 mov -0x38(%rbp),%rax
302: 48 01 d0 add %rdx,%rax
305: 8b 95 50 ff ff ff mov -0xb0(%rbp),%edx
30b: 89 d1 mov %edx,%ecx
30d: ba 00 08 00 00 mov $0x800,%edx
312: 2b 95 50 ff ff ff sub -0xb0(%rbp),%edx
318: c1 ea 05 shr $0x5,%edx
31b: 01 ca add %ecx,%edx
31d: 66 89 10 mov %dx,(%rax)
320: 8b 45 c4 mov -0x3c(%rbp),%eax
323: 01 c0 add %eax,%eax
325: 89 45 c4 mov %eax,-0x3c(%rbp)
328: eb 43 jmp 36d <LzmaDec_DecodeReal+0x36d>
32a: 8b 85 4c ff ff ff mov -0xb4(%rbp),%eax
330: 29 45 d4 sub %eax,-0x2c(%rbp)
333: 8b 85 4c ff ff ff mov -0xb4(%rbp),%eax
339: 29 45 d0 sub %eax,-0x30(%rbp)
33c: 8b 45 c4 mov -0x3c(%rbp),%eax
33f: 48 8d 14 00 lea (%rax,%rax,1),%rdx
343: 48 8b 45 c8 mov -0x38(%rbp),%rax
347: 48 01 d0 add %rdx,%rax
34a: 8b 95 50 ff ff ff mov -0xb0(%rbp),%edx
350: 89 d1 mov %edx,%ecx
352: 8b 95 50 ff ff ff mov -0xb0(%rbp),%edx
358: c1 ea 05 shr $0x5,%edx
35b: 29 d1 sub %edx,%ecx
35d: 89 ca mov %ecx,%edx
35f: 66 89 10 mov %dx,(%rax)
362: 8b 45 c4 mov -0x3c(%rbp),%eax
365: 01 c0 add %eax,%eax
367: 83 c0 01 add $0x1,%eax
36a: 89 45 c4 mov %eax,-0x3c(%rbp)
36d: 81 7d c4 ff 00 00 00 cmpl $0xff,-0x3c(%rbp)
374: 0f 86 10 ff ff ff jbe 28a <LzmaDec_DecodeReal+0x28a>
37a: e9 66 01 00 00 jmpq 4e5 <LzmaDec_DecodeReal+0x4e5>
37f: 48 8b 85 18 ff ff ff mov -0xe8(%rbp),%rax
386: 48 8b 50 18 mov 0x18(%rax),%rdx
38a: 8b 45 e8 mov -0x18(%rbp),%eax
38d: 2b 45 f8 sub -0x8(%rbp),%eax
390: 89 c1 mov %eax,%ecx
392: 8b 45 e8 mov -0x18(%rbp),%eax
395: 3b 45 f8 cmp -0x8(%rbp),%eax
398: 73 08 jae 3a2 <LzmaDec_DecodeReal+0x3a2>
39a: 8b 85 5c ff ff ff mov -0xa4(%rbp),%eax
3a0: eb 05 jmp 3a7 <LzmaDec_DecodeReal+0x3a7>
3a2: b8 00 00 00 00 mov $0x0,%eax
3a7: 01 c8 add %ecx,%eax
3a9: 89 c0 mov %eax,%eax
3ab: 48 01 d0 add %rdx,%rax
3ae: 0f b6 00 movzbl (%rax),%eax
3b1: 0f b6 c0 movzbl %al,%eax
3b4: 89 45 c0 mov %eax,-0x40(%rbp)
3b7: c7 45 bc 00 01 00 00 movl $0x100,-0x44(%rbp)
3be: c7 45 c4 01 00 00 00 movl $0x1,-0x3c(%rbp)
3c5: d1 65 c0 shll -0x40(%rbp)
3c8: 8b 45 c0 mov -0x40(%rbp),%eax
3cb: 23 45 bc and -0x44(%rbp),%eax
3ce: 89 85 48 ff ff ff mov %eax,-0xb8(%rbp)
3d4: 8b 55 bc mov -0x44(%rbp),%edx
3d7: 8b 85 48 ff ff ff mov -0xb8(%rbp),%eax
3dd: 48 01 c2 add %rax,%rdx
3e0: 8b 45 c4 mov -0x3c(%rbp),%eax
3e3: 48 01 d0 add %rdx,%rax
3e6: 48 8d 14 00 lea (%rax,%rax,1),%rdx
3ea: 48 8b 45 c8 mov -0x38(%rbp),%rax
3ee: 48 01 d0 add %rdx,%rax
3f1: 48 89 85 40 ff ff ff mov %rax,-0xc0(%rbp)
3f8: 48 8b 85 40 ff ff ff mov -0xc0(%rbp),%rax
3ff: 0f b7 00 movzwl (%rax),%eax
402: 0f b7 c0 movzwl %ax,%eax
405: 89 85 50 ff ff ff mov %eax,-0xb0(%rbp)
40b: 81 7d d4 ff ff ff 00 cmpl $0xffffff,-0x2c(%rbp)
412: 77 23 ja 437 <LzmaDec_DecodeReal+0x437>
414: c1 65 d4 08 shll $0x8,-0x2c(%rbp)
418: 8b 45 d0 mov -0x30(%rbp),%eax
41b: c1 e0 08 shl $0x8,%eax
41e: 89 c1 mov %eax,%ecx
420: 48 8b 45 d8 mov -0x28(%rbp),%rax
424: 48 8d 50 01 lea 0x1(%rax),%rdx
428: 48 89 55 d8 mov %rdx,-0x28(%rbp)
42c: 0f b6 00 movzbl (%rax),%eax
42f: 0f b6 c0 movzbl %al,%eax
432: 09 c8 or %ecx,%eax
434: 89 45 d0 mov %eax,-0x30(%rbp)
437: 8b 45 d4 mov -0x2c(%rbp),%eax
43a: c1 e8 0b shr $0xb,%eax
43d: 0f af 85 50 ff ff ff imul -0xb0(%rbp),%eax
444: 89 85 4c ff ff ff mov %eax,-0xb4(%rbp)
44a: 8b 45 d0 mov -0x30(%rbp),%eax
44d: 3b 85 4c ff ff ff cmp -0xb4(%rbp),%eax
453: 73 40 jae 495 <LzmaDec_DecodeReal+0x495>
455: 8b 85 4c ff ff ff mov -0xb4(%rbp),%eax
45b: 89 45 d4 mov %eax,-0x2c(%rbp)
45e: 8b 85 50 ff ff ff mov -0xb0(%rbp),%eax
464: 89 c2 mov %eax,%edx
466: b8 00 08 00 00 mov $0x800,%eax
46b: 2b 85 50 ff ff ff sub -0xb0(%rbp),%eax
471: c1 e8 05 shr $0x5,%eax
474: 01 c2 add %eax,%edx
476: 48 8b 85 40 ff ff ff mov -0xc0(%rbp),%rax
47d: 66 89 10 mov %dx,(%rax)
480: 8b 45 c4 mov -0x3c(%rbp),%eax
483: 01 c0 add %eax,%eax
485: 89 45 c4 mov %eax,-0x3c(%rbp)
488: 8b 85 48 ff ff ff mov -0xb8(%rbp),%eax
48e: f7 d0 not %eax
490: 21 45 bc and %eax,-0x44(%rbp)
493: eb 43 jmp 4d8 <LzmaDec_DecodeReal+0x4d8>
495: 8b 85 4c ff ff ff mov -0xb4(%rbp),%eax
49b: 29 45 d4 sub %eax,-0x2c(%rbp)
49e: 8b 85 4c ff ff ff mov -0xb4(%rbp),%eax
4a4: 29 45 d0 sub %eax,-0x30(%rbp)
4a7: 8b 85 50 ff ff ff mov -0xb0(%rbp),%eax
4ad: 89 c2 mov %eax,%edx
4af: 8b 85 50 ff ff ff mov -0xb0(%rbp),%eax
4b5: c1 e8 05 shr $0x5,%eax
4b8: 29 c2 sub %eax,%edx
4ba: 48 8b 85 40 ff ff ff mov -0xc0(%rbp),%rax
4c1: 66 89 10 mov %dx,(%rax)
4c4: 8b 45 c4 mov -0x3c(%rbp),%eax
4c7: 01 c0 add %eax,%eax
4c9: 83 c0 01 add $0x1,%eax
4cc: 89 45 c4 mov %eax,-0x3c(%rbp)
4cf: 8b 85 48 ff ff ff mov -0xb8(%rbp),%eax
4d5: 21 45 bc and %eax,-0x44(%rbp)
4d8: 81 7d c4 ff 00 00 00 cmpl $0xff,-0x3c(%rbp)
4df: 0f 86 e0 fe ff ff jbe 3c5 <LzmaDec_DecodeReal+0x3c5>
4e5: 8b 45 e8 mov -0x18(%rbp),%eax
4e8: 8d 50 01 lea 0x1(%rax),%edx
4eb: 89 55 e8 mov %edx,-0x18(%rbp)
4ee: 89 c2 mov %eax,%edx
4f0: 48 8b 85 60 ff ff ff mov -0xa0(%rbp),%rax
4f7: 48 01 d0 add %rdx,%rax
4fa: 8b 55 c4 mov -0x3c(%rbp),%edx
4fd: 88 10 mov %dl,(%rax)
4ff: 83 45 e4 01 addl $0x1,-0x1c(%rbp)
503: 48 ba 00 00 00 00 00 movabs $0x0,%rdx
50a: 00 00 00
50d: 8b 45 fc mov -0x4(%rbp),%eax
510: 0f b6 04 02 movzbl (%rdx,%rax,1),%eax
514: 0f b6 c0 movzbl %al,%eax
517: 89 45 fc mov %eax,-0x4(%rbp)
51a: e9 6d 10 00 00 jmpq 158c <LzmaDec_DecodeReal+0x158c>
51f: 8b 85 4c ff ff ff mov -0xb4(%rbp),%eax
525: 29 45 d4 sub %eax,-0x2c(%rbp)
528: 8b 85 4c ff ff ff mov -0xb4(%rbp),%eax
52e: 29 45 d0 sub %eax,-0x30(%rbp)
531: 8b 85 50 ff ff ff mov -0xb0(%rbp),%eax
537: 89 c2 mov %eax,%edx
539: 8b 85 50 ff ff ff mov -0xb0(%rbp),%eax
53f: c1 e8 05 shr $0x5,%eax
542: 29 c2 sub %eax,%edx
544: 48 8b 45 c8 mov -0x38(%rbp),%rax
548: 66 89 10 mov %dx,(%rax)
54b: 8b 45 fc mov -0x4(%rbp),%eax
54e: 48 05 c0 00 00 00 add $0xc0,%rax
554: 48 8d 14 00 lea (%rax,%rax,1),%rdx
558: 48 8b 85 78 ff ff ff mov -0x88(%rbp),%rax
55f: 48 01 d0 add %rdx,%rax
562: 48 89 45 c8 mov %rax,-0x38(%rbp)
566: 48 8b 45 c8 mov -0x38(%rbp),%rax
56a: 0f b7 00 movzwl (%rax),%eax
56d: 0f b7 c0 movzwl %ax,%eax
570: 89 85 50 ff ff ff mov %eax,-0xb0(%rbp)
576: 81 7d d4 ff ff ff 00 cmpl $0xffffff,-0x2c(%rbp)
57d: 77 23 ja 5a2 <LzmaDec_DecodeReal+0x5a2>
57f: c1 65 d4 08 shll $0x8,-0x2c(%rbp)
583: 8b 45 d0 mov -0x30(%rbp),%eax
586: c1 e0 08 shl $0x8,%eax
589: 89 c1 mov %eax,%ecx
58b: 48 8b 45 d8 mov -0x28(%rbp),%rax
58f: 48 8d 50 01 lea 0x1(%rax),%rdx
593: 48 89 55 d8 mov %rdx,-0x28(%rbp)
597: 0f b6 00 movzbl (%rax),%eax
59a: 0f b6 c0 movzbl %al,%eax
59d: 09 c8 or %ecx,%eax
59f: 89 45 d0 mov %eax,-0x30(%rbp)
5a2: 8b 45 d4 mov -0x2c(%rbp),%eax
5a5: c1 e8 0b shr $0xb,%eax
5a8: 0f af 85 50 ff ff ff imul -0xb0(%rbp),%eax
5af: 89 85 4c ff ff ff mov %eax,-0xb4(%rbp)
5b5: 8b 45 d0 mov -0x30(%rbp),%eax
5b8: 3b 85 4c ff ff ff cmp -0xb4(%rbp),%eax
5be: 73 42 jae 602 <LzmaDec_DecodeReal+0x602>
5c0: 8b 85 4c ff ff ff mov -0xb4(%rbp),%eax
5c6: 89 45 d4 mov %eax,-0x2c(%rbp)
5c9: 8b 85 50 ff ff ff mov -0xb0(%rbp),%eax
5cf: 89 c2 mov %eax,%edx
5d1: b8 00 08 00 00 mov $0x800,%eax
5d6: 2b 85 50 ff ff ff sub -0xb0(%rbp),%eax
5dc: c1 e8 05 shr $0x5,%eax
5df: 01 c2 add %eax,%edx
5e1: 48 8b 45 c8 mov -0x38(%rbp),%rax
5e5: 66 89 10 mov %dx,(%rax)
5e8: 83 45 fc 0c addl $0xc,-0x4(%rbp)
5ec: 48 8b 85 78 ff ff ff mov -0x88(%rbp),%rax
5f3: 48 05 64 06 00 00 add $0x664,%rax
5f9: 48 89 45 c8 mov %rax,-0x38(%rbp)
5fd: e9 3a 04 00 00 jmpq a3c <LzmaDec_DecodeReal+0xa3c>
602: 8b 85 4c ff ff ff mov -0xb4(%rbp),%eax
608: 29 45 d4 sub %eax,-0x2c(%rbp)
60b: 8b 85 4c ff ff ff mov -0xb4(%rbp),%eax
611: 29 45 d0 sub %eax,-0x30(%rbp)
614: 8b 85 50 ff ff ff mov -0xb0(%rbp),%eax
61a: 89 c2 mov %eax,%edx
61c: 8b 85 50 ff ff ff mov -0xb0(%rbp),%eax
622: c1 e8 05 shr $0x5,%eax
625: 29 c2 sub %eax,%edx
627: 48 8b 45 c8 mov -0x38(%rbp),%rax
62b: 66 89 10 mov %dx,(%rax)
62e: 83 bd 58 ff ff ff 00 cmpl $0x0,-0xa8(%rbp)
635: 75 10 jne 647 <LzmaDec_DecodeReal+0x647>
637: 83 7d e4 00 cmpl $0x0,-0x1c(%rbp)
63b: 75 0a jne 647 <LzmaDec_DecodeReal+0x647>
63d: b8 01 00 00 00 mov $0x1,%eax
642: e9 23 10 00 00 jmpq 166a <LzmaDec_DecodeReal+0x166a>
647: 8b 45 fc mov -0x4(%rbp),%eax
64a: 48 05 cc 00 00 00 add $0xcc,%rax
650: 48 8d 14 00 lea (%rax,%rax,1),%rdx
654: 48 8b 85 78 ff ff ff mov -0x88(%rbp),%rax
65b: 48 01 d0 add %rdx,%rax
65e: 48 89 45 c8 mov %rax,-0x38(%rbp)
662: 48 8b 45 c8 mov -0x38(%rbp),%rax
666: 0f b7 00 movzwl (%rax),%eax
669: 0f b7 c0 movzwl %ax,%eax
66c: 89 85 50 ff ff ff mov %eax,-0xb0(%rbp)
672: 81 7d d4 ff ff ff 00 cmpl $0xffffff,-0x2c(%rbp)
679: 77 23 ja 69e <LzmaDec_DecodeReal+0x69e>
67b: c1 65 d4 08 shll $0x8,-0x2c(%rbp)
67f: 8b 45 d0 mov -0x30(%rbp),%eax
682: c1 e0 08 shl $0x8,%eax
685: 89 c1 mov %eax,%ecx
687: 48 8b 45 d8 mov -0x28(%rbp),%rax
68b: 48 8d 50 01 lea 0x1(%rax),%rdx
68f: 48 89 55 d8 mov %rdx,-0x28(%rbp)
693: 0f b6 00 movzbl (%rax),%eax
696: 0f b6 c0 movzbl %al,%eax
699: 09 c8 or %ecx,%eax
69b: 89 45 d0 mov %eax,-0x30(%rbp)
69e: 8b 45 d4 mov -0x2c(%rbp),%eax
6a1: c1 e8 0b shr $0xb,%eax
6a4: 0f af 85 50 ff ff ff imul -0xb0(%rbp),%eax
6ab: 89 85 4c ff ff ff mov %eax,-0xb4(%rbp)
6b1: 8b 45 d0 mov -0x30(%rbp),%eax
6b4: 3b 85 4c ff ff ff cmp -0xb4(%rbp),%eax
6ba: 0f 83 67 01 00 00 jae 827 <LzmaDec_DecodeReal+0x827>
6c0: 8b 85 4c ff ff ff mov -0xb4(%rbp),%eax
6c6: 89 45 d4 mov %eax,-0x2c(%rbp)
6c9: 8b 85 50 ff ff ff mov -0xb0(%rbp),%eax
6cf: 89 c2 mov %eax,%edx
6d1: b8 00 08 00 00 mov $0x800,%eax
6d6: 2b 85 50 ff ff ff sub -0xb0(%rbp),%eax
6dc: c1 e8 05 shr $0x5,%eax
6df: 01 c2 add %eax,%edx
6e1: 48 8b 45 c8 mov -0x38(%rbp),%rax
6e5: 66 89 10 mov %dx,(%rax)
6e8: 8b 45 fc mov -0x4(%rbp),%eax
6eb: c1 e0 04 shl $0x4,%eax
6ee: 89 c2 mov %eax,%edx
6f0: 8b 85 54 ff ff ff mov -0xac(%rbp),%eax
6f6: 48 01 d0 add %rdx,%rax
6f9: 48 05 f0 00 00 00 add $0xf0,%rax
6ff: 48 8d 14 00 lea (%rax,%rax,1),%rdx
703: 48 8b 85 78 ff ff ff mov -0x88(%rbp),%rax
70a: 48 01 d0 add %rdx,%rax
70d: 48 89 45 c8 mov %rax,-0x38(%rbp)
711: 48 8b 45 c8 mov -0x38(%rbp),%rax
715: 0f b7 00 movzwl (%rax),%eax
718: 0f b7 c0 movzwl %ax,%eax
71b: 89 85 50 ff ff ff mov %eax,-0xb0(%rbp)
721: 81 7d d4 ff ff ff 00 cmpl $0xffffff,-0x2c(%rbp)
728: 77 23 ja 74d <LzmaDec_DecodeReal+0x74d>
72a: c1 65 d4 08 shll $0x8,-0x2c(%rbp)
72e: 8b 45 d0 mov -0x30(%rbp),%eax
731: c1 e0 08 shl $0x8,%eax
734: 89 c1 mov %eax,%ecx
736: 48 8b 45 d8 mov -0x28(%rbp),%rax
73a: 48 8d 50 01 lea 0x1(%rax),%rdx
73e: 48 89 55 d8 mov %rdx,-0x28(%rbp)
742: 0f b6 00 movzbl (%rax),%eax
745: 0f b6 c0 movzbl %al,%eax
748: 09 c8 or %ecx,%eax
74a: 89 45 d0 mov %eax,-0x30(%rbp)
74d: 8b 45 d4 mov -0x2c(%rbp),%eax
750: c1 e8 0b shr $0xb,%eax
753: 0f af 85 50 ff ff ff imul -0xb0(%rbp),%eax
75a: 89 85 4c ff ff ff mov %eax,-0xb4(%rbp)
760: 8b 45 d0 mov -0x30(%rbp),%eax
763: 3b 85 4c ff ff ff cmp -0xb4(%rbp),%eax
769: 0f 83 87 00 00 00 jae 7f6 <LzmaDec_DecodeReal+0x7f6>
76f: 8b 85 4c ff ff ff mov -0xb4(%rbp),%eax
775: 89 45 d4 mov %eax,-0x2c(%rbp)
778: 8b 85 50 ff ff ff mov -0xb0(%rbp),%eax
77e: 89 c2 mov %eax,%edx
780: b8 00 08 00 00 mov $0x800,%eax
785: 2b 85 50 ff ff ff sub -0xb0(%rbp),%eax
78b: c1 e8 05 shr $0x5,%eax
78e: 01 c2 add %eax,%edx
790: 48 8b 45 c8 mov -0x38(%rbp),%rax
794: 66 89 10 mov %dx,(%rax)
797: 8b 55 e8 mov -0x18(%rbp),%edx
79a: 48 8b 85 60 ff ff ff mov -0xa0(%rbp),%rax
7a1: 48 01 c2 add %rax,%rdx
7a4: 8b 45 e8 mov -0x18(%rbp),%eax
7a7: 2b 45 f8 sub -0x8(%rbp),%eax
7aa: 89 c1 mov %eax,%ecx
7ac: 8b 45 e8 mov -0x18(%rbp),%eax
7af: 3b 45 f8 cmp -0x8(%rbp),%eax
7b2: 73 08 jae 7bc <LzmaDec_DecodeReal+0x7bc>
7b4: 8b 85 5c ff ff ff mov -0xa4(%rbp),%eax
7ba: eb 05 jmp 7c1 <LzmaDec_DecodeReal+0x7c1>
7bc: b8 00 00 00 00 mov $0x0,%eax
7c1: 01 c8 add %ecx,%eax
7c3: 89 c1 mov %eax,%ecx
7c5: 48 8b 85 60 ff ff ff mov -0xa0(%rbp),%rax
7cc: 48 01 c8 add %rcx,%rax
7cf: 0f b6 00 movzbl (%rax),%eax
7d2: 88 02 mov %al,(%rdx)
7d4: 83 45 e8 01 addl $0x1,-0x18(%rbp)
7d8: 83 45 e4 01 addl $0x1,-0x1c(%rbp)
7dc: 83 7d fc 06 cmpl $0x6,-0x4(%rbp)
7e0: 77 07 ja 7e9 <LzmaDec_DecodeReal+0x7e9>
7e2: b8 09 00 00 00 mov $0x9,%eax
7e7: eb 05 jmp 7ee <LzmaDec_DecodeReal+0x7ee>
7e9: b8 0b 00 00 00 mov $0xb,%eax
7ee: 89 45 fc mov %eax,-0x4(%rbp)
7f1: e9 96 0d 00 00 jmpq 158c <LzmaDec_DecodeReal+0x158c>
7f6: 8b 85 4c ff ff ff mov -0xb4(%rbp),%eax
7fc: 29 45 d4 sub %eax,-0x2c(%rbp)
7ff: 8b 85 4c ff ff ff mov -0xb4(%rbp),%eax
805: 29 45 d0 sub %eax,-0x30(%rbp)
808: 8b 85 50 ff ff ff mov -0xb0(%rbp),%eax
80e: 89 c2 mov %eax,%edx
810: 8b 85 50 ff ff ff mov -0xb0(%rbp),%eax
816: c1 e8 05 shr $0x5,%eax
819: 29 c2 sub %eax,%edx
81b: 48 8b 45 c8 mov -0x38(%rbp),%rax
81f: 66 89 10 mov %dx,(%rax)
822: e9 ef 01 00 00 jmpq a16 <LzmaDec_DecodeReal+0xa16>
827: 8b 85 4c ff ff ff mov -0xb4(%rbp),%eax
82d: 29 45 d4 sub %eax,-0x2c(%rbp)
830: 8b 85 4c ff ff ff mov -0xb4(%rbp),%eax
836: 29 45 d0 sub %eax,-0x30(%rbp)
839: 8b 85 50 ff ff ff mov -0xb0(%rbp),%eax
83f: 89 c2 mov %eax,%edx
841: 8b 85 50 ff ff ff mov -0xb0(%rbp),%eax
847: c1 e8 05 shr $0x5,%eax
84a: 29 c2 sub %eax,%edx
84c: 48 8b 45 c8 mov -0x38(%rbp),%rax
850: 66 89 10 mov %dx,(%rax)
853: 8b 45 fc mov -0x4(%rbp),%eax
856: 48 05 d8 00 00 00 add $0xd8,%rax
85c: 48 8d 14 00 lea (%rax,%rax,1),%rdx
860: 48 8b 85 78 ff ff ff mov -0x88(%rbp),%rax
867: 48 01 d0 add %rdx,%rax
86a: 48 89 45 c8 mov %rax,-0x38(%rbp)
86e: 48 8b 45 c8 mov -0x38(%rbp),%rax
872: 0f b7 00 movzwl (%rax),%eax
875: 0f b7 c0 movzwl %ax,%eax
878: 89 85 50 ff ff ff mov %eax,-0xb0(%rbp)
87e: 81 7d d4 ff ff ff 00 cmpl $0xffffff,-0x2c(%rbp)
885: 77 23 ja 8aa <LzmaDec_DecodeReal+0x8aa>
887: c1 65 d4 08 shll $0x8,-0x2c(%rbp)
88b: 8b 45 d0 mov -0x30(%rbp),%eax
88e: c1 e0 08 shl $0x8,%eax
891: 89 c1 mov %eax,%ecx
893: 48 8b 45 d8 mov -0x28(%rbp),%rax
897: 48 8d 50 01 lea 0x1(%rax),%rdx
89b: 48 89 55 d8 mov %rdx,-0x28(%rbp)
89f: 0f b6 00 movzbl (%rax),%eax
8a2: 0f b6 c0 movzbl %al,%eax
8a5: 09 c8 or %ecx,%eax
8a7: 89 45 d0 mov %eax,-0x30(%rbp)
8aa: 8b 45 d4 mov -0x2c(%rbp),%eax
8ad: c1 e8 0b shr $0xb,%eax
8b0: 0f af 85 50 ff ff ff imul -0xb0(%rbp),%eax
8b7: 89 85 4c ff ff ff mov %eax,-0xb4(%rbp)
8bd: 8b 45 d0 mov -0x30(%rbp),%eax
8c0: 3b 85 4c ff ff ff cmp -0xb4(%rbp),%eax
8c6: 73 33 jae 8fb <LzmaDec_DecodeReal+0x8fb>
8c8: 8b 85 4c ff ff ff mov -0xb4(%rbp),%eax
8ce: 89 45 d4 mov %eax,-0x2c(%rbp)
8d1: 8b 85 50 ff ff ff mov -0xb0(%rbp),%eax
8d7: 89 c2 mov %eax,%edx
8d9: b8 00 08 00 00 mov $0x800,%eax
8de: 2b 85 50 ff ff ff sub -0xb0(%rbp),%eax
8e4: c1 e8 05 shr $0x5,%eax
8e7: 01 c2 add %eax,%edx
8e9: 48 8b 45 c8 mov -0x38(%rbp),%rax
8ed: 66 89 10 mov %dx,(%rax)
8f0: 8b 45 f4 mov -0xc(%rbp),%eax
8f3: 89 45 b8 mov %eax,-0x48(%rbp)
8f6: e9 0f 01 00 00 jmpq a0a <LzmaDec_DecodeReal+0xa0a>
8fb: 8b 85 4c ff ff ff mov -0xb4(%rbp),%eax
901: 29 45 d4 sub %eax,-0x2c(%rbp)
904: 8b 85 4c ff ff ff mov -0xb4(%rbp),%eax
90a: 29 45 d0 sub %eax,-0x30(%rbp)
90d: 8b 85 50 ff ff ff mov -0xb0(%rbp),%eax
913: 89 c2 mov %eax,%edx
915: 8b 85 50 ff ff ff mov -0xb0(%rbp),%eax
91b: c1 e8 05 shr $0x5,%eax
91e: 29 c2 sub %eax,%edx
920: 48 8b 45 c8 mov -0x38(%rbp),%rax
924: 66 89 10 mov %dx,(%rax)
927: 8b 45 fc mov -0x4(%rbp),%eax
92a: 48 05 e4 00 00 00 add $0xe4,%rax
930: 48 8d 14 00 lea (%rax,%rax,1),%rdx
934: 48 8b 85 78 ff ff ff mov -0x88(%rbp),%rax
93b: 48 01 d0 add %rdx,%rax
93e: 48 89 45 c8 mov %rax,-0x38(%rbp)
942: 48 8b 45 c8 mov -0x38(%rbp),%rax
946: 0f b7 00 movzwl (%rax),%eax
949: 0f b7 c0 movzwl %ax,%eax
94c: 89 85 50 ff ff ff mov %eax,-0xb0(%rbp)
952: 81 7d d4 ff ff ff 00 cmpl $0xffffff,-0x2c(%rbp)
959: 77 23 ja 97e <LzmaDec_DecodeReal+0x97e>
95b: c1 65 d4 08 shll $0x8,-0x2c(%rbp)
95f: 8b 45 d0 mov -0x30(%rbp),%eax
962: c1 e0 08 shl $0x8,%eax
965: 89 c1 mov %eax,%ecx
967: 48 8b 45 d8 mov -0x28(%rbp),%rax
96b: 48 8d 50 01 lea 0x1(%rax),%rdx
96f: 48 89 55 d8 mov %rdx,-0x28(%rbp)
973: 0f b6 00 movzbl (%rax),%eax
976: 0f b6 c0 movzbl %al,%eax
979: 09 c8 or %ecx,%eax
97b: 89 45 d0 mov %eax,-0x30(%rbp)
97e: 8b 45 d4 mov -0x2c(%rbp),%eax
981: c1 e8 0b shr $0xb,%eax
984: 0f af 85 50 ff ff ff imul -0xb0(%rbp),%eax
98b: 89 85 4c ff ff ff mov %eax,-0xb4(%rbp)
991: 8b 45 d0 mov -0x30(%rbp),%eax
994: 3b 85 4c ff ff ff cmp -0xb4(%rbp),%eax
99a: 73 30 jae 9cc <LzmaDec_DecodeReal+0x9cc>
99c: 8b 85 4c ff ff ff mov -0xb4(%rbp),%eax
9a2: 89 45 d4 mov %eax,-0x2c(%rbp)
9a5: 8b 85 50 ff ff ff mov -0xb0(%rbp),%eax
9ab: 89 c2 mov %eax,%edx
9ad: b8 00 08 00 00 mov $0x800,%eax
9b2: 2b 85 50 ff ff ff sub -0xb0(%rbp),%eax
9b8: c1 e8 05 shr $0x5,%eax
9bb: 01 c2 add %eax,%edx
9bd: 48 8b 45 c8 mov -0x38(%rbp),%rax
9c1: 66 89 10 mov %dx,(%rax)
9c4: 8b 45 f0 mov -0x10(%rbp),%eax
9c7: 89 45 b8 mov %eax,-0x48(%rbp)
9ca: eb 38 jmp a04 <LzmaDec_DecodeReal+0xa04>
9cc: 8b 85 4c ff ff ff mov -0xb4(%rbp),%eax
9d2: 29 45 d4 sub %eax,-0x2c(%rbp)
9d5: 8b 85 4c ff ff ff mov -0xb4(%rbp),%eax
9db: 29 45 d0 sub %eax,-0x30(%rbp)
9de: 8b 85 50 ff ff ff mov -0xb0(%rbp),%eax
9e4: 89 c2 mov %eax,%edx
9e6: 8b 85 50 ff ff ff mov -0xb0(%rbp),%eax
9ec: c1 e8 05 shr $0x5,%eax
9ef: 29 c2 sub %eax,%edx
9f1: 48 8b 45 c8 mov -0x38(%rbp),%rax
9f5: 66 89 10 mov %dx,(%rax)
9f8: 8b 45 ec mov -0x14(%rbp),%eax
9fb: 89 45 b8 mov %eax,-0x48(%rbp)
9fe: 8b 45 f0 mov -0x10(%rbp),%eax
a01: 89 45 ec mov %eax,-0x14(%rbp)
a04: 8b 45 f4 mov -0xc(%rbp),%eax
a07: 89 45 f0 mov %eax,-0x10(%rbp)
a0a: 8b 45 f8 mov -0x8(%rbp),%eax
a0d: 89 45 f4 mov %eax,-0xc(%rbp)
a10: 8b 45 b8 mov -0x48(%rbp),%eax
a13: 89 45 f8 mov %eax,-0x8(%rbp)
a16: 83 7d fc 06 cmpl $0x6,-0x4(%rbp)
a1a: 77 07 ja a23 <LzmaDec_DecodeReal+0xa23>
a1c: b8 08 00 00 00 mov $0x8,%eax
a21: eb 05 jmp a28 <LzmaDec_DecodeReal+0xa28>
a23: b8 0b 00 00 00 mov $0xb,%eax
a28: 89 45 fc mov %eax,-0x4(%rbp)
a2b: 48 8b 85 78 ff ff ff mov -0x88(%rbp),%rax
a32: 48 05 68 0a 00 00 add $0xa68,%rax
a38: 48 89 45 c8 mov %rax,-0x38(%rbp)
a3c: 48 8b 45 c8 mov -0x38(%rbp),%rax
a40: 48 89 45 a8 mov %rax,-0x58(%rbp)
a44: 48 8b 45 a8 mov -0x58(%rbp),%rax
a48: 0f b7 00 movzwl (%rax),%eax
a4b: 0f b7 c0 movzwl %ax,%eax
a4e: 89 85 50 ff ff ff mov %eax,-0xb0(%rbp)
a54: 81 7d d4 ff ff ff 00 cmpl $0xffffff,-0x2c(%rbp)
a5b: 77 23 ja a80 <LzmaDec_DecodeReal+0xa80>
a5d: c1 65 d4 08 shll $0x8,-0x2c(%rbp)
a61: 8b 45 d0 mov -0x30(%rbp),%eax
a64: c1 e0 08 shl $0x8,%eax
a67: 89 c1 mov %eax,%ecx
a69: 48 8b 45 d8 mov -0x28(%rbp),%rax
a6d: 48 8d 50 01 lea 0x1(%rax),%rdx
a71: 48 89 55 d8 mov %rdx,-0x28(%rbp)
a75: 0f b6 00 movzbl (%rax),%eax
a78: 0f b6 c0 movzbl %al,%eax
a7b: 09 c8 or %ecx,%eax
a7d: 89 45 d0 mov %eax,-0x30(%rbp)
a80: 8b 45 d4 mov -0x2c(%rbp),%eax
a83: c1 e8 0b shr $0xb,%eax
a86: 0f af 85 50 ff ff ff imul -0xb0(%rbp),%eax
a8d: 89 85 4c ff ff ff mov %eax,-0xb4(%rbp)
a93: 8b 45 d0 mov -0x30(%rbp),%eax
a96: 3b 85 4c ff ff ff cmp -0xb4(%rbp),%eax
a9c: 73 59 jae af7 <LzmaDec_DecodeReal+0xaf7>
a9e: 8b 85 4c ff ff ff mov -0xb4(%rbp),%eax
aa4: 89 45 d4 mov %eax,-0x2c(%rbp)
aa7: 8b 85 50 ff ff ff mov -0xb0(%rbp),%eax
aad: 89 c2 mov %eax,%edx
aaf: b8 00 08 00 00 mov $0x800,%eax
ab4: 2b 85 50 ff ff ff sub -0xb0(%rbp),%eax
aba: c1 e8 05 shr $0x5,%eax
abd: 01 c2 add %eax,%edx
abf: 48 8b 45 a8 mov -0x58(%rbp),%rax
ac3: 66 89 10 mov %dx,(%rax)
ac6: 8b 85 54 ff ff ff mov -0xac(%rbp),%eax
acc: c1 e0 03 shl $0x3,%eax
acf: 89 c0 mov %eax,%eax
ad1: 48 83 c0 02 add $0x2,%rax
ad5: 48 8d 14 00 lea (%rax,%rax,1),%rdx
ad9: 48 8b 45 c8 mov -0x38(%rbp),%rax
add: 48 01 d0 add %rdx,%rax
ae0: 48 89 45 a8 mov %rax,-0x58(%rbp)
ae4: c7 45 b0 00 00 00 00 movl $0x0,-0x50(%rbp)
aeb: c7 45 b4 08 00 00 00 movl $0x8,-0x4c(%rbp)
af2: e9 32 01 00 00 jmpq c29 <LzmaDec_DecodeReal+0xc29>
af7: 8b 85 4c ff ff ff mov -0xb4(%rbp),%eax
afd: 29 45 d4 sub %eax,-0x2c(%rbp)
b00: 8b 85 4c ff ff ff mov -0xb4(%rbp),%eax
b06: 29 45 d0 sub %eax,-0x30(%rbp)
b09: 8b 85 50 ff ff ff mov -0xb0(%rbp),%eax
b0f: 89 c2 mov %eax,%edx
b11: 8b 85 50 ff ff ff mov -0xb0(%rbp),%eax
b17: c1 e8 05 shr $0x5,%eax
b1a: 29 c2 sub %eax,%edx
b1c: 48 8b 45 a8 mov -0x58(%rbp),%rax
b20: 66 89 10 mov %dx,(%rax)
b23: 48 8b 45 c8 mov -0x38(%rbp),%rax
b27: 48 83 c0 02 add $0x2,%rax
b2b: 48 89 45 a8 mov %rax,-0x58(%rbp)
b2f: 48 8b 45 a8 mov -0x58(%rbp),%rax
b33: 0f b7 00 movzwl (%rax),%eax
b36: 0f b7 c0 movzwl %ax,%eax
b39: 89 85 50 ff ff ff mov %eax,-0xb0(%rbp)
b3f: 81 7d d4 ff ff ff 00 cmpl $0xffffff,-0x2c(%rbp)
b46: 77 23 ja b6b <LzmaDec_DecodeReal+0xb6b>
b48: c1 65 d4 08 shll $0x8,-0x2c(%rbp)
b4c: 8b 45 d0 mov -0x30(%rbp),%eax
b4f: c1 e0 08 shl $0x8,%eax
b52: 89 c1 mov %eax,%ecx
b54: 48 8b 45 d8 mov -0x28(%rbp),%rax
b58: 48 8d 50 01 lea 0x1(%rax),%rdx
b5c: 48 89 55 d8 mov %rdx,-0x28(%rbp)
b60: 0f b6 00 movzbl (%rax),%eax
b63: 0f b6 c0 movzbl %al,%eax
b66: 09 c8 or %ecx,%eax
b68: 89 45 d0 mov %eax,-0x30(%rbp)
b6b: 8b 45 d4 mov -0x2c(%rbp),%eax
b6e: c1 e8 0b shr $0xb,%eax
b71: 0f af 85 50 ff ff ff imul -0xb0(%rbp),%eax
b78: 89 85 4c ff ff ff mov %eax,-0xb4(%rbp)
b7e: 8b 45 d0 mov -0x30(%rbp),%eax
b81: 3b 85 4c ff ff ff cmp -0xb4(%rbp),%eax
b87: 73 58 jae be1 <LzmaDec_DecodeReal+0xbe1>
b89: 8b 85 4c ff ff ff mov -0xb4(%rbp),%eax
b8f: 89 45 d4 mov %eax,-0x2c(%rbp)
b92: 8b 85 50 ff ff ff mov -0xb0(%rbp),%eax
b98: 89 c2 mov %eax,%edx
b9a: b8 00 08 00 00 mov $0x800,%eax
b9f: 2b 85 50 ff ff ff sub -0xb0(%rbp),%eax
ba5: c1 e8 05 shr $0x5,%eax
ba8: 01 c2 add %eax,%edx
baa: 48 8b 45 a8 mov -0x58(%rbp),%rax
bae: 66 89 10 mov %dx,(%rax)
bb1: 8b 85 54 ff ff ff mov -0xac(%rbp),%eax
bb7: c1 e0 03 shl $0x3,%eax
bba: 89 c0 mov %eax,%eax
bbc: 48 05 82 00 00 00 add $0x82,%rax
bc2: 48 8d 14 00 lea (%rax,%rax,1),%rdx
bc6: 48 8b 45 c8 mov -0x38(%rbp),%rax
bca: 48 01 d0 add %rdx,%rax
bcd: 48 89 45 a8 mov %rax,-0x58(%rbp)
bd1: c7 45 b0 08 00 00 00 movl $0x8,-0x50(%rbp)
bd8: c7 45 b4 08 00 00 00 movl $0x8,-0x4c(%rbp)
bdf: eb 48 jmp c29 <LzmaDec_DecodeReal+0xc29>
be1: 8b 85 4c ff ff ff mov -0xb4(%rbp),%eax
be7: 29 45 d4 sub %eax,-0x2c(%rbp)
bea: 8b 85 4c ff ff ff mov -0xb4(%rbp),%eax
bf0: 29 45 d0 sub %eax,-0x30(%rbp)
bf3: 8b 85 50 ff ff ff mov -0xb0(%rbp),%eax
bf9: 89 c2 mov %eax,%edx
bfb: 8b 85 50 ff ff ff mov -0xb0(%rbp),%eax
c01: c1 e8 05 shr $0x5,%eax
c04: 29 c2 sub %eax,%edx
c06: 48 8b 45 a8 mov -0x58(%rbp),%rax
c0a: 66 89 10 mov %dx,(%rax)
c0d: 48 8b 45 c8 mov -0x38(%rbp),%rax
c11: 48 05 04 02 00 00 add $0x204,%rax
c17: 48 89 45 a8 mov %rax,-0x58(%rbp)
c1b: c7 45 b0 10 00 00 00 movl $0x10,-0x50(%rbp)
c22: c7 45 b4 00 01 00 00 movl $0x100,-0x4c(%rbp)
c29: c7 45 e0 01 00 00 00 movl $0x1,-0x20(%rbp)
c30: 8b 45 e0 mov -0x20(%rbp),%eax
c33: 48 8d 14 00 lea (%rax,%rax,1),%rdx
c37: 48 8b 45 a8 mov -0x58(%rbp),%rax
c3b: 48 01 d0 add %rdx,%rax
c3e: 0f b7 00 movzwl (%rax),%eax
c41: 0f b7 c0 movzwl %ax,%eax
c44: 89 85 50 ff ff ff mov %eax,-0xb0(%rbp)
c4a: 81 7d d4 ff ff ff 00 cmpl $0xffffff,-0x2c(%rbp)
c51: 77 23 ja c76 <LzmaDec_DecodeReal+0xc76>
c53: c1 65 d4 08 shll $0x8,-0x2c(%rbp)
c57: 8b 45 d0 mov -0x30(%rbp),%eax
c5a: c1 e0 08 shl $0x8,%eax
c5d: 89 c1 mov %eax,%ecx
c5f: 48 8b 45 d8 mov -0x28(%rbp),%rax
c63: 48 8d 50 01 lea 0x1(%rax),%rdx
c67: 48 89 55 d8 mov %rdx,-0x28(%rbp)
c6b: 0f b6 00 movzbl (%rax),%eax
c6e: 0f b6 c0 movzbl %al,%eax
c71: 09 c8 or %ecx,%eax
c73: 89 45 d0 mov %eax,-0x30(%rbp)
c76: 8b 45 d4 mov -0x2c(%rbp),%eax
c79: c1 e8 0b shr $0xb,%eax
c7c: 0f af 85 50 ff ff ff imul -0xb0(%rbp),%eax
c83: 89 85 4c ff ff ff mov %eax,-0xb4(%rbp)
c89: 8b 45 d0 mov -0x30(%rbp),%eax
c8c: 3b 85 4c ff ff ff cmp -0xb4(%rbp),%eax
c92: 73 3c jae cd0 <LzmaDec_DecodeReal+0xcd0>
c94: 8b 85 4c ff ff ff mov -0xb4(%rbp),%eax
c9a: 89 45 d4 mov %eax,-0x2c(%rbp)
c9d: 8b 45 e0 mov -0x20(%rbp),%eax
ca0: 48 8d 14 00 lea (%rax,%rax,1),%rdx
ca4: 48 8b 45 a8 mov -0x58(%rbp),%rax
ca8: 48 01 d0 add %rdx,%rax
cab: 8b 95 50 ff ff ff mov -0xb0(%rbp),%edx
cb1: 89 d1 mov %edx,%ecx
cb3: ba 00 08 00 00 mov $0x800,%edx
cb8: 2b 95 50 ff ff ff sub -0xb0(%rbp),%edx
cbe: c1 ea 05 shr $0x5,%edx
cc1: 01 ca add %ecx,%edx
cc3: 66 89 10 mov %dx,(%rax)
cc6: 8b 45 e0 mov -0x20(%rbp),%eax
cc9: 01 c0 add %eax,%eax
ccb: 89 45 e0 mov %eax,-0x20(%rbp)
cce: eb 43 jmp d13 <LzmaDec_DecodeReal+0xd13>
cd0: 8b 85 4c ff ff ff mov -0xb4(%rbp),%eax
cd6: 29 45 d4 sub %eax,-0x2c(%rbp)
cd9: 8b 85 4c ff ff ff mov -0xb4(%rbp),%eax
cdf: 29 45 d0 sub %eax,-0x30(%rbp)
ce2: 8b 45 e0 mov -0x20(%rbp),%eax
ce5: 48 8d 14 00 lea (%rax,%rax,1),%rdx
ce9: 48 8b 45 a8 mov -0x58(%rbp),%rax
ced: 48 01 d0 add %rdx,%rax
cf0: 8b 95 50 ff ff ff mov -0xb0(%rbp),%edx
cf6: 89 d1 mov %edx,%ecx
cf8: 8b 95 50 ff ff ff mov -0xb0(%rbp),%edx
cfe: c1 ea 05 shr $0x5,%edx
d01: 29 d1 sub %edx,%ecx
d03: 89 ca mov %ecx,%edx
d05: 66 89 10 mov %dx,(%rax)
d08: 8b 45 e0 mov -0x20(%rbp),%eax
d0b: 01 c0 add %eax,%eax
d0d: 83 c0 01 add $0x1,%eax
d10: 89 45 e0 mov %eax,-0x20(%rbp)
d13: 8b 45 e0 mov -0x20(%rbp),%eax
d16: 3b 45 b4 cmp -0x4c(%rbp),%eax
d19: 0f 82 11 ff ff ff jb c30 <LzmaDec_DecodeReal+0xc30>
d1f: 8b 45 b4 mov -0x4c(%rbp),%eax
d22: 29 45 e0 sub %eax,-0x20(%rbp)
d25: 8b 45 b0 mov -0x50(%rbp),%eax
d28: 01 45 e0 add %eax,-0x20(%rbp)
d2b: 83 7d fc 0b cmpl $0xb,-0x4(%rbp)
d2f: 0f 86 35 07 00 00 jbe 146a <LzmaDec_DecodeReal+0x146a>
d35: b8 03 00 00 00 mov $0x3,%eax
d3a: 83 7d e0 03 cmpl $0x3,-0x20(%rbp)
d3e: 0f 46 45 e0 cmovbe -0x20(%rbp),%eax
d42: c1 e0 06 shl $0x6,%eax
d45: 89 c0 mov %eax,%eax
d47: 48 05 b0 01 00 00 add $0x1b0,%rax
d4d: 48 8d 14 00 lea (%rax,%rax,1),%rdx
d51: 48 8b 85 78 ff ff ff mov -0x88(%rbp),%rax
d58: 48 01 d0 add %rdx,%rax
d5b: 48 89 45 c8 mov %rax,-0x38(%rbp)
d5f: c7 45 a4 01 00 00 00 movl $0x1,-0x5c(%rbp)
d66: 8b 45 a4 mov -0x5c(%rbp),%eax
d69: 48 8d 14 00 lea (%rax,%rax,1),%rdx
d6d: 48 8b 45 c8 mov -0x38(%rbp),%rax
d71: 48 01 d0 add %rdx,%rax
d74: 0f b7 00 movzwl (%rax),%eax
d77: 0f b7 c0 movzwl %ax,%eax
d7a: 89 85 50 ff ff ff mov %eax,-0xb0(%rbp)
d80: 81 7d d4 ff ff ff 00 cmpl $0xffffff,-0x2c(%rbp)
d87: 77 23 ja dac <LzmaDec_DecodeReal+0xdac>
d89: c1 65 d4 08 shll $0x8,-0x2c(%rbp)
d8d: 8b 45 d0 mov -0x30(%rbp),%eax
d90: c1 e0 08 shl $0x8,%eax
d93: 89 c1 mov %eax,%ecx
d95: 48 8b 45 d8 mov -0x28(%rbp),%rax
d99: 48 8d 50 01 lea 0x1(%rax),%rdx
d9d: 48 89 55 d8 mov %rdx,-0x28(%rbp)
da1: 0f b6 00 movzbl (%rax),%eax
da4: 0f b6 c0 movzbl %al,%eax
da7: 09 c8 or %ecx,%eax
da9: 89 45 d0 mov %eax,-0x30(%rbp)
dac: 8b 45 d4 mov -0x2c(%rbp),%eax
daf: c1 e8 0b shr $0xb,%eax
db2: 0f af 85 50 ff ff ff imul -0xb0(%rbp),%eax
db9: 89 85 4c ff ff ff mov %eax,-0xb4(%rbp)
dbf: 8b 45 d0 mov -0x30(%rbp),%eax
dc2: 3b 85 4c ff ff ff cmp -0xb4(%rbp),%eax
dc8: 73 3c jae e06 <LzmaDec_DecodeReal+0xe06>
dca: 8b 85 4c ff ff ff mov -0xb4(%rbp),%eax
dd0: 89 45 d4 mov %eax,-0x2c(%rbp)
dd3: 8b 45 a4 mov -0x5c(%rbp),%eax
dd6: 48 8d 14 00 lea (%rax,%rax,1),%rdx
dda: 48 8b 45 c8 mov -0x38(%rbp),%rax
dde: 48 01 d0 add %rdx,%rax
de1: 8b 95 50 ff ff ff mov -0xb0(%rbp),%edx
de7: 89 d1 mov %edx,%ecx
de9: ba 00 08 00 00 mov $0x800,%edx
dee: 2b 95 50 ff ff ff sub -0xb0(%rbp),%edx
df4: c1 ea 05 shr $0x5,%edx
df7: 01 ca add %ecx,%edx
df9: 66 89 10 mov %dx,(%rax)
dfc: 8b 45 a4 mov -0x5c(%rbp),%eax
dff: 01 c0 add %eax,%eax
e01: 89 45 a4 mov %eax,-0x5c(%rbp)
e04: eb 43 jmp e49 <LzmaDec_DecodeReal+0xe49>
e06: 8b 85 4c ff ff ff mov -0xb4(%rbp),%eax
e0c: 29 45 d4 sub %eax,-0x2c(%rbp)
e0f: 8b 85 4c ff ff ff mov -0xb4(%rbp),%eax
e15: 29 45 d0 sub %eax,-0x30(%rbp)
e18: 8b 45 a4 mov -0x5c(%rbp),%eax
e1b: 48 8d 14 00 lea (%rax,%rax,1),%rdx
e1f: 48 8b 45 c8 mov -0x38(%rbp),%rax
e23: 48 01 d0 add %rdx,%rax
e26: 8b 95 50 ff ff ff mov -0xb0(%rbp),%edx
e2c: 89 d1 mov %edx,%ecx
e2e: 8b 95 50 ff ff ff mov -0xb0(%rbp),%edx
e34: c1 ea 05 shr $0x5,%edx
e37: 29 d1 sub %edx,%ecx
e39: 89 ca mov %ecx,%edx
e3b: 66 89 10 mov %dx,(%rax)
e3e: 8b 45 a4 mov -0x5c(%rbp),%eax
e41: 01 c0 add %eax,%eax
e43: 83 c0 01 add $0x1,%eax
e46: 89 45 a4 mov %eax,-0x5c(%rbp)
e49: 83 7d a4 3f cmpl $0x3f,-0x5c(%rbp)
e4d: 0f 86 13 ff ff ff jbe d66 <LzmaDec_DecodeReal+0xd66>
e53: 83 6d a4 40 subl $0x40,-0x5c(%rbp)
e57: 83 7d a4 03 cmpl $0x3,-0x5c(%rbp)
e5b: 0f 86 a9 05 00 00 jbe 140a <LzmaDec_DecodeReal+0x140a>
e61: 8b 45 a4 mov -0x5c(%rbp),%eax
e64: 89 85 3c ff ff ff mov %eax,-0xc4(%rbp)
e6a: 8b 45 a4 mov -0x5c(%rbp),%eax
e6d: d1 e8 shr %eax
e6f: 83 e8 01 sub $0x1,%eax
e72: 89 45 a0 mov %eax,-0x60(%rbp)
e75: 8b 45 a4 mov -0x5c(%rbp),%eax
e78: 83 e0 01 and $0x1,%eax
e7b: 83 c8 02 or $0x2,%eax
e7e: 89 45 a4 mov %eax,-0x5c(%rbp)
e81: 83 bd 3c ff ff ff 0d cmpl $0xd,-0xc4(%rbp)
e88: 0f 87 3f 01 00 00 ja fcd <LzmaDec_DecodeReal+0xfcd>
e8e: 8b 45 a0 mov -0x60(%rbp),%eax
e91: 89 c1 mov %eax,%ecx
e93: d3 65 a4 shll %cl,-0x5c(%rbp)
e96: 8b 55 a4 mov -0x5c(%rbp),%edx
e99: 8b 85 3c ff ff ff mov -0xc4(%rbp),%eax
e9f: 48 29 c2 sub %rax,%rdx
ea2: 48 89 d0 mov %rdx,%rax
ea5: 48 05 b0 02 00 00 add $0x2b0,%rax
eab: 48 01 c0 add %rax,%rax
eae: 48 8d 50 fe lea -0x2(%rax),%rdx
eb2: 48 8b 85 78 ff ff ff mov -0x88(%rbp),%rax
eb9: 48 01 d0 add %rdx,%rax
ebc: 48 89 45 c8 mov %rax,-0x38(%rbp)
ec0: c7 45 9c 01 00 00 00 movl $0x1,-0x64(%rbp)
ec7: c7 45 98 01 00 00 00 movl $0x1,-0x68(%rbp)
ece: 8b 45 98 mov -0x68(%rbp),%eax
ed1: 48 8d 14 00 lea (%rax,%rax,1),%rdx
ed5: 48 8b 45 c8 mov -0x38(%rbp),%rax
ed9: 48 01 d0 add %rdx,%rax
edc: 0f b7 00 movzwl (%rax),%eax
edf: 0f b7 c0 movzwl %ax,%eax
ee2: 89 85 50 ff ff ff mov %eax,-0xb0(%rbp)
ee8: 81 7d d4 ff ff ff 00 cmpl $0xffffff,-0x2c(%rbp)
eef: 77 23 ja f14 <LzmaDec_DecodeReal+0xf14>
ef1: c1 65 d4 08 shll $0x8,-0x2c(%rbp)
ef5: 8b 45 d0 mov -0x30(%rbp),%eax
ef8: c1 e0 08 shl $0x8,%eax
efb: 89 c1 mov %eax,%ecx
efd: 48 8b 45 d8 mov -0x28(%rbp),%rax
f01: 48 8d 50 01 lea 0x1(%rax),%rdx
f05: 48 89 55 d8 mov %rdx,-0x28(%rbp)
f09: 0f b6 00 movzbl (%rax),%eax
f0c: 0f b6 c0 movzbl %al,%eax
f0f: 09 c8 or %ecx,%eax
f11: 89 45 d0 mov %eax,-0x30(%rbp)
f14: 8b 45 d4 mov -0x2c(%rbp),%eax
f17: c1 e8 0b shr $0xb,%eax
f1a: 0f af 85 50 ff ff ff imul -0xb0(%rbp),%eax
f21: 89 85 4c ff ff ff mov %eax,-0xb4(%rbp)
f27: 8b 45 d0 mov -0x30(%rbp),%eax
f2a: 3b 85 4c ff ff ff cmp -0xb4(%rbp),%eax
f30: 73 3c jae f6e <LzmaDec_DecodeReal+0xf6e>
f32: 8b 85 4c ff ff ff mov -0xb4(%rbp),%eax
f38: 89 45 d4 mov %eax,-0x2c(%rbp)
f3b: 8b 45 98 mov -0x68(%rbp),%eax
f3e: 48 8d 14 00 lea (%rax,%rax,1),%rdx
f42: 48 8b 45 c8 mov -0x38(%rbp),%rax
f46: 48 01 d0 add %rdx,%rax
f49: 8b 95 50 ff ff ff mov -0xb0(%rbp),%edx
f4f: 89 d1 mov %edx,%ecx
f51: ba 00 08 00 00 mov $0x800,%edx
f56: 2b 95 50 ff ff ff sub -0xb0(%rbp),%edx
f5c: c1 ea 05 shr $0x5,%edx
f5f: 01 ca add %ecx,%edx
f61: 66 89 10 mov %dx,(%rax)
f64: 8b 45 98 mov -0x68(%rbp),%eax
f67: 01 c0 add %eax,%eax
f69: 89 45 98 mov %eax,-0x68(%rbp)
f6c: eb 49 jmp fb7 <LzmaDec_DecodeReal+0xfb7>
f6e: 8b 85 4c ff ff ff mov -0xb4(%rbp),%eax
f74: 29 45 d4 sub %eax,-0x2c(%rbp)
f77: 8b 85 4c ff ff ff mov -0xb4(%rbp),%eax
f7d: 29 45 d0 sub %eax,-0x30(%rbp)
f80: 8b 45 98 mov -0x68(%rbp),%eax
f83: 48 8d 14 00 lea (%rax,%rax,1),%rdx
f87: 48 8b 45 c8 mov -0x38(%rbp),%rax
f8b: 48 01 d0 add %rdx,%rax
f8e: 8b 95 50 ff ff ff mov -0xb0(%rbp),%edx
f94: 89 d1 mov %edx,%ecx
f96: 8b 95 50 ff ff ff mov -0xb0(%rbp),%edx
f9c: c1 ea 05 shr $0x5,%edx
f9f: 29 d1 sub %edx,%ecx
fa1: 89 ca mov %ecx,%edx
fa3: 66 89 10 mov %dx,(%rax)
fa6: 8b 45 98 mov -0x68(%rbp),%eax
fa9: 01 c0 add %eax,%eax
fab: 83 c0 01 add $0x1,%eax
fae: 89 45 98 mov %eax,-0x68(%rbp)
fb1: 8b 45 9c mov -0x64(%rbp),%eax
fb4: 09 45 a4 or %eax,-0x5c(%rbp)
fb7: d1 65 9c shll -0x64(%rbp)
fba: 83 6d a0 01 subl $0x1,-0x60(%rbp)
fbe: 83 7d a0 00 cmpl $0x0,-0x60(%rbp)
fc2: 0f 85 06 ff ff ff jne ece <LzmaDec_DecodeReal+0xece>
fc8: e9 3d 04 00 00 jmpq 140a <LzmaDec_DecodeReal+0x140a>
fcd: 83 6d a0 04 subl $0x4,-0x60(%rbp)
fd1: 81 7d d4 ff ff ff 00 cmpl $0xffffff,-0x2c(%rbp)
fd8: 77 23 ja ffd <LzmaDec_DecodeReal+0xffd>
fda: c1 65 d4 08 shll $0x8,-0x2c(%rbp)
fde: 8b 45 d0 mov -0x30(%rbp),%eax
fe1: c1 e0 08 shl $0x8,%eax
fe4: 89 c1 mov %eax,%ecx
fe6: 48 8b 45 d8 mov -0x28(%rbp),%rax
fea: 48 8d 50 01 lea 0x1(%rax),%rdx
fee: 48 89 55 d8 mov %rdx,-0x28(%rbp)
ff2: 0f b6 00 movzbl (%rax),%eax
ff5: 0f b6 c0 movzbl %al,%eax
ff8: 09 c8 or %ecx,%eax
ffa: 89 45 d0 mov %eax,-0x30(%rbp)
ffd: d1 6d d4 shrl -0x2c(%rbp)
1000: 8b 45 d4 mov -0x2c(%rbp),%eax
1003: 29 45 d0 sub %eax,-0x30(%rbp)
1006: 8b 45 d0 mov -0x30(%rbp),%eax
1009: c1 f8 1f sar $0x1f,%eax
100c: 89 85 38 ff ff ff mov %eax,-0xc8(%rbp)
1012: 8b 45 a4 mov -0x5c(%rbp),%eax
1015: 8d 14 00 lea (%rax,%rax,1),%edx
1018: 8b 85 38 ff ff ff mov -0xc8(%rbp),%eax
101e: 01 d0 add %edx,%eax
1020: 83 c0 01 add $0x1,%eax
1023: 89 45 a4 mov %eax,-0x5c(%rbp)
1026: 8b 45 d4 mov -0x2c(%rbp),%eax
1029: 23 85 38 ff ff ff and -0xc8(%rbp),%eax
102f: 01 45 d0 add %eax,-0x30(%rbp)
1032: 83 6d a0 01 subl $0x1,-0x60(%rbp)
1036: 83 7d a0 00 cmpl $0x0,-0x60(%rbp)
103a: 75 95 jne fd1 <LzmaDec_DecodeReal+0xfd1>
103c: 48 8b 85 78 ff ff ff mov -0x88(%rbp),%rax
1043: 48 05 44 06 00 00 add $0x644,%rax
1049: 48 89 45 c8 mov %rax,-0x38(%rbp)
104d: c1 65 a4 04 shll $0x4,-0x5c(%rbp)
1051: c7 45 94 01 00 00 00 movl $0x1,-0x6c(%rbp)
1058: 8b 45 94 mov -0x6c(%rbp),%eax
105b: 48 8d 14 00 lea (%rax,%rax,1),%rdx
105f: 48 8b 45 c8 mov -0x38(%rbp),%rax
1063: 48 01 d0 add %rdx,%rax
1066: 0f b7 00 movzwl (%rax),%eax
1069: 0f b7 c0 movzwl %ax,%eax
106c: 89 85 50 ff ff ff mov %eax,-0xb0(%rbp)
1072: 81 7d d4 ff ff ff 00 cmpl $0xffffff,-0x2c(%rbp)
1079: 77 23 ja 109e <LzmaDec_DecodeReal+0x109e>
107b: c1 65 d4 08 shll $0x8,-0x2c(%rbp)
107f: 8b 45 d0 mov -0x30(%rbp),%eax
1082: c1 e0 08 shl $0x8,%eax
1085: 89 c1 mov %eax,%ecx
1087: 48 8b 45 d8 mov -0x28(%rbp),%rax
108b: 48 8d 50 01 lea 0x1(%rax),%rdx
108f: 48 89 55 d8 mov %rdx,-0x28(%rbp)
1093: 0f b6 00 movzbl (%rax),%eax
1096: 0f b6 c0 movzbl %al,%eax
1099: 09 c8 or %ecx,%eax
109b: 89 45 d0 mov %eax,-0x30(%rbp)
109e: 8b 45 d4 mov -0x2c(%rbp),%eax
10a1: c1 e8 0b shr $0xb,%eax
10a4: 0f af 85 50 ff ff ff imul -0xb0(%rbp),%eax
10ab: 89 85 4c ff ff ff mov %eax,-0xb4(%rbp)
10b1: 8b 45 d0 mov -0x30(%rbp),%eax
10b4: 3b 85 4c ff ff ff cmp -0xb4(%rbp),%eax
10ba: 73 3c jae 10f8 <LzmaDec_DecodeReal+0x10f8>
10bc: 8b 85 4c ff ff ff mov -0xb4(%rbp),%eax
10c2: 89 45 d4 mov %eax,-0x2c(%rbp)
10c5: 8b 45 94 mov -0x6c(%rbp),%eax
10c8: 48 8d 14 00 lea (%rax,%rax,1),%rdx
10cc: 48 8b 45 c8 mov -0x38(%rbp),%rax
10d0: 48 01 d0 add %rdx,%rax
10d3: 8b 95 50 ff ff ff mov -0xb0(%rbp),%edx
10d9: 89 d1 mov %edx,%ecx
10db: ba 00 08 00 00 mov $0x800,%edx
10e0: 2b 95 50 ff ff ff sub -0xb0(%rbp),%edx
10e6: c1 ea 05 shr $0x5,%edx
10e9: 01 ca add %ecx,%edx
10eb: 66 89 10 mov %dx,(%rax)
10ee: 8b 45 94 mov -0x6c(%rbp),%eax
10f1: 01 c0 add %eax,%eax
10f3: 89 45 94 mov %eax,-0x6c(%rbp)
10f6: eb 47 jmp 113f <LzmaDec_DecodeReal+0x113f>
10f8: 8b 85 4c ff ff ff mov -0xb4(%rbp),%eax
10fe: 29 45 d4 sub %eax,-0x2c(%rbp)
1101: 8b 85 4c ff ff ff mov -0xb4(%rbp),%eax
1107: 29 45 d0 sub %eax,-0x30(%rbp)
110a: 8b 45 94 mov -0x6c(%rbp),%eax
110d: 48 8d 14 00 lea (%rax,%rax,1),%rdx
1111: 48 8b 45 c8 mov -0x38(%rbp),%rax
1115: 48 01 d0 add %rdx,%rax
1118: 8b 95 50 ff ff ff mov -0xb0(%rbp),%edx
111e: 89 d1 mov %edx,%ecx
1120: 8b 95 50 ff ff ff mov -0xb0(%rbp),%edx
1126: c1 ea 05 shr $0x5,%edx
1129: 29 d1 sub %edx,%ecx
112b: 89 ca mov %ecx,%edx
112d: 66 89 10 mov %dx,(%rax)
1130: 8b 45 94 mov -0x6c(%rbp),%eax
1133: 01 c0 add %eax,%eax
1135: 83 c0 01 add $0x1,%eax
1138: 89 45 94 mov %eax,-0x6c(%rbp)
113b: 83 4d a4 01 orl $0x1,-0x5c(%rbp)
113f: 8b 45 94 mov -0x6c(%rbp),%eax
1142: 48 8d 14 00 lea (%rax,%rax,1),%rdx
1146: 48 8b 45 c8 mov -0x38(%rbp),%rax
114a: 48 01 d0 add %rdx,%rax
114d: 0f b7 00 movzwl (%rax),%eax
1150: 0f b7 c0 movzwl %ax,%eax
1153: 89 85 50 ff ff ff mov %eax,-0xb0(%rbp)
1159: 81 7d d4 ff ff ff 00 cmpl $0xffffff,-0x2c(%rbp)
1160: 77 23 ja 1185 <LzmaDec_DecodeReal+0x1185>
1162: c1 65 d4 08 shll $0x8,-0x2c(%rbp)
1166: 8b 45 d0 mov -0x30(%rbp),%eax
1169: c1 e0 08 shl $0x8,%eax
116c: 89 c1 mov %eax,%ecx
116e: 48 8b 45 d8 mov -0x28(%rbp),%rax
1172: 48 8d 50 01 lea 0x1(%rax),%rdx
1176: 48 89 55 d8 mov %rdx,-0x28(%rbp)
117a: 0f b6 00 movzbl (%rax),%eax
117d: 0f b6 c0 movzbl %al,%eax
1180: 09 c8 or %ecx,%eax
1182: 89 45 d0 mov %eax,-0x30(%rbp)
1185: 8b 45 d4 mov -0x2c(%rbp),%eax
1188: c1 e8 0b shr $0xb,%eax
118b: 0f af 85 50 ff ff ff imul -0xb0(%rbp),%eax
1192: 89 85 4c ff ff ff mov %eax,-0xb4(%rbp)
1198: 8b 45 d0 mov -0x30(%rbp),%eax
119b: 3b 85 4c ff ff ff cmp -0xb4(%rbp),%eax
11a1: 73 3c jae 11df <LzmaDec_DecodeReal+0x11df>
11a3: 8b 85 4c ff ff ff mov -0xb4(%rbp),%eax
11a9: 89 45 d4 mov %eax,-0x2c(%rbp)
11ac: 8b 45 94 mov -0x6c(%rbp),%eax
11af: 48 8d 14 00 lea (%rax,%rax,1),%rdx
11b3: 48 8b 45 c8 mov -0x38(%rbp),%rax
11b7: 48 01 d0 add %rdx,%rax
11ba: 8b 95 50 ff ff ff mov -0xb0(%rbp),%edx
11c0: 89 d1 mov %edx,%ecx
11c2: ba 00 08 00 00 mov $0x800,%edx
11c7: 2b 95 50 ff ff ff sub -0xb0(%rbp),%edx
11cd: c1 ea 05 shr $0x5,%edx
11d0: 01 ca add %ecx,%edx
11d2: 66 89 10 mov %dx,(%rax)
11d5: 8b 45 94 mov -0x6c(%rbp),%eax
11d8: 01 c0 add %eax,%eax
11da: 89 45 94 mov %eax,-0x6c(%rbp)
11dd: eb 47 jmp 1226 <LzmaDec_DecodeReal+0x1226>
11df: 8b 85 4c ff ff ff mov -0xb4(%rbp),%eax
11e5: 29 45 d4 sub %eax,-0x2c(%rbp)
11e8: 8b 85 4c ff ff ff mov -0xb4(%rbp),%eax
11ee: 29 45 d0 sub %eax,-0x30(%rbp)
11f1: 8b 45 94 mov -0x6c(%rbp),%eax
11f4: 48 8d 14 00 lea (%rax,%rax,1),%rdx
11f8: 48 8b 45 c8 mov -0x38(%rbp),%rax
11fc: 48 01 d0 add %rdx,%rax
11ff: 8b 95 50 ff ff ff mov -0xb0(%rbp),%edx
1205: 89 d1 mov %edx,%ecx
1207: 8b 95 50 ff ff ff mov -0xb0(%rbp),%edx
120d: c1 ea 05 shr $0x5,%edx
1210: 29 d1 sub %edx,%ecx
1212: 89 ca mov %ecx,%edx
1214: 66 89 10 mov %dx,(%rax)
1217: 8b 45 94 mov -0x6c(%rbp),%eax
121a: 01 c0 add %eax,%eax
121c: 83 c0 01 add $0x1,%eax
121f: 89 45 94 mov %eax,-0x6c(%rbp)
1222: 83 4d a4 02 orl $0x2,-0x5c(%rbp)
1226: 8b 45 94 mov -0x6c(%rbp),%eax
1229: 48 8d 14 00 lea (%rax,%rax,1),%rdx
122d: 48 8b 45 c8 mov -0x38(%rbp),%rax
1231: 48 01 d0 add %rdx,%rax
1234: 0f b7 00 movzwl (%rax),%eax
1237: 0f b7 c0 movzwl %ax,%eax
123a: 89 85 50 ff ff ff mov %eax,-0xb0(%rbp)
1240: 81 7d d4 ff ff ff 00 cmpl $0xffffff,-0x2c(%rbp)
1247: 77 23 ja 126c <LzmaDec_DecodeReal+0x126c>
1249: c1 65 d4 08 shll $0x8,-0x2c(%rbp)
124d: 8b 45 d0 mov -0x30(%rbp),%eax
1250: c1 e0 08 shl $0x8,%eax
1253: 89 c1 mov %eax,%ecx
1255: 48 8b 45 d8 mov -0x28(%rbp),%rax
1259: 48 8d 50 01 lea 0x1(%rax),%rdx
125d: 48 89 55 d8 mov %rdx,-0x28(%rbp)
1261: 0f b6 00 movzbl (%rax),%eax
1264: 0f b6 c0 movzbl %al,%eax
1267: 09 c8 or %ecx,%eax
1269: 89 45 d0 mov %eax,-0x30(%rbp)
126c: 8b 45 d4 mov -0x2c(%rbp),%eax
126f: c1 e8 0b shr $0xb,%eax
1272: 0f af 85 50 ff ff ff imul -0xb0(%rbp),%eax
1279: 89 85 4c ff ff ff mov %eax,-0xb4(%rbp)
127f: 8b 45 d0 mov -0x30(%rbp),%eax
1282: 3b 85 4c ff ff ff cmp -0xb4(%rbp),%eax
1288: 73 3c jae 12c6 <LzmaDec_DecodeReal+0x12c6>
128a: 8b 85 4c ff ff ff mov -0xb4(%rbp),%eax
1290: 89 45 d4 mov %eax,-0x2c(%rbp)
1293: 8b 45 94 mov -0x6c(%rbp),%eax
1296: 48 8d 14 00 lea (%rax,%rax,1),%rdx
129a: 48 8b 45 c8 mov -0x38(%rbp),%rax
129e: 48 01 d0 add %rdx,%rax
12a1: 8b 95 50 ff ff ff mov -0xb0(%rbp),%edx
12a7: 89 d1 mov %edx,%ecx
12a9: ba 00 08 00 00 mov $0x800,%edx
12ae: 2b 95 50 ff ff ff sub -0xb0(%rbp),%edx
12b4: c1 ea 05 shr $0x5,%edx
12b7: 01 ca add %ecx,%edx
12b9: 66 89 10 mov %dx,(%rax)
12bc: 8b 45 94 mov -0x6c(%rbp),%eax
12bf: 01 c0 add %eax,%eax
12c1: 89 45 94 mov %eax,-0x6c(%rbp)
12c4: eb 47 jmp 130d <LzmaDec_DecodeReal+0x130d>
12c6: 8b 85 4c ff ff ff mov -0xb4(%rbp),%eax
12cc: 29 45 d4 sub %eax,-0x2c(%rbp)
12cf: 8b 85 4c ff ff ff mov -0xb4(%rbp),%eax
12d5: 29 45 d0 sub %eax,-0x30(%rbp)
12d8: 8b 45 94 mov -0x6c(%rbp),%eax
12db: 48 8d 14 00 lea (%rax,%rax,1),%rdx
12df: 48 8b 45 c8 mov -0x38(%rbp),%rax
12e3: 48 01 d0 add %rdx,%rax
12e6: 8b 95 50 ff ff ff mov -0xb0(%rbp),%edx
12ec: 89 d1 mov %edx,%ecx
12ee: 8b 95 50 ff ff ff mov -0xb0(%rbp),%edx
12f4: c1 ea 05 shr $0x5,%edx
12f7: 29 d1 sub %edx,%ecx
12f9: 89 ca mov %ecx,%edx
12fb: 66 89 10 mov %dx,(%rax)
12fe: 8b 45 94 mov -0x6c(%rbp),%eax
1301: 01 c0 add %eax,%eax
1303: 83 c0 01 add $0x1,%eax
1306: 89 45 94 mov %eax,-0x6c(%rbp)
1309: 83 4d a4 04 orl $0x4,-0x5c(%rbp)
130d: 8b 45 94 mov -0x6c(%rbp),%eax
1310: 48 8d 14 00 lea (%rax,%rax,1),%rdx
1314: 48 8b 45 c8 mov -0x38(%rbp),%rax
1318: 48 01 d0 add %rdx,%rax
131b: 0f b7 00 movzwl (%rax),%eax
131e: 0f b7 c0 movzwl %ax,%eax
1321: 89 85 50 ff ff ff mov %eax,-0xb0(%rbp)
1327: 81 7d d4 ff ff ff 00 cmpl $0xffffff,-0x2c(%rbp)
132e: 77 23 ja 1353 <LzmaDec_DecodeReal+0x1353>
1330: c1 65 d4 08 shll $0x8,-0x2c(%rbp)
1334: 8b 45 d0 mov -0x30(%rbp),%eax
1337: c1 e0 08 shl $0x8,%eax
133a: 89 c1 mov %eax,%ecx
133c: 48 8b 45 d8 mov -0x28(%rbp),%rax
1340: 48 8d 50 01 lea 0x1(%rax),%rdx
1344: 48 89 55 d8 mov %rdx,-0x28(%rbp)
1348: 0f b6 00 movzbl (%rax),%eax
134b: 0f b6 c0 movzbl %al,%eax
134e: 09 c8 or %ecx,%eax
1350: 89 45 d0 mov %eax,-0x30(%rbp)
1353: 8b 45 d4 mov -0x2c(%rbp),%eax
1356: c1 e8 0b shr $0xb,%eax
1359: 0f af 85 50 ff ff ff imul -0xb0(%rbp),%eax
1360: 89 85 4c ff ff ff mov %eax,-0xb4(%rbp)
1366: 8b 45 d0 mov -0x30(%rbp),%eax
1369: 3b 85 4c ff ff ff cmp -0xb4(%rbp),%eax
136f: 73 3c jae 13ad <LzmaDec_DecodeReal+0x13ad>
1371: 8b 85 4c ff ff ff mov -0xb4(%rbp),%eax
1377: 89 45 d4 mov %eax,-0x2c(%rbp)
137a: 8b 45 94 mov -0x6c(%rbp),%eax
137d: 48 8d 14 00 lea (%rax,%rax,1),%rdx
1381: 48 8b 45 c8 mov -0x38(%rbp),%rax
1385: 48 01 d0 add %rdx,%rax
1388: 8b 95 50 ff ff ff mov -0xb0(%rbp),%edx
138e: 89 d1 mov %edx,%ecx
1390: ba 00 08 00 00 mov $0x800,%edx
1395: 2b 95 50 ff ff ff sub -0xb0(%rbp),%edx
139b: c1 ea 05 shr $0x5,%edx
139e: 01 ca add %ecx,%edx
13a0: 66 89 10 mov %dx,(%rax)
13a3: 8b 45 94 mov -0x6c(%rbp),%eax
13a6: 01 c0 add %eax,%eax
13a8: 89 45 94 mov %eax,-0x6c(%rbp)
13ab: eb 47 jmp 13f4 <LzmaDec_DecodeReal+0x13f4>
13ad: 8b 85 4c ff ff ff mov -0xb4(%rbp),%eax
13b3: 29 45 d4 sub %eax,-0x2c(%rbp)
13b6: 8b 85 4c ff ff ff mov -0xb4(%rbp),%eax
13bc: 29 45 d0 sub %eax,-0x30(%rbp)
13bf: 8b 45 94 mov -0x6c(%rbp),%eax
13c2: 48 8d 14 00 lea (%rax,%rax,1),%rdx
13c6: 48 8b 45 c8 mov -0x38(%rbp),%rax
13ca: 48 01 d0 add %rdx,%rax
13cd: 8b 95 50 ff ff ff mov -0xb0(%rbp),%edx
13d3: 89 d1 mov %edx,%ecx
13d5: 8b 95 50 ff ff ff mov -0xb0(%rbp),%edx
13db: c1 ea 05 shr $0x5,%edx
13de: 29 d1 sub %edx,%ecx
13e0: 89 ca mov %ecx,%edx
13e2: 66 89 10 mov %dx,(%rax)
13e5: 8b 45 94 mov -0x6c(%rbp),%eax
13e8: 01 c0 add %eax,%eax
13ea: 83 c0 01 add $0x1,%eax
13ed: 89 45 94 mov %eax,-0x6c(%rbp)
13f0: 83 4d a4 08 orl $0x8,-0x5c(%rbp)
13f4: 83 7d a4 ff cmpl $0xffffffff,-0x5c(%rbp)
13f8: 75 10 jne 140a <LzmaDec_DecodeReal+0x140a>
13fa: 81 45 e0 12 01 00 00 addl $0x112,-0x20(%rbp)
1401: 83 6d fc 0c subl $0xc,-0x4(%rbp)
1405: e9 9e 01 00 00 jmpq 15a8 <LzmaDec_DecodeReal+0x15a8>
140a: 8b 45 f0 mov -0x10(%rbp),%eax
140d: 89 45 ec mov %eax,-0x14(%rbp)
1410: 8b 45 f4 mov -0xc(%rbp),%eax
1413: 89 45 f0 mov %eax,-0x10(%rbp)
1416: 8b 45 f8 mov -0x8(%rbp),%eax
1419: 89 45 f4 mov %eax,-0xc(%rbp)
141c: 8b 45 a4 mov -0x5c(%rbp),%eax
141f: 83 c0 01 add $0x1,%eax
1422: 89 45 f8 mov %eax,-0x8(%rbp)
1425: 83 bd 58 ff ff ff 00 cmpl $0x0,-0xa8(%rbp)
142c: 75 12 jne 1440 <LzmaDec_DecodeReal+0x1440>
142e: 8b 45 a4 mov -0x5c(%rbp),%eax
1431: 3b 45 e4 cmp -0x1c(%rbp),%eax
1434: 72 1f jb 1455 <LzmaDec_DecodeReal+0x1455>
1436: b8 01 00 00 00 mov $0x1,%eax
143b: e9 2a 02 00 00 jmpq 166a <LzmaDec_DecodeReal+0x166a>
1440: 8b 45 a4 mov -0x5c(%rbp),%eax
1443: 3b 85 58 ff ff ff cmp -0xa8(%rbp),%eax
1449: 72 0a jb 1455 <LzmaDec_DecodeReal+0x1455>
144b: b8 01 00 00 00 mov $0x1,%eax
1450: e9 15 02 00 00 jmpq 166a <LzmaDec_DecodeReal+0x166a>
1455: 83 7d fc 12 cmpl $0x12,-0x4(%rbp)
1459: 77 07 ja 1462 <LzmaDec_DecodeReal+0x1462>
145b: b8 07 00 00 00 mov $0x7,%eax
1460: eb 05 jmp 1467 <LzmaDec_DecodeReal+0x1467>
1462: b8 0a 00 00 00 mov $0xa,%eax
1467: 89 45 fc mov %eax,-0x4(%rbp)
146a: 83 45 e0 02 addl $0x2,-0x20(%rbp)
146e: 8b 85 14 ff ff ff mov -0xec(%rbp),%eax
1474: 3b 45 e8 cmp -0x18(%rbp),%eax
1477: 75 0a jne 1483 <LzmaDec_DecodeReal+0x1483>
1479: b8 01 00 00 00 mov $0x1,%eax
147e: e9 e7 01 00 00 jmpq 166a <LzmaDec_DecodeReal+0x166a>
1483: 8b 85 14 ff ff ff mov -0xec(%rbp),%eax
1489: 2b 45 e8 sub -0x18(%rbp),%eax
148c: 89 85 34 ff ff ff mov %eax,-0xcc(%rbp)
1492: 8b 85 34 ff ff ff mov -0xcc(%rbp),%eax
1498: 39 45 e0 cmp %eax,-0x20(%rbp)
149b: 0f 46 45 e0 cmovbe -0x20(%rbp),%eax
149f: 89 45 90 mov %eax,-0x70(%rbp)
14a2: 8b 45 e8 mov -0x18(%rbp),%eax
14a5: 2b 45 f8 sub -0x8(%rbp),%eax
14a8: 89 c2 mov %eax,%edx
14aa: 8b 45 e8 mov -0x18(%rbp),%eax
14ad: 3b 45 f8 cmp -0x8(%rbp),%eax
14b0: 73 08 jae 14ba <LzmaDec_DecodeReal+0x14ba>
14b2: 8b 85 5c ff ff ff mov -0xa4(%rbp),%eax
14b8: eb 05 jmp 14bf <LzmaDec_DecodeReal+0x14bf>
14ba: b8 00 00 00 00 mov $0x0,%eax
14bf: 01 d0 add %edx,%eax
14c1: 89 45 8c mov %eax,-0x74(%rbp)
14c4: 8b 45 90 mov -0x70(%rbp),%eax
14c7: 01 45 e4 add %eax,-0x1c(%rbp)
14ca: 8b 45 90 mov -0x70(%rbp),%eax
14cd: 29 45 e0 sub %eax,-0x20(%rbp)
14d0: 8b 55 8c mov -0x74(%rbp),%edx
14d3: 8b 45 90 mov -0x70(%rbp),%eax
14d6: 01 d0 add %edx,%eax
14d8: 3b 85 5c ff ff ff cmp -0xa4(%rbp),%eax
14de: 77 65 ja 1545 <LzmaDec_DecodeReal+0x1545>
14e0: 8b 55 e8 mov -0x18(%rbp),%edx
14e3: 48 8b 85 60 ff ff ff mov -0xa0(%rbp),%rax
14ea: 48 01 d0 add %rdx,%rax
14ed: 48 89 45 80 mov %rax,-0x80(%rbp)
14f1: 8b 55 8c mov -0x74(%rbp),%edx
14f4: 8b 45 e8 mov -0x18(%rbp),%eax
14f7: 29 c2 sub %eax,%edx
14f9: 89 d0 mov %edx,%eax
14fb: 89 85 30 ff ff ff mov %eax,-0xd0(%rbp)
1501: 8b 55 90 mov -0x70(%rbp),%edx
1504: 48 8b 45 80 mov -0x80(%rbp),%rax
1508: 48 01 d0 add %rdx,%rax
150b: 48 89 85 28 ff ff ff mov %rax,-0xd8(%rbp)
1512: 8b 45 90 mov -0x70(%rbp),%eax
1515: 01 45 e8 add %eax,-0x18(%rbp)
1518: 8b 85 30 ff ff ff mov -0xd0(%rbp),%eax
151e: 48 63 d0 movslq %eax,%rdx
1521: 48 8b 45 80 mov -0x80(%rbp),%rax
1525: 48 01 d0 add %rdx,%rax
1528: 0f b6 10 movzbl (%rax),%edx
152b: 48 8b 45 80 mov -0x80(%rbp),%rax
152f: 88 10 mov %dl,(%rax)
1531: 48 83 45 80 01 addq $0x1,-0x80(%rbp)
1536: 48 8b 45 80 mov -0x80(%rbp),%rax
153a: 48 3b 85 28 ff ff ff cmp -0xd8(%rbp),%rax
1541: 75 d5 jne 1518 <LzmaDec_DecodeReal+0x1518>
1543: eb 47 jmp 158c <LzmaDec_DecodeReal+0x158c>
1545: 8b 45 e8 mov -0x18(%rbp),%eax
1548: 8d 50 01 lea 0x1(%rax),%edx
154b: 89 55 e8 mov %edx,-0x18(%rbp)
154e: 89 c2 mov %eax,%edx
1550: 48 8b 85 60 ff ff ff mov -0xa0(%rbp),%rax
1557: 48 01 c2 add %rax,%rdx
155a: 8b 4d 8c mov -0x74(%rbp),%ecx
155d: 48 8b 85 60 ff ff ff mov -0xa0(%rbp),%rax
1564: 48 01 c8 add %rcx,%rax
1567: 0f b6 00 movzbl (%rax),%eax
156a: 88 02 mov %al,(%rdx)
156c: 83 45 8c 01 addl $0x1,-0x74(%rbp)
1570: 8b 45 8c mov -0x74(%rbp),%eax
1573: 3b 85 5c ff ff ff cmp -0xa4(%rbp),%eax
1579: 75 07 jne 1582 <LzmaDec_DecodeReal+0x1582>
157b: c7 45 8c 00 00 00 00 movl $0x0,-0x74(%rbp)
1582: 83 6d 90 01 subl $0x1,-0x70(%rbp)
1586: 83 7d 90 00 cmpl $0x0,-0x70(%rbp)
158a: 75 b9 jne 1545 <LzmaDec_DecodeReal+0x1545>
158c: 8b 45 e8 mov -0x18(%rbp),%eax
158f: 3b 85 14 ff ff ff cmp -0xec(%rbp),%eax
1595: 73 11 jae 15a8 <LzmaDec_DecodeReal+0x15a8>
1597: 48 8b 45 d8 mov -0x28(%rbp),%rax
159b: 48 3b 85 08 ff ff ff cmp -0xf8(%rbp),%rax
15a2: 0f 82 91 eb ff ff jb 139 <LzmaDec_DecodeReal+0x139>
15a8: 81 7d d4 ff ff ff 00 cmpl $0xffffff,-0x2c(%rbp)
15af: 77 23 ja 15d4 <LzmaDec_DecodeReal+0x15d4>
15b1: c1 65 d4 08 shll $0x8,-0x2c(%rbp)
15b5: 8b 45 d0 mov -0x30(%rbp),%eax
15b8: c1 e0 08 shl $0x8,%eax
15bb: 89 c1 mov %eax,%ecx
15bd: 48 8b 45 d8 mov -0x28(%rbp),%rax
15c1: 48 8d 50 01 lea 0x1(%rax),%rdx
15c5: 48 89 55 d8 mov %rdx,-0x28(%rbp)
15c9: 0f b6 00 movzbl (%rax),%eax
15cc: 0f b6 c0 movzbl %al,%eax
15cf: 09 c8 or %ecx,%eax
15d1: 89 45 d0 mov %eax,-0x30(%rbp)
15d4: 48 8b 85 18 ff ff ff mov -0xe8(%rbp),%rax
15db: 48 8b 55 d8 mov -0x28(%rbp),%rdx
15df: 48 89 50 20 mov %rdx,0x20(%rax)
15e3: 48 8b 85 18 ff ff ff mov -0xe8(%rbp),%rax
15ea: 8b 55 d4 mov -0x2c(%rbp),%edx
15ed: 89 50 28 mov %edx,0x28(%rax)
15f0: 48 8b 85 18 ff ff ff mov -0xe8(%rbp),%rax
15f7: 8b 55 d0 mov -0x30(%rbp),%edx
15fa: 89 50 2c mov %edx,0x2c(%rax)
15fd: 48 8b 85 18 ff ff ff mov -0xe8(%rbp),%rax
1604: 8b 55 e0 mov -0x20(%rbp),%edx
1607: 89 50 54 mov %edx,0x54(%rax)
160a: 48 8b 85 18 ff ff ff mov -0xe8(%rbp),%rax
1611: 8b 55 e8 mov -0x18(%rbp),%edx
1614: 89 50 30 mov %edx,0x30(%rax)
1617: 48 8b 85 18 ff ff ff mov -0xe8(%rbp),%rax
161e: 8b 55 e4 mov -0x1c(%rbp),%edx
1621: 89 50 38 mov %edx,0x38(%rax)
1624: 48 8b 85 18 ff ff ff mov -0xe8(%rbp),%rax
162b: 8b 55 f8 mov -0x8(%rbp),%edx
162e: 89 50 44 mov %edx,0x44(%rax)
1631: 48 8b 85 18 ff ff ff mov -0xe8(%rbp),%rax
1638: 8b 55 f4 mov -0xc(%rbp),%edx
163b: 89 50 48 mov %edx,0x48(%rax)
163e: 48 8b 85 18 ff ff ff mov -0xe8(%rbp),%rax
1645: 8b 55 f0 mov -0x10(%rbp),%edx
1648: 89 50 4c mov %edx,0x4c(%rax)
164b: 48 8b 85 18 ff ff ff mov -0xe8(%rbp),%rax
1652: 8b 55 ec mov -0x14(%rbp),%edx
1655: 89 50 50 mov %edx,0x50(%rax)
1658: 48 8b 85 18 ff ff ff mov -0xe8(%rbp),%rax
165f: 8b 55 fc mov -0x4(%rbp),%edx
1662: 89 50 40 mov %edx,0x40(%rax)
1665: b8 00 00 00 00 mov $0x0,%eax
166a: c9 leaveq
166b: c3 retq
Disassembly of section .text.LzmaDec_WriteRem:
0000000000000000 <LzmaDec_WriteRem>:
0: 55 push %rbp
1: 48 89 e5 mov %rsp,%rbp
4: 48 83 ec 30 sub $0x30,%rsp
8: 48 89 7d d8 mov %rdi,-0x28(%rbp)
c: 89 75 d4 mov %esi,-0x2c(%rbp)
f: 48 8b 45 d8 mov -0x28(%rbp),%rax
13: 8b 40 54 mov 0x54(%rax),%eax
16: 85 c0 test %eax,%eax
18: 0f 84 01 01 00 00 je 11f <LzmaDec_WriteRem+0x11f>
1e: 48 8b 45 d8 mov -0x28(%rbp),%rax
22: 8b 40 54 mov 0x54(%rax),%eax
25: 3d 11 01 00 00 cmp $0x111,%eax
2a: 0f 87 ef 00 00 00 ja 11f <LzmaDec_WriteRem+0x11f>
30: 48 8b 45 d8 mov -0x28(%rbp),%rax
34: 48 8b 40 18 mov 0x18(%rax),%rax
38: 48 89 45 f0 mov %rax,-0x10(%rbp)
3c: 48 8b 45 d8 mov -0x28(%rbp),%rax
40: 8b 40 30 mov 0x30(%rax),%eax
43: 89 45 fc mov %eax,-0x4(%rbp)
46: 48 8b 45 d8 mov -0x28(%rbp),%rax
4a: 8b 40 34 mov 0x34(%rax),%eax
4d: 89 45 ec mov %eax,-0x14(%rbp)
50: 48 8b 45 d8 mov -0x28(%rbp),%rax
54: 8b 40 54 mov 0x54(%rax),%eax
57: 89 45 f8 mov %eax,-0x8(%rbp)
5a: 48 8b 45 d8 mov -0x28(%rbp),%rax
5e: 8b 40 44 mov 0x44(%rax),%eax
61: 89 45 e8 mov %eax,-0x18(%rbp)
64: 8b 45 d4 mov -0x2c(%rbp),%eax
67: 2b 45 fc sub -0x4(%rbp),%eax
6a: 3b 45 f8 cmp -0x8(%rbp),%eax
6d: 73 09 jae 78 <LzmaDec_WriteRem+0x78>
6f: 8b 45 d4 mov -0x2c(%rbp),%eax
72: 2b 45 fc sub -0x4(%rbp),%eax
75: 89 45 f8 mov %eax,-0x8(%rbp)
78: 48 8b 45 d8 mov -0x28(%rbp),%rax
7c: 8b 40 3c mov 0x3c(%rax),%eax
7f: 85 c0 test %eax,%eax
81: 75 25 jne a8 <LzmaDec_WriteRem+0xa8>
83: 48 8b 45 d8 mov -0x28(%rbp),%rax
87: 8b 50 0c mov 0xc(%rax),%edx
8a: 48 8b 45 d8 mov -0x28(%rbp),%rax
8e: 8b 40 38 mov 0x38(%rax),%eax
91: 29 c2 sub %eax,%edx
93: 89 d0 mov %edx,%eax
95: 3b 45 f8 cmp -0x8(%rbp),%eax
98: 77 0e ja a8 <LzmaDec_WriteRem+0xa8>
9a: 48 8b 45 d8 mov -0x28(%rbp),%rax
9e: 8b 50 0c mov 0xc(%rax),%edx
a1: 48 8b 45 d8 mov -0x28(%rbp),%rax
a5: 89 50 3c mov %edx,0x3c(%rax)
a8: 48 8b 45 d8 mov -0x28(%rbp),%rax
ac: 8b 50 38 mov 0x38(%rax),%edx
af: 8b 45 f8 mov -0x8(%rbp),%eax
b2: 01 c2 add %eax,%edx
b4: 48 8b 45 d8 mov -0x28(%rbp),%rax
b8: 89 50 38 mov %edx,0x38(%rax)
bb: 48 8b 45 d8 mov -0x28(%rbp),%rax
bf: 8b 40 54 mov 0x54(%rax),%eax
c2: 2b 45 f8 sub -0x8(%rbp),%eax
c5: 89 c2 mov %eax,%edx
c7: 48 8b 45 d8 mov -0x28(%rbp),%rax
cb: 89 50 54 mov %edx,0x54(%rax)
ce: eb 38 jmp 108 <LzmaDec_WriteRem+0x108>
d0: 8b 55 fc mov -0x4(%rbp),%edx
d3: 48 8b 45 f0 mov -0x10(%rbp),%rax
d7: 48 01 c2 add %rax,%rdx
da: 8b 45 fc mov -0x4(%rbp),%eax
dd: 2b 45 e8 sub -0x18(%rbp),%eax
e0: 89 c1 mov %eax,%ecx
e2: 8b 45 fc mov -0x4(%rbp),%eax
e5: 3b 45 e8 cmp -0x18(%rbp),%eax
e8: 73 05 jae ef <LzmaDec_WriteRem+0xef>
ea: 8b 45 ec mov -0x14(%rbp),%eax
ed: eb 05 jmp f4 <LzmaDec_WriteRem+0xf4>
ef: b8 00 00 00 00 mov $0x0,%eax
f4: 01 c8 add %ecx,%eax
f6: 89 c1 mov %eax,%ecx
f8: 48 8b 45 f0 mov -0x10(%rbp),%rax
fc: 48 01 c8 add %rcx,%rax
ff: 0f b6 00 movzbl (%rax),%eax
102: 88 02 mov %al,(%rdx)
104: 83 45 fc 01 addl $0x1,-0x4(%rbp)
108: 8b 45 f8 mov -0x8(%rbp),%eax
10b: 8d 50 ff lea -0x1(%rax),%edx
10e: 89 55 f8 mov %edx,-0x8(%rbp)
111: 85 c0 test %eax,%eax
113: 75 bb jne d0 <LzmaDec_WriteRem+0xd0>
115: 48 8b 45 d8 mov -0x28(%rbp),%rax
119: 8b 55 fc mov -0x4(%rbp),%edx
11c: 89 50 30 mov %edx,0x30(%rax)
11f: 90 nop
120: c9 leaveq
121: c3 retq
Disassembly of section .text.LzmaDec_DecodeReal2:
0000000000000000 <LzmaDec_DecodeReal2>:
0: 55 push %rbp
1: 48 89 e5 mov %rsp,%rbp
4: 48 83 ec 28 sub $0x28,%rsp
8: 48 89 7d e8 mov %rdi,-0x18(%rbp)
c: 89 75 e4 mov %esi,-0x1c(%rbp)
f: 48 89 55 d8 mov %rdx,-0x28(%rbp)
13: 8b 45 e4 mov -0x1c(%rbp),%eax
16: 89 45 fc mov %eax,-0x4(%rbp)
19: 48 8b 45 e8 mov -0x18(%rbp),%rax
1d: 8b 40 3c mov 0x3c(%rax),%eax
20: 85 c0 test %eax,%eax
22: 75 37 jne 5b <LzmaDec_DecodeReal2+0x5b>
24: 48 8b 45 e8 mov -0x18(%rbp),%rax
28: 8b 50 0c mov 0xc(%rax),%edx
2b: 48 8b 45 e8 mov -0x18(%rbp),%rax
2f: 8b 40 38 mov 0x38(%rax),%eax
32: 29 c2 sub %eax,%edx
34: 89 d0 mov %edx,%eax
36: 89 45 f8 mov %eax,-0x8(%rbp)
39: 48 8b 45 e8 mov -0x18(%rbp),%rax
3d: 8b 40 30 mov 0x30(%rax),%eax
40: 8b 55 e4 mov -0x1c(%rbp),%edx
43: 29 c2 sub %eax,%edx
45: 89 d0 mov %edx,%eax
47: 3b 45 f8 cmp -0x8(%rbp),%eax
4a: 76 0f jbe 5b <LzmaDec_DecodeReal2+0x5b>
4c: 48 8b 45 e8 mov -0x18(%rbp),%rax
50: 8b 50 30 mov 0x30(%rax),%edx
53: 8b 45 f8 mov -0x8(%rbp),%eax
56: 01 d0 add %edx,%eax
58: 89 45 fc mov %eax,-0x4(%rbp)
5b: 48 8b 55 d8 mov -0x28(%rbp),%rdx
5f: 8b 4d fc mov -0x4(%rbp),%ecx
62: 48 8b 45 e8 mov -0x18(%rbp),%rax
66: 89 ce mov %ecx,%esi
68: 48 89 c7 mov %rax,%rdi
6b: 48 b8 00 00 00 00 00 movabs $0x0,%rax
72: 00 00 00
75: ff d0 callq *%rax
77: 89 45 f4 mov %eax,-0xc(%rbp)
7a: 83 7d f4 00 cmpl $0x0,-0xc(%rbp)
7e: 74 08 je 88 <LzmaDec_DecodeReal2+0x88>
80: 8b 45 f4 mov -0xc(%rbp),%eax
83: e9 82 00 00 00 jmpq 10a <LzmaDec_DecodeReal2+0x10a>
88: 48 8b 45 e8 mov -0x18(%rbp),%rax
8c: 8b 50 38 mov 0x38(%rax),%edx
8f: 48 8b 45 e8 mov -0x18(%rbp),%rax
93: 8b 40 0c mov 0xc(%rax),%eax
96: 39 c2 cmp %eax,%edx
98: 72 0e jb a8 <LzmaDec_DecodeReal2+0xa8>
9a: 48 8b 45 e8 mov -0x18(%rbp),%rax
9e: 8b 50 0c mov 0xc(%rax),%edx
a1: 48 8b 45 e8 mov -0x18(%rbp),%rax
a5: 89 50 3c mov %edx,0x3c(%rax)
a8: 8b 55 e4 mov -0x1c(%rbp),%edx
ab: 48 8b 45 e8 mov -0x18(%rbp),%rax
af: 89 d6 mov %edx,%esi
b1: 48 89 c7 mov %rax,%rdi
b4: 48 b8 00 00 00 00 00 movabs $0x0,%rax
bb: 00 00 00
be: ff d0 callq *%rax
c0: 48 8b 45 e8 mov -0x18(%rbp),%rax
c4: 8b 40 30 mov 0x30(%rax),%eax
c7: 3b 45 e4 cmp -0x1c(%rbp),%eax
ca: 73 20 jae ec <LzmaDec_DecodeReal2+0xec>
cc: 48 8b 45 e8 mov -0x18(%rbp),%rax
d0: 48 8b 40 20 mov 0x20(%rax),%rax
d4: 48 3b 45 d8 cmp -0x28(%rbp),%rax
d8: 73 12 jae ec <LzmaDec_DecodeReal2+0xec>
da: 48 8b 45 e8 mov -0x18(%rbp),%rax
de: 8b 40 54 mov 0x54(%rax),%eax
e1: 3d 11 01 00 00 cmp $0x111,%eax
e6: 0f 86 27 ff ff ff jbe 13 <LzmaDec_DecodeReal2+0x13>
ec: 48 8b 45 e8 mov -0x18(%rbp),%rax
f0: 8b 40 54 mov 0x54(%rax),%eax
f3: 3d 12 01 00 00 cmp $0x112,%eax
f8: 76 0b jbe 105 <LzmaDec_DecodeReal2+0x105>
fa: 48 8b 45 e8 mov -0x18(%rbp),%rax
fe: c7 40 54 12 01 00 00 movl $0x112,0x54(%rax)
105: b8 00 00 00 00 mov $0x0,%eax
10a: c9 leaveq
10b: c3 retq
Disassembly of section .text.LzmaDec_TryDummy:
0000000000000000 <LzmaDec_TryDummy>:
0: 55 push %rbp
1: 48 89 e5 mov %rsp,%rbp
4: 48 81 ec 98 00 00 00 sub $0x98,%rsp
b: 48 89 bd 78 ff ff ff mov %rdi,-0x88(%rbp)
12: 48 89 b5 70 ff ff ff mov %rsi,-0x90(%rbp)
19: 89 95 6c ff ff ff mov %edx,-0x94(%rbp)
1f: 48 8b 85 78 ff ff ff mov -0x88(%rbp),%rax
26: 8b 40 28 mov 0x28(%rax),%eax
29: 89 45 fc mov %eax,-0x4(%rbp)
2c: 48 8b 85 78 ff ff ff mov -0x88(%rbp),%rax
33: 8b 40 2c mov 0x2c(%rax),%eax
36: 89 45 f8 mov %eax,-0x8(%rbp)
39: 8b 95 6c ff ff ff mov -0x94(%rbp),%edx
3f: 48 8b 85 70 ff ff ff mov -0x90(%rbp),%rax
46: 48 01 d0 add %rdx,%rax
49: 48 89 45 a8 mov %rax,-0x58(%rbp)
4d: 48 8b 85 78 ff ff ff mov -0x88(%rbp),%rax
54: 48 8b 40 10 mov 0x10(%rax),%rax
58: 48 89 45 a0 mov %rax,-0x60(%rbp)
5c: 48 8b 85 78 ff ff ff mov -0x88(%rbp),%rax
63: 8b 40 40 mov 0x40(%rax),%eax
66: 89 45 f4 mov %eax,-0xc(%rbp)
69: 48 8b 85 78 ff ff ff mov -0x88(%rbp),%rax
70: 8b 50 38 mov 0x38(%rax),%edx
73: 48 8b 85 78 ff ff ff mov -0x88(%rbp),%rax
7a: 8b 40 08 mov 0x8(%rax),%eax
7d: be 01 00 00 00 mov $0x1,%esi
82: 89 c1 mov %eax,%ecx
84: d3 e6 shl %cl,%esi
86: 89 f0 mov %esi,%eax
88: 83 e8 01 sub $0x1,%eax
8b: 21 d0 and %edx,%eax
8d: 89 45 9c mov %eax,-0x64(%rbp)
90: 8b 45 f4 mov -0xc(%rbp),%eax
93: c1 e0 04 shl $0x4,%eax
96: 89 c2 mov %eax,%edx
98: 8b 45 9c mov -0x64(%rbp),%eax
9b: 48 01 d0 add %rdx,%rax
9e: 48 8d 14 00 lea (%rax,%rax,1),%rdx
a2: 48 8b 45 a0 mov -0x60(%rbp),%rax
a6: 48 01 d0 add %rdx,%rax
a9: 48 89 45 e8 mov %rax,-0x18(%rbp)
ad: 48 8b 45 e8 mov -0x18(%rbp),%rax
b1: 0f b7 00 movzwl (%rax),%eax
b4: 0f b7 c0 movzwl %ax,%eax
b7: 89 45 98 mov %eax,-0x68(%rbp)
ba: 81 7d fc ff ff ff 00 cmpl $0xffffff,-0x4(%rbp)
c1: 77 40 ja 103 <LzmaDec_TryDummy+0x103>
c3: 48 8b 85 70 ff ff ff mov -0x90(%rbp),%rax
ca: 48 3b 45 a8 cmp -0x58(%rbp),%rax
ce: 72 0a jb da <LzmaDec_TryDummy+0xda>
d0: b8 00 00 00 00 mov $0x0,%eax
d5: e9 b8 0b 00 00 jmpq c92 <LzmaDec_TryDummy+0xc92>
da: c1 65 fc 08 shll $0x8,-0x4(%rbp)
de: 8b 45 f8 mov -0x8(%rbp),%eax
e1: c1 e0 08 shl $0x8,%eax
e4: 89 c1 mov %eax,%ecx
e6: 48 8b 85 70 ff ff ff mov -0x90(%rbp),%rax
ed: 48 8d 50 01 lea 0x1(%rax),%rdx
f1: 48 89 95 70 ff ff ff mov %rdx,-0x90(%rbp)
f8: 0f b6 00 movzbl (%rax),%eax
fb: 0f b6 c0 movzbl %al,%eax
fe: 09 c8 or %ecx,%eax
100: 89 45 f8 mov %eax,-0x8(%rbp)
103: 8b 45 fc mov -0x4(%rbp),%eax
106: c1 e8 0b shr $0xb,%eax
109: 0f af 45 98 imul -0x68(%rbp),%eax
10d: 89 45 94 mov %eax,-0x6c(%rbp)
110: 8b 45 f8 mov -0x8(%rbp),%eax
113: 3b 45 94 cmp -0x6c(%rbp),%eax
116: 0f 83 e2 02 00 00 jae 3fe <LzmaDec_TryDummy+0x3fe>
11c: 8b 45 94 mov -0x6c(%rbp),%eax
11f: 89 45 fc mov %eax,-0x4(%rbp)
122: 48 8b 45 a0 mov -0x60(%rbp),%rax
126: 48 05 6c 0e 00 00 add $0xe6c,%rax
12c: 48 89 45 e8 mov %rax,-0x18(%rbp)
130: 48 8b 85 78 ff ff ff mov -0x88(%rbp),%rax
137: 8b 40 3c mov 0x3c(%rax),%eax
13a: 85 c0 test %eax,%eax
13c: 75 12 jne 150 <LzmaDec_TryDummy+0x150>
13e: 48 8b 85 78 ff ff ff mov -0x88(%rbp),%rax
145: 8b 40 38 mov 0x38(%rax),%eax
148: 85 c0 test %eax,%eax
14a: 0f 84 a2 00 00 00 je 1f2 <LzmaDec_TryDummy+0x1f2>
150: 48 8b 85 78 ff ff ff mov -0x88(%rbp),%rax
157: 8b 50 38 mov 0x38(%rax),%edx
15a: 48 8b 85 78 ff ff ff mov -0x88(%rbp),%rax
161: 8b 40 04 mov 0x4(%rax),%eax
164: be 01 00 00 00 mov $0x1,%esi
169: 89 c1 mov %eax,%ecx
16b: d3 e6 shl %cl,%esi
16d: 89 f0 mov %esi,%eax
16f: 83 e8 01 sub $0x1,%eax
172: 21 c2 and %eax,%edx
174: 48 8b 85 78 ff ff ff mov -0x88(%rbp),%rax
17b: 8b 00 mov (%rax),%eax
17d: 89 d6 mov %edx,%esi
17f: 89 c1 mov %eax,%ecx
181: d3 e6 shl %cl,%esi
183: 48 8b 85 78 ff ff ff mov -0x88(%rbp),%rax
18a: 48 8b 50 18 mov 0x18(%rax),%rdx
18e: 48 8b 85 78 ff ff ff mov -0x88(%rbp),%rax
195: 8b 40 30 mov 0x30(%rax),%eax
198: 85 c0 test %eax,%eax
19a: 75 11 jne 1ad <LzmaDec_TryDummy+0x1ad>
19c: 48 8b 85 78 ff ff ff mov -0x88(%rbp),%rax
1a3: 8b 40 34 mov 0x34(%rax),%eax
1a6: 83 e8 01 sub $0x1,%eax
1a9: 89 c0 mov %eax,%eax
1ab: eb 0f jmp 1bc <LzmaDec_TryDummy+0x1bc>
1ad: 48 8b 85 78 ff ff ff mov -0x88(%rbp),%rax
1b4: 8b 40 30 mov 0x30(%rax),%eax
1b7: 83 e8 01 sub $0x1,%eax
1ba: 89 c0 mov %eax,%eax
1bc: 48 01 d0 add %rdx,%rax
1bf: 0f b6 00 movzbl (%rax),%eax
1c2: 0f b6 d0 movzbl %al,%edx
1c5: 48 8b 85 78 ff ff ff mov -0x88(%rbp),%rax
1cc: 8b 00 mov (%rax),%eax
1ce: b9 08 00 00 00 mov $0x8,%ecx
1d3: 29 c1 sub %eax,%ecx
1d5: 89 c8 mov %ecx,%eax
1d7: 89 c1 mov %eax,%ecx
1d9: d3 fa sar %cl,%edx
1db: 89 d0 mov %edx,%eax
1dd: 8d 14 06 lea (%rsi,%rax,1),%edx
1e0: 89 d0 mov %edx,%eax
1e2: 01 c0 add %eax,%eax
1e4: 01 d0 add %edx,%eax
1e6: c1 e0 08 shl $0x8,%eax
1e9: 89 c0 mov %eax,%eax
1eb: 48 01 c0 add %rax,%rax
1ee: 48 01 45 e8 add %rax,-0x18(%rbp)
1f2: 83 7d f4 06 cmpl $0x6,-0xc(%rbp)
1f6: 0f 87 b5 00 00 00 ja 2b1 <LzmaDec_TryDummy+0x2b1>
1fc: c7 45 e4 01 00 00 00 movl $0x1,-0x1c(%rbp)
203: 8b 45 e4 mov -0x1c(%rbp),%eax
206: 48 8d 14 00 lea (%rax,%rax,1),%rdx
20a: 48 8b 45 e8 mov -0x18(%rbp),%rax
20e: 48 01 d0 add %rdx,%rax
211: 0f b7 00 movzwl (%rax),%eax
214: 0f b7 c0 movzwl %ax,%eax
217: 89 45 98 mov %eax,-0x68(%rbp)
21a: 81 7d fc ff ff ff 00 cmpl $0xffffff,-0x4(%rbp)
221: 77 40 ja 263 <LzmaDec_TryDummy+0x263>
223: 48 8b 85 70 ff ff ff mov -0x90(%rbp),%rax
22a: 48 3b 45 a8 cmp -0x58(%rbp),%rax
22e: 72 0a jb 23a <LzmaDec_TryDummy+0x23a>
230: b8 00 00 00 00 mov $0x0,%eax
235: e9 58 0a 00 00 jmpq c92 <LzmaDec_TryDummy+0xc92>
23a: c1 65 fc 08 shll $0x8,-0x4(%rbp)
23e: 8b 45 f8 mov -0x8(%rbp),%eax
241: c1 e0 08 shl $0x8,%eax
244: 89 c1 mov %eax,%ecx
246: 48 8b 85 70 ff ff ff mov -0x90(%rbp),%rax
24d: 48 8d 50 01 lea 0x1(%rax),%rdx
251: 48 89 95 70 ff ff ff mov %rdx,-0x90(%rbp)
258: 0f b6 00 movzbl (%rax),%eax
25b: 0f b6 c0 movzbl %al,%eax
25e: 09 c8 or %ecx,%eax
260: 89 45 f8 mov %eax,-0x8(%rbp)
263: 8b 45 fc mov -0x4(%rbp),%eax
266: c1 e8 0b shr $0xb,%eax
269: 0f af 45 98 imul -0x68(%rbp),%eax
26d: 89 45 94 mov %eax,-0x6c(%rbp)
270: 8b 45 f8 mov -0x8(%rbp),%eax
273: 3b 45 94 cmp -0x6c(%rbp),%eax
276: 73 10 jae 288 <LzmaDec_TryDummy+0x288>
278: 8b 45 94 mov -0x6c(%rbp),%eax
27b: 89 45 fc mov %eax,-0x4(%rbp)
27e: 8b 45 e4 mov -0x1c(%rbp),%eax
281: 01 c0 add %eax,%eax
283: 89 45 e4 mov %eax,-0x1c(%rbp)
286: eb 17 jmp 29f <LzmaDec_TryDummy+0x29f>
288: 8b 45 94 mov -0x6c(%rbp),%eax
28b: 29 45 fc sub %eax,-0x4(%rbp)
28e: 8b 45 94 mov -0x6c(%rbp),%eax
291: 29 45 f8 sub %eax,-0x8(%rbp)
294: 8b 45 e4 mov -0x1c(%rbp),%eax
297: 01 c0 add %eax,%eax
299: 83 c0 01 add $0x1,%eax
29c: 89 45 e4 mov %eax,-0x1c(%rbp)
29f: 81 7d e4 ff 00 00 00 cmpl $0xff,-0x1c(%rbp)
2a6: 0f 86 57 ff ff ff jbe 203 <LzmaDec_TryDummy+0x203>
2ac: e9 41 01 00 00 jmpq 3f2 <LzmaDec_TryDummy+0x3f2>
2b1: 48 8b 85 78 ff ff ff mov -0x88(%rbp),%rax
2b8: 48 8b 50 18 mov 0x18(%rax),%rdx
2bc: 48 8b 85 78 ff ff ff mov -0x88(%rbp),%rax
2c3: 8b 48 30 mov 0x30(%rax),%ecx
2c6: 48 8b 85 78 ff ff ff mov -0x88(%rbp),%rax
2cd: 8b 40 44 mov 0x44(%rax),%eax
2d0: 89 ce mov %ecx,%esi
2d2: 29 c6 sub %eax,%esi
2d4: 48 8b 85 78 ff ff ff mov -0x88(%rbp),%rax
2db: 8b 48 30 mov 0x30(%rax),%ecx
2de: 48 8b 85 78 ff ff ff mov -0x88(%rbp),%rax
2e5: 8b 40 44 mov 0x44(%rax),%eax
2e8: 39 c1 cmp %eax,%ecx
2ea: 73 0c jae 2f8 <LzmaDec_TryDummy+0x2f8>
2ec: 48 8b 85 78 ff ff ff mov -0x88(%rbp),%rax
2f3: 8b 40 34 mov 0x34(%rax),%eax
2f6: eb 05 jmp 2fd <LzmaDec_TryDummy+0x2fd>
2f8: b8 00 00 00 00 mov $0x0,%eax
2fd: 01 f0 add %esi,%eax
2ff: 89 c0 mov %eax,%eax
301: 48 01 d0 add %rdx,%rax
304: 0f b6 00 movzbl (%rax),%eax
307: 0f b6 c0 movzbl %al,%eax
30a: 89 45 e0 mov %eax,-0x20(%rbp)
30d: c7 45 dc 00 01 00 00 movl $0x100,-0x24(%rbp)
314: c7 45 d8 01 00 00 00 movl $0x1,-0x28(%rbp)
31b: d1 65 e0 shll -0x20(%rbp)
31e: 8b 45 e0 mov -0x20(%rbp),%eax
321: 23 45 dc and -0x24(%rbp),%eax
324: 89 45 90 mov %eax,-0x70(%rbp)
327: 8b 55 dc mov -0x24(%rbp),%edx
32a: 8b 45 90 mov -0x70(%rbp),%eax
32d: 48 01 c2 add %rax,%rdx
330: 8b 45 d8 mov -0x28(%rbp),%eax
333: 48 01 d0 add %rdx,%rax
336: 48 8d 14 00 lea (%rax,%rax,1),%rdx
33a: 48 8b 45 e8 mov -0x18(%rbp),%rax
33e: 48 01 d0 add %rdx,%rax
341: 48 89 45 88 mov %rax,-0x78(%rbp)
345: 48 8b 45 88 mov -0x78(%rbp),%rax
349: 0f b7 00 movzwl (%rax),%eax
34c: 0f b7 c0 movzwl %ax,%eax
34f: 89 45 98 mov %eax,-0x68(%rbp)
352: 81 7d fc ff ff ff 00 cmpl $0xffffff,-0x4(%rbp)
359: 77 40 ja 39b <LzmaDec_TryDummy+0x39b>
35b: 48 8b 85 70 ff ff ff mov -0x90(%rbp),%rax
362: 48 3b 45 a8 cmp -0x58(%rbp),%rax
366: 72 0a jb 372 <LzmaDec_TryDummy+0x372>
368: b8 00 00 00 00 mov $0x0,%eax
36d: e9 20 09 00 00 jmpq c92 <LzmaDec_TryDummy+0xc92>
372: c1 65 fc 08 shll $0x8,-0x4(%rbp)
376: 8b 45 f8 mov -0x8(%rbp),%eax
379: c1 e0 08 shl $0x8,%eax
37c: 89 c1 mov %eax,%ecx
37e: 48 8b 85 70 ff ff ff mov -0x90(%rbp),%rax
385: 48 8d 50 01 lea 0x1(%rax),%rdx
389: 48 89 95 70 ff ff ff mov %rdx,-0x90(%rbp)
390: 0f b6 00 movzbl (%rax),%eax
393: 0f b6 c0 movzbl %al,%eax
396: 09 c8 or %ecx,%eax
398: 89 45 f8 mov %eax,-0x8(%rbp)
39b: 8b 45 fc mov -0x4(%rbp),%eax
39e: c1 e8 0b shr $0xb,%eax
3a1: 0f af 45 98 imul -0x68(%rbp),%eax
3a5: 89 45 94 mov %eax,-0x6c(%rbp)
3a8: 8b 45 f8 mov -0x8(%rbp),%eax
3ab: 3b 45 94 cmp -0x6c(%rbp),%eax
3ae: 73 18 jae 3c8 <LzmaDec_TryDummy+0x3c8>
3b0: 8b 45 94 mov -0x6c(%rbp),%eax
3b3: 89 45 fc mov %eax,-0x4(%rbp)
3b6: 8b 45 d8 mov -0x28(%rbp),%eax
3b9: 01 c0 add %eax,%eax
3bb: 89 45 d8 mov %eax,-0x28(%rbp)
3be: 8b 45 90 mov -0x70(%rbp),%eax
3c1: f7 d0 not %eax
3c3: 21 45 dc and %eax,-0x24(%rbp)
3c6: eb 1d jmp 3e5 <LzmaDec_TryDummy+0x3e5>
3c8: 8b 45 94 mov -0x6c(%rbp),%eax
3cb: 29 45 fc sub %eax,-0x4(%rbp)
3ce: 8b 45 94 mov -0x6c(%rbp),%eax
3d1: 29 45 f8 sub %eax,-0x8(%rbp)
3d4: 8b 45 d8 mov -0x28(%rbp),%eax
3d7: 01 c0 add %eax,%eax
3d9: 83 c0 01 add $0x1,%eax
3dc: 89 45 d8 mov %eax,-0x28(%rbp)
3df: 8b 45 90 mov -0x70(%rbp),%eax
3e2: 21 45 dc and %eax,-0x24(%rbp)
3e5: 81 7d d8 ff 00 00 00 cmpl $0xff,-0x28(%rbp)
3ec: 0f 86 29 ff ff ff jbe 31b <LzmaDec_TryDummy+0x31b>
3f2: c7 45 f0 01 00 00 00 movl $0x1,-0x10(%rbp)
3f9: e9 4b 08 00 00 jmpq c49 <LzmaDec_TryDummy+0xc49>
3fe: 8b 45 94 mov -0x6c(%rbp),%eax
401: 29 45 fc sub %eax,-0x4(%rbp)
404: 8b 45 94 mov -0x6c(%rbp),%eax
407: 29 45 f8 sub %eax,-0x8(%rbp)
40a: 8b 45 f4 mov -0xc(%rbp),%eax
40d: 48 05 c0 00 00 00 add $0xc0,%rax
413: 48 8d 14 00 lea (%rax,%rax,1),%rdx
417: 48 8b 45 a0 mov -0x60(%rbp),%rax
41b: 48 01 d0 add %rdx,%rax
41e: 48 89 45 e8 mov %rax,-0x18(%rbp)
422: 48 8b 45 e8 mov -0x18(%rbp),%rax
426: 0f b7 00 movzwl (%rax),%eax
429: 0f b7 c0 movzwl %ax,%eax
42c: 89 45 98 mov %eax,-0x68(%rbp)
42f: 81 7d fc ff ff ff 00 cmpl $0xffffff,-0x4(%rbp)
436: 77 40 ja 478 <LzmaDec_TryDummy+0x478>
438: 48 8b 85 70 ff ff ff mov -0x90(%rbp),%rax
43f: 48 3b 45 a8 cmp -0x58(%rbp),%rax
443: 72 0a jb 44f <LzmaDec_TryDummy+0x44f>
445: b8 00 00 00 00 mov $0x0,%eax
44a: e9 43 08 00 00 jmpq c92 <LzmaDec_TryDummy+0xc92>
44f: c1 65 fc 08 shll $0x8,-0x4(%rbp)
453: 8b 45 f8 mov -0x8(%rbp),%eax
456: c1 e0 08 shl $0x8,%eax
459: 89 c1 mov %eax,%ecx
45b: 48 8b 85 70 ff ff ff mov -0x90(%rbp),%rax
462: 48 8d 50 01 lea 0x1(%rax),%rdx
466: 48 89 95 70 ff ff ff mov %rdx,-0x90(%rbp)
46d: 0f b6 00 movzbl (%rax),%eax
470: 0f b6 c0 movzbl %al,%eax
473: 09 c8 or %ecx,%eax
475: 89 45 f8 mov %eax,-0x8(%rbp)
478: 8b 45 fc mov -0x4(%rbp),%eax
47b: c1 e8 0b shr $0xb,%eax
47e: 0f af 45 98 imul -0x68(%rbp),%eax
482: 89 45 94 mov %eax,-0x6c(%rbp)
485: 8b 45 f8 mov -0x8(%rbp),%eax
488: 3b 45 94 cmp -0x6c(%rbp),%eax
48b: 73 27 jae 4b4 <LzmaDec_TryDummy+0x4b4>
48d: 8b 45 94 mov -0x6c(%rbp),%eax
490: 89 45 fc mov %eax,-0x4(%rbp)
493: c7 45 f4 00 00 00 00 movl $0x0,-0xc(%rbp)
49a: 48 8b 45 a0 mov -0x60(%rbp),%rax
49e: 48 05 64 06 00 00 add $0x664,%rax
4a4: 48 89 45 e8 mov %rax,-0x18(%rbp)
4a8: c7 45 f0 02 00 00 00 movl $0x2,-0x10(%rbp)
4af: e9 ea 02 00 00 jmpq 79e <LzmaDec_TryDummy+0x79e>
4b4: 8b 45 94 mov -0x6c(%rbp),%eax
4b7: 29 45 fc sub %eax,-0x4(%rbp)
4ba: 8b 45 94 mov -0x6c(%rbp),%eax
4bd: 29 45 f8 sub %eax,-0x8(%rbp)
4c0: c7 45 f0 03 00 00 00 movl $0x3,-0x10(%rbp)
4c7: 8b 45 f4 mov -0xc(%rbp),%eax
4ca: 48 05 cc 00 00 00 add $0xcc,%rax
4d0: 48 8d 14 00 lea (%rax,%rax,1),%rdx
4d4: 48 8b 45 a0 mov -0x60(%rbp),%rax
4d8: 48 01 d0 add %rdx,%rax
4db: 48 89 45 e8 mov %rax,-0x18(%rbp)
4df: 48 8b 45 e8 mov -0x18(%rbp),%rax
4e3: 0f b7 00 movzwl (%rax),%eax
4e6: 0f b7 c0 movzwl %ax,%eax
4e9: 89 45 98 mov %eax,-0x68(%rbp)
4ec: 81 7d fc ff ff ff 00 cmpl $0xffffff,-0x4(%rbp)
4f3: 77 40 ja 535 <LzmaDec_TryDummy+0x535>
4f5: 48 8b 85 70 ff ff ff mov -0x90(%rbp),%rax
4fc: 48 3b 45 a8 cmp -0x58(%rbp),%rax
500: 72 0a jb 50c <LzmaDec_TryDummy+0x50c>
502: b8 00 00 00 00 mov $0x0,%eax
507: e9 86 07 00 00 jmpq c92 <LzmaDec_TryDummy+0xc92>
50c: c1 65 fc 08 shll $0x8,-0x4(%rbp)
510: 8b 45 f8 mov -0x8(%rbp),%eax
513: c1 e0 08 shl $0x8,%eax
516: 89 c1 mov %eax,%ecx
518: 48 8b 85 70 ff ff ff mov -0x90(%rbp),%rax
51f: 48 8d 50 01 lea 0x1(%rax),%rdx
523: 48 89 95 70 ff ff ff mov %rdx,-0x90(%rbp)
52a: 0f b6 00 movzbl (%rax),%eax
52d: 0f b6 c0 movzbl %al,%eax
530: 09 c8 or %ecx,%eax
532: 89 45 f8 mov %eax,-0x8(%rbp)
535: 8b 45 fc mov -0x4(%rbp),%eax
538: c1 e8 0b shr $0xb,%eax
53b: 0f af 45 98 imul -0x68(%rbp),%eax
53f: 89 45 94 mov %eax,-0x6c(%rbp)
542: 8b 45 f8 mov -0x8(%rbp),%eax
545: 3b 45 94 cmp -0x6c(%rbp),%eax
548: 0f 83 fe 00 00 00 jae 64c <LzmaDec_TryDummy+0x64c>
54e: 8b 45 94 mov -0x6c(%rbp),%eax
551: 89 45 fc mov %eax,-0x4(%rbp)
554: 8b 45 f4 mov -0xc(%rbp),%eax
557: c1 e0 04 shl $0x4,%eax
55a: 89 c2 mov %eax,%edx
55c: 8b 45 9c mov -0x64(%rbp),%eax
55f: 48 01 d0 add %rdx,%rax
562: 48 05 f0 00 00 00 add $0xf0,%rax
568: 48 8d 14 00 lea (%rax,%rax,1),%rdx
56c: 48 8b 45 a0 mov -0x60(%rbp),%rax
570: 48 01 d0 add %rdx,%rax
573: 48 89 45 e8 mov %rax,-0x18(%rbp)
577: 48 8b 45 e8 mov -0x18(%rbp),%rax
57b: 0f b7 00 movzwl (%rax),%eax
57e: 0f b7 c0 movzwl %ax,%eax
581: 89 45 98 mov %eax,-0x68(%rbp)
584: 81 7d fc ff ff ff 00 cmpl $0xffffff,-0x4(%rbp)
58b: 77 40 ja 5cd <LzmaDec_TryDummy+0x5cd>
58d: 48 8b 85 70 ff ff ff mov -0x90(%rbp),%rax
594: 48 3b 45 a8 cmp -0x58(%rbp),%rax
598: 72 0a jb 5a4 <LzmaDec_TryDummy+0x5a4>
59a: b8 00 00 00 00 mov $0x0,%eax
59f: e9 ee 06 00 00 jmpq c92 <LzmaDec_TryDummy+0xc92>
5a4: c1 65 fc 08 shll $0x8,-0x4(%rbp)
5a8: 8b 45 f8 mov -0x8(%rbp),%eax
5ab: c1 e0 08 shl $0x8,%eax
5ae: 89 c1 mov %eax,%ecx
5b0: 48 8b 85 70 ff ff ff mov -0x90(%rbp),%rax
5b7: 48 8d 50 01 lea 0x1(%rax),%rdx
5bb: 48 89 95 70 ff ff ff mov %rdx,-0x90(%rbp)
5c2: 0f b6 00 movzbl (%rax),%eax
5c5: 0f b6 c0 movzbl %al,%eax
5c8: 09 c8 or %ecx,%eax
5ca: 89 45 f8 mov %eax,-0x8(%rbp)
5cd: 8b 45 fc mov -0x4(%rbp),%eax
5d0: c1 e8 0b shr $0xb,%eax
5d3: 0f af 45 98 imul -0x68(%rbp),%eax
5d7: 89 45 94 mov %eax,-0x6c(%rbp)
5da: 8b 45 f8 mov -0x8(%rbp),%eax
5dd: 3b 45 94 cmp -0x6c(%rbp),%eax
5e0: 73 59 jae 63b <LzmaDec_TryDummy+0x63b>
5e2: 8b 45 94 mov -0x6c(%rbp),%eax
5e5: 89 45 fc mov %eax,-0x4(%rbp)
5e8: 81 7d fc ff ff ff 00 cmpl $0xffffff,-0x4(%rbp)
5ef: 77 40 ja 631 <LzmaDec_TryDummy+0x631>
5f1: 48 8b 85 70 ff ff ff mov -0x90(%rbp),%rax
5f8: 48 3b 45 a8 cmp -0x58(%rbp),%rax
5fc: 72 0a jb 608 <LzmaDec_TryDummy+0x608>
5fe: b8 00 00 00 00 mov $0x0,%eax
603: e9 8a 06 00 00 jmpq c92 <LzmaDec_TryDummy+0xc92>
608: c1 65 fc 08 shll $0x8,-0x4(%rbp)
60c: 8b 45 f8 mov -0x8(%rbp),%eax
60f: c1 e0 08 shl $0x8,%eax
612: 89 c1 mov %eax,%ecx
614: 48 8b 85 70 ff ff ff mov -0x90(%rbp),%rax
61b: 48 8d 50 01 lea 0x1(%rax),%rdx
61f: 48 89 95 70 ff ff ff mov %rdx,-0x90(%rbp)
626: 0f b6 00 movzbl (%rax),%eax
629: 0f b6 c0 movzbl %al,%eax
62c: 09 c8 or %ecx,%eax
62e: 89 45 f8 mov %eax,-0x8(%rbp)
631: b8 03 00 00 00 mov $0x3,%eax
636: e9 57 06 00 00 jmpq c92 <LzmaDec_TryDummy+0xc92>
63b: 8b 45 94 mov -0x6c(%rbp),%eax
63e: 29 45 fc sub %eax,-0x4(%rbp)
641: 8b 45 94 mov -0x6c(%rbp),%eax
644: 29 45 f8 sub %eax,-0x8(%rbp)
647: e9 3d 01 00 00 jmpq 789 <LzmaDec_TryDummy+0x789>
64c: 8b 45 94 mov -0x6c(%rbp),%eax
64f: 29 45 fc sub %eax,-0x4(%rbp)
652: 8b 45 94 mov -0x6c(%rbp),%eax
655: 29 45 f8 sub %eax,-0x8(%rbp)
658: 8b 45 f4 mov -0xc(%rbp),%eax
65b: 48 05 d8 00 00 00 add $0xd8,%rax
661: 48 8d 14 00 lea (%rax,%rax,1),%rdx
665: 48 8b 45 a0 mov -0x60(%rbp),%rax
669: 48 01 d0 add %rdx,%rax
66c: 48 89 45 e8 mov %rax,-0x18(%rbp)
670: 48 8b 45 e8 mov -0x18(%rbp),%rax
674: 0f b7 00 movzwl (%rax),%eax
677: 0f b7 c0 movzwl %ax,%eax
67a: 89 45 98 mov %eax,-0x68(%rbp)
67d: 81 7d fc ff ff ff 00 cmpl $0xffffff,-0x4(%rbp)
684: 77 40 ja 6c6 <LzmaDec_TryDummy+0x6c6>
686: 48 8b 85 70 ff ff ff mov -0x90(%rbp),%rax
68d: 48 3b 45 a8 cmp -0x58(%rbp),%rax
691: 72 0a jb 69d <LzmaDec_TryDummy+0x69d>
693: b8 00 00 00 00 mov $0x0,%eax
698: e9 f5 05 00 00 jmpq c92 <LzmaDec_TryDummy+0xc92>
69d: c1 65 fc 08 shll $0x8,-0x4(%rbp)
6a1: 8b 45 f8 mov -0x8(%rbp),%eax
6a4: c1 e0 08 shl $0x8,%eax
6a7: 89 c1 mov %eax,%ecx
6a9: 48 8b 85 70 ff ff ff mov -0x90(%rbp),%rax
6b0: 48 8d 50 01 lea 0x1(%rax),%rdx
6b4: 48 89 95 70 ff ff ff mov %rdx,-0x90(%rbp)
6bb: 0f b6 00 movzbl (%rax),%eax
6be: 0f b6 c0 movzbl %al,%eax
6c1: 09 c8 or %ecx,%eax
6c3: 89 45 f8 mov %eax,-0x8(%rbp)
6c6: 8b 45 fc mov -0x4(%rbp),%eax
6c9: c1 e8 0b shr $0xb,%eax
6cc: 0f af 45 98 imul -0x68(%rbp),%eax
6d0: 89 45 94 mov %eax,-0x6c(%rbp)
6d3: 8b 45 f8 mov -0x8(%rbp),%eax
6d6: 3b 45 94 cmp -0x6c(%rbp),%eax
6d9: 73 0b jae 6e6 <LzmaDec_TryDummy+0x6e6>
6db: 8b 45 94 mov -0x6c(%rbp),%eax
6de: 89 45 fc mov %eax,-0x4(%rbp)
6e1: e9 a3 00 00 00 jmpq 789 <LzmaDec_TryDummy+0x789>
6e6: 8b 45 94 mov -0x6c(%rbp),%eax
6e9: 29 45 fc sub %eax,-0x4(%rbp)
6ec: 8b 45 94 mov -0x6c(%rbp),%eax
6ef: 29 45 f8 sub %eax,-0x8(%rbp)
6f2: 8b 45 f4 mov -0xc(%rbp),%eax
6f5: 48 05 e4 00 00 00 add $0xe4,%rax
6fb: 48 8d 14 00 lea (%rax,%rax,1),%rdx
6ff: 48 8b 45 a0 mov -0x60(%rbp),%rax
703: 48 01 d0 add %rdx,%rax
706: 48 89 45 e8 mov %rax,-0x18(%rbp)
70a: 48 8b 45 e8 mov -0x18(%rbp),%rax
70e: 0f b7 00 movzwl (%rax),%eax
711: 0f b7 c0 movzwl %ax,%eax
714: 89 45 98 mov %eax,-0x68(%rbp)
717: 81 7d fc ff ff ff 00 cmpl $0xffffff,-0x4(%rbp)
71e: 77 40 ja 760 <LzmaDec_TryDummy+0x760>
720: 48 8b 85 70 ff ff ff mov -0x90(%rbp),%rax
727: 48 3b 45 a8 cmp -0x58(%rbp),%rax
72b: 72 0a jb 737 <LzmaDec_TryDummy+0x737>
72d: b8 00 00 00 00 mov $0x0,%eax
732: e9 5b 05 00 00 jmpq c92 <LzmaDec_TryDummy+0xc92>
737: c1 65 fc 08 shll $0x8,-0x4(%rbp)
73b: 8b 45 f8 mov -0x8(%rbp),%eax
73e: c1 e0 08 shl $0x8,%eax
741: 89 c1 mov %eax,%ecx
743: 48 8b 85 70 ff ff ff mov -0x90(%rbp),%rax
74a: 48 8d 50 01 lea 0x1(%rax),%rdx
74e: 48 89 95 70 ff ff ff mov %rdx,-0x90(%rbp)
755: 0f b6 00 movzbl (%rax),%eax
758: 0f b6 c0 movzbl %al,%eax
75b: 09 c8 or %ecx,%eax
75d: 89 45 f8 mov %eax,-0x8(%rbp)
760: 8b 45 fc mov -0x4(%rbp),%eax
763: c1 e8 0b shr $0xb,%eax
766: 0f af 45 98 imul -0x68(%rbp),%eax
76a: 89 45 94 mov %eax,-0x6c(%rbp)
76d: 8b 45 f8 mov -0x8(%rbp),%eax
770: 3b 45 94 cmp -0x6c(%rbp),%eax
773: 73 08 jae 77d <LzmaDec_TryDummy+0x77d>
775: 8b 45 94 mov -0x6c(%rbp),%eax
778: 89 45 fc mov %eax,-0x4(%rbp)
77b: eb 0c jmp 789 <LzmaDec_TryDummy+0x789>
77d: 8b 45 94 mov -0x6c(%rbp),%eax
780: 29 45 fc sub %eax,-0x4(%rbp)
783: 8b 45 94 mov -0x6c(%rbp),%eax
786: 29 45 f8 sub %eax,-0x8(%rbp)
789: c7 45 f4 0c 00 00 00 movl $0xc,-0xc(%rbp)
790: 48 8b 45 a0 mov -0x60(%rbp),%rax
794: 48 05 68 0a 00 00 add $0xa68,%rax
79a: 48 89 45 e8 mov %rax,-0x18(%rbp)
79e: 48 8b 45 e8 mov -0x18(%rbp),%rax
7a2: 48 89 45 c0 mov %rax,-0x40(%rbp)
7a6: 48 8b 45 c0 mov -0x40(%rbp),%rax
7aa: 0f b7 00 movzwl (%rax),%eax
7ad: 0f b7 c0 movzwl %ax,%eax
7b0: 89 45 98 mov %eax,-0x68(%rbp)
7b3: 81 7d fc ff ff ff 00 cmpl $0xffffff,-0x4(%rbp)
7ba: 77 40 ja 7fc <LzmaDec_TryDummy+0x7fc>
7bc: 48 8b 85 70 ff ff ff mov -0x90(%rbp),%rax
7c3: 48 3b 45 a8 cmp -0x58(%rbp),%rax
7c7: 72 0a jb 7d3 <LzmaDec_TryDummy+0x7d3>
7c9: b8 00 00 00 00 mov $0x0,%eax
7ce: e9 bf 04 00 00 jmpq c92 <LzmaDec_TryDummy+0xc92>
7d3: c1 65 fc 08 shll $0x8,-0x4(%rbp)
7d7: 8b 45 f8 mov -0x8(%rbp),%eax
7da: c1 e0 08 shl $0x8,%eax
7dd: 89 c1 mov %eax,%ecx
7df: 48 8b 85 70 ff ff ff mov -0x90(%rbp),%rax
7e6: 48 8d 50 01 lea 0x1(%rax),%rdx
7ea: 48 89 95 70 ff ff ff mov %rdx,-0x90(%rbp)
7f1: 0f b6 00 movzbl (%rax),%eax
7f4: 0f b6 c0 movzbl %al,%eax
7f7: 09 c8 or %ecx,%eax
7f9: 89 45 f8 mov %eax,-0x8(%rbp)
7fc: 8b 45 fc mov -0x4(%rbp),%eax
7ff: c1 e8 0b shr $0xb,%eax
802: 0f af 45 98 imul -0x68(%rbp),%eax
806: 89 45 94 mov %eax,-0x6c(%rbp)
809: 8b 45 f8 mov -0x8(%rbp),%eax
80c: 3b 45 94 cmp -0x6c(%rbp),%eax
80f: 73 34 jae 845 <LzmaDec_TryDummy+0x845>
811: 8b 45 94 mov -0x6c(%rbp),%eax
814: 89 45 fc mov %eax,-0x4(%rbp)
817: 8b 45 9c mov -0x64(%rbp),%eax
81a: c1 e0 03 shl $0x3,%eax
81d: 89 c0 mov %eax,%eax
81f: 48 83 c0 02 add $0x2,%rax
823: 48 8d 14 00 lea (%rax,%rax,1),%rdx
827: 48 8b 45 e8 mov -0x18(%rbp),%rax
82b: 48 01 d0 add %rdx,%rax
82e: 48 89 45 c0 mov %rax,-0x40(%rbp)
832: c7 45 cc 00 00 00 00 movl $0x0,-0x34(%rbp)
839: c7 45 d0 08 00 00 00 movl $0x8,-0x30(%rbp)
840: e9 de 00 00 00 jmpq 923 <LzmaDec_TryDummy+0x923>
845: 8b 45 94 mov -0x6c(%rbp),%eax
848: 29 45 fc sub %eax,-0x4(%rbp)
84b: 8b 45 94 mov -0x6c(%rbp),%eax
84e: 29 45 f8 sub %eax,-0x8(%rbp)
851: 48 8b 45 e8 mov -0x18(%rbp),%rax
855: 48 83 c0 02 add $0x2,%rax
859: 48 89 45 c0 mov %rax,-0x40(%rbp)
85d: 48 8b 45 c0 mov -0x40(%rbp),%rax
861: 0f b7 00 movzwl (%rax),%eax
864: 0f b7 c0 movzwl %ax,%eax
867: 89 45 98 mov %eax,-0x68(%rbp)
86a: 81 7d fc ff ff ff 00 cmpl $0xffffff,-0x4(%rbp)
871: 77 40 ja 8b3 <LzmaDec_TryDummy+0x8b3>
873: 48 8b 85 70 ff ff ff mov -0x90(%rbp),%rax
87a: 48 3b 45 a8 cmp -0x58(%rbp),%rax
87e: 72 0a jb 88a <LzmaDec_TryDummy+0x88a>
880: b8 00 00 00 00 mov $0x0,%eax
885: e9 08 04 00 00 jmpq c92 <LzmaDec_TryDummy+0xc92>
88a: c1 65 fc 08 shll $0x8,-0x4(%rbp)
88e: 8b 45 f8 mov -0x8(%rbp),%eax
891: c1 e0 08 shl $0x8,%eax
894: 89 c1 mov %eax,%ecx
896: 48 8b 85 70 ff ff ff mov -0x90(%rbp),%rax
89d: 48 8d 50 01 lea 0x1(%rax),%rdx
8a1: 48 89 95 70 ff ff ff mov %rdx,-0x90(%rbp)
8a8: 0f b6 00 movzbl (%rax),%eax
8ab: 0f b6 c0 movzbl %al,%eax
8ae: 09 c8 or %ecx,%eax
8b0: 89 45 f8 mov %eax,-0x8(%rbp)
8b3: 8b 45 fc mov -0x4(%rbp),%eax
8b6: c1 e8 0b shr $0xb,%eax
8b9: 0f af 45 98 imul -0x68(%rbp),%eax
8bd: 89 45 94 mov %eax,-0x6c(%rbp)
8c0: 8b 45 f8 mov -0x8(%rbp),%eax
8c3: 3b 45 94 cmp -0x6c(%rbp),%eax
8c6: 73 33 jae 8fb <LzmaDec_TryDummy+0x8fb>
8c8: 8b 45 94 mov -0x6c(%rbp),%eax
8cb: 89 45 fc mov %eax,-0x4(%rbp)
8ce: 8b 45 9c mov -0x64(%rbp),%eax
8d1: c1 e0 03 shl $0x3,%eax
8d4: 89 c0 mov %eax,%eax
8d6: 48 05 82 00 00 00 add $0x82,%rax
8dc: 48 8d 14 00 lea (%rax,%rax,1),%rdx
8e0: 48 8b 45 e8 mov -0x18(%rbp),%rax
8e4: 48 01 d0 add %rdx,%rax
8e7: 48 89 45 c0 mov %rax,-0x40(%rbp)
8eb: c7 45 cc 08 00 00 00 movl $0x8,-0x34(%rbp)
8f2: c7 45 d0 08 00 00 00 movl $0x8,-0x30(%rbp)
8f9: eb 28 jmp 923 <LzmaDec_TryDummy+0x923>
8fb: 8b 45 94 mov -0x6c(%rbp),%eax
8fe: 29 45 fc sub %eax,-0x4(%rbp)
901: 8b 45 94 mov -0x6c(%rbp),%eax
904: 29 45 f8 sub %eax,-0x8(%rbp)
907: 48 8b 45 e8 mov -0x18(%rbp),%rax
90b: 48 05 04 02 00 00 add $0x204,%rax
911: 48 89 45 c0 mov %rax,-0x40(%rbp)
915: c7 45 cc 10 00 00 00 movl $0x10,-0x34(%rbp)
91c: c7 45 d0 00 01 00 00 movl $0x100,-0x30(%rbp)
923: c7 45 d4 01 00 00 00 movl $0x1,-0x2c(%rbp)
92a: 8b 45 d4 mov -0x2c(%rbp),%eax
92d: 48 8d 14 00 lea (%rax,%rax,1),%rdx
931: 48 8b 45 c0 mov -0x40(%rbp),%rax
935: 48 01 d0 add %rdx,%rax
938: 0f b7 00 movzwl (%rax),%eax
93b: 0f b7 c0 movzwl %ax,%eax
93e: 89 45 98 mov %eax,-0x68(%rbp)
941: 81 7d fc ff ff ff 00 cmpl $0xffffff,-0x4(%rbp)
948: 77 40 ja 98a <LzmaDec_TryDummy+0x98a>
94a: 48 8b 85 70 ff ff ff mov -0x90(%rbp),%rax
951: 48 3b 45 a8 cmp -0x58(%rbp),%rax
955: 72 0a jb 961 <LzmaDec_TryDummy+0x961>
957: b8 00 00 00 00 mov $0x0,%eax
95c: e9 31 03 00 00 jmpq c92 <LzmaDec_TryDummy+0xc92>
961: c1 65 fc 08 shll $0x8,-0x4(%rbp)
965: 8b 45 f8 mov -0x8(%rbp),%eax
968: c1 e0 08 shl $0x8,%eax
96b: 89 c1 mov %eax,%ecx
96d: 48 8b 85 70 ff ff ff mov -0x90(%rbp),%rax
974: 48 8d 50 01 lea 0x1(%rax),%rdx
978: 48 89 95 70 ff ff ff mov %rdx,-0x90(%rbp)
97f: 0f b6 00 movzbl (%rax),%eax
982: 0f b6 c0 movzbl %al,%eax
985: 09 c8 or %ecx,%eax
987: 89 45 f8 mov %eax,-0x8(%rbp)
98a: 8b 45 fc mov -0x4(%rbp),%eax
98d: c1 e8 0b shr $0xb,%eax
990: 0f af 45 98 imul -0x68(%rbp),%eax
994: 89 45 94 mov %eax,-0x6c(%rbp)
997: 8b 45 f8 mov -0x8(%rbp),%eax
99a: 3b 45 94 cmp -0x6c(%rbp),%eax
99d: 73 10 jae 9af <LzmaDec_TryDummy+0x9af>
99f: 8b 45 94 mov -0x6c(%rbp),%eax
9a2: 89 45 fc mov %eax,-0x4(%rbp)
9a5: 8b 45 d4 mov -0x2c(%rbp),%eax
9a8: 01 c0 add %eax,%eax
9aa: 89 45 d4 mov %eax,-0x2c(%rbp)
9ad: eb 17 jmp 9c6 <LzmaDec_TryDummy+0x9c6>
9af: 8b 45 94 mov -0x6c(%rbp),%eax
9b2: 29 45 fc sub %eax,-0x4(%rbp)
9b5: 8b 45 94 mov -0x6c(%rbp),%eax
9b8: 29 45 f8 sub %eax,-0x8(%rbp)
9bb: 8b 45 d4 mov -0x2c(%rbp),%eax
9be: 01 c0 add %eax,%eax
9c0: 83 c0 01 add $0x1,%eax
9c3: 89 45 d4 mov %eax,-0x2c(%rbp)
9c6: 8b 45 d4 mov -0x2c(%rbp),%eax
9c9: 3b 45 d0 cmp -0x30(%rbp),%eax
9cc: 0f 82 58 ff ff ff jb 92a <LzmaDec_TryDummy+0x92a>
9d2: 8b 45 d0 mov -0x30(%rbp),%eax
9d5: 29 45 d4 sub %eax,-0x2c(%rbp)
9d8: 8b 45 cc mov -0x34(%rbp),%eax
9db: 01 45 d4 add %eax,-0x2c(%rbp)
9de: 83 7d f4 03 cmpl $0x3,-0xc(%rbp)
9e2: 0f 87 61 02 00 00 ja c49 <LzmaDec_TryDummy+0xc49>
9e8: b8 03 00 00 00 mov $0x3,%eax
9ed: 83 7d d4 03 cmpl $0x3,-0x2c(%rbp)
9f1: 0f 46 45 d4 cmovbe -0x2c(%rbp),%eax
9f5: c1 e0 06 shl $0x6,%eax
9f8: 89 c0 mov %eax,%eax
9fa: 48 05 b0 01 00 00 add $0x1b0,%rax
a00: 48 8d 14 00 lea (%rax,%rax,1),%rdx
a04: 48 8b 45 a0 mov -0x60(%rbp),%rax
a08: 48 01 d0 add %rdx,%rax
a0b: 48 89 45 e8 mov %rax,-0x18(%rbp)
a0f: c7 45 bc 01 00 00 00 movl $0x1,-0x44(%rbp)
a16: 8b 45 bc mov -0x44(%rbp),%eax
a19: 48 8d 14 00 lea (%rax,%rax,1),%rdx
a1d: 48 8b 45 e8 mov -0x18(%rbp),%rax
a21: 48 01 d0 add %rdx,%rax
a24: 0f b7 00 movzwl (%rax),%eax
a27: 0f b7 c0 movzwl %ax,%eax
a2a: 89 45 98 mov %eax,-0x68(%rbp)
a2d: 81 7d fc ff ff ff 00 cmpl $0xffffff,-0x4(%rbp)
a34: 77 40 ja a76 <LzmaDec_TryDummy+0xa76>
a36: 48 8b 85 70 ff ff ff mov -0x90(%rbp),%rax
a3d: 48 3b 45 a8 cmp -0x58(%rbp),%rax
a41: 72 0a jb a4d <LzmaDec_TryDummy+0xa4d>
a43: b8 00 00 00 00 mov $0x0,%eax
a48: e9 45 02 00 00 jmpq c92 <LzmaDec_TryDummy+0xc92>
a4d: c1 65 fc 08 shll $0x8,-0x4(%rbp)
a51: 8b 45 f8 mov -0x8(%rbp),%eax
a54: c1 e0 08 shl $0x8,%eax
a57: 89 c1 mov %eax,%ecx
a59: 48 8b 85 70 ff ff ff mov -0x90(%rbp),%rax
a60: 48 8d 50 01 lea 0x1(%rax),%rdx
a64: 48 89 95 70 ff ff ff mov %rdx,-0x90(%rbp)
a6b: 0f b6 00 movzbl (%rax),%eax
a6e: 0f b6 c0 movzbl %al,%eax
a71: 09 c8 or %ecx,%eax
a73: 89 45 f8 mov %eax,-0x8(%rbp)
a76: 8b 45 fc mov -0x4(%rbp),%eax
a79: c1 e8 0b shr $0xb,%eax
a7c: 0f af 45 98 imul -0x68(%rbp),%eax
a80: 89 45 94 mov %eax,-0x6c(%rbp)
a83: 8b 45 f8 mov -0x8(%rbp),%eax
a86: 3b 45 94 cmp -0x6c(%rbp),%eax
a89: 73 10 jae a9b <LzmaDec_TryDummy+0xa9b>
a8b: 8b 45 94 mov -0x6c(%rbp),%eax
a8e: 89 45 fc mov %eax,-0x4(%rbp)
a91: 8b 45 bc mov -0x44(%rbp),%eax
a94: 01 c0 add %eax,%eax
a96: 89 45 bc mov %eax,-0x44(%rbp)
a99: eb 17 jmp ab2 <LzmaDec_TryDummy+0xab2>
a9b: 8b 45 94 mov -0x6c(%rbp),%eax
a9e: 29 45 fc sub %eax,-0x4(%rbp)
aa1: 8b 45 94 mov -0x6c(%rbp),%eax
aa4: 29 45 f8 sub %eax,-0x8(%rbp)
aa7: 8b 45 bc mov -0x44(%rbp),%eax
aaa: 01 c0 add %eax,%eax
aac: 83 c0 01 add $0x1,%eax
aaf: 89 45 bc mov %eax,-0x44(%rbp)
ab2: 83 7d bc 3f cmpl $0x3f,-0x44(%rbp)
ab6: 0f 86 5a ff ff ff jbe a16 <LzmaDec_TryDummy+0xa16>
abc: 83 6d bc 40 subl $0x40,-0x44(%rbp)
ac0: 83 7d bc 03 cmpl $0x3,-0x44(%rbp)
ac4: 0f 86 7f 01 00 00 jbe c49 <LzmaDec_TryDummy+0xc49>
aca: 8b 45 bc mov -0x44(%rbp),%eax
acd: d1 e8 shr %eax
acf: 83 e8 01 sub $0x1,%eax
ad2: 89 45 b8 mov %eax,-0x48(%rbp)
ad5: 83 7d bc 0d cmpl $0xd,-0x44(%rbp)
ad9: 77 3c ja b17 <LzmaDec_TryDummy+0xb17>
adb: 8b 45 bc mov -0x44(%rbp),%eax
ade: 83 e0 01 and $0x1,%eax
ae1: 83 c8 02 or $0x2,%eax
ae4: 89 c2 mov %eax,%edx
ae6: 8b 45 b8 mov -0x48(%rbp),%eax
ae9: 89 c1 mov %eax,%ecx
aeb: d3 e2 shl %cl,%edx
aed: 89 d0 mov %edx,%eax
aef: 89 c2 mov %eax,%edx
af1: 8b 45 bc mov -0x44(%rbp),%eax
af4: 48 29 c2 sub %rax,%rdx
af7: 48 89 d0 mov %rdx,%rax
afa: 48 05 b0 02 00 00 add $0x2b0,%rax
b00: 48 01 c0 add %rax,%rax
b03: 48 8d 50 fe lea -0x2(%rax),%rdx
b07: 48 8b 45 a0 mov -0x60(%rbp),%rax
b0b: 48 01 d0 add %rdx,%rax
b0e: 48 89 45 e8 mov %rax,-0x18(%rbp)
b12: e9 81 00 00 00 jmpq b98 <LzmaDec_TryDummy+0xb98>
b17: 83 6d b8 04 subl $0x4,-0x48(%rbp)
b1b: 81 7d fc ff ff ff 00 cmpl $0xffffff,-0x4(%rbp)
b22: 77 40 ja b64 <LzmaDec_TryDummy+0xb64>
b24: 48 8b 85 70 ff ff ff mov -0x90(%rbp),%rax
b2b: 48 3b 45 a8 cmp -0x58(%rbp),%rax
b2f: 72 0a jb b3b <LzmaDec_TryDummy+0xb3b>
b31: b8 00 00 00 00 mov $0x0,%eax
b36: e9 57 01 00 00 jmpq c92 <LzmaDec_TryDummy+0xc92>
b3b: c1 65 fc 08 shll $0x8,-0x4(%rbp)
b3f: 8b 45 f8 mov -0x8(%rbp),%eax
b42: c1 e0 08 shl $0x8,%eax
b45: 89 c1 mov %eax,%ecx
b47: 48 8b 85 70 ff ff ff mov -0x90(%rbp),%rax
b4e: 48 8d 50 01 lea 0x1(%rax),%rdx
b52: 48 89 95 70 ff ff ff mov %rdx,-0x90(%rbp)
b59: 0f b6 00 movzbl (%rax),%eax
b5c: 0f b6 c0 movzbl %al,%eax
b5f: 09 c8 or %ecx,%eax
b61: 89 45 f8 mov %eax,-0x8(%rbp)
b64: d1 6d fc shrl -0x4(%rbp)
b67: 8b 45 f8 mov -0x8(%rbp),%eax
b6a: 2b 45 fc sub -0x4(%rbp),%eax
b6d: c1 e8 1f shr $0x1f,%eax
b70: 83 e8 01 sub $0x1,%eax
b73: 23 45 fc and -0x4(%rbp),%eax
b76: 29 45 f8 sub %eax,-0x8(%rbp)
b79: 83 6d b8 01 subl $0x1,-0x48(%rbp)
b7d: 83 7d b8 00 cmpl $0x0,-0x48(%rbp)
b81: 75 98 jne b1b <LzmaDec_TryDummy+0xb1b>
b83: 48 8b 45 a0 mov -0x60(%rbp),%rax
b87: 48 05 44 06 00 00 add $0x644,%rax
b8d: 48 89 45 e8 mov %rax,-0x18(%rbp)
b91: c7 45 b8 04 00 00 00 movl $0x4,-0x48(%rbp)
b98: c7 45 b4 01 00 00 00 movl $0x1,-0x4c(%rbp)
b9f: 8b 45 b4 mov -0x4c(%rbp),%eax
ba2: 48 8d 14 00 lea (%rax,%rax,1),%rdx
ba6: 48 8b 45 e8 mov -0x18(%rbp),%rax
baa: 48 01 d0 add %rdx,%rax
bad: 0f b7 00 movzwl (%rax),%eax
bb0: 0f b7 c0 movzwl %ax,%eax
bb3: 89 45 98 mov %eax,-0x68(%rbp)
bb6: 81 7d fc ff ff ff 00 cmpl $0xffffff,-0x4(%rbp)
bbd: 77 40 ja bff <LzmaDec_TryDummy+0xbff>
bbf: 48 8b 85 70 ff ff ff mov -0x90(%rbp),%rax
bc6: 48 3b 45 a8 cmp -0x58(%rbp),%rax
bca: 72 0a jb bd6 <LzmaDec_TryDummy+0xbd6>
bcc: b8 00 00 00 00 mov $0x0,%eax
bd1: e9 bc 00 00 00 jmpq c92 <LzmaDec_TryDummy+0xc92>
bd6: c1 65 fc 08 shll $0x8,-0x4(%rbp)
bda: 8b 45 f8 mov -0x8(%rbp),%eax
bdd: c1 e0 08 shl $0x8,%eax
be0: 89 c1 mov %eax,%ecx
be2: 48 8b 85 70 ff ff ff mov -0x90(%rbp),%rax
be9: 48 8d 50 01 lea 0x1(%rax),%rdx
bed: 48 89 95 70 ff ff ff mov %rdx,-0x90(%rbp)
bf4: 0f b6 00 movzbl (%rax),%eax
bf7: 0f b6 c0 movzbl %al,%eax
bfa: 09 c8 or %ecx,%eax
bfc: 89 45 f8 mov %eax,-0x8(%rbp)
bff: 8b 45 fc mov -0x4(%rbp),%eax
c02: c1 e8 0b shr $0xb,%eax
c05: 0f af 45 98 imul -0x68(%rbp),%eax
c09: 89 45 94 mov %eax,-0x6c(%rbp)
c0c: 8b 45 f8 mov -0x8(%rbp),%eax
c0f: 3b 45 94 cmp -0x6c(%rbp),%eax
c12: 73 10 jae c24 <LzmaDec_TryDummy+0xc24>
c14: 8b 45 94 mov -0x6c(%rbp),%eax
c17: 89 45 fc mov %eax,-0x4(%rbp)
c1a: 8b 45 b4 mov -0x4c(%rbp),%eax
c1d: 01 c0 add %eax,%eax
c1f: 89 45 b4 mov %eax,-0x4c(%rbp)
c22: eb 17 jmp c3b <LzmaDec_TryDummy+0xc3b>
c24: 8b 45 94 mov -0x6c(%rbp),%eax
c27: 29 45 fc sub %eax,-0x4(%rbp)
c2a: 8b 45 94 mov -0x6c(%rbp),%eax
c2d: 29 45 f8 sub %eax,-0x8(%rbp)
c30: 8b 45 b4 mov -0x4c(%rbp),%eax
c33: 01 c0 add %eax,%eax
c35: 83 c0 01 add $0x1,%eax
c38: 89 45 b4 mov %eax,-0x4c(%rbp)
c3b: 83 6d b8 01 subl $0x1,-0x48(%rbp)
c3f: 83 7d b8 00 cmpl $0x0,-0x48(%rbp)
c43: 0f 85 56 ff ff ff jne b9f <LzmaDec_TryDummy+0xb9f>
c49: 81 7d fc ff ff ff 00 cmpl $0xffffff,-0x4(%rbp)
c50: 77 3d ja c8f <LzmaDec_TryDummy+0xc8f>
c52: 48 8b 85 70 ff ff ff mov -0x90(%rbp),%rax
c59: 48 3b 45 a8 cmp -0x58(%rbp),%rax
c5d: 72 07 jb c66 <LzmaDec_TryDummy+0xc66>
c5f: b8 00 00 00 00 mov $0x0,%eax
c64: eb 2c jmp c92 <LzmaDec_TryDummy+0xc92>
c66: c1 65 fc 08 shll $0x8,-0x4(%rbp)
c6a: 8b 45 f8 mov -0x8(%rbp),%eax
c6d: c1 e0 08 shl $0x8,%eax
c70: 89 c1 mov %eax,%ecx
c72: 48 8b 85 70 ff ff ff mov -0x90(%rbp),%rax
c79: 48 8d 50 01 lea 0x1(%rax),%rdx
c7d: 48 89 95 70 ff ff ff mov %rdx,-0x90(%rbp)
c84: 0f b6 00 movzbl (%rax),%eax
c87: 0f b6 c0 movzbl %al,%eax
c8a: 09 c8 or %ecx,%eax
c8c: 89 45 f8 mov %eax,-0x8(%rbp)
c8f: 8b 45 f0 mov -0x10(%rbp),%eax
c92: c9 leaveq
c93: c3 retq
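(Editor's note: the pattern that repeats throughout LzmaDec_TryDummy above — `cmpl $0xffffff`, `shll $8` plus a byte load, then an `imul` against the 11-bit probability and a compare — is the range coder's normalize-and-decode-bit step. A minimal C sketch of one such step, with variable names assumed to match the range/code/bound temporaries at -0x4/-0x8/-0x6c(%rbp); this is the generic form, not the exact "dummy" variant, which bails out instead of consuming input past the buffer limit:)

```c
#include <assert.h>
#include <stdint.h>

#define kTopValue (1u << 24)   /* the 0xffffff threshold in the disassembly */

/* Decode one bit from the range coder. `prob` is an 11-bit probability
   (0..2047). Returns the decoded bit and updates range/code/buf. */
static int decode_bit(uint32_t *range, uint32_t *code,
                      const uint8_t **buf, uint16_t prob)
{
    if (*range < kTopValue) {              /* normalize: shll $8 + byte load */
        *range <<= 8;
        *code = (*code << 8) | *(*buf)++;
    }
    uint32_t bound = (*range >> 11) * prob;   /* shr $0xb; imul */
    if (*code < bound) {                      /* jae taken => bit 1 path */
        *range = bound;
        return 0;
    }
    *range -= bound;
    *code  -= bound;
    return 1;
}
```

Every iteration of the decompression inner loop executes some sequence of these steps, which is why this function dominates the decompression time being measured.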
Disassembly of section .text.LzmaDec_InitRc:
0000000000000000 <LzmaDec_InitRc>:
0: 55 push %rbp
1: 48 89 e5 mov %rsp,%rbp
4: 48 83 ec 10 sub $0x10,%rsp
8: 48 89 7d f8 mov %rdi,-0x8(%rbp)
c: 48 89 75 f0 mov %rsi,-0x10(%rbp)
10: 48 8b 45 f0 mov -0x10(%rbp),%rax
14: 48 83 c0 01 add $0x1,%rax
18: 0f b6 00 movzbl (%rax),%eax
1b: 0f b6 c0 movzbl %al,%eax
1e: c1 e0 18 shl $0x18,%eax
21: 89 c2 mov %eax,%edx
23: 48 8b 45 f0 mov -0x10(%rbp),%rax
27: 48 83 c0 02 add $0x2,%rax
2b: 0f b6 00 movzbl (%rax),%eax
2e: 0f b6 c0 movzbl %al,%eax
31: c1 e0 10 shl $0x10,%eax
34: 09 c2 or %eax,%edx
36: 48 8b 45 f0 mov -0x10(%rbp),%rax
3a: 48 83 c0 03 add $0x3,%rax
3e: 0f b6 00 movzbl (%rax),%eax
41: 0f b6 c0 movzbl %al,%eax
44: c1 e0 08 shl $0x8,%eax
47: 09 c2 or %eax,%edx
49: 48 8b 45 f0 mov -0x10(%rbp),%rax
4d: 48 83 c0 04 add $0x4,%rax
51: 0f b6 00 movzbl (%rax),%eax
54: 0f b6 c0 movzbl %al,%eax
57: 09 c2 or %eax,%edx
59: 48 8b 45 f8 mov -0x8(%rbp),%rax
5d: 89 50 2c mov %edx,0x2c(%rax)
60: 48 8b 45 f8 mov -0x8(%rbp),%rax
64: c7 40 28 ff ff ff ff movl $0xffffffff,0x28(%rax)
6b: 48 8b 45 f8 mov -0x8(%rbp),%rax
6f: c7 40 58 00 00 00 00 movl $0x0,0x58(%rax)
76: 90 nop
77: c9 leaveq
78: c3 retq
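(Editor's note: LzmaDec_InitRc above just assembles bytes 1..4 of the input into the 32-bit range-coder `code` (offsets 0x2c/0x28/0x58 are assumed to be the SDK's code/range/needFlush fields) and resets the range to 0xFFFFFFFF. A sketch of the byte assembly at offsets 0x10..0x5d:)

```c
#include <assert.h>
#include <stdint.h>

/* code = (d[1] << 24) | (d[2] << 16) | (d[3] << 8) | d[4],
   matching the shl $0x18 / $0x10 / $0x8 sequence above. */
static uint32_t init_code(const uint8_t *d)
{
    return ((uint32_t)d[1] << 24) | ((uint32_t)d[2] << 16) |
           ((uint32_t)d[3] << 8)  |  (uint32_t)d[4];
}
```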
Disassembly of section .text.LzmaDec_InitDicAndState:
0000000000000000 <LzmaDec_InitDicAndState>:
0: 55 push %rbp
1: 48 89 e5 mov %rsp,%rbp
4: 48 83 ec 10 sub $0x10,%rsp
8: 48 89 7d f8 mov %rdi,-0x8(%rbp)
c: 89 75 f4 mov %esi,-0xc(%rbp)
f: 89 55 f0 mov %edx,-0x10(%rbp)
12: 48 8b 45 f8 mov -0x8(%rbp),%rax
16: c7 40 58 01 00 00 00 movl $0x1,0x58(%rax)
1d: 48 8b 45 f8 mov -0x8(%rbp),%rax
21: c7 40 54 00 00 00 00 movl $0x0,0x54(%rax)
28: 48 8b 45 f8 mov -0x8(%rbp),%rax
2c: c7 40 64 00 00 00 00 movl $0x0,0x64(%rax)
33: 83 7d f4 00 cmpl $0x0,-0xc(%rbp)
37: 74 21 je 5a <LzmaDec_InitDicAndState+0x5a>
39: 48 8b 45 f8 mov -0x8(%rbp),%rax
3d: c7 40 38 00 00 00 00 movl $0x0,0x38(%rax)
44: 48 8b 45 f8 mov -0x8(%rbp),%rax
48: c7 40 3c 00 00 00 00 movl $0x0,0x3c(%rax)
4f: 48 8b 45 f8 mov -0x8(%rbp),%rax
53: c7 40 5c 01 00 00 00 movl $0x1,0x5c(%rax)
5a: 83 7d f0 00 cmpl $0x0,-0x10(%rbp)
5e: 74 0b je 6b <LzmaDec_InitDicAndState+0x6b>
60: 48 8b 45 f8 mov -0x8(%rbp),%rax
64: c7 40 5c 01 00 00 00 movl $0x1,0x5c(%rax)
6b: 90 nop
6c: c9 leaveq
6d: c3 retq
Disassembly of section .text.LzmaDec_Init:
0000000000000000 <LzmaDec_Init>:
0: 55 push %rbp
1: 48 89 e5 mov %rsp,%rbp
4: 48 83 ec 08 sub $0x8,%rsp
8: 48 89 7d f8 mov %rdi,-0x8(%rbp)
c: 48 8b 45 f8 mov -0x8(%rbp),%rax
10: c7 40 30 00 00 00 00 movl $0x0,0x30(%rax)
17: 48 8b 45 f8 mov -0x8(%rbp),%rax
1b: ba 01 00 00 00 mov $0x1,%edx
20: be 01 00 00 00 mov $0x1,%esi
25: 48 89 c7 mov %rax,%rdi
28: 48 b8 00 00 00 00 00 movabs $0x0,%rax
2f: 00 00 00
32: ff d0 callq *%rax
34: 90 nop
35: c9 leaveq
36: c3 retq
Disassembly of section .text.LzmaDec_InitStateReal:
0000000000000000 <LzmaDec_InitStateReal>:
0: 55 push %rbp
1: 48 89 e5 mov %rsp,%rbp
4: 48 83 ec 18 sub $0x18,%rsp
8: 48 89 7d e8 mov %rdi,-0x18(%rbp)
c: 48 8b 45 e8 mov -0x18(%rbp),%rax
10: 8b 10 mov (%rax),%edx
12: 48 8b 45 e8 mov -0x18(%rbp),%rax
16: 8b 40 04 mov 0x4(%rax),%eax
19: 01 d0 add %edx,%eax
1b: ba 00 03 00 00 mov $0x300,%edx
20: 89 c1 mov %eax,%ecx
22: d3 e2 shl %cl,%edx
24: 89 d0 mov %edx,%eax
26: 05 36 07 00 00 add $0x736,%eax
2b: 89 45 f8 mov %eax,-0x8(%rbp)
2e: 48 8b 45 e8 mov -0x18(%rbp),%rax
32: 48 8b 40 10 mov 0x10(%rax),%rax
36: 48 89 45 f0 mov %rax,-0x10(%rbp)
3a: c7 45 fc 00 00 00 00 movl $0x0,-0x4(%rbp)
41: eb 17 jmp 5a <LzmaDec_InitStateReal+0x5a>
43: 8b 45 fc mov -0x4(%rbp),%eax
46: 48 8d 14 00 lea (%rax,%rax,1),%rdx
4a: 48 8b 45 f0 mov -0x10(%rbp),%rax
4e: 48 01 d0 add %rdx,%rax
51: 66 c7 00 00 04 movw $0x400,(%rax)
56: 83 45 fc 01 addl $0x1,-0x4(%rbp)
5a: 8b 45 fc mov -0x4(%rbp),%eax
5d: 3b 45 f8 cmp -0x8(%rbp),%eax
60: 72 e1 jb 43 <LzmaDec_InitStateReal+0x43>
62: 48 8b 45 e8 mov -0x18(%rbp),%rax
66: c7 40 50 01 00 00 00 movl $0x1,0x50(%rax)
6d: 48 8b 45 e8 mov -0x18(%rbp),%rax
71: 8b 50 50 mov 0x50(%rax),%edx
74: 48 8b 45 e8 mov -0x18(%rbp),%rax
78: 89 50 4c mov %edx,0x4c(%rax)
7b: 48 8b 45 e8 mov -0x18(%rbp),%rax
7f: 8b 50 4c mov 0x4c(%rax),%edx
82: 48 8b 45 e8 mov -0x18(%rbp),%rax
86: 89 50 48 mov %edx,0x48(%rax)
89: 48 8b 45 e8 mov -0x18(%rbp),%rax
8d: 8b 50 48 mov 0x48(%rax),%edx
90: 48 8b 45 e8 mov -0x18(%rbp),%rax
94: 89 50 44 mov %edx,0x44(%rax)
97: 48 8b 45 e8 mov -0x18(%rbp),%rax
9b: c7 40 40 00 00 00 00 movl $0x0,0x40(%rax)
a2: 48 8b 45 e8 mov -0x18(%rbp),%rax
a6: c7 40 5c 00 00 00 00 movl $0x0,0x5c(%rax)
ad: 90 nop
ae: c9 leaveq
af: c3 retq
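(Editor's note: the loop in LzmaDec_InitStateReal above resets every 16-bit probability to 0x400, i.e. kBitModelTotal/2, and the table size computed at the top is 0x736 + (0x300 << (lc + lp)), reading lc and lp from offsets 0 and 4 of the state. A sketch, with field names assumed from the SDK:)

```c
#include <assert.h>
#include <stdint.h>

/* Table size: add $0x736 after the shl by (lc + lp). */
static uint32_t num_probs(uint32_t lc, uint32_t lp)
{
    return 0x736u + (0x300u << (lc + lp));
}

/* The init loop: movw $0x400,(%rax) over the whole table. */
static void reset_probs(uint16_t *probs, uint32_t n)
{
    for (uint32_t i = 0; i < n; i++)
        probs[i] = 0x400;
}
```

For typical props (lc=3, lp=0) this is only ~8000 stores, so the one-time init is not where the minute is going; the cost is in the decode loop.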
Disassembly of section .text.LzmaDec_DecodeToDic:
0000000000000000 <LzmaDec_DecodeToDic>:
0: 55 push %rbp
1: 48 89 e5 mov %rsp,%rbp
4: 48 83 ec 60 sub $0x60,%rsp
8: 48 89 7d c8 mov %rdi,-0x38(%rbp)
c: 89 75 c4 mov %esi,-0x3c(%rbp)
f: 48 89 55 b8 mov %rdx,-0x48(%rbp)
13: 48 89 4d b0 mov %rcx,-0x50(%rbp)
17: 44 89 45 c0 mov %r8d,-0x40(%rbp)
1b: 4c 89 4d a8 mov %r9,-0x58(%rbp)
1f: 48 8b 45 b0 mov -0x50(%rbp),%rax
23: 8b 00 mov (%rax),%eax
25: 89 45 fc mov %eax,-0x4(%rbp)
28: 48 8b 45 b0 mov -0x50(%rbp),%rax
2c: c7 00 00 00 00 00 movl $0x0,(%rax)
32: 8b 55 c4 mov -0x3c(%rbp),%edx
35: 48 8b 45 c8 mov -0x38(%rbp),%rax
39: 89 d6 mov %edx,%esi
3b: 48 89 c7 mov %rax,%rdi
3e: 48 b8 00 00 00 00 00 movabs $0x0,%rax
45: 00 00 00
48: ff d0 callq *%rax
4a: 48 8b 45 a8 mov -0x58(%rbp),%rax
4e: c7 00 00 00 00 00 movl $0x0,(%rax)
54: e9 f5 03 00 00 jmpq 44e <LzmaDec_DecodeToDic+0x44e>
59: 48 8b 45 c8 mov -0x38(%rbp),%rax
5d: 8b 40 58 mov 0x58(%rax),%eax
60: 85 c0 test %eax,%eax
62: 0f 84 b0 00 00 00 je 118 <LzmaDec_DecodeToDic+0x118>
68: eb 3d jmp a7 <LzmaDec_DecodeToDic+0xa7>
6a: 48 8b 45 c8 mov -0x38(%rbp),%rax
6e: 8b 50 64 mov 0x64(%rax),%edx
71: 8d 4a 01 lea 0x1(%rdx),%ecx
74: 48 8b 45 c8 mov -0x38(%rbp),%rax
78: 89 48 64 mov %ecx,0x64(%rax)
7b: 48 8b 45 b8 mov -0x48(%rbp),%rax
7f: 48 8d 48 01 lea 0x1(%rax),%rcx
83: 48 89 4d b8 mov %rcx,-0x48(%rbp)
87: 0f b6 08 movzbl (%rax),%ecx
8a: 48 8b 45 c8 mov -0x38(%rbp),%rax
8e: 89 d2 mov %edx,%edx
90: 88 4c 10 68 mov %cl,0x68(%rax,%rdx,1)
94: 48 8b 45 b0 mov -0x50(%rbp),%rax
98: 8b 00 mov (%rax),%eax
9a: 8d 50 01 lea 0x1(%rax),%edx
9d: 48 8b 45 b0 mov -0x50(%rbp),%rax
a1: 89 10 mov %edx,(%rax)
a3: 83 6d fc 01 subl $0x1,-0x4(%rbp)
a7: 83 7d fc 00 cmpl $0x0,-0x4(%rbp)
ab: 74 0c je b9 <LzmaDec_DecodeToDic+0xb9>
ad: 48 8b 45 c8 mov -0x38(%rbp),%rax
b1: 8b 40 64 mov 0x64(%rax),%eax
b4: 83 f8 04 cmp $0x4,%eax
b7: 76 b1 jbe 6a <LzmaDec_DecodeToDic+0x6a>
b9: 48 8b 45 c8 mov -0x38(%rbp),%rax
bd: 8b 40 64 mov 0x64(%rax),%eax
c0: 83 f8 04 cmp $0x4,%eax
c3: 77 14 ja d9 <LzmaDec_DecodeToDic+0xd9>
c5: 48 8b 45 a8 mov -0x58(%rbp),%rax
c9: c7 00 03 00 00 00 movl $0x3,(%rax)
cf: b8 00 00 00 00 mov $0x0,%eax
d4: e9 ab 03 00 00 jmpq 484 <LzmaDec_DecodeToDic+0x484>
d9: 48 8b 45 c8 mov -0x38(%rbp),%rax
dd: 0f b6 40 68 movzbl 0x68(%rax),%eax
e1: 84 c0 test %al,%al
e3: 74 0a je ef <LzmaDec_DecodeToDic+0xef>
e5: b8 01 00 00 00 mov $0x1,%eax
ea: e9 95 03 00 00 jmpq 484 <LzmaDec_DecodeToDic+0x484>
ef: 48 8b 45 c8 mov -0x38(%rbp),%rax
f3: 48 8d 50 68 lea 0x68(%rax),%rdx
f7: 48 8b 45 c8 mov -0x38(%rbp),%rax
fb: 48 89 d6 mov %rdx,%rsi
fe: 48 89 c7 mov %rax,%rdi
101: 48 b8 00 00 00 00 00 movabs $0x0,%rax
108: 00 00 00
10b: ff d0 callq *%rax
10d: 48 8b 45 c8 mov -0x38(%rbp),%rax
111: c7 40 64 00 00 00 00 movl $0x0,0x64(%rax)
118: c7 45 f8 00 00 00 00 movl $0x0,-0x8(%rbp)
11f: 48 8b 45 c8 mov -0x38(%rbp),%rax
123: 8b 40 30 mov 0x30(%rax),%eax
126: 3b 45 c4 cmp -0x3c(%rbp),%eax
129: 72 6a jb 195 <LzmaDec_DecodeToDic+0x195>
12b: 48 8b 45 c8 mov -0x38(%rbp),%rax
12f: 8b 40 54 mov 0x54(%rax),%eax
132: 85 c0 test %eax,%eax
134: 75 1f jne 155 <LzmaDec_DecodeToDic+0x155>
136: 48 8b 45 c8 mov -0x38(%rbp),%rax
13a: 8b 40 2c mov 0x2c(%rax),%eax
13d: 85 c0 test %eax,%eax
13f: 75 14 jne 155 <LzmaDec_DecodeToDic+0x155>
141: 48 8b 45 a8 mov -0x58(%rbp),%rax
145: c7 00 04 00 00 00 movl $0x4,(%rax)
14b: b8 00 00 00 00 mov $0x0,%eax
150: e9 2f 03 00 00 jmpq 484 <LzmaDec_DecodeToDic+0x484>
155: 83 7d c0 00 cmpl $0x0,-0x40(%rbp)
159: 75 14 jne 16f <LzmaDec_DecodeToDic+0x16f>
15b: 48 8b 45 a8 mov -0x58(%rbp),%rax
15f: c7 00 02 00 00 00 movl $0x2,(%rax)
165: b8 00 00 00 00 mov $0x0,%eax
16a: e9 15 03 00 00 jmpq 484 <LzmaDec_DecodeToDic+0x484>
16f: 48 8b 45 c8 mov -0x38(%rbp),%rax
173: 8b 40 54 mov 0x54(%rax),%eax
176: 85 c0 test %eax,%eax
178: 74 14 je 18e <LzmaDec_DecodeToDic+0x18e>
17a: 48 8b 45 a8 mov -0x58(%rbp),%rax
17e: c7 00 02 00 00 00 movl $0x2,(%rax)
184: b8 01 00 00 00 mov $0x1,%eax
189: e9 f6 02 00 00 jmpq 484 <LzmaDec_DecodeToDic+0x484>
18e: c7 45 f8 01 00 00 00 movl $0x1,-0x8(%rbp)
195: 48 8b 45 c8 mov -0x38(%rbp),%rax
199: 8b 40 5c mov 0x5c(%rax),%eax
19c: 85 c0 test %eax,%eax
19e: 74 13 je 1b3 <LzmaDec_DecodeToDic+0x1b3>
1a0: 48 8b 45 c8 mov -0x38(%rbp),%rax
1a4: 48 89 c7 mov %rax,%rdi
1a7: 48 b8 00 00 00 00 00 movabs $0x0,%rax
1ae: 00 00 00
1b1: ff d0 callq *%rax
1b3: 48 8b 45 c8 mov -0x38(%rbp),%rax
1b7: 8b 40 64 mov 0x64(%rax),%eax
1ba: 85 c0 test %eax,%eax
1bc: 0f 85 3b 01 00 00 jne 2fd <LzmaDec_DecodeToDic+0x2fd>
1c2: 83 7d fc 13 cmpl $0x13,-0x4(%rbp)
1c6: 76 0a jbe 1d2 <LzmaDec_DecodeToDic+0x1d2>
1c8: 83 7d f8 00 cmpl $0x0,-0x8(%rbp)
1cc: 0f 84 a8 00 00 00 je 27a <LzmaDec_DecodeToDic+0x27a>
1d2: 8b 55 fc mov -0x4(%rbp),%edx
1d5: 48 8b 4d b8 mov -0x48(%rbp),%rcx
1d9: 48 8b 45 c8 mov -0x38(%rbp),%rax
1dd: 48 89 ce mov %rcx,%rsi
1e0: 48 89 c7 mov %rax,%rdi
1e3: 48 b8 00 00 00 00 00 movabs $0x0,%rax
1ea: 00 00 00
1ed: ff d0 callq *%rax
1ef: 89 45 e4 mov %eax,-0x1c(%rbp)
1f2: 83 7d e4 00 cmpl $0x0,-0x1c(%rbp)
1f6: 75 58 jne 250 <LzmaDec_DecodeToDic+0x250>
1f8: 8b 55 fc mov -0x4(%rbp),%edx
1fb: 48 8b 45 c8 mov -0x38(%rbp),%rax
1ff: 48 8d 48 68 lea 0x68(%rax),%rcx
203: 48 8b 45 b8 mov -0x48(%rbp),%rax
207: 48 83 ec 20 sub $0x20,%rsp
20b: 49 89 d0 mov %rdx,%r8
20e: 48 89 c2 mov %rax,%rdx
211: 48 b8 00 00 00 00 00 movabs $0x0,%rax
218: 00 00 00
21b: ff d0 callq *%rax
21d: 48 83 c4 20 add $0x20,%rsp
221: 48 8b 45 c8 mov -0x38(%rbp),%rax
225: 8b 55 fc mov -0x4(%rbp),%edx
228: 89 50 64 mov %edx,0x64(%rax)
22b: 48 8b 45 b0 mov -0x50(%rbp),%rax
22f: 8b 10 mov (%rax),%edx
231: 8b 45 fc mov -0x4(%rbp),%eax
234: 01 c2 add %eax,%edx
236: 48 8b 45 b0 mov -0x50(%rbp),%rax
23a: 89 10 mov %edx,(%rax)
23c: 48 8b 45 a8 mov -0x58(%rbp),%rax
240: c7 00 03 00 00 00 movl $0x3,(%rax)
246: b8 00 00 00 00 mov $0x0,%eax
24b: e9 34 02 00 00 jmpq 484 <LzmaDec_DecodeToDic+0x484>
250: 83 7d f8 00 cmpl $0x0,-0x8(%rbp)
254: 74 1a je 270 <LzmaDec_DecodeToDic+0x270>
256: 83 7d e4 02 cmpl $0x2,-0x1c(%rbp)
25a: 74 14 je 270 <LzmaDec_DecodeToDic+0x270>
25c: 48 8b 45 a8 mov -0x58(%rbp),%rax
260: c7 00 02 00 00 00 movl $0x2,(%rax)
266: b8 01 00 00 00 mov $0x1,%eax
26b: e9 14 02 00 00 jmpq 484 <LzmaDec_DecodeToDic+0x484>
270: 48 8b 45 b8 mov -0x48(%rbp),%rax
274: 48 89 45 f0 mov %rax,-0x10(%rbp)
278: eb 12 jmp 28c <LzmaDec_DecodeToDic+0x28c>
27a: 8b 45 fc mov -0x4(%rbp),%eax
27d: 48 8d 50 ec lea -0x14(%rax),%rdx
281: 48 8b 45 b8 mov -0x48(%rbp),%rax
285: 48 01 d0 add %rdx,%rax
288: 48 89 45 f0 mov %rax,-0x10(%rbp)
28c: 48 8b 45 c8 mov -0x38(%rbp),%rax
290: 48 8b 55 b8 mov -0x48(%rbp),%rdx
294: 48 89 50 20 mov %rdx,0x20(%rax)
298: 48 8b 55 f0 mov -0x10(%rbp),%rdx
29c: 8b 4d c4 mov -0x3c(%rbp),%ecx
29f: 48 8b 45 c8 mov -0x38(%rbp),%rax
2a3: 89 ce mov %ecx,%esi
2a5: 48 89 c7 mov %rax,%rdi
2a8: 48 b8 00 00 00 00 00 movabs $0x0,%rax
2af: 00 00 00
2b2: ff d0 callq *%rax
2b4: 85 c0 test %eax,%eax
2b6: 74 0a je 2c2 <LzmaDec_DecodeToDic+0x2c2>
2b8: b8 01 00 00 00 mov $0x1,%eax
2bd: e9 c2 01 00 00 jmpq 484 <LzmaDec_DecodeToDic+0x484>
2c2: 48 8b 45 c8 mov -0x38(%rbp),%rax
2c6: 48 8b 40 20 mov 0x20(%rax),%rax
2ca: 48 89 c2 mov %rax,%rdx
2cd: 48 8b 45 b8 mov -0x48(%rbp),%rax
2d1: 48 29 c2 sub %rax,%rdx
2d4: 48 89 d0 mov %rdx,%rax
2d7: 89 45 e0 mov %eax,-0x20(%rbp)
2da: 48 8b 45 b0 mov -0x50(%rbp),%rax
2de: 8b 10 mov (%rax),%edx
2e0: 8b 45 e0 mov -0x20(%rbp),%eax
2e3: 01 c2 add %eax,%edx
2e5: 48 8b 45 b0 mov -0x50(%rbp),%rax
2e9: 89 10 mov %edx,(%rax)
2eb: 8b 45 e0 mov -0x20(%rbp),%eax
2ee: 48 01 45 b8 add %rax,-0x48(%rbp)
2f2: 8b 45 e0 mov -0x20(%rbp),%eax
2f5: 29 45 fc sub %eax,-0x4(%rbp)
2f8: e9 51 01 00 00 jmpq 44e <LzmaDec_DecodeToDic+0x44e>
2fd: 48 8b 45 c8 mov -0x38(%rbp),%rax
301: 8b 40 64 mov 0x64(%rax),%eax
304: 89 45 ec mov %eax,-0x14(%rbp)
307: c7 45 e8 00 00 00 00 movl $0x0,-0x18(%rbp)
30e: eb 28 jmp 338 <LzmaDec_DecodeToDic+0x338>
310: 8b 45 ec mov -0x14(%rbp),%eax
313: 8d 50 01 lea 0x1(%rax),%edx
316: 89 55 ec mov %edx,-0x14(%rbp)
319: 8b 55 e8 mov -0x18(%rbp),%edx
31c: 8d 4a 01 lea 0x1(%rdx),%ecx
31f: 89 4d e8 mov %ecx,-0x18(%rbp)
322: 89 d1 mov %edx,%ecx
324: 48 8b 55 b8 mov -0x48(%rbp),%rdx
328: 48 01 ca add %rcx,%rdx
32b: 0f b6 0a movzbl (%rdx),%ecx
32e: 48 8b 55 c8 mov -0x38(%rbp),%rdx
332: 89 c0 mov %eax,%eax
334: 88 4c 02 68 mov %cl,0x68(%rdx,%rax,1)
338: 83 7d ec 13 cmpl $0x13,-0x14(%rbp)
33c: 77 08 ja 346 <LzmaDec_DecodeToDic+0x346>
33e: 8b 45 e8 mov -0x18(%rbp),%eax
341: 3b 45 fc cmp -0x4(%rbp),%eax
344: 72 ca jb 310 <LzmaDec_DecodeToDic+0x310>
346: 48 8b 45 c8 mov -0x38(%rbp),%rax
34a: 8b 55 ec mov -0x14(%rbp),%edx
34d: 89 50 64 mov %edx,0x64(%rax)
350: 83 7d ec 13 cmpl $0x13,-0x14(%rbp)
354: 76 06 jbe 35c <LzmaDec_DecodeToDic+0x35c>
356: 83 7d f8 00 cmpl $0x0,-0x8(%rbp)
35a: 74 6f je 3cb <LzmaDec_DecodeToDic+0x3cb>
35c: 48 8b 45 c8 mov -0x38(%rbp),%rax
360: 48 8d 48 68 lea 0x68(%rax),%rcx
364: 8b 55 ec mov -0x14(%rbp),%edx
367: 48 8b 45 c8 mov -0x38(%rbp),%rax
36b: 48 89 ce mov %rcx,%rsi
36e: 48 89 c7 mov %rax,%rdi
371: 48 b8 00 00 00 00 00 movabs $0x0,%rax
378: 00 00 00
37b: ff d0 callq *%rax
37d: 89 45 dc mov %eax,-0x24(%rbp)
380: 83 7d dc 00 cmpl $0x0,-0x24(%rbp)
384: 75 25 jne 3ab <LzmaDec_DecodeToDic+0x3ab>
386: 48 8b 45 b0 mov -0x50(%rbp),%rax
38a: 8b 10 mov (%rax),%edx
38c: 8b 45 e8 mov -0x18(%rbp),%eax
38f: 01 c2 add %eax,%edx
391: 48 8b 45 b0 mov -0x50(%rbp),%rax
395: 89 10 mov %edx,(%rax)
397: 48 8b 45 a8 mov -0x58(%rbp),%rax
39b: c7 00 03 00 00 00 movl $0x3,(%rax)
3a1: b8 00 00 00 00 mov $0x0,%eax
3a6: e9 d9 00 00 00 jmpq 484 <LzmaDec_DecodeToDic+0x484>
3ab: 83 7d f8 00 cmpl $0x0,-0x8(%rbp)
3af: 74 1a je 3cb <LzmaDec_DecodeToDic+0x3cb>
3b1: 83 7d dc 02 cmpl $0x2,-0x24(%rbp)
3b5: 74 14 je 3cb <LzmaDec_DecodeToDic+0x3cb>
3b7: 48 8b 45 a8 mov -0x58(%rbp),%rax
3bb: c7 00 02 00 00 00 movl $0x2,(%rax)
3c1: b8 01 00 00 00 mov $0x1,%eax
3c6: e9 b9 00 00 00 jmpq 484 <LzmaDec_DecodeToDic+0x484>
3cb: 48 8b 45 c8 mov -0x38(%rbp),%rax
3cf: 48 8d 50 68 lea 0x68(%rax),%rdx
3d3: 48 8b 45 c8 mov -0x38(%rbp),%rax
3d7: 48 89 50 20 mov %rdx,0x20(%rax)
3db: 48 8b 45 c8 mov -0x38(%rbp),%rax
3df: 48 8b 50 20 mov 0x20(%rax),%rdx
3e3: 8b 4d c4 mov -0x3c(%rbp),%ecx
3e6: 48 8b 45 c8 mov -0x38(%rbp),%rax
3ea: 89 ce mov %ecx,%esi
3ec: 48 89 c7 mov %rax,%rdi
3ef: 48 b8 00 00 00 00 00 movabs $0x0,%rax
3f6: 00 00 00
3f9: ff d0 callq *%rax
3fb: 85 c0 test %eax,%eax
3fd: 74 07 je 406 <LzmaDec_DecodeToDic+0x406>
3ff: b8 01 00 00 00 mov $0x1,%eax
404: eb 7e jmp 484 <LzmaDec_DecodeToDic+0x484>
406: 48 8b 45 c8 mov -0x38(%rbp),%rax
40a: 48 8b 40 20 mov 0x20(%rax),%rax
40e: 48 89 c2 mov %rax,%rdx
411: 48 8b 45 c8 mov -0x38(%rbp),%rax
415: 48 83 c0 68 add $0x68,%rax
419: 48 29 c2 sub %rax,%rdx
41c: 48 89 d0 mov %rdx,%rax
41f: 2b 45 ec sub -0x14(%rbp),%eax
422: 01 45 e8 add %eax,-0x18(%rbp)
425: 48 8b 45 b0 mov -0x50(%rbp),%rax
429: 8b 10 mov (%rax),%edx
42b: 8b 45 e8 mov -0x18(%rbp),%eax
42e: 01 c2 add %eax,%edx
430: 48 8b 45 b0 mov -0x50(%rbp),%rax
434: 89 10 mov %edx,(%rax)
436: 8b 45 e8 mov -0x18(%rbp),%eax
439: 48 01 45 b8 add %rax,-0x48(%rbp)
43d: 8b 45 e8 mov -0x18(%rbp),%eax
440: 29 45 fc sub %eax,-0x4(%rbp)
443: 48 8b 45 c8 mov -0x38(%rbp),%rax
447: c7 40 64 00 00 00 00 movl $0x0,0x64(%rax)
44e: 48 8b 45 c8 mov -0x38(%rbp),%rax
452: 8b 40 54 mov 0x54(%rax),%eax
455: 3d 12 01 00 00 cmp $0x112,%eax
45a: 0f 85 f9 fb ff ff jne 59 <LzmaDec_DecodeToDic+0x59>
460: 48 8b 45 c8 mov -0x38(%rbp),%rax
464: 8b 40 2c mov 0x2c(%rax),%eax
467: 85 c0 test %eax,%eax
469: 75 0a jne 475 <LzmaDec_DecodeToDic+0x475>
46b: 48 8b 45 a8 mov -0x58(%rbp),%rax
46f: c7 00 01 00 00 00 movl $0x1,(%rax)
475: 48 8b 45 c8 mov -0x38(%rbp),%rax
479: 8b 40 2c mov 0x2c(%rax),%eax
47c: 85 c0 test %eax,%eax
47e: 0f 95 c0 setne %al
481: 0f b6 c0 movzbl %al,%eax
484: c9 leaveq
485: c3 retq
Disassembly of section .text.LzmaDec_DecodeToBuf:
0000000000000000 <LzmaDec_DecodeToBuf>:
0: 55 push %rbp
1: 48 89 e5 mov %rsp,%rbp
4: 48 83 ec 50 sub $0x50,%rsp
8: 48 89 7d d8 mov %rdi,-0x28(%rbp)
c: 48 89 75 d0 mov %rsi,-0x30(%rbp)
10: 48 89 55 c8 mov %rdx,-0x38(%rbp)
14: 48 89 4d c0 mov %rcx,-0x40(%rbp)
18: 4c 89 45 b8 mov %r8,-0x48(%rbp)
1c: 44 89 4d b4 mov %r9d,-0x4c(%rbp)
20: 48 8b 45 c8 mov -0x38(%rbp),%rax
24: 8b 00 mov (%rax),%eax
26: 89 45 fc mov %eax,-0x4(%rbp)
29: 48 8b 45 b8 mov -0x48(%rbp),%rax
2d: 8b 00 mov (%rax),%eax
2f: 89 45 f8 mov %eax,-0x8(%rbp)
32: 48 8b 45 c8 mov -0x38(%rbp),%rax
36: c7 00 00 00 00 00 movl $0x0,(%rax)
3c: 48 8b 45 c8 mov -0x38(%rbp),%rax
40: 8b 10 mov (%rax),%edx
42: 48 8b 45 b8 mov -0x48(%rbp),%rax
46: 89 10 mov %edx,(%rax)
48: 8b 45 f8 mov -0x8(%rbp),%eax
4b: 89 45 e4 mov %eax,-0x1c(%rbp)
4e: 48 8b 45 d8 mov -0x28(%rbp),%rax
52: 8b 50 30 mov 0x30(%rax),%edx
55: 48 8b 45 d8 mov -0x28(%rbp),%rax
59: 8b 40 34 mov 0x34(%rax),%eax
5c: 39 c2 cmp %eax,%edx
5e: 75 0b jne 6b <LzmaDec_DecodeToBuf+0x6b>
60: 48 8b 45 d8 mov -0x28(%rbp),%rax
64: c7 40 30 00 00 00 00 movl $0x0,0x30(%rax)
6b: 48 8b 45 d8 mov -0x28(%rbp),%rax
6f: 8b 40 30 mov 0x30(%rax),%eax
72: 89 45 ec mov %eax,-0x14(%rbp)
75: 48 8b 45 d8 mov -0x28(%rbp),%rax
79: 8b 40 34 mov 0x34(%rax),%eax
7c: 2b 45 ec sub -0x14(%rbp),%eax
7f: 3b 45 fc cmp -0x4(%rbp),%eax
82: 73 13 jae 97 <LzmaDec_DecodeToBuf+0x97>
84: 48 8b 45 d8 mov -0x28(%rbp),%rax
88: 8b 40 34 mov 0x34(%rax),%eax
8b: 89 45 f4 mov %eax,-0xc(%rbp)
8e: c7 45 f0 00 00 00 00 movl $0x0,-0x10(%rbp)
95: eb 11 jmp a8 <LzmaDec_DecodeToBuf+0xa8>
97: 8b 55 ec mov -0x14(%rbp),%edx
9a: 8b 45 fc mov -0x4(%rbp),%eax
9d: 01 d0 add %edx,%eax
9f: 89 45 f4 mov %eax,-0xc(%rbp)
a2: 8b 45 b4 mov -0x4c(%rbp),%eax
a5: 89 45 f0 mov %eax,-0x10(%rbp)
a8: 8b 7d f0 mov -0x10(%rbp),%edi
ab: 48 8d 4d e4 lea -0x1c(%rbp),%rcx
af: 48 8b 55 c0 mov -0x40(%rbp),%rdx
b3: 8b 75 f4 mov -0xc(%rbp),%esi
b6: 48 8b 45 d8 mov -0x28(%rbp),%rax
ba: 4c 8b 4d 10 mov 0x10(%rbp),%r9
be: 41 89 f8 mov %edi,%r8d
c1: 48 89 c7 mov %rax,%rdi
c4: 48 b8 00 00 00 00 00 movabs $0x0,%rax
cb: 00 00 00
ce: ff d0 callq *%rax
d0: 89 45 e8 mov %eax,-0x18(%rbp)
d3: 8b 45 e4 mov -0x1c(%rbp),%eax
d6: 89 c0 mov %eax,%eax
d8: 48 01 45 c0 add %rax,-0x40(%rbp)
dc: 8b 45 e4 mov -0x1c(%rbp),%eax
df: 29 45 f8 sub %eax,-0x8(%rbp)
e2: 48 8b 45 b8 mov -0x48(%rbp),%rax
e6: 8b 10 mov (%rax),%edx
e8: 8b 45 e4 mov -0x1c(%rbp),%eax
eb: 01 c2 add %eax,%edx
ed: 48 8b 45 b8 mov -0x48(%rbp),%rax
f1: 89 10 mov %edx,(%rax)
f3: 48 8b 45 d8 mov -0x28(%rbp),%rax
f7: 8b 40 30 mov 0x30(%rax),%eax
fa: 2b 45 ec sub -0x14(%rbp),%eax
fd: 89 45 f4 mov %eax,-0xc(%rbp)
100: 8b 4d f4 mov -0xc(%rbp),%ecx
103: 48 8b 45 d8 mov -0x28(%rbp),%rax
107: 48 8b 50 18 mov 0x18(%rax),%rdx
10b: 8b 45 ec mov -0x14(%rbp),%eax
10e: 48 01 c2 add %rax,%rdx
111: 48 8b 45 d0 mov -0x30(%rbp),%rax
115: 48 83 ec 20 sub $0x20,%rsp
119: 49 89 c8 mov %rcx,%r8
11c: 48 89 c1 mov %rax,%rcx
11f: 48 b8 00 00 00 00 00 movabs $0x0,%rax
126: 00 00 00
129: ff d0 callq *%rax
12b: 48 83 c4 20 add $0x20,%rsp
12f: 8b 45 f4 mov -0xc(%rbp),%eax
132: 48 01 45 d0 add %rax,-0x30(%rbp)
136: 8b 45 f4 mov -0xc(%rbp),%eax
139: 29 45 fc sub %eax,-0x4(%rbp)
13c: 48 8b 45 c8 mov -0x38(%rbp),%rax
140: 8b 10 mov (%rax),%edx
142: 8b 45 f4 mov -0xc(%rbp),%eax
145: 01 c2 add %eax,%edx
147: 48 8b 45 c8 mov -0x38(%rbp),%rax
14b: 89 10 mov %edx,(%rax)
14d: 83 7d e8 00 cmpl $0x0,-0x18(%rbp)
151: 74 05 je 158 <LzmaDec_DecodeToBuf+0x158>
153: 8b 45 e8 mov -0x18(%rbp),%eax
156: eb 15 jmp 16d <LzmaDec_DecodeToBuf+0x16d>
158: 83 7d f4 00 cmpl $0x0,-0xc(%rbp)
15c: 74 0a je 168 <LzmaDec_DecodeToBuf+0x168>
15e: 83 7d fc 00 cmpl $0x0,-0x4(%rbp)
162: 0f 85 e0 fe ff ff jne 48 <LzmaDec_DecodeToBuf+0x48>
168: b8 00 00 00 00 mov $0x0,%eax
16d: c9 leaveq
16e: c3 retq
Disassembly of section .text.LzmaDec_FreeProbs:
0000000000000000 <LzmaDec_FreeProbs>:
0: 55 push %rbp
1: 48 89 e5 mov %rsp,%rbp
4: 48 83 ec 10 sub $0x10,%rsp
8: 48 89 7d f8 mov %rdi,-0x8(%rbp)
c: 48 89 75 f0 mov %rsi,-0x10(%rbp)
10: 48 8b 45 f0 mov -0x10(%rbp),%rax
14: 48 8b 40 08 mov 0x8(%rax),%rax
18: 48 8b 55 f8 mov -0x8(%rbp),%rdx
1c: 48 8b 4a 10 mov 0x10(%rdx),%rcx
20: 48 8b 55 f0 mov -0x10(%rbp),%rdx
24: 48 89 ce mov %rcx,%rsi
27: 48 89 d7 mov %rdx,%rdi
2a: ff d0 callq *%rax
2c: 48 8b 45 f8 mov -0x8(%rbp),%rax
30: 48 c7 40 10 00 00 00 movq $0x0,0x10(%rax)
37: 00
38: 90 nop
39: c9 leaveq
3a: c3 retq
Disassembly of section .text.LzmaDec_FreeDict:
0000000000000000 <LzmaDec_FreeDict>:
0: 55 push %rbp
1: 48 89 e5 mov %rsp,%rbp
4: 48 83 ec 10 sub $0x10,%rsp
8: 48 89 7d f8 mov %rdi,-0x8(%rbp)
c: 48 89 75 f0 mov %rsi,-0x10(%rbp)
10: 48 8b 45 f0 mov -0x10(%rbp),%rax
14: 48 8b 40 08 mov 0x8(%rax),%rax
18: 48 8b 55 f8 mov -0x8(%rbp),%rdx
1c: 48 8b 4a 18 mov 0x18(%rdx),%rcx
20: 48 8b 55 f0 mov -0x10(%rbp),%rdx
24: 48 89 ce mov %rcx,%rsi
27: 48 89 d7 mov %rdx,%rdi
2a: ff d0 callq *%rax
2c: 48 8b 45 f8 mov -0x8(%rbp),%rax
30: 48 c7 40 18 00 00 00 movq $0x0,0x18(%rax)
37: 00
38: 90 nop
39: c9 leaveq
3a: c3 retq
Disassembly of section .text.LzmaDec_Free:
0000000000000000 <LzmaDec_Free>:
0: 55 push %rbp
1: 48 89 e5 mov %rsp,%rbp
4: 48 83 ec 10 sub $0x10,%rsp
8: 48 89 7d f8 mov %rdi,-0x8(%rbp)
c: 48 89 75 f0 mov %rsi,-0x10(%rbp)
10: 48 8b 55 f0 mov -0x10(%rbp),%rdx
14: 48 8b 45 f8 mov -0x8(%rbp),%rax
18: 48 89 d6 mov %rdx,%rsi
1b: 48 89 c7 mov %rax,%rdi
1e: 48 b8 00 00 00 00 00 movabs $0x0,%rax
25: 00 00 00
28: ff d0 callq *%rax
2a: 48 8b 55 f0 mov -0x10(%rbp),%rdx
2e: 48 8b 45 f8 mov -0x8(%rbp),%rax
32: 48 89 d6 mov %rdx,%rsi
35: 48 89 c7 mov %rax,%rdi
38: 48 b8 00 00 00 00 00 movabs $0x0,%rax
3f: 00 00 00
42: ff d0 callq *%rax
44: 90 nop
45: c9 leaveq
46: c3 retq
Disassembly of section .text.LzmaProps_Decode:
0000000000000000 <LzmaProps_Decode>:
0: 55 push %rbp
1: 48 89 e5 mov %rsp,%rbp
4: 48 83 ec 28 sub $0x28,%rsp
8: 48 89 7d e8 mov %rdi,-0x18(%rbp)
c: 48 89 75 e0 mov %rsi,-0x20(%rbp)
10: 89 55 dc mov %edx,-0x24(%rbp)
13: 83 7d dc 04 cmpl $0x4,-0x24(%rbp)
17: 77 0a ja 23 <LzmaProps_Decode+0x23>
19: b8 04 00 00 00 mov $0x4,%eax
1e: e9 35 01 00 00 jmpq 158 <LzmaProps_Decode+0x158>
23: 48 8b 45 e0 mov -0x20(%rbp),%rax
27: 48 83 c0 01 add $0x1,%rax
2b: 0f b6 00 movzbl (%rax),%eax
2e: 0f b6 c0 movzbl %al,%eax
31: 48 8b 55 e0 mov -0x20(%rbp),%rdx
35: 48 83 c2 02 add $0x2,%rdx
39: 0f b6 12 movzbl (%rdx),%edx
3c: 0f b6 d2 movzbl %dl,%edx
3f: c1 e2 08 shl $0x8,%edx
42: 09 c2 or %eax,%edx
44: 48 8b 45 e0 mov -0x20(%rbp),%rax
48: 48 83 c0 03 add $0x3,%rax
4c: 0f b6 00 movzbl (%rax),%eax
4f: 0f b6 c0 movzbl %al,%eax
52: c1 e0 10 shl $0x10,%eax
55: 09 c2 or %eax,%edx
57: 48 8b 45 e0 mov -0x20(%rbp),%rax
5b: 48 83 c0 04 add $0x4,%rax
5f: 0f b6 00 movzbl (%rax),%eax
62: 0f b6 c0 movzbl %al,%eax
65: c1 e0 18 shl $0x18,%eax
68: 09 d0 or %edx,%eax
6a: 89 45 fc mov %eax,-0x4(%rbp)
6d: 81 7d fc ff 0f 00 00 cmpl $0xfff,-0x4(%rbp)
74: 77 07 ja 7d <LzmaProps_Decode+0x7d>
76: c7 45 fc 00 10 00 00 movl $0x1000,-0x4(%rbp)
7d: 48 8b 45 e8 mov -0x18(%rbp),%rax
81: 8b 55 fc mov -0x4(%rbp),%edx
84: 89 50 0c mov %edx,0xc(%rax)
87: 48 8b 45 e0 mov -0x20(%rbp),%rax
8b: 0f b6 00 movzbl (%rax),%eax
8e: 88 45 fb mov %al,-0x5(%rbp)
91: 80 7d fb e0 cmpb $0xe0,-0x5(%rbp)
95: 76 0a jbe a1 <LzmaProps_Decode+0xa1>
97: b8 04 00 00 00 mov $0x4,%eax
9c: e9 b7 00 00 00 jmpq 158 <LzmaProps_Decode+0x158>
a1: 0f b6 4d fb movzbl -0x5(%rbp),%ecx
a5: 0f b6 d1 movzbl %cl,%edx
a8: 89 d0 mov %edx,%eax
aa: c1 e0 03 shl $0x3,%eax
ad: 29 d0 sub %edx,%eax
af: c1 e0 03 shl $0x3,%eax
b2: 01 d0 add %edx,%eax
b4: 66 c1 e8 08 shr $0x8,%ax
b8: 89 c2 mov %eax,%edx
ba: d0 ea shr %dl
bc: 89 d0 mov %edx,%eax
be: c1 e0 03 shl $0x3,%eax
c1: 01 d0 add %edx,%eax
c3: 29 c1 sub %eax,%ecx
c5: 89 ca mov %ecx,%edx
c7: 0f b6 d2 movzbl %dl,%edx
ca: 48 8b 45 e8 mov -0x18(%rbp),%rax
ce: 89 10 mov %edx,(%rax)
d0: 0f b6 45 fb movzbl -0x5(%rbp),%eax
d4: 0f b6 d0 movzbl %al,%edx
d7: 89 d0 mov %edx,%eax
d9: c1 e0 03 shl $0x3,%eax
dc: 29 d0 sub %edx,%eax
de: c1 e0 03 shl $0x3,%eax
e1: 01 d0 add %edx,%eax
e3: 66 c1 e8 08 shr $0x8,%ax
e7: d0 e8 shr %al
e9: 88 45 fb mov %al,-0x5(%rbp)
ec: 0f b6 45 fb movzbl -0x5(%rbp),%eax
f0: 0f b6 d0 movzbl %al,%edx
f3: 89 d0 mov %edx,%eax
f5: c1 e0 02 shl $0x2,%eax
f8: 01 d0 add %edx,%eax
fa: c1 e0 03 shl $0x3,%eax
fd: 01 d0 add %edx,%eax
ff: 8d 14 85 00 00 00 00 lea 0x0(,%rax,4),%edx
106: 01 d0 add %edx,%eax
108: 66 c1 e8 08 shr $0x8,%ax
10c: c0 e8 02 shr $0x2,%al
10f: 0f b6 d0 movzbl %al,%edx
112: 48 8b 45 e8 mov -0x18(%rbp),%rax
116: 89 50 08 mov %edx,0x8(%rax)
119: 0f b6 4d fb movzbl -0x5(%rbp),%ecx
11d: 0f b6 d1 movzbl %cl,%edx
120: 89 d0 mov %edx,%eax
122: c1 e0 02 shl $0x2,%eax
125: 01 d0 add %edx,%eax
127: c1 e0 03 shl $0x3,%eax
12a: 01 d0 add %edx,%eax
12c: 8d 14 85 00 00 00 00 lea 0x0(,%rax,4),%edx
133: 01 d0 add %edx,%eax
135: 66 c1 e8 08 shr $0x8,%ax
139: 89 c2 mov %eax,%edx
13b: c0 ea 02 shr $0x2,%dl
13e: 89 d0 mov %edx,%eax
140: c1 e0 02 shl $0x2,%eax
143: 01 d0 add %edx,%eax
145: 29 c1 sub %eax,%ecx
147: 89 ca mov %ecx,%edx
149: 0f b6 d2 movzbl %dl,%edx
14c: 48 8b 45 e8 mov -0x18(%rbp),%rax
150: 89 50 04 mov %edx,0x4(%rax)
153: b8 00 00 00 00 mov $0x0,%eax
158: c9 leaveq
159: c3 retq
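For readers who don't want to chase the multiply-shift sequences above, here is a C reconstruction of what this disassembly computes (a sketch inferred from the asm, not the exact edk2 source; the struct and function names are mine, chosen to mirror the stores at offsets 0x0, 0x4, 0x8 and 0xc):

```c
#include <stdint.h>

typedef struct {
    unsigned lc, lp, pb;   /* stored at offsets 0x0, 0x4, 0x8 in the asm */
    uint32_t dicSize;      /* stored at offset 0xc */
} DecodedProps;

/* Returns 0 on success, 4 on a bad header, as the disassembly does. */
static int props_decode(DecodedProps *p, const uint8_t *data, uint32_t size)
{
    uint8_t d;

    if (size < 5)                        /* cmpl $0x4 / ja */
        return 4;

    /* little-endian 32-bit dictionary size from bytes 1..4 */
    p->dicSize = (uint32_t)data[1]
               | ((uint32_t)data[2] << 8)
               | ((uint32_t)data[3] << 16)
               | ((uint32_t)data[4] << 24);
    if (p->dicSize < (1u << 12))         /* cmpl $0xfff / movl $0x1000 */
        p->dicSize = 1u << 12;

    d = data[0];
    if (d > 0xe0)                        /* cmpb $0xe0 / jbe: must be <= 224 */
        return 4;
    p->lc = d % 9;                       /* the multiply-shift sequences are */
    d /= 9;                              /* compiler-expanded div/mod by 9, 5 */
    p->pb = d / 5;
    p->lp = d % 5;
    return 0;
}
```

The classic 0x5D properties byte, for example, decodes to lc=3, lp=0, pb=2.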
Disassembly of section .text.LzmaDec_AllocateProbs2:
0000000000000000 <LzmaDec_AllocateProbs2>:
0: 55 push %rbp
1: 48 89 e5 mov %rsp,%rbp
4: 48 83 ec 30 sub $0x30,%rsp
8: 48 89 7d e8 mov %rdi,-0x18(%rbp)
c: 48 89 75 e0 mov %rsi,-0x20(%rbp)
10: 48 89 55 d8 mov %rdx,-0x28(%rbp)
14: 48 8b 45 e0 mov -0x20(%rbp),%rax
18: 8b 10 mov (%rax),%edx
1a: 48 8b 45 e0 mov -0x20(%rbp),%rax
1e: 8b 40 04 mov 0x4(%rax),%eax
21: 01 d0 add %edx,%eax
23: ba 00 03 00 00 mov $0x300,%edx
28: 89 c1 mov %eax,%ecx
2a: d3 e2 shl %cl,%edx
2c: 89 d0 mov %edx,%eax
2e: 05 36 07 00 00 add $0x736,%eax
33: 89 45 fc mov %eax,-0x4(%rbp)
36: 48 8b 45 e8 mov -0x18(%rbp),%rax
3a: 48 8b 40 10 mov 0x10(%rax),%rax
3e: 48 85 c0 test %rax,%rax
41: 74 0c je 4f <LzmaDec_AllocateProbs2+0x4f>
43: 48 8b 45 e8 mov -0x18(%rbp),%rax
47: 8b 40 60 mov 0x60(%rax),%eax
4a: 3b 45 fc cmp -0x4(%rbp),%eax
4d: 74 5b je aa <LzmaDec_AllocateProbs2+0xaa>
4f: 48 8b 55 d8 mov -0x28(%rbp),%rdx
53: 48 8b 45 e8 mov -0x18(%rbp),%rax
57: 48 89 d6 mov %rdx,%rsi
5a: 48 89 c7 mov %rax,%rdi
5d: 48 b8 00 00 00 00 00 movabs $0x0,%rax
64: 00 00 00
67: ff d0 callq *%rax
69: 48 8b 45 d8 mov -0x28(%rbp),%rax
6d: 48 8b 00 mov (%rax),%rax
70: 8b 55 fc mov -0x4(%rbp),%edx
73: 8d 0c 12 lea (%rdx,%rdx,1),%ecx
76: 48 8b 55 d8 mov -0x28(%rbp),%rdx
7a: 89 ce mov %ecx,%esi
7c: 48 89 d7 mov %rdx,%rdi
7f: ff d0 callq *%rax
81: 48 89 c2 mov %rax,%rdx
84: 48 8b 45 e8 mov -0x18(%rbp),%rax
88: 48 89 50 10 mov %rdx,0x10(%rax)
8c: 48 8b 45 e8 mov -0x18(%rbp),%rax
90: 8b 55 fc mov -0x4(%rbp),%edx
93: 89 50 60 mov %edx,0x60(%rax)
96: 48 8b 45 e8 mov -0x18(%rbp),%rax
9a: 48 8b 40 10 mov 0x10(%rax),%rax
9e: 48 85 c0 test %rax,%rax
a1: 75 07 jne aa <LzmaDec_AllocateProbs2+0xaa>
a3: b8 02 00 00 00 mov $0x2,%eax
a8: eb 05 jmp af <LzmaDec_AllocateProbs2+0xaf>
aa: b8 00 00 00 00 mov $0x0,%eax
af: c9 leaveq
b0: c3 retq
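The sizing arithmetic at the top of LzmaDec_AllocateProbs2 (the `shl %cl,%edx` of 0x300 by lc + lp, followed by `add $0x736`) matches the standard LZMA probability-array formula; a sketch, with function names of my own choosing:

```c
#include <stdint.h>

/* 0x736 (1846) fixed entries plus 0x300 (768) literal-coder entries
   shifted left by lc + lp. */
static uint32_t lzma_num_probs(uint32_t lc, uint32_t lp)
{
    return 0x736u + (0x300u << (lc + lp));
}

/* The lea (%rdx,%rdx,1) in the asm doubles the count because each
   probability entry is 2 bytes wide. */
static uint32_t lzma_probs_bytes(uint32_t lc, uint32_t lp)
{
    return 2u * lzma_num_probs(lc, lp);
}
```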
Disassembly of section .text.LzmaDec_AllocateProbs:
0000000000000000 <LzmaDec_AllocateProbs>:
0: 55 push %rbp
1: 48 89 e5 mov %rsp,%rbp
4: 48 83 ec 40 sub $0x40,%rsp
8: 48 89 7d d8 mov %rdi,-0x28(%rbp)
c: 48 89 75 d0 mov %rsi,-0x30(%rbp)
10: 89 55 cc mov %edx,-0x34(%rbp)
13: 48 89 4d c0 mov %rcx,-0x40(%rbp)
17: 8b 55 cc mov -0x34(%rbp),%edx
1a: 48 8b 4d d0 mov -0x30(%rbp),%rcx
1e: 48 8d 45 e8 lea -0x18(%rbp),%rax
22: 48 89 ce mov %rcx,%rsi
25: 48 89 c7 mov %rax,%rdi
28: 48 b8 00 00 00 00 00 movabs $0x0,%rax
2f: 00 00 00
32: ff d0 callq *%rax
34: 89 45 fc mov %eax,-0x4(%rbp)
37: 83 7d fc 00 cmpl $0x0,-0x4(%rbp)
3b: 74 05 je 42 <LzmaDec_AllocateProbs+0x42>
3d: 8b 45 fc mov -0x4(%rbp),%eax
40: eb 44 jmp 86 <LzmaDec_AllocateProbs+0x86>
42: 48 8b 55 c0 mov -0x40(%rbp),%rdx
46: 48 8d 4d e8 lea -0x18(%rbp),%rcx
4a: 48 8b 45 d8 mov -0x28(%rbp),%rax
4e: 48 89 ce mov %rcx,%rsi
51: 48 89 c7 mov %rax,%rdi
54: 48 b8 00 00 00 00 00 movabs $0x0,%rax
5b: 00 00 00
5e: ff d0 callq *%rax
60: 89 45 f8 mov %eax,-0x8(%rbp)
63: 83 7d f8 00 cmpl $0x0,-0x8(%rbp)
67: 74 05 je 6e <LzmaDec_AllocateProbs+0x6e>
69: 8b 45 f8 mov -0x8(%rbp),%eax
6c: eb 18 jmp 86 <LzmaDec_AllocateProbs+0x86>
6e: 48 8b 4d d8 mov -0x28(%rbp),%rcx
72: 48 8b 45 e8 mov -0x18(%rbp),%rax
76: 48 8b 55 f0 mov -0x10(%rbp),%rdx
7a: 48 89 01 mov %rax,(%rcx)
7d: 48 89 51 08 mov %rdx,0x8(%rcx)
81: b8 00 00 00 00 mov $0x0,%eax
86: c9 leaveq
87: c3 retq
Disassembly of section .text.LzmaDec_Allocate:
0000000000000000 <LzmaDec_Allocate>:
0: 55 push %rbp
1: 48 89 e5 mov %rsp,%rbp
4: 48 83 ec 40 sub $0x40,%rsp
8: 48 89 7d d8 mov %rdi,-0x28(%rbp)
c: 48 89 75 d0 mov %rsi,-0x30(%rbp)
10: 89 55 cc mov %edx,-0x34(%rbp)
13: 48 89 4d c0 mov %rcx,-0x40(%rbp)
17: 8b 55 cc mov -0x34(%rbp),%edx
1a: 48 8b 4d d0 mov -0x30(%rbp),%rcx
1e: 48 8d 45 e4 lea -0x1c(%rbp),%rax
22: 48 89 ce mov %rcx,%rsi
25: 48 89 c7 mov %rax,%rdi
28: 48 b8 00 00 00 00 00 movabs $0x0,%rax
2f: 00 00 00
32: ff d0 callq *%rax
34: 89 45 fc mov %eax,-0x4(%rbp)
37: 83 7d fc 00 cmpl $0x0,-0x4(%rbp)
3b: 74 08 je 45 <LzmaDec_Allocate+0x45>
3d: 8b 45 fc mov -0x4(%rbp),%eax
40: e9 d8 00 00 00 jmpq 11d <LzmaDec_Allocate+0x11d>
45: 48 8b 55 c0 mov -0x40(%rbp),%rdx
49: 48 8d 4d e4 lea -0x1c(%rbp),%rcx
4d: 48 8b 45 d8 mov -0x28(%rbp),%rax
51: 48 89 ce mov %rcx,%rsi
54: 48 89 c7 mov %rax,%rdi
57: 48 b8 00 00 00 00 00 movabs $0x0,%rax
5e: 00 00 00
61: ff d0 callq *%rax
63: 89 45 f8 mov %eax,-0x8(%rbp)
66: 83 7d f8 00 cmpl $0x0,-0x8(%rbp)
6a: 74 08 je 74 <LzmaDec_Allocate+0x74>
6c: 8b 45 f8 mov -0x8(%rbp),%eax
6f: e9 a9 00 00 00 jmpq 11d <LzmaDec_Allocate+0x11d>
74: 8b 45 f0 mov -0x10(%rbp),%eax
77: 89 45 f4 mov %eax,-0xc(%rbp)
7a: 48 8b 45 d8 mov -0x28(%rbp),%rax
7e: 48 8b 40 18 mov 0x18(%rax),%rax
82: 48 85 c0 test %rax,%rax
85: 74 0c je 93 <LzmaDec_Allocate+0x93>
87: 48 8b 45 d8 mov -0x28(%rbp),%rax
8b: 8b 40 34 mov 0x34(%rax),%eax
8e: 3b 45 f4 cmp -0xc(%rbp),%eax
91: 74 68 je fb <LzmaDec_Allocate+0xfb>
93: 48 8b 55 c0 mov -0x40(%rbp),%rdx
97: 48 8b 45 d8 mov -0x28(%rbp),%rax
9b: 48 89 d6 mov %rdx,%rsi
9e: 48 89 c7 mov %rax,%rdi
a1: 48 b8 00 00 00 00 00 movabs $0x0,%rax
a8: 00 00 00
ab: ff d0 callq *%rax
ad: 48 8b 45 c0 mov -0x40(%rbp),%rax
b1: 48 8b 00 mov (%rax),%rax
b4: 8b 4d f4 mov -0xc(%rbp),%ecx
b7: 48 8b 55 c0 mov -0x40(%rbp),%rdx
bb: 89 ce mov %ecx,%esi
bd: 48 89 d7 mov %rdx,%rdi
c0: ff d0 callq *%rax
c2: 48 89 c2 mov %rax,%rdx
c5: 48 8b 45 d8 mov -0x28(%rbp),%rax
c9: 48 89 50 18 mov %rdx,0x18(%rax)
cd: 48 8b 45 d8 mov -0x28(%rbp),%rax
d1: 48 8b 40 18 mov 0x18(%rax),%rax
d5: 48 85 c0 test %rax,%rax
d8: 75 21 jne fb <LzmaDec_Allocate+0xfb>
da: 48 8b 55 c0 mov -0x40(%rbp),%rdx
de: 48 8b 45 d8 mov -0x28(%rbp),%rax
e2: 48 89 d6 mov %rdx,%rsi
e5: 48 89 c7 mov %rax,%rdi
e8: 48 b8 00 00 00 00 00 movabs $0x0,%rax
ef: 00 00 00
f2: ff d0 callq *%rax
f4: b8 02 00 00 00 mov $0x2,%eax
f9: eb 22 jmp 11d <LzmaDec_Allocate+0x11d>
fb: 48 8b 45 d8 mov -0x28(%rbp),%rax
ff: 8b 55 f4 mov -0xc(%rbp),%edx
102: 89 50 34 mov %edx,0x34(%rax)
105: 48 8b 4d d8 mov -0x28(%rbp),%rcx
109: 48 8b 45 e4 mov -0x1c(%rbp),%rax
10d: 48 8b 55 ec mov -0x14(%rbp),%rdx
111: 48 89 01 mov %rax,(%rcx)
114: 48 89 51 08 mov %rdx,0x8(%rcx)
118: b8 00 00 00 00 mov $0x0,%eax
11d: c9 leaveq
11e: c3 retq
Disassembly of section .text.LzmaDecode:
0000000000000000 <LzmaDecode>:
0: 55 push %rbp
1: 48 89 e5 mov %rsp,%rbp
4: 48 81 ec c0 00 00 00 sub $0xc0,%rsp
b: 48 89 bd 68 ff ff ff mov %rdi,-0x98(%rbp)
12: 48 89 b5 60 ff ff ff mov %rsi,-0xa0(%rbp)
19: 48 89 95 58 ff ff ff mov %rdx,-0xa8(%rbp)
20: 48 89 8d 50 ff ff ff mov %rcx,-0xb0(%rbp)
27: 4c 89 85 48 ff ff ff mov %r8,-0xb8(%rbp)
2e: 44 89 8d 44 ff ff ff mov %r9d,-0xbc(%rbp)
35: 48 8b 85 50 ff ff ff mov -0xb0(%rbp),%rax
3c: 8b 00 mov (%rax),%eax
3e: 89 45 f8 mov %eax,-0x8(%rbp)
41: 48 8b 85 60 ff ff ff mov -0xa0(%rbp),%rax
48: 8b 00 mov (%rax),%eax
4a: 89 45 f4 mov %eax,-0xc(%rbp)
4d: 48 8b 85 60 ff ff ff mov -0xa0(%rbp),%rax
54: c7 00 00 00 00 00 movl $0x0,(%rax)
5a: 48 8b 85 60 ff ff ff mov -0xa0(%rbp),%rax
61: 8b 10 mov (%rax),%edx
63: 48 8b 85 50 ff ff ff mov -0xb0(%rbp),%rax
6a: 89 10 mov %edx,(%rax)
6c: 83 7d f8 04 cmpl $0x4,-0x8(%rbp)
70: 77 0a ja 7c <LzmaDecode+0x7c>
72: b8 06 00 00 00 mov $0x6,%eax
77: e9 f4 00 00 00 jmpq 170 <LzmaDecode+0x170>
7c: 48 c7 45 88 00 00 00 movq $0x0,-0x78(%rbp)
83: 00
84: 48 c7 45 80 00 00 00 movq $0x0,-0x80(%rbp)
8b: 00
8c: 48 8b 4d 20 mov 0x20(%rbp),%rcx
90: 8b 95 44 ff ff ff mov -0xbc(%rbp),%edx
96: 48 8b b5 48 ff ff ff mov -0xb8(%rbp),%rsi
9d: 48 8d 85 70 ff ff ff lea -0x90(%rbp),%rax
a4: 48 89 c7 mov %rax,%rdi
a7: 48 b8 00 00 00 00 00 movabs $0x0,%rax
ae: 00 00 00
b1: ff d0 callq *%rax
b3: 89 45 fc mov %eax,-0x4(%rbp)
b6: 83 7d fc 00 cmpl $0x0,-0x4(%rbp)
ba: 74 08 je c4 <LzmaDecode+0xc4>
bc: 8b 45 fc mov -0x4(%rbp),%eax
bf: e9 ac 00 00 00 jmpq 170 <LzmaDecode+0x170>
c4: 48 8b 85 68 ff ff ff mov -0x98(%rbp),%rax
cb: 48 89 45 88 mov %rax,-0x78(%rbp)
cf: 8b 45 f4 mov -0xc(%rbp),%eax
d2: 89 45 a4 mov %eax,-0x5c(%rbp)
d5: 48 8d 85 70 ff ff ff lea -0x90(%rbp),%rax
dc: 48 89 c7 mov %rax,%rdi
df: 48 b8 00 00 00 00 00 movabs $0x0,%rax
e6: 00 00 00
e9: ff d0 callq *%rax
eb: 48 8b 85 50 ff ff ff mov -0xb0(%rbp),%rax
f2: 8b 55 f8 mov -0x8(%rbp),%edx
f5: 89 10 mov %edx,(%rax)
f7: 48 8b 7d 18 mov 0x18(%rbp),%rdi
fb: 48 8b 8d 50 ff ff ff mov -0xb0(%rbp),%rcx
102: 48 8b 95 58 ff ff ff mov -0xa8(%rbp),%rdx
109: 8b 75 f4 mov -0xc(%rbp),%esi
10c: 48 8d 85 70 ff ff ff lea -0x90(%rbp),%rax
113: 49 89 f9 mov %rdi,%r9
116: 44 8b 45 10 mov 0x10(%rbp),%r8d
11a: 48 89 c7 mov %rax,%rdi
11d: 48 b8 00 00 00 00 00 movabs $0x0,%rax
124: 00 00 00
127: ff d0 callq *%rax
129: 89 45 fc mov %eax,-0x4(%rbp)
12c: 83 7d fc 00 cmpl $0x0,-0x4(%rbp)
130: 75 12 jne 144 <LzmaDecode+0x144>
132: 48 8b 45 18 mov 0x18(%rbp),%rax
136: 8b 00 mov (%rax),%eax
138: 83 f8 03 cmp $0x3,%eax
13b: 75 07 jne 144 <LzmaDecode+0x144>
13d: c7 45 fc 06 00 00 00 movl $0x6,-0x4(%rbp)
144: 8b 55 a0 mov -0x60(%rbp),%edx
147: 48 8b 85 60 ff ff ff mov -0xa0(%rbp),%rax
14e: 89 10 mov %edx,(%rax)
150: 48 8b 55 20 mov 0x20(%rbp),%rdx
154: 48 8d 85 70 ff ff ff lea -0x90(%rbp),%rax
15b: 48 89 d6 mov %rdx,%rsi
15e: 48 89 c7 mov %rax,%rdi
161: 48 b8 00 00 00 00 00 movabs $0x0,%rax
168: 00 00 00
16b: ff d0 callq *%rax
16d: 8b 45 fc mov -0x4(%rbp),%eax
170: c9 leaveq
171: c3 retq
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: OVMF very slow on AMD
2016-07-15 13:48 ` Konrad Rzeszutek Wilk
2016-07-15 15:22 ` Boris Ostrovsky
2016-07-18 14:10 ` Anthony PERARD
@ 2016-07-18 15:09 ` Anthony PERARD
2016-07-22 10:40 ` Dario Faggioli
2 siblings, 1 reply; 19+ messages in thread
From: Anthony PERARD @ 2016-07-18 15:09 UTC (permalink / raw)
To: Konrad Rzeszutek Wilk; +Cc: xen-devel
On Fri, Jul 15, 2016 at 09:48:31AM -0400, Konrad Rzeszutek Wilk wrote:
> On Thu, Jul 14, 2016 at 04:53:07PM +0100, Anthony PERARD wrote:
> > So, this loop takes about 1 minute on my AMD machine (AMD Opteron(tm)
> > Processor 4284), and less that 1 second on an Intel machine.
> > If I compile OVMF as a 32bit binary, the loop is faster, but still takes
> > about 30s on AMD. (that's true for both OvmfIa32 and OvmfIa32X64 which
> > is 32bit bootstrap, but can start 64bit OS.)
> > Another thing, I tried the same binary (64bit) with KVM, and OVMF seems
> > fast.
> >
> >
> > So, any idee of what I could investigate?
>
> I presume we emulating some operation on AMD but not on Intel.
>
> However you say xentrace shows nothing - which would imply we are not
> incurring VMEXITs to deal with this. Hmm.. Could it be what we
> expose to the guest (the CPUID flags?) Somehow we are missing one on AMD
> and it takes a slower route?
Since the same binary runs much faster in a KVM guest, I have compared
the guest's cpuinfo between KVM and Xen (via /proc/cpuinfo), and Xen
exposes more flags:
$ dwdiff procinfo_guest_ovmf_kvm procinfo_guest_ovmf_xen
processor : 0
vendor_id : AuthenticAMD
cpu family : [-6-] {+21+}
model : [-6-] {+1+}
model name : [-QEMU Virtual CPU version 2.5+-] {+AMD Opteron(tm) Processor 4284+}
stepping : [-3-] {+2+}
microcode : [-0x1000065-] {+0x600063d+}
cpu MHz : [-3000.034-] {+3000.112+}
cache size : [-512-] {+2048+} KB
{+physical id : 0
siblings : 1
core id : 0
cpu cores : 1
apicid : 0
initial apicid : 0+}
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu {+vme+} de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 {+ht+} syscall nx {+mmxext fxsr_opt pdpe1gb rdtscp+} lm {+rep_good+} nopl {+extd_apicid+} pni {+pclmulqdq ssse3+} cx16 {+sse4_1 sse4_2+} x2apic {+popcnt aes xsave avx+} hypervisor lahf_lm {+cr8_legacy abm sse4a misalignsse 3dnowprefetch ibs xop lwp fma4 arat+}
bogomips : [-6002.07-] {+6002.23+}
TLB size : [-1024-] {+1536+} 4K pages
clflush size : 64
cache_alignment : 64
address sizes : [-40-] {+48+} bits physical, 48 bits virtual
power management:
--
Anthony PERARD
* Re: OVMF very slow on AMD
2016-07-18 15:09 ` Anthony PERARD
@ 2016-07-22 10:40 ` Dario Faggioli
0 siblings, 0 replies; 19+ messages in thread
From: Dario Faggioli @ 2016-07-22 10:40 UTC (permalink / raw)
To: Anthony PERARD, Konrad Rzeszutek Wilk; +Cc: xen-devel
On Mon, 2016-07-18 at 16:09 +0100, Anthony PERARD wrote:
>
> $ dwdiff procinfo_guest_ovmf_kvm procinfo_guest_ovmf_xen
> processor : 0
> vendor_id : AuthenticAMD
> cpu family : [-6-] {+21+}
> model : [-6-] {+1+}
> model name : [-QEMU Virtual CPU version 2.5+-] {+AMD
> Opteron(tm) Processor 4284+}
> stepping : [-3-] {+2+}
> microcode : [-0x1000065-] {+0x600063d+}
>
Is it ok / expected for this (microcode, I mean) to be different?
BTW, I'm super ignorant about all that's being discussed here, so this
may well be completely off!
Regards,
Dario
--
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)
* Re: OVMF very slow on AMD
2016-07-15 15:22 ` Boris Ostrovsky
@ 2016-07-27 11:08 ` Anthony PERARD
2016-07-27 11:35 ` Anthony PERARD
0 siblings, 1 reply; 19+ messages in thread
From: Anthony PERARD @ 2016-07-27 11:08 UTC (permalink / raw)
To: Boris Ostrovsky; +Cc: xen-devel
On Fri, Jul 15, 2016 at 11:22:45AM -0400, Boris Ostrovsky wrote:
> On 07/15/2016 09:48 AM, Konrad Rzeszutek Wilk wrote:
> > On Thu, Jul 14, 2016 at 04:53:07PM +0100, Anthony PERARD wrote:
> >> Hi,
> >>
> >> I've been investigating why OVMF is very slow in a Xen guest on an AMD
> >> host. This, I think, is the current failure that osstest is having.
> >>
> >> I've only look at a specific part of OVMF where the slowdown is very
> >> obvious on AMD vs Intel, the decompression.
> >>
> >> This is what I get on AMD, via the Xen serial console port:
> >> Invoking OVMF ...
> >> SecCoreStartupWithStack(0xFFFCC000, 0x818000)
> >> then, nothing for almost 1 minute, then the rest of the boot process.
> >> The same binary on Intel, the output does not stay "stuck" here.
> >>
> >> I could pin-point which part of the boot process takes a long time, but
> >> there is not anything obvious in there, just a loop that decompress the
> >> ovmf binary, with plenty of iteration.
> >> I tried `xentrace', but the trace does not show anything wrong, there is
> >> just an interrupt from time to time. I've tried to had some tracepoint
> >> inside this decompresion function in OVMF, but that did not reveal
> >> anything either, maybe there where not at the right place.
> >>
> >> Anyway, the function is: LzmaDec_DecodeReal() from the file
> >> IntelFrameworkModulePkg/Library/LzmaCustomDecompressLib/Sdk/C/LzmaDec.c
> >> you can get the assembly from this object:
> >> Build/OvmfX64/DEBUG_GCC49/X64/IntelFrameworkModulePkg/Library/LzmaCustomDecompressLib/LzmaCustomDecompressLib/OUTPUT/Sdk/C/LzmaDec.obj
> >> This is with OVMF upstream (https://github.com/tianocore/edk2).
>
> I don't know whether it's possible but can you extract this loop somehow
> and run it on baremetal? Or run the whole thing on baremetal.
I think I've managed to run the same function, with the same input, as a
Linux process.
And, even within the guest, it takes about 0.3s to run, versus about 60s
when OVMF boots.
Could it be that, for some reason, access to the memory is uncached?
Only on AMD? And that, later, Linux does the right thing?
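Measuring the extracted loop as a Linux process could look roughly like this; a sketch only, where workload() is a hypothetical stand-in for the extracted LzmaDec_DecodeReal() call fed with the same compressed blob:

```c
#define _POSIX_C_SOURCE 199309L  /* for clock_gettime on strict compilers */
#include <time.h>

/* Wall-clock timing wrapper around a single run of fn(). */
static double elapsed_seconds(void (*fn)(void))
{
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    fn();
    clock_gettime(CLOCK_MONOTONIC, &t1);
    return (double)(t1.tv_sec - t0.tv_sec)
         + (double)(t1.tv_nsec - t0.tv_nsec) / 1e9;
}

/* Hypothetical stand-in for the real decompression loop. */
static void workload(void)
{
    volatile unsigned long sink = 0;
    unsigned long i;
    for (i = 0; i < 1000000; i++)
        sink += i;
}
```

Running the same code both on the host and inside the guest lets you compare against the ~60s OVMF takes at boot.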
I can try to describe how OVMF is setting up the memory.
> Also a newer compiler might potentially make a difference (if you are
> running on something older).
I have gcc (GCC) 6.1.1 20160707. I think that's new enough, or maybe too
new.
--
Anthony PERARD
* Re: OVMF very slow on AMD
2016-07-27 11:08 ` Anthony PERARD
@ 2016-07-27 11:35 ` Anthony PERARD
2016-07-27 19:45 ` Boris Ostrovsky
0 siblings, 1 reply; 19+ messages in thread
From: Anthony PERARD @ 2016-07-27 11:35 UTC (permalink / raw)
To: Boris Ostrovsky; +Cc: xen-devel
On Wed, Jul 27, 2016 at 12:08:04PM +0100, Anthony PERARD wrote:
> I can try to describe how OVMF is setting up the memory.
From the start of the day:
setup gdt
cr0 = 0x40000023
jump to 32bit
cr4 = 0x640
setup page tables:
page directory attributes: (PAGE_ACCESSED + PAGE_READ_WRITE + PAGE_PRESENT)
page tables attributes: (PAGE_2M_MBO + PAGE_ACCESSED + PAGE_DIRTY + PAGE_READ_WRITE + PAGE_PRESENT)
I think they map the whole 4GB with 2MB pages.
set cr3.
enable PAE
set LME
set PG
jump to 64bit
I think that's it, before running this painfully slow decompression
function.
Is there something wrong, or maybe missing? Or is it that the hypervisor
or the hardware does not do the right thing?
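For reference, the CR0 value above can be decoded with the bit definitions from the architecture manuals (a sketch; the macro names are the manuals' mnemonics, not OVMF identifiers):

```c
#include <stdint.h>

/* CR0 bit positions per the Intel SDM / AMD APM. */
#define CR0_PE (1u << 0)   /* protected mode enable */
#define CR0_MP (1u << 1)   /* monitor coprocessor */
#define CR0_NE (1u << 5)   /* numeric error */
#define CR0_NW (1u << 29)  /* not write-through */
#define CR0_CD (1u << 30)  /* cache disable */

/* CD=1 with NW=0 means "caching disabled, no cache fills" on both
   vendors -- which is what 0x40000023 selects. */
static int cr0_cache_disabled(uint32_t cr0)
{
    return (cr0 & CR0_CD) && !(cr0 & CR0_NW);
}
```

Decoding 0x40000023 this way shows PE, MP and NE set plus CD set, which would be consistent with the decompression loop running from effectively uncached memory.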
Thanks,
--
Anthony PERARD
* Re: OVMF very slow on AMD
2016-07-27 11:35 ` Anthony PERARD
@ 2016-07-27 19:45 ` Boris Ostrovsky
2016-07-28 10:18 ` Anthony PERARD
0 siblings, 1 reply; 19+ messages in thread
From: Boris Ostrovsky @ 2016-07-27 19:45 UTC (permalink / raw)
To: Anthony PERARD; +Cc: xen-devel
On 07/27/2016 07:35 AM, Anthony PERARD wrote:
> On Wed, Jul 27, 2016 at 12:08:04PM +0100, Anthony PERARD wrote:
>> I can try to describe how OVMF is setting up the memory.
> From the start of the day:
> setup gdt
> cr0 = 0x40000023
I think this is slightly odd, with bit 30 (cache disable) set. I'd
suspect that this would affect both Intel and AMD though.
Can you try clearing this bit?
-boris
>
> jump to 32bit
> cr4 = 0x640
>
> setup page tables:
> page directory attributes: (PAGE_ACCESSED + PAGE_READ_WRITE + PAGE_PRESENT)
> page tables attributes: (PAGE_2M_MBO + PAGE_ACCESSED + PAGE_DIRTY + PAGE_READ_WRITE + PAGE_PRESENT)
> I think they map the all 4GB with 2MB pages.
> set cr3.
>
> enable PAE
> set LME
> set PG
>
> jump to 64bit
>
> I think that's it, before running this painfully slow decompression
> function.
>
> Is there something wrong, or maybe missing? Is the hypervisor or maybe
> the hardware does not do the right thing?
>
> Thanks,
>
* Re: OVMF very slow on AMD
2016-07-27 19:45 ` Boris Ostrovsky
@ 2016-07-28 10:18 ` Anthony PERARD
2016-07-28 10:43 ` George Dunlap
0 siblings, 1 reply; 19+ messages in thread
From: Anthony PERARD @ 2016-07-28 10:18 UTC (permalink / raw)
To: Boris Ostrovsky; +Cc: xen-devel
On Wed, Jul 27, 2016 at 03:45:23PM -0400, Boris Ostrovsky wrote:
> On 07/27/2016 07:35 AM, Anthony PERARD wrote:
> > On Wed, Jul 27, 2016 at 12:08:04PM +0100, Anthony PERARD wrote:
> >> I can try to describe how OVMF is setting up the memory.
> > From the start of the day:
> > setup gdt
> > cr0 = 0x40000023
>
> I think this is slightly odd, with bit 30 (cache disable) set. I'd
> suspect that this would affect both Intel and AMD though.
>
> Can you try clearing this bit?
That works...
I wonder why it does not appear to affect Intel or KVM.
Thanks,
--
Anthony PERARD
* Re: OVMF very slow on AMD
2016-07-28 10:18 ` Anthony PERARD
@ 2016-07-28 10:43 ` George Dunlap
2016-07-28 10:54 ` Andrew Cooper
0 siblings, 1 reply; 19+ messages in thread
From: George Dunlap @ 2016-07-28 10:43 UTC (permalink / raw)
To: Anthony PERARD; +Cc: Boris Ostrovsky, xen-devel
On Thu, Jul 28, 2016 at 11:18 AM, Anthony PERARD
<anthony.perard@citrix.com> wrote:
> On Wed, Jul 27, 2016 at 03:45:23PM -0400, Boris Ostrovsky wrote:
>> On 07/27/2016 07:35 AM, Anthony PERARD wrote:
>> > On Wed, Jul 27, 2016 at 12:08:04PM +0100, Anthony PERARD wrote:
>> >> I can try to describe how OVMF is setting up the memory.
>> > From the start of the day:
>> > setup gdt
>> > cr0 = 0x40000023
>>
>> I think this is slightly odd, with bit 30 (cache disable) set. I'd
>> suspect that this would affect both Intel and AMD though.
>>
>> Can you try clearing this bit?
>
> That works...
>
> I wonder why it does not appear to affect Intel or KVM.
Are those bits hard-coded, or are they set based on the hardware
that's available?
Is it possible that the particular combination of CPUID bits presented
by Xen on AMD are causing a different value to be written?
Or is it possible that the cache disable bit is being ignored (by Xen)
on Intel and KVM?
-George
* Re: OVMF very slow on AMD
2016-07-28 10:43 ` George Dunlap
@ 2016-07-28 10:54 ` Andrew Cooper
2016-07-28 11:28 ` Anthony PERARD
2016-07-28 15:17 ` Boris Ostrovsky
0 siblings, 2 replies; 19+ messages in thread
From: Andrew Cooper @ 2016-07-28 10:54 UTC (permalink / raw)
To: George Dunlap, Anthony PERARD; +Cc: Boris Ostrovsky, xen-devel
On 28/07/16 11:43, George Dunlap wrote:
> On Thu, Jul 28, 2016 at 11:18 AM, Anthony PERARD
> <anthony.perard@citrix.com> wrote:
>> On Wed, Jul 27, 2016 at 03:45:23PM -0400, Boris Ostrovsky wrote:
>>> On 07/27/2016 07:35 AM, Anthony PERARD wrote:
>>>> On Wed, Jul 27, 2016 at 12:08:04PM +0100, Anthony PERARD wrote:
>>>>> I can try to describe how OVMF is setting up the memory.
>>>> From the start of the day:
>>>> setup gdt
>>>> cr0 = 0x40000023
>>> I think this is slightly odd, with bit 30 (cache disable) set. I'd
>>> suspect that this would affect both Intel and AMD though.
>>>
>>> Can you try clearing this bit?
>> That works...
>>
>> I wonder why it does not appear to affect Intel or KVM.
> Are those bits hard-coded, or are they set based on the hardware
> that's available?
>
> Is it possible that the particular combination of CPUID bits presented
> by Xen on AMD are causing a different value to be written?
>
> Or is it possible that the cache disable bit is being ignored (by Xen)
> on Intel and KVM?
If a guest has no hardware, then it has no reason to actually disable
caches. We should have logic to catch this and avoid actually disabling
caches when the guest asks for it.
~Andrew
* Re: OVMF very slow on AMD
2016-07-28 10:54 ` Andrew Cooper
@ 2016-07-28 11:28 ` Anthony PERARD
2016-07-28 15:17 ` Boris Ostrovsky
1 sibling, 0 replies; 19+ messages in thread
From: Anthony PERARD @ 2016-07-28 11:28 UTC (permalink / raw)
To: Andrew Cooper; +Cc: Boris Ostrovsky, George Dunlap, xen-devel
On Thu, Jul 28, 2016 at 11:54:27AM +0100, Andrew Cooper wrote:
> On 28/07/16 11:43, George Dunlap wrote:
> > On Thu, Jul 28, 2016 at 11:18 AM, Anthony PERARD
> > <anthony.perard@citrix.com> wrote:
> >> On Wed, Jul 27, 2016 at 03:45:23PM -0400, Boris Ostrovsky wrote:
> >>> On 07/27/2016 07:35 AM, Anthony PERARD wrote:
> >>>> On Wed, Jul 27, 2016 at 12:08:04PM +0100, Anthony PERARD wrote:
> >>>>> I can try to describe how OVMF is setting up the memory.
> >>>> From the start of the day:
> >>>> setup gdt
> >>>> cr0 = 0x40000023
> >>> I think this is slightly odd, with bit 30 (cache disable) set. I'd
> >>> suspect that this would affect both Intel and AMD though.
> >>>
> >>> Can you try clearing this bit?
> >> That works...
> >>
> >> I wonder why it does not appear to affect Intel or KVM.
> > Are those bits hard-coded, or are they set based on the hardware
> > that's available?
> >
> > Is it possible that the particular combination of CPUID bits presented
> > by Xen on AMD are causing a different value to be written?
> >
> > Or is it possible that the cache disable bit is being ignored (by Xen)
> > on Intel and KVM?
>
> If a guest has no hardware, then it has no reason to actually disable
> caches. We should have logic to catch this an avoid actually disabling
> caches when the guest asks for it.
For KVM/QEMU, the OVMF binary is loaded into a 'CFI parallel flash', which
is mapped via MMIO, so it is executed from device memory. The memory
region is marked as rom_device.
--
Anthony PERARD
* Re: OVMF very slow on AMD
2016-07-28 10:54 ` Andrew Cooper
2016-07-28 11:28 ` Anthony PERARD
@ 2016-07-28 15:17 ` Boris Ostrovsky
2016-07-28 15:51 ` Andrew Cooper
1 sibling, 1 reply; 19+ messages in thread
From: Boris Ostrovsky @ 2016-07-28 15:17 UTC (permalink / raw)
To: Andrew Cooper, George Dunlap, Anthony PERARD; +Cc: xen-devel
On 07/28/2016 06:54 AM, Andrew Cooper wrote:
> On 28/07/16 11:43, George Dunlap wrote:
>> On Thu, Jul 28, 2016 at 11:18 AM, Anthony PERARD
>> <anthony.perard@citrix.com> wrote:
>>> On Wed, Jul 27, 2016 at 03:45:23PM -0400, Boris Ostrovsky wrote:
>>>> On 07/27/2016 07:35 AM, Anthony PERARD wrote:
>>>>> On Wed, Jul 27, 2016 at 12:08:04PM +0100, Anthony PERARD wrote:
>>>>>> I can try to describe how OVMF is setting up the memory.
>>>>> From the start of the day:
>>>>> setup gdt
>>>>> cr0 = 0x40000023
>>>> I think this is slightly odd, with bit 30 (cache disable) set. I'd
>>>> suspect that this would affect both Intel and AMD though.
>>>>
>>>> Can you try clearing this bit?
>>> That works...
>>>
>>> I wonder why it does not appear to affect Intel or KVM.
>> Are those bits hard-coded, or are they set based on the hardware
>> that's available?
>>
>> Is it possible that the particular combination of CPUID bits presented
>> by Xen on AMD are causing a different value to be written?
>>
>> Or is it possible that the cache disable bit is being ignored (by Xen)
>> on Intel and KVM?
> If a guest has no hardware, then it has no reason to actually disable
> caches. We should have logic to catch this an avoid actually disabling
> caches when the guest asks for it.
Is this really safe to do? Can't a guest decide to disable cache to
avoid having to deal with coherency in SW?
As far as Intel vs AMD implementation in Xen, we have vmx_handle_cd()
but no corresponding SVM code. Could it be that we need to set gPAT, for
example?
-boris
* Re: OVMF very slow on AMD
2016-07-28 15:17 ` Boris Ostrovsky
@ 2016-07-28 15:51 ` Andrew Cooper
2016-07-28 19:25 ` Boris Ostrovsky
0 siblings, 1 reply; 19+ messages in thread
From: Andrew Cooper @ 2016-07-28 15:51 UTC (permalink / raw)
To: Boris Ostrovsky, George Dunlap, Anthony PERARD; +Cc: xen-devel
On 28/07/16 16:17, Boris Ostrovsky wrote:
> On 07/28/2016 06:54 AM, Andrew Cooper wrote:
>> On 28/07/16 11:43, George Dunlap wrote:
>>> On Thu, Jul 28, 2016 at 11:18 AM, Anthony PERARD
>>> <anthony.perard@citrix.com> wrote:
>>>> On Wed, Jul 27, 2016 at 03:45:23PM -0400, Boris Ostrovsky wrote:
>>>>> On 07/27/2016 07:35 AM, Anthony PERARD wrote:
>>>>>> On Wed, Jul 27, 2016 at 12:08:04PM +0100, Anthony PERARD wrote:
>>>>>>> I can try to describe how OVMF is setting up the memory.
>>>>>> From the start of the day:
>>>>>> setup gdt
>>>>>> cr0 = 0x40000023
>>>>> I think this is slightly odd, with bit 30 (cache disable) set. I'd
>>>>> suspect that this would affect both Intel and AMD though.
>>>>>
>>>>> Can you try clearing this bit?
>>>> That works...
>>>>
>>>> I wonder why it does not appear to affect Intel or KVM.
>>> Are those bits hard-coded, or are they set based on the hardware
>>> that's available?
>>>
>>> Is it possible that the particular combination of CPUID bits presented
>>> by Xen on AMD are causing a different value to be written?
>>>
>>> Or is it possible that the cache disable bit is being ignored (by Xen)
>>> on Intel and KVM?
>> If a guest has no hardware, then it has no reason to actually disable
>> caches. We should have logic to catch this an avoid actually disabling
>> caches when the guest asks for it.
> Is this really safe to do? Can't a guest decide to disable cache to
> avoid having to deal with coherency in SW?
What SW coherency issue do you think can be solved with disabling the cache?
x86 has strict ordering of writes and reads with respect to each other.
The only case which can be out of order is reads promoted ahead of
unaliasing writes.
>
> As far as Intel vs AMD implementation in Xen, we have vmx_handle_cd()
> but no corresponding SVM code. Could it be that we need to set gPAT, for
> example?
A better approach would be to find out why OVMF insists on disabling
caches at all. Even if we optimise the non-PCI-device case in the
hypervisor, a passthrough case will still run like treacle if caches are
disabled.
~Andrew
* Re: OVMF very slow on AMD
2016-07-28 15:51 ` Andrew Cooper
@ 2016-07-28 19:25 ` Boris Ostrovsky
2016-07-28 19:44 ` Andrew Cooper
0 siblings, 1 reply; 19+ messages in thread
From: Boris Ostrovsky @ 2016-07-28 19:25 UTC (permalink / raw)
To: Andrew Cooper, George Dunlap, Anthony PERARD; +Cc: xen-devel
On 07/28/2016 11:51 AM, Andrew Cooper wrote:
> On 28/07/16 16:17, Boris Ostrovsky wrote:
>> On 07/28/2016 06:54 AM, Andrew Cooper wrote:
>>> On 28/07/16 11:43, George Dunlap wrote:
>>>> On Thu, Jul 28, 2016 at 11:18 AM, Anthony PERARD
>>>> <anthony.perard@citrix.com> wrote:
>>>>> On Wed, Jul 27, 2016 at 03:45:23PM -0400, Boris Ostrovsky wrote:
>>>>>> On 07/27/2016 07:35 AM, Anthony PERARD wrote:
>>>>>>> On Wed, Jul 27, 2016 at 12:08:04PM +0100, Anthony PERARD wrote:
>>>>>>>> I can try to describe how OVMF is setting up the memory.
>>>>>>> From the start of the day:
>>>>>>> setup gdt
>>>>>>> cr0 = 0x40000023
>>>>>> I think this is slightly odd, with bit 30 (cache disable) set. I'd
>>>>>> suspect that this would affect both Intel and AMD though.
>>>>>>
>>>>>> Can you try clearing this bit?
>>>>> That works...
>>>>>
>>>>> I wonder why it does not appear to affect Intel or KVM.
>>>> Are those bits hard-coded, or are they set based on the hardware
>>>> that's available?
>>>>
>>>> Is it possible that the particular combination of CPUID bits presented
>>>> by Xen on AMD are causing a different value to be written?
>>>>
>>>> Or is it possible that the cache disable bit is being ignored (by Xen)
>>>> on Intel and KVM?
>>> If a guest has no hardware, then it has no reason to actually disable
>>> caches. We should have logic to catch this and avoid actually disabling
>>> caches when the guest asks for it.
>> Is this really safe to do? Can't a guest decide to disable cache to
>> avoid having to deal with coherency in SW?
> What SW coherency issue do you think can be solved with disabling the cache?
>
> x86 has strict ordering of writes and reads with respect to each other.
> The only case which can be out of order is reads promoted ahead of
> unaliasing writes.
Right, that was not a good example.
>
>> As far as Intel vs AMD implementation in Xen, we have vmx_handle_cd()
>> but no corresponding SVM code. Could it be that we need to set gPAT, for
>> example?
> A better approach would be to find out why ovmf insists on disabling
> caches at all. Even if we optimise the non-PCI-device case in the
> hypervisor, a passthrough case will still run like treacle if caches are
> disabled.
True, we should understand why OVMF does this. But I think we also need
to understand what makes Intel run faster. Or is it already clear from
vmx_handle_cd()?
-boris
* Re: OVMF very slow on AMD
2016-07-28 19:25 ` Boris Ostrovsky
@ 2016-07-28 19:44 ` Andrew Cooper
2016-07-28 19:54 ` Boris Ostrovsky
0 siblings, 1 reply; 19+ messages in thread
From: Andrew Cooper @ 2016-07-28 19:44 UTC (permalink / raw)
To: Boris Ostrovsky, George Dunlap, Anthony PERARD; +Cc: xen-devel
On 28/07/16 20:25, Boris Ostrovsky wrote:
> On 07/28/2016 11:51 AM, Andrew Cooper wrote:
>> On 28/07/16 16:17, Boris Ostrovsky wrote:
>>> On 07/28/2016 06:54 AM, Andrew Cooper wrote:
>>>> On 28/07/16 11:43, George Dunlap wrote:
>>>>> On Thu, Jul 28, 2016 at 11:18 AM, Anthony PERARD
>>>>> <anthony.perard@citrix.com> wrote:
>>>>>> On Wed, Jul 27, 2016 at 03:45:23PM -0400, Boris Ostrovsky wrote:
>>>>>>> On 07/27/2016 07:35 AM, Anthony PERARD wrote:
>>>>>>>> On Wed, Jul 27, 2016 at 12:08:04PM +0100, Anthony PERARD wrote:
>>>>>>>>> I can try to describe how OVMF is setting up the memory.
>>>>>>>> From the start of the day:
>>>>>>>> setup gdt
>>>>>>>> cr0 = 0x40000023
>>>>>>> I think this is slightly odd, with bit 30 (cache disable) set. I'd
>>>>>>> suspect that this would affect both Intel and AMD though.
>>>>>>>
>>>>>>> Can you try clearing this bit?
>>>>>> That works...
>>>>>>
>>>>>> I wonder why it does not appear to affect Intel or KVM.
>>>>> Are those bits hard-coded, or are they set based on the hardware
>>>>> that's available?
>>>>>
>>>>> Is it possible that the particular combination of CPUID bits presented
>>>>> by Xen on AMD are causing a different value to be written?
>>>>>
>>>>> Or is it possible that the cache disable bit is being ignored (by Xen)
>>>>> on Intel and KVM?
>>>> If a guest has no hardware, then it has no reason to actually disable
>>>> caches. We should have logic to catch this and avoid actually disabling
>>>> caches when the guest asks for it.
>>> Is this really safe to do? Can't a guest decide to disable cache to
>>> avoid having to deal with coherency in SW?
>> What SW coherency issue do you think can be solved with disabling the cache?
>>
>> x86 has strict ordering of writes and reads with respect to each other.
>> The only case which can be out of order is reads promoted ahead of
>> unaliasing writes.
> Right, that was not a good example.
>
>>> As far as Intel vs AMD implementation in Xen, we have vmx_handle_cd()
>>> but no corresponding SVM code. Could it be that we need to set gPAT, for
>>> example?
>> A better approach would be to find out why ovmf insists on disabling
>> caches at all. Even if we optimise the non-PCI-device case in the
>> hypervisor, a passthrough case will still run like treacle if caches are
>> disabled.
> True, we should understand why OVMF does this. But I think we also need
> to understand what makes Intel run faster. Or is it already clear from
> vmx_handle_cd()?
Wow this code is hard to follow :(
handle_cd() is only called when an IOMMU is enabled and the domain in
question has access to real ioports or PCI devices.
However, I really can't spot anything that ends up eliding the
cache-disable setting even for Intel. This clearly needs further
investigation.
~Andrew
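[Editorial note: the call condition Andrew describes can be sketched as a simple predicate. This is an illustrative paraphrase of the behaviour described above, not Xen's actual code; the function and parameter names here are hypothetical stand-ins for Xen's real identifiers.]

```python
# Illustrative paraphrase of when the handle_cd() path is reached, per
# the description in this message. Names are hypothetical stand-ins.
def needs_cd_handling(iommu_enabled, has_passthrough_device):
    """A guest CR0.CD write only needs real cache-attribute work when
    the guest can touch physical hardware (ioports or PCI devices);
    otherwise the hypervisor can safely leave caches enabled."""
    return iommu_enabled and has_passthrough_device

# A plain HVM guest with no passthrough (the OVMF case in this thread)
# never takes this path, consistent with Anthony's later observation
# that handle_cd is never called for his guest.
print(needs_cd_handling(iommu_enabled=True, has_passthrough_device=False))
```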
* Re: OVMF very slow on AMD
2016-07-28 19:44 ` Andrew Cooper
@ 2016-07-28 19:54 ` Boris Ostrovsky
2016-07-29 15:54 ` Anthony PERARD
0 siblings, 1 reply; 19+ messages in thread
From: Boris Ostrovsky @ 2016-07-28 19:54 UTC (permalink / raw)
To: Andrew Cooper, George Dunlap, Anthony PERARD; +Cc: xen-devel
On 07/28/2016 03:44 PM, Andrew Cooper wrote:
>>>> As far as Intel vs AMD implementation in Xen, we have vmx_handle_cd()
>>>> but no corresponding SVM code. Could it be that we need to set gPAT, for
>>>> example?
>>> A better approach would be to find out why ovmf insists on disabling
>>> caches at all. Even if we optimise the non-PCI-device case in the
>>> hypervisor, a passthrough case will still run like treacle if caches are
>>> disabled.
>> True, we should understand why OVMF does this. But I think we also need
>> to understand what makes Intel run faster. Or is it already clear from
>> vmx_handle_cd()?
> Wow this code is hard to follow :(
>
> handle_cd() is only called when an IOMMU is enabled and the domain in
> question has access to real ioports or PCI devices.
>
> However, I really can't spot anything that ends up eliding the
> cache-disable setting even for Intel. This clearly needs further
> investigation.
So as an easy start perhaps Anthony could check whether this call is
made with his guest running on Intel.
-boris
* Re: OVMF very slow on AMD
2016-07-28 19:54 ` Boris Ostrovsky
@ 2016-07-29 15:54 ` Anthony PERARD
0 siblings, 0 replies; 19+ messages in thread
From: Anthony PERARD @ 2016-07-29 15:54 UTC (permalink / raw)
To: Boris Ostrovsky; +Cc: Andrew Cooper, George Dunlap, xen-devel
On Thu, Jul 28, 2016 at 03:54:34PM -0400, Boris Ostrovsky wrote:
> On 07/28/2016 03:44 PM, Andrew Cooper wrote:
> >>>> As far as Intel vs AMD implementation in Xen, we have vmx_handle_cd()
> >>>> but no corresponding SVM code. Could it be that we need to set gPAT, for
> >>>> example?
> >>> A better approach would be to find out why ovmf insists on disabling
> >>> caches at all. Even if we optimise the non-PCI-device case in the
> >>> hypervisor, a passthrough case will still run like treacle if caches are
> >>> disabled.
> >> True, we should understand why OVMF does this. But I think we also need
> >> to understand what makes Intel run faster. Or is it already clear from
> >> vmx_handle_cd()?
> > Wow this code is hard to follow :(
> >
> > handle_cd() is only called when an IOMMU is enabled and the domain in
> > question has access to real ioports or PCI devices.
> >
> > However, I really can't spot anything that ends up eliding the
> > cache-disable setting even for Intel. This clearly needs further
> > investigation.
>
> So as an easy start perhaps Anthony could check whether this call is
> made with his guest running on Intel.
No, handle_cd is never called on my guest.
--
Anthony PERARD
end of thread, other threads:[~2016-07-29 15:54 UTC | newest]
Thread overview: 19+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-07-14 15:53 OVMF very slow on AMD Anthony PERARD
2016-07-15 13:48 ` Konrad Rzeszutek Wilk
2016-07-15 15:22 ` Boris Ostrovsky
2016-07-27 11:08 ` Anthony PERARD
2016-07-27 11:35 ` Anthony PERARD
2016-07-27 19:45 ` Boris Ostrovsky
2016-07-28 10:18 ` Anthony PERARD
2016-07-28 10:43 ` George Dunlap
2016-07-28 10:54 ` Andrew Cooper
2016-07-28 11:28 ` Anthony PERARD
2016-07-28 15:17 ` Boris Ostrovsky
2016-07-28 15:51 ` Andrew Cooper
2016-07-28 19:25 ` Boris Ostrovsky
2016-07-28 19:44 ` Andrew Cooper
2016-07-28 19:54 ` Boris Ostrovsky
2016-07-29 15:54 ` Anthony PERARD
2016-07-18 14:10 ` Anthony PERARD
2016-07-18 15:09 ` Anthony PERARD
2016-07-22 10:40 ` Dario Faggioli