From: Ricardo Neri <ricardo.neri-calderon@linux.intel.com>
To: Borislav Petkov <bp@suse.de>
Cc: Ingo Molnar <mingo@redhat.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	"H. Peter Anvin" <hpa@zytor.com>,
	Andy Lutomirski <luto@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>,
	Andrew Morton <akpm@linux-foundation.org>,
	Brian Gerst <brgerst@gmail.com>,
	Chris Metcalf <cmetcalf@mellanox.com>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Masami Hiramatsu <mhiramat@kernel.org>,
	Huang Rui <ray.huang@amd.com>, Jiri Slaby <jslaby@suse.cz>,
	Jonathan Corbet <corbet@lwn.net>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Paul Gortmaker <paul.gortmaker@windriver.com>,
	Vlastimil Babka <vbabka@suse.cz>, Chen Yucong <slaoub@gmail.com>,
	Alexandre Julliard <julliard@winehq.org>,
	Stas Sergeev <stsp@list.ru>, Fenghua Yu <fenghua.yu@intel.com>,
	"Ravi V. Shankar" <ravi.v.shankar@intel.com>,
	Shuah Khan <shuah@kernel.org>,
	linux-kernel@vger.kernel.org, x86@kernel.org,
	linux-msdos@vger.kernel.org, wine-devel@winehq.org,
	Adam Buchbinder <adam.buchbinder@gmail.com>,
	Colin Ian King <colin.king@canonical.com>,
	Lorenzo Stoakes <lstoakes@gmail.com>,
	Qiaowei Ren <qiaowei.ren@intel.com>,
	Arnaldo Carvalho de Melo <acme@redhat.com>,
	Adrian Hunter <adrian.hunter@intel.com>,
	Kees Cook <keescook@chromium.org>,
	Thomas Garnier <thgarnie@google.com>,
	Dmitry Vyukov <dvyukov@google.com>
Subject: Re: [PATCH v7 16/26] x86/insn-eval: Support both signed 32-bit and 64-bit effective addresses
Date: Tue, 25 Jul 2017 16:48:13 -0700
Message-ID: <1501026493.22603.48.camel@ranerica-desktop>
In-Reply-To: <20170607154819.xkbxp3hg7lwjdxd6@pd.tnic>

I am sorry, Boris; while working on this series I missed a few of your
feedback comments.

On Wed, 2017-06-07 at 17:48 +0200, Borislav Petkov wrote:
> On Fri, May 05, 2017 at 11:17:14AM -0700, Ricardo Neri wrote:
> > The 32-bit and 64-bit address encodings are identical. This means that we
> > can use the same function in both cases. In order to reuse the function
> > for 32-bit address encodings, we must sign-extend our 32-bit signed
> > operands to 64-bit signed variables (only for 64-bit builds). To decide on
> > whether sign extension is needed, we rely on the address size as given by
> > the instruction structure.
> > 
> > Once the effective address has been computed, a special verification is
> > needed for 32-bit processes. If running on a 64-bit kernel, such processes
> > can address up to 4GB of memory. Hence, for instance, an effective
> > address of 0xffff1234 would be misinterpreted as 0xffffffffffff1234 due to
> > the sign extension mentioned above. For this reason, the 4 must be
> 
> Which 4?

I meant to say the 4 most significant bytes. In this case, the 64-bit
address 0xffffffffffff1234 would lie in kernel memory, while
0xffff1234 would correctly be in user-space memory.
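
Just to illustrate what I mean (this is only a sketch with a made-up
helper name, not the code in the patch):

/*
 * Sketch: for a 32-bit process on a 64-bit kernel, keep only the
 * 4 least significant bytes of the computed effective address.
 */
static long clamp_to_32bit(long eff_addr)
{
	/* e.g., 0xffffffffffff1234 becomes 0xffff1234 */
	return eff_addr & 0xffffffffL;
}
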
> 
> > truncated to obtain the true effective address.
> > 
> > Lastly, before computing the linear address, we verify that the effective
> > address is within the limits of the segment. The check is kept for long
> > mode because in such a case the limit is set to -1L. This is the largest
> > unsigned number possible. This is equivalent to a limit-less segment.
> > 
> > Cc: Dave Hansen <dave.hansen@linux.intel.com>
> > Cc: Adam Buchbinder <adam.buchbinder@gmail.com>
> > Cc: Colin Ian King <colin.king@canonical.com>
> > Cc: Lorenzo Stoakes <lstoakes@gmail.com>
> > Cc: Qiaowei Ren <qiaowei.ren@intel.com>
> > Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
> > Cc: Masami Hiramatsu <mhiramat@kernel.org>
> > Cc: Adrian Hunter <adrian.hunter@intel.com>
> > Cc: Kees Cook <keescook@chromium.org>
> > Cc: Thomas Garnier <thgarnie@google.com>
> > Cc: Peter Zijlstra <peterz@infradead.org>
> > Cc: Borislav Petkov <bp@suse.de>
> > Cc: Dmitry Vyukov <dvyukov@google.com>
> > Cc: Ravi V. Shankar <ravi.v.shankar@intel.com>
> > Cc: x86@kernel.org
> > Signed-off-by: Ricardo Neri <ricardo.neri-calderon@linux.intel.com>
> > ---
> >  arch/x86/lib/insn-eval.c | 99 ++++++++++++++++++++++++++++++++++++++++++------
> >  1 file changed, 88 insertions(+), 11 deletions(-)
> > 
> > diff --git a/arch/x86/lib/insn-eval.c b/arch/x86/lib/insn-eval.c
> > index 1a5f5a6..c7c1239 100644
> > --- a/arch/x86/lib/insn-eval.c
> > +++ b/arch/x86/lib/insn-eval.c
> > @@ -688,6 +688,62 @@ int insn_get_modrm_rm_off(struct insn *insn, struct pt_regs *regs)
> >  	return get_reg_offset(insn, regs, REG_TYPE_RM);
> >  }
> >  
> > +/**
> > + * _to_signed_long() - Cast an unsigned long into signed long
> > + * @val		A 32-bit or 64-bit unsigned long
> > + * @long_bytes	The number of bytes used to represent a long number
> > + * @out		The casted signed long
> > + *
> > + * Return: A signed long of either 32 or 64 bits, as per the build configuration
> > + * of the kernel.
> > + */
> > +static int _to_signed_long(unsigned long val, int long_bytes, long *out)
> > +{
> > +	if (!out)
> > +		return -EINVAL;
> > +
> > +#ifdef CONFIG_X86_64
> > +	if (long_bytes == 4) {
> > +		/* higher bytes should all be zero */
> > +		if (val & ~0xffffffff)
> > +			return -EINVAL;
> > +
> > +		/* sign-extend to a 64-bit long */
> 
> So this is a 32-bit userspace on a 64-bit kernel, right?

Yes.
> 
> If so, how can a memory offset be > 32-bits and we have to extend it to
> a 64-bit long?!?

Yes, perhaps the check above is not needed. I included it as part of
my argument validation: in a 64-bit kernel, this function could be
called with a val whose most significant bytes are non-zero.
> 
> I *think* you want to say that you want to convert it to long so that
> you can do the calculation in longs.

That is exactly what I meant. More specifically, I want to convert my
32-bit variables into 64-bit signed longs; this is the reason I need the
sign extension.
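
As a rough sketch (not necessarily the exact statement in the patch),
the conversion boils down to reinterpreting the low 32 bits as a
signed value and letting the cast to long do the sign extension:

	/* sketch only: 0xffff1234 -> 0xffffffffffff1234 */
	*out = (long)(signed int)(val & 0xffffffff);
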
> 
> However!
> 
> If you're a 64-bit kernel running a 32-bit userspace, you need to do
> the calculation in 32-bits only so that it overflows, as it would do
> on 32-bit hardware. IOW, the clamping to 32-bits at the end is not
> something you wanna do but actually let it wrap if it overflows.

I have looked into this closely and, as far as I can see, the 4 least
significant bytes wrap around when using 64-bit signed numbers just as
they would when using 32-bit signed numbers. For instance, for two
positive numbers we have:

7fff:ffff + 7000:0000 = efff:ffff.

The addition above overflows. When sign-extended to 64-bit numbers we
would have:

0000:0000:7fff:ffff + 0000:0000:7000:0000 = 0000:0000:efff:ffff.

The addition above does not overflow. However, the 4 least significant
bytes overflow as we expect; we can then clamp (truncate away) the 4
most significant bytes.

For two's complement negative numbers we can have:

ffff:ffff + 8000:0000 = 7fff:ffff with a carry flag.

The addition above overflows.

When sign-extending to 64-bit numbers we would have:

ffff:ffff:ffff:ffff + ffff:ffff:8000:0000 = ffff:ffff:7fff:ffff with a
carry flag.

The addition above does not overflow. However, the 4 least significant
bytes overflow and wrap around just as they would when using 32-bit
signed numbers.
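
A small stand-alone user-space program illustrates the point (just a
sketch to demonstrate the wrap-around, not part of the patch):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	/* positive case: 7fff:ffff + 7000:0000 */
	uint32_t a32 = 0x7fffffff, b32 = 0x70000000;
	int64_t a64 = (int32_t)a32, b64 = (int32_t)b32;

	/*
	 * The 4 least significant bytes of the 64-bit sum match the
	 * 32-bit sum: both columns print efffffff.
	 */
	printf("%08x %08x\n", a32 + b32, (uint32_t)(a64 + b64));

	/* negative case: ffff:ffff + 8000:0000; both print 7fffffff */
	a32 = 0xffffffff; b32 = 0x80000000;
	a64 = (int32_t)a32; b64 = (int32_t)b32;
	printf("%08x %08x\n", a32 + b32, (uint32_t)(a64 + b64));

	return 0;
}
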

> Or am I missing something?

Now, am I missing something?

Thanks and BR,
Ricardo
