From mboxrd@z Thu Jan 1 00:00:00 1970
From: Andy Lutomirski
Subject: Re: [PATCH v3 1/3] x86/ldt: Make modify_ldt synchronous
Date: Fri, 24 Jul 2015 21:58:23 -0700
References: <049fdbab8ae2ecac1c8b40ecd558e9df45ccd5d3.1437592883.git.luto@kernel.org> <55B01745.4010702@oracle.com> <55B30CE3.2010902@oracle.com>
In-Reply-To: <55B30CE3.2010902@oracle.com>
To: Boris Ostrovsky
Cc: "security@kernel.org", Jan Beulich, Peter Zijlstra, Andrew Cooper, X86 ML, "linux-kernel@vger.kernel.org", Steven Rostedt, xen-devel, Borislav Petkov, stable, Andy Lutomirski, Sasha Levin
List-Id: xen-devel@lists.xenproject.org

On Fri, Jul 24, 2015 at 9:13 PM, Boris Ostrovsky wrote:
>
> On 07/22/2015 06:20 PM, Boris Ostrovsky wrote:
>>
>> On 07/22/2015 03:23 PM, Andy Lutomirski wrote:
>>>
>>> +       error = -ENOMEM;
>>> +       new_ldt = alloc_ldt_struct(newsize);
>>> +       if (!new_ldt)
>>>                 goto out_unlock;
>>> -       }
>>> -       fill_ldt(&ldt, &ldt_info);
>>> -       if (oldmode)
>>> -               ldt.avl = 0;
>>> +       if (old_ldt) {
>>> +               memcpy(new_ldt->entries, old_ldt->entries,
>>> +                      oldsize * LDT_ENTRY_SIZE);
>>> +       }
>>> +       memset(new_ldt->entries + oldsize * LDT_ENTRY_SIZE, 0,
>>> +              (newsize - oldsize) * LDT_ENTRY_SIZE);
>>
>> We need to zero out the full page (probably better in alloc_ldt_struct() with vzalloc/__GFP_ZERO) --- Xen checks the whole page that is assigned to the G/LDT and gets unhappy if an invalid descriptor is found there.
>>
>> This fixes one problem. There is something else that Xen gets upset about; I haven't figured out what it is yet (and I am out tomorrow, so it may need to wait until Friday).
>>
>
> What I thought was another problem turned out not to be one, so both the 64- and 32-bit tests passed on 64-bit PV (when the allocated LDT is zeroed out).
>
> However, on a 32-bit kernel the multicpu test is failing; I don't know yet why.

Test case bug or unrelated kernel bug, depending on your point of view. I forgot that x86_32 and x86_64 have very different handling of IRET faults.

Wait for v2 :)

--Andy