From: David Hildenbrand <david@redhat.com>
Organization: Red Hat
Subject: Re: [RFC] KVM: mm: fd-based approach for supporting KVM guest private memory
Date: Thu, 26 Aug 2021 23:26:22 +0200
To: Andy Lutomirski, Sean Christopherson, Paolo Bonzini
Cc: Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel,
    kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Borislav Petkov,
    Andrew Morton, Andi Kleen, David Rientjes, Vlastimil Babka,
    Tom Lendacky, Thomas Gleixner, Peter Zijlstra, Ingo Molnar,
    Varad Gautam, Dario Faggioli, x86@kernel.org, linux-mm@kvack.org,
    linux-coco@lists.linux.dev, "Kirill A. Shutemov",
    Kuppuswamy Sathyanarayanan, Dave Hansen, Yu Zhang
In-Reply-To: <40af9d25-c854-8846-fdab-13fe70b3b279@kernel.org>
References: <20210824005248.200037-1-seanjc@google.com>
 <307d385a-a263-276f-28eb-4bc8dd287e32@redhat.com>
 <40af9d25-c854-8846-fdab-13fe70b3b279@kernel.org>

On 26.08.21 19:05, Andy Lutomirski wrote:
> On 8/26/21 3:15 AM, David Hildenbrand wrote:
>> On 24.08.21 02:52, Sean Christopherson wrote:
>>> The goal of this RFC is to try and align KVM, mm, and anyone else with
>>> skin in the game, on an acceptable direction for supporting guest
>>> private memory, e.g. for Intel's TDX.  The TDX architecture effectively
>>> allows KVM guests to crash the host if guest private memory is
>>> accessible to host userspace, and thus does not play nice with KVM's
>>> existing approach of pulling the pfn and mapping level from the host
>>> page tables.
>>>
>>> This is by no means a complete patch; it's a rough sketch of the KVM
>>> changes that would be needed.  The kernel side of things is completely
>>> omitted from the patch; the design concept is below.
>>>
>>> There's also a fair bit of hand waving on implementation details that
>>> shouldn't fundamentally change the overall ABI, e.g. how the backing
>>> store will ensure there are no mappings when "converting" to guest
>>> private.
>>>
>>
>> This is a lot of complexity and rather advanced approaches (not saying
>> they are bad, just that we try to teach the whole stack something
>> completely new).
>>
>>
>> What I think would really help is a list of requirements, such that
>> everybody is aware of what we actually want to achieve. Let me start:
>>
>> GFN: Guest Frame Number
>> EPFN: Encrypted Physical Frame Number
>>
>>
>> 1) An EPFN must not get mapped into more than one VM: it belongs to
>>    exactly one VM. It must neither be shared between VMs across
>>    processes nor between VMs within a process.
>>
>>
>> 2) User space (well, and actually the kernel) must never access an EPFN:
>>
>> - If we go for an fd, essentially all operations (read/write) have to
>>   fail.
>> - If we have to map an EPFN into user space page tables (e.g., to
>>   simplify KVM), we could only allow fake swap entries such that "there
>>   is something" but it cannot be accessed and is flagged accordingly.
>> - /proc/kcore and friends have to be careful as well and should not read
>>   this memory. So there has to be a way to flag these pages.
>>
>> 3) We need a way to express the GFN<->EPFN mapping and essentially
>>    assign an EPFN to a GFN.
>>
>>
>> 4) Once we have assigned an EPFN to a GFN, that assignment must no
>>    longer change. Further, an EPFN must not get assigned to multiple
>>    GFNs.
>>
>>
>> 5) There has to be a way to "replace" encrypted parts by "shared" parts
>>    and the other way around.
>>
>> What else?
>>
>>
>>
>>> Background
>>> ==========
>>>
>>> This is a loose continuation of Kirill's RFC[*] to support TDX guest
>>> private memory by tracking guest memory at the 'struct page' level.
>>> This proposal is the result of several offline discussions that were
>>> prompted by Andy Lutomirski's concerns with tracking via 'struct page':
>>>
>>>    1. The kernel wouldn't easily be able to enforce a 1:1 page:guest
>>>       association, let alone a 1:1 pfn:gfn mapping.
>>
>> Well, it could with some help on higher layers. Someone has to do the
>> tracking. Marking EPFNs as EPFNs can actually be very helpful, e.g., to
>> allow /proc/kcore to just not touch such pages. Whether we want to do
>> all the tracking in the struct page is a different story.
>>
>>>
>>>    2. Does not work for memory that isn't backed by 'struct page',
>>>       e.g. if devices gain support for exposing encrypted memory
>>>       regions to guests.
>>
>> Let's keep it simple. If a struct page is right now what we need to
>> properly track it, so be it. If not, good. But let's not make this a
>> requirement right from the start if it's stuff for the far future.
>>
>>>
>>>    3. Does not help march toward page migration or swap support
>>>       (though it doesn't hurt either).
>>
>> "Does not help towards world peace (though it doesn't hurt either)."
>>
>> Maybe let's ignore that for now, as it doesn't seem to be required to
>> get something reasonable running.
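
Coming back to my requirement list above: just to make 3) and 4) a bit
more concrete, the kind of bookkeeping I have in mind would be roughly the
following. This is purely illustrative -- none of these names exist
anywhere, and where that state would actually live (backing store, KVM,
struct page) is exactly the open question:

	/* needs linux/xarray.h, linux/slab.h, linux/kvm_host.h */

	struct epfn_assignment {
		struct kvm *kvm;	/* the single owning VM, see 1) */
		gfn_t gfn;		/* the GFN this EPFN belongs to, see 3) */
	};

	/* e.g., one xarray per backing fd, indexed by EPFN */
	static int epfn_assign(struct xarray *assignments, unsigned long epfn,
			       struct kvm *kvm, gfn_t gfn)
	{
		struct epfn_assignment *a;
		int ret;

		a = kzalloc(sizeof(*a), GFP_KERNEL);
		if (!a)
			return -ENOMEM;
		a->kvm = kvm;
		a->gfn = gfn;

		/*
		 * xa_insert() fails with -EBUSY if the EPFN already has an
		 * assignment, which gives us 4) (sticky, never reassigned).
		 */
		ret = xa_insert(assignments, epfn, a, GFP_KERNEL);
		if (ret)
			kfree(a);
		return ret;
	}

The lookup in the other direction (GFN->EPFN) would then be the per-VM
side of the same mapping.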
>> >>> >>> [*] >>> https://lkml.kernel.org/r/20210416154106.23721-1-kirill.shutemov@linu= x.intel.com >>> >>> >>> Concept >>> =3D=3D=3D=3D=3D=3D=3D >>> >>> Guest private memory must be backed by an "enlightened" file >>> descriptor, where >>> "enlightened" means the implementing subsystem supports a one-way >>> "conversion" to >>> guest private memory and provides bi-directional hooks to communicate >>> directly >>> with KVM.=C2=A0 Creating a private fd doesn't necessarily have to be = a >>> conversion, e.g. it >>> could also be a flag provided at file creation, a property of the fil= e >>> system itself, >>> etc... >> >> Doesn't sound too crazy. Maybe even introducing memfd_encrypted() if >> extending the other ones turns out too complicated. >> >>> >>> Before a private fd can be mapped into a KVM guest, it must be paired >>> 1:1 with a >>> KVM guest, i.e. multiple guests cannot share a fd.=C2=A0 At pairing, = KVM >>> and the fd's >>> subsystem exchange a set of function pointers to allow KVM to call >>> into the subsystem, >>> e.g. to translate gfn->pfn, and vice versa to allow the subsystem to >>> call into KVM, >>> e.g. to invalidate/move/swap a gfn range. >>> >>> Mapping a private fd in host userspace is disallowed, i.e. there is >>> never a host >>> virtual address associated with the fd and thus no userspace page >>> tables pointing >>> at the private memory. >> >> To keep the primary vs. secondary MMU thing working, I think it would >> actually be nice to go with special swap entries instead; it just keep= s >> most things working as expected. But let's see where we end up. >> >>> >>> Pinning _from KVM_ is not required.=C2=A0 If the backing store suppor= ts >>> page migration >>> and/or swap, it can query the KVM-provided function pointers to see i= f >>> KVM supports >>> the operation.=C2=A0 If the operation is not supported (this will be = the >>> case initially >>> in KVM), the backing store is responsible for ensuring correct >>> functionality. >>> >>> Unmapping guest memory, e.g. to prevent use-after-free, is handled vi= a >>> a callback >>> from the backing store to KVM.=C2=A0 KVM will employ techniques simil= ar to >>> those it uses >>> for mmu_notifiers to ensure the guest cannot access freed memory. >>> >>> A key point is that, unlike similar failed proposals of the past, e.g= . >>> /dev/mktme, >>> existing backing stores can be englightened, a from-scratch >>> implementations is not >>> required (though would obviously be possible as well). >> >> Right. But if it's just a bad fit, let's do something new. Just like w= e >> did with memfd_secret. >> >>> >>> One idea for extending existing backing stores, e.g. HugeTLBFS and >>> tmpfs, is >>> to add F_SEAL_GUEST, which would convert the entire file to guest >>> private memory >>> and either fail if the current size is non-zero or truncate the size >>> to zero. >> >> While possible, I actually do have the feeling that we want eventually >> to have something new, as the semantics are just too different. But >> let's see. >> >> >>> KVM >>> =3D=3D=3D >>> >>> Guest private memory is managed as a new address space, i.e. as a >>> different set of >>> memslots, similar to how KVM has a separate memory view for when a >>> guest vCPU is >>> executing in virtual SMM.=C2=A0 SMM is mutually exclusive with guest >>> private memory. >>> >>> The fd (the actual integer) is provided to KVM when a private memslot >>> is added >>> via KVM_SET_USER_MEMORY_REGION.=C2=A0 This is when the aforementioned >>> pairing occurs. 
>>>
>>> By default, KVM memslot lookups will be "shared", only specific
>>> touchpoints will be modified to work with private memslots, e.g. guest
>>> page faults.  All host accesses to guest memory, e.g. for emulation,
>>> will thus look for shared memory and naturally fail without attempting
>>> copy_to/from_user() if the guest attempts to coerce KVM into accessing
>>> private memory.  Note, avoiding copy_to/from_user() and friends isn't
>>> strictly necessary, it's more of a happy side effect.
>>>
>>> A new KVM exit reason, e.g. KVM_EXIT_MEMORY_ERROR, and data struct in
>>> vcpu->run is added to propagate illegal accesses (see above) and
>>> implicit conversions to userspace (see below).  Note, the new exit
>>> reason + struct can also be used to support several other feature
>>> requests in KVM[1][2].
>>>
>>> The guest may explicitly or implicitly request KVM to map a
>>> shared/private variant of a GFN.  An explicit map request is done via
>>> hypercall (out of scope for this proposal as both TDX and SNP ABIs
>>> define such a hypercall).  An implicit map request is triggered simply
>>> by the guest accessing the shared/private variant, which KVM sees as a
>>> guest page fault (EPT violation or #NPF).  Ideally only explicit
>>> requests would be supported, but neither TDX nor SNP require this in
>>> their guest<->host ABIs.
>>>
>>> For implicit or explicit mappings, if a memslot is found that fully
>>> covers the requested range (which is a single gfn for implicit
>>> mappings), KVM's normal guest page fault handling works with minimal
>>> modification.
>>>
>>> If a memslot is not found, for explicit mappings, KVM will exit to
>>> userspace with the aforementioned dedicated exit reason.  For implicit
>>> _private_ mappings, KVM will also immediately exit with the same
>>> dedicated reason.  For implicit shared mappings, an additional check is
>>> required to differentiate between emulated MMIO and an implicit
>>> private->shared conversion[*].  If there is an existing private memslot
>>> for the gfn, KVM will exit to userspace, otherwise KVM will treat the
>>> access as an emulated MMIO access and handle the page fault
>>> accordingly.
>>
>> Do you mean some kind of overlay, where "ordinary" user memory regions
>> overlay "private user memory regions"? So when marking something shared,
>> you'd leave the private user memory region alone and only create a new
>> "ordinary" user memory region that references shared memory in QEMU
>> (IOW, a different mapping)?
>>
>> Reading below, I think you were not actually thinking about an overlay,
>> but maybe overlays might actually be a nice concept to have instead.
>>
>>
>>> Punching Holes
>>> ==============
>>>
>>> The expected userspace memory model is that mapping requests will be
>>> handled as conversions, e.g. on a shared mapping request, first unmap
>>> the private gfn range, then map the shared gfn range.  A new KVM
>>> ioctl() will likely be needed to allow userspace to punch a hole in a
>>> memslot, as expressing such an operation isn't possible with
>>> KVM_SET_USER_MEMORY_REGION.  While userspace could delete the memslot,
>>> then recreate three new memslots, doing so would be destructive to
>>> guest data as unmapping guest private memory (from the EPT/NPT tables)
>>> is destructive to the data for both TDX and SEV-SNP guests.
>>
>> If you'd treat it like an overlay, you'd not actually be punching holes.
>> You'd only be creating/removing ordinary user memory regions when
>> marking something shared/unshared.
>>
>>>
>>> Pros (vs. struct page)
>>> ======================
>>>
>>> Easy to enforce 1:1 fd:guest pairing, as well as 1:1 gfn:pfn mapping.
>>>
>>> Userspace page tables are not populated, e.g. reduced memory footprint,
>>> lower probability of making private memory accessible to userspace.
>>
>> Agreed on the first part, although I consider that a secondary concern.
>> The second part, I'm not sure if that is really the case. Fake swap
>> entries are just a marker.
>>
>>>
>>> Provides line of sight to supporting page migration and swap.
>>
>> Again, let's leave that out for now. I think that's a kernel internal
>> that will require quite some thought either way.
>>
>>>
>>> Provides line of sight to mapping MMIO pages into guest private memory.
>>
>> That's an interesting thought. Would it work via overlays as well? Can
>> you elaborate?
>>
>>>
>>> Cons (vs. struct page)
>>> ======================
>>>
>>> Significantly more churn in KVM, e.g. to plumb 'private' through where
>>> needed, support memslot hole punching, etc...
>>>
>>> KVM's MMU gets another method of retrieving host pfn and page size.
>>>
>>> Requires enabling in every backing store that someone wants to support.
>>
>> I think we will only care about anonymous memory with huge/gigantic
>> pages in the next years -- which is what memfd() is already limited to.
>> File-backed -- I don't know ... if at all, swapping ... in a couple of
>> years ...
>>
>>>
>>> Because the NUMA APIs work on virtual addresses, new syscalls
>>> fmove_pages(), fbind(), etc... would be required to provide equivalents
>>> to existing NUMA functionality (though those syscalls would likely be
>>> useful irrespective of guest private memory).
>>
>> Right, that's because we don't have a VMA that describes all this, e.g.,
>> for mbind().
>>
>>>
>>> Washes (vs. struct page)
>>> ========================
>>>
>>> A misbehaving guest that triggers a large number of shared memory
>>> mappings will consume a large number of memslots.  But, this is likely
>>> a wash as a similar effect would happen with VMAs in the struct page
>>> approach.
>>
>> Just cap it to something sane. The 32k we have right now is crazy and
>> only required in very special setups. You can just make QEMU
>> override/set the KVM default.
>>
>>
>> My wild idea after reading everything so far (full of flaws, just want
>> to mention it, maybe it gives some ideas):
>>
>> Introduce memfd_encrypted().
>>
>> Similar to memfd_secret():
>> - Most system calls will just fail.
>> - Allow MAP_SHARED only.
>> - Enforce VM_DONTDUMP and skip during fork().
>
> This seems like it would work, but integrating it with the hugetlb
> reserve mechanism might be rather messy.

One step at a time.

>
>> - File size can change exactly once, before any mmap. (IIRC)
>
> Why is this needed?  Obviously if the file size can be reduced, then the
> pages need to get removed safely, but this seems doable if there's a use
> case.

Right, but we usually don't resize memfd either.

>
>>
>> Different from memfd_secret(), allow mapping each page of the fd exactly
>> one time via mmap() into a single process.
>
> This doesn't solve the case of multiple memslots pointing at the same
> address.  It also doesn't help with future operations that need to map
> from a memfd_encrypted() backing page to the GPA that maps it.

That's trivial to enforce inside KVM when mapping. Encrypted user memory
regions, just like such VMAs, simply have to be sticky until we can come
up with something different.

>
>> You'll end up with a VMA that corresponds to the whole file in a single
>> process only, and that cannot vanish, not even in parts.
>>
>> Define "ordinary" user memory slots as an overlay on top of "encrypted"
>> memory slots.  Inside KVM, bail out if you encounter such a VMA inside a
>> normal user memory slot. When creating an "encrypted" user memory slot,
>> require that the whole VMA is covered at creation time. You know the VMA
>> can't change later.
>
> Oof.  That's quite a requirement.  What's the point of the VMA once all
> this is done?

You can keep using things like mbind(), madvise(), ... and the GUP code
with a special flag might mostly just do what you want. You won't have
to reinvent too many wheels on the page fault logic side at least.

Just a brain dump. Feel free to refine if you think any of this makes sense.

-- 
Thanks,

David / dhildenb