From: Andre Przywara
Subject: Re: [PATCH 0 of 3 v5/leftover] Automatic NUMA placement for xl
Date: Fri, 20 Jul 2012 13:43:42 +0200
Message-ID: <5009446E.3000900@amd.com>
In-Reply-To: <50093C0E.9030809@cantab.net>
To: David Vrabel
Cc: Ian Campbell, Stefano Stabellini, George Dunlap, Juergen Gross, Ian Jackson, xen-devel, Dario Faggioli
List-Id: xen-devel@lists.xenproject.org

On 07/20/2012 01:07 PM, David Vrabel wrote:
> On 16/07/12 18:13, Dario Faggioli wrote:
>> Hello again,
>>
>> This is a new version (fixing a small bug) of this series:
>> , which in turn was a repost of what remained
>> un-committed of my NUMA placement series.
>
> Whilst I don't think this should prevent the merging of this series now,
> I think there needs to be some sort of unit tests for the placement
> algorithm before the 4.2 release.
>
> I think the tests should test a representative sample of common system
> configurations, available memory and VM memory requirements. I'd
> suggest you'd be looking at 100s of test cases here for reasonable
> coverage.
>
> One method would be to start with various 'empty' systems and pile as
> many differently sized VMs as will fit. You may want both fixed sets of
> reproducible tests and random ones.

That sounds good. Though I don't have much automated testing experience
with Xen, I can chime in with two things:

1. If we focus on placement only, I have had good experience with
ttylinux.iso. Those live distros can be killed easily at any time, and
you only need one instance of the .iso file on disk.

2.
   # xl vcpu-list | sed -e 1d | sort -n -k 7 | tr -s ' ' | cut -d' ' -f7 | uniq -c

This gives the number of VCPUs per node (sort of ;-).

> Some means of automatically verifying the quality of the result at each
> stage would be essential.

This "automatically verifying the quality of the result" doesn't sound
trivial. If we knew exactly how to measure quality, we could just code
that into the algorithm, right? Also, it seems to depend on the setup.
Maybe we should just collect some test output first and then try to come
up with quality metrics?

Regards,
Andre.

-- 
Andre Przywara
AMD-Operating System Research Center (OSRC), Dresden, Germany
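P.S. Regarding the quality check: as a very rough starting point, one
could reduce the per-node VCPU counts from the one-liner above to a
single imbalance number. The sketch below is purely hypothetical (the
vcpu_spread helper and the "spread" metric are made up for illustration
and are not part of xl):

```shell
# Hypothetical helper: reads "count node" lines, as produced by the
# "uniq -c" end of the pipeline above, and prints the difference
# between the busiest and the idlest node. 0 means perfectly balanced.
vcpu_spread() {
    awk 'NR == 1 { max = $1; min = $1 }
         { if ($1 > max) max = $1; if ($1 < min) min = $1 }
         END { print "spread:", max - min }'
}

# Example with made-up counts for a three-node box:
# node 0 has 3 VCPUs, node 1 has 2, node 2 has 3.
printf '3 0\n2 1\n3 2\n' | vcpu_spread    # prints "spread: 1"
```

Whether a plain max-minus-min spread (as opposed to, say, weighting by
per-node free memory) is a good enough metric is exactly the kind of
thing the collected test output could tell us.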