From mboxrd@z Thu Jan  1 00:00:00 1970
From: Keir Fraser
Subject: Re: [PATCH 3 of 8] xen: let the (credit) scheduler know about `node affinity`
Date: Tue, 09 Oct 2012 12:10:37 +0100
Message-ID: 
References: <1349778541.3610.62.camel@Abyss>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
In-Reply-To: <1349778541.3610.62.camel@Abyss>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org
To: Dario Faggioli, Jan Beulich
Cc: Andre Przywara, Ian Campbell, Marcus Granado, George Dunlap,
 Andrew Cooper, Juergen Gross, Anil Madhavapeddy, Ian Jackson,
 xen-devel@lists.xen.org, Matt Wilson, Daniel De Graaf
List-Id: xen-devel@lists.xenproject.org

On 09/10/2012 11:29, "Dario Faggioli" wrote:

> On Fri, 2012-10-05 at 15:25 +0100, Jan Beulich wrote:
>>>>> On 05.10.12 at 16:08, Dario Faggioli wrote:
>>> @@ -287,22 +344,26 @@ static inline void
>>>      }
>>>      else
>>>      {
>>> -        cpumask_t idle_mask;
>>> +        cpumask_t idle_mask, balance_mask;
>>
>> Be _very_ careful about adding on-stack CPU mask variables
>> (also further below): each one of them grows the stack frame
>> by 512 bytes (when building for the current maximum of 4095
>> CPUs), which is generally too much; you may want to consider
>> pre-allocated scratch space instead.
>>
> I see your point, and I think you're right... I wasn't "thinking that
> big". :-)
>
> I'll look into all of these situations and see if I can move the masks
> off the stack. Any preference between global variables and members of
> one of the scheduler's data structures?

Since multiple instances of the scheduler can be active, across multiple
cpu pools, surely they have to be allocated in the per-scheduler-instance
structures? Or dynamically xmalloc'ed just in the scope they are needed.

 -- Keir

> Thanks and Regards,
> Dario