On Fri, 2016-10-14 at 11:42 +0100, George Dunlap wrote:
> On 14/10/16 11:17, Dario Faggioli wrote:
> >
> > Signed-off-by: Dario Faggioli
>
> We don't have a general framework for declaring things "supported" yet,
> and at the moment we only have a single level of "supported", which
> includes XSAs.  I do think it makes sense to include an assessment of
> the security risk when deciding whether to make such a declaration.
>
Right... So, first of all, sorry if I dropped the ball on this for a
while, but I really wasn't sure what should come next (more people from
the security team speaking their mind? Here or somewhere else? Etc.)

As others have already said, it's hard to follow the steps of a
procedure which is, in fact, unknown, and neither formalized nor written
down anywhere (which literally just means we need to formalize it and
write it down! :-D).

> Here's a sort of checklist we could use to start a discussion.
>
Ok, thanks George. I'll go through the checklist, and your analysis of
Credit2, and provide my view (where worthwhile).

> 1. Guest -> host privilege escalation.
> 2. Guest user -> guest kernel escalation.
> 3. Information leakage.
> 4. Denial-of-service.
>
> ---
>
> WRT Credit2:
>
> I *think* the only interfaces exposed to *unprivileged* guests are the
> SCHEDOP hypercalls -- block(), yield(), &c -- and timers.  (Dario,
> please correct me if I'm wrong.)
>
I agree, that's all there is, at least as far as unprivileged guests are
concerned. If we include the control domain, we also expose:
 - XEN_DOMCTL_SCHEDOP_* hypercalls,
 - XEN_SYSCTL_SCHEDOP_* hypercalls.

> The only thing that the scheduler
> gives is time on the cpu.  Neither block nor yield contain any pointers,
> and so it should be trivial to verify that they contain no privilege
> escalation paths.
>
I agree. FWIW, I've tried auditing that code, and could not find
anything that looks like a potential security issue.
E.g., there is no buffer or local variable being used that could lead to
hypervisor stack leaks (and in fact, this is probably more about
George's point 3 than point 1).

> The only information which the scheduler exposes to unprivileged guests
> is the timing information.  This may be able to be used for side-channel
> attacks to probabilistically infer things about other vcpus running on
> the same system; but this has not traditionally been considered within
> the security boundary.
>
And this is possible with all schedulers. Not that we can't think of
improvements, of course, but if this counts, Credit1 is *Experimental*
too. :-)

> There have been denial-of-service attacks on credit1 before, where a
> vcpu could "game the system" by going to sleep at particular times to
> fool the accounting system into thinking that it wasn't running, and
> thus run at BOOST priority and monopolize 95% of the cpu.  But this was
> fixed by actually charging cpus for time they spent running rather than
> probabilistically, which credit2 already does.
>
And Credit2 does not have BOOST or, put more generally, it does not have
any way for a vcpu to acquire any kind of "superpowers"... they are all
subject to the same crediting algorithm, which is a lot simpler than
Credit1's, and hence a lot easier to understand, debug and audit.

> It's worth taking stock again and thinking if there's any way to "game"
> credit2 in such a way; but I think it's relatively unlikely that someone
> will be able to monopolize the cpu 100% as with the credit1 bug; and
> even if it did, it's a pretty low-profile XSA.
>
It may be me not being a 'security guy' (yet :-)), but I can't think of
anything.

> But I agree with Jan, that such an argument would go well to go in the
> commit message. :-)
>
Right. Actually, I wrote a few words about a lot of development and
testing having been done lately, and I wondered whether or not to use
that as the commit message.
I certainly could have thought about doing something like what George
did, but:
 - I probably wouldn't have been able to do it as well as he did;
 - I wasn't sure whether it was worthwhile/meaningful for me to do it,
   as opposed to the security team wanting to do that themselves.

So, again, let's formalize the process, because it's hard to follow an
unwritten and implicit set of rules. :-)

All this being said, what's the next step? Is there anything more I
should do, here in this thread? Should we wait for other people to
weigh in? Should I resend this patch, with a non-empty commit message?
Containing what, a summary of this?

Any help and insight would be greatly appreciated.

Thanks for everyone's time and Regards,
Dario
--
<> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)