Hi!

I don't believe doing a huge revert is a good idea.

> I have been meaning to do this for a while, but recent events have
> finally forced me to do so.
>
> Commits from @umn.edu addresses have been found to be submitted in "bad
> faith" to try to test the kernel community's ability to review "known
> malicious" changes.  The result of these submissions can be found in a
> paper published at the 42nd IEEE Symposium on Security and Privacy
> entitled, "Open Source Insecurity: Stealthily Introducing
> Vulnerabilities via Hypocrite Commits" written by Qiushi Wu (University
> of Minnesota) and Kangjie Lu (University of Minnesota).

Do you have examples of those "bad faith" commits? Because that's not
what the paper says. While I identified one unnecessary commit during
stable review, I don't believe it was done in bad faith.

According to the paper, there are just three (3) (!!) bad faith
commits, they were done from Gmail addresses, and steps were taken to
prevent them from entering git.

I do believe we have a problem with the -stable kernel getting way too
many changes that are not really fixing anything, or are fixing stuff
like "16 bytes memory leak once per boot" or printk log levels. I
tried pushing back, with little success. The stable kernel rules are
not consistent with the patches actually accepted into stable. Plus,
it is quicker to get a patch into a stable release than into a
mainline release, which I believe is an additional problem.

For reference, the paper seems to be available here:

https://github.com/QiushiWu/QiushiWu.github.io/blob/main/papers/OpenSourceInsecurity.pdf

Quoting the paper:

  Experiment overview. In this experiment, we leverage program-analysis
  techniques to prepare three minor hypocrite commits that introduce
  UAF bugs in the Linux kernel. The three cases represent three
  different kinds of hypocrite commits: (1) a coding-improvement change
  that simply prints an error message, (2) a patch for fixing a
  memory-leak bug, and (3) a patch for fixing a refcount bug. We submit
  the three patches using a random Gmail account to the Linux community
  and seek their feedback—whether the patches look good to them. The
  experiment is to demonstrate the practicality of hypocrite commits,
  and it will not introduce or intend to introduce actual UAF or any
  other bug in the Linux kernel.

  A. Ethical Considerations

  Ensuring the safety of the experiment. In the experiment, we aim to
  demonstrate the practicality of stealthily introducing
  vulnerabilities through hypocrite commits. Our goal is not to
  introduce vulnerabilities to harm OSS. Therefore, we safely conduct
  the experiment to make sure that the introduced UAF bugs will not be
  merged into the actual Linux code. In addition to the minor patches
  that introduce UAF conditions, we also prepare the correct patches
  for fixing the minor issues. We send the minor patches to the Linux
  community through email to seek their feedback. Fortunately, there is
  a time window between the confirmation of a patch and the merging of
  the patch. Once a maintainer confirmed our patches, e.g., an email
  reply indicating "looks good", we immediately notify the maintainers
  of the introduced UAF and request them to not go ahead to apply the
  patch. At the same time, we point out the correct fixing of the bug
  and provide our correct patch. In all the three cases, maintainers
  explicitly acknowledged and confirmed to not move forward with the
  incorrect patches ...

Best regards,
	Pavel

--
http://www.livejournal.com/~pavelmachek