On 2020/8/26 9:19 AM, Daniel Jordan wrote:
> On Tue, Aug 25, 2020 at 11:26:58AM +0800, Alex Shi wrote:
>> On 2020/8/25 9:56 AM, Daniel Jordan wrote:
>>> Alex, do you have a pointer to the modified readtwice case?
>>
>> Sorry, no. My developer machine crashed, so I lost my container and the
>> modified case. I am struggling to get my container back from a problematic
>> repository account.
>>
>> But some testing scripts are here. Generally, the original readtwice case
>> runs one thread on each CPU. The new case runs one container per CPU, with
>> just one readtwice thread in each container.
>
> Ok, what you've sent so far gives me an idea of what you did. My readtwice
> changes were similar, except I used the cgroup interface directly instead of
> docker and shared a filesystem between all the cgroups whereas it looks like
> you had one per memcg. 30 second runs on 5.9-rc2 and v18 gave 11% more data
> read with v18. This was using 16 cgroups (32 dd tasks) on a 40 CPU, 2 socket
> machine.

I cleaned up my testing and made it reproducible with a Dockerfile and a case
patch, which are attached. You can build a container from the Dockerfile and
then run the test as follows:

# start some testing containers
for ((i = 0; i < 80; i++)); do
	docker run --privileged=true --rm lrulock bash -c "sleep 20000" &
done

# do testing env setup
for i in `docker ps | sed '1 d' | awk '{print $1}'`; do
	docker exec --privileged=true -it $i bash -c \
		"cd vm-scalability/; bash -x ./case-lru-file-readtwice m" &
done

# kick off testing
for i in `docker ps | sed '1 d' | awk '{print $1}'`; do
	docker exec --privileged=true -it $i bash -c \
		"cd vm-scalability/; bash -x ./case-lru-file-readtwice r" &
done

# show results
for i in `docker ps | sed '1 d' | awk '{print $1}'`; do
	echo === $i ===
	docker exec $i bash -c 'cat /tmp/vm-scalability-tmp/dd-output-*' &
done | grep MB | awk 'BEGIN {a=0;} {a += $10} END {print NR, a/NR}'

This time, on a 2-socket * 20-core * 2-HT machine, readtwice throughput with
this series is 252% of the v5.9-rc2 kernel. A good surprise!

>>> Even better would be a description of the problem you're having in production
>>> with lru_lock. We might be able to create at least a simulation of it to show
>>> what the expected improvement of your real workload is.
>>
>> We are using thousands of memcgs on a machine, but as a simulation, I guess
>> the above case could be helpful to show the problem.
>
> Using thousands of memcgs to do what? Any particulars about the type of
> workload? Surely it's more complicated than page cache reads :)

Yes, the workloads differ quite a bit across businesses: some are CPU heavy,
some are memory heavy, and some are mixed. The number of containers also
varies a lot, from tens to hundreds to thousands.

>>> I ran a few benchmarks on v17 last week (sysbench oltp readonly, kerndevel from
>>> mmtests, a memcg-ized version of the readtwice case I cooked up) and then today
>>> discovered there's a chance I wasn't running the right kernels, so I'm redoing
>>> them on v18.
>
> Neither kernel compile nor git checkout in the root cgroup changed much, just
> 0.31% slower on elapsed time for the compile, so no significant regressions
> there. Now for sysbench again.

Thanks a lot for the testing report!
Alex
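
P.S. For anyone who wants to try a docker-free variant along the lines Daniel
describes (driving the memcg interface directly, with all groups reading one
shared file), a rough sketch is below. The cgroup v1 mount point, file path,
size and group count are only illustrative assumptions, not the exact
vm-scalability case:

#!/bin/bash
# Create one memory cgroup per group and run two readers in each, all
# reading the same page-cache file twice, so the LRU add/activate work
# is spread across many memcgs (and, with this series, many lru_locks).
NR_CGROUPS=16
FILE=/tmp/readtwice-data

dd if=/dev/zero of=$FILE bs=1M count=1024 status=none

for ((i = 0; i < NR_CGROUPS; i++)); do
	cg=/sys/fs/cgroup/memory/readtwice-$i
	mkdir -p $cg
	(
		# move this subshell (and its children) into the memcg
		echo $BASHPID > $cg/cgroup.procs
		# two readers per group, each walking the shared file twice
		for n in 1 2; do
			(dd if=$FILE of=/dev/null bs=1M status=none
			 dd if=$FILE of=/dev/null bs=1M status=none) &
		done
		wait
	) &
done
wait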