# Benchmark tools
This package and its subpackages are for running macro benchmarks on `runsc`.
They are meant to replace the previous //benchmarks benchmark-tools written in
Python.
Benchmarks are meant to look like regular Go benchmarks using the `testing.B`
library.
## Setup
To run benchmarks you will need:
* Docker installed (17.09.0 or greater).
The easiest way to set up runsc for running benchmarks is to use the Makefile.
From the root directory:
* Download images: `make load-all-images`
* Install runsc suitable for benchmarking, which should probably not have
  strace or debug logs enabled. For example:
  `make configure RUNTIME=myrunsc ARGS=--platform=kvm`.
* Restart docker: `sudo service docker restart`
You should now have a runtime with the following options configured in
`/etc/docker/daemon.json`:
```
"myrunsc": {
"path": "/tmp/myrunsc/runsc",
"runtimeArgs": [
"--debug-log",
"/tmp/bench/logs/runsc.log.%TEST%.%TIMESTAMP%.%COMMAND%",
"--platform=kvm"
]
},
```
This runtime has been configured with debugging and strace logging off and uses
the KVM platform for demonstration.
## Running benchmarks
Given the runtime `myrunsc` above, run benchmarks with the following:
```
make sudo TARGETS=//path/to:target ARGS="--runtime=myrunsc -test.v \
  -test.bench=." OPTIONS="-c opt"
```
For example, to run only the Iperf tests:
```
make sudo TARGETS=//test/benchmarks/network:network_test \
  ARGS="--runtime=myrunsc -test.v -test.bench=Iperf" OPTIONS="-c opt"
```
Benchmarks are run as root because some benchmarks require root privileges to
do things like drop caches.
## Writing benchmarks
Benchmarks consist of Docker images defined by Dockerfiles and Go `testing.B`
benchmarks.
### Dockerfiles:
* Are stored at `//images`.
* New Dockerfiles go in an appropriately named directory at
  `//images/benchmarks/my-cool-dockerfile`.
* Dockerfiles for benchmarks should:
    * Use explicitly versioned packages.
    * Not use ENV and CMD statements; it is easy to add these via the API.
* Note: A common pattern for getting access to a tmpfs mount is to copy files
  there after container start. See //test/benchmarks/build/bazel_test.go. You
  can also make your own with `RunOpts.Mounts`; a sketch follows this list.
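The following is a minimal sketch of that second approach, not a definitive
implementation: it assumes `RunOpts` exposes a `Mounts` field that accepts
Docker `mount.Mount` entries, and the image name and `/scratch` target are
placeholders. Check //pkg/test/dockerutil for the actual field definitions.
```golang
// Assumed imports:
//   "github.com/docker/docker/api/types/mount"
//   "gvisor.dev/gvisor/pkg/test/dockerutil"

// tmpfsRunOpts returns RunOpts that mount a tmpfs at /scratch inside the
// benchmark container. "benchmarks/my-cool-image" and "/scratch" are
// placeholder names used for illustration only.
func tmpfsRunOpts() dockerutil.RunOpts {
  return dockerutil.RunOpts{
    Image: "benchmarks/my-cool-image",
    Mounts: []mount.Mount{
      {Type: mount.TypeTmpfs, Target: "/scratch"},
    },
  }
}
```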
### testing.B packages
In general, benchmarks should look like this:
```golang
var h harness.Harness

func BenchmarkMyCoolOne(b *testing.B) {
  machine, err := h.GetMachine()
  if err != nil {
    b.Fatalf("failed to get machine: %v", err)
  }
  defer machine.CleanUp()

  ctx := context.Background()
  container := machine.GetContainer(ctx, b)
  defer container.CleanUp(ctx)
  b.ResetTimer()

  // Respect b.N.
  for i := 0; i < b.N; i++ {
    out, err := container.Run(ctx, dockerutil.RunOpts{
      Image: "benchmarks/my-cool-image",
      Env:   []string{"MY_VAR=awesome"},
      // Other options...see dockerutil.
    }, "sh", "-c", "echo $MY_VAR")
    if err != nil {
      b.Fatalf("failed to run container: %v", err)
    }
    b.StopTimer()

    // Do parsing and reporting outside of the timer.
    number := parseMyMetric(out)
    b.ReportMetric(number, "my-cool-custom-metric")

    b.StartTimer()
  }
}

func TestMain(m *testing.M) {
  h.Init()
  os.Exit(m.Run())
}
```
Some notes on the above:
* The harness is initialized in the `TestMain` method and made global to the
  test module. The harness will handle any pre-setup that needs to happen with
  flags, remote virtual machines (eventually), and other services.
* Respect `b.N` in that users of the benchmark may want to "run for an hour"
  or something of the sort.
* Use the `b.ReportMetric()` method to report custom metrics; a sketch of a
  parsing helper follows this list.
* Stop and restart the timer around work that should not be measured if time
  is useful for reporting. There isn't a way to turn off the default metrics
  in `testing.B` (B/op, allocs/op, ns/op).
* Take a look at dockerutil at //pkg/test/dockerutil to see all methods
  available from containers. The API is based on the "official"
  [docker API for golang](https://pkg.go.dev/mod/github.com/docker/docker).
* `harness.GetMachine()` marks how many machines this test needs. If you have
  a client and server and want to mark them as multiple machines, call
  `harness.GetMachine()` twice.
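To make the parse-and-report pattern concrete, here is a hedged sketch of the
`parseMyMetric` helper referenced in the example above. The helper and its
expected output line ("my-metric: 123.4") are assumptions for illustration;
parse whatever your workload actually prints.
```golang
// Assumed imports: "strconv", "strings".

// parseMyMetric is a hypothetical helper for the example above. It assumes
// the workload prints a line like "my-metric: 123.4" and returns the value
// for b.ReportMetric. Real code should surface parse errors via b.Fatalf.
func parseMyMetric(out string) float64 {
  for _, line := range strings.Split(out, "\n") {
    if !strings.HasPrefix(line, "my-metric:") {
      continue
    }
    value, err := strconv.ParseFloat(
      strings.TrimSpace(strings.TrimPrefix(line, "my-metric:")), 64)
    if err != nil {
      return 0
    }
    return value
  }
  return 0
}
```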
## Profiling
For profiling, the runtime is required to have the `--profile` flag enabled.
This flag loosens seccomp filters so that the runtime can write profile data to
disk. This configuration is not recommended for production.
* Install runsc with the `--profile` flag:
  `make configure RUNTIME=myrunsc ARGS="--profile --platform=kvm --vfs2"`. The
  kvm and vfs2 flags are not required, but are included for demonstration.
* Restart docker: `sudo service docker restart`
To run the fs_test benchmarks and generate CPU profiles, run:
```
make sudo TARGETS=//test/benchmarks/fs:fs_test \
  ARGS="--runtime=myrunsc -test.v -test.bench=. --pprof-cpu" OPTIONS="-c opt"
```
Profiles will be written to `/tmp/profile/myrunsc/CONTAINERNAME/cpu.pprof`.