# Benchmark tools
This package and its subpackages contain macro benchmarks for `runsc`. They
replace the previous `//benchmarks` benchmark-tools, which were written in
Python.

Benchmarks are written as regular Go benchmarks using the standard `testing.B`
facilities.

## Setup
To run benchmarks you will need:

*   Docker installed (17.09.0 or greater).

The easiest way to run benchmarks is to use the script at
`//scripts/benchmark.sh`.

If you are not using the script, you will also need:

*   `runsc` configured with Docker.

Note: benchmarks call the runtime by name. If Docker can run a container with
the `--runtime=` flag, these tools should work.
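
As a sketch, one common way to register `runsc` as a Docker runtime is an entry
in `/etc/docker/daemon.json` followed by a daemon restart (the binary path
below is an assumption; adjust it to your install):

```json
{
  "runtimes": {
    "runsc": {
      "path": "/usr/local/bin/runsc"
    }
  }
}
```

After restarting Docker, `docker run --runtime=runsc hello-world` is a quick
way to confirm the runtime is wired up.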
## Running benchmarks
The easiest way to run benchmarks is with the script at
`//scripts/benchmark.sh`. If a target is not provided, the script runs all
benchmarks under `//test/benchmarks`.

```bash
./scripts/benchmark.sh //path/to/target
```

If you want to run benchmarks manually:

*   Run `make load-all-images` from `//`.
*   Run with:

```bash
bazel test --test_arg=--runtime=RUNTIME -c opt --test_output=streamed \
    --test_timeout=600 --test_arg=-test.bench=. --nocache_test_results \
    //path/to/target
```
## Writing benchmarks
Benchmarks consist of Docker images defined by Dockerfiles and Go `testing.B`
benchmarks.

### Dockerfiles:

*   Are stored at `//images`.
*   New Dockerfiles go in an appropriately named directory at
    `//images/benchmarks/my-cool-dockerfile`.
*   Dockerfiles for benchmarks should:
    *   Use explicitly versioned packages.
    *   Not use `ENV` and `CMD` statements; it is easy to add these via the
        API.
*   Note: a common pattern for getting access to a tmpfs mount is to copy
    files there after container start. See
    `//test/benchmarks/build/bazel_test.go`. You can also make your own with
    `RunOpts.Mounts`.
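
Following the guidelines above, a minimal benchmark Dockerfile might look like
this (the base image, package, and version pin are illustrative assumptions,
not taken from the repository):

```Dockerfile
FROM ubuntu:18.04

# Explicitly versioned package (this pin is hypothetical).
RUN apt-get update && apt-get install -y \
    sysbench=1.0.11+ds-1 \
    && rm -rf /var/lib/apt/lists/*

# No ENV or CMD here: the benchmark supplies both through dockerutil.RunOpts.
```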
### testing.B packages
In general, benchmarks should look like this:
```golang
package mycool

import (
	"context"
	"os"
	"testing"

	"gvisor.dev/gvisor/pkg/test/dockerutil"
	"gvisor.dev/gvisor/test/benchmarks/harness"
)

var h harness.Harness

func BenchmarkMyCoolOne(b *testing.B) {
	machine, err := h.GetMachine()
	if err != nil {
		b.Fatalf("failed to get machine: %v", err)
	}
	ctx := context.Background()
	container := machine.GetContainer(ctx, b)
	defer container.CleanUp(ctx)

	b.ResetTimer()
	// Respect b.N.
	for i := 0; i < b.N; i++ {
		out, err := container.Run(ctx, dockerutil.RunOpts{
			Image: "benchmarks/my-cool-image",
			Env:   []string{"MY_VAR=awesome"},
			// Other options are available; see dockerutil.RunOpts.
		}, "sh", "-c", "echo $MY_VAR")
		if err != nil {
			b.Fatalf("failed to run container: %v", err)
		}
		// Do parsing and reporting outside of the timer.
		b.StopTimer()
		number := parseMyMetric(out)
		b.ReportMetric(number, "my-cool-custom-metric")
		b.StartTimer()
	}
}

func TestMain(m *testing.M) {
	h.Init()
	os.Exit(m.Run())
}
```
Some notes on the above:
*   The harness is initialized in the `TestMain` method and made global to the
    test module. The harness handles any pre-setup that needs to happen with
    flags, remote virtual machines (eventually), and other services.
*   Respect `b.N`: users of the benchmark may want to "run for an hour" or
    something of the sort.
*   Use the `b.ReportMetric` method to report custom metrics.
*   Set the timer only around work that should be measured; there is no way to
    turn off the default metrics in `testing.B` (B/op, allocs/op, ns/op).
*   Take a look at dockerutil at `//pkg/test/dockerutil` to see all methods
    available on containers. The API is based on the official
    [Docker API for Go](https://pkg.go.dev/mod/github.com/docker/docker).
*   `harness.GetMachine` marks how many machines this test needs. If you have
    a client and a server and want to mark them as separate machines, call
    `GetMachine` twice.