author     Adin Scannell <ascannell@google.com>  2019-05-13 15:12:03 -0700
committer  Adin Scannell <adin@scannell.ca>      2019-05-13 15:27:34 -0700
commit     9ea68ce1655ddadc8e3ca4c65fd2922d2564b33d (patch)
tree       1786168a13eff022e520c21f39ffff09f2c3893b
parent     5b3f25047fe8dd4d04262ea56c38a134291d0067 (diff)
Drop confusing kvm-bare syscall time
The difference from the runsc-kvm result is only due to the benchmarks having
been run on different machines.
-rw-r--r--  content/docs/architecture_guide/performance.md | 15
-rw-r--r--  static/performance/syscall.csv                 |  1
2 files changed, 7 insertions, 9 deletions
diff --git a/content/docs/architecture_guide/performance.md b/content/docs/architecture_guide/performance.md
index f31baa5e1..f246b5d5c 100644
--- a/content/docs/architecture_guide/performance.md
+++ b/content/docs/architecture_guide/performance.md
@@ -85,22 +85,21 @@ For many use cases, fixed memory overheads are a primary concern. This may be
 because sandboxed containers handle a low volume of requests, and it is
 therefore important to achieve high densities for efficiency.
 
-{{< graph id="density" url="/performance/density.csv" title="perf.py density --runtime=runc --runtime=runsc" log="true" y_min="100000">}}
+{{< graph id="density" url="/performance/density.csv" title="perf.py density --runtime=runc --runtime=runsc" log="true" y_min="100000" >}}
 
 The above figure demonstrates these costs based on three sample applications.
 This test is the result of running many instances of a container (typically 50)
 and calculating available memory on the host before and afterwards, and dividing
-the difference by the number of containers.
-
-> Note: the above technique is used for measuring memory usage over the
-> `usage_in_bytes` value of the container cgroup because we found that some
-> container runtimes, other than `runc` and `runsc` do not use an individual
-> container cgroup.
+the difference by the number of containers. This technique is used for measuring
+memory usage over the `usage_in_bytes` value of the container cgroup because we
+found that some container runtimes, other than `runc` and `runsc`, do not use an
+individual container cgroup.
 
 The first application is an instance of `sleep`: a trivial application that does
 nothing. The second application is a synthetic `node` application which imports a
 number of modules and listens for requests. The third application is a similar
-synthetic `ruby` application which does the same. In all cases, the sandbox
+synthetic `ruby` application which does the same. Finally, we include an
+instance of `redis` storing approximately 1GB of data. In all cases, the sandbox
 itself is responsible for a small, mostly fixed amount of memory overhead.
 
 ## CPU performance
diff --git a/static/performance/syscall.csv b/static/performance/syscall.csv
index 7fa9b5d50..40bdce49e 100644
--- a/static/performance/syscall.csv
+++ b/static/performance/syscall.csv
@@ -2,4 +2,3 @@ runtime,metric,result
 runc,syscall_time_ns,1939.0
 runsc,syscall_time_ns,38219.0
 runsc-kvm,syscall_time_ns,763.0
-runsc-kvm-bare,syscall_time_ns,455.0
\ No newline at end of file
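
For reference, the density measurement described in the performance.md hunk
above can be sketched in a few lines of Python. This is a hypothetical
stand-in for the `perf.py density` benchmark, not its actual implementation;
the image, container count, and settle delay are illustrative assumptions,
and it presumes Docker is configured with the runtime under test.

```python
import subprocess
import time


def mem_available_kb():
    """Return the host's MemAvailable figure (in kB) from /proc/meminfo."""
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith("MemAvailable:"):
                return int(line.split()[1])
    raise RuntimeError("MemAvailable not found in /proc/meminfo")


def density(runtime="runsc", image="alpine", count=50):
    """Approximate per-container memory overhead, in kB.

    Starts `count` idle containers, compares host MemAvailable before and
    after, and divides the difference by the container count -- the
    host-level technique the doc describes, used instead of reading
    usage_in_bytes from each container's cgroup.
    """
    before = mem_available_kb()
    ids = []
    try:
        for _ in range(count):
            out = subprocess.check_output(
                ["docker", "run", "-d", "--runtime", runtime,
                 image, "sleep", "1000000"])
            ids.append(out.decode().strip())
        time.sleep(10)  # let memory usage settle before sampling again
        after = mem_available_kb()
        return (before - after) / count
    finally:
        if ids:
            subprocess.call(["docker", "rm", "-f"] + ids)


if __name__ == "__main__":
    print(f"approx. overhead per container: {density():.0f} kB")
```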
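
The syscall.csv rows record a per-runtime `syscall_time_ns` metric. As a rough
illustration only (this is not the actual perf.py benchmark, and Python's
interpreter overhead inflates the absolute numbers), the metric amounts to
timing a cheap syscall in a tight loop inside a container run under each
runtime:

```python
import os
import time

N = 1_000_000

start = time.perf_counter_ns()
for _ in range(N):
    os.getpid()  # getpid(2): a minimal round trip into the kernel (or the Sentry, under runsc)
elapsed = time.perf_counter_ns() - start

print(f"approx. syscall_time_ns: {elapsed / N:.0f}")
```

Running the same loop under `--runtime=runc` and `--runtime=runsc` shows the
relative cost; the dropped `runsc-kvm-bare` row measured the same thing on a
different machine, which is why it was not comparable to the other rows.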