Diffstat (limited to 'benchmarks')
-rw-r--r--  benchmarks/README.md                                       | 126
-rw-r--r--  benchmarks/harness/__init__.py                             |   2
-rw-r--r--  benchmarks/harness/machine.py                              |   8
-rw-r--r--  benchmarks/harness/machine_producers/machine_producer.py   |  21
-rw-r--r--  benchmarks/runner/BUILD                                    |  10
-rw-r--r--  benchmarks/runner/__init__.py                              |  59
-rw-r--r--  benchmarks/runner/commands.py                              |  84
-rw-r--r--  benchmarks/runner/runner_test.py                           |   2
-rw-r--r--  benchmarks/suites/http.py                                  |   2
-rw-r--r--  benchmarks/workloads/BUILD                                 |  40
-rw-r--r--  benchmarks/workloads/ab/BUILD                              |   5
-rw-r--r--  benchmarks/workloads/absl/BUILD                            |   5
-rw-r--r--  benchmarks/workloads/curl/BUILD                            |   6
-rw-r--r--  benchmarks/workloads/ffmpeg/BUILD                          |   6
-rw-r--r--  benchmarks/workloads/fio/BUILD                             |   5
-rw-r--r--  benchmarks/workloads/httpd/BUILD                           |   6
-rw-r--r--  benchmarks/workloads/iperf/BUILD                           |   5
-rw-r--r--  benchmarks/workloads/netcat/BUILD                          |   6
-rw-r--r--  benchmarks/workloads/nginx/BUILD                           |   6
-rw-r--r--  benchmarks/workloads/node/BUILD                            |   6
-rw-r--r--  benchmarks/workloads/node_template/BUILD                   |   6
-rw-r--r--  benchmarks/workloads/redis/BUILD                           |   6
-rw-r--r--  benchmarks/workloads/redisbenchmark/BUILD                  |   5
-rw-r--r--  benchmarks/workloads/ruby/BUILD                            |  13
-rw-r--r--  benchmarks/workloads/ruby_template/BUILD                   |   7
-rw-r--r--  benchmarks/workloads/sleep/BUILD                           |   6
-rw-r--r--  benchmarks/workloads/sysbench/BUILD                        |   5
-rw-r--r--  benchmarks/workloads/syscall/BUILD                         |   5
-rw-r--r--  benchmarks/workloads/tensorflow/BUILD                      |   6
-rw-r--r--  benchmarks/workloads/true/BUILD                            |   7
30 files changed, 303 insertions, 173 deletions
diff --git a/benchmarks/README.md b/benchmarks/README.md
index ad44cd6ac..ff21614c5 100644
--- a/benchmarks/README.md
+++ b/benchmarks/README.md
@@ -6,66 +6,55 @@ These scripts are tools for collecting performance data for Docker-based tests.
The scripts assume the following:
-* You have a local machine with bazel installed.
-* You have some machine(s) with docker installed. These machines will be
- refered to as the "Environment".
-* Environment machines have the runtime(s) under test installed, such that you
- can run docker with a command like: `docker run --runtime=$RUNTIME
- your/image`.
-* You are able to login to machines in the environment with the local machine
- via ssh and the user for ssh can run docker commands without using `sudo`.
+* There are two sets of machines: one where the scripts will be run
+ (controller) and one or more machines on which docker containers will be run
+ (environment).
+* The controller machine must have bazel installed along with this source
+  code. You should be able to run a command like `bazel run :benchmarks --
+  --list`.
+* Environment machines must have docker and the required runtimes installed.
+ More specifically, you should be able to run a command like: `docker run
+ --runtime=$RUNTIME your/image`.
+* The controller has an ssh private key which can be used to log in to
+  environment machines and run docker commands without using `sudo`. This is
+  not required when running locally via the `run-local` command.
* The docker daemon on each of your environment machines is listening on
`unix:///var/run/docker.sock` (docker's default).
For configuring the environment manually, consult the
[dockerd documentation][dockerd].
-## Environment
-
-All benchmarks require a user defined yaml file describe the environment. These
-files are of the form:
-
-```yaml
-machine1: local
-machine2:
- hostname: 100.100.100.100
- username: username
- key_path: ~/private_keyfile
- key_password: passphrase
-machine3:
- hostname: 100.100.100.101
- username: username
- key_path: ~/private_keyfile
- key_password: passphrase
-```
+## Running benchmarks
-The yaml file defines an environment with three machines named `machine1`,
-`machine2` and `machine3`. `machine1` is the local machine, `machine2` and
-`machine3` are remote machines. Both `machine2` and `machine3` should be
-reachable by `ssh`. For example, the command `ssh -i ~/private_keyfile
-username@100.100.100.100` (using the passphrase `passphrase`) should connect to
-`machine2`.
+Run the following from the benchmarks directory:
-The above is an example only. Machines should be uniform, since they are treated
-as such by the tests. Machines must also be accessible to each other via their
-default routes. Furthermore, some benchmarks will meaningless if running on the
-local machine, such as density.
+```bash
+bazel run :benchmarks -- run-local startup
-For remote machines, `hostname`, `key_path`, and `username` are required and
-others are optional. In addition key files must be generated
-[using the instrcutions below](#generating-ssh-keys).
+...
+method,metric,result
+startup.empty,startup_time_ms,652.5772
+startup.node,startup_time_ms,1654.4042000000002
+startup.ruby,startup_time_ms,1429.835
+```
-The above yaml file can be checked for correctness with the `validate` command
-in the top level perf.py script:
+The above command ran the startup benchmark locally, which consists of three
+benchmarks (empty, node, and ruby). The benchmarks ran on the default runtime,
+runc. Running on another installed runtime, such as runsc, is as simple as:
-`bazel run :benchmarks -- validate $PWD/examples/localhost.yaml`
+```bash
+bazel run :benchmarks -- run-local startup --runtime=runsc
+```
-## Running benchmarks
+There is help:
+
+```bash
+bazel run :benchmarks -- --help
+bazel run :benchmarks -- run-local --help
+```
To list available benchmarks, use the `list` command:
```bash
bazel run :benchmarks -- list
+ls
...
Benchmark: sysbench.cpu
@@ -75,24 +64,44 @@ Metrics: events_per_second
:param max_prime: The maximum prime number to search.
```
-To run benchmarks, use the `run` command. For example, to run the sysbench
-benchmark above:
+You can choose benchmarks by name or by a regex. For example:
```bash
-bazel run :benchmarks -- run --env $PWD/examples/localhost.yaml sysbench.cpu
+bazel run :benchmarks -- run-local startup.node
+...
+metric,result
+startup_time_ms,1671.7178000000001
+
+```
+
+or
+
+```bash
+bazel run :benchmarks -- run-local s
+...
+method,metric,result
+startup.empty,startup_time_ms,1792.8292
+startup.node,startup_time_ms,3113.5274
+startup.ruby,startup_time_ms,3025.2424
+sysbench.cpu,cpu_events_per_second,12661.47
+sysbench.memory,memory_ops_per_second,7228268.44
+sysbench.mutex,mutex_time,17.4835
+sysbench.mutex,mutex_latency,3496.7
+sysbench.mutex,mutex_deviation,0.04
+syscall.syscall,syscall_time_ns,2065.0
```
You can run parameterized benchmarks, for example to run with different
runtimes:
```bash
-bazel run :benchmarks -- run --env $PWD/examples/localhost.yaml --runtime=runc --runtime=runsc sysbench.cpu
+bazel run :benchmarks -- run-local --runtime=runc --runtime=runsc sysbench.cpu
```
Or with different parameters:
```bash
-bazel run :benchmarks -- run --env $PWD/examples/localhost.yaml --max_prime=10 --max_prime=100 sysbench.cpu
+bazel run :benchmarks -- run-local --max_prime=10 --max_prime=100 sysbench.cpu
```
## Writing benchmarks
@@ -121,7 +130,7 @@ The harness requires workloads to run. These are all available in the
In general, a workload consists of a Dockerfile to build it (while these are not
hermetic, in general they should be as fixed and isolated as possible), some
-parses for output if required, parser tests and sample data. Provided the test
+parsers for output if required, parser tests and sample data. Provided the test
is named after the workload package and contains a function named `sample`, this
variable will be used to automatically mock workload output when the `--mock`
flag is provided to the main tool.
@@ -149,24 +158,5 @@ To write a new benchmark, open a module in the `suites` directory and use the
above signature. You should add a descriptive doc string to describe what your
benchmark is and any test centric arguments.
-## Generating SSH Keys
-
-The scripts only support RSA Keys, and ssh library used in paramiko. Paramiko
-only supports RSA keys that look like the following (PEM format):
-
-```bash
-$ cat /path/to/ssh/key
-
------BEGIN RSA PRIVATE KEY-----
-...private key text...
------END RSA PRIVATE KEY-----
-
-```
-
-To generate ssh keys in PEM format, use the [`-t rsa -m PEM -b 4096`][RSA-keys].
-option.
-
[dockerd]: https://docs.docker.com/engine/reference/commandline/dockerd/
[docker-py]: https://docker-py.readthedocs.io/en/stable/
-[paramiko]: http://docs.paramiko.org/en/2.4/api/client.html
-[RSA-keys]: https://serverfault.com/questions/939909/ssh-keygen-does-not-create-rsa-private-key
diff --git a/benchmarks/harness/__init__.py b/benchmarks/harness/__init__.py
index a7f34da9e..7b96d1666 100644
--- a/benchmarks/harness/__init__.py
+++ b/benchmarks/harness/__init__.py
@@ -18,7 +18,7 @@ import os
# LOCAL_WORKLOADS_PATH defines the path to use for local workloads. This is a
# format string that accepts a single string parameter.
LOCAL_WORKLOADS_PATH = os.path.join(
- os.path.dirname(__file__), "../workloads/{}")
+ os.path.dirname(__file__), "../workloads/{}/tar.tar")
# REMOTE_WORKLOADS_PATH defines the path to use for storing the workloads on the
# remote host. This is a format string that accepts a single string parameter.
diff --git a/benchmarks/harness/machine.py b/benchmarks/harness/machine.py
index 66b719b63..af037dbcc 100644
--- a/benchmarks/harness/machine.py
+++ b/benchmarks/harness/machine.py
@@ -160,15 +160,17 @@ class LocalMachine(Machine):
stdout, stderr = process.communicate()
return stdout.decode("utf-8"), stderr.decode("utf-8")
- def read(self, path: str) -> str:
+ def read(self, path: str) -> bytes:
# Read the exact path locally.
return open(path, "r").read()
def pull(self, workload: str) -> str:
# Run the docker build command locally.
logging.info("Building %s@%s locally...", workload, self._name)
- self.run("docker build --tag={} {}".format(
- workload, harness.LOCAL_WORKLOADS_PATH.format(workload)))
+ with open(harness.LOCAL_WORKLOADS_PATH.format(workload),
+ "rb") as dockerfile:
+ self._docker_client.images.build(
+ fileobj=dockerfile, tag=workload, custom_context=True)
return workload # Workload is the tag.
def container(self, image: str, **kwargs) -> container.Container:
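For context, the `pull()` change above builds the workload image from the packaged tarball via docker-py instead of shelling out to `docker build`. Below is a minimal standalone sketch of that pattern, assuming docker-py is installed; the tarball path and tag are illustrative, not taken from this change:

```python
# Sketch: build an image from a tar archive whose root contains a Dockerfile.
# The path and tag are placeholders; the harness derives them from the workload.
import docker

client = docker.from_env()
with open("workloads/ab/tar.tar", "rb") as context:
    image, build_logs = client.images.build(
        fileobj=context,      # the tarball is streamed as the build context
        custom_context=True,  # treat fileobj as a full context, not a bare Dockerfile
        tag="ab")             # tag the image after the workload name
print(image.id)
```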
diff --git a/benchmarks/harness/machine_producers/machine_producer.py b/benchmarks/harness/machine_producers/machine_producer.py
index 124ee14cc..f5591c026 100644
--- a/benchmarks/harness/machine_producers/machine_producer.py
+++ b/benchmarks/harness/machine_producers/machine_producer.py
@@ -13,6 +13,7 @@
# limitations under the License.
"""Abstract types."""
+import threading
from typing import List
from benchmarks.harness import machine
@@ -28,3 +29,23 @@ class MachineProducer:
def release_machines(self, machine_list: List[machine.Machine]):
"""Releases the given set of machines."""
raise NotImplementedError
+
+
+class LocalMachineProducer(MachineProducer):
+ """Produces Local Machines."""
+
+ def __init__(self, limit: int):
+ self.limit_sem = threading.Semaphore(value=limit)
+
+ def get_machines(self, num_machines: int) -> List[machine.Machine]:
+    """Returns the requested number of LocalMachines."""
+
+ self.limit_sem.acquire()
+ return [machine.LocalMachine("local") for _ in range(num_machines)]
+
+  def release_machines(self, machine_list: List[machine.Machine]):
+ """No-op."""
+ if not machine_list:
+ raise ValueError("Cannot release an empty list!")
+ self.limit_sem.release()
+ machine_list.clear()
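A minimal usage sketch of the producer added above, assuming the `benchmarks` package is importable; the command run on the machine is only a placeholder for real benchmark work:

```python
# Sketch: LocalMachineProducer hands out LocalMachines while a semaphore caps
# how many batches may be outstanding, so get_machines() blocks at the limit.
from benchmarks.harness.machine_producers import machine_producer

producer = machine_producer.LocalMachineProducer(limit=1)
machines = producer.get_machines(1)
try:
    for m in machines:
        print(m.run("docker --version"))  # placeholder workload command
finally:
    producer.release_machines(machines)   # frees the semaphore slot, clears the list
```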
diff --git a/benchmarks/runner/BUILD b/benchmarks/runner/BUILD
index de24824cc..e1b2ea550 100644
--- a/benchmarks/runner/BUILD
+++ b/benchmarks/runner/BUILD
@@ -10,7 +10,9 @@ py_library(
],
visibility = ["//benchmarks:__pkg__"],
deps = [
+ ":commands",
"//benchmarks/harness:benchmark_driver",
+ "//benchmarks/harness/machine_producers:machine_producer",
"//benchmarks/harness/machine_producers:mock_producer",
"//benchmarks/harness/machine_producers:yaml_producer",
"//benchmarks/suites",
@@ -30,6 +32,14 @@ py_library(
],
)
+py_library(
+ name = "commands",
+ srcs = ["commands.py"],
+ deps = [
+ requirement("click", True),
+ ],
+)
+
py_test(
name = "runner_test",
srcs = ["runner_test.py"],
diff --git a/benchmarks/runner/__init__.py b/benchmarks/runner/__init__.py
index 9bf9cfd65..6f56704d8 100644
--- a/benchmarks/runner/__init__.py
+++ b/benchmarks/runner/__init__.py
@@ -28,8 +28,10 @@ import click
from benchmarks import suites
from benchmarks.harness import benchmark_driver
+from benchmarks.harness.machine_producers import machine_producer
from benchmarks.harness.machine_producers import mock_producer
from benchmarks.harness.machine_producers import yaml_producer
+from benchmarks.runner import commands
@click.group()
@@ -100,30 +102,22 @@ def list_all(method):
print("\n")
-# pylint: disable=too-many-arguments
-# pylint: disable=too-many-branches
-# pylint: disable=too-many-locals
-@runner.command(
- context_settings=dict(ignore_unknown_options=True, allow_extra_args=True))
+@runner.command("run-local", commands.LocalCommand)
@click.pass_context
-@click.argument("method")
-@click.option("--mock/--no-mock", default=False, help="Mock the machines.")
-@click.option("--env", default=None, help="Specify a yaml file with machines.")
-@click.option(
- "--runtime", default=["runc"], help="The runtime to use.", multiple=True)
-@click.option("--metric", help="The metric to extract.", multiple=True)
-@click.option(
- "--runs", default=1, help="The number of times to run each benchmark.")
-@click.option(
- "--stat",
- default="median",
- help="How to aggregate the data from all runs."
- "\nmedian - returns the median of all runs (default)"
- "\nall - returns all results comma separated"
- "\nmeanstd - returns result as mean,std")
-# pylint: disable=too-many-statements
-def run(ctx, method: str, runs: int, env: str, mock: bool, runtime: List[str],
- metric: List[str], stat: str, **kwargs):
+def run_local(ctx, limit: float, **kwargs):
+ """Runs benchmarks locally."""
+ run(ctx, machine_producer.LocalMachineProducer(limit=limit), **kwargs)
+
+
+@runner.command("run-mock", commands.RunCommand)
+@click.pass_context
+def run_mock(ctx, **kwargs):
+ """Runs benchmarks on Mock machines. Used for testing."""
+ run(ctx, mock_producer.MockMachineProducer(), **kwargs)
+
+
+def run(ctx, producer: machine_producer.MachineProducer, method: str, runs: int,
+ runtime: List[str], metric: List[str], stat: str, **kwargs):
"""Runs arbitrary benchmarks.
All unknown command line flags are passed through to the underlying benchmark
@@ -139,16 +133,13 @@ def run(ctx, method: str, runs: int, env: str, mock: bool, runtime: List[str],
All benchmarks are run in parallel where possible, but have exclusive
ownership over the individual machines.
- Exactly one of the --mock and --env flag must be specified.
-
Every benchmark method will be run the times indicated by --runs.
Args:
ctx: Click context.
+ producer: A Machine Producer from which to get Machines.
method: A regular expression for methods to be run.
runs: Number of runs.
- env: Environment to use.
- mock: If true, use mocked environment (supercedes env).
runtime: A list of runtimes to test.
metric: A list of metrics to extract.
stat: The class of statistics to extract.
@@ -218,20 +209,6 @@ def run(ctx, method: str, runs: int, env: str, mock: bool, runtime: List[str],
sys.exit(1)
fold("method", list(methods.keys()), allow_flatten=True)
- # Construct the environment.
- if mock and env:
- # You can't provide both.
- logging.error("both --mock and --env are set: which one is it?")
- sys.exit(1)
- elif mock:
- producer = mock_producer.MockMachineProducer()
- elif env:
- producer = yaml_producer.YamlMachineProducer(env)
- else:
- # You must provide one of mock or env.
- logging.error("no enviroment provided: use --mock or --env.")
- sys.exit(1)
-
# Spin up the drivers.
#
# We ensure that metric is the last entry, because we have special behavior.
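With this refactor, each command simply picks a producer and delegates to the shared `run()` helper. As a purely hypothetical illustration (not part of this change), a YAML-backed command could be wired up the same way, reusing the `yaml_producer` import that remains in this module:

```python
# Hypothetical sketch: the "run-yaml" name and --env flag are illustrative only.
@runner.command("run-yaml", commands.RunCommand)
@click.option("--env", required=True, help="A yaml file describing machines.")
@click.pass_context
def run_yaml(ctx, env: str, **kwargs):
    """Runs benchmarks on machines described by a yaml file."""
    run(ctx, yaml_producer.YamlMachineProducer(env), **kwargs)
```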
diff --git a/benchmarks/runner/commands.py b/benchmarks/runner/commands.py
new file mode 100644
index 000000000..4973843b9
--- /dev/null
+++ b/benchmarks/runner/commands.py
@@ -0,0 +1,84 @@
+# python3
+# Copyright 2019 Google LLC
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+"""Module with the guts of `click` commands.
+
+Subclasses of click.core.Command, defined so that flags are inherited between
+similar commands (the run commands). The classes below are meant to be used in
+click command declarations like so:
+
+@runner.command("run-mock", RunCommand)
+def run_mock(**kwargs):
+ # mock implementation
+
+"""
+import click
+
+
+class RunCommand(click.core.Command):
+ """Base Run Command with flags.
+
+ Attributes:
+    method: regex of which suites to choose (e.g. sysbench would run
+      sysbench.cpu, sysbench.memory, and sysbench.mutex). See the list command
+      for details.
+ metric: metric(s) to extract. See list command for details.
+ runtime: the runtime(s) on which to run.
+ runs: the number of runs to do of each method.
+    stat: how to compile results in the case of multiple runs (e.g. median).
+ """
+
+ def __init__(self, *args, **kwargs):
+ super().__init__(*args, **kwargs)
+ method = click.core.Argument(("method",))
+
+ metric = click.core.Option(("--metric",),
+ help="The metric to extract.",
+ multiple=True)
+
+ runtime = click.core.Option(("--runtime",),
+ default=["runc"],
+ help="The runtime to use.",
+ multiple=True)
+ runs = click.core.Option(("--runs",),
+ default=1,
+ help="The number of times to run each benchmark.")
+ stat = click.core.Option(
+ ("--stat",),
+ default="median",
+ help="How to aggregate the data from all runs."
+ "\nmedian - returns the median of all runs (default)"
+ "\nall - returns all results comma separated"
+ "\nmeanstd - returns result as mean,std")
+ self.params.extend([method, runtime, runs, stat, metric])
+ self.ignore_unknown_options = True
+ self.allow_extra_args = True
+
+
+class LocalCommand(RunCommand):
+ """LocalCommand inherits all flags from RunCommand.
+
+ Attributes:
+    limit: the number of benchmarks that may run at a time on local machines.
+      For example, "startup" requires one machine, so a limit of two allows
+      two startup jobs to run at a time. Defaults to 1.
+ """
+
+ def __init__(self, *args, **kwargs):
+ super().__init__(*args, **kwargs)
+ self.params.append(
+ click.core.Option(
+ ("--limit",),
+ default=1,
+ help="Limit of number of benchmarks that can run at a given time."))
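The inheritance pattern above keeps flag definitions in one place: new run variants subclass `RunCommand` and append only their extra options, as `LocalCommand` does with `--limit`. A hypothetical sketch of another subclass (the class name and flag are illustrative, not part of this change):

```python
# Hypothetical sketch mirroring LocalCommand: inherit all RunCommand flags and
# add one more option in the constructor.
import click

from benchmarks.runner.commands import RunCommand


class RemoteCommand(RunCommand):
    """Illustrative subclass; the --hosts flag is not part of this change."""

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.params.append(
            click.core.Option(
                ("--hosts",),
                multiple=True,
                help="Hostnames of environment machines to use."))
```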
diff --git a/benchmarks/runner/runner_test.py b/benchmarks/runner/runner_test.py
index 5719c2838..7818d631a 100644
--- a/benchmarks/runner/runner_test.py
+++ b/benchmarks/runner/runner_test.py
@@ -49,7 +49,7 @@ def test_list():
def test_run():
cli_runner = testing.CliRunner()
- result = cli_runner.invoke(runner.runner, ["run", "--mock", "."])
+ result = cli_runner.invoke(runner.runner, ["run-mock", "."])
print(result.output)
assert result.exit_code == 0
diff --git a/benchmarks/suites/http.py b/benchmarks/suites/http.py
index ea9024e43..6efea938c 100644
--- a/benchmarks/suites/http.py
+++ b/benchmarks/suites/http.py
@@ -92,7 +92,7 @@ def http_app(server: machine.Machine,
redis = server.pull("redis")
image = server.pull(workload)
redis_port = 6379
- redis_name = "redis_server"
+ redis_name = "{workload}_redis_server".format(workload=workload)
with server.container(redis, name=redis_name).detach():
server.container(server_netcat, links={redis_name: redis_name})\
diff --git a/benchmarks/workloads/BUILD b/benchmarks/workloads/BUILD
index 643806105..ccb86af5b 100644
--- a/benchmarks/workloads/BUILD
+++ b/benchmarks/workloads/BUILD
@@ -11,25 +11,25 @@ py_library(
filegroup(
name = "files",
srcs = [
- "//benchmarks/workloads/ab:files",
- "//benchmarks/workloads/absl:files",
- "//benchmarks/workloads/curl:files",
- "//benchmarks/workloads/ffmpeg:files",
- "//benchmarks/workloads/fio:files",
- "//benchmarks/workloads/httpd:files",
- "//benchmarks/workloads/iperf:files",
- "//benchmarks/workloads/netcat:files",
- "//benchmarks/workloads/nginx:files",
- "//benchmarks/workloads/node:files",
- "//benchmarks/workloads/node_template:files",
- "//benchmarks/workloads/redis:files",
- "//benchmarks/workloads/redisbenchmark:files",
- "//benchmarks/workloads/ruby:files",
- "//benchmarks/workloads/ruby_template:files",
- "//benchmarks/workloads/sleep:files",
- "//benchmarks/workloads/sysbench:files",
- "//benchmarks/workloads/syscall:files",
- "//benchmarks/workloads/tensorflow:files",
- "//benchmarks/workloads/true:files",
+ "//benchmarks/workloads/ab:tar",
+ "//benchmarks/workloads/absl:tar",
+ "//benchmarks/workloads/curl:tar",
+ "//benchmarks/workloads/ffmpeg:tar",
+ "//benchmarks/workloads/fio:tar",
+ "//benchmarks/workloads/httpd:tar",
+ "//benchmarks/workloads/iperf:tar",
+ "//benchmarks/workloads/netcat:tar",
+ "//benchmarks/workloads/nginx:tar",
+ "//benchmarks/workloads/node:tar",
+ "//benchmarks/workloads/node_template:tar",
+ "//benchmarks/workloads/redis:tar",
+ "//benchmarks/workloads/redisbenchmark:tar",
+ "//benchmarks/workloads/ruby:tar",
+ "//benchmarks/workloads/ruby_template:tar",
+ "//benchmarks/workloads/sleep:tar",
+ "//benchmarks/workloads/sysbench:tar",
+ "//benchmarks/workloads/syscall:tar",
+ "//benchmarks/workloads/tensorflow:tar",
+ "//benchmarks/workloads/true:tar",
],
)
diff --git a/benchmarks/workloads/ab/BUILD b/benchmarks/workloads/ab/BUILD
index e99a8d674..4fc0ab735 100644
--- a/benchmarks/workloads/ab/BUILD
+++ b/benchmarks/workloads/ab/BUILD
@@ -1,4 +1,5 @@
load("//benchmarks:defs.bzl", "py_library", "py_test", "requirement")
+load("@rules_pkg//:pkg.bzl", "pkg_tar")
package(
default_visibility = ["//benchmarks:__subpackages__"],
@@ -27,8 +28,8 @@ py_test(
],
)
-filegroup(
- name = "files",
+pkg_tar(
+ name = "tar",
srcs = [
"Dockerfile",
],
diff --git a/benchmarks/workloads/absl/BUILD b/benchmarks/workloads/absl/BUILD
index bb499620e..61e010096 100644
--- a/benchmarks/workloads/absl/BUILD
+++ b/benchmarks/workloads/absl/BUILD
@@ -1,4 +1,5 @@
load("//benchmarks:defs.bzl", "py_library", "py_test", "requirement")
+load("@rules_pkg//:pkg.bzl", "pkg_tar")
package(
default_visibility = ["//benchmarks:__subpackages__"],
@@ -27,8 +28,8 @@ py_test(
],
)
-filegroup(
- name = "files",
+pkg_tar(
+ name = "tar",
srcs = [
"Dockerfile",
],
diff --git a/benchmarks/workloads/curl/BUILD b/benchmarks/workloads/curl/BUILD
index 83f3c71a0..eb0fb6165 100644
--- a/benchmarks/workloads/curl/BUILD
+++ b/benchmarks/workloads/curl/BUILD
@@ -1,10 +1,12 @@
+load("@rules_pkg//:pkg.bzl", "pkg_tar")
+
package(
default_visibility = ["//benchmarks:__subpackages__"],
licenses = ["notice"],
)
-filegroup(
- name = "files",
+pkg_tar(
+ name = "tar",
srcs = [
"Dockerfile",
],
diff --git a/benchmarks/workloads/ffmpeg/BUILD b/benchmarks/workloads/ffmpeg/BUILD
index c1f2afc40..be472dfb2 100644
--- a/benchmarks/workloads/ffmpeg/BUILD
+++ b/benchmarks/workloads/ffmpeg/BUILD
@@ -1,3 +1,5 @@
+load("@rules_pkg//:pkg.bzl", "pkg_tar")
+
package(
default_visibility = ["//benchmarks:__subpackages__"],
licenses = ["notice"],
@@ -8,8 +10,8 @@ py_library(
srcs = ["__init__.py"],
)
-filegroup(
- name = "files",
+pkg_tar(
+ name = "tar",
srcs = [
"Dockerfile",
],
diff --git a/benchmarks/workloads/fio/BUILD b/benchmarks/workloads/fio/BUILD
index 7fc96cfa5..de257adad 100644
--- a/benchmarks/workloads/fio/BUILD
+++ b/benchmarks/workloads/fio/BUILD
@@ -1,4 +1,5 @@
load("//benchmarks:defs.bzl", "py_library", "py_test", "requirement")
+load("@rules_pkg//:pkg.bzl", "pkg_tar")
package(
default_visibility = ["//benchmarks:__subpackages__"],
@@ -27,8 +28,8 @@ py_test(
],
)
-filegroup(
- name = "files",
+pkg_tar(
+ name = "tar",
srcs = [
"Dockerfile",
],
diff --git a/benchmarks/workloads/httpd/BUILD b/benchmarks/workloads/httpd/BUILD
index 83f3c71a0..eb0fb6165 100644
--- a/benchmarks/workloads/httpd/BUILD
+++ b/benchmarks/workloads/httpd/BUILD
@@ -1,10 +1,12 @@
+load("@rules_pkg//:pkg.bzl", "pkg_tar")
+
package(
default_visibility = ["//benchmarks:__subpackages__"],
licenses = ["notice"],
)
-filegroup(
- name = "files",
+pkg_tar(
+ name = "tar",
srcs = [
"Dockerfile",
],
diff --git a/benchmarks/workloads/iperf/BUILD b/benchmarks/workloads/iperf/BUILD
index fe0acbfce..8832a996c 100644
--- a/benchmarks/workloads/iperf/BUILD
+++ b/benchmarks/workloads/iperf/BUILD
@@ -1,4 +1,5 @@
load("//benchmarks:defs.bzl", "py_library", "py_test", "requirement")
+load("@rules_pkg//:pkg.bzl", "pkg_tar")
package(
default_visibility = ["//benchmarks:__subpackages__"],
@@ -27,8 +28,8 @@ py_test(
],
)
-filegroup(
- name = "files",
+pkg_tar(
+ name = "tar",
srcs = [
"Dockerfile",
],
diff --git a/benchmarks/workloads/netcat/BUILD b/benchmarks/workloads/netcat/BUILD
index 83f3c71a0..eb0fb6165 100644
--- a/benchmarks/workloads/netcat/BUILD
+++ b/benchmarks/workloads/netcat/BUILD
@@ -1,10 +1,12 @@
+load("@rules_pkg//:pkg.bzl", "pkg_tar")
+
package(
default_visibility = ["//benchmarks:__subpackages__"],
licenses = ["notice"],
)
-filegroup(
- name = "files",
+pkg_tar(
+ name = "tar",
srcs = [
"Dockerfile",
],
diff --git a/benchmarks/workloads/nginx/BUILD b/benchmarks/workloads/nginx/BUILD
index 83f3c71a0..eb0fb6165 100644
--- a/benchmarks/workloads/nginx/BUILD
+++ b/benchmarks/workloads/nginx/BUILD
@@ -1,10 +1,12 @@
+load("@rules_pkg//:pkg.bzl", "pkg_tar")
+
package(
default_visibility = ["//benchmarks:__subpackages__"],
licenses = ["notice"],
)
-filegroup(
- name = "files",
+pkg_tar(
+ name = "tar",
srcs = [
"Dockerfile",
],
diff --git a/benchmarks/workloads/node/BUILD b/benchmarks/workloads/node/BUILD
index 59460d02f..71cd9f519 100644
--- a/benchmarks/workloads/node/BUILD
+++ b/benchmarks/workloads/node/BUILD
@@ -1,10 +1,12 @@
+load("@rules_pkg//:pkg.bzl", "pkg_tar")
+
package(
default_visibility = ["//benchmarks:__subpackages__"],
licenses = ["notice"],
)
-filegroup(
- name = "files",
+pkg_tar(
+ name = "tar",
srcs = [
"Dockerfile",
"index.js",
diff --git a/benchmarks/workloads/node_template/BUILD b/benchmarks/workloads/node_template/BUILD
index ae7f121d3..ca996f068 100644
--- a/benchmarks/workloads/node_template/BUILD
+++ b/benchmarks/workloads/node_template/BUILD
@@ -1,10 +1,12 @@
+load("@rules_pkg//:pkg.bzl", "pkg_tar")
+
package(
default_visibility = ["//benchmarks:__subpackages__"],
licenses = ["notice"],
)
-filegroup(
- name = "files",
+pkg_tar(
+ name = "tar",
srcs = [
"Dockerfile",
"index.hbs",
diff --git a/benchmarks/workloads/redis/BUILD b/benchmarks/workloads/redis/BUILD
index 83f3c71a0..eb0fb6165 100644
--- a/benchmarks/workloads/redis/BUILD
+++ b/benchmarks/workloads/redis/BUILD
@@ -1,10 +1,12 @@
+load("@rules_pkg//:pkg.bzl", "pkg_tar")
+
package(
default_visibility = ["//benchmarks:__subpackages__"],
licenses = ["notice"],
)
-filegroup(
- name = "files",
+pkg_tar(
+ name = "tar",
srcs = [
"Dockerfile",
],
diff --git a/benchmarks/workloads/redisbenchmark/BUILD b/benchmarks/workloads/redisbenchmark/BUILD
index d40e75a3a..f5994a815 100644
--- a/benchmarks/workloads/redisbenchmark/BUILD
+++ b/benchmarks/workloads/redisbenchmark/BUILD
@@ -1,4 +1,5 @@
load("//benchmarks:defs.bzl", "py_library", "py_test", "requirement")
+load("@rules_pkg//:pkg.bzl", "pkg_tar")
package(
default_visibility = ["//benchmarks:__subpackages__"],
@@ -27,8 +28,8 @@ py_test(
],
)
-filegroup(
- name = "files",
+pkg_tar(
+ name = "tar",
srcs = [
"Dockerfile",
],
diff --git a/benchmarks/workloads/ruby/BUILD b/benchmarks/workloads/ruby/BUILD
index 9846c7e70..e37d77804 100644
--- a/benchmarks/workloads/ruby/BUILD
+++ b/benchmarks/workloads/ruby/BUILD
@@ -1,3 +1,5 @@
+load("@rules_pkg//:pkg.bzl", "pkg_tar")
+
package(
default_visibility = ["//benchmarks:__subpackages__"],
licenses = ["notice"],
@@ -13,3 +15,14 @@ filegroup(
"index.rb",
],
)
+
+pkg_tar(
+ name = "tar",
+ srcs = [
+ "Dockerfile",
+ "Gemfile",
+ "Gemfile.lock",
+ "config.ru",
+ "index.rb",
+ ],
+)
diff --git a/benchmarks/workloads/ruby_template/BUILD b/benchmarks/workloads/ruby_template/BUILD
index 2b99892af..27f7c0c46 100644
--- a/benchmarks/workloads/ruby_template/BUILD
+++ b/benchmarks/workloads/ruby_template/BUILD
@@ -1,10 +1,12 @@
+load("@rules_pkg//:pkg.bzl", "pkg_tar")
+
package(
default_visibility = ["//benchmarks:__subpackages__"],
licenses = ["notice"],
)
-filegroup(
- name = "files",
+pkg_tar(
+ name = "tar",
srcs = [
"Dockerfile",
"Gemfile",
@@ -13,4 +15,5 @@ filegroup(
"index.erb",
"main.rb",
],
+ strip_prefix = "third_party/gvisor/benchmarks/workloads/ruby_template",
)
diff --git a/benchmarks/workloads/sleep/BUILD b/benchmarks/workloads/sleep/BUILD
index 83f3c71a0..eb0fb6165 100644
--- a/benchmarks/workloads/sleep/BUILD
+++ b/benchmarks/workloads/sleep/BUILD
@@ -1,10 +1,12 @@
+load("@rules_pkg//:pkg.bzl", "pkg_tar")
+
package(
default_visibility = ["//benchmarks:__subpackages__"],
licenses = ["notice"],
)
-filegroup(
- name = "files",
+pkg_tar(
+ name = "tar",
srcs = [
"Dockerfile",
],
diff --git a/benchmarks/workloads/sysbench/BUILD b/benchmarks/workloads/sysbench/BUILD
index 35f4d460b..fd2f8f03d 100644
--- a/benchmarks/workloads/sysbench/BUILD
+++ b/benchmarks/workloads/sysbench/BUILD
@@ -1,4 +1,5 @@
load("//benchmarks:defs.bzl", "py_library", "py_test", "requirement")
+load("@rules_pkg//:pkg.bzl", "pkg_tar")
package(
default_visibility = ["//benchmarks:__subpackages__"],
@@ -27,8 +28,8 @@ py_test(
],
)
-filegroup(
- name = "files",
+pkg_tar(
+ name = "tar",
srcs = [
"Dockerfile",
],
diff --git a/benchmarks/workloads/syscall/BUILD b/benchmarks/workloads/syscall/BUILD
index e1ff3059b..5100cbb21 100644
--- a/benchmarks/workloads/syscall/BUILD
+++ b/benchmarks/workloads/syscall/BUILD
@@ -1,4 +1,5 @@
load("//benchmarks:defs.bzl", "py_library", "py_test", "requirement")
+load("@rules_pkg//:pkg.bzl", "pkg_tar")
package(
default_visibility = ["//benchmarks:__subpackages__"],
@@ -27,8 +28,8 @@ py_test(
],
)
-filegroup(
- name = "files",
+pkg_tar(
+ name = "tar",
srcs = [
"Dockerfile",
"syscall.c",
diff --git a/benchmarks/workloads/tensorflow/BUILD b/benchmarks/workloads/tensorflow/BUILD
index 17f1f8ebb..026c3b316 100644
--- a/benchmarks/workloads/tensorflow/BUILD
+++ b/benchmarks/workloads/tensorflow/BUILD
@@ -1,3 +1,5 @@
+load("@rules_pkg//:pkg.bzl", "pkg_tar")
+
package(
default_visibility = ["//benchmarks:__subpackages__"],
licenses = ["notice"],
@@ -8,8 +10,8 @@ py_library(
srcs = ["__init__.py"],
)
-filegroup(
- name = "files",
+pkg_tar(
+ name = "tar",
srcs = [
"Dockerfile",
],
diff --git a/benchmarks/workloads/true/BUILD b/benchmarks/workloads/true/BUILD
index 83f3c71a0..221c4b9a7 100644
--- a/benchmarks/workloads/true/BUILD
+++ b/benchmarks/workloads/true/BUILD
@@ -1,11 +1,14 @@
+load("@rules_pkg//:pkg.bzl", "pkg_tar")
+
package(
default_visibility = ["//benchmarks:__subpackages__"],
licenses = ["notice"],
)
-filegroup(
- name = "files",
+pkg_tar(
+ name = "tar",
srcs = [
"Dockerfile",
],
+ extension = "tar",
)