Diffstat (limited to 'g3doc/user_guide')
25 files changed, 0 insertions, 2406 deletions
diff --git a/g3doc/user_guide/BUILD b/g3doc/user_guide/BUILD deleted file mode 100644 index b69aee12c..000000000 --- a/g3doc/user_guide/BUILD +++ /dev/null @@ -1,70 +0,0 @@ -load("//website:defs.bzl", "doc") - -package( - default_visibility = ["//website:__pkg__"], - licenses = ["notice"], -) - -doc( - name = "compatibility", - src = "compatibility.md", - category = "Compatibility", - permalink = "/docs/user_guide/compatibility/", - weight = "0", -) - -doc( - name = "checkpoint_restore", - src = "checkpoint_restore.md", - category = "User Guide", - permalink = "/docs/user_guide/checkpoint_restore/", - weight = "60", -) - -doc( - name = "debugging", - src = "debugging.md", - category = "User Guide", - permalink = "/docs/user_guide/debugging/", - weight = "70", -) - -doc( - name = "FAQ", - src = "FAQ.md", - category = "User Guide", - permalink = "/docs/user_guide/faq/", - weight = "90", -) - -doc( - name = "filesystem", - src = "filesystem.md", - category = "User Guide", - permalink = "/docs/user_guide/filesystem/", - weight = "40", -) - -doc( - name = "networking", - src = "networking.md", - category = "User Guide", - permalink = "/docs/user_guide/networking/", - weight = "50", -) - -doc( - name = "install", - src = "install.md", - category = "User Guide", - permalink = "/docs/user_guide/install/", - weight = "10", -) - -doc( - name = "platforms", - src = "platforms.md", - category = "User Guide", - permalink = "/docs/user_guide/platforms/", - weight = "30", -) diff --git a/g3doc/user_guide/FAQ.md b/g3doc/user_guide/FAQ.md deleted file mode 100644 index 26c836ddf..000000000 --- a/g3doc/user_guide/FAQ.md +++ /dev/null @@ -1,152 +0,0 @@ -# FAQ - -[TOC] - -### What operating systems are supported? {#supported-os} - -Today, gVisor requires Linux. - -### What CPU architectures are supported? {#supported-cpus} - -gVisor currently supports [x86_64/AMD64](https://en.wikipedia.org/wiki/X86-64) -compatible processors. 
Preliminary support is also available for -[ARM64](https://en.wikipedia.org/wiki/ARM_architecture#AArch64). - -### Do I need to modify my Linux application to use gVisor? {#modify-app} - -No. gVisor is capable of running unmodified Linux binaries. - -### What binary formats does gVisor support? {#supported-binaries} - -gVisor supports Linux -[ELF](https://en.wikipedia.org/wiki/Executable_and_Linkable_Format) binaries. - -Binaries run in gVisor should be built for the -[AMD64](https://en.wikipedia.org/wiki/X86-64) or -[AArch64](https://en.wikipedia.org/wiki/ARM_architecture#AArch64) CPU -architectures. - -### Can I run Docker images using gVisor? {#docker-images} - -Yes. Please see the [Docker Quick Start][docker]. - -### Can I run Kubernetes pods using gVisor? {#k8s-pods} - -Yes. Please see the [Kubernetes Quick Start][k8s]. - -### What's the security model? {#security-model} - -See the [Security Model][security-model]. - -## Troubleshooting - -### My container runs fine with `runc` but fails with `runsc` {#app-compatibility} - -If you’re having problems running a container with `runsc` it’s most likely due -to a compatibility issue or a missing feature in gVisor. See -[Debugging][debugging]. - -### When I run my container, docker fails with: `open /run/containerd/.../<containerid>/log.json: no such file or directory` {#memfd-create} - -You are using an older version of Linux which doesn't support `memfd_create`. - -This is tracked in [bug #268](https://gvisor.dev/issue/268). - -### When I run my container, docker fails with: `flag provided but not defined: -console` {#old-docker} - -You're using an old version of Docker. See [Docker Quick Start][docker]. - -### I can’t see a file copied with: `docker cp` {#fs-cache} - -For performance reasons, gVisor caches directory contents, and therefore it may -not realize a new file was copied to a given directory. 
To invalidate the cache -and force a refresh, create a file under the directory in question and list the -contents again. - -As a workaround, the shared root filesystem can be enabled. See -[Filesystem][filesystem]. - -This bug is tracked in [bug #4](https://gvisor.dev/issue/4). - -Note that `kubectl cp` works because it does the copy by exec'ing inside the -sandbox, and thus gVisor's internal cache is made aware of the new files and -directories. - -### I'm getting an error like: `panic: unable to attach: operation not permitted` or `fork/exec /proc/self/exe: invalid argument: unknown` {#runsc-perms} - -Make sure that permissions are correct on the `runsc` binary. - -```bash -sudo chmod a+rx /usr/local/bin/runsc -``` - -### I'm getting an error like `mount submount "/etc/hostname": creating mount with source ".../hostname": input/output error: unknown.` {#memlock} - -There is a bug in Linux kernel versions 5.1 to 5.3.15, 5.4.2, and 5.5. Upgrade -to a newer kernel or add the following to -`/lib/systemd/system/containerd.service` as a workaround. - -``` -LimitMEMLOCK=infinity -``` - -Then run `systemctl daemon-reload && systemctl restart containerd` to restart -containerd. - -See [issue #1765](https://gvisor.dev/issue/1765) for more details. - -### I'm getting an error like `RuntimeHandler "runsc" not supported` {#runtime-handler} - -This error indicates that the Kubernetes CRI runtime was not set up to handle -`runsc` as a runtime handler. Please ensure that the containerd configuration -has been created properly and containerd has been restarted. See the -[containerd quick start](containerd/quick_start.md) for more details. - -If you have ensured that containerd has been set up properly and you used -kubeadm to create your cluster, please check whether Docker is also installed on -that system. Kubeadm prefers using Docker if both Docker and containerd are -installed. - -Please recreate your cluster and set the `--cri-socket` option on kubeadm -commands.
For example: - -```bash -kubeadm init --cri-socket=/var/run/containerd/containerd.sock ... -``` - -To fix an existing cluster, edit the `/var/lib/kubelet/kubeadm-flags.env` file, -set the `--container-runtime` flag to `remote`, and set the -`--container-runtime-endpoint` flag to point to the containerd socket, e.g. -`/var/run/containerd/containerd.sock`. - -### My container cannot resolve another container's name when using Docker user defined bridge {#docker-bridge} - -This is normally indicated by errors like `bad address 'container-name'` when -trying to communicate with another container in the same network. - -The Docker user-defined bridge uses an embedded DNS server bound to the loopback -interface on address 127.0.0.10. This requires access to the host network in -order to communicate with the DNS server. The runsc network is isolated from the -host and cannot access the DNS server on the host network without breaking the -sandbox isolation. There are a few different workarounds you can try: - -* Use the default bridge network with `--link` to connect containers. The - default bridge doesn't use embedded DNS. -* Use the [`--network=host`][host-net] option in runsc, but beware that it - will use the host network stack and is less secure. -* Use IPs instead of container names. -* Use [Kubernetes][k8s]. Container name lookup works fine in Kubernetes. - -### I'm getting an error like `dial unix /run/containerd/s/09e4...8cff: connect: connection refused: unknown` {#shim-connect} - -This error may happen when using `gvisor-containerd-shim` with a `containerd` -that does not contain the fix for [CVE-2020-15257]. To resolve the issue, -update containerd to 1.3.9 or 1.4.3 (or a newer version, respectively).
- -[security-model]: /docs/architecture_guide/security/ -[host-net]: /docs/user_guide/networking/#network-passthrough -[debugging]: /docs/user_guide/debugging/ -[filesystem]: /docs/user_guide/filesystem/ -[docker]: /docs/user_guide/quick_start/docker/ -[k8s]: /docs/user_guide/quick_start/kubernetes/ -[CVE-2020-15257]: https://github.com/containerd/containerd/security/advisories/GHSA-36xw-fx78-c5r4 diff --git a/g3doc/user_guide/checkpoint_restore.md b/g3doc/user_guide/checkpoint_restore.md deleted file mode 100644 index 0ab0911b0..000000000 --- a/g3doc/user_guide/checkpoint_restore.md +++ /dev/null @@ -1,101 +0,0 @@ -# Checkpoint/Restore - -[TOC] - -gVisor has the ability to checkpoint a process, save its current state in a -state file, and restore into a new container using the state file. - -## How to use checkpoint/restore - -Checkpoint/restore functionality is currently available via raw `runsc` -commands. To use the checkpoint command, first run a container. - -```bash -runsc run <container id> -``` - -To checkpoint the container, the `--image-path` flag must be provided. This is -the directory path within which the checkpoint state-file will be created. The -file will be called `checkpoint.img` and necessary directories will be created -if they do not yet exist. - -> Note: Two checkpoints cannot be saved to the same directory; every image-path -> provided must be unique. - -```bash -runsc checkpoint --image-path=<path> <container id> -``` - -There is also an optional `--leave-running` flag that allows the container to -continue to run after the checkpoint has been made. (By default, containers stop -their processes after committing a checkpoint.) - -> Note: All top-level runsc flags needed when calling run must be provided to -> checkpoint if --leave-running is used. - -> Note: --leave-running works by performing an immediate restore, so the -> container keeps its given container id but may have a different -> process id.
- -```bash -runsc checkpoint --image-path=<path> --leave-running <container id> -``` - -To restore, provide the image path to the `checkpoint.img` file created during -the checkpoint. Because containers stop by default after checkpointing, restore -needs to happen in a new container (restore is a command which parallels start). - -```bash -runsc create <container id> - -runsc restore --image-path=<path> <container id> -``` - -## How to use checkpoint/restore in Docker - -Currently checkpoint/restore through `runsc` is not entirely compatible with -Docker, although there has been progress made by both gVisor and Docker to -enable compatibility. Here, we document the ideal workflow. - -Run a container: - -```bash -docker run [options] --runtime=runsc <image> -``` - -Checkpoint a container: - -```bash -docker checkpoint create <container> <checkpoint_name> -``` - -Create a new container into which to restore: - -```bash -docker create [options] --runtime=runsc <image> -``` - -Restore a container: - -```bash -docker start --checkpoint --checkpoint-dir=<directory> <container> -``` - -### Issues Preventing Compatibility with Docker - -- **[Moby #37360][leave-running]:** Docker version 18.03.0-ce and earlier - hangs when checkpointing and does not create the checkpoint. To successfully - use this feature, install a custom version of docker-ce from the moby - repository. This issue is caused by an improper implementation of the - `--leave-running` flag. This issue is fixed in newer releases. -- **Docker does not support restoration into new containers:** Docker - currently expects the container which created the checkpoint to be the same - container used to restore, which is not possible in runsc. When Docker - supports container migration and therefore restoration into new containers, - this will be the flow.
-- **[Moby #37344][checkpoint-dir]:** Docker does not currently support the - `--checkpoint-dir` flag but this will be required when restoring from a - checkpoint made in another container. - -[leave-running]: https://github.com/moby/moby/pull/37360 -[checkpoint-dir]: https://github.com/moby/moby/issues/37344 diff --git a/g3doc/user_guide/compatibility.md b/g3doc/user_guide/compatibility.md deleted file mode 100644 index 76e879a01..000000000 --- a/g3doc/user_guide/compatibility.md +++ /dev/null @@ -1,94 +0,0 @@ -# Applications - -[TOC] - -gVisor implements a large portion of the Linux surface and while we strive to -make it broadly compatible, there are (and always will be) unimplemented -features and bugs. The only real way to know if it will work is to try. If you -find a container that doesn’t work and there is no known issue, please -[file a bug][bug] indicating the full command you used to run the image. You can -view open issues related to compatibility [here][issues]. - -If you're able to provide the [debug logs](../debugging/), the problem likely to -be fixed much faster. - -## What works? - -The following applications/images have been tested: - -* elasticsearch -* golang -* httpd -* java8 -* jenkins -* mariadb -* memcached -* mongo -* mysql -* nginx -* node -* php -* postgres -* prometheus -* python -* redis -* registry -* tomcat -* wordpress - -## Utilities - -Most common utilities work. Note that: - -* Some tools, such as `tcpdump` and old versions of `ping`, require explicitly - enabling raw sockets via the unsafe `--net-raw` runsc flag. -* Different Docker images can behave differently. For example, Alpine Linux - and Ubuntu have different `ip` binaries. - - Specific tools include: - -<!-- mdformat off(don't wrap the table) --> - -| Tool | Status | -| :--------: | :-----------------------------------------: | -| apt-get | Working. | -| bundle | Working. | -| cat | Working. | -| curl | Working. | -| dd | Working. | -| df | Working. | -| dig | Working. 
| -| drill | Working. | -| env | Working. | -| find | Working. | -| gcore | Working. | -| gdb | Working. | -| gosu | Working. | -| grep | Working (unless stdin is a pipe and stdout is /dev/null). | -| ifconfig | Works partially, like ip. Full support [in progress](https://gvisor.dev/issue/578). | -| ip | Some subcommands work (e.g. addr, route). Full support [in progress](https://gvisor.dev/issue/578). | -| less | Working. | -| ls | Working. | -| lsof | Working. | -| mount | Works in readonly mode. gVisor doesn't currently support creating new mounts at runtime. | -| nc | Working. | -| nmap | Not working. | -| netstat | [In progress](https://gvisor.dev/issue/2112). | -| nslookup | Working. | -| ping | Working. | -| ps | Working. | -| route | Working. | -| ss | [In progress](https://gvisor.dev/issue/2114). | -| sshd | Partially working. Job control [in progress](https://gvisor.dev/issue/154). | -| strace | Working. | -| tar | Working. | -| tcpdump | Working. [Promiscuous mode in progress](https://gvisor.dev/issue/3333). | -| top | Working. | -| uptime | Working. | -| vim | Working. | -| wget | Working. 
| - -<!-- mdformat on --> - -[bug]: https://github.com/google/gvisor/issues/new?title=Compatibility%20Issue: -[issues]: https://github.com/google/gvisor/issues?q=is%3Aissue+is%3Aopen+label%3A%22area%3A+compatibility%22 diff --git a/g3doc/user_guide/containerd/BUILD b/g3doc/user_guide/containerd/BUILD deleted file mode 100644 index 0ede4819c..000000000 --- a/g3doc/user_guide/containerd/BUILD +++ /dev/null @@ -1,33 +0,0 @@ -load("//website:defs.bzl", "doc") - -package( - default_visibility = ["//website:__pkg__"], - licenses = ["notice"], -) - -doc( - name = "quick_start", - src = "quick_start.md", - category = "User Guide", - permalink = "/docs/user_guide/containerd/quick_start/", - subcategory = "Containerd", - weight = "10", -) - -doc( - name = "configuration", - src = "configuration.md", - category = "User Guide", - permalink = "/docs/user_guide/containerd/configuration/", - subcategory = "Containerd", - weight = "90", -) - -doc( - name = "containerd_11", - src = "containerd_11.md", - category = "User Guide", - include_in_menu = False, - permalink = "/docs/user_guide/containerd/containerd_11/", - subcategory = "Containerd", -) diff --git a/g3doc/user_guide/containerd/configuration.md b/g3doc/user_guide/containerd/configuration.md deleted file mode 100644 index a214fb0c7..000000000 --- a/g3doc/user_guide/containerd/configuration.md +++ /dev/null @@ -1,102 +0,0 @@ -# Containerd Advanced Configuration - -This document describes how to configure runtime options for -`containerd-shim-runsc-v1`. You can find the installation instructions and -minimal requirements in [Containerd Quick Start](./quick_start.md). - -## Shim Configuration - -The shim can be provided with a configuration file containing options to the -shim itself as well as a set of flags to runsc. 
Here is a quick example: - -```shell -cat <<EOF | sudo tee /etc/containerd/runsc.toml -option = "value" -[runsc_config] - flag = "value" -EOF -``` - -The set of options that can be configured can be found in -[options.go](https://cs.opensource.google/gvisor/gvisor/+/master:pkg/shim/options.go). -Values under `[runsc_config]` can be used to set arbitrary flags to runsc. -`flag = "value"` is converted to `--flag="value"` when runsc is invoked. Run -`runsc flags` to see which flags are available. - -Next, containerd needs to be configured to send the configuration file to the -shim. - -### Containerd 1.3+ - -Starting in 1.3, containerd supports a configurable `ConfigPath` in the runtime -configuration. Here is an example: - -```shell -cat <<EOF | sudo tee /etc/containerd/config.toml -version = 2 -[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc] - runtime_type = "io.containerd.runc.v2" -[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runsc] - runtime_type = "io.containerd.runsc.v1" -[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runsc.options] - TypeUrl = "io.containerd.runsc.v1.options" - ConfigPath = "/etc/containerd/runsc.toml" -EOF -``` - -When you are done, restart containerd to pick up the changes. - -```shell -sudo systemctl restart containerd -``` - -## Debug - -When `shim_debug` is enabled in `/etc/containerd/config.toml`, containerd will -forward shim logs to its own log. You can additionally set `level = "debug"` to -enable debug logs. To see the logs, run `sudo journalctl -u containerd`.
Here is -a containerd configuration file that enables both options: - -```shell -cat <<EOF | sudo tee /etc/containerd/config.toml -version = 2 -[debug] - level = "debug" -[plugins."io.containerd.runtime.v1.linux"] - shim_debug = true -[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc] - runtime_type = "io.containerd.runc.v2" -[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runsc] - runtime_type = "io.containerd.runsc.v1" -[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runsc.options] - TypeUrl = "io.containerd.runsc.v1.options" - ConfigPath = "/etc/containerd/runsc.toml" -EOF -``` - -It can be hard to separate containerd messages from the shim's, though. To -create a log file dedicated to the shim, you can set the `log_path` and -`log_level` values in the shim configuration file: - -- `log_path` is the directory where the shim logs will be created. `%ID%` in - the path is replaced with the container ID. -- `log_level` sets the log level. It is normally set to "debug" as there is - not much interesting happening with other log levels. - -### Example: Enable shim and gVisor debug logging - -gVisor debug logging can be enabled by setting the `debug` and `debug-log` -flags. The shim will replace "%ID%" with the container ID, and "%COMMAND%" with -the runsc command (run, boot, etc.) in the path of the `debug-log` flag. - -Find out more about debugging in the [debugging guide](../debugging.md).
- -```shell -cat <<EOF | sudo tee /etc/containerd/runsc.toml -log_path = "/var/log/runsc/%ID%/shim.log" -log_level = "debug" -[runsc_config] - debug = "true" - debug-log = "/var/log/runsc/%ID%/gvisor.%COMMAND%.log" -EOF -``` diff --git a/g3doc/user_guide/containerd/containerd_11.md b/g3doc/user_guide/containerd/containerd_11.md deleted file mode 100644 index 200d3da76..000000000 --- a/g3doc/user_guide/containerd/containerd_11.md +++ /dev/null @@ -1,167 +0,0 @@ -# Older Versions (containerd 1.1) - -**WARNING: containerd 1.1 and shim v1 are no longer supported. The instructions -below are kept for reference only, in case you're dealing with an old version. -Upgrading to the latest version is highly recommended.** - -This document describes how to install and run the `gvisor-containerd-shim` -using the untrusted workload CRI extension. This requires `containerd` 1.1 or -later. - -*Note: The untrusted workload CRI extension is deprecated by containerd and -`gvisor-containerd-shim` is maintained on a best-effort basis. If you are using -containerd 1.2+, please see the -[containerd 1.2+ documentation](./quick_start.md) and use -`containerd-shim-runsc-v1`.* - -## Requirements - -- **runsc** and **gvisor-containerd-shim**: See the - [installation guide](/docs/user_guide/install/). - -- **containerd**: See the [containerd website](https://containerd.io/) for - information on how to install containerd. - -## Configure containerd - -Create the configuration for the gvisor shim in -`/etc/containerd/gvisor-containerd-shim.toml`: - -```shell -cat <<EOF | sudo tee /etc/containerd/gvisor-containerd-shim.toml -# This is the path to the default runc containerd-shim. -runc_shim = "/usr/local/bin/containerd-shim" -EOF -``` - -Update `/etc/containerd/config.toml`.
Be sure to update the path to -`gvisor-containerd-shim` and `runsc` if necessary: - -```shell -cat <<EOF | sudo tee /etc/containerd/config.toml -disabled_plugins = ["restart"] -[plugins.linux] - shim = "/usr/local/bin/gvisor-containerd-shim" - shim_debug = true -[plugins.cri.containerd.untrusted_workload_runtime] - runtime_type = "io.containerd.runtime.v1.linux" - runtime_engine = "/usr/local/bin/runsc" - runtime_root = "/run/containerd/runsc" -EOF -``` - -Restart `containerd`: - -```shell -sudo systemctl restart containerd -``` - -## Usage - -You can run containers in gVisor via containerd's CRI. - -### Install crictl - -Download and install the `crictl` binary: - -```shell -{ -wget https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.13.0/crictl-v1.13.0-linux-amd64.tar.gz -tar xf crictl-v1.13.0-linux-amd64.tar.gz -sudo mv crictl /usr/local/bin -} -``` - -Write the `crictl` configuration file: - -```shell -cat <<EOF | sudo tee /etc/crictl.yaml -runtime-endpoint: unix:///run/containerd/containerd.sock -EOF -``` - -### Create the nginx Sandbox in gVisor - -Pull the nginx image: - -```shell -sudo crictl pull nginx -``` - -Create the sandbox creation request: - -```shell -cat <<EOF | tee sandbox.json -{ - "metadata": { - "name": "nginx-sandbox", - "namespace": "default", - "attempt": 1, - "uid": "hdishd83djaidwnduwk28bcsb" - }, - "annotations": { - "io.kubernetes.cri.untrusted-workload": "true" - }, - "linux": { - }, - "log_directory": "/tmp" -} -EOF -``` - -Create the pod in gVisor: - -```shell -SANDBOX_ID=$(sudo crictl runp sandbox.json) -``` - -### Run the nginx Container in the Sandbox - -Create the nginx container creation request: - -```shell -cat <<EOF | tee container.json -{ - "metadata": { - "name": "nginx" - }, - "image":{ - "image": "nginx" - }, - "log_path":"nginx.0.log", - "linux": { - } -} -EOF -``` - -Create the nginx container: - -```shell -CONTAINER_ID=$(sudo crictl create ${SANDBOX_ID} container.json sandbox.json) -``` - -Start the nginx 
container: - -```shell -sudo crictl start ${CONTAINER_ID} -``` - -### Validate the container - -Inspect the created pod: - -```shell -sudo crictl inspectp ${SANDBOX_ID} -``` - -Inspect the nginx container: - -```shell -sudo crictl inspect ${CONTAINER_ID} -``` - -Verify that nginx is running in gVisor: - -```shell -sudo crictl exec ${CONTAINER_ID} dmesg | grep -i gvisor -``` diff --git a/g3doc/user_guide/containerd/quick_start.md b/g3doc/user_guide/containerd/quick_start.md deleted file mode 100644 index c742f225c..000000000 --- a/g3doc/user_guide/containerd/quick_start.md +++ /dev/null @@ -1,182 +0,0 @@ -# Containerd Quick Start - -This document describes how to use `containerd-shim-runsc-v1` with the -containerd runtime handler support on `containerd`. - -> ⚠️ NOTE: If you are using Kubernetes and set up your cluster using kubeadm you -> may run into issues. See the [FAQ](../FAQ.md#runtime-handler) for details. - -## Requirements - -- **runsc** and **containerd-shim-runsc-v1**: See the - [installation guide](/docs/user_guide/install/). -- **containerd**: See the [containerd website](https://containerd.io/) for - information on how to install containerd. **Minimal version supported: 1.3.9 - or 1.4.3.** - -## Configure containerd - -Update `/etc/containerd/config.toml`. Make sure `containerd-shim-runsc-v1` is in -`${PATH}` or in the same directory as `containerd` binary. - -```shell -cat <<EOF | sudo tee /etc/containerd/config.toml -version = 2 -[plugins."io.containerd.runtime.v1.linux"] - shim_debug = true -[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc] - runtime_type = "io.containerd.runc.v2" -[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runsc] - runtime_type = "io.containerd.runsc.v1" -EOF -``` - -Restart `containerd`: - -```shell -sudo systemctl restart containerd -``` - -## Usage - -You can run containers in gVisor via containerd's CRI. 
- -### Install crictl - -Download and install the `crictl` binary: - -```shell -{ -wget https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.13.0/crictl-v1.13.0-linux-amd64.tar.gz -tar xf crictl-v1.13.0-linux-amd64.tar.gz -sudo mv crictl /usr/local/bin -} -``` - -Write the `crictl` configuration file: - -```shell -cat <<EOF | sudo tee /etc/crictl.yaml -runtime-endpoint: unix:///run/containerd/containerd.sock -EOF -``` - -### Create the nginx sandbox in gVisor - -Pull the nginx image: - -```shell -sudo crictl pull nginx -``` - -Create the sandbox creation request: - -```shell -cat <<EOF | tee sandbox.json -{ - "metadata": { - "name": "nginx-sandbox", - "namespace": "default", - "attempt": 1, - "uid": "hdishd83djaidwnduwk28bcsb" - }, - "linux": { - }, - "log_directory": "/tmp" -} -EOF -``` - -Create the pod in gVisor: - -```shell -SANDBOX_ID=$(sudo crictl runp --runtime runsc sandbox.json) -``` - -### Run the nginx container in the sandbox - -Create the nginx container creation request: - -```shell -cat <<EOF | tee container.json -{ - "metadata": { - "name": "nginx" - }, - "image":{ - "image": "nginx" - }, - "log_path":"nginx.0.log", - "linux": { - } -} -EOF -``` - -Create the nginx container: - -```shell -CONTAINER_ID=$(sudo crictl create ${SANDBOX_ID} container.json sandbox.json) -``` - -Start the nginx container: - -```shell -sudo crictl start ${CONTAINER_ID} -``` - -### Validate the container - -Inspect the created pod: - -```shell -sudo crictl inspectp ${SANDBOX_ID} -``` - -Inspect the nginx container: - -```shell -sudo crictl inspect ${CONTAINER_ID} -``` - -Verify that nginx is running in gVisor: - -```shell -sudo crictl exec ${CONTAINER_ID} dmesg | grep -i gvisor -``` - -### Set up the Kubernetes RuntimeClass - -Install the RuntimeClass for gVisor: - -```shell -cat <<EOF | kubectl apply -f - -apiVersion: node.k8s.io/v1beta1 -kind: RuntimeClass -metadata: - name: gvisor -handler: runsc -EOF -``` - -Create a Pod with the gVisor RuntimeClass: - 
-```shell -cat <<EOF | kubectl apply -f - -apiVersion: v1 -kind: Pod -metadata: - name: nginx-gvisor -spec: - runtimeClassName: gvisor - containers: - - name: nginx - image: nginx -EOF -``` - -Verify that the Pod is running: - -```shell -kubectl get pod nginx-gvisor -o wide -``` diff --git a/g3doc/user_guide/debugging.md b/g3doc/user_guide/debugging.md deleted file mode 100644 index 2291b5fab..000000000 --- a/g3doc/user_guide/debugging.md +++ /dev/null @@ -1,156 +0,0 @@ -# Debugging - -[TOC] - -To enable debug and system call logging, add the `runtimeArgs` below to your -[Docker](../quick_start/docker/) configuration (`/etc/docker/daemon.json`): - -```json -{ - "runtimes": { - "runsc": { - "path": "/usr/local/bin/runsc", - "runtimeArgs": [ - "--debug-log=/tmp/runsc/", - "--debug", - "--strace" - ] - } - } -} -``` - -> Note: the last `/` in `--debug-log` is needed to interpret it as a directory. -> Then each `runsc` command executed will create a separate log file. Otherwise, -> log messages from all commands will be appended to the same file. - -You may also want to pass `--log-packets` to troubleshoot network problems. Then -restart the Docker daemon: - -```bash -sudo systemctl restart docker -``` - -Run your container again, and inspect the files under `/tmp/runsc`. The log file -ending with `.boot` will contain the strace logs from your application, which -can be useful for identifying missing or broken system calls in gVisor. If you -are having problems starting the container, the log file ending with `.create` -may have the reason for the failure. - -## Stack traces - -The command `runsc debug --stacks` collects stack traces while the sandbox is -running which can be useful to troubleshoot issues or just to learn more about -gVisor. It connects to the sandbox process, collects a stack dump, and writes it -to the console. 
For example: - -```bash -docker run --runtime=runsc --rm -d alpine sh -c "while true; do echo running; sleep 1; done" -63254c6ab3a6989623fa1fb53616951eed31ac605a2637bb9ddba5d8d404b35b - -sudo runsc --root /var/run/docker/runtime-runsc/moby debug --stacks 63254c6ab3a6989623fa1fb53616951eed31ac605a2637bb9ddba5d8d404b35b -``` - -> Note: `--root` variable is provided by docker and is normally set to -> `/var/run/docker/runtime-[runtime-name]/moby`. If in doubt, `--root` is logged -> to `runsc` logs. - -## Debugger - -You can debug gVisor like any other Golang program. If you're running with -Docker, you'll need to find the sandbox PID and attach the debugger as root. -Here is an example: - -Install a runsc with debug symbols (you can also use the -[nightly release](../install/#nightly)): - -```bash -make dev BAZEL_OPTIONS="-c dbg" -``` - -Start the container you want to debug using the runsc runtime with debug -options: - -```bash -docker run --runtime=$(git branch --show-current)-d --rm --name=test -p 8080:80 -d nginx -``` - -Find the PID and attach your favorite debugger: - -```bash -sudo dlv attach $(docker inspect test | grep Pid | head -n 1 | grep -oe "[0-9]*") -``` - -Set a breakpoint for accept: - -```bash -break gvisor.dev/gvisor/pkg/sentry/socket/netstack.(*SocketOperations).Accept -continue -``` - -In a different window connect to nginx to trigger the breakpoint: - -```bash -curl http://localhost:8080/ -``` - -## Profiling - -`runsc` integrates with Go profiling tools and gives you easy commands to -profile CPU and heap usage. First you need to enable `--profile` in the command -line options before starting the container: - -```json -{ - "runtimes": { - "runsc-prof": { - "path": "/usr/local/bin/runsc", - "runtimeArgs": [ - "--profile" - ] - } - } -} -``` - -> Note: Enabling profiling loosens the seccomp protection added to the sandbox, -> and should not be run in production under normal circumstances. - -Then restart docker to refresh the runtime options. 
While the container is -running, execute `runsc debug` to collect profile information and save it to a -file. Here are the options available: - -* **--profile-heap:** Generates a heap profile to the specified file. -* **--profile-cpu:** Enables the CPU profiler, waits for `--duration` seconds, and - generates a CPU profile to the specified file. - -For example: - -```bash -docker run --runtime=runsc-prof --rm -d alpine sh -c "while true; do echo running; sleep 1; done" -63254c6ab3a6989623fa1fb53616951eed31ac605a2637bb9ddba5d8d404b35b - -sudo runsc --root /var/run/docker/runtime-runsc-prof/moby debug --profile-heap=/tmp/heap.prof 63254c6ab3a6989623fa1fb53616951eed31ac605a2637bb9ddba5d8d404b35b -sudo runsc --root /var/run/docker/runtime-runsc-prof/moby debug --profile-cpu=/tmp/cpu.prof --duration=30s 63254c6ab3a6989623fa1fb53616951eed31ac605a2637bb9ddba5d8d404b35b -``` - -The resulting files can be opened using `go tool pprof` or [pprof][]. The -examples below create an image file (`.svg`) from the heap profile and write the -top CPU-consuming functions to the console: - -```bash -go tool pprof -svg /usr/local/bin/runsc /tmp/heap.prof -go tool pprof -top /usr/local/bin/runsc /tmp/cpu.prof -``` - -[pprof]: https://github.com/google/pprof/blob/master/doc/README.md - -### Docker Proxy - -When forwarding a port to the container, Docker will likely route traffic -through the [docker-proxy][]. This proxy may make profiling noisy, so it can be -helpful to bypass it. Do so by sending traffic directly to the container IP and -port, e.g., if the `docker0` IP is `192.168.9.1`, the container IP is likely a -subsequent IP, such as `192.168.9.2`. - -[docker-proxy]: https://windsock.io/the-docker-proxy/ diff --git a/g3doc/user_guide/filesystem.md b/g3doc/user_guide/filesystem.md deleted file mode 100644 index cd00762dd..000000000 --- a/g3doc/user_guide/filesystem.md +++ /dev/null @@ -1,60 +0,0 @@ -# Filesystem - -[TOC] - -gVisor accesses the filesystem through a file proxy, called the Gofer.
The gofer -runs as a separate process, that is isolated from the sandbox. Gofer instances -communicate with their respective sentry using the 9P protocol. For another -explanation see [What is gVisor?](../README.md). - -## Sandbox overlay - -To isolate the host filesystem from the sandbox, you can set a writable tmpfs -overlay on top of the entire filesystem. All modifications are made to the -overlay, keeping the host filesystem unmodified. - -> Note: All created and modified files are stored in memory inside the sandbox. - -To use the tmpfs overlay, add the following `runtimeArgs` to your Docker -configuration (`/etc/docker/daemon.json`) and restart the Docker daemon: - -```json -{ - "runtimes": { - "runsc": { - "path": "/usr/local/bin/runsc", - "runtimeArgs": [ - "--overlay" - ] - } - } -} -``` - -## Shared root filesystem - -The root filesystem is where the image is extracted and is not generally -modified from outside the sandbox. This allows for some optimizations, like -skipping checks to determine if a directory has changed since the last time it -was cached, thus missing updates that may have happened. If you need to `docker -cp` files inside the root filesystem, you may want to enable shared mode. Just -be aware that file system access will be slower due to the extra checks that are -required. - -> Note: External mounts are always shared. - -To use set the root filesystem shared, add the following `runtimeArgs` to your -Docker configuration (`/etc/docker/daemon.json`) and restart the Docker daemon: - -```json -{ - "runtimes": { - "runsc": { - "path": "/usr/local/bin/runsc", - "runtimeArgs": [ - "--file-access=shared" - ] - } - } -} -``` diff --git a/g3doc/user_guide/install.md b/g3doc/user_guide/install.md deleted file mode 100644 index 321f13ce8..000000000 --- a/g3doc/user_guide/install.md +++ /dev/null @@ -1,178 +0,0 @@ -# Installation - -[TOC] - -> Note: gVisor supports only x86\_64 and requires Linux 4.14.77+ -> ([older Linux](./networking.md#gso)). 
## Install latest release {#install-latest}

To download and install the latest release manually, follow these steps:

```bash
(
  set -e
  ARCH=$(uname -m)
  URL=https://storage.googleapis.com/gvisor/releases/release/latest/${ARCH}
  wget ${URL}/runsc ${URL}/runsc.sha512 \
    ${URL}/containerd-shim-runsc-v1 ${URL}/containerd-shim-runsc-v1.sha512
  sha512sum -c runsc.sha512 \
    -c containerd-shim-runsc-v1.sha512
  rm -f *.sha512
  chmod a+rx runsc containerd-shim-runsc-v1
  sudo mv runsc containerd-shim-runsc-v1 /usr/local/bin
)
```

To install gVisor as a Docker runtime, run the following commands:

```bash
/usr/local/bin/runsc install
sudo systemctl reload docker
docker run --rm --runtime=runsc hello-world
```

For more details about using gVisor with Docker, see
[Docker Quick Start](./quick_start/docker.md).

Note: It is important to copy `runsc` to a location that is readable and
executable by all users, since `runsc` executes itself as user `nobody` to avoid
unnecessary privileges. The `/usr/local/bin` directory is a good place to put
the `runsc` binary.

## Install from an `apt` repository

First, appropriate dependencies must be installed to allow `apt` to install
packages via https:

```bash
sudo apt-get update && \
sudo apt-get install -y \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg-agent \
    software-properties-common
```

Next, configure the key used to sign archives and the repository:

```bash
curl -fsSL https://gvisor.dev/archive.key | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64,arm64] https://storage.googleapis.com/gvisor/releases release main"
```

Now the runsc package can be installed:

```bash
sudo apt-get update && sudo apt-get install -y runsc
```

If you have Docker installed, it will be automatically configured.

## Versions

The `runsc` binaries and repositories are available in multiple versions and
release channels.
You should pick the version you'd like to install. For
experimentation, the nightly release is recommended. For production use, the
latest release is recommended.

After selecting an appropriate release channel from the options below, proceed
to the preferred installation mechanism: manual or from an `apt` repository.

> Note: Older releases are still available but may not have an `${ARCH}`
> component in the URL. These releases were available for `x86_64` only.

### HEAD

Binaries are available for every commit on the `master` branch, and are
available at the following URL:

`https://storage.googleapis.com/gvisor/releases/master/latest/${ARCH}`

You can use this link with the steps described in
[Install latest release](#install-latest).

For `apt` installation, use `master` to configure the repository:

```bash
sudo add-apt-repository "deb [arch=amd64,arm64] https://storage.googleapis.com/gvisor/releases master main"
```

### Nightly

Nightly releases are built most nights from the master branch, and are available
at the following URL:

`https://storage.googleapis.com/gvisor/releases/nightly/latest/${ARCH}`

You can use this link with the steps described in
[Install latest release](#install-latest).

Specific nightly releases can be found at:

`https://storage.googleapis.com/gvisor/releases/nightly/${yyyy-mm-dd}/${ARCH}`

Note that a release may not be available for every day.

For `apt` installation, use `nightly` to configure the repository:

```bash
sudo add-apt-repository "deb [arch=amd64,arm64] https://storage.googleapis.com/gvisor/releases nightly main"
```

### Latest release

The latest official release is available at the following URL:

`https://storage.googleapis.com/gvisor/releases/release/latest/${ARCH}`

You can use this link with the steps described in
[Install latest release](#install-latest).
For `apt` installation, use `release` to configure the repository:

```bash
sudo add-apt-repository "deb [arch=amd64,arm64] https://storage.googleapis.com/gvisor/releases release main"
```

### Specific release

Specific releases are the latest [point release](#point-release) for a given
date. Specific releases should be available for any date that has a point
release. A given release is available at the following URL:

`https://storage.googleapis.com/gvisor/releases/release/${yyyymmdd}/${ARCH}`

You can use this link with the steps described in
[Install latest release](#install-latest).

See the [releases](https://github.com/google/gvisor/releases) page for
information about specific releases.

For `apt` installation of a specific release, which may include point updates,
use the date of the release, e.g. `${yyyymmdd}`, to configure the repository:

```bash
sudo add-apt-repository "deb [arch=amd64,arm64] https://storage.googleapis.com/gvisor/releases yyyymmdd main"
```

> Note: only newer releases may be available as `apt` repositories.

### Point release

Point releases correspond to
[releases](https://github.com/google/gvisor/releases) tagged in the GitHub
repository. A given point release is available at the following URL:

`https://storage.googleapis.com/gvisor/releases/release/${yyyymmdd}.${rc}/${ARCH}`

You can use this link with the steps described in
[Install latest release](#install-latest).

Note that `apt` installation of a specific point release is not supported.

After installation, try out `runsc` by following the
[Docker Quick Start](./quick_start/docker.md),
[Containerd Quick Start](./containerd/quick_start.md), or
[OCI Quick Start](./quick_start/oci.md).
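All of the URLs above follow the same scheme, so a download link can be assembled mechanically; the date below is a placeholder, not a real release:

```shell
# Assemble a download URL for a dated release; DATE is illustrative only.
ARCH="$(uname -m)"
DATE="yyyymmdd"
echo "https://storage.googleapis.com/gvisor/releases/release/${DATE}/${ARCH}/runsc"
```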
diff --git a/g3doc/user_guide/networking.md b/g3doc/user_guide/networking.md
deleted file mode 100644
index 95f675633..000000000
--- a/g3doc/user_guide/networking.md
+++ /dev/null
@@ -1,84 +0,0 @@

# Networking

[TOC]

gVisor implements its own network stack called netstack. All aspects of the
network stack are handled inside the Sentry, including TCP connection state,
control messages, and packet assembly, keeping it isolated from the host
network stack. Data link layer packets are written directly to the virtual
device inside the network namespace set up by Docker or Kubernetes.

The IP address and routes configured for the device are transferred inside the
sandbox. The loopback device runs exclusively inside the sandbox and does not
use the host. You can inspect them by running:

```bash
docker run --rm --runtime=runsc alpine ip addr
```

## Network passthrough

For high-performance networking applications, you may choose to disable the user
space network stack and instead use the host network stack, including the
loopback. Note that this mode decreases the isolation from the host.

Add the following `runtimeArgs` to your Docker configuration
(`/etc/docker/daemon.json`) and restart the Docker daemon:

```json
{
    "runtimes": {
        "runsc": {
            "path": "/usr/local/bin/runsc",
            "runtimeArgs": [
                "--network=host"
            ]
        }
    }
}
```

## Disabling external networking

To completely isolate the host and network from the sandbox, external networking
can be disabled. The sandbox will still contain a loopback provided by netstack.
Add the following `runtimeArgs` to your Docker configuration
(`/etc/docker/daemon.json`) and restart the Docker daemon:

```json
{
    "runtimes": {
        "runsc": {
            "path": "/usr/local/bin/runsc",
            "runtimeArgs": [
                "--network=none"
            ]
        }
    }
}
```

### Disable GSO {#gso}

If your Linux kernel is older than 4.14.17 (but newer than 3.17), you can still
run gVisor by disabling Generic Segmentation Offload (GSO). Add the
`--gso=false` flag to your Docker runtime configuration
(`/etc/docker/daemon.json`) and restart the Docker daemon:

> Note: Network performance, especially for large payloads, will be greatly
> reduced.

```json
{
    "runtimes": {
        "runsc": {
            "path": "/usr/local/bin/runsc",
            "runtimeArgs": [
                "--gso=false"
            ]
        }
    }
}
```

diff --git a/g3doc/user_guide/platforms.md b/g3doc/user_guide/platforms.md
deleted file mode 100644
index 752025881..000000000
--- a/g3doc/user_guide/platforms.md
+++ /dev/null
@@ -1,95 +0,0 @@

# Changing Platforms

[TOC]

This guide describes how to change the
[platform](../architecture_guide/platforms.md) used by `runsc`.

## Prerequisites

If you intend to run the KVM platform, you will also need to have KVM installed
on your system. If you are running a Debian based system like Debian or Ubuntu,
you can usually do this by ensuring the module is loaded, and permissions are
appropriately set on the `/dev/kvm` device.

If you have an Intel CPU:

```bash
sudo modprobe kvm-intel && sudo chmod a+rw /dev/kvm
```

If you have an AMD CPU:

```bash
sudo modprobe kvm-amd && sudo chmod a+rw /dev/kvm
```

If you are using a virtual machine, you will need to make sure that nested
virtualization is configured.
Here are links to documents on how to set up
nested virtualization in several popular environments:

*   Google Cloud: [Enabling Nested Virtualization for VM Instances][nested-gcp]
*   Microsoft Azure:
    [How to enable nested virtualization in an Azure VM][nested-azure]
*   VirtualBox: [Nested Virtualization][nested-virtualbox]
*   KVM: [Nested Guests][nested-kvm]

***Note: nested virtualization will have poor performance and is historically a
cause of security issues (e.g.
[CVE-2018-12904](https://nvd.nist.gov/vuln/detail/CVE-2018-12904)). It is not
recommended for production.***

## Configuring Docker

The platform is selected by the `--platform` command line flag passed to
`runsc`. By default, the ptrace platform is selected. For example, to select the
KVM platform, modify your Docker configuration (`/etc/docker/daemon.json`) to
pass the `--platform` argument:

```json
{
    "runtimes": {
        "runsc": {
            "path": "/usr/local/bin/runsc",
            "runtimeArgs": [
                "--platform=kvm"
            ]
        }
    }
}
```

You must restart the Docker daemon after making changes to this file. Typically
this is done via `systemd`:

```bash
sudo systemctl restart docker
```

Note that you may configure multiple runtimes using different platforms.
For example, the following configuration has one entry for the ptrace platform
and one for the KVM platform:

```json
{
    "runtimes": {
        "runsc-ptrace": {
            "path": "/usr/local/bin/runsc",
            "runtimeArgs": [
                "--platform=ptrace"
            ]
        },
        "runsc-kvm": {
            "path": "/usr/local/bin/runsc",
            "runtimeArgs": [
                "--platform=kvm"
            ]
        }
    }
}
```

[nested-azure]: https://docs.microsoft.com/en-us/azure/virtual-machines/windows/nested-virtualization
[nested-gcp]: https://cloud.google.com/compute/docs/instances/enable-nested-virtualization-vm-instances
[nested-virtualbox]: https://www.virtualbox.org/manual/UserManual.html#nested-virt
[nested-kvm]: https://www.linux-kvm.org/page/Nested_Guests

diff --git a/g3doc/user_guide/quick_start/BUILD b/g3doc/user_guide/quick_start/BUILD
deleted file mode 100644
index 63f17f9cb..000000000
--- a/g3doc/user_guide/quick_start/BUILD
+++ /dev/null
@@ -1,33 +0,0 @@

load("//website:defs.bzl", "doc")

package(
    default_visibility = ["//website:__pkg__"],
    licenses = ["notice"],
)

doc(
    name = "docker",
    src = "docker.md",
    category = "User Guide",
    permalink = "/docs/user_guide/quick_start/docker/",
    subcategory = "Quick Start",
    weight = "11",
)

doc(
    name = "oci",
    src = "oci.md",
    category = "User Guide",
    permalink = "/docs/user_guide/quick_start/oci/",
    subcategory = "Quick Start",
    weight = "12",
)

doc(
    name = "kubernetes",
    src = "kubernetes.md",
    category = "User Guide",
    permalink = "/docs/user_guide/quick_start/kubernetes/",
    subcategory = "Quick Start",
    weight = "13",
)

diff --git a/g3doc/user_guide/quick_start/docker.md b/g3doc/user_guide/quick_start/docker.md
deleted file mode 100644
index ee842e453..000000000
--- a/g3doc/user_guide/quick_start/docker.md
+++ /dev/null
@@ -1,99 +0,0 @@

# Docker Quick Start

> Note: This guide requires Docker version 17.09.0 or greater. Refer to the
> [Docker documentation][docker] for how to install it.
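To check that requirement before proceeding, you can compare the server version reported by Docker against the minimum (a sketch; it prints a message instead of failing when Docker is not installed):

```shell
# Version-aware check of the Docker server version against 17.09.0.
min="17.09.0"
ver="$(docker version --format '{{.Server.Version}}' 2>/dev/null || true)"
if [ -z "$ver" ]; then
  echo "docker not found"
elif [ "$(printf '%s\n' "$min" "$ver" | sort -V | head -n1)" = "$min" ]; then
  echo "Docker ${ver} is new enough"
else
  echo "Docker ${ver} is older than ${min}"
fi
```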
This guide will help you quickly get started running Docker containers using
gVisor.

First, follow the [Installation guide][install].

If you use the `apt` repository or the `automated` install, then you can skip
the next section and proceed straight to running a container.

## Configuring Docker

First you will need to configure Docker to use `runsc` by adding a runtime entry
to your Docker configuration (e.g. `/etc/docker/daemon.json`). The easiest way
to do this is via the `runsc install` command. This will install a Docker
runtime named "runsc" by default.

```bash
sudo runsc install
```

You must restart the Docker daemon after installing the runtime. Typically this
is done via `systemd`:

```bash
sudo systemctl restart docker
```

## Running a container

Now run your container using the `runsc` runtime:

```bash
docker run --runtime=runsc --rm hello-world
```

You can also run a terminal to explore the container.

```bash
docker run --runtime=runsc --rm -it ubuntu /bin/bash
```

Many Docker options are compatible with gVisor; try them out. Here is an
example:

```bash
docker run --runtime=runsc --rm --link backend:database -v ~/bin:/tools:ro -p 8080:80 --cpus=0.5 -it busybox telnet towel.blinkenlights.nl
```

## Verify the runtime

You can verify that you are running in gVisor using the `dmesg` command.

```text
$ docker run --runtime=runsc -it ubuntu dmesg
[ 0.000000] Starting gVisor...
[ 0.354495] Daemonizing children...
[ 0.564053] Constructing home...
[ 0.976710] Preparing for the zombie uprising...
[ 1.299083] Creating process schedule...
[ 1.479987] Committing treasure map to memory...
[ 1.704109] Searching for socket adapter...
[ 1.748935] Generating random numbers by fair dice roll...
[ 2.059747] Digging up root...
[ 2.259327] Checking naughty and nice process list...
[ 2.610538] Rewriting operating system in Javascript...
[ 2.613217] Ready!
```

Note that this is easily replicated by an attacker, so applications should never
use `dmesg` to verify the runtime in a security-sensitive context.

## Options

You may also wish to install a runtime entry with different options. The `runsc
install` command can accept flags that will be passed to the runtime when it is
invoked by Docker. For example, to install a runtime with debugging enabled, run
the following:

```bash
sudo runsc install --runtime runsc-debug -- \
  --debug \
  --debug-log=/tmp/runsc-debug.log \
  --strace \
  --log-packets
```

Next, look at the different options available for gVisor: [platform][platforms],
[network][networking], [filesystem][filesystem].

[docker]: https://docs.docker.com/install/
[storage-driver]: https://docs.docker.com/engine/reference/commandline/dockerd/#daemon-storage-driver
[install]: /docs/user_guide/install/
[filesystem]: /docs/user_guide/filesystem/
[networking]: /docs/user_guide/networking/
[platforms]: /docs/user_guide/platforms/

diff --git a/g3doc/user_guide/quick_start/kubernetes.md b/g3doc/user_guide/quick_start/kubernetes.md
deleted file mode 100644
index 395cd4b71..000000000
--- a/g3doc/user_guide/quick_start/kubernetes.md
+++ /dev/null
@@ -1,34 +0,0 @@

# Kubernetes Quick Start

gVisor can be used to run Kubernetes pods and has several integration points
with Kubernetes.

## Using Minikube

gVisor can run sandboxed containers in a Kubernetes cluster with Minikube. After
the gVisor addon is enabled, pods that set the `gvisor`
[Runtime Class][runtimeclass] will execute with `runsc`. Follow
[these instructions][minikube] to enable the gVisor addon.

## Using Containerd

You can also set up Kubernetes nodes to run pods in gVisor using
[containerd][containerd] and the gVisor containerd shim. You can find
instructions in the [Containerd Quick Start][gvisor-containerd].

## Using GKE Sandbox

[GKE Sandbox][gke-sandbox] is available in [Google Kubernetes Engine][gke].
You just need to deploy a node pool with gVisor enabled in your cluster, and it
will run pods annotated with `runtimeClassName: gvisor` inside a gVisor sandbox
for you. [Here][wordpress-quick] is a quick example showing how to deploy a
WordPress site. You can view the full documentation [here][gke-sandbox-docs].

[containerd]: https://containerd.io/
[minikube]: https://github.com/kubernetes/minikube/blob/master/deploy/addons/gvisor/README.md
[gke]: https://cloud.google.com/kubernetes-engine/
[gke-sandbox]: https://cloud.google.com/kubernetes-engine/sandbox/
[gke-sandbox-docs]: https://cloud.google.com/kubernetes-engine/docs/how-to/sandbox-pods
[gvisor-containerd]: /docs/user_guide/containerd/quick_start/
[runtimeclass]: https://kubernetes.io/docs/concepts/containers/runtime-class/
[wordpress-quick]: /docs/tutorials/kubernetes/

diff --git a/g3doc/user_guide/quick_start/oci.md b/g3doc/user_guide/quick_start/oci.md
deleted file mode 100644
index e7768946b..000000000
--- a/g3doc/user_guide/quick_start/oci.md
+++ /dev/null
@@ -1,43 +0,0 @@

# OCI Quick Start

This guide will quickly get you started running your first gVisor sandbox
container using the runtime directly with the default platform.

First, follow the [Installation guide][install].

## Run an OCI compatible container

Now we will create an [OCI][oci] container bundle to run our container. First we
will create a root directory for our bundle.

```bash
mkdir bundle
cd bundle
```

Create a root file system for the container. We will use the Docker
`hello-world` image as the basis for our container.

```bash
mkdir rootfs
docker export $(docker create hello-world) | tar -xf - -C rootfs
```

Next, create a specification file called `config.json` that contains our
container specification. We tell the container to run the `/hello` program.

```bash
runsc spec -- /hello
```

Finally, run the container.
```bash
sudo runsc run hello
```

Next, try [using CNI to set up networking](../../../tutorials/cni/) or
[running gVisor using Docker](../docker/).

[oci]: https://opencontainers.org/
[install]: /docs/user_guide/install

diff --git a/g3doc/user_guide/tutorials/BUILD b/g3doc/user_guide/tutorials/BUILD
deleted file mode 100644
index a862c76f4..000000000
--- a/g3doc/user_guide/tutorials/BUILD
+++ /dev/null
@@ -1,55 +0,0 @@

load("//website:defs.bzl", "doc")

package(
    default_visibility = ["//website:__pkg__"],
    licenses = ["notice"],
)

doc(
    name = "docker",
    src = "docker.md",
    category = "User Guide",
    permalink = "/docs/tutorials/docker/",
    subcategory = "Tutorials",
    weight = "10",
)

doc(
    name = "docker_compose",
    src = "docker-compose.md",
    category = "User Guide",
    permalink = "/docs/tutorials/docker-compose/",
    subcategory = "Tutorials",
    weight = "20",
)

doc(
    name = "kubernetes",
    src = "kubernetes.md",
    category = "User Guide",
    data = [
        "add-node-pool.png",
        "node-pool-button.png",
    ],
    permalink = "/docs/tutorials/kubernetes/",
    subcategory = "Tutorials",
    weight = "30",
)

doc(
    name = "knative",
    src = "knative.md",
    category = "User Guide",
    permalink = "/docs/tutorials/knative/",
    subcategory = "Tutorials",
    weight = "40",
)

doc(
    name = "cni",
    src = "cni.md",
    category = "User Guide",
    permalink = "/docs/tutorials/cni/",
    subcategory = "Tutorials",
    weight = "50",
)

diff --git a/g3doc/user_guide/tutorials/add-node-pool.png b/g3doc/user_guide/tutorials/add-node-pool.png
deleted file mode 100644
index e4560359b..000000000
--- a/g3doc/user_guide/tutorials/add-node-pool.png
+++ /dev/null
Binary files differ

diff --git a/g3doc/user_guide/tutorials/cni.md b/g3doc/user_guide/tutorials/cni.md
deleted file mode 100644
index ee8d0ac92..000000000
--- a/g3doc/user_guide/tutorials/cni.md
+++ /dev/null
@@ -1,174 +0,0 @@

# Using CNI

This tutorial will show you how to set up networking for a
gVisor sandbox using the
[Container Networking Interface (CNI)](https://github.com/containernetworking/cni).

## Install CNI Plugins

First you will need to install the CNI plugins. CNI plugins are used to set up a
network namespace that `runsc` can use with the sandbox.

Start by creating the directories for CNI plugin binaries:

```
sudo mkdir -p /opt/cni/bin
```

Download the CNI plugins:

```
wget https://github.com/containernetworking/plugins/releases/download/v0.8.3/cni-plugins-linux-amd64-v0.8.3.tgz
```

Next, unpack the plugins into the CNI binary directory:

```
sudo tar -xvf cni-plugins-linux-amd64-v0.8.3.tgz -C /opt/cni/bin/
```

## Configure CNI Plugins

This section will show you how to configure CNI plugins. This tutorial will use
the "bridge" and "loopback" plugins, which create the necessary bridge and
loopback devices in our network namespace. However, you should be able to use
any CNI compatible plugin to set up networking for gVisor sandboxes.

The bridge plugin configuration specifies the IP address subnet range for IP
addresses that will be assigned to sandboxes, as well as the network routing
configuration. This tutorial will assign IP addresses from the `10.22.0.0/16`
range and allow all outbound traffic; however, you can modify this configuration
to suit your use case.
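To make the range concrete: on an octet-aligned prefix like `10.22.0.0/16`, membership is just a prefix match on the first two octets (a pure-shell sketch; the CNI `host-local` IPAM plugin does the real allocation):

```shell
# A /16 on an octet boundary means the first two octets must match.
ip="10.22.3.7"
case "$ip" in
  10.22.*) echo "$ip is inside 10.22.0.0/16" ;;
  *)       echo "$ip is outside 10.22.0.0/16" ;;
esac
```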
Create the bridge and loopback plugin configurations:

```
sudo mkdir -p /etc/cni/net.d

sudo sh -c 'cat > /etc/cni/net.d/10-bridge.conf << EOF
{
  "cniVersion": "0.3.1",
  "name": "mynet",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.22.0.0/16",
    "routes": [
      { "dst": "0.0.0.0/0" }
    ]
  }
}
EOF'

sudo sh -c 'cat > /etc/cni/net.d/99-loopback.conf << EOF
{
  "cniVersion": "0.3.1",
  "name": "lo",
  "type": "loopback"
}
EOF'
```

## Create a Network Namespace

For each gVisor sandbox you will create a network namespace and configure it
using CNI. First, create a random network namespace name and then create the
namespace.

The network namespace path will then be `/var/run/netns/${CNI_CONTAINERID}`.

```
export CNI_PATH=/opt/cni/bin
export CNI_CONTAINERID=$(printf '%x%x%x%x' $RANDOM $RANDOM $RANDOM $RANDOM)
export CNI_COMMAND=ADD
export CNI_NETNS=/var/run/netns/${CNI_CONTAINERID}

sudo ip netns add ${CNI_CONTAINERID}
```

Next, run the bridge and loopback plugins to apply the configuration that was
created earlier to the namespace. Each plugin outputs some JSON indicating the
results of executing the plugin. For example, the bridge plugin's response
includes the IP address assigned to the ethernet device created in the network
namespace. Take note of the IP address for use later.

```
export CNI_IFNAME="eth0"
sudo -E /opt/cni/bin/bridge < /etc/cni/net.d/10-bridge.conf
export CNI_IFNAME="lo"
sudo -E /opt/cni/bin/loopback < /etc/cni/net.d/99-loopback.conf
```

Get the IP address assigned to our sandbox:

```
POD_IP=$(sudo ip netns exec ${CNI_CONTAINERID} ip -4 addr show eth0 | grep -oP '(?<=inet\s)\d+(\.\d+){3}')
```

## Create the OCI Bundle

Now that our network namespace is created and configured, we can create the OCI
bundle for our container.
As part of the bundle's `config.json`, we will specify
that the container use the network namespace that we created.

The container will run a simple python webserver that we will be able to connect
to via the IP address assigned to it by the bridge CNI plugin.

Create the bundle and root filesystem directories:

```
sudo mkdir -p bundle
cd bundle
sudo mkdir rootfs
sudo docker export $(docker create python) | sudo tar --same-owner -pxf - -C rootfs
sudo mkdir -p rootfs/var/www/html
sudo sh -c 'echo "Hello World!" > rootfs/var/www/html/index.html'
```

Next create the `config.json` specifying the network namespace.

```
sudo runsc spec \
    --cwd /var/www/html \
    --netns /var/run/netns/${CNI_CONTAINERID} \
    -- python -m http.server
```

## Run the Container

Now we can run and connect to the webserver. Run the container in gVisor. Use
the same ID used for the network namespace to be consistent:

```
sudo runsc run -detach ${CNI_CONTAINERID}
```

Connect to the server via the sandbox's IP address:

```
curl http://${POD_IP}:8000/
```

You should see the server returning `Hello World!`.

## Cleanup

After you are finished running the container, you can clean up the network
namespace.

```
sudo runsc kill ${CNI_CONTAINERID}
sudo runsc delete ${CNI_CONTAINERID}

export CNI_COMMAND=DEL

export CNI_IFNAME="lo"
sudo -E /opt/cni/bin/loopback < /etc/cni/net.d/99-loopback.conf
export CNI_IFNAME="eth0"
sudo -E /opt/cni/bin/bridge < /etc/cni/net.d/10-bridge.conf

sudo ip netns delete ${CNI_CONTAINERID}
```

diff --git a/g3doc/user_guide/tutorials/docker-compose.md b/g3doc/user_guide/tutorials/docker-compose.md
deleted file mode 100644
index 3284231f8..000000000
--- a/g3doc/user_guide/tutorials/docker-compose.md
+++ /dev/null
@@ -1,100 +0,0 @@

# WordPress with Docker Compose

This page shows you how to deploy a sample [WordPress][wordpress] site using
[Docker Compose][docker-compose].
### Before you begin

[Follow these instructions][docker-install] to install runsc with Docker. This
document assumes that Docker and Docker Compose are installed and the runtime
name chosen for gVisor is `runsc`.

### Configuration

We'll start by creating the `docker-compose.yaml` file to specify our services.
We will specify two services, a `wordpress` service for the WordPress Apache
server, and a `db` service for MySQL. We will configure WordPress to connect to
MySQL via the `db` service host name.

> **Note:** Docker Compose uses its own network by default and allows services
> to communicate using their service name. Docker Compose does this by setting
> up a DNS server at IP address 127.0.0.11 and configuring containers to use it
> via [resolv.conf][resolv.conf]. This IP is not addressable inside a gVisor
> sandbox, so it's important that we set the DNS IP address to the alternative
> `8.8.8.8` and use a network that allows routing to it. See
> [Networking in Compose][compose-networking] for more details.

> **Note:** The `runtime` field was removed from services in the 3.x version of
> the API in versions of docker-compose < 1.27.0. You will need to write your
> `docker-compose.yaml` file using the 2.x format or use docker-compose >=
> 1.27.0. See this [issue](https://github.com/docker/compose/issues/6239) for
> more details.

```yaml
version: '2.3'

services:
  db:
    image: mysql:5.7
    volumes:
      - db_data:/var/lib/mysql
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: somewordpress
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: wordpress
    # All services must be on the same network to communicate.
    network_mode: "bridge"

  wordpress:
    depends_on:
      - db
    # When using the "bridge" network, specify links.
    links:
      - db
    image: wordpress:latest
    ports:
      - "8080:80"
    restart: always
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: wordpress
      WORDPRESS_DB_NAME: wordpress
    # Specify the dns address if needed.
    dns:
      - 8.8.8.8
    # All services must be on the same network to communicate.
    network_mode: "bridge"
    # Specify the runtime used by Docker. Must be set up in
    # /etc/docker/daemon.json.
    runtime: "runsc"

volumes:
  db_data: {}
```

Once you have a `docker-compose.yaml` in the current directory, you can start
the containers:

```bash
docker-compose up
```

Once the containers have started, you can access WordPress at
http://localhost:8080.

Congrats! You now have a working WordPress site up and running using Docker
Compose.

### What's next

Learn how to deploy [WordPress with Kubernetes][wordpress-k8s].

[docker-compose]: https://docs.docker.com/compose/
[docker-install]: ../quick_start/docker.md
[wordpress]: https://wordpress.com/
[resolv.conf]: https://man7.org/linux/man-pages/man5/resolv.conf.5.html
[wordpress-k8s]: kubernetes.md
[compose-networking]: https://docs.docker.com/compose/networking/

diff --git a/g3doc/user_guide/tutorials/docker.md b/g3doc/user_guide/tutorials/docker.md
deleted file mode 100644
index 9ca01da2a..000000000
--- a/g3doc/user_guide/tutorials/docker.md
+++ /dev/null
@@ -1,70 +0,0 @@

# WordPress with Docker

This page shows you how to deploy a sample [WordPress][wordpress] site using
[Docker][docker].

### Before you begin

[Follow these instructions][docker-install] to install runsc with Docker. This
document assumes that the runtime name chosen is `runsc`.

### Running WordPress

Now, let's deploy a WordPress site using Docker. A WordPress site requires two
containers: a web server in the frontend and a MySQL database in the backend.
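If you don't already have a password in mind for the variables below, one can be generated from `/dev/urandom` (a sketch; any password generator works):

```shell
# Generate a 20-character alphanumeric password, e.g. for MYSQL_PASSWORD.
tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 20; echo
```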
First, let's define a few environment variables that are shared between both
containers:

```bash
export MYSQL_PASSWORD=${YOUR_SECRET_PASSWORD_HERE?}
export MYSQL_DB=wordpress
export MYSQL_USER=wordpress
```

Next, let's start the database container running MySQL and wait until the
database is initialized:

```bash
docker run --runtime=runsc --name mysql -d \
    -e MYSQL_RANDOM_ROOT_PASSWORD=1 \
    -e MYSQL_PASSWORD="${MYSQL_PASSWORD}" \
    -e MYSQL_DATABASE="${MYSQL_DB}" \
    -e MYSQL_USER="${MYSQL_USER}" \
    mysql:5.7

# Wait until this message appears in the log.
docker logs mysql |& grep 'port: 3306 MySQL Community Server (GPL)'
```

Once the database is running, you can start the WordPress frontend. We use the
`--link` option to connect the frontend to the database, and expose WordPress
on port 8080 of the localhost:

```bash
docker run --runtime=runsc --name wordpress -d \
    --link mysql:mysql \
    -p 8080:80 \
    -e WORDPRESS_DB_HOST=mysql \
    -e WORDPRESS_DB_USER="${MYSQL_USER}" \
    -e WORDPRESS_DB_PASSWORD="${MYSQL_PASSWORD}" \
    -e WORDPRESS_DB_NAME="${MYSQL_DB}" \
    -e WORDPRESS_TABLE_PREFIX=wp_ \
    wordpress
```

Now you can access the WordPress website by pointing your favorite browser to
<http://localhost:8080>.

Congratulations! You have just deployed a WordPress site using Docker.

### What's next

Learn how to deploy WordPress with [Kubernetes][wordpress-k8s] or
[Docker Compose][wordpress-compose].

[docker]: https://www.docker.com/
[docker-install]: ../quick_start/docker.md
[wordpress]: https://wordpress.com/
[wordpress-k8s]: kubernetes.md
[wordpress-compose]: docker-compose.md

diff --git a/g3doc/user_guide/tutorials/knative.md b/g3doc/user_guide/tutorials/knative.md
deleted file mode 100644
index 3f5207fcc..000000000
--- a/g3doc/user_guide/tutorials/knative.md
+++ /dev/null
@@ -1,88 +0,0 @@

# Knative Services

[Knative](https://knative.dev/) is a platform for running serverless workloads
on Kubernetes.
This guide will show you how to run basic Knative workloads in gVisor.

## Prerequisites

This guide assumes you have a cluster that is capable of running gVisor
workloads. This could be a
[GKE Sandbox](https://cloud.google.com/kubernetes-engine/sandbox/) enabled
cluster on Google Cloud Platform or one you have set up yourself using the
[containerd Quick Start](https://gvisor.dev/docs/user_guide/containerd/quick_start/).

This guide also assumes you have Knative installed using
[Istio](https://istio.io/) as the network layer. You can follow the
[Knative installation guide](https://knative.dev/docs/install/install-serving-with-yaml/)
to install Knative.

## Enable the RuntimeClass feature flag

Knative allows the use of various parameters on Pods via
[feature flags](https://knative.dev/docs/serving/feature-flags/). We will enable
the
[runtimeClassName](https://knative.dev/docs/serving/feature-flags/#kubernetes-runtime-class)
feature flag to enable the use of the Kubernetes
[Runtime Class](https://kubernetes.io/docs/concepts/containers/runtime-class/).

Edit the feature flags ConfigMap:

```bash
kubectl edit configmap config-features -n knative-serving
```

Add `kubernetes.podspec-runtimeclassname: enabled` to the `data` field. Once
you are finished, the ConfigMap will look something like this (minus all the
system fields):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-features
  namespace: knative-serving
  labels:
    serving.knative.dev/release: v0.22.0
data:
  kubernetes.podspec-runtimeclassname: enabled
```

## Deploy the Service

After you have set the Runtime Class feature flag, you can create Knative
services that specify a `runtimeClassName` in the spec.
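The key addition is the `runtimeClassName` field inside the revision template's
pod spec. In isolation, that part of a Service manifest looks like this (a
minimal sketch; the service and image names are placeholders, not from this
guide):

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: my-service                  # hypothetical service name
spec:
  template:
    spec:
      runtimeClassName: gvisor      # pods for this revision run under gVisor
      containers:
      - image: example.com/my-image # hypothetical image
```

The complete `helloworld-go` example below deploys a full Service of this
shape.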
-

```bash
cat <<EOF | kubectl apply -f -
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: helloworld-go
spec:
  template:
    spec:
      runtimeClassName: gvisor
      containers:
      - image: gcr.io/knative-samples/helloworld-go
        env:
        - name: TARGET
          value: "gVisor User"
EOF
```

You can see the pods running and their Runtime Class:

```bash
kubectl get pods -o=custom-columns='NAME:.metadata.name,RUNTIME CLASS:.spec.runtimeClassName,STATUS:.status.phase'
```

The output should look something like the following. Note that your service
might scale to zero; if you access it via its URL, you should get a new Pod.

```
NAME                                              RUNTIME CLASS   STATUS
helloworld-go-00002-deployment-646c87b7f5-5v68s   gvisor          Running
```

Congrats! Your Knative service is now running in gVisor!

diff --git a/g3doc/user_guide/tutorials/kubernetes.md b/g3doc/user_guide/tutorials/kubernetes.md
deleted file mode 100644
index 1ec6e71e9..000000000
--- a/g3doc/user_guide/tutorials/kubernetes.md
+++ /dev/null
@@ -1,236 +0,0 @@
# WordPress with Kubernetes

This page shows you how to deploy a sample [WordPress][wordpress] site using
[GKE Sandbox][gke-sandbox].

### Before you begin

Take the following steps to enable the Kubernetes Engine API:

1.  Visit the [Kubernetes Engine page][project-selector] in the Google Cloud
    Platform Console.
1.  Create or select a project.

### Creating a node pool with gVisor enabled

Create a node pool inside your cluster with the option `--sandbox type=gvisor`
added to the command, like below:

```bash
gcloud beta container node-pools create sandbox-pool --cluster=${CLUSTER_NAME} --image-type=cos_containerd --sandbox type=gvisor
```

If you prefer to use the console, select your cluster and select the **ADD NODE
POOL** button:

![+ ADD NODE POOL](node-pool-button.png)

Then select **Containerd** as the **Image type** and check the **Enable sandbox
with gVisor** option.
Select other options as you like:

![+ NODE POOL](add-node-pool.png)

### Check that gVisor is enabled

The gvisor RuntimeClass is instantiated during node creation. You can check for
the existence of the gvisor RuntimeClass using the following command:

```bash
kubectl get runtimeclasses
```

### WordPress deployment

Now, let's deploy a WordPress site using GKE Sandbox. A WordPress site requires
two pods: a web server in the frontend and a MySQL database in the backend.
Both applications use PersistentVolumes to store the site data. In addition,
they use a Secret to share the MySQL password between them.

First, let's download the deployment configuration files so that we can add the
runtime class to them:

```bash
curl -LO https://k8s.io/examples/application/wordpress/wordpress-deployment.yaml
curl -LO https://k8s.io/examples/application/wordpress/mysql-deployment.yaml
```

Add **spec.template.spec.runtimeClassName** set to **gvisor** to both files, as
shown below:

**wordpress-deployment.yaml:**

```yaml
apiVersion: v1
kind: Service
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  ports:
  - port: 80
  selector:
    app: wordpress
    tier: frontend
  type: LoadBalancer
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wp-pv-claim
  labels:
    app: wordpress
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: frontend
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: frontend
    spec:
      runtimeClassName: gvisor # ADD THIS LINE
      containers:
      - image: wordpress:4.8-apache
        name: wordpress
        env:
        - name: WORDPRESS_DB_HOST
          value: wordpress-mysql
        - name: WORDPRESS_DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-pass
              key: password
        ports:
        - containerPort: 80
          name: wordpress
        volumeMounts:
        - name: wordpress-persistent-storage
          mountPath: /var/www/html
      volumes:
      - name: wordpress-persistent-storage
        persistentVolumeClaim:
          claimName: wp-pv-claim
```

**mysql-deployment.yaml:**

```yaml
apiVersion: v1
kind: Service
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  ports:
  - port: 3306
  selector:
    app: wordpress
    tier: mysql
  clusterIP: None
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
  labels:
    app: wordpress
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: mysql
    spec:
      runtimeClassName: gvisor # ADD THIS LINE
      containers:
      - image: mysql:5.6
        name: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-pass
              key: password
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pv-claim
```

Note that, apart from `runtimeClassName: gvisor`, nothing else about the
Deployments has changed.

You are now ready to deploy the entire application.
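Before applying anything, it can be worth double-checking that both manifests
picked up the edit. A small helper function (hypothetical, not part of the
original tutorial) does the check with `grep`:

```shell
# Prints each matching line with its file name and line number; prints nothing
# (and returns non-zero) if a file is missing the runtimeClassName edit.
check_runtime_class() {
  grep -Hn 'runtimeClassName: gvisor' "$@"
}

# Usage, from the directory where the manifests were downloaded:
#   check_runtime_class wordpress-deployment.yaml mysql-deployment.yaml
```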
First, create a Secret to store the MySQL password, then *apply* both
deployments:

```bash
kubectl create secret generic mysql-pass --from-literal=password=${YOUR_SECRET_PASSWORD_HERE?}
kubectl apply -f mysql-deployment.yaml
kubectl apply -f wordpress-deployment.yaml
```

Wait for the deployments to be ready and an external IP to be assigned to the
WordPress service:

```bash
watch kubectl get service wordpress
```

Now, paste the service's `EXTERNAL-IP` from above into your favorite browser to
view and configure your new WordPress site.

Congratulations! You have just deployed a WordPress site using GKE Sandbox.

### What's next

To learn more about GKE Sandbox and how to run your deployment securely, take a
look at the [documentation][gke-sandbox-docs].

[gke-sandbox-docs]: https://cloud.google.com/kubernetes-engine/docs/how-to/sandbox-pods
[gke-sandbox]: https://cloud.google.com/kubernetes-engine/sandbox/
[project-selector]: https://console.cloud.google.com/projectselector/kubernetes
[wordpress]: https://wordpress.com/

diff --git a/g3doc/user_guide/tutorials/node-pool-button.png b/g3doc/user_guide/tutorials/node-pool-button.png
deleted file mode 100644
index bee0c11dc..000000000
--- a/g3doc/user_guide/tutorials/node-pool-button.png
+++ /dev/null
Binary files differ