Cluster holds the created cluster and its credential file.
Cluster name.
A kubectl config file that can be used to connect to the cluster.
The kubectl context name within `kubectl_config` to use to connect to the cluster. If empty, the default context will be used.
The GCP project ID that the cluster is in. Optional; only used for cluster management tasks (e.g. deletion). Leave empty for non-GCP clusters.
The GCP location that the cluster is created in. Optional; only used for cluster management tasks (e.g. deletion). Leave empty for non-GCP clusters.
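A minimal `.proto` sketch of the Cluster message as described above. Only `kubectl_config` is named in the text; the other field names and all field numbers are assumptions for illustration, not the actual schema.

```proto
syntax = "proto3";

// Cluster holds the created cluster and its credential file.
message Cluster {
  // Cluster name.
  string name = 1;
  // A kubectl config file that can be used to connect to the cluster.
  bytes kubectl_config = 2;
  // The kubectl context name within `kubectl_config`; empty means the
  // default context. (Field name assumed.)
  string kubectl_context = 3;
  // Optional GCP project ID; empty for non-GCP clusters. (Field name assumed.)
  string gcp_project = 4;
  // Optional GCP location; empty for non-GCP clusters. (Field name assumed.)
  string gcp_location = 5;
}
```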
NodePool represents a set of Kubernetes nodes.
Opaque implementation-specific nodepool config. In GKE, this is a google.container.v1.NodePool.
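An opaque, implementation-specific payload like this is commonly carried as a `google.protobuf.Any`, which can wrap an arbitrary message (such as `google.container.v1.NodePool` on GKE). A sketch under that assumption, with assumed field names:

```proto
syntax = "proto3";

import "google/protobuf/any.proto";

// NodePool represents a set of Kubernetes nodes.
message NodePool {
  // Opaque implementation-specific nodepool config.
  // In GKE, this carries a google.container.v1.NodePool.
  google.protobuf.Any config = 1;
}
```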
TestRange contains the created clusters. This is an output from the setup phase and an input for the test phase.
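Since TestRange is the setup phase's output holding the created clusters, it plausibly just aggregates Cluster messages; a sketch with an assumed field name:

```proto
// TestRange contains the created clusters: output of the setup phase,
// input to the test phase. (Field name assumed.)
message TestRange {
  repeated Cluster clusters = 1;
}
```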
TestRangeSpec is a description of the test environment to be created. It is the input of the setup step which creates the required clusters.
Name for clusters. This name is used as a template for all created clusters: e.g. "my-cluster" yields clusters named "my-cluster-0", "my-cluster-1", and so on, with an ascending index. Cluster names are limited to 40 characters, so names will be truncated to fit this constraint.
A nodepool built with the runtime under test.
clients is another nodepool in the cluster, used to drive load against test_runtime. For example, in most client-server tests, the runtime under test runs the server and this nodepool runs the clients. Clients always use the default runtime, runc.
tertiary is a third nodepool in the cluster, used by some benchmarks that need it for isolation. For example, the WordPress benchmark needs to run the MySQL database on a separate machine in order to force network traffic to flow across the host's non-local network stack for a fair comparison between runsc/runc. The tertiary nodepool may use gVisor or runc as a runtime, depending on user configuration.
versions are the GKE patch versions to use for the clusters. The number of clusters created will be num(versions) * replicas.
zones are the availability zones in which to create clusters. Clusters will be created across the given zones in a round-robin fashion until the requested number of clusters has been created. This is provided as a way to expand quota. Note: please check that the given zones actually have the required resources available (e.g. ARM machines are not available in all zones).
project is the project under which clusters should be created.
Service account to use to create clusters.
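Pulling the fields above together, a sketch of TestRangeSpec. The names `test_runtime`, `clients`, `tertiary`, `versions`, `zones`, `project`, and `replicas` appear in the text; the remaining names and all field numbers are assumptions:

```proto
syntax = "proto3";

// TestRangeSpec describes the test environment to be created.
// It is the input of the setup step which creates the required clusters.
message TestRangeSpec {
  // Template name for created clusters (e.g. "my-cluster" -> "my-cluster-0").
  string name = 1;
  // Nodepool built with the runtime under test.
  NodePool test_runtime = 2;
  // Nodepool that runs clients against test_runtime (always runc).
  NodePool clients = 3;
  // Optional third nodepool for benchmarks that need isolation
  // (e.g. the WordPress benchmark's MySQL database).
  NodePool tertiary = 4;
  // GKE patch versions; clusters created = len(versions) * replicas.
  repeated string versions = 5;
  int32 replicas = 6;
  // Availability zones, filled in a round-robin fashion.
  repeated string zones = 7;
  // GCP project under which to create clusters.
  string project = 8;
  // Service account used to create clusters. (Field name assumed.)
  string service_account = 9;
}
```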