Metrics

The Agones controller exposes metrics via OpenCensus. OpenCensus is a single distribution of libraries that collect metrics and distributed traces from your services. We only use it for metrics, but it will allow us to support multiple exporters in the future.

We chose to start with Prometheus, as it is the most popular option with Kubernetes, but Agones is also compatible with Stackdriver. If you need another exporter, check the list of supported exporters; it should be pretty straightforward to register a new one. (GitHub PRs are more than welcome.)

We plan to support multiple exporters in the future via environment variables and Helm flags.

Backend integrations

Prometheus

If you are running a Prometheus instance, you just need to ensure that metrics and Kubernetes service discovery are enabled (Helm chart values agones.metrics.enabled and agones.metrics.prometheusServiceDiscovery). This will automatically add the annotations required by Prometheus to discover Agones metrics and start collecting them, as in the example below.
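As a sketch, assuming Agones is installed from its Helm chart into the agones-system namespace (the release and repository names here are illustrative), enabling both values looks like:

helm upgrade --install agones agones/agones --namespace agones-system \
    --set agones.metrics.enabled=true \
    --set agones.metrics.prometheusServiceDiscovery=true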

Prometheus Operator

If you have the Prometheus Operator installed in your cluster, make sure to add a ServiceMonitor to discover Agones metrics as shown below:

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: agones
  labels:
    app: agones
spec:
  selector:
    matchLabels:
      stable.agones.dev/role: controller
  endpoints:
  - port: web

Finally, include that ServiceMonitor in your Prometheus instance CRD. This is usually done by adding a label to the ServiceMonitor above that is matched by the Prometheus instance of your choice, as shown below.
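For instance, a minimal sketch of a Prometheus resource whose serviceMonitorSelector matches the app: agones label on the ServiceMonitor above (the metadata and service account names here are illustrative) could look like:

apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: prometheus
spec:
  # this service account must already exist with the RBAC rules Prometheus needs
  serviceAccountName: prometheus
  serviceMonitorSelector:
    matchLabels:
      app: agones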

Stackdriver

We don’t yet support the OpenCensus Stackdriver exporter, but you can still use the Prometheus Stackdriver integration by following these instructions. The annotations required by this integration can be activated by setting the Helm chart value agones.metrics.prometheusServiceDiscovery to true (the default).

Metrics available

| Name | Description | Type |
|------|-------------|------|
| agones_gameservers_count | The number of gameservers per fleet and status | gauge |
| agones_fleet_allocations_count | The number of fleet allocations per fleet | gauge |
| agones_gameservers_total | The total of gameservers per fleet and status | counter |
| agones_fleet_allocations_total | The total of fleet allocations per fleet | counter |
| agones_fleets_replicas_count | The number of replicas per fleet (total, desired, ready, allocated) | gauge |
| agones_fleet_autoscalers_able_to_scale | The fleet autoscaler can access the fleet to scale | gauge |
| agones_fleet_autoscalers_buffer_limits | The limits of buffer-based fleet autoscalers (min, max) | gauge |
| agones_fleet_autoscalers_buffer_size | The buffer size of fleet autoscalers (count or percentage) | gauge |
| agones_fleet_autoscalers_current_replicas_count | The current replicas count as seen by autoscalers | gauge |
| agones_fleet_autoscalers_desired_replicas_count | The desired replicas count as seen by autoscalers | gauge |
| agones_fleet_autoscalers_limited | The fleet autoscaler is capped (1) | gauge |
| agones_gameservers_node_count | The distribution of gameservers per node | histogram |
| agones_nodes_count | The count of nodes empty and with gameservers | gauge |
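For example, once metrics are flowing, you could graph the number of ready game servers per fleet with a query along these lines (the exact label names, such as type and fleet_name, may vary by Agones version, so treat them as assumptions):

sum(agones_gameservers_count{type="Ready"}) by (fleet_name)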

Dashboard

Grafana Dashboards

We provide a set of useful Grafana dashboards to monitor the Agones workload; they are located under the grafana folder:

  • Agones Autoscalers allows you to monitor your current autoscaler replicas request as well as fleet replicas allocation and readiness statuses. You can only select one autoscaler at a time using the provided dropdown.

  • Agones GameServers displays your current game servers workload status (allocations, game server statuses, fleet replicas) with optional fleet name filtering.

Dashboard screenshots:

[Screenshot: Grafana autoscalers dashboard]

[Screenshot: Grafana controller dashboard]

You can import our dashboards by copying the JSON content from each config map into your own instance of Grafana (+ > Create > Import > Or paste JSON) or by following the installation guide.
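If you want to extract the JSON by hand, something like the following should work (the config map name is hypothetical; list the config maps installed from the grafana folder first to find the real names):

kubectl get configmaps --namespace metrics
kubectl get configmap <dashboard-config-map> --namespace metrics \
    -o jsonpath='{.data.*}' > dashboard.json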

Installation

When operating a live multiplayer game you will need to observe performance, resource usage, and availability to learn more about your system. This guide explains how to set up Prometheus and Grafana in your own Kubernetes cluster to monitor your Agones workload.

Before attempting this guide you should make sure you have kubectl and helm installed and configured to reach your Kubernetes cluster.

Prometheus installation

Prometheus is an open source monitoring solution. We will use it to store Agones controller metrics and query back the data.

Let’s install Prometheus using the stable Helm repository.

helm upgrade --install --wait prom stable/prometheus --namespace metrics \
    --set server.global.scrape_interval=30s \
    --set server.persistentVolume.enabled=true \
    --set server.persistentVolume.size=64Gi \
    -f ./build/prometheus.yaml

You can also run our Makefile target make setup-prometheus, or make kind-setup-prometheus and make minikube-setup-prometheus for Kind and Minikube.

For resiliency it is recommended to run Prometheus on a dedicated node, separate from the nodes where game servers are scheduled. If you use the above command with our prometheus.yaml to set up Prometheus, it will schedule Prometheus pods on nodes tainted with stable.agones.dev/agones-metrics=true:NoExecute and labeled with stable.agones.dev/agones-metrics=true, if available.

As an example, to set up a dedicated node pool for Prometheus on GKE, run the following command before installing Prometheus. Alternatively, you can taint and label nodes manually.

gcloud container node-pools create agones-metrics --cluster=... --zone=... \
  --node-taints stable.agones.dev/agones-metrics=true:NoExecute \
  --node-labels stable.agones.dev/agones-metrics=true \
  --num-nodes=1

By default, we disable the push gateway (we don’t need it for Agones) and the other exporters.

The Helm chart supports nodeSelector, affinity, and tolerations; you can use them to schedule the Prometheus deployment on isolated nodes and keep your game server workload homogeneous, as sketched below.
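As an illustration, our build/prometheus.yaml likely combines these knobs roughly as follows (a sketch against the stable/prometheus chart values, not a verbatim copy of the file):

# disable components we don't need for Agones
alertmanager:
  enabled: false
pushgateway:
  enabled: false
nodeExporter:
  enabled: false
kubeStateMetrics:
  enabled: false
server:
  # schedule the server on the dedicated, tainted metrics nodes
  nodeSelector:
    stable.agones.dev/agones-metrics: "true"
  tolerations:
  - key: stable.agones.dev/agones-metrics
    operator: Equal
    value: "true"
    effect: NoExecute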

This will install a Prometheus server in your current cluster with a Persistent Volume Claim (deactivated for Minikube and Kind) for storing and querying time series, and it will automatically start collecting metrics from the Agones controller.

Finally, to access the Prometheus metrics, rules, and alerts explorer, use

kubectl port-forward deployments/prom-prometheus-server 9090 -n metrics

Again, you can use our Makefile target make prometheus-portforward. (For Kind and Minikube, use their specific targets make kind-prometheus-portforward and make minikube-prometheus-portforward.)

Now you can access the Prometheus dashboard at http://localhost:9090.

On the landing page you can start exploring metrics by creating queries. You can also verify which targets Prometheus currently monitors (header Status > Targets); you should see the Agones controller pod in the kubernetes-pods section.
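As a starting point, a rate over one of the counters, for example game servers created per fleet over the last five minutes, could be written like this (label names are assumptions, as above):

sum(rate(agones_gameservers_total[5m])) by (fleet_name)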

Metrics will first be registered when you start using Agones.

Now let’s install some Grafana dashboards.

Grafana installation

Grafana is an open source time series analytics platform which supports Prometheus as a data source. It also lets us easily import pre-built dashboards.

First, we will install the Agones dashboards as config maps in our cluster.

kubectl apply -f ../build/grafana/

Now we can install the Grafana chart from the stable repository. (Replace <your-admin-password> with the admin password of your choice.)

helm install --wait --name grafana stable/grafana --namespace metrics \
  --set adminPassword=<your-admin-password> -f ../build/grafana.yaml

This will install Grafana with our prepopulated dashboards and the Prometheus data source previously installed.
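For reference, the data source part of build/grafana.yaml presumably resembles the following sketch against the stable/grafana chart values (the Prometheus service URL assumes the prom release name and metrics namespace used earlier):

datasources:
  datasources.yaml:
    apiVersion: 1
    datasources:
    # points Grafana at the Prometheus server installed above
    - name: Prometheus
      type: prometheus
      url: http://prom-prometheus-server.metrics.svc
      access: proxy
      isDefault: true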

You can also use our Makefile targets (setup-grafana, minikube-setup-grafana, and kind-setup-grafana).

Finally, to access the dashboards, run

kubectl port-forward deployments/grafana 3000 -n metrics

Open a web browser to http://localhost:3000; you should see the Agones dashboards after logging in as admin.

You can also use the Makefile targets make grafana-portforward, make kind-grafana-portforward, and make minikube-grafana-portforward.