GitLab Runners

GitLabHost can provide auto-scaling runners that are designed for secure instance-wide use. The architecture is based on the GitLab.com Cloud runners, so most standard use cases for GitLab CI will work.

Architecture and usage

GitLabHost runners are based on the upstream Fleeting technology, which creates a fresh virtual machine for each job handled by the runner. The VMs use a pre-built disk image with all the required software pre-installed.

The runners behave just like the Docker executor, apart from the isolation provided by the Fleeting architecture. Users can use any Linux-based Docker image, whether built themselves or pulled from a public registry such as Docker Hub, GitHub Container Registry, or the GitLab.com Container Registry.

The runners can also be used to build new images and push them to a private registry, such as the one included in GitLab. All the regular caveats regarding authentication apply.

Because of the isolation, you cannot mount additional volumes or leave files in place for the next job. Use the Artifacts feature of GitLab CI to exchange files between job stages within the same pipeline. We also recommend proper use of the caching feature to speed up jobs; do not use the cache to share unique files between jobs.
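A minimal sketch of passing files between stages with artifacts while using the cache only to speed up dependency installation (job names, image, and paths are illustrative):

```yaml
stages:
  - build
  - test

build:
  stage: build
  image: node:20
  # Cache speeds up repeated installs; do not rely on it being present.
  cache:
    key: "$CI_COMMIT_REF_SLUG"
    paths:
      - node_modules/
  script:
    - npm ci
    - npm run build
  # Artifacts are the supported way to hand files to later stages.
  artifacts:
    paths:
      - dist/

test:
  stage: test
  image: node:20
  script:
    # dist/ from the build job is restored automatically here.
    - npm test
```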

GitLabHost will provide runners in a variety of sizes. Each size has a single runner registration inside GitLab that acts as the parent for all auto-scaled runners of that size. You can use tags as you wish to allocate jobs for the most efficient use of resources. We recommend configuring all but the smallest runner to only accept jobs carrying their size tags.
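As a sketch of this tagging scheme, assuming size tags named small and large are registered (check the runner registrations on your instance for the actual tag names):

```yaml
lint:
  # Light job: runs on the smallest size, which also accepts untagged jobs.
  tags:
    - small
  script:
    - make lint

integration-tests:
  # Heavy job: routed to a larger runner via its size tag.
  tags:
    - large
  script:
    - make integration-tests
```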

Nonstandard features

In addition to the regular features, GitLabHost provides some additions that help with common issues.

Pre-set options

We usually enable privileged mode for Docker so users can nest containers. The /dev/kvm device is also exposed to containers, so you can utilize nested hardware virtualization to speed up (for example) Android Emulators.

These features are considered safe to use by GitLabHost because the VM that runs the CI job is not shared, so there is no risk of pollution between jobs or projects.
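A sketch of both pre-set options in use; the image names and versions are illustrative, and the Android image is hypothetical:

```yaml
docker-build:
  image: docker:27
  services:
    # Nested containers work because privileged mode is enabled.
    - docker:27-dind
  script:
    - docker build -t my-image .

android-emulator-test:
  # Hypothetical image containing the Android SDK and emulator.
  image: my-android-image
  script:
    # /dev/kvm is exposed, so hardware acceleration is available.
    - test -w /dev/kvm
    - emulator -avd test -no-window -accel on &
```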

Docker Hub cache

For production clusters, GitLabHost provisions a Docker Hub cache inside the GitLab cluster to ensure users do not hit the public rate limits of Docker Hub. This feature only works inside GitLabHost-provided runners and is not available through the Dependency Proxy. We do recommend using the Dependency Proxy as much as possible, though.
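A hedged example of pulling through the Dependency Proxy using GitLab's predefined CI_DEPENDENCY_PROXY_GROUP_IMAGE_PREFIX variable; this assumes the Dependency Proxy is enabled for the project's group:

```yaml
test:
  # Pull alpine through the group's Dependency Proxy instead of
  # directly from Docker Hub.
  image: ${CI_DEPENDENCY_PROXY_GROUP_IMAGE_PREFIX}/alpine:3.20
  script:
    - cat /etc/alpine-release
```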

We will install an authentication token for a paid Docker Hub plan that grants no additional access compared to regular public usage, but significantly increases the pull limits. This is required because all runners in an autoscaling hive share the same outgoing IP addresses, against which Docker Hub counts its rate limits.

Because of the isolated and ephemeral nature of the runners, container images will not be pre-cached on the worker node, which incurs additional pulls compared to a traditional persistent GitLab Runner with Docker.

The cache is automatically enabled for all images used in the image: keyword of a GitLab CI file. When using Kaniko, Buildah, or Docker-in-Docker, the required parameters are not automatically injected. The cache only activates for unauthenticated connections to Docker Hub; if you manually log in to Docker Hub, the cache is bypassed.

We provide the following environment variables so you can inject the cache endpoint in your own CI files:

  • DOCKER_REGISTRY_MIRROR contains a URL to the cache. The URL is formatted as https://registry-mirror.endpoint
  • KANIKO_MIRROR_ARGS contains a shortcut with the cache URL as a Kaniko command-line argument. The contents are in the format --registry-mirror https://registry-mirror.endpoint.
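A sketch of injecting these variables into Docker-in-Docker and Kaniko jobs; image names and versions are illustrative:

```yaml
docker-build:
  image: docker:27
  services:
    # Pass the mirror to the Docker daemon started as a service.
    - name: docker:27-dind
      command: ["--registry-mirror", "$DOCKER_REGISTRY_MIRROR"]
  script:
    - docker build -t my-image .

kaniko-build:
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  script:
    # $KANIKO_MIRROR_ARGS expands to --registry-mirror plus the cache URL.
    - >-
      /kaniko/executor
      --context "$CI_PROJECT_DIR"
      --dockerfile "$CI_PROJECT_DIR/Dockerfile"
      --no-push
      $KANIKO_MIRROR_ARGS
```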

We strongly recommend that GitLab administrators provide templates for common use cases, such as building Docker images, and inject the appropriate variables there so that these changes are transparent to the users of the instance. GitLabHost will not provide such templates. You can base your templates on the examples provided by GitLab.
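As a sketch, such a template can be distributed with include:project; the project path, file name, and hidden job name here are all hypothetical:

```yaml
include:
  # Hypothetical shared project maintained by the administrators.
  - project: infra/ci-templates
    file: templates/docker-build.yml

build:
  # The template is assumed to define .docker-build with the dind
  # service and the registry mirror variables already injected.
  extends: .docker-build
  script:
    - docker build -t "$CI_REGISTRY_IMAGE" .
```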

Docker in Docker TLS

The official Docker in Docker image strongly prefers TLS-encrypted communication between the host and client. To help with this, GitLabHost runners set the DOCKER_TLS_CERTDIR=/certs/client environment variable. The /certs/client folder is a shared volume between the job container and its service containers, so the client can find the certificates generated by the Docker host.

Not all images based on Docker in Docker automatically recognise this variable and enable the relevant changes. You may end up in a situation where the Docker host listens on the TLS port, but your client attempts to connect without TLS. You can resolve this with one of the following options:

  • Set DOCKER_HOST=tcp://docker:2376 to force the client to connect over TLS.
  • Set DOCKER_TLS_CERTDIR="" to force the host to accept unencrypted connections.
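A sketch of the first option (image versions are illustrative):

```yaml
docker-build:
  image: docker:27
  services:
    - docker:27-dind
  variables:
    # Force the client to connect to the TLS port started by the
    # dind service; the certificates are read from /certs/client.
    DOCKER_HOST: tcp://docker:2376
  script:
    - docker info
    - docker build -t my-image .
```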

These variables are in line with the upstream documentation.

Pricing calculation

GitLabHost calculates runner usage based on the active hours of nodes in the autoscaling fleet. The managing node that controls the hive is not counted in these active hours. Pricing is measured in a unit called Runner Blocks: one runner block equals 730 hours of usage at block size 1. For example, a Medium runner (block size 2) that is active for 100 hours consumes 200 block-hours, or roughly 0.27 blocks. The exact specifications are listed below.

The blocks system allows your runners to scale with (periodic) demand without having to pre-purchase a certain number of blocks per month. The price of one block is defined in your Purchase Order or Subscription; please refer to your PO or Sales contact for details.

GitLabHost does not provide additional systems to re-sell runner capacity internally to your users. If desired, you can use the built-in runner minutes system in GitLab.

Runner sizes

The runner sizes below are based on the AWS EC2 instance types listed. Depending on availability, the system may pick older instance generations instead; for example, the hive is free to choose between c7a.large, c6a.large, and c6i.large. In most cases the newest generation will be used. The oldest instances that may be picked are c5-generation AWS EC2 nodes.

All nodes are x86_64 architecture; we can provide additional architectures such as ARM64 on a per-customer basis. Please contact us for more details.

Name        Block size  Hours in 1 block  vCPUs  RAM    Disk size  Example AWS type
Small       1           730 h             2      4 GB   25 GB      c7a.large
Medium      2           365 h             4      8 GB   50 GB      c7a.xlarge
Large       4           182.5 h           8      16 GB  100 GB     c7a.2xlarge
ExtraLarge  8           91.25 h           16     32 GB  200 GB     c7a.4xlarge

Additional data transfer used by the runners may be invoiced separately, depending on your agreements with GitLabHost. Please refer to your Sales contact for more details.