
Frequently Asked Questions

What GitLab features are available?

As a general rule, we aim to support all GitLab features that are marked as Stable. We support Beta or GA features at our own discretion, depending on the complexity or required configuration.

A (non-exhaustive) list of optional features we support:

  • All OmniAuth providers for SSO login
  • Incoming Email and Service Desk
  • Advanced Search using Elasticsearch
  • GitLab Pages with Custom Domains
  • Kubernetes Agent
  • Docker Registry
  • GitLab Package registry
  • Git LFS

Some features we do not actively support, but usually 'just work':

  • GitLab AI and GitLab Duo integration
  • Feature flags which don't require additional configuration or resources
  • AI Model Registry

Some newer features we do not currently support:

  • GitLab Zoekt for Advanced Search
  • GitLab Next Generation Container Registry
  • On-premise GitLab Duo

What GitLab versions are installed and what's the update schedule?

We aim to stay in line with upstream updates as they are released. Your production environment will always run the previous minor version. Depending on the changes included, we may delay major version upgrades. Any test environment will always run the newest GitLab version we support, which is usually the latest version available. We always skip x.y.0 releases.

Any non-scheduled critical security updates will be installed as quickly as possible, at GitLabHost's discretion. This may happen outside of regular update timeframes, and possibly with very short notice to the customer.

The upstream schedule looks as follows:

  • GitLab releases a new minor version on the third Thursday of each month.
  • Patch releases for the three latest minor versions are released on the Wednesday of the week before and the Wednesday of the week after the monthly minor release.

GitLabHost will update your environments as follows:

  • Your testing environment will be updated to the latest minor release on the Thursday of the week after the monthly minor release, including the patch release from the day before.
  • Your production environment will be updated to the previous minor release on that same Thursday, also including the patch release from the day before.
  • Both environments will receive the patch update on the Thursday of the week before the monthly minor release.

We aim to notify you as follows:

  • For a security or patch update, you will receive details about the deployed versions after the deployment is done.
  • Soon after the upstream release of a minor GitLab version, we will send you an email notifying you about the minor upgrades scheduled for the following Thursday. You will receive another email after the deployment is completed, confirming the deployment status.
  • If we perform scheduled platform maintenance outside of this GitLab versioning schedule, we will discuss a time slot with you in advance.

As a practical example, please find a real-world version of the schedule below:

  • On the 12th of September, your environments will receive patch updates 17.3.2 and 17.2.5 respectively, released upstream on the 11th.
  • On the 19th of September, GitLab releases version 17.4.0. We skip this .0 version entirely, so there will be no deployments that week.
  • On the 26th of September, your testing environment will be updated to 17.4.1, released on the 25th. Your production environment will be updated to 17.3.3.
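The timetable above can be sketched with a small shell helper (GNU `date` assumed; the function name is ours for illustration, not a GitLabHost tool):

```shell
# Compute the third Thursday of a month (the upstream minor-release day)
# and derive the surrounding patch and deployment dates from it.
third_thursday() {
  local first_dow offset
  first_dow=$(date -d "$1-$2-01" +%u)          # 1=Mon .. 7=Sun; Thursday=4
  offset=$(( (4 - first_dow + 7) % 7 + 14 ))   # first Thursday, plus two weeks
  date -d "$1-$2-01 +$offset days" +%F
}

minor=$(third_thursday 2024 9)                 # upstream minor release day
patch_before=$(date -d "$minor -8 days" +%F)   # patch Wednesday, week before
patch_after=$(date -d "$minor +6 days" +%F)    # patch Wednesday, week after
deploy=$(date -d "$patch_after +1 day" +%F)    # GitLabHost deployment Thursday
echo "$minor $patch_before $patch_after $deploy"
# -> 2024-09-19 2024-09-11 2024-09-25 2024-09-26
```

The printed dates match the September example above: the 17.4.0 release on the 19th, the patch Wednesdays on the 11th and 25th, and the deployment Thursday on the 26th.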

What is the purpose of the test environment?

GitLabHost will provide a test environment on AWS when you order a production cluster on AWS. This environment runs one minor version (roughly one month) ahead of your production environment. This gives you a way to test against the upcoming version, so you can ensure your production codebase is ready before the update is installed, or request that the production update be delayed.

We will configure the test environment separately, but in the same way as your production environment. As an example, if you want to enable SAML login, we will first configure the test environment for your SAML provider. You may use a test deployment of your provider if that is required. We will use the test environment to test and debug the integration so we have a set of parameters to use for production later.

Unless otherwise required by the customer, GitLabHost will provide domains and SSL/TLS certificates for the test environment.

The test environment is not charged separately and does not count toward additional costs or overages. However, we provide the test environment under a 'best effort' support model and a 'fair use' usage model. There is no emergency support available for the test environment, and it is not covered by any SLA.

The test environment is covered under our regular ISO 27001 and TISAX compliance framework. However, we strongly recommend you do not upload confidential data onto your test environment unless strictly required.

The test environment runs the same architecture as your production environment, but with significantly smaller hardware allocations, which may result in reduced performance.

What does a migration of my current GitLab instance to GitLabHost look like?

This depends on your current situation. If GitLabHost already manages your GitLab instance, we will have full access to the source machine and will perform most of the actions for you.

If you are self-hosting a GitLab instance, the migration will proceed along the following lines:

  • If you are currently using object storage, we will need to sync these storage buckets beforehand.
    • If you are not using object storage, you may need to migrate to object storage on your current instance.
  • We will require the gitlab-secrets.json file and the current SSH server host keys in use.
  • If you want to provide an SSL/TLS certificate yourself, we'll need that in advance as well.
  • At migration time, you create a GitLab backup .tar and transfer it to GitLabHost.
  • We will restore the GitLab backup .tar onto your new environment while you change your DNS records to point to us.
  • Once done and tested, the migration is complete.
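The self-hosted path above can be summarized as a command outline. This is illustrative only: paths, bucket and host names are hypothetical, and the actual cut-over is coordinated with GitLabHost.

```shell
# Illustrative outline only -- do not run as-is.
# 1. Pre-sync object storage (repeatable before cut-over):
#      aws s3 sync s3://old-gitlab-artifacts s3://new-gitlab-artifacts
# 2. Collect secrets and SSH host keys from the source host:
#      tar czf secrets.tgz /etc/gitlab/gitlab-secrets.json /etc/ssh/ssh_host_*_key*
# 3. At cut-over, create the backup tarball on the source (GitLab Omnibus):
#      sudo gitlab-backup create
# 4. Transfer the resulting .tar from /var/opt/gitlab/backups/ to GitLabHost,
#    then point your DNS at the new environment once the restore is verified.
```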

Depending on your current installation, its size, and your compliance requirements, this may take a single day or may be spread out over multiple weeks. GitLabHost will assist with the migration where possible, but as a general rule we will not engage with third parties that currently provide your GitLab hosting or hosting platform.

From the creation of the GitLab backup until the restore is complete, both the source and target GitLab systems will be unavailable to users. This is important to prevent a split-brain between the two.

If you are migrating from GitLab.com, or from other source control products or providers, we will provide an empty GitLab instance, and we recommend using GitLab's built-in migration options to transfer your data at your own discretion. We can provide hints and some support where required.

What and how often do you create backups?

We create snapshots of all data in the GitLab system at least every 24 hours. We use AWS-provided services where possible to ensure the most efficient snapshots of data in the cloud. Where supported, continuous point-in-time recovery is enabled; this includes AWS S3 and AWS RDS.

For GitLab's Git storage, we create Git bundles (Git's native export format) every 24 hours. These bundles are uploaded to S3 and managed by AWS from there.
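A runnable sketch of such an export, using a throwaway repository so the example is self-contained; the real repository paths, and the subsequent S3 upload step, will differ.

```shell
# Create a demo repository with one commit, then pack every ref into a
# single portable bundle file -- the same mechanism used for the nightly export.
WORK=$(mktemp -d)
git init -q "$WORK/repo"
git -C "$WORK/repo" -c user.name=ops -c user.email=ops@example.invalid \
    commit -q --allow-empty -m "demo commit"
git -C "$WORK/repo" bundle create "$WORK/backup.bundle" --all
# Bundles are self-checking; verification confirms the file is restorable:
git -C "$WORK/repo" bundle verify "$WORK/backup.bundle"
```

A bundle created with `--all` contains every branch and tag, so a single file per repository is enough to restore it elsewhere with `git clone backup.bundle`.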

This means we currently have an RPO of 24 hours. The RTO depends significantly on the size of the cluster and the disaster being recovered from. Please contact us for an estimate if you need an RTO value.

By default, all snapshots are kept for 31 days. We can increase this value at additional cost.

The test environments have the same backup system, but the retention period is lowered to 7 days. We do not guarantee an RTO or RPO for the test environment.

How does my cluster scale and when?

We have extensive monitoring and metric collection installed on all systems. We use this information to allocate hardware resources where and when required. Additionally, we can increase the hardware allocation based on user reports or complaints, or in anticipation of a large influx of new users or data.

We will automatically increase disk allocations before storage runs out, unless the allocation would significantly exceed the ordered amounts. It is not possible to shrink pre-allocated storage afterwards. This mainly concerns the backing storage of the Git data cluster.

Data stored in S3, such as packages or container images, is not limited by storage allocation and will 'just work'. Storage is billed as used, and we will contact you if you significantly exceed the amounts on your purchase order. The maximum size for any single file stored on S3 is 5 TB, but we do not recommend actually storing single files of that size, as you will likely run into timeouts when uploading or downloading them.
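For context on why such uploads are slow: AWS documents S3 multipart uploads as allowing at most 10,000 parts per object, which forces large part sizes for files near the limit. A quick back-of-the-envelope sketch:

```shell
# Smallest part size a 5 TiB object can use under S3's documented
# 10,000-parts-per-upload limit (ceiling division on the byte count).
MAX_PARTS=10000
FIVE_TIB=$((5 * 1024 * 1024 * 1024 * 1024))              # 5 TiB in bytes
MIN_PART=$(( (FIVE_TIB + MAX_PARTS - 1) / MAX_PARTS ))   # ceiling division
echo "$((MIN_PART / 1024 / 1024)) MiB minimum part size"
# -> 524 MiB minimum part size
```

At more than half a gigabyte per part, each of the 10,000 part transfers must complete without interruption, which is where timeouts tend to appear.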