Creating a new solution¶
This guide will walk you through generating a new solution codebase, ensuring the required AWS account exists and is reachable, and following our best practices regarding encryption.
Preparing external resources¶
Dependencies¶
Before you continue, ensure you have set up the required Dependencies.
Variables to create manually¶
Your environment needs a name. This name is used in multiple places to refer to your project. It's recommended you keep this name as a reference to the project in as many places as possible.
The project name must be short but logical. Some examples:
- maikel-dev
- skyway-test
- hella-prod
Note that these examples propose names that point to the user/customer and the environment type, and are valid slugs.
On this page, the name will be referred to as the Project Name or project-name. Please use common sense and replace
any occurrences of project-name in the examples below with your actual chosen Project Name.
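As a quick sanity check, the naming convention above can be expressed as a slug test. This is a hypothetical helper for illustration, not part of our tooling:

```shell
# Hypothetical helper: check that a Project Name is a valid slug
# (lowercase letters/digits, hyphen-separated, no leading/trailing hyphen).
is_valid_slug() {
  printf '%s' "$1" | grep -Eq '^[a-z0-9]+(-[a-z0-9]+)*$'
}

is_valid_slug "maikel-dev"  && echo "maikel-dev: ok"
is_valid_slug "Skyway Test" || echo "Skyway Test: invalid"
```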
AWS account¶
You will need an AWS account to deploy your solution to. Please do not re-use accounts for multiple deployments. In most cases, you will create a new AWS account as a sub-account of the GitLabHost Root AWS account. In some cases, the customer may provide the account for you.
These steps will assume you create a new account under our GitLabHost Root. Start by logging in to our root account and going to the AWS Organizations service.
Once there, perform the following steps in the web interface:
- Click 'Add an AWS account'.
- For the AWS account name, enter your Project Name.
- For the email address, either:
  - For personal development environments, use yourname+project-name@gitlabhost.com. For example: maikel+maikel-dev@gitlabhost.com.
  - For other environments, use tech+project-name@gitlabhost.com. For example: tech+skyway-test@gitlabhost.com.
- Leave the IAM role name set to OrganizationAccountAccessRole.
- Submit the form.
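The email convention above can be sketched as a small shell helper. This is our own illustration, not part of the tooling: the assumption that personal development environments end in -dev is taken from the examples, and "yourname" must be replaced with your own login:

```shell
# Sketch: derive the AWS account email from a Project Name.
# Assumption (from the examples): personal dev environments end in "-dev".
account_email() {
  project_name="$1"
  owner="$2"   # only used for personal development environments
  case "$project_name" in
    *-dev) printf '%s+%s@gitlabhost.com\n' "$owner" "$project_name" ;;
    *)     printf 'tech+%s@gitlabhost.com\n' "$project_name" ;;
  esac
}

account_email "maikel-dev" "maikel"   # maikel+maikel-dev@gitlabhost.com
account_email "skyway-test"           # tech+skyway-test@gitlabhost.com
```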
It may take some time for the account to be created. In most cases the account should be visible within 1 minute.
Reload the page, and find your new account. Note down the 12-digit numeric account id, for example: 825765422520.
Check the box in front of your new account, open the Actions dropdown, and choose Move. Select the logical
destination, such as Development or Production. Then submit the form. This action moves your account to a specified
Organizational unit, which is important because we use these in code to allow access to groups of accounts.
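Before using the account ID elsewhere, you can verify the value you noted down really is a 12-digit number. This is a generic sanity check, not specific to our tooling:

```shell
# Verify an AWS account ID is exactly 12 digits.
is_account_id() {
  printf '%s' "$1" | grep -Eq '^[0-9]{12}$'
}

is_account_id "825765422520" && echo "looks like a valid account ID"
```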
AWS SSO access¶
Because of our specific usage and setup of AWS, you cannot log in to your new account directly. All sign-in must be done via AWS SSO. To add your new account to the SSO set, follow these steps:
- Visit and/or clone the AWS Identity Management project.
- Create a new branch and open the file terraform/organizations.tf.
- Add your new account in the same way as the others that are already in there. Please see an example below.
- Commit and push your changes, submit a Merge Request, and get a colleague to review your changes.
- After auto-deployment is complete, you can switch to your account via the AWS SSO page.
Example stanza for organizations.tf:
locals {
  organizations = {
    "825765422520" = {
      name    = "maikel-test"
      purpose = "This account is Maikel's personal playground"
      groups  = [local.groups.hae_dev] # High Availability Engineer - Dev scope
      users   = ["maikel"] # Allow per-user direct access. FOR DEVELOPMENT ONLY!
    }
  }
}
Preparing Ansible-Vault passphrase¶
For some of the values in your solution that are stored in Git, a layer of symmetric encryption is needed. This is because we don't want to store database passwords in plain-text.
Prepare a passphrase by performing these steps:
- Visit 1Password (or use the app instead).
- If relevant, switch to the Operations vault.
- Click the New item button and choose to create a Login item.
- Give your item a name as follows: project-name Ansible Vault. For example: maikel-test Ansible Vault.
- Have 1Password generate a password for you. Ensure it is at least 32 characters.
- Entering a username or other details is not required, but not forbidden either.
- Submit the form.
Now you'll need to retrieve the unique ID of the item. There are two ways to do this:
- Open the item in the 1Password UI, open the context menu, and click Copy Private Link. A URL will be copied to your clipboard. The ID you need is the i= part of the URL parameters.
- On a shell, run the following command: op item get 'project-name Ansible Vault'. The value you need is in the ID: field.
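If you went the Copy Private Link route, the ID can be pulled out of the URL on a shell. The link below is a made-up example, not a real vault item:

```shell
# Extract the i= parameter from a 1Password private link (example URL).
link='https://start.1password.com/open/i?a=ABCDEF&v=vaultid&i=udicj3f26yqvonttp5chn7orc4&h=example.1password.com'
item_id=$(printf '%s' "$link" | sed -n 's/.*[?&]i=\([^&]*\).*/\1/p')
echo "$item_id"   # udicj3f26yqvonttp5chn7orc4
```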
Creating a codebase¶
Running Copier to render out template¶
Once you've prepared the external resources, you can go ahead and run our template generator to create a boilerplate solution codebase.
Run this command to get started. The output and example input values are also shown below.
A few changes you'll probably want to make compared to the output below:
- Name/slug of the project: replace with your actual Project Name (name-dev, for example).
- Version of GitLabHost' GitLab Environment Toolkit to use: for staging/production, pick a release (e.g. v2.0.1) or leave at main.
- Prefill the GitLab version to use: replace with the latest version, or with the current version if migrating an existing environment.
- Template to use for prefilling: use 3k for production and dev for everything else.
- The AWS region you want to deploy in: this is dependent on the customer's needs. Prefer eu-west-1 if possible.
- The AWS region you want to store mirrored backups in: aws-west-1.
- The 12-digit AWS account ID: replace with your own value, found at Open GitLab SSO.
- The unique ID to the 1Password vault item: replace with your own value from op item get 'project-name Ansible Vault'.
$ copier copy --trust git@git.glhd.nl:glh/ha/gitlab-environment-toolkit.git project-name
🎤 Name/slug of the project
project-name
🎤 Primary GitLab external_url for the project
project-name.glhc.nl
🎤 Version of GitLabHost' GitLab Environment Toolkit to use
main
🎤 Prefill the GitLab version to use
17.2.3
🎤 This specifies what template to use for prefilling instance types and counts
dev
🎤 The AWS region you want to deploy in
eu-central-1
🎤 The 12-digit AWS account ID you want to deploy in
825765422520
🎤 The unique ID to the 1password vault item used to encrypt Ansible Vault items with
udicj3f26yqvonttp5chn7orc4
🎤 Do you want to enable using AWS SSM for connecting to EC2 machines?
Yes
🎤 Do you want to install/update up the dependencies right away?
Yes
Installing the dependencies right away may take a few minutes; please be patient. The speed also depends on your
network connection performance. You may be prompted to open your 1Password vault during the process to encrypt some
randomly generated values. Use eval $(op signin) to log in on the terminal.
If the encryption fails, you'll be given instructions on how to perform the encryption afterwards. Ensure you perform these actions before committing and pushing your new codebase!
Submitting the code to GitLab¶
Before submitting your code to GitLab, please ensure that ansible/inventory/sensitive_vars.yml is encrypted.
Open the file in a normal text editor. If it starts with $ANSIBLE_VAULT and contains a long block of numbers, it's encrypted.
If you can read the contents, please encrypt the file by running ansible-vault encrypt inventory/sensitive_vars.yml.
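The check above can be scripted. A minimal sketch, where the helper name is ours and not part of Ansible:

```shell
# Hedged sketch: check whether a file already carries the ansible-vault header.
is_vault_encrypted() {
  head -n 1 "$1" | grep -q '^\$ANSIBLE_VAULT'
}

f=ansible/inventory/sensitive_vars.yml
if [ -f "$f" ] && ! is_vault_encrypted "$f"; then
  echo "$f is NOT encrypted; run: ansible-vault encrypt $f" >&2
fi
```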
To submit the solution, follow the steps below. Please note: you don't need to create the project in the GitLab UI manually. For this part, you need to have set up a YubiKey (see Setting up YubiKey).
test -d .git || git init -q
git add .
git commit -m 'Initial commit'
git remote add origin git@git.glhd.nl:glh/ha/solutions/maikel-test.git
git push --set-upstream origin main
Preparing for initial deployment¶
Generating and setting a GitLab access token¶
Create a personal access token for your GitLab account: Access tokens
The token needs to have the following scopes:
read_user, read_repository, read_registry, read_api, write_repository, write_registry, api, ai_features, create_runner, k8s_proxy
To use our Terraform modules, create a ~/.terraformrc file in your home folder and insert your access token for our git.glhd.nl domain.
~/.terraformrc contents:
credentials "git.glhd.nl" {
token = "<YOUR-ACCESS-TOKEN>"
}
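A hedged sketch of creating that file from a shell, with restrictive permissions since it holds a credential. The TERRAFORMRC override and the token placeholder are ours, for illustration only:

```shell
# Write ~/.terraformrc (or $TERRAFORMRC if set) and lock down permissions.
# Replace <YOUR-ACCESS-TOKEN> with your real token afterwards.
rc="${TERRAFORMRC:-$HOME/.terraformrc}"
cat > "$rc" <<'EOF'
credentials "git.glhd.nl" {
  token = "<YOUR-ACCESS-TOKEN>"
}
EOF
chmod 600 "$rc"
```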
After that, follow the steps described here to set up access to the remote Terraform state files for your personal project (make sure to replace the project name).
https://git.glhd.nl/glh/ha/solutions/PROJECT-NAME/-/terraform
Ensuring Ansible dependencies are installed¶
If you have just created your template, and you chose to install the dependencies right away, most of this section will have been done already as part of the process. In case something failed, follow the steps below.
Create a virtual environment and install the Python and Ansible dependencies:
test -d .venv || python -m venv .venv
source .venv/bin/activate
pip install -U pip && pip install -r ansible/requirements.txt
cd ansible
ansible-galaxy install -r galaxy-requirements.yml
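A small guard you can drop into your own wrapper scripts to catch a forgotten activation. This is our own sketch, not part of the toolkit:

```shell
# Fail fast if the virtualenv is not active.
check_venv() {
  if [ -z "${VIRTUAL_ENV:-}" ]; then
    echo "virtualenv not active; run: source .venv/bin/activate" >&2
    return 1
  fi
}

check_venv || echo "activate the venv before running ansible-playbook"
```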
Preparing Terraform¶
Because of chicken-and-egg problems, you need to manually create the S3 bucket that Terraform will store your deployment state in. To do this, follow these steps:
- Log in to your new AWS account using AWS SSO, and go to the S3 service.
- Click the 'Create bucket' button.
- Ensure the AWS Region listed is correct; if not, use the region switcher in the global navbar.
- For the bucket name, enter project-name-get-terraform-state.
- Leave all other settings as-is and submit the form.
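The bucket name follows directly from your Project Name. A quick sketch, with a basic check against S3's naming constraints (3-63 characters; lowercase letters, digits, and hyphens):

```shell
# Derive the Terraform state bucket name from the Project Name.
project_name="maikel-dev"
bucket="${project_name}-get-terraform-state"
echo "$bucket"   # maikel-dev-get-terraform-state

# Sanity check against basic S3 bucket naming rules.
printf '%s' "$bucket" | grep -Eq '^[a-z0-9][a-z0-9-]{1,61}[a-z0-9]$' \
  && echo "bucket name looks valid"
```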
Before the next step, make sure you have looked into GLH GetRect Repository
Next, initialize Terraform:
cd terraform
aws-sso terraform init
Confirming settings¶
If you are deploying a development environment, most of the available settings won't matter and you can use the defaults. However, when this is an environment intended for customer usage, you should make sure all the settings are correct for that customer, since some settings cannot be changed afterwards.
Please open the following files in an editor and take a look around. You should quickly notice things that are
incorrect. In particular, ensure your domain_name stays a *.glhc.nl domain, and customize external_url instead.
- ansible/inventory/vars.yml
- terraform/environment.tf
Rendering out Ansible data for Terraform¶
In our toolkit, we have several helpers to avoid duplication of settings. Ansible can render out values for Terraform to use (such as the database password), and Terraform in turn will output some resource locations for Ansible to use (such as the database URL).
Render out Ansible data for Terraform by running the following command:
source .venv/bin/activate
cd ansible
eval $(op signin) # Unlock 1Password if not done already
aws-sso ansible-playbook -i inventory glh.environment_toolkit.render_tfvars
Note: you may be asked to unlock your 1Password vault to perform this action. This is normal, as the source of the data that Ansible uses is an encrypted file.
Performing the initial deployment¶
Creating AWS resources with Terraform¶
First, switch your shell to the terraform folder. In case you are using the default, which creates a per-solution
AWS KMS key, start by initializing the key first, to avoid chicken and egg problems:
aws-sso terraform apply -target aws_kms_key.default_kms_key
Next, you can provision all other Terraform-managed resources. Please note that the initial run may take quite some time to complete. Depending on the amount of resources chosen, in particular the RDS, ElastiCache, and OpenSearch instance counts, the deployment may take anywhere between 10 and 50 minutes.
Some actions may fail due to network issues or general timeouts. In most cases, it's safe to re-try the complete run.
Manually propagating the DNS zone for your solution¶
One particular point of failure can be the creation of TLS certificates, as the DNS pointers need to work for this to succeed. Our stanza for the shared infrastructure will not be rendered until the full Terraform run is completed.
To work around this, you can add your DNS zone to the shared infrastructure manually:
- Visit or clone the AWS Infrastructure project.
- Create a new branch and edit terraform/solutions.tf. Add the following contents in the same way as the others:
  project-name = {
    nameservers = ["ns-1329.awsdns-38.org", "ns-83.awsdns-10.com", "ns-1003.awsdns-61.net", "ns-1862.awsdns-40.co.uk"]
  }
- The nameservers should contain your actual nameservers. To retrieve these, open the AWS web console, go to the Route 53 service, and under 'Hosted zones', find and open your public DNS zone for project-name.glhc.nl.
- The names to use are listed under 'Hosted zone details', under 'Name servers'. They should look similar to the example above.
- Run terraform fmt solutions.tf to ensure the formatting is linter-compliant.
- Commit your changes and submit a Merge Request. Wait for approval, and wait for the deployment to finish afterwards.
- Validate the DNS zone is now resolvable: dig project-name.glhc.nl NS @1.1.1.1.
Afterwards, retry provisioning Terraform resources as described in Creating AWS resources with Terraform.
Adding your new solution to our shared infrastructure¶
After your Terraform run has completed, you will be presented with an output stanza for our shared infrastructure that will look very similar to this:
infrastructure_config = <<EOT
maikel-dev = {
account_id = "946348436774"
alerting_environment = "test"
ds_record = "65073 13 2 E51BACF5DD30560D9B30B41766D9EC0427378B31D612CD2FAC02DE404AFD39A2"
nameservers = ["ns-1028.awsdns-00.org","ns-1578.awsdns-05.co.uk","ns-308.awsdns-38.com","ns-797.awsdns-35.net"]
region = "eu-west-1"
tunnel_endpoint_id = "com.amazonaws.vpce.eu-west-1.vpce-svc-01abc1fe30ac49743"
}
EOT
The block of code between the EOT tags is intended for copy-pasting. To add this to our shared infrastructure
codebase, follow the steps below:
- Visit or clone the AWS Infrastructure project.
- Create a new branch and edit terraform/solutions.tf. Add the stanza to the list. Keep in mind the alphabetical sorting, and the per-customer grouping using additional newlines (or lack thereof).
  - Note: if you had to manually propagate your DNS zone, replace the block you added previously with the new version from your Terraform output.
- Run terraform fmt solutions.tf to ensure the formatting is linter-compliant.
- Commit your changes and submit a Merge Request.
Configuring OpenSSH¶
Starting with GET 3.0.0, we no longer pre-configure OpenSSH on machines by default. The approach we previously took was convoluted for the one-time usage pattern, and AWS Systems Manager has proven good enough for the first-time installation.
Adding your own keys¶
If you are new to GitLabHost, your own SSH keys will not be in the shared codebase. You can override the relevant
parameters locally to add the keys. The same changes can later be added in the openssh_server role, in the
Ansible Common Config project.
In your solution, open ansible/inventory/vars.yml. Add the following global variable definitions:
all:
  vars:
    maintenance_users: ["bram", "daniel", "leon", "maikel", "your_name_here"]
    ssh_keys_your_name_here: ["ssh-ed25519 AAAAC3NzaC1lZDI1NT....4fzeJLGS5htxSzw8O0 yourname@here.com"]
Note: it's recommended you also list the users defined upstream
here, so they can help you on the shell if required. Their ssh_keys_ data will automatically be propagated and does
not need to be redefined.
Provisioning OpenSSH server¶
We will explain how to provision using AWS SSM. If you want to connect otherwise, skip these steps, and only run the playbooks listed at the end of this subsection to set up OpenSSH on the targets.
In ansible/inventory/aws_ec2.yml, ensure the ansible_host is set correctly. Since 3.2.0, this is set by default.
# ...
compose:
  ansible_host: instance_id
In ansible/inventory/vars.yml, ensure the following variables are set. Since 3.2.0, these are present by default,
but the ansible_connection line is commented out. In that case, uncomment the line.
all:
  vars:
    # ...
    ansible_connection: aws_ssm
    ansible_aws_ssm_bucket_name: "{{ prefix }}-ansible-ssm"
    ansible_aws_ssm_timeout: 600
Then, run the playbook that provisions the OpenSSH server. First, make sure you have ansible/inventory/gitlab_version.yml defined:
all:
  vars:
    gitlab_version: "18.x.x"
source .venv/bin/activate
cd ansible
eval $(op signin) # Unlock 1Password if not done already
# Provision all available hosts
aws-sso ansible-playbook -i inventory glh.common_config.openssh_server
# Only provision a specific host
aws-sso ansible-playbook -i inventory glh.common_config.openssh_server -l maikel-dev-gitlab-rails-1
Afterwards, you can comment out the ansible_connection line in ansible/inventory/vars.yml.
Provisioning your Linux machines¶
In the previous steps, Terraform will have created several EC2 virtual machines with Debian Linux pre-installed. We use Ansible to provision these machines further, installing the software required for them to perform their tasks. A snippet is already pre-generated in your solution that tells Ansible which machine serves what purpose, so Ansible knows what software to install.
Ensure your SSH key is available (by plugging in your YubiKey), and that you are connected to the GitLabHost VPN (this is not always necessary).
To provision the machines, run the following Ansible Playbook. The playbook may take over an hour to complete, as the initial installation of all machines can be quite slow, especially on burstable instance types used in our development environment template.
source .venv/bin/activate
cd ansible
eval $(op signin) # Unlock 1Password if not done already
aws-sso ansible-playbook -i inventory glh.environment_toolkit.all
It is safe to re-run the playbook multiple times in case of failures due to networking or concurrency issues. You can use the --start-at-task flag to skip previously succeeded tasks.
If you get an error like:
FAILED - RETRYING: [localhost]: Wait for GitLab to be available (30 retries left).
You are probably not connected to the VPN. If you need to temporarily bypass the health check while debugging, you can comment it out:
sed -i '/Wait for GitLab to be available/,/delay:/ s/^/#/' ~/projects/project-name/ansible/.ansible/collections/ansible_collections/glh/environment_toolkit/roles/omnibus/tasks/health_check.yml
Post deployment tasks¶
Seeding the GitLab database¶
If you are migrating an existing GitLab database, preseeding will already have been done. But if you need an empty instance to perform development or tests on, you will need to properly seed the database.
Due to some intricacies in the way our code is set up, GitLab will not perform preseeding on the initial db:migrate
task, but will leave the application in a broken state. This is usually noticeable by missing objects in the database,
such as the default Organization object not being found.
To perform the preseeding manually, first shut down any services that are connected to the GitLab PostgreSQL database:
aws-sso ansible-playbook -i inventory glh.environment_toolkit.tools.pre_data_migration
Then, open a shell on the primary Rails node, and run the following command:
# Note: this will wipe your database! There will be no going back!
sudo gitlab-rake gitlab:setup DISABLE_DATABASE_ENVIRONMENT_CHECK=1
Afterwards, (re)start all GitLab services:
aws-sso ansible-playbook -i inventory glh.environment_toolkit.tools.post_data_migration
Helper to create admin user¶
We provide a simple helper to create (or update) an admin user in GitLab. Use this as follows:
GITLAB_ADMIN_USERNAME="glh-admin" GITLAB_ADMIN_PASSWORD="replaceme" aws-sso ansible-playbook -i inventory glh.environment_toolkit.tools.create_admin_user
When a user with username GITLAB_ADMIN_USERNAME is found, the helper will force-set the password and make the user an admin.
Please note: it is common courtesy to not create frontend accounts on GitLab instances intended for customers. Ask the customer to create an account for you instead, so they explicitly provide you with consent. You can use the helper above to create an account for the customer so they can perform this task for you.
Additional DNS configuration¶
In case you are using a domain that is not under your project-name.glhc.nl zone, you will need to configure that
manually, or have the customer configure that manually for you.
You can use the project-name.glhc.nl pointers so the customer can create CNAME records towards it.
Use the following names when configuring external DNS:
- external_url, the primary URL to the GitLab web interface, must point to git.project-name.glhc.nl
- registry_external_url, the URL to the Docker Registry, must point to registry.project-name.glhc.nl
- If GitLab Pages is enabled, create the following CNAME records:
  - pages_external_url must point to pages.project-name.glhc.nl.
  - Also add a wildcard variant that points to pages_external_url.
  - For example: *.pages.example.com points to pages.example.com, which in turn points to pages.project-name.glhc.nl.