Pipelines¶
Each pipeline roughly follows the same template:
On the pipelines¶
Create configuration artifact¶
Most services require a configuration file to operate. As part of the pipelines, we generate these configuration files and store them as an artifact for use in later steps.
Configuration files are usually generated using the existing Ansible playbooks and group vars. For more detail, see ansible playbooks.
Example
Examples include `settings.py` for the Django projects and `config.yml` for the DCS.
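As a rough sketch, a pipeline step might render the configuration using the existing playbooks and group vars. The playbook name, inventory and output path below are assumptions, not the real values:

```bash
# Hypothetical sketch: render the service configuration from the existing
# group vars. Playbook, inventory and output paths are illustrative only.
ansible-playbook playbooks/generate_config.yml \
  -i inventories/prod \
  --extra-vars "output_dir=./artifacts"

# The rendered file (e.g. settings.py or config.yml) is then stored as a
# pipeline artifact for the later rsync step.
```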
Rsync to incoming directory¶
We copy the codebase, including any artifacts generated in the previous step, to an "incoming" directory on the Utility Server rather than to its final destination on an NFS mount, so as not to affect any production systems.
Example
The incoming directories are stored under the `deployments` user's home directory. Examples include:

- `/home/deployments/prod_dcs/incoming/`
- `/home/deployments/stge_control/incoming/`
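A minimal sketch of this copy, assuming an SSH-reachable Utility Server (the hostname is a placeholder; the path follows the examples above):

```bash
# Copy the codebase plus generated artifacts into the incoming directory.
# "utility-server" is a placeholder hostname.
rsync -avz --delete \
  ./ \
  deployments@utility-server:/home/deployments/prod_dcs/incoming/
```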
Run deployment script¶
Finally, we run the deployment script. This is done via SSH on the Utility Server, as sketched below.
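Something along these lines, assuming the script lives next to the incoming directory (the script name and location are assumptions):

```bash
# Trigger the deployment on the Utility Server over SSH.
# The script path is hypothetical.
ssh deployments@utility-server '/home/deployments/prod_dcs/deploy.sh'
```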
On the utility server¶
Backup of current codebase¶
In this step, we create a backup of the current codebase stored on NFS. These backups are used if we need to roll back to a previous version.
Example
The backups are stored locally on the Utility Server, next to the incoming directory, in timestamped directories. Examples include:

- `/home/deployments/prod_dcs/backup/20220713_00_07`
- `/home/deployments/prod_dcs/backup/20220713_11_34`
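The directory names above suggest a `YYYYMMDD_HH_MM` timestamp. A sketch of the step, treating the NFS mount point as an assumption:

```bash
# Snapshot the current codebase from NFS into a timestamped backup directory.
# /mnt/nfs/prod_dcs is a placeholder for the real NFS mount (see NFS mounts).
TIMESTAMP=$(date +%Y%m%d_%H_%M)
rsync -a /mnt/nfs/prod_dcs/ "/home/deployments/prod_dcs/backup/${TIMESTAMP}/"
```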
Rsync to NFS¶
The contents of the incoming directory are copied to the relevant NFS directory (see NFS mounts). Because all production servers point at these mounts, they all pick up the new files as soon as the copy completes.
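A sketch of this promotion, again with the NFS mount point as a placeholder:

```bash
# Promote the incoming directory to the NFS mount all servers read from.
rsync -a --delete \
  /home/deployments/prod_dcs/incoming/ \
  /mnt/nfs/prod_dcs/
```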
Run ansible playbook¶
Finally, we run an Ansible playbook to update every server associated with the pipeline. These playbooks are stored on the Utility Server and managed by Positive Internet.
Warning
There is some legacy code inside `/srv/attrib-backend/ansible` that is now superseded by the playbooks stored on the Utility Server. As we don't have access to the Utility Server's repository, the legacy code gives us insight into what the Utility Server executes during a deployment.
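Since the real playbooks live on the Utility Server and are managed by Positive Internet, the invocation below is purely illustrative:

```bash
# Hypothetical invocation; the playbook and inventory names are assumptions.
ansible-playbook -i inventories/prod deploy_dcs.yml
```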
On each server¶
Block traffic¶
For services that are receiving traffic, we want to take them out of the load balancer before updating them. To do this with nginx, we block all traffic to the current server by adding an `iptables` rule. This makes nginx treat the server as unhealthy and redirect its traffic to another server.
Note
This step only applies to the DCS pipelines.
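A sketch of the blocking rule, assuming the service listens on port 8000 (the port is an assumption):

```bash
# Reject new connections to the service port so nginx's health checks fail
# and it routes traffic to the other servers. Port 8000 is a placeholder.
iptables -I INPUT -p tcp --dport 8000 -j REJECT
```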
Install dependencies¶
Using the newly updated codebase (see Rsync to NFS), we install any needed dependencies. This is usually just installing Python packages from a `requirements.txt` file stored in the codebase.
Note
Some pipelines have additional steps. For example, the `attribution-r-ml` project has a dedicated install script for R packages.
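For the common Python case, this amounts to something like the following (the virtualenv path is an assumption):

```bash
# Install Python dependencies from the newly deployed codebase.
# /srv/venvs/pydcs is a placeholder for the service's virtualenv.
/srv/venvs/pydcs/bin/pip install -r requirements.txt
```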
Restart service¶
Finally, we restart the service associated with the pipeline, so that it picks up the newly deployed files and any newly installed packages.
Note
As each service is set up as a systemd service, this is what we restart (e.g. `systemctl restart pydcs`). See Server Layout for more information.
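Using the `pydcs` unit from the example above:

```bash
# Restart the unit and confirm it came back up.
systemctl restart pydcs
systemctl is-active pydcs   # prints "active" once the service is running
```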
Wait for `/@health` endpoint¶
We check that the `/@health` endpoint returns a 200 response, with a fixed number of retries. This confirms that the service has been restarted correctly.
Note
Only services that expose a `/@health` endpoint include this step, such as the DCS and DPS.
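A sketch of the retry loop; the local port, retry count and delay are assumptions:

```bash
# Poll /@health until it returns 200 (curl -f fails on non-2xx responses),
# giving up after 10 attempts. Port 8000 is a placeholder.
for attempt in $(seq 1 10); do
  curl -fsS http://localhost:8000/@health > /dev/null && exit 0
  sleep 5
done
echo "service failed health check" >&2
exit 1
```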
Allow traffic¶
The opposite of Block traffic: this step unblocks any blocked ports and allows the server to receive traffic again.
Note
This step only applies to the DCS pipelines.
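Assuming the same placeholder port as in Block traffic:

```bash
# Delete the REJECT rule added during "Block traffic" so the server is
# marked healthy again and re-enters the load balancer.
iptables -D INPUT -p tcp --dport 8000 -j REJECT
```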