# Backend Deploy
The backend project is the project we deploy most often, as it is where most of the work lands (control + dash). We usually deploy once or twice a month, depending on workload. The `development` branch is the branch all new work is merged into. This document outlines the steps required to deploy those changes to our staging and production environments. We initially deploy to staging to check everything works as expected; if all is successful, we then deploy to production.
## Staging
- Pull the latest changes from the `development` branch.
- Create a new branch from `development` using the template `rc/<date>` (e.g. `rc/2001-01-01`), where "rc" stands for release candidate.
- Push the new branch to the remote repository.
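The branch-creation steps above can be sketched in shell. This is a sketch only: the `YYYY-MM-DD` date format is inferred from the `rc/2001-01-01` example, and the git commands are shown as comments since they assume you are inside the backend repository.

```shell
# Derive today's release-candidate branch name.
# YYYY-MM-DD format is an assumption based on the rc/2001-01-01 example.
BRANCH="rc/$(date +%Y-%m-%d)"
echo "$BRANCH"

# Then, from an up-to-date development branch in the backend repo:
#   git checkout development && git pull origin development
#   git checkout -b "$BRANCH"
#   git push -u origin "$BRANCH"
```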
- In Bitbucket, run a pipeline build for the new release candidate branch into both `control-staging` and `dash-staging`.
- Once the builds are successful, SSH into CUBED_STGE_CONTROL via `ssh cubed@<cubed-stge-control-host> -i <path-to-ppk>`. If you have not been supplied with a ppk file, please contact your team lead.
- Once in the box, go to the backend directory: `cd /srv/attrib-backend/backend/`.
- Instantiate a screen session via `screen -S <screen-session-name>`. Documentation on Linux screens can be found here, and it is strongly encouraged that you familiarise yourself with them. If the session is already running, you can attach to it via `screen -r <screen-session-name>`. To detach from a screen session, use `Ctrl + A + D`.
- Run migrations via `sudo python3 manage.py migrate_client --loaddata --noinput`. We may also need to run migrations for other apps (Seopt etc.). Check this by running `sudo python3 manage.py migrate --list` and seeing if there are any unapplied migrations.
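If you deploy regularly, the SSH step above can be shortened with an entry in `~/.ssh/config`. This is a sketch: the `Host` alias is invented, and note that OpenSSH expects an OpenSSH/PEM-format key, so a `.ppk` file may first need converting (e.g. with PuTTYgen's export option).

```
Host cubed-stge-control
    HostName <cubed-stge-control-host>
    User cubed
    IdentityFile <path-to-ppk>
```

With this in place, `ssh cubed-stge-control` is enough.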
**Before running migrations**

Whenever a developer is running migrations, they should always be in a screen session. This ensures that the migrations are not interrupted if the connection is lost.

The final command runs migrations against a few staging accounts, which allows us to "test" the migrations before running them on production. As a final precaution, go to staging.withcubed.com and make sure everything is working as expected on dash.
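A quick way to spot unapplied migrations in the `migrate --list` output is to grep for the unchecked marker. The sketch below uses invented sample output; the `[X]` (applied) / `[ ]` (unapplied) markers are Django's listing format.

```shell
# Count unapplied migrations in (invented) `migrate --list`-style output.
# On the box, the real check would be:
#   sudo python3 manage.py migrate --list | grep -c '\[ \]'
printf '%s\n' \
  'seopt' \
  ' [X] 0001_initial' \
  ' [ ] 0002_add_index' \
  | grep -c '\[ \]'   # prints 1: one unapplied migration
```

A non-zero count means there are migrations still to run for that app.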
**Running migrations on different apps**

An exhaustive list of the migration commands to run for a given app is as follows:

- Base: `sudo python3 manage.py migrate base`
- Client: `sudo python3 manage.py migrate_client --loaddata --noinput`
- Seopt: `sudo python3 manage.py migrate seopt --database=seopt`
- Command/Conductor: `sudo python3 manage.py migrate command --database=command`

- As a final step, go to staging.withcubed.com and make sure everything is working as expected on staging.
**No accounts showing in dash?**

If no accounts are shown when logging into dash, it is likely you have an account configuration issue with database access to 10.3.63.31. Please contact Positive to confirm your user has access to all databases on this server.
## Production
- Run a pipeline build for the `rc/<date>` branch in both the `control-prod` and `dash-prod` pipelines. We usually run `control-prod` before `dash-prod`, because unapplied migrations on dash will cause the frontend to crash in certain scenarios.
- After the builds are successful, SSH into the CUBED_CONTROL_A box via `ssh cubed@<cubed-prod-control-host> -i <path-to-ppk>`. If you have not been supplied with a ppk file, please contact your team lead.
- Again, go to the backend directory: `cd /srv/attrib-backend/backend/`.
- Instantiate a screen session via `screen -S <screen-session-name>`. Documentation on Linux screens can be found here, and it is strongly encouraged that you familiarise yourself with them. If the session is already running, you can attach to it via `screen -r <screen-session-name>`. To detach from a screen session, use `Ctrl + A + D`.
- Run migrations via `sudo python3 manage.py migrate_client --loaddata --noinput`. We may also need to run migrations for other apps (Seopt etc.); check this by running `sudo python3 manage.py migrate --list` and seeing if there are any unapplied migrations. Before migrations take place, the account is put into maintenance mode (this happens automatically within the script) to stop us trying to insert into a table which is being altered. You can check on Grafana which clients are currently in maintenance mode - here. The retry buffer should slowly creep up during this time as incoming hits are queued. Once the migration finishes, the account is set to `maintenance=0` again, and the buffers will come down.
- As a final step, go to dash.withcubed.com and make sure everything is working as expected on production.
**Migration Failures**

We have the ability to roll back migrations if an error occurs while running them. Do so by running `sudo python3 manage.py migrate_client {migration_file_we_want_to_revert_to}`.
## Wrapping up

- Once we have confirmed that everything is working as expected, create a PR from `rc/<date>` to `master`. Once the PR has been approved by the team, the release candidate branch can be merged into `master`.
- Go to monday.com and mark all tasks under `Pending Deploy` as `Done`.
- Go to the `Sprint Review` board and review the sprint progress.
- As a final clean-up, delete the screen sessions on both the control and production boxes via `screen -X -S <screen-session-name> quit`.
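If several stale sessions have accumulated, you can list the detached ones first and then quit each by name. The sketch below parses invented `screen -ls`-style output; on the box, the real command is `screen -ls | awk '/Detached/ {print $1}'`, followed by `screen -X -S <name> quit` for each result.

```shell
# Extract detached session names from (invented) `screen -ls`-style output.
printf '%s\n' \
  'There are screens on:' \
  '	12345.deploy-a	(Detached)' \
  '	12399.deploy-b	(Detached)' \
  '2 Sockets in /run/screen/S-cubed.' \
  | awk '/Detached/ {print $1}'
```

This prints `12345.deploy-a` and `12399.deploy-b`, each of which can then be removed with `screen -X -S <name> quit`.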