
Backend Deploy

The backend project is the project we deploy most often, as it is where most of the work lands (control and dash). We usually deploy once or twice a month, depending on workload. The development branch is where all new work is merged. This document outlines the steps required to deploy these changes to our staging and production environments. We deploy to staging first to check everything works as expected; if all is successful, we then deploy to our production environments.

Staging

  1. Pull the latest changes from the development branch.
  2. Create a new branch from development with the following template rc/<date> (e.g. rc/2001-01-01) where rc stands for release candidate.
  3. Push the new branch to the remote repository.
  4. In Bitbucket, run a pipeline build of the new release candidate branch for both the control-staging and dash-staging pipelines.
  5. Once the builds are successful, ssh into the CUBED_STGE_CONTROL box via ssh cubed@<cubed-stge-control-host> -i <path-to-ppk>. If you have not been supplied with a ppk file, please contact your team lead.
  6. Once in the box, change to the backend directory: cd /srv/attrib-backend/backend/.
  7. Instantiate a screen session via screen -S <screen-session-name>. Documentation on Linux screens can be found here and it is strongly encouraged that you familiarise yourself with them. If the session is already running, you can attach to it via screen -r <screen-session-name>. To detach from a screen session, press Ctrl + A then D.
  8. Run migrations via sudo python3 manage.py migrate_client --loaddata --noinput. We may also need to run migrations for other apps (Seopt etc.); check this by running sudo python3 manage.py migrate --list and looking for any unapplied migrations. A consolidated sketch of the SSH and migration steps follows this list.
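
For reference, a minimal end-to-end sketch of steps 5 to 8. The session name and placeholder values are illustrative; substitute your own host, key path and release date.

    ssh cubed@<cubed-stge-control-host> -i <path-to-ppk>         # connect to the staging control box
    cd /srv/attrib-backend/backend/                              # backend project directory
    screen -S rc-2001-01-01                                      # new session (screen -r rc-2001-01-01 to re-attach)
    sudo python3 manage.py migrate --list                        # check for unapplied migrations
    sudo python3 manage.py migrate_client --loaddata --noinput   # run the client migrations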

Before running migrations

Whenever a developer is running migrations, they should always be in a screen session. This is to ensure that the migrations are not interrupted if the connection is lost.

The final command will run migrations against a few staging accounts, which allows us to "test" the migrations before running them on production. As a final precaution, go to staging.withcubed.com and make sure everything is working as expected on dash.
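
If your connection does drop, the migration keeps running inside the screen session; a minimal sketch of picking it back up (the session name is illustrative):

    screen -ls                   # list running sessions on the box
    screen -r rc-2001-01-01      # re-attach to the migration session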

Running migrations on different apps

An exhaustive list of migration commands to run for a given app is as follows (a worked example follows the list):

  • Base: sudo python3 manage.py migrate base
  • Client: sudo python3 manage.py migrate_client --loaddata --noinput
  • Seopt: sudo python3 manage.py migrate seopt --database=seopt
  • Command/Conductor: sudo python3 manage.py migrate command --database=command
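
Whether any of these need running depends on the release. A minimal sketch of checking for, and then applying, an outstanding Seopt migration (the commands are taken from the list above; run a per-app command only if that app has unapplied migrations):

    sudo python3 manage.py migrate --list                   # check which migrations are still unapplied
    sudo python3 manage.py migrate seopt --database=seopt   # run only if seopt has unapplied migrations
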
  9. As a final step, go to staging.withcubed.com and make sure everything is working as expected on staging.

No accounts showing in dash?

If no accounts are shown when logging into dash, it is likely you have an account configuration issue with database access to 10.3.63.31. Please contact Positive to confirm your user has access to all databases on this server.

Production

  1. Run a pipeline build of branch rc/<date> for both the control-prod and dash-prod pipelines. We usually run control-prod before dash-prod, because unapplied migrations on dash will cause the frontend to crash in certain scenarios.
  2. After the builds are successful, ssh into the CUBED_CONTROL_A box via ssh cubed@<cubed-prod-control-host> -i <path-to-ppk>. If you have not been supplied with a ppk file, please contact your team lead.
  3. Again, change to the backend directory: cd /srv/attrib-backend/backend/.
  4. Instantiate a screen session via screen -S <screen-session-name>. Documentation on Linux screens can be found here and it is strongly encouraged that you familiarise yourself with them. If the session is already running, you can attach to it via screen -r <screen-session-name>. To detach from a screen session, press Ctrl + A then D.
  5. Run migrations via sudo python3 manage.py migrate_client --loaddata --noinput. We may also need to run migrations for other apps (Seopt etc.); check this by running sudo python3 manage.py migrate --list and looking for any unapplied migrations. Before migrations take place, the account is put into maintenance mode (this happens automatically within the script) to stop us inserting into a table which is being altered. You can check on Grafana which clients are currently in maintenance mode - here. The retry buffer should slowly creep up during this time as incoming hits are queued. Once the migration finishes, the account is set to maintenance=0 again and the buffers will come down.
  6. As a final step, go to dash.withcubed.com and make sure everything is working as expected on production. A consolidated sketch of the SSH and migration steps follows this list.
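
As with staging, a minimal sketch of the production SSH and migration steps, assuming the same placeholder substitutions (host, key path, session name):

    ssh cubed@<cubed-prod-control-host> -i <path-to-ppk>         # connect to CUBED_CONTROL_A
    cd /srv/attrib-backend/backend/                              # backend project directory
    screen -S rc-2001-01-01                                      # run migrations inside a screen session
    sudo python3 manage.py migrate --list                        # check for unapplied migrations
    sudo python3 manage.py migrate_client --loaddata --noinput   # maintenance mode is toggled automatically by the script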

Migration Failures

We have the ability to roll back migrations if an error occurs while they are running. To do so, run sudo python3 manage.py migrate_client {migration_file_we_want_to_revert_to}.
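
For example, assuming the client app was previously on a hypothetical 0042_previous_state migration, you would revert to it with:

    sudo python3 manage.py migrate_client 0042_previous_state    # 0042_previous_state is an illustrative migration name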

Wrapping up

  1. Once we have confirmed that everything is working as expected, create a PR from rc/<date> to master. Once the PR has been approved by the team, the release candidate branch can be merged into master.
  2. Go to monday.com and mark all tasks under Pending Deploy as Done.
  3. Go to the Sprint Review board and review the sprint progress.
  4. As a final clean up, delete the screen sessions on both the staging and production control boxes via screen -X -S <screen-session-name> quit. A short example follows.
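
If you cannot remember the session name, a quick way to find and remove it (the name below is illustrative):

    screen -ls                           # list any leftover sessions on the box
    screen -X -S rc-2001-01-01 quit      # kill the named session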