Step 1 - PSKnowHOW System Requirements

Please check the system requirements here: PSknowHOW | System Requirements

Step 2 - Release Notes

Please read the release notes before starting the upgrade.

Step 3 - Create folder structure

To create the "PSKnowhow" directory on Linux, use the mkdir command in a terminal:

mkdir -p /location/PSKnowhow

Replace "/location/PSKnowhow" with your preferred path; the -p flag creates any missing parent directories. Make sure you have permission to create directories at that location and that it has enough free disk space.

Step 4 - Download docker-compose.yaml
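
If the compose file for your release is published at a URL, you can fetch it directly into the folder created in Step 3. The URL below is only a placeholder; substitute the actual link from the release notes or the project repository:

    # Placeholder URL - replace with the real docker-compose.yaml link for your release
    curl -o /location/PSKnowhow/docker-compose.yaml https://example.com/path/to/docker-compose.yaml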

Step 5 - Edit the docker compose file with appropriate values

  • Update the credentials of the DB and other environment variables as specified here: Environmental variables

  • To begin with PSKnowhow, three containers are mandatory: i) UI, ii) CustomAPI, and iii) MongoDB. The remaining containers are optional: jira-processor, devops-processor (which includes the Jenkins, GitHub, GitLab, Bamboo, Bitbucket, Zephyr, Sonar, and TeamCity collectors), azure-board-processor (for Azure Board), and azure-pipeline-repo (for Azure Pipeline and Azure Repo).
    Bring up the optional containers only as your requirements demand; a sketch of the mandatory services follows this list.
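
As an illustration, a stripped-down compose file covering only the mandatory services might look like the sketch below. The service names, images, tags, ports, and credentials are assumptions for illustration, not the official PSKnowHOW compose file; use the values from the docker-compose.yaml downloaded in Step 4.

    # Illustrative sketch only - service names, images, and values are placeholders,
    # not the official PSKnowHOW compose file.
    version: "3"
    services:
      mongodb:
        image: mongo:5.0                      # placeholder image and tag
        environment:
          MONGO_INITDB_ROOT_USERNAME: admin   # replace with your own DB credentials
          MONGO_INITDB_ROOT_PASSWORD: change-me
        volumes:
          - ./data/db:/data/db                # persist database files on the host
      customapi:
        image: psknowhow/customapi:latest     # placeholder image and tag
        depends_on:
          - mongodb
      ui:
        image: psknowhow/ui:latest            # placeholder image and tag
        ports:
          - "80:80"                           # expose the UI on the host
        depends_on:
          - customapi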

Step 6 - Pull the docker images and Run the Containers

  • Open a terminal/command prompt in the PSKnowhow folder.

  • Pull the images:

    docker-compose pull

  • Run the containers:

    docker-compose up -d
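
Once the containers are up, you can verify their state with the standard Docker Compose commands below; the service name ui is taken from the list in Step 5, so adjust it to match your compose file:

    docker-compose ps             # list the containers and their current state
    docker-compose logs -f ui     # follow the logs of a single service, e.g. the UI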

Step 7 - Run DB Backup Script

  • Download the dbbackup.sh shell script to the /var/knh directory on the server.

  • Make the script executable using the chmod command:

chmod +x /var/knh/dbbackup.sh
  • To automate the backup process, use the crontab scheduling utility. The following entry schedules the script to run every night at 11:00 PM, Monday through Sunday.

(crontab -u root -l; echo "0 23 * * 1-7 /var/knh/dbbackup.sh") | crontab -u root -

With this configuration, the script executes at 11:00 PM every day of the week and writes its backups to the /var/backups directory.
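
The actual dbbackup.sh is distributed with PSKnowHOW. As a rough illustration only, a minimal MongoDB backup script along these lines could look like the following; the container name, credentials, and file naming are assumptions, not the shipped script:

    #!/bin/bash
    # Illustrative sketch only - not the shipped dbbackup.sh.
    # Assumes the MongoDB container is named "mongodb" and that the credentials
    # match the ones set in your docker-compose.yaml.
    BACKUP_DIR=/var/backups
    STAMP=$(date +%Y%m%d_%H%M%S)
    mkdir -p "$BACKUP_DIR"
    # mongodump with --archive writes the dump to stdout; redirect it to a file on the host
    docker exec mongodb mongodump --username admin --password change-me \
        --authenticationDatabase admin --archive > "$BACKUP_DIR/knowhow_$STAMP.archive"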

Step 8 - Backing up to External Cloud Storage

While performing regular backups on your server is crucial, ensuring the safety of your data in the event of server failures or data breaches requires a multi-tiered approach. Consider implementing an off-site backup strategy by utilizing cloud storage services like Amazon S3. This safeguards your data by storing it in a separate location. Here's how you can set up a backup to AWS S3:

Setting Up Off-Site Backup to AWS S3

  1. Create an AWS S3 Bucket: Begin by creating an Amazon S3 bucket to store your database backups. This bucket should be located in a different region from your server to enhance data redundancy.

  2. Configure AWS CLI: Install and configure the AWS Command Line Interface (CLI) on your server to enable communication between your server and AWS services.

  3. Modify Backup Script: Edit your dbbackup.sh script to include commands for transferring the backup files to the S3 bucket. Use the AWS CLI to upload the backup file to your S3 bucket.

aws s3 cp /var/backups/backup_filename.sql s3://your-s3-bucket-name/
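
For example, the upload could be appended to the end of dbbackup.sh so that each backup is pushed off-site as soon as it is created. The bucket name and file-naming convention below are placeholders, and the command assumes the AWS CLI has already been configured as described in step 2 above:

    # Illustrative addition to dbbackup.sh - bucket name and file name are placeholders
    BACKUP_FILE="/var/backups/knowhow_$(date +%Y%m%d_%H%M%S).archive"
    aws s3 cp "$BACKUP_FILE" s3://your-s3-bucket-name/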

Step 9 - Check the Post installation steps for further configuration.
