Description
To install PSKnowhow using Kubernetes, it is essential to have three mandatory containers: i) UI, ii) CustomAPI, and iii) MongoDB. These containers are required to ensure the core functionality of the system.
...
jira-processor: Acts as a collector to gather data from Jira. If you need KPIs from Jira, you should include this container.
devops-processor: Includes collectors for various DevOps tools such as Jenkins, GitHub, GitLab, Bamboo, Bitbucket, Zephyr, Sonar, and TeamCity. Use this container if you need to collect data from any of these tools.
azure-board-processor: Collects data from Azure Board. Include this container if you need KPIs from Azure Board.
azure-pipeline-repo: Collects data from Azure Pipeline and Azure Repo. Use this container if you require data from these Azure services.
Additionally, there is a set of four optional containers that are used together for repository management. If you choose to install repotool-django, you must also install the following containers:
repotool-django: An optional container for repository management.
repotool-knowhow: Another optional container for repository management.
Postgres: An optional container for PostgreSQL database.
rabbitMQ: An optional container for RabbitMQ message broker.
A further optional set of containers provides single sign-on (SSO), which allows users to authenticate once and gain access to multiple services or applications using SAML. If you choose to install SSO, you must install all three of the containers below. Otherwise, you can use KnowHOW's built-in standard authentication mechanism.
AuthNAuth Backend: This container handles the backend services for authentication and authorization, integrating with your company's SAML for secure user verification.
AuthNAuth UI: This container provides the user interface for authentication and authorization, allowing users to interact with the login and access control features seamlessly.
Postgres: This container is for the PostgreSQL database, which stores user credentials, permissions, and other related authentication data securely.
NOTE: Based on specific requirements, you can bring up these respective containers as needed.
Resource requirement:
The cluster should have a minimum of 16 GB RAM and 4 CPUs. The recommended configuration is 32 GB RAM and 8 CPUs.
Prerequisite:
MongoDB instance: It is recommended to use a cloud-managed MongoDB service such as MongoDB Atlas or Azure Cosmos DB to ensure reliability, scalability, and ease of management. However, for testing or non-production environments, you can also deploy MongoDB as a Kubernetes pod.
DNS or IP Address for UI Access: To access the UI from a browser, you need a DNS name or an IP address. This can be either a LoadBalancer IP or a Gateway IP. Ensure that this IP address or DNS name is properly configured and accessible from your network.
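For illustration only, one common way to obtain a LoadBalancer IP for the UI is a Service of type LoadBalancer. This is a generic sketch, not an official KnowHOW manifest; the service name, selector labels, and ports are placeholders that must match your actual UI deployment:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ui-service            # hypothetical name
spec:
  type: LoadBalancer          # the cloud provider allocates an external IP
  selector:
    app: ui                   # must match the labels on your UI pods
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443
```

Once the external IP is assigned (`kubectl get svc`), point your DNS record at it.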
Installation Steps
Please ensure you follow the sequence outlined below to complete the process successfully.
Step 1: Create Configmap For pods
Download the ConfigMap file and add the required details.
...
Make sure to pass all the environment variable key-value pairs and configuration details needed for your pods to function correctly.
NOTE: For a detailed explanation of each environment variable, refer to the Environment Variables document here.
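As an illustration of the general shape only, a ConfigMap for the pods looks like the following. Every key and value below is a placeholder; the real variable names come from the Environment Variables document:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: knowhow-config        # must match the name referenced by the Deployments
data:
  # Placeholder keys -- replace with the variables listed in the
  # Environment Variables document for your installation.
  MONGODB_HOST: "mongodb-service"
  MONGODB_PORT: "27017"
  SERVER_PORT: "8080"
```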
...
```shell
kubectl apply -f knowhow-configmap.yml
```
Step 2: Deploy MongoDB Pod
When installing KnowHOW in a Kubernetes (K8s) environment, it is recommended to use a cloud-managed MongoDB service such as MongoDB Atlas or Azure Cosmos DB to ensure reliability, scalability, and ease of management. However, for testing or non-production environments, you can also deploy MongoDB as a Kubernetes pod.
NOTE: The following cloud provider is not supported by KnowHOW:
AWS DocumentDB
Cloud services tested and confirmed working with KnowHOW:
Azure Cosmos DB for MongoDB (vCore)
To create the MongoDB pod, download the attached YAML file.
...
...
This is the YAML file for MongoDB designed specifically for deployment on AWS EKS with an EFS file system for persistent storage. It utilizes static provisioning through Amazon EFS.
View file: MongoDB-AWS-EFS.yml
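For reference, static EFS provisioning generally follows the pattern below. This is a sketch, not the attached file; it assumes the AWS EFS CSI driver is installed in the cluster, and the filesystem ID, storage class name, and size are placeholders:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongodb-pv
spec:
  capacity:
    storage: 20Gi                        # placeholder size
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  csi:
    driver: efs.csi.aws.com              # requires the AWS EFS CSI driver
    volumeHandle: fs-0123456789abcdef0   # replace with your EFS filesystem ID
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongodb-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: efs-sc               # matches the PV above for static binding
  resources:
    requests:
      storage: 20Gi
```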
...
```shell
kubectl apply -f mongodb.yaml
```
Step 3: Deploy customapi Pod
Download the customapi-deploy.yaml manifest file
...
The YAML file specifies the name of the Deployment, the container image to use, the container port 8080 to expose, and the environment variable for the MongoDB host to connect to.
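Schematically, a manifest of that shape looks like the following. This is a sketch for orientation only, not the attached file; the image repository path, tag, labels, and environment variable name are placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: customapi
spec:
  replicas: 1
  selector:
    matchLabels:
      app: customapi
  template:
    metadata:
      labels:
        app: customapi
    spec:
      containers:
        - name: customapi
          image: psknowhow/customapi:<latest-tag>   # placeholder -- see the Docker Hub repo
          ports:
            - containerPort: 8080                   # API port exposed by the container
          env:
            - name: MONGODB_HOST                    # hypothetical variable name
              value: "mongodb-service"
```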
Note: Please provide the latest image tag version in the image placeholder in the manifest file. The latest image version number can be found here: Docker Hub repo.
Then apply the configuration with the following command:
```shell
kubectl apply -f customapi-deploy.yaml
```
Step 4: Deploy the UI Containers
Deploy the UI containers in the same way as the customapi and MongoDB containers. Here is a YAML file for the UI container:
...
View file: ui.yaml
Note: Please provide the latest image tag version in the image placeholder. The latest image version number can be found here: Docker Hub repo.
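As a quick structural reference only (not the attached file; the image path, tag, and labels are placeholders), the UI Deployment follows the same pattern as customapi, exposing the nginx ports:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ui
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ui
  template:
    metadata:
      labels:
        app: ui
    spec:
      containers:
        - name: ui
          image: psknowhow/ui:<latest-tag>   # placeholder -- see the Docker Hub repo
          ports:
            - containerPort: 80              # HTTP
            - containerPort: 443             # HTTPS
```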
...
```shell
kubectl apply -f ui.yaml
```
Step 5: Deploy the Processor Containers
Below is the list of all the processors you may run.
Jira-Processor
View file: jira-processor.yaml
Note: Please provide the latest image tag version in the image placeholder. The latest image version number can be found here: Docker Hub repo.
...
```shell
kubectl apply -f jira-processor.yaml
```
Devops-processor
View file: devops-processor.yaml
Note: Please provide the latest image tag version in the image placeholder. The latest image version number can be found here: Docker Hub repo.
```shell
kubectl apply -f devops-processor.yaml
```
Azure-board-processor
View file: azure-board-processor.yaml
Note: Please provide the latest image tag version in the image placeholder. The latest image version number can be found here: Docker Hub repo.
```shell
kubectl apply -f azure-board-processor.yaml
```
Azure-pipeline-repo-Processor
View file: azure-pipeline-repo.yaml
Note: Please provide the latest image tag version in the image placeholder. The latest image version number can be found here: Docker Hub repo.
```shell
kubectl apply -f azure-pipeline-repo.yaml
```
...
Step 6: Installing the Authentication and Authorization App
NOTE: If you want to authenticate and authorize KnowHOW users with your company's SAML, install the central login containers below. Otherwise, you can skip this step, and your installation is complete.
AuthNAuth Backend
The manifest below defines the Deployment and Service for the AuthNAuth backend API service.
View file (attached AuthNAuth backend manifest)
AuthNAuth UI
The manifest below defines the Deployment and Service for the AuthNAuth UI service.
View file (attached AuthNAuth UI manifest)
AuthNAuth PostgresDB
When installing KnowHOW AuthNAuth in a Kubernetes (K8s) environment, it is recommended to use a cloud-managed Postgres service such as Azure Cosmos DB for PostgreSQL or Azure Database for PostgreSQL Flexible Server to ensure reliability, scalability, and ease of management. However, for testing or non-production environments, you can also deploy Postgres as a Kubernetes pod.
Step 7: Install repo tool
View file (attached repo tool manifest)
View file (attached repo tool manifest)
View file (attached repo tool manifest)
Verify the Deployment
To verify that the containers are running, run the following command:
```shell
kubectl get pod
```
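A quick way to spot unhealthy pods is to filter the STATUS column of that output. The sketch below uses a hard-coded sample of `kubectl get pod` output so it can be read on its own; the pod names are made up, and in a real cluster you would replace the variable with the actual command output:

```shell
# Sketch: flag pods whose STATUS column is not Running or Completed.
# The sample below is illustrative; in a real cluster use:
#   sample_pods=$(kubectl get pod --no-headers)
sample_pods='mongodb-7c9d5c6b4-x2k8p    1/1   Running            0   5m
customapi-5f6b7d8c9-q1w2e   1/1   Running            0   4m
ui-6a7b8c9d0-z9y8x          0/1   CrashLoopBackOff   3   4m'

# awk splits on whitespace; $1 is the pod name, $3 is the STATUS column.
echo "$sample_pods" | awk 'NF && $3 != "Running" && $3 != "Completed" { print "NOT READY: " $1 " (" $3 ")" }'
# -> NOT READY: ui-6a7b8c9d0-z9y8x (CrashLoopBackOff)
```

Any pod flagged this way can then be inspected with `kubectl describe pod <name>` and `kubectl logs <name>`.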
...
By following these steps, you can persist the MongoDB data in your preferred cloud provider's storage solution.
...
Upgrade Steps:
If you are upgrading PSKnowhow from 7.0.0 to 7.x.x, please execute the step below; otherwise, proceed to step 2.
```shell
kubectl exec -it <Mongodb Pod name> sh
mongo admin --username="${MONGODB_ADMIN_USER}" --password="${MONGODB_ADMIN_PASS}" --eval "db.shutdownServer()"
```
Edit the deployments in the following order:
mongodb
customapi
ui
jira-processor
devops-processor
azure-pipeline-repo
azure-board-processor
by running:

```shell
kubectl edit deploy <Deploy name> -o yaml
```
Replace the tag version with the latest version in the image section.
Check the environment variables section and add any new variables required by the current manifest file (refer to this doc), then save it.
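As an alternative to opening each deployment in an editor, the image tag can be updated with `kubectl set image`. The sketch below only prints the commands in the required order rather than executing them, so you can review them first; the image path `psknowhow/<name>` and the example tag are placeholders that must be checked against your manifests and the Docker Hub repo:

```shell
# Sketch: print (without executing) image-update commands in the required
# upgrade order. Image repository path and tag are placeholders.
NEW_TAG="8.0.0"   # example tag -- substitute the latest release
upgrade_cmds=""
for dep in mongodb customapi ui jira-processor devops-processor azure-pipeline-repo azure-board-processor; do
  upgrade_cmds="${upgrade_cmds}kubectl set image deployment/$dep $dep=psknowhow/$dep:$NEW_TAG
"
done
printf '%s' "$upgrade_cmds"   # review the commands, then run them one by one
```

Running each printed command and then `kubectl rollout status deployment/<name>` preserves the same ordering as the manual edit procedure above.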
Base Image
customapi: uses amazoncorretto:17 as the base image; handles API requests and runs on port 8080.
ui: uses nginx:1.22.1-alpine-slim as the base image; proxy-passes to the customapi and ui components and runs on ports 80 & 443.
mongodb: uses mongo:5.0.18 as the base image; stores data and runs on port 27017.
jira-processor: uses amazoncorretto:17 as the base image; the Jira collector.
devops-processor: uses amazoncorretto:17 as the base image; collects Jenkins, GitHub, GitLab, Bamboo, Bitbucket, Zephyr, Sonar, and TeamCity data.
azure-board-processor: uses amazoncorretto:17 as the base image; collects Azure Board data.
azure-pipeline-repo: uses amazoncorretto:17 as the base image; collects Azure Pipeline and Azure Repo data.
scm-processor-api: a python:3.8 application that calculates KPI metrics for SCM tools such as GitHub, GitLab, and Bitbucket.
scm-processor-core: a python:3.8 application that collects raw data from SCM tools such as GitHub, GitLab, and Bitbucket and saves it.
scm-processor-postgres: version 11.1 is used to store repo-tool-related data (only required when the repo tool is installed).
scm-processor-rabbitmq: version 3.8-management; a job scheduler used by the repotool-knowhow application (only required when the repo tool is installed).
Architecture diagram Of AWS EKS KnowHOW Kubernetes Cluster
(Embedded draw.io architecture diagram)
Architecture diagram Of Azure AKS KnowHOW Kubernetes Cluster
(Embedded draw.io architecture diagram)