
Description

To install PSKnowhow using Kubernetes, it is essential to have three mandatory containers: i) UI, ii) CustomAPI, and iii) MongoDB. These containers are required to ensure the core functionality of the system.

In addition to the mandatory containers, there are several optional containers that you can bring up based on your specific requirements:

  • jira-processor: Acts as a collector to gather data from Jira. If you need KPIs from Jira, you should include this container.

  • devops-processor: Includes collectors for various DevOps tools such as Jenkins, GitHub, GitLab, Bamboo, Bitbucket, Zephyr, Sonar, and TeamCity. Use this container if you need to collect data from any of these tools.

  • azure-board-processor: Collects data from Azure Board. Include this container if you need KPIs from Azure Board.

  • azure-pipeline-repo: Collects data from Azure Pipeline and Azure Repo. Use this container if you require data from these Azure services.

Additionally, there are three optional containers used together for single sign-on (SSO) which allows users to authenticate once and gain access to multiple services or applications using SAML. If you choose to install these containers, you must install all three of them. Otherwise, you can use Knowhow's built-in standard authentication mechanism.

  • AuthNAuth Backend: This container handles the backend services for authentication and authorization, integrating with your company's SAML for secure user verification.

  • AuthNAuth UI: This container provides the user interface for authentication and authorization, allowing users to interact with the login and access control features seamlessly.

  • Postgres: This container is for the PostgreSQL database, which stores user credentials, permissions, and other related authentication data securely.

NOTE: Based on specific requirements, you can bring up these respective containers as needed.

Resource requirement:

The cluster should have a minimum of 16GB RAM and 4 CPUs. The recommended configuration is 32GB RAM and 8 CPUs.

Per-component requirements:

  1. customapi: 8GB RAM & 2 CPU

  2. MongoDB: 2GB RAM & 1 CPU

  3. UI: 1GB RAM & 1 CPU

  4. jira-processor: 6GB RAM & 2 CPU

  5. devops-processor: 8GB RAM & 2 CPU

  6. azure-board-processor: 4GB RAM & 1 CPU

  7. azure-pipeline-repo-processor: 4GB RAM & 1 CPU
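These figures can be reflected directly in each Deployment through resource requests and limits. A minimal sketch for the customapi container, using the values above (requests and limits are set equal here as an assumption; tune them for your cluster):

Code Block
# Illustrative resources section for the customapi container
resources:
  requests:
    memory: "8Gi"
    cpu: "2"
  limits:
    memory: "8Gi"
    cpu: "2"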

Prerequisite:

  1. MongoDB instance: It is recommended to use cloud-provided services for MongoDB, like MongoDB Atlas or Azure Cosmos DB, to ensure reliability, scalability, and ease of management. However, for testing or non-production environments, you can also deploy MongoDB as a Kubernetes pod.

  2. DNS or IP Address for UI Access: To access the UI from a browser, you need a DNS name or an IP address. This can be either a LoadBalancer IP or a Gateway IP. Ensure that this IP address or DNS name is properly configured and accessible from your network.

Installation Steps

Please ensure you follow the sequence outlined below to complete the process successfully.

Step 1: Create ConfigMap for Pods

Download the ConfigMap file and add the required details:

View file: knowhow-configmap.yml

To ensure proper configuration of the pods, please fill in the details in knowhow-configmap.yml.

Make sure to provide all the environment variable key values and configuration details needed for your pods to function correctly.

NOTE: For a detailed explanation of the environment variables, refer to the environment variable document; check here.

Then run

Code Block
kubectl apply -f knowhow-configmap.yml
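For orientation, a minimal sketch of what the ConfigMap might look like is shown below. The key names are illustrative placeholders only; use the keys described in the environment variable document:

Code Block
apiVersion: v1
kind: ConfigMap
metadata:
  name: knowhow-config          # illustrative name; keep the name used in knowhow-configmap.yml
data:
  # Illustrative keys only; replace with the variables your pods require
  MONGODB_HOST: "mongodb"
  MONGODB_PORT: "27017"
  SERVER_PORT: "8080"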

Step 2: Deploy MongoDB Pod

When installing Knowhow in a Kubernetes (K8s) environment, it is recommended to use cloud-provided services for MongoDB, like MongoDB Atlas or Azure Cosmos DB, to ensure reliability, scalability, and ease of management. However, for testing or non-production environments, you can also deploy MongoDB as a Kubernetes pod.

NOTE: The cloud provider below is not supported for Knowhow:

  1. AWS DocumentDB

Cloud services that are tested and working for Knowhow:

  1. Azure Cosmos DB for MongoDB (vCore)

To create the MongoDB pod, download the attached YAML file:

  • This is the YAML file for MongoDB designed specifically for deployment on AWS EKS with an EFS file system for persistent storage. It utilizes static provisioning through Amazon EFS.

    View file: MongoDB-AWS-EFS.yml

Reference: AWS Docs for EKS with EFS

  • This is the YAML file for MongoDB designed specifically for deployment on AZURE AKS with an Azure Disk file system for persistent storage.

View file: mongodb-AZ-AKS-AD.yaml

Reference: AZ Docs for PV

  • Here is a sample MongoDB manifest file that uses a hostPath volume; it is recommended for testing only.

View file: mongodb.yaml

The YAML file specifies the name of the pod, the container image to use, the container port (27017) to expose, and the environment variables to use.

Then apply the configuration with the following command:

Code Block
kubectl apply -f mongodb.yaml
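For the testing-only hostPath option mentioned above, a minimal sketch of such a Pod is shown below. The image, credentials, and host path are illustrative placeholders; the attached mongodb.yaml remains the reference:

Code Block
apiVersion: v1
kind: Pod
metadata:
  name: mongodb
spec:
  containers:
  - name: mongodb
    image: mongo:5.0.18                    # illustrative; use the image from the attached manifest
    ports:
    - containerPort: 27017
    env:
    - name: MONGO_INITDB_ROOT_USERNAME     # assumes the standard mongo image variables
      value: "admin"
    - name: MONGO_INITDB_ROOT_PASSWORD
      value: "changeme"
    volumeMounts:
    - name: mongodb-data
      mountPath: /data/db
  volumes:
  - name: mongodb-data
    hostPath:
      path: /opt/mongodb-data              # illustrative host directory; testing only
      type: DirectoryOrCreate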

Step 3: Deploy customapi Pod

Download the customapi-deploy.yaml manifest file:

View file: customapi-deploy.yaml

The YAML file specifies the name of the Deployment, the container image to use, the container port 8080 to expose, and the environment variable for the MongoDB host to connect to.

Note: Please provide the latest image tag version in the image placeholders. The latest image version number can be found here: Docker hub repo

Then apply the configuration with the following command:

Code Block
kubectl apply -f customapi-deploy.yaml
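As a rough sketch of the shape such a Deployment takes (the image path, ConfigMap name, and variable name below are illustrative placeholders; the attached customapi-deploy.yaml is the reference):

Code Block
apiVersion: apps/v1
kind: Deployment
metadata:
  name: customapi
spec:
  replicas: 1
  selector:
    matchLabels:
      app: customapi
  template:
    metadata:
      labels:
        app: customapi
    spec:
      containers:
      - name: customapi
        image: <registry>/customapi:<image tag version>   # fill in the image placeholder
        ports:
        - containerPort: 8080
        envFrom:
        - configMapRef:
            name: knowhow-config          # illustrative ConfigMap name from Step 1
        env:
        - name: MONGODB_HOST              # illustrative variable name for the MongoDB host
          value: "mongodb"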

Step 4: Deploy the UI Containers

Deploy the UI containers in the same way as the customapi and MongoDB containers. Here is a YAML file for the ui container:

View file: ui.yaml

This sample manifest file uses a LoadBalancer Service: the ui container should run behind a LoadBalancer Service on port 443, while the remaining containers use the default Service type, ClusterIP. If you prefer not to use a LoadBalancer, run the ui Service as a ClusterIP as well and use an Ingress controller to route traffic to the UI pod; an Ingress controller manifest file is attached for reference. The YAML file specifies the name of the pod, the container image to use, and the container port to expose.

View file: ingress.yaml

Note: Please provide the latest image tag version in the image placeholders. The latest image version number can be found here: docker hub Repo

Then apply the configuration with the following command:

Code Block
kubectl apply -f ui.yaml
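If you take the ClusterIP-plus-Ingress route instead of a LoadBalancer, the two objects could look roughly like the sketch below (Service name, host name, and ports are illustrative; the attached ui.yaml and ingress.yaml are the reference):

Code Block
apiVersion: v1
kind: Service
metadata:
  name: ui
spec:
  type: ClusterIP
  selector:
    app: ui
  ports:
  - name: https
    port: 443
    targetPort: 443
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: knowhow-ui
spec:
  rules:
  - host: knowhow.example.com              # illustrative DNS name for UI access
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: ui
            port:
              number: 443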

Step 5: Deploy the Processor Containers

Attached is the list of all the processors you may run.

Jira-Processor

View file: jira-processor.yaml

Note: Please provide the latest image tag version in the image placeholders. The latest image version number can be found here: Dockerhub repo

Then apply the configuration with the following command:

Code Block
kubectl apply -f jira-processor.yaml

Devops-processor

View file: devops-processor.yaml

Note: Please provide the latest image tag version in the image placeholders. The latest image version number can be found here: Dockerhub repo

Code Block
kubectl apply -f devops-processor.yaml

Azure-board-processor

View file: azure-board-processor.yaml

Note: Please provide the latest image tag version in the image placeholders. The latest image version number can be found here: Dockerhub repo

Code Block
kubectl apply -f azure-board-processor.yaml

Azure-pipeline-repo-Processor

View file: azure-pipeline-repo.yaml

Note: Please provide the latest image tag version in the image placeholders. The latest image version number can be found here: Dockerhub repo

Code Block
kubectl apply -f azure-pipeline-repo.yaml
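All of the processor manifests above follow the same basic Deployment shape. A hedged sketch, using jira-processor as the example (the image path and ConfigMap name are illustrative placeholders; the attached manifests are the reference):

Code Block
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jira-processor
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jira-processor
  template:
    metadata:
      labels:
        app: jira-processor
    spec:
      containers:
      - name: jira-processor
        image: <registry>/jira-processor:<image tag version>   # fill in the image placeholder
        envFrom:
        - configMapRef:
            name: knowhow-config          # illustrative ConfigMap name from Step 1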


Step 6: Installing the Authentication and Authorization App

NOTE: If you want to authenticate and authorize Knowhow users with your company's SAML, install the central login containers below. Otherwise, you can skip this step, and your installation is complete.

AuthNAuth Backend

The manifest below is for the Deployment and Service of the AuthNAuth backend API service.

View file: auth-api.yaml

AuthNAuth UI

The manifest below is for the Deployment and Service of the AuthNAuth UI service.

View file: auth-ui.yaml

AuthNAuth PostgresDB

When installing Knowhow AuthNAuth in a Kubernetes (K8s) environment, it is recommended to use cloud-provided services for Postgres, like Azure Cosmos DB for PostgreSQL or Azure Flexible Server, to ensure reliability, scalability, and ease of management. However, for testing or non-production environments, you can also deploy Postgres as a Kubernetes pod.
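For a test or non-production setup, a minimal sketch of such a Postgres Deployment is shown below. The image tag, credentials, and claim name are illustrative placeholders; in practice, credentials should come from a Secret:

Code Block
apiVersion: apps/v1
kind: Deployment
metadata:
  name: authnauth-postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: authnauth-postgres
  template:
    metadata:
      labels:
        app: authnauth-postgres
    spec:
      containers:
      - name: postgres
        image: postgres:14                  # illustrative tag
        ports:
        - containerPort: 5432
        env:
        - name: POSTGRES_USER
          value: "authnauth"                # illustrative credentials; use a Secret in practice
        - name: POSTGRES_PASSWORD
          value: "changeme"
        volumeMounts:
        - name: pg-data
          mountPath: /var/lib/postgresql/data
      volumes:
      - name: pg-data
        persistentVolumeClaim:
          claimName: authnauth-postgres-pvc   # illustrative claim name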

Step 7: Install repo tool

View file: debbie-knowhow.yml
View file: repotool-django.yml
View file: debbie-rabbitmq.yml

Verify the Deployment

You can verify that the containers are running by running the following command:

Code Block
kubectl get pod


To persist the MongoDB data, you can use your preferred cloud provider's storage solution. Here are the steps you can follow:

  1. Create a persistent volume and claim in your cloud provider's storage solution. This will provide a storage location that will persist even if the MongoDB pod is deleted.

  2. Modify the MongoDB YAML file to use the persistent volume. Here's an example of how to modify the YAML file:

Code Block
apiVersion: v1
kind: Pod
metadata:
  name: mongodb
spec:
  replicas: 1
  containers:
  - name: mongodb
    image: setup-speedy.tools.publicis.sapient.com/speedy/mongodb:latest
    ports:
    - containerPort: 27017
    volumeMounts:
    - name: mongodb-data
      mountPath: /data/db
  volumes:
  - name: mongodb-data
    persistentVolumeClaim:
      claimName: mongodb-pvc

The volumeMounts section specifies where the persistent volume should be mounted inside the container. The volumes section specifies the name of the volume and where it should be claimed from.

  3. Create the persistent volume claim by running the following command:

Code Block
kubectl apply -f mongodb-pvc.yaml

Here's an example YAML file for the persistent volume claim:

Code Block
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongodb-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi

The YAML file specifies the name of the persistent volume claim, the access mode, and the requested storage size.

By following these steps, you can persist the MongoDB data in your preferred cloud provider's storage solution.
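When you use static provisioning (as with the EFS option in Step 2), the claim also needs a matching PersistentVolume. A hedged example using the AWS EFS CSI driver is shown below; the file-system ID is a placeholder, and for static binding the claim should also set storageClassName: "" (and optionally volumeName: mongodb-pv):

Code Block
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongodb-pv
spec:
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: ""                     # empty class for static binding
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-0123456789abcdef0     # placeholder EFS file-system ID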


Upgrade Steps:

  1. If you are upgrading PSknowhow from 7.0.0 to 7.x.x, please execute the step below; otherwise, go directly to step 2.

    Code Block
    kubectl exec -it <Mongodb Pod name> -- sh
    mongo admin --username="${MONGODB_ADMIN_USER}" --password="${MONGODB_ADMIN_PASS}" --eval "db.shutdownServer()"
  2. Edit the deployments in the following order, using the command below for each:
    mongodb
    customapi
    ui
    jira-processor
    devops-processor
    azure-pipeline-repo
    azure-board-processor

    Code Block
    kubectl edit deploy <Deploy name> -o yaml
  3. Replace the tag version with the latest version in the image section (see the example snippet after this list).

  4. Check the environment variable section in the current manifest file and add any new variables that are required (refer to this doc), then save it.
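For example, inside the Deployment opened with kubectl edit, only the tag in the image field changes (the image path shown is a placeholder):

Code Block
    spec:
      containers:
      - name: customapi
        image: <registry>/customapi:<latest tag version>   # replace the previous tag with the latest release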

Base Image

  1. customapi, with amazoncorretto:17 as the base image, handles API requests and runs on port 8080.

  2. ui, with the nginx:1.22.1-alpine-slim base image, proxy-passes to the customapi and ui components and runs on ports 80 & 443.

  3. mongodb, with mongo:5.0.18 as the base image, stores data and runs on port 27017.

  4. Jira-processor, with the amazoncorretto:17 base image, is a Jira collector.

  5. devops-processor, with the amazoncorretto:17 base image, collects from Jenkins, GitHub, GitLab, Bamboo, Bitbucket, Zephyr, Sonar, and TeamCity.

  6. azure-board-processor, with the amazoncorretto:17 base image, collects from Azure Board.

  7. azure-pipeline-repo, with amazoncorretto:17 as the base image, collects from Azure Pipeline and Azure Repo.

  8. scm-processor-api: a python:3.8 application that calculates KPI metrics for different SCM tools like GitHub, GitLab & Bitbucket.

  9. scm-processor-core: a python:3.8 application that collects raw data from SCM tools like GitHub, GitLab & Bitbucket and saves it.

  10. scm-processor-postgres: version 11.1, used to store repotool-related data (only required when repotool is installed).

  11. scm-processor-rabbitmq: version 3.8-management, a job scheduler used by the repotool-knowhow application (only required when repotool is installed).

Architecture diagram Of AWS EKS KnowHOW Kubernetes Cluster

Drawio diagram: Untitled Diagram-1694005040953.drawio

Architecture diagram Of Azure AKS KnowHOW Kubernetes Cluster

Drawio diagram: AKS.drawio