On-Prem Installation Guidelines
This guide provides comprehensive instructions for deploying eSignet. eSignet operates as a collection of microservices hosted within Kubernetes clusters to ensure scalability, modularity, and high availability.
The deployment process includes the following key components and configurations:
Wireguard: used as a trust network extension to access the admin, control, and observation planes, and to provide on-field registration client connectivity to the backend server.
Nginx Server: eSignet uses the Nginx server for the following (a sample configuration sketch is shown after this list):
SSL termination
Reverse Proxy
CDN/Cache management
Load balancing
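For illustration only, the sketch below shows what a minimal Nginx server block handling SSL termination and reverse proxying could look like; the domain, certificate paths, and upstream ingress address are hypothetical placeholders rather than values prescribed by this guide.

```nginx
# Illustrative sketch only: SSL termination and reverse proxy to an in-cluster ingress.
# Domain, certificate paths and upstream address are hypothetical placeholders.
server {
    listen 443 ssl;
    server_name esignet.sandbox.xyz.net;

    # Wildcard certificate stored on the Nginx VM (see the SSL certificate requirements below)
    ssl_certificate     /etc/ssl/certs/sandbox.xyz.net.crt;
    ssl_certificate_key /etc/ssl/private/sandbox.xyz.net.key;

    location / {
        # Forward decrypted traffic to the ingress of the eSignet K8s cluster
        proxy_pass http://10.10.20.100:30080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```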
Kubernetes (K8s) Cluster: covers K8s cluster creation, configuration, and administration.
The K8s cluster is created using the and tools.
Two K8s clusters are used in the reference implementation architecture:
Observation K8s cluster
eSignet application K8s cluster
Cluster Configurations:
Ingress Setup: For exposing application services outside the K8s cluster.
Storage Class Setup: Sets up the storage class used for persistence in the K8s cluster.
Logging System: Continuously collects logs from all pods as needed.
Monitoring System: Continuously monitors the cluster and generates graphs to better manage the application and cluster.
Alerting: Users are notified about crucial events as and when needed.
The Observation K8s cluster contains:
Rancher UI: application used to create and manage K8s clusters. This is needed only once for an organization, as it can easily manage multiple dev, QA, and production K8s clusters.
Keycloak: IAM tool used to define RBAC policies that govern access to Rancher.
eSignet cluster: This cluster hosts all eSignet components, along with certain third-party components that secure the cluster, its APIs, and its data.
eSignet Pre-requisites:
These are the services required to deploy the various eSignet modules:
eSignet-prerequisites: services required for esignet-service and oidc-ui deployment.
eSignet-mock-prerequisites: services required for mock-relying-party and mock-relying-party-ui deployment.
eSignet-signup: services required for esignet-signup-service and esignet-signup-ui deployment.
eSignet Services deployment (a sample Helm-based installation sketch follows this list):
esignet-service and oidc-ui deployment.
Onboarding MISP partner for eSignet service.
mock-relying-party-service and mock-relying-party-ui deployment.
Onboarding mock-relying-party.
esignet-signup-service and esignet-signup-ui deployment.
Onboarding MISP partner for esignet-signup-partner.
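For orientation only, the sketch below shows what a Helm-based installation of the core services could look like; the chart repository URL, chart names, namespace, and versions are assumptions, and the deployment scripts referenced later in this guide remain the authoritative procedure.

```bash
# Rough sketch only: repository URL, chart names and namespace are assumptions;
# follow the official deployment scripts and the compatibility matrix for the
# authoritative procedure and chart versions.
helm repo add mosip https://mosip.github.io/mosip-helm   # assumed chart repository
helm repo update

kubectl create namespace esignet

# Deploy the eSignet backend and the OIDC UI (chart names assumed);
# pin --version as per the compatibility matrix.
helm install esignet mosip/esignet -n esignet
helm install oidc-ui mosip/oidc-ui -n esignet
```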
The diagram below illustrates the deployment architecture of eSignet, highlighting secure user access via VPN, traffic routing through firewalls and load balancers, and service orchestration within a Kubernetes cluster.
Key Components: eSignet Service, OIDC UI, databases, and secure cryptographic operations via HSM.
Deployment: Managed with Rancher, Helm charts, and a private Git repository.
Monitoring: Ensured using Grafana and Prometheus for observability.
eSignet Pre-requisites services.
eSignet services.
eSignet Onboarding.
eSignet Api-testrig.
eSignet mock pre-requisites services.
eSignet mock services.
eSignet mock services onboarding.
eSignet signup pre-requisites.
eSignet signup services.
eSignet signup onboarding pre-requisites.
eSignet signup onboarding.
Ensure all required hardware and software dependencies are prepared before proceeding with the installation.
Virtual Machines (VMs) can use any operating system as per convenience.
For this installation guide, Ubuntu OS is referenced throughout.
| Sl No. | Purpose | vCPUs | RAM | Storage (HDD) | No. of VMs | HA |
|---|---|---|---|---|---|---|
| 1. | Wireguard Bastion Host | 2 | 4 GB | 8 GB | 1 | (ensure to setup active-passive) |
| 2. | Observation Cluster nodes | 2 | 8 GB | 32 GB | 2 | 2 |
| 3. | Observation Nginx server (use Loadbalancer if required) | 2 | 4 GB | 16 GB | 1 | Nginx+ |
| 4. | eSignet Cluster nodes | 8 | 32 GB | 128 GB | 3 | Allocate etcd, control plane and worker accordingly |
| 5. | eSignet Nginx server (use Loadbalancer if required) | 2 | 4 GB | 16 GB | 1 | Nginx+ |
All the VMs should be able to communicate with each other.
Stable intra-network connectivity between these VMs is required.
All the VMs should have stable internet connectivity for docker image download (in case of local setup ensure to have a locally accessible docker registry).
Server interface requirements are listed in the table below:
| Sl No. | Purpose | Network interface requirements |
|---|---|---|
| 1. | Wireguard Bastion Host | One private interface: on the same network as all the rest of the nodes (e.g., inside the local NAT network). One public interface: either has a direct public IP, or a firewall NAT (global address) rule that forwards traffic on port 51820/udp to this interface IP. |
| 2. | K8s Cluster nodes | One internal interface: with internet access and on the same network as all the rest of the nodes (e.g., inside the local NAT network). |
| 3. | Observation Nginx server | One internal interface: with internet access and on the same network as all the rest of the nodes (e.g., inside the local NAT network). |
| 4. | eSignet Nginx server | One internal interface: on the same network as all the rest of the nodes (e.g., inside the local NAT network). One public interface: either has a direct public IP, or a firewall NAT (global address) rule that forwards traffic on port 443/tcp to this interface IP. |
DNS requirements:

| Sl No. | Domain name | Mapping | Purpose |
|---|---|---|---|
| 1. | rancher.xyz.net | Private IP of Nginx server or load balancer for Observation cluster | Rancher dashboard to monitor and manage the Kubernetes cluster. |
| 2. | keycloak.xyz.net | Private IP of Nginx server for Observation cluster | Administrative IAM tool (Keycloak), used for Kubernetes administration. |
| 3. | sandbox.xyz.net | Private IP of Nginx server for MOSIP cluster | Index page with links to the different dashboards of the MOSIP environment. (This is just for reference; please do not expose this page in a real production or UAT environment.) |
| 4. | api-internal.sandbox.xyz.net | Private IP of Nginx server for MOSIP cluster | Internal APIs are exposed through this domain. They are accessible privately over the Wireguard channel. |
| 5. | api.sandbox.xyz.net | Public IP of Nginx server for MOSIP cluster | All publicly usable APIs are exposed using this domain. |
| 6. | kibana.sandbox.xyz.net | Private IP of Nginx server for MOSIP cluster | Optional installation. Used to access the Kibana dashboard over Wireguard. |
| 7. | kafka.sandbox.xyz.net | Private IP of Nginx server for MOSIP cluster | Kafka UI is installed as part of MOSIP's default installation and is accessible over Wireguard. Mostly used for administrative needs. |
| 8. | iam.sandbox.xyz.net | Private IP of Nginx server for MOSIP cluster | MOSIP uses an OpenID Connect server to limit and manage access across all the services; the default installation comes with Keycloak. This domain is used to access the Keycloak server over Wireguard. |
| 9. | postgres.sandbox.xyz.net | Private IP of Nginx server for MOSIP cluster | This domain points to the Postgres server. You can connect to Postgres via port forwarding over Wireguard. |
| 10. | onboarder.sandbox.xyz.net | Private IP of Nginx server for MOSIP cluster | Used to access reports of MOSIP partner onboarding over Wireguard. |
| 11. | eSignet.sandbox.xyz.net | Public IP of Nginx server for MOSIP cluster | Used to access the eSignet portal publicly. |
| 12. | healthservices.sandbox.xyz.net | Public IP of Nginx server for MOSIP cluster | Used to access the Health portal publicly. |
| 13. | smtp.sandbox.xyz.net | Private IP of Nginx server for MOSIP cluster | Used to access the mock SMTP UI over Wireguard. |
As only secure HTTPS connections are allowed via the Nginx server, the following valid SSL certificates are required (a sample issuance command follows this list):
Wildcard SSL Certificate for the Observation Cluster:
A valid wildcard SSL certificate for the domain used to access the Observation cluster.
This certificate must be stored inside the Nginx server VM for the Observation cluster.
For example: *.org.net.
Wildcard SSL Certificate for the eSignet K8s Cluster:
A valid wildcard SSL certificate for the domain used to access the eSignet Kubernetes cluster.
This certificate must be stored inside the Nginx server VM for the eSignet cluster.
For example: *.sandbox.xyz.net.
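As one possible way to obtain such a wildcard certificate (a sketch only, assuming Let's Encrypt with certbot and a manual DNS-01 challenge; your certificate authority or DNS provider may require a different process):

```bash
# Sketch only: issue a wildcard certificate for the example eSignet cluster domain
# using certbot with a manual DNS-01 challenge (add the TXT record when prompted).
sudo certbot certonly --manual --preferred-challenges dns \
  -d "*.sandbox.xyz.net"

# The resulting fullchain.pem and privkey.pem (under /etc/letsencrypt/live/...)
# are then copied to the corresponding Nginx server VM.
```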
Below is a step-by-step guide to set up and configure the required components for secure and efficient operations.
Secure access solution that establishes private channels to Observation and eSignet clusters.
A Wireguard bastion host (Wireguard server) provides a secure private channel to access the Observation and eSignet cluster.
The host restricts public access and enables access to only those clients who have their public key listed in the Wireguard server.
Wireguard listens on UDP port 51820.
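If the firewall is being managed manually rather than through the Ansible playbook described below, the equivalent ufw rule would be (an assumption about your firewall tooling):

```bash
# Sketch only: allow inbound Wireguard traffic when managing ufw manually.
sudo ufw allow 51820/udp
```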
Create a Wireguard server VM as per the hardware and network requirements mentioned above.
Open ports and install Docker on the Wireguard VM.
Create a copy of hosts.ini.sample as hosts.ini and update the required details for the Wireguard VM: cp hosts.ini.sample hosts.ini
Execute ports.yaml to enable ports at the VM level using ufw: ansible-playbook -i hosts.ini ports.yaml
Execute docker.yml to install Docker and add the user to the docker group (a sketch of the command follows):
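The exact command lives in the cloned repository; assuming it follows the same pattern as the ports playbook above, it would look like this:

```bash
# Sketch only: run the Docker installation playbook against the Wireguard VM,
# following the same pattern as ports.yaml above. Adjust the playbook filename
# (docker.yml / docker.yaml) to whatever is present in the cloned repository.
ansible-playbook -i hosts.ini docker.yml
```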
Set up the Wireguard server:
SSH to the Wireguard VM.
Create a directory for storing Wireguard config files.
Install and start the Wireguard server using Docker as given below:
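The original command block is not reproduced here; the sketch below assumes the linuxserver/wireguard container image and the config directory used later in this guide, so verify the image, tag, and parameters against the deployment scripts before use.

```bash
# Sketch only: create the config directory and start a Wireguard server container.
# Image, environment values and peer count are assumptions; verify against the
# deployment scripts referenced in this guide.
mkdir -p /home/ubuntu/wireguard/config
sudo docker run -d \
  --name=wireguard \
  --cap-add=NET_ADMIN \
  --cap-add=SYS_MODULE \
  -e PUID=1000 -e PGID=1000 \
  -e PEERS=30 \
  -p 51820:51820/udp \
  -v /home/ubuntu/wireguard/config:/config \
  -v /lib/modules:/lib/modules \
  --sysctl="net.ipv4.conf.all.src_valid_mark=1" \
  --restart unless-stopped \
  ghcr.io/linuxserver/wireguard
```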
Assign wireguard.conf:
SSH to the Wireguard server VM.
cd /home/ubuntu/wireguard/config
Assign one of the peer configs to yourself and use it from your PC to connect to the server.
Create an assigned.txt file to keep track of allocated peer files, and update it every time a peer is allocated to someone.
Use the ls command to see the list of peers.
Get inside your selected peer directory and make the following changes in peer.conf (an example of the edited file is shown after these steps):
cd peer1
nano peer1.conf
Delete the DNS IP.
Update the allowed IPs to the subnet CIDR. For example: 10.10.20.0/23
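As an illustration, an edited peer config could end up looking like the sketch below; the keys, endpoint, and subnet are placeholders, and the DNS line has been removed as described above.

```ini
# Sketch of an edited peer1.conf; keys, endpoint and subnet are placeholders.
[Interface]
Address = 10.13.13.2
PrivateKey = <peer-private-key>
ListenPort = 51820
# DNS line removed as per the step above

[Peer]
PublicKey = <server-public-key>
PresharedKey = <preshared-key>
Endpoint = <wireguard-server-public-ip>:51820
# AllowedIPs updated to the private subnet CIDR of the environment
AllowedIPs = 10.10.20.0/23
```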
Share the updated peer.conf with the respective users so that they can connect to the Wireguard server from their personal PCs.
Add peer.conf to your PC's /etc/wireguard directory as wg0.conf.
Start the wireguard client and check the status:
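Assuming the client config was installed as /etc/wireguard/wg0.conf as described above, the standard wg-quick commands are:

```bash
# Start the Wireguard client using the wg0.conf installed above and verify it.
sudo systemctl start wg-quick@wg0
sudo systemctl enable wg-quick@wg0   # optional: start automatically on boot
sudo systemctl status wg-quick@wg0

# Show the active tunnel and handshake details
sudo wg show
```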
Once connected to Wireguard, you should now be able to log in using private IPs.
Install all the required tools mentioned in the prerequisites on your PC (sample install commands follow this list):
rke (version 1.3.10)
istioctl (version v1.15.0)
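As a sketch only, these tools can typically be installed as shown below; download URLs and binary names are assumptions and should be verified against the official release pages.

```bash
# Sketch only: install rke 1.3.10 and istioctl 1.15.0 on a Linux PC.
# Download URLs and asset names are assumptions; verify against the official releases.

# rke (Rancher Kubernetes Engine CLI)
curl -LO https://github.com/rancher/rke/releases/download/v1.3.10/rke_linux-amd64
chmod +x rke_linux-amd64
sudo mv rke_linux-amd64 /usr/local/bin/rke
rke --version

# istioctl
curl -L https://istio.io/downloadIstio | ISTIO_VERSION=1.15.0 sh -
sudo mv istio-1.15.0/bin/istioctl /usr/local/bin/
istioctl version
```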
Set up Observation Cluster node VMs as per the hardware and network requirements mentioned above.
Set up passwordless SSH into the cluster nodes via PEM keys. (Ignore if VMs are accessible via PEMs).
Generate keys on your PC ssh-keygen -t rsa
Copy the keys to remote observation node VMs ssh-copy-id <remote-user>@<remote-ip>
SSH into the node to check password-less SSH ssh -i ~/.ssh/<your private key> <remote-user>@<remote-ip>
Install Keycloak.
Install Rancher UI.
Keycloak and Rancher UI integration.
Clone the Kubernetes Infrastructure Repository:
Create a copy of hosts.ini.sample as hosts.ini and update the IP addresses (an illustrative inventory sketch is shown below).
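Purely as an illustration of what such an Ansible inventory can look like (the group names, hosts, and variables below are placeholders and must match hosts.ini.sample from the cloned repository):

```ini
# Illustrative hosts.ini sketch only; the actual group names and variables must
# match hosts.ini.sample in the cloned repository.
[cluster]
node1 ansible_host=10.10.20.11
node2 ansible_host=10.10.20.12

[cluster:vars]
ansible_user=ubuntu
ansible_ssh_private_key_file=~/.ssh/id_rsa
```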
Clone the eSignet repository (select the tag based on the compatibility matrix).
Clone the respective repo (select the tag based on the compatibility matrix).
Clone the respective repo (select the tag based on the compatibility matrix).
: contains the scripts to install and configure the Kubernetes cluster with the required monitoring, logging, and alerting tools.
: contains deployment scripts and source code for:
: contains deployment scripts and source code for:
: contains deployment scripts and source code for:
Follow the steps mentioned to install the required tools on your personal computer to create and manage the K8s cluster using RKE1.
Make sure to clone the GitHub repo for the required scripts in the above steps and perform the steps from the linked directory.
Install the on your PC.
Clone and move to the required directory as per the hyperlink.
Set up the Observation cluster by following the steps given .
Once the cluster setup is completed, set up the K8s cluster ingress and storage class following .
Once the Observation K8s cluster is created and configured, set up the Nginx server for it using the .
Once the Nginx server for the Observation cluster is done, continue with the .
Set up on your personal computer.
Execute to open all the required ports.
Install on all the required VMs.
Create cluster for eSignet services hosting.
Import the newly created K8s cluster into the Rancher UI.
Set up for persistence in the K8s cluster as well as a standalone VM (Nginx VM).
Set up for K8s cluster monitoring.
Set up for the K8s cluster.
Set up and Kiali.
Set up for exposing services from the newly created eSignet K8s cluster.
Install for eSignet from the deploy directory.
for eSignet services.
Install eSignet and OIDC .
eSignet as per the plugin used for deployment.
Set up for detailed automated test case execution.
Install for eSignet mock services.
Install services.
Onboard services.
Install for eSignet Signup services.
Install services.
Deploy dependencies for eSignet signup onboarder following .
eSignet signup services.