On-Prem Installation Guidelines

Overview

This guide provides comprehensive instructions for deploying eSignet. eSignet operates as a collection of microservices hosted within Kubernetes clusters to ensure scalability, modularity, and high availability.

This guide is applicable only for eSignet version 1.5.0 and above.

  • The deployment process includes the following key components and configurations:

    • Wireguard: Used as a trusted network extension to access the admin, control, and observation planes, along with on-field registration client connectivity to the backend server.

    • Nginx Server: eSignet uses the Nginx server for:

      • SSL termination

      • Reverse Proxy

      • CDN/Cache management

      • Load balancing

    • Kubernetes (K8s) Cluster: Creation, configuration, and administration of the K8s clusters.

      • The K8s clusters are created using the Rancher and RKE tools.

      • Two K8s clusters are used in the reference implementation architecture:

        • Observation K8s cluster

        • eSignet application K8s cluster

    • Cluster Configurations:

      • Ingress Setup: For exposing application services outside the K8s cluster.

      • Storage Class Setup: Sets up the storage class for persistence in the K8s cluster.

      • Logging System: Continuously scrapes logs from all pods as needed.

      • Monitoring System: Continuously monitors logs and generates graphs to better manage the application and cluster.

      • Alerting: Users are notified about crucial events as and when needed.

    • Observation K8s cluster contains:

      • Rancher UI: Application used to create and manage the K8s clusters. This is needed only once for an organization, as it can manage multiple dev, QA, and prod K8s clusters easily.

      • Keycloak: IAM tool used for defining RBAC policies that control access to Rancher.

    • eSignet cluster: This cluster hosts all eSignet components, along with certain third-party components, to ensure the cluster's security, APIs, and data.

  • eSignet Pre-requisites:

    These are the services required to deploy multiple eSignet modules:

    • eSignet-prerequisites: Services required for esignet-service and oidc-ui deployment.

    • eSignet-mock-prerequisites: Services required for mock-relying-party and mock-relying-party-ui deployment.

    • eSignet-signup: Services required for esignet-signup-service and esignet-signup-ui deployment.

  • eSignet Services deployment:

    • esignet-service and oidc-ui deployment.

    • Onboarding MISP partner for eSignet service.

    • mock-relying-party-service and mock-relying-party-ui deployment.

    • Onboarding mock-relying-party.

    • esignet-signup-service and esignet-signup-ui deployment.

    • Onboarding MISP partner for esignet-signup-partner.

Architecture

The diagram below illustrates the deployment architecture of eSignet, highlighting secure user access via VPN, traffic routing through firewalls and load balancers, and service orchestration within a Kubernetes cluster.

  • Key Components: eSignet Service, OIDC UI, databases, and secure cryptographic operations via HSM.

  • Deployment: Managed with Rancher, Helm charts, and a private Git repository.

  • Monitoring: Ensured using Grafana and Prometheus for observability.

eSignet Architecture diagram

Deployment Repositories

  • k8s-infra: Contains the scripts to install and configure the Kubernetes cluster with the required monitoring, logging, and alerting tools.

  • eSignet: Contains deployment scripts and source code for:

    • eSignet Pre-requisites services.

    • eSignet services.

    • eSignet Onboarding.

    • eSignet API-testrig.

  • esignet-mock-services: Contains deployment scripts and source code for:

    • eSignet mock pre-requisites services.

    • eSignet mock services.

    • eSignet mock services onboarding.

  • esignet-signup: Contains deployment scripts and source code for:

    • eSignet signup pre-requisites.

    • eSignet signup services.

    • eSignet signup onboarding pre-requisites.

    • eSignet signup onboarding.

Pre-requisites

Ensure all required hardware and software dependencies are prepared before proceeding with the installation.

Hardware requirements

  • Virtual Machines (VMs) can run any operating system, as per convenience.

  • For this installation guide, Ubuntu OS is referenced throughout.

| Sl no. | Purpose | vCPUs | RAM | Storage (HDD) | No. of VMs | HA |
|---|---|---|---|---|---|---|
| 1. | Wireguard Bastion Host | 2 | 4 GB | 8 GB | 1 | (ensure to set up active-passive) |
| 2. | Observation Cluster nodes | 2 | 8 GB | 32 GB | 2 | 2 |
| 3. | Observation Nginx server (use Loadbalancer if required) | 2 | 4 GB | 16 GB | 1 | Nginx+ |
| 4. | eSignet Cluster nodes | 8 | 32 GB | 128 GB | 3 | Allocate etcd, control plane and worker accordingly |
| 5. | eSignet Nginx server (use Loadbalancer if required) | 2 | 4 GB | 16 GB | 1 | Nginx+ |

Network Requirements

  • All the VMs should be able to communicate with each other.

  • Need stable intra-network connectivity between these VMs.

  • All the VMs should have stable internet connectivity for docker image download (in case of local setup ensure to have a locally accessible docker registry).

  • Server Interface requirements as mentioned in below table:

| Sl no. | Purpose | Network Interfaces |
|---|---|---|
| 1. | Wireguard Bastion Host | One private interface: on the same network as all the rest of the nodes (e.g., inside a local NAT network). One public interface: either has a direct public IP, or a firewall NAT (global address) rule that forwards traffic on port 51820/udp to this interface IP. |
| 2. | K8s Cluster nodes | One internal interface: with internet access, on the same network as all the rest of the nodes (e.g., inside a local NAT network). |
| 3. | Observation Nginx server | One internal interface: with internet access, on the same network as all the rest of the nodes (e.g., inside a local NAT network). |
| 4. | eSignet Nginx server | One internal interface: on the same network as all the rest of the nodes (e.g., inside a local NAT network). One public interface: either has a direct public IP, or a firewall NAT (global address) rule that forwards traffic on port 443/tcp to this interface IP. |
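The reachability expectations above can be spot-checked from the Wireguard bastion (or any node) with a short script. The IP addresses and hostname below are placeholders for your own VM addresses, and the UDP port probe is best-effort only, since UDP has no handshake:

```shell
#!/usr/bin/env bash
# Placeholder private IPs of the cluster and Nginx VMs -- replace with your own.
NODES=("10.10.20.11" "10.10.20.12" "10.10.20.13")

# Check intra-network reachability between VMs (ICMP).
for ip in "${NODES[@]}"; do
  if command -v ping >/dev/null 2>&1; then
    ping -c 1 -W 2 "$ip" >/dev/null 2>&1 \
      && echo "reachable: $ip" \
      || echo "UNREACHABLE: $ip"
  fi
done

# Best-effort check that the Wireguard UDP port is open on the bastion.
WG_HOST="wireguard.example.net"   # placeholder public hostname/IP
if command -v nc >/dev/null 2>&1; then
  nc -u -z -w 3 "$WG_HOST" 51820 \
    && echo "51820/udp appears open on $WG_HOST" \
    || echo "could not verify 51820/udp on $WG_HOST"
fi
```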

DNS requirements

| Sl no. | Domain Name | Mapping details | Purpose |
|---|---|---|---|
| 1. | rancher.xyz.net | Private IP of Nginx server or load balancer for Observation cluster | Rancher dashboard to monitor and manage the Kubernetes cluster. |
| 2. | keycloak.xyz.net | Private IP of Nginx server for Observation cluster | Administrative IAM tool (Keycloak), used for Kubernetes administration. |
| 3. | sandbox.xyz.net | Private IP of Nginx server for MOSIP cluster | Index page for links to different dashboards of the MOSIP env. (This is just for reference; please do not expose this page in a real production or UAT environment.) |
| 4. | api-internal.sandbox.xyz.net | Private IP of Nginx server for MOSIP cluster | Internal APIs are exposed through this domain. They are accessible privately over the Wireguard channel. |
| 5. | api.sandbox.xyz.net | Public IP of Nginx server for MOSIP cluster | All publicly usable APIs are exposed using this domain. |
| 6. | kibana.sandbox.xyz.net | Private IP of Nginx server for MOSIP cluster | Optional installation. Used to access the Kibana dashboard over Wireguard. |
| 7. | kafka.sandbox.xyz.net | Private IP of Nginx server for MOSIP cluster | Kafka UI is installed as part of MOSIP's default installation. We can access Kafka UI over Wireguard; mostly used for administrative needs. |
| 8. | iam.sandbox.xyz.net | Private IP of Nginx server for MOSIP cluster | MOSIP uses an OpenID Connect server to limit and manage access across all the services. The default installation comes with Keycloak. This domain is used to access the Keycloak server over Wireguard. |
| 9. | postgres.sandbox.xyz.net | Private IP of Nginx server for MOSIP cluster | This domain points to the Postgres server. You can connect to Postgres via port forwarding over Wireguard. |
| 10. | onboarder.sandbox.xyz.net | Private IP of Nginx server for MOSIP cluster | Accessing reports of MOSIP partner onboarding over Wireguard. |
| 11. | esignet.sandbox.xyz.net | Public IP of Nginx server for MOSIP cluster | Accessing the eSignet portal publicly. |
| 12. | healthservices.sandbox.xyz.net | Public IP of Nginx server for MOSIP cluster | Accessing the Health portal publicly. |
| 13. | smtp.sandbox.xyz.net | Private IP of Nginx server for MOSIP cluster | Accessing the mock-SMTP UI over Wireguard. |
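Once the DNS records are created, a quick sketch like the one below can confirm that each domain resolves (the domain names are the samples from the table; replace them with your own zone):

```shell
#!/usr/bin/env bash
# Sample domains from the table above -- replace xyz.net with your own zone.
PRIVATE_DOMAINS=("rancher.xyz.net" "keycloak.xyz.net" "api-internal.sandbox.xyz.net")
PUBLIC_DOMAINS=("api.sandbox.xyz.net" "esignet.sandbox.xyz.net")

# Print the first A record each domain resolves to, if any.
check() {
  local domain="$1"
  if command -v dig >/dev/null 2>&1; then
    local ip
    ip=$(dig +short "$domain" | head -n 1)
    echo "$domain -> ${ip:-NOT RESOLVED}"
  fi
}

for d in "${PRIVATE_DOMAINS[@]}" "${PUBLIC_DOMAINS[@]}"; do
  check "$d"
done
```

Private-domain records should resolve to the private Nginx IPs (visible only over Wireguard), while public-domain records should resolve to the public Nginx IP.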

Certificate requirements

Since only secured HTTPS connections are allowed via the Nginx server, the following valid SSL certificates are needed:

  1. Wildcard SSL Certificate for the Observation Cluster:

    • A valid wildcard SSL certificate for the domain used to access the Observation cluster.

    • This certificate must be stored inside the Nginx server VM for the Observation cluster.

    • For example, a wildcard domain like *.org.net.

  2. Wildcard SSL Certificate for the eSignet K8s Cluster:

    • A valid wildcard SSL certificate for the domain used to access the eSignet Kubernetes cluster.

    • This certificate must be stored inside the Nginx server VM for the eSignet cluster.

    • For example, a wildcard domain like *.sandbox.xyz.net.
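A certificate placed on the Nginx VM can be sanity-checked before use. The path below is an assumption for illustration; adjust it to wherever your certificate is stored:

```shell
#!/usr/bin/env bash
# Path to the wildcard certificate on the Nginx VM -- placeholder, adjust as needed.
CERT="/etc/nginx/ssl/fullchain.pem"

if [ -f "$CERT" ] && command -v openssl >/dev/null 2>&1; then
  # Confirm the subject/SAN covers the wildcard domain and the cert is still valid.
  openssl x509 -in "$CERT" -noout -subject -ext subjectAltName -dates
else
  echo "certificate not found at $CERT (adjust the path for your setup)"
fi
```

The subjectAltName output should list the wildcard entry (e.g., `*.sandbox.xyz.net`), and the notAfter date should be in the future.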

Tools to be installed on Personal Computers:

Follow the steps mentioned here to install the required tools on your personal computer to create and manage the K8s cluster using RKE1.

Installation

Below is a step-by-step guide to set up and configure the required components for secure and efficient operations.

Wireguard

Wireguard is a secure access solution that establishes private channels to the Observation and eSignet clusters.

Note: If you already have a Wireguard bastion host then you may skip this step.

  • A Wireguard bastion host (Wireguard server) provides a secure private channel to access the Observation and eSignet cluster.

  • The host restricts public access and enables access to only those clients who have their public key listed in the Wireguard server.

  • Wireguard listens on UDP port 51820.

Setup Wireguard Bastion server

  1. Create a Wireguard server VM as per the above-mentioned hardware and network requirements.

  2. Open ports and install Docker on the Wireguard VM.

  • Create a copy of hosts.ini.sample as hosts.ini and update the required details for the Wireguard VM:

  cp hosts.ini.sample hosts.ini

  • Execute ports.yml to enable ports on the VM level using ufw:

  ansible-playbook -i hosts.ini ports.yaml

Note:

  • The PEM files used to access the nodes should have 400 permissions:

  sudo chmod 400 ~/.ssh/privkey.pem

  • These ports only need to be opened for sharing packets over UDP.

  • Take necessary measures at the firewall level so that the Wireguard server is reachable on 51820/udp publicly.

  • Make sure to clone the k8s-infra GitHub repo for the required scripts in the above steps and perform the steps from the linked directory.

  • If you already have a Wireguard server for the VPC used, you can skip the Setup Wireguard Bastion server section.

  3. Execute docker.yml to install Docker and add the user to the docker group:

ansible-playbook -i hosts.ini docker.yaml
  4. Set up the Wireguard server:

    • SSH to wireguard VM

    • Create a directory for storing wireguard config files.

    mkdir -p wireguard/config
    • Install and start the wireguard server using docker as given below:

    sudo docker run -d \
    --name=wireguard \
    --cap-add=NET_ADMIN \
    --cap-add=SYS_MODULE \
    -e PUID=1000 \
    -e PGID=1000 \
    -e TZ=Asia/Calcutta \
    -e PEERS=30 \
    -p 51820:51820/udp \
    -v /home/ubuntu/wireguard/config:/config \
    -v /lib/modules:/lib/modules \
    --sysctl="net.ipv4.conf.all.src_valid_mark=1" \
    --restart unless-stopped \
    ghcr.io/linuxserver/wireguard

Note:

  • Increase the number of peers above in case more than 30 Wireguard client confs (-e PEERS=30) are needed.

  • Change the directory to be mounted to wireguard docker as needed. All your wireguard confs will be generated in the mounted directory (-v /home/ubuntu/wireguard/config:/config).
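After starting the container, a quick check like the sketch below confirms it is running and that the peer configs were generated in the mounted directory (run on the Wireguard VM; prefix the docker command with sudo if your user is not in the docker group):

```shell
#!/usr/bin/env bash
# Directory mounted into the wireguard container (from the docker run above).
CONFIG_DIR="/home/ubuntu/wireguard/config"

# Confirm the container is up.
if command -v docker >/dev/null 2>&1; then
  docker ps --filter name=wireguard --format '{{.Names}}: {{.Status}}' || true
fi

# The generated client configs appear as peer1..peerN under the mounted directory.
ls -d "$CONFIG_DIR"/peer* 2>/dev/null || echo "no peer configs found under $CONFIG_DIR"
```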

Set up the Wireguard client on your PC by following the steps below:

  1. Install the Wireguard client on your PC.

  2. Assign wireguard.conf:

  • SSH to the wireguard server VM.

  • cd /home/ubuntu/wireguard/config

  • Assign one of the peer config directories for yourself and use the same from the PC to connect to the server.

  • Create an assigned.txt file to keep track of the peer files allocated, and update it every time a peer is allocated to someone.

    peer1 :   peername
    peer2 :   xyz
  • Use the ls command to see the list of peers.

  • Get inside your selected peer directory, and add the mentioned changes in peer.conf:

    • cd peer1

    • nano peer1.conf

      • Delete the DNS IP.

      • Update the allowed IPs to subnets CIDR IPs. For example: 10.10.20.0/23

  • Share the updated peer.conf with the respective peers to connect to the Wireguard server from their personal PCs.

  • Add peer.conf in your PC’s /etc/wireguard directory as wg0.conf.
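After these edits, a peer config typically looks like the sketch below. All values are illustrative placeholders; the actual keys and addresses come from the files generated by the container:

```ini
[Interface]
# DNS line deleted as described above
Address = 10.13.13.2
PrivateKey = <peer1-private-key>
ListenPort = 51820

[Peer]
PublicKey = <server-public-key>
Endpoint = <wireguard-server-public-ip>:51820
# AllowedIPs restricted to the internal subnet CIDR instead of 0.0.0.0/0
AllowedIPs = 10.10.20.0/23
```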

  3. Start the Wireguard client and check the status:

sudo systemctl start wg-quick@wg0
sudo systemctl status wg-quick@wg0
  4. Once connected to Wireguard, you should now be able to log in using private IPs.
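A working tunnel can be confirmed with a sketch like the one below; the test IP is a placeholder for any VM on the private subnet:

```shell
#!/usr/bin/env bash
# Placeholder private IP of a VM reachable only over the tunnel.
PEER_TEST_IP="10.10.20.11"

# Show the active tunnel, last handshake time, and transfer counters.
if command -v wg >/dev/null 2>&1; then
  sudo -n wg show 2>/dev/null || true
fi

# A successful ping over the private subnet confirms the tunnel works.
if command -v ping >/dev/null 2>&1; then
  ping -c 2 -W 2 "$PEER_TEST_IP" \
    && echo "tunnel OK" \
    || echo "tunnel NOT working yet"
fi
```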

Observation cluster setup and configuration

Observation K8s Cluster setup:

  1. Install all the required tools mentioned in the prerequisites for the PC.

  2. Set up Observation Cluster node VMs as per the hardware and network requirements mentioned above.

    1. Set up passwordless SSH into the cluster nodes via PEM keys. (Ignore if VMs are accessible via PEMs).

      • Generate keys on your PC:

      ssh-keygen -t rsa

        • Copy the keys to the remote observation node VMs:

        ssh-copy-id <remote-user>@<remote-ip>

        • SSH into the node to verify password-less SSH:

        ssh -i ~/.ssh/<your private key> <remote-user>@<remote-ip>

Note:

  • Make sure the permission for privkey.pem for ssh is set to 400.

  • Clone k8s-infra and move to the required directory as per the hyperlink.
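The password-less SSH setup above can be scripted across all nodes. The IPs, user, and key path below are placeholders; replace them with your Observation cluster values:

```shell
#!/usr/bin/env bash
# Placeholder node IPs and user -- replace with your Observation cluster values.
NODES=("10.10.20.21" "10.10.20.22")
REMOTE_USER="ubuntu"
KEY="$HOME/.ssh/id_rsa"

# Generate a key pair once (skipped if it already exists).
if [ ! -f "$KEY" ] && command -v ssh-keygen >/dev/null 2>&1; then
  ssh-keygen -t rsa -N "" -f "$KEY"
fi

# Push the public key to every node, then verify password-less login.
for ip in "${NODES[@]}"; do
  if command -v ssh-copy-id >/dev/null 2>&1; then
    ssh-copy-id -i "$KEY.pub" -o StrictHostKeyChecking=no -o ConnectTimeout=5 \
      "$REMOTE_USER@$ip" || echo "copy failed for $ip"
    ssh -i "$KEY" -o BatchMode=yes -o ConnectTimeout=5 "$REMOTE_USER@$ip" true \
      && echo "password-less SSH OK: $ip" \
      || echo "password-less SSH not yet working: $ip"
  fi
done
```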

  3. Set up the observation cluster by following the steps given here.

  4. Once the cluster setup is completed, set up the K8s cluster ingress and storage class following the steps.

  5. Once the Observation K8s cluster is created and configured, set up the Nginx server for it using the steps.

  6. Once the Nginx server for the Observation cluster is done, continue with the installation of the required apps:

  • Install Keycloak.

  • Install Rancher UI.

  • Keycloak & Rancher UI integration.

eSignet K8 Cluster setup:

  1. Set up pre-requisites on your personal computer.

  2. Clone the Kubernetes Infrastructure Repository:

Note: Make sure to use the released tag. Specifically v1.2.0.2.

git clone -b v1.2.0.2 https://github.com/mosip/k8s-infra.git

cd k8s-infra/mosip/onprem
  3. Create a copy of hosts.ini.sample as hosts.ini. Update the IP addresses.

  4. Execute ports.yml to open all the required ports.

  5. Install Docker on all the required VMs.

  6. Create the RKE1 K8s cluster for hosting the eSignet services.

  7. Import the newly created K8s cluster into the Rancher UI.
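Once the RKE1 cluster is up, a quick sketch like the one below verifies that all nodes joined and are Ready. It assumes the kubeconfig file name that RKE writes by default (kube_config_cluster.yml); adjust if yours differs:

```shell
#!/usr/bin/env bash
# Expected node count from the hardware table above (3 eSignet cluster nodes).
EXPECTED_NODES=3

# Verify the cluster is reachable with the kubeconfig generated by RKE.
if command -v kubectl >/dev/null 2>&1; then
  kubectl --kubeconfig kube_config_cluster.yml get nodes -o wide || true
  READY=$(kubectl --kubeconfig kube_config_cluster.yml get nodes --no-headers 2>/dev/null \
    | grep -c ' Ready ' || true)
  echo "Ready nodes: ${READY:-0} (expected $EXPECTED_NODES)"
fi
```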

eSignet K8 Cluster Configuration:

  • Set up NFS for persistence in the K8s cluster as well as on a standalone VM (Nginx VM).

  • Set up Monitoring for the K8s cluster.

  • Set up Logging for the K8s cluster.

  • Set up Istio and Kiali.
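After these configurations, the component pods can be checked per namespace. The namespace names below are the usual defaults for Istio and Rancher's monitoring/logging stacks, not guaranteed by this guide; adjust them to whatever your installation scripts used:

```shell
#!/usr/bin/env bash
# Assumed default namespaces -- adjust to your installation.
for ns in istio-system cattle-monitoring-system cattle-logging-system; do
  if command -v kubectl >/dev/null 2>&1; then
    echo "--- pods in $ns ---"
    kubectl get pods -n "$ns" 2>/dev/null || echo "(namespace $ns not found)"
  fi
done
```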

Nginx for eSignet K8 Cluster:

  1. Set up Nginx for exposing services from the newly created eSignet K8 cluster.

Install eSignet and pre-requisite services:

  1. Clone the eSignet repository: (select tag based upon the compatibility matrix)

git clone -b <tag> https://github.com/mosip/esignet.git
cd esignet
  2. Install pre-requisites for eSignet from the deploy directory.

cd deploy

Follow the pre-requisites deployment steps:

  1. Initialise pre-requisites for eSignet services.

  2. Install eSignet and OIDC services.

  3. Onboard eSignet as per the plugin used for deployment.

  4. Set up api-testrig for detailed automated test case execution.
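After deployment, a sketch like the one below gives a first health check. Both the namespace and the actuator URL are assumptions based on the default deploy scripts and sample domain; replace them with your environment's values:

```shell
#!/usr/bin/env bash
# Namespace and URL are assumptions -- adjust to match your environment.
NAMESPACE="esignet"
HEALTH_URL="https://esignet.sandbox.xyz.net/v1/esignet/actuator/health"

# All esignet-service and oidc-ui pods should be Running.
if command -v kubectl >/dev/null 2>&1; then
  kubectl get pods -n "$NAMESPACE" 2>/dev/null || echo "(namespace $NAMESPACE not found)"
fi

# A Spring Boot actuator endpoint reports {"status":"UP"} once healthy.
if command -v curl >/dev/null 2>&1; then
  curl -sk --max-time 5 "$HEALTH_URL" || echo "health endpoint not reachable yet"
fi
```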

Install eSignet mock services:

  1. Clone the respective repo: (select tag based upon the compatibility matrix)

git clone -b <tag> https://github.com/mosip/esignet-mock-services.git
cd esignet-mock-services
  2. Install pre-requisites for eSignet mock services.

  3. Install eSignet mock services.

  4. Onboard eSignet mock services.

Install eSignet signup and its pre-requisites services:

  1. Clone the respective repo: (select tag based upon the compatibility matrix)

git clone -b <tag> https://github.com/mosip/esignet-signup.git
cd esignet-signup
  2. Install pre-requisites for eSignet signup services.

  3. Install eSignet signup services.

  4. Deploy dependencies for the eSignet signup onboarder following the steps.

  5. Onboard eSignet signup services.
