On-Prem Installation Guidelines


Overview

This guide provides comprehensive instructions for deploying eSignet. eSignet operates as a collection of microservices hosted within Kubernetes clusters to ensure scalability, modularity, and high availability.

This guide is applicable only for eSignet version 1.5.0 and above.

  • The deployment process includes the following key components and configurations:

    • Wireguard: used as a trust network extension to access the admin, control, and observation plane, along with on-field registration client connectivity to the backend server.

    • Nginx Server: eSignet uses the Nginx server for:

      • SSL termination

      • Reverse Proxy

      • CDN/Cache management

      • Load balancing

    • Kubernetes (K8s) Cluster: covers the creation, configuration, and administration of the Kubernetes (K8s) clusters.

      • The K8s cluster is created using the Rancher and rke tools.

      • Two K8s clusters are used in the reference implementation architecture:

        • Observation K8s cluster

        • eSignet application K8s cluster

    • Cluster Configurations:

      • Ingress Setup: For exposing application services outside the K8s cluster.

      • Storage Class Setup: This is used to set up the storage class for persistence in the K8 cluster.

      • Logging System: Continuously scrapes logs from all pods as needed.

      • Monitoring System: Continuously monitors logs and generates graphs to better manage the application and cluster.

      • Alerting: Users are notified about crucial events as and when needed.

    • Observation K8 cluster contains:

      • Rancher UI: application used to create and manage the K8s clusters. This is needed only once for an organization, as it can manage multiple dev, QA, and prod K8s clusters easily.

      • Keycloak: IAM tool used for defining RBAC policies that control access to Rancher.

    • eSignet cluster: This cluster hosts all eSignet components, along with certain third-party components that secure the cluster, its APIs, and its data.

  • eSignet Pre-requisites:

    These are the services required to deploy multiple eSignet modules:

    • eSignet-prerequisites: Services required for esignet-service and oidc-ui deployment.

    • eSignet-mock-prerequisites: Services required for mock-relying-party and mock-relying-party-ui deployment.

    • eSignet-signup: Services required for esignet-signup-service and esignet-signup-ui deployment.

  • eSignet Services deployment:

    • esignet-service and oidc-ui deployment.

    • Onboarding MISP partner for eSignet service.

    • mock-relying-party-service and mock-relying-party-ui deployment.

    • Onboarding mock-relying-party.

    • esignet-signup-service and esignet-signup-ui deployment.

    • Onboarding MISP partner for esignet-signup-partner.

Architecture

The diagram below (eSignet Architecture diagram) illustrates the deployment architecture of eSignet, highlighting secure user access via VPN, traffic routing through firewalls and load balancers, and service orchestration within a Kubernetes cluster.

  • Key Components: eSignet Service, OIDC UI, databases, and secure cryptographic operations via HSM.

  • Deployment: Managed with Rancher, Helm charts, and a private Git repository.

  • Monitoring: Ensured using Grafana and Prometheus for observability.

Deployment Repositories

  • k8s-infra: contains the scripts to install and configure the Kubernetes cluster with the required monitoring, logging, and alerting tools.

  • esignet: contains the deployment scripts and source code for:

    • eSignet Pre-requisites services.

    • eSignet services.

    • eSignet Onboarding.

    • eSignet Api-testrig.

  • esignet-mock-services: contains the deployment scripts and source code for:

    • eSignet mock pre-requisites services.

    • eSignet mock services.

    • eSignet mock services onboarding.

  • esignet-signup: contains the deployment scripts and source code for:

    • eSignet signup pre-requisites.

    • eSignet signup services.

    • eSignet signup onboarding pre-requisites.

    • eSignet signup onboarding.

Pre-requisites

Ensure all required hardware and software dependencies are prepared before proceeding with the installation.

Hardware requirements

  • Virtual Machines (VMs) can run any operating system, as per convenience.

  • For this installation guide, Ubuntu OS is referenced throughout.

Sl no. | Purpose | vCPUs | RAM | Storage (HDD) | No. of VMs | HA
1. | Wireguard Bastion Host | 2 | 4 GB | 8 GB | 1 | (ensure to set up active-passive)
2. | Observation Cluster nodes | 2 | 8 GB | 32 GB | 2 | 2
3. | Observation Nginx server (use Loadbalancer if required) | 2 | 4 GB | 16 GB | 1 | Nginx+
4. | eSignet Cluster nodes | 8 | 32 GB | 128 GB | 3 | Allocate etcd, control plane and worker accordingly
5. | eSignet Nginx server (use Loadbalancer if required) | 2 | 4 GB | 16 GB | 1 | Nginx+

Network Requirements

  • All the VMs should be able to communicate with each other.

  • Need stable intra-network connectivity between these VMs.

  • All the VMs should have stable internet connectivity for docker image download (in case of local setup ensure to have a locally accessible docker registry).

  • Server interface requirements are listed in the table below (a quick connectivity check is sketched after the table):

Sl no. | Purpose | Network Interfaces
1. | Wireguard Bastion Host | One private interface: on the same network as all the other nodes (e.g. inside the local NAT network). One public interface: either has a direct public IP, or a firewall NAT (global address) rule that forwards traffic on port 51820/udp to this interface IP.
2. | K8 Cluster nodes | One internal interface: with internet access, on the same network as all the other nodes (e.g. inside the local NAT network).
3. | Observation Nginx server | One internal interface: with internet access, on the same network as all the other nodes (e.g. inside the local NAT network).
4. | eSignet Nginx server | One internal interface: on the same network as all the other nodes (e.g. inside the local NAT network). One public interface: either has a direct public IP, or a firewall NAT (global address) rule that forwards traffic on port 443/tcp to this interface IP.
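Before proceeding, it can help to verify these connectivity and port-forwarding requirements from the VMs themselves; the IPs and hostnames below are placeholders.

# Intra-network reachability between VMs (replace with your node IPs)
ping -c 3 10.10.20.11
# Outbound internet access for docker image pulls
curl -sI https://registry-1.docker.io/v2/ | head -n 1
# From outside the network: confirm the firewall/NAT forwards (UDP checks are best-effort)
nc -vzu wireguard.example.com 51820
nc -vz esignet.sandbox.xyz.net 443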

DNS requirements

Sl no. | Domain Name | Mapping details | Purpose
1. | rancher.xyz.net | Private IP of Nginx server or load balancer for Observation cluster | Rancher dashboard to monitor and manage the Kubernetes cluster.
2. | keycloak.xyz.net | Private IP of Nginx server for Observation cluster | Administrative IAM tool (Keycloak), used for Kubernetes administration.
3. | sandbox.xyz.net | Private IP of Nginx server for MOSIP cluster | Index page with links to the different dashboards of the MOSIP environment. (For reference only; do not expose this page in a real production or UAT environment.)
4. | api-internal.sandbox.xyz.net | Private IP of Nginx server for MOSIP cluster | Internal APIs are exposed through this domain. They are accessible privately over the Wireguard channel.
5. | api.sandbox.xyz.net | Public IP of Nginx server for MOSIP cluster | All publicly usable APIs are exposed using this domain.
6. | kibana.sandbox.xyz.net | Private IP of Nginx server for MOSIP cluster | Optional installation. Used to access the Kibana dashboard over Wireguard.
7. | kafka.sandbox.xyz.net | Private IP of Nginx server for MOSIP cluster | Kafka UI is installed as part of MOSIP's default installation. It can be accessed over Wireguard, mostly for administrative needs.
8. | iam.sandbox.xyz.net | Private IP of Nginx server for MOSIP cluster | MOSIP uses an OpenID Connect server to limit and manage access across all the services; the default installation comes with Keycloak. This domain is used to access the Keycloak server over Wireguard.
9. | postgres.sandbox.xyz.net | Private IP of Nginx server for MOSIP cluster | This domain points to the Postgres server. You can connect to Postgres via port forwarding over Wireguard.
10. | onboarder.sandbox.xyz.net | Private IP of Nginx server for MOSIP cluster | Access to reports of MOSIP partner onboarding over Wireguard.
11. | esignet.sandbox.xyz.net | Public IP of Nginx server for MOSIP cluster | Public access to the eSignet portal.
12. | healthservices.sandbox.xyz.net | Public IP of Nginx server for MOSIP cluster | Public access to the Health portal.
13. | smtp.sandbox.xyz.net | Private IP of Nginx server for MOSIP cluster | Access to the mock-SMTP UI over Wireguard.
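Once the records are created, resolution can be spot-checked from a machine connected over Wireguard; the expected IPs shown in the comments are placeholders.

# Spot-check DNS mappings (expected IPs are placeholders)
dig +short rancher.xyz.net               # private IP of the Observation Nginx server, e.g. 10.10.20.20
dig +short api-internal.sandbox.xyz.net  # private IP of the eSignet Nginx server, e.g. 10.10.20.30
dig +short esignet.sandbox.xyz.net       # public IP of the eSignet Nginx server, e.g. 203.0.113.10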

Certificate requirements

Since only secure HTTPS connections are allowed via the Nginx servers, the following valid SSL certificates are needed:

  1. Wildcard SSL Certificate for the Observation Cluster:

    • A valid wildcard SSL certificate for the domain used to access the Observation cluster.

    • This certificate must be stored inside the Nginx server VM for the Observation cluster.

    • For example, a domain such as *.org.net.

  2. Wildcard SSL Certificate for the eSignet K8s Cluster:

    • A valid wildcard SSL certificate for the domain used to access the eSignet Kubernetes cluster.

    • This certificate must be stored inside the Nginx server VM for the eSignet cluster.

    • For example, a domain such as *.sandbox.xyz.net.
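As one possible way to obtain such certificates, assuming Let's Encrypt and certbot with a DNS-01 challenge (any CA-issued wildcard certificate works just as well), the request looks roughly like this:

# Sketch only: wildcard certificate for the eSignet cluster domain via a DNS-01 challenge
sudo certbot certonly --manual --preferred-challenges dns \
  -d "*.sandbox.xyz.net" -d "sandbox.xyz.net"
# Copy the resulting fullchain.pem and privkey.pem to the corresponding Nginx server VM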

Tools to be installed on Personal Computers:

Follow the steps mentioned here to install the required tools on your personal computer to create and manage the K8s cluster using RKE1. Make sure to clone the k8s-infra GitHub repo for the required scripts in the above steps and perform the steps from the linked directory.

  • Install the Wireguard client on your PC.

  • kubectl

  • helm

  • Ansible

Installation

Below is a step-by-step guide to set up and configure the required components for secure and efficient operations.

Wireguard

Secure access solution that establishes private channels to Observation and eSignet clusters.

Note: If you already have a Wireguard bastion host then you may skip this step.

  • A Wireguard bastion host (Wireguard server) provides a secure private channel to access the Observation and eSignet cluster.

  • The host restricts public access and enables access to only those clients who have their public key listed in the Wireguard server.

  • Wireguard listens on UDP port 51820.

Setup Wireguard Bastion server

  1. Create a Wireguard server VM with the above-mentioned hardware and network requirements.

  2. Open ports and install Docker on the Wireguard VM:

  • Create a copy of hosts.ini.sample as hosts.ini and update the required details for the Wireguard VM:

    cp hosts.ini.sample hosts.ini

  • Execute ports.yaml to enable the required ports at the VM level using ufw (a manual ufw equivalent is sketched after the note below):

    ansible-playbook -i hosts.ini ports.yaml

Note:

  • The PEM files used to access the nodes should have 400 permissions: sudo chmod 400 ~/.ssh/privkey.pem

  • These ports only need to be opened for sharing packets over UDP.

  • Take the necessary measures at the firewall level so that the Wireguard server is publicly reachable on 51820/udp.

  • If you already have a Wireguard server for the VPC used you can skip the setup Wireguard Bastion server section.
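If you want to verify or open the Wireguard port manually instead of relying solely on the playbook, the equivalent ufw commands are roughly the following; ports.yaml remains the source of truth for the full port list.

sudo ufw allow 22/tcp        # keep SSH access to the VM
sudo ufw allow 51820/udp     # Wireguard
sudo ufw enable
sudo ufw status verbose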

  3. Execute docker.yaml to install Docker and add the user to the docker group:

ansible-playbook -i hosts.ini docker.yaml
  4. Set up the Wireguard server:

    • SSH to wireguard VM

    • Create a directory for storing wireguard config files.

    mkdir -p wireguard/config
    • Install and start the wireguard server using docker as given below:

    sudo docker run -d \
    --name=wireguard \
    --cap-add=NET_ADMIN \
    --cap-add=SYS_MODULE \
    -e PUID=1000 \
    -e PGID=1000 \
    -e TZ=Asia/Calcutta \
    -e PEERS=30 \
    -p 51820:51820/udp \
    -v /home/ubuntu/wireguard/config:/config \
    -v /lib/modules:/lib/modules \
    --sysctl="net.ipv4.conf.all.src_valid_mark=1" \
    --restart unless-stopped \
    ghcr.io/linuxserver/wireguard

Note:

  • Increase the no. of peers above in case more than 30 wireguard client confs (-e PEERS=30) are needed.

  • Change the directory to be mounted to wireguard docker as needed. All your wireguard confs will be generated in the mounted directory (-v /home/ubuntu/wireguard/config:/config).
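After the container starts, the generated peer configurations and the server status can be checked as follows; the paths assume the mount shown above.

sudo docker logs wireguard | tail -n 20      # confirm the wg0 interface came up without errors
sudo docker exec wireguard wg show           # server public key, listening port and peers
ls /home/ubuntu/wireguard/config             # peer1 ... peer30 directories with generated confs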

Set up the Wireguard client on your PC and follow the steps below:

  1. Assign wireguard.conf:

  • SSH to the wireguard server VM.

  • cd /home/ubuntu/wireguard/config

  • Assign one of the peer configurations to yourself and use the same from your PC to connect to the server.

  • Create an assigned.txt file to keep track of the peer files allocated, and update it every time a peer is allocated to someone.

    peer1 :   peername
    peer2 :   xyz
  • Use ls cmd to see the list of peers.

  • Get inside your selected peer directory, and add the mentioned changes in peer.conf:

    • cd peer1

    • nano peer1.conf

      • Delete the DNS IP.

      • Update the allowed IPs to subnets CIDR IPs. For example: 10.10.20.0/23

  • Share the updated peer.conf with the respective peers to connect to the Wireguard server from their personal PCs.

  • Add peer.conf in your PC’s /etc/wireguard directory as wg0.conf.

  2. Start the wireguard client and check the status:

sudo systemctl start wg-quick@wg0
sudo systemctl status wg-quick@wg0
  3. Once connected to wireguard, you should now be able to log in using private IPs.
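The two edits described above (removing the DNS entry and narrowing AllowedIPs) can also be scripted on the server before the file is copied to the client; the peer name and CIDR below are placeholders, and the generated peer file remains the reference.

# On the Wireguard server VM (peer name and subnet CIDR are placeholders)
cd /home/ubuntu/wireguard/config/peer1
sudo sed -i '/^DNS *=/d' peer1.conf                                      # remove the DNS line
sudo sed -i 's|^AllowedIPs *=.*|AllowedIPs = 10.10.20.0/23|' peer1.conf  # route only the VM subnet
# On the client PC, after copying the file over (e.g. via scp):
sudo cp peer1.conf /etc/wireguard/wg0.conf
sudo systemctl enable --now wg-quick@wg0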

Observation cluster setup and configuration

Observation K8s Cluster setup:

  1. Install all the required tools mentioned in the prerequisites for the PC.

  • rke (version 1.3.10)

  • istioctl (version v1.15.0)

  2. Set up the Observation Cluster node VMs as per the hardware and network requirements mentioned above.

    1. Set up passwordless SSH into the cluster nodes via PEM keys. (Ignore if the VMs are already accessible via PEMs.)

      • Generate keys on your PC: ssh-keygen -t rsa

      • Copy the keys to the remote Observation node VMs: ssh-copy-id <remote-user>@<remote-ip>

      • SSH into the node to check password-less SSH: ssh -i ~/.ssh/<your private key> <remote-user>@<remote-ip>

Note:

  • Make sure the permission for privkey.pem used for SSH is set to 400.

  3. Clone k8s-infra and move to the required directory as per the hyperlink.

  4. Set up the observation cluster by following the steps given here.

  5. Once the cluster setup is completed, set up the K8s cluster ingress and storage class following the steps.

  6. Once the Observation K8s cluster is created and configured, set up the Nginx server for it using the steps.

  7. Once the Nginx server for the Observation cluster is done, continue with the installation of the required apps:

    • Install Keycloak.

    • Install Rancher UI.

    • Keycloak & Rancher UI integration.

eSignet K8 Cluster setup:

  1. Clone the Kubernetes Infrastructure Repository:

Note: Make sure to use the released tag. Specifically v1.2.0.2.

git clone -b v1.2.0.2 https://github.com/mosip/k8s-infra.git

cd k8s-infra/mosip/onprem
  2. Create a copy of hosts.ini.sample as hosts.ini and update the IP addresses (an illustrative inventory sketch is shown after this list).

  3. Set up the pre-requisites on your personal computer.

  4. Execute ports.yaml to open all the required ports.

  5. Install Docker on all the required VMs.

  6. Create the RKE1 K8 cluster for hosting the eSignet services.

  7. Import the newly created K8 cluster into the Rancher UI.
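For orientation, a filled-in hosts.ini is a standard Ansible inventory along the lines of the sketch below; the group name, IPs, user, and key path are placeholders, and the authoritative template is hosts.ini.sample in the cloned repository.

# Sketch of a filled-in hosts.ini (group name, IPs, user and key path are placeholders)
cat > hosts.ini <<'EOF'
[k8s-cluster-nodes]
node1 ansible_host=10.10.20.31 ansible_user=ubuntu ansible_ssh_private_key_file=~/.ssh/privkey.pem
node2 ansible_host=10.10.20.32 ansible_user=ubuntu ansible_ssh_private_key_file=~/.ssh/privkey.pem
node3 ansible_host=10.10.20.33 ansible_user=ubuntu ansible_ssh_private_key_file=~/.ssh/privkey.pem
EOF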

eSignet K8 Cluster Configuration:

  1. Set up NFS for persistence in the K8 cluster as well as on a standalone VM (Nginx VM).

  2. Set up Monitoring for the K8 cluster.

  3. Set up Logging for the K8 cluster.

  4. Set up Istio and Kiali.

Nginx for eSignet K8 Cluster:

Set up Nginx for exposing services from the newly created eSignet K8 cluster (a rough configuration sketch follows).
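As a rough sketch of the SSL-termination and reverse-proxy role described earlier (the actual configuration comes from the referenced Nginx setup steps; the backend address, certificate paths, and NodePort are placeholders):

# Minimal sketch: TLS termination and reverse proxy to the cluster ingress (all values are placeholders)
sudo tee /etc/nginx/conf.d/esignet.conf > /dev/null <<'EOF'
server {
    listen 443 ssl;
    server_name esignet.sandbox.xyz.net;
    ssl_certificate     /etc/nginx/certs/fullchain.pem;
    ssl_certificate_key /etc/nginx/certs/privkey.pem;
    location / {
        proxy_pass http://10.10.20.31:30080;   # ingress NodePort on a cluster node (placeholder)
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
    }
}
EOF
sudo nginx -t && sudo systemctl reload nginx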

Install eSignet and pre-requisite services:

  1. Clone the eSignet repository: (select tag based upon the compatibility matrix)

git clone -b <tag> https://github.com/mosip/esignet.git
cd esignet
cd deploy

Follow the pre-requisites deployment steps:

  1. Install the pre-requisites for eSignet from the deploy directory.

  2. Initialise the pre-requisites for the eSignet services.

  3. Install the eSignet and OIDC UI services.

  4. Onboard eSignet as per the plugin used for deployment.

  5. Set up the api-testrig for detailed automated test case execution.
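After the deploy scripts complete, a quick sanity check with kubectl and helm is useful; the namespace name below is an assumption, so use whichever namespace your install scripts created.

kubectl get pods -n esignet                # all eSignet pods should reach Running/Ready
helm list -n esignet                       # releases installed by the deploy scripts
kubectl get ingress -A | grep -i esignet   # confirm the services are exposed via the ingress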

Install eSignet mock services:

  1. Clone the respective repo: (select tag based upon the compatibility matrix)

git clone -b <tag> https://github.com/mosip/esignet-mock-services.git
cd esignet-mock-services

  2. Install the pre-requisites for the eSignet mock services.

  3. Install the eSignet mock services.

  4. Onboard the eSignet mock services.

Install eSignet signup and its pre-requisites services:

  1. Clone the respective repo: (select tag based upon the compatibility matrix)

git clone -b <tag> https://github.com/mosip/esignet-signup.git
cd esignet-signup

  2. Install the pre-requisites for the eSignet signup services.

  3. Install the eSignet signup services.

  4. Deploy the dependencies for the eSignet signup onboarder following the steps.

  5. Onboard the eSignet signup services.
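Once everything is installed and onboarded, a simple end-to-end check is to confirm that the publicly exposed domains from the DNS table respond over HTTPS; the exact paths and response codes depend on the ingress routing of your environment.

curl -skI https://esignet.sandbox.xyz.net/ | head -n 1          # eSignet portal
curl -skI https://healthservices.sandbox.xyz.net/ | head -n 1   # mock relying party / Health portal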

