UiPath Orchestrator — Multi-Node Architecture
Introduction to UiPath Orchestrator:
UiPath Orchestrator is a web application developed by UiPath to streamline the orchestration of robots executing repetitive or rule-based business processes. This powerful application plays a vital role in managing resources for automation projects and their utilization by robots. It provides hierarchical structuring and fine-grained role assignments for efficient access control, making it an essential integration point for third-party solutions and applications.
“One of Orchestrator’s primary strengths lies in its ability to effectively oversee your entire fleet of robots”
UiPath offers a range of deployment possibilities for its platform, including single-node, multi-node, high availability, active/passive, and active/active configurations. In a multi-node deployment, UiPath components are spread across multiple machines, enhancing performance, scalability, and resilience for optimized operational efficiency.
Basic Understanding of Multi-Node Architecture:
UiPath Multi-Node Orchestrator is a way of setting up UiPath Orchestrator by spreading its parts across multiple machines. The Orchestrator is like a control center for UiPath robots, handling tasks like automating processes, scheduling, monitoring, and managing robot resources.
In this setup, different parts like the web application server, database, and message queue are put on separate machines. This setup has some benefits:
- Better Performance: Sharing the workload across machines improves how well everything works and makes the system more responsive.
- Easy Scalability: It’s simple to add more machines if you need to handle more automation tasks.
- Always Available: If one machine has a problem, another can take over, making sure there’s minimal downtime and the system keeps running.
- Stronger Against Issues: Spreading out the parts makes the system more robust, so if one part has a problem, it’s less likely to affect everything else.
In a nutshell, UiPath Multi-Node Orchestrator is built to give organizations a strong and flexible system for managing their UiPath robots, especially when they have different automation needs.
Key Components of Multi-Node Architecture:
UiPath Multi-Node Orchestrator involves distributing the key components of UiPath Orchestrator across multiple machines or nodes to enhance performance, scalability, and resilience. The key components include:
- UiPath Orchestrator Scale Set: It is advisable to install the individual Orchestrator applications on dedicated Windows servers. Ensure that all Orchestrator nodes maintain consistent IIS configurations, access rights, and security policies. Additionally, it is recommended to configure a service Windows user account specifically for the UiPath Orchestrator scale set. This account can then be utilized for establishing the database connection.
- Scalable SQL Database: The configuration of the database server should align with the number of Orchestrator nodes in production. Many cloud databases offer scalability, so it is advisable to collaborate with the database administrator to establish best practices and ensure proper access privileges are set up.
- UiPath Robots: These operate within an environment that can be configured on either virtual or physical machines, which are accessed remotely using Robot accounts. The key distinction is that this environment must be configured for rapid scalability so that new Robots can be added efficiently. A popular choice for achieving this scalability is running Robots in Kubernetes containers.
- Elasticsearch (ES) Cluster Environment: It is advisable to establish a dedicated Windows server to host both the ES and Kibana applications for building this cluster. The primary objective is to provide a scalable solution as the volume of logs generated by the Robots’ executed jobs increases. Given the critical role of logs in maintaining the operational health of UiPath, it is essential to prioritize the setup and maintenance of this cluster for effective support and monitoring.
- High Availability Add-on (HAA) Cluster Environment: The HA add-on ensures redundancy and stability for your multi-node Orchestrator deployment by providing resistance to failures. In an HAA configuration, if one Orchestrator or HAA node fails, the other nodes are activated, allowing processing to seamlessly “failover” to the remaining nodes in the cluster. This setup also allows for horizontal scalability, enabling the addition of nodes to accommodate an increase in Robot counts. UiPath HAA is essentially the Redis Original Equipment Manufacturer (OEM) version for UiPath. Redis is employed for in-memory data structure storage, specifically for caching, which enhances the application’s performance. It is important to note that UiPath’s support contract exclusively covers the multi-node setup with UiPath HAA. Consequently, it is not merely a nice-to-have component; rather, it is a critical one.
- Load Balancer (LB) for Orchestrator Servers: Load balancers play a crucial role in redirecting traffic to different Orchestrator nodes that result from various client requests. It is advisable to configure a load balancer URL for Orchestrator login to ensure continued access to the application even if a node experiences downtime.
- Load Balancer (LB) for Elasticsearch (ES) Servers: The same principle is applicable to redirect connections from Orchestrator to ES servers. In cases where there are multiple ES servers, the implementation of a load balancer becomes essential. Additionally, there may be scenarios where numerous ES shards are distributed across different servers, and in such cases, utilizing a load balancer is advisable.
- UiPath Identity Server: This service offers centralized authentication across all UiPath products and is an integral part of the on-premises Orchestrator installer. It is advised that the enterprise Identity and Access Management (IAM) team take charge of configuring the Identity Server component in alignment with existing enterprise standards.
- Azure Redis Cache: Multi-node Orchestrator deployments use RESP (the Redis Serialization Protocol) for communication and can therefore be configured with any solution implementing this protocol, such as Azure Redis Cache. For a multi-node deployment, it is recommended to use two separate Redis instances:
- Azure Redis Cache Premium with a 6 GB cache, the primary instance, used for session state and user-entity associations
- Azure Redis Cache Basic, used to scale the SignalR service.
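As an illustrative sketch, the two instances above could be provisioned with the Azure CLI. The resource group, instance names, and region below are placeholder assumptions, and the tier sizes should be validated against current Azure Cache for Redis pricing (Premium P1 corresponds to a 6 GB cache):

```shell
#!/usr/bin/env bash
# Hypothetical sketch: provision the two Redis instances with the Azure CLI.
# Resource group, names, and region are placeholders.
create_orchestrator_redis() {
  local rg="rg-uipath" location="eastus"
  # Primary instance: session state and user-entity associations (Premium P1 = 6 GB)
  az redis create --resource-group "$rg" --location "$location" \
    --name redis-orch-primary --sku Premium --vm-size p1
  # Second instance: used to scale the SignalR service (Basic tier)
  az redis create --resource-group "$rg" --location "$location" \
    --name redis-orch-signalr --sku Basic --vm-size c1
}
# Uncomment to run against a live subscription:
# create_orchestrator_redis
```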
Installation/configuration of main components:
1. Installation of HAA:
The HAA add-on installation script needs to be downloaded from the UiPath repository and installed on each Linux machine.
Installing the primary node:
- SSH into the primary node with root permissions.
- Create the directory where you want to download and extract HAA.
- Download the get-haa.sh installation script. Example:
wget https://download.uipath.com/haa/get-haa.sh
- Make the get-haa.sh script executable:
chmod a+x get-haa.sh
- Install the primary node and ensure that you provide an email address and password for the administrator account. You may use a temporary email address. Additionally, please specify the operating system currently running on the node. Example:
sudo ./get-haa.sh -u <email> -p <password> -o <OS> --accept-license-agreement
Installing the secondary node:
- SSH into the secondary node with root permissions.
- Create the directory where you want to download and extract HAA.
- Download the get-haa.sh installation script. Example:
wget https://download.uipath.com/haa/get-haa.sh
- Make the get-haa.sh script executable:
chmod a+x get-haa.sh
- Install the secondary node. Example:
sudo ./get-haa.sh -u <email> -p <password> -o <OS> -j <IP_address_of_the_master_node> --accept-license-agreement
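The installation steps above can be collected into one hedged helper script. It assumes a user with sudo rights, and the email, password, OS value, and install directory are placeholders; passing the primary node's IP as the fourth argument turns it into a secondary-node install:

```shell
#!/usr/bin/env bash
# Sketch of the HAA install steps above as a single helper.
# Assumptions: sudo rights; /opt/haa is a placeholder install directory.
set -euo pipefail

install_haa_node() {
  local email="$1" password="$2" os="$3" join_ip="${4:-}"
  mkdir -p /opt/haa && cd /opt/haa
  # Download the installer and make it executable (same commands as above)
  wget https://download.uipath.com/haa/get-haa.sh
  chmod a+x get-haa.sh
  if [ -z "$join_ip" ]; then
    # Primary node: no join address
    sudo ./get-haa.sh -u "$email" -p "$password" -o "$os" --accept-license-agreement
  else
    # Secondary node: join the primary via -j
    sudo ./get-haa.sh -u "$email" -p "$password" -o "$os" -j "$join_ip" --accept-license-agreement
  fi
}

# install_haa_node admin@example.com 'S3cret!' rhel7                # primary
# install_haa_node admin@example.com 'S3cret!' rhel7 10.10.20.184   # secondary
```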
Configuring UiPath.Orchestrator.dll.config:
To enable High Availability and Add-Node functionality in Orchestrator, configure it to use HAA (High Availability Add-On). Incorporate all HAA nodes into Orchestrator’s UiPath.Orchestrator.dll.config file using the LoadBalancer.UseRedis and LoadBalancer.Redis.ConnectionString parameters. For instance:
<add key="LoadBalancer.UseRedis" value="true" />
<add key="LoadBalancer.Redis.ConnectionString" value="10.10.20.184:10000,10.10.24.148:10000,10.10.22.114:10000,password=SuperSecret_Password" />
For more details, refer to:
https://docs.uipath.com/orchestrator/standalone/2023.10/installation-guide/haa-installation
https://docs.uipath.com/orchestrator/standalone/2023.10/installation-guide/high-availability
2. Configuring the Azure Load Balancer:
Configuring Azure Load Balancer for UiPath multi-node deployment involves several key steps to ensure high availability and load distribution across multiple nodes. Below is a detailed description of the main steps:
Step 1: Adding a frontend IP
In Azure Load Balancer, adding a frontend IP address involves configuring the public or private IP address that clients will use to access the services hosted behind the load balancer. The frontend IP is associated with the load balancer, and it serves as the entry point for incoming traffic.
Step 2: Creating the backend pool and add nodes
Creating the backend pool and adding nodes in Azure Load Balancer involves configuring the set of virtual machines or instances that will receive and process incoming traffic. The backend pool is a collection of these instances, and the load balancer distributes traffic among them based on defined rules.
Step 3: Adding health probes
Adding health probes in Azure Load Balancer is a crucial step in ensuring the availability and health of backend instances (nodes) within the load balancer’s backend pool. Health probes are used to periodically check the status of each backend instance, allowing the load balancer to route traffic only to healthy instances.
Step 4: Adding a load balancing rule
Adding a load balancing rule in Azure Load Balancer involves defining how incoming network traffic should be distributed among the backend pool instances. Load balancing rules help determine which ports on the load balancer should listen for incoming traffic and how that traffic should be distributed among the backend instances.
Step 5: Creating a load balancer domain name
In Azure Load Balancer, there isn’t a specific concept of creating a load balancer domain name directly associated with the load balancer itself. However, when you configure a load balancer to distribute traffic to backend instances, you typically use a domain name associated with the public IP address assigned to the load balancer.
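As an illustrative sketch, Steps 1 through 4 above map onto Azure CLI calls roughly as follows. Every resource name is a placeholder, and the health probe path /api/status assumes Orchestrator's status endpoint is exposed over HTTPS:

```shell
#!/usr/bin/env bash
# Hypothetical Azure CLI sketch of Steps 1-4 (all resource names are placeholders).
configure_orchestrator_lb() {
  local rg="rg-uipath" lb="lb-orchestrator"

  # Step 1: public frontend IP and the load balancer itself
  az network public-ip create -g "$rg" -n pip-orch --sku Standard
  az network lb create -g "$rg" -n "$lb" --sku Standard \
    --public-ip-address pip-orch --frontend-ip-name fe-orch \
    --backend-pool-name be-orch-nodes

  # Step 2: add each Orchestrator node's NIC ip-config to the backend pool
  az network nic ip-config address-pool add -g "$rg" \
    --nic-name nic-orch-node1 --ip-config-name ipconfig1 \
    --lb-name "$lb" --address-pool be-orch-nodes

  # Step 3: HTTPS health probe (assumes Orchestrator's /api/status endpoint)
  az network lb probe create -g "$rg" --lb-name "$lb" -n probe-orch \
    --protocol Https --port 443 --path /api/status

  # Step 4: forward frontend port 443 to the backend pool
  az network lb rule create -g "$rg" --lb-name "$lb" -n rule-https \
    --protocol Tcp --frontend-port 443 --backend-port 443 \
    --frontend-ip-name fe-orch --backend-pool-name be-orch-nodes \
    --probe-name probe-orch
}
# configure_orchestrator_lb
```

Step 5 is typically handled outside the load balancer itself, for example by pointing a DNS record (or the public IP's --dns-name label) at the frontend IP.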
For more details, refer to https://docs.uipath.com/automation-suite/automation-suite/2023.4/installation-guide/azure-infrastructure-configuring-load-balancer
3. Certificate considerations during installation
There are two essential certificates that Orchestrator necessitates for seamless operations:
Orchestrator SSL Certificate: This certificate is vital for establishing secure, encrypted communication between Robots and Orchestrator. While it is highly recommended to use an SSL certificate approved by a Certificate Authority for enhanced security, a self-signed certificate is also a viable option.
Identity Server Token-Signing Certificate: This certificate plays a crucial role in user authentication, as it holds the private key. To modify these certificates, the configuration of the Identity Server must be adjusted accordingly.
Refer to the UiPath documentation for more details about certificates.
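For a lab or proof-of-concept setup, a self-signed Orchestrator SSL certificate can be generated with OpenSSL. The hostname below is a placeholder, and production deployments should use a CA-issued certificate, as recommended above:

```shell
# Generate a self-signed certificate and key (hostname is a placeholder;
# use a CA-issued certificate for production, as recommended above).
openssl req -x509 -newkey rsa:2048 -sha256 -days 365 -nodes \
  -keyout orchestrator.key -out orchestrator.crt \
  -subj "/CN=orchestrator.example.local"

# Bundle into a PFX for import into the Windows certificate store / IIS
# (the export password "ChangeMe" is a placeholder).
openssl pkcs12 -export -inkey orchestrator.key -in orchestrator.crt \
  -out orchestrator.pfx -passout pass:ChangeMe
```

The resulting .pfx can then be imported into the Windows certificate store on each Orchestrator node and bound to the site in IIS.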
Why consider a UiPath Multi-Node Orchestrator instance over a Single-Node instance:
Choosing a UiPath Multi-Node Orchestrator instance over a Single-Node setup provides numerous benefits tailored to scalability, performance, and fault tolerance. Below are key reasons to consider a Multi-Node Orchestrator configuration:
Scalability:
Enhanced Workload Handling: Multi-Node Orchestrator distributes workloads across multiple nodes, significantly boosting the system’s capability to manage a larger volume of processes and robots simultaneously.
Vertical and Horizontal Scaling: The option to scale vertically by augmenting resources to a single node or horizontally by adding more nodes offers flexibility aligned with the dynamic growth of automation requirements.
Improved Performance:
Load Balancing: Multi-Node Orchestrator instances implement load balancing, ensuring equitable distribution of tasks among nodes. This minimizes bottlenecks and elevates overall system performance.
Optimal Resource Utilization: Distributing processes across multiple nodes optimizes resource utilization, mitigating the risk of resource contention and maximizing efficiency.
High Availability:
Redundancy: In a Multi-Node setup, if one node encounters a failure, the system can automatically redirect the workload to other available nodes. This redundancy enhances system reliability, ensuring uninterrupted operation even in the event of a node failure.
Fault Tolerance:
Resilience to Node Failures: With multiple nodes, the Orchestrator environment becomes more resilient to individual node failures. If one node becomes unavailable, other nodes can seamlessly continue processing tasks, minimizing the impact on overall automation operations.
Separation of Roles:
Dedicated Nodes for Specific Functions: A Multi-Node Orchestrator setup allows the dedication of specific nodes to handle distinct functions such as scheduling, processing, and storage. This segregation of roles enhances the efficiency and maintainability of the Orchestrator environment.
Geographical Distribution:
Support for Multiple Locations: Multi-Node Orchestrator instances can be deployed across different geographical locations, facilitating a distributed approach. This proves beneficial for organizations with global operations or those seeking to optimize automation processing based on regional requirements.
Conclusion:
In conclusion, a UiPath Multi-Node Orchestrator instance brings advantages such as increased scalability, improved performance, high availability, fault tolerance, and adaptability to evolving automation needs. This configuration is particularly beneficial for organizations experiencing growth in automation demands, aiming for a resilient, efficient, and robust Orchestrator environment.