 
Installing a BIRT iHub cluster node
A node is a machine running a BIRT iHub instance. An iHub administrator adds a node to an iHub cluster to improve availability and throughput and to scale the installation to meet processing requirements.
There are two methods of adding a new node to the cluster:
*Perform an automated, custom installation, using the wizard-driven installation program.
*Perform a manual installation or cloud deployment, using a prepared image of an installed iHub run-time environment.
Every cluster node must have network access to the following directory and resources to join the cluster:
*The shared configuration home directory
*Cluster resources, such as printers, database systems, and disk storage systems
Each node gets its configuration from a template in acserverconfig.xml, which is located in a shared configuration home directory along with the license file, acserverlicense.xml.
The acserverconfig.xml file contains the server templates as well as other configuration parameters specifying the host names, volume names, port numbers, printers, and services used by nodes in the cluster. When the Process Management Daemon (PMD) starts up, it reads these configurations and exposes the settings to the process environment variable list. When a node joins a cluster, it configures itself using its designated template.
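For illustration, the following fragment sketches how a server template might group host, port, and service settings in a shared configuration file of this kind. The element and attribute names are assumptions made for this sketch, not the exact acserverconfig.xml schema; treat the file shipped with the installation as the authoritative reference.
<!-- Illustrative sketch only; element and attribute names are assumed,
     not the actual acserverconfig.xml schema. -->
<Config>
  <!-- A template that a joining node applies to configure itself -->
  <ServerTemplate Name="StandardNode">
    <HostName>ihub-node1.example.com</HostName>
    <PortNumber>8000</PortNumber>
    <Services>
      <Service Name="ViewingService" Enabled="true"/>
      <Service Name="FactoryService" Enabled="true"/>
    </Services>
  </ServerTemplate>
  <!-- Shared resources that all nodes in the cluster reference -->
  <Volume Name="Default Volume"/>
  <Printer Name="FinancePrinter"/>
</Config>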
After installing iHub and configuring the appropriate environment variables in acpmdconfig.xml, the administrator launches the installed iHub image from the command line, passing the necessary arguments, or creates a script that executes the command. Nodes with the same cluster ID, running on the same subnet, automatically detect and join each other to form the cluster. This feature is known as elastic iHub clustering.
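As an illustration of the environment settings mentioned above, the fragment below sketches the kind of entries in acpmdconfig.xml that point a node at the shared configuration home and assign it a cluster ID. The element and variable names are assumptions for this sketch, not the actual acpmdconfig.xml schema; consult the installed file for the exact entries.
<!-- Illustrative sketch only; element and variable names are assumed,
     not the actual acpmdconfig.xml schema. -->
<PMDConfig>
  <Environment>
    <!-- Shared configuration home that every node can reach over the network -->
    <Variable Name="AC_CONFIG_HOME" Value="//fileserver/ihub/shared/config"/>
    <!-- Nodes with the same cluster ID on the same subnet join one cluster -->
    <Variable Name="CLUSTER_ID" Value="ihub_cluster_01"/>
  </Environment>
</PMDConfig>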
The cluster communicates across the network using standard HTTP/IP addressing and automatically detects whether each node is online or offline. Failure of a single node does not affect the availability of the other nodes.
One or more nodes in the cluster manage the request message routing. The Process Management Daemons (PMDs) located on each node coordinate processing among available iHub services based on message type to balance load across the nodes.
iHub instances running on multiple machines maintain iHub system and Encyclopedia volume metadata in databases and control access to shared volume data. The volume data can be on machines that are not running iHub, but must be shared and accessible to each iHub instance.
This loosely coupled cluster model provides the following maintenance and performance benefits:
*Startup and shutdown of an iHub is fast because it is independent of the RDBMS that manages the Encyclopedia volume. The RDBMS can remain online when an iHub shuts down and is already available when the iHub starts up.
*Controlling the sequence of Encyclopedia volume startup is not necessary. All volumes are either already online in the RDBMS or come online as the RDBMS starts.
*Downtime to apply a patch or a diagnostic fix to an iHub is reduced. The RDBMS, including the out-of-the-box (OOTB) PostgreSQL database server, does not have to be shut down. In an iHub cluster, the patch or diagnostic fix can be applied to one iHub node at a time.
This operational model lends itself well to grid, cloud, and other data-center deployments. For more information about the pre-packaged Actuate cloud computing deployment option, see Chapter 6, “Installing BIRT iHub in a cloud,” later in this book. For more information about administering an installed iHub cluster, see Chapter 9, “Clustering,” in Configuring BIRT iHub.