Figure 1‑1 illustrates the BIRT iHub system architecture for a multi-volume, out‑of-the-box (OOTB) PostgreSQL database configuration. In this configuration, the iHub administrator starts and stops an iHub instance by running scripts from the command line or using the graphical user interface (GUI) available in System Console.
Client applications, such as System Console and Visualization Platform, run in a servlet container. Single-Sign-On (SSO) security using Security Assertion Markup Language (SAML) provides access to the BIRT iHub system.
Figure 1‑1 BIRT iHub Release 3 system architecture
BIRT iHub supports administering security internally through iHub system services or externally using Report Server Security Extension (RSSE) services, such as LDAP or Active Directory. Client applications communicate with BIRT iHub through SOAP messaging using the Actuate Information Delivery API (IDAPI).
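The following sketch is not taken from the product documentation; it uses only the standard Java SAAJ API (javax.xml.soap, bundled with Java 8 and available as a separate dependency in later Java versions) to show the general shape of a SOAP request and response exchange such as an IDAPI client performs. The endpoint URL, namespace, and operation name are placeholders, not documented IDAPI values.

    import javax.xml.namespace.QName;
    import javax.xml.soap.*;

    public class IdapiSoapSketch {
        public static void main(String[] args) throws Exception {
            // Build an empty SOAP message using the standard SAAJ API.
            SOAPMessage request = MessageFactory.newInstance().createMessage();
            SOAPBody body = request.getSOAPPart().getEnvelope().getBody();

            // Placeholder namespace and operation name; the real IDAPI schema defines its own.
            QName operation = new QName("http://example.com/idapi-placeholder", "GetSystemStatus", "idapi");
            body.addBodyElement(operation);
            request.saveChanges();

            // Send the request to a placeholder iHub endpoint and print the SOAP response.
            SOAPConnection connection = SOAPConnectionFactory.newInstance().createConnection();
            SOAPMessage response = connection.call(request, "http://ihub-host:8000/acsoap-placeholder");
            response.writeTo(System.out);
            connection.close();
        }
    }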
The Process Management Daemon (PMD), or ihubd process, handles the services and processes defined for the cluster node in acpmdconfig.xml. The iportal process running in Visualization Platform routes messages to the nodes in the cluster in a round-robin manner when the Message Distribution service (MDS) is enabled, which is the default setting. To disable MDS processing, set the MDS_ENABLED property to false in the web.xml file in iHub/web/iportal/WEB-INF.
When a message reaches iHub, the receiving node handles administrative and provisioning messages locally. A built-in load-balancing program dispatches report generation and viewing requests based on the cluster service configuration, the current load on the Java Factory and viewing processes, and the work units available on each node.
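As a conceptual illustration only, the following Java sketch shows one way a dispatcher could weight nodes by their free work units. The Node type, its fields, and the selection policy are assumptions for illustration; they do not represent the actual BIRT iHub load-balancing implementation.

    import java.util.Comparator;
    import java.util.List;

    // Conceptual sketch: prefer the node with the most free work units.
    class WorkUnitDispatcher {
        static class Node {
            final String host;
            final int configuredWorkUnits;  // work units specified for the node
            int workUnitsInUse;             // current load on Factory and View processes

            Node(String host, int configuredWorkUnits) {
                this.host = host;
                this.configuredWorkUnits = configuredWorkUnits;
            }

            int freeWorkUnits() {
                return configuredWorkUnits - workUnitsInUse;
            }
        }

        // Pick a node for a report generation or viewing request.
        static Node selectNode(List<Node> clusterNodes) {
            return clusterNodes.stream()
                    .filter(n -> n.freeWorkUnits() > 0)
                    .max(Comparator.comparingInt(Node::freeWorkUnits))
                    .orElseThrow(() -> new IllegalStateException("No node has free work units"));
        }
    }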
When a BIRT iHub node receives a request, iHub deserializes the SOAP message, performs the appropriate action, and sends a response in the form of a SOAP message back to the application. For example, BIRT iHub receives a request to run a design, such as a BIRT design, immediately or as a scheduled job. BIRT iHub communicates with the internal framework and the cluster and volume metadata database as necessary to locate the design and identify the resources required to run it.
The reporting engine selects a Java Factory service to run the BIRT design and checks job status. BIRT iHub uses a synchronous Java Factory service to generate a temporary document and an asynchronous Java Factory service to generate a scheduled document.
The View service renders the document in DHTML format, or converts the output to other supported formats, such as CSV or PDF, and handles requests to download files from the volume. The View service sends the document to the requesting application for viewing.
A design that uses an information object relies on the Actuate Integration service (AIS) to extract and cache data from an external data source, performing the following processing, as illustrated in the sketch after this list:
Run a query to extract data from an external data source.
Cache data in iHub System for high availability and to reduce load on the network, data source, and volume by avoiding repetitive data retrieval operations.
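The following Java sketch illustrates the extract-and-cache pattern described above in its simplest form. The ExternalDataSource interface, the cache keyed by query text, and the row representation are assumptions for illustration, not the actual AIS implementation.

    import java.util.List;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Conceptual sketch of extract-and-cache processing.
    class DataCacheSketch {
        interface ExternalDataSource {
            List<Object[]> runQuery(String query);  // extract rows from the external source
        }

        private final Map<String, List<Object[]>> cache = new ConcurrentHashMap<>();
        private final ExternalDataSource source;

        DataCacheSketch(ExternalDataSource source) {
            this.source = source;
        }

        // Return cached rows when available; otherwise run the query once and cache the
        // result, avoiding repeated retrieval from the external data source.
        List<Object[]> fetch(String query) {
            return cache.computeIfAbsent(query, source::runQuery);
        }
    }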
The PostgreSQL RDBMS runs as a service in Windows or a process in Linux. The RDBMS can be configured to start automatically or run manually, using a script similar to the BIRT iHub startup script.
iHub stores cluster and volume metadata in the third-party RDBMS, communicating with the RDBMS as necessary using JDBC. iHub uses the physical file system to read and store designs, documents, and other iHub objects as data in volume storage locations.
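The following Java sketch shows the kind of JDBC access involved, assuming a PostgreSQL metadata database. The JDBC URL, port, credentials, and the table and column names are placeholders; they do not reflect the actual iHub metadata schema.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    // Sketch of metadata access over JDBC; all identifiers below are placeholders.
    public class MetadataQuerySketch {
        public static void main(String[] args) throws Exception {
            String url = "jdbc:postgresql://localhost:5432/ihub_metadata_placeholder";
            try (Connection conn = DriverManager.getConnection(url, "ihub_user", "secret");
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery("SELECT volume_name FROM volumes_placeholder")) {
                while (rs.next()) {
                    System.out.println(rs.getString("volume_name"));
                }
            }
        }
    }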
The out-of-the-box (OOTB) iHub PostgreSQL installation configures the volume database on the local disk to increase the reliability and performance of file input and output (I/O) operations. For these reasons, PostgreSQL discourages creating databases accessed over a Network File System (NFS). For more information, see section 17.2.1, Network File Systems, at the following URL:
The iHub OOTB PostgreSQL RDBMS starts multiple server processes to handle connections that run queries accessing the metadata. In database jargon, PostgreSQL uses a process-per-user client/server model. For more information, refer to the PostgreSQL documentation at the following URL:
A cluster node is a machine running a BIRT iHub instance. The system administrator adds a node to a cluster to scale the BIRT iHub System to meet processing requirements. Every cluster node must have network access to the following directory and resources to join the cluster:
The shared configuration directory
Cluster resources, such as printers, database systems, and disk storage systems
Each node gets its configuration from a template in acserverconfig.xml, which is located in a shared configuration home directory along with the license file, acserverlicense.xml.
The acserverconfig.xml file contains the server templates as well as other configuration parameters specifying the host names, volume names, port numbers, printers, and services used by nodes in the cluster. When the Process Management Daemon (PMD) starts up, it reads these configurations and exposes them to the process environment variable list. When a node joins a cluster, it configures itself using its template.
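As a conceptual illustration of reading a configuration file and exposing its values to a process environment, the following Java sketch parses an XML file with the standard DOM API and copies the root element attributes into a child process environment. The attribute handling and the placeholder command are assumptions for illustration; they do not reflect the acserverconfig.xml schema or the PMD implementation.

    import java.io.File;
    import java.util.Map;
    import javax.xml.parsers.DocumentBuilderFactory;
    import org.w3c.dom.Document;
    import org.w3c.dom.NamedNodeMap;
    import org.w3c.dom.Node;

    // Conceptual sketch: copy XML attributes into a child process environment list
    // (the child process is configured here but not launched).
    public class ConfigToEnvSketch {
        public static void main(String[] args) throws Exception {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new File(args[0]));   // path to a configuration file

            ProcessBuilder child = new ProcessBuilder("some-ihub-service-placeholder");
            Map<String, String> env = child.environment();

            // Copy every attribute of the document element into the environment map.
            NamedNodeMap attributes = doc.getDocumentElement().getAttributes();
            for (int i = 0; i < attributes.getLength(); i++) {
                Node attribute = attributes.item(i);
                env.put(attribute.getNodeName().toUpperCase(), attribute.getNodeValue());
            }
            System.out.println(env);
        }
    }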
After installing BIRT iHub and configuring the appropriate environment variables in acpmdconfig.xml, the system administrator launches the installed BIRT iHub image from System Console or from the command line, passing the necessary arguments or creating a script to execute the command. Nodes with the same cluster ID, running on the same subnet, automatically detect and join each other to form the cluster. This feature is known as elastic iHub clustering.
The cluster automatically detects the online or offline status of every node. A single node failure does not affect the availability of other nodes.
The cluster communicates across the network using standard HTTP/IP addressing. The Process Management Daemons (PMDs) located on each node coordinate processing among the available BIRT iHub services based on message type to balance the workload across the nodes.
This loosely coupled model provides the following improvements to intra-cluster messaging:
Each node in the cluster is relatively independent and identical in terms of components and functionality. Intra-cluster messages are limited to messages for cluster membership and load balancing.
Operations such as design execution and viewing typically require intermediate information from the volume metadata database. This information is now retrieved from or updated in the RDBMS directly, eliminating internal messages to services on other nodes.
This increased scalability of operations at the cluster level can, however, create bottlenecks in the metadata database. Important factors to consider when configuring nodes and ancillary resources include estimating processing power and ensuring access to hardware and software resources, such as printers and database drivers.
BIRT iHub instances running on multiple machines maintain cluster and volume metadata in a database, which controls access to shared volume data. This data can be on machines that are not running BIRT iHub, but must be shared and accessible to each iHub instance.
This loosely coupled cluster model provides the following maintenance and performance benefits:
Startup and shutdown of a BIRT iHub node are fast because they are independent of the RDBMS that manages the cluster and volume. The RDBMS can remain online when a BIRT iHub node shuts down and is already available when the node starts up.
Controlling the sequence of node and volume startup is not necessary. All nodes and volumes are either already online or come online as the RDBMS starts.
Downtime to apply a patch or a diagnostic fix to a BIRT iHub node is reduced. The RDBMS, including the OOTB PostgreSQL database server, does not have to be shut down. In a BIRT iHub cluster, the patch or diagnostic fix can be applied to one node at a time.
This operational model lends itself well to grid, cloud, and other data-center types of deployments.