Installing a BIRT iHub cluster : Adding a node to a cluster
 
Adding a node to a cluster
After installing a node on a machine, the administrator must still configure sharing and add the node to the cluster. When adding a node to a cluster setup, the administrator must verify that the configuration home directory specified during the install procedure points to the shared configuration home directory and that all Encyclopedia volume resources are accessible.
The following section refers to the machine containing the shared configuration directory as node1 and the cluster node accessing these shared resources as node2. The following example assumes that both the configuration folder and the Encyclopedia volume folders are located on node1, although in a more complex installation, these configuration and volume resources may reside in another network location.
Before performing a cluster node installation, the administrator performs the following tasks:
*On node1, the administrator shares the configuration folder and any Encyclopedia volume folders that a cluster node accesses.
*On node2, the administrator:
*Creates folders on which to mount the node1 shared folders
*Creates a mapping between the node1 and node2 shared folders
*Mounts the node1 shared folders on the node2 machine
It is the responsibility of the administrator performing the installation to make sure that all settings conform to the security policies in force for the environment.
The following instructions provide a basic reference example of the operations required to configure folder sharing in a Linux environment that supports using the Network File System (NFS), a common, standard, distributed file system protocol.
How to share the configuration and Encyclopedia volume files and folders
In a default iHub installation, a cluster node requires shared, read-write access to the following system resources:
*AC_DATA_HOME/config/iHub2
In an iHub installation, the configuration files are located in AC_DATA_HOME/config/iHub2.
*AC_DATA_HOME/encyc or other volumes, including all file, fileType, status, and tempRov subfolders
In an iHub installation, where there has been no activity on the system, the status or tempRov folders may not exist. These folders contain information about job details and completion notices and do not appear until a job executes.
To give a cluster node read-write access to these files and folders, perform the following tasks:
1 Log in to node1 as the root user.
2 Add the following entries to the /etc/exports file:
/home/actuate/AcServer/data/config/iHub2 *(rw,fsid=1,no_root_squash)
/home/actuate/AcServer/data/encyc *(rw,fsid=2,no_root_squash)
3 Start the NFS server processes by executing the following command:
service nfs restart
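After restarting the NFS services, it is worth confirming that both paths are actually exported before configuring node2. The following sketch is illustrative only: the standard showmount -e node1 command prints the export list from node1, and here a sample of that output is parsed so the check is self-contained.

```shell
# has_export EXPORT_LIST PATH: succeeds if PATH appears at the start of a
# line in the export list (followed by whitespace or a client wildcard).
has_export() {
  printf '%s\n' "$1" | grep -q "^$2[[:space:]*]"
}

# On node2, capture the real list with:  showmount -e node1
# A sample export list is used here for illustration:
sample='Export list for node1:
/home/actuate/AcServer/data/config/iHub2 *
/home/actuate/AcServer/data/encyc *'

has_export "$sample" /home/actuate/AcServer/data/config/iHub2 && echo "config export OK"
has_export "$sample" /home/actuate/AcServer/data/encyc && echo "encyc export OK"
```

If either path is missing from the export list, recheck the /etc/exports entries and rerun the export step before continuing.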
4 Log in to node2 as the actuate user.
5 Create the following directory paths:
/home/actuate/AcServer/data/config/iHub2
/home/actuate/AcServer/data/encyc
6 Log off node2.
7 Log in to node2 as the root user.
8 Add the following entries to the /etc/fstab file:
<node1 hostname>:/home/actuate/AcServer/data/config/iHub2 /home/actuate/AcServer/data/config/iHub2 nfs nfsvers=3 0 0
<node1 hostname>:/home/actuate/AcServer/data/encyc /home/actuate/AcServer/data/encyc nfs nfsvers=3 0 0
9 Mount the node1 shared folders on node2 by executing the following commands:
mount /home/actuate/AcServer/data/config/iHub2
mount /home/actuate/AcServer/data/encyc
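Once the mount commands complete, a quick check that each directory is actually served over NFS can catch a typo in /etc/fstab early. This is only a sketch: the helper parses the output of the standard mount command, shown here with a sample line so the check is reproducible.

```shell
# nfs_mounted MOUNT_OUTPUT DIR: succeeds if DIR appears as an NFS mount
# point in the given mount table output.
nfs_mounted() {
  printf '%s\n' "$1" | grep -q " on $2 type nfs"
}

# On node2, check the live table with:  nfs_mounted "$(mount)" /path
# A sample mount table line is used here for illustration:
sample='node1:/home/actuate/AcServer/data/encyc on /home/actuate/AcServer/data/encyc type nfs (rw,nfsvers=3)'

if nfs_mounted "$sample" /home/actuate/AcServer/data/encyc; then
  echo "encyc mount OK"
else
  echo "encyc not mounted"
fi
```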
The administrator must also verify or edit the acpmdconfig.xml file on the new cluster node to contain the following information:
*<AC_CONFIG_HOME> to point to the shared configuration home directory for the cluster
*<AC_TEMPLATE_NAME> to specify the server template from the available server templates listed in the shared acserverconfig.xml file
How to verify and edit acpmdconfig.xml file settings
To verify and edit acpmdconfig.xml file settings, perform the following tasks:
1 Shut down the recently installed cluster node.
2 Using a text editor, open acpmdconfig.xml, which by default is located in AC_SERVER_HOME/etc.
3 Verify or edit <AC_CONFIG_HOME> to point to the shared configuration home directory for the cluster, as shown in the following code:
<AC_CONFIG_HOME>/home/actuate/AcServer/data/config/iHub2
</AC_CONFIG_HOME>
This location is the path that you specified for the configuration home directory during the install procedure.
4 Verify or edit <AC_TEMPLATE_NAME> to specify the server template name from the available server templates listed in the shared acserverconfig.xml file, as shown in the following code:
<AC_TEMPLATE_NAME>urup</AC_TEMPLATE_NAME>
In the example, urup is the server template name.
5 Save acpmdconfig.xml.
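The two settings above can also be read back from the command line to confirm the edit took effect. The following sketch runs sed against a sample fragment whose values match the example above; on a real node, replace the sample with the contents of AC_SERVER_HOME/etc/acpmdconfig.xml.

```shell
# Sample fragment standing in for acpmdconfig.xml; values match the example.
sample='<AC_CONFIG_HOME>/home/actuate/AcServer/data/config/iHub2</AC_CONFIG_HOME>
<AC_TEMPLATE_NAME>urup</AC_TEMPLATE_NAME>'

# Extract the text between each pair of tags.
config_home=$(printf '%s\n' "$sample" | sed -n 's:.*<AC_CONFIG_HOME>\(.*\)</AC_CONFIG_HOME>.*:\1:p')
template=$(printf '%s\n' "$sample" | sed -n 's:.*<AC_TEMPLATE_NAME>\(.*\)</AC_TEMPLATE_NAME>.*:\1:p')

echo "AC_CONFIG_HOME=$config_home"   # → AC_CONFIG_HOME=/home/actuate/AcServer/data/config/iHub2
echo "AC_TEMPLATE_NAME=$template"    # → AC_TEMPLATE_NAME=urup
```

The extracted AC_CONFIG_HOME value must match the shared configuration home directory, and the template name must match one of the templates listed in the shared acserverconfig.xml file.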
The administrator must also verify or edit the shared acserverconfig.xml file to contain the following information:
*<ServerFileSystemSetting> points to the shared drive location that contains the Encyclopedia volume data files.
*server <ConnectionProperty> specifies the network name of the node that contains the shared Encyclopedia volume database.
How to verify and edit acserverconfig.xml file settings
To verify and edit acserverconfig.xml file settings, perform the following tasks:
1 Stop the Actuate BIRT iHub service running on the node that contains the shared configuration home directory.
2 Using a text editor, open the acserverconfig.xml file in the configuration home directory.
In an iHub installation, the configuration files are located in AC_DATA_HOME/config/iHub2 by default. The location is the path that you specified for the configuration home directory during the install procedure.
3 In <Template> settings for the node, verify or edit <ServerFileSystemSettings> to make sure that the Path attribute in <ServerFileSystemSetting> points to the location that contains the Encyclopedia volume data files, by performing the following tasks:
1 Locate the <ServerFileSystemSettings> element under the <Template> element.
2 In <ServerFileSystemSettings>, locate:
<ServerFileSystemSettings>
<ServerFileSystemSetting
Name="DefaultPartition"
Path="$AC_DATA_HOME$/encyc"/>
</ServerFileSystemSettings>
3 Change Path from the AC_DATA_HOME variable notation to the full path specification, as shown in the following code:
<ServerFileSystemSettings>
<ServerFileSystemSetting
Name="DefaultPartition"
Path="/home/actuate/AcServer/data/encyc"/>
</ServerFileSystemSettings>
The Path setting for DefaultPartition is /home/actuate/AcServer/data/encyc. Do not use the AC_DATA_HOME variable notation.
4 In <MetadataDatabase> settings, verify or edit the <ConnectionProperty> for the server to make sure that it specifies the network name, not localhost, of the node on which the Encyclopedia volume database resides, by performing the following tasks:
1 Locate the <ConnectionProperties> element under the <MetadataDatabase> element.
2 In <ConnectionProperties>, locate:
<ConnectionProperty
Name="server"
Value="localhost"/>
3 Change Value from localhost to the name of the machine on which the Encyclopedia volume database resides, such as urup, as shown in the following code:
<ConnectionProperty
Name="server"
Value="urup"/>
5 Save acserverconfig.xml.
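Two of the edits above are easy to miss: leaving Path in $AC_DATA_HOME$ variable notation, and leaving the server connection property set to localhost. A minimal sanity-check sketch follows, run here against a sample fragment rather than the live file; on a real node, read the sample from the acserverconfig.xml file in the shared configuration home directory instead.

```shell
# Sample fragment standing in for the edited acserverconfig.xml.
sample='<ServerFileSystemSetting Name="DefaultPartition" Path="/home/actuate/AcServer/data/encyc"/>
<ConnectionProperty Name="server" Value="urup"/>'

# The partition Path must be a full path, not variable notation.
case "$sample" in
  *'$AC_DATA_HOME$'*) echo 'FAIL: Path still uses $AC_DATA_HOME$ notation' ;;
  *)                  echo 'Path OK' ;;
esac

# The server connection property must name the database host, not localhost.
case "$sample" in
  *'Value="localhost"'*) echo 'FAIL: server is still localhost' ;;
  *)                     echo 'server OK' ;;
esac
```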
Start Actuate BIRT iHub on each cluster node. The new cluster node will automatically read the settings in the acserverconfig.xml file in the shared configuration directory to access its template, then join the cluster.
How to start an iHub cluster using Configuration Console
To start an iHub cluster manually using Configuration Console, perform the following tasks:
1 On the node containing the configuration home directory for the cluster, log in to Configuration Console and choose Advanced view. Choose Servers, then choose Start New Server.
2 On Servers—Start New Server, as shown in Figure 5‑33, perform the following tasks:
1 In Server name, type the name of the cluster node.
2 In Host Name or IP Address, type the name or IP address of the cluster node.
3 In iHub Process Manager Port Number, type the Daemon listen port number. The default value for this port is 8100. You specify this port number during the install procedure.
4 In Server template name, choose the name of the template that the cluster node uses.
Choose OK.
Figure 5‑33 Preparing to start a new server
3 Log out of Configuration Console.
4 Restart the Actuate BIRT iHub services on the node containing the configuration home directory for the cluster, and then on the new node.
5 Log in to Configuration Console and choose Advanced view. Choose Servers from the side menu. The new cluster node automatically reads the acserverconfig.xml in the shared configuration home directory to access its template, then joins the cluster.