11.0 Designing a Cluster


Smile CDR is designed to scale horizontally in clusters of any size. This means that you can add an arbitrary number of servers to your installation, and they can be used to share the load of incoming requests.

The built-in clustering capability is designed to be flexible. You can build active/active clusters, active/passive clusters, or any combination of the two in order to meet your specific needs.

All components in Smile CDR are designed to be capable of operating without keeping any local state within a single server. This means that a deployment can grow to a very large number of servers as needed. This design also means that nodes can be added and removed from the cluster at any time (i.e. without requiring a restart of the entire cluster).

Figure: Simple Cluster

11.0.1 Module Design


The general approach in designing a cluster is to create a single node that will act as the "master" node within the cluster. The master node serves as a sort of configuration template, and any number of clone nodes can be created that will work in the exact same way.

For example, suppose you create a node called master on server HOST1.acme.org, configured with a connection to a backing data store and a FHIR Endpoint module listening on port 8000.
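
As an illustrative sketch only (the values below are placeholders, and the FHIR Endpoint module definition is elided because its configuration keys are documented with that module), the properties file for master on HOST1.acme.org might begin like this, mirroring the Cluster Manager settings shown later on this page:

################################################################################
# Node Configuration (master)
################################################################################
node.id=Master
node.control.port=10000

################################################################################
# Cluster Manager Configuration (shared backing data store)
################################################################################
module.clustermgr.type=ca.cdr.clustermgr.module.ClusterMgrCtxConfig
module.clustermgr.config.db.driver=POSTGRES_9_4
module.clustermgr.config.db.url=jdbc:postgresql://dbhost.acme.org/cdr
module.clustermgr.config.db.username=cdr
module.clustermgr.config.db.password=changeme

# ... the FHIR Endpoint module listening on port 8000 would be defined here;
# its configuration keys are omitted from this sketch ...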

You can now create a second node clone01 on server HOST2.acme.org and mark it as a clone of node master. It will use the same settings when it starts. This means that it will connect to the same backing data store, and it will also listen on port 8000.

The two listeners you have created are both able to handle parallel requests. You can now place a network switch (or load balancer, failover device, reverse proxy, etc.) in front of these two ports, and your requests will be served by both nodes (or by the active node, depending on the configuration).

This same strategy applies to all types of modules that can be created within Smile CDR: security modules will seamlessly share sessions across all clones, Web and JSON admin APIs will expose their ports and serve requests on every node, and so on.

Database Clustering

The clustering capabilities of Smile CDR rely heavily on having access to a clustered underlying database instance. Setting up a cluster of your chosen database platform (PostgreSQL, Oracle, etc.) is beyond the scope of this documentation, but Smile CDR does expect the chosen cluster configuration to be globally consistent.
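
As an illustration only (the host names are placeholders, and whether a multi-host JDBC URL is appropriate depends on your database platform, driver version, and failover strategy), the Cluster Manager database settings from the clone node example below might point at a clustered PostgreSQL deployment like this:

# Illustrative only: placeholder host names; the multi-host URL form and the
# targetServerType parameter depend on the PostgreSQL JDBC driver version in use.
module.clustermgr.config.db.driver  =POSTGRES_9_4
module.clustermgr.config.db.url     =jdbc:postgresql://db1.acme.org:5432,db2.acme.org:5432/cdr?targetServerType=primary
module.clustermgr.config.db.username=cdr
module.clustermgr.config.db.password=changeme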

Lucene Clustering

Smile CDR's FHIR Storage modules use Apache Lucene to provide indexing for certain types of queries. FHIR Storage modules use a local filesystem path to store Lucene's index files, and each cloned module stores its own complete copy of the index locally. For this reason, each node in the cluster should be provisioned with sufficient disk storage (either locally or via network-attached storage). Smile CDR's cluster design does not rely on shared disks; rather, it requires that each node have its own dedicated disk storage.
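
As a sketch only: the module ID and property key below are placeholders rather than confirmed configuration names (the actual key for the Lucene index location is defined by the FHIR Storage module configuration); the point is simply that each clone should point at disk storage dedicated to that node.

# Placeholder module ID and property key; not confirmed configuration names.
# Each clone keeps its own complete copy of the Lucene index, so this path
# should resolve to storage dedicated to this node.
module.persistence.config.lucene.directory=/var/lib/cdr/lucene/clone01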

11.0.2 Adding and Removing Nodes


Smile CDR is able to handle an arbitrary number of clones being added, and these nodes can be started or stopped at any time. It's important to note that the master node does not need to be running in order for the cluster to operate. The master node is used as a configuration template when clones are started, but it performs no special functions when it runs.

Creating a Clone Node

A clone node is a node that inherits a copy of its master node's module configurations. A clone node's configuration properties file should contain only a few settings:

  • The node.id property should be a unique identifier for the clone node.
  • The node.control.port property should specify a port that is available on the host server and unique to the clone node.
  • The node.clone property specifies the ID of the master node that this node is a clone of.
  • A complete module configuration for the clustermgr module should be specified (no other module configurations should be specified).

Server Port Offset

  • The node.server_port_offset property indicates an integer value to apply as an offset to server port numbers on the clone node. For example, if the master node has a FHIR Endpoint module listening on port 8000 and this property has a value of 10000, on the clone node the same FHIR Endpoint will listen on port 18000.
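
For instance, assuming the scenario above (a master FHIR Endpoint module on port 8000), a clone's properties file could include:

# With the master's FHIR Endpoint listening on port 8000, this clone's copy
# of that endpoint will listen on 8000 + 10000 = 18000.
node.server_port_offset=10000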

Clone Node Example

The following is a simple example of a properties file for a clone node:

################################################################################
# Node Configuration
################################################################################
node.id=Clone-01
node.control.port=10001
node.clone=Master

################################################################################
# Cluster Manager Configuration
################################################################################
module.clustermgr.type                                      =ca.cdr.clustermgr.module.ClusterMgrCtxConfig
module.clustermgr.config.db.driver                          =POSTGRES_9_4
module.clustermgr.config.db.url                             =jdbc:postgresql://localhost/cdr
module.clustermgr.config.db.username                        =cdr
module.clustermgr.config.db.password                        =mypassword

11.0.3 Multi-Master Clusters


In many cases it is desirable to have multiple master nodes within a cluster, each with its own independent set of clone nodes. This is useful if you are designing a cluster with two or more independent roles that you want to scale independently.

For example, suppose you are planning a deployment of Smile CDR that will consist of a Web Admin Console, a FHIR Endpoint module, and a SMART Outbound Security module. If all of these modules are on the same master node, then they will all be scaled together as more clone nodes are added to the cluster.

An alternate design is to place each function on its own master node. In the example above, this might look like:

  • Master Node: admin_master
    • Module: Cluster Manager
    • Module: Local Inbound Security
    • Module: Web Admin Console
  • Master Node: auth_master
    • Module: Cluster Manager
    • Module: SMART Outbound Security
    • Module: Local Inbound Security
  • Master Node: fhir_master
    • Module: Cluster Manager
    • Module: FHIR Endpoint
    • Module: FHIR Storage
    • Module: SMART Inbound Security

With this design, clones could be made of any of these master nodes in order to scale the system up accordingly.
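
For example, a clone that scales out only the FHIR role might use a properties file along the following lines (the node ID, control port, and database settings are placeholders that mirror the clone node example earlier on this page):

################################################################################
# Node Configuration
################################################################################
node.id=FHIR-Clone-01
node.control.port=10002
node.clone=fhir_master

################################################################################
# Cluster Manager Configuration
################################################################################
module.clustermgr.type                                      =ca.cdr.clustermgr.module.ClusterMgrCtxConfig
module.clustermgr.config.db.driver                          =POSTGRES_9_4
module.clustermgr.config.db.url                             =jdbc:postgresql://localhost/cdr
module.clustermgr.config.db.username                        =cdr
module.clustermgr.config.db.password                        =mypassword

Starting this clone on a new server adds FHIR Endpoint capacity without changing the number of Web Admin Console or SMART Outbound Security instances.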