MegaScale
MegaScale is a mechanism for storing virtually unlimited amounts of data in a single FHIR server. It uses multiple database instances to create discrete pools of data which are logically separate, but are managed under a single Smile CDR FHIR Storage (RDBMS) module.
In its simplest terms, a MegaScale-enabled server can be thought of as a multitenant repository where each tenant is hosted in a separate database instance.
Using this strategy can be helpful in a number of scenarios.
In MegaScale mode, one or more FHIR Endpoint modules are combined with a single FHIR Storage (RDBMS) module. Each incoming FHIR request includes a tenant identifier that maps to a particular partition, and the partition in turn determines the target database. This architecture is shown in the diagram below.
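To complement the diagram, the following is a minimal sketch of how a client might interact with two tenants on a MegaScale-enabled endpoint using the HAPI FHIR generic client. The base URL, port, and tenant names (TENANT-A, TENANT-B) are hypothetical placeholders, not values defined by MegaScale itself.

```java
import ca.uhn.fhir.context.FhirContext;
import ca.uhn.fhir.rest.client.api.IGenericClient;
import org.hl7.fhir.r4.model.Bundle;
import org.hl7.fhir.r4.model.Patient;

public class TenantRoutingExample {
   public static void main(String[] args) {
      FhirContext ctx = FhirContext.forR4();

      // Hypothetical endpoint with URL-based tenant selection: the first path
      // segment names the tenant, and each tenant is backed by its own database.
      IGenericClient tenantA = ctx.newRestfulGenericClient("http://localhost:8000/TENANT-A");
      IGenericClient tenantB = ctx.newRestfulGenericClient("http://localhost:8000/TENANT-B");

      Patient patient = new Patient();
      patient.addName().setFamily("Simpson").addGiven("Homer");

      // This create is routed to TENANT-A's database only...
      tenantA.create().resource(patient).execute();

      // ...so a search against TENANT-B's separate database will not return it.
      Bundle tenantBResults = tenantB.search()
            .forResource(Patient.class)
            .returnBundle(Bundle.class)
            .execute();
   }
}
```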
Limitations
This section lists the known limitations of this feature.
The following FHIR interactions have been tested:
- The $reindex operation can be invoked against an individual partition by using the partition name as the tenant name (e.g. POST /P1/$reindex), or against all partitions by using _ALL as the tenant name (e.g. POST /_ALL/$reindex).
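For example, the operation can be issued from the HAPI FHIR generic client as sketched below; the base URL and port are hypothetical, and P1 is assumed to be the name of an existing partition.

```java
import ca.uhn.fhir.context.FhirContext;
import ca.uhn.fhir.rest.client.api.IGenericClient;
import org.hl7.fhir.r4.model.Parameters;

public class ReindexTenantExample {
   public static void main(String[] args) {
      FhirContext ctx = FhirContext.forR4();

      // The tenant name in the base URL selects the partition (and therefore
      // the database) to reindex; use "_ALL" here to reindex every partition.
      IGenericClient client = ctx.newRestfulGenericClient("http://localhost:8000/P1");

      // Issues POST /P1/$reindex against the server.
      Parameters outcome = client
            .operation()
            .onServer()
            .named("$reindex")
            .withNoParameters(Parameters.class)
            .execute();

      System.out.println(ctx.newJsonParser().setPrettyPrint(true).encodeResourceToString(outcome));
   }
}
```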
No other features, operations, or interactions have been tested or are expected to work with MegaScale.
Processing transaction Bundles that span multiple databases is NOT supported: you must ensure that all updates within a single Bundle target a single MegaScale database (see the sketch below). This is guaranteed automatically in REQUEST_TENANT partitioning mode, because every entry in the request is processed against the tenant named in the URL, but it may not hold for other partitioning modes such as Patient ID partitioning or custom partitioning solutions.
Search requests will only include results from a single database.
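For illustration, the following sketch uses the HAPI FHIR BundleBuilder to assemble a transaction Bundle and posts it to a single tenant endpoint, so that every entry in the Bundle is processed against the same MegaScale database. The endpoint URL and tenant name are hypothetical placeholders.

```java
import ca.uhn.fhir.context.FhirContext;
import ca.uhn.fhir.rest.client.api.IGenericClient;
import ca.uhn.fhir.util.BundleBuilder;
import org.hl7.fhir.r4.model.Bundle;
import org.hl7.fhir.r4.model.Observation;
import org.hl7.fhir.r4.model.Patient;
import org.hl7.fhir.r4.model.Reference;

public class SingleTenantTransactionExample {
   public static void main(String[] args) {
      FhirContext ctx = FhirContext.forR4();

      // All entries in this transaction are sent to TENANT-A, so they are
      // written to a single MegaScale database.
      IGenericClient tenantA = ctx.newRestfulGenericClient("http://localhost:8000/TENANT-A");

      BundleBuilder builder = new BundleBuilder(ctx);

      Patient patient = new Patient();
      patient.setId("Patient/example-patient");
      builder.addTransactionUpdateEntry(patient);

      Observation observation = new Observation();
      observation.setSubject(new Reference("Patient/example-patient"));
      builder.addTransactionCreateEntry(observation);

      Bundle transaction = (Bundle) builder.getBundle();
      Bundle outcome = tenantA.transaction().withBundle(transaction).execute();
   }
}
```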
Enabling MegaScale Mode
To enable MegaScale mode, the following settings must be set.
On the FHIR Storage (RDBMS) module:
- Partitioning must be enabled (i.e. set to true).
- Tenant data is stored in named partitions rather than the DEFAULT partition.
- The partitioning mode must be set to REQUEST_TENANT.
- MegaScale mode must be enabled (i.e. set to true).

On the FHIR Endpoint module:

- The tenant identification strategy must be set to URL_BASED.
MegaScale connection details are supplied by a Java Smile CDR interceptor using the STORAGE_MEGASCALE_PROVIDE_DB_INFO pointcut.
See Example: MegaScale Connection Provider for a demonstration of how this pointcut can be used. This example is also available in the Interceptor Starter Project.
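As a rough outline of what such an interceptor looks like, the sketch below registers a hook method against the STORAGE_MEGASCALE_PROVIDE_DB_INFO pointcut and returns JDBC connection details derived from the requested partition. The @CdrHook/CdrPointcut annotation style is the standard Smile CDR interceptor mechanism, but the request/response model class names, their package, and their accessors shown here are assumptions for illustration only; consult the packaged Example: MegaScale Connection Provider for the exact method signature.

```java
import ca.cdr.api.fhir.interceptor.CdrHook;
import ca.cdr.api.fhir.interceptor.CdrPointcut;
// Class and package names below are assumed for illustration.
import ca.cdr.api.model.json.MegaScaleCredentialRequestJson;
import ca.cdr.api.model.json.MegaScaleCredentialResponseJson;

public class MegaScaleConnectionProviderSketch {

   /**
    * Called when the storage module needs connection details for a MegaScale
    * partition. NOTE: the parameter/return types and their getters and setters
    * are assumed names; refer to the packaged example for the real API.
    */
   @CdrHook(CdrPointcut.STORAGE_MEGASCALE_PROVIDE_DB_INFO)
   public MegaScaleCredentialResponseJson provideDatabaseInfo(MegaScaleCredentialRequestJson theRequest) {
      // In a real deployment these details would typically be looked up in a
      // secrets manager or configuration service keyed on the partition.
      int partitionId = theRequest.getPartitionId();

      MegaScaleCredentialResponseJson response = new MegaScaleCredentialResponseJson();
      response.setDatabaseUrl("jdbc:postgresql://db-" + partitionId + ".example.com:5432/cdr");
      response.setDatabaseUsername("cdr");
      response.setDatabasePassword("change-me");
      return response;
   }
}
```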