This page describes the process for upgrading an existing installation of Smile CDR to a newer version.
Within the Smile CDR installation directory, there is a directory called lib that contains all of the binary code for Smile CDR. It is generally possible to simply replace the contents of this directory with the new contents from a release.
The following example shows an in-place upgrade of a single Smile CDR server:
cd smilecdr
bin/smilecdr stop
Change the following commands to use a backup folder that works for your installation:
cp -R lib /path/to/your/backup/folder
cp -R bin /path/to/your/backup/folder
if [ -d otel ]; then cp -R otel /path/to/your/backup/folder; fi
With Smile CDR shut down, no new data will be entering the system or changing the database, so this is the time to perform a full database backup of all of your applicable databases.
However, some installations have very large databases and cannot wait for a full backup to take place, so your situation may require taking a different approach. It may be acceptable in this scenario to use the most recent full backup, along with the latest transaction logs, in case you need to restore the database to this point-in-time, just before the Smile CDR upgrade takes place.
The exact instructions for this step will differ, depending on which database you have chosen to use with your installation. Please consult your DBA Team for assistance with ensuring that you have a full database backup to this point in time, which could be used if you decide to rollback this upgrade.
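For example, a minimal point-in-time backup sketch for a PostgreSQL installation might look like the following; the database name and output path are placeholders, and your DBA team may prefer a different tool or backup format:
# Hypothetical PostgreSQL example; adjust the database name and output path
pg_dump -Fc smilecdr > /path/to/your/backup/folder/smilecdr-pre-upgrade.dump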
rm lib/*
rm bin/smilecdr
rm bin/smileutil
mv bin/setenv bin/setenv-2025.02.PRE-04.backup
rm -rf otel/*
# The following command assumes you are currently in the root of your smilecdr
# installation, and will extract only the lib directory
tar --strip-components=1 -xvf /path/to/smilecdr-2025.02.PRE-04.tar.gz smilecdr/lib
# The following command assumes you are currently in the root of your smilecdr
# installation, and will extract only the bin directory
tar --strip-components=1 -xvf /path/to/smilecdr-2025.02.PRE-04.tar.gz smilecdr/bin
# The following command assumes you are currently in the root of your smilecdr
# installation, and will extract only the otel directory
tar --strip-components=1 -xvf /path/to/smilecdr-2025.02.PRE-04.tar.gz smilecdr/otel
In a previous step, we extracted a new setenv file and placed it in your bin/ folder, but some customers have made changes to that file to optimize the performance of Smile CDR in their environment. Therefore, you will need to examine the backup version of that file (bin/setenv-2025.02.PRE-04.backup), compare it to the new one (bin/setenv), and re-apply any manual changes that may have been made in previous versions of Smile CDR.
You can compare these files using the following command:
diff bin/setenv bin/setenv-2025.02.PRE-04.backup
Occasionally, we introduce a new configuration setting or module, and it will be added to the default classes/cdr-config-Master.properties file in the new release. Users may choose to manage this file themselves, changing some of the default values to suit their needs, and sometimes using it in Properties Mode to force Smile CDR to use the configured values exclusively from the properties file. If this file is missing a new configuration value, Smile CDR may not perform properly, or it may cause problems during startup.
It is important to compare the old and new properties files to see whether any new configuration settings have been added, and to add them to your local system as well, to remain up-to-date.
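For example, assuming you still have the previous release's archive on hand (the archive path below is a placeholder), you could extract the old default properties file to a temporary location and compare it to the new one:
# Extract only the old default properties file from the previous release archive
mkdir -p /tmp/cdr-previous
tar -C /tmp/cdr-previous --strip-components=1 -xf /path/to/previous-smilecdr.tar.gz smilecdr/classes/cdr-config-Master.properties
# Compare the old defaults against the new defaults
diff /tmp/cdr-previous/classes/cdr-config-Master.properties classes/cdr-config-Master.properties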
The process of adding new configuration properties to your local system will differ based on your server's node.propertysource configuration setting. Note that if you do not have this property in your configuration file, it will use the default value of DATABASE. More information is available here: Module Property Source
If you are currently using DATABASE mode for your properties:
If you are NOT using DATABASE mode for your properties:
Starting with version 2020.11.R01, Smile CDR automatically upgrades your database when the new version first starts up.
The automatic database upgrade is convenient and appropriate for smaller systems, development or testing environments. For all other deployments, we strongly recommend running the smileutil: Migrate Database command line tool to perform the migration. This will provide visual feedback of progress, avoid problems with timeouts on long-running tasks, and (optionally) allow a DBA to review the upgrade script to account for any local changes.
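For illustration only, a manual migration invocation might look like the following sketch; the JDBC URL, credentials, and driver name are placeholders, and the exact flag names should be confirmed against the smileutil: Migrate Database documentation:
# Hypothetical invocation; verify flag names against the smileutil documentation
bin/smileutil migrate-database -d POSTGRES_9_4 -u "jdbc:postgresql://localhost:5432/cdr" -n dbuser -p dbpassword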
As of version 2022.11.R01, Flyway is no longer used to perform database migrations. Instead, Smile CDR uses the HAPI-FHIR Migration Tool. This tool is backwards compatible with Flyway database migration and shares the same schema migration history tables that Flyway used: FLY_CDR_MIGRATION for the Cluster Manager database and FLY_HFJ_MIGRATION for the FHIR Repository database. The HAPI-FHIR Migration Tool uses a simple lock-record mechanism to ensure that only one process in a cluster performs a database migration at a time. This lock record has an installed_rank of -100 and a unique value (UUID) in the description column. The migration operation should automatically delete this lock record upon completion, even if there was a migration error. However, in some scenarios (e.g. loss of database connection in the middle of a migration), it is possible that this lock record will not be properly removed after the migration completes. If this happens, you can set the CLEAR_LOCK_TABLE_WITH_DESCRIPTION environment variable (or System Property) to the value of the description column of this lock record to instruct Smile CDR to delete it before starting a new migration.
For example, if your server fails to start up with an error message like the following:
HAPI-2153: Unable to obtain table lock - another database migration may be running.
If no other database migration is running,
then the previous migration did not shut down properly
and the lock record needs to be deleted manually.
The lock record is located in the FLY_CDR_MIGRATION table with
INSTALLED_RANK = -100 and DESCRIPTION = 2981dc98-6111-4892-b3f9-56b9d559f4d7
Then before starting Smile CDR you could call:
export CLEAR_LOCK_TABLE_WITH_DESCRIPTION=2981dc98-6111-4892-b3f9-56b9d559f4d7
This will cause Smile CDR to delete this stale lock record before starting a new database migration.
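If you want to confirm the stale lock record before clearing it, you can query the migration table directly (shown here for the Cluster Manager database; use FLY_HFJ_MIGRATION for the FHIR Repository database):
SELECT INSTALLED_RANK, DESCRIPTION FROM FLY_CDR_MIGRATION WHERE INSTALLED_RANK = -100;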
As of release 2023.02.R01, the Docker container no longer runs as root. This change improves local security. The following action is required to upgrade to a non-root Docker container structure:
sudo chown -R $USER /path/to/volume-mount
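You can confirm that the ownership change took effect before restarting the container:
ls -ld /path/to/volume-mount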
If you are running a cluster of Smile CDR servers and you wish to perform a "zero downtime" upgrade, you can follow the steps outlined in this section.
Zero-downtime upgrades require:
If you are currently running the 2023.02.R07 release version and you wish to upgrade to the most recent 2024.05.R03 release version, upgrading in 6-month intervals, then you should perform the following upgrades in sequence:
NOTE: These major release versions were accurate at the time this was written, but they could be subject to change in the future.
You must be able to handle your incoming request load on a single server or a defined subset of your servers, since this process involves reducing your entire server cluster to a defined subset of your servers, or just one server if your cluster only has two servers.
During the upgrade, the cluster continues to operate, but does not support changing the configuration on any active server, nor restarting any active server, as these servers will still be running the older version of Smile CDR and cannot be re-initialized against the new database schema.
The following operations will continue to work on all running servers during an upgrade:
Configuration actions will not work during the upgrade. Modules restarted on servers running the old software will fail to start, potentially causing a service outage. Once an upgrade has begun, any servers running the old software may not be reconfigured, and you must not make changes on these servers via the admin web console, including:
Follow these steps to upgrade your server cluster with "zero downtime":
You should ensure that you have recent backups of your Cluster Manager Database as well as all of your FHIR Repository Database(s) before attempting the upgrade. You would typically do this during a maintenance window as part of your normal operational procedures. These backups will be needed if you run into any problems during the upgrade and decide not to proceed, choosing instead to revert the upgrade by restoring your system from them.
Using your front-end load balancer or reverse proxy, reduce the number of active servers in your cluster to an acceptable subset of your server cluster, where these remaining active servers are only running the older version of Smile CDR and are still able to serve your incoming requests. If you only have two servers in your cluster, then this will leave just one active server. The active server(s) will remain running the older version of Smile CDR during the Zero Downtime Upgrade process, serving any incoming requests, while a single offline server is upgraded and performs the migration to the newer version of Smile CDR. If you have a high-traffic installation with a large number of servers in your pool, then you may leave several servers on the older version actively serving incoming requests, while you perform this Zero Downtime Upgrade process.
Starting with version 2024.02.R0X (any release from 2024.02 onwards), you can set the environment variable CDR_UPGRADE_MODE to true on the active servers in the cluster and restart these servers. When a server is running in CDR_UPGRADE_MODE, all scheduled tasks are paused, consumption from broker channels (e.g. JMS queues or Kafka topics) is paused, and changes to the configuration are not permitted in the Web Admin Console. Ignore this step if you are running an earlier release of Smile CDR.
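For example, assuming you start Smile CDR from a shell that passes its environment through to the server process (your service manager may require setting the variable elsewhere, e.g. in a unit file or in bin/setenv):
export CDR_UPGRADE_MODE=true
bin/smilecdr stop
bin/smilecdr start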
Using only one non-active server, upgrade that server using the Simple Upgrade Process described above. This involves shutting down the server and then upgrading the binaries, so you should remove this initial upgrade server from your load balancer and drain it of all active sessions before shutting it down.
Unset the CDR_UPGRADE_MODE environment variable on the upgraded server (unset CDR_UPGRADE_MODE), then start the server and monitor the smile.log while it starts up and automatically upgrades the database. It is very important that you monitor the logs when the server first starts up, since it will perform the database migration at this time. Once it completes the startup process and is fully running, you must verify that all the Modules on that newly-upgraded server are working correctly (i.e. no errors reported and no stopped Modules).
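For example, assuming the default log location under the installation directory:
tail -f log/smile.log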
Switch your front-end load balancer or reverse proxy to stop using the existing "older version" server(s) and start directing incoming traffic to the newly-upgraded server. This may involve a very short window of perceived downtime by your front-end clients, depending on the front-end technology you are using and the method you use to perform this switchover. If you have a high-volume installation, then you can upgrade several non-active servers and have them ready to start accepting traffic, once you are ready to perform the switchover from the older version to the newer version of Smile CDR.
Shut down the server(s) running the older version of Smile CDR and upgrade them to the newer version of Smile CDR. These servers are no longer receiving requests and can be upgraded. Unset the CDR_UPGRADE_MODE environment variable on each upgraded server before starting it. Verify that each upgraded server started successfully, then add it back to the pool.
Upgrade the rest of the servers in your cluster one-by-one, using the Simple Upgrade Process described above. Ensure that each upgraded server starts up without problems before proceeding to the next server. Note that for each server, you must monitor the smile.log file while it starts up to ensure that everything works properly and no ERRORs are encountered during the startup phase. None of these servers will attempt to perform any database upgrades, because they will discover that the database has already been upgraded by the first server. As each server is successfully upgraded to the new version of Smile CDR, it can be added back into your front-end server pool.
A note about the CDR_UPGRADE_MODE environment variable: Smile CDR batch processing relies on job state being stored in the database and on asynchronous broker channels (JMS or Kafka). Smile CDR ensures that the details stored in these job state records and job event messages are forwards compatible. However, they are not guaranteed to be backwards compatible. For this reason, we recommend setting CDR_UPGRADE_MODE on servers running an older version of the software so they do not fail when trying to read job state records or new job event messages. Servers with CDR_UPGRADE_MODE set will continue to submit messages to broker channels (which the new servers will be able to handle); they just won't read from them.
New versions of Smile CDR can also introduce new configuration settings that older versions of the software may not understand. For this reason, CDR_UPGRADE_MODE prevents module restarts within the Web Admin Console of old servers. Any required configuration changes should only be made on upgraded servers.
Once a server has been upgraded, it should be restarted after calling unset CDR_UPGRADE_MODE to ensure that normal processing resumes.
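For example:
unset CDR_UPGRADE_MODE
bin/smilecdr stop
bin/smilecdr start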
When upgrading CDR from versions 2019.05 and earlier, if subscriptions are enabled, then you will need to add a new Subscription Matching Module of the same FHIR version as your FHIR Storage module and link the Subscription Matching module to that FHIR Storage module as a dependency.
As of 2024.08.R01, a new migrate command has been added to the smilecdr binary, located at bin/smilecdr. This command causes Smile CDR to boot up and automatically upgrade all schemas found in the cluster. Upon completion, it shuts down and exits with exit code 0 if everything succeeded. This command can take the following flags:
--dry-run
--enable-heavyweight-migrations
These mirror the smileutil flags, documented here. Note that unlike the smileutil migrate-database command, this command does not require you to authenticate to the database, nor does it require you to select the driver type or schema type. When using this command, all of that information is determined from your current live configuration.
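For example, to preview the migration steps without applying them:
bin/smilecdr migrate --dry-run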
Smile CDR recommends that you implement the following strategy when upgrading your Smile CDR deployments.
First, confirm that the new version's platform requirements are met. This includes the specific Java version, your chosen database server, and other requirements, as described in our documentation:
https://smiledigitalhealth.com/docs/getting_started/platform_requirements.html
Choose whether you wish to manually upgrade your database schema each time you upgrade your Smile CDR Server, or let Smile CDR perform the schema upgrade steps for you on first post-upgrade startup. More details on how to upgrade Smile CDR, as well as how to perform the upgrade manually using our smileutil migrate-database command, are available in our documentation:
https://smiledigitalhealth.com/docs/installation/upgrading.html
https://smiledigitalhealth.com/docs/smileutil/migrate_database.html
When choosing which version(s) to upgrade to, use the following guiding principles:
When choosing a major release version, always choose the most recent R## version. For example, suppose the following major release versions are available for the 2024.02 release: 2024.02.R01 through 2024.02.R06.
You should upgrade to the 2024.02.R06 release version, and skip over any other GA versions (R01-R05) for that particular Quarterly Release.
This grid shows the impact of upgrading each release on the various Smile CDR databases.
Release | Cluster Manager | Persistence | Audit | Transaction | Overall | Comments
---|---|---|---|---|---|---
2023.08 | | | | | | 25 indexes are added to the Persistence database; however, most of them are on small tables. The overall migration time should not exceed a few hours.
2023.11 | | | | | | The Persistence database undergoes a huge migration to fix the padding on the FORCED_ID column for all resources in the HFJ_RESOURCE table. This migration is performed in chunks of 100,000 resources, and the FORCED_ID script is executed twice during schema migration. This will take a significant amount of time for databases that contain a large number of resources.
2024.02 | | | | | | This release adds a couple of indexes to the HFJ_SPIDX_* tables. For customers that have a large number of rows in the HFJ_SPIDX_STRING and HFJ_SPIDX_URI tables, this could take quite a while to complete (e.g. several hours).
2024.05 | | | | | | This release migrates large objects to inline storage for PostgreSQL databases. Depending on the number of large objects present in the database, this could take anywhere from a few hours to a few days to complete.
2024.08 | | | | | | This release adds a couple of indexes to the HFJ_SPIDX_URI table. For customers that have a large number of rows in this table, this could take a while to complete (e.g. several hours).