26.3 Deploying a Kubernetes Managed Cluster

This page describes how Smile CDR's modular design can be deployed and managed as part of a Kubernetes cluster. Note that this page focuses primarily on clustering the Smile CDR application itself. Clustering or hosting of other components such as the database, message broker, and reverse proxy are not addressed here.

Kubernetes Cluster

The deployment steps below highlight two different options:

  • Option 1 – A single master node example. The entire Smile CDR application is replicated across the cluster.
  • Option 2 – A multiple master node example. Separate masters are created for each of the following, allowing them to be separately replicated:
    • FHIR REST Endpoint and FHIR Storage modules
    • SMART on FHIR modules
    • Subscription and FHIR Storage modules
    • Administration modules (Web Admin Console and JSON Admin API)

While most of what follows is applicable to both options, there are sections that differ between the options. Take note of which option you would like to deploy and follow along accordingly.

26.3.1 Overview of Smile CDR Kubernetes Deployment Process

The basic steps to creating a Smile CDR Kubernetes Cluster are as follows:

  1. Pre-requisite steps:

    • Obtain a copy of the Smile CDR Docker image.
    • Provision servers/environments for hosting Smile CDR cluster, database, message broker, and load balancer (as needed).
    • Install Docker container and Kubernetes runtime packages.
    • Deploy external dependencies, including: database, message broker, reverse proxy, and load balancer.
  2. Prepare Kubernetes configuration files.

  3. Deploy Smile CDR in Kubernetes cluster.

  4. Configure reverse proxy to enable Smile CDR services to be accessed through the configured ports.

  5. Configure load balancer if multiple Kubernetes nodes are being used.

The sections that follow will provide more details about these steps.

26.3.2 Pre-requisite Steps

Obtain a Copy of the Kubernetes-Enabled Smile CDR Docker Image

A Smile CDR Docker image is available that can be used either in standalone Docker deployments or in Kubernetes deployments. See the Docker Container Installation page for more information.

Provision Servers/Environments

Ensure that the servers or environments hosting the Kubernetes master and node components are able to communicate with each other as well as with the servers hosting the database and message broker. Additionally, ensure that any servers hosting the reverse proxy and load balancer are able to communicate with the Kubernetes node components.

Install Docker Container and Kubernetes Runtime Packages

The steps for installing Kubernetes will vary depending on the type of environment or cloud provider being used to host the Kubernetes components.

  • Instructions for deploying Docker container runtime packages can be found here.
  • Instructions for deploying and configuring Kubernetes runtime packages can be found here.
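
Once these packages are installed, a quick check from the machine used to administer the cluster can confirm that the container runtime is working and that the Kubernetes nodes have joined. This is a minimal sketch; the exact output depends on the Docker and Kubernetes versions installed.

docker version
kubectl version
kubectl get nodes -o wide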

Deploy External Dependencies

The database, message broker, reverse proxy, and load balancer do not need to be managed by Kubernetes to work with a clustered Smile CDR. As such, these components can be deployed separately.

Regardless of how the external components are deployed, note the IP addresses, hostnames, and port numbers needed to connect to the database and message broker. This information will be needed later when configuring Smile CDR.
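
Before proceeding, it can be worth confirming that the database and message broker ports are reachable from the hosts that will run the Kubernetes worker nodes. The sketch below assumes a PostgreSQL database listening on port 5432 and an ActiveMQ broker listening on port 61616; the hostnames are placeholders.

nc -zv db.example.internal 5432
nc -zv activemq.example.internal 61616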

26.3.3 Configuring Kubernetes

A number of Kubernetes objects need to be configured in order for Kubernetes to manage a Smile CDR cluster, and to ensure that external systems are able to interact with the Smile CDR cluster. These include the following:

  • Services – Enable Smile CDR API and ports to be visible outside of the cluster.
  • ConfigMaps – (optional) Define configuration that will overwrite the default cdr-config-Master.properties, cdr-config-Clone.properties, and logback.xml files at startup time.
  • StatefulSets – Define how Smile CDR instances will be deployed.

The following sections provide more detailed explanations and examples for each type of Kubernetes object.

26.3.4 Service Definitions

Each Smile CDR master node that exposes externally visible ports requires a Kubernetes Service configured with a spec of type NodePort so that those ports are accessible outside of the cluster.

A few things to note about the service definitions:

  • Kubernetes restricts the port numbers that can be exposed outside of the cluster to the range 30000-32767. As such, in the service definitions it will be necessary to map the configured Smile CDR port numbers to external port numbers allowed within this range.
  • The Service's .spec.selector value must match the .spec.template.metadata.labels value in the corresponding StatefulSet definition; a quick verification sketch follows this list.
  • When individual Smile CDR instances are deployed as part of a StatefulSet, the StatefulSet's .metadata.name element will be available to the Smile CDR instance as an environment variable called SERVICENAME.
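
Once the Service and the matching StatefulSet (described later on this page) have been deployed, the selector can be verified by confirming that the Service has resolved endpoints for the expected pods. A quick check, using the Option 1 object names as an example:

kubectl get endpoints smilecdr
kubectl get pods -l app=smilecdr -o wide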

Option 1 – Where a single master node configuration is used, the Kubernetes Service definition should look something like the following:

apiVersion: v1
kind: Service
metadata:
  name: smilecdr
spec:
  type: NodePort
  ports:
  - name: "80"
    port: 80
    nodePort: 30001
    targetPort: 80
  - name: "443"
    port: 443
    nodePort: 30002
    targetPort: 443
  - name: "8000"
    port: 8000
    nodePort: 30003
    targetPort: 8000
  - name: "9000"
    port: 9000
    nodePort: 30004
    targetPort: 9000
  - name: "9100"
    port: 9100
    nodePort: 30005
    targetPort: 9100
  - name: "8001"
    port: 8001
    nodePort: 30006
    targetPort: 8001
  - name: "9200"
    port: 9200
    nodePort: 30007
    targetPort: 9200
  - name: "9201"
    port: 9201
    nodePort: 30008
    targetPort: 9201
  selector:
    app: smilecdr
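
Assuming the definition above is saved to a file such as smilecdr-service.yaml (a placeholder name), it can be applied and verified as follows. Because nodePort 30003 maps to the FHIR endpoint on port 8000, the FHIR server should then respond on that port of any worker node (assuming the endpoint is served at the root context):

kubectl apply -f smilecdr-service.yaml
kubectl get service smilecdr
curl http://<worker-node-ip>:30003/metadata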

Option 2 – Where multiple master node configurations are used, the Kubernetes Service definitions should look something like the following:

apiVersion: v1
kind: Service
metadata:
  name: smilecdr-mgmt
spec:
  type: NodePort
  ports:
  - name: "80"
    port: 80
    nodePort: 30001
    targetPort: 80
  - name: "443"
    port: 443
    nodePort: 30002
    targetPort: 443
  - name: "9000"
    port: 9000
    nodePort: 30004
    targetPort: 9000
  - name: "9100"
    port: 9100
    nodePort: 30005
    targetPort: 9100
  selector:
    app: smilecdr-mgmt
---
apiVersion: v1
kind: Service
metadata:
  name: smilecdr-listener
spec:
  type: NodePort
  ports:
  - name: "8000"
    port: 8000
    nodePort: 30003
    targetPort: 8000
  - name: "8001"
    port: 8001
    nodePort: 30006
    targetPort: 8001
  selector:
    app: smilecdr-listener
---
apiVersion: v1
kind: Service
metadata:
  name: smilecdr-smart
spec:
  type: NodePort
  ports:
  - name: "9200"
    port: 9200
    nodePort: 30007
    targetPort: 9200
  - name: "9201"
    port: 9201
    nodePort: 30008
    targetPort: 9201
  selector:
    app: smilecdr-smart
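
If the three Service definitions above are kept in a single multi-document file such as smilecdr-services.yaml (a placeholder name), they can be applied together and the assigned node ports reviewed:

kubectl apply -f smilecdr-services.yaml
kubectl get services -o wide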

26.3.5 ConfigMap Definitions

Using ConfigMap definitions is the recommended approach to customize the Smile CDR cdr-config-Master.properties, cdr-config-Clone.properties, and logback.xml configuration files in a Kubernetes deployment. This avoids the need to build and maintain additional Docker images. By default, the configuration files will be based on the initial configuration described in the Installing Smile CDR page located here. This configuration can be used for deploying a single instance of Smile CDR in Kubernetes but cannot be scaled. If the default configurations are adequate or if custom Docker images are going to be used then this step can be skipped.
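
As an alternative to writing the ConfigMap YAML by hand, an equivalent object can be generated from local copies of the configuration files. This is a sketch only; the file names must match the keys referenced by the StatefulSet command shown later, and on older versions of kubectl the flag is --dry-run rather than --dry-run=client.

kubectl create configmap smilecdr-config \
  --from-file=cdr-config-Master.properties \
  --from-file=cdr-config-Clone.properties \
  --from-file=logback.xml \
  --dry-run=client -o yaml > smilecdr-configmap.yaml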

Note: When individual Smile CDR instances are deployed as part of a StatefulSet, the StatefulSet's .metadata.name element and the ordinal number of the Smile CDR instance within the StatefulSet will be available to the Smile CDR instance as the environment variables SERVICENAME and ORDINAL, respectively. For example, the first instance of Smile CDR deployed in the StatefulSet smilecdr-mgmt will have SERVICENAME set to smilecdr-mgmt and ORDINAL set to 0; the second instance will have ORDINAL set to 1, and so on. In the Master and Clone configurations below, a distinct node.id is derived from the SERVICENAME and ORDINAL values at launch time (note that ORDINAL is always zero for the Master node) and assigned to the respective nodes.
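
For example, the second replica in the smilecdr-mgmt StatefulSet (pod smilecdr-mgmt-1) would see the following environment, and the Clone configuration below would therefore resolve to:

# Environment available inside pod smilecdr-mgmt-1
SERVICENAME=smilecdr-mgmt
ORDINAL=1

# Resulting values in cdr-config-Clone.properties
node.id    =smilecdr-mgmt_1
node.clone =smilecdr-mgmt_0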

Option 1 – Where a single master node configuration is used, the Kubernetes ConfigMap definition should look something like the following:

apiVersion: v1
kind: ConfigMap
metadata:
  name: smilecdr-config
  labels:
    app: smilecdr-config
data:
  cdr-config-Master.properties: |
    ################################################################################
    # Node Configuration
    ################################################################################
    node.id                                                                                     =#{env['SERVICENAME']}_0
    node.control.port                                                                           =7001

    ################################################################################
    # Cluster Manager Configuration
    ################################################################################
    module.clustermgr.type                                                                      =CLUSTER_MGR
    # Valid options include DERBY_EMBEDDED, MYSQL_5_7, MARIADB_10_1, POSTGRES_9_4, ORACLE_12C, MSSQL_2012
    module.clustermgr.config.db.driver                                                          =POSTGRES_9_4
    module.clustermgr.config.db.url                                                             =jdbc:postgresql://#{env['DB_HOST']}:5432/cdr
    module.clustermgr.config.db.hibernate.showsql                                               =false
    module.clustermgr.config.db.username                                                        =#{env['DB_USER']}
    module.clustermgr.config.db.password                                                        =#{env['DB_PASSWORD']}
    module.clustermgr.config.messagebroker.address                                              =tcp://#{env['ACTIVEMQ_HOST']}:61616
    module.clustermgr.config.messagebroker.username                                             =#{env['ACTIVEMQ_USERNAME']}
    module.clustermgr.config.messagebroker.password                                             =#{env['ACTIVEMQ_PASSWORD']}

    ################################################################################
    # Database Configuration
    ################################################################################
    module.persistence.type                                                                     =PERSISTENCE_R4
    module.persistence.config.db.driver                                                         =POSTGRES_9_4
    module.persistence.config.db.url                                                            =jdbc:postgresql://#{env['DB_HOST']}:5432/cdr
    module.persistence.config.db.hibernate.showsql                                              =false
    module.persistence.config.db.username                                                       =#{env['DB_USER']}
    module.persistence.config.db.password                                                       =#{env['DB_PASSWORD']}
    module.persistence.config.db.hibernate_search.directory                                     =derby_database/lucene_fhir_persistence
    module.persistence.config.dao_config.expire_search_results_after_minutes                    =60
    module.persistence.config.dao_config.allow_multiple_delete.enabled                          =false
    module.persistence.config.dao_config.allow_inline_match_url_references.enabled              =false
    module.persistence.config.dao_config.allow_external_references.enabled                      =false

    ################################################################################
    # Local Storage Inbound Security
    ################################################################################
    module.local_security.type                                                                  =SECURITY_IN_LOCAL
    # This superuser will be automatically seeded if it does not already exist
    module.local_security.config.seed.username                                                  =#{env['LOCAL_SEC_USERNAME']}
    module.local_security.config.seed.password                                                  =#{env['LOCAL_SEC_PASSWORD']}

    ################################################################################
    # ENDPOINT: FHIR Service
    ################################################################################
    module.fhir_endpoint.type                                                                   =ENDPOINT_FHIR_REST_R4
    module.fhir_endpoint.requires.PERSISTENCE_R4                                                =persistence
    module.fhir_endpoint.requires.SECURITY_IN_UP                                                =local_security
    module.fhir_endpoint.config.port                                                            =8000
    module.fhir_endpoint.config.threadpool.min                                                  =2
    module.fhir_endpoint.config.threadpool.max                                                  =10
    module.fhir_endpoint.config.browser_highlight.enabled                                       =true
    module.fhir_endpoint.config.cors.enable                                                     =true
    module.fhir_endpoint.config.default_encoding                                                =JSON
    module.fhir_endpoint.config.default_pretty_print                                            =true
    module.fhir_endpoint.config.base_url.fixed                                                  =http://localhost:8000
    module.fhir_endpoint.config.tls.enabled                                                     =false
    module.fhir_endpoint.config.anonymous.access.enabled                                        =true
    module.fhir_endpoint.config.security.http.basic.enabled                                     =true
    module.fhir_endpoint.config.request_validating.enabled                                      =false
    module.fhir_endpoint.config.request_validating.fail_on_severity                             =ERROR
    module.fhir_endpoint.config.request_validating.tags.enabled                                 =false
    module.fhir_endpoint.config.request_validating.response_headers.enabled                     =false
    module.fhir_endpoint.config.request_validating.require_explicit_profile_definition.enabled  =false

    ################################################################################
    # ENDPOINT: JSON Admin Services
    ################################################################################
    module.admin_json.type                                                                      =ADMIN_JSON
    module.admin_json.requires.SECURITY_IN_UP                                                   =local_security
    module.admin_json.config.port                                                               =9000
    module.admin_json.config.tls.enabled                                                        =false
    module.admin_json.config.anonymous.access.enabled                                           =true
    module.admin_json.config.security.http.basic.enabled                                        =true

    ################################################################################
    # ENDPOINT: Web Admin
    ################################################################################
    module.admin_web.type                                                                       =ADMIN_WEB
    module.admin_web.requires.SECURITY_IN_UP                                                    =local_security
    module.admin_web.config.port                                                                =9100
    module.admin_web.config.tls.enabled                                                         =false

    ################################################################################
    # ENDPOINT: FHIRWeb Console
    ################################################################################
    module.fhirweb_endpoint.type                                                                =ENDPOINT_FHIRWEB
    module.fhirweb_endpoint.requires.SECURITY_IN_UP                                             =local_security
    module.fhirweb_endpoint.requires.ENDPOINT_FHIR                                              =fhir_endpoint
    module.fhirweb_endpoint.config.port                                                         =8001
    module.fhirweb_endpoint.config.threadpool.min                                               =2
    module.fhirweb_endpoint.config.threadpool.max                                               =10
    module.fhirweb_endpoint.config.tls.enabled                                                  =false
    module.fhirweb_endpoint.config.anonymous.access.enabled                                     =false

    ################################################################################
    # SMART Security
    ################################################################################
    module.smart_auth.type                                                                      =ca.cdr.security.out.smart.module.SecurityOutSmartCtxConfig
    module.smart_auth.requires.CLUSTERMGR                                                       =clustermgr
    module.smart_auth.requires.SECURITY_IN_UP                                                   =local_security
    module.smart_auth.config.port                                                               =9200
    module.smart_auth.config.openid.signing.jwks_file                                           =classpath:/smilecdr-demo.jwks
    module.smart_auth.config.issuer.url                                                         =http://localhost:9200
    module.smart_auth.config.tls.enabled                                                        =false

    ################################################################################
    # SMART Demo Apps
    ################################################################################
    module.smart_app_demo_host.type                                                             =ca.cdr.smartappshost.module.SmartAppsHostCtxConfig
    module.smart_app_demo_host.requires.CLUSTERMGR                                              =clustermgr
    module.smart_app_demo_host.config.port                                                      =9201

    ################################################################################
    # Subscription
    ################################################################################
    module.subscription.type                                                                    =SUBSCRIPTION_MATCHER_R4
    module.subscription.requires.PERSISTENCE_R4                                                 =persistence
  cdr-config-Clone.properties: |
    ################################################################################
    # Node Configuration
    ################################################################################
    node.id                                                                                     =#{env['SERVICENAME']}_#{env['ORDINAL']}
    node.control.port                                                                           =7001
    node.clone                                                                                  =#{env['SERVICENAME']}_0

    ################################################################################
    # Cluster Manager Configuration
    ################################################################################
    module.clustermgr.type                                                                      =CLUSTER_MGR
    # Valid options include DERBY_EMBEDDED, MYSQL_5_7, MARIADB_10_1, POSTGRES_9_4, ORACLE_12C, MSSQL_2012
    module.clustermgr.config.db.driver                                                         =POSTGRES_9_4
    module.clustermgr.config.db.url                                                            =jdbc:postgresql://#{env['DB_HOST']}:5432/cdr
    module.clustermgr.config.db.hibernate.showsql                                               =false
    module.clustermgr.config.db.username                                                        =#{env['DB_USER']}
    module.clustermgr.config.db.password                                                        =#{env['DB_PASSWORD']}
  logback.xml: |
    <!--
    Smile CDR uses the Logback framework for logging. For details on configuring this
    file, see:
    https://smilecdr.com/docs/current/getting_started/system_logging.html
    -->
    <configuration scan="true" scanPeriod="30 seconds">

    	<!--
    	LOG: CONSOLE
    	We write INFO-level events to the console. This is not generally
    	visible during normal operation, unless the application is run using
    	"bin/smilecdr run".
    	-->
    	<appender name="STDOUT_SYNC" class="ch.qos.logback.core.ConsoleAppender">
    		<filter class="ch.qos.logback.classic.filter.ThresholdFilter">
    			<level>INFO</level>
    		</filter>
    		<encoder>
    			<pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n${log.stackfilter.pattern}</pattern>
    		</encoder>
    	</appender>
    	<appender name="STDOUT" class="ch.qos.logback.classic.AsyncAppender">
    		<includeCallerData>false</includeCallerData>
    		<appender-ref ref="STDOUT_SYNC" />
    	</appender>

    	<!--
    	LOG: smile-startup.log
    	This file contains log entries written when the application is starting up
    	and shutting down. No other data is written to this file.
    	-->
    	<appender name="STARTUP" class="ch.qos.logback.core.rolling.RollingFileAppender">
    		<filter class="ch.qos.logback.classic.filter.ThresholdFilter">
    			<level>INFO</level>
    		</filter>
    		<file>${smile.basedir}/log/smile-startup.log</file>
    		<rollingPolicy class="ch.qos.logback.core.rolling.FixedWindowRollingPolicy">
    			<fileNamePattern>${smile.basedir}/log/smile-startup.log.%i.gz</fileNamePattern>
    			<minIndex>1</minIndex>
    			<maxIndex>9</maxIndex>
    		</rollingPolicy>
    		<triggeringPolicy class="ch.qos.logback.core.rolling.SizeBasedTriggeringPolicy">
    			<maxFileSize>5MB</maxFileSize>
    		</triggeringPolicy>
    		<encoder>
    			<pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n${log.stackfilter.pattern}</pattern>
    		</encoder>
    	</appender>

    	<!--
    	LOG: smile.log
    	We create a file called smile.log that will have (by default) all INFO level
    	messages. This file is written asynchronously using a blocking queue for better
    	performance.
    	-->
    	<appender name="RUNTIME_SYNC" class="ch.qos.logback.core.rolling.RollingFileAppender">
    		<filter class="ch.qos.logback.classic.filter.ThresholdFilter">
    			<level>INFO</level>
    		</filter>
    		<file>${smile.basedir}/log/smile.log</file>
    		<rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
    			<fileNamePattern>${smile.basedir}/log/smile.%d{yyyy-MM-dd}.log.gz</fileNamePattern>
    			<maxHistory>30</maxHistory>
    		</rollingPolicy>
    		<encoder>
    			<pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n${log.stackfilter.pattern}</pattern>
    		</encoder>
    	</appender>
    	<appender name="RUNTIME" class="ch.qos.logback.classic.AsyncAppender">
    		<discardingThreshold>0</discardingThreshold>
    		<includeCallerData>false</includeCallerData>
    		<appender-ref ref="RUNTIME_SYNC" />
    	</appender>

    	<!--
    	LOG: smile-error.log
    	This file contains only errors generated during normal operation.
    	-->
    	<appender name="ERROR" class="ch.qos.logback.core.rolling.RollingFileAppender">
    		<filter class="ch.qos.logback.classic.filter.ThresholdFilter">
    			<level>ERROR</level>
    		</filter>
    		<file>${smile.basedir}/log/smile-error.log</file>
    		<rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
    			<fileNamePattern>${smile.basedir}/log/smile-error.%d{yyyy-MM-dd}.log.gz</fileNamePattern>
    			<maxHistory>30</maxHistory>
    		</rollingPolicy>
    		<encoder>
    			<pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{36} [%file:%line] %msg%n</pattern>
    		</encoder>
    	</appender>

    	<!-- 
    	The startup log only gets messages from ca.cdr.app.App, which
    	logs startup and shutdown events
    	-->
    	<logger name="ca.cdr.app.App" additivity="false">
    		<appender-ref ref="STARTUP"/>
    		<appender-ref ref="STDOUT" />
    		<appender-ref ref="RUNTIME" />
    		<appender-ref ref="ERROR" />
    	</logger>

    	<appender name="SECURITY_TROUBLESHOOTING" class="ch.qos.logback.core.rolling.RollingFileAppender">
    		<filter class="ch.qos.logback.classic.filter.ThresholdFilter"><level>DEBUG</level></filter>
    		<file>${smile.basedir}/log/security-troubleshooting.log</file>
    		<rollingPolicy class="ch.qos.logback.core.rolling.FixedWindowRollingPolicy">
    			<fileNamePattern>${smile.basedir}/log/security-troubleshooting.log.%i.gz</fileNamePattern>
    			<minIndex>1</minIndex>
    			<maxIndex>9</maxIndex>
    		</rollingPolicy>
    		<triggeringPolicy class="ch.qos.logback.core.rolling.SizeBasedTriggeringPolicy">
    			<maxFileSize>5MB</maxFileSize>
    		</triggeringPolicy>
    		<encoder>
    			<pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n${log.stackfilter.pattern}</pattern>
    		</encoder>
    	</appender>
    	<logger name="ca.cdr.log.security_troubleshooting" additivity="false" level="DEBUG">
    		<appender-ref ref="SECURITY_TROUBLESHOOTING"/>
    	</logger>

    	<!--
    	Send all remaining logs to a few places 
    	-->
    	<root level="INFO">
    		<appender-ref ref="STDOUT" />
    		<appender-ref ref="RUNTIME" />
    		<appender-ref ref="ERROR" />
    	</root>

    </configuration>
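
Once saved to a file such as smilecdr-configmap.yaml (a placeholder name), the ConfigMap can be applied and inspected before deploying the StatefulSet:

kubectl apply -f smilecdr-configmap.yaml
kubectl describe configmap smilecdr-config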

Option 2 – Where multiple master node configurations are used, the Kubernetes ConfigMap definitions should look something like the following (note that logback.xml is unchanged):

apiVersion: v1
kind: ConfigMap
metadata:
  name: smilecdr-config
  labels:
    app: smilecdr-config
data:
  cdr-config-Master_mgmt.properties: |
    ################################################################################
    # Node Configuration
    ################################################################################
    node.id                                                                                     =#{env['SERVICENAME']}_0
    node.control.port                                                                           =7001

    ################################################################################
    # Cluster Manager Configuration
    ################################################################################
    module.clustermgr.type                                                                      =CLUSTER_MGR
    # Valid options include DERBY_EMBEDDED, MYSQL_5_7, MARIADB_10_1, POSTGRES_9_4, ORACLE_12C
    module.clustermgr.config.db.driver                                                          =POSTGRES_9_4
    module.clustermgr.config.db.url                                                             =jdbc:postgresql://#{env['DB_HOST']}:5432/cdr
    module.clustermgr.config.db.hibernate.showsql                                               =false
    module.clustermgr.config.db.username                                                        =#{env['DB_USER']}
    module.clustermgr.config.db.password                                                        =#{env['DB_PASSWORD']}

    ################################################################################
    # Local Storage Inbound Security
    ################################################################################
    module.local_security.type                                                                  =SECURITY_IN_LOCAL
    # This superuser will be automatically seeded if it does not already exist
    module.local_security.config.seed.username                                                  =#{env['LOCAL_SEC_USERNAME']}
    module.local_security.config.seed.password                                                  =#{env['LOCAL_SEC_PASSWORD']}

    ################################################################################
    # ENDPOINT: JSON Admin Services
    ################################################################################
    module.admin_json.type                                                                      =ADMIN_JSON
    module.admin_json.requires.SECURITY_IN_UP                                                   =local_security
    module.admin_json.config.port                                                               =9000
    module.admin_json.config.tls.enabled                                                        =false
    module.admin_json.config.anonymous.access.enabled                                           =true
    module.admin_json.config.security.http.basic.enabled                                        =true

    ################################################################################
    # ENDPOINT: Web Admin
    ################################################################################
    module.admin_web.type                                                                       =ADMIN_WEB
    module.admin_web.requires.SECURITY_IN_UP                                                    =local_security
    module.admin_web.config.port                                                                =9100
    module.admin_web.config.tls.enabled                                                         =false
  cdr-config-Master_smart.properties: |
    ################################################################################
    # Node Configuration
    ################################################################################
    node.id                                                                                     =#{env['SERVICENAME']}_0
    node.control.port                                                                           =7001

    ################################################################################
    # Cluster Manager Configuration
    ################################################################################
    module.clustermgr.type                                                                      =CLUSTER_MGR
    # Valid options include DERBY_EMBEDDED, MYSQL_5_7, MARIADB_10_1, POSTGRES_9_4, ORACLE_12C
    module.clustermgr.config.db.driver                                                          =POSTGRES_9_4
    module.clustermgr.config.db.url                                                             =jdbc:postgresql://#{env['DB_HOST']}:5432/cdr
    module.clustermgr.config.db.hibernate.showsql                                               =false
    module.clustermgr.config.db.username                                                        =#{env['DB_USER']}
    module.clustermgr.config.db.password                                                        =#{env['DB_PASSWORD']}

    ################################################################################
    # Local Storage Inbound Security
    ################################################################################
    module.local_security.type                                                                  =SECURITY_IN_LOCAL
    # This superuser will be automatically seeded if it does not already exist
    module.local_security.config.seed.username                                                  =#{env['LOCAL_SEC_USERNAME']}
    module.local_security.config.seed.password                                                  =#{env['LOCAL_SEC_PASSWORD']}

    ################################################################################
    # SMART Security
    ################################################################################
    module.smart_auth.type                                                                      =ca.cdr.security.out.smart.module.SecurityOutSmartCtxConfig
    module.smart_auth.requires.CLUSTERMGR                                                       =clustermgr
    module.smart_auth.requires.SECURITY_IN_UP                                                   =local_security
    module.smart_auth.config.port                                                               =9200
    module.smart_auth.config.openid.signing.jwks_file                                           =classpath:/smilecdr-demo.jwks
    module.smart_auth.config.issuer.url                                                         =http://localhost:9200
    module.smart_auth.config.tls.enabled                                                        =false

    ################################################################################
    # SMART Demo Apps
    ################################################################################
    module.smart_app_demo_host.requires.CLUSTERMGR                                              =clustermgr
    module.smart_app_demo_host.type                                                             =ca.cdr.smartappshost.module.SmartAppsHostCtxConfig
    module.smart_app_demo_host.config.port                                                      =9201
  cdr-config-Master_subscription.properties: |
    ################################################################################
    # Node Configuration
    ################################################################################
    node.id                                                                                     =#{env['SERVICENAME']}_0
    node.control.port                                                                           =7001

    ################################################################################
    # Cluster Manager Configuration
    ################################################################################
    module.clustermgr.type                                                                      =CLUSTER_MGR
    # Valid options include DERBY_EMBEDDED, MYSQL_5_7, MARIADB_10_1, POSTGRES_9_4, ORACLE_12C
    module.clustermgr.config.db.driver                                                          =POSTGRES_9_4
    module.clustermgr.config.db.url                                                             =jdbc:postgresql://#{env['DB_HOST']}:5432/cdr
    module.clustermgr.config.db.hibernate.showsql                                               =false
    module.clustermgr.config.db.username                                                        =#{env['DB_USER']}
    module.clustermgr.config.db.password                                                        =#{env['DB_PASSWORD']}
    module.clustermgr.config.messagebroker.type                                                 =REMOTE_ACTIVEMQ
    module.clustermgr.config.messagebroker.address                                              =tcp://#{env['ACTIVEMQ_HOST']}:61616
    module.clustermgr.config.messagebroker.username                                             =#{env['ACTIVEMQ_USERNAME']}
    module.clustermgr.config.messagebroker.password                                             =#{env['ACTIVEMQ_PASSWORD']}

    ################################################################################
    # Database Configuration
    ################################################################################
    module.persistence.type                                                                     =PERSISTENCE_R4
    module.persistence.config.db.driver                                                         =POSTGRES_9_4
    module.persistence.config.db.url                                                            =jdbc:postgresql://#{env['DB_HOST']}:5432/cdr
    module.persistence.config.db.hibernate.showsql                                              =false
    module.persistence.config.db.username                                                       =#{env['DB_USER']}
    module.persistence.config.db.password                                                       =#{env['DB_PASSWORD']}
    module.persistence.config.db.hibernate_search.directory                                     =database/lucene_fhir_persistence
    module.persistence.config.dao_config.expire_search_results_after_minutes                    =60
    module.persistence.config.dao_config.allow_multiple_delete.enabled                          =false
    module.persistence.config.dao_config.allow_inline_match_url_references.enabled              =false
    module.persistence.config.dao_config.allow_external_references.enabled                      =false

    ################################################################################
    # Local Storage Inbound Security
    ################################################################################
    module.local_security.type                                                                  =SECURITY_IN_LOCAL
    # This superuser will be automatically seeded if it does not already exist
    module.local_security.config.seed.username                                                  =#{env['LOCAL_SEC_USERNAME']}
    module.local_security.config.seed.password                                                  =#{env['LOCAL_SEC_PASSWORD']}

    ################################################################################
    # Subscription
    ################################################################################
    module.subscription.type                                                                    =SUBSCRIPTION_MATCHER_R4
    module.subscription.requires.PERSISTENCE_R4                                                 =persistence
  cdr-config-Master_listener.properties: |
    ################################################################################
    # Node Configuration
    ################################################################################
    node.id                                                                                     =#{env['SERVICENAME']}_0
    node.control.port                                                                           =7001

    ################################################################################
    # Cluster Manager Configuration
    ################################################################################
    module.clustermgr.type                                                                      =CLUSTER_MGR
    # Valid options include DERBY_EMBEDDED, MYSQL_5_7, MARIADB_10_1, POSTGRES_9_4, ORACLE_12C
    module.clustermgr.config.db.driver                                                          =POSTGRES_9_4
    module.clustermgr.config.db.url                                                             =jdbc:postgresql://#{env['DB_HOST']}:5432/cdr
    module.clustermgr.config.db.hibernate.showsql                                               =false
    module.clustermgr.config.db.username                                                        =#{env['DB_USER']}
    module.clustermgr.config.db.password                                                        =#{env['DB_PASSWORD']}

    ################################################################################
    # Database Configuration
    ################################################################################
    module.persistence.type                                                                     =PERSISTENCE_R4
    module.persistence.config.db.driver                                                         =POSTGRES_9_4
    module.persistence.config.db.url                                                            =jdbc:postgresql://#{env['DB_HOST']}:5432/cdr
    module.persistence.config.db.hibernate.showsql                                              =false
    module.persistence.config.db.username                                                       =#{env['DB_USER']}
    module.persistence.config.db.password                                                       =#{env['DB_PASSWORD']}
    module.persistence.config.db.hibernate_search.directory                                     =database/lucene_fhir_persistence
    module.persistence.config.dao_config.expire_search_results_after_minutes                    =60
    module.persistence.config.dao_config.allow_multiple_delete.enabled                          =false
    module.persistence.config.dao_config.allow_inline_match_url_references.enabled              =false
    module.persistence.config.dao_config.allow_external_references.enabled                      =false

    ################################################################################
    # Local Storage Inbound Security
    ################################################################################
    module.local_security.type                                                                  =SECURITY_IN_LOCAL
    # This superuser will be automatically seeded if it does not already exist
    module.local_security.config.seed.username                                                  =#{env['LOCAL_SEC_USERNAME']}
    module.local_security.config.seed.password                                                  =#{env['LOCAL_SEC_PASSWORD']}

    ################################################################################
    # ENDPOINT: FHIR Service
    ################################################################################
    module.fhir_endpoint.type                                                                   =ENDPOINT_FHIR_REST_R4
    module.fhir_endpoint.requires.PERSISTENCE_R4                                                =persistence
    module.fhir_endpoint.requires.SECURITY_IN_UP                                                =local_security
    module.fhir_endpoint.config.port                                                            =8000
    module.fhir_endpoint.config.threadpool.min                                                  =2
    module.fhir_endpoint.config.threadpool.max                                                  =10
    module.fhir_endpoint.config.browser_highlight.enabled                                       =true
    module.fhir_endpoint.config.cors.enable                                                     =true
    module.fhir_endpoint.config.default_encoding                                                =JSON
    module.fhir_endpoint.config.default_pretty_print                                            =true
    module.fhir_endpoint.config.base_url.fixed                                                  =http://localhost:8000
    module.fhir_endpoint.config.tls.enabled                                                     =false
    module.fhir_endpoint.config.anonymous.access.enabled                                        =true
    module.fhir_endpoint.config.security.http.basic.enabled                                     =true
    module.fhir_endpoint.config.request_validating.enabled                                      =false
    module.fhir_endpoint.config.request_validating.fail_on_severity                             =ERROR
    module.fhir_endpoint.config.request_validating.tags.enabled                                 =false
    module.fhir_endpoint.config.request_validating.response_headers.enabled                     =false
    module.fhir_endpoint.config.request_validating.require_explicit_profile_definition.enabled  =false

    ################################################################################
    # ENDPOINT: FHIRWeb Console
    ################################################################################
    module.fhirweb_endpoint.type                                                                =ENDPOINT_FHIRWEB
    module.fhirweb_endpoint.requires.SECURITY_IN_UP                                             =local_security
    module.fhirweb_endpoint.requires.ENDPOINT_FHIR                                              =fhir_endpoint
    module.fhirweb_endpoint.config.port                                                         =8001
    module.fhirweb_endpoint.config.threadpool.min                                               =2
    module.fhirweb_endpoint.config.threadpool.max                                               =10
    module.fhirweb_endpoint.config.tls.enabled                                                  =false
    module.fhirweb_endpoint.config.anonymous.access.enabled                                     =false
  cdr-config-Clone.properties: |
    ################################################################################
    # Node Configuration
    ################################################################################
    node.id                                                                                     =#{env['SERVICENAME']}_#{env['ORDINAL']}
    node.control.port                                                                           =7001
    node.clone                                                                                  =#{env['SERVICENAME']}_0

    ################################################################################
    # Cluster Manager Configuration
    ################################################################################
    module.clustermgr.type                                                                      =CLUSTER_MGR
    # Valid options include DERBY_EMBEDDED, MYSQL_5_7, MARIADB_10_1, POSTGRES_9_4, ORACLE_12C, MSSQL_2012
    module.clustermgr.config.db.driver                                                         =POSTGRES_9_4
    module.clustermgr.config.db.url                                                            =jdbc:postgresql://#{env['DB_HOST']}:5432/cdr
    module.clustermgr.config.db.hibernate.showsql                                               =false
    module.clustermgr.config.db.username                                                        =#{env['DB_USER']}
    module.clustermgr.config.db.password                                                        =#{env['DB_PASSWORD']}
  logback.xml: |
    [SAME AS OPTION 1]

26.3.6 StatefulSet Definitions

Each Smile CDR master node being deployed needs a Kubernetes StatefulSet definition. The StatefulSet definition defines most of the container-level settings needed to launch Smile CDR.

Note:

  • The examples below all include an environment variable, "IS_STATEFULSET", which must be included for Kubernetes deployments and must be set to the value "isStatefulSet". This environment variable prompts Smile CDR to use Kubernetes functionality to dynamically set the node.id values for Smile CDR master and clone nodes.
  • The examples below assume that a ConfigMap definition is being used to manage configuration. If a ConfigMap is not being used, exclude the following elements from the StatefulSet definition examples:
    • The .spec.template.spec.containers.command element for the smilecdr* container definitions.
    • The .spec.template.spec.containers.volumeMounts entry for config-map
    • The .spec.template.spec.volumes element for config-map
  • The .spec.template.spec.containers.command element for the smilecdr* container definitions overrides the CMD instruction in the Docker image; as such, when it is included, /home/smile/smilecdr/bin/smilecdr run must be the final command instruction.
  • The StatefulSet's .metadata.name element will be available to the Smile CDR instance as an environment variable called SERVICENAME.
  • When deploying a new Smile CDR cluster (i.e. with an empty database), set the .spec.replicas parameter initially to 1 to avoid problems due to a clone being launched before the master configuration is persisted to the database. Once the master configuration has been loaded to the database, the number of replicas can be increased in the definition.
  • The examples below include a number of env entries for values that will likely differ across otherwise similar environments, such as hostnames (e.g. for the database and ActiveMQ) and credentials.
  • In the examples below, a volume is defined to permanently store log files for all Smile CDR instances running on a given host. Logs will be written to files in the smileCDR_logs directory on the host where Smile CDR is running.
  • The examples below assume that the Smile CDR Docker image has been imported into a local Docker repository on each of the Kubernetes worker nodes that will be used. To load the Smile CDR image file on a given environment, use the appropriate Docker command, e.g.:
    docker image load --input="/path/to/smilecdr-2019.11.R01-container.tar.bz2"
    

Option 1 – Where a single master node configuration is used, the Kubernetes StatefulSet definition should look something like the following:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: smilecdr
spec:
  selector:
    matchLabels:
      app: smilecdr # has to match .spec.template.metadata.labels
  serviceName: "smilecdr"
  # Note: It is recommended that the number of replicas be set initially to 1 until
  # the Master configuration is loaded into the DB. Otherwise clone nodes may fail to
  # initialize.
  replicas: 1
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: smilecdr # has to match .spec.selector.matchLabels
    spec:
      containers:
      - image: smilecdr
        name: smilecdr
        imagePullPolicy: IfNotPresent
        command:
        - bash
        - "-c"
        - |
          # Replace the properties files with the configurations defined in ConfigMap.
          cp /mnt/config-map/cdr-config-Master.properties /home/smile/smilecdr/classes/cdr-config-Master.properties
          cp /mnt/config-map/cdr-config-Clone.properties /home/smile/smilecdr/classes/cdr-config-Clone.properties
          cp /mnt/config-map/logback.xml /home/smile/smilecdr/classes/logback.xml
          /home/smile/smilecdr/bin/smilecdr run
        ports:
        - containerPort: 80
        - containerPort: 443
        - containerPort: 8000
        - containerPort: 9000
        - containerPort: 9100
        - containerPort: 8001
        - containerPort: 9200
        - containerPort: 9201
        volumeMounts:
        - name: config-map
          mountPath: /mnt/config-map
        - name: logs
          mountPath: /home/smile/smilecdr/log
        env:
        - name: IS_STATEFULSET
          value: isStatefulSet
        - name: ACTIVEMQ_HOST
          value: 10.0.2.15
        - name: DB_HOST
          value: 10.0.2.15
        - name: DB_USER
          value: cdr
        - name: DB_PASSWORD
          value: SmileCDR
        - name: LOCAL_SEC_USERNAME
          value: admin
        - name: LOCAL_SEC_PASSWORD
          value: password
        - name: ACTIVEMQ_USERNAME
          value: admin
        - name: ACTIVEMQ_PASSWORD
          value: admin
      restartPolicy: Always
      volumes:
      - name: config-map
        configMap:
          name: smilecdr-config
      - name: logs
        hostPath:
          path: /smileCDR_logs
          type: DirectoryOrCreate
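
A typical deployment sequence for this definition is sketched below: apply it with a single replica, confirm that the master instance (smilecdr-0) has started and loaded its configuration into the database, and then scale out. The file name is a placeholder.

kubectl apply -f smilecdr-statefulset.yaml
kubectl get pods -l app=smilecdr -w
kubectl scale statefulset smilecdr --replicas=3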

Option 2 – Where multiple master node configurations are used, the Kubernetes StatefulSet definitions should look something like the snippets that follow. Note that it is recommended that each StatefulSet be defined in a separate file and deployed separately. This avoids conflicts that can occur when multiple Smile CDR master nodes sharing a single database attempt to come online at the same time.
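
When each of the following StatefulSet definitions has been saved to its own file, they can be applied one at a time, waiting for each rollout to complete before applying the next (file names are placeholders):

kubectl apply -f smilecdr-mgmt-statefulset.yaml
kubectl rollout status statefulset/smilecdr-mgmt
kubectl apply -f smilecdr-listener-statefulset.yaml
kubectl rollout status statefulset/smilecdr-listener
# ...repeat for smilecdr-smart and smilecdr-subscription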

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: smilecdr-mgmt
spec:
  selector:
    matchLabels:
      app: smilecdr-mgmt # has to match .spec.template.metadata.labels
  serviceName: "smilecdr-mgmt"
  # Note: It is recommended that the number of replicas be set initially to 1 until
  # the Master configuration is loaded into the DB. Otherwise clone nodes may fail to
  # initialize.
  replicas: 1
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: smilecdr-mgmt # has to match .spec.selector.matchLabels
    spec:
      containers:
      - image: smilecdr
        name: smilecdr-mgmt
        imagePullPolicy: IfNotPresent
        command:
        - bash
        - "-c"
        - |
          # Replace the properties files with the configurations defined in ConfigMap.
          cp /mnt/config-map/cdr-config-Master_mgmt.properties /home/smile/smilecdr/classes/cdr-config-Master.properties
          cp /mnt/config-map/cdr-config-Clone.properties /home/smile/smilecdr/classes/cdr-config-Clone.properties
          cp /mnt/config-map/logback.xml /home/smile/smilecdr/classes/logback.xml
          /home/smile/smilecdr/bin/smilecdr run
        ports:
        - containerPort: 80
        - containerPort: 443
        - containerPort: 9000
        - containerPort: 9100
        volumeMounts:
        - name: config-map
          mountPath: /mnt/config-map
        - name: logs
          mountPath: /home/smile/smilecdr/log
        env:
        - name: IS_STATEFULSET
          value: isStatefulSet
        - name: DB_HOST
          value: 10.0.2.15
        - name: DB_USER
          value: cdr
        - name: DB_PASSWORD
          value: SmileCDR
        - name: LOCAL_SEC_USERNAME
          value: admin
        - name: LOCAL_SEC_PASSWORD
          value: password
      restartPolicy: Always
      volumes:
      - name: config-map
        configMap:
          name: smilecdr-config
      - name: logs
        hostPath:
          path: /smileCDR_logs
          type: DirectoryOrCreate
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: smilecdr-smart
spec:
  selector:
    matchLabels:
      app: smilecdr-smart # has to match .spec.template.metadata.labels
  serviceName: "smilecdr-smart"
  # Note: It is recommended that the number of replicas be set initially to 1 until
  # the Master configuration is loaded into the DB. Otherwise clone nodes may fail to
  # initialize.
  replicas: 1
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: smilecdr-smart # has to match .spec.selector.matchLabels
    spec:
      containers:
      - image: smilecdr
        name: smilecdr-smart
        imagePullPolicy: IfNotPresent
        command:
        - bash
        - "-c"
        - |
          # Replace the properties files with the configurations defined in ConfigMap.
          cp /mnt/config-map/cdr-config-Master_smart.properties /home/smile/smilecdr/classes/cdr-config-Master.properties
          cp /mnt/config-map/cdr-config-Clone.properties /home/smile/smilecdr/classes/cdr-config-Clone.properties
          cp /mnt/config-map/logback.xml /home/smile/smilecdr/classes/logback.xml
          /home/smile/smilecdr/bin/smilecdr run
        ports:
        - containerPort: 9200
        - containerPort: 9201
        volumeMounts:
        - name: config-map
          mountPath: /mnt/config-map
        - name: logs
          mountPath: /home/smile/smilecdr/log
        env:
        - name: IS_STATEFULSET
          value: isStatefulSet
        - name: DB_HOST
          value: 10.0.2.15
        - name: DB_USER
          value: cdr
        - name: DB_PASSWORD
          value: SmileCDR
        - name: LOCAL_SEC_USERNAME
          value: admin
        - name: LOCAL_SEC_PASSWORD
          value: password
      restartPolicy: Always
      volumes:
      - name: config-map
        configMap:
          name: smilecdr-config
      - name: logs
        hostPath:
          path: /smileCDR_logs
          type: DirectoryOrCreate
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: smilecdr-subscription
spec:
  selector:
    matchLabels:
      app: smilecdr-subscription # has to match .spec.template.metadata.labels
  serviceName: "smilecdr-subscription"
  # Note: It is recommended that the number of replicas be set initially to 1 until
  # the Master configuration is loaded into the DB. Otherwise clone nodes may fail to
  # initialize.
  replicas: 1
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: smilecdr-subscription # has to match .spec.selector.matchLabels
    spec:
      containers:
      - image: smilecdr
        name: smilecdr-subscription
        imagePullPolicy: IfNotPresent
        command:
        - bash
        - "-c"
        - |
          # Replace the properties files with the configurations defined in ConfigMap.
          cp /mnt/config-map/cdr-config-Master_subscription.properties /home/smile/smilecdr/classes/cdr-config-Master.properties
          cp /mnt/config-map/cdr-config-Clone.properties /home/smile/smilecdr/classes/cdr-config-Clone.properties
          cp /mnt/config-map/logback.xml /home/smile/smilecdr/classes/logback.xml
          /home/smile/smilecdr/bin/smilecdr run
        volumeMounts:
        - name: config-map
          mountPath: /mnt/config-map
        - name: logs
          mountPath: /home/smile/smilecdr/log
        env:
        - name: IS_STATEFULSET
          value: isStatefulSet
        - name: ACTIVEMQ_HOST
          value: 10.0.2.15
        - name: DB_HOST
          value: 10.0.2.15
        - name: DB_USER
          value: cdr
        - name: DB_PASSWORD
          value: SmileCDR
        - name: LOCAL_SEC_USERNAME
          value: admin
        - name: LOCAL_SEC_PASSWORD
          value: password
        - name: ACTIVEMQ_USERNAME
          value: admin
        - name: ACTIVEMQ_PASSWORD
          value: admin
      restartPolicy: Always
      volumes:
      - name: config-map
        configMap:
          name: smilecdr-config
      - name: logs
        hostPath:
          path: /smileCDR_logs
          type: DirectoryOrCreate
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: smilecdr-listener
spec:
  selector:
    matchLabels:
      app: smilecdr-listener # has to match .spec.template.metadata.labels
  serviceName: "smilecdr-listener"
  # Note: It is recommended that the number of replicas be set initially to 1 until
  # the Master configuration is loaded into the DB. Otherwise clone nodes may fail to
  # initialize.
  replicas: 1
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: smilecdr-listener # has to match .spec.selector.matchLabels
    spec:
      containers:
      - image: smilecdr
        name: smilecdr-listener
        imagePullPolicy: IfNotPresent
        command:
        - bash
        - "-c"
        - |
          # Replace the properties files with the configurations defined in ConfigMap.
          cp /mnt/config-map/cdr-config-Master_listener.properties /home/smile/smilecdr/classes/cdr-config-Master.properties
          cp /mnt/config-map/cdr-config-Clone.properties /home/smile/smilecdr/classes/cdr-config-Clone.properties
          cp /mnt/config-map/logback.xml /home/smile/smilecdr/classes/logback.xml
          /home/smile/smilecdr/bin/smilecdr run
        ports:
        - containerPort: 8000
        - containerPort: 8001
        volumeMounts:
        - name: logs
          mountPath: /home/smile/smilecdr/log
        - name: config-map
          mountPath: /mnt/config-map
        env:
        - name: IS_STATEFULSET
          value: isStatefulSet
        - name: DB_HOST
          value: 10.0.2.15
        - name: DB_USER
          value: cdr
        - name: DB_PASSWORD
          value: SmileCDR
        - name: LOCAL_SEC_USERNAME
          value: admin
        - name: LOCAL_SEC_PASSWORD
          value: password
      restartPolicy: Always
      volumes:
      - name: config-map
        configMap:
          name: smilecdr-config
      - name: logs
        hostPath:
          path: /smileCDR_logs
          type: DirectoryOrCreate

26.3.7Deploying Smile CDR in a Kubernetes Cluster

 

Use the kubectl command to deploy the Kubernetes objects in the following order:

  1. Services

  2. ConfigMaps

  3. StatefulSets

Note: When deploying for the first time, before any configuration exists in the database:

  • Deploy each StatefulSet one at a time to avoid initialization errors.
  • Initially deploy only one instance of each master node, then scale up as needed once the Master configuration has been loaded into the database (see the sample commands below).
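
For illustration only, one possible sequence of kubectl commands is sketched below. The file names (smilecdr-services.yaml, smilecdr-configmaps.yaml, and the per-StatefulSet files) are hypothetical placeholders and should be replaced with whatever names were used when saving the configuration files from the previous sections:

# Deploy Services and ConfigMaps first so the StatefulSet pods can find them.
kubectl apply -f smilecdr-services.yaml
kubectl apply -f smilecdr-configmaps.yaml

# Deploy each StatefulSet one at a time, waiting for its pod to become
# ready before deploying the next one.
kubectl apply -f smilecdr-smart-statefulset.yaml
kubectl rollout status statefulset/smilecdr-smart
kubectl apply -f smilecdr-subscription-statefulset.yaml
kubectl rollout status statefulset/smilecdr-subscription
kubectl apply -f smilecdr-listener-statefulset.yaml
kubectl rollout status statefulset/smilecdr-listener
# ... repeat for any remaining StatefulSets ...

# Once the Master configuration has been loaded into the database,
# scale up the replicas as needed, for example:
kubectl scale statefulset smilecdr-smart --replicas=3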

26.3.8Configuring Reverse Proxy and Load Balancer

 

In order to access the Smile CDR services through the port numbers specified in the Smile CDR configuration, a reverse proxy server must be deployed to map the port numbers exposed by the Kubernetes services back to the port numbers configured in Smile CDR. In addition, if the cluster includes multiple Kubernetes nodes (i.e. multiple servers or virtual environments), a load balancer must be configured to distribute client requests across the nodes. An NGINX server can be used for both purposes. A sample NGINX configuration supporting both a simple reverse proxy and simple load balancing across three nodes (srv1.example.com, srv2.example.com, and srv3.example.com) is shown below:

http {
    include       mime.types;
    default_type  application/octet-stream;

    sendfile        on;

    keepalive_timeout  65;

    # Define the "upstream" endpoints - the environments hosting clustered Smile CDR instances.
    # By default, NGINX will use Round Robin for balancing calls between the servers listed below.
    upstream kubernetes_fhir_endpoint {
        server srv1.example.com:30003;
        server srv2.example.com:30003;
        server srv3.example.com:30003;
    }
    upstream kubernetes_fhirweb_console {
        server srv1.example.com:30006;
        server srv2.example.com:30006;
        server srv3.example.com:30006;
    }
    upstream kubernetes_webadmin_console {
        server srv1.example.com:30005;
        server srv2.example.com:30005;
        server srv3.example.com:30005;
    }
    upstream kubernetes_jsonadmin_console {
        server srv1.example.com:30004;
        server srv2.example.com:30004;
        server srv3.example.com:30004;
    }
    upstream kubernetes_smart_oauth {
        server srv1.example.com:30007;
        server srv2.example.com:30007;
        server srv3.example.com:30007;
    }
    upstream kubernetes_smart_app {
        server srv1.example.com:30008;
        server srv2.example.com:30008;
        server srv3.example.com:30008;
    }

   #######################################
   # Redirect http to https
   #######################################
   server {
       server_name localhost;
       listen 80;
       return 301 https://$host$request_uri;
   }

   #######################################
   # FHIR Endpoint
   # -> Map port 8000 to 30003
   #######################################
   server {
       server_name localhost;
       listen 8000 ssl default_server;
       include secure.conf;
       location / {
           proxy_set_header    Host                        $host;
           proxy_set_header    X-Real-IP                   $remote_addr;
           proxy_set_header    X-Forwarded-For             $proxy_add_x_forwarded_for;
           proxy_set_header    X-Forwarded-Host   $host:8000;
           proxy_set_header    X-Forwarded-Server $host;
           proxy_set_header    X-Forwarded-Port   8000;
           proxy_set_header    X-Forwarded-Proto  https;
           proxy_pass          http://kubernetes_fhir_endpoint/;
       }
   }
   
   #######################################
   # FHIRWeb Console
   # -> Map port 8001 to 30006
   #######################################
   server {
       server_name localhost;
       listen 8001 ssl default_server;
       include secure.conf;
       location / {
           proxy_set_header    Host                        $host;
           proxy_set_header    X-Real-IP                   $remote_addr;
           proxy_set_header    X-Forwarded-For             $proxy_add_x_forwarded_for;
           proxy_set_header    X-Forwarded-Host   $host:8001;
           proxy_set_header    X-Forwarded-Server $host;
           proxy_set_header    X-Forwarded-Port   8001;
           proxy_set_header    X-Forwarded-Proto  https;
           proxy_pass          http://kubernetes_fhirweb_console/;
       }
   }
   
   
   #######################################
   # Web Admin Console
   # -> Map ports 443 and 9100 to 30005
   #######################################
   server {
       server_name localhost;
       listen 443 ssl default_server;
       include secure.conf;
       location / {
           proxy_set_header    Host                        $host;
           proxy_set_header    X-Real-IP                   $remote_addr;
           proxy_set_header    X-Forwarded-For             $proxy_add_x_forwarded_for;
           proxy_set_header    X-Forwarded-Host   $host:443;
           proxy_set_header    X-Forwarded-Server $host;
           proxy_set_header    X-Forwarded-Port   443;
           proxy_set_header    X-Forwarded-Proto  https;
           proxy_pass          http://kubernetes_webadmin_console/;
       }
   }
   server {
       server_name localhost;
       listen 9100 ssl default_server;
       include secure.conf;
       location / {
           proxy_set_header    Host                        $host;
           proxy_set_header    X-Real-IP                   $remote_addr;
           proxy_set_header    X-Forwarded-For             $proxy_add_x_forwarded_for;
           proxy_set_header    X-Forwarded-Host   $host:9100;
           proxy_set_header    X-Forwarded-Server $host;
           proxy_set_header    X-Forwarded-Port   9100;
           proxy_set_header    X-Forwarded-Proto  https;
           proxy_pass          http://kubernetes_webadmin_console/;
       }
   }
   
   
   #######################################
   # JSON Admin API
   # -> Map port 9000 to 30004
   #######################################
   server {
       server_name localhost;
       listen 9000 ssl default_server;
       include secure.conf;
       location / {
           proxy_set_header    Host                        $host;
           proxy_set_header    X-Real-IP                   $remote_addr;
           proxy_set_header    X-Forwarded-For             $proxy_add_x_forwarded_for;
           proxy_set_header    X-Forwarded-Host   $host:9000;
           proxy_set_header    X-Forwarded-Server $host;
           proxy_set_header    X-Forwarded-Port   9000;
           proxy_set_header    X-Forwarded-Proto  https;
           proxy_pass          http://kubernetes_jsonadmin_console/;
       }
   }
   
   #######################################
   # SMART OAuth2 / OpenID Connect Server
   # -> Map port 9200 to 30007
   #######################################
   server {
       server_name localhost;
       listen 9200 ssl default_server;
       include secure.conf;
       location / {
           proxy_set_header    Host                        $host;
           proxy_set_header    X-Real-IP                   $remote_addr;
           proxy_set_header    X-Forwarded-For             $proxy_add_x_forwarded_for;
           proxy_set_header    X-Forwarded-Host   $host:9200;
           proxy_set_header    X-Forwarded-Server $host;
           proxy_set_header    X-Forwarded-Port   9200;
           proxy_set_header    X-Forwarded-Proto  https;
           proxy_pass          http://kubernetes_smart_oauth/;
       }
   }
   
   #######################################
   # SMART App Host
   # -> Map port 9201 to 30008
   #######################################
   server {
       server_name localhost;
       listen 9201 ssl default_server;
       include secure.conf;
       location / {
           proxy_set_header    Host                        $host;
           proxy_set_header    X-Real-IP                   $remote_addr;
           proxy_set_header    X-Forwarded-For             $proxy_add_x_forwarded_for;
           proxy_set_header    X-Forwarded-Host   $host:9201;
           proxy_set_header    X-Forwarded-Server $host;
           proxy_set_header    X-Forwarded-Port   9201;
           proxy_set_header    X-Forwarded-Proto  https;
           proxy_pass          http://kubernetes_smart_app/;
       }
   }
}

The secure.conf file included above would define the SSL parameters required for secure connections, for example (a minimal sketch follows the list):

  • server_name
  • ssl_certificate
  • ssl_certificate_key
  • ssl_dhparam
  • ssl_ciphers
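
A minimal sketch of what such a secure.conf file might contain is shown below. The host name, certificate, key, and DH parameter paths are placeholders and must point at the actual certificate material for the deployment:

server_name         example.com;
ssl_certificate     /etc/nginx/certs/example.com.crt;
ssl_certificate_key /etc/nginx/certs/example.com.key;
ssl_dhparam         /etc/nginx/certs/dhparam.pem;
ssl_ciphers         HIGH:!aNULL:!MD5;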

The mime.types file included above would define the MIME types recognized by NGINX.
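
A standard mime.types file ships with NGINX; it consists of a types block mapping MIME types to file extensions. An abridged example is shown below:

types {
    text/html                 html htm shtml;
    text/css                  css;
    application/javascript    js;
    application/json          json;
}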