52.2.1 Deploying a Kubernetes Managed Cluster

This page describes how Smile CDR, with its modular design, can be deployed and managed as part of a Kubernetes cluster. Note that this page focuses primarily on clustering the Smile CDR application itself; clustering or hosting of other components such as the database, message broker, and reverse proxy is not addressed here.

Figure: Kubernetes Cluster

The deployment steps below highlight two different options:

  • Option 1 – A single process node example. The entire Smile CDR application is replicated across the cluster.
  • Option 2 – A multiple process node example. Separate process definitions are created for each of the following, allowing them to be separately replicated:
    • FHIR REST Endpoint and FHIR Storage modules
    • SMART on FHIR modules
    • Subscription and FHIR Storage modules
    • Administration modules (Web Admin Console and JSON Admin API)

While most of what follows is applicable to both options, there are sections that differ between the options. Take note of which option you would like to deploy and follow along accordingly.

52.2.2 Overview of Smile CDR Kubernetes Deployment Process

The basic steps to creating a Smile CDR Kubernetes Cluster are as follows:

  1. Pre-requisite steps:

    • Obtain a copy of the Smile CDR Docker image.
    • Provision servers/environments for hosting the Smile CDR cluster, database, message broker, and load balancer (as needed).
    • Install Docker container and Kubernetes runtime packages.
    • Deploy external dependencies, including the database, message broker, reverse proxy, and load balancer.
  2. Prepare Kubernetes configuration files.

  3. Deploy Smile CDR in Kubernetes cluster.

  4. Configure the reverse proxy so that Smile CDR services can be accessed through the configured ports.

  5. Configure a load balancer if multiple Kubernetes nodes are being used.

The sections that follow will provide more details about these steps.

52.2.3 Pre-requisite Steps

52.2.3.1 Obtain a Copy of the Kubernetes-Enabled Smile CDR Docker Image

A Smile CDR Docker image is available that can be used either in standalone Docker deployments or in Kubernetes deployments. See the Docker Container Installation page for more information.
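
For example, once the image archive has been obtained, it can be loaded into the local Docker repository on each host and verified with standard Docker commands (the file name below is illustrative):

docker image load --input="/path/to/smilecdr-2019.11.R01-container.tar.bz2"

# Confirm that the loaded image name matches what the Deployment definitions
# reference later (image: smilecdr).
docker image ls | grep smilecdr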

52.2.3.2 Provision Servers/Environments

Ensure that the servers or environments hosting the Smile CDR processes are able to communicate with each other as well as with the servers hosting the database and message broker. Additionally, ensure that any servers hosting the reverse proxy and load balancer are able to communicate with the Kubernetes node components.

52.2.3.3 Install Docker Container and Kubernetes Runtime Packages

The steps for installing Kubernetes will vary depending on the type of environment or cloud provider being used to host the Kubernetes components.

  • Instructions for deploying Docker container runtime packages can be found here.
  • Instructions for deploying and configuring Kubernetes runtime packages can be found here.
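
If a self-managed (non-cloud) cluster is being built, a typical bootstrap sequence uses kubeadm. The commands below are a minimal sketch, assuming the Docker and Kubernetes packages are already installed, with placeholders for values specific to your environment:

# On the control-plane node (the pod network CIDR is an example value):
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

# Make kubectl usable for the current user:
mkdir -p $HOME/.kube && sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config

# Install a pod network add-on (CNI plugin) of your choice, then join each worker
# node using the token printed by "kubeadm init":
kubectl apply -f <pod-network-add-on.yaml>
sudo kubeadm join <control-plane-host>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>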

52.2.3.4 Deploy External Dependencies

The database, message broker, reverse proxy, and load balancer do not need to be managed by Kubernetes to work with a clustered Smile CDR. As such, these components can be deployed separately.

Regardless of how the external components are deployed, note the IP addresses, hostnames, and port numbers needed to connect to the database and message broker. This information will be needed later when configuring Smile CDR.
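
For a development or small test environment, the database and message broker could, for example, be run as standalone Docker containers. The images, credentials, and ports below are assumptions; whatever is used must match the DB_* and ACTIVEMQ_* values supplied to Smile CDR later:

# PostgreSQL (official image); the database name, user, and password are examples.
docker run -d --name cdr-postgres -p 5432:5432 \
    -e POSTGRES_DB=cdr -e POSTGRES_USER=cdr -e POSTGRES_PASSWORD=SmileCDR postgres

# ActiveMQ on the default broker port 61616 (image name is an assumption).
docker run -d --name cdr-activemq -p 61616:61616 rmohr/activemq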

52.2.4 Configuring Kubernetes

A number of Kubernetes objects need to be configured in order for Kubernetes to manage a Smile CDR cluster, and to ensure that external systems are able to interact with the Smile CDR cluster. These include the following:

  • Services – Enable Smile CDR API and ports to be visible outside of the cluster.
  • ConfigMaps – (optional) Define configuration that will overwrite the default cdr-config-Master.properties and logback.xml files at startup time.
  • Deployments – Define how Smile CDR instances will be deployed as Kubernetes pods.

The following sections provide more detailed explanations and examples for each type of Kubernetes object.

52.2.5 Kubernetes and Smile CDR Database Connections

One of the features of Kubernetes is that it can automatically add and remove instances of an application in a cluster. When removing an instance, Kubernetes first sends a shutdown request to the application and, if the application has not stopped after a specified period of time (30 seconds by default), forcefully terminates it.

Normally, 30 seconds is more than enough time for Smile CDR to shut down gracefully. However, in some cases (e.g. when a request or scheduled task is waiting on a long-running query to finish), Smile CDR may require more than 30 seconds to complete its shutdown. If Smile CDR is forcefully terminated in these cases, database connections may not be closed properly.

To avoid potential problems with improperly closed database connections, it is recommended that Kubernetes pods be configured with a terminationGracePeriodSeconds value of at least 30 seconds plus the maximum Default Query Timeout value of any Cluster Manager or Persistence module configured in the pod. See the database configuration documentation for more information about the Default Query Timeout setting. See the Deployment Definitions section for Kubernetes configuration samples with the terminationGracePeriodSeconds setting configured.
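
For example, if the largest Default Query Timeout among the Cluster Manager and Persistence modules in a pod is 60 seconds (an assumed value for illustration), the pod spec in the Deployment definition should allow at least 90 seconds:

spec:
  template:
    spec:
      # 60-second Default Query Timeout + 30 seconds = 90 seconds
      terminationGracePeriodSeconds: 90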

52.2.6 Service Definitions

Each Smile CDR process that implements externally visible ports requires a Kubernetes Service with a spec of type NodePort so that the ports are accessible from outside the cluster.

A couple of things to note about the service definitions:

  • Kubernetes restricts the port numbers that can be exposed outside of the cluster to the range 30000-32767. As such, in the service definitions it will be necessary to map the configured Smile CDR port numbers to external port numbers allowed within this range.
  • The .spec.selector element value must match the .spec.template.metadata.labels element value in the corresponding Deployment definition.

Option 1 – Where a single process configuration is used, the Kubernetes Service definition should look something like the following:

apiVersion: v1
kind: Service
metadata:
  name: smilecdr
spec:
  type: NodePort
  ports:
  - name: "80"
    port: 80
    nodePort: 30001
    targetPort: 80
  - name: "443"
    port: 443
    nodePort: 30002
    targetPort: 443
  - name: "8000"
    port: 8000
    nodePort: 30003
    targetPort: 8000
  - name: "9000"
    port: 9000
    nodePort: 30004
    targetPort: 9000
  - name: "9100"
    port: 9100
    nodePort: 30005
    targetPort: 9100
  - name: "8001"
    port: 8001
    nodePort: 30006
    targetPort: 8001
  - name: "9200"
    port: 9200
    nodePort: 30007
    targetPort: 9200
  - name: "9201"
    port: 9201
    nodePort: 30008
    targetPort: 9201
  selector:
    app: smilecdr

Option 2 – Where multiple process configurations are used, the Kubernetes Service definitions should look something like the following:

apiVersion: v1
kind: Service
metadata:
  name: smilecdr-mgmt
spec:
  type: NodePort
  ports:
  - name: "80"
    port: 80
    nodePort: 30001
    targetPort: 80
  - name: "443"
    port: 443
    nodePort: 30002
    targetPort: 443
  - name: "9000"
    port: 9000
    nodePort: 30004
    targetPort: 9000
  - name: "9100"
    port: 9100
    nodePort: 30005
    targetPort: 9100
  selector:
    app: smilecdr-mgmt
---
apiVersion: v1
kind: Service
metadata:
  name: smilecdr-listener
spec:
  type: NodePort
  ports:
  - name: "8000"
    port: 8000
    nodePort: 30003
    targetPort: 8000
  - name: "8001"
    port: 8001
    nodePort: 30006
    targetPort: 8001
  selector:
    app: smilecdr-listener
---
apiVersion: v1
kind: Service
metadata:
  name: smilecdr-smart
spec:
  type: NodePort
  ports:
  - name: "9200"
    port: 9200
    nodePort: 30007
    targetPort: 9200
  - name: "9201"
    port: 9201
    nodePort: 30008
    targetPort: 9201
  selector:
    app: smilecdr-smart

52.2.7 ConfigMap Definitions

Using ConfigMap definitions is the recommended approach to customizing the Smile CDR cdr-config-Master.properties and logback.xml configuration files in a Kubernetes deployment, as it avoids the need to build and maintain additional Docker images. By default, the configuration files are based on the initial configuration described in the Installing Smile CDR page located here. The default configuration can be used for deploying a single instance of Smile CDR in Kubernetes but cannot be scaled. If the default configuration is adequate, or if custom Docker images are going to be used, this step can be skipped.
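
Note that an equivalent ConfigMap can also be generated directly from existing configuration files rather than embedding their contents in YAML. For example (assuming the files are in the current directory and named as shown):

kubectl create configmap smilecdr-config \
    --from-file=cdr-config-Master.properties \
    --from-file=logback.xml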

Option 1 – Where a single process configuration is used, the Kubernetes ConfigMap definition should look something like the following:

apiVersion: v1
kind: ConfigMap
metadata:
  name: smilecdr-config
  labels:
    app: smilecdr-config
data:
  cdr-config-Master.properties: |
    ################################################################################
    # Node Configuration
    ################################################################################
    node.id                                                                                     =smilecdr_process
 
    ################################################################################
    # Cluster Manager Configuration
    ################################################################################
    module.clustermgr.type                                                                      =CLUSTER_MGR
    # Valid options include DERBY_EMBEDDED, MYSQL_5_7, MARIADB_10_1, POSTGRES_9_4, ORACLE_12C, MSSQL_2012
    module.clustermgr.config.db.driver                                                          =POSTGRES_9_4
    module.clustermgr.config.db.url                                                             =jdbc:postgresql://#{env['DB_HOST']}:5432/cdr
    module.clustermgr.config.db.hibernate.showsql                                               =false
    module.clustermgr.config.db.username                                                        =#{env['DB_USER']}
    module.clustermgr.config.db.password                                                        =#{env['DB_PASSWORD']}
    module.clustermgr.config.messagebroker.type                                                 =REMOTE_ACTIVEMQ
    module.clustermgr.config.messagebroker.address                                              =tcp://#{env['ACTIVEMQ_HOST']}:61616
    module.clustermgr.config.messagebroker.username                                             =#{env['ACTIVEMQ_USERNAME']}
    module.clustermgr.config.messagebroker.password                                             =#{env['ACTIVEMQ_PASSWORD']}

    ################################################################################
    # Database Configuration
    ################################################################################
    module.persistence.type                                                                     =PERSISTENCE_R4
    module.persistence.config.db.driver                                                         =POSTGRES_9_4
    module.persistence.config.db.url                                                            =jdbc:postgresql://#{env['DB_HOST']}:5432/cdr
    module.persistence.config.db.hibernate.showsql                                              =false
    module.persistence.config.db.username                                                       =#{env['DB_USER']}
    module.persistence.config.db.password                                                       =#{env['DB_PASSWORD']}
    module.persistence.config.db.hibernate_search.directory                                     =derby_database/lucene_fhir_persistence
    module.persistence.config.dao_config.expire_search_results_after_minutes                    =60
    module.persistence.config.dao_config.allow_multiple_delete.enabled                          =false
    module.persistence.config.dao_config.allow_inline_match_url_references.enabled              =false
    module.persistence.config.dao_config.allow_external_references.enabled                      =false

    ################################################################################
    # Local Storage Inbound Security
    ################################################################################
    module.local_security.type                                                                  =SECURITY_IN_LOCAL
    # This superuser will be automatically seeded if it does not already exist
    module.local_security.config.seed.username                                                  =#{env['LOCAL_SEC_USERNAME']}
    module.local_security.config.seed.password                                                  =#{env['LOCAL_SEC_PASSWORD']}

    ################################################################################
    # ENDPOINT: FHIR Service
    ################################################################################
    module.fhir_endpoint.type                                                                   =ENDPOINT_FHIR_REST_R4
    module.fhir_endpoint.requires.PERSISTENCE_R4                                                =persistence
    module.fhir_endpoint.requires.SECURITY_IN_UP                                                =local_security
    module.fhir_endpoint.config.port                                                            =8000
    module.fhir_endpoint.config.threadpool.min                                                  =2
    module.fhir_endpoint.config.threadpool.max                                                  =10
    module.fhir_endpoint.config.browser_highlight.enabled                                       =true
    module.fhir_endpoint.config.cors.enable                                                     =true
    module.fhir_endpoint.config.default_encoding                                                =JSON
    module.fhir_endpoint.config.default_pretty_print                                            =true
    module.fhir_endpoint.config.base_url.fixed                                                  =http://localhost:8000
    module.fhir_endpoint.config.tls.enabled                                                     =false
    module.fhir_endpoint.config.anonymous.access.enabled                                        =true
    module.fhir_endpoint.config.security.http.basic.enabled                                     =true
    module.fhir_endpoint.config.request_validating.enabled                                      =false
    module.fhir_endpoint.config.request_validating.fail_on_severity                             =ERROR
    module.fhir_endpoint.config.request_validating.tags.enabled                                 =false
    module.fhir_endpoint.config.request_validating.response_headers.enabled                     =false
    module.fhir_endpoint.config.request_validating.require_explicit_profile_definition.enabled  =false

    ################################################################################
    # ENDPOINT: JSON Admin Services
    ################################################################################
    module.admin_json.type                                                                      =ADMIN_JSON
    module.admin_json.requires.SECURITY_IN_UP                                                   =local_security
    module.admin_json.config.port                                                               =9000
    module.admin_json.config.tls.enabled                                                        =false
    module.admin_json.config.anonymous.access.enabled                                           =true
    module.admin_json.config.security.http.basic.enabled                                        =true

    ################################################################################
    # ENDPOINT: Web Admin
    ################################################################################
    module.admin_web.type                                                                       =ADMIN_WEB
    module.admin_web.requires.SECURITY_IN_UP                                                    =local_security
    module.admin_web.config.port                                                                =9100
    module.admin_web.config.tls.enabled                                                         =false

    ################################################################################
    # ENDPOINT: FHIRWeb Console
    ################################################################################
    module.fhirweb_endpoint.type                                                                =ENDPOINT_FHIRWEB
    module.fhirweb_endpoint.requires.SECURITY_IN_UP                                             =local_security
    module.fhirweb_endpoint.requires.ENDPOINT_FHIR                                              =fhir_endpoint
    module.fhirweb_endpoint.config.port                                                         =8001
    module.fhirweb_endpoint.config.threadpool.min                                               =2
    module.fhirweb_endpoint.config.threadpool.max                                               =10
    module.fhirweb_endpoint.config.tls.enabled                                                  =false
    module.fhirweb_endpoint.config.anonymous.access.enabled                                     =false

    ################################################################################
    # SMART Security
    ################################################################################
    module.smart_auth.type                                                                      =ca.cdr.security.out.smart.module.SecurityOutSmartCtxConfig
    module.smart_auth.requires.CLUSTERMGR                                                       =clustermgr
    module.smart_auth.requires.SECURITY_IN_UP                                                   =local_security
    module.smart_auth.config.port                                                               =9200
    module.smart_auth.config.openid.signing.jwks_file                                           =classpath:/smilecdr-demo.jwks
    module.smart_auth.config.issuer.url                                                         =http://localhost:9200
    module.smart_auth.config.tls.enabled                                                        =false

    ################################################################################
    # SMART Demo Apps
    ################################################################################
    module.smart_app_demo_host.type                                                             =ca.cdr.smartappshost.module.SmartAppsHostCtxConfig
    module.smart_app_demo_host.requires.CLUSTERMGR                                              =clustermgr
    module.smart_app_demo_host.config.port                                                      =9201

    ################################################################################
    # Subscription
    ################################################################################
    module.subscription.type                                                                    =SUBSCRIPTION_MATCHER_R4
    module.subscription.requires.PERSISTENCE_R4                                                 =persistence
  logback.xml: |
    <!--
    Smile CDR uses the Logback framework for logging. For details on configuring this
    file, see:
    https://smilecdr.com/docs/getting_started/system_logging.html
    -->
    <configuration scan="true" scanPeriod="30 seconds">

    	<!--
    	LOG: CONSOLE
    	We write INFO-level events to the console. This is not generally
    	visible during normal operation, unless the application is run using
    	"bin/smilecdr run".
    	-->
    	<appender name="STDOUT_SYNC" class="ch.qos.logback.core.ConsoleAppender">
    		<filter class="ch.qos.logback.classic.filter.ThresholdFilter">
    			<level>INFO</level>
    		</filter>
    		<encoder>
    			<pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n${log.stackfilter.pattern}</pattern>
    		</encoder>
    	</appender>
    	<appender name="STDOUT" class="ch.qos.logback.classic.AsyncAppender">
    		<includeCallerData>false</includeCallerData>
    		<appender-ref ref="STDOUT_SYNC" />
    	</appender>

    	<!--
    	LOG: smile-startup.log
    	This file contains log entries written when the application is starting up
    	and shutting down. No other data is written to this file.
    	-->
    	<appender name="STARTUP" class="ch.qos.logback.core.rolling.RollingFileAppender">
    		<filter class="ch.qos.logback.classic.filter.ThresholdFilter">
    			<level>INFO</level>
    		</filter>
    		<file>${smile.basedir}/log/smile-startup.log</file>
    		<rollingPolicy class="ch.qos.logback.core.rolling.FixedWindowRollingPolicy">
    			<fileNamePattern>${smile.basedir}/log/smile-startup.log.%i.gz</fileNamePattern>
    			<minIndex>1</minIndex>
    			<maxIndex>9</maxIndex>
    		</rollingPolicy>
    		<triggeringPolicy class="ch.qos.logback.core.rolling.SizeBasedTriggeringPolicy">
    			<maxFileSize>5MB</maxFileSize>
    		</triggeringPolicy>
    		<encoder>
    			<pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n${log.stackfilter.pattern}</pattern>
    		</encoder>
    	</appender>

    	<!--
    	LOG: smile.log
    	We create a file called smile.log that will have (by default) all INFO level
    	messages. This file is written asynchronously using a blocking queue for better
    	performance.
    	-->
    	<appender name="RUNTIME_SYNC" class="ch.qos.logback.core.rolling.RollingFileAppender">
    		<filter class="ch.qos.logback.classic.filter.ThresholdFilter">
    			<level>INFO</level>
    		</filter>
    		<file>${smile.basedir}/log/smile.log</file>
    		<rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
    			<fileNamePattern>${smile.basedir}/log/smile.%d{yyyy-MM-dd}.log.gz</fileNamePattern>
    			<maxHistory>30</maxHistory>
    		</rollingPolicy>
    		<encoder>
    			<pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n${log.stackfilter.pattern}</pattern>
    		</encoder>
    	</appender>
    	<appender name="RUNTIME" class="ch.qos.logback.classic.AsyncAppender">
    		<discardingThreshold>0</discardingThreshold>
    		<includeCallerData>false</includeCallerData>
    		<appender-ref ref="RUNTIME_SYNC" />
    	</appender>

    	<!--
    	LOG: smile-error.log
    	This file contains only errors generated during normal operation.
    	-->
    	<appender name="ERROR" class="ch.qos.logback.core.rolling.RollingFileAppender">
    		<filter class="ch.qos.logback.classic.filter.ThresholdFilter">
    			<level>ERROR</level>
    		</filter>
    		<file>${smile.basedir}/log/smile-error.log</file>
    		<rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
    			<fileNamePattern>${smile.basedir}/log/smile-error.%d{yyyy-MM-dd}.log.gz</fileNamePattern>
    			<maxHistory>30</maxHistory>
    		</rollingPolicy>
    		<encoder>
    			<pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{36} [%file:%line] %msg%n</pattern>
    		</encoder>
    	</appender>

    	<!-- 
    	The startup log only gets messages from ca.cdr.app.App, which
    	logs startup and shutdown events
    	-->
    	<logger name="ca.cdr.app.App" additivity="false">
    		<appender-ref ref="STARTUP"/>
    		<appender-ref ref="STDOUT" />
    		<appender-ref ref="RUNTIME" />
    		<appender-ref ref="ERROR" />
    	</logger>

    	<appender name="SECURITY_TROUBLESHOOTING" class="ch.qos.logback.core.rolling.RollingFileAppender">
    		<filter class="ch.qos.logback.classic.filter.ThresholdFilter"><level>DEBUG</level></filter>
    		<file>${smile.basedir}/log/security-troubleshooting.log</file>
    		<rollingPolicy class="ch.qos.logback.core.rolling.FixedWindowRollingPolicy">
    			<fileNamePattern>${smile.basedir}/log/security-troubleshooting.log.%i.gz</fileNamePattern>
    			<minIndex>1</minIndex>
    			<maxIndex>9</maxIndex>
    		</rollingPolicy>
    		<triggeringPolicy class="ch.qos.logback.core.rolling.SizeBasedTriggeringPolicy">
    			<maxFileSize>5MB</maxFileSize>
    		</triggeringPolicy>
    		<encoder>
    			<pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n${log.stackfilter.pattern}</pattern>
    		</encoder>
    	</appender>
    	<logger name="ca.cdr.log.security_troubleshooting" additivity="false" level="DEBUG">
    		<appender-ref ref="SECURITY_TROUBLESHOOTING"/>
    	</logger>

    	<!--
    	Send all remaining logs to a few places 
    	-->
    	<root level="INFO">
    		<appender-ref ref="STDOUT" />
    		<appender-ref ref="RUNTIME" />
    		<appender-ref ref="ERROR" />
    	</root>

    </configuration>

Option 2 – Where multiple process configurations are used, the Kubernetes ConfigMap definitions should look something like the following (note that logback.xml is unchanged):

apiVersion: v1
kind: ConfigMap
metadata:
  name: smilecdr-config
  labels:
    app: smilecdr-config
data:
  cdr-config-Master_mgmt.properties: |
    ################################################################################
    # Node Configuration
    ################################################################################
    node.id                                                                                     =smilecdr_mgmt
    
    ################################################################################
    # Cluster Manager Configuration
    ################################################################################
    module.clustermgr.type                                                                      =CLUSTER_MGR
    # Valid options include DERBY_EMBEDDED, MYSQL_5_7, MARIADB_10_1, POSTGRES_9_4, ORACLE_12C
    module.clustermgr.config.db.driver                                                          =POSTGRES_9_4
    module.clustermgr.config.db.url                                                             =jdbc:postgresql://#{env['DB_HOST']}:5432/cdr
    module.clustermgr.config.db.hibernate.showsql                                               =false
    module.clustermgr.config.db.username                                                        =#{env['DB_USER']}
    module.clustermgr.config.db.password                                                        =#{env['DB_PASSWORD']}

    ################################################################################
    # Local Storage Inbound Security
    ################################################################################
    module.local_security.type                                                                  =SECURITY_IN_LOCAL
    # This superuser will be automatically seeded if it does not already exist
    module.local_security.config.seed.username                                                  =#{env['LOCAL_SEC_USERNAME']}
    module.local_security.config.seed.password                                                  =#{env['LOCAL_SEC_PASSWORD']}

    ################################################################################
    # ENDPOINT: JSON Admin Services
    ################################################################################
    module.admin_json.type                                                                      =ADMIN_JSON
    module.admin_json.requires.SECURITY_IN_UP                                                   =local_security
    module.admin_json.config.port                                                               =9000
    module.admin_json.config.tls.enabled                                                        =false
    module.admin_json.config.anonymous.access.enabled                                           =true
    module.admin_json.config.security.http.basic.enabled                                        =true

    ################################################################################
    # ENDPOINT: Web Admin
    ################################################################################
    module.admin_web.type                                                                       =ADMIN_WEB
    module.admin_web.requires.SECURITY_IN_UP                                                    =local_security
    module.admin_web.config.port                                                                =9100
    module.admin_web.config.tls.enabled                                                         =false
  cdr-config-Master_smart.properties: |
    ################################################################################
    # Node Configuration
    ################################################################################
    node.id                                                                                     =smilecdr_smart
    
    ################################################################################
    # Cluster Manager Configuration
    ################################################################################
    module.clustermgr.type                                                                      =CLUSTER_MGR
    # Valid options include DERBY_EMBEDDED, MYSQL_5_7, MARIADB_10_1, POSTGRES_9_4, ORACLE_12C
    module.clustermgr.config.db.driver                                                          =POSTGRES_9_4
    module.clustermgr.config.db.url                                                             =jdbc:postgresql://#{env['DB_HOST']}:5432/cdr
    module.clustermgr.config.db.hibernate.showsql                                               =false
    module.clustermgr.config.db.username                                                        =#{env['DB_USER']}
    module.clustermgr.config.db.password                                                        =#{env['DB_PASSWORD']}

    ################################################################################
    # Local Storage Inbound Security
    ################################################################################
    module.local_security.type                                                                  =SECURITY_IN_LOCAL
    # This superuser will be automatically seeded if it does not already exist
    module.local_security.config.seed.username                                                  =#{env['LOCAL_SEC_USERNAME']}
    module.local_security.config.seed.password                                                  =#{env['LOCAL_SEC_PASSWORD']}

    ################################################################################
    # SMART Security
    ################################################################################
    module.smart_auth.type                                                                      =ca.cdr.security.out.smart.module.SecurityOutSmartCtxConfig
    module.smart_auth.requires.CLUSTERMGR                                                       =clustermgr
    module.smart_auth.requires.SECURITY_IN_UP                                                   =local_security
    module.smart_auth.config.port                                                               =9200
    module.smart_auth.config.openid.signing.jwks_file                                           =classpath:/smilecdr-demo.jwks
    module.smart_auth.config.issuer.url                                                         =http://localhost:9200
    module.smart_auth.config.tls.enabled                                                        =false

    ################################################################################
    # SMART Demo Apps
    ################################################################################
    module.smart_app_demo_host.requires.CLUSTERMGR                                              =clustermgr
    module.smart_app_demo_host.type                                                             =ca.cdr.smartappshost.module.SmartAppsHostCtxConfig
    module.smart_app_demo_host.config.port                                                      =9201
  cdr-config-Master_subscription.properties: |
    ################################################################################
    # Node Configuration
    ################################################################################
    node.id                                                                                     =smilecdr_subscription
    
    ################################################################################
    # Cluster Manager Configuration
    ################################################################################
    module.clustermgr.type                                                                      =CLUSTER_MGR
    # Valid options include DERBY_EMBEDDED, MYSQL_5_7, MARIADB_10_1, POSTGRES_9_4, ORACLE_12C
    module.clustermgr.config.db.driver                                                          =POSTGRES_9_4
    module.clustermgr.config.db.url                                                             =jdbc:postgresql://#{env['DB_HOST']}:5432/cdr
    module.clustermgr.config.db.hibernate.showsql                                               =false
    module.clustermgr.config.db.username                                                        =#{env['DB_USER']}
    module.clustermgr.config.db.password                                                        =#{env['DB_PASSWORD']}
    module.clustermgr.config.messagebroker.type                                                 =REMOTE_ACTIVEMQ
    module.clustermgr.config.messagebroker.address                                              =tcp://#{env['ACTIVEMQ_HOST']}:61616
    module.clustermgr.config.messagebroker.username                                             =#{env['ACTIVEMQ_USERNAME']}
    module.clustermgr.config.messagebroker.password                                             =#{env['ACTIVEMQ_PASSWORD']}

    ################################################################################
    # Database Configuration
    ################################################################################
    module.persistence.type                                                                     =PERSISTENCE_R4
    module.persistence.config.db.driver                                                         =POSTGRES_9_4
    module.persistence.config.db.url                                                            =jdbc:postgresql://#{env['DB_HOST']}:5432/cdr
    module.persistence.config.db.hibernate.showsql                                              =false
    module.persistence.config.db.username                                                       =#{env['DB_USER']}
    module.persistence.config.db.password                                                       =#{env['DB_PASSWORD']}
    module.persistence.config.db.hibernate_search.directory                                     =database/lucene_fhir_persistence
    module.persistence.config.dao_config.expire_search_results_after_minutes                    =60
    module.persistence.config.dao_config.allow_multiple_delete.enabled                          =false
    module.persistence.config.dao_config.allow_inline_match_url_references.enabled              =false
    module.persistence.config.dao_config.allow_external_references.enabled                      =false

    ################################################################################
    # Local Storage Inbound Security
    ################################################################################
    module.local_security.type                                                                  =SECURITY_IN_LOCAL
    # This superuser will be automatically seeded if it does not already exist
    module.local_security.config.seed.username                                                  =#{env['LOCAL_SEC_USERNAME']}
    module.local_security.config.seed.password                                                  =#{env['LOCAL_SEC_PASSWORD']}

    ################################################################################
    # Subscription
    ################################################################################
    module.subscription.type                                                                    =SUBSCRIPTION_MATCHER_R4
    module.subscription.requires.PERSISTENCE_R4                                                 =persistence
  cdr-config-Master_listener.properties: |
    ################################################################################
    # Node Configuration
    ################################################################################
    node.id                                                                                     =smilecdr_listener
    
    ################################################################################
    # Cluster Manager Configuration
    ################################################################################
    module.clustermgr.type                                                                      =CLUSTER_MGR
    # Valid options include DERBY_EMBEDDED, MYSQL_5_7, MARIADB_10_1, POSTGRES_9_4, ORACLE_12C
    module.clustermgr.config.db.driver                                                          =POSTGRES_9_4
    module.clustermgr.config.db.url                                                             =jdbc:postgresql://#{env['DB_HOST']}:5432/cdr
    module.clustermgr.config.db.hibernate.showsql                                               =false
    module.clustermgr.config.db.username                                                        =#{env['DB_USER']}
    module.clustermgr.config.db.password                                                        =#{env['DB_PASSWORD']}

    ################################################################################
    # Database Configuration
    ################################################################################
    module.persistence.type                                                                     =PERSISTENCE_R4
    module.persistence.config.db.driver                                                         =POSTGRES_9_4
    module.persistence.config.db.url                                                            =jdbc:postgresql://#{env['DB_HOST']}:5432/cdr
    module.persistence.config.db.hibernate.showsql                                              =false
    module.persistence.config.db.username                                                       =#{env['DB_USER']}
    module.persistence.config.db.password                                                       =#{env['DB_PASSWORD']}
    module.persistence.config.db.hibernate_search.directory                                     =database/lucene_fhir_persistence
    module.persistence.config.dao_config.expire_search_results_after_minutes                    =60
    module.persistence.config.dao_config.allow_multiple_delete.enabled                          =false
    module.persistence.config.dao_config.allow_inline_match_url_references.enabled              =false
    module.persistence.config.dao_config.allow_external_references.enabled                      =false

    ################################################################################
    # Local Storage Inbound Security
    ################################################################################
    module.local_security.type                                                                  =SECURITY_IN_LOCAL
    # This superuser will be automatically seeded if it does not already exist
    module.local_security.config.seed.username                                                  =#{env['LOCAL_SEC_USERNAME']}
    module.local_security.config.seed.password                                                  =#{env['LOCAL_SEC_PASSWORD']}

    ################################################################################
    # ENDPOINT: FHIR Service
    ################################################################################
    module.fhir_endpoint.type                                                                   =ENDPOINT_FHIR_REST_R4
    module.fhir_endpoint.requires.PERSISTENCE_R4                                                =persistence
    module.fhir_endpoint.requires.SECURITY_IN_UP                                                =local_security
    module.fhir_endpoint.config.port                                                            =8000
    module.fhir_endpoint.config.threadpool.min                                                  =2
    module.fhir_endpoint.config.threadpool.max                                                  =10
    module.fhir_endpoint.config.browser_highlight.enabled                                       =true
    module.fhir_endpoint.config.cors.enable                                                     =true
    module.fhir_endpoint.config.default_encoding                                                =JSON
    module.fhir_endpoint.config.default_pretty_print                                            =true
    module.fhir_endpoint.config.base_url.fixed                                                  =http://localhost:8000
    module.fhir_endpoint.config.tls.enabled                                                     =false
    module.fhir_endpoint.config.anonymous.access.enabled                                        =true
    module.fhir_endpoint.config.security.http.basic.enabled                                     =true
    module.fhir_endpoint.config.request_validating.enabled                                      =false
    module.fhir_endpoint.config.request_validating.fail_on_severity                             =ERROR
    module.fhir_endpoint.config.request_validating.tags.enabled                                 =false
    module.fhir_endpoint.config.request_validating.response_headers.enabled                     =false
    module.fhir_endpoint.config.request_validating.require_explicit_profile_definition.enabled  =false

    ################################################################################
    # ENDPOINT: FHIRWeb Console
    ################################################################################
    module.fhirweb_endpoint.type                                                                =ENDPOINT_FHIRWEB
    module.fhirweb_endpoint.requires.SECURITY_IN_UP                                             =local_security
    module.fhirweb_endpoint.requires.ENDPOINT_FHIR                                              =fhir_endpoint
    module.fhirweb_endpoint.config.port                                                         =8001
    module.fhirweb_endpoint.config.threadpool.min                                               =2
    module.fhirweb_endpoint.config.threadpool.max                                               =10
    module.fhirweb_endpoint.config.tls.enabled                                                  =false
    module.fhirweb_endpoint.config.anonymous.access.enabled                                     =false
  logback.xml: |
    [SAME AS OPTION 1]

52.2.8 Deployment Definitions

Each Smile CDR process being deployed needs a Kubernetes Deployment definition. The Deployment definition defines most of the container-level settings needed to launch Smile CDR.

Note:

  • The examples below assume that a ConfigMap definition is being used to manage configuration. If ConfigMap is not being used, exclude the elements listed below in the Deployment definition examples:
    • The .spec.template.spec.containers.command element for the smilecdr* container definitions.
    • The .spec.template.spec.containers.volumeMounts element for config-map.
    • The .spec.template.spec.volumes element for config-map.
  • The .spec.template.spec.containers.command element for the smilecdr* container definitions overrides the CMD instruction in the Docker image; as such, when it is included, /home/smile/smilecdr/bin/smilecdr run must be its final command instruction.
  • When deploying a new Smile CDR cluster (i.e. with an empty database), set the .spec.replicas parameter initially to 1 to avoid problems due to multiple processes trying to update the database at the same time. Once the configuration has been loaded to the database, the number of replicas can be increased in the definition.
  • The examples below include a number of env entries for values that will likely differ across otherwise similar environments, such as hostnames (e.g. for the database and ActiveMQ) and credentials. Environment variables can also be used to override variables normally defined by the setenv script, including JVMARGS and WATCHJVMARGS (see the sketch after this list).
  • In the examples below, a volume is defined to permanently store Smile CDR log files. For the purposes of this example, the volume is mapped to a folder, smilecdr-logs, on the local physical host. Although this approach is okay for development and small-scale testing, it is not recommended for production environments for reasons of scalability and security.
  • If deploying to OpenShift see Considerations When Deploying Smile CDR Using OpenShift for additional information and configuration steps specific to OpenShift deployments.
  • The examples below assume that the Smile CDR Docker image has been imported into a local Docker repository on each of the Kubernetes worker nodes that will be used. To load the Smile CDR image file on a given environment use the appropriate Docker command, e.g.:
    docker image load --input="/path/to/smilecdr-2019.11.R01-container.tar.bz2"
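
As a minimal sketch, a JVMARGS entry could be added alongside the other env values in a container definition to override the JVM settings normally defined by the setenv script (the heap values are placeholders only):

        env:
        # Example only: overrides the JVM options normally set by the setenv script.
        - name: JVMARGS
          value: "-Xmx4g -Xms4g"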
    

Option 1 – Where a single process configuration is used, the Kubernetes Deployment definition should look something like the following:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: smilecdr
spec:
  selector:
    matchLabels:
      app: smilecdr # has to match .spec.template.metadata.labels
  # Note: It is recommended that the number of replicas be set initially to 1 until
  # the first process has loaded into the DB. Otherwise subsequent processes may fail
  # to initialize.
  replicas: 1
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: smilecdr # has to match .spec.selector.matchLabels
    spec:
      containers:
      - image: smilecdr
        name: smilecdr
        imagePullPolicy: IfNotPresent
        command:
        - bash
        - "-c"
        - |
          # Replace the properties files with the configurations defined in ConfigMap.
          cp /mnt/config-map/cdr-config-Master.properties /home/smile/smilecdr/classes/cdr-config-Master.properties
          cp /mnt/config-map/logback.xml /home/smile/smilecdr/classes/logback.xml
          /home/smile/smilecdr/bin/smilecdr run
        ports:
        - containerPort: 80
        - containerPort: 443
        - containerPort: 8000
        - containerPort: 9000
        - containerPort: 9100
        - containerPort: 8001
        - containerPort: 9200
        - containerPort: 9201
        volumeMounts:
        - name: config-map
          mountPath: /mnt/config-map
        - name: logs
          mountPath: /home/smile/smilecdr/log
        env:
        - name: ACTIVEMQ_HOST
          value: 10.0.2.15
        - name: DB_HOST
          value: 10.0.2.15
        - name: DB_USER
          value: cdr
        - name: DB_PASSWORD
          value: SmileCDR
        - name: LOCAL_SEC_USERNAME
          value: admin
        - name: LOCAL_SEC_PASSWORD
          value: password
        - name: ACTIVEMQ_USERNAME
          value: admin
        - name: ACTIVEMQ_PASSWORD
          value: admin
      restartPolicy: Always
      # Set Termination Grace Period to maximum of Cluster Manager or Persistence module Default Query Timeout 
      # plus 30 seconds.
      terminationGracePeriodSeconds: 90
      volumes:
      - name: config-map
        configMap:
          name: smilecdr-config
      - name: logs
        hostPath:
          path: /smilecdr-logs
          type: DirectoryOrCreate

Option 2 – Where multiple process configurations are used, the Kubernetes Deployment definitions should look something like the snippets that follow. Note that it is recommended that each Deployment be defined in a separate file and deployed separately. This avoids conflicts that can potentially occur when multiple Smile CDR processes sharing a single database attempt to come online at the same time.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: smilecdr-mgmt
spec:
  selector:
    matchLabels:
      app: smilecdr-mgmt # has to match .spec.template.metadata.labels
  # Note: It is recommended that the number of replicas be set initially to 1 until
  # the first process has loaded into the DB. Otherwise subsequent processes may fail
  # to initialize.
  replicas: 1
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: smilecdr-mgmt # has to match .spec.selector.matchLabels
    spec:
      containers:
      - image: smilecdr
        name: smilecdr-mgmt
        imagePullPolicy: IfNotPresent
        command:
        - bash
        - "-c"
        - |
          # Replace the properties files with the configurations defined in ConfigMap.
          cp /mnt/config-map/cdr-config-Master_mgmt.properties /home/smile/smilecdr/classes/cdr-config-Master.properties
          cp /mnt/config-map/logback.xml /home/smile/smilecdr/classes/logback.xml
          /home/smile/smilecdr/bin/smilecdr run
        ports:
        - containerPort: 80
        - containerPort: 443
        - containerPort: 9000
        - containerPort: 9100
        volumeMounts:
        - name: config-map
          mountPath: /mnt/config-map
        - name: logs
          mountPath: /home/smile/smilecdr/log
        env:
        - name: DB_HOST
          value: 10.0.2.15
        - name: DB_USER
          value: cdr
        - name: DB_PASSWORD
          value: SmileCDR
        - name: LOCAL_SEC_USERNAME
          value: admin
        - name: LOCAL_SEC_PASSWORD
          value: password
      restartPolicy: Always
      # Set Termination Grace Period to Cluster Manager Default Query Timeout plus 30 seconds.
      terminationGracePeriodSeconds: 90
      volumes:
      - name: config-map
        configMap:
          name: smilecdr-config
      - name: logs
        hostPath:
          path: /smilecdr-logs
          type: DirectoryOrCreate
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: smilecdr-smart
spec:
  selector:
    matchLabels:
      app: smilecdr-smart # has to match .spec.template.metadata.labels
  # Note: It is recommended that the number of replicas be set initially to 1 until
  # the first process has loaded into the DB. Otherwise subsequent processes may fail
  # to initialize.
  replicas: 1
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: smilecdr-smart # has to match .spec.selector.matchLabels
    spec:
      containers:
      - image: smilecdr
        name: smilecdr-smart
        imagePullPolicy: IfNotPresent
        command:
        - bash
        - "-c"
        - |
          # Replace the properties files with the configurations defined in ConfigMap.
          cp /mnt/config-map/cdr-config-Master_smart.properties /home/smile/smilecdr/classes/cdr-config-Master.properties
          cp /mnt/config-map/logback.xml /home/smile/smilecdr/classes/logback.xml
          /home/smile/smilecdr/bin/smilecdr run
        ports:
        - containerPort: 9200
        - containerPort: 9201
        volumeMounts:
        - name: config-map
          mountPath: /mnt/config-map
        - name: logs
          mountPath: /home/smile/smilecdr/log
        env:
        - name: DB_HOST
          value: 10.0.2.15
        - name: DB_USER
          value: cdr
        - name: DB_PASSWORD
          value: SmileCDR
        - name: LOCAL_SEC_USERNAME
          value: admin
        - name: LOCAL_SEC_PASSWORD
          value: password
      restartPolicy: Always
      # Set Termination Grace Period to Default Query Timeout plus 30 seconds.
      terminationGracePeriodSeconds: 90
      volumes:
      - name: config-map
        configMap:
          name: smilecdr-config
      - name: logs
        hostPath:
          path: /smilecdr-logs
          type: DirectoryOrCreate
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: smilecdr-subscription
spec:
  selector:
    matchLabels:
      app: smilecdr-subscription # has to match .spec.template.metadata.labels
  # Note: It is recommended that the number of replicas be set initially to 1 until
  # the first process has loaded into the DB. Otherwise subsequent processes may fail
  # to initialize.
  replicas: 1
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: smilecdr-subscription # has to match .spec.selector.matchLabels
    spec:
      containers:
      - image: smilecdr
        name: smilecdr-subscription
        imagePullPolicy: IfNotPresent
        command:
        - bash
        - "-c"
        - |
          # Replace the properties files with the configurations defined in ConfigMap.
          cp /mnt/config-map/cdr-config-Master_subscription.properties /home/smile/smilecdr/classes/cdr-config-Master.properties
          cp /mnt/config-map/logback.xml /home/smile/smilecdr/classes/logback.xml
          /home/smile/smilecdr/bin/smilecdr run
        volumeMounts:
        - name: config-map
          mountPath: /mnt/config-map
        - name: logs
          mountPath: /home/smile/smilecdr/log
        env:
        - name: ACTIVEMQ_HOST
          value: 10.0.2.15
        - name: DB_HOST
          value: 10.0.2.15
        - name: DB_USER
          value: cdr
        - name: DB_PASSWORD
          value: SmileCDR
        - name: LOCAL_SEC_USERNAME
          value: admin
        - name: LOCAL_SEC_PASSWORD
          value: password
        - name: ACTIVEMQ_USERNAME
          value: admin
        - name: ACTIVEMQ_PASSWORD
          value: admin
      restartPolicy: Always
      # Set Termination Grace Period to maximum of Cluster Manager or Persistence module Default Query Timeout
      # plus 30 seconds.
      terminationGracePeriodSeconds: 90
      volumes:
      - name: config-map
        configMap:
          name: smilecdr-config
      - name: logs
        hostPath:
          path: /smilecdr-logs
          type: DirectoryOrCreate
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: smilecdr-listener
spec:
  selector:
    matchLabels:
      app: smilecdr-listener # has to match .spec.template.metadata.labels
  # Note: It is recommended that the number of replicas be set initially to 1 until
  # the first process has loaded into the DB. Otherwise subsequent processes may fail
  # to initialize.
  replicas: 1
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: smilecdr-listener # has to match .spec.selector.matchLabels
    spec:
      containers:
      - image: smilecdr
        name: smilecdr-listener
        imagePullPolicy: IfNotPresent
        command:
        - bash
        - "-c"
        - |
          # Replace the properties files with the configurations defined in ConfigMap.
          cp /mnt/config-map/cdr-config-Master_listener.properties /home/smile/smilecdr/classes/cdr-config-Master.properties
          cp /mnt/config-map/logback.xml /home/smile/smilecdr/classes/logback.xml
          /home/smile/smilecdr/bin/smilecdr run
        ports:
        - containerPort: 8000
        - containerPort: 8001
        volumeMounts:
        - name: logs
          mountPath: /home/smile/smilecdr/log
        - name: config-map
          mountPath: /mnt/config-map
        env:
        - name: DB_HOST
          value: 10.0.2.15
        - name: DB_USER
          value: cdr
        - name: DB_PASSWORD
          value: SmileCDR
        - name: LOCAL_SEC_USERNAME
          value: admin
        - name: LOCAL_SEC_PASSWORD
          value: password
      restartPolicy: Always
      # Set Termination Grace Period to maximum of Cluster Manager or Persistence module Default Query Timeout
      # plus 30 seconds.
      terminationGracePeriodSeconds: 90
      volumes:
      - name: config-map
        configMap:
          name: smilecdr-config
      - name: logs
        hostPath:
          path: /smilecdr-logs
          type: DirectoryOrCreate

52.2.9 Deploying Smile CDR in a Kubernetes Cluster

Use the kubectl command to deploy the Kubernetes objects in the following order (an example command sequence follows the notes below):

  1. Services

  2. ConfigMaps

  3. Deployments

Note: When deploying for the first time and no configuration exists in the database:

  • Deploy each Deployment one at a time to avoid errors.
  • Initially deploy only one instance of each process definition and then scale up as needed afterwards.
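
For illustration only, assuming the Service, ConfigMap, and Deployment definitions shown in this guide have been saved to local files (the file names below are placeholders; the Deployment names match those defined earlier), a first-time deployment might look like the following:

# 1. Services
kubectl apply -f smilecdr-services.yaml

# 2. ConfigMaps
kubectl apply -f smilecdr-configmap.yaml

# 3. Deployments - apply one at a time, waiting for each rollout to complete
#    before applying the next, so the first process can load the configuration into the database.
kubectl apply -f smilecdr-listener-deployment.yaml
kubectl rollout status deployment/smilecdr-listener

kubectl apply -f smilecdr-subscription-deployment.yaml
kubectl rollout status deployment/smilecdr-subscription

# Repeat for any remaining process definitions in your configuration.
# Once the configuration has been loaded into the database, scale up as needed.
kubectl scale deployment/smilecdr-listener --replicas=3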

52.2.10Configuring Reverse Proxy and Load Balancer

 

In order to access the Smile CDR services through the port numbers specified in the Smile CDR configuration, a reverse proxy server must be deployed to map the port numbers exposed by the Kubernetes services back to the port numbers configured in Smile CDR. In addition, if the cluster includes multiple Kubernetes nodes (i.e. multiple servers or virtual environments), a load balancer must be configured to distribute client requests across the nodes. An NGINX server can be used for both purposes. A sample NGINX configuration that supports both simple reverse proxying and simple load balancing across three nodes (srv1.example.com, srv2.example.com, and srv3.example.com) is shown below:

http {
    include       mime.types;
    default_type  application/octet-stream;

    sendfile        on;

    keepalive_timeout  65;

    # Define the "upstream" endpoints - the environments hosting clustered Smile CDR instances.
    # By default, NGINX will use Round Robin for balancing calls between the servers listed below.
    upstream kubernetes_fhir_endpoint {
        server srv1.example.com:30003;
        server srv2.example.com:30003;
        server srv3.example.com:30003;
    }
    upstream kubernetes_fhirweb_console {
        server srv1.example.com:30006;
        server srv2.example.com:30006;
        server srv3.example.com:30006;
    }
    upstream kubernetes_webadmin_console {
        server srv1.example.com:30005;
        server srv2.example.com:30005;
        server srv3.example.com:30005;
    }
    upstream kubernetes_jsonadmin_console {
        server srv1.example.com:30004;
        server srv2.example.com:30004;
        server srv3.example.com:30004;
    }
    upstream kubernetes_smart_oauth {
        server srv1.example.com:30007;
        server srv2.example.com:30007;
        server srv3.example.com:30007;
    }
    upstream kubernetes_smart_app {
        server srv1.example.com:30008;
        server srv2.example.com:30008;
        server srv3.example.com:30008;
    }

   #######################################
   # Redirect http to https
   #######################################
   server {
       server_name localhost;
       listen 80;
       return 301 https://$host$request_uri;
   }

   #######################################
   # FHIR Endpoint
   # -> Map port 8000 to 30003
   #######################################
   server {
       server_name localhost;
       listen 8000 ssl default_server;
       include secure.conf;
       location / {
           proxy_set_header    Host                        $host;
           proxy_set_header    X-Real-IP                   $remote_addr;
           proxy_set_header    X-Forwarded-For             $proxy_add_x_forwarded_for;
           proxy_set_header    X-Forwarded-Host   $host:8000;
           proxy_set_header    X-Forwarded-Server $host;
           proxy_set_header    X-Forwarded-Port   8000;
           proxy_set_header    X-Forwarded-Proto  https;
           proxy_pass          http://kubernetes_fhir_endpoint/;
       }
   }
   
   #######################################
   # FHIRWeb Console
   # -> Map port 8001 to 30006
   #######################################
   server {
       server_name localhost;
       listen 8001 ssl default_server;
       include secure.conf;
       location / {
           proxy_set_header    Host                        $host;
           proxy_set_header    X-Real-IP                   $remote_addr;
           proxy_set_header    X-Forwarded-For             $proxy_add_x_forwarded_for;
           proxy_set_header    X-Forwarded-Host   $host:8001;
           proxy_set_header    X-Forwarded-Server $host;
           proxy_set_header    X-Forwarded-Port   8001;
           proxy_set_header    X-Forwarded-Proto  https;
           proxy_pass          http://kubernetes_fhirweb_console/;
       }
   }
   
   
   #######################################
   # Web Admin Console
   # -> Map ports 443 and 9100 to 30005
   #######################################
   server {
       server_name localhost;
       listen 443 ssl default_server;
       include secure.conf;
       location / {
           proxy_set_header    Host                        $host;
           proxy_set_header    X-Real-IP                   $remote_addr;
           proxy_set_header    X-Forwarded-For             $proxy_add_x_forwarded_for;
           proxy_set_header    X-Forwarded-Host   $host:443;
           proxy_set_header    X-Forwarded-Server $host;
           proxy_set_header    X-Forwarded-Port   443;
           proxy_set_header    X-Forwarded-Proto  https;
           proxy_pass          http://kubernetes_webadmin_console/;
       }
   }
   server {
       server_name localhost;
       listen 9100 ssl default_server;
       include secure.conf;
       location / {
           proxy_set_header    Host                        $host;
           proxy_set_header    X-Real-IP                   $remote_addr;
           proxy_set_header    X-Forwarded-For             $proxy_add_x_forwarded_for;
           proxy_set_header    X-Forwarded-Host   $host:9100;
           proxy_set_header    X-Forwarded-Server $host;
           proxy_set_header    X-Forwarded-Port   9100;
           proxy_set_header    X-Forwarded-Proto  https;
           proxy_pass          http://kubernetes_webadmin_console/;
       }
   }
   
   
   #######################################
   # JSON Admin API
   # -> Map port 9000 to 30004
   #######################################
   server {
       server_name localhost;
       listen 9000 ssl default_server;
       include secure.conf;
       location / {
           proxy_set_header    Host                        $host;
           proxy_set_header    X-Real-IP                   $remote_addr;
           proxy_set_header    X-Forwarded-For             $proxy_add_x_forwarded_for;
           proxy_set_header    X-Forwarded-Host   $host:9000;
           proxy_set_header    X-Forwarded-Server $host;
           proxy_set_header    X-Forwarded-Port   9000;
           proxy_set_header    X-Forwarded-Proto  https;
           proxy_pass          http://kubernetes_jsonadmin_console/;
       }
   }
   
   #######################################
   # SMART OAuth2 / OpenID Connect Server
   # -> Map port 9200 to 30007
   #######################################
   server {
       server_name localhost;
       listen 9200 ssl default_server;
       include secure.conf;
       location / {
           proxy_set_header    Host                        $host;
           proxy_set_header    X-Real-IP                   $remote_addr;
           proxy_set_header    X-Forwarded-For             $proxy_add_x_forwarded_for;
           proxy_set_header    X-Forwarded-Host   $host:9200;
           proxy_set_header    X-Forwarded-Server $host;
           proxy_set_header    X-Forwarded-Port   9200;
           proxy_set_header    X-Forwarded-Proto  https;
           proxy_pass          http://kubernetes_smart_oauth/;
       }
   }
   
   #######################################
   # SMART App Host
   # -> Map port 9201 to 30008
   #######################################
   server {
       server_name localhost;
       listen 9201 ssl default_server;
       include secure.conf;
       location / {
           proxy_set_header    Host                        $host;
           proxy_set_header    X-Real-IP                   $remote_addr;
           proxy_set_header    X-Forwarded-For             $proxy_add_x_forwarded_for;
           proxy_set_header    X-Forwarded-Host   $host:9201;
           proxy_set_header    X-Forwarded-Server $host;
           proxy_set_header    X-Forwarded-Port   9201;
           proxy_set_header    X-Forwarded-Proto  https;
           proxy_pass          http://kubernetes_smart_app/;
       }
   }
}

The secure.conf file included by each server block above would define the SSL parameters required for secure connections, for example:

  • server_name
  • ssl_certificate
  • ssl_certificate_key
  • ssl_dhparam
  • ssl_ciphers
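
As a rough illustration only (the host name, certificate paths, and cipher suite below are placeholders, not values taken from this guide), a minimal secure.conf might contain:

# secure.conf - shared TLS settings included by each server block above.
# Substitute site-specific values for the host name, certificate paths, and ciphers.
server_name             smilecdr.example.com;
ssl_certificate         /etc/nginx/ssl/smilecdr.example.com.crt;
ssl_certificate_key     /etc/nginx/ssl/smilecdr.example.com.key;
ssl_dhparam             /etc/nginx/ssl/dhparam.pem;
ssl_ciphers             HIGH:!aNULL:!MD5;
# ssl_protocols is commonly added as well, e.g.:
ssl_protocols           TLSv1.2 TLSv1.3;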

The mime.types file referenced above defines the MIME types supported by NGINX.

52.2.11Considerations When Deploying Smile CDR Using OpenShift

 

OpenShift is a Kubernetes distribution developed and supported by Red Hat. It incorporates a number of additional security features that are part of Red Hat Enterprise Linux and, as such, may require additional configuration steps.

52.2.11.1Deploying Smile CDR in a Red Hat Enterprise Linux Container

The default Smile CDR Docker image is built using an open source Debian Linux image as its base. For sites that prefer a base image built on Red Hat Enterprise Linux, a RHEL-based Smile CDR Docker image can be built from the Smile CDR application tar file using a Dockerfile similar to the following:

# Use the RHEL implementation of OpenJDK 11 as the parent image
FROM openjdk/openjdk-11-rhel7

# Argument containing name and path of Smile CDR tar file.
ARG TARFILE

# Create directory /home/smile in the image and set this as working directory
WORKDIR /home/smile

# Extract Smile CDR tar contents to /home/smile folder
ADD ${TARFILE} ./

# Create a non-root user inside image to launch Smile CDR and update application permissions accordingly.
USER root
RUN useradd -m -d /home/smile -p SmileCDR -s /bin/bash -U docker
RUN chown -R docker:docker ./smilecdr
USER docker

# Command that will be executed when the Container is launched.
CMD ["/home/smile/smilecdr/bin/smilecdr", "run"]

After connecting to the Red Hat image registry, registry.redhat.io, the custom Docker image can then be built using a command similar to:

docker image build -f path/to/Dockerfile --build-arg TARFILE=./path/relative/to/context/smilecdr-2020.02.tar.gz --tag=smilecdr_rhel path/to/Docker/context 

Where:

  • -f specifies the path and name of the Dockerfile.
  • --build-arg sets the Dockerfile TARFILE argument to the path and name of the Smile CDR application tar file, relative to the Docker context folder. Note that the Smile CDR application tar file must be contained within the Docker context folder.
  • --tag specifies a tag or name for the new Smile CDR image.
  • The final argument is the path of the Docker context folder itself.
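
Once built, the image will typically need to be pushed to a container registry that the Kubernetes or OpenShift cluster can pull from. A minimal sketch, assuming a private registry at registry.example.com (a placeholder hostname):

# Confirm the image was built, then tag and push it to the cluster's registry.
docker image ls smilecdr_rhel
docker tag smilecdr_rhel registry.example.com/smilecdr_rhel:2020.02
docker push registry.example.com/smilecdr_rhel:2020.02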

More information about the RHEL OpenJDK image referenced in the example above, or about connecting to the Red Hat image registry, can be found here.

52.2.11.2Enabling Write Access to `hostPath` Volumes

To enable Smile CDR to write to a configured hostPath volume (e.g. for logging), the following steps will be required:

  1. Create a Security Context Constraint definition file that can be used to enable hostPath volumes, e.g. smilecdr-scc-hostpath.yaml:
    kind: SecurityContextConstraints
    apiVersion: v1
    metadata:
      name: hostpath-scc
    allowHostDirVolumePlugin: true
    runAsUser:
      type: RunAsAny
    seLinuxContext:
      type: RunAsAny
  2. Apply the hostPath SCC:

oc apply -f smilecdr-scc-hostpath.yaml

  3. Enable the hostPath SCC for all users (note that you will need to execute this command as a user with OpenShift administrator privileges):

oc adm policy add-scc-to-group hostpath-scc system:authenticated

  4. Create the folder on the OpenShift host that you would like to mount the hostPath volume to (e.g. /smilecdr-logs) and enable read and write permissions for all users:
sudo mkdir /smilecdr-logs 
sudo chmod 777 /smilecdr-logs
  5. Change the SELinux labels on the new folder, setting the type and user to match those of the folders inside the Smile CDR container, e.g.:
  # The following creates the SELinux context for the /smilecdr-logs folder
  sudo semanage fcontext -a -t container_file_t -s system_u /smilecdr-logs
  # This command applies the new context to the folder.
  sudo restorecon -v /smilecdr-logs
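  # Optional check: display the folder's SELinux context; the type should now be container_file_t.
  ls -dZ /smilecdr-logs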

More information about SELinux configurations can be found here.