This directory contains the Kubernetes (K8s) configuration and Helm chart manifests for deploying the AppifyHub’s Monolith API service. The service’s source code is available in the Monolith repository.
Most of the prerequisites are available for exploration in the root-level README. This document will focus only on the specific steps needed to deploy the AppifyHub’s Monolith API service.
Prerequisites:
- A database set up, running, and reachable from your cluster
- Your K8s secrets prepared for the pods to use (at minimum, the database connection string)
The Monolith API service is best deployed using a Helm chart, which is located next to this guide. The chart is designed to be installed into a K8s cluster, and it will create almost all of the necessary resources for the Monolith service to run.
As mentioned in the prerequisites, you need to have a database set up and running, and you need to have your secrets ready for the pods to use. The chart will not create the database for you or manage your K8s secrets. At a minimum, you need a K8s secret with the database connection string and the other required values before installing the chart. For the full list of required secrets, check the service repository’s Docker directory. The chart ships with sensible defaults, but you should change them to match your environment.
Going forward, we will assume that you have the database set up and running, but you want to manage secrets using Doppler. If you don’t want to use Doppler, you can create the secrets manually using the `kubectl create secret` command and disable Doppler via Helm value directives. More details on how Doppler manages secrets can be found in the Secrets Check guide.
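If you do go the manual route, here’s a minimal sketch of creating such a secret before installing the chart. The secret and key names below are hypothetical; check the service repository’s Docker directory for the exact keys your deployment expects:
# Create a secret with the database connection details (hypothetical names)
kubectl create secret generic monolith-api-secrets \
  --namespace delete-me \
  --from-literal=POSTGRES_URL="jdbc:postgresql://your-db-host:5432/appifyhub" \
  --from-literal=POSTGRES_PASS="change-me"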
There are two ways to manage the keystore required by the service for signing JWT tokens:
1. Store the keystore in a K8s secret, base64-encoded, and let the chart decode it on startup
2. Mount the keystore file directly into the pod
This guide and this chart assume that you will use the first option. The second option is not recommended for production environments, as it requires a file to be mounted into the pod, which is not a good practice for security reasons.
In order to use the first option, your keystore file (usually a `.jks` or a `.p12` file) should be stored in a K8s secret as `KEYSTORE_BASE64`, in base64 format. The deployment chart will automatically boot up an “init container” with an ephemeral volume: the init container decodes the keystore file from the K8s secret into the ephemeral volume, and the main container then uses that volume to access the keystore file.
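For example, assuming your keystore is in a `keystore.p12` file, the secret could be created like this (the secret name is hypothetical; note that GNU `base64` wraps long lines by default, so we disable wrapping with `-w 0`):
# Store the keystore as a base64-encoded K8s secret (hypothetical secret name)
kubectl create secret generic monolith-api-keystore \
  --namespace delete-me \
  --from-literal=KEYSTORE_BASE64="$(base64 -w 0 < keystore.p12)"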
Let’s install the chart:
# Prepare a dedicated namespace (if not already there)
kubectl create namespace delete-me
# Install the service from there
helm install monolith-api appifyhub/appifyhub-api \
--namespace delete-me \
--set secrets.provider="none"
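Once the install command completes, you can run a quick sanity check using standard Helm and kubectl commands (assuming the release name and namespace from above):
# Check the release status
helm status monolith-api --namespace delete-me
# Check that the pods are coming up
kubectl get pods --namespace delete-me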
This basic setup does not automatically manage your secrets; it expects them to already be available alongside your new deployment. If the necessary secrets are not injected into your pods, you will see an error message like this:
$ 2025-01-01T00:00:00.001Z PSQLException: FATAL!
| password authentication failed for default user "appifyhub"
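To see errors like this yourself, tail the pod logs. Here’s a minimal sketch, assuming the deployment is named `monolith-api` (the actual resource name may differ in your cluster):
# Follow the service logs to diagnose secret issues
kubectl logs deployment/monolith-api --namespace delete-me --follow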
The service created here will be exposed over HTTP (not HTTPS) at http://api.cloud.appifyhub.local. Note the `.local` top-level domain: this is a fake domain that is used for development and testing. You can change this to a real domain in the next steps. In order to access the service on this fake domain, you need to add it to your local hosts file (e.g., `/etc/hosts`), as explained in the Echo server guide. In the next step, we will explore adding the secrets manager and a real domain.
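Before moving on, here’s a minimal sketch of that hosts-file change, assuming your ingress is reachable on `127.0.0.1` (common for local clusters; otherwise, use your load balancer’s IP):
# Map the fake domain to your ingress IP
echo "127.0.0.1 api.cloud.appifyhub.local" | sudo tee -a /etc/hosts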
Here’s how you can undo this installation if you want to start over:
# Uninstall the monolith service
helm uninstall monolith-api --namespace delete-me
# Remove the namespace if you don't need it anymore
kubectl delete namespace delete-me
Whether you have a real domain or use NIP (or similar), you can choose to expose the service to HTTP traffic through your cluster’s load balancer and ingress. This is generally done by creating a K8s ingress resource that will route traffic to your service – and this chart already creates that for you.
⚠️ When using a real domain, you need to make sure that the DNS records are set up correctly. This is usually done by creating an `A` record that points to the load balancer’s IP address. If you are using a service like NIP, you can use a wildcard DNS record to point all subdomains to your load balancer. You can see your load balancer’s IP address in your cloud provider’s console.
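Before relying on the domain, you can verify that DNS resolves as expected; here’s a quick check with `dig`, using the example domain from the next step (substitute your own):
# Confirm that the domain resolves to your load balancer's IP
dig +short staging.subdomain.realdomain.com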
💡 Keep in mind that your DNS and cache provider (such as Cloudflare) may inject TLS certificates and other security features that come from their Anycast network, especially if you use them as a proxy and not only for DNS. This setup is not a problem, but it may cause some confusion if you are not aware of it. This installation step will not explicitly enable TLS or HTTPS. We will explore that in a later step.
Because a configuration based on a real domain is not assumed as the default, real domains are currently not enabled in the chart’s values. We can either upgrade our existing Helm release using the `helm upgrade` command with `--set` flags to include a real domain, or we can simply install the chart again with the new values. The latter is easier, so we will do that here. You can undo the previous installation first if you want to keep your cluster clean.
# Prepare a dedicated namespace (if not already there)
kubectl create namespace staging
# Install the service from there - assuming you want it in a 'staging' namespace
helm install monolith-api appifyhub/appifyhub-api \
--namespace staging \
--set app.image.tag="latest_beta" \
--set secrets.doppler.project="appifyhub-cloud" \
--set secrets.doppler.config="staging" \
--set secrets.doppler.token="dp.st.staging.your-actual-token-here" \
--set ingress.domain.base="realdomain.com" \
--set ingress.domain.prefix="staging.subdomain" \
--set config.values.LOGGING_LEVEL="WARN"
Note that this setup is also implicitly enabling Doppler’s secrets manager, which will automatically inject the secrets into your pods. For more information on how to set up Doppler, check the Secrets Check guide.
💡 The deployment is configured to reload its secrets every 5 minutes. In addition, the deployment might create multiple pods while booting up. As soon as the pod with the secrets injected is up and running, the other pod will be shut down (potentially with some logged errors). This is normal behavior.
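To confirm that Doppler’s sync is working, you can list the secrets in the namespace. The exact secret name depends on how the Doppler operator is configured in your cluster:
# Check that the Doppler-managed secret exists and the pods are healthy
kubectl get secrets --namespace staging
kubectl get pods --namespace staging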
The `config.values.LOGGING_LEVEL` value is optional and can be used to set the logging level for the service. The default value is `INFO`, but you can change it to `DEBUG`, `WARN`, etc.
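If you later want to change only the logging level without repeating every flag, Helm’s `--reuse-values` flag can help; a minimal sketch:
# Change only the logging level, keeping all other release values
helm upgrade monolith-api appifyhub/appifyhub-api \
  --namespace staging \
  --reuse-values \
  --set config.values.LOGGING_LEVEL="DEBUG"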
The configurations shown so far are not using TLS or HTTPS. This is fine for development and testing, but in production, we should always use TLS and HTTPS to secure our traffic. We will now make some changes to the chart to enable TLS and HTTPS.
⚠️ We want high availability from our services, so we are setting the number of replicas to 2 (as an example). You are unlikely to need this for staging configurations, and it is definitely not a requirement, but having at least 2 replicas in production is good practice – so this is a good opportunity to learn how to set it up.
# Let's upgrade our existing Helm release to include TLS and HTTPS
helm upgrade monolith-api appifyhub/appifyhub-api \
--namespace staging \
--set app.replicas=2 \
--set app.image.tag="latest_beta" \
--set secrets.doppler.project="appifyhub-cloud" \
--set secrets.doppler.config="staging" \
--set secrets.doppler.token="dp.st.staging.your-actual-token-here" \
--set ingress.domain.base="realdomain.com" \
--set ingress.domain.prefix="staging.subdomain" \
--set config.values.LOGGING_LEVEL="WARN" \
--set ingress.tls.enabled=true \
--set-string ingress.annotations."traefik\.ingress\.kubernetes\.io/router\.entrypoints"=websecure \
--set-string ingress.annotations."traefik\.ingress\.kubernetes\.io/router\.tls"=true \
--set-string ingress.annotations."traefik\.ingress\.kubernetes\.io/router\.tls\.certresolver"=letsencrypt
💡 The deployment here relies on Traefik’s integration with Let’s Encrypt, a widely used provider of free TLS certificates. The chart is now configured to use the `websecure` entrypoint, which is the default entrypoint for HTTPS traffic in Traefik. If you are using a different ingress controller, you will need to adjust these annotations accordingly.
It may take a few minutes for the TLS certificate to be issued and for the service to be accessible over HTTPS.
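Once the certificate is issued, you can verify the HTTPS setup with a quick `curl` check, using the example domain from above:
# Inspect the certificate issuer and the response status
curl -vI https://staging.subdomain.realdomain.com 2>&1 | grep -E "subject:|issuer:|HTTP/"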
In addition to the install values we changed above using `--set`, there are many other configuration options available in the chart (such as rollback history, OpenTelemetry and Prometheus configuration, liveness probes, resource consumption, etc.). You can see all of them in the `values.yaml` file.
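If you prefer a values file over long chains of `--set` flags, the same configuration can be expressed as an override file. The sketch below only mirrors the flags we used above; the full set of supported keys lives in the chart’s `values.yaml`:
# values-staging.yaml
app:
  replicas: 2
  image:
    tag: "latest_beta"
secrets:
  doppler:
    project: "appifyhub-cloud"
    config: "staging"
    token: "dp.st.staging.your-actual-token-here"
ingress:
  domain:
    base: "realdomain.com"
    prefix: "staging.subdomain"
  tls:
    enabled: true
  annotations:
    traefik.ingress.kubernetes.io/router.entrypoints: "websecure"
    traefik.ingress.kubernetes.io/router.tls: "true"
    traefik.ingress.kubernetes.io/router.tls.certresolver: "letsencrypt"
config:
  values:
    LOGGING_LEVEL: "WARN"
You would then apply it with `helm upgrade monolith-api appifyhub/appifyhub-api --namespace staging -f values-staging.yaml`.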