Added LiteLLM to the stack

This commit is contained in:
2025-08-18 09:40:50 +00:00
parent 0648c1968c
commit d220b04e32
2682 changed files with 533609 additions and 1 deletions

View File

@@ -0,0 +1,23 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/

View File

@@ -0,0 +1,9 @@
dependencies:
- name: postgresql
repository: oci://registry-1.docker.io/bitnamicharts
version: 14.3.1
- name: redis
repository: oci://registry-1.docker.io/bitnamicharts
version: 18.19.1
digest: sha256:8660fe6287f9941d08c0902f3f13731079b8cecd2a5da2fbc54e5b7aae4a6f62
generated: "2024-03-10T02:28:52.275022+05:30"

View File

@@ -0,0 +1,37 @@
apiVersion: v2
# We can't call ourselves just "litellm" because then we couldn't publish to the
# same OCI repository as the "litellm" OCI image
name: litellm-helm
description: Call all LLM APIs using the OpenAI format
# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.4.4
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: v1.50.2
dependencies:
- name: "postgresql"
version: ">=13.3.0"
repository: oci://registry-1.docker.io/bitnamicharts
condition: db.deployStandalone
- name: redis
version: ">=18.0.0"
repository: oci://registry-1.docker.io/bitnamicharts
condition: redis.enabled

View File

@@ -0,0 +1,148 @@
# Helm Chart for LiteLLM
> [!IMPORTANT]
> This chart is community maintained. Please open an issue if you run into a bug.
> We recommend using [Docker or Kubernetes for production deployments](https://docs.litellm.ai/docs/proxy/prod).
## Prerequisites
- Kubernetes 1.21+
- Helm 3.8.0+
If `db.deployStandalone` is used:
- PV provisioner support in the underlying infrastructure
If `db.useStackgresOperator` is used (not yet implemented):
- The Stackgres Operator must already be installed in the Kubernetes Cluster. This chart will **not** install the operator if it is missing.
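With the prerequisites in place, the chart can be installed directly from its OCI registry. A minimal sketch, assuming the chart is published at `oci://ghcr.io/berriai/litellm-helm` (adjust the release name, namespace, and chart version to your environment):
```bash
helm install litellm oci://ghcr.io/berriai/litellm-helm \
  --namespace litellm --create-namespace \
  --set masterkey=sk-changeme   # optional; omit to use a randomly generated master key
```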
## Parameters
### LiteLLM Proxy Deployment Settings
| Name | Description | Value |
| ---------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----- |
| `replicaCount` | The number of LiteLLM Proxy pods to be deployed | `1` |
| `masterkeySecretName` | The name of the Kubernetes Secret that contains the Master API Key for LiteLLM. If not specified, the generated secret name is used. | N/A |
| `masterkeySecretKey` | The key within the Kubernetes Secret that contains the Master API Key for LiteLLM. If not specified, `masterkey` is used as the key. | N/A |
| `masterkey` | The Master API Key for LiteLLM. If not specified, a random key is generated. | N/A |
| `environmentSecrets` | An optional array of Secret object names. The keys and values in these secrets will be presented to the LiteLLM proxy pod as environment variables. See below for an example Secret object. | `[]` |
| `environmentConfigMaps` | An optional array of ConfigMap object names. The keys and values in these configmaps will be presented to the LiteLLM proxy pod as environment variables. See below for an example ConfigMap object. | `[]` |
| `image.repository` | LiteLLM Proxy image repository | `ghcr.io/berriai/litellm` |
| `image.pullPolicy` | LiteLLM Proxy image pull policy | `IfNotPresent` |
| `image.tag` | Overrides the image tag; the default is the latest version of LiteLLM available at the time this chart was published. | `""` |
| `imagePullSecrets` | Registry credentials for the LiteLLM and initContainer images. | `[]` |
| `serviceAccount.create` | Whether or not to create a Kubernetes Service Account for this deployment. The default is `false` because LiteLLM has no need to access the Kubernetes API. | `false` |
| `service.type` | Kubernetes Service type (e.g. `LoadBalancer`, `ClusterIP`, etc.) | `ClusterIP` |
| `service.port` | TCP port that the Kubernetes Service will listen on. Also the TCP port within the Pod that the proxy will listen on. | `4000` |
| `service.loadBalancerClass` | Optional LoadBalancer implementation class (only used when `service.type` is `LoadBalancer`) | `""` |
| `ingress.*` | See [values.yaml](./values.yaml) for example settings | N/A |
| `proxy_config.*` | See [values.yaml](./values.yaml) for default settings. See [example_config_yaml](../../../litellm/proxy/example_config_yaml/) for configuration examples. | N/A |
| `extraContainers[]` | An array of additional containers to be deployed as sidecars alongside the LiteLLM Proxy. | `[]` |
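For instance, a small values override touching a few of these settings might look like the sketch below (the Secret name, image tag, and ConfigMap name are placeholders):
```yaml
# values.override.yaml
replicaCount: 2
masterkeySecretName: litellm-masterkey   # pre-created Secret holding the master key
masterkeySecretKey: masterkey
image:
  tag: "v1.50.2"                         # pin a specific proxy version instead of the chart default
service:
  type: LoadBalancer
  port: 4000
environmentConfigMaps:
  - litellm-env-configmap                # keys become environment variables in the proxy pod
```
It can be applied with `helm upgrade --install litellm <chart ref> -f values.override.yaml`.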
#### Example `environmentSecrets` Secret
```yaml
apiVersion: v1
kind: Secret
metadata:
name: litellm-envsecrets
data:
AZURE_OPENAI_API_KEY: TXlTZWN1cmVLM3k=
type: Opaque
```
### Database Settings
| Name | Description | Value |
| ---------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----- |
| `db.useExisting` | Use an existing Postgres database. A Kubernetes Secret object must exist that contains credentials for connecting to the database. An example secret object definition is provided below. | `false` |
| `db.endpoint` | If `db.useExisting` is `true`, this is the IP, Hostname or Service Name of the Postgres server to connect to. | `localhost` |
| `db.database` | If `db.useExisting` is `true`, the name of the existing database to connect to. | `litellm` |
| `db.url` | If `db.useExisting` is `true`, this value overrides the connection URL used for the existing database. | `postgresql://$(DATABASE_USERNAME):$(DATABASE_PASSWORD)@$(DATABASE_HOST)/$(DATABASE_NAME)` |
| `db.secret.name` | If `db.useExisting` is `true`, the name of the Kubernetes Secret that contains credentials. | `postgres` |
| `db.secret.usernameKey` | If `db.useExisting` is `true`, the name of the key within the Kubernetes Secret that holds the username for authenticating with the Postgres instance. | `username` |
| `db.secret.passwordKey` | If `db.useExisting` is `true`, the name of the key within the Kubernetes Secret that holds the password associated with the above user. | `password` |
| `db.useStackgresOperator` | Not yet implemented. | `false` |
| `db.deployStandalone` | Deploy a standalone, single instance deployment of Postgres, using the Bitnami postgresql chart. This is useful for getting started but doesn't provide HA or (by default) data backups. | `true` |
| `postgresql.*` | If `db.deployStandalone` is `true`, configuration passed to the Bitnami postgresql chart. See the [Bitnami Documentation](https://github.com/bitnami/charts/tree/main/bitnami/postgresql) for full configuration details. See [values.yaml](./values.yaml) for the default configuration. | See [values.yaml](./values.yaml) |
| `postgresql.auth.*` | If `db.deployStandalone` is `true`, care should be taken to ensure the default `password` and `postgres-password` values are **NOT** used. | `NoTaGrEaTpAsSwOrD` |
#### Example Postgres `db.useExisting` Secret
```yaml
apiVersion: v1
kind: Secret
metadata:
name: postgres
data:
# Password for the "postgres" user
postgres-password: <some secure password, base64 encoded>
username: litellm
password: <some secure password, base64 encoded>
type: Opaque
```
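The corresponding chart values for connecting to that existing database could then look roughly like this (the endpoint below is a placeholder):
```yaml
db:
  useExisting: true
  deployStandalone: false
  endpoint: postgres.example.internal   # hostname or Service name of the existing Postgres
  database: litellm
  secret:
    name: postgres                      # the Secret shown above
    usernameKey: username
    passwordKey: password
```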
#### Examples for `environmentSecrets` and `environmentConfigMaps`
```yaml
# Use config map for not-secret configuration data
apiVersion: v1
kind: ConfigMap
metadata:
name: litellm-env-configmap
data:
SOME_KEY: someValue
ANOTHER_KEY: anotherValue
```
```yaml
# Use secrets for things which are actually secret like API keys, credentials, etc
# Base64 encode the values stored in a Kubernetes Secret: $ pbpaste | base64 | pbcopy
# The --decode flag is convenient: $ pbpaste | base64 --decode
apiVersion: v1
kind: Secret
metadata:
name: litellm-env-secret
type: Opaque
data:
SOME_PASSWORD: cDZbUGVXeU5e0ZW # base64 encoded
ANOTHER_PASSWORD: AAZbUGVXeU5e0ZB # base64 encoded
```
Source: [GitHub Gist from troyharvey](https://gist.github.com/troyharvey/4506472732157221e04c6b15e3b3f094)
### Migration Job Settings
The migration job supports both ArgoCD and Helm hooks to ensure database migrations run at the appropriate time during deployments.
| Name | Description | Value |
| ---------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----- |
| `migrationJob.enabled` | Enable or disable the schema migration Job | `true` |
| `migrationJob.backoffLimit` | Backoff limit for Job restarts | `4` |
| `migrationJob.ttlSecondsAfterFinished` | TTL for completed migration jobs | `120` |
| `migrationJob.annotations` | Additional annotations for the migration job pod | `{}` |
| `migrationJob.extraContainers` | Additional containers to run alongside the migration job | `[]` |
| `migrationJob.hooks.argocd.enabled` | Enable ArgoCD hooks for the migration job (uses PreSync hook with BeforeHookCreation delete policy) | `true` |
| `migrationJob.hooks.helm.enabled` | Enable Helm hooks for the migration job (uses pre-install,pre-upgrade hooks with before-hook-creation delete policy) | `false` |
| `migrationJob.hooks.helm.weight` | Helm hook execution order (lower weights executed first). Optional - defaults to "1" if not specified. | N/A |
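For a plain Helm (non-ArgoCD) workflow, the hooks can be swapped with an override along these lines:
```yaml
migrationJob:
  enabled: true
  hooks:
    argocd:
      enabled: false
    helm:
      enabled: true   # run the job as a pre-install/pre-upgrade hook
      weight: "1"     # optional; lower weights run first
```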
## Accessing the Admin UI
When browsing to the URL published per the settings in `ingress.*`, you will
be prompted for **Admin Configuration**. The **Proxy Endpoint** is the internal
(from the `litellm` pod's perspective) URL published by the `<RELEASE>-litellm`
Kubernetes Service. If the deployment uses the default settings for this
service, the **Proxy Endpoint** should be set to `http://<RELEASE>-litellm:4000`.
The **Proxy Key** is the value specified for `masterkey` or, if no `masterkey`
was provided on the helm command line, a randomly generated string stored in
the `<RELEASE>-litellm-masterkey` Kubernetes Secret.
```bash
kubectl -n litellm get secret <RELEASE>-litellm-masterkey -o jsonpath="{.data.masterkey}"
```
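The value stored in the Secret is base64 encoded; piping it through `base64 -d` prints the usable key:
```bash
kubectl -n litellm get secret <RELEASE>-litellm-masterkey -o jsonpath="{.data.masterkey}" | base64 -d
```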
## Admin UI Limitations
At the time of writing, the Admin UI is unable to add models. This is because
it would need to update the `config.yaml` file, which is exposed as a ConfigMap
and is therefore read-only. This is a limitation of this Helm chart, not of the
Admin UI itself.
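Models are instead declared in `proxy_config.model_list` through the chart values, for example (the model name and key reference below are illustrative):
```yaml
proxy_config:
  model_list:
    - model_name: gpt-4o
      litellm_params:
        model: gpt-4o
        api_key: os.environ/OPENAI_API_KEY   # resolved from an environmentSecrets entry
  general_settings:
    master_key: os.environ/PROXY_MASTER_KEY
```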

View File

@@ -0,0 +1,15 @@
fullnameOverride: ""
# Disable database deployment and configuration
db:
deployStandalone: false
useExisting: false
# Test environment variables
envVars:
DD_ENV: "dev_helm"
DD_SERVICE: "litellm"
USE_DDTRACE: "true"
# Disable migration job since we're not using a database
migrationJob:
enabled: false

View File

@@ -0,0 +1,22 @@
1. Get the application URL by running these commands:
{{- if .Values.ingress.enabled }}
{{- range $host := .Values.ingress.hosts }}
{{- range .paths }}
http{{ if $.Values.ingress.tls }}s{{ end }}://{{ $host.host }}{{ .path }}
{{- end }}
{{- end }}
{{- else if contains "NodePort" .Values.service.type }}
export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ include "litellm.fullname" . }})
export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
echo http://$NODE_IP:$NODE_PORT
{{- else if contains "LoadBalancer" .Values.service.type }}
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
You can watch its status by running 'kubectl get --namespace {{ .Release.Namespace }} svc -w {{ include "litellm.fullname" . }}'
export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ include "litellm.fullname" . }} --template "{{"{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}"}}")
echo http://$SERVICE_IP:{{ .Values.service.port }}
{{- else if contains "ClusterIP" .Values.service.type }}
export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app.kubernetes.io/name={{ include "litellm.name" . }},app.kubernetes.io/instance={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
export CONTAINER_PORT=$(kubectl get pod --namespace {{ .Release.Namespace }} $POD_NAME -o jsonpath="{.spec.containers[0].ports[0].containerPort}")
echo "Visit http://127.0.0.1:8080 to use your application"
kubectl --namespace {{ .Release.Namespace }} port-forward $POD_NAME 8080:$CONTAINER_PORT
{{- end }}

View File

@@ -0,0 +1,84 @@
{{/*
Expand the name of the chart.
*/}}
{{- define "litellm.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "litellm.fullname" -}}
{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default .Chart.Name .Values.nameOverride }}
{{- if contains $name .Release.Name }}
{{- .Release.Name | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
{{- end }}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "litellm.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Common labels
*/}}
{{- define "litellm.labels" -}}
helm.sh/chart: {{ include "litellm.chart" . }}
{{ include "litellm.selectorLabels" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}
{{/*
Selector labels
*/}}
{{- define "litellm.selectorLabels" -}}
app.kubernetes.io/name: {{ include "litellm.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}
{{/*
Create the name of the service account to use
*/}}
{{- define "litellm.serviceAccountName" -}}
{{- if .Values.serviceAccount.create }}
{{- default (include "litellm.fullname" .) .Values.serviceAccount.name }}
{{- else }}
{{- default "default" .Values.serviceAccount.name }}
{{- end }}
{{- end }}
{{/*
Get redis service name
*/}}
{{- define "litellm.redis.serviceName" -}}
{{- if and (eq .Values.redis.architecture "standalone") .Values.redis.sentinel.enabled -}}
{{- printf "%s-%s" .Release.Name (default "redis" .Values.redis.nameOverride | trunc 63 | trimSuffix "-") -}}
{{- else -}}
{{- printf "%s-%s-master" .Release.Name (default "redis" .Values.redis.nameOverride | trunc 63 | trimSuffix "-") -}}
{{- end -}}
{{- end -}}
{{/*
Get redis service port
*/}}
{{- define "litellm.redis.port" -}}
{{- if .Values.redis.sentinel.enabled -}}
{{ .Values.redis.sentinel.service.ports.sentinel }}
{{- else -}}
{{ .Values.redis.master.service.ports.redis }}
{{- end -}}
{{- end -}}

View File

@@ -0,0 +1,7 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ include "litellm.fullname" . }}-config
data:
config.yaml: |
{{ .Values.proxy_config | toYaml | indent 6 }}

View File

@@ -0,0 +1,197 @@
apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
{{- toYaml .Values.deploymentAnnotations | nindent 4 }}
name: {{ include "litellm.fullname" . }}
labels:
{{- include "litellm.labels" . | nindent 4 }}
spec:
{{- if not .Values.autoscaling.enabled }}
replicas: {{ .Values.replicaCount }}
{{- end }}
selector:
matchLabels:
{{- include "litellm.selectorLabels" . | nindent 6 }}
template:
metadata:
annotations:
checksum/config: {{ include (print $.Template.BasePath "/configmap-litellm.yaml") . | sha256sum }}
{{- with .Values.podAnnotations }}
{{- toYaml . | nindent 8 }}
{{- end }}
labels:
{{- include "litellm.labels" . | nindent 8 }}
{{- with .Values.podLabels }}
{{- toYaml . | nindent 8 }}
{{- end }}
spec:
{{- with .Values.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
serviceAccountName: {{ include "litellm.serviceAccountName" . }}
securityContext:
{{- toYaml .Values.podSecurityContext | nindent 8 }}
containers:
- name: {{ include "litellm.name" . }}
securityContext:
{{- toYaml .Values.securityContext | nindent 12 }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default (printf "main-%s" .Chart.AppVersion) }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
env:
- name: HOST
value: "{{ .Values.listen | default "0.0.0.0" }}"
- name: PORT
value: {{ .Values.service.port | quote}}
{{- if .Values.db.deployStandalone }}
- name: DATABASE_USERNAME
valueFrom:
secretKeyRef:
name: {{ include "litellm.fullname" . }}-dbcredentials
key: username
- name: DATABASE_PASSWORD
valueFrom:
secretKeyRef:
name: {{ include "litellm.fullname" . }}-dbcredentials
key: password
- name: DATABASE_HOST
value: {{ .Release.Name }}-postgresql
- name: DATABASE_NAME
value: litellm
{{- else if .Values.db.useExisting }}
- name: DATABASE_USERNAME
valueFrom:
secretKeyRef:
name: {{ .Values.db.secret.name }}
key: {{ .Values.db.secret.usernameKey }}
- name: DATABASE_PASSWORD
valueFrom:
secretKeyRef:
name: {{ .Values.db.secret.name }}
key: {{ .Values.db.secret.passwordKey }}
- name: DATABASE_HOST
value: {{ .Values.db.endpoint }}
- name: DATABASE_NAME
value: {{ .Values.db.database }}
- name: DATABASE_URL
value: {{ .Values.db.url | quote }}
{{- end }}
- name: PROXY_MASTER_KEY
valueFrom:
secretKeyRef:
name: {{ .Values.masterkeySecretName | default (printf "%s-masterkey" (include "litellm.fullname" .)) }}
key: {{ .Values.masterkeySecretKey | default "masterkey" }}
{{- if .Values.redis.enabled }}
- name: REDIS_HOST
value: {{ include "litellm.redis.serviceName" . }}
- name: REDIS_PORT
value: {{ include "litellm.redis.port" . | quote }}
- name: REDIS_PASSWORD
valueFrom:
secretKeyRef:
name: {{ include "redis.secretName" .Subcharts.redis }}
key: {{include "redis.secretPasswordKey" .Subcharts.redis }}
{{- end }}
{{- if .Values.envVars }}
{{- range $key, $val := .Values.envVars }}
- name: {{ $key }}
value: {{ $val | quote }}
{{- end }}
{{- end }}
{{- if .Values.separateHealthApp }}
- name: SEPARATE_HEALTH_APP
value: "1"
- name: SEPARATE_HEALTH_PORT
value: {{ .Values.separateHealthPort | default "8081" | quote }}
{{- end }}
{{- with .Values.extraEnvVars }}
{{- toYaml . | nindent 12 }}
{{- end }}
envFrom:
{{- range .Values.environmentSecrets }}
- secretRef:
name: {{ . }}
{{- end }}
{{- range .Values.environmentConfigMaps }}
- configMapRef:
name: {{ . }}
{{- end }}
args:
- --config
- /etc/litellm/config.yaml
ports:
- name: http
containerPort: {{ .Values.service.port }}
protocol: TCP
{{- if .Values.separateHealthApp }}
- name: health
containerPort: {{ .Values.separateHealthPort | default 8081 }}
protocol: TCP
{{- end }}
livenessProbe:
httpGet:
path: /health/liveliness
port: {{ if .Values.separateHealthApp }}"health"{{ else }}"http"{{ end }}
readinessProbe:
httpGet:
path: /health/readiness
port: {{ if .Values.separateHealthApp }}"health"{{ else }}"http"{{ end }}
startupProbe:
httpGet:
path: /health/readiness
port: {{ if .Values.separateHealthApp }}"health"{{ else }}"http"{{ end }}
failureThreshold: 30
periodSeconds: 10
resources:
{{- toYaml .Values.resources | nindent 12 }}
volumeMounts:
- name: litellm-config
mountPath: /etc/litellm/
{{ if .Values.securityContext.readOnlyRootFilesystem }}
- name: tmp
mountPath: /tmp
- name: cache
mountPath: /.cache
- name: npm
mountPath: /.npm
{{- end }}
{{- with .Values.volumeMounts }}
{{- toYaml . | nindent 12 }}
{{- end }}
{{- with .Values.extraContainers }}
{{- toYaml . | nindent 8 }}
{{- end }}
volumes:
{{ if .Values.securityContext.readOnlyRootFilesystem }}
- name: tmp
emptyDir:
sizeLimit: 500Mi
- name: cache
emptyDir:
sizeLimit: 500Mi
- name: npm
emptyDir:
sizeLimit: 500Mi
{{- end }}
- name: litellm-config
configMap:
name: {{ include "litellm.fullname" . }}-config
items:
- key: "config.yaml"
path: "config.yaml"
{{- with .Values.volumes }}
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}

View File

@@ -0,0 +1,32 @@
{{- if .Values.autoscaling.enabled }}
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
name: {{ include "litellm.fullname" . }}
labels:
{{- include "litellm.labels" . | nindent 4 }}
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: {{ include "litellm.fullname" . }}
minReplicas: {{ .Values.autoscaling.minReplicas }}
maxReplicas: {{ .Values.autoscaling.maxReplicas }}
metrics:
{{- if .Values.autoscaling.targetCPUUtilizationPercentage }}
- type: Resource
resource:
name: cpu
target:
type: Utilization
averageUtilization: {{ .Values.autoscaling.targetCPUUtilizationPercentage }}
{{- end }}
{{- if .Values.autoscaling.targetMemoryUtilizationPercentage }}
- type: Resource
resource:
name: memory
target:
type: Utilization
averageUtilization: {{ .Values.autoscaling.targetMemoryUtilizationPercentage }}
{{- end }}
{{- end }}

View File

@@ -0,0 +1,61 @@
{{- if .Values.ingress.enabled -}}
{{- $fullName := include "litellm.fullname" . -}}
{{- $svcPort := .Values.service.port -}}
{{- if and .Values.ingress.className (not (semverCompare ">=1.18-0" .Capabilities.KubeVersion.GitVersion)) }}
{{- if not (hasKey .Values.ingress.annotations "kubernetes.io/ingress.class") }}
{{- $_ := set .Values.ingress.annotations "kubernetes.io/ingress.class" .Values.ingress.className}}
{{- end }}
{{- end }}
{{- if semverCompare ">=1.19-0" .Capabilities.KubeVersion.GitVersion -}}
apiVersion: networking.k8s.io/v1
{{- else if semverCompare ">=1.14-0" .Capabilities.KubeVersion.GitVersion -}}
apiVersion: networking.k8s.io/v1beta1
{{- else -}}
apiVersion: extensions/v1beta1
{{- end }}
kind: Ingress
metadata:
name: {{ $fullName }}
labels:
{{- include "litellm.labels" . | nindent 4 }}
{{- with .Values.ingress.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
{{- if and .Values.ingress.className (semverCompare ">=1.18-0" .Capabilities.KubeVersion.GitVersion) }}
ingressClassName: {{ .Values.ingress.className }}
{{- end }}
{{- if .Values.ingress.tls }}
tls:
{{- range .Values.ingress.tls }}
- hosts:
{{- range .hosts }}
- {{ . | quote }}
{{- end }}
secretName: {{ .secretName }}
{{- end }}
{{- end }}
rules:
{{- range .Values.ingress.hosts }}
- host: {{ .host | quote }}
http:
paths:
{{- range .paths }}
- path: {{ .path }}
{{- if and .pathType (semverCompare ">=1.18-0" $.Capabilities.KubeVersion.GitVersion) }}
pathType: {{ .pathType }}
{{- end }}
backend:
{{- if semverCompare ">=1.19-0" $.Capabilities.KubeVersion.GitVersion }}
service:
name: {{ $fullName }}
port:
number: {{ $svcPort }}
{{- else }}
serviceName: {{ $fullName }}
servicePort: {{ $svcPort }}
{{- end }}
{{- end }}
{{- end }}
{{- end }}

View File

@@ -0,0 +1,94 @@
{{- if .Values.migrationJob.enabled }}
# This job runs the Prisma migrations for the LiteLLM DB.
apiVersion: batch/v1
kind: Job
metadata:
name: {{ include "litellm.fullname" . }}-migrations
labels:
{{- include "litellm.labels" . | nindent 4 }}
annotations:
{{- if .Values.migrationJob.hooks.argocd.enabled }}
argocd.argoproj.io/hook: PreSync
argocd.argoproj.io/hook-delete-policy: BeforeHookCreation
{{- end }}
{{- if .Values.migrationJob.hooks.helm.enabled }}
helm.sh/hook: "pre-install,pre-upgrade"
helm.sh/hook-delete-policy: "before-hook-creation"
helm.sh/hook-weight: {{ .Values.migrationJob.hooks.helm.weight | default "1" | quote }}
{{- end }}
checksum/config: {{ toYaml .Values | sha256sum }}
spec:
template:
metadata:
labels:
{{- include "litellm.labels" . | nindent 8 }}
annotations:
{{- with .Values.migrationJob.annotations }}
{{- toYaml . | nindent 8 }}
{{- end }}
spec:
serviceAccountName: {{ include "litellm.serviceAccountName" . }}
containers:
- name: prisma-migrations
image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default (printf "main-%s" .Chart.AppVersion) }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
securityContext:
{{- toYaml .Values.securityContext | nindent 12 }}
command: ["python", "litellm/proxy/prisma_migration.py"]
workingDir: "/app"
env:
{{- if .Values.db.useExisting }}
- name: DATABASE_USERNAME
valueFrom:
secretKeyRef:
name: {{ .Values.db.secret.name }}
key: {{ .Values.db.secret.usernameKey }}
- name: DATABASE_PASSWORD
valueFrom:
secretKeyRef:
name: {{ .Values.db.secret.name }}
key: {{ .Values.db.secret.passwordKey }}
- name: DATABASE_HOST
value: {{ .Values.db.endpoint }}
- name: DATABASE_NAME
value: {{ .Values.db.database }}
- name: DATABASE_URL
value: {{ .Values.db.url | quote }}
{{- else }}
- name: DATABASE_URL
value: postgresql://{{ .Values.postgresql.auth.username }}:{{ .Values.postgresql.auth.password }}@{{ .Release.Name }}-postgresql/{{ .Values.postgresql.auth.database }}
{{- end }}
{{- if .Values.envVars }}
{{- range $key, $val := .Values.envVars }}
- name: {{ $key }}
value: {{ $val | quote }}
{{- end }}
{{- end }}
{{- with .Values.extraEnvVars }}
{{- toYaml . | nindent 12 }}
{{- end }}
- name: DISABLE_SCHEMA_UPDATE
value: "false" # always run the migration from the Helm PreSync hook, override the value set
{{- with .Values.volumeMounts }}
volumeMounts:
{{- toYaml . | nindent 12 }}
{{- end }}
{{- with .Values.migrationJob.extraContainers }}
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.volumes }}
volumes:
{{- toYaml . | nindent 8 }}
{{- end }}
restartPolicy: OnFailure
{{- with .Values.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}
ttlSecondsAfterFinished: {{ .Values.migrationJob.ttlSecondsAfterFinished }}
backoffLimit: {{ .Values.migrationJob.backoffLimit }}
{{- end }}

View File

@@ -0,0 +1,12 @@
{{- if .Values.db.deployStandalone -}}
apiVersion: v1
kind: Secret
metadata:
name: {{ include "litellm.fullname" . }}-dbcredentials
data:
# Password for the "postgres" user
postgres-password: {{ ( index .Values.postgresql.auth "postgres-password") | default "litellm" | b64enc }}
username: {{ .Values.postgresql.auth.username | default "litellm" | b64enc }}
password: {{ .Values.postgresql.auth.password | default "litellm" | b64enc }}
type: Opaque
{{- end -}}

View File

@@ -0,0 +1,10 @@
{{- if not .Values.masterkeySecretName }}
{{ $masterkey := (.Values.masterkey | default (randAlphaNum 17)) }}
apiVersion: v1
kind: Secret
metadata:
name: {{ include "litellm.fullname" . }}-masterkey
data:
masterkey: {{ $masterkey | b64enc }}
type: Opaque
{{- end }}

View File

@@ -0,0 +1,22 @@
apiVersion: v1
kind: Service
metadata:
name: {{ include "litellm.fullname" . }}
{{- with .Values.service.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
labels:
{{- include "litellm.labels" . | nindent 4 }}
spec:
type: {{ .Values.service.type }}
{{- if and (eq .Values.service.type "LoadBalancer") .Values.service.loadBalancerClass }}
loadBalancerClass: {{ .Values.service.loadBalancerClass }}
{{- end }}
ports:
- port: {{ .Values.service.port }}
targetPort: http
protocol: TCP
name: http
selector:
{{- include "litellm.selectorLabels" . | nindent 4 }}

View File

@@ -0,0 +1,13 @@
{{- if .Values.serviceAccount.create -}}
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ include "litellm.serviceAccountName" . }}
labels:
{{- include "litellm.labels" . | nindent 4 }}
{{- with .Values.serviceAccount.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
automountServiceAccountToken: {{ .Values.serviceAccount.automount }}
{{- end }}

View File

@@ -0,0 +1,25 @@
apiVersion: v1
kind: Pod
metadata:
name: "{{ include "litellm.fullname" . }}-test-connection"
labels:
{{- include "litellm.labels" . | nindent 4 }}
annotations:
"helm.sh/hook": test
spec:
containers:
- name: wget
image: busybox
command: ['sh', '-c']
args:
- |
# Wait for a bit to allow the service to be ready
sleep 10
# Try multiple times with a delay between attempts
for i in $(seq 1 30); do
wget -T 5 "{{ include "litellm.fullname" . }}:{{ .Values.service.port }}/health/readiness" && exit 0
echo "Attempt $i failed, waiting..."
sleep 2
done
exit 1
restartPolicy: Never

View File

@@ -0,0 +1,43 @@
apiVersion: v1
kind: Pod
metadata:
name: "{{ include "litellm.fullname" . }}-env-test"
labels:
{{- include "litellm.labels" . | nindent 4 }}
annotations:
"helm.sh/hook": test
spec:
containers:
- name: test
image: busybox
command: ['sh', '-c']
args:
- |
# Test DD_ENV
if [ "$DD_ENV" != "dev_helm" ]; then
echo "❌ Environment variable DD_ENV mismatch. Expected: dev_helm, Got: $DD_ENV"
exit 1
fi
echo "✅ Environment variable DD_ENV matches expected value: $DD_ENV"
# Test DD_SERVICE
if [ "$DD_SERVICE" != "litellm" ]; then
echo "❌ Environment variable DD_SERVICE mismatch. Expected: litellm, Got: $DD_SERVICE"
exit 1
fi
echo "✅ Environment variable DD_SERVICE matches expected value: $DD_SERVICE"
# Test USE_DDTRACE
if [ "$USE_DDTRACE" != "true" ]; then
echo "❌ Environment variable USE_DDTRACE mismatch. Expected: true, Got: $USE_DDTRACE"
exit 1
fi
echo "✅ Environment variable USE_DDTRACE matches expected value: $USE_DDTRACE"
env:
- name: DD_ENV
value: {{ .Values.envVars.DD_ENV | quote }}
- name: DD_SERVICE
value: {{ .Values.envVars.DD_SERVICE | quote }}
- name: USE_DDTRACE
value: {{ .Values.envVars.USE_DDTRACE | quote }}
restartPolicy: Never

View File

@@ -0,0 +1,117 @@
suite: test deployment
templates:
- deployment.yaml
- configmap-litellm.yaml
tests:
- it: should work
template: deployment.yaml
set:
image.tag: test
asserts:
- isKind:
of: Deployment
- matchRegex:
path: metadata.name
pattern: -litellm$
- equal:
path: spec.template.spec.containers[0].image
value: ghcr.io/berriai/litellm-database:test
- it: should work with tolerations
template: deployment.yaml
set:
tolerations:
- key: node-role.kubernetes.io/master
operator: Exists
effect: NoSchedule
asserts:
- equal:
path: spec.template.spec.tolerations[0].key
value: node-role.kubernetes.io/master
- equal:
path: spec.template.spec.tolerations[0].operator
value: Exists
- it: should work with affinity
template: deployment.yaml
set:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: topology.kubernetes.io/zone
operator: In
values:
- antarctica-east1
asserts:
- equal:
path: spec.template.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[0].matchExpressions[0].key
value: topology.kubernetes.io/zone
- equal:
path: spec.template.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[0].matchExpressions[0].operator
value: In
- equal:
path: spec.template.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[0].matchExpressions[0].values[0]
value: antarctica-east1
- it: should work without masterkeySecretName or masterkeySecretKey
template: deployment.yaml
set:
masterkeySecretName: ""
masterkeySecretKey: ""
asserts:
- contains:
path: spec.template.spec.containers[0].env
content:
name: PROXY_MASTER_KEY
valueFrom:
secretKeyRef:
name: RELEASE-NAME-litellm-masterkey
key: masterkey
- it: should work with masterkeySecretName and masterkeySecretKey
template: deployment.yaml
set:
masterkeySecretName: my-secret
masterkeySecretKey: my-key
asserts:
- contains:
path: spec.template.spec.containers[0].env
content:
name: PROXY_MASTER_KEY
valueFrom:
secretKeyRef:
name: my-secret
key: my-key
- it: should work with extraEnvVars
template: deployment.yaml
set:
extraEnvVars:
- name: EXTRA_ENV_VAR
valueFrom:
fieldRef:
fieldPath: metadata.labels['env']
asserts:
- contains:
path: spec.template.spec.containers[0].env
content:
name: EXTRA_ENV_VAR
valueFrom:
fieldRef:
fieldPath: metadata.labels['env']
- it: should work with both extraEnvVars and envVars
template: deployment.yaml
set:
envVars:
ENV_VAR: ENV_VAR_VALUE
extraEnvVars:
- name: EXTRA_ENV_VAR
value: EXTRA_ENV_VAR_VALUE
asserts:
- contains:
path: spec.template.spec.containers[0].env
content:
name: ENV_VAR
value: ENV_VAR_VALUE
- contains:
path: spec.template.spec.containers[0].env
content:
name: EXTRA_ENV_VAR
value: EXTRA_ENV_VAR_VALUE

View File

@@ -0,0 +1,18 @@
suite: test masterkey secret
templates:
- secret-masterkey.yaml
tests:
- it: should create a secret if masterkeySecretName is not set
template: secret-masterkey.yaml
set:
masterkeySecretName: ""
asserts:
- isKind:
of: Secret
- it: should not create a secret if masterkeySecretName is set
template: secret-masterkey.yaml
set:
masterkeySecretName: my-secret
asserts:
- hasDocuments:
count: 0

View File

@@ -0,0 +1,113 @@
suite: test migrations job
templates:
- migrations-job.yaml
tests:
- it: should work with envVars
template: migrations-job.yaml
set:
envVars:
TEST_ENV_VAR: "test_value"
ANOTHER_VAR: "another_value"
migrationJob:
enabled: true
asserts:
- contains:
path: spec.template.spec.containers[0].env
content:
name: TEST_ENV_VAR
value: "test_value"
- contains:
path: spec.template.spec.containers[0].env
content:
name: ANOTHER_VAR
value: "another_value"
- it: should work with extraEnvVars
template: migrations-job.yaml
set:
extraEnvVars:
- name: EXTRA_ENV_VAR
valueFrom:
fieldRef:
fieldPath: metadata.labels['env']
- name: SIMPLE_EXTRA_VAR
value: "simple_value"
migrationJob:
enabled: true
asserts:
- contains:
path: spec.template.spec.containers[0].env
content:
name: EXTRA_ENV_VAR
valueFrom:
fieldRef:
fieldPath: metadata.labels['env']
- contains:
path: spec.template.spec.containers[0].env
content:
name: SIMPLE_EXTRA_VAR
value: "simple_value"
- it: should work with both envVars and extraEnvVars
template: migrations-job.yaml
set:
envVars:
ENV_VAR: "env_var_value"
extraEnvVars:
- name: EXTRA_ENV_VAR
value: "extra_env_var_value"
migrationJob:
enabled: true
asserts:
- contains:
path: spec.template.spec.containers[0].env
content:
name: ENV_VAR
value: "env_var_value"
- contains:
path: spec.template.spec.containers[0].env
content:
name: EXTRA_ENV_VAR
value: "extra_env_var_value"
- it: should not render when migrations job is disabled
template: migrations-job.yaml
set:
migrationJob:
enabled: false
asserts:
- hasDocuments:
count: 0
- it: should still include default env vars
template: migrations-job.yaml
set:
envVars:
CUSTOM_VAR: "custom_value"
migrationJob:
enabled: true
db:
useExisting: true
endpoint: "test-db"
database: "testdb"
url: "postgresql://user:pass@test-db:5432/testdb"
secret:
name: "test-secret"
usernameKey: "username"
passwordKey: "password"
asserts:
- contains:
path: spec.template.spec.containers[0].env
content:
name: DISABLE_SCHEMA_UPDATE
value: "false"
- contains:
path: spec.template.spec.containers[0].env
content:
name: DATABASE_HOST
value: "test-db"
- contains:
path: spec.template.spec.containers[0].env
content:
name: CUSTOM_VAR
value: "custom_value"

View File

@@ -0,0 +1,116 @@
suite: Service Configuration Tests
templates:
- service.yaml
tests:
- it: should create a default ClusterIP service
template: service.yaml
asserts:
- isKind:
of: Service
- equal:
path: spec.type
value: ClusterIP
- equal:
path: spec.ports[0].port
value: 4000
- equal:
path: spec.ports[0].targetPort
value: http
- equal:
path: spec.ports[0].protocol
value: TCP
- equal:
path: spec.ports[0].name
value: http
- isNull:
path: spec.loadBalancerClass
- it: should create a NodePort service when specified
template: service.yaml
set:
service.type: NodePort
asserts:
- isKind:
of: Service
- equal:
path: spec.type
value: NodePort
- isNull:
path: spec.loadBalancerClass
- it: should create a LoadBalancer service when specified
template: service.yaml
set:
service.type: LoadBalancer
asserts:
- isKind:
of: Service
- equal:
path: spec.type
value: LoadBalancer
- isNull:
path: spec.loadBalancerClass
- it: should add loadBalancerClass when specified with LoadBalancer type
template: service.yaml
set:
service.type: LoadBalancer
service.loadBalancerClass: tailscale
asserts:
- isKind:
of: Service
- equal:
path: spec.type
value: LoadBalancer
- equal:
path: spec.loadBalancerClass
value: tailscale
- it: should not add loadBalancerClass when specified with ClusterIP type
template: service.yaml
set:
service.type: ClusterIP
service.loadBalancerClass: tailscale
asserts:
- isKind:
of: Service
- equal:
path: spec.type
value: ClusterIP
- isNull:
path: spec.loadBalancerClass
- it: should use custom port when specified
template: service.yaml
set:
service.port: 8080
asserts:
- equal:
path: spec.ports[0].port
value: 8080
- it: should add service annotations when specified
template: service.yaml
set:
service.annotations:
cloud.google.com/load-balancer-type: "Internal"
service.beta.kubernetes.io/aws-load-balancer-internal: "true"
asserts:
- isKind:
of: Service
- equal:
path: metadata.annotations
value:
cloud.google.com/load-balancer-type: "Internal"
service.beta.kubernetes.io/aws-load-balancer-internal: "true"
- it: should use the correct selector labels
template: service.yaml
asserts:
- isNotNull:
path: spec.selector
- equal:
path: spec.selector
value:
app.kubernetes.io/name: litellm
app.kubernetes.io/instance: RELEASE-NAME

View File

@@ -0,0 +1,229 @@
# Default values for litellm.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
replicaCount: 1
image:
# Use "ghcr.io/berriai/litellm-database" for optimized image with database
repository: ghcr.io/berriai/litellm-database
pullPolicy: Always
# Overrides the image tag whose default is the chart appVersion.
# tag: "main-latest"
tag: ""
imagePullSecrets: []
nameOverride: "litellm"
fullnameOverride: ""
serviceAccount:
# Specifies whether a service account should be created
create: false
# Automatically mount a ServiceAccount's API credentials?
automount: true
# Annotations to add to the service account
annotations: {}
# The name of the service account to use.
# If not set and create is true, a name is generated using the fullname template
name: ""
# annotations for litellm deployment
deploymentAnnotations: {}
# annotations for litellm pods
podAnnotations: {}
podLabels: {}
# At the time of writing, the litellm docker image requires write access to the
# filesystem on startup so that prisma can install some dependencies.
podSecurityContext: {}
securityContext: {}
# capabilities:
# drop:
# - ALL
# readOnlyRootFilesystem: false
# runAsNonRoot: true
# runAsUser: 1000
# A list of Kubernetes Secret objects that will be exported to the LiteLLM proxy
# pod as environment variables. These secrets can then be referenced in the
# configuration file (or "litellm" ConfigMap) with `os.environ/<Env Var Name>`
environmentSecrets: []
# - litellm-env-secret
# A list of Kubernetes ConfigMap objects that will be exported to the LiteLLM proxy
# pod as environment variables. The ConfigMap kv-pairs can then be referenced in the
# configuration file (or "litellm" ConfigMap) with `os.environ/<Env Var Name>`
environmentConfigMaps: []
# - litellm-env-configmap
service:
type: ClusterIP
port: 4000
# If service type is `LoadBalancer` you can
# optionally specify loadBalancerClass
# loadBalancerClass: tailscale
# Separate health app configuration
# When enabled, health checks will use a separate port and the application
# will receive SEPARATE_HEALTH_APP=1 and SEPARATE_HEALTH_PORT from environment variables
separateHealthApp: false
separateHealthPort: 8081
ingress:
enabled: false
className: "nginx"
annotations: {}
# kubernetes.io/ingress.class: nginx
# kubernetes.io/tls-acme: "true"
hosts:
- host: api.example.local
paths:
- path: /
pathType: ImplementationSpecific
tls: []
# - secretName: chart-example-tls
# hosts:
# - chart-example.local
# masterkey: changeit
# if set, use this secret for the master key; otherwise, autogenerate a new one
masterkeySecretName: ""
# if set, use this secret key for the master key; otherwise, use the default key
masterkeySecretKey: ""
# The elements within proxy_config are rendered as config.yaml for the proxy
# Examples: https://github.com/BerriAI/litellm/tree/main/litellm/proxy/example_config_yaml
# Reference: https://docs.litellm.ai/docs/proxy/configs
proxy_config:
model_list:
# At least one model must exist for the proxy to start.
- model_name: gpt-3.5-turbo
litellm_params:
model: gpt-3.5-turbo
api_key: eXaMpLeOnLy
- model_name: fake-openai-endpoint
litellm_params:
model: openai/fake
api_key: fake-key
api_base: https://exampleopenaiendpoint-production.up.railway.app/
general_settings:
master_key: os.environ/PROXY_MASTER_KEY
resources: {}
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
autoscaling:
enabled: false
minReplicas: 1
maxReplicas: 100
targetCPUUtilizationPercentage: 80
# targetMemoryUtilizationPercentage: 80
# Additional volumes on the output Deployment definition.
volumes: []
# - name: foo
# secret:
# secretName: mysecret
# optional: false
# Additional volumeMounts on the output Deployment definition.
volumeMounts: []
# - name: foo
# mountPath: "/etc/foo"
# readOnly: true
nodeSelector: {}
tolerations: []
affinity: {}
db:
# Use an existing postgres server/cluster
useExisting: false
# How to connect to the existing postgres server/cluster
endpoint: localhost
database: litellm
url: postgresql://$(DATABASE_USERNAME):$(DATABASE_PASSWORD)@$(DATABASE_HOST)/$(DATABASE_NAME)
secret:
name: postgres
usernameKey: username
passwordKey: password
# Use the Stackgres Helm chart to deploy an instance of a Stackgres cluster.
# The Stackgres Operator must already be installed within the target
# Kubernetes cluster.
# TODO: Stackgres deployment currently unsupported
useStackgresOperator: false
# Use the Postgres Helm chart to create a single node, stand alone postgres
# instance. See the "postgresql" top level key for additional configuration.
deployStandalone: true
# Settings for Bitnami postgresql chart (if db.deployStandalone is true, ignored
# otherwise)
postgresql:
architecture: standalone
auth:
username: litellm
database: litellm
# You should override these on the helm command line with
# `--set postgresql.auth.postgres-password=<some good password>,postgresql.auth.password=<some good password>`
password: NoTaGrEaTpAsSwOrD
postgres-password: NoTaGrEaTpAsSwOrD
# A secret is created by this chart (litellm-helm) with the credentials that
# the new Postgres instance should use.
# existingSecret: ""
# secretKeys:
# userPasswordKey: password
# requires cache: true in the proxy config file
# either enable this to deploy a Redis instance, or pass a secret for REDIS_HOST, REDIS_PORT, REDIS_PASSWORD or REDIS_URL
# (together with cache: true) to use an existing redis instance
redis:
enabled: false
architecture: standalone
# Prisma migration job settings
migrationJob:
enabled: true # Enable or disable the schema migration Job
retries: 3 # Number of retries for the Job in case of failure
backoffLimit: 4 # Backoff limit for Job restarts
disableSchemaUpdate: false # Skip schema migrations for specific environments. When True, the job will exit with code 0.
annotations: {}
ttlSecondsAfterFinished: 120
extraContainers: []
# Hook configuration
hooks:
argocd:
enabled: true
helm:
enabled: false
# Additional environment variables to be added to the deployment as a map of key-value pairs
envVars: {
# USE_DDTRACE: "true"
}
# Additional environment variables to be added to the deployment as a list of k8s env vars
extraEnvVars: []
# - name: EXTRA_ENV_VAR
#   value: EXTRA_ENV_VAR_VALUE