LLM Classifier

The LLM (Large Language Model) Classifier is an application designed to improve data classification accuracy using advanced natural language processing techniques. It is a separate, optional service that runs alongside Sombra. Customers self-hosting Sombra can run the LLM Classifier in the same private network as Sombra.

Our deployment guides provide instructions on how to deploy the LLM Classifier using different configuration setups. If you are using our recommended Helm Chart, deploying the LLM Classifier simply requires adding:

llm-classifier:
  enabled: true

to your values.yaml file.
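With that flag in place, a standard Helm upgrade rolls out the classifier alongside Sombra. A sketch, assuming the release is named `sombra` and is installed from Transcend's chart repository (substitute your actual release name, chart reference, and namespace):

```shell
# Apply the updated values.yaml to the existing Sombra release.
# The release name, chart reference, and namespace here are assumptions.
helm upgrade sombra transcend/sombra \
  --namespace transcend \
  -f values.yaml
```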

The LLM Classifier container runs a gunicorn server that listens for requests and performs LLM classification on the inputs provided in the request body. Because the LLM performs much more efficiently on Graphics Processing Units (GPUs), the container must run on a node with a supported NVIDIA GPU (see GPU Requirements below).

By default, the LLM Classifier container listens on port 6081. This can be changed with the LLM_SERVER_PORT environment variable.
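For example, when running the container directly with Docker (outside Kubernetes), the port can be overridden at startup. A sketch, assuming the image has already been pulled and the NVIDIA Container Toolkit is installed on the host:

```shell
# Run the classifier on port 8080 instead of the default 6081.
# --gpus requires the NVIDIA Container Toolkit.
docker run --gpus all \
  -e LLM_SERVER_PORT=8080 \
  -p 8080:8080 \
  docker.transcend.io/llm-classifier:<version_tag>
```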

To enable HTTPS connections to the LLM Classifier server, mount the SSL certificate and key files into the container and set their paths using the LLM_CERT_PATH and LLM_KEY_PATH environment variables, respectively.
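As an illustration, a Docker invocation that mounts a local certificate directory might look like the following (the file names under `./certs` are hypothetical; match them to your own certificate and key):

```shell
# Mount the certs read-only and point the server at them to enable HTTPS.
docker run --gpus all \
  -v "$(pwd)/certs:/etc/llm-classifier/ssl:ro" \
  -e LLM_CERT_PATH=/etc/llm-classifier/ssl/llm-classifier.cert \
  -e LLM_KEY_PATH=/etc/llm-classifier/ssl/llm-classifier.key \
  -p 6081:6081 \
  docker.transcend.io/llm-classifier:<version_tag>
```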

You can pull our image from Transcend's private Docker registry using basic authentication.

First, please contact us and request permission to pull the llm-classifier image. We will then add your Transcend account to our permissions list.

Once we have added you to our allow list, you can log in to our private registry:

docker login docker.transcend.io

You will be prompted to enter the basic auth credentials. The username will always be "Transcend" (this is case-sensitive), and the password will be any API Key for your organization within the Admin Dashboard (note: a scope is not required for the API key).
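To avoid the interactive prompt (and keep the key out of your shell history), you can also pipe the API key to `docker login` via standard input:

```shell
# TRANSCEND_API_KEY should hold an API key from your Admin Dashboard.
echo "$TRANSCEND_API_KEY" | docker login docker.transcend.io \
  --username Transcend --password-stdin
```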

Once you've logged in, you may pull images by running:

docker pull docker.transcend.io/llm-classifier:<version_tag>

The LLM Classifier requires a node with an NVIDIA GPU that has enough VRAM to load and run the classification models. The service has been tested on NVIDIA T4, L4, and A10G GPUs. T4 is fully functional but considerably slower and is best suited for development or low-throughput environments. For production workloads, we recommend an L4- or A10G-class GPU: L4 generally offers the best balance of speed and cost, while A10G is slightly faster at a similar VRAM size.

| Requirement | Value |
| --- | --- |
| GPU vendor | NVIDIA |
| Minimum VRAM (functional) | 16 GB |
| Recommended VRAM (production) | 24 GB or more |
| CUDA support | Required (CUDA 12.x) |
| GPU count per node | 1 |
| Cloud provider | Instance type | GPU | GPU VRAM | Instance RAM | vCPU |
| --- | --- | --- | --- | --- | --- |
| AWS | g6.xlarge | 1× NVIDIA L4 | 24 GB | 16 GB | 4 |
| AWS | g4dn.2xlarge (dev/test) | 1× NVIDIA T4 | 16 GB | 32 GB | 8 |
| AWS | g5.xlarge | 1× NVIDIA A10G | 24 GB | 16 GB | 4 |
| AWS | g5.2xlarge | 1× NVIDIA A10G | 24 GB | 32 GB | 8 |
| GCP | g2-standard-8 | 1× NVIDIA L4 | 24 GB | 32 GB | 8 |
| Azure | Standard_NC8as_T4_v3 or similar | 1× NVIDIA T4 or L4 | 16–24 GB | 56 GB | 8 |

Notes: If you use another cloud provider or GPU model, ensure it has at least 16 GB of VRAM, with 24 GB or more recommended for production throughput.

Each LLM Classifier pod requires:

| Resource | Value |
| --- | --- |
| GPU | 1× nvidia.com/gpu |
| Memory | 15 GB |
| Replicas (production) | 2 (minimum recommended) |
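The per-pod requirements above translate into a Kubernetes resources stanza along these lines (a sketch; where it goes depends on your chart's pod template):

```yaml
resources:
  requests:
    memory: '15Gi'
    nvidia.com/gpu: 1
  limits:
    memory: '15Gi'
    nvidia.com/gpu: 1
```

Note that Kubernetes requires extended resources such as nvidia.com/gpu to have requests equal to limits.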

The Kubernetes cluster must support nvidia.com/gpu as a schedulable resource. Ensure the NVIDIA device plugin is installed in your cluster.
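You can confirm that GPUs are schedulable before deploying. A quick check, assuming kubectl access to the cluster:

```shell
# Lists each node's allocatable nvidia.com/gpu count; a non-empty value
# means the NVIDIA device plugin is advertising the GPU to the scheduler.
kubectl get nodes \
  -o custom-columns='NODE:.metadata.name,GPU:.status.allocatable.nvidia\.com/gpu'
```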

| Setting | Value |
| --- | --- |
| Minimum replicas | 1 (dev/staging), 2 (production) |
| Maximum replicas | 2–4 |
| Scale-out trigger (GPU utilization) | ~60% average |
| Scale-out trigger (response time) | ~20 seconds average |

The Horizontal Pod Autoscaler (HPA) monitors GPU utilization and response time. When either metric exceeds its target threshold, additional pods are scheduled. Throughput scales linearly with the number of pods, so adding replicas proportionally increases classification capacity.

| Configuration | Instance type | Pricing model | Estimated monthly cost |
| --- | --- | --- | --- |
| Single node | g6.xlarge | On-demand | ~$590/month |
| Single node | g6.xlarge | 1-year reserved | ~$385/month |
| Single node | g6.xlarge | 3-year reserved | ~$270/month |
| Single node | g5.2xlarge | On-demand | ~$880/month |
| Single node | g5.2xlarge | 1-year reserved | ~$560/month |
| Single node | g5.2xlarge | 3-year reserved | ~$380/month |
| Production (2 nodes) | 2× g5.2xlarge | 1-year reserved | ~$1,120/month |

Pricing is approximate and based on the AWS US East (N. Virginia) region. Check AWS EC2 pricing and reserved instance pricing for current rates. Other cloud providers will have comparable pricing for equivalent GPU instances.

If you need more throughput (more classifications per hour), you can add more instances of the LLM Classifier to linearly scale the throughput.
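Because throughput scales linearly with replica count, sizing comes down to simple arithmetic. A sketch of that calculation, where the per-pod rate (1,000 classifications/hour here) is an assumed figure chosen to illustrate the math, not a benchmarked number:

```python
import math

def replicas_needed(target_per_hour: int, per_pod_per_hour: int, minimum: int = 2) -> int:
    """Pods required to reach a target hourly classification rate.

    Keeps at least `minimum` replicas, matching the recommended
    production floor of 2 pods.
    """
    return max(minimum, math.ceil(target_per_hour / per_pod_per_hour))

# Example: 4,500 classifications/hour at an assumed 1,000/hour per pod.
print(replicas_needed(4_500, 1_000))  # -> 5
```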

This values.yaml adds an accompanying LLM Classifier to your Sombra deployment. The LLM Classifier requires an NVIDIA GPU to run, so please make sure your cluster supports nvidia.com/gpu as a resource.

envs:
  # ... other env vars
  - name: LLM_CLASSIFIER_URL
    value: http://<release-name>-llm-classifier.transcend.svc:6081

llm-classifier:
  enabled: true

Or with TLS termination at Sombra and the LLM Classifier server:

envs:
  # ... other env vars
  - name: LLM_CLASSIFIER_URL
    value: https://<release-name>-llm-classifier.transcend.svc:6081

envs_as_secret:
  # ... other env vars
  - name: SOMBRA_TLS_CERT
    value: <SOMBRA_TLS_CERT>
  - name: SOMBRA_TLS_KEY
    value: <SOMBRA_TLS_KEY>
  # An optional passphrase associated with your TLS private key. If you set a
  # passphrase when you created your key and certificate, you must provide it here.
  - name: SOMBRA_TLS_KEY_PASSPHRASE
    value: <SOMBRA_TLS_KEY_PASSPHRASE>

llm-classifier:
  enabled: true
  tls:
    enabled: true
    # saved as secret
    cert: |-
      -----BEGIN CERTIFICATE-----
      <base64>
      -----END CERTIFICATE-----
    # saved as secret
    key: |-
      -----BEGIN PRIVATE KEY-----
      <base64>
      -----END PRIVATE KEY-----

  # volume containing cert and key
  volumes:
    - name: llm-classifier-ssl
      secret:
        secretName: llm-classifier-secrets

  # mount the directory containing the cert and key to pod
  volumeMounts:
    - mountPath: '/etc/llm-classifier/ssl'
      name: llm-classifier-ssl
      readOnly: true

  # Set the location of cert and key in environment
  envs:
    - name: LLM_CERT_PATH
      value: '/etc/llm-classifier/ssl/llm-classifier.cert'
    - name: LLM_KEY_PATH
      value: '/etc/llm-classifier/ssl/llm-classifier.key'

| Variable | Default | Description |
| --- | --- | --- |
| LLM_SERVER_PORT | 6081 | Port the classifier listens on |
| LLM_SERVER_CONCURRENCY | 1 | Number of gunicorn workers (should match GPU count) |
| LLM_SERVER_WORKER_CONNECTIONS | 1000 | Max simultaneous connections per worker |
| LLM_SERVER_TIMEOUT | 120 | Request timeout in seconds |
| LLM_SERVER_BACKLOG | 500 | Max queued connections |
| LLM_CERT_PATH | (none) | Path to TLS certificate (enables HTTPS) |
| LLM_KEY_PATH | (none) | Path to TLS private key (enables HTTPS) |