
6 posts tagged with "security"


Security Update: Vulnerability Disclosures and Ongoing Hardening

Krrish Dholakia
CEO, LiteLLM
Ishaan Jaff
CTO, LiteLLM

After the supply chain incident in March, we brought in Veria Labs to audit the LiteLLM proxy and resolved a number of vulnerabilities reported by independent researchers. All issues below are fixed in v1.83.0. If you are affected, particularly if you have JWT auth enabled, we recommend upgrading.

We've also launched a bug bounty program, and Veria Labs is continuing to audit the proxy. More fixes will ship in upcoming versions.

The two high-severity issues (CVE-2026-35029 and GHSA-69x8-hrgq-fjj8) both require the attacker to already have a valid API key for the proxy. These are not exploitable by unauthenticated users.

The critical-severity issue (CVE-2026-35030) is an authentication bypass, but only affects deployments with enable_jwt_auth explicitly enabled, which is off by default. The default LiteLLM configuration is not affected, and no LiteLLM Cloud customers had this feature enabled.
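To check whether a deployment is in scope for CVE-2026-35030, look for the JWT flag in your proxy configuration. In LiteLLM's config.yaml this typically sits under general_settings; the fragment below is illustrative, and key placement may vary by version:

```yaml
general_settings:
  enable_jwt_auth: true   # only deployments with this set are affected; the flag is off by default
```

If the flag is absent or false, the default configuration applies and the bypass does not affect you.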

Announcing CI/CD v2 for LiteLLM

Krrish Dholakia
CEO, LiteLLM

CI/CD v2 is now live for LiteLLM.

Building on the roadmap from our security incident, it introduces isolated environments, stronger security gates, and safer release separation.

What changed​

  • Security scans and unit tests run in isolated environments.
  • Validation and release are separated into different repositories, making it harder for an attacker to reach release credentials.
  • Trusted Publishing for PyPI releases - this means no long-lived credentials are used to publish releases.
  • Immutable Docker release tags - release tags can no longer be modified after they are published (Learn more). Note: immutable tags for GHCR Docker releases are planned as well.
  • Docker image signing with Cosign - all release images are signed so users can independently verify they came from us.
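For context on the Trusted Publishing change: PyPI's Trusted Publishing exchanges a short-lived GitHub Actions OIDC token for a publish token, so no long-lived PyPI credential is stored in CI. A minimal publish job is sketched below; this is an illustrative example, not LiteLLM's actual workflow:

```yaml
jobs:
  pypi-publish:
    runs-on: ubuntu-latest
    environment: release
    permissions:
      id-token: write        # required: lets the job mint an OIDC token for PyPI
    steps:
      - uses: actions/download-artifact@v4   # fetch the built distributions from an earlier job
        with:
          name: dist
          path: dist/
      - uses: pypa/gh-action-pypi-publish@release/v1   # exchanges the OIDC token for a short-lived publish token
```

Because the token is minted per run and scoped to the configured project, there is no standing credential for an attacker to steal.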

Verify Docker image signatures​

Starting from v1.83.0-nightly, all LiteLLM Docker images published to GHCR are signed with cosign. Every release is signed with the same key introduced in commit 0112e53.

Verify using the pinned commit hash (recommended):

A commit hash is cryptographically immutable, so this is the strongest way to ensure you are using the original signing key:

cosign verify \
--key https://raw.githubusercontent.com/BerriAI/litellm/0112e53046018d726492c814b3644b7d376029d0/cosign.pub \
ghcr.io/berriai/litellm:<release-tag>

Verify using a release tag (convenience):

Tags are protected in this repository and resolve to the same key. This option is easier to read but relies on tag protection rules:

cosign verify \
--key https://raw.githubusercontent.com/BerriAI/litellm/<release-tag>/cosign.pub \
ghcr.io/berriai/litellm:<release-tag>

Replace <release-tag> with the version you are deploying (e.g. v1.83.0-stable).

Expected output:

The following checks were performed on each of these signatures:
- The cosign claims were validated
- The signatures were verified against the specified public key

What's next​

Moving forward, we plan on:

  • Adopting OpenSSF best practices (a set of security criteria that projects should meet to demonstrate a strong security posture - Learn more)

    • We've added Scorecard and Allstar to our GitHub organization
  • Adding SLSA Build Provenance to our CI/CD pipeline - this lets users independently verify that a release came from us and prevents silent modification of releases after they are published.

We hope this gives you confidence that the releases you use are safe and came from us.

The principle​

The new CI/CD pipeline is designed around the following principles:

  • Limit what each package can access
  • Reduce the number of sensitive environment variables
  • Avoid compromised packages
  • Prevent release tampering

How to help

Help us plan April's stability sprint - https://github.com/BerriAI/litellm/issues/24825

Security Update: Suspected Supply Chain Incident

Krrish Dholakia
CEO, LiteLLM
Ishaan Jaff
CTO, LiteLLM

Status: Active investigation
Last updated: March 27, 2026

Update (March 30): A clean version of LiteLLM is now available (v1.83.0). It was released through our new CI/CD v2 pipeline, which adds isolated environments, stronger security gates, and safer release separation.

Update (March 27): Added Townhall updates, including an explanation of the incident, what we've done, and what comes next. Learn more

Update (March 27): Added Verified safe versions section with SHA-256 checksums for all audited PyPI and Docker releases.

Update (March 26): Added checkmarx[.]zone to Indicators of compromise

Update (March 25): Added community-contributed scripts for scanning GitHub Actions and GitLab CI pipelines for the compromised versions. See How to check if you are affected. Shout-out to @Zach Fury for these scripts.

TL;DR​

  • The compromised PyPI packages were litellm==1.82.7 and litellm==1.82.8. Those packages were live on March 24, 2026 from 10:39 UTC for about 40 minutes before being quarantined by PyPI.
  • We believe that the compromise originated from the Trivy dependency used in our CI/CD security scanning workflow.
  • Customers running the official LiteLLM Proxy Docker image were not impacted. That deployment path pins dependencies in requirements.txt and does not rely on the compromised PyPI packages.
  • We have paused all new LiteLLM releases until we complete a broader supply-chain review and confirm the release path is safe. Update: we have since released a safe version (v1.83.0) through our new CI/CD v2 pipeline, which adds isolated environments, stronger security gates, and safer release separation. We have also verified that the codebase is safe and that no malicious code was pushed to main.

Overview​

LiteLLM AI Gateway is investigating a suspected supply chain attack involving unauthorized PyPI package publishes. Current evidence suggests a maintainer's PyPI account may have been compromised and used to distribute malicious code.

At this time, we believe this incident may be linked to the broader Trivy security compromise, in which stolen credentials were reportedly used to gain unauthorized access to the LiteLLM publishing pipeline.

This investigation is ongoing. Details below may change as we confirm additional findings.

Confirmed affected versions​

The following LiteLLM versions published to PyPI were impacted:

  • v1.82.7: contained a malicious payload in the LiteLLM AI Gateway proxy_server.py
  • v1.82.8: contained litellm_init.pth and a malicious payload in the LiteLLM AI Gateway proxy_server.py

If you installed or ran either of these versions, review the recommendations below immediately.

Note: These versions have already been removed from PyPI.

What happened​

Initial evidence suggests the attacker bypassed official CI/CD workflows and uploaded malicious packages directly to PyPI.

These compromised versions appear to have included a credential stealer designed to:

  • Harvest secrets by scanning for:
    • environment variables
    • SSH keys
    • cloud provider credentials (AWS, GCP, Azure)
    • Kubernetes tokens
    • database passwords
  • Encrypt and exfiltrate data via a POST request to models.litellm.cloud, which is not an official BerriAI / LiteLLM domain

Who is affected​

You may be affected if any of the following are true:

  • You installed or upgraded LiteLLM via pip on March 24, 2026, between 10:39 UTC and 16:00 UTC
  • You ran pip install litellm without pinning a version and received v1.82.7 or v1.82.8
  • You built a Docker image during this window that included pip install litellm without a pinned version
  • A dependency in your project pulled in LiteLLM as a transitive, unpinned dependency (for example through AI agent frameworks, MCP servers, or LLM orchestration tools)
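To check the last point, one quick (admittedly rough) sweep is to grep your project's dependency manifests for litellm references and confirm each one is pinned; the file globs below are common conventions, so adjust them for your repo layout:

```shell
# Find litellm references in dependency manifests so you can confirm
# whether the version is pinned (e.g. litellm==1.82.6) or floating.
grep -RIn 'litellm' \
  --include='requirements*.txt' \
  --include='Dockerfile' \
  --include='pyproject.toml' . \
  || echo "no litellm references found in dependency manifests"
```

Any unpinned reference (bare `litellm` or a loose range) could have resolved to a compromised version during the affected window.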

You are not affected if any of the following are true:

  • You are using LiteLLM Cloud
  • You are using the official LiteLLM AI Gateway Docker image (ghcr.io/berriai/litellm); that deployment path pins dependencies in requirements.txt and does not rely on the compromised PyPI packages
  • You are on v1.82.6 or earlier and did not upgrade during the affected window
  • You installed LiteLLM from source via the GitHub repository, which was not compromised

How to check if you are affected​

Check which version of litellm is installed:

pip show litellm
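Building on pip show, a small shell helper (a sketch; adjust for virtualenvs and multiple interpreters) can flag the compromised versions directly:

```shell
# Flag the known-compromised LiteLLM versions in the current environment.
installed="$(pip show litellm 2>/dev/null | awk '/^Version:/ {print $2}')"
case "$installed" in
  1.82.7|1.82.8) echo "WARNING: compromised litellm $installed is installed" ;;
  "")            echo "litellm is not installed in this environment" ;;
  *)             echo "litellm $installed is not a known-compromised version" ;;
esac
```

Run it in every environment that could have installed litellm during the affected window, not just your primary one.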

CI/CD scripts contributed by the community (original gist). Review before running.

Indicators of compromise (IoCs)​

Review affected systems for the following indicators:

  • litellm_init.pth present in your site-packages
  • Outbound traffic or requests to models.litellm[.]cloud This domain is not affiliated with LiteLLM
  • Outbound traffic or requests to checkmarx[.]zone This domain is not affiliated with LiteLLM
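On hosts you suspect, it can also help to grep recent logs for the IoC domains. The /var/log path below is an example; point the search at your own application and egress logs:

```shell
# Search logs for references to the IoC domains (dots escaped for grep).
for domain in 'models\.litellm\.cloud' 'checkmarx\.zone'; do
  if grep -RIl -- "$domain" /var/log 2>/dev/null; then
    echo "found references matching $domain"
  else
    echo "no references matching $domain in /var/log"
  fi
done
```

A match is not proof of compromise on its own, but any hit warrants the full incident response steps below.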

Immediate actions for affected users​

If you installed or ran v1.82.7 or v1.82.8, take the following actions immediately.

1. Rotate all secrets​

Treat any credentials present on the affected systems as compromised, including:

  • API keys
  • Cloud access keys
  • Database passwords
  • SSH keys
  • Kubernetes tokens
  • Any secrets stored in environment variables or configuration files

2. Inspect your filesystem​

Check your site-packages directory for a file named litellm_init.pth:

find /usr/lib/python3.13/site-packages/ -name "litellm_init.pth"

If present:

  • remove it immediately
  • investigate the host for further compromise
  • preserve relevant artifacts if your security team is performing forensics
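The find path above assumes a system Python 3.13 install. A broader sweep over every site-packages directory on the host, including virtualenvs, might look like the sketch below (this can be slow on large filesystems; narrow the root path if needed):

```shell
# Locate every site-packages directory, then check each one for the IoC file.
find / -type d -name 'site-packages' 2>/dev/null |
while read -r dir; do
  find "$dir" -maxdepth 1 -name 'litellm_init.pth' -print
done
```

Any path printed is a hit and should be treated per the steps above.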

3. Audit version history​

Review your:

  • Local environments
  • CI/CD pipelines
  • Docker builds
  • Deployment logs

Confirm whether v1.82.7 or v1.82.8 was installed anywhere.

Pin LiteLLM to a known safe version such as v1.82.6 or earlier, or to a later verified release once announced.
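In a requirements file, a strict pin with pip's hash-checking mode looks like the fragment below; the digest is a placeholder, so substitute the SHA-256 you verified for the exact artifact:

```text
# requirements.txt - hash-checking mode rejects any artifact whose digest differs
litellm==1.82.6 \
    --hash=sha256:<verified-sha256-digest>
```

Note that once any requirement carries a --hash, pip requires hashes for every requirement in the file, not just litellm.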

Response and remediation​

The LiteLLM AI Gateway team has already taken the following steps:

  • Removed compromised packages from PyPI
  • Rotated maintainer credentials and established new authorized maintainers
  • Engaged Google's Mandiant security team to assist with forensic analysis of the build and publishing chain

Verify Docker image signatures​

Starting from v1.83.0-nightly, all LiteLLM Docker images published to GHCR are signed with cosign. Every release is signed with the same key introduced in commit 0112e53.

Verify using the pinned commit hash (recommended):

A commit hash is cryptographically immutable, so this is the strongest way to ensure you are using the original signing key:

cosign verify \
--key https://raw.githubusercontent.com/BerriAI/litellm/0112e53046018d726492c814b3644b7d376029d0/cosign.pub \
ghcr.io/berriai/litellm:<release-tag>

Verify using a release tag (convenience):

Tags are protected in this repository and resolve to the same key. This option is easier to read but relies on tag protection rules:

cosign verify \
--key https://raw.githubusercontent.com/BerriAI/litellm/<release-tag>/cosign.pub \
ghcr.io/berriai/litellm:<release-tag>

Replace <release-tag> with the version you are deploying (e.g. v1.83.0-stable).

Expected output:

The following checks were performed on each of these signatures:
- The cosign claims were validated
- The signatures were verified against the specified public key

Verified safe versions​

We have audited every LiteLLM release published between v1.78.0 and v1.82.6 across both PyPI and Docker. Each artifact was verified by:

  1. Downloading the published artifact and computing its SHA-256 digest
  2. Scanning for the known indicators of compromise (IOCs)
  3. Comparing the artifact contents against the corresponding Git commit in the BerriAI/litellm repository

All versions listed below are confirmed clean.

| Version | SHA-256 (truncated) | Clean of IoCs | Matches Git | Git Commit | Status |
|---|---|---|---|---|---|
| 1.82.6 | 164a3ef3e19f309e… | ✔ CLEAN | ✔ YES | 38d477507dad | ✔ CLEAN |
| 1.82.5 | e1012ab816352215… | ✔ CLEAN | ✔ YES | 1998c4f3703f | ✔ CLEAN |
| 1.82.4 | d37c34a847e7952a… | ✔ CLEAN | ✔ YES | cfeafbe38811 | ✔ CLEAN |
| 1.82.3 | 609901f6c5a5cf8c… | ✔ CLEAN | ✔ YES | 61409275c8d8 | ✔ CLEAN |
| 1.82.2 | 641ed024774fa3d5… | ✔ CLEAN | ✔ YES | f351bbdb3683 | ✔ CLEAN |
| 1.82.1 | a9ec3fe42eccb161… | ✔ CLEAN | ✔ YES | 94b002066e3a | ✔ CLEAN |
| 1.82.0 | 5496b5d4532cccdc… | ✔ CLEAN | ✔ YES | 6c6585af568e | ✔ CLEAN |
| 1.81.16 | d6bcc13acbd26719… | ✔ CLEAN | ✔ YES | 678200ee4887 | ✔ CLEAN |
| 1.81.15 | 2fa253658702509c… | ✔ CLEAN | ✔ YES | 2e819656cee9 | ✔ CLEAN |
| 1.81.14 | 6394e61bbdef7121… | ✔ CLEAN | ✔ YES | 96bcee0b0af7 | ✔ CLEAN |
| 1.81.13 | ae4aea2a55e85993… | ✔ CLEAN | ✔ YES | cc957a19a560 | ✔ CLEAN |
| 1.81.12 | 219cf9729e5ea30c… | ✔ CLEAN | ✔ YES | ba0d541b1982 | ✔ CLEAN |
| 1.81.11 | 06a66c24742e082d… | ✔ CLEAN | ✔ YES | 231aedeeff7e | ✔ CLEAN |
| 1.81.10 | 9efa1cbe61ac051f… | ✔ CLEAN | ✔ YES | 7488abece8e7 | ✔ CLEAN |
| 1.81.9 | 24ee273bc8a62299… | ✔ CLEAN | ✔ YES | a09d3e9162eb | ✔ CLEAN |
| 1.81.8 | 78cca92f36bc6c26… | ✔ CLEAN | ✔ YES | 4fea649f519b | ✔ CLEAN |
| 1.81.7 | 58466c88c3289c6a… | ✔ CLEAN | ✔ YES | 3f6a281d0f7a | ✔ CLEAN |
| 1.81.6 | 573206ba194d49a1… | ✔ CLEAN | ✔ YES | 8da3a93e6e63 | ✔ CLEAN |
| 1.81.5 | 206505c5a0c6503e… | ✔ CLEAN | ✔ YES | 2cc3778761d4 | ✔ CLEAN |
| 1.81.3 | 3f60fd8b72758795… | ✔ CLEAN | ✔ YES | f30742fe6e8e | ✔ CLEAN |
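To reproduce step 1 of the audit for an artifact you've downloaded (e.g. via pip download litellm==1.82.6 --no-deps), recompute the digest and compare it against the table. The table truncates digests, so only a prefix comparison is possible here; the filename below is hypothetical:

```shell
# Recompute an artifact's SHA-256 and compare its prefix against the audit table.
EXPECTED_PREFIX='164a3ef3e19f309e'   # truncated digest from the 1.82.6 row
ARTIFACT='litellm-1.82.6.tar.gz'     # placeholder name for the downloaded artifact
actual="$(sha256sum "$ARTIFACT" | awk '{print $1}')"
case "$actual" in
  "$EXPECTED_PREFIX"*) echo 'digest prefix matches the audit table' ;;
  *)                   echo "MISMATCH: $actual" ;;
esac
```

For a definitive check, compare the full digest against the one published on PyPI for that release rather than the truncated value.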

Questions and support​

If you believe your systems may be affected, contact us immediately:

  • Security: security@berri.ai
  • Support: support@berri.ai
  • Slack: Reach out to the LiteLLM team directly

For real-time updates, follow LiteLLM (YC W23) on X.

Incident Report: Guardrail logging exposed secret headers in spend logs and traces

LiteLLM Team
LiteLLM Core Team

Date: March 18, 2026
Duration: Unknown
Severity: High
Status: Resolved

Summary​

When a custom guardrail returned the full LiteLLM request/data dictionary, the guardrail response logged by LiteLLM could include secret_fields.raw_headers, including plaintext Authorization headers containing API keys or other credentials.

This information could then propagate to logging and observability surfaces that consume guardrail metadata, including:

  • Spend logs in the LiteLLM UI: visible to admins with access to spend-log data
  • OpenTelemetry traces: visible to anyone with access to the relevant telemetry backend

LLM calls, proxy routing, and provider execution were not blocked by this bug. The impact was exposure of sensitive request headers in observability and logging paths.