---
title: v1.55.8-stable
slug: v1.55.8-stable
date: 2024-12-22T10:00:00
authors:
  - name: Krrish Dholakia
    title: CEO, LiteLLM
    url: https://www.linkedin.com/in/krish-d/
    image_url: https://media.licdn.com/dms/image/v2/D4D03AQGrlsJ3aqpHmQ/profile-displayphoto-shrink_400_400/B4DZSAzgP7HYAg-/0/1737327772964?e=1749686400&v=beta&t=Hkl3U8Ps0VtvNxX0BNNq24b4dtX5wQaPFp6oiKCIHD8
  - name: Ishaan Jaffer
    title: CTO, LiteLLM
    url: https://www.linkedin.com/in/reffajnaahsi/
    image_url: https://media.licdn.com/dms/image/v2/D4D03AQGiM7ZrUwqu_Q/profile-displayphoto-shrink_800_800/profile-displayphoto-shrink_800_800/0/1675971026692?e=1741824000&v=beta&t=eQnRdXPJo4eiINWTZARoYTfqh064pgZ-E21pQTSy8jc
tags: [langfuse, fallbacks, new models, azure_storage]
hide_table_of_contents: false
---

import Image from '@theme/IdealImage';

# v1.55.8-stable

A new LiteLLM Stable release [just went out](https://github.com/BerriAI/litellm/releases/tag/v1.55.8-stable). Here are 5 updates since v1.52.2-stable.

`langfuse`, `fallbacks`, `new models`, `azure_storage`

<Image img={require('../../img/langfuse_prmpt_mgmt.png')} />

## Langfuse Prompt Management

Langfuse Prompt Management makes it easy to run experiments or swap a specific model (e.g. `gpt-4o` to `gpt-4o-mini`) in Langfuse, instead of making changes in your application. [Start here](https://docs.litellm.ai/docs/proxy/prompt_management)
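
Below is a minimal sketch of what this looks like from the application side, assuming a LiteLLM proxy on `localhost:4000` and a placeholder model alias `my-langfuse-model` whose prompt and underlying model are managed in Langfuse. Swapping `gpt-4o` for `gpt-4o-mini` happens in Langfuse; the request stays unchanged.

```shell
# "my-langfuse-model" is a hypothetical alias -- its prompt/model are managed in Langfuse.
# The application keeps sending the same OpenAI-compatible request to the proxy.
curl http://localhost:4000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $LITELLM_API_KEY" \
  -d '{
    "model": "my-langfuse-model",
    "messages": [{"role": "user", "content": "Summarize the attached release notes."}]
  }'
```
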
## Control fallback prompts client-side

> Claude prompts are different from OpenAI prompts.

Pass in prompts specific to each model when doing fallbacks. [Start here](https://docs.litellm.ai/docs/proxy/reliability#control-fallback-prompts)
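
As a rough sketch, the per-model prompt rides along with the request. The `fallbacks` entry shape and the model names below are assumptions for illustration; confirm the exact schema in the reliability docs linked above.

```shell
# Sketch only: pass a fallback model together with a prompt written for that model.
# The "fallbacks" entry shape and model names are assumptions -- see the linked docs.
curl http://localhost:4000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $LITELLM_API_KEY" \
  -d '{
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Draft a two-line release summary."}],
    "fallbacks": [
      {
        "model": "claude-3-5-sonnet",
        "messages": [{"role": "user", "content": "You are a concise release-notes writer. Draft a two-line release summary."}]
      }
    ]
  }'
```
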
## New Providers / Models

- [NVIDIA Triton](https://developer.nvidia.com/triton-inference-server) `/infer` endpoint. [Start here](https://docs.litellm.ai/docs/providers/triton-inference-server)
- [Infinity](https://github.com/michaelfeil/infinity) rerank models (example below). [Start here](https://docs.litellm.ai/docs/providers/infinity)
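
For the Infinity rerank models, a minimal sketch of a rerank request through the proxy might look like the following. The alias `infinity-rerank` is a placeholder for whatever you configure, and the request shape follows the proxy's Cohere-style rerank API; check the provider docs linked above for the exact setup.

```shell
# Sketch: rerank two documents against a query via the proxy's /rerank endpoint.
# "infinity-rerank" is a hypothetical model alias defined in your proxy config.
curl http://localhost:4000/rerank \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $LITELLM_API_KEY" \
  -d '{
    "model": "infinity-rerank",
    "query": "What does LiteLLM do?",
    "documents": [
      "LiteLLM is a gateway that exposes 100+ LLMs behind one API.",
      "Triton is an inference server from NVIDIA."
    ]
  }'
```
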
## ✨ Azure Data Lake Storage Support

Send LLM usage (spend, tokens) data to [Azure Data Lake](https://learn.microsoft.com/en-us/azure/storage/blobs/data-lake-storage-introduction). This makes it easy to consume usage data on other services (e.g. Databricks).

[Start here](https://docs.litellm.ai/docs/proxy/logging#azure-blob-storage)
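
At a high level this is a logging callback named after the `azure_storage` tag above. A rough sketch of turning it on is below; the exact setting and environment-variable names are assumptions and should be confirmed against the linked logging docs.

```shell
# Sketch only -- setting/variable names are assumptions; confirm against the linked docs.
# 1) Enable the azure_storage logging callback in the proxy config.
cat >> config.yaml <<'EOF'
litellm_settings:
  callbacks: ["azure_storage"]
EOF

# 2) Point it at your storage account / filesystem (container) before starting the proxy.
export AZURE_STORAGE_ACCOUNT_NAME="my-storage-account"   # assumed variable name
export AZURE_STORAGE_FILE_SYSTEM="litellm-usage-logs"    # assumed variable name
```
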
## Docker Run LiteLLM

```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:litellm_stable_release_branch-v1.55.8-stable
```
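
Once the container is up, a quick way to sanity-check it (assuming the port mapping above) is to hit the proxy's liveliness endpoint:

```shell
# Should return a simple "alive" response if the proxy started correctly.
curl http://localhost:4000/health/liveliness
```
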
## Get Daily Updates

LiteLLM ships new releases every day. [Follow us on LinkedIn](https://www.linkedin.com/company/berri-ai/) to get daily updates.