
Langfuse Launch Week #3
A week dedicated to deeper insights and seamless integrations
We are excited to announce Langfuse Launch Week #3!
From Monday, May 19 through Saturday, May 24, 2025, we'll ship a fresh feature every single day. Expect deeper insights from your application data, more guidance on getting started with, improving, and evaluating your LLM applications, and more ways to integrate with your favorite tools and frameworks.
We’ll celebrate with a mid-week Virtual Community Hour and our third Product Hunt launch.
Star us on GitHub and follow us on Twitter so you don’t miss an announcement.
Want a taste of what’s in store? Revisit last year’s edition here → Launch Week #2.
Launch Week Focus
This edition is all about making your LLM data work harder for you.
- Deeper insights – new analytics and evaluation tools to get the most out of your application data.
- Seamless integrations – connect your favorite tools and frameworks to Langfuse.
Stay tuned for the daily reveals 👇
🔎 Day 1: Full text search
This has been one of the most upvoted feature requests, and we are thrilled to ship it. You can now find every occurrence of a specific keyword or phrase across your Langfuse traces and observations.
See the video above for a quick tutorial.
📌 Day 2: Save and share table views
Save your most-used table configurations and instantly share them with teammates. Create custom views for different workflows, such as filtering for failed traces or high-cost runs, and access them with a single click. Every saved view gets a unique permalink, so your team can collaborate on the same data without recreating complex filters and sorting.
Available across all major tables in Langfuse: Traces, Observations, Scores, Sessions, and Datasets.
👉 Read the full changelog → Save & share table views
📊 Day 3: Custom Dashboards
Create your own custom dashboards to analyze your AI application in depth. Turn raw data into insights with a wide range of chart types, flexible metrics, and rich filtering options. Choose from our curated dashboards focused on Latency, Cost, and Langfuse usage to get started quickly, or build your own to visualize exactly what matters to your team.
👉 Read the full changelog → Custom Dashboards
Interested in how we built this? Check out the technical deep dive.
🏅 Day 4: Hyperscaler Terraform Modules
Deploy Langfuse on your own infrastructure in minutes using our new Terraform modules for AWS, GCP, and Azure. The modules provision everything you need (networking, databases, storage, and a production-ready Kubernetes cluster) so you can focus on building, not managing infrastructure.
👉 Read the full changelog → Hyperscaler Terraform Modules
🔭 Day 5: OTEL-based Python SDK (beta)
Meet the next-generation Langfuse Python SDK, built on the OpenTelemetry standard! This major update simplifies tracing with unified context propagation that automatically nests operations, even across threads and async tasks. Instrument your code using three flexible styles that can be mixed and matched: the `@observe` decorator, context managers, or manual observations.
The SDK ships today in public beta (`pip install "langfuse>=3.0.0b2"`). We can't wait to hear your feedback!
👉 Read the full changelog → OTEL-based Python SDK
📐 Day 6: Langfuse Evaluator Library
The new Langfuse Evaluator Library makes LLM evaluation easier, more powerful, and production-ready, with an expanded set of prebuilt templates. It ships with RAGAS evaluators (context relevance, goal accuracy, SQL semantic equivalence, and more), Langfuse-managed evaluators, and support for building custom evaluators.
In addition, we’ve revamped the setup flow when creating a new eval to make it easier to get started and understand how the different configurations work.
👉 Read the full changelog → Langfuse Evaluator Library