blog.langchain.dev
Today we’re excited to announce the general availability of LangGraph Platform — our purpose-built infrastructure and management layer for deploying and scaling long-running, stateful agents. Since our beta last June, nearly 400 companies have used LangGraph Platform to deploy their agents into production. Agent deployment is the next hard hurdle for shipping reliable agents…
TL;DR: The hard part of building reliable agentic systems is making sure the LLM has the appropriate context at each step. This includes both controlling the exact content that goes into the LLM and running the appropriate steps to generate relevant content. Agentic systems consist of both workflows and agents (and everything in between). Most agentic frameworks are neither declarative nor imperative…
Model Context Protocol (MCP) is creating quite the stir on Twitter – but is it actually useful, or just noise? In this back and forth, Harrison Chase (LangChain CEO) and Nuno Campos (LangGraph Lead) debate whether MCP lives up to the hype. Harrison: I’ll take the position that MCP is actually useful. I was skeptical of it at first, but I’ve begun to see its value. Essentially, MCP is useful when…
Editor's note: this is a guest post from our friends at Onyx. As LangGraph has matured, we've seen more and more companies (Klarna, Replit, AppFolio, etc.) start to use it as their agent framework of choice. We thought this was a great blog post describing in detail how that evaluation is done. You can read a version of this post on their blog as well. By Evan Lohn and Joachim Rahmfeld. At Onyx, we are dedicated…
By Krish Maniar and William Fu-Hinthorn. If you are interested in beta-testing more prompt optimization techniques, fill out the interest form here. When we write prompts, we attempt to communicate our intent so that LLMs can apply it to messy data, but it's hard to effectively communicate every nuance in one go. Prompting is typically done through manual trial and error, testing and tweaking until things work…
Ambient agents listen to an event stream and act on it accordingly, potentially acting on multiple events at a time. Notably, however, we do not think that ambient agents are necessarily completely autonomous. In fact, we think a key part of bringing ambient agents to the public will be thoughtful consideration as to when and how these agents interact with humans. Human-in-the-loop: We use human-in-the…
While agents can be powerful, they are not perfect. This often makes it important to keep the human “in the loop” when building agents. For example, in the fireside chat we did with Michele Catasta (President of Replit) on their Replit Agent, he speaks several times about the human-in-the-loop component being crucial to their agent design. From the start, we designed LangGraph with this in mind…
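As a rough illustration of that human-in-the-loop idea in LangGraph, here is a minimal sketch that pauses a run before a sensitive step so a person can approve it; the node names, placeholder messages, and refund scenario are invented for the example and are not from the post.

```python
# Sketch: pause a LangGraph run before a sensitive step for human approval.
from langgraph.graph import StateGraph, START, END, MessagesState
from langgraph.checkpoint.memory import MemorySaver

def draft_action(state: MessagesState):
    # Placeholder: an agent would normally decide on an action here.
    return {"messages": [("ai", "Proposed action: refund the customer's order")]}

def execute_action(state: MessagesState):
    # Placeholder: the step we want a human to approve before it runs.
    return {"messages": [("ai", "Action executed")]}

builder = StateGraph(MessagesState)
builder.add_node("draft", draft_action)
builder.add_node("execute", execute_action)
builder.add_edge(START, "draft")
builder.add_edge("draft", "execute")
builder.add_edge("execute", END)

# A checkpointer plus interrupt_before pauses execution before "execute".
graph = builder.compile(checkpointer=MemorySaver(), interrupt_before=["execute"])
config = {"configurable": {"thread_id": "demo-thread"}}

graph.invoke({"messages": [("user", "Please refund my order")]}, config)  # stops at the interrupt
graph.invoke(None, config)  # resume once a human has approved the action
```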
Reflection Agents. Reflection is a prompting strategy used to improve the quality and success rate of agents and similar AI systems. This post outlines how to build three reflection techniques using LangGraph, including implementations of Reflexion and Language Agent Tree Search. Key links: Simple Reflection (Python), Reflexion (Python), Language Agent Tree Search (Python), YouTube…
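The linked implementations use LangGraph; as a bare illustration of the core generate-critique-revise loop, here is a minimal sketch with plain chains, where the prompts, the model name, and the single revision pass are illustrative assumptions.

```python
# Minimal generate -> critique -> revise loop; prompts and model are illustrative.
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")
parse = StrOutputParser()

generate = ChatPromptTemplate.from_template(
    "Write a short essay on: {topic}") | llm | parse
reflect = ChatPromptTemplate.from_template(
    "Critique this essay and list concrete improvements:\n\n{draft}") | llm | parse
revise = ChatPromptTemplate.from_template(
    "Rewrite the essay, addressing the critique.\n\nEssay:\n{draft}\n\nCritique:\n{critique}") | llm | parse

draft = generate.invoke({"topic": "why agents benefit from reflection"})
critique = reflect.invoke({"draft": draft})
final = revise.invoke({"draft": draft, "critique": critique})
print(final)
```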
TL;DR: We are introducing a new tool_calls attribute on AIMessage. More and more LLM providers are exposing APIs for reliable tool calling. The goal with the new attribute is to provide a standard interface for interacting with tool invocations. This is fully backwards compatible and is supported on all models that have native tool-calling support. In order to access these latest features you will…
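As a rough sketch of what the standardized attribute looks like in practice (the toy tool and the model name are illustrative, and an OpenAI API key is assumed):

```python
# Sketch: bind a tool, invoke the model, and read the standardized tool_calls list.
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI

@tool
def multiply(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b

llm = ChatOpenAI(model="gpt-4o-mini").bind_tools([multiply])
ai_msg = llm.invoke("What is 6 times 7?")

# Each entry is a dict with "name", "args", and "id", regardless of provider.
for call in ai_msg.tool_calls:
    print(call["name"], call["args"])
```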
Enhancing RAG-based application accuracy by constructing and leveraging knowledge graphs: a practical guide to constructing and retrieving information from knowledge graphs in RAG applications with Neo4j and LangChain. Editor's Note: the following is a guest blog post from Tomaz Bratanic, who focuses on Graph ML and GenAI research at Neo4j. Neo4j is a graph database and analytics company which helps…
Links: Python Examples, JS Examples, YouTube. Last week we highlighted LangGraph - a new package (available in both Python and JS) to better enable creation of LLM workflows containing cycles, which are a critical component of most agent runtimes. As part of the launch, we highlighted two simple runtimes: one that is the equivalent of the AgentExecutor in langchain, and a second that was a version of the…
Today we’re excited to announce the release of langchain 0.1.0, our first stable version. It is fully backwards compatible, comes in both Python and JavaScript, and comes with improved focus through both functionality and documentation. A stable version of LangChain helps us earn developer trust and gives us the ability to evolve the library systematically and safely. Python GitHub Discussion…
In 2023 we saw an explosion of interest in Generative AI on the heels of ChatGPT. All companies - from startups to enterprises - were (and still are) trying to figure out their GenAI strategy. "How can we incorporate GenAI into our product? What reference architectures should we be following? What models are best for our use case? What is the technology stack we should be using? How can we test…"
Context: At their demo day, OpenAI reported a series of RAG experiments for a customer that they worked with. While evaluation metrics will depend on your specific application, it’s interesting to see what worked and what didn't for them. Below, we expand on each method mentioned and show how you can implement each one for yourself. The ability to understand and apply these methods to your own application is critical…
Today we're excited to announce the release of LangChain Templates. LangChain Templates offers a collection of easily deployable reference architectures that anyone can use. We've worked with some of our partners to create a set of easy-to-use templates to help developers get to production more quickly. We will continue to add to this over time. This is a new way to create, share, maintain, download…
Although this is not a new phenomenon (query expansion has been used in search for years), what is new is the ability to use LLMs to do it. Below are a few variations of papers and retrieval methods that take advantage of this. They all use an LLM to generate a new query (or multiple new queries), and the main difference is the prompt they use to do that generation. Rewrite-Retrieve-Read: This paper…
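The shared pattern across these methods is roughly: use an LLM to produce a better search query, then retrieve with it. A minimal sketch, where the prompt wording, the model name, and the commented-out retriever are illustrative assumptions rather than any specific paper's setup:

```python
# Sketch of LLM-based query rewriting before retrieval.
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

rewrite_prompt = ChatPromptTemplate.from_template(
    "Rewrite the following question into a concise web-search query:\n\n{question}"
)
rewriter = rewrite_prompt | ChatOpenAI(model="gpt-4o-mini") | StrOutputParser()

question = "what did the president say about the economy last night?"
better_query = rewriter.invoke({"question": question})

# docs = retriever.invoke(better_query)  # then retrieve with the rewritten query
print(better_query)
```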
Applying RAG to Diverse Data Types: Yet, RAG on documents that contain semi-structured data (structured tables with unstructured text) and multiple modalities (images) has remained a challenge. With the emergence of several multimodal models, it is now worth considering unified strategies to enable RAG across modalities and semi-structured data. Multi-Vector Retriever: Back in August, we released the…
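As a rough sketch of the multi-vector idea (index a small summary for retrieval, return the full raw document), where the hard-coded summary, the document contents, and the FAISS/in-memory store choices are illustrative assumptions:

```python
# Sketch: embed summaries for search, but hand back the full parent documents.
import uuid

from langchain.retrievers.multi_vector import MultiVectorRetriever
from langchain.storage import InMemoryStore
from langchain_community.vectorstores import FAISS
from langchain_core.documents import Document
from langchain_openai import OpenAIEmbeddings

full_docs = [Document(page_content="<a long table of 2023 quarterly revenue figures>")]
summaries = ["Quarterly revenue table for 2023."]  # normally generated by an LLM
doc_ids = [str(uuid.uuid4()) for _ in full_docs]

summary_docs = [
    Document(page_content=s, metadata={"doc_id": i}) for s, i in zip(summaries, doc_ids)
]
retriever = MultiVectorRetriever(
    vectorstore=FAISS.from_documents(summary_docs, OpenAIEmbeddings()),
    docstore=InMemoryStore(),
    id_key="doc_id",
)
retriever.docstore.mset(list(zip(doc_ids, full_docs)))

# The query matches the summary, but the full document comes back.
print(retriever.invoke("2023 revenue"))
```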
Editor's Note: This post was written in collaboration with the Ragas team. One of the things we think and talk about a lot at LangChain is how the industry will evolve to identify new monitoring and evaluation metrics that go beyond traditional ML ops metrics. Ragas is an exciting new framework that helps developers evaluate QA pipelines in new ways. This post shows how LangSmith and Ragas can…
Editor's Note: This is another installment of our guest blog posts highlighting interesting and novel use cases. This blog is written by Shroominic, who built an open source implementation of the ChatGPT Code Interpreter. Important Links: GitHub Repo. In the world of open-source software, there are always exciting developments. Today, I am thrilled to announce a new project that I have been working on…
Top 5 LangGraph Agents in Production 2024: 2024 was the year that agents started to work in production. Not the wide-ranging, fully autonomous agents that people imagined with AutoGPT, but more vertical…
Over the past two weeks, there has been a massive increase in using LLMs in an agentic manner. Specifically, projects like AutoGPT, BabyAGI, CAMEL, and Generative Agents have popped up. The LangChain community has now implemented some parts of all of those projects in the LangChain framework. While researching and implementing these projects, we’ve tried to best understand what the differences between…
By Lance Martin. Context: LLM ops platforms, such as LangChain, make it easy to assemble LLM components (e.g., models, document retrievers, data loaders) into chains. Question answering is one of the most popular applications of these chains. But it is often not obvious which parameters (e.g., chunk size) or components (e.g., model choice, VectorDB) yield the best QA performance.
TL;DR: We are adjusting our abstractions to make it easy for other retrieval methods besides the LangChain VectorDB object to be used in LangChain. This is done with the goals of (1) allowing retrievers constructed elsewhere to be used more easily in LangChain, and (2) encouraging more experimentation with alternative retrieval methods (like hybrid search). This is backwards compatible, so all existing…
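As a rough sketch of what plugging an alternative retrieval method into the shared retriever interface can look like; the class name and the naive keyword matching are purely illustrative, not part of the library:

```python
# Sketch: a non-VectorDB retriever that satisfies the common retriever interface.
from typing import List

from langchain_core.callbacks import CallbackManagerForRetrieverRun
from langchain_core.documents import Document
from langchain_core.retrievers import BaseRetriever


class KeywordRetriever(BaseRetriever):
    """Illustrative retriever: naive keyword match instead of vector similarity."""

    docs: List[Document]

    def _get_relevant_documents(
        self, query: str, *, run_manager: CallbackManagerForRetrieverRun
    ) -> List[Document]:
        terms = query.lower().split()
        return [d for d in self.docs if any(t in d.page_content.lower() for t in terms)]


retriever = KeywordRetriever(
    docs=[Document(page_content="Hybrid search combines sparse and dense signals.")]
)
print(retriever.invoke("hybrid search"))
```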
We are super excited to team up with Zapier and integrate their new Zapier NLA API into LangChain, which you can now use with your agents and chains. With this integration, you have access to the 5k+ apps and 20k+ actions on Zapier's platform through a natural language API interface. This is extremely powerful and gives your LangChain agents seemingly limitless possibilities. Big shoutout to Mike…
Francisco Ingham and Jon Luo are two of the community members leading the charge on the SQL integrations. We’re really excited to write this blog post with them, going over all the tips and tricks they’ve learned doing so. We’re even more excited to announce that we’ll be doing an hour-long webinar with them to discuss these learnings and field other related questions. This webinar will be on March…
Note: See the accompanying GitHub repo for this blog post here. ChatGPT has taken the world by storm. Millions are using it. But while it’s great for general-purpose knowledge, it only knows information about what it has been trained on, which is generally available internet data from before 2021. It doesn’t know about your private data, and it doesn’t know about recent sources of data. Wouldn’t it be useful if…
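As a rough sketch of the question-answering-over-your-own-data pattern this post introduces; the file path, chunk sizes, model names, and the choice of FAISS are illustrative assumptions (and the package layout shown postdates the original post):

```python
# Sketch: load private docs, index them, and answer a question from retrieved context.
from langchain_community.document_loaders import TextLoader
from langchain_community.vectorstores import FAISS
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from langchain_text_splitters import RecursiveCharacterTextSplitter

docs = TextLoader("my_private_notes.txt").load()
chunks = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100).split_documents(docs)
vectorstore = FAISS.from_documents(chunks, OpenAIEmbeddings())

question = "What did we decide about the Q3 roadmap?"
context = "\n\n".join(d.page_content for d in vectorstore.similarity_search(question, k=4))

answer = ChatOpenAI(model="gpt-4o-mini").invoke(
    f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {question}"
)
print(answer.content)
```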