blog.langchain.com
Authored by Aliyan Ishfaq. Coding agents are great at writing code that uses popular libraries on which LLMs have been heavily trained. But point them to a custom library, a new version of a library, an internal API, or a niche framework, and they're not so great. That's a problem for teams working with domain-specific libraries or enterprise code. As developers of libraries (LangGraph, LangChain…)
The use of AI in software engineering has evolved over the past two years. It started as autocomplete, then became a copilot in an IDE, and in the past few months has evolved into long-running, more end-to-end agents that run asynchronously in the cloud. We believe that all agents will look more like this in the future: long-running, asynchronous, more autonomous. Specifically, we think that…
Using an LLM to call tools in a loop is the simplest form of an agent. This architecture, however, can yield agents that are “shallow” and fail to plan and act over longer, more complex tasks. Applications like “Deep Research”, “Manus”, and “Claude Code” have gotten around this limitation by implementing a combination of four things: a planning tool, sub-agents, access to a file system, and a detailed prompt.
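As a rough illustration of the basic tool-calling loop that this post starts from, here is a minimal sketch; `call_llm`, `search_docs`, and the message format are hypothetical stand-ins, not code from the post or from any LangChain API.

```python
# Minimal sketch of "an LLM calling tools in a loop".
# `call_llm` and the tool registry are hypothetical stand-ins, not a real API.
import json

def search_docs(query: str) -> str:
    """Toy tool: pretend to search documentation."""
    return f"results for: {query}"

TOOLS = {"search_docs": search_docs}

def call_llm(messages: list[dict]) -> dict:
    """Stand-in for a chat-model call that may return a tool call."""
    raise NotImplementedError("wire up your model provider here")

def run_agent(task: str, max_steps: int = 10) -> str:
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        reply = call_llm(messages)
        if "tool_call" not in reply:          # model answered directly: we're done
            return reply["content"]
        name = reply["tool_call"]["name"]
        args = json.loads(reply["tool_call"]["arguments"])
        result = TOOLS[name](**args)          # execute the requested tool
        messages.append({"role": "tool", "name": name, "content": result})
    return "stopped: step limit reached"
```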
TL;DR: Deep research has broken out as one of the most popular agent applications. OpenAI, Anthropic, Perplexity, and Google all have deep research products that produce comprehensive reports using various sources of context. There are also many open source implementations. We've built an open deep researcher that is simple and configurable, allowing users to bring their own models, search tools, and…
TL;DR: Agents need context to perform tasks. Context engineering is the art and science of filling the context window with just the right information at each step of an agent’s trajectory. In this post, we break down some common strategies (write, select, compress, and isolate) for context engineering by reviewing various popular agents and papers. We then explain how LangGraph is designed to support…
Header image from Dex Horthy on Twitter. Context engineering is building dynamic systems to provide the right information and tools in the right format such that the LLM can plausibly accomplish the task. Most of the time when an agent is not performing reliably, the underlying cause is that the appropriate context, instructions, and tools have not been communicated to the model. LLM applications are…
Late last week, two great blog posts were released with seemingly opposite titles: “Don’t Build Multi-Agents” by the Cognition team, and “How we built our multi-agent research system” by the Anthropic team. Despite their opposing titles, I would argue they actually have a lot in common and contain some insights as to how and when to build multi-agent systems: context engineering is crucial, multi-agent…
By Will Fu-Hinthorn. In this blog, we explore a few common multi-agent architectures. We discuss both the motivations and constraints of different architectures. We benchmark their performance on a variant of the Tau-bench dataset. Finally, we discuss improvements we made to our “supervisor” implementation that yielded a nearly 50% increase in performance on this benchmark. Motivators for multi-agent…
Co-authored by Assaf Elovic and Harrison Chase. You can also find a version of this post published on Assaf's Medium. Why do some AI products explode in adoption while others struggle to gain traction? After a decade of building AI products and watching hundreds of launches across the industry, we’ve noticed a pattern that has almost nothing to do with model accuracy or technical sophistication…
LangGraph Platform is now Generally Available: Deploy & manage long-running, stateful agents. LangGraph Platform, our infrastructure for deploying and managing agents at scale, is now generally available. Today we’re excited to announce the general availability of LangGraph Platform, our purpose-built infrastructure and management layer for deploying and scaling long-running, stateful agents…
Model Context Protocol (MCP) is creating quite the stir on Twitter, but is it actually useful, or just noise? In this back-and-forth, Harrison Chase (LangChain CEO) and Nuno Campos (LangGraph Lead) debate whether MCP lives up to the hype. Harrison: I’ll take the position that MCP is actually useful. I was skeptical of it at first, but I’ve begun to see its value. Essentially, MCP is useful when you…
Beyond RAG: Implementing Agent Search with LangGraph for Smarter Knowledge Retrieval. Editor's note: this is a guest post from our friends at Onyx. As LangGraph has matured, we've seen more and more companies (Klarna, Replit, AppFolio, etc.) start to use it as their agent framework of choice. We thought this was a great blog describing in detail how that evaluation is done. You can read a version of…
Making it easier to build human-in-the-loop agents with interrupt. While agents can be powerful, they are not perfect. This often makes it important to keep the human “in the loop” when building agents. For example, in the fireside chat we did with Michele Catasta (President of Replit) about their Replit Agent, he speaks several times about the human-in-the-loop component being crucial to their agent…
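A minimal human-in-the-loop sketch of the idea, assuming a recent langgraph release that exposes `interrupt` and `Command` (names and behavior may differ by version; this is not the post's own code):

```python
# Minimal human-in-the-loop sketch, assuming a recent langgraph release that
# exposes `interrupt` / `Command` (check your installed version's docs).
from typing import TypedDict
from langgraph.graph import StateGraph, START, END
from langgraph.types import interrupt, Command
from langgraph.checkpoint.memory import MemorySaver

class State(TypedDict):
    draft: str
    approved: bool

def write_draft(state: State) -> State:
    return {"draft": "proposed email text", "approved": False}

def human_review(state: State) -> State:
    # Pauses the graph and surfaces the draft; execution resumes
    # when a human sends a value back with Command(resume=...).
    decision = interrupt({"draft": state["draft"], "question": "Approve?"})
    return {"draft": state["draft"], "approved": bool(decision)}

builder = StateGraph(State)
builder.add_node("write_draft", write_draft)
builder.add_node("human_review", human_review)
builder.add_edge(START, "write_draft")
builder.add_edge("write_draft", "human_review")
builder.add_edge("human_review", END)

graph = builder.compile(checkpointer=MemorySaver())  # interrupts need a checkpointer

config = {"configurable": {"thread_id": "demo"}}
graph.invoke({"draft": "", "approved": False}, config)   # runs until the interrupt
graph.invoke(Command(resume=True), config)               # human approves; graph finishes
```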
Reflection Agents. Reflection is a prompting strategy used to improve the quality and success rate of agents and similar AI systems. This post outlines how to build three reflection techniques using LangGraph, including implementations of Reflexion and Language Agent Tree Search. Key links: Simple Reflection (Python), Reflexion (Python), Language Agent Tree Search (Python), YouTube.
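As a rough illustration of the reflection pattern the post describes (not the post's own implementations), here is a generic generate/critique/revise loop; the prompt wording and the `gpt-4o-mini` model name are arbitrary placeholders:

```python
# Generic generate/critique/revise loop illustrating the reflection pattern;
# the prompts and model name are placeholders, not code from the post.
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")
generator = (ChatPromptTemplate.from_template(
    "Write an essay about {topic}.\nPrevious feedback (may be empty): {feedback}")
    | llm | StrOutputParser())
critic = (ChatPromptTemplate.from_template(
    "Critique this essay and list concrete improvements:\n{essay}")
    | llm | StrOutputParser())

def reflect(topic: str, rounds: int = 2) -> str:
    feedback, essay = "", ""
    for _ in range(rounds):
        essay = generator.invoke({"topic": topic, "feedback": feedback})
        feedback = critic.invoke({"essay": essay})   # the reflection step
    return essay
```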
TL;DR: We are introducing a new tool_calls attribute on AIMessage. More and more LLM providers are exposing APIs for reliable tool calling. The goal of the new attribute is to provide a standard interface for interacting with tool invocations. This is fully backwards compatible and is supported on all models that have native tool-calling support. In order to access these latest features you will…
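For context, a hedged sketch of what accessing the standardized attribute can look like; the `ChatOpenAI` model name, import paths, and the toy `multiply` tool are illustrative assumptions and may differ across versions:

```python
# Sketch of the standardized `tool_calls` attribute; import paths and the
# model name may differ depending on your installed versions.
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI

@tool
def multiply(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b

llm = ChatOpenAI(model="gpt-4o-mini").bind_tools([multiply])
msg = llm.invoke("What is 6 times 7?")

# Each entry is a plain dict with the tool name, parsed args, and an id,
# regardless of which provider produced it.
for call in msg.tool_calls:
    print(call["name"], call["args"], call["id"])
```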
Enhancing RAG-based application accuracy by constructing and leveraging knowledge graphs: a practical guide to constructing and retrieving information from knowledge graphs in RAG applications with Neo4j and LangChain. Editor's note: the following is a guest blog post from Tomaz Bratanic, who focuses on Graph ML and GenAI research at Neo4j. Neo4j is a graph database and analytics company which helps…
Links: Python examples, JS examples, YouTube. Last week we highlighted LangGraph, a new package (available in both Python and JS) to better enable the creation of LLM workflows containing cycles, which are a critical component of most agent runtimes. As part of the launch, we highlighted two simple runtimes: one that is the equivalent of the AgentExecutor in langchain, and a second that was a version of the…
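To make the "cycles" point concrete, here is a small sketch of a cyclic graph using the present-day LangGraph API (which has evolved since this launch post); the weather tool and model name are placeholders:

```python
# Sketch of a cyclic LangGraph workflow: the agent node loops with a tool
# node until the model stops emitting tool calls.
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph, MessagesState, START, END
from langgraph.prebuilt import ToolNode

@tool
def get_weather(city: str) -> str:
    """Toy tool returning canned weather."""
    return f"It is sunny in {city}."

llm = ChatOpenAI(model="gpt-4o-mini").bind_tools([get_weather])

def agent(state: MessagesState):
    return {"messages": [llm.invoke(state["messages"])]}

def should_continue(state: MessagesState) -> str:
    # If the last AI message requested a tool, loop through the tool node.
    return "tools" if state["messages"][-1].tool_calls else END

builder = StateGraph(MessagesState)
builder.add_node("agent", agent)
builder.add_node("tools", ToolNode([get_weather]))
builder.add_edge(START, "agent")
builder.add_conditional_edges("agent", should_continue)
builder.add_edge("tools", "agent")   # the cycle: tool results go back to the agent

graph = builder.compile()
result = graph.invoke({"messages": [("user", "What's the weather in Paris?")]})
```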
Today we’re excited to announce the release of langchain 0.1.0, our first stable version. It is fully backwards compatible, comes in both Python and JavaScript, and comes with improved focus through both functionality and documentation. A stable version of LangChain helps us earn developer trust and gives us the ability to evolve the library systematically and safely. Python GitHub discussion…
In 2023 we saw an explosion of interest in Generative AI on the heels of ChatGPT. All companies, from startups to enterprises, were (and still are) trying to figure out their GenAI strategy. "How can we incorporate GenAI into our product? What reference architectures should we be following? What models are best for our use case? What is the technology stack we should be using? How can we test…"
Context: At their demo day, OpenAI reported a series of RAG experiments for a customer that they worked with. While evaluation metrics will depend on your specific application, it’s interesting to see what worked and what didn't for them. Below, we expand on each method mentioned and show how you can implement each one for yourself. The ability to understand and apply these methods to your application is…
Today we're excited to announce the release of LangChain Templates. LangChain Templates offers a collection of easily deployable reference architectures that anyone can use. We've worked with some of our partners to create a set of easy-to-use templates to help developers get to production more quickly. We will continue to add to this over time. This is a new way to create, share, maintain, and download…
Although this is not a new phenomenon (query expansion has been used in search for years), what is new is the ability to use LLMs to do it. Below are a few variations of papers and retrieval methods that take advantage of this. They all use an LLM to generate a new query (or multiple new queries), and the main difference is the prompt they use to do that generation. Rewrite-Retrieve-Read: This paper…
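A rough sketch of the rewrite-then-retrieve idea under stated assumptions: the prompt wording, model name, and the `retrieve` stand-in are illustrative, not taken from the paper or the post.

```python
# Rewrite-then-retrieve sketch: ask an LLM for a better search query before
# hitting the retriever. The retriever here is a hypothetical stand-in.
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_openai import ChatOpenAI

rewrite_prompt = ChatPromptTemplate.from_template(
    "Rewrite the following question as a concise web-search query.\n"
    "Question: {question}\nSearch query:"
)
rewriter = rewrite_prompt | ChatOpenAI(model="gpt-4o-mini") | StrOutputParser()

def retrieve(query: str) -> list[str]:
    """Stand-in for your search tool or vector store."""
    return [f"document matching '{query}'"]

def rewrite_retrieve_read(question: str) -> list[str]:
    better_query = rewriter.invoke({"question": question})
    return retrieve(better_query)

docs = rewrite_retrieve_read("who is the ceo of the company that makes langchain?")
```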
Applying RAG to Diverse Data Types: Yet RAG on documents that contain semi-structured data (structured tables with unstructured text) and multiple modalities (images) has remained a challenge. With the emergence of several multimodal models, it is now worth considering unified strategies to enable RAG across modalities and semi-structured data. Multi-Vector Retriever: Back in August, we released the…
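As a hedged sketch of the multi-vector idea (index a small summary for retrieval, return the full parent document), assuming current import paths such as `langchain_chroma`, which may differ in your installed versions:

```python
# Multi-vector sketch: embed summaries for retrieval while the docstore
# returns the full (e.g., table or long-text) parent document.
import uuid
from langchain.retrievers.multi_vector import MultiVectorRetriever
from langchain.storage import InMemoryStore
from langchain_core.documents import Document
from langchain_chroma import Chroma
from langchain_openai import OpenAIEmbeddings

id_key = "doc_id"
vectorstore = Chroma(collection_name="summaries", embedding_function=OpenAIEmbeddings())
docstore = InMemoryStore()
retriever = MultiVectorRetriever(vectorstore=vectorstore, docstore=docstore, id_key=id_key)

parents = [Document(page_content="<full table as markdown>"),
           Document(page_content="<long raw text chunk>")]
summaries = ["table of quarterly revenue by region", "overview of the pricing model"]

ids = [str(uuid.uuid4()) for _ in parents]
retriever.vectorstore.add_documents(
    [Document(page_content=s, metadata={id_key: i}) for s, i in zip(summaries, ids)]
)
retriever.docstore.mset(list(zip(ids, parents)))  # retrieval returns the parents

docs = retriever.invoke("revenue by region")
```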
Editor's note: This post was written in collaboration with the Ragas team. One of the things we think and talk about a lot at LangChain is how the industry will evolve to identify new monitoring and evaluation metrics that go beyond traditional MLOps metrics. Ragas is an exciting new framework that helps developers evaluate QA pipelines in new ways. This post shows how LangSmith and Ragas can…
Editor's note: This is another installment of our guest blog posts highlighting interesting and novel use cases. This blog is written by Shroominic, who built an open source implementation of the ChatGPT Code Interpreter. Important links: GitHub repo. In the world of open-source software, there are always exciting developments. Today, I am thrilled to announce a new project that I have been working on…
Over the past two weeks, there has been a massive increase in using LLMs in an agentic manner. Specifically, projects like AutoGPT, BabyAGI, CAMEL, and Generative Agents have popped up. The LangChain community has now implemented some parts of all of those projects in the LangChain framework. While researching and implementing these projects, we’ve tried our best to understand what the differences between…
By Lance Martin. Context: LLM ops platforms, such as LangChain, make it easy to assemble LLM components (e.g., models, document retrievers, data loaders) into chains. Question answering is one of the most popular applications of these chains. But it is often not obvious which parameters (e.g., chunk size) or components (e.g., model choice, VectorDB) yield the best QA performance.
TL;DR: We are adjusting our abstractions to make it easy for retrieval methods other than the LangChain VectorDB object to be used in LangChain. This is done with the goals of (1) allowing retrievers constructed elsewhere to be used more easily in LangChain, and (2) encouraging more experimentation with alternative retrieval methods (like hybrid search). This is backwards compatible, so all existing…
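For illustration, a hedged sketch of plugging a non-VectorDB retrieval method into the retriever interface as it exists in current langchain-core (the class and corpus are toy examples, not code from the post):

```python
# Toy custom retriever: naive keyword match over an in-memory corpus,
# exposed through the standard retriever interface.
from typing import List
from langchain_core.callbacks import CallbackManagerForRetrieverRun
from langchain_core.documents import Document
from langchain_core.retrievers import BaseRetriever

class KeywordRetriever(BaseRetriever):
    """Return documents whose text contains any term from the query."""
    corpus: List[str]

    def _get_relevant_documents(
        self, query: str, *, run_manager: CallbackManagerForRetrieverRun
    ) -> List[Document]:
        terms = query.lower().split()
        hits = [text for text in self.corpus if any(t in text.lower() for t in terms)]
        return [Document(page_content=text) for text in hits]

retriever = KeywordRetriever(corpus=["LangChain supports retrievers.",
                                     "Hybrid search mixes keywords and vectors."])
docs = retriever.invoke("hybrid search")
```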