It is often useful to have a model return output that matches a specific schema. One common use case is extracting data from text to insert into a database or use with some other downstream system. This guide covers a few strategies for getting structured outputs from a model. The .with_structured_output() method is the easiest and most reliable way to get structured outputs.
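As a minimal sketch of this method (assuming the langchain-openai package and an OPENAI_API_KEY environment variable; the model name and schema here are illustrative), a Pydantic model can be bound to a chat model so results come back as typed objects:

```python
from pydantic import BaseModel, Field
from langchain_openai import ChatOpenAI

# Illustrative schema; any Pydantic model (or a JSON schema dict) works.
class Person(BaseModel):
    name: str = Field(description="The person's full name")
    age: int = Field(description="The person's age in years")

llm = ChatOpenAI(model="gpt-4o-mini")  # model name is an assumption
structured_llm = llm.with_structured_output(Person)

# The result is a Person instance rather than a raw chat message.
person = structured_llm.invoke("Ada Lovelace was 36 when she died.")
print(person.name, person.age)
```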
The LCEL cheatsheet shows common patterns that involve the Runnable interface and LCEL expressions. Please see the how-to guides that cover common tasks with LCEL. A list of built-in Runnables can be found in the LangChain Core API Reference. Many of these Runnables are useful when composing custom "chains" in LangChain using LCEL. A key benefit of LCEL is that LangChain optimizes the run-time execution of chains built with it.
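For context, a typical LCEL composition chains Runnables together with the | operator. A sketch, assuming langchain-openai is installed (the prompt and model name are illustrative):

```python
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_openai import ChatOpenAI

# Each component is a Runnable; `|` composes them into a RunnableSequence.
prompt = ChatPromptTemplate.from_template("Tell me a short fact about {topic}.")
llm = ChatOpenAI(model="gpt-4o-mini")  # model name is an assumption
chain = prompt | llm | StrOutputParser()

# The composed chain exposes the same Runnable interface: invoke/batch/stream.
print(chain.invoke({"topic": "otters"}))
```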
This will help you get started with AzureOpenAI embedding models using LangChain. For detailed documentation on AzureOpenAIEmbeddings features and configuration options, please refer to the API reference. To access AzureOpenAI embedding models you'll need to create an Azure account, get an API key, and install the langchain-openai integration package.
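A minimal sketch of instantiating the embedding model; the deployment name, API version, and placeholder credentials below are assumptions for illustration:

```python
import os
from langchain_openai import AzureOpenAIEmbeddings

# Credentials are read from the environment; values here are placeholders.
os.environ.setdefault("AZURE_OPENAI_API_KEY", "...")
os.environ.setdefault("AZURE_OPENAI_ENDPOINT", "https://<your-endpoint>.openai.azure.com/")

embeddings = AzureOpenAIEmbeddings(
    azure_deployment="<your-embeddings-deployment>",  # assumption: your deployment name
    openai_api_version="2024-02-01",                  # assumption: a valid API version
)

vector = embeddings.embed_query("hello world")
print(len(vector))  # dimensionality of the returned embedding
```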
Use case: Suppose you have a set of documents (PDFs, Notion pages, customer questions, etc.) and you want to summarize the content. LLMs are a great tool for this given their proficiency in understanding and synthesizing text. In this walkthrough we'll go over how to perform document summarization using LLMs. A central question for building a summarizer is how to pass your documents into the LLM's context window.
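As a sketch of the simplest approach, stuffing all documents into a single prompt, assuming langchain and langchain-openai are installed (the model name and sample documents are illustrative):

```python
from langchain.chains.combine_documents import create_stuff_documents_chain
from langchain_core.documents import Document
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")  # model name is an assumption
prompt = ChatPromptTemplate.from_template(
    "Write a concise summary of the following:\n\n{context}"
)

# "Stuff" strategy: concatenate all documents into one prompt.
chain = create_stuff_documents_chain(llm, prompt)

docs = [
    Document(page_content="LangChain is a framework for LLM applications."),
    Document(page_content="It provides components for prompts, models, and retrieval."),
]
print(chain.invoke({"context": docs}))
```

This only works while the combined documents fit in the model's context window; longer inputs call for map-reduce style summarization instead.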
All functionality related to Google Cloud, Google Gemini, and other Google products. Google Generative AI (Gemini API & AI Studio): Access Google Gemini models directly via the Gemini API. Use Google AI Studio for rapid prototyping and get started quickly with the langchain-google-genai package. This is often the best starting point for individual developers. Google Cloud (Vertex AI & other services).
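A minimal sketch with the langchain-google-genai package; the model name and the GOOGLE_API_KEY environment variable are assumptions:

```python
from langchain_google_genai import ChatGoogleGenerativeAI

# Requires a Gemini API key, typically provided via the GOOGLE_API_KEY env var.
llm = ChatGoogleGenerativeAI(model="gemini-1.5-flash")  # model name is an assumption

response = llm.invoke("Say hello in one short sentence.")
print(response.content)
```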
Motivation: Many chat or Q&A applications involve chunking input documents prior to embedding and vector storage. These notes from Pinecone provide some useful tips: when a full paragraph or document is embedded, the embedding process considers both the overall context and the relationships between the sentences and phrases within the text. This can result in a more comprehensive vector representation.
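To make chunking concrete, here is a sketch with RecursiveCharacterTextSplitter from langchain-text-splitters (the chunk sizes are illustrative):

```python
from langchain_text_splitters import RecursiveCharacterTextSplitter

# Splits on paragraph, sentence, and word boundaries before falling back to
# characters, which helps keep semantically related text in the same chunk.
splitter = RecursiveCharacterTextSplitter(
    chunk_size=1000,    # illustrative: max characters per chunk
    chunk_overlap=200,  # illustrative: overlap to preserve context across chunks
)

text = "First paragraph...\n\nSecond paragraph...\n\nThird paragraph..."
chunks = splitter.split_text(text)
print(len(chunks))
```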
If you'd like to write your own integration, see Extending LangChain. If you'd like to contribute an integration, see Contributing integrations. Integration packages: these providers have standalone langchain-{provider} packages for improved versioning, dependency management, and testing.
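For example, using a provider package follows the langchain-{provider} naming convention; the provider and model names below are chosen purely for illustration:

```python
# Installed separately, e.g.: pip install langchain-openai
# Each provider package versions and tests its integrations independently.
from langchain_openai import ChatOpenAI, OpenAIEmbeddings

llm = ChatOpenAI(model="gpt-4o-mini")                           # assumption
embeddings = OpenAIEmbeddings(model="text-embedding-3-small")   # assumption
```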
This guide provides explanations of the key concepts behind the LangChain framework and AI applications more broadly. We recommend that you go through at least one of the tutorials before diving into the conceptual guide; this will provide practical context that will make it easier to understand the concepts discussed here. The conceptual guide does not cover step-by-step instructions or specific implementation examples.
New to LangChain or LLM app development in general? Read this material to quickly get up and running building your first applications. Get started: Familiarize yourself with LangChain's open-source components by building simple applications. If you're looking to get started with chat models, vector stores, or other LangChain components from a specific provider, check out our supported integrations.
LangChain is a framework for developing applications powered by large language models (LLMs). LangChain simplifies every stage of the LLM application lifecycle. Development: Build your applications using LangChain's open-source components and third-party integrations. Use LangGraph to build stateful agents with first-class streaming and human-in-the-loop support. Productionization: Use LangSmith to inspect, monitor, and evaluate your applications, so that you can continuously optimize and deploy with confidence.
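As a minimal end-to-end sketch (assuming langchain and langchain-openai are installed and an OPENAI_API_KEY is set; the model name is illustrative), a chat model can be initialized and invoked in a few lines:

```python
from langchain.chat_models import init_chat_model

# init_chat_model resolves the provider from the model string or the
# model_provider argument; the matching integration package must be installed.
llm = init_chat_model("gpt-4o-mini", model_provider="openai")  # assumptions

print(llm.invoke("What is LangChain in one sentence?").content)
```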