Using the Agent Memory Short-Term API with LangGraph
LangGraph applications often need to retain recent working context without passing the full conversation back to the model on every turn. If you keep only the latest message in the graph state, the model can easily lose track of earlier task details, intermediate progress, or the active thread topic.
In this article, you use the Agent Memory short-term API inside a LangGraph flow so the graph can recover recent thread context on demand. The flow uses get_summary() to carry forward a compact summary of earlier messages and get_context_card() to surface the records most relevant to the latest user turn.
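The flow below relies on the shapes of these two return values: get_summary() returns a list of message-like objects (only the .content attribute of the first one is read), while get_context_card() returns a single string. As a rough mock of just these return shapes, using a hypothetical SummaryMessage stand-in rather than the real oracleagentmemory types, the consumption pattern looks like:

```python
from dataclasses import dataclass


# Stand-in for the message objects returned by thread.get_summary();
# the flow in this article only reads the .content attribute.
@dataclass
class SummaryMessage:
    content: str


def consume_short_term_context(summary_messages, context_card):
    """Mirror how the LangGraph node below unpacks the two return values."""
    summary_text = (
        summary_messages[0].content if summary_messages else "No prior thread summary."
    )
    if not context_card:
        context_card = "<context_card>\n No relevant short-term context yet.\n</context_card>"
    return summary_text, context_card


# A populated thread yields summary text plus a context card string.
text, card = consume_short_term_context(
    [SummaryMessage("Maya is migrating invoice reconciliation to LangGraph.")],
    "<context_card>\n Failing step: ledger enrichment.\n</context_card>",
)

# An empty thread falls back to placeholder strings.
empty_text, empty_card = consume_short_term_context([], "")
```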
In this article, you will learn how to:
- Configure Agent Memory with an embedder, and a LangGraph flow that uses ChatOpenAI
- Load get_summary(except_last=1) and get_context_card() before each model call
- Answer a later turn from the agent memory thread's short-term context instead of the full transcript
Tip: For package setup, see Get Started with Agent Memory. If you need a local Oracle AI Database for this example, see Run Oracle AI Database Locally.
Configure Agent Memory and LangGraph
Create an Agent Memory client with an Oracle DB connection or pool, configure an Embedder for vector search, and use ChatOpenAI for the LangGraph node.
from typing import Any

from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI
from langgraph.graph import END, START, MessagesState, StateGraph
from oracleagentmemory.core.embedders.embedder import Embedder
from oracleagentmemory.core.oracleagentmemory import OracleAgentMemory

embedder = Embedder(
    model="YOUR_EMBEDDING_MODEL",
    api_base="YOUR_EMBEDDING_BASE_URL",
    api_key="YOUR_EMBEDDING_API_KEY",
)

langgraph_llm = ChatOpenAI(
    model="YOUR_CHAT_MODEL",
    base_url="YOUR_CHAT_BASE_URL",
    api_key="YOUR_CHAT_API_KEY",
    temperature=0,
)

db_pool = ...  # an oracledb connection or connection pool


class ShortTermState(MessagesState):
    """LangGraph state extended with Oracle Agent Memory short-term context."""

    thread_summary: str
    context_card: str


agent_memory = OracleAgentMemory(
    connection=db_pool,
    embedder=embedder,
    extract_memories=False,
)

thread = agent_memory.create_thread(
    thread_id="langgraph_short_term_demo",
    user_id="user_123",
    agent_id="assistant_456",
)
Build a flow that loads short-term context
Before each model call, the flow reads thread.get_summary(except_last=1) and thread.get_context_card() from Agent Memory. This lets the graph keep only the latest user message in the LangGraph state while still recovering recent working context from the thread.
def _message_text(message: Any) -> str:
    content = getattr(message, "content", "")
    if isinstance(content, str):
        return content
    return str(content)


def load_short_term_context(_: ShortTermState) -> dict[str, str]:
    summary_messages = thread.get_summary(except_last=1, token_budget=250)
    summary_text = (
        summary_messages[0].content if summary_messages else "No prior thread summary."
    )
    context_card = thread.get_context_card()
    if not context_card:
        context_card = "<context_card>\n No relevant short-term context yet.\n</context_card>"
    return {
        "thread_summary": summary_text,
        "context_card": context_card,
    }


def call_model(state: ShortTermState) -> dict[str, list[Any]]:
    response = langgraph_llm.invoke(
        [
            SystemMessage(
                content=(
                    "You are a helpful engineering assistant. "
                    "Answer in at most two short sentences. "
                    "Use the Oracle Agent Memory short-term context below.\n\n"
                    f"Thread summary:\n{state['thread_summary']}\n\n"
                    f"Context card:\n{state['context_card']}"
                )
            ),
            HumanMessage(content=_message_text(state["messages"][-1])),
        ]
    )
    return {"messages": [response]}


builder = StateGraph(ShortTermState)
builder.add_node("load_short_term_context", load_short_term_context)
builder.add_node("call_model", call_model)
builder.add_edge(START, "load_short_term_context")
builder.add_edge("load_short_term_context", "call_model")
builder.add_edge("call_model", END)
graph = builder.compile()
Answer a later turn from the summary and context card
Append each user and assistant message to the agent memory thread, then let the LangGraph flow answer a later turn using only the latest user message plus the loaded short-term context.
def run_turn(user_text: str) -> str:
    thread.add_messages([{"role": "user", "content": user_text}])
    result = graph.invoke({"messages": [HumanMessage(content=user_text)]})
    assistant_text = _message_text(result["messages"][-1])
    thread.add_messages([{"role": "assistant", "content": assistant_text}])
    print("Thread summary:")
    print(result["thread_summary"])
    print("Context card:")
    print(result["context_card"])
    print("Assistant:")
    print(assistant_text)
    return assistant_text


run_turn(
    "I'm Maya. I'm migrating our nightly invoice reconciliation workflow "
    "from cron jobs to LangGraph."
)
run_turn("The failing step right now is ledger enrichment after reconciliation.")
final_answer = run_turn("What workflow am I migrating, which step is failing, and who am I?")
print(final_answer)
Output:
You're Maya, migrating your nightly invoice reconciliation workflow from cron jobs
to LangGraph, and the ledger-enrichment step after reconciliation is currently failing.
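On each turn, the model sees exactly one system message (summary plus context card) and one human message. The assembly done inside call_model reduces to plain string formatting, sketched here without any LangChain or Agent Memory dependencies (build_system_prompt is a hypothetical helper, not part of either library):

```python
def build_system_prompt(thread_summary: str, context_card: str) -> str:
    """Assemble the system prompt the same way call_model does above."""
    return (
        "You are a helpful engineering assistant. "
        "Answer in at most two short sentences. "
        "Use the Oracle Agent Memory short-term context below.\n\n"
        f"Thread summary:\n{thread_summary}\n\n"
        f"Context card:\n{context_card}"
    )


prompt = build_system_prompt(
    "Maya is migrating nightly invoice reconciliation from cron jobs to LangGraph.",
    "<context_card>\n Failing step: ledger enrichment after reconciliation.\n</context_card>",
)

# The prompt carries only the compact summary and the context card,
# never the full message transcript.
```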
Summary
In this article, you learned how to use the Agent Memory short-term API in a LangGraph flow: load get_summary(except_last=1) and get_context_card() before each model call, then answer with recent thread context without resending the full transcript.
Tip: Now that you know how to add short-term thread context to a LangGraph flow, you can move on to Integrate Agent Memory with LangGraph.
Full code
# Copyright © 2026 Oracle and/or its affiliates.
# isort:skip_file
# fmt: off
# Agent Memory Code Example - LangGraph Short-Term Memory
# --------------------------------------------------------
# How to use:
# Create a new Python virtual environment and install the latest oracleagentmemory version.
# You can now run the script
# 1. As a Python file:
# ```bash
# python howto_shorttermmemory.py
# ```
# 2. As a Notebook (in VSCode):
# When viewing the file,
# - press the keys Ctrl + Enter to run the selected cell
# - or Shift + Enter to run the selected cell and move to the cell below

## Configure Oracle Agent Memory and LangGraph for short term context
# %%
from typing import Any

from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI
from langgraph.graph import END, START, MessagesState, StateGraph
from oracleagentmemory.core.embedders.embedder import Embedder
from oracleagentmemory.core.oracleagentmemory import OracleAgentMemory

embedder = Embedder(
    model="YOUR_EMBEDDING_MODEL",
    api_base="YOUR_EMBEDDING_BASE_URL",
    api_key="YOUR_EMBEDDING_API_KEY",
)

langgraph_llm = ChatOpenAI(
    model="YOUR_CHAT_MODEL",
    base_url="YOUR_CHAT_BASE_URL",
    api_key="YOUR_CHAT_API_KEY",
    temperature=0,
)

db_pool = ...  # an oracledb connection or connection pool


class ShortTermState(MessagesState):
    """LangGraph state extended with Oracle Agent Memory short-term context."""

    thread_summary: str
    context_card: str


agent_memory = OracleAgentMemory(
    connection=db_pool,
    embedder=embedder,
    extract_memories=False,
)

thread = agent_memory.create_thread(
    thread_id="langgraph_short_term_demo",
    user_id="user_123",
    agent_id="assistant_456",
)

## Build a LangGraph flow that loads short term context
# %%
def _message_text(message: Any) -> str:
    content = getattr(message, "content", "")
    if isinstance(content, str):
        return content
    return str(content)


def load_short_term_context(_: ShortTermState) -> dict[str, str]:
    summary_messages = thread.get_summary(except_last=1, token_budget=250)
    summary_text = (
        summary_messages[0].content if summary_messages else "No prior thread summary."
    )
    context_card = thread.get_context_card()
    if not context_card:
        context_card = "<context_card>\n No relevant short-term context yet.\n</context_card>"
    return {
        "thread_summary": summary_text,
        "context_card": context_card,
    }


def call_model(state: ShortTermState) -> dict[str, list[Any]]:
    response = langgraph_llm.invoke(
        [
            SystemMessage(
                content=(
                    "You are a helpful engineering assistant. "
                    "Answer in at most two short sentences. "
                    "Use the Oracle Agent Memory short-term context below.\n\n"
                    f"Thread summary:\n{state['thread_summary']}\n\n"
                    f"Context card:\n{state['context_card']}"
                )
            ),
            HumanMessage(content=_message_text(state["messages"][-1])),
        ]
    )
    return {"messages": [response]}


builder = StateGraph(ShortTermState)
builder.add_node("load_short_term_context", load_short_term_context)
builder.add_node("call_model", call_model)
builder.add_edge(START, "load_short_term_context")
builder.add_edge("load_short_term_context", "call_model")
builder.add_edge("call_model", END)
graph = builder.compile()

## Answer a new turn with summary and context card
# %%
def run_turn(user_text: str) -> str:
    thread.add_messages([{"role": "user", "content": user_text}])
    result = graph.invoke({"messages": [HumanMessage(content=user_text)]})
    assistant_text = _message_text(result["messages"][-1])
    thread.add_messages([{"role": "assistant", "content": assistant_text}])
    print("Thread summary:")
    print(result["thread_summary"])
    print("Context card:")
    print(result["context_card"])
    print("Assistant:")
    print(assistant_text)
    return assistant_text


run_turn(
    "I'm Maya. I'm migrating our nightly invoice reconciliation workflow "
    "from cron jobs to LangGraph."
)
run_turn("The failing step right now is ledger enrichment after reconciliation.")
final_answer = run_turn("What workflow am I migrating, which step is failing, and who am I?")
print(final_answer)
# You're Maya, migrating your nightly invoice reconciliation workflow from cron jobs
# to LangGraph, and the ledger-enrichment step after reconciliation is currently failing.