Using Agent Memory with LangGraph
This article shows how to connect agent memory to LangGraph in two ways:
- a prebuilt ReAct agent that calls a memory search tool when needed;
- a custom flow built with StateGraph(MessagesState).
Note: see Get Started with Agent Memory for package setup. If you need a local Oracle AI Database for this example, see Run Oracle AI Database Locally.
Set up agent memory
First, configure the agent memory client, the LangGraph chat model, and a reusable search_memory tool. The agent memory client uses its own LLM to periodically extract durable memories from recent thread messages, while the LangGraph model handles agent responses and tool use.
from typing import Annotated
from langchain.agents import create_agent
from langchain_core.messages import HumanMessage
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from langgraph.graph import END, START, MessagesState, StateGraph
from oracleagentmemory.core.embedders.embedder import Embedder
from oracleagentmemory.core.llms.llm import Llm
from oracleagentmemory.core.oracleagentmemory import OracleAgentMemory
embedder = Embedder(
model="YOUR_EMBEDDING_MODEL",
api_base="YOUR_EMBEDDING_API_BASE",
api_key="YOUR_EMBEDDING_API_KEY",
)
llm = Llm(
model="gpt-4.1-mini",
api_key="YOUR_OPENAI_API_KEY",
)
db_pool = ... #an oracledb connection or connection pool
langgraph_llm = ChatOpenAI(
model="gpt-4.1-mini",
api_key="YOUR_OPENAI_API_KEY",
)
#Keep these identifiers stable for the same assistant and end user so memory
#is scoped consistently across threads and sessions.
agent_id = "support_agent"
user_id = "user_123"
agent_memory = OracleAgentMemory(
connection=db_pool,
embedder=embedder,
llm=llm,
)
@tool
def search_memory(
query: Annotated[str, "Question to search in Oracle Agent Memory"],
) -> Annotated[str, "Top matching durable memory content"]:
"""Search Oracle Agent Memory for durable user facts relevant to the current request."""
results = agent_memory.search(
query=query,
user_id=user_id,
agent_id=agent_id,
max_results=3,
record_types=["memory"],
)
if not results:
return "No relevant memory found."
return "\n".join(result.content for result in results)
def _latest_user_message(state: MessagesState) -> str:
for message in reversed(state["messages"]):
if getattr(message, "type", None) == "human":
return str(message.content)
if getattr(message, "role", None) == "user":
return str(message.content)
return ""
def _build_memory_context(query: str) -> str:
results = agent_memory.search(
query=query,
user_id=user_id,
agent_id=agent_id,
max_results=3,
record_types=["memory"],
)
memory_context = "\n".join(f"- {result.content}" for result in results)
return memory_context or "- No relevant memory found."
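The two helpers compose: _latest_user_message pulls the newest human turn out of the graph state, and _build_memory_context formats the matching memories as bullet lines. A minimal sketch of that composition against a hand-built state (the message content is hypothetical):
demo_state = {"messages": [HumanMessage(content="What stack do I use?")]}
#Returns "- No relevant memory found." while the store is still empty.
print(_build_memory_context(_latest_user_message(demo_state)))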
Prebuilt ReAct agent
LangChain provides a prebuilt ReAct-style agent on top of the LangGraph runtime. You can expose search_memory as one of its tools and let the agent decide when to query durable memory.
Configure the prebuilt agent
react_agent = create_agent(
model=langgraph_llm,
tools=[search_memory],
system_prompt=(
"You are a support agent. When the user asks about durable facts from "
"prior sessions, call the search_memory tool before answering."
),
)
react_memory_thread = agent_memory.create_thread(
thread_id="langgraph_react_memory_demo",
user_id=user_id,
agent_id=agent_id,
)
Persist user context after a prebuilt ReAct session
After the first run completes, add the exchanged messages to agent memory and store the durable facts that should be reused later.
react_session_1 = react_agent.invoke(
{
"messages": [
HumanMessage(
content="I am John, a Python developer and I need help debugging a payment service."
)
]
}
)
react_assistant_reply = react_session_1["messages"][-1].content
print(react_assistant_reply)
#I can help with that. What error are you seeing?
#add_messages will add messages to the DB and extract memories automatically
react_memory_thread.add_messages(
[
{
"role": "user",
"content": "I am John, a Python developer and I need help debugging a payment service.",
},
{
"role": "assistant",
"content": react_assistant_reply,
},
]
)
#add_memory adds memory to the DB
react_memory_thread.add_memory("The user is John, a Python developer.")
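Note that the snippet above re-types the user message when persisting it. If you would rather persist whatever the LangGraph run actually produced, a small helper can map the returned messages onto the role/content dicts that add_messages expects. This helper is a hypothetical convenience, not part of the agent memory API:
def persist_session(thread, session_result):
    """Copy the human and assistant turns of a LangGraph result into a memory thread."""
    role_map = {"human": "user", "ai": "assistant"}
    payload = [
        {"role": role_map[msg.type], "content": str(msg.content)}
        for msg in session_result["messages"]
        if msg.type in role_map and msg.content  #skips tool-call turns with empty content
    ]
    if payload:
        thread.add_messages(payload)

#Would replace the manual add_messages call above:
#persist_session(react_memory_thread, react_session_1)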
Reuse memory in a new prebuilt ReAct session
When a later run starts, reopen the same agent memory thread and let the prebuilt agent call search_memory before answering.
react_memory_thread = agent_memory.get_thread("langgraph_react_memory_demo")
react_session_2 = react_agent.invoke(
{
"messages": [
HumanMessage(content="Who am I?")
]
}
)
react_remembered_reply = react_session_2["messages"][-1].content
print(react_remembered_reply)
Output:
The user is John, a Python developer.
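To confirm that the agent actually consulted memory rather than guessing, you can inspect the intermediate messages of the run: tool invocations show up as AI messages carrying tool_calls. A short sketch:
#Expected to list search_memory if the agent queried durable memory.
for msg in react_session_2["messages"]:
    if getattr(msg, "tool_calls", None):
        print("called tools:", [tool_call["name"] for tool_call in msg.tool_calls])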
Custom flow
If you need tighter control over the integration, build a custom LangGraph flow and inject agent memory results directly into the model node.
Configure the custom flow
def call_model(state: MessagesState):
from langchain_core.messages import SystemMessage
query = _latest_user_message(state)
memory_context = _build_memory_context(query)
response = langgraph_llm.invoke(
[
SystemMessage(
content=(
"You are a support agent. Use the durable memory below when it is "
"relevant to the current user request.\n\n"
f"Durable memory:\n{memory_context}"
)
),
*state["messages"],
]
)
return {"messages": [response]}
builder = StateGraph(MessagesState)
builder.add_node("call_model", call_model)
builder.add_edge(START, "call_model")
builder.add_edge("call_model", END)
flow_graph = builder.compile()
flow_memory_thread = agent_memory.create_thread(
thread_id="langgraph_flow_memory_demo",
user_id=user_id,
agent_id=agent_id,
)
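The compiled graph is an ordinary LangGraph runnable, so you are not limited to invoke. For example, you could stream state snapshots instead; a minimal sketch using LangGraph's streaming API with a hypothetical user message:
#stream_mode="values" yields the full state after each node executes,
#so the final chunk holds the complete message list.
for chunk in flow_graph.stream(
    {"messages": [HumanMessage(content="Hi, quick question about billing.")]},
    stream_mode="values",
):
    chunk["messages"][-1].pretty_print()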
Persist user context after a flow session
After the first flow run completes, add the exchanged messages to agent memory and store the durable facts that should be reused later.
flow_session_1 = flow_graph.invoke(
{
"messages": [
HumanMessage(
content="I am John, a Python developer and I need help debugging a payment service."
)
]
}
)
flow_assistant_reply = flow_session_1["messages"][-1].content
print(flow_assistant_reply)
#I can help with that. What error are you seeing?
flow_memory_thread.add_messages(
[
{
"role": "user",
"content": "I am John, a Python developer and I need help debugging a payment service.",
},
{
"role": "assistant",
"content": flow_assistant_reply,
},
]
)
flow_memory_thread.add_memory("The user is John, a Python developer.")
Reuse memory in a new flow session
When a later flow run starts, reopen the same agent memory thread and let the graph retrieve durable memory and respond with the earlier user context.
flow_memory_thread = agent_memory.get_thread("langgraph_flow_memory_demo")
flow_session_2 = flow_graph.invoke(
{
"messages": [
HumanMessage(content="Who am I?")
]
}
)
flow_remembered_reply = flow_session_2["messages"][-1].content
print(flow_remembered_reply)
Output:
The user is John, a Python developer.
Disable automatic memory extraction
To persist messages while adding durable memories manually, create the agent memory client with extract_memories=False and write the memory rows yourself.
manual_agent_memory = OracleAgentMemory(
connection=db_pool,
embedder=embedder,
extract_memories=False,
)
manual_memory_thread = manual_agent_memory.create_thread(
thread_id="langgraph_manual_memory_demo",
user_id=user_id,
agent_id=agent_id,
)
manual_memory_thread.add_messages(
[
{
"role": "user",
"content": "Please remember that I prefer concise code reviews.",
},
{
"role": "assistant",
"content": "Understood. I will keep responses concise.",
},
]
)
manual_memory_thread.add_memory("The user prefers concise code reviews.")
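With extraction disabled, only the rows you write explicitly become searchable. A quick verification sketch using the same search parameters as the search_memory tool above:
#Expected to return the manually stored preference.
hits = manual_agent_memory.search(
    query="How does the user like code reviews?",
    user_id=user_id,
    agent_id=agent_id,
    max_results=3,
    record_types=["memory"],
)
for hit in hits:
    print(hit.content)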
Conclusion
In this article, you learned how to connect agent memory to LangGraph using either a prebuilt ReAct agent or a custom StateGraph(MessagesState) flow, persist thread messages after each session, and reuse durable memories in later runs.
Note: now that you know how to integrate agent memory with LangGraph, you may also be interested in integrating agent memory with WayFlow.
Full code
#Copyright © 2026 Oracle and/or its affiliates.
#isort:skip_file
#fmt: off
#Agent Memory Code Example - Integration with LangGraph
#-------------------------------------------------------
#How to use:
#Create a new Python virtual environment and install the latest oracleagentmemory version.
#You can now run the script
#1. As a Python file:
#```bash
#python integration_with_langgraph.py
#```
#2. As a Notebook (in VSCode):
#When viewing the file,
#- press the keys Ctrl + Enter to run the selected cell
#- or Shift + Enter to run the selected cell and move to the cell below
##Configure Oracle Memory and LangGraph setup
#%%
from typing import Annotated
from langchain.agents import create_agent
from langchain_core.messages import HumanMessage
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from langgraph.graph import END, START, MessagesState, StateGraph
from oracleagentmemory.core.embedders.embedder import Embedder
from oracleagentmemory.core.llms.llm import Llm
from oracleagentmemory.core.oracleagentmemory import OracleAgentMemory
embedder = Embedder(
model="YOUR_EMBEDDING_MODEL",
api_base="YOUR_EMBEDDING_API_BASE",
api_key="YOUR_EMBEDDING_API_KEY",
)
llm = Llm(
model="gpt-4.1-mini",
api_key="YOUR_OPENAI_API_KEY",
)
db_pool = ... #an oracledb connection or connection pool
langgraph_llm = ChatOpenAI(
model="gpt-4.1-mini",
api_key="YOUR_OPENAI_API_KEY",
)
#Keep these identifiers stable for the same assistant and end user so memory
#is scoped consistently across threads and sessions.
agent_id = "support_agent"
user_id = "user_123"
agent_memory = OracleAgentMemory(
connection=db_pool,
embedder=embedder,
llm=llm,
)
@tool
def search_memory(
query: Annotated[str, "Question to search in Oracle Agent Memory"],
) -> Annotated[str, "Top matching durable memory content"]:
"""Search Oracle Agent Memory for durable user facts relevant to the current request."""
results = agent_memory.search(
query=query,
user_id=user_id,
agent_id=agent_id,
max_results=3,
record_types=["memory"],
)
if not results:
return "No relevant memory found."
return "\n".join(result.content for result in results)
def _latest_user_message(state: MessagesState) -> str:
for message in reversed(state["messages"]):
if getattr(message, "type", None) == "human":
return str(message.content)
if getattr(message, "role", None) == "user":
return str(message.content)
return ""
def _build_memory_context(query: str) -> str:
results = agent_memory.search(
query=query,
user_id=user_id,
agent_id=agent_id,
max_results=3,
record_types=["memory"],
)
memory_context = "\n".join(f"- {result.content}" for result in results)
return memory_context or "- No relevant memory found."
##Configure a prebuilt LangGraph ReAct agent
#%%
react_agent = create_agent(
model=langgraph_llm,
tools=[search_memory],
system_prompt=(
"You are a support agent. When the user asks about durable facts from "
"prior sessions, call the search_memory tool before answering."
),
)
react_memory_thread = agent_memory.create_thread(
thread_id="langgraph_react_memory_demo",
user_id=user_id,
agent_id=agent_id,
)
##Persist user context after a prebuilt ReAct session
#%%
react_session_1 = react_agent.invoke(
{
"messages": [
HumanMessage(
content="I am John, a Python developer and I need help debugging a payment service."
)
]
}
)
react_assistant_reply = react_session_1["messages"][-1].content
print(react_assistant_reply)
#I can help with that. What error are you seeing?
react_memory_thread.add_messages(
[
{
"role": "user",
"content": "I am John, a Python developer and I need help debugging a payment service.",
},
{
"role": "assistant",
"content": react_assistant_reply,
},
]
)
react_memory_thread.add_memory("The user is John, a Python developer.")
##Reuse memory in a new prebuilt ReAct session
#%%
react_memory_thread = agent_memory.get_thread("langgraph_react_memory_demo")
react_session_2 = react_agent.invoke(
{
"messages": [
HumanMessage(content="Who am I?")
]
}
)
react_remembered_reply = react_session_2["messages"][-1].content
print(react_remembered_reply)
#The user is John, a Python developer.
##Configure a custom LangGraph flow
#%%
def call_model(state: MessagesState):
from langchain_core.messages import SystemMessage
query = _latest_user_message(state)
memory_context = _build_memory_context(query)
response = langgraph_llm.invoke(
[
SystemMessage(
content=(
"You are a support agent. Use the durable memory below when it is "
"relevant to the current user request.\n\n"
f"Durable memory:\n{memory_context}"
)
),
*state["messages"],
]
)
return {"messages": [response]}
builder = StateGraph(MessagesState)
builder.add_node("call_model", call_model)
builder.add_edge(START, "call_model")
builder.add_edge("call_model", END)
flow_graph = builder.compile()
flow_memory_thread = agent_memory.create_thread(
thread_id="langgraph_flow_memory_demo",
user_id=user_id,
agent_id=agent_id,
)
##Persist user context after a flow session
#%%
flow_session_1 = flow_graph.invoke(
{
"messages": [
HumanMessage(
content="I am John, a Python developer and I need help debugging a payment service."
)
]
}
)
flow_assistant_reply = flow_session_1["messages"][-1].content
print(flow_assistant_reply)
#I can help with that. What error are you seeing?
flow_memory_thread.add_messages(
[
{
"role": "user",
"content": "I am John, a Python developer and I need help debugging a payment service.",
},
{
"role": "assistant",
"content": flow_assistant_reply,
},
]
)
flow_memory_thread.add_memory("The user is John, a Python developer.")
##Reuse memory in a new flow session
#%%
flow_memory_thread = agent_memory.get_thread("langgraph_flow_memory_demo")
flow_session_2 = flow_graph.invoke(
{
"messages": [
HumanMessage(content="Who am I?")
]
}
)
flow_remembered_reply = flow_session_2["messages"][-1].content
print(flow_remembered_reply)
#The user is John, a Python developer.
##Disable automatic memory extraction
#%%
manual_agent_memory = OracleAgentMemory(
connection=db_pool,
embedder=embedder,
extract_memories=False,
)
manual_memory_thread = manual_agent_memory.create_thread(
thread_id="langgraph_manual_memory_demo",
user_id=user_id,
agent_id=agent_id,
)
manual_memory_thread.add_messages(
[
{
"role": "user",
"content": "Please remember that I prefer concise code reviews.",
},
{
"role": "assistant",
"content": "Understood. I will keep responses concise.",
},
]
)
manual_memory_thread.add_memory("The user prefers concise code reviews.")