Use Agent Memory Short-Term APIs with LangGraph

LangGraph applications often need to preserve recent working context without passing the full conversation to the model on every turn. If you keep only the latest message in graph state, the model can easily lose track of earlier task details, intermediate progress, or the active thread topic.

In this article, you will use Agent Memory short-term APIs in a LangGraph flow so that the graph can fetch recent thread context on demand. The flow uses get_summary() to carry forward a compact recap of earlier messages and get_context_card() to surface the records most relevant to the latest user turn.


Note: For package setup, see Introduction to Agent Memory. If you need a local Oracle AI Database instance for this example, see Running Oracle AI Database Locally.
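If you have not set up the environment yet, the imports used below can typically be installed from PyPI; a minimal sketch, assuming these package names (the oracleagentmemory name comes from this article's full-code instructions; the others are the standard PyPI distributions for the LangChain, LangGraph, and Oracle driver imports):

```shell
pip install oracleagentmemory oracledb langgraph langchain-openai
```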

Configure Agent Memory and LangGraph

Create an Agent Memory client with an Oracle DB connection or connection pool, configure an Embedder for vector search, and use ChatOpenAI for the LangGraph model node.

from typing import Any

from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI
from langgraph.graph import END, START, MessagesState, StateGraph

from oracleagentmemory.core.embedders.embedder import Embedder
from oracleagentmemory.core.oracleagentmemory import OracleAgentMemory

embedder = Embedder(
    model="YOUR_EMBEDDING_MODEL",
    api_base="YOUR_EMBEDDING_BASE_URL",
    api_key="YOUR_EMBEDDING_API_KEY",
)
langgraph_llm = ChatOpenAI(
    model="YOUR_CHAT_MODEL",
    base_url="YOUR_CHAT_BASE_URL",
    api_key="YOUR_CHAT_API_KEY",
    temperature=0,
)
db_pool = ...  #an oracledb connection or connection pool



class ShortTermState(MessagesState):
    """LangGraph state extended with Oracle Agent Memory short-term context."""

    thread_summary: str
    context_card: str


agent_memory = OracleAgentMemory(
    connection=db_pool,
    embedder=embedder,
    extract_memories=False,
)
thread = agent_memory.create_thread(
    thread_id="langgraph_short_term_demo",
    user_id="user_123",
    agent_id="assistant_456",
)

Build a flow that loads short-term context

Before each model call, the flow reads thread.get_summary(except_last=1) and thread.get_context_card() from Agent Memory. This lets the graph keep only the latest user message in LangGraph state while still retrieving recent working context from the thread.

def _message_text(message: Any) -> str:
    content = getattr(message, "content", "")
    if isinstance(content, str):
        return content
    return str(content)


def load_short_term_context(_: ShortTermState) -> dict[str, str]:
    summary_messages = thread.get_summary(except_last=1, token_budget=250)
    summary_text = (
        summary_messages[0].content if summary_messages else "No prior thread summary."
    )
    context_card = thread.get_context_card()
    if not context_card:
        context_card = "<context_card>\n  No relevant short-term context yet.\n</context_card>"
    return {
        "thread_summary": summary_text,
        "context_card": context_card,
    }


def call_model(state: ShortTermState) -> dict[str, list[Any]]:
    response = langgraph_llm.invoke(
        [
            SystemMessage(
                content=(
                    "You are a helpful engineering assistant. "
                    "Answer in at most two short sentences. "
                    "Use the Oracle Agent Memory short-term context below.\n\n"
                    f"Thread summary:\n{state['thread_summary']}\n\n"
                    f"Context card:\n{state['context_card']}"
                )
            ),
            HumanMessage(content=_message_text(state["messages"][-1])),
        ]
    )
    return {"messages": [response]}


builder = StateGraph(ShortTermState)
builder.add_node("load_short_term_context", load_short_term_context)
builder.add_node("call_model", call_model)
builder.add_edge(START, "load_short_term_context")
builder.add_edge("load_short_term_context", "call_model")
builder.add_edge("call_model", END)
graph = builder.compile()
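Conceptually, each node in this graph returns a partial state update that LangGraph merges into the shared state before the next node runs, which is why load_short_term_context can return only the two context fields and call_model can still read them. A minimal plain-Python sketch of that merge behavior (illustrative names and strings, no LangGraph required):

```python
# Plain-Python sketch of the two-node pipeline above: each node returns a
# partial state update that is merged into the shared state.
# The function names and string values here are illustrative only.

def load_context(state):
    # Stand-in for the thread.get_summary() / get_context_card() reads.
    return {"thread_summary": "prior recap", "context_card": "recent records"}

def call_model(state):
    # Stand-in for the LLM call that sees the loaded context.
    reply = f"using: {state['thread_summary']} + {state['context_card']}"
    return {"messages": state["messages"] + [reply]}

def invoke(state, nodes):
    for node in nodes:
        state = {**state, **node(state)}  # merge each partial update
    return state

result = invoke({"messages": ["latest user message"]}, [load_context, call_model])
print(result["messages"][-1])  # using: prior recap + recent records
```

In real LangGraph, the messages key of MessagesState uses an append reducer rather than a plain overwrite; the hand-appended list above mimics that behavior for this sketch.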

Answer a later turn with the summary and context card

Add each user and assistant message to the Agent Memory thread, then let the LangGraph flow answer a later turn using only the latest user message plus the loaded short-term context.

def run_turn(user_text: str) -> str:
    thread.add_messages([{"role": "user", "content": user_text}])
    result = graph.invoke({"messages": [HumanMessage(content=user_text)]})
    assistant_text = _message_text(result["messages"][-1])
    thread.add_messages([{"role": "assistant", "content": assistant_text}])
    print("Thread summary:")
    print(result["thread_summary"])
    print("Context card:")
    print(result["context_card"])
    print("Assistant:")
    print(assistant_text)
    return assistant_text


run_turn(
    "I'm Maya. I'm migrating our nightly invoice reconciliation workflow "
    "from cron jobs to LangGraph."
)
run_turn("The failing step right now is ledger enrichment after reconciliation.")
final_answer = run_turn("What workflow am I migrating, which step is failing, and who am I?")

print(final_answer)

Output:

You're Maya, migrating your nightly invoice reconciliation workflow from cron jobs
to LangGraph, and the ledger-enrichment step after reconciliation is currently failing.

Conclusion

In this article, you learned how to use Agent Memory short-term APIs in a LangGraph flow, load get_summary(except_last=1) and get_context_card() before each model call, and answer later turns with recent thread context without resending the full transcript.

Note: Now that you know how to add short-term thread context to a LangGraph flow, you can move on to Integrate Agent Memory with LangGraph.

Full code

#Copyright © 2026 Oracle and/or its affiliates.
#isort:skip_file
#fmt: off
#Agent Memory Code Example - LangGraph Short-Term Memory
#--------------------------------------------------------

#How to use:
#Create a new Python virtual environment and install the latest oracleagentmemory version.

#You can now run the script
#1. As a Python file:
#```bash
#python howto_shorttermmemory.py
#```
#2. As a Notebook (in VSCode):
#When viewing the file,
#- press the keys Ctrl + Enter to run the selected cell
#- or Shift + Enter to run the selected cell and move to the cell below


##Configure Oracle Agent Memory and LangGraph for short term context

#%%
from typing import Any

from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI
from langgraph.graph import END, START, MessagesState, StateGraph

from oracleagentmemory.core.embedders.embedder import Embedder
from oracleagentmemory.core.oracleagentmemory import OracleAgentMemory

embedder = Embedder(
    model="YOUR_EMBEDDING_MODEL",
    api_base="YOUR_EMBEDDING_BASE_URL",
    api_key="YOUR_EMBEDDING_API_KEY",
)
langgraph_llm = ChatOpenAI(
    model="YOUR_CHAT_MODEL",
    base_url="YOUR_CHAT_BASE_URL",
    api_key="YOUR_CHAT_API_KEY",
    temperature=0,
)
db_pool = ...  #an oracledb connection or connection pool


class ShortTermState(MessagesState):
    """LangGraph state extended with Oracle Agent Memory short-term context."""

    thread_summary: str
    context_card: str


agent_memory = OracleAgentMemory(
    connection=db_pool,
    embedder=embedder,
    extract_memories=False,
)
thread = agent_memory.create_thread(
    thread_id="langgraph_short_term_demo",
    user_id="user_123",
    agent_id="assistant_456",
)


##Build a LangGraph flow that loads short term context

#%%
def _message_text(message: Any) -> str:
    content = getattr(message, "content", "")
    if isinstance(content, str):
        return content
    return str(content)


def load_short_term_context(_: ShortTermState) -> dict[str, str]:
    summary_messages = thread.get_summary(except_last=1, token_budget=250)
    summary_text = (
        summary_messages[0].content if summary_messages else "No prior thread summary."
    )
    context_card = thread.get_context_card()
    if not context_card:
        context_card = "<context_card>\n  No relevant short-term context yet.\n</context_card>"
    return {
        "thread_summary": summary_text,
        "context_card": context_card,
    }


def call_model(state: ShortTermState) -> dict[str, list[Any]]:
    response = langgraph_llm.invoke(
        [
            SystemMessage(
                content=(
                    "You are a helpful engineering assistant. "
                    "Answer in at most two short sentences. "
                    "Use the Oracle Agent Memory short-term context below.\n\n"
                    f"Thread summary:\n{state['thread_summary']}\n\n"
                    f"Context card:\n{state['context_card']}"
                )
            ),
            HumanMessage(content=_message_text(state["messages"][-1])),
        ]
    )
    return {"messages": [response]}


builder = StateGraph(ShortTermState)
builder.add_node("load_short_term_context", load_short_term_context)
builder.add_node("call_model", call_model)
builder.add_edge(START, "load_short_term_context")
builder.add_edge("load_short_term_context", "call_model")
builder.add_edge("call_model", END)
graph = builder.compile()


##Answer a new turn with summary and context card

#%%
def run_turn(user_text: str) -> str:
    thread.add_messages([{"role": "user", "content": user_text}])
    result = graph.invoke({"messages": [HumanMessage(content=user_text)]})
    assistant_text = _message_text(result["messages"][-1])
    thread.add_messages([{"role": "assistant", "content": assistant_text}])
    print("Thread summary:")
    print(result["thread_summary"])
    print("Context card:")
    print(result["context_card"])
    print("Assistant:")
    print(assistant_text)
    return assistant_text


run_turn(
    "I'm Maya. I'm migrating our nightly invoice reconciliation workflow "
    "from cron jobs to LangGraph."
)
run_turn("The failing step right now is ledger enrichment after reconciliation.")
final_answer = run_turn("What workflow am I migrating, which step is failing, and who am I?")

print(final_answer)
#You're Maya, migrating your nightly invoice reconciliation workflow from cron jobs
#to LangGraph, and the ledger-enrichment step after reconciliation is currently failing.