
Reflection

Reflector

Bases: BaseModel



Handles reflection on decisions and updating memory.

Methods:

- reflect_bull_researcher: Reflect on bull researcher's analysis and update memory.
- reflect_bear_researcher: Reflect on bear researcher's analysis and update memory.
- reflect_trader: Reflect on trader's decision and update memory.
- reflect_invest_judge: Reflect on investment judge's decision and update memory.
- reflect_risk_manager: Reflect on risk manager's decision and update memory.

quick_thinking_llm

quick_thinking_llm: SkipValidation[ChatModel] = Field(
    ...,
    title="Quick Thinking LLM",
    description="LLM instance used for generating reflection analysis",
)
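The `quick_thinking_llm` field drives the reflection prompts issued by the private `_reflect_on_component` helper, whose internals are not shown on this page. The sketch below is a hypothetical, stdlib-only illustration of how such a helper might build a prompt and call the model; the prompt wording, the `invoke()` interface, and the `FakeQuickLLM` stub are all assumptions, not the real implementation.

```python
# Hypothetical sketch of how _reflect_on_component might use
# quick_thinking_llm. The prompt wording and the invoke() interface
# are assumptions; the real helper's internals are not shown here.

class FakeQuickLLM:
    """Duck-typed stand-in for the chat model stored in quick_thinking_llm."""

    def invoke(self, prompt: str) -> str:
        # A real chat model would return generated reflection text here.
        return f"Reflection based on: {prompt[:40]}..."


def reflect_on_component(llm, component_type, report, situation, returns_losses):
    # Combine the component's output with the observed outcome so the
    # model can critique the decision in hindsight.
    prompt = (
        f"Component: {component_type}\n"
        f"Observed returns/losses: {returns_losses}\n"
        f"Market situation: {situation}\n"
        f"Component output: {report}\n"
        "Critique the decision and extract lessons for future situations."
    )
    return llm.invoke(prompt)


result = reflect_on_component(
    FakeQuickLLM(), "TRADER", "BUY 100 shares", "volatile market", -0.03
)
print(result)
```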

reflect_bull_researcher

reflect_bull_researcher(
    current_state: AgentState, returns_losses: float, bull_memory: FinancialSituationMemory
) -> None

Reflect on bull researcher's analysis and update memory.

Parameters:

- current_state (AgentState): The current state of the agent graph. (required)
- returns_losses (float): The actual returns or losses observed. (required)
- bull_memory (FinancialSituationMemory): The memory component to update. (required)
Source code in src/tradingagents/graph/reflection.py
def reflect_bull_researcher(
    self,
    current_state: AgentState,
    returns_losses: float,
    bull_memory: FinancialSituationMemory,
) -> None:
    """Reflect on bull researcher's analysis and update memory.

    Args:
        current_state (AgentState): The current state of the agent graph.
        returns_losses (float): The actual returns or losses observed.
        bull_memory (FinancialSituationMemory): The memory component to update.
    """
    situation = self._extract_current_situation(current_state)
    result = self._reflect_on_component(
        "BULL", current_state.investment_debate_state.bull_history, situation, returns_losses
    )
    bull_memory.add_situations([(situation, result)])
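All five `reflect_*` methods share the same three-step flow visible in the source above: extract the current situation, generate a reflection on one component's output, and store the `(situation, reflection)` pair in that component's memory. The sketch below reproduces that flow with simplified stand-ins; `FakeMemory` and `reflect_component` are illustrative assumptions, not the real `FinancialSituationMemory` or `Reflector` code.

```python
# Minimal sketch of the shared reflect_* flow: extract situation,
# reflect on one component's output, store the (situation, reflection)
# pair. All names here are simplified stand-ins.

class FakeMemory:
    """Stand-in for FinancialSituationMemory: stores (situation, reflection) pairs."""

    def __init__(self):
        self.records = []

    def add_situations(self, pairs):
        self.records.extend(pairs)


def reflect_component(component_type, component_output, situation,
                      returns_losses, memory):
    # A real Reflector would prompt quick_thinking_llm here; we fake the text.
    reflection = (
        f"[{component_type}] outcome={returns_losses:+.2f}: "
        f"review of '{component_output}' given '{situation}'"
    )
    memory.add_situations([(situation, reflection)])
    return reflection


bull_memory = FakeMemory()
reflect_component(
    "BULL",
    "bullish thesis on AAPL",
    "tech rally, strong earnings",
    0.05,
    bull_memory,
)
print(len(bull_memory.records))  # 1 stored (situation, reflection) pair
```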

reflect_bear_researcher

reflect_bear_researcher(
    current_state: AgentState, returns_losses: float, bear_memory: FinancialSituationMemory
) -> None

Reflect on bear researcher's analysis and update memory.

Parameters:

- current_state (AgentState): The current state of the agent graph. (required)
- returns_losses (float): The actual returns or losses observed. (required)
- bear_memory (FinancialSituationMemory): The memory component to update. (required)
Source code in src/tradingagents/graph/reflection.py
def reflect_bear_researcher(
    self,
    current_state: AgentState,
    returns_losses: float,
    bear_memory: FinancialSituationMemory,
) -> None:
    """Reflect on bear researcher's analysis and update memory.

    Args:
        current_state (AgentState): The current state of the agent graph.
        returns_losses (float): The actual returns or losses observed.
        bear_memory (FinancialSituationMemory): The memory component to update.
    """
    situation = self._extract_current_situation(current_state)
    result = self._reflect_on_component(
        "BEAR", current_state.investment_debate_state.bear_history, situation, returns_losses
    )
    bear_memory.add_situations([(situation, result)])

reflect_trader

reflect_trader(
    current_state: AgentState, returns_losses: float, trader_memory: FinancialSituationMemory
) -> None

Reflect on trader's decision and update memory.

Parameters:

- current_state (AgentState): The current state of the agent graph. (required)
- returns_losses (float): The actual returns or losses observed. (required)
- trader_memory (FinancialSituationMemory): The memory component to update. (required)
Source code in src/tradingagents/graph/reflection.py
def reflect_trader(
    self,
    current_state: AgentState,
    returns_losses: float,
    trader_memory: FinancialSituationMemory,
) -> None:
    """Reflect on trader's decision and update memory.

    Args:
        current_state (AgentState): The current state of the agent graph.
        returns_losses (float): The actual returns or losses observed.
        trader_memory (FinancialSituationMemory): The memory component to update.
    """
    situation = self._extract_current_situation(current_state)
    result = self._reflect_on_component(
        "TRADER", current_state.trader_investment_plan, situation, returns_losses
    )
    trader_memory.add_situations([(situation, result)])

reflect_invest_judge

reflect_invest_judge(
    current_state: AgentState, returns_losses: float, invest_judge_memory: FinancialSituationMemory
) -> None

Reflect on investment judge's decision and update memory.

Parameters:

- current_state (AgentState): The current state of the agent graph. (required)
- returns_losses (float): The actual returns or losses observed. (required)
- invest_judge_memory (FinancialSituationMemory): The memory component to update. (required)
Source code in src/tradingagents/graph/reflection.py
def reflect_invest_judge(
    self,
    current_state: AgentState,
    returns_losses: float,
    invest_judge_memory: FinancialSituationMemory,
) -> None:
    """Reflect on investment judge's decision and update memory.

    Args:
        current_state (AgentState): The current state of the agent graph.
        returns_losses (float): The actual returns or losses observed.
        invest_judge_memory (FinancialSituationMemory): The memory component to update.
    """
    situation = self._extract_current_situation(current_state)
    result = self._reflect_on_component(
        "INVEST JUDGE",
        current_state.investment_debate_state.judge_decision,
        situation,
        returns_losses,
    )
    invest_judge_memory.add_situations([(situation, result)])

reflect_risk_manager

reflect_risk_manager(
    current_state: AgentState, returns_losses: float, risk_manager_memory: FinancialSituationMemory
) -> None

Reflect on risk manager's decision and update memory.

Parameters:

- current_state (AgentState): The current state of the agent graph. (required)
- returns_losses (float): The actual returns or losses observed. (required)
- risk_manager_memory (FinancialSituationMemory): The memory component to update. (required)
Source code in src/tradingagents/graph/reflection.py
def reflect_risk_manager(
    self,
    current_state: AgentState,
    returns_losses: float,
    risk_manager_memory: FinancialSituationMemory,
) -> None:
    """Reflect on risk manager's decision and update memory.

    Args:
        current_state (AgentState): The current state of the agent graph.
        returns_losses (float): The actual returns or losses observed.
        risk_manager_memory (FinancialSituationMemory): The memory component to update.
    """
    situation = self._extract_current_situation(current_state)
    result = self._reflect_on_component(
        "RISK JUDGE", current_state.risk_debate_state.judge_decision, situation, returns_losses
    )
    risk_manager_memory.add_situations([(situation, result)])
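In practice, all five reflection methods would typically be invoked together once a trade outcome is known, each updating its own memory. The driver below illustrates that fan-out with simplified stand-ins; the component names mirror the tags used in the source above, but `FakeMemory` and `run_reflections` are assumptions for illustration only.

```python
# Illustrative driver showing how the five reflect_* calls fan out after
# a trade outcome is observed: one call per component, each with its own
# memory. Everything here is a simplified stand-in for the real classes.

class FakeMemory:
    def __init__(self):
        self.records = []

    def add_situations(self, pairs):
        self.records.extend(pairs)


def run_reflections(situation, returns_losses, component_outputs, memories):
    """Mirrors calling reflect_bull_researcher, reflect_bear_researcher, etc."""
    for name, output in component_outputs.items():
        reflection = f"[{name}] pnl={returns_losses:+.2%} on: {output}"
        memories[name].add_situations([(situation, reflection)])


COMPONENTS = ("BULL", "BEAR", "TRADER", "INVEST JUDGE", "RISK JUDGE")
memories = {name: FakeMemory() for name in COMPONENTS}
outputs = {
    "BULL": "bull thesis",
    "BEAR": "bear thesis",
    "TRADER": "trade plan",
    "INVEST JUDGE": "debate verdict",
    "RISK JUDGE": "risk verdict",
}
run_reflections("post-earnings drift", -0.02, outputs, memories)
print(sum(len(m.records) for m in memories.values()))  # 5
```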