TUI Simulation

The TUI simulation system enables headless testing of TUI behaviors without launching the interactive interface. This is essential for CI/CD pipelines and automated testing.

Overview

The simulation system provides:
  • Headless execution - Run TUI flows without interactive UI
  • Mock provider - Deterministic responses for testing
  • Event capture - Record and replay TUI events
  • Assertions - Validate expected states and transitions
  • JSONL logging - Structured event logs for analysis

Python Usage

TuiOrchestrator

The TuiOrchestrator provides unified event handling for both TUI and headless modes.

import asyncio
from praisonai.cli.features.tui import TuiOrchestrator
from praisonai.cli.features.tui.orchestrator import OutputMode
from praisonai.cli.features.queue import QueueConfig

async def main():
    # Create orchestrator
    config = QueueConfig(enable_persistence=False)
    orchestrator = TuiOrchestrator(
        queue_config=config,
        output_mode=OutputMode.PRETTY,
        debug=True,
    )
    
    # Capture events
    events = []
    orchestrator.add_event_callback(lambda e: events.append(e))
    
    # Start orchestrator
    await orchestrator.start(session_id="test-session")
    
    # Perform actions
    orchestrator.set_model("gpt-4")
    orchestrator.navigate_screen("queue")
    orchestrator.set_focus("queue-panel")
    
    # Get snapshot
    snapshot = orchestrator.get_snapshot()
    print(f"Model: {snapshot['model']}")
    print(f"Screen: {snapshot['current_screen']}")
    
    # Render pretty snapshot
    print(orchestrator.render_snapshot())
    
    # Stop orchestrator
    await orchestrator.stop()
    
    print(f"Captured {len(events)} events")

asyncio.run(main())
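
Event callbacks are plain callables, so they can also persist events as they arrive. The sketch below appends one JSON line per event; the attribute names (timestamp, event_type, run_id, data) are assumptions mirroring the JSONL Event Format section further down, so adjust them to the actual TUIEvent fields.

import json

def make_jsonl_logger(path):
    """Return an event callback that appends one JSON line per event."""
    log_file = open(path, "a")

    def on_event(event):
        # Field names assumed from the JSONL event format documented below;
        # getattr with defaults keeps unknown event shapes from crashing.
        record = {
            "timestamp": getattr(event, "timestamp", None),
            "event_type": str(getattr(event, "event_type", "")),
            "run_id": getattr(event, "run_id", None),
            "data": getattr(event, "data", {}),
        }
        log_file.write(json.dumps(record, default=str) + "\n")
        log_file.flush()

    return on_event

# Register alongside the in-memory capture shown above:
# orchestrator.add_event_callback(make_jsonl_logger("events.jsonl"))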

SimulationRunner

Run scripted simulations with assertions.

import asyncio
from praisonai.cli.features.tui import TuiOrchestrator, SimulationRunner
from praisonai.cli.features.tui.orchestrator import OutputMode
from praisonai.cli.features.queue import QueueConfig

async def main():
    config = QueueConfig(enable_persistence=False)
    orchestrator = TuiOrchestrator(
        queue_config=config,
        output_mode=OutputMode.SILENT,
    )
    
    runner = SimulationRunner(orchestrator, assert_mode=True)
    
    # Define simulation script
    script = {
        "session_id": "sim-test",
        "model": "gpt-4o-mini",
        "steps": [
            # Navigate screens
            {"action": "navigate", "args": {"screen": "main"}},
            {"action": "focus", "args": {"widget": "composer"}},
            
            # Change model with assertion
            {
                "action": "model",
                "args": {"model": "gpt-4"},
                "expected": {"model": "gpt-4"}
            },
            
            # Navigate to queue
            {"action": "navigate", "args": {"screen": "queue"}},
            
            # Take a snapshot
            {"action": "snapshot"},
            
            # Wait
            {"action": "sleep", "args": {"seconds": 0.5}},
        ]
    }
    
    # Run simulation
    success = await runner.run_script(script)
    
    # Get results
    summary = runner.get_summary()
    print(f"Success: {success}")
    print(f"Assertions passed: {summary['assertions_passed']}")
    print(f"Assertions failed: {summary['assertions_failed']}")
    
    if summary['errors']:
        for error in summary['errors']:
            print(f"Error: {error}")

asyncio.run(main())

Simulation Script Format

Scripts can be written in YAML or JSON:

# simulation_script.yaml
session_id: test-session
model: gpt-4o-mini

steps:
  # Navigate to a screen
  - action: navigate
    args:
      screen: main  # main, queue, settings, session
  
  # Set focus to a widget
  - action: focus
    args:
      widget: composer  # composer, chat, queue-panel
  
  # Change model
  - action: model
    args:
      model: gpt-4
    expected:
      model: gpt-4  # Assert model changed
  
  # Submit a message (requires real or mock provider)
  - action: submit
    args:
      content: "Hello, world!"
      agent: Assistant
  
  # Wait for condition
  - action: wait
    args:
      condition: idle  # idle, run
      timeout: 30
  
  # Cancel current run
  - action: cancel
    args:
      run_id: current  # or specific run_id
  
  # Retry a failed run
  - action: retry
    args:
      run_id: abc123
  
  # Print snapshot
  - action: snapshot
  
  # Sleep
  - action: sleep
    args:
      seconds: 1.0
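
A script file can be loaded and passed to SimulationRunner.run_script(), which takes the same structure as the inline dict shown earlier. A minimal sketch, assuming PyYAML is installed:

import asyncio
import yaml

from praisonai.cli.features.tui import TuiOrchestrator, SimulationRunner
from praisonai.cli.features.tui.orchestrator import OutputMode
from praisonai.cli.features.queue import QueueConfig

async def run_from_file(path: str) -> bool:
    # Parse the YAML script into the dict form run_script() accepts
    with open(path) as f:
        script = yaml.safe_load(f)

    orchestrator = TuiOrchestrator(
        queue_config=QueueConfig(enable_persistence=False),
        output_mode=OutputMode.SILENT,
    )
    runner = SimulationRunner(orchestrator, assert_mode=True)
    return await runner.run_script(script)

success = asyncio.run(run_from_file("simulation_script.yaml"))
print(f"Success: {success}")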

MockProvider

Use the mock provider for deterministic testing.

from praisonai.cli.features.tui import MockProvider, MockProviderConfig
from praisonai.cli.features.tui.mock_provider import MockResponse

# Create with custom responses
config = MockProviderConfig(
    seed=42,  # Deterministic seed
    default_delay=0.05,
    simulate_errors=False,
    responses={
        "hello": MockResponse(
            content="Hello! How can I help?",
            tokens=10,
            cost=0.0001,
        ),
        "error": MockResponse(
            content="",
            error="Simulated error",
        ),
        "tool": MockResponse(
            content="Using a tool...",
            tool_calls=[{
                "id": "call_001",
                "name": "search",
                "arguments": {"query": "test"},
            }],
        ),
    }
)

provider = MockProvider(config)

# Generate response
import asyncio

async def test():
    chunks = []
    result = await provider.generate(
        "hello",
        stream=True,
        on_chunk=lambda c: chunks.append(c),
    )
    
    print(f"Content: {result['content']}")
    print(f"Tokens: {result['tokens']}")
    print(f"Cost: ${result['cost']:.6f}")
    print(f"Chunks: {len(chunks)}")

asyncio.run(test())
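
The "error" and "tool" entries above exercise failure and tool-call handling. A sketch of checking them, continuing the example; whether errors are returned in the result dict or raised, and the exact key names, are assumptions rather than confirmed API:

async def test_error_and_tools():
    # The "error" prompt is keyed to a MockResponse with error set
    try:
        result = await provider.generate("error", stream=False)
    except Exception as exc:  # the provider may raise instead of returning
        print(f"Simulated failure raised: {exc}")
    else:
        if result.get("error"):  # assumed key, mirroring MockResponse.error
            print(f"Simulated failure returned: {result['error']}")

    # The "tool" prompt is keyed to a MockResponse with tool_calls set
    result = await provider.generate("tool", stream=False)
    for call in result.get("tool_calls", []):  # assumed key
        print(f"Tool call: {call['name']}({call['arguments']})")

asyncio.run(test_error_and_tools())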

UIStateModel

The state model tracks all UI state for snapshots.

from praisonai.cli.features.tui import UIStateModel
from praisonai.cli.features.tui.events import TUIEvent, TUIEventType

state = UIStateModel(
    session_id="test123",
    model="gpt-4",
    max_messages=1000,
)

# Add messages
state.add_message("user", "Hello")
state.add_message("assistant", "Hi there!", run_id="run123")

# Add events
event = TUIEvent(event_type=TUIEventType.MESSAGE_SUBMITTED)
state.add_event(event)

# Get snapshot
snapshot = state.to_snapshot()
print(f"Messages: {snapshot['message_count']}")
print(f"Model: {snapshot['model']}")

# Render pretty
print(state.render_snapshot_pretty())

CLI Usage

Run Simulation

# Run with mock provider (default)
praisonai tui simulate script.yaml

# Run with real LLM (requires env var)
PRAISONAI_REAL_LLM=1 praisonai tui simulate script.yaml --real-llm

# Run with assertions
praisonai tui simulate script.yaml --assert

# Output as JSONL
praisonai tui simulate script.yaml --jsonl

Get Snapshot

# Current state
praisonai tui snapshot

# For specific session
praisonai tui snapshot --session abc123

# JSON output
praisonai tui snapshot --json

Trace Events

# Trace a session
praisonai tui trace abc123

# Follow new events
praisonai tui trace abc123 --follow

# Limit events
praisonai tui trace abc123 --limit 20

Event Types

The simulation system captures these event types:

Event Type           Description
session_started      Session initialized
message_submitted    User message submitted
run_queued           Run added to queue
run_started          Run execution started
output_chunk         Streaming output chunk
run_completed        Run finished successfully
run_cancelled        Run was cancelled
error_occurred       Error during execution
screen_changed       Navigation to new screen
focus_changed        Focus moved to widget
status_updated       Status bar updated
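
Combined with events captured through add_event_callback(), these types make post-run checks straightforward. A small sketch; the event_type and run_id attributes, and enum member names such as OUTPUT_CHUNK, are assumed to mirror the table above:

from collections import Counter

from praisonai.cli.features.tui.events import TUIEventType

def summarize_events(events):
    """Count captured events by type (event_type attribute assumed)."""
    counts = Counter(str(getattr(e, "event_type", "unknown")) for e in events)
    for event_type, count in sorted(counts.items()):
        print(f"{event_type}: {count}")

def chunks_for_run(events, run_id):
    """Collect streaming chunks for one run (attribute names assumed)."""
    return [
        e for e in events
        if getattr(e, "event_type", None) == TUIEventType.OUTPUT_CHUNK
        and getattr(e, "run_id", None) == run_id
    ]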

JSONL Event Format

Events are logged in JSONL format:

{"timestamp": 1704067200.0, "trace_id": "abc123", "session_id": "sess456", "event_type": "message_submitted", "run_id": null, "data": {"content": "Hello"}}
{"timestamp": 1704067200.1, "trace_id": "abc123", "session_id": "sess456", "event_type": "run_started", "run_id": "run789", "data": {}}
{"timestamp": 1704067200.2, "trace_id": "abc123", "session_id": "sess456", "event_type": "output_chunk", "run_id": "run789", "data": {"content": "Hi"}}

Best Practices

  1. Use mock provider for CI - Avoid real API calls in automated tests
  2. Set deterministic seed - Ensure reproducible results
  3. Use assertions - Validate expected state transitions
  4. Capture events - Log events for debugging
  5. Test error paths - Include error scenarios in scripts, as in the sketch below
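
An error-path script can pair the submit action with the MockProvider's "error" response from earlier. A sketch; the prompt-to-response matching and the shape of the resulting failure are assumptions based on the MockProviderConfig example above:

error_script = {
    "session_id": "error-path-test",
    "model": "gpt-4o-mini",
    "steps": [
        {"action": "focus", "args": {"widget": "composer"}},
        # "error" matches the MockResponse configured with error="Simulated error"
        {"action": "submit", "args": {"content": "error", "agent": "Assistant"}},
        # Let the run settle, then inspect the resulting state
        {"action": "wait", "args": {"condition": "idle", "timeout": 10}},
        {"action": "snapshot"},
    ],
}

# Run with assert_mode enabled and inspect summary["errors"] afterwards:
# success = await runner.run_script(error_script)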