The --max-tokens flag controls the maximum number of output tokens for agent responses.

Quick Start

praisonai "Write a detailed essay" --max-tokens 8000

Usage

praisonai "<prompt>" --max-tokens <number> [options]

Options

| Option | Description | Default |
| --- | --- | --- |
| --max-tokens | Maximum output tokens | 16000 |
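
The flag follows standard CLI conventions: pass an integer, or omit it to use the default of 16000. The sketch below is illustrative only (it is not PraisonAI's actual argument parser) and shows how an option with these semantics is typically defined.

```python
# Illustrative sketch only, not PraisonAI's implementation:
# a --max-tokens option with the default of 16000 from the table above.
import argparse

parser = argparse.ArgumentParser(prog="praisonai")
parser.add_argument("prompt", help="The prompt to send to the agent")
parser.add_argument(
    "--max-tokens",
    type=int,
    default=16000,  # used when the flag is omitted
    help="Maximum output tokens for the agent response",
)

args = parser.parse_args(["Write a detailed essay", "--max-tokens", "8000"])
print(args.max_tokens)  # 8000; omitting the flag would print 16000
```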

Examples

Short Response

praisonai "Summarize in brief" --max-tokens 500

Long-form Content

praisonai "Write comprehensive documentation" --max-tokens 32000

With Research

praisonai "Deep research on AI" --research --max-tokens 20000

Token Limits by Model

| Model | Max Output Tokens |
| --- | --- |
| gpt-4o | 16,384 |
| gpt-4o-mini | 16,384 |
| claude-3-5-sonnet | 8,192 |
| gemini-2.0-flash | 8,192 |
If --max-tokens is set higher than the model's limit, the value is automatically capped at that limit.
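
As a rough illustration of that capping behavior, the sketch below clamps a requested value to the limits listed above. It is not PraisonAI's internal code; the lookup table and the effective_max_tokens function are assumptions for demonstration.

```python
# Illustrative sketch only: cap a requested --max-tokens value at the
# model's maximum output tokens (limits mirror the table above).
MODEL_MAX_OUTPUT_TOKENS = {
    "gpt-4o": 16_384,
    "gpt-4o-mini": 16_384,
    "claude-3-5-sonnet": 8_192,
    "gemini-2.0-flash": 8_192,
}

def effective_max_tokens(model: str, requested: int) -> int:
    """Return the requested token budget, capped at the model's limit if known."""
    limit = MODEL_MAX_OUTPUT_TOKENS.get(model)
    return min(requested, limit) if limit else requested

print(effective_max_tokens("claude-3-5-sonnet", 20_000))  # 8192 (capped)
print(effective_max_tokens("gpt-4o", 8_000))              # 8000 (unchanged)
```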