Platform pagination enables efficient retrieval of large datasets through standardized limit and offset query parameters across all list endpoints.

Quick Start

1. Simple Page Request

Get the first 10 items from any list endpoint:
TOKEN="your-jwt-token"
WS_ID="workspace-id"

curl -s "http://localhost:8000/api/v1/workspaces/$WS_ID/issues/?limit=10&offset=0" \
  -H "Authorization: Bearer $TOKEN"
2. Iterate Through Pages

Use offset to get subsequent pages:
# First page (items 0-9)
curl -s "http://localhost:8000/api/v1/workspaces/$WS_ID/issues/?limit=10&offset=0"

# Second page (items 10-19)
curl -s "http://localhost:8000/api/v1/workspaces/$WS_ID/issues/?limit=10&offset=10"

# Third page (items 20-29)
curl -s "http://localhost:8000/api/v1/workspaces/$WS_ID/issues/?limit=10&offset=20"
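The offset for a given page follows directly from the page size: page n (zero-based) starts at item n × limit. As a small illustrative sketch (the `page_params` helper is not part of the API):

```python
def page_params(page: int, limit: int = 10) -> dict:
    """Return limit/offset query parameters for a zero-based page number."""
    if page < 0:
        raise ValueError("page must be >= 0")
    return {"limit": limit, "offset": page * limit}

# Page 2 with a page size of 10 starts at item 20.
print(page_params(2))  # {'limit': 10, 'offset': 20}
```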

How It Works

Parameter  Type  Default  Range   Description
limit      int   50       1–200   Maximum items to return
offset     int   0        ≥ 0     Number of items to skip
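Since limit is capped at 200 and offset must be non-negative, a client can normalize parameters before sending a request rather than relying on server-side clamping. A minimal sketch, assuming the ranges in the table above:

```python
def normalize_params(limit: int = 50, offset: int = 0) -> dict:
    """Clamp pagination parameters to the server's accepted ranges."""
    limit = max(1, min(limit, 200))  # limit must fall in 1-200
    offset = max(0, offset)          # offset must be >= 0
    return {"limit": limit, "offset": offset}

print(normalize_params(limit=999))           # {'limit': 200, 'offset': 0}
print(normalize_params(limit=0, offset=-5))  # {'limit': 1, 'offset': 0}
```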

Paginated Endpoints

All platform list endpoints support pagination:
Endpoint                                               Description            Default Limit  Max Limit
GET /api/v1/workspaces/                                List workspaces        50             200
GET /api/v1/workspaces/{ws_id}/projects/               List projects          50             200
GET /api/v1/workspaces/{ws_id}/issues/                 List issues            50             200
GET /api/v1/workspaces/{ws_id}/agents/                 List agents            50             200
GET /api/v1/workspaces/{ws_id}/activity                List activities        50             200
GET /api/v1/workspaces/{ws_id}/issues/{id}/activity    List issue activities  50             200
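Because the workspace-scoped list endpoints share the same shape, a single helper can build paginated URLs for most of them. A sketch (the `list_url` helper is illustrative, not part of the API; note the activity endpoints omit the trailing slash and would need a small tweak):

```python
BASE = "http://localhost:8000/api/v1"

def list_url(ws_id: str, resource: str, limit: int = 50, offset: int = 0) -> str:
    """Build a paginated list URL for a workspace-scoped resource."""
    return f"{BASE}/workspaces/{ws_id}/{resource}/?limit={limit}&offset={offset}"

print(list_url("ws-123", "issues", limit=10))
# http://localhost:8000/api/v1/workspaces/ws-123/issues/?limit=10&offset=0
```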

Common Patterns

Fetch every item in a collection by looping until a short or empty page is returned:
import asyncio
import httpx

async def fetch_all_issues(ws_id: str, token: str):
    """Fetch all issues using pagination."""
    base = "http://localhost:8000/api/v1"
    headers = {"Authorization": f"Bearer {token}"}
    all_issues = []
    offset = 0
    page_size = 50

    async with httpx.AsyncClient() as client:
        while True:
            resp = await client.get(
                f"{base}/workspaces/{ws_id}/issues/",
                params={"limit": page_size, "offset": offset},
                headers=headers
            )
            page = resp.json()
            if not page:
                break  # No more results
            all_issues.extend(page)
            if len(page) < page_size:
                break  # Last page
            offset += page_size

    return all_issues

async def main():
    issues = await fetch_all_issues("your-ws-id", "your-token")
    print(f"Total issues: {len(issues)}")

asyncio.run(main())

Best Practices

- Page sizes: Use larger page sizes (100–200) for bulk operations and smaller sizes (10–50) for user interfaces. Consider network latency and memory constraints.
- End of results: Always check for an empty array [] to detect the end of results. Don't rely solely on the response length being less than the page size.
- Reliability: Add exponential backoff for failed requests and timeout handling for reliable pagination in production environments.
- Totals: Pagination doesn't return total counts. If needed, make separate count requests or cache totals to avoid expensive calculations.
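The backoff advice above can be sketched with a plain retry loop. `fetch_page` here is a stand-in for a real HTTP call such as the httpx request in the example above; the delays and retry count are illustrative:

```python
import time

def fetch_with_backoff(fetch_page, offset, retries=3, base_delay=0.5):
    """Retry a page fetch with exponential backoff.

    fetch_page is any callable that takes an offset and may raise on
    transient failure; delays double on each attempt (0.5s, 1s, 2s, ...).
    """
    for attempt in range(retries + 1):
        try:
            return fetch_page(offset)
        except Exception:
            if attempt == retries:
                raise  # out of retries: surface the error
            time.sleep(base_delay * (2 ** attempt))

# Stub that fails twice, then succeeds -- stands in for a flaky endpoint.
calls = {"n": 0}
def flaky(offset):
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return [{"id": offset}]

print(fetch_with_backoff(flaky, 0, base_delay=0.01))  # [{'id': 0}]
```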

Testing

Test pagination behavior with these commands:
# Test boundary conditions
pytest tests/test_new_gaps.py::TestPagination -v

# Test API integration
pytest tests/test_new_api_integration.py::TestPaginationAPI -v

# Manual testing with curl
curl -s "http://localhost:8000/api/v1/workspaces/test/issues/?limit=1&offset=0"
curl -s "http://localhost:8000/api/v1/workspaces/test/issues/?limit=999&offset=0"  # Should limit to 200
