Testing Guide

How to Test Bonito

A step-by-step guide to validate every feature of the platform. Work through each section in order, or jump to the area you want to test.

Prerequisites

  • An account on getbonito.com
  • At least one cloud provider account (AWS, Azure, or GCP) with AI services enabled
  • Provider credentials ready — see the Docs for what's needed per provider
  • curl and jq for API testing from the terminal (optional)

1. Authentication

Register a new account

POST /api/auth/register

Create a fresh account to test the full onboarding flow.

  • Go to /register and create an account with a valid email + password
  • You should be redirected to the dashboard after registration
  • Check that GET /api/auth/me returns your user profile
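The registration flow above can also be exercised from the terminal. A minimal sketch — the request body fields (email, password) are assumptions based on a typical register payload, so check the API docs if the request is rejected:

```shell
# Register a new account (field names are assumed)
curl -s -X POST https://getbonito.com/api/auth/register \
  -H "Content-Type: application/json" \
  -d '{"email":"you@example.com","password":"yourpassword"}'
```

A 2xx response with a user object indicates the onboarding flow works end to end.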

Login / Logout

POST /api/auth/login

Verify session management works correctly.

  • Log out, then log back in with the same credentials
  • Try logging in with a wrong password — should show 'Invalid credentials'
  • After login, refreshing the page should keep you logged in (JWT stored)
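To confirm the session also holds outside the browser, log in for a token and hit the profile endpoint with it (same pattern as the API Testing section at the bottom of this guide):

```shell
# Log in, then verify the token against the profile endpoint
TOKEN=$(curl -s -X POST https://getbonito.com/api/auth/login \
  -H "Content-Type: application/json" \
  -d '{"email":"you@example.com","password":"yourpassword"}' \
  | jq -r '.access_token')

curl -s https://getbonito.com/api/auth/me \
  -H "Authorization: Bearer $TOKEN" | jq
```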

2. Connect Providers

AWS Bedrock

POST /api/providers/

Connect your AWS account to pull Bedrock models.

  • Go to Providers page → click Add Provider → select AWS
  • Enter Access Key ID, Secret Access Key, and Region (e.g. us-east-1)
  • Provider should validate (STS get-caller-identity) and show as 'Active'
  • Models page should now show AWS Bedrock models (Claude, Llama, Titan, etc.)
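The same connection can be scripted against POST /api/providers/. A sketch only — the body shape (a provider type plus a credentials object) and the credential field names are assumptions, so verify them against the provider docs:

```shell
# Connect AWS (credential field names are assumptions)
curl -s -X POST https://getbonito.com/api/providers/ \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"provider":"aws","credentials":{"access_key_id":"AKIA...","secret_access_key":"...","region":"us-east-1"}}'
```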

Azure AI Foundry

POST /api/providers/

Connect your Azure subscription for OpenAI and Cognitive Services models.

  • Add Provider → Azure → Enter Tenant ID, Client ID, Client Secret, Subscription ID
  • Provider should validate (OAuth2 token acquisition) and show as 'Active'
  • Models page should show Azure OpenAI models (GPT-4, GPT-4o, etc.)
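Scripted, the Azure variant follows the same pattern; the credential field names below are assumptions matching the four values the UI asks for:

```shell
# Connect Azure (field names are assumptions)
curl -s -X POST https://getbonito.com/api/providers/ \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"provider":"azure","credentials":{"tenant_id":"...","client_id":"...","client_secret":"...","subscription_id":"..."}}'
```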

Google Vertex AI

POST /api/providers/

Connect GCP for Gemini and other Vertex AI models.

  • Add Provider → GCP → Upload or paste your Service Account JSON
  • Provider should validate (project lookup) and show as 'Active'
  • Models page should show Vertex AI models (Gemini, PaLM, etc.)
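For GCP the credential is a whole Service Account JSON file, so a scripted connect has to embed it as a string. A sketch, assuming a service_account_json field and a local key file named sa-key.json (both are assumptions):

```shell
# Connect GCP by embedding the Service Account key as a JSON string
SA_JSON=$(jq -c 'tojson' sa-key.json)
curl -s -X POST https://getbonito.com/api/providers/ \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d "{\"provider\":\"gcp\",\"credentials\":{\"service_account_json\":$SA_JSON}}"
```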

3. Model Catalog

View and filter models

GET /api/models/

All synced models should appear with correct provider tabs.

  • Models page shows all synced models from connected providers
  • Filter tabs show ALL connected providers (AWS, Azure, GCP)
  • If a provider tab shows ⚠️, click Sync to re-fetch models
  • Search bar filters by model name or ID
  • Click a model card to see details (pricing, capabilities, context window)

Model Playground

POST /api/models/{id}/playground

Test models live from the browser.

  • Click a model → open the Playground tab
  • Send a test message — response should stream back
  • Check that token usage and cost appear after the response
  • Try with different temperature / max token settings
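The playground settings can also be driven from the terminal. A sketch — the body fields (prompt, temperature, max_tokens) are assumptions, and MODEL_ID is a placeholder for an ID from your Models page:

```shell
# Invoke a model through the playground endpoint (body shape assumed)
curl -s -X POST https://getbonito.com/api/models/MODEL_ID/playground \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"prompt":"Hello","temperature":0.7,"max_tokens":256}'
```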

Model Comparison

POST /api/models/compare

Compare responses from multiple models side-by-side.

  • Select 2-4 models for comparison
  • Send the same prompt to all — responses appear side by side
  • Compare latency, token usage, and cost across models
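A comparison can be kicked off via the API as well. A sketch, assuming the endpoint accepts a list of model IDs and a shared prompt (field names are guesses):

```shell
# Compare 2-4 models on the same prompt (field names assumed)
curl -s -X POST https://getbonito.com/api/models/compare \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"model_ids":["MODEL_A","MODEL_B"],"prompt":"Explain DNS in one sentence"}'
```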

4. API Gateway

Generate an API key

POST /api/gateway/keys

Create a gateway key to route requests through Bonito.

  • Go to Gateway page → click 'Create Key'
  • Copy the generated key (bn-xxx format)
  • Key should appear in the keys list with creation date
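Key creation can be scripted too. A sketch — the response field holding the key ('.key' below) is an assumption:

```shell
# Create a gateway key and capture it (response field name assumed)
KEY=$(curl -s -X POST https://getbonito.com/api/gateway/keys \
  -H "Authorization: Bearer $TOKEN" | jq -r '.key')
echo "$KEY"   # should start with bn-
```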

Make a request through the gateway

POST /v1/chat/completions

Test the OpenAI-compatible proxy endpoint.

  • Use curl or any OpenAI SDK pointed at your Bonito gateway URL
  • Send a chat completion request with your bn-xxx key
  • Response should come back in OpenAI format
  • Check Gateway → Logs to see the request logged with cost + tokens

Test from the code snippets

The gateway page shows ready-to-use code snippets.

  • Copy the Python snippet and run it locally
  • Copy the curl snippet and run it in your terminal
  • Both should return a valid chat completion response

5. Routing Policies

Create a routing policy

POST /api/routing-policies/

Set up intelligent routing between models/providers.

  • Go to Routing → Create Policy
  • Select a strategy: cost-optimized, latency-optimized, balanced, failover, or A/B test
  • Assign primary and fallback models
  • For A/B testing: set percentage weights (must sum to 100)
  • Save the policy — it should appear in the list
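Policy creation can be scripted against POST /api/routing-policies/. A sketch of an A/B policy — the field names (name, strategy, targets, weight) are assumptions, but the weights must sum to 100 as noted above:

```shell
# Create an A/B routing policy (field names assumed; weights sum to 100)
curl -s -X POST https://getbonito.com/api/routing-policies/ \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"name":"ab-test","strategy":"ab_test","targets":[{"model":"MODEL_A","weight":70},{"model":"MODEL_B","weight":30}]}'
```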

Test a routing policy

POST /api/routing-policies/{id}/test

Dry-run model selection to verify routing logic.

  • Click 'Test' on a policy — it should show which model would be selected
  • For failover: verify the fallback model is picked when primary is down
  • For A/B: run multiple tests and verify the distribution matches weights
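The A/B distribution check is easier to eyeball from a loop. A sketch — POLICY_ID is a placeholder and '.selected_model' is an assumed response field:

```shell
# Run the dry-run test 50 times and tally which model is selected
for i in $(seq 1 50); do
  curl -s -X POST https://getbonito.com/api/routing-policies/POLICY_ID/test \
    -H "Authorization: Bearer $TOKEN" | jq -r '.selected_model'
done | sort | uniq -c
```

With 70/30 weights, the counts should land roughly at 35 and 15 over 50 runs.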

6. Deployments

Create a deployment

POST /api/deployments/

Deploy a model directly into your cloud from the Bonito UI.

  • Go to Deployments page → click 'Create Deployment'
  • Select a provider and model
  • AWS: choose On-demand or Provisioned Throughput (the latter requires a capacity commitment)
  • Azure: set TPM capacity for the deployment (Standard or GlobalStandard tier)
  • GCP: serverless by default — verify access
  • Deployment should appear in the list with status updates
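Deployment creation can be scripted as well. A sketch for the AWS on-demand case — the body fields (provider, model, capacity_type) are assumptions, so check the API docs for the real schema:

```shell
# Create an AWS on-demand deployment (field names assumed)
curl -s -X POST https://getbonito.com/api/deployments/ \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"provider":"aws","model":"MODEL_ID","capacity_type":"on_demand"}'
```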

Monitor deployment status

GET /api/deployments/{id}

Check that deployment lifecycle notifications work.

  • Deployment status should update: Creating → Active (or Failed)
  • In-app notification should appear for deployment status changes
  • Deployment details page shows provider, model, capacity, and status
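The Creating → Active transition can be watched from the terminal with a simple poll. A sketch — DEPLOYMENT_ID is a placeholder and the '.status' field is an assumption:

```shell
# Poll until the deployment leaves the Creating state
while true; do
  STATUS=$(curl -s https://getbonito.com/api/deployments/DEPLOYMENT_ID \
    -H "Authorization: Bearer $TOKEN" | jq -r '.status')
  echo "$STATUS"
  [ "$STATUS" != "Creating" ] && break
  sleep 10
done
```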

7. Cost Intelligence

View cost dashboard

GET /api/costs/

Check real cost data from your cloud providers.

  • Costs page shows aggregated spending across all providers
  • Breakdown by provider (AWS, Azure, GCP) with charts
  • Cost forecast shows projected spending with confidence bounds

Cost recommendations

GET /api/costs/recommendations

Get optimization suggestions.

  • Recommendations endpoint returns cheaper model alternatives
  • Cross-provider routing recommendations appear if applicable

8. Compliance & Governance

Run compliance checks

GET /api/compliance/checks

Verify security posture across all connected providers.

  • Compliance page shows check results by provider
  • AWS: Bedrock logging, IAM permissions, EBS encryption, CloudTrail
  • Azure: Network rules, RBAC roles, diagnostic settings
  • GCP: SA permissions, audit logging, VPC Service Controls
  • Framework mapping: SOC2, HIPAA, GDPR, ISO 27001

View audit trail

GET /api/audit/

All sensitive actions are logged.

  • Audit page shows a timeline of actions (logins, provider connects, invocations)
  • Each entry has timestamp, user, action type, and details

9. Analytics & Usage

Usage dashboard

GET /api/analytics/overview

Track API usage and trends.

  • Analytics page shows overview cards (requests, tokens, cost, avg latency)
  • Usage charts show daily/weekly/monthly trends
  • Cost breakdown by provider and model

10. Notifications & Alerts

Notification bell

GET /api/notifications/

Check that in-app notifications work.

  • Bell icon in the header shows unread count
  • Click to see notification list with read/unread states
  • Mark notifications as read

Alert rules

POST /api/alert-rules/

Set up budget and compliance alerts.

  • Create an alert rule with a budget threshold
  • Set notification preferences (email, in-app, weekly digest)
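Alert rules can be created via the API too. A sketch only — the body fields (type, threshold_usd, channels) are guesses at a plausible schema:

```shell
# Create a budget alert rule (field names assumed)
curl -s -X POST https://getbonito.com/api/alert-rules/ \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"type":"budget","threshold_usd":100,"channels":["email","in_app"]}'
```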

11. AI Copilot

Chat with the copilot

POST /api/ai/command

Test the Groq-powered AI assistant.

  • Click the AI Copilot panel (or Cmd+K)
  • Ask: 'What are my costs this month?'
  • Ask: 'Which models are available?'
  • Ask: 'Run a compliance check'
  • Responses should be context-aware (knows your providers, models, costs)
  • Quick action buttons should work (Cost Summary, Compliance Check, etc.)
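The copilot endpoint can be exercised directly as well. A sketch — the request field name ('message') is an assumption:

```shell
# Send a natural-language command to the copilot (field name assumed)
curl -s -X POST https://getbonito.com/api/ai/command \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"message":"What are my costs this month?"}'
```

The response should reference your actual providers and spend, confirming the context-awareness claim above.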

API Testing (curl)

Test the API directly from your terminal. Copy and run these commands:

# Health check
curl https://getbonito.com/api/health

# Login (replace with your credentials)
TOKEN=$(curl -s -X POST https://getbonito.com/api/auth/login \
  -H "Content-Type: application/json" \
  -d '{"email":"you@example.com","password":"yourpassword"}' \
  | jq -r '.access_token')

# List your providers
curl -s https://getbonito.com/api/providers/ \
  -H "Authorization: Bearer $TOKEN" | jq

# List models
curl -s https://getbonito.com/api/models/ \
  -H "Authorization: Bearer $TOKEN" | jq

# Gateway request (use your bn-xxx key)
curl -X POST https://getbonito.com/v1/chat/completions \
  -H "Authorization: Bearer bn-your-key-here" \
  -H "Content-Type: application/json" \
  -d '{"model":"claude-3-haiku","messages":[{"role":"user","content":"Hello"}]}'

For automated testing, clone the repo and run:

# Automated test suite
./scripts/test-api.sh https://getbonito.com your@email.com yourpassword

Found a bug?

Open an issue on GitHub or reach out to the team directly.