Getting Started

prompt10x Documentation

prompt10x is a prompt engineering platform where teams can version, test, and iterate on AI prompts collaboratively. Organize prompts into projects, test them against multiple models side-by-side, and deliver production prompts via REST API.

Projects

Projects are the top-level container for organizing your work. Each project can represent a product, feature, or experiment.

Project Hierarchy

  • Project: A workspace (e.g., "Customer Support Bot")
  • Session: A prompt group within a project (e.g., "Greeting Flow")
  • Version: Every edit creates a new version. Diff, compare, rollback anytime.

Sessions & Prompts

Sessions group related prompts within a project. Each session can have multiple prompt versions — every edit creates a new version automatically.

Sessions

  • Create multiple sessions to organize prompts by use case
  • Rename sessions inline with click-to-edit
  • Each session tracks its own prompt version history
  • A session's ID is what you reference when fetching prompts via the API

Prompt Versioning

  • Every save creates a new version (v1, v2, v3...)
  • Browse full version history with author and timestamp
  • Click any version to load it in the editor
  • Copy any version's content with one click
  • Latest version is automatically used in tests and API delivery
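The versioning model above can be sketched as an append-only history. This is an illustration of the semantics only, not the platform's actual storage:

```python
# Illustrative sketch of prompt10x's versioning semantics: every
# save appends a new immutable version, and the latest version is
# what tests and API delivery use.
class Session:
    def __init__(self, name):
        self.name = name
        self.versions = []  # append-only version history

    def save(self, content):
        """Every save creates a new version (v1, v2, v3...)."""
        self.versions.append({"version": len(self.versions) + 1, "content": content})

    def latest(self):
        """The latest version is used in tests and API delivery."""
        return self.versions[-1]

s = Session("Greeting Flow")
s.save("You are a support agent.")
s.save("You are a friendly support agent.")
assert s.latest()["version"] == 2
```

Rollback fits the same model: loading an old version and saving it simply creates a new latest version with the old content, so history is never rewritten.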

Collaboration

Invite team members to your projects with role-based access control.

Team Roles

| Role   | View | Edit Prompts | Test | Manage Models/Tools | Manage Members |
|--------|------|--------------|------|---------------------|----------------|
| Owner  | ✓    | ✓            | ✓    | ✓                   | ✓              |
| Editor | ✓    | ✓            | ✓    |                     |                |
| Viewer | ✓    |              |      |                     |                |

Inviting Members

  1. Open your project and go to Settings → Members
  2. Enter the team member's email address
  3. Select a role (Editor or Viewer)
  4. Click Invite — they'll appear as Pending until they accept

Playground

The playground lets you run prompts against multiple models side-by-side in up to 4 parallel lanes. Compare outputs, iterate fast.

Multi-Lane Testing

  • Open up to 4 lanes simultaneously
  • Each lane has its own session, prompt version, and model selection
  • Compare how different prompt versions perform with the same input
  • Compare how the same prompt performs across different models

Input Modes

  • Broadcast Mode (default): Send the same message to all lanes at once. Perfect for comparing outputs.
  • Individual Mode: Send messages to specific lanes independently. Useful for exploring different conversation paths.

Playground State

Your playground configuration (lanes, selected models, sessions) is automatically saved per project. Close the browser and come back — everything is exactly where you left it. Test runs are linked to lanes and can be resumed.

Test Panel

The test panel provides a focused testing interface for a single prompt version against a specific model.

Features

  • Select any prompt version and model for testing
  • Real-time streaming responses via Server-Sent Events (SSE)
  • Tool call visualization — see tool name, arguments, and results in real-time
  • Full conversation history preserved per test run
  • Duration tracking for performance benchmarking
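Streamed responses arrive as Server-Sent Events. As a rough illustration of the framing (the generic SSE format, not prompt10x's exact event payloads), a stream is a sequence of `data:` lines, with each event terminated by a blank line:

```python
# Minimal parser for Server-Sent Events framing: one or more
# "data:" lines per event, terminated by a blank line. This shows
# the generic SSE wire format; the exact payloads prompt10x
# streams are not specified here.
def parse_sse(stream: str):
    events, data = [], []
    for line in stream.splitlines():
        if line.startswith("data:"):
            data.append(line[len("data:"):].strip())
        elif line == "" and data:
            events.append("\n".join(data))
            data = []
    return events

chunks = parse_sse("data: Hel\n\ndata: lo!\n\n")
assert chunks == ["Hel", "lo!"]
```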

Test Run History

Every test is recorded with its status (Completed, Failed, Running), the prompt version used, and the full conversation. Click any previous test run to resume it and continue the conversation where you left off.

AI Chat

Use AI to improve your prompts through conversation. Describe what you want and the AI will analyze your prompt and suggest refinements.

How It Works

  1. Open the Chat panel for any session
  2. The AI has access to your current prompt version
  3. Describe what you want to improve — or use a quick suggestion chip
  4. The AI analyzes your prompt and responds with improvements
  5. It can directly save improved versions using built-in tools

Quick Suggestions

  • Improve clarity and structure
  • Add edge case handling
  • Make it more concise
  • Add output format instructions

AI Agent Capabilities

The AI chat is powered by a LangGraph agent that can use tools during the conversation:

  • get_latest_prompt — Read the current prompt version
  • save_prompt — Save an improved version directly

Tool executions are visible in the chat as collapsible cards showing the tool name, arguments, and result.
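As a hypothetical sketch of the contract behind those two tools (the actual LangGraph wiring is not shown here), they amount to a read and a write against the session's version history:

```python
# Hypothetical sketch of the two built-in agent tools. The real
# agent is a LangGraph graph; here the tools are plain functions
# over an in-memory version list, just to show their contract.
versions = ["You are a support agent."]  # stand-in for a session's history

def get_latest_prompt() -> str:
    """Read the current prompt version."""
    return versions[-1]

def save_prompt(content: str) -> int:
    """Save an improved version directly; returns the new version number."""
    versions.append(content)
    return len(versions)

TOOLS = {"get_latest_prompt": get_latest_prompt, "save_prompt": save_prompt}

# A tool-call sequence as it might appear in the chat's collapsible cards:
current = TOOLS["get_latest_prompt"]()
new_version = TOOLS["save_prompt"](current + " Be concise.")
assert new_version == 2
```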

Models

Configure multiple LLM providers per project. Test your prompts against different models to find the best fit.

Adding a Model

  1. Go to Project Settings → Models
  2. Click Add Model
  3. Fill in: display name, provider, model identifier, base URL, and API key
  4. Optionally set as default model for new test lanes

Supported Providers

Any OpenAI-compatible API works. Provide the base URL and model name:

| Provider | Base URL                       | Example Model |
|----------|--------------------------------|---------------|
| OpenAI   | https://api.openai.com/v1      | gpt-4o        |
| DeepSeek | https://api.deepseek.com       | deepseek-chat |
| Groq     | https://api.groq.com/openai/v1 | llama-3.1-70b |
| Ollama   | http://localhost:11434/v1      | llama3        |

Security: API keys are encrypted before storage. They're displayed as masked dots (•••••••) in the UI and never exposed in API responses.
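Because every provider in the table speaks an OpenAI-compatible API, a model entry is essentially a (base URL, model identifier, API key) triple. A sketch of such a configuration record follows; the field names are illustrative, not prompt10x's actual schema:

```python
# Illustrative model-configuration record. OpenAI-compatible
# providers differ only in base_url and model identifier; the
# field names here are hypothetical, not prompt10x's schema.
def model_config(name, base_url, model, api_key, default=False):
    return {
        "display_name": name,
        "base_url": base_url,
        "model": model,
        "api_key": api_key,     # encrypted before storage by the platform
        "is_default": default,  # used for new test lanes
    }

local = model_config("Local Llama", "http://localhost:11434/v1", "llama3", "unused")
assert local["base_url"].endswith("/v1")
```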

Tools (Function Calling)

Define tools that the LLM can call during test runs. This lets you test prompts that use function calling without needing real backend integrations.

Creating a Tool

  1. Go to Project Settings → Tools
  2. Click Add Tool
  3. Define: name, description (this is what the LLM sees), parameter schema (JSON), and mock response
  4. Enable or disable the tool — disabled tools are not passed to the LLM

Tool Schema Example

```json
{
  "name": "get_weather",
  "description": "Get current weather for a city",
  "parameters_schema": {
    "type": "object",
    "properties": {
      "city": {
        "type": "string",
        "description": "City name"
      }
    },
    "required": ["city"]
  },
  "mock_response": "Sunny, 24°C"
}
```

The mock response is returned when the LLM calls this tool during testing.
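The dispatch during a test run can be sketched as a simple lookup: when the model emits a tool call, the tool's stored mock response comes back regardless of the arguments passed (the code below is an illustration of that behavior, not the platform's implementation):

```python
# Sketch of how mock tools answer model tool calls during a test
# run: the tool's stored mock_response is returned verbatim, and
# disabled tools are never available to the LLM.
TOOLS = {
    "get_weather": {
        "description": "Get current weather for a city",
        "mock_response": "Sunny, 24°C",
        "enabled": True,
    },
}

def run_tool_call(name: str, arguments: dict) -> str:
    tool = TOOLS.get(name)
    if tool is None or not tool["enabled"]:
        raise KeyError(f"tool {name!r} is not available to the LLM")
    return tool["mock_response"]

assert run_tool_call("get_weather", {"city": "Paris"}) == "Sunny, 24°C"
```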

OpenAPI Import

Have an existing API? Import tools in bulk from an OpenAPI/Swagger specification. The importer parses your schema and creates tool definitions automatically, skipping duplicates.

How Tools Work in Tests

When a model decides to call a tool during a test run, you'll see a real-time card showing the tool name, arguments the model passed, and the mock response returned. The model then uses that response to continue generating its answer — exactly like it would in production.

API Keys

API keys let you fetch prompts from your applications at runtime. Each key is scoped to a single project.

Creating an API Key

  1. Open a project and go to Settings → API Keys
  2. Click Generate Key and give it a name
  3. Copy the key immediately — it won't be shown again
  4. Use it as a Bearer token in your API requests

Security: API keys are stored as SHA-256 hashes. The raw key (prefixed p10x_) is only shown once at creation. Keys can be revoked at any time. Last usage is tracked automatically.
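Storing only a SHA-256 hash means the server can verify a presented key without ever being able to recover it. In Python terms (an illustration of the scheme, not prompt10x's code):

```python
import hashlib

# Illustration of hashed API-key storage: the server keeps only
# sha256(key), so a presented key can be verified by re-hashing,
# but the raw key can never be read back out of the database.
def hash_key(raw_key: str) -> str:
    return hashlib.sha256(raw_key.encode()).hexdigest()

stored = hash_key("p10x_example_key")          # what the database holds
assert hash_key("p10x_example_key") == stored  # valid key verifies
assert hash_key("p10x_wrong_key") != stored    # wrong key does not
```

This is also why the raw key can only be shown once at creation: after that moment, nobody, including the server, can reconstruct it from the stored hash.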

Fetch Prompts

Use the REST API to fetch prompts at runtime. All requests require a valid API key passed as a Bearer token.

Base URL

https://api.prompt10x.com/v3

Authentication

Include your API key in the Authorization header:

```http
Authorization: Bearer p10x_your_api_key_here
```

Endpoints

GET /prompt/:session_id

Fetch the latest prompt version for a session.

```bash
curl -s https://api.prompt10x.com/v3/prompt/SESSION_ID \
  -H "Authorization: Bearer p10x_your_key"
```

GET /prompt/:session_id/:version

Fetch a specific prompt version.

```bash
curl -s https://api.prompt10x.com/v3/prompt/SESSION_ID/2 \
  -H "Authorization: Bearer p10x_your_key"
```

Response

```json
{
  "prompt": {
    "id": "550e8400-e29b-41d4-a716-446655440000",
    "content": "You are a helpful customer support agent...",
    "version": 3,
    "session_id": "7c9e6679-7425-40de-944b-e07fc1f90ae7",
    "created_at": "2026-02-22T10:30:00.000Z"
  }
}
```

Error Responses

| Status | Description |
|--------|-------------|
| 401    | Missing, invalid, or revoked API key |
| 403    | Session does not belong to this API key's project |
| 404    | Session or prompt version not found |
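Callers can branch on these statuses. A sketch of the client-side handling follows; the fallback strategy is an application choice, not part of the API:

```python
# Sketch of client-side handling for the documented error
# statuses. The mapping mirrors the table above; falling back to
# a bundled prompt on failure is one possible strategy.
def describe_error(status: int) -> str:
    return {
        401: "Missing, invalid, or revoked API key",
        403: "Session does not belong to this API key's project",
        404: "Session or prompt version not found",
    }.get(status, "Unexpected status")

def prompt_or_fallback(status: int, body: dict, fallback: str) -> str:
    if status == 200:
        return body["prompt"]["content"]
    # e.g. log describe_error(status), then use a locally bundled prompt
    return fallback

assert prompt_or_fallback(200, {"prompt": {"content": "hi"}}, "x") == "hi"
assert prompt_or_fallback(401, {}, "bundled prompt") == "bundled prompt"
```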

Code Examples

Integrate prompt10x into your application with a few lines of code.

Node.js / TypeScript

```typescript
const response = await fetch(
  "https://api.prompt10x.com/v3/prompt/SESSION_ID",
  {
    headers: {
      Authorization: "Bearer p10x_your_key",
    },
  }
);

const { prompt } = await response.json();
console.log(prompt.content);  // Your prompt text
console.log(prompt.version);  // Version number
```

Python

```python
import requests

response = requests.get(
    "https://api.prompt10x.com/v3/prompt/SESSION_ID",
    headers={"Authorization": "Bearer p10x_your_key"}
)

prompt = response.json()["prompt"]
print(prompt["content"])   # Your prompt text
print(prompt["version"])   # Version number
```

Using with OpenAI

```typescript
import OpenAI from "openai";

// 1. Fetch your prompt from prompt10x
const res = await fetch(
  "https://api.prompt10x.com/v3/prompt/SESSION_ID",
  { headers: { Authorization: "Bearer p10x_your_key" } }
);
const { prompt } = await res.json();

// 2. Use it as the system prompt
const openai = new OpenAI();
const completion = await openai.chat.completions.create({
  model: "gpt-4o",
  messages: [
    { role: "system", content: prompt.content },
    { role: "user", content: "Hello!" },
  ],
});
```

Using with LangChain

```python
from langchain_openai import ChatOpenAI
from langchain_core.messages import SystemMessage, HumanMessage
import requests

# 1. Fetch prompt
res = requests.get(
    "https://api.prompt10x.com/v3/prompt/SESSION_ID",
    headers={"Authorization": "Bearer p10x_your_key"}
)
system_prompt = res.json()["prompt"]["content"]

# 2. Use with LangChain
llm = ChatOpenAI(model="gpt-4o")
response = llm.invoke([
    SystemMessage(content=system_prompt),
    HumanMessage(content="Hello!"),
])
```

For the full interactive API reference with try-it-out, visit the Swagger UI.
