prompt10x Documentation
prompt10x is a prompt engineering platform where teams can version, test, and iterate on AI prompts collaboratively. Organize prompts into projects, test them against multiple models side-by-side, and deliver production prompts via REST API.
Projects
Projects are the top-level container for organizing your work. Each project can represent a product, feature, or experiment.
Project Hierarchy
Sessions & Prompts
Sessions group related prompts within a project. Each session can have multiple prompt versions — every edit creates a new version automatically.
Sessions
- Create multiple sessions to organize prompts by use case
- Rename sessions inline with click-to-edit
- Each session tracks its own prompt version history
- Sessions are what you reference via the API using their ID
Prompt Versioning
- Every save creates a new version (v1, v2, v3...)
- Browse full version history with author and timestamp
- Click any version to load it in the editor
- Copy any version's content with one click
- Latest version is automatically used in tests and API delivery
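The versioning rules above can be sketched as a small in-memory model. This is illustrative only (not the prompt10x data model): each save appends an immutable version, and "latest" is simply the highest version number.

```python
from dataclasses import dataclass, field

@dataclass
class PromptVersion:
    version: int
    content: str

@dataclass
class Session:
    """Illustrative sketch: every save appends an immutable new version."""
    versions: list = field(default_factory=list)

    def save(self, content: str) -> PromptVersion:
        # Every save creates a new version: v1, v2, v3...
        v = PromptVersion(version=len(self.versions) + 1, content=content)
        self.versions.append(v)
        return v

    def latest(self) -> PromptVersion:
        # The latest version is what tests and API delivery use by default
        return self.versions[-1]

session = Session()
session.save("You are a helpful agent.")
session.save("You are a concise, helpful agent.")
print(session.latest().version)  # → 2
```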
Collaboration
Invite team members to your projects with role-based access control.
Team Roles
| Role | View | Edit Prompts | Test | Manage Models/Tools | Manage Members |
|---|---|---|---|---|---|
| Owner | ✓ | ✓ | ✓ | ✓ | ✓ |
| Editor | ✓ | ✓ | ✓ | ✓ | ✗ |
| Viewer | ✓ | ✗ | ✗ | ✗ | ✗ |
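The roles table can be read as a simple permission map. A minimal sketch of a client-side check (the permission names here are made up for illustration; prompt10x enforces roles server-side):

```python
# Illustrative permission map derived from the roles table above.
PERMISSIONS = {
    "owner":  {"view", "edit_prompts", "test", "manage_models", "manage_members"},
    "editor": {"view", "edit_prompts", "test", "manage_models"},
    "viewer": {"view"},
}

def can(role: str, action: str) -> bool:
    """Return True if the given role is allowed to perform the action."""
    return action in PERMISSIONS.get(role, set())

print(can("editor", "manage_members"))  # → False
print(can("viewer", "view"))            # → True
```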
Inviting Members
- Open your project and go to Settings → Members
- Enter the team member's email address
- Select a role (Editor or Viewer)
- Click Invite — they'll appear as Pending until they accept
Playground
The playground lets you run prompts against multiple models side-by-side in up to 4 parallel lanes. Compare outputs, iterate fast.
Multi-Lane Testing
- Open up to 4 lanes simultaneously
- Each lane has its own session, prompt version, and model selection
- Compare how different prompt versions perform with the same input
- Compare how the same prompt performs across different models
Input Modes
Playground State
Your playground configuration (lanes, selected models, sessions) is automatically saved per project. Close the browser and come back — everything is exactly where you left it. Test runs are linked to lanes and can be resumed.
Test Panel
The test panel provides a focused testing interface for a single prompt version against a specific model.
Features
- Select any prompt version and model for testing
- Real-time streaming responses via SSE
- Tool call visualization — see tool name, arguments, and results in real-time
- Full conversation history preserved per test run
- Duration tracking for performance benchmarking
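Streaming responses arrive as Server-Sent Events. A minimal sketch of parsing the SSE wire format a client would consume — the `token` event name is an assumption for illustration, not the documented schema:

```python
def parse_sse(raw: str) -> list:
    """Parse a Server-Sent Events stream into (event, data) pairs.

    Minimal sketch of the SSE wire format; ignores 'id'/'retry' fields.
    """
    events = []
    event, data = "message", []
    for line in raw.splitlines():
        if line.startswith("event:"):
            event = line[len("event:"):].strip()
        elif line.startswith("data:"):
            data.append(line[len("data:"):].strip())
        elif line == "":  # a blank line terminates one event
            if data:
                events.append((event, "\n".join(data)))
            event, data = "message", []
    return events

raw = "event: token\ndata: Hel\n\nevent: token\ndata: lo\n\n"
print(parse_sse(raw))  # → [('token', 'Hel'), ('token', 'lo')]
```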
Test Run History
Every test is recorded with its status (Completed, Failed, Running), the prompt version used, and the full conversation. Click any previous test run to resume it and continue the conversation where you left off.
AI Chat
Use AI to improve your prompts through conversation. Describe what you want and the AI will analyze your prompt and suggest refinements.
How It Works
- Open the Chat panel for any session
- The AI has access to your current prompt version
- Describe what you want to improve — or use a quick suggestion chip
- The AI analyzes your prompt and responds with improvements
- It can directly save improved versions using built-in tools
Quick Suggestions
AI Agent Capabilities
The AI chat is powered by a LangGraph agent that can use tools during the conversation:
- get_latest_prompt — Read the current prompt version
- save_prompt — Save an improved version directly
Tool executions are visible in the chat as collapsible cards showing the tool name, arguments, and result.
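The tool names above come straight from the docs; the dispatch mechanics below are an illustrative sketch (the real agent is a LangGraph graph) of how a tool call turns into the card data shown in chat:

```python
# Illustrative sketch of the agent's tool loop, using the two documented tools.
store = {"versions": ["You are a helpful agent."]}

def get_latest_prompt() -> str:
    return store["versions"][-1]

def save_prompt(content: str) -> int:
    store["versions"].append(content)
    return len(store["versions"])  # new version number

TOOLS = {"get_latest_prompt": get_latest_prompt, "save_prompt": save_prompt}

def execute_tool_call(name: str, args: dict) -> dict:
    # The chat renders this as a collapsible card: name, arguments, result.
    result = TOOLS[name](**args)
    return {"tool": name, "args": args, "result": result}

card = execute_tool_call("save_prompt", {"content": "You are a concise, helpful agent."})
print(card["result"])  # → 2
```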
Models
Configure multiple LLM providers per project. Test your prompts against different models to find the best fit.
Adding a Model
- Go to Project Settings → Models
- Click Add Model
- Fill in: display name, provider, model identifier, base URL, and API key
- Optionally set as default model for new test lanes
Supported Providers
Any OpenAI-compatible API works. Provide the base URL and model name:
| Provider | Base URL | Example Model |
|---|---|---|
| OpenAI | https://api.openai.com/v1 | gpt-4o |
| DeepSeek | https://api.deepseek.com | deepseek-chat |
| Groq | https://api.groq.com/openai/v1 | llama-3.1-70b |
| Ollama | http://localhost:11434/v1 | llama3 |
Security: API keys are encrypted before storage. They're displayed as masked dots (•••••••) in the UI and never exposed in API responses.
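Because every provider in the table speaks the same OpenAI-compatible protocol, one request builder covers them all. A sketch using only the standard library (sending is left out so the example stays offline; the endpoint path `/chat/completions` is the standard OpenAI-compatible one):

```python
import json
from urllib import request

def build_chat_request(base_url: str, api_key: str, model: str, prompt: str):
    """Build an OpenAI-compatible /chat/completions request for any provider."""
    url = base_url.rstrip("/") + "/chat/completions"
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return request.Request(
        url,
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

# Same call shape works for OpenAI, DeepSeek, Groq, or a local Ollama:
req = build_chat_request("http://localhost:11434/v1", "ollama", "llama3", "Hello!")
print(req.full_url)  # → http://localhost:11434/v1/chat/completions
```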
Tools (Function Calling)
Define tools that the LLM can call during test runs. This lets you test prompts that use function calling without needing real backend integrations.
Creating a Tool
- Go to Project Settings → Tools
- Click Add Tool
- Define: name, description (this is what the LLM sees), parameter schema (JSON), and mock response
- Enable or disable the tool — disabled tools are not passed to the LLM
Tool Schema Example
```json
{
  "name": "get_weather",
  "description": "Get current weather for a city",
  "parameters_schema": {
    "type": "object",
    "properties": {
      "city": {
        "type": "string",
        "description": "City name"
      }
    },
    "required": ["city"]
  },
  "mock_response": "Sunny, 24°C"
}
```
The mock response is returned when the LLM calls this tool during testing.
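How a mock tool answers a call can be sketched in a few lines. The validation here is a simple required-field check, not full JSON Schema validation — an illustrative simplification:

```python
# Sketch of a mock tool answering a call during a test run.
TOOL = {
    "name": "get_weather",
    "parameters_schema": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
    "mock_response": "Sunny, 24°C",
}

def call_tool(tool: dict, arguments: dict) -> str:
    # Reject calls missing required arguments (simplified schema check).
    missing = [k for k in tool["parameters_schema"].get("required", [])
               if k not in arguments]
    if missing:
        raise ValueError(f"missing required arguments: {missing}")
    # The fixed mock response stands in for a real backend integration.
    return tool["mock_response"]

print(call_tool(TOOL, {"city": "Berlin"}))  # → Sunny, 24°C
```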
OpenAPI Import
Have an existing API? Import tools in bulk from an OpenAPI/Swagger specification. The importer parses your schema and creates tool definitions automatically, skipping duplicates.
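The importer's core idea is one tool per OpenAPI operation. A minimal sketch under that assumption — the field mapping here is illustrative, and the real importer handles far more of the spec:

```python
# Sketch: convert OpenAPI operations into tool definitions, skipping duplicates.
def operations_to_tools(spec: dict, existing_names: set) -> list:
    tools = []
    for path, methods in spec.get("paths", {}).items():
        for method, op in methods.items():
            name = op.get("operationId") or f"{method}_{path.strip('/').replace('/', '_')}"
            if name in existing_names:
                continue  # duplicate — skip, as the importer does
            tools.append({
                "name": name,
                "description": op.get("summary", ""),
                "parameters_schema": {
                    "type": "object",
                    "properties": {
                        p["name"]: p.get("schema", {"type": "string"})
                        for p in op.get("parameters", [])
                    },
                    "required": [p["name"] for p in op.get("parameters", [])
                                 if p.get("required")],
                },
            })
    return tools

spec = {"paths": {"/weather": {"get": {
    "operationId": "get_weather",
    "summary": "Get current weather",
    "parameters": [{"name": "city", "required": True,
                    "schema": {"type": "string"}}],
}}}}
tools = operations_to_tools(spec, existing_names=set())
print(tools[0]["name"])  # → get_weather
```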
How Tools Work in Tests
When a model decides to call a tool during a test run, you'll see a real-time card showing the tool name, arguments the model passed, and the mock response returned. The model then uses that response to continue generating its answer — exactly like it would in production.
API Keys
API keys let you fetch prompts from your applications at runtime. Each key is scoped to a single project.
Creating an API Key
- Open a project and go to Settings → API Keys
- Click Generate Key and give it a name
- Copy the key immediately — it won't be shown again
- Use it as a Bearer token in your API requests
Security: API keys are stored as SHA-256 hashes. The raw key (prefixed p10x_) is only shown once at creation. Keys can be revoked at any time. Last usage is tracked automatically.
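The hash-at-rest scheme described above can be sketched with the standard library. The `p10x_` prefix comes from the docs; the generation and lookup details are illustrative:

```python
import hashlib
import secrets

def generate_key() -> tuple:
    """Return (raw_key, digest): show the raw key once, store only the digest."""
    raw = "p10x_" + secrets.token_urlsafe(24)
    digest = hashlib.sha256(raw.encode()).hexdigest()
    return raw, digest

def verify_key(presented: str, stored_digest: str) -> bool:
    # Hash the presented key and compare in constant time.
    candidate = hashlib.sha256(presented.encode()).hexdigest()
    return secrets.compare_digest(candidate, stored_digest)

raw, digest = generate_key()
print(verify_key(raw, digest))           # → True
print(verify_key("p10x_wrong", digest))  # → False
```

Because only the digest is stored, a database leak does not expose usable keys; revoking a key is just deleting its digest.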
Fetch Prompts
Use the REST API to fetch prompts at runtime. All requests require a valid API key passed as a Bearer token.
Base URL
```
https://api.prompt10x.com/v3
```
Authentication
Include your API key in the Authorization header:
```
Authorization: Bearer p10x_your_api_key_here
```
Endpoints
/prompt/:session_id
Fetch the latest prompt version for a session.
```shell
curl -s https://api.prompt10x.com/v3/prompt/SESSION_ID \
  -H "Authorization: Bearer p10x_your_key"
```
/prompt/:session_id/:version
Fetch a specific prompt version.
```shell
curl -s https://api.prompt10x.com/v3/prompt/SESSION_ID/2 \
  -H "Authorization: Bearer p10x_your_key"
```
Response
```json
{
  "prompt": {
    "id": "550e8400-e29b-41d4-a716-446655440000",
    "content": "You are a helpful customer support agent...",
    "version": 3,
    "session_id": "7c9e6679-7425-40de-944b-e07fc1f90ae7",
    "created_at": "2026-02-22T10:30:00.000Z"
  }
}
```
Error Responses
| Status | Description |
|---|---|
| 401 | Missing, invalid, or revoked API key |
| 403 | Session does not belong to this API key's project |
| 404 | Session or prompt version not found |
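The status codes above map naturally onto client-side error handling. A sketch with the standard library — `fetch_prompt` is a hypothetical helper, not part of an official SDK:

```python
import json
from urllib import request, error

# Client-side messages for the documented error responses.
STATUS_MESSAGES = {
    401: "API key missing, invalid, or revoked",
    403: "session does not belong to this API key's project",
    404: "session or prompt version not found",
}

def explain_status(code: int) -> str:
    return STATUS_MESSAGES.get(code, f"unexpected status {code}")

def fetch_prompt(session_id: str, api_key: str,
                 base_url: str = "https://api.prompt10x.com/v3") -> dict:
    """Fetch the latest prompt for a session, raising a readable error."""
    req = request.Request(
        f"{base_url}/prompt/{session_id}",
        headers={"Authorization": f"Bearer {api_key}"},
    )
    try:
        with request.urlopen(req) as resp:
            return json.load(resp)["prompt"]
    except error.HTTPError as e:
        raise RuntimeError(explain_status(e.code)) from e
```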
Code Examples
Integrate prompt10x into your application with a few lines of code.
Node.js / TypeScript
```typescript
const response = await fetch(
  "https://api.prompt10x.com/v3/prompt/SESSION_ID",
  {
    headers: {
      Authorization: "Bearer p10x_your_key",
    },
  }
);
const { prompt } = await response.json();
console.log(prompt.content); // Your prompt text
console.log(prompt.version); // Version number
```
Python
```python
import requests

response = requests.get(
    "https://api.prompt10x.com/v3/prompt/SESSION_ID",
    headers={"Authorization": "Bearer p10x_your_key"}
)
prompt = response.json()["prompt"]
print(prompt["content"])  # Your prompt text
print(prompt["version"])  # Version number
```
Using with OpenAI
```typescript
import OpenAI from "openai";

// 1. Fetch your prompt from prompt10x
const res = await fetch(
  "https://api.prompt10x.com/v3/prompt/SESSION_ID",
  { headers: { Authorization: "Bearer p10x_your_key" } }
);
const { prompt } = await res.json();

// 2. Use it as the system prompt
const openai = new OpenAI();
const completion = await openai.chat.completions.create({
  model: "gpt-4o",
  messages: [
    { role: "system", content: prompt.content },
    { role: "user", content: "Hello!" },
  ],
});
```
Using with LangChain
```python
from langchain_openai import ChatOpenAI
from langchain_core.messages import SystemMessage, HumanMessage
import requests

# 1. Fetch prompt
res = requests.get(
    "https://api.prompt10x.com/v3/prompt/SESSION_ID",
    headers={"Authorization": "Bearer p10x_your_key"}
)
system_prompt = res.json()["prompt"]["content"]

# 2. Use with LangChain
llm = ChatOpenAI(model="gpt-4o")
response = llm.invoke([
    SystemMessage(content=system_prompt),
    HumanMessage(content="Hello!"),
])
```
For the full interactive API reference with try-it-out, visit the Swagger UI.
Open Swagger UI