Introduction
ModelPilot is a platform that makes deploying AI models simple, fast, and cost-effective. Deploy text, image, video, and multimodal models with just a few clicks, complete with optimized web interfaces and persistent storage.
Key Features:
- Quick Deployment: Deploy AI models with our step-by-step wizard or use "Deploy Recommended" for one-click setup
- Multiple Model Types: Support for text generation, image creation, video generation, multimodal, and embedding models
- Persistent Storage: Your models, data, and generated content persist between sessions
- Pay As You Go: Pay only for the compute resources you use, billed in 10-minute increments
- Privacy Controls: Fine-grained privacy settings and zero-data retention options for sensitive workloads
- Ready-to-Use Interfaces: Text models include OpenWebUI for chat, Image/Video models include ComfyUI with pre-built workflows
Getting Started
Creating an Account
To get started with ModelPilot, follow these steps:
- Visit the homepage and click "Sign Up Free"
- Create an account with email/password or use Google/GitHub login
- You'll be taken to your dashboard
- Add credits through the billing page to start deploying models
Quick Deploy Guide
One of ModelPilot's most powerful features is our "Deploy Recommended" functionality, which:
- Analyzes your selected model and automatically determines the optimal infrastructure
- Configures all settings including compute, memory, storage, and model parameters
- Deploys in seconds with just one click, skipping manual configuration steps
Steps for Quick Deployment:
- Select a model type (Text, Image, Video, Multimodal, Embedding)
- Choose from pre-configured models or enter a Hugging Face model ID
- Click "Deploy Recommended" for optimal settings, or customize manually
- Monitor deployment progress through the dashboard
- Access your running model through the provided web interface URL
Billing & Cost Management
ModelPilot uses a straightforward credit-based billing system:
- Credits: The basic unit of payment (1 credit = $1.00 USD)
- 10-Minute Billing: All deployments are billed in 10-minute increments
- Pay for What You Use: Only running deployments incur charges
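The 10-minute increment model can be sketched as a small cost estimator. The per-block rate below is a hypothetical example for illustration, not an actual ModelPilot price; check your billing page for real rates.

```javascript
// Estimate the cost of a deployment session billed in 10-minute increments.
// Partial blocks are rounded up, so 25 minutes counts as 3 blocks.
function estimateCost(runtimeMinutes, creditsPer10Min) {
  const blocks = Math.ceil(runtimeMinutes / 10); // partial blocks billed in full
  return blocks * creditsPer10Min;
}
```

For example, a 25-minute session at a hypothetical 0.5 credits per block would be billed as 3 blocks, or 1.5 credits.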
Model Types & Interfaces
Text Generation Models
Text models deploy with OpenWebUI, providing a chat interface similar to ChatGPT:
- Interactive chat interface
- Adjustable parameters (temperature, max tokens, etc.)
- Conversation history
- Model switching and configuration
- Available models: DeepSeek R1, Qwen3, QwQ, Mistral, Gemma 3, LLaMA 3.3, Magistral, and more
Image Generation Models
Image models deploy with ComfyUI, a node-based workflow interface:
- Visual workflow editor
- Pre-built workflows for common tasks
- Custom node connections and parameters
- Real-time image generation
- Available models: Z Image Turbo, Flux Dev/Schnell, Stable Diffusion 3.5, SDXL, Qwen Image Gen
Video Generation Models
Video models also use ComfyUI with specialized video workflows:
- Text-to-video and image-to-video workflows
- Video parameter controls (resolution, frames, duration)
- Preview and progress monitoring
- Video download and sharing
- Available models: Wan 2.1 and Wan 2.2 series
Using Your Deployments
Dashboard Operations
Manage your deployments through the dashboard:
- Start/Stop: Control when your deployments are running (and being billed)
- Access: Get direct links to your model interfaces
- Monitor: View deployment status and resource usage
- Logs: Debug issues with deployment logs
Deployment States
Your deployments go through several states:
- Building: Docker image is being created (no billing)
- Starting: Container is launching (no billing)
- Running: Ready to use (billing active)
- Stopped: Not running (no compute billing)
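Per the states listed above, only Running incurs compute charges, which can be captured in a one-line helper for scripts that track spend:

```javascript
// States that incur compute billing, per the deployment-state list above:
// Building, Starting, and Stopped are free; only Running is billed.
const BILLED_STATES = new Set(['Running']);

function isBilled(state) {
  return BILLED_STATES.has(state);
}
```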
Data Persistence
Your data automatically persists between sessions:
- Model weights and configurations
- Generated images, videos, and other outputs
- Chat history and conversation data
- Custom workflows and settings
- Files remain available when you restart deployments
API Integration
OpenAI-Compatible Endpoints
ModelPilot provides OpenAI-compatible API endpoints for easy migration from OpenAI services. Access your deployed models programmatically using familiar OpenAI API patterns.
- Chat Completions: Use /api/v1/chat/completions with your deployed text models
- Health Monitoring: Check deployment status with /api/deployments/{podId}/health
- API Keys: Create API keys in your dashboard with proxy permissions
- Drop-in Replacement: Compatible with OpenAI SDK and most libraries
See the complete API documentation for more examples.
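The health endpoint above can be polled with a plain fetch. The endpoint path comes from the list above; the bearer-token auth header and the response shape are assumptions for illustration, not confirmed API details.

```javascript
// Base domain is a placeholder -- replace with your ModelPilot deployment domain.
const BASE_URL = 'https://your-domain.com';

// Build the health-check URL for a deployment, per the endpoint listed above.
function healthUrl(podId) {
  return `${BASE_URL}/api/deployments/${encodeURIComponent(podId)}/health`;
}

// Fetch the deployment's health. The Authorization scheme and the JSON
// response shape are assumptions; adjust to the real API documentation.
async function checkHealth(podId, apiKey) {
  const res = await fetch(healthUrl(podId), {
    headers: { Authorization: `Bearer ${apiKey}` }, // assumed auth scheme
  });
  if (!res.ok) throw new Error(`Health check failed: ${res.status}`);
  return res.json(); // e.g. a status field -- illustrative only
}
```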
Quick Migration Example
import OpenAI from 'openai';

// Before (OpenAI)
// const openai = new OpenAI({
//   apiKey: 'sk-...',
//   baseURL: 'https://api.openai.com/v1'
// });

// After (ModelPilot)
const openai = new OpenAI({
  apiKey: 'mp_live_your_api_key',
  baseURL: 'https://your-domain.com/api/v1'
});

// The same code works for both:
const response = await openai.chat.completions.create({
  model: 'mistral', // your deployed model
  messages: [{ role: 'user', content: 'Hello' }]
});