General Questions
What is Modelpilot.ai?
Modelpilot.ai is a platform that simplifies AI model deployment. It allows you to deploy state-of-the-art models with minimal technical knowledge, using either our recommended settings or custom configurations.
What types of models can I deploy?
You can deploy various types of models:
- Text Generation (DeepSeek R1, Qwen3, QwQ, Mistral, Gemma 3, LLaMA 3.3, Magistral, and more)
- Image Generation (Z Image Turbo, Flux Dev/Schnell, Stable Diffusion 3.5, SDXL, Qwen Image Gen)
- Video Generation (Wan 2.2 5B/14B Text-to-Video and Image-to-Video)
- Multimodal Models
- Embedding Models
- Custom models from Hugging Face
Do I need technical expertise to use Modelpilot?
No! That's the beauty of our platform. With our Quick Deploy feature, you can deploy sophisticated AI models with just a few clicks. No technical expertise is required for basic deployments.
How long does it take to deploy a model?
Deployment involves building a Docker container and starting it. Times vary by model complexity:
Typical First-Time Deployment Times:
- Small text models (1B-7B): 5-10 minutes
- Large text models (30B+): 15-30 minutes
- Image models (Flux, SDXL): 10-20 minutes
- Video models (Wan 2.2): 20-40 minutes
Notes:
- The deployment process shows progress: Building → Starting → Running
- You only pay once the status reaches "Running"; the building and starting phases are free
- Subsequent starts are faster (2-5 minutes) because the container is cached
Quick Deploy
What is Quick Deploy?
Quick Deploy is our one-click solution that automatically configures and launches models using our expert-recommended settings. It eliminates the technical complexities while ensuring optimal performance.
How does Quick Deploy work?
Our system analyzes your selected model's specific requirements and automatically selects the most appropriate instance type, memory allocation, storage volume sizes, and model-specific parameters. All you need to do is select a model and click "Deploy Recommended."
What makes Quick Deploy better than manual configuration?
- Time Efficiency: Configure and launch in seconds instead of spending minutes on manual setup
- Reduced Complexity: No technical knowledge required
- Cost Optimization: Balance performance and price
- Error Prevention: Avoid common deployment mistakes
- Tested Configurations: Use settings proven to work reliably
When should I use Quick Deploy vs. Custom Deployment?
Use Quick Deploy when:
- You're new to model deployment
- You want the fastest deployment experience
- You trust our expert recommendations
- You don't have specific customization needs
Use Custom Deployment when:
- You need specific GPU types
- You require custom environment variables
- You need special network configurations
- You're integrating with existing infrastructure
- You have specific API settings beyond our defaults
Infrastructure & Hosting
What compute options are available?
We offer various instance types:
- CPU instances for smaller models
- GPU instances (L4, A6000, A100, H100) for larger models
- Custom configurations for specific needs
Can I use my own cloud account?
Currently, all deployments run on our managed infrastructure for the best experience and reliability. We're exploring enterprise options for custom infrastructure requirements; contact support@modelpilot.ai to discuss your specific needs.
Where are the models hosted?
Models are hosted in secure, enterprise-grade cloud environments. We handle all infrastructure management so you get consistent performance and reliability.
Technical Questions
How do I access my deployed model?
Once your deployment is "Running", you can access it through:
- Web Interface: Click the URL in your dashboard to open the interface
- Text Models: Use OpenWebUI for chat-based interaction
- Image/Video Models: Use ComfyUI for workflow-based generation
- API Access: Use the provided endpoint for programmatic access
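For programmatic access, text-model deployments typically expose an OpenAI-compatible chat completions endpoint (the convention used by OpenWebUI and most text-model backends). A minimal sketch, assuming that convention; the base URL, API key, and model name below are placeholders you would copy from your dashboard:

```python
import requests

# Placeholders: copy the real endpoint URL and API key from your dashboard.
BASE_URL = "https://YOUR-DEPLOYMENT-URL"
API_KEY = "YOUR_API_KEY"  # only needed if your deployment requires authentication

# Standard OpenAI-style chat completion request (assumed interface).
response = requests.post(
    f"{BASE_URL}/v1/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "YOUR-MODEL-NAME",  # the model name shown in your deployment
        "messages": [{"role": "user", "content": "Write a haiku about GPUs."}],
        "temperature": 0.7,
        "max_tokens": 256,
    },
    timeout=120,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```

Request-level parameters such as temperature and max_tokens can be adjusted on every call, so most runtime tuning does not require redeploying.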
Can I customize model parameters after deployment?
Most model parameters can be adjusted through the web interface or API requests. For fundamental changes to the deployment itself (like changing instance type), you'll need to create a new deployment.
What interfaces are provided for deployed models?
- OpenWebUI (Text Models): Chat interface with conversation history, parameter controls, and model switching
- ComfyUI (Image/Video): Node-based workflow editor with pre-built workflows and custom parameters
- RESTful APIs: All models expose API endpoints for integration
- File Downloads: Generated content can be downloaded directly from the interfaces
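For image and video deployments, the ComfyUI interface can also be driven over HTTP using ComfyUI's standard queueing endpoint. A minimal sketch, assuming a workflow you exported from ComfyUI with "Save (API Format)" and a placeholder deployment URL from your dashboard:

```python
import json
import requests

# Placeholder: copy the real deployment URL from your dashboard.
COMFYUI_URL = "https://YOUR-DEPLOYMENT-URL"

# Load a workflow previously exported from ComfyUI via "Save (API Format)".
with open("workflow_api.json") as f:
    workflow = json.load(f)

# Queue the workflow; ComfyUI returns a prompt_id that can be used to
# look up results under /history/<prompt_id> once generation finishes.
resp = requests.post(f"{COMFYUI_URL}/prompt", json={"prompt": workflow}, timeout=60)
resp.raise_for_status()
print("Queued prompt:", resp.json()["prompt_id"])
```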
How secure are my deployments?
All deployments include:
- Data encryption at rest (via cloud provider)
- Secure web interface access
- CORS configuration for web access control
- Private network deployment options
- Configurable data retention policies
Can I deploy models from Hugging Face?
Yes! In the deployment wizard:
- Enter any Hugging Face model ID (e.g., "microsoft/DialoGPT-medium")
- The system recommends an appropriate GPU instance type
- You can override the recommended instance type if needed
- The model is downloaded automatically during deployment
Support & Troubleshooting
What if my deployment fails?
If a deployment fails, check the logs in your dashboard for specific error messages. Common issues include:
- Insufficient GPU memory: Try a larger instance type (A6000 or A100 instead of L4)
- Model download issues: Check if the Hugging Face model ID is correct
- Resource unavailability: GPU instances may be temporarily unavailable
- Build timeouts: Large models may need multiple attempts
Try the "Deploy Recommended" option, which selects tested configurations for each model type.
How do I check deployment logs?
Access logs through your dashboard to troubleshoot issues:
- Go to your dashboard and find the deployment
- Click on the deployment row to view details
- Look for log output or error messages
- Logs show the build process, startup sequence, and any errors
How do I get support if I'm having issues?
We offer the following support channels:
- Email support: support@modelpilot.ai
- Documentation: Comprehensive guides and troubleshooting
- Dashboard logs: Detailed error messages and deployment status
For the fastest response, email support@modelpilot.ai with your deployment ID and error details.
Dashboard & Management
How do I start and stop deployments?
Control your deployments through the dashboard:
- Start: Click the start button to initialize your deployment (billing begins only once running)
- Stop: Click stop to halt billing while preserving your data
- Delete: Permanently remove the deployment (data will be lost)
- Access: Click the URL when running to open the web interface
Do my files persist between sessions?
By default, NO: deployments are ephemeral (data is lost when stopped). To enable persistence, check "Persist User Data" in Advanced Settings during deployment.
Without Persistence (Default - Ephemeral):
- Data is lost when you stop the deployment
- Faster deployment startup
- Lower cost (no storage fees)
- Best for testing and one-time generation
With Persistence (Optional - Must Enable):
- Generated images, videos, and text outputs saved
- Chat conversation history preserved
- Model configurations and custom workflows retained
- Downloaded model weights cached (faster restarts)
- Requires enabling "Persist User Data" in deployment settings
How can I reduce my costs?
⚠️ Important: Deployments do NOT auto-stop when idle. You must manually stop them to avoid charges.
- Manually stop when done: YOU are responsible for stopping deployments; they will not stop on their own
- Right-size instances: Use the recommended GPU type for your model
- Batch work: Process multiple items in one session to maximize the 10-minute billing window
- No startup charges: Unlike some platforms, you're not charged during initialization or for failed deployments
- Monitor dashboard: Keep track of running deployments to avoid accidental billing
- Auto-stop protection: Deployments automatically stop only when your balance is insufficient