Get your application deployed to the cloud through a simple conversation with your AI coding assistant.
Prerequisites
Before you begin, make sure you have:
Docker - For building your application container
An AI coding assistant that supports the Model Context Protocol (MCP): e.g. Cursor, Claude Code, or Windsurf
Step 1: Install the Neptune MCP
Choose your preferred installation method and AI coding assistant:
Tested models: Neptune works well with Sonnet 4.5, Opus 4.5, GPT 5.1 (including Codex/Codex-mini), Gemini 3 Pro, Cursor Compose 1, and Grok Code. Other models may work but haven’t been extensively tested.
First, install the Neptune MCP.
macOS/Linux:
curl -fsSL https://neptune.dev/install.sh | bash
Windows (PowerShell):
irm https://neptune.dev/install.ps1 | iex
Then configure your AI coding assistant:
Open Cursor Settings → Tools & MCP → Add new MCP server
Add this configuration:
{
  "mcpServers": {
    "neptune": {
      "command": "neptune",
      "args": ["mcp"]
    }
  }
}
Restart Cursor
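If you already have other MCP servers configured, merge the Neptune entry into the existing `mcpServers` object rather than replacing the file. A minimal sketch of that merge (operating on an in-memory config; the `other-tool` entry is a made-up stand-in for whatever you already have):

```python
import json

# Example of an existing MCP config with another server already present.
existing = {
    "mcpServers": {
        "other-tool": {"command": "other-tool", "args": ["serve"]}
    }
}

# Add Neptune without touching the other configured servers.
existing.setdefault("mcpServers", {})["neptune"] = {
    "command": "neptune",
    "args": ["mcp"],
}

merged = json.dumps(existing, indent=2)
print(merged)
```

The same pattern applies to every editor below: only the `neptune` key is added, so existing entries survive.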
Run this command in your terminal: claude mcp add --transport stdio neptune -- neptune mcp
Verify it was added: claude mcp list
Use --scope user to make Neptune available across all projects, or --scope project to share with your team via .mcp.json.
Open Windsurf Settings → Cascade → Plugins → Manage plugins
Edit ~/.codeium/windsurf/mcp_config.json and add:
{
  "mcpServers": {
    "neptune": {
      "command": "neptune",
      "args": ["mcp"]
    }
  }
}
Press the refresh button in the Plugins panel, then reload Windsurf
Official Windsurf MCP docs →
VS Code (Cline Extension)
Install the Cline extension
Click the MCP Servers icon in the Cline panel → Configure tab → Configure MCP Servers
Add this to your cline_mcp_settings.json:
{
  "mcpServers": {
    "neptune": {
      "command": "neptune",
      "args": ["mcp"]
    }
  }
}
Reload VS Code
Official Cline MCP docs →
Run MCP: Open User Configuration from the Command Palette (Cmd+Shift+P / Ctrl+Shift+P)
Add this to your mcp.json:
{
  "servers": {
    "neptune": {
      "command": "neptune",
      "args": ["mcp"]
    }
  }
}
Use MCP: Open Remote User Configuration if you want the server to run on a remote machine when connected remotely.
Reload VS Code
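A stray comma or quote in mcp.json will silently keep the server from loading. One quick sanity check before reloading is to parse the file's contents as JSON (a sketch; paste in your own config string):

```python
import json

# The config snippet from above; substitute your own mcp.json contents to check them.
raw = '''
{
  "servers": {
    "neptune": {
      "command": "neptune",
      "args": ["mcp"]
    }
  }
}
'''

config = json.loads(raw)  # raises json.JSONDecodeError on malformed JSON
assert "neptune" in config["servers"], "neptune entry missing"
print("config parses cleanly")
```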
Official VS Code MCP docs →
JetBrains IDEs (IntelliJ, PyCharm, WebStorm, etc.)
Open Settings → Tools → AI Assistant → Model Context Protocol (MCP)
Click Add and select As JSON , then paste:
{
  "mcpServers": {
    "neptune": {
      "command": "neptune",
      "args": ["mcp"]
    }
  }
}
Set Level to “Global” to use Neptune across all projects, then click Apply
Already use Claude Desktop? Click Import from Claude to reuse your existing MCP configs.
Official JetBrains MCP docs →
Open Settings → MCP Servers (or via Warp Drive → Personal → MCP Servers)
Click + Add and paste this configuration:
{
  "mcpServers": {
    "neptune": {
      "command": "neptune",
      "args": ["mcp"]
    }
  }
}
Click Start on the Neptune server to enable it
Official Warp MCP docs →
Other MCP Clients (Generic STDIO)
For any MCP client that supports STDIO transport, use this command: neptune mcp
Configure your client to run this command and communicate via stdin/stdout.
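Under the hood, an MCP client speaks JSON-RPC 2.0 with the server over stdin/stdout, starting with an initialize request. A sketch of that first message (the protocolVersion string and clientInfo values here are illustrative; use whatever your client library pins):

```python
import json

# JSON-RPC 2.0 "initialize" request, the first message in an MCP session.
initialize = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        # Assumed protocol revision; your client may target a newer one.
        "protocolVersion": "2024-11-05",
        "capabilities": {},
        "clientInfo": {"name": "example-client", "version": "0.1.0"},
    },
}

# Serialized and written to the server process's stdin.
line = json.dumps(initialize)
print(line)
```

A real client then reads the server's response from stdout and continues the handshake; any MCP client library handles this for you.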
HTTP Transport (Alternative)
If your client doesn’t support STDIO or you prefer HTTP, start the MCP server in HTTP mode: neptune mcp --transport=http
The server runs at http://localhost:8001/mcp. Configure your client to connect to this URL.
Alternatively, you can run the Neptune MCP from source with uvx. First, install uv and Python 3.13+:
# macOS (Homebrew)
brew install uv
# macOS/Linux (curl)
curl -LsSf https://astral.sh/uv/install.sh | sh
# Windows (PowerShell)
powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"
Then configure your AI coding assistant:
Open Cursor Settings → Tools & MCP → Add new MCP server
Add this configuration:
macOS/Linux:
{
  "mcpServers": {
    "neptune": {
      "command": "uvx",
      "args": [
        "--from",
        "git+https://github.com/shuttle-hq/neptune-mcp.git",
        "neptune",
        "mcp"
      ]
    }
  }
}
Windows:
{
  "mcpServers": {
    "neptune": {
      "command": "cmd",
      "args": [
        "/c",
        "uvx",
        "--from",
        "git+https://github.com/shuttle-hq/neptune-mcp.git",
        "neptune",
        "mcp"
      ]
    }
  }
}
Restart Cursor
Run this command in your terminal:
macOS/Linux: claude mcp add --transport stdio neptune -- uvx --from git+https://github.com/shuttle-hq/neptune-mcp.git neptune mcp
Windows: claude mcp add --transport stdio neptune -- cmd /c uvx --from git+https://github.com/shuttle-hq/neptune-mcp.git neptune mcp
Verify it was added: claude mcp list
Use --scope user to make Neptune available across all projects, or --scope project to share with your team via .mcp.json.
Open Windsurf Settings → Cascade → Plugins → Manage plugins
Edit ~/.codeium/windsurf/mcp_config.json and add:
macOS/Linux:
{
  "mcpServers": {
    "neptune": {
      "command": "uvx",
      "args": [
        "--from",
        "git+https://github.com/shuttle-hq/neptune-mcp.git",
        "neptune",
        "mcp"
      ]
    }
  }
}
Windows:
{
  "mcpServers": {
    "neptune": {
      "command": "cmd",
      "args": [
        "/c",
        "uvx",
        "--from",
        "git+https://github.com/shuttle-hq/neptune-mcp.git",
        "neptune",
        "mcp"
      ]
    }
  }
}
Press the refresh button in the Plugins panel, then reload Windsurf
Official Windsurf MCP docs →
VS Code (Cline Extension)
Install the Cline extension
Click the MCP Servers icon in the Cline panel → Configure tab → Configure MCP Servers
Add this to your cline_mcp_settings.json:
macOS/Linux:
{
  "mcpServers": {
    "neptune": {
      "command": "uvx",
      "args": [
        "--from",
        "git+https://github.com/shuttle-hq/neptune-mcp.git",
        "neptune",
        "mcp"
      ]
    }
  }
}
Windows:
{
  "mcpServers": {
    "neptune": {
      "command": "cmd",
      "args": [
        "/c",
        "uvx",
        "--from",
        "git+https://github.com/shuttle-hq/neptune-mcp.git",
        "neptune",
        "mcp"
      ]
    }
  }
}
Reload VS Code
Official Cline MCP docs →
Run MCP: Open User Configuration from the Command Palette (Cmd+Shift+P / Ctrl+Shift+P)
Add this to your mcp.json:
macOS/Linux:
{
  "servers": {
    "neptune": {
      "command": "uvx",
      "args": [
        "--from",
        "git+https://github.com/shuttle-hq/neptune-mcp.git",
        "neptune",
        "mcp"
      ]
    }
  }
}
Windows:
{
  "servers": {
    "neptune": {
      "command": "cmd",
      "args": [
        "/c",
        "uvx",
        "--from",
        "git+https://github.com/shuttle-hq/neptune-mcp.git",
        "neptune",
        "mcp"
      ]
    }
  }
}
Use MCP: Open Remote User Configuration if you want the server to run on a remote machine when connected remotely.
Reload VS Code
Official VS Code MCP docs →
JetBrains IDEs (IntelliJ, PyCharm, WebStorm, etc.)
Open Settings → Tools → AI Assistant → Model Context Protocol (MCP)
Click Add and select As JSON , then paste:
macOS/Linux:
{
  "mcpServers": {
    "neptune": {
      "command": "uvx",
      "args": [
        "--from",
        "git+https://github.com/shuttle-hq/neptune-mcp.git",
        "neptune",
        "mcp"
      ]
    }
  }
}
Windows:
{
  "mcpServers": {
    "neptune": {
      "command": "cmd",
      "args": [
        "/c",
        "uvx",
        "--from",
        "git+https://github.com/shuttle-hq/neptune-mcp.git",
        "neptune",
        "mcp"
      ]
    }
  }
}
Set Level to “Global” to use Neptune across all projects, then click Apply
Already use Claude Desktop? Click Import from Claude to reuse your existing MCP configs.
Official JetBrains MCP docs →
Open Settings → MCP Servers (or via Warp Drive → Personal → MCP Servers)
Click + Add and paste this configuration:
macOS/Linux:
{
  "mcpServers": {
    "neptune": {
      "command": "uvx",
      "args": [
        "--from",
        "git+https://github.com/shuttle-hq/neptune-mcp.git",
        "neptune",
        "mcp"
      ]
    }
  }
}
Windows:
{
  "mcpServers": {
    "neptune": {
      "command": "cmd",
      "args": [
        "/c",
        "uvx",
        "--from",
        "git+https://github.com/shuttle-hq/neptune-mcp.git",
        "neptune",
        "mcp"
      ]
    }
  }
}
Click Start on the Neptune server to enable it
Official Warp MCP docs →
Other MCP Clients (Generic STDIO)
For any MCP client that supports STDIO transport, use this command: uvx --from git+https://github.com/shuttle-hq/neptune-mcp.git neptune mcp
Configure your client to run this command and communicate via stdin/stdout.
HTTP Transport (Alternative)
If your client doesn’t support STDIO or you prefer HTTP, start the MCP server in HTTP mode: uvx --from git+https://github.com/shuttle-hq/neptune-mcp.git neptune mcp --transport=http
The server runs at http://localhost:8001/mcp. Configure your client to connect to this URL.
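If you are wiring up a custom client, the HTTP transport accepts the same JSON-RPC messages as POST requests to the /mcp endpoint. A sketch that builds (but does not send) such a request, assuming the default port above; the payload values are illustrative:

```python
import json
import urllib.request

url = "http://localhost:8001/mcp"

# Same JSON-RPC initialize message as the STDIO transport, sent as the POST body.
payload = json.dumps({
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2024-11-05",  # assumed revision
        "capabilities": {},
        "clientInfo": {"name": "example-client", "version": "0.1.0"},
    },
}).encode()

# Build the request; call urllib.request.urlopen(req) once the server is running.
req = urllib.request.Request(
    url,
    data=payload,
    headers={"Content-Type": "application/json"},
    method="POST",
)
print(req.full_url, req.get_method())
```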
Step 2: Authenticate
Once the MCP is installed, ask your AI assistant to log in to Neptune. A browser window will open for GitHub OAuth authentication. Sign in with your GitHub account and you’re ready to deploy.
Step 3: Deploy Your App
Simply ask your AI assistant:
“Deploy this app to Neptune”
Your AI assistant will:
Create an optimized Dockerfile for your project (if you don’t have one)
Generate a neptune.json configuration
Provision any required cloud resources
Build and deploy your application
No Dockerfile? No problem. Neptune provides context to your AI assistant so it
can generate one automatically based on your project structure.
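For reference, here is the general shape of a Dockerfile an assistant might produce for a simple Node.js project. This is purely illustrative: the base image, ports, and build commands depend entirely on your stack, and Neptune's generated Dockerfile may differ.

```dockerfile
# Build stage: install dependencies and compile the app
FROM node:20-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Runtime stage: ship only what the app needs to run
FROM node:20-alpine
WORKDIR /app
COPY --from=build /app ./
EXPOSE 3000
CMD ["npm", "start"]
```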
Step 4: Wait for Your App to Go Live
Deployments take a moment to start. Ask your AI assistant:
“Wait for my deployment to complete”
Your AI assistant will monitor the deployment status and let you know when it’s ready with your public IP address. This typically takes 1-2 minutes.
Don’t skip this step! Your app won’t be accessible until the deployment has
finished starting up.
What Can You Deploy?
Neptune can deploy any containerized application. If it runs in Docker, it runs on Neptune.
Next Steps
Need help? Join our Discord community for
support and to connect with other developers.