DigitalOcean Docs
Learn how to build, deploy, and scale your applications with DigitalOcean. Explore our products with our documentation's technical walkthroughs, example code, reference information for our APIs, CLI, and client libraries, and more.
Get Started
Information on DigitalOcean product features, pricing, availability, and limits; how to use products from the control panel; how to manage your account, teams, and billing; and platform details, release notes, and product policies.
Manage resources programmatically and integrate across the developer ecosystem with CLIs, APIs, and SDKs.
Looking for technical support with your DigitalOcean account or infrastructure? Start here.
Browse by
Product
Build your application the way you want with our suite of compute products including VMs, managed containers, PaaS, and serverless functions.
Build, train, and deploy AI agents with the DigitalOcean AI-Native Cloud.
Store and access any amount of data reliably in the cloud, with S3-compatible Spaces Object Storage, network-based Volumes block storage, or NFS-based Network File Storage.
Create backups, upload custom images, use preconfigured images to create resources, and store Docker images in a private registry.
Run fully managed database clusters with your choice of database engine and avoid manual setup and maintenance.
Secure and control the traffic to your applications with VPC networking, traffic filtering, and load balancing.
Track the health of your infrastructure, URLs, and more, set alerts to stay informed, and organize your resources with projects.
Teams are how you manage your billing and infrastructure on DigitalOcean. You can work by yourself by remaining the only person on your team or collaborate by adding more people to teams you own.
Developer Tools
Manage your DigitalOcean resources from the command line with doctl, our open-source command line interface (CLI).
Programmatically manage your Droplets, Spaces, and other DigitalOcean resources using conventional HTTP requests to our RESTful API.
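As a minimal sketch of what an API call looks like, the following uses only Python's standard library to build an authenticated request against the public v2 API. The /v2/droplets path and bearer-token scheme follow the published API; treat the rest as illustrative, and consult the API reference for real usage.

```python
import json
import os
import urllib.request

API_BASE = "https://api.digitalocean.com"

def build_request(path, token):
    """Build an authenticated GET request for the DigitalOcean API (v2)."""
    return urllib.request.Request(
        API_BASE + path,
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )

# Only hits the live API when a personal access token is configured.
token = os.environ.get("DIGITALOCEAN_TOKEN")
if token:
    req = build_request("/v2/droplets", token)
    with urllib.request.urlopen(req) as resp:
        for droplet in json.load(resp)["droplets"]:
            print(droplet["name"], droplet["status"])
```

The same header and URL shape applies to any v2 endpoint; only the path and HTTP method change.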
Interact with Paperspace resources programmatically using the Paperspace API or CLI, and find documentation for legacy tools.
Automate DigitalOcean infrastructure and configuration management using the open source Ansible framework.
Deploy and change many resources simultaneously using the open source Terraform tool.
Official Python client for the DigitalOcean API (OpenAPIv3). Install with pip, authenticate with a personal access token, and call API operations via pydo.Client.
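A short sketch of that quickstart flow with pydo, assuming the package is installed (pip install pydo) and a valid personal access token; the helper function name is our own.

```python
import os

def list_droplet_names(token):
    """List Droplet names via the official pydo client.

    Assumes pydo is installed and `token` is a personal access
    token with read scope.
    """
    from pydo import Client  # imported lazily so the sketch loads without pydo

    client = Client(token=token)
    resp = client.droplets.list()  # calls GET /v2/droplets
    return [d["name"] for d in resp.get("droplets", [])]

if __name__ == "__main__" and "DIGITALOCEAN_TOKEN" in os.environ:
    print(list_droplet_names(os.environ["DIGITALOCEAN_TOKEN"]))
```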
Official and community client libraries for the DigitalOcean API, with installation instructions and quickstart examples.
We use and contribute to open source software.
Use MCP servers to manage DigitalOcean services from any MCP-compatible client.
Latest Updates
Upcoming Changes
-
App Platform’s XL build resources (8 CPUs and 20 GiB of memory during builds) are now enabled for all apps by default. The xl-build flag is now deprecated and will be removed in a future release. Remove xl-build from your app spec to avoid potential errors once the flag is fully retired.
-
DigitalOcean Managed Caching is being discontinued on 30 June 2025.
To replace Managed Caching, we are offering Managed Valkey, a Redis-compatible alternative with RDMA and higher throughput. All existing Managed Caching clusters automatically convert to Valkey clusters by 30 June 2025 during your upgrade window, retaining all data.
28 April 2026
-
DigitalOcean AI Platform now lets you retrieve data from knowledge bases using the Control Panel with semantic, keyword, or hybrid searches, apply filters, review retrieved chunks, and copy live code examples. For more information, see Create and Manage Knowledge Bases.
-
DigitalOcean AI Platform now supports reranking for knowledge bases to improve the relevance of retrieved results before they’re returned or used in generated responses. For more information, see Create and Manage Agent Knowledge Bases and Test Reranking.
-
As part of the DigitalOcean AI-Native Cloud, DigitalOcean Gradient™ AI Platform is now DigitalOcean AI Platform.
-
RAG Playground is now available in DigitalOcean AI Platform for DigitalOcean Knowledge Bases. It lets you run queries against a knowledge base and test how a serverless inference model generates answers from retrieved content.
For more information, see the DigitalOcean AI Platform Features page.
-
The following models are now available on DigitalOcean AI Platform:
Foundation models for Agent Development Kit and agents:
- Qwen3 Coder Flash (Alibaba)
- DeepSeek V3.2 (DeepSeek)
- Llama 4 Maverick 17B 128E Instruct (Meta)
- Ministral 3 14B Instruct (Mistral AI)
- Nemotron Nano 12B v2 VL (NVIDIA)
Multimodal and generative models:
- Qwen 3 TTS (1.7B) (text-to-speech)
- Wan2.2-T2V-A14B (text-to-video)
- Stable Diffusion 3.5 Large (image generation)
For more information, see the Available Models page.
-
The following NVIDIA model is now available on DigitalOcean AI Platform for Agent Development Kit:
For more information, see the Available Models page.
-
Knowledge base enhancements are now generally available in DigitalOcean AI Platform, including the updated creation workflow, chunking controls, and data retrieval for testing knowledge bases. For more information, see Create and Manage Agent Knowledge Bases.
-
The following IntFloat embeddings model is now available on DigitalOcean AI Platform for DigitalOcean Knowledge Bases:
For more information, see the Available Models page.
-
The following Google model is now available on DigitalOcean AI Platform for Agent Development Kit and agents:
For more information, see the Available Models page.
-
The following Beijing Academy of Artificial Intelligence (BAAI) reranking model is now available on DigitalOcean AI Platform for DigitalOcean Knowledge Bases:
For more information, see the Available Models page.
-
The following Beijing Academy of Artificial Intelligence (BAAI) embeddings model is now available on DigitalOcean AI Platform for DigitalOcean Knowledge Bases:
For more information, see the Available Models page.
-
As part of the DigitalOcean AI-Native Cloud, DigitalOcean AI Bare Metal GPUs is now Bare Metal GPUs.
-
Data Services is now generally available, providing managed databases and knowledge bases for storing, indexing, and retrieving application data.
-
As part of the DigitalOcean AI-Native Cloud, DigitalOcean AI GPU Droplets is now GPU Droplets.
-
DigitalOcean AI Inference now supports scoped model access keys. When you create a key, you can limit it to specific foundation models and inference routers, enable batch inference, and restrict it to a VPC network so that only requests from that VPC network can authenticate. Team owners can also view and manage keys created by other team members. Previously created keys continue to authenticate without changes. For more information, see Model Access Keys.
-
Inference Router is now available in public preview and enabled for all users. With this feature, you can group multiple models into a model pool and configure routing rules and a selection policy for inference requests. We provide pre-built templates, or you can define custom task-matching logic using natural language, with configurable fallback support for reliability. For more information, see Inference Router.
-
As part of the DigitalOcean AI-Native Cloud, DigitalOcean AI Inference Hub is now Inference.
-
The following models are now available on DigitalOcean AI Inference for serverless inference:
- Qwen3 Coder Flash (Alibaba)
- DeepSeek V3.2 (DeepSeek)
- Llama 4 Maverick 17B 128E Instruct (Meta)
- Ministral 3 14B Instruct (Mistral AI)
- Nemotron Nano 12B v2 VL (NVIDIA)
- BGE M3 (BAAI)
- E5 Large (multilingual) (Intfloat)
- Qwen 3 TTS (1.7B) (text-to-speech)
- Wan2.2-T2V-A14B (text-to-video)
- Stable Diffusion 3.5 Large (image generation)
For more information, see the Foundation models page.
-
The following NVIDIA model is now available on Inference for serverless inference:
For more information, see the Available Models page.
-
The Model Playground now supports the following features when testing and comparing models:
-
Uploading images from local storage
-
Generating multimodal artifacts, such as images, audio, and text-to-speech, from models that support it
Read Test and Compare Models for more information.
-
We now support multimodal models for serverless inference. Multimodal models process and generate content across multiple data types, including images, audio, video, and text, thus enabling a much broader range of real-world applications, including document intelligence, voice agents, content generation, and accessibility tools. For more information, see Use Multimodal Inference.
-
You can now evaluate models available for serverless inference, inference routers, and dedicated inference deployments using a judge model. Scoring includes metrics such as correctness, completeness, ground-truth faithfulness, and safety. This feature is in public preview. You can opt in from the Feature Preview page. For more information, see Evaluate Models.
-
Model Catalog is now in General Availability.
-
Bring Your Own Models (BYOM) is now available in Model Catalog. You can import models from Hugging Face or Spaces buckets or folders. For details, see Import a Model.
-
The following Google model is now available on DigitalOcean Inference for serverless inference:
For more information, see the Available Models page.
-
Batch inference lets you submit text-only batch jobs for OpenAI and Anthropic models. Using batch inference significantly reduces cost compared to real-time inference. For more information, see Use Batch Inference.
-
You can now browse Model Catalog through a DigitalOcean MCP server.
-
Dedicated Inference is now in General Availability.
- A remote MCP server is also available, allowing MCP clients to create, update, list, and delete Dedicated Inference endpoints. For more information, see Dedicated Inference MCP Tools.
-
DigitalOcean Knowledge Base retrieval is now available through a DigitalOcean MCP server.
-
DigitalOcean Knowledge Bases are now in General Availability in Data Services. Using knowledge bases, you can store, index, and retrieve data for AI applications.
-
Data Services now lets you retrieve data from DigitalOcean Knowledge Bases using the Control Panel with semantic, keyword, or hybrid searches, apply filters, review retrieved chunks, and copy live code examples. For more information, see Test Knowledge Bases.
-
Data Services now supports reranking for DigitalOcean Knowledge Bases to improve the relevance of retrieved results before they’re returned or used in generated responses. For more information, see Create Knowledge Bases and Test Knowledge Bases.
-
RAG Playground is now available in DigitalOcean Knowledge Bases. It lets you run queries against a knowledge base and test how a serverless inference model generates answers from retrieved content.
For more information, see the Data Services Features page.
-
The following IntFloat embeddings model is now available in Data Services for DigitalOcean Knowledge Bases:
For more information, see the Available Models page.
-
The following Beijing Academy of Artificial Intelligence (BAAI) reranking model is now available in Data Services for DigitalOcean Knowledge Bases:
For more information, see the Available Models page.
-
The following Beijing Academy of Artificial Intelligence (BAAI) embeddings model is now available in Data Services for DigitalOcean Knowledge Bases:
For more information, see the Available Models page.
-
DigitalOcean AI Agentic Cloud is now DigitalOcean AI-Native Cloud.
-
A remote MCP server is now available for Network File Storage, providing API-based access for AI tools to create and manage NFS shares and access rules.
-
You can now use DigitalOcean personal access tokens for authenticating serverless inference requests. You can use a personal access token as an alternative to a model access key when sending requests to the serverless inference API. Model access keys remain recommended when you need per-application scoping, VPC restriction, or credentials dedicated to inference workloads. For more information, see Serverless Inference Overview.
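To illustrate, the sketch below builds an OpenAI-style chat-completion request whose bearer credential can be either a model access key or a personal access token. The endpoint URL and model slug here are assumptions for illustration; check the Serverless Inference Overview for the current values.

```python
import json
import os
import urllib.request

# Illustrative endpoint; verify against the Serverless Inference Overview.
INFERENCE_URL = "https://inference.do-ai.run/v1/chat/completions"

def build_inference_request(prompt, credential):
    """Build a chat-completion request.

    `credential` may be a model access key or, as of this release,
    a DigitalOcean personal access token.
    """
    body = json.dumps({
        "model": "llama3.3-70b-instruct",  # placeholder model slug
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        INFERENCE_URL,
        data=body,
        headers={
            "Authorization": f"Bearer {credential}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

credential = os.environ.get("DIGITALOCEAN_TOKEN")  # or a model access key
if credential:
    with urllib.request.urlopen(build_inference_request("Hello!", credential)) as resp:
        print(json.load(resp)["choices"][0]["message"]["content"])
```

Model access keys remain the better fit when you need per-application scoping or VPC restriction; the request shape is identical either way.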
-
A remote MCP server is now available for Volumes Block Storage, providing API-based access for AI tools to create, attach, detach, and manage volumes and volume snapshots.
-
DigitalOcean Managed Weaviate is now in private preview. Opted-in customers can provision Weaviate clusters in the TOR1 region in Small, Medium, and Large plans through the dedicated DigitalOcean Vector Databases API at /v2/vector-databases. Clusters are reachable over port 443 for both HTTP and gRPC and support configurable quantization (rq, pq, bq, or sq) for the vector index. During preview, cluster management is API-only. Preview clusters are not billed and are not covered by a paid support SLA. APIs, SKUs, regions, and Control Panel elements may change before general availability. For setup and usage guidance, see Managed Weaviate.
-
DigitalOcean Vector Databases is now generally available in Data Services, grouping managed engines for vector similarity search. The launch includes:
- Weaviate in private preview for retrieval-augmented generation and semantic search workloads. Available to opted-in customers; preview clusters are not billed.
- OpenSearch with the k-NN, ML Commons, and Neural Search plugins for hybrid (vector plus keyword) search and remote embedding models. Uses the existing Managed Databases OpenSearch engine.
- PostgreSQL with the vector (pgvector) and vectorscale (pgvectorscale) extensions for vector similarity search alongside relational data.
For an overview and guidance on choosing an engine, see Vector Databases.
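For the PostgreSQL option, a minimal sketch of the pgvector workflow: the table and column names are illustrative, and the helper assumes a psycopg-style connection to a Managed PostgreSQL cluster with the extension enabled.

```python
# DDL you would run once against the cluster; vector(3) is a toy dimension --
# in practice it matches your embedding model's output size.
SETUP_SQL = """
CREATE EXTENSION IF NOT EXISTS vector;
CREATE TABLE items (
    id bigserial PRIMARY KEY,
    content text,
    embedding vector(3)
);
"""

# <=> is pgvector's cosine-distance operator; <-> is Euclidean distance.
QUERY_SQL = """
SELECT content
FROM items
ORDER BY embedding <=> %(query_vec)s
LIMIT 5;
"""

def run_similarity_search(conn, query_vec):
    """Return the five nearest rows to `query_vec` (a list of floats)."""
    with conn.cursor() as cur:
        # pgvector accepts vectors as bracketed literals, e.g. '[1,2,3]'.
        cur.execute(QUERY_SQL, {"query_vec": str(query_vec)})
        return [row[0] for row in cur.fetchall()]
```

Because the vectors live in ordinary PostgreSQL tables, the ORDER BY clause can be combined with WHERE filters on relational columns in the same query.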
27 April 2026
-
The following OpenAI model is now available on DigitalOcean AI Platform for Agent Development Kit and agents:
For more information, see the Available Models page.
-
DigitalOcean Container Registry now supports image layers up to 20 GB.
-
DigitalOcean Container Registry now supports container images up to 100 GB.
-
The following OpenAI model is now available on Inference for serverless inference:
For more information, see the Available Models page.
24 April 2026
-
Now in public preview, App Platform supports request-based autoscaling for service components. Services can now scale automatically based on HTTP traffic metrics, including requests per second and P95 request duration, in addition to or instead of CPU utilization. Request-based autoscaling works with both shared and dedicated CPU plans.
For more, see our full release notes.