FluidStack
Access distributed computing resources with FluidStack. Rent computing power for your AI and machine learning workloads, or rent out your own.
Tags: AI Development Platforms
FluidStack: AI Cloud Platform for Advanced GPU Compute
FluidStack is a leading AI cloud platform designed to provide scalable, high-performance GPU infrastructure for training and inference workloads. Trusted by top AI companies such as Mistral, Character.AI, Poolside, and Black Forest Labs, FluidStack offers instant access to thousands of NVIDIA GPUs, including H100, H200, and the upcoming GB200 models. The platform is engineered to support the most demanding AI applications with exceptional speed, reliability, and cost-effectiveness.
Key Features
- Rapid Deployment: Deploy a 4,096+ GPU cluster in just two days, enabling quick scaling for large-scale AI projects.
- High-Performance Hardware: Access to NVIDIA A100, H100, H200, and GB200 GPUs, optimized for AI workloads.
- Fully Managed Infrastructure: Choose between Kubernetes (K8s) or Slurm for workload orchestration, with bare-metal options available.
- Global Availability: Deploy GPU instances in under 5 minutes, with seamless scaling to hundreds of GPUs on-demand.
- 24/7 Support: Benefit from 15-minute response times and 99% uptime, ensuring continuous operation of AI workloads.
- Cost Efficiency: Save up to 70% on cloud bills compared to traditional hyperscalers, with transparent and competitive pricing.
How to Use FluidStack
Getting started with FluidStack is straightforward. Follow these steps:
- Sign Up: Create an account on the FluidStack platform.
- Generate SSH Keys: Use the ssh-keygen command to generate a public/private key pair for secure access to your instances (a sample invocation is shown after the instance-creation command below).
- Create GPU Instances: Use the FluidStack API or Dashboard to launch GPU instances. For example, to create an RTX_A6000_48GB instance, use the following cURL command:
$ curl -X POST https://platform.fluidstack.io/instances \
-H "api-key: " \
-H "Content-Type: application/json" \
-d '{
"gpu_type": "RTX_A6000_48GB",
"name": "my-test-instance",
"ssh_key": "my_ssh_key"
}'
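For step 2, a key pair can be generated with standard OpenSSH; the key type, filename, and comment below are only examples:
$ # generate an Ed25519 key pair for FluidStack access (path and comment are illustrative)
$ ssh-keygen -t ed25519 -f ~/.ssh/fluidstack_key -C "fluidstack"
The resulting public key (~/.ssh/fluidstack_key.pub) is added to your FluidStack account, and its registered name is what the ssh_key field in the request above is assumed to reference (my_ssh_key in the example).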
After creation, you can manage your instances through the FluidStack Dashboard or programmatically via the API. For detailed guidance, refer to the FluidStack Quickstart Guide.
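As a rough sketch of programmatic management, and assuming the API exposes a matching GET endpoint at the same /instances path (confirm against the FluidStack API reference), existing instances could be listed with:
$ # list instances; authenticated with the same api-key header as the creation request
$ curl -X GET https://platform.fluidstack.io/instances \
  -H "api-key: <api_key>"
The response can then be inspected or scripted against, with the Dashboard remaining available for manual management.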
Pricing
FluidStack offers flexible pricing models to suit various needs:
- On-Demand GPU Instances: Launch GPU instances by the hour, scaling from 1 to 100+ GPUs as required.
- Reserved GPU Clusters: Commit to clusters with 8 to 10,000+ GPUs for extended periods (30 days or longer), benefiting from discounted rates.
FluidStack’s pricing is designed to be transparent and competitive, offering significant savings compared to traditional cloud providers. For detailed pricing information, visit the FluidStack Pricing Page.
Frequently Asked Questions
- What GPU models are available? FluidStack provides access to NVIDIA A100, H100, H200, and the upcoming GB200 GPUs, all optimized for AI workloads.
- How quickly can I deploy a GPU cluster? FluidStack enables deployment of multi-thousand GPU clusters in as little as two days, significantly faster than traditional providers.
- Is support available 24/7? Yes, FluidStack offers 24/7 support with a 15-minute response time and a 99% uptime guarantee.
- Can I scale my GPU instances? Absolutely. FluidStack allows seamless scaling from single GPU instances to large clusters, accommodating growing AI project needs.
- What orchestration options are available? Choose between Kubernetes (K8s) or Slurm for workload orchestration, with bare-metal options also available.