High-throughput distributed training & production inference GPU clusters for AI/ML.

Take advantage of our GPU compute nodes, which can be combined into a Kubernetes/SLURM cluster that scales to distributed workloads.

Clusters can be configured either from individual nodes with mixed GPUs, or as an entire supercomputer built on NVIDIA HGX H100/A100 compute nodes (up to 8x SXM5 H100 / SXM4 A100 GPUs per node).

Pre-configured with Kubernetes / SLURM

Leave the MLOps to us: we can convert your existing RendeRex GPU compute nodes into a high-availability GPU compute cluster running GPU-enabled Kubernetes, with Kubeflow installed out of the box (including Jupyter notebooks, Katib for hyperparameter tuning, a UI for running experiments, model version control, and more).

We'll even provide extensive public & private documentation on how to replicate, expand and utilize your compute cluster.
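To give a concrete feel for what GPU-enabled Kubernetes looks like from a user's terminal, here is a minimal sketch that submits a single-GPU pod with the official kubernetes Python client. The pod name, namespace, and container image tag are illustrative placeholders, not part of our standard configuration, and the example assumes the NVIDIA device plugin is installed (as it is on a GPU-enabled cluster).

```python
# Minimal sketch: submitting a single-GPU pod to a GPU-enabled Kubernetes cluster.
# Requires the official client: pip install kubernetes
from kubernetes import client, config

# Load credentials from the local kubeconfig.
config.load_kube_config()

pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "gpu-smoke-test"},  # illustrative name
    "spec": {
        "restartPolicy": "Never",
        "containers": [{
            "name": "cuda",
            "image": "nvcr.io/nvidia/cuda:12.3.1-base-ubuntu22.04",  # illustrative tag
            "command": ["nvidia-smi"],
            # The NVIDIA device plugin exposes GPUs as a schedulable resource.
            "resources": {"limits": {"nvidia.com/gpu": "1"}},
        }],
    },
}

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
print("Submitted gpu-smoke-test; inspect it with: kubectl logs gpu-smoke-test")
```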

Storage for AI

Combine your compute nodes with a RendeRex storage solution to get high-bandwidth distributed or tiered object-based storage for your datasets.

Configurable with either free, open-source software (TrueNAS, Longhorn, etc.) or commercially supported solutions (WekaFS, Lustre, etc.).

Expertly engineered topology.

Not sure about the interconnect? Talk to one of our engineers and we'll help you design the ideal topology for your use case. Our team has extensive experience with high-bandwidth InfiniBand interconnects for non-blocking, high-throughput node-to-node networking.
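As one hedged illustration of what that node-to-node fabric looks like from the user's side, the sketch below runs an NCCL all-reduce across every GPU in a SLURM job using PyTorch. The environment variables are the ones SLURM normally exports for each task; MASTER_ADDR and MASTER_PORT are assumed to be set by your sbatch/srun launch script, and PyTorch itself is an assumed dependency rather than part of any specific RendeRex image.

```python
# Minimal sketch: NCCL all-reduce across all GPUs in a SLURM job.
# NCCL uses InfiniBand transports automatically when the fabric is present.
import os
import torch
import torch.distributed as dist

def init_from_slurm():
    # Standard variables SLURM exports for each task in the job.
    rank = int(os.environ["SLURM_PROCID"])
    world_size = int(os.environ["SLURM_NTASKS"])
    local_rank = int(os.environ["SLURM_LOCALID"])

    torch.cuda.set_device(local_rank)
    # MASTER_ADDR / MASTER_PORT must be exported by the sbatch/srun script.
    dist.init_process_group(backend="nccl", rank=rank, world_size=world_size)
    return rank, world_size, local_rank

if __name__ == "__main__":
    rank, world_size, local_rank = init_from_slurm()
    # All-reduce a ~4 MB tensor as a quick inter-node health and bandwidth check.
    t = torch.ones(1024 * 1024, device=f"cuda:{local_rank}")
    dist.all_reduce(t)
    if rank == 0:
        print(f"all_reduce across {world_size} ranks OK, value = {t[0].item()}")
    dist.destroy_process_group()
```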


Customizability

Add existing nodes to the cluster as either CPU or GPU compute nodes, add management or access nodes, and much more. Node customization options are nearly endless, including which GPUs you'd like to use; for the most demanding workloads, water-cooled HGX H100/A100 compute solutions are available.

Scalable up to 800 GPUs.

Our clusters are designed with scalability in mind from the start. Whether you begin with a couple of nodes or scale all the way up to a 100-node configuration, little additional configuration is required.


Pre-configured for you.

Take advantage of our Linux & MLOps engineers and enjoy a hassle-free deployment that comes pre-configured with everything your team needs to get started immediately.


Additional DevOps/MLOps support

Purchase additional MLOps/DevOps support packages directly from RendeRex.

Basic (Included)

AED0 / month

  • Initial environment setup (Kubernetes, SLURM) with Kubeflow, etc.
  • Limited email support available
  • Additional and local on-site support available for purchase on a per-case basis

Enterprise+ Platinum

AED76,000 / month

(One-year subscription)


  • Dedicated on-site MLOps engineer(s)
  • Linux system administration included for the entire duration, on all nodes
  • Bi-monthly status reports on node health, changes, etc.
  • Monthly lectures & customized documentation on how to utilize the cluster

Get In Touch

If you're interested in one of our solutions or products, or you have a question for us, please get in touch at info@renderex.ae, and one of our representatives will contact you shortly. Alternatively, use the form below for a direct quote on one of our products: