About REX-Ray

REX-Ray is a leading storage orchestration engine providing interoperability with cloud native orchestrators and runtimes to enable persistent applications in containers. It allows any application, from databases to key-value stores to legacy Java applications, to run in a container while saving data and resuming state after the container's lifecycle has ended. The REX-Ray project is led by {code} by Dell EMC with contributions from the open source community.

REX-Ray brings an opportunity to run both new and traditional applications in cloud native ways. Cloud native orchestrators and runtimes tend to use containers as a simple, portable method of packaging applications. However, containers are ephemeral in nature. A new container is created, given a unique ID, performs its function, and is then retired and typically forgotten. When the application needs to run again, a new container is created and the process starts over. This model has served stateless applications such as web servers, message queues, event triggers, and application controllers well. However, the same cannot be said for stateful applications such as Postgres or Redis, where removing the container or anchoring its data to a specific host is problematic. REX-Ray allows orchestrators and runtimes to integrate storage functionality such as volume creation, attaching, and mounting. This lets a container write directly to a volume presented by REX-Ray on the host, persisting data irrespective of the container lifecycle.
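As an illustration, on a host where REX-Ray has already been installed and configured with a storage driver, a Docker user could provision and consume a persistent volume along these lines (the volume and container names here are just examples):

```shell
# Create a volume through the REX-Ray Docker volume driver.
docker volume create --driver rexray --name pgdata

# Run Postgres against that volume.
docker run -d --name pg -v pgdata:/var/lib/postgresql/data postgres

# Remove the container; the volume and its data remain,
# ready to be mounted by a replacement container on any host.
docker rm -f pg
docker run -d --name pg2 -v pgdata:/var/lib/postgresql/data postgres
```

Because the volume lives on the external storage platform rather than on the host, the second container resumes with the first container's data intact.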

REX-Ray is a reliable and proven container storage orchestration engine. It focuses on storage orchestration and lifecycle basics, but also on features important to running persistent applications. Its built-in high availability, called preemption, truly sets it apart. When a container orchestrator such as Docker Swarm, Kubernetes, or Marathon for Mesos deploys a new container after a host failure, REX-Ray performs the orchestration necessary to attach the volume to the new host and resume state.
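As a sketch, enabling preemption in a REX-Ray configuration file might look like the following; the service name is an example, and the exact key path should be confirmed against the REX-Ray configuration reference:

```yaml
# /etc/rexray/config.yml (sketch; verify keys against the REX-Ray docs)
libstorage:
  service: ebs            # example storage service
  integration:
    volume:
      operations:
        mount:
          preempt: true   # forcibly move a volume to the requesting host
```

With preemption enabled, a mount request from a new host forcibly detaches the volume from a failed host so the rescheduled container can resume.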

REX-Ray strongly supports cloud native philosophies of interoperability by ensuring consistent approaches to running applications and integrating storage across services. Develop locally using REX-Ray to see how your containers will write data to external volumes. Then, without changing any code or container runtime commands, consistently launch persistent workloads for testing, QA, and production in the cloud or in your data center.

REX-Ray is popular not only because of its consistent user experience, but also because of its broad range of storage platform choices. It is by far the easiest way for storage platforms to build and sustain cloud native interoperability.


Persistent Storage Orchestration for Containers

Run any application in a container across multiple storage platforms, including databases, key-value stores, big data, real-time streaming, messaging, and any other application that stores data. Resume state and save data beyond the lifecycle of a container. Containers aren’t just for stateless applications anymore.


REX-Ray is a simple and focused container storage orchestration engine offering extensive platform support. It provides high-availability features for container restarts across hosts, an intuitive CLI, and contributions from not only the cloud native community but also one of the world's leading storage vendors.


As a completely open and community-driven project, REX-Ray is constantly innovating and providing new integration points. The community continues to contribute drivers, features, and additional functionality, which makes it the best choice for cloud native infrastructures.

Trusted Interoperability

REX-Ray is built using an interface library called libStorage. This library includes reusable orchestration and lifecycle operations that satisfy the integration of cloud native and storage platforms today and tomorrow.

Multiple Storage Platform Support

A single interface to all of your storage platforms is important. REX-Ray exposes common orchestration and volume lifecycle operations no matter what type of platform you choose.

Storage Agnostic

REX-Ray includes support for storage platform types that cover block, file, and object. Whatever storage requirements your application has, REX-Ray supports a storage platform that fits.

Effortless Deployment

Install REX-Ray as a single binary or deploy it as a container. It can be configured to serve one or multiple storage platforms from a single stateless service, and it supports cloud native methods of interoperability, including functioning as a long-running Docker plugin.
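For example, both deployment styles can be driven from a couple of commands; the install URL and plugin name below follow the project's published conventions, and the EBS access keys are placeholders:

```shell
# Install the latest stable REX-Ray binary via the project's install script.
curl -sSL https://rexray.io/install | sh

# Or run REX-Ray as a managed Docker plugin (example: the EBS plugin;
# the credential values shown are placeholders).
docker plugin install rexray/ebs \
  EBS_ACCESSKEY=<access-key> EBS_SECRETKEY=<secret-key>
```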

Standalone or Centralized

Multiple architectural choices allow flexibility for deployments. Configure REX-Ray as standalone to serve in a decentralized architecture. Optionally leverage the client/agent and controller for a centralized architecture providing central configuration and control for storage operations.
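In the centralized architecture, a client/agent node keeps no credentials of its own and simply points its libStorage client at the remote controller. A minimal sketch of such a client configuration, assuming a controller reachable at an example hostname and the default libStorage port, might look like:

```yaml
# Client/agent config (sketch): use a remote controller instead of an
# embedded server. Hostname, port, and service name are examples only.
libstorage:
  host: tcp://rexray-controller.example.com:7979
  service: ebs
```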

Intuitive CLI

More than just a plugin, REX-Ray boasts a fully featured command line interface (CLI) that allows a user to manually perform storage operation tasks.
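A typical volume lifecycle from the CLI, assuming a configured storage service, looks roughly like the following; confirm the exact flags with `rexray volume --help`:

```shell
rexray volume create mydata --size=16   # create a 16 GB volume
rexray volume ls                        # list volumes on the platform
rexray volume mount mydata              # attach and mount to this host
rexray volume unmount mydata            # unmount and detach
rexray volume rm mydata                 # delete the volume
```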

Scale Your Platform

Making your platform relevant in a cloud native landscape is easy with REX-Ray. Contribute a minimal driver to libStorage that covers the basics of volume lifecycle and discovery for your platform, and get integration with upstream cloud native platforms such as Docker, Mesos, and Kubernetes for free!
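To give a feel for how small that surface is, here is a hypothetical Go sketch of the kind of volume lifecycle and discovery contract a minimal driver covers. This is not the actual libStorage driver interface; consult the libStorage repository for the real contracts:

```go
// Hypothetical sketch only; names and signatures are illustrative,
// not the real libStorage driver interface.
package driver

// Volume is a simplified view of a storage volume.
type Volume struct {
	ID     string
	Name   string
	SizeGB int64
}

// Driver captures the basic lifecycle and discovery operations a new
// storage platform would implement to participate.
type Driver interface {
	Volumes() ([]*Volume, error)                           // discovery
	VolumeCreate(name string, sizeGB int64) (*Volume, error)
	VolumeRemove(volumeID string) error
	VolumeAttach(volumeID, hostID string) error            // present device to host
	VolumeDetach(volumeID string) error
}
```

Everything above this contract — the API server, CLI, and orchestrator integrations — comes from libStorage and REX-Ray themselves.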


REX-Ray operates either as a single standalone process with bundled components acting as an engine, or through discrete components, including a controller, executor, and agent, in a centralized manner. This provides architectural flexibility: components can be deployed specific to their function while making requests over an API to the controller.

REX-Ray does not accomplish this alone. libStorage, a cloud native library implemented by REX-Ray, provides much of the underlying functionality. REX-Ray itself provides managed plugins, the CLI, and cloud native interoperability for interfaces such as the Docker Volume Driver interface. It relies heavily on libStorage’s model, API, and reference client and server implementations for a heterogeneous ability to orchestrate and consume any type of storage.

libStorage implements a simple and succinct architecture comprised of a client and a server. The client downloads an executor from the controller and storage service. This executor is responsible for providing identifying information and device discovery for the node wishing to attach storage. The executor makes it possible for a storage platform to advertise a libStorage-compatible API natively while keeping the client service the same for every storage platform.

The animated graphic at the top of the page uses REX-Ray in a standalone fashion, where both the client and server are embedded in the same engine. This is a common way of using REX-Ray but does not take advantage of the centralized architecture. A large-scale installation can have thousands of cluster nodes, and updating tooling on each of them can be costly. If security credentials are required for storage operations, keeping them off the cluster nodes is desirable. It may also be beneficial to coordinate and throttle requests through a central controller. REX-Ray as a controller works perfectly for this and can run behind a load balancer or even directly on storage platforms.