REX-Ray operates either as a single standalone process with all components bundled into one engine, or in a centralized fashion through discrete components: a controller, an executor, and an agent. This provides architectural flexibility, allowing each component to be deployed specific to its function while making requests to the controller over an API.
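In the standalone case, a single configuration file drives the embedded client and server on each node. The sketch below is illustrative only: the `ebs` service name, the file path, and the credential keys are assumptions modeled on REX-Ray's documented configuration format, and should be checked against the docs for your version and storage driver.

```yaml
# /etc/rexray/config.yml -- standalone mode: client, server, and
# storage driver all run inside one rexray process on this node.
libstorage:
  service: ebs                 # storage service to use (illustrative)
ebs:
  accessKey: MY_ACCESS_KEY     # placeholder credentials; in standalone
  secretKey: MY_SECRET_KEY     # mode these must live on every node
```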
REX-Ray does not accomplish this alone. Much of the functionality comes from libStorage, a cloud native library that REX-Ray implements. REX-Ray itself provides managed plugins, a CLI, and cloud native interoperability for interfaces such as the Docker Volume Driver interface. It relies heavily on libStorage's model, API, and reference client and server implementations to orchestrate and consume any type of storage heterogeneously.
libStorage implements a simple and succinct architecture composed of a client and a server. The client downloads an executor from the controller for the configured storage service. This executor is responsible for providing identifying information about, and performing device discovery on, the node that wishes to attach storage to itself. The executor makes it possible for a storage platform to advertise a libStorage-compatible API natively while the client service itself stays the same for every storage platform.
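The executor handoff described above can be sketched as a small, self-contained toy: a stand-in controller serves an executor file over HTTP and a per-node client downloads it before using it locally. The `/executors/lsx-linux` path, the port handling, and the executor's contents are illustrative stand-ins, not the real libStorage API.

```python
# Toy model of the libStorage executor handoff: the "controller"
# serves executor binaries; the per-node client fetches the one
# for its platform. Names and paths here are illustrative.
import http.server
import threading
import urllib.request

# Files the stand-in controller offers for download.
EXECUTORS = {"/executors/lsx-linux": b"#!/bin/sh\necho node-1\n"}

class Controller(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        body = EXECUTORS.get(self.path)
        if body is None:
            self.send_error(404)
            return
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):  # keep the demo quiet
        pass

# Start the stand-in controller on an ephemeral local port.
server = http.server.HTTPServer(("127.0.0.1", 0), Controller)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The client downloads the executor once; it would then run it
# locally for node identification and device discovery (not shown).
url = f"http://127.0.0.1:{server.server_port}/executors/lsx-linux"
executor = urllib.request.urlopen(url).read()
print(executor.decode().splitlines()[0])  # → #!/bin/sh
server.shutdown()
```

The point of the split is visible even in the toy: the client code never changes per storage platform; only the executor it downloads does.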
The animated graphic at the top of the page uses REX-Ray in a standalone fashion, where both the client and server are embedded in the same engine. This is a common way of using REX-Ray, but it does not take advantage of the centralized architecture. A large-scale installation can have thousands of cluster nodes, and updating tools on each of them can be costly. If security credentials are required for storage operations, keeping them off the cluster nodes is desirable. It may also be beneficial to coordinate and throttle requests through a central controller. REX-Ray as a controller works perfectly for this and can run behind a load balancer or even directly from storage platforms.
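A hedged sketch of that centralized split, assuming one controller host and an EBS-style driver: the hostnames, port, service name, and exact keys below are assumptions modeled on REX-Ray's documented configuration format, not verbatim settings.

```yaml
# Controller (one or a few instances, optionally behind a load
# balancer). Storage credentials stay here, off the cluster nodes.
libstorage:
  server:
    services:
      ebs:                     # service name is illustrative
        driver: ebs
ebs:
  accessKey: MY_ACCESS_KEY     # placeholders; only the controller
  secretKey: MY_SECRET_KEY     # ever holds these
---
# Cluster node (one of potentially thousands): a thin client that
# sends all storage requests to the controller's API.
libstorage:
  host: tcp://controller.example.com:7979   # illustrative address
  service: ebs
```

Because every node carries only the thin client configuration, upgrading drivers or rotating credentials touches the controller alone rather than the whole cluster.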