Nvidia Omniverse Powers Otto Group’s Coordinated Autonomy Layer to Orchestrate Heterogeneous Robotic Fleets
Nvidia Omniverse drives Otto Group’s digital twin and simulations, enabling a Coordinated Autonomy Layer to orchestrate mixed robot fleets across warehouses.
Nvidia Omniverse played a central role in Otto Group’s recent push to stitch together disparate warehouse robots into a single, manageable system. The initiative promises to transform fulfillment operations by using 3D digital twins and physics-driven simulation to reduce inefficiencies and enable fleet-wide coordination. At its heart is Otto’s Coordinated Autonomy Layer (CAL), an orchestration fabric that sits between existing warehouse management systems and a fleet of autonomous mobile robots (AMRs), shuttles and task-specific arms. CAL uses Omniverse to validate layout changes, simulate traffic flows and measure the impact of new routing strategies before anything touches the shop floor.
Why Otto Group needed a shared simulation platform
Otto Group operates an end-to-end supply chain for roughly 45 million customers and reports around €15 billion in annual revenue, a scale at which even modest gains in throughput or uptime translate directly into margins and service levels. Over time the company accumulated robotics from multiple vendors, including floor movers, shuttle systems and robotic manipulators, each purchased to solve a targeted problem. The result was a collection of “islands of automation” that functioned well individually but could not reliably coordinate with one another. That friction drove Otto to reframe the problem: the challenge was no longer any single robot’s capability, but how to harmonize many moving parts across shared physical space.
The decision to pair a Coordinated Autonomy Layer with Nvidia Omniverse gave Otto a way to test integration approaches without risking operations. Using a Boston Dynamics Spot unit to scan aisles and capture spatial data, Otto turned that sensor output into a highly accurate 3D digital twin inside Omniverse. The model revealed discrepancies with existing documentation and made it possible to evaluate layout alternatives and routing strategies in a simulated environment. In one test, reconfiguring handover locations and traffic routes cut robot stop-and-go events by around 20%, a productivity gain that was validated in simulation and then implemented on the warehouse floor.
How the Coordinated Autonomy Layer (CAL) functions
CAL is conceived as a middleware orchestration plane between the warehouse management system (WMS) and the heterogeneous fleet of machinery. Rather than replacing existing vendor controllers, the layer accepts task assignments from the WMS, decomposes work into device-appropriate orders, assigns those orders across fleets and coordinates movement to reduce contention and risky interactions. In practice that means CAL issues goals and constraints to AMRs, shuttle systems and robotic arms, then manages sequencing and handovers so the collective behavior resembles a single, cooperative system rather than uncoordinated actors.
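As an illustration of that decomposition step, here is a minimal Python sketch; the task schema, device classes and field names are invented for the example and are not Otto’s actual CAL interfaces:

```python
from dataclasses import dataclass

# Hypothetical WMS order and device classes -- illustrative only,
# not Otto's actual CAL schema.
@dataclass
class Order:
    sku: str
    source: str       # pick location
    destination: str  # handover or packing station

def decompose(order: Order) -> list[dict]:
    """Split one WMS order into device-appropriate sub-tasks:
    an arm picks, an AMR transports, a shuttle stows."""
    return [
        {"device_class": "arm", "action": "pick",
         "sku": order.sku, "at": order.source},
        {"device_class": "amr", "action": "transport",
         "from": order.source, "to": order.destination},
        {"device_class": "shuttle", "action": "stow",
         "sku": order.sku, "at": order.destination},
    ]

tasks = decompose(Order(sku="A-1042", source="aisle-7", destination="handover-3"))
```

A real CAL would also attach constraints (priorities, deadlines, exclusion zones) to each sub-task before dispatching it to a fleet.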
A critical part of CAL’s value is routing: giving robots paths and priorities that minimize stops, re-starts and collision risk. Another is task allocation: determining which asset is best suited to pick, transport or hand off an item given current location and load. Otto is also developing what its engineers call an “artificial brain” layer — an AI-driven decision-making component that monitors performance telemetry, adapts assignment policies and proposes layout or operational changes that can be validated in Omniverse before being published to production.
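A toy version of that allocation decision might score candidate robots by distance to the task plus a penalty for current load; the weights, positions and fleet snapshot below are purely illustrative, not Otto’s values:

```python
import math

# Hypothetical fleet snapshot -- positions (metres) and load fractions
# are made up for the example.
fleet = [
    {"id": "amr-01", "pos": (2.0, 3.0), "load": 0.2},
    {"id": "amr-02", "pos": (8.0, 1.0), "load": 0.9},
    {"id": "amr-03", "pos": (1.0, 9.0), "load": 0.1},
]

def allocate(task_pos, fleet, w_dist=1.0, w_load=5.0):
    """Score each robot by distance to the task plus a load penalty;
    the lowest score wins. Weights are tuning knobs, not real parameters."""
    def score(robot):
        return w_dist * math.dist(robot["pos"], task_pos) + w_load * robot["load"]
    return min(fleet, key=score)["id"]

best = allocate((3.0, 3.0), fleet)  # amr-01: closest and lightly loaded
```

Production allocators would add battery state, task deadlines and traffic forecasts to the score, but the shape of the decision is the same.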
What Nvidia Omniverse provides for physical AI projects
Nvidia Omniverse is used here as a physics-capable, collaborative simulation and visualization platform. It ingests the point-cloud and imagery scans captured by Spot and other sensors to produce a spatially accurate digital twin of the warehouse environment. That twin is more precise than paper blueprints and can be instrumented with virtual representations of robot models, conveyors, shelves and human workstations. From this shared model, engineering teams can run comparative simulations of layouts, evaluate traffic patterns, measure throughput and spot bottlenecks without interrupting live operations.
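As a rough intuition for how raw scan data becomes a navigable model, the toy sketch below collapses 3D scan points into a 2D floor-occupancy grid; Omniverse’s actual ingestion pipeline is far richer, and the resolution and sample points here are invented:

```python
# Illustrative only: reduce 3D obstacle points to occupied 2D floor cells.
def occupancy_grid(points, cell=0.5):
    """Mark each (x, y) cell that contains at least one obstacle point,
    ignoring returns near the floor or above head height."""
    occupied = set()
    for x, y, z in points:
        if 0.1 < z < 2.0:
            occupied.add((int(x // cell), int(y // cell)))
    return occupied

scan = [(1.2, 0.4, 1.0), (1.3, 0.45, 1.5), (4.0, 2.0, 0.05)]
grid = occupancy_grid(scan)  # the third point is a floor return and is dropped
```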
Omniverse’s strengths for this use case include multi-agent physics simulation, high-fidelity rendering to validate sight-lines and sensor occlusions, and compatibility with external AI models that perform planning and control. For Otto, Omniverse becomes a risk-free rehearsal stage: changes that perform well in simulation are deployed with higher confidence, and the platform enables faster iteration between idea, simulation and live test.
How the solution was developed and tested in practice
Otto began the project with a pilot warehouse, scanning the environment and constructing the digital twin in Omniverse within days. Engineers then modeled their existing fleet mix — AMRs, shuttles and robotic manipulators — and ran scenarios that compared current layouts to alternative configurations. One measurable outcome of those simulations was a roughly 20% reduction in stop-and-go behaviors when a revised handover area was implemented. That reduction translated to smoother material flow and improved robot productivity once the change was rolled out.
Rather than replacing vendor ecosystems, Otto’s approach emphasizes orchestration and coexistence. CAL integrates with vendor controllers and exposes a unified control surface that the WMS can target. The team worked with partners across hardware and software vendors — including cloud and consultancy partners — to address integration points, data formats and safety constraints, accelerating deployment beyond the pilot to additional sites across Europe.
Who benefits and who should adopt this approach
Retailers and logistics operators that already run mixed fleets of automation will see the clearest upside: organizations with heterogeneous investments in AMRs, shuttles, fixed conveyors and robotic arms can extract far more value by making those systems cooperative rather than isolated. Enterprises with high SKU volumes, variable order profiles, and high cycle counts will particularly benefit because the marginal gains from reductions in idle or stop-and-go behavior compound quickly at scale.
Small operations that are starting with a single vendor solution will still gain value from digital twins and simulation, but the immediate business case is strongest where coordination problems already cause throughput loss or safety incidents. Technology teams — systems integrators, robotics engineers, cloud architects and data scientists — will be key to building the integration, while operations teams will need retraining to move from direct manual tasks to supervisory and exception-handling roles.
How the technology works technically and operationally
At a technical level, the solution combines several layers:
- Physical scanning and perception: mobile mapping units and fixed sensors create point clouds and imagery used to populate the 3D twin.
- Digital twin and simulation: Omniverse holds the geometric model, physics simulation, and virtual robot models for testing spatial interactions and control strategies.
- Orchestration middleware (CAL): a service layer that translates WMS intents into concrete, device-specific tasks, handles routing, collision avoidance priorities and sequencing.
- AI decision layer: analytics and learning systems that recommend policy adjustments, reassign tasks dynamically, and adapt routing strategies based on live telemetry.
- Integration fabric: APIs and adapters that link the CAL to vendor controllers, safety systems, telemetry streams and the WMS.
Operationally, the flow starts when the WMS emits an order or a material movement request. CAL ingests the request and consults the current state of the fleet and the digital twin (for spatial constraints). It issues assignments and routes, monitors execution, and if an exception occurs — a jam, battery depletion or sensor anomaly — it recalculates alternatives. Periodically, offline or in parallel, the AI layer consumes historical performance and simulation results to refine policies and suggest layout changes that Ops can validate in Omniverse before enacting.
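That execute-monitor-recalculate loop can be sketched as follows; the exception handling and replanning policy are illustrative stand-ins, not Otto’s implementation:

```python
# Schematic orchestration loop -- execute/replan are injected callbacks
# standing in for vendor controllers and the twin-backed router.
def orchestrate(steps, execute, replan, max_retries=3):
    """Run plan steps in order; when one fails (jam, low battery,
    sensor anomaly), splice in an alternative plan and continue."""
    retries = 0
    while steps:
        step = steps.pop(0)
        if execute(step):
            continue
        retries += 1
        if retries > max_retries:
            raise RuntimeError(f"unrecoverable exception at {step}")
        steps = replan(step) + steps  # route around the failed step

    return retries

# Toy run: step "b" fails once, the replanner substitutes a detour.
failed = {"b"}
def execute(step):
    if step in failed:
        failed.discard(step)  # the jam clears after one detour
        return False
    return True

def replan(step):
    return ["detour", step]

retries = orchestrate(["a", "b", "c"], execute, replan)  # -> 1
```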
Safety, standards and interoperability challenges
Coordinating robots from multiple vendors raises several practical challenges. Safety standards and certification regimes differ by hardware, and ensuring predictable behavior across combinations of devices requires careful verification. Interoperability is complicated by proprietary protocols, varied telemetry semantics and heterogeneous control models. Developers must implement robust adapters that translate between vendor APIs and the CAL’s unified schema, and safety engineers must validate that fallbacks behave properly under degraded communications or sensor failure.
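The adapter pattern described above might look like the following sketch, where both vendor message formats and the unified schema are invented for illustration:

```python
# Hypothetical unified command and vendor adapters -- the message
# formats are made up, not real vendor protocols.
class UnifiedCommand:
    def __init__(self, device_id, goal):
        self.device_id = device_id
        self.goal = goal  # target waypoint (x, y)

class VendorAAdapter:
    """Vendor A takes a single MOVE_TO message with a target tuple."""
    def send(self, cmd: UnifiedCommand) -> dict:
        return {"robotId": cmd.device_id, "cmd": "MOVE_TO", "target": cmd.goal}

class VendorBAdapter:
    """Vendor B expects coordinates split into separate fields."""
    def send(self, cmd: UnifiedCommand) -> dict:
        x, y = cmd.goal
        return {"unit": cmd.device_id, "op": "goto", "x": x, "y": y}

adapters = {"vendor_a": VendorAAdapter(), "vendor_b": VendorBAdapter()}
msg = adapters["vendor_b"].send(UnifiedCommand("amr-07", (4.0, 2.5)))
```

The same pattern applies in reverse for telemetry: each adapter normalizes vendor-specific status messages into the CAL’s unified schema before they reach the decision layer.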
Data governance is another consideration. High-fidelity digital twins and telemetry streams contain operational and potentially sensitive business information. Organizations must enforce access controls, retention policies and auditing for simulation models and live telemetry to mitigate intellectual property and compliance risk. Finally, latency-sensitive control loops may still need to run at the edge; orchestration systems must balance cloud-based analytics with on-premise execution for mission-critical motions.
Implications for workforce and organizational design
Otto’s approach highlights a shift in workforce roles rather than a simple substitution of human labor by machines. As robots take on repetitive tasks, human roles evolve toward supervision, exception handling, system engineering and continuous improvement. That places a premium on upskilling: logistics staff need training in fleet monitoring dashboards, simulation tools and basic robotics principles, while engineering teams require expertise in distributed systems, AI for scheduling and simulation validation.
From an organizational perspective, success requires cross-functional collaboration between operations, IT, engineering and vendor partners. A laboratory-style learning loop — observe, simulate, test, deploy — is becoming the operating model for warehouses that intend to scale automation without introducing operational instability.
How this fits into broader industry trends
Otto’s work is a microcosm of a larger movement: treating physical infrastructure as software-defined systems. Digital twins, edge-cloud orchestration, and AI-driven policies are converging to make warehouses configurable through code and models. This mirrors trends in smart factories, autonomous vehicles and smart cities, where simulation-informed decision-making reduces deployment risk and accelerates iteration.
That convergence also creates opportunities for developer tools, integration platforms, simulation-as-a-service offerings and standards bodies aiming to harmonize interfaces between robotic vendors and orchestration layers. For enterprises, it signals that future capital investments should account for data architecture and interoperability, not just hardware specs.
Business use cases and ROI considerations
The most immediate commercial benefits come from throughput gains, reduced downtime, lower accident rates and increased asset utilization. For high-volume fulfillment centers, even modest reductions in idle time translate into significant cost savings. Simulation-driven layout changes that cut robot stop-and-go incidents by roughly 20% — as Otto experienced in one scenario — offer a concrete example of how virtual testing can unlock operational gains.
However, ROI depends on several factors: the heterogeneity of existing systems, the scale of operations, the cost of integration, and the ability to leverage simulation results operationally. Projects that pair pilots with clear, measurable KPIs (e.g., order-lines per hour, mean time between safety incidents, energy consumption per unit picked) and invest in change management tend to produce the most reliable returns.
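A KPI rollup of the kind described can be as simple as counting telemetry events per time window; the records and field names below are hypothetical:

```python
# Toy KPI computation over a one-hour telemetry window -- event
# records and field names are invented for the example.
events = [
    {"type": "line_picked", "t": 0},
    {"type": "line_picked", "t": 1800},
    {"type": "line_picked", "t": 3500},
    {"type": "stop_and_go", "t": 2000},
]
window_hours = 1.0

lines_per_hour = sum(e["type"] == "line_picked" for e in events) / window_hours
stop_rate = sum(e["type"] == "stop_and_go" for e in events) / window_hours
```

Tracking such rates before and after a simulation-validated change is what turns a claim like “20% fewer stop-and-go events” into a verifiable result.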
Developer and vendor implications
For developers, the rise of CAL-like platforms creates demand for middleware adapters, digital twin tooling, simulation models and explainable AI for task allocation. Vendors that expose clear, well-documented APIs and embrace standards will find it easier to participate in multi-vendor environments. For system integrators, expertise in mapping WMS semantics to physical actions, and in modeling robot behaviors realistically in simulation, will be a differentiator.
Cloud providers and AI toolmakers have an opportunity to offer integrated stacks — simulation engines, model training pipelines, edge orchestration and compliance tooling — tailored to supply chain use cases. Meanwhile, robotics manufacturers may need to rethink product roadmaps to support richer telemetry and standardized control interfaces.
Broader implications for the software industry and supply chains
Otto Group’s project underscores software’s growing role as the coordinating intelligence of physical systems. As logistics operators invest in digital twins and orchestration layers, software becomes the primary lever for differentiation and scale. This shifts competitive advantage toward organizations that can combine domain expertise with systems engineering and AI capabilities. It also elevates concerns about vendor lock-in, the value of open standards, and the need for cross-industry collaboration to ensure safe, interoperable deployments.
Enterprises will increasingly budget for software maintenance, model validation and continuous simulation alongside traditional capital expenditures for robots and conveyors. The skill sets prized by logistics firms will tilt toward data engineers, simulation specialists and AI safety experts as much as toward mechanical maintenance technicians.
The industry will also watch for regulatory responses: as robots share spaces with human workers at scale, regulators may demand standardized safety reporting, certification processes for orchestration layers, and clearer liability frameworks for automated decision-making.
Operational next steps for teams considering a similar approach
Organizations contemplating a CAL+digital twin strategy should begin with a focused pilot: scan a single facility, build a lightweight twin, and run targeted simulations to answer a specific operational question such as routing improvements or handover redesign. Define measurable KPIs, invest in adapters to connect to existing equipment, and plan for staged deployment so that simulation-validated changes can be rolled out incrementally.
Engage safety and legal early, and design for modularity: keep orchestration logic decoupled from vendor-specific control where possible, so components can be swapped as needs evolve. Finally, partner selection matters — vendors and consultancies with experience in mixed-fleet integration, simulation modeling and edge-cloud orchestration reduce risk and accelerate learning.
Otto Group’s experiments show that coordinated autonomy backed by high-fidelity simulation is a practical path for modernizing fulfillment operations. By turning the warehouse into a reconfigurable, model-driven environment, companies can move from reactive troubleshooting to proactive optimization, and shift human roles toward oversight and strategic control.
Looking ahead, we can expect coordinated autonomy projects to expand from single-site pilots to regional networks, where digital twins interoperate to simulate flows across multiple facilities and transportation nodes, and where learning from one site accelerates improvements across the chain. Continuous simulation, standardized interfaces, and stronger partnerships between robotics vendors, cloud providers and systems integrators will determine which organizations convert early experimentation into scalable operational advantage.