Physical AI Pilot at Bordeaux Airport: Outsight Deploys in Hall A

Michael Johnson

Outsight has begun piloting its Physical AI platform at France’s Bordeaux Airport, initiating a trial across key zones in Hall A.

Physical AI is a form of artificial intelligence designed to understand and operate in real-world, physical environments by turning sensor signals into a continuously updated view of what is happening in a place. Unlike general AI that mainly works with text, images, or historical datasets in isolation, Physical AI is anchored to space and time, emphasizing live perception, measurement, and decision support in a specific physical setting.

In practice, this makes it well-suited to environments where conditions change minute by minute and where decisions depend on what is happening on the ground, not just what is planned on paper.

Improving Passenger Flow and Operations with Automation

This program fits the airport’s ongoing drive for continuous improvement, targeting smoother passenger movement, workflow efficiency, and stronger operational performance.

Serving close to six million travelers each year on 92 routes, Bordeaux Airport puts experience quality and efficiency at its core. With a constrained footprint, the team prioritizes:

- Maximizing existing space
- Dynamic queue control
- Anticipating bottlenecks

More broadly, the significance of Physical AI in airports and similar venues is that it can help teams move from periodic observation to consistent, objective measurement, improving situational awareness and supporting faster operational adjustments.

Real-Time Modeling and Artificial Intelligence in the Terminal

To deepen insight into passenger movement and ground decisions in objective data, the airport is testing a Physical AI system that maps terminal activity live.

Designed by Outsight, the approach builds a live digital twin (Motional Digital Twin) for the terminal, offering continuous 3D visibility of traveler motion in targeted zones such as security screening and other mission-critical areas.

Sensor-derived flows are recorded in a shared spatial model with strict anonymization, enabling:

- Queue buildup analysis
- Density variation analysis
- Area utilization analysis
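To make the kinds of metrics listed above concrete, here is a minimal sketch of how zone-level density could be derived from anonymized position samples. This is an illustration only, not Outsight's actual method; the function name, zone layout, and coordinates are all hypothetical.

```python
from collections import defaultdict

def zone_density(positions, zones):
    """Estimate persons per square metre for each rectangular zone.

    positions: anonymized (x, y) points in metres, one per detected person
    zones: dict mapping zone name -> (xmin, ymin, xmax, ymax) in metres
    """
    counts = defaultdict(int)
    for x, y in positions:
        for name, (x0, y0, x1, y1) in zones.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                counts[name] += 1
    # Divide each zone's head count by its floor area.
    return {
        name: counts[name] / ((x1 - x0) * (y1 - y0))
        for name, (x0, y0, x1, y1) in zones.items()
    }

# Hypothetical security-screening zone of 10 m x 5 m with four detections,
# one of which falls outside the zone.
zones = {"security": (0.0, 0.0, 10.0, 5.0)}
positions = [(1.0, 1.0), (2.0, 3.0), (8.0, 4.0), (12.0, 1.0)]
print(zone_density(positions, zones))  # 3 people / 50 m^2 = 0.06 per m^2
```

Tracking the same figure over time would give the queue-buildup and density-variation views described above.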

At a high level, Physical AI systems typically follow a workflow: sensors capture signals from the environment, data is synchronized and cleaned, AI models infer movement and patterns, and the results are delivered as metrics and alerts that operators can use to adapt staffing, lane configurations, and process timing. In some deployments, the output also feeds automated responses; in others it remains decision-support only, depending on operational needs.
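The capture, clean, infer, deliver loop above can be sketched in a few lines. Everything here is hypothetical (the `Frame` type, the bounds, the capacity threshold); real deployments would use vendor SDKs and calibrated sensor data.

```python
import time
from dataclasses import dataclass

@dataclass
class Frame:
    timestamp: float
    points: list  # anonymized (x, y) detections for one sensor frame

def capture(raw_feed):
    """Stamp raw sensor output with a capture time (synchronization step)."""
    return Frame(timestamp=time.time(), points=list(raw_feed))

def clean(frame, bounds=(0.0, 0.0, 50.0, 30.0)):
    """Drop detections outside the calibrated terminal area (cleaning step)."""
    x0, y0, x1, y1 = bounds
    frame.points = [(x, y) for x, y in frame.points
                    if x0 <= x <= x1 and y0 <= y <= y1]
    return frame

def infer(frame, zone_capacity=40):
    """Turn detections into an occupancy metric and an alert flag."""
    occupancy = len(frame.points)
    return {"occupancy": occupancy, "alert": occupancy > zone_capacity}

# One simulated frame: two valid detections plus one out-of-bounds noise point.
reading = [(1.2, 3.4), (5.0, 6.1), (99.0, 2.0)]
metrics = infer(clean(capture(reading)))
print(metrics)  # {'occupancy': 2, 'alert': False}
```

In a live system this loop would run continuously, with the metrics dictionary feeding dashboards or alerting rules.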

Key components of a Physical AI system commonly include sensing hardware (for example, 3D sensors), time and location alignment, a data pipeline for ingestion and processing, AI and optimization algorithms for inference, and an interface that turns outputs into operational insights. Where automation is required, an additional layer may connect the insights to actuators or control points, such as dynamic signage, access gates, or workflow tooling.
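The optional actuation layer mentioned above can be illustrated with a small routing function: the same alert either drives a control point (here, a dynamic sign) or stays decision-support only. The `route_alert` function and signage interface are invented for this sketch.

```python
def route_alert(metrics, signage=None):
    """Route an alert to a control point if automation is enabled,
    otherwise return advice for human operators."""
    if not metrics.get("alert"):
        return "no action"
    message = "Please use Lane B"
    if signage is not None:
        signage.display(message)       # automated response path
        return "signage updated"
    return f"advise staff: {message}"  # decision-support-only path

class FakeSignage:
    """Stand-in for a dynamic signage controller."""
    def __init__(self):
        self.shown = []
    def display(self, msg):
        self.shown.append(msg)

board = FakeSignage()
print(route_alert({"alert": True}, signage=board))  # signage updated
print(route_alert({"alert": True}))                 # advise staff: Please use Lane B
```

Keeping the two paths behind one interface is what lets a deployment start as decision support and add automation later without reworking the analytics.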

This approach differs from traditional AI, which is often optimized for offline prediction or classification on pre-collected datasets, because the goal is reliable, real-time understanding tied to a specific physical context. It also differs from embodied AI, which typically focuses on an autonomous agent (such as a robot) learning to act; Physical AI can be deployed without a robot, concentrating instead on measuring and interpreting real-world activity across a space.

Beyond airports, similar systems are used across industries such as retail (store occupancy and dwell time), logistics (dock and yard flow visibility), smart buildings (space management), transportation hubs (crowding management), and public venues (safety and capacity monitoring).

In manufacturing, Physical AI can help optimize line balance, material movement, and workstation utilization by revealing how people and assets move through production areas. In healthcare, it can support operational improvements such as understanding movement patterns across entrances and corridors, reducing congestion around critical departments, and improving space planning while maintaining strong privacy and compliance requirements.

Developing and deploying Physical AI also involves challenges and considerations, including sensor placement and calibration, handling occlusions and complex layouts, ensuring low-latency processing, integrating with existing operational systems, and maintaining reliability as environments change. Operationally, success often depends on clear KPIs, staff adoption, cybersecurity, and a deployment plan that minimizes disruption.

Ethical and social considerations include privacy protections, transparency about what is measured, governance over retention and access, and ensuring systems are not used in ways that enable identification or unfair profiling. There are also workplace implications, such as how monitoring is perceived by staff and how insights are used in performance management versus process improvement.

Future developments in Physical AI are expected to include stronger edge processing, richer multi-sensor fusion, more standardized digital-twin integration, and better scenario simulation so operators can test changes virtually before applying them in live operations. As deployments scale, trends also point toward more robust privacy-preserving analytics and clearer operational governance models.
