This foundational module processes raw sensor data (e.g., RGB cameras, LIDAR, RADAR) to build a comprehensive understanding of the vehicle's immediate surroundings. Its outputs typically include object detection, object classification (identifying vehicles, pedestrians, cyclists), robust tracking of those objects over time, and precise localization (determining the ego vehicle's exact pose within the relevant coordinate frame).
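To make the tracking step concrete, here is a minimal sketch of greedy nearest-neighbor track association. The `Detection` and `Track` structures, field names, and the `max_dist` threshold are all hypothetical, invented for this illustration rather than drawn from any particular perception stack.

```python
from dataclasses import dataclass, field
from typing import Dict, List
import math

@dataclass
class Detection:
    x: float          # position in the ego/map frame (m)
    y: float
    heading: float    # orientation (rad)
    category: str     # "vehicle", "pedestrian", "cyclist", ...

@dataclass
class Track:
    track_id: int
    history: List[Detection] = field(default_factory=list)

def associate(tracks: Dict[int, Track], detections: List[Detection],
              max_dist: float = 2.0) -> None:
    """Greedily match each existing track to its nearest new detection."""
    unmatched = list(detections)
    for track in tracks.values():
        if not track.history or not unmatched:
            continue
        last = track.history[-1]
        best = min(unmatched,
                   key=lambda d: math.hypot(d.x - last.x, d.y - last.y))
        if math.hypot(best.x - last.x, best.y - last.y) <= max_dist:
            track.history.append(best)
            unmatched.remove(best)
    # Any detection left unmatched spawns a new track.
    next_id = max(tracks, default=-1) + 1
    for det in unmatched:
        tracks[next_id] = Track(next_id, [det])
        next_id += 1
```

A production tracker would use motion-model prediction and optimal assignment (e.g., Kalman filtering with Hungarian matching); the greedy version above only conveys the shape of the problem.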
To navigate in the real world, an AV 1.0 system must predict the behavior of the perceived objects and plan accordingly. Prediction has traditionally relied on simplified kinematic models, predefined behavioral rule sets, or basic machine learning models. While effective in many scenarios, these methods can struggle to capture the full spectrum of human variability and decision-making complexity in dynamic and ambiguous situations. For example, the predicted acceleration of an oncoming vehicle determines whether it is safe to turn left across its path. This is one module where Inverted AI offers a transformative enhancement.
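As an illustration of the simplified kinematic models mentioned above, the sketch below rolls a single agent forward under a constant-velocity assumption; the function name and parameters are invented for this example. Such a model is cheap and often adequate, but it is blind to intent (braking, yielding, lane changes), which is precisely the gap learned behavior models aim to close.

```python
import math
from typing import List, Tuple

def predict_constant_velocity(
    x: float, y: float, heading: float, speed: float,
    horizon_s: float = 3.0, dt: float = 0.1,
) -> List[Tuple[float, float]]:
    """Predict future (x, y) waypoints assuming the agent holds
    its current speed and heading over the whole horizon."""
    steps = int(horizon_s / dt)
    return [
        (x + speed * math.cos(heading) * dt * k,
         y + speed * math.sin(heading) * dt * k)
        for k in range(1, steps + 1)
    ]

# E.g., will the oncoming car occupy the intersection while we turn left?
oncoming_path = predict_constant_velocity(x=40.0, y=0.0,
                                          heading=math.pi, speed=12.0)
```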
Equipped with information about the predicted future states of all observed objects, an AV 1.0 system can generate a trajectory to a navigational goal given some set of constraints (e.g., speed limits, general safety, efficiency). This typically involves a combination of processes such as global pathfinding, local maneuver generation (e.g., lane changes, merges), and collision avoidance. Inverted AI tools specialize in generating realistic trajectories that are reactive to highly dynamic objects in unstructured environments, allowing for massive coverage of traffic scenarios that are difficult to observe during design and testing.
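One common pattern for the local planning step is to sample candidate trajectories and score them against the constraints above. The sketch below shows that idea, penalizing speed-limit violations and near-collisions with predicted obstacle trajectories; all names, cost weights, and thresholds here are illustrative assumptions, not Inverted AI's API.

```python
import math
from typing import List, Tuple

Trajectory = List[Tuple[float, float]]  # (x, y) waypoints at a fixed dt

def trajectory_cost(
    traj: Trajectory,
    goal: Tuple[float, float],
    predicted_obstacles: List[Trajectory],
    dt: float = 0.1,
    speed_limit: float = 15.0,
) -> float:
    """Score one candidate: distance to goal, speed-limit compliance,
    and clearance from each predicted obstacle at matching timesteps."""
    cost = math.hypot(traj[-1][0] - goal[0], traj[-1][1] - goal[1])
    for k in range(1, len(traj)):
        speed = math.hypot(traj[k][0] - traj[k - 1][0],
                           traj[k][1] - traj[k - 1][1]) / dt
        if speed > speed_limit:
            cost += 10.0 * (speed - speed_limit)       # penalize speeding
        for obs in predicted_obstacles:
            if k < len(obs):
                gap = math.hypot(traj[k][0] - obs[k][0],
                                 traj[k][1] - obs[k][1])
                if gap < 5.0:
                    cost += 100.0 / max(gap, 0.1)      # penalize near-collisions
    return cost

def select_trajectory(candidates, goal, predicted_obstacles):
    """Pick the lowest-cost candidate trajectory."""
    return min(candidates,
               key=lambda t: trajectory_cost(t, goal, predicted_obstacles))
```

Note how the quality of `predicted_obstacles` directly bounds the quality of the plan: this is why more realistic behavior prediction pays off downstream in planning.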
The control module translates the planned trajectory into precise, real-time commands (e.g., desired steering angle, acceleration/braking profile) for the vehicle's actuators (steering wheel, brakes, throttle). This ensures the vehicle accurately executes the trajectory determined by the planner.
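As a rough sketch of the actuation step, a longitudinal controller might map the planner's target speed to a throttle/brake command with a PID loop. The class, gains, and sign convention below are assumptions for illustration; real stacks typically pair this with a lateral controller (e.g., pure pursuit) and vehicle-specific command mappings.

```python
class PIDController:
    """Minimal PID loop mapping speed error to a longitudinal command."""

    def __init__(self, kp: float, ki: float, kd: float):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, target: float, measured: float, dt: float) -> float:
        """One control cycle: returns a command from the current error."""
        error = target - measured
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Assumed convention: positive output -> throttle, negative -> brake.
speed_pid = PIDController(kp=0.5, ki=0.05, kd=0.1)
command = speed_pid.step(target=10.0, measured=8.2, dt=0.05)
```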