|Advanced Driver Assistance Systems are technologies or groups of technologies that aid drivers with driving functions, including parking. These technologies include hardware and software and are developed to make driving safer.
|An open-source workflow management platform for automating and scheduling complex workflows, such as data pipelines, machine learning training jobs, and related tasks. Airflow allows RoadMentor to run on the user's servers, eliminating the need for data transfers.
|Automotive Safety Integrity Level. Of the four levels (A, B, C, and D), systems classified as ASIL-D carry the highest risk, so their functional safety requirements are the most stringent. Most ADAS systems must adhere to ASIL-D requirements.
|ASPICE (Automotive SPICE) is a process improvement framework used in the automotive industry to improve the quality and reliability of software development processes.
|Bird's-eye view
|Boresighting is a calibration process used to align the sensor and control systems of an autonomous vehicle.
|A bounding box is a rectangular area surrounding an object in an image or a video frame. It is often used in object detection and tracking algorithms.
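Bounding boxes in detection pipelines are usually compared by intersection-over-union (IoU). A minimal sketch, assuming boxes are given as `(x1, y1, x2, y2)` corner tuples:

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    # corners of the overlap rectangle
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    # clamp to zero when the boxes do not overlap
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)
```

Detectors and trackers typically treat two boxes as the same object when their IoU exceeds a threshold such as 0.5.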
|CANBUS (Controller Area Network) is a data communication protocol used in automobiles to connect and control various electronic systems and components.
|CI/CD (Continuous Integration/Continuous Deployment) is a software development practice where code changes are automatically built, tested, and deployed to production.
|Cloud refers to a network of remote servers hosted on the internet that are used to store, process, and manage data and applications.
|Cloud orchestration refers to the automated management and coordination of cloud resources, such as virtual machines, containers, and storage, to support the deployment and scaling of applications.
|Clustering is a machine learning technique used to group similar data points into clusters based on their features and similarities.
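The classic example of clustering is k-means: alternate between assigning points to their nearest centroid and recomputing each centroid as its cluster mean. A minimal pure-Python sketch on 2-D points (the data and `k` here are illustrative):

```python
import math
import random

def kmeans(points, k, iters=20, seed=0):
    """Minimal k-means on 2-D points; returns the final centroids."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign each point to its nearest centroid
            i = min(range(k), key=lambda c: math.dist(p, centroids[c]))
            clusters[i].append(p)
        for i, cl in enumerate(clusters):
            if cl:  # recompute the centroid as the cluster mean
                centroids[i] = tuple(sum(v) / len(cl) for v in zip(*cl))
    return centroids
```

Production systems normally use a library implementation (e.g. scikit-learn's `KMeans`), which adds smarter initialization and convergence checks.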
|Computer vision is a field of artificial intelligence that deals with the ability of computers to interpret and understand visual information from the world, such as images and videos.
|The connection layer refers to the software and hardware components responsible for establishing and maintaining a communication connection between different devices or systems.
|Control systems are the hardware and software components responsible for controlling the movements and actions of an autonomous vehicle.
|Cross-view localization is the process of determining the location and orientation of an object from multiple viewpoints or cameras.
|CSI-2 (Camera Serial Interface 2) is a high-speed serial data interface used for transmitting image and video data between cameras and image processing systems.
|A cubemap is a representation of a 3D environment as a series of 6 square images, each depicting a view of the environment from a different direction.
|DAG (Directed Acyclic Graph)
|A DAG (Directed Acyclic Graph) is a type of graph structure in which the edges have a direction and the nodes are connected in such a way that there are no cycles.
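Because a DAG has no cycles, its nodes can always be ordered so every edge points forward; this is how workflow engines such as Airflow decide task execution order. A sketch using Kahn's algorithm, with made-up pipeline task names:

```python
from collections import deque

def topo_sort(edges):
    """Kahn's algorithm: return the nodes of a DAG in dependency order."""
    nodes = {n for e in edges for n in e}
    indeg = {n: 0 for n in nodes}
    adj = {n: [] for n in nodes}
    for u, v in edges:
        adj[u].append(v)
        indeg[v] += 1
    # start from nodes with no incoming edges (sorted for determinism)
    q = deque(sorted(n for n in nodes if indeg[n] == 0))
    order = []
    while q:
        u = q.popleft()
        order.append(u)
        for v in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                q.append(v)
    if len(order) != len(nodes):
        raise ValueError("graph has a cycle, so it is not a DAG")
    return order
```

If the graph contained a cycle, some node would never reach in-degree zero, which is exactly why workflow dependencies must form a DAG.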
|Data balancing refers to the process of ensuring that the training data used to develop machine learning models is evenly distributed and representative of the population.
|Data bias refers to the systematic error or unfairness in a machine learning model caused by a skewed distribution of data or features.
|Data fusion refers to the process of combining multiple sources of data to produce a representation that is more accurate, complete, and consistent than any single source can provide.
|A sequence of data processing elements, connected in a specific order, designed to extract and transform data from various sources to meet the requirements of a particular data analysis or machine learning task.
|Deep Learning Accelerator
|A hardware component designed to speed up the processing of deep learning algorithms. These accelerators often use specialized hardware, such as GPUs or TPUs, to efficiently process large amounts of data and perform matrix operations, which are essential for deep learning tasks.
|A set of practices and tools aimed at improving collaboration and communication between development and operations teams, with the goal of reducing the time and effort required to release and maintain software.
|A process of transforming a high-dimensional data set into a lower-dimensional representation, while retaining the most important information. This can be done to reduce the complexity of the data, improve computational efficiency, and enhance the interpretability of the results.
|Direct Memory Access
|A technique used in computer architecture to transfer data from one memory location to another without involving the processor. This can significantly improve the performance of data transfer, as the processor can continue to perform other tasks while the data is being transferred.
|A deviation from the true or intended form or shape of an image, caused by factors such as lens aberrations, sensor imperfections, or processing errors.
|Distributed File System
|A file system that allows multiple computers to access and manage a shared collection of files, with the goal of providing scalability, reliability, and performance.
|A systematic error or bias that accumulates over time, affecting the accuracy of a system or measurement.
|The point where data is generated and processed at the source, rather than being transmitted to a central location for processing. This allows for low latency, improved security, and reduced bandwidth requirements.
|A type of image projection that maps the surface of a sphere onto a flat plane by using latitude and longitude directly as image coordinates. This is commonly used in 360-degree panoramic images and virtual reality applications.
|A matrix that describes the position and orientation of a camera or sensor relative to a reference coordinate system.
|A mathematical representation of the unique characteristics of an image or data point that can be used for tasks such as image recognition or object detection.
|A set of numerical values that represents the features of a data point, used as input to machine learning algorithms.
|A technique used to identify and track unique devices or systems, based on a set of hardware or software characteristics.
|A high-speed, low-latency video interface used in automotive applications to transmit video signals between cameras and displays.
|A component of a map or localization system that represents the geometry and topology of the environment, including information about the shape and position of objects, roads, and landmarks.
|A system designed to capture, store, manipulate, analyze, manage, and present all types of geographical data.
|Global Pose Estimation
|The process of determining the position and orientation of a device or system in a global reference frame, typically using signals from multiple sensors and measurements, such as GNSS, IMU, and camera data.
|A system of satellites and ground stations used to provide navigation and positioning information, allowing devices to determine their position, velocity, and time using signals from GPS, GLONASS, Galileo, and other satellite constellations.
|Ground control refers to a set of pre-determined reference points on the ground used as a reference for testing and evaluating autonomous vehicles and other mapping systems. These points serve as a basis for comparison with the data collected by autonomous vehicles to validate their accuracy and reliability.
|Ground truth is the actual or correct information or data used as a reference for evaluating the accuracy of a model or system. This data provides a benchmark for comparing the results of a machine learning or autonomous driving system against a real-world scenario.
|GSML is a standard for encoding geospatial information, including location, topology, and attributes. It is used to represent and store geographical data in a computer-readable format, allowing for easy sharing and analysis of geospatial information between different systems and applications.
|Hardware abstraction refers to the process of hiding the underlying hardware details from the software, allowing the software to interact with the hardware through a set of well-defined interfaces. This separation of hardware and software enables systems to be developed and tested independently, making them more flexible and easier to maintain.
|Hardware Decoders (H264, MPEG, JPEG)
|Hardware decoders are specialized circuits designed to decode video and image data encoded in standard video compression formats such as H264, MPEG, and JPEG. These decoders are used in autonomous vehicles and ADAS systems to process large amounts of image and video data from sensors, cameras, and other sources in real-time.
|Hardware Encoders (H264, MPEG, JPEG)
|Hardware encoders are specialized circuits designed to encode video and image data in standard video compression formats such as H264, MPEG, and JPEG. These encoders are used in autonomous vehicles and ADAS systems to compress large amounts of image and video data before transmission to other systems or storage devices.
|An HD map is a high-definition map used in autonomous driving and advanced driver-assistance systems (ADAS). HD maps provide detailed information about the road environment, including lane markings, road geometry, road signs, and other relevant information. This information is used by autonomous vehicles to make decisions and navigate their surroundings.
|Heuristic learning is a type of machine learning that uses rule-based systems and expert knowledge to make decisions. Unlike other forms of machine learning, heuristic learning does not rely on large amounts of data or complex algorithms to make predictions. Instead, it uses a set of predefined rules and expert knowledge to solve problems and make decisions.
|HMI (Human-Machine Interface) is the interface between a human operator and a machine or system. In autonomous driving, the HMI is the interface between the driver and the vehicle, allowing the driver to monitor and control the vehicle's systems and functions.
|ICP (Iterative Closest Point) is an algorithm used in computer vision and robotics for estimating the relative pose between two point clouds. It is commonly used in autonomous vehicles and ADAS systems for tasks such as localization, mapping, and sensor fusion.
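The core ICP loop alternates between matching each source point to its nearest target point and updating the pose from those matches. A deliberately simplified, translation-only sketch on 2-D points (real ICP also estimates rotation, usually via an SVD-based step):

```python
import math

def icp_translation(source, target, iters=10):
    """Toy translation-only ICP on 2-D point lists.

    Returns the (tx, ty) shift that best aligns `source` onto `target`.
    """
    tx, ty = 0.0, 0.0
    for _ in range(iters):
        moved = [(x + tx, y + ty) for x, y in source]
        # nearest-neighbour correspondences (brute force)
        pairs = [(p, min(target, key=lambda q: math.dist(p, q))) for p in moved]
        # update the translation by the mean residual
        dx = sum(q[0] - p[0] for p, q in pairs) / len(pairs)
        dy = sum(q[1] - p[1] for p, q in pairs) / len(pairs)
        tx, ty = tx + dx, ty + dy
    return tx, ty
```

With the full rotation-plus-translation update and a k-d tree for the nearest-neighbour search, this same structure scales to the LiDAR point clouds used for vehicle localization.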
|Image stitching is the process of combining multiple images into a single, seamless image. In autonomous vehicles, image stitching is used to create panoramic views from multiple camera inputs, allowing for a more complete view of the vehicle’s surroundings.
|INS (Inertial Navigation System) is a navigation system that uses accelerometers and gyroscopes to determine the position and orientation of a vehicle. It is commonly used in autonomous vehicles and ADAS systems as a backup to GNSS (Global Navigation Satellite System), or as a primary navigation system in environments where GNSS signals are unavailable or degraded.
|An intrinsic matrix is a 3×3 matrix in computer vision that represents the intrinsic parameters of a camera, such as focal length and principal point. It is used to project 3D points in the world onto a 2D image plane.
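The intrinsic parameters define the pinhole projection: a camera-frame point (X, Y, Z) lands at pixel u = fx·X/Z + cx, v = fy·Y/Z + cy. A minimal sketch (the focal lengths and principal point below are illustrative values, not from any particular camera):

```python
def project(point_3d, fx, fy, cx, cy):
    """Project a 3-D point in the camera frame to pixel coordinates
    using the pinhole model encoded by the intrinsic matrix
    [[fx, 0, cx], [0, fy, cy], [0, 0, 1]]."""
    X, Y, Z = point_3d
    if Z <= 0:
        raise ValueError("point is behind the camera")
    return fx * X / Z + cx, fy * Y / Z + cy
```

Libraries such as OpenCV perform the same projection (plus lens-distortion terms) in `cv2.projectPoints`.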
|ISO 26262 is an international standard for the functional safety of road vehicles. It sets the requirements for the development and production of systems and components in vehicles with regards to electrical and electronic systems.
|Lever Arm Transformation
|A lever arm transformation is a mathematical transformation used in robotics and autonomous vehicles to convert between coordinate frames. It relates the position of an object in one frame of reference to its position in another frame of reference.
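In practice the lever arm enters as a rigid-body transform: a point measured in the sensor frame is rotated by the sensor's mounting orientation and offset by its mounting position to land in the vehicle body frame. A 2-D sketch with illustrative mounting values (real systems use full 3-D rotations, typically as matrices or quaternions):

```python
import math

def sensor_to_body(p_sensor, lever_arm, yaw):
    """Map a 2-D point from the sensor frame into the vehicle body frame.

    `lever_arm` is the sensor's mounting offset in the body frame and
    `yaw` its mounting rotation in radians.
    """
    c, s = math.cos(yaw), math.sin(yaw)
    x, y = p_sensor
    # rotate into the body frame, then add the mounting offset
    return (c * x - s * y + lever_arm[0],
            s * x + c * y + lever_arm[1])
```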
|LiDAR (Light Detection and Ranging) is a remote sensing technology that uses laser light to measure distances and generate 3D point clouds of the environment. LiDAR is widely used in autonomous vehicles for perception and localization.
|LiDAR odometry is a technique used in autonomous vehicles to estimate the vehicle’s pose (position and orientation) by analyzing the changes in the LiDAR point cloud over time.
|Localization is the process of determining the position and orientation of an autonomous vehicle within a map of its environment. This is a crucial aspect of autonomous driving, as it allows the vehicle to understand its surroundings and make decisions about its trajectory.
|Loop closure is a term used in simultaneous localization and mapping (SLAM) to describe the process of detecting when an autonomous vehicle has returned to a previously visited location. This information is used to improve the accuracy of the map and localization estimates.
|Machine learning is a branch of artificial intelligence that deals with the design and development of algorithms that can learn from and make predictions on data. Machine learning is widely used in autonomous vehicles for tasks such as object detection, scene understanding, and path planning.
|Maps & Localization
|Maps and localization are crucial components in autonomous vehicles. Maps provide the vehicle with a model of its environment, while localization allows the vehicle to determine its position and orientation within that environment.
|A microcontroller is a small computer on a single integrated circuit that is designed to control a specific device or system. Microcontrollers are widely used in autonomous vehicles for tasks such as sensor control, motion control, and interface with other systems.
|MLflow is an open-source platform for managing the end-to-end machine learning process, including experimentation, reproducibility, and deployment. It helps data scientists and engineers organize their work and track the progress of their models.
|An ML model is a mathematical representation of a system or process learned from data. ML models are trained on data and used to make predictions or decisions in applications such as autonomous vehicles.
|An ML pipeline is a series of steps that are executed in order to train an ML model and make predictions with it. The steps may include data preprocessing, feature extraction, model training, and evaluation.
|MLOps, or machine learning operations, is the set of practices and processes that enable the effective deployment and operation of ML models in production environments, such as autonomous vehicles. It involves a range of tasks such as model deployment, monitoring, and maintenance.
|Model bias refers to the systematic error that is introduced in a machine learning model due to incorrect or incomplete assumptions about the underlying data. It can lead to incorrect predictions and result in unfair or harmful outcomes.
|Model training is the process of developing a machine learning model by feeding it a large set of labeled data and adjusting the model’s parameters to minimize prediction error. The goal of model training is to obtain a model that generalizes well to unseen data.
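The "adjusting the model's parameters to minimize prediction error" step is usually gradient descent. A minimal sketch fitting a one-variable linear model y = w·x + b to toy data by descending the mean-squared-error gradient (learning rate and epoch count are illustrative):

```python
def train_linear(xs, ys, lr=0.01, epochs=2000):
    """Fit y = w*x + b by gradient descent on mean squared error."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        # gradients of MSE with respect to w and b
        gw = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        gb = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * gw
        b -= lr * gb
    return w, b
```

Training a deep network follows the same loop, with backpropagation computing the gradients for millions of parameters instead of two.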
|Motion control refers to the system that manages the movement of autonomous vehicles, ensuring that they move smoothly and safely according to a given path.
|Multipath refers to the interference that can occur in GPS and other navigation systems when signals from multiple sources reach the receiver, causing confusion or incorrect readings.
|OAuth2 is an open standard for authorization that enables applications to access resources from an API on behalf of a user, without requiring the user to reveal their credentials.
|Object detection is a computer vision task that involves detecting instances of semantic objects of a certain class (such as people, buildings, or cars) in digital images or videos.
|Object tracking is a computer vision task that involves continuously tracking the movements of an object in a video stream.
|In the context of machine learning, online refers to algorithms that process data in real-time, updating their models as new data becomes available.
|Path planning is the process of determining the optimal path for an autonomous vehicle to follow, considering factors such as the vehicle’s current position, the road network, traffic, and obstacles.
|Perception refers to the process of extracting meaningful information from sensor data, such as cameras or LiDAR, in order to build a representation of the environment.
|Permission scopes are the level of access that an application is granted to a user’s data, as defined by OAuth2.
|Photogrammetry is the process of deriving measurements from photographs. In the context of autonomous driving, it can be used to build a 3D map of the environment from a series of 2D images.
|Planning refers to the process of determining a sequence of actions for an autonomous vehicle to execute in order to achieve a specific goal, such as reaching a destination or avoiding obstacles.
|A point cloud is a set of points in 3D space that represent the geometry of an object or a scene. In the context of autonomous driving, LiDAR sensors are often used to generate point clouds of the environment.
|Point Cloud Segmentation
|Point cloud segmentation is the process of dividing a point cloud into multiple segments, each representing a different object or surface in the scene.
|Querying refers to the process of retrieving data from a database or other storage system using a query language, such as SQL.
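A self-contained sketch using Python's standard-library `sqlite3` module; the `detections` table and its rows are hypothetical examples, not part of any system described here:

```python
import sqlite3

# In-memory database with a hypothetical `detections` table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE detections (frame INTEGER, label TEXT, score REAL)")
conn.executemany(
    "INSERT INTO detections VALUES (?, ?, ?)",
    [(1, "car", 0.92), (1, "pedestrian", 0.55), (2, "car", 0.88)],
)

# Query: high-confidence detections, newest frame first.
rows = conn.execute(
    "SELECT frame, label FROM detections WHERE score > 0.8 ORDER BY frame DESC"
).fetchall()
```

Parameterized queries (the `?` placeholders) are used instead of string formatting to avoid SQL injection.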
|Real-time systems are systems that must respond to events within a strict time constraint, often in the order of milliseconds. Autonomous vehicles are often considered real-time systems as they must quickly and accurately process sensor data to make decisions.
|RoadControl refers to the control that an autonomous vehicle has over its own motion, as well as the motion of other vehicles on the road.
|A road graph is a mathematical representation of a road network, including information about the location, shape, and connectivity of roads.
|RoadMentor is a term used to describe a system or algorithm that provides guidance or advice to an autonomous vehicle, helping it navigate its environment.
|The SAE (Society of Automotive Engineers) International has defined a series of levels (0-5) to describe the capability and functionality of autonomous vehicles, ranging from no automation to full automation.
|SBET (Smoothed Best Estimate of Trajectory) refers to the post-processed trajectory of a moving platform — its position and orientation in three-dimensional space over time — typically produced by smoothing combined GNSS and IMU data.
|Scene segmentation is the process of dividing an image or video into multiple segments based on its content, such as objects, background, or foreground.
|SDK (Software Development Kit) refers to a set of tools, libraries, and documentation that software developers can use to create applications for a particular platform or technology.
|Segmentation is the process of dividing an image, video, or point cloud into multiple segments based on its content, such as objects, background, or foreground.
|Semantics refers to the study of meaning in language and symbols, including the meaning of words and phrases, as well as the relationships between them.
|Sensor calibration is the process of adjusting the readings of a sensor so that they accurately represent the physical quantity being measured.
|Sensor fusion refers to the process of combining the readings from multiple sensors to produce a more accurate and robust representation of the environment.
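A simple instance of this idea is inverse-variance weighting: each sensor's reading is weighted by how little noise it has, and the fused estimate is both more accurate and has lower variance than any single input. A sketch for scalar measurements (full systems generalize this to vectors via Kalman filters):

```python
def fuse(measurements):
    """Inverse-variance weighted fusion of scalar sensor readings.

    `measurements` is a list of (value, variance) pairs; less noisy
    sensors receive proportionally more weight.
    """
    weights = [1.0 / var for _, var in measurements]
    total = sum(weights)
    value = sum(w * v for (v, _), w in zip(measurements, weights)) / total
    variance = 1.0 / total  # fused variance is below every input variance
    return value, variance
```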
|A signal bus refers to a data transmission system that allows multiple sensors, controllers, and other electronic components to exchange information and control signals.
|SLAM (Simultaneous Localization and Mapping) is a technique used in autonomous systems, such as robots or self-driving cars, to build a map of the environment and simultaneously determine the position of the system within it.
|Software abstraction refers to the process of hiding the implementation details of a software system and exposing only the essential functionality to the user.
|Splines (2D & 3D)
|Splines are mathematical functions that can be used to interpolate or approximate a set of data points. In 2D and 3D, splines can be used to represent smooth curves and surfaces.
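A common spline building block is the cubic Bézier segment, which can be evaluated by repeated linear interpolation (De Casteljau's algorithm). A 2-D sketch with illustrative control points:

```python
def bezier(p0, p1, p2, p3, t):
    """Evaluate a cubic Bezier segment at t in [0, 1] via De Casteljau.

    Control points are 2-D tuples; chaining such segments with matching
    endpoints and tangents yields a smooth spline curve.
    """
    def lerp(a, b, t):
        return tuple(ai + (bi - ai) * t for ai, bi in zip(a, b))
    # three rounds of linear interpolation collapse 4 points to 1
    a, b, c = lerp(p0, p1, t), lerp(p1, p2, t), lerp(p2, p3, t)
    d, e = lerp(a, b, t), lerp(b, c, t)
    return lerp(d, e, t)
```

The same scheme extends to 3-D simply by using 3-tuples for the control points, which is how smooth road centerlines and lane boundaries can be represented.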
|Structure from Motion
|Structure from Motion (SfM) is a technique used in computer vision to estimate the 3D structure of a scene from a sequence of 2D images.
|A task queue refers to a data structure that stores a list of tasks to be performed, typically by worker nodes in a distributed computing system.
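A minimal sketch with Python's standard-library `queue` and `threading` modules, using a `None` sentinel per worker to signal shutdown (distributed systems such as Celery follow the same pattern across machines):

```python
import queue
import threading

tasks = queue.Queue()
results = []
lock = threading.Lock()

def worker():
    while True:
        item = tasks.get()
        if item is None:          # sentinel: shut this worker down
            tasks.task_done()
            break
        with lock:                # guard the shared results list
            results.append(item * item)
        tasks.task_done()

threads = [threading.Thread(target=worker) for _ in range(3)]
for t in threads:
    t.start()
for n in range(5):
    tasks.put(n)
for _ in threads:                 # one sentinel per worker
    tasks.put(None)
tasks.join()                      # block until every task is processed
for t in threads:
    t.join()
```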
|A test set is a collection of data used to evaluate the performance of a machine learning model, typically after it has been trained on a separate training set.
|A training set is a collection of data used to train a machine learning model, typically by adjusting the model parameters so that it can accurately predict the outcomes for the training data.
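The split between the two sets above is usually done by shuffling once and cutting at a fixed ratio. A minimal sketch (the 80/20 ratio and fixed seed are illustrative defaults):

```python
import random

def train_test_split(data, test_ratio=0.2, seed=0):
    """Shuffle a dataset and split it into (training, test) lists."""
    items = list(data)
    random.Random(seed).shuffle(items)  # fixed seed => reproducible split
    n_test = int(len(items) * test_ratio)
    return items[n_test:], items[:n_test]
```

Keeping the test set untouched during training is what makes its error a fair estimate of performance on unseen data.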
|Undistortion refers to the process of removing distortions from an image, such as lens distortion or perspective distortion.
|Vectorization refers to the process of converting data into a vector format, which is a sequence of numbers that can be processed mathematically.
|Vehicle cognition refers to the ability of a vehicle to sense and understand its environment, including the position and behavior of other vehicles, pedestrians, and road infrastructure.
|Vision Positioning System
|A vision positioning system is a type of navigation system that uses visual information, such as images or video, to determine the position and orientation of a vehicle.
|Visual Inertial Odometry
|A technique used in autonomous driving for estimating the position and orientation of a vehicle in real-time, using data from cameras and Inertial Measurement Units (IMUs). VIO combines visual information from cameras with inertial information from IMUs to provide a more accurate and robust estimate of the vehicle’s pose compared to using either type of sensor alone.
|A term used in computer graphics and computer vision to represent a 3D space that is divided into small cubes or “voxels”. In autonomous driving, voxels are used to represent the 3D environment surrounding the vehicle, and can be used for tasks such as 3D object detection and scene segmentation.
|Computing resources used to perform the computationally intensive tasks involved in training machine learning models. These can be physical machines, virtual machines, or containers in a cloud environment.
|In a distributed computing environment, a worker node is a single machine that performs a specific task as part of a larger operation. In the context of machine learning, worker nodes can be used to perform individual tasks in parallel during the training process, to speed up the overall time required to train a model.
|Workflow Management System
|A software system that is used to manage and automate the flow of work in a specific process, for example, the entire process of training a machine learning model, from data preparation and model selection to deployment and maintenance.
|In the context of autonomous driving, a world model is a digital representation of the environment surrounding the vehicle. This model can include information about the road network, the locations of buildings, traffic signs, and other objects of interest. The world model is used by the autonomous driving system to make decisions about navigation and to estimate the vehicle’s position and orientation.