Legacy Radar's Limitations and the Case for Reinvention
Automotive radar has been present in production vehicles since the late 1990s, when early adaptive cruise control systems began using 77 GHz millimeter-wave sensors to measure the distance and velocity of leading vehicles. For two decades, these systems performed exactly one task: measuring the range and range rate of objects along the longitudinal axis of the vehicle. They were deliberately simple — low-resolution, low-dimensional, limited to a narrow field of view, and incapable of resolving the vertical position of detected objects.
This simplicity was acceptable for driver-assistance features that operated within well-defined constraints. A car maintaining highway following distance needs to know how far ahead the preceding vehicle is and how fast it is moving. It does not need to know whether the object is a motorcycle, a bridge overpass, a speed bump, or a falling pedestrian. The decision to maintain distance is the same in all cases. But for a fully autonomous system navigating complex urban environments, this level of information is utterly insufficient. An AV that cannot distinguish a pedestrian standing on a corner from the corner of the building behind them cannot drive safely in a city.
The Fourth Dimension: Velocity as First-Class Data
The defining capability of fourth-generation imaging radar — commonly marketed as "4D radar" — is the addition of velocity as a directly measured, spatially-resolved quantity. Conventional radar measures the radial velocity of a detected object as a single aggregate value: the object is approaching at 40 km/h, or receding at 20 km/h. Fourth-generation systems, by using large arrays of antenna elements and sophisticated processing, resolve velocity at each point in the 3D point cloud they generate.
The practical implications are profound. A scene that appears to a conventional radar as a single "blob" of returns — a bus stopped at a pedestrian crossing — resolves under a 4D radar into a static bus body, stationary pedestrians on the pavement, and one pedestrian crossing in front of the vehicle with a measured forward velocity of approximately 1.2 m/s. The ego vehicle's motion planner can act differently on each of these. The capability to measure velocity at point-cloud resolution allows 4D radar to contribute to scene understanding in ways that were previously exclusive to camera and LiDAR systems.
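To make this concrete, the sketch below (Python/NumPy, with illustrative field names and a hypothetical residual threshold) shows the basic ego-motion compensation step that turns per-point radial velocity into a static-versus-moving classification: a static bus body produces exactly the Doppler signature expected from the ego vehicle's own motion, while the crossing pedestrian leaves a residual of roughly 1.2 m/s.

```python
import numpy as np

def flag_moving_points(points_xyz, radial_vel, ego_vel_xyz, threshold=0.5):
    """Separate moving from static returns in a 4D radar point cloud.

    points_xyz : (N, 3) point positions in the sensor frame, metres
    radial_vel : (N,) measured radial (Doppler) velocity per point, m/s,
                 positive when the point is receding from the sensor
    ego_vel_xyz: (3,) ego velocity expressed in the sensor frame, m/s
    threshold  : residual radial speed (m/s) above which a point is
                 treated as genuinely moving rather than static clutter
    """
    # Unit line-of-sight vectors from the sensor to each point
    ranges = np.linalg.norm(points_xyz, axis=1)
    los = points_xyz / ranges[:, None]

    # A static point appears to move at minus the ego velocity, so its
    # expected radial velocity is the line-of-sight component of -ego_vel.
    expected_static = -(los @ ego_vel_xyz)

    # Residual Doppler after ego-motion compensation
    residual = radial_vel - expected_static
    return np.abs(residual) > threshold, residual

# Example: ego moving forward at 10 m/s (+x); one static point ahead and
# one crossing pedestrian carrying ~1.2 m/s of uncompensated radial velocity.
pts = np.array([[30.0, 0.0, 0.5], [20.0, 5.0, 0.0]])
vr = np.array([-10.0, -10.0 * 20.0 / np.linalg.norm([20.0, 5.0]) + 1.2])
moving, residual = flag_moving_points(pts, vr, np.array([10.0, 0.0, 0.0]))
# moving -> [False, True]: the second point stands out as a moving target.
```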
MIMO Antenna Arrays: The Technology Behind the Resolution
The angular resolution of a radar system is fundamentally constrained by its antenna aperture — the physical size of the antenna array relative to the wavelength of the transmitted signal. At 77 GHz, the wavelength is approximately 4 mm. To achieve the angular resolution needed to separate two pedestrians walking side by side at 100 meters — approximately 1 degree — the effective aperture of the antenna array must be substantially larger than what can be achieved with a simple physical array in an automotive form factor.
Multiple Input Multiple Output (MIMO) antenna architectures solve this problem through a technique called virtual aperture synthesis. By transmitting separable (orthogonal) waveforms from multiple spatially separated antenna elements and receiving on multiple independent channels, MIMO systems create a virtual antenna array whose element count is the product of the number of transmit and receive elements. A system with 12 transmit and 16 receive elements creates a virtual array of 192 elements — an order of magnitude more than the physical hardware suggests.
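A rough back-of-envelope calculation illustrates the arithmetic. The sketch below assumes half-wavelength element spacing and, for simplicity, places every virtual element along a single axis; real designs distribute the virtual array across azimuth and elevation, so the result should be read as a best-case figure rather than a datasheet value.

```python
import numpy as np

WAVELENGTH_77GHZ = 3e8 / 77e9   # roughly 3.9 mm

def virtual_array_resolution(n_tx, n_rx, wavelength=WAVELENGTH_77GHZ,
                             spacing_factor=0.5):
    """Estimate MIMO virtual array size and best-case angular resolution.

    Assumes all virtual elements lie on one axis at spacing_factor *
    wavelength spacing, so the result is an idealized single-axis figure.
    """
    n_virtual = n_tx * n_rx                           # virtual element count
    aperture = n_virtual * spacing_factor * wavelength
    resolution_rad = wavelength / aperture            # Rayleigh-style estimate
    return n_virtual, np.degrees(resolution_rad)

n_virtual, res_deg = virtual_array_resolution(n_tx=12, n_rx=16)
print(f"{n_virtual} virtual elements, ~{res_deg:.2f} deg resolution")
# -> 192 virtual elements, ~0.60 deg resolution (single-axis best case)
```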
This is why modern 4D radar chips, such as those from Texas Instruments' AWR3xxx family and Arbe Robotics' Phoenix chipset, achieve angular resolutions of 1–2 degrees in both azimuth and elevation, with point cloud densities exceeding 20,000 points per second. This is still significantly sparser than a modern LiDAR, but it represents an order-of-magnitude improvement over the previous generation and, critically, achieves it at a fraction of the cost and with full weather immunity.
All-Weather Capability: The Physics of Radar in Rain
The fundamental reason radar is experiencing a renaissance in autonomous driving is weather. Millimeter-wave radar signals at 77 GHz are attenuated by precipitation — rain and snow — but the attenuation is orders of magnitude lower than that experienced by LiDAR or cameras in the same conditions. A heavy rainstorm that reduces effective LiDAR range from 250 meters to 80 meters may reduce radar range from 200 meters to 180 meters. Fog, which is catastrophic for optical sensors, has essentially no effect on 77 GHz radar.
"Weather is not an edge case for autonomous vehicles — it is the baseline operating condition for the majority of the world's population for six months of the year. Radar is the only sensor that treats weather as a non-event."
This weather immunity has a compounding effect on the sensor fusion architecture of autonomous systems. When a perception stack relies heavily on LiDAR and camera for its world model, the degradation of those sensors in adverse weather creates uncertainty that cascades through prediction and planning. High-confidence 4D radar data that maintains full-range performance through such conditions can anchor the sensor fusion pipeline with reliable position and velocity measurements even when the optical sensors are degraded, significantly improving overall system robustness.
Radar Versus LiDAR: Not a Competition
The emergence of high-resolution 4D radar has occasionally been framed as a challenge to LiDAR's role in the autonomous vehicle sensor suite. This framing is misleading. The two technologies are complementary rather than competitive: they provide different types of information with different failure modes, and the autonomous vehicles most likely to achieve robust all-weather performance are those that use both, fused intelligently.
LiDAR provides dense, high-resolution 3D geometry at short to medium range with excellent object classification capability. Radar provides direct velocity measurement, long-range detection, and complete weather immunity. A camera provides rich semantic information — color, text, facial expressions — that neither LiDAR nor radar can match. Each sensor compensates for the weaknesses of the others. The question is not which sensor to use; it is how to weight their contributions in the fusion pipeline under different environmental conditions.
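One simple way to express that weighting, sketched below with purely illustrative reliability numbers rather than measured values, is a condition-dependent table that is normalized into fusion weights at runtime; real systems derive such weights from validation data and online self-diagnostics rather than a hand-written lookup.

```python
# Hypothetical per-sensor reliability scores by condition; the numbers are
# illustrative placeholders, not measured performance figures.
SENSOR_RELIABILITY = {
    "clear":      {"camera": 1.0, "lidar": 1.0, "radar": 1.0},
    "heavy_rain": {"camera": 0.3, "lidar": 0.4, "radar": 0.95},
    "dense_fog":  {"camera": 0.1, "lidar": 0.2, "radar": 0.95},
}

def fusion_weights(condition):
    """Normalize per-sensor reliability scores into fusion weights."""
    scores = SENSOR_RELIABILITY[condition]
    total = sum(scores.values())
    return {sensor: score / total for sensor, score in scores.items()}

print(fusion_weights("heavy_rain"))
# Radar dominates the fused estimate when the optical sensors degrade.
```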
Industry Players Shaping 4D Radar
The 4D radar landscape includes both semiconductor vendors and system integrators. At the chip level, Texas Instruments, NXP Semiconductors, and Infineon provide the foundational silicon on which most automotive radar systems are built. At the module level, companies including Arbe Robotics, Vayyar, and Ainstein are commercializing imaging radar products specifically targeting autonomous vehicle applications. Continental, Bosch, and ZF — the dominant Tier 1 automotive suppliers — have all introduced or announced 4D radar products as part of their sensor portfolios for L2+ and L4 platforms.
Notably, Tesla's decision to remove radar from its vehicles in 2021 in favor of camera-only perception was followed by a reversal in 2023, with the announcement of a custom high-resolution radar designed to complement its camera stack. This trajectory — remove, observe performance gaps, reinstall — was a practical demonstration of the challenge of operating without radar's weather immunity and long-range velocity data in a fleet of millions of vehicles across global climate conditions.
Sensor Fusion Strategy: Where 4D Radar Fits
The optimal integration of 4D radar into an autonomous perception stack requires a fusion architecture that can exploit the complementary strengths of each sensor type without being unduly limited by any single sensor's weaknesses. The dominant approach is late fusion at the object level: each sensor contributes a set of detected objects with associated uncertainty estimates, and a fusion algorithm combines these into a unified tracked object list that is more accurate than any individual sensor could produce alone.
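A minimal illustration of that combination step, assuming the data-association problem is already solved, is precision-weighted (inverse-covariance) fusion of each sensor's position estimate for one object. Production trackers apply full Kalman-style updates over time, but the weighting principle is the same; the covariance values below are invented for the example.

```python
import numpy as np

def fuse_object_position(means, covariances):
    """Precision-weighted fusion of per-sensor position estimates for one
    associated object (a minimal stand-in for a full tracker update).

    means       : list of (2,) position estimates, e.g. [radar_xy, lidar_xy]
    covariances : list of (2, 2) measurement covariances, one per sensor
    """
    precisions = [np.linalg.inv(c) for c in covariances]
    fused_cov = np.linalg.inv(sum(precisions))
    fused_mean = fused_cov @ sum(p @ m for p, m in zip(precisions, means))
    return fused_mean, fused_cov

# Radar: tight in range (x), loose in cross-range (y); LiDAR: the reverse.
radar_xy, radar_cov = np.array([42.3, 1.8]), np.diag([0.05, 0.60])
lidar_xy, lidar_cov = np.array([42.9, 1.6]), np.diag([0.30, 0.05])

mean, cov = fuse_object_position([radar_xy, lidar_xy], [radar_cov, lidar_cov])
# The fused covariance is tighter than either sensor's estimate alone.
```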
Increasingly, however, leading research groups are moving toward mid-level fusion: combining raw sensor representations — point clouds from LiDAR and radar, feature maps from cameras — before object detection, allowing learned models to discover fusion strategies that are not constrained by hand-engineered object representations. This approach has shown superior performance under adverse conditions, including sensor degradation and occlusion, though it demands substantially more compute and training data. The next generation of production autonomous driving computers — NVIDIA Thor, Qualcomm Ride — is being architected specifically to support this computationally intensive approach to sensor fusion. 4D radar, with its rich velocity information and weather immunity, will be a first-class input to these systems.
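As a rough sketch of what mid-level fusion looks like in code, the toy PyTorch module below concatenates radar and LiDAR bird's-eye-view feature maps along the channel axis before a shared head, letting the network learn how to weight the two modalities per cell. The channel counts and layer choices are illustrative assumptions, not any particular production architecture.

```python
import torch
import torch.nn as nn

class MidLevelFusion(nn.Module):
    """Toy feature-level fusion: per-sensor BEV feature maps are concatenated
    along the channel axis and passed through a shared detection-head stand-in.
    """
    def __init__(self, radar_ch=32, lidar_ch=64, fused_ch=128, num_classes=4):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(radar_ch + lidar_ch, fused_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        # One score per class per BEV cell
        self.head = nn.Conv2d(fused_ch, num_classes, kernel_size=1)

    def forward(self, radar_bev, lidar_bev):
        # radar_bev: (B, radar_ch, H, W); lidar_bev: (B, lidar_ch, H, W)
        fused = self.fuse(torch.cat([radar_bev, lidar_bev], dim=1))
        return self.head(fused)

model = MidLevelFusion()
scores = model(torch.randn(1, 32, 200, 200), torch.randn(1, 64, 200, 200))
# scores: (1, 4, 200, 200) per-cell class scores over the fused BEV grid.
```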