
Fujitsu’s evolution with AI-RAN
The telecommunications industry is rapidly advancing the role of artificial intelligence (AI) across many aspects of Radio Access Networks (RAN). While 5G has delivered significant capacity and throughput improvements, it has yet to drive enough new revenue to cover the investment behind those gains. By combining AI with the low latency and high speed of 5G, Mobile Network Operators (MNOs) are exploring new AI-driven services that can unlock valuable revenue streams. Fujitsu is a leader in this space and has collaborated with NVIDIA for four years to develop and refine AI-native RAN architectures that are poised to redefine network performance and business models.
Among the most tantalizing AI-driven services are immersive and generative AI applications, such as voice-to-video, enhanced Augmented Reality (AR), and Virtual Reality (VR) experiences, which leverage real-time, low-latency AI processing to deliver hyper-realistic simulations, gaming, and training environments. Moving forward, society will increasingly rely on agentic AI applications that make decisions and take actions without human intervention but require very low latency. These experiences, whether for consumers or enterprises, can drive new revenue through subscription models and enterprise service fees.
In the retail space, AI-enabled smart displays and virtual assistants can analyze customer behavior in real time, providing tailored recommendations and dynamic pricing to boost in-store engagement and sales, a promising source of revenue for MNOs that partner with retailers to provide connectivity and compute resources. AI-driven smart security applications, meanwhile, offer real-time video analytics to identify potential threats across large venues, campuses, and smart cities. By providing instant alerts and analytics, MNOs can tap into security as a service, monetizing high-speed connectivity and AI processing capabilities for public and private sector customers.
However, the path to delivering these kinds of services is not an easy one. Current RAN platforms are purpose-built for RAN only, are under-utilized, and cannot process AI applications as efficiently as accelerated AI computing infrastructure such as Graphics Processing Units (GPUs).
The road to an efficient AI-native RAN
Fujitsu’s journey began by offloading tasks to a look-aside accelerator. This approach gave us early insight into the possible vRAN architecture options and their limitations, which you can learn more about in my blog “Inline vs. lookaside: Which accelerator architecture is best for unlocking the benefits of Open vRAN?”
Over time, Fujitsu advanced to an inline architecture based on the NVIDIA A100X converged accelerator, which allowed us to integrate the Open Radio Unit (O-RU) fronthaul network interface and the virtual Distributed Unit (vDU) Layer 1 acceleration onto a single converged accelerator card. Through these early experiences, we learned how important GPU efficiency and scalability are for a carrier-grade solution that can handle real-world traffic demands. This significant step forward was enabled by our continuous collaboration with NVIDIA: we took the NVIDIA AI Aerial software stack for Layer 1 and optimized and adapted it for commercial-grade, full-stack vRAN. Through this process, Fujitsu helped bring a new level of flexibility and efficiency to virtualized RAN deployments, ultimately leading to Fujitsu’s vCU/vDU software being deployed on the NVIDIA accelerated computing platform in the commercial network of a Tier 1 MNO.
Discovering the power of a centralized RAN architecture
During these early experiences, we explored both distributed RAN (D-RAN) and centralized RAN (C-RAN) architectures for deploying GPU-based RAN solutions and found that, given the computing resources available at the time, C-RAN architectures offered the greatest gains in pooling efficiency and resource optimization.
With centralized architectures, resources could be shared across high-traffic urban areas and lower-traffic suburban sites, reducing the need for excess capacity. This lesson was pivotal, demonstrating that centralizing resources is not only cost-effective but also environmentally advantageous.
Through centralized pooling, Fujitsu was able to reduce resource redundancy by smoothing out capacity needs across sites. The result was a significant improvement in both cost-effectiveness and energy efficiency, an essential benefit for MNOs facing rising energy costs and environmental regulations. Perhaps the most valuable lesson, however, was discovering how to monetize excess capacity to reduce total cost of ownership (TCO).
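To make the pooling effect concrete, here is a minimal sketch using synthetic diurnal traffic profiles (illustrative numbers only, not Fujitsu field data). Because urban and suburban sites peak at different hours, a centralized pool sized for the peak of the aggregate load needs less capacity than the sum of each site’s individual peak:

```python
# Illustrative sketch of the C-RAN pooling gain (synthetic traffic, not field data).
# Each site's load peaks at a different time of day, so a centralized pool is
# dimensioned for the peak of the aggregate rather than the sum of per-site peaks.
import numpy as np

hours = np.arange(24)

def site_load(peak_hour, peak_load):
    # Simple diurnal profile (arbitrary capacity units) peaking at peak_hour.
    return peak_load * 0.5 * (1 + np.cos(2 * np.pi * (hours - peak_hour) / 24))

# Hypothetical cluster: urban sites peak mid-day, suburban sites in the evening.
urban    = [site_load(13, load) for load in (90, 95, 100, 85)]
suburban = [site_load(20, load) for load in (40, 45, 35, 50)]
sites = urban + suburban

sum_of_peaks = sum(s.max() for s in sites)   # D-RAN: every site sized for its own peak
peak_of_sum  = np.sum(sites, axis=0).max()   # C-RAN: one pool sized for the aggregate peak

print(f"D-RAN capacity needed : {sum_of_peaks:6.1f}")
print(f"C-RAN pooled capacity : {peak_of_sum:6.1f}")
print(f"Pooling gain          : {sum_of_peaks / peak_of_sum:4.2f}x")
```

The exact gain depends on how differently the sites’ traffic peaks are spread across the day; the more diverse the profiles, the larger the saving from pooling.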
Because Fujitsu’s open virtual RAN software is hardware agnostic, it can easily be ported to the latest hardware. This flexibility also gives MNOs the ability to scale their compute resources, so the same inline Layer 1 accelerated software can run on smaller GPUs at distributed D-RAN sites. The choice of hardware depends on cost, power, and monetization goals.
Three paths to AI-RAN
Three models of AI integration within RAN have emerged, each offering unique opportunities for MNOs to enhance network operations and revenue potential:
- AI AND RAN: This model is key to cracking the business case for AI-RAN. By selling unused GPU resources on the spot market, MNOs can capture new revenue streams and maximize return on investment. In AI and RAN, Fujitsu is collaborating with NVIDIA to help MNOs balance AI and network workloads on shared GPU resources, optimizing both network efficiency and operational cost while monetizing excess capacity (a simplified sketch of this workload sharing appears after this list).
- AI FOR RAN: Fujitsu’s early work with NVIDIA in AI for RAN showed how AI algorithms can dynamically optimize network performance. For example, by using AI to improve channel estimation, MNOs can reduce interference and enhance signal processing. While this has yet to be deployed in live networks, a proof of concept of the algorithm has shown dramatic improvements in uplink performance.
- AI ON RAN: This model leverages the GPU for real-time AI applications at the network edge, enabling MNOs to bring new services directly to consumers and enterprise customers. At Mobile World Congress 2024, Fujitsu demonstrated the potential for latency-sensitive applications, from digital twins to real-time video analytics, all powered by the same infrastructure used for RAN. In November 2024, SoftBank demonstrated another example as part of its AITRAS solution: integrating AI with Fujitsu’s vRAN software delivered the low latency needed for faster AI decision-making, improving the performance of a robotic dog instructed to follow a human.
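As a rough illustration of the AI-and-RAN model mentioned above, the sketch below models a single shared GPU whose capacity is reserved for the latency-critical RAN workload first, with the leftover offered to AI jobs hour by hour. The capacity units and traffic profile are invented for illustration; this is not the Fujitsu/NVIDIA orchestration software, just a conceptual model of the trade-off:

```python
# Conceptual sketch of "AI and RAN" workload sharing on one GPU (illustrative
# numbers only; this is not the Fujitsu/NVIDIA orchestration software).
from dataclasses import dataclass

GPU_CAPACITY = 100.0  # abstract compute units available per hour

@dataclass
class HourlyPlan:
    hour: int
    ran_demand: float     # compute units the RAN workload needs this hour
    ran_allocated: float  # capacity reserved for RAN (always served first)
    ai_available: float   # leftover capacity that can be sold for AI jobs

def plan_day(ran_demand_by_hour):
    plans = []
    for hour, demand in enumerate(ran_demand_by_hour):
        ran = min(demand, GPU_CAPACITY)  # RAN has priority, capped by the GPU
        plans.append(HourlyPlan(hour, demand, ran, GPU_CAPACITY - ran))
    return plans

# Hypothetical diurnal RAN profile: quiet overnight, busy during the day.
demand = [20 if h < 7 else 85 if 9 <= h <= 18 else 45 for h in range(24)]
for p in plan_day(demand):
    print(f"{p.hour:02d}:00  RAN {p.ran_allocated:5.1f}  spare for AI {p.ai_available:5.1f}")
```

The point of the model is simply that the RAN rarely consumes the whole GPU around the clock, so the spare hours and spare capacity are what an MNO can monetize.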
Amplifying the benefits and improving the business case with extended fronthaul
The more an MNO can pool resources through centralization, the stronger the business case becomes. So, Fujitsu set about optimizing the Layer 1 processing in the vDU. By dramatically reducing the processing time in the vDU, Fujitsu has been able to free up latency budget on the fronthaul, extending the fronthaul range to 50 km. Because this is achieved entirely through optimization in the DU, the solution is O-RAN compliant and can be used with any O-RAN compliant third-party radio. More importantly, increasing the fronthaul range by 67% improves all the benefits of pooling by almost a factor of three, because the area a centralized location can serve, and hence the number of cell sites it can pool, grows with the square of the fronthaul distance.
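The arithmetic behind those two figures can be checked with a quick back-of-the-envelope calculation. Assuming a one-way fiber propagation delay of roughly 5 µs/km and a baseline reach of about 30 km (implied by 50 km being a 67% increase), neither of which is stated in this post, the extension costs about 100 µs of extra one-way delay for the optimized Layer 1 to absorb, while the serveable area grows with the square of the range:

```python
# Back-of-the-envelope check of the extended-fronthaul figures above.
# Assumptions (not stated in the article): ~5 us/km one-way propagation delay in
# fiber, and a ~30 km baseline reach implied by "50 km is a 67% increase".
FIBER_DELAY_US_PER_KM = 5.0
baseline_km, extended_km = 30.0, 50.0

range_increase = extended_km / baseline_km - 1                        # ~0.67 -> "67%"
extra_delay_us = (extended_km - baseline_km) * FIBER_DELAY_US_PER_KM  # delay the faster L1 must absorb
area_gain      = (extended_km / baseline_km) ** 2                     # serveable area grows with range squared

print(f"Range increase      : {range_increase:.0%}")
print(f"Extra one-way delay : {extra_delay_us:.0f} us")
print(f"Pooling-area gain   : {area_gain:.2f}x (almost a factor of three)")
```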
Building a resilient, energy-efficient AI-native vCU/vDU solution
Earlier this year, Fujitsu, in collaboration with NVIDIA, showcased one of the world’s first carrier-grade GPU-accelerated vCU/vDU solutions, combining the NVIDIA CPU-GPU accelerated computing platform, powered by the GH200 Grace Hopper Superchip, and the NVIDIA AI Aerial Layer 1 software libraries with the years of insight gained through Fujitsu’s work with NVIDIA. Integrating RAN and edge AI into a single solution means that low-latency applications, like XR rendering and analytics, can be deployed seamlessly and economically. This setup minimizes deployment costs and accelerates the timeline for delivering next-generation AI services.
Fujitsu’s ongoing evolution with live field trials
As our work with NVIDIA reaches new milestones, we’re ready to take the next step: conducting field trials with live traffic. This critical phase allows us to test our solutions’ resilience and scalability in real-world environments, laying the groundwork for commercial deployment. Live trials will not only validate the optimized performance and energy efficiencies we’ve achieved but will also highlight the revenue-generating potential of Fujitsu’s AI-enabled RAN solutions as MNOs look to the future.
Driving the future of RAN with proven AI innovation
With over four years of collaboration and innovation, Fujitsu is leading the way in AI-RAN, continually adapting to the needs of MNOs in an evolving landscape. Our journey has been defined by groundbreaking advancements, practical insights, and a commitment to developing network solutions that meet today’s challenges and anticipate tomorrow’s opportunities. For an in-depth look at Fujitsu’s latest activities in AI-RAN, read our recent press release about our partnership with SoftBank Corp.
Visit Fujitsu at MWC 25
Hall 2, booth 2G60