Akamai outlines compute continuum for enterprise AI at the edge

Enterprise adoption of generative artificial intelligence is moving beyond the experimental stage, and technology infrastructure providers are now focusing on the practical constraints of deploying AI at the network edge. Akamai Technologies, a major content delivery and cloud services company, has outlined what it describes as a “continuum of compute” required to support enterprise AI workloads outside traditional data centers.

Joseph Glover, a senior figure at Akamai, stated that the early phases of generative AI use in enterprise environments were marked by experimentation. Companies built models, created demonstrations to validate use cases, and ran tests to determine whether new AI tools could perform useful work. The next phase, according to Glover, requires organizations to confront and adapt to the constraints of placing AI in edge environments.

What the continuum of compute means

The concept of a continuum of compute refers to the distribution of processing power across a range of locations, from centralized cloud data centers to localized edge devices. For enterprise AI, this means that not all computing tasks need to be sent to a central server. Some processing can occur at the edge, closer to where data is generated and where decisions must be made in real time.
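
As a rough illustration of that idea, the short Python sketch below routes a request to a local device, a nearby edge site, or a central cloud data center depending on its latency budget and whether the network is reachable. The function name, tier labels, and thresholds are hypothetical assumptions for illustration and are not drawn from any Akamai product.

# Illustrative sketch only: a simple dispatcher across the compute continuum.
# Names and thresholds are hypothetical, not an Akamai API.

def choose_compute_tier(latency_budget_ms: float, cloud_reachable: bool) -> str:
    """Pick a processing tier for one AI request given its latency budget."""
    if latency_budget_ms < 50 or not cloud_reachable:
        return "edge-device"      # real-time or offline work stays local
    if latency_budget_ms < 250:
        return "regional-edge"    # nearby point of presence
    return "central-cloud"        # latency-tolerant or batch workloads

print(choose_compute_tier(20, cloud_reachable=True))     # -> edge-device
print(choose_compute_tier(1000, cloud_reachable=True))   # -> central-cloud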

Akamai’s position highlights the need for seamless integration of computing resources across these locations. This approach allows AI models to run efficiently regardless of where they are deployed, whether on a factory floor, in a retail outlet, or within a telecommunications network. The company argues that a rigid, centralized model is not suitable for many enterprise AI applications, particularly those that require low latency or operate in environments with limited connectivity.

Constraints and challenges at the edge

Deploying AI at the edge presents several technical constraints. Edge devices often have limited processing power, memory, and energy budgets compared to cloud servers. Additionally, network connectivity may be intermittent or slow, making it difficult to rely on continuous cloud communication. Security considerations also differ at the edge, as physical devices may be more vulnerable to tampering or unauthorized access.

Glover emphasized that enterprises must adapt their AI strategies to account for these limitations. This includes optimizing AI models to run efficiently on smaller hardware, implementing data compression and preprocessing at the edge, and ensuring that security measures are built into the device layer. The goal is to maintain performance and reliability while operating within the physical and network boundaries of edge environments.
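
One of those techniques, preprocessing and compressing data before it leaves the device, can be sketched in a few lines of Python using only the standard library. The window size, payload format, and function name below are illustrative assumptions, not a description of Akamai's tooling.

# Illustrative sketch only: reduce and compress sensor data at the edge
# before upload, to work within bandwidth and connectivity constraints.

import gzip
import json
import statistics

def preprocess_and_compress(readings, window=10):
    """Reduce raw readings to windowed averages, then gzip the payload."""
    summaries = [
        round(statistics.mean(readings[i:i + window]), 3)
        for i in range(0, len(readings), window)
    ]
    payload = json.dumps({"summaries": summaries}).encode("utf-8")
    return gzip.compress(payload)

raw = [20.0 + i * 0.01 for i in range(1000)]   # simulated sensor samples
packet = preprocess_and_compress(raw)
print(f"{len(raw)} raw samples -> {len(packet)} compressed bytes to upload")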

Implications for enterprise strategy

The shift toward edge-based AI has implications for how enterprises plan their technology investments. Instead of relying solely on cloud-based AI services, organizations may need to deploy a mix of local and remote computing resources. This hybrid model can reduce latency, lower bandwidth costs, and improve data sovereignty by keeping sensitive information on site.
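
A minimal sketch of the data-sovereignty side of that hybrid model, assuming a hypothetical set of on-site records, is shown below: raw entries containing customer identifiers stay local, and only an anonymized aggregate is sent to a central cloud service.

# Illustrative sketch only: raw records stay on site, and only an
# anonymized summary is shared upstream. Field names are hypothetical.

from collections import Counter

def local_aggregate(records):
    """Summarize on-site records without exporting any individual entry."""
    by_category = Counter(r["category"] for r in records)
    return {"record_count": len(records), "by_category": dict(by_category)}

records = [
    {"customer_id": "c-101", "category": "return"},
    {"customer_id": "c-102", "category": "return"},
    {"customer_id": "c-103", "category": "purchase"},
]
# Only this summary leaves the premises; customer identifiers never do.
print(local_aggregate(records))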

Akamai’s commentary suggests that vendors and enterprises alike must rethink the architecture of AI deployments. Rather than treating the edge as a simple extension of the cloud, companies should view it as a distinct computing environment with its own requirements. This perspective aligns with broader industry trends toward distributed computing and the Internet of Things (IoT), where data processing is increasingly pushed to the network periphery.

Looking ahead

The conversation around edge AI is likely to intensify as more enterprises move from proof-of-concept projects to production-scale deployments. Industry observers expect that over the next few years, companies will invest more heavily in edge infrastructure, including specialized hardware and software designed to support AI workloads. Standardization of tools and protocols may also emerge to simplify the management of distributed AI systems.

Akamai’s call for a continuum of compute reflects a growing consensus that the future of enterprise AI is not limited to the cloud. The company is positioning itself to support customers navigating this transition, though specific product timelines or technical details were not disclosed in the statement. The challenge for enterprises will be to balance the benefits of edge processing with the operational complexity it introduces.

Source: Internet of Things News