Survey: Tech Companies Lead AI Adoption but Struggle with Infrastructure
Technology and software companies are making exceptional progress putting generative AI (GenAI) to work in their organizations—but it’s not all good news. Although widespread enterprise adoption puts tech firms ahead of all other industries in A10’s State of AI Infrastructure Report 2025, their infrastructure isn’t always keeping up. And as AI usage grows, the expectation of fast and reliable response times becomes harder to meet.
In this blog, we’ll explore the infrastructure challenges and constraints now confronting tech firms as they accelerate their enterprise AI agenda.
Diverse Adoption, Balanced Hosting
According to our survey, 80 percent of tech firms have now adopted GenAI in the enterprise, with popular use cases including chatbots, content generation, and coding assistants. The use of AI for predictive analytics is nearly as common at 71 percent.
While 38 percent of tech firms host AI workloads in a public cloud, even more (49 percent) use a hybrid cloud model. This balanced hosting approach allows companies to make strategic use of different environments to meet the varying latency, availability, and security requirements of different workloads. However, ensuring consistent performance across this more complex infrastructure can pose a challenge.
Performance Bottlenecks Run Deeper Than Compute
As AI adoption accelerates, it’s no surprise that 39 percent of technology firms cite compute limitations—specifically CPU and GPU processing power—as their biggest performance bottleneck, higher than the 33 percent seen across all industries. Memory and storage I/O speeds are flagged by 18 percent of tech respondents, also above the cross-industry average. Another 20 percent of tech firms identify inefficient application architecture as their biggest performance bottleneck.
While these system-level growing pains are to be expected, many tech firms are also struggling with the demands that AI workloads place on application delivery infrastructure and management. Efficient traffic handling and low-latency performance are essential to ensure application reliability, but challenges can arise across the infrastructure stack: from how traffic is routed and load-balanced, to how TLS/SSL decryption is handled, to whether observability tools can surface bottlenecks before they affect end users.
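As a concrete illustration of one layer of that stack, the sketch below shows least-outstanding-requests routing, a common load-balancing policy for long-running inference traffic. The class and backend names here are hypothetical, a minimal sketch rather than any product’s implementation:

```python
import random

class Backend:
    """A hypothetical inference backend tracked by the load balancer."""
    def __init__(self, name):
        self.name = name
        self.outstanding = 0  # in-flight requests on this backend

def pick_backend(backends):
    """Least-outstanding-requests routing: send each new request to the
    backend with the fewest in-flight requests, breaking ties randomly."""
    fewest = min(b.outstanding for b in backends)
    candidates = [b for b in backends if b.outstanding == fewest]
    return random.choice(candidates)

# Simulate a burst of requests across three GPU-backed endpoints.
pool = [Backend("gpu-a"), Backend("gpu-b"), Backend("gpu-c")]
for _ in range(9):
    chosen = pick_backend(pool)
    chosen.outstanding += 1  # request dispatched; completion not modeled here

print([b.outstanding for b in pool])  # load spreads evenly: [3, 3, 3]
```

Policies like this matter more for AI traffic than for typical web requests because inference times vary widely with prompt and output length; simple round-robin can leave one GPU saturated while others sit idle.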
Our survey data bears this out. Only half of all respondents say their current application delivery controller (ADC) and load balancing infrastructure can “mostly” maintain the required performance and uptime for AI workloads, and even then it occasionally approaches its limits. Just 17 percent say their infrastructure meets AI demands with capacity to spare. For technology companies with demanding users and mission-critical AI use cases, “mostly sufficient” isn’t nearly good enough.
How Security and Scaling Problems Compound Each Other
Across industries, 49 percent of survey respondents cite security constraints as their top infrastructure pain point for AI. This problem can be particularly acute for technology companies with users relying on AI-powered coding tools, which can inadvertently leak sensitive intellectual property through APIs. Respondents named three recurring fears: data leakage, unauthorized model access, and the inability of existing tools to detect threats at the prompt or inference level.
The scaling problem compounds the security problem, and vice versa. Only 19 percent of organizations across industries have fully automated scaling for AI workloads, despite 71 percent already using or experimenting with AI. The rest rely on partial automation or manual intervention, which creates operational lag precisely when AI demand spikes and security monitoring needs to keep pace. Infrastructure that can’t scale cleanly makes it hard to maintain security without adding unnecessary latency or disrupting user experience.
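The gap between partial automation and fully automated scaling can be illustrated with a toy policy. The function below is a hypothetical sketch with made-up thresholds, not any vendor’s implementation; it sizes a replica pool to the request backlog while damping scale-downs to avoid flapping:

```python
def target_replicas(current, queue_depth, per_replica_capacity=8,
                    min_replicas=2, max_replicas=32):
    """Hypothetical scaling rule: size the replica pool to the request
    backlog, bounded by a warm minimum and a hard cap."""
    # Ceiling division: replicas needed to drain the current backlog.
    needed = -(-queue_depth // per_replica_capacity)
    desired = max(min_replicas, min(max_replicas, needed))
    # Scale up immediately; scale down one step at a time to avoid flapping.
    if desired < current:
        return current - 1
    return desired

print(target_replicas(current=4, queue_depth=100))  # demand spike: jump to 13
print(target_replicas(current=13, queue_depth=10))  # cooldown: step down to 12
```

Even a policy this simple is asymmetric on purpose: reacting instantly to spikes protects latency, while gradual scale-down keeps a brief lull from triggering churn. Manual intervention, by contrast, introduces exactly the operational lag the survey respondents describe.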
A Platform Approach for Modernization
Nearly 80 percent of all organizations plan to modernize their infrastructure within 18 months, with top priorities being security infrastructure (60 percent), compute (50 percent), and AI-tuned application delivery controllers and load balancers (32 percent). Among organizations already acting, 38 percent are implementing advanced load balancing configured for AI traffic.
While budget is a common obstacle for these efforts, cited by 30 percent of respondents, only 3 percent currently lack leadership support. Once initiatives are in motion, the focus shifts to the practical complexity of modernizing infrastructure while keeping production systems running. For technology teams already managing complex, multi-vendor environments, adding more specialized tools without integration and centralized orchestration only deepens the operational burden. In that light, it makes sense that 62 percent of respondents prefer vendors with a platform strategy over standalone point products.
To learn more about how technology organizations are addressing AI infrastructure challenges, including the full findings on performance, security, scaling, and modernization, download the A10 Networks State of AI Infrastructure Report 2025.