Guest Q&A: Security Leadership in the Age of AI and Multi-cloud
Responses by Saikat Maiti, CEO and Founder of nFactor Technologies
Tell us about yourself
I’m the CEO and Founder of nFactor Technologies, where we’re pioneering responsible AI-driven security automation to transform how enterprises manage cybersecurity at scale. With over 20 years in information security and privacy, I’ve had the privilege of building security programs from the ground up across healthcare, fintech, and technology companies—from startups to Fortune 500 organizations.
My journey began in healthcare technology at companies like Varian Medical Systems, where I established comprehensive privacy frameworks for global medical device operations. I also led privacy and security initiatives at PwC, serving major clients including Kaiser Permanente and Genentech, before moving into high-growth technology environments at fintech and SaaS companies like Salesforce, Upstart, and Personal Capital.
What drives me is the intersection of cutting-edge technology and practical security implementation. I’m particularly passionate about how AI can revolutionize security operations while ensuring we maintain the highest standards of privacy and compliance. As a frequent speaker at industry conferences like ISACA, RSA, Gartner, Dreamforce, and TieCon, I focus on making complex security concepts accessible and actionable for organizations navigating today’s rapidly evolving threat landscape.
What are three considerations you feel should be top of mind for security teams?
First, embrace AI-driven security operations as a force multiplier, not a replacement for human expertise. The security talent shortage isn’t going away, but AI can amplify what skilled professionals accomplish, provided it is deployed with adequate controls and human-in-the-loop oversight. Smart organizations are implementing AI for third-party risk, compliance, threat detection, automated response, and policy generation while keeping humans in the loop for strategic decisions and complex investigations.
Second, adopt a “privacy by design” mindset across all technology initiatives. With regulations like GDPR, CCPA, and emerging AI governance frameworks, privacy can’t be an afterthought. Security teams need to embed privacy considerations into every system design, vendor evaluation, and deployment decision. This is especially critical as organizations scale AI usage—what seems like a simple chatbot today could become a massive compliance liability tomorrow.
Third, build security architectures that assume breach and prioritize rapid recovery. The question isn’t if you’ll face a security incident, but how quickly you can detect, contain, and recover from it, especially as attackers leverage AI to automate their threats. This means investing in robust backup strategies, incident response automation, and, most importantly, regular testing of your recovery procedures. I’ve seen too many organizations with perfect prevention strategies fall apart when they actually need to execute under pressure or rely on their backup and recovery procedures.
Where do you think the challenges are in implementing and monitoring security policies across hybrid/multi-cloud environments?
The biggest challenge is continuous visibility and consistent policy enforcement across disparate environments. Organizations often end up with security “islands”—different tools, different policies, and different monitoring approaches for each cloud provider or on-premises environment. This creates dangerous blind spots and a flood of noisy alerts and false positives, while attackers can move laterally without detection.
Configuration drift is another critical issue. What starts as a consistent security posture across environments inevitably diverges as teams make changes, apply patches, or deploy new services. Without automated configuration management and continuous monitoring, organizations lose track of their actual security state versus their intended security state.
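To make the drift idea concrete, here is a minimal sketch of comparing an intended security baseline against the observed state. The setting names and values are invented for illustration and are not tied to any real cloud provider API; a real implementation would pull the observed state from each provider’s configuration service.

```python
# Hypothetical sketch: detect configuration drift by diffing the
# declared baseline against the observed state of an environment.
# Setting names and values below are illustrative only.

def find_drift(intended: dict, actual: dict) -> dict:
    """Return settings whose observed value differs from the baseline."""
    drift = {}
    for key, expected in intended.items():
        observed = actual.get(key, "<missing>")
        if observed != expected:
            drift[key] = {"expected": expected, "observed": observed}
    return drift

baseline = {"encryption_at_rest": True, "public_access": False, "mfa_required": True}
observed = {"encryption_at_rest": True, "public_access": True}  # drifted + missing

print(find_drift(baseline, observed))
```

Run on a schedule, a check like this turns “intended versus actual security state” from a vague worry into an auditable report.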
The complexity of compliance mapping across multiple jurisdictions and frameworks also poses significant challenges. A single application might need to comply with HIPAA in the U.S., GDPR in Europe, and local data residency requirements in Asia—all while maintaining consistent security controls across different cloud providers with varying native security capabilities.
Finally, there’s the human factor of managing multiple cloud consoles and tools. Security teams are already stretched thin, and asking them to become experts in AWS, Azure, GCP, and on-premises tools simultaneously is unrealistic. This is where unified security platforms and automation become essential—not just for efficiency, but for reducing the cognitive load on security professionals. (We address this problem specifically with the nFactor Circle of Trust™ platform.)
What kind of security challenges should organizations anticipate when scaling AI use, from test/dev to production?
Data governance becomes exponentially more complex. AI models are data-hungry, and organizations often underestimate the privacy and compliance implications of feeding production data into training pipelines. I’ve seen companies inadvertently expose sensitive customer information through model outputs or create compliance violations by training models on data they’re not legally permitted to use. Avoiding this requires a well-thought-out strategy.
Model poisoning and adversarial attacks represent entirely new threat vectors. Unlike traditional application security, AI systems can be compromised through malicious training data or carefully crafted inputs designed to fool models. Organizations need to implement model validation, input sanitization, and output monitoring that goes far beyond traditional security controls.
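One small layer of the input-sanitization idea above can be sketched as range validation before features ever reach a model. The feature names and trusted ranges here are assumptions for illustration; in practice the ranges would be derived from vetted training data.

```python
# Hypothetical sketch: reject model inputs outside the ranges seen in
# trusted training data, one basic defense against adversarial inputs.
# Feature names and ranges are invented for illustration.

FEATURE_RANGES = {"amount": (0.0, 1_000_000.0), "login_attempts": (0, 100)}

def sanitize(features: dict) -> dict:
    """Raise if any feature is missing or outside its trusted range."""
    clean = {}
    for name, (lo, hi) in FEATURE_RANGES.items():
        value = features.get(name)
        if value is None or not (lo <= value <= hi):
            raise ValueError(f"feature {name!r} outside trusted range: {value}")
        clean[name] = value
    return clean

print(sanitize({"amount": 250.0, "login_attempts": 3}))
```

This is deliberately simple; real adversarial robustness also requires training-data provenance checks and monitoring of model outputs, as noted above.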
The “black box” nature of many AI systems creates accountability and explainability challenges. When an AI system makes a security decision—like blocking a transaction or flagging suspicious behavior—teams need to understand and defend that decision. This is particularly critical in regulated industries where audit trails and decision justification are mandatory.
Scaling from development to production introduces new attack surfaces. Development environments often have relaxed security controls, but production AI systems become attractive targets for attackers seeking to manipulate business logic or extract sensitive information. Organizations need robust CI/CD security practices specifically designed for AI workloads, including model signing, version control, and deployment validation.
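As one piece of the CI/CD controls mentioned above, model signing can be sketched with an HMAC over the serialized artifact: sign at build time, verify at deploy time. The key handling here is a placeholder assumption; a real pipeline would fetch the key from a secrets manager and likely use asymmetric signatures.

```python
# Hypothetical sketch: sign a model artifact in CI and verify it before
# deployment, so tampered weights are rejected. Key is illustrative only.
import hashlib
import hmac

SIGNING_KEY = b"replace-with-secret-from-a-vault"  # placeholder, not a real key

def sign_model(artifact: bytes) -> str:
    return hmac.new(SIGNING_KEY, artifact, hashlib.sha256).hexdigest()

def verify_model(artifact: bytes, signature: str) -> bool:
    # constant-time comparison resists timing attacks
    return hmac.compare_digest(sign_model(artifact), signature)

model_bytes = b"...serialized model weights..."
sig = sign_model(model_bytes)
print(verify_model(model_bytes, sig))                 # untampered artifact
print(verify_model(model_bytes + b"tampered", sig))   # modified artifact
```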
What AI techniques do you see organizations using to improve security efficacy?
Behavioral anomaly detection is becoming mainstream and highly effective. AI systems excel at establishing baseline behavior patterns for users, devices, and applications, then identifying subtle deviations that might indicate compromise. This is particularly powerful for insider threat detection and zero-day attack identification where traditional signature-based approaches fail.
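The core of baseline-and-deviation detection can be shown with a simple z-score check; production systems use far richer models, but the principle is the same. The login-rate data here is invented for illustration.

```python
# Hypothetical sketch: flag an observation that deviates sharply from a
# learned behavioral baseline, using a z-score over recent history.
from statistics import mean, stdev

def is_anomalous(history: list, observation: float, threshold: float = 3.0) -> bool:
    """Flag observations more than `threshold` std deviations from baseline."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return observation != mu
    return abs(observation - mu) / sigma > threshold

logins_per_hour = [4, 5, 3, 6, 4, 5, 4, 6, 5, 4]  # illustrative baseline behavior

print(is_anomalous(logins_per_hour, 40))  # sudden spike -> True
print(is_anomalous(logins_per_hour, 5))   # within baseline -> False
```

The same pattern applies per user, per device, or per application, which is what makes it effective against insider threats and novel attacks that have no signature.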
Automated threat hunting and investigation is transforming how security operations centers function. AI can process vast amounts of log data, correlate events across multiple systems, and generate hypotheses about potential threats for human analysts to investigate. This dramatically reduces the time from initial detection to full investigation completion.
Natural language processing for security policy generation and compliance is gaining significant traction. Organizations are using AI (including the nFactor suite) to automatically generate security policies based on regulatory requirements, translate complex compliance frameworks into actionable controls, and even draft incident response procedures.
Predictive vulnerability management is emerging as a game-changer. Instead of simply cataloging vulnerabilities, AI systems can predict which vulnerabilities are most likely to be exploited based on threat intelligence, attack trends, and environmental factors. This allows security teams to prioritize patching efforts more strategically.
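The prioritization idea above can be sketched as a risk score that blends severity with predicted exploit likelihood and exposure. The weighting formula and fields are assumptions for illustration; in practice the exploit probability might come from a predictive model such as EPSS.

```python
# Hypothetical sketch: rank vulnerabilities by predicted exploitation
# risk rather than CVSS severity alone. Weights are illustrative only.

def risk_score(vuln: dict) -> float:
    """Blend severity, predicted exploit likelihood, and asset exposure."""
    exposure = 2.0 if vuln["internet_facing"] else 1.0
    return (vuln["cvss"] / 10.0) * vuln["exploit_probability"] * exposure

vulns = [
    {"id": "CVE-A", "cvss": 9.8, "exploit_probability": 0.02, "internet_facing": False},
    {"id": "CVE-B", "cvss": 7.5, "exploit_probability": 0.60, "internet_facing": True},
]

ranked = sorted(vulns, key=risk_score, reverse=True)
print([v["id"] for v in ranked])  # the lower-CVSS but likely-exploited CVE ranks first
```

Note how CVE-B outranks CVE-A despite its lower CVSS score, which is exactly the reprioritization a severity-only queue would miss.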
Moving into the future, given the landscape of multi-vector attacks, how do you see security vendors needing to adapt and evolve? Should vendors start partnering or producing consolidated solutions instead?
Consolidation is inevitable and necessary. The current security tool sprawl—where organizations manage 50+ security products—is unsustainable. Multi-vector attacks exploit the gaps between these tools, and security teams can’t effectively manage the complexity. Vendors need to move beyond point solutions toward integrated platforms that share threat intelligence and coordinate responses automatically.
However, consolidation shouldn’t mean single-vendor lock-in. The most successful approach will be ecosystem partnerships where best-of-breed vendors integrate deeply while maintaining their core specializations.
API- and MCP-first architectures and open standards will become table stakes. Vendors that build proprietary silos will lose out to those that enable seamless integration and data sharing.
AI and automation will be the great differentiator. Vendors that can demonstrate measurable reductions in mean time to detection and response through intelligent automation will win market share.
Finally, security vendors need to embrace the concept of “security as a business enabler” rather than just “risk mitigation.” The most successful vendors will help organizations move faster and innovate more confidently by providing security that’s transparent to business processes. In an era where digital transformation is existential, security can’t be a bottleneck—it needs to be an accelerator. With AI-enabled security, we can make this happen today for many aspects of security.