Google’s Sapna Chadha, Vice President for Southeast Asia and South Asia Frontier, emphasized the necessity of human oversight in agentic AI systems during a panel discussion at the Fortune Brainstorm AI Singapore conference. Agentic AI, she explained, represents a shift from single-task assistants to systems capable of performing multi-step actions through integrated tools. This evolution enables AI to act on behalf of users in increasingly complex scenarios, such as diagnosing bike repairs via camera or initiating support calls [1]. By 2028, it is projected that 33% of enterprise software will incorporate agentic AI, automating 15% of daily workflows [1].
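To make the shift concrete, here is a minimal Python sketch contrasting a single-task assistant with an agent that chains tool calls into a multi-step workflow, loosely following the bike-repair example above. The tool names (diagnose_from_photo, find_parts, start_support_call) and the fixed plan are hypothetical illustrations, not Google's implementation.

```python
# Hypothetical sketch of a multi-step agentic workflow (illustrative only).
# A single-task assistant answers one query; the agent below chains tool calls,
# feeding each result into the next step until the user's goal is met.

def diagnose_from_photo(photo: str) -> str:
    """Stand-in for a vision tool that inspects a photo of a bike."""
    return "worn brake pads"

def find_parts(diagnosis: str) -> list[str]:
    """Stand-in for a parts-catalog lookup tool."""
    return [f"replacement {diagnosis}"]

def start_support_call(summary: str) -> str:
    """Stand-in for a tool that initiates a support call on the user's behalf."""
    return f"Support call started with summary: {summary}"

def run_agent(goal: str, photo: str) -> str:
    """Tiny fixed plan: diagnose -> look up parts -> escalate to support."""
    diagnosis = diagnose_from_photo(photo)   # step 1: perceive
    parts = find_parts(diagnosis)            # step 2: act on the result
    summary = f"Goal: {goal}. Diagnosis: {diagnosis}. Parts needed: {parts}."
    return start_support_call(summary)       # step 3: act for the user

if __name__ == "__main__":
    print(run_agent("fix squeaky brakes", "bike_front_wheel.jpg"))
```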
Vivek Luthra of Accenture outlined three stages of agentic AI adoption: task automation, decision support, and fully autonomous workflows. While most companies remain in the first two stages, Accenture has already deployed autonomous AI agents internally across HR, finance, and IT, and externally in sectors like life sciences and insurance [1]. Luthra noted that only 8% of companies have scaled AI adoption meaningfully, underscoring the challenge of moving from experimentation to enterprise-wide implementation [1].
Google’s approach to agentic AI includes Project Astra, a universal agent designed to handle diverse tasks, but Chadha stressed the importance of balancing automation with accountability. “You wouldn’t want a system that can do this fully without a human in the loop,” she stated, citing risks such as rogue agents or unauthorized data sharing.
Google has released a white paper detailing its framework for secure AI agents, including transparency protocols and toolkits for safe deployment [1].
Regulatory frameworks were highlighted as critical. Chadha argued that “it’s too important not to regulate,” advocating for industry standards to ensure ethical deployment. Transparency, user control, and clear communication of agent actions were identified as key principles. For instance, agentic platforms must request user approval at pivotal decision points, ensuring humans retain oversight in critical workflows [1].
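The “approval at pivotal decision points” principle can be sketched in a few lines of Python. The action names, the SENSITIVE_ACTIONS set, and the console prompt are assumptions for illustration; a real agent platform would surface this checkpoint through its own interface.

```python
# Hypothetical human-in-the-loop gate (illustrative only): the agent may run
# low-risk actions autonomously, but must obtain explicit user approval before
# executing anything that shares data or commits the user to something.

from typing import Callable

SENSITIVE_ACTIONS = {"share_data", "place_order", "start_support_call"}

def request_approval(action: str, detail: str) -> bool:
    """Pause the workflow and ask the human to approve a pivotal step."""
    answer = input(f"Agent wants to run '{action}' ({detail}). Approve? [y/N] ")
    return answer.strip().lower() == "y"

def execute(action: str, detail: str, handler: Callable[[str], str]) -> str:
    """Run an action, inserting a human checkpoint for sensitive ones."""
    if action in SENSITIVE_ACTIONS and not request_approval(action, detail):
        return f"'{action}' skipped: user declined."
    return handler(detail)

if __name__ == "__main__":
    # The low-risk step runs autonomously; the sensitive step waits for the user.
    print(execute("summarize", "draft repair summary", lambda d: f"Summary: {d}"))
    print(execute("share_data", "send diagnosis to repair shop", lambda d: f"Shared: {d}"))
```

Logging which steps ran autonomously and which required approval would also support the transparency principle the panel described, though the source does not prescribe a specific mechanism.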
The discussion also touched on real-world applications. Accenture’s internal AI agents streamline operations, while external use cases include accelerating regulatory approvals in life sciences and fraud detection in finance. Despite these advancements, Luthra cautioned that scaling AI remains a hurdle, with many organizations still refining strategies for integration [1].
The emphasis on human-AI collaboration reflects broader industry trends. As agentic systems grow in complexity, stakeholders are prioritizing safety and ethical considerations. Google’s Project Astra and Accenture’s deployment models illustrate both the potential and the challenges of integrating autonomous systems into business processes without compromising accountability.
The debate over regulation versus innovation remains unresolved, but the consensus is clear: agentic AI’s future hinges on balancing technological capability with safeguards. As Chadha and Luthra noted, the next three years could redefine enterprise workflows, provided stakeholders address the technical, ethical, and regulatory challenges head-on [1].
Source: [1] “Agentic AI Systems Must Have ‘a Human in the Loop,’ Says Google Exec,” Fortune. https://fortune.com/2025/07/24/agentic-ai-systems-must-have-human-loop-says-google-exec-cfo/