AInvest Newsletter
Americans are embracing artificial intelligence (AI) for its transformative potential but remain deeply conflicted about its societal implications, according to a series of recent global and national studies. While 66% of respondents in a 2025 KPMG and University of Melbourne survey reported regular AI use, trust remains low: only 46% of people globally say they are willing to trust AI systems. In the U.S., the share of Americans who trust businesses “a lot” or “some” to use AI responsibly has risen to 31% from 21% in 2023, yet 57% believe AI does as much harm as good[1]. This duality reflects a growing tension between the technology’s benefits and its perceived risks to human autonomy and creativity.
The U.S. public’s ambivalence is underscored by a Pew Research Center survey of 5,023 adults, which found 53% fear AI will erode creative thinking and 50% worry it will undermine meaningful relationships[5]. Similarly, 64% of Americans express concern that AI could manipulate elections via bots and deepfakes. These anxieties align with broader global trends: the Edelman Trust Barometer noted that only 32% of U.S. adults trust AI, compared to 72% in China[4]. Despite these reservations, 70% of Americans support stronger AI regulation, though only 43% believe current laws are adequate[3].
Generational divides further complicate the AI landscape. A Northeastern University study revealed that 74% of Americans under 30 use AI for information searches, compared to 60% of all adults[9]. Younger users are more likely to adopt AI for tasks like brainstorming (60% of those under 30 vs. 20% of those 60 and older), yet they also report greater concern about job displacement. Over three-quarters of Americans in every state expect AI to affect their jobs within five years, with tech-heavy and Sun Belt states showing the highest levels of apprehension[6]. Meanwhile, older and rural populations lag in AI adoption, echoing historical digital divides[7].
The public’s desire for control over AI is evident. A Pew survey found 76% of Americans consider it crucial to distinguish AI-generated content from human-created material, yet 53% lack confidence in their ability to do so[5]. This underscores a demand for transparency and accountability. Businesses, however, face challenges in balancing innovation with ethical governance: 66% of employees rely on AI outputs without verifying accuracy, and 56% report work errors linked to AI[1]. The KPMG report emphasizes that only 47% of employees receive AI training, and 57% hide AI use, citing fear of being left behind[3].
Internationally, the U.S. approach to AI regulation contrasts sharply with the EU’s stricter, ethics-focused framework. The EU’s AI Act prioritizes transparency and safety, while the Trump administration’s “AI Action Plan” aims to sustain U.S. dominance by minimizing regulatory constraints[11]. This divergence raises questions about global competitiveness and the long-term societal impacts of AI. As the U.S. leans toward deregulation, businesses face mounting pressure to self-govern, with 70% of Americans advocating for industry-government collaboration to mitigate risks[3].
For organizations, the path forward hinges on building trust through transparency and inclusive governance. KPMG’s research highlights the need for AI literacy programs, robust training, and policies that address biases and misinformation[1]. The Edelman Trust Barometer warns that without proactive engagement, AI’s potential to enhance productivity and innovation may be overshadowed by public skepticism[4]. As AI reshapes industries, the challenge lies in aligning technological progress with societal values—a balance critical to ensuring AI’s benefits are both realized and trusted.