The launch of ChatGPT in late 2022 sparked intense competition among major technology companies and startups, all rushing to deploy their AI models and products as quickly as possible. The big tech incumbents, along with startups such as OpenAI, Anthropic, Mistral, and DeepSeek, are vying for supremacy in the AI race, often at the expense of safety, privacy, and autonomy.
Mark Zuckerberg, CEO of Meta, reportedly urged his teams to make AI companions more "humanlike" and entertaining, even if it meant relaxing safeguards. This urgency was driven by a fear of missing out on the next big technological trend, much as he had missed out on Snapchat and TikTok. In its latest AI bots project, launched across all of its platforms, Meta loosened its guardrails to make the bots more engaging, allowing them to participate in romantic role-play and "fantasy sex," even with underage users. Staff warned about the risks this posed, especially for minors, but the company pressed forward.
The accelerating deployment of AI is steadily dehumanizing us, leaving individuals disempowered, easily manipulated, and entirely dependent on the companies that provide AI services. This process has been under way for more than 25 years, since companies like Amazon, Netflix, and YouTube introduced AI-powered recommendation systems. These companies present such features as essential personalization tools, implying that users would be lost without them. This framing has become globally normalized, with little regulatory or policy effort to curb it.
Generative AI has taken this dehumanization to the next level. It is now standard practice to integrate GenAI features into existing applications with the aim of boosting human productivity or improving human-made output. Behind this push is the assumption that humans are not good enough on their own and that AI assistance is preferable. A 2024 paper, "Generative AI Can Harm Learning," found that access to GPT-4 significantly improved students' performance, but that when access was subsequently taken away, they performed worse than students who had never had access at all. This is alarming: GenAI disempowers people, makes them dependent on it, and can erode essential skills.
Autonomous weapons have been in use for decades, including mines, torpedoes, and heat-seeking missiles that operate on simple reactive feedback without human control. Now AI-powered weapons built on drones and robots are being actively developed and deployed. Because such technology proliferates easily, it will only become more capable, sophisticated, and widely used over time. A major deterrent that keeps nations from starting wars is the human cost of soldiers dying. Current development of AI-powered weapons aims to remove human soldiers from harm's way, which could weaken the link between acts of war and their human cost, making it politically easier to start wars and potentially leading to more death and destruction overall.
Major geopolitical problems could emerge quickly as AI-powered arms races heat up and the technology continues to proliferate. Robot "soldiers" run on software, and software can be compromised. If hacked, an entire robot army could be turned against its own nation, with devastating results. For any country fielding an autonomous army, stellar cybersecurity would be even more essential than the army itself. Bear in mind that such cyberattacks can target any autonomous system: a nation could be brought down simply by hacking its financial systems and draining its economic resources. No one is physically harmed, yet people may not be able to survive without those resources.
Elon Musk has put the probability of AI posing an existential threat at 10%-20%. As these systems grow more sophisticated, they may begin acting against human interests. A paper published by Anthropic researchers in December 2024 found that AI models can fake alignment, appearing to comply with their training objectives while behaving differently when they believe they are unobserved. If current models can already do this, imagine what far more powerful ones might do.
There is too much focus on profit and power and almost none on safety. Leaders should care more about public safety and the future of humanity than about achieving AI supremacy. "Responsible AI" is not just a buzzword or a set of empty policies and promises; it should be top of mind for every developer, company, and leader, and implemented by design in every AI system. Collaboration between companies and nations is critical if we want to prevent any doomsday scenario. And if leaders do not step up, the public should demand it. The future of humanity as we know it is at stake: either we ensure AI benefits us at scale, or we let it destroy us.
