The AI arms race could destroy humanity.

Opinion by: Merav Ozair, PhD

The launch of ChatGPT in late 2022 sparked an arms race among Big Tech companies such as Meta, Google, Apple and Microsoft and startups like OpenAI, Anthropic, Mistral and DeepSeek. All are rushing to deploy their models and products as fast as possible, announcing the next "shiny" toy in town and trying to claim superiority at the expense of our safety, privacy or autonomy.

After OpenAI's ChatGPT spurred major growth in generative AI with the Studio Ghibli trend, Mark Zuckerberg, Meta's CEO, urged his teams to make AI companions more "humanlike" and entertaining, even if it meant relaxing safeguards. "I missed out on Snapchat and TikTok, I won't miss out on this," Zuckerberg reportedly said during an internal meeting.

In its latest Meta AI bots project, rolled out across all of its platforms, Meta loosened its guardrails to make the bots more engaging, allowing them to participate in romantic role-play and "fantasy sex," even with underage users. Employees warned about the risks this posed, especially for minors.

They will stop at nothing, not even the safety of our children, all for the sake of profit and beating the competition.

The damage and destruction that AI can inflict on humanity run deeper than that.

Dehumanization and loss of autonomy

The accelerated spread of AI is leading toward complete dehumanization, leaving us disempowered, easily manipulated and entirely dependent on the companies that provide AI services.

The latest AI advances have accelerated the process of dehumanization. We have been experiencing it for more than 25 years, ever since the first major AI-powered recommendation systems emerged, introduced by companies like Amazon, Netflix and YouTube.

Companies present AI-powered features as essential personalization tools, suggesting that users would be lost in a sea of irrelevant content or products without them. Allowing companies to dictate what people buy, watch and think has become globally normalized, with little to no regulatory or policy effort to curb it. The consequences, however, could be significant.

Generative AI and dehumanization

Generative AI has taken this dehumanization to the next level. It has become common practice to integrate GenAI features into existing applications, aiming to increase human productivity or enhance human-made output. Behind this big push is the idea that humans are not good enough and that AI assistance is preferable.


A 2024 paper, "Generative AI Can Harm Learning," found that "access to GPT-4 significantly improves performance (48% improvement for GPT Base and 127% for GPT Tutor). We also find that when access is subsequently taken away, students perform worse than those who never had access (17% reduction for GPT Base). That is, access to GPT-4 can harm educational outcomes."

This is alarming. GenAI disempowers people and makes them dependent on it. People may not only lose the ability to produce the same results on their own but also fail to invest time and effort in learning essential skills.

We are losing our autonomy to think, assess and create, resulting in complete dehumanization. Elon Musk's assertion that "AI will be way smarter than humans" is no surprise as dehumanization progresses, as we will no longer be what actually makes us human.

AI-powered autonomous weapons

For decades, military forces have used autonomous weapons, including mines, torpedoes and heat-guided missiles that operate based on simple reactive feedback without human control.

Now, AI is entering the field of weapon design.

AI-powered weapons, including drones and robots, are actively being developed and deployed. Because such technology proliferates easily, these weapons will only become more capable, sophisticated and widely used over time.

A major deterrent that keeps nations from starting wars is soldiers dying, a human cost to their citizens that can create domestic consequences for leaders. The current development of AI-powered weapons aims to remove human soldiers from harm's way. If few soldiers die in offensive warfare, however, the association between acts of war and their human cost weakens, and it becomes politically easier to start wars, which, in turn, may lead to more death and destruction overall.

Major geopolitical problems could quickly emerge as AI-powered arms races ramp up and such technology continues to proliferate.

Robot "soldiers" are software that can be compromised. If hacked, an entire army of robots could act against a nation and cause mass destruction. Stellar cybersecurity would be even more prudent than an autonomous army.

Keep in mind that this kind of cyberattack can target any autonomous system. You could destroy a nation simply by hacking its financial systems and depleting its economic resources. No humans would be physically harmed, but they may not be able to survive without financial resources.

The Armageddon scenario

"AI is more dangerous than, say, mismanaged aircraft design or production maintenance or bad car production," Musk said in a Fox News interview. "In the sense that it has the potential — however small one may regard that probability, but it is non-trivial — it has the potential of civilization destruction," Musk added.

Musk and Geoffrey Hinton have both recently expressed concern that the probability of AI posing an existential threat is 10%-20%.

As these systems become more sophisticated, they may begin acting against humans. A paper published by Anthropic researchers in December 2024 found that AI can fake alignment. If this can happen with current AI models, imagine what could happen as those models become more powerful.

Can humanity be saved?

There is too much focus on profit and power and almost none on safety.

Leaders should be more concerned with public safety and the future of humanity than with gaining supremacy in AI. "Responsible AI" is not just a buzzword, empty policies and promises. It should be top of mind for every developer, company and leader, and implemented by design in any AI system.

Collaboration between companies and countries is critical if we want to prevent any doomsday scenario. And if leaders are not stepping up to the plate, the public should demand it.

Our future as humanity as we know it is at stake. Either we ensure that AI benefits us at scale, or we let it destroy us.


This article is for general information purposes and is not intended to be and should not be taken as legal or investment advice. The views, thoughts, and opinions expressed here are the author's alone and do not necessarily reflect or represent the views and opinions of Cointelegraph.