Artificial intelligence technology might one day pose an existential threat to humanity and should be considered a societal risk on a par with pandemics and nuclear war.
RISKS OF ARTIFICIAL INTELLIGENCE
Automation-driven job loss and accident risk
Deepfakes
Privacy violations
Algorithmic bias caused by bad data
Socioeconomic inequality
Market volatility
Weapons automation
Uncontrollable self-aware AI
Whether it is the increasing automation of certain jobs or autonomous weapons that operate without human oversight, unease abounds on a number of fronts. And we are still in the very early stages of what AI is really capable of. "Mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war," warns one widely publicized statement.

Recent advancements in so-called large language models, the type of A.I. system used by ChatGPT and other chatbots, have raised fears that A.I. could soon be used at scale to spread misinformation and propaganda, or that it could eliminate millions of white-collar jobs. Yet it was the bots' occasionally inaccurate, misleading and weird responses that drew much of the attention after their release. Some warn that A.I. could become powerful enough to create societal-scale disruptions within a few years if nothing is done to slow it down, though researchers sometimes stop short of explaining how that would happen. Others have argued that the risks posed by A.I. systems are serious enough to warrant government intervention, and have called for regulation of A.I. to address its potential harms. Some skeptics counter that A.I. technology is still too immature to pose an existential threat.

As AI grows more sophisticated and widespread, the voices warning against its potential dangers grow louder. Lethal autonomous weapons systems use artificial intelligence to identify and kill human targets without human intervention. Another potential danger of an AI arms race is the possibility of losing control of the AI systems; the risk is compounded in the case of a race to artificial general intelligence, which may present an existential risk. Fully autonomous AI operating systems already exist that allow clusters of UAVs to carry out missions autonomously, sharing tasks among themselves and interacting with one another, and some observers consider it inevitable that "swarms of drones" will one day fly over combat zones.
"neural net" combat module, with a machine gun, a camera, and an AI that its makers claim can make its own targeting judgements without human intervention.