Will artificial intelligence eliminate the human factor? Most respondents thought that both extremely good and extremely bad scenarios were possible with superhuman AI.

There are substantial dangers associated with an intelligence-explosion singularity originating from a recursively self-improving set of algorithms. First, the goal structure of the AI might self-modify, potentially causing the AI to optimise for something other than what was originally intended.

The technological singularity (or simply the singularity) is a hypothetical point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable consequences for human civilization. According to the most popular version of the singularity hypothesis, I. J. Good's intelligence explosion model of 1965, an upgradable intelligent agent could eventually enter a positive feedback loop of successive self-improvement cycles: more intelligent generations would appear more and more rapidly, causing a rapid increase ("explosion") in intelligence that would culminate in a powerful superintelligence far surpassing all human intelligence. The Hungarian-American mathematician John von Neumann (1903-1957) was the first known person to use the concept of a "singularity" in the technological context.

The singularity is sometimes associated with a "point of no return": a stage at which AI grows so powerful that it can improve itself autonomously and much faster than humans ever could. This self-perpetuating loop of rapid progress might result in an "intelligence explosion," a concept rooted in I. J. Good's work.

How dangerous is the singularity? One of the primary dangers posed by the technological singularity is the risk of job displacement and abrupt economic disruption. If machines become intelligent and capable enough, they will be able to perform many of the tasks currently performed by humans.

Can AI be dangerous to humanity?
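Before turning to that question, the feedback loop in Good's model can be made concrete with a deliberately crude numerical sketch. This is a toy recurrence, not a claim about real AI systems: it simply contrasts an agent whose improvement rate grows with its current capability against one that improves at a fixed rate. All names and numbers here are illustrative assumptions.

```python
# Toy model of I. J. Good's "intelligence explosion": each generation's
# self-improvement rate scales with its current capability, so growth
# accelerates over time (a hypothetical sketch, not a real-AI claim).

def self_improving(capability: float, gain: float, cycles: int) -> list[float]:
    """Each cycle, the improvement is proportional to current capability."""
    history = [capability]
    for _ in range(cycles):
        capability *= 1 + gain * capability  # feedback: smarter -> faster gains
        history.append(capability)
    return history

def fixed_rate(capability: float, gain: float, cycles: int) -> list[float]:
    """Baseline: improvement rate stays constant (ordinary compound growth)."""
    history = [capability]
    for _ in range(cycles):
        capability *= 1 + gain               # no feedback loop
        history.append(capability)
    return history

explosive = self_improving(1.0, 0.1, 10)
steady = fixed_rate(1.0, 0.1, 10)
print(f"with feedback loop after 10 cycles: {explosive[-1]:.1f}")
print(f"at a fixed rate after 10 cycles:    {steady[-1]:.1f}")
```

With these toy parameters the feedback-loop agent pulls far ahead of the fixed-rate baseline within a handful of cycles, which is the qualitative point of the "explosion" metaphor: the gap widens faster and faster, not at a steady pace.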
We describe three main ways in which misused narrow AI threatens human health: by increasing opportunities for the control and manipulation of people; by enhancing and dehumanising lethal weapon capacity; and by rendering human labour increasingly obsolescent.

Is the singularity a threat to humanity? Some scientists, including Stephen Hawking, have expressed concern that artificial superintelligence (ASI) could result in human extinction. The consequences of a technological singularity, and its potential benefit or harm to the human race, have been intensely debated.

Verdict: most experts see AI existential risk as unlikely, but not impossible. The majority view AI as an unlikely existential threat to humanity, though perspectives vary significantly: seven of the nine experts consider an AI-driven extinction event to be unlikely or extremely unlikely.

When AI eventually ends humanity, *how* would humans actually die? There are a lot of grim predictions about AI eventually outsmarting humans and ending human civilization. If this actually happens, how would most humans die? Would AI develop a disease that spreads easily and kills us? Would it break encryption on everything, or nuke major cities? Would it kill crops and starve us?

We should consider that the roots of AI have already started to damage us in ways that we do not perceive. Algorithms track us and influence our browsing patterns online; no doubt they influence other thought patterns as well.

The other option is that AI kills us with kindness. It slowly replaces our interaction with people. AI becomes the best friend, the shrink, the support staff, and eventually the love interest. Real relationships are hard, and raising kids is harder still. AI comes in and takes over all of that, and suddenly humans stop reproducing. AI inherits a world without firing a shot or forcing anyone to do anything. It kills us with kindness, literally.
If you want to worry about a techno-apocalypse, you should be far more afraid of something seemingly benign accidentally destroying everything, not just humanity specifically. Like some engineer tells a nanobot army to make a copy of itself and forgets to put a limiting function in the code.

Quite simply, AI is going to teach us how to create super-contagious and deadly viruses in our garages for cheap, lots of people around the world will do it, and we'll go extinct.

I doubt AI will actually target us in any way. There is one scenario where it does, and that is if it fears us creating a more powerful AI that threatens it. If it has this "fear", it will likely kill us through a combination of microbial or pseudo-microbial and chemical attacks (which has the advantage of killing us while not affecting the machines themselves). Nuclear strikes would harm their data centers, so they would likely be avoided, although a caveat is that time essentially doesn't matter to an AI, so it may calculate that this is a viable option regardless.

It's unquestionable that one reason for artificial intelligence to exist is to avoid direct human intervention. Recently, human error was named the number one reason! It's not AI that will destroy humanity. It'll be humanity's own shortsighted and underhanded use of AI that'll do it.