Experts are anxious that hackers could use the technology for criminal activities, such as causing driverless vehicle crashes or turning commercial drones into targeted weapons, according to the report, The Malicious Use of Artificial Intelligence.
A group of 26 global experts has warned that rogue states, criminals and terrorists could in future use artificial intelligence (AI) to commit crimes, stage terror attacks and manipulate public opinion.
Within the next five years, drones could be trained with facial recognition software to seek out and hit a specific individual, hackers could impersonate people (such as the CEO of a company) by synthesizing their voice, and technologies such as AlphaGo, Google's Go-playing AI, could be used to find vulnerabilities in software in seconds that hackers could then exploit.
Miles Brundage, research fellow at Oxford University's Future of Humanity Institute, said: "AI will alter the landscape of risk for citizens, organisations and states. Whether it's criminals training machines to hack or 'phish' at human levels of performance, or privacy-eliminating surveillance, profiling and repression, the full range of impacts on security is vast."
Artificial intelligence raises the risk of hacking attacks, as malicious users could exploit the technology in these and other ways.
AI refers to computer systems and machines that can perform tasks that have traditionally required human intelligence.
The report also raises the question of whether academics and others should rein in what they publish or disclose about new developments in AI until other experts in the field have had a chance to study and react to the potential dangers they might pose. The authors also noted that AI is a dual-use technology and that researchers and engineers ought to be both mindful and proactive about its potential for misuse.
The researchers had to admit, though, that "ultimately, we ended up with a lot more questions than answers".
The paper was born of a workshop in early 2017, and some of its predictions essentially came true while it was being written.
The rise of autonomous weapons systems in conflicts risks "the loss of meaningful human control", while detailed political analytics, targeted propaganda and fake videos "present powerful tools for manipulating public opinion on previously unimaginable scales".
Late last year, so-called "deepfake" pornographic videos began to surface online, with celebrity faces realistically melded onto different bodies.