On Monday, Musk posted a link on his personal Twitter account to an article titled “Hackers Have Already Started to Weaponize Artificial Intelligence.” The following are excerpts from that article.


Two data scientists from the security firm ZeroFOX ran an experiment to see who was better at getting Twitter users to click on malicious links: humans or artificial intelligence. The researchers had the AI study the behavior of social network users, then design and deploy its own phishing bait. The results showed the AI to be far ahead of its human competitor.


The AI, named SNAP_R, sent simulated phishing tweets to more than 800 users at an average rate of 6.75 tweets per minute. By comparison, Forbes staff writer Thomas Fox-Brewster managed 1.075 tweets per minute, making 129 attempts and luring only 49 users.


Fortunately, this was only an experiment, but it shows that hackers are already capable of using AI for malicious attacks. Indeed, they may already be doing so; it is simply hard to know for sure. In July, hundreds of leading cybersecurity experts gathered in Las Vegas for the 2017 Black Hat conference to discuss AI and the potential pitfalls of emerging technologies. In one survey there, 62 percent of respondents said they expected hackers to use AI for attacks within the coming year.


“Hackers have been using artificial intelligence as a weapon for quite some time,” Brian Wallace, lead security data scientist at Cylance, told Gizmodo. “It makes sense, because hackers have a problem of scale: they want to attack as many targets as possible while minimizing the risk to themselves. Artificial intelligence and machine learning do exactly that, which makes them perfect tools.” He added that these tools can decide whom to attack, when to attack, and so on.


Deepak Dutt, founder and CEO of the mobile security startup Zighra, said it is highly likely that sophisticated AI will be used in cyberattacks in the near future, and that it may already be in use in countries such as Russia and China, and in parts of Eastern Europe. Exactly how AI might be exploited is harder to say, but Dutt offered some possibilities.


“AI can be used to mine large amounts of public-domain and social network data to extract personally identifiable information such as dates of birth, gender, location, phone numbers, and email addresses, which can then be used to hack into a person’s accounts,” Dutt told Gizmodo. “It can also be used to automatically monitor emails and text messages and to craft personalized phishing emails for social engineering attacks. AI makes it easier to mutate malware and ransomware, and smarter at finding and exploiting vulnerabilities in systems.”


Dutt suspects that AI is already being used in cyberattacks, and that criminals are already exploiting some kind of machine learning capability, for example by automatically generating personalized phishing emails.


“But new machine learning techniques, such as deep learning, can be used to achieve the levels of accuracy and efficiency I just described,” he said. Deep learning, also known as hierarchical learning, uses large neural networks. It has been applied to computer vision, speech recognition, social network filtering, and many other complex tasks, often producing results that beat those of human experts.
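

To make “deep learning” concrete, here is a minimal sketch of the kind of multi-layer neural network the term refers to, pointed here at the defensive side of the same problem: classifying short messages as suspicious or benign. This is an illustrative toy, not anything from the article; it assumes TensorFlow/Keras is installed, and the four example texts and their labels are placeholders.

import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

# Toy placeholder data: four short messages with hand-assigned labels.
texts = np.array(["verify your account now", "lunch at noon?",
                  "click here to reset your password", "meeting notes attached"])
labels = np.array([1, 0, 1, 0])  # 1 = suspicious, 0 = benign

# Turn raw text into fixed-length integer sequences the network can consume.
vectorizer = layers.TextVectorization(output_sequence_length=8)
vectorizer.adapt(texts)

model = tf.keras.Sequential([
    vectorizer,
    layers.Embedding(input_dim=vectorizer.vocabulary_size(), output_dim=16),
    layers.GlobalAveragePooling1D(),
    layers.Dense(16, activation="relu"),    # stacked hidden layers are
    layers.Dense(16, activation="relu"),    # what makes the network "deep"
    layers.Dense(1, activation="sigmoid"),  # probability a message is suspicious
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(texts, labels, epochs=10, verbose=0)

print(model.predict(np.array(["please reset your password now"])))

The toy itself is beside the point; what matters is the shape. Stacked layers let the network learn hierarchical features, and the same building blocks, scaled up, power the applications Dutt describes, on both the defensive and offensive sides.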


“It also helps that there is an enormous amount of social network data and public big-data sets available. Advanced machine learning and deep learning tools are now easily available on open source platforms, and combined with relatively inexpensive computing infrastructure, they effectively make far more sophisticated cyberattacks possible.”


According to Marc Goodman, author of Future Crimes, the vast majority of cyberattacks today are automated. Human hackers going after a single target are now rare; it is far more common to automate attacks using tools of AI and machine learning, from scripted distributed denial-of-service (DDoS) attacks to ransomware, criminal chatbots, and more. The prospect of such automation becoming genuinely intelligent is particularly alarming: AI can generate complex, highly targeted attack scripts with a speed and sophistication no individual hacker can match.


Beyond the criminal activities already described, AI could be used to target vulnerable populations, carry out rapid-fire hacking attacks, develop intelligent malware, and more.


Staffan Truvé, chief technology officer at Recorded Future, said that as AI matures and becomes more of a commodity, the “bad guys” will start using it to improve attack performance while lowering costs. Unlike many of his colleagues, though, Truvé says hackers are not really using AI at the moment, arguing that simpler algorithms (such as self-modifying code) and automation (such as scripted phishing schemes) are working just fine.


“I don’t think AI has yet become a must-have tool for the bad guys,” Truvé told Gizmodo. “I think the reason we haven’t seen more ‘AI’ in attacks is that traditional methods still work: if you can get what you need with a good old-fashioned brute-force approach, why take the time and expense to switch to something new?”
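

The “self-modifying code” Truvé mentions is less exotic than it sounds. The deliberately benign Python toy below (an illustration, not anything attributed to Recorded Future) rewrites one of its own constants each time it runs; malware applies the same idea to its own bytes so that its signature changes between infections and pattern-based scanners fail to recognize it.

# Benign toy illustrating self-modifying code: each run rewrites this
# script's own source file so the RUN_COUNT literal increases by one.
import re

RUN_COUNT = 0  # rewritten on every execution

def bump_own_counter(path):
    with open(path, "r", encoding="utf-8") as f:
        source = f.read()
    # Replace the first occurrence of the counter assignment with an
    # incremented value, then write the modified source back to disk.
    new_source = re.sub(r"RUN_COUNT = \d+",
                        "RUN_COUNT = %d" % (RUN_COUNT + 1),
                        source, count=1)
    with open(path, "w", encoding="utf-8") as f:
        f.write(new_source)

if __name__ == "__main__":
    print("I have run %d times before." % RUN_COUNT)
    bump_own_counter(__file__)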


Moreover, in early September Musk tweeted that wars “may be initiated not by national leaders, but by an AI, if it decides that a preemptive strike is the most probable path to victory.” Last month, Musk joined more than a hundred AI and robotics executives in urging the United Nations to ban lethal autonomous weapons and head off a high-tech arms race.


A total of 116 signatories, including Musk, Google DeepMind co-founder Mustafa Suleyman, and the founder of Universal Robots, put their names to an open letter urging the UN to address the challenges posed by lethal autonomous weapons, often called “killer robots,” and to ban their use worldwide.