Many people have labelled the rise of Artificial Intelligence (AI) as a new type of industrial revolution. AI in this context refers to computers that can perform tasks that previously required human intelligence. Because these systems can be trained to analyse and understand language, imitate human reasoning and make decisions, businesses are increasingly deploying them to automate business processes. The technology has the potential to improve productivity across a range of sectors and, as a result, lower costs.
There are, however, risks associated with the greater use of AI. Like any technology that is not adequately managed or secured, AI can pose cyber security risks for businesses. Criminal networks have been honing this kind of automation for years through the use of botnets, which distribute small pieces of code across thousands of computers so that each infected machine mimics the actions of a legitimate user. The result is mass cyber attacks, email spam campaigns and significant downtime for major websites.
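To make that pattern concrete, here is a minimal Python sketch of one way a defender might spot botnet-style traffic: many distinct machines suddenly behaving like the same user against a single endpoint. The log format, sample data and threshold are assumptions for illustration only; real detection would use time windows and learned baselines.

```python
# Toy sketch: spot one crude botnet signature -- many distinct source
# IPs hammering the same endpoint at once. The log format, data and
# threshold are illustrative assumptions, not a real detection rule.
requests = [
    ("10.0.0.1", "/login"), ("10.0.0.2", "/login"), ("10.0.0.3", "/login"),
    ("10.0.0.4", "/login"), ("10.0.0.5", "/login"), ("10.0.0.9", "/home"),
]

def flag_bot_burst(log, threshold=5):
    """Return paths requested by at least `threshold` distinct IPs."""
    ips_per_path = {}
    for ip, path in log:
        ips_per_path.setdefault(path, set()).add(ip)
    return [path for path, ips in ips_per_path.items() if len(ips) >= threshold]

print(flag_bot_burst(requests))  # ['/login']
```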
In addition to these existing threats, the use of Artificial Intelligence (AI) brings with it a new range of cyber security risks. Businesses must therefore be careful when adopting new technologies and employ multiple layers of cyber defence to combat these threats, bearing in mind that a technology powerful enough to benefit them is equally capable of causing them serious harm.
1. Reliability of Artificial Intelligence (AI):
First of all, there is no guarantee of reliability with this technology: an AI system is only as good as the information fed into it by human experts. Ideally, these systems are designed to imitate the reasoning and decision-making of a highly trained expert. However, a rogue outsider may be able to take over the system, feed it misleading information or teach it to process data inappropriately.
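This kind of data poisoning can be illustrated in a few lines of Python. The toy classifier below learns a decision threshold from labelled examples; when an attacker slips mislabelled examples into the training set, the learned threshold shifts and accuracy on genuine data drops. Everything here (the synthetic data, the midpoint rule, the injection volume) is a simplified assumption for illustration, not a real attack or model.

```python
import random

# Toy illustration of data poisoning. A 1-D "risk score" classifier is
# trained by taking the midpoint between the two class means; all data
# is synthetic and the attack is deliberately simplified.
random.seed(0)
clean = [(random.gauss(0.2, 0.1), 0) for _ in range(100)] + \
        [(random.gauss(0.8, 0.1), 1) for _ in range(100)]

def train_threshold(data):
    """Learn a cut-off: the midpoint between the two class means."""
    mean0 = sum(x for x, y in data if y == 0) / sum(1 for _, y in data if y == 0)
    mean1 = sum(x for x, y in data if y == 1) / sum(1 for _, y in data if y == 1)
    return (mean0 + mean1) / 2

def accuracy(threshold, data):
    return sum((x > threshold) == bool(y) for x, y in data) / len(data)

# A rogue outsider injects malicious examples mislabelled as harmless
# (label 0), dragging the learned threshold upwards.
poisoned = clean + [(random.gauss(0.9, 0.05), 0) for _ in range(150)]

print("model trained on clean data:   ", accuracy(train_threshold(clean), clean))
print("model trained on poisoned data:", accuracy(train_threshold(poisoned), clean))
```

Run it and the poisoned model misclassifies a noticeable share of the genuine examples that the clean model got right: the system has been taught to process the data inappropriately.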
2. Learning Processes of Artificial Intelligence (AI):
Secondly, AI systems are trained to imitate the analytical processes of the human brain, and this is often done not through traditional step-by-step instructions but through example, repetition and observation. If the system is sabotaged or fed incorrect information, the machine can effectively learn bad behaviour. Moreover, because most of these systems are designed to operate autonomously, they often run under service accounts with non-expiring passwords. A hacker who obtains the bot's login can therefore access a greater volume of data than any single individual is allowed, and can remain inside systems for long periods while avoiding detection.
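One practical countermeasure is to audit bot credentials the way you would human ones. The sketch below, built on an entirely hypothetical account inventory and a 90-day rotation policy, flags service accounts whose passwords never expire or have gone stale.

```python
from datetime import datetime, timedelta

# Hypothetical inventory of bot/service accounts. The field names and
# the 90-day rotation policy are illustrative assumptions.
accounts = [
    {"name": "chatbot-prod", "last_rotated": datetime(2020, 1, 15), "expires": False},
    {"name": "report-bot",   "last_rotated": datetime(2024, 11, 1), "expires": True},
]

MAX_AGE = timedelta(days=90)

def audit(accounts, now):
    """Flag accounts whose credentials never expire or are stale --
    exactly the long-lived logins a hacker can quietly reuse."""
    return [a["name"] for a in accounts
            if not a["expires"] or now - a["last_rotated"] > MAX_AGE]

print(audit(accounts, now=datetime(2024, 12, 1)))  # ['chatbot-prod']
```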
A well-known example of a new technology having unintended consequences is Microsoft’s Twitter bot Tay. The bot was designed to learn how to communicate with young people on social media. Shortly after going live, however, internet trolls identified flaws in its learning algorithms and fed the bot racist and sexist content. As a result, Tay began streaming inappropriate answers on social media to millions of followers. Even after Microsoft made some amendments, Tay tweeted about smoking drugs in front of the police!
3. Automation Doesn’t Ensure Protection:
Finally, many people believe that the automation of AI-driven processes means the system is protected from hacks. This belief is false. Consider the chatbots many businesses use to collect personal information about users and respond to their inquiries, some of which are designed to keep learning how to do their job better over time. Like other technologies, these chatbots can be abused by hackers to scale up fraudulent transactions, steal information and penetrate systems. This risk means businesses will have to keep developing advanced AI to prevent and counter such attacks.
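Because chatbots sit on top of stores of personal data, limiting what they retain reduces what a successful attacker can steal. As a minimal sketch, the Python below redacts two obvious kinds of personal data from a message before it is logged or reused for training; the two patterns are illustrative assumptions, and a production system would need far more thorough detection.

```python
import re

# Minimal sketch: scrub obvious personal data from chatbot messages
# before they are stored. Not an exhaustive PII detector.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "card":  re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(message: str) -> str:
    for label, pattern in PATTERNS.items():
        message = pattern.sub(f"[{label} removed]", message)
    return message

print(redact("My card is 4111 1111 1111 1111, email jo@example.com"))
# -> "My card is [card removed], email [email removed]"
```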
The Future of Artificial Intelligence (AI):
Despite the risks, there is huge potential for cyber security professionals to use AI to their advantage. For example, as systems become more effective at identifying malicious activity, they may become self-healing: by learning how hackers exploit new approaches, they will be able to update controls and patch systems in real time.
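What might "self-healing" look like in practice? At its simplest, it is a closed loop that detects an indicator of compromise and applies a mitigation without waiting for a human. The sketch below uses stubbed-out detection and response functions, which stand in for real components (an IDS feed, a learned model, a firewall API), purely to show the shape of such a loop.

```python
# Conceptual sketch of a "self-healing" control loop: watch for an
# indicator of compromise and apply a mitigation automatically. The
# detector, threshold and response are placeholder stubs.
def failed_logins_per_minute(source_ip):
    """Stub: in practice this would query an IDS or log pipeline."""
    return {"203.0.113.7": 120}.get(source_ip, 2)

def block_ip(source_ip):
    """Stub: in practice this would push a firewall rule."""
    print(f"blocking {source_ip}")

def self_heal(watched_ips, threshold=50):
    for ip in watched_ips:
        if failed_logins_per_minute(ip) > threshold:
            block_ip(ip)  # respond in real time, without waiting for a human
            # a learning system could also adjust `threshold` from feedback

self_heal(["203.0.113.7", "198.51.100.2"])  # blocks only the noisy IP
```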
The overriding message is that while Artificial Intelligence (AI) technologies have the potential to drive huge increases in productivity, they also present a number of security risks for firms. With AI breaches, the damage can quickly become enormous, so organisations need to address the security risks now rather than waiting until a breach occurs.