Chatbots Need Protection

23rd November 2020, Kathmandu    

VA (virtual assistant) chatbots are used across industries for many types of assistance. They are built on machine learning and make decisions accordingly. They are meant to assist us, and we depend on them, but are they protected? Do they have proper security? Are their decisions always right? If not, various vulnerabilities can arise.

Below, we discuss the importance of chatbot security.

What are the problems of VA chatbots?

Chatbots use machine learning to make decisions, so they operate on what they have learned from data. For example, we stop our car at a red light or a stop sign and move when the light turns green. In the case of chatbots and other ML systems, these signals are detected in the form of patterns.

Sometimes these ML-driven machines can be attacked not only internally but also externally. In one such incident last year, a Tesla crashed and took the life of its owner. The car had been left in autopilot mode; everything was normal until it failed to stop at a stop sign and crashed. Someone had placed a sticker on the sign, so the autopilot could not detect the stop sign, which caused the accident. This may be a one-in-a-thousand case, but it is still a major problem for machine learning: the system was defeated by a piece of tape on a sign.
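The sticker attack described above is an instance of an adversarial perturbation: a small, targeted change to the input flips the model's decision. The following is a minimal sketch of that idea against a toy linear classifier; the weights, inputs, and the "stop sign" framing are all hypothetical, not taken from any real system.

```python
# Toy classifier: score > 0 means "stop sign detected" (weights are made up).
w = [0.5, -0.3, 0.8]

def sign(v):
    return 1.0 if v > 0 else -1.0 if v < 0 else 0.0

def classify(x):
    return sum(wi * xi for wi, xi in zip(w, x))

x = [1.0, 1.0, 1.0]   # clean input: score = 1.0, i.e. "stop sign detected"
eps = 0.9             # small perturbation budget (the "sticker")

# Push each feature against the sign of its weight (an FGSM-style step):
x_adv = [xi - eps * sign(wi) for xi, wi in zip(x, w)]

print(classify(x))      # 1.0   -> detected
print(classify(x_adv))  # ~ -0.44 -> no longer detected
```

Each feature was nudged by at most 0.9, yet the classification flipped, which is exactly why a sticker can defeat a vision model while remaining obvious to a human.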

VA is having a major impact on the global IoT ecosystem and on home automation as well. How secure are these systems?

The simple answer is that they don't have any security embedded in them. They don't have any authentication or verification, and they lack layers of security, so yes, they can easily be attacked. These systems are particularly vulnerable because they all operate on a network. Once somebody is inside the system, there are various ways to affect the user. For example, people now interact with and share confidential information through home automation devices like Alexa. If an application sits on top of the home automation, an attacker can potentially extract that information with ease.
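One basic layer of the missing verification could be request signing between a chatbot platform and its backend, so that forged messages are rejected. Here is a minimal sketch using an HMAC over the request body; the shared secret and message format are hypothetical, and in practice the secret would come from a secrets manager rather than source code.

```python
import hmac
import hashlib

# Hypothetical shared secret known to both the chatbot platform and the backend.
SHARED_SECRET = b"replace-with-a-real-secret"

def sign_body(body: bytes) -> str:
    """Signature the trusted sender attaches to each request."""
    return hmac.new(SHARED_SECRET, body, hashlib.sha256).hexdigest()

def verify_request(body: bytes, signature: str) -> bool:
    """Reject any message whose signature does not match."""
    expected = sign_body(body)
    # compare_digest does a constant-time comparison to avoid timing leaks.
    return hmac.compare_digest(expected, signature)

msg = b'{"user": "alice", "text": "turn off the lights"}'
print(verify_request(msg, sign_body(msg)))        # legitimate request
print(verify_request(msg, "forged-signature"))    # attacker without the secret
```

This is only one layer; it verifies the sender, not the content, so it complements rather than replaces the conversational-level checks discussed below.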

Nowadays, organizations usually prioritize coolness, often making security an afterthought. What is the result of this?

It doesn't matter how many trendy features your VA chatbot has; if it lacks security, it will expose many vulnerabilities. Nobody has been protecting chatbots properly, because protecting a chatbot is very different from deploying a firewall; the two are not comparable. Chatbots are vulnerable not only at the HTTP level but also at the conversational level.
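A firewall never sees a conversational-level attack, so a separate screening step is needed on the message text itself. The sketch below is one hypothetical example of such a filter: it caps message length and redacts strings that look like card numbers or credentials before the text reaches the bot's pipeline. The patterns are illustrative, not production-grade.

```python
import re

MAX_LEN = 500  # reject oversized messages outright (assumed limit)

# Illustrative patterns for things a user should not be sending to a bot:
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")                 # card-like digit runs
SECRET_RE = re.compile(r"(?i)\b(password|api[_ ]?key)\b\s*[:=]\s*\S+")

def screen_message(text: str):
    """Return (screened_text, status); screened_text is None on rejection."""
    if len(text) > MAX_LEN:
        return None, "rejected: message too long"
    redacted = CARD_RE.sub("[REDACTED]", text)
    redacted = SECRET_RE.sub("[REDACTED]", redacted)
    return redacted, "ok"

clean, status = screen_message("my password: hunter2 please remember it")
print(status)  # ok
print(clean)   # the credential is redacted before the bot ever sees it
```

Screening at this layer limits what an attacker can later extract from logs or training data, which is exactly the conversational-level exposure a network firewall cannot address.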

Irrespective of how you install the security system, it is still possible for an attacker to get into the system and extract data, or to manipulate it at the backend in such a way that you are not even aware of it.

Are CISOs considering attacks against chatbots as an emerging threat?

No, many of them are unaware of this threat vector. If you ask them, their first reaction is, "We have to protect this?" The answer is yes. You need to make sure your data is protected. If you are spending millions on quality functionality, it is your responsibility to ensure that no data can be stolen through the chatbot either.

So how do we make VA chatbots secure?

Firstly, if you are using anything in machine learning, the data can be poisoned, and you should know that more than 80% of assisted chatbots are built on an open-source algorithm or an open-source training data set. If you are building on open source, you must not take it on faith. There are many examples of back-door channels with malicious code hidden inside the algorithm, so be careful whenever you take anything from open source. Finally, when you set up the architecture, analyze what goes into and comes out of the chatbot.
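One concrete way to avoid taking an open-source artifact on faith is to verify its checksum against the value published by the project before loading it. The sketch below shows that precaution; the file contents and expected hash here are stand-ins, since no real dataset is named in the interview.

```python
import hashlib
import os
import tempfile

def sha256_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Hash a file in chunks so large model/dataset files fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Demo with a throwaway file standing in for a downloaded training set:
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(b"open-source training data")

# In practice this value would come from the project's release page.
expected = hashlib.sha256(b"open-source training data").hexdigest()

if sha256_of_file(path) == expected:
    print("checksum OK: safe to load")
else:
    print("checksum mismatch: do not load")
os.remove(path)
```

A matching checksum only proves the download was not tampered with in transit; it does not prove the upstream data itself is clean, so it complements, rather than replaces, auditing what goes into and comes out of the chatbot.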

Chaitanya Hiremat
(Extracted from the exclusive interview of Chaitanya Hiremat, CEO of San Francisco-based AI firm, Scanta Inc with CISO MAG)
