When Chatbots are Fraudbots: Who Are Your Users Chatting With?


Today’s fraud landscape presents a plethora of potential attack vectors, and criminals are always coming up with more.

Chatbots, a feature found on a variety of sites, present a new vector that has encouraged attackers to turn their attention to the profit potential of this well-meaning tool.

What Is a Chatbot?

A chatbot is a tool that facilitates automated interactions between customers and services – it normally takes the form of a window in the bottom corner of a webpage. Using chatbots, organizations can provide a “smart,” online assistant that can help users to find the information they are looking for through quick and accurate replies.

What Is at Stake?

Chatbots started as a tool for online retailers to quickly solve their customers’ problems. Now, they’ve expanded to a range of industries, including financial institutions. Users who have issues logging into their bank’s transactional website can count on a reliable, digital customer service “representative” that uses natural-sounding language to help them resolve their issues.

On a typical banking transactional website with a legitimate chatbot, the chatbot may ask a customer experiencing difficulties for information such as an account number, ID number, and other Personally Identifiable Information (PII), which is then, ideally, stored securely. But what happens if that information travels across the internet in plain text? What if the database where it is stored is easily hackable? Even more alarming, how do customers know they’re speaking with a legitimate bot assistant?

What Are Online Assistants Exposed To?

There are many open-source chatbot engines available to help organizations deploy chat windows on their websites – some much smarter than others. However, even free products come at a cost, and in the case of chatbots, that cost is usually the information they capture.

A few examples of chatbots that have been turned into fraud machines:

Cyber Torture:

With cyber torture, fraudsters attack the chat engine itself, sending tricky questions or injected database commands intended to break the engine and expose stored PII. An unsecured chatbot may respond to an injected SQL query, or even to a question such as “Who are you?”, in a way that reveals the architecture behind the chat window, giving attackers access to privileged information such as account numbers, social security numbers, and username and password combinations – which can then be used against an organization and its customers.
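To make the injection risk concrete, here is a minimal sketch of a chatbot backend looking up an account from text typed into the chat window. The table, column names, and lookup function are hypothetical; the point is that a parameterized query treats the user’s input strictly as data, so an injection attempt matches nothing instead of rewriting the query.

```python
import sqlite3

def lookup_account(conn, user_input):
    # Parameterized query: the "?" placeholder ensures the driver never
    # interprets user_input as SQL, defeating injection attempts.
    cur = conn.execute(
        "SELECT holder FROM accounts WHERE number = ?", (user_input,)
    )
    row = cur.fetchone()
    return row[0] if row else None

# Hypothetical in-memory database standing in for the bot's PII store.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (number TEXT, holder TEXT)")
conn.execute("INSERT INTO accounts VALUES ('12345', 'Alice')")

print(lookup_account(conn, "12345"))        # legitimate lookup
print(lookup_account(conn, "' OR '1'='1"))  # injection attempt finds no row
```

Had the query been built by string concatenation instead, the second call could have matched every row in the table.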


Data Sniffing:

Chatbots have access to privileged information – naturally, fraudsters want to gain access to the treasure trove of information input into chatbots. By tapping the established communication channel between a chatbot and a user, much like a man-in-the-middle attack, an attacker can intercept communications and directly receive a user’s PII. It’s easy to see then, how free chat engines can result in disaster if confidential information is sent to unauthorized third parties.
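A basic defense against this kind of interception is forcing the chat channel over properly verified TLS. The sketch below, using Python’s standard `ssl` module, shows a client-side context configured so that messages cannot be read or altered in transit; the endpoint name in the comment is hypothetical.

```python
import ssl

# Build a client TLS context for the chat channel.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy protocols

# The defaults already require certificate and hostname verification;
# asserting them explicitly makes an accidental downgrade fail fast.
assert context.check_hostname
assert context.verify_mode == ssl.CERT_REQUIRED

# A real client would then wrap its socket before sending any chat message:
#   with socket.create_connection(("chat.example-bank.com", 443)) as sock:
#       with context.wrap_socket(sock, server_hostname="chat.example-bank.com") as tls:
#           tls.sendall(message_bytes)
```

Disabling `check_hostname` or setting `verify_mode` to `CERT_NONE` – a common shortcut in sample code – is exactly what makes a man-in-the-middle attack on the chat channel practical.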


Spoofed Identity:

Many chatbots are available as mobile applications. As a result, fraudsters are flooding app stores with fake chatbots that use legitimate brand names – just as fake websites with the look and feel of a legitimate one offer fraudsters an easy way to convince users to hand over their sensitive data.

Compounding the problem is how difficult it is for most users to tell a well-intentioned chatbot on a legitimate website from one with nefarious purposes. Adware and web injections help attackers create believable websites and apps, and can even display a fake, unexpected pop-up chatbot on a legitimate website. From there, it’s just a matter of exploiting users’ blind trust to extract as much sensitive information as possible.

Fighting Back Against Fraudsters

Is your chatbot infrastructure secure? Performing input validation before deploying a chatbot lets you proactively identify and fix vulnerabilities that allow malicious commands to be entered into the chat window. The information your bot captures is highly valuable, so set up strict controls to prevent unauthorized third parties from gaining access to the stored data.
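As an illustration of such input validation, the sketch below screens chat input before it ever reaches the engine. The field pattern, length limit, and injection markers are assumptions for the example, not a complete specification – filtering should complement, never replace, parameterized queries on the backend.

```python
import re

# Hypothetical rules: an account-number field and a free-text chat field.
ACCOUNT_RE = re.compile(r"\d{6,12}")
MAX_MESSAGE_LEN = 500

def validate_account_field(text):
    """Accept only a plausible account number (6-12 digits, nothing else)."""
    return ACCOUNT_RE.fullmatch(text.strip()) is not None

def validate_free_text(text):
    """Coarse screening for general chat messages."""
    if len(text) > MAX_MESSAGE_LEN:
        return False
    # Reject obvious SQL/script fragments before the engine sees them.
    return re.search(r"(--|;|<script|\bUNION\b|\bDROP\b)", text, re.I) is None

print(validate_account_field("12345678"))            # accepted
print(validate_free_text("I can't log in, help!"))   # accepted
print(validate_free_text("1'; DROP TABLE users--"))  # rejected
```

An allowlist approach (define what valid input looks like and reject everything else), as in the account-number check, is generally more robust than trying to enumerate every dangerous pattern.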

More importantly, does your organization have a brand protection strategy? If so, you’re one step ahead of the game – if not, you’re already behind. Constant monitoring of areas such as email traffic, app stores, and websites can detect impostors that are using your brand name against your customers. Web injection monitoring can also prevent users from falling victim to unauthorized website modifications.

To learn more about how to protect your brand from risks such as chatbot fraud, click here.

