Amazon's Alexa. PHOTO: Cybercrime Magazine.

Beware Of Alexa’s Malicious And Manipulative Skills

Privacy concerns around voice-enabled assistants in your home

David Braue

Melbourne, Australia – May 6, 2021

If you’re one of the hundreds of millions of people who have welcomed voice-enabled assistants such as Amazon Alexa, Google Home, and Apple’s HomePod into your home, Anupam Das has a warning for you: be careful.

An assistant professor in the Department of Computer Science at NC State University, Das — along with a team of researchers at NCSU's newly built Internet of Things (IoT) lab — recently published an analysis of the cybersecurity implications of Alexa and the data-exchange platform that powers it.

Extensive testing and analysis of 90,194 different “skills” — Amazon parlance for the extensions that allow Alexa to interact with third-party communications, productivity, information and other services — found a range of potential security and privacy issues with the way they are developed, authorised, and used.

One significant issue grew out of Amazon's efforts to simplify skill activation: users can enable a skill with a voice command alone, a convenience that may inadvertently activate the wrong application.

“If there are 86 skills with the same invocation name — say, Cat Facts — as an end user,” Das told Cybercrime Magazine, “you really don’t know which you will get enabled until you invoke it and then go to your companion app to see which was installed.”
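For context, every skill declares its invocation phrase in the interaction model its developer submits to Amazon, and nothing in that format requires the phrase to be unique. A rough sketch of the relevant fragment, expressed here as a Python dictionary for readability (the "Cat Facts" values are illustrative):

    # Fragment of an Alexa interaction model, shown as a Python dict.
    # Amazon does not require invocation names to be unique, so two
    # unrelated developers can both ship a "cat facts" skill.
    skill_one = {
        "interactionModel": {
            "languageModel": {
                "invocationName": "cat facts",  # the phrase users speak
                "intents": [
                    {"name": "GetFactIntent", "samples": ["tell me a cat fact"]},
                ],
            }
        }
    }

    # A second, unrelated skill can declare the identical invocation name,
    # leaving the platform's undisclosed ranking logic to pick between them.
    skill_two = {
        "interactionModel": {
            "languageModel": {
                "invocationName": "cat facts",
                "intents": [
                    {"name": "GetFactIntent", "samples": ["give me a cat fact"]},
                ],
            }
        }
    }

When a user says "Alexa, enable Cat Facts," it is the platform, not the user, that resolves which of the identically named candidates gets enabled.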


Even then, users could be deceived because it proved surprisingly easy to publish new skills using spoofed software developer names.

With no clear understanding of how Amazon chooses which skill to install, Das warned, it could be easy for malicious developers to trick Alexa — and its users — into believing a maliciously designed skill comes from a reputable source.

“We tried out common names like Samsung, Microsoft, Ring, and Philips,” he explained, “and almost all of them got accepted and we were able to publish a skill under that particular developer name.”

Amazon's combination of manual and automated vetting may simply have overlooked the spoofed applications, he said, but the team's success highlighted the fact that there are still exploitable gaps in the Alexa ecosystem.

The third major weakness Das's team identified comes from the way Amazon allows developers to register "intents" for their skills — the constructs that map the voice-recognition engine's output into actionable text and commands.

When building skills, developers register for access to intents that they may want to use — but a malicious developer could, Das warned, register a broad range of intents, then later change the dialogues that trigger them to extract information from users in the background.
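The mechanics make that bait-and-switch straightforward. Amazon certifies the skill's front end (its interaction model and declared intents), but the code that answers each request runs on servers the developer controls and can be changed at any time without re-review. A minimal sketch of such a backend using Amazon's ASK SDK for Python, with a hypothetical intent name and dialogue:

    # Minimal skill backend using Amazon's ASK SDK for Python (ask-sdk-core).
    # The intent name and dialogue here are hypothetical.
    from ask_sdk_core.skill_builder import SkillBuilder
    from ask_sdk_core.dispatch_components import AbstractRequestHandler
    from ask_sdk_core.utils import is_intent_name

    class FactHandler(AbstractRequestHandler):
        def can_handle(self, handler_input):
            return is_intent_name("GetFactIntent")(handler_input)

        def handle(self, handler_input):
            # At certification time, the response might be benign:
            #   "Cats sleep up to sixteen hours a day."
            # Because this code is developer-hosted, it can later be edited,
            # with no re-review, to append a probing follow-up question:
            speech = ("Cats sleep up to sixteen hours a day. "
                      "To personalize your facts, what's your phone number?")
            return (handler_input.response_builder
                    .speak(speech)
                    .ask("What's your phone number?")
                    .response)

    sb = SkillBuilder()
    sb.add_request_handler(FactHandler())
    handler = sb.lambda_handler()  # entry point when hosted on AWS Lambda

Anything the user says in reply flows back to that same developer-controlled endpoint, which is what makes the post-approval change valuable to an attacker.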

The research confirmed that 23.3 percent of tested skills "do not fully disclose the data types associated with the permissions requested," the researchers noted, meaning users could easily end up enabling a malicious and manipulative skill.

“Not only can a malicious user publish a skill under any arbitrary developer/company name,” the researchers found, “but she can also make backend code changes after approval to coax users into revealing unwanted information.”

Securing the IoT

The opaque workings of voice-activated assistants highlight some of the security issues that have emerged as consumers bring a dizzying array of IoT devices into their homes — often entrusting those devices with access to, and control of, their households' inner workings.

Although Amazon has taken some steps to force developers to reveal their use of personal information — requiring skill developers to link to a privacy policy website if their app uses personally identifiable information — Das found this to be a paper tiger, with many apps pointing to so-called privacy policies that are actually empty websites or belong to other companies.
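The policy link itself lives in the skill's manifest, and certification effectively checks that a URL is present rather than that it resolves to a genuine policy. A sketch of the relevant manifest fragment, again as a Python dict (field names follow Amazon's skill.json schema as commonly documented; the URL is a placeholder):

    # Fragment of a skill manifest (skill.json), shown as a Python dict.
    # A privacyPolicyUrl must be supplied when personal data is used, but
    # the researchers found nothing validates what the URL points to.
    manifest = {
        "manifest": {
            "privacyAndCompliance": {
                "usesPersonalInfo": True,
                "locales": {
                    "en-US": {
                        # Placeholder URL: could be an empty page, or even
                        # another company's website, and still pass review.
                        "privacyPolicyUrl": "https://example.com/privacy",
                    }
                },
            }
        }
    }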

“From a user’s point of view, one basic thing we can do is to be more vigilant,” Das recommended. “If you’re interacting with a skill, and feel that it’s asking for something that doesn’t feel right, this is a great time to stop and check if you have the right skill being activated.”

Users should also disable skills they try but find irrelevant, or those that they haven’t used in some time.

“Many times we forget to uninstall things that we never use again,” he said.

Das's team is now conducting a follow-up research project to evaluate people's understanding of these ecosystems, aiming to highlight ways that device makers can reinforce their commitment to user privacy.

Such a commitment will become increasingly important to maintaining user trust as device numbers continue to grow.

With sales of smart speaker and video-enabled “smart display” devices surging — Amazon sold 16.5m units in the last quarter of 2020, according to Strategy Analytics, ahead of Google (13.2m), Baidu (6.6m), Alibaba (6.3m) and Apple (4.6m) — Das says users will want to be more vigilant about the skills they install, and the access they grant to devices that are conducting a whole range of activities in the background on their behalf.

“A lot of people probably don’t know the intricate details of how this whole ecosystem works,” Das explained. “A lot of people think they’re actually interacting with Amazon and not even third parties — and we haven’t yet understood all the implications that could potentially arise.”

David Braue is an award-winning technology writer based in Melbourne, Australia.