Smart voice assistant (SVA) devices, such as the Google Home or Amazon Echo, took a top spot in the rankings on nearly every “top tech gifts for 2019” list I could find. Of course, this isn’t a surprise when in 2018 "U.S. ownership rose 40%...to reach 66.4 million with total smart speakers in use rising to 133 million” worldwide, with Amazon Echo maintaining a 61% market share as of 2019. These fun little towers or smart displays are a convenient way to centralize your home management, entertain your kids' endless questions about life, or simply vocally request a song or podcast while your hands are covered in sugar cookie dough. However, home assistants are now finding their way into the office space as a functional and desired piece of computing equipment. They are sprinkled across business of all sizes, resting quietly in the corner of corporate boardrooms as comfortably as on the counters of small businesses. As a lover of technology, I can see the appeal from every angle, but my profession also means I immediately jump to the worst possible misuse of voice assistants.
The core threats known to many users focus almost exclusively on someone listening in or recording private conversations. In a study done by the USENIX Association for the 15th Symposium on Usable Privacy and Security, the researchers discovered that unwanted listening from these devices was one of the top described risks by users, but those same users also repeatedly failed to consider the people, “they share the device with, known individuals with ‘malicious skills’, or the Big Tech companies themselves was one of the core threats and risks.” It is absolutely understandable that users and IT personnel are concerned about an unknown entity eavesdropping on a private conversation because this action goes against our innate desire for privacy and it also ranks pretty high on the creep factor. However, while we address the risk of passive listening in this article, it does not pose the only danger when incorporating these devices into an office space. SVA devices may provide some unique capabilities for organizations, but they definitely introduce significant threats companies need to be aware of because most of these devices, even Alexa for Business, have "not yet been rigorously tested by the security community, and the potential risks, security holes, and vulnerabilities are unknown.” The risks that have been identified include, of course, the passive listening as a form of corporate espionage, fraud, or theft, surreptitious or unknown third-party data collection and sharing, and Internet of Things (IoT) device management and monitoring.
Threat 1: Passive Listening
SVA devices are intentionally designed to listen, record, and respond, and it’s that very function that concerns most users right off the bat. It poses an exponentially increased risk when an SVA is operational within a business IT environment because the stakes are higher. SVA's are designed for convenience, ease of use, and minimal management by home users. However, people are starting to introduce these machines into the workplace - a space SVA devices are not designed to accommodate. The passive listening feature embedded in all voice assistant devices is a vulnerability poised and prepped for exploitation that pose a significant risk for confidentiality of information and sensitive data. The core feature of an SVA is the passive and perpetual listening feature that enables the devices to interact with the user’s voice commands at any time, even if the interaction is accidental. However, the key function that makes these nifty devices so desirable and marketable is the same function that is vulnerable to misuse and hacking for the purpose of corporate espionage and data theft.
SVA’s are vulnerable to attack from individuals with access to the same wireless network, which does include, but certainly isn’t limited to, an insider threat. A research duo from China presented a proof of concept attack scenario at DefCon 2017, in which they were able to passively record and stream audio from an Echo's microphone to a remote attacker by chaining a series of bugs in Amazon's Echo to exercise full control over the device. Luckily, the attack was only a proof of concept model and only affected 2nd Gen Echo devices. More recently in 2019, the Security Research Lab demonstrated that Amazon “Skills” and Google “Actions” applications can be used to exploit device functionality and gain access to sensitive data or access credentials. The attacks showed that by developing a benign application, having it reviewed and approved by Amazon or Google, and then modifying the app, the user can be tricked into saying something akin to “An important security update is available for your device”. The device then requests a user password to initiate the update and sends the credentials and/or other sensitive data captured from the device to the attackers.
New malware and phishing attacks are not the only threats that conspire against SVA devices. The same researchers from China also discovered a way in which SVA’s can be tampered with by using manual soldering to remove an Echo’s firmware chip. From the chip, the researchers extracted the resident firmware, modified it and replaced the chip back into the motherboard in under 15 minutes. Once finished, the machine is owned entirely by an attacker who could install recording software, disable the delete functions, or alter other commands to suit their nefarious needs. Again, the likelihood this could happen isn’t incredibly high, but the impact of physical tampering could be devastating to an organization resulting in exposure of confidential data or the theft of intellectual data.
Threat 2: Data Collection
Security concerns don’t stop at just the vulnerability of internal and potentially external attackers to misuse passive listening for their own gain but also include the collection and security of data recorded and stored by third-party corporations including Amazon, Google, and Apple. Data gathered from interactions is stored both on the device and in the cloud. A simple request to an Amazon Echo or Google Home will immediately play back any and all conversations you or other employees have ever had with the device, and without explicitly implementing a process to initiate a data removal process, these recordings may be stored indefinitely. And even though all of the SVA manufacturers state that they will delete voice recordings of consumers, many retain transcripts and/or a limited amount of voice recording for product development.
Amazon admitted in a letter from U.S. Senator Chris Coon (D-DE) that they actively store “…records of customers’ Alexa interactions, including records of actions” taken by the device. This affirmation came after an informational inquiry was sent from the Senators office to Amazon regarding the data they collect and why. In the same letter to Senator Coon, Amazon’s representatives affirm the allegations of storing consumer voice recording data by stating “We retain customers’ voice recordings and transcripts until the customer chooses to delete them” … and that transcripts “do not remain in any of Alexa’s other storage systems”. But, the response from Amazon does not specify the breadth of Amazon’s corporate storage of this data or even address that they employee third parties to also review that data. Amazon representatives stated in a subsequent interview with Bloomberg that the company prohibits employees from directly accessing identifiable information of consumers. But Bloomberg quickly discovered that actually employees do have access to account numbers, first names, and device serial numbers - all of which seem strangely identifiable to me when paired with the rest of the data Amazon can and will collect through their other service offerings. And too often identifiable or confidential information slips through the cracks at these corporations and prohibition on direct access may not prevent indirect or accidental access or even information leakage, such as when a German researcher requested data about his personal activities and “inadvertently gained access to 1,700 audio recordings of someone he didn’t know”.
Google Home, Apple HomePod, and other SVA devices come with similar vulnerabilities and threats. But according to their own privacy policies, Google still records and stores every human-to-Google Home interaction without explicit management by the device owner. One thing Google has over Amazon is that it only allows Google employees, not third-party contractors, to review any sampled voice recordings or transcripts. Apple, the first to admit without prompting, that they store recorded voice interactions for development purposes but also specify that only Apple employees are tasked with listening to specific voice recordings to improve product functionality and language processing capabilities. Unfortunately, none of these organizations is completely transparent about the security protocols they have put into place to protect data in transit or while stored for corporate use other than the reassurance that information is “anonymized”, and that security is a priority. But a threat actor with access to an SVA device, the online account, and even the network that the device resides upon can construct a series of attacks for financial fraud, blackmail, or corporate espionage against an organization.
Amazon, Google, and other SVA providers have put the onus on the device owners to delete their confidential data, otherwise everything is retained. But SVA providers continue to store voice transcripts, device responses and actions, and even voice recordings regardless of whether or not that data is deleted systematically on a set schedule by an IT administrator. The fact that data is collected by these organizations isn’t necessarily wrong – it is standard practice for continuous development. However, data collection through SVA devices presents a serious risk to organizations with highly sensitive or confidential data. And it is wrong that neither consumers nor organizations can explicitly and contractually opt out of data collection activities.
Threat 3: Device Management
Smart voice assistant devices were not built to be hosted within a managed IT environment nor to meet the security needs of enterprise-grade IT systems. They were built for quick dispersion and feature-happy consumers. So, off the shelf the system design is lacking in administrative capabilities and customizable security functions and they come with many of the same exposure risks and suffer from minimal security testing as typical, ‘dumber’ IoT devices. So, the task of securing and managing one of these devices in a corporate environment may far outweigh the value of their use case for many organizations.
All SVA devices provide some control over administrative tasks such adding/removing users, rooms, and other devices, setting the “Wake Word”, and ensuring the device sits behind a firewall and within designated IoT zone. But outside of those actions, none of the devices really come with practical security controls embedded into the device itself or through the online interface. Administrators will have to implement a series of technical and procedural control to appropriately manage and secure business-sanctioned SVA devices while also preserving the features that make these devices so desirable. This means the organization should be prepared to dedicate at least some resources to deploy, update, scan, and monitor SVA devices while they reside on the network. Amazon and Google now allow for apps to be developed for the Echo and the Google Home, referred to as Skills and Actions, respectively, that potentially could be used for some security functions, such as encrypting sensitive data or monitoring for certain command activities. However, this option now puts the responsibility for integrated security features on in-house or third-party developers rather than on the manufacturing companies. And custom security apps often come with their own list vulnerabilities and threats that will have to be managed.
Organizations that choose to allow SVA devices on their network should only choose to do so after conducting an in-depth risk assessment and cost/benefit analysis for incorporating SVA’s on their networks. The risks will differ significantly based on the intended purpose of the device, such as if the SVA is solely for the purpose of employee entertainment or for managing front office tasks, then the risk may be minimal. If the SVA sits in a conference room where executives or high-value targets such as IT personnel often meet and discuss confidential information, the negative impact of an attack dramatically increases. So it is critical that decision-makers have a very good understanding of how these devices can affect the organization from a security perspective.
IoT attacks are on the rise and smart voice assistants are not immune to the changes in the threat landscape. As predicted, smart voice assistants are very susceptible to exploitation by malware and threat actors and can be used to facilitate or fall victim to botnets, DDoS attacks, espionage, and social engineering campaigns. Attack techniques will only diversify as companies like Amazon and Google really start to market SVA devices towards organizations and as business adoption increases. Organizations must practice strong security hygiene that includes scanning for and inventorying the devices outside of servers and desktop machines which may exist on their network, because it’s those undiscovered or unmanaged IoT machines, like SVA’s, that will be maliciously exploited. However, the responsibility for securing SVA devices within an organization does not fall squarely on the shoulders of IT but will require user security training and physical security. Outside of an organization, vendors, third-party application developers, and manufacturers of SVA must start to incorporate security from start to finish within the development and production lifecycles. SVA and other IoT device vulnerabilities arise because security development and testing is neglected or sidelined during software development and/or manufacturing processes until an exploit is discovered by researchers, or worse, attackers who have already targeted and harmed consumers.
 Kinsella, Bret. Voicebot.ai. “Smart Speaker Ownership Rises 40% in 2019 to 66.4 Million and Amazon Echo Maintains Market Share”. March 07, 2019. https://voicebot.ai/2019/03/07/u-s-smart-speaker-ownership-rises-40-in-2018-to-66-4-million-and-amazon-echo-maintains-market-share-lead-says-new-report-from-voicebot/
Abdi, Noura; Ramokapane, Kopo; Such, Jose. USENIX The Advanced Computing Systems Association. “More than Smart Speakers: Security and Privacy Perceptions of Smart Home Personal Assistants”. August 12, 2019. https://www.usenix.org/system/files/soups2019-abdi.pdf
Yakowicz, Will. www.inc.com. “Amazon Alexa for Business Security Risks”. December 01, 2017 https://www.inc.com/will-yakowicz/amazon-alexa-for-business-security-risks.html
 Greenberg, Andy. Wired. “Hackers Found a Way to Make the Amazon Echo a Spy Bug”. August 12, 2018. https://www.wired.com/story/hackers-turn-amazon-echo-into-spy-bug/
 Anonymous. TrendMicro. “Alexa and Google Home Devices can be Abused to Phish and Eavesdrop on Users”. October 24, 2019. https://www.trendmicro.com/vinfo/se/security/news/vulnerabilities-and-exploits/alexa-and-google-home-devices-can-be-abused-to-phish-and-eavesdrop-on-users-research-finds
 Office of Senator Chris Coon, U.S. Senate. June 28, 2019. https://www.coons.senate.gov/
Day, Matt; Turner, Giles; Drozdiak, Natalia. Bloomber Technology. “Amazon’s Alexa Team Can Access Users’ Home Addresses”. April 24, 2019. https://www.bloomberg.com/news/articles/2019-04-24/amazon-s-alexa-reviewers-can-access-customers-home-addresses
 Brown, Jennings. Gizmodo. “The Amazon Alexa Eavesdropping Nightmare Came True”. December 12, 2018. https://gizmodo.com/the-amazon-alexa-eavesdropping-nightmare-came-true-1831231490