
Your personal risk from consumer IoT devices

This is Next Mile’s guide to help connected device users understand the day-to-day risk presented by their connected products.

It’s true! Universally poor IoT device security perpetually exposes your sensitive data. While your acute risk from weak security is relatively low, your long-term risk is both more nuanced and more complicated than you might think.

If you’ve been waiting for the mother-of-all-hacks to expose and besmirch the connected device “fad” for eternity, it probably already happened. It just didn’t matter in a direct way.

Why IoT leaks like a sieve

IoT security incontrovertibly sucks, and it will continue to suck despite promises to the contrary.

Here’s why: your product OEM built and/or integrated your IoT device’s software inside the constraints of a standard product development cycle.

Your connected refrigerator’s software began life as a line item in a grander product development budget, listed alongside refrigerant, compressors, and stainless steel. While its manufacturer might be a massive appliance company, their software budget for your French door Superfreeze was 1 to 4 orders of magnitude smaller than their corporate marketing team’s software budget. OEMs typically provide ample resources to build software and sometimes enough to test it, but they don’t earn the margin needed to pay for true hardening. That’s next-level expensive.

Normally, IoT engineers create connected device software from (dubiously-maintained) third-party libraries, then customize the device, app, and cloud codebases as needed. They Scrum tirelessly inside the product development process, kicking releases out the back of the helicopter as they fly toward their final deadline. They also wrestle ever-changing, under-specified, over-budget user interface applications right up to (and beyond) launch day.

Unlike in critical infrastructure, perfect doesn’t happen in IoT. This software is built for function, not for non-functional requirements (security, performance, availability, etc.).

Where does this leave you?

Step 1: identify the stakes

If you’re concerned about a connected device, start with your risk profile. Every smart device contains a computer, and every connected device transmits or receives data. These common data classes can help you quickly profile what your device(s) could realistically expose:

  • Personally identifiable data (PID): usernames, passwords, phone UUID, device MAC, names, addresses, nicknames, kids’ names
  • Demographic data: more general than PID, like location, age bracket, or purported income level
  • Sensed data: biometrics like heart rate variability; state data like location, speed, pathway, or operation; raw capture like images or voice
  • Correlatable data: usually ingested from 3rd party data providers and used to trigger something, like weather data firing your in-ground irrigation system
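If you like, this profiling step can be sketched in a few lines of code. Everything below is our own illustration; the weights and device entries are invented for the example, not drawn from any vendor:

```python
# Illustrative only: rough weights for each data class a device handles.
# Higher weight = higher stakes if that class leaks.
DATA_CLASSES = {
    "pid": 3,          # personally identifiable data: names, credentials, MACs
    "demographic": 1,  # age bracket, general location, purported income
    "sensed": 2,       # biometrics, location traces, raw audio or images
    "correlatable": 1, # third-party feeds, e.g. weather triggering irrigation
}

def exposure_score(handled_classes):
    """Sum the weights of every data class the device handles."""
    return sum(DATA_CLASSES[c] for c in handled_classes)

# A Bluetooth-only speaker handles little; an app-connected one touches
# credentials, demographics, and listening behavior.
print(exposure_score(["sensed"]))                        # prints 2
print(exposure_score(["pid", "demographic", "sensed"]))  # prints 6
```

The exact numbers don’t matter; the point is that adding an account-backed app to a device pulls in whole new data classes at once.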

The stakes can increase considerably depending on the connected device’s fundamental architecture. For example, a Wonderboom Bluetooth speaker uses a simple Bluetooth link between itself and your phone. Connect and play; it keeps things simple.

Though it also supports a Bluetooth-only connection, the Bose SoundLink Flex Bluetooth speaker requires that you use the Bose Connect app to “fully enjoy” its playback experience. From a data capture perspective, we’ve introduced an interloper that can:

  • Require username/password authentication
  • Store some demographic information
  • Capture some PID, either from you or your phone
  • Correlate owned device models to individuals
  • Track and report what gets played over that connection
  • Send performance analytics and crash reports

Bose asks for an awful lot of your trust to play music.

Step 2: technical threat profile

Too many people consider IoT security in purely technical terms. For most end users, technical details serve only to guide your mental dissection of how your connected device was built. By considering some common security topics, you can infer a bit about how your connected device works:

  • Encrypted communication

    • Packets (secure, encrypted data)
    • Protocols (secure communication channel)
  • Architectures

    • Device (product’s software configuration)
    • Communication (transmit/receive methods)
    • Backend (cloud or remote software dependencies)
    • Interface (the software you interact with)
  • Access

    • Credentials (user’s method of access)
    • Authentication (identity verification)
    • Authorization (user’s access level)
    • Permissions (software’s access level)
    • Certificates (authenticity of third party)
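As one concrete example of the “encrypted communication” items above, here’s how the difference between careful and careless TLS setup looks in Python’s standard library. This is a generic sketch, not code from any actual device:

```python
import ssl

# The careful default: certificate chain and hostname are both verified
# before any data moves.
careful = ssl.create_default_context()
print(careful.check_hostname)                    # True
print(careful.verify_mode == ssl.CERT_REQUIRED)  # True

# The shortcut a rushed IoT team might ship: verification disabled,
# so any interloper can impersonate the cloud backend.
rushed = ssl._create_unverified_context()
print(rushed.check_hostname)                     # False
print(rushed.verify_mode == ssl.CERT_NONE)       # True
```

The second context is one line of code away from the first, which is exactly why deadline-driven teams ship it.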

When compared to Wonderboom or their own no-app experience, Bose Connect introduces considerably more systemic risk via architectural complexity:

  • A user authentication system
  • An IoT platform (code and infrastructure running on other code and infrastructure)
  • Several device database tables (for the info above)
  • Communication layers between the device, cloud, and app(s)
  • Customized on-device OS and firmware code
  • Source control (the place where developers work on the uncompiled code) for any of the custom software above

Much more to go wrong, and all of it requires maintenance.

Step 3: think like bad actors

This is the part where we imagine screen-lit, hoodie-wearing bad guys punching in terminal commands that instantly steal grandma’s Social Security number. It’s also the part that brings true risk into focus—who could care about your device data and why?

  1. Corporate providers: the first party collecting your data, most of which they use to enable device functionality, but they could do evil in theory
  2. Hackers: likes include PID breaches, ransomware, device surveillance, and occasional device hijacking
  3. Governments: conduct mass surveillance, dirtbox sweeps, and targeted spying or law enforcement through systems like Pegasus
  4. Private surveillance: collects, sells, and serves your data to people who want it for…reasons, mostly marketing profiling and ad targeting

When evaluating your own risk, ask how each group benefits from your data. Do they benefit at all? Focus on what could happen, not what you think might happen. That comes later.


In our Wonderboom vs. Bose example, you might expect Bose to be the likeliest bad actor, but how do they benefit?

  • Learn who you are to a degree
  • Learn what you play
  • Learn what devices you own
  • Learn what other Bluetooth devices appear in your area
  • Learn what other devices are on your network

It’s nosy for sure, but not particularly salacious. Things get more interesting with hackers:

  • The IoT system might have holes we can exploit, send the bots after that
  • A PID breach could hold Bose hostage or tarnish their image
  • DDoS on some ancillary service in Bose’s IoT stack to get a quick payment
  • Steal preference and PID data then sell on the darkweb

Notice that none of that targets you individually or leverages your specific data for anything illicit—it’s about the bigger data pile. Private surveillance advertisers care more about you to complete their picture:

  • Learn what you play (your preferences)
  • Learn what other devices you own

They may not have known your music preferences before, but now they do. We don’t see a smart TV on your network. Hey, Sony, target this person!

Step 4: rank risk probabilities

Compromising your device data or your OEM’s IoT system takes work. There’d better be a reason to do it.

Fortunately, incentive in IoT security can be generalized. These are the most likely scenarios based on a bad actor’s common, probable incentives:

  1. Use your legitimate data in for-profit surveillance
  2. Sell your PID
  3. Use the device architecture as an entry point into the OEM’s systems
  4. Hold OEM hostage using your PID
  5. Identify individuals from groups
  6. Compromise your device to take your data or prevent its use
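To make the ranking above less hand-wavy, you can multiply a rough incentive score by a rough ease score for each scenario. All the numbers here are our own guesses, for illustration only:

```python
# (incentive, ease) on a 1-5 scale -- invented values for illustration.
scenarios = {
    "for-profit surveillance of legitimate data": (5, 5),
    "sell your PID":                              (4, 4),
    "entry point into OEM systems":               (4, 3),
    "hold OEM hostage with your PID":             (3, 3),
    "identify individuals from groups":           (2, 2),
    "compromise your device directly":            (2, 1),
}

# Rank by incentive x ease, most likely scenario first.
ranked = sorted(scenarios,
                key=lambda s: scenarios[s][0] * scenarios[s][1],
                reverse=True)
for name in ranked:
    print(name)
```

Swap in your own guesses for your own devices; the ordering that falls out is usually more honest than gut feel alone.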

In our Bose example, we’d rank these as the top 3:

  1. Bose sells your PID and preference data to third parties. Definitely happening.
  2. Hackers use the IoT system as an attack vector into non-IoT systems. Malign actors have bots probing for holes. It’s normal.
  3. Hackers hold Bose’s IoT system for ransom. Lock up some or all of the system for a few Bitcoin.

Some scenarios aren’t remotely likely. Governments could get your above data easily, but it wouldn’t help them with their usual objectives. Surveillance advertisers would get another angle on your preferences, but the reliability of that data set (remember, it’s not all of what you listen to) would be uselessly low.

Summary: your true risk calculus

As you ponder the probable risks to you, the end user of connected consumer products, you’ll note that many scenarios end in a “so what”. Yes, you’ll suffer a service outage when hackers cryptolocker your OEM’s IoT platform auth data, or you’ll fend off ever-creepier advertising, but your life will otherwise proceed. Inconvenienced but not impeded.


When it comes to raw risk, there’s one thing we can say for sure as consultants: your lowest risk comes from the device maker directly. Their business models focus on selling hard goods and moving inventory off their balance sheets. It takes a buttload of non-core work to process all the acquired data about you and your device into anything useful. Your device OEM doesn’t have the manpower or attention span for that. Some might sell the data (especially automakers), but many don’t.

For product developers, simplicity (a.k.a. reduced attack surface) works best to keep future service, legal, and reputational issues at bay. Product managers should study up on the difference between connected functions and smart functions before making blanket decisions about their next product offering.

For individuals, your gravest risk comes from illegitimate uses of your legitimate data—that unethical-but-legal stuff we don’t have a digital bill of rights to protect against. Your data is not your device.

Other devices can expose your reputation, your anonymity, and your freedom from badgering when their sensed data combines with data you supplied in some unexpected way. Similar to the way generative AI models were built with a zero-day use case (leveraging published content in a way its authors never expected or consented to), the risk from future devices may be less about the data they collect and more about the data they rapidly correlate. Someone else’s device can catalog your physical presence, then pull from your previously-volunteered public (or even private) data, writing new context and enabling instant readback:

Meta glasses: connected glasses combine facial recognition with social media data to autodox total strangers.

License plate readers: OCR picks up more than just your plate number, harvesting make, model, bumper stickers, and other context to correlate with the identification DRN already has.

Imagine parking ramps that block the entry of uninsured cars, self-repossessing gizmos of all sorts, and salespeople that know your credit score just by looking at you. A level of automated, inverted totalitarianism that would make the Stasi swoon.

This is the status quo until it becomes untenable. You’ve got nothing to hide, you say? We don’t want to hear it, but Alexa will be more than happy to lend an ear…


Next Mile is well-practiced at responsible IoT design and development. Contact us to de-risk your connected product development program.

Find additional insight at our blog or contact us for a monthly summary of new content.

If this speaks to a problem you’re facing, we'd love to see if we can help you further.