Happy Monday! One of our stories this week discusses the use of "cell-site simulators" or "IMSI-catchers," small devices that can trick cell phones into connecting to them instead of to actual cell towers. They're an increasingly popular law-enforcement tool, but they're also entirely too easy for casual attackers to build. In addition to detecting your location, cell-site simulators can intercept and spoof SMS messages.
If someone forwarded this to you, you can sign up yourself at https://looseleafsecurity.com/newsletter.
Tip of the week
It's a good idea to use "end-to-end encrypted" messaging platforms whenever you can. Most chat systems besides SMS encrypt messages on the way to their servers, but end-to-end encrypted systems go further and ensure that even their servers can't see your conversations - only the endpoints can. That means your messages can't be read by anyone else, whether they've got a cell-site simulator or some sort of access to your chat system's servers. Options include Apple's iMessage (which unfortunately only works on Apple devices), Open Whisper Systems' Signal, and Facebook's WhatsApp, which uses the same cryptography as Signal. Even the US military has suggested its users switch from SMS to Signal or Wickr, another end-to-end encrypted messenger, for casual conversations - so if it's good enough for them, it's probably good enough for your group chat.
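If you're curious what "only the endpoints can see it" means in practice, here's a toy sketch in Python. The cipher is a stand-in (a one-time pad, not the actual Signal protocol, and all the names are ours), but it shows the architecture: the key lives only on the two phones, and the server just shuttles bytes it can't read.

```python
import secrets

# Toy end-to-end encryption sketch. A one-time pad stands in for real
# cryptography (actual messengers use the Signal protocol); the point is
# architectural: the server relays only ciphertext and never holds the key.

def xor_cipher(key: bytes, data: bytes) -> bytes:
    # XOR each byte with the key; XOR is its own inverse, so this both
    # encrypts and decrypts. (Toy cipher - never reuse a pad in practice.)
    return bytes(k ^ b for k, b in zip(key, data))

class RelayServer:
    """Models the chat provider: it stores and forwards opaque bytes."""
    def __init__(self):
        self.mailbox = []
    def relay(self, ciphertext: bytes):
        self.mailbox.append(ciphertext)   # the server can log this...
    def deliver(self) -> bytes:
        return self.mailbox.pop(0)        # ...but can't read it without the key

# The key is shared only between the two endpoints (real systems establish
# it with a key-agreement handshake), so a compromised server - or a
# cell-site simulator in the network path - sees only ciphertext.
shared_key = secrets.token_bytes(64)
server = RelayServer()

message = b"meet at the coffee shop at 3"
server.relay(xor_cipher(shared_key[:len(message)], message))
received = xor_cipher(shared_key[:len(message)], server.deliver())
assert received == message
```

In a plain (non-end-to-end) system, the server would hold the key itself, which is exactly the difference that matters when the server is breached or subpoenaed.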
In the news
We figured we'd lead with good news this week: the Federal Communications Commission has said that it's illegal for cell phone carriers to sell location data, a practice that was previously common among all the major carriers. We discussed this practice in our episode "Securing your phone" from summer 2018, when the major providers had just been pressured into ending it. In that episode, we also discussed a 5-4 Supreme Court decision, Carpenter v. US, which ruled that a search warrant was required to get someone's location data - one of the dissenting justices' arguments asked why the government should need a warrant when the carriers were happily reselling everyone's real-time location to anyone willing to pay for it (or perhaps not even pay for it). It remains to be seen what penalty the FCC will apply, but this ruling is likely to have implications for whether the US government, data brokers, and unsavory people with an internet connection can track you through your phone.
Who needs warrants when you've got ads? Unfortunately, cell carriers aren't the only source of cell phone location data. The Wall Street Journal reports that the US Department of Homeland Security found an easy workaround to needing court orders to get location data from cell phones in the US: they just buy the data from a private company that sells marketing analytics data. Immigration and Customs Enforcement (ICE), part of DHS, uses the data to find people they want to arrest, and Customs and Border Protection (CBP) uses it to detect clusters of cell phone activity in the sparsely-populated areas near the Mexican border. DHS gets the data from a small firm called Venntel, located in the Washington, DC suburbs popular with both tech firms and government defense contractors. In the words of a former DHS official the Journal interviewed, the deal with Venntel makes Carpenter v. US irrelevant, because "the government is a commercial purchaser like anybody else." It seems that Venntel, in turn, gets the data from sources like mobile apps or websites, not from cell companies directly, giving them (under current law) perfectly legal access to the data. The Journal identified Venntel as sharing leadership with Gravy Analytics, a less secretive company in the marketing business. Gravy's website says they get "raw location signals" from "many different data providers and tens of thousands of apps," which they offer to both online and brick-and-mortar stores to help them follow their customers' behavior. The editorial board of The New York Times finds DHS's approach "alarming," arguing that Carpenter v. US is of limited value in stopping "near perfect surveillance" through cell phones if the government can get equivalent data from mobile apps through the commercial market.
For most of us, the government can already locate us with high accuracy, but private companies brokering location data gathered from apps could put us at risk in other ways. While the CBP spokesperson said the data provided by Venntel "doesn't include the individual user's identity," ICE seems to have found a way to map the data back to specific people, and that may not be too difficult in general - someone with some insight into your behaviors could identify locations you were likely to have visited and take advantage of the fact that companies like Gravy create their datasets by merging data from many apps together. We always recommend checking whether it makes sense for an app to have your location data and turning it off if it doesn't - even if you want to tag some of your social media posts or check-ins with locations, those apps probably don't need to know your exact GPS location in real time. We'd also recommend a slightly more aggressive approach to keeping your location private: try turning off location data even for apps that do have a legitimate use for it, such as navigation apps. Liz was pleasantly surprised to find that turning location data off mostly just meant typing an extra address from time to time, and she hasn't felt the need to turn location access back on. Geoffrey's still too much of a fan of real-time bicycling navigation to turn all location data off, but he realized he doesn't really need his bank to know his location just so he can find ATMs every so often.
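To see why "anonymized" location data may not stay anonymous, here's a toy sketch in Python with entirely made-up data: the pings carry a persistent advertising ID rather than a name, but matching them against a handful of places someone is known to visit can tie that ID - and the rest of its location history - back to them.

```python
# Toy re-identification sketch (all data is hypothetical): "anonymized"
# location pings carry a persistent ad ID instead of a name, but merging
# them with a few known visits - home, work, a regular coffee shop - can
# pin the ID to a person. Coordinates are rounded to mimic coarse GPS data.

pings = [  # (ad_id, rounded_lat, rounded_lon, hour_of_day)
    ("id-7f3a", 40.71, -74.00, 8),
    ("id-7f3a", 40.75, -73.99, 9),
    ("id-9c21", 40.68, -73.94, 9),
    ("id-7f3a", 40.73, -73.98, 18),
]

# Places we know our target visits and roughly when - learnable from
# social media check-ins or simple observation.
known_visits = {(40.71, -74.00, 8), (40.75, -73.99, 9), (40.73, -73.98, 18)}

def reidentify(pings, known_visits, threshold=3):
    """Return the ad IDs whose pings match enough of the known visits."""
    matches = {}
    for ad_id, lat, lon, hour in pings:
        if (lat, lon, hour) in known_visits:
            matches[ad_id] = matches.get(ad_id, 0) + 1
    return [ad_id for ad_id, count in matches.items() if count >= threshold]

print(reidentify(pings, known_visits))  # the target's ad ID: ['id-7f3a']
```

A real dataset would need fuzzier matching, but the principle is the same: the more apps feed the same broker, the fewer known visits it takes to single someone out.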
If you can't subpoena 'em, join 'em: ICE has another strategy for using cell phones to track people: a cell-site simulator device such as Harris Corporation's "StingRay." According to an investigation from Univision New York, ICE has used such a device 551 times in the past three years. As the name implies, a cell-site simulator pretends to be a cell tower and convinces nearby devices to connect to it and identify themselves with the IMSI numbers on their SIM cards. Univision New York previously reported on a single case of ICE using a cell-site simulator for enforcement in Brooklyn, where they had already tracked a cell phone to a certain intersection but used the device to identify a specific apartment and room for a sunrise raid. As they describe, the device first induces all cell phones in the area to connect to it, then picks out the target device and keeps track of it. Cell-site simulators can also intercept text messages and phone calls, but in this case, the court order required ICE to delete any data they picked up while using the device.
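Roughly, the tracking process Univision describes might look like this toy Python sketch (all identifiers and numbers are made up): every phone in range announces its IMSI, the operator discards the bystanders, and the target's signal strength guides them closer.

```python
# Toy sketch of how a cell-site simulator narrows in on one phone
# (hypothetical identifiers and readings): every phone in range identifies
# itself with the IMSI from its SIM card; the operator ignores everything
# but the target and watches its signal strength as they move the device.

connections = [  # (imsi, signal_strength_dbm) observed at one position
    ("310150123456789", -85),
    ("310150987654321", -60),  # the target, named in a court order
    ("310150555555555", -92),
]

TARGET_IMSI = "310150987654321"

def track(connections, target):
    """Return the target's signal strength, discarding bystanders' data."""
    return next(strength for imsi, strength in connections if imsi == target)

# A stronger signal (closer to 0 dBm) means the operator is closer to the
# phone - e.g., walking the device down an apartment hallway.
print(track(connections, TARGET_IMSI))  # -60
```

Note that the bystanders' phones still connected and identified themselves, which is why court orders like the one in the Brooklyn case require deleting the incidentally collected data.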
Your photos, and maybe a few others, in the cloud: Google Photos disclosed a fairly horrendous bug last week: for five days in November, if you downloaded a backup of your files through Google Takeout, you may have gotten some videos that weren't yours. In other words, if someone else downloaded their backups, they may have gotten your videos. Google says they're notifying all the people whose videos may have been affected.
This is an extraordinarily frustrating bug since there's very little you can do to protect yourself from it. The paranoid approach, of course, is to simply not store your photos online - but that's not an answer everyone's comfortable with. Apart from sharing, Google Photos is a popular option for just backing up photos automatically, especially from Android devices. You could back up your data manually to your own computer instead of automatically backing it up to a cloud service, but there are a couple of reasons that may be difficult. One big one is storage space - Google Photos offers unlimited space for (compressed) photos, but your computer may not have much more space than your phone itself. Another issue is setting up a good workflow, because full phone backups tend to store photos in a raw format, which is suitable for restoring onto a new replacement phone but not as easy to use if you just want to find a single photo. Still, if you can find an app that makes this work and you have the storage for it, it might be worth thinking about.
More Apple software updates: Apple has released a collection of software updates with security fixes, including iOS and iPadOS 13.3.1, macOS 10.15.3 and security updates for 10.13 and 10.14, and even watchOS 6.1.2 (which shares many components with iOS). The updates fix serious security vulnerabilities, so as always, take a moment to install them. Even if you don't love the changes (and we'll admit, they do throw us off sometimes), it's not worth risking the possibility of being affected by bugs, especially given the recent track record of "zero-day" attacks on iOS and one major exploit dealer recently paying less for them, claiming the market was "flooded." We strongly disagree with journalists who pose the question of whether you should upgrade without taking this reality into account - especially if this is the first upgrade they've declared good since iOS 13 itself!
Apple also released an update to iOS 12, which runs on older devices that don't support iOS 13. According to their iOS 12 page, "iOS 12.4.5 provides important security updates and is recommended for all users," but Apple's security updates page doesn't list anything as fixed in iOS 12.4.5. We assume the security updates page is simply incomplete, since iOS 12 probably includes most of the iOS 13 components that got fixes. Apple's extended support is a good sign for people who don't want to buy a new phone every year, since iOS 12 runs on devices as far back as the iPhone 5S, which was released in fall 2013. There's still no great option for a cheap, secure phone (cheaper Android phones tend to get security updates for only a year or two, and not very promptly either, and no Apple phone really counts as "cheap"), but it now seems likely that low-end iPhones sold today will continue to be supported for many years.
Cheating the cheaters out of money: There's a new twist on "sextortion" scams that takes advantage of the 2015 breach of dating-for-cheaters website Ashley Madison. As announced by Vade Secure, a company that markets email security to businesses, the new scam threatens to tell your friends and family about your Ashley Madison account. Most "sextortion" emails are essentially confidence tricks - they claim to have recorded embarrassing screenshots or webcam video from your computer and then ask for payment via an irreversible cryptocurrency transaction to prevent them from releasing the video to your associates, but most of the time no such video exists. These emails, however, make it sound like the senders could have embarrassing information from the breach: as CNBC writes, "The threats are a worrying evolution of the sextortion scam because they appear to incorporate real information."
If this were the first non-bluffing sextortion attempt, it would significantly change the calculus about whether to pay - but it's still not clear that's the case. The database was released online a month after the breach, and journalists were able to analyze the data, meaning the attackers aren't in possession of any data that the rest of the world doesn't have. CNBC argues, nonetheless, that you shouldn't pay because there's no reason to believe the attacker is serious about contacting people you know - they can simply send out the emails, collect Bitcoin, and not bother to actually embarrass the people who don't pay. (As it turns out, a few scammers tried exactly that right after the original leak, and they were in fact not serious about follow-through.) On the flip side, we'd also point out that an attacker who is serious can easily demand more ransom, or just release the info anyway. As with other sextortion scams, we believe the best strategy continues to be to just ignore the email entirely.
A murky future for Clearview: Three weeks ago we discussed the shadowy startup Clearview, which built a massive facial recognition database by harvesting images from social media sites and is marketing it to law enforcement. One of the immediate questions was how such scraping was permitted by these websites, and now Facebook and YouTube have both told Clearview to stop, making it clear that such collection is against policy.
Still, the tools appear to be readily available, and one noteworthy point about Clearview was that its product didn't appear to rely on particularly custom or unique facial recognition algorithms. To test this theory, Timo Grossenbacher, a data journalist at Switzerland's public broadcaster, built his own version of Clearview with an off-the-shelf, open-source facial recognition tool by downloading 200,000 pictures from Instagram, and demonstrated the ability to find Swiss politicians in public photos of events. (Grossenbacher and a colleague wrote an article in German about the process whose title translates as "It's that easy to build a surveillance machine.") Surprisingly, he was able to download those 200,000 pictures from a single computer and account without any difficulty. When the journalists asked Facebook why, the company responded, "The scraping of public information is a challenge for all Internet companies because it is technically very difficult to detect," but we're confused by their thinking - we expect even the most obsessed Instagram users are unlikely to view 200,000 pictures in quick succession.
Grossenbacher has written a brief analysis (also in German) for the news service, where he argues that three things are needed for a system like Clearview: functional facial recognition technology, a corpus of good photos, and the ability to match those photos to identifying information such as a name or place - and of the three, only the first is inevitable (and already exists). He argues that regulation can address the second and third points, as can simply being cautious about what you upload to social media sites.
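As a rough illustration of why the first ingredient is so easy to come by, here's a toy Python sketch of the matching step. The embedding vectors here are made up - a real system would compute them from photos with an off-the-shelf neural network - but once you have labeled embeddings, identification reduces to nearest-neighbor search.

```python
import math

# Toy face-matching sketch: real systems use a neural network to turn a
# photo into an embedding vector; here we use made-up 3-dimensional
# vectors. Given a labeled corpus (scraped photos plus names), the
# "surveillance machine" step is just nearest-neighbor search.

corpus = {  # name -> face embedding (hypothetical values)
    "Politician A": (0.9, 0.1, 0.3),
    "Politician B": (0.2, 0.8, 0.5),
}

def distance(a, b):
    """Euclidean distance between two embedding vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def identify(query, corpus, max_distance=0.3):
    """Return the closest labeled face, or None if nothing is close enough."""
    name, best = min(corpus.items(), key=lambda item: distance(query, item[1]))
    return name if distance(query, best) <= max_distance else None

# An embedding computed from a new event photo (hypothetical values):
print(identify((0.85, 0.15, 0.35), corpus))  # Politician A
print(identify((0.0, 0.0, 0.0), corpus))     # None - no close match
```

The hard-to-replicate parts are exactly Grossenbacher's second and third ingredients: the photo corpus and the labels, which is why he argues regulation and caution about uploads are where the leverage is.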
What we're reading
The Freaky Friday approach to beating online tracking: Teenagers have been sharing their Instagram accounts with friends to commingle their individual data footprints. They regularly rotate which of the group's accounts each person browses from, and instead of posting photos to their own accounts themselves, they send them to each other, so each account gets posts from many devices with many different fingerprints. While we applaud their creativity (and mutual trust) in obscuring their individual identities, we'd really prefer that Instagram and other social media services tracked everyone less so they didn't have to share their accounts.
Ding dong distrust: When we've talked about the Ring video doorbell before, we've primarily focused on our concerns about how Ring, law enforcement, and third parties can access your data. Two stories, one in Boston magazine and the other in The New York Times, have us thinking about a different aspect of Ring's video doorbell system: how video doorbells and access to footage shared through the associated Neighbors app change residents' behavior and threat models. We included one of the videos shared on Neighbors as the send-off for a recent newsletter - it depicted two deer caught approaching someone's house - but not all of the videos posted to the Neighbors app are as lighthearted in nature. The Times noted a video of a delivery worker enjoying a free snack the residents had set out on a tough day; the clip went viral, likely for eliciting positive feelings, but the worker didn't know he was on camera until the footage spread. As delivery workers must make their deliveries regardless of whether the resident is recording them, footage like that feels exploitative. As the Boston magazine article discusses, some workers are worried about delivering to residences that already have packages out because they don't want their images posted to Neighbors as potential robbers. The paranoia posted to Neighbors doesn't start and end with concerns around theft, either: users also post footage of children playing ding-dong ditch or homeless people taking temporary shelter under an awning during a storm - both types of footage that seem to elicit unnecessary anger and fear.
One of the things we try to do with Loose Leaf Security is advocate for security and privacy as a way to build trust and be comfortable in our communities. That's why we focus on practical, actionable security advice as much as we can. When the world is overwhelmed with negative security news, as it often is, we lean on each other to avoid giving in to paranoia and isolating ourselves, and we do our best to avoid seeking out sources that lead to unactionable, isolating suspicion like the Neighbors app.
Electronic voting and paper trails: By now the mess from the Iowa Democratic Party caucus and its reporting app has mostly settled. (Among the app's many difficulties, apparently, was that users needed to enter a six-digit precinct ID, a six-digit PIN, and a six-digit two-factor code to log in, and people entered them in the wrong order.) One saving grace, though, was that the app was not the canonical record of the votes: it was simply a tool for advance reporting of numbers that were officially recorded on paper and officially reported by phone, and that process eventually completed. Election security researchers like Georgetown University's Matt Blaze have been arguing that we need verifiable paper ballots for elections and not just electronic copies, and he was happy with the "sensible design" of the Iowa caucus process, which did exactly that. As discussions about electronic and online voting increase, we think this is worth remembering. While paper balloting may be slow, it's well-understood and we have established procedures to ensure the integrity of paper ballots, whereas apps could fail or even be hacked.
Therefore, we're rather surprised to see West Virginia plan to offer smartphone-based voting this year. Lawmakers are planning to offer this app to disabled voters, who they say find it too difficult to get to polling places. There are a number of things in the proposal that give us pause, starting with the fact that it isn't clear the push is coming from the disabled community. While several other states have adopted "online ballot-marking tools," some as a result of ADA lawsuits from the National Federation of the Blind and other groups, those are generally just accessible options for absentee ballots, and they usually produce a paper ballot in addition to sending an online vote. West Virginia's proposal for app-based voting seems to be driven in part by a businessman's charity rather than by members of the affected disabled community, and its rationale of letting people avoid voting in person makes it sound like the state doesn't intend to offer accessible tools at polling places as those other states do. Furthermore, West Virginia's previous app-based voting pilots have involved an app named "Voatz" that somehow uses a blockchain - despite blockchains being specifically designed as ledgers of public activity, and the specific ADA concern being that disabled absentee voters can't privately mark their ballots without assistance. Those previous pilots were also conducted with overseas voters, not as an accessibility effort.
There are many ways that technology can make the world safer and more accessible - such as end-to-end encrypted messaging apps giving secure communications options to people who can't see each other in person. That doesn't quite seem to be what's happening with this pilot, though. We're unsure why adding this technology and its associated complexity is preferable to more established, tested options, and we'd also prefer that technology be used to include those who are typically excluded instead of perpetuating a different version of that exclusion through a fully separate process.
By the way...
... there are a lot of reasons data can be unreliable, and fooling traffic data by slowly pulling a wagon filled with 99 cell phones is one of our favorites.
That wraps it up for this week - thanks so much for subscribing to our newsletter! If there's a story you'd like us to cover, send us an email at firstname.lastname@example.org. See y'all next week!
-Liz & Geoffrey