Loose Leaf Security Weekly, Issue 17

Happy 2020! Neither of us is particularly the type to make New Year's resolutions, which makes sense since security is a year-round, all-the-time concern. Let's get to it.

If someone forwarded this to you, you can sign up yourself at https://looseleafsecurity.com/newsletter.

Tip of the week

Right before the holidays, we talked about keeping your devices safe if you must charge them via untrusted connections. For iOS users, there's one more setting worth disabling to keep your device safe from both untrusted chargers and automated cell phone decryption devices like Cellebrite's UFED or Grayshift's GrayKey: in the Settings app, under "Touch ID & Passcode" or "Face ID & Passcode," make sure that "Allow Access When Locked" for "USB Accessories" is disabled. This prevents your iPhone or iPad from making any sort of connection to a USB device plugged into it while the phone is locked, preventing such a device from trying to attack your phone. (Your phone can still charge, and audio connections over Lightning still work.)

Forbes recently found a search warrant where police were able to use a GrayKey to get data from an "iPhone 12.5" - apparently a reference to the internal model number of the new iPhone 11 Pro. While we don't know a lot about how these devices work, one user on Twitter pointed out that GrayKey probably works on up-to-date devices if access to USB accessories is permitted when locked. Apple introduced this option in mid-2018 apparently in response to devices like GrayKey, which are widely speculated to work by making repeated, automated guesses of your passcode and also using some mechanism to prevent iOS from locking out the device after too many wrong guesses.
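To get a feel for why passcode length matters against an automated guesser, here's a back-of-the-envelope sketch. The guessing rate is a made-up illustration, not a known figure for GrayKey or any real device; the point is just the arithmetic of the search space.

```python
# Illustrative math only: how long exhaustively guessing numeric
# passcodes takes. The 10 guesses/second rate is hypothetical, not a
# claim about what GrayKey or any real tool achieves.
def worst_case_hours(digits: int, guesses_per_second: float) -> float:
    """Hours needed to try every numeric passcode of the given length."""
    combinations = 10 ** digits
    return combinations / guesses_per_second / 3600

# At a hypothetical 10 guesses per second:
four_digit = worst_case_hours(4, 10)  # 10,000 codes -> about 0.28 hours
six_digit = worst_case_hours(6, 10)   # 1,000,000 codes -> about 27.8 hours
```

Each extra digit multiplies the attacker's worst case by ten, which is one reason longer (or alphanumeric) passcodes are worth the typing, and why iOS's normal guess-throttling matters so much if a tool can bypass it.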

In the news

Software updates, plural, for Firefox: Mozilla released Firefox 72 on January 7, featuring several improvements and security fixes, plus one notable privacy change: Firefox now blocks cross-site fingerprinting scripts by default. This adds to their existing Enhanced Tracking Protection feature: the latest version of Firefox blocks cross-site traffic to a list of known tracking sites provided by Disconnect, a company that builds privacy and anti-tracking software. Such requests, often made as part of displaying an ad, allow unrelated websites to inform a single central company of your browsing behavior on their sites.
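The core idea behind list-based tracking protection is simple enough to sketch in a few lines. This is a simplified illustration, not Firefox's actual logic (which, among other things, compares registrable domains rather than exact hostnames), and the tracker hosts here are invented examples, not entries from Disconnect's list:

```python
# A minimal sketch of list-based tracking protection: block requests
# that are (a) third-party relative to the page and (b) on a known
# tracker list. Hostnames are made-up examples.
from urllib.parse import urlparse

TRACKER_HOSTS = {"tracker.example", "ads.example"}

def should_block(request_url: str, page_url: str) -> bool:
    """Block third-party requests whose host is on the tracker list."""
    request_host = urlparse(request_url).hostname
    page_host = urlparse(page_url).hostname
    is_third_party = request_host != page_host
    return is_third_party and request_host in TRACKER_HOSTS
```

So a request from a news article to `tracker.example` gets blocked, while the same site's own images (first-party) and cross-site requests to unlisted hosts go through - which is why the quality of the list matters as much as the mechanism.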

On January 8, Mozilla quickly released Firefox 72.0.1, fixing a security flaw used by "targeted attacks in the wild." As always, we'd like to remind Firefox users to apply updates: if the menu icon is suggesting you restart your browser so you can pick up an update, take a moment to do that, and if it isn't, see why automatic updates aren't enabled. By the way, if you're not a fan of the rapid pace of changes in modern web browsers, Firefox also has an Extended Support Release that gets feature updates about every ten months but still gets important security fixes promptly: for instance, Firefox ESR 68.4.1, also released on January 8, has the fix for this flaw. Firefox ESR is designed for organizations that have difficulty updating software every few weeks (either because of burdensome change-control processes or limited ability to test internal websites with new browsers), but you can certainly download and use it just for yourself, too.

The best evil plans are the simplest: The New York Times recently undertook an investigation of ToTok, which has become a popular chat app in the United Arab Emirates. Because the UAE government blocks most other chat apps, ToTok has had an easy rise to popularity, and questions arose as to whether the UAE had a way to spy on ToTok traffic. As part of their investigation, they asked Patrick Wardle, a security researcher who has focused on Apple products for many years, to take a look at the app's internals. Wardle's blog post describing how he took apart the app is a pretty readable example of basic security research and reverse engineering; among other things, he takes advantage of the recent iOS jailbreak to bypass security controls both in iOS and the app, so he can see what data the app is sending to its servers. It turns out there's nothing particularly suspicious in the app itself: the most problematic thing is that it sends your list of contacts over to the ToTok servers, but that's arguably reasonable in order for it to help find your friends on the app, and the feature requires you to approve a permission prompt for access to your contacts.

However, the Times discovered signs that Breej Holding, the recently-founded company making ToTok, is simply a front for organizations linked to the UAE government, which gives them a much simpler way to monitor chats. Instead of needing a special side channel from the ToTok app to a government server, the ToTok servers themselves were apparently under the control of the government. Since the app doesn't offer end-to-end encryption, the government can just directly see who you're talking to and what you're saying.

We sometimes get caught up in flashy, high-tech attacks and forget to worry about a much more straightforward attack - when you entrust a company with your data, you need to make sure they're actually trustworthy. (Recall Mark Zuckerberg as a Harvard student making fun of the 4,000 users who signed up for his first version of Facebook.) Tools that are designed so the service provider can't read your data, like end-to-end encrypted chat apps Signal, WhatsApp, and iMessage, can provide a significant measure of reassurance, in that the straightforward "attack" on ToTok isn't possible, and certainly a few curious employees can't choose to snoop on your data. Still, it's also worth paying attention to the teams that provide these tools and thinking about whether you trust them to implement them in a sound manner.

Sometimes, it's easy to tell when a company isn't trustworthy. Breej Holding seems to be linked to DarkMatter, which the EFF calls a "notorious cyber-mercenary firm." DarkMatter has been previously linked to surveillance and spyware operations run by the UAE government. Last summer, DarkMatter applied to become a trusted authority for HTTPS certificates, giving them in theory the power to spoof any HTTPS website - if they were in fact linked to UAE governmental surveillance, they would then be able to easily assist in intercepting any encrypted HTTPS connection inside the UAE. However, DarkMatter insisted that they simply wanted to be an upstanding and honest certificate authority like any other, which led to some questions about the process for such applications - in particular, whether rejecting their application required a convincing argument that they're untrustworthy, or whether the applicant should have to establish that they're trustworthy in the first place. In the end, the browsers rejected their application.

Keep your DNA top secret: Did you get a DNA testing kit as a gift for the holidays? In late December, the Pentagon told members of the US military not to use home DNA testing kits, perhaps anticipating that such kits would be a popular gift. Their concern was not just based on the medical inaccuracy of such kits and the danger of misleading results; they're also worried that "outside parties are exploiting the use of genetic materials for questionable purposes, including mass surveillance and the ability to track individuals without their authorization or awareness." While the major testing kit vendors say they don't sell your DNA, as our other stories indicate, it's probably wise to avoid trusting them to keep your data secure. Certainly if you're worried about access from local law enforcement, DNA test results aren't protected: as we covered in a newsletter in November, a judge in Florida signed a warrant for full access to one genetic database, including to profiles that had specifically opted out of law enforcement use.

Meanwhile, the Federal government still wants to gather DNA: The US federal government plans to start collecting DNA from people in the custody of an immigration agency, including both people detained by Customs and Border Protection at a border crossing and people in a jail operated by Immigration and Customs Enforcement. Both agencies will be sending DNA samples (like cheek swabs) to the FBI for analysis. The DNA data will become part of the FBI's Combined DNA Index System, a system that was originally built to track the DNA of convicted criminals but also includes data from forensic samples found at crime scenes and from people arrested (but not yet convicted) in some states. Activists have raised concerns about the increased scope of data collection of people who haven't been convicted of a crime (including wrongly detained US citizens) - in particular, since DNA is inherited, it's often possible for DNA testing to point suspicion at close family of someone in the database, even if those family members have never had their own DNA sampled.

Ringing in the New Year: Ring has added a privacy dashboard to their app as a response to public security and privacy concerns. It largely seems like a public relations move, but we're in favor of making security and privacy easier to navigate. On the plus side, we're excited that two-factor authentication is now opt-out by default instead of opt-in, and we're curious to see the effect this has on how many accounts require second factors. On the other hand, we're confused that Ring will only notify users when their passwords are found in breaches, instead of following the industry standard of requiring users to change those breached passwords.

The new Ring Control Center will also tell users if Ring has a partnership with their local police department and allow users to "opt out of receiving video requests in areas where local police have joined the Neighbors app," but it's not clear that this means very much - Ring and its employees still have the ability to access your videos as long as they store them (a 60-day period in the US that users can't change), so the technical ability for police (or others) to access your videos still remains. It's possible police could request videos from Ring themselves - and it's certainly possible for them to request video from either you or Ring with a search warrant. (By the way, Apple got this right with HomeKit - video is encrypted to a user-specific key that Apple doesn't have.)

Concerns over Ring employees accessing customer videos are not unfounded either - in response to questions from US Senators, Ring noted that four employees have been terminated for misusing their access to customer videos in response to complaints and inquiries. We note that these were all in response to complaints or specific investigations - it doesn't seem like Ring has any meaningful internal controls or mechanisms to proactively ensure employees aren't abusing access to customer videos. We've seen government surveillance data abused before, from the part-time debt collector scouting victims for a rap-crew-turned-identity-theft-ring to employees at intelligence agencies spying on their love interests, and it seems this practice is poised to spread to commercial, opt-in surveillance too.

Pwn numbers: Security researcher Ibrahim Balic discovered that uploading an arbitrarily long, non-sequential list of phone numbers via Twitter's Android app would return the associated Twitter accounts. This is yet another instance of Twitter mishandling phone numbers. At the very least, Twitter should let you control whether or not other users can find you via your phone number or email address, neither of which is a public part of your profile but at least one of which is required to create a Twitter account.
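Contact-discovery features like this don't have to be all-or-nothing: a per-user discoverability setting can keep phone lookups from working against people who never opted in. This is a hypothetical sketch of that design, not Twitter's actual implementation; the handles, numbers, and field names are invented:

```python
# Hypothetical sketch of contact discovery gated by a per-user
# discoverability setting (not Twitter's real code or data model).
users_by_phone = {
    "+15551230001": {"handle": "alice", "discoverable_by_phone": True},
    "+15551230002": {"handle": "bob", "discoverable_by_phone": False},
}

def match_contacts(uploaded_numbers):
    """Return handles only for users who opted into phone-number lookup."""
    matches = []
    for number in uploaded_numbers:
        user = users_by_phone.get(number)
        if user and user["discoverable_by_phone"]:
            matches.append(user["handle"])
    return matches
```

With this check in place, bulk-uploading a list of numbers only reveals accounts whose owners chose to be findable - which is exactly the control the story argues Twitter should offer.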

We ran into this ourselves, as it happens: both of us recently tried to make new throwaway Twitter accounts without a phone number, but that didn't work. Twitter lets new users choose to sign up with either an email or a phone number, but despite having fairly typical new account activity, such as gaining established users as mutuals, adding profile information, and configuring strong second factors (both an authenticator app and a security key), our new accounts were "temporarily restricted." We could remove those temporary restrictions only by adding a phone number to our account. We didn't want to even temporarily add phone numbers to these accounts because even if we removed them later, Twitter could keep them in shadow profiles for the accounts and use them for targeted advertising. Twitter has recently claimed you don't need to have a phone number to use two-factor authentication, but that doesn't mean a whole lot if you can't even make a Twitter account without one.

That said, if you have a fairly established Twitter account and it still has an associated phone number, you might want to remove that phone number - we have had success removing phone numbers from our more established accounts. We can't guarantee this won't cause a temporary restriction like with new accounts or that your phone number won't be kept around behind the scenes, but as there was no indication this Android bug matched accounts that didn't currently have phone numbers on them, you'd at least gain some additional account anonymity.

Please stop tracking the energy I'm bringing to 2020: The California Consumer Privacy Act has taken effect as of January 1. We previously discussed this law and the story of how it got passed in our episode "Comparing Android and iOS security": it is an approximate equivalent of Europe's General Data Protection Regulation (GDPR). If you noticed a flurry of emails about privacy policy updates at the end of December (or, in some cases, the first week of January), the CCPA was probably why.

Several news organizations are discussing the new law and how businesses are responding. NPR's All Things Considered has a good overview, noting that (as originally passed) customers can individually sue companies that leave their data exposed to a breach, but otherwise only the Attorney General of California can enforce compliance with the new privacy regulations. The Attorney General has said his office will only start enforcing it on July 1 of this year, though he expects companies to already be in compliance. Because of limited resources, his office is likely to pursue only three enforcement actions a year, focusing on the sale of children's data and on privacy practices for other sensitive information like health data. NPR notes that Facebook, which runs a major advertising platform, seems to be taking the position that it's not directly responsible for data collected on that platform, its advertising customers are. Whether this is true seems likely to be one of the early major fights.

WIRED also has a look at the law, comparing it to the GDPR. The law gives California users the ability to request that a company stop tracking them, and WIRED says that exercising this right on its own website will cause you to stop getting targeted/tracked ads. They also discuss possible developments related to the CCPA, including a proposed regulatory agency for enforcement of the law and talk of a federal privacy law (which would potentially preempt California's law). Despite continued fears of corporate lobbying weakening the law, it seems that businesses are accepting that they'll have to comply with it: in the words of a lawyer who's working with companies affected by the law, "privacy is here to stay." We're particularly curious to see the long-term effects of laws allowing users to opt out of tracking: this may be the catalyst to find a model for sustaining the web without targeted, surveillance-like advertising. (Perhaps it's a return to the untargeted advertising of years past, and perhaps it will be something else entirely.)

It's not just websites that need to comply with the CCPA - Jad Boutros, who happens to be the CEO of a company selling privacy tools to businesses (including GDPR and CCPA compliance), tweeted a photo of a printed-out CCPA notice he received from a Brazilian steakhouse. Apparently the restaurant collects personal information from you when you make a reservation, when you pay with a credit card, and when you use the guest wi-fi.

Their servers were in no wyze secure: Smart-home device manufacturer Wyze recently left a server containing customer data from 2.4 million users accessible to the public without authentication. IPVM, a website that focuses on video surveillance products, found its own data in the database, and reports that Wyze has become popular for home security systems because of its extremely low cost. The exposed data doesn't include passwords, but it does include API tokens used by Wyze's mobile apps - that is, anyone who saw the data could access your account by pretending to be your logged-in mobile app, without having to actually log in. The researchers who originally found the database also say they found data on "bone density" and "daily protein intake" of customers - apparently from a new smart scale that Wyze is launching.

Wyze's CEO attempted to push back on a pre-existing perception that their products are less secure because they're cheaper, but we're not sure his arguments are convincing, given the facts. Yes, it's possible to run a secure service on a tight budget (for instance, we recently discussed Riseup, a provider of communication services to activists, which has maintained a good reputation despite being a volunteer-run collective), but it requires focusing on having solid security practices and earning your users' trust. If a single employee can accidentally open up public access to a copy of the entire customer database, those practices clearly aren't there yet.

Our takeaway: if you're going to upload private "smart home" data to the cloud, whether it's surveillance video of your home or health data like your daily protein intake, consider carefully whether that company is trustworthy. Big companies that monetize any data they see might not be the best choice, but clearly small companies have their risks, too.
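Why is a leaked API token as good as a password? Because in a typical token-based design, the server only checks the token, not who presents it. Here's a toy sketch of that pattern - this is not Wyze's actual code, and the account names and "camera feed" response are invented for illustration:

```python
# Toy illustration of bearer-token authentication: whoever holds the
# token is the account, as far as the server can tell. Not Wyze's
# real implementation; names and responses are made up.
import secrets

sessions = {}  # maps token -> account name

def log_in(account: str) -> str:
    """Issue a long-lived API token after a (not shown) password check."""
    token = secrets.token_hex(16)
    sessions[token] = account
    return token

def handle_request(token: str) -> str:
    # The server can't distinguish the real mobile app from an
    # attacker who copied the token out of an exposed database.
    account = sessions.get(token)
    if account is None:
        raise PermissionError("invalid token")
    return f"camera feed for {account}"
```

This is why exposing tokens is roughly as bad as exposing passwords, and why the standard incident response is to revoke every affected token (deleting it from the server-side session store), forcing everyone to log in again.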

TikTok on the clock but the exploits don't stop: Check Point Research found a number of vulnerabilities in TikTok, the popular video-sharing app. Among the flaws were some authentication mistakes that let attackers change private videos to public, but there's one particular flaw we found interesting. TikTok has a feature that lets someone send you a link to download the app - and apparently the link could be controlled by whoever made the request (probably so that TikTok could update the link easily). So until this flaw was fixed, it was easy to send a text message from the actual TikTok SMS number with any link you wanted. It's a good idea to avoid links in messages that invite you to download an app, especially because it's easy enough to just go to your phone's app store and search for an app that way. As we discussed in the previous newsletter, text messages are never a particularly trustworthy channel: if you're ever unsure of something in a text message, like a notification from your bank, it's safer to log into your account using an app or the official website or to make a call to customer service using a phone number that you know is accurate.
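The server-side fix for this class of bug is to validate the link before sending the SMS, only allowing URLs that point at an allowlisted download host over HTTPS. This is a sketch of that check under our own assumptions - we don't know how TikTok actually fixed it, and the allowlist below is just an example:

```python
# A sketch of server-side link validation before sending an SMS:
# accept only https links to known app-download hosts. The allowlist
# is a hypothetical example, not TikTok's actual fix.
from urllib.parse import urlparse

ALLOWED_HOSTS = {"www.tiktok.com", "apps.apple.com", "play.google.com"}

def safe_download_link(url: str) -> bool:
    """Accept only https URLs whose host is on the allowlist."""
    parsed = urlparse(url)
    return parsed.scheme == "https" and parsed.hostname in ALLOWED_HOSTS
```

With a check like this, a request asking the SMS endpoint to send `https://evil.example/app` would be rejected, while legitimate store links go through.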

What we're reading

They see you when you're typing: The New York Times Style section takes a close look at Powerfront, an online customer management company targeted at luxury brands, in the aptly-titled "They See You When You're Shopping." Powerfront's product attempts to recreate the personalized and hyper-attentive experience of browsing a luxury storefront and tries to build "emotional" profiles of shoppers, and one of the most striking bits is that those emotional profiles are built in part from monitoring the text entered into customer service chats as you are typing, even before you hit send.

Watch us watching you: As part of their Open Sourced project, Vox's Recode takes a look at the privacy policy of its own website and what types of tracking they subject visitors to. It's interesting to note your limited ability to control this tracking, as a visitor: like many other websites, Vox doesn't follow the flawed Do-Not-Track standard and ignores any Do-Not-Track signal from your browser. There seem to be two effective tools: browser settings or extensions to block tracking and fingerprinting, and laws like the CCPA which incentivize websites to be more careful about what data they collect and share.

Big Brother is going to college with you: The Washington Post reports on how colleges across the US are tracking their students' attendance via smartphone apps. These schools have been discreetly installing Bluetooth beacons in lecture halls, allowing them to precisely track which students are in class and even if they leave early. Some professors report significantly increased attendance in large lecture classes because they can tie grades to attendance, but naturally, some students are unhappy - UNC's student newspaper, The Daily Tar Heel, investigated the school's purchase of one of these systems after they noticed it was in use. It turns out that some schools are installing these to track the attendance of student-athletes, who are often required by National Collegiate Athletic Association regulations to actually attend class to maintain eligibility to compete in sports. However, these monitoring systems are capable of quite a bit more: one of them had a button that let students send their current location to their professor, and it turned out students would press it by accident when out in the evenings. Another promises to track students avoiding the cafeteria or staying in their room excessively so the college can intervene in potential eating disorders or other mental health issues. College administrators love the data - one system has a feature to track attendance for "students of color," which the manufacturer defended as useful for administrators tracking retention of various groups. In addition to the actual surveillance at school, students and others are worried about whether this is training a generation to accept similar surveillance for the rest of their adult lives.

Facial recognition in the schoolhouse: A school district in upstate New York has started using computerized recognition technology inside school buildings, drawing criticism. The Lockport Central School District says the system will identify guns as well as the faces of sex offenders, suspended staff, and other adults who shouldn't be in the school. Evan Greer, deputy director of digital rights advocacy group Fight for the Future, has a brief thread on Twitter about the backstory. They originally wanted to watch for suspended students (in the wake of the Marjory Stoneman Douglas shooting, whose perpetrator was an expelled student), but outcry from parents and the state made them avoid enrolling images of students at all. As Greer notes, though, the system can be easily expanded now that it's installed - and it has already been expanded from gun recognition to facial recognition.

Smart doorbells aren't always that bad

We're not opposed to home security systems in general - we just want people to be confident that their devices are working to keep them safe and aren't turning them into targets of unwanted surveillance from others. As long as your devices are secure, don't keep footage or other data longer than needed, and give you meaningful control over whether your data gets shared, we're absolutely in favor of you sharing video from your smart doorbell of some ding-dong-ditch deer.

That wraps it up for this week - thanks so much for subscribing to our newsletter! If there's a story you'd like us to cover, send us an email at looseleafsecurity@looseleafsecurity.com. See y'all next week!

-Liz & Geoffrey