Loose Leaf Security Weekly, Issue 16

Last night was the solstice, the longest night of the year. Over the next six months, the days will get longer - unless, of course, you're in the southern hemisphere, where it was the summer solstice, the shortest night of the year. Still, day or night, there's never a bad time for security.

If someone forwarded this to you, you can sign up yourself at https://looseleafsecurity.com/newsletter.

Tip of the week

With lots of people traveling over the winter holidays, it's a good idea to think about what information is available on your phone's lock screen versus what can only be accessed behind your passcode. In particular, if you use mobile boarding passes, it's worth putting them in Apple Wallet or the passes section of Google Pay so that you don't have to unlock your phone and expose the rest of its contents for an agent to scan your ticket.

Notifications can also expose sensitive information, and it's worth thinking about whether email and message previews are a helpful convenience or a liability. On iOS, Settings > Notifications > Show Previews lets you choose the default setting for when apps show more than just the app name and "Notification" in the Notification Center; the options are always, when unlocked, or never. You can also decide which apps show previews on a per-app basis by tapping specific apps later in Settings > Notifications. Android doesn't give users as much control over which apps show previews in notifications, but you can adjust whether "sensitive notifications" are shown when your phone is locked in the Lock Screen section of Settings > Apps & notifications > Notifications.

In the news

iOS (finally) gets support for security keys: iOS and iPadOS 13.3 were released this week with a long-awaited feature: you can use security keys, our favorite two-factor authentication method, in Safari. Previously, support for security keys on iOS was limited to specific apps: for instance, Google's Smart Lock app lets you use Bluetooth keys to sign in to other Google apps, and the third-party Brave web browser recently added support for Lightning security keys. Now, iOS 13.3 has built-in support for security keys via USB, near-field communication (NFC), or the Lightning connector - and you can use them either in regular Safari or in any Safari web view. Not only does this cover many apps that use a Safari web view for logins, it also includes the Accounts menu in Settings. So, for instance, if you have a Google account set up to use a security key, you can sign into that account in Settings and access your email or calendar with the built-in apps.

If you already have a security key that supports NFC, you can tap it against your phone when prompted to log in. Many popular keys, including recent YubiKey models and Google's Titan security key, have NFC support. You can also plug in a security key with a Lightning connector. So far, there's only one Lightning security key on the market - the double-ended YubiKey 5Ci, which also has a USB-C connector. Finally, you can use Apple's $30 "camera adapter" to connect a regular USB (type-A) security key to your phone. Note that many services, such as Twitter, only let you set up one security key at a time, so if you're in the market for a security key, you might want to think about getting one that works with all your devices.

We've been able to use the new feature with both a YubiKey 5Ci and with an older security key via the camera adapter, and both work great in Safari. If you're using another browser like Chrome or Firefox, you might need to wait for an update to add support - it doesn't seem to work yet.

Even if you don't have any security keys, the update comes with an assortment of security fixes, as usual. Apple also released updates for macOS, tvOS, watchOS, and even HomePod OS, so if you've got any type of Apple device, don't forget to take those updates, too. Apple also seems to be continuing their recent practice of issuing security updates for hardware that doesn't support the latest major version - iOS 12.4.4 for the iPhone 5s and 6 generation fixes one security issue in FaceTime - which is good news for people who are still hanging onto their older phones.

Unfortunately the obvious abbreviation for "secure texting" is already taken: Android users might start seeing banners in the Messages app saying that a text message from a company is authentic. This comes from a new feature called "Verified SMS," where companies can attest to the authenticity of text messages you receive from them. Basically, Google facilitates a cryptographic link between the company and your Android phone, so that the company can generate a verification code for every text message it sends you. Your phone can get the verification code from Google and make sure the message is authentic (all without Google seeing the message). If so, it'll display a banner saying that the SMS you got is authentic and it'll show previews for links.
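Google hasn't published the full protocol details, but the general shape of a per-message verification code can be sketched with an HMAC. This is a simplified illustration under our own assumptions (a symmetric shared key and the function names are ours, not Google's - the real Verified SMS system involves a key agreement between the business and Google, with your phone checking codes via Google):

```python
import hashlib
import hmac

def make_auth_code(shared_key: bytes, message: str) -> str:
    """Business side: derive a short verification code from the message body."""
    mac = hmac.new(shared_key, message.encode("utf-8"), hashlib.sha256)
    return mac.hexdigest()[:16]  # truncated for readability in this sketch

def verify_message(shared_key: bytes, message: str, auth_code: str) -> bool:
    """Phone side: recompute the code and compare in constant time."""
    expected = make_auth_code(shared_key, message)
    return hmac.compare_digest(expected, auth_code)

key = b"example-shared-secret"
msg = "Your package ships tomorrow."
code = make_auth_code(key, msg)
assert verify_message(key, msg, code)        # authentic message checks out
assert not verify_message(key, "Click here to fix your account", code)  # tampered
```

The key property is the one described above: anyone who tampers with the message body (or forges one without the key) produces a code that fails verification - but nothing about this hides the message contents from an eavesdropper.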

The goal of this feature is to prevent phishing attacks - if you get a text from a company saying "Click here to fix problems with your account," you want to make sure it's authentic. However, Google's own research shows that people don't reliably notice the lack of a security indicator. While Google says they'll display a warning if you get a text with a missing or incorrect authentication code, it's not clear how they can do this reliably - an attacker could just text you from another number. In our experience, companies text from so many numbers and shortcodes that we're not sure we'd notice an attacker sending us a text from an unusual number.

More importantly, this system doesn't encrypt the messages. Since it only adds an authentication code, it prevents tampering but not eavesdropping. Sensitive data, from personal information to login codes to password reset URLs, can still be intercepted by someone with the right equipment. Google's system certainly makes SMS less bad, but we still think you should avoid it if possible. The safest course of action is still to contact the company using reliable contact information (using an app, typing in their website address, etc.) and not to trust any links in a text message - the same practice we suggested in a recent tip of the week about how to handle suspicious phone calls claiming to be from a company.

Google will keep your passwords safe for you: Google also announced that the next version of Chrome will have new built-in password protection features. The first one is essentially a built-in version of their recent Password Checkup extension: when you use a password anywhere, they'll send some information about it to Google's servers to see if it's been used in a breach. The system uses some fancy cryptography to prevent leaking your password to Google if it's actually secure: all they have is enough information to match it to a password they've seen before.
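Google's actual scheme is a private set intersection protocol, but a much simpler cousin of the idea - hash-prefix k-anonymity, as used by services like Have I Been Pwned - gives the flavor of how a password can be checked against a breach corpus without revealing it. This sketch uses hypothetical data and our own function names:

```python
import hashlib

# Toy server-side breach corpus (hypothetical).
BREACHED = {"hunter2", "password123", "letmein"}

def server_lookup(prefix: str) -> list:
    """Server sees only a 5-hex-character hash prefix and returns the
    suffixes of every breached-password hash starting with that prefix."""
    suffixes = []
    for pw in BREACHED:
        digest = hashlib.sha1(pw.encode("utf-8")).hexdigest().upper()
        if digest.startswith(prefix):
            suffixes.append(digest[5:])
    return suffixes

def is_breached(password: str) -> bool:
    """Client hashes locally, sends only the prefix, and finishes the
    match itself - the full password (and full hash) never leave the client."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    return suffix in server_lookup(prefix)

assert is_breached("hunter2")
assert not is_breached("correct horse battery staple")
```

The server only ever learns a short prefix shared by many unrelated passwords, which is the "enough information to match it to a password they've seen before" property - though, again, Chrome's built-in version uses heavier cryptography than this sketch.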

The second and third features are based on sending URLs of sites you visit for Google to scan. Chrome already has a feature called Safe Browsing, where Google sends it compressed information about known malicious sites. If a site you visit is a likely match, Chrome will check the URL with Google's servers, again using some cryptographic techniques to avoid sending the full URL. (Other browsers have similar features: Firefox and Safari talk to Google's Safe Browsing servers, and Microsoft has its own SmartScreen Filter.) The new features are a major expansion, where Chrome sends over the URLs of sites you visit that Google doesn't know about, so that their servers can scan the site. Google's goal is to find newly-launched malicious sites, because Safe Browsing has been effective at preventing attackers from using any single website for too long. If you've enabled the setting to "Make searches and browsing better," Chrome will send over the URL for any site you visit that isn't on a built-in list of safe websites. Also, if you're signed into Chrome and enter any password saved in Chrome's password manager into a site that isn't on that safe list, Google will scan that site too.
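The classic Safe Browsing flow can be sketched roughly as follows: the browser keeps a local set of short hash prefixes of bad URLs, and only contacts the server when a prefix matches. (The real protocol also canonicalizes URLs and hashes several host/path combinations; the names and the local list here are our own illustration.)

```python
import hashlib

def url_prefix(url: str, n: int = 4) -> bytes:
    """First n bytes of the URL's SHA-256 hash - all the local list stores."""
    return hashlib.sha256(url.encode("utf-8")).digest()[:n]

# Hypothetical compressed list shipped to the browser: prefixes of known-bad URLs.
local_bad_prefixes = {url_prefix("http://evil.example/login")}

def needs_server_check(url: str) -> bool:
    """A local prefix hit is only a *likely* match - many URLs share a short
    prefix - so the browser then asks the server for the full hashes behind
    that prefix and makes the final determination itself (not shown here)."""
    return url_prefix(url) in local_bad_prefixes

assert needs_server_check("http://evil.example/login")
assert not needs_server_check("https://looseleafsecurity.com/")
```

This is what makes the new features such a departure: in this model the server never learns which URLs you visit unless a prefix happens to match, whereas the new scanning features send actual URLs of unknown sites.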

We have mixed feelings about these features. On the one hand, it's clear that they do solve real problems: password reuse and the risk of sites' password databases being breached are significant concerns, as are malicious sites phishing for passwords, and we do want our web browsers keeping us secure. On the other hand, this is a major change in how they go about that - until now, standard practice has been to use well-designed software as part of the browser to keep us safe, not to send our browsing data to Google's centralized servers. Even Safe Browsing, which you can still opt out of, was a bit controversial when it launched, despite going to great lengths to make sure your actual URLs aren't sent over the network - it just requests enough information for your browser to make a final determination on its own. The new approach sends actual URLs of new websites you visit over to Google - a company that has its own interests in scanning as much of the web as it can. We certainly don't object to browsers trying new approaches to protect users' privacy and security, but we don't totally trust Google's motivations here: Chrome's new approaches specifically help people who stay in the Google ecosystem (for instance, by saving passwords in their Google account), yet they're behind major competitors like Firefox and Safari on tracking prevention - a privacy-boosting measure aimed at countering the sort of tracking done by ad-sales companies like Google.

Hacking kids' smartwatches is child's play: As part of a training exercise, security research firm Rapid7 took a look at three children's smartwatches they found on Amazon and found security vulnerabilities in all of them. All three watches were actually the same hardware from a Chinese manufacturer, just branded differently, so the issues affected all the watches. First, you can communicate with the watch and change settings via SMS if you're on a list of authorized numbers. Not only is it fairly easy to spoof SMS numbers, the check doesn't even work, and the watches let any number communicate with them. Second, the watches all have a default configuration password of "123456," which only one of the watch brands even mentioned at all, and none of them gave instructions on how to change. Together, this allows any attacker with a phone to re-pair the watch with their own device, monitor the child's location, and set up voice chat.

Furthermore, Rapid7 was unable to find working contact information for the Chinese manufacturer to get them to fix the bugs - emails to the listed contact address bounced. As CNET reports, this is hardly the first time that "smart" devices targeted at children have had serious security flaws. It's probably safer to stick with products from major manufacturers or to just use non-"smart" devices.

San Diego's surveillance program faces the end: The Electronic Frontier Foundation and other organizations successfully campaigned to stop a major facial recognition program in San Diego County, California. EFF, ACLU of California, Oakland Privacy, and other groups had previously lobbied for Assembly Bill 1215, which bans the use of facial recognition in both police officers' body cameras and other cameras carried by police officers. Since 2012, San Diego's Tactical Identification System had given officers thousands of tablets and phones with facial recognition software that they could use to scan people on the street. San Diego is suspending the program and not renewing the contract next year. We're happy to see that more communities are successfully pushing back on mass facial recognition, continuing the trend from earlier this year of several US cities banning government use of facial recognition.

OK Google, give me thousands of people's locations for this search warrant: As part of an investigation into arsons in Wisconsin, the US Bureau of Alcohol, Tobacco, Firearms, and Explosives got a search warrant to find all Google users who had been in an area of roughly 30,000 square meters over a nine-hour period. Google responded to this "geofence" or "reverse location search" with information about 1,494 devices that had Location History turned on. While Google takes steps to secure your data, if you're keeping location history with Google, they do have to return that data in response to a valid search warrant. Forbes found another case earlier this year where Google was compelled to return the personal data of six users within a 50-meter search area. In the end, only two suspects were named, so the search swept up personal data of many users who had nothing to do with the crime.

You can't un-leak data: Canadian laboratory testing company LifeLabs said they paid attackers to get back their ransomed data, which included customer names, personal information, and healthcare data. Solid backups could have helped LifeLabs get back to business quickly and saved them from paying the ransom, but backups don't stop the attackers from still having the sensitive data they stole. Ars Technica reports there's currently "no indication" the hackers have given the data to a party other than LifeLabs, but it's frustrating when companies trusted with sensitive information such as healthcare data don't take measures to protect that data.

Protecting sensitive information against ransomware attacks is genuinely quite difficult - most standard forms of disk encryption protect you from someone getting physical access to your computer, but won't keep attackers' eyes out of your files because the ransomware runs inside your account on your operating system, where your files are already decrypted. If you're looking to store a small bit of sensitive data you use infrequently, like short notes or small documents, a password manager with a good security record might be a clever place to store it: unlike your computer's filesystem, your password manager is only decrypted when you access it and automatically locks after a period of inactivity, limiting the exposure to malware attacks. For larger amounts of sensitive data used at once but still used infrequently, storing it on an encrypted external drive, plugging that drive into your computer only when you need to access it, and unplugging the drive when you're done helps to limit your exposure. Both macOS and non-Home editions of Windows have easy ways to encrypt external drives. (This happens to be the best way to keep your regular backups safe, too - remember that ransomware can lock out any files it can see, so if you leave your backup drive attached 24/7, you're not well-protected from ransomware.) The best way to protect your sensitive and less sensitive data from ransomware attackers is to avoid downloading ransomware in the first place, which we cover in more depth in our episode "Malware, antivirus, and safe downloads."

What we're reading

How tracking works: We saw two interesting explorations of the mechanics of web-based tracking. First, software developer Julia Evans uses Firefox's built-in developer tools to investigate "tracking pixels," answering the question of how visiting the product page for a particular Old Navy coat can get her ads on other sites about that same coat. As she notes, Firefox's default settings prevent this sort of tracking via Facebook and other known cross-site trackers, and browser extensions and features can also restrict what sort of information tracking pixels can see. Second, EFF has an in-depth look at the entire tracking industry, covering how trackers identify users, how data is exchanged between the various companies in this ecosystem, and how both web and mobile tracking work.

The Senate will decide encryption's fate: The US Senate Judiciary Committee recently held a hearing on encryption and "lawful access," demanding that phone manufacturers like Apple and communication companies like Facebook provide ways for law enforcement to see inside encrypted conversations. The committee's Republican chair, Sen. Lindsey Graham, and Democratic vice-chair, Sen. Dianne Feinstein, are both pushing for this access. An EFF blog post takes issue with the testimony of Manhattan District Attorney Cy Vance, who claimed that device encryption was a significant barrier, even though his office is regularly able to break into devices using tools from companies like Cellebrite and Grayshift. Stanford's Center for Internet and Society also has a good analysis, pointing out how the senators are muddying the distinction between end-to-end messaging encryption (like Facebook does) and device encryption (like Apple does) and using fears about predators messaging children to argue for weakened device encryption. The American Enterprise Institute, a conservative think tank, is also not in favor of these "lawful access" proposals, writing, "If Congress required every American home to be retrofitted with a special door for law enforcement access, would it make us safer?"

One particularly interesting article about the hearings comes from MIT's Technology Review, which includes an interview with the vice president of Picsix, an Israel-based intelligence company that works around encryption with "data interception". Picsix's strategy is to hijack the cellular connection of a target and make their encrypted connections so flaky that they decide to switch to unencrypted connections. From the interview: "We won't just block WhatsApp entirely. Instead, we'll let you make a WhatsApp call, and 10 seconds in, we'll drop it. Maybe we'll let you make another call, and in 20 seconds we'll drop it. After your third failed call, trust me - you'll make a regular call, and we will intercept it."

Sen. Graham told tech companies to figure out a solution by a year from now if they didn't want legislation ordering them to open up "lawful access," saying, "You're going to find a way to do this or we're going to do it for you." If you're interested in watching the hearing itself, there's a video on the Judiciary Committee's website.

This company sees where 12 million people are sleeping and knows when they're awake: Anonymous sources within a tech company that logs location information gave The New York Times's Privacy Project a file containing over 50 billion location data points for the phones of more than 12 million people in a number of major cities over the course of several months in 2016 and 2017. These types of companies often point to user consent for location tracking as cover for their actions, and they also claim that they anonymize these datasets to lessen concerns over extensive surveillance. In the Times article, Paul Ohm, a law professor and privacy researcher at the Georgetown University Law Center, points out that this anonymization is "a completely false claim," and Times reporters were able to identify the movements of people in positions of power by matching this data with their home addresses. Companies are also able to merge this already highly detailed location information with other personal information like phone numbers, age, gender, and ZIP codes, or with other information-rich data sets like the advertising profiles constructed by web tracking.

One of the people the Times reporters identified described herself as "careful about limiting how she shared her location," but couldn't identify which app logged the location data the Times examined. It isn't always obvious which apps track and harvest our location data - it's not just social media companies or businesses looking to pinpoint advertising. A lawsuit filed in Los Angeles earlier this year highlighted how the Weather Channel's parent company analyzed app users' location data for hedge funds, and The Weather Company's "Investor Insights" boasted 200 million downloads with a 90 percent location opt-in rate, resulting in 120 million "ping" location points daily. The Weather app that ships with iOS uses The Weather Channel for its weather data, so we'd recommend checking your location settings for every app on your phone, not just the ones you've downloaded and installed yourself. On iOS, you can choose which apps can use Location Services (and when) in the Privacy > Location Services section of Settings. (You can also change location permissions in the app's section of Settings, but the Weather app is notably missing in that list, so it's worth heading to the Privacy section for a more comprehensive look at where your location is being used.) On Android, you can edit this in the Location > App permission section of Settings. As the Times correctly points out, "our privacy is only as secure as the least secure app on our device."

In other news...

Sometimes, satire becomes reality, like when we find ourselves changing our password at every login on some sites, despite using a password manager, because they truncate our long passwords without telling us.

That's all for this week! We'll be back in two weeks, as we're taking a little time with our families. We hope everyone who is celebrating something this time of year has a happy holiday season, and if there's a story you'd like us to cover, send us an email at looseleafsecurity@looseleafsecurity.com. See y'all next time - in two weeks!

-Liz & Geoffrey