Loose Leaf Security Weekly, Issue 27

Hello again! We hope you're staying safe and healthy, wherever you are.

If someone forwarded this to you, you can sign up yourself at https://looseleafsecurity.com/newsletter.

Tip of the week

Many of us are starting to do our work over videoconferencing tools, and last week, we mentioned our episode "Covering your webcams" for tips on preventing applications from taking video of you without your awareness. This week, we'd like to call special attention to why you'd want to keep your webcam covered: in addition to the risk of malware taking video of you, well-intentioned apps can also start video calls before you've clicked to accept them, and you may not necessarily want your coworkers to see you (or your family or your pets) until you're ready to join the call. In particular, the very popular videoconferencing service Zoom had exactly this problem until last year. First, for "convenience," the desktop app had an option to start calls automatically when you clicked a link instead of waiting for you to press a button. Second, even when it was uninstalled, the app would leave behind a little agent that would automatically reinstall it the next time you clicked on a Zoom link. Combined, this meant that even people who thought they uninstalled Zoom could have their cameras start up just by clicking on a link.

One way to avoid this is to try to use browser-based videoconferencing services when possible. Since browsers manage permissions for websites, you can grant or deny access to your camera or microphone for specific sites. By itself, this doesn't completely mitigate the risk of opening a video call unexpectedly, because once you've told the browser it's okay for a certain site to use your camera, it'll keep giving that site permission, but if you know you're done using a videoconference site for a while, you can remove its permission from the browser settings and it can't re-enable itself on its own. Also, unlike downloaded applications, websites can't leave things behind when you close them, and they don't have other permissions on the system. (You may have seen a viral tweet claiming that the host of a Zoom meeting can see what programs you're running. As far as we can tell, this isn't quite true: the Zoom feature being discussed only allows the host to see whether you have Zoom in the foreground or not. However, the Zoom desktop application, like any traditional desktop application, does have the ability to see what other programs you're running, so if Zoom wanted to build such a feature, they could do so easily.)

Zoom tries to guide you towards downloading their app, but if you wait out their download prompt, you'll eventually get an option to join online. Specifically, when you click a Zoom link and don't have the app installed, it'll open a web page saying, "If nothing prompts from browser, download & run Zoom." Wait a bit, and the sentence will change to add the phrase "Click here to launch the meeting." Click there, and it'll show a new link, "If you cannot download or run the application, join from your browser." That last link will work without downloading anything. Other videoconferencing services, including Slack calls and Skype for Business, have similar features where you can refuse to download the app and use the browser instead. (Web videoconferencing is often less feature-rich - in particular, screen sharing usually won't work via the browser, although browsers are working on adding permission-based support for that, too.)

Another good answer is to simply use your phone or tablet to join videoconferences. Because mobile OSes are designed with stricter restrictions on what apps can do, you can be much more confident that apps aren't opening cameras or recording audio without telling you. Generally, an app needs to be active to use the microphone or camera. iOS allows apps to continue using the microphone when you switch apps and will color the status bar at the top to indicate an app is still using the microphone. Android permits background use of both the microphone and the camera, but from version 9 onwards requires that apps place a notification in the status bar. Furthermore, when you uninstall mobile apps, they're gone: Android and iOS both manage app installation and uninstallation and make sure that apps can't leave pieces around. Desktop applications, on the other hand, include their own uninstallers, and they might choose to only partially uninstall themselves.

In the news

I'm at home, where else would I be: Several years ago, Google added a feature to their Hangouts chat app to share your current location, and they'd even prompt you to share your location if you got a message from someone else saying "Where are you?" As of a week or so ago, that feature is now gone from the Hangouts app. We mention this here because the suggested replacement, sharing your location in Google Maps, is pretty different from the Hangouts feature in terms of privacy and security: Maps lets other people track your real-time location, whereas Hangouts would just send where you were at that moment. If you're on your way to a party and don't want the group chat to know where you're going afterwards, make sure you're using the right options with Maps location sharing. You might also look at alternative ways of sharing your location, such as dedicated apps like Glympse that offer time-limited sharing or other chat platforms like iMessage that have a share-my-current-location button. (Or, if you've turned off mobile location data entirely, just send them the nearest street address the old-fashioned way.)

I am once again asking for you to take software updates: Apple has released new minor versions of macOS, iOS, and their other operating systems, with various features like support for the new iPad trackpad. As usual, the updates also come with a cornucopia of security fixes, including a fix for a bug that allowed replying to messages from a locked phone even when that option is disabled, as well as fixes for several more serious (though boring-sounding) bugs that let both web pages and apps escape their sandboxes. Apple also released feature updates for iOS 12, which they're continuing to support for older devices that can't run iOS 13, but frustratingly, their security updates page claims that there aren't any security fixes in either this iOS 12 release or the previous one. Last time we covered iOS updates, we hoped this was a misprint, but at this point, it's probably safest to assume that iOS 12 actually isn't getting security updates anymore, so you should try to replace your device if it can't run iOS 13.

I'm from the surveillance-industrial complex, and I'm here to help: We've previously covered NSO Group's Pegasus spyware kit and how it's been used to target activists and dissidents in various countries. The Israel-based firm is now joining the ranks of China and Iran in building a surveillance-based solution to the spread of the novel coronavirus: NSO's tool will correlate two weeks of cell phone history data with stored location data retained by cell providers and try to figure out if you've crossed paths with anyone who's been reported as infected. In the US, at least, a recent survey from Morning Consult finds that the majority of people are uncomfortable with governments tracking their location and a smaller number, but still a majority, remain uncomfortable even if the stated rationale is to fight the spread of the virus.

Netflix and security research skill: Ars Technica reports on a security researcher's struggle to get Netflix to pay attention to a bug in their account security. Netflix uses a service called Bugcrowd to handle both reviewing security reports from outside researchers and paying them "bounties" to incentivize people to look at their product. One researcher submitted a report that a certain Netflix cookie was configured to be sent over unencrypted HTTP connections, which, he claimed, allowed anyone who could modify your network traffic (say, people on your wifi network) to steal the cookie and log into your Netflix account. A significant part of the value of a service like Bugcrowd to a company like Netflix is handling low-severity or pedantic problems and making sure outside researchers are aware of the scope of a vulnerability reporting program, and in fairness to Netflix and Bugcrowd, looking for random cookies that could be sent over HTTP is a fairly common form of low-severity report. However, the researcher's claim is that this wasn't just any cookie, it was specifically the login cookie - Netflix disputes this, saying a second HTTPS-only cookie was required, but the researcher posted a video demonstrating the attack, and Bugcrowd insisted that the researcher take the video offline or risk getting kicked out of the Bugcrowd platform entirely. (The video is now offline, but after the news article was published, Netflix confirmed that the original bug was valid and that they'd fix the bug and pay a bounty.)

The way bug reporting programs work has a lot of practical impact on the security of the services we use, though we rarely notice or even know about them. There's a tension in these programs, especially the ones that pay bounties to attract outside researchers, between incentivizing researchers to look for serious problems and dissuading people from reporting everything that may or may not be a problem in the hopes of a reward. We covered a similar issue earlier this month: a group of researchers tried to report six problems with PayPal on HackerOne, another bug bounty platform, and had them all dismissed. In that case, too, you can see that PayPal and HackerOne were trying to filter out uninteresting reports, but they also seem to have seriously misunderstood the severity of at least a couple of the issues (at least according to the information the researchers posted). Since vulnerabilities can be discovered by anyone (take the FaceTime bug found by a 14-year-old) and bug bounties compete with the market for reselling security vulnerabilities to attackers, it's quite important and tricky for companies to make good tradeoffs here, taking reports seriously enough and offering enough rewards to get information but also not inviting themselves to be flooded with low-quality reports that cause real problems to be lost in the noise. On an unrelated note, we rather like Ars's analogy of a login cookie as a concert wristband: it's much faster to check than your actual credentials, but if it's not implemented well, it's potentially easier to spoof or steal.
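If you're curious what "configured to be sent over unencrypted HTTP" means in practice, the protection at issue is a pair of cookie attributes a server can set. Here's a small sketch using Python's standard library (the cookie name and value are made up for illustration):

```python
from http.cookies import SimpleCookie

# Build a login cookie the way a careful server would: marked Secure, so
# browsers only ever send it over encrypted HTTPS connections, and HttpOnly,
# so scripts running in the page can't read it. A cookie missing the Secure
# flag - the heart of the Netflix report - travels over plain HTTP too,
# where anyone on the network path can see or tamper with it.
cookie = SimpleCookie()
cookie["session"] = "opaque-login-token"  # hypothetical value
cookie["session"]["secure"] = True        # never sent over plain HTTP
cookie["session"]["httponly"] = True      # hidden from page JavaScript

header = cookie["session"].OutputString()
print(header)  # session=opaque-login-token; HttpOnly; Secure
```

The server would emit that string in a `Set-Cookie` response header; without the `Secure` attribute, the "concert wristband" can be copied by anyone watching the unencrypted line.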

It wasn't that much of a military secret: German newspaper Der Spiegel reports that a used laptop from Germany's military resold on eBay came with its hard drive - and classified information - intact. The laptop, which ran Windows 2000 and sold for 90 euros, included documentation on how to disable a tank-mounted anti-aircraft missile if it's at risk of falling into enemy hands. Although the weapons are still in use, a spokesperson for the German military says that there's no real risk from the documents becoming public. Still, it's a good reminder to wipe your hard disks before getting rid of your device - or better yet, use disk encryption (and don't use "guest" as both your username and password) so that even if you forget to wipe the disks, other people can't see your private files. As it happens, the document reminds soldiers to grab hard disks from the missile system before enemy soldiers capture it.

Scams in the time of coronavirus: Numerous scam artists have decided that the fear around the novel coronavirus and COVID-19 is the perfect vehicle to get people to fall for malware and phishing attacks that they ordinarily wouldn't. Here's a quick roundup of scams to keep an eye out for:

As we've advised before for sextortion scams, if you get an urgent-sounding message - whether it's offering a cure or it's allegedly from a long-lost relative using a brand new email address - take a moment to step away and think through the message. Scammers prey on fear and a false sense of urgency. We'd also recommend setting up a password manager with browser integration and a security key if you have one, because both of them offer robust defense against phishing: they'll automatically compare the website you're logging into with the website you've previously created login information on. Computers are unlikely to get scared and forget to check whether a site is real or fake, so it's worth relying on them. (If you've got some downtime during quarantine, setting up a password manager is a great activity you can do at home without coming into contact with anyone else!)
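That domain check is the core of why password managers resist phishing. Here's a toy illustration (not any real password manager's code - the site and credentials are hypothetical) of the idea that saved logins are keyed to the exact hostname they were created on:

```python
from urllib.parse import urlparse

# Hypothetical saved login, keyed by the hostname where it was created.
saved_logins = {"example-bank.com": "correct horse battery staple"}

def autofill(url):
    """Only offer credentials when the page's hostname exactly matches a
    saved entry - a look-alike phishing domain simply gets nothing."""
    host = urlparse(url).hostname or ""
    return saved_logins.get(host)

print(autofill("https://example-bank.com/login"))  # the saved password
print(autofill("https://examp1e-bank.com/login"))  # None: look-alike domain
```

A human under stress might not notice the digit "1" standing in for a letter "l", but the string comparison never gets scared; that's the robustness the episode's advice is counting on. (Real password managers are more sophisticated about subdomains and related domains, but the exact-match principle is the same.)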

What we're reading

A jumbo amount of trust: The Guardian talked with the founders of two companies that produce apps that help you manage the privacy settings of your other apps. The main argument for using these apps is that privacy settings for the many different apps you use can be complicated and change from time to time, but the drawback is that you have to trust these apps: in order for them to manage settings for other apps on your behalf and block web trackers, they need a high level of access to your accounts and devices.

We've talked before about the risk of giving third parties broad access, like with Sensor Tower's apps and VPNs, and we don't think you need to trust a third-party app to keep track of your privacy settings. We keep on top of privacy for the apps we use by regularly deleting apps we don't use anymore and periodically checking through their privacy settings. (This is another reason we find it useful to specifically check our location data settings from time to time.) As for web trackers, you can get a similar level of protection by using HTTPS Everywhere in Encrypt All Sites Eligible (EASE) mode, which we call "HTTP nowhere" mode, and opening HTTP-only sites only in a private browsing / incognito mode that does not have access to your extensions, e.g. your password manager.

In the article, one of the founders they interviewed, Pierre Valade, suggests that third-party privacy managers also provide value by sorting through complicated privacy policies and terms of service for you, but we think this is misleading - these apps can't opt you out of any unwanted data sharing that you can't opt out of yourself. While we agree that it's difficult as individuals to read through all the legalese in privacy policies that govern our accounts, apps, and data, we'd still rather make decisions for ourselves than just trust a third party like Jumbo. Easy-to-understand analyses of privacy policies, like the ones from PrivacySpy, help us better understand what we're agreeing to.

All performances at the security theater are cancelled until further notice: A couple weeks ago, the Transportation Security Administration partially rolled back its policy on how much liquid is okay to bring when flying: containers of hand sanitizer up to 12 ounces are now permitted. "America Is a Sham" in Slate discusses this change along with other unnecessary measures that have been lifted in light of the current pandemic - there's been a lot of pressure to figure out which security measures correspond to reasonable threat models and then lift the ones that don't actually improve security. As the Slate article states, "the TSA can declare this rule change because the limit was always arbitrary, just one of the countless rituals of security theater to which air passengers are subjected every day." On the other hand, it makes sense that they haven't suspended the use of metal detectors, reinforced cockpit doors, or special badges for airline crews, because those measures did come about from threat models that made sense. (This is also why a lot of "natural" cleaning products are still in stock while those made with effective "harsh chemicals" have been flying off the shelves - people understand that those natural products weren't made to ward off real threats to their health.) Apart from its relevance to the current crisis, we mention this because it underscores a point that drives a lot of what we advocate at Loose Leaf Security: "security" is not simply a synonym for "inconvenience." Well-reasoned threat models identify specific risks and take measures designed to address those risks, and a solution being more difficult is hardly a sign it's more secure - for instance, one-tap security keys are a significantly more secure means of two-factor authentication than manually typing in a passcode received by text message.

OMNY-scient: The MTA has been in the process of phasing out New York City's iconic MetroCard swipe system and phasing in a new contactless system called OMNY. Of course, new security concerns go hand in hand with new systems, and The Verge has a good roundup of the new privacy concerns OMNY brings for MTA riders.

The eventual MetroCard replacement card has yet to drop, so anyone currently using OMNY is paying either with a credit card with an embedded contactless payment chip or with their smartphone's payment system. If you pay directly with a credit card each time you enter a subway station or hop on a bus, your credit card provider can get much more granular information on your location than they would get from which ticket vending machines you use to reload your card. If you're using the contactless payment system on your phone, OMNY can collect even more information on you, including IP addresses, device identifiers, and your billing address each time you enter the transit system.

OMNY's privacy policy specifies that all that information, along with the specific point of entry (i.e., which OMNY reader you tapped), is then logged on servers at Cubic Corporation, the company designing and implementing the OMNY system, for between six months and seven years. As The Verge states, "Riders can log in to their OMNY account and review their movement history for the 90 days prior," but OMNY doesn't currently provide a way to clear that information. The MTA states that all this information is stored "securely" using the triple-DES encryption algorithm - a workaround from 1995 to boost the security of the Data Encryption Standard from the 1970s. It's odd that a newly-designed system would choose to use 3DES as opposed to the stronger (and faster) replacement, AES, which has been available since 2001. Right now, we're not taking any subways or buses because we're staying home, but outside of the pandemic, we've both stuck to swiping our MetroCards to limit the rider data gathered on us to just our actual riding data.
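To put some numbers behind the 3DES-versus-AES comparison, here's a small sketch laying out the ciphers' standard published parameters (the figures are well established; the code is just an illustration, not anything from the OMNY system):

```python
# Block size and effective key strength, in bits, for the ciphers discussed.
# 3DES applies DES three times, but a meet-in-the-middle attack caps its
# effective strength at 112 bits, and it keeps DES's small 64-bit block.
ciphers = {
    "DES":     {"block_bits": 64,  "effective_key_bits": 56},   # 1970s standard
    "3DES":    {"block_bits": 64,  "effective_key_bits": 112},  # 1990s workaround
    "AES-128": {"block_bits": 128, "effective_key_bits": 128},  # standardized 2001
    "AES-256": {"block_bits": 128, "effective_key_bits": 256},
}

# Even 3DES falls short of the weakest standard AES configuration, and its
# 64-bit block limits how much data can safely be encrypted under one key.
assert ciphers["3DES"]["effective_key_bits"] < ciphers["AES-128"]["effective_key_bits"]
assert ciphers["3DES"]["block_bits"] < ciphers["AES-128"]["block_bits"]
```

None of this means rider records are trivially crackable today, but it does show why choosing 3DES for a brand-new system is a head-scratcher when AES has been the stronger, faster standard for nearly two decades.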

End-to-end encryption at 0800 hours

The British Army has switched to discussing sensitive information over WhatsApp instead of in person, and if end-to-end encryption is good enough for armies, it's worthwhile for you.

That wraps it up for this week - thanks so much for subscribing to our newsletter! If there's a story you'd like us to cover, send us an email at looseleafsecurity@looseleafsecurity.com. See y'all next week!

-Liz & Geoffrey