Today's newsletter has a new section: "We are once again asking you to take software updates." Many of our news stories are straightforward reminders to take important software updates ASAP - they're a vital part of keeping your devices secure and your personal information private - and we don't want those reminders to get buried if you're not planning to read the newsletter all at once. Also, if you know you're already on top of software updates, it's easy to skip past that section and get straight to the other types of stories.
If someone forwarded this to you, you can sign up yourself at https://looseleafsecurity.com/newsletter.
Tip of the week
Last time, we discussed some possible ways to protect yourself from an employer or a school that wants you to install monitoring software on your personal computer. These requirements have been in the news more of late, and with exam season coming up, schools are likely to ask students to install additional monitoring software, which might demand more access than you want to give.
In addition to the approaches we mentioned, another good option is to make a separate, non-admin user account on your computer for installing programs you don't quite trust, which limits the access those programs have. As long as you never enter your admin account's password while logged into the new account, software installed there shouldn't have any access beyond what's in that account, so you can use the new account only for your workplace or school and keep sensitive things, like your personal email and password manager, in your primary account only. Then, even if you're asked to install a strange app, it's limited to what's accessible to that new account - it won't be able to monitor what you do when you're logged into your primary account, access your primary account's browser caches, or install things with administrative access. There are a few stories this week about software gaining more permissions than it should, and this setup would also help in some of those cases - as long as you say no when the software prompts you for admin permissions.
We are once again asking you to take software updates
Windows has a number of patches in this month's "Patch Tuesday" cycle, including a fix for the font-handling bug we talked about last issue. Brian Krebs has a deeper look at several of the patches, including a reminder that Windows 7 security updates ended this past January for everyone other than businesses paying an extra fee. If you're running Windows 7 or know someone who is, make sure to set aside some time to upgrade.
macOS got a "supplemental update" to version 10.15.4 which fixes a few bugs. According to Apple's security update page, this update doesn't come with any security fixes. That's possibly good news: there are scattered reports that this update is making some Macs stop booting, so if you're particularly concerned, it's safe to hold off on this one. However, those reports seem to be rare, and one affected user said that booting the Mac into recovery mode and running First Aid from Disk Utility fixed it. We'd recommend getting in the habit of not ignoring the software-update alert and making sure you know how to use your machine's recovery feature (and have up-to-date backups, in case the worst happens).
In the news
Send it to Zoom: Popular videoconferencing service Zoom is hard at work addressing the various security and privacy issues that people have been finding in their service. Earlier this month, they got Alex Stamos, who was previously chief security officer at Yahoo! and at Facebook, to join as an outside advisor. The Zoom CEO has been hosting weekly online meetings to respond to questions, and on last week's call, he invited Stamos to answer technical questions about Zoom security. Zoom had previously been criticized for using the ECB encryption mode, a simplistic way of applying an encryption algorithm that, much like a pen-and-paper cipher, can be analyzed by looking for patterns: identical chunks of input always encrypt to identical chunks of output. They'll be switching to GCM, a more secure mode, in "weeks." In the longer term, they're also working on figuring out how to scale true end-to-end encryption to meetings with thousands of people. As Stamos noted, "the largest end-to-end encrypted chat that exists right now is FaceTime, which only allows up to 32 people," but they've got "real Ph.D. experts" designing a way to use end-to-end encryption efficiently for very large groups. Stamos also addressed a question about reports of Zoom passwords on the dark web, saying that their investigators determined these were reused passwords taken from breaches of other sites - and that, in fact, they try to embed themselves in the shadowy groups trading passwords so that they can get their hands on breached password data and proactively disable accounts before they're resold to attackers. As we noted previously, while many of the issues raised about Zoom are in fact serious problems with Zoom itself, some, like the risk of reused passwords, are just general security truths that happen to apply to a highly visible product. Last week, Katie Moussouris, who founded Microsoft's vulnerability research program in 2008, announced that she is also working with Zoom, along with several others: former Google privacy lead Lea Kissner, cryptography professor Matthew Green, and three security research firms - a group The Wall Street Journal called "the cybersecurity cavalry."
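If you're curious why ECB is so weak, here's a minimal sketch in Python - using the third-party cryptography package, with a made-up key and message purely for illustration - showing repeated plaintext blocks leaking straight through ECB but not through GCM:

    # Why ECB leaks patterns: a demo with the "cryptography" package
    # (pip install cryptography). Key and message are made up.
    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = os.urandom(32)
    # Two identical 16-byte plaintext blocks, like repeated pixels or text.
    plaintext = b"ATTACK AT DAWN!!" * 2

    # ECB encrypts each block independently with the same key...
    encryptor = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
    ecb_ct = encryptor.update(plaintext) + encryptor.finalize()
    # ...so identical plaintext blocks yield identical ciphertext blocks.
    assert ecb_ct[:16] == ecb_ct[16:32]

    # GCM uses a fresh nonce to randomize every block, hiding repetition
    # (and adds an authentication tag that detects tampering).
    nonce = os.urandom(12)
    gcm_ct = AESGCM(key).encrypt(nonce, plaintext, None)
    assert gcm_ct[:16] != gcm_ct[16:32]

This pattern leak is the same effect behind the famous "ECB penguin" image, where an ECB-encrypted bitmap still plainly shows the picture's outlines.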
Still, this dream team appears to have an uphill battle ahead of them: adding a focus on security and privacy to an application that seemingly never had it as a priority. While they work on developing fixes and even researching new approaches, the existing product continues to show signs of treating security as an afterthought. For instance, Zoom sometimes pops up an alert box asking for an admin password and saying, "Zoom detected a problem with your computer's audio and needs to restart." As it turns out, Zoom is in fact using the OS's mechanisms for requesting privileged access and not stealing your password for its own uses, but the malware-esque wording of the dialog box doesn't explain why it needs privileged access or what exactly it's asking the OS to do. For ourselves, we're still using Zoom for low-security calls like remote music lessons (through the browser or on a phone or tablet whenever possible) while we wait for Zoom's security work to bear fruit.
Playing a dangerous game: Riot Games launched the closed beta of their new online first-person shooter Valorant earlier this month. Gamers who managed to get an invitation code quickly realized that installing Valorant requires installing a kernel-level anti-cheat driver called Riot Vanguard. The game developer says they need this level of access to detect people with cheating software installed - it's fairly common these days for games to scan your system for cheats (many games on the Steam platform, for instance, use the Valve Anti-Cheat (VAC) software), so cheaters have responded by adding kernel-level code to hide from what regular applications can see. In a longer post explaining Vanguard, Riot says they in turn need a kernel driver to detect cheats like this. They state they're not collecting any personal information, and they've introduced a special tier in their bug bounty program for bugs in Vanguard, but they also can't reveal too many details about how exactly Vanguard works for fear of tipping off the cheaters, so it's not clear how much the bounty program helps.
While the argument Riot makes for why they need a kernel-level driver is sound, it's nonetheless a security risk - a malicious driver has unfettered access to your entire system. (Also, as a few players pointed out, if every game developer did this, your system would be running dozens of similar anti-cheat drivers, possibly conflicting with each other.) Even with transparency, this still requires a significant amount of trust, and similar monitoring systems have caused security problems of their own in the past: in a particularly infamous case in 2005, audio CDs from Sony BMG silently installed copy-protection software that hid itself like a rootkit, at which point actual malware started taking advantage of the Sony tech to hide itself, too. Still, Riot is being pretty up-front about what they're doing, and we're not saying you can't reasonably decide, after considering the risks, that you're comfortable installing it. Personally, we'd think about the impact of something going wrong and how sensitive the data on our machine is - for instance, if you happen to have a separate gaming computer, it might be easy to avoid logging into important accounts on that machine and to do sensitive work on your other computer. (This is actually one of the nice things about video game consoles from a security perspective - while the console manufacturer will be pretty invasive about making sure you're not using tampered, pirated, or potentially even homebrew games, you're not using the console for anything other than games, so there's much less to worry about than with a general-purpose computer.)
Slidin' out of the DMs: Earlier this month, Twitter announced that private data like direct messages might have stayed in Firefox's cache even after you logged out of Twitter. In their brief post, they say that "other browsers like Safari or Chrome" weren't affected by the bug, and they've made a change to prevent Firefox from caching anything from Twitter. The Firefox team responded with a brief explanation from their CTO and a longer technical blog post explaining what happened and exactly who it affects. The bug only impacts you if you log out of Twitter and do nothing else to clear your data: different Firefox profiles use different caches, for instance, and the "Clear Recent History" feature will delete the private Twitter data. It turns out that Twitter was relying on nonstandard HTTP behavior that happened to disable caching in many browsers but was never part of any official specification, and Firefox, acting well within the bounds of the specification, ignored the invalid caching directive that Twitter sent. While there's a pretty clear lesson for web developers - don't rely on nonstandard behavior when security or privacy are at stake - the whole situation reinforces a practical takeaway for all of us as web users: just because a website says you've logged out doesn't mean that all records of your browsing are gone. If you can, use your own machine and a web browser of your own - in particular, avoid logging into accounts with sensitive data from shared machines like those at print shops or public libraries. If you're sharing a computer with someone you trust, it's still a really good idea to set up separate user accounts or at least separate profiles within your web browser.
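For the web developers reading: the standards-compliant way to keep private responses out of browser caches is the Cache-Control header. Here's a minimal sketch using Flask with a made-up endpoint - not Twitter's actual code, just an illustration of the standard directive:

    # A minimal sketch (not Twitter's actual code) of the spec-compliant
    # way to keep a sensitive response out of every browser cache.
    from flask import Flask, jsonify

    app = Flask(__name__)

    @app.route("/dm/inbox")  # hypothetical endpoint for illustration
    def inbox():
        response = jsonify(messages=["..."])
        # "no-store" is a standard directive telling every conforming
        # cache - Firefox's included - never to save this response.
        response.headers["Cache-Control"] = "no-store"
        return response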
What we're reading
Technology-assisted contact tracing: There's been a lot of discussion of how to loosen shelter-in-place mandates without a surge in new COVID-19 cases, and a likely component of that process is technology-assisted contact tracing (TACT), since the current outbreak is too widespread for traditional, manual contact tracing to be practical. Other countries have already been using contact tracing to help slow the spread, including South Korea's particularly aggressive and non-anonymous alert system and Singapore's more anonymous Bluetooth-based TraceTogether app. Last week, Apple and Google announced that they are creating a TACT API that will allow public health authorities to build contact tracing apps that work on both iOS and Android, and that after that, they will build contact tracing into the iOS and Android platforms themselves. If you have a contact tracing app using this API, your phone will send out Bluetooth messages with "Rolling Proximity Identifiers," temporary codes generated by the system that are essentially unlinkable anonymous aliases for you. These identifiers are calculated from "Daily Tracing Keys," but the keys cannot be reverse-engineered from the identifiers. A user diagnosed with COVID-19 can report their diagnosis to a contact tracing app using the API, and the app would then upload their last two weeks of Daily Tracing Keys to a central server. In turn, the server would distribute those keys to other users' phones, which would generate the associated Rolling Proximity Identifiers and check whether they've seen any of them before. If a phone finds a match, it would show its owner a notification that they came in contact with a potentially infected person, along with information about getting tested or self-isolating. Unlike in South Korea's system, the identity of the potentially infected person wouldn't be revealed, nor would identifiers be cross-referenced against credit card records or public transit data the way South Korea's alerts are.
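To make that "cannot be reverse-engineered" claim concrete, here's a simplified sketch in Python of the two-step derivation described in the published crypto spec. The labels, encodings, and interval count here are our loose reading of the draft, so treat it as illustrative rather than normative (it also uses the third-party cryptography package for HKDF):

    # Simplified sketch of the draft spec's key schedule; constants and
    # encodings are illustrative, not normative.
    import os, hmac, hashlib, struct
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    tracing_key = os.urandom(32)  # generated once; never leaves the phone

    def daily_tracing_key(day_number: int) -> bytes:
        # One key per day, derived one-way from the tracing key: you can
        # compute a day's key from the tracing key, but not vice versa.
        return HKDF(algorithm=hashes.SHA256(), length=16, salt=None,
                    info=b"CT-DTK" + struct.pack("<I", day_number)).derive(tracing_key)

    def rolling_proximity_identifier(dtk: bytes, interval: int) -> bytes:
        # A fresh 16-byte identifier for each broadcast window; the HMAC
        # can't be inverted to recover the daily key from an identifier.
        mac = hmac.new(dtk, b"CT-RPI" + struct.pack("<B", interval), hashlib.sha256)
        return mac.digest()[:16]

    # A diagnosed user uploads only daily keys; other phones re-derive
    # that day's identifiers and compare them to what they overheard.
    dtk = daily_tracing_key(18367)
    todays_identifiers = {rolling_proximity_identifier(dtk, i) for i in range(144)}

Because each step only runs one way, someone collecting identifiers over the air learns nothing about your keys - but publishing a daily key deliberately links that whole day's identifiers together, a tradeoff that comes up again below.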
Apple and Google's press release mentions that "privacy, transparency and consent are of utmost importance in this effort," and they have released the spec for the underlying cryptography they'll use for these TACT systems. The ACLU released a white paper discussing Apple and Google's TACT system as well as three others: DP^3T from researchers at EPFL in Switzerland, PACT from researchers at MIT, and TCN from a worldwide group of cryptographers. The ACLU agrees with the choice to use decentralized Bluetooth-based tracking instead of GPS-based tracking - geolocation data is "incredibly revealing and privacy-invasive" and, we'd add, very easy to abuse in the wrong hands. It's worth noting that while Apple and Google's TACT system is designed to be decentralized, that design isn't sufficient to ensure decentralization - any app using it could also send information about positive COVID-19 test results back to a centralized server. Apple and Google still need to thoroughly vet all apps built on top of their TACT API to make sure information isn't being aggregated elsewhere and no additional identifiers are being used to reveal the identities of people who test positive.
Additionally, the ACLU doesn't believe that making the entire contact tracing system a single, all-or-nothing opt-in, as Apple and Google have planned, goes far enough. They'd prefer that each step be voluntary - e.g., you could opt into being alerted when you've been in contact with someone presumed to have COVID-19 without necessarily agreeing to upload your own log of contacts. In a similar vein, they note that without user control over the data this TACT protocol generates, it could record a lot of known false positives, such as times you were near other people but safely inside a car with all the windows closed.
Perhaps most importantly, the ACLU white paper states that any well-designed TACT system should not replace non-technical interventions like disease testing, and that it needs a clear exit strategy - both for the end of the pandemic and in case the system is shown to lack effectiveness - to avoid "surveillance creep." While most of the focus has been on the TACT protocols themselves, computer security professor Jaap-Henk Hoepman raises an additional concern about Apple and Google's second phase, in which they will build the TACT system into the iOS and Android operating systems themselves. While it's easy to uninstall a contact tracing app, it's nearly impossible to avoid using your phone's operating system, and once a TACT technology is built into the OS, it would be available to all sorts of applications, potentially limited neither in time nor to the specific purpose of battling the spread of COVID-19. We believe a clear plan to fully wind down any contact tracing technology is necessary to gain people's trust, and it should be more meaningful than promises not to continue using said technologies. Despite the periodic news stories about selfish behavior, from hoarding to parties, a very large number of people have been voluntarily sacrificing their time and health because they care about their communities and neighbors, and they will also be willing to sacrifice some privacy if it helps - in our view, these systems shouldn't disrespect those decisions by trying to compel people to share more than they're comfortable with.
Cryptographer Henry de Valence, one of the developers of the TCN proposal, points out another major way Google and Apple's TACT system is vulnerable to attack: even though your Daily Tracing Keys are mathematically unlinkable, people tend to keep similar daily schedules, so it's often easy to guess that two daily location traces from two different Daily Tracing Keys belong to the same person. That could allow someone who knows you visited one place on your way home to figure out where else you've been going. While in theory only smartphones that actually belong to people would participate in contact tracing, a company (or its employees) with access to a large physical network of smart devices, such as a large cluster of smart garbage cans, could also run the contact tracing protocol on those devices and correlate the Rolling Proximity Identifiers they overhear with other tracking information. This opens up a range of personal privacy risks, from identifying people who have anonymously reported a positive diagnosis to using multi-day location tracks for marketing purposes. While this is a much more complicated attack than a malicious app getting past Apple or Google's vetting as described above, it's notable because the difficulty lies primarily in setting up the physical network of internet-enabled devices, and there's little Apple or Google could do to prevent it. This is yet another reason to be cautious about automatically connecting to city-wide wifi or information kiosks - if the people who run the kiosk are able to identify you, they can potentially correlate your proximity to their devices with pseudonymous location data from other sources.
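To illustrate the concern, here's a toy sketch (entirely made-up names and data) of how a network of fixed devices could turn overheard identifiers into a location track once a day's identifiers become linkable - for instance, after a diagnosed user's daily key is published:

    # Toy sketch of the linkage risk; all data here is made up.
    from collections import defaultdict

    # Each fixed device logs the identifiers it overhears, plus where and when.
    sensor_logs = [
        (b"rpi-17", "kiosk at 5th & Main", "08:10"),
        (b"rpi-88", "kiosk at 5th & Main", "08:15"),
        (b"rpi-23", "kiosk at the riverfront", "12:40"),
    ]

    # Re-deriving identifiers from one published daily key links them all
    # to a single (pseudonymous) person for that day.
    one_persons_identifiers = {b"rpi-17", b"rpi-23"}

    track = defaultdict(list)
    for rpi, place, when in sensor_logs:
        if rpi in one_persons_identifiers:
            track["person X"].append((when, place))
    # track["person X"] is now a day-long movement trace, ready to be
    # cross-referenced with camera footage, wifi logins, or sales data.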
Until next time...
While we've got plenty of tips for keeping your computer safe from remote monitoring by your employer, we're at a loss for words if they say they want to visit you at home to help you with productivity and talk to your family about distractions.
That wraps it up for this week - thanks so much for subscribing to our newsletter! If there's a story you'd like us to cover, send us an email at looseleafsecurity@looseleafsecurity.com. See y'all next week!
-Liz & Geoffrey