Hello! The chill in New York gave us the perfect excuse to curl up with a blanket and a pot of tea while making sure the last of our accounts that use an authenticator app as a second factor had that app configured on both our current phones and our backup phones. One of Liz's accounts is a Snapchat namespace grab (an account that goes entirely unused but exists so no one else can claim Liz's most commonly used username), and Liz was particularly unhappy to have to download the Snapchat app and temporarily give it camera access just to reconfigure two-factor authentication.
If someone forwarded this to you, you can sign up yourself at https://looseleafsecurity.com/newsletter.
Tip of the week
If you want to upload a photo from your phone to a service like social media or email, there are usually two ways to do it: you can open the service's app and access your photo library or camera from there, or you can take a photo and use your phone's "Share" feature to send it to the service. The first approach will often prompt you to give the app direct access to your camera or photo library - and exposes you to the risk that a misconfigured or malicious app could turn on the camera at some later time or access or delete other photos in your photo library. (Both Android and iOS give apps an option of indirectly requesting a photo from the OS, which avoids the permission prompt, but many apps don't support this.) If you get a permission prompt, try declining it and seeing if the second approach works: switch to the photos app or camera app and use the "Share" feature to send the photo to the original service instead. This does take a couple of extra taps, but it's worth considering as a way to limit your risk, especially for apps you only occasionally use photos with. Unfortunately, a few apps require direct camera or photo access (Instagram's Stories feature is the one we're personally most annoyed at), but this tip works for the majority of apps.
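For the technically curious, here's roughly what that indirect, OS-mediated photo request looks like from an app developer's side. This is a minimal Android sketch in Kotlin (the class and variable names are ours, not from any particular app): the app hands the job to the system's own picker, so it never needs camera or photo library permissions and only ever sees the single photo you chose.

```kotlin
import android.content.Intent
import android.net.Uri
import androidx.appcompat.app.AppCompatActivity

// Minimal sketch: let the user pick one photo via the system's own picker,
// so the app never requests camera or photo-library permissions.
class AttachPhotoActivity : AppCompatActivity() {
    private val pickPhotoRequest = 1  // arbitrary request code

    // Call this when the user taps an "attach photo" button.
    fun pickPhoto() {
        val intent = Intent(Intent.ACTION_GET_CONTENT).apply {
            type = "image/*"  // only offer images in the picker
        }
        startActivityForResult(intent, pickPhotoRequest)
    }

    override fun onActivityResult(requestCode: Int, resultCode: Int, data: Intent?) {
        super.onActivityResult(requestCode, resultCode, data)
        if (requestCode == pickPhotoRequest && resultCode == RESULT_OK) {
            // The OS hands back a URI with temporary read access to just this
            // one photo; the app still can't browse the rest of your library.
            val photoUri: Uri? = data?.data
            // ... upload photoUri to the service ...
        }
    }
}
```

Apps that instead ask for blanket camera or photo library access are choosing not to use this kind of flow, which is exactly why it's worth declining the prompt and trying the "Share" route first.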
In the news
Facebook's latest product, Candid Camera: Twitter user Joshua Maddox discovered his iPhone's camera was on when the Facebook app was open because a bug in the app let him see what the camera saw behind his news feed. It doesn't seem like Facebook was trying to run the camera, but it's still a big oversight that the app kept the camera on by mistake when it had no need for it. The bug doesn't affect Android users and only appears to affect iOS 13.2.2 users. If you're using the Facebook app, it's a good idea to revoke camera access - especially since you can still share photos from the photos app, as we discussed above.
Another option is to put a physical cover over your phone's cameras, although we haven't found ones that stay on phones well without breaking easily or covering too much of the screen. Better yet, since Facebook repeatedly proves itself untrustworthy, remove the Facebook app from your phone and only use Facebook in your phone's web browser, where it will be stuck inside the browser's sandbox.
Good news, everyone! Especially those crossing the US border: A US federal court in Boston says suspicionless searches of travelers' laptops and phones at the border are a Fourth Amendment violation. This is a major win for digital privacy, and it just makes sense that you shouldn't lose your rights over your digital information simply because you're traveling. The Electronic Frontier Foundation, which filed the lawsuit along with the ACLU, has a longer discussion of the case and what they argued.
Google cleans up malware with a little help from its friends: Android's Play Store has a reputation for being more open than Apple's App Store, which means it's easier for both legitimate developers and malware authors to get their code onto Android devices. Google has been ramping up their malware detection of late, and they recently announced that they're partnering with three antivirus companies to scan apps submitted to the Play Store. It's a bit surprising to see Google, of all companies, say, "We haven't really had a way to scale as much as we've wanted to scale," which just points to how fundamentally hard checking apps is - while Apple's slow and picky review process is frequently criticized, there is merit in being cautious about new apps. Mostly, we see this as a reminder to be careful about downloading new and, especially, little-known apps and to prefer using plain old websites (which are much more limited than apps in terms of what they can access) when possible.
I like you, but I don't like like you: Instagram is testing out hiding the count of people who have "liked" a post on some US accounts, following testing earlier this year in seven other countries. WIRED's article on this change calls it "the latest step in Instagram's quest to become the safest place on the internet," which we don't quite buy. While the move hides engagement metrics, it doesn't directly make like data secret, so it's still not safe to "like" photos that would cause you problems if others found out (think content critical of your employer or related to your religion). Making like data less obvious is a small practical privacy improvement, much like removing the "Following" tab (which we talked about a couple weeks ago), because your activity isn't being broadcast to everyone who follows you. On Instagram's side, at least, they seem to be touting this more as encouraging a healthy relationship with the platform than as a "safety" measure. (We're also reminded of the time Twitter changed "favorites" with a yellow star to "likes" with a red heart, subtly changing the meaning of old "favorite" markings in a way that was sometimes misleading. If Instagram decides to discontinue the experiment, "likes" could become more visible again - possibly in a way that users didn't expect.)
Florence (Nightingale) + The Machine (Learning): The largest non-profit health system in the United States, Missouri-based Ascension, has a partnership with Google called "Project Nightingale," which covers everything from routine products like Google Docs to more interesting collaborations with Google Brain, Google's advanced artificial intelligence team. Based on reports from employees, The Wall Street Journal reported that this "secret" project gathers personal healthcare records without informing patients, prompting a blog post from Google Cloud in response as well as an investigation from federal regulators. Google says that the project complies with HIPAA, the US's healthcare privacy law, and it is true that HIPAA doesn't require patients to be specifically notified or given a chance to opt out of this sort of work - a healthcare provider can work with any third party as long as the work is just to help the provider and the third party keeps the data secure. Google also notes that HIPAA prevents them from combining this data with other data Google might have on you, such as from your Google account or their acquisition of Fitbit, and it also prevents them from sharing data between their customers - that is, hospitals or other healthcare providers - though they can still analyze data from multiple patients at a single customer. However, their post mentions they're providing "AI and ML solutions" to Ascension, and they don't specifically contradict the Journal's claim that the employees with access to individual healthcare data include some researchers at Google Brain, not just the engineers working on Google Cloud's infrastructure. It's definitely possible that Google is developing and training AIs based on the health data they now have access to.
First Bank of Google: The Wall Street Journal has another report on an upcoming Google project: they'll be offering checking accounts in the US with the help of Citibank and the Stanford Federal Credit Union. A Google exec says they don't intend to sell customer data - but that hardly seems like a relevant worry since Google itself, the world's largest ad company, can use the data. Ars Technica notes that Google might be in a good position to offer accounts to unbanked and underbanked consumers who typically do have an Android phone. Extending options to this market seems good, but if Google's angle here is to get data from customers who have few other options, it seems like a much better deal for Google than for them. We've previously discussed how Google's advertising product already knows about 70% of credit and debit card transactions in the country, thanks to partnerships with credit card companies.
The camera with built-in racism: IPVM, a website covering the physical security industry, reports that Chinese surveillance camera manufacturer Hikvision has a camera that can automatically identify demographic characteristics of people, including "gender attributes" and "racial attributes (such as Uyghur, Han)." The Han are the majority ethnicity in China, comprising over 90% of the country's residents, and the mostly-Muslim Uyghurs are a minority living primarily in the northwest - we've mentioned them previously in the context of targeted state-sponsored attacks. Hikvision, which calls itself the world's largest manufacturer of video surveillance equipment, recently ended up on a US trade blacklist because of its participation in "China's campaign of repression, mass arbitrary detention, and high-technology surveillance" against the Uyghurs and other minorities, according to the Commerce Department. Hikvision changed their marketing after IPVM's report called attention to it, but a bit over a year ago, they advertised a product capable of identifying an "ethnic minority" in surveillance video monitoring.
Quis creepstodiet ipsos creepstodes? Digital activism organization Fight for the Future sent three people wearing jumpsuits and head-mounted cell phones to conduct facial recognition surveillance on the DC Metro and the halls of Congress using Amazon's Rekognition. Their goal was to demonstrate why facial recognition technology should be banned so nobody else can do the same thing. While we agree with them on the risks of facial recognition technology and the lack of regulation, it seems to us those risks are exactly why you shouldn't pull a stunt like this, and we aren't the only ones who are uncomfortable with their demonstration. As one Twitter user put it, it's like demonstrating that arson is bad by burning down 2000 houses. If you're wondering if you got scanned, you can upload your photo and search their footage online (if you trust their website). Still, they do have a point - among other things, they ended up recognizing the faces of some journalists visiting Congress, which brings up the danger of pervasive facial recognition having a chilling effect on the press's ability to meet with sources (whether in Congress or anywhere else) and keep their identities private.
The boys of the NYPD choir are singing, "I've got your kid's fingerprints": New York state law requires that fingerprint records of arrested juveniles be destroyed after being turned over to the state's Division of Criminal Justice Services, but the NYPD had been illegally keeping fingerprint records of children it arrested in New York City - which NYC's Legal Aid Society discovered after a client was arrested based on a match against fingerprints that shouldn't have been on file. As part of their investigation, they also found that DCJS itself had kept tens of thousands of fingerprint records it was required to have destroyed. Once data is out there, whether in a government database or anywhere else, it's hard to make sure it's actually destroyed.
My voice is my passport, send me money: The Wall Street Journal has a story about a UK CEO who thought he had received an urgent call from the CEO of his company's German owner, directing him to transfer EUR 220,000 to an account in Hungary. The call was actually from a scammer using "deepfake" AI to synthesize a voice that sounded like the boss's, and the victim said he recognized the "slight German accent and the melody of his voice." Sophos has an article about the attack, pointing out that while scam emails demanding urgent money transfers are nothing new and neither are deepfaked voices or even videos, the combination of the two to make convincing scam calls is a new twist. The ability to conduct this attack is apparently now well within the reach of the general public - there are commercial services for generating deepfaked voices. If you get a call from a coworker or family member from an unknown number demanding that you make an unusual transfer of money (a common scam, at least by email, involves an extended family member supposedly stuck in a foreign country), try to contact them via some other means, like an end-to-end-secure chat app, to confirm it's really them.
The beginning of the end for SMS: Google is ready to roll out Rich Communication Services (RCS) to all Android users in the US. RCS, sometimes branded as just "Chat," is a replacement for SMS with multimedia support that's been under development for many years. Recently, Google has been driving it forward and working with carriers to implement it in an effort to provide an Android equivalent of Apple's iMessage - reliable multimedia chat that works across carriers and is based on just phone numbers. Although RCS doesn't have the major security vulnerabilities that have plagued SMS and MMS and uses secure connections within its infrastructure, it still lacks end-to-end encryption between users, a major feature of iMessage. Having text messages be less spoofable and more reliable (at least between Android users) is great, but we'll still try to avoid using them for things like two-factor authentication codes.
iOS jailbreaking is back: A little over a month ago, we covered an unfixable bug in the boot ROM of all but the most recent generation of iPhones and iPads, noting that it was more useful for the development of jailbreaks than for exploits. There's now a jailbreak for it, named checkra1n, and as expected, it's a "tethered" jailbreak: the boot ROM vulnerability needs to be exploited anew every time the phone turns on, since the rest of the phone remains secure and the exploit doesn't persist across reboots. We're not totally convinced that the tradeoffs of jailbreaking are worth it (which we'll discuss in detail in an upcoming episode) - in particular, this jailbreak deactivates the iOS sandbox, one of the most important security protections in iOS. For those of us who aren't jailbreaking and are using an affected model of iPhone or iPad, it's good to know that the jailbreak is tethered: if the phone leaves your possession, make sure to reboot it when you get it back before entering your passcode, in case someone attempted to exploit it. (Of course, this assumes there aren't unknown vulnerabilities that would enable a persistent, "untethered" attack.)
New Intel CPU bug... from May: VUSec, the security research group at the Vrije Universiteit Amsterdam, announced a "new" security bug in Intel CPUs in the same vein as their May disclosure of "microarchitectural data-sampling attacks" - bugs that allow one program running on a CPU to inappropriately learn information about data being processed by another program. In fact, these vulnerabilities aren't new at all, but due to a miscommunication, Intel missed them when VUSec originally disclosed them last year along with the other MDS bugs, so the CPU microcode updates released in May didn't fix them. VUSec has some pretty harsh words about Intel's process and their "piecemeal (variant-by-variant) mitigation approach," saying they're worried about Intel's "complete lack of security engineering and underlying root cause analysis" and that without public pressure, "[similar] vulnerabilities won't disappear any time soon." They've also posted a video of a non-admin program reading the administrator account's password entry in 30 seconds using this attack (with a display that looks entirely too much like movie-style password hacking).
Most computer users probably don't expect that their CPU is itself a programmable device running "microcode," software that not only can be updated but also needs regular security updates, just like an operating system or browser. Operating systems typically have a way to update CPU microcode, most often via a BIOS / firmware update that loads the new microcode when the computer turns on, so this is another reason to apply security updates promptly.
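If you're curious which microcode revision your machine is currently running, here's a minimal sketch (in Kotlin, and assuming a Linux system, where the kernel reports the loaded revision in /proc/cpuinfo; other operating systems expose this differently):

```kotlin
import java.io.File

// Minimal sketch, Linux only: print the microcode revision(s) the kernel has
// loaded, as reported by the "microcode : 0x..." lines in /proc/cpuinfo.
fun main() {
    val revisions = File("/proc/cpuinfo").readLines()
        .filter { it.startsWith("microcode") }
        .map { it.substringAfter(":").trim() }
        .toSet()
    println("Loaded microcode revision(s): $revisions")
}
```

If that revision doesn't change after you install a firmware or OS update that's supposed to include new microcode, the update may not have actually taken effect.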
What we're reading
Algorithmic credit limits are the worst kind of gender reveal party: The Apple Card made a big splash this past week because its algorithm has been assigning much higher credit limits to men than to women. Goldman Sachs, the bank behind the Apple Card, tweeted that their credit limit algorithms "never make decisions based on factors like gender" and that "they do not know your gender or marital status during the Apple Card application process," but algorithms can still be biased by factors that are not explicitly used as inputs. One notable recent example comes to mind: insurance giant UnitedHealthcare is under regulatory scrutiny after a study in Science found racial biases in an algorithm that finds patients who need extra care. The algorithm was programmed to use healthcare spending to infer which patients had more serious diseases, but because of systemic biases in access to healthcare, black patients on average spend $1,800 less in medical costs annually than equally sick white patients, according to the study - which the algorithm naively used to conclude that black patients needed healthcare less. Rachel Thomas, the Director of USF's Center for Applied Data Ethics, has an excellent thread full of examples where machine learning algorithms end up treating unspecified inputs as important factors, and Slate discusses opaque algorithms, their potential biases, and how to account for those biases with Cathy O'Neil, the mathematician who wrote Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. The New York State Department of Financial Services is investigating the Apple Card's algorithm, noting that "[a]ny algorithm that intentionally or not results in discriminatory treatment of women or any other protected class violates New York law."
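To make the "inputs you didn't specify" problem concrete, here's a toy sketch (in Kotlin, with entirely invented numbers) of how a model that never sees race can still produce racially skewed results by keying off a correlated proxy like healthcare spending, along the lines of the Science study above:

```kotlin
// Toy illustration with invented numbers - not any real system's code.
data class Patient(val race: String, val severity: Int, val annualSpend: Double)

fun main() {
    // Equally sick patients, but systemic gaps in access to care mean lower
    // spending for the black patients in this made-up data set.
    val patients = listOf(
        Patient("white", severity = 7, annualSpend = 9_000.0),
        Patient("black", severity = 7, annualSpend = 7_200.0),
        Patient("white", severity = 8, annualSpend = 9_500.0),
        Patient("black", severity = 8, annualSpend = 7_700.0)
    )

    // "Race-blind" rule: flag patients for extra care when spending is high,
    // on the assumption that spending tracks how sick someone is.
    val flagged = patients.filter { it.annualSpend > 8_000.0 }

    // Despite identical severity scores, only the white patients get flagged.
    flagged.forEach { println("Flagged for extra care: ${it.race}, severity ${it.severity}") }
}
```

Nothing in that decision rule mentions race, yet the outcomes split cleanly along racial lines - which is why "we don't use gender as an input" isn't, by itself, evidence that an algorithm treats men and women the same.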
Credit limits aren't a digital security or privacy issue per se - though Apple does tout security and privacy as key features of the Apple Card - but opaque algorithms are increasingly used in security, too: everything from detecting when someone other than you is logging into your account or trying to use your debit card to facial recognition and surveillance. As the Slate interview points out, the Apple Card's credit limit algorithm "is one of the most used consumer-facing algorithms of all, and even this is very difficult to understand. But think about the algorithms that we don't even know are being applied."
Who needs a pregnancy test when you have Facebook? Reporter Talia Shadwell suddenly started getting ads for baby products on Facebook, and she initially guessed these ads started cropping up because she had turned 30 and may have entered a new targeted advertising bracket. However, she later realized she had forgotten to log her last period in her period-tracking app, and once she corrected the tracker with her latest menstrual data, the ads disappeared. As advertisement algorithms are opaque to social media users, Shadwell can't say for sure that the missing period data is what triggered the ads for baby products to appear in her feeds, and unfortunately, how tech companies share our data is similarly opaque.
The Telescreen is coming: The New York Times had a pretty good opinion piece recently from professors Evan Selinger and Woodrow Hartzog making the case for banning facial recognition in both the public and private sector. If you've got a subscription (or free NYT reads remaining this month), it's worth checking out.
And now for what not to do...
For some reason, banks seem to be behind the curve on account security. We're unfortunately no longer surprised by things like weak two-factor authentication or even password length limits, but FinecoBank has a particularly disheartening approach. The Italian bank only looks at the first eight characters of your password - to which we'd say you should generate a random password with a password manager - but they suggest that you Google your password to make sure it hasn't been used before. They also appear to charge a euro for password resets, and it's small comfort that this only seems to apply to resets by paper mail rather than by email. Of all the places a bank could make money, charging users to keep their accounts secure seems like a mistake for both the customers and the bank.
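To illustrate just how much an eight-character limit gives away, here's a hypothetical sketch in Kotlin (emphatically not FinecoBank's actual code) of a login check that only looks at the first eight characters - any two passwords that agree on those characters are interchangeable, so everything past the eighth character adds no security at all:

```kotlin
// Hypothetical sketch of a truncating password check - not any real bank's code.
fun truncatedCheck(storedPassword: String, attempt: String): Boolean =
    storedPassword.take(8) == attempt.take(8)

fun main() {
    val realPassword = "correcthorsebatterystaple"   // a long, strong passphrase...
    val attackerGuess = "correcthXXXX"               // ...but only 8 characters count
    println(truncatedCheck(realPassword, attackerGuess))  // prints "true"
}
```

A real system would at least hash the stored value rather than compare strings, but hashing only the first eight characters has the same effect: the rest of your carefully generated password is simply ignored.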
That's all we have for this week - thanks again for subscribing to our newsletter! If there's a story you'd like us to cover, send us an email at looseleafsecurity@looseleafsecurity.com. Until next time, we'll be staying warm and giving thanks for free-of-charge password changes.
-Liz & Geoffrey