Happy Friday! We hope you had a good Halloween and didn't get any candies laced with marijuana as police departments had been warning about - but of course you didn't, it's an urban legend, much like the fears from a few decades ago about candies with razor blades inside. Even police departments sending out these warnings have no record of it happening, and a bit of common sense shows why not: "edibles" are far too expensive for anyone to want to give out for free, and they also look nothing like children's candy. From a computer security perspective, we'd say that this worry demonstrates a lack of realistic threat modeling and an undeserved fear of the spookiest and scariest risks, without assessing how likely they are.
Threat modeling is, essentially, a systematic approach to determining what can go wrong and what we're going to do to protect ourselves. The first step is to accurately describe what we want to protect and who we're protecting against. (Criminal justice professor Joel Best tracks reported stories of drugs and harmful objects in Halloween candy, many of which turn out to have nothing to do with Halloween - for instance, after a 1970 tragedy where a 5-year-old died after eating heroin, the media reported it was in Halloween candy, but it turns out the child had found the heroin in a relative's home. A threat model that focuses on "protect the candy" and not "protect the child" would have missed this.) Next, think comprehensively about what possible attacks there are and how likely they are. Oftentimes this involves looking at how expensive an attack is and what an attacker's incentives are: it's why journalists and activists need to be more worried than the general public about novel "zero-day" attacks, because finding a previously unknown vulnerability is difficult research and the organizations that can afford it are likely to be cautious about how they play their cards. Finally, decide how you're going to protect against the actual threats you've identified. (Dr. Best's page points out that while some hospitals offer to X-ray Halloween candy, this will only detect razor blades, not drugs.)
The FDA has some useful suggestions about Halloween safety based on more realistic threats: since research shows more kids are hit by cars on Halloween, it's a good idea to wear reflective clothing, and since you can get infections from contact lenses, don't buy them from a street vendor and stick them in your eyes. Some of these risks are still more worth worrying about than others, but - as with computer security - paying attention to the most mundane risks is generally the place to get started.
If someone forwarded this to you, you can sign up yourself at https://looseleafsecurity.com/newsletter.
Tip of the week
We've talked a bit about threat modeling before: in particular, in our December 2018 security stories episode, we discuss the importance of doing your own threat modeling for how an attacker might get into sensitive data on your phone. Some apps, like password managers or mobile banking apps, let you require additional authentication when opening the app - but that adds little if it's the same authentication mechanism you already use to unlock your phone. Other apps let you access them while the phone is locked, so you should be comfortable with the risk that someone who gets your phone can use those features. Some people lock their phones to protect against a lost device, but others want to protect against people physically near them - an untrustworthy roommate, perhaps, or, as happened to one cryptography professor, a child trying to use their sleeping parent's thumbprint.
It's worth taking a moment to trace through how the most sensitive data on your phone, laptop, or other devices is protected. For example, if you have both your password manager and your second-factor authentication codes on apps on your phone, you might want to make sure you can't unlock both of them with just a single authentication method like your fingerprint.
In the news
Chrome security update: Google announced two security fixes for Chrome in the latest update, noting that one of them is currently being exploited in the wild, making this a so-called "zero-day" vulnerability. Chrome updates itself automatically, but the update only takes effect when you restart the browser. If you see a little update icon at the right side of the address bar, it's probably a good idea to save your work and restart when you get a chance.
iOS and macOS updates: Apple has released security updates for macOS, iOS/iPadOS, Safari, watchOS, and even tvOS, all of which fix pretty impactful vulnerabilities, so make sure to hit that software update button. In addition to the security fixes, macOS 10.15.1 and iOS 13.2 also add dozens of new emoji including Mate (a South American caffeinated drink brewed from yerba mate leaves), Safety Vest (for keeping your favorite trick-or-treaters visible), and Sloth (probably Geoffrey's Patronus, but Liz is still holding out hope for a wombat emoji), as well as an option to control whether your Siri recordings are sent to Apple for analysis - after controversy about contractors listening to what you told Siri, Apple has moved all their analysis in-house and now gives you an option to disable it entirely.
Apple also seems to be more active about providing updates for older devices - iOS 12.4.3 is available for devices that couldn't upgrade to iOS 13, like the iPhone 6 and the original iPad Air. Apple's security updates page doesn't yet list what bugs are fixed, but indicates that there are security fixes in this update. AppleInsider also says that they found "some improvements to two-factor authentication," but didn't expand on what they are. Furthermore, watchOS 6 is now available for all Apple Watches (it previously wasn't available on the Series 1). They also made watchOS 5.3.3 available for all but the current generation of watches; Ars Technica notes that this is because watchOS 6 requires iOS 13, so people with iOS 12-only phones need to stay on watchOS 5.
Fall back... by 20 years?: Apple published a warning to iPhone 5 users to apply the latest software update before this Sunday, November 3, because it fixes an issue with GPS time rollover. GPS satellites broadcast the current time and date in a format that numbers the weeks from 0 through 1023, and 1024 weeks is just under 20 years - so when the week counter wraps around, software that hasn't been updated sees the current time jump back 20 years. (This is effectively the same sort of issue as the Y2K bug.) This isn't the first rollover - week 1024 arrived in August 1999, but far fewer devices automatically picked up their time from GPS back then. Week 2048 arrived this past April, but Apple phones (and other devices) have an offset of 30 weeks, which brings the effective rollover to the end of this week. If you don't update, on Sunday morning, your phone will believe that it's March 2000 again and that the dot-com bubble shows no signs of stopping. Newer Apple devices aren't affected (they probably have a much higher offset, since they were released well after 2000).
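To make the wraparound concrete, here's a small sketch in Python of how firmware typically turns the 10-bit broadcast week number back into a date: it maps the broadcast value into a fixed 1024-week window starting at a "pivot" week. The pivot of 1054 below is our own illustration of the roughly-30-week offset described above, not Apple's actual firmware logic.

```python
from datetime import datetime, timedelta

GPS_EPOCH = datetime(1980, 1, 6)  # GPS week 0 began on this date
ERA = 1024                        # the broadcast week counter is only 10 bits

def decode_week(broadcast_week: int, pivot_week: int) -> datetime:
    """Map a 10-bit broadcast week into the 1024-week window
    beginning at pivot_week, and convert it to a calendar date."""
    true_week = pivot_week + (broadcast_week - pivot_week) % ERA
    return GPS_EPOCH + timedelta(weeks=true_week)

# Suppose the firmware's window starts at week 1054 (the 1999 rollover
# plus a ~30-week offset), covering roughly March 2000 through early
# November 2019:
PIVOT = 1054

# True week 2000 broadcasts as 2000 % 1024 = 976 and decodes correctly:
print(decode_week(976, PIVOT))          # -> 2018-05-06 (week 2000)

# But once the true week passes 2077, the counter wraps: true week 2078
# broadcasts as 30, which decodes back to week 1054 - i.e., the clock
# jumps to March 2000 unless the firmware is updated:
print(decode_week(2078 % ERA, PIVOT))   # -> 2000-03-19 (week 1054)
```

Any broadcast week that truly falls inside the window decodes correctly; the moment the real date passes the end of the window, the decoded date snaps back to the window's start - which is why an un-updated iPhone 5 wakes up in March 2000.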
Notably, Apple is warning that if you don't update by Saturday, you won't be able to install the update normally. This isn't surprising: most secure protocols (including HTTPS) are set up so that digital certificates have an issuance date and an expiry date and aren't valid outside that period. Much like the expiry date on a regular ID card, this reduces the risk from old certificates falling into the wrong hands. If your device's clock isn't set to the right time, though, it might falsely conclude that a certificate isn't currently valid. If you get into this state with your iPhone, you might need to install the update by connecting your phone to a computer and restoring it from backup.
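The validity check itself is simple - the trouble is that it runs against the device's own clock. Here's a minimal sketch (the dates are hypothetical, and real HTTPS libraries perform this check as one step of full certificate verification):

```python
from datetime import datetime

def cert_valid_now(not_before: datetime, not_after: datetime,
                   device_clock: datetime) -> bool:
    """A certificate is only accepted while the device's clock falls
    inside its validity window - so a clock that has rolled back 20
    years rejects perfectly good certificates."""
    return not_before <= device_clock <= not_after

# A hypothetical certificate issued in 2019 and valid for two years:
nb, na = datetime(2019, 1, 1), datetime(2021, 1, 1)

print(cert_valid_now(nb, na, datetime(2019, 11, 3)))  # True: clock is correct
print(cert_valid_now(nb, na, datetime(2000, 3, 19)))  # False: clock rolled back
```

A device stuck in March 2000 sees every modern certificate as "not yet valid" - including the certificates protecting the update servers themselves, which is why the normal over-the-air update path stops working.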
By the way, this update is to iOS 10, the last version supported on the iPhone 5 (and the 4th-generation iPad). There's also a similar update for iOS 9, the last version supported on the iPhone 4s and various iPads from around 2012. While this trend of Apple providing software updates for older OS versions and older devices is very welcome, this isn't a security update (it isn't available for the non-GPS iPad models), and there's no guarantee that all the known security bugs in iOS 9 and 10 have been fixed. Until there's a track record of Apple providing frequent and timely security updates to the older OSes, we'd still worry about our exposure if we had an old device - e.g., Geoffrey still has a 3rd-generation iPad but doesn't actually use it for anything other than reading PDFs, and it certainly isn't logged into a password manager or anything. If you're using an iOS 13-incompatible device regularly, it's probably time to upgrade.
A product recall on the app store: Wandera, a company that builds mobile phone security products for companies, discovered 17 malware apps on the iOS App Store that evaded detection because they loaded their malicious instructions from a remote server after the app was installed. The apps weren't doing anything extremely harmful: they were fraudulently simulating clicks on online ads to generate ad revenue, which mostly affects the phone's owner only by using up bandwidth and battery. The same server, however, was previously observed collecting personal data from malicious Android apps. Wandera didn't establish whether the company behind the apps was itself malicious or simply had its apps infected by some other party. While the App Store and Google Play Store review processes are generally pretty good, this is a reminder that bad apps do slip through sometimes, and it's worth being careful about downloading apps from less well-known developers - especially if they're for things that could be adequately answered by a website, like "Live Cricket Scores" or "BMI Calculator."
The call is coming from inside the customer service department: FOX 11 in Los Angeles reports that AT&T is being sued by a customer over a SIM-swapping attack that resulted in him losing $1.8M. "SIM-swapping" or "SIM-jacking" is an attack where someone contacts a victim's cell phone provider pretending to be them and gets the victim's cellular service transferred to the attacker's phone - at that point, the attacker can send and receive texts and phone calls using the victim's phone number. The person suing AT&T says he went to a store to get his account back, and while he was in the store working with the employees, the attacker took his account again - he claims this is because two AT&T employees were helping with the attack. FOX 11 previously reported on several other SIM-swapping attacks and the prosecutions of the people behind them: in one case, a Verizon employee was paid a $3,500 bribe to hand over customer information, which the attackers used to impersonate the customer. Another victim is also suing AT&T for not effectively stopping these attacks, and in July, a judge ruled that the lawsuit can proceed. In yet another case, a TV journalist says that after her AT&T account was hijacked, she enabled all the extra security features to prevent SIM swapping, including requiring that she show up in person with ID - but she was still SIM-swapped again two months later.
We're hopeful that legal pressure will force cell companies to implement better protections against SIM-swapping, but in the meantime, it's very important to remember that cell phone connections are not secure or trustworthy. Many of the victims here had significant holdings in cryptocurrency, which has the property of being easy to move without detection - a disadvantage in this case - and presumably they had stored their cryptocurrency with online providers that allowed them to reset passwords via SMS. In terms of threat modeling, if you have millions of dollars tied to just a text message to your phone, spending a few thousand to bribe an employee is obviously worthwhile to an attacker. Avoid relying on SMS as either a second factor or as a password reset mechanism for any accounts where you can, set strong and unique passwords, and if at all possible, try to keep at least an emergency fund entirely separate from your usual accounts.
Don't bank on liability protection: In a similar story of people losing their life savings via online attacks, CBC reports that banks in Canada are refusing to accept liability when hackers break into online accounts. It seems that in many of these cases, the attacks are on the victim's own computers: in one case, Scotiabank says the malicious transfer came from the same IP address the customer regularly used, and in another case, a password was captured via a keylogger. As with SIM-swapping, we think the liability here ought not to be on the customer - a compromised laptop isn't too different from a stolen checkbook, and there are standard safeguards in that case. Unfortunately, banks don't currently see it that way, so it's worth taking your own steps to be extra safe. We'd suggest making sure your computer and phone are secure (check out our episodes "Malware, antivirus, and safe downloads" and "Securing your phone") and also enabling push notifications for all bank activity on your phone - that way, if something unexpected happens, you'll know about it promptly.
Why hack phones when you can hack the network? Ars Technica reports on malware that infects the computers used to route SMS messages within a cell phone company's infrastructure and then snoops on the messages. This is yet another reason why you shouldn't trust text-message-based authentication or expect text messages to be private.
Facial recognition in Australia: Australia's Department of Home Affairs has proposed to use facial recognition to make sure minors aren't signing up for adult websites with their parents' ID. Evidently, they already have ID photos in a facial recognition database and just need legal approval for it to be used for this purpose - which is uncomfortably close to the national facial recognition databases in China and India we talked about two weeks ago. We're also not quite sure which minors are furtively signing up for naughty websites using their parents' real names, and this makes us worry this is just a foothold for mandating facial recognition for home internet in other cases.
What we're reading
Upon your wall for the world to see: You might not expect that photos you uploaded to Flickr back in 2005 could now be in what The New York Times calls "an unprecedentedly huge facial-recognition database," but 4 million at-least-once-public photos of 672,000 people uploaded to Flickr are now part of MegaFace. In 2014, Yahoo! created a database of links to images uploaded to its subsidiary Flickr so that more researchers could access a broader database of faces, and that data, distributed through links to Flickr images, wasn't anonymized. Yahoo! believed that by distributing links to Flickr images instead of the images themselves, users who later made their photos private would be able to exclude themselves from future research - but Flickr users weren't notified that links to their images were being included, and setting your photos to private didn't reliably remove them from the dataset: a security vulnerability allowed Flickr photos to be accessed even after they were made private. Some researchers also downloaded the images and redistributed them themselves. Two computer science professors at the University of Washington, Ira Kemelmacher-Shlizerman and Steve Seitz, and their graduate students were among those who downloaded Flickr images from Yahoo's dataset, and that's how MegaFace, a very large public facial-recognition dataset, was born.
Most Flickr users don't have a lot of recourse here - facial recognition and biometric information laws are still somewhat rare, though residents of Illinois may have a claim over commercial use of the scans of the photos (but not the distribution of the photos themselves) under the Biometric Information Privacy Act. Even in Illinois, though, some core legal questions haven't been answered yet, like what happens with photos uploaded in Illinois that are then processed in another state, and we're highly interested in how the courts will interpret this law.
Civil recourse considerations aside, this story sits at the intersection of two themes that come up often in the stories we cover: how our data and its uses change over time, and how much of the burden of privacy falls on individuals. The threat model of uploading a photo publicly to the internet in 2005 felt very different than it does now - police departments weren't routinely running facial recognition on smartphone photos to try to identify suspects, and social media sites weren't trying to get you to upload as many photos as possible and then automatically tag you and your friends in them - and it's particularly frustrating that Yahoo! didn't tell Flickr users that their data would be used in ways they'd never have expected when they signed up, simply because Yahoo! technically didn't need to. We at Loose Leaf Security don't think it's reasonable for all of the burden of privacy to fall on individuals - companies should make it easy for you to keep your accounts secure and understand how your data is used. We find that burden especially unreasonable as data and its uses evolve, because the companies driving the changing technologies have substantially more insight into the consequences of their new innovations than individuals do, making truly informed consent even harder.
Ghostbnb: Airbnb's lax policies around hosts changing reservations and its misaligned feedback incentives make the platform particularly ripe for scamming. Vice chronicles a particular Airbnb scam, and while it isn't exactly a personal digital security issue, it highlights a lesser-known safety issue for Airbnb users. Going back to our earlier discussion of threat models, it's not always the scariest-sounding threat - say, your hosts turning out to be the main antagonists in a horror movie - that's most likely to put you at risk.
Untitled closing section
Untitled Goose Game, a video game where you play a "horrible goose" doing mischief in an otherwise-idyllic village, has been wildly popular since its release a month ago. It's been so popular, in fact, that a security researcher took a look at how it loads saved games and found an exploitable bug based on an approach originally described by a Google Project Zero researcher. It isn't really a practical security problem, as you're probably not downloading saved games from the internet and anyone who can modify your saved games on your computer can certainly cause bigger trouble for you. Still, it's eerily appropriate for a game that has a goose sneak up on an unsuspecting groundskeeper and pickpocket his keys: similarly, a saved game file can sneak up on the application itself and cause it to execute arbitrary code.
That wraps it up for this week - thanks so much for subscribing to our newsletter! If there's a story you'd like us to cover, send us an email at firstname.lastname@example.org. See y'all next week!
-Liz & Geoffrey