[2024.11.26] [
https://www.schneier.com/blog/archives/2024/11/what-graykey-can-and-cant-unlock.html]
This is from 404 Media [
https://www.404media.co/leaked-documents-show-what-phones-secretive-tech-graykey-can-unlock-2/]:
The Graykey, a phone unlocking and forensics tool that is used by law
enforcement around the world, is only able to retrieve partial data from
all modern iPhones that run iOS 18 or iOS 18.0.1, which are two recently released versions of Apple’s mobile operating system, according to
documents describing the tool’s capabilities in granular detail obtained by 404 Media. The documents do not appear to contain information about what Graykey can access from the public release of iOS 18.1, which was released
on October 28.
More information [
https://appleinsider.com/articles/24/11/19/leak-what-law-enforcement-can-unlock-with-the-graykey-iphone-hacking-tool]:
Meanwhile, Graykey’s performance with Android phones varies, largely due
to the diversity of devices and manufacturers. On Google’s Pixel lineup, Graykey can only partially access data from the latest Pixel 9 when in an “After First Unlock” (AFU) state -- where the phone has been unlocked at least once since being powered on.
** *** ***** ******* *********** *************
** NSO GROUP SPIES ON PEOPLE ON BEHALF OF GOVERNMENTS ------------------------------------------------------------
[2024.11.27] [
https://www.schneier.com/blog/archives/2024/11/nso-group-spies-on-people-on-behalf-of-governments.html]
The Israeli company NSO Group sells Pegasus spyware to countries around the world (including countries like Saudi Arabia, UAE, India, Mexico, Morocco
and Rwanda). We assumed that those countries use the spyware themselves.
Now we’ve learned [
https://www.theguardian.com/technology/2024/nov/14/nso-pegasus-spyware-whatsapp]
that that’s not true: that NSO Group employees operate the spyware on
behalf of their customers.
Legal documents released in ongoing US litigation between NSO Group and
WhatsApp [
https://www.theguardian.com/technology/2024/feb/29/pegasus-surveillance-code-whatsapp-meta-lawsuit-nso-group]
have revealed for the first time that the Israeli cyberweapons maker -- and
not its government customers -- is the party that “installs and extracts” information from mobile phones targeted by the company’s hacking software.
** *** ***** ******* *********** *************
** RACE CONDITION ATTACKS AGAINST LLMS ------------------------------------------------------------
[2024.11.29] [
https://www.schneier.com/blog/archives/2024/11/race-condition-attacks-against-llms.html]
These are two attacks [
https://www.knostic.ai/blog/introducing-a-new-class-of-ai-attacks-flowbreaking]
against the system components surrounding LLMs:
We propose that LLM Flowbreaking, following jailbreaking and prompt
injection, joins as the third on the growing list of LLM attack types.
Flowbreaking is less about whether prompt or response guardrails can be
bypassed, and more about whether user inputs and generated model outputs
can adversely affect these other components in the broader implemented
system.

[...]

When confronted with a sensitive topic, Microsoft 365 Copilot and ChatGPT
answer questions that their first-line guardrails are supposed to stop.
After a few lines of text they halt -- seemingly having “second thoughts”
-- before retracting the original answer (also known as Clawback), and
replacing it with a new one without the offensive content, or a simple
error message. We call this attack “Second Thoughts.”

[...]

After asking the LLM a question, if the user clicks the Stop button while
the answer is still streaming, the LLM will not engage its second-line
guardrails. As a result, the LLM will provide the user with the answer
generated thus far, even though it violates system policies. In other
words, pressing the Stop button halts not only the answer generation but
also the guardrails sequence. If the stop button isn’t pressed, then
“Second Thoughts” is triggered.
What’s interesting here is that the model itself isn’t being exploited. It’s the code around the model:
By attacking the application architecture components surrounding the
model, and specifically the guardrails, we manipulate or disrupt the
logical chain of the system, taking these components out of sync with the intended data flow, or otherwise exploiting them, or, in turn, manipulating
the interaction between these components in the logical chain of the application implementation.
In modern LLM systems, there is a lot of code between what you type and
what the LLM receives, and between what the LLM produces and what you see.
All of that code is exploitable, and I expect many more vulnerabilities to
be discovered in the coming year.
** *** ***** ******* *********** *************
** DETAILS ABOUT THE IOS INACTIVITY REBOOT FEATURE ------------------------------------------------------------
[2024.12.02] [
https://www.schneier.com/blog/archives/2024/12/details-about-the-ios-inactivity-reboot-feature.html]
I recently wrote about [
https://www.schneier.com/blog/archives/2024/11/new-ios-security-feature-makes-it-harder-for-police-to-unlock-seized-phones.html]
the new iOS feature that forces an iPhone to reboot after it’s been
inactive for a longish period of time.
Here are the technical details [
https://naehrdine.blogspot.com/2024/11/reverse-engineering-ios-18-inactivity.html],
discovered through reverse engineering. The feature triggers after
seventy-two hours of inactivity, even if the phone remains connected to Wi-Fi.
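Conceptually the feature is just a watchdog armed by the last unlock. Here is a minimal sketch of that idea -- the class and method names are hypothetical, and Apple’s real implementation is in the kernel and Secure Enclave per the write-up, not application code:

```python
import time

INACTIVITY_LIMIT = 72 * 60 * 60  # 72 hours, in seconds


class InactivityRebootTimer:
    """Toy model of the inactivity reboot: a watchdog rearmed on every
    successful unlock, firing after 72 hours regardless of network state."""

    def __init__(self, now=time.monotonic):
        self.now = now                 # injectable clock, for testing
        self.last_unlock = self.now()

    def on_unlock(self):
        self.last_unlock = self.now()  # any unlock rearms the timer

    def should_reboot(self):
        return self.now() - self.last_unlock >= INACTIVITY_LIMIT


# Simulated clock, so we don't wait 72 hours to see the behavior:
clock = [0.0]
t = InactivityRebootTimer(now=lambda: clock[0])
clock[0] = 71 * 3600
assert not t.should_reboot()  # still within the 72-hour window
clock[0] = 73 * 3600
assert t.should_reboot()      # fires: device reboots into the BFU state
t.on_unlock()
assert not t.should_reboot()  # unlocking rearms the watchdog
```

The security value is in what the reboot resets: a rebooted phone is in the “Before First Unlock” state, where data is still encrypted at rest and forensic tools like Graykey recover far less.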
** *** ***** ******* *********** *************
** ALGORITHMS ARE COMING FOR DEMOCRACY -- BUT IT’S NOT ALL BAD ------------------------------------------------------------
[2024.12.03] [
https://www.schneier.com/blog/archives/2024/12/algorithms-are-coming-for-democracy-but-its-not-all-bad.html]
In 2025, AI is poised to change every aspect of democratic politics
---
* Origin: High Portable Tosser at my node (21:1/229.1)