The new Apple Intelligence feature is helpful in many ways. However, I just discovered one case where it increased risk rather than reducing it.
It has a feature that auto-prioritizes emails, raising the ones it thinks are high priority to the top of the list with a banner to emphasize their importance. Except... in this case, it emphasized a phishing email:
Why do companies think it is a good idea to ask you for a 5-star review? Could they not have more self-confidence, simply ask for a review, and let you decide what that review should look like?
I recently started up a 91-gallon reef aquarium. One thing that is unique about an aquarium versus other pets is just how dependent it is on stable electricity. A cat, dog, or bird won't care too much if the power goes out for a day, but fish, coral, and inverts will die without proper water circulation - some within an hour or two of no water movement.
The electrical grid is very stable here in southern Alberta, but it has still gone out once or twice in the past three years. If you care for your reef aquarium inhabitants (which you should if you are in the hobby), you will need to make sure they are provided for when the inevitable strikes.
There are several ways to provide backup power for your aquarium. From least elaborate / expensive to most, here is an incomplete list (a rough runtime estimate follows below):

1. 24V battery and a simple relay to switch power.
2. Dedicated aquarium pump / wavemaker backup unit.
3. Small computer UPS.
4. Large computer UPS with extended runtime batteries.
5. Gasoline generator.
6. Whole-house battery backup such as Tesla's.
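To judge which of these fits your tank, it helps to estimate how long a given battery will carry your essential equipment. Here is a minimal sketch of that arithmetic; all the wattage and capacity numbers are illustrative assumptions, not measurements from my setup:

```python
# Rough UPS runtime estimate for aquarium life support.
# All numbers below are illustrative assumptions, not measured values.

def runtime_hours(battery_wh: float, load_w: float,
                  inverter_efficiency: float = 0.85,
                  usable_fraction: float = 0.8) -> float:
    """Estimate hours of runtime from battery capacity and load.

    battery_wh: nominal battery capacity in watt-hours
    load_w: combined draw of the equipment you keep running
    inverter_efficiency: DC-to-AC conversion losses (assumed ~85%)
    usable_fraction: don't fully drain the battery (assumed 80%)
    """
    usable_wh = battery_wh * usable_fraction * inverter_efficiency
    return usable_wh / load_w

# Example: a small UPS with a 12 V, 9 Ah battery (~108 Wh)
# running only a 15 W circulation pump.
print(f"{runtime_hours(108, 15):.1f} hours")   # ~4.9 hours

# Same pump on a 24 V, 50 Ah battery (~1200 Wh).
print(f"{runtime_hours(1200, 15):.1f} hours")  # ~54 hours
```

The point the arithmetic makes: during an outage you only need to keep water moving, so carrying a single circulation pump rather than heaters and lights stretches even a small battery a long way.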
I have been wondering a bit about the news that Apple Intelligence will link out to ChatGPT and other LLMs from Siri when needed, as opposed to introducing their own in-house AI. With a market capitalization north of two trillion dollars and a year of lead time, you'd think they could develop their own LLM.
Thinking more after my last post, I believe the reason Apple did this is indemnification. They realize AI is shit, but they also understand the pressure from investors and the public to participate in the AI madness.
Clever.
I've said it before, but generative LLMs are not to be blindly trusted. I am not just talking about hallucinations; I am talking about a model understanding a problem, showing you its intermediate logic, and then coming to a 90% accurate conclusion. See below for a recent interaction with GPT-4:
Me: Decode the following ciphertext
ChatGPT: Sure, I can help with that. Please provide the ciphertext that you would like me to decode.
Me: Pm ol ohk hufaopun jvumpkluaphs av zhf, ol dyval pa pu jpwoly, aoha pz, if zv johunpun aol vykly vm aol slaalyz vm aol hswohila, aoha uva h dvyk jvbsk il thkl vba huk 1.
ChatGPT: The ciphertext appears to be encoded with a simple substitution cipher, likely a Caesar cipher, given the structure of the text. One common method to start decoding such ciphers is by trying out different shifts. Let's apply a common Caesar cipher shift of 7 and see if it decodes into readable text.
Here's how we can decode it:
- Identify the shift used in the Caesar cipher.
- Shift each letter in the ciphertext backward by the same amount to get the plaintext.
Let's try shifting backward by 7 positions (since 'Pm' starts at 'C' with a backward shift of 7):
Given the apparent regularity, let's decode the whole ciphertext with a shift of 7:
Pm ol ohk hufaopun jvumpkluaphs av zhf, ol dyval pa pu jpwoly, aoha pz, if zv johunpun aol vykly vm aol slaalyz vm aol hswohila, aoha uva h dvyk jvbsk il thkl vba huk 1.
Decoding using the shift of 7:
If he had explained sufficiently to say, he would be in secret, that is, by so changing the order of the letters of the alphabet, that not a word could be made out and 1.
The problem is that ChatGPT's result, shown above, differs from the actual, correct decode:
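A Caesar shift is deterministic, so we can check it mechanically. Here is a minimal Python sketch of my own (not ChatGPT's code) that shifts every letter back by 7:

```python
# Decode a Caesar cipher by shifting letters back by a fixed amount.
def caesar_decode(ciphertext: str, shift: int = 7) -> str:
    out = []
    for ch in ciphertext:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            # Shift backward by `shift`, wrapping around the alphabet.
            out.append(chr((ord(ch) - base - shift) % 26 + base))
        else:
            out.append(ch)  # leave punctuation, digits, and spaces alone
    return ''.join(out)

ciphertext = ("Pm ol ohk hufaopun jvumpkluaphs av zhf, ol dyval pa pu "
              "jpwoly, aoha pz, if zv johunpun aol vykly vm aol slaalyz "
              "vm aol hswohila, aoha uva h dvyk jvbsk il thkl vba huk 1.")
print(caesar_decode(ciphertext))
```

Running this prints the actual plaintext: "If he had anything confidential to say, he wrote it in cipher, that is, by so changing the order of the letters of the alphabet, that not a word could be made out and 1." Compare that with ChatGPT's version above: the structure and most of the sentence are right, but several words are simply wrong, and it presented the result with full confidence.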