Archived versions of Reddit pages haven't worked for a while now. You can find snapshots listed, but when you try to load them, the comment section is empty.
RoadTrain
Written with a lot of "help" from Claude Sonnet 4 LLM
Thanks, but no thanks.
I'm all for showing and discussing personal projects, but I don't see what meaningful discussion we could have over something taken out of a black box.
To clarify, this would only have been triggered if you asked Gemini to parse your calendar events:
Once the victim interacts with Gemini, like asking "What are my calendar events today," Gemini pulls the list of events from Calendar, including the malicious event title the attacker embedded.
Is asking the bot to read your calendar events and “summarize” them really an improvement over just looking at the calendar yourself?
An article brought to you by the leading authority on cutting-edge computer science research: BBC.
What if AI didn't just provide sources as afterthoughts, but made them central to every response, both what they say and how they differ: "A 2024 MIT study funded by the National Science Foundation..." or "How a Wall Street economist, a labor union researcher, and a Fed official each interpret the numbers...". Even this basic sourcing adds essential context.
Yes, this would be an improvement. Gemini Pro does this in Deep Research reports, and I appreciate it. But since you can’t be certain that what follows are actual findings of the study or source referenced, the value of the citation is still relatively low. You would still manually have to look up the sources to confirm the information. And this paragraph a bit further up shows why that is a problem:
But for me, the real concern isn't whether AI skews left or right, it’s seeing my teenagers use AI for everything from homework to news without ever questioning where the information comes from.
This is also the biggest concern for me, if not only centred on teenagers. Yes, showing sources is good. But if people rarely check them, this alone isn’t enough to improve the quality of the information people obtain and retain from LLMs.
I realised that about a minute after I posted, so I deleted my commented right away :)
Thank you!
Thanks for looking into it!
If you're doing upgrades this weekend, do you think you could do Alexandrite as well? I've noticed that that's also a few releases behind.
Could you give some examples of things that worked for you on Windows but couldn't port over to Linux? I'm interested if they're related more to games or just using Linux in general.
You should be happy. Mine does a different thing every time, no matter the setting...
Not necessarily. Yes, a chain of thought can be provided externally, for example through user prompting or another source, which can even be another LLM. One of the key observations behind these models commonly referred to as reasoning is that since an external LLM can be used to provide "thoughts", could an LLM provide those steps itself, without depending on external sources?
To do this, it generates "thoughts" around the user's prompt, essentially exploring the space around it and trying different options. These generated steps are added to the context window and are usually much larger that the prompt itself, which is why these models are sometimes referred to as long chain-of-thought models. Some frontends will show a summary of the long CoT, although this is normally not the raw context itself, but rather a version that is summarised and re-formatted.