A Journalist Asked Why Israel Isn’t Paying to Rebuild Gaza. It Cost Him His Job: Italy’s Nova news agency confirmed it let reporter Gabriele Nunziati go for asking a European official about Israel at a press conference. (The Intercept, 2025-11-04)

https://theintercept.com/2025/11/04/journalist-israel-gaza-nova-gabriele-nunziati/
———

>> … he asked Paula Pinho, the European Commission’s chief spokesperson, about Gaza’s reconstruction on October 13.

>> “You’ve been repeating several times that Russia should pay for the reconstruction of Ukraine,” … “Do you believe that Israel should pay for the reconstruction of Gaza since they have destroyed almost all its civilian infrastructure?”

>> Pinho replied that it was “definitely an interesting question, on which I would not have any comment.”

>> A clip of the exchange went viral …

>> … a spokesperson for Nova, confirmed that the news agency had ended its relationship with Nunziati over his #Gaza question… a question that was “technically incorrect” …

#censorship #palestine @palestine@fedibird.com

 

Live: Israeli air attacks, shelling, demolition campaign hit southern Gaza | Gaza News | Al Jazeera
https://www.aljazeera.com/news/liveblog/2025/11/5/live-israeli-air-attacks-shelling-demolition-campaign-hit-southern-gaza

- Israeli forces raid West Bank towns, confrontations reported
- UN data shows continued demolition of Palestinian homes across West Bank
- Palestinians in Gaza fear fighting will resume
- Israeli reaction to arrest of ex-army lawyer centres on leak, not abuse of Palestinian detainees

#Palestine #Gaza #Israel

 

Torture in Israeli prisons rose sharply during war, says freed Palestinian author | Palestine | The Guardian
https://www.theguardian.com/world/2025/nov/04/freed-palestinian-author-nasser-abu-srour-israel-prisons-gaza-war

#Palestine #Israel @palestine@fedibird.com

 

Hey, it's me, the guy who posted here a couple weeks ago asking for the bare-minimum concepts new Linux users should understand. I really appreciate the responses I got last time, and now I'm back with my first draft! It's not 100% complete, but I'd love some feedback from the Linux community. Let me know if there's anything I missed or anything you think should be covered that I haven't touched on yet.

This will eventually be published as a permanent article on the site where it currently lives, as well as a video essay in the style of my other videos. I want it to be a resource people can share with others making the switch, and I'd like it to stay relatively future-proof for a good while at least. Please let me know if there's anything I should tweak, cover in more depth, add, or remove. I'd love the input!

Author @bpt11@reddthat.com

 

“The question is, why should the opinions of the largely impartial UN and human rights scholars be weighed equally to the obviously partisan opinions of commentators and governments? You are allowed to disagree with the consensus of the Wikipedia community, but it is patronising to scorn the community as being ‘wrong’ for following the opinions of the UN, genocide scholars and major human rights organisations,” the editor wrote.

...

“Wikipedia has never, ever treated all voices as equal, nor does policy demand we do. If we did, the Earth article would state that Earth’s shape is under debate. But we don’t do that because scholarly consensus is that Earth is roughly spherical. Instead, flat eartherism is presented as what it is: a fringe movement without scientific backing,” the editor wrote.

 

This paper comes up with a really clever architectural solution to LLM hallucinations, especially for complex, technical topics. The core idea is that all our knowledge, from textbooks to wikis, is "radically compressed": it gives you the conclusions but hides all the step-by-step reasoning that justifies them. They call this vast, unrecorded network of derivations the "intellectual dark matter" of knowledge. LLMs being trained on this compressed, conclusion-oriented data is one reason they fail so often. When you ask them to explain something deeply, they just confidently hallucinate plausible-sounding "dark matter".

The solution the paper demonstrates is to use a massive pipeline to "decompress" all of the steps and make the answer verifiable. It starts with a "Socrates agent" that uses a curriculum of about 200 university courses to automatically generate around 3 million first-principles questions. Then comes the clever part, which is basically a CI/CD pipeline for knowledge. To stop hallucinations, they run every single question through multiple different LLMs. If these models don't independently arrive at the exact same verifiable endpoint, like a final number or formula, the entire question-and-answer pair is thrown in the trash. This rigorous cross-model consensus filters out the junk and leaves them with a clean and verified dataset of Long Chains-of-Thought (LCoTs).
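The cross-model consensus gate can be sketched roughly like this. Everything here (function names, the agreement threshold, the idea of normalizing answers to a comparable string) is my own assumption about how such a filter might look, not the paper's actual implementation:

```python
from collections import Counter

def consensus_filter(question, solvers, min_agreement=3):
    """Keep a question/answer pair only if enough independent models
    converge on the same verifiable endpoint (a final number or formula).

    `solvers` is a list of callables, each wrapping a different LLM and
    returning a normalized final answer string, or None on failure.
    All names and the threshold are hypothetical.
    """
    answers = [solver(question) for solver in solvers]
    counts = Counter(a for a in answers if a is not None)
    if not counts:
        return None  # no model produced an answer at all
    answer, votes = counts.most_common(1)[0]
    # Discard the pair unless the most common endpoint has enough votes.
    return (question, answer) if votes >= min_agreement else None
```

The point of the design is that agreement is checked on the *endpoint*, not the reasoning text, so models with different chains of thought can still vote for the same verified answer.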

The first benefit of having such a clean knowledge base is a "Brainstorm Search Engine" that performs "inverse knowledge search". Instead of just searching for a definition, you input a concept and the engine retrieves all the diverse, verified derivational chains that lead to that concept. This allows you to explore a concept's origins and see all the non-trivial, cross-disciplinary connections that are normally hidden. The second and biggest benefit is the "Plato" synthesizer, which is how they solve hallucinations. Instead of just generating an article from scratch, it first queries the Brainstorm engine to retrieve all the relevant, pre-verified LCoT "reasoning scaffolds". Its only job is then to narrate and synthesize those verified chains into a coherent article.
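As I read it, "inverse knowledge search" is just a lookup keyed by the *endpoint* of each verified chain rather than by its starting point, and the "Plato" step narrates only what that lookup returns. A minimal sketch under that assumption (the data model and names are mine, not the paper's):

```python
def inverse_knowledge_search(index, concept):
    """Return every verified derivation chain that ends at `concept`.

    `index` maps a concept to the list of chains (each a list of steps)
    whose final step derives it -- the inverse of a normal lookup, which
    would start from the concept and go forward. Hypothetical data model.
    """
    return index.get(concept, [])

def synthesize_article(index, concept):
    """'Plato'-style synthesis: narrate pre-verified chains instead of
    generating claims from scratch, so every line traces to a chain."""
    chains = inverse_knowledge_search(index, concept)
    sections = [" -> ".join(chain) for chain in chains]
    return f"{concept}:\n" + "\n".join(f"- {s}" for s in sections)
```

In the real system the narration step would be an LLM constrained to the retrieved chains; the sketch just shows why the synthesizer cannot introduce unverified claims if its only input is the verified index.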

The results are pretty impressive. The articles generated this way have significantly higher knowledge-point density and, most importantly, substantially lower factual error rates, reducing hallucinations by about 50% compared to a baseline LLM. They used this framework to automatically generate "SciencePedia," an encyclopedia with an initial 200,000 entries, solving the "cold start" problem that plagues human-curated wikis. The whole "verify-then-synthesize" architecture feels like it could pave the way for AI systems that produce verifiable, and therefore trustworthy, results.

 

The tech giant deleted the accounts of three prominent Palestinian human rights groups — a capitulation to Trump sanctions.

Archived version: https://archive.is/20251104233119/https://theintercept.com/2025/11/04/youtube-google-israel-palestine-human-rights-censorship/
