Natanael

joined 2 months ago
[–] [email protected] 3 points 19 hours ago* (last edited 19 hours ago)

because the right have redefined racism to be "prejudice+ill intent",

They claim to define it that way, and then immediately afterwards accuse everybody else of racism for arguing in good faith when you point out THEIR malice against minorities. "You weren't supposed to notice my victim's ethnicity is different; that means you see race, and thus YOU'RE racist" is the logic you can expect.

[–] [email protected] 12 points 20 hours ago

Appealing bans by suing over the First Amendment right to petition (yes, it covers more than freedom of speech)

[–] [email protected] 1 points 1 day ago

It's not, but they're cowards

[–] [email protected] 2 points 2 days ago

The Green Hornet too. Using speculations from a journalist to make plans.

 

See also discussion here: https://reddit.com/comments/1jv572r

[–] [email protected] 23 points 4 days ago

It wasn't for the victims, it was for the perpetrators. Their soldiers would take too much emotional damage from mass-scale murder and break down, so they created new ways to murder en masse without putting the killers face to face with their victims

[–] [email protected] 1 points 4 days ago

Those grow in gas chambers

[–] [email protected] 2 points 5 days ago* (last edited 5 days ago)

Fediverse servers can quickly get more expensive if you have a few thousand users, or even a few dozen when somebody has a post go viral. That's because every retrieval of a post goes to the original user's server, as does every like, etc., and this generates a flood of events which quickly gets expensive to process.

Just ask the maintainers of the botsin.space Mastodon server, who couldn't afford to keep it running and have now put the server in archival mode, no longer allowing new posts.

A PDS only publishes static data and doesn't have to process incoming events, making it easy to run one very cheaply behind a caching server.

There is another problem: these other relays are all copies of the Bluesky relay, where the official app publishes the messages of its users, so they are not independent from each other; if I publish my posts on a relay other than Bluesky's I will not be able to communicate with them.

Not entirely correct.

Every individual user's account host (PDS) publishes directly and locally; the relay then collects published posts from known PDS servers (including both Bluesky's own and others' self-hosted servers) and displays everything. A PDS server can sync to multiple relays. Relays can even sync to each other, which is practical because PDS servers publish posts in user repositories through content addressing, so it's easy to verify completeness.

So sure, if somebody uses an app connected to a filtering, partial, or out-of-sync relay, they might not see everything. This is not an architectural limit of the protocol, however.
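The content-addressing point can be sketched roughly: each record is identified by a hash of its bytes, so any relay (or client) mirroring another can recheck that what it received matches the ID it was advertised under. A minimal Python illustration, using plain SHA-256 as a stand-in (real atproto repos use CIDs over DAG-CBOR-encoded records, which is more involved):

```python
import hashlib

def content_id(record_bytes: bytes) -> str:
    # Simplified stand-in for a CID: a hash of the record's bytes.
    return hashlib.sha256(record_bytes).hexdigest()

def verify_record(advertised_id: str, record_bytes: bytes) -> bool:
    # Anyone holding the bytes can recompute the ID, so a relay
    # syncing from another relay can verify integrity and
    # completeness without trusting it.
    return content_id(record_bytes) == advertised_id

post = b'{"text": "hello from my PDS"}'
cid = content_id(post)
assert verify_record(cid, post)             # untampered copy checks out
assert not verify_record(cid, post + b"!")  # any modification is detected
```

This is why out-of-sync relays are a data-freshness problem rather than a trust problem: a missing or altered record is always detectable against the original PDS.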

[–] [email protected] 1 points 5 days ago (1 children)

A split would create North Korea 2.0

[–] [email protected] 1 points 6 days ago* (last edited 6 days ago) (2 children)

The appview needs to index the relay contents to build a view of the whole network; the relay is just a type of CDN, and is NOT that expensive. There are multiple individuals maintaining full copies right now, while the network is at double-digit millions of users. The relay is by far the less expensive component.

You're looking at poorly matched cloud options in that article. The individuals doing it manage it easily on a NAS or equivalent. The cost of running public relays will come from traffic, not storage

It's the appview that needs to be made lighter, and that work is progressing, like building variants you can self-host which are selective and only care about content from your network (fetching other content only as needed)

[–] [email protected] 1 points 6 days ago (4 children)

There's work ongoing right now to make it easier to run a small appview, and the relay is cheaper to run than the appview and very manageable even by small companies

[–] [email protected] 4 points 6 days ago

Linux does this better by defaulting to files not being executable, whereas Windows needs the downloading software to apply a specific "downloaded file" flag to trigger a notice about potentially unsafe files.

You could make a lot of the commands available by default much less dangerous. Stuff like requiring protected screens (like UAC and Ctrl+Alt+Del) more often for enabling the risky options.

Also, sandboxing by default would do even more to prevent the worst dangers.
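The executable-bit default can be seen from a script's point of view: a newly created file on Linux carries no execute permission, and making it runnable is a separate, deliberate step. A small sketch (behavior as on typical Linux defaults; `tempfile.mkstemp` creates files with mode 0600):

```python
import os
import stat
import tempfile

# Simulate a freshly "downloaded" file: created without execute bits.
fd, path = tempfile.mkstemp()
os.close(fd)

mode = os.stat(path).st_mode
is_executable = bool(mode & (stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH))
print("executable by default?", is_executable)  # False on Linux defaults

# Making it runnable is an explicit opt-in, equivalent to `chmod +x`:
os.chmod(path, mode | stat.S_IXUSR)
print("after chmod +x:", bool(os.stat(path).st_mode & stat.S_IXUSR))

os.remove(path)
```

On Windows the opposite holds: runnability is determined by the file extension, and the "potentially unsafe" warning depends on the downloader voluntarily tagging the file (the Mark of the Web).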

[–] [email protected] 8 points 6 days ago* (last edited 6 days ago)

The lawyer can make any case the client wishes, but not by knowingly lying to the court (note that not sharing privileged information is a very different thing). In other words, saying things like "my client's position is X" rather than making false statements of fact, and not falsely claiming their position has support in precedent if they know it doesn't, etc.

More practically speaking, to ensure their client actually gets competent legal representation, they would push the client to accept presenting multiple legal arguments instead of exclusively sticking to the narrative. This lets the lawyer focus on the client's legal rights and do what a lawyer should do (basically "the client does not concede on any point, but if the court finds X then we argue A, and if it finds Y we argue B", offering legal arguments for "hypotheticals"), so that no important legal argument from the opposing side is left unanswered.

Tldr: make sure that no matter what the court finds, you're making arguments to protect the client's legal rights and to ensure sentencing is fair.

And when a client is so unreasonable that their position can't be represented accurately in a legal manner without simultaneously contradicting the client, well screw that client 🤷

 

Cryptology ePrint Archive
Paper 2025/585
Adaptively-Secure Big-Key Identity-Based Encryption
Jeffrey Champion, The University of Texas at Austin
Brent Waters, The University of Texas at Austin, NTT Research
David J. Wu, The University of Texas at Austin

Abstract
Key-exfiltration attacks on cryptographic keys are a significant threat to computer security. One proposed defense against such attacks is big-key cryptography which seeks to make cryptographic secrets so large that it is infeasible for an adversary to exfiltrate the key (without being detected). However, this also introduces an inconvenience to the user who must now store the large key on all of their different devices. The work of Döttling, Garg, Sekar and Wang (TCC 2022) introduces an elegant solution to this problem in the form of big-key identity-based encryption (IBE). Here, there is a large master secret key, but very short identity keys. The user can now store the large master secret key as her long-term key, and can provision each of her devices with short ephemeral identity keys (say, corresponding to the current date). In this way, the long-term secret key is protected by conventional big-key cryptography, while the user only needs to distribute short ephemeral keys to their different devices. Döttling et al. introduce and construct big-key IBE from standard pairing-based assumptions. However, their scheme only satisfies selective security where the adversary has to declare its challenge set of identities at the beginning of the security game. The more natural notion of security is adaptive security where the user can adaptively choose which identities it wants to challenge after seeing the public parameters (and part of the master secret key).

In this work, we give the first adaptively-secure construction of big-key IBE from standard cryptographic assumptions. Our first construction relies on indistinguishability obfuscation (and one-way functions), while our second construction relies on witness encryption for NP together with standard pairing-based assumptions (i.e., the SXDH assumption). To prove adaptive security, we show how to implement the classic dual-system methodology with indistinguishability obfuscation as well as witness encryption.

 

Abstract:

In this paper, we present the first practical algorithm to compute an effective group action of the class group of any imaginary quadratic order O on a set of supersingular elliptic curves primitively oriented by O. Effective means that we can act with any element of the class group directly, and are not restricted to acting by products of ideals of small norm, as for instance in CSIDH. Such restricted effective group actions often hamper cryptographic constructions, e.g. in signature or MPC protocols.

Our algorithm is a refinement of the Clapoti approach by Page and Robert, and uses 4-dimensional isogenies. As such, it runs in polynomial time, does not require the computation of the structure of the class group or expensive lattice reductions, and our refinements allow it to be instantiated with the orientation given by the Frobenius endomorphism. This makes the algorithm practical even at security levels as high as CSIDH-4096. Our implementation in SageMath takes 1.5s to compute a group action at the CSIDH-512 security level, 21s at the CSIDH-2048 level and around 2 minutes at the CSIDH-4096 level. This marks the first instantiation of an effective cryptographic group action at such high security levels. For comparison, the recent KLaPoTi approach requires around 200s at the CSIDH-512 level in SageMath and 2.5s in Rust.

See also: https://bsky.app/profile/andreavbasso.bsky.social/post/3ljkh4wmnqk2c

0
🕵️‍♂️ (infosec.pub)
submitted 1 week ago* (last edited 1 week ago) by [email protected] to c/[email protected]
 
 

Via: https://bsky.app/profile/nicksullivan.org/post/3ll7galasrc2z

CFRG process documentation has been updated.

2
How to Hold KEMs (durumcrustulum.com)
 

From: https://mastodon.social/@fj/114171907451597856

Interesting paper co-authored by Airbus cryptographer Erik-Oliver Blass on using zero-knowledge proofs in flight control systems.

Sensors would authenticate their measurements; the control unit provides, in each iteration, control outputs together with a proof of output correctness (reducing the need for redundant computations in some cases); and actuators verify that the outputs have been correctly computed
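The sensor → controller → actuator pipeline described above can be sketched as a toy in Python. Everything here is a hypothetical stand-in: sensor authentication is reduced to an HMAC tag, and the zero-knowledge proof is replaced by simply shipping the input so the actuator can recompute the control law; a real ZK proof would let the actuator check correctness without redoing (or even seeing) the computation:

```python
import hashlib
import hmac

SENSOR_KEY = b"shared-sensor-key"  # hypothetical credential for the sketch

def sensor_read(value: float) -> tuple[float, bytes]:
    # Sensor authenticates its measurement (here with an HMAC tag).
    tag = hmac.new(SENSOR_KEY, repr(value).encode(), hashlib.sha256).digest()
    return value, tag

def control_law(measurement: float) -> float:
    # Toy proportional controller standing in for the real control law.
    return -0.5 * measurement

def controller(measurement: float, tag: bytes) -> tuple[float, float]:
    expected = hmac.new(SENSOR_KEY, repr(measurement).encode(),
                        hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("unauthenticated measurement")
    output = control_law(measurement)
    # Stand-in for the ZK proof of correct computation: the "proof" is
    # just the input, and the actuator recomputes the control law.
    proof = measurement
    return output, proof

def actuator(output: float, proof: float) -> bool:
    # Actuator verifies the output was correctly computed before acting.
    return control_law(proof) == output

m, tag = sensor_read(4.0)
out, proof = controller(m, tag)
assert actuator(out, proof)
```

The structural point from the paper survives even in this toy: each stage checks the one before it, so a corrupted measurement or a miscomputed output is rejected before it reaches the control surface.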
