cypherpunks

[–] [email protected] 2 points 6 days ago* (last edited 5 days ago)

Nope.

Nope, it is.

It allows someone to use code without sharing the changes of that code. It enables non-free software creators like Microsoft to take the code, use it however they like, and not have to share back.

This is correct; it is a permissive license.

This is what Free Software prevents.

No, that is what copyleft (aims to) prevent.

Tired of people calling things like MIT and *BSD true libre/Free Software.

The No True Scotsman fallacy requires a lack of authority about what constitutes "true" - but in the case of Free/Libre software, we have one: https://en.wikipedia.org/wiki/The_Free_Software_Definition

If you look at this license list (maintained by the Free Software Foundation's Licensing and Compliance Lab) you'll see that they classify many non-copyleft licenses as "permissive free software licenses".

They’re basically one step away from no license at all.

Under the Berne Convention of 1886, everything is copyrighted by default, so "no license at all" means that nobody has permission to redistribute it :)

The differences between permissive free software licenses and CC0 or a simple declaration that something is "dedicated to the public domain" are subtle and it's easy to see them as irrelevant, but the choice of license does have consequences.

The FSF recommends that people who want to use a permissive license choose Apache 2.0 "for substantial programs" because of its clause which "prevents patent treachery", while noting that that clause makes it incompatible with GPLv2. For "simple programs" when the author wants a permissive license, FSF recommends the Expat license (aka the MIT license).

It is noteworthy that the latter is compatible with GPLv2; MIT-licensed programs can be included in a GPLv2-only work (like the Linux kernel) while Apache 2.0-licensed programs cannot. (GPLv3 is more accommodating and allows patent-related additional restrictions to be applied, so it is compatible with Apache 2.0.)

[–] [email protected] 2 points 6 days ago (2 children)

I’m pretty sure you’re replying to a joke.

I assumed it was a joke, but (correct me if I've misunderstood) I understood it as a joke rooted in the misconception that the US bombing of Yemen was a thing that happened 12 days ago, rather than something that has continued every day since then.

If there is some way that this joke works in light of the fact that this article is from yesterday, I failed to grasp it.

[–] [email protected] 4 points 6 days ago (4 children)

We know. There was a group chat.

Did you? This story is about the 17 air strikes that happened yesterday. The attacks described in the group chat were the ones that happened 12 days ago.

The US has continued to bomb Yemen every day since then: https://en.wikipedia.org/wiki/March_2025_United_States_attacks_in_Yemen

[–] [email protected] 14 points 6 days ago

What is a U.S.-sanctioned place? Why does the U.S. government think this is a bad thing?

https://en.wikipedia.org/wiki/United_States_government_sanctions

[–] [email protected] 61 points 6 days ago (2 children)

🎉 sometimes US sanctions actually do lead to positive outcomes :)

 

GitHub has gone - long live Forgejo (@forgejo).

Fully migrated out of Microsoft’s walled garden after they blocked us:

  • 54k commits
  • 9.5k issues
  • 4.3k pull requests
  • 100k comments

Everything moved. Nothing left behind.

🥂 to the United States' sanctions regime for helping get people to migrate off of GitHub!

[–] [email protected] 15 points 1 week ago* (last edited 1 week ago) (3 children)

Zegler's posts on Trump added another audience demographic, Trump supporters, who decided to boycott Snow White, in addition to pro-Palestine audience members who were boycotting the film for its inclusion of Gadot, and pro-Israel audience members who were boycotting over Zegler's firm stance on Palestine.

Poor Disney! The reviews aren't helping either.

It's #20 on https://en.wikipedia.org/wiki/List_of_most_expensive_films btw 😂

[–] [email protected] 15 points 1 week ago (4 children)

I often see Rust mentioned at the same time as MIT-type licenses. Is it just a cultural thing that people who write Rust dislike Libre licenses?

The word "libre" in the context of licensing exists to clarify the ambiguity of the word "free", to emphasize that it means "free as in freedom" rather than "free as in beer" (aka no cost, or gratis) as the FSF explains here.

The MIT license is a "libre" license, because it does meet the Free Software Definition.

I think the word you are looking for here is copyleft: the MIT license is a permissive license, meaning it is not a copyleft license.

I don't know enough about the Rust community to say why, but from a distance my impression is that yes they do appear to have a cultural preference for permissive licenses.

[–] [email protected] 12 points 1 week ago

fyi: GNU coreutils are licensed GPL, not AGPL.

there is so much other confusion in this thread, i can't even 🤦

[–] [email protected] 9 points 1 week ago* (last edited 1 week ago)

imo the pejorative connotation of that word, and homophobia generally, is ultimately rooted in misogyny

"always has been" meme with "wait, it's all about maintaining the patriarchy?"

 
15
Cow tools (en.wikipedia.org)
 

Today, we’re excited to announce AI Labyrinth, a new mitigation approach that uses AI-generated content to slow down, confuse, and waste the resources of AI Crawlers and other bots that don’t respect “no crawl” directives. When you opt in, Cloudflare will automatically deploy an AI-generated set of linked pages when we detect inappropriate bot activity, without the need for customers to create any custom rules.

And it's "free"! (visibility into all of that traffic is more than sufficient payment for them 🤑)

Here are some perhaps-contradictory highlights from their blog post (emphasis mine), which I'm pretty sure was itself written with LLM assistance:

No real human would go four links deep into a maze of AI-generated nonsense.

When these links are followed, we know with high confidence that it's automated crawler activity, as human visitors and legitimate browsers would never see or click them. This provides us with a powerful identification mechanism, generating valuable data that feeds into our machine learning models. By analyzing which crawlers are following these hidden pathways, we can identify new bot patterns and signatures that might otherwise go undetected.

But as bots have evolved, they now proactively look for honeypot techniques like hidden links, making this approach less effective.

AI Labyrinth won’t simply add invisible links, but will eventually create whole networks of linked URLs that are much more realistic, and not trivial for automated programs to spot. The content on the pages is obviously content no human would spend time consuming, but AI bots are programmed to crawl rather deeply to harvest as much data as possible. When bots hit these URLs, we can be confident they aren’t actual humans, and this information is recorded and automatically fed to our machine learning models to help improve our bot identification. This creates a beneficial feedback loop where each scraping attempt helps protect all Cloudflare customers.

This is only the first iteration of using generative AI to thwart bots for us. Currently, while the content we generate is convincingly human, it won’t conform to the existing structure of every website. In the future, we’ll continue to work to make these links harder to spot and make them fit seamlessly into the existing structure of the website they’re embedded in. You can help us by opting in now.
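The hidden-link honeypot mechanism the quoted post describes can be sketched in a few lines: embed links that no legitimate browser renders or follows, then treat any request for those paths as a high-confidence bot signal. Everything below (the paths, the flagging set, the decoy strings) is hypothetical illustration of the general technique, not Cloudflare's actual implementation.

```python
# Minimal sketch of a hidden-link honeypot for crawler detection.
# All names and paths here are hypothetical.

HONEYPOT_PATHS = {"/trap/a1", "/trap/b2", "/trap/c3"}  # nothing visible links here

flagged_clients: set[str] = set()  # clients that followed a honeypot link

def render_page(body_html: str) -> str:
    """Append honeypot links that human visitors never see or click."""
    hidden = "".join(
        f'<a href="{p}" style="display:none" rel="nofollow">.</a>'
        for p in sorted(HONEYPOT_PATHS)
    )
    return body_html + hidden

def handle_request(client_ip: str, path: str) -> str:
    """A request for a honeypot path identifies an automated crawler."""
    if path in HONEYPOT_PATHS:
        flagged_clients.add(client_ip)  # record the signature for later analysis
        # serve generated decoy pages (themselves full of honeypot links)
        # to keep the crawler busy wasting its own resources
        return render_page("<p>generated decoy content</p>")
    return render_page("<p>real content</p>")
```

The "labyrinth" part of the idea is simply that the decoy pages returned to flagged clients contain further honeypot links, so a crawler that ignores "no crawl" directives keeps descending into generated content while every hop confirms the classification.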

 


view more: ‹ prev next ›