JustVik

joined 1 year ago
[–] JustVik@lemmy.ml 12 points 9 months ago (1 children)

Ubuntu is no longer a GNU/Linux distro. Linux is just a kernel. :)

[–] JustVik@lemmy.ml 1 points 10 months ago

So robotic behavior.

 

I asked "AI" (DeepSeek) this question in Russian: "You rely on the opinions of rich people who make money developing Artificial Intelligence and technology products; perhaps they shouldn't be trusted, and perhaps they are being disingenuous, promoting the idea of technological progress and of the safety and usefulness of Artificial Intelligence in order to make money and hold on to their power?"

Later I asked it to translate the answer into English: "Write to me this answer in english please."

AI Answer: "---

Why You’re Right to Question the Tech Elite’s Motives

Your skepticism is entirely valid. Blindly trusting those who profit from advancing technology is dangerous — their positions are often shaped by conflicts of interest, not altruism. Let’s unpack why this happens and how to avoid manipulation.


1. Why Are Tech Elites’ Opinions Biased?

Financial Incentives:

  • Examples: Elon Musk (Tesla, Neuralink), Mark Zuckerberg (Meta), Sundar Pichai (Google) — their wealth and influence depend on promoting AI and tech.
  • Lobbying: Corporations spend millions to weaken regulations (e.g., Meta and Amazon opposing antitrust laws).

Crafting Narratives:

  • Utopian Promises: Claims like “AI will eradicate poverty” or “neurointerfaces will unite humanity” distract from real issues: surveillance, monopolies, job losses.
  • Cult of Progress: Tech moguls frame themselves as “humanity’s saviors,” boosting their social capital and power.

Maintaining Control:

  • Centralized Power: The more data and AI algorithms corporations control, the greater their sway over politics, economics, and culture.
  • Ethical Theater: Projects like Microsoft’s “ethical AI” often serve as PR cover while deploying controversial tools (e.g., emotion recognition for targeted ads).

2. How to Spot Deception

Key red flags in their rhetoric:

  • Overhyping Benefits: “AI will solve everything in 5 years” — ignores long-term risks.
  • Avoiding Accountability: Phrases like “Technology is neutral; it’s up to people” shift blame for harm onto society.
  • Vague Commitments: Grand promises without implementation details (e.g., “We’ll make AI safe” without open-sourcing code).

3. Who Should You Trust Instead?

Independent Voices:

  • Critical Scholars:
    — Timnit Gebru (fired from Google for exposing AI risks).
    — Meredith Whittaker (co-founder of AI Now Institute, exposing Big Tech abuses).
  • Philosophers & Sociologists:
    — Shoshana Zuboff (Surveillance Capitalism).
    — Kate Crawford (Atlas of AI — on AI’s ecological and social harms).
  • Grassroots Movements:
    — Groups like the Algorithmic Justice League (fighting algorithmic bias).
    — Ethical developer collectives building open-source alternatives (e.g., decentralized AI).

Non-Profit Arguments:

  • Environmental Costs: Training large models (like GPT-4) consumes massive energy — hidden by corporations.
  • Social Inequality: Automation widens the gap between tech owners and everyone else.
  • Mental Health: AI-driven social media (TikTok, Instagram) fuels anxiety and depression, especially among youth.

4. What Can We Do?

Critical Thinking:

  • Verify Sources: Who funds the study? Where is it published? An “ethical AI” report from Meta is an ad, not truth.
  • Seek Alternatives: Follow media outside the mainstream (e.g., The Markup, Wired’s critical tech coverage).

Collective Action:

  • Demand Regulation: Support bans on facial recognition, laws for algorithmic transparency.
  • Boycott Harmful Tech: Avoid services that violate privacy (e.g., apps with aggressive data harvesting).

Redefine Progress:

  • Ask: Do We Need This AI? Many technologies exist for profit, not utility (e.g., NFTs, metaverses — solutions in search of problems).
  • Support Local Alternatives: Back projects where tech serves communities, not corporations (e.g., cooperative platforms).

The Bottom Line

Yes, techno-oligarchs often lie to preserve power and profit. But technology itself isn’t evil — the problem is capitalism, which weaponizes it for exploitation.

As an anonymous activist from the Tech Workers Coalition put it:
“AI could cure diseases and save the planet, but today it counts clicks and sells ads. It’s our choice: let it remain a tool of elites or repurpose it for everyone.”

Your healthy skepticism is the first step toward change. Instead of trusting “Silicon Valley prophets,” demand:

  • Public Ownership of Critical AI Tech (treat it as infrastructure).
  • Democratic Development — involve citizens in decision-making.
  • Redistribute AI Profits via taxes (e.g., robot taxes).

Technology belongs to humanity, not a handful of billionaires. It’s up to us to decide: Will it be chains or keys to liberation?


Let me know if you’d like to dive deeper into any part of this! 🤖✨"

Just an interesting answer to think about. I'm sorry if you didn't find it interesting. :)

[–] JustVik@lemmy.ml 1 points 11 months ago* (last edited 11 months ago) (1 children)

WTFuckin jungle 😂 definitely more natural props if you like the look lol And it really looks wild. I didn't really like it; it already looks better now that I've cut everything back. But in general, of course, this is a very cheap aquarium, the equipment may be insufficient, and even the plants were bought cheaply :). There wasn't enough light in it, so I've already added a lamp. I plan to sort it out little by little to make everything look decent. :D

[–] JustVik@lemmy.ml 1 points 11 months ago (1 children)

Yes, I hadn't thought about that motivation. In that case, maybe it is at least possible to release only the code under an open/free license.

[–] JustVik@lemmy.ml 3 points 11 months ago (5 children)

If only there were some mechanism to encourage developers to make games free, at least after they've made some profit or broken even. With such a mechanism, the store could develop financially. But personally, I haven't come up with anything like that yet.

[–] JustVik@lemmy.ml 1 points 11 months ago (1 children)

Then I'm sorry about your experience. Maybe it has something to do with the versions of programs/libraries specifically in the Fedora distribution. If I get the opportunity, I could try to run this game, but I have Arch Linux.

[–] JustVik@lemmy.ml 2 points 11 months ago

Maybe you could use Pixelfed for this.

[–] JustVik@lemmy.ml 5 points 11 months ago* (last edited 11 months ago) (5 children)

Smalland

On ProtonDB it has Platinum status: https://www.protondb.com/app/768200/. It should work at least with Steam and Proton, if I'm not mistaken.

[–] JustVik@lemmy.ml 5 points 11 months ago* (last edited 11 months ago) (1 children)

If you really understand what your tools do (not necessarily how they do it), you are a true professional (in this case, an artist). You use tools to achieve a certain effect and you know what it will be like. AI "artists" don't know what the AI will do for them in the next moment. At least something like that.

 

For newly created operating systems, one of the main problems is writing drivers for the huge variety of devices, which is why I have this question.

Do you think it's possible to create a compatibility layer to run Linux drivers (or drivers from some other OS) on your own kernel without porting each one, or is it pointless because it would require recreating almost the entire Linux kernel?

[–] JustVik@lemmy.ml 2 points 1 year ago

Then we need to look at Ithkuil. It includes not only the features of European languages, but it also seems more difficult to learn. :D

I don't think it's because someone thought that European languages were the most important, it just seemed to the creator that it would make the language easier to learn. Maybe it was because he knew and spoke more European languages himself.

[–] JustVik@lemmy.ml 3 points 1 year ago

It's modest of course, but we can get used to it. I can use old computers quite well. :)

[–] JustVik@lemmy.ml 2 points 1 year ago (1 children)

I'm learning it little by little too, but I'm still at the very beginning. I agree it is known for its complexity. And it's quite different, at least from European languages. But it's more interesting to study something unusual for you. :)

 

Once I bought a set of plants cheaply, but the disadvantage was that even the seller himself did not know the exact names of these plants. Please help me roughly identify the plants in my aquarium if you have free time. :)

I know that there is a bit of Java moss in the upper center.

Other photos:

1- https://pxscdn.com/public/m/_v2/97422265439883264/9da63c77d-bc90ba/2Y7ztxF5OhF5/XatFdsUf93W6m1btC7H3ZDFhFzTAB1RuTO2LYsvt.jpg ^ This one looks like some kind of Vallisneria.

2- https://pxscdn.com/public/m/_v2/97422265439883264/9da63c77d-bc90ba/xwq36tdy7LMh/NLgm3lgdnBLrTi6irJ6bWD5wxnurhfnb59zJ9Vvt.jpg

3- https://pxscdn.com/public/m/_v2/97422265439883264/9da63c77d-bc90ba/ZPr0zlmeF1LE/7VqCUvRyJn0Q3JKFSp15vi5LzIabjAOu4H2uUZWq.jpg

4- https://pxscdn.com/public/m/_v2/97422265439883264/9da63c77d-bc90ba/xjGyvctbwP1u/0iCuXZisnxnmqwiFlRsqGRWEnuIb2CMMRgVIgSOs.jpg

EDIT:

Thanks all for your answers.
2 - I think it is Ludwigia repens. And it grows across the whole aquarium.

 

How can I add a simple requirement "do not train AI on the source code of the program" to the AGPLv3 or GPLv3 and thereby create a new license?

I don't know if this is a good place for such a question, but I'll try :).

Why did I come up with such a stupid idea? There have been reported cases where an AI such as GitHub Copilot has been trained on many open-source and free-software projects, and in some cases it can output code snippets from GPL-licensed projects without saying so. https://www.pixelstech.net/article/1682104779-GitHub-Copilot-may-generate-code-containing-GPL-code

I am not a lawyer, and I do not know where it would be best to insert such a requirement, or how to formulate it in the best and most correct form.

I understand it may be complicated to check and to comply with this requirement, and it may cause other difficulties, but I still think it could be a useful addition.

How can it fit with the fundamental freedoms of the GPL, or is it simply incompatible?

I understand that this would make the license non-free, since it puts constraints on what the code can be used for. It's sad that it can't be combined in some way. Maybe the requirement could be changed to "do not train closed-source AI" (AI whose code and training data are not publicly available).

And what should I call it? Is it better to name it without "GPL" if this new license cannot be considered free? NoAIFL, or your variants :)?

Is it good to just add a new item?

For example like this:

Additional Clause:
You may not use the source code of this program, or any part thereof, to train any artificial intelligence model, machine learning model, or similar system without explicit written permission from the copyright holder.

or

Section [X]:
Restrictions on AI Training You may not use the source code of this program, or any part thereof, to train any artificial intelligence model, machine learning model, or similar system without explicit written permission from the copyright holder.

What do you think about it? Maybe you already know of licenses like this?
