this post was submitted on 07 May 2026
498 points (99.6% liked)

Technology

you are viewing a single comment's thread
[–] eestileib@lemmy.blahaj.zone 297 points 22 hours ago (6 children)

AI is an amazing tool for fascists.

Annihilate private access to computing, censor and rewrite all comms, destroy free software and the last remnants of education...

Every single decision made for evil.

And all these vendors who are locking themselves into one customer are about to learn why that's a bad idea.

[–] Tollana1234567@lemmy.today 2 points 7 hours ago* (last edited 7 hours ago)

AI is used as a propaganda tool to spread it, and you can see that on YouTube and social media quite readily. Plus it sexualizes victims, up to and including CSAM, and fetishizes unattainable "women" for conservatives.

Conservatives pretty much buy into/believe anything that is scammy.

[–] snooggums@piefed.world 68 points 20 hours ago

The worst thing is that, when used for good, AI is fantastic! Purpose-built AI for scientific progress, finding planets, predicting the weather, and tons of other pattern matching has been in use for decades with positive benefits!

Even LLMs can be a useful tool in the right situations, where output that looks like words people would say is enough and accuracy is NOT important.

The problem is trying to use LLMs to do everything and failing, while running the tech industry, the environment, and soon the economy into the ground. They took something positive, ruined it, and co-opted the terminology while shoving it down everyone's throats.

[–] MrKoyun@lemmy.world 7 points 15 hours ago (1 children)

How are they about to learn why that's a bad idea? Like, when the bubble pops?

[–] artyom@piefed.social 7 points 19 hours ago

I don't know if spending unfathomable amounts of money buying up the entire global supply of computing products can be considered a "tool".

[–] cadekat@pawb.social -3 points 22 hours ago (6 children)

I'm not here to argue for or against LLMs in general, but self-hostable AI is a thing. Actually open AI is a thing.

A blanket statement about AI as a whole technology being good for fascism is about as useful as saying "roads are good for fascism" (they're great for troop movement, after all).

[–] roofuskit@lemmy.world 100 points 21 hours ago (3 children)

You can't self host anything when the hardware is no longer affordable.

[–] ugjka@lemmy.ugjka.net 10 points 20 hours ago (1 children)

Self-hosting an LLM ain't the same thing as self-hosting Nextcloud for your docs and calendar. Yes, there are small models, but their output is laughable.

[–] MagicShel@lemmy.zip 21 points 19 hours ago* (last edited 19 hours ago) (4 children)

Small models are improving and becoming more capable. The quality of local LLMs is basically unbounded. The context size of local LLMs is bounded by hardware. So local LLMs can be very capable for small, self-contained tasks.

qwen 3.6 35b running locally:

Write a Python script that can pull weather data from public sources and provide the high and low temperature for the current day in Miami, FL.

Single shot. No tool/internet use, so it didn't pull this script from elsewhere.

import requests

def get_miami_weather():
    # Miami, FL coordinates
    LATITUDE = 25.7617
    LONGITUDE = -80.1918

    # Open-Meteo API URL (free, no API key required)
    url = (
        f"https://api.open-meteo.com/v1/forecast?"
        f"latitude={LATITUDE}&longitude={LONGITUDE}"
        f"&daily=temperature_2m_max,temperature_2m_min"
        f"&timezone=auto"
    )

    try:
        response = requests.get(url, timeout=10)
        response.raise_for_status()  # Raises error for 4xx/5xx HTTP status codes
        data = response.json()

        # Index 0 corresponds to the current day
        high_c = data["daily"]["temperature_2m_max"][0]
        low_c = data["daily"]["temperature_2m_min"][0]

        # Convert to Fahrenheit (commonly used in the US)
        high_f = (high_c * 9/5) + 32
        low_f = (low_c * 9/5) + 32

        print("🌤️  Miami, FL Weather for Today:")
        print(f"High: {high_f:.1f}°F ({high_c:.1f}°C)")
        print(f"Low:  {low_f:.1f}°F ({low_c:.1f}°C)")

    except requests.exceptions.HTTPError as http_err:
        print(f"❌ HTTP error occurred: {http_err}")
    except requests.exceptions.ConnectionError:
        print("❌ Error: Could not connect to the weather API.")
    except requests.exceptions.Timeout:
        print("❌ Error: Request timed out.")
    except requests.exceptions.RequestException as err:
        print(f"❌ An error occurred: {err}")
    except KeyError as key_err:
        print(f"❌ Error parsing data: Missing expected key {key_err}")
    except Exception as err:
        print(f"❌ Unexpected error: {err}")

if __name__ == "__main__":
    get_miami_weather()

Output:

% python3 ./m_weather.py
🌤️  Miami, FL Weather for Today:
High: 88.0°F (31.1°C)
Low:  73.2°F (22.9°C)

I tried to keep the size and scope within something that would reasonably fit in a comment. Looks pretty decent to me, but I can't write Python myself. Never learned. I double-checked the LAT & LON of Miami, and it's spot on.

It did take 47 seconds, while a cloud LLM would probably take 5 or less.

All I'm saying is local LLM isn't garbage and it is getting better all the time.
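For anyone curious what a single-shot run like this looks like from code, here's a minimal sketch of prompting a locally hosted model through Ollama's HTTP API (the model name is a placeholder for whatever you have pulled locally; assumes an Ollama server running on its default port):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint

def build_generate_request(model: str, prompt: str) -> dict:
    """Build the JSON payload for a single, non-streaming completion."""
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model: str, prompt: str) -> str:
    """POST the prompt to the local server and return the completion text."""
    payload = json.dumps(build_generate_request(model, prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req, timeout=300) as resp:
        return json.loads(resp.read())["response"]

# Usage (needs a running Ollama server and a pulled model):
# print(generate("some-local-model",
#                "Write a Python script that prints today's high and low for Miami, FL."))
```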

[–] chunes@lemmy.world 4 points 12 hours ago (1 children)

Now show the output for an 8b model, the only one I'm capable of running.

[–] chilicheeselies@lemmy.world 1 points 2 hours ago

Gemma 4 e2b is pretty impressive for its size.

This area of computing is improving very fast. I truly believe the future of this is locally installed open models

[–] humanspiral@lemmy.ca 1 points 12 hours ago* (last edited 12 hours ago) (1 children)

qwen 3.6 is awesome, but 48-64GB is still real money these days (and 32GB on a dedicated separate machine is also more money). It benchmarks at Sonnet 3.5 to Opus 4.5 level, and the online cost metrics for the 27b and 35b are way off considering the overall usefulness of a 48-64GB machine (inclusive of GPU VRAM for the 35b), which even in single, non-batching use could displace $5-$7/day of usage.

Local costs are much lower than online costs in the linked chart, but if you stay online, there are better models
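To put that $5-$7/day figure in perspective, a quick back-of-envelope payback calculation (the hardware price here is my own placeholder, not a number from the thread):

```python
def payback_days(hardware_cost: float, daily_savings: float) -> float:
    """Days of use before local hardware pays for itself vs. per-day online spend."""
    return hardware_cost / daily_savings

# Assume a 48-64GB machine around $2500 (placeholder figure)
COST = 2500.0
heavy = payback_days(COST, 7.0)  # heavy use displacing ~$7/day
light = payback_days(COST, 5.0)  # lighter use displacing ~$5/day
print(f"Pays for itself in roughly {heavy:.0f} to {light:.0f} days of use")
```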

[–] chilicheeselies@lemmy.world 1 points 2 hours ago

Depends on whether you even need a better model, though. Whether you can run a good enough model is what matters for the most part.

[–] rimu@piefed.social 5 points 18 hours ago (1 children)

That's interesting.

How much ram did it use while running?

If you used a GPU, how much does it cost in today's prices?

[–] MagicShel@lemmy.zip 9 points 18 hours ago* (last edited 14 hours ago) (3 children)

It's a MacBook Pro with 36GB of RAM. I'm sure Macs have some kind of GPU, and I understand it somehow combines GPU RAM with system RAM, but I don't really know Mac hardware very well.

It's beefy for a laptop, but the desktop I built for myself several years ago had 32 GB of ram and a GTX 1660, so I'm guessing they are similar in capability. I gave that to my daughter, so I can't run a comparison right now.

EDIT: After doing just a bit of research, I've learned that the unified memory architecture Macs use, while not ideal for many purposes, is actually a big advantage for running larger inference models. So it's possible this particular model wouldn't run at all on my Linux box, or would run much slower, because the full model wouldn't fit in the 6GB of VRAM and would cause a lot of memory thrashing.

[–] boonhet@sopuli.xyz 2 points 5 hours ago

Yup, you want memory accessible to the GPU for local AI. AMD Strix Point and Mac devices are popular options. The CPU can run LLMs, but very slowly. I've got 32 GB of RAM and 8 GB of VRAM, and it's borderline useless for models that don't fit in the VRAM.

[–] SabinStargem@lemmy.today 3 points 13 hours ago (1 children)

You can use something like KoboldCPP on Linux, which allows both RAM and VRAM to be combined to run a model. O'course, it's not as fast compared to pure VRAM or the Mac approach, but it is an option. I use my 128GB of RAM with some GPUs for running models.
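The RAM+VRAM combination those tools do is essentially layer offloading: as many transformer layers as fit are placed in VRAM, and the rest run from system RAM on the CPU. A rough sketch of the budgeting involved (all sizes hypothetical; real loaders also account for KV cache and overhead):

```python
def gpu_layer_count(model_gb: float, n_layers: int, vram_gb: float) -> int:
    """Estimate how many layers fit in VRAM, assuming equally sized layers."""
    per_layer_gb = model_gb / n_layers
    return min(n_layers, int(vram_gb // per_layer_gb))

# Example: a ~16GB quantized model with 64 layers on an 8GB card
offloaded = gpu_layer_count(16.0, 64, 8.0)
print(f"Offload {offloaded} of 64 layers to the GPU; run the rest on the CPU")
```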

[–] boonhet@sopuli.xyz 1 points 5 hours ago

Ollama and llama.cpp allow it too but it's super slow in my experience.

[–] humanspiral@lemmy.ca 1 points 12 hours ago

decent performance on 6gb gpu without quantization: https://www.youtube.com/watch?v=8F_5pdcD3HY&t=9s

[–] Janx@piefed.social 5 points 20 hours ago

Or available. Companies have pre-sold years worth of inventory to AI companies.

[–] Jhex@lemmy.world 39 points 21 hours ago (1 children)

If you did not understand the comment above, that's fine, but splitting hairs like you are doing is silly (everybody knows it's not the case that 100% of AI is 100% evil)...

Your comment is exactly the same as when people say "guns don't kill people, people kill people"... yes, we all know guns are not autonomously killing people; the point is that guns, as a tool, are remarkably good at doing something we do not want, which is killing people

[–] meco03211@lemmy.world 9 points 20 hours ago (1 children)

Not to go on a separate tangent, but that's the entire point of guns. They are supposed to kill. That's not meant to be some crazy conservative defense of them or opposition to regulating them. Just pointing out something that seems to get lost in conversations.

[–] Jhex@lemmy.world 3 points 19 hours ago

Correct... so when I tell you "guns DON'T kill people, people kill people", you are right to assume I am just an idiot trying to jingle keys in front of you to distract you from the fact that guns do in fact kill people.

[–] brendansimms@lemmy.world 22 points 21 hours ago (1 children)

Corps want to privatize roads and make them all toll roads too

[–] artyom@piefed.social 3 points 19 hours ago

They are succeeding in my area.

[–] eestileib@lemmy.blahaj.zone 12 points 21 hours ago

Roads were also useful for random citizens and people who happened to be in the area.

LLMs are overwhelmingly more useful to bad actors.

[–] IratePirate@feddit.org 10 points 21 hours ago* (last edited 21 hours ago) (1 children)

I've looked into self-hosted AI and decided it's not worth the cost - both in terms of hardware and energy - when compared to the relative value to be gotten out of it.

[–] cadekat@pawb.social 1 points 17 hours ago (1 children)

Same, pretty much. It is possible though, which makes LLMs a more democratic technology than, say, nuclear reactors.

[–] IratePirate@feddit.org 3 points 17 hours ago* (last edited 17 hours ago) (1 children)

The models you can run on consumer hardware are still nowhere near the stuff that runs in corporate data centers. To stick with your metaphor, it's like running a little steam engine at home while the big guys get to operate nuclear reactors...

[–] cadekat@pawb.social -2 points 14 hours ago

You can get pretty far with a stack of 5090s and llama.cpp with split mode graph (or so I've heard, I've never tried), or AMD's unified memory CPU thing.

It's not as good as data centre grade stuff, but it's not nothing either.
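For reference, multi-GPU splitting in llama.cpp is driven by a couple of CLI flags; here's a rough sketch of what a two-GPU launch could look like (flag names and values should be checked against your build's --help output, since they change between versions, and I can't vouch for a "graph" split mode specifically):

```shell
# Hypothetical two-GPU launch; verify exact flags with `llama-server --help`.
# -ngl 99 offloads all layers to GPU; --split-mode picks how work is divided
# across cards; --tensor-split 1,1 splits it 50/50 between two GPUs.
llama-server -m ./models/some-large-model.gguf \
  -ngl 99 --split-mode layer --tensor-split 1,1
```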