Oh boy, how surprising.
The bait and switch classic.
I'm clutching my pearls as I type this.
So the development of inorganic intelligence, considered by many to be an inflection point in human civilisation, is to be handed to business graduates who are historically proven to be capable of any level of atrocity in the name of corporate greed. America, fuck yeah.
~~America~~ Greed, fuck yeah.
Don't fool yourself. The USA lost the exclusivity deal on unchecked corpo greed a long time ago. This is a global issue now.
Always has been.
Yeah, the American tag was just a throwaway line. Greed, unchecked, insane, and self-harming, has always been with us. We let it sit with us around our campfires like wolves, but unlike wolves, we never tamed it.
Actually, corporations already embody 99% of what people fear about AGI: inhuman decision-making to the detriment of humanity.
No problem, after they release all the data collected under the excuse of public good and progress.
"ClosedAI" rebrand when?
🤣
NopeAI
Open Your Wallet AI
Booooooooooo!
Anyway, I'll just keep using Alpaca to run LLMs locally.
Is there an easy way to do this that doesn't require me to understand how GitHub works?
I recommend Ollama; it's easy to set up, and the CLI can download and run LLMs. With some more tech-savviness you can add Open WebUI as a nice UI.
For someone who doesn't understand GitHub, the CLI might be a bit much, FWIW.
It would be nice if there were a GUI: a download-and-run, single-click app with a web UI built in.
In that case you're looking for llamafiles: single file, LLM included, starts into a web GUI. The only limitation is that Windows limits the size of executable files to 4 GB, so on that OS you're limited to smaller models.
Alpaca for Linux is easy to use. You just install the Flatpak and the LLM of your choice; you don't need to know how to use GitHub. (It might have a Windows version, but I'm not sure.)
I think that in that case, YouTube is your friend. There are a few pretty straightforward videos that can help you out; if you're serious about it, you're going to have to become familiar with it eventually.
Stop depending on these proprietary LLMs. Go to [email protected].
There are open-source LLMs you can run on your own computer if you have a powerful GPU. Models like OLMo and Falcon are made by true non-profits and universities, and they reach GPT-3.5 level of capability.
There are also open-weight models that you can run locally and fine-tune to your liking (although these don't have open-source training data or code). The best of these (Alibaba's Qwen, Meta's Llama, Mistral, DeepSeek, etc.) match and sometimes exceed GPT-4o capabilities.
The issue with that method, as you've noted, is that it prevents people with less powerful computers from running local LLMs. There are a few models that would be able to run on an underpowered machine, such as TinyLlama; but most users want a model that can do a plethora of tasks efficiently like ChatGPT can, I daresay. For people who have such hardware limitations, I believe the only option is relying on models that can be accessed online.
For that, I would recommend Mistral's Mixtral models (https://chat.mistral.ai/) and the surfeit of models available on Poe AI's platform (https://poe.com/). Particularly, I use Poe for interacting with the surprising diversity of Llama models they have available on the website.
I thought they were a for-profit company all this time.
Pretty much non-profit in name only. Some shady hybrid model.
OpenAI sure seems like a case study in how to grift everyone by masquerading as a non profit whilst actually enriching yourself and your shareholders, causing a whole new class of societal problems in the process.
That's very open of them
Open to All Income.
Ruh roh!
There was never another outcome.
Capitalism breeds one thing, and it certainly isn't innovation, and it most definitely isn't not-for-profit innovation.
Capitalism is extremely good at breeding superficial, go-to-market innovation. It's less good at funding the pure research that leads to major discoveries. But once it gets closer to engineering than to science, it's highly effective. Even Marx commented on that.
Shocking nobody
Well, apart from the people like me who thought they had always been one because they acted exactly like one.
They've been acting like that from the start 🤷🏻‍♂️
They should also change their name to ClosedAI while they're at it.
No kidding. 🙀
Hahaha. April 1st is early this year.
They are never going to make enough money selling licenses and subscriptions to cover the cost of their current models (smarter people than me have made good estimates), let alone the future ones, which sit at a much worse performance-to-cost ratio. Ads will at best bring in about $1 per user per month (estimated from Facebook's revenue and user numbers); double or triple it just for lolz, and they would still be losing money.
So… how will this be pulled off? Only wrong answers!
Have a partnership with Microsoft and ship Windows 12 as the new "AI only" OS. Every command must go through ChatGPT to work. Then push updates to older Win11 OS to make them unusable.
From what I've heard, they don't need to push updates to achieve that
How fast are they burning money right now?
Based on their funding rounds, $10 billion lasts about 18 months.
So about $555 million per month.
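The arithmetic is straightforward (note the $10 billion per 18 months figure is the commenter's estimate, not an official number):

```python
# Back-of-the-envelope burn rate, using the figures from the thread:
# ~$10 billion in funding lasting about 18 months.
funding_usd = 10_000_000_000
months = 18

monthly_burn = funding_usd / months
print(f"${monthly_burn / 1e6:.0f} million per month")
```

This rounds to about $556 million per month, consistent with the ~$555 million quoted above.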
Did Elon not block this?
So, ads in chat now?
They've already started testing that at Google, for ad enhancement and for immersive ads. There's no way they keep the chat models pristine and ad-free.
The dystopian future of "pay to use this miraculous product or it will shove advertisements down your throat in a way we know will work because we've trained it to sell specifically to you"