I'd also argue a human monitoring your conversation would likely make similar mistakes in judgement about what's happening, and this kind of invasion of privacy just isn't okay in any form. There could be whole extra conversations happening that they can't see (like speaking IRL before sending a consensual picture).
traveling at a million light-years per millisecond
You're only off by a factor of about 30 quadrillion.
Light (famously a type of radiation) takes 1 year to travel a light-year, hence the name.
If you want to make it sound impressive, then astronomical units aren't the right choice. The sun is only 1 AU away from us after all.
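For anyone who wants to check that "30 quadrillion" figure themselves, here's a quick back-of-the-envelope sketch of the arithmetic (my own, not from the original comment):

```python
# Light covers 1 light-year per year, so work out how far it gets per millisecond.
SECONDS_PER_YEAR = 365.25 * 24 * 3600      # ~3.156e7 seconds
MS_PER_YEAR = SECONDS_PER_YEAR * 1000      # ~3.156e10 milliseconds

light_ly_per_ms = 1 / MS_PER_YEAR          # ~3.17e-11 light-years per millisecond
claimed_ly_per_ms = 1_000_000              # "a million light-years per millisecond"

factor = claimed_ly_per_ms / light_ly_per_ms
print(f"{factor:.2e}")                     # ~3.16e16, i.e. roughly 30 quadrillion
```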
Me: What do you mean my server is old? I built it last year!
Server Uptime: 988 days(!)
Oh...
I personally have spent those 100s (actually more like 1000s) of hours studying Software Engineering, and I was doing my best to give an example of how current AI tools are not a replacement for experience. Neither is having access to a sewing machine or blowtorch and hammer (you still need to know about knots and thread / metallurgy / the endless number of techniques for using those tools).
Software in particular is an extremely theoretical field, similar to medicine (thus my example with a doctor).
ChatGPT is maybe marginally better than a simple web search when it comes to learning. There is simply no possible way to compress the decade of experience I have into a few hours of using an LLM. The usefulness of AI for me starts and ends at fancy auto-complete, and that literally only slightly speeds up my already fast typing speed.
Getting a good result out of AI for coding requires so much prerequisite knowledge to ask the right questions that a complete novice is not even going to know what they should be asking for without going through those same 100s of hours of study.
Not everyone has 100s of hours free time to sink into this and that skill
That's life, buddy. Nobody can learn everything, so communities rely on specialists who can master their craft. Would you rather be treated by a doctor with 100s of hours of study and practice, or by a random person off the street with ChatGPT? If something is worth studying for 100s of hours, then there's more nuance to the skill than any layman or current AI system can capture in a few-sentence prompt.
Also prop jumping in Source games like Portal
I’ve seen some horrendous systems where you can tell a bunch of totally separate visions were frankenstein’d together
My experience has been that using AI only accelerates this process, because the AI has no concept of what good architecture is or how to reduce entropy. Unless you can one-shot the entire architecture, it's going to immediately go off the rails. And if the architecture was that simple to begin with, there really wasn't much value in the AI in the first place.
This sounds like it takes away a huge amount of creative freedom from the writers if the AI is specifying the framework. It'd be like letting the AI write the plot, but then having real writers fill in details along the way, which sounds like a good way to have the story go nowhere interesting.
I'm not a writer, but if I were to apply this strategy to programming, which I am familiar with, it'd be like letting the AI decide what all the features are, and then I'd have to go and build them. Considering more than half my job is stuff other than actually writing code, this seems overly reductive, and underestimates how much human experience matters in deciding a framework and direction.
What improvements have there been in the past 6 months? From what I've seen, the AI is still spewing the same 3/10 slop it has been since 2021, with maybe one or two improvements bringing it up from 2/10. I've heard several people say some newer/bigger models actually got worse at certain tasks, and the supply of clean training data to even train more models has pretty much dried up.
I just don't see any world where scaling up the compute and power usage is going to suddenly improve the quality by orders of magnitude. By design, LLMs output the most statistically likely response, which almost by definition is going to be the most average, bland response possible.
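To make "most statistically likely" concrete, here's a toy sketch with made-up probabilities (nothing to do with any real model's internals, just the gist of greedy picking vs. sampling):

```python
import random

# Pretend the model has learned these next-word probabilities after "The food was":
next_word_probs = {"good": 0.40, "fine": 0.25, "okay": 0.20,
                   "transcendent": 0.10, "radioactive": 0.05}

# Greedy decoding always takes the statistical favourite -> the blandest option.
greedy_pick = max(next_word_probs, key=next_word_probs.get)
print(greedy_pick)  # "good"

# Sampling adds variety, but the draw is still weighted toward the same average answer.
sampled_pick = random.choices(list(next_word_probs),
                              weights=list(next_word_probs.values()))[0]
print(sampled_pick)  # usually "good" or "fine", rarely anything interesting
```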
This is based on the assumption that the AI output is any good, but the actual game devs and writers are saying otherwise.
If the game is too big for writers to finish on their own, they're not going to have time to read and fix everything wrong with the AI output either. This is how you get an empty, soulless game, not Baldur's Gate 3.
I don't think it really matters how old the target is. Generating nude images of real people without their consent is fucked up no matter how old anyone involved is.
Do you know how many cities are out there that have completely useless public transit? I don't think anyone's suggesting we build a train out to every farmer's front door so they can get into town without a car.
There are plenty of areas where additional bus routes and train lines would be a huge benefit, but the entire budget is being spent on car infrastructure.
(Like the Premier of Ontario who wants to build a tunnel for cars under Toronto instead of finishing the light rail projects that have been under construction for over a decade)