Disappointing but not unexpected. Most Chinese companies still operate on the "absolute secrecy, because competitors might steal our tech" ideology, which hinders a lot of things...
fonix232
What, you don't have a few spare photonic vacuums in your parts drawer?
Well, yeah, when management is made up of dumbasses, you get this. And I'd argue some 90% of all management is absolute waffles when it comes to making good decisions.
AI can and does accelerate workloads if used right. It's a tool, not a person replacement. You still need someone who can utilise the right models, research the right approaches and so on.
What companies need to realise is that AI accelerating things doesn't mean you can cut your workforce by 70-90% and still keep the same deadlines; it means that with the same workforce you can deliver things 3-4 times faster. And faster delivery means new products (be it a new feature or a truly brand new standalone product) have a lower cost basis even though the same number of people worked on them, and the quicker cadence means a shorter idea-to-profits timeline.
It actually makes some sense.
On my 7950X3D setup the main issue was always making sure to pin games to a specific CCD, and AMD's tooling is... quite crap at that. Identifying the right CCD was always problematic for me.
Eliminating this by adding V-Cache to both CCDs, so it doesn't matter which one you pin to, is a good workaround. And IIRC V-Cache also helps certain (local) AI workloads, meaning running a game next to such a model won't cause issues, as each gets its own CCD to run on.
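The pinning itself is scriptable, by the way. Here's a minimal sketch, assuming a 7950X3D-style layout (two CCDs of 8 cores each, SMT on) and a Linux numbering scheme where SMT siblings come after all physical cores - that numbering is an assumption, so verify it against /sys/devices/system/cpu/cpu*/topology on your own box. `ccd_cpus` and `pin_to_ccd` are hypothetical helpers, the latter Linux-only:

```python
import os

# Assumed topology for a 7950X3D-like part: 2 CCDs x 8 physical cores,
# with SMT siblings numbered after all physical cores (an assumption --
# check your actual topology before relying on this).
CORES_PER_CCD = 8
TOTAL_CORES = 16  # physical cores across both CCDs

def ccd_cpus(ccd: int, smt: bool = True) -> set:
    """Logical CPU ids belonging to one CCD under the assumed numbering."""
    physical = set(range(ccd * CORES_PER_CCD, (ccd + 1) * CORES_PER_CCD))
    if smt:
        # Add the SMT sibling of each physical core
        physical |= {c + TOTAL_CORES for c in physical}
    return physical

def pin_to_ccd(pid: int, ccd: int) -> None:
    """Pin a process (e.g. a game) to a single CCD. Linux only."""
    os.sched_setaffinity(pid, ccd_cpus(ccd))
```

With both CCDs carrying V-Cache, it genuinely stops mattering whether you pass 0 or 1 here.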
Thanks now I'll have new kinds of nightmares.
To be perfectly fair, rocket-propelled explosive technologies have come a long, long way even in just the past 30 years.
You could potentially get it done with a few kilos of C-4 and a DJI drone.
Gemini be like: "sure thing, here's four variants to choose from!"
[delivers literal CSAM]
Veterans can be ... rejected on apartment applications
What the fuck is wrong in the US? You can have the money, the income, etc., and still be denied an apartment just because there was a dropped criminal investigation in your past? Not even a conviction, just an investigation. I was investigated (case dropped) in Hungary because my ex-flatmate defrauded a bunch of people in relation to the flat (the landlord, and the electric, water and heating providers, among others), then tried to pin it on me, going as far as reporting me to the police (who quickly discovered that I wasn't the responsible party at the times indicated and that all the fraud happened after I moved out, so the case was dropped). In the US, a landlord could seriously deny my housing application purely because such an investigation took place? This is beyond ridiculous.
Alright I did read further and damn, you just keep going on being wrong, buddy!
Yes, you can fucking do "stand on the table and make a speech" work. You know how? By breaking it up into detailed steps (pun intended), something that LLMs are awesome at!
For example in this case the LLM could query the position and direction of the table compared to the NPC and do the following:
- plan a natural path between the two points (although the game engine most likely already has such a function)
- make the NPC follow that path
- upon path end, it will instruct the NPC to step onto the table via existing functions (Skyrim pretty much has all these base behaviours already coded, but the scripting engine should also be able to modify the skeleton rig of an NPC directly, which means the LLM can easily write it)
- then the script can initiate dialogue too.
I've asked Perplexity (not even one of the best coding agents out there; its mistake ratio is around 5%), and within seconds it spat out a full-on script to identify the nearest table or desk and start talking. You can take a look here. And while my Papyrus is a bit rusty, it does seem correct even on the third read-through - but that's the fun part: one does not need to trust the AI, as this script can be run through a compiler or even a validator (which, let's be honest, is a stripped-down compiler first stage) to verify it isn't faulty. The LLM can then iterate over the code based on the compiler feedback, which would point out the errors.
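That compile-and-iterate loop is trivial to wire up, by the way. A minimal sketch, where `ask_llm` and `compile_source` are hypothetical stand-ins for an LLM API call and the Papyrus compiler/validator:

```python
# Hypothetical sketch of the generate -> compile -> fix loop described
# above. `ask_llm` and `compile_source` are stand-ins: in reality they'd
# call an LLM API and the Papyrus compiler (or a validator) respectively.

def iterate_until_valid(ask_llm, compile_source, prompt, max_rounds=5):
    """Generate code, compile it, and feed errors back until it passes."""
    source = ask_llm(prompt)
    for _ in range(max_rounds):
        errors = compile_source(source)   # empty list means a clean compile
        if not errors:
            return source
        # Re-prompt with the compiler output so the LLM can fix its own code
        source = ask_llm(f"{prompt}\n\nFix these compiler errors:\n"
                         + "\n".join(errors))
    raise RuntimeError("still failing after max_rounds attempts")
```

The point being: the compiler is the trust anchor, not the model.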
Now mind you, this is the output of an internet-enabled, research-oriented LLM that hasn't been fine-tuned for Papyrus and Skyrim. With some work you could probably get a 0.5B local model that does only natural-language-to-Papyrus translation, combined with a 4B LLM that does the context expansion (aka what you see in the Perplexity feed, my simple request being detailed step by step) and reiteration.
You'd also be surprised just how flexible game engines are. Especially freeroaming, RPG-style engines. Devs are usually lazy, so they don't want to hardcode all the behaviours; instead they create ways to make it simple for game designers to actually code those behaviours and share them between units. For example, both a regular object (say, a chair) and a character-type object (such as an NPC) will have a move() function that moves them from A to B, but the latter will have extra calls in that function that ensure the humanoid character isn't just sliding to the new position but taking steps as it moves, turning the right direction and so on. Once all these base behaviours are available, it's super easy to put them together. This is precisely why we have so many high-quality Skyrim mods (or mods for Bethesda games in general).
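The chair-vs-NPC move() pattern above can be sketched like this - in Python rather than any real engine's scripting language, with all class and method names purely illustrative:

```python
# Illustrative sketch of the shared-behaviour pattern: a plain object
# just snaps to the target, while an NPC reuses the same move() entry
# point but walks there step by step. Not any real engine's API.

class GameObject:
    def __init__(self, pos):
        self.pos = pos

    def move(self, target):
        # Base behaviour: simply place the object at the target
        self.pos = target

class NPC(GameObject):
    def __init__(self, pos):
        super().__init__(pos)
        self.path = []

    def move(self, target):
        # Same call signature, but take discrete steps instead of snapping
        x, y = self.pos
        tx, ty = target
        while (x, y) != (tx, ty):
            x += (tx > x) - (tx < x)   # step one unit toward the target
            y += (ty > y) - (ty < y)
            self.path.append((x, y))
        super().move(target)
```

Callers (and an LLM writing against this interface) only ever need to know about move(); the step-by-step walking comes for free in the subclass.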
And again, code quality in LLMs has come a VERY long way. I'm a software engineer by trade, and I'd say somewhere between 80-90% of all the code I write is actually done by AI. I still oversee it, review what it does, and direct it the right way when it does something silly, but those aren't as minor functionalities as we're talking about here. I've had AI code a full-on display driver for a microcontroller, with very specific restrictions, in about 4 hours (and I'd argue 2 of those were spent running the driver, evaluating the result manually, then identifying the issue and working out a solution with the LLM). In 4 hours I managed to do what otherwise would've taken me about a week.
Now imagine that the same thing only needs to do relatively small tasks, not figure out optimal data caching and updating strategies tied to active information delivery to the user with appropriate transformation into UI state holders.
Okay I won't even read past the first paragraph because you're so incredibly wrong that it hurts.
First generation LLMs were bad at writing long batches of code, today we're on the fourth (or by some metric, fifth) generation.
I've trained LLM agents on massive codebases that resulted in a <0.1% fault ratio on first pass. Besides, tool calling is a thing, but I guess if I started detailing how MCP servers work and how they can be utilised to ensure an LLM agent doesn't do incorrect calls, you'd come up with another 2-3 year old argument that simply doesn't have a leg to stand on today.
See the main issue with that is you need to bundle everything into the app.
Modern computing is inherently cross-dependent on runtimes and shared libraries and whatnot, to save space. Why bundle the same 300MB runtime into five different apps when you can download it once and share it between the apps? Or even better, have a newer, backwards compatible version of the runtime installed and still be able to share it between apps.
With WASM you're looking at bundling every single dependency, every single runtime, framework and whatnot, in the final binary. Which is fine for one-off small things, but when everything is built that way, you're sacrificing tons of storage and bandwidth unnecessarily.
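Some back-of-the-envelope numbers for that trade-off, with purely illustrative sizes (not measurements of any real runtime):

```python
# Illustrative comparison: five apps each bundling a runtime vs.
# sharing one installed copy. All sizes are made-up round numbers.

RUNTIME_MB = 300
APP_CODE_MB = 20
N_APPS = 5

bundled = N_APPS * (APP_CODE_MB + RUNTIME_MB)   # every app ships the runtime
shared = N_APPS * APP_CODE_MB + RUNTIME_MB      # one shared copy on disk

print(bundled, shared)  # 1600 vs 400
```

Four times the storage (and roughly the same multiplier on download bandwidth) for the exact same functionality - and that gap only grows with more apps.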