this post was submitted on 08 Nov 2025
12 points (100.0% liked)

Solarpunk technology


Technology for a Solar-Punk future.

Airships and hydroponic farms...


Cross-posted from fediverse user @peachy@goto.micromail.me

Is there such a thing as a configurable full-body input controller for computers? Is anyone working on that? I know there is work on controlling computers directly with the brain, which will be ace for people with full paralysis, but what I’m interested in is something that goes in the other direction - using more of the body. Think Tom Cruise’s interface in Minority Report but better. Sitting, or even standing, to work at a computer takes its toll on the body, especially the back. Our bodies didn’t evolve to be so static while we’re awake. Emerging from a flare-up of a slipped disc has got me thinking of better ways to interface with machines.

Imagine the following:

You come to see me in my studio to see how I and my colleagues do image editing and graphic design in GIMP 4.0. Some of us are stood in front of large displays but no one seems to be using a keyboard, mouse or graphics tablet. I appear to be doing a dance routine from a music video... As I bounce my knee up and across my body you see that the Move tool has been selected. As I raise my left fist above my head it is as though I am holding Shift to toggle “Pick a layer or guide”. I draw my right hand across my body with my thumb and forefinger pinched and the selected layer moves with me. Finally, I quickly raise both hands, like I'm flipping over a table, and my project is saved and closed. Now that I’ve stopped moving around so energetically you notice that my stylish and comfortable cotton loungewear and gloves have small sensors dotted around them. I explain that the positions of these sensors relative to each other and to the space have been mapped to traditional keyboard and mouse inputs via my operating system.
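If you want to picture the software side of that last mapping step, here's a rough, hypothetical sketch. The gesture names are invented, and pynput is just one existing library that can inject synthetic key events; a real setup would sit between the sensor drivers and the OS input layer.

```python
from pynput.keyboard import Controller, Key

keyboard = Controller()

# Invented gesture names -> keyboard input, loosely matching the scene above.
ON_GESTURE_START = {
    "knee_bounce": "m",               # GIMP's Move tool shortcut
    "left_fist_overhead": Key.shift,  # held while the fist stays raised
}
ON_GESTURE_END = {
    "left_fist_overhead": Key.shift,
}

def gesture_started(name):
    key = ON_GESTURE_START.get(name)
    if key is not None:
        keyboard.press(key)
        if name not in ON_GESTURE_END:  # momentary gestures release at once
            keyboard.release(key)

def gesture_ended(name):
    key = ON_GESTURE_END.get(name)
    if key is not None:
        keyboard.release(key)

gesture_started("knee_bounce")         # taps "m": Move tool in a focused GIMP
gesture_started("left_fist_overhead")  # Shift goes down and stays down...
gesture_ended("left_fist_overhead")    # ...until the fist comes back down
```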

Moving to the next workspace you see my colleague Babs. Her white hair pokes out above a VR headset and she has a number of small cameras tracking her movement to the soundtrack of Chinese classical music. She is an elder and a veteran and even contributed some of the code that makes this stuff work, back in the day. She says it was no big deal; she mostly just connected up different programs, some of which Hollywood has been using since the 1990s. Her movements are slow and smooth. It looks like she’s doing Qi Gong or Tai Chi or something. As she raises a hand in front of her heart you see the Filters menu open, and as she lowers it slowly the menu scrolls down to Enhance. Gracefully stepping sideways and lowering her hand further, she highlights Heal Selection in the submenu. Turning her hand palm-up launches the plugin. She tells you that one of her first contributions to the interface was to make the body-position tolerances configurable by the user in their desktop settings.

Lastly you watch my cousin Tommy at work. When we met I told you about how a head injury had left him partially paralysed and unable to speak. He too is using a VR headset, but instead of having cameras pointed at him he has an HD sonar array. His disability was caused by an error in the police’s facial-recognition software and understandably he’s had a thing about cameras ever since. The bad guy got away and he never caught the bus he was running to catch. Every couple of days he asks whether Nancy’s cameras are still disconnected from the network, which they always are. Tapping his ring finger once on the armrest of his wheelchair selects the Text tool. Turning his head to the side, he purses his lips and sweeps his face back around to make his text box. You see his mouth moving but there is no sound. “Hi, nice to meet you” appears in his project’s new text layer. “You too” you reply. When he twitches his right shoulder his text layer is duplicated; blinking twice and nodding his head replaces the text with what you just said. He must have used speech-to-text to record your words to his desktop clipboard and then pasted them into the text field. Pressing his index finger against the armrest and looking toward the ceiling brings the new text layer to the top of the stack. Running the same sequence of movements again, he makes a third text layer visible onscreen. “I’d never edited a picture in my life until I got into this tech as part of my physiotherapy treatment. My cousin ended up offering me this job and now I can work faster than anyone else here, especially Babs. I’m pretty sure she’s just here for fun but none of us mind.”

#tech #health #disability #GIMP #solarpunk

top 3 comments
[–] sp3ctr4l@lemmy.dbzer0.com 3 points 1 week ago* (last edited 1 week ago) (1 children)

I mean... yes, some of what is described here already exists; it's basically advanced VR controls.

Scanning a human visually à la the MSFT Kinect is a thing you can do for maybe a rough estimate of overall body position, but to be highly accurate you basically need at least two cameras, and/or an AI cluster to process those images in high quality.
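To make the two-camera point concrete: two calibrated views let you triangulate a landmark's 3D position directly, no AI cluster required. A toy OpenCV sketch (the intrinsics and pixel coordinates are made-up numbers for illustration):

```python
import numpy as np
import cv2

# Two calibrated cameras, 1 m apart, identical made-up intrinsics.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0,   0.0,   1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])                    # camera at origin
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])    # 1 m to the right

# The same hand landmark seen in each image (pixel coordinates).
pt1 = np.array([[400.0], [300.0]])  # in camera 1
pt2 = np.array([[240.0], [300.0]])  # in camera 2

# Triangulate to homogeneous 3D coordinates, then dehomogenize.
X_h = cv2.triangulatePoints(P1, P2, pt1, pt2)
X = (X_h[:3] / X_h[3]).ravel()
print("estimated 3D position (metres):", X)  # ~ (0.5, 0.375, 5.0)
```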

Hopefully we can agree that no solarpunk society is going to have an earth-destroying data center solely for rendering your HID inputs correctly.

That or you could use LiDAR, but that shit's expensive (though Tesla cars not having it is part of why they suck so much).

Probably a much more practical solution is basically a VR suit, kinda like a slimmed-down mocap suit, or accessories like gloves and such, with accelerometers and gyroscopes for tracking joints and digits independently...

...which is basically what's already used to remote-control humanoid robots.

Walking around is always going to be... weird.

It is possible to just wear anklets and knee pads with accelerometers and such as well, but if the screen is strapped to your face, you're gonna walk into a wall or trip over something or get fatigued or nauseous eventually.

Yeah, just have modular kits of wearable digit/joint/major-body-section accelerometers and gyroscopes that all plug in to a small backpack-like thing (or maybe frontpack or w/e) that has a wifi transceiver.
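A rough sketch of what that hub loop might look like. Every module name, address and rate here is a made-up placeholder, and the IMU read is faked since there's no standard driver to call:

```python
import json
import socket
import time
import random

# Hypothetical wearable modules reporting accelerometer + gyro readings;
# the "backpack" hub bundles them and streams frames over WiFi via UDP.
MODULES = ["left_wrist", "right_wrist", "left_ankle", "right_ankle", "torso"]
DESKTOP = ("192.168.0.42", 9999)  # made-up address of the receiving PC

def read_imu(module_id):
    """Stand-in for a real IMU driver; returns fake accel/gyro samples."""
    return {
        "id": module_id,
        "accel": [random.uniform(-2, 2) for _ in range(3)],     # g
        "gyro": [random.uniform(-250, 250) for _ in range(3)],  # deg/s
    }

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
while True:
    frame = {"t": time.time(), "modules": [read_imu(m) for m in MODULES]}
    sock.sendto(json.dumps(frame).encode(), DESKTOP)
    time.sleep(1 / 60)  # ~60 Hz update rate
```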


So uh, long story short: a lot of this tech already exists and is in various niche use cases or hobby spaces... but uh, making, say, a general OS that works via basically a bunch of dancing and gestures?

That's probably a significantly more difficult thing to achieve.

It's not a tech problem so much as it is a conceptual problem. How do you even do that, replace every possible thing you can do with a mouse and keyboard?

Meta still can't even figure out inverse kinematic simulation of a cartoon avatar's legs convincingly.

Apple's attempt at their VR controls was basically an early, early alpha: very limited and limiting, it just didn't really work.


... but uh, let's please not go about lobotomizing ourselves with brain chips; I don't see any reality where that, as a general-use paradigm, is not utterly horrifying.

In today's news: 1/3 of humanity got an actual mind virus due to a combined flaw in Bluetooth and an exploit in Redis, and they've mostly all had their brain chips overheat and/or explode.

Our following story tonight: blink once to vote yes on soylent-greening them all, blink twice to vote no. Now please drink a verification can to continue; failure to do so will result in immediate TOS violation and revocation of all anti-brain-malware support.

[–] oeuf@slrpnk.net 2 points 1 week ago (1 children)

Yes, I think mocap suits or modular versions thereof seem the most plausible and sustainable right now.

I'm not an engineer, but I think defining ranges of three-dimensional coordinates in this way and mapping them to other data like keystrokes and mouse position shouldn't be much harder than many other things which have already been accomplished in things like manufacturing, games and aerospace.
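Something like this minimal sketch, say: user-configurable 3D coordinate ranges checked against a tracked point, each mapped to a key. All zone names and numbers are invented for illustration:

```python
# Configurable 3D zones (in metres, relative to some tracked origin),
# each translating into a key press when a tracked point enters it.
ZONES = {
    # name: ((x_min, x_max), (y_min, y_max), (z_min, z_max), key)
    "fist_overhead": ((-0.2, 0.2), (0.5, 1.0), (-0.3, 0.3), "shift"),
    "knee_across":   ((-0.5, 0.0), (-1.0, -0.4), (0.0, 0.5), "m"),  # Move tool
}

def zone_to_key(x, y, z):
    """Return the key mapped to whichever zone the tracked point is in."""
    for name, (xr, yr, zr, key) in ZONES.items():
        if xr[0] <= x <= xr[1] and yr[0] <= y <= yr[1] and zr[0] <= z <= zr[1]:
            return key
    return None

print(zone_to_key(0.0, 0.8, 0.0))  # -> "shift"
```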

The will to do it is probably a bit lacking under our current system though. Most technology seems to be going in a direction of minimising creative participation, especially physically, and instead prioritising passive consumption. I think redefining the purpose of technology is as much a part of Solarpunk as what form that technology takes.

[–] sp3ctr4l@lemmy.dbzer0.com 2 points 1 week ago* (last edited 1 week ago)

Not only the will is lacking, but also... you'd have to come up with near-universal standards for suit-hardware-to-software communication, and you'd have to get the ... gloves or the mocap suit or a camera setup that's actually fairly cheap to manufacture, like a mouse or something.

I've known people that tinker with this stuff as either hobbyists or for college courses; it's certainly not impossible... but we are not really at a mass-production standards phase yet.

And... that's probably because no one has yet done a working proof of concept showing general, practical uses for this.

But, after writing all this out...

I am currently myself tinkering with game dev stuff, and oh boy is it hard to find decent animation files beyond the extremely basic that don't require either significant money or time...

But OpenCV exists, and decent webcams aren't too expensive... and there are tools either in OpenCV or that build off of OpenCV that then take your silhouette / mocap data and render it as some kind of game character model / skeletal animation, or at least do parts of that process.

I know it's possible to do at least halfway OK mocap just with cams and no suit now, but I don't know if that works without feeding it to an AI datacenter for processing, or if it can run in realtime on a laptop.

If the latter is the case, ... well then I may take my own shot at it, if nothing else, just to mocap myself for some more interesting game anims.
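For what it's worth, camera-only pose tracking can run in realtime on a laptop CPU. Here's a minimal sketch assuming MediaPipe's pose solution (my guess at tooling, not something named above):

```python
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose

cap = cv2.VideoCapture(0)  # default webcam
with mp_pose.Pose(model_complexity=1) as pose:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB; OpenCV captures BGR.
        results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.pose_landmarks:
            # 33 named landmarks, x/y normalized to the image, z relative depth.
            wrist = results.pose_landmarks.landmark[mp_pose.PoseLandmark.RIGHT_WRIST]
            print(f"right wrist: x={wrist.x:.2f} y={wrist.y:.2f} z={wrist.z:.2f}")
        cv2.imshow("pose", frame)
        if cv2.waitKey(1) & 0xFF == 27:  # Esc quits
            break
cap.release()
cv2.destroyAllWindows()
```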

Beyond that, there is a gigantic free dataset of mocap data from Carnegie Mellon University, but jesus h christ, it's a barely documented mess, and it's all raw mocap point-cloud data; converting it all into something more broadly useful, like an FBX format, on a standard, root-normalized skeleton... and breaking it down into distinct, specific movements... that'd be a lot of work.

Like, teams-of-people levels of work, if you want an actually very easily useful library in under 5 years' time.

I did manage to more recently find a paper that had well-formed and cleaner data specific to mocapping karatekas and their techniques... but yeah, generally all that shit is either paywalled or basically a barely structured mess.