Solarpunk technology


Technology for a Solar-Punk future.

Airships and hydroponic farms...


Cross-posted from fediverse user @peachy@goto.micromail.me

Is there such a thing as a configurable full-body input controller for computers? Is anyone working on that? I know there is work on controlling computers directly with the brain, which will be ace for people with full paralysis, but what I’m interested in is something that goes in the other direction - using more of the body. Think Tom Cruise’s interface in Minority Report but better. Sitting, or even standing, to work at a computer takes its toll on the body, especially the back. Our bodies didn’t evolve to be so static while we’re awake. Emerging from a flare-up of a slipped disc has got me thinking about better ways to interface with machines.

Imagine the following:

You come to my studio to see how my colleagues and I do image editing and graphic design in GIMP 4.0. Some of us are stood in front of large displays but no one seems to be using a keyboard, mouse or graphics tablet. I appear to be doing a dance routine from a music video... As I bounce my knee up and across my body you see that the Move tool has been selected. As I raise my left fist above my head it is as though I am holding Shift to toggle “Pick a layer or guide”. I draw my right hand across my body with my thumb and forefinger pinched and the selected layer moves with me. Finally, I quickly raise both hands, like I’m flipping over a table, and my project is saved and closed. Now that I’ve stopped moving around so energetically you notice that my stylish and comfortable cotton loungewear and gloves have small sensors dotted around them. I explain that the position of these sensors relative to each other and to the space has been mapped to traditional keyboard and mouse inputs via my operating system.
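A minimal sketch of what that mapping layer might look like in software, assuming some separate gesture recognizer is already emitting named events; the gesture names, the bindings, and the use of the pynput library are all illustrative, not something from the post:

```python
# Hypothetical mapping layer: named gestures (detected elsewhere) are turned
# into ordinary keyboard shortcuts, so the application just sees a keyboard.
from pynput.keyboard import Controller, Key

keyboard = Controller()

def select_move_tool():
    keyboard.tap('m')                     # 'M' is GIMP's usual Move tool shortcut

def toggle_pick_layer(pressed: bool):
    (keyboard.press if pressed else keyboard.release)(Key.shift)

def save_and_close():
    with keyboard.pressed(Key.ctrl):      # hold Ctrl for both keystrokes
        keyboard.tap('s')                 # save
        keyboard.tap('w')                 # close

# Illustrative gesture names mapped to plain keyboard actions.
GESTURE_BINDINGS = {
    "knee_bounce_across": select_move_tool,
    "left_fist_raised":   lambda: toggle_pick_layer(True),
    "left_fist_lowered":  lambda: toggle_pick_layer(False),
    "table_flip":         save_and_close,
}

def on_gesture(name: str) -> None:
    """Called by the gesture recognizer whenever a pose is matched."""
    action = GESTURE_BINDINGS.get(name)
    if action:
        action()
```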

Moving to the next workspace you see my colleague Babs. Her white hair pokes out above a VR headset and she has a number of small cameras tracking her movement to the soundtrack of Chinese classical music. She is an elder and a veteran and even contributed some of the code that makes this stuff work, back in the day. She says it was no big deal; she mostly just connected up different programs, some of which Hollywood has been using since the 1990s. Her movements are slow and smooth. It looks like she’s doing Qi Gong or Tai Chi or something. As she raises a hand in front of her heart you see the Filters menu open, and as she lowers it slowly the menu scrolls down to Enhance. Gracefully stepping sideways and lowering her hand further, she highlights Heal Selection in the submenu. Turning her hand palm-up launches the plugin. She tells you that one of her first contributions to the interface was to make the body-position tolerances configurable by the user in their desktop settings.
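A rough sketch of what user-configurable body-position tolerances could mean in practice: a gesture template is just a set of joint positions, and a live pose counts as a match when every joint is within the tolerance the user has chosen. The joint names, coordinates, and settings dict below are made up for illustration:

```python
# Hypothetical tolerance-based gesture matching, with the tolerance value
# coming from the user's desktop settings rather than being hard-coded.
import numpy as np

user_settings = {"pose_tolerance_m": 0.08}   # e.g. read from desktop settings

def matches_template(current: dict, template: dict, tolerance: float) -> bool:
    """current/template map joint names to (x, y, z) positions in metres."""
    for joint, target in template.items():
        if joint not in current:
            return False
        if np.linalg.norm(np.array(current[joint]) - np.array(target)) > tolerance:
            return False
    return True

# Illustrative template: right hand raised roughly to chest height.
raise_hand_to_heart = {
    "right_wrist": (0.00, 1.30, 0.25),
    "right_elbow": (0.15, 1.10, 0.20),
}

# if matches_template(live_pose, raise_hand_to_heart,
#                     user_settings["pose_tolerance_m"]):
#     open_filters_menu()
```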

Lastly you watch my cousin Tommy at work. When we met I told you about how a head injury had left him partially paralysed and unable to speak. He too is using a VR headset, but instead of having cameras pointed at him he has an HD sonar array. His disability was caused by an error in the police’s facial-recognition software and understandably he’s had a thing about cameras ever since. The bad guy got away and he never caught the bus he was running to catch. Every couple of days he asks whether Nancy’s cameras are still disconnected from the network, which they always are. Tapping his ring finger once on the armrest of his wheelchair selects the Text tool. Turning his head to the side, he purses his lips and sweeps his face back around to make his text box. You see his mouth moving but there is no sound. “Hi, nice to meet you” appears in his project’s new text layer. “You too” you reply. Twitching his right shoulder duplicates his text layer; blinking twice and nodding his head replaces the text with what you just said. He must have used speech-to-text to record your words to his desktop clipboard and then pasted them into the text field. Pressing his index finger against the armrest and looking toward the ceiling brings the new text layer to the top of the stack. Running the same sequence of movements again, he makes a third text layer visible onscreen. “I’d never edited a picture in my life until I got into this tech as part of my physiotherapy treatment. My cousin ended up offering me this job and now I can work faster than anyone else here, especially Babs. I’m pretty sure she’s just here for fun but none of us mind.”

#tech #health #disability #GIMP #solarpunk

sp3ctr4l@lemmy.dbzer0.com 2 points 1 week ago (last edited 1 week ago)

Not only is the will lacking, but also... you'd have to come up with near-universal standards for suit hardware-to-software communication, and you'd have to get the ... gloves or the mocap suit or a camera setup that's actually fairly cheap to manufacture, like a mouse or something.

I've known people who tinker with this stuff, either as hobbyists or for college courses; it's certainly not impossible... but we are not really at a mass-production, standardized phase yet.

And... that's probably because no one has yet done a working proof of concept showing general, practical uses for this.

But, after writing all this out...

I am currently tinkering with game dev stuff myself, and oh boy is it hard to find decent animation files beyond the extremely basic ones that don't require either significant money or time...

But OpenCV exists, and decent webcams aren't too expensive... and there are tools, either in OpenCV or built on top of it, that take your silhouette / mocap data and render it as some kind of game character model / skeletal animation, or at least do parts of that process.

I know it's possible to do at least halfway-OK mocap with just cameras and no suit now, but I don't know whether that works without feeding it to an AI datacenter for processing, or whether it can run in real time on a laptop.

If the latter is the case, ... well then I may take my own shot at it, if nothing else, just to mocap myself for some more interesting game anims.
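For what it's worth, a camera-only starting point along those lines could be OpenCV for the webcam feed plus MediaPipe's pose solution (one of the tools that builds on this ecosystem), which runs its models on-device rather than in a datacenter. A sketch, assuming the opencv-python and mediapipe packages are installed; how well it holds up for full mocap is another question:

```python
# Webcam-only pose tracking: OpenCV for capture, MediaPipe Pose for landmarks.
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose

cap = cv2.VideoCapture(0)                     # default webcam
with mp_pose.Pose(model_complexity=1) as pose:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB; OpenCV captures BGR.
        results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.pose_landmarks:
            # 33 landmarks, each with normalized x/y and a relative depth z.
            nose = results.pose_landmarks.landmark[mp_pose.PoseLandmark.NOSE]
            print(f"nose at ({nose.x:.2f}, {nose.y:.2f})")
        cv2.imshow("pose", frame)
        if cv2.waitKey(1) & 0xFF == 27:       # Esc to quit
            break
cap.release()
cv2.destroyAllWindows()
```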

Beyond that, there is a gigantic free dataset of mocap data from Carnegie Mellon University, but jesus h christ, it's a barely documented mess, and it's all raw mocap point-cloud data. Converting it all into something more broadly useful, like FBX, on a standard, root-normalized skeleton... and breaking it down into distinct, specific movements... that'd be a lot of work.
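To be fair, the root-normalization step itself is simple once the raw data has been parsed into joint positions; the parsing, retargeting to a standard skeleton, and FBX export are the hard parts. A sketch, assuming a clip has already been loaded as a NumPy array:

```python
# Root normalization: express every frame relative to the root (hip) joint,
# so the skeleton is pinned to the origin regardless of where the performer
# walked in the capture volume.
import numpy as np

def root_normalize(positions: np.ndarray, root_index: int = 0) -> np.ndarray:
    """positions: (frames, joints, 3) world-space joint positions.
    Returns the same array with the root joint at the origin in every frame."""
    root = positions[:, root_index:root_index + 1, :]   # (frames, 1, 3)
    return positions - root

# e.g. clip = np.load("cmu_clip_xyz.npy")   # hypothetical pre-parsed file
# normalized = root_normalize(clip, root_index=0)
```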

Like, teams-of-people levels of work, if you want an actually easy-to-use library in under 5 years' time.

I did more recently manage to find a paper that had well-formed, cleaner data specific to mocapping karatekas and their techniques... but yeah, generally all that shit is either paywalled or basically a barely structured mess.