rufus

joined 2 years ago
[–] [email protected] 5 points 4 months ago* (last edited 4 months ago) (1 children)

You might want to tick the "NSFW" box for this post....

[–] [email protected] 7 points 6 months ago* (last edited 6 months ago)

If you're interested in finding out, why don't you buy one and try for yourself? They're not that expensive (at least the non-electronic ones)... I hear some people like it. And if you're not fond of the current situation, you should switch things up and try something different anyway.

(Edit: I'd get a cheap one, see if I like it, and then either throw it in the trash or, having learned something, decide if I want some $250 device with all the bells and whistles and buttplug.io integration. But YMMV on that.)

1
submitted 9 months ago* (last edited 9 months ago) by [email protected] to c/[email protected]
 

Does it work well? Which one to choose? The official Matrix site shows 3 that seem maintained:

Does anyone have some insight? I don't want to try all of them.

Edit: I don't need anything super fancy like double puppeting. I just want the data from the several Discord communities I joined available through my Matrix server. And it's just me using it. But it should bridge the rooms properly and include the popular media formats, reactions, etc.

[–] [email protected] 1 points 9 months ago* (last edited 9 months ago)

I've used laptops for more than a decade. Sure, in the early days thermal management wasn't that elaborate. But I haven't seen a laptop in many, many years that doesn't do it with near-perfect accuracy. And it's usually done in hardware, so there isn't really any way for it to fail. I've played games and compiled software for hours with all CPU cores at 100% and fans blasting, at least with my current laptop and the two Thinkpads before it. The first one had really good fans and never hit the limit. The others hit it with an accuracy of maybe 2 or 3 degrees. No software necessary. I'm pretty sure that with the technology of the last 10 years, throttling never fails unless you deliberately mess with it.

But now that I'm thinking about the fans... Maybe if the fan is clogged or has mechanically failed, there is a way... A decent Intel or AMD CPU will still throttle. But without a fan and airflow inside the laptop, other components might get too hot. I'm thinking more of capacitors or the hard disk, which can't defend themselves. The iGPU should be part of the thermal budget of the rest of the processor. Maybe it's handled differently because it doesn't draw that much power and doesn't really contribute to overheating. I'm not sure.

Maybe it's more of a hardware failure: a defective sensor, dust, a loose heat conductor, the thermal paste or the fan? I still can't believe a laptop would enter that mode unless something was wrong with the hardware. But I might be wrong.

[–] [email protected] 0 points 9 months ago* (last edited 9 months ago) (2 children)

Why does it force the processor over the limit in the first place?

I think in every other laptop the CPU just throttles when it gets too hot. Meaning it can never exceed the maximum temperature. I wonder if this is a misunderstanding or if HP actually did away with all of that and designed a laptop that will cook itself.

And it's not even a good design decision to shut down the PC if someone runs a game... Aren't computers meant to run them? Why not automatically lower the framerate by throttling? Why shut down instead?
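What I'd expect the firmware to do instead is something like this toy control loop (all the numbers and names here are made up for illustration, not any vendor's actual logic): when the sensor crosses the limit, step the clock down; when things cool off, creep back up.

```python
# Toy sketch of a thermal throttling loop: reduce clock speed near the
# temperature limit instead of shutting the machine down.

T_LIMIT = 100            # hypothetical junction temperature limit, in °C
F_MAX, F_MIN = 4.0, 0.8  # hypothetical clock range, in GHz

def next_frequency(temp_c: float, freq_ghz: float) -> float:
    if temp_c >= T_LIMIT:
        # over the limit: step the clock down hard
        return max(F_MIN, freq_ghz * 0.8)
    if temp_c < T_LIMIT - 5:
        # comfortably cool: creep back up toward the maximum
        return min(F_MAX, freq_ghz * 1.05)
    return freq_ghz  # inside the hysteresis band: hold

# Simulate a heat spike while gaming: the clock drops, nothing shuts down.
freq = F_MAX
for temp in [70, 95, 101, 103, 99, 90, 80]:
    freq = next_frequency(temp, freq)
print(round(freq, 2))  # → 2.82
```

The effect on a game is exactly "automatically lower the framerate": less clock, fewer frames, but the session survives.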

[–] [email protected] -1 points 9 months ago* (last edited 9 months ago) (7 children)

And they're not even that woke. Afaik they still occasionally eat animals in the 24th century. (Unless they're Vulcan.) Watch The Orville if you want some proper progressive shit 😆

 

"Alice has N brothers and she also has M sisters. How many sisters does Alice’s brother have?"

The problem has a light quiz style and is arguably no challenge for most adults, and probably not even for some children.
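For reference, the intended answer is M + 1: each of Alice's brothers has her M sisters plus Alice herself. A brute-force check (my own sketch, not from the paper):

```python
# "Alice has N brothers and M sisters. How many sisters does Alice's
# brother have?" The family has N boys and M + 1 girls (Alice included),
# so any brother of Alice has M + 1 sisters.

def sisters_of_alices_brother(n_brothers: int, m_sisters: int) -> int:
    return m_sisters + 1  # Alice's sisters plus Alice herself

# Double-check the formula by enumerating the family explicitly.
def check(n: int, m: int) -> bool:
    family = ["girl"] * (m + 1) + ["boy"] * n  # Alice + her sisters + her brothers
    return sisters_of_alices_brother(n, m) == family.count("girl")

assert all(check(n, m) for n in range(1, 5) for m in range(0, 5))
print(sisters_of_alices_brother(3, 2))  # → 3
```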

The scientists posed varying versions of this simple problem to various state-of-the-art LLMs that claim strong reasoning capabilities (GPT-3.5/4/4o, Claude 3 Opus, Gemini, Llama 2/3, Mistral and Mixtral, including the very recent DBRX and Command R+).

They observed a strong collapse of reasoning and inability to answer the simple question as formulated above across most of the tested models, despite claimed strong reasoning capabilities. Notable exceptions are Claude 3 Opus and GPT-4 that occasionally manage to provide correct responses.

This breakdown can be considered dramatic not only because it happens on such a seemingly simple problem, but also because the models tend to express strong overconfidence, reporting their wrong solutions as correct while often providing confabulations to justify the final answer: they mimic a reasoning-like tone but back the nonsensical, wrong answers with equally nonsensical arguments.

[–] [email protected] 1 points 10 months ago* (last edited 10 months ago)

Are you referring to me or BigFig? I'm neither a mile (I'm European, so we use the metric system), nor a mole. If you make me choose an animal, I'd like to be an alpaca. And I'd be willing to do a captcha to prove to you that I'm not a bot.

[–] [email protected] 1 points 10 months ago (7 children)

Thanks for spreading the word. We get these complaints every few weeks. More people need to be educated and move away from these instances to make the Threadiverse a better place.

[–] [email protected] 1 points 10 months ago* (last edited 10 months ago)

Thanks for enlightening me. I have to fact-check this, but occasionally I also consume what they say in random business coaching without questioning it.

Edit: Fact checked. And learned something today. Thx.

[–] [email protected] 0 points 10 months ago* (last edited 10 months ago) (3 children)

Maybe you're more introverted and tend towards learning in an autodidactic way?

Not being like all the other people isn't necessarily a bad thing. Yes, it's difficult to be different. But we should embrace being human and diverse. Everyone learns at their own pace. Some people learn better by watching and imitating, some people like to understand things down to the core and can't just "do this and do that and you're done." And there are different learning styles anyways: Auditory, Visual, Tactile, ...

I just wanted to say you're not alone with that. I also regularly fail to remember dance steps, or what someone just showed me about assembling furniture or doing some task. I can't for the life of me remember driving directions. I'd much rather be handed an instruction manual so I can read it at my own pace. Every time I get what I need and what matches my learning type, I can excel at things, so it's not a lack of intelligence.

And it works, too, if you're taught one-on-one, since you can ask your "instructor" to slow down or skip things you already know. It's just difficult in group scenarios. And I don't think there's a way around speaking up and asking them to show it to you once more. But I think most people should theoretically be able to relate. Other people struggled with maths in school and had things explained to them over and over again, which was super boring to me. But we all grasp different concepts in different amounts of time, and we sometimes need to be taught in the way that's right for us individually.

And a last word on climbing: Getting it almost immediately isn't the important part of the knot. The important part is that you never fail to do it correctly in the years to come. Where I learned climbing, they hand you a scrap piece of old rope so you can practice at home. And the week after, you need to demonstrate that you're able to tie the knot and check it for correctness. I'd been with the (boy) scouts for years, so I could already tie the knot perfectly.

(Edit: "Learning style theories have been criticized by many scholars and researchers. Some psychologists and neuroscientists have questioned the scientific basis for separating out students based on learning style. [...] Many educational psychologists have shown that there is little evidence for the efficacy of most learning style models, and furthermore, that the models often rest on dubious theoretical grounds." Source: Wikipedia)

[–] [email protected] 1 points 10 months ago* (last edited 10 months ago) (4 children)

I think they're using Widevine DRM. And with DRM they can enforce whatever arbitrary policies they like. They set special restrictions for Linux: I think Amazon caps it at 480p, Netflix at 720p, and YouTube allows 4K, or something like that. AFAIK it has little to do with technology; it's just a number that the specific company sets in their configuration.

[–] [email protected] 0 points 11 months ago (4 children)

What's with the Republicans, don't they like their country and democracy any more?

 

From the abstract: "Recent research, such as BitNet, is paving the way for a new era of 1-bit Large Language Models (LLMs). In this work, we introduce a 1-bit LLM variant, namely BitNet b1.58, in which every single parameter (or weight) of the LLM is ternary {-1, 0, 1}."

Would allow larger models with limited resources. However, this isn't a quantization method you can convert models to after the fact. It seems models need to be trained from scratch this way, and so far they've only gone as far as 3B parameters. The paper isn't that long, and it seems they didn't release the models. It builds on the BitNet paper from October 2023.

"the matrix multiplication of BitNet only involves integer addition, which saves orders of energy cost for LLMs." (no floating point matrix multiplication necessary)

"1-bit LLMs have a much lower memory footprint from both a capacity and bandwidth standpoint"
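To illustrate the multiplication-free idea (my own toy sketch, not the paper's code; the absmean rounding rule is how I understand the paper's quantization): weights get scaled by their mean absolute value and rounded to {-1, 0, 1}, and a dot product then reduces to additions and subtractions.

```python
# Toy sketch of ternary ("1.58-bit") weights: quantize to {-1, 0, 1}
# via absmean scaling, then do the dot product without multiplications.

def quantize_ternary(weights):
    scale = sum(abs(w) for w in weights) / len(weights) or 1.0
    ternary = [max(-1, min(1, round(w / scale))) for w in weights]
    return ternary, scale

def ternary_dot(x, w_ternary, scale):
    # Each weight either adds, subtracts, or skips the activation,
    # so the inner loop is pure integer-style addition.
    acc = 0.0
    for xi, wi in zip(x, w_ternary):
        if wi == 1:
            acc += xi
        elif wi == -1:
            acc -= xi
    return acc * scale  # one multiply per output to undo the scaling

w = [0.9, -0.05, -1.2, 0.4]
wq, s = quantize_ternary(w)
print(wq)                                     # → [1, 0, -1, 1]
print(ternary_dot([1.0, 2.0, 3.0, 4.0], wq, s))  # → 1.275
```

The memory claim follows the same way: each weight needs log2(3) ≈ 1.58 bits instead of 16.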

Edit: Update: additional FAQ published
