bazsalanszky

joined 2 years ago
[–] [email protected] 11 points 1 month ago (1 children)

I still intend to work on it, but I can only do it in my free time....and I don’t have much of that right now.

[–] [email protected] 22 points 2 months ago (1 children)

Thanks for the info. I will look into it.

[–] [email protected] 1 point 7 months ago

I've also experienced it for a long time now. I'm not exactly sure what causes this, but I'll try to look into it again.

[–] [email protected] 1 point 7 months ago (1 children)

Yes, it is fixed in the nightly builds and will be included in the next release.

[–] [email protected] 13 points 8 months ago

I think this issue is fixed in this release.

[–] [email protected] 4 points 8 months ago

From what I've seen, it's definitely worth quantizing. I've used Llama 3 8B (fp16) and Llama 3 70B (q2_XS). The 70B version was way better, even with this quantization, and it fits perfectly in 24 GB of VRAM. There's also this comparison showing the quantization options and their benchmark scores:

[Image: benchmark scores for different quantization levels]

Source

To run this particular model, though, you would need about 45 GB of RAM just for the q2_K quant, according to Ollama. I think I could run this with my GPU and offload the rest of the layers to the CPU, but the performance wouldn't be that great (e.g. less than 1 t/s).
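
As a rough sanity check on those numbers: the weight memory scales with parameter count times bits per weight, plus some overhead for the KV cache and runtime buffers. A minimal back-of-envelope sketch (the bit widths and the overhead constant here are illustrative assumptions, not measured values):

```python
# Rough estimate of model memory from parameter count and quantization
# bit width. The overhead term (KV cache, activations, runtime buffers)
# is an assumed ballpark, not a measured value.
def approx_model_gib(params_billion: float, bits_per_weight: float,
                     overhead_gib: float = 2.0) -> float:
    weight_bytes = params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes / 2**30 + overhead_gib

# Illustrative figures for the models mentioned above:
print(f"Llama 3 8B,  fp16 (16 bpw):    ~{approx_model_gib(8, 16):.0f} GiB")
print(f"Llama 3 70B, ~2.1 bpw (q2_XS): ~{approx_model_gib(70, 2.1):.0f} GiB")
print(f"123B model,  ~2.6 bpw (q2_K):  ~{approx_model_gib(123, 2.6):.0f} GiB")
```

That lines up roughly with the 24 GB and ~45 GB figures above; the real numbers depend on the exact quant and the context length you run with.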

[–] [email protected] 2 points 8 months ago

Yes, you can find it here.

 

cross-posted from: https://lemmy.toldi.eu/post/984660

Another day, another model.

Just one day after Meta released their new frontier models, Mistral AI surprised us with a new model, Mistral Large 2.

It's quite a big one with 123B parameters, so I'm not sure if I would be able to run it at all. However, based on their numbers, it seems to come close to GPT-4o. They claim to be on par with GPT-4o, Claude 3 Opus, and the fresh Llama 3 405B on coding-related tasks.

[Image: benchmarks]

It's multilingual, and from what they said in their blog post, it was also trained on a large coding dataset covering 80+ programming languages. They also claim that it is "trained to acknowledge when it cannot find solutions or does not have sufficient information to provide a confident answer".

On the licensing side, it's free for research and non-commercial applications, but you have to pay them for commercial use.

[–] [email protected] 1 point 8 months ago (2 children)

Are you using Mistral 7B?

I also really like that model and their fine-tunes. If licensing is a concern, it's definitely a great choice.

Mistral also has a new model, Mistral Nemo. I haven't tried it myself, but I heard it's quite good. It's also licensed under Apache 2.0 as far as I know.

 

Meta has released Llama 3.1. It seems to be a significant improvement over an already quite good model. It is now multilingual, has a 128k context window, has some sort of tool-calling support and, overall, performs better on benchmarks than its predecessor.

With this new version, they also released their 405B parameter version, along with the updated 70B and 8B versions.

I've been using the 3.0 version and was already satisfied, so I'm excited to try this.
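
As a rough idea of what the tool support mentioned above could look like in practice, here is a minimal sketch using the Ollama Python client. The model tag, the get_current_weather tool and its schema are illustrative assumptions, not something taken from the release itself:

```python
import ollama

# Hypothetical tool definition: the model can request a call to it, but
# actually executing the function and feeding the result back in a
# follow-up message is left to the application.
tools = [{
    "type": "function",
    "function": {
        "name": "get_current_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = ollama.chat(
    model="llama3.1",  # assumes the model has already been pulled locally
    messages=[{"role": "user", "content": "What's the weather in Budapest?"}],
    tools=tools,
)

# If the model decides to use the tool, the request (name and arguments)
# shows up in the returned message; otherwise it answers in plain text.
print(response["message"])
```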

[–] [email protected] 0 points 9 months ago (1 children)

Currently, I only have a free account there. I tried Hydroxide first, and I had no problem logging in. I was also able to fetch some emails. I will try hydroxide-push as well later.

[–] [email protected] 0 points 9 months ago (3 children)

I haven't heard of Hydroxide before; thank you for highlighting it! Just one question: Does it also require a premium account like the official bridge, or is it also available for free accounts?

4
submitted 1 year ago* (last edited 1 year ago) by [email protected] to c/[email protected]
 

Hello everyone!

I have some good news! Eternity has finally been added to the main F-Droid repo. I've managed to get reproducible builds working, so this version is the same as the one on Codeberg (but verified by F-Droid).
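
For anyone who wants to check this themselves, the point of a reproducible build is that the APK published on F-Droid and the one on Codeberg should end up byte-for-byte identical. A minimal sketch of comparing the two (the file names are just placeholders):

```python
import hashlib

def sha256sum(path: str) -> str:
    """Return the SHA-256 hex digest of a file."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Placeholder file names for the APK downloaded from F-Droid and the one
# from the Codeberg release page. With a reproducible build, the two
# digests should match.
fdroid_apk = sha256sum("eternity-fdroid.apk")
codeberg_apk = sha256sum("eternity-codeberg.apk")
print("identical" if fdroid_apk == codeberg_apk else "different")
```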

2
submitted 1 year ago* (last edited 1 year ago) by [email protected] to c/[email protected]
 

Hello everyone!

It's been some time since our last update. As the version number indicates, this is primarily a bugfix update; hence, it doesn’t bring the "big new feature" I had hoped to introduce. The most important change here is that this version of Eternity will be compatible with the upcoming Lemmy version 0.19.

It's worth noting that upon upgrading to this version, you may need to log out and then log back into your account, based on my experience.

For the next release, I want to address more of the common issues found in our bug tracker. So, there may be another minor update shortly, but I can't make any promises at the moment. I've also made some progress with multi-community support and I plan to introduce modding tools as well.

In the meantime, thank you so much for your continued support.

 

Hello everyone!

I've finally released Eternity on the Google Play Store. Note that this is not compatible with the existing releases, as this build has a different signature.

Thanks for your support and looking forward to your feedback!

0
submitted 2 years ago* (last edited 2 years ago) by [email protected] to c/[email protected]
 

Hello everyone!

You may have noticed that Eternity's icon could use a bit of a facelift (I just hacked it together to have something). Well, it's time for a change and I'd like to get all you talented folks involved!

So, I want to host a competition where you can submit your designs for a new app icon. Once we have all the submissions, we can vote as a community to pick our new icon!

Some details to keep in mind:

Deadline: I'll be accepting submissions until the 8th of September, 2023, 18:00 CET. I might extend this deadline if we have too few submissions.

Format: Submissions should be in SVG or PNG format, with a 1:1 aspect ratio.

Resolution: Please ensure your design is at least 512x512 pixels if you're going with PNG.

Monochrome Version (Optional): It'd be great to see a monochrome version of your design too (for Material You icons).

Design: Your design should be colorful, fun, and engaging. The original UFO spaceship could be a source of inspiration, or elements of the fediverse or Lemmy. Of course, if you've got another creative idea, go wild!

Once the submissions close, we'll have a public vote to pick the winner (similar to what we did with the app name).

Thanks in advance for participating.

Happy designing!

 

Hello everyone!

I'm excited to bring some news to our wonderful community today. After an engaging voting process on the voting thread, we as a community have decided to change the name from Infinity for Lemmy to Eternity. 🎉 I am grateful to everyone who participated and voiced their opinions. It was great to see so much involvement!

While [email protected] will be our new gathering space, I'd like to let you know that the original [email protected] won't be closing its doors immediately. This way, everyone has enough time to transition comfortably and get accustomed to our new home.

Lastly, remember, while names may change, our spirit and camaraderie remain eternal. I cherish our roots with "Infinity for Reddit" and am excited about the journey ahead in "Eternity". Let's collaborate, share, and continue building on this vibrant community.

Have fun and see you all in Eternity! 🚀🌌

0
Infinity for Lemmy (codeberg.org)
submitted 2 years ago* (last edited 2 years ago) by [email protected] to c/[email protected]
 

Hello everyone,

I'm not sure this is the right community for this, but I want to announce my new project: Infinity for Lemmy.

Basically, this is a fork of the Infinity for Reddit application, modified to get it to work with Lemmy.

But I must remind you to temper your expectations as this project is still in its infancy. It has some basic features already in place, but there’s a lot more to be accomplished.

What Infinity for Lemmy Can Currently Do:

•	Browse posts
•	Handle multiple accounts
•	Upvote and downvote posts

What Infinity for Lemmy Cannot Do (Yet):

•	Load comments
•	Handle subscriptions
•	Search for posts, users, or communities
•	Write posts or comments
•	Send private messages
•	And many more

So, while Infinity for Lemmy is not a feature-rich client in its current state, it's a promising start.

For those interested, here’s the link to my git repository: https://codeberg.org/Bazsalanszky/Infinity-For-Lemmy

I invite everyone to contribute in whatever capacity they can, whether it be through coding, reporting bugs, suggesting features, or even just providing feedback and insights. Let’s shape the future of Infinity for Lemmy together!

Screenshot 1

Screenshot 2

Screenshot 3