this post was submitted on 17 Mar 2026
3 points (71.4% liked)

Perchance - Create a Random Text Generator

1799 readers
19 users here now

⚄︎ Perchance

This is a Lemmy Community for perchance.org, a platform for sharing and creating random text generators.

Feel free to ask for help, share your generators, and start friendly discussions at your leisure :)

This community is mainly for discussions between those who are building generators. For discussions about using generators, especially the popular AI ones, the community-led Casual Perchance forum is likely a more appropriate venue.

See this post for the Complete Guide to Posting Here on the Community!

Rules

1. Please follow the Lemmy.World instance rules.

2. Be kind and friendly.

  • Please be kind to others in this community (and also in general), and remember that for many people Perchance is their first experience with coding. We have members for whom English is not their first language, so please take that into account too :)

3. Be thankful to those who try to help you.

  • If you ask a question and someone has made an effort to help you out, please remember to be thankful! Even if they don't manage to solve your problem, remember that they're spending time out of their day to try to help a stranger :)

4. Only post about stuff related to Perchance.

  • Please only post about Perchance-related topics, such as generators built on it, bugs, and the site itself.

5. Refrain from requesting Prompts for the AI Tools.

  • We would like to ask you to refrain from posting here when you need help specifically with prompting/achieving certain results with the AI plugins (text-to-image-plugin and ai-text-plugin), e.g. "What is a good prompt for X?", "How do I achieve X with Y generator?"
  • See Perchance AI FAQ for FAQ about the AI tools.
  • You can ask for help with prompting at the 'sister' community Casual Perchance, which is for more casual discussions.
  • We will still be helping/answering questions about the plugins as long as it is related to building generators with them.

6. Search through the Community Before Posting.

  • Please search through the community posts here (and on Reddit) before posting, to see if a similar post already exists.

founded 2 years ago
MODERATORS
 

Where is the "negative prompt" in the ai-text-to-image-generator? I've just noticed it disappeared.

all 11 comments
[–] Hexagonal_Druid@lemmy.world 1 points 2 days ago (1 children)

Just add your negative prompt after your positive prompt inside parentheses, like this: (negativePrompt:::ugly, blurry, bad anatomy).

Example:

a beautiful landscape(negativePrompt:::low quality, deformed)(guidanceScale:::11)(resolution:::512x768)

This helps keep all your prompt info in one place.
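
If you're assembling that string in code, a small helper keeps the suffixes consistent. This is a hypothetical sketch in Python: build_prompt and its parameters are made up for illustration, and only the (key:::value) suffix syntax above comes from the comment.

    def build_prompt(prompt, negative=None, guidance_scale=None, resolution=None):
        """Append Perchance-style (key:::value) option suffixes to a prompt string."""
        parts = [prompt]
        if negative:
            parts.append(f"(negativePrompt:::{negative})")
        if guidance_scale is not None:
            parts.append(f"(guidanceScale:::{guidance_scale})")
        if resolution:
            parts.append(f"(resolution:::{resolution})")
        return "".join(parts)

    print(build_prompt("a beautiful landscape",
                       negative="low quality, deformed",
                       guidance_scale=11, resolution="512x768"))
    # a beautiful landscape(negativePrompt:::low quality, deformed)(guidanceScale:::11)(resolution:::512x768)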

[–] MinoriMirariRProductions@lemmy.world 1 points 2 days ago* (last edited 2 days ago)

Technically, negatives should go in first, because negation is supposed to preload before the rest of the prompt. I'm working on advanced negation logic within my formula blocks at the moment; Gemini and a few others tipped me off that the archaic F fire trigger/reinitialization functions will come in handy for Klein. It's been a while since I leveraged them, and essentially, because Klein runs so fast, it makes data tabling fairly powerful as far as super-distillation is concerned. It's hard to explain why reinitialization operations help so much here; maybe because you can basically run the data into post and then re-fire it. Although what makes Klein great for this currently breaks the majority of tactical formula compartmentalization and phasing: all formula formatting considered "tactical", like that originally previewed and concepted with the TF∅X formula formatting/series, is extremely touchy, if not completely broken, for the most part, and where it is broken it's because Klein has the heaviest load-specified sampling subroutines ever conceived. It basically has some low-level compartmentalization of its raw data tables, on top of already having specific prior Flux.1 autonomous sampling/blending routines.

[–] DBaluchi@lemmy.world 3 points 4 days ago (1 children)

The devs switched to a new model. To save money, it does things differently and uses fewer resources per image. Results are now completely hit or miss, so you have to generate 10x more images to get usable ones. Is resource use actually being reduced? Hard to say when I have to generate so many more images because this model ignores much of the prompt data. Seems like cutting off your nose to spite your face.

[–] TURBOOO@lemmy.world 1 points 2 days ago

You mean yesterday?? My images are looking very weird now.

[–] superuser9@lemmy.world 2 points 5 days ago* (last edited 5 days ago) (1 children)

Flux Schnell and Dev both don't take a separate negative prompt anyway, unlike SD.
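
Some context on why: in SD-style pipelines the negative prompt enters through classifier-free guidance (CFG), where the model predicts noise twice, once under the positive conditioning and once under the negative (or empty) conditioning, and blends the two. Schnell and Dev are guidance-distilled, so that second pass doesn't exist for a negative prompt to feed into. A minimal sketch of the blending step, with illustrative names rather than any library's API:

    def cfg_step(noise_cond, noise_neg, guidance_scale):
        # Classifier-free guidance: push the prediction toward the positive
        # prompt and away from the negative one. noise_cond / noise_neg are
        # the model's noise predictions under each conditioning (tensors).
        return noise_neg + guidance_scale * (noise_cond - noise_neg)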

[–] MinoriMirariRProductions@lemmy.world -1 points 4 days ago* (last edited 2 days ago)

We are now using Klein, which is similar to Schnell and built on top of Flux.1 networking ("Flux.2 modeling & super-distillation models").

[–] MinoriMirariRProductions@lemmy.world -1 points 4 days ago* (last edited 2 days ago)

A post on this already exists! Please 🙏... look around at other posts before making new ones. Visit this link to find it, along with more information on Perchance T2I & prompting dev notes, etc.: https://lemmy.world/post/43127973

[–] ccufcc@lemmy.world 0 points 5 days ago (2 children)

It's gone because the negative prompt was not working.

That's true! Since natural-language text encoders are more complex, the negative of whatever is being encoded is rarely the opposite vector.

As in, the model itself (Chroma on Perchance) isn't trained to comprehend negative vectors. Lodestones (the creator of Chroma) never specified, but I assume not.

Negatives are an offshoot of training that sort of worked in CLIP-based models (SD1.5, SDXL, Pony/Illustrious) and carried over into the natural-language model releases out of habit in the community.

It worked in CLIP models because the CLIP encoder is simple. Write 'ice cream' in CLIP and the text-encoding vector will point in roughly the same direction no matter where 'ice cream' sits in the 75-token batch.

Compare that to the many different answers you can get from ChatGPT or Grok containing the words 'ice cream', and you can see how the 512-token batch encoding of the T5 in Chroma, or the Qwen encoder in Klein / Z-Image, varies drastically depending on how common words are arranged in the text.
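
A rough way to check this yourself, as a minimal sketch with Hugging Face transformers: word_vector is a made-up helper, and CLIP's hidden states are still somewhat contextual, so expect a high similarity rather than identical vectors.

    # pip install torch transformers
    import torch
    import torch.nn.functional as F
    from transformers import CLIPTokenizer, CLIPTextModel

    tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")
    model = CLIPTextModel.from_pretrained("openai/clip-vit-base-patch32")

    def word_vector(prompt, word):
        # Hidden-state vector of `word`'s token inside `prompt`.
        inputs = tokenizer(prompt, return_tensors="pt")
        with torch.no_grad():
            hidden = model(**inputs).last_hidden_state[0]  # (seq_len, dim)
        word_id = tokenizer(word, add_special_tokens=False).input_ids[0]
        position = inputs.input_ids[0].tolist().index(word_id)
        return hidden[position]

    a = word_vector("ice cream on a hot day", "ice")
    b = word_vector("a hot day calls for some ice cream", "ice")
    print(F.cosine_similarity(a, b, dim=0).item())  # stays high for CLIP;
    # a T5/Qwen encoding of the same word drifts much more with context.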

[–] Crimson_Frost@lemmy.world 0 points 5 days ago

Quite sad... It's getting harder than ever to create something that doesn't look like either a loli or a bimbo, and both are often naked or nearly so...