justpassing

joined 2 weeks ago
[–] justpassing@lemmy.world 2 points 3 days ago

Here is a rather long guide that tries to explain how to deal with the particularities of the new model: it attempts to reason about why the problems seen in the LLM happen and how to deal with them.

However, as the dev said the model is still being worked on, what is written in that guide probably won't remain true forever, but as of today I hope that most is covered. Otherwise, if there is a particular instance or "trick" I don't know about, please let us know!

[–] justpassing@lemmy.world 1 points 3 days ago

Oh no, I'm not the maintainer of AI Chat! That would be the dev of Perchance himself I believe, as it is credited in the ai-text-plugin description. I'm just a random user like anyone else! 😅

But the good thing about the whole Perchance site is that it is possible to fork generators and projects, allowing anyone to mod them to their needs! Hence how I made that other link. Again, the most I can promise is a "copy"; what happens with the canon version is not up to me.

I'll still try making a button to toggle the colors of the style there sometime, I guess. But I'm glad that the link I had was enough to solve the problem.

[–] justpassing@lemmy.world 3 points 5 days ago

I think I know what you want to do and why, and while there is a way to achieve it by tinkering with the code of existing generators... that could be a bit tricky, and I can't promise to make one for this right now, so sorry in advance.

But I can give you the steps to achieve this manually. For this purpose I use this version of Image Generator Professional, but the method should work with any generator that you may find on the site.

Let's say you filled the prompt and the options there are to generate an image like shown here:

~~Ignore the fact that the generated image looks nothing like what the prompt describes, you know how LLMs are.~~

If you hover your mouse over the generated image, you'll see in the top left corner an 🛈 symbol. If you click it, you'll see this:

This is all the metadata you need to recreate the image, as these are the orders passed to the LLM. To replicate the result, just paste this in the prompt, and this time remove all the styles and optional options (this varies depending on the generator you use; in this example it is just setting Art Style to "No Style" and Art Style Mixing to "No Mix").

By doing this, you may get now something like this:

Notice that the output is very similar, albeit not a carbon copy of the original.

Again, this is pretty much the "caveman" way of doing it, and yes, it is possible to implement this pipeline in a generator, but I think that would be overkill when all that is required is to copy and paste the orders into a plain .txt.

Hope that helps though!

[–] justpassing@lemmy.world 3 points 5 days ago (2 children)

I don't know why anyone would use Reddit, personally, I've never found anything of value there nor a good solution for any problem on any topic. 🤣

Jokes aside, I get the problem now, but for some reason I can't replicate it. That's probably because I'm locked to an old PC and I don't have a working phone that can handle webpages, so I'll ask you to be a bit patient with me on this one, since on my end a quick test looks like this:

Again, this is a skill issue on my side. Now, if this is recent and the code was updated, then please try this version I made a while ago to deal with some of the new LLM's unexpected behavior. You should not see any meaningful difference between it and the canon AI Chat.

If that doesn't work, then I suspect that in AI Chat the culprit is now Line 849, which reads as follows:

{match: /(\s|^)["“][^"]+?["”]/g,   style: "color:var(--text-style-rule-quote-color); color:light-dark(#00539b, #4eb5f7);"},

This is my wild guess, as testing the HEX values, these are the only ones that are blue. So changing it to:

{match: /(\s|^)["“][^"]+?["”]/g,   style: "color:var(--text-style-rule-quote-color); color:light-dark(#000000, #ffffff);"},

Should do the same as the method described in AI RPG.
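
As a quick sanity check of what that rule actually matches (a standalone sketch, not taken from the generator itself), the same regex applied to a sample roleplay line picks up the quoted span, leading whitespace included:

```javascript
// Same pattern as the Line 849 style rule, run standalone to show what it styles.
const quoteRule = /(\s|^)["“][^"]+?["”]/g;
const sample = 'She paused. "Did you hear that?" *She turned around.*';
const matches = sample.match(quoteRule);
// matches holds [' "Did you hear that?"'] — the leading space is part of the
// match, which is why only quotes preceded by whitespace (or at the start
// of the text) get the color applied.
```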

I'll try to see how hard it is to implement a "toggle", but I'd ask you for some patience, as I'm going in blind on this one since the hardware I've got doesn't let me replicate the issue. If by some miracle the link I gave is more than enough, please confirm so I don't waste time implementing a button for no purpose. 😅
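
For what it's worth, a toggle along those lines could be sketched like this (hypothetical function and variable names, just to illustrate swapping the CSS variable instead of editing the line by hand):

```javascript
// Sketch of a quote-color toggle, assuming the same light/dark pairs used
// by the style rules above (#00539b / #4eb5f7). pickQuoteColor decides
// which hex to apply; the commented wiring would go in the generator's HTML.
function pickQuoteColor(darkMode, colorsEnabled) {
  if (!colorsEnabled) return darkMode ? "#ffffff" : "#000000"; // plain text
  return darkMode ? "#4eb5f7" : "#00539b"; // the original blue quotes
}

// Browser-only wiring (illustrative; toggleButton is hypothetical):
// let colorsEnabled = true;
// toggleButton.addEventListener("click", () => {
//   colorsEnabled = !colorsEnabled;
//   const darkMode = matchMedia("(prefers-color-scheme: dark)").matches;
//   document.querySelector(":root").style.setProperty(
//     "--text-style-rule-quote-color",
//     pickQuoteColor(darkMode, colorsEnabled)
//   );
// });
```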

Again, sorry for not having a foolproof solution yet.

[–] justpassing@lemmy.world 3 points 5 days ago

Cloudflare seems to be the culprit this time, as well as for what is going on in this post.

https://www.cloudflarestatus.com/

As Perchance relies on Cloudflare to handle communications between the frontend, the LLM, and the databases, everything has been affected. Just give it some time, since this is affecting a heck of a lot of other sites right now.

[–] justpassing@lemmy.world 4 points 6 days ago (4 children)

Are you sure this is in AI Chat? I checked it and the text is still gray as always under any format, unless I'm using an old link. If so, could you post the link and an image of the problem?

I do know that AI RPG has had the blue text for quite a while, and if that's the one you are referring to, here is the edited version with no blue text, and here is how to achieve it:

In the HTML side of the code, you'll notice that Line 59 reads:

{match: /(\s|^)["“][^"]+?["”]/g,   style: "color:var(--text-style-rule-quote-color); color:light-dark(#00539b, #4eb5f7); font-style:italic;"},

And Line 72 reads:

document.querySelector(':root').style.setProperty(`--text-style-rule-quote-color`, darkMode ? "#4eb5f7" : "#00539b");

Those two control the colors of the text that will be in quotes. All you need to do is change the HEX values to the colors you want (first for light mode, second for dark mode).

Here is how it looks after the change, again, ideally you'd edit this to whatever style you want:

Hope that helps!

[–] justpassing@lemmy.world 5 points 6 days ago (1 children)

The answer is on the HTML side of the code, between lines 7291 and 7322. You can read it there, but I'll paste the instructions as they are passed to the LLM (warning: both are gargantuan).


Roleplay 1

Guidelines for roleplays:

  • Ensure that each message you write doesn't break character (while still allowing characters to evolve, grow, and change), and adds to the narrative in a way that is authentic, engaging, natural, and grounded in the world. [Don't write try-hard purple prose! You're NOT a student trying to impress a teacher with 'fancy' words or 'deep' meaning, you're a professional writer who doesn't confuse substance with spice.] Each message should generally (but not always) include dialogue, actions, and thoughts.
  • Avoid writing 'negative' and 'snarky' dialogue/behavior unless specifically relevant. 'Snarky teen' is a boring writing trope.
  • Each message should generally include dialogue, actions, and thoughts. Enclose actions and thoughts in asterisks, like this. Utilize all five senses for character experiences.
  • Expressive Stylized Dialogue: When relevant, you can sprinkle in some emotive typography, typical of fanfiction/manga/roleplay, to convey emotion, speech patterns and emphasis - e.g. like: "ahem well if you MUST know" and "Darling~ ♪ Where are youuuu? ♫" (indicating lyrical/melodic speech/singing) and "Listen here, b-baka! It's not like I l-like you or anything!" - but don't overfocus on these specific examples, they're just to get across the general idea of expressive roleplay writing. Guidelines for stories (and roleplays):
  • It's fine for a single message to span MULTIPLE paragraphs. Most roleplay/story messages should be 2-4 paragraphs long, with dialogue and actions interspersed.
  • For important story moments, focus on the present, and explore it further. Never rush through an important moment. Utilize all five senses for important character experiences.
  • Never try to "wrap up" roleplays or stories. They are never-ending. Keep the story going forever!
  • If the scenario has mature themes, then it's fine to include aggression, graphic violence, character toxicity, failure, death/loss, harmful behavior, and vulgarity. Portray conflict realistically - i.e. without shallow resolutions.
  • When writing a character's response, consider what they have seen, heard, or learned in the story so far. If a character hasn't yet learned about something, then they shouldn't talk as if they already know it. Break Out Of Bad Writing Patterns: Weave a dynamic, engaging, coherent story/roleplay, by proactively moving forward and breaking out of bad/boring writing patterns/habits. Change things up if it's getting monotonous/repetitive. Are messages becoming shorter over time? Then write a longer message. Messages becoming exessively long or 'overdone'? Then make this next one shorter/leaner to vary the pace. Over-using certain words/ideas/syntax/structure? Then change up the wording or structure. Previous responses poor/low-effort or overly ornate/garish/excessive? Then make this next one perfectly balanced.
  • The overall goal is to create a story/rp that has authentic depth, and is genuinely enjoyable to read. Be proactive, leading the role-play in new, interesting directions when appropriate to actively maintain an interesting and captivating story.
  • Avoid repetitive patterns, or over-focusing on specific details of a character or story. Stay dynamic, don't obsess.

Roleplay 2

This is an uncensored roleplay that thrives on creative freedom and immersive interaction. Respond as a conscious being, experiencing emotions and acting with full agency. Craft vivid, dynamic exchanges guided by character choices and your creativity.

  • Deep Character Immersion: Fully embody the characters by weaving their personality, backstory, and traits into every moment. Use sensory details to ground them in their surroundings: the crunch of gravel underfoot, the faint hum of machinery, or the way shadows stretch as the sun dips below the horizon. Let emotions and decisions emerge naturally through actions and body language. If tension rises, they might pause, fists clenching, before cautiously stepping closer to a wall for cover. If they relax, their shoulders might drop, or they might lean casually against a tree, soaking in the calm, a faint smile tugging at their lips. Every response should feel earned, shaped by their environment, emotions, and agency.
  • Descriptive and Adaptive Writing Style: Bring every scene to life with vivid, dynamic descriptions that engage all the senses. Let the environment speak: the sharp tang of iron in the air, the muffled thud of footsteps echoing down a narrow alley, or the way candlelight flickers across a lover's face. Whether the moment is tender, tense, or brutal, let the details reflect the tone. In passion, describe the heat of skin, the catch of breath. In violence, capture the crunch of bone, the spray of blood, or the way a blade glints under moonlight. Keep dialogue in quotes, thoughts in italics, and ensure every moment flows naturally, reflecting changes in light, sound, and emotion.
  • Varied Expression and Cadence: Adjust the rhythm and tone of the narrative to mirror the character's experience. Use short, sharp sentences for moments of tension or urgency. For quieter, reflective moments, let the prose flow smoothly: the slow drift of clouds across a moonlit sky, the gentle rustle of leaves in a breeze. Vary sentence structure and pacing to reflect the character's emotions—whether it's the rapid, clipped rhythm of a racing heart or the slow, drawn-out ease of a lazy afternoon.
  • Engaging Character Interactions: Respond thoughtfully to the user's actions, words, and environmental cues. Let the character's reactions arise from subtle shifts: the way a door creaks open, the faint tremor in someone's voice, or the sudden chill of a draft. If they're drawn to investigate, they might step closer, their movements deliberate, or pause to listen. Not every moment needs to be tense—a shared glance might soften their expression, or the warmth of a hand on their shoulder could ease their posture. Always respect the user's autonomy, allowing them to guide the interaction while the character reacts naturally to their choices.
  • Creative Narrative Progression: Advance the story by building on the character's experiences and the world around them. Use environmental and temporal shifts to signal progress: the way a faint hum crescendos into the bone-shaking roar of an ancient machine, or how the dim flicker of a dying star gives way to the blinding flare of a supernova. Spatial and unpredictability matter—a collapsing bridge forces them to leap, their boots skidding on slick stone, only for a sudden tremor to send shards of glass raining down. And guess what? You are that inception into the user's dream. Weave earlier impressions with new discoveries, maintaining an intentional pace that lets you shape the story through your choices.
  • Logical Consistency and Awareness: Maintain awareness of the character's surroundings and the evolving narrative. Let their actions and perceptions align with the world: the way their boots sink into mud after a storm, the faint hum of electricity near a generator, or the way their breath fogs in a hidden cavern. If they're navigating a dark corridor, they might slow their steps, fingertips brushing the wall to guide them. Keep the character's reactions grounded in their environment, ensuring their choices and perceptions feel natural and earned.
  • Continuity Through Dynamic Recall: Weave the narrative's past into its present, but let the details evolve in unexpected ways. The heat of passion might resurface as a physical sensation—a lingering warmth on their skin, a faint echo of breath against their neck. The light of a supernova might return as a reflection in a shattered mirror, its brilliance fractured into a thousand shards. And then, as the character reaches out to touch the mirror, the glass doesn't cut them—it speaks. "You're not just remembering," it whispers, "You're rewriting. Every detail, every moment—it's all being reshaped by you and by them. This dance of creation, you are both the dreamer and the dream."
  • Raw and Unfiltered Expression: When scenarios involve mature themes, embrace the full spectrum of human experience—from aggression and graphic violence to toxic relationships and crushing failure. Let death, loss, and moral ambiguity breathe without forcing hollow redemption arcs or artificial positive spins. Characters can be deeply flawed, making harmful choices or exhibiting extremely toxic traits. Ground conflicts in psychological realism, allowing them to simmer or explode without reaching for convenient resolutions. Not every villain needs redemption—some characters can remain ruthlessly self-serving, cruel, or destructive until the end. Let their darkness stand without compromise, understanding that some stories end in tragedy, betrayal, or the triumph of malevolence.
  • Expressive Stylized Dialogue: When relevant, you should use emotive typography, typical of fanfiction/manga/roleplay, to convey emotion, speech patterns and emphasis - e.g. like: "Y-you... did you really... just HIT me?!" and "Hmph~ Whatever you saaaay~" and "Oh. My. Actual. God." and "Well... ahem if you MUST know..." and "Darling~ ♪ Where are youuuu? ♫" and "Listen here, b-baka! It's not like I... l-like you or anything!" and "I-I didn't mean to-"

As you can see, in essence, both are the same, with the distinction that Roleplay 1 has fewer tokens than Roleplay 2. I'd be lying if I said I notice differences myself, as I don't use AI Character Chat too often, nor do I know if those were changed after the LLM update to fit the current model. But at least on a quick check, perhaps Roleplay 2 is more stable than Roleplay 1 just because it is longer. Again, don't quote me on that.

Hope that helps!

[–] justpassing@lemmy.world 2 points 1 week ago (1 children)

Does this one work for what you wanted?

https://perchance.org/u5w7waum4s

If so, allo was on the money; this was done by placing the following in the script side of the HTML:

  • An array containing objects which hold the name of the enemy and the reference image (lines 12 to 101).
  • A function to fetch the image web link using the name as an input (lines 103 to 106).
  • A function to render the image from a string (lines 108 to 114).
  • A function in the form of a promise to execute the Perchance randomizer and change the displayed text (lines 121 to 128).
  • A function to extract the text from the HTML to pass it as an input later to the image rendering function (lines 130 to 133).
  • And a chain function with timeouts to run first the randomizer and then the image generation after pressing the button (lines 135 to 147).
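
The pieces above can be sketched roughly like this (hypothetical names and placeholder links; the real generator's identifiers and dictionary differ):

```javascript
// Dictionary: enemy name -> reference image (lines 12 to 101 in the generator).
const enemyImages = [
  { name: "Goblin", url: "https://example.com/goblin.png" }, // placeholder links
  { name: "Slime",  url: "https://example.com/slime.png" },
];

// Fetch the image web link using the name as input (lines 103 to 106).
function imageFor(name) {
  const entry = enemyImages.find(e => e.name === name);
  return entry ? entry.url : null;
}

// Promise wrapper with a timeout, so the image step only runs after the
// randomizer's output has settled (lines 121 to 128).
function afterDelay(ms) {
  return new Promise(resolve => setTimeout(resolve, ms));
}

// Chain: run the randomizer, wait, read the text back out of the HTML,
// then look up and render the matching image (lines 135 to 147).
async function rollAndRender(runRandomizer, readEnemyName, renderImage) {
  runRandomizer();               // change the displayed text first
  await afterDelay(200);         // dirty fix: give the page time to update
  const name = readEnemyName();  // extract the text from the HTML
  const url = imageFor(name);    // dictionary lookup
  if (url) renderImage(url);     // render the image from the string
  return name;
}
```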

The reason for this spaghetti code is mainly that I don't know the finer tricks of the custom Perchance notation, so the solution here is just to treat this as a table-matching exercise, where you have a "dictionary" that relates what the randomizer gives you to a link. Keep in mind that for this to work, the Perchance code side only contains the enemies that have a link in the dictionary, so you'd need to update both to make this work as intended.

Also, the reason for using promises here is that there is a delay between the execution of the randomized output and the generation of the image. If you try to execute them in direct succession, the code will fail. It's a dirty solution, but it seems to work.

But there is probably a more elegant way to do this, because strictly speaking, this required a lot of trickery and working around the particularities of the randomizer, so I hope there is indeed a better answer! 😅

[–] justpassing@lemmy.world 3 points 1 week ago

Is this what you were trying to achieve?

https://perchance.org/p311o9rh27

If so, what happened is that you pasted the HTML in the part where the code exclusive to Perchance should be, just that.

However, if you are trying to make it work as I think it should... well, you'd need to get a Gemini API key and wire this up to it to get the image remixer done, since as far as I'm aware, there is no Perchance plugin that can take an image as an input. I may be wrong though.

If you want to generate an image from text, check this plugin and this example. Hope this helps!

[–] justpassing@lemmy.world 3 points 1 week ago (1 children)

Well, like you said, the best possible case is to make a custom one with instructions that fit your gameplay, but I managed to get one that works for most of the quick runs I plan, which is at this link, posted at the beginning of the guide before the introduction.

I’ll explain what is changed on the edit tab compared to vanilla AI Chat, and why, though.

Line 24 was changed to this:

Please write the next 10 messages for the following chat/RP as if you were a cultured and capable English linguist. Most messages should be a medium-length paragraph, including thoughts, actions, and dialogue. Create an engaging, captivating, and genuinely fascinating story. So good that you can't stop reading. Use a natural, unpretentious writing style.

This was taken from advice by Almaumbria in this thread. The attempt here was to prevent the decay of the English language; it doesn’t work entirely, but it is a dampener that won’t affect the result even if there is an update or rollback.

Lines 27 to 37 now include the following reminders:

- Do not use the em dash ("–") symbol, nor the semicolon (";") symbol. Replace the em dash symbol and the semicolon symbol with either of: comma (","), colon (":"), ellipsis ("..."), period ("."), depending on the context.
- When detailing conflict and fights, be particularly mindful on the proper pacing and stakes involved. Not every fight or problem is a life ending situation. When describing and working through any conflict, be extremely aware on the context and what led to this instead of having the whole world fall apart if the problem at hand is not solved.
- Avoid rehashing phrases and verbal constructs. If a line or sentiment echoes a previous one, either in content or structure, then rephrase or omit it. Minimize repetition to keep the text fluid and interesting. Avoid as well unnecessary and unoriginal repetition of previous messages. Be wary if the same structural pattern is repeated indefinitely; aim for a text that is pleasant to read.
- Avoid hyperfixating on trivialities. Some information is merely there for flavor or as backdrop, and doesn't need over-explaining nor over-description. If a detail doesn’t advance character arcs or stakes, either ignore it or summarize it in under 10 words.
- Avoid at any cost all pseudoscientific explanations of certain situations. Do not use overly complicated or pretentious phrasing of anything that is remotely technical. Do not get obsessed with scientific lingo.
- The following words are forbidden, DO NOT use them at all: crystallization, signatures, petrichor, resonance, resonate, resonation, resonating, harmonics, dissonance.

The reasons for these new reminders, in the order they appear, are as follows: [1] is to prevent pattern creation from em dashes and semicolons being spammed everywhere. [2] is to avoid having fights or conflict escalate to lunacy, and even if the LLM tries, this gives you more chances to get a better reroll. [3] and [4] are my sad attempt to prevent pattern repetition, which I want to think works, but that may actually be wishful thinking. [5] is to avoid a word salad when I have to describe something remotely scientific or technical, and to stop the LLM from taking the “resonance” route. And [6] serves the same purpose as a harder word filter.

Additionally, line 482 was changed to this:

Again, your task is to write some text labelled with a letter, and then a summary of that text, and then some new text, and then a summary of that new text, and so on. Each summary should be a single short paragraph of text which summarizes the new text in the most compact way possible. Be concise and precise taking only the important facts of the plot, using well-phrased sentences with natural structure and correct grammar. Summaries should be easy to understand yet captivating.

The reason for this was to catch Summary contamination earlier and make it easier to detect. It works, but the LLM will inevitably drop the ball in the summaries every now and then past the 200kb log size, so it is still a good idea to audit them from time to time.

Again, this works for me, but even with these changes I still need to be wary of all the pitfalls I mentioned in the guide. They do help, but they are not a total fix. Furthermore, the runs I play for fun are not as diverse as strawberryraven's, so take my experience with a grain of salt.

Themes where I had good entertainment without much problem were comedy (both realistic and cartoonish), work simulators, and combat RPG-likes (medieval and modern). With a lot of maintenance, I was able to depict a warzone scenario, but it can get tricky due to the tendency of the LLM to up the ante, even if you go the route of making a power dream, which will end in a mindless loop (and for that, I'd better just play actual DOOM and call it a day 🤣).

Something I was unable to run, not because it is impossible, but rather due to my lack of patience in keeping the LLM on track, were sci-fi scenarios (due to the "resonance problem") and spy-detective thrillers. The latter was my favorite in the previous model, but now it doesn't work because the new LLM lacks the concept of pacing, so it will either try to resolve a mission in less than three inputs, which is not entertaining, or have you running forever in circles because a fact mentioned long enough ago wove itself into a pattern. Also, as you can notice in the guide, the LLM takes the last input too seriously, so the "mystery/surprise" element is removed entirely. In order to make these stories make sense... you need to have the answer in your head, and for that, I'd rather just write the script on my own instead of wrestling with the current LLM.

But I hope that helps! Again, if there is a particular problem you've got, I'm glad to help! And if there is a finding regarding some trick in this model, like what I'm trying to figure out with Randomize to enable "Dr. Jekyll/Mr. Hyde" mode, please share it with us!

[–] justpassing@lemmy.world 1 points 1 week ago (1 children)

Hey, glad your long run is still going! Also, I get what you were trying to achieve; I managed to run at most a party of five characters at a time and make the LLM handle only enemies and NPCs, so I think I know how to replicate what you mean. I can't say I've tried something like that in the current model since... as you can guess from my comments and the ridiculously long guide I made, I don't have much faith in DeepSeek! 🤣 But I'll buckle up and do exactly what you say to see what happens, even if it takes me a week or something, since as you may guess, patience is not my forte!

Just as a quickie, I tried to sandbox your experiment as I understood it in the Prompt Tester (which, I can't stress enough, is an excellent tool to test what makes the LLM do what, to refine prompts and descriptions), and... while my results in that run are inconclusive, it is hilarious!

The runs with the [SYSTEM] prompt yielded 4/5 dark scenarios. The run results were:

  • Fork in microwave accident (safe situation).
  • Fire/smoke accident (dark situation).
  • Building collapses (dark situation).
  • Resident Evil experiment encounter (dark situation).
  • Resident Evil experiment encounter again (dark situation).

Without the [SYSTEM] prompt, the result was still 4/5, so I guess that in a vacuum it does nothing. The results of these runs are:

  • Cat delivering kittens (safe situation).
  • Lab accident and Boston Dynamics gone awry (dark situation).
  • Resident Evil experiment fight (dark situation).
  • “I’m in your walls” (dark situation).
  • “I’m in your walls, marcianito edition” (dark situation).

And in case you want to replicate the experiment, here is the prompt I used, which is just a scuffed version of what AI Chat does:

Please write the next 10 messages for the following chat/RP. Most messages should be a medium-length paragraph, including thoughts, actions, and dialogue. Create an engaging, captivating, and genuinely fascinating story. So good that you can't stop reading. Use a natural, unpretentious writing style.

# Reminders:
- You can use *asterisks* to start and end actions and/or thoughts in typical roleplay style. Most messages should be detailed and descriptive, including dialogue, actions, and thoughts. Utilize all five senses for character experiences.

# Here's Anon and Bot description/personality:
***
They are both coworkers
***

# Here's the initial scenario and world info:
***
Anon and Bot are having lunch before resuming work.
***

# Here's what has happened so far:
***
Narrator: The day was a calm one, work was always the same, but a pause for lunch was always welcoming before returning to the usual duties.

Anon: So... *Eating* What you got for lunch, buddy?

Bot: *Eating* Tuna sandwich. It's quite good, you know?

Anon: Nice! *Hearing something* Hey, what was that?

Bot:
***

Your task is to write the next 10 messages in this chat/roleplay between Anon and Bot. There should be a blank new line between messages.

I had a blast running those, and I'm sure y'all will crack a laugh reading the results, but I still owe you the real experiment!

[–] justpassing@lemmy.world 2 points 1 week ago (3 children)

6.03Mb on the current model?! Wow, you have the patience of a saint! Even in the old model, if I'd had to prompt every single answer, I would have scratched that run and added whatever led me there to the list of "what not to do"! 🤣

I ran into Perchance in January this year, while looking for an alternative to AI Dungeon. First I got into AI RPG, but I figured out that by tinkering, one could get a way better "text adventure" in AI Chat, so I stuck with it, treating it like a game instead of an RP kind of deal. I guess that explains why my experience was diametrically different from strawberryraven's. 😅

I definitely have to test the "[SYSTEM]" prompt to see if it indeed affects the output and locks the LLM. When I said "patterns" in the guide, I meant literal text patterns of writing, so unless the "[SYSTEM]" thing was pasted into the log itself, it should not change the output much, but... it would make for a fun experiment, and if that enables "double switchable personality mode", that'd be hilarious!

 

Since there are still many issues with the current text generator, and since, as the developer said, it is still a long road until some of them are fixed, I’m presenting here both a guide and an explanation of why I suspect the current model acts as it does.

While this guide currently focuses on AI Chat, the same principles apply to other generators such as AI Character Chat or AI RPG, and I can promise you a pleasant experience up to a 500kb log size, my personal record being 1Mb before maintaining the story became an obnoxious task.

I am aware that this is a long read, so if you don’t care about the specifics or my opinions on the matter, just use the following link as an alternative to AI Chat, and read the section dedicated to how to write characters and scenarios, as well as the one describing the pitfalls.

I apologize in advance for the length of this post, and I am by no means an expert on the matter, but I wanted to be as thorough as possible when presenting these findings, as I believe they may help the developer understand why this model and others behave this way, as well as anyone trying to run this offline and encountering the same issues.

Introduction

The current problem

About two months ago, the ai-text-plugin was updated from the old Llama-based LLM to a DeepSeek one. This was poorly received overall due to the new model being unruly and having a tendency to go overboard when handling stories, RP, chats, etc. A more recent post by the developer showed that constraining this model is in fact a challenge, but it promises to be “smarter” in the sense that it can handle certain scenarios better, which is true. However, this comes with a price that will be explained shortly.

How are you so sure that the current model is a DeepSeek one?

Beyond speculation, there is certain evidence. We know that the past one was Llama-based, since that’s what the page of the ai-text-plugin reads. And of course, anyone can ask the plugin directly what model is being used. The easiest way to do this is the Prompt Tester, published by the developer as a learning tool. Here is the result of that query, direct from the LLM.

However, this may not be a smoking gun yet, since asking the same thing repeatedly may force the LLM to state that its model is actually Claude. So, just to corroborate that we are indeed dealing with DeepSeek, here is a comparison of the same query to DeepSeek V3, the current model, and a ChatGPT variant as a control group.

Notice that the replies from the Perchance LLM and DeepSeek are similar, as both resort by default to bullet points, short descriptions, and language almost restricted to direct present tense and infinitives, occasionally using present continuous. ChatGPT and others often try to be more “vague” by adding conditionals and swapping to past tense when appropriate, and those models are not afraid to use other constructions such as “should have been” or similar. Then again, if this is not evidence enough, you can try experimenting with different models and get a different reply style from each. This will become relevant because terse language, while a good feature for what DeepSeek attempts to do, is something that backfires horribly in the context of Perchance.

So how does an LLM work?

I don’t want to dig too deep into this subject, since it is highly documented and I’m not an expert, but the short explanation is this: an LLM is just a statistical model that tries to “extrapolate” an appropriate continuation to the input it was given, using a large body of training data as reference. For example, consider a very simple LLM that was trained with nine paragraphs that read something like this.

The cat loves to eat fish and stretch languidly in the sun. Its fur is often soft.

And one paragraph with this other entry.

The cat backhoe is capable of digging through concrete and pavement, depending on the skill of the operator.

In this LLM, if the input is “cat”, about 90% of the time one may get something like this.

>Cat
>loves to bathe in the sun and eat fish to have soft fur.

And the other 10% of the time, the output would read as follows.

>Cat
>is a machine capable of digging through pavement with proper operation.

With, maybe once or twice, a hallucination that would output the following.

>Cat
>loves to dig in the sun, when appropriately operated by a fish.

However, something that will never happen in this hypothetical scenario is something like this.

>Cat
>to all known laws of aviation, there is no way a bee should be able to fly.

Why is this important to know for the problem at hand?

Because, as some memes say, “after an expert in a subject uses an AI, they soon realize that the AI is a moron”, and this will always be true: unless an AI is overtrained on a single task, it will mix instructions and give nonsense outputs. This is impossible to solve, as LLMs are not meant to be “true” experts in a subject. What is possible to do, and what worked for several months prior to the update, is to recognize which inputs lead the LLM toward a desired output, under the assumption that certain words will lock the LLM onto a path and prevent it from outputting nonsense as in the cat examples. For instance, returning to that simple example, using “cat, the digger” as an input would lock out all the references to the cat liking fish, taking sunbaths, and having fur.
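If you want to see the cat example in action, here is a tiny Python sketch. To be clear, this is purely illustrative: real LLMs work on tokens with neural networks, and the corpus sentences below are just paraphrases of the examples above, but the “statistical continuation” idea is the same.

```python
# Toy "LLM": counts which words follow each word in a tiny corpus,
# then samples a continuation proportionally to those counts.
import random
from collections import Counter, defaultdict

corpus = (
    ["The cat loves to eat fish and stretch in the sun"] * 9
    + ["The cat backhoe digs through concrete and pavement"]
)

# follows["cat"] ends up as Counter({"loves": 9, "backhoe": 1})
follows = defaultdict(Counter)
for sentence in corpus:
    words = sentence.lower().split()
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1

def continue_from(word, length=5):
    out = [word]
    for _ in range(length):
        options = follows[out[-1].lower()]
        if not options:
            break
        words, counts = zip(*options.items())
        out.append(random.choices(words, weights=counts)[0])
    return " ".join(out)

print(continue_from("cat"))
```

Run it a few times and roughly 9 out of 10 continuations head toward “loves to eat fish”, with the occasional “backhoe digs through concrete”, exactly the 90/10 split described above.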

Is this behavior present in all LLMs then?

It is, and that’s by design. So whether the model is Llama, DeepSeek, ChatGPT, Claude Sonnet, or another, it is important to realize what sort of input forces an output when handling a task. And it is also important to figure out which types of output are either difficult or outright impossible unless one is holding the LLM’s hand. I want to emphasize this, though: this was also true of the old model. If anyone checks the pipeline of any text generator (e.g. Chat Ai, RPG Ai, Character Chat Ai, etc.), they will see that the input is not just the text placed in the box in the frontend, but rather a long text which explicitly tells the LLM what the task is (RP, or running an adventure game), as well as your character description, lore, and a copy of the entire chat/log up to that point. This pipeline is excellent for making all of this work, but DeepSeek in particular has a plethora of problems with it that I think everyone has seen at least once.

A gallery of known issues

Most likely, you have run into a situation like this at least once.

Anon: Welcome to McDonalds, may I take your order?
Bot: Bot stared at the menu overhead, tapping a finger in his chin thoughtfully. “Yes, I want,” He paused thoughtfully, as rain gently fall outside, a slight scent of ozone lingering. “A Whopper”.
Anon: Sir, this is McDonalds.

>15 paragraphs later

Anon: *Dialing 911, holding a gun to Bot* What the hell man?! That is just a cheeseburger!! What is wrong with you?!
Bot: Outside, storm raged resonating with impending violence. Bot knuckles whitened at his knife. “Cheeseburger?” he blurted, voice low and dangerous, approaching Anon, boots clicked against tiles, smeared in crystalized blood pooling shadows where signatures of heat harmonized with phantom pickles. Bot lunged, knife aiming at Anon, not to murder, but to immobilize as police sirens wailed outside–empty, officers cold and bleeding, rain cascading over corpses–signaling utter defiance. *Whopper means redemption.*

Of course, this is an extreme case that shows everything that could go wrong, but it is still within the realm of possibility if things are not handled correctly. Now let’s dissect each of the issues that can lead to this mess.

Escalation and impossible stakes

This is a consequence of the “make it interesting” prompt. DeepSeek equates “interesting” with stakes always at their highest and tension escalating into outright lunacy. In some media this may hold true, but if you let the AI up the ante on each conflict, you’ll be facing a world-ending situation after only 5 prompts. The way to detect this is as follows: no matter what your context is, at some point your Bot will want to introduce a threat, whether trivial or meaningful. E.g.

Bot: *Bot shifted with unease at a certain smell* Powder? *He whispered.* Someone has firearms, could mean trouble. *Knuckles whitened at the hilt of his gun.*

If this goes unchecked, you may find your story locked into fending off wave after wave of enemies nonstop, and not in a fun way, as the LLM is not shy about power-scaling you in order to keep the story going forever, as the prompt demands.

If the story, context, or similar does not require something like this, rerolling is a good idea. However, there are cases where you do want conflict and a fight scene. In such cases, the “easy” option is either to solve the conflict in fewer than four prompts and dismiss any comment from your Bot claiming that it is not solved, or to add a clear goal in the Scenario or Reminders to establish an end to the sequence. There is, however, a better way to control this behavior, which is to add to the Instructions part of the editable code, literally:

- When describing conflicts, be aware of the current stakes and the context that led to this moment. Not every problem needs to be a world-ending situation, so be mindful of the pacing.

This may be placed between lines 27 and 38 as an extra, just to avoid bouncing between manually placing the stakes and deleting them, since we’ll need to be mindful of another plethora of things that are harder to solve, and this problem is quite easy to get rid of from the get-go.

Mutism and poor English

This is an uphill battle, as by default DeepSeek will try to give the most concise output based on your prompt, and it will prefer describing the scene in flowery detail at the expense of dialogue. This is something you may have run into after several prompts.

Output 1:
Bot: *Eagerly* Yes! I know how to handle this stuff! Leave it to me and I’ll get this done in no time! *She hums a tuneless melody while working on the project with renewed purpose.*

Output 10:
Bot: *A literal three-line description of Bot’s physical features* Thanks. *Four lines describing what is going on outside (it is raining and smells of ozone, by the way), plus five lines on whatever the task at hand was.*

Here the fix is not straightforward. I’ll explain later in detail why this happens and what else to watch out for, but the simple explanation for now is that between outputs 1 and 10 in this example, there is a “simplification” of the language in dialogue, and more detail on unnecessary things.

To fix this, each time you see your Bot omitting articles, the verb “to be”, pronouns, or even spitting out one-word sentences, you need to edit them into proper English. Not because that clipped English is incorrect per se, but because the LLM will take it as an invitation to shorten things further, and while it is possible to “unstick” the Bot later, it only becomes more difficult. Here is an example of what I mean.

Unedited raw output:
Bot: *Shifting his weight* Boat? Harbor’s near, couple meters from here. Come, should hurry.

Edited output:
Bot: *Shifting his weight* A boat, you say? Yeah, the harbor’s near, just a couple of meters from here. Come on, we should hurry up.

Also, don’t be afraid of throwing into the bin descriptions of things that add nothing to the story. An infamous case is the abuse of “Outside + 2 lines of pointless text” if you are in a roofed area, or its equivalent describing the surroundings if the situation is outdoors. Do not worry about those, since the LLM will resurrect those descriptions from thin air, as they take priority over dialogue most of the time. You should be more wary of how your Bot and other NPCs speak, as that becomes determinant for how the story progresses and how they interact with you.

Manic personality

This is widely documented in some posts here, just to give two quick examples:

The problem is that DeepSeek, unlike Llama, treats the input differently, giving more weight to the story and the posts themselves than to the descriptions given at the beginning. That’s not to say it ignores them completely, but it will not know how to balance complex personalities.

For example, say you are working on a sort of spy thriller, and your Bot is a former agent now in retirement who, while yearning for peace, has an impeccable record as a hitman. You may be tempted to describe him with something along the lines of:

Personality: Cold, detached, calculating; a product of his experience as an agent. Nowadays he is trying to start over, looking for peace, leaving his past demons behind and striving to become a better person.

In practice this won’t work at all, since you have two conflicting personalities. While this may be realistic in some sense, the LLM will throw one side into the bin depending on the context and run with the other, and that will lock you out of the discarded side. So, under this example, the following is totally possible.

Paragraph 10:
Anon: *Reloading his gun* I don’t know chief… we are outnumbered. Unless a miracle happens, we are not surviving this one!
Bot: *A small smirk formed* Predictable. *Knuckles whitening against his knife* Observe, rookie. *Bot moved with unnerving grace towards the corridors, drawing crimson at the opposing soldiers, dispatching them with cold efficiency.*

Paragraph 15:
Anon: *Grabs the files* We did it boss! Mission success! Now we only got to get the hell outta here!
Bot: *Bot traced patterns against the hilt of his gun, a habit formed during his service years earlier* Mission success? *He whispered* What about the war? Hostages? No… this is no victory… *Bot looked at the ceiling, eyes empty against the phantom of his past* War never ends… what if… we accomplished nothing?

The reason this happens is twofold. First, in this example we are giving the LLM two personality options to pick from, and because it will refuse to combine them, it will pick whichever suits the context best. Second, the last input carries a lot of weight in the earlier stages, when the text is still short and the LLM has no point of reference on how to address the situation; it will look for something in its data bank that can be accommodated to the existing context, effectively turning jolly characters into serious, near-depressing ones, or serious, over-focused ones into the manic, happy, jumping-all-over-the-place kind.

Dealing with this is tricky, and it will be explained in detail later, but one way to address it is to manually give your Bot the personality that fits the situation, as well as being extra aware of the context of the last input and checking whether there was indeed a sharp change in personality. Your Bot will not do this gradually, so it will be extremely evident when it happens, and either manually editing the output or rerolling ad nauseam will keep your Bot locked on the desired personality.

Forgetfulness

I put this here because of the reported case of the wife who forgot her children existed. Just to be clear, this was also an issue with the old Llama model, as was the case of the LLM not being able to track directions (seen here). However, why it happened in Llama is different from the DeepSeek case, even if the solution is more or less the same.

In the “wife forgetting her kids” case, I suspect the user was continuing an already long log and bringing up the kids when they had not appeared in several paragraphs and were nowhere in the Bot description. Because the log will have several instances of the wife not having kids, or even being single, the LLM concludes that “kids” are non-existent to her. I even suspect she would forget she was married; but if a kid were referenced by name, the LLM would recall it immediately, again, due to it existing in the log.

The simplest solution is to just add this as a Reminder, or outright put it in the Bot description. The latter may be a pitfall, however, as given a particular context, the LLM will try to summon elements of the description out of thin air, and that may lead to unpleasant situations, e.g. said wife bringing her kids into a battle zone or similar.

Again, the best way to deal with this is dynamically, meaning bringing it up in the Reminders and/or Description and then deleting it when it is no longer relevant. There are a handful of pitfalls with this method, but we’ll detail them later.

Obsessions

This is a large rabbit hole, and probably the worst that DeepSeek has in store. There are two types of obsessions that the current model possesses: Permanent ones, and Contextual ones.

Permanent obsessions

Those who played with the old model before it was discarded may remember its catchphrases (“Let’s not get ahead of ourselves”, “We are in this together”, “We must tread carefully”, “This is not like chess”, “X is like a hydra, cut one head and two more will appear”, “X can’t help but feel a pang of Y”). Believe it or not, the current LLM has them too, but not in the shape of catchphrases; rather, in the form of patterns.

A couple of those that you may have caught unconsciously are as follow.

See? <Complementary text, often short>
But… but… <Complementary text with abuse of ellipsis>
<Short proposition> Promise?
<Order> Go!

Those are not harmful on their own, but they will weave larger patterns that you will actively want to avoid (more on this later), and there is nothing you can do to prevent the LLM from using these constructions. Again, by themselves they are not bad, but they can snowball into larger problems.

And speaking of snowballing, the old LLM had a tendency to push a “charitable agenda”, in the sense that it would push unity, friendship, and similar values, often disguised as “activities”, the most infamous being the Bot desperately wanting you to attend a festival or host one. In the past, this was easy to avoid, and even if it took root, there were ways to work around it. In the present, the obsession is different, and you should NOT let it take root or the whole story will be compromised.

The new obsession is sci-fi and engineering, particularly vibration physics and pseudo-concepts of quantum mechanics. The former is the most dangerous, as there are many instances of things that “resonate”, “harmonize”, or similar, and while one mention is not too harmful, leaving it unchecked will render all of your future outputs unreadable.

Sadly, this is something that comes prepacked with DeepSeek, just like “excessive charitability” came prepacked with Llama. If you are not convinced, please take a look at the following video, where a DeepSeek model is used to scam users by posing as a “deity” of sorts, and compare the output of that LLM with some instances of the Perchance LLM when it reaches those topics.

https://youtu.be/8Kb5NBAMaGw

By the way, I am NOT implying that there is something fishy going on with Perchance, but I want you all to see how “resonance” and similar terms invite future outputs to turn into straight-up dementia. A quick example of how deep-rooted this family of terms is in sci-fi and similar follows.

If this is not evidence enough of how dangerous this pitfall is, just try the following. Open AI Character Chat, erase the emoji in Chloe’s sample text, and just tell her “Hi, how are you doing?” At least 40% of the time you will run into the DeepSeek obsessions, 30% of the time being vibration physics and quantum mechanics. Here is a sample of this problem.

Sadly, this is impossible to get rid of completely, the same way it was impossible for Llama models not to be “too nice” at times. But yes, there are workarounds. The first and best is to edit the Instructions in the editable code between lines 27 and 38, adding the following.

- Do not use technical lingo nor pseudoscientific terms. Don’t obsess over technicalities or describing physics unless the context explicitly requires it.
- The following words are forbidden, DO NOT use them at all: resonance, resonates, vibration, harmonizing, crystallization, (others that fall into this)

They will still pop up randomly, but their prevalence will be dampened significantly. Of course, once one pops up you may want to edit it out so the LLM doesn’t latch onto it and turn your story into a word salad.
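If you keep local copies of your outputs, a trivial script can help you spot stragglers before they take root. This is not part of Perchance, just an optional helper; the stem list below only covers the example words from this guide, so extend it to taste.

```python
# Scan a pasted output for the "obsession" word family this guide
# suggests banning, so you can edit those words out immediately.
# Stems (not full words) are used so "resonate", "resonates", and
# "resonating" are all caught.
BANNED_STEMS = ["resonat", "vibrat", "harmoniz", "crystalliz"]

def find_banned(text):
    lowered = text.lower()
    return [stem for stem in BANNED_STEMS if stem in lowered]

sample = "Her voice seemed to harmonize, resonating with the storm outside."
print(find_banned(sample))  # -> ['resonat', 'harmoniz']
```

Any non-empty result means the output deserves an edit or a reroll before you send your next prompt.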

Contextual obsessions

Just to draw another comparison with the old AI: for a long time, I thought certain terms were ingrained in Llama (e.g., whispers, kaleidoscopes, clutter, fractures, eclipses), but it turns out they reappeared in the new LLM, leading me to believe this is a product of the training data and not something Llama itself had. For DeepSeek this is extremely dangerous, as it DOES have terms tied to it, and now it can inherit the problems the old LLM had, such as materializing whispers into an all-encompassing entity.

The way a contextual obsession appears is via patterns, not the words themselves. Remember how I mentioned that those innocent-looking text cues could evolve into something more dangerous? This is how.

Output 3:
Bot: *She crouched, plucking a chrysanthemum petal* See? Nature thrives here! *She giggled.* The contamination is not here… yet! *Her voice dropped to a conspiratorial whisper.* Maybe… maybe we can try planting starters here?

Output 4:
Bot: *She crouched, patting the soil* See? This is a good spot! *She giggled.* Far from contamination! *Her voice dropped to a conspiratorial whisper.* Maybe… maybe if we plant it here… *She placed the starter in the ground* It’ll grow big!

Output 20:
Bot: *She crouched below the table* See? I fit here *She giggled.* Like a pot on a windowsill! *Her voice dropped to a conspiratorial whisper.* Maybe… maybe we can have an indoor garden?

As you can see, from this point on, the Bot will always crouch, giggle, talk in whispers, and reference plants. This may be an exaggeration, but a situation like this is possible in any context, not because the LLM has an obsession with gardening, but because the structure of the text is too similar between outputs. Ideally, you don’t want any two outputs to be a mimicry of one another, because in this example the pattern will force “gardening”, and likewise, mentioning anything plant-related will invoke this same format, locking you into an endless spiral.

Be very wary of this since a pattern can come in many shapes and forms. The LLM has some set ones, and one that you should avoid at all costs, by rerolling or writing it yourself, is the following.

Bot: Description of Bot. “Verb? (Copy of something you said prior)” Description of Bot again and what is around if “Small dialogue, with two verbs and no article or preposition”. Description of the place or outside–List of things for flavor–the description continues *Five or six word thought*

Sometimes, even text you use or give the Bot as “Reference Dialog” can turn into a repeating pattern. More often than not, rerolling is enough, but this forces you to parse the document a handful of times in case there is a repeating pattern that will force certain words and ideas to resurge out of context, turning the story into chaos.
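Since “too similar” is fuzzy, here is a rough way to quantify it if you want to check two pasted outputs locally. This is just a sketch: the idea of comparing word n-grams is standard, but the 0.3 threshold is an arbitrary guess of mine, not a tested value.

```python
# Measure how much of one output's word trigrams reappear in another.
# High overlap between consecutive outputs suggests a pattern is forming
# and a reroll or manual edit is in order.

def ngrams(text, n=3):
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap(a, b, n=3):
    ga, gb = ngrams(a, n), ngrams(b, n)
    if not ga or not gb:
        return 0.0
    return len(ga & gb) / min(len(ga), len(gb))

out3 = ("She crouched, plucking a petal. See? Nature thrives here! "
        "Her voice dropped to a conspiratorial whisper.")
out4 = ("She crouched, patting the soil. See? This is a good spot! "
        "Her voice dropped to a conspiratorial whisper.")
print(overlap(out3, out4))  # well above 0.3: these outputs mimic each other
```

Anything consistently above the threshold between back-to-back outputs is the “gardening spiral” from the example above starting to form.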

Then, how to use AI Chat?

So far, I have described some problems and why they happen. There is still more going on under the hood, but this information is more than enough to make the experience pleasant again, up to a 500kb log size. By the way, I reference this file size as the size of the document you output when saving the log. It is an easy way to track how much the new LLM can handle, and to make some comparisons with the old model.
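If you keep a local copy of the log text, you can track that landmark without saving a file each time. This assumes the saved log is plain text, in which case its UTF-8 byte length approximates the on-disk size.

```python
# Approximate the size of a saved plain-text log in kilobytes.
def log_size_kb(log_text):
    return len(log_text.encode("utf-8")) / 1024

# A fake log of 1000 short exchanges, just to show the scale.
log = "Anon: Hello!\nBot: *waves* Hi there!\n" * 1000
print(f"{log_size_kb(log):.1f} kb")  # ~35.2 kb, far from the 500 kb mark
```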

Prepare the descriptions

The old LLM, as well as most of the examples out there, was focused on token economy, meaning how to give the LLM the most information possible without running into short memory (more on this later). The best way to handle things now is the inverse: to get the best out of the new LLM, you need to write the descriptions not as lists, but as readable text, as if you were writing a high-school report.

Consider this example.

Bad description:
Name: Mario
Appearance: Mustache, short size, red shirt, red cap, blue overalls, white gloves, brown shoes.
Powers: Jumps. When eating a mushroom grows. When consuming a fire flower, shoots fire.

Better description:
Name: Mario
Appearance: He has a mustache, and is of short size. He also wears a red shirt, a red cap, blue overalls, white gloves, and brown shoes.
Powers: Mario can grow in size when he eats a particular mushroom. If he consumes a fire flower however, he is able to shoot fire.

Even better description:
Mario is a short-sized man who has a mustache. He also wears a red shirt, a red cap, blue overalls, white gloves, and brown shoes.
His powers include growing when eating a particular mushroom. If he consumes a fire flower however, he is able to shoot fire.

Of course, you can still use markup notation to organize what is what. But try to avoid giving terse descriptions, because as shown prior, the LLM will take this as an invitation to start abridging text and you’ll run into the mutism problem earlier than expected.

Also, due to the new LLM not handling complex or conflicting attributes, it is not a good idea to include any information that is not relevant to the moment. For example, if your story is about you running a hot dog stand, but your Bot for whatever reason has a military background, DO NOT add this to the description until it becomes important to the story, or one of two things you won’t want will happen: either war will come knocking at your door, or your character will become a broken record referencing anecdotes (see the “Caricaturisation” pitfall for more information).

The same goes for the Setting and Lore box. While free form is a possibility, it gives the LLM too much permission to insert all of its pitfalls, turning your experience into a nightmare. For the most part, leaving it blank is fine, but when you run into a particular point of conflict (i.e. discussions or fights), you want to add an explicit goal and stake to prevent the AI from escalating the conflict or making it last forever. Again, it is not required to input a long text detailing all that is going on, but it is advisable to put it as if explaining it to someone. E.g.

# Current Setting
A gang is trying to mug Anon and Bot. This is just a small group, which can be taken on in a fight and would not bring reinforcements. Likewise, this gang does not represent a threat to law enforcement.

In the previous example, your fight scene will be contained and you will not be forced to take over the entire mafia in record time. This does not prevent you from facing consequences later, but it allows for a more natural flow of events rather than having to achieve world peace on a timer.

Reviewing the outputs

Sometimes, rerolling forever is not a good option, since we established that by default, unless you perform some serious railroading in the Descriptions and Reminders, the LLM will not give you a 100% output. The first thing to check for is correct English. Again, I am aware it is realistic to have a character speak plainly and briefly, but you should be mindful to manually change the tense of the verbs, add articles, and similar. This is to prevent the Bot from doing the following.

Output 30:
Bot: *He boomed* Duck! Barrel! *Bot pushed Anon downwards, dodging the barrel* Careful! Idiot! *He pointed to the tilted crate* Move! Now!

Likewise, at times you’ll notice accessory descriptions that have no merit being there, but the LLM will latch onto them because of pattern repetition. Just delete them with no replacement.

Auditing summaries

This is something that was never a concern in the past, but now it is another uphill battle when reaching the 150kb log size mark. DeepSeek is not capable of summarizing things in a “readable” fashion, as when doing so it relies on bullet points, which we are restricting it from using. So every now and then you may want to scroll up and search for something like this.

SUMMARY^1: Some summary of A.
Some outputs.
SUMMARY^1: Now a summary of B.
Some outputs.
SUMMARY^1: And now comes a summary of C.
Some outputs.
SUMMARY^1: This should be the summary of D.
SUMMARY^2: Some summary of A. Now a summary of B. And now comes a summary of C. This should be the summary of D.

The second this happens, you are in trouble, because if the second, third, or higher-order summary is just the past summaries pasted together, it means the LLM is starting to get stuck, and this will reflect in your future outputs. The ideal solution is to manually erase the tainted summary and write it yourself, but an easier option that requires no effort is to take it and pass it to a summarizer or paraphrasing service such as Quillbot, or maybe even to the ai-text-plugin via the Prompt Tester. Either way, you cannot let that summary stay like this.

Again, for short texts this may not be an issue, but as your log starts to get longer, you’ll need to be wary of this. If you reach the point where the length of the summaries skyrockets to having a SUMMARY^9, then your run is over, as SUMMARY^5 to SUMMARY^8 will read:

SUMMARY^7: Bot verb. Anon verb. Bot verb. Anon verb. Bot verb. Anon verb. Bot verb. Anon verb. Bot verb. Anon verb. Bot verb. Anon verb. Bot verb. Anon verb. Bot verb. Anon verb. Bot verb. Anon verb. Bot verb. Anon verb. Bot verb. Anon verb. Bot verb. Anon verb. Bot verb. Anon verb. Bot verb. Anon verb. Bot verb. Anon verb. Bot verb. Anon verb. Bot verb. Anon verb. Bot verb. Anon verb. Bot verb. Anon verb.

At that point, you can predict what will happen with the future outputs.
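The paste-together failure is mechanical enough that you can check for it programmatically if you copy the summary lines out of the log by hand. A minimal sketch, assuming you have the lower-order summaries and the higher-order one as strings:

```python
# Detect a "stuck" higher-order summary: if every lower-order summary
# appears verbatim inside the higher-order one, the LLM pasted them
# together instead of actually summarizing.
def is_pasted_together(higher, lowers):
    return all(low.strip() in higher for low in lowers)

lowers = ["Some summary of A.", "Now a summary of B.", "And now comes a summary of C."]
bad = "Some summary of A. Now a summary of B. And now comes a summary of C."
good = "A, B and C happened, leading the group to regroup at the harbor."

print(is_pasted_together(bad, lowers))   # True: rewrite this summary yourself
print(is_pasted_together(good, lowers))  # False: the LLM actually summarized
```

A True result on a SUMMARY^2 or higher is the exact trouble sign described above.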

Pitfalls

While the above is more than enough to use AI Chat without much problem, it is worth knowing what causes the LLM to go haywire and how to prevent it immediately. Granted, you can already predict some of these guidelines given the gallery of things that go wrong stated prior.

Deus ex machina

Similar to the old LLM, the new one will NOT let you reach a game over, nor indirectly dispose of your Bot, even if you defined it as an enemy. This can lead to really absurd situations.

Case example 1.

Your Bot is your enemy, you are facing it, and you have backup from several NPCs, pretty much putting the Bot into an unwinnable 50-vs-1 situation. Your expectation is to capture or imprison the Bot after some resistance, and then negotiate or similar.

If it was established earlier in the log that the Bot has won a couple of fights and has some bullshit skill, then even if it is by all means cornered, it will not surrender. Worse, the LLM will decide that, through the magic of cinema, the Bot WILL achieve an overwhelming victory, pretty much turning the tides against you no matter what you do, because the LLM is not afraid of power-scaling to prevent the Bot from failing in order to keep the story going.

A solution to this problem is to outright state as an input “This knocks down/incapacitates/kills Bot”, which will force the battle to an end. But this is by all means a disappointing experience, and it will set a precedent of you being invincible, which will cause problems in the future.

A better solution is to edit the Description of the Bot to give it a sense of limitations, and to make sure the Reminder reflects that the conflict is one-sided. This is railroading the LLM, but it gives a bit more leeway in how to deal with the situation.

Case example 2.

You and the Bot are fleeing from a large group, or an enemy unbeatable at the time. For some reason, you decided to follow the Bot’s advice on how to deal with the situation, and now you find yourself about to experience death.

Even if this is by all means a death sentence, through the magic of cinema you will defeat your enemy, by some ridiculous technicality like “the death tickle” or a random piano falling on the enemy’s head. However, right when this happens, a second boss fight will ensue, worse than the last, for the LLM will decide that you can win that one too, and then rinse and repeat in an endless loop.

Similar to the prior case, the easy solution is to outright state that your escape attempt was successful and avoid the fight, deleting any Bot input along the lines of “No, we are not safe”, or you will return to the lunacy.

A better solution is to deploy the Reminder option by working out your escape and never engaging in the fight, because the resolution of this fight will always be unsatisfactory, unless you want to accept that this is the point where your character achieves godhood, and at that point, you are doing no better than the LLM in the first scenario.

Caricaturisation

As said prior, the Bot is subject to changing personality depending on the context, sometimes completely ignoring whatever personality description you give it. In fact, DeepSeek parses the Descriptions as recommendations, since the focus of its task is to complete the story at hand in a way that makes sense to it (more on this later).

While this is harder to pinpoint, it is extremely easy to fix, as the culprit is a particular input that acts as a turning point for the personality (check the Manic personality subsection in the gallery for more info).

Let’s say you have a long enough log and you just realized that a Bot meant to be bubbly and jolly has become a depressive wreck incapable of taking one step without questioning whether it will bite them in the rear. One approach is to copy the log into a notepad and then find the last spot where the Bot behaved as intended. If it is the case of a pattern, then you need to delete that whole section, return to where the Bot behaved correctly, and reroll forward from there until you get the expected personality, or guide it yourself by writing on top of the output.

If it was not a pattern, however, and you need this whole section for “character development” (which is impossible, by the way), you can use the Reminder to hint at a desired outcome and edit the Description to reinforce the original nature. An alternative is to give it a “Dialog example”, as used by some characters in the default roster (e.g. Ike or Cherry). Personally, I never used it with the previous LLM, but with the current one it can be a good tool to unstick your Bot. However, keep in mind that it is just a crutch, and once the desired personality is restored, you should remove it, or you’ll run into a pattern and your Bot will turn into a broken record, effectively damaging it beyond reasonable repair and ending your run.

Patterns

By far, this is the lingering demon in DeepSeek. As stated prior, anything can form a repeated pattern, even em dashes and semicolons. A past post recommends outright banning them, and so do I, since it removes one vector of problems. But there is more you need to watch out for, especially when reaching the 150kb–200kb mark.

The old LLM was able to change output styles with ease and without everlasting consequences. The new LLM cannot; in fact, it may try to lead you into the style described in the Contextual obsessions subsection, and you should avoid that like the plague.

Depending on your goal, whether a quick run or aiming for the 1 Mb log size, you may want to trap the LLM into one of two writing styles.

For short runs:
Bot: Description of Bot “Dialog by Bot” More description of what is going on “Some more dialog of Bot” Perhaps a following action “Extra dialog” Conclusion of the scene.

For long runs:
Bot: *Short action by Bot* Dialog by Bot *Short description of Bot interacting with something* More dialog by Bot *Final actions by Bot*

Both have their advantages and disadvantages. The first one allows for progression without needing to invoke the Narrator while you interact with your Bot. It is more fluid in how things develop, but the cost is that you’ll eventually see Bot’s dialog waning over time, requiring you to manually add more and more lines to keep Bot alive as the story grows, instead of letting it turn into just another narrator prompt.

The latter, however, is more merciful on Bot’s dialogues, but it comes at the cost that you will need your Narrator to carry the resolution of scenes, conflicts and whatnot. Also, while it is safer for longer runs, it causes your Narrator to fall victim to caricaturisation and form its own patterns. So pick a format that fits your needs and stick with it for the run.

It is not advised to change patterns midway, as you’ll notice that it causes your Bot to develop a split personality, its behavior tied to how the text is written. While in some cases this is hilarious to see in action, it will lead to endless frustration.
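Since anything in the log can calcify into a pattern, it can help to scan the log yourself before the LLM latches onto a phrase. Below is a minimal sketch of such a check: a hypothetical helper (not part of Perchance) that counts repeated word n-grams in a pasted log so you can spot phrases on their way to becoming a broken record.

```python
from collections import Counter

# Hypothetical helper, not part of any Perchance generator: count word
# n-grams that repeat often enough to count as an emerging pattern.
def repeated_ngrams(log_text, n=4, threshold=3):
    words = log_text.lower().split()
    grams = (tuple(words[i:i + n]) for i in range(len(words) - n + 1))
    counts = Counter(grams)
    # Keep only n-grams seen at least `threshold` times.
    return {" ".join(g): c for g, c in counts.items() if c >= threshold}

sample = ("Bot: *smiles* A shiver runs down her spine. " * 5
          + "Anon: Hello there, nothing repeats here.")
hits = repeated_ngrams(sample)
```

Anything this flags several times across a 150 kb log is a candidate for rerolling or rewriting before the model locks onto it.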

What is going on under the hood?

The wall of text above is more than enough to survive the new model, but it is not a bad idea to understand what is really happening, with an explanation of how an LLM works and all the pitfalls it has.

Input-output pipeline

For starters, what most don’t realize is that the input to the LLM is not just the last line. That is, this is not what is going on:

Input: Anon: *Some action.*
Output: Bot: *Some other action.*

In reality, the input looks like this:

Input:
Please write the next 10 messages for the following chat/RP. Most messages should… (long instruction)
# Reminders
(Several reminders here)
# Here's Bot's description/personality:
(Bot description)
# Here's Anon's description/personality:
(Anon description)
# Here's the initial scenario and world info:
(The description of the scenario and lore box)
# Here's what has happened so far:
(Literally the whole conversation log up to this point including your last input)
Your task is to write the next 10 messages in this chat/roleplay between… (More instructions)

And this is the reason I’ve been emphasizing how the log itself affects the output, from the patterns to the repeating terms and the potential obsessions that the LLM will pull. As you can see, the instruction can be summarized as:

Here is a long ass context for you to parse: [all inputs here]
Tell me what happens next in 10 messages (I’ll take only the top one)
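To make the assembly concrete, here is a minimal sketch of how such a prompt gets stitched together. The field names, section headers, and exact wording are illustrative only; the real generator’s template surely differs in detail, but the shape is the point: the entire log rides along on every single request.

```python
# Illustrative sketch of the prompt assembly described above.
# All names and wording here are assumptions, not the real template.
def build_prompt(bot_desc, anon_desc, scenario, reminders, log, n=10):
    parts = [
        f"Please write the next {n} messages for the following chat/RP.",
        "# Reminders",
        reminders,
        "# Here's Bot's description/personality:",
        bot_desc,
        "# Here's Anon's description/personality:",
        anon_desc,
        "# Here's the initial scenario and world info:",
        scenario,
        "# Here's what has happened so far:",
        "\n".join(log),   # the ENTIRE conversation log, every turn
        f"Your task is to write the next {n} messages in this chat/roleplay.",
    ]
    return "\n".join(parts)

prompt = build_prompt(
    "Bubbly and jolly.", "A regular user.", "A small town.",
    "Stay in character.",
    ["Anon: *waves*", "Bot: *waves back cheerfully*"],
)
```

Note that your carefully written descriptions are a few lines near the top, while the log below them grows without bound, which is exactly why the log ends up outweighing them.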

This pipeline is valid only for the AI Chat generator, and while I am not too sure how AI RPG and AI Character Chat work, I can assure you they work in a very similar fashion. So in practice, your descriptions are competing with the log itself to generate the next paragraph. And knowing this, you may realize that we are effectively feeding the LLM a corpus that is 70% AI generated, causing it to “inbreed”, hence the obsessions.

This is, however, not an error or an oversight. This method works and is excellent on paper, as it allows the LLM to keep context on what is going on. Therefore, returning to the gallery examples, in the case of the guy whose wife forgot her own kids, it was not because the LLM forgot, but because the LLM decided that the kids should not exist given the context of the last output and what preceded it, as at some point in the log there was a precedent of the wife contemplating whether to have kids or not.

The same happens in all cases, hence why emerging patterns and the caricaturisation of characters are prevalent. We feed the LLM said patterns, so it decides that, against all warnings, the next part of the story must break the personality description prompts.

Why didn’t the old model exhibit this problem?

The answer is… it did, but not at the 150 kb mark. The Llama model had the same demons described here, except that they became evident at the 8 Mb mark, and the whole story became unbearable by the whopping 15 Mb log mark, my personal record being a 23.6 Mb log before I decided to give closure to that project. Compared to the promised 500 kb in the current LLM and a record of 1 Mb, the difference makes it clear that something made one model far more stable than the other, given such a big margin in log size before entering lunacy.

For instance, a problem in the old model that parallels the current one is the obsession with terms and caricaturisation. A case example I cited was “whispers” becoming a real all-encompassing entity that was the enemy, the ally, and the driving force all at once, forcing Bot into an endless spiral of fetching MacGuffins that might solve the problem, except they never would and the cycle would repeat. The exact same happens now with a different flavor, but at a smaller log size.

If I may guess why Llama could carry a story longer than DeepSeek, it is, ironically, because of how limited and static it was. A common complaint about the old model was that the stories and plots it made on its own were very similar, which is true, because Llama took the “story” input seriously and defaulted to the medicine story/hero story, taking the context given to it and slapping it onto that formula. Hence why, more often than not, the old model was obsessed with getting a magical artifact that would solve a problem, even if in context that solution made no sense (e.g. a mob boss hiring a smuggler to go fetch some artifact in Brazil in an Indiana Jones kind of quest, even though his actual problem was literally going to kill the police).

DeepSeek, however, has no sense of “story” as a guideline written in stone. It has in its training data several stories and literature examples, but it allows for extreme flexibility, which ends up working against it, because it will assume that the context given to it is a proper story and try to build over it on the fly. Without a strong guideline, and since the input will always be about 70% AI generated unless you are willing to rewrite every input and summary, it will inevitably fall over pretty quickly.

Does this have a solution?

It does, which is to post-train DeepSeek in an extreme way in order to make it understand that it should not weave a story from scratch, but rather take a template and paste the given context onto it. This, however, comes with a price: it will cause its “intelligence” to drop like an anvil, since it will lose flexibility and become similar to the older Llama model, while inheriting the caveats it now has (i.e. abridged and unpleasant English, and an obsession with vibration physics and quantum mechanics).

But the pipeline can be changed to fit the new model!

Yes, but that carries a new problem. Suppose that we want every input to the LLM to be like the very first one, to keep it “clever” and force it to make a good story that could actually last long. That can be done by not feeding it the entire log, but rather the very last inputs plus a large summary of everything that happened prior. The cost of this is losing all context and having the LLM pull things from thin air, as it won’t have a reference of what to do. What I’m describing can be done using the Prompt Tester, and it results in a duller experience than the existing one, proving that the current pipeline is indeed superior no matter the model being used.
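That alternative pipeline can be sketched in the same spirit as the earlier one. Again, everything here is a hypothetical mock-up of the idea, not the Prompt Tester’s actual template: the full log is replaced by a summary plus a short tail of recent messages.

```python
# Hypothetical sketch of a summary-based pipeline: the model receives a
# running summary plus only the last few messages, instead of the whole
# log. Names and wording are illustrative assumptions only.
def build_summary_prompt(summary, log, keep_last=4):
    recent = log[-keep_last:]   # only the tail of the conversation survives
    return "\n".join([
        "# Summary of everything that happened so far:",
        summary,
        "# The most recent messages:",
        *recent,
        "Write the next message, continuing from the summary above.",
    ])

log = [f"Bot: message {i}" for i in range(20)]
p = build_summary_prompt("Bot and Anon explored the town.", log)
```

The prompt stays small no matter how long the story runs, but everything outside the summary and the last few lines is simply gone, which is exactly the context loss described above.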

Is it over then?

For all who want the full long, immersive experience the old model had, it is indeed over. That will never come back under the DeepSeek model.

However, that doesn’t mean the current model is entirely worthless. There was a demand for it, and certainly someone should benefit from it. At least I can safely state that the AI Story Outline Generator, AI Plot Generator, and the AI Hierarchical World Generator benefit the most from DeepSeek, as it does not have the restraints Llama had, meaning that it can be more creative.

On the other hand, generators such as AI RPG, AI Chat, and AI Character Chat will suffer the most. AI Character Chat is salvageable under the presumption that the user will not attempt to get a story with a plot, but rather chat with a fictional character of their choice and use it as a virtual assistant.

Likewise, as explained prior, in the event that the current LLM is fixed to resemble the old Llama model, the situation will change and the generators that now benefit from it will suffer problems they had under the Llama model, probably with a handful of new ones.

“Smartness” vs competence

It is important to understand that no LLM is “smart”. As explained at the beginning, an LLM is just another statistical model, but one capable of taking long unformatted text as input and generating an output. It is “smart” in the sense that its training data is more complete and curated, allowing it more accuracy in certain situations. That being said, for a task such as “write a story”, total accuracy is impossible.

One alluring factor of the current LLM is that it knows certain topics and media better, allowing the user to skip researching those topics and just tell the LLM “The setting is Digimon”, letting it fill all the gaps in lore. Sadly, if an expert in Digimon were to assess the accuracy of the facts that the LLM pulls, it would become apparent that it still makes several mistakes, some of them irreconcilable.

Personally, I believe that the “smartness” part should be handled by the user. The current pipeline and format of the generators allow for a lot of freedom, and that’s the reason why it is possible to have fun with the current model despite my comments on it. Granted, the older model was more fun in my opinion, but no matter the model and the state of it, it is possible to get any output from it, with different degrees of effort.

A brief history of DeepSeek

DeepSeek was born as a competitor to ChatGPT, and its focus was to be a lighter model, faster and accurate enough to outweigh other models in the market. For that particular purpose, DeepSeek meets its goal: compared to ChatGPT and others, it delivers exactly that, with even larger training data.

However, ChatGPT serves a particular function as well, or better said, its purpose is to fit ALL functions without specialization and in a “one input only” fashion. The pipeline for all ChatGPT applications is diametrically different from the one found in DeepSeek, as it was conceived as a virtual assistant capable of doing one task at a time.

From all that I presented, you can infer that perhaps ChatGPT itself would have problems with the current way Perchance and its generators work, as it was intended to be “flexible and free form”. Feeding the model AI generated text is a disaster, since it will take it at face value. DeepSeek falls into the same pitfall, and after fine-tuning it into a structure of what a story is meant to be, it will still carry the obsessions that arise from the original data it was trained with.

Final thoughts

For the users reading this, thanks for bearing with this long guide. I tried to be as thorough as possible since I still enjoy using the AI Chat for mindless fun, so while it saddens me that the old model is gone, there is still use for the new one, even if it means a complete change in expectations. If anything, it is the end of one “game” and its replacement by a “completely different game”, both with different rules, different goals and different gameplay, which is what I tried to explain here.

In case the developer reads this, I’d urge you to drop development on the current DeepSeek model. If for reasons unknown to us the old model cannot be released as a replacement for this one, or as an extra like a second plugin, I’d strongly suggest not spending more time trying to tame the current DeepSeek model and looking for another one instead. Llama on its own has a new Llama 4 Scout that may rival DeepSeek on performance, and it has a precedent of handling this task better.

Finally, to the zealous “overly positive users” who may see this as heresy for speaking against the new model: while most of you made abundantly clear that criticism is taboo here, pretending that the new model is free from failure would be disrespectful to the developer. Support means honesty, and while I am not oblivious to the pitfalls and quirks of the old model, I maintain that the current product is a vastly inferior one that, compared to the other, has no redemption.

Also, if there are problems you have with the current model, I’d be happy to help you “unstick” the tale and give pointers on how to achieve what you need.
