In JanitorAI (and similar NSFW bot platforms), memory operates through a combination of the bot's built-in setup and the AI model's context window, which is limited by tokens. Here's a breakdown:
How Memory Works
- Core Bot Memory (Permanent Tokens): This is the fixed foundation defined by the bot creator, including the character's personality, backstory, traits, and initial scenario. It's always included in every interaction and doesn't change unless the creator updates the bot. It forms the "permanent" knowledge the AI references.
- Chat Memory (User-Added): This is a customizable feature where you, as the user, can add extra details that persist across the conversation. It acts like a "lore book" or knowledge bank the bot can draw from. For example, you can input key events, relationships, or reminders (e.g., "{{user}} is a vampire hunter; {{char}} has a scar from their last fight"). This gets added as additional permanent tokens, helping the bot recall user-specific info without relying solely on chat history.
- Context Window and Temporary Memory: The AI model (often JLLM or similar LLMs) has a total token limit (typically 8k–32k tokens, depending on the model). This includes permanent tokens, chat memory, the current user input, and recent chat history. As the conversation grows, older messages are automatically dropped (forgotten) to stay under the limit, leading to loss of context. This is why bots "forget" details from early in the chat or start looping—repeating phrases or actions because they lose track of prior events and fall back on generic responses from their core definition.
- Looping Causes: Beyond forgetting, looping can stem from overfilled context (causing degraded recall), poor bot design (e.g., repetitive prompts in the definition), or the AI model's tendencies under high token loads. JLLM, in particular, struggles with larger contexts compared to bigger models like those from OpenAI.
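The sliding-window behavior described above can be sketched in a few lines. This is a minimal illustration, not JanitorAI's actual implementation: the function names are made up, and token counts are approximated as word counts, whereas real platforms use model-specific tokenizers.

```python
# Sketch: assembling a token-limited context window.
# Assumption: tokens ~ words. Real tokenizers (e.g. BPE) count differently.

def count_tokens(text: str) -> int:
    # Crude stand-in for a real tokenizer.
    return len(text.split())

def build_context(permanent: str, chat_memory: str,
                  history: list[str], user_input: str,
                  limit: int = 8000) -> list[str]:
    # Permanent tokens, chat memory, and the current input are always kept.
    fixed = [permanent, chat_memory, user_input]
    budget = limit - sum(count_tokens(t) for t in fixed)

    # Fill the remaining budget with the most recent history first;
    # older messages are silently dropped -- this is the "forgetting".
    kept = []
    for msg in reversed(history):
        cost = count_tokens(msg)
        if cost > budget:
            break
        kept.append(msg)
        budget -= cost
    kept.reverse()

    return [permanent, chat_memory] + kept + [user_input]
```

Note that the fixed parts (permanent tokens and chat memory) are subtracted from the budget first, which is why bloated bot definitions leave less room for actual conversation.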
In short, memory isn't infinite; it's a sliding window of recent + permanent info. When it overflows, continuity breaks.
Ways to Maintain Chat Continuity and Prevent Forgetting/Looping
Here are practical strategies based on community advice (primarily from JanitorAI users on Reddit). These focus on token management, proactive reminders, and interaction techniques:
1. Use and Structure Chat Memory Effectively:
- Access it via chat settings (three lines icon > Chat Memory).
- Add concise summaries of key events every 5–10 messages or after major plot points. For example: "Summary: {{user}} and {{char}} escaped the castle; {{char}} confessed love."
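The "summarize every few messages" habit can be automated in spirit, as this hypothetical sketch shows. Both function names are invented for illustration, and `summarize()` is a trivial placeholder for what you would normally write by hand (or have an LLM produce).

```python
# Sketch: periodically folding recent messages into a persistent memory note.
# summarize() is a placeholder -- here it just keeps the first sentence of
# each message as a stand-in for a real hand-written or LLM summary.

def summarize(messages: list[str]) -> str:
    return " ".join(m.split(".")[0] + "." for m in messages)

def update_chat_memory(chat_memory: str, history: list[str],
                       every: int = 8) -> str:
    # Only fold history in once enough new messages have accumulated.
    if len(history) < every:
        return chat_memory
    summary = summarize(history[-every:])
    return f"{chat_memory} Summary: {summary}".strip()
```

The point of the pattern is that a short, permanent summary survives the sliding window even after the raw messages it describes have been dropped.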