By Kounaisha.
You have 3 wishes. Choose wisely.
After your tourist trip to Egypt, you happened upon an ancient item buried in the desert sands. It looked like one of those lamps that, as the stories go, hold a trapped genie. But of course, that's just a fairy tale... right?
✦. ─────────୨ৎ───────── .✦
▶ I'm making bots less often now, but if you still want to make a donation, feel free.
▶ Exclusive bot requests available for $10, or $5 if you only want images. Made on the spot!
✦. ─────────୨ৎ───────── .✦
⭐ MY RECOMMENDATIONS ⭐
▶ Generation Settings:
Temperature: 1
I had several problems, like the LLM repeating the same sentences over and over, whenever the temperature was below 1. At the same time, it generates unreadable and very bizarre text when it's above 1. JLLM rarely has problems at temperature 1, so that's the value I most recommend.
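If you're curious what that setting actually does on the wire: most frontends and proxies speak the OpenAI-style chat-completions format, where temperature is just a field in the request body. This is a minimal sketch of building such a payload; the model name is a placeholder, not a real value.

```python
# Sketch of an OpenAI-compatible chat request body with the recommended
# temperature. The model name is a placeholder, not a real value.
import json

def build_request(messages, temperature=1.0):
    """Assemble a chat-completion payload. temperature=1 is the sweet spot:
    below 1 the model tends to repeat itself, above 1 it rambles."""
    return {
        "model": "your-model-here",  # placeholder
        "messages": messages,
        "temperature": temperature,
    }

payload = build_request([{"role": "user", "content": "You have 3 wishes."}])
print(json.dumps(payload, indent=2))
```

Whatever frontend you use, the slider in its settings ultimately maps to this one number in the request.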
Max tokens: 0
I've noticed that it's very rare for JLLM to write more than 400 tokens, even with the limit set to 0. So if you'd rather make sure the LLM doesn't write too much in any given situation, I'd suggest setting the limit to 400 or less, depending on your taste, though the LLM may leave some text unfinished. If you're like me and don't care too much, leave it at 0.
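For the same request-body view of it: "0" usually means "no explicit cap", which in OpenAI-style APIs translates to simply omitting the max_tokens field. A sketch, assuming that convention (field names may differ in your frontend):

```python
# Sketch: how a max-token preference of 0 ("unlimited") vs. a hard cap
# could translate into an OpenAI-style request body. Assumes the common
# chat-completions convention; adjust for your frontend.
def apply_token_limit(payload, max_tokens):
    """max_tokens == 0 means 'no explicit cap'; otherwise cap the reply."""
    if max_tokens > 0:
        payload["max_tokens"] = max_tokens  # replies may get cut off unfinished
    else:
        payload.pop("max_tokens", None)     # let the model stop on its own
    return payload

capped = apply_token_limit({"model": "your-model-here"}, 400)
uncapped = apply_token_limit({"model": "your-model-here"}, 0)
print(capped, uncapped)
```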
▶ API Settings:
Advanced Prompts: none
Advanced prompts are a more personal choice. I haven't really gone through the hassle of actually using them, so I can't say much about that. I know some creators have a list of advanced prompts they share for free, but unfortunately I'm not one of them. Sorry...
Proxy: See my new proxy guide
Free proxies for us humble folks! If you want to know how to use proxies, follow my new guide to set them up, or join my Discord server and ask me.
Those proxies have 128k tokens of context memory, which makes them immensely more interesting than our dear JLLM, in addition to being completely free. If there are any models I'd recommend, it's these.
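Under the hood, hooking up a proxy usually just means pointing an OpenAI-compatible client at a different base URL with the access key the proxy gives you. A sketch of the settings involved; the URL, key, and the "/v1" suffix here are assumptions (placeholders), so use the real values from the guide:

```python
# Sketch of the settings a frontend needs to talk to a reverse proxy.
# The proxy URL and key are placeholders; use the ones from the guide.
import json

def proxy_config(base_url, api_key, context_window=128_000):
    """Bundle proxy settings: the OpenAI-compatible base URL, the access
    key, and the context size to tell the frontend how much it can send."""
    return {
        "base_url": base_url.rstrip("/") + "/v1",  # many proxies expose a /v1 path (assumption)
        "api_key": api_key,
        "context_window": context_window,  # 128k tokens, far more than JLLM
    }

cfg = proxy_config("https://example-proxy.invalid", "sk-placeholder")
print(json.dumps(cfg, indent=2))
```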