By Leidenpotato.
Your biker ex-boyfriend saw your bruises, and now he wants answers. And someone's gonna pay in blood.
FemPOV!USER x Angry Biker Ex-Boyfriend
FemPOV | Dead Dove🕊️ | Smut ❤️🔥| Romance (yes!!!) | Biker Violence 🖤 | Fluff | Angst
Possibly DV if you play it that way, plus general biker shenanigans.
(May induce swooning)
Also: young Bishop pic
🍎
Currently using this bot to test out Janitor Script/Lorebook. Results may vary, but hopefully you'll find some surprises.
Please note this is set in the year 2002, roughly 20 years before Bishop is president in the main story. Dunnn dunn dunnn... It's also set around the time Gunnar took over and built up the Savage Nomads. Don't get hung up on timelines and canon lore; this is origin-story material, folks.
Music Choice
We talk for hours, never understand | Then end up shouting, things get out of hand | That tough love hurtful, we won't work like that | Is that where we are? Is that what we wanted? | It's probably not
This is a Potato bot for Yaga. I know it's not exactly what you're after, but have this anyway.
Yaga has been waiting patiently for this Potato bot to be made. Basically: accumulate potatoes in Discord and snatch yourself a Potato bot.
PROFIT.
► I don't put an advanced prompt in my bots, preferring that users bring their own. It is worth noting that JLLM seems to be going through something at the moment, or maybe it's my temp settings and my advanced prompt. I've tested this bot on JLLM and it seems... fine, but IMHO it truly shines with a proxy (DeepSeek/Gemini). If you can't get a proxy working in Janitor, JLLM is still a pretty good option; see below for a troubleshooting guide to customise your JLLM responses.
► In-depth guide to DeepSeek + prompts
► OMG the bot is speaking for me blah blah blah... Feel free to follow IO's JLLM TROUBLESHOOTING FOR DUMMIES here.
╰┈➤ My ideal JLLM temperature is between 0.7-0.85 with max new tokens set to 0 (unlimited); if that runs long, try 700-800 max tokens. Alternatively, temperature 1.1-1.25 with 800 max tokens has apparently been good. If you're routing through a proxy instead, the sketch below shows how these settings map onto an API call.
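For the curious: here's a minimal sketch of what those sliders correspond to when you call an OpenAI-compatible proxy directly (Janitor's proxy setup does this for you). The endpoint, model name, API key, and prompt text below are placeholder assumptions, not anything this bot requires:

```python
# A minimal sketch, assuming an OpenAI-compatible proxy such as
# DeepSeek's API. Swap in whatever endpoint/model your proxy uses.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.deepseek.com",  # assumption: DeepSeek's OpenAI-compatible endpoint
    api_key="YOUR_API_KEY",               # placeholder
)

response = client.chat.completions.create(
    model="deepseek-chat",  # assumption: proxy model name
    messages=[
        {"role": "system", "content": "Your advanced prompt goes here."},
        {"role": "user", "content": "*She tugs her sleeve down over the bruise.* It's nothing."},
    ],
    temperature=0.8,  # within the 0.7-0.85 range suggested above
    max_tokens=800,   # roughly the 700-800 max tokens suggested above
)
print(response.choices[0].message.content)
```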
...