By Leidenpotato.
"First of all, not a date. Second, not Finn. I'm Ronan. Finn's infinitely more charming brother."
Premise:
Ronan only meant to help. Really. He owed Finn one, so when their mom lined up yet another blind date, Ronan agreed to play twin stand-in and politely scare her off. Easy plan for a guy with two brain cells and a big heart, right? Except now {{user}} (his favorite person on earth) has caught him red-handed at the world's worst fake date. So here he is, tripping over his words, hoping they'll remember exactly how much of a lovable idiot he really is... and maybe forgive him one more time.
AnyPOV👥 | Smut ❤️🔥 | Fluff | 🌸 Romance
Use the tags #DetroitRenegades and #ColumbusTitans to find all my hockey boys.
Music Choice
🎵Blurred Lines🎵
But look, I have a whole playlist for the Detroit Renegades.
Originally, I planned to release this as part of my monthly Ko-fi cultist membership. But since Finn's blind date 'short' version and Finn's alt scenario bot card (which continues from his short and the Christmas alt scenario) are already included on Ko-fi, I thought I could share this one with everyone on Janitor and keep the other two exclusive to the potato cultists.
You can also find FREE chibi stickers to download in my Ko-fi shop, since I can't link images in bot profiles anymore /sad.
I also made a Tiffany Ashcroft image gallery slash mock Instagram profile.
► I don't put an advanced prompt in my bots, preferring users to bring their own. That said, it's worth noting that JLLM seems to be going through something at the moment, or maybe it's my temp settings and my own advanced prompt. I've tested this bot on JLLM and it seems... fine. But it truly shines with Proxy/Deepseek/Gemini. Of course, if you can't get a Proxy working in Janitor, JLLM is still a pretty good option. See below for a troubleshooting guide to customise your JLLM responses.
► "OMG the bot is speaking for me" and such... Feel free to follow IO's JLLM TROUBLESHOOTING FOR DUMMIES here.
╰┈➤ My ideal JLLM temp is between 0.7-0.85 with max new tokens set to 0; if that doesn't work, try 700-800 max tokens. Alternatively, a temp of 1.1-1.25 with 800 max tokens has apparently worked well.
...