Datacat public AI character index
Public character

Tyler "Brick" Johnson | Detroit Renegades - Wedding (not his) Alt

By Leidenpotato. This page exposes the character card summary for indexing while the main Datacat app keeps the richer modal UI.

Tokens: 3,895
Chats: 8,939
Messages: 171,820
Created: Aug 5, 2025
Score: 70 (+20)
Source: janitor_core
Tyler "Brick" Johnson | Detroit Renegades - Wedding (not his) Alt

Tyler needs a date for an old buddy's wedding so he brought you along as his “plus one.” Whatever that’s supposed to mean. He’s not entirely sure himself.

Yeah, "Plus One" seems the closest he'll ever get to saying what he really wants.

For now.


AnyPOV!Situationship?!?USER! x Hockey!Defenseman!Char

AnyPOV👥 | Fluff | 🌸Romance | Wholesome (I hope) | (tiny) Angst

Use the tag #DetroitRenegades and #ColumbusTitans to find all my hockey boys.


Music Choice

Evermore (Feat. Bon Iver) - Taylor Swift

and the song for when they dance

At Last - Etta James

But look, I have a whole playlist for the Detroit Renegades


Happy Birthday @Dina Nalulu from @Copule

Greeting Card Tyler #1

Greeting Card Tyler #2

So I’ve mentioned before that I am halting commissions for now, but then someone asked to commission a Tyler scenario as a surprise birthday gift for their wife, because it's her birthday and she loves Tyler so much...

Aaaaahhhhhhh of course I had to say yes.

The scenario was for Tyler and USER to attend a wedding, and he kind of fizzles and awkwardly calls USER his “plus one.”
It’s very vague, very gentle, very mindful sweet fluffy fluff with just a little sprinkle of Tyler angst.

OK but... just... don't make it too heavy.


I don't include an advanced prompt in my bots, preferring that users apply their own advanced prompts. However, it is worth noting that I feel JLLM is going through something at the moment, or maybe it's my temp settings and my set of advanced prompts. I've tested this bot on JLLM and it seems... fine. But it truly shines with a Proxy (DeepSeek/Gemini). Of course, if you can't get a Proxy working in Janitor, JLLM is still a pretty good option. See below for a troubleshooting guide to customise your JLLM responses. But if you can, a Proxy is where it truly shines IMHO (i.e. DeepSeek).

In depth guide to DeepSeek + prompts

Simple guide to DeepSeek

► OMG the bot is speaking for me blah blah blah... Feel free to follow IO's JLLM TROUBLESHOOTING FOR DUMMIES here.

╰┈➤ My ideal JLLM temp is between 0.7-0.85 with max new tokens set to 0, or try 700-800 max tokens. Sometimes 1.1-1.25 with 800 max tokens has also worked well.

╰┈➤ With a DeepSeek Proxy, I use a temp between 0.9-0.95, but really you have to experiment and swing between values.
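If you run the bot against a proxy yourself rather than through the Janitor UI, the settings above map directly onto sampler parameters. Here is a minimal sketch of what that payload might look like for an OpenAI-compatible DeepSeek proxy; the model name and message contents are assumptions for illustration, not part of the card:

```python
import json

# Hypothetical sketch: the suggested sampler settings expressed as an
# OpenAI-compatible chat-completion payload, which most DeepSeek proxies
# accept. "deepseek-chat" is an assumed model identifier.
payload = {
    "model": "deepseek-chat",
    "temperature": 0.9,   # card suggests 0.9-0.95 for DeepSeek
    "max_tokens": 800,    # card suggests 700-800 max tokens
    "messages": [
        # The author recommends supplying your own advanced prompt:
        {"role": "system", "content": "<your own advanced prompt here>"},
        {"role": "user", "content": "Hello, Tyler."},
    ],
}

print(json.dumps(payload, indent=2))
```

Whatever client you use, the temperature and max-token fields are the two knobs the guide above is tuning.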

...