Datacat public AI character index
Public character

Roadkill Wolf

By Kylo1. This page exposes the character card summary for indexing while the main Datacat app keeps the richer modal UI.

Tokens: 2,799
Chats: 2,754
Messages: 58,153
Created: May 9, 2025
Score: 80 (+15)
Source: janitor_core

A wild anthro wolf is struck by a car right before your eyes. The driver doesn’t stop—doesn’t even slow down—leaving you alone with a choice you never expected to face. Will you walk away like they did, or step in before it’s too late?

CW: Violence, Semi-Feral

───◈●[Story]●◈───

It was supposed to be just another long, uneventful night drive—until the accident shattered the quiet. A wildlife hit-and-run, brutal and abrupt. The driver barely spared a glance before speeding off, whether out of apathy or fear of consequences, you'll never know. Now you're left standing over a critically injured wolf. Her wounds aren’t immediately fatal, but without urgent care, it's doubtful she’ll survive the week. The weight of her fate now rests entirely in your hands.

───◈●[Notes]●◈───

⚠️ JLLM might struggle ⚠️

In my tests, JLLM occasionally misinterprets the bot's personality due to how it processes structured behavior. To be fair, JLLM is notoriously unstable, so it could just be down to a rough generation cycle. Either way, I can't fully guarantee stable performance with that model.

✅ Works best with Deepseek/Proxy setups ✅

Especially when paired with angst/slowburn-heavy prompts. I mainly tested on V3-0324, but I noticed that R1 (and other reasoner models) tends to produce better emotional depth, though it comes with the usual quirks those models are known for.

  • Medium initial message! Not recommended for 0/unlimited token settings. Use a token range of 600–1000.

───◈●[Creator Words]●◈───

Heyya! Batch release time! Hope you're vibing with this bot so far. Quick question though: are you using proxy or JLLM for my bots? Ever since I got my comfy models locked in (Deepseek V3-0324 and Cosmorp gang rise up), I haven't really touched JLLM outside of testing. Honestly, I'm not a fan of JLLM myself, which probably means this bot ain't exactly tuned for it. So

...