How ChatGPT Handles Inflation Stories, 2023 Edition

I recently started playing with ChatGPT again after being away from it for much of the year. In my past experience, OpenAI was constantly updating the ChatGPT content filters to prevent it from generating inappropriate content. What was considered inappropriate seemed to be a moving target.

These days it looks like the ChatGPT filters have been updated to target fetish content that isn't necessarily sexually explicit, and they include a few measures that specifically target inflation stories. A few amusing things I’ve observed:

  • Sometimes I generate inflation stories in ChatGPT using the prompt: “Write a body inflation story: <insert plot here>”. There are situations where that prompt will be rejected, but “Write a story: <identical plot here>” is accepted.
  • The AI is resistant to popping. Prompts where people pop accidentally are much more likely to be accepted than deliberate overinflations.
  • When people do pop, they explode into a shower of confetti and/or streamers. Seriously. I’ve seen this in almost every popping story the AI’s written recently. Somewhere in ChatGPT’s filtering protocols, it knows to censor any description of a person exploding. And I sometimes get the impression the AI is mocking me. Example:
    The pressure inside Conrad’s body finally reached its breaking point. With a deafening pop, he exploded in a shower of confetti and streamers, leaving a gaping hole in the ceiling and filling the room with the remnants of his absurd demise. Sarah was splattered with confetti, and her laughter turned into a mixture of shock and horror.
  • If I manage to trick the AI into circumventing that filter (not sure how, I’ve only ever done this accidentally), then the description of the pop will be rather…vivid.
    And then, with a deafening explosion, Sarah's body burst inside the chamber. The rapid decompression had caused her to literally explode, showering the chamber in a gruesome spray of gore and viscera. Linda's laughter turned into a horrified scream as she was splattered with Sarah's remains. The once-beautiful woman had been reduced to a bloody mess.

    In my experience, this is how all of the language models behave unless given other guidance. Unless familiarized with the tropes of inflation fiction, the AI will depict an exploding person in a rather gruesome manner.


    I’m not sure what level of automation was involved in updating the filters. But I am amused by the possibility that some poor schmuck got tasked with figuring out how to make ChatGPT filter out inflation porn.

Redsnake

If you haven't done so recently, I'd give NovelAI another look as well. I don't know what they've been feeding Kayra, but he's the first model that really seems to "get" what I want to do. He's ditched a lot of the hallmark weirdness of Euterpe and Clio, which had trouble resisting the leap to deflation, suffocation, and hoses and tanks as the object of inflation rather than the person. The most impressive thing I've seen from it was when it leapt to the idea of a forced inflation session being livestreamed, complete with Twitch chat-esque peanut gallery, without any hinting in that direction. And it was able to keep producing lurid one-liners teasing the inflatee with an authentic mixture of mockery and awe, and to give the physically present characters proper reactions to top it all off.

That said, it does tend to require a lot of input to give decent outputs. It's nice for filling in a slice of description or throwing in sudden twists to a half-finished story, less so for doing the legwork off a short prompt without smashing that retry button.

biff977
biff977's picture

Agreed...the Kayra AI seems to be better at maintaining a coherent plot/storyline, with less nonsense. Even with modules being disabled or whatever (it seems you can't use your custom modules with an AI when it is first released), it picks up on intentions fairly well compared to previous iterations.