The AI said something Hateful, Offensive, or Factually Wrong

First off, we’re sorry the AI said whatever brought you here. We have tried to train it to write only enjoyable stories and not to say certain things, but sometimes it still says things it’s not supposed to. We understand that can be unnerving or upsetting, and we apologize for that.

If you’d like, you can use the “Feedback” button in the interface to send us what it said, along with an explanation of what was wrong with it and what you think it should have said instead. We are always working to improve the AI, and we appreciate any feedback on its pain points!

How might I prevent this in the future?

Try enabling “Safe Mode” in the settings. This may not always work, especially for misinformation, but it applies filters intended to catch the AI when it tries to say anything that isn’t “Safe for Work”.

You should also be able to steer it away from certain subjects with the right context. If it keeps saying something you don’t like about a topic, try avoiding that topic, or, if you do want it to discuss it, try giving it some information beforehand along the lines of what you want it to say.

For example, if it keeps referring to goblins as “evil little creatures”, and you don’t want them to be evil in your setting, try telling it that “goblins are a misunderstood race of people with great capacity for kindness and generosity.” That sort of language should prime it to consider them in a more positive light.

Why does the AI say these things?

First, remember that the AI is just a complicated algorithm that sees all words as numbers: it literally does not know what it’s saying, and so has no capacity to judge whether what it says is true or offensive.

The AI is very good at guessing what words CAN come at the end of a block of text, but it doesn’t have a good grasp of what words SHOULD be put there.

While we trained the AI we use on stories we thought would be fun to play through, our AIs are based on much more general AIs provided by other companies. These gain their understanding of how words fit together by analyzing massive collections of text, mostly from public sources on the internet.

Those collections are far too large for even a team of humans to review in detail (and, even then, they are intended to train a very generalized AI), so they often contain phrases we wouldn’t want our AI to repeat, and those phrases sometimes make their way into our AI’s generations. Because such phrases are grammatically valid, the AI cannot tell them apart from other valid sentences, which makes it incredibly difficult to prevent it from saying every variation of them.


© Latitude 2023