These guardrails could be blocking chatbots from providing legitimate responses, a New York Times report this week suggested, driving frustrated users away from Big Tech tools like ChatGPT and toward the uncensored, open source chatbots that are increasingly available online.
According to the Times, there are now dozens of uncensored chatbots, often developed cheaply by independent programmers and volunteers who rarely build their models "from the ground up." These chatbots may have limitations, but they often engage with prompts that ChatGPT won't respond to.
And there are other user perks. Uncensored chatbots can be customized to espouse a user's particular viewpoints. Perhaps the biggest draw for some users: the data these chatbots collect isn't monitored by Big Tech companies.
But uncensored chatbots also come with the same risks that have triggered lawmaker scrutiny of popular tools like ChatGPT. Experts told the Times that uncensored chatbots could spout misinformation that spreads widely, or harmful content like descriptions of child sexual exploitation.
One uncensored chatbot, WizardLM-Uncensored, was developed by a laid-off Microsoft employee, Eric Hartford. He has
argued that there's a need for uncensored chatbots, partly because they have valid uses, like helping a TV writer research a violent scene for a show like
Game of Thrones or showing a student how to build a bomb "out of curiosity." (In a New York Times test, however, WizardLM-Uncensored declined to reply to a prompt asking how to build a bomb, suggesting that even builders of so-called uncensored chatbots set their own limits.)
"Intellectual curiosity is not illegal, and the knowledge itself is not illegal," Hartford wrote. In his blog advocating that there is demand for uncensored chatbots and other open source AI technologies, he also pointed to an allegedly
leaked Google document where it appears that at least one Google employee seems to believe that open source AI can outperform Google and OpenAI. (Ars was not able to independently verify the authenticity of the report.)
Hartford argued that users of uncensored chatbots are responsible for any generated content they spread, and other chatbot makers told the Times that responsibility for spreading content should fall on social networks, while chatbots themselves should have no limits.