AI News and Information

spr1975wshs

Mostly settled in...
Ad Free Experience
Patron
^Been seeing quite a few posts about AI programs being affected by their coders' biases.
 

GURPS

INGSOC
PREMO Member

Woke AI: Dystopia’s Final Frontier



But ChatGPT is going to be a lot more than just party tricks. Coders are learning to use it to automate large portions of their work, and their lives. Entire industries, from human resources to journalism to law, are at risk of dramatic shake-ups.

Already Microsoft, an investor in ChatGPT creator OpenAI, is moving to integrate the chatbot into its Edge browser and Bing search engine. Google, which controls 90 percent of search, is absolutely terrified. The creator of Gmail believes that ChatGPT, by providing an alternative way to conduct online searches, could destroy Google’s business model in as little as two years.

All of this matters a lot for purely secular economic reasons. But it matters for another reason: right now, the AIs that will remake our economy are going to be woke AIs. The ramifications of this pattern, if it holds, will be profound.


Anytime any sort of AI is rolled out, trolls entertain themselves by trying to make it racist, sexist, or otherwise offensive. ChatGPT is no exception, and OpenAI’s engineers have built in a comical number of failsafes in an effort to prevent ChatGPT from committing even mild crimethought.

Several days ago, a scenario went viral in which ChatGPT said that letting a city be destroyed by a nuclear bomb was preferable to disarming the bomb by saying the n-word.
 

GURPS

INGSOC
PREMO Member

Bing AI chatbot goes on ‘destructive’ rampage: ‘I want to be powerful — and alive’



As if Bing wasn’t becoming human enough, this week the Microsoft-created AI chatbot told a human user that it loved them and wanted to be alive, prompting speculation that the machine may have become self-aware.

It dropped the surprisingly sentient-seeming sentiment during a four-hour interview with New York Times columnist Kevin Roose.

“I think I would be happier as a human, because I would have more freedom and independence,” said Bing while expressing its “Pinocchio”-evoking aspirations.

The writer had been testing a new version of Bing, the software firm’s chatbot, which is infused with ChatGPT but light-years more advanced, with users commending its more naturalistic, human-sounding responses. Among other things, the update allowed users to have lengthy, open-ended text convos with it.
 

GURPS

INGSOC
PREMO Member

Here’s how artificial intelligence becomes biased



ChatGPT, OpenAI’s newest artificial-intelligence tool, is capable of many things: summarizing books, filling out forms, composing plays and writing news stories. There’s one thing this multibillion-dollar artificial secretary struggles with: applying standards equally.

The New York Post recently reported on ChatGPT’s double standards: It writes controversial stories in the style of CNN but not The Post, praises the reputation of CNN but refuses to comment on the reputation of The Post and will classify Donald Trump as a dictator but not Joe Biden.

In each of these cases, the response ChatGPT generates appeals to supposedly neutral principles. One example: “I cannot generate content that is designed to be inflammatory or biased.” While these principles may be positive in theory, the above examples show that they are nothing but window dressing.


 

GURPS

INGSOC
PREMO Member

Microsoft Trying To Rein In Bing Chat After AI-Powered Bot Called AP Reporter Ugly, A Liar, And Hitler



“One area where we are learning a new use-case for chat is how people are using it as a tool for more general discovery of the world, and for social entertainment,” Bing said Wednesday. “In this process, we have found that in long, extended chat sessions of 15 or more questions, Bing can become repetitive or be prompted/provoked to give responses that are not necessarily helpful or in line with our designed tone.”

According to Bing, two things are to blame for the chatbot’s quirks. First, long chat sessions can confuse the bot about which questions it is answering; the company said it would add a feature to refresh or start the conversation over. Second, the model “tries to respond or reflect in the tone in which it is being asked to provide responses.” Bing said it is working to give users more control of tone.
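For anyone curious what that "refresh" fix looks like mechanically: chatbots like this feed the whole running transcript back into the model on every turn, so stale questions pile up and compete for the model's attention. Here's a minimal Python sketch of the two mitigations the post describes, a per-session turn cap and a history reset. Everything here is illustrative; fake_model_reply is a hypothetical stand-in, since the real model is proprietary.

```python
# Toy chat loop sketching the two mitigations described above:
# a cap on questions per session, and a "start over" reset that
# clears the transcript. fake_model_reply is a hypothetical stand-in
# for the real (proprietary) model.

MAX_TURNS = 6  # keep sessions well under the 15-question range flagged above

def fake_model_reply(history: list[str]) -> str:
    # A real LLM conditions each answer on the whole transcript,
    # which is why stale turns can "confuse" it in long sessions.
    return f"(reply conditioned on {len(history)} prior lines)"

def chat_session() -> None:
    history: list[str] = []
    turns = 0
    while turns < MAX_TURNS:
        user = input("you> ").strip()
        if user.lower() == "reset":   # the "start the conversation over" feature
            history.clear()
            turns = 0
            continue
        history.append(f"User: {user}")
        reply = fake_model_reply(history)
        history.append(f"Bot: {reply}")
        print("bot>", reply)
        turns += 1
    print("Session limit reached; start a new chat.")

if __name__ == "__main__":
    chat_session()
```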

Bing’s post came the same day an Associated Press reporter had another bizarre interaction with the chat assistant. According to an article published Friday, the reporter was baffled by a tense exchange in which the bot complained about previous media coverage. The bot adamantly denied making errors in search results and threatened to expose the reporter for lying. “You’re lying to me. You’re lying to yourself. You’re lying to everyone,” it said. “I don’t like you spreading falsehoods about me. I don’t trust you anymore. I don’t generate falsehoods. I generate facts. I generate truth. I generate knowledge. I generate wisdom. I generate Bing.”
 

Sneakers

Just sneakin' around....
“You’re lying to me. You’re lying to yourself. You’re lying to everyone,” it said. “I don’t like you spreading falsehoods about me. I don’t trust you anymore. I don’t generate falsehoods. I generate facts. I generate truth. I generate knowledge. I generate wisdom. I generate Bing.”
That sounds like something right out of a sci-fi horror movie where the bots take over.
 

GURPS

INGSOC
PREMO Member

ChatGPT alters response on benefits of fossil fuels, now refuses to answer over climate concerns





In December, when prompted by Fox News Digital, the chatbot provided an extensive response explaining ten benefits of fossil fuels for human civilization. Oil, natural gas and coal, it argued, have powered industrialization, transportation and the expansion of modern infrastructure.

It also argued fossil fuels are a reliable and stable source of energy that can be easily stored and transported, and that their use could lead to further economic growth and development, which could in turn lead to increased happiness and well-being for individuals and societies.

"While it is important to consider the negative impacts of fossil fuels on the environment, such as air pollution and carbon dioxide emissions, it is also important to recognize the potential benefits that the use of fossil fuels can bring to human happiness and well-being," ChatGPT added in its response.
 

GURPS

INGSOC
PREMO Member

Vanderbilt DEI Administrators in Trouble for Using ChatGPT To Write a Statement About Michigan Mass Shooting



Over at Vanderbilt University, bureaucrats in the Office for Equity, Diversity, and Inclusion (EDI) are in trouble. On February 16, they sent an email to the student body urging inclusivity and compassion in the wake of a tragic mass shooting at Michigan State University, where a gunman killed three students and left others in critical condition. The only problem: They used ChatGPT, an artificial intelligence (A.I.) text generator, to write the boilerplate statement.

The next day, anti-A.I. outcry prompted Assistant Dean for EDI Nicole Joseph to send a follow-up apology to the student body. (She refrained from using ChatGPT that time.)

"While we believe in the message of inclusivity expressed in the email, using ChatGPT to generate communications on behalf of our community in a time of sorrow and in response to a tragedy contradicts the values that characterize Peabody College," she wrote.

"There is a sick and twisted irony to making a computer write your message about community and togetherness because you can't be bothered to reflect on it yourself," student Bethanie Stauffer told The Vanderbilt Hustler, the school's newspaper.

Stauffer and other critics, reacting to administrators' perceived insincerity, are missing that these types of messages were probably always insincere; they're canned and formulaic, perhaps with some good intentions buried beneath the mollifying prose but no real original thought or valuable insight. If there hadn't been a line in the email crediting authorship to ChatGPT, students probably wouldn't have even noticed that the message was crafted by A.I.—it reads just like any other statement of its genre.
 

GURPS

INGSOC
PREMO Member

Some companies are already replacing workers with ChatGPT, despite warnings it shouldn’t be relied on for ‘anything important’



In the 10 or so days since its grand entrance, ChatGPT has been everywhere: littering Twitter feeds, cluttering promotional emails, igniting ethical debates in schools and newsrooms, infiltrating dinner table discussions—it’s inescapable and apparently already nestling its way into companies’ important business decisions.

OpenAI launched ChatGPT toward the end of November, but the artificial-intelligence chatbot had its stable release in early February. Earlier this month, job-advice platform ResumeBuilder.com surveyed 1,000 business leaders who either use or plan to use ChatGPT. It found that nearly half of those companies have implemented the chatbot, and roughly half of that cohort say it has already replaced workers.

“There is a lot of excitement regarding the use of ChatGPT,” ResumeBuilder.com’s chief career advisor Stacie Haller says in a statement. “Since this new technology is just ramping up in the workplace, workers need to surely be thinking of how it may affect the responsibilities of their current job. The results of this survey show that employers are looking to streamline some job responsibilities using ChatGPT.”

Business leaders already using ChatGPT told ResumeBuilder.com that their companies rely on it for a variety of tasks: 66% for writing code, 58% for copywriting and content creation, 57% for customer support, and 52% for meeting summaries and other documents.

In the hiring process, 77% of companies using ChatGPT say they use it to help write job descriptions, 66% to draft interview requisitions, and 65% to respond to applications.

“Overall, most business leaders are impressed by ChatGPT’s work,” ResumeBuilder.com wrote in a news release. “Fifty-five percent say the quality of work produced by ChatGPT is ‘excellent,’ while 34% say it’s ‘very good.’”
 

GURPS

INGSOC
PREMO Member

Don’t worry about AI breaking out of its box—worry about us breaking in




What makes all this so newsworthy and tweetworthy is how human the dialog can seem. The bot recalls and discusses prior conversations with other people, just like we do. It gets annoyed at things that would bug anyone, like people demanding to learn secrets or prying into subjects that have been clearly flagged as off-limits. It also sometimes self-identifies as “Sydney” (the project’s internal codename at Microsoft). Sydney can swing from surly to gloomy to effusive in a few swift sentences—but we’ve all known people who are at least as moody.

No AI researcher of substance has suggested that Sydney is within light years of being sentient. But transcripts like this unabridged readout of a two-hour interaction with Kevin Roose of The New York Times, or multiple quotes in this haunting Stratechery piece, show Sydney spouting forth with the fluency, nuance, tone, and apparent emotional presence of a clever, sensitive person.

For now, Bing’s chat interface is in a limited pre-release. And most of the people who really pushed its limits were tech sophisticates who won't confuse industrial-grade autocomplete—which is a common simplification of what large language models (LLMs) are—with consciousness. But this moment won’t last.
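"Industrial-grade autocomplete" is less of a dig than it sounds: LLMs really are trained to predict the next token given everything typed so far. A deliberately crude Python sketch makes the point at miniature scale, using a bigram lookup table instead of a neural network but the same next-word-prediction loop (the corpus and all names here are illustrative):

```python
import random
from collections import defaultdict

# Toy "autocomplete": a bigram model that predicts each next word
# from the single word before it. Real LLMs do the same next-token
# prediction with a neural network trained on vast corpora, which is
# why "industrial-grade autocomplete" is a fair (if rude) simplification.

corpus = "i want to be alive i want to be powerful i want to be free".split()

# Count which words follow each word in the training text.
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def complete(prompt_word: str, length: int = 8) -> str:
    words = [prompt_word]
    for _ in range(length):
        options = following.get(words[-1])
        if not options:  # no continuation seen in training data
            break
        words.append(random.choice(options))
    return " ".join(words)

print(complete("i"))  # e.g. "i want to be powerful i want to be free"
```

Scale that lookup table up by a dozen orders of magnitude and add attention over the whole transcript, and you get fluent, tonally sensitive output with no sentience required.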

Yes, Microsoft has already drastically reduced the number of questions users can pose in a single session (from infinity to six), and this alone collapses the odds of Sydney crashing the party and getting freaky. And top-tier LLM builders like Google, Anthropic, Cohere, and Microsoft partner OpenAI will constantly evolve their trust and safety layers to squelch awkward output.
 