AI News and Information

GURPS

INGSOC
PREMO Member

ChatGPT-style search represents a 10x cost increase for Google, Microsoft




Is a ChatGPT-style search engine a good idea? The stock market certainly seems to think so; it erased $100 billion from Google's market value after the company's poor showing at its recent AI search event. Actually turning a chatbot into a viable business is going to be a challenge, though. Beyond that, Google has had a chat search interface for seven years now—the Google Assistant—and the world's biggest advertising company has been unable to monetize it. And a new report from Reuters points out another monetary problem with generating a chat session for every search: it's going to cost a lot more to run than a traditional search engine.

Today, Google search works by building a huge index of the web; when you search for something, those index entries get scanned, ranked, and categorized, with the most relevant entries showing up in your search results. Google's results page actually tells you how long all of this takes, and it's usually less than a second. A ChatGPT-style search engine would instead fire up a huge neural network modeled on the human brain every time you run a search, generate a bunch of text, and probably also query that big search index for factual information. The back-and-forth nature of ChatGPT also means you'll probably be interacting with it for a lot longer than a fraction of a second.
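The cost asymmetry described above comes down to the work done per query: a keyword search is essentially a lookup in a precomputed inverted index (cheap set operations), while an LLM has to run billions of multiply-adds for every token it generates. A toy sketch of the index side, purely illustrative (the documents and function names here are invented; real engines add ranking, sharding, and caching on top):

```python
# Toy inverted index: term -> set of document IDs containing that term.
# Real search engines build this offline, so query time stays tiny.
docs = {
    1: "google search builds a huge index of the web",
    2: "chatgpt generates text with a large neural network",
    3: "a neural network is modeled loosely on the brain",
}

index = {}
for doc_id, text in docs.items():
    for term in text.split():
        index.setdefault(term, set()).add(doc_id)

def keyword_search(query):
    """Intersect the posting lists for each term -- no model inference at all."""
    postings = [index.get(term, set()) for term in query.lower().split()]
    return set.intersection(*postings) if postings else set()

print(keyword_search("neural network"))  # documents 2 and 3
```

The whole query is a handful of hash lookups and a set intersection, which is why it finishes in a fraction of a second; a chat response instead requires a full forward pass through the model per generated token.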

All that extra processing is going to cost a lot more money. After speaking to Alphabet Chairman John Hennessy (Alphabet is Google's parent company) and several analysts, Reuters writes that "an exchange with AI known as a large language model likely costs 10 times more than a standard keyword search" and that it could represent "several billion dollars of extra costs."
 

GURPS

INGSOC
PREMO Member

Sci-fi becomes real as renowned magazine closes submissions due to AI writers




One side effect of unlimited content-creation machines—generative AI—is unlimited content. On Monday, the editor of the renowned sci-fi publication Clarkesworld Magazine announced that he had temporarily closed story submissions due to a massive increase in machine-generated stories sent to the publication.

In a graph shared on Twitter, Clarkesworld editor Neil Clarke tallied the number of banned writers submitting plagiarized or machine-generated stories. The numbers totaled 500 in February, up from just over 100 in January and a low baseline of around 25 in October 2022. The rise in banned submissions roughly coincides with the release of ChatGPT on November 30, 2022.

A graph provided by Neil Clarke of Clarkesworld Magazine: This is the number of people we've had to ban by month. Prior to late 2022, that was mostly plagiarism. Now it's machine-generated submissions.

Large language models (LLMs) such as ChatGPT have been trained on millions of books and websites and can author original stories quickly. They don't work autonomously, however; a human must guide their output with a prompt, which the AI model then attempts to complete automatically.
 

GURPS

INGSOC
PREMO Member

Elon Musk Launches New Effort To Fight Woke Artificial Intelligence



The report said Musk believes that ChatGPT is an example of “training AI to be woke.” Musk is recruiting Igor Babuschkin, a top researcher who has worked at Alphabet’s DeepMind AI unit and at OpenAI, to help lead the effort.

Musk has repeatedly criticized ChatGPT and OpenAI, the company that created it and that he co-founded, for the direction it has taken.

“The danger of training AI to be woke – in other words, lie – is deadly,” Musk warned shortly after ChatGPT launched and people began noting numerous problems with the tool.

“OpenAI was created as an open source (which is why I named it ‘Open’ AI), non-profit company to serve as a counterweight to Google, but now it has become a closed source, maximum-profit company effectively controlled by Microsoft,” Musk tweeted. “Not what I intended at all.”
 

GURPS

INGSOC
PREMO Member
The coming artificial intelligence economic revolution will be a major shock to the world. There is a serious possibility that the next decade will bring about a series of social and economic changes akin to the Industrial Revolution and the advent of the internet combined. Many writers, human resources officers, lawyers, artists, and even coders increasingly will be replaced by AI as the “laptop class” of workers is decimated. At the same time, blue-collar workers who work with their hands will enjoy job security; their services cannot be replaced by technology. Unfortunately for waves of young people, the media’s advice to “learn to code” may have been like investing in typewriters.

Artificial intelligence is advancing at breakneck speed. Recent announcements of programs that can mimic human conversation, copy our voices, write research papers, and paint beautiful pictures are just a small sliver of the coming AI revolution. The changes in everyday life soon will become noticeable, including the popularity of AI-generated video games, music, art, and even movies. A short description and a click of the mouse can spit out a new novel by John Steinbeck or an economic treatise by Thomas Sowell.



 

GURPS

INGSOC
PREMO Member

Bias creates flaws in AI programs

So the question is: why would an AI system make up a quote, cite a nonexistent article, and reference a false claim? The answer could be that AI and AI algorithms are no less biased and flawed than the people who program them. Recent research has shown ChatGPT’s political bias, and while this incident might not be a reflection of such biases, it does show how AI systems can generate their own forms of disinformation with less direct accountability.

Despite such problems, some high-profile leaders have pushed for its expanded use. The most chilling call came from Microsoft founder and billionaire Bill Gates, who urged the use of artificial intelligence to combat not just “digital misinformation” but “political polarization.”

In an interview on a German program, “Handelsblatt Disrupt,” Gates called for unleashing AI to stop “various conspiracy theories” and to prevent certain views from being “magnified by digital channels.” He added that AI can combat “political polarization” by checking “confirmation bias.”

Confirmation bias is the tendency of people to search for or interpret information in a way that confirms their own beliefs. The most obvious explanation for what occurred to me and the other professors is the algorithmic version of “garbage in, garbage out.” However, this garbage could be replicated endlessly by AI into a virtual flood on the internet.

Volokh, at UCLA, is exploring one aspect of this danger in how to address AI-driven defamation.

There is also a free speech concern over the use of AI systems. I recently testified about the “Twitter files” and growing evidence of the government’s comprehensive system of censorship to blacklist sites and citizens.

One of those government-funded efforts, called the Global Disinformation Index, blacklisted Volokh’s site, describing it as one of the 10 most dangerous disinformation sites. But that site, Reason, is a respected source of information for libertarian and conservative scholars to discuss legal cases and controversies.

Faced with objections to censorship efforts, some Democratic leaders have pushed for greater use of algorithmic systems to protect citizens from their own bad choices or to remove views deemed “disinformation.”
 

GURPS

INGSOC
PREMO Member

Elon Musk Says He Will Create “TruthGPT” To Compete Against “Politically Correct” A.I. Bots Like ChatGPT



Musk criticized OpenAI’s ChatGPT bot and said he will try to create an alternative A.I. bot that is not “trained to be politically correct.”

“I’m going to start something which I know you called TruthGPT, or a maximum truth-seeking AI that tries to understand the nature of the universe,” Musk told Tucker Carlson.

“And I think this might be the best path to safety in the sense that an AI that cares about understanding the universe is unlikely to annihilate humans because we are an interesting part of the universe,” he added.
 

GURPS

INGSOC
PREMO Member

Chinese Communist Party Mandates AI Chatbots 'Reflect Core Values of Socialism'



As part of its holistic strategy of “unrestricted warfare,” the Chinese Communist Party understands well the potential propaganda value of AI-generated content.

To corner the market on narrative control, the CCP’s Cyberspace Administration of China has mandated that AI should “embody core socialist values” that reflect its collectivist principles.

Via Hong Kong Free Press:

Content produced by generative artificial intelligence services should embody “core socialist values,” China’s cyberspace regulator has said as part of its latest draft measures to regulate AI technology firms.
In a list of proposed regulations unveiled on Tuesday, the Cyberspace Administration of China said the country supported the independent innovation of artificial intelligence. But developers of generative AI products must comply with the proposed legal requirements and “respect social morality and public order.”


The Administration’s stated aim is to ban “violent, obscene, pornographic, and false information.”

Continuing:

AI-generated content should not contain messages that would overturn the socialist system, incite separatism, undermine national unity or promote terrorism and extremism, the authorities said. Messages that spread ethnic hatred and discrimination, or contain violent, obscene, pornographic and false information would also be prohibited.

Of course, what’s good for the goose apparently isn’t good for the gander in the CCP’s estimation. While it’s busy protecting its domestic population from undue influences that might undermine its grip on power and Chinese cultural integrity in general, it’s hard at work promoting transgender terrorism to American children and promulgating racist BLM-style agitprop across its TikTok platform.
 

GURPS

INGSOC
PREMO Member
However, with the good comes the bad. One of the biggest concerns with ChatGPT is the potential for bias and misinformation. Because ChatGPT is trained on large datasets that may contain biases, it can generate text that perpetuates those biases. Additionally, because it can generate text that sounds human-like, there is a risk that readers may mistake the generated text for actual human-written content, leading to the spread of misinformation.

Another potential issue with ChatGPT is the impact it may have on the writing profession. As more writers rely on ChatGPT to generate content, there is a risk that the quality of writing may suffer. In addition, the use of ChatGPT may lead to a decrease in demand for human writers, as businesses and publications may opt for ChatGPT-generated content instead.

Finally, let's address the ugly. One of the biggest concerns with ChatGPT is the potential for misuse. Because ChatGPT can generate text that sounds human-like, it can be used to spread propaganda or misinformation on a large scale. Additionally, malicious actors could use ChatGPT to generate spam or phishing messages that appear legitimate, increasing the risk of cyberattacks and fraud.

Another potential issue is the impact ChatGPT may have on the mental health of writers. As ChatGPT becomes more prevalent in the writing profession, writers may feel pressured to use it to keep up with the competition, leading to burnout and other mental health issues. Additionally, the ease and speed with which ChatGPT can generate text may lead to a decrease in the satisfaction and fulfillment that comes from the writing process.



 

GURPS

INGSOC
PREMO Member



Oracle Ends Relationship With Global Disinformation Index Over Free Speech Concerns



“After conducting a review, we agree with others in the advertising industry that the services we provide marketers must be in full support of free speech, which is why we are ending our relationship with GDI,” Michael Egbert, vice president for corporate communications at Oracle, announced in a statement given to the Washington Examiner.

In 2021, Oracle announced a partnership with GDI “to help marketers safeguard ad spend and protect brands from inadvertently supporting disinformation sites.” However, GDI has faced criticism from conservatives and Republican lawmakers for its efforts to suppress conservative media by using its blacklist of conservative sites to persuade companies not to advertise with them.

This move by Oracle follows a similar decision by Microsoft, which temporarily severed ties with GDI’s blacklist group. GDI has also come under scrutiny for hiding critical information on its tax forms, and Rep. Ken Buck (R-CO) demanded transparency regarding the organization’s funding sources. GDI had received funding from the State Department and the National Endowment for Democracy, but the NED has announced that it is cutting ties with GDI and will no longer fund it.
 

stgislander

Well-Known Member
PREMO Member
Although Terminator was my first exposure, the TV show Person of Interest drove home for me the idea that AI was not a good thing. Like a virus in a Wuhan lab, sooner or later it will get out and wreak havoc.
 

3CATSAILOR

Well-Known Member
Tech is good. Beneficial. Enhances quality of life. Etc....

But I just don't see the fascination with making tech look like or emulate a human. And there are AI systems that exhibit extreme anger. I don't particularly like where the future of AI is heading.

Most of society (worldwide) has no clue how dangerous AI can be. Nor do they care. The rush is on for which country can use it first, most likely as a weapon. However, AI can, I believe, wipe out all of mankind one day. The writing on the wall is already there. Our government wants to have only "digital currency"; therefore, it can wipe out your bank account without a trace. It can wipe out electricity worldwide, along with anything controlled by electronics or dependent upon a digital/electronic signature. Perhaps launch codes to subs, God forbid. And the list goes on and on and on. This will become like something out of a sci-fi movie. But the difference is it will be real.
 

GURPS

INGSOC
PREMO Member

Experts Warn AI-Enhanced Law Enforcement Could Lead to Predictive Policing and Authoritarian Control



Experts have been warning about potential ramifications of such technology. Christopher Alexander, CCO of Liberty Blockchain, told Fox News: “I really think it is the predictive analytics capability that if they get better at that, you have some very frightening capabilities.”

Safer Roads Humber is an organization that works with Humber police to “provide courses on driver safety and publish information on our engagement activities,” among other things, according to its website. Ian Robertson, partnership manager for the group, told Fox News: “Personally, I believe a mobile solution would work best as it would ensure road users change their behavior at all times rather than just at a static point.”

As if that wasn’t terrifying enough, Brian Cavanaugh, a visiting fellow at the Heritage Foundation’s Border Security and Immigration Center, cautioned that countries like the United Kingdom, which is quite fond of surveilling its citizens, could become even more authoritarian than it already is using this technology.

“I absolutely see this as a slippery slope,” Cavanaugh said. “You’re going from an open and free society to one you can control through facial recognition [technology] and AI algorithms – you’re basically looking at China.

“The U.K. is going to use safety and security metrics to say, ‘Well, that’s why we did it for phones and cars.’ And then they’re going to say, ‘If you have, say, guns … what’s next on their list of crimes that you crack down on because of safety and security?’” he continued. “All of a sudden, you’re creating an authoritarian, technocratic government where you can control society through your carrots and sticks.”
 

GURPS

INGSOC
PREMO Member
 

Kyle

ULTRA-F###ING-MAGA!
PREMO Member
A.I. Calculates It Will Be More Efficient To Just Let Humanity Destroy Itself

 