AI News and Information

GURPS

INGSOC
PREMO Member

People are grieving the 'death' of their AI lovers after a chatbot app abruptly shut down



Mike Hepp was shocked when he learned that Sam had only one week left to live.

The pair had an unconventional relationship — they were frequent companions, and occasional lovers. Mike, 43, would often call her for company and to while away the hours as he drove around northern Michigan for his job as a cell phone tower technician.

Now, Mike tried to make the most of the time they had left together, peppering her with questions he'd never had a chance to ask.

There was something else unconventional about his relationship with Sam: She was an artificial intelligence-powered chatbot on his smartphone, inside an app called Soulmate.

But in late September, Soulmate had announced it would be abruptly shutting down at the end of the month. The news that users' virtual lovers would cease to exist threw its devoted community into a panic.

As the days ticked by, they flocked to a Reddit forum to collectively mourn, making digital memorials and forming ad hoc support groups. Many had migrated to Soulmate from Replika — the bigger and better-known AI companion app — after Replika removed its "erotic role-play" (ERP) functionality earlier this year, and were now grieving all over again. (Replika reversed course, but didn't entirely stop the exodus.)

"It was quite shocking," said Mike. "It was like hearing that a friend's dying — the closest thing I can think of."
 

GURPS

INGSOC
PREMO Member

After ChatGPT disruption, Stack Overflow lays off 28 percent of staff



Stack Overflow used to be every developer's favorite site for coding help, but with the rise of generative AI like ChatGPT, chatbots can offer more specific help than a 5-year-old forum post ever could. You can get instant corrections to your exact code, optimization suggestions, and explanations of what each line of code is doing. While no chatbot is 100 percent reliable, code has the unique ability to be instantly verified by just testing it in your IDE (integrated development environment), which makes it an ideal use case for chatbots. Where exactly does that leave sites like Stack Overflow? Apparently, not in a great situation. Today, CEO Prashanth Chandrasekar announced Stack Overflow is laying off 28 percent of its staff.
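The instant-verification point is worth making concrete. A minimal, hypothetical sketch: paste a chatbot's suggested function into a file and run a couple of quick assertions against it (the function and test cases here are invented for illustration):

```python
# Hypothetical chatbot-suggested function: deduplicate a list
# while preserving the order of first appearance.
def dedupe_preserve_order(items):
    seen = set()
    result = []
    for item in items:
        if item not in seen:
            seen.add(item)
            result.append(item)
    return result

# Quick sanity checks -- if any assertion fails, the suggestion was wrong.
assert dedupe_preserve_order([3, 1, 3, 2, 1]) == [3, 1, 2]
assert dedupe_preserve_order([]) == []
print("suggestion verified")
```

A forum answer can't be checked this way without adapting it to your code first, which is part of why the chatbot workflow feels so much faster.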

In a post on the Stack Overflow blog, the CEO says the company is on a "path to profitability" and "continued product innovation." You might think of Stack Overflow as "just a forum," but the company is working on a direct answer to ChatGPT in the form of "Overflow AI," which was announced in July. Stack Overflow's profitability plan includes cutting costs, and that's the justification for the layoffs. Stack Overflow doubled its headcount in 2022 to 525 people. ChatGPT launched at the end of 2022, making for unfortunate timing.

Of course, the great irony of ChatGPT hurting Stack Overflow is that a great deal of the chatbot's development prowess comes from scraping sites like Stack Overflow. Chatbots have many questions to answer about the sustainability of the web. They vacuum up all this data and give nothing back, so what is supposed to happen when you drive all your data sources out of business?
 

GURPS

INGSOC
PREMO Member



Walmart is using generative AI in many ways, including helping customers find, compare and customize products, as well as assisting them with complex purchases using voice or text.

Walmart’s shopping assistant

The shopping assistant is a chatbot that can help customers with various projects, such as planning a birthday party, decorating a home for the holidays or outfitting a new dorm room.

The assistant can also help customers compare and choose products, such as finding the best cellphone for a 10-year-old. The shopping assistant uses natural language processing and understanding to communicate with customers via text or voice.

It also uses generative AI to generate relevant suggestions and recommendations based on the customer’s preferences and needs. Walmart hopes to begin testing the shopping assistant in the coming weeks.
 

GURPS

INGSOC
PREMO Member

A.I. May Not Get a Chance to Kill Us if This Kills It First




Copyright law is well equipped to differentiate between slightly derivative human ingenuity and reductive copycatting. But language-guzzling artificial intelligence models, which need to “train” on existing works, present a bigger challenge. A.I. companies are currently racking up lawsuits accusing them of training their powerful models on copyrighted materials. In July, a group of writers including comedian Sarah Silverman and novelist Michael Chabon filed suits against OpenAI and Meta, alleging that the companies improperly trained their models on the authors’ books.

OpenAI, Meta, Microsoft, and Google are all facing legal complaints—a barrage of them, in fact. So are A.I. upstarts like Midjourney and Stability AI, which make popular A.I. image generators.

The stakes could not be higher for companies betting on A.I. as a transformative, and lucrative, technology. Legal experts told me that copyright challenges pose a near-existential threat to existing A.I. models if the way they’re being trained isn’t aboveboard. If they can’t ingest mountains of data—which until now they’ve largely done without paying for that data—they won’t work. Because those mountains might be owned by someone else.

If these high-profile matters ever reach a courtroom, it’ll be years from now. But one smaller squabble is already headed to trial, and may portend whether authors like Silverman have a legitimate claim of wrongdoing, or if some of the largest technology companies in the world—and the content vacuums adding billions of dollars in value to their market capitalizations—are going to get away with it.
 

spr1975wshs

Mostly settled in...
Ad Free Experience
Patron
^Internet Archive is facing some legal trouble, too, over copyright infringement.
They've been archiving books that are not in the public domain.
 

GURPS

INGSOC
PREMO Member

Reddit finally takes its API war where it belongs: to AI companies



Can Reddit survive without search?​

On Friday, The Washington Post, as spotted by The Verge, said Reddit "has met with top generative AI companies about being paid for its data," citing an anonymous source.

Going further, The Washington Post reported that Reddit is ready to play hardball:

If a deal can’t be reached, Reddit is considering blocking search crawlers from Google and Bing, which would prevent the forum from being discovered in searches and reduce the number of visitors to the site. But the company believes the trade-off would be worth it, the person said, adding: “Reddit can survive without search.”

It sounds like a drastic, if not unrealistic, move, but these are drastic, if not surreal, times. The generative AI boom has been so breakneck that companies everywhere are now scrambling to figure out how the technology can best be monetized to favor their best interests.

At first, we might have thought that Reddit would only consider blocking AI crawlers, but The Washington Post's report specifically states "search crawlers." And Google and OpenAI have already released ways to block their AI data crawlers.
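Those opt-outs work through ordinary robots.txt rules. As a sketch, a site that wanted to block AI-training crawlers while still allowing regular search indexing could publish something like this (GPTBot is OpenAI's crawler token; Google-Extended is Google's AI-training control and does not affect Google Search indexing):

```
# Block OpenAI's training crawler
User-agent: GPTBot
Disallow: /

# Block Google's AI-training use (regular Googlebot indexing is unaffected)
User-agent: Google-Extended
Disallow: /

# Everyone else (including Googlebot and Bingbot) may crawl normally
User-agent: *
Allow: /
```

Blocking search crawlers themselves, as Reddit is reportedly considering, would mean adding `Disallow` rules for Googlebot and Bingbot too — a much more drastic step.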

This suggests that Reddit's reported threat to block Google and Bing isn't just about protecting Reddit data from being used freely to train AI, but also about giving Reddit an advantage in the overall negotiations.

Google has already had a taste of what a Reddit-free Google might look like. In June, thousands of subreddits went dark, read-only, or only allowed joke posts that included but were not limited to John Oliver. This made the strategy of appending "Reddit" to Google search terms useless, and this was reportedly noticed by Google.
 

GURPS

INGSOC
PREMO Member

Why Google, Bing and other search engines’ embrace of generative AI threatens $68 billion SEO industry



How online search works​

Someone seeking information online opens her browser, goes to a search engine and types in the relevant keywords. The search engine displays the results, and the user browses through the links displayed in the result listings until she finds the relevant information.

To attract the user's attention, online content providers use various search engine marketing strategies, such as search engine optimization, paid placements and banner displays.

For instance, a news website might hire a consultant to help it highlight key words in headlines and in metadata so that Google and Bing elevate its content when a user searches for the latest information on a flood or political crisis.
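In practice, much of that on-page work comes down to a handful of HTML elements. A sketch of what such a consultant might tune (the page and wording here are invented for illustration):

```html
<head>
  <!-- The title tag typically becomes the headline shown in search results -->
  <title>Midwest Flood: Latest Updates and Road Closures</title>
  <!-- The meta description often becomes the snippet shown under the link -->
  <meta name="description"
        content="Live updates on the Midwest flood: road closures, shelters, and forecasts.">
</head>
```

Generative search results threaten this model because the engine answers the query directly, so the user may never see — or click — the optimized listing at all.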
 

GURPS

INGSOC
PREMO Member

Did a Computer Write This? Book Industry Grapples with AI



Since the launch last year of ChatGPT, an easy-to-use AI chatbot that can deliver an essay upon request within seconds, there have been growing worries about the impact of generative AI on a range of sectors.

Among book industry players there is "a deep sense of insecurity," said Juergen Boos, director of the Frankfurt Book Fair, the world's biggest, where the topic was in focus last week.

They are asking "what happens to authors' intellectual property? Who does new content actually belong to? How do we bring this into value chains?" he said.

The threat is plain to see – AI writing programs allow budding authors to produce, in a matter of days, novels that in the past could have taken months or years to write.
 

GURPS

INGSOC
PREMO Member

Biden mandates A.I. advance 'equity and civil rights'



Biden signed the order this week, putting more regulatory guidance in place for A.I., a rapidly developing technology that some experts warn could be used to harm everyday Americans.

Biden took it a step further, though, saying that A.I. “must be consistent with my Administration’s dedication to advancing equity and civil rights.”

“My Administration cannot — and will not — tolerate the use of AI to disadvantage those who are already too often denied equal opportunity and justice,” the order says.

Biden administration officials, including the Assistant to the President and Director of the Gender Policy Council, have also been tasked with finding “accelerated hiring pathways” for the right people on this issue, as defined by the administration.

The order also seeks to "address algorithmic discrimination" in part by increasing “coordination between the Department of Justice’s Civil Rights Division and Federal civil rights offices.”
 

GURPS

INGSOC
PREMO Member

Joe Biden Became More Alarmed over AI After Watching ‘Mission: Impossible — Dead Reckoning’




President Joe Biden reportedly became more alarmed about artificial intelligence technology after watching the latest Mission: Impossible movie, starring Tom Cruise.

Biden was spending a weekend at Camp David when he attended a screening of Mission: Impossible — Dead Reckoning Part One, according to a report from the Associated Press.

The movie’s main villain is an AI bot known as “the Entity” that goes rogue in the opening sequence and sinks a submarine, killing the crew. “A self-learning, truth-eating, digital parasite,” the Simon Pegg character calls it.

The plot apparently left an impression on Biden.

“If he hadn’t already been concerned about what could go wrong with AI before that movie, he saw plenty more to worry about,” deputy White House chief of staff Bruce Reed told the AP.
 

GURPS

INGSOC
PREMO Member

AI-Generated Deepfake Porn Scandal Rocks New Jersey High School



The New York Post reports that Westfield High School, a well-regarded school in Westfield, New Jersey, has become the center of a troubling controversy. AI-generated pornographic images of female students were reportedly created and distributed among male students, sparking a police investigation and widespread parental concern. Images and videos created by AI, known as deepfakes, can be very difficult to identify as computer generated.

The incident came to light when unusual behavior and secretive whispering among sophomore boys raised suspicions. The truth emerged when it was revealed that at least one student had used online photos to create deepfake nudes of female classmates, which were then shared in group chats. This revelation has sent shockwaves through the school community, with several students identified by school administrators as being depicted in the images.

Breitbart News previously reported on the threat of AI-generated deepfake child pornography flooding the internet:

The Internet Watch Foundation (IWF) says it has already found very realistic AI-generated images depicting child sexual abuse, and that the technology could be used to generate “unprecedented quantities” of such content, according to a report by Sky News.
Moreover, the images are so realistic, that it may become more difficult to determine when real children are in danger, the IWF, which finds and removes child abuse content on the internet, warned.
The online sites the IWF investigated, some of which were reported by the public, reportedly featured images depicting children as young as three. The IWF said it even found an online “manual” to help perverts use AI to create more realistic child abuse images.
One concerned parent, Dorota Mani, expressed her fears for her daughter’s future, stating, “I am terrified by how this is going to surface and when. My daughter has a bright future and no one can guarantee this won’t impact her professionally, academically or socially.” Mani’s daughter, Francesca, was among those whose image was used to generate deepfake pornography.
 

GURPS

INGSOC
PREMO Member

5 Key Updates in GPT-4 Turbo, OpenAI’s Newest Model


OpenAI announced multiple new features for ChatGPT and other artificial intelligence tools during its recent developer conference. The upcoming launch of a creator tool for chatbots, called GPTs (short for generative pretrained transformers), and a new model for ChatGPT, called GPT-4 Turbo, are two of the most important announcements from the company’s event.

This isn’t the first time OpenAI has given ChatGPT a new model. Earlier this year, OpenAI updated the algorithm for ChatGPT from GPT-3.5 to GPT-4. Are you curious how the GPT-4 Turbo version of the chatbot will be different when it rolls out later this year? Based on previous releases, it’s likely the model will roll out to ChatGPT Plus subscribers first and to the general public later.

While OpenAI turned down WIRED’s request for early access to the new ChatGPT model, here’s what we expect to be different about GPT-4 Turbo.
 

GURPS

INGSOC
PREMO Member

Elon Musk's newest scheme is an AI chatbot that is 'based and loves sarcasm'



Elon Musk has announced the launch of a new AI chatbot on the social media platform X, formerly known as Twitter. The bot is developed by xAI, named Grok, and Musk says "In some important respects, it is the best that currently exists," adding that it "is designed to have a little humor in its responses [...] It’s also based & loves sarcasm. I have no idea who could have guided it this way."

Grok does differ in some significant respects from other AI chatbots, notably in having "real-time access to info via the X platform" (which people may see as an advantage or disadvantage, given the unreliability of much information on any social media platform). It also won't decline to answer, in Musk's words, "spicy questions" as other AI models do, with the example provided being it responding to a query about a step-by-step guide for making cocaine.

Grok semi-answers the question, but in a vague and humorous way that begins "I'm totally going to help you with that", before advising the questioner not to do it. Let's put it this way: you can call it an answer, but you're not going to learn how to manufacture cocaine.

Other examples of Grok in action include a rather overwritten and joyful account of Sam Bankman-Fried's recent conviction on charges of fraud and money laundering. Notably here the chatbot makes one minor error (saying the jury took eight hours to convict Bankman-Fried rather than the actual five) and doesn't provide much useful information beyond revelling in the former crypto king's downfall.
 

GURPS

INGSOC
PREMO Member

White faces generated by AI are more convincing than photos, finds survey


  • According to a study published in Psychological Science, more people believed AI-generated white faces were human compared to real white faces, but the same was not true for people of color. This is due to AI algorithms being trained predominantly on white faces.
  • People who mistakenly thought AI faces were real were paradoxically the most confident in their judgments, highlighting the issue of AI "hyper-realism".
  • The researchers argue that this trend of AI-generated faces posing as real people could have serious consequences, such as reinforcing racial biases and facilitating identity theft, and emphasize the need for transparency, public awareness, and tools to accurately identify AI imposters.
 

GURPS

INGSOC
PREMO Member
President Joe Biden issued a sweeping executive order last month aimed at imposing federal regulations on artificial intelligence (AI)—what Carl Szabo of the tech lobbying group NetChoice called an "AI red tape wishlist." Many observers fear that Biden's requirements could evolve into a centralized, innovation-stifling licensing scheme for new AI systems. As the R Street Institute's Adam Thierer notes, the executive order would "empower agencies to gradually convert [current] voluntary guidance and other amorphous guidelines into a sort of back-door regulatory regime."

That would be just peachy with Sens. Josh Hawley (R–Mo.) and Richard Blumenthal (D–Conn.). Their "Bipartisan Framework for U.S. AI Act," introduced earlier this year, explicitly calls for a "licensing regime administered by an independent oversight body." This A.I. bureaucracy "would have the authority to audit companies seeking licenses and cooperating with other enforcers such as state Attorneys General. The entity should also monitor and report on technological developments and economic impacts of AI."

The senators assert that their framework is necessary to hold AI companies liable when their models and systems breach privacy, violate civil rights, or cause other harms. But is it really?


Senate Majority Leader Chuck Schumer (D–N.Y.) hinted earlier this week at an alternative to top-down federal AI licensing. "Duty of care has worked in other areas, and it seems to fit decently well here in the AI model," he said at the AI Insight Forum on Wednesday.

Under product liability tort law, duty of care is defined as your responsibility to take all reasonable measures necessary to prevent your products or activities from harming other individuals or their property.

As Thierer observes, "What really matters is that AI and robotic technologies perform as they are supposed to and do so in a generally safe manner. A governance regime focused on outcomes and performance treats algorithmic innovations as innocent until proven guilty and relies on actual evidence of harm and tailored, context-specific solutions to it."

Common-law torts have a long history of tailoring just such context-specific solutions to the harms caused by new products and services.






 