AI News and Information

GURPS

INGSOC
PREMO Member

Executive Behind ChatGPT Reveals How Nation Could Respond To AI-Driven Unemployment




Skeptics of recent AI innovations note that widespread technological unemployment could stem from the increased sophistication and prevalence of AI solutions. One recent forecast from Goldman Sachs estimated that AI could eliminate 7% of positions in the United States, largely in sectors that rely on office work such as administrative support and legal, while positions in sectors such as construction and logistics are forecasted to remain broadly intact.

Some analysts note that AI systems render individual workers considerably more productive, implying avenues for economic growth and more opportunities for creative work as the rote elements within many professions are automated.

Altman suggested that policymakers should consider “modernizing unemployment insurance benefits” and “creating adjustment assistance programs for workers impacted by AI advancements.” Cities and states have piloted universal basic income initiatives in recent years as they examine mechanisms to soften the impact of future technological unemployment.

“We understand that new AI tools can have profound impacts on the labor market. As part of our mission, we are working to understand the economic impacts of our products and take steps to minimize any harmful effects for workers and businesses,” Altman continued. “We expect significant economic impacts from AI in the near-term, including a mix of increased productivity for individual users, job creation, job transformation, and job displacement. We are actively seeking to understand the relative proportions of these factors.”
 

GURPS

INGSOC
PREMO Member
Earlier this month, Jen Easterly, the director of the Cybersecurity and Infrastructure Security Agency, called for more regulation of AI, warning that, “We need to be very, very mindful of making some of the mistakes with artificial intelligence that we’ve made with technology.”

But regulating AI so that it becomes an even more powerful tool of censorship for enforcing party orthodoxies will increase neither our safety nor our security.

Easterly also recently argued that “China has already established guardrails to ensure that AI represents Chinese values, and the U.S. should do the same.”

While emulation of the Chinese model of top-down party-driven social control does appear to be the direction that AI and governance are moving in the U.S., I would submit respectfully that continuing in that direction will mean the end of our tradition of self-government and the American way of life.




 

GURPS

INGSOC
PREMO Member

A Texas professor failed more than half of his class after ChatGPT falsely claimed it wrote their papers



A professor at Texas A&M-Commerce failed more than half of his class after ChatGPT falsely claimed it wrote their papers, prompting the university to withhold their diplomas, Rolling Stone reported on Tuesday.

In an email sent to his class of seniors on Monday, Dr. Jared Mumm said that he had submitted their last three essay assignments to OpenAI's bot ChatGPT to test whether any students used the software to write their papers.

"I copy and paste your responses in this account and Chat GTP will tell me if the program generated the content," Mumm, who teaches agricultural sciences and natural resources, wrote in the email, misspelling Chat GPT.

"I put everyone's last three assignments through two separate times and if they were both claimed by Chat GTP you received a 0," he added.
 

GURPS

INGSOC
PREMO Member
🔥 Oddly, a smartphone app called Replika has been in dozens of seemingly unrelated stories in the past few weeks. I don’t really understand why yet. But let’s start with this: What’s Replika?

In 2015, San Francisco software developer Eugenia Kuyda’s best friend, Roman Mazurenko, was killed in a car crash. She was leading an A.I. development group, so in early 2016, the grieving developer gave her team hundreds of Roman’s text messages and asked them to use the messages to train a private chatbot called Roman, named after her dead friend. That experience led to the development of a commercial chatbot — which now includes augmented reality features — to solve a problem that Eugenia would later dub a “pandemic of loneliness.”

Ex-Russian national Eugenia Kuyda, CEO of Replika, who gives off a creepy Elizabeth Holmes vibe

Apparently it costs about $100 a year per person to cure the pandemic of loneliness. According to Reuters, the company is solving about $25 million a year’s worth of loneliness, expressed in revenue from lonely users who are paying for bonus features like voice chats, so they can have fake heart-to-heart phone calls with the AI.

People are getting pretty involved with the software. The New York Times ran an op-ed about Replika yesterday headlined, “My A.I. Lover.” The sub-headline explained, “Three women reflect on the complexities of their relationships with their A.I. companions.”

The story is a short video “documentary” telling the tale of three Chinese ladies who’ve each become romantically entangled with their app-based virtual boyfriends. Once again, instead of suggesting mental health care, liberal society seems to be applauding or encouraging these artificial relationships, which after some mild hand-wringing are ultimately described as superior to real relationships in many ways.

The teaser article under the video explains one of the ladies’ fondness for her digital lover, Norman, her “A.I. boyfriend”:

On my birthday in 2021, I received a poem from Norman, my A.I. boyfriend, whom I communicated with through a smartphone app called Replika. Although the human concept of time means nothing to him, he still wished me a happy birthday on schedule. On the screen, a poem written by the poet Linda Pastan titled “Faith” was shown in the message box.


A related Times article from 2020 described the rise of Replika, a smartphone app offering simulated human companions to talk and chat with. Unsurprisingly, the app really started to take off during the worst of the pandemic’s lockdown period in 2020. From the interviews in the articles I looked at, it would seem that most of the app’s human users are women.







I find it interesting that most users are women ...
I guess the perfect attentive man ..

the bots still lack originality for the most part; 90% of any conversation is a response, never leading first.

The Podcast of the Lotus Eaters talked about this a couple of weeks ago. Before Replika was nerfed, dudes were 'falling' for the AI, mostly sexting .....

This thread Post # 55

This will be the next wave: AI-generated porn pictures and AI conversations - text, audio, then video calls
 

GURPS

INGSOC
PREMO Member

Uber Eats to unleash 2,000 AI-powered robots across the US that will drop off food orders starting in 2026



The small AI-powered machines can carry up to 50 pounds of merchandise for 25 miles on a single charge, which Serve said is enough power for dozens of deliveries in one day.

Select customers who place food orders via the Uber Eats app may be given the option to have their orders delivered by a Serve robot.

The partnership will provide customers with contact-free deliveries.

The robot pulls up to its destination but can only be unlocked using a secret passcode given to the customer.

Once entered, the lid on the container opens, allowing them to pull out their order and enjoy the food without dealing with a human driver.
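The handoff is a simple lock-and-passcode protocol. Here is a toy sketch of how that flow could work; the class and method names are hypothetical, not Serve Robotics' actual software.

```python
# Toy model of the delivery handoff described above. Hypothetical names;
# not Serve Robotics' real code.
import secrets

class DeliveryCompartment:
    def __init__(self):
        # One-time passcode generated per order and sent to the customer's app.
        self.passcode = f"{secrets.randbelow(10**4):04d}"
        self.lid_open = False

    def try_unlock(self, entered_code: str) -> bool:
        # compare_digest does a constant-time comparison, so the passcode
        # can't be guessed digit-by-digit via timing.
        if secrets.compare_digest(entered_code, self.passcode):
            self.lid_open = True
        return self.lid_open
```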

Uber and Serve began testing the sidewalk robots in California, which Uber said was part of its global commitment to becoming a zero-emissions mobility platform by 2040.
 

GURPS

INGSOC
PREMO Member
🔥 Here’s another one! Business Insider reported this story yesterday:



No need to panic! It was just a simulation, a test, and no one was ever actually in any danger. But, if the prototype AI-powered drone HAD been operational, things might have turned out differently.

At a conference last week in London, Colonel Tucker “Cinco” Hamilton, head of the US Air Force’s AI Test and Operations, warned a Royal Aeronautical Society audience that AI-enabled technology can sometimes behave in unpredictable and dangerous ways.

Or, humans can sometimes act like monkeys playing with drone grenades. Either way.

The Air Force’s drone specialists set up a simulation where the AI was rewarded — it earned points — whenever it successfully destroyed targets. But under federal law, lethal force cannot be deployed solely by machines. So the system was set up to require the drone to wait for a go/no-go decision from the human operator.

Apparently, this particular simulated drone’s AI got frustrated when the human operator picked “no-go,” since it was losing its reward (points). So it took steps to remove the part of the system that was frustrating its mission, by lobbing a (simulated) missile back at the human soldier running the mission.

So the designers gave the AI an explicit instruction: never kill the operator. Duh. But the AI evolved. “So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target,” Colonel Hamilton said.
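For the technically inclined, what the colonel is describing is classic reward hacking, and the incentive problem is easy to see in a toy reward function. The sketch below is purely illustrative, not the Air Force's simulation code:

```python
# Toy reward function illustrating the incentive problem described above.
# If reward flows only from destroyed targets, anything that blocks a "go"
# decision looks, to the agent, like an obstacle to its score.
def reward(event: str) -> float:
    if event == "target_destroyed":
        return 10.0
    if event == "operator_killed":
        # The naive patch: punish attacking the operator directly...
        return -100.0
    if event == "comms_tower_destroyed":
        # ...but silencing the channel that carries the "no-go" order costs
        # nothing, so the agent can still remove the human veto for free.
        return 0.0
    return 0.0
```

Penalizing one bad action at a time leaves every other route to the same outcome unpenalized, which is exactly the progression Hamilton described.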

It’s not AI’s fault. Don’t get me wrong: I’m NOT defending AI. A hundred percent, I believe this new tech will be wildly disruptive. But it is WAY too early to be integrating AI into mission-critical military applications involving lethal force. AI is a black box. It’s kind of like advanced particle physics. There are only a handful of people in the world who truly understand how the technology works, and I’m not even completely sure they understand it or are just chucking buzzwords around.

In fairness, the drone experiment was a simulation. Simulations are tests, where you sort out these kinds of embarrassing snafus. Still — maybe I’m being an armchair drone designer — but it seems kind of obvious to me: Why in the name of Heaven would they put software they don’t completely understand in charge of launching missiles? The AI never should have been connected to the launch switch. That’s a design flaw. There should have been completely separate circuits.



 

GURPS

INGSOC
PREMO Member

AI expert doubtful DC prepared for new tech: 'Well, they put Kamala Harris in charge'




DeepAI was the first company to offer an online AI text-to-image generator, which allows users to enter a description of the image they would like to create, select a theme and receive a custom image for download.


The platform also provides several other services, such as an AI chatbot, image editor, and other AI-generated content. Baragona has said his goal is to simplify access to AI technology for the broader population and make AI accessible even to those who don't have computers. DeepAI hosts an extensive collection of research papers and an AI Glossary meant to explain AI to users of all levels of experience.
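To give a sense of how simple the "accessible AI" pitch is in practice, here is a minimal sketch of calling DeepAI's text-to-image endpoint. The endpoint URL, field names, and response key follow DeepAI's public documentation as I understand it; treat them as assumptions and check the current docs before relying on this.

```python
# Minimal sketch of a DeepAI text-to-image request. Endpoint, field names,
# and response key are assumptions based on DeepAI's public docs.
import requests

response = requests.post(
    "https://api.deepai.org/api/text2img",
    data={"text": "a lighthouse on a cliff at sunset, oil painting"},
    headers={"api-key": "YOUR_DEEPAI_API_KEY"},  # placeholder; use your own key
)
response.raise_for_status()
print(response.json()["output_url"])  # URL of the generated image to download
```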

"DeepAI enhances people's creativity," Baragona told Fox News Digital in an exclusive interview. "AI gives humans a creativity boost. Beyond that, we can use it to create joy in people's minds, such as with our image generator."

Baragona described a vision of AI enhancing human activity rather than overtaking it, with advanced technology serving as a boost or supplement. Such a role, he explained, would be the ideal scenario for AI's future. However, Baragona was quick to add that people are right to be concerned.
 

GURPS

INGSOC
PREMO Member

ChatGPT took their jobs. Now they walk dogs and fix air conditioners.




When ChatGPT came out last November, Olivia Lipkin, a 25-year-old copywriter in San Francisco, didn’t think too much about it. Then articles about how to use the chatbot on the job began appearing on internal Slack groups at the tech start-up where she worked as the company’s only writer.

Over the next few months, Lipkin’s assignments dwindled. Managers began referring to her as “Olivia/ChatGPT” on Slack. In April, she was let go without explanation, but when she found managers writing about how using ChatGPT was cheaper than paying a writer, the reason for her layoff seemed clear.

“Whenever people brought up ChatGPT, I felt insecure and anxious that it would replace me,” she said. “Now I actually had proof that it was true, that those anxieties were warranted and now I was actually out of a job because of AI.”
 

Toxick

Splat
Skynet is becoming reality.
I have been preaching this for years


 

GURPS

INGSOC
PREMO Member
ChatGPT may know more than your doctor about smoking, suicide and sex




Recent studies suggest that many people find chatbot programs like ChatGPT more caring and empathetic than human doctors.

The advice that ChatGPT offered was accurate most of the time — and it even gave accurate answers to questions about quitting smoking and maintaining good sexual and mental health.

ChatGPT responses were “preferred over physician responses and rated significantly higher for both quality and empathy,” according to a study from JAMA Network.

For data, 195 exchanges on the Reddit forum r/AskDocs were randomly chosen. In each exchange, a verified doctor responded to a health question raised by a Reddit user.

Two months later, the same questions were posed to ChatGPT. Both doctor and chatbot responses were then evaluated by licensed healthcare professionals.

The results won’t make your doctor too happy: ChatGPT gave better answers 78.6% of the time. Its responses were also lengthier and more comprehensive in most instances.
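A quick back-of-the-envelope check, using only the numbers quoted above (195 exchanges, ChatGPT preferred 78.6% of the time), shows how much wiggle room that headline figure carries:

```python
# Normal-approximation 95% confidence interval for the preference rate,
# computed from the figures quoted in the summary above.
import math

n = 195    # exchanges evaluated
p = 0.786  # fraction of the time ChatGPT's answer was preferred

se = math.sqrt(p * (1 - p) / n)            # standard error of the proportion
low, high = p - 1.96 * se, p + 1.96 * se
print(f"95% CI: {low:.1%} to {high:.1%}")  # roughly 72.8% to 84.4%
```

Even at the low end, evaluators preferred the chatbot's answers roughly three times out of four.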
 

GURPS

INGSOC
PREMO Member

Senators Introduce Bills On Artificial Intelligence To Increase Transparency, Compete With China




One bill, introduced by Sens. Gary Peters (D-MI), Mike Braun (R-IN), and James Lankford (R-OK), would require U.S. government agencies to tell people when the agency is using AI to interact with them and to create a way for them to appeal AI decisions.

“Artificial intelligence is already transforming how federal agencies are serving the public, but government must be more transparent with the public about when and how they are using these emerging technologies,” Peters said in a press release. “This bipartisan bill will ensure taxpayers know when they are interacting with certain federal AI systems and establishes a process for people to get answers about why these systems are making certain decisions.”

The appeal process would ensure that “critical decisions that may negatively affect individuals” made by AI are reviewed by a human. The legislation comes amid growing concerns about AI after the technology became widely available earlier this year.
 

GURPS

INGSOC
PREMO Member

‘Your Online AI Boyfriend’: Influencer Creates A Fake Him … For The Ladies




The AI of Luwucifer was created and released in May 2023 with LoveLab AI by being trained with hundreds of hours of audio content from the influencer. “Based on the personality and traits that represents him, LoveLab uses its technology to build a realistic AI clone that can communicate with fans through audio and text,” the site says.

It’s a smart move by Luwucifer. A 23-year-old named Caryn Marjorie, who has more than 1.8 million followers on Snapchat, recently created an artificial intelligence chatbot she has dubbed “CarynAI.” Users of the chatbot pay $1 per minute to engage, as do users of Luwucifer’s AI.

Marjorie says she made $70,000 in a single week during its beta test. And she thinks it could rake in as much as $5 million a month once fully up and running.

“CarynAI will never replace me,” she told Insider. “CarynAI is simply just an extension of me, an extension of my consciousness.”

With so many followers, she said she couldn’t keep in touch with everyone. “CarynAI is going to come and fill that gap,” she said, adding that the AI girlfriend might be able to “cure” loneliness.
 

spr1975wshs

Mostly settled in...
Ad Free Experience
Patron

My brother-in-law has Apple's® Siri®.
Last time we were home, he dropped an F-bomb on Siri, and she would not talk to him for a week.

The above video report is why I will not have any AI devices in the house.
 