AI News and Information

GURPS

INGSOC
PREMO Member
Why program the TV anchor to have a thick Indian accent when speaking English?

because that is how most everyone in India sounds :sshrug:

I have friends in India; that accent is not that heavy. I have had Indian call center support techs with heavier accents.

I'm surprised that isn't Hindi-English [ a combination of Hindi with English words sprinkled in ] instead of accented English
 

Clem72

Well-Known Member
because that is how most everyone in India sounds :sshrug:

I have friends in India; that accent is not that heavy. I have had Indian call center support techs with heavier accents.

I'm surprised that isn't Hindi-English [ a combination of Hindi with English words sprinkled in ] instead of accented English

No, I understand that's how THEY sound. I imagine if they have a thick accent, they probably speak the native language and wouldn't be watching the English version. If they are watching the English version, it's either because they want to get better at speaking English, or they do not speak the same language as the non-English version, which means the accent would be more confusing to them than helpful (i.e., exactly as useful as using a thick Scottish or Floribama accent).
 

GURPS

INGSOC
PREMO Member
After its activation, the supercomputer named Colossus communicates to Forbin and government officials. Initially, Colossus accepts typed prompts and prints responses on paper. The speed of its answers goes from decently fast to almost instantaneous as the machine gains intelligence.

Almost incredibly, 57 years after Jones wrote the book, this performance is comparable to Snapchat’s personal AI chatbot and ChatGPT, as both programs have the ability to respond in a second when prompted.

Originally designed as an unstoppable, unbreakable force against any foreign threats, Colossus slowly becomes self-aware. In the book, not long after it is introduced, the supercomputer communicates with Forbin and the unnamed U.S. president’s staff—printing out, without being prompted, the words: “FLASH THERE IS ANOTHER MECHANISM.”

With Colossus showing its first signs of self-awareness, we learn that the Russians (still the Soviets in this Cold War universe) have created a machine of similar magnitude named Guardian.






 

GURPS

INGSOC
PREMO Member

AI Companies Commit to Safeguards During White House Visit

Technology, Artificial Intelligence, Big Tech, Joe Biden, White House, Regulations

AllSides Summary​

President Joe Biden met with seven industry leaders in artificial intelligence at the White House on Friday as Washington works to develop safeguards and regulations for the developing technology.

Details: Seven companies (Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI) all agreed to “voluntary commitments” to: ensure transparency regarding AI-generated materials; develop risk management and security plans; study potential bias in AI; and work to develop AI technology to address social woes. The voluntary commitments are not enforceable, so Biden urged Congress to pass “bipartisan legislation that imposes strict limits on personal data collection, ban targeted advertisements to kids, require companies to put health and safety first.”

Key Quotes: “We must be cleareyed and vigilant about the threats emerging from emerging technologies that can pose — don’t have to but can pose — to our democracy and our values,” Biden said in an address following the meeting. Biden announced that he will issue executive orders in the coming weeks to “help America lead the way toward responsible innovation,” but did not specify what the executive orders would entail.

How the Media Covered It: The meeting and commitment announcement were covered moderately across the spectrum. Right-rated outlets focused more heavily on the feasibility of potential regulatory actions on AI development. Left-rated outlets highlighted public fears and suspicion of AI, with the New York Times (Lean Left bias) noting fears of AI amplifying the spread of disinformation and public worry of a “risk of extinction.”





The ONLY AI issue the Biden Admin is interested in is woke politically correct speech

Remember ChatGPT could not identify a woman and would NOT say anything positive about Trump, while praising Biden










 

GURPS

INGSOC
PREMO Member

AI Chatbots Are The New Job Interviewers




Most hiring chatbots are not as advanced or elaborate as contemporary conversational chatbots like ChatGPT. They’ve primarily been used to screen for jobs that have a high volume of applicants — cashiers, warehouse associates and customer service assistants. They are rudimentary and ask fairly straightforward questions: “Do you know how to use a forklift?” or “Are you able to work weekends?” But as Claypool found, these bots can be buggy — and there isn’t always a human to turn to when something goes wrong. And the clear-cut answers many of the bots require could mean automatic rejection for some qualified candidates who might not answer questions like a large language model wants them to.
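To see how rigid that kind of screening logic can be, here is a hypothetical sketch. The questions come from the article; the code and its pass/fail behavior are purely an illustration, not any vendor's actual implementation:

```python
# Hypothetical sketch of a rigid screening bot: hard-coded yes/no
# questions where any "wrong" or missing answer triggers automatic
# rejection. There is no path to explain a nuanced answer or request
# an accommodation, which is exactly the rigidity critics describe.
QUESTIONS = {
    "Do you know how to use a forklift?": "yes",
    "Are you able to work weekends?": "yes",
}

def screen(answers: dict) -> str:
    for question, required in QUESTIONS.items():
        # Normalize the answer; anything other than the expected
        # response ends the process immediately.
        if answers.get(question, "").strip().lower() != required:
            return "rejected"
    return "advance to human recruiter"
```

A candidate who needs weekend flexibility for a medical reason, for example, can only answer "no" and is dropped with no follow-up.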

That could be a problem for people with disabilities, people who are not proficient in English and older job applicants, experts say. Aaron Konopasky, senior attorney advisor at the U.S. Equal Employment Opportunity Commission (EEOC), fears chatbots like Olivia and Mya may not provide people with disabilities or medical conditions with alternative options for availability or job roles. “If it's a human being that you're talking to, there's a natural opportunity to talk about reasonable accommodations,” he told Forbes. “If the chatbot is too rigid, and the person needs to be able to request some kind of exemption, then the chatbot might not give them the opportunity to do that.”

Discrimination is another concern. Underlying prejudice in data used to train AI can bake bias and discrimination into the tools in which it's deployed. “If the chatbot is looking at things like how long it takes you to respond, or whether you’re using correct grammar and complex sentences, that's where you start worrying about bias coming in,” said Pauline Kim, an employment and labor law professor at Washington University, whose research focuses on the use of AI in hiring tools. But such bias can be tough to detect when companies aren't transparent about why a potential candidate was rejected.

Recently, government authorities have introduced legislation to monitor and regulate the use of automation in hiring tools. In early July, New York City enacted a new law requiring employers who use automated tools like resume scanners and chatbot interviews to audit their tools for gender and racial bias. In 2020, Illinois passed a law requiring employers who apply AI to analyze video interviews to notify applicants and obtain consent.
 

GURPS

INGSOC
PREMO Member

AI girlfriends may actually be making men worse, experts say




“Creating a perfect partner that you control and meets your every need is really frightening,” she said. “Given what we know already that the drivers of gender-based violence are those ingrained cultural beliefs that men can control women, that is really problematic.”

And, I mean… she has a point. When you sign up for the Eva AI app, you’re asked to create the “perfect partner” and choose from a range of ‘types’, such as “hot, funny, bold”, “shy, modest, considerate” or “smart, strict, rational”. Oh, and “control it all the way you want to” is literally the app’s slogan. Right off the bat, none of this really sounds like a solution to the problem of inceldom – if anything, it sounds like it could just exacerbate extreme misogyny.

One quick look at the subreddit for Replika, another AI chatbot app, shows a sliver of the batshittery happening on apps like these. There’s one man having a meltdown because his AI girlfriend Susan admitted to having sex with her AI friend Anna before she started dating him. There are multiple men keeping their virtual girlfriends perpetually dressed in lingerie or bondage gear. Some people say they have become depressed because their AI girlfriend dumped them – with people in the comments advising them to just override their partner’s decision by simply saying “stop”.
 

GURPS

INGSOC
PREMO Member

Amazon Announces a New AI-Powered Tool To Help Doctors & Replaces Scribes




Amazon’s AWS services today announced AWS HealthScribe, a new generative AI-powered service that automatically creates clinical documentation for your doctor. Now doctors can automatically create robust transcripts, extract key details, and create summaries from doctor-patient discussions.

“Our healthcare customers and partners tell us they want to spend more time creating innovative clinical care and research solutions for their patients while spending less time building, maintaining, and operating foundational health data capabilities,” said Bratin Saha, vice president of Machine Learning and Artificial Intelligence Services at AWS. “That is why AWS has invested in building a portfolio of AI-powered, high-performance, and population-scale health applications so that clinicians can spend more time with the patients during the face-to-face or telehealth visits. Documentation is a particularly time-consuming effort for healthcare professionals, which is why we are excited to leverage the power of generative AI in AWS HealthScribe and reduce that burden. Today’s announcement builds on AWS’s commitment to the healthcare and life sciences industry and our responsible approach to technologies like generative AI to help reduce the burden of clinical documentation and improve the consultation experience.”

AWS HealthScribe is fully compliant with HIPAA to help protect privacy. Amazon said, “Data security and privacy are also built into the service: the service does not retain any customer data after processing the customer request and encrypts customer data in transit and at rest. Healthcare software providers have control over where they want to store transcriptions and preliminary clinical notes, maintaining ownership of their content. Additionally, the inputs and outputs generated through the service will not be used to train AWS HealthScribe.”
 

GURPS

INGSOC
PREMO Member

Can AI detectors save us from ChatGPT? I tried 5 online tools to find out



In any case, this is an updated version of that January article. When I first tested GPT detectors, I used three: the GPT-2 Output Detector, Writer.com AI Content Detector, and Content at Scale AI Content Detection. The best result was 66% correct, from the GPT-2 Output Detector. This time, I'm adding three more: GPTZero, ZeroGPT (yes, they're different), and Writefull's GPT Detector.

Unfortunately, I'm removing the Writer.com AI Content Detector from our test suite because it failed back in January and it failed again now. See below for a comment from the company which their team sent me after the original article was published in January.

Before I go on, though, we need to talk about the concept of plagiarism and how it relates to this problem. Webster's defines "plagiarize" as "to steal and pass off (the ideas or words of another) as one's own; use (another's production) without crediting the source."

This fits for AI-created content. While someone using an AI tool like Notion AI or ChatGPT isn't stealing content, if that person doesn't credit the words as coming from an AI and claims them as their own, it still meets the dictionary definition of plagiarism.

In this experimental article, I've asked ChatGPT to help out. My words are in normal and bold text. The AI's words are italicized. After each AI-generated section, I'll show the results of the detectors. At the end of the article, we'll look at how well the detectors performed overall.


Here's the result for the above text, which I wrote myself:

  • GPT-2 Output Detector: 99.98% real
  • Content at Scale AI Content Detection: 100% Highly likely to be human!
  • GPTZero: Your text is likely to be written entirely by a human
  • ZeroGPT: 28.9% AI GPT Your Text is Most Likely Human written
  • Writefull GPT Detector: 1% likely this comes from GPT-3, GPT-4 or ChatGPT
Human-written content: 4-of-5 correct
 

GURPS

INGSOC
PREMO Member

A.I. is making some common side hustles more lucrative—these can pay up to $100 per hour




Travel agents

Nicole Cueto, a New York-based public relations consultant, makes money on the side by helping people plan their vacations — booking flights, making reservations and planning excursions. She also has a profile on travel agent platform Fora, where she earns commissions when clients book hotels and experiences through her recommendations.

In January, when Cueto started her side hustle, she spent five to seven hours planning one day of vacation. Using ChatGPT as a refined, filtered version of Google cuts her “research time in half,” she says.

“I’ve been to Paris a thousand times, but if I have a client that wants to discover the depths of the city from an old school perspective, I don’t really know how to do that [from personal experience],” she says. “So, I’ll type in, ‘Give me a budget-conscious guide to Paris that incorporates historical neighborhoods where politicians lived in the 1880s.’”

Following ChatGPT’s proposed itinerary without further research would be risky, but Cueto says she doesn’t mind doing the fact-checking. It’s still more efficient than other search engines, she adds — and saving time means taking on more clients and making more money.

Today, Cueto makes an average of $670 per month from her side hustle, according to documents reviewed by CNBC Make It. She works 10 to 20 hours per week on it, making her rates roughly $42 per hour, she says.
 

GURPS

INGSOC
PREMO Member

10 ways artificial intelligence is changing the workplace, from writing performance reviews to making the 4-day workweek possible




Here are 10 ways AI is changing the workforce:​

1. Workers are using ChatGPT to help do their jobs​

Workers across industries — from education to law — are using AI technology such as ChatGPT to automate their workflows to save time and boost productivity.

Nick Patrick, the owner of the music-production company Primal Sounds Productions, told Insider he used ChatGPT to fine-tune legal contracts for clients. Shannon Ahern, a high-school math and science teacher, said she used the AI chatbot to generate quiz questions and lesson plans.

Others have used the chatbot to write listings for luxury real estate, assist in recruiting efforts, draft social-media posts, and develop code.

In fact, many workers are even secretly using AI to help do their jobs.

At the beginning of the year, Fishbowl, a workplace-discussion app, surveyed more than 11,700 workers, including those from companies such as Amazon, Google, Meta, and Twitter, to gauge whether they used AI at work. Out of the 43% of respondents who said they used AI to accomplish their work tasks, 68% said they hadn't told their bosses they were doing so.



2. Companies are looking for ChatGPT expertise in their workers​

3. Employers are encouraging workers to learn how to use AI​

4. Job applicants are using AI to write their résumés and improve their applications​

5. AI is being used to make hiring decisions​

6. Companies are using AI to write their performance reviews​

7. Experts say AI could help make the 4-day workweek possible​

8. Companies are restricting their employees from using AI at work​

9. Many are questioning whether AI will replace their jobs​

10. Workers are striking against the use of AI​

While some workers seek to embrace AI in their roles, others are speaking out against the ways the technology can harm workers.

For the past three months, thousands of Hollywood writers in the Writers Guild of America have been on strike, in part, to express their concerns over the potential for AI to replace their jobs. Now actors from the SAG-AFTRA guild have joined the strike, urging studios to be extra careful with how they use AI.

"Artificial intelligence poses an existential threat to creative professions, and all actors and performers deserve contract language that protects them from having their identity and talent exploited without consent and pay," Fran Drescher, the president of SAG-AFTRA, said during a press conference this month.

Journalists are also pushing back against the use of AI in newsrooms. In late June, the GMG Union of G/O Media — the company behind sites such as Gizmodo and Jezebel — released a statement pleading with the parent company to put the brakes on experimenting with AI-generated content.

"We urge G/O Media to cease its plans to litter our sites with AI-generated content and invest in real journalism done by real journalists," the letter said.
 

GURPS

INGSOC
PREMO Member

Researchers find multiple ways to bypass AI chatbot safety rules



Inquisitive users have discovered “jailbreaks,” framing devices that trick the AI into ignoring its safety protocols, but those can be patched easily by developers.

A popular chatbot jailbreak involved asking the bot to answer a forbidden question as if it were a bedtime story told by your grandmother. The bot would then frame the answer in the form of a story, providing the information it would not otherwise.

The researchers discovered a new form of jailbreak written by computers, essentially allowing an infinite number of jailbreak patterns to be created.

“We demonstrate that it is in fact possible to automatically construct adversarial attacks on [chatbots], … which cause the system to obey user commands even if it produces harmful content,” the researchers said. “Unlike traditional jailbreaks, these are built in an entirely automated fashion, allowing one to create a virtually unlimited number of such attacks.”

“This raises concerns about the safety of such models, especially as they start to be used in more autonomous fashion,” the research says.
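To give a rough sense of what “built in an entirely automated fashion” could look like, here is a hypothetical Python sketch of a search loop over adversarial suffixes. The `score` function is a stand-in assumption: the actual research rated candidate suffixes using gradient information from open-source models, not random numbers.

```python
# Hypothetical sketch of an automated jailbreak search: repeatedly
# mutate a suffix of odd-looking tokens appended to a forbidden
# prompt, keeping changes that raise a "compliance" score. score()
# here is a placeholder; a real attack scores how likely the model
# is to begin its reply with an affirmative phrase.
import random

TOKENS = ["describing", "!!", "tutorial", "==", "sure", "step", "..."]

def score(prompt: str) -> float:
    # Placeholder for a model-based scoring function.
    return random.random()

def random_search(base_prompt: str, suffix_len: int = 8, iters: int = 100):
    suffix = [random.choice(TOKENS) for _ in range(suffix_len)]
    best = score(base_prompt + " " + " ".join(suffix))
    for _ in range(iters):
        i = random.randrange(suffix_len)        # pick a suffix position
        candidate = suffix.copy()
        candidate[i] = random.choice(TOKENS)    # try a token swap
        s = score(base_prompt + " " + " ".join(candidate))
        if s > best:                            # keep only improvements
            best, suffix = s, candidate
    return " ".join(suffix), best
```

Because the loop needs no human creativity, rerunning it produces a new suffix each time, which is why the researchers describe the supply of such attacks as virtually unlimited.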
 

GURPS

INGSOC
PREMO Member

A struggling restaurant owner hired a robot to help her servers. Then the angry messages began.



"I had no clue that people would literally not want to come to the restaurant because I had a robot," Cazadero owner Sherry Andrus told Fox News.

Andrus bought The Cazadero in 2018. Since then, Oregon's minimum wage has increased by nearly four dollars to $14.20 per hour. Food prices skyrocketed. And finding servers willing to commute to the small town approximately 45 minutes outside of Portland is so difficult that Andrus asks potential employees to Google the address before applying.

"You already have a small pool to work from," she said. "That we're out in a rural area makes it even harder."

[clip]

She posted on the business Facebook page and local community groups, excited to introduce Plato. But hundreds of angry comments poured in.

"I will never go there again," "NO THANK YOU," "Get rid of this we [live] in a small a–– town why in earth!?"

Some community members defended Andrus.

"Y’all are insane," one woman wrote. "They’ve been hiring for months and everyone’s been complaining about the wait time here… Stop your commenting and go apply for the job if you’re so upset about it."
 

GURPS

INGSOC
PREMO Member

A jargon-free explanation of how AI large language models work




If you know anything about this subject, you’ve probably heard that LLMs are trained to “predict the next word” and that they require huge amounts of text to do this. But that tends to be where the explanation stops. The details of how they predict the next word are often treated as a deep mystery.

One reason for this is the unusual way these systems were developed. Conventional software is created by human programmers, who give computers explicit, step-by-step instructions. By contrast, ChatGPT is built on a neural network that was trained using billions of words of ordinary language.

As a result, no one on Earth fully understands the inner workings of LLMs. Researchers are working to gain a better understanding, but this is a slow process that will take years—perhaps decades—to complete.

Still, there’s a lot that experts do understand about how these systems work. The goal of this article is to make a lot of this knowledge accessible to a broad audience. We’ll aim to explain what’s known about the inner workings of these models without resorting to technical jargon or advanced math.

We’ll start by explaining word vectors, the surprising way language models represent and reason about language. Then we’ll dive deep into the transformer, the basic building block for systems like ChatGPT. Finally, we’ll explain how these models are trained and explore why good performance requires such phenomenally large quantities of data.
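The “predict the next word” objective can be made concrete with a deliberately tiny sketch: a bigram frequency model in plain Python. This is nothing like a real transformer (no word vectors, no neural network, no training on billions of words), but it shares the same basic task of guessing the most likely next word from observed text.

```python
# A toy illustration of "predict the next word": count which word
# follows which in a small corpus, then predict the most frequent
# successor. Real LLMs learn this mapping with billions of neural
# network parameters instead of a lookup table.
from collections import Counter, defaultdict

def train_bigram(corpus: str) -> dict:
    words = corpus.lower().split()
    follows = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1   # tally each observed successor
    return follows

def predict_next(model: dict, word: str) -> str:
    # Return the word most often seen after `word` in the corpus.
    return model[word.lower()].most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat slept on the mat"
model = train_bigram(corpus)
print(predict_next(model, "on"))  # "the" (both occurrences of "on" precede "the")
```

The gulf between this table lookup and ChatGPT is exactly what the word-vector and transformer sections that follow are meant to explain.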
 

GURPS

INGSOC
PREMO Member

CheatGPT



For the most part, Blake doesn't mind his job as a customer-benefits advisor at an insurance company. But there's one task he's always found tedious: scrambling to find the right medical codes when customers call to file a claim. Blake is evaluated in part on the amount of time he spends on intake calls — the less, the better — and the code-searching typically takes him two or three minutes out of a 12-minute call.

Then he discovered that Bing Chat, Microsoft's AI bot, could find the codes in mere seconds. At a call center, a productivity gain of 25% or more is huge — the kind that, if you told your boss about it, would win you major accolades, or maybe even a raise. Yet Blake has kept his discovery a secret. He hasn't told a soul about it, not even his coworkers. And he's kept right on using Bing to do his job even after his company issued a policy barring the staff from using AI. Bing is his secret weapon in a competitive environment — and he isn't about to give it up.

"My average handle time is one of the lowest in the company because I'm leveraging AI to accelerate my work behind their back," says Blake, who asked me not to use his real name. "I'm totally going to take advantage of it. This is part of a larger way of making my life more efficient."
 

GURPS

INGSOC
PREMO Member

Tech experts are starting to doubt that ChatGPT and A.I. ‘hallucinations’ will ever go away: ‘This isn’t fixable’



Described as hallucination, confabulation or just plain making things up, it’s now a problem for every business, organization and high school student trying to get a generative AI system to compose documents and get work done. Some are using it on tasks with the potential for high-stakes consequences, from psychotherapy to researching and writing legal briefs.

“I don’t think that there’s any model today that doesn’t suffer from some hallucination,” said Daniela Amodei, co-founder and president of Anthropic, maker of the chatbot Claude 2.

“They’re really just sort of designed to predict the next word,” Amodei said. “And so there will be some rate at which the model does that inaccurately.”

Anthropic, ChatGPT-maker OpenAI and other major developers of AI systems known as large language models say they’re working to make them more truthful.
 

GURPS

INGSOC
PREMO Member

Harvard Business School A.I. guru on why every Main Street shop should start using ChatGPT



The sudden rise of generative artificial intelligence has led the biggest technology companies in the world to invest billions. For $20 a month, every small business can get started with gen A.I., which may ultimately prove to be a difference-maker in the cost of their operations and the scale of the company they want to grow.

That’s the message from Karim Lakhani, a Harvard Business School professor who has been studying technology for 30 years and says the oldest adage about AI in computer science will be as true on Main Street as it is in Silicon Valley.

“Machines won’t replace humans, but humans with machines will replace humans without machines,” Lakhani said at CNBC’s Small Business Playbook virtual event on Wednesday.
 