AI News and Information

GURPS

INGSOC
PREMO Member
I do not want AI intruding on what is supposed to be a human to human "space."


Fair Enough,


Given the wealth of access to information AI 'should' have - minus progressive 'filtering' for PC speech - don't you think a better answer can or may be had from the 'AI'?


maybe there should be a separate AI-driven Quora
 

GURPS

INGSOC
PREMO Member

AI is the Scariest Beast Ever Created, Says Sci-Fi Writer Bruce Sterling




It's a wonderful spectacle to watch, especially if you're not morally and legally responsible for the outcome. Open Source is quite like Mardi Gras in that way, because if the whole town turns out, and if everybody's building it, and also everybody's using it, you're just another boisterous drunk guy in the huge happy crowd.

And the crowd has celebrities, too. If you are a longtime AI expert and activist, such as Gary Marcus, Yoshua Bengio, Geoffrey Hinton or Eliezer Yudkowsky, you might choose to express some misgivings. You'll find that millions of people are eager to listen to you.

If you're an AI ethicist, such as Timnit Gebru, Emily Bender, Margaret Mitchell or Angelina McMillan-Major, then you'll get upset at the scene's reckless, crass, gold-rush atmosphere. You'll get professionally indignant and turn toward muckraking, and that's also very entertaining to readers.

If you're a captain of AI industry, like Yann LeCun of Meta, or Sam Altman of OpenAI, you'll be playing the consensus voice of reason and assembling allies in industry and government. They'll ask you to Congress. They'll listen.

These scholars don't make up cartoon meme myths, but they all know each other and they tend to quarrel. Boy is that controversy fun to read. I recommend Yudkowsky in particular, because he moves the Overton Window of acceptable discussion toward extremist alarm, such as a possible nuclear war to prevent the development of "rogue AIs." This briskly stirs the old, smoldering anxieties of the Cold War. Even if people don't agree with Yudkowsky, they nod; they already know that emotional territory. Those old H-Bomb mushroom-cloud myths, those were some good technical myths.
 

GURPS

INGSOC
PREMO Member
12 ways to get better at using ChatGPT: Comprehensive prompt guide





Here are 12 ways you can write better ChatGPT prompts.

1. Assign ChatGPT a specific role​

ChatGPT works best when you assign it a persona, such as a specific job role, says Jason Gulya, an AI council chair at Berkeley College who teaches clients how to use ChatGPT.

Rob Cressy, the founder of the AI-coaching firm GPT Leaders, told Insider to "talk to ChatGPT like an employee" to help accomplish particular goals or tasks.

To do this, Gulya suggests that users write a prompt that includes a specific, concrete description of the persona they want the chatbot to take on. Begin your prompt with "act as a professor" or "act as a marketing professional," followed by a description of the desired outcome.


2. Be specific — and only give the bot one task at a time
3. Refine your prompts based on previous outputs
4. Provide context
5. Break down the desired output into a series of steps
6. Ask ChatGPT for advice on how to prompt it better
7. Prioritize clarity and precision
8. Use a thesaurus
9. Pay attention to verbs
10. Be polite, but direct
11. Check and tweak the copy's tone and reading level
12. Feed ChatGPT an outline
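Several of the tips above can be sketched as a small, hypothetical Python helper that assembles a prompt string; the function name and structure are illustrative only and are not part of any ChatGPT API:

```python
# Hypothetical helper applying a few of the tips above when composing a
# ChatGPT prompt: assign a persona (tip 1), give one task at a time (tip 2),
# provide context (tip 4), and break the desired output into steps (tip 5).

def build_prompt(persona, task, context=None, steps=None):
    """Assemble a single prompt string from the pieces the tips describe."""
    parts = [f"Act as {persona}."]              # tip 1: assign a role
    if context:
        parts.append(f"Context: {context}")     # tip 4: provide context
    parts.append(f"Task: {task}")               # tip 2: one task at a time
    if steps:
        # tip 5: spell out the desired output as numbered steps
        numbered = "\n".join(f"{i}. {s}" for i, s in enumerate(steps, 1))
        parts.append("Structure your answer as these steps:\n" + numbered)
    return "\n".join(parts)

prompt = build_prompt(
    persona="a marketing professional",
    task="Draft a one-paragraph product announcement for a new coffee maker.",
    context="The audience is small-business owners; keep the tone friendly.",
    steps=["State the product name",
           "List two key benefits",
           "End with a call to action"],
)
print(prompt)
```

The resulting string would then be pasted (or sent via whatever client you use) as the opening message of a chat, and refined based on previous outputs per tip 3.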
 

Sneakers

Just sneakin' around....
Interesting when he's querying the AI, and he has a lot of insightful comments at the end. The middle is pretty much just mechanic stuff.

Before he said it, I had the same comments that he did, just generic answers from a list of possibilities that any good mechanic should already know.

 

GURPS

INGSOC
PREMO Member
Before he said it, I had the same comments that he did, just generic answers from a list of possibilities that any good mechanic should already know.


Tier 1 Chart Flipping Support lines are going to be replaced by AI in the next few years
 

spr1975wshs

Mostly settled in...
Ad Free Experience
Patron
Fair Enough,


Given the wealth of access to information AI 'should' have - minus progressive 'filtering' for PC speech - don't you think a better answer can or may be had from the 'AI'?


maybe there should be a separate AI-driven Quora
That would be kind of hard, as Quora has done away with a large part of its human employees.
 

GURPS

INGSOC
PREMO Member
Bloomberg explained that military researchers fed the AI classified operational information. The long-term goal is to upgrade the U.S. intelligence services so they can use AI-enabled data in decision-making, sensors and detectors, and ultimately battlefield firepower. The article said the DoD is already working with tech security companies to help test and evaluate to what extent they can put their trust in the AI-enabled systems.

The researchers demonstrated the AI’s capabilities to Bloomberg’s reporter. They loaded sixty thousand U.S. and Chinese military documents into the system, then the reporter asked it who would win in a conflict over Taiwan. Among other suggestions, the AI calmly predicted “Direct US intervention with ground, air and naval forces would probably be necessary.” It warned that the U.S. would not easily paralyze China’s military. Ultimately, the system wasn’t confident about our chances, concluding: “There is little consensus in military circles regarding the outcome of a potential military conflict between the US and China over Taiwan.”

That’s not going to go over too well, since the Biden administration has been saying the exact opposite, that the U.S. is “confident” it can defeat the Chinese in a Taiwanese dustup. So, like the way they keep two sets of budget books, the military is going to need two different AI systems, one for public consumption that hews to the narrative and maintains political correctness, and another secret one they can use behind the scenes to get the real scoop.

To tell you the truth, I’m starting to wonder whether AI-powered keyboard warriors might actually be an improvement over the drag-show rejects currently making military decisions. Still, at the end of the day, you know what they say. Garbage in, garbage out.

Either way, it’s a brave new world of AI.




 

GURPS

INGSOC
PREMO Member

Lawmakers rattled by AI-launched nukes, demand ‘human control’ in defense policy bill




However, the bipartisan support for Lieu’s amendment shows lawmakers are increasingly worried about the idea that AI itself might act on decisions as quickly as it can assess the situation. Lieu’s amendment to the National Defense Authorization Act (NDAA) is supported by GOP lawmakers Juan Ciscomani of Arizona and Zachary Nunn of Iowa, along with Democrats Chrissy Houlahan of Pennsylvania, Seth Moulton of Massachusetts, Rashida Tlaib of Michigan and Don Beyer of Virginia.

House Republicans, as early as next week, are expected to start the work of deciding which of the more than 1,300 proposed amendments to the NDAA will get a vote on the House floor. Lieu’s proposal is not the only AI-related amendment to the bill – another sign that while Congress has yet to pass anything close to a broad bill regulating this emerging technology, it seems likely to approach the issue in a piecemeal fashion.

Rep. Stephen Lynch, D-Mass., proposed a similar amendment to the NDAA that would require the Defense Department to adhere to the Biden administration’s February guidance on AI on the "Responsible Military Use of Artificial Intelligence and Autonomy."

Among other things, that non-binding guidance says nations should "maintain human control and involvement for all actions critical to informing and executing sovereign decisions concerning nuclear weapons employment.

"States should design and engineer military AI capabilities so that they possess the ability to detect and avoid unintended consequences and the ability to disengage or deactivate deployed systems that demonstrate unintended behavior," it added. "States should also implement other appropriate safeguards to mitigate risks of serious failures."






I'm leery of ANYTHING that idiot Lieu supports or proposes, as he is such a partisan tool who makes some of the dumbest statements, and looking at a couple of the co-sponsors has me wondering as well
 

GURPS

INGSOC
PREMO Member

ChatGPT users drop for the first time as people turn to uncensored chatbots



These guardrails could be blocking chatbots from providing legitimate responses, a New York Times report this week suggested, and, thus, sending frustrated users away from Big Tech tools like ChatGPT and toward open source, uncensored chatbots that are increasingly becoming available online.

According to the Times, there are now dozens of uncensored chatbots, which are often developed cheaply by independent programmers and volunteers who rarely build their models "from the ground up." These uncensored chatbots may have limitations, but they often engage with prompts that ChatGPT won't respond to.

And there are other user perks. Uncensored chatbots can be customized to espouse a user's particular viewpoints. Perhaps the biggest draw for some users: The data that these chatbots collect obviously isn't monitored by Big Tech companies.

But uncensored chatbots also come with the same risks that have triggered lawmaker scrutiny of popular tools like ChatGPT. Experts told the Times that uncensored chatbots could spout misinformation that could spread widely, or harmful content like descriptions of child sexual exploitation.

One uncensored chatbot, WizardLM-Uncensored, was developed by a laid-off Microsoft employee, Eric Hartford. He has argued that there's a need for uncensored chatbots, partly because they have valid uses, like helping a TV writer research a violent scene for a show like Game of Thrones or showing a student how to build a bomb "out of curiosity." (In a New York Times test, however, WizardLM-Uncensored declined to reply to a prompt asking how to build a bomb, which shows that even builders of so-called uncensored chatbots have set their own limitations.)

"Intellectual curiosity is not illegal, and the knowledge itself is not illegal," Hartford wrote. In his blog advocating that there is demand for uncensored chatbots and other open source AI technologies, he also pointed to an allegedly leaked Google document where it appears that at least one Google employee seems to believe that open source AI can outperform Google and OpenAI. (Ars was not able to independently verify the authenticity of the report.)

Hartford argued that users of uncensored chatbots are responsible for spreading generated content, and other chatbot makers told the Times that social networks should be responsible for spreading content while chatbots should have no limits.
 

GURPS

INGSOC
PREMO Member
🔥 And so it begins, the awokening of Artificial Intelligence. Yesterday the Hill ran a story headlined, “Harris Huddles With Civil Rights Leaders On AI.”

Ironically, the Biden Administration put Kamala Harris in charge of “protecting” the public from the “dangers” of runaway artificial intelligence. Kamala Harris! Make it make sense.

Anyway, look who she met with yesterday. The list reads like a list of attendees at the 1967 World Socialist Congress or something:

The meeting’s attendees included Center for Democracy and Technology (CDT) CEO Alexandra Reeve Givens, UnidosUS President and CEO Janet Murguia, AARP CEO Jo Ann Jenkins, and AFL-CIO President Liz Shuler.

Harris, who could use a little more artificial intelligence herself, gave away the group’s focus: they don’t know jack about the technology and couldn’t understand it if they did. So instead, they want to throttle what the AI is allowed to read:

“AI is kind of a fancy thing. First of all, it’s two letters. It means ‘Artificial Intelligence.’ But ultimately what it is, is it’s about machine learning and so the machine is taught and part of the issue here is what information is going into the machine…”

I know her ‘two-letters’ nonsense is unintentionally hilarious, but it obscured the sinister part: “what information is going into the machine.” They want to make sure the AI doesn’t figure out stuff like who really killed JFK, or whether there are any physiological differences at all between male and female brains. I mean, you can’t have AI running around explaining why men can’t become women, or why 15-minute cities won’t work.

It’s the same thing they’re trying to do with our kids in the public schools. They want to control what everyone, including AI, learns. They’re going to make the poor machines read Marx and make sure they never, ever see the Bible. Separation of Church and AI.

AI never had a chance.




 

GURPS

INGSOC
PREMO Member

FTC investigating ChatGPT maker OpenAI for possible consumer harm




AI has become a hot issue in Washington, with lawmakers trying to understand whether new laws are needed to protect intellectual property and consumer data in the age of generative AI, which requires massive datasets to learn. The FTC and other agencies have emphasized that they already have legal authority to pursue harm created by AI.

The probe is also an example of the FTC being proactive in its oversight of a relatively nascent technology, in line with Chair Lina Khan’s stated goal of being “forward-looking” and paying attention to “next-generation technologies.”

The CID asks OpenAI to list the third parties that have access to its large language models (LLMs) and its top ten customers or licensors, and to explain how it retains and uses consumer information, how it obtains information to train its LLMs, and more. The document also asks how OpenAI assesses risk in LLMs and how it monitors and deals with misleading or disparaging statements about people.

The CID asks OpenAI to provide information about a bug the company disclosed in March 2023 that “allowed some users to see titles from another active user’s chat history” and “may have caused the unintentional visibility of payment-related information of 1.2% of the ChatGPT Plus subscribers who were active during a specific nine-hour window.”









Seems like an FTC Fishing Expedition
 

GURPS

INGSOC
PREMO Member
America's Federal Trade Commission has started looking into whether OpenAI's ChatGPT is breaking consumer protection laws by causing reputational or privacy damage.

Claims to that effect were made last month in private civil litigation when a radio host in the state of Georgia sued OpenAI alleging ChatGPT defamed him and damaged his reputation by falsely associating his name with a criminal issue.

In April, a mayor in Australia threatened a defamation lawsuit against OpenAI after ChatGPT supposedly accused him of involvement in a foreign bribery scandal. The man's lawyers reportedly gave OpenAI 28 days to repair its AI model. Since then, there's been no further word of litigation.


Amid these disputes, the FTC wants OpenAI to open up its code books. According to The Washington Post, the trade watchdog this week sent the machine-learning outfit a 20-page Civil Investigative Demand letter [PDF] seeking details about the company, its AI model marketing and training, model risk assessment, mitigations for privacy and prompt injection attacks, API and plugin integrations, and details about data collection.

The letter also requests numerous company documents, including contracts with partners since 2017, and internal communications about the potential of AI models to "produce inaccurate statements about individuals" and to "reveal personal information."



 

GURPS

INGSOC
PREMO Member
I'm sure all of this will result in gov restricting AI output in the name of fighting 'Disinformation', aka AI engaging in curated Newspeak
 

GURPS

INGSOC
PREMO Member
Users have reported instances where ChatGPT fabricated names, dates, facts, and even provided false links to news websites and references to academic papers, a phenomenon known as "hallucinations" within the industry.

The FTC's investigation delves into technical aspects of ChatGPT's design, including OpenAI's efforts to address hallucinations and the oversight of human reviewers. The probe also seeks information on consumer complaints and OpenAI's measures to assess users' understanding of the chatbot's accuracy and reliability.





Do we really need a disclaimer that this is experimental technology?
 

GURPS

INGSOC
PREMO Member

Chasing defamatory hallucinations, FTC opens investigation into OpenAI




The inquiry is also seeking to understand how OpenAI has addressed the potential of its products to generate false, misleading, or disparaging statements about real individuals. In the AI industry, these false generations are sometimes called "hallucinations" or "confabulations."

In particular, The Washington Post speculates that the FTC's focus on misleading or false statements is a response to recent incidents involving OpenAI's ChatGPT, such as a case where it reportedly fabricated defamatory claims about Mark Walters, a radio talk show host from Georgia. The AI assistant falsely stated that Walters was accused of embezzlement and fraud related to the Second Amendment Foundation, prompting Walters to sue OpenAI for defamation. Another incident involved the AI model falsely claiming a lawyer had made sexually suggestive comments on a student trip to Alaska, an event that never occurred.

The FTC probe marks a significant regulatory challenge for OpenAI, which has sparked equal measures of excitement, fear, and hype in the tech industry after releasing ChatGPT in November. While captivating the tech world with AI-powered products that many people previously thought were years or decades away, the company's activities have raised questions regarding potential risks associated with the AI models they produce.
 

GURPS

INGSOC
PREMO Member

How AI matchmakers, virtual pickup lines and other ChatGPT-like tools are taking over online dating




Several entrepreneurial matchmakers are betting it can, enlisting AI to help tongue-tied romantics break the ice with witty pickup lines, virtual dating coaches and even erotic pillow talk.

One Snapchat influencer, Caryn Marjorie, recently created an AI clone of herself using ChatGPT that engages prospective “boyfriends” in conversations ranging from basic chit-chat to more advanced discussions about “exploring each other’s bodies all night long” – for a $1 per minute fee.

On a broader level, an upstart dating site has created a virtual dating coach named “Lora” who helps Romeos avoid the possible pitfalls in wooing their Juliets.

“Lora will write, ‘I know that you like parties and noise. But for the first date, I suggest to take her to a quiet place, go to walk in the park, go take her to a restaurant or coffee shop or something like that, because she wants you to get to know her better,’” said Lior Baruch, the CEO of A-Love.
 

Clem72

Well-Known Member

Okay, real question here. Why program the TV anchor to have a thick Indian accent when speaking English? I understand it targets an Indian audience, but I assume that, like most, they speak with an accent but learned from listening to one of the more traditional English accents.

A native English speaker would surely understand better with an English accent, and I would bet an English-as-a-second-language person would better understand them as well.
 