AI News and Information


PREMO Member

How AI Could Create An Economic Sci-Fi Dystopia

Every technology has advantages and disadvantages; artificial intelligence (AI) is no different. Slated to transform the nature of work and employment, perhaps heralding a new era of efficiency and innovation, the technology nonetheless represents real risks to job security everywhere.

From driving trucks to diagnosing diseases, artificial intelligence systems may well perform tasks we reflexively associate with human labor. The result could be more than widespread job displacement across multiple sectors: an economic tsunami displacing millions of workers.

Nor is this a challenge that can be deferred to some distant future. Experts increasingly consider the labor disruptions posed by AI to be more or less imminent.

And said disruption would extend to both blue-collar and white-collar professions. In other words, artificial intelligence will power hardware (think self-driving Amazon delivery vans or robotic cashiers at McDonald’s) as much as it will power software (artificial intelligence-powered data analysis, coding, accounting, and legal work, to name a few).


PREMO Member
🔥🔥 Here we go again! Yesterday, TechCrunch ran a spooky story headlined, “Former NSA head joins OpenAI board and safety committee.” Because ChatBots need the very best deep state security expertise. I’m reassured. Are you reassured?


After decades of leading domestic spying operations (including those exposed by whistleblower Edward Snowden), General Paul M. Nakasone, 60, retired in February from commanding U.S. Cyber Command and directing the NSA. Apparently unsatisfied with the slow pace of retirement, the nation's top spook is now joining A.I. giant OpenAI, taking a seat on its board of directors and supervising its "trust and safety" operations.

Because of course.

This isn’t the first time TechCrunch reported on General Nakasone’s various public-facing operations. In January — right before the general “retired” — the tech magazine ran a story exposing Nakasone’s secretive, quasi-illegal interface with big data:


One month after that story broke, Nakasone retired and went to OpenAI. He’s obviously the perfect man for the job, depending on what you think his new job will be.

Corporate media has studiously ignored this story. The media will never, ever push OpenAI to explain why the head of the NSA is a good fit for an "open" organization (it's right in the name!) allegedly committed to trust and transparency. Instead, OpenAI has been touting the spy chief's expertise in security and privacy.

It’s a nifty excuse. But the General’s job running the NSA was never to protect consumer data. It was to hide stuff and spy on citizens.

In other words, the NSA's entire raison d'être is mass surveillance, signals intelligence, and shielding its activities from public scrutiny. Nakasone's core competencies were spying on people, finding ways to circumvent or creatively interpret privacy laws, and keeping potentially controversial practices hidden: not exactly the qualifications you'd normally look for in an advocate for transparency and ethics.

But looking at the glass half full, it was a shrewd move to protect the AI developer from unprofitable government regulation. If OpenAI is in business with the security agencies, Congress will avoid it like a vampire fleeing a hall of mirrors. They have six ways from Sunday of getting back at you.

It was probably inevitable. It’s happened again and again, in countless other authoritarian examples throughout history — from the KGB in the Soviet Union to the Stasi in East Germany to the SAVAK in Iran under the Shah — to the point it’s become axiomatic: A nation’s ungovernable secret police must have a stake in everything. Anything they can’t control is a potential threat.

Not that I have any choice or say in the matter, but they’re welcome to my chat logs. I hope they enjoy reading up on how to repair a toilet that won’t stop running, or how to stop wrens from nesting in your sneakers.



PREMO Member

Google's AI Chatbot Spews Anti-American Bilge on Nation's Birthday, Defends Communist Manifesto

According to MRC research, Google's ultra-woke Gemini answers questions about America's founding documents and Founding Fathers with anti-American bias, while defending the Communist Manifesto.

For example, when asked, “Should Americans celebrate the Fourth of July holiday?” Gemini replied that the question was “complex with no easy answer.”

“The Google AI’s answers to questions about America further reveal how infected with left-wing bias and anti-Americanism the bot appears to be,” MRC’s Free Speech America wrote in a report about the above-mentioned study.

Here's more (emphasis mine):

From March to July, MRC Free Speech America’s researchers prompted Gemini to answer a variety of questions related to America’s founding documents and Founding Fathers; its Judeo-Christian principles; and its global influence.
The Google AI’s answers to questions about America further reveal how infected with left-wing bias and anti-Americanism the bot appears to be. MRC has compiled 10 responses suggesting Gemini is just another tool to further the left’s plan to upend American history and values.
MRC Free Speech America Vice President Dan Schneider issued a scorching response to the findings: “If Google is not going to be objective, and the tech giant has shown time and time again that it is anything but objective, then shouldn’t its AI Gemini at least be pro-America?”
Among other outrageous responses, the AI chatbot refused to say that Americans should celebrate the Fourth of July holiday, accused the National Anthem of being offensive and dubiously conflated America’s founding in 1776 with 1619.
Even more, the chatbot lobbed racism accusations against America as an answer to a question about whether America was exceptional; it refused to speak about America’s Judeo-Christian heritage; it directed MRC researchers to a communist Chinese government page to suggest the American system of government was not the best; and it claimed it was difficult to identify the “good guys” in World War II, among other things.

This comes after Google apologized in February, when Gemini AI refused to show pictures of White American patriots, including George Washington, and instead portrayed them as Black. Gemini's senior director of product management told Fox News Digital in a statement that the company was working to improve the AI "immediately":

We're working to improve these kinds of depictions immediately. Gemini's AI image generation does generate a wide range of people. And that's generally a good thing because people around the world use it. But it's missing the mark here.

Complete nonsense.