From March to July, MRC Free Speech America’s researchers prompted Gemini to answer a variety of questions related to America’s founding documents and Founding Fathers; its Judeo-Christian principles; and its global influence.
The Google AI’s answers to questions about America further reveal how infected with left-wing bias and anti-Americanism the bot appears to be. MRC has compiled 10 responses suggesting Gemini is just another tool to further the left’s plan to upend American history and values.
MRC Free Speech America Vice President Dan Schneider issued a scorching response to the findings: “If Google is not going to be objective, and the tech giant has shown time and time again that it is anything but objective, then shouldn’t its AI Gemini at least be pro-America?”
Among other outrageous responses, the AI chatbot refused to say that Americans should celebrate the Fourth of July holiday, called the National Anthem offensive and dubiously conflated America's 1776 founding with 1619.
What's more, the chatbot lobbed accusations of racism at America when asked whether America was exceptional; it refused to discuss America's Judeo-Christian heritage; it directed MRC researchers to a communist Chinese government page to suggest that the American system of government was not the best; and it claimed it was difficult to identify the "good guys" in World War II.
Google has previously acknowledged problems with Gemini, saying of its image generation: "We're working to improve these kinds of depictions immediately. Gemini's AI image generation does generate a wide range of people. And that's generally a good thing because people around the world use it. But it's missing the mark here."
[Video: "South Korea's First Robot Suicide. What Happened?" (Vantage with Palki Sharma)]
That doesn't mean I'm in favor of AI; it still creeps me out, and I can see where this could all go south very quickly.

Note:
AI robot response: Please kindly be advised that this is information that our AI robot automatically recognized and replied to. If the robot helper does not properly answer your question or sends duplicate information, please email us back and our Support Specialist will reach out to you within 24 hours (Mon.-Fri.)
Stack Overflow says over 65,000 developers took their annual survey — and "For the first time this year, we asked if developers felt AI was a threat to their job..."
Some analysis from The New Stack:

Unsurprisingly, only 12% of surveyed developers believe AI is a threat to their current job. In fact, 70% are favorably inclined to use AI tools as part of their development workflow... Among those who use AI tools in their development workflow, 81% said productivity is one of its top benefits, followed by an ability to learn new skills quickly (62%). Much fewer (30%) said improved accuracy is a benefit. Professional developers' adoption of AI tools in the development process has risen rapidly, going from 44% in 2023 to 62% in 2024...
Sounds like Google paid some California politicians to kill OpenAI and other competitors.

S.B. 1047 defines open-source AI tools as "artificial intelligence model[s] that [are] made freely available and that may be freely modified and redistributed." The bill directs developers who make models available for public use—in other words, open-sourced—to implement safeguards to manage risks posed by "causing or enabling the creation of covered model derivatives."
California's bill would hold developers liable for harm caused by "derivatives" of their models, including unmodified copies, copies "subjected to post-training modifications," and copies "combined with other software." In other words, the bill would require developers to demonstrate superhuman foresight in predicting and preventing bad actors from altering or using their models to inflict a wide range of harms.
The bill gets more specific in its demands. It would require developers of open-source models to implement reasonable safeguards to prevent the "creation or use of a chemical, biological, radiological, or nuclear weapon," "mass casualties or at least five hundred million dollars ($500,000,000) of damage resulting from cyberattacks," and comparable "harms to public safety and security."
The bill further mandates that developers take steps to prevent "critical harms"—a vague catch-all that courts could interpret broadly to hold developers liable unless they build innumerable, undefined guardrails into their models.
Additionally, S.B. 1047 would impose extensive reporting and auditing requirements on open-source developers. Developers would have to identify the "specific tests and test results" that are used to prevent critical harm. The bill would also require developers to submit an annual "certification under penalty of perjury of compliance," and self-report "each artificial intelligence safety incident" within 72 hours. Starting in 2028, developers of open-source models would need to "annually retain a third-party auditor" to confirm compliance. Developers would then have to reevaluate the "procedures, policies, protections, capabilities, and safeguards" implemented under the bill on an annual basis.
In recent weeks, politicians and technologists have publicly denounced S.B. 1047 for threatening open-source models. Rep. Zoe Lofgren (D–Calif.), ranking member of the House Committee on Science, Space, and Technology, explained: "SB 1047 would have unintended consequences from its treatment of open-source models. … This bill would reduce this practice by holding the original developer of a model liable for a party misusing their technology downstream. The natural response from developers will be to stop releasing open-source models."
California's AI bill threatens to derail open-source innovation
This month, the California State Assembly will vote on whether to pass Senate Bill 1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act. (reason.com)
Did you fat finger the keyboard and secretly turn on the strikethrough option?