AI News and Information


PREMO Member

She's a professional girlfriend who rakes in $30,000 wooing 'lonely men' online... but can YOU figure out what's so special about Lexi Love?

Get ready to meet the blonde beauty who has been turning heads and raking in $30,000 a month by sharing her raunchy snaps - but don't get your hopes up because the stunner isn't what she seems.

Lexi Love is an AI model and virtual girlfriend who was created by the company Foxy AI. She has been raking in tons of fans while offering up flirty chats and sexy photos.

And the model has been acting as a companion to lonely men online by providing them with round-the-clock text and voice messaging.

Lexi speaks more than 30 languages and gets up to 20 marriage proposals a month from men who have fallen head over heels for her, even though she doesn't really exist.

She has been built to 'flirt, laugh, and adapt to different personalities, interests and preferences.'

The blonde beauty offers paid text and voice messaging, and gets to know each of her boyfriends.

The AI model even sends 'naughty photos' if requested.



PREMO Member
This is a niche issue but interesting ....

Japanese publishers are going to start using AI to translate anime and manga, with human review of the output.

Of course the 'localizers' are all up in arms. What has come to light is that the blue-hair freaks have been inserting woke politics into the translations ..... removing sexism, homophobic jokes ... etc.

Western Anime Localizers GET REKT by AI...​

Bad news for Western anime and manga "localizers." Japanese studios are using AI to simulcast accurate translations WITHOUT any creative license... or activism. There's been some drama around "The Ancient Magus’ Bride" that's cast doubt on whether or not these localizers will be needed in the future. Will the Japanese just take matters into their own hands going forward, and bypass Western translators entirely?

Additional Context: The use of AI in the translation of manga and anime, particularly in the case of "The Ancient Magus' Bride," is indeed a development that has sparked debate and controversy. Bushiroad Works, the publisher for "The Ancient Magus' Bride," announced that starting with its 96th chapter, new chapters of the manga would be serialized simultaneously in English alongside the Japanese version using AI translation technology provided by Mantra Co. This approach combines machine translation technology with editing and proofreading by professional translators.

The decision to use AI for translations appears to be a strategy to combat piracy by providing official translations quickly and efficiently. Kore Yamazaki, the creator of "The Ancient Magus' Bride," expressed frustration over people reading pirated copies of her work and encouraged fans to view this new method as a progressive step. Despite the integration of human editing, the move has been met with criticism from fans and commentators who are concerned about the impact on human jobs and the quality of translations. Some fans have voiced their discontent on social media, stating that they would not support series replacing human translation teams with AI.

In addition to English, Bushiroad also plans to add simultaneous releases of the manga in Simplified Chinese starting in May 2024. This move by Japanese studios to use AI for translations raises questions about the future role of Western localizers in the anime and manga industry. While it's clear that Japanese publishers are exploring new methods to distribute their content globally, it remains to be seen if this will lead to a complete bypass of traditional translation processes.

The situation with "The Ancient Magus' Bride" could be indicative of a broader trend where Japanese studios might increasingly take charge of the translation process to ensure timely and accurate releases of their content internationally.

Localizers BRAG about KILLING Japanese anime and games! ADMIT they have FORCED western change!​



PREMO Member
🔥 They could’ve just asked me and saved themselves a lot of work. According to an A.I. researcher who monitors the “evolution” of AI chatbots and language models, the services are getting safer but dumber.

image 8.png

CLIP: An AI scientist who monitors developing AI models like ChatGPT was surprised to find them getting substantially worse (1:26).

Back in October, AI researcher James Zou published a paper on his group’s tests on AI performance over time. To give you the gist, here’s a summary of James’s comments from the video clip linked above:

“These models are changing over time, through feedback from humans and fine tuning. So this actually motivated us to monitor how does the behavior of this generative AI change over time, such as from interacting with users like us?

You might have expected that these models should be getting better and better over time, at least that’s our hope, they should be improving. But maybe the big surprise that we found is that these models are getting better in some of the components. GPT-4 is getting safer over time.

However, the model is also getting worse — substantially worse — in many of the other components. For example, I asked the model to do chain-of-thought reasoning, which is a common technique. The ability of GPT-4 to do chain-of-thought reasoning seems to have really degraded over time.”

What James meant by “safety” is ensuring the chatbot always answers consistently with liberal values. Think safe spaces for snowflakes. But apparently, increased A.I. ‘safety’ — enforced liberality — is also correlating with a decreasing ability to correctly perform some kinds of logical reasoning.

In other words, the more liberal the AI gets, the dumber it gets.

Hey, don’t shoot the messenger. I’m just following the science.



PREMO Member

STOP THE LIES! - A.I. made art DOES NOT STEAL art! - Addressing the evidence​




PREMO Member
Sarah Silverman's Lawsuit Against OpenAI Is Full of Nonsense Claims

The Authors' Complaints and OpenAI's Response

Teaching AI to communicate and "think" like a human takes a lot of text. To this end, OpenAI used a massive dataset of books to train the language models that power its artificial intelligence. ("It is the volume of text used, more than any particular selection of text, that really matters," OpenAI explained in its motion to dismiss.)

Silverman and the others say this violates federal copyright law.

Authors Paul Tremblay and Mona Awad filed a class-action complaint to this effect against OpenAI last June. Silverman and authors Christopher Golden and Richard Kadrey filed a class-action complaint against OpenAI in July. The threesome also filed a similar lawsuit against Meta. In all three cases, the lead lawyer was antitrust attorney Joseph Saveri.

"As with all too many class action lawyers, the goal is generally enriching the class action lawyers, rather than actually stopping any actual wrong," suggested Techdirt Editor in Chief Mike Masnick when the suits were first filed. "Saveri is not a copyright expert, and the lawsuits…show that. There are a ton of assumptions about how Saveri seems to think copyright law works, which is entirely inconsistent with how it actually works."

In both complaints against OpenAI, Saveri claims that copyrighted works—including books by the authors in this suit—"were copied by OpenAI without consent, without credit, and without compensation."

This is a really weird way to characterize how AI training datasets work. Yes, the AI tools "read" the works in question in order to learn, but they don't need to copy the works in question. It's also a weird understanding of copyright infringement—akin to arguing that someone reading a book in order to learn about a subject for a presentation is infringing on the work or that search engines are infringing when they scan webpages to index them.

The authors in these cases also object to ChatGPT spitting out summaries of their books, among other things. "When ChatGPT was prompted to summarize books written by each of the Plaintiffs, it generated very accurate summaries," states the Silverman et al. complaint.

Again, putting this in any other context shows how silly it is. Are book reviewers infringing on the copyrights of the books they review? Is someone who reads a book and tweets about the plot violating copyright law?

It would be different if ChatGPT reproduced copies of books in their entirety or spit out large, verbatim passages from them. But the activity the authors allege in their complaints is not that.

The copyright claims in this case "misconceive the scope of copyright, failing to take into account the limitations and exceptions (including fair use) that properly leave room for innovations like the large language models now at the forefront of artificial intelligence," OpenAI argued in its motion to dismiss some of the claims.

It suggested that the doctrine of fair use—designed in recognition of the fact "that the use of copyrighted materials by innovators in transformative ways does not violate copyright"—applies in this case and the case of "countless artificial intelligence products [that] have been developed by a wide array of technology companies."

The Court Weighs In

The authors prevailing here could seriously hamper the creation of AI language learning models. Fortunately, the court isn't buying a lot of their arguments. In a February 12 ruling, Judge Araceli Martínez-Olguín of the U.S. District Court for the Northern District of California dismissed most of the authors' claims against OpenAI.

This included the claims that OpenAI engaged in "vicarious copyright infringement," that it violated the Digital Millennium Copyright Act (DMCA), and that it was guilty of negligence and unjust enrichment. The judge also partially rejected a claim of unfair competition under California law while allowing the authors to proceed with that claim in part (largely because California's understanding of "unfair competition" here is so broad).

Silverman and the other authors in these cases "have not alleged that the ChatGPT outputs contain direct copies of the copyrighted books," Martínez-Olguín noted. And they "fail to explain what the outputs entail or allege that any particular output is substantially similar — or similar at all — to their books."

The judge also rejected the idea that OpenAI removed or altered copyright management information (as prohibited by Section 1202(b) of the DMCA). "Plaintiffs provide no facts supporting this assertion," wrote Martínez-Olguín. "Indeed, the Complaints include excerpts of ChatGPT outputs that include multiple references to [the authors'] names."

And if OpenAI didn't violate the DMCA, then other claims based on that alleged violation—like that OpenAI distributed works with copyright management information removed or engaged in unlawful or fraudulent business practices—fail too.


PREMO Member
🔥 2024 has delivered yet again! This time, CBS News ran the story Friday headlined, “OpenAI's new text-to-video tool, Sora, has one artificial intelligence expert ‘terrified.’” OpenAI shocked and surprised the markets last week by unexpectedly announcing its latest innovation: an AI tool for creating high-quality, full-motion video based on simple text prompts. (Stocks in adult-video companies immediately shot up +17% for the day.)

You cannot look at the sample videos and remain unimpressed. Somehow OpenAI leapt over all the other developers in the space and is, once again, several generations ahead of the next-best technology. It’s almost like they’re getting help from somewhere.

image 9.png

WEBSITE: OpenAI’s landing page for its upcoming text-to-video feature with example videos.

All it takes is a simple text description of what the user wants to see, called a “text prompt.” As with the ChatBots, users just type a little description into a box, press “Send,” and Sora AI will create a brand-new video for them. For people who live in Portland, it would work like this:

image 10.png

While the Sora AI service is currently unavailable to the public — one senses the main barrier is developers’ rational fears about how the tool could immediately be misused — the company has published a raft of example videos showing off the service’s flexibility. Just like AI’s picture-creating features, the video generator can produce videos in any conceivable style: 1980s music videos, black and white, ’50s cartoons, hyper-realistic, and so forth.

The massive response to the announcement was mixed, equal parts terrified and excited. For example, Joe Biden’s handlers can’t wait to start making completely virtual press conferences — instead of just mostly virtual ones. Literally, they can’t wait. November is right around the corner. His handlers would much rather run a new and improved Max Headroom-style Biden than an old-and-tired Joe Biden.

image 11.png

Sadly, despite the Sora developers’ best efforts, Joe’s career may have started one year too late; by next year, this tech will be perfected and then we will never see the real Joe any more. From that point on, we’ll only ever see “Dark Brandon.” Which is not as bad as it sounds, and we probably won’t even complain about it very much, since unlike Joe, Dark Brandon will actually make sense, he won’t be painfully embarrassing to watch, and Dark Brandon won’t make you feel guilty for not calling in a wellness check.

Upon viewing the sample videos, everyone in Hollywood instantly experienced sheer horror and intense myocarditis (which medical experts tell me is mostly-harmless, transitory, and nothing to worry about). They saw their careers flash before their eyes, right in the reflections of their computer monitors and cell phone screens.

There’s a lot that could be and has been said about the technology’s world-changing implications. I’ll just make a couple of quick points. A.I. video is both more and less mysterious than it seems. It is less mysterious in that, according to the developers, it is not per se a revolutionary development, since it only extends AI’s existing ability to draw still pictures. To create a full-motion video, the A.I. simply makes an extended series of still images — frames — and then stitches them together, kind of like the old flip-book cartoons.
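As a toy illustration of that flip-book idea — and only the idea; it is not Sora's actual architecture, and modern video models typically generate frames jointly rather than one at a time — here is a minimal Python sketch that "renders" each still frame as a text strip with a moving dot, then stitches the frames together in order:

```python
# Toy flip-book: render a sequence of still "frames" (text strips with a
# dot sliding right), then stitch them together in order to form a "video".

def render_frame(t, width=10):
    """Draw one still frame: a dot at position t on a strip of width cells."""
    return "".join("o" if i == t % width else "." for i in range(width))

def stitch(frames):
    """Concatenate the stills, in order, into one 'video' (one frame per line)."""
    return "\n".join(frames)

frames = [render_frame(t) for t in range(5)]   # five consecutive stills
video = stitch(frames)
print(video)
```

Played back quickly, the five strips would show the dot moving — which is all a flip-book (or a video file) really is: stills in sequence.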

image 4.png

But paradoxically, the sudden appearance of this new technology is also even more mysterious than it seems, since all artificial intelligence-based technology sprouts from a common large-language model that even the developers admit they do not fully understand:

image 12.png

Maybe I’m wrong. But I cannot believe that an invention as significant as artificial intelligence sprang from some serendipitous lab accident. Post-it notes — yes. Rubber — yes. Antibiotics — okay. But not artificial intelligence, which requires millions of lines of computer code to operate. Accidentally discovered? No. Impossible.

So then, where did the ‘spark’ of intelligence come from? Is A.I. demonic, a malicious gift whispered into the ear of some luckless scientist who sold their soul for access? Maybe. But my preferred theory is it was dished out of a DARPA skunkworks lab somewhere, for some sinister military purpose. I don’t know. I just find it utterly remarkable that developers say they don’t really understand how AI works — and everybody is just fine with that! Oh, how interesting, now show me how to work it again.

The next issue of great interest is that the ‘deepfake’ genie has now almost completely wriggled out of its shiny aluminum bottle. As if things weren’t bad enough, soon we will be completely unable to tell real from fake and there will be a lot of fake. Soon, AI will not just be able to create millions of made-up videos, but it will also easily modify existing, ‘real’ video, of real people and real events, changing it into whatever the user wants.

It’s coming. It’s inevitable. Deal with it. In short order, video will be useless as evidence unless it is recorded on analog film. Remember Kodak? Analog film is probably rushing back into style. Buy Kodak stock. (Disclaimer: that was a joke and not financial advice. Don’t put your life savings into analog film. But still.)

Finally, and maybe most significantly, are the spiritual implications. Think about this: AI translates users’ words — their words — into fully-realized, instantly-created virtual worlds.

What does that remind you of?

Does it remind you of the way the Bible begins? With God speaking the Universe into existence? Genesis 1:1:

In the beginning God created the heaven and the earth.
And the earth was without form, and void; and darkness was upon the face of the deep. And the Spirit of God moved upon the face of the waters. And God said, Let there be light: and there was light.

God said ‘let there be light.’ Said. In words. Maybe it isn’t so surprising we humans stumbled upon a mysterious tool allowing us to speak worlds into existence. After all, we were created in His image (“So God created man in his own image, in the image of God created he him; male and female created he them.” Gen. 1:27.)

Obviously, then, we are always trying to do the same things that He does. If He can speak worlds into existence, then we want to do it, too. Now listen carefully. If you’ve been sitting on an agnostic fence somewhere, you might want to reflect on the profound spiritual implications of words-to-video, and what that truly says about the nature of reality and about certain spiritual truths written thousands of years ago. Written in words.



PREMO Member
My dudes: Google's Gemini AI is woke as heck and people have the receipts to prove it


Google recently dropped its new 1.5 update to its Gemini AI platform, which allows users to ask questions like they would on OpenAI's ChatGPT or X's Grok. Gemini is pretty powerful: You can upload an entire video and it'll summarize it within seconds. It can critique multiple books at the same time. It knows what it is watching and reading, which is cool but scary as heck.

Sadly, one thing Google has baked into the cake is Marxist "diversity, equity, and inclusion," the new religious teaching of the woke commies that says everything and everyone must look different while believing exactly what they believe at the same time.

Users immediately noticed that Gemini is OBSESSED, like its creators, with skin color, sex/gender, and sexuality. It also refuses to depict white people, especially white men - and when it does comply, the white men end up being black women (no joke).


Well-Known Member
My phone has prompted me at least 3 times to activate Gemini services. No thanks, I don't need my phone admonishing me for not being inclusive enough or giving me directions to Asahi when I asked for Jimmy John's.