Civitai Hit by New Payment Provider Crisis, Just as Trump Signs Anti-Deepfake Law

13 Min Read

President Trump has signed the Take It Down Act, criminalizing sexual deepfakes at the federal level in the United States. At the same time, attempts by Civitai to appease payment processors over NSFW AI and celebrity output have ultimately failed, forcing the site to seek alternatives or shut such output down. All of this within just two weeks of the oldest and biggest deepfake porn site in the world going offline…

These have been momentous weeks for the world of unregulated AI-generated images and videos. Just two weeks ago, Mr. Deepfakes, the number one domain for celebrity deepfake porn and community sharing, suddenly went offline, after more than seven years as the dominant global hub for sexual AI celebrity content. By the time it fell, the site was averaging over five million visits a month.

Background: the Mr. Deepfakes domain in early May. Inset: the suspension notification, since replaced by a 404 error, as the domain appears to have been purchased by an unknown buyer on May 4, 2025 (https://www.whois.com/whois/mrdepfakes.com). Source: mrdeepfakes.com

Mr. Deepfakes' suspension of services was officially attributed to the withdrawal of a "critical service provider" (see inset image above, which was replaced by a domain failure within a week). However, a journalistic investigation appears to have identified a key figure behind Mr. Deepfakes just before the shutdown, and the site may have been closed for that individual's personal and/or legal reasons.

At around the same time, Civitai, a commercial platform widely used for celebrity and NSFW LoRAs, imposed an unusual and controversial set of self-censorship measures. These covered deepfake generation and model hosting, along with a wider slate of new rules and restrictions, including a complete ban on certain extreme NSFW fetishes and on what the site called "extremist ideologies."

These measures were urged by payment providers, who threatened to withdraw services from the domain unless changes were made regarding its portrayal of NSFW content and celebrity AI.

Civitai Blocked

As of today, the measures taken by Civitai have apparently failed to placate Visa and Mastercard: a new post by the site's Community Engagement Manager Alasdair Nicoll reveals that Civitai's card payments (which mostly fund the "Buzz" virtual money system with real credit and debit cards) will be suspended from this Friday (May 23, 2025).

This prevents users from renewing their monthly memberships or purchasing new Buzz. Nicoll advises users to maintain their current membership privileges by switching to an annual membership†† before Friday, though the future is clearly somewhat uncertain for the domain at this point (it should be noted that annual memberships were released at the same time as the announcement regarding the loss of payment processors).


Regarding the lack of payment processors, Nicoll states: "We're talking to all providers who are happy with AI innovation."

In the post, Nicoll attributes the loss to the failure of the site's recent efforts to rethink its criticized policies regarding celebrity AI and NSFW content:

"Some payment companies label generative-AI platforms high-risk, especially when they allow user-generated mature content, even when it's legally operated. That policy choice forced the cutoff, not anything users did."

Comments from "Faeia," a user designated as the company's Chief of Staff in their Civitai profile*, add context to the announcement:

"To be clear, we have chosen not to remove NSFW and adult content from the platform, which is why we have been dropped by the payment processor. We are still committed to supporting all kinds of creators and are working on alternative solutions."

As a traditional driver of new technologies, NSFW content is not uncommonly used to kick-start interest in a domain, technology, or platform; nor is it uncommon for those early proponents to be rejected once sufficient "legitimate" capital and/or user base is established (i.e., once the entity has enough users and revenue to survive without the NSFW contingent).

For a while, Civitai seemed ready to follow Tumblr and a variety of other ventures down this route towards a "sanitized" product. However, the growing controversy and stigma surrounding AI-generated content of all kinds appears, in this case, to represent an accumulated weight set to prevent any last-minute rescue. In the meantime, the official announcement advises users to adopt crypto as an alternative payment method.

Take It Down

President Donald Trump's enthusiastic signing of the federal Take It Down Act likely had some bearing on these events. The new law criminalizes the distribution of nonconsensual intimate images, including AI-generated deepfakes.

The law requires platforms to remove flagged content within 48 hours, with oversight by the Federal Trade Commission. The law's criminal provisions take effect immediately, allowing prosecution of individuals who knowingly publish or threaten to publish intimate images (including AI-generated deepfakes) within the scope of the United States.

The law received rare bipartisan support, as well as backing from technology companies and advocacy groups, but critics argue it could suppress legitimate content and threaten privacy tools like encryption. Last month, the Electronic Frontier Foundation (EFF) declared its opposition to the bill, arguing that the takedown mechanism the law mandates targets a broader range of material than the narrower definition of nonconsensual intimate images found elsewhere in the law:


"The takedown provision in TAKE IT DOWN applies to a much broader category of content (intimate or sexual content) than the narrower NCII definitions found elsewhere in the bill. The takedown provision also lacks critical safeguards against frivolous or bad-faith takedown requests.

"Services will rely on automated filters, which are infamously blunt tools. They frequently flag legal content, from fair-use commentary to news reporting. The law's tight time frame requires that apps and websites remove speech within 48 hours, rarely enough time to verify whether the speech is actually illegal.

"As a result, online service providers, particularly smaller ones, will likely choose to avoid the onerous legal risk by simply removing the speech rather than even attempting to verify it."

Platforms have up to one year from the law's enactment to establish a formal notice-and-takedown process, allowing affected individuals or their representatives to invoke the law when requesting that content be removed.

This means that while the criminal clauses take effect immediately, platforms are not legally required to operate the takedown infrastructure (such as receiving and processing requests) until that one-year window has passed.

Does the Take It Down Act cover AI-generated celebrity content?

The Take It Down Act applies across state borders, but it does not necessarily ban all AI-driven celebrity media. The law criminalizes the distribution of nonconsensual intimate images, including AI-generated deepfakes, only if the individual depicted had a reasonable expectation of privacy.

The act states:

"(2) Offense involving authentic intimate visual depictions.—

"(A) Involving adults.—Except as provided in subparagraph (C), it shall be unlawful for any person, in interstate or foreign commerce, to use an interactive computer service to knowingly publish an intimate visual depiction of an identifiable individual who is not a minor if—

"(i) the intimate visual depiction was obtained or created under circumstances in which the identifiable individual had a reasonable expectation of privacy;

"(ii) what is depicted was not voluntarily exposed by the identifiable individual in a public or commercial setting [i.e., self-published porn];

"(iii) what is depicted is not a matter of public concern; and

"(iv) publication of the intimate visual depiction—

"(I) is intended to cause harm; or

"(II) causes harm, including psychological, financial, or reputational harm, to the identifiable individual."

The "reasonable expectation of privacy" contingency applied here has not traditionally favored the rights of celebrities. Depending on the case law that eventually emerges, AI-generated explicit content involving public figures in public or commercial settings may not fall under the law's prohibitions.


The final clauses determining the scope of harm are elastic, as legal definitions of harm famously are, and in this sense there is nothing particularly novel about the law's evidentiary burden. However, the requirement of intent to cause harm appears to limit the act's scope to "revenge porn" contexts, where an (unknown) ex-partner publishes real or fake media content of another (equally unknown) ex-partner.

The law's "harm" requirement may seem a poor fit for cases where anonymous users post AI-generated depictions of celebrities, but it may prove more relevant in stalking scenarios.

The law's reference to "covered platforms" excludes private channels such as Signal and email from the takedown clause, but this exclusion applies only to the obligation to implement a formal removal mechanism by May 2026. It does not mean that nonconsensual AI-generated depictions shared through private communications fall outside the scope of the law's criminal provisions.

Obviously, a site's lack of a reporting mechanism does not prevent affected parties from reporting now-illegal content to the police, nor from requesting removal of problematic material through whatever conventional contact methods the site makes available.

The Rights That Remain

More than seven years of public and media criticism of deepfake content appears to have come to a head in an unusually short period. However, while the Take It Down Act imposes a sweeping federal ban, it may not apply to all cases involving AI-generated simulations, and certain scenarios will instead be addressed under the growing patchwork of state-level deepfake laws.

In California, for example, the Celebrity Rights Act reserves the commercial use of a celebrity's identity to the celebrity and their estate, even after death; conversely, Tennessee's ELVIS Act focuses on protecting musicians from fraudulent AI-generated replication of their voices and images. Each case reflects the influence of prominent interest groups at the state level.

While most states now have laws aimed at sexual deepfakes, many stop short of making clear whether these protections extend equally to private individuals and public figures. Meanwhile, the political deepfakes that reportedly prompted Donald Trump's support for the new federal law could themselves run up against constitutional barriers in certain contexts.

Archive Version: https://web.archive.org/web/20250520024834/https://civitai.com/articles/14945

†† Archive version (no monthly price): https://web.archive.org/web/2025042502020325/https://civitai.green/pricing

*The actual "Chief of Staff" to Civitai's CEO is listed on LinkedIn under an unrelated name, while the similar-sounding "Faiona" is an official Civitai staff moderator for the domain's subreddit.

First published Tuesday, May 20, 2025
