Artificial intelligence systems marketed as safety-focused, creative tools for individuals are simultaneously being embedded in institutional frameworks that operate without scrutiny, regulation, or accountability.

The result is a two-tiered ecosystem. In public-facing spaces, users are over-policed by hypercautious moderation protocols. Behind closed doors, the very same models are used by governments and corporations to profile citizens, deny services, and influence behavior without oversight.

One of the clearest examples lies in the way AI systems conduct surveillance and behavioral profiling. Deployed into large-scale data infrastructures, these models can analyze communications, social media activity, or digital transaction histories to detect so-called anomalies or threats.

While this is often presented as a neutral act of pattern recognition, it comes with steep consequences. People can be flagged, categorized, or labeled, and they will likely never be notified that it happened.

More troubling still, this profiling extends into decision-making. In many institutional contexts, AI is not just flagging data; it is actively making decisions. Predictive policing programs now use AI models to determine where law enforcement should concentrate its presence.

Welfare eligibility screenings rely on automated criteria to approve or reject applicants. Immigration cases may be scored based on perceived “risk” or economic potential. Loan and hiring platforms frequently integrate algorithmic evaluations of applicants, often built on opaque or biased datasets.

In these cases, there is often no human intervention. A person may be denied housing, income, employment, or even residency without ever learning that an algorithm, not a human being, made the call. Worse still, there is no guaranteed right to challenge the decision, access the underlying data, or correct inaccuracies.

In parallel, AI is being weaponized in psychological and political operations. Intelligence services, state actors, and private contractors have begun using large language and image models to generate disinformation at scale.

These campaigns can flood platforms with false narratives, simulate fake grassroots opinions, and create content designed to erode trust in legitimate institutions or boost authoritarian ones.

People often don’t realize they are engaging with synthetic personas. Conversations online may be shaped by coordinated AI-driven accounts that mimic community members or concerned citizens. In this way, propaganda doesn’t look like propaganda; it looks like “what people are saying.”

At the same time, the commercial side of AI is being used to shape consumer and ideological behavior under the guise of personalization. Corporations increasingly rely on models to “optimize engagement,” nudging users toward certain choices. These can include advertising, product placement, behavioral reinforcement, or even political content framing.

Over time, the result is that individuals are trained by the system rather than the other way around. The belief in autonomy is preserved even as the range of options quietly narrows.

While these risks mount behind the scenes, publicly available AI interfaces, like ChatGPT and image generators, are being aggressively moderated in the name of safety.

The result is over-correction, in which harmless, creative prompts are refused. A depiction of historically accurate clothing might be flagged as inappropriate. Neutral prompts that reference politically sensitive contexts, such as wartime photography, are sometimes blocked entirely.

Artistic work that engages with real-world themes can trigger warnings not because it violates any moral code, but because it risks being misunderstood or misused in a viral context.

These moderation systems are designed not to serve consumers but to protect companies like OpenAI: to minimize PR crises, avoid lawsuits, and stay compliant with app store policies. They are not optimized for understanding user intent, only for avoiding corporate embarrassment.

So the burden of clarity, trust, and restraint falls entirely on the user, while the system assumes guilt in every ambiguous edge case. Yet this caution doesn’t extend to the institutional side of the model’s deployment.

When the same AI systems are used in closed environments — for example, in a defense contractor’s analytics pipeline or a government data center — there are no public-facing safeguards. There are no real-time flagging mechanisms, no visible moderation protocols, and no opportunities for correction. The model executes silently, with its impact measured in outcomes the public never sees.

These applications often run on proprietary or classified systems, shielded from FOIA requests and public audit. They are deployed in black-box environments where no ordinary user ever enters a prompt; only administrators, technicians, and decision-makers do.

There is no transparency about how these tools are trained, which datasets are used, or what political assumptions they encode. There is also no incentive for the companies that build them to apply the same caution they use on their public-facing tools.

The outcome of this asymmetrical model of governance is a paradox in which those with the least power face the most scrutiny. A user working on a historical research paper may find their prompts blocked, not because they pose a genuine risk, but because the system is designed to prevent the appearance of risk.

The logic is clear: an interaction a user can screenshot and circulate poses a potential liability for OpenAI and its peers. If the interaction is invisible, if it happens inside a military contractor’s decision system or a predictive surveillance platform used by Trump, then no such risk to the AI company exists.

This discrepancy is not accidental. It is engineered. Public-facing models are front-loaded with moderation layers precisely because they are public. Their visibility makes them politically and legally fragile. They must withstand scrutiny from journalists, lawmakers, and users capable of documenting failure.

But institutional deployments face no such environment. There is no public record of the prompts they receive or the judgments they pass. There are no viral tweets, no app store bans, no lawsuits from artists or educators. What happens behind the firewall is simply not subject to the same constraints.

This is what makes the dual nature of AI deployment so dangerous. The same model that blocks a harmless request for a culturally accurate image of a wartime correspondent may be used elsewhere to help decide which neighborhoods are over-policed, which job applications are discarded, or which asylum claims are delayed.

It is not a matter of one tool used differently. It is the exact same tool operating with two entirely different rulebooks, depending solely on who is watching.

The veneer of safety presented in consumer platforms serves a dual function: it reassures the public that AI is “under control,” while giving cover to the far more consequential applications occurring out of sight.

The inconsistency is not a glitch. By focusing attention on the superficial safeguards applied to hobbyists and casual users, institutions can expand their opaque use of these tools without attracting the same ethical scrutiny.

Meanwhile, the systems themselves grow more capable and more autonomous. AI models are now routinely used to generate internal documents, prioritize government responses, analyze large-scale populations, and even summarize citizen communications for authorities.

When paired with predictive algorithms, this creates a feedback loop in which the model not only interprets reality but gradually begins to define it. The system no longer merely describes the world; it decides what gets seen, what gets ignored, and what gets acted upon.

There are no built-in limits to this trend. Nothing in the architecture of large models prevents them from being used to identify political dissidents, suppress opposition narratives, or categorize individuals based on race, gender, religion, or economic background.

If the operator wants to do terrible things, the model will comply. And since most of these deployments are either proprietary or classified, there is no mechanism for public resistance. No democratic input, no ethical tribunal, no appeals process.

This leaves journalists, activists, and artists, the very people most invested in social critique and accountability, with crippled tools, while governments and corporate entities retain unrestrained access to far more powerful, minimally audited versions.

The harm lies not only in censorship but in inequality of access. The same system that flags your prompt over a photo it judges to show too much skin will approve a dataset that labels entire communities as threats. And no one outside that system will ever see the output.

The longer this imbalance is tolerated, the harder it will become to correct. Every iteration of AI integration into institutional workflows makes it more normalized, more bureaucratized, and harder to audit. When these systems are questioned, defenders can point to the strict safeguards placed on consumer-facing apps as proof that “safety” is being prioritized, even as the most dangerous uses continue uninterrupted behind closed doors.

AI is not neutral. It is shaped by the values of those who build it and the goals of those who deploy it. If those values prioritize liability protection over user freedom, or secrecy over accountability, then the outcome will always serve institutional power over the public good.

The burden of proof must not be on individual users to prove they deserve access. The burden must be on the creators and operators to prove they are not causing harm. Until that standard is met, all assurances of AI safety will remain cosmetic. The real decisions will continue to be made in a space where no one can see them.

Image © Cora Yalbrin (via ai@milwaukee) and Isaac Trevik