
In 1839, a new technology reshaped how the world captured itself. The invention of the daguerreotype by Louis Daguerre introduced photography to the public, not as a fringe curiosity, but as a legitimate tool with wide cultural and commercial potential.
That year, the French government acquired the rights to Daguerre’s process and released it freely to the world, ushering in the first wave of public access to image-making through mechanical means.
Within a decade, photography had moved from novelty to enterprise. Studios opened in major cities across Europe and North America, offering portrait services at a fraction of the cost and time of traditional oil painting.
In 1888, the Kodak camera brought basic photographic control to non-experts with the slogan, “You press the button, we do the rest.”
And by 1900, with the launch of the mass-market Brownie camera, the average person could not only use photography but shape it, documenting everyday life with a degree of agency previously unavailable.
For artists of the 19th century, the rise of photography introduced undeniable pressure.
Commissioned portrait work declined. Realist painters faced a market that no longer depended solely on their skill for visual representation. While many felt disoriented by the change, others used it as a pivot point, embracing stylistic evolution and laying the groundwork for movements like Impressionism and Symbolism.
That historical disruption is often cited today by advocates of artificial intelligence, particularly as generative AI models like Midjourney, ChatGPT, and Suno enter creative industries.
“Artists feared the camera too,” the comparison goes. “But art didn’t die, it evolved.”
The premise is historically true, but materially incomplete.
The invention of photography introduced a new medium, but it did not extract from painters to build itself.
Photography’s early processes were based on chemistry and optics, not the consumption of paintings. Its images were not built on past images. They were formed through the direct interaction of light, materials, and the physical world. Its impact came from what it replaced — not from what it absorbed.
In contrast, generative AI is built explicitly from existing human work. Its development is rooted in large-scale data collection, often drawn from public websites, private portfolios, copyrighted databases, and creative output published without any intent of becoming training material.
The outputs of generative AI models are synthetic, but they are not original. They are statistical recreations of patterns found in prior human labor.
This is not a semantic distinction. It is the core of the legal and ethical challenges now confronting AI developers and policymakers. While both photography and AI represent moments of technological acceleration, only one was trained on the output of the people it now threatens to displace.
Understanding this difference requires a closer look at how generative AI developed.
Though AI as a field dates back to the 1950s, most of the systems currently shaping creative industries rely not on “intelligence” in a cognitive sense, but on pattern recognition through machine learning.
These systems are taught from data. The more detailed the training material, the more refined the model’s predictions, whether in language, visuals, audio, or video.
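To make that idea concrete, here is a deliberately tiny sketch, illustrative only and far simpler than any production system: a “model” that predicts the next character purely by counting which characters followed which in its training text. The corpus string and function names are invented for this example.

```python
# A toy illustration, not any real AI system: a "model" that predicts the
# next character by counting which characters followed which in its
# training text. Everything it can ever output is a statistical echo of
# the material it was fed.
from collections import Counter, defaultdict

def train(corpus: str) -> dict:
    """Record, for each character, how often every other character follows it."""
    counts = defaultdict(Counter)
    for current, nxt in zip(corpus, corpus[1:]):
        counts[current][nxt] += 1
    return counts

def predict_next(model: dict, current: str) -> str:
    """Return the successor seen most often during training."""
    followers = model.get(current)
    return followers.most_common(1)[0][0] if followers else "?"

model = train("the theory that the thing thrives")
print(predict_next(model, "t"))  # -> 'h', because "th" dominates the corpus
```

More or richer training text sharpens the predictions, but none of the knowledge originates with the model. Real generative systems replace the frequency table with billions of learned parameters; the dependence on prior human-made data is the same in kind.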
In the early 2010s, breakthroughs in deep learning enabled large-scale neural networks to recognize and reproduce increasingly complex forms of media. In 2014, generative adversarial networks (GANs) made it possible to generate synthetic visuals that mimicked human photography or illustration.
By 2022, image diffusion models like Stable Diffusion and DALL·E 2 could create images in nearly any aesthetic style, often replicating the specific visual signatures of real, living artists.
In parallel, large language models like GPT-3 and GPT-4 enabled text generation at a scale and fluency previously unseen. They were trained on books, news articles, academic journals, personal blogs, and private emails.
None of these models learned in a vacuum. Their power lies in their training, and that training comes from human creative work, often scraped in bulk without licensing or compensation.
That foundation has led to growing concern among artists, writers, designers, and musicians. While some view AI as a tool with potential creative applications, others point to its impact on labor markets and authorship.
Freelance illustrators report losing clients to text-to-image platforms. Writers face competition from AI-generated content optimized for cost and speed. Musicians see their style replicated by algorithms trained on their own work.
In many cases, these concerns are not speculative. They are already visible in hiring decisions, production pipelines, and platform policies. Unlike photography, which coexisted with painting but never impersonated it, AI can now approximate the aesthetic fingerprint of a specific creator, down to brushstroke or phrasing.
That difference is not abstract. It is central to understanding the growing resistance, not just among artists, but across legal, academic, and cultural institutions.
What has followed is a wave of lawsuits and legislative proposals that aim to define the boundaries of fair use, authorship, and consent in an era where algorithms learn by example, and those examples are human-made.
Visual artists have filed class-action suits against AI image developers, alleging copyright infringement. Authors have sued AI firms for ingesting entire novels without permission. Musicians and voice actors have raised objections as platforms train synthetic voices based on their recordings, capable of generating new dialogue or lyrics in a familiar cadence.
These legal actions are not simply about money; they are about control. For many creators, the core issue is that they never consented to being part of an AI training set. Their work was turned into fuel, not licensed as a reference.
Still, AI proponents often counter that exposure is part of being online, that the internet is a commons, and that the material posted there is fair game for innovation. But this framing glosses over a critical fact: photography, the technology to which AI is so often compared, did not develop by feeding off the creative work of others.
It disrupted portraiture by being faster and cheaper, not by being derivative.
Even as these distinctions sharpen, the comparison to photography continues to circulate in tech blogs, investor briefings, and policy panels. But beneath the surface, the analogy begins to break down.
Photography changed the market for painters. AI changes the terms of authorship.
Photography introduced a tool anyone could hold. AI introduces a system whose power is concentrated in the hands of a few companies that trained their models on the unpaid labor of millions.
Photography became a new way to capture the world. AI becomes a way to simulate the people who already interpret it, without needing them at all.
This divergence also affects how artists respond. In the 19th century, painters developed new movements in response to photography. Today, many artists want to work with AI, but on different terms. They want opt-in models. Transparent training sets. Compensation structures. Creative credit. Tools that expand human ability rather than replace it.
Some developers have begun to respond. Open-source alternatives to major platforms now allow users to train models on curated, licensed data. Ethical AI initiatives aim to give creators more say over how their work is used.
But these remain the exception, not the rule. Most of the market is still shaped by proprietary systems trained in opaque ways. And that opacity extends to the cultural narrative.
The story that “artists always fear new tools” is tidy, familiar, and easy to repeat. But it flattens the complexity of what is happening now. It reduces a real debate about ownership and value to a historical inevitability.
It implies that those objecting are simply resisting progress, when in fact they are asking for accountability.
The artists most threatened by AI are not clinging to the past. They are defending the integrity of their work, and the right to decide how it is used. That is the same right any profession would claim if its knowledge, voice, or identity were absorbed into a commercial product without consent.
Photography pushed painters to reconsider their medium. But it never claimed their authorship. It never trained on their canvases to generate imitations. AI, as it stands now, does. And that is why the comparison, while tempting, misses the point.
The challenge for policymakers, courts, and the public is to recognize this moment not as a rerun of a 19th-century debate, but as a fundamentally new question.
When the tools of creation are trained on the creations themselves, who gets to control the future of art?
The answer to that question will shape not just what art looks like, but who gets to make it and whether that remains a human choice at all.