As artists working in the evolving space of artificial intelligence, we stand at the intersection of excitement and uncertainty. Our work explores what it means to create with tools that did not exist a decade ago, tools that now raise urgent ethical, legal, and creative questions.

At the heart of our practice is a shared respect for art in all its forms, but we also recognize the need to confront the realities posed by generative AI and its impact on the broader creative economy.

For both of us, AI is neither a shortcut nor a replacement for skill. It is a medium. Just as oil, graphite, and digital illustration software are mediums, AI allows us to extend narrative boundaries and visual possibilities.

Isaac’s illustrative work stems from traditional drawing techniques, rooted in character, mood, and world-building. Generative tools enable iteration at scale and faster visual development, supporting editorial storytelling while keeping full creative control in the hands of the artist.

Cora’s practice is grounded in photorealism and composition, where the AI model becomes a collaborative lens, a way to visualize surreal or metaphorical moments that cannot be captured by a camera.

We use these tools not because they are trendy or controversial, but because they help realize visions that would otherwise be impossible. And yet, we are not blind to the controversies that swirl around them.

The public debate about AI-generated artwork often centers on competition: whether these tools represent an unfair encroachment into the domain of working artists. Critics raise valid concerns about copyright infringement, especially as models are trained on vast datasets that include the work of uncredited or uncompensated artists.

Some fear the erosion of artistic value, worried that “real” human creativity will be eclipsed by algorithmically generated content. These anxieties are not abstract. They play out every day in marketplaces, courtrooms, and classrooms.

Freelancers report losing gigs to clients seeking faster and cheaper content. Photographers worry that editorial AI images will replace on-the-ground documentation. Designers find their work plagiarized, restyled, and redistributed without consent. These are not exaggerated hypotheticals; they are happening now.

We share many of these concerns in our own freelance work. The legal gray zone in which most AI art tools operate is dangerous, particularly when tech companies sidestep attribution, licensing, and fair compensation. Morally, it becomes difficult to defend AI generation when it is used to replicate the labor of others rather than to contribute something new.

Economically, the devaluation of creative work threatens all artists, including those who use AI ethically. That is why it matters who is creating the work, and how it is being used.

At Milwaukee Independent, these distinctions are taken seriously. As contributing visual artists, our use of AI is strictly bounded by the publication’s policy:

“Milwaukee Independent does not use AI-generated images in place of documentary photojournalism for news stories. Any visual content created with artificial intelligence tools is used exclusively for illustrative or editorial commentary purposes.”

This policy sets a clear ethical line. AI is not treated as a journalistic substitute. It is a visual commentary tool, clearly labeled and contextually appropriate. When we generate or enhance images, they are never presented as factual documentation. They are used to accompany editorials, concept pieces, or stories where real-world photography is either impossible or inappropriate.

Each piece is tagged, footnoted, and credited accordingly. There is no deception. There is no attempt to blur fact and fiction. This is not only a matter of transparency. It is a safeguard for the integrity of journalism.

In an era of misinformation, deepfakes, and digital manipulation, reader trust is paramount. By enforcing strict guidelines around AI image usage, Milwaukee Independent ensures that its core reporting remains grounded in reality, even as it allows space for conceptual exploration of visual content. Our work reflects that balance.

Where this becomes even more critical is in how we, as artists, navigate the divide between intention and interpretation. AI tools can produce astonishing visuals, but without ethical intent and editorial clarity, their use risks misleading audiences or diluting the value of authentic storytelling. That is why we have committed to a disciplined process, one that prioritizes editorial responsibility over spectacle, and collaboration over automation.

Every AI-assisted image we produce is treated as editorial content, not filler. The decision to create an AI illustration is never made lightly or in isolation. Each image is crafted to extend the meaning of the story it accompanies, not to substitute for it.

This is particularly important in coverage of abstract topics such as psychological phenomena, historical memory, or systemic analysis. In those cases, there may be no photograph that could accurately represent the subject. A conceptual image allows the reader to engage visually with ideas that would otherwise remain invisible.

It is an opportunity to reimagine what visual journalism can be when freed from the constraints of literalism. Through responsible use of AI, we can depict the metaphorical mind of a movement, the emotional impact of a crisis, or the surreal disconnect between rhetoric and reality. We can build visual bridges to stories that defy traditional representation.

But even in these moments, AI does not act alone. At every stage, the human hand and mind remain in control. What results is not an automated machine-made product, but a hybrid artifact shaped by critical thinking, artistic intention, and editorial review. No image, whether drawn or generated, is ever permitted to mislead the reader.

That process reflects a broader commitment to ethical journalism. Just as reporters follow sourcing standards and fact-checking protocols, so too must visual contributors follow creative standards rooted in clarity and accountability.

But none of this works without trust. That is why transparency must remain the bedrock of every image we publish. For us, the power of AI lies not in its novelty, but in its ability to serve a deeper purpose. It is not the machine that matters; it is the message. And when AI is used ethically, artistically, and in full view of the audience, it becomes a tool not of replacement, but of relevance.

As artists committed to editorial integrity, we continue to ask hard questions about authorship, authenticity, and impact. We welcome these debates, not as threats, but as necessary steps toward a future in which art, journalism, and technology can coexist, with each part strengthening the others, rather than undermining them.

In that spirit, we do not claim to represent all artists working with AI. But we hope our approach offers an example of how it can be done: ethically, and above all, with intent.

© Visual Art by Isaac Trevik
• created using generative AI and digital editing