What’s the Move with AI?
AI is crossing the threshold into our daily lives, and it's about time we, as a society, have a conversation about the way forward. If we shouldn't remain passive about it, but we also shouldn't fuss over it too much, then what's the move?
May 24, 2025
Artificial Intelligence has confidently nestled itself into the very fabric of our daily lives.
From the obvious, like AI chatbots and other language-based systems, to the less blatant, like the recommendation algorithms in our favorite social media apps, it's quite clear that AI is here to stay.
It’s fascinating how AI has sparked such a wide range of reactions—some surprisingly light, others far more serious. On the milder end of the spectrum, many Gen-Zers have turned to ChatGPT as a kind of stand-in for therapy, using it for advice and emotional support as they navigate daily life.
With mental health being central to the generation’s values, and economic uncertainty adding extra stress, it’s understandable why some would lean on such an accessible tool.
In a more extreme example, a student from Institut Teknologi Bandung (ITB) was arrested earlier this month under Indonesia's cyber law (UU ITE) after creating an AI-generated parody of the iconic "My God, Help Me to Survive This Deadly Love" mural, replacing the original figures of Brezhnev and Honecker with Indonesia's President Prabowo Subianto and former President Joko Widodo.
A common concern is that AI is being used not just as a tool, but to entirely replace human labor.
This has fueled debates around its ethical implications: critics argue that it fosters complacency and incompetence, especially in the increasingly common case of university students using AI language models to do their coursework for them.
It has also prompted wider discussions about responsible AI use, though many struggle to define exactly which kind of AI they oppose, be it image generators, language models, or something else.
At the same time, the crux of the issue lies in how these systems derive their results: by synthesizing vast quantities of data, often drawn from sources of questionable provenance, which raises concerns about theft.
This mirrors the debate around the term "AI artist". The recent trend of using ChatGPT to mimic the art style of Hayao Miyazaki's Studio Ghibli is a prime example: AI is used to strip art of its soul while its output receives credit as if it were the work of the original artist. This cheapens the craft and undermines the skill, creativity, and labor that real artists have put into their work, all while profiting off prompt-based imitations that require no ingenuity whatsoever.
In the most extreme cases, this misuse takes the form of deepfakes: AI-generated images, video, or audio used to misrepresent real people. Misrepresentation itself isn't a new phenomenon, but AI has made it far more realistic and far more accessible. The potential for abuse is enormous, since it's now possible to create convincing, non-consensual pornographic content of anyone, from ordinary people to celebrities. That's why we need proper regulations and safeguards around AI to keep such cases from continuing.
So then what’s the move with AI?
As AI becomes more embedded in our everyday lives, we need to approach it with more than just caution—we need to engage with it deliberately. While the phrase “responsible AI use” is often mentioned, its actual meaning shifts dramatically depending on the situation.
There’s a significant gap between our collective ideals for AI and the reality of individual accountability—a disconnect that urgently needs addressing. As Björk once said of electronic music: “I find it so amazing when people tell me that electronic music has no soul. You can’t blame the computer. If there’s no soul in the music, it’s because nobody put it there.”
The same can be said of artificial intelligence. AI, like music software, is just a tool—void of meaning unless we imbue it with intention. If we fail to design with empathy, ethics, and humanity at its core, we shouldn’t be surprised when it reflects nothing meaningful back.
We cannot—and should not—expect warmth or wisdom from a tool that we’ve shaped without intention. If we want AI to serve us well, then the responsibility begins with us—for the soul of the machine is but a mirror of the soul we bring to it.