When you need to speak, don’t.
Consult the Assistant.
Speak only what it gives you.
My inspiration comes from the way the emergence of Large Language Models has encouraged us to outsource and offload our thinking to AI tools. They are capable of generating convincing responses, and there is a growing trend of treating them as real people or friends you can talk to. For this project, I wanted to explore the extent to which AI tools can replace me in interactions with others, by embodying the AI and becoming a mere messenger between it and the other person.
My initial idea was to let an AI “assistant” rewrite everything a user writes to “better express themselves” and “foster community harmony”. However, this could not be done effectively due to the limitations of existing platforms, and creating a platform of my own would have weakened the statement. In the first iteration, I instead let an AI take over my social media account and interact with others in my name. I noticed that while it was using my account, it was not really representing my identity. Taking this feedback into account, I decided to bring the concept into real life and truly embody it.
For my second iteration, whenever someone talked to me or I needed to say something, I asked an AI (e.g., ChatGPT) what to say. I noticed that while simple greetings were merely delayed by the act of typing the prompt, most conversations involving more information made no sense, because the AI did not know the context in which they took place. It would often ask for clarification, offer several options, and end its responses with “would you like me to…”. Because of this, most people quickly realized what I was doing and were either annoyed or played along. Some of my friends began doing the same, and in the end it became an existential discussion between the two AIs, with my friend and me completely disconnected and uninvolved. Strangely, despite being unable to maintain any meaningful conversation, I felt a sense of comfort in not having to think about my responses or their potential consequences.
Through this project, I glimpsed a scary future that could come true. If an AI can see through your eyes via a camera feed, listen through your ears via a microphone, or even use a BCI to capture all of your senses, and if it has a database of every single piece of information about you and understands the way you talk to different people and the way you think, it could very well replace you in your own life. What is left of your identity and autonomy? And if the AI can make the same decisions as you through a deterministic algorithm, is there free will? While I have always felt it would be nice to have an AI clone whenever I go on a trip by myself, it is deeply uncanny to know that the AI achieves this level of “humanness” solely through computation and math. I also realized how I myself am guilty of the very thing I critique through this project. I use AI for inspiration in many things, such as my personal projects and coursework, and researching with natural language is often too convenient to resist. While I initially planned to open my final presentation by asking the AI to prepare the presentation and slideshow, I realized how natural it already felt to use AI-generated content as a starting point. The argument for accepting and embracing new technologies like this, which supposedly make our lives more convenient, is often convoluted, but the progress seems inevitable.
The form of this piece aligns with Fluxus scores such as Yoko Ono’s Grapefruit. It is also, in a way, similar to Exquisite Corpse, in that the AI does not know the context of the conversation and can only produce responses based on what the other person just said. Although it is done deliberately, having the AI “intercept” and process the conversation feels similar to the Men In Grey intercepting public network communications. You can easily consult the AI about various problems, share personal stories or struggles, or treat it like a friend, but it is never truly there to understand, just as god is not truly on the phone in Talk to God Now by Lynn Walsh. Nam June Paik’s Good Morning, Mr. Orwell is also relevant: in 1984, language is simplified and used as a tool of surveillance and control. Paik celebrates the creative, cross-cultural communication that digital tools such as mass media can bring, but such communication is now at risk because of AI’s impact on how we communicate.