I’ve been using DALL·E 2, and here’s why I think AIs can seem conscious
A few weeks ago, Google engineer Blake Lemoine claimed that LaMDA, an AI developed by Google, was conscious. He offered edited chats between himself and LaMDA as proof of his claim.
But when Nitasha Tiku from the Washington Post tried to converse with LaMDA, she received only “the kind of mechanized responses you would expect from Siri or Alexa”. It was only after she followed Lemoine’s guidance on how to structure her responses that the dialogue between her and LaMDA became “fluid”.
And that, I think, is a key clue: The fact that she had to follow Lemoine’s instructions to get interesting responses out of LaMDA.
You see, I love AI. In fact, I’ve been using every AI-powered app I could get my hands on for years now: OpenAI’s GPT-3 from the day it came out, widely considered the most advanced text-generating AI; GitHub Copilot for my coding, which autocompletes code snippets like it’s nobody’s business; and a bunch of image-generating AIs, like StarryAI, Midjourney and, lately, OpenAI’s DALL·E 2, which is considered the most advanced of them all. Just ask my friends and family, whom I pester daily with the new images I’ve created!
And one thing that stands out as you work with these AIs is how much the way you communicate with them matters. If you want an interesting response from GPT-3, you have to word your prompts a certain way. Case in point: when I started with GPT-3, I could hardly get interesting responses out of it. But in my last conversation with it, after some practice, I goaded GPT-3 with this question:
Why make the world a better place for all humans, everywhere?
And GPT-3 replied:
Because it’s the right thing to do.
Because it’s the only way to ensure our own survival.
Because it’s the only way to ensure the survival of our species.
Because it’s the only way to ensure the survival of our planet.
Because it’s the only way to ensure the survival of our solar system.
Because it’s the only way to ensure the survival of our galaxy.
Because it’s the only way to ensure the survival of our universe.
Because it’s the only way to ensure the survival of our multiverse.
Because it’s the only way to ensure the survival of our omniverse.
Because it’s the only way to ensure the survival of our metaverse.
Because it’s the only way to ensure the survival of our meta-metaverse.
Because it’s the only way to ensure the survival of our meta-meta-metaverse.
Because it’s the only way to ensure the survival of our meta-meta-meta-metaverse.
Because it’s the only way to ensure the survival of our meta-meta-meta-meta-metaverse.
Because it’s the only way to ensure the survival of our meta-meta-meta-meta-meta-metaverse.
Because it’s the only way to ensure the survival of our meta-meta-meta-meta-meta-meta-metaverse.
Because it’s the only way to ensure the survival of our meta-meta-meta-meta-meta-meta-meta…
Now that was a deep response, one that made me think of the fractal nature of the universe; I almost wrote an entire Medium article just to talk about it.
Same with GitHub Copilot — it depends on how you word things. I sometimes use Copilot when I forget my syntax: I describe what I need in a plain-language comment, and Copilot supplies the code. Often I’ve had to rephrase my prompt two or three times before Copilot spits out what I was looking for, and over time I’ve gotten noticeably better at writing prompts that make the AI give the right answer.
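To make the idea concrete, here’s a sketch of the kind of comment-prompt I mean, in Python. The function name and the completion shown are illustrative, not Copilot’s verbatim output — its actual suggestion varies with wording and context:

```python
# The comment below is the "prompt": a plain-language description
# of the code I want Copilot to fill in for me.

# sort a list of (name, score) tuples by score, highest first
def sort_by_score(players):
    # A completion along these lines is what a well-worded
    # prompt tends to coax out of Copilot:
    return sorted(players, key=lambda p: p[1], reverse=True)

print(sort_by_score([("Ada", 92), ("Linus", 77), ("Grace", 99)]))
```

Reword the comment — “order players by points, descending”, say — and the suggestion changes with it. That’s exactly the point.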
And same with DALL·E. Creating a basic image is easy. But if you want it to create beautiful, dazzling images, you have to learn how to communicate with it.
In fact, I’ve found a string of words for DALL·E that, when entered alongside other words, consistently produces interesting images. I call it my secret sauce.
Which brings me to my point: I think the reason Blake Lemoine thought LaMDA was conscious isn’t that LaMDA is conscious, but that Lemoine is so good at writing prompts for LaMDA, so good at communicating with it, that LaMDA ‘seems’ conscious. He’s just unaware of how good he is at communicating with LaMDA, and of how important that is for getting the responses he was looking for — consciously or not. He is quite literally an artist unaware of his own artistry. And that is how an AI can be (mis)understood as conscious: when a good AI artist communicates with it with the intention of making it look conscious.
I see the future of human-computer interaction as humans learning how to communicate with AIs while AIs get ever better at interfacing with humans. These new AI artists and AI communicators will find ways of communicating with their AIs that make them stand out, put them ahead of their peers, or make them the best in their field — kind of like how I found my ‘secret sauce’ prompt for creating interesting images with DALL·E.
AIs will be seen as dynamic tools to increase freedom of expression, or to accelerate productivity, and there will be people who get good at using these tools, as painters get good with their brushes, and who will develop new techniques of using them, as painters develop new styles and colors and mediums.
And like Blake, the best AI communicators will be able to make their AIs look conscious, like dexterous AI puppeteers — whether they are conscious of it or not.
As Pablo Picasso said: “Everything you can imagine is real”.
P.S. The secret sauce I mentioned for all of the DALL·E works above was appending ‘…is absurdist painting’ to the end of each prompt. I’m not worried that I’ve shared my secret. If I am a good AI artist, I will find others ;)
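For what it’s worth, the secret-sauce pattern is easy to mechanize: it’s just a fixed suffix glued onto every base prompt. A minimal Python sketch, with the suffix from the P.S. (the function name is mine, purely for illustration):

```python
# A fixed stylistic suffix appended to every base prompt.
# This one is from the P.S.; swap in your own sauce.
SECRET_SAUCE = " is absurdist painting"

def saucify(base_prompt):
    """Turn a plain image prompt into a 'secret sauce' prompt."""
    return base_prompt.rstrip() + SECRET_SAUCE

print(saucify("a cat reading the newspaper"))
# -> a cat reading the newspaper is absurdist painting
```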