This article was not written by ChatGPT

The article raises concerns about the potential decrease in creative output and the impact on social intelligence and problem-solving.
June 6, 2023

However strange it may seem, these words might be interpreted more often by artificially intelligent models such as GPT-3 than by human brains. These models are trained to learn from articles like this one, ironically sparing us from having to read or write at all.

We talk a lot about the future of AI, and the future of humankind. Often, we even conflate the two into one. Many have claimed that the next phase of human evolution will largely be determined by AI. But what if that means that all creative output, by humans and computers, comes to a halt?

Looking at all the developments within companies like Alphabet (Google), Microsoft and Meta (Facebook), you can see AI is evolving like never before. There was a time when the world was shocked by a computer capable of defeating a chess grandmaster. Today it does not surprise us when a computer “writes” a comprehensive essay or “paints” a painting.

Soon you won’t need to write a reply to an email. You can simply tell Gmail to respond “yes”, right after you have asked it to summarize the incoming email for you. Google will then write an elaborate email that no one will read, because the addressee will ask Microsoft to summarize it for them. People will no longer need to read or write. AI will be writing to AI, producing text that is nothing but padding around the only thing that was actually meant to be communicated: “yes”.

All the data these AI models are trained on is data already present on the internet: words that were written by people. The models are trained to look at so many different aspects of bodies of text that they can represent the meaning of those texts in such detail that they “understand” our way of communicating at a level incomprehensible to humans. Billions or even trillions of parameters, each containing some tiny piece of meaning in the “brains” of these models.

But the represented information, and the abstract concepts these models learn, are nothing more than numerical representations of concepts people have been producing for years. AI can reproduce, combine, filter, summarize and do many other incredible things, but only because it was trained on this human data. Microsoft and Google have recently been dropping hints about how they are going to incorporate these models into everyday applications such as Google Docs, Slides, Gmail, Drive, MS PowerPoint, Outlook and so on. They will, without question, soon be finishing our sentences, writing complete documents, summarizing meetings, creating presentations, and on and on it goes.

A question to ask, then, is: are we now headed for a world in which we limit, or even completely stop, creative output? You see, the models might look very creative, but they are simply applying what they have learned. If the models are trained on what is essentially a snapshot of our creative output to date, then whatever comes out of them is an application of precisely that. They will reproduce whatever concepts they have learned, and no more. They might produce different words, different sentences, but the learned concepts remain the same.

One could argue that the tasks AI will take over are not the ones we need to do ourselves, and that people will be empowered to spend their time elsewhere, doing things that require genuine human creativity. If that is the case, our creative production will only rise. But what I find alarming is that presentation, connection, in fact communication on any level, is so deeply tied to human social intelligence that outsourcing it will affect how we think, talk, learn, solve problems and perform many other intelligent activities. What will we learn in school once we never have to speak another language again? Never have to write a job application again? Never have to read long articles or books again? Do we really understand something if we are never challenged to explain it? Are we sure of our political opinion if we never have to discuss it with one another? Have you really thought through a plan if you never write it down?

One thing we do know is that the train has left the station, and we can no longer stop it. Large (language) models such as GPT are popping up left and right. Google Bard and Microsoft Bing are already deeply invested in winning the AI wars. How do we navigate this new world? One very important thing to remember, in my eyes, is that we as humans still provide insights, intelligence and knowledge that are not (yet, or ever, depending on your philosophical stance) matched by these end-to-end generative AIs. Keep challenging yourself to be creative and use your brain. Read emails, make presentations, write essays. Use generative AI to empower you, not to replace you.

The research we have been doing at Y.digital suggests that we can do just that and use the models to our advantage, preferably confined to small tasks where they can make supportive suggestions within the system. These suggestions, combined with the right domain knowledge and smart use of machine learning and other types of automation, are, in our belief, the most reliable and efficient way to empower ourselves.
Want to know more?
Are you ready for a large-scale implementation? Or maybe you want to start small and get your feet wet with AI? Y.digital helps you attain your ambitions, regardless of their size.
Book a meeting