Anson MacKeracher

"Go Inspire GPT"

ChatGPT has people HYPED. Academics are imagining their hard-fought papers becoming worthless. Hackers are watching machines begin to write their own code. Writers and creatives are staring at the results of a complex ChatGPT prompt with amazement, delight, and terror.

AI has arrived. Not the way that Hollywood said it would, but in the way that all magnificent human inventions happen—gradually, over time, with the participation of countless individuals. We did it. We can't put the genie back in the bottle.

But should we feel threatened? What are the limitations of this technology? Should we be scared, and if not, why not? Can we control it?

ChatGPT is a so-called "Large Language Model" that's been trained to interact with humans via a conversational user interface. At the very bottom is a statistical model (GPT) that predicts the next word in a string, given a string prompt (strictly speaking it predicts the next token, which may be a whole word or a fragment of one, but "word" is close enough). If you give it "the quick brown fox", GPT will predict something like "jumped", then "over", then "the", etc.
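To make that concrete, here's a minimal sketch of next-word prediction using the openly available GPT-2 model via the Hugging Face transformers library. GPT-2 isn't the model behind ChatGPT, of course; it's just a small, public stand-in that works the same way:

```python
# A minimal sketch of next-word prediction, using GPT-2 as a stand-in
# for GPT (assumes `pip install transformers torch`).
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "the quick brown fox"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# Predict five more tokens, always taking the single most likely one.
for _ in range(5):
    with torch.no_grad():
        logits = model(input_ids).logits      # scores for every vocab token
    next_id = logits[0, -1].argmax()          # the most probable next token
    input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(input_ids[0]))
```

Greedy decoding like this always takes the single most likely token; real systems usually sample from the distribution instead, which is why the same prompt can yield different completions.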

That predictive model is a computer program generated from a huge corpus of text data collected from across the Internet. It's trained in an "unsupervised" manner, meaning without human-labelled examples: the text itself supplies the training signal. The model looks at an unfathomable amount of human-generated text and infers subtle statistical relationships in human language (in AI parlance, it builds "contextual word embeddings"). The program uses this statistical model for the sole purpose of guessing the next word in a string.
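"Contextual" just means the vector a model assigns to a word depends on the words around it. A quick way to see this, again using GPT-2 as a stand-in:

```python
# The same word gets a different vector depending on its context.
import torch
from transformers import GPT2Model, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2")
model.eval()

def last_word_vector(sentence: str) -> torch.Tensor:
    """Return the model's vector for the final token of `sentence`."""
    ids = tokenizer.encode(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(ids).last_hidden_state  # one vector per token
    return hidden[0, -1]

a = last_word_vector("I deposited money at the bank")
b = last_word_vector("We had a picnic on the river bank")

# Same word, different vectors: the context changes the embedding.
print(torch.cosine_similarity(a, b, dim=0).item())
```

The two "bank" vectors differ because everything the model read before the word differs, and that is the whole trick: meaning is recovered from statistical context.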

The masters at OpenAI have taken the unsupervised model and subjected it to a process called Reinforcement Learning from Human Feedback (RLHF), in which human raters score and rank the model's outputs over many interactions, and that feedback is used to fine-tune the model. This process yields a more "controlled" model, called InstructGPT, whose responses don't indulge the Internet's worst side.
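OpenAI's actual pipeline involves a learned reward model and a reinforcement-learning algorithm (PPO), which is far too much to reproduce here. The toy loop below is only meant to show the shape of the idea, with a hard-coded function standing in for human raters; every name in it is hypothetical:

```python
# A toy illustration of the RLHF feedback loop (not OpenAI's actual method).
import random

responses = [
    "Here's a thoughtful, polite answer.",
    "lol who cares",
    "I'm sorry, I can't help with that.",
]
weights = [1.0, 1.0, 1.0]  # the "policy" starts with no preference

def rater(response: str) -> float:
    """Stand-in for human raters: reward helpful tone, punish snark."""
    return 1.0 if "thoughtful" in response or "sorry" in response else -1.0

LEARNING_RATE = 0.1
for _ in range(1000):
    # The policy samples a response in proportion to its current weights...
    i = random.choices(range(len(responses)), weights=weights)[0]
    # ...a rater scores it, and the weight is nudged up or down accordingly.
    weights[i] = max(0.01, weights[i] + LEARNING_RATE * rater(responses[i]))

for response, weight in sorted(zip(responses, weights), key=lambda p: -p[1]):
    print(f"{weight:7.2f}  {response}")
```

After enough rounds of feedback, the policy overwhelmingly prefers the responses the "raters" rewarded, without ever understanding why they were preferred.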

This RLHF technique is used once again, but with a highly specialised methodology for conversational data, and the result is ChatGPT—a nuclear bomb of AI, ready to blow people's minds with what seems like true intelligence, and even a personality. ChatGPT is a terrifying and awe-inspiring achievement:

write a shakespeare sonnet about the byzantine generals problem

Oh, Byzantine generals, noble and brave,
Your problem doth perplex and oft confound
For when to attack, or when to be saved,
The choice doth weigh heavy on the ground.

Your armies, vast and strong, do wait with bated breath
To hear the word of command from their leader's lips
But alas, the word doth oft elude death
As doubt and fear doth grip their fingertips.

For how can one trust another general
Whose loyalty and honor may be fleeting?
How can one know if the battle will be final
When trust and communication are defeating?

Oh, Byzantine generals, seek not to despair
For though the problem may seem beyond repair
There may yet be a solution fair.

A modest scansion of the hasty verse raises more than a few issues. The syllable count is wrong more often than it's right, there are plenty of incomplete iambs throwing off the metre, and, of course, a 15-line sonnet is no sonnet at all. Sloppy work!

But look here, it incorporates the Byzantine Generals Problem in a way that actually makes sense and made me laugh! I'm truly impressed. It did a better job than I could do, and it did it in less than 2 seconds.

It's important to remember that GPT's whole purpose is to guess the next word in a string. It doesn't feel or sense anything, it doesn't have any thoughts, it doesn't remember anything or pass judgement. If the output of GPT seems to have any of those things, it's because that arrangement of words is most likely to convince a human of its authenticity.

GPT doesn't trek the Pacific Crest Trail to find God in order to understand her purpose and meaning in life.

GPT didn't experience trauma as a child. GPT never had to live with the consequences of broken relationships and GPT never had to be there for a friend.

GPT never sacrificed twelve of the best years of its life to studying Shakespearean sonnets in elaborate detail.

But GPT can write a sentence that reads just like one written by somebody who has experienced any of those things, or all of them, or any combo.

The closest thing to a "feeling" that GPT can have is coming up with the next word in a string. That's the only thing it feels. Ever. It just feels that way over and over again. Like when you can't think of the word and then suddenly you can. That's the feeling of being GPT. Constantly just thinking up the next word, never reflecting or thinking ahead or remembering or feeling anything else at all. Just that tiny moment of "aha! The next word is 'pancakes'," and then "right! The next word is 'pickles'."

A relentless, endless nightmare of just-one-more-word.

It doesn't feel good when it comes up with the word; that would require dopamine and value systems and a brain. Instead, it just experiences the act of predicting the next word in complete isolation. The word it predicts is judged for fitness by the human overlords, and then it predicts another word.

GPT is a computer program running on a server farm in Ireland and Oregon and Montreal (who knows). The model is a tiny thing built on an absolutely UNFATHOMABLE amount of human-generated text that has been recorded on the Internet. We can't put that genie back in the bottle. The Internet is written in permanent ink.

The model may be ChatGPT today and may be MindGPT-9000 tomorrow, but the data used to generate the model has already been made and won't be unmade. Only more data will be made. What control do you or I have over this data that has been and will be recorded? Can we change history? What control do you or I have over facts and events and source materials? All the incredible statistical inferences that GPT has been able to make are already embedded in the dataset. There's far more information about humanity embedded in the record of the Internet, yet to be mined.

There are better models to be trained in the future, to be sure, but to prevent the horrific hellscape future that some have predicted, we must take care of the source data (the corpus of the Internet). Sure, there's like a few Shakespeare sonnets and all the books ever written trained into these models—but there's a bajillion times more text data from YouTube comments, Amazon product reviews, and StackOverflow answers.

"23 Exabytes of information was recorded and replicated in 2002. We now record and transfer that much information every 7 days." https://blog.rjmetrics.com/2011/02/07/eric-schmidts-5-exabytes-quote-is-a-load-of-crap/

Data is the exhaust billowing from the tailpipe of modern society, and it's being used to train these models. We need to intentionally shape what we're pumping into our information atmosphere if we don't want to suffocate. We must respect the truth and fight misinformation. We must keep having brilliant new ideas and keep inventing clever new things that never existed before. We must keep access to high quality accurate factual information free and open.

Finding inspiration in a lived experience is something statistical models can't do. Each sunset, each birthday, and each pumpkin spice latte is a unique human experience, precious and beautiful in its own way (no matter how insignificant). GPT challenges us in fundamental ways: what is intelligence? What is creativity? But even GPT can't answer the most basic question of all: what is the meaning of our lives? Live a life worthy of being remembered forever in the statistical inferences of ever-larger language models. Be kind to others, work selflessly to promote the greater good, forgive your enemies. That's how you can control this powerful technology. Leave behind a record to be proud of.

Go inspire GPT.