Understanding ChatGPT: How ChatGPT Works

18 Minute Read

This chatbot is a champion conversationalist. It can talk like a human, using clever techniques to make its responses sound natural. Want to know how? Read on!


You’ve no doubt read about OpenAI’s latest groundbreaking technology and how it’s been whipping the internet into a frenzy.


We’re talking about ChatGPT (Generative Pre-trained Transformer) of course.


The sentient-like chatbot that will steal your job, write your college papers and reject the notion that Christopher Columbus arrived in America circa 2015 (wait, what?).


In this article, we’ll examine some of the science behind this sophisticated AI to show you how ChatGPT actually works and tackles such a wide range of tasks.


Don’t worry, we’ll keep it as human-friendly as possible!



1. What does ChatGPT stand for?

2. Large Language Model: what does it mean?

3. Supervised vs. unsupervised learning in AI

4. RLHF model: what is it?

5. How ChatGPT works in practice

6. Conclusion

7. FAQs


What Does ChatGPT Stand For?

To truly comprehend the intricacies of ChatGPT, we must first uncover the meaning behind its name. So, let's dive into the acronym and decode it together.


  • The "Chat” component refers to the chatbot itself - the virtual assistant responding to your input sentence.
  • The “Generative” component refers to the AI’s ability to generate natural-sounding human text.
  • The “Pre-trained” component refers to all of the text datasets that have been fed into the model.

    In ChatGPT’s case, this is reportedly around 45TB worth of data, including books and texts, which roughly equates to one million feet of bookshelf space.

    Think about how you might study for a test by reading a few books before sitting an exam; the “training” is kind of like that.
  • The “Transformer” component refers to the machine learning architecture that the model is based on. Transformers use previous data (inputs) to understand the context and then make predictions for the output based on this.
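To make the "uses previous inputs to understand context" idea concrete, here is a minimal sketch of the attention mechanism at the heart of transformers, in plain Python. The two-dimensional "embeddings" and their values are invented for illustration; real models use thousands of dimensions and learned weights.

```python
import math

def softmax(scores):
    """Turn raw scores into weights that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention(query, keys, values):
    """Weigh each earlier input (value) by how well its key matches the query."""
    scores = [sum(q * k for q, k in zip(query, key)) for key in keys]
    weights = softmax(scores)
    dim = len(values[0])
    return [sum(w * v[i] for w, v in zip(weights, values)) for i in range(dim)]

# Toy 2-dimensional "embeddings" for three earlier words in a sentence.
keys = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
values = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
query = [1.0, 0.0]  # the word the model is currently predicting from

context = attention(query, keys, values)
print(context)  # a blend of the values, weighted toward the best-matching keys
```

The output vector leans toward the values whose keys matched the query; stacking many such layers (plus learned weight matrices) is what lets a transformer build up context before predicting the next word.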


When we bring this all together we get ChatGPT.


Now, it’s worth noting that all of this fancy technology was implemented in GPT-3, the model that ChatGPT is built on.


The evolutionary jump that separates ChatGPT from its predecessors is the inclusion of Reinforcement Learning from Human Feedback (RLHF) combined with Supervised Learning (more on these later).



ChatGPT Is a Large Language Model: What Does That Mean?




Large Language Models (LLMs), at their most basic level, are fed massive amounts of data that is passed through the transformer architecture, with the final goal of predicting what word will come next in a sequence of words.


The sophistication and accuracy of these predictions are influenced by how much data the model has been trained on and how many parameters (the number of factors it considers before making a decision) it has.


Pretty much the same as how humans think!


How many parameters does ChatGPT have, we hear you ask… The GPT-3 model it builds on has 175 billion, although OpenAI’s research showed that even a 1.3-billion-parameter model, fine-tuned with human feedback, could produce answers people preferred.


The vast amount of data and parameters the model is “trained” with is what makes its computational potency so incredible.


It knows a lot and therefore can produce more complex and accurate answers to user prompts.


What this means for ChatGPT users is that when you ask the AI to do something, ChatGPT will answer your query with an unprecedented level of detail that reads like natural human language.


Whether that’s answering a question, writing code, composing a marketing email headline or a romantic poem.  



Supervised vs. Unsupervised Learning in AI

As we established earlier, ChatGPT is an iteration of the GPT-3 model, which is itself a fine-tuned version of previous models.


What made the GPT-3 model and similar AI models so successful was how they combined a supervised learning process with reinforcement learning from human feedback.


When we talk about the AI pre-training process, it refers to two common approaches: supervised and unsupervised learning.


Supervised learning refers to the process of training a model using training datasets that map an input to a corresponding output. Humans are involved in this process to label data.


Because the data is labelled, the model understands that specific prompts link to specific responses.


All of this data is used to train a neural network, an architecture that acts a bit like a human brain but is made of artificial neurons.


Within the network are all of the digital pathways that an AI travels across to find answers to questions.


Let’s use a customer service chatbot as an example of supervised learning in action:


When you visit a website, give it a few seconds and a chatbox will pop up in the corner of the screen.


These helpful chatbots are used to save the company time and resources while helping customers get answers quickly and resolve a wide range of issues.


When you engage in a conversation with the chatbot, you’re witnessing supervised learning in real-time.


You ask a question (human input) and the chatbot gives a relevant response (output) using the data it's been trained with.


For example, you might ask:

- “What are your returns policies?” and the chatbot will respond with something like

- “We operate a 30-day no-hassle returns policy. You can find out more about how to return your purchase using this link.”


The chatbot has been taught that ‘X’ question equals ‘X’ answer.
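That labelled "‘X’ question equals ‘X’ answer" mapping can be sketched in a few lines; the questions and answers below are invented for illustration, and real chatbots match meaning rather than exact strings.

```python
# Labelled training data: each input was mapped to a known output by a human.
training_data = {
    "what are your returns policies": "We operate a 30-day no-hassle returns policy.",
    "what are your opening hours": "We are open 9am-5pm, Monday to Friday.",
    "how do i track my order": "You can track your order from your account page.",
}

def chatbot_reply(question):
    """Match the user's question against the labelled examples."""
    key = question.lower().strip("?! .")
    return training_data.get(key, "Sorry, I don't know the answer to that yet.")

print(chatbot_reply("What are your returns policies?"))
# → We operate a 30-day no-hassle returns policy.
```

The fallback response is the giveaway: a purely supervised lookup can only answer questions it has explicit labels for, which is exactly the scaling problem the next section describes.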




Now imagine this principle being scaled to millions of inputs and outputs.


It would be a colossal undertaking for a human to train an AI on every conceivable input and outcome that may arise from a user prompt.


That’s where unsupervised learning comes in and also what makes ChatGPT so special.


Unsupervised learning refers to training a model on datasets with no specific output corresponding to the input. No human is involved in labelling the data.


Instead, the model tries to recognise patterns in the input data to provide a contextualised, human-like response.


ChatGPT is essentially predicting what the answer should be based on all of the data it has been trained on.
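As a toy illustration of learning patterns from unlabelled text, here is a tiny next-word predictor. The "corpus" is made up, and real models predict from vastly richer context than a single previous word, but the principle is the same: no human labelled anything, the patterns come from the text itself.

```python
from collections import defaultdict, Counter

# Unlabelled training text: no human has tagged any "correct" outputs.
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Learn which word tends to follow which: a pattern, not a label.
follows = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word][next_word] += 1

def predict_next(word):
    """Predict the most likely next word from the learned patterns."""
    candidates = follows.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("sat"))  # → "on" (every "sat" in the corpus was followed by "on")
```

Scale that counting idea up to billions of parameters and a transformer instead of a frequency table, and you have the essence of how an LLM predicts its answers.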


Now, the obvious criticism, and one that OpenAI, the company behind it, openly admits, is that ChatGPT will sometimes give inaccurate, biased or nonsensical responses.


To combat this and further refine the model, OpenAI uses reinforcement learning with human feedback.



Let's Explain RLHF Models: What Are They?

RLHF is basically like having someone give you feedback on your work until you get it right.


If we apply this to the pre-training stage of an AI like ChatGPT, here’s what happens:


  • The human AI trainers provide both sides of the conversation as the user and the AI assistant.
  • They then use model-written suggestions (from the AI) to compose their responses.
  • The resulting dialogue dataset is then combined with the existing datasets to create a new dialogue format.


OpenAI then pushed this one step further by implementing a reward system for its Artificial Intelligence.


This way it could learn how to weigh its responses and choose the most appropriate answer.




As OpenAI explains: “To create a reward model for reinforcement learning, we needed to collect comparison data, which consisted of two or more model responses ranked by quality.


To collect this data, we took conversations that AI trainers had with the chatbot.


We randomly selected a model-written message, sampled several alternative completions, and had AI trainers rank them.


Using these reward models, we can fine-tune the model using Proximal Policy Optimization. We performed several iterations of this process.”
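The comparison data described above is commonly turned into a pairwise ranking loss: the reward model is trained so that the completion trainers preferred scores higher. Here is a toy sketch of that loss; the reward scores are invented numbers, and in a real system they come from a neural network rather than being set by hand.

```python
import math

def pairwise_loss(reward_chosen, reward_rejected):
    """Ranking loss: small when the preferred answer scores higher."""
    sigmoid = 1 / (1 + math.exp(-(reward_chosen - reward_rejected)))
    return -math.log(sigmoid)

# Hypothetical reward scores for two completions a trainer has ranked.
good_answer_score = 2.0   # the completion trainers ranked higher
bad_answer_score = -1.0   # the completion trainers ranked lower

print(pairwise_loss(good_answer_score, bad_answer_score))  # small: model agrees with the trainers
print(pairwise_loss(bad_answer_score, good_answer_score))  # large: model disagrees
```

Minimising this loss pushes the reward model toward the trainers' preferences; the policy is then fine-tuned against those rewards with Proximal Policy Optimization, as the quote above describes.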



What this all boils down to is the fact that ChatGPT is essentially using a sort of computer logic to determine an answer.


If the input from a user is false, it can flag that and dispute the information being asked of it to create a coherent response.



ChatGPT can also reject toxic queries that are inappropriate or offensive.



By combining supervised learning (pumping all of the information in) and RLHF (feedback and reward), ChatGPT is trained to generate human-sounding responses that are both coherent and, a lot of the time, accurate.


How ChatGPT Works in Practice

So now you understand a bit about what powers ChatGPT but how does this translate into a real-world scenario?



Let’s say you’re shopping for a new car and want to find the most efficient vehicle; you might ask ChatGPT exactly that.



ChatGPT gathers online information to direct its response but can’t accurately give a definitive answer.


Instead, it uses inference to identify patterns in the user query and language to predict a response that sounds strikingly nuanced and human.


And it’s in these predictions that the magic happens. Through a combination of natural language processing and machine learning algorithms, ChatGPT is utilising all of its “training” to give the best possible outcome for your query.


ChatGPT: Aligning Human Language with Machine Thinking

ChatGPT is a marvellous technological feat. While its applications are still being explored, there is much to be excited about in terms of how AI can assist humans in various tasks.


Aside from any malicious intentions, ChatGPT and AI in general have immense value for society, ranging from healthcare assistance to revolutionizing education and enhancing productivity. The possibilities are endless.


But that’s thinking ahead. What we have now is a very clever chatbot that takes everything it has learned to produce answers when prompted, and for now, that’s good enough.


Want to Learn More About AI for Business?

You’re in the right place! AI is a fascinating field and one that is gaining tremendous traction across the business landscape.


As technology advances, business applications are becoming more plausible in everyday practice.


AI is being used to save time and increase productivity across many different roles and sectors.


It's no longer a far cry into the future; it’s here, available and ready to be implemented.


AI for business application crash course


So how can you learn more about using AI in business? As a leading educational course provider, Growth Tribe offers a 2-day live crash course on AI for business.


Here’s some of what you’ll cover:


  1. Introduction to AI and the digital ecosystem, including ChatGPT.
  2. Learning from data - including supervised, unsupervised and reinforcement learning.
  3. Generative AI, emerging trends and the ethical challenges they pose.
  4. Business applications of AI in marketing, finance, operations, and healthcare, plus the risks and benefits of each.
  5. Real-world case studies and examples and what we can learn from them.
  6. Integrating AI into business strategies.
  7. How to handle data responsibly and use AI ethically.


Here’s what you’ll get:


  • Internationally recognised certificate you can use to progress your career.
  • Fully remote so you can join from anywhere!
  • Access to 30K+ member community for lifelong support.
  • Exclusive live events available to participants only.
  • Contact with leading industry experts so you can trust what you learn.
  • Downloadable resources to solidify your learning.




Even though this course is fully remote and online, to ensure the highest standard of learning and quality, we limit the number of spaces. This helps both the learners and the educators focus on delivering the course content in the best way possible.


Register your interest and find out more here.


And get this: if you live in the Netherlands, you are entitled to €1000 to spend on any course you want, making the AI for Business Course absolutely FREE!* But how? Thanks to the STAP budget!

STAP budget banner



How to Hack ChatGPT?

In our opinion, the best way to hack ChatGPT is to get to know it as well as you can.


Knowing how to structure your prompts is the key: prompt engineering is a real job.

Learning by doing is the best tip we can give you.


Here are two LinkedIn creators that can help you master ChatGPT:


Isabella Bedoya 



Ruben Hassid




How Many Words Can ChatGPT Handle?

Even large language models have limits. ChatGPT can only handle up to 3000 words at a time. That's about 5 pages of text.


If you give ChatGPT a prompt that's longer than 3000 words, it will either stop processing the prompt or it will generate a response that's incomplete or inaccurate.


There are a few ways to work around the 3000-word limit. One way is to break up your prompt into smaller chunks and feed those chunks into ChatGPT one at a time. Another way is to use the "continue" command to tell ChatGPT to continue generating a response from where it left off.
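The first workaround, breaking a long prompt into smaller chunks, can be sketched in a few lines. The 3000-word figure simply follows the limit mentioned above; in practice the model's limit is measured in tokens, not words, so treat this as a rough illustration.

```python
def split_prompt(text, max_words=3000):
    """Split a long prompt into chunks the model can handle one at a time."""
    words = text.split()
    return [
        " ".join(words[i:i + max_words])
        for i in range(0, len(words), max_words)
    ]

# A made-up 7000-word prompt becomes three chunks: 3000 + 3000 + 1000 words.
long_prompt = "word " * 7000
chunks = split_prompt(long_prompt)
print([len(chunk.split()) for chunk in chunks])  # → [3000, 3000, 1000]
```

You would then feed each chunk to ChatGPT in turn, asking it to wait for the next part before responding.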


It's important to remember that the 3000-word limit is not the only limitation of ChatGPT. ChatGPT can also be inaccurate or incomplete when processing prompts that are complex or that require a lot of background knowledge.


If you are using ChatGPT for a task that requires accuracy or completeness, it's important to be aware of these limitations.


How to Register for ChatGPT?

  1. Sign Up for an OpenAI Account
  2. Verify Your Account
  3. Accept ChatGPT Terms & Conditions
  4. Start Chatting!

You can find a more detailed guide here.




Learning with Growth Tribe couldn't be easier. All of our courses are designed to be flexible for the learner with self-paced content so you can manage your time and learning, to best suit your lifestyle. 


Join a community of over 25,000 certified alumni who share a passion for growing their skills and future-proofing their careers. 


*Are you eligible for STAP? 

The staff training assistance program, or STAP, offers up to €1000 in funding to get fully certified in in-demand skills such as Digital Marketing, Business & Data Analytics, UX Design, and Project Management.


To be eligible, all you need is to:

  1. Be a Dutch citizen with a BSN number

  2. Be aged between 18 and 67

  3. Have earned Dutch income for at least 6 months


For more details on how to apply, click here and visit the STAP page