Demystifying Generative AI: Your Essential Glossary for Large Language Models

Charanjit Singh
14 Jul 2023

Generative AI, and more specifically Large Language Models (LLMs), has been revolutionising the field of digital marketing. By automating and enhancing content creation, these models open up a world of possibilities for marketers adept at harnessing their power. The potential applications range from automated ad copy and blog post generation to intelligent chatbots for customer service.

However, to truly harness the capabilities of these AI models, it's important to understand the key concepts and terminology in this rapidly evolving field. This glossary provides a comprehensive list of terms that are central to the understanding and application of Generative AI and LLMs in the context of marketing.

From base models and transformer networks to fine-tuning techniques and responsible AI, these terms represent the essential building blocks for integrating AI into your marketing strategy. Whether you're a seasoned AI practitioner or a marketer keen to leverage the latest AI tools, these 21 terms serve as a handy reference to navigate the fascinating world of Generative AI and LLMs.

  1. Adaptive Layers: Small layers added to a neural network that adapt its behaviour to a new task without changing the underlying pre-trained model.
  2. Base Model: The original model before any fine-tuning or specific training is applied.
  3. Catastrophic Forgetting: A phenomenon where a neural network forgets previously learned information upon learning new data.
  4. Developer-Specific Fine-Tuning: Customisation of a model by a developer to perform specific tasks or applications.
  5. Fine-Tuning: The process of adjusting the parameters of an already trained model to perform a specific task.
  6. Full Fine-Tuning: Fine-tuning that involves adjusting all the parameters of a model, which can be compute- and memory-intensive.
  7. Generative AI Project Life Cycle: The process or stages involved in creating and implementing a generative AI project.
  8. Instruction Tuning of Large Language Models: The process of adjusting large language models to better respond to specific instructions or tasks.
  9. Large Language Models (LLMs): Large Language Models (LLMs) are advanced deep learning models adept at processing and generating human-like text. Trained on extensive datasets, they excel in tasks like text generation, translation, and summarisation, making them vital tools in the field of natural language processing. Examples include ChatGPT and Microsoft’s Bing Chat.
  10. Low-Rank Adaptation (LoRA): A parameter-efficient fine-tuning technique that freezes the original model weights and trains small low-rank matrices instead, sharply reducing memory requirements (see the final sketch after this list).
  11. Multi-Headed Self-Attention Mechanism: A key component of Transformer networks that allows the model to focus on different positions of the input simultaneously.
  12. Original Model Weights: The initial parameters of a neural network before any fine-tuning is applied.
  13. Parameter Efficient Fine-Tuning (PEFT): A set of methods that allow a model to be fine-tuned with lower computational and memory requirements.
  14. Pre-trained Model: A model that has already been trained on a large dataset and can be fine-tuned for specific tasks.
  15. Prompting: The act of providing an input (a prompt) to a language model to elicit a specific output (see the first example after this list).
  16. Reasoning Engine: A component of an AI system that enables it to draw logical conclusions and make decisions.
  17. Reinforcement Learning from Human Feedback (RLHF): A technique to improve AI models by aligning them with human values using feedback and reinforcement learning algorithms.
  18. Responsible AI: The practice of using AI ethically and responsibly, considering factors like transparency, fairness, and privacy.
  19. Self-Attention: The component of Transformer networks that allows the model to weigh the importance of each word in a sentence relative to the others (see the second example after this list).
  20. Subroutine Calls: Instructions within a program that call for certain actions, such as a web search.
  21. Transformer Networks: A type of model architecture used in many state-of-the-art language models that uses self-attention mechanisms to process input data.
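
For readers who want to peek under the hood, the three short Python sketches below illustrate prompting, self-attention, and LoRA. They are minimal, illustrative examples rather than production code.

First, prompting. This sketch sends a marketing-flavoured prompt to OpenAI's chat API using the openai Python package as it existed in mid-2023 (v0.x); the model name, prompt text, and API key are placeholders, and the client library has evolved since, so treat this as illustrative.

    import openai

    openai.api_key = "YOUR_API_KEY"  # placeholder: use your own key

    # The prompt steers the model toward a specific output, here a piece of ad copy.
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You are a concise marketing copywriter."},
            {"role": "user", "content": "Write a one-line ad for a reusable coffee cup."},
        ],
        temperature=0.7,  # higher values produce more varied copy
    )

    print(response["choices"][0]["message"]["content"])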
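Second, self-attention. This sketch implements scaled dot-product attention, the core of the self-attention mechanism, in plain NumPy with toy dimensions; real Transformer networks run many such heads in parallel (the multi-headed mechanism in term 11).

    import numpy as np

    def scaled_dot_product_attention(Q, K, V):
        # Score how relevant every position is to every other position.
        d_k = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d_k)
        # Softmax turns scores into attention weights that sum to 1 per row.
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)
        # Each output is a weighted mix of the value vectors.
        return weights @ V

    # Toy example: 4 tokens, each represented by an 8-dimensional vector.
    rng = np.random.default_rng(0)
    x = rng.normal(size=(4, 8))
    output = scaled_dot_product_attention(x, x, x)
    print(output.shape)  # (4, 8): one context-aware vector per token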
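Finally, LoRA. This PyTorch sketch shows the core idea under simplifying assumptions: the pre-trained weights are frozen and only two small low-rank matrices, A and B, are trained, so the effective weight becomes W + BA.

    import torch
    import torch.nn as nn

    class LoRALinear(nn.Module):
        # A frozen linear layer plus a trainable low-rank update: W + B @ A.
        def __init__(self, in_features, out_features, rank=8, alpha=16.0):
            super().__init__()
            self.base = nn.Linear(in_features, out_features)
            for p in self.base.parameters():
                p.requires_grad_(False)  # the original model weights stay frozen
            self.A = nn.Parameter(torch.randn(rank, in_features) * 0.01)
            self.B = nn.Parameter(torch.zeros(out_features, rank))  # starts as a no-op
            self.scale = alpha / rank

        def forward(self, x):
            return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

    layer = LoRALinear(768, 768)
    trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
    print(f"Trainable parameters: {trainable}")  # 12,288 vs roughly 590,000 for full fine-tuning

Because only A and B receive gradients, the optimiser state shrinks dramatically, which is exactly the saving that parameter-efficient fine-tuning (term 13) is after.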
