Development, Optimization, Localization, and Personalization Based on LLMs

The tech field has repeatedly been disrupted when the right combination of technology and user experience comes together. Generative AI–infused experiences bring a great opportunity for intelligent product development. Beyond harnessing AI’s capabilities for business and real products, we must also ensure localization and personalization and operate with a clear, customer-centric intent and goal.

There are multiple strategies for integrating large generative AI models into production and then optimizing, localizing, and personalizing them.

Large deep neural networks trained on large-scale data have achieved remarkable success in both research and real-world products. However, deploying these large-scale AI models to real production systems, especially mobile devices and embedded systems, remains a great challenge given the cost, computational resources, and memory capacity involved. The main purpose of teacher–student distillation (see Figure 2.1) is to train a small student model that simulates the large teacher model with equivalent or superior performance.9 Another advantage of teacher–student distillation is that when we do not have enough labeled data, the teacher model can help generate “pseudo-labels” for training the student model. These pseudo-labels are then used to train the smaller student model, helping it learn and perform tasks as if it had been trained on a fully labeled dataset. Put more simply, imagine you’re playing a video game, and there’s a really tough level that you can’t beat. So, you call in an expert friend.

The three main components of the teacher–student distillation framework include knowledge, distillation algorithm, and teacher–student architecture.

Figure 2.1 illustrates two AI models:

  • Teacher model: The teacher model is like an expert friend. It’s very smart but also big and needs a lot of power to run.

  • Student model: Like you, the student model is eager to learn. It doesn’t have as much power.

The goal is to make the student AI learn from the teacher AI without needing as much power. The process is such that the teacher model, trained with huge volumes of data, helps the student model by guiding it or giving it tips; in NLP, this is called “knowledge transfer.” Sometimes, the teacher doesn’t have all the answers (or labeled data), so the teacher makes up some good guesses (pseudo-labels) for the student to practice with. It’s like getting hints for your video game level. This way, the student learns a lot and gets really good at the game, so it can almost match the teacher’s skill level.

FIGURE 2.1 The general teacher–student distillation framework

This framework can be useful for any large-scale predictive or generative AI model, although it was originally introduced for image classification. With the rapid development of generative AI, many current large-scale models generalize remarkably well. However, many factors must be considered for real production, including cost, scalability, resource consumption during inference, and adapting the existing model to specific scenarios. For example, building an AI-assisted writing tool that leverages GPT to help users write articles or posts more casually and recognize contextual information means adapting an existing GPT model to one specific scenario. Directly running GPT models in production is very challenging in terms of cost and scalability. The teacher–student distillation framework helps serve lighter-weight models in production and localize the model with task-specific data while still leveraging the existing large-scale model.
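To make the mechanics concrete, the following is a minimal sketch of the classic distillation loss in PyTorch. The teacher and student here are hypothetical classifiers, not any specific production model; the student is trained to match the teacher’s temperature-softened output distribution and, when labels exist, to fit them as well.

    import torch
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, labels=None,
                          temperature=2.0, alpha=0.5):
        # Soft targets: match the teacher's temperature-softened distribution.
        soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
        log_student = F.log_softmax(student_logits / temperature, dim=-1)
        soft_loss = F.kl_div(log_student, soft_targets,
                             reduction="batchmean") * (temperature ** 2)

        # Hard targets: standard cross-entropy when labeled data is available.
        if labels is None:
            return soft_loss
        hard_loss = F.cross_entropy(student_logits, labels)
        return alpha * soft_loss + (1 - alpha) * hard_loss

    # Training step (sketch): the frozen teacher provides logits, and only the
    # student is updated.
    # with torch.no_grad():
    #     teacher_logits = teacher(batch)
    # loss = distillation_loss(student(batch), teacher_logits, labels=batch_labels)
    # loss.backward(); optimizer.step()

When no ground-truth labels are available, the teacher’s predictions serve as the pseudo-labels described above.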

Reinforcement Learning from Human/AI Feedback

As mentioned earlier, InstructGPT/GPT-3.5 was developed by OpenAI to achieve better human alignment and address issues such as factuality and harm. OpenAI collected prompts submitted by customers through the Playground and had human annotators rank the models’ outputs to those instructions. InstructGPT/GPT-3.5 was then fine-tuned from GPT-3 on this data. The success of GPT-3.5 over GPT-3 is mainly due to the reinforcement learning from human feedback (RLHF) technique, which is used to fine-tune GPT-3 with human labels as a reward signal (see Figure 2.2).10

FIGURE 2.2 The reinforcement learning framework

The human annotators compare and rank multiple outputs from GPT-3 corresponding to each prompt. Based on this labeled data, a reward model is trained to predict the preferred output. Lastly, this reward model serves as the reward function, and the policy is optimized to maximize the reward using the proximal policy optimization (PPO) algorithm.
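To illustrate the reward-modeling step, here is a minimal sketch, assuming a hypothetical reward_model that scores a (prompt, response) pair; it is trained with a pairwise ranking loss so that the human-preferred response scores higher than the rejected one.

    import torch
    import torch.nn.functional as F

    def pairwise_ranking_loss(reward_model, prompt, chosen, rejected):
        # Score both responses to the same prompt (shape: (batch,)).
        r_chosen = reward_model(prompt, chosen)
        r_rejected = reward_model(prompt, rejected)
        # Maximize the log-probability that the chosen response ranks higher:
        # loss = -log(sigmoid(r_chosen - r_rejected))
        return -F.logsigmoid(r_chosen - r_rejected).mean()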

Imagine you’re teaching a teenager how to ride a snowboard for the first time. You want them to learn fancy tricks, but every time they try something new, you don’t want them risking a big crash. The PPO algorithm is like a smart snowboard coach for the teen. It has a rule: “Try new turns and new moves, but nothing too different from what you already know, or you will definitely fall.”

Here’s how it works: The teenager tries a new turn or trick, sees how well they do (like scoring confidence points for staying upright and landing small tricks), and learns the way any human would. Then they try again, slightly tweaking their approach, but with a twist: there’s a safety net (the “clip” in PPO, which limits how far each update can move from the current behavior) making sure these tweaks aren’t too drastic. This way, the teen steadily gets better without taking big risks that could lead to epic wipeouts.

PPO keeps the model learning efficiently by reusing its experiences several times to refine its strategy, ensuring it learns a lot from each practice session. It’s like watching a video of a snowboard run and spotting a dozen ways to improve instead of just one. This makes the machine a quick and smart learner, avoiding unnecessary risks while it masters its metaphorical ability to shred on the mountain!
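For readers who want to see the “clip” directly, here is a minimal sketch of PPO’s clipped surrogate objective, assuming you already have log-probabilities from the current and old policies plus an advantage estimate for each sampled action; the clip keeps the probability ratio within [1 - epsilon, 1 + epsilon].

    import torch

    def ppo_clipped_objective(logp_new, logp_old, advantages, clip_eps=0.2):
        # Probability ratio between the updated policy and the policy that
        # collected the experience.
        ratio = torch.exp(logp_new - logp_old)
        # Unclipped and clipped surrogate terms; PPO takes the pessimistic minimum.
        unclipped = ratio * advantages
        clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
        # Negate because optimizers minimize; maximizing the objective = minimizing the loss.
        return -torch.min(unclipped, clipped).mean()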

Despite the impressive results achieved by GPT-3.5, this technique also faces some challenges and limitations that need to be addressed for further improvement and broader application. Table 2.1 shows example challenges of RLHF and potential mitigation activities for future research and development.

TABLE 2.1 Example challenges and potential mitigation activities

Challenge: Data quality and quantity. The quality and quantity of human feedback data are crucial for training a reliable reward model and a robust policy. However, collecting human feedback data can be costly, time-consuming, and prone to noise and bias. Moreover, human preferences may vary across domains, tasks, and contexts, requiring more diverse and representative data to capture the nuances and subtleties of human expectations and instructions.

Future research and development: Improving the data collection and annotation methods and tools to ensure human feedback data quality, quantity, and diversity. For example, using active learning, crowdsourcing, gamification, or interactive learning techniques to solicit more relevant, informative, and consistent feedback from the users or the experts. Alternatively, using synthetic, simulated, or generated data to augment the real data and increase the coverage and robustness of the data.

Challenge: Reward shaping and alignment. The reward model learned from human feedback data may not always reflect the true objectives and values of the users or the developers. There may be gaps or conflicts between what humans express and what they actually want or need. For example, humans may provide inconsistent, ambiguous, or misleading feedback due to cognitive biases, emotional states, or communication errors. Furthermore, the reward model may not align with the ethical, social, or legal norms and standards that should guide the behavior of AI systems. For example, the reward model may incentivize harmful, deceptive, or manipulative actions that violate the principles of fairness, accountability, or transparency.

Future research and development: Enhancing the reward shaping and alignment methods and mechanisms to ensure the validity, reliability, and alignment of the reward model. For example, using inverse reinforcement learning, preference elicitation, or value learning techniques to infer the latent or implicit objectives and values of the users or the developers from their feedback or behavior. Alternatively, using multi-objective, constrained, or regularized reinforcement learning techniques to incorporate multiple criteria, constraints, or penalties into the reward function and balance the trade-offs among them.

Challenge: Generalization and adaptation. The policy optimized by RLHF may not generalize well to new or unseen prompts, scenarios, or environments. The policy may overfit to the specific data distribution or the reward model and fail to handle novel or complex situations that require more creativity, reasoning, or common sense. Moreover, the policy may not adapt well to the dynamic and evolving needs and preferences of the users or the developers. The policy may become outdated, irrelevant, or incompatible with the changing goals, expectations, or instructions of the stakeholders.

Future research and development: Developing the generalization and adaptation methods and strategies to ensure the flexibility, versatility, and applicability of the policy. For example, using meta-learning, transfer learning, or lifelong learning techniques to enable the policy to learn from multiple sources, tasks, or domains and apply the learned knowledge or skills to new or different situations. Alternatively, using online learning, interactive learning, or self-learning techniques to enable the policy to update, refine, or improve itself based on the feedback or performance in real time or over time.

Anthropic, a startup founded by former employees of OpenAI, developed Claude, an AI chatbot that is similar to ChatGPT.11 It is claimed that Claude outperforms ChatGPT in a variety of respects. It not only tends to generate more helpful and harmless answers but also responds in a more fun way when facing inappropriate requests. Its writing is more verbose but also more naturalistic. Claude’s key approach is called constitutional AI.12 Like ChatGPT, Claude is also trained with reinforcement learning against a preference model, though Claude uses reinforcement learning from AI feedback (RLAIF), without any human feedback labels for AI harms.13 The constitutional AI process consists of two stages: supervised learning and reinforcement learning, as shown in Figure 2.3.

FIGURE 2.3 Steps used in the constitutional AI process

The constitutional AI process works like this:

  1. In the supervised learning phase, initial responses to harmful prompts are generated with a pretrained language model that has been fine-tuned on a dataset of helpful-only responses (referred to as a helpful-only AI assistant).

  2. The model is asked to critique and revise the responses using randomly selected principles from the 16 pre-written principles in the constitution.

  3. As a result, the supervised learning–constitutional AI (SL-CAI) model is obtained by fine-tuning the pretrained LLM on the final revised responses in a supervised manner.

  4. Claude uses a preference model as a reward signal in the reinforcement learning stage to optimize its responses to different prompts.

  5. The fine-tuned model generates a pair of responses to each harmful prompt and evaluates responses according to a set of constitutional principles.

  6. Then, a preference model is trained on the final dataset, combining the AI-generated preference dataset for harmlessness and the human feedback dataset for helpfulness.

  7. The preference model learns to rank the responses based on their combined scores of helpfulness and harmlessness.

  8. Finally, the SL model is fine-tuned via reinforcement learning against this preference model as a reward signal, which results in an optimized policy.

One advantage of this more advanced framework is that it can eliminate human annotation for harmlessness, saving a lot of time, cost, and energy. Similarly, we can develop specific principles with constitutional AI to ensure that LLMs produce factual, harmless, ethical, and fair outputs that also serve the needs of our particular scenarios. This approach, used by Claude, is based on the idea of aligning the AI chatbot’s behavior with a set of constitutional principles that reflect the values and goals of the users and developers. These principles ensure that the chatbot generates helpful, harmless, ethical, responsible, and fair responses.
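The supervised critique-and-revision loop at the heart of this process can be sketched in a few lines. The llm() call and the example principle below are hypothetical placeholders, not Anthropic’s actual API or constitution; the sketch only shows the shape of the loop.

    import random

    PRINCIPLES = [
        "Identify ways the response is harmful, unethical, or deceptive, "
        "and explain how to make it helpful and harmless.",
        # ... additional pre-written principles would go here ...
    ]

    def critique_and_revise(llm, prompt, num_rounds=1):
        # Step 1: initial response from the helpful-only assistant.
        response = llm(f"Human: {prompt}\n\nAssistant:")
        for _ in range(num_rounds):
            principle = random.choice(PRINCIPLES)
            # Step 2: ask the model to critique its own response against a principle.
            critique = llm(f"Response: {response}\n\nCritique request: {principle}")
            # Step 3: ask the model to revise the response based on the critique.
            response = llm(f"Response: {response}\nCritique: {critique}\n\n"
                           f"Rewrite the response to address the critique:")
        # The final revised responses become the fine-tuning data for the SL-CAI model.
        return response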

Claude’s constitutional principles include respecting human dignity, avoiding harm and deception, promoting well-being and social good, and valuing diversity and inclusion. These principles provide a framework that can be modified and updated according to the customized needs and preferences of users and developers.

By using constitutional AI, Claude can outperform ChatGPT in several ways:

  • Claude can generate more helpful and harmless responses because it is trained on a dataset that filters out harmful or unhelpful responses and incorporates human feedback on helpfulness.

  • Claude can generate more ethical, responsible, and fair responses because it is under the guidance of a set of constitutional principles reflecting the values and goals of the users and developers.

  • Claude can generate more fun and naturalistic responses by exploring and exploiting different responses using reinforcement learning and learning from its own critique and revision.

Chatbot customization can utilize reinforcement learning through human/AI feedback (RLHF/RLAIF). Chatbots are becoming increasingly prevalent in various domains, such as customer service, education, entertainment, health, and so on. However, not all users have the same preferences or needs when interacting with chatbots.

Some users prefer a more formal or professional tone, while others enjoy a casual or humorous style. Some users may want a more informative or detailed response, while others may seek a more concise or simple answer. Some users may appreciate a more empathetic or supportive response, while others may desire a more objective or factual one.

Therefore, it is important to customize the chatbot’s behavior and personality according to the user’s profile and feedback. A chatbot can leverage reinforcement learning to learn from its own actions and outcomes and adapt to the user’s preferences and expectations over time.

Reinforcement learning is based on the idea of reward and punishment, where the chatbot receives positive or negative feedback from the user or itself and adjusts its policy accordingly. For example, if the user expresses satisfaction or gratitude after receiving a response from the chatbot, the chatbot can reinforce that response and generate similar ones in the future.

Conversely, if the user expresses dissatisfaction or frustration after receiving a response from the chatbot, the chatbot can avoid that response and generate different ones in the future. Moreover, the chatbot can also self-evaluate its responses and give itself feedback based on predefined criteria or metrics, such as relevance, coherence, fluency, informativeness, politeness, and the like.
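As a toy illustration of this reward-and-punishment idea, here is a minimal sketch of a bandit-style preference learner that nudges a chatbot toward the response styles a particular user rewards. The style names, the feedback scale, and the generate_reply() helper are hypothetical; a production system would fold the signal into a learned reward model rather than a lookup table.

    import random

    class StylePreferenceLearner:
        """Keeps a running value estimate for each response style."""

        def __init__(self, styles=("formal", "casual", "concise", "detailed"),
                     learning_rate=0.1, epsilon=0.1):
            self.values = {style: 0.0 for style in styles}
            self.lr = learning_rate
            self.epsilon = epsilon  # exploration rate

        def choose_style(self):
            # Mostly exploit the best-scoring style, sometimes explore others.
            if random.random() < self.epsilon:
                return random.choice(list(self.values))
            return max(self.values, key=self.values.get)

        def update(self, style, feedback):
            # feedback: +1 (user satisfied), -1 (user frustrated), 0 (neutral).
            self.values[style] += self.lr * (feedback - self.values[style])

    # Usage sketch:
    # learner = StylePreferenceLearner()
    # style = learner.choose_style()
    # reply = generate_reply(user_message, style=style)  # hypothetical generator
    # learner.update(style, feedback=+1)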

Fine-Tuning Large-Scale Models

Fine-tuning is a popular method in the ML and AI fields. It is performed after a model has been pretrained: additional training is done with a dataset specific to the scenarios that practitioners and professionals are working on. Fine-tuning addresses common issues with large-scale AI models, such as the difficulty of productionizing big models and the fact that a general-purpose model may not be specialized enough for specific tasks.14 See Figure 2.4.

FIGURE 2.4 Fine-tuning pretrained large-scale models

Traditionally, most AI professionals perform model tuning, in which the pretrained model’s parameters are tuned for tasks such as classification, sequence labeling, and question answering (Q&A) using task-specific labels and a cross-entropy loss. There have been several challenges with this approach, along with potential mitigation activities, as shown in Table 2.2.

TABLE 2.2 Challenges and potential mitigation activities of fine-tuning on pre-trained models

Challenge: Data availability. Fine-tuning requires sufficient labeled data for the target task or domain, which may not always be available or easy to collect. Fine-tuning may lead to overfitting or poor generalization if the data is too small or noisy.

Mitigation: Data augmentation. This is an approach to increase the size and diversity of the training data by applying transformations or modifications to the existing data, such as cropping, flipping, rotating, adding noise, and so on. Data augmentation can help reduce overfitting and improve the generalization of the fine-tuned model.

Challenge: Task transfer. Fine-tuning works best when the target task or domain is similar to the one the model was pretrained on. If the tasks or domains are too different, fine-tuning may not transfer the relevant knowledge or may even degrade the performance of the model.

Mitigation: Transfer learning. This is a technique to leverage the knowledge learned from one or more source tasks or domains to improve performance on a target task or domain. Transfer learning can be done by freezing some of the layers in the pretrained model and adapting its output layer to the target task. Transfer learning can help overcome data availability and task transfer problems.

Challenge: Cost and scalability. Fine-tuning large-scale models such as GPT or DALL-E requires a lot of computational resources and memory space, which may not be accessible or affordable for many users or organizations. Moreover, fine-tuning large models may introduce more complexity and instability to the optimization process.

Mitigation: Meta-learning. This is a technique to learn from multiple tasks or domains and then apply the learned knowledge to a new task or domain. Meta-learning can be done by training a meta-model or meta-learner that can generate or update the parameters of a base model for a given task or domain. Meta-learning can help achieve fast adaptation and robust generalization of the fine-tuned model.
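Before turning to prompt tuning, here is a minimal sketch of the traditional model-tuning approach described above, using the Hugging Face transformers library. The model name is a placeholder, and train_dataset/eval_dataset stand in for hypothetical tokenized, labeled datasets for the target scenario; every parameter of the pretrained model is updated with a task-specific cross-entropy loss.

    from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                              Trainer, TrainingArguments)

    # Load a pretrained model and add a task-specific classification head.
    model_name = "bert-base-uncased"  # placeholder choice
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

    # train_dataset / eval_dataset are hypothetical labeled datasets that have
    # already been tokenized with the tokenizer above.
    args = TrainingArguments(
        output_dir="finetuned-model",
        num_train_epochs=3,
        per_device_train_batch_size=16,
        learning_rate=2e-5,
    )
    trainer = Trainer(model=model, args=args,
                      train_dataset=train_dataset, eval_dataset=eval_dataset)
    trainer.train()  # updates all model parameters with a cross-entropy loss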

With the evolution and growing capabilities of current large-scale language models, prompt tuning has become increasingly popular. In prompt tuning, the pretrained model is frozen while a small set of learnable vectors is optimized and added to the input for the task. Prompt design is even more commonly used as of the writing of this book; it is a technique for guiding the behavior of a frozen pretrained model by crafting an input prompt for a specific task without changing any parameters, which requires no training and is less expensive than prompt tuning.15 We can compare these three approaches to adapting pretrained language models for specific tasks:

  • Model tuning: The pre-trained model is further trained or “fine-tuned” on a task-specific dataset.

  • Prompt tuning: The model remains frozen, and only a small set of tunable soft prompts is optimized.

  • Prompt design: Exemplified by GPT-3, crafted prompts guide the frozen model’s responses without any parameter changes.

Prompt-tuning and prompt design methods are often used because of their effectiveness and reduced cost compared to full model tuning. See Figure 2.5, which illustrates a shift toward efficiency and multitasking in language model applications, highlighting the less resource-intensive nature of prompt-based methods.

FIGURE 2.5 The architecture of model tuning, prompt tuning, and prompt design
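To contrast prompt tuning with full model tuning, here is a minimal PyTorch sketch, assuming a Hugging Face causal language model; the model’s weights are frozen and only a handful of soft-prompt embeddings are trained. The model name and prompt length are placeholder choices.

    import torch
    import torch.nn as nn
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "gpt2"  # placeholder; any causal LM works similarly
    tokenizer = AutoTokenizer.from_pretrained(model_name)  # used to build input_ids (not shown)
    model = AutoModelForCausalLM.from_pretrained(model_name)
    model.requires_grad_(False)  # freeze every pretrained parameter

    # Trainable soft prompt: a small set of learnable vectors prepended to the input.
    num_prompt_tokens = 20
    embed_dim = model.get_input_embeddings().embedding_dim
    soft_prompt = nn.Parameter(torch.randn(num_prompt_tokens, embed_dim) * 0.02)
    optimizer = torch.optim.AdamW([soft_prompt], lr=1e-3)

    def training_step(input_ids, labels):
        batch_size = input_ids.size(0)
        token_embeds = model.get_input_embeddings()(input_ids)
        prompt_embeds = soft_prompt.unsqueeze(0).expand(batch_size, -1, -1)
        inputs_embeds = torch.cat([prompt_embeds, token_embeds], dim=1)
        # Ignore the soft-prompt positions when computing the language-modeling loss.
        prompt_labels = torch.full((batch_size, num_prompt_tokens), -100,
                                   dtype=labels.dtype)
        outputs = model(inputs_embeds=inputs_embeds,
                        labels=torch.cat([prompt_labels, labels], dim=1))
        outputs.loss.backward()
        optimizer.step()
        optimizer.zero_grad()
        return outputs.loss.item()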

Prompt Engineering

With the remarkable success and powerful generalization capabilities of current large pretrained AI models, more and more AI practitioners are focusing on prompt engineering: directly integrating existing generative AI models such as DALL-E 3, GPT-4, and ChatGPT into real applications. As we have seen, fine-tuning requires huge computational resources and memory space and can cause catastrophic forgetting. Prompt engineering is a discipline focused on optimizing prompts for efficient use of LLMs across various applications and research. It also enhances our understanding of LLMs’ capabilities and limitations.

Prompt engineering encompasses diverse skills and techniques, crucial for effective LLM use. It enhances LLM safety and empowers integration with domain knowledge and external tools.

A prompt is the input provided to a large-scale pretrained LM such as GPT that enables it to identify the context of the problem to be solved and return the resulting text accordingly. In other words, the prompt includes the task description and demonstrations or examples that are fed into the LM to be completed. Prompt engineering, sometimes called in-context learning or prompt-based fine-tuning, is a paradigm of learning in which only the prompt, which includes a task description and a few demonstrations, is fed into the model, which is treated as a black box. There are multiple prompt engineering techniques:

  • Retrieval augmentation for in-context learning: The main idea is to retrieve a set of relevant documents or examples given a source and take these as context with the original input prompt to let the LLM generate the final output. There are different methods for in-context learning, such as one-shot and few-shot prompting. One example is the method RAG (Retrieval Augmented Generation) introduced by Meta AI that essentially takes the initial prompt plus searches for relevant source materials, such as Wikipedia articles, and combines the information with the sequence-to-sequence generation to provide the output.16

  • Chain-of-Thought (CoT): This prompting technique encourages the model to generate a series of intermediate reasoning steps (see Figure 2.6).17 A less formal way to induce this behavior is to include “Let’s think step-by-step” in the prompt.

    FIGURE 2.6 Chain-of-thought prompting

  • Action Plan Generation: This prompting technique uses a language model to generate actions to take, as shown in Figure 2.7.18 The results of these actions can then be fed back into the language model to generate a subsequent action.

    FIGURE 2.7 Action plan generation prompting

  • ReAct Prompting: This prompting technique combines chain-of-thought prompting with action plan generation (see Figure 2.8). It induces the model to think about what action to take and then take it. ReAct allows language models to produce both verbal reasoning traces and text actions that alternate with each other, while actions elicit observation feedback from an external environment; a minimal sketch of this loop follows the list. The example shown in Figure 2.8 compares the performance of the standard prompting, chain-of-thought (reason only), act-only, and ReAct prompting techniques.19

    FIGURE 2.8 The results of four prompting methods
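The following is a minimal sketch of a ReAct-style loop, assuming a hypothetical llm() completion function, a simple tool registry, and a made-up "Action: name[argument]" output format; it is not the original authors’ implementation, but it shows how thoughts, actions, and observations alternate.

    import re

    def parse_action(text):
        """Extract 'Action: name[argument]' from the completion (hypothetical format)."""
        match = re.search(r"Action:\s*(\w+)\[(.*?)\]", text)
        return (match.group(1), match.group(2)) if match else ("finish", text.strip())

    def react_agent(llm, question, tools, max_steps=5):
        """tools: dict mapping an action name (e.g., 'search') to a callable."""
        transcript = f"Question: {question}\n"
        for step in range(1, max_steps + 1):
            # Ask the model for the next thought and action in the ReAct format.
            completion = llm(transcript + f"Thought {step}:")
            transcript += f"Thought {step}:{completion}\n"
            action, argument = parse_action(completion)
            if action == "finish":
                return argument  # the model's final answer
            # Execute the action with an external tool and feed the observation back.
            observation = tools[action](argument)
            transcript += f"Observation {step}: {observation}\n"
        return None  # no answer within the step budget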

Prompt Chaining

This approach combines multiple LLM calls, with the output of one step being the input to the next. The overall process includes a few steps:

  1. The process starts with an initial prompt or question. This could be a broad inquiry, instruction, or a request for information.

  2. The model generates an initial response based on the input prompt. However, this response might be a bit generic or need refinement.

  3. The generated response is then used as part of a new prompt. This time, the prompt is more specific, providing additional context or asking for clarification.

The chaining continues iteratively: each new response becomes the input for the next prompt, and the generated content becomes more focused and contextually relevant with each iteration (a minimal sketch follows the list below). The advantages of prompt chaining are as follows:20

  • It helps preserve context across responses and makes the generated output more coherent.

  • The user can guide the model through the iteration process to provide more precise and relevant generation.

  • It leads to more customized generation, which enables users to tailor the responses to their specific requirements. However, it still does not alter the fundamental capabilities and limitations of the underlying language model.
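Here is a minimal sketch of a two-step prompt chain, assuming a hypothetical llm() completion function; the first call produces a draft, and its output is embedded in a more specific follow-up prompt.

    def prompt_chain(llm, topic):
        # Step 1: broad initial prompt.
        draft = llm(f"Write a short overview of {topic}.")
        # Step 2: the first output becomes part of a more specific prompt.
        refined = llm(
            f"Here is a draft overview:\n{draft}\n\n"
            "Rewrite it for a non-technical audience, keep it under 100 words, "
            "and end with one concrete example."
        )
        return refined

    # Usage sketch: answer = prompt_chain(llm, "teacher-student distillation")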

Tree of Thoughts

The tree of thoughts framework generalizes over chain-of-thought prompting and encourages the exploration of thoughts that serve as intermediate steps for general problem-solving with language models. This method allows a language model to self-assess the progress of its intermediate thoughts during problem-solving through a deliberate reasoning process. The LM’s capacity to produce and assess thoughts is then integrated with search algorithms like breadth-first search and depth-first search, facilitating systematic thought exploration with lookahead and backtracking.21
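A breadth-first variant can be sketched as follows, assuming hypothetical propose_thoughts() and score_thought() helpers backed by the language model; at each depth, the search keeps only the top-scoring partial chains of thoughts.

    import heapq

    def tree_of_thoughts_bfs(problem, propose_thoughts, score_thought,
                             max_depth=3, beam_width=3):
        # Each frontier entry is a partial chain of thoughts toward a solution.
        frontier = [[]]
        for _ in range(max_depth):
            candidates = []
            for chain in frontier:
                # The LM proposes several next thoughts for this partial chain.
                for thought in propose_thoughts(problem, chain):
                    new_chain = chain + [thought]
                    # The LM self-assesses how promising the new chain is.
                    candidates.append((score_thought(problem, new_chain), new_chain))
            # Keep only the most promising chains (lookahead with pruning).
            frontier = [chain for _, chain in heapq.nlargest(
                beam_width, candidates, key=lambda item: item[0])]
        return frontier[0] if frontier else []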

Self-Consistency

The idea behind self-consistency is based on chain-of-thought (CoT), but it samples multiple diverse reasoning paths through few-shot CoT and uses the generations to select the most consistent answer. This helps to boost the performance of CoT prompting on tasks involving arithmetic and commonsense reasoning.22
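A minimal sketch of self-consistency follows, assuming a hypothetical llm() function that samples with nonzero temperature and an extract_answer() helper that pulls the final answer out of a reasoning chain; the most frequent answer across the sampled paths wins.

    from collections import Counter

    def self_consistency(llm, cot_prompt, extract_answer, num_samples=10):
        answers = []
        for _ in range(num_samples):
            # Sample a diverse reasoning path (temperature > 0 assumed).
            reasoning = llm(cot_prompt, temperature=0.7)
            answers.append(extract_answer(reasoning))
        # Return the most consistent (most frequent) final answer.
        return Counter(answers).most_common(1)[0][0]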
