
Expansion and Applications of Large Language Models

Jack
June 12, 2025

Expansion is the task of converting short text (such as a set of instructions or a list of topics) into longer text (such as an email or article on a topic). This has many useful applications, such as using large language models (LLMs) as brainstorming partners. However, it’s important to acknowledge problematic use cases as well—for example, if someone uses it to generate large volumes of spam.

Please use these capabilities of LLMs responsibly, and only in ways that help people. In this article, we'll walk through an example of using a language model to generate a personalized email based on specific information. As Andrew mentioned, it's important that the email is explicitly labeled as coming from an AI bot.

We'll also use another model parameter called temperature, which lets you vary the degree of exploration and diversity in the model's responses. Before we start, we'll go through the usual setup: installing the OpenAI Python package and defining our helper function getCompletion, sketched below. Then we'll generate custom email replies to customer reviews, tailored to each review's content and sentiment.
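Here is a minimal sketch of that setup, assuming the openai Python SDK (v1.x) and the gpt-3.5-turbo model; neither the exact model nor the helper's signature is fixed by this article.

```python
# Minimal setup sketch. Assumes the openai Python SDK (v1.x) is installed
# and OPENAI_API_KEY is set in the environment; the model name is an assumption.
from openai import OpenAI

client = OpenAI()

def getCompletion(prompt, model="gpt-3.5-turbo", temperature=0):
    """Send a single-turn prompt and return the model's reply text."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,  # 0 = most deterministic; higher = more varied
    )
    return response.choices[0].message.content
```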

7.1 Custom Automatic Responses to Customer Emails

We'll use a language model to generate customized emails based on customer reviews and their sentiment. We've already extracted the sentiment using prompts like those seen in the inferring videos. Taking a customer review of a blender as an example, we'll now craft a response based on its sentiment. The prompt tells the model: you are a customer service AI assistant; your task is to send an email reply to the customer, whose review is delimited by triple backticks; generate a reply thanking them for their review.

  • If the sentiment is positive or neutral, thank them for their feedback.
  • If the sentiment is negative, apologize and suggest they contact customer service.
  • Use specific details from the review, write in a concise and professional tone, and sign off as an AI customer agent.

When using a language model to generate text shown to users, it's crucial to be transparent and let them know the text was AI-generated. We then pass in the customer review and its sentiment, as in the sketch below. Note: while we could first extract the sentiment with one prompt and then draft the email in a follow-up prompt, the sentiment has been pre-extracted here for illustrative purposes.
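A sketch of how the full prompt might be assembled, using the getCompletion helper from the setup above; the review text and its sentiment label here are hypothetical stand-ins for the blender review discussed in the lesson.

```python
# Hypothetical customer review and pre-extracted sentiment (stand-ins for the
# blender review discussed above).
review = """
Needed a blender for smoothies. The motor started making a loud grinding
noise after two months, and the lid no longer seals properly.
"""
sentiment = "negative"

prompt = f"""
You are a customer service AI assistant.
Your task is to send an email reply to a valued customer.
Given the customer review delimited by ```, generate a reply thanking them for their review.
If the sentiment is positive or neutral, thank them for their feedback.
If the sentiment is negative, apologize and suggest that they reach out to customer service.
Make sure to use specific details from the review.
Write in a concise and professional tone.
Sign the email as `AI customer agent`.
Customer review: ```{review}```
Review sentiment: {sentiment}
"""

response = getCompletion(prompt)  # temperature defaults to 0, so the reply is reproducible
print(response)
```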

The response addresses specific details mentioned in the review and advises contacting customer service as instructed (since this is just an AI agent). Next, we’ll explore the model parameter temperature, which controls response diversity. Think of temperature as the model’s level of “exploration” or randomness. For example, for the phrase “My favorite food,” the most likely next word predicted by the model might be “pizza,” followed by “sushi” and “tacos.”

  • At temperature 0, the model always chooses the most likely next word (e.g., “pizza”).
  • At higher temperatures, it may select less likely words (e.g., “tacos,” even if its probability is only 5%).

As the model generates more words, continuations like “My favorite food is pizza” and “My favorite food is tacos” diverge significantly. In general, for applications requiring predictable responses, I recommend temperature 0 (which we've used in all the videos). If you want to use the model more creatively, with more varied outputs, a higher temperature may be appropriate. The toy sketch below shows how temperature reshapes the next-word probabilities.
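As a toy illustration of the idea (the words and probabilities below are made up, not taken from any real model), sampling at temperature T is equivalent to raising each probability to the power 1/T and renormalizing: low T concentrates mass on the top word, high T flattens the distribution.

```python
# Toy illustration of temperature. The base probabilities are invented for the
# "My favorite food is ..." example; real models apply temperature to their own logits.
base_probs = {"pizza": 0.53, "sushi": 0.30, "other": 0.12, "tacos": 0.05}

def apply_temperature(probs, temperature):
    """Rescale a distribution as p_i ** (1/T), renormalized.
    As T -> 0 this approaches always picking the top word; larger T flattens it."""
    scaled = {w: p ** (1.0 / temperature) for w, p in probs.items()}
    total = sum(scaled.values())
    return {w: round(s / total, 3) for w, s in scaled.items()}

for t in (0.2, 0.7, 1.5):
    print(f"T={t}: {apply_temperature(base_probs, t)}")
```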

Now, let’s try generating an email with the same prompt but a higher temperature (e.g., 0.7). In our getCompletion function, we’ve specified default values for the model and temperature, but we’ll now adjust the temperature. At temperature 0, the same prompt always produces the same response; at 0.7, each execution yields a different output.
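Sticking with the same (hypothetical) prompt from the earlier sketch, passing a higher temperature to getCompletion is all it takes to get varied replies:

```python
# Re-using `prompt` from the earlier sketch. At temperature 0 every run returns
# the same reply; at 0.7 the replies differ from run to run.
for _ in range(2):
    print(getCompletion(prompt, temperature=0.7))
    print("-" * 60)
```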

7.2 Reminding the Model to Use Details from Customer Emails

Here’s the email generated with temperature 0.7—it differs from our previous response. Let’s run it again to show another variation. As you can see, each execution produces a unique email. I encourage you to experiment with different temperatures yourself! Pause here to try the prompt with varying temperatures and observe how the outputs change.

Summary: Higher temperatures make the model’s outputs more random. You might think of it as the assistant being more “distracted” but also more creative at higher temperatures. In the next video, we’ll discuss chat completion formatting and how to use it to build custom chatbots.
