May 22, 2024

Prompt Chaining

Understanding Prompt Chaining

Let's dive into the exciting world of AI with a focus on a technique called prompt chaining. For this article, we'll be using Claude 3 Sonnet, the balanced mid-tier model in the third generation of Anthropic's Claude LLMs.

What is Prompt Chaining?

So, what exactly is this technique about? In simple terms, prompt chaining is a natural language processing technique in which a desired output is reached through a series of linked prompts, each building on the model's previous response. It helps the model generate coherent, consistent, and contextually rich text.

Imagine you're giving directions to someone. You don't just blurt out the entire route in one go, right? You break it down into smaller steps, like 'Go straight, take the next right, then the second left.' That's pretty much how prompt chaining works. It breaks down complex tasks into smaller prompts and chains them together to generate more personalized and accurate responses.
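
Here's what that looks like in code. Below is a minimal sketch in Python using Anthropic's anthropic SDK; the prompts and the ask() helper are illustrative placeholders, and any chat-style API would work the same way.

import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def ask(prompt: str) -> str:
    """Send one prompt to Claude 3 Sonnet and return the text reply."""
    message = client.messages.create(
        model="claude-3-sonnet-20240229",
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    return message.content[0].text

# Step 1: ask for an outline.
outline = ask("List the main stops of a US road trip as three bullet points.")

# Step 2: chain the first answer into the next prompt.
details = ask(f"For each stop below, suggest one activity:\n\n{outline}")
print(details)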

Implementing Prompt Chaining

Now that you've got a grasp on what prompt chaining is, let's dive into how you can implement this technique and explore its various applications and use cases.

First, we need a use case. Let's try to recreate a trip from Jack Kerouac's "On the Road", a book that follows the travels of Sal Paradise (Kerouac's alter ego) and his friend Dean Moriarty (based on real-life traveler Neal Cassady) on their journeys across America.

We'll use some prompting conventions proposed in Anthropic's engineering guides: assign a role to the AI, and keep all information clearly structured with delimiters and formatting.

Prompt 1:

You are a helpful assistant. 
Your task is to help plan a vacation for a high-earning developer, 
who plans to recreate the travels described in the book 
"On the Road" by Jack Kerouac.

The first step is to plan the outline. 

Avoid describing story spoilers for the book. 
Provide all cities the heroes visited in a list,
styled using <cities></cities>:

Output:

<cities>
New York City
Denver, Colorado
Central City, Colorado (a ghost town)
Washington D.C.
Ashland (possibly Ashland, Kentucky or Ashland, Oregon)
Cincinnati, Ohio
St. Louis, Missouri
San Francisco, California
New Orleans, Louisiana
Mexico City, Mexico
</cities>
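
Because we asked for the cities inside <cities></cities> tags, the list is easy to pull out of the response programmatically before it goes into the next prompt. A small sketch (extract_tag is a hypothetical helper; response_text holds the output shown above):

import re

def extract_tag(text: str, tag: str) -> str:
    """Return the content between <tag> and </tag>, stripped of whitespace."""
    match = re.search(rf"<{tag}>(.*?)</{tag}>", text, re.DOTALL)
    if match is None:
        raise ValueError(f"no <{tag}> block found in the response")
    return match.group(1).strip()

cities = extract_tag(response_text, "cities")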

Now that we know how many cities there are, we might reconsider which ones to include. Let's ask how much the heroes enjoyed their stay in each.

Prompt 2:

Take the list of cities (delimited with <cities></cities>),
and without changing the order,
add a rating out of 10 to each,
based on how much the heroes enjoyed their stay

<cities>
New York City
Denver, Colorado
Central City, Colorado (a ghost town)
Washington D.C.
Ashland (possibly Ashland, Kentucky or Ashland, Oregon)
Cincinnati, Ohio
St. Louis, Missouri
San Francisco, California
New Orleans, Louisiana
Mexico City, Mexico
</cities>

Output 2:

Cities in "On the Road" Ranked by Enjoyment (Out of 10)
1. New York City (7/10): A starting point for the journey, filled with energy and possibility, but also exhausting and impersonal.
2. Denver, Colorado (8/10): A welcome change from the frenetic pace of New York. Sal finds inspiration and enjoys the mountain scenery.
3. Central City, Colorado (5/10): A desolate ghost town, interesting for its historical ruins but ultimately depressing.
4. Washington D.C. (4/10): Found to be stuffy and bureaucratic, a stark contrast to the free-spirited nature of the trip.
5. Ashland (Kentucky or Oregon) (Unranked): Details about their stay are vague, making it difficult to assess their enjoyment.
6. Cincinnati, Ohio (6/10): A brief stop, not particularly bad but not a highlight either.
7. St. Louis, Missouri (7/10): The city offers jazz music and a sense of community, leading to a more positive experience.
8. San Francisco, California (9/10): A vibrant and exciting city, offering cultural experiences and a sense of renewal.
9. New Orleans, Louisiana (10/10): The culmination of the journey, filled with music, good food, and a carefree atmosphere.
10. Mexico City, Mexico (Unranked): Their planned visit remains uncertain, so their enjoyment cannot be ranked.

That's much better: Washington D.C. can be skipped altogether, and the must-see stops cluster toward the end of the trip. Now you can keep prompting and adding information. For instance, you could tell the LLM your starting point and ask it to optimize the route to Mexico City, or you could prompt for the best hotels and optimize for cost. Each step in the chain brings you closer to a finished plan. The one thing to remember: you are limited to the model's training data, or to any data you supply in the prompt or use to fine-tune the LLM.
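
Wired together, the whole planner is just a loop over prompt templates, each one filled in with the previous reply. A sketch, reusing the hypothetical ask() helper from earlier (the prompt texts and the Chicago starting point are illustrative):

# Each step is a template; {prev} is replaced with the previous model reply.
steps = [
    "List all cities the heroes of 'On the Road' visited, in <cities></cities> tags.",
    "Without changing the order, rate each city below out of 10 by how much the heroes enjoyed it:\n\n{prev}",
    "Starting from Chicago, order the cities below into an efficient route ending in Mexico City:\n\n{prev}",
]

prev = ""
for template in steps:
    prev = ask(template.format(prev=prev))

print(prev)  # the final, refined itinerary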

Other Use-cases

  • Translating a text from another language, summarizing it, correcting it where needed, and translating it back (sketched below).
  • Analyzing PDF documents, extracting data from them, and making changes based on your evaluation of that data.
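
To illustrate the first of these, here is a sketch of a translation chain built from the same ask() helper, one prompt per step (the prompt wording is illustrative, not tuned):

def translate_roundtrip(text: str, language: str) -> str:
    """Translate, summarize, correct, then translate back."""
    english = ask(f"Translate the following {language} text into English:\n\n{text}")
    summary = ask(f"Summarize this text in three sentences:\n\n{english}")
    corrected = ask(f"Fix any grammatical or factual slips in this summary:\n\n{summary}")
    return ask(f"Translate this text back into {language}:\n\n{corrected}")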

Prompt chaining provides greater control over text generation and allows you to specify the desired output with precision. It helps clarify or refine input before generating a response, especially in situations where the input data is noisy or ambiguous.
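
That refinement step can itself be a link in the chain: one prompt cleans up the input, and the next works on the cleaned version. A sketch, again assuming the ask() helper from above:

def answer_noisy_question(raw_input: str) -> str:
    """Normalize messy user input, then answer the cleaned-up question."""
    cleaned = ask(
        "Rewrite the following user message as one clear, unambiguous question, "
        f"fixing typos and dropping filler:\n\n{raw_input}"
    )
    return ask(cleaned)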

Prompt Chaining Viability

In conclusion, prompt chaining empowers you to generate precise and controlled text with AI. It's a simple technique, easy to test and to save for similar future tasks. We saw its potential through a road-trip example, but it goes far beyond that: from crafting engaging stories to building smart specialized chatbots, prompt chaining unlocks a new world of AI possibilities.

Test out the prompts in the AI/ML API Playground, or with 100+ models using our API.

We're excited to see what amazing projects you will bring to life. Happy prompting!
