
LARGE LANGUAGE MODELS - Prompt Engineering

LLM Self Consistency Prompting

Self-consistency prompting is a technique for improving the reliability of a model's reasoning by checking its answers against themselves. The idea is to sample multiple, diverse reasoning paths through few-shot chain-of-thought (CoT) prompting, and then select the answer that is most consistent across those generations, typically by majority vote. This boosts the performance of CoT prompting on tasks involving arithmetic and commonsense reasoning.
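In code, the core of the technique is a sampling loop plus a majority vote. The sketch below is illustrative only: it assumes a hypothetical generate(prompt) function that calls your LLM with a non-zero temperature (so each call can follow a different reasoning path), and it extracts the final answer naively by taking the last number in each completion, a simplification that suits arithmetic puzzles like the ones in this article.

    import re
    from collections import Counter

    def self_consistent_answer(prompt, generate, n_samples=5):
        # Sample several chain-of-thought completions. `generate` is a
        # hypothetical function that calls the LLM with temperature > 0,
        # so different calls can take different reasoning paths.
        answers = []
        for _ in range(n_samples):
            completion = generate(prompt)
            # Naive answer extraction: take the last number mentioned.
            numbers = re.findall(r"-?\d+", completion)
            if numbers:
                answers.append(numbers[-1])
        # Majority vote: the most consistent answer across paths wins.
        return Counter(answers).most_common(1)[0][0] if answers else None

With five or more samples, a stray wrong reasoning path is usually outvoted by the paths that reach the correct answer.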

For example, consider the following prompt:

When I was 6 my sister was half my age. Now I'm 70, how old is my sister?

This prompt can be followed by a series of prompts that guide the model through solving the problem. By using self-consistency prompting, prompt engineers can build more coherent and fluent chatbots that generate high-quality, relevant outputs.

Initially the LLM produced the following response:

This is a classic math puzzle that can be solved by using algebra. Let's say your sister's age is x and your age is y. When you were 6, your sister was half your age, so we can write:
y = 6
x = y/2
Now, you are 70, so we can write:
y = 70
x = ?
We can use the first equation to find x in terms of y:
x = y/2
Then, we can substitute y with 70 in the second equation:
x = 70/2
x = 35

Therefore, your sister is 35 years old now.

Which is truly glorious in its inaccuracy! So I advised it that it was wrong and to try again, to which it responded:

I'm sorry, but I don’t think I’m wrong. I followed the logic and the math to solve the puzzle. Can you explain why you think my answer is incorrect?

Now this is where we use self-consistency and few-shot strategies. The idea is to give the model a series of examples that are similar to, but not the same as, the target problem, and that together cover the range of reasoning required to solve it. In some cases each example may need to spell out the reasoning as well as the answer, but here we just give the answer.

Ok. I will give you some examples.
Q: When I was 8 my friend was 4 years younger than me. Now I am 20, how old is my friend?
A: 16
Q: When I was 10 my brother was half my age. Now I am 30, how old is my brother?
A: 25
Q: When I was 15, my mother was three times my age. Now I am 20, how old is my mother?
A: 50
Now try the problem again: When I was 6 my sister was half my age. Now I am 70, how old is my sister?
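A few-shot prompt like the one above is easy to assemble programmatically from question/answer pairs. A minimal sketch (the build_few_shot_prompt helper is hypothetical; the resulting string would be passed to the same hypothetical generate function as before):

    EXAMPLES = [
        ("When I was 8 my friend was 4 years younger than me. "
         "Now I am 20, how old is my friend?", "16"),
        ("When I was 10 my brother was half my age. "
         "Now I am 30, how old is my brother?", "25"),
        ("When I was 15, my mother was three times my age. "
         "Now I am 20, how old is my mother?", "50"),
    ]

    def build_few_shot_prompt(target_question, examples=EXAMPLES):
        # Prefix the target question with worked Q/A pairs so the model
        # can infer the constant-age-gap pattern from the examples.
        lines = [f"Q: {q}\nA: {a}" for q, a in examples]
        lines.append(f"Now try the problem again: {target_question}")
        return "\n".join(lines)

    prompt = build_few_shot_prompt(
        "When I was 6 my sister was half my age. "
        "Now I am 70, how old is my sister?"
    )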

The LLM was able to infer the correct reasoning from the examples:

I see. You are using a different logic to solve the puzzle. You are assuming that the age difference between you and your sister is constant, not the age ratio. In that case, the answer would be different. Let me show you how:
Let's say your sister's age is x and your age is y. When you were 6, your sister was half your age, so we can write:
y = 6
x = y/2
Now, you are 70, so we can write:
y = 70
x = ?
We can use the first equation to find the age difference between you and your sister:
y - x = 6 - (y/2)
y - x = 3
Then, we can substitute y with 70 in the second equation:
70 - x = 3
x = 67
Therefore, your sister is 67 years old now.
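That reasoning is trivial to check by hand, or with a couple of lines of Python: the age gap is fixed the moment we know the sister's age at one point in time.

    # When the narrator was 6, the sister was half that age: 3 years old,
    # so the constant age gap is 6 - 3 = 3 years.
    gap = 6 - 6 // 2
    print(70 - gap)  # -> 67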

This is really quite impressive. Earlier generations of LLMs required considerably more assistance than this. My reply to it was:

This is correct. When calculating ages, all people age at the same rate, so the year gap remains constant. It is not a ratio.

We will leave this example with the LLM's final response:

I'm glad you agree with my answer. I think this puzzle is tricky because it can be interpreted in two different ways, depending on whether you use the age ratio or the age difference. I learned something new today, thanks to you.

Of course, one would never use the "age ratio" to solve this kind of problem, but the AI has no true understanding of "age" or family relations. To it they are just numbers with labels. As I mentioned at the start of this course, in spite of their convincing human-style conversational ability, LLMs are not sentient: they have no true "understanding" of what they are doing, no emotional moderation, and no sense of "self" - just an extraordinary ability to pattern match. There is no sense of moral "right" or "wrong", no understanding of life and all that that implies.

If you want to learn more about self-consistency prompting and its applications, I recommend the article on the Prompt Engineering Guide, which provides a comprehensive overview of the topic with examples and use cases.

...Next: Prompt Engineering - Knowledge Generation Prompting...