BishopPhillips

LARGE LANGUAGE MODELS - Prompt Engineering

Expert Level Prompting
AI Revolution

Expert Level Prompting builds on and specialises the techniques of prompt design. The core concept is that the more context you give the LLM when answering a query, or when setting the framework of the conversation, the better the LLM's response will be. Looking back over the techniques we have explored in this course, the common theme has been to prompt the LLM in a way that expands its immediately available "working memory" and to have it work from that basis in answering the query.

Setting the Role ("acting as") gives it a character or "tone of voice" to fulfil, Few Shot leads it to reason by starting from a set of examples of similar or contrasting problems, and CoT leads it to think in terms of a sequence of questions that walks it through the required thought process. All of these strategies effectively broaden the domain of data to be given higher priority than the other data the model knows when solving a problem. Whether we are building a "working memory" to be used for the problem, or filtering out the non-critical data that should not be used, is a moot point: the effect is the same - the model becomes more focussed and expert in the specific domain relevant to the problem. Thus the way to go from good to expert in prompting is to keep this core point in mind when designing prompting strategies.
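The three techniques above are all, in the end, just different ways of building text into the prompt. As a minimal sketch (the task, role and examples here are our own hypothetical illustrations, not from any particular course material), they can be expressed as plain template functions:

```python
# Sketch of the three prompting techniques: Role, Few Shot and
# Chain-of-Thought. All wording is illustrative, not prescriptive.

def role_prompt(question):
    # "Acting as": give the model a character and tone of voice to fulfil.
    return ("You are a senior financial auditor. Answer in precise, "
            "professional language.\n\nQuestion: " + question)

def few_shot_prompt(examples, question):
    # Worked examples expand the model's "working memory" for the task.
    shots = "\n".join(f"Q: {q}\nA: {a}" for q, a in examples)
    return f"{shots}\nQ: {question}\nA:"

def cot_prompt(question):
    # Chain-of-Thought: ask for an explicit sequence of reasoning steps.
    return question + "\nLet's think step by step."

print(few_shot_prompt([("2+2?", "4"), ("3+5?", "8")], "7+6?"))
```

In each case the extra text does the "domain narrowing" described above before the model ever sees the real question.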

David Shapiro created an excellent pair of short videos exploring these concepts. In the first, he notes that there are essentially 3 main types of prompts: Reductive, Transformational and Generative. Prompt design should consider latency and emergence, and Bloom's taxonomy (remember, understand, apply, analyze, evaluate and create) is usefully applied both to understanding the capabilities of LLMs and to how we interact with them. He notes that LLMs form their semantic models from the vast amounts of human-generated, related and unrelated data on which they are trained, with the larger models exhibiting emergent capabilities such as "theory of mind", where they can model how humans think. Applying concepts from psychology and learning can therefore lead to better prompt design.
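The three prompt types can be illustrated with one source text. In this sketch (the text and wording are our own examples, not Shapiro's), a reductive prompt condenses, a transformational prompt reshapes without adding or removing substance, and a generative prompt expands beyond the input:

```python
# Sketch: Reductive, Transformational and Generative prompts applied to
# the same hypothetical source text.

TEXT = "Large language models are trained on vast corpora of human text."

# Reductive: output is smaller than the input (summarise, extract, distil).
reductive = f"Summarise the following in one sentence:\n{TEXT}"

# Transformational: same information, different form (rewrite, translate).
transformational = f"Rewrite the following as a formal report paragraph:\n{TEXT}"

# Generative: output is larger than the input (draft, brainstorm, invent).
generative = f"Using the following as background, write a short FAQ:\n{TEXT}"

for prompt in (reductive, transformational, generative):
    print(prompt, end="\n\n")
```
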

In his second video David Shapiro explores methods for harnessing latent space activation in prompt design. The theory is that the latent space comprises the neurons that, while containing data, are not activated by a simple query, so much of the knowledge that could be applied in generating the best response is omitted when the prompt is too simplistic. The idea is to design prompts that trigger the activation of these latent networks, producing better responses. He notes that there are two types of human thinking:

  1. Intuitive or "knee-jerk" reactions, and
  2. Deliberative or systematic.

Latent space activation involves activating the vast, embedded knowledge in a language model using techniques similar to those humans use to prompt their own thinking. To this end he explores brainstorming and counterfactual searches as methods for expanding the responses generated by LLMs.
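A latent-activation prompt of this kind can be sketched as a simple wrapper that forces the deliberative mode before the final answer. The wording below is our own illustration of the brainstorm-then-counterfactual pattern, not a quote from the video:

```python
# Sketch of a latent space activation prompt: ask the model to brainstorm
# related knowledge and run a counterfactual search before answering,
# mimicking deliberative rather than "knee-jerk" thinking.

def latent_activation_prompt(question):
    return (
        f"Question: {question}\n\n"
        "Before answering, work deliberately rather than intuitively:\n"
        "1. Brainstorm: list every fact, concept or analogy you know that "
        "might bear on this question.\n"
        "2. Counterfactual search: state what would have to be different "
        "for the obvious answer to be wrong.\n"
        "3. Only then give your final answer, drawing on steps 1 and 2."
    )

print(latent_activation_prompt("Why do some markets resist automation?"))
```

Steps 1 and 2 are what pull the otherwise-dormant knowledge into the response; step 3 keeps the final answer grounded in that expanded context.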

If you want to know more about Bloom's Taxonomy, we discuss that at an introductory level in Dianne's paper on virtual reality engines.

Next: LLM Application Listing

