AI Prompt Engineering: Practical Tips for Special Librarians
Before we start, I would like to provide some information about our company and introduce today’s presenter. Lucidea is a software development company specializing in museum and archival collections management solutions, as well as knowledge management and library automation systems. Our brands include Sydney, Presto, Argus, ArchivEra, Eloquent, and CuadraSTAR.
Now I’d like to take a moment to introduce today’s presenter, Lauren Hayes. Lauren is an associate professor of instructional technology at the University of Central Missouri. Previously, she worked as an instructional and research librarian at a private college in the Kansas City metro area. Prior to working in higher education, she was employed by the National Archives and Records Administration and worked as an intern at the Harry S. Truman Presidential Library and Museum. Her professional interests include the scholarship of teaching and learning, information literacy, digital literacy, educational technology, and academic development. Take it away, Lauren.
Thank you, Bradley. Thank you for that introduction.
I am excited to be here with all of you today, talking about AI and specifically prompt engineering.
So let’s just start out with a definition of prompt engineering. This is likely not a new term for many of you, but I think it is good to start by always defining our terms. So prompt engineering is the practice of carefully designing and refining the instructions or prompts given to an artificial intelligence system, especially large language models, which you will sometimes see abbreviated to LLMs.
And the purpose of that careful design is to achieve the most accurate, useful, and reliable outputs.
And this definition did come from ChatGPT-5 Auto, back in August; you can see the citation there.
And what I liked about this definition was that it uses the terms designing and refining, because I do see creating prompts for AI as a design skill.
There’s a little bit of art to it, a bit of art and a bit of science mixed in.
And then the refining piece is also really important, because what I have found, and what a lot of other people have also discovered, is that it can take a few iterations at times to figure out what the best prompt might be to get the system to do what you want. And so keep that designing and refining in mind as we move through the presentation.
So then there are four keys to effective prompt engineering.
And these are: framing the request clearly, guiding behavior, optimizing results, and applying strategies.
I did ask AI what some effective keys for prompt engineering were, and these are generally what it gave me.
And, but I want to kind of dig into those a bit deeper here. So framing the request clearly is structuring prompts so that the model understands exactly what it’s being asked for. This could be providing context, specifying a format, or setting constraints.
Then there’s also guiding behavior, which is using examples, assigning roles, or giving step-by-step instructions to influence how the model might respond. And by model, I mean the AI model.
Then there’s optimizing results, which is iteratively testing and adjusting prompts to reduce ambiguity, prevent undesired responses, and improve relevance, accuracy, or creativity.
And that’s really going to be a lot of that refining that I was talking about earlier, where you’re trying something, see what it gives you, making adjustments, editing, and moving through that process.
And then applying strategies, techniques such as few shot prompting, chain of thought prompting, and role prompting. And role prompting, I think, is really interesting because it’s where you can assign a persona to the AI so that it comes from a particular viewpoint or kind of angle.
So we’ll talk about all of these in a bit more detail, but I want to say here that there are some very strategic ways you can use the AI tools to get the result that you want. One thing to keep in mind, though, is that despite these key ideas, there are a lot of different ways it can be approached. And, again, this is where that art comes in. Different people are going to have different ways they want to approach it, and the AI models are also going to respond differently to what is given to them.
So one example is that research has found that if you are polite to the AI system, it will often give you better results.
And the reason people believe that is the case is because of the content the AI model is trained on. That training data in the large language model, or LLM, shows that polite words and polite conversation often accompany useful responses, whether that’s in a book narrative or some other type of conversation. And so it can recognize that politeness and respond in kind.
So just keep that in mind that the different ways of approaching the AI model will give you some different results.
But then kind of in that last paragraph on this slide, just to summarize again, prompt engineering is about turning vague requests into precise instructions that leverage the model’s strengths and minimize its weaknesses.
A key to that is going to be knowing the model’s strengths and weaknesses, which we’ll talk about more on an upcoming slide, but keep that in mind as an important part of this conversation about prompt engineering.
So I will say that there were those four areas.
And as I was preparing for this webinar, I asked ChatGPT, specifically ChatGPT-5 Auto, to expand on those particular four areas. And what I noticed in its response was that there was a lot of overlap in the content and the ideas it was giving me across those four areas, and there was not necessarily a clear distinction between many of the criteria.
And so, while it is good to think about the different approaches, I encourage you to also think about prompt engineering holistically.
Think about it, guiding behavior, framing the request, as the big picture of what you’re wanting to accomplish, instead of looking at it in parts and pieces, because ChatGPT itself, when I was asking for some specific examples, was moving back and forth between them.
But I’m gonna talk about these four and then bring it all together for you. So I think you’ll see how the four areas and the four keys to effective prompt engineering can be thought of in a holistic manner.
So let’s start by talking about framing the request.
There’s giving context.
And in this case, you could provide a clear reason for requesting the information.
The more AI understands why you’re asking your question, the more likely it will be able to tailor the response to your needs.
Also consider specifying a format.
Ask the AI for the output to be a specific format, such as a bulleted list, as a narrative, in three paragraphs, as a story.
Whatever format you would like that output to be in, ask for that.
Also, consider setting constraints.
Tell the AI what you do not want.
This could be anything: you don’t want it as a bulleted list, or you don’t want it in paragraph form. Maybe you want it for a social media platform, or you want it at a particular length, short and to the point. You don’t want it to use overly verbose language, you don’t want it to be too detailed, or you don’t want it to be vague. Whatever it is that you don’t want, you can add those constraints too.
And I encourage you to think about that. We’re all familiar with databases and how Boolean operators work, so think about setting constraints as the NOT Boolean operator. What do you not want?
And that can be a key piece even if you’re just asking a research question.
Think about what portion you might not want included in the results, and that can be a way of getting to better results too. While you’re not specifically using Boolean logic here, that idea of AND, OR, and NOT can still be applied to how you’re framing the request for the AI.
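To make that framing advice concrete, here is a minimal Python sketch of assembling a prompt from context, a requested format, and NOT-style constraints. The `build_prompt` helper and all of the sample text are hypothetical, purely for illustration, not any particular tool’s API.

```python
def build_prompt(task, context, output_format, constraints):
    """Assemble a clearly framed prompt: the task, why you're asking,
    the format you want back, and what you do NOT want."""
    lines = [
        f"Task: {task}",
        f"Context (why I'm asking): {context}",
        f"Format: {output_format}",
    ]
    # Constraints act like the Boolean NOT operator: spell out exclusions.
    for c in constraints:
        lines.append(f"Do NOT {c}.")
    return "\n".join(lines)

prompt = build_prompt(
    task="Summarize our library's new database trial for staff.",
    context="Staff need a quick orientation before the trial ends.",
    output_format="a bulleted list of no more than five points",
    constraints=["use jargon", "exceed 150 words"],
)
print(prompt)
```

The point is simply that each framing element occupies its own explicit line, so nothing is left for the model to guess.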
Use step-by-step directions.
Just like step by step instructions can be useful for humans, they can also be helpful for AI tools.
You can tell the AI what you want done and in what order you want it completed. So especially if you’re going to have a multi-step output, you can give it all of those directions upfront.
On the flip side of that, though, you might be expecting a multi-step result but not be sure what it’s going to give you to start with, and you might want each output to build off of the previous one. What I will often do is give it a prompt, get an output, and then, if I like what the output was, say, based on what you just provided, do the next thing. And you can keep doing that. Or if you like the first one, you can just say, based on this output, do the next three things.
And so that can be a way of modifying those step by step directions.
Then also consider your tone and how you describe the request. I already mentioned the research and what people have discovered about being polite to the AI system. Keep that in mind, because it can be really useful. Polite phrases such as please and thank you tend to work well with the AI.
And as more research comes out about that, just be aware of your tone. If you’re asking in very short phrases, you’re more likely to get short phrases back.
But your phrasing can also impact the results of the AI.
Then guiding AI’s behavior. So thinking about creating a persona.
So personas can act like a lens through which you kind of operate with the AI system. You can tell the AI to be serious, to be funny, to be professional.
You can tell the AI to take on specific attributes, you know, to be a beginner, to be an expert.
You can even go into more detail and create a whole persona, where it is, say, a twenty-five-year-old, and you can keep going through the list and give it a lot of descriptions of what you would want. It could be somebody working in the information profession, somebody working in the medical industry, whatever it is you’re looking for. Especially if you’re wanting to write content for a particular audience, giving it a detailed persona of somebody who would be in that audience can be really useful.
So as you’re thinking about using it in your own workplace context, think about who you’re writing for, who’s your audience, and then tell the AI to, you know, write its output for those people, for those individuals.
You can even go further with that and give the AI a role.
Describe a role you wish the AI would, take on. You can give it a background story, and share that you want it to kind of adopt that role in its replies. So, again, it could be the role of a librarian, the role of a teacher, the role of an executive.
And the background story provides the AI with the details to help it know what knowledge it should be drawing on when responding to your prompt. So if you tell it to be a librarian, for example, it’s going to realize, oh, it needs to know something about information literacy. It needs to be considering databases, knowledge management, maybe competitive intelligence. These are the sorts of things that would shape the output and might not be at the forefront if you’re not telling the AI system to take on that particular role.
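A role prompt like the one just described can be sketched as a small template. The `role_prompt` function and the example persona below are hypothetical, just to show how role, backstory, tone, and task combine into one instruction.

```python
def role_prompt(role, background, tone, task):
    """Combine a role, a short backstory, and a desired tone
    into a single instruction, followed by the actual task."""
    return (
        f"You are {role}. {background} "
        f"Respond in a {tone} tone.\n\n{task}"
    )

p = role_prompt(
    role="a special librarian at a corporate research center",
    background=("You have ten years of experience with competitive "
                "intelligence and knowledge management."),
    tone="professional but approachable",
    task="Draft a one-paragraph pitch for a new current-awareness service.",
)
print(p)
```

The backstory sentence is what tells the model which body of knowledge to lean on, exactly as described above.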
And, as we talked about how you can write with a particular tone, you can also ask the AI to give you a response in a specific tone. This can be that you want the output to be serious, to be funny, to be professional.
You can ask it to write formally or informally.
You can have it use first person or third person. All of these things can be part of guiding the behavior.
And then you could also ask the AI to self check.
And this, I think, is really interesting, because you would think it would always give the best or most accurate response. Right? But that’s not always the case. So when you’re establishing what behavior you want the AI to take on, you can specifically ask it to check its own work, to see if the output is factually correct or if it is the best that it can be. And that can really guide its behavior to be even more precise and improve the outputs.
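One simple way to build that self-check in is to append a standing review instruction to every prompt. The `with_self_check` helper and the wording of the checklist are assumptions for illustration; any phrasing that asks the model to review facts, format, and clarity serves the same purpose.

```python
# A reusable self-check instruction appended after the main request.
SELF_CHECK = (
    "Before finalizing, review your answer: "
    "1) Is every factual claim correct? "
    "2) Does it follow the requested format? "
    "3) Can any part be made clearer? "
    "Revise the answer to fix any problems you find."
)

def with_self_check(prompt):
    """Append the self-check instruction so the model reviews its own output."""
    return prompt + "\n\n" + SELF_CHECK

print(with_self_check("List three outreach ideas for a hospital library."))
```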
So, optimizing results. This is something really important that you will want to do with the AI output. We’ll start by talking about evaluating the output. That might sound obvious, and, of course, it is: you will want to evaluate the output that the AI creates.
You will want to look for inaccuracies.
You’ll want to check citations.
You will want to read it very closely for tone and for your purposes and just make sure that what it’s giving you is what you need and accurate.
But you can also ask the AI system to evaluate its own output, and you can ask it to identify strengths, identify weaknesses.
If it identifies weaknesses, which it very likely will because you ask it to do that, you can then ask for it to strengthen those weaknesses.
You can also take the output from one AI model and submit it as a prompt in another AI model, and ask that other model to evaluate it for strengths and weaknesses.
That can give you a sense of what different models are looking at. The more you use it for this purpose, you can start to determine which model might be the best for your needs.
But it can just be a way of really getting the best output possible. At times, you might go too far and think, okay, really, that second or third output was exactly what I needed; I’ve gotten a little too far down into the details by continuing to refine it.
But you won’t know that until you do some of that engineering. Again, remember the designing and refining we talked about earlier.
Then ask for what you want instead. If the output is not at all what you expected, you can ask the model to try again, with, some different kind of parameters and different context for what you’re wanting.
If you’re still thinking, this is just not working the way I think it should, you can jump to another model, if that is possible with what you have access to.
But definitely keep trying to figure out what it is that you need to put in or what you want.
You can also ask it to remove content.
So earlier, I was saying you could ask for things not to include. But once you obtain an output from the AI and you’re reading through it, you’re evaluating it, you might realize there’s something included that you don’t want to be there. And you can then go back, and create another prompt asking it to specifically remove portions, whether that is removing a citation, whether it’s removing, content related to a particular topic, whether it’s, you know, some specific words that you don’t like. Those are all things that you can ask it to be removed.
You can also ask the AI model to rephrase, the output. You can ask it to rephrase a portion of the output or all of the output.
And this is one way that I often use AI: when I am struggling to wordsmith something I’m working on, I will go in and ask the AI, give me three different ways that I can say this. It will then often give me something written in a very formal manner, something written informally, and something that might be written in a more humorous manner.
And I find that asking it to rephrase can really help me get a more precise way of saying what it is that I want. But sometimes I don’t even know what I want until I see something that I don’t want, and then I can create a better prompt because I know what I don’t want.
You can also ask the AI tool what it needs to know to provide a higher quality response, and that can be a really interesting discussion within the AI. I hesitate to use the word discussion, but for lack of a better term, that is what I am going to use. It can be a back-and-forth where you’re collaborating with the AI tool: you’re asking it what would be helpful for it to know in order to give you a better response, and you can work with it in that way.
And then finally, ask for more detail. You can ask the AI tool to expand on a particular portion of the output. Or you might realize, once you see the output, that there wasn’t enough information in your prompt, or that you hadn’t emphasized something in the prompt enough. So you can say, well, I meant to emphasize this particular portion of my prompt; will you do that now in the output that you create? You don’t even have to necessarily rewrite the entire prompt. You can reference the prompt you put in previously and say that you would like it to expand on a particular portion of it.
So applying strategies.
So these are all ideas generated from ChatGPT-5 Auto, just for full disclosure. But I thought some of them were really interesting, and I am going to expand on them here a bit for you.
So in terms of applying strategies for prompt engineering, one idea was few-shot prompting. And I was honestly not sure what it meant by this at first, so I had to dig a little bit deeper.
The example it kinda gave me was to show examples, and then that, I think, clicked a little bit more.
But you can give the AI tools examples of what you want.
It could be a visual example: if you’re creating an image, let’s say in ChatGPT, you can potentially upload a few similar images and say you would like something in this kind of setting or context, something very similar to these.
But it doesn’t have to be a visual representation either. It can be examples of writing styles.
As long as you have permission to upload things into the AI system and you’re not violating any privacy rules or anything like that, you can upload examples.
Even if it’s your own work, you can upload those examples and say, this is my writing style. I would like outputs based on this tone, and, you know, make it sound like this.
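The textual version of few-shot prompting can be sketched as a template that shows the model a few input/output pairs before posing the new input. The `few_shot_prompt` helper and the house-style examples below are hypothetical, just to show the shape of the technique.

```python
def few_shot_prompt(instruction, examples, new_input):
    """Show the model worked input/output pairs first,
    then pose the new input for it to complete."""
    parts = [instruction, ""]
    for inp, out in examples:
        parts.append(f"Input: {inp}")
        parts.append(f"Output: {out}")
        parts.append("")
    # End on an open "Output:" so the model continues the pattern.
    parts.append(f"Input: {new_input}")
    parts.append("Output:")
    return "\n".join(parts)

prompt = few_shot_prompt(
    "Rewrite each database announcement in our house style.",
    [
        ("We got Scopus.",
         "Now available: Scopus, an abstract and citation database."),
        ("We got JSTOR.",
         "Now available: JSTOR, a digital archive of scholarly journals."),
    ],
    "We got PubMed.",
)
print(prompt)
```

The examples carry the style; the trailing `Output:` invites the model to imitate them.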
So that can be one way. There’s also chain of thought prompting where you’re asking for reasoning.
So you can ask the AI tool to explain its reasoning. How did it come to that decision? How did it decide to create the output based on your prompt?
You can ask the AI how it came to a particular set of conclusions based on the question you asked. You can ask it to pull the research and show you what sources it drew from. All of those things you can ask it to do.
You know, we’re not able to see exactly how the AI system works.
So, you know, validating the accuracy of all of that can be challenging, if not somewhat impossible.
But, at the same time, I’m not able to see exactly how somebody else came to their decisions either. They just have to tell me how they came to those conclusions.
And they do so by pulling sources and things like that, so think about it that way. You can ask for more details from the AI system about how it came to create the output it did, and that might give you insight into what you might want to change about a future prompt, or how you might want to modify what you’re currently asking, if you notice that it’s drawing conclusions based on what you think is faulty reasoning.
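A chain-of-thought request can be as simple as wrapping the question with an ask for step-by-step reasoning and sources. The `chain_of_thought` helper and its wording here are one hypothetical phrasing among many.

```python
def chain_of_thought(question):
    """Wrap a question with a request for step-by-step reasoning,
    stated sources, and a clearly marked final answer."""
    return (
        f"{question}\n"
        "Think through this step by step, explain your reasoning, "
        "and list any sources or assumptions you relied on. "
        "Then give your final answer on a new line starting with 'Answer:'."
    )

print(chain_of_thought(
    "Which of our three databases best covers nursing ethics?"
))
```

Asking for a marked `Answer:` line also makes the reasoning easy to separate from the conclusion when you evaluate the output.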
You can ask for multiple responses, as well.
And that’s something that I’ve already talked about some. That’s pretty straightforward.
But definitely, instead of just getting one response for each prompt, you can ask the AI tool to provide you with any number of options.
And I think that can be a really useful way for you to then evaluate what you think is going to be the best.
I’ve already talked a little bit about role prompting, assigning the AI a persona. But think about it here too as a useful and important strategy that can really give you some better outputs.
And then prompt chaining. So this is breaking bigger tasks into multiple prompts.
So if what you need is detailed, you can use multiple prompts to build that entire document. I’ve talked about this a little bit. But to do this, you can ask the AI to keep building on its previous work.
You can also ask AI how to break a bigger need into smaller prompts. So if you’re not sure how best to break up a task, you can ask AI for suggestions on how it would do that, and then ask it why that would be a better approach. Alternatively, you can ask whether it would be better to break the task up into multiple prompts or to ask all at once, and then ask the AI to explain its reasoning. So you can really use the AI model to help you know how best to work with it; that’s definitely something it does pretty well. Keep experimenting with different ways of prompting and breaking tasks up into different sized asks.
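Prompt chaining, as described above, can be sketched as turning one big task into an ordered series of prompts where each step explicitly builds on the previous output. The `chain_prompts` helper and the training-manual steps are hypothetical illustrations.

```python
def chain_prompts(steps):
    """Turn one big task into an ordered series of prompts,
    each (after the first) building on the previous output."""
    prompts = []
    for i, step in enumerate(steps, start=1):
        if i == 1:
            prompts.append(f"Step {i}: {step}")
        else:
            prompts.append(
                f"Step {i}: Based on what you just provided, {step}"
            )
    return prompts

chain = chain_prompts([
    "outline a staff training manual for our new catalog.",
    "draft the first section of that outline.",
    "write three review questions for that section.",
])
for p in chain:
    print(p)
```

In practice you would send each prompt after reviewing the prior output, which is where the refining comes back in.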
And then meta prompting. This one was really interesting to me, I will say, and not one that I had, really thought about too much.
But this is when you’re asking AI to reflect on how it answered, and improve.
So you can ask AI to, and I’m going to use some air quotes here, “think about” its own output and find ways to improve it. And then you can ask AI to improve based on what it noticed could be improved.
So you can, again, have some of that meta-level prompting as you’re working through a prompt engineering exercise, trying to get the best information, the best output, from the system that you are able to.
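That reflect-then-improve move can be sketched as a follow-up prompt that folds the previous answer back in. The `meta_round` helper and the wording of `META_PROMPT` are assumptions for illustration.

```python
# A reusable reflect-then-improve instruction for meta prompting.
META_PROMPT = (
    "Look back at the answer you just gave. "
    "First, list what could be improved about it. "
    "Then rewrite the answer, applying each improvement you listed."
)

def meta_round(previous_answer):
    """Fold the prior output into a reflect-then-improve follow-up prompt."""
    return f"Your previous answer was:\n{previous_answer}\n\n{META_PROMPT}"

print(meta_round("Our library offers many databases."))
```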
So I mentioned earlier this idea of holistic prompting.
And so when crafting prompts, I really do think it’s helpful to remember that they’re made up of individual components, which are what we just talked about. There’s a specific instruction, there’s context, there’s constraints, and you can use examples. Those strategies can all be divided into different pieces, but all of the individual components come together to ultimately create the output that you need.
So sometimes, beyond focusing on the individual parts of prompt engineering, it can be equally important to think about prompting as a whole. This, in my mind, comes from the education world, where we talk about teaching whole to part or part to whole.
Whole to part is when you’re looking at an entire project first and then breaking that project down into pieces afterwards. Everybody has seen the final product first; then you break it down to help them get there.
Alternatively, you can teach where you’re building all of the pieces and then the final output is revealed at the end.
And I have always been more of a whole-to-part teacher, where I like having a good idea of what I want the end result to be, showing that end result to those I’m working with, and then piecing it together. But there can be times, maybe when you don’t have a good picture of the end, where it’s more useful to break it down and just start by asking individual questions to get to the final product.
So again, whole to part and part to whole is where this conversation really comes from. But I think a holistic perspective can help you ensure, to the extent possible, that the prompting you’re doing aligns with the bigger picture or final output you’re really looking for, and not just generating pieces that you have to put together yourself later on. That can be useful if you want it that way, but it might not be what you want.
So I think that there are kind of generally two holistic approaches that you can take when designing a prompt, and you can see them here in each of these columns.
If you start out in the first column, having a clear idea of what you want, then you can create a detailed prompt, review the output, give feedback based on the output, and then repeat as needed, which is where that iterative process comes into play. But you already know what you want, so you can move fairly quickly through those iterations of the prompting.
But if you don’t have a clear idea of what you want, in the next column, you might want to start by asking the AI system for ideas or a place to start. Give it the general idea that you have, tell it you’re brainstorming, tell it that you are hoping to go generally in a certain direction, whether that is creating a training manual, designing a new marketing campaign, or thinking about outreach for your library, whatever that might be. Then review that output, give feedback, and add more context and information as needed.
And as the idea becomes more formed, you can keep working with the AI system. When you’re starting with it as an idea generator, it can help you refine your idea over and over again through that iterative process. Then, once you have that idea, if you want it to continue on with you for building the idea out, you can go to that first column and work through it that way. So hopefully that gives you some ideas for how to think about holistic prompting, maybe in a new way.
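The create-review-feedback-repeat cycle from the first column can be sketched as a small loop. Here `ask_model` and `judge` are caller-supplied stand-ins (a hypothetical model call and a review step), since no real model API is specified in the talk; the toy lambdas below exist only so the loop runs end to end.

```python
def refine_loop(ask_model, first_prompt, judge, max_rounds=4):
    """Create-review-feedback loop: keep refining the output
    until the review step (judge) has no more feedback."""
    prompt = first_prompt
    output = ask_model(prompt)
    for _ in range(max_rounds):
        feedback = judge(output)          # review the output
        if feedback is None:              # good enough: stop iterating
            break
        prompt = (f"Previous output:\n{output}\n\n"
                  f"Feedback: {feedback}\nPlease revise.")
        output = ask_model(prompt)
    return output

# Toy stand-ins: the "model" appends an exclamation point each round,
# and the "judge" is satisfied once the text ends with two of them.
result = refine_loop(
    ask_model=lambda p: (p.split("\n")[1]
                         if "Previous output:" in p else "Draft") + "!",
    judge=lambda o: None if o.endswith("!!") else "Add more enthusiasm.",
    first_prompt="Write a tagline.",
)
print(result)
```

With a clear idea of what you want, `judge` converges quickly; in the brainstorming column, the feedback itself evolves round to round.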
So now that we have looked at some examples of how to engineer your prompts through designing and refining, I also want to take a little bit of time to think about knowing your AI model.
So AI models, I feel like, change quickly.
There’s always some new model being introduced. Whichever company it is that has created the LLM, they’re often coming up with new versions of it, and those versions can be similar to what came before or can have some very new features.
And so staying current, to the extent possible, and I say to the extent possible because that’s really hard to do, I think is important.
And the reason I think it’s important to know about these AI models is because, in many ways, I want you to think about these models like databases: they all have their own niches.
They have their strengths. They have their weaknesses.
They, you know, have some personalization involved in them. They also often have a kind of a default voice.
They have different outputs that they can create.
Some do visual work really well. Some will work with a spreadsheet very well. Some will work within certain ecosystems, like your information processing ecosystems; others will not. And so understanding which ones will work with which tools can really help you select and decide what’s going to meet your needs, and also the needs of those in your organization.
Just like we need to be able to identify which database is going to be the best, for our users and for ourselves.
Which brings me to this idea of prompt engineering.
I really think it is a form of information literacy.
You know, information literacy is required for good prompt engineering.
We need to know how to ask for the information that we need, we need to understand the AI model, and we need to be able to evaluate the output.
Those are all parts of what we think about when we think about information literacy.
And, you know, I come from kind of a higher ed context, and we think about information literacy with the ACRL framework. You know, authority is constructed and contextual.
So while we don’t know exactly what’s included within the AI models, having a sense of the type of output can give you a sense of some of that authority.
Information creation as a process: understanding how the AI might have created that output, asking the AI for its creation process. Those are all pieces of this.
Information has value: there is a lot to say about AI and information and the costs associated with it. Research as inquiry: definitely that iterative process, needing to search for things, not always getting exactly what you want, but being able to go back and refine and reflect. That is all part of it, as well as searching as strategic exploration.
So just I think you can align a lot of the components and the concepts that make up information literacy with the skills needed to work well with AI, specifically with, you know, prompt engineering, but just AI use broadly.
So as we come to the end of this webinar, I want you to think about how you might change your prompting based on what we’ve talked about here today. What do you want to try, and are there any specific ways you will try to increase your prompting skills?
I hope you all take time to reflect on those questions, reflect on how you might work differently with AI, and take some time to potentially have conversations around AI and its role within the information literacy ecosystem.
With that, I will turn it back over to Bradley.
Thank you, Lauren, for the wonderful presentation. And to our audience, if you have any more questions on any of our software or our company, our contact details are listed on the screen for you. And please stay tuned for more webinars and content related to this series.
On behalf of the Lucidea team, I thank you all for attending today and until next time. Thank you.