The Artificial Intelligence (AI) Guide provides an introduction to this evolving field for faculty, fellows, residents, postdocs, students, and staff. Due to the rapid advancement of this emerging technology, information in the Guide may become outdated at times.
For information on Artificial Intelligence (AI) Data Security and Privacy, see Artificial Intelligence (AI) Data Security and Privacy - Information Resources (utsouthwestern.net) (VPN/on-campus access only). NOTE: This Guide supplements but does not supersede information provided by UT Southwestern or University of Texas policies and guidelines.

The prompt is the input, question, or query you provide to the large language model (LLM). Whether it is a few words, a few sentences, or a paragraph, the prompt is the set of instructions describing what you want the AI to do or generate. On top of your input, the LLM's backend applies an additional interpretive layer of its own.
What the LLM provides in return is often referred to as the output or response. It can come in different media formats, depending on how you prompt the chatbot. Outputs can vary between different AI generators even when given the same prompt, and the same tool can return different outputs at different points in time.
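For readers who work with an LLM through code rather than a chat window, the same prompt-and-output relationship applies. The sketch below is illustrative only; it assumes the openai Python package (version 1.x) and an API key, and the model name is a placeholder rather than a recommendation.

```python
# A minimal sketch of prompt in, output out, assuming the openai Python
# package (v1.x) and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads the API key from the environment

# The prompt: the instruction describing what you want the model to generate.
prompt = "Summarize the main risk factors for type 2 diabetes in three bullet points."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": prompt}],
)

# The output (response): the generated text returned by the model.
print(response.choices[0].message.content)
```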
Prompt engineering is the structuring or crafting of the input to get a desired, predictable, and high-quality output. It helps ensure the AI generates content relevant to your academic or research needs.
Context engineering is the shaping of the LLM's environment: configuring the tool to produce the best possible outcomes. This includes system instructions, memory and conversation history, user metadata, and how the tool is integrated with external resources.
By analogy, prompt engineering is writing a good question, while context engineering is designing a quality interview process. As models have become more advanced, the emphasis has shifted toward context engineering, but the quality of your prompt still matters for getting exactly the output you want.
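One way to see the distinction is in how a programmatic request is assembled: the prompt is only one part of the context the model receives. The sketch below assumes the same openai package and API key as above; the system instructions, conversation history, and question are invented for illustration.

```python
# Context engineering vs. prompt engineering, as a sketch (openai v1.x).
# Everything besides the final user message is "context": system
# instructions and prior conversation turns.
from openai import OpenAI

client = OpenAI()

messages = [
    # System instructions shape the model's behavior for the whole session.
    {"role": "system", "content": "You are a medical librarian. Cite peer-reviewed sources and flag uncertainty."},
    # Conversation history gives the model memory of earlier turns.
    {"role": "user", "content": "I'm planning a systematic review on statin adherence."},
    {"role": "assistant", "content": "Understood. What population and timeframe are you focusing on?"},
    # The prompt itself: the question you actually want answered now.
    {"role": "user", "content": "Adults over 65, studies from 2015 onward. Suggest a search strategy."},
]

response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(response.choices[0].message.content)
```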
Comparable to a literature search, prompting is an iterative process—you may not receive the desired output the first time. Evaluate and refine as needed.
The CLEAR framework (Lo, 2023) is a method for optimizing prompts given to Copilot, ChatGPT, and other generative AI tools.
To follow the CLEAR framework, prompts must be:
| Letter | Component | Guidance |
|---|---|---|
| C | Concise | Be specific about what you want the AI to do and not do. Aim for brevity and clarity. Avoid superfluous words. |
| L | Logical | Maintain a coherent, organized order of ideas. Provide the AI with context and identify the relationships between concepts. As with any work task, break complex tasks into manageable steps. |
| E | Explicit | Be specific and avoid ambiguity. Clearly identify the scope and format of the output. Provide the desired tone and audience. Upload an example of what you want. Consider the desired word length and language. |
| A | Adaptive | Adjust the prompt if the desired response is not received the first time. If responses are too generic or yield too much information, be more specific. |
| R | Reflective | Take a critical look at the prompts you've constructed and the type of information you received in return. Apply what you learn in future prompts. Many AI tools retain and utilize your conversation history. |

The last two components, Adaptive and Reflective, ask you to evaluate your prompt after the output is generated.
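As an illustration only, the sketch below assembles a prompt that reflects the first three CLEAR components; the Adaptive and Reflective steps happen when you revise the prompt after reviewing the output. The topic, audience, and constraints are hypothetical.

```python
# A hypothetical CLEAR-style prompt, assembled in Python for illustration.
# C (Concise): short, direct instruction with no superfluous words.
# L (Logical): context first, then the task, then the constraints.
# E (Explicit): scope, format, tone, audience, and length are all stated.
clear_prompt = (
    "You are assisting a first-year medical student.\n"        # context (Logical)
    "Explain the renin-angiotensin-aldosterone system.\n"      # task (Concise)
    "Format: a numbered list of 5 steps, plain language, "     # format/scope (Explicit)
    "under 150 words. Do not include drug dosing information." # explicit exclusion
)
print(clear_prompt)

# A (Adaptive) and R (Reflective): if the output is too generic, tighten the
# prompt (e.g., narrow the audience) and note what worked for future prompts.
```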
The PICO(M) framework is designed for clinical quantitative questions. Although most AI tools are designed to handle natural language queries, we advise using this framework to help ensure the AI tool outputs literature relevant to your research project.
The PICO mnemonic, introduced in 1995 by Richardson et al., was developed to help answer health-related quantitative questions by breaking the question down into searchable keywords. Over the years, the framework has evolved to include additional components, such as "T" (Timeframe) and "TT" (Type of question + Type of study design). In the UT Southwestern PICO(M) framework, "M" refers to methodology or study design (Davies, 2011; Richardson, Wilson, Nishikawa, & Hayward, 1995).
Key components include:
| Letter | Component | Questions to consider |
|---|---|---|
| P | Patient or Problem | What is the important patient problem or condition? How would you describe the important characteristics of the patient? |
| I | Intervention, Indicator, Exposure, Prognostic Factor | What do you want to do to help the patient? Do you want to consider a specific treatment, diagnostic test, exposure, or risk factor? Is there a prognostic factor that might affect the outcome of the condition? |
| C | Comparison or Control | What are the choices of intervention, if any? Are you trying to decide between two different therapies or two different tests? Between a therapy and no therapy (placebo)? |
| O | Outcome | What are you trying to achieve with the intervention? What is the important outcome for the patient? |
| M | Methodology | What is the best study design or methodology for the type of question you are asking? |
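As a hypothetical illustration, the sketch below turns PICO(M) components into a structured prompt for a literature-focused question; the clinical scenario and wording are invented for the example.

```python
# A hypothetical PICO(M) example, assembled into a prompt for illustration.
pico_m = {
    "Patient/Problem": "adults over 65 with hypertension",
    "Intervention": "home blood pressure telemonitoring",
    "Comparison": "usual clinic-based monitoring",
    "Outcome": "reduction in systolic blood pressure at 12 months",
    "Methodology": "randomized controlled trials or systematic reviews",
}

prompt = (
    "Identify key literature for the following clinical question.\n"
    + "\n".join(f"{part}: {value}" for part, value in pico_m.items())
    + "\nReturn results as a table with citation, study design, and main finding."
)
print(prompt)
```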
The Five "S" model was designed by AI for Education specifically for educators, but the model can be applied to other non-educational tasks.
The Five S's are:
| Component | Guidance |
|---|---|
| Set the Scene | Provide the AI chatbot context on what role, expertise, and/or environment it should use to guide its output. Ex: "You are an expert STEM instructional designer and teacher..." |
| Be Specific | Be specific in the instructions. Clearly define the task and provide details on what you would like included. Ex: "Use the 5E Model to create a 60-minute hands-on lesson..." |
| Simplify Your Language | Use a conversational approach with simplified language that avoids unnecessary jargon. Ex: "Create an engaging lesson plan that aligns with CCSS..." |
| Structure the Output | Tell the AI how to structure the output, with specifics on format, audience, and/or sections. Ex: "Create a rubric for my [medical] students formatted as a table with directions..." |
| Share Feedback | Provide feedback at all points in the conversation. Share specifics on what needs to be revised to meet your needs. Ex: "Change the format from a table to a checklist..." |
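As an illustration only, the sketch below maps the Five S's onto a short chat exchange: "Set the Scene" becomes the opening role-setting message and "Share Feedback" becomes a follow-up revision request. The lesson-planning details are hypothetical.

```python
# A hypothetical Five "S" exchange, shown as a list of chat turns.
conversation = [
    # Set the Scene: give the chatbot a role and environment.
    {"role": "user", "content": "You are an expert STEM instructional designer and teacher."},
    # Be Specific + Simplify Your Language + Structure the Output:
    # a clearly defined task, in plain language, with the output format stated.
    {"role": "user", "content": (
        "Use the 5E Model to create a 60-minute hands-on lesson on photosynthesis "
        "for 9th graders. Format it as a table with one row per phase."
    )},
    # Share Feedback: after reviewing the first draft, ask for a revision.
    {"role": "user", "content": "Change the format from a table to a checklist and shorten the Explore phase."},
]

for turn in conversation:
    print(f"{turn['role'].upper()}: {turn['content']}\n")
```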
Most popular LLM chatbots have embedded image generators. Other tools, such as Midjourney or Adobe Firefly, are designed specifically for image (and/or video) generation.
To generate quality, relevant images, consider the Subject + Style + Details + Format of output (Harvard, 2023).
| Component | Questions to consider |
|---|---|
| Subject | Provide as much detail as possible about the subject(s). What are they doing? What do they look like? What is in the background? |
| Style | Indicate the preferred image style. What is the art style (impressionist, manga, line drawing, etc.)? |
| Details | Provide any additional relevant details. What is the color scheme? How are things positioned? How realistic do you want the image to be? |
| Format of output | Indicate how you want the image output. What is the orientation? What is the aspect ratio? What will the image be used for (a research poster versus a social media post)? |
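As a hypothetical illustration, the sketch below assembles an image prompt from the four parts; the subject matter, style, and settings are invented.

```python
# A hypothetical image prompt built from Subject + Style + Details + Format.
subject = "a nurse teaching a patient how to use a glucose monitor at a kitchen table"
style = "flat vector illustration"
details = "warm color scheme, soft lighting, no text or logos"
output_format = "landscape orientation, 16:9 aspect ratio, for a research poster"

image_prompt = f"{subject}, {style}, {details}, {output_format}"
print(image_prompt)
```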
The top tip for prompt engineering: Don't reinvent the wheel. There are plenty of prompt examples out there. You can use these as templates and plug in your information to get relevant content.
Prompt libraries are digital repositories or collections of prompts. These collections help ensure consistent and efficient AI interactions, reduce the time needed to write prompts, and improve output quality by providing a set of proven, well-structured inputs for various tasks.
You must critically evaluate the quality of any prompt before using it.
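As an illustration only, a prompt library can be as simple as a collection of reusable templates with placeholders you fill in before sending the prompt to a chatbot; the templates below are hypothetical.

```python
# A hypothetical, minimal prompt library: reusable templates with placeholders.
PROMPT_LIBRARY = {
    "summarize_article": (
        "Summarize the following article for a {audience} in {word_limit} words, "
        "highlighting methods and limitations:\n{article_text}"
    ),
    "draft_email": (
        "Draft a {tone} email to {recipient} about {topic}. Keep it under 120 words."
    ),
}

# Plug your information into a template before sending it to a chatbot.
prompt = PROMPT_LIBRARY["summarize_article"].format(
    audience="resident physician",
    word_limit=200,
    article_text="<paste abstract or full text here>",
)
print(prompt)
```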