Designing System, User, and Assistant Prompts for LLMs

This documentation outlines the roles and best practices for designing system, user, and assistant prompts when interacting with conversational AI models, particularly in the context of APIs like OpenAI's Chat Completions.

Understanding Prompt Roles

In chat-based Large Language Models (LLMs), prompts are structured into distinct roles to guide the AI's behavior and responses.

1. System Prompt

Definition: The system prompt establishes the AI assistant's initial behavior, tone, personality, and operational boundaries. It serves as a high-level directive for how the model should conduct itself throughout a conversation.

Purpose:

  • Control Voice and Style: Dictates the language, formality, and overall demeanor of the AI.
  • Define Constraints: Sets limitations, such as prohibiting speculation, requiring source citation, or specifying output formats.
  • Set Roles: Assigns a specific persona or function to the AI (e.g., "You are a helpful tutor," "You are a cybersecurity expert").

Example:

{
  "role": "system",
  "content": "You are a professional legal advisor. Answer questions based only on verified legal information. Be formal and concise."
}

Example Usage (Python with OpenAI):

from openai import OpenAI

# Requires openai>=1.0; the client reads OPENAI_API_KEY from the environment by default
client = OpenAI()

# System prompt for a professional legal advisor
system_message = {
    "role": "system",
    "content": "You are a professional legal advisor. Answer questions based only on verified legal information. Be formal and concise."
}

# Example user question
user_message = {
    "role": "user",
    "content": "Can an employee be fired without notice in India?"
}

# Call the Chat Completions API
response = client.chat.completions.create(
    model="gpt-4",
    messages=[system_message, user_message]
)

# Print the legal assistant's response
print(response.choices[0].message.content)

2. User Prompt

Definition: The user prompt represents the actual input or query submitted by the end-user. It is the primary driver of the conversation's content and purpose.

Purpose:

  • Ask Questions: Elicit information or explanations from the AI.
  • Provide Instructions: Direct the AI to perform specific tasks or generate particular content.
  • Request Actions: Initiate a process or request the AI to engage in a specific type of interaction.

Example:

{
  "role": "user",
  "content": "What are the tax benefits of starting an LLC in California?"
}

Example Usage:

# Reuse the client and system_message defined in the previous example
user_prompt = {
    "role": "user",
    "content": "What are the tax benefits of starting an LLC in California?"
}

# Send the request to OpenAI's chat model
response = client.chat.completions.create(
    model="gpt-4",
    messages=[system_message, user_prompt]  # system_message defined above
)

# Print the assistant's formal legal response
print(response.choices[0].message.content)

3. Assistant Prompt (Response)

Definition: The assistant prompt is a message with the assistant role. Most often it is the response generated by the LLM, shaped by both the system's instructions and the user's query; assistant messages can also be written by the developer and included in the conversation history to supply earlier turns or example answers.

Purpose:

  • Deliver Response: Provide information, answers, or generated content aligned with the user's intent and the system's guidelines.
  • Adhere to System Instructions: Ensure the response respects the defined tone, constraints, and persona.

Example:

{
  "role": "assistant",
  "content": "Starting an LLC in California may offer tax benefits such as pass-through taxation and eligibility for specific deductions. However, consult with a tax professional for tailored advice."
}

Example of an Assistant's Detailed Response:

{
  "role": "assistant",
  "content": "LLCs in California benefit from pass-through taxation, meaning profits are taxed only once on the owner's personal income. However, they must pay an $800 annual franchise tax and may be subject to an LLC fee based on income. Always consult a tax professional for precise guidance."
}

Prompt Structure for Chat-Based LLMs

Conversational LLMs typically process a history of messages. The order and structure of these messages are crucial for maintaining context and guiding the AI.

Template:

messages = [
  {"role": "system", "content": "You are a helpful AI assistant."},
  {"role": "user", "content": "Explain the importance of data privacy."}
]

Python with OpenAI Example:

from openai import OpenAI

# Requires openai>=1.0
client = OpenAI()

response = client.chat.completions.create(
  model="gpt-4",
  messages=[
    {"role": "system", "content": "You are a cybersecurity expert."},
    {"role": "user", "content": "How do firewalls enhance network security?"}
  ]
)

print(response.choices[0].message.content)
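Because the model is stateless, each request must carry the full conversation history: after every call, the assistant's reply is appended to the message list so the next user turn can build on it. A minimal sketch of that bookkeeping (no API call is made here, and the assistant reply is a placeholder string):

```python
# Maintain conversation state by growing the messages list each turn.
messages = [
    {"role": "system", "content": "You are a cybersecurity expert."},
    {"role": "user", "content": "How do firewalls enhance network security?"},
]

# After each API call, append the assistant's reply (placeholder text here)
# so the model sees it on the next turn.
assistant_reply = "Firewalls filter traffic based on configured rules."
messages.append({"role": "assistant", "content": assistant_reply})

# The follow-up question can now rely on earlier context ("they").
messages.append({"role": "user", "content": "Do they protect against phishing?"})

# The full history -- system, user, assistant, user -- is sent on the next call.
print([m["role"] for m in messages])
```

Without this accumulation, the model would see the follow-up question in isolation and could not resolve references like "they".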

Best Practices for Designing Prompts

Effective prompt design is key to eliciting high-quality, relevant, and safe responses from LLMs.

For System Prompts:

  • Be Specific: Clearly define the assistant's role, capabilities, and limitations.
  • Avoid Ambiguity: Explicitly state the desired tone, behavior, and output style.
  • Include Formatting Rules: Specify output formats if necessary (e.g., "respond in Markdown," "use bullet points for lists").

SEO Example:

{
  "role": "system",
  "content": "You are an SEO expert. Always provide tips based on Google’s latest SEO guidelines."
}

For User Prompts:

  • Ask Direct and Contextually Rich Questions: Provide sufficient detail for the AI to understand the request accurately.
  • Include Relevant Background: If a query depends on prior information, include it in the prompt.
  • Use Structured Inputs: Employ numbered lists or bullet points for requests that require structured output.

Good Example:

{
  "role": "user",
  "content": "List 5 keyword research tools for SEO and explain their features."
}

For Assistant Prompts (when providing examples or priming the model):

  • Keep Responses On-Topic: Ensure generated examples align with the system role and user intent.
  • Include Examples/Steps: Incorporate illustrative examples or step-by-step instructions as per system requirements.
  • Stay Concise: Unless verbosity is explicitly requested, aim for clear and efficient communication.
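One common way to prime the model along these lines is few-shot prompting: a hand-written user/assistant exchange is placed before the real query so the model imitates the demonstrated format. A minimal sketch (the demonstration exchange below is illustrative, written by the developer rather than generated by the model):

```python
# Few-shot priming: a developer-written user/assistant pair demonstrates
# the desired answer format before the real question is asked.
few_shot_messages = [
    {"role": "system", "content": "You are an SEO expert. Answer with a short bulleted list."},
    # Demonstration turn (authored by the developer, not the model)
    {"role": "user", "content": "Name two on-page SEO factors."},
    {"role": "assistant", "content": "- Title tags\n- Internal linking"},
    # The real query; the model tends to mirror the bulleted format above.
    {"role": "user", "content": "Name two off-page SEO factors."},
]

print([m["role"] for m in few_shot_messages])
```

The demonstration pair counts toward the context window, so keep few-shot examples short and on-topic.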

Use Cases for Prompt Design

Role-based prompting allows for specialized AI assistants tailored to specific domains.

Use Case           | System Prompt Example
SEO Content Writer | "You are an expert SEO content writer. Write in a friendly tone."
Code Assistant     | "You are a senior Python developer. Provide well-documented code."
Financial Advisor  | "You are a certified financial planner. Give accurate advice only."
Tutor              | "You are a history teacher specializing in ancient civilizations. Explain concepts clearly."
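Since these personas differ only in their system prompt, a small helper can assemble the message list for any of them. A sketch under that assumption (the `build_messages` helper and persona keys are illustrative, not part of any API):

```python
# Map each use case to its system prompt (taken from the table above).
PERSONAS = {
    "seo_writer": "You are an expert SEO content writer. Write in a friendly tone.",
    "code_assistant": "You are a senior Python developer. Provide well-documented code.",
    "financial_advisor": "You are a certified financial planner. Give accurate advice only.",
    "tutor": "You are a history teacher specializing in ancient civilizations. Explain concepts clearly.",
}

def build_messages(persona, question):
    """Assemble a system + user message list for the chosen persona."""
    return [
        {"role": "system", "content": PERSONAS[persona]},
        {"role": "user", "content": question},
    ]

messages = build_messages("code_assistant", "How do I read a CSV file?")
print(messages[0]["role"], messages[1]["role"])
```

Centralizing the personas this way keeps system prompts versioned in one place instead of scattered across call sites.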

Benefits of Structured Prompt Roles

Implementing distinct roles within prompt design offers several advantages:

  • Improved Response Quality: Clear guidance leads to more accurate and relevant outputs.
  • Greater Control: Fine-tune the AI's tone, depth, and adherence to specific instructions.
  • Easier Debugging: Isolate issues to specific prompt components if responses are unsatisfactory.
  • Enhanced Trust and Safety: Enforce constraints and ethical guidelines more effectively in deployed applications.

Conclusion

Mastering the design of system, user, and assistant prompts is fundamental to building high-quality, safe, and goal-aligned conversational AI systems. By carefully crafting each prompt's input, developers can effectively guide an LLM's behavior, enhance response accuracy, and ensure consistent, user-friendly experiences across diverse applications like SEO, coding assistance, customer support, and education.


Relevant SEO Keywords:

  • What is a system prompt in AI?
  • User vs assistant prompt in chat models
  • Prompt structure for OpenAI GPT-4
  • Designing effective system prompts
  • Best practices for LLM prompt roles
  • OpenAI prompt formatting guide
  • Examples of assistant role responses
  • System prompt examples for SEO, coding, finance

Potential Interview Questions:

  • What is the purpose of a system prompt in chat-based LLMs?
  • How does a user prompt differ from an assistant prompt?
  • Why is the order of system, user, and assistant prompts important in OpenAI’s API?
  • What should be included in a well-structured system prompt?
  • How can a system prompt influence tone and output style?
  • Provide an example where a poorly designed system prompt could lead to incorrect or unsafe responses.
  • How do assistant prompts reflect the influence of both system and user prompts?
  • In what scenarios would you use role-based prompt customization (e.g., SEO expert vs legal advisor)?
  • What are some best practices for writing user prompts that elicit high-quality answers?
  • How does structured prompt design improve LLM performance and reliability in production?