
module cleanlab_tlm.utils.chat

Utilities for formatting chat messages into prompt strings.

This module provides helper functions for working with chat messages in the format used by OpenAI’s chat models.

Global Variables

  • SYSTEM_PREFIX
  • USER_PREFIX
  • ASSISTANT_PREFIX
  • SYSTEM_ROLE
  • DEVELOPER_ROLE
  • USER_ROLE
  • TOOL_ROLE
  • ASSISTANT_ROLE
  • SYSTEM_ROLES
  • FUNCTION_CALL_TYPE
  • FUNCTION_CALL_OUTPUT_TYPE
  • TOOLS_TAG_START
  • TOOLS_TAG_END
  • TOOL_CALL_TAG_START
  • TOOL_CALL_TAG_END
  • TOOL_RESPONSE_TAG_START
  • TOOL_RESPONSE_TAG_END
  • TOOL_DEFINITIONS_PREFIX
  • TOOL_CALL_SCHEMA_PREFIX

function form_prompt_string

form_prompt_string(
    messages: list[dict[str, Any]],
    tools: Optional[list[dict[str, Any]]] = None,
    use_responses: Optional[bool] = None,
    **responses_api_kwargs: Any
) → str

Convert a list of chat messages into a single string prompt.

If there is only one message and no tools are provided, returns the content directly. Otherwise, concatenates all messages with appropriate role prefixes and ends with “Assistant:” to indicate the assistant’s turn is next.

If tools are provided, they will be formatted as a system message at the start of the prompt. In this case, even a single message will use role prefixes since there will be at least one system message (the tools section).

If Responses API kwargs (like instructions) are provided, they are formatted into the prompt following Responses API conventions. These kwargs are only supported when the Responses API format is used.

Handles messages in either OpenAI’s Responses API or Chat Completions API formats.
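The assembly rules above can be sketched as follows. This is a simplified illustration of the documented behavior only, not the library's implementation; the exact prefix strings and separators are assumptions (the real values come from this module's SYSTEM_PREFIX, USER_PREFIX, and ASSISTANT_PREFIX globals), and tool handling is omitted.

```python
from typing import Any

# Assumed role prefixes; the actual values are defined by this module's
# SYSTEM_PREFIX / USER_PREFIX / ASSISTANT_PREFIX globals.
PREFIXES = {"system": "System: ", "user": "User: ", "assistant": "Assistant: "}


def sketch_form_prompt(messages: list[dict[str, Any]]) -> str:
    """Simplified sketch of the documented behavior (no tools, no kwargs)."""
    # Single message and no tools: return the content directly.
    if len(messages) == 1:
        return messages[0]["content"]
    # Otherwise: concatenate messages with role prefixes and end with
    # "Assistant:" to indicate the assistant's turn is next.
    parts = [PREFIXES[m["role"]] + m["content"] for m in messages]
    return "\n\n".join(parts) + "\n\nAssistant:"
```

For example, `sketch_form_prompt([{"role": "user", "content": "Hi"}])` returns `"Hi"` unchanged, while a multi-message history produces a prefixed transcript ending in `"Assistant:"`.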

Args:

  • messages (List[Dict]): A list of dictionaries representing chat messages. For the Responses API, each dictionary should contain either:
    • 'role' and 'content' for regular messages
    • 'type': 'function_call' and function call details for tool calls
    • 'type': 'function_call_output' and output details for tool results
    For the Chat Completions API:
    • 'role': 'user', 'assistant', 'system', or 'tool' and appropriate content
    • For assistant messages with tool calls: 'tool_calls' containing function calls
    • For tool messages: 'tool_call_id' and 'content' for tool responses
  • tools (Optional[List[Dict[str, Any]]]): The list of tools made available for the LLM to use when responding to the messages. This is the same argument as the tools argument for OpenAI’s Responses API or Chat Completions API. This list of tool definitions will be formatted into a system message.
  • use_responses (Optional[bool]): If provided, explicitly specifies whether to use Responses API format. If None, the format is automatically detected using _uses_responses_api. Cannot be set to False when Responses API kwargs are provided.
  • **responses_api_kwargs: Optional keyword arguments for OpenAI’s Responses API. Currently supported:
    • instructions (str): Developer instructions to prepend to the prompt with highest priority.
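As a concrete illustration of the two accepted message shapes, the same tool-call exchange can be written either way; the message content below is hypothetical, but the key names follow the formats described above:

```python
# Chat Completions API format: an assistant tool call plus the tool's reply.
chat_completions_messages = [
    {"role": "user", "content": "What is the weather in Tokyo?"},
    {
        "role": "assistant",
        "tool_calls": [
            {
                "id": "call_1",
                "type": "function",
                "function": {
                    "name": "get_weather",
                    "arguments": '{"city": "Tokyo"}',
                },
            }
        ],
    },
    {"role": "tool", "tool_call_id": "call_1", "content": "22C, clear"},
]

# Responses API format: the same exchange using typed items.
responses_messages = [
    {"role": "user", "content": "What is the weather in Tokyo?"},
    {
        "type": "function_call",
        "call_id": "call_1",
        "name": "get_weather",
        "arguments": '{"city": "Tokyo"}',
    },
    {"type": "function_call_output", "call_id": "call_1", "output": "22C, clear"},
]
```

Either list can be passed as the messages argument; when use_responses is None, the format is detected automatically from these key names.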

Returns:

  • str: A formatted string representing the chat history as a single prompt.

Raises:

  • ValueError: If Responses API kwargs are provided with use_responses=False.