
module cleanlab_tlm.utils.chat

Utilities for formatting chat messages into prompt strings.

This module provides helper functions for working with chat messages in the format used by OpenAI’s chat models.


function form_prompt_string

form_prompt_string(
    messages: 'list[dict[str, Any]]',
    tools: 'Optional[list[dict[str, Any]]]' = None,
    use_responses: 'Optional[bool]' = None,
    **responses_api_kwargs: 'Any'
) -> str

Convert a list of chat messages into a single string prompt.

If there is only one message and no tools are provided, returns the content directly. Otherwise, concatenates all messages with appropriate role prefixes and ends with “Assistant:” to indicate the assistant’s turn is next.

If tools are provided, they will be formatted as a system message at the start of the prompt. In this case, even a single message will use role prefixes since there will be at least one system message (the tools section).

If Responses API keyword arguments (such as instructions) are provided, they are formatted according to Responses API conventions. These kwargs are only supported when the Responses API format is used.

Handles messages in either OpenAI’s Responses API or Chat Completions API formats.

Args:

  • messages (List[Dict]): A list of dictionaries representing chat messages. Each dictionary should contain either:
    For the Responses API:
    • ‘role’ and ‘content’ for regular messages
    • ‘type’: ‘function_call’ plus function-call details for tool calls
    • ‘type’: ‘function_call_output’ plus output details for tool results
    For the Chat Completions API:
    • ‘role’: ‘user’, ‘assistant’, ‘system’, or ‘tool’, with the appropriate content
    • for assistant messages with tool calls: ‘tool_calls’ containing the function calls
    • for tool messages: ‘tool_call_id’ and ‘content’ for the tool response
  • tools (Optional[List[Dict[str, Any]]]): The list of tools made available for the LLM to use when responding to the messages. This is the same argument as the tools argument for OpenAI’s Responses API or Chat Completions API. This list of tool definitions will be formatted into a system message.
  • use_responses (Optional[bool]): If provided, explicitly specifies whether to use Responses API format. If None, the format is automatically detected using _uses_responses_api. Cannot be set to False when Responses API kwargs are provided.
  • **responses_api_kwargs: Optional keyword arguments for OpenAI’s Responses API. Currently supported:
    • instructions (str): Developer instructions to prepend to the prompt with highest priority.

Returns:

  • str: A formatted string representing the chat history as a single prompt.

Raises:

  • ValueError: If Responses API kwargs are provided with use_responses=False.
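The single-message versus multi-message behavior described above can be illustrated with a minimal, self-contained sketch. This is not the library's actual implementation; the exact role prefixes, spacing, and tool formatting used by form_prompt_string are assumptions here:

```python
# Illustrative sketch of the documented behavior of form_prompt_string.
# Not the cleanlab_tlm implementation; prefixes and spacing are assumptions.

def sketch_form_prompt_string(messages: list[dict]) -> str:
    # A single message with no tools: return its content directly.
    if len(messages) == 1:
        return messages[0]["content"]
    # Otherwise, prefix each message with its role and end with
    # "Assistant:" to indicate the assistant's turn is next.
    parts = [f"{m['role'].capitalize()}: {m['content']}" for m in messages]
    parts.append("Assistant:")
    return "\n\n".join(parts)

single = sketch_form_prompt_string([{"role": "user", "content": "Hi"}])
multi = sketch_form_prompt_string([
    {"role": "system", "content": "Be concise."},
    {"role": "user", "content": "Hi"},
])
```

With one message, `single` is just the content string; `multi` concatenates role-prefixed turns and ends with the "Assistant:" cue.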

function form_response_string_chat_completions

form_response_string_chat_completions(response: 'ChatCompletion') -> str

Form a single string representing the response from the raw response object returned by OpenAI’s Chat Completions API.

This function extracts the assistant’s response message from a ChatCompletion object and formats it into a single string representation using the Chat Completions API format. It handles both text content and tool calls, formatting them consistently with the format used in other functions in this module.

Args:

  • response (ChatCompletion): A ChatCompletion object returned by OpenAI’s chat.completions.create(). The function uses the first choice from the response (response.choices[0].message).

Returns:

  • str: A formatted string containing the response content and any tool calls. Tool calls are formatted as XML tags containing JSON with function name and arguments, consistent with the format used in form_prompt_string.

See also: form_response_string_chat_completions_api


function form_response_string_chat_completions_api

form_response_string_chat_completions_api(
    response: 'Union[dict[str, Any], ChatCompletionMessage]'
) -> str

Form a single string representing the response from an assistant response message dictionary in Chat Completions API format.

Given a ChatCompletion object response from OpenAI’s chat.completions.create(), this function can take either a ChatCompletionMessage object from response.choices[0].message or a dictionary from response.choices[0].message.to_dict().

All inputs are formatted into a string that includes both content and tool calls (if present). Tool calls are formatted using XML tags with JSON content, consistent with the format used in form_prompt_string.

Args:

  • response (Union[dict[str, Any], ChatCompletionMessage]): Either:
    • A ChatCompletionMessage object from the OpenAI response
    • A chat completion response message dictionary, containing:
      • ‘content’ (str): The main response content from the LLM
      • ‘tool_calls’ (List[Dict], optional): List of tool calls made by the LLM, where each tool call contains function name and arguments

Returns:

  • str: A formatted string containing the response content and any tool calls. Tool calls are formatted as XML tags containing JSON with function name and arguments.

Raises:

  • TypeError: If response is not a dictionary or ChatCompletionMessage object.
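The dictionary input path described above can be sketched with plain data. This is an illustrative stand-in, not the library's code; in particular, the `<tool_call>` tag name and exact JSON layout are assumptions, since only the general shape (XML tags wrapping JSON with function name and arguments) is documented:

```python
import json

# Illustrative sketch: render a Chat Completions assistant message dict
# (content plus optional tool_calls) as a single string. Tag name and
# JSON layout are assumptions, not cleanlab_tlm's exact output.

def sketch_form_response_string(message: dict) -> str:
    parts = []
    if message.get("content"):
        parts.append(message["content"])
    for call in message.get("tool_calls") or []:
        fn = call["function"]
        payload = json.dumps(
            {"name": fn["name"], "arguments": json.loads(fn["arguments"])}
        )
        parts.append(f"<tool_call>\n{payload}\n</tool_call>")
    return "\n".join(parts)

msg = {
    "content": "Checking the weather.",
    "tool_calls": [
        {"function": {"name": "get_weather", "arguments": '{"city": "Paris"}'}}
    ],
}
rendered = sketch_form_response_string(msg)
```

A message with no tool calls would render as its content alone; the TypeError branch for non-dict, non-ChatCompletionMessage inputs is omitted from this sketch.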

function form_response_string_responses_api

form_response_string_responses_api(response: 'Response') -> str

Format a Response object from OpenAI’s Responses API into a single string.

Given a Response object from the Responses API, this function formats the response into a string that includes both content and tool calls (if present). Tool calls are formatted using XML tags with JSON content, consistent with the format used in form_prompt_string.

Args:

  • response (Response): A Response object from the OpenAI Responses API containing output elements with message content and/or function calls

Returns:

  • str: A formatted string containing the response content and any tool calls. Tool calls are formatted as XML tags containing JSON with function name and arguments.

Raises:

  • ImportError: If openai is not installed.
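Walking a Response's output elements, as described above, can be sketched with plain dicts standing in for the openai objects. This is illustrative only: the item and content-part type names ("message", "output_text", "function_call") follow the Responses API's documented shapes, but the `<tool_call>` tag and JSON layout are assumptions:

```python
import json

# Illustrative sketch: iterate Responses API output items and collect
# text content plus function calls into one string. Plain dicts stand in
# for openai's Response output objects; tag/JSON formatting is assumed.

def sketch_form_response_string_responses(output: list[dict]) -> str:
    parts = []
    for item in output:
        if item["type"] == "message":
            # Messages carry a list of content parts; keep the text ones.
            for part in item["content"]:
                if part["type"] == "output_text":
                    parts.append(part["text"])
        elif item["type"] == "function_call":
            payload = json.dumps(
                {"name": item["name"], "arguments": json.loads(item["arguments"])}
            )
            parts.append(f"<tool_call>\n{payload}\n</tool_call>")
    return "\n".join(parts)

sample_output = [
    {"type": "message",
     "content": [{"type": "output_text", "text": "Sunny today."}]},
    {"type": "function_call",
     "name": "get_weather", "arguments": '{"city": "Paris"}'},
]
rendered = sketch_form_response_string_responses(sample_output)
```

In real usage you would pass `response.output` from an actual openai Response object; the ImportError noted above arises only in the library, which requires openai to parse that object.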