module cleanlab_tlm.utils.tlm_lite

TLM Lite is a version of the Trustworthy Language Model (TLM) that enables the use of different LLMs for generating the response and for scoring its trustworthiness.


class TLMLite

A version of the Trustworthy Language Model (TLM) that enables the use of different LLMs for generating the response and for scoring its trustworthiness.

Use TLMLite when you want a stronger model to generate responses, but cheaper and faster trustworthiness score evaluations from a smaller model.

Possible arguments for TLMLite() are documented below. Most of the input arguments for this class are similar to those for TLM; the major differences are described below.

Args:

  • response_model (str): LLM used to produce the response to the given prompt. Do not specify the model used for scoring trustworthiness here; instead, specify that model in the options argument. The list of supported model strings can be found in the TLMOptions documentation. By default, this is “gpt-4o”.

  • quality_preset (TLMQualityPreset, default = “medium”): preset configuration to control the quality of TLM trustworthiness scores vs. runtimes/costs. This preset only applies to the model computing the trustworthiness score. Supported options are only “medium” or “low”, because TLMLite is not intended to improve response accuracy (use the regular TLM for that).

  • options (TLMOptions, optional): a typed dict of advanced configuration options. Most of these options only apply to the model scoring trustworthiness, except for “max_tokens”, which applies to the response model as well. Specify which model to use for scoring trustworthiness in these options. For more details about the options, see the documentation for TLMOptions.

  • timeout (float, optional): timeout (in seconds) to apply to each TLM prompt.

  • verbose (bool, optional): whether to print outputs during execution, i.e., whether to show a progress bar when TLM is prompted with batches of data.
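
For illustration, here is a minimal usage sketch. The import path, model names, and the use of the “model” option key to select the scoring model are assumptions based on this page and the TLMOptions documentation; running it requires a Cleanlab API key.

```python
from cleanlab_tlm.utils.tlm_lite import TLMLite

# Stronger model generates responses; a cheaper model (set via options["model"],
# an assumption here) scores their trustworthiness.
tlm_lite = TLMLite(
    response_model="gpt-4o",        # produces the responses
    quality_preset="low",           # faster/cheaper trustworthiness scoring
    options={"model": "gpt-4o-mini", "max_tokens": 512},
)

result = tlm_lite.prompt("What year was the transistor invented?")
# TLMResponse is dict-like, with "response" and "trustworthiness_score" keys.
print(result["response"], result["trustworthiness_score"])
```

Passing the scoring model through options (rather than as a constructor argument) keeps TLMLite's signature aligned with TLM, where options already controls scoring behavior.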


method get_model_names

get_model_names() → dict[str, str]

Returns the underlying LLMs used to generate responses and score their trustworthiness.


method prompt

prompt(
prompt: Union[str, Sequence[str]]
) → Union[TLMResponse, list[TLMResponse]]

Similar to TLM.prompt(); see that method's documentation for expected input arguments and outputs.


method try_prompt

try_prompt(prompt: Sequence[str]) → list[TLMResponse]

Similar to TLM.try_prompt(); see that method's documentation for expected input arguments and outputs.
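
A batch-scoring sketch is below. The import path, model names, and the 0.5 review threshold are assumptions for illustration; running it requires a Cleanlab API key.

```python
from cleanlab_tlm.utils.tlm_lite import TLMLite

tlm_lite = TLMLite(response_model="gpt-4o", quality_preset="low")

prompts = [
    "What is the capital of France?",
    "Summarize the theory of relativity in one sentence.",
]
results = tlm_lite.try_prompt(prompts)

for prompt_text, result in zip(prompts, results):
    # A low trustworthiness_score flags responses that may warrant human
    # review; 0.5 is an arbitrary threshold chosen for this example.
    if result["trustworthiness_score"] < 0.5:
        print("Low-confidence answer for:", prompt_text)
```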