TLM Lite

Warning: The utility methods in utils are not guaranteed to be stable between different versions of the cleanlab-studio API.

module cleanlab_studio.utils.tlm_lite

TLM Lite is a version of the Trustworthy Language Model (TLM) that enables the use of different LLMs for generating the response and for scoring its trustworthiness.

This module is not meant to be imported and used directly. Instead, use Studio.TLMLite() to instantiate a TLMLite object, and then you can use the methods like prompt() documented on this page.


class TLMLite

A version of the Trustworthy Language Model (TLM) that enables the use of different LLMs for generating the response and for scoring its trustworthiness.

Use TLMLite when you want a stronger model to generate responses while getting cheaper and quicker trustworthiness score evaluations from a smaller model.

The TLMLite object is not meant to be constructed directly. Instead, use the Studio.TLMLite() method to configure and instantiate a TLMLite object. After you’ve instantiated the TLMLite object using Studio.TLMLite(), you can use the instance methods documented on this page. Possible arguments for Studio.TLMLite() are documented below.

Most of the input arguments for this class are similar to those for TLM; the major differences are described below. An instantiation sketch follows the argument list.

Args:

  • response_model (str): LLM used to produce the response to the given prompt. Do not specify the model used for scoring trustworthiness here; instead, specify that model in the options argument. The list of supported model strings can be found in the TLMOptions documentation. By default, the response model is “gpt-4o”.
  • quality_preset (TLMQualityPreset, default = “medium”): preset configuration to control the quality of TLM trustworthiness scores vs. runtimes/costs. This preset only applies to the model computing the trustworthiness score. Supported options are only “medium” or “low”, because TLMLite is not intended to improve response accuracy (use the regular TLM for that).
  • options (TLMOptions, optional): a typed dict of advanced configuration options. Most of these options only apply to the model scoring trustworthiness, except for “max_tokens”, which applies to the response model as well. Specify which model to use for scoring trustworthiness in these options. For more details about the options, see the documentation for TLMOptions.
  • timeout (float, optional): timeout (in seconds) to apply to each TLM prompt.
  • verbose (bool, optional): whether to print outputs during execution, i.e., whether to show a progress bar when TLM is prompted with batches of data.
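
For illustration, here is a minimal instantiation sketch. The API key placeholder and the specific model strings (“gpt-4o” for responses, “gpt-4o-mini” for scoring) are assumptions for this example; consult the TLMOptions documentation for the supported models.

```python
from cleanlab_studio import Studio

studio = Studio("<YOUR_API_KEY>")  # assumption: substitute your actual API key

# A larger model generates responses; a smaller model (specified via options)
# scores their trustworthiness more cheaply and quickly.
tlm_lite = studio.TLMLite(
    response_model="gpt-4o",           # model producing responses
    quality_preset="low",              # applies only to trustworthiness scoring
    options={"model": "gpt-4o-mini"},  # scoring model, per TLMOptions
)
```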

method get_model_names

get_model_names() → Dict[str, str]

Returns the underlying LLMs used to generate responses and score their trustworthiness.
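A short usage sketch; the exact keys of the returned dictionary are an assumption here, not confirmed by this page:

```python
models = tlm_lite.get_model_names()
print(models)
# Example output (keys assumed for illustration):
# {"response_model": "gpt-4o", "score_model": "gpt-4o-mini"}
```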


method prompt

prompt(
    prompt: Union[str, Sequence[str]]
) → Union[TLMResponse, List[TLMResponse]]

Similar to TLM.prompt(); see the documentation there for expected input arguments and outputs.
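
A sketch of single and batched calls, assuming (as with TLM.prompt()) that each TLMResponse contains a “response” and a “trustworthiness_score”:

```python
# Single prompt: returns one TLMResponse.
result = tlm_lite.prompt("What is the capital of France?")
print(result["response"], result["trustworthiness_score"])

# Sequence of prompts: returns a list of TLMResponse objects, one per prompt.
results = tlm_lite.prompt(["2 + 2 = ?", "Who wrote Hamlet?"])
for r in results:
    print(r["response"], r["trustworthiness_score"])
```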


method try_prompt

try_prompt(prompt: Sequence[str]) → List[TLMResponse]

Similar to TLM.try_prompt(); see the documentation there for expected input arguments and outputs.
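
A batched sketch, assuming (as documented for TLM.try_prompt()) that failed queries are returned as None rather than raising and aborting the whole batch:

```python
prompts = ["Summarize the plot of Moby-Dick.", "What is 17 * 23?"]
results = tlm_lite.try_prompt(prompts)

for prompt, result in zip(prompts, results):
    if result is None:  # assumption: failures (errors/timeouts) yield None
        print(f"No result for: {prompt}")
    else:
        print(result["response"], result["trustworthiness_score"])
```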