
Datasets

A Dataset corresponds to your original text/image/tabular data.

Cleanlab Studio can analyze and train/deploy models on diverse types of datasets. This page outlines how to format your data and the available options.

[Screenshot: Upload Dataset interface]

Modalities

Cleanlab Studio supports datasets of the following modalities:

  • Tabular (structured data stored in tables with numeric/categorical/string columns, e.g. financial reports, sensor measurements)
  • Text (e.g. customer service requests, reports, LLM outputs)
  • Image (e.g. product images, photographs, satellite imagery)

Text/Tabular

Text/tabular datasets are structured datasets composed of rows and columns, with each row representing an individual data point. Text datasets contain a single text sample for each data point, while tabular datasets can have many predictive columns present for each data point. Text/tabular datasets can be uploaded in multiple formats, including CSV, JSON, Excel, and DataFrame.

For more information on how to use text/tabular datasets, see our tabular and text quickstart tutorials.

Document Datasets

Cleanlab Studio also supports uploading a text dataset as a collection of documents. Document datasets can be uploaded in multiple formats. If your document files are stored locally, you can use one of the ZIP upload formats. If your document files are hosted (on an Internet-accessible web server or storage platform, like S3 or Google Drive), you can upload your dataset by including links to your external documents via an external document column in a CSV, JSON, Excel, or DataFrame upload. We’ll automatically extract the text from your documents into a column named text when you upload your dataset. You can then proceed as if you had uploaded a normal text dataset.

We currently support including the following document types in your dataset: csv, doc, docx, pdf, ppt, pptx, xls, xlsx.

Important: When uploading a document dataset, please ensure you do not have a column named text in either your metadata file (for Metadata ZIP upload) or your dataset file (for externally hosted document upload).

Image

Image datasets are datasets composed of rows of images with attached metadata (including but not limited to labels). Image datasets can be uploaded in multiple formats. If your image files are stored locally, you can use one of the ZIP upload formats. If your image files are hosted (on an Internet-accessible web server or storage platform, like S3 or Google Drive), you can upload your dataset by including links to your external images via an external image column in a CSV, JSON, Excel, or DataFrame upload.

For more information on how to use image datasets, see our image quickstart tutorial.

Machine Learning Task

Although you do not need to select a machine learning task type when uploading a dataset to Cleanlab Studio, different tasks require different formatting. To make sure you format your dataset correctly, we recommend that you first investigate the different task types and format your dataset to match your desired task ahead of time. For more information on ML task types, see our projects guide.

Multi-class

In a multi-class classification task, the objective is to categorize data points into one of K distinct classes, e.g. classifying animals as one of “cat”, “dog”, “bird”.

To format your dataset for multi-class classification, ensure your dataset includes a column containing the class each row belongs to and follow the appropriate structure for your modality and file format.

Unlabeled data

To include unlabeled rows (i.e. rows without annotations) in your multi-class dataset, simply leave their values in the label column empty/blank.

Multi-label

In a multi-label classification task, each data point can belong to multiple (or no) classes, e.g. assigning multiple categories to news articles.

For multi-label classification, your dataset’s label column should be formatted as a comma-separated string of classes, e.g. “politics,economics” (there should be no whitespace between labels). Note: for image datasets, you must use the Metadata or External Media upload formats.
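If your annotations are stored as lists of class names, they can be joined into this comma-separated form before upload. A minimal sketch (the row contents here are hypothetical):

```python
# Hypothetical rows whose annotations are stored as Python lists.
rows = [
    {"id": "a1", "labels": ["politics", "economics"]},
    {"id": "b2", "labels": ["sports"]},
]

for row in rows:
    # Join with "," and strip whitespace so the result has no spaces
    # between labels, e.g. "politics,economics".
    row["label"] = ",".join(label.strip() for label in row["labels"])
```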

Unlabeled data vs. Empty labels

For multi-label datasets, there’s an important distinction between unlabeled rows and rows with empty labels. Unlabeled rows are data points where the label(s) are unknown. You can use Cleanlab Studio to determine whether these rows belong to any of your dataset’s classes. Rows with empty labels are data points that have no labels – they have been annotated to indicate that they don’t belong to any of your dataset’s classes.

When uploading a CSV or Excel dataset, any empty values in your label column will be interpreted as unlabeled rows rather than rows with empty labels. To distinguish between the two in your dataset, you must use JSON file format or DataFrame format.

You can represent these two types of data points in the JSON file format using the following values:

  • empty labels: "" (empty string)
  • unlabeled: null

You can represent these two types of data points in a DataFrame upload using the following values:

  • empty labels: "" (empty string)
  • unlabeled: None, pd.NA

Note: If you only have empty labels (but no unlabeled data), you still need to provide your dataset in JSON or DataFrame format with the labels set to "". Empty string labels in CSV or Excel format will be interpreted as unlabeled.

Regression

In a regression task, the objective is to label each data point with a continuous numerical value, e.g. price, income, or age, rather than a discrete category.

To format your dataset for regression, ensure your dataset includes a label column containing the continuous numeric value you’d like to predict and follow the appropriate structure for your modality and file format. Note: you’ll need to set the column type for your label column to float before creating a regression project. See the Schema Updates section for information on how to do this.
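If you are preparing the data programmatically, one way to satisfy this is to make the label values numeric before upload. A hedged sketch with pandas (the column names and values are hypothetical):

```python
import pandas as pd

# Hypothetical regression dataset where the "price" label column was read as strings.
df = pd.DataFrame({"sqft": [700, 950, 1200], "price": ["150000", "210000", ""]})

# Cast labels to float; empty strings (unlabeled rows) become NaN.
df["price"] = pd.to_numeric(df["price"], errors="coerce")
```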

Unlabeled data

To include unlabeled rows (i.e. rows without annotations) in your regression dataset, simply leave their values in the label column empty/blank.

See our regression tutorial for an example of using Cleanlab Studio for a regression task.

Unsupervised

For an unsupervised task, there is no target variable to predict for data points. This task type might be appropriate for your data if there is no clear “label column”. Follow the appropriate structure for your modality and file format.

See our unsupervised tutorial for an example of using Cleanlab Studio for an unsupervised task.

File Formats

Cleanlab Studio natively supports CSV, JSON, Excel, and ZIP file formats for uploading datasets. In addition, the Python API supports uploading data through Pandas, PySpark, and Snowpark DataFrames. For other common formats, see our tutorials for converting common text and image dataset formats into one of our natively supported formats.

CSV

CSV is a standard file format for storing text/tabular data, where each row represents a data record and consists of one or more fields (columns) separated by a delimiter.

Make sure your CSV dataset follows these formatting requirements:

  • Each row is represented by a single line of text with fields separated by a , delimiter.
  • String values containing the delimiter character (,) or special characters (e.g. newline characters) should be enclosed within double quotes (" ").
  • The first row should be a header containing the names of all columns in the dataset.
  • Empty fields are represented by consecutive delimiter characters with no value in between or empty double quotes ("").
  • Each row should contain the same number of columns, with missing values represented as empty fields.
  • Each row should be separated by a newline.
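Python's standard csv module produces output that meets these requirements, including the quoting rules for fields containing delimiters or quotes. A small sketch:

```python
import csv
import io

# Rows for a small example dataset; the last row has an empty review field.
rows = [
    ["review_id", "review", "label"],
    ["f3ac", "The sales rep was fantastic!", "positive"],
    ["439a", 'They kept using the word "obviously," twice.', "positive"],
    ["a53f", "", "negative"],
]

# csv.writer automatically quotes fields containing the delimiter and
# doubles any embedded quote characters.
buf = io.StringIO()
csv.writer(buf).writerows(rows)
print(buf.getvalue())
```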

Unlabeled data

You can indicate if a data point is not yet annotated/labeled by leaving its value in the label column empty. Note: for multi-label tasks, if you need data points with empty labels, you must use the JSON file format.

Example CSV text dataset
review_id,review,label
f3ac,The sales rep was fantastic!,positive
d7c4,He was a bit wishy-washy.,negative
439a,"They kept using the word ""obviously,"" which was off-putting.",positive
a53f,,negative

JSON

JSON is a standard file format for storing and exchanging structured data organized using key-value pairs. In a JSON dataset, each row is represented by an object where keys correspond to column names.

Your JSON dataset should follow these formatting requirements:

  • Each row is represented as a JSON object consisting of key-value pairs (separated by colons :) enclosed in curly braces {}. Each key is a string (enclosed in double quotes " ") that uniquely identifies the value associated with it.
  • The rows of your dataset are enclosed in a JSON array (square brackets []) and separated by commas ,.
  • Valid types for values in your dataset include strings, numbers, booleans, and null values.
  • Your dataset cannot contain nested array or object values. Ensure these are flattened before uploading to Cleanlab Studio.
  • Every key is present in every row of your dataset.

Unlabeled data

If you’re formatting a dataset for a multi-class or regression task, you can indicate if a data point is not yet annotated/labeled using a "" or null value.

For multi-label tasks, you must use null values to indicate data points that are not yet annotated/labeled. This allows us to distinguish between data points where no class applies (indicated by "") and data points that are not yet annotated (see here for more information on the difference between these).
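One way to produce such a file is with Python's json module, where None serializes to JSON null. A sketch with hypothetical rows:

```python
import json

# "" means "annotated, no class applies"; None (JSON null) means "not yet annotated".
rows = [
    {"review_id": "f3ac", "review": "No complaints.", "label": ""},
    {"review_id": "a53f", "review": "Thanks again!", "label": None},
]

payload = json.dumps(rows, indent=2)
```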

Example JSON multi-label text dataset

In this example dataset, each data point is a text review. The first review f3ac has an empty label (it has been annotated as belonging to none of the classes), while the last review a53f is unlabeled (it has not yet been annotated). Note the difference between their label values.

[
  {
    "review_id": "f3ac",
    "review": "The message was sent yesterday.",
    "label": ""
  },
  {
    "review_id": "d7c4",
    "review": "He was a bit rude to the staff.",
    "label": "negative,rude,mean"
  },
  {
    "review_id": "439a",
    "review": "They provided a wonderful experience that made us very happy.",
    "label": "positive,happy,joy"
  },
  {
    "review_id": "a53f",
    "review": "Please let her know I appreciated the hospitality.",
    "label": null
  }
]

Excel

Excel is a popular file format for spreadsheets. Cleanlab Studio supports both .xls and .xlsx files.

Your Excel dataset should follow these formatting requirements:

  • Only the first sheet of your spreadsheet will be imported as your dataset.
  • The first row of your sheet should contain names for all of the columns.

Unlabeled data

You can indicate if a data point is not yet annotated/labeled by leaving its value in the label column empty. Note: for multi-label tasks, if you need data points with empty labels, you must use the JSON file format.

Example Excel text dataset
   review_id  review                                             label
0  f3ac       The sales rep was fantastic!                       positive
1  d7c4       He was a bit wishy-washy.                          negative
2  439a       They kept using the word “obviously,” which wa...  positive
3  a53f                                                          negative

DataFrame

Cleanlab Studio’s Python API supports a number of DataFrame formats, including Pandas, PySpark DataFrames, and Snowpark DataFrames. You can upload directly from a DataFrame in a Python script or Jupyter notebook.

See our Databricks integration and Snowflake integration for more details on uploading from PySpark and Snowpark DataFrames.

Unlabeled data

If you’re formatting a dataset for a multi-class or regression task, you can indicate if a data point is not yet annotated/labeled using a "", None, or pd.NA value (or the equivalent null value for PySpark/Snowpark).

For multi-label tasks, you must use None or pd.NA values to indicate data points that are not yet annotated/labeled. This allows us to distinguish between data points where no class applies (indicated by "") and data points that are not yet annotated (see here for more information on the difference between these).
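The distinction can be expressed directly in a pandas DataFrame. A sketch with hypothetical rows:

```python
import pandas as pd

# "" marks a row annotated as belonging to no class; None marks a row
# that has not been annotated yet.
df = pd.DataFrame({
    "review_id": ["f3ac", "d7c4", "a53f"],
    "label": ["", "negative,rude", None],
})

empty_label_rows = df[df["label"] == ""]   # annotated: no class applies
unlabeled_rows = df[df["label"].isna()]    # not yet annotated
```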

Example DataFrame text dataset
   review_id  review                                             label
0  f3ac       The sales rep was fantastic!                       positive
1  d7c4       He was a bit wishy-washy.                          negative
2  439a       They kept using the word “obviously,” which wa...  positive
3  a53f                                                          negative

ZIP

ZIP files are commonly used for storing and transferring multiple files/directories as a single compressed file. You can upload a dataset with image OR document files to Cleanlab Studio using a ZIP file. Supported structures for organizing your ZIP file are outlined below.

Simple ZIP

Note: Simple ZIP format is only supported for multi-class datasets.

To format your dataset as a simple ZIP upload:

  1. Create a top-level folder for your dataset.
  2. Inside the top-level folder, create a folder for each class in your dataset. The name of each folder will be used as the class label for images/documents within the folder.
  3. Inside each class folder, add image/document files belonging to the class.
  4. ZIP the top-level folder.
[Screenshot: Simple ZIP folder structure]
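The steps above can be sketched with Python's zipfile module. This builds the archive in memory for a hypothetical two-class image dataset; the file contents here are placeholders, not real image data:

```python
import io
import zipfile

# Hypothetical dataset: "animals" is the top-level folder, and each class
# folder name ("cat" / "dog") becomes the label for the files inside it.
files = {
    "animals/cat/cat_01.png": b"<image bytes>",
    "animals/cat/cat_02.png": b"<image bytes>",
    "animals/dog/dog_01.png": b"<image bytes>",
}

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    for path, data in files.items():
        zf.writestr(path, data)
```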

Metadata ZIP

If you are uploading an image/document dataset for a multi-label, regression, or unsupervised task or would like to include metadata associated with rows in your dataset, you can include either a metadata.csv or metadata.json file in your ZIP dataset.

To format your dataset as a ZIP with metadata upload:

  1. Create a top-level folder for your dataset.
  2. Add image/document files for your dataset to the folder. These files can optionally be organized within subfolders.
  3. Add a metadata.csv or metadata.json file inside your top-level folder.
  4. ZIP the top-level folder.

Your metadata file (metadata.csv or metadata.json) should follow these formatting requirements:

  • The file should contain a column named image (if uploading an image dataset) or document (if uploading a document dataset). The values in this column should correspond to the relative paths to images/documents from your metadata.csv or metadata.json file.
  • If uploading a dataset for a multi-class, multi-label, or regression task, the file should include a column for the labels corresponding to each image/document.
  • The file can optionally include other columns with additional metadata that you would like to view or filter by in Cleanlab Studio (these columns will not be used for project training).
  • See the CSV and JSON file format sections for more details specific to each file type.
[Screenshot: Metadata ZIP folder structure]
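For instance, a metadata.csv for a multi-label image dataset might look like the following (the file paths, labels, and the extra source column are hypothetical; note the quoting around multi-label values, since they contain the CSV delimiter):

```csv
image,label,source
photos/batch1/cat_01.png,"cat,pet",web
photos/batch1/dog_02.png,"dog,pet",web
photos/batch2/bird_09.png,bird,field
```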

Schemas

Schemas define the type of each column in your dataset. This allows Cleanlab Studio to understand the structure of your data and provide the best possible analysis and model training experience.

Cleanlab Studio supports the following column types:

Column Type        Sub-types               Override Name
Untyped            -                       untyped
Integer            -                       integer
Float              -                       float
Boolean            -                       boolean
String             -                       string
Image              -                       -
External Image     -                       image_external
Document           -                       -
External Document  -                       document_external
Date               Seconds (epoch)         date_epoch_s
                   Milliseconds (epoch)    date_epoch_ms
                   Microseconds (epoch)    date_epoch_us
                   Nanoseconds (epoch)     date_epoch_ns
                   Parse                   date_parse
Datetime           Seconds (epoch)         datetime_epoch_s
                   Milliseconds (epoch)    datetime_epoch_ms
                   Microseconds (epoch)    datetime_epoch_us
                   Nanoseconds (epoch)     datetime_epoch_ns
                   Parse                   datetime_parse
Time               Seconds (epoch)         time_epoch_s
                   Milliseconds (epoch)    time_epoch_ms
                   Microseconds (epoch)    time_epoch_us
                   Nanoseconds (epoch)     time_epoch_ns
                   Parse                   time_parse

By default, Cleanlab Studio sets all columns to Untyped, which defers data type inference to Cleanlab’s internal AutoML system. If you want to enforce the type of certain columns, you can override this:

  • In the Web Application, updating the dataset schema after uploading your dataset
  • In the Python API, providing the schema_overrides argument when uploading your dataset

See the Schema Updates section for more information.

Column Types

Untyped

This is the default type for columns in Cleanlab Studio. It is used when the type of a column is not specified. Columns with type Untyped are interpreted as text.

Integer

Columns with type Integer are interpreted as 64-bit integer numbers.

Float

Columns with type Float are interpreted as 64-bit floating point numbers.

Boolean

Columns with type Boolean are interpreted as boolean values.

The following values are interpreted as True: true, yes, on, 1. The following values are interpreted as False: false, no, off, 0.

Unique prefixes of these strings are also accepted, for example t or n. Leading or trailing whitespace is ignored, and case does not matter.
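The parsing rule described above can be sketched as follows. This is an illustrative reconstruction of the stated behavior, not Cleanlab's actual implementation:

```python
TRUE_WORDS = ["true", "yes", "on", "1"]
FALSE_WORDS = ["false", "no", "off", "0"]

def parse_boolean(value: str) -> bool:
    """Case-insensitive, whitespace-trimmed; unique prefixes accepted."""
    v = value.strip().lower()
    true_matches = [w for w in TRUE_WORDS if w.startswith(v)]
    false_matches = [w for w in FALSE_WORDS if w.startswith(v)]
    if v and true_matches and not false_matches:
        return True
    if v and false_matches and not true_matches:
        return False
    # e.g. "o" is a prefix of both "on" and "off", so it is ambiguous.
    raise ValueError(f"ambiguous or invalid boolean: {value!r}")
```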

String

Columns with type String are interpreted as text.

Image

Columns with type Image are interpreted as image references. This type is used for ZIP image datasets, and cannot be set manually. The column type for a column with type Image also cannot be updated.

External Image

Columns with type External Image are interpreted as URLs to external images. This type is used for external media image datasets.

Example external media image dataset
   img                                                 label
0  https://s.cleanlab.ai/DCA_Competition_2023_Dat...   c
1  https://s.cleanlab.ai/DCA_Competition_2023_Dat...   h
2  https://s.cleanlab.ai/DCA_Competition_2023_Dat...   y
3  https://s.cleanlab.ai/DCA_Competition_2023_Dat...   p
4  https://s.cleanlab.ai/DCA_Competition_2023_Dat...   j

Document

Columns with type Document are interpreted as document references. This type is used for ZIP document datasets, and cannot be set manually. The column type for a column with type Document also cannot be updated.

External Document

Columns with type External Document are interpreted as URLs to external documents. This type is used for external media document datasets.

Date

Columns with type Date are interpreted as dates. The column sub-types specify how the data is converted into a date. For example, date_epoch_s specifies that the column contains Unix timestamps in seconds. date_parse specifies that the column contains dates in a custom format, which is parsed from the following formats:

Format     Example            Description
%Y-%m-%d   1999-02-15         ISO 8601 format
%Y/%m/%d   1999/02/15
%B %d, %Y  February 15, 1999
%Y-%b-%d   1999-Feb-15
%d-%m-%Y   15-02-1999
%d/%m/%Y   15/02/1999
%d-%b-%Y   15-Feb-1999
%b-%d-%Y   Feb-15-1999
%d-%b-%y   15-Feb-99
%b-%d-%y   Feb-15-99
%Y%m%d     19990215
%y%m%d     990215
%Y.%j      1999.46            year and day of year
%m-%d-%Y   02-15-1999
%m/%d/%Y   02/15/1999

For more information on format codes, see the Python documentation.
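These are standard strptime format codes, so a few of the listed formats can be checked directly in Python:

```python
from datetime import datetime

# A few of the date formats from the table above, parsed with strptime.
examples = [
    ("%Y-%m-%d", "1999-02-15"),
    ("%B %d, %Y", "February 15, 1999"),
    ("%Y.%j", "1999.46"),  # year and day of year (day 46 = Feb 15)
]

parsed = [datetime.strptime(text, fmt).date() for fmt, text in examples]
```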

Time

Columns with type Time are interpreted as times. The column sub-types specify how the data is converted to a time. For example, time_epoch_us specifies that the column contains Unix timestamps in microseconds. time_parse specifies that the column contains times in a custom format, which is parsed from the following formats:

Format           Example               Description
%H:%M:%S.%f      04:05:06.789          ISO 8601 format
%H:%M:%S         04:05:06              ISO 8601 format
%H:%M            04:05                 ISO 8601 format
%H%M%S           040506
%H%M             0405
%H%M%S.%f        040506.789
%I:%M %p         04:05 AM
%I:%M:%S %p      04:05:06 PM
%H:%M:%S.%f%z    04:05:06.789-08:00    ISO 8601 format with UTC offset for PST timezone
%H:%M:%S%z       04:05:06-08:00
%H:%M%z          04:05-08:00
%H%M%S.%f%z      040506.789-08:00
%H:%M:%S.%f %Z   04:05:06.789 PST      Note: most common timezone abbreviations are supported, but not all. See full list in section below.
%H:%M:%S %Z      04:05:06 PST
%H:%M %Z         04:05 PST
%H%M %Z          0405 PST
%H%M%S.%f %Z     040506.789 PST
%I:%M %p %Z      04:05 AM PST
%I:%M:%S %p %Z   04:05:06 PM PST

For more information on format codes, see the Python documentation.

Supported Time Zone Abbreviations
Time Zone  UTC Offset  Description
NZDT       +13:00      New Zealand Daylight Time
IDLE       +12:00      International Date Line, East
NZST       +12:00      New Zealand Standard Time
NZT        +12:00      New Zealand Time
AESST      +11:00      Australia Eastern Summer Standard Time
ACSST      +10:30      Central Australia Summer Standard Time
CADT       +10:30      Central Australia Daylight Savings Time
SADT       +10:30      South Australian Daylight Time
AEST       +10:00      Australia Eastern Standard Time
EAST       +10:00      East Australian Standard Time
GST        +10:00      Guam Standard Time
LIGT       +10:00      Melbourne, Australia
SAST       +09:30      South Australia Standard Time
CAST       +09:30      Central Australia Standard Time
AWSST      +09:00      Australia Western Summer Standard Time
JST        +09:00      Japan Standard Time
KST        +09:00      Korea Standard Time
MHT        +09:00      Kwajalein Time
WDT        +09:00      West Australian Daylight Time
MT         +08:30      Moluccas Time
AWST       +08:00      Australia Western Standard Time
CCT        +08:00      China Coastal Time
WADT       +08:00      West Australian Daylight Time
WST        +08:00      West Australian Standard Time
JT         +07:30      Java Time
ALMST      +07:00      Almaty Summer Time
WAST       +07:00      West Australian Standard Time
CXT        +07:00      Christmas (Island) Time
ALMT       +06:00      Almaty Time
MAWT       +06:00      Mawson (Antarctica) Time
IOT        +05:00      Indian Chagos Time
MVT        +05:00      Maldives Island Time
TFT        +05:00      Kerguelen Time
AFT        +04:30      Afghanistan Time
EAST       +04:00      Antananarivo Savings Time
MUT        +04:00      Mauritius Island Time
RET        +04:00      Reunion Island Time
SCT        +04:00      Mahe Island Time
IT         +03:30      Iran Time
EAT        +03:00      Antananarivo, Comoro Time
BT         +03:00      Baghdad Time
EETDST     +03:00      Eastern Europe Daylight Savings Time
HMT        +03:00      Hellas Mediterranean Time (?)
BDST       +02:00      British Double Standard Time
CEST       +02:00      Central European Savings Time
CETDST     +02:00      Central European Daylight Savings Time
EET        +02:00      Eastern Europe, USSR Zone 1
FWT        +02:00      French Winter Time
IST        +02:00      Israel Standard Time
MEST       +02:00      Middle Europe Summer Time
METDST     +02:00      Middle Europe Daylight Time
SST        +02:00      Swedish Summer Time
BST        +01:00      British Summer Time
CET        +01:00      Central European Time
DNT        +01:00      Dansk Normal Tid
FST        +01:00      French Summer Time
MET        +01:00      Middle Europe Time
MEWT       +01:00      Middle Europe Winter Time
MEZ        +01:00      Middle Europe Zone
NOR        +01:00      Norway Standard Time
SET        +01:00      Seychelles Time
SWT        +01:00      Swedish Winter Time
WETDST     +01:00      Western Europe Daylight Savings Time
GMT        +00:00      Greenwich Mean Time
UT         +00:00      Universal Time
UTC        +00:00      Universal Time, Coordinated
Z          +00:00      Same as UTC
ZULU       +00:00      Same as UTC
WET        +00:00      Western Europe
WAT        -01:00      West Africa Time
NDT        -02:30      Newfoundland Daylight Time
ADT        -03:00      Atlantic Daylight Time
AWT        -03:00      (unknown)
NFT        -03:30      Newfoundland Standard Time
NST        -03:30      Newfoundland Standard Time
AST        -04:00      Atlantic Standard Time (Canada)
ACST       -04:00      Atlantic/Porto Acre Summer Time
ACT        -05:00      Atlantic/Porto Acre Standard Time
EDT        -04:00      Eastern Daylight Time
CDT        -05:00      Central Daylight Time
EST        -05:00      Eastern Standard Time
CST        -06:00      Central Standard Time
MDT        -06:00      Mountain Daylight Time
MST        -07:00      Mountain Standard Time
PDT        -07:00      Pacific Daylight Time
AKDT       -08:00      Alaska Daylight Time
PST        -08:00      Pacific Standard Time
YDT        -08:00      Yukon Daylight Time
AKST       -09:00      Alaska Standard Time
HDT        -09:00      Hawaii/Alaska Daylight Time
YST        -09:00      Yukon Standard Time
AHST       -10:00      Alaska-Hawaii Standard Time
HST        -10:00      Hawaii Standard Time
CAT        -10:00      Central Alaska Time
NT         -11:00      Nome Time
IDLW       -12:00      International Date Line, West

Datetime

Columns with type Datetime are interpreted as datetimes. The column sub-types specify how the data is converted to a datetime. For example, datetime_epoch_ms specifies that the column contains Unix timestamps in milliseconds. datetime_parse specifies that the column contains datetimes in a custom format. Some examples of supported formats are listed below (we support any combination of the date and time formats):

Format                  Example
%Y-%m-%dT%H:%M:%S.%f    1999-02-15T04:05:06.789
%Y/%m/%d %H:%M          1999/02/15 04:05
%B %d, %Y %I:%M %p %Z   February 15, 1999 04:05 AM PST
%Y%m%dT%H%M%S           19990215T040506

For more information on format codes, see the Python documentation.
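The two sub-type families can be illustrated in Python: an epoch sub-type interprets raw timestamps, while a parse sub-type applies one of the combined format strings (the epoch value below is hypothetical):

```python
from datetime import datetime, timezone

# A datetime_epoch_ms-style value: a Unix timestamp in milliseconds.
epoch_ms = 919051506789
# Integer-divide to whole seconds for this sketch (drops the millisecond part).
dt_from_epoch = datetime.fromtimestamp(epoch_ms // 1000, tz=timezone.utc)

# A datetime_parse-style value combining a date format and a time format.
dt_parsed = datetime.strptime("1999-02-15T04:05:06.789", "%Y-%m-%dT%H:%M:%S.%f")
```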

Schema Updates

If you do nothing, Cleanlab Studio will automatically infer the column types in your dataset that lead to the best results. However, you may sometimes wish to enforce certain column types (e.g. to specify that a column of integers actually represents discrete categories rather than numeric data). There are two ways to enforce a particular schema (column types) for your dataset:

  1. Using the Web Application after uploading your dataset.
    [Screenshot: Schema Update user interface]
  2. Providing a schema override when programmatically uploading your dataset via Python API. You can provide partial schema overrides by specifying column types for a subset of columns. You do not need to provide overrides for all columns in your dataset.
[
  {
    "name": "<name of column to override>",
    "column_type": "<column type to update to>"
  }
]
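As a sketch, here is a partial override for a hypothetical dataset. The column names are made up for illustration; the column_type strings come from the Override Name column in the table above:

```python
# Hypothetical dataset columns; only two of them get explicit type overrides.
dataset_columns = {"id", "price", "signup_date", "label"}

schema_overrides = [
    {"name": "price", "column_type": "float"},               # force numeric handling
    {"name": "signup_date", "column_type": "date_epoch_s"},  # Unix seconds
]

# Sanity check before upload: every override must target a real column.
assert all(o["name"] in dataset_columns for o in schema_overrides)
```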

For an example of using schema overrides in the Python API, see our regression tutorial.