refuel-ai/autolabel


⚡ Quick Install

pip install refuel-autolabel

📖 Documentation

https://docs.refuel.ai/

🏷️ What is Autolabel

Access to large, clean, and diverse labeled datasets is critical for any machine learning effort to succeed. State-of-the-art LLMs like GPT-4 can automatically label data with high accuracy, at a fraction of the cost and time of manual labeling.

Autolabel is a Python library to label, clean, and enrich text datasets with any Large Language Model (LLM) of your choice.

🌟 (New!) Benchmark models on Refuel's Benchmark

Check out our technical report to learn more about the performance of RefuelLLM-v2 on our benchmark. You can replicate the benchmark yourself by following the steps below:

cd autolabel/benchmark
curl https://autolabel-benchmarking.s3.us-west-2.amazonaws.com/data.zip -o data.zip
unzip data.zip
python benchmark.py --model $model --base_dir benchmark-results
python results.py --eval_dir benchmark-results
cat results.csv

Replace $model with the name of the model you want to benchmark, as shown below. For an API-hosted model such as gpt-3.5-turbo, gpt-4-1106-preview, claude-3-opus-20240229, gemini-1.5-pro-preview-0409, or any other Autolabel-supported model, pass the model name directly. For a vLLM-supported model, pass the local path or the Hugging Face path of the model. The benchmark runs with the same prompts for all models.
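
For example, to benchmark gpt-3.5-turbo, substitute it for $model in the commands above:

python benchmark.py --model gpt-3.5-turbo --base_dir benchmark-results
python results.py --eval_dir benchmark-results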

results.csv will contain one row for each model that was benchmarked. See benchmark/results.csv for an example.

🚀 Getting started

Autolabel provides a simple 3-step process for labeling data:

  1. Specify the labeling guidelines and the LLM to use in a JSON config.
  2. Dry-run to make sure the final prompt looks good.
  3. Kick off a labeling run for your dataset!

Let's imagine we are building an ML model to analyze the sentiment of movie reviews. We have a dataset of movie reviews that we'd like to label first. Here's what the example config looks like for this case:

{
    "task_name": "MovieSentimentReview",
    "task_type": "classification",
    "model": {
        "provider": "openai",
        "name": "gpt-3.5-turbo"
    },
    "dataset": {
        "label_column": "label",
        "delimiter": ","
    },
    "prompt": {
        "task_guidelines": "You are an expert at analyzing the sentiment of movie reviews. Your job is to classify the provided movie review into one of the following labels: {labels}",
        "labels": [
            "positive",
            "negative",
            "neutral"
        ],
        "few_shot_examples": [
            {
                "example": "I got a fairly uninspired stupid film about how human industry is bad for nature.",
                "label": "negative"
            },
            {
                "example": "I loved this movie. I found it very heart warming to see Adam West, Burt Ward, Frank Gorshin, and Julie Newmar together again.",
                "label": "positive"
            },
            {
                "example": "This movie will be played next week at the Chinese theater.",
                "label": "neutral"
            }
        ],
        "example_template": "Input: {example}\nOutput: {label}"
    }
}
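
The config above uses an OpenAI model, so an OpenAI API key must be available before labeling. A minimal way to provide it from Python (OPENAI_API_KEY is the standard environment variable read by OpenAI clients; substitute your own key):

import os

# Make the OpenAI API key available to the "openai" provider
# (replace the placeholder with your actual key)
os.environ["OPENAI_API_KEY"] = "sk-..."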

Initialize the labeling agent and pass it the config:

from autolabel import LabelingAgent, AutolabelDataset

config = 'config.json'  # path to the config shown above
agent = LabelingAgent(config=config)
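
The config can also be built in memory and passed as a dict rather than a file path; a minimal sketch, assuming LabelingAgent accepts a dict config as described in the Autolabel docs (verify for your installed version):

import json

# Load the JSON config from disk and pass it as a dict instead
# (assumes LabelingAgent accepts a dict config; check the docs)
with open('config.json') as f:
    config_dict = json.load(f)

agent = LabelingAgent(config=config_dict)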

Preview an example prompt that will be sent to the LLM:

ds = AutolabelDataset('dataset.csv', config=config)
agent.plan(ds)

This prints:

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100/100 0:00:00 0:00:00
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚ Total Estimated Cost     β”‚ $0.538  β”‚
β”‚ Number of Examples       β”‚ 200     β”‚
β”‚ Average cost per example β”‚ 0.00269 β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
─────────────────────────────────────────

Prompt Example:
You are an expert at analyzing the sentiment of movie reviews. Your job is to classify the provided movie review into one of the following labels: [positive, negative, neutral]

Some examples with their output answers are provided below:

Example: I got a fairly uninspired stupid film about how human industry is bad for nature.
Output:
negative

Example: I loved this movie. I found it very heart warming to see Adam West, Burt Ward, Frank Gorshin, and Julie Newmar together again.
Output:
positive

Example: This movie will be played next week at the Chinese theater.
Output:
neutral

Now I want you to label the following example:
Input: A rare exception to the rule that great literature makes disappointing films.
Output:

─────────────────────────────────────────────────────────────────────────────────────────

Finally, we can run labeling on a subset of the dataset or on the entire dataset:

ds = agent.run(ds)
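
While iterating on the prompt, it can be cheaper to label only a slice first; the max_items parameter below follows the Autolabel docs, but treat it as an assumption and confirm it exists in your installed version:

# Label just the first 100 rows of the dataset
# (max_items per the Autolabel docs; verify for your version)
ds = agent.run(ds, max_items=100)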

The output dataframe contains the label column:

ds.df.head()
                                                text  ... MovieSentimentReview_llm_label
0  I was very excited about seeing this film, ant...  ...                       negative
1  Serum is about a crazy doctor that finds a ser...  ...                       negative
4  I loved this movie. I knew it would be chocked...  ...                       positive
...
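
The labeled column is a regular pandas Series, so standard dataframe operations apply; for example, to see how the generated labels are distributed (the column name follows the <task_name>_llm_label pattern shown above):

# Count how many reviews received each sentiment label
print(ds.df['MovieSentimentReview_llm_label'].value_counts())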

Features

  1. Label data for NLP tasks such as classification, question answering, named entity recognition, entity matching, and more.
  2. Use commercial or open-source LLMs from providers such as OpenAI, Anthropic, HuggingFace, Google, and more.
  3. Support for research-proven LLM techniques to boost label quality, such as few-shot learning and chain-of-thought prompting (see the config sketch after this list).
  4. Confidence estimation and explanations out of the box for every single output label.
  5. Caching and state management to minimize costs and experimentation time.
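
As a sketch of item 3, chain-of-thought prompting is switched on in the prompt section of the config. The chain_of_thought key below follows the Autolabel docs, but treat it as an assumption and verify it for your version; in practice the few-shot examples also need explanations for the model to imitate (see the docs):

from autolabel import LabelingAgent

# Sketch: the example config as a Python dict with chain-of-thought
# prompting enabled ("chain_of_thought" per the Autolabel docs;
# verify the exact key for your installed version)
config_cot = {
    "task_name": "MovieSentimentReview",
    "task_type": "classification",
    "model": {"provider": "openai", "name": "gpt-3.5-turbo"},
    "dataset": {"label_column": "label", "delimiter": ","},
    "prompt": {
        "task_guidelines": "You are an expert at analyzing the sentiment of movie reviews. Your job is to classify the provided movie review into one of the following labels: {labels}",
        "labels": ["positive", "negative", "neutral"],
        "example_template": "Input: {example}\nOutput: {label}",
        "chain_of_thought": True,
    },
}

agent = LabelingAgent(config=config_cot)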

Access to Refuel hosted LLMs

Refuel provides access to hosted open-source LLMs for labeling and for estimating confidence. This is helpful because you can calibrate a confidence threshold for your labeling task and route less confident labels to humans, while still getting the benefits of auto-labeling for the confident examples.

In order to use Refuel hosted LLMs, you can request access here.
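
A sketch of how confidence fits into the workflow: enable it in the model section of the config, then threshold the resulting scores. Both the compute_confidence key and the confidence column name are assumptions here (the key follows the Autolabel docs; the column name follows the <task_name>_llm_label naming pattern shown earlier), so verify both against your version:

# Sketch: request confidence scores alongside labels
config_with_conf = {
    "task_name": "MovieSentimentReview",
    "task_type": "classification",
    "model": {
        "provider": "openai",
        "name": "gpt-3.5-turbo",
        "compute_confidence": True,  # assumed key; see the Autolabel docs
    },
    "dataset": {"label_column": "label", "delimiter": ","},
    "prompt": {
        "task_guidelines": "Classify the provided movie review into one of the following labels: {labels}",
        "labels": ["positive", "negative", "neutral"],
        "example_template": "Input: {example}\nOutput: {label}",
    },
}

# After agent.run(ds), route low-confidence rows to human review
# (the column name is an assumed pattern; check your output dataframe)
threshold = 0.80
needs_review = ds.df[ds.df["MovieSentimentReview_llm_confidence"] < threshold]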

🛠️ Roadmap

Check out our public roadmap to learn more about ongoing and planned improvements to the Autolabel library.

We are always looking for suggestions and contributions from the community. Join the discussion on Discord, or open a GitHub issue to report bugs and request features.

🙌 Contributing

Autolabel is a rapidly developing project. We welcome contributions in all forms: bug reports, pull requests, and ideas for improving the library.

  1. Join the conversation on Discord.
  2. Open an issue on GitHub to report bugs and request features.
  3. Grab an open issue and submit a pull request.