How to use the auto label feature

Your label question acts as the generative AI prompt that the auto labeler uses to apply label answers to each record. For categorical labels, the auto labeler also draws on the label's Categories as its set of possible answers. To set up your labels for auto labeling, go to Manage -> Label Definitions. Click on an existing label to edit it, or create a new one by clicking on the label type at the bottom of the page.

In the following example, we have created a categorical label and want the auto labeler to answer the question "What types of wildfire impacts are covered by this review?", providing one or more of the following answers: health, environmental, ecological, economic, or social. 

Note: We could also include a 'none of these' option, but if we don't, the auto labeler will simply not provide an answer.
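To make this concrete, the prompt-and-parse cycle can be sketched roughly as follows. This is a hypothetical illustration, not the tool's actual implementation; the function names, prompt wording, and parsing logic are all assumptions.

```python
# Hypothetical sketch of how a categorical label question might be turned
# into a generative AI prompt, and how the model's answer could be checked
# against the defined Categories. NOT the tool's actual implementation.

CATEGORIES = ["health", "environmental", "ecological", "economic", "social"]

def build_prompt(question: str, categories: list[str], record_text: str) -> str:
    """Combine the label question and allowed categories into one prompt."""
    return (
        f"{question}\n"
        f"Choose one or more of: {', '.join(categories)}.\n"
        f"Record:\n{record_text}"
    )

def parse_answer(raw: str, categories: list[str]) -> list[str]:
    """Keep only answers that match a defined category. An empty list means
    the model gave no usable answer (cf. the 'none of these' note above)."""
    given = [part.strip().lower() for part in raw.split(",")]
    return [c for c in categories if c in given]

prompt = build_prompt(
    "What types of wildfire impacts are covered by this review?",
    CATEGORIES,
    "A review of smoke-related respiratory illness and habitat loss...",
)
# Matched categories come back in the order they were defined.
print(parse_answer("ecological, environmental", CATEGORIES))
```

Note how an answer outside the defined categories simply yields nothing, which mirrors the behavior described above when no 'none of these' option exists.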

In this example, we have selected the following options: 

  • the checkbox for auto label Extraction, to turn on auto labeling for this label; 
  • the Citations only option under Full Text, to indicate that the auto labeler should look only at the metadata and not at any attached PDFs; 
  • No for the Probability and Reasonings feature, to disable the built-in chain-of-thought reasoning questions that can be added to the prompt. 

Note: The Probability and Reasonings feature adds a set of built-in reasoning steps (known as 'chain-of-thought' prompting) to your prompt and will likely affect auto labeler performance. It is useful to run your prompt both with and without this feature to determine which setting works better in your case.

1. Set your label question prompt and auto label settings

Setting up categorical auto-label

2. Set up your article filter and auto label limit

After clicking Save, we can go back to the Articles tab to set the articles that we want to auto label. In this example, we filtered for only records that have been included by two reviewers. We set our Max Articles (the number of records we want the auto labeler to label) to 20. You can see that the auto labeler has estimated the cost of this run to be $0.05.
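As a rough sanity check on the estimate above, the cost scales with Max Articles, so a per-record rate can be inferred from this example. The rate below is an assumption derived from the $0.05-for-20-records figure; actual pricing is set by the tool and may differ.

```python
# Hypothetical cost estimate: infer a per-record rate from the example
# ($0.05 for 20 records) and project it to other run sizes.
# Actual pricing is set by the tool and may differ.

def estimate_cost(max_articles: int, per_record: float = 0.05 / 20) -> float:
    """Project a run's cost from the number of records to label."""
    return round(max_articles * per_record, 2)

print(estimate_cost(20))   # the example run of 20 records
print(estimate_cost(100))  # a larger run at the same assumed rate
```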

Set auto-label to run

When you are ready, and your selection does not exceed the budget in your account, click the Run auto label button. Upon successful completion, the message "Last run just now: success" will appear at the bottom of the auto labeler box. To view the auto label answers, click on one of the labeled articles in the list and scroll down; the answers appear below the article abstract. In this example, the auto labeler identified ecological and environmental impacts in this study record. It also marked the record for inclusion with 80% certainty.

Note: If, instead of "Last run just now: success", the message indicates that some of the run failed, simply click the Run auto label button again. It will rerun only the failed items and will often succeed on the second try.

3. Review auto label answers

auto-label answers

Once you run the auto labeler, you will see auto label answers at the bottom of each record, as shown in the image above. If you enabled "Probability and Reasonings", you can view the auto labeler's reasoning process by clicking the dropdown arrow next to the auto label answer.

You will also now see an auto label report, located at the bottom left of the Overview page of your project. This report provides detailed analytics comparing the auto label answers with reviewer answers. See more about the report on the Understanding the Auto Label Report page.
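One way to picture what such a comparison measures is simple per-record agreement between the auto labeler and your reviewers. The sketch below is a hypothetical illustration of that idea, not the actual report's methodology; the data and function are invented for the example.

```python
# Hypothetical sketch of comparing auto label answers to reviewer answers
# as a per-record exact-match agreement rate. NOT the report's actual logic.

def agreement_rate(auto: dict[str, set[str]],
                   reviewer: dict[str, set[str]]) -> float:
    """Fraction of shared records where the auto label answer set
    exactly matches the reviewer's answer set."""
    shared = auto.keys() & reviewer.keys()
    if not shared:
        return 0.0
    matches = sum(auto[r] == reviewer[r] for r in shared)
    return matches / len(shared)

auto_answers = {"rec1": {"ecological", "environmental"}, "rec2": {"health"}}
reviewer_answers = {"rec1": {"ecological", "environmental"}, "rec2": {"economic"}}
print(agreement_rate(auto_answers, reviewer_answers))  # 0.5
```

In practice the report may break agreement down in more detail; this sketch only conveys the general idea of answer-by-answer comparison.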

For a recommended process for testing and revising your prompt, check out our Suggested Workflow for Testing and Running Auto Labels page.

Pro tip: For tips and tricks on optimizing your prompts, check out the Tips for Optimizing your Auto Label Prompts page.