Suggested workflow for testing and running the Auto-labeler

In general, before running the Auto-labeler across the entirety of your project documents, you should test, assess, and optimize your Auto-label prompts on a small random sample of records. Here is a recommended workflow for optimizing and then using the Auto-labeler for a literature or document review project.

  1. Have two human reviewers label records in the project: In the Review Settings section of Manage -> Settings, set Article Review Priority to Full. This allows you to quickly accumulate double-reviewed records. Consider skipping records without abstracts to avoid running the Auto-labeler on incomplete metadata.
  2. Toggle off Prefill review answers: This is found in the Review Settings section of Manage -> Settings. We advise turning this off while refining your label prompts, and turning it on once label prompts are finalized and you are ready to Auto-label all records.
  3. Set the article filter to only double-reviewed records: To achieve this, apply the following two filters:
    1. Match -> Filter Type: Consensus + Consensus Status: Determined + Inclusion: Any
    2. Exclude -> Filter Type: Consensus + Consensus Status: Single + Inclusion: Any
  4. Ensure that your Auto-label settings are correct: Review each active label to confirm that only the desired labels have the Auto-label Extraction box checked, and that Full Text is set to the desired content (citations only vs. attachments). Note: This is important for ensuring that you don't accidentally overspend your budget (see the rough cost sketch after this list). You can also turn "Reasoning with Confidence" on or off. Turning this on adds a set of reasoning steps to your prompt and may change Auto-labeler performance; it also provides some insight into why the Auto-labeler chose its answer, which can be helpful during prompt engineering.
  5. Set Auto-label Max Articles to the desired number of records for the initial assessment: This setting works through the filtered articles in order, up to the maximum number provided. We recommend auto-labeling at least 20 double-reviewed articles at a time while testing and refining Auto-labeling prompts.
  6. Run the Auto-labeler.
  7. Review the Auto-label Report: This will appear at the bottom of the Overview page. More information about reading and using the report is in the box below.
  8. Refine label question prompts: See below for tips on improving your label question prompts.
  9. Set the article filter for a new set of double-reviewed records: If you want to test your revised prompts on a new set of records (recommended, to avoid overfitting your prompts to one small set of articles), you can add an additional filter to the already filtered set of double-reviewed records: Exclude -> Filter Type: Auto-label + Label: Any label. Alternatively, you can increase the Max Articles setting in the Auto-labeler to label both previously auto-labeled records and a set of new records.
  10. Review the Auto-label Report and repeat steps 8-10 until you have reached an acceptable level of accuracy (see the accuracy sketch after this list for one way to examine results in more detail).
  11. Auto-label the remaining records in the project: At this point, you can set Prefill review answers to Yes (in the Review Settings section of Manage -> Settings). Run the Auto-labeler on all records by removing the filters and increasing the Max Articles setting. Once complete, reviewers can go through the articles to check the Auto-label answers for accuracy.
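
If budget is a concern (step 4), a quick back-of-the-envelope calculation can help you decide how many records to include in a run before committing. The sketch below is purely illustrative: the record count, labels per record, tokens per call, and price are placeholder assumptions, not values provided by the Auto-labeler, so substitute your own project numbers and your model provider's current pricing.

```python
# Rough back-of-the-envelope estimate of Auto-labeling cost before a full run.
# All numbers below are placeholders -- substitute your project's record count,
# typical tokens per record (abstract only vs. full-text attachments), and your
# provider's actual pricing.

records = 5000              # records you plan to Auto-label
labels_per_record = 3       # active labels with Auto-label Extraction checked
tokens_per_call = 2500      # prompt + record content + answer; larger with full text
price_per_1k_tokens = 0.01  # USD; check your provider's current rates

estimated_cost = records * labels_per_record * tokens_per_call / 1000 * price_per_1k_tokens
print(f"Estimated cost: ${estimated_cost:,.2f}")
```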
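
If you want a closer look at accuracy than the Auto-label Report alone provides (step 10), you can compare Auto-label answers against human consensus answers outside the platform. The sketch below assumes you have exported both sets of answers to a CSV; the file name and column names ("label", "auto_answer", "consensus_answer") are hypothetical and should be adjusted to match your actual export.

```python
# Minimal sketch for checking per-label agreement between Auto-label answers and
# human consensus answers from an exported CSV (hypothetical file and columns).
import pandas as pd

df = pd.read_csv("autolabel_vs_consensus.csv")

# Treat a row as correct when the Auto-label answer matches the human consensus.
df["correct"] = (
    df["auto_answer"].astype(str).str.strip().str.lower()
    == df["consensus_answer"].astype(str).str.strip().str.lower()
)

# Overall and per-label accuracy; low-accuracy labels are candidates for prompt refinement.
print(f"Overall accuracy: {df['correct'].mean():.2%}")
print(df.groupby("label")["correct"].agg(["mean", "count"]).sort_values("mean"))
```

Labels with noticeably lower agreement are the best candidates for prompt refinement in step 8.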